question_id | answer_id | title | question | answer |
|---|---|---|---|---|
69,735,443 | 69,735,592 | How to understand auto deduction in a for loop? | The initial value of i should be 5 and count down, but the loop outputs countless numbers.
int main() {
vector<int> nums = {1, 2, 1};
auto size = nums.size();
for(auto i = 2 * size - 1; i >= 0; i--) {
// do_stuff()
std::cout << i << " ";
}
std::cout << std::endl;
return 0;
}
| Your issue stems from the fact that the type of i is unsigned, so i >= 0 is always true, rendering the loop infinite.
i is deduced to be of the same type as size, and size is deduced to be of type size_t because vector::size() returns size_t.
Because of the above, and unsigned integer wraparound, your loop is infinite. It prints (*)
5 4 3 2 1 0 4294967295 4294967294 4294967293 ... 3 2 1 0 4294967295 4294967294 ...
to infinity and beyond.
(*) if size_t is 32 bits on your platform.
|
69,735,468 | 69,738,381 | How to parse a date string formed by the CRT function? | I'm working on a Windows-driven C++ project. There is a function which composes file names with a date portion in them. It uses the wcsftime C runtime library function to format the date portion with the "%x" format code. This code corresponds to the
%x Date representation for the locale
as the documentation says.
Now, I need to check which files fall within some date range. So I need to parse this date portion of the file name back into a date and perform the comparison. But I'm in trouble: how to do this? I didn't find any CRT function which can do the work. Furthermore, it would be even better for me to get the format string (like "dd-mm-yyyy") for the current locale in order to parse this string. But I didn't find any way to do this.
Note that this is about the CRT functions. Their locale differs from the locale used by GetLocaleInfo, for example.
| One easy solution is to go forwards: generate a filename using the same pattern, but for a test date of 2001-02-03. The result will tell you the order of year, month and day.
|
69,735,621 | 69,736,404 | The best way to capture user input int with error handling loop | In my case, I have to make sure the user input is either 1 or 2, or 3.
Here's my code:
#include <iostream>
using namespace std;
void invalid_choice_prompt() {
string msg = "\nInvalid Command! Please try again.";
cout << msg << endl;
}
int ask_user_rps_check_input(int user_choice) {
if (user_choice == 1 || user_choice == 2 || user_choice == 3) return 1;
return 0;
}
int ask_user_rps() {
// ask user's choice of Rock or Paper or Scissors
while (1) {
string msg =
"\nPlease enter your choice:\nRock - 1\nPaper - 2\nScissors - 3";
cout << msg << endl;
int user_choice;
cin >> user_choice;
if (ask_user_rps_check_input(user_choice)) {
return user_choice;
}
invalid_choice_prompt();
}
}
int main() {
ask_user_rps();
return 0;
}
The code is capable of handling the situation when the input is an integer, but when the input is a character or a string, the program gets trapped in an infinite loop.
Is there any elegant way to do this? I've found some methods about using cin.ignore to ignore the specified length of io buffer, but I don't think this method is flexible enough. I am looking for a more flexible solution.
| Reading from a stream using operator >> takes as many characters from the stream as the target type accepts; the rest remains in the stream for subsequent reads. If the input has a format error (e.g. leading alphabetical characters when an integer is expected), then an error flag is set, too. This error flag can be checked with cin.fail(). It remains set until it is explicitly cleared. So if your code is...
int user_choice;
cin >> user_choice;
and if you then enter something that is not a number, e.g. asdf, then user_choice has an undefined value, and the error flag cin.fail() is (and remains) set. So any subsequent read will fail, too.
To overcome this, you have to do three things:
First, check the error flag. You can do this either through calling cin.fail() after a read attempt or through checking the return value of the expression (cin >> user_choice), which, converted to bool, is equivalent to !cin.fail().
Second, in case of an error, you need to clear the error-flag using cin.clear(). Otherwise, any attempt to read in anything afterwards will fail.
Third, if you want to continue with reading integral values, you need to take the invalid characters from the stream. Otherwise, you will read in asdf into a variable of type integer again and again, and it will fail again and again. You can use cin.ignore(numeric_limits<streamsize>::max(),'\n'); to take all characters until EOF or an end-of-line from the input buffer.
The complete code for reading an integral value with error-handling could look as follows:
int readNumber() {
int result;
while (!(cin >> result)) {
cin.clear();
cin.ignore(numeric_limits<streamsize>::max(),'\n');
cout << "Input is not a number." << std::endl;
}
return result;
}
|
69,735,729 | 69,770,526 | How to handle png generation with changing frame buffer size? | I am writing some unit tests for my drawing code. The steps include:
Setting up GLFW window and context
glfwMakeContextCurrent(window);
glfwGetWindowSize(window, &window_width, &window_height);
glfwGetFramebufferSize(window, &frame_buffer_width, &frame_buffer_height);
Perform drawings
beginFrame();
// perform drawing..
endFrame();
Output what has been drawn to a png file (using stb_image_write).
stbi_write_png(file_name, frame_buffer_width, frame_buffer_height, 4, image.get(), frame_buffer_width * 4);
Compare the generated png with a reference image (using pixel comparison of the images).
The problem I encounter is that frame_buffer_width and frame_buffer_height are not always consistent. To be more specific, sometimes they are 1:1 to the window size, and sometimes they are doubled. This makes the tests fail because the generated png is not always the same size (while the reference image size is constant). And writing the png using window_width and window_height is not a correct approach either.
According to the GLFW documentation:
The size of a framebuffer may change independently of the size of a window.
I also read somewhere that for macOS specifically, the frame buffer size can be double of the window size.
How can I solve the issue with changing frame buffer size?
| Relying on the default framebuffer for testing is wrong for multiple reasons. Other than the undetermined size, the bit-depth can change too, as well as some pixels may fail the pixel-ownership test.
Instead, for unit-testing purposes, refactor your rendering code so it can render to an off-screen FBO. Then you can create an FBO of a determined size and format, render to it, and save it to a file.
|
69,736,547 | 69,737,431 | Parse string and store it in struct C++ | We are given a txt file containing "6=3+3" and I want to parse the string in two parts like "6=" and "3+3".
Afterwards I want to save everything in a struct (not an array, but a struct). Any ideas?
| The below program shows how you can separate out the LHS(left hand side) and RHS(right hand side) and store it in a struct object.
#include <iostream>
#include <sstream>
#include<fstream>
struct Equation
{
std::string lhs, rhs;
};
int main() {
struct Equation equation1;//the lhs and rhs read from the file will be stored into this equation1 object's data member
std::ifstream inFile("input.txt");
if(inFile)
{
getline(inFile, equation1.lhs, '=') ; //store the lhs of line read into data member lhs. Note this will put whatever is on the left hand side of `=` sign. If you want to include `=` then you can add it explicitly to equation.lhs using `equation1.lhs = equation1.lhs + "="`
getline(inFile, equation1.rhs, '\n'); //store the rhs of line read into data member rhs
}
else
{
std::cout<<"file cannot be opened"<<std::endl;
}
inFile.close();
//print out the lhs and rhs
std::cout<<equation1.lhs<<std::endl;
std::cout<<equation1.rhs<<std::endl;
return 0;
}
The program prints the lhs (6) and the rhs (3+3) on separate lines.
|
69,736,764 | 69,745,754 | Strange uint8_t conversion with OpenCV | I have encountered a strange behavior from the Matrix class in OpenCV regarding the conversion float to uint8_t.
It seems that OpenCV with the Matrix class converts float to uint8_t by doing a ceil instead of just truncating the decimal.
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/imgcodecs.hpp>
int main() {
cv::Mat m1(1, 1, CV_8UC1);
cv::Mat m2(1, 1, CV_8UC1);
cv::Mat m3(1, 1, CV_8UC1);
m1.at<uint8_t>(0, 0) = 121;
m2.at<uint8_t>(0, 0) = 105;
m3.at<uint8_t>(0, 0) = 82;
cv::Mat x = m1 * 0.5 + m2 * 0.25 + m3 * 0.25;
printf("%d \n", x.at<uint8_t>(0, 0));
uint8_t r = 121 * 0.5 + 105 * 0.25 + 82 * 0.25;
printf("%d \n\n", r);
return 0;
}
Output:
108
107
Do you know why this happens and how to correct this behavior?
Thank you,
| The strange behavior is a result of cv::MatExpr and lazy evaluation, as described here.
The actual result equals:
round(round(121*0.5 + 105*0.25) + 82*0.25) = 108
The rounding is used because the element type is UINT8 (integer type).
The computation order is a result of the "lazy evaluation" strategy.
Following the computation process using the debugger is challenging, because the OpenCV implementation includes operator overloading, templates, macros and pointers to functions...
The actual computation is performed in the static void scalar_loop function, in
dst[x] = op::r(src1[x], src2[x], scalar);
When for example: src1[x] = 121, src2[x] = 105 and scalar = 0.5.
It executes an inline function:
inline uchar c_add<uchar, float>(uchar a, uchar b, float alpha, float beta, float gamma)
{ return saturate_cast<uchar>(CV_8TO32F(a) * alpha + CV_8TO32F(b) * beta + gamma); }
The actual rounding is in saturate_cast:
template<> inline uchar saturate_cast<uchar>(float v) { int iv = cvRound(v); return saturate_cast<uchar>(iv); }
cvRound uses an SIMD intrinsic return _mm_cvtss_si32(t)
It's equivalent to: return (int)(value + (value >= 0 ? 0.5f : -0.5f));
The lazy evaluation stages build a MatExpr with alpha and beta scalars.
cv::Mat x = m1 * 0.5 + m2 * 0.25 + m3 * 0.25; //m1 = 121, m2 = 105, m3 = 82
The expression is built recursively (hard to follow).
Following the "operator +" function (using the debugger):
MatExpr operator + (const MatExpr& e1, const MatExpr& e2)
{
MatExpr en;
e1.op->add(e1, e2, en);
return en;
}
Stage 1:
e1.a data = 121 (UINT8)
e1.b (NULL)
e1.alpha = 0.5
e1.beta = 0
e2.a data = 105 (UINT8)
e2.b (NULL)
e2.alpha = 0.25
e2.beta = 0
Result:
en.a data = 121 (UINT8)
en.b data = 105 (UINT8)
en.alpha = 0.5
en.beta = 0.25
Stage 2:
e1.a data = 121 (UINT8)
e1.b data = 105 (UINT8)
e1.alpha = 0.5
e1.beta = 0.25
e2.a data = 82 (UINT8)
e2.b (NULL)
e2.alpha = 0.25
e2.beta = 0
en.a data = 87 (UINT8) <--- 121*0.5 + 105*0.25 = 86.7500 rounded to 87
en.b data = 82 (UINT8)
en.alpha = 1
en.beta = 0.25
Stage 3: (in MatExpr::operator Mat() const):
m data = 108 (UINT8) <--- 87*1 + 82*0.25 = 87 + 20.5 = 107.5 rounded to 108
You may try to follow the computation process using the debugger.
It requires building OpenCV from sources, in Debug configuration, and a lot of patience...
|
69,737,101 | 69,737,611 | Dynamic array for linear search function implementation | I need to implement a function
int* linearSearch(int* array, int num);
that gets a fixed-size array of integers and a number, and returns an array with indices of the occurrences of the searched number.
For example, array={3,4,5,3,6,8,7,8,3,5} & num=5 will return occArray={2,9}.
I've implemented it in C++ with a main function to check the output:
#include <iostream>
using namespace std;
int* linearSearch(int* array, int num);
int main()
{
int array[] = {3,4,5,3,6,8,7,8,3,5}, num=5;
int* occArray = linearSearch(array, num);
int i = sizeof(occArray)/sizeof(occArray[0]);
while (i>0) {
std::cout<<occArray[i]<<" ";
i--;
}
}
int* linearSearch(int* array, int num)
{
int *occArray= new int[];
for (int i = 0,j = 0; i < sizeof(array) / sizeof(array[0]); i++) {
if (array[i] == num) {
occArray[j] = i;
j++;
}
}
return occArray;
}
I think the logic is fine but I have syntax problems with creating a dynamic array for occArray.
Also, a neater implementation with std::vector would be welcome.
Thank You
| At the very first I join in the std::vector recommendation in the question's comments (pass it as const reference to avoid an unnecessary copy!), as that solves all of your issues:
std::vector<size_t> linearSearch(std::vector<int> const& array, int value)
{
std::vector<size_t> occurrences;
// to prevent unnecessary re-allocations, which are expensive,
// one should reserve sufficient space in advance
occurrences.reserve(array.size());
// if you expect only few occurrences you might reserve a bit less,
// maybe half or quarter of array's size, then in general you use
// less memory but in few cases you still re-allocate
for(auto i = array.begin(); i != array.end(); ++i)
{
if(*i == value)
{
// as using iterators, need to calculate the distance:
occurrences.push_back(i - array.begin());
}
}
return occurrences;
}
Alternatively you could iterate with a size_t i variable from 0 to array.size(), compare array[i] == value and push_back(i); – that's equivalent, so select whichever you like better...
If you cannot use std::vector for whatever reason you need to be aware of a few issues:
You indeed can get the length of an array by sizeof(array)/sizeof(*array) – but that only works as long as you have direct access to that array. In most other cases (including passing them to functions) arrays decay to pointers and these do not retain any size information, thus this trick won't work any more, you'd always get sizeOfPointer/sizeOfUnderlyingType, on typical modern 64-bit hardware that would be 8/4 = 2 for int* – no matter how long the array originally was.
So you need to pass the size of the array in an additional parameter, e.g.:
size_t* linearSearch
(
int* array,
size_t number, // of elements in the array
int value // to search for
);
Similarly you need to return the number of occurrences of the searched value by some means. There are several options for:
Turn num into a reference (size_t& num), then you can modify it inside the function and the change becomes visible outside. Usage of the function gets a bit inconvenient, though, as you need to explicitly define a variable for it:
size_t num = sizeof(array)/sizeof(*array);
auto occurrences = linearSearch(array, num, 7);
Append a sentinel value to the array, which might be the array size or, probably better, the maximum value for size_t – with all the disadvantages you have with C strings as well (mainly having to iterate over the result array to detect the number of occurrences).
Prepend the number of occurrences to the array – somehow ugly as well as you mix different kind of information into one and the same array.
Return result pointer and size in a custom struct of yours or in e.g. a std::pair<size_t, size_t*>. You could even use that in a structured binding expression when calling the function:
auto [num, occurences] = linearSearch(array, sizeof(array)/sizeof(*array), 7);
// ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
// here the trick yet works provided the array is declared above the call, too,
// as is in your example
Option 4 would be my personal recommendation out of these.
Side note: I switched to size_t for return values as negative indices into an array are meaningless anyway (unless you intend to use these as sentinel values, e. g. -1 for end of occurrences in option 2).
|
69,737,159 | 69,737,301 | Stop Visual Studio 2019 from highlighting occurrences of the word under the cursor | Visual Studio 2019 keeps highlighting occurrences of the word under my cursor in the current file:
Is there a way I can get rid of this?
| Tools -> Options -> Text Editor -> C/C++ -> Advanced -> References -> "Disable Reference Highlighting"
Set to true.
|
69,737,493 | 69,737,576 | Can I be sure a vector contains objects and not pointers to objects? | Can I be sure an std::vector (or, in general, any standard container) contains objects and not pointers to objects, no matter how complex the objects' class is, if it has constant size?
E.g.: in this simple case:
struct MyStruct { int a, b; };
std::vector<MyStruct> vs;
The resulting vector layout is:
[ ..., a1, b1, a2, b2, a3, b3, ... ]
Does the Standard guarantees the same happens in this (or more complex) case(s), where the size of the structure is supposed to be constant:
struct MyStruct2 { float f[10]; };
std::vector<MyStruct2> vs2;
With layout:
[ ..., f1[0], f1[1], ..., f1[9], f2[0], f2[1], ..., f2[9], ... ]
instead of:
[ ..., *pf1, *pf2, ... ]
pf1 = [ f1[0], f1[1], ..., f1[9] ]
pf2 = [ f2[0], f2[1], ..., f2[9] ]
| Since C++11 and onwards (C++03 nearly guarantees it), the data in a std::vector are contiguous with no gaps.
In particular if you have a pointer to an element in the std::vector, you can reach all other elements using pointer arithmetic.
Of course, pointer arithmetic works in sizeof units of your struct. And the struct itself may contain padding. The behaviour on attempting to reach a b, given a pointer to an a in your struct, is undefined.
|
69,737,895 | 69,738,286 | Can std::stoi verify if the value of a digit exceeds the range of the base? | I'm using std::stoi in the following manner:
int ConvertToInt( const std::string& aVal )
{
int lVal = std::stoi( aVal, nullptr, 0 );
return lVal;
}
The third argument to std::stoi was provided as 0 to automatically convert both DEC and HEX values.
I also use a try-catch structure to catch std::invalid_argument, std::out_of_range and (...), although when I provide a number such as 0xgg no exception is thrown and the number is converted to 0, which is not my intention.
Is it possible to catch digits which are out of range of the base and throw some exception?
| std::stoi parses the string until an invalid character is encountered, which is interpreted to be the end of the integer.
Is it possible to catch digits which are out of range and throw some exception?
Yes. You can use the pos argument. After the conversion, the pointed-to integer will contain the index of the first unconverted character. If *pos is not equal to the size of the input, then there are unconverted characters, which must not have been valid. You can throw an exception in such a case.
|
69,737,959 | 69,738,663 | What should happen if one calls `std::exit` in a global object's destructor? | Consider the following code:
#include <cstdlib>
struct Foo {
~Foo() {
std::exit(0);
}
} foo;
int main() {
}
It compiles and terminates with zero successfully for me both on my Linux (GCC, Clang) and Windows (Visual Studio). However, when compiled with MSYS2's GCC on Windows (g++ (Rev2, Built by MSYS2 project) 10.3.0), it enters an infinite recursion and dies from stack overflow. This can be checked by adding some debug output right before std::exit(); I did not add it initially to avoid thinking about the destruction of std::cout.
Does any C++ standard have anything to say about such behavior? Is it well-defined/implementation-defined/undefined/etc and why?
For instance, [support.start.term]/9.1 of some recent draft says the following about std::exit's behavior:
First, objects with thread storage duration and associated with the current thread are destroyed.
Next, objects with static storage duration are destroyed and functions registered by calling atexit are called. See [basic.start.term] for the order of destructions and calls.
which refers to [basic.start.term]/1, I guess:
Constructed objects ([dcl.init]) with static storage duration are destroyed and functions registered with std::atexit are called as part of a call to std::exit ([support.start.term]).
The call to std::exit is sequenced before the destructions and the registered functions.
I don't see any immediate restriction on calling std::exit in a destructor.
Sidenote: please refrain from commenting that "this code is bad", "thou shalt not use destructors in global objects" (which you probably should not) and probing for the XY problem in the comments. Consider this an academic question from a curious student who knows a much better solution to their original problem, but has stumbled upon this quirk while exploring the vast meadows of C++.
| [basic.start.main]/4:
If std::exit is called to end a program during the destruction of an object with static or thread storage duration, the program has undefined behavior.
|
69,738,294 | 69,738,711 | How to add all positive integers and get their average | I'm a beginner at C++ and I'm having a hard time. I'm still a student; our professor asked us to input numbers (both positive and negative must be entered) and print the sum of the positive integers and their average. Example:
How many input? 5
input # 1 : 5
input # 2 : 3
input # 3 : -2
input # 4 : -4
input # 5 : 6
so the expected output is to print the positive integers, in this case is 5, 3, and 6
| This is a fairly simple problem (almost everything is, if one's basics are clear). The below program shows how you can do this. I have added some comments so that you can get an idea about what is happening.
#include <iostream>
#include <vector>
int main()
{
int numberOfInputs = 0;
std::cout<<"How many inputs?"<<std::endl;
std::cin>>numberOfInputs;//take input into numberOfInputs
std::vector<int> vectorOfInputs(numberOfInputs);//this vector(container) will contain all the inputs(both positive and negative)
//this will contain the sum of all positive numbers
int sum = 0;
//this will count how many numbers entered by user are positive
int countPositive = 0;
//use for loop or any other loop populate the vector by user input
for(int i = 0; i < numberOfInputs; ++i)
{
std::cout<<"input#"<<i+1<<":";
std::cin >> vectorOfInputs.at(i);
//if the number is positive increase sum
if(vectorOfInputs.at(i) >0)
{
sum += vectorOfInputs.at(i);
++countPositive;
}
}
//print all the positive numbers
for(int elem: vectorOfInputs)
{
if(elem>0)
{
std::cout<<elem<<std::endl;
}
}
//print the sum and average
std::cout<<"the sum of all the above positive numbers is: "<<sum<<std::endl;
std::cout<<"the average of all the above positive numbers is: "<<(static_cast<double>(sum))/countPositive<<std::endl;//static_cast is used to get the result as double instead of int
return 0;
}
|
69,738,619 | 69,738,735 | C++ static member function vs lambda overhead | I have some kind of templated base class
template<typename Derived>
class Base { };
and want to store derived instances of it in a list.
For that I use a using derived_handle = std::unique_ptr<void, void(*)(void*)>; alias.
When I now add a derived instance to the list I could use a static member function as the deleter:
class foo {
template<typename Derived, typename... Args>
void add_base(Args&&... args) {
auto derived = derived_handle{new Derived{std::forward<Args>(args)...}, &foo::_deleter<Derived>};
_derived.emplace_back(std::move(derived));
}
private:
template<typename Derived>
static void _deleter(void* base) {
delete static_cast<Derived*>(base);
}
std::vector<derived_handle> _derived{};
};
or a lambda
class foo {
template<typename Derived, typename... Args>
void add_base(Args&&... args) {
auto deleter = [](void* derived) {
delete static_cast<Derived*>(derived);
};
auto derived = derived_handle{new Derived{std::forward<Args>(args)...}, deleter};
_derived.emplace_back(std::move(derived));
}
private:
std::vector<derived_handle> _derived{};
};
Are there any advantages/disadvantages of the lambda version I should be aware of?
| Time for a frame challenge!
You've made some bad decisions in that code. Most people who use unique_ptr, even in a polymorphic context, don't need custom deleters at all. The only reason you do is your type erasure, and that's only there because Base<A> and Base<B> are unrelated types.
If you really need Base<T>, have it inherit from an actual polymorphic (and non-templated) base class with a virtual destructor. Then you don't need unique_ptr<void> (a really bad code smell), and you can actually use your list in a type-safe manner.
|
69,738,775 | 69,739,162 | What is a base class subobject? | I got that subobjects are member subobjects, base class subobjects and arrays.
I couldn't find anything that explicit explain the two first terms. In the following code for example:
struct A{int a;};
struct B{int b;};
struct C:public A,public B{};
I think that: int a is a member subobject of a possible, not yet instantiated, object of type A; int a is a base class subobject of a possible, not yet instantiated, object of type C. Is that right? What is the definition of member subobject and base class subobject? Could you provide examples?
| Whenever a class inherits from another one it inherits, too, an instance of that class:
class A { };
class B : A { };
Then class B internally looks like:
class B
{
A a; // <- implicit base class sub-object, not visible to you
};
Note that in some cases there might even be more than one A!
class A { };
class B : A { };
class C : A { };
class D : B, C { };
D then internally looks like:
class D
{
B b; // { A a; }
C c; // { A a; }
};
with b and c being the base class sub-objects; or in a flattened representation:
class D
{
A aFromB; // inherited from B, but not a direct sub-object of D itself
// other members of B
A aFromC; // inherited from C, again not a direct sub-object of D
// other members of C
};
B and C base class sub-objects are not visible in this representation, still they are there in form of the respective A instance combined with the respective other members (think of having braces around).
If you want to avoid duplicates of A, you need to inherit virtually: class B : virtual A { } – all virtually inherited (directly or indirectly) instances of A are then combined into one single instance (though if there are non-virtually inherited ones these remain in parallel to the combined one), consider:
class A { };
class B : A { };
class C : virtual A { };
class D : virtual A { };
class E : B, C, D { };
E then internally looks like:
class E
{
A combinedAFromCAndD; // the virtual A of C and D, merged into one
// other members of C
// other members of D
A separateAFromB; // B inherits A non-virtually, so its A remains separate
// other members of B
};
Note: These layouts above are just examples, concrete layouts might vary.
|
69,738,996 | 69,739,049 | Assigning an array with std::fgetc() return | I am trying to store the first 4 chars of a .wav file using the std::fgetc function.
This is what I have
FILE* WAVF = fopen(FName, "rb");
std::vector<std::string> ID;
ID[4];
for (int i = 0; i < 4; i++)
{
ID[i] = fgetc(WAVF);
}
I keep getting this error:
Exception thrown at 0x00007FF696431309 in ConsoleApplication3.exe:
0xC0000005: Access violation writing location 0x0000000000000010.
| Your program has undefined behavior!
Your vector ID is empty. Calling operator[] on an empty std::vector invokes undefined behavior. You got lucky that your program crashed with "Access violation".
You need instead:
// create a vector of 4 empty strings
std::vector<std::string> ID(4);
for (auto& element : ID)
{
    element = /* some std::string value */;
}
However, in your case, std::fgetc returns an int:
The obtained character on success or EOF on failure.
So you might want a data structure such as std::vector<char> or (better yet) std::string.
|
69,739,017 | 69,752,035 | Label Text not changing on C++/CLR Windows Forms | I am working on a small C++/CLR Windows Forms Project on Visual Studios Community 2019 using .NET Framework 4.0 in which I have a Combo Box and a Label.
The code fragment below works fine:
private: System::Void comboBox1_SelectedIndexChanged(System::Object^ sender, System::EventArgs^ e) {
label1->Text = "comboBox1->Text";
}
But if I add a Sleep(1000); after label1->Text = "comboBox1->Text";, I expect the label to change before the sleep period, but it changes after the sleep period is over.
In general, the label1->Text = "comboBox1->Text"; gets executed after whatever is below that line.
For the below code fragment, I want the program to sleep after changing the label1 Text.
private: System::Void comboBox1_SelectedIndexChanged(System::Object^ sender, System::EventArgs^ e) {
label1->Text = "comboBox1->Text";
Sleep(1000);
}
| I understand what you mean. To implement this function, you need to use a timer. You need to add a timer to your WinForm, and then set the Interval value to 1000 in the timer's properties. You need to use Start to start the timer; you can refer to my code.
this->timer1->Interval = 1000;
this->timer1->Tick += gcnew System::EventHandler(this, &MyForm::timer1_Tick);
timer1->Start();
private: System::Void timer1_Tick(System::Object^ sender, System::EventArgs^ e) { label1->Text = comboBox1->Text; }
|
69,739,074 | 69,739,197 | Sorting structures inside a vector by two criteria in alphabetical order | I have the following data structure (the first string is the "theme" of the school):
map<string, vector<School>> information;
And the school is:
struct School {
string name;
string location;
}
I have trouble printing my whole data structure out in alphabetical order (first theme, then location, then name). For example:
"map key string : struct location : struct name"
"technology : berlin : university_of_berlin"
So far I managed to loop through the initial map by
for (auto const key:information) {
//access to struct
vector<School> v = key.second;
//sorting by location name
//comparasion done by seperate function that returns school.location1 < school.location2
sort(v.begin(), v.end(), compare);
If I print out the theme (key.first) and v.location, it's almost complete. The map is ordered by default and the location comparison works. But I can't figure out how to add a second comparison by name. If I do another sort, this time by name, then I lose the original order by location. Is it somehow possible to do a "double sort" where one criterion is more important than another?
| You can, you just need to add a condition in compare:
bool compare(School const& lhs, School const& rhs)
{
    if (lhs.location != rhs.location)
        return lhs.location < rhs.location;
    return lhs.name < rhs.name;
}
Or you can overload the < operator like @ceorron did
|
69,739,319 | 69,740,163 | What is the difference between conanfile.py, conanfile.txt, conanprofile and settings.yml? | I have been trying to build Conan packages of my project for a week. I have been reading the documentation but there are many points that I'm still confused about.
There are 4 files that I think are very important:
conanfile.py
conanfile.txt
conan_profile
settings.yml
What is the purpose of each file? Where should each file be located? Which ones are interchangeable?
I have the following conanfile.py that generates the Conan package:
from conans import ConanFile, CMake
class mylibConan(ConanFile):
name = "mylib"
version = "1.16.0"
generators = "cmake"
settings = "os", "arch", "compiler", "build_type"
options = {"shared": [True, False]}
default_options = "shared=False"
exports_sources = ["*"]
url = ""
license = ""
description = "The mylib HAL library."
def configure(self):
self.options.shared = False
def build(self):
cmake = CMake(self)
cmake.configure()
cmake.build()
def package(self):
libs_build_dir = "lib_mylib/" + str(self.settings.build_type)
api_dir = "modules/mylib/lib/api/"
self.copy(pattern="lib_mylib.lib", dst="lib", src=libs_build_dir)
self.copy(pattern="*", dst="include", src=api_dir)
def package_info(self):
self.cpp_info.includedirs = ['include']
self.cpp_info.libdirs = ['lib']
self.cpp_info.libs = ['mylib']
...and the following conanfile.txt in my main project that consumes the Conan package:
[requires]
mylib/1.16.0@demo/testing
[generators]
cmake
visual_studio_multi
I need to define the cl version to be 14.24.28314 so it doesn't conflict with the consuming project.
Where should I define the cl version?
| The files are:
conanfile.py is a Conan "recipe". It declares dependencies, how to build a package from sources. The same recipe can be used to manage different configurations, like different platforms, compilers, build types, etc
conanfile.txt is a text simplification of conanfile.py, that can be used exclusively to consume dependencies, but not to create packages. conanfile.py can be used both to consume dependencies and to create packages
a profile file is a text file containing configuration values like os=Windows and compiler=gcc. You can pass these values in the command line, but it is better to have them in files, easier to manage and more convenient.
settings.yml declares what values the settings can take. It validates inputs, catches typos, and provides a common set of configurations so people can collaborate.
I suggest following the tutorials in the docs, like https://docs.conan.io/en/latest/getting_started.html, or if you are into video-format, this free training is good: https://academy.jfrog.com/path/conan
Regarding the versions, you need to use those defined in the settings: for Visual Studio that means 14, 15, etc. The new msvc compiler setting, which is experimental, will use the compiler version, like 19.xx. In general, it is not necessary to specify the compiler version down to the patch level, because the setting mostly feeds the binary model, and that granularity is typically not needed. If you want to learn how to customize the settings values, read this section
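For illustration, a minimal profile file might look like the following (the values are examples I chose, not taken from the question); you would pass it on the command line, e.g. with `conan install . -pr=myprofile`:

```ini
[settings]
os=Windows
arch=x86_64
compiler=Visual Studio
compiler.version=16
build_type=Release

[options]
mylib:shared=False
```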
|
69,739,846 | 69,740,040 | Sharing two strings of data between processes in C++ | I've come into a problem recently where I had two separate processes that need to share two strings. (A dynamic IP address and a key) I'm used to using ROS for this, where I would define a ROS msg with the two strings and send it from one to the other.
However we are trying to go as simple as possible with our application here and thus avoid using third party SW as much as we can. To do so I originally planned to use shared memory to send a struct holding both std::string only to realize it is not a trivial problem as this struct's size is dynamic...
I've also thought of using other means like sockets or queues, but I always run into the problem of not knowing beforehand the size of this struct. How can one deal with this problem? Is there a way to do so that doesn't involve defining some protocol where you prepend the string with its size and end it with a null, or similar?
Here is a snip of my code that uses Qt to create a SharedMemory to pass this struct (unsuccessfully of course).
typedef struct
{
std::string ip_address;
std::string key;
}server_data_t;
#define SERVER_DATA_LEN sizeof(server_data_t)
class SharedMemoryManager : public QSharedMemory
{
...
QByteArray SharedMemoryManager::read()
{
QLOG_TRACE() << "SharedMemoryManager::read()";
QByteArray readData;
try
{
if(!isAttached())
throw std::runtime_error("Share memory segment has not been opened yet");
lock();
readData = QByteArray(static_cast<char *>(data()), size());
unlock();
return readData;
}
catch (std::exception &e)
{
QLOG_ERROR() << e.what();
return readData;
}
}
void SharedMemoryManager::write(const QByteArray &byte_array)
{
QLOG_TRACE() << "SharedMemoryManager::write()";
try
{
if(!isAttached())
throw std::runtime_error("Share memory segment has not been opened yet");
lock();
{
auto *from = byte_array.data();
char *to = static_cast<char*>(data());
memcpy(to, from, qMin(size(), byte_array.size()));
}
unlock();
}
catch (std::exception &e)
{
QLOG_ERROR() << e.what();
}
}
}
...
SharedMemoryManager _shm_manager;
server_data_t server_data;
server_data.ip_address = ...;
server_data.key = ...;
char *p = (char*)&server_data;
QByteArray byte_array = QByteArray(p, sizeof(server_data_t));
_shm_manager.write(byte_array);
...
SharedMemoryManager _shm_manager;
QByteArray byte_array = _shm_manager.read();
auto server_data = reinterpret_cast<server_data_t *>(byte_array.data());
std::cout << server_data->ip_address << std::endl;
When it tries to access to the string it fails.
| Shared memory should be fine (you even let Qt do all the hard work)
What you need is probably something like this, something that has a fixed size in your shared memory and still has enough space to hold your strings.
const std::size_t message_buf_size = 256;
struct data_t
{
char message[message_buf_size]; // copy string into this and terminate with 0
std::uint8_t ipv4[4];
};
Note that a std::string's character data is allocated on the heap anyway (except for small strings, which the small-string optimization keeps inside the object), and not in the shared memory segment.
|
69,739,930 | 69,741,150 | oat++ : put DTO in a list of DTOs | I'm trying to create a single big DTO from multiple DTOs, but I am having a lot of trouble putting my DTOs inside a list.
I have two DTOs :
class TypeDocDto : public oatpp::DTO
{
DTO_INIT(TypeDocDto, DTO)
DTO_FIELD(Int32, code);
DTO_FIELD(String, desciption);
};
class DocumentDto : public oatpp::DTO
{
DTO_INIT(DocumentDto, DTO)
DTO_FIELD(Int32, docNumber);
DTO_FIELD(Int32, typeDocNb);
DTO_FIELD(List<Object<TypeDocDto>>, typeDocs);
};
The idea here is that one document object can carry multiple "TypeDoc" objects.
So I tried to create a list of TypeDocDto, and then to add it to my DocumentDto object.
auto dtoDoc = DocumentDto::createShared();
dtoDoc->docNumber = 0; //That value is whatever for now.
dtoDoc->typeDocNb = 3;
oatpp::List<oatpp::Object<TypeDocDto>> typeDocsList = {};
for (int i = 0; i < dtoDoc->typeDocNb; i++)
{
auto typedocDto = TypeDocDto::createShared();
typedocDto->code = i;
typedocDto->desciption = "foo";
typeDocsList->emplace(typeDocsList->end(), typedocDto);
}
dtoDoc->typeDocs = typeDocsList;
But I can't manage to put anything in my typeDocsList variable. The objects I add always seem to be NULL.
What am I doing wrong ?
| Found where the issue comes from.
It looks like oat++ is a bit finicky when it comes to declaring the list object.
// oatpp::List<oatpp::Object<TypeDocDto>> typeDocsList = {}; should become:
oatpp::List<oatpp::Object<TypeDocDto>> typeDocsList({});
That precise syntax seems to be required. After that, my code works as intended.
|
69,739,936 | 69,740,141 | C++: std::map and std::set aren't ordered if using custom class (not pointers) | This must be something incredibly stupid, yet I can't manage to make head or tail from it.
This is the testing code.
#include <iostream>
#include <vector>
#include <limits>
#include <random>
#include <map>
#include <set>
#include <stdlib.h>
class value_randomized
{
public:
double value;
long random;
value_randomized()
{
value=0;
random=0;
}
/*
bool operator<(const value_randomized& b) const
{
if (value<b.value) return true;
return (random<b.random);
}
*/
friend bool operator<(const value_randomized& a, const value_randomized& b);
};
inline bool operator<(const value_randomized& a, const value_randomized& b)
{
return (a.value<b.value)?true:(a.random<b.random);
}
int main(int argc, char *argv[])
{
std::map<value_randomized,size_t> results;
for (size_t i=0; i<1000; ++i)
{
value_randomized r;
r.value=rand();
r.value/=RAND_MAX;
r.random=rand();
results.insert(std::make_pair(r, i));
}
std::multiset<value_randomized> s;
for (size_t i=0; i<1000; ++i)
{
value_randomized r;
r.value=rand();
r.value/=RAND_MAX;
r.random=rand();
s.insert(r);
}
return 0;
}
I've tried overloading the operator< both within the class and outside the class.
I've tried both maps and (multi)set.
Yet I can't understand how the results are ordered.
I'll show the screens of the debug window
Debug screen1
Debug screen2
Even printing directly the value of the multiset gives as result something apparently unordered. These are the first results
(0.0515083,114723506)
(0.0995593,822262754)
(0.0491625,964445884)
(0.410788,11614769)
(0.107848,1389867269)
(0.15123,155789224)
(0.293678,246247255)
(0.331386,195740084)
(0.138238,774044599)
(0.178208,476667372)
(0.162757,588219756)
(0.244327,700108581)
(0.329642,407487131)
(0.363598,619054081)
(0.111276,776532036)
(0.180421,1469262009)
(0.121143,1472713773)
(0.188201,937370163)
(0.210883,1017679567)
(0.301763,1411154259)
(0.394327,1414829150)
(0.383832,1662739668)
(0.260497,1884167637)
Clearly I'm missing something, but what?
Your comparison function does not implement a strict weak ordering, as required by the Compare named requirement.
Fixing the existing code could be done like this:
inline bool operator<(const value_randomized& a, const value_randomized& b) {
if(a.value < b.value) return true;
if(b.value < a.value) return false;
return a.random < b.random;
}
or simpler, use std::tie:
inline bool operator<(const value_randomized& a, const value_randomized& b) {
return std::tie(a.value, a.random) < std::tie(b.value, b.random);
}
|
69,739,946 | 69,857,080 | C++ Best way to search/traverse/replace multiple tags with pugixml? | I have to replace multiple tags in multiple XML templates to build some web service requests. While with pugixml I can access a tag like doc.child("tag1").child("tag2"), etc., I don't know if that is the best way, since with multiple templates and multiple nested tags there would be multiple lines of code for each tag (they have different paths).
My input would be a struct or something with multiple std::strings that I need to replace. This is one XML template.
example xml_template:
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:es="http://mywebservice.com/">
<soapenv:Header />
<soapenv:Body>
<es:Tag1>
%(auth)
<es:AnotherTag>
<es:Date>
<es:CantReg>%(num_records)</es:CantReg>
<es:Pto>%(pto_data)</es:Pto>
<es:DataType>%(data_type)</es:DataType>
</es:Date>
<es:Req>
<es:DetRequest>
%(user_data)
<es:CA>%(CA)</es:CA>
<es:GenDate>%(datetime_date)</es:GenDate>
</es:DetRequest>
</es:Req>
</es:AnotherTag>
</es:Tag1>
</soapenv:Body>
</soapenv:Envelope>
I thought of using regex, but there is probably a better way with XML directly.
Pugixml has this example, but it doesn't traverse all the XML children (full depth), and in my case, with multiple strings/templates, I would still have different functions for each template and each tag.
pugixml example:
pugi::xml_document doc;
if (!doc.load_file("xgconsole.xml")) return -1;
pugi::xml_node tools = doc.child("Profile").child("Tools");
for (pugi::xml_node tool = tools.first_child(); tool; tool = tool.next_sibling())
{
std::cout << "Tool:";
for (pugi::xml_attribute attr = tool.first_attribute(); attr; attr = attr.next_attribute())
std::cout << " " << attr.name() << "=" << attr.value();
std::cout << std::endl;
}
<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<Profile FormatVersion="1">
<Tools>
<Tool Filename="jam" AllowIntercept="true">
<Description>Jamplus build system</Description>
</Tool>
<Tool Filename="mayabatch.exe" AllowRemote="true" OutputFileMasks="*.dae" DeriveCaptionFrom="lastparam" Timeout="40" />
<Tool Filename="meshbuilder_*.exe" AllowRemote="false" OutputFileMasks="*.mesh" DeriveCaptionFrom="lastparam" Timeout="10" />
<Tool Filename="texbuilder_*.exe" AllowRemote="true" OutputFileMasks="*.tex" DeriveCaptionFrom="lastparam" />
<Tool Filename="shaderbuilder_*.exe" AllowRemote="true" DeriveCaptionFrom="lastparam" />
</Tools>
</Profile>
I couldn't find in pugixml a method like doc.find_tag_by_name(tag_name), only doc.find_child_by_attribute(), but that isn't recursive: it only searches the children of that tag.
The ideal would be a function that I invoke like replace_tag(std::string tag, std::string new_text), and then another function where I loop over each string/tag that I need.
What would be a good way to do it? Probably an algorithm that searches through the whole XML and returns the node path when node.name() matches my tag_name (or replaces it directly), but I'm not very familiar with search algorithms, nor with search algorithms over an XML structure.
| I finally ended up using this.
int get_tag_value(pugi::xml_document *xml_doc, std::string tag_name, std::string *tag_value)
{
std::string search_str = "//*/"; // xpath search for nested tags
search_str += tag_name;
pugi::xpath_node xpath_node = xml_doc->select_node(search_str.c_str()); // search node
if(!xpath_node)
return -1;
pugi::xml_node selected_node = xpath_node.node().first_child();
*tag_value = selected_node.value();
return 0;
}
Similar with replace_tag but using selected_node.set_value(value.c_str()); at the end.
|
69,739,985 | 69,740,081 | What is the change I need to make to perform reverse of upper_bound? | I feel lower_bound in c++ stl is not the opposite of the upper_bound function. By default, in a non-decreasing array, if I use upper_bound and if the element is found and it is not the last element in the sorted array, then the next element > passed element is given and if the element is then last element or not found, then end() iterator returned. The following C++ code can be used to test this.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
int
main ()
{
vector<int> arr{-973, -808, -550, 301, 414, 897};
auto p = upper_bound (arr.begin (), arr.end (), 301);
if (p != arr.end ())
cout << *p << endl;
else
cout << "end" << endl;
return 0;
}
// Output: 414
But now I need the opposite thing. I need the smaller element from the matched element to be returned. In the above example, if I pass 301, then, I want to get -550 in return. Currently, I am using the following code and it seems to work for the given example but I am not sure if it is the right code or I need to implement it manually using binary search.
auto p = upper_bound(arr.rbegin(), arr.rend(), 301, greater<int>());
PS. I am using if (p != arr.rend ()) for this.
| std::lower_bound is what you want here. lower_bound returns the first element that is equal to or greater than the input provided. Knowing that, if you do
auto p = lower_bound(arr.begin (), arr.end (), 301);
then p will point at the 301, and moving it back one element gives you -550. You just need to check before you do that subtraction that p != arr.begin(). If p != arr.begin(), the subtraction is valid; if p == arr.begin(), then there is no element less than the input passed to lower_bound. That gives you something like
auto p = lower_bound(arr.begin (), arr.end (), input_value);
if (p == arr.begin())
std::cout << "no element found less than " << input_value;
else
std::cout << "greatest element less than " << input_value << " is " << *std::prev(p);
|
69,740,936 | 69,741,110 | How to get input and display 2 dimensional array in c++? | Code:-
#include <iostream>
using namespace std;
int main() {
int r,c,*p;
cout<<"Rows : ";
cin>>r;
cout<<"Columns : ";
cin>>c;
p=new int[r*c];
cout<<"\nEnter array elements :"<<endl;
int i,j,k;
for( i=0;i<r;i++){
for( j=0;j<c;j++){
cin>>k;
*(p+i*c+j)=k;
}
}
cout<<"\nThe array elements are:"<<endl;
for( i=0;i<r;i++){
for( j=0;j<c;j++){
cout<<*p<<" ";
p=p+1;
}
cout<<endl;
}
cout<<endl;
delete[]p;
return 0;
}
Output:-
(screenshot of program output omitted)
Error:-
munmap_chunk(): invalid pointer Process finished with exit code -6.
Can anyone explain why the above error occurs?
| The problem occurs at the end of your program, when you delete[] p. In the nested for-loop immediately preceding it, you are modifying p, thus, when attempting to delete[] p at the end, you get undefined behaviour.
Potential fixes include, when printing the array elements, accessing the pointer the same way you did in the first for loop (or, as Jarod mentioned, using p[i * c + j]).
Alternatively, you can use an additional variable.
int *tmp = p;
for( i = 0 ; i < r; i++ ){
for( j = 0; j < c; j++ ){
cout << *tmp << " ";
tmp = tmp + 1; // ++tmp; also works
}
cout << endl;
}
cout << endl;
delete[] p;
This way, p still points to the original address.
|
69,740,993 | 69,742,369 | Wrong handle while setting text to RichEdit control | I can set plain text in a RichEdit control with the SF_TEXT flag, but I can't set RTF text with the SF_RTF flag.
Here is the creation of control:
LoadLibrary(TEXT("Riched20.dll"));
richEdit = CreateWindowEx(
0,
RICHEDIT_CLASS,
TEXT(""),
ES_MULTILINE | WS_VISIBLE | WS_CHILD | WS_BORDER | WS_TABSTOP,
x,
y,
width,
height,
hwndOwner,
NULL,
hInstance,
NULL);
}
And here is how I set the text (I took the example from somewhere on SO), resulting in "ERROR_INVALID_HANDLE":
DWORD CALLBACK EditStreamInCallback(DWORD_PTR dwCookie, LPBYTE pbBuff, LONG cb, LONG *pcb)
{
std::wstringstream *rtf = (std::wstringstream*) dwCookie;
*pcb = rtf->readsome((wchar_t*)pbBuff, cb);
return 0;
}
// ......
// Setting text
std::wstringstream rtf(L"{\rtf1Hello!\par{ \i This } is some { \b text }.\par}");
EDITSTREAM es = { 0 };
es.dwCookie = (DWORD_PTR)&rtf;
es.pfnCallback = &EditStreamInCallback;
if (SendMessage(richEdit, EM_STREAMIN, SF_RTF, (LPARAM)&es) && es.dwError == 0)
{
// ...
} else {
auto error = GetLastError(); // error == 6
}
| Backslashes in the RTF string must be escaped ("\\rtf1", "\\par", etc., not "\rtf1", "\par"), or use a raw string literal. The string should also use a compatible font. For example:
std::wstring wrtf = (LR"({\rtf1{\fonttbl\f0\fswiss Helvetica;}
Hello world Привет Ελληνικά 日本語 { \b bold \b } regular.\par})");
EM_STREAMIN/EM_STREAMOUT expect BYTE input/output.
When reading/writing standard RTF file, just read the file as char, don't do any conversion to wide string, and use this format
format = SF_RTF;
If source is UTF16 string literal, convert it to UTF8, everything else stays the same, except format changes to:
format = SF_RTF | SF_USECODEPAGE | (CP_UTF8 << 16);
Example reading UTF16 string literal in to RichEdit control:
std::string utf8(const std::wstring& wstr)
{
if (wstr.empty()) return std::string();
int sz = WideCharToMultiByte(CP_UTF8,0,wstr.c_str(),-1,0,0,0,0);
std::string res(sz, 0);
WideCharToMultiByte(CP_UTF8,0,wstr.c_str(),(int)wstr.size(),&res[0],sz,0,0);
return res;
}
DWORD CALLBACK EditStreamInCallback(
DWORD_PTR dwCookie, LPBYTE pbBuff, LONG cb, LONG* pcb)
{
std::stringstream* rtf = (std::stringstream*)dwCookie;
*pcb = rtf->readsome((char*)pbBuff, cb);
return 0;
}
std::stringstream ss(utf8(wrtf));
EDITSTREAM es = { 0 };
es.dwCookie = (DWORD_PTR)&ss;
es.pfnCallback = &EditStreamInCallback;
SendMessage(richedit, EM_STREAMIN,
SF_RTF | SF_USECODEPAGE | (CP_UTF8 << 16), (LPARAM)&es);
|
69,741,034 | 69,741,492 | changing the declaration from char to int, made output different | In the code below, when I change the declaration of "isuit" from "char" to "int", the results differ.
I thought int and char are essentially the same, so I cannot figure out why.
#include <iostream>
#include <cstdio>
using namespace std;
int main()
{
int n, irank;
int cards[4][13] = {};
char isuit;
cin >> n;
for (int i = 0; i < n; i++) {
cin >> isuit >> irank;
switch (isuit)
{
case 'S':
cards[0][irank - 1]++;
break;
case 'H':
cards[1][irank - 1]++;
break;
case 'C':
cards[2][irank - 1]++;
break;
case 'D':
cards[3][irank - 1]++;
break;
}
}
for (int i = 0; i < 4; i++) {
for (int j = 0; j < 13; j++) {
if (!cards[i][j]) {
switch (i)
{
case 0:
cout << "S" << " " << j + 1 << endl;
break;
case 1:
cout << "H" << " " << j + 1 << endl;
break;
case 2:
cout << "C" << " " << j + 1 << endl;
break;
case 3:
cout << "D" << " " << j + 1 << endl;
break;
}
}
}
}
return 0;
}
Example input:
47
S 10
S 11
S 12
S 13
H 1
H 2
S 6
S 7
S 8
S 9
H 6
H 8
H 9
H 10
H 11
H 4
H 5
S 2
S 3
S 4
S 5
H 12
H 13
C 1
C 2
D 1
D 2
D 3
D 4
D 5
D 6
D 7
C 3
C 4
C 5
C 6
C 7
C 8
C 9
C 10
C 11
C 13
D 9
D 10
D 11
D 12
D 13
output when char:
S 1
H 3
H 7
C 12
D 8
output when int:
S 1
S 2
S 3
S 4
S 5
S 6
S 7
S 8
S 9
S 10
S 11
S 12
S 13
H 1
H 2
H 3
H 4
H 5
H 6
H 7
H 8
H 9
H 10
H 11
H 12
H 13
C 1
C 2
C 3
C 4
C 5
C 6
C 7
C 8
C 9
C 10
C 11
C 12
C 13
D 1
D 2
D 3
D 4
D 5
D 6
D 7
D 8
D 9
D 10
D 11
D 12
D 13
| Think of data types like ice cream, where you can choose size and flavor.
For flavors you have two choices, signed and unsigned.
For sizes, you have a range from 1 byte to 8 bytes. People refer to these as uint8_t, uint16_t, uint32_t.... etc.
So the difference between int and char is its 'size' and 'signed or unsignedness'. Google datatypes for more info regarding these two differences.
Note, there is even more complexity in breaking 'how' each datatype is interpreted when it comes to byte encoding and endianess... Lets ignore that for now.
Here are some resources discussing the difference between char and int
Difference between char and int when declaring character
Practical difference between int and char
In addition, when we think of characters and numbers, they are two completely different things in our brains. But to computers they are similar. Like the 'yes, but actually no' meme.
1 != '1'
'1' is encoded as interpreted by the computer as the integer value 49. So really everything is interpreted as a number, even characters! It may help to think of it as: "Everything is a number, because everything is a series of bits. And as we know, bits are just binary numbers." ==> Everything is a number!
Now back to your problem. You are telling the computer to interpret the std::cin value very differently. Remember, 1 != '1'.
Int ==> A number (1)
Char ==> ASCII table interpretation ('1')
|
69,741,268 | 69,741,314 | How do I initialize all elements of TrieNodes' children to null | I am trying to solve a Trie problem, for which I create a TrieNode class as below:
class TrieNode {
public:
bool isWord;
TrieNode* children[26];
TrieNode() {
isWord=false;
memset(children, NULL, sizeof(children)); //results in warning
};
};
This results in a warning:
warning: passing NULL to non-pointer argument 2 of 'void* memset(void*, int, size_t)' [-Wconversion-null]
Replacing it with nullptr results in a compile time error:
error: cannot convert 'std::nullptr_t' to 'int' for argument '2' to 'void* memset(void*, int, size_t)'
So my question is, how do I initialize all the values in children to NULL/nullptr? I tried a few options such as children[26]={ nullptr };, but those all resulted in runtime errors (worked fine only with memset(children, NULL, sizeof(children));).
Eventually, while building the trie, I wish to have the following logic:
if(!curr->children[index]) {
curr->children[index]=new TrieNode();
}
curr=curr->children[index];
| You might do:
class TrieNode
{
public:
bool isWord = false;
TrieNode* children[26]{};
TrieNode() = default;
};
|
69,741,940 | 69,742,342 | How do assign an overall variable for several inputs? | I am new to C++. I am facing a problem. I want to have 2 different dates (DD/MM/YYYY)
How do I assign an overall variable for the first chunk and another one for the 2nd chunk?
For example:
First Day of date: 2
First Month "" : 5
First year "" : 1985
-------
Second ""
second ""
second ""
--------
if both are the same dates the output will be "Both are equal"
else the greater will be mentioned "Date ... is greater"
I appreciate the help in advance.
#include <iostream>
using namespace std;
int main()
{
//This should be my first chunk
int first_date;
cout << "first day: ";
cin >> first_date;
int first_month;
cout << "first month: ";
cin >> first_month;
int first_year;
cout << "first year: ";
cin >> first_year;
}
| The below program shows what you want:
#include <iostream>
struct Date
{
//always always initialize built in type in block/local scope
int date = 0, month = 0, year = 0; // by default public
//default constructor
Date() = default;
//lets overload operator== for comparing two Date types as you desire
friend bool operator==(const Date &lhs, const Date &rhs);
friend bool operator!=(const Date &lhs, const Date &rhs);
//overload operator>> for taking input from user
friend std::istream& operator>>(std::istream &is, Date &rhs);
};
bool operator==(const Date &lhs, const Date &rhs)
{
return lhs.date == rhs.date &&
lhs.month == rhs.month &&
lhs.year == rhs.year;
}
bool operator!=(const Date &lhs, const Date &rhs)
{
return !(lhs == rhs);
}
std::istream& operator>>(std::istream &is, Date &rhs)
{
std::cout<<"Enter date: ";
//take date as input from user
is >> rhs.date;
std::cout<<"Enter month: ";
//take month as input from user
is >> rhs.month;
std::cout<<"Enter year: ";
//take year as input from user
is >> rhs.year;
//check if input succeeded
if(!is)
{
rhs = Date();
}
return is;
int main()
{
//create first Date
struct Date d1;
std::cin >> d1;//take input from user
//create second Date
struct Date d2;
std::cin >> d2;//take input from user
//lets check if dates d1 and d2 entered by user are equal or not
std::cout<<"The dates d1 and d2 are: "<<(d1==d2? "equal": "not equal")<<std::endl;
return 0;
}
|
69,742,093 | 69,744,721 | Get the size of an std::array as r-value | Consider the following snippet of code:
#include<array>
#include<cstdint>
const std::array<int, 3> array{0, 1 , 2};
template<class string_type>
auto parse(string_type&& name) {
const auto s = std::uint8_t{array.size()};
return s;
}
While it compiles using gcc 9.3.0 (the default on Ubuntu 20.04), it fails with gcc 11.2.0 (built from sources) with the following error message:
test2.cpp: In function ‘auto parse(string_type&&)’:
test2.cpp:8:47: error: no matching function for call to ‘std::array<int, 3>::size(const std::array<int, 3>*)’
8 | const auto s = std::uint8_t{array.size()};
| ~~~~~~~~~~^~
In file included from test2.cpp:1:
/opt/modules/install/gcc/11.2.0/include/c++/11.2.0/array:176:7: note: candidate: ‘constexpr std::array<_Tp, _Nm>::size_type std::array<_Tp, _Nm>::size() const [with _Tp = int; long unsigned int _Nm = 3; std::array<_Tp, _Nm>::size_type = long unsigned int]’
176 | size() const noexcept { return _Nm; }
| ^~~~
/opt/modules/install/gcc/11.2.0/include/c++/11.2.0/array:176:7: note: candidate expects 0 arguments, 1 provided
Running example
Besides the fact that it does not make much sense, I can't find where the error is. Can you help me?
| It appears to be a bug in:
gcc-10: https://godbolt.org/z/95TTv4z9P and
gcc-11: https://godbolt.org/z/KWMs4MMcK
It works fine in:
gcc-9: https://godbolt.org/z/YMqsMjr7x and
clang: https://godbolt.org/z/6Kq9nY7bo
To work around, you can do either this:
const auto s = static_cast<std::uint8_t>(array.size());
or this:
const std::uint8_t s = array.size();
or this (but please don't):
const auto s = std::uint8_t( array.size() );
I would suggest this:
#include<array>
#include<cstdint>
const std::array<int, 3> array{ 0, 1 , 2 };
template<class string_type>
auto parse(string_type&& name)
{
const std::uint8_t s = array.size();
return s;
}
Running example
|
69,742,205 | 69,742,592 | Climbing Stairs DP Problem base case concept | Question:
You are climbing a staircase. It takes n steps to reach the top.
Each time you can either jump 1 or 2 or 3 steps. In how many total number of ways can you jump to the top?
My Explanation:
Well, I'm thinking of applying recursion because I can find the solution by solving similar subproblems, and in that process there will be many overlapping subproblems, so I'll use an array to save the results of subproblems so that I don't need to solve the same subproblem twice. So I'm using a top-down DP approach.
My Doubt:
Now to build the solution, I need a base case where the recursion ends and control returns to the parent node (if you visualize it as a tree). The base case I was thinking of is when I am at the floor, at ground 0: there is no other way I can reach the ground-0 state, so it's the base case.
Should I return 0 or 1 when n=0? That's my doubt. I have written the code, and it works when I return 1 at n=0, not 0. So why should I return 1 when n=0? What's the reason behind it? Please help!
My Code:
#include <iostream>
using namespace std;
int climbing_ladders_topDown(int n, int k, int dp[]){
//Base Case
if(n==0){
return 1;
}
//LookUp
if(dp[n]!=0){
return dp[n];
}
//Recursive Case
int total_num_of_ways = 0;
for(int jumps=1;jumps<=k;jumps++){
if(n-jumps>=0){
total_num_of_ways += climbing_ladders_topDown(n-jumps,k,dp);
}
}
dp[n] = total_num_of_ways;
return dp[n];
}
int main() {
int num_of_stairs = 4;
int num_of_jumbs = 3;
int dp_arr[100] = {0};
cout<<climbing_ladders_topDown(num_of_stairs,num_of_jumbs,dp_arr);
return 0;
}
Output: 7
Correct flow of Code (thanks to @appleapple):
#include <iostream>
using namespace std;
int climbing_ladders_topDown(int n, int k, int dp[]){
//Base Case
if(n==0){
return 0;
}
//LookUp
if(dp[n]!=0){
return dp[n];
}
//Recursive Case
int total_num_of_ways = 0;
for(int jumps=1;jumps<=k;jumps++){
if(n-jumps > 0){
total_num_of_ways += climbing_ladders_topDown(n-jumps,k,dp);
}
if(n-jumps == 0){ // have reach the end or ground 0, base so no more move possible in downward direction
total_num_of_ways += 1;
}
if(n-jumps < 0){ //we can't move to -ve state/underground, because it doesn't exist
total_num_of_ways += 0;
}
}
dp[n] = total_num_of_ways;
return dp[n];
}
int main() {
int num_of_stairs = 4;
int num_of_jumbs = 3;
int dp_arr[100] = {0};
cout<<climbing_ladders_topDown(num_of_stairs,num_of_jumbs,dp_arr);
return 0;
}
| because you request it to be 1 here (f(0) = 1)
for(int jumps=1;jumps<=k;jumps++){
if(n-jumps>=0){
total_num_of_ways += climbing_ladders_topDown(n-jumps,k,dp); // here
}
}
if you want f(0)=0, since recursing into f(0) doesn't really make sense anymore (there is no possible solution, just like f(-1)),
the algorithm for such a case becomes
if(n<=0){ // not really necessary as implied inside the loop
return 0; // not possible
}
///...
int total_num_of_ways = 0;
for(int jumps=1;jumps<=k;jumps++){
if(n-jumps>0){
total_num_of_ways += climbing_ladders_topDown(n-jumps,k,dp);
}
if(n-jumps==0){ // have reach the end, no more move possible
++total_num_of_ways; // you put this under n=0
}
// if(n-jumps<0){/*do nothing*/}
}
Note: f(0) = 0 or f(0) = 1 provide a little different meaning. (so the algorithm also change)
f(0) = 1 means no move is a possible solution.
f(0) = 0 means at least 1 move need to be taken.
Both imply there is no possible way to go back once leave 0 (no negative movement), btw.
|
69,742,326 | 69,742,744 | How can I merge three functions into one? | I have written the code below, but the 3 functions must be replaced by 1 and I don't know how.
The program creates 3 arrays, but a single function must calculate the negative numbers in each column and find the max element in each column. Here's the code:
#include <iostream>
#include <ctime>
#include <iomanip>
using namespace std;
int n = 0;
const int m = 3, k = 3, b = 4, u = 5;
int i, j;
void calc(float** array, int i, int j );
void calc1(float** array, int i, int j);
void calc2(float** array, int i, int j);
int main()
{
float** array = new float* [m];
for (int l = 0; l < m; l++) {
array[l] = new float[k];
}
// fill the array
srand(time(0));
for (int i = 0; i < m; i++) {
for (int j = 0; j < k; j++) {
array[i][j] = rand() % 21 - 10;
}
}
cout << "The initial array is: " << endl << endl;
for (int i = 0; i < m; i++)
{
for (int j = 0; j < k; j++) {
cout << setprecision(2) << setw(4) << array[i][j] << " ";
}
cout << endl;
}
cout << endl << "The amount of negative elements in each column: ";
calc(array, i, j); // FUNCTION !!!
float** arr = new float* [b];
for (int l = 0; l < b; l++) {
arr[l] = new float[b];
}
// fill the array
srand(time(0));
for (int i = 0; i < b; i++) {
for (int j = 0; j < b; j++) {
arr[i][j] = rand() % 21 - 10;
}
}
cout << "The initial array is: " << endl << endl;
for (int i = 0; i < b; i++)
{
for (int j = 0; j < b; j++) {
cout << setprecision(2) << setw(4) << arr[i][j] << " ";
}
cout << endl;
}
cout << endl << "The amount of negative elements in each column: ";
calc(arr, i, j); // FUNCTION !!!
float** ar = new float* [u];
for (int l = 0; l < u; l++) {
ar[l] = new float[u];
}
// fill the array
srand(time(0));
for (int i = 0; i < u; i++) {
for (int j = 0; j < u; j++) {
ar[i][j] = rand() % 21 - 10;
}
}
cout << "The initial array is: " << endl << endl;
for (int i = 0; i < u; i++)
{
for (int j = 0; j < u; j++) {
cout << setprecision(2) << setw(4) << ar[i][j] << " ";
}
cout << endl;
}
cout << endl << "The amount of negative elements in each column: ";
calc2(ar, i, j); // FUNCTION !!!
}
void calc(float** array, int i, int j) {
int max = array[0][0];
for (int j = 0; j < k; j++)
{
max = array[0][0];
for (int i = 0; i < k; i++) {
if (array[i][j] > max)
max = array[i][j];
if (array[i][j] < 0) {
n += 1;
}
}
cout << endl << "IN the [" << j + 1 << "] column is " << n << " negative elements" << endl << endl; n = 0;
cout << "IN the [" << j + 1 << "] column is " << max << " maximal element" << endl;
}
}
void calc1(float** arr, int i, int j) {
int max = arr[0][0];
for (int j = 0; j < b; j++)
{
max = arr[0][0];
for (int i = 0; i < b; i++) {
if (arr[i][j] > max)
max = arr[i][j];
if (arr[i][j] < 0) {
n += 1;
}
}
cout << endl << "IN the [" << j + 1 << "] column is " << n << " negative elements" << endl << endl; n = 0;
cout << "IN the [" << j + 1 << "] column is " << max << " maximal element" << endl;
}
}
void calc2(float** ar, int i, int j) {
int max = ar[0][0];
for (int j = 0; j < u; j++)
{
max = ar[0][0];
for (int i = 0; i < u; i++) {
if (ar[i][j] > max)
max = ar[i][j];
if (ar[i][j] < 0) {
n += 1;
}
}
cout << endl << "IN the [" << j + 1 << "] column is " << n << " negative elements" << endl << endl; n = 0;
cout << "IN the [" << j + 1 << "] column is " << max << " maximal element" << endl;
}
}
| The parameters to calc() should be the number of rows and columns in the array. Then it should use these as the limits in the for loops.
Also, since you're calculating total negative and maximum for each column, you must reset these variables each time through the column loop.
#include <iostream>
#include <ctime>
#include <iomanip>
using namespace std;
const int m = 3, k = 3, b = 4, u = 5;
void calc(float** array, int rows, int cols);
int main()
{
float** array = new float* [m];
for (int l = 0; l < m; l++) {
array[l] = new float[k];
}
// заполнение массива
srand(time(0));
for (int i = 0; i < m; i++) {
for (int j = 0; j < k; j++) {
array[i][j] = rand() % 21 - 10;
}
}
cout << "The initial array is: " << endl << endl;
for (int i = 0; i < m; i++)
{
for (int j = 0; j < k; j++) {
cout << setprecision(2) << setw(4) << array[i][j] << " ";
}
cout << endl;
}
cout << endl << "The amount of negative elements in each column: ";
calc(array, m, k); // FUNCTION !!!
float** arr = new float* [b];
for (int l = 0; l < b; l++) {
arr[l] = new float[b];
}
// заполнение массива
srand(time(0));
for (int i = 0; i < b; i++) {
for (int j = 0; j < b; j++) {
arr[i][j] = rand() % 21 - 10;
}
}
cout << "The initial array is: " << endl << endl;
for (int i = 0; i < b; i++)
{
for (int j = 0; j < b; j++) {
cout << setprecision(2) << setw(4) << arr[i][j] << " ";
}
cout << endl;
}
cout << endl << "The amount of negative elements in each column: ";
calc(arr, b, b); // FUNCTION !!!
float** ar = new float* [u];
for (int l = 0; l < u; l++) {
ar[l] = new float[u];
}
// заполнение массива
srand(time(0));
for (int i = 0; i < u; i++) {
for (int j = 0; j < u; j++) {
ar[i][j] = rand() % 21 - 10;
}
}
cout << "The initial array is: " << endl << endl;
for (int i = 0; i < u; i++)
{
for (int j = 0; j < u; j++) {
cout << setprecision(2) << setw(4) << ar[i][j] << " ";
}
cout << endl;
}
cout << endl << "The amount of negative elements in each column: ";
calc(ar, u, u); // FUNCTION !!!
}
void calc(float** array, int rows, int cols) {
for (int j = 0; j < cols; j++)
{
int n = 0;
float max = array[0][j];
for (int i = 1; i < rows; i++) {
if (array[i][j] > max)
max = array[i][j];
if (array[i][j] < 0) {
n += 1;
}
}
cout << endl << "IN the [" << j + 1 << "] column is " << n << " negative elements" << endl << endl;
cout << "IN the [" << j + 1 << "] column is " << max << " maximal element" << endl;
}
}
|
69,742,511 | 69,742,634 | Pass-by-value and std::move vs forwarding reference | I encounter the pass by value and move idiom quite often:
struct Test
{
Test(std::string str_) : str{std::move(str_)} {}
std::string str;
};
But it seems to me that passing by either const reference or rvalue reference can save a copy in some situations. Something like:
struct Test1
{
Test1(std::string&& str_) : str{std::move(str_)} {}
Test1(std::string const& str_) : str{str_} {}
std::string str;
};
Or maybe using a forwarding reference to avoid writing both constructors. Something like:
struct Test2
{
template<typename T> Test2(T&& str_) : str{std::forward<T>(str_)} {}
std::string str;
};
Is this the case? And if so, why is it not used instead?
Additionally, it looks like C++20 allows the use of auto parameters to simplify the syntax. I am not sure what the syntax would be in this case. Consider:
struct Test3
{
Test3(auto&& str_) : str{std::forward<decltype(str_)>(str_)} {}
std::string str;
};
struct Test4
{
Test4(auto str_) : str{std::forward<decltype(str_)>(str_)} {}
std::string str;
};
Edit:
The suggested questions are informative, but they do not mention the "auto" case.
|
But it seems to me that passing by either const reference or rvalue reference can save a copy in some situations.
Indeed, but it requires more overloads (and it gets even worse with several parameters).
The pass-by-value-and-move idiom has (at worst) one extra move, which is a good trade-off most of the time.
maybe using a forwarding reference to avoid writing both constructors.
Forwarding reference has its own pitfalls:
disallows {..} syntax for parameter as {..} has no type.
Test2 a({5u, '*'}); // "*****"
would not be possible.
is not restricted to valid types (requires an extra requires clause or SFINAE).
Test2 b(4.2f); // Invalid, but `std::is_constructible_v<Test2, float>` is (falsely) true.
would produce an error inside the constructor rather than at the call site (so the error message is less clear, and SFINAE is not possible).
for constructor, it can take precedence over copy constructor (for non-const l-value)
Test2 c(a); // Call Test2(T&&) with T=Test2&
// instead of copy constructor Test2(const Test2&)
would produce an error, as std::string cannot be constructed from Test2&.
|
69,743,217 | 69,744,364 | What is a URI parameter for openLDAP that contains a schema, host, and port? | Specifically "the uri parameter may be a comma- or whitespace-separated list of URIs containing only the schema, the host, and the port fields" what does this mean?
Working with openLDAP in c.
| It is a simple list of URIs. The list can be separated by whitespace or commas:
Example:
ldap://myhost.com:389,ldap://myhost1.com:389,ldaps://myhost2.com:636
or
ldap://myhost.com:389 ldap://myhost1.com:389 ldaps://myhost2.com:636
|
69,743,266 | 69,743,419 | Why does segmentation fault occur when pushing to pointer of vector | I am trying to learn c++ and wanted to write a simple program to explore the use of vectors and pointer. When I try to run a simple program that uses this function a segmentation fault occur. When I change
std::vector<string> *data;
to
std::vector<string> data;
and change the '->push_back()' to a '.push_back()' it runs fine.
int simple_tokenizer(string s)
{
std::stringstream ss(s);
std::vector<string> *data;
string word;
//char delimiter = ',';
while(getline(ss,word, ',')) {
//cout << "charsplit" << word << endl;
data->push_back(word);
}
return 0;//data;
}
| Your code is generating a segmentation fault because you never allocated memory for your pointer: data is uninitialized, so dereferencing it with push_back is undefined behavior.
int simple_tokenizer(string s)
{
std::stringstream ss(s);
std::vector<string> *data = new std::vector<string>();
string word;
//char delimiter = ',';
while(getline(ss,word, ',')) {
//cout << "charsplit" << word << endl;
data->push_back(word);
}
return 0;//data;
}
Mind you, you would need to delete it once you are done using it, but really there is no point in allocating a std::vector dynamically: the vector already allocates everything it needs internally, and by using it as a plain object you avoid memory leaks because you don't have to chase it around with delete everywhere.
|
69,743,370 | 69,743,875 | Using Member Variable to set address.sin_port | I am currently sitting on a small C++ project, where I am trying to write a class that implements tcp sockets, when I came across the following:
ServerSocket::ServerSocket(uint16_t port_) {
struct sockaddr_in _address;
uint16_t _port = port_;
this->bindSocket();
}
int ServerSocket::bindSocket() {
_address.sin_port = htons(_port);
std::cout << _address.sin_port << std::endl;
}
Which prints "0" and doesn't work as expected, while the following works as expected:
ServerSocket::ServerSocket(uint16_t port_) {
struct sockaddr_in _address;
this->bindSocket(port_);
}
int ServerSocket::bindSocket(uint16_t port_) {
_address.sin_port = htons(port_);
std::cout << _address.sin_port << std::endl;
}
I don't understand how the first piece of code does not work and I really hope somebody can help me to understand.
| You have accidentally declared a variable, instead of initializing a member:
ServerSocket::ServerSocket(uint16_t port_) {
struct sockaddr_in _address; // This declares a local that shadows the member
uint16_t _port = port_; // <-- oopsie
this->bindSocket(); // redundant this
}
It should be:
ServerSocket::ServerSocket(uint16_t port_) {
_port = port_;
this->bindSocket();
}
You can try to avoid situations like these in the future if you use a member initializer list
ServerSocket::ServerSocket(uint16_t port_) : _port(port_)
{
bindSocket();
}
|
69,743,601 | 69,743,621 | How to suppress unused (void **arg) parameter? | In the function below I'm not using the parameter (void **arg). But since it's unused inside the function, the compiler gives me the error below:
error: unused parameter 'arg' [-Werror=unused-parameter]
bool decodeString(pb_istream_t *stream, const pb_field_t *field, void **arg)
I tried to suppress it by writing void(arg) inside the function without any luck.
Can anyone help me with the correct way?
| Use the parameter in an expression cast to void. Then the parameter counts as "used".
bool decodeString(pb_istream_t *stream, const pb_field_t *field, void **arg)
{
(void)arg;
...
}
|
69,743,701 | 69,745,287 | Undefined reference when compiling when using header and cpp file with templates in one of them | I've been trying to compile my project and I've encountered some problems when trying so. The error in particular that appears is:
[build] /usr/bin/ld: CMakeFiles/robot_control.dir/main.cpp.o:(.data.rel.ro._ZTVN4comm15cameraInterfaceE[_ZTVN4comm15cameraInterfaceE]+0x10): undefined reference to `comm::Interface<cv::Mat>::callbackMsg()'
My project is organized right now as it follows:
-${HOME_WORKSPACE}
|-main.cpp
|-src
|-communication.cpp
|-communication.hpp
The header file (communication.hpp) is:
#include <opencv2/opencv.hpp>
#include <gazebo/gazebo_client.hh>
#include <gazebo/msgs/msgs.hh>
#include <gazebo/transport/transport.hh>
#include <algorithm>
#ifndef COMM_GUARD
#define COMM_GUARD
namespace comm
{
struct lidarMsg
{
float angle_min, angle_increment, range_min, range_max;
int nranges, nintensities;
std::vector<int> ranges;
};
template <typename T>
class Interface
{
public:
Interface() : received{false} {};
virtual void callbackMsg();
bool receptionAccomplished()
{
return this -> received;
}
T checkReceived()
{
return this -> elementReceived;
}
protected:
bool received;
T elementReceived;
};
class cameraInterface : public Interface<cv::Mat>
{
public:
void callbackMsg(ConstImageStampedPtr &msg);
};
class lidarInterface : public Interface<lidarMsg>
{
public:
void callbackMsg(ConstLaserScanStampedPtr &msg);
};
}
#endif
The source file (communication.cpp) is:
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include "communication.hpp"
#ifndef COMM_CPP_GUARD
#define COMM_CPP_GUARD
namespace comm
{
void cameraInterface::callbackMsg(ConstImageStampedPtr &msg)
{
std::size_t width = msg->image().width();
std::size_t height = msg->image().height();
const char *data = msg->image().data().c_str();
cv::Mat im(int(height), int(width), CV_8UC3, const_cast<char *>(data));
im = im.clone();
cv::cvtColor(im, im, cv::COLOR_RGB2BGR);
this->elementReceived = im;
received = true;
}
void lidarInterface::callbackMsg(ConstLaserScanStampedPtr &msg) {
this->elementReceived.angle_min = float(msg->scan().angle_min());
this->elementReceived.angle_increment = float(msg->scan().angle_step());
this->elementReceived.range_min = float(msg->scan().range_min());
this->elementReceived.range_max = float(msg->scan().range_max());
this->elementReceived.nranges = msg->scan().ranges_size();
this->elementReceived.nintensities = msg->scan().intensities_size();
for (int i = 0; i < this->elementReceived.nranges; i++)
{
if (this->elementReceived.ranges.size() <= i)
{
this->elementReceived.ranges.push_back(std::min(float(msg->scan().ranges(i)), this->elementReceived.range_max));
}
else
{
this->elementReceived.ranges[i] = std::min(float(msg->scan().ranges(i)), this->elementReceived.range_max);
}
}
}
}
#endif
The main file(main.cpp) includes the following header:
#include <gazebo/gazebo_client.hh>
#include <gazebo/msgs/msgs.hh>
#include <gazebo/transport/transport.hh>
#include <opencv2/opencv.hpp>
#include <opencv2/calib3d.hpp>
#include <iostream>
#include <stdlib.h>
#include "src/communication.hpp"
I included the part of the #ifndef /#define /#endif since it is a solution that I found to this kind of problem in other problem. I've been toggling the CMakeLists.txt file but still no solution that could solve this error.
| You can't do this:
virtual void callbackMsg();
You declare this virtual function but never define it, so the vtable of cameraInterface (which derives from Interface<cv::Mat>) refers to an undefined symbol. You have to actually provide a definition for it in the header (template member functions must be defined in the header anyway), or declare it pure virtual. Note also that the derived callbackMsg overloads take a message argument, so they do not override the base function.
|
69,744,386 | 69,744,525 | Single line lookup for nested std::map | Let’s say I have a std::map<int, std::map<int, std::string>>, is there a way to lookup directly for a string if you are given the two keys in a shorter statement?
Some syntax sugar for:
std::map<int, std::map<int, std::string>> nested_map;
nested_map[1][3] = "toto";
int key1 = 1;
int key2 = 3;
std::string val;
auto it1 = nested_map.find(key1);
if (it1 != nested_map.end())
{
auto it2 = it1->second.find(key2);
if (it2 != it1->second.end())
{
val = it2->second;
}
}
Edit: I am only looking for syntax sugar to save typing a bit, because there are a lot of these nested maps in my code.
Edit2: I don’t want to throw on failure.
| Write a recursive variadic template function that would accept the map by reference and the keys as a variadic template argument, and return std::optional of the innermost value type. Or it may return a pointer, with nullptr indicating that the value is not found.
|
69,744,680 | 69,744,788 | How to get type of template of class from its object in C++ | I have a custom class A and want to get the type of template from the code which initialized an object of A class to use it as a variable later. Is that possible?
#include <functional>
template<class T>
class A {
public:
A(){};
T data;
};
int main() {
A<int> a;
// How to get int as type?
// function <void (type)> foo;
return 0;
}
| If you would like the template type to be discoverable by code, you can make a public using to expose it:
template<class T>
class A {
public:
using type = T; // <<<< add this (or something like it)
A(){};
T data;
};
Then in main
int main() { // side note: main() *must* return an int
using AI = A<int>;
AI a;
function <void (AI::type)> foo;
}
[Edited to match your question that changed after this was posted.]
|
69,745,068 | 69,745,489 | Perform same operation on different class members without duplicate code | How do I perform the same operation on different class members without duplicating code?
I have a function which creates an object of type Farm, and then performs some kind of a calculation on its members (in this case, it prints the member variable, but the code I am currently working on is too complex to copy here in its entirety):
#include <iostream>
#include <String>
class Farm
{
public:
int cows = 1;
int chickens = 2;
int mules = 3;
};
using namespace std;
void count_animals()
{
Farm* animal_farm = new Farm;
cout << animal_farm->chickens;
}
int main()
{
string animals_to_count = "count my chickens";
if (animals_to_count == "count my chickens")
count_animals();
if (animals_to_count == "count my cows")
count_animals();
if (animals_to_count == "count my mules")
count_animals();
return 0;
}
"Count my chickens" is hard-coded in main(). However, in the problem I am working on right now, animals_to_count will come from another function as an argument.
Is it possible to print cows/chickens/mules of animal_farm without using n if statements in count_animals, where n is the number of member variables?
To further clarify my problem: what I am trying to do is have 1 if statement in count_animals() which will identify which member of Farm is printed (change ->chickens to ->cows or to ->mules).
Is what I am trying possible? If not, are there other ways to work around this?
| Perhaps a pointer-to-member is what you are looking for?
#include <iostream>
#include <stdexcept>
using namespace std;
class Farm
{
public:
int cows = 1;
int chickens = 2;
int mules = 3;
};
int Farm::* getMemberPtr(int whichMember)
{
switch (whichMember)
{
case 0: return &Farm::chickens;
case 1: return &Farm::cows;
case 2: return &Farm::mules;
}
throw invalid_argument("");
}
void count_animals(int Farm::*member)
{
Farm animal_farm;
cout << animal_farm.*member;
}
int main()
{
int animals_to_count = ...; // 0, 1, 2, etc
int Farm::* member = getMemberPtr(animals_to_count);
count_animals(member);
return 0;
}
|
69,745,132 | 69,745,168 | Why does my dynamic array work without being resized? | I'm working on dynamic arrays for my c++ course, but I'm confused about the behavior of my dynamic arrays. For example, if I run this code:
int* myDynamicArr = new int[3];
for (int i = 0; i < 10; i++)
{
myDynamicArr[i] = i + 1;
cout << myDynamicArr[i] << endl;
}
I would expect it to not work since I only declared it as size 3. But when I run it, it prints out 0-9. Same thing if I do this:
char* myCharArr = new char[2];
strcpy(myCharArr, "ThisIsALongString");
cout << myCharArr;
It prints the full string even though it seems like it should fail. Can anyone explain what I'm doing wrong here? Thanks!
| C++ does not perform bounds checking on arrays. So when you read or write past the bounds of an array you trigger undefined behavior.
With undefined behavior, your program may crash, it may output strange results, or it may (as in your case) appear to work properly.
Just because it could crash doesn't mean it will.
|
69,745,393 | 69,745,684 | Using prefix with string literal split over multiple lines | I have a Unicode string literal, let's say like this.
const char8_t x[] = u8"aaa\nbbb©\nccc\n";
I would like to split it over multiple lines for readability.
Which of the notations below are correct and equivalent to the one above?
const char8_t x[] =
u8"aaa\n"
u8"bbb©\n"
u8"ccc\n";
const char8_t x[] =
u8"aaa\n"
"bbb©\n"
"ccc\n";
const char8_t x[] =
"aaa\n"
u8"bbb©\n"
"ccc\n";
All seem to compile but I'm afraid some of them may be implementation-defined.
| From this cppreference page, it would appear that all your code snippets are equivalent and well-defined:
Concatenation
…
If one of the strings has an encoding prefix and the other doesn't, the one that doesn't will be considered to have the same encoding prefix as the other.
Or, from this Draft C++17 Standard:
5.13.5 String literals [lex.string]
…
13 In translation phase 6 (5.2), adjacent string-literals are concatenated. If both string-literals have the same encoding-prefix, the resulting concatenated string literal has that encoding-prefix. If one string-literal has no encoding-prefix, it is treated as a string-literal of the same encoding-prefix as the other operand. …
|
69,745,880 | 69,746,358 | How can I multithread this code snippet in C++ with Eigen | I'm trying to implement a faster version of the following code fragment:
Eigen::VectorXd dTX = (( (XPSF.array() - x0).square() + (ZPSF.array() - z0).square() ).sqrt() + txShift)*fs/c + t0*fs;
Eigen::VectorXd Zsq = ZPSF.array().square();
Eigen::MatrixXd idxt(XPSF.size(),nc);
for (int i = 0; i < nc; i++) {
idxt.col(i) = ((XPSF.array() - xe(i)).square() + Zsq.array()).sqrt()*fs/c + dTX.array();
idxt.col(i) = (abs(XPSF.array()-xe(i)) <= ZPSF.array()*0.5/fnumber).select(idxt.col(i),-1);
}
The sample array sizes I'm working with right now are:
XPSF: Column Vector of 591*192 coefficients (113,472 total values in the column vector)
ZPSF: Same size as XPSF
xe: RowVector of 192 coefficients
idxt: Matrix of 113,472x192 size
Current runs with gcc and -msse2 and -O3 optimization yield an average time of ~0.08 seconds for the first line of the loop and ~0.03 seconds for the second line of the loop. I know that runtimes are platform dependent, but I believe that this still can be much faster. Commercial software performs the operations I'm trying to do here in ~two orders of magnitude less time. Also, I suspect my code is a bit amateurish right now!
I've tried reading over Eigen documentation to understand how vectorization works, where it is implemented and how much of this code might be "implicitly" parallelized by Eigen, but I've struggled to keep track of the details. I'm also a bit new to C++ in general, but I've seen the documentation and other resources regarding std::thread and have tried to combine it with this code, but without much success.
Any advice would be appreciated.
Update:
Update 2
I would upvote Soleil's answer because it contains helpful information if I had the reputation score for it. However, I should clarify that I would like to first figure out what optimizations I can do without a GPU. I'm convinced (albeit without OpenMP) Eigen's inherent multithreading and vectorization won't speed it up any further (unless there are unnecessary temporaries being generated). How could I use something like std::thread to explicitly parallelize this? I'm struggling to combine both std::thread and Eigen to this end.
| OpenMP
If your CPU has enough cores and threads, usually a simple and quick first step is to invoke OpenMP by adding the pragma:
#pragma omp parallel for
for (int i = 0; i < nc; i++)
and compile with /openmp (cl) or -fopenmp (gcc), or just -ftree-parallelize-loops=n with gcc in order to auto-parallelize the loops.
This will do a map reduce and the map will occur over the number of parallel threads your CPU can handle (8 threads with the 7700HQ).
In general you also can set a clause num_threads(n) where n is the desired number of threads:
#pragma omp parallel num_threads(8)
Where I used 8 since the 7700HQ can handle 8 concurrent threads.
TBB
You also can unroll your loop with TBB:
#pragma unroll
for (int i = 0; i < nc; i++)
threading integrated with eigen
With Eigen you can add
OMP_NUM_THREADS=n ./my_program
omp_set_num_threads(n);
Eigen::setNbThreads(n);
remarks with multithreading with eigen
However, in the FAQ:
currently Eigen parallelizes only general matrix-matrix products (bench), so it doesn't by itself take much advantage of parallel hardware."
In general, the improvement with OpenMP is not always present, so benchmark the release build. Another way is to make sure that you're using vectorized instructions.
Again, from the FAQ/vectorization:
How can I enable vectorization?
You just need to tell your compiler to enable the corresponding
instruction set, and Eigen will then detect it. If it is enabled by
default, then you don't need to do anything. On GCC and clang you can
simply pass -march=native to let the compiler enables all instruction
set that are supported by your CPU.
On the x86 architecture, SSE is not enabled by default by most
compilers. You need to enable SSE2 (or newer) manually. For example,
with GCC, you would pass the -msse2 command-line option.
On the x86-64 architecture, SSE2 is generally enabled by default, but
you can enable AVX and FMA for better performance
On PowerPC, you have to use the following flags: -maltivec
-mabi=altivec, for AltiVec, or -mvsx for VSX-capable systems.
On 32-bit ARM NEON, the following: -mfpu=neon -mfloat-abi=softfp|hard,
depending if you are on a softfp/hardfp system. Most current
distributions are using a hard floating-point ABI, so go for the
latter, or just leave the default and just pass -mfpu=neon.
On 64-bit ARM, SIMD is enabled by default, you don't have to do
anything extra.
On S390X SIMD (ZVector), you have to use a recent gcc (version >5.2.1)
compiler, and add the following flags: -march=z13 -mzvector.
multithreading with cuda
Given the size of your arrays, you want to try to offload to a GPU to reach the microsecond; in that case you would have (typically) as many threads as the number of elements in your array.
For a simple start, if you have an nvidia card, you want to look at cublas, which also allows you to use the tensor registers (fused multiply add, etc) of the last generations, unlike regular kernel.
Since eigen is a header only library, it makes sense that you could use it in a cuda kernel.
You also may implements everything "by hand" (ie., without eigen) with regular kernels. This is a nonsense in terms of engineering, but common practice in an education/university project, in order to understand everything.
multithreading with OneAPI and Intel GPU
Since you have a skylake architecture, you also can unroll your loop on your CPU's GPU with OneAPI:
// Unroll loop as specified by the unroll factor.
#pragma unroll unroll_factor
for (int i = 0; i < nc; i++)
(from the sample).
|
69,746,120 | 69,746,960 | How to make a variable in a struct variable that is not inputted but set based on previous variables' values | I am making a program which inputs fractions and puts them in order. I used struct to define a fraction type. I think I am making a type that initializing 2 variables(the numerator and the denominator of the fraction) and initializing the double type variable called value to a / b in this code:
struct fraction {
int a; // numerator
int b; // denominator
double value = a / b; // floating point value of fraction
bool operator > (const fraction &a) {
fraction ans;
return ans.value > a.value;
}
bool operator < (const fraction &a) {
fraction ans;
return ans.value < a.value;
}
};
int main() {
//---------logging-------
fraction ratio = {1,2};
cout << ratio.value;
//-----------------------
// outputs 0
// other things down here that is not included
}
but apparently, that is not the case because I also need to initialize value. I figured out why, but the problem is, how can I make the variable without initializing it at the creation of the fraction? Thanks!
|
How can I make the variable without initializing it at the creation of the fraction?
One could just write a member function double value() calculating and returning the floating-point value of the fraction, but first there are some issues in the posted code that need to be addressed (and may actually solve OP's problem).
The only in-class member variable initialization shown isn't correct.
double value = a / b; // floating point value of fraction
Beeing both a and b variables of type int, a / b is an integer division, yielding an int result that is only after assigned to a double variable. In OP's example, int(1)/int(2) == int(0).
To produce the expected value, we need to explicitly convert at least one of the terms into a double:
double value = static_cast<double>(a) / b;
Both the comparison operators are wrong.
bool operator > (const fraction &a) {
fraction ans; // This is a LOCAL, UNINITIALIZED variable.
return ans.value > a.value; // The result is meaningless.
}
The following snippet shows a possible implementation where value is calculated and not stored (which isn't necessarily a good idea).
#include <iostream>
#include <numeric>
class fraction
{
int numerator_{};
int denominator_{1};
public:
fraction() = default;
fraction(int num, int den)
: numerator_{num}, denominator_{den}
{
if (auto divisor = std::gcd(num, den); divisor != 1)
{
numerator_ /= divisor;
denominator_ /= divisor;
}
}
bool operator > (fraction const& a) const noexcept {
return value() > a.value();
}
bool operator < (fraction const& a) const noexcept {
return value() < a.value();
}
auto numerator() const noexcept {
return numerator_;
}
auto denominator() const noexcept {
return denominator_;
}
double value() const noexcept {
return static_cast<double>(numerator_) / denominator_;
}
};
|
69,746,186 | 69,746,265 | What is under the hood when a process receives a signal? | I am writing a program that needs to catch the Ctrl-C event. And I learned that I can call the signal function or the sigaction function in signal.h to customize what to do when the process receives a SIGINT signal. But I am also curious what the mechanism is for such a signal listener. In other words, how can a process keep waiting for a specific signal while continuing to execute its code?
| The process doesn't "wait" for the signal. Calling sigaction() tells the operating system to force the process to take the specified action when the process receives the specified signal. When this happens, the process is interrupted and forced to call the registered handler.
|
69,746,542 | 70,724,996 | use of flow operators on objects | I have a problem using the stream insertion operators (operator<<) on objects.
I was actually writing code to make "cout <<" run on my objects and display their values; this is the following code, specifically a function and a class method, located in the same class file:
-function code:
ostream &operator<<( ostream &flux, Duree const& duree)
{
duree.afficher(flux); // <- Change here
return flux;
}
-code of the method:
void Duree::afficher(ostream &flux) const
{
flux << m_heures << "h" << m_minutes << "m" << m_secondes << "s";
}
But the problem is that when I compile I am told that "ostream is not a type name", "ostream was not declared in this code".
I don't understand; I searched the internet and apparently this is the right way to do it. I am using Code::Blocks 20.03.
| Moving the function into the main file makes it compile only because that file already has #include <iostream> and using namespace std;. The real fix is to make ostream visible in the class file itself: add #include <ostream> (or <iostream>) at the top and write the type as std::ostream (or add using std::ostream;).
|
69,747,056 | 69,749,872 | how to have lua call a c++ function that returns multiple values to lua | my code (partial)
c++:
lua_register(L, "GetPosition", lua_GetPosition);
int lua_GetPosition(lua_State* L)
{
Entity e = static_cast<Entity>(lua_tointeger(L, 1));
TransformComponent* trans = TransformComponentPool.GetComponentByEntity(e);
if (trans != nullptr)
{
lua_pushnumber(L, trans->transform->position.x);
lua_pushnumber(L, trans->transform->position.y);
lua_pushnumber(L, trans->transform->position.z);
}
else
{
lua_pushnumber(L, 0);
lua_pushnumber(L,0);
lua_pushnumber(L, 0);
LOG_ERROR("Transform not found");
}
return 1;
}
lua:
local x = 69
local y = 69
local z = 69
x,y,z = GetPosition(e)
print("xyz =",x,y,z)
I expected "xyz = 1.0 1.0 1.0"
I got "xyz = 1.0 nil nil"
What's the right way to do this so lua sees all return values?
| When Lua calls your function it will check its return value to find out how many values it should fetch from the stack. In your case that's 1. How else would Lua know how many of the pushed values you want to return? Since you push three numbers, your function should end with return 3; instead of return 1;.
From Lua 5.4 Reference Manual 4.6 Functions and Types:
In order to communicate properly with Lua, a C function must use the
following protocol, which defines the way parameters and results are
passed: a C function receives its arguments from Lua in its stack in
direct order (the first argument is pushed first). So, when the
function starts, lua_gettop(L) returns the number of arguments
received by the function. The first argument (if any) is at index 1
and its last argument is at index lua_gettop(L). To return values to
Lua, a C function just pushes them onto the stack, in direct order
(the first result is pushed first), and
returns in C the number of
results. Any other value in the stack below the results will be
properly discarded by Lua. Like a Lua function, a C function called by
Lua can also return many results.
As an example, the following function receives a variable number of
numeric arguments and returns their average and their sum:
static int foo (lua_State *L) {
int n = lua_gettop(L); /* number of arguments */
lua_Number sum = 0.0;
int i;
for (i = 1; i <= n; i++) {
if (!lua_isnumber(L, i)) {
lua_pushliteral(L, "incorrect argument");
lua_error(L);
}
sum += lua_tonumber(L, i);
}
lua_pushnumber(L, sum/n); /* first result */
lua_pushnumber(L, sum); /* second result */
return 2; /* number of results */
}
|
69,747,261 | 69,747,433 | Printing out specific length in vector array | I am trying to create a random password from the input text.
The minimum length is 3, but I found out that the first word, which is "The", somehow has size 6.
Only the first word gives me a weird size so far.
Eventually, I want to erase words that are shorter than 3 characters.
I don't know why it returns size 6.
Please advise.
void setMinLength(std::vector<std::string> &words) {
for (int i = 0; i < words.size()-1; i++) {
if (words[i].size() == 6) {
std::cout << words[i] << std::endl;
//words.erase(words.begin() + i);
}
}
}
int main() {
std::ifstream myFile("input.txt");
if (!myFile.is_open()) {
std::cout << "Couldn't open the file.";
return 0;
}
std::vector<std::string> words;
std::string word;
while (myFile >> word) {
words.push_back(word);
}
setMinLength(words);
myFile.close();
return 0;
}
The input.txt file is below.
The Project Gutenberg EBook of Grimms’ Fairy Tales, by The Brothers Grimm This eBook is for the use of anyone anywhere at no cost and with almost no restrictions whatsoever. You may copy it, give it away or
The
Tales,
anyone
almost
re-use
online
Title:
Taylor
Marian
hex editor
| First, your input text file starts with a Byte Order Mark (BOM); its bytes are read as part of the first word and inflate its size. Delete the BOM (save the file without one).
In your setMinLength(std::vector<std::string> &words) function.
void setMinLength(std::vector<std::string> &words) {
for (int i = 0; i < words.size()-1; i++) {
if (words[i].size() == 6) {
std::cout << words[i] << std::endl;
//words.erase(words.begin() + i);
// --i; explain below
}
}
}
Note if you use erase(): after erase(), words[i] is already the next word, and the loop increment then skips one word. Remember to --i after erasing.
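As a sketch (not from the original answer, and assuming "minimum length 3" means keeping words of 3 or more characters), the index bookkeeping can be avoided entirely with the erase-remove idiom:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Remove every word shorter than 3 characters in one pass:
// std::remove_if shifts the kept words to the front, erase() trims the tail.
void setMinLength(std::vector<std::string>& words) {
    words.erase(std::remove_if(words.begin(), words.end(),
                               [](const std::string& w) { return w.size() < 3; }),
                words.end());
}
```

No manual --i is needed because the container is only modified once, after the scan.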
|
69,747,821 | 69,749,110 | Difference between returning a reference and modifying a class directly? | I have looked at the disassembly for the following code and found that the result is often the same (or very similar) for both test functions. I am wondering what the difference is between the two functions and if not, is there any history or reason for this existing?
struct test
{
int x;
test& testfunction1() { x++; return *this; }
void testfunction2() { x++; }
};
edit: with my question being answered so quickly, I am now wondering why you would choose one over the other? It seems that, for the sake of 'freedom', one should always return a reference instead of void.
| The functionality is almost the same; the difference is that the first enables method chaining. This design pattern is called a fluent interface. It is supposed to provide an easy-to-read, flowing interface that often mimics a domain-specific language. Using this pattern results in code that reads nearly like human language.
Widget w;
w.setHeight(1024).setWidth(768).setColor(255,0,0); // easy to follow
Why people don't do it everywhere?
It is a matter of preference, readability, and the group's coding conventions, and it takes more effort to design your API that way. Also, if you don't use the returned value, it can produce a warning or error ("expression result is unused") when compiling C++ with some flags.
A common use of the testfunction1() style is overloading the output operator; you often see code like
std::cout << "The results are: " << SomeUserDefinedStruct << " and "<< SomeUserDefinedStruct2 << std::endl;
The overloaded operator<< returns a reference to std::ostream, which is then chained into the next call.
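A minimal compilable sketch of the pattern (this Widget is a made-up illustration, not a real library type): each setter returns *this by reference so calls chain left to right.

```cpp
// Fluent-interface sketch: every setter returns a reference to the object
// itself, so several setters can be applied in a single statement.
struct Widget {
    int height = 0;
    int width = 0;
    Widget& setHeight(int h) { height = h; return *this; }
    Widget& setWidth(int w)  { width = w;  return *this; }
};
```

With this, `Widget w; w.setHeight(1024).setWidth(768);` sets both members in one chained statement.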
|
69,747,956 | 69,748,200 | Error: there are no arguments to 'static_assert' that depend on a template parameter | When compiling this code, I get an error:
#include <cstdio>
template<size_t Index, typename T, size_t Length>
T& get(T (&arr)[Length]) {
static_assert(Index < Length, "Index out of bounds");
return arr[Index];
}
int main() {
int arr[] = {1, 2, 3, 4, 5};
int value = get<5>(arr);
printf("value = %d\n", value);
}
Error message:
In function 'T& get(T (&)[Length])':
Line 5: error: there are no arguments to 'static_assert' that depend on a template parameter, so
a declaration of 'static_assert' must be available
compilation terminated due to -Wfatal-errors.
Can someone explain how to fix this please?
Running with gcc 4.1.2
flags:
-O -std=c++98 -pedantic-errors -Wfatal-errors -Werror -Wall -Wextra -Wno-missing-field-initializers -Wwrite-strings -Wno-deprecated -Wno-unused -Wno-non-virtual-dtor -Wno-variadic-macros -fmessage-length=0 -ftemplate-depth-128 -fno-merge-constants -fno-nonansi-builtins -fno-gnu-keywords -fno-elide-constructors -fstrict-aliasing -fstack-protector-all -Winvalid-pch
| The static_assert language feature only became available with C++11; your build uses gcc 4.1.2 with -std=c++98, which does not support it. See:
https://en.cppreference.com/w/cpp/language/static_assert
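If you are stuck on a pre-C++11 compiler, a common workaround is a class template that is only defined for true (a sketch; the StaticCheck name is made up, not part of any library):

```cpp
#include <cstddef>

// C++98 emulation of static_assert: StaticCheck<false> is declared but never
// defined, so instantiating it triggers a compile-time error.
template <bool Condition> struct StaticCheck;   // no definition for false
template <> struct StaticCheck<true> {};        // defined only for true

template <std::size_t Index, typename T, std::size_t Length>
T& get(T (&arr)[Length]) {
    StaticCheck<(Index < Length)> indexInBounds; // fails to compile if out of bounds
    (void)indexInBounds;
    return arr[Index];
}
```

With a 5-element array, get<4>(arr) compiles, while get<5>(arr) fails at compile time with an incomplete-type error (the diagnostic is less readable than static_assert's message, which is one reason the feature was added to the language).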
|
69,747,987 | 69,750,337 | gstreamer rtsp tee appsink can't emit signal new-sample | I am using gstreamer to play and process the rtsp stream.
rtspsrc location=rtspt://admin:scut123456@192.168.1.64:554/Streaming/Channels/1 ! tee name=t ! queue ! decodebin ! videoconvert ! autovideosink t. ! queue ! rtph264depay ! h264parse ! appsink name=mysink
and i write in c++ code like this :
#include <gst/gst.h>
void printIt(GList *p) {
if(!p) {
g_print("p null\n");
return ;
}
while(p) {
GstPad *pad = (GstPad*)p->data;
g_print("[%s]", pad->object.name);
p = p->next;
}
g_print("\n");
}
GstFlowReturn new_sample_cb (GstElement * appsink, gpointer udata) {
g_print("new-sample cb\n");
return GST_FLOW_OK;
}
GstFlowReturn new_preroll_cb (GstElement* appsink, gpointer udata) {
g_print("new_preroll_cb cb\n");
return GST_FLOW_OK;
}
int
main (int argc, char *argv[]) {
GstElement *pipeline;
GstBus *bus;
GstMessage *msg;
/* Initialize GStreamer */
gst_init (&argc, &argv);
/* Build the pipeline */
pipeline = gst_parse_launch("rtspsrc location=rtspt://admin:scut123456@192.168.1.64:554/Streaming/Channels/1 ! tee name=t ! queue ! decodebin ! videoconvert ! autovideosink t. ! queue ! rtph264depay ! h264parse ! appsink name=mysink", NULL);
GstElement *appsink = gst_bin_get_by_name(GST_BIN(pipeline), "mysink");
printIt(appsink->pads);
g_signal_connect(appsink, "new-sample", G_CALLBACK(new_sample_cb), pipeline);
g_print("sig conn new-sample\n");
g_signal_connect(appsink, "new-preroll", G_CALLBACK(new_preroll_cb), pipeline);
g_print("sig conn new-preroll\n");
/* Start playing */
gst_element_set_state (pipeline, GST_STATE_PLAYING);
/* Wait until error or EOS */
bus = gst_element_get_bus (pipeline);
msg =
gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
GstMessageType(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
/* Free resources */
if (msg != NULL)
gst_message_unref (msg);
gst_object_unref (bus);
gst_element_set_state (pipeline, GST_STATE_NULL);
gst_object_unref (pipeline);
return 0;
}
When I compile and run it, the video is shown in the autovideosink, but the appsink's new-sample signal is never called back. What should I do if I want to process a frame in the appsink?
thanks.
| By default appsink favors callbacks over signals for performance reasons (though I wouldn't consider your use case a performance problem). For appsink to emit signals, you need to set its emit-signals property to true (it defaults to false), for example: g_object_set(G_OBJECT(appsink), "emit-signals", TRUE, NULL);
P.S. Apart from the above, I think you will need a GMainLoop for the event processing as demonstrated in the GStreamer examples.
|
69,748,358 | 69,754,676 | Convert every column of an Eigen::Matrix to an std::vector? | Lets assume I have the following Eigen::Matrix:
Eigen::MatrixXf mat(3, 4);
mat << 1.1, 2, 3, 50,
2.2, 2, 3, 50,
3.1, 2, 3, 50;
Now how can I convert every column into an std::vector<float>
I tried an adaptation of this solution typecasting Eigen::VectorXd to std::vector:
std::vector<float> vec;
vec.resize(mat.rows());
for(int col=0; col<mat.cols(); col++){
Eigen::MatrixXf::Map(&vec[0], mat.rows());
}
But that throws the following error:
In template: static_assert failed due to requirement 'Map<Eigen::Matrix<float, -1, -1, 0, -1, -1>, 0, Eigen::Stride<0, 0>>::IsVectorAtCompileTime' "YOU_TRIED_CALLING_A_VECTOR_METHOD_ON_A_MATRIX"
What is the right and most efficient solution?
| I think the most elegant solution would be to use Eigen::Map. In your case you would do it like this:
Eigen::MatrixXf mat(3, 4);
mat << 1.1, 2, 3, 50,
2.2, 2, 3, 50,
3.1, 2, 3, 50;
std::vector<float> vec;
vec.resize(mat.rows());
for(int col=0; col<mat.cols(); col++){
    Eigen::Map<Eigen::MatrixXf>(vec.data(), mat.rows(), 1) = mat.col(col);
    // vec now holds the current column; copy or process it here,
    // because the next iteration overwrites it.
}
|
69,748,538 | 69,755,349 | cython use class wrapper pointer | I'm new to cython and maybe i'm missing some base info, so be patient. What i want to do is create a c++ object in python, modify it and return the object's pointer to a c++ function. Basically i have:
// headers.h
class A {
A();
void modifyA()
}
class B {
B();
void useA(A *a);
}
# headers.pxd
cdef extern from "headers.h":
cdef cppclass A:
void modifyA()
cdef cppclass B:
void useA(A *a)
# PyA.pyx
cdef class PyA:
cdef A *pa
def __cinit__(self):
self.pa = new A()
def modifyA(self):
self.pa.modifyA()
Now, as far as i understand, when i instantiate PyA from python code, an A object is created in c++ and a pointer is stored inside the new PyA object. What i want to do is use that pointer like:
# PyB.pyx
cdef class PyB:
cdef B *b
def __cinit__(self):
self.b = new B()
def useA(self, pyA: PyA):
self.b.useA(pyA.pa)
but it gives me "Cannot convert Python object to 'A *'" and i can't understand why... Is there something i'm missing?
| The "Cannot convert Python object to 'Something'" error in Cython generally means Cython is not detecting the type of an object/property, and thus believes it will be a Python object, only available at runtime.
That being said, you have to make sure Cython understands the type. In your particular case, you can choose between:
Having a single .pyx (merging both .pyx files), for cython to be able to directly determine that pyA.pa is of type A*.
Declaring a "cython header file" or .pxd file, which will work much like a .h file in C++ by declaring types of objects, functions and classes.
The second case would look like this:
# PyA.pxd
cdef class PyA:
cdef A *pa
# PyB.pyx
from PyA cimport PyA # import the PyA class from PyA.pxd definition
cdef class PyB:
# [...]
|
69,748,653 | 69,748,721 | Two-Sum Problem using Binary Search Approach | The problem is as follows :-
Given an array of integers numbers and an integer target, return indices of the two numbers such that they add up to target.
Eg:-
Input: vec = [2,7,11,15], target = 9
Output: [0,1]
Output: Because vec[0] + vec[1] == 9, we return [0, 1].
I coded the problem using binary search approach and my main looks like this :-
vector<int>vec = {2,7,11,15};
int flag = 0;
int target = 0,i;
int idx;
vector<int>::iterator it;
for(i=0;i<vec.size();i++)
{
if(binary_search(vec.begin()+i,vec.end(),target-vec[i]))
{
it = lower_bound(vec.begin()+i,vec.end(),target-vec[i]);
idx = it-vec.begin();
if (i!=idx)
{
flag = 1;
break;
}
}
}
if(flag==1)
{
cout<<"Found @"<<idx<<"and "<<i<<endl;
}
else{
cout<<"Not found";
}
It gives correct answer.
The problem is that when I use this approach and return the answer vector (which holds both indices) from the function on LeetCode, it gives me this error :-
Line 1034: Char 9: runtime error: reference binding to null pointer of
type 'int' (stl_vector.h) SUMMARY: UndefinedBehaviorSanitizer:
undefined-behavior
/usr/bin/../lib/gcc/x86_64-linux-gnu/9/../../../../include/c++/9/bits/stl_vector.h:1043:9
PS:- Why almost nobody has posted a binary search approach to this problem ?
|
Why almost nobody has posted a binary search approach to this problem ?
To apply the binary search algorithm, you need to sort the input, which changes the indices of the elements; the original indices are exactly what this problem asks you to return.
You may get the correct result for your sample input because it happens to be sorted already, but inputs will not always come in sorted order. Take care of this.
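To illustrate the point, here is a sketch (not from the original answer) that keeps binary search usable by sorting (value, index) pairs, so the original indices survive the sort:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Sort (value, original index) pairs, then binary-search each element's
// complement among the later entries; the original indices travel with
// the values, so they can still be returned.
std::vector<int> twoSum(const std::vector<int>& nums, int target) {
    std::vector<std::pair<int, int>> v;           // (value, original index)
    for (int i = 0; i < (int)nums.size(); i++)
        v.push_back({nums[i], i});
    std::sort(v.begin(), v.end());
    for (int i = 0; i < (int)v.size(); i++) {
        int need = target - v[i].first;
        // -1 is below any real index, so this finds the first pair with
        // value >= need.
        auto it = std::lower_bound(v.begin() + i + 1, v.end(),
                                   std::make_pair(need, -1));
        if (it != v.end() && it->first == need)
            return { std::min(v[i].second, it->second),
                     std::max(v[i].second, it->second) };
    }
    return {};
}
```

This is O(n log n) overall; the hash-map approach most people post is O(n) on average, which is why the binary-search version is rarely seen.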
|
69,748,663 | 69,748,806 | extract data using c++ and store in txt | I have a text file with the following data format
<create>
<way id="-200341" version="0" timestamp="1970-01-01T00:00:00Z">
<nd ref="-106862"/>
<nd ref="-106343"/>
<nd ref="-107240"/>
<nd ref="-107241"/>
<nd ref="-106863"/>
<nd ref="-106858"/>
<nd ref="-106866"/>
<nd ref="-106263"/>
<nd ref="-106868"/>
<nd ref="-106857"/>
<nd ref="-107242"/>
<nd ref="-106867"/>
<nd ref="-106865"/>
<nd ref="-107243"/>
<nd ref="-107244"/>
<nd ref="-106864"/>
<tag k="shelter" v="yes"/>
<tag k="highway" v="footway"/>
</way>
<way id="-200340" version="0" timestamp="1970-01-01T00:00:00Z">
<nd ref="-106853"/>
<nd ref="-106852"/>
<tag k="shelter" v="yes"/>
<tag k="highway" v="footway"/>
</way>
<way id="-200277" version="0" timestamp="1970-01-01T00:00:00Z">
<nd ref="-106228"/>
<nd ref="8236806130"/>
<tag k="highway" v="footway"/>
</way>
<way id="-200253" version="0" timestamp="1970-01-01T00:00:00Z">
<nd ref="-106766"/>
<nd ref="-106765"/>
<nd ref="-106226"/>
<nd ref="-106769"/>
<nd ref="-106228"/>
<nd ref="-106773"/>
<nd ref="-106230"/>
<nd ref="-106771"/>
<nd ref="-106768"/>
<tag k="highway" v="footway"/>
<tag k="shelter" v="yes"/>
</way>
<way id="-200219" version="0" timestamp="1970-01-01T00:00:00Z">
<nd ref="-107148"/>
<nd ref="-106747"/>
<tag k="shelter" v="yes"/>
<tag k="highway" v="footway"/>
</way>
<way id="-200218" version="0" timestamp="1970-01-01T00:00:00Z">
<nd ref="-106766"/>
<nd ref="-106755"/>
<tag k="shelter" v="yes"/>
<tag k="highway" v="footway"/>
</way>
<way id="-200066" version="0" timestamp="1970-01-01T00:00:00Z">
<nd ref="-106755"/>
<nd ref="-107148"/>
<nd ref="-106760"/>
<nd ref="-106764"/>
<nd ref="-106762"/>
<nd ref="-107115"/>
<nd ref="-106197"/>
<tag k="highway" v="footway"/>
<tag k="shelter" v="yes"/>
</way>
<way id="543558082" version="1" timestamp="2017-11-29T19:30:02Z" uid="0" user="">
<nd ref="1314909074"/>
<nd ref="5254615443"/>
<nd ref="5254615442"/>
<nd ref="5254615441"/>
<nd ref="5254615440"/>
<nd ref="-106516"/>
<nd ref="5254615439"/>
<nd ref="5254615438"/>
<nd ref="5254615437"/>
<nd ref="5254615436"/>
<nd ref="5254615435"/>
<tag k="service" v="driveway"/>
<tag k="highway" v="service"/>
<tag k="oneway" v="yes"/>
</way>
I have a unordered_map std::unordered_map<int, std::string> uMapID_feats{}; declared like this.
Assume that my map already has all of these ref="XXXXX" ID numbers stored, each with a default string value such as "unknownplace".
What I want to do is map each ID (the ref="XXXXXX" values) to the tags listed under it. For example, for the first ID ref=106862, the connected tags are 200341, shelter, highway, footway, yes.
So in the map, the first 3 pairs would look like this:
"106862" , "200341,shelter,highway,footway,yes"
"106343" , "200341,shelter,highway,footway,yes"
"107240" , "200341,shelter,highway,footway,yes"
Some sets have more tags than others and some have only 2 tags, hence why I would like it all to be written to a string and stored in this unordered_map, the tags separated by commas.
How should I go about parsing this data and getting it stored in the unordered_map correctly?
any help is appreciated, thank you!
| Use an XML parsing library like pugixml, or build your own.
There are many libraries that parse XML. Choose the one that fits your needs.
This may help you: What XML parser should I use in C++?
|
69,749,289 | 69,749,341 | Pass nested struct to a function | Hello I have the following code:
struct temperatures_t {
char lowTempSetting = 18;
char highTempSetting = 26;
char currentTemp = 23;
};
struct runningState_t {
struct temperatures_t temperatures;
};
struct runningState_t runningState;
void test(runningState_t *runningStateVar) {
runningStateVar->temperatures->lowTempSetting++;
runningStateVar->temperatures->currentTemp = 10;
printf(runningStateVar.temperatures.lowTempSetting);
}
void main() {
test(&runningState);
}
But I am getting the following error on the runningStateVar->temperatures-> lines:
{
"message": "operator -> or ->* applied to \"temperatures_t\" instead of to a pointer type"
}
I have also tried variations:
&(runningState)->temperatures->lowTempSetting++;
And other variations based off what I saw in this answer: C pass a nested structure within a structure to a function?
But without much luck
| In test, the parameter runningStateVar is a pointer to a structure object, so the arrow operator -> is the correct operator to use to access its members.
But runningStateVar->temperatures is not a pointer, it's an actual structure object. Therefore you must use the dot . to access its members:
runningStateVar->temperatures.lowTempSetting++;
// ^
// Note using dot here
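A self-contained sketch of the corrected access pattern (type names copied from the question; the printf call is also fixed, since printf needs a format string rather than a bare char):

```cpp
#include <cstdio>

struct temperatures_t {
    char lowTempSetting  = 18;
    char highTempSetting = 26;
    char currentTemp     = 23;
};

struct runningState_t {
    temperatures_t temperatures;   // a member object, not a pointer
};

void test(runningState_t* runningStateVar) {
    runningStateVar->temperatures.lowTempSetting++;  // -> for the pointer, . for the member object
    runningStateVar->temperatures.currentTemp = 10;
    std::printf("%d\n", runningStateVar->temperatures.lowTempSetting);
}
```

The rule of thumb: use -> exactly once, where you cross the pointer; everything after that is a plain member object and takes the dot.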
|
69,749,405 | 69,783,912 | I want to know how to set normals with OBJ loader | The relevant code is inside the comment-marked (/////) part below. I created an .obj loader based on the reference site, but the lighting is strange, as shown in the reference image. What is the cause of this?
What I want to know:
How to set the normals from the obj file correctly.
Current status:
The obj file was exported from Blender.
Direct substitution without using f
reference image: https://imgur.com/a/YQEn8R2
Github : https://github.com/Shigurechan/GL/tree/606cc64088f926d9ba31e09bd2573f43c135bbb0
reference site: http://www.opengl-tutorial.org/jp/beginners-tutorials/tutorial-7-model-loading/
OBJ Loader
// ##################################### Load .obj file #####################################
void FrameWork::D3::LoadObj(const char *fileName, ObjFile &attribute)
{
ObjFile obj;
std::vector<int> vertexIndex;
std::vector<int> uvIndex;
std::vector<int> normalIndex;
std::vector<glm::vec3> vertex;
std::vector<glm::vec2> uv;
std::vector<glm::vec3> normal;
FILE *file = fopen(fileName, "r");
if (file == NULL)
{
std::cerr << "Cannot open .OBJ file: " << fileName << std::endl;
assert(0);
}
else
{
while (true)
{
char line[500];
int res = fscanf(file, "%s", line);
if (res == EOF)
{
break;
}
if (strcmp(line, "v") == 0)
{
glm::vec3 vert;
fscanf(file, "%f %f %f\n", &vert.x, &vert.y, &vert.z);
vertex.push_back(vert);
}
else if (strcmp(line, "vt") == 0)
{
glm::vec2 u;
fscanf(file, "%f %f\n", &u.x, &u.y);
uv.push_back(u);
}
else if (strcmp(line, "vn") == 0)
{
glm::vec3 norm;
fscanf(file, "%f %f %f\n", &norm.x, &norm.y, &norm.z);
normal.push_back(norm);
}
else if (strcmp(line, "f") == 0)
{
unsigned int v[3], u[3], n[3];
int matches = fscanf(file, "%d/%d/%d %d/%d/%d %d/%d/%d\n", &v[0], &u[0], &n[0], &v[1], &u[1], &n[1], &v[2], &u[2], &n[2]);
vertexIndex.push_back(v[0]);
vertexIndex.push_back(v[1]);
vertexIndex.push_back(v[2]);
uvIndex.push_back(u[0]);
uvIndex.push_back(u[1]);
uvIndex.push_back(u[2]);
normalIndex.push_back(n[0]);
normalIndex.push_back(n[1]);
normalIndex.push_back(n[2]);
}
}
//////////////////////////////////////////////////////////////////
for (unsigned int i = 0; i < vertexIndex.size(); i++)
{
unsigned int vi = vertexIndex[i];
unsigned int ui = uvIndex[i];
unsigned int ni = normalIndex[i];
glm::vec3 v = vertex[vi - 1];
glm::vec2 u = uv[ui - 1];
glm::vec3 n = normal[ni - 1];
VertexAttribute attrib;
attrib.position[0] = v.x;
attrib.position[1] = v.y;
attrib.position[2] = v.z;
attrib.uv[0] = u.x;
attrib.uv[1] = u.y;
attrib.normal[0] = n.x;
attrib.normal[1] = n.y;
attrib.normal[2] = n.z;
obj.attribute.push_back(attrib);
}
///////////////////////////////////////////////////////////////////
}
attribute = obj;
}
| The cause was that I forgot to specify which side of each polygon is the front (face winding/culling). The obj loader itself is correct.
https://imgur.com/a/6hTwnXP
add code
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
|
69,749,751 | 69,750,324 | cout permutation of three or more string | Suppose I have been given the strings "abc", "def" and "ghi", and I want to generate all possible words formed by picking one character from each string. For example,
for "abc", "def" and "ghi"
we should get
"adg","adh","adi","aeg","aeh","aei","afg","afh","afi",
"bdg","bdh","bdi","beg","beh","bei","bfg","bfh","bfi",
"cdg","cdh","cdi","ceg","ceh","cei","cfg","cfh","cfi"
How do I do it?
My attempt:
vector<string> wordset;
for(int i = 0; i < digits.size(); i++ )
{
wordset.push_back( latters[digits[i] - '0'] );
}
for(int i = 0; i < wordset.size()-2; i++ )
{
string word = wordset[i];
for(int j = 0; j < word.size(); j++ )
{
string combn = "";
combn += word[j];
for(int k = 0; k < wordset[i+1].size(); k++ )
{
combn += wordset[i+1][k];
for(int l = 0; l < wordset[i+2].size(); l++ )
{
combn += wordset[i+2][l];
ans.push_back(combn);
combn = "";
combn += word[j];
combn += wordset[i+1][k];
}
}
}
}
| This is one example where recursion allows simpler code: you just have to combine all characters from the first word with the permutations of the other ones.
In C++ it could be:
#include <vector>
#include <string>
using std::vector;
using std::string;
// using an array will allow to simply process the end of the array
vector<string> permuts(const string* arr, size_t sz) {
vector<string> ret;
switch (sz) {
case 1: // one single word: return its individual chars as strings
for (char c : arr[0]) {
string str(1, c);
ret.push_back(str);
}
case 0: // empty array gives an empty vector
return ret;
default:
;
}
// combine chars from first word with permuts of others
vector<string> tmp = permuts(arr + 1, sz - 1);
for (char c : arr[0]) {
for (const string& str : tmp) {
string str2 = string(1, c) + str;
ret.push_back(str2);
}
}
return ret;
}
// the version using a vector just delegates to the one using an array
vector<string> permuts(const vector<string>& wordset) {
return permuts(wordset.data(), wordset.size());
}
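An iterative alternative (my sketch, not the answer author's) builds the same cross product without recursion:

```cpp
#include <string>
#include <vector>

// Iterative cross product: extend every partial combination with each
// character of the next word.
std::vector<std::string> combine(const std::vector<std::string>& words) {
    std::vector<std::string> result(1, "");      // start with one empty prefix
    for (const std::string& w : words) {
        std::vector<std::string> next;
        next.reserve(result.size() * w.size());
        for (const std::string& prefix : result)
            for (char c : w)
                next.push_back(prefix + c);
        result.swap(next);
    }
    return result;
}
```

For {"abc", "def", "ghi"} this produces the 27 strings "adg" through "cfi" in the same order as in the question. One edge-case difference from the recursive version: an empty input yields a single empty string here rather than an empty vector.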
|
69,749,883 | 69,794,703 | Is an object a storage location or a value in C++? | In C++, is an object a storage location (container) or a value (content)?
With this sentence from [intro.object]/1, one can assume it is a value (bold emphasis mine):
An object occupies a region of storage in its period of construction ([class.cdtor]), throughout its lifetime, and in its period of destruction ([class.cdtor]).
With this sentence from [basic.types.general]/2, one can assume it is a storage location (bold emphasis mine):
For any object (other than a potentially-overlapping subobject) of trivially copyable type T, whether or not the object holds a valid value of type T, the underlying bytes ([intro.memory]) making up the object can be copied into an array of char, unsigned char, or std::byte ([cstddef.syn]).
| An object is an entity that has a type (a set of operations that can be performed on it) and occupies some allocated storage region with the proper size (given by the operator sizeof) and alignment (given by the operator alignof) for the type. The storage region has an address (given by the operator &) and holds a representation which is a sequence of bytes, a subset of which represents a value. An object finally has a lifetime which starts at the end of its initialization, and ends at the start of its finalization or when its storage region is deallocated or reused by another object.
Using an object outside its lifetime is undefined behaviour.
Since a programmer can change the representation of an object at any time by directly accessing its bytes, an object may or may not hold a valid value. Invalid values for basic types are known as trap representations, and using an object containing a trap representation is undefined behaviour.
So an object is neither a storage region nor a value, but a distinct entity that the language simply calls an object.
|
69,749,928 | 69,838,774 | How to print a string from an object? | I tried the below code to write an object to a dat file:
#include<iostream>
#include<fstream>
#include<string>
#include<string.h>
using namespace std;
class Student
{ //data members
int adm;
string name;
public:
Student()
{
adm = 0;
name = "";
}
Student(int a,string n)
{
adm = a;
name = n;
}
Student setData(Student st) //member function
{
cout << "\nEnter admission no. ";
cin >> adm;
cout << "Enter name of student ";
cin.ignore();
getline(cin,name);
st = Student(adm,name);
return st;
}
void showData()
{
cout << "\nAdmission no. : " << adm;
cout << "\nStudent Name : " << name;
}
int retAdmno()
{
return adm;
}
};
/*
* function to write in a binary file.
*/
void demo()
{
ofstream f;
f.open("student.dat",ios::binary);
for(int i = 0;i<4;i++)
{
Student st;
st = st.setData(st);
f.write((char*)&st,sizeof(st));
}
f.close();
ifstream fin;
fin.open("student.dat",ios::binary);
Student st;
while(!fin.eof())
{
fin.read((char*)&st,sizeof(st));
st.showData();
}
}
int main()
{
demo();
return 0;
}
But when I am executing the demo function I am getting some garbage values from the "student.dat"
file. I am creating a database and want to get the records but I am not able to get all the records in the dat file.
Please suggest a solution
| You cannot write complex data types like std::string to a file by dumping their raw bytes. They contain internal state, for example pointers into heap-allocated memory, which you do not see and which is only valid within the current process. So you cannot store such an object in binary form and then reuse it somewhere else; that will never work.
The solution is serialization/deserialization.
This sounds complicated, but it is not at all in your case. It basically means that all the data from your class is converted to plain text and written to a text file.
For reading the data back, it is first read as text and then converted back into your internal data structures.
The standard approach for that is to overload the inserter (<<) and extractor (>>) operators.
See the simple example in your modified code:
#include<iostream>
#include<fstream>
#include<string>
#include<iomanip>
class Student
{ //data members
int adm;
std::string name;
public:
Student()
{
adm = 0;
name = "";
}
Student(int a, std::string n)
{
adm = a;
name = n;
}
Student setData(Student st) //member function
{
std::cout << "\nEnter admission no. ";
std::cin >> adm;
std::cout << "Enter name of student ";
std::getline(std::cin>> std::ws, name);
st = Student(adm, name);
return st;
}
void showData()
{
std::cout << "\nAdmission no. : " << adm;
std::cout << "\nStudent Name : " << name;
}
int retAdmno()
{
return adm;
}
friend std::ostream& operator << (std::ostream& os, const Student& s) {
return os << s.adm << '\n' << s.name << '\n';
}
friend std::istream& operator >> (std::istream& is, Student& s) {
return std::getline(is >> s.adm >> std::ws, s.name);
}
};
/*
* function to write in a binary file.
*/
void demo()
{
std::ofstream f("student.dat");
for (int i = 0; i < 4; i++)
{
Student st;
st = st.setData(st);
f << st;
}
f.close();
std::ifstream fin("student.dat");
Student st;
while (!fin.eof())
{
fin >> st;
st.showData();
}
}
int main()
{
demo();
return 0;
}
|
69,750,318 | 69,750,592 | How is it possible to unite multiple types in the operator function called by std::visit in C++? | I am using std::variant and std::visit to call operator functions. I have a lot of variants (which mostly inherit from one superclass), but most of the operator functions should return the same value. Is there a way to have one operator function which is called every time one of those child classes is visited (much like how, for a normal function call with a child class as argument, the overload taking the superclass is called if no more specific overload exists)?
This may be easier to understand with an example:
I have two superclasses:
struct function_node; struct nary_node;
I also have multiple classes which inherit from those superclasses:
struct addition_node : function_node, nary_node;
struct division_node: function_node, nary_node;
struct cos_node: function_node, nary_node;
I have another class which has those classes as variants:
struct value_node{
var get_variant(){
return std::variant<
addition_node*,
division_node*,
cos_node*
>;
}
};
I finally have a last class (constant_checker), which evaluates the expressions.
double eval(value_node* node){
return std::visit(*this, node->get_variant());
}
In this last class I am currently having multiple operator functions of the form:
double operator()(division_node* node){
return 0;
}
This works fine, but I actually have multiple dozen of those children-nodes. Since the operator function should all return the same value, I want a single operator function for this, e.g.
double operator()(function_node* node){
return 0;
}
I have tried it in this exact way, but I receive the error
3>C:\...\include\variant(1644): error C2893: Failed to specialize function template 'unknown-type std::_C_invoke(_Callable &&,_Types &&...) noexcept(<expr>)'
3>C:\...\include\variant(1644): note: With the following template arguments:
3>C:\...\include\variant(1644): note: '_Callable=ale::util::constant_checker &'
3>C:\...\include\variant(1644): note: '_Types={ale::minus_node *}'
3>C:\...\include\variant(1656): error C2955: 'std::_All_same': use of class template requires template argument list
This error will go away, if I insert the operator function for this exact node (minus_node in this case) and then occurr again for the other nodes, so appearently the general oeprator function is not called.
Is there any solution to this or do I have to keep every operator function?
| Just use template operator():
template<class Node>
double operator()(Node* node) {
if constexpr (std::is_same_v<Node, addition_node>) {
// ...
} else if constexpr (std::is_same_v<Node, division_node>) {
// ...
}
}
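Putting it together, a self-contained sketch (these node types are hypothetical stand-ins for the question's classes, and the if-constexpr branches are dropped since every node returns the same value here):

```cpp
#include <variant>

// Hypothetical stand-ins for the question's node classes.
struct addition_node {};
struct division_node {};

struct constant_checker {
    // One templated operator() matches every pointer alternative in the
    // variant, so no per-node overload is needed.
    template <class Node>
    double operator()(Node*) const { return 0.0; }
};

double eval(const std::variant<addition_node*, division_node*>& v) {
    return std::visit(constant_checker{}, v);
}
```

This compiles with C++17 or later; std::visit instantiates the template once per variant alternative.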
|
69,750,787 | 69,750,993 | command to kill/stop the program if it runs more than a certain time limit | If I have C++ code with an infinite loop inside, I want a command that will kill the execution after a certain time.
so i came up with something like this-
g++ -std=c++20 -DLOCAL_PROJECT solution.cpp -o solution.exe & solution.exe & timeout /t 0 & taskkill /im solution.exe /f
But the problem with this was that it executes the program first, so, due to the infinite loop, it never reaches the timeout and taskkill part.
Does anybody have any solution to it or other alternatives instead of timeout?
I am using windows 10 and my compiler is gnu 11.2.0
Also, in case there is no TLE, I don't want taskkill to show this error:
ERROR: The process "solution.exe" not found.
| Your main loop could exit after a certain time limit, if you're confident it is called regularly enough.
#include <chrono>
using namespace std::chrono_literals;
using Clock = std::chrono::system_clock;
int main()
{
auto timeLimit = Clock::now() + 1s;
while (Clock::now() < timeLimit) {
//...
}
}
Alternatively you could launch a thread in your main throwing an exception after a certain delay:
#include <chrono>
#include <thread>
using namespace std::chrono_literals;
struct TimeOutException {};
int main()
{
std::thread([]{
std::this_thread::sleep_for(1s);
std::cerr << "TLE" << std::endl;
throw TimeOutException{};
}).detach();
//...
}
terminate called after throwing an instance of 'TimeOutException'
|
69,751,041 | 69,754,246 | Is there a data structure like a C++ std set which also quickly returns the number of elements in a range? | In a C++ std::set (often implemented using red-black binary search trees), the elements are automatically sorted, and key lookups and deletions in arbitrary positions take time O(log n) [amortised, i.e. ignoring reallocations when the size gets too big for the current capacity].
In a sorted C++ std::vector, lookups are also fast (actually probably a bit faster than std::set), but insertions are slow (since maintaining sortedness takes time O(n)).
However, sorted C++ std::vectors have another property: they can find the number of elements in a range quickly (in time O(log n)).
i.e., a sorted C++ std::vector can quickly answer: how many elements lie between given x,y?
std::set can quickly find iterators to the start and end of the range, but gives no clue how many elements are within.
So, is there a data structure that allows all the speed of a C++ std::set (fast lookups and deletions), but also allows fast computation of the number of elements in a given range?
(By fast, I mean time O(log n), or maybe a polynomial in log n, or maybe even sqrt(n). Just as long as it's faster than O(n), since O(n) is almost the same as the trivial O(n log n) to search through everything).
(If not possible, even an estimate of the number to within a fixed factor would be useful. For integers a trivial upper bound is y-x+1, but how to get a lower bound? For arbitrary objects with an ordering there's no such estimate).
EDIT: I have just seen the
related question, which essentially asks whether one can compute the number of preceding elements. (Sorry, my fault for not seeing it before). This is clearly trivially equivalent to this question (to get the number in a range, just compute the start/end elements and subtract, etc.)
However, that question also allows the data to be computed once and then be fixed, unlike here, so that question (and the sorted vector answer) isn't actually a duplicate of this one.
| The data structure you're looking for is an Order Statistic Tree.
It's typically implemented as a binary search tree in which each node additionally stores the size of its subtree.
Unfortunately, I'm pretty sure the STL doesn't provide one.
|
69,751,073 | 70,075,880 | PJSIP Received Remote Sip Header | Let's assume we have two phones, Phone1 & Phone2, and they both have a custom SIP header.
[Phone1] ----calling----> [Phone2] (This is onIncomingCallState for Phone2, and it can read Phone1's header)
[Phone1] <----answer---- [Phone2] (This is the answer from Phone2, and it sends its header with its CallOpParam)
[Phone1] <----OnCallState----> [Phone2] (This is onCallState for both; Phone2 has Phone1's header, and now Phone1 needs to get Phone2's header.)
I'm writing the code at the pjsua2 level with C++. In the log I can see that Phone1 has access to the value of the header, and when I sniff with Wireshark I can see it as well. But how can I handle it at the pjsua2 level? Is there a callback or something else?
| Actually, it is already implemented at the pjsua2 level:
virtual void onCallState(OnCallStateParam &prm)
{
    // ...
    prm.e.body.tsxState.src.rdata.wholeMsg; // the whole received SIP message, headers included: this is what I wanted
    // ...
}
|
69,751,077 | 69,755,405 | UDP clients pool sending but not receiving | I am creating a UDP client pool. The servers will be some other applications running on different computers, and they are supposed to be alive from the beginning. Using a configurable file (not important to the example, so not included), one to several clients are created that connect to those servers (a 1-to-1 relation) in a bidirectional way, sending and receiving.
Sending can be sync because it uses small messages and blocking there is not a problem, but receiving must be async, because the answer can arrive much later after sending.
In my test with only one socket, it is able to send, but it is not receiving anything at all.
Q1: Where is the problem and how to fix it?
Q2: I also wonder if the use of iterators from std::vector in the async calls can be problematic when new connections are pushed into the vector, due to its rearrangement in memory. Could this be a problem?
Q3: I really do not understand why in all examples the sender and receiver endpoints (endpoint1 and endpoint2 in the example struct Socket) are different; couldn't they be the same?
My code is next:
#include <iostream>
#include <boost/array.hpp>
#include <boost/asio.hpp>
using boost::asio::ip::udp;
class Pool
{
struct Socket {
std::string id;
udp::socket socket;
udp::endpoint endpoint1;
udp::endpoint endpoint2;
enum { max_length = 1024 };
std::array<char, max_length> data;
};
public:
void create(const std::string& id, const std::string& host, const std::string& port)
{
udp::resolver resolver(io_context);
sockets.emplace_back(Socket{ id, udp::socket{io_context, udp::v4()}, *resolver.resolve(udp::v4(), host, port).begin() });
receive(id);
}
void send(const std::string& id, const std::string& msg)
{
auto it = std::find_if(sockets.begin(), sockets.end(), [&](auto& socket) { return id == socket.id; });
if (it == sockets.end()) return;
it->data = std::array<char, Socket::max_length>{ 'h', 'e', 'l', 'l', 'o' };
auto bytes = it->socket.send_to(boost::asio::buffer(it->data, 5), it->endpoint1);
}
void receive(const std::string& id)
{
auto it = std::find_if(sockets.begin(), sockets.end(), [&](auto& socket) { return id == socket.id; });
if (it == sockets.end()) return;
it->socket.async_receive_from(
boost::asio::buffer(it->data, Socket::max_length),
it->endpoint2,
[this, id](boost::system::error_code error, std::size_t bytes) {
if (!error && bytes)
bool ok = true;//Call to whatever function
receive(id);
}
);
}
void poll()
{
io_context.poll();
}
private:
boost::asio::io_context io_context;
std::vector<Socket> sockets;
};
int main()
{
Pool clients;
clients.create("ID", "localhost", "55000");
while (true) {
clients.poll();
clients.send("ID", "x");
Sleep(5000);
}
}
|
Q1: Where is the problem and how to fix it?
You don't really bind to any port, and then you have multiple sockets all receiving unbound udp packets. Likely they're simply competing and something gets lost in the confusion.
Q2: can std::vector be problematic
Yes. Use a std::deque (stable references as long as you only push/pop at either end; note that iterators are still invalidated, but stored references survive). Otherwise, consider a std::list or other node-based container.
In your case map<id, socket> seems more intuitive.
Actually, map<endpoint, peer> would be a lot more intuitive. Or... you could do without the peers entirely.
Q3: I really do not understand why in all examples the sender and receiver endpoints (endpoint1 and endpoint2 in the example struct Socket) are different; couldn't they be the same?
Yeah, they could be "the same" if you don't care about overwriting the original endpoint you had sent to.
Here's my simplified take. As others have said, it's not possible/useful to have many UDP sockets "listening" on the same endpoint. That is, provided that you even bound to an endpoint.
So my sample uses a single _socket with local endpoint :8765.
It can connect to many client endpoints - I chose to replace the id string with the endpoint itself for simplicity. Feel free to add a map<string, endpoint> for some translation.
See it Live On Coliru
#include <boost/asio.hpp>
#include <iomanip>
#include <iostream>
#include <set>
using boost::asio::ip::udp;
using namespace std::chrono_literals;
class Pool {
public:
using Message = std::array<char, 1024>;
using Peers = std::set<udp::endpoint>;
using Id = udp::endpoint;
Pool() { receive_loop(); }
Id create(const std::string& host, const std::string& port)
{
auto ep = *udp::resolver(_io).resolve(udp::v4(), host, port).begin();
/*auto [it,ok] =*/_peers.emplace(ep);
return ep;
}
void send(Id id, const std::string& msg)
{
/*auto bytes =*/
_socket.send_to(boost::asio::buffer(msg), id);
}
void receive_loop()
{
_socket.async_receive_from(
boost::asio::buffer(_incoming), _incoming_ep,
[this](boost::system::error_code error, std::size_t bytes) {
if (!error && bytes)
{
if (_peers.contains(_incoming_ep)) {
std::cout << "Received: "
<< std::quoted(std::string_view(
_incoming.data(), bytes))
<< " from " << _incoming_ep << "\n";
} else {
std::cout << "Ignoring message from unknown peer "
<< _incoming_ep << "\n";
}
}
receive_loop();
});
}
void poll() { _io.poll(); }
private:
boost::asio::io_context _io;
udp::socket _socket{_io, udp::endpoint{udp::v4(), 8765}};
Message _incoming;
udp::endpoint _incoming_ep;
Peers _peers;
};
int main(int argc, char** argv) {
Pool pool;
std::vector<Pool::Id> peers;
for (auto port : std::vector(argv + 1, argv + argc)) {
peers.push_back(pool.create("localhost", port));
}
int message_number = 0;
while (peers.size()) {
pool.poll();
auto id = peers.at(rand() % peers.size());
pool.send(id, "Message #" + std::to_string(++message_number) + "\n");
std::this_thread::sleep_for(1s);
}
}
Live on my machine with some remotes simulated like
sort -R /etc/dictionaries-common/words | while read word; do sleep 5; echo "$word"; done | netcat -u -l -p 8787 -w 1000
Also sending bogus messages from an "other" endpoint to simulate stray/unknown messages.
|
69,751,176 | 69,751,233 | Why is copy assignment possible, if a class has only a (templated) move assignment operator? | I stumbled over code today that I don't understand. Please consider the following example:
#include <iostream>
#include <string>
class A
{
public:
template <class Type>
Type& operator=(Type&& theOther)
{
text = std::forward<Type>(theOther).text;
return *this;
}
private:
std::string text;
};
class B
{
public:
B& operator=(B&& theOther)
{
text = std::forward<B>(theOther).text;
return *this;
}
private:
std::string text;
};
int main()
{
A a1;
A a2;
a2 = a1;
B b1;
B b2;
b2 = b1;
return 0;
}
When compiling, MinGW-w64/g++ 10.2 states:
..\src\Main.cpp: In function 'int main()':
..\src\Main.cpp:41:7: error: use of deleted function 'B& B::operator=(const B&)'
41 | b2 = b1;
| ^~
..\src\Main.cpp:19:7: note: 'B& B::operator=(const B&)' is implicitly declared as deleted because 'B' declares a move constructor or move assignment operator
19 | class B
| ^
mingw32-make: *** [Makefile:419: Main.o] Error 1
I fully understand the error message. But I don't understand why I don't get the same message with class A. Isn't the templated move assignment operator also a move assignment operator? Why then is the copy assignment operator not deleted? Is this well-written code?
|
Isn't the templated move assignment operator also a move assignment operator?
No, it is not considered a move assignment operator.
(emphasis mine)
A move assignment operator of class T is a non-template non-static member function with the name operator= that takes exactly one parameter of type T&&, const T&&, volatile T&&, or const volatile T&&.
As the effect, A still has the implicitly-declared copy/move assignment operator.
BTW: Your template assignment operator takes a forwarding reference, so it can accept both lvalues and rvalues. In a2 = a1;, it wins against the generated copy assignment operator in overload resolution and gets called.
|
69,752,007 | 69,752,436 | finding size of char array in C++ | I am getting a char array from the user and trying to find its size, but somehow it is not working.
My code looks like this:
int main()
{
    char str[] = {};
cout << "Enter a characters ";
cin >> str;
int arrSize = sizeof(str);
cout << arrSize;
return 0;
}
When I define array like code below, it will work:
int main()
{
    char str[] = {"1234"};
int arrSize = sizeof(str);
cout << arrSize;
return 0;
}
I am not used to C++, please someone help me.
| Arrays in C++ have a fixed size. After creating an empty array of chars with char str[] = {}, you cannot fill it with an arbitrary number of characters. The size of a C-style array is determined at compile time, and the sizeof() operator is evaluated at compile time as well, so it reports the array size, not the string length. If for some reason you really have to use a C-style string (an array of chars), first allocate enough space in the array and then use the strlen() function to determine the string length.
#include <iostream>
#include <cstring>
using namespace std;
int main()
{
char str[256];
cin >> str;
int arrSize = strlen(str);
cout << arrSize;
}
However since you are working with C++ (not C), it would be better to use std::string in your case.
#include <iostream>
using namespace std;
int main()
{
string str;
cin >> str;
int arrSize = str.size();
cout << arrSize;
}
Btw, you don't need the return 0; at the end of main in C++; it is implicit.
|
69,752,545 | 69,752,802 | The fastest way to swap the two lowest bits in an unsigned int in C++ | Assume that I have:
unsigned int x = 883621;
which in binary is :
00000000000011010111101110100101
I need the fastest way to swap the two lowest bits:
00000000000011010111101110100110
Note: To clarify: If x is 7 (0b111), the output should be still 7.
| If you have a few bytes of memory to spare, I would start with a lookup table:
constexpr unsigned int table[]={0b00,0b10,0b01,0b11};
unsigned int func(unsigned int x){
auto y = (x & (~0b11)) |( table[x&0b11]);
return y;
}
Quickbench -O3 of all the answers so far.
Quickbench -Ofast of all the answers so far.
(Plus my ifelse naive idea.)
[Feel free to add yourself and edit my answer].
Please do correct me if you believe the benchmark is incorrect; I am not an expert in reading assembly. Hopefully the volatile x prevented caching the result between loops.
|
69,753,109 | 69,753,221 | Why is it legal to compare scoped enumerations? | Although scoped enumerations (enum class) cannot be implicitly converted to integral types, I can still compare them with < (on GCC 10.3).
#include <algorithm>
#include <iostream>
enum class Colours {
Red = 0,
Green = 1,
Blue = 2
};
int main() {
std::cout << (std::min(Colours::Blue, Colours::Red) < Colours::Green) << std::endl;
return 0;
}
Why is this standard behaviour (if it is)?
Could you give me a reference to cppreference.com or c++ standard?
| This is described in comparison operators
Arithmetic comparison operators
If the operands have arithmetic or enumeration type (scoped or unscoped), usual arithmetic conversions are performed on both operands following the rules for arithmetic operators. The values are compared after conversions:
So in addition to arithmetic types (including integral types), scoped and unscoped enum types are explicitly mentioned.
|
69,753,241 | 69,753,501 | Initialization of structs in C++ | I stumbled upon this weird struct implementation (from a big project) and I wanted to know the difference between it and the normal one, and why it is even implemented this way:
struct Sabc{
Sabc()
{
A = 0;
B = 0.0f;
}
int A;
float B;
};
Why not just :
struct Sabc{
Sabc()
{
int A = 0;
float B = 0.0f;
}
};
Or why not even this way:
struct Sabc{
int A = 0;
float B = 0.0f;
};
| This declares a struct with two members, defines the default constructor explicitly, and initializes both members to 0 and 0.0f in the constructor body:
struct Sabc{
    Sabc()
    {
        A = 0;
        B = 0.0f;
    }
    int A;
    float B;
};
This declares a struct with no members; its explicitly defined default constructor declares and initializes two local variables, which go out of scope and are destroyed when the constructor returns:
struct Sabc{
    Sabc()
    {
        int A = 0;
        float B = 0.0f;
    }
};
This declares a struct with two members initialized to 0 and 0.0f via default member initializers; this form only compiles as C++ (since C++11), not as C:
struct Sabc{
    int A = 0;
    float B = 0.0f;
};
|
69,754,300 | 69,754,571 | C++ read specific range of line from file | I have the following content in a file:
A(3#John Brook)
A(2#Allies Frank)
A(1#Lucas Feider)
I want to read each line piecemeal, in order: for example A, then 3, then John Brook. Everything is fine up to 3, but how can I read John Brook as a string, without the "#" and ")"?
I have a function; you can have a look at my code:
void readFile()
{
ifstream read;
char process;
char index;
string data;
read.open("datas.txt");
while(true)
{
read.get(process);
read.get(index);
// Here, I need to read "John Brook" for first line.
// "Allies Frank" for second line.
// "Lucas Feider" for third line.
}
read.close();
}
| First organize your data into some structure.
struct Data {
char process;
char index;
std::string data;
};
Then implement a function which is able to read a single item. Read the separators into temporary variables and then check whether they contain the proper values.
Here is an example assuming each item is in single line.
std::istream& operator>>(std::istream& in, Data& d) {
std::string l;
if (std::getline(in, l)) {
std::istringstream in_line{l};
char openParan;
char separator;
if (!std::getline(
in_line >> d.process >> openParan >> d.index >> separator,
d.data, ')') ||
openParan != '(' || separator != '#') {
in.setstate(std::ios::failbit);
}
}
return in;
}
After that, the rest is quick and simple.
https://godbolt.org/z/aGYvPeWfW
|
69,754,757 | 69,754,890 | Reduce number of template parameters | I would like to store some objects I know at compile-time in a class, and keep them constexpr, in order to proceed at compile-time. However, the way I'm storing these values in a struct seems unsatisfactory:
template <class T1, T1 _x1, class T2, T2 _x2>
struct A
{
constexpr static T1 x1 = _x1;
    constexpr static T2 x2 = _x2;
};
While the code above achieves my goal, it seems unnecessarily complicated to have to provide both type and value explicitly in order to store a constexpr value in a templated class.
Is there a better/more elegant way of achieving this? In particular, one where I do not have to deduce the type again first would be desirable.
| In C++17 you can use auto template parameters:
template <auto _x1, auto _x2>
struct A
{
// Use _x1 and _x2 directly
};
|
69,754,776 | 69,755,215 | Do I always have to use a unique_ptr to express ownership? | Let's say I have a class A that owns an object of class B.
A is responsible for creating and deleting this object of B.
The ownership must not be transferred to another class.
The object of B will never be reinitialized after an object of A was created.
Normally, as far as I know, in modern C++ we would use a unique_ptr to express that A is the owner of this object / reference:
// variant 1 (unique pointer)
class A {
public:
A(int param) : b(std::make_unique<B>(param)) {}
// give out a raw pointer, so that others can access and change the object
// (without transferring ownership)
B* getB() {
return b.get();
}
private:
std::unique_ptr<B> b;
};
Someone suggested that I also may use this approach instead:
// variant 2 (no pointer)
class A {
public:
A(int param) : b(B(param)) {}
// give out a reference, so that others can access and change the object
// (without transferring ownership)
B& getB() {
return b;
}
private:
B b;
};
As far as I understand, the main differences are that in variant 2 the memory is allocated contiguously within the A object, while in variant 1 the memory for the B object can be allocated anywhere and the pointer must be dereferenced first in order to find it.
Also of course, it makes a difference if users of A's public interface can work with a B* or a B&.
But in both cases, I am sure that ownership stays within A as I need it.
Do I always have to use variant 1 (unique_ptrs) to express ownership?
What are reasons to use variant 2 over variant 1?
| Ownership can be expressed in different ways.
Your B b of variant 2 is the simplest form of ownership. The instance of class A exclusively owns the object stored in b and the ownership cannot be transferred to another object or (member) variable.
std::unique_ptr<B> b expresses an unique - but transferable - ownership over the object managed by std::unique_ptr<B>.
But std::optional<B>, std::vector<B>, … or std::variant<B,C> also express ownership.
Also of course, it makes a difference if users of A's public interface can work with a B* or a B&. But in both cases, I am sure that ownership stays within A as I need it.
You can always create a member function that returns B*, no matter whether the member is B b or std::unique_ptr<B> b (the same holds for std::optional<B>, std::vector<B>, … std::variant<B,C>).
Interestingly this code:
Variant 1
class A {
public:
A(int param) : b(B(param)) {}
// give out a reference, so that others can access and change the object
B& getBRef() {
return b;
}
B* getB() {
return &b;
}
private:
B b;
};
can be less problematic then:
Variant 2
class A {
public:
A(int param) : b(std::make_unique<B>(param)) {}
// give out a raw pointer, so that others can access and change the object
// (without transferring ownership)
B* getB() {
return b.get();
}
private:
std::unique_ptr<B> b;
};
For the Variant 2 case, a call to another member function of A could potentially invalidate the pointer (if that function e.g. assigned another object to the unique_ptr), while for Variant 1 the validity of the returned pointer (or reference) is tied to the lifetime of the A instance.
In any case, you then have to treat B * as non owning raw pointer, and make clear in the documentation, how long that raw pointer is valid.
Which kind of ownership you choose depends on the actual use case.
Most of the time you will try to stay with the simplest ownership B b. If that object should be optional you will use std::optional<B>.
If the implementation requires it, e.g. if you plan to use PImpl, if the data structure prevents the use of B b (as in tree- or graph-like structures), or if the ownership has to be transferable, you might want to use std::unique_ptr<B>.
|
69,755,565 | 69,755,811 | What causes this vector subscript out of range Error? | I am currently mapping a Graph to a Minesweeper-like grid, where every Block represents a node.
Here is my Graph class:
class Graph : public sf::Drawable
{
public:
Graph(uint32_t numNodesWidth, uint32_t numNodesHeight);
[[nodiscard]] std::vector<Node> & operator[](std::size_t i)
{ return data[i]; }
[[nodiscard]] sf::Vector2u dimension() const
{ return {static_cast<uint32_t>(data.size()),
static_cast<uint32_t>(data[0].size())};}
...
...
private:
std::vector<std::vector<Node>> data;
};
here is the implementation of the constructor:
Graph::Graph(uint32_t numNodesWidth, uint32_t numNodesHeight)
{
data.resize(numNodesHeight);
for(auto & row : data)
{
row.resize(numNodesWidth);
}
}
Somewhere in another class I read mouse coordinates and convert them to "Graph Coordinates":
sf::Vector2u translatedCoords = toGraphCoords(sf::Mouse::getPosition(window), nodeSize_);
bool inBounds = checkGraphBounds(translatedCoords, graph.dimension());
Here are the helper functions:
sf::Vector2u toGraphCoords(sf::Vector2i mouseCoord, sf::Vector2f nodeSize)
{
return {static_cast<uint32_t>(mouseCoord.y / nodeSize.y),
static_cast<uint32_t>(mouseCoord.x / nodeSize.x)};
}
bool checkGraphBounds(sf::Vector2u mouseCoord, sf::Vector2u bounds)
{
return mouseCoord.x >= 0 &&
mouseCoord.y >= 0 &&
mouseCoord.x < bounds.x &&
mouseCoord.y < bounds.y ;
}
Somehow I get the vector subscript out of range 1655 error when I try to use these newly checked coordinates, which is strange; can someone explain to me what I am doing wrong? The error always shows up when I hover beyond the bounds of the interactive area, slightly before the first or after the last Node.
Thanks in advance.
| There is no guarantee that bounds <= num_nodes * node_size. This is especially risky since there are integer divisions involved, which means that you are at the mercy of rounding.
You could shuffle code around until such a guarantee is present, but there's a better way.
If the checkGraphBounds() function operated on the same math that the grid does, you could be sure that the result would be consistent with the grid, no matter how that relates to the bounds.
The ideal way to do so would be to actually use toGraphCoords() as part of it:
bool checkGraphBounds(sf::Vector2u mouseCoord, const Graph& graph,
sf::Vector2f nodeSize)
{
auto coord = toGraphCoords(mouseCoord, nodeSize);
return coord.x >= 0 &&
coord.y >= 0 &&
coord.x < graph.dimensions().x &&
           coord.y < graph.dimensions().y;
}
With this, you can formally guarantee that, should a mouseCoord pass that test, static_cast<uint32_t>(mouseCoord.x / nodeSize.x) will for certain yield a value smaller than graph.dimensions().x.
Personally, I would combine both functions as a method of Graph like so:
class Graph : public sf::Drawable {
// Make nodeSize a member of the Graph
sf::Vector2f nodeSize_;
// This is one of the cases where caching an inferable value is worth it.
sf::Vector2u dimensions_;
public:
std::optional<sf::Vector2u> toGraphCoords(sf::Vector2i mouseCoord) {
sf::Vector2u coord{
static_cast<uint32_t>(mouseCoord.y / nodeSize_.y),
static_cast<uint32_t>(mouseCoord.x / nodeSize_.x)};
// No need to compare against 0, we are dealing with unsigned ints
if(coord.x < dimensions_.x &&
coord.y < dimensions_.y ) {
return coord;
}
return std::nullopt;
}
// ...
};
Usage:
void on_click(sf::Vector2i mouse_loc) {
auto maybe_graph_coord = the_graph.toGraphCoords(mouse_loc);
if(maybe_graph_coord) {
sf::Vector2u graph_coord = *maybe_graph_coord;
// ...
}
}
|
69,755,824 | 69,756,155 | Is there a way to have the same #define statement in different files that are included into the same file | So, I have a file structure like this:
FileA
FileB
FileC
FileA includes FileB and FileC
FileB has:
#define image(i, j, w) (image[ ((i)*(w)) + (j) ])
and FileC has:
#define image(i, j, h) (image[ ((j)*(h)) + (i) ])
on compilation i get:
warning: "image" redefined
note: this is the location of the previous definition ...
Does this warning mean that the compiler changes the definition from the file where it initially found it?
Is there any way to avoid this warning while keeping these two defines, each applying its own definition in its respective file?
Thank you in advance :)
|
Does this warning mean that the compiler changes the definition from the file where it initially found it?
The program is ill-formed. The language doesn't specify what happens in this case. If the compiler accepts an ill-formed program, then you must read the documentation of the compiler to find out what they do in such case.
Note that the program might not even compile with other compilers.
Is there any way to avoid this warning while keeping these two defines, each applying its own definition in its respective file?
Technically, you could use hack like this without touching either header:
#include "FileB"
#undef image
#include "FileC"
But a good solution - if you can modify the headers - is to not use macros. Edit the headers to get rid of them. Use functions instead, and declare them in distinct namespaces so that their names don't conflict.
Some rules of thumb:
Don't use unnecessary macros. Functions and variables are superior to macros.
Follow the common convention of using only upper case for macro names, if you absolutely need to use macros. It is important to make sure that macro names don't mix with non-macros because macros don't respect namespaces nor scopes.
If you need a macro within a single header, then undefine it immediately when it's no longer needed instead of leaking it into other headers.
Don't use names without namespaces. That will lead to name conflicts. Macros don't respect C++ namespaces, but you can instead prefix their names. For example, you could have FILE_B_IMAGE and FILE_C_IMAGE (or something more descriptive based on the concrete context).
They are not functionally equivalent, one can be seen as a row-wise iteration and the other a column-wise
This seems like a good argument for renaming the functions (or the macros, if you for some reason cannot replace them). Call one row_wise and the other column_wise or something along those lines. Use descriptive names!
|
69,756,133 | 69,756,601 | Why do I get integer output when I try to put the "glfwSetErrorCallback" function in cout, when it returns a non-integer value? | I'm learning GLFW 3.3, and, as said in the function's description:
Returns the previously set callback, or NULL if no callback was set.
Source: [https://www.glfw.org/docs/3.3/group__init.html#gaff45816610d53f0b83656092a4034f40]
Now what I'm trying to do is understand what kind of value it is. I tried to interpret it as a pointer to a function because, as said about the GLFWerrorfun type in the documentation:
typedef void(* GLFWerrorfun) (int, const char *)
This is the function pointer type for error callbacks.
Source: [https://www.glfw.org/docs/3.3/group__init.html#ga6b8a2639706d5c409fc1287e8f55e928]
Here's my code:
#define GLFW_DLL
#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include <iostream>
void handleErrorCallback (int code, const char * description)
{
// Code
}
int main (int argc, char * argv[])
{
std::cout << glfwSetErrorCallback (handleErrorCallback) << std::endl;
std::cout << glfwSetErrorCallback (NULL) << std::endl;
return 0;
}
The first value written to the console is 0.
I take it that NULL was returned because no callback had been set yet.
The second value written to the console is 1, and it confuses me: how was the GLFWerrorfun return value converted to an integer? All I see is that it always prints '1' when the callback function was set and '0' when it wasn't.
| glfwSetErrorCallback returns a function pointer. There is no operator<< overload for function pointers, so the pointer is implicitly converted to bool: std::cout prints 1 for a non-null pointer and 0 for a null one.
I think this post will help: How to print function pointers with cout?
|
69,756,347 | 69,756,499 | How does the process of member lookup occur in C++? | I'm using document number 4901, C++ Draft ISO 2021, specifically 6.5.2 (Member Name Lookup). I'm failing to understand many of the uses of the terms "member subobject" and "base class subobject". I already asked about these terms in: What is a member sub object? and
What is a base class subobject
The second question had an answer that was relatively satisfactory for me; the first one didn't help me. I think the explanation in the draft is a little too abstract, so I would rely on a rigorous definition of the terms cited above, but I really didn't find any. Taking another path: how does member name lookup occur in practice? How are the terms member subobject and base class subobject related to member name lookup?
| From an ABI standpoint, there is very little distinction between B and C in the following:
struct A {
int x;
};
struct B : A {};
struct C {
A base;
};
Creating an object of type B or C both require creating an object of type A. In both cases, the instance of A belongs to the parent object. So in both cases they are sub-objects.
For objects of type B, the A object is a base class sub-object.
For objects of type C, the A object is a member sub-object.
Edit: integrating stuff from followup questions in the comments.
struct D : A {
A base;
};
In D's case, there are 2 sub-objects of type A in each instance of D. One base class sub-object and one member sub-object.
|
69,756,728 | 69,780,047 | Installing Azure SDK for C++ in a docker container | I would like to know how I can install the Azure C++ SDK in a Docker container. I need it for a C++ service that downloads and processes files in Azure Blob Storage. Personally, I feel the container will become too large, and the installation is rather complex compared to the popular pattern:
...
// Docker file
RUN git clone https://github.com/blah/bla.git \
&& cd blah && mkdir -p build \
&& cd build \
&& cmake .. \
&& cmake --build . --target install
...
I have read the installation guide here and here, but after following the whole installation procedure in my development environment (CentOS 8), cmake fails to find the
azure-storage-blobs-cpp package even after adding the -DCMAKE_TOOLCHAIN_FILE=[path to vcpkg repo]/scripts/buildsystems/vcpkg.cmake flag correctly.
Is there an alternative or more straightforward way, like in the snippet above, to install the SDK in my development environment and then in a Linux-based container for deployment?
| While researching, I came across this issue. Here, janbernloehr references a library called azure-storage-cpplite, which I looked up and tried. Yes, it solves my problem!
First, it's easy to install locally or in a docker container. Dependencies: OpenSSL, libuuid and libcurl.
RUN git clone https://github.com/azure/azure-storage-cpplite.git \
&& cd azure-storage-cpplite && mkdir -p build && cd build \
&& cmake .. -DCMAKE_BUILD_TYPE=Release \
&& cmake --build . \
&& cmake --build . --target install
Secondly, it's simple to include the library in your CMake project.
set(Headers
include/your-file.h
)
set(Sources
src/your-file.cpp
)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -lm -Wl,-R/usr/local/lib")
set(CURL_LIBRARY "-lcurl")
find_package(CURL REQUIRED)
add_library(appname STATIC ${Sources} ${Headers})
target_compile_features(appname PUBLIC cxx_std_17)
target_include_directories(appname PUBLIC ${CURL_INCLUDE_DIR})
target_link_libraries(appname ${CURL_LIBRARIES} azure-storage-lite -lcrypto -luuid)
Thirdly, it is lightweight, with close to 30MB in download size which is OK for a docker container.
If it wasn't for this issue I wouldn't have found the cpplite SDK. I don't know why it's a hidden gem. If you know anything let us know.
|
69,757,264 | 69,759,075 | Need help to setup RichEdit | I'm trying to set the following text in a RichEdit control (v2.0, I guess, as I use the "Riched20.dll" library):
{\rtf1Привет!\par{ \i This } is super {\b text}.\par}
The first problem is wrong symbols instead of the non-Latin text Привет; the second problem is the bold text section {\\b text}, which is rendered as non-bold. Here is the screenshot:
Visual Studio is set up to "Use Unicode Character Set" (the app I'm working on is already set up this way, and I'm still quite bad at understanding how Windows encodings work). I use an ordinary (i.e. not wide-char) std::string, as wide-char classes don't work for my code; that was my previous question.
Here is the code snippet:
DWORD CALLBACK EditStreamInCallback(DWORD_PTR dwCookie, LPBYTE pbBuff, LONG cb, LONG* pcb)
{
std::stringstream* rtf = (std::stringstream*)dwCookie;
*pcb = rtf->readsome((char*)pbBuff, cb);
return 0;
}
// ...
auto hwndEdit = CreateRichEdit(hWnd, 100, 100, 300, 300, hInstance);
std::stringstream rtf("{\\rtf1Привет!\\par{ \\i This } is super {\\b text}.\\par}");
EDITSTREAM es = { 0 };
es.dwCookie = (DWORD_PTR)&rtf;
es.pfnCallback = &EditStreamInCallback;
SendMessage(hwndEdit, EM_STREAMIN, SF_RTF, (LPARAM)&es);
Update: The end goal is: get some RTF string (which may consist of unicode(?) text, links, etc.) from JSON like:
{
"text": "{\\rtf1Привет!\\par{ \\i This } is super {\\b text}.\\par}"
}
, show it, handle clicks on hyperlinks, and almost certainly modify specific symbols (the specific symbol is a custom symbol that replaces the original symbol in our own modified .ttf font). I didn't read the RTF documentation yet and used the given string just to check out how the RichEdit control and the corresponding winapi work.
The final RTF text would be formed in an RTF editor, I suppose. Almost certainly WordPad.
| Convert your text according to the RTF format specification:
std::string rtf("{\\rtf1\\deff1{\\fonttbl{\\f0\\fcharset0 Times New Roman;}{\\f1\\fcharset0 Segoe UI;}}{\\lang1033{\\f1{\\ltrch\\u1055?\\u1088?\\u1080?\\u1074?\\u1077?\\u1090?!}\\li0\\ri0\\sa0\\sb0\\fi0\\ql\\par}{\\f1{\\i\\ltrch This }{\\ltrch is super }{\\b\\ltrch text}{\\ltrch .}\\li0\\ri0\\sa0\\sb0\\fi0\\ql\\par}}}");
std::stringstream ss(rtf);
EDITSTREAM es = { 0 };
es.dwCookie = (DWORD_PTR)&ss;
es.pfnCallback = &EditStreamInCallback;
SendMessage(richedit, EM_STREAMIN, SF_RTF, (LPARAM)&es);
This rtf string produces the following text:
List of the main control words used in the rtf string above (according to the Rich Text Format (RTF) Version 1.5 Specification):
\rtf1
The RTF document Specification version is 1.
\deffN
The \deff control word specifies the default font number.
\fonttbl
The \fonttbl control word introduces the font table group.
\lang1033
Applies a language to a character. N is a number corresponding to a language.
In the project settings you can define Character Set as Use Unicode character, Use Multi-Byte Character or Not Set, does not matter for this case.
|
69,757,433 | 69,757,619 | Trivial function gives unexpected return value | I have coded a function which receives a double matrix and looks backwards for a zero entry. If it finds one it changes the value of that entry to -2.0 and returns true. Otherwise it returns false.
Here's the code:
#include <iostream>
#include <vector>
bool remove1zero(std::vector<std::vector<double>> & matrix)
{
size_t dim = matrix.size();
for (size_t j = dim - 1; j >= 0; j--)
for (size_t i = dim - 1; i >= 0; i--)
if ((matrix[j])[i] == 0.0)
{
(matrix[j])[i] = -2.0;
return true;
}
return false;
}
int main()
{
std::vector<std::vector<double>> testMatrix(3);
testMatrix[0] = std::vector<double> {-2.0, -2.0, 3.0};
testMatrix[1] = std::vector<double> {-2.0, -1.0, 3.0};
testMatrix[2] = std::vector<double> {2.0, 2.0, -1.0};
std::cout << remove1zero(testMatrix);
}
Since that matrix has no zero entry, the if-condition shouldn't activate, and eventually remove1zero should return false. However, that's not what happens. I have tried it on my machine as well as in http://cpp.sh/ and the output is 1/true. I would appreciate any insight on why this happens.
| As mentioned in the comments, since size_t is an unsigned type, the j >= 0 and i >= 0 comparisons will always evaluate as "true" and, when either index reaches zero, the next value (after decrementing that zero value) will wrap around to the maximum value of the size_t type, causing undefined behaviour (out-of-bounds access).
A nice 'trick' to get round this is to use the "goes to" pseudo-operator, -->, which is actually a combination of two operators: What is the "-->" operator in C/C++?.
You can use this in your for loops as outlined below, leaving the "iteration expression" blank (as the decrement is done in the "condition expression") and starting the loop at a one higher index in the "init-statement" (as that decrement will be applied before entering the body of the loop).
Here's a version of your function using this approach (note that I have included a space in the x-- > 0 expressions, to clarify that there are actually two separate operators involved):
bool remove1zero(std::vector<std::vector<double>>& matrix)
{
size_t dim = matrix.size();
for (size_t j = dim ; j-- > 0; )
for (size_t i = dim ; i-- > 0; )
if (matrix[j][i] == 0.0) {
matrix[j][i] = -2.0;
return true;
}
return false;
}
|
69,757,457 | 69,759,109 | GDB: There is no member named "" | I'm writing some code that is calling some classes from a much larger project. Let's call the large project SampleProject. I have the static library of the SampleProject called libSampleProject.a. I cannot show the actual code, but I will provide some examples:
Let's say the SampleProject has a Class named SamplePointers in a source file called SamplePointers.cpp.
In the SamplePointers.cpp, there are pointers to other classes within the SampleProject.
So for example, there is a class named ReadXml, and ReadXml is a member of the SamplePointers class as a pointer.
class SamplePointers {
public:
ReadXml * readXmlObject;
<MANY OTHER POINTERS TO CLASSES>
}
FYI: the pointers are initialized in the constructor of the SamplePointers class.
In my CPP file, main.cpp, where I'm calling said Class looks like this following:
#include <iostream>
#include <SamplePointers.hpp>
int main() {
SamplePointers * sampleObject = new SamplePointers;
sampleObject->read("sampleFile.xml");
std::cout << sampleObject->readXmlObject->xmlDataField << "\n";
}
SamplePointers also has a read function which reads in XML Fields. The fields will be available in the ReadXml class via accessing member variables. Running this executable, the value for xmlDataField prints out. However, when debugging on GDB, I set a breakpoint on say the read line, and then I type in this path, sampleObject->readXmlObject->xmlDataField, hit enter, and then gdb says that there is no member named readXmlObject.
This is curious, since I'm able to print it out, with std::cout, but gdb cannot physically access the object member variables.
Any ideas why gdb wouldn't be able to access the members?
Sample GDB output:
(gdb) file out
(gdb) b 7
(gdb) run
(gdb) <breaks at line 7>
(gdb) p sampleObject->readXmlObject->xmlDataField
There is no member named readXmlObject
Also, here is a look at what the Makefile looks like:
INCLUDES = -I/path/to/SamplePointersHeader -I/path/to/ReadXmlHeader -I/many/more/headers
main:
g++ main.cpp $(INCLUDES) -L/path/to/SampleProjectLib -lSampleProject -o out
debug:
g++ -g main.cpp $(INCLUDES) -L/path/to/SampleProjectLib -lSampleProject -o out
| Thanks to @G.M. for the suggestion. I believe the problem may have been that, when I made the library, I compiled it without the debug flag, i.e. the -g flag. I re-compiled the library, made the library out of the object files, and even changed the optimization level from -O3 to -O0, which helps with debugging.
I then recompiled my main.cpp file with the newly made library, passed it through GDB and it was able to find the Class members.
Thanks for all of the input!
|
69,757,980 | 69,758,109 | C++ program behaviour different between optimizations | My question is in relation to this little code snippet:
typedef std::map<std::string, std::string> my_map_t;
std::string_view get_value_worse(const my_map_t& input_map, std::string_view value)
{
auto retrieved = input_map.find(value.data());
return retrieved != input_map.cend() ? retrieved->second : "";
}
std::string_view get_value_better(const my_map_t& input_map, std::string_view value)
{
auto retrieved = input_map.find(value.data());
if (retrieved != input_map.cend())
{
return retrieved->second;
}
return "";
}
int main()
{
my_map_t my_map = {
{"key_0", "value_0"},
{"key_1", "value_1"},
};
std::cout << (get_value_worse(my_map, "key_0") == get_value_better(my_map, "key_0")) << std::endl;
}
Under the latest gcc with no optimisations this prints 0 for false, while under -O3 this prints 1 for true.
I believe the un-optimised behaviour is because the second and third operands of the conditional operator are expressions, not statements - and so the retrieved->second in retrieved != input_map.cend() ? retrieved->second : "" gets evaluated as a string construction on the stack, and returning a string_view to that is bad.
I can also see that with -O3 the compiler would be able to inline all of this, remove branching, and be done with it... but I would have expected -O3 to act exactly "as if" I had compiled with -O0.
Can anyone explain why the compiler gets to elide the copy construction I believe is happening in the -O0 version?
| In the conditional expression, a temporary std::string object is constructed. Temporary objects are usually constructed on the stack, although this is an implementation detail that is not important. The important thing is that the temporary object is destroyed at the end of the return statement, so the returned std::string_view is dangling. Attempting to access the data it points to (using the == operator, or otherwise) results in undefined behaviour.
When a program contains undefined behaviour, the compiler can do whatever it wants with it. In particular, the compiler is permitted to optimize by assuming that a condition that implies undefined behaviour will always be false. If it turns out that this assumption is wrong, then the compiler is off the hook (because it means undefined behaviour is occurring). What kind of assumption exactly is being made by your compiler is not clear. It also doesn't really matter, because you can't depend on the behaviour that you see now. You should just rewrite your program to remove the undefined behaviour.
|
69,758,451 | 69,761,160 | Is it better to perform n additions of a floating-point number or one integer multiplication? | Consider the two cases below:
// Case 1
double val { initial_value };
for (int i { 0 }; i < n; ++i) {
val += step;
foo(val);
}
// Case 2
for (int i { 0 }; i < n; ++i) {
double val = initial_value + i * step;
foo(val);
}
where n is the number of steps, initial_value is some given value of type double, step is some predetermined value of type double and val is a variable used in a subsequent call to the function foo. Which of the cases produces less floating-point error? My guess would be the second one, as there is only one addition and one multiplication, while the first case incurs the floating-point representation error from all of the n additions. I am asking this question because I didn't know what to search for. Does there exist some good reference for cases like these?
In practice the variable val is to be used in the loop of the both cases. I didn't include any example for this as I'm only interested in the floating-point error.
| Considering the comment by supercat (emphasis mine):
The point is that in many scenarios one might want a sequence of values that are uniformly spaced between specified start and end points. Using the second approach would yield values that are as uniformly spaced as possible between the start point and an end value that's near a desired one, but may not quite match.
And the one by Bathsheba:
Both are flawed. You should compute the start and end, then compute each value as a function of those. The problem with the second way is you multiply the error in step. The former accumulates errors.
I'd suggest a couple of alternatives.
Since C++20, the Standard Library provides std::lerp where std::lerp(a, b, t) returns "the linear interpolation between a and b for the parameter t (or extrapolation, when t is outside the range [0,1])".
A formula like value = (a * (n - i) + b * i) / n; may result in a more uniform (1) distribution of the intermediate values.
(1) Here I tried to test all those approaches for different extremes and number of sample points. The program compares the values generated by each algorithm when applied in the opposite directions (first from left to right, then from right to left). It shows the average and variance of the sum of the absolute difference between the values of the intermediate points.
Other metrics may yield different results.
|
69,759,030 | 70,057,958 | Why my trained model output is same for each random input? | I trained my model on the Python platform. after training, I faced up with same output for each random input. I solved this problem by deactivating BatchNorm layers with the model.eval() method. but when I tried to load my trained model in C++ with Pytorch C++ API, this problem showed up again, and model.eval() not helping me at this time. I faced the same output for each random input again.
This is my C++ model loading function:
std::vector<torch::jit::script::Module> module_loader(std::string file_addr) {
std::vector<torch::jit::script::Module> modul;
torch::jit::script::Module model = torch::jit::load(file_addr);
model.eval();
modul.push_back(model);
return modul;
}
And this is my testing function:
void test(std::vector<torch::jit::script::Module> &model) {
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::rand({1, 2, 64, 172}));
torch::Tensor output = model[0].forward(inputs).toTensor();
std::cout << output << std::endl;
}
After all, I put it all together in main() like this:
int main() {
auto modul = module_loader(MODEL_ADDRESS);
test(modul);
}
MODEL_ADDRESS is a macro for the address of the trained model on my local disk.
The output of the program is this for every run:
0.3231 [ CPUFloatType{1,1} ]
| I debugged my code multiple times and retried saving the model. Finally, I found the answer. I used a server to train my model, and to export the model for C++ I loaded my model weights into a model object in the Python shell. The problem is here: I should have called eval() before exporting the model for C++. Calling eval() in C++ does not work for a model that was trained in Python. This solved my problem.
|
69,759,128 | 69,783,180 | How to resample audio? | What’s the best algorithm to change sample rate of PCM audio?
The input is often int16_t at 44.1 kHz but can also be 32kHz or other frequency. The output I need is 32-bit float at 48 kHz. I’m proficient in SIMD intrinsics and guaranteed to have either NEON or AVX, so an algorithm based on float math is OK.
Do I need to implement FFT + inverse, or is there something less computationally expensive?
For instance, will cubic splines work for this use case, or they gonna introduce frequency artifacts?
| Yes, FFT is the requirement for good quality.
This web site has nice graphs of more than 100 pieces of software that do audio resampling. From prior experience, I knew the professional software made by Steinberg often does the right thing. The graphs on that web site agree: for Cubase 10 and Nuendo 11 these graphs are very good indeed.
Fortunately for me, pretty much the same quality is produced by ffmpeg 4.2.2 with soxr resampler. That particular resampling library comes with a good enough license, and the DLL is even available as a package for my target OS.
I have integrated that library. While playing back a 44.1 kHz wave file, resampling at runtime into 48 kHz, my test program only consumes about 1% of CPU time (the CPU is a quad core Allwinner A64 @ 1 GHz), so the performance is good despite the FFT.
Back to my original question, the algorithm implemented in that library is from the 2005 paper by Laurent de Soras “The Quest For The Perfect Resampler”.
As written in the readme, it combines Julius O. Smith's "Bandlimited Interpolation" technique with FFT-based over-sampling. The math is rather complicated there. I was lucky to find that library, as I would have wasted too much time trying to do something similar myself.
|
69,759,241 | 69,767,913 | Cmake adding a source folder to xcode without compiling the sources inside | I have a project that I am porting to cmake. The architecture of the sources is as follow:
src
|- FolderA
|-fileAa.cpp
|-fileAb.cpp
...
|- fileA.cpp
|-CMakeLists.txt
...
The file fileA.cpp is as follow:
#include <FolderA/fileAa.cpp>
#include <FolderA/fileAb.cpp>
//etc
My CMakeFiles.txt is as follow (the files from FolderA ARE NOT included in the add_library function)
#...
add_library(... fileA.cpp ...)
#...
And it works well. When I tried to build the library, if I modify any files inside FolderA, only the file fileA.cpp is rebuilt and the library is remade.
However, when I generate my project for Xcode, the files inside FolderA are not shown. Is there a way to tell cmake that I want it to include the files inside FolderA in my Xcode project (and any other supported IDE) but without adding them to the list of files to compile for the add_library function?
I tried to set the property of the files in FolderA as HEADER and still include them in add_library (as mentioned here) and it sort of worked, but in Xcode the files were displayed in a flat hierarchy and I would like to keep the same structure:
What I want in Xcode:
|-FolderA
|-fileAa.cpp
|-fileAb.cpp
...
|- fileA.cpp
What I have in Xcode:
|-fileAa.cpp
|-fileAb.cpp
|-fileA.cpp
...
Is there a way to tell cmake to include a folder and its content in the project generated for an IDE, without adding the files to the list of files to compile?
| I solved my problem.
As mentioned here, I need to use the files in a target for source_group to work. I used the trick I mentioned of setting the sources of FolderA as header files, and it worked perfectly:
file(GLOB COMPILED_SOURCES "*.cpp")
file(GLOB FOLDERA_SOURCES "FolderA/*.cpp")
set_source_files_properties(${FOLDERA_SOURCES} PROPERTIES HEADER_FILE_ONLY TRUE)
source_group(TREE ${PROJECT_SOURCE_DIR}/src FILES ${COMPILED_SOURCES} ${FOLDERA_SOURCES})
#...
<any function that create a target>(... ${COMPILED_SOURCES} ${FOLDERA_SOURCES} ... )
#...
|
69,760,485 | 69,760,651 | Problem compiling multithreading process regarding arguments | I'm trying to compile a program but it keeps coming up with errors; I have searched through the forum and I think it has to do with passing by reference, but I can't find where I went wrong.
Here is an extract of the code:
#include <iostream>
#include <thread>
using namespace std;
const int N = 512;
const int N_BUSC = 8;
using VectInt = int[N];
void coord (VectInt v, bool& lectura, bool acabados[N_BUSC], int resultados[N_BUSC]);
int main(int argc, char *argv[]){
bool leido = false;
VectInt v;
int acabados [N_BUSC], resultados [N_BUSC];
...
thread coordinacion (&coord, v, std::ref(leido), acabados, resultados);
...
}
The error it keeps showing is:
/usr/include/c++/9/thread: In instantiation of ‘std::thread::thread(_Callable&&, _Args&& ...) [with _Callable = void (*)(int*, bool&, bool*, int*); _Args = {int (&)[512], std::reference_wrapper<bool>, int (&)[8], int (&)[8]}; <template-parameter-1-3> = void]’:
ej1.cpp:43:74: required from here
/usr/include/c++/9/thread:120:44: error: static assertion failed: std::thread arguments must be invocable after conversion to rvalues
120 | typename decay<_Args>::type...>::value,
| ^~~~~
/usr/include/c++/9/thread: In instantiation of ‘struct std::thread::_Invoker<std::tuple<void (*)(int*, bool&, bool*, int*), int*, std::reference_wrapper<bool>, int*, int*> >’:
/usr/include/c++/9/thread:131:22: required from ‘std::thread::thread(_Callable&&, _Args&& ...) [with _Callable = void (*)(int*, bool&, bool*, int*); _Args = {int (&)[512], std::reference_wrapper<bool>, int (&)[8], int (&)[8]}; <template-parameter-1-3> = void]’
ej1.cpp:43:74: required from here
/usr/include/c++/9/thread:243:4: error: no type named ‘type’ in ‘struct std::thread::_Invoker<std::tuple<void (*)(int*, bool&, bool*, int*), int*, std::reference_wrapper<bool>, int*, int*> >::__result<std::tuple<void (*)(int*, bool&, bool*, int*), int*, std::reference_wrapper<bool>, int*, int*> >’
243 | _M_invoke(_Index_tuple<_Ind...>)
| ^~~~~~~~~
/usr/include/c++/9/thread:247:2: error: no type named ‘type’ in ‘struct std::thread::_Invoker<std::tuple<void (*)(int*, bool&, bool*, int*), int*, std::reference_wrapper<bool>, int*, int*> >::__result<std::tuple<void (*)(int*, bool&, bool*, int*), int*, std::reference_wrapper<bool>, int*, int*> >’
247 | operator()()
| ^~~~~~~~
Thank you in advance.
| The arguments you pass to coord do not match its signature.
void coord (VectInt v, bool& lectura, bool acabados[N_BUSC], int resultados[N_BUSC])
^^^^
It should be int if you want to pass an array of int to the function - or you should pass an array of bool to it instead:
bool acabados [N_BUSC]; // not int[N_BUSC]
int resultados [N_BUSC];
thread coordinacion (&coord, v, std::ref(leido), acabados, resultados);
Demo
|
69,761,018 | 69,761,687 | Is clang wrongfully reporting ambiguity when mixing member and non-member binary operators? | Consider the following code, which mixes a member and a non-member operator|
template <typename T>
struct S
{
template <typename U>
void operator|(U)
{}
};
template <typename T>
void operator|(S<T>,int) {}
int main()
{
S<int>() | 42;
}
In Clang this code fails to compile stating that the call to operator| is ambiguous, while gcc and msvc can compile it. I would expect the non-member overload to be selected since it is more specialized. What is the behavior mandated by the standard? Is this a bug in Clang?
(Note that moving the operator template outside the struct resolves the ambiguity.)
| I do believe clang is correct marking the call as ambiguous.
Converting the member to a free-standing function
First, the following snippet is NOT equal to the code you have posted w.r.t S<int>() | 42; call.
template <typename T>
struct S
{
};
template <typename T,typename U>
void operator|(S<T>,U)
{}
template <typename T>
void operator|(S<T>,int) {}
In this case the now-non-member implementation must deduce both T and U, making the second template more specialized and thus chosen. All compilers agree on this as you have observed.
Overload resolution
Given the code you have posted.
For overload resolution to take place, a name lookup is initiated first. That consists of finding all symbols named operator| in the current context, among all members of S<int>, and in the namespaces given by ADL rules, which are not relevant here. Crucially, T is resolved at this stage, before overload resolution even happens - it must be. The found symbols are thus
template <typename T> void operator|(S<T>,int)
template <typename U> void S<int>::operator|(U)
For the purpose of picking a better candidate, all member functions are treated as non-members with a special *this parameter. S<int>& in our case.
[over.match.funcs.4]
For implicit object member functions, the type of the implicit object parameter is
(4.1)
“lvalue reference to cv X” for functions declared without a ref-qualifier or with the & ref-qualifier
(4.2)
“rvalue reference to cv X” for functions declared with the && ref-qualifier
where X is the class of which the function is a member and cv is the cv-qualification on the member function declaration.
This leads to:
template <typename T> void operator|(S<T>,int)
template <typename U> void operator|(S<int>&,U)
Looking at this, one might assume that because the second function cannot bind the rvalue used in the call, the first function must be chosen. Nope, there is special rule that covers this:
[over.match.funcs.5][Emphasis mine]
During overload resolution, the implied object argument is indistinguishable from other arguments.
The implicit object parameter, however, retains its identity since no user-defined conversions can be applied to achieve a type match with it.
For implicit object member functions declared without a ref-qualifier, even if the implicit object parameter is not const-qualified, an rvalue can be bound to the parameter as long as in all other respects the argument can be converted to the type of the implicit object parameter.
Due to some other rules, U is deduced to be int, not const int& or int&. S<T> can also be easily deduced as S<int> and S<int> is copyable.
So both candidates are still valid. Furthermore, neither is more specialized than the other. I won't go through the process step by step - and who can blame me, if even the compilers did not get the rules right here. But it is ambiguous for exactly the same reason as foo(42) is for
void foo(int){}
void foo(const int&){}
I.e. there is no preference between copy and reference as long as the reference can bind the value, which is true in our case even for S<int>& due to the rule above. The resolution is just done for two arguments instead of one.
|
69,761,806 | 69,761,905 | Listing permutations of two dice with one loop | As part of a coding challenge for myself, I've been trying to write code for simple problems in one line. Currently, I'm trying to print out all permutations possible for two dice. So far, I have a simple algorithm utilizing two for loops:
for(int i = 1; i <= 6; i++)
for(int j = i; j <= 6; j++)
printf("%d %d", i, j);
Which can be easily turned into one line:
for(int i = 1; i <= 6; i++) for(int j = i; j <= 6; j++) printf("%d %d", i, j);
Both programs output the permutations in the desired order:
1 1
1 2
1 3
1 4
1 5
1 6
2 2
2 3
2 4
2 5
2 6
3 3
3 4
3 5
3 6
4 4
4 5
4 6
5 5
5 6
6 6
In order to make this challenge harder, I'm trying to print out the permutations in the same order as the output above, using only one loop. I'm sure I'll need to use division and modulo in order to accomplish this, but I'm stuck and unsure how to achieve this.
| The 2 loop version is the most readable. But since you asked, here are some 1 loop versions:
int main()
{
int n = 6;
for (int i = 1, j = 1; i < n; ++j)
{
if (j > n)
{
++i;
j = i;
}
std::cout << i << " " << j << '\n';
}
std::cout << std::flush;
}
int main()
{
int n = 6;
int i = 1;
int j = 1;
while (true)
{
if (j > n)
{
++i;
j = i;
}
if (i > n)
{
break;
}
std::cout << i << " " << j << '\n';
++j;
}
std::cout << std::flush;
}
As a side note:
for(int i = 1; i <= 6; i++) for(int j = i; j <= 6; j++) printf("%d %d", i, j);
This is horrible. Don't ever write code like this. The artificial goal of writing code in one line is completely and utterly pointless. You can take any code and put it on one line like that without any effort, so it has absolutely no benefit, teaching or otherwise.
There is the concept of writing one-liners but that is not just replacing new lines in source code with spaces. It's about e.g. replacing loops with function calls or chain calls.
And don't forget, first and foremost write correct, clean and readable code.
|
69,761,865 | 69,762,695 | How to pass an n-dim Eigen tensor to a function? | I'm looking to make a loss function that can take 2 tensors of any dimensions as parameters, but the dimensions of tensor 1 (t1) and tensor (t2) must match. Below are the templates that I tried to use to can pass the tensors into the function. I was thinking that T would be a type and N would model the number of indexes possible without explicitly writing a type for infinitely possible tensor dimensions.
loss.h
#include <iostream>
namespace Loss {
template<class T, std::size_t N>
void loss(Eigen::Tensor<T, N, 0>& predicted, Eigen::Tensor<T, N, 0>& actual) {
std::cout << "Loss::loss() not implemented" << std::endl;
};
};
main.cpp
#include "loss.h"
int main() {
Eigen::Tensor<double, 3> t1(2, 3, 4);
Eigen::Tensor<double, 3> t2(2, 3, 4);
t1.setZero();
t2.setZero();
Loss::loss(t1, t2);
return 0;
}
The type error that I get before compiling from my editor:
no instance of function template "Loss::loss" matches the argument list -- argument types are: (Eigen::Tensor<double, 3, 0, Eigen::DenseIndex>, Eigen::Tensor<double, 3, 0, Eigen::DenseIndex>
And this is the message I get once I compile (unsuccessfully):
note: candidate template ignored: substitution failure [with T = double]: deduced non-type template argument does not have the same type as the corresponding template parameter ('int' vs 'std::size_t' (aka 'unsigned long'))
void loss(Eigen::Tensor<T, N, 0>& predicted, Eigen::Tensor<T, N, 0>& actual) {
^
1 error generated.
| The error message is pointing out the type of the non-type template parameter is size_t, but in the declaration of t1 and t2 the value of that parameter is 3, which has type int. This mismatch makes the template argument deduction fail.
You can fix this by changing the type of the non-type template parameter to int
template<class T, int N>
void loss( // ...
or just let it be deduced
template<class T, auto N>
void loss( // ...
|
69,762,069 | 69,762,093 | Understanding vector::assign example on cplusplus | I am confused about the following code and what it does:
first.assign (7,100); // 7 ints with a value of 100
std::vector<int>::iterator it;
it=first.begin()+1;
second.assign (it,first.end()-1); // the 5 central values of first
I don't understand the second.assign statement. I would assume it assigns 100 elements in second with a value of 100. Why is the size of second 5?
| In the example code
it = first.begin()+1, meaning the 2nd element
And
second.assign (it,first.end()-1);
^^^^^^^^^^
The last element - excluded, because the second argument of assign is the (exclusive) end of the range.
it has skipped the first and last elements and hence you have 7-2=5 elements in the last assignment.
|
69,762,090 | 69,765,885 | CImg problem with color-interpolated 2D triangle | I feel like i'm missing something obvious here.
I am unable to get CImg's color-interpolated 2D triangle to work as expected.
To add to the confusion, it behaves differently on my system version of CImg (cimg_version 245) to the latest in Github (cimg_version 300).
If i draw a simple filled triangle, everything works as expected.
If i specify a colour for each vertex, there is a difference depending what CImg version i use:
cimg_version 245:
Interpolation works to some extent but colour channels are clamped to 255 for values greater than 0.
You can see this by comparing the center and right triangles in the image titled "CImg version: 245".
The center image fades from {0, 0, 0} to {255, 255, 255}
whereas the right image goes from {100, 100, 100} to {255, 255, 255}.
cimg_version 300:
In this version, I was unable to get any interpolation to work.
So, am I missing a setting to enable interpolation or should I file a bug report?
#include <iostream>
#include <stdio.h>
#include "CImg.h"
using namespace cimg_library;
#define TITLE_LEN 100
int main() {
char title [TITLE_LEN];
snprintf(title, TITLE_LEN, "CImg version: %d", cimg_version);
std::cout << title << "\n";
CImg<unsigned char> image(600, 200, 1, 3, 0);
CImgDisplay main_disp(image, title);
const unsigned char black[3] = {0, 0, 0};
const unsigned char grey[3] = {100, 100, 100};
const unsigned char white[3] = {255, 255, 255};
// Left
image.draw_triangle(100, 10, 10, 190, 190, 190, grey);
// Center
image.draw_triangle(300, 10, 210, 190, 390, 190, white, black, white);
// Right
image.draw_triangle(500, 10, 410, 190, 590, 190, white, grey, white);
image.display(main_disp);
while (!main_disp.is_closed()) {
main_disp.wait();
}
return 0;
}
[EDIT]
Forgot to include what I'm running on:
$ lsb_release -a
LSB Version: core-11.1.0ubuntu2-noarch:printing-11.1.0ubuntu2-noarch:security-11.1.0ubuntu2-noarch
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal
$ uname -a
Linux lapdancer 5.4.0-89-generic #100-Ubuntu SMP Fri Sep 24 14:50:10 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
$ g++ -v
Using built-in specs.
COLLECT_GCC=g++
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/9/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none:hsa
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 9.3.0-17ubuntu1~20.04' --with-bugurl=file:///usr/share/doc/gcc-9/README.Bugs --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,gm2 --prefix=/usr --with-gcc-major-version-only --program-suffix=-9 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib=auto --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none=/build/gcc-9-HskZEa/gcc-9-9.3.0/debian/tmp-nvptx/usr,hsa --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)
 | Developer of CImg here.
It looks like a bug indeed. I'll try to fix it ASAP.
Thanks.
Do not hesitate to file an issue on our GitHub site (https://github.com/dtschump/CImg/issues) when you encounter such strange behaviors.
EDIT: This should be fixed now, with github.com/dtschump/CImg/issues/332. A new pre-release has been pushed on the CImg website.
|
69,762,219 | 69,762,605 | Weird C++14 and C++17 difference in assignment operator | I have the following code:
#include <vector>
#include <iostream>
std::vector <int> a;
int append(){
a.emplace_back(0);
return 10;
}
int main(){
a = {0};
a[0] = append();
std::cout << a[0] << '\n';
return 0;
}
The function append() as a side effect increases the vector size by one. Because of how vectors work, this can trigger a reallocation of its memory when exceeding its capacity.
So, when doing a[0] = append(), if a reallocation happens, then a[0] is invalidated and points to the old memory of the vector. Because of this you can expect the vector to end up being {0, 0} instead of {10, 0}, because it is assigning to the old a[0] instead of the new one.
The weird thing that confuses me is that this behavior changes between C++14 and C++17.
On C++14, the program will print 0. On C++17, it will print 10, meaning a[0] actually got 10 assigned to it. So, I have the following questions whose answers I couldn't find:
Is C++17 evaluating a[0]'s memory address after evaluating the RHS of the assignment expression? Does C++14 evaluate this before and that's why it changes?
Was this a bug that was fixed in C++17? What changed in the standard?
Is there a clean way to make this assignment behave like in C++17 when using C++14 or C++11?
| This code is UB prior to C++17, as pointed out in comments, due to C++ evaluation order rules. The basic problem: Order of operations is not order of evaluation. Even something like x++ + x++ is UB.
In C++17, sequencing rules for assignments were changed:
In every simple assignment expression E1=E2 and every compound
assignment expression E1@=E2, every value computation and side-effect
of E2 is sequenced before every value computation and side effect of
E1
|
69,762,366 | 69,763,590 | spdlog for C++; what(): Failed opening file ~/logs/log.txt for writing: No such file or directory | I am using spdlog to do some simple logging on a c++ program on a beaglebone, debian 10. I set up a rotating logger:
auto logger = spdlog::rotating_logger_mt("runtime_log", "~/logs/log.txt", max_size, max_files);
and this returns the error
terminate called after throwing an instance of spdlog::spdlog_ex'
what(): Failed opening file ~/logs/log.txt for writing: No such file or directory
Aborted
I have already ensured that the directory exists and have tried chmod -R 776 ~/logs/
so:
drwxrwxrw- 2 user group 4096 Oct 28 09:03 logs
-rwxrwxrw- 1 user group 0 Oct 28 09:03 runtime.log
When given the path logs/log.txt, it works. This puts the logfile in ~/project/build/logs
I have also tried giving the full path /home/user/logs/log.txt. And that works fine.
Why would my program and spdlog not be able to access the directory at the path ~/logs/log.txt?
 | ~ is a special character in the Bash shell, shorthand for the home directory. The C++ program doesn't know about the ~. The usual way to get the home directory is the std::getenv function, like:
const char* home_dir = std::getenv("HOME");
auto log_path = std::string{home_dir} + "/logs/log.txt";
I also suggest making use of the <filesystem> header, as it has functions to check for the existence of files and directories.
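Putting both suggestions together, a small helper can expand a leading ~ before handing the path to spdlog. This is a sketch: expand_home is a hypothetical name, and it assumes HOME is set, as it normally is on Linux.

```cpp
#include <cstdlib>
#include <string>

// Expand a leading "~" to the value of $HOME, since the shell's
// tilde expansion never happens inside a C++ program.
std::string expand_home(const std::string& path) {
    if (!path.empty() && path[0] == '~') {
        if (const char* home = std::getenv("HOME")) {
            return std::string{home} + path.substr(1);
        }
    }
    return path;  // no "~" prefix, or HOME unset: leave the path alone
}
```

Before creating the rotating logger you could then call std::filesystem::create_directories on the parent directory of the expanded path, so the logs directory is guaranteed to exist.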
|
69,762,931 | 69,765,372 | Can I take a reference of a pointer in C++? | I am passing in a reference to a pointer variable into a function. The function will do something and point the pointer variable to some object. Code:
int Foo(Obj* &ptr) {
// do something...
ptr = some_address;
return return_value;
}
int main() {
Obj* p = nullptr;
int ret = Foo(p);
// do something with ret
p->DoSomething();
}
However, things get trickier if I want to pass a reference to a pointer to const. I would like the pointer variable itself to be changed (hence the reference), but I don't want the pointed Obj instance to change using this pointer. In my imagination, it should be something like:
int Foo(const Obj* &ptr) {
// do something...
ptr = some_address;
return return_value;
}
int main() {
const Obj* p = nullptr;
int ret = Foo(p);
// do something with ret
p->DoSomething(); // this should only work when DoSomething() is const method
}
EDIT: the following error cannot be reproduced and is hence deleted. This question is focused on the concept of reference-of-pointer instead of solving an issue
C++ gives this compile error:
main.cpp:22:10: error: cannot bind non-const lvalue reference of type ‘const Obj*&’ to an rvalue of type ‘const Obj*’
22 | Foo(ptr);
| ^~~
main.cpp:14:23: note: initializing argument 1 of ‘void test(const Obj*&)’
14 | int Foo(const Obj* &ptr) {
| ~~~~~~~~~~~~^~~
Some thoughts:
Error cannot be reproduced
I believe this error is shown when I am trying to pass in an "unnamed variable" into a reference parameter. In this case I am passing the variable ptr, which shouldn't be an issue.
ptr is passed in as a parameter because the function has a useful return value. Setting ptr is more like a side-product of this function, where the caller can choose to ignore or use.
I can also try using Obj** as the parameter, which is passing by pointer instead of passing by reference. This works when a const Obj** is passed as parameter. I am just wondering what will be the case if the parameter is passed by reference.
| I am not sure what your problem is, as the error code given does not match your code.
Your second example with int Foo(const Obj* &ptr) works exactly as intended, and compiles fine if you make DoSomething const.
To comment your three thoughts:
If you const things correctly, the error goes away.
I really, really dislike such out-parameters. It is much cleaner to return a struct or a pair of int and pointer. That way the caller can write const auto[ret, p] = Foo(); and not have to explicitly declare a pointer that you may not want to use.
Passing pointers to pointers is C-style, due to the lack of references, and just makes the code harder to read, with no benefit.
Below is slightly modified code that compiles fine, with a better Foo too, as mentioned in my answer to 2.:
#include <utility>
struct Obj
{
void DoSomething() const;
};
// This is ugly of course, used just to have a valid ptr to return
Obj global;
int Foo(const Obj* &ptr) {
// do something...
ptr = &global;
return 5;
}
std::pair<int, const Obj*> BetterFoo()
{
// do something...
return {5, &global};
}
int main() {
const Obj* p1 = nullptr;
int ret1 = Foo(p1);
const auto[ret2, p2] = BetterFoo();
p1->DoSomething();
p2->DoSomething();
}
|
69,762,937 | 69,762,998 | c++ function pointer assignment is safe? | Is this code thread safe?
static PFN_SOMEFUNC pfnSomeFunc = nullptr;
PFN_SOMEFUNC SomeFuncGetter();
void CalledFromManyThreads() {
if (pfnSomeFunc == nullptr)
pfnSomeFunc = SomeFuncGetter();
pfnSomeFunc();
}
This is not a question about atomic pointer operations. There are some special conditions:
SomeFuncGetter() always returns same address.
SomeFuncGetter() is thread safe.
| It doesn't look thread-safe, because the global variable can be modified by any thread without synchronization. Even an assignment is not guaranteed to be atomic.
What you could do is leverage a language feature that guarantees thread-safe atomic initialization by moving the static into your function (a very common solution to the Static Initialization Order Fiasco):
void CalledFromManyThreads() {
static PFN_SOMEFUNC pfnSomeFunc = SomeFuncGetter();
pfnSomeFunc();
}
Now it's thread-safe. And if you need that cached result in multiple places, you can wrap it:
PFN_SOMEFUNC GetSomeFunc()
{
static PFN_SOMEFUNC pfnSomeFunc = SomeFuncGetter();
return pfnSomeFunc;
}
void CalledFromManyThreads() {
PFN_SOMEFUNC pfnSomeFunc = GetSomeFunc();
pfnSomeFunc();
}
|
69,763,620 | 69,763,816 | Do I need to offset a pointer when calling recv? | When using recv, I have always done something like this:
int len = 1000;
int n = 0;
char buf[1000];
while(n < len) {
n += recv(socket, buf + n, len - n, 0);
}
my logic being that if recv does not receive the full data of len bytes, I should only receive the rest of the bytes (len - n) and should not overwrite the data that has already been received (so I offset the beginning of the buffer to the end of the already received content). This seems to work perfectly fine whenever I use it.
However, most if not all examples of recv that I see simply do something like follows:
int len = 1000;
int n = 0;
char buf[1000];
while(n < len) {
n += recv(socket, buf, len, 0);
}
Does this not leave you vulnerable to overwriting the beginning of your buffer if recv is called more than once?
| Both of your examples are not accounting for the possibility of recv() failing. You need to check the return value for errors, eg:
char buf[1000];
int len = sizeof(buf);
int n = 0, ret;
while (n < len) {
ret = recv(socket, buf + n, len - n, 0);
if (ret <= 0) {
// error handling...
break;
}
n += ret;
}
Your 1st example is good to use when you know up front exactly how many bytes you need to read, and can pre-allocate a buffer large enough to hold it all.
Your 2nd example is good to use when you need to stream the data more dynamically, 1 buffer at a time. For instance, maybe the data is being saved to a file. Or maybe the data is too large to fit in such a small buffer and needs to be processed in chunks. Or maybe the data is appended to another dynamically growing buffer for later use. Lots of reasons for this use-case, eg:
char buf[1000];
int n;
while (some condition) {
n = recv(socket, buf, sizeof(buf), 0);
if (n <= 0) {
// error handling...
break;
}
// use buf up to n bytes...
}
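The first pattern (read exactly N bytes) is common enough to wrap in a small helper. A POSIX sketch, with recv_all being a hypothetical name:

```cpp
#include <sys/socket.h>
#include <sys/types.h>

// Loop until exactly `len` bytes have been received.
// Returns len on success, a smaller count if the peer closed the
// connection early, or -1 on error.
ssize_t recv_all(int socket, char *buf, size_t len) {
    size_t n = 0;
    while (n < len) {
        ssize_t ret = recv(socket, buf + n, len - n, 0);
        if (ret < 0)
            return -1;   // error
        if (ret == 0)
            break;       // peer closed the connection
        n += (size_t)ret;
    }
    return (ssize_t)n;
}
```

Callers then check the return value once instead of re-implementing the loop at every call site.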
|
69,763,634 | 69,764,604 | Cython: How to import global variables from a module? | A module with global C variable:
# mymod.pyx (compiled to mymod.so)
cdef int myvar
How to access myvar from another file?
Scenario 1:
# myapp.pyx (import module only)
import mymod
print(mymod.myvar) # myvar is Python object, not int
Scenario 2:
# myapp.pyx (import variable directly)
from mymod import myvar # Error, no such myvar as Python var
Scenario 3:
# myapp.pyx (import with cimport, needs .pxd file)
from mymod cimport myvar
I wish to use .pyx files only, if possible. Unless there are no choices, I may use .pxd files, how to move myvar to .pxd file in such case?
 | It turns out that cdef globals in a module must be declared in .pxd files.
For example:
# submod.pyx
some code...
# submod.pxd
cdef int myvar
# mod.pyx
cimport submod
print(submod.myvar)
# app.py
import mod
|
69,763,705 | 69,766,713 | How to render from world space into camera space? | I have 2 functions, the first function renders my objects in the world, while the second function was supposed to render my objects directly in the view frame of the camera like a UI ie. if the camera moves, the object appear to be stationary as it moves with the camera. However, my second function doesn't seem to work as nothing appears, is my logic for the view projection matrix incorrect?
This is the function that sends the camera's view projection matrix to the vertex shader to render objects in the world, it works:
void Renderer2D::BeginScene(const OrthographicCamera& camera)
{
s_Data.shader = LightTextureShader;
(s_Data.shader)->Bind();
(s_Data.shader)->SetMat4("u_ViewProjection", camera.GetViewProjectionMatrix());
s_Data.CameraUniformBuffer->SetData(&s_Data.CameraBuffer, sizeof(Renderer2DData::CameraData));
s_Data.QuadVertexBuffer = LightQuadVertexBuffer;
s_Data.QuadVertexArray = LightQuadVertexArray;
s_Data.QuadIndexCount = LightQuadIndexCount;
s_Data.QuadVertexBufferBase = LightQuadVertexBufferBase;
StartBatch();
}
This is the function that was supposed to render my objects directly to the camera like a UI but it doesn't work:
void Renderer2D::BeginUIScene(const OrthographicCamera& camera)
{
s_Data.shader = TextureShader;
(s_Data.shader)->Bind();
Mat4 projection = getOrtho(0.0f, camera.GetWidth(), 0.0f, camera.GetHeight(), -1.0f, 1.f);
(s_Data.shader)->SetMat4("u_ViewProjection", projection);
s_Data.CameraUniformBuffer->SetData(&s_Data.CameraBuffer, sizeof(Renderer2DData::CameraData));
s_Data.QuadVertexBuffer = TexQuadVertexBuffer;
s_Data.QuadVertexArray = TexQuadVertexArray;
s_Data.QuadIndexCount = TexQuadIndexCount;
s_Data.QuadVertexBufferBase = TexQuadVertexBufferBase;
StartBatch();
}
Edit:
The declaration for getOrtho():
Mat4 getOrtho(float left, float right, float bottom, float top, float zNear, float zFar);
| There are two approaches that I can think of. One is to directly pass the screen space coordinates to a vertex shader that does not apply a model, view or projection matrix to it. An example vertex shader would look like this:
#version ...
layout (location = 0) in vec3 aPos; // The vertex coords should be given in screen space
void main()
{
gl_Position = vec4(aPos, 1.0f);
}
This will render a 2D image to the screen at a fixed position which does not move as the camera moves.
The other way is if you want a 3D object that is "attached" to the camera (so it is on the screen in a fixed position), you need to only apply the model and projection matrix, and not the view. An example vertex shader that does this:
#version ...
layout (location = 0) in vec3 aPos;
uniform mat4 model;
uniform mat4 projection;
void main()
{
gl_Position = projection * model * vec4(aPos, 1.0f);
}
By not using the view matrix, the model will always be on the screen at whatever position the model matrix moves the model to. The center here is the camera so a model matrix that translates by vector vec3(0.1f, 0.0f, -0.2f) would move the model 0.1f to the right of the center of the camera, and 0.2f away from the camera into the screen. Essentially, the model matrix here is defining the transformation of the model in relation to the camera's position. Note that if you want to do lighting calculations on the model then you will need to use the second method rather than the first, and you will need to do all of the lighting calculations in view/camera space for this model.
Edit:
To convert screen space coordinates from the range [0.0, screen resolution] to [-1.0, 1.0] which is the range that OpenGL uses:
float xResolution = 800.0f;
float yResolution = 600.0f;
float x = 200.0f;
float y = 400.0f;
float convertedX = ((x / xResolution) * 2) - 1;
float convertedY = ((y / yResolution) * 2) - 1;
|
69,764,971 | 69,769,941 | How can I compile lsqlite3 for Windows? | I have a C++ project developed in Ubuntu Linux. This C++ project is not written by me. Also, it runs under Ubuntu without any issues.
I am trying to compile the project under Windows 10 using GCC/G++ compiler and Visual Studio 2019 IDE. I am very close to success.
This project has the following lines in its CMakeList.txt file:
find_package(sqlite3)
if (SQLITE3_FOUND)
message("Sqlite3 found")
include_directories(${SQLite3_INCLUDE_DIRS})
SET(CMAKE_CXX_FLAGS_RELEASE " -DSQLITE3 ${CMAKE_CXX_FLAGS_RELEASE}")
SET(CMAKE_CXX_FLAGS_DEBUG " -DSQLITE3 ${CMAKE_CXX_FLAGS_DEBUG}")
SET(CMAKE_EXE_LINKER_FLAGS "-lsqlite3")
SET(CMAKE_SHARED_LINKER_FLAGS "-lsqlite3")
else()
message("Sqlite3 not found")
endif (SQLITE3_FOUND)
I need lsqlite3. I found the source code but failed to compile it using GCC:
C:\Users\pc\Downloads\sqlite3\lsqlite3-master\lsqlite3-master>mingw32-make
FIND: Parameter format not correct
FIND: Parameter format not correct
luarocks make
Error: Please specify which rockspec file to use.
luarocks make
Error: Please specify which rockspec file to use.
C:\Users\pc\Downloads\sqlite3\lsqlite3-master\lsqlite3-master>mingw32-make lsqlite3complete-0.9.4-2.rockspec
mingw32-make: Nothing to be done for 'lsqlite3complete-0.9.4-2.rockspec'.
C:\Users\pc\Downloads\sqlite3\lsqlite3-master\lsqlite3-master>
The file lsqlite3.c contains #include "lua.h" and #include "lauxlib.h". It seems that they are from Lua for Windows project.
How can I compile lsqlite3 for Windows and link it to my existing CPP project's port?
Should it be compiled using GCC or Lua?
 | You are mistaken: the linker flag -lsqlite3 means "link with libsqlite3.so", not "compile lsqlite3". The correct library to compile is SQLite3 itself. If you have trouble compiling it on Windows, there are quite a few questions about it on SO.
|