question_id int64 25 74.7M | answer_id int64 332 74.7M | title stringlengths 20 150 | question stringlengths 23 4.1k | answer stringlengths 20 4.1k |
|---|---|---|---|---|
70,728,093 | 70,728,505 | Cuda device memory variables with OpenMP multithreading produce wrong results | I have a function in which I am calling a CUDA kernel serially in a loop. This function is executed in parallel across threads using OpenMP. Through each iteration, I update a variable currentTime with:
cudaMemcpyFromSymbolAsync(&currentTime, minChangeTime, sizeof(currentTime), 0, cudaMemcpyDeviceToHost, stream_id);
where minChangeTime is computed in the kernel. Somehow, the update of this variable currentTime is not done properly when calling several kernels in parallel using OpenMP. I have provided a reproducible code at the end. The results I am expecting are:
0 65 186
1 130 251
2 195 316
3 260 381
4 325 446
...
But when enabling OpenMP, I do not get this difference of 121:
7 325 641
3 325 381
3 325 381
6 325 576
4 390 446
8 390 706
7 390 641
4 3063 446
What am I doing wrong or misunderstanding? If device memory variables are inappropriate here, what would be a better variable type?
#ifdef __CUDACC__
#define CUDA_HOSTDEV __host__ __device__
#define CUDA_DEVICE __device__
#define CUDA_GLOBAL __global__
#define CUDA_CONST __constant__
#else
#define CUDA_HOSTDEV
#define CUDA_DEVICE
#define CUDA_GLOBAL
#define CUDA_CONST
#endif
#include <cuda.h>
#include <cuda_runtime.h>
#include <omp.h>
#include "helper_cuda.h"
#include "helper_functions.h"
CUDA_DEVICE int minChangeTime;
CUDA_DEVICE bool foundMinimum;
CUDA_GLOBAL void reduction(
int* cu_adjustment_time
){
unsigned int tid = threadIdx.x;
unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
__syncthreads();
for (unsigned int s=1; s < blockDim.x; s *= 2) {
if (tid % (2*s) == 0){
atomicMin(&minChangeTime, cu_adjustment_time[tid+s]);
}
__syncthreads();
}
}
CUDA_GLOBAL void wh(int* cu_adjustment_time, int currentTime){
int tid = threadIdx.x + blockDim.x*blockIdx.x;
cu_adjustment_time[tid] = currentTime+tid;
}
void iteration_function(int *iRows, int time_data_index, int num_nets, cudaStream_t stream_id){
int currentTime = 0;
int limit = *iRows-1;
int starting_point = time_data_index;
time_data_index+=currentTime;
int* cu_adjustment_time;
cudaMalloc((void **)&cu_adjustment_time, sizeof(int) * (num_nets));
limit = (*iRows) - 1;
cudaStreamSynchronize(stream_id);
int loop = 0;
while(currentTime<limit){
cudaMemcpyToSymbolAsync(minChangeTime, &limit, sizeof(*iRows), 0, cudaMemcpyHostToDevice, stream_id);
wh<<<num_nets, 1, 0, stream_id>>>(
cu_adjustment_time,
currentTime
);
cudaStreamSynchronize(stream_id);
reduction<<<1, num_nets, 0, stream_id>>>(
cu_adjustment_time
);
cudaStreamSynchronize(stream_id);
cudaMemcpyFromSymbolAsync(&currentTime, minChangeTime, sizeof(currentTime), 0, cudaMemcpyDeviceToHost, stream_id);
cudaStreamSynchronize(stream_id);
currentTime+=num_nets;
time_data_index+=num_nets+1;
std::cout << loop << " " << currentTime << " " << time_data_index << std::endl;
loop++;
}
std::cout << "finished" << std::endl;
}
int main(){
//compiled with: nvcc no_fun.cu -Xcompiler=-fopenmp -o no_fun
int iRows = 3000;
int iter = 300;
int time_data_index = 121;
int num_nets = 64;
cudaStream_t streams[iter];
//#pragma omp parallel for simd schedule(dynamic) -> including this part causes undefined results
for(unsigned int j = 0; j < iter; j++){
cudaStreamCreate(&streams[j]);
iteration_function(&iRows, time_data_index, num_nets, streams[j]);
cudaStreamSynchronize(streams[j]);
cudaStreamDestroy(streams[j]);
}
}
| When multiple reduction kernels run simultaneously, there is a race condition on the global variable minChangeTime.
You need separate device memory for each kernel that should run in parallel. The simplest approach is to cudaMalloc a minChangeTime buffer in each thread instead of declaring it as a global __device__ variable, and pass it to the kernel.
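An untested sketch of that approach, adapted from the question's code (error checking omitted; the kernel takes a device pointer instead of the global __device__ symbol, so each OpenMP thread works on its own buffer):

```cuda
// Kernel now receives its own minChangeTime buffer as a parameter.
CUDA_GLOBAL void reduction(int* cu_adjustment_time, int* cu_minChangeTime){
    unsigned int tid = threadIdx.x;
    for (unsigned int s = 1; s < blockDim.x; s *= 2) {
        if (tid % (2*s) == 0){
            atomicMin(cu_minChangeTime, cu_adjustment_time[tid+s]);
        }
        __syncthreads();
    }
}

// Inside iteration_function: one buffer per thread/stream.
int* cu_minChangeTime;
cudaMalloc((void **)&cu_minChangeTime, sizeof(int));
// ...
cudaMemcpyAsync(cu_minChangeTime, &limit, sizeof(int),
                cudaMemcpyHostToDevice, stream_id);
reduction<<<1, num_nets, 0, stream_id>>>(cu_adjustment_time, cu_minChangeTime);
cudaMemcpyAsync(&currentTime, cu_minChangeTime, sizeof(int),
                cudaMemcpyDeviceToHost, stream_id);
cudaStreamSynchronize(stream_id);
// ...
cudaFree(cu_minChangeTime);
```

Since each stream now owns its reduction result, the OpenMP threads no longer write to a shared symbol.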
|
70,728,149 | 70,728,336 | Homework Beginner C++: Unable to perform Type Casting during addition of integers | In order to help us understand type casting in C++, we are required to perform addition of two ints as shown below. If we provide the two ints 4 and 5, the output should be 4 + 5 = 9.
I tried to follow this type casting tutorial without any success. Could someone please provide me a hint or something?
Quoting the assignment verbatim.
Your friend wrote a program called an adder. The adder is supposed to take two numbers inputted by a user and then find the sum of those numbers, but it’s behaving oddly.
Your first task is to figure out what is wrong with the adder. Your second task is to fix it.
Hint(s) to identify the problem
Try entering 1 and 1. You expect the output to be 2 but you get 11 instead. Similarly, if you enter 3 and 4, you expect the output to be 7 but you get 34. Remember, string concatenation also uses the + operator.
Hint(s) to identify the solution
The + operator functions differently based on the type of data that comes before and after it. What data types will cause the + operator to calculate a mathematical sum? What data type is present in the program now? How do you convert from one data type to another? Check out the Type Casting page for some idea
#include <iostream>
using namespace std;
int main() {
string num1;
string num2;
cout << "Type the first whole number and then press Enter or Return: ";
cin >> num1;
cout << "Type the second whole number and then press Enter or Return: ";
cin >> num2;
string sum = num1 + num2;
cout << ( num1 + " + " + num2 + " = " + sum ) << endl;
return 0;
}
| If you cast a character to an integer, it gets converted into its equivalent ASCII value. You either need to use std::stoi, or subtract '0' at every digit position (a bit repetitive); go with std::stoi.
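A minimal sketch of the std::stoi fix (the helper name is mine): convert each string to int before applying +, so the operator performs arithmetic instead of concatenation:

```cpp
#include <string>

// Hypothetical helper: convert both operands with std::stoi before adding,
// so + performs integer arithmetic instead of string concatenation.
int add_strings(const std::string &num1, const std::string &num2) {
    return std::stoi(num1) + std::stoi(num2);
}
```

In the program above, this corresponds to replacing string sum = num1 + num2; with int sum = stoi(num1) + stoi(num2); and using std::to_string(sum) when building the output line.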
|
70,728,570 | 70,735,443 | Why cpplint doesn't care about indentation/spaces and what is the alternative? | This is my C++ code:
int f() {
return 1 + 2;
}
I'm trying to run it through cpplint:
$ cpplint src/f.cpp
Done processing src/f.cpp
No errors found.
What am I doing wrong? Does cpplint really think that this code is properly formatted? If this is really the case, which style checker can help me catch errors in this piece of C++ code?
| Cpplint only checks a few special indentation cases; it is not a complete style checker. The reason is that cpplint does not properly parse files, but checks them line by line using regular expressions. That makes it hard to write checks for issues that require reasoning about multi-line contexts.
So cpplint does not check for the style flaws of your example.
|
70,728,874 | 70,728,932 | Template classes circular dependency issue (c++) | I wanted to implement an observer pattern for an event system, but the compiler gives me a bunch of errors and I'm currently not able to fix them. I have two templated classes in two different headers, a listener and an event dispatcher, and both need to store pointers to the other class.
In Dispatcher.h
//Dispatcher.h
#pragma once
#include "Listener.h"
template <typename T>
class Dispatcher {
private:
std::vector<Listener<T>*> m_listeners;
public:
Dispatcher() {}
~Dispatcher() {
for (Listener<T>* l : m_listeners) {
delete l;
}
}
void Attach(Listener<T>* listener) {
m_listeners.push_back(listener);
}
void Detach(Listener<T>* listener) {
std::vector<Listener<T>*>::iterator it = std::find(m_listeners.begin(), m_listeners.end(), listener);
if (it != m_listeners.end()) {
m_listeners.erase(it);
}
}
void Notify(T& event) {
for (Listener<T>* l : m_listeners) {
l->OnEvent(event);
}
}
};
In Listener.h
//Listener.h
#pragma once
#include "Dispatcher.h"
template <typename T>
class Listener {
private:
Dispatcher<T>* m_dispatcher;
protected:
Listener(Dispatcher<T>* dispatcher) : m_dispatcher(dispatcher) {
m_dispatcher->Attach(this);
}
public:
virtual ~Listener() {
m_dispatcher->Detach(this);
}
virtual void OnEvent(T& event) = 0;
};
The compiler seems to complain about not being able to find a declaration for the listener. What's wrong with this? Could someone explain? Thanks.
P.S. Obviously there's a cpp file which includes both headers.
| You can solve this problem of circular dependency by removing the #include "Listener.h" from Dispatcher.h and instead adding a forward declaration for class template Listener<> as shown below:
Dispatcher.h
#pragma once
#include <vector>
//no need to include Listener.h
//forward declaration
template<typename T> class Listener;
template <typename T>
class Dispatcher {
private:
std::vector<Listener<T>*> m_listeners;
public:
Dispatcher() {}
~Dispatcher() {
for (Listener<T>* l : m_listeners) {
delete l;
}
}
void Attach(Listener<T>* listener) {
m_listeners.push_back(listener);
}
void Detach(Listener<T>* listener) {
std::vector<Listener<T>*>::iterator it = std::find(m_listeners.begin(), m_listeners.end(), listener);
if (it != m_listeners.end()) {
m_listeners.erase(it);
}
}
void Notify(T& event) {
for (Listener<T>* l : m_listeners) {
l->OnEvent(event);
}
}
};
|
70,729,275 | 70,730,391 | How to use a constructor variable in the operator() function? | I'm trying to use a variable, declared in the constructor, in the operator() function. The variable is declared of type boost::multi_array<float, 2>. But the compiler still reports this error:
error: no match for ‘operator /’
I guess boost library has these predefined operators! Can anyone see what I'm doing wrong here?
#ifndef CORRELATOR_CHARACTERISTIC_FUNCTION_HPP
#define CORRELATOR_CHARACTERISTIC_FUNCTION_HPP
#include <halmd/numeric/blas/fixed_vector.hpp>
#include <cmath>
#include <boost/multi_array.hpp>
#include "read_box.hpp"
namespace correlator {
class Characteristic_function
{
public:
typedef std::shared_ptr<boost::multi_array<float, 2>> sample_type;
typedef halmd::fixed_vector<double, 3> result_type;
using k_type = boost::multi_array<float, 2>;
Characteristic_function()
{
// using array_2d_t = boost::multi_array<float, 2>;
read_box read_box_file;
// auto b = read_box_file.open_dataset("file.h5");
k_type frame_b = read_box_file.read_frame(1);
auto w = frame_b[0][0];
}
result_type operator()(sample_type const &first, sample_type const &second) const
{
result_type c_func = 0;
size_t N = first->size();
N = std::min(100UL, N);
Characteristic_function w;
// k_type Characteristic_function wave;
// std::cout << "First wave vector: " << wave[0][1] << std::endl;
double k = 2 * M_PI/w;
for (unsigned int i = 0; i < N; ++i) {
for (unsigned int j = 0; j <= 0; ++j) {
double dr = (*first)[i][j] - (*second)[i][j];
c_func[j] = exp(k*dr);
}
}
return c_func / static_cast<double>(N);
}
};
}
#endif /* ! CORRELATOR_CHARACTERISTIC_FUNCTION_HPP */
w is just a float, and I want to use this number in the operator() function.
| You can do something like this:
/* Characteristic function */
#ifndef CORRELATOR_CHARACTERISTIC_FUNCTION_HPP
#define CORRELATOR_CHARACTERISTIC_FUNCTION_HPP
#include <halmd/numeric/blas/fixed_vector.hpp>
#include <cmath>
#include <boost/multi_array.hpp>
#include <complex>
#include "read_box.hpp"
namespace correlator {
class Characteristic_function
{
private:
double w;
public:
typedef std::shared_ptr<boost::multi_array<float, 2>> sample_type;
typedef halmd::fixed_vector<double, 3> result_type;
// using k_type = boost::multi_array<float, 2>;
typedef boost::multi_array<float, 2> k_type;
Characteristic_function()
{
read_box read_box_file;
k_type frame_b = read_box_file.read_frame(1);
w = frame_b[0][0];
}
result_type operator()(sample_type const &first, sample_type const &second) const
{
result_type c_func = 0;
size_t N = first->size();
N = std::min(100000UL, N);
double k = 2 * M_PI / w;
for (unsigned int i = 0; i < N; ++i) {
for (unsigned int j = 0; j <= 0; ++j) {
double dr = exp( k*((*first)[i][j] - (*second)[i][j]) );
c_func[j] = dr;
}
}
return c_func / static_cast<double>(N);
}
};
}
#endif /* ! CORRELATOR_CHARACTERISTIC_FUNCTION_HPP */
The constructor now reads the value of w once and stores it in a member variable, which operator() can then use.
|
70,729,425 | 70,731,008 | Initialize std::vector with given array without std::allocator | I am using an external library which provides a function with the following interface:
void foo(const std::vector<int>& data);
I am receiving a very large C-style array from another library which has already been allocated:
int* data = bar();
Is there any way for me to pass on data to foo without allocating and copying each element? data is very large and therefore I want to avoid a copy and allocation if possible.
I could have used allocators, but foo is not templated for an allocator, so I don't believe this is possible.
I understand I may be asking for magic, but if it is possible that would be great. Of course if foo rather took an std::span this would not be a problem.
| Magic
This answer is magic: it depends on the implementation of the compiler's standard library.
We can forcibly access the internal storage of a vector.
Take g++ as an example. Its std::vector uses three protected pointers, _M_start, _M_finish, and _M_end_of_storage, to manage storage. So we can create a derived class that sets/resets these pointers to the value returned by bar() in its constructor and destructor.
Example code for g++:
static_assert(__GNUC__ == 7 && __GNUC_MINOR__ == 5 && __GNUC_PATCHLEVEL__ == 0);
class Dmy: public std::vector<int>
{
public:
Dmy(int *b, int *e)
{
_M_impl._M_start = b;
_M_impl._M_finish = e;
_M_impl._M_end_of_storage = _M_impl._M_finish;
}
~Dmy()
{
_M_impl._M_start = 0;
_M_impl._M_finish = 0;
_M_impl._M_end_of_storage = 0;
}
};
foo(Dmy(data, end_of_data));
|
70,729,666 | 70,729,784 | When to use c or cpp to accelerate a python or matlab implementation? | I want to create a special case of a room impulse response. I am following this implementation of a room-impulse-response generator. I am also following this tutorial on integrating C/C++ with Python.
According to the tutorial:
You want to speed up a particular section of your Python code by converting a critical section to C. Not only does C have a faster execution speed, but it also allows you to break free from the limitations of the GIL, provided you’re careful.
However, looking at the MATLAB example, all I see the C++ code segment doing are regular loops and mathematical computations. In what way will C/C++ be faster than Python/MATLAB in this example, or any other? Will any general C/C++ code run faster? If so, why? If not, what are the indicators I need to look for when opting for a C/C++ implementation? Which operations are faster in C/C++?
| Why use C++ to speed up Python
C++ code compiles into machine code, which makes it faster than interpreted languages (however, not all code written in C++ is faster than Python code if you don't know what you are doing). In C++ you can access the data pointers directly and use SIMD instructions on them to make loops several times faster. You can also multi-thread your loops and code to make them run even faster (either explicit multi-threading or tools like OpenMP). You can't do these things (at least properly) in a high-level language.
When to use C++ to speedup Python
Not every part of the code is worth optimizing. You should only optimize the parts that are computationally expensive and are a bottleneck of your program. These parts can be written in C or C++ and exposed to python by using bindings (by using pybind11 for example). Big machine learning libraries like PyTorch and TensorFlow do this.
Dedicated Hardware
Sometimes having a well optimized C++ CPU code is not enough. Then you can assess your problem and if it is suitable, you can use dedicated hardware. These hardware can go from low-level (like FPGA) to high-level hardware like dedicated graphics cards we usually have on our system (like CUDA programming for NVIDIA GPUs).
Regular Code Difference in Low and High Level Languages
Using a compiled language has great advantages even if you don't use multi-threading or SIMD operations. For example, looping over a C array or std::vector in C++ can be more than 100x faster than looping over Python lists or using for in MATLAB (recently, JIT compilation has been used to speed up high-level languages, but the difference still exists). This has many reasons, among them basic data types that are recognized at compile time and contiguous arrays. This is why people recommend using NumPy vectorized operations over simple Python loops (the same is recommended for MATLAB).
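As a rough illustration of that gap (a sketch; actual speedups vary by machine and array size), compare a plain Python loop against NumPy's compiled reduction:

```python
import numpy as np

def loop_sum(a):
    # Plain interpreted loop: one Python-level iteration per element.
    total = 0.0
    for x in a:
        total += x
    return total

a = np.arange(1_000_000, dtype=np.float64)
# np.sum runs as compiled C over a contiguous buffer; timing it against
# loop_sum (e.g. with timeit) typically shows a gap of one to two orders
# of magnitude, while the results agree.
assert loop_sum(a) == np.sum(a)
```

The loop pays interpreter overhead per element, while np.sum iterates over the contiguous float64 buffer in compiled code.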
|
70,729,816 | 70,729,876 | Can a braced initializer be used for non-type template argument in C++? | In the next program the second non-type template argument of struct A is initialized with {} in the alias template B<T>:
template<class T, T>
struct A{};
template<class T>
using B = A<T, {}>;
B<int> b;
GCC is the only compiler accepting this. Both Clang and MSVC reject the program with similar errors. Clang:
error: expected expression
MSVC:
error C2760: syntax error: '{' was unexpected here; expected 'expression'
Demo: https://gcc.godbolt.org/z/6bc3sx451
Which compiler is right here?
| I'd say GCC is wrong.
The grammar for template-argument in [temp.names] says that a template argument must either be a constant-expression, a type-id or an id-expression.
{} is neither an expression, nor a type, nor an (un)qualified name.
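As a side note, the value can be expressed portably by giving the braced initializer an explicit type: T{} is a genuine constant expression, which all three compilers accept (a sketch based on the snippet from the question):

```cpp
#include <type_traits>

template <class T, T>
struct A {};

// T{} is a constant expression of type T, so it is a valid template
// argument, unlike the bare braced-init-list {}.
template <class T>
using B = A<T, T{}>;

static_assert(std::is_same<B<int>, A<int, 0>>::value, "B<int> is A<int, 0>");
```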
|
70,730,831 | 70,735,005 | What's the mathematical reason behind Python choosing to round integer division toward negative infinity? | I know Python // rounds towards negative infinity and in C++ / is truncating, rounding towards 0.
And here's what I know so far:
|remainder|
-12 / 10 = -1, - 2 // C++
-12 // 10 = -2, + 8 # Python
12 / -10 = -1, 2 // C++
12 // -10 = -2, - 8 # Python
12 / 10 = 1, 2 // Both
12 // 10 = 1, 2
-12 / -10 = 1, - 2 // Both
= 2, + 8
C++:
1. m%(-n) == m%n
2. -m%n == -(m%n)
3. (m/n)*n + m%n == m
Python:
1. m%(-n) == -8 == -(-m%n)
2. (m//n)*n + m%n == m
But why Python // choose to round towards negative infinity? I didn't find any resources explain that, but only find and hear people say vaguely: "for mathematics reasons".
For example, in Why is -1/2 evaluated to 0 in C++, but -1 in Python?:
People dealing with these things in the abstract tend to feel that
rounding toward negative infinity makes more sense (that means it's
compatible with the modulo function as defined in mathematics, rather
than % having a somewhat funny meaning).
But I don't see C++ 's / not being compatible with the modulo function. In C++, (m/n)*n + m%n == m also applies.
So what's the (mathematical) reason behind Python choosing rounding towards negative infinity?
See also Guido van Rossum's old blog post on the topic.
|
But why Python // choose to round towards negative infinity?
I'm not sure if the reason why this choice was originally made is documented anywhere (although, for all I know, it could be explained in great length in some PEP somewhere), but we can certainly come up with various reasons why it makes sense.
One reason is simply that rounding towards negative (or positive!) infinity means that all numbers get rounded the same way, whereas rounding towards zero makes zero special. The mathematical way of saying this is that rounding down towards −∞ is translation invariant, i.e. it satisfies the equation:
round_down(x + k) == round_down(x) + k
for all real numbers x and all integers k. Rounding towards zero does not, since, for example:
round_to_zero(0.5 - 1) != round_to_zero(0.5) - 1
Of course, other arguments exist too, such as the argument you quote based on compatibility with (how we would like) the % operator (to behave) — more on that below.
Indeed, I would say the real question here is why Python's int() function is not defined to round floating point arguments towards negative infinity, so that m // n would equal int(m / n). (I suspect "historical reasons".) Then again, it's not that big of a deal, since Python does at least have math.floor() that does satisfy m // n == math.floor(m / n).
But I don't see C++ 's / not being compatible with the modulo function. In C++, (m/n)*n + m%n == m also applies.
True, but retaining that identity while having / round towards zero requires defining % in an awkward way for negative numbers. In particular, we lose both of the following useful mathematical properties of Python's %:
0 <= m % n < n for all m and all positive n; and
(m + k * n) % n == m % n for all integers m, n and k.
These properties are useful because one of the main uses of % is "wrapping around" a number m to a limited range of length n.
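Both properties can be checked exhaustively over a small range in Python:

```python
# Property 1: 0 <= m % n < n for all m and all positive n.
for m in range(-20, 20):
    for n in (1, 3, 10, 360):
        assert 0 <= m % n < n

# Property 2: (m + k * n) % n == m % n for all integers m, k and nonzero n.
for m in range(-10, 10):
    for n in (-7, -3, 3, 7):
        for k in range(-3, 4):
            assert (m + k * n) % n == m % n
```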
For example, let's say we're trying to calculate directions: let's say heading is our current compass heading in degrees (counted clockwise from due north, with 0 <= heading < 360) and that we want to calculate our new heading after turning angle degrees (where angle > 0 if we turn clockwise, or angle < 0 if we turn counterclockwise). Using Python's % operator, we can calculate our new heading simply as:
heading = (heading + angle) % 360
and this will simply work in all cases.
However, if we try to use this formula in C++, with its different rounding rules and correspondingly different % operator, we'll find that the wrap-around doesn't always work as expected! For example, if we start facing northwest (heading = 315) and turn 90° clockwise (angle = 90), we'll indeed end up facing northeast (heading = 45). But if we then try to turn back 90° counterclockwise (angle = -90), with C++'s % operator we won't end up back at heading = 315 as expected, but instead at heading = -45!
To get the correct wrap-around behavior using the C++ % operator, we'll instead need to write the formula as something like:
heading = (heading + angle) % 360;
if (heading < 0) heading += 360;
or as:
heading = (((heading + angle) % 360) + 360) % 360;
(The simpler formula heading = (heading + angle + 360) % 360 will only work if we can always guarantee that heading + angle >= -360.)
This is the price you pay for having a non-translation-invariant rounding rule for division, and consequently a non-translation-invariant % operator.
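The Python version of the compass example needs no correction term at all:

```python
def turn(heading, angle):
    # Python's % always yields a result in [0, 360) for a positive
    # modulus, so wrap-around works for turns in either direction.
    return (heading + angle) % 360

h = turn(315, 90)   # 45: northwest plus 90 degrees clockwise is northeast
h = turn(h, -90)    # 315: turning back works without any correction term
assert h == 315
```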
|
70,731,072 | 70,731,138 | Printing variable value using Reference to const gives different result | I am learning about references in C++. So i am trying out different examples to better understand the concept. One example that i could not understand is given below:
double val = 4.55;
const int &ref = val;
std::cout << ref <<std::endl; //prints 4 instead of 4.55
I want to know what is the problem and how can i solve it?
| The problem is that the reference ref is bound to a temporary object of type int that has the value 4. This is explained in more detail below.
When you wrote:
const int &ref = val;
a temporary of type int with the value 4 is created, and the reference ref is bound to this temporary instead of binding to the variable val directly. This happens because the type of val on the right-hand side is double, while on the left-hand side you have a reference to int; a reference can only bind directly to an object of a compatible type, so a converted temporary is used instead.
For solving the problem you should write:
const double &ref = val; //note int changed to double on the left hand side
The above statement means ref is a reference to const double. This means we cannot change the variable val using ref.
If you want to be able to change val using ref then you could simply write:
double &ref = val;
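A small sketch (the helper name is mine) that makes the temporary visible: after val changes, the int reference still sees the old truncated value, while the double reference tracks val:

```cpp
#include <utility>

// Hypothetical helper: returns what each reference sees after val changes.
std::pair<int, double> refs_demo() {
    double val = 4.55;
    const int &iref = val;     // binds to a temporary int holding 4
    const double &dref = val;  // binds directly to val
    val = 9.99;
    return {iref, dref};       // iref is still 4; dref now sees 9.99
}
```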
|
70,731,559 | 70,731,703 | How can I pass a class method as a parameter to another function and later call it, preferably making the variable class method signature explicit? | If I have a class that needs to call a parent class method with a class method as parameter I can do it with std::function + std::bind as shown below:
class A {
void complexMethod(std::function<void()> variableMethod) {
// other stuff ...
variableMethod();
// some other stuff..
}
}
class B : public A {
void myWay() {
// this and that
}
void otherWay() {
// other and different
}
void doingSomething() {
// Preparing to do something complex.
complexMethod(std::bind(&B::myWay, this));
}
void doingAnotherThing() {
// Different preparation to do some other complex thing.
complexMethod(std::bind(&B::otherWay, this));
}
}
How would I need to change the above code to implement the same thing using templates instead of std::function + std::bind?
And how about lambdas instead of std::function + std::bind? I still want to call B:myWay() and B::otherWay() but using lambdas. I don't want to substitute B:myWay() and B::otherWay() with lambdas.
Is there any implementation technique (one of the above or some other) were I would be able to make variableMethod return type and parameters explicit? How would I do it? Let's say the signature of variableMethod is:
bool variableMethod(int a, double b);
Which technique is recommended? Why (speed, flexibility, readility...)?
|
How would I need to change the above code to implement the same thing
using templates instead of std::function + std::bind?
And how about lambdas instead of std::function + std::bind? I
still want to call B:myWay() and B::otherWay() but using lambdas.
I don't want to substitute B:myWay() and B::otherWay() with
lambdas.
You can use a lambda, yes.
Something like [this]() { return myWay(); } that:
captures this, and
calls a method of the current object.
[Demo]
#include <iostream> // cout
class A {
protected:
template <typename F>
void complexMethod(F&& f) { f(); }
};
class B : public A {
void myWay() { std::cout << "myWay\n"; }
void otherWay() { std::cout << "otherWay\n"; }
public:
void doingSomething() {
complexMethod([this]() { return myWay(); });
}
void doingAnotherThing() {
complexMethod([this]() { return otherWay(); });
}
};
int main() {
B b{};
b.doingSomething();
b.doingAnotherThing();
}
// Outputs:
//
// myWay
// otherWay
Is there any implementation technique (one of the above or some other)
were I would be able to make variableMethod return type and
parameters explicit? How would I do it?
You could use const std::function<bool(int,double)>& f as the parameter of complexMethod that receives a function, and still pass a lambda. Notice, though, that the lambdas now receive (int i, double d) (it could be (auto i, auto d) as well).
[Demo]
#include <functional> // function
#include <ios> // boolalpha
#include <iostream> // cout
class A {
protected:
bool complexMethod(const std::function<bool(int,double)>& f, int i, double d)
{ return f(i, d); }
};
class B : public A {
bool myWay(int a, double b) { return a < static_cast<int>(b); }
bool otherWay(int a, double b) { return a*a < static_cast<int>(b); }
public:
bool doingSomething(int a, double b) {
return complexMethod([this](int i, double d) {
return myWay(i, d); }, a, b);
}
bool doingAnotherThing(int a, double b) {
return complexMethod([this](auto i, auto d) {
return otherWay(i, d); }, a, b);
}
};
int main() {
B b{};
std::cout << std::boolalpha << b.doingSomething(3, 5.5) << "\n";
std::cout << std::boolalpha << b.doingAnotherThing(3, 5.5) << "\n";
}
// Outputs:
//
// true
// false
Notice also the same could be accomplished with templates, although you wouldn't be making the signature explicit.
[Demo]
#include <functional> // function
#include <ios> // boolalpha
#include <iostream> // cout
class A {
protected:
template <typename F, typename... Args>
auto complexMethod(F&& f, Args&&... args) -> decltype(f(args...))
{ return f(args...); }
};
class B : public A {
bool myWay(int a, double b) { return a < static_cast<int>(b); }
bool otherWay(int a, double b) { return a*a < static_cast<int>(b); }
public:
bool doingSomething(int a, double b) {
return complexMethod([this](auto i, auto d) {
return myWay(i, d); }, a, b);
}
bool doingAnotherThing(int a, double b) {
return complexMethod([this](auto i, auto d) {
return otherWay(i, d); }, a, b);
}
};
int main() {
B b{};
std::cout << std::boolalpha << b.doingSomething(3, 5.5) << "\n";
std::cout << std::boolalpha << b.doingAnotherThing(3, 5.5) << "\n";
}
// Outputs:
//
// true
// false
Which technique is recommended? Why (speed, flexibility,
readility...)?
Item 34 of Scott Meyers' Effective Modern C++ is titled Prefer lambdas to std::bind. It ends with a summary saying: Lambdas are more readable, more expressive, and may be more efficient than using std::bind. However, it also mentions a case where std::bind may be useful over lambdas.
|
70,731,687 | 70,762,042 | How can I integrate QML MediaPlayer with C++ side | I have developed a QML-based video player program using the MediaPlayer element. The program has most of the basic functionality of a video player (play, pause, volume up/down, forward, backward, etc.). My next task is to add subtitles to a video, and I need to use the mediaObject property of the MediaPlayer element, but the QML side does not allow that; the documentation says:
Note: This property is not accessible from QML.
There is a description of mediaObject in the documentation which is confusing me:
mediaObject : variant
This property holds the native media object.
It can be used to get a pointer to a QMediaPlayer object in order to integrate with C++ code.
QObject *qmlMediaPlayer; // The QML MediaPlayer object
QMediaPlayer *player = qvariant_cast<QMediaPlayer *>(qmlMediaPlayer->property("mediaObject"));
What is this supposed to mean? How can I integrate QML MediaPlayer with C++? Any help would be great, thanks.
| This will depend on how exactly you launch the QML application. Suppose it is set up like this:
int main(int argc, char **argv)
{
// Q(Gui)Application setup...
QQmlApplicationEngine engine;
engine.load(QUrl("qrc:/main.qml"));
// ...
}
And somewhere inside the QML object hierarchy you have a MediaPlayer:
MediaPlayer {
objectName: "player"
// ...
}
It's important to set the objectName property, so that you can look up the MediaPlayer instance by this name in the C++ code. After loading the QML document as above, the engine has a single root object you can search with findChild or findChildren (adapted from this answer):
auto qmlPlayer = engine.rootObjects()[0]->findChild<QObject*>("player");
auto player = qvariant_cast<QMediaPlayer*>(qmlPlayer->property("mediaObject"));
// use the QMediaPlayer*
|
70,732,250 | 70,736,813 | Setting the size of array in a struct by passing as a const to a function - Non-type template argument is not a constant expression | The user sets k at run time. This number will be constant for the rest of the program. I want to create a function that, given k, creates a struct that includes an array of size k. However, the compiler returns this error:
Non-type template argument is not a constant expression
Any recommendation will be appreciated.
The code is like:
template <int N>
struct UL {
unsigned long ul [N];
};
void func(const int k){
UL<k> x; //problem is here
}
int main () {
int k;
cin >> k;
func(k);
return 0;
}
| A fundamental principle about templates is that:
Any template argument must be a quantity or value that can be determined at compile time.
This has dramatic advantages on the runtime cost of template entities.
But in your example, k is not a compile time constant and you're using it as a template argument and so as a consequence of the above quoted statement you get the error.
To solve your problem you can use a std::vector as shown below:
#include <iostream>
#include <vector>
struct UL {
std::vector<unsigned long> ul;
//constructor
UL(int k): ul(k) //this creates vector ul of size k
{
std::cout<<"size of vector set to: "<<ul.size()<<std::endl;
}
};
void func(const int k){
UL x(k); //pass k as argument to constructor
}
int main () {
int k;
std::cin >> k;
func(k);
return 0;
}
The output of the program can be seen here.
|
70,732,298 | 70,733,432 | QML Update Gridview from QML or C++ without emit dataChanged | I have a QAbstractListModel, which is tied to QML via
engine.rootContext()->setContextProperty
which is displayed in a GridView. It contains cards like Aces, Queens, etc. I would like to sort the cards in different ways (like color, type, etc.). The sorting function can be called via QML:
GridView
{
id:table_player
model: Playerfield
delegate: Card
{
card_id: model.card_id
Component.onCompleted:
{
Playerfield.sortDeck()
}
}
}
The C++ Code:
public slots:
Q_INVOKABLE void sortDeck();
It works, but only updates the Playfield, when a card is changed / a new card is played. I need a way to send a signal other than "emit dataChanged()" to QML to update. Or a way directly from QML to Update the Gridview with the changed model data (table_player.update() does not work).
void Deckmodel::sortDeck()
{
for(uint a = 0; a < cards.size(); a++)
{
for(uint b = a+1; b < cards.size(); b++)
{
if(cards[a].type > cards[b].type)
{
Card temp = cards[a];
cards[a] = cards[b];
cards[b] = temp;
}
}
}
//insert signal here
}
| Is Deckmodel the same thing as Playerfield in your code? When changing your model, you need to call begin/endResetModel(). That will automatically emit the appropriate signals so your QML should update correctly.
void Deckmodel::sortDeck()
{
beginResetModel();
for(uint a = 0; a < cards.size(); a++)
{
for(uint b = a+1; b < cards.size(); b++)
{
if(cards[a].type > cards[b].type)
{
Card temp = cards[a];
cards[a] = cards[b];
cards[b] = temp;
}
}
}
endResetModel();
}
|
70,732,683 | 70,732,858 | How to add blank lines between definitions? | I successfully managed to make clang-format format my code like I want. However, there is one thing that bugs me:
I want a blank line between definitions of structs/classes/functions and between declarations of functions. Currently, when formatting, clang-format removes blank lines, which makes everything condensed.
Here is my file:
---
AlignAfterOpenBracket: DontAlign
AlignTrailingComments: "true"
AllowAllArgumentsOnNextLine: "false"
AllowAllConstructorInitializersOnNextLine: "true"
AllowAllParametersOfDeclarationOnNextLine: "false"
AllowShortBlocksOnASingleLine: "false"
AllowShortCaseLabelsOnASingleLine: "false"
AllowShortFunctionsOnASingleLine: None
AllowShortIfStatementsOnASingleLine: Never
AllowShortLambdasOnASingleLine: None
AllowShortLoopsOnASingleLine: "false"
AlwaysBreakAfterReturnType: None
AlwaysBreakBeforeMultilineStrings: "false"
AlwaysBreakTemplateDeclarations: "Yes"
BinPackArguments: "true"
BinPackParameters: "true"
BreakBeforeTernaryOperators: "false"
BreakConstructorInitializers: AfterColon
BreakInheritanceList: AfterColon
ColumnLimit: "170"
CompactNamespaces: "false"
ConstructorInitializerAllOnOneLineOrOnePerLine: "true"
IncludeBlocks: Merge
IndentCaseLabels: "true"
IndentPPDirectives: BeforeHash
IndentWidth: "4"
IndentWrappedFunctionNames: "false"
KeepEmptyLinesAtTheStartOfBlocks: "false"
Language: Cpp
MaxEmptyLinesToKeep: "0"
NamespaceIndentation: All
PointerAlignment: Left
SortIncludes: "true"
SortUsingDeclarations: "true"
SpaceAfterCStyleCast: "false"
SpaceAfterLogicalNot: "false"
SpaceAfterTemplateKeyword: "false"
SpaceBeforeAssignmentOperators: "true"
SpaceBeforeCpp11BracedList: "false"
SpaceBeforeCtorInitializerColon: "false"
SpaceBeforeInheritanceColon: "false"
SpaceBeforeParens: Never
SpaceBeforeRangeBasedForLoopColon: "false"
SpaceInEmptyParentheses: "false"
SpacesBeforeTrailingComments: "3"
SpacesInAngles: "false"
SpacesInCStyleCastParentheses: "false"
SpacesInContainerLiterals: "false"
SpacesInParentheses: "false"
SpacesInSquareBrackets: "false"
Standard: Auto
UseTab: Always
TabWidth: "4"
Here is how it looks:
I want a blank line between the two structs.
| You have
MaxEmptyLinesToKeep: "0"
which needs to be set to at least 1 instead, so that empty lines between definitions are not removed.
Importantly, the value needs to be an unsigned type, not a string.
So the correct setting for what you want should be
MaxEmptyLinesToKeep: 1
Note that this won't work if you don't already have at least one blank line between definitions, but from clang-format 14, you can use
SeparateDefinitionBlocks: Always
which will add an empty line between every definition. The other options are Leave which doesn't change anything, and Never, which will remove blank lines, if any.
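Taken together, the relevant part of a .clang-format file targeting clang-format 14+ might then read (the rest of the options from the question stay unchanged):

```yaml
MaxEmptyLinesToKeep: 1
SeparateDefinitionBlocks: Always
```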
I don't think there's an option that lets you do this in previous versions.
Source: Clang format style options.
|
70,732,979 | 70,732,995 | Is using headers multiple times bad? | Lets say I am using header guards,
#ifndef MAIN_H
#define MAIN_H
#include "foo.h"
#include "some_header_file.h"
... // Some Code
#endif
Inside foo.h file also with Header guards.
#ifndef FOO_H
#define FOO_H
#include "some_header_file.h"
... // Some Code
#endif
As you can see, the main file has two headers, one of which is a duplicate. I have three questions:
Does Header guards prevent duplicate header files?
Does the compiler optimize it and removes it?
Is this a bad practice and the extra header file should be deleted from main file?
|
Does Header guards prevent duplicate header files?
Yes. The first encountered inclusion will bring the content of the header to the translation unit, and the header guard causes successive inclusions to be empty which prevents the content of the header from being duplicated. This is exactly the reason why header guards are used.
or is this a bad practice and the extra header file should be deleted from main file?
No, the duplicate inclusion is not a bad practice. If the "main" header depends on any declaration from "some_header_file.h", then "main" absolutely should include "some_header_file.h" directly, whether another header - even one included by "main" - also includes it or not.
Relying on a transitive inclusion would generally be a bad practice - i.e. in this case it may be bad to rely on the detail that "foo.h" includes "some_header_file.h" when including "foo.h" into "main". Such assumptions often can cause programs to break unexpectedly when they are modified. In this case, if "foo.h" was modified to no longer depend on "some_header_file.h", and that inclusion was removed, then that change would suddenly cause the assumption to fail, and "some_header_file.h" would no longer be included into "main" as a result of change that didn't involve "main" at all. That would be bad.
|
70,733,043 | 70,733,123 | bitwise operation xor mask | I'm sorry if this seems like a stupid question I'm just having trouble understanding bits and bitwise operation assignment
I have two integers: one is a mask and the other is arbitrary. I'm supposed to XOR the integer with the 16 most significant bits of the mask, but I'm not exactly sure whether "significant" means the first 16 or the last 16 bits, and I don't know how to verify whether my operation is even correct.
I tried this
int main(){
uint32_t mask = 3405705229;
uint16_t arbitrary = 0xABCD;
arbitrary^=mask&16;
printf("%X\n",arbitrary);
}
I assumed mask&16 would give me only the first 16 bits of mask
But when I print I still get ABCD so that can't be right..
I also tried arbitrary^=(mask>>16)&16; but that didn't do anything either
| For an unsigned 8-bit value, you might have:
Bit number: 7 6 5 4 3 2 1 0
Bit value: 1 0 0 0 1 1 0 1 = 141 = 0x8D
The 4 most significant bits (MSB) are bits 7-4; the 4 least significant bits (LSB) are bits 3-0.
You'd extract the 4 most significant bits from uint8_t x = 141; using:
uint8_t y = (x >> 4) & 0xF;
The output would be y equal to 8 or 0x08.
You can, of course, expand this to accommodate larger numbers of bits.
Note that MSB and LSB are often used for {Most|Least} Significant Bytes rather than bits — but that still applies in a similar way to the logical value in chunks of 8-bit bytes. As paulsm4 noted in a comment, this applies to the logical value. CPUs come in two main flavours: big-endian (SPARC, older PowerPC, Motorola, …) and little-endian (Intel and many other chips, including modern PowerPC at least as an option). The difference is the order in which the bytes of a multi-byte (integer) value are stored:
For a 4-byte unsigned integer with the value 0x12345678, the two types store the data in opposite orders:
Address 0x1000 0x1001 0x1002 0x1003
Big-Endian 0x12 0x34 0x56 0x78
Little-Endian 0x78 0x56 0x34 0x12
Big-Endian MSB ... ... LSB
Little-Endian LSB ... ... MSB
These days, little-endian is more widespread. However, many network protocols and other systems mandate big-endian. Most of the time, you don't have to worry about it. Sometimes, you do — and it is important to know when you do and when you don't. Data on a single machine, not shared elsewhere, usually doesn't require you to worry about byte order. You don't have to worry about the byte order of data in (single-byte) character strings.
|
70,733,262 | 70,733,946 | Template function specialization for specific template ( not type ) | I have some templated class types like A,B,C as follows:
template < typename T >
class A{};
template < typename T >
class B{};
template < typename T >
class C{};
And now I want to have a function which accepts in general any type like:
template < typename T>
void Func()
{
std::cout << "Default " << __PRETTY_FUNCTION__ << std::endl;
}
And now I want to specialize the function to only accept one of the given template classes like:
template < typename T>
void Func<A<T>>()
{
std::cout << "All A Types " << __PRETTY_FUNCTION__ << std::endl;
}
Which is not allowed because it is only a partial specialization. OK.
I think concepts may help, but it feels like I'm overcomplicating things. My solution is:
template < typename T, template <typename > typename OUTER >
bool Check;
template < typename INNER, template < typename > typename OUTER, template < typename> typename T>
constexpr bool Check< T<INNER>, OUTER > = std::is_same_v< OUTER<INNER>, T<INNER>>;
template < typename T >
concept IsA = Check< T, A >;
template < typename T >
concept IsB = Check< T, B >;
template < IsA T >
void Func()
{
std::cout << "All A Types " << __PRETTY_FUNCTION__ << std::endl;
}
template < IsB T >
void Func()
{
std::cout << "All B Types " << __PRETTY_FUNCTION__ << std::endl;
}
int main()
{
Func<A<int>>();
Func<B<int>>();
Func<C<int>>();
}
It feels a bit complicated to me. Can that be simplified? Would be nice if the Check template can be removed. Any idea?
See full example here live
|
It feels a bit complicated to me. Can that be simplified? Would be nice if the Check template can be removed. Any idea?
Much of the complexity and inelegance is in the fact that you need a new concept for every class template. Write a general-purpose and reusable concept, and it is no longer complicated to use.
template <typename T, template <typename...> class TT>
constexpr bool is_instantiation_of_v = false;
template <template <typename...> class TT, typename... TS>
constexpr bool is_instantiation_of_v <TT<TS...>, TT> = true;
template <class C, template<typename...> class TT>
concept instantiation_of = is_instantiation_of_v<C, TT>;
The same principle as yours, except the checker is usable with a template taking any number of type arguments. Meanwhile, the concept accepts the same parameters. The first parameter has a special meaning and is implicitly understood to be the constrained template parameter in the short-hand syntax. The rest (the template template-parameter) must be given explicitly.
How can it be used? Like this
template <instantiation_of<A> T>
int Func()
{
return 'A';
}
template <instantiation_of<B> T>
int Func()
{
return 'B';
}
Got a new class template to constrain over? No problem, this concept works without additional boiler-plate.
template <instantiation_of<D> T>
int Func()
{
return 'D';
}
|
70,733,508 | 70,736,286 | Linking up C++ and Python | I am trying to set up a program in Visual Studio where I link up a C++ file and a Python file. The printing statement from the Python statement still outputs and I am able to change it. However, whenever I run the program my console says:
Start 1
2
00000000
File "C:\Users\marce\source\repos\PythonCPPSample\Release\setup.py", line 4
print("Hello everyone! My name is Marcel.")
IndentationError: expected an indented block after function definition on line 3
3
Is this really a syntax problem? What can I do to fix this? Here is my code:
source.cpp
#include <Python.h>
#include <iostream>
#include <string>
using namespace std;
void main()
{
cout << "Start 1 \n";
Py_Initialize();
cout << "2\n";
PyObject* my_module = PyImport_ImportModule("setup");
cerr << my_module << "\n";
PyErr_Print();
cout << "3\n";
PyObject* my_function = PyObject_GetAttrString(my_module,
"printsomething");
cout << "4\n";
PyObject* my_result = PyObject_CallObject(my_function, NULL);
Py_Finalize();
}
setup.py
import re
import string
def printsomething():
print("Hello everyone! My name is Marcel.")
| This error message appears in the console application, but even if the printing statement were indented underneath the function definition in setup.py, it would not fix the problem. That is because the Python file actually being loaded was not the one in the correct release folder.
So, look at the path to the .py file shown on the console. Right-click on the source files in Solution Explorer, choose "Add Existing Item", navigate to the file referred to by that path, and select it. In that file, you can fix the indentation problem and the problem as a whole.
|
70,733,518 | 70,733,550 | Why defining a variable inside a struct doesn't cause link errors? | struct foo{
int bar1 = 5;
const static int bar2 = 6;
};
Sharing foo using a header file among multiple translation units doesn't cause a link error for bar1 and bar2, why is that?
To my knowledge, bar1 doesn't even exist until an instance of foo is created and each instance of bar1 will have unique symbol, so no link errors can happen. bar2 is not even a definition, it's a declaration with initializer and needs to be initialized with const int foo::bar2; in only one file so again no link errors can happen.
Referring to this answer: https://stackoverflow.com/a/11301299
Is my understanding correct?
| Violations of The One Definition Rule do not require a diagnostic or a compiler error.
Presuming that bar2 is referenced in some translation unit: some compilers may successfully compile and link the resulting code. Other compilers may not and reject it (presumably at the linking stage). The C++ standard indicates that this is ill-formed, but a formal compiler diagnostic is not mandatory. That's just what the C++ standard says.
The reason why some compilers may let this skate is because they'll inline all the references to the static class member, hence there won't be any unresolved references that produce a failure at the link stage.
|
70,733,545 | 70,733,692 | finding top 10 elements in the hash table which contains 250,000 integer elements in C++ | I want to find the top 10 elements in the hash table. The hash table contains 250,000 elements and these elements are only integer values. I don't want to sort the whole table. When I find the top 10 elements, that is enough for me; the rest is not important. What is the FASTEST (runtime) way to do it? Maybe heap sort? C++
| I'll assume that the integers are in a collection you can iterate over.
Imagine you need to find the single max element - you can iterate over the collection once keeping track of the largest element you have seen so far.
Now modify the above approach for the top two elements you have seen so far.
...
Now modify the above approach for the top ten elements you have seen so far.
This "algorithm" has linear complexity which is as good as it gets.
There are other ways of achieving linear time - all better than what you will get from Heapsort.
|
70,733,643 | 70,733,710 | gtest EXPECT_CALL does not working when expected call is nested | Look at such code snippet:
class Foo {
public:
Foo() {
}
void invoke_init(const int i) {
init(i);
}
private:
void init(const int i) {
std::cout << "init " << i << std::endl;
}
};
class MockedFoo : public Foo {
public:
MOCK_METHOD(void, init, (const int));
};
TEST(Foo, TestInitCalled) {
MockedFoo mock;
mock.invoke_init(1);
EXPECT_CALL(mock, init(1));
}
As expected, init() is called and i see corresponding output. But the test is failed. Why? What is wrong there?
| Foo::init needs to be protected instead of private. It also needs to be virtual.
Without protected as its visibility attribute, it can't really be overridden in the inherited class. Without virtual, gmock can't do much with it either.
Instead of this:
private:
void init(const int i) {
std::cout << "init " << i << std::endl;
}
This:
protected:
virtual void init(const int i) {
std::cout << "init " << i << std::endl;
}
|
70,734,093 | 70,739,661 | How to load ImGui image from byte array with opengl? | I am trying to render an image in my c++ ImGui menu; I believe the end code would be something like ImGui::Image(ImTextureID, ImVec2(X, Y));. I already have a byte array that includes the image I want to render, but don't know how to go about loading it into that ImTextureID that's being passed in. I have found how to do it with Direct X using D3DXCreateTextureFromFileInMemoryEx but need to know the opengl equivalent for doing this.
| The 'ImTextureID' in ImGui::Image is an opaque handle (a void* by default), into which you store a value corresponding to a texture that has been generated in your graphics environment (DirectX or OpenGL). With OpenGL, that value is the integer texture name.
A way to do so in OpenGL is as follows (I'm not familiar with DirectX but I bet that 'D3DXCreateTextureFromFileInMemoryEx' does pretty much the same):
Generate the texture name (this 'name' is just an integer, and it is the integer that ImGui uses as ImTextureID) using glGenTextures()
Set UV sampling parameters for the newly generated texture using glTexParameteri()
Bind the texture to the currently active texture unit using glBindTexture()
Upload pixel data to the GPU using glTexImage2D()
Typically that would look like something like this:
GLuint textureID;
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, [width of your texture], [height of your texture], 0, GL_RGBA, GL_FLOAT, [pointer to first element in array of texture pixel values]);
It's been a while since I did this in c++ so I might be wrong on some details. But the documentation is pretty good, and best to read it anyway to figure out how to make a texture compatible with the type of texture data you intend to input: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml
Once setting up the texture like that, you use the value of textureID in the ImGui Image call.
|
70,734,791 | 70,734,925 | C++ removing a element in string array which appears 5 times | Here is the code I have, I am having trouble finding a way to remove all Apple elem in the array. I am able to count the apples in the array. I hope someone can help...
string items[10] = { "Apple", "Oranges", "Pears", "Apple", "bananas", "Apple", "Cucumbers", "Apple", "Lemons", "Apple" };
//Counts the total amount of apples
int n = sizeof(items) / sizeof(items[0]);
cout << "Number of times Apple appears : "
<< count(items, items + n, "Apple");
//remove the element Apple from array
if (string items[].contains("Apple"))
{
items[].remove("Apple");
}
| Some options you'd have would be:
Walk your array of items and substitute your "Apple" strings for empty strings.
Use a std::vector of strings and either a) initialize it with the array of items and then call std::erase_if (C++20) to remove the "Apple" strings, or b) initialize it without elements and then call std::copy_if together with std::back_inserter to append the non-"Apple" strings.
[Demo]
#include <algorithm> // copy_if, transform
#include <iostream> // cout
#include <string>
#include <vector> // erase_if
int main()
{
{
std::string items[10] = { "Apple", "Oranges", "Pears", "Apple", "bananas", "Apple", "Cucumbers", "Apple", "Lemons", "Apple" };
std::transform(std::begin(items), std::end(items), std::begin(items), [](auto& s) {
return (s == "Apple" ? "" : s);
});
for (const auto& s : items) { std::cout << s << ", "; }
std::cout << "\n";
}
{
const std::string items[10] = { "Apple", "Oranges", "Pears", "Apple", "bananas", "Apple", "Cucumbers", "Apple", "Lemons", "Apple" };
std::vector<std::string> v{std::cbegin(items), std::cend(items)};
std::erase_if(v, [](auto& s) { return s == "Apple"; });
for (const auto& s : v) { std::cout << s << ", "; }
std::cout << "\n";
}
{
const std::string items[10] = { "Apple", "Oranges", "Pears", "Apple", "bananas", "Apple", "Cucumbers", "Apple", "Lemons", "Apple" };
std::vector<std::string> v{};
std::copy_if(std::cbegin(items), std::cend(items), std::back_inserter(v), [](auto& s) {
return s != "Apple";
});
for (const auto& s : v) { std::cout << s << ", "; }
}
}
// Outputs:
//
// , Oranges, Pears, , bananas, , Cucumbers, , Lemons, ,
// Oranges, Pears, bananas, Cucumbers, Lemons,
// Oranges, Pears, bananas, Cucumbers, Lemons,
|
70,735,056 | 70,735,340 | two arrays, find the missing numbers | Given two arrays, first has 'n' numbers and the second one has 'n-m' numbers; the second array is not in the same order as the first. If there are several numbers with the same value, they end up in the order of the positions in the original array. Also, all the values from the second array are also found in the first array. I have to find the 'm' missing numbers in the order in which they appear in the first array.
input:
7 3
12 34 45 29 100 87 32
100 87 12 34
output:
45 29 32
#include <iostream>
using namespace std;
int main()
{
int n, missing_number = 0, m, i, j, v[1201], w[1201];
cin >> n >> m;
for (i = 0; i < n; ++i) {
cin >> v[i];
}
for (i = 0; i < n - m; ++i) {
cin >> w[i];
}
for (i = 0; i < n; ++i) {
missing_number = 1;
for (j = 0; j < n - m; ++j) {
if (v[i] == w[j]) {
missing_number = -1;
}
}
if (missing_number == 1) {
cout << v[i] << " ";
}
}
if (m == 0)
cout << "there are no missing numbers";
return 0;
}
my code doesn't work for repeating numbers like:
7 3
2 6 1 9 3 2 4
4 1 2 3
where my output should be:
6 9 2
| Your program outputs the correct result for inputs without duplicates. However, I felt that I needed to refactor your code to improve its readability and remove the bad practices used in it.
The below is the same as your code with a bit of improvement:
#include <iostream>
#include <array>
#include <limits>
int main( )
{
std::array<int, 1201> arr1; // use std::array instead of raw arrays
std::array<int, 1201> arr2;
std::size_t arr1_size { }; // renamed n
std::size_t arr2_size { }; // renamed m
std::cin >> arr1_size >> arr2_size;
if ( arr2_size == 0 ) // this if statement should be here to help end
{ // the program early on to prevent the execution
// of the for-loops
std::cout << "There are no missing numbers.\n";
return 0;
}
for ( std::size_t idx { }; idx < arr1_size; ++idx ) // use std::size_t
{ // for the loop counters
std::cin >> arr1[ idx ];
}
for ( std::size_t idx { }; idx < arr1_size - arr2_size; ++idx )
{
std::cin >> arr2[ idx ];
}
for ( std::size_t arr1_idx { }; arr1_idx < arr1_size; ++arr1_idx )
{
bool isNumberMissing { true }; // this should be of type bool
for ( std::size_t arr2_idx { }; arr2_idx < arr1_size - arr2_size; ++arr2_idx )
{
if ( arr1[ arr1_idx ] == arr2[ arr2_idx ] )
{
isNumberMissing = false;
// this is my trick for solving your code's bug
arr2[ arr2_idx ] = std::numeric_limits<int>::min( );
break; // break here to improve performance
}
}
if ( isNumberMissing )
{
std::cout << arr1[ arr1_idx ] << " ";
}
}
std::cout << '\n';
}
Sample input/output #1:
7 3
12 34 45 29 100 87 32
100 87 12 34
45 29 32
Sample input/output #2:
7 3
2 6 1 9 3 2 4
4 1 2 3
6 9 2
Note: See Why is "using namespace std;" considered bad practice?
|
70,735,301 | 70,735,746 | Rcpp does not compile on Mac OS Monterey, R 4.1.2., clang error | Trying to get Rcpp to work on R 4.1.2 on Mac OS Monterey using an Intel computer.
> library(Rcpp)
> evalCpp("2 + 2")
clang++ -mmacosx-version-min=10.13 -std=gnu++14 -I"/Library/Frameworks/R.framework/Resources/include" -DNDEBUG -I"/Library/Frameworks/R.framework/Versions/4.1/Resources/library/Rcpp/include" -I"/private/var/folders/jz/977gqfr957g_rlgw1h05152w0000gq/T/Rtmp8hmDZ5/sourceCpp-x86_64-apple-darwin17.0-1.0.8" -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -I/usr/local/include -fPIC -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk -c file152f71a55f97c.cpp -o file152f71a55f97c.o
clang++ -mmacosx-version-min=10.13 -std=gnu++14 -dynamiclib -Wl,-headerpad_max_install_names -undefined dynamic_lookup -single_module -multiply_defined suppress -Wl,-rpath,/Library/Frameworks/R.framework/Resources/lib /Library/Frameworks/R.framework/Resources/lib/libc++abi.1.dylib -L/Library/Frameworks/R.framework/Resources/lib -L/usr/local/lib -o sourceCpp_2.so file152f71a55f97c.o -F/Library/Frameworks/R.framework/.. -framework R -Wl,-framework -Wl,CoreFoundation
Error in sourceCpp(code = code, env = env, rebuild = rebuild, cacheDir = cacheDir, :
Error 1 occurred building shared library.
clang: error: no such file or directory: '/Library/Frameworks/R.framework/Resources/lib/libc++abi.1.dylib'
make: *** [sourceCpp_2.so] Error 1
I have followed Coatless Professor's instructions:
https://thecoatlessprofessor.com/programming/cpp/r-compiler-tools-for-rcpp-on-macos/
I believe Xcode Command Line Tools and gfortran are installed:
$which gcc
/usr/bin/gcc
$which gfortran
/usr/local/bin/gfortran
and I have removed both ~/.R/Makevars and ~/.Renviron.
Any help would be much appreciated -- thanks so much!
| There is an old reference to a prior version of the clang compiler present.
In R, please type:
unlink("~/.R/Makevars")
unlink("~/.Renviron")
Please restart R and, then, try:
Rcpp::evalCpp("1+1")
|
70,735,488 | 70,735,703 | How to call a Python function from C++ on Ubuntu 20.04 | I'm currently in a class where I'm asked to establish integration between Python and C++ as we will be working with both later in the class. My instructor has provided some simple C++ and Python code so we can test the integration and has also provided instructions on how to set this up using Visual Studio. I currently only have access to Ubuntu and my (very old) laptop won't be able to handle a VM.
I'm currently using Clion as my IDE and Anaconda as a virtual environment. I've spent most of the day trying to figure this out and I believe the issue is related to my CMake file as I know pretty much nothing about CMake right now, but plan to learn it soon.
The CMakeLists.txt, main.cpp, and myPython.py are all in the main project directory.
CMakeLists.txt
cmake_minimum_required(VERSION 3.21)
project(myProject)
set(CMAKE_CXX_STANDARD 14)
add_executable(myProject main.cpp)
target_include_directories(myProject PUBLIC /usr/include/python3.9)
target_link_directories(myProject PUBLIC /usr/lib/python3.9)
target_link_libraries(myProject PUBLIC python3.9)
For CMakeLists.txt I also tried the following but got the same error/output regardless of which one I used
cmake_minimum_required(VERSION 3.21)
project(myProject)
set(CMAKE_CXX_STANDARD 14)
find_package(Python3 REQUIRED COMPONENTS Interpreter Development)
add_executable(myProject main.cpp)
target_link_libraries(myProject PUBLIC Python3::Python)
main.cpp
#include <python3.9/Python.h>
#include <iostream>
#include <string>
using namespace std;
// the main in the provided code is void but Clion gave me an error saying to change it to int
int main()
{
cout << "Start 1 \n";
Py_Initialize();
cout << "2\n";
PyObject* my_module = PyImport_ImportModule("myPython");
cerr << my_module << "\n";
PyErr_Print();
cout << "3\n";
PyObject* my_function = PyObject_GetAttrString(my_module,
"printsomething");
cout << "4\n";
PyObject* my_result = PyObject_CallObject(my_function, NULL);
Py_Finalize();
return 0;
}
myPython.py
import re
import string
def printsomething():
print("Hello from Python!")
The expected output is
Start 1
2
01592AE0 // this will vary since it's the .py files location
3
4
Hello from Python!
what I'm getting when I run it is
Start 1
2
0 // output is red
3
ModuleNotFoundError: No module named 'myPython' // output is red
Since I'm not familiar with CMake and would consider myself a "general user" when it comes to Ubuntu, I may need some additional details on any steps provided.
| Add this in your main function after Py_Initialize();
PySys_SetPath(L".:/usr/lib/python3.8");
this is the search path for Python modules (adjust the version to match your installed Python, e.g. python3.9). Add . to search in the current path.
|
70,735,492 | 70,735,611 | Inferring namespace of freestanding function | Question on namespace qualification: Why isn't the namespace of the function inferred below?
namespace X {
void func(int) {}
struct Z{
void func(){
//func(int{}); does not compile
X::func(int{}); // this works
}
};
}
int main() {
X::Z z;
z.func();
}
| This specific part of C++ can be generally called "unqualified name lookup". This term describes taking a single identifier in a C++ program and then determining which actual type, or object, or function, or class, it is referencing.
For example, there can be many things called rosebud in a C++ program, so
rosebud();
This can reference a class method of this name, and this calls it. Or this could be an object with an overloaded () operator, which invokes it. There could also be something called rosebud() in a different namespace. Unqualified name lookup specifies which one of these this particular rosebud references.
struct Z{
void func(){
Here we're in a method of class Z. Unqualified name lookup first looks for identifiers that are members of this class. Only if it's not found then does unqualified name lookup looks in the global namespace, to see if something is there.
func(int{}); // does not compile
Well, there happens to be a method with the name func in this class, so this func's unqualified name lookup resolves to this method. This fails, because that func method takes no parameters.
Unqualified name lookup considers where exactly the unqualified identifier occurs. When it occurs in a class, unqualified name lookup searches the class's members, first.
Even though there's also a func function in the global scope, unqualified lookup finds the class method, and that's it (quoting from the cited link):
[unqualified] name lookup examines the scopes as described below,
until it finds at least one declaration of any kind, at which time the
lookup stops and no further scopes are examined.
End of the road. The fact that there's also a func in the global namespace is immaterial. For unqualified name lookup, once "something" is found, it better work, or else.
These are just one of the rules of unqualified name lookup.
X::func(int{}); // this works
Well, yes. This explicitly references func in the X namespace. This symbol is (partially) qualified with an explicit namespace reference.
|
70,736,140 | 70,738,597 | Why does Qt Designer add so much space? | Why does Qt make so much space? How can I fix this? I just want to create two labels, two text boxes and a login button. I'm trying to make a login form.
Something like this:
Why does it need so much space to just have small buttons?
This is the nicest I've been able to get it to look, but even this looks terrible.
| Just add a Vertical Spacer to the top and the bottom, then you will have your expected result.
If you would like to add it through the code and not in Designer, you would need to add it on the QLayout with QBoxLayout::addStretch(int stretch = 0) or QBoxLayout::addSpacing(int size), depending on your need
|
70,736,588 | 70,736,823 | Why does MSVC's STL implementation cast to const volatile char* here? | I was looking through some of the standard library's implementation for the usual containers (vector, unordered_map, etc...) when I came across the following, in the xutility header:
template <class _CtgIt, class _OutCtgIt>
_OutCtgIt _Copy_memmove(_CtgIt _First, _CtgIt _Last, _OutCtgIt _Dest) {
auto _FirstPtr = _To_address(_First);
auto _LastPtr = _To_address(_Last);
auto _DestPtr = _To_address(_Dest);
const char* const _First_ch = const_cast<const char*>(reinterpret_cast<const volatile char*>(_FirstPtr));
const char* const _Last_ch = const_cast<const char*>(reinterpret_cast<const volatile char*>(_LastPtr));
char* const _Dest_ch = const_cast<char*>(reinterpret_cast<const volatile char*>(_DestPtr));
const auto _Count = static_cast<size_t>(_Last_ch - _First_ch);
_CSTD memmove(_Dest_ch, _First_ch, _Count);
if constexpr (is_pointer_v<_OutCtgIt>) {
return reinterpret_cast<_OutCtgIt>(_Dest_ch + _Count);
} else {
return _Dest + (_LastPtr - _FirstPtr);
}
}
Does anybody know why _First_ch and _Last_ch are first cast to const volatile char* type then immediately cast to const char*? I'm assuming it's to stop the compiler from optimizing prematurely, for some specific cases, but no concrete examples come to mind.
| If the target type of the pointer is volatile-qualified, it is not possible to use reinterpret_cast to directly cast to const char*.
reinterpret_cast is not allowed to cast away const or volatile. const_cast however can do this, while not being able to change the pointer's target type itself.
I think a C-style cast would also always work in this situation, but reasoning about it is a bit more difficult, since it attempts multiple C++-style conversion sequences, only the last of which is a reinterpret_cast followed by a const_cast.
It may be just a style choice to not use C-style casts here.
|
70,736,973 | 70,737,165 | Strange Behavior with pthreads | I have a rather strange problem: despite locking the critical code section in the threads that I am launching, I do not get the right results -
#include <stdio.h>
#include <pthread.h>
#include <thread>
#include <mutex>
#define NUMTAGS 6
std::mutex foo,bar;
pthread_mutex_t mutex;
typedef struct PKT{
int ii;
int jj;
}pkt;
void *print(void *pk)
{
pthread_mutex_lock(&mutex);
pkt *x = (pkt *)pk;
printf("--> %d %d %d\n", x->ii, x->jj, x->ii*NUMTAGS + x->jj);
pthread_mutex_unlock(&mutex);
}
int main(int argc, char **argv)
{
int count = 3;
int ii, jj;
pthread_t t_id[count*NUMTAGS];
int t_status[count*NUMTAGS];
pkt p;
pkt *p_p = &p;
for(ii=0; ii < count; ii+=1){
for(jj=0; jj < NUMTAGS; jj+=1){
p.ii = ii;
p.jj = jj;
//printf("<> %d %d %d\n", p_p->ii, p_p->jj, p_p->ii*NUMTAGS + p_p->jj);
t_status[ii*NUMTAGS + jj]=pthread_create(&t_id[ii*NUMTAGS + jj], NULL, print, (void*)p_p);
}
}
for(ii=0; ii < count; ii+=1){
for(jj=0; jj < NUMTAGS; jj+=1){
pthread_join(t_id[ii*NUMTAGS + jj], NULL);
}
}
}
The resulting answer is randomly wrong...
--> 0 3 3
--> 1 0 6
--> 1 0 6
--> 1 0 6
--> 1 0 6
--> 1 0 6
--> 1 1 7
--> 1 2 8
--> 1 3 9
--> 1 5 11
--> 2 0 12
--> 2 0 12
--> 2 1 13
--> 2 3 15
--> 2 4 16
--> 2 5 17
--> 2 5 17
--> 2 5 17
Can someone tell me what is the correct way to launch the print thread and have the body execute atomically ?
Thanks,
Raj
| Thanks David,
After your comment, I modified my code and it now works -
#include <stdio.h>
#include <pthread.h>
#include <thread>
#include <mutex>
#define NUMTAGS 6
std::mutex foo,bar;
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
typedef struct PKT{
int ii;
int jj;
}pkt;
void *print(void *pk)
{
pthread_mutex_lock(&mutex);
pkt *x = (pkt *)pk;
printf("--> %d %d %d\n", x->ii, x->jj, x->ii*NUMTAGS + x->jj);
pthread_mutex_unlock(&mutex);
return NULL;
}
int main(int argc, char **argv)
{
int count = 3;
int ii, jj;
pthread_t t_id[count*NUMTAGS];
int t_status[count*NUMTAGS];
pkt p[count*NUMTAGS];
pkt *p_p = p;
/*pre-assigned structure before launching thread*/
for(ii=0; ii < count; ii+=1){
for(jj=0; jj < NUMTAGS; jj+=1){
p[NUMTAGS*ii + jj].ii= ii;
p[NUMTAGS*ii + jj].jj= jj;
}
}
for(ii=0; ii < count; ii+=1){
/*assign p_p[i] to thread i*/
for(jj=0; jj < NUMTAGS; jj+=1, p_p+=1){
//printf("<> %d %d %d\n", p_p->ii, p_p->jj, p_p->ii*NUMTAGS + p_p->jj);
t_status[ii*NUMTAGS + jj]=pthread_create(&t_id[ii*NUMTAGS + jj], NULL, print, (void*)p_p);
}
}
for(ii=0; ii < count; ii+=1){
for(jj=0; jj < NUMTAGS; jj+=1){
pthread_join(t_id[ii*NUMTAGS + jj], NULL);
}
}
}
|
70,737,101 | 70,737,151 | Pointer is not incremented (moved) to expected location or pointer arithmetic is not providing expected answer | In the following snippet of code I expect the pointer to move to the next location, i.e. current location + sizeof(datatype), but that does not happen unless I cast to int.
#include <iostream>
#include <string.h>
int sizeOf()
{
int a = 0;
int* b = &a;
int* c = &a;
c++;// expect pointer to move "current location + sizeof(int)"
return c - b;
}
int main()
{
std::cout << "Size of int is " << sizeOf() << ", actual size of int is "<< sizeof(int) << "\n";
return 0;
}
Output:
Size of int is 1, actual size of int is 4
Expectation: if I increment a pointer of type int, the pointer should move to current location + sizeof(int), but that is not happening.
But it actually works when I use following line
return (int)c - (int)b;
Compiler warns
my-pc $ g++ test.cpp -fpermissive
test.cpp: In function ‘int sizeOf()’:
test.cpp:10:17: warning: cast from ‘int*’ to ‘int’ loses precision [-fpermissive]
10 | return (int)c - (int)b;
| ^
test.cpp:10:26: warning: cast from ‘int*’ to ‘int’ loses precision [-fpermissive]
10 | return (int)c - (int)b;
| ^
my-pc $ ./a.out
Size of int is 4, actual size of int is 4
Now I get the output I expected. I know int* is typecast to int. I want to know why it did not work in the first case but worked in the second.
| Pointer subtraction gives the difference in units of the pointed-to object's size, not in bytes.
So a difference of 1 means the pointers point 1 * sizeof(int) bytes apart.
For example,
&( a[2] ) - &( a[0] )
always gives 2 for a C array. It doesn't matter if it's an array of char, int, or some struct.
And it's easy to show why this is necessary, given that a[i] is equivalent to *(a+i) for a C array.
&( a[2] ) - &( a[0] )
= &( *( a + 2 ) ) - &( *( a + 0 ) )
= ( a + 2 ) - ( a + 0 )
= 2
|
70,737,335 | 70,737,414 | Can anyone explain how to use unique( ) in the vector? | #include<bits/stdc++.h>
using namespace std;
int main() {
vector <int> v = {1, 2 , 3, 2 , 1};
cout << v[0] << " " << v[1] << " " << v[2] << " " << v[3] << " " << v[4] << endl;
sort(v.begin(), v.end());
cout << v[0] << " " << v[1] << " " << v[2] << " " << v[3] << " " << v[4] << endl;
unique(v.begin(), v.end());
cout << v[0] << " " << v[1] << " " << v[2] << " " << v[3] << " " << v[4] << endl;
}
Output is
1 2 3 2 1
1 1 2 2 3
1 2 3 2 3
I don't understand what the unique function is or how it works.
| Aside from the fact that bits/stdc++.h is not a proper header as far as the C++ standard is concerned (please use iostream, vector and algorithm instead):
From: https://en.cppreference.com/w/cpp/algorithm/unique
Eliminates all except the first element from every consecutive group of equivalent elements from the range [first, last) and returns a past-the-end iterator for the new logical end of the range.
Removing is done by shifting the elements in the range in such a way that elements to be erased are overwritten
Thus the end of the vector might contain "garbage", but this is fine, as the new end of it is also returned.
#include <vector>
#include <iostream>
#include <algorithm>
using namespace std;
int main() {
vector <int> v = {1, 2 , 3, 2 , 1};
cout << v[0] << " " << v[1] << " " << v[2] << " " << v[3] << " " << v[4] << endl;
sort(v.begin(), v.end());
cout << v[0] << " " << v[1] << " " << v[2] << " " << v[3] << " " << v[4] << endl;
auto new_end = unique(v.begin(), v.end());
auto b = v.begin();
while (b!=new_end) {
std::cout << *b++ << " ";
}
std::cout << "\n";
return 0;
}
demo
|
70,737,355 | 70,737,455 | Is there a better way to modify a char array in a struct? C++ | I am trying to read a CString from an edit control box in MFC and put it into a char array in a struct, but since I cannot do something like clientPacket->path = convertfunction(a); I had to create another char array to store the string and then copy it element by element.
That felt like a band-aid solution. Is there a better way to approach this? I'd like to learn how to clean up the code.
CString stri;//Read text from edit control box and convert it to std::string
GetDlgItem(IDC_EDIT1)->GetWindowText(stri);
string a;
a = CT2A(stri);
char holder[256];
strcpy_s(holder,a.c_str());
int size = sizeof(holder);
struct packet {
char caseRadio;
char path[256];
};
packet* clientPacket = new packet;
for (int t = 0; t < size; t++) {
clientPacket->path[t] = holder[t] ;
}
EDIT:This is currently what I went with:
CString stri;//Read text from edit control box and convert it to std::string
GetDlgItem(IDC_EDIT1)->GetWindowText(stri);
string a = CT2A(stri);
struct packet {
char caseRadio;
char path[CONSTANT];//#define CONSTANT 256
};
packet* clientPacket = new packet;
a = a.substr(0, sizeof(clientPacket->path) - 1);
strcpy_s(clientPacket->path, a.c_str());
I ran into a problem where I got "1path" instead of "path"; it turned out the server had read caseRadio='1' as part of the path. I fixed it by reading out caseRadio first on the server.
| I don't see the need to create the intermediate 'holder' char array.
I think you can just directly do
strcpy(clientPacket->path, a.c_str());
You may want to do this:
a= a.substr(0, sizeof(clientPacket->path)-1);
before the strcpy to avoid buffer overrun depending on whether the edit text is size limited or not.
|
70,737,474 | 70,737,584 | Is TMP really faster if the recursion depth is very deep? | I made a simple sqrt struct using TMP. It goes like:
template <int N, int i>
struct sqrt {
static const int val = (i*i <= N && (i + 1)*(i + 1) > N) ? i : sqrt<N, i - 1 >::val;
};
but it causes an error since it does not have an exit condition, so I added this:
template <int N>
struct sqrt<N, 0> {
static const int val = 0;
};
So as I understand it, when we use TMP, the compiler goes into a recursive loop until it meets the exit condition (for sqrt, usually when i = 0 or i = 1).
But if we make a recursive sqrt function, compiler doesn't have to dive until it meets i = 0, because at some point, recursive function ends at exact location where condition (i*i <= N && (i + 1)*(i + 1) > N) is met.
So let's assume we put a very large value into our sqrt; then our TMP version does extra instantiations of sqrt<N, i - 1>::val all the way down, compared to a recursive runtime function, and that seems wasteful to me.
Am I getting it wrong? Or TMP is really worth it even in this kind of cases?
| The thing is that in TMP you can't go very deep by default. The depth is limited, but the limit can be changed (see this). The other thing is that you write your TMP code with recursion, but it can be compiled into non-recursive code, so it doesn't have the extra cost of saving state and doing a function call as it goes deeper. So it is a tradeoff between compile time, executable size and runtime performance. If your N is not known at compile time, then you can't use TMP.
|
70,737,686 | 70,737,929 | free memory of c++ lambda when execution finished | I'm writing a network function in C++; the HTTP request runs on a background thread and uses a lambda callback when HTTP data is received. But I don't know how to release the lambda; I hope for some help.
void foo()
{
// `func` must be a heap variable because it is used asynchronously.
auto func = new auto ([&](std::string response){
printf("recv data: %s", response.c_str());
});
std::thread t([&]{
sleep(2); // simulate a HTTP request.
std::string ret = "http result";
(*func)(ret);
});
t.detach();
// The foo function has finished, but is the `func` lambda still in memory?
}
int main()
{
foo();
getchar(); // simulate UI Event Loop.
return 0;
}
| You can capture lambda inside a lambda:
void foo()
{
std::thread t(
[func = [](std::string response) {
printf("recv data: %s", response.c_str());
}](){
sleep(2); // simulate a HTTP request.
std::string ret = "http result";
func(ret);
});
t.detach();
// The foo function has finished, but is the `func` lambda still in memory?
}
or if it's supposed to be shared, you can use shared ownership semantics via shared_ptr and then capture it into the lambda by value in order to increase its reference count:
void foo()
{
auto lambda = [](std::string response){
printf("recv data: %s", response.c_str());
};
std::shared_ptr<decltype(lambda)> func{
std::make_shared<decltype(lambda)>(std::move(lambda))
};
std::thread t([func]{
sleep(2); // simulate a HTTP request.
std::string ret = "http result";
(*func)(ret);
});
t.detach();
}
Or, for non-capturing lambdas, one can just turn the lambda into a function pointer and not worry about lifetime at all:
void foo()
{
auto func_{
[](std::string response){
printf("recv data: %s", response.c_str());
}
};
std::thread t([func=+func_]{ //note the + to turn lambda into function pointer
sleep(2); // simulate a HTTP request.
std::string ret = "http result";
(*func)(ret);
});
t.detach();
}
|
70,738,082 | 70,738,105 | Is there any way to assign default value to a map passed as a parameter by reference to a function in C++? | I'm trying to use map in my recursive functions (as an implementation of DP). Here, I wrote a simple Fibonacci function (I know I can use a normal array here but I want to get some idea which I can use in other functions which will take more complex inputs like pairs, strings, objects etc).
#include <bits/stdc++.h>
using namespace std;
#define int long long
int fib(int n, map<int, int> &memo); // What I did
/* What I want:
Instead of pulling an external map as an argument,
the function will automatically create an empty map as a default parameter at the first call
and pass it by reference in the recursive calls. */
/* I tried some stuff
int fib(int n, map<int,int> memo={}); // Slow
int fib(int n, map<int, int> &memo, bool reset); // Works, but I want to know if there are any better idea which doesn't take 3 inputs
int fib(int n, map<int, int> &memo={}); // Doesn't compile (my target is something close to this)
*/
signed main()
{
map<int,int> mp; // Needs to be empty before being passed to fib()
int n;
cin >> n;
cout << n << ' ' << fib(n, mp); // I want to use just fib(n)
return 0;
}
int fib(int n, map<int, int> &memo) // The external memo needs to be empty
{
if(n==!!n) return n;
if(memo.find(n)!=memo.end()) return memo[n];
if(n<0)
{
if(n%2) return fib(-n, memo);
return -fib(-n, memo);
}
memo[n]=fib(n-1, memo)+fib(n-2, memo);
return memo[n];
}
I want to know if there are any ways to implement the empty map parameter in C++.
| You can simply overload the function:
int fib(int n)
{
std::map<int, int> map;
return fib(n, map);
}
int fib(int n, map<int, int> &memo) { ... }
Is this what you meant to achieve?
Sidenote: You should remove #define int long long, it's not legal C++ and utterly confusing.
|
70,738,482 | 70,738,629 | How to pass initialization values for the member array in C++ template class arguments? | Let's say I have the following template C++ class
template<uint32_t FILTER_ORDER, uint32_t BUFFER_LENGTH>
class Filter {
public:
Filter() {}
private:
float data_buffer[BUFFER_LENGTH];
const float filter_coefficients[FILTER_ORDER + 1];
};
I have been looking for a way to pass the coefficients of the filter, i.e. the individual items of the member array filter_coefficients, at compile time. My goal is to be able to define a filter object in the following manner:
Filter<3, 8, 1.0, 0.3678, 0.1353, 0.04978> filter;
where the last four non-type arguments of the template are the initialization values for the member array filter_coefficients. Is there a way to do that in C++?
| Yes, it is possible in C++20; before that, float is not allowed as a non-type template parameter.
template<uint32_t FILTER_ORDER, uint32_t BUFFER_LENGTH, float...values>
class Filter {
public:
Filter() {}
private:
float data_buffer[BUFFER_LENGTH];
const float filter_coefficients[FILTER_ORDER + 1]{values...};
};
But this embeds the values into the type, is that really what you want? It will prevent storing arrays of these objects because they now have different types.
I would recommend just using constexpr constructor. Why do you need them to be compile-time anyway?
C++14 variant with constexpr constructor:
#include <cstdint>
#include <array>
template<std::uint32_t FILTER_ORDER, std::uint32_t BUFFER_LENGTH>
class Filter {
public:
Filter() {}
template<typename...Floats,typename=decltype(std::array<float,FILTER_ORDER+1>{std::declval<Floats>()...})>
constexpr Filter(Floats...floats):filter_coefficients{floats...}{}
private:
// Consider using `std::array` here too.
float data_buffer[BUFFER_LENGTH];
const float filter_coefficients[FILTER_ORDER + 1];
};
int main(){
Filter<3, 4> filter{1.0f,2.0f,3.0f,1.0f};
}
There is basic type check to ensure the elements are floats, otherwise this variadic template can shadow some other constructors.
|
70,738,820 | 70,739,020 | OPENGL C++ tiles rendered with small gaps between them | I have been attempting to recreate my pygame RPG in C++ due to performance problems in pygame. I have started rendering tiles; however, when the tiles are rendered and you go into full screen, you can see lines between them, which I believe is caused by rounding errors when I calculate the locations of the tiles or the texture vertices. I made a system to convert integer coordinates to the -1.0f to 1.0f coordinates OpenGL uses, which I think is rounding incorrectly, but I couldn't figure out what to change to remove the gaps between the tiles.
Here is an image of what I see
https://ibb.co/PDZ5fRT
Here is the code for my tile class
class Tile {
public:
//coordinates for placement and texture coordinates
/*
b = bottom
t = top
l = left
r = right
x,y = x,y lol dumbass
t = texture coordinate
*/
float bl_x, tl_x, tr_x, br_x, bl_y, tl_y, tr_y, br_y;
float t_bl_x, t_bl_y, t_tl_x, t_tl_y, t_tr_x, t_tr_y, t_br_x, t_br_y;
Tile(int x, int y, int width, int height, std::vector<float>* vertices, std::vector<unsigned int>* indices) {
unsigned int tex_x = 0.0f;
unsigned int tex_y = 2.0f * 32.0f;
int SCR_WIDTH = 640;
int SCR_HEIGHT = 360;
//convert x,y coords to open gl coords (between -1 and 1) (prop means proportion of screen)
float x_prop = float(float(x) / float(SCR_WIDTH)) * 2.0f - 1.0f;
float y_prop = float(float(y) / float(SCR_HEIGHT)) * 2 - 1;
float width_prop = float(float(x + width) / float(SCR_WIDTH)) * 2.0f - 1.0f;
float height_prop = float(float(y + height) / float(SCR_HEIGHT)) * 2 - 1;
//convert texture x,y coords open gl coords (BETWEEN 0 and 1) (prop means proportion of screen)
float tex_x_prop = float(float(tex_x) / 641.0f);
float tex_y_prop = float(float(tex_y) / 288.0f);
float tex_width_prop = float(float(tex_x + width) / 641.0f);
float tex_height_prop = float(float(tex_y + height) / 288.0f);
tl_x = x_prop;
tl_y = y_prop;
bl_x = x_prop;
bl_y = height_prop;
br_x = width_prop;
br_y = height_prop;
tr_x = width_prop;
tr_y = y_prop;
//texture coordinates
t_bl_x = tex_x_prop;
t_bl_y = tex_y_prop;
t_tl_x = tex_x_prop;
t_tl_y = tex_height_prop;
t_tr_x = tex_width_prop;
t_tr_y = tex_height_prop;
t_br_x = tex_width_prop;
t_br_y = tex_y_prop;
// --- push back coordinates to the vertex buffer----
//bottom left coords
vertices->push_back(bl_x);
vertices->push_back(bl_y);
vertices->push_back(0.0f);
//bottom left color
vertices->push_back(0.0f);
vertices->push_back(0.0f);
vertices->push_back(0.0f);
//bottom left texture coords
vertices->push_back(t_bl_x);
vertices->push_back(t_bl_y);
//top left coords
vertices->push_back(tl_x);
vertices->push_back(tl_y);
vertices->push_back(0.0f);
//top left color
vertices->push_back(0.0f);
vertices->push_back(0.0f);
vertices->push_back(0.0f);
//top left texture coords
vertices->push_back(t_tl_x);
vertices->push_back(t_tl_y);
//top right coords
vertices->push_back(tr_x);
vertices->push_back(tr_y);
vertices->push_back(0.0f);
//top right color
vertices->push_back(0.0f);
vertices->push_back(0.0f);
vertices->push_back(0.0f);
//top right texture coords
vertices->push_back(t_tr_x);
vertices->push_back(t_tr_y);
//bottom right coords
vertices->push_back(br_x);
vertices->push_back(br_y);
vertices->push_back(0.0f);
//bottom right color
vertices->push_back(0.0f);
vertices->push_back(0.0f);
vertices->push_back(0.0f);
//bottom right texture coords
vertices->push_back(t_br_x);
vertices->push_back(t_br_y);
// --- push back indices ---
unsigned int p = indices->size() / 6 * 4;
indices->push_back(0 + p);
indices->push_back(1 + p);
indices->push_back(3 + p);
indices->push_back(1 + p);
indices->push_back(2 + p);
indices->push_back(3 + p);
}
};
| Fixed by changing the texture filtering from GL_LINEAR to GL_NEAREST when rendering :) Thank you Raildex
|
70,738,860 | 70,741,451 | How to store a variable or object on desired memory location? | Let's suppose I have an object of a class as shown below:
Myclass obj;
Now I want to store this object at a memory location of my choosing. How can I do this? Is it possible or not?
I have created an array class which simply inserts integer data at the respective indexes, but the size of the array is not declared inside the class. Inserting a small amount of data, like 50 to 60 integers, works; but when I use a loop to insert a large amount of data into this class, my program crashes. I think this is because the writes run into memory that already holds some value, and the program stops itself. Now I want to analyze memory so that my array-class object starts from an address with the maximum number of empty usable memory locations after it.
I think it is not allowing more insertions because of some already-allocated memory location. Now I want to place my object at the memory address which has the maximum available empty usable space. Please guide me.
#include<iostream>
using namespace std;
class newarray
{
public:
int n;
int arr[]; //size not declared
int* ptr;
newarray():n(0),ptr(NULL)
{
}
void getadress(int indexno) //for getting address of an index
{
ptr=&arr[indexno];
cout<<endl<<ptr;
}
void insert(int a) //inserting an element
{
arr[n]=a;
n++;
}
void display()
{
for(int i=0;i<=n;i++)
{
cout<<endl<<arr[i];
}
}
};
int main()
{
newarray a1;
int x=1;
for(int i=0;i<100;i++) // works fine with 100 insertions
// but crashes if increased to 300 or more
{
a1.insert(x);
x++;
}
a1.display();
}
| You wrote
class newarray
{
int arr[]; //size not declared
This is not allowed. Unfortunately, your compiler did not warn you when compiling. You only discovered that this was a problem when your code crashed.
You then wonder about "some allocated memory location" and picking one manually. That's not the problem. int arr[] without explicitly specifying bounds is allowed only in a few contexts, such as a function declaration. The function declaration doesn't need a size because the size in that case is determined by the caller.
Since this is C++, just use std::vector<int> for your dynamic arrays.
|
70,739,127 | 70,739,211 | How does the compiler deduce the return type from this lambda expression? | I am creating a web-socket server in C++ with the Boost library. My starting point was a Boost example from this site.
I have a question about this part of the code in the on_run method:
ws_.set_option(websocket::stream_base::decorator(
[](websocket::response_type& res)
{
res.set(http::field::server,
std::string(BOOST_BEAST_VERSION_STRING) +
" websocket-server-async");
}));
As can be seen, ws_.set_option takes a websocket::stream_base::decorator as a parameter, and this parameter is created from a lambda expression; according to this site, decorator takes a Decorator&& as its parameter.
How does the lambda expression know, it should return a Decorator&&, when there is no trailing-return-type or return statement in the lambda expression?
Last, but not least, where does the websocket::response_type &res come from? There is no () supplying this parameter after the lambda body.
| websocket::stream_base::decorator is a template function with one template type parameter.
template<class Decorator>
decorator( Decorator&& f );
In this call
ws_.set_option(websocket::stream_base::decorator(
[](websocket::response_type& res)
{
res.set(http::field::server,
std::string(BOOST_BEAST_VERSION_STRING) +
" websocket-server-async");
}));
the function is called accepting the lambda expression
[](websocket::response_type& res)
{
res.set(http::field::server,
std::string(BOOST_BEAST_VERSION_STRING) +
" websocket-server-async");
}
as its template argument. The lambda expression has by default the return type void because there is no return statement in the lambda expression.
How does the lambda expression know, it should return a Decorator&&
It doesn't. The lambda expression itself is used as the argument for the declared function template parameter Decorator&&, so Decorator is deduced to be the lambda's closure type.
|
70,740,150 | 70,740,572 | Is it possible to provide a constructor with copy elision for member initialization? | I'm testing different modes for initializing class members with the following small code:
struct S {
S() { std::cout << "ctor\n"; }
S(const S&) { std::cout << "cc\n"; }
S(S&&) noexcept{ std::cout << "mc\n"; }
S& operator=(const S&) { std::cout << "ca\n"; return *this; }
S& operator=(S&&) noexcept{ std::cout << "ma\n"; return *this; }
~S() { std::cout << "dtor\n"; }
};
struct P1 {
S s_;
};
struct P2 {
P2(const S& s) : s_(s) {}
S s_;
};
struct P3 {
P3(const S& s) : s_(s) {}
P3(S&& s) : s_(std::move(s)) {}
S s_;
};
int main() {
S s;
std::cout << "------\n";
{
P1 p{s}; // cc
}
std::cout << "---\n";
{
P1 p{S{}}; // ctor = copy elision
}
std::cout << "------\n";
{
P2 p{s}; // cc
}
std::cout << "---\n";
{
P2 p{S{}}; // ctor + cc
}
std::cout << "------\n";
{
P3 p{s}; // cc
}
std::cout << "---\n";
{
P3 p{S{}}; // ctor + mc
}
std::cout << "------\n";
}
As you can see in the comments, only in the case of the aggregate initialization P1{S{}} does copy elision happen, so the class is initialized without any copy/move constructor calls. I wonder if it is possible to provide a constructor which initializes members directly, like the aggregate initializer does. Any ideas?
Update:
I wonder if I have understood standard incorrectly, but from my understanding, something strange happens here:
For initializer list we have:
class-or-identifier ( expression-list(optional) ):
Initializes the base or member named by class-or-identifier using direct initialization or, if expression-list is empty, value-initialization
For direct initialization we have:
If T is a class type, if the initializer is a prvalue expression whose type is the same class as T (ignoring cv-qualification), the initializer expression itself, rather than a temporary materialized from it, is used to initialize the destination object. (copy elision)
So from this, I thought that for a member initializer like s_(std::move(s)) copy elision should happen, shouldn't it?
|
I wonder if it is possible to provide a constructor which initialize members directly like aggregate initializer.
Certainly. Write a constructor that doesn't accept an argument of the member type, but rather accepts arguments that are forwarded to the constructor of the member. In your case, the member type is default constructible, so you don't need to forward any arguments:
struct P4 {
P4(): s() {}
S s;
};
|
70,740,701 | 70,740,822 | Permutation calculator isn't working in C++ | Good day! I am having some trouble with my permutation calculator. For some reason, the end result is always 1. We are not allowed to use recursion at the moment so I opted to use a for loop for finding the factorials.
Here is my code:
#include <iostream>
using namespace std;
int fact(int x);
int perm(int y, int z);
int n, r, npr;
char ch;
int main()
{
do{
cout<<"Enter n (object/s): ";
cin>> n;
cout<< "Enter r (sample/s): ";
cin>> r;
npr= perm(n,r);
cout<< "Value of "<< n<<"P"<< r<< " = "<< npr<<endl;
cout<<"Would you like to repeat this again? [y/n] \n";
cin>> ch;
cout<< "\n";
} while(ch=='y');
cout<< "Thank you and have a nice day!";
return 0;
}
int fact(int x)
{
int number, cum = 1;
for(number=1;number<=n;number++)
cum=cum*number;
return cum;
}
int perm(int y, int z)
{
return fact(n) / fact(n-r);
}
| The problem in your code is unnecessary use of global variables. This function:
int fact(int x)
{
int number, cum = 1;
for(number=1;number<=n;number++)
cum=cum*number;
return cum;
}
Always calculates the factorial of n. No matter what parameter you pass when calling it, hence here:
int perm(int y, int z)
{
return fact(n) / fact(n-r);
}
fact(n) returns the factorial of n, but fact(n-r) also returns the factorial of n, so the result is always 1. Remove the globals and make the functions actually use their arguments:
#include <iostream>
int fact(int x);
int perm(int y, int z);
int main() {
int n = 0;
int r = 0;
char ch = 'n';
do{
std::cout << "Enter n (object/s): \n";
std::cin >> n;
std::cout << "Enter r (sample/s): \n";
std::cin >> r;
auto npr = perm(n,r);
std::cout << "Value of "<< n << "P" << r << " = " << npr << "\n";
std::cout << "Would you like to repeat this again? [y/n] \n";
std::cin >> ch;
std::cout << "\n";
} while(ch=='y');
std::cout << "Thank you and have a nice day!";
}
int fact(int x) {
int cum = 1;
for(int number=1;number<=x;number++) {
cum=cum*number;
}
return cum;
}
int perm(int y, int z) {
return fact(y) / fact(y-z);
}
|
70,740,797 | 70,742,553 | How to set cell values using Excel12v C interface | I have an Excel12v function using XLOPER to set some values on an Excel sheet. I can create XLLs fine as per Microsoft's XLL guide. I authored xladd-derive for Rust, which enables this and allows returning scalars and ranges of values very simply.
However, rather than returning a value, I would like to set an arbitrary cell to a value. The xlSet function demonstrated below does this and works fine.
short WINAPI xlSetExample()
{
XLOPER12 xRef, xValue;
xRef.xltype = xltypeSRef;
xRef.val.sref.count = 1;
xRef.val.sref.ref.rwFirst = 204;
xRef.val.sref.ref.rwLast = 205;
xRef.val.sref.ref.colFirst = 1;
xRef.val.sref.ref.colLast = 1;
xValue.xltype = xltypeInt;
xValue.val.w = 12345;
Excel12v(xlSet, 0, 2, (LPXLOPER12)&xRef, (LPXLOPER12)&xValue);
return 1;
}
but only works if it's called from a VBA macro
Sub test()
Application.Run("xlSetExample","12345")
End Sub
Is there an equivalent xlf* or xlc* function that allows one to set cell values but does not need to be called from a VBA macro?
| In general, Excel prevents spreadsheet functions from changing the values in cells. In effect, spreadsheet functions are given a read-only view of the values in the sheet.
This is the documentation for xlSet which states:
xlSet behaves as a Class 3 command-equivalent function; that is, it is
available only inside a DLL when the DLL is called from an object,
macro, menu, toolbar, shortcut key, or the Run button in the Macro
dialog box (accessed from View tab on the ribbon starting in Excel
2007, and the Tools menu in earlier versions).
The reason for this is to prevent circular references or other actions that would break or confuse the calculation tree. Excel would struggle to determine dependencies between cells if a function in one cell could change other cells' contents.
Consider the hypothetical function AddOne() which takes a number, adds one and uses this to set the cell immediately to the right via xlSet (or otherwise). What would happen if the formula in cell A1 were =AddOne(B1)?
This Excel SDK reference gives more information. Namely:
Different Types of Functions
Excel4 and Excel12 distinguish among three classes of functions. The
functions are classified according to the three states in which Excel
might call the DLL.
Class 1 applies when the DLL is called from a worksheet as a result of
recalculation.
Class 2 applies when the DLL is called from within a function macro or
from a worksheet where it was registered with a number sign (#) in the
type text.
Class 3 applies when a DLL is called from an object, macro, menu,
toolbar, shortcut key, ExecuteExcel4Macro method, or the
Tools/Macro/Run command. For more information, see Excel Commands,
Functions, and States.
Only Class 3 functions can call xlSet.
So, in summary, the Excel application really doesn't want users to change one cell from a function call in another. As always, if you work hard enough you could probably achieve this (eg get the COM application object pointer by some method and modify the cell that way, or set up a callback to modify the cell asynchronously), but you might have unpredictable results.
|
70,741,051 | 70,745,424 | Why is my natural log function so imprecise? | Firstly, I'm using this approximation of the natural log. Or look here (4.1.27) for a better representation of the formula.
Here's my implementation:
#include <limits>
#include <stdexcept>
#include <string>

using ull = unsigned long long; // type used for the denominator below

constexpr double eps = 1e-12;
constexpr double my_exp(const double& power)
{
double numerator = 1;
ull denominator = 1;
size_t count = 1;
double term = numerator / denominator;
double sum = 0;
while (count < 20)
{
sum += term;
numerator *= power;
#ifdef _DEBUG
if (denominator > std::numeric_limits<ull>::max() / count)
throw std::overflow_error("Denominator has overflown at count " + std::to_string(count));
#endif // _DEBUG
denominator *= count++;
term = numerator / denominator;
}
return sum;
}
constexpr double E = my_exp(1);
constexpr double my_log(const double& num)
{
if (num < 1)
return my_log(num * E) - 1;
else if (num > E)
return my_log(num / E) + 1;
else
{
double s = 0;
size_t tmp_odd = 1;
double tmp = (num - 1) / (num + 1);
double mul = tmp * tmp;
while (tmp >= eps)
{
s += tmp;
tmp_odd += 2;
tmp *= mul / tmp_odd;
}
return 2 * s;
}
}
You probably can see why I want to implement these functions. Basically, I want to implement a pow function. But still my approach gives very imprecise answers, for example my_log(10) = 2.30256, but according to google (ln 10 ~ 2.30259).
my_exp() is very precise since its Taylor expansion converges quickly. my_exp(1) = 2.718281828459, while e^1 = 2.71828182846 according to Google. But unfortunately it's not the same case for the natural log, and I don't even know how this series for the natural log is derived (I mean from the links above). And I couldn't find any source about this series.
Where are the precision errors coming from?
| The line tmp *= mul / tmp_odd; means that each term is also being divided by the denominators of all previous terms, i.e. 1, 1*3, 1*3*5, 1*3*5*7, ... rather than 1, 3, 5, 7, ... as the formula states.
The numerator and denominator should therefore be computed independently:
double sum = 0;
double value = (num - 1) / (num + 1);
double mul = value * value;
size_t denom = 1;
double power = value;
double term = value;
while (term > eps)
{
sum += term;
power *= mul;
denom += 2;
term = power / denom;
}
return 2 * sum;
...
// Output for num = 1.5, eps = 1e-12
My func: 0.405465108108004513
Cmath log: 0.405465108108164385
------------
Much better!
Reducing the epsilon to 1e-18, we hit the accuracy limits of naïve summation:
// Output for num = 1.5, eps = 1e-18
My func: 0.40546510810816444
Cmath log: 0.405465108108164385
---------------
Kahan-Neumaier to the rescue:
double sum = 0;
double error = 0;
double value = (num - 1) / (num + 1);
double mul = value * value;
size_t denom = 1;
double power = value;
double term = value;
while (term > eps)
{
double temp = sum + term;
if (abs(sum) >= abs(term))
error += (sum - temp) + term;
else
error += (term - temp) + sum;
sum = temp;
power *= mul;
denom += 2;
term = power / denom;
}
return 2 * (sum + error);
...
// Output for num = 1.5, eps = 1e-18
My func: 0.405465108108164385
Cmath log: 0.405465108108164385
|
70,741,663 | 70,748,971 | How to pass a member function, which has another member function as an argument, to a thread | I have a member function that takes another member function as an argument, which works normally when executed directly on the main thread. However, when trying to run this function in a separate thread, I get the following error on g++:
In file included from /usr/include/c++/11/thread:43,
from teste.cpp:4:
/usr/include/c++/11/bits/std_thread.h: In instantiation of ‘std::thread::thread(_Callable&&, _Args&& ...) [with _Callable = void (Shell::*)(void (Memoria::*)(), Memoria&); _Args = {Shell*&, void (Memoria::*)(), Memoria&}; <template-parameter-1-3> = void]’:
teste.cpp:46:57: required from here
/usr/include/c++/11/bits/std_thread.h:130:72: error: static assertion failed: std::thread arguments must be invocable after conversion to rvalues
130 | typename decay<_Args>::type...>::value,
| ^~~~~
/usr/include/c++/11/bits/std_thread.h:130:72: note: ‘std::integral_constant<bool, false>::value’ evaluates to false
/usr/include/c++/11/bits/std_thread.h: In instantiation of ‘struct std::thread::_Invoker<std::tuple<void (Shell::*)(void (Memoria::*)(), Memoria&), Shell*, void (Memoria::*)(), Memoria> >’:
/usr/include/c++/11/bits/std_thread.h:203:13: required from ‘struct std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (Shell::*)(void (Memoria::*)(), Memoria&), Shell*, void (Memoria::*)(), Memoria> > >’
/usr/include/c++/11/bits/std_thread.h:143:29: required from ‘std::thread::thread(_Callable&&, _Args&& ...) [with _Callable = void (Shell::*)(void (Memoria::*)(), Memoria&); _Args = {Shell*&, void (Memoria::*)(), Memoria&}; <template-parameter-1-3> = void]’
teste.cpp:46:57: required from here
/usr/include/c++/11/bits/std_thread.h:252:11: error: no type named ‘type’ in ‘struct std::thread::_Invoker<std::tuple<void (Shell::*)(void (Memoria::*)(), Memoria&), Shell*, void (Memoria::*)(), Memoria> >::__result<std::tuple<void (Shell::*)(void (Memoria::*)(), Memoria&), Shell*, void (Memoria::*)(), Memoria> >’
252 | _M_invoke(_Index_tuple<_Ind...>)
| ^~~~~~~~~
/usr/include/c++/11/bits/std_thread.h:256:9: error: no type named ‘type’ in ‘struct std::thread::_Invoker<std::tuple<void (Shell::*)(void (Memoria::*)(), Memoria&), Shell*, void (Memoria::*)(), Memoria> >::__result<std::tuple<void (Shell::*)(void (Memoria::*)(), Memoria&), Shell*, void (Memoria::*)(), Memoria> >’
256 | operator()()
| ^~~~~~~~
The error happens in this example:
#include <unistd.h>
#include <iostream>
#include <thread>
using namespace std;
class Memoria {
public:
void imprimir() {
cout << "printed" << endl;
};
};
class Kernel {
public:
Memoria* memoria = new Memoria();
};
class Escalonador {
public:
Kernel* kernel = new Kernel();
};
class Shell {
public:
bool loop = true;
Escalonador* escalonador = new Escalonador();
template <class C>
void function1(void (C::*function)(), C& c) {
while (loop == true) {
(c.*function)();
usleep(500000);
}
return;
}
};
int main() {
Shell* shell = new Shell();
//Works
// shell->function1(&Memoria::imprimir,
// *shell->escalonador->kernel->memoria);
// not works
thread t = thread(&Shell::function1<Memoria>,
shell,
&Memoria::imprimir,
*shell->escalonador->kernel->memoria);
cin.get();
shell->loop = false;
t.join();
return 0;
}
| Following @super's tip in the comments, wrap the reference argument in std::ref(). std::thread copies (decays) its arguments, so a plain Memoria& parameter cannot bind to the copied rvalue; std::reference_wrapper restores the reference semantics:
t = thread(&Shell::function1<Memoria>,
this,
&Memoria::imprimir,
ref(*escalonador->kernel->memoria));
|
70,742,079 | 70,742,338 | Why can we avoid specifying the type in a lambda capture? | Why is the variable 'n' in the 2nd usage of std::generate, within the lambda capture, not preceded with its data type in the code below?
I thought it was important to specify the datatype of all identifiers we use in C++ code.
#include <algorithm>
#include <iostream>
#include <vector>
int f()
{
static int i;
return ++i;
}
int main()
{
std::vector<int> v(5);
auto print = [&] {
for (std::cout << "v: "; auto iv: v)
std::cout << iv << " ";
std::cout << "\n";
};
std::generate(v.begin(), v.end(), f);
print();
// Initialize with default values 0,1,2,3,4 from a lambda function
// Equivalent to std::iota(v.begin(), v.end(), 0);
std::generate(v.begin(), v.end(), [n = 0] () mutable { return n++; });
print();
}
| From cppreference:
A capture with an initializer acts as if it declares and explicitly captures a variable declared with type auto, whose declarative region is the body of the lambda expression (that is, it is not in scope within its initializer), [...]
Lambdas used the opportunity of a syntax that was anyhow fresh and new to get some things right and allow a nice and terse syntax. For example lambdas operator() is const and you need to opt-out via mutable instead of the default non-const of member functions.
The absence of auto in this place does not create any issues or ambiguities. The example from cppreference:
int x = 4;
auto y = [&r = x, x = x + 1]()->int
{
r += 2;
return x * x;
}(); // updates ::x to 6 and initializes y to 25.
From the lambda syntax it is clear that &r is a by reference capture initialized by x and x is a by value capture initialized by x + 1. The types can be deduced from the initializers. There would be no gain in requiring to add auto.
In my experience n could have just been declared inside the lambda body with auto or int as its datatype. Isn't it?
Yes, but then it would need to be static. This produces the same output in your example:
std::generate(v.begin(), v.end(), [] () mutable {
static int n = 0;
return n++; });
However, the capture can be considered cleaner than the function local static.
|
70,743,100 | 70,751,300 | How to check what case is selected with radio button Win 32 api | I am using the Win32 API in C++ to develop a desktop app.
At one point I want to use a radio button with two cases, and depending on which case is selected by the user I want to create a DialogBox.
I use a resource file to create the dialog box that contains the radio buttons:
IDD_INPUT DIALOG DISCARDABLE 0, 0, 150, 150
STYLE DS_MODALFRAME | DS_CENTER | WS_POPUP | WS_CAPTION
CAPTION "Pricing Input"
FONT 8, "MS Sans Serif"
BEGIN
RADIOBUTTON "Historical Data",IDC_HISTO,20, 20, 50,14
RADIOBUTTON "User Inpu",IDC_USER,90,20,50,14
PUSHBUTTON "Ok",IDC_VALID,60,100,50,14
END
The dialog box is created as follow :
hWndDlgBox = CreateDialog(GetModuleHandle(NULL), MAKEINTRESOURCE(IDD_INPUT), hWnd, (DLGPROC)DlgInput);
And the DlgInput procedure is something like:
LRESULT CALLBACK DlgInput(HWND hWnDlg, UINT Msg, WPARAM wParam, LPARAM lParam)
{
switch (Msg)
{
case WM_INITDIALOG:
{
return TRUE;
}
break;
case WM_COMMAND:
{
switch (LOWORD(wParam))
{
case IDC_VALID:
{
if (GetDlgItem(hWnDlg, IDC_USER)) {
// Open dialog box x
}
else {
//Open dialog box y
}
SendMessage(hWnDlg, WM_CLOSE, 0, 0);
}
break;
}
case WM_CLOSE:
DestroyWindow(hWnDlg);
hWndDlgBox = NULL;
break;
default:
return FALSE;
break;
}
}
}
So in the IDC_VALID case I want to check the value of the radio button. I tried using the GetDlgItem function, but I don't really understand what value it returns. I saw that it's possible to use the BM_GETCHECK message, but I'm not sure how to use it.
Also, when I click on one case the dialog box closes and I don't know why.
Can someone explain to me how radio buttons work?
| Send the BM_GETCHECK message to the control and check the return value: BST_CHECKED means that radio button is selected. SendMessage() needs the HWND of the control; to get it from the control ID, call GetDlgItem().
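Using the IDs from the question, the check in the IDC_VALID handler could look like this (a sketch):

```cpp
if (SendMessage(GetDlgItem(hWnDlg, IDC_USER), BM_GETCHECK, 0, 0) == BST_CHECKED)
{
    // "User Input" is selected -> open dialog box x
}
else
{
    // "Historical Data" is selected -> open dialog box y
}
```

IsDlgButtonChecked(hWnDlg, IDC_USER) is a convenience wrapper that does the same GetDlgItem/BM_GETCHECK combination for you.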
|
70,743,256 | 70,743,643 | String throwing exception | I have the following code with two functions which should throw exceptions when a condition is satisfied. Unfortunately, the second one with string does not seem to work and I don't have a clue what's wrong.
#include "iostream"
#include "stdafx.h"
#include "string"
using namespace std;
struct P
{
int first;
string second;
};
void T(P b)
{ if (b.first==0)
throw (b.first);
};
void U(P b)
{ if (b.second == "1, 2, 3, 4, 5, 6, 7, 8, 9" )
throw (b.second);
};
int _tmain(int argc, _TCHAR* argv[])
{
P x;
cin>>x.first;
cin>>x.second;
try
{
P x;
T(x);
}
catch (int exception)
{
std::cout << exception;
}
try{
U(x);
}
catch (const char* exception)
{
std::cout << "\n" << exception;
}
system("pause");
return 0;
}
I have the following input:
0
1, 2, 3, 4, 5, 6, 7, 8, 9
and the output:
0
and I want to get:
0
1, 2, 3, 4, 5, 6, 7, 8, 9
How can I change char for string output?
| I do not know what you are trying to experiment with, but despite being allowed by the language, throwing objects that are not instances of (subclasses of) std::exception should be avoided.
That being said you have a bunch of inconsistencies in your code.
First, cin >> x.second; will stop at the first blank character. So in your example you have only "1," in x.second, your test fails, and your code does not throw anything.
You should ignore the newline left by cin >> x.first and use getline to read a full line including spaces:
P x;
cin >> x.first;
cin.ignore();
std::getline(cin, x.second);
The first try block invokes UB, because you are declaring a new x in that block that will hide the one you have just read. It should be:
try
{
//P x; // do not hide x from the enclosing function!
T(x);
}
Finally, and even if it is not an error, you should always catch non-trivial objects by const reference to avoid a copy. Remember that exceptions are expected to be raised in abnormal conditions, and when memory becomes scarce you should avoid copies. But you must catch the exact same type that was thrown. So the second catch should be:
catch (std::string exception)
{
std::cout << "\n" << exception;
}
or better (avoid a copy):
catch (const std::string& exception)
{
std::cout << "\n" << exception;
}
|
70,743,339 | 70,898,033 | Get cwnd of my TCP connection from a program | I am creating a TCP connection from my linux program with boost.asio. I wonder how do I get the value of its congestion window (cwnd) from the program? The only way I know of is to parse /proc/net/tcp, but this does not feel right. I'd rather use a dedicated syscall to get this info.
A solution to a similar question (How to monitor cwnd and ssthresh values for a TCP connection?) suggests using TCP Probe, but it feels even less appealing.
So what is the best way to get the value of cwnd?
| It turned out getsockopt() is able to return the same tcp_info when called with TCP_INFO option:
#include <netinet/tcp.h> // struct tcp_info, TCP_INFO and SOL_TCP are declared here
tcp_info tcpi = {};
socklen_t len = sizeof(tcp_info);
getsockopt(tcp_socket, SOL_TCP, TCP_INFO, &tcpi, &len);
tcpi.tcpi_snd_cwnd; // <-- CWND
|
70,743,348 | 70,744,380 | How do I fix the assigning numbers to the variable and the number of occurrence? | I managed to formulate a code but I still cannot figure out where I'm getting it wrong, there are two problems the numbers and the number of occurrence. If one is working the other is not.
Task: Using while loop, write a program that asks a user to input for a positive number let’s say N. The program then asks the user for N number of positive integers. The program is to determine the
largest value among the numbers and the number of times it occurred.
What I've Worked on So Far:
#include <iostream>
using namespace std;
int main()
{
int nnum, x = 1, unum, sum, max = 0, anum, occ = 0, rem, quo;
cout << "Input number of positive integers: ";
cin >> nnum;
cout << "Input " << nnum << " positive numbers: ";
while (x<=nnum) {
cin >> unum;
anum = unum;
quo = anum;
sum += unum;
if (max < unum) {
max = unum;
}
x++;
while (quo != 0) {
rem = quo%10;
quo = quo/10;
if (anum == rem) {
occ += 1;
}
}
}
cout << "Highest: " << max << endl;
cout << "Occurrence: " << occ;
return 0;
}
| I'm not sure what your code is trying to do. The approach I'd take is simple: keep track of the maximum and the number of occurrences as you read numbers.
If you read a number greater than the current maximum, update the current maximum and reset the occurrence counter.
If you read a number equal to the current maximum, increment the occurrence counter.
You should also use a for loop instead of a while loop here, and you might want to validate your input as well.
#include <iostream>
int main()
{
int n, max_number = -1, occurrences = 0;
std::cin >> n;
for(int i = 1; i <= n; i++)
{
int temp;
std::cin >> temp;
if(temp > max_number)
{
max_number = temp;
occurrences = 0;
}
if(temp == max_number)
{
occurrences++;
}
}
std::cout << max_number << ' ' << occurrences;
return 0;
}
|
70,743,376 | 70,743,398 | Problem with filling a two-dimensional array in c++ | I'm trying to fill a two-dimensional array of int in C++.
But I have a weird problem.
Basically, right now I have code like this:
int array[83][86];
int test_1 = 0;
int test_2 = 0;
for (int x = box.min_corner().x(); x < box.max_corner().x(); x = x + 50)
{
for (int y = box.min_corner().y(); y < box.max_corner().y(); y = y + 50)
{
point_t point_p(x, y);
if (bg::within(point_p, poly))
{
array[test_1][test_2] = '1';
}
else {
array[test_1][test_2] = '0';
}
test_2++;
}
test_1++;
}
My program crashes before all columns are filled. Basically my program stops at column 58.
The problem is not my two for loops, because if I increase my array like this :
int array[83 * 2][86];
It continues normally, as it is supposed to work initially.
Does anyone have an idea of what can trigger this issue?
| You have to reset test_2 to 0 appropriately. Otherwise, you keep incrementing the variable and it eventually goes past the limit of 86.
|
70,743,728 | 70,744,216 | Can a class with consteval constructor be created on heap in C++? | In the following code struct A has immediate function default constructor, and an object of the struct is created in the dynamic memory be means of new A{}:
struct A {
consteval A() {}
};
int main() {
new A{};
}
Only Clang accepts it.
GCC complains
error: the value of '<anonymous>' is not usable in a constant expression
6 | new A{};
| ^
note: '<anonymous>' was not declared 'constexpr'
And MSVC does as well:
error C7595: 'A::A': call to immediate function is not a constant expression
Demo: https://gcc.godbolt.org/z/6Px5WYGzd
Which compiler is right here?
|
Which compiler is right here?
Invoking a consteval constructor with new is ill-formed.
MSVC and GCC are right to reject it; clang is wrong as a diagnostic is required.
struct A { consteval A() {} };
consteval makes A::A() an immediate function1.
An immediate function can only be called from2,3:
another immediate function, or
a consteval if statement, or
a constant expression4.
new A{} is none of the above.
1) [dcl.constexpr]/2
A constexpr or consteval specifier used in the declaration of a function declares that function to be a constexpr function.
A function or constructor declared with the consteval specifier is called an immediate function.
2) [expr.prim.id.general]/4
A potentially-evaluated id-expression that denotes an immediate function shall appear only
(4.1) as a subexpression of an immediate invocation, or
(4.2) in an immediate function context.
3) [expr.const]/13
An expression or conversion is in an immediate function context if it is potentially evaluated and either:
(13.1) its innermost enclosing non-block scope is a function parameter scope of an immediate function, or
(13.2) its enclosing statement is enclosed ([stmt.pre]) by the compound-statement of a consteval if statement ([stmt.if]).
An expression or conversion is an immediate invocation if it is a potentially-evaluated explicit or implicit invocation of an immediate function and is not in an immediate function context.
An immediate invocation shall be a constant expression.
4) [expr.const]/11.2
A constant expression is either a glvalue core constant expression that refers to an entity that is a permitted result of a constant expression (as defined below), or a prvalue core constant expression whose value satisfies the following constraints:
(11.2) if the value is of pointer type, it contains the address of an object with static storage duration, the address past the end of such an object ([expr.add]), the address of a non-immediate function, or a null pointer value,
|
70,743,758 | 70,745,836 | How to interpret the precondition of std::launder? | struct X { int n; };
const X *p = new const X{3}; // #1
new (const_cast<X*>(p)) const X{5}; // #2
const int c = std::launder(p)->n;
Assume that the object created at #1 is named obj1 while the object created at #2 is named obj2. The precondition of std::launder is that
[ptr.launder] p2 link
p represents the address A of a byte in memory. An object X that is within its lifetime and whose type is similar to T is located at the address A. All bytes of storage that would be reachable through the result are reachable through p (see below).
A byte of storage b is reachable through a pointer value that points to an object Y if there is an object Z, pointer-interconvertible with Y, such that b is within the storage occupied by Z, or the immediately-enclosing array object if Z is an array element.
This rule is a bit obscure. Is the following interpretation a right read?
obj2 will occupy the number of sizeof(X) bytes beginning with A. Consider Y(the object to which std::launder(p) points) and Z(namely, obj2) as the same object, they are pointer-interconvertible, and the sizeof(X) bytes occupied by obj2 are all within Z, hence these bytes are all reachable through std::launder(p). that is, "All bytes of storage that would be reachable through the result". Whether these bytes are reachable through p? With the assumption that Y (namely, the object to which p points) and Z are the same object obj1, which are also the array element of a hypothetical array, as per [basic.compound] p3
an object of type T that is not an array element is considered to belong to an array with one element of type T.
Since these bytes beginning with A are all within the array of which Z is an element. Hence, we can say these bytes are all reachable through p?
| [basic.compound]/3 is not relevant. It specifically says that it applies only for the purpose of pointer arithmetic and comparison. There doesn't actually exist an array for the object.
I think when you call std::launder, there are four objects at the relevant address: obj1, obj1.n, obj2 and obj2.n.
obj1 and obj1.n are pointer-interconvertible, as are obj2 and obj2.n. Other combinations, aside from identical pairs, are not pointer-interconvertible. There are no array objects, and therefore "or the immediately-enclosing array object if Z is an array element" isn't relevant.
When considering reachability from std::launder(p), which points to obj2, only obj2 and obj2.n need to be considered as Z in the quote. obj2.n occupies an (improper) subset of the bytes of obj2, so it is not relevant. The bytes reachable are those in obj2. Except that I considered obj2.n specifically, this is a rephrasing of your considerations.
By exactly the same reasoning, the bytes reachable from p (pointing to obj1) are all those in obj1.
obj1 and obj2 have the same size and therefore occupy exactly the same bytes. Therefore std::launder(p) would not make any bytes reachable that aren't reachable from p.
|
70,743,892 | 70,743,893 | How can I set up my class so it can't be inherited from in C++98/C++03? | Using C++98 (or C++03), how can a class (B) be defined, such that no objects can be instantiated from a class (D) deriving from B?
struct B {};
struct D : public B {};
D d; // this should result in a compiler error
In C++11 (or newer) one could use the final specifier.
| I found these possible solutions, each with drawbacks:
"named constructors"
Define all constructors of the base class private and provide named constructors (static, public method which returns an object of that class).
Drawbacks:
Using that class is "not-so-clean" / less straightforward. The effort is supposed to simplify using that class, yet the result demands more effort from everyone who uses it.
"virtual inheritance trick"
I found this suggestion here and from Bjarne Stroustrup here. See also his book "The Design and Evolution of C++" sec 11.4.3.
The class for which inheritance shall be restricted (B), inherits (must be public virtual inheritance) from a helper class (H).
That helper class has only private constructors.
It has a friend relationship to the to-be restricted class B.
As only B may call the constructor of H, further successors of B cannot be instantiated.
In contrast to the "named constructors" solution, the "usual" constructor can be called.
I consider this more straightforward for users of the class.
Drawbacks:
Usually this will increase the size of objects of B in memory because of the virtual inheritance. See here.
It requires more effort for programming such classes.
|
70,744,339 | 70,757,056 | Example of passing a list of strings from Python to C++ function with Cython | I'm going in circles trying to figure out a fairly basic question in Cython. On the Python side, I have a list of variable-length strings. I need to pass these to a C++ function that will process this list and return some calculations (a vector of floats of same length as the input, if that matters). I know how to pass a single string, but I'm really struggling to figure out how to efficiently pass a list of multiple strings from Python->C++. I have no need to mutate the list of strings, the C++ side will treat them as read-only. The strings are coming from Python so they are a standard Python unicode string but they are guaranteed to be ASCII if that matters.
Could someone provide an example? I feel like this shouldn't be too complicated but I can't seem to find a good explanation. I'm definitely still getting the hang of Cython, so maybe I just don't know the right terms to search for.
| Lenormju's first link had the solution. I didn't know you could do memory views into a list, but the solution is much easier than I realized. A very simple minimal example:
# cython: language_level=3
# distutils: language = c++
from libcpp.vector cimport vector
from libcpp.string cimport string
cdef extern from "my_function_src.cc":
    cdef void my_function(vector[string] L)

def call_my_function(L):
    # note: with default Cython settings only bytes coerce to std::string; for a
    # list of Python str, either encode first (e.g. [s.encode() for s in L]) or
    # add "# cython: c_string_type=str, c_string_encoding=ascii" to the directives
    cdef vector[string] L_buffer = L
    my_function(L_buffer)
|
70,744,937 | 70,745,373 | Workaround for passing parameter pack to alias templates (which have non-pack parameters) | Look at this example:
template <typename A>
struct Foo1 {};
template <typename A, typename B>
struct Foo2 {};
struct Bar1 {
template <typename A>
using Foo = Foo1<A>;
};
struct Bar2 {
template <typename A, typename B>
using Foo = Foo2<A, B>;
};
template <typename BAR>
struct Something {
template <typename ...P>
void func(typename BAR::template Foo<P...> foo) {
}
};
I'd like to achieve: if Something is specialized with Bar1, then I'd like to have a template function func, which is:
template <typename A>
void func(Foo1<A> foo)
and if Something is specialized with Bar2, then func should be:
template <typename A, typename B>
void func(Foo2<A, B> foo)
However, this straightforward approach doesn't work, because clang complains (you need to instantiate Something<BarX> to make clang to report this error):
error: pack expansion used as argument for non-pack parameter of alias template
void func(typename BAR::template Foo<P...> foo) {
So it seems that alias templates are not 100% transparent (I found some discussion about this: CWG 1430, that this is by design).
Are there any workarounds for this problem?
(gcc compiles this code, but according to CWG 1430, this might be unintended)
| Workaround is to make the alias variadic:
struct Bar1 {
template <typename... Ts>
using Foo = Foo1<Ts...>;
};
struct Bar2 {
template <typename... Ts>
using Foo = Foo2<Ts...>;
};
Demo
|
70,745,070 | 70,745,112 | Extract from a member pointer the type of class it points to | How can I make the following compile correctly with C++20, i.e. calculate extract<mem_fun>::type? Is it possible?
Error scenarios like passing a non-member function or a private member function to extract<> are not so important for me.
#include <concepts>
struct x
{
void f()
{
}
};
template<auto mem_fun>
struct extract
{
// using type= ???
};
int main()
{
using namespace std;
static_assert(same_as<typename extract<&x::f>::type, x>);
return 0;
}
| Pointers to members all have type T C::*, where T can be either some type or some function type or even some "abominable" function type.
So you just need to specialize on that particular shape:
template<auto mem_fun>
struct extract;
template <typename T, typename C, T C::* v>
struct extract<v> {
using type = C;
};
|
70,746,345 | 70,747,637 | LNK2019 Unresolved External Symbol, Can't figure out why? | Apologies in advance, as this is most likely my own inability to find the error and I am simply overlooking the answer.
Anyway: when invoking XEngine::MapConstBufferData from my entry point I run into LNK2019. I'm quite clueless as to why, but believe the error lies in the fact that it is a template function. All help is highly appreciated!
XTypes.h
struct TRANSLATE2D
{
FLOAT OffsetX, OffsetY;
};
XEngine.h
template <class BufferType>
VOID MapConstBufferData(ComPtr<ID3D11Buffer> Buffer, BufferType BufferData, UINT Size);
XEngine.cpp
template <class BufferType>
VOID XEngine::MapConstBufferData(ComPtr<ID3D11Buffer> Buffer, BufferType BufferData, UINT Size)
{
Contents are irrelevant.
}
EntryPoint.cpp
INT MAIN
{
XTYPES::TRANSLATE2D Translate{};
Engine.MapConstBufferData<XTYPES::TRANSLATE2D>(Engine.GetConstBuffer(), Translate, sizeof(XTYPES::TRANSLATE2D));
// ^ LNK2019: Unresolved External
Contents are irrelevant.
}
| Templates require either a header-only implementation or explicit instantiation.
When the compiler processes EntryPoint.cpp it sees only the declaration, so it cannot generate the code for MapConstBufferData<XTYPES::TRANSLATE2D> there.
When it processes XEngine.cpp it has the definition, but it does not know that this particular instantiation will be needed, so it emits nothing and the linker later finds no symbol.
Either move the template definition into XEngine.h, or add an explicit instantiation for each needed type at the end of XEngine.cpp.
|
70,746,345 | 70,747,637 | Reading custom (.ndev) Json-like file structure | I'm currently creating a custom file structure (File extension is .ndev), to increase my skill in working with files in C++. I can save values to a file in a specific way (See below)
{
"username": "Nikkie",
"password": "test",
"role": "Developer",
"email": "test@gmail.com"
}
This doesn't have anything to do with actual JSON, it's just
structured like it.
My question is, how can I read the value of one of those variables with C++, without it coming out like the screenshot below:
My current code to write the file:
void user::RegisterUser(string username, string password, string role, string email)
{
string filename = "E:\\Coding\\C\\test\\data\\" + username + ".ndev";
ifstream CheckFile(filename);
if (CheckFile.good())
{
printf("User already exists!");
}
else {
ofstream UserDataFile(filename);
UserDataFile << "{\n\t\"username\": \"" << username << "\",\n\t\"password\": \"" << password << "\",\n\t\"role\": \"" << role << "\",\n\t\"email\": \"" << email << "\"\n}";
UserDataFile.close();
}
CheckFile.close();
}
Don't bludgeon me about the password encryption, I will add that later. I'm currently
trying to actually let it read the values before I do anything else
My current code to read the file:
void user::LoginUser(string username)
{
string filename = "E:/Coding/C/test/data/" + username + ".ndev";
ifstream UserFile(filename, ios_base::in);
if (UserFile.good())
{
string name;
string passw;
string role;
string email;
while (UserFile >> name >> passw >> role >> email)
{
cout << name << passw << endl;
cout << role << email << endl;
}
}
else
{
printf("User doesn't exist!");
}
}
I just can't seem to get it to display the values properly, there are also no errors listed in the console nor in the VS debug build.
| From a practical point of view, there is no reason to store your structure in that format. There are simpler ways.
Anyhow, here's a starting point (demo):
#include <iostream>
#include <string>
#include <map>
#include <iomanip>
#include <fstream>
using namespace std;
// your structure
struct person
{
string name, pass, role, mail;
};
// the tokens your format is using
enum class token : char
{
lb = '{',
rb = '}',
sc = ':',
comma = ',',
str,
end
};
token tk; // current token
string str; // if the current token is token::str, str is its value
// get_token breaks the input stream into tokens - this is the lexer, or tokenizer, or scanner
token get_token(istream& is)
{
char c;
if (!(is >> c))
return tk = token::end;
switch (c)
{
case '{':
case '}':
case ':':
case ',':
return tk = token(c);
case '"':
is.unget();
is >> quoted(str);
return tk = token::str;
default: throw "unexpected char";
}
}
// throws if the current token is not the expected one
void expect(istream& is, token expected, const char* error)
{
if (tk != expected)
throw error;
get_token(is);
}
// throws if the current token is not a string
string expect_str(istream& is, const char* error)
{
if (tk != token::str)
throw error;
string s = str;
get_token(is);
return s;
}
// the actual parser; it extracts the tokens one by one and compares them with the expected order.
// if the order is not what it expects, it throws an exception.
void read(istream& is, person& p)
{
get_token(is); // prepare the first token
expect(is, token::lb, "'{' expected");
map<string, string> m; // key/values storage
while (tk == token::str)
{
string k = expect_str(is, "key expected");
expect(is, token::sc, "':' expected");
string v = expect_str(is, "value expected");
if (m.find(k) == m.end())
m[k] = v;
else
throw "duplicated key";
if (tk == token::comma)
get_token(is);
else
break; // end of key/value pairs
}
expect(is, token::rb, "'}' expected");
expect(is, token::end, "eof expected");
// check the size of m & the keys & copy from m to p
// ...
}
int main()
{
ifstream is{ "c:/temp/test.txt" };
if (!is)
return -1;
try
{
person p;
read(is, p);
}
catch (const char* e)
{
cout << e;
}
}
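The final copy step left as an exercise above ("check the size of m & the keys & copy from m to p") could be filled in along these lines — a sketch; the exact key names ("name", "pass", "role", "mail") are assumptions about the file format, and errors are reported the same way the parser does, by throwing a const char*:

```cpp
#include <map>
#include <string>

struct person
{
    std::string name, pass, role, mail;
};

// One possible completion of read(): validate the collected key/value
// pairs and copy them into the person structure.
void copy_to_person(const std::map<std::string, std::string>& m, person& p)
{
    if (m.size() != 4)
        throw "expected exactly 4 key/value pairs";
    auto get = [&m](const std::string& k) {
        auto it = m.find(k);
        if (it == m.end())
            throw "missing key";
        return it->second;
    };
    p.name = get("name");
    p.pass = get("pass");
    p.role = get("role");
    p.mail = get("mail");
}
```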
|
70,746,835 | 70,747,276 | How to set a relative path/How to use $(SolutionDir) | I'm trying to make a project compile no matter where it is cloned, but I don't know how to write the relative path in relation to the solution location, for now this is how it looks
I tried something with $(SolutionDir) but i don't know how to go one step back from it and into the libraries folder as shown in the current absolute paths. Can someone explain what should i do or show an example ?
| To go one step back from $(SolutionDir), you can write $(SolutionDir)\..\.
You can also go deeper and create a property sheet for your library, so that if you need to use this library in another project, you only have to include one .props file in your .vcxproj.
Assuming the library name is cereal and the property sheet is located in libraries, the .props file would look like this:
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemDefinitionGroup>
<ClCompile>
<AdditionalIncludeDirectories>$(MSBuildThisFileDirectory)\cereal\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
</ClCompile>
<Link>
<AdditionalLibraryDirectories>$(MSBuildThisFileDirectory)\cereal\$(Platform)\lib;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<AdditionalDependencies>cereal.lib;%(AdditionalDependencies)</AdditionalDependencies>
</Link>
</ItemDefinitionGroup>
</Project>
Then to use it in a project, you add this in your .vcxproj (you can also use gui for this):
<ImportGroup Condition="..." Label="PropertySheets">
...
<Import Project="..\..\..\libraries\cereal.props" />
</ImportGroup>
You can also greatly ease your life by using a package manager, namely vcpkg, because of its integration with Visual Studio.
|
70,746,893 | 70,746,902 | For multiple instances of the same object, are member variables stored at the same offset? | Say I have a struct Foo:
struct Foo
{
char a;
int b;
} Foo1, Foo2;
The compiler may insert padding so that Foo::a is stored at the start of the object's memory, and Foo::b is stored at an offset of 0x04 (say on a 32-bit system for example).
If I create multiple instances of this object Foo1 and Foo2, will they always have the same padding? Is there ever a case where Foo1::b is stored at offset 0x04 and Foo2::b is stored at offset 0x08 for example?
|
For multiple instances of the same object, are member variables stored at the same offset?
Yes.
If I create multiple instances of this object Foo1 and Foo2, will they always have the same padding?
Yes.
Is there ever a case where Foo1::b is stored at offset 0x04 and Foo2::b is stored at offset 0x08 for example?
No.
This assumption holds only within the program. Compile the program on another system, and the offset can be different there.
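A quick way to convince yourself of this (a sketch; offsetof is well-defined here because Foo is a standard-layout type):

```cpp
#include <cstddef>

struct Foo
{
    char a;
    int  b;
};

// The byte offset of member b within one particular Foo object.
// Layout is a property of the type, not the object, so this returns
// the same value for every Foo in the program.
std::size_t offset_of_b(const Foo& f)
{
    return static_cast<std::size_t>(
        reinterpret_cast<const char*>(&f.b) - reinterpret_cast<const char*>(&f));
}
```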
|
70,746,942 | 70,747,051 | Partial specialization of typedef | I want to have a type which is either a typedef or a class. Thus, something like this
template< typename T > class A { };
template< typename T > using X< T, true > = T;
template< typename T > using X< T, false > = A< T >;
Obviously, this does not compile.
To overcome this problem I "invented" the following construct, which seems quite complicated in my eyes:
template< typename T > class A { };
template< typename T, bool E > struct make_X;
template< typename T >
struct make_X< T, true > {
T make();
};
template< typename T >
struct make_X< T, false > {
A< T > make();
};
template< typename T, bool E > using X = decltype( make_X<T,E>::make() );
Is there an easier way to achieve my goal?
| Normally, one would do this:
template< typename T, bool E > using X = std::conditional_t<E, T, A<T>>;
The only issue here would be if you wanted to use the type X<T, true> in a situation where simply mentioning A<T> would be ill-formed. In that case X<T, true> will be ill-formed even though the result is not A<T> anyway. If this is a possible situation, then something more complicated, like what you wrote, could be appropriate in order to make sure that A<T> is never even mentioned in the first place. However, in general, this would be done using type aliases in the partial specializations (e.g. using type = T;) and not through functions, in case T or A<T> is some kind of type that is not allowed as a function return type.
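To see the alias in action, a couple of compile-time checks (with a placeholder A; std::is_same_v requires C++17):

```cpp
#include <type_traits>

template< typename T > class A { };

template< typename T, bool E >
using X = std::conditional_t<E, T, A<T>>;

// X<T, true> is T itself; X<T, false> wraps T in A.
static_assert(std::is_same_v<X<int, true>,  int>);
static_assert(std::is_same_v<X<int, false>, A<int>>);
```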
|
70,747,015 | 70,747,071 | Error: Too few Argument in multithreaded code | I have been trying to implement a multithreaded program, I already tried <thread.h> and the code worked perfectly; but now I have to use <pthread.h> library due to some college project.
I wrote a long code, but for now the problem i'm struggling with, is about pthread_create function.
So, I just write only a part of the code that I'm asking about:
#include<pthread.h>
using namespace std;
void hello()
{
cout << "HelloWorld\n";
}
int main()
{
pthread_t p;
pthread_create(&p,NULL,hello);
cout << "Thread created\n";
return 0;
}
and this is the error I get:
new.cpp:11:25: error: invalid conversion from ‘void (*)()’ to ‘void* (*)(void*)’ [-fpermissive]
pthread_create(&p,NULL,hello);
^~~~~
new.cpp:11:30: error: too few arguments to function ‘int pthread_create(pthread_t*, const pthread_attr_t*, void* (*)(void*), void*)’
pthread_create(&p,NULL,hello);
^
In file included from /usr/include/x86_64-linux-gnu/c++/8/bits/gthr-default.h:35,
from /usr/include/x86_64-linux-gnu/c++/8/bits/gthr.h:148,
from /usr/include/c++/8/ext/atomicity.h:35,
from /usr/include/c++/8/bits/ios_base.h:39,
from /usr/include/c++/8/ios:42,
from /usr/include/c++/8/ostream:38,
from /usr/include/c++/8/iostream:39,
from new.cpp:1:
/usr/include/pthread.h:234:12: note: declared here
extern int pthread_create (pthread_t *__restrict __newthread,
^~~~~~~~~~~~~~
I know that similar questions might be already asked, but I tried their solution as much as I could and none of them worked; Also, I don't want to pass any argument to the specified function.
Don't know what to do now.
Any help is appreciated!!!
| There are two problems here. First, the function that is the entry point for a thread as specified by the third argument to pthread_create must accept a void * as an argument and return a void *. Your function must conform to that signature. You're free to ignore the parameter and to just return NULL.
void *hello(void *param)
{
cout << "HelloWorld\n";
return NULL;
}
Second, pthread_create takes a fourth argument of type void * which is the argument to the thread function. Since you're not using the argument, you can pass a null pointer.
pthread_create(&p,NULL,hello,NULL);
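Putting both fixes together, a minimal sketch — the create-and-join helper is added here so the program doesn't exit before the thread runs; on older glibc you may need to link with -lpthread:

```cpp
#include <iostream>
#include <pthread.h>

// Thread entry points for pthreads must take and return void*;
// we ignore the argument and return NULL since neither is needed.
void* hello(void* param)
{
    (void)param;
    std::cout << "HelloWorld\n";
    return NULL;
}

// Creates the thread with a NULL argument and waits for it to finish.
int run_hello()
{
    pthread_t p;
    if (pthread_create(&p, NULL, hello, NULL) != 0)
        return -1;
    pthread_join(p, NULL);
    return 0;
}
```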
|
70,747,306 | 70,747,417 | Optimizing away static variable / passing by reference | In this question Will a static variable always use up memory? it is stated that compilers are allowed to optimize away a static variable if the address is never taken, e.g. like following:
void f() {
static int i = 3;
printf( "%d", i );
}
If there exists a function which takes its arguments by reference, is the compiler still allowed to optimize away the variable, e.g. as in
void ref( int & i ) {
printf( "%d", i );
}
void f() {
static int i = 3;
g( i );
}
Is the situation different for the "perfect forwarding" case? Here the function body is empty on purpose:
template< typename T >
void fwd( T && i ) {
}
void f() {
static int i = 3;
fwd( i );
}
Furthermore, would the compiler be allowed to optimize the call in the following case. (The function body is empty on purpose again):
void ptr( int * i ) {
}
void f() {
static int i = 3;
ptr( &i );
}
My questions arise from the fact, that references are not a pointer by the standard - but implemented as one usually.
Apart from, "is the compiler allowed to?" I am actually more interested in whether compilers do this kind of optimization?
|
that compilers are allowed to optimize away a static variable if the address is never taken
You seem to have concentrated on the wrong part of the answer. The answer states:
the compiler can do anything it wants to your code so long as the observable behavior is the same
The end. You can take the address or not take it, calculate the meaning of life and calculate how to heal cancer; the only thing that matters is the observable behavior. As long as you don't actually heal cancer (or output the results of the calculations...), all the calculations are just a no-op.
If there exists a function which takes its arguments by reference, is the compiler still allowed to optimize away the variable
Yes. The code is just putc('3').
Is the situation different for the "perfect forwarding" case
No. The code is still just putc('3').
would the compiler be allowed to optimize the call in the following case
Yes. This code has no observable effect, contrary to the previous ones. The call to f() can just be removed.
in whether compilers do this kind of optimization?
Copy your code to https://godbolt.org/ and inspect the assembly code. Even with no experience in assembly code, you will see differences with different code and compilers.
Choose x86 gcc (trunk) and remember to enable optimizations -O. Copy code with static, then remove static - did the code change? Repeat for all code snippets.
|
70,747,371 | 70,780,873 | Meson on windows cannot find llvm-lib? | I am trying to port a Linux Library to windows, the library uses meson for compilation. I have a dummy meson.build file:
project(
'Dummy',
'cpp',
version: '0.0.1',
license: 'GPL',
default_options : [
'cpp_std=c++latest',
'default_library=static',
'optimization=3',
'buildtype=debugoptimized'])
When I run meson configure I get:
PS C:\Users\Makogan\Documents\neverengine\build> meson compile
[0/1] Regenerating build files.
The Meson build system
Version: 0.60.3
Source dir: C:\Users\Makogan\Documents\neverengine
Build dir: C:\Users\Makogan\Documents\neverengine\build
Build type: native build
Project name: NeverEngine
Project version: 0.0.1
C++ compiler for the host machine: cl (msvc 19.13.26131.1 "Microsoft (R) C/C++ Optimizing Compiler Version 19.13.26131.1 for x64")
C++ linker for the host machine: link link 14.13.26131.1
..\meson.build:1:0: ERROR: Unknown linker(s): [['lib'], ['llvm-lib']]
The following exception(s) were encountered:
Running "lib /?" gave "[WinError 2] The system cannot find the file specified"
Running "llvm-lib /?" gave "[WinError 2] The system cannot find the file specified"
A full log can be found at C:\Users\Makogan\Documents\neverengine\build\meson-logs\meson-log.txt
FAILED: build.ninja
"C:\Python311\Scripts\meson" "--internal" "regenerate" "C:\Users\Makogan\Documents\neverengine" "C:\Users\Makogan\Documents\neverengine\build" "--backend" "ninja"
ninja: error: rebuilding 'build.ninja': subcommand failed
Why is meson automatically searching for these libraries when it is aware it is on windows?
| Those aren't libraries, those are static linkers (also called archivers), which are used to produce static libraries (those ending in .a or .lib, usually). Those are pretty important to meson, and it assumes that it can find the three pieces of the toolchain (The compiler, the archiver, and the [dynamic] linker) for any given language + machine.
It is interesting to me that meson is able to pick up cl.exe and link.exe, but not lib.exe.
|
70,747,639 | 71,875,469 | How to get the value of a template class? | I'm trying to get the value of a template class. To get the value of a class, I can easily do like:
int get_value()
{
return *this;
}
But I want to create a class, and extend it, and don't make get_value() in all classes again. So, I did that:
template<typename T>
class _extend : public T
{
public:
auto _get_value()
{
return *this;
}
};
template<typename T>
class extend : public _extend<T>
{
public:
T get_value()
{
auto value = this->_get_value(); /* It was `_get_value()`, changed to `this->_get_value()` due to the comments */
T output = value;
return output;
}
};
But it's not working: output is nothing.
Edit
Sample program:
#include <iostream>
namespace kc
{
template<typename T>
class _extend : public T
{
public:
auto _get_value()
{
return *this;
}
};
template<typename T>
class extend : public _extend<T>
{
public:
T get_value()
{
auto value = this->_get_value();
T output = value;
return output;
}
};
class A : public std::string, public extend<std::string>
{
public:
using std::string::string;
};
}
int main()
{
kc::A a("a");
std::cout << a.get_value() << std::endl;
}
| Basically:
template<class C>
class extend : public C
{
public:
using C::C;
auto get()
{
return *this;
}
};
And a full example:
#include <iostream>
template<class C>
class extend : public C
{
public:
using C::C;
auto get()
{
return *this;
}
};
class e_string : public extend<std::string>
{
public:
using extend<std::string>::extend;
};
int main()
{
e_string s = "Test";
std::cout << s.get() << std::endl;
}
|
70,747,645 | 70,748,343 | Trouble understanding Caesar decryption steps | The following code will decrypt a caesar encrypted string given the ciphertext and the key:
#include <iostream>
std::string decrypt(std::string cipher, int key) {
std::string d = "";
for(int i=0; i<cipher.length();i++) {
d += ((cipher[i]-65-key+26) %26)+65;
}
return d;
}
int main()
{
std::cout << decrypt("WKLVLVJRRG", 3) << std::endl; // THISISGOOD
std::cout << decrypt("NBCMCMAIIX", 20) << std::endl; // THISISGOOD
}
I'm having trouble to understand the operations performed to compute the new character ASCII code at this line:
d += ((cipher[i]-65-key+26) %26)+65;
The first subtraction should shift the number range
Then we will subtract the key as how the Caesar decryption is defined
We add 26 to deal with negative numbers (?)
The module will limit the output as the range of the ASCII numbers is 26 length
We come back to the old range by adding 65 at the end
What am I missing?
| If we reorder the expression slightly, like this:
d += (((cipher[i] - 65) + (26 - key)) % 26) + 65;
We get a formula for rotating cipher[i] left by key:
cipher[i] - 65 brings the ASCII range A..Z into an integer range 0..25
(cipher[i] - 65 + 26 - key) % 26 rotates that value left by key (subtracts key modulo 26)
+ 65 to shift the range 0..25 back into ASCII range A..Z.
e.g. given a key of 2, A becomes Y, B becomes Z, C becomes A, etc.
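A quick round-trip check of the formula — the encryption used here is the mirror image (adding the key instead of subtracting), written as a sketch alongside the decrypt from the question:

```cpp
#include <string>

// Decryption as in the question: rotate each letter left by key.
std::string decrypt(const std::string& cipher, int key)
{
    std::string d;
    for (char c : cipher)
        d += static_cast<char>((c - 65 + 26 - key) % 26 + 65);
    return d;
}

// The matching encryption: rotate each letter right by key.
std::string encrypt(const std::string& plain, int key)
{
    std::string e;
    for (char c : plain)
        e += static_cast<char>((c - 65 + key) % 26 + 65);
    return e;
}
```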
|
70,747,890 | 70,747,920 | Why am I getting this big value at the end? | I need to find the biggest value in an array, but instead I'm getting a huge number at the end.
#include <iostream>
#include <string>
int main()
{
int masivs[5];
int enter = 0;
for (int i = 0; i < 5; i++) {
std::cout << "Enter number:";
std::cin >> enter;
masivs[i] = enter;
std::cout << masivs[i] << "\n";
}
std::cout << "End filling array" << std::endl;
int index = masivs[0];
for (int i = 0; i < 5; i++){
if(index > masivs[i + 1]){
index = index;
std::cout << index << "\n";
}else if (index < masivs[i + 1]){
index = masivs[i + 1];
std::cout << index << "\n";
}else if(index == masivs[i + 1]){
index = index;
std::cout << index << "\n";
}
}
std::cout << "End calculating" << "\n" << std::endl;
std::cout << index << "\n";
return 0;
}
Here is the image of output:
| This is called an "off by one" error, because... you are off by one.
Usually, this error is caused by confusion over array indexes starting at 0, but in your case you simply misjudged the loop counters:
if(index > masivs[i + 1])
also:
index = masivs[i + 1];
When i is 4, you are trying to access masivs[5] which is past your array memory.
But this is a "C style" array, so C++ does not check its size; it just reads whatever happens to be at that memory address.
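Once the loop bounds are fixed, the whole second loop can also be replaced with a standard algorithm — a sketch:

```cpp
#include <algorithm>

// Returns the largest of the first n ints at arr.
// std::max_element walks the half-open range [first, last) and never
// reads past the end, which avoids the off-by-one access of the
// hand-written loop.
int largest(const int* arr, int n)
{
    return *std::max_element(arr, arr + n);
}
```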
|
70,747,951 | 70,753,029 | Access an attribute c++ object from QML | I am having a problem understanding how to use a c++ singleton object from qml.
I know that I have to inherit my classes from the QObject class and that I have to expose the properties.
And that in order for them to be usable in the qml I have to do setContextProperty("ClassName", &class name).
However, if we admit that this class contains another object and that I want to be able to use it from the qml, I get errors like "cannot call method name" from undefined object.
Example:
APi.h
class API : public QObject {
Q_OBJECT
Q_PROPERTY(User *user READ getUser WRITE setUser NOTIFY userChanged)
public:
Q_INVOKABLE inline User *getUser() const {
qDebug() << "get called";
return user;
};
inline void setUser(User *new_user) { this->user = new_user; };
signals:
void userChanged();
private:
User *user;
};
User.h
#ifndef USER_H
#define USER_H
#include <QObject>
#include <QString>
class User: public QObject{
Q_OBJECT
public:
Q_INVOKABLE inline QString &getName(){return name;}; // qrc:/main.qml:63: TypeError: Cannot call method 'getName' of undefined
private:
QString name;
};
#endif // USER_H
main.cpp
#include <QQmlContext>
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QtQuick3D/qquick3d.h>
#include "API.h"
#include "User.h"
int main(int argc, char *argv[]) {
QGuiApplication app(argc, argv);
QQmlApplicationEngine engine;
API api;
User user;
api.setUser(&user);
engine.rootContext()->setContextProperty("API", &api);
qmlRegisterType<User>("User", 1, 0, "User");
engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
if (engine.rootObjects().isEmpty())
return -1;
return app.exec();
}
main.qml
import QtQuick
import QtQuick3D
import QtQuick.Controls
import QtQuick.Layouts
Window {
Item{
id: test
Component.onCompleted: {
console.log("Completed")
API.getUser() // no error, getUser is called without error
console.log(API.getUser().getName())
}
}
}
What possibilities do I have to access the User object from the qml through API?
| You could do any of the following:
Add a public slot in API class:
API.cpp:
QString API::getUserName()
{
return user->getName();
}
main.qml:
Component.onCompleted: console.log( API.getUserName() )
Make User a Q_PROPERTY in API class:
API.h:
Q_PROPERTY( User* user READ getUser NOTIFY userChanged)
main.qml:
Component.onCompleted: console.log( API.user.getName() )
Register User with QML:
main.cpp:
qmlRegisterType<User>("MyUser", 1, 0, "MyUser");
main.qml:
Component.onCompleted: console.log( API.getUser().getName() )
|
70,747,983 | 70,749,396 | Adding QWidget into QGridLayout adds border to layout? | Currently I have my program organized this way in QTDesigner (which I use with VS 2022): QMainWindow->centralWidget(QWidget)->QTabWidget->Tab(QWidget)->QGridLayout.
All these elements are created in QtDesigner.
In my cpp code I'm downloading some data and generating QTableWidget* m_table.
Unfortunately, after adding it to the QGridLayout element, I get a black border exactly around this layout. How is that possible if a border can't be set at all for this element, at least in QtDesigner?
QTableWidget* m_table;
///(...)
ui.gridLayout->addWidget(m_table);
| Look into Qt's stylesheets. You should be able to style your QTableWidget in any way you want to.
Keep in mind a QTableWidget is a specialized version of QTableView, so you have to set your stylesheet on QTableView:
m_table->setStyleSheet("QTableView {background: transparent; border: 1px solid green;}");
Here are some links to get you started with Qt stylesheets:
https://doc.qt.io/qt-5/stylesheet.html
https://doc.qt.io/qt-5/stylesheet-syntax.html
https://doc.qt.io/qt-5/stylesheet-examples.html
|
70,748,378 | 70,749,137 | How to insert a type between all elements of a parameter pack? | I have:
struct spacer : foo<bar>{};
struct sequence : baz<qux, spacer, quz, spacer, plugh>{};
I would like to be able to write (something like this, exact syntax doesn't matter):
struct spaced_sequence : SPACED_BAZ<qux, quz, plugh>{};
Can this be done with macros/templates/anything else?
| You can do the following:
Create a base case function template that appends a single type T to some specialization of baz
template<typename T, typename ... Args>
auto append(baz<Args...>) -> baz<Args..., T>;
Note that no spacer is added here, since T is the last type we're adding.
Then write a recursive case, also as a function template, that gets called if there are at least 2 more types T1, and T2 that need to be added
template<typename T1, typename T2, typename ...Rest, typename ... Args>
auto append(baz<Args...>)
-> decltype(append<T2, Rest...>(std::declval<baz<Args..., T1, spacer>>()));
// insert spacer after T1 ^
// ^ pass the remaining types recursively
and finally add a convenience alias
template<typename ...Ts>
using SPACED_BAZ = decltype(append<Ts...>(std::declval<baz<>>()));
// types to add ^ and baz is empty to start
Here's a demo.
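In case the demo link rots, here is the same technique as a self-contained compile-time check, with empty placeholder types standing in for the real qux, quz, plugh and spacer:

```cpp
#include <type_traits>
#include <utility>

template<typename...> struct baz { };
struct qux { }; struct quz { }; struct plugh { };
struct spacer { };

// Base case: append the last type without a trailing spacer.
template<typename T, typename... Args>
auto append(baz<Args...>) -> baz<Args..., T>;

// Recursive case: append T1 followed by a spacer, then recurse.
template<typename T1, typename T2, typename... Rest, typename... Args>
auto append(baz<Args...>)
    -> decltype(append<T2, Rest...>(std::declval<baz<Args..., T1, spacer>>()));

template<typename... Ts>
using SPACED_BAZ = decltype(append<Ts...>(std::declval<baz<>>()));

// The spacer is inserted between every pair, but not after the last type.
static_assert(std::is_same_v<
    SPACED_BAZ<qux, quz, plugh>,
    baz<qux, spacer, quz, spacer, plugh>>);
```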
|
70,748,442 | 70,752,711 | Passing 'this' pointer to a static method in a parent class | I am writing an embedded application using freeRTOS and I'm trying to do it in a neat, object-oriented fashion.
To start a FreeRTOS task from within an object, you need to give it a static function. In order to make it possible to use non-static variables inside that function, I've been implementing it in the following way:
void ModeSwitcher::loop(){
// Can use any member of the ModeSwitcher class here
}
void ModeSwitcher::runner(void* parameter){ // declared as static in header
ModeSwitcher* ref = static_cast< ModeSwitcher *>(parameter);
while(1){
ref->loop();
}
}
void FlightMode::start(uint8_t core_id) {
xTaskCreatePinnedToCore(
runner, // Task function
"switcher", // String with name of task
2000, // Stack size in bytes
this, // Parameter passed as input of the task
1, // Priority of the task
&this->xHandle, // Task handle
core_id // The cpu core to use
);
}
As you can see, the static runner is passed to rtos, which can on the other hand pass 'this' pointer back to runner. Thanks to that I can put all my code neatly in the loop and just call it by reference.
However, now I have two almost identical classes. The only difference is the code inside the 'loop'. So I would like to put my start and runner methods in a base class and the loop in a derived class. I imagine it can most likely be written in a similar way. However, I can't get it to work. In my head it looks somewhat like this:
base_mode.cpp :
BaseMode::BaseMode(const char* name, void* ref) : task_name(name), reference(ref) {
}
void BaseMode::start(uint8_t core_id) {
xTaskCreatePinnedToCore(
runner, // Task function
task_name, // String with name of task (by default max 16 characters long)
2000, // Stack size in bytes
reference, // Parameter passed as input of the task
1, // Priority of the task
&this->xHandle, // Task handle
core_id
);
}
void BaseMode::runner(void* parameter){
BaseMode* ref = static_cast<BaseMode *>(parameter);
while(1){
ref->loop();
}
}
ground_mode.cpp :
void GroundMode::loop(){
// Again, should be able to do whatever is desired here
}
GroundMode::GroundMode() : BaseMode("ground_mode", this){
}
It doesn't work, because there's no loop function declared in base class. If I declared one, it would be used instead of the one inside the derived class. So how can I fix this? Thanks in advance.
| Declaring a virtual loop function in BaseMode worked. Forgot these even existed.
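For reference, the shape that ended up working, sketched here without the FreeRTOS calls so it stands alone. loop() returns a string purely so the dispatch can be demonstrated; in the real code it is void and called inside while(1):

```cpp
#include <string>

class BaseMode {
public:
    virtual ~BaseMode() = default;

    // The static trampoline the RTOS would receive: it casts the void*
    // parameter back to the object and dispatches virtually, so the
    // derived class's loop() runs.
    static std::string runner(void* parameter) {
        BaseMode* ref = static_cast<BaseMode*>(parameter);
        return ref->loop();
    }

protected:
    virtual std::string loop() = 0;  // each mode provides its own body
};

class GroundMode : public BaseMode {
protected:
    std::string loop() override { return "ground"; }
};

class FlightMode : public BaseMode {
protected:
    std::string loop() override { return "flight"; }
};
```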
|
70,748,458 | 70,748,694 | How to make function for save and replay tones on Arduino buzzer? | I have a question about my Arduino project.
I have this array of frequencies for notes:
int note[] = {261, 293, 329, 349, 392, 440, 494, 523};
and this function for play notes if one of pushbuttons is pressed:
void play(float U_ADC0){
if(U_ADC0 >= 4.80) { // ADC conversion (Voltage value) PB1
BUZZ (0.1 , note[0]) ; _delay_ms (100) ; // buzz
lcd_clear();
lcd_write("C4"); // lcd display
}
if(U_ADC0 < 4.80 && U_ADC0 >= 4.70){ //PB2
BUZZ (0.1 , note[1]) ; _delay_ms (100) ;
lcd_clear();
lcd_write("D4");
}
if(U_ADC0 < 4.72 && U_ADC0 >= 4.65){ //PB3
BUZZ (0.1 , note[2]) ; _delay_ms (100) ;
lcd_clear();
lcd_write("E4");
}
if(U_ADC0 < 4.60 && U_ADC0 >= 4.50){ //PB4
BUZZ (0.1 , note[3]) ; _delay_ms (100) ;
lcd_clear();
lcd_write("F4");
}
if(U_ADC0 < 4.20 && U_ADC0 >= 4.05){ //PB5
BUZZ (0.1 , note[4]) ; _delay_ms (100) ;
lcd_clear();
lcd_write("G4");
}
if(U_ADC0 < 3.80 && U_ADC0 >= 3.70){ //PB6
BUZZ (0.1 , note[5]) ; _delay_ms (100) ;
lcd_clear();
lcd_write("A4");
}
if(U_ADC0 < 3.55 && U_ADC0 >= 3.30){ //PB7
BUZZ (0.1 , note[6]) ; _delay_ms (100) ;
lcd_clear();
lcd_write("B4");
}
if(U_ADC0 < 2.55 && U_ADC0 >= 2.45){ //PB8
BUZZ (0.1 , note[7]) ; _delay_ms (100) ;
lcd_clear();
lcd_write("C5");
}
}
So, how can I build a new array of frequencies in the order of the pressed pushbuttons, so that I can save and replay my melody on the buzzer?
I've tried all my ideas but nothing works, and I don't have new ones. So if somebody has an idea, can you help me?
| I would use one button (let's call it the record button) to toggle between play-and-record and play-only. That way, pressing the note buttons just plays their frequencies without saving anything, but when you like the melody and want to keep it, you click the record button and start saving. To make this happen, follow the algorithm below:
After your first function, create a function for the record button. In this function, you need to call the function (void play) you have already written, and add one more snippet of code that appends the value of the pushed button to the int array you will create at the beginning of your code (let's call it int recorded[]).
One more step remains: check whether the record button has been pushed, so the program toggles between record-and-play and play-only and calls the corresponding function. Finally, you can add one more button to play the melody back from your int recorded[].
I have finished the code already. You can check my comments throughout the code. It might not be the shortest way to work, simulate and test the results, but I believe it is going to solve your problem. Let me know if it helped you.
Link: https://onlinegdb.com/PV7Mu_51q
Embedded:
<script src="//onlinegdb.com/embed/js/PV7Mu_51q?theme=dark"></script>
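Stripped of the hardware calls, the core record/replay bookkeeping from the algorithm above can be modelled like this — a sketch; BUZZ, the button reads and the LCD are left out, and the names are assumptions, not the linked code:

```cpp
#include <vector>

// Model of the record/replay state: pressing the record button toggles
// recording; while recording, each played note is also appended.
struct Recorder {
    bool recording = false;
    std::vector<int> recorded;

    void toggle_record() { recording = !recording; }

    // Called whenever a note button is pressed; returns the frequency
    // that would be sent to BUZZ, saving it if we are recording.
    int play(int freq) {
        if (recording)
            recorded.push_back(freq);
        return freq;
    }
};
```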
|
70,748,602 | 70,764,194 | How can I get CPU idle residency in macOS on arm64 and x86 | I'm trying to get the idle residency of the CPU in macOS (C-State C0 residency on x86 unsure on arm64). I am aware you can find this info by running something like sudo powermetrics -i1 -n1 -s cpu_power | grep residency in the terminal, but I need a way to pull this info using C, C++, Objective-C, or even Assembly...especially in a way that doesn't need admin privileges.
All I can find regarding this topic is this: Time each CPU core spends in C0 power state, but the answers are not the clearest. Please help!
| The powermetrics tool uses a private API to do this: IOReportStateGetResidency
You could try to import it and, with some reversing, use it yourself too:
https://github.com/samdmarshall/OSXPrivateSDK/blob/master/PrivateSDK10.10.sparse.sdk/usr/local/include/IOReport.h
https://opensource.apple.com/source/PowerManagement/PowerManagement-637.1.2/pmset/pmset.c
Other than that, there is an example of direct usage of the mwait asm instruction, but it is for kernel mode (you will need to write a kext to try to run it on macOS):
https://rayanfam.com/topics/using-intels-streaming-simd-extensions-3-monitormwait-as-a-kernel-debugging-trick/
I don't know whether it is possible to do this without being root, and since the API is private - this is for you to research.
|
70,749,182 | 70,756,348 | meson cannot find a conan package, despite setting pkg_config path? | I am trying to build on windows using meson and conan.
I installed packages for VS 2017 using conan and generated the PC files in the build directory.
Inside my conan.py I have the snippet:
meson = Meson(self)
self.output.warn(self.folders.generators)
meson.configure(build_folder="build", args=[
f"-Dpkg_config_path={self.folders.generators}",
f"-Db_sanitize=undefined"
])
meson.build(args=['-j2'])
I have checked and confirmed this works and that the directory is correct.
I also tried using absolute paths by doing:
os.path.abspath(self.folders.generators)
But meson still cannot find the package for some reason.
The exact error is:
Found pkg-config: C:\msys64\mingw64\bin\pkg-config.EXE (1.8.0)
Found CMake: C:\Program Files\CMake\bin\cmake.EXE (3.22.1)
Run-time dependency vulkan-memory-allocator found: NO (tried pkgconfig and cmake)
..\meson.build:97:0: ERROR: Dependency "vulkan-memory-allocator" not found, tried pkgconfig and cmake
A full log can be found at C:\Users\Makogan\Documents\neverengine\build\meson-logs\meson-log.txt
FAILED: build.ninja
"C:\Python311\Scripts\meson" "--internal" "regenerate" "C:\Users\Makogan\Documents\neverengine" "C:\Users\Makogan\Documents\neverengine\build" "--backend" "ninja"
ninja: error: rebuilding 'build.ninja': subcommand failed
ERROR: conanfile.py: Error in build() method, line 108
meson.build(args=['-j2'])
ConanException: Error 1 while executing ninja -C "C:\Users\Makogan\Documents\neverengine\build" -j2
It does work if I do meson --reconfigure -Dpkg_config=<path>.
I am confused.
| Try specifying -Dbuild.pkg_config_path=... instead. From the Meson documentation:
Since 0.51.0, some options are specified per machine rather than
globally for all machine configurations. Prefixing the option with
build. just affects the build machine configuration...
build.pkg_config_path controls the paths pkg-config will search for
just native: true dependencies (build machine).
PS: I deduced the Meson version, and that you have a native build, from your previous question ;)
|
70,749,292 | 70,749,480 | C# to C++: Convert C# Time calculation to C++ chrono | I've been given the task of converting some C# code to C++, and I'm having problems with chrono. The goal is to round a time to a variable time span.
C#:
int iRoundTo = 30; // can be 45 or other variable value, not a const
DateTime dt = Floor(DateTime.Now, new TimeSpan(0, 0, 0, iRoundTo));
I found a solution with const iRoundTo but this is not what I'm looking for.
How do I convert this to C++ with using std::chono?
C++:
std::chrono::seconds diff(iRoundTo);
auto dt = std::chrono::floor<diff>(Now);
This is not working due to compile error.
Thanks.
| I'm doing a bit of guessing with this answer, but my guess is that you want to truncate the current time to the floor of the current half minute (in case of iRoundTo == 30).
If I'm correct, this is easy to do as long as iRoundTo is a compile-time constant.
#include <chrono>
#include <iostream>
int
main()
{
using RoundTo = std::chrono::duration<int, std::ratio<30>>;
auto Now = std::chrono::system_clock::now();
auto dt = std::chrono::floor<RoundTo>(Now);
std::cout << dt << '\n';
}
The above creates a new duration type that is 30 seconds long. It then gets the current time and floors it (truncates downwards) to the previous half minute unit. It then prints out the result. Everything before the print works in C++17. The printing (the last line) requires C++20.
Example output for me:
2022-01-18 01:45:30
The above code can also work pre-C++17 but in that case you'll need to find your own floor (e.g. date.h) or use std::chrono::duration_cast which truncates towards zero.
Update
In the comments below, it is explained that iRoundTo is not a compile-time constant, but does always represent a multiple of seconds. In this case I would do this in two steps:
int iRoundTo = 30;
auto Now = std::chrono::system_clock::now();
// First truncate to seconds
auto dt = std::chrono::floor<std::chrono::seconds>(Now);
// Then subtract of the modulus according to iRoundTo
dt -= dt.time_since_epoch() % iRoundTo;
The sub-expression dt.time_since_epoch() % iRoundTo has type seconds and a value between 0s and seconds{iRoundTo} - 1s.
|
70,750,236 | 70,750,403 | How can I solve the error -- error: invalid types ‘int[int]’ for array subscript? | #include <iostream>
#include <iomanip>
using namespace std;
int col=10;
int row=0;
void avg(int * ar,int row, int col)
{
float size= row * col;
int sum=0, ave;
for(int i=0; i<row; i++)
{
for(int j=0; j<col; j++){
sum+=ar[i][j];
cout<<sum;}
}
ave=sum/size;
cout<<sum<<endl;
cout<<ave;
}
int main()
{
int row, col;
cout<<"How many rows does the 2D array have: ";
cin>>row;
cout<<"How many columns does the 2D array have: ";
cin>>col;
int ar[row][col];
cout<<"Enter the 2D array elements below : \n";
for(int i=0; i<row; i++){
cout<<"For row "<<i + 1<<" : \n";
for(int j=0; j<col; j++)
cin>>ar[i][j];
}
cout<<"\n Array is: \n";
for(int i=0; i<row; i++)
{
for(int j=0; j<col; j++)
cout<<setw(6)<<ar[i][j];
cout<<endl;
}
cout<<"\nAverage of all the elements of the given D array is: \n";
avg((int*)ar,row,col);
return 0;
}
Hi there, I have written this code to calculate the average of the elements of a 2D array. I am getting an error while trying to access the elements of the 2D array at lines 12-13 (ar[i][j])
The error says- error: invalid types ‘int[int]’ for array subscript
How can I solve this error?
PS: I want to give row( no. of rows in 2D array) and col(no. of columns in 2D array) in the function parameter to make this more dynamic.
| Your function parameter ar is an int*. But when you write sum+=ar[i][j] you're subscripting it as if it were a 2D array. You can only subscript it in one dimension, like ar[i].
Additionally, row and col are not constant expressions, and in Standard C++ the size of an array must be a compile-time constant (constant expression). So,
int ar[row][col]; //this statement is not standard c++
The above statement is not standard c++.
A better way(to avoid these problems) would be to use a 2D std::vector instead of a 2D array as shown below.
#include <iostream>
#include <iomanip>
#include <vector>
//this function takes a 2D vector by reference and returns a double value
double avg(const std::vector<std::vector<int>> &arr)
{
int sum=0;
for(const std::vector<int> &tempRow: arr)
{
for(const int &tempCol: tempRow){
sum+=tempCol;
//std::cout<<sum;
}
}
return (static_cast<double>(sum)/(arr.at(0).size() * arr.size()));
}
int main()
{
int row, col;
std::cout<<"How many rows does the 2D array have: ";
std::cin>>row;
std::cout<<"How many columns does the 2D array have: ";
std::cin>>col;
//create a 2D vector instead of array
std::vector<std::vector<int>> ar(row, std::vector<int>(col));
std::cout<<"Enter the 2D array elements below : \n";
for(auto &tempRow: ar){
for(auto &tempCol: tempRow){
std::cin>>tempCol;
}
}
std::cout<<"\n Array is: \n";
for(auto &tempRow: ar)
{
for(auto &tempCol: tempRow)
std::cout<<std::setw(6)<<tempCol;
std::cout<<std::endl;
}
std::cout<<"\nAverage of all the elements of the given D array is: \n";
std::cout<<avg(ar);
return 0;
}
|
70,750,288 | 70,750,343 | can derived class access base class non-static members without object of the base class | can derived class access base class non-static members without object of the base class
class base
{
public:
int data;
void f1()
{
}
};
class derived : base
{
public :
void f()
{
base::data = 44; // is this possible
cout << base::data << endl;
}
};
why does the one below show an error
class base
{
public:
int data;
void f1()
{
}
};
class derived : base
{
public :
static void f()
{
base::data = 44; // this one shows a error
cout << base::data << endl;
}
};
I could not find the answer on any website
| In your 1st example
class derived : base
{
void f()
{
base::data = 44;
}
};
f() is non-static. It operates on an object of derived, which contains a base subobject. So, base::data = 44; is equivalent to data = 44; and it accesses the member of the object.
In the 2nd example
class derived : base
{
static void f()
{
base::data = 44;
}
};
function f() is static, so it does not have access to any object. There, base::data = 44; could only refer to a static data member of base. But because data is non-static, the expression is ill-formed.
|
70,751,049 | 70,751,757 | question on initializeOpenGLFunctions returning false | I'm running into the following segfault after initializeGL() fails. Any ideas as to what might cause this? There is no problem when I inherit from QOpenGLFunctions but I need v3.0 functionality.
class MyGLWidget : public QOpenGLWidget, protected QOpenGLFunctions
QOpenGLFunctions_3_3_Compatibility::glClearColor (this=0x5d3c40, red=0, green=0, blue=0, alpha=1) at /usr/include/qt5/QtGui/qopenglfunctions_3_3_compatibility.h:1064
1064 d_1_0_Core->f.ClearColor(red, green, blue, alpha);
class MyGLWidget : public QOpenGLWidget, protected QOpenGLFunctions_3_3_Compatibility
{
Q_OBJECT
...
};
void initializeGL() override {
makeCurrent();
bool ret = initializeOpenGLFunctions();
std::cout << std::boolalpha << ret << std::endl; // FALSE
glClearColor(0, 0, 0, 1); // CRASH
}
| I solved this problem as follows: I inherited from QOpenGLFunctions and then, where needed, obtained the version functions for the corresponding profile (Core profile 3.3 in my case).
class MyGLWidget : public QOpenGLWidget, protected QOpenGLFunctions
{
Q_OBJECT
...
protected:
void initializeGL() override {
initializeOpenGLFunctions();
auto * glFunctions = QOpenGLContext::currentContext()->versionFunctions<QOpenGLFunctions_3_3_Core>();
glFunctions->glClearColor(0, 0, 0, 1);
}
};
|
70,751,060 | 70,751,135 | do for loops deallocate memory after they finish | Let's say you wrote a for loop:
for (int i = 0; i < 10; i++)
for (int j = 0; j < 10; j++)
Is that for loop creating 10 different j variables, and does it deallocate i and j after it's done looping?
I have seen many people do this instead:
int i, j, k
for (i = 0; i < 10; i++)
for (j = 0; j < 10; j++)
//..All The Loops..//
Is there any advantage of declaring the i j k variables before all of your loops, or is it just a personal preference?
| All of the variables in question are being created in automatic storage. They are destroyed when they go out of scope. The two examples are simply declaring the variables in different scopes.
In the first example, i is scoped to the outer loop, meaning i exists only while the loop is running. It is created when the loop begins, and it is destroyed when the loop ends:
for (int i = 0; i < 10; i++) { <- created here
<statements>
} <- destroyed here
Same with j in the inner loop:
for (int i = 0; i < 10; i++) { <- i created here
for (int j = 0; j < 10; j++) { <- j created here
<statements>
} <- j destroyed here
} <- i destroyed here
In the second example, the variables are scoped to the outer block in which the loops exist. So the variables already exist before the outer loop begins, and they continue to exist after the loop ends.
{
...
int i, j, k; <- created here
for (i = 0; i < 10; i++)
for (j = 0; j < 10; j++)
...
...
} <- destroyed here
|
70,751,387 | 70,751,684 | Returning multiple unique_ptr from factory mock | How can I return multiple objects from a mocked factory returning unique_ptr, when the calls cannot be identified through different input parameters to the called function?
I'm doing this:
EXPECT_CALL(MyFactoryMock, create())
.WillRepeatedly(Return(ByMove(std::make_unique<MyObjectTypeMock>())));
And run-time error is:
[ FATAL ]
/.../tools/googletest/1.8.0-57/include/gmock/gmock-actions.h:604::
Condition !performed_ failed. A ByMove() action should only be
performed once.
Doing the same thing only once, using WillOnce, works fine.
Follow-up question: Return multiple mock objects from a mocked factory function which returns std::unique_ptr
| ByMove is designed to move a predefined value that you prepared in your test, so it can only be called once. If you need something else, you'll need to write it yourself explicitly.
Here's an excerpt from the googletest documentation:
Quiz time! What do you think will happen if a Return(ByMove(...))
action is performed more than once (e.g. you write ...
.WillRepeatedly(Return(ByMove(...)));)? Come think of it, after the
first time the action runs, the source value will be consumed (since
it’s a move-only value), so the next time around, there’s no value to
move from – you’ll get a run-time error that Return(ByMove(...)) can
only be run once.
If you need your mock method to do more than just moving a pre-defined
value, remember that you can always use a lambda or a callable object,
which can do pretty much anything you want:
EXPECT_CALL(mock_buzzer_, MakeBuzz("x"))
.WillRepeatedly([](StringPiece text) {
return MakeUnique<Buzz>(AccessLevel::kInternal);
});
EXPECT_NE(nullptr, mock_buzzer_.MakeBuzz("x"));
EXPECT_NE(nullptr, mock_buzzer_.MakeBuzz("x"));
Every time this EXPECT_CALL fires, a new unique_ptr<Buzz> will be
created and returned. You cannot do this with Return(ByMove(...)).
|
70,751,584 | 70,751,666 | Extremely long linking time on windows but not Linux | I have a program that on Linux compiles and links in about 15 minutes from scratch, and then takes about 1 minute to compile on subsequent rebuilds.
The exact same program takes hours to link on Windows. The compilation step is the same, but it just hangs on the linking step for a really long time.
Is there a way to profile what is bottlenecking compilation?
I am trying to compile on Win10 using meson as the build system and VS through the command line as the compiler and cl as the linker.
| I don't think it's possible to profile the linker directly, but you can normally speed up linking significantly via the /INCREMENTAL and /DEBUG:FASTLINK flags.
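Since the question builds with Meson, one way to forward those MSVC linker flags is Meson's built-in cpp_link_args option (a sketch; the directory name is illustrative and flag spellings follow MSVC's link.exe):

```sh
# Hypothetical setup invocation passing linker flags through Meson
meson setup builddir -Dcpp_link_args="/INCREMENTAL /DEBUG:FASTLINK"
```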
|
70,752,045 | 70,752,443 | How can I return `map<K, V>::iterator`? (`map` class extends `std::map`) | I'm making a custom map class (kc::map) that extends std::map, and I want to add at_index(), and I need to return map<K, V>::iterator. But when I return map<K, V>, this error occurs: error: invalid use of incomplete type ‘class kc::map<K, V>’:
namespace kc
{
template<typename K, typename V> class map : public std::map<K, V>
{
public:
using std::map<K, V>::map;
map<K, V> get_value()
{
return *this;
}
map<K, V>::iterator at_index(int index)
{
map<K, V> m = get_value();
for (int i = 0; i < index; i++)
{
m.erase(m.begin());
}
return m.begin();
}
};
};
| Ignoring the real problems with the code (deriving from standard containers is not recommended, and at_index returns a dangling iterator into the local object m), to get the code to compile you have two options.
within a class you don't need to prefix members of the class with the class name. As iterator isn't a class member you need to add it first then an unqualified iterator will work:
using iterator = typename std::map<K, V>::iterator;
iterator at_index(int index)
You can just use std::map::iterator directly:
typename std::map<K, V>::iterator at_index(int index)
If what you're actually trying to do is get the ith item in a std::map this will work:
#include <map>
#include <stdexcept>
#include <iostream>
namespace kc
{
template<typename K, typename V> class map
{
private:
std::map<K, V> impl;
public:
using iterator = typename std::map<K, V>::iterator;
using value_type = typename std::map<K, V>::value_type;
std::pair<iterator, bool> insert( const value_type& value )
{
return impl.insert(value);
}
iterator at_index(int index)
{
if (index >= impl.size())
{
throw std::invalid_argument("index out of range");
}
auto it = impl.begin();
std::advance(it, index);
return it;
}
};
};
int main()
{
kc::map<int, int> m;
m.insert({1, 1});
m.insert({2, 2});
m.insert({3, 3});
std::cout << m.at_index(2)->second;
}
|
70,752,203 | 70,752,347 | std::move versus copy elision | The following code compiles without warnings in Visual Studio 2019 msvc x64:
class ThreadRunner {
public:
void start() {
m_thread = std::move(std::thread(&ThreadRunner::runInThread, this));
}
private:
void runInThread() {
for (int i = 0; i < 1000 * 1000; i++) {
std::cout << "i: " << i << "\n";
}
};
std::thread m_thread;
};
However if I compile the same code with x64-Clang I get the following warning:
warning : moving a temporary object prevents copy elision [-Wpessimizing-move]
Does this mean that I should have written:
m_thread = std::thread(&ThreadRunner::runInThread, this);
instead?
And the compiler would have optimized away ("copy elided") the temporary variable?
Will msvc x64 also copy elide the temporary variable?
I did an experiment:
struct B {
void f1() {
a = A(5);
}
void f2() {
A tmp = A(5);
a = tmp;
}
void f3() {
a = std::move(A(5));
}
void f4() {
A tmp = A(5);
a = std::move(tmp);
}
A a;
};
f1, f3 and f4 produce the same sequence of calls to A member functions:
default ctor
ctor
move = operator
f2 produces another result:
default ctor
ctor
= operator
| The two versions
m_thread = std::thread(&ThreadRunner::runInThread, this);
and
m_thread = std::move(std::thread(&ThreadRunner::runInThread, this));
behave identically. No elision is possible in either case, since this is assignment to, not initialization of, m_thread. The temporary object must be constructed and then there will be a move assignment from it in either version.
The hint is still correct though, since std::move on a temporary either doesn't have any effect at all (as here) or prevents elision if used in a context where copy elision would otherwise be allowed/mandatory, for example if this was the initializer of m_thread instead of an assignment to it.
|
70,752,236 | 70,752,275 | Can a class member function be invoked without an object? | I was learning about the history of lambdas in C++ and saw the following code (which is not a lambda), and I am surprised by how it works
struct Printer{
void operator() (int x) const{
std::cout << x << '\n';
}
};
int main(){
std::vector <int> vint;
//doing it the C++ 03 way
vint.push_back(1);
vint.push_back(7);
std::for_each(vint.begin(),vint.end(), Printer());
}
How is the Printer() call in the for_each function working?
| Printer() is an instance of the Printer class. It will result in a temporary object of type Printer which is passed to std::for_each.
This is the object on which operator() is called by std::for_each internally.
Without an object of type Printer, it is not possible to call the operator() member function.
|
70,752,718 | 70,752,861 | convert const std::shared_ptr<const T> into boost::shared_ptr<T> | I need to convert a variable of type const std::shared_ptr<const T> into boost::shared_ptr<T>.
In the following scanCallback(), I cannot modify the parameter const std::shared_ptr<const io_adaptor::msg::PandarScan> msg. The msg is very big in memory, as it contains many lidar points. PushScanPacket()'s argument is boost::shared_ptr<io_adaptor::msg::PandarScan>, whose type I also cannot modify.
The following code does not compile successfully, does somebody know how to do this?
void HesaiLidarModule::scanCallback(
const std::shared_ptr<const io_adaptor::msg::PandarScan> msg){
std::remove_const<const std::shared_ptr<
const io_adaptor::msg::PandarScan>
>::type non_const_msg(msg);
boost::shared_ptr<io_adaptor::msg::PandarScan> msg_boost(non_const_msg.get());
hsdk->PushScanPacket(msg_boost);
}
UPDATE_1st: The following code can compile successfully, but I'm not sure whether std::remove_const<const io_adaptor::msg::PandarScan>::type non_const_obj(*msg); invokes a copy constructor, which is expensive for msg.
void HesaiLidarModule::scanCallback(
const std::shared_ptr<const io_adaptor::msg::PandarScan> msg){
std::remove_const<const io_adaptor::msg::PandarScan>::type non_const_obj(*msg);
io_adaptor::msg::PandarScan& copy = non_const_obj;
boost::shared_ptr<io_adaptor::msg::PandarScan> msg_boost(&copy);
hsdk->PushScanPacket(msg_boost);
}
| You cannot transfer ownership from std::shared_ptr to boost::shared_ptr.
You might from a std::unique_ptr though.
But you can create a boost::shared_ptr with a custom deleter.
boost::shared_ptr<io_adaptor::msg::PandarScan>
msg_boost(const_cast<io_adaptor::msg::PandarScan*>(msg.get()),
[msg = msg](auto*) mutable { msg.reset(); });
Deleter captures original shared_ptr to maintain the lifetime,
and instead of releasing the resource, it just "decreases" the refcount.
|
70,752,912 | 70,752,993 | Understanding const reference in assignment when constructing object | Today I saw this piece of code, and I'm wondering what exactly this const reference is doing in an assignment where a new object is created. (I don't know what to call this kind of assignment.)
std::string const& p = s.c_str(); // s is a std::string
I understand that something like std::string const& p = s; will create a reference p to s, but in the line shown we are creating a new object (using the raw pointer from std::string::c_str).
I've made a MCVE in Coliru with this:
#include <iostream>
#include <string>
void foo(std::string const& s)
{
std::string const& p = s.c_str(); // << here
std::cout << s << " " << p << " " << &s << " " << &p << std::endl;
}
int main()
{
foo("hello");
}
And, as expected the output is showing that a new object was created:
hello hello 0x7ffdd54ef9a0 0x7ffdd54ef950
So, my question is: Is this actually doing something I'm not able to see? Does it have any problem (like a dangling reference) in the code?
| From std::string::c_str's documentation, it returns:
a pointer to an array that contains a null-terminated sequence of characters (i.e., a C-string) representing the current value of the string object. That is, a const char*.
So when you wrote:
std::string const& p = s.c_str();
In the above statement, the const char* that was returned on the right hand side is used to create a temporary object of type std::string using a converting constructor that takes const char* as argument.
Next, the lvalue reference p on the left hand side is bound to that temporary object. And in doing so, the lifetime of the temporary is extended.
To answer your last question, there is no dangling reference in your program.
|
70,753,018 | 70,755,061 | Why do all elements get replaced when inserting new elements in boost multi_index container? | When I insert elements one after the other in the main function, they are inserted properly, but when I try to do that through a function, all values get replaced by the last element.
Please refer to the following code:
struct X
{
std::string panelid; // assume unique
std::string messageid; // assume unique
std::string tid; // assume non-unique
std::string name;
};
struct IndexByPanelID {};
struct IndexByMessageID {};
struct IndexByTID {};
typedef boost::multi_index_container<
X*, // the data type stored
boost::multi_index::indexed_by<
boost::multi_index::hashed_non_unique<
boost::multi_index::tag<IndexByPanelID>,
boost::multi_index::member<X, std::string, &X::panelid>
>,
boost::multi_index::hashed_non_unique<
boost::multi_index::tag<IndexByMessageID>,
boost::multi_index::member<X, std::string, &X::messageid>
>,
boost::multi_index::hashed_non_unique<
boost::multi_index::tag<IndexByTID>,
boost::multi_index::member<X, std::string, &X::tid>
>
>
> Container;
int Insert(X newframe, Container *Cont)
{
auto& indexByL = c.get<IndexByPanelID>();
indexByL.insert(&newframe);
return 0;
}
int main()
{
Container c; // empty container
X x1{ "1", "80", "FE01", "0712"};
X x2{ "2", "80", "FE02", "0713"};
X x3{ "3", "180", "FE03", "0714"};
X x4{ "4", "80", "FE04", "0715"};
Insert(x1,&c); // Doesnt work.
Insert(x2,&c); // Doesnt work.
Insert(x3,&c); // Doesnt work.
Insert(x4,&c); // Doesnt work.
auto& indexByI3 = c.get<IndexByPanelID>();
for(auto i = indexByI3.begin(); i != indexByI3.end(); i++)
{
X *x = *i;
std::cout << x->name << '\n';
std::cout << x->messageid << '\n';
std::cout << x->panelid << '\n';
std::cout << x->tid << '\n';
}
int main()
{
Container c; // empty container
X x1{ "1", "80", "FE01", "0712"};
X x2{ "2", "80", "FE02", "0713"};
X x3{ "3", "180", "FE03", "0714"};
X x4{ "4", "80", "FE04", "0715"};
// Insert some elements
auto& indexByL = c.get<IndexByPanelID>();
indexByL.insert(&x1); //works fine.
indexByL.insert(&x2); //works fine.
indexByL.insert(&x3); //works fine.
indexByL.insert(&x4); //works fine.
auto& indexByI3 = c.get<IndexByPanelID>();
for(auto i = indexByI3.begin(); i != indexByI3.end(); i++)
{
X *x = *i;
std::cout << x->name << '\n';
std::cout << x->messageid << '\n';
std::cout << x->panelid << '\n';
std::cout << x->tid << '\n';
}
}
Output 1st main :
0715
80
4
FE04
0715
80
4
FE04
0715
80
4
FE04
0715
80
4
FE04
2nd main :
0712
80
1
FE01
0713
80
2
FE02
0714
180
3
FE03
0715
80
4
FE04
| You didn't supply working code, and you're overusing pointers. The first fix just makes the code compile:
int Insert(X newframe, Container *Cont)
should be
int Insert(X newframe, Container& c)
The real problem is that you're storing pointers, not frames. The pointer inserted in the Insert function are dangling (they point to the argument frame which is gone after returning from Insert), so the behaviour is Undefined.
Lastly, when inserting new items, there is no need to jump through the extra hoops of selecting a specific index through which to do so (unless you need an index-specific iterator to the newly inserted element; however, you don't seem to use the return value at all).
Here's a simplified version with fixes:
Live On Compiler Explorer
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/hashed_index.hpp>
#include <boost/multi_index/member.hpp>
#include <iostream>
#include <iomanip>
namespace bmi = boost::multi_index;
#include <string>
struct X { std::string panelid, messageid, tid, name; };
using Container = bmi::multi_index_container<
X,
bmi::indexed_by< //
bmi::hashed_non_unique< //
bmi::tag<struct ByPanelID>, //
bmi::member<X, std::string, &X::panelid> //
>, //
bmi::hashed_non_unique< //
bmi::tag<struct ByMessageID>, //
bmi::member<X, std::string, &X::messageid> //
>, //
bmi::hashed_non_unique< //
bmi::tag<struct ByTID>, //
bmi::member<X, std::string, &X::tid> //
>>>;
void Insert(X newframe, Container &c) {
c.insert(std::move(newframe));
}
int main()
{
Container c; // empty container
Insert({ "1", "80", "FE01", "0712"},c);
Insert({ "2", "80", "FE02", "0713"},c);
Insert({ "3", "180", "FE03", "0714"},c);
Insert({ "4", "80", "FE04", "0715"},c);
for (auto& [panelid, messageid, tid, name] : c.get<ByPanelID>()) {
std::cout << std::quoted(name) << ' ' //
<< std::quoted(messageid) << ' ' //
<< std::quoted(panelid) << ' ' //
<< std::quoted(tid) << '\n';
}
}
Prints
"0712" "80" "1" "FE01"
"0713" "80" "2" "FE02"
"0714" "180" "3" "FE03"
"0715" "80" "4" "FE04"
In fact for this simple demonstration you can do much simpler still: https://godbolt.org/z/hbeeGd7ox
|
70,753,041 | 70,753,437 | Why does this construction of std::function from lambda not compile? | Why doesn't the following line compile?
std::function<void (int)> f = [](int&){};
But many other alternatives do:
[[maybe_unused]]
std::function<void (const int&)> f1 = [](const int&){};
//[[maybe_unused]]
// doesnt compile because losing constness
//std::function<void (const int&)> f2 = [](int&){};
[[maybe_unused]]
std::function<void (const int&)> f3 = [](int){};
[[maybe_unused]]
std::function<void (const int&)> f4 = [](const int){};
[[maybe_unused]]
std::function<void (int&)> f5 = [](const int&){};
[[maybe_unused]]
std::function<void (int&)> f6 = [](int&){};
[[maybe_unused]]
std::function<void (int&)> f7 = [](int){};
[[maybe_unused]]
std::function<void (int&)> f8 = [](int){};
[[maybe_unused]]
std::function<void (const int)> f9 = [](const int&){};
//[[maybe_unused]]
// doesnt compile because losing constness
//std::function<void (const int)> f10 = [](int&){};
[[maybe_unused]]
std::function<void (const int)> f11 = [](const int){};
[[maybe_unused]]
std::function<void (const int)> f12 = [](int){};
[[maybe_unused]]
std::function<void (int)> f13 = [](const int&){};
//[[maybe_unused]]
// doesnt compile because ??
//std::function<void (int)> f14 = [](int&){};
[[maybe_unused]]
std::function<void (int)> f15 = [](const int){};
[[maybe_unused]]
std::function<void (int)> f16 = [](int){};
My understanding was that std::function has a () operator which takes the args and calls the functor's () operator with them, so it's possible that, e.g., the first takes a const ref and the next takes a copy.
With this line of logic, I would think that an int copy should be able to be passed forward as int&. Can someone explain?
| std::function forwards its arguments to its target. The lambda assigned to f14 can't be called with an rvalue.
You are also incorrect about f10; that is the same case as f14. Top-level const is ignored in arguments. Each of f11, f12, f15 and f16 is the same case.
|
70,753,213 | 70,753,468 | Is it good practice to declare derived classes in the same C++ header? | I'm declaring a pure virtual class that will provide a unified interface for a handful of derived classes. My instinctive way to organize this would be to create a base folder with the header for the base class (e.g. lib/Base.h) and then create subfolders for the header + source file of each derived class (so lib/implA/ImplA.h, lib/implA/ImplA.cpp and so forth). This keeps the files short, but feels cluttered.
Would it be considered good practice to gather the definitions of the derived classes in the header lib/Base.h and keep the various implementations in the same folder?
| Two-file folders (like lib/implA/ImplA.h, lib/implA/ImplA.cpp) are unnecessary, for small projects people usually just put everything in lib/. If lib/ becomes too cluttered, put this whole hierarchy in lib/my_hierarchy/Base.h, lib/my_hierarchy/ImplA.cpp, etc. Maybe extract a logical subsystem instead of a hierarchy. Just keep reasonable folder sizes and some organized structure.
As for putting multiple declarations in the same header, it's your design choice. As far as I know, there's no single "best practice" in C++ regarding this. C++ doesn't enforce one class per file, like Java does. However, including a lot of classes in a single header means slightly longer compilation times for users, because that long header needs to be parsed in every .cpp file where it's #included. Usually people try to keep their headers minimal, but also provide a convenience "aggregate" header that includes all other headers (like bits/stdc++.h for the standard library). In your case, that would be:
// lib/lib.h
#include "my_hierarchy/Base.h"
#include "my_hierarchy/ImplA.h"
// etc.
So that users who don't mind longer compilation times can just #include <lib/lib.h> and have everything, while others can #include only classes they need.
|
70,753,294 | 70,753,348 | What is this 'bad:' label generated by Cython? | While cythonizing my Cython source code files, I see a dozen warnings about a label named 'bad:' generated by Cython, for example:
read_input.cpp:30037:3: warning: label ‘bad’ defined but not used [-Wunused-label]
The C++ generated function is like this:
static PyObject* __pyx_convert__to_py_struct__VehicleCaps(struct VehicleCaps s) {
PyObject* res;
PyObject* member;
res = __Pyx_PyDict_NewPresized(0); if (unlikely(!res)) return NULL;
return res;
bad:
Py_XDECREF(member);
Py_DECREF(res);
return NULL;
}
The 'bad:' label is in there; I don't get why Cython is generating this unused label that triggers warnings.
Do I really have to fix these warnings, or is it safe to leave them untouched?
| It's for goto bad if something fails in the function, but it doesn't look like anything can fail, so it's unused.
It isn't a problem, so you can ignore it. But Cython generally tries not to generate unused labels, so feel free to report it as a (small) bug.
|
70,753,352 | 70,753,949 | Cannot get operator() pointer of std::bind() returned object | I need to extract the type of a function object parameter.
Lambdas get translated into a closure object with the operator(). std::function has got the operator(), too.
So, I can get a pointer to the operator() to pass to another function, in this way:
template <typename F, typename T, typename R, typename ... Args>
void helper(F&, R (T::*)(Args...) const)
{
// do something with Args types
}
template <typename F>
void bar(F f)
{
helper(f, &F::operator());
}
void freefunc(int) {}
void foo()
{
// lambda: ok
bar([](int){});
// std::function: ok
const std::function<void(double)> f = [](double){};
bar(f);
// std::bind: does not compile
auto g = std::bind(freefunc, std::placeholders::_1);
bar(g);
}
std::bind should create an object with the operator(), too. However, my code does not work with std::bind(), and I cannot understand why.
gcc produces this error:
In instantiation of 'void bar(F) [with F = std::_Bind<void (*(std::_Placeholder<1>))(int)>]':
<source>:58:8: required from here
<source>:47:11: error: no matching function for call to 'helper(std::_Bind<void (*(std::_Placeholder<1>))(int)>&, <unresolved overloaded function type>)'
47 | helper(f, &F::operator());
| ~~~~~~^~~~~~~~~~~~~~~~~~~
<source>:39:6: note: candidate: 'template<class F, class T, class R, class ... Args> void helper(F&, R (T::*)(Args ...) const)'
39 | void helper(F&, R (T::*)(Args...) const)
| ^~~~~~
<source>:39:6: note: template argument deduction/substitution failed:
<source>:47:11: note: couldn't deduce template parameter 'T'
47 | helper(f, &F::operator());
| ~~~~~~^~~~~~~~~~~~~~~~~~~
ASM generation compiler returned: 1
<source>: In instantiation of 'void bar(F) [with F = std::_Bind<void (*(std::_Placeholder<1>))(int)>]':
<source>:58:8: required from here
<source>:47:11: error: no matching function for call to 'helper(std::_Bind<void (*(std::_Placeholder<1>))(int)>&, <unresolved overloaded function type>)'
47 | helper(f, &F::operator());
| ~~~~~~^~~~~~~~~~~~~~~~~~~
<source>:39:6: note: candidate: 'template<class F, class T, class R, class ... Args> void helper(F&, R (T::*)(Args ...) const)'
39 | void helper(F&, R (T::*)(Args...) const)
| ^~~~~~
<source>:39:6: note: template argument deduction/substitution failed:
<source>:47:11: note: couldn't deduce template parameter 'T'
47 | helper(f, &F::operator());
What's the correct way to do the same with std::bind?
| What you want to do is unfortunately impossible because the return type of std::bind is too loosely specified by the standard.
std::function::operator() is clearly defined by the standard, so you can match it against R (T::*)(Args... ), see [func.wrap.func.general],
for lambda functions, it's not that clear from [expr.prim.lambda.closure#3], but I'd say that it should work,
for std::bind, the specification [func.bind.bind#4] is much broader because it only says that you can call g(u1, u2, …, uM) where g is the returned value from std::bind, so there is no guarantee that the return type of std::bind even has an operator() member-function.
The actual implementation problem here, which is the same for gcc, clang and msvc, is that the operator() member-function of the return value is actually a template, so you cannot use &F::operator() directly — you cannot take the address of a templated (member-)function.
|
70,754,244 | 70,757,310 | How to write parallelly into a container | I'm working with C++14 and I don't know how to write into a container in parallel with multi-threading.
Let's say I have such a map: std::map<int, int> mp {{1, 0}, {2, 0}, {3, 0}} and a function as below:
void updateValue(int& value) {
value = xxx; // heavy calculation
}
Then I try to create three threads:
std::vector<std::thread> vec;
for (auto& ele : mp) {
vec.emplace_back(std::thread(updateValue, std::ref(ele.second)));
}
However, the problem is that std::map is not thread-safe.
So it seems that I need to add a lock in the function updateValue. But if I do this, the function calls are serialized, just like a single thread.
Is there some method to allow me to use multi-threading in this case?
| Your example is safe as-is, and doesn't require any additional synchronization.
[container.requirements.dataraces]/2 Notwithstanding [res.on.data.races], implementations are required to avoid data races when the contents of the contained object in different elements in the same container, excepting vector<bool>, are modified concurrently.
|
70,754,848 | 70,754,897 | How to fix the Segmentation fault (core dumped) in C++? | I'm writing a program that combines 2 vectors and sorts them and then prints the vector but I'm not using a third vector. Instead I'm combining one vector with another and then sorting the combined vector. But I get a error called "Segmentation fault".
here's the code:
#include<bits/stdc++.h>
using namespace std;
int main() {
// your code goes here
ios_base::sync_with_stdio(0);
cin.tie(0);
int m,n;
cin >> m >> n;
vector<int> nums1, nums2;
for(int i=0; i<m; i++) cin >> nums1[i];
for(int i=0; i<n; i++) cin >> nums2[i];
for(int i=0; i<n; i++){ // nums2
nums1.push_back(nums2[i]); // I am adding all the elements present in nums2 into nums1
}
sort(nums1.begin(), nums1.end());
for(int i=0; i<(m+n); i++) cout << nums1[i] << " ";
return 0;
}
The error I get: run: line 1: 3 Segmentation fault (core dumped) LD_LIBRARY_PATH=/usr/local/gcc-8.3.0/lib64 ./a.out
Please tell me how I can fix this error and how I could avoid it in the future.
| Here:
vector<int> nums1, nums2;
for(int i=0; i<m; i++) cin >> nums1[i]; // this causes undefined behavior
for(int i=0; i<n; i++) cin >> nums2[i]; // also this one
your vectors have no buffer to store data so you need to do this before using operator[]:
vector<int> nums1(m), nums2(n);
nums1.push_back(2); // will add 2 to the back of nums1 so size will become m + 1
nums2.push_back(6); // will add 6 to the back of nums2 so size will become n + 1
// you can do as many push_backs as you want until
// your computer runs out of memory
Now both will be initialized with m and n number of elements respectively.
If you used the at function instead of [], the program would throw a std::out_of_range exception and you would suddenly notice that you were trying to go out of bounds. But of course at comes at a performance cost.
|
70,755,447 | 70,768,788 | Close connection with client after inactivty period | I'm currently managing a server that can serve at most MAX_CLIENTS clients concurrently.
This is the code I've written so far:
//create and bind listen_socket_
struct pollfd poll_fds_[MAX_CLIENTS];
for (auto& poll_fd: poll_fds_)
{
poll_fd.fd = -1;
}
listen(listen_socket_, MAX_CLIENTS);
poll_fds_[0].fd = listen_socket_;
poll_fds_[0].events = POLLIN;
while (enabled)
{
const int result = poll(poll_fds_, MAX_CLIENTS, DEFAULT_TIMEOUT);
if (result == 0)
{
continue;
}
else if (result < 0)
{
// throw error
}
else
{
for (auto& poll_fd: poll_fds_)
{
if (poll_fd.revents == 0)
{
continue;
}
else if (poll_fd.revents != POLLIN)
{
// throw error
}
else if (poll_fd.fd == listen_socket_)
{
int new_socket = accept(listen_socket_, nullptr, nullptr);
if (new_socket < 0)
{
// throw error
}
else
{
for (auto& poll_fd: poll_fds_)
{
if (poll_fd.fd == -1)
{
poll_fd.fd = new_socket;
poll_fd.events = POLLIN;
break;
}
}
}
}
else
{
// serve connection
}
}
}
}
Everything is working great, and when a client closes the socket on its side, everything gets handled well.
The problem I'm facing is that when a client connects and sends a request, if it does not close the socket on its side afterwards, I do not detect it and that socket is left "busy".
Is there any way to implement a system to detect if nothing is received on a socket after a certain time? In that way I could free that connection on the server side, leaving room for new clients.
Thanks in advance.
| You could close the client connection when the client has not sent any data for a specific time.
For each client, you need to store the time when the last data was received.
Periodically, for example when poll() returns because the timeout expired, you need to check this time for all clients. When this time is too long ago, you can shutdown(SHUT_WR) and close() the connection. You need to determine what "too long ago" means.
If a client does not have any data to send but wants to leave the connection open, it could send a "ping" message periodically. The server could reply with a "pong" message. These are just small messages with no actual data. It depends on your client/server protocol whether you can implement this.
|
70,755,471 | 70,756,174 | Is there a backend optimizer in LLVM? | I can get the optimization level from the command llc -help
-O=<char> - Optimization level. [-O0, -O1, -O2, or -O3] (default = '-O2')
I want to know what the optimization does exactly.
So, I'm searching the source code of the backend optimizer.
I google it by "llvm backend optimizer", but there is no information about it, only some target-independent pass source code.
I want to know what the optimization does for div-rem-pairs.
It can combine two LLVM IR instructions into one assembly instruction.
| Apparently there are backend optimizer options in LLVM. They are, however, not well documented [1,2]. The TargetMachine [3] class has the functions getOptLevel and setOptLevel to set an optimization level from 0-3 for a specific target machine, so starting from there you can try to track where it is used.
[1] https://llvm.org/docs/CodeGenerator.html#ssa-based-machine-code-optimizations
[2] https://llvm.org/docs/CodeGenerator.html#late-machine-code-optimizations
[3] https://llvm.org/doxygen/classllvm_1_1TargetMachine.html
|
70,755,499 | 70,759,851 | Create std::chrono::zoned_time from zone and time | I have a datetime in a platonic sense, i.e some date and time (like 18th of January 2022 15:15:00) and I know in which timezone it represent something, e.g "Europe/Moscow"
I want to create a std::chrono::zoned_time. Is it possible?
I looked at the constructors and it seems all of them require either sys_time or local_time which is not what I have.
Am I missing something obvious?
| #include <chrono>
#include <iostream>
int
main()
{
using namespace std::literals;
std::chrono::zoned_time zt{"Europe/Moscow",
std::chrono::local_days{18d/std::chrono::January/2022} + 15h + 15min};
std::cout << zt << '\n';
}
local_time isn't necessarily the computer's local time. It is a local time that has not yet been associated with a time zone. When you construct a zoned_time, you associate a local time with a time zone.
The above program prints out:
2022-01-18 15:15:00 MSK
So you can think of this as the identity function. But in reality you can also get the UTC time out of zt with .get_sys_time(). And you can also use zt as the "time point" to construct another zoned_time:
std::chrono::zoned_time zt2{"America/New_York", zt};
std::cout << zt2 << '\n';
Output:
2022-01-18 07:15:00 EST
zt2 will have the same sys_time (UTC) as zt. This makes it handy to set up international video conferences (for example).
|
70,755,669 | 70,755,774 | The template function encountered an error when passed in a numeric literal, but string literals didn't | I'm writing this code
#include <iostream>
#include <string>
template <typename T>
void Print(T& value)
{
std::cout << value << std::endl;
}
int main()
{
Print("Hello");
Print(1);
}
When compiling, the compiler reported an error: "'void Print<int>(T &)': cannot convert argument 1 from 'int' to 'T &'". But Print("Hello") didn't produce an error. Why is that?
And I changed Print() function to
void Print(T value)
{
std::cout << value << std::endl;
}
It worked. But I don't understand why the former code didn't work.
| Case 1
Here we consider how Print(1); works.
In this case, the problem is that 1 is an rvalue and you're trying to bind that rvalue to an lvalue reference to non-const T (that is, T&), which is not possible, hence the error. For example you cannot have:
void Print(int &value)
{
std::cout << value << std::endl;
}
int main()
{
Print(1);// won't work
}
Solution
So to solve your problem you can use an lvalue reference to const T (that is, const T&), which can bind to an rvalue, as shown below:
template <typename T>
void Print(const T& value)//note the const added here
{
std::cout << value << std::endl;
}
int main()
{
Print(1); //works now
}
Alternatively, you can also make the parameter an rvalue reference to non-const T (that is, T&&).
template <typename T>
void Print(T&& value)//note the && added here
{
std::cout << value << std::endl;
}
int main()
{
Print(1); //this works too
}
Case 2
Here we consider the statement Print("Hello");
In this case, "Hello" is a string literal and has the type const char [6]. Also, the string literal "Hello" is an lvalue.
And we know that we can bind an lvalue to an lvalue reference to non-const T (that is, T&). So there is no error in this case. Also note that in this case T is deduced to be const char [6].
Note
In case 2 above (Print("Hello");) there is no type decay because the argument is passed by reference and not by value.
|
70,755,720 | 70,757,636 | How can I minimize boilerplate code associated with std::thread? | I have several classes like this in my C++ code:
class ThreadRunner {
public:
void start() {
m_thread = std::thread(&ThreadRunner::runInThread, this);
}
void stop() {
m_terminate = true;
}
~ThreadRunner() {
m_terminate = true;
if (m_thread.joinable()) {
m_thread.join();
}
}
private:
void runInThread() {
size_t i = 0;
while (!m_terminate) {
std::cout << "i: " << i << "\n";
i++;
}
};
std::thread m_thread;
std::atomic<bool> m_terminate{ false };
};
Some of my runInThread functions take arguments, and some do not.
There is quite a bit of boilerplate code (m_terminate, join) that I have to repeat for each class.
All this is of course motivated by the fact that I do not want std::terminate to get called inside the destructor of std::thread:
~thread() noexcept {
if (joinable()) {
_STD terminate();
}
}
A colleague has made a class encapsulating this boilerplate.
The way it works is that it has a virtual void runInThread() = 0 method.
So each of my classes would inherit this baseclass and override this method.
Also there is an implicit assumption that each override checks m_terminate occasionally.
I would prefer to use composition instead of inheritance.
That is, MyClass uses a MyThread rather than is a MyThreadObject.
How can I create a start method that takes a "function pointer" and arbitrary arguments and passes them on to std::thread constructor?
If it is too much hassle I guess I will go with my colleague's approach instead. From what I understand, Java has a Runnable interface where one must override void run().
If I were to use std::jthread is this how it should be done?
class ThreadRunner {
public:
void start() {
m_thread = std::jthread(&ThreadRunner::runInThread, this, m_stop_source.get_token());
}
void stop() {
m_stop_source.request_stop();
}
~ThreadRunner() {
stop(); // otherwise ~jthread waits forever for runInThread to finish
}
private:
void runInThread(std::stop_token stoken) {
size_t i = 0;
while (!stoken.stop_requested()) {
std::cout << "i: " << i << "\n";
i++;
}
};
std::jthread m_thread;
std::stop_source m_stop_source;
};
What are the benefits of using a std::stop_source over a std::atomic<bool> m_terminate?
If I were to use std::async is this how it would be done?:
class ThreadRunner {
public:
void start() {
m_future = std::async(std::launch::async, &ThreadRunner::runInThread, this);
}
void stop() {
m_terminate = true;
}
~ThreadRunner() {
stop();
}
private:
void runInThread() {
size_t i = 0;
while (!m_terminate) {
std::cout << "i: " << i << "\n";
i++;
}
};
std::future<void> m_future;
std::atomic<bool> m_terminate{ false };
};
| Your std::jthread code can be simplified to:
class ThreadRunner {
public:
void start() {
m_thread = std::jthread(&ThreadRunner::runInThread, this);
}
void stop() {
m_thread.request_stop();
}
private:
void runInThread() {
size_t i = 0;
auto stopToken = m_thread.get_stop_token();
while (!stopToken.stop_requested()) {
std::cout << "i: " << i << "\n";
i++;
}
};
std::jthread m_thread;
};
Or, as your wrapper isn't doing much, just using std::jthread directly might be simpler:
std::jthread thread([](std::stop_token stoken){
size_t i = 0;
while (!stoken.stop_requested()) {
std::cout << "i: " << i << "\n";
i++;
}
});
|