question_id | answer_id | title | question | answer |
|---|---|---|---|---|
70,566,857 | 70,567,492 | How to center align a pattern? | How can I align this arrow tail centered under the arrow head?
**
****
******
********
**********
****
****
****
****
Here is my code:
#include<iostream>
using namespace std;
int main() {
int n;
cout<<"enter size";
cin>>n;
int rows,columns;
cout<<"enter numbers";
cin>>rows>>columns;
for(int k=1;k<=n;k++){
for(int j=1;j<=n-k;j++){
cout<<" ";
}
for(int j=1;j<=k;j++){
cout<<"*";
}
for(int i=1;i<=k;i++){
cout<<"*";
}
cout<<endl;
}
for(int i=1;i<=rows;i++){
for(int j=1;j<=columns;j++){
cout<<"*";
}
cout<<endl;
}
return 0;
}
I try to run it with the following input:
enter size 5
enter numbers 4 4
The arrow looks fine, but the tail is left aligned. How can I get it into the center?
| Assuming you display fixed spaced characters on a terminal, the general algorithm is:
Take the width we of the element you want to center.
Take the width WA of the area it needs to be centered in.
Print (WA-we)/2 whitespaces before the element.
Considering that in your case WA would be 2*n and we would be columns, you could write somewhere:
int s=(2*n-columns)/2;
and print in the second part of your algorithm s spaces before starting printing the * of your Christmas tree trunk.
By the way, I don't know if you're allowed to use string for your homework, but if yes, you could replace the loops for printing w characters c one after another with just printing string(w, c), (for example cout<<string(s, ' ');), which will make your programme more compact and readable.
|
70,567,396 | 70,568,568 | Retrieve a type from a string known at compile time | Is it possible to get a type according to a string known at compile time?
Mainly with constexpr std::string_view.
#include <bits/stdc++.h>
template <std::string_view>
struct MakeType {};
template <>
struct MakeType<"int"> {
using type = int;
};
template <>
struct MakeType<"float"> {
using type = float;
};
int main() {
constexpr std::string_view my_int = "int";
MakeType<my_int>::type i = 5;
return 0;
}
| Yes, you can do such things, even if I currently don't see why you need it. But it doesn't work on the basis of std::string_view, as we need a data type which contains the data in the object itself. As C++20 offers a simple way to define a constexpr string type via template parameters, we have all we need!
template<size_t N>
struct mystring
{
std::array<char, N> arr_;
constexpr mystring(const char(&in)[N]) : arr_{}
{
std::copy(in, in + N, arr_.begin());
}
};
template < mystring s > struct MakeType { using type=void;};
template <> struct MakeType<"int"> {using type=int;};
template <> struct MakeType<"double"> {using type=double;};
template < mystring T>
using MakeType_t = MakeType<T>::type;
int main()
{
MakeType_t<"int"> xi=9;
MakeType_t<"double"> xd=10.234;
std::cout << xi << std::endl;
std::cout << xd << std::endl;
static_assert( std::is_same_v< double, MakeType_t<"double">>);
static_assert( std::is_same_v< int, MakeType_t<"int">>);
}
See it working here on gcc ...
also for clang
Remark: clang requires an additional typename. I believe clang is wrong here, as in C++20 the need of additional typename was relaxed a lot, but I am not a language-lawyer.
|
70,567,569 | 70,567,607 | C++11 Template Class with Multiple Definitions | In summary, I would like to have a templated class that can either have a class member that is a std::tuple or an integral type.
The essence of what I want to do is pasted below.
#include <tuple>
#include <vector>
#include <string>
template<typename T>
class DATA
{
public:
T value;
};
template<typename... T>
class DATA
{
public:
std::tuple<T...> value;
};
int main(int argc, char *argv[])
{
DATA<int> d1;
d1.value = 10;
DATA<int, std::string, std::vector<int>> d2;
std::get<0>( d2.value ) = 100;
std::get<1>( d2.value ) = "Hello World";
std::get<2>( d2.value ).push_back(1);
std::get<2>( d2.value ).push_back(2);
std::get<2>( d2.value ).push_back(3);
return 0;
}
| C++14 and newer:
template <typename T, typename ...P>
struct A
{
std::conditional_t<sizeof...(P) == 0, T, std::tuple<T, P...>> value;
};
C++11:
template <typename T, typename ...P>
struct A
{
typename std::conditional<sizeof...(P) == 0, T, std::tuple<T, P...>>::type value;
};
|
70,567,800 | 70,573,278 | How to launch UWP app with app's main window in background using url | I want to launch UWP app from another app. For example I want to launch apps with launch protocol (ms-people:, msnweather:, etc.). I am using API LaunchUriAsync. It is launching the app. But the new app that is launched gets the focus and its main window comes in the foreground on top of the app that I am interacting with (from which I launched this new app).
However I want to keep this new app window in the background and let user interact with the original app. How do I do that?
Thanks
|
How to launch UWP app with app's main window in background using url
I'm afraid the LaunchUriAsync API does not contain an option to launch the app's main window in the background using a URL. But there is a workaround: push the launched app into the background manually if it was launched with a URI.
The OnActivated event handler receives all activation events. The Kind property indicates the type of activation event. So you could minimize the target app in the following way:
protected override void OnActivated(IActivatedEventArgs args)
{
if (args.Kind == ActivationKind.Protocol)
{
ProtocolActivatedEventArgs eventArgs = args as ProtocolActivatedEventArgs;
// TODO: Handle URI activation
// The received URI is eventArgs.Uri.AbsoluteUri
}
}
To minimize the app, please refer to this case reply.
|
70,568,026 | 70,568,299 | How do I use system("chcp 936") in my dialog based project? | The code below is supposed to convert a wstring "!" to a string and output it,
setlocale(LC_ALL, "Chinese_China.936");
//system("chcp 936");
std::wstring ws = L"!";
string as((ws.length()) * sizeof(wchar_t), '-');
auto rs = wcstombs((char*)as.c_str(), ws.c_str(), as.length());
as.resize(rs);
cout << rs << ":" << as << endl;
If you run it without system("chcp 936");, the converted string is "£¡" rather than "!". With system("chcp 936");, the result is correct in a console project.
But in my dialog-based project, system("chcp 936") is useless; and even if it worked, I couldn't use it, because it would pop up a console.
PS: the IDE is Visual Studio 2019, and my source code is stored as in UTF-8 with signature.
My operation system language is English and language for non-unicode programs is English (United States).
Edit: interestingly, even with the "en-US" locale, "!" can be converted to an ASCII "!".
But I don't get where the "£¡" I got in the dialog-based project comes from.
| There are two distinct points to consider with locales:
you must tell the program what charset should be used when converting Unicode characters to plain bytes (this is the role of setlocale)
you must tell the terminal what charset it should render (this is the role of chcp in the Windows console)
The first point depends on the language and optionally the libraries that you use in your program (here the C++ language and Standard Library).
The second point depends on the console application and underlying system. The Windows console uses chcp, and you will find in that other post how to configure xterm on a Unix-like system.
|
70,568,167 | 70,578,766 | Libcurl - curl_multi_poll + curl_multi_add_handle - in one thread - never waits | The curl_multi_poll function in conjunction with curl_multi_add_handle - for some reason it never waits for an event and immediately returns:
Simple example:
#include <iostream>
#include <curl.h>
int main()
{
curl_global_init(CURL_GLOBAL_ALL);
CURLM* CURLM_ = curl_multi_init();
CURL* CURL_ = curl_easy_init();
curl_easy_setopt(CURL_, CURLOPT_URL, "https://stackoverflow.com");
int num_desc_events;
CURLMcode CURLMcode_ = curl_multi_add_handle(CURLM_, CURL_); //If this line is deleted, then curl_multi_poll enters wait mode.
if (CURLMcode_ != CURLM_OK)
{
std::cout << "curl_multi_add_handle_status:" << CURLMcode_ << std::endl;
}
while (1)
{
std::cout << "curl_multi_poll_start" << std::endl;
CURLMcode_ = curl_multi_poll(CURLM_, NULL, 0, 100000, &num_desc_events);
if (CURLMcode_ != CURLM_OK)
{
std::cout << "curl_multi_poll_status:" << CURLMcode_ << std::endl;
}
std::cout << "curl_multi_poll_awakened" << std::endl;
std::cout << "num_desc_events:" << num_desc_events << std::endl;
}
curl_multi_cleanup(CURLM_);
curl_global_cleanup();
}
As you can see, this is very simple code, but it works very strangely, or I would even say it does not work.
From the description of the curl_multi_poll function, it follows that it waits FOREVER until an event occurs on the multi handle or the set timeout expires.
That is, when the line with the curl_multi_add_handle function is present in the code, the curl_multi_poll function does not enter standby mode.
And if the line with the curl_multi_add_handle function is removed, then curl_multi_poll works correctly and enters standby mode until the first event, in this case indefinitely or until the timeout.
| The code doesn't call curl_multi_perform() so it doesn't actually do anything and whatever libcurl wants to do, it still wants to do...
|
70,568,513 | 70,568,925 | Access bits in memory | I want to assemble a message bit by bit, then handle the message as a vector of unsigned characters ( e.g. to calculate the CRC )
I can assemble the message OK, using either a std::vector<bool> or a std::bitset
I can copy the assembled message to a std::vector doing it bit by bit. ( Note: the message is padded so that its length is an integer number of bytes )
// assemble message
std::vector<bool> bitMessage;
...
// copy the bits one by one into bytes and add them to the message
std::vector<unsigned char> myMessage;
// loop over bytes
for (int kbyte = 0;
kbyte < bitMessage.size() / 8;
kbyte++)
{
unsigned char byte = 0;
// loop over bits
for (int kbit = 0;
kbit < 8;
kbit++)
{
// add bit to byte
byte += bitMessage[8 * kbyte + kbit] << kbit;
}
// add byte to message
myMessage.push_back(byte);
}
This works.
But it seems awfully slow! I would like to use std::memcpy.
For a 'normal' vector I would do
memcpy(
myMessage.data(),
bitMessage.data(),
bitMessage.size() / 8 );
or
memcpy(
&myMessage[0],
&bitMessage[0],
bitMessage.size() / 8 );
but neither of these methods is possible with either a vector<bool> or bitset
Question: Is there a way to get a pointer to the memory where the bits are stored?
The answer is: not with std::vector<bool> or std::bitset
However, with some hints, especially from @Ayxan Haqverdili, it is possible to write a small class that will accept single bits and construct a well-mannered std::vector<unsigned char> as we go along.
/** Build a message bit by bit, creating an unsigned character vector of integer length
*
* Hides the messy bit twiddling required,
* allowing bits to be added to the end of the message
*
* The message is automatically padded at the end with zeroes
*/
class cTwiddle
{
public:
std::vector<unsigned char> myMessage;
cTwiddle() : myBitLength(0) {}
/** add a bit to end of message
* @param[in] bit
*/
void add(bool bit)
{
// check if message vector is full
if (!(myBitLength % 8))
{
// add byte to end of message
myMessage.push_back(0);
}
// control order bits are added to a byte
int shift = 7 - (myBitLength % 8); // add bits from left to right ( MSB first )
// int shift = (myBitLength % 8); // add bits from right to left ( LSB first )
myMessage.back() += (1 & bit) << shift;
myBitLength++;
}
private:
int myBitLength;
};
| Apparently neither of those classes defines the layout. Just write your own class and define the layout you want:
template <int size>
class BitSet final {
private:
unsigned char buffer[size / 8 + (size % 8 != 0)] = {};
public:
constexpr bool get(size_t index) const noexcept {
return (buffer[index / 8] >> (index % 8)) & 1U;
}
constexpr void set(size_t index) noexcept {
buffer[index / 8] |= (1U << (index % 8));
}
constexpr void clear(size_t index) noexcept {
buffer[index / 8] &= ~(1U << (index % 8));
}
};
Memcpy-ing this class is perfectly fine. Otherwise, you might also provide direct access to the byte array.
Alternatively, you can dynamically allocate the buffer:
#include <memory>
class DynBitSet final {
private:
size_t size = 0;
std::unique_ptr<unsigned char[]> buffer;
public:
explicit DynBitSet(size_t bitsize)
: size(bitsize / 8 + (bitsize % 8 != 0)),
buffer(new unsigned char[size]{}) {}
bool get(size_t index) const noexcept {
return (buffer[index / 8] >> (index % 8)) & 1U;
}
void set(size_t index) noexcept { buffer[index / 8] |= (1U << (index % 8)); }
void clear(size_t index) noexcept {
buffer[index / 8] &= ~(1U << (index % 8));
}
auto bitSize() const noexcept { return size * 8; }
auto byteSize() const noexcept { return size; }
auto const* byteBuffer() const noexcept { return buffer.get(); }
};
|
70,568,575 | 70,577,614 | How to get the edges of a 3D Delaunay tessellation with CGAL? | The question is clear from the title. I tried a ton of variants of
const DT3::Finite_edges itedges = mesh.finite_edges();
for(DT3::Finite_edges_iterator eit = itedges.begin(); eit != itedges.end(); eit++) {
const CGAL::Triple<DT3::Cell_handle, int, int> edge = *eit;
edge.first->vertex((edge.second+1) % 3)->info();
edge.first->vertex((edge.third+1) % 3)->info();
}
but none has worked (I tried % 2, % 4, +2, etc).
I'm able to get the tetrahedra and the triangles. Of course I could extract the edges from them but that would require to remove some duplicates.
| No need to use addition or modular arithmetic. The solution is simpler:
const DT3::Finite_edges itedges = mesh.finite_edges();
for(DT3::Finite_edges_iterator eit = itedges.begin(); eit != itedges.end(); eit++) {
const DT3::Edge edge = *eit;
const DT3::Vertex::Info v1_info = edge.first->vertex(edge.second)->info();
const DT3::Vertex::Info v2_info = edge.first->vertex(edge.third)->info();
}
The documentation of the nested type Edge is here as well as in the section "Representation" of the user manual of 3D Triangulation Data Structure.
Edit: note that I have chosen to respect your coding style, ignoring modern C++ features. Using C++11 features, I prefer to write it this way, using auto and range-based for loop:
for(const auto edge: mesh.finite_edges()) {
const auto v1_info = edge.first->vertex(edge.second)->info();
const auto v2_info = edge.first->vertex(edge.third)->info();
}
|
70,568,700 | 70,568,758 | Thread of a member function | Mainly for test purposes,
I want to run a member function on a thread.
Endless tries, and still only error messages.
Please, can anyone explain the cause of the error and some best practices for doing this?
Thanks
#include <thread>
#include <iostream>
using namespace std;
class Test {
public:
int x = 1;
int y = 7;
int a() {
for (int i = 1; i < 1000; i++) {
x += y;
cout << "x:" << x << endl;
}
return x;
}
int b() {
for (int i = 1; i < 1000; i++) {
y -= 0.5 * x;
cout << "y:" << y << endl;
}
return y;
}
Test() {
}
Test* run() {
thread(&Test::a, this);
//thread(&Test::b, this);
return this;
}
};
int main()
{
Test* obj = new Test();
obj->run();
return 0;
}
| Your std::thread object lives only until the end of the expression in which it is created as a temporary. When it is destroyed and the thread is still in a joinable state, std::terminate is called, which aborts your program.
You should store the std::thread object somewhere (e.g. in the Test object or in main or locally in run) and call .join() on it at the correct time where you expect the thread to finish (e.g. destructor or before the end of main or at the end of run).
Aside from that, if you run the out-commented thread as well, you have data races causing undefined behavior. You may not access non-atomic objects (such as x and y) in multiple threads without synchronization if at least one of these accesses is a write.
|
70,568,786 | 70,569,307 | C++ Not able to instantiate a Template Class | Let me put it step by step. The problem is I am not able to instantiate a Template class. Pls help where am i doing wrong.
I have a template class as below :
template <typename K, typename V>
class HashNodeDefaultPrint {
public:
void operator()(K key, V value) {}
};
Then I have a specific class that belongs to the above template for K = int, V = int
class HashNodePrintIntInt {
public:
void operator()(int key, int value) {
printf ("k = %d, v = %d", key, value);
}
};
Similarly, I have another template class as below :
template <typename K>
class defaultHashFunction {
public:
int operator()(K key) {
return 0;
}
};
Again, I have a specific class which matches the above template for K = int
class HashFunctionInteger {
public:
int operator()(int key) {
return key % TABLE_SIZE;
}
};
Now I create a HashNode templated Class as below :
template <typename K, typename V, typename F = HashNodeDefaultPrint<K, V>>
class HashNode {
public:
K key;
V value;
F printFunc;
HashNode *next;
HashNode(K key, V value, F func) {
this->key = key;
this->value = value;
next = NULL;
}
};
And finally a HashMap Templates class as Below :
template <typename K, typename V, typename F = defaultHashFunction<K>, typename F2 = HashNodeDefaultPrint<K,V>>
class HashMap {
private:
HashNode<K, V, F2> *table_ptr;
F hashfunc;
public:
HashMap() {
table_ptr = new HashNode<K,V, F2> [TABLE_SIZE]();
}
};
Now in main(), I am instantiating the HashMap class as below :
int
main(int argc, char **argv) {
HashMap<int, int, HashFunctionInteger, HashNodePrintIntInt> hmap;
return 0;
}
but I am seeing a compilation error :
vm@ubuntu:~/src/Cplusplus/HashMap$ g++ -g -c hashmap.cpp -o hashmap.o
hashmap.cpp: In instantiation of ‘HashMap<K, V, F, F2>::HashMap() [with K = int; V = int; F = HashFunctionInteger; F2 = HashNodePrintIntInt]’:
hashmap.cpp:157:65: required from here
hashmap.cpp:107:21: error: no matching function for call to ‘HashNode<int, int, HashNodePrintIntInt>::HashNode()’
107 | table_ptr = new HashNode<K,V, F2> [TABLE_SIZE]();
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
hashmap.cpp:30:9: note: candidate: ‘HashNode<K, V, F>::HashNode(K, V, F) [with K = int; V = int; F = HashNodePrintIntInt]’
30 | HashNode(K key, V value, F func) {
| ^~~~~~~~
hashmap.cpp:30:9: note: candidate expects 3 arguments, 0 provided
hashmap.cpp:27:7: note: candidate: ‘constexpr HashNode<int, int, HashNodePrintIntInt>::HashNode(const HashNode<int, int, HashNodePrintIntInt>&)’
27 | class HashNode {
| ^~~~~~~~
hashmap.cpp:27:7: note: candidate expects 1 argument, 0 provided
hashmap.cpp:27:7: note: candidate: ‘constexpr HashNode<int, int, HashNodePrintIntInt>::HashNode(HashNode<int, int, HashNodePrintIntInt>&&)’
hashmap.cpp:27:7: note: candidate expects 1 argument, 0 provided
vm@ubuntu:~/src/Cplusplus/HashMap$
What is wrong with this instantiation when I am correctly specifying the specific class names for the template parameters?
| You think much too complicated :-)
If you already use templates, you can specialize them if needed. That does NOT require new class names!
Starting with:
template <typename K, typename V>
struct HashNodePrint {
void operator()(K key, V value) {}
};
// and specialize for int,int:
template<>
struct HashNodePrint<int,int> {
void operator()(int key, int value) {
printf ("k = %d, v = %d", key, value);
}
};
The same for
template <typename K>
struct HashFunction {
int operator()(K key) {
return 0;
}
};
template <>
struct HashFunction<int> {
int operator()(int ) {
return 0;
}
};
And I expect you no longer need the template default functors, as you no longer need to manually specify the specialized functions. They are now automatically mapped to the specialized versions.
This will end up here:
template <typename K, typename V>
class HashNode {
public:
K key;
V value;
HashNodePrint<K, V> printFunc;
HashNode *next;
HashNode(K key, V value) {
this->key = key;
this->value = value;
next = NULL;
}
HashNode(){}
};
Note that your original HashNode had no default constructor; here I added an empty one. Alternatively, put some default values in the call, or pass the values down from construction. I have no idea what you want to achieve by constructing the nodes in a defaulted way.
template <typename K, typename V >
class HashMap {
private:
HashNode<K, V> *table_ptr;
HashFunction<K> hashfunc;
public:
HashMap(/* put maybe a container type here */) {
// no idea how big your TABLE_SIZE will be.
//
table_ptr = new HashNode<K,V> [TABLE_SIZE]{ /* and forward as needed */};
}
};
int main() {
HashMap<int, int> hmap{ /* pass args maybe in container or std::initializer_list */ };
return 0;
}
|
70,569,170 | 70,569,576 | Does going through uintptr_t bring any safety when casting a pointer type to uint64_t? | Note that this is purely an academic question, from a language lawyer perspective. It's about the theoretically safest way to accomplish the conversion.
Suppose I have a void* and I need to convert it to a 64-bit integer. The reason is that this pointer holds the address of a faulting instruction; I wish to report this to my backend to be logged, and I use a fixed-size protocol - so I have precisely 64 bits to use for the address.
The cast will of course be implementation defined. I know my platform (64-bit Windows) allows this conversion, so in practice it's fine to just reinterpret_cast<uint64_t>(address).
But I'm wondering: from a theoretical standpoint, is it any safer to first convert to uintptr_t? That is: static_cast<uint64_t>(reinterpret_cast<uintptr_t>(address)). https://en.cppreference.com/w/cpp/language/reinterpret_cast says (emphasis mine):
Unlike static_cast, but like const_cast, the reinterpret_cast expression does not compile to any CPU instructions (except when converting between integers and pointers or on obscure architectures where pointer representation depends on its type).
So, in theory, pointer representation is not defined to be anything in particular; going from pointer to uintptr_t might theoretically perform a conversion of some kind to make the pointer representable as an integer. After that, I forcibly extract the lower 64 bits. Whereas just directly casting to uint64_t would not trigger the conversion mentioned above, and so I'd get a different result.
Is my interpretation correct, or is there no difference whatsoever between the two casts in theory as well?
FWIW, on a 32-bit system, apparently the widening conversion to unsigned 64-bit could sign-extend, as in this case. But on 64-bit I shouldn't have that issue.
| You’re parsing that (shockingly informal, for cppreference) paragraph too closely. The thing it’s trying to get at is simply that other casts potentially involve conversion operations (float/int stuff, sign extension, pointer adjustment), whereas reinterpret_cast has the flavor of direct reuse of the bits.
If you reinterpret a pointer as an integer and the integer type is not large enough, you get a compile-time error. If it is large enough, you’re fine. There’s nothing magical about uintptr_t other than the guarantee that (if it exists) it’s large enough, and if you then re-cast to a smaller type you lose that anyway. Either 64 bits is enough, in which case you get the same guarantees with either type, or it’s not, and you’re screwed no matter what you do. And if your implementation is willing to do something weird inside reinterpret_cast, which might give different results than (say) bit_cast, neither method will guarantee nor prevent that.
That’s not to say the two are guaranteed identical, of course. Consider a DS9k-ish architecture with 32-bit pointers, where reinterpret_cast of a pointer to a uint64_t resulted in the pointer bits being duplicated in the low and high words. There you’d get both copies if you went directly to a uint64_t, and zeros in the top half if you went through a 32-bit uintptr_t. In that case, which one was “right” would be a matter of personal opinion.
|
70,569,651 | 70,569,741 | polymorphism with vector and function? | I have basically the following code:
class A{/*something*/};
class B : public A{/*something else*/};
void foo(B* aux){/*something something*/}
int main()
{
vector<shared_ptr<A>>content;
content.emplace_back(new B());
foo(content[0].get());//error, invalid conversion from A to B
return 0;
}
Trying to compile this gives the "invalid conversion" error. Is there any way to use the foo function while keeping a vector of A?
Edit: sorry, I should have been more specific: yes, A is a virtual class. What I'm trying to do is: there are 3 classes that inherit from A (B,C and D), and it would be extremely convenient if I could store them all in a single place (there may be a lot of them, and using one vector for each is a hassle) and as they are all "A", I figured a vector of A would be the best idea, however there are some functions that need specific classes of objects in this vector- like the foo in this case. I could just make multiple vectors but I decided to check for suggestions first, thus my question.
| B is always an A, but A is not necessarily always a B, so your foo function cannot accept an A as a B. You could use a dynamic_cast to check whether A is a B for a specific runtime instance, and then call foo after you know for sure that it is a B, but in most cases that is not the best design (see static vs dynamic polymorphism).
|
70,569,736 | 70,569,843 | How to Iterate over number of variadic template Types | I'am currently learning C++ and i am currently building a very simple Entity Component System. For that i have a Function getComponentType which maps each Component to a
uint8_t. A Signature is just a std::bitset
I would like a method like this.
Signature signature = createSignature<TransformComponent, GraphicsComp>();
Let's say TransformComponent gets mapped to 0 and GraphicsComp gets mapped to 1.
The Signature should now be a std::bitset {1100000...}.
I know how to do that with non-variadic template methods; the question is how I would achieve the same with variadic template types, or whether there is a better solution.
template <typename T> Signature createSignature(){
return Signature(((unsigned long long int)1)<<getComponentType<T>());
}
template <typename T, typename R> Signature createSignature(){
return Signature(
((unsigned long long int)1)<<getComponentType<T>() |
((unsigned long long int)1)<<getComponentType<R>()
);
}
template <typename T, typename R, typename S> Signature createSignature(){
return Signature(
((unsigned long long int)1)<<getComponentType<T>() |
((unsigned long long int)1)<<getComponentType<R>() |
((unsigned long long int)1)<<getComponentType<S>()
);
}
template <typename T, typename R, typename S, typename U> Signature createSignature(){
return Signature(
((unsigned long long int)1)<<getComponentType<T>() |
((unsigned long long int)1)<<getComponentType<R>() |
((unsigned long long int)1)<<getComponentType<S>() |
((unsigned long long int)1)<<getComponentType<U>()
);
}
| From C++17 onward you can use a fold expression:
template<typename... T>
Signature createSignature()
{
return Signature((((unsigned long long int)1) << getComponentType<T>() | ...));
}
The unsigned long long int cast seems a bit weird, but I left it the same as in the question to clarify the use of the fold expression:
(statement | ...)
The minimal version would look something like this:
template<typename T>
unsigned long long int stuffFor();
template<typename... T>
unsigned long long int variadicFoldedStuff()
{
return (stuffFor<T>() | ...);
}
|
70,570,822 | 70,573,713 | Print all visible rows in QTableView in c++ | I have QTableView with 100+ rows in it. But at a time only 6 rows are visible. To see next set of rows, I have to use scrool bar.
I want to print visible rows in QTableView. But could not do that. I could just able to print single selected row.
QItemSelectionModel *select = _table->selectionModel();
QModelIndexList selectedRow = select->selectedRows();
QModelIndex index = selectedRow.at(0);
int columnCount = 2; // there are 2 columns in a row
QString copySelectedRowText_;
for(int i = 0 ; i < columnCount; i++)
{
copySelectedRowText_ += index.sibling(index.row(), i).data().toString()+ " ";
}
qDebug() << copySelectedRowText_;
How to print visible rows in QTableView?
| You can get the index of the first visible row through the value() of the verticalScrollBar(), and the number of rows that fit on one page through pageStep().
Here is my code; you can try it:
void TesWidget::onbtnClicked()
{
int start_index = ui.tableView->verticalScrollBar()->value();
int page_cnt = ui.tableView->verticalScrollBar()->pageStep();
int end_index = start_index + page_cnt;
int row_cnt = model_->rowCount();
int col_cnt = model_->columnCount();
QString text;
for (int i = start_index; i < row_cnt && i <= end_index; i++)
{
for (int j = 0; j < col_cnt; j++)
{
text.append(QStringLiteral("%1 ").arg(model_->item(i,j)->text()));
}
text.append("\n");
}
qDebug() << text;
}
|
70,570,860 | 70,571,138 | How to cast nonconst variable to constant static integral class member variable via reinterpret_cast in C++? | I am reading a book on writing modern C++ code for microcontrollers which is named "Real time C++". I am trying to write the codes in the book myself. However, while copying the code from the book and trying to build it, I got a compilation error of:
error C2131: expression did not evaluate to a constant.
message : a non-constant (sub-) expression was encountered
I inserted the relevant part of the code below:
#include <cstdint>
#include <iomanip>
#include <iostream>
namespace mcal
{
namespace reg
{
// Simulate the transmit and receive hardware buffers on the PC.
std::uint8_t dummy_register_tbuf;
std::uint8_t dummy_register_rbuf;
}
}
class communication
{
private:
static constexpr std::uint8_t* tbuf = reinterpret_cast<std::uint8_t*>(&mcal::reg::dummy_register_tbuf);
static constexpr std::uint8_t* rbuf = reinterpret_cast<std::uint8_t*>(&mcal::reg::dummy_register_rbuf);
};
/* rest of the nonrelated code */
The error indicates those two lines where the casting happens. I know that we try to use static constexpr integral class member variables, because this ensures optimization (constant folding) on them. I think the error happens because we try to assign a non-constant expression to a constexpr variable, but I may be wrong. So I would kindly ask you to explain what the real problem is here and why the author made such a mistake (if it is a mistake). Also, if you could point out the correct way of casting, I would highly appreciate it. Thank you very much.
| It is unclear what the intention behind the reinterpret_cast is, but the program is ill-formed.
constexpr on a variable requires that the initializer is a constant expression. But an expression is disqualified from being a constant expression if it would evaluate a reinterpret_cast. Therefore the initialization is ill-formed.
However, nothing else in the initialization stops it from being a constant expression and so
static constexpr std::uint8_t* tbuf = &mcal::reg::dummy_register_tbuf;
will just work, and the reinterpret_cast would be redundant anyway since it would cast between identical pointer types, which is specified to result in the same value.
GCC, ICC and MSVC up to v19.16 do seem to erroneously accept the code (https://godbolt.org/z/YKjhxqo3v). Maybe the author tested the code only on one of these compilers.
For GCC there is a bug report here.
|
70,571,273 | 70,571,518 | Template based Linked List - What should be Returned in search operation? | I am working with Templates and defined the below templated ListNode.
template <typename T>
class ListNode{
private :
public:
ListNode *left;
ListNode *right;
T data;
ListNode(T data){ this->data = data; }
};
If I implement the search operation on this linked list with the below prototype, what should be the return value of the function? If T is a pointer, I can return NULL, but if T is a complete object, what should be the return type, and how can I differentiate whether T is a pointer or a complete object? How can this search function work for T where T can be a pointer or a complete object?
T List_search(T srch_data);
T List_search(T srch_data) {
ListNode<T> *curr = head;
while (curr) {
if (comp_fn.compare_data (curr->data, srch_data) == 0)
return curr->data;
curr = curr->right;
}
return NULL; <<< Compilation error if T is complete object
}
| One option is to keep things simple: return the address of the found item, and nullptr if the item cannot be found:
template <typename T>
T* List_search(T srch_data)
{
ListNode<T> *curr = head;
while (curr)
{
if (comp_fn.compare_data (curr->data, srch_data) == 0)
return &curr->data;
curr = curr->right;
}
return nullptr;
}
The client would then check for nullptr and, if the result isn't nullptr, dereference the pointer.
For example, if the linked list node holds an int type:
int* foundData = List_search(10);
if ( foundData )
{
std::cout << *foundData;
}
|
70,571,380 | 70,572,145 | Can a type be defined inside a template parameter list in C++? | In the following definition of template struct B, a lambda is used as a default value of a non-type template argument, and in the body of the lambda some type A is defined:
template <auto = []{ struct A{}; }>
struct B {};
Clang and MSVC are fine with this definition, but GCC complains:
error: definition of 'struct<lambda()>::A' inside template parameter list
Demo: https://gcc.godbolt.org/z/f1dxGbPvs
Which compiler is right here?
| [temp.param]/2 says:
Types shall not be defined in a template-parameter declaration.
Taking this as written, GCC is correct to reject this code: this prohibition is not constrained to type-id of a type parameter, but applies to anywhere within template parameter declaration. Including nested within a lambda.
This sentence was added as a result of DR 1380 (N3481), which reveals it was considered already implied by what now I am guessing to be [dcl.fct]/17:
Types shall not be defined in return or parameter types.
This, however, only seems to apply to the type of the parameter declared and not to the initializer-clause.
On the other hand, one might also read it as prohibiting lambdas themselves in template parameters. After all, a lambda expression implicitly defines a class type ([expr.prim.lambda.closure]/1).
On the third hand, we also have [expr.prim.lambda.closure]/2, which states:
The closure type is declared in the smallest block scope, class scope, or namespace scope that contains the corresponding lambda-expression.
The relevant scope here seems to be the namespace scope. This would imply the lambda should be treated as if its type were declared outside the template parameter list. But then, so should be declarations inside the body of the lambda, and the definition in the question should be allowed.
Personally, I consider it a defect in the standard that the scope of this prohibition seems so ill-defined.
|
70,571,655 | 70,571,714 | constexpr std::string in C++20, how does it work? | Apparently, the constexpr std::string has not been added to libstdc++ of GCC yet (as of GCC v11.2).
This code:
#include <iostream>
#include <string>
int main( )
{
constexpr std::string str { "Where is the constexpr std::string support?"};
std::cout << str << '\n';
}
does not compile:
time_measure.cpp:37:31: error: the type 'const string' {aka 'const std::__cxx11::basic_string<char>'} of 'constexpr' variable 'str' is not literal
37 | constexpr std::string str { "Where is the constexpr std::string support?"};
| ^~~
In file included from c:\mingw64\include\c++\11.2.0\string:55,
from c:\mingw64\include\c++\11.2.0\bits\locale_classes.h:40,
from c:\mingw64\include\c++\11.2.0\bits\ios_base.h:41,
from c:\mingw64\include\c++\11.2.0\ios:42,
from c:\mingw64\include\c++\11.2.0\ostream:38,
from c:\mingw64\include\c++\11.2.0\iostream:39,
from time_measure.cpp:2:
c:\mingw64\include\c++\11.2.0\bits\basic_string.h:85:11: note: 'std::__cxx11::basic_string<char>' is not literal because:
85 | class basic_string
| ^~~~~~~~~~~~
c:\mingw64\include\c++\11.2.0\bits\basic_string.h:85:11: note: 'std::__cxx11::basic_string<char>' does not have 'constexpr' destructor
My question is: how will such strings work under the hood when a string contains more than 16 chars (because GCC's SSO buffer size is 16)? Can someone give me a brief explanation? Will a trivial constructor create the string object on the stack and never use dynamic allocations?
This code:
std::cout << "is_trivially_constructible: "
<< std::boolalpha << std::is_trivially_constructible<const std::string>::value << '\n';
prints this:
is_trivially_constructible: false
Now by using constexpr here (obviously does not compile with GCC v11.2):
std::cout << "is_trivially_constructible: "
<< std::boolalpha << std::is_trivially_constructible<constexpr std::string>::value << '\n';
will the result be true like below?
is_trivially_constructible: true
My goal
My goal was to do something like:
constexpr std::size_t a { 4 };
constexpr std::size_t b { 5 };
constexpr std::string msg { std::format( "{0} + {1} == {2}", a, b, a + b ) };
std::cout << msg << '\n';
Neither std::format nor constexpr std::string compile on GCC v11.2.
| C++20 supports allocation during constexpr time, as long as the allocation is completely deallocated by the time constant evaluation ends. So, for instance, this very silly example is valid in C++20:
constexpr int f() {
int* p = new int(42);
int v = *p;
delete p;
return v;
}
static_assert(f() == 42);
However, if you forget to delete p; there, then f() is no longer a constant expression. Can't leak memory. gcc, for instance, rejects with:
<source>:2:24: error: '(f() == 42)' is not a constant expression because allocated storage has not been deallocated
2 | int* p = new int(42);
| ^
Getting back to your question, std::string will work in constexpr for long strings just fine -- by allocating memory for it as you might expect. However, the C++20 constexpr rules are still limited by this rule that all allocations must be cleaned up by the end of evaluation. Alternatively put, all allocations must be transient - C++ does not yet support non-transient constexpr allocation.
As a result, your original program
int main( )
{
constexpr std::string str { "Where is the constexpr std::string support?"};
}
is invalid, even once gcc supports constexpr string (as it does on trunk right now), because str needs to be destroyed. But this would be fine:
constexpr int f() {
std::string s = "Where is the constexpr std::string support?";
return s.size();
}
static_assert(f() > 16);
whereas it would not have compiled in C++17.
There still won't be support for non-transient constexpr allocation in C++23. It's a surprisingly tricky problem. But, hopefully soon.
|
70,572,014 | 70,572,079 | template class which accepts either a typename or int without auto | Is it possible to have a class template accept either of one (unsigned int, typename) parameter, based upon what was given?
Example of what I mean:
template<??>
class Bytes
{
// ....
};
Bytes<4> FourBytes;
Bytes<int> FourBytes;
Bytes<DWORD64> EightBytes;
Iam aware of the template<auto T>, though was thinking if there was a different solution?
| Template parameters need to be either a type or a value; there isn't a placeholder for something that can be either. That said, you can make a couple of factory functions to help you. That could look like
template<std::size_t N>
class Bytes
{
// ....
};
template <typename T>
auto make_bytes() { return Bytes<sizeof(T)>{}; }
template <std::size_t N>
auto make_bytes() { return Bytes<N>{}; }
and you would declare objects like
auto FourBytesValue = make_bytes<4>();
auto FourBytesType = make_bytes<int>();
auto EightBytes = make_bytes<DWORD64>();
|
70,572,683 | 70,572,702 | C++ error: too many initializers for 'int [2]' | I keep getting this error when declaring an int[2] array, but it looks fine to me.
error: too many initializers for 'int [2]'
int <array_name>[2] = { 0, 255, 255 };
^
am I doing someting wrong?
| You declared the array's size as 2 but gave it 3 elements. Changing the declaration to int <array_name>[3] will fix the problem.
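For instance (with a concrete name in place of the placeholder):

```cpp
int values[3] = { 0, 255, 255 };  // the size now matches the three initializers

int values2[] = { 0, 255, 255 };  // or omit the size and let the compiler deduce int[3]
```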
|
70,573,038 | 70,573,092 | If I've separated a template into a header and source, is there any way to compile it to its own object file? | I like header files to exist as self-documenting references. I try to keep them to declarations with documentation comments, and then program all the implementation in my source files. Essentially a documented interface.
I'm working on a project making heavy use of templates and instead of filling up the header with implementation details (I know I could still declare the templates and define them later in the header but this doesn't solve this question) I've elected to do something like.
// foo.h
template<typename T>
class Foo {
public:
Foo(T f);
T get_foo();
private:
T foo;
};
#include "foo.tpp"
// foo.tpp
template<typename T>
Foo<T>::Foo(T f) : foo(f) {}
template<typename T>
T Foo<T>::get_foo() { return foo; }
This has a moderate downside. Any source using #include "foo.h" must be fully recompiled any time foo changes. Essentially this has led to me needing to recompile most/all of the project every time I make a change in what is (conceptually but not practically) independent code.
I understand that due to either the way C++ compilers handle templates, or the language standard, or some combination thereof that when the compiler encounters a situation where substitution is needed like auto f = Foo<int>();, it needs the full definition of Foo<int> available to properly perform the substitution.
Is there no way at all around this? In this situation at least it seems to me that given foo.tpp as a source file the compiler could easily expand the template arguments for both and compile them as if they were a normal header/source file as a standalone TU. Then the linker just needs to find them later to resolve the definition of Foo<int> in the final linking stage.
Ostensibly this example is oversimplified and there are more complicated cases and caveats I've missed. I'm wondering though if there is a practical way around this. I'm considering writing a script to perform the substitution for me and then compiling those files, but was hoping there was a known and more generalized method.
|
Is there no way at all around this?
Yes, there is. If you instantiate a template explicitly in the translation unit where the functions are defined, then you can use those instances in other translation units.
But that of course limits what template arguments can be used to those that you've chosen for explicit instantiation. For unconstrained template arguments, there's no way around defining the functions in all translation units (where they are ODR-used).
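Using the Foo template from the question as an example, a sketch could look like this (the class definition is repeated here so the snippet is self-contained; in practice it stays in foo.h):

```cpp
// foo.cpp -- the only translation unit that sees the definitions
template<typename T>
class Foo {           // normally pulled in via #include "foo.h"
public:
    Foo(T f);
    T get_foo();
private:
    T foo;
};

template<typename T>
Foo<T>::Foo(T f) : foo(f) {}

template<typename T>
T Foo<T>::get_foo() { return foo; }

// Explicit instantiation definitions: the members of Foo<int> and
// Foo<double> are emitted into this object file, so other TUs that
// only include foo.h can still link against them.
template class Foo<int>;
template class Foo<double>;
```

Other translation units can then use Foo<int> and Foo<double> with nothing but the declarations from foo.h; any other argument, say Foo<char>, would fail at link time.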
|
70,573,188 | 70,573,204 | Why different behaviour of synthesized default constructor for static and local variable of user defined class type? | In the sample program below,
Why is the output different for static & automatic variable of user defined class type ?
/* test.cpp */
/* SalesData Class */
class SalesData {
public:
SalesData() = default;
// other member functions
private:
std::string bookNo;
unsigned int unitsSold;
double revenue;
};
/*
* Prints the SalesData object
*/
std::ostream& print(std::ostream &os, const SalesData &item) {
os << "ISBN :\'" << item.isbn() << "\', Units Sold :" << item.unitsSold
<< ", Revenue :" << item.revenue << ", Avg. Price :"
<< item.avgPrice() << std::endl;
return os;
}
int main(int argc, char *argv[]) {
SalesData s;
static SalesData s2;
print(cout, s);
print(cout, s2);
return 0;
}
Output of the program is this --
$ ./test
ISBN :'', Units Sold :3417894856, Revenue :4.66042e-310, Avg. Price :1.36352e-319
ISBN :'', Units Sold :0, Revenue :0, Avg. Price :0
How does static change the behavior of the synthesized default constructor?
| The default constructor has the same behavior on s and s2. The difference is, for static local variables,
Variables declared at block scope with the specifier static or thread_local (since C++11) have static or thread (since C++11) storage duration but are initialized the first time control passes through their declaration (unless their initialization is zero- or constant-initialization, which can be performed before the block is first entered).
and about zero-initialization:
For every named variable with static or thread-local (since C++11) storage duration that is not subject to constant initialization, before any other initialization.
That means s2 will be zero-initialized first, so its data members are zero-initialized too; then, on entering main(), it's default-initialized via the default constructor.
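The same effect is easier to see with a plain aggregate, where default-initialization leaves the members untouched:

```cpp
struct S { int x; double y; };

// default-initialization of S leaves the members as-is; zero-initialization
// (done for statics before anything else) sets them to 0 first
int firstMemberOfStatic() {
    S a;        // automatic: a.x and a.y hold indeterminate values
    static S b; // zero-initialized first, then default-initialized (a no-op here)
    (void)a;    // reading a.x before assigning it would be undefined behavior
    return b.x; // always 0
}
```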
|
70,573,267 | 70,573,633 | How do you define a "Hello World" function in a seperate file in c++ | and I apologize for asking a very basic question, but basically, I'm not able to wrap my head around include "fileImade.h"
I'm trying to write a main function, that's something like
int main()
{
int x = 5;
int y = 6;
std::cout << add(x, y) << std::endl;
}
where add() is defined in a separate .cpp file and #include-ed in this one (I'm doing this because I'm getting to the point where my code is getting impractically large to do in a single file), but my understanding is that you need a header file to... glue your other files together, or I guess mortar them if the files are the bricks, but I absolutely cannot figure out how to make this work for the life of me.
(while using g++), should I tag -I? According to me googling, yes, according to my compiler output, no.
Should I write a header file and a .cpp files for the add() function? Apparantly, yes, or no, if I choose to write both files in the command line before the -o.
Should I include a forward declaration of the function in my main.cpp file? Again, according to the internet, yes, though that's not working terribly well for me.
So to my question: Can someone show me a main() function that calls a hello world() function in a separate file?
I'm honestly at my wits' end with this because all the guides seem to be on defining classes in header files, which, while useful, is a bit beyond the scope of what I'm attempting right now.
Thank you in advance for reading, and any advice offered.
| The logic of the file separation may be imagined as:
(single file program)
#include <iostream>

/// DECLARATION of all functions needed in the main
int add(int x, int y); // declaration of add
///
int main()
{
std::cout << add(2, 3) << std::endl;
return 0;
}
/// IMPLEMENTATION of all functions needed in the main
int add(int x, int y)
{
return x + y;
}
The next you have to do is move all declarations to the headers and implementations to the cpps:
(separated files program)
/// add.h
#ifndef ADD_H /// https://en.wikipedia.org/wiki/Include_guard
#define ADD_H
int add(int x, int y);
#endif
/// main.cpp
#include <iostream>
#include "add.h"
int main()
{
std::cout << add(2, 3) << std::endl;
return 0;
}
/// add.cpp
#include "add.h"
int add(int x, int y)
{
return x + y;
}
|
70,573,583 | 70,573,617 | Assigning the reference of a stack allocated variable to a pointer | Given the following function:
void test(queue<string>* out) {
queue<string> abc = queue<string>();
abc.push("abc");
out = &abc;
}
Theoretically, the abc variable is allocated on the stack and at the end of the function it must automatically pop out of the stack. But I am assigning the reference of that variable to the out parameter which is a pointer. So when the context which calls this function pass a pointer to capture the output, what would happen? In other words, is this operation safe at all?
To make the question very clear, is the following function equivalent to the above:
void test(queue<string>* out) {
out->push("abc");
}
| The first function is safe, but not equivalent to the second.
You are just assigning a value to the pointer out which is a local variable. That local variable is not connected to the caller and the assignment is not observable by the caller. Your function has no side-effects. It is equivalent to
void test(queue<string>* out) {}
You have not stored a pointer of the abc object anywhere reachable from outside the function.
If you had done so, that pointer would be dangling after the function call returns, because the abc object will be destroyed at that point. Whether you hold a reference/pointer to it does not matter at all. Variables declared at block scope (without static or thread_local specifiers) have automatic storage duration, meaning that they will be destroyed at the end of the block in which they are declared, always.
Dereferencing the dangling pointer would then cause the program to have undefined behavior, meaning that you lose any guarantees on the program behavior.
For this reason it is a clear mistake to return a reference or pointer to a local variable, whether through the return value or out-parameters.
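If the goal was to hand a freshly built queue to the caller, the idiomatic alternative is simply to return it by value, with no pointers and no dangling:

```cpp
#include <queue>
#include <string>

// Returning by value: the queue is moved (or the move is elided),
// so the caller receives a valid object instead of a pointer into
// a destroyed stack frame.
std::queue<std::string> make_queue() {
    std::queue<std::string> q;
    q.push("abc");
    return q;
}
```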
|
70,573,940 | 70,574,195 | Strict aliasing accross DLL boundary | I've been reviewing C++'s strict aliasing rules, which got me thinking of some code at my previous job. I believe said code violated strict aliasing rules, but was curious why we didn't run into any issues or compiler warnings. We utilized a core .DLL to receive network messages that were handed off to a server application. A (very) simplified example of what was done:
#include <iostream>
#include <cstring>
using namespace std;
// These enums/structs lived in a shared .h file consumed by the DLL and server application
enum NetworkMessageId : int
{
NETWORK_MESSAGE_LOGIN
// ...
};
struct NetworkMessageBase
{
NetworkMessageId type;
size_t size;
};
struct LoginNetworkMessage : NetworkMessageBase
{
static constexpr size_t MaxUsernameLength = 25;
static constexpr size_t MaxPasswordLength = 50;
char username[MaxUsernameLength];
char password[MaxPasswordLength];
};
// This buffer and function was created/exported by the DLL
char* receiveBuffer = new char[sizeof(LoginNetworkMessage)];
NetworkMessageBase* receiveNetworkMessage()
{
// Simulate receiving data from network, actual production code provided additional safety checks
LoginNetworkMessage msg;
msg.type = NETWORK_MESSAGE_LOGIN;
msg.size = sizeof(msg);
strcpy(msg.username, "username1");
strcpy(msg.password, "qwerty");
memcpy(receiveBuffer, &msg, sizeof(msg));
return (NetworkMessageBase*)&receiveBuffer[0]; // I believe this line invokes undefined behavior (strict aliasing)
}
// Pretend main is the server application
int main()
{
NetworkMessageBase* msg = receiveNetworkMessage();
switch (msg->type)
{
case NETWORK_MESSAGE_LOGIN:
{
LoginNetworkMessage* loginMsg = (LoginNetworkMessage*)msg;
cout << "Username: " << loginMsg->username << " Password: " << loginMsg->password << endl;
}
break;
}
delete [] receiveBuffer; // A cleanup function defined in the DLL actually did this
return 0;
}
From what I understand, receiveNetworkMessage() invokes undefined behavior. I've read strict aliasing U.B. typically relates to compiler optimizations/assumptions. I'm thinking these optimizations are not relevant in this case since the .DLL and server application are compiled separately. Is that correct?
Lastly, the client application also shared the example .h provided which it utilized to create a LoginNetworkMessage which was streamed byte-for-byte to the server. Is this portable? Packing/endianness issues aside, I believe it's not since LoginNetworkMessage's layout is non-standard, so member ordering may be different.
|
From what I understand, receiveNetworkMessage() invokes undefined behavior
Correct.
LoginNetworkMessage which was streamed byte-for-byte to the server. Is this portable?
No, network communication that relies on binary compatibility isn't portable.
Packing/endianness issues aside, I believe it's not since LoginNetworkMessage's layout is non-standard, so member ordering may be different.
Order of members is guaranteed even for non-standard-layout classes. What is a problem (besides endianness) is the amount and the placement of padding for alignment purposes, the sizes of the fundamental types, the numbers of bits in a byte (although to be fair, non-8-bit-byte network connected hardware probably isn't a thing you need to support).
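One common way around all of those issues is explicit field-by-field serialization with a fixed byte order, instead of memcpy'ing whole structs. A sketch (the function names are illustrative, not part of any library):

```cpp
#include <cstdint>
#include <vector>

// Append a 32-bit value in little-endian byte order, one byte at a time,
// so struct padding and host endianness never touch the wire format.
void putU32LE(std::vector<unsigned char>& buf, std::uint32_t v) {
    buf.push_back(static_cast<unsigned char>(v));
    buf.push_back(static_cast<unsigned char>(v >> 8));
    buf.push_back(static_cast<unsigned char>(v >> 16));
    buf.push_back(static_cast<unsigned char>(v >> 24));
}

// Reassemble the value from four little-endian bytes.
std::uint32_t getU32LE(const unsigned char* p) {
    return std::uint32_t(p[0])       | std::uint32_t(p[1]) << 8
         | std::uint32_t(p[2]) << 16 | std::uint32_t(p[3]) << 24;
}
```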
|
70,574,049 | 70,574,140 | How to do a callback using a std::function() as reference? | I want to write a callback like this
template<typename T>
T GetValue(T (*CallBack) (const string), string in)
{
T value;
try
{
value = CallBack(in);
}
catch(std::invalid_argument &e)///if no conversion could be performed
{
cout << "Error: invalid argument: --> " + string(e.what());
exit(-1);
}
return value;
}
int main()
{
int integer=0;
string data;
cin>>data;
integer=GetValue<int>(&std::stoi, data);
return 0;
}
but it does not works. I have the following error.
error: cannot convert "|unresolved overloaded function type|" to "int
(^)(std::string)" {aka int (^)(std::__cxx11::basic_string)}
I tried other ways too. Like this:
class Object;
typedef int (Object::*CallBack)(const string&, std::size_t*, int);
Or like this:
typedef int (std::*FCAST)(const string&);
std::function<int(const string&)> f = (FCAST)&std::stoi;
error: address of overloaded function with no contextual type
information|
But i had not success. ¿Someone knows how to do it?
Thank you so much!!
| Link: https://godbolt.org/z/69fTzW36z
You have to use the correct function signature - the one which stoi has.
As per this link: https://en.cppreference.com/w/cpp/string/basic_string/stol, the signature is int stoi(const string&, size_t* pos = nullptr, int base = 10)
So making the changes (see // CHANGE HERE)
#include <iostream>
using namespace std;
// CHANGE HERE: see the function arguments to Callback
template<typename T>
T GetValue(T (*CallBack) (const string&, size_t* pos, int base), const string& in)
{
T value;
try
{
// CHANGE HERE: pass default arguments when calling the function
value = CallBack(in, nullptr, 10);
}
catch(std::invalid_argument &e)///if no conversion could be performed
{
cout << "Error: invalid argument: --> " + string(e.what());
exit(-1);
}
return value;
}
int main()
{
int integer=0;
string data = "1234";
//cin>>data;
integer=GetValue<int>(std::stoi, data);
cout << integer << endl;
return 0;
}
As mentioned in the comments, it's better not to use standard library functions directly as function callbacks. Rather you may implement your own logic or you can use some other way like lambda functions, something like:
#include <iostream>
using namespace std;
template<typename T>
T GetValue(T (*CallBack) (const string&), const string& in)
{
T value;
try
{
value = CallBack(in);
}
catch(std::invalid_argument &e)///if no conversion could be performed
{
cout << "Error: invalid argument: --> " + string(e.what());
exit(-1);
}
return value;
}
int main()
{
string data = "1234";
//cin>>data;
int integer = GetValue<int>([](const string& s) { return stoi(s); }, data);
cout << integer << endl;
string data2 = "12345678900";
long l = GetValue<long>([](const string& s) { return stol(s); }, data2);
cout << l << endl;
return 0;
}
Link: https://godbolt.org/z/31oKx5cjE
|
70,575,154 | 70,728,856 | how to achieve Read/Write on YL160 Magnetic Stripe 4in 1 encoder? | i have recently bought a Magnetic Reader/Writer from China (YL160 4 in 1 Reader/Writer)
and it came with the Demo application along with the API.
What i need mainly from this device is Magnetic Stripe Write, i need to write data to a blank HiCo magnetic card.
When i open the demo application under the magnetic stripe tab they are two columns
Read-Only
Read Write
the Read-Only works but the Read/Write doesn't, it refers me to Read-only which suggests the devices doesn't have write capabilities so i went into an API to check in case the demo app is buggy and here is what i found inside 160.h Header file
extern int _stdcall MSR_Init(void);
extern void _stdcall MSR_Exit(void);
extern int _stdcall MSR_DoCancel(void);
extern int _stdcall MSR_Read(void);
extern int _stdcall MSR_Write(unsigned char *TK1Dat, unsigned char *TK2Dat, unsigned char *TK3Dat);
extern int _stdcall MSR_Read_ASCII(void);
extern int _stdcall MSR_Write_ASCII(char *trace1, char *trace2, char *trace3);
extern int _stdcall MSR_Erase(unsigned char mode);
extern int _stdcall MSR_GetTrackData(unsigned char *TK1Dat, unsigned char *TK2Dat, unsigned char *TK3Dat);
extern int _stdcall MSR_Set_HiCo ();
extern int _stdcall MSR_Set_LoCo ();
extern int _stdcall MSR_Get_CoStatus(unsigned char *status);
extern int _stdcall Msr_ChangeAscii(unsigned char *TK1Dat, unsigned char *TK2Dat, unsigned char *TK3Dat);
extern int _stdcall MSR_Write_status(void);
extern int _stdcall MSR_Fast_Init(int portnum);
extern int _stdcall MSR_Get_HwInfo (char *info);
extern void _stdcall MSR_BeepOff (void);
extern void _stdcall MSR_BeepOn (void);
extern int _stdcall MSR_Erase_status(void);
extern int _stdcall MSR_ASC_Read(void);
extern int __stdcall auto_init(void);
And Then This
extern void __stdcall MagOpen(void);
extern void __stdcall MagClose(void);
extern void __stdcall MagReset(void);
extern unsigned char __stdcall MagSwiped(void);
extern unsigned char __stdcall MagRead(unsigned char *Track1, unsigned char *Track2,unsigned char *Track3);
Then i thought maybe the demo application is using the MagOpen() to switch the device to Read Mode but there is no function to switch the device to Write Mode.
i haven't tested MagOpen() with MSR_Write().
Inside the API folder there is Lib160.dll and Lib160.lib
i have tried to disassemble the Lib160.dll with IDA Pro to see whats happening in MagOpen so i can maybe implement the Write Mode Switch but IDA Pro shows an error about Input.
Further more this device appears in the Device Manager as Microsoft Usbccid Smart Card Reader (WUDF) under Smart Card Readers not on COM Port or HID.
Now Here are My questions
Is it possible to Check if the Magnetic Head is capable of Write? if yes How?
Can i Replace the Magnetic head with the one with Write Capabilities?
Please help i need help, Coding is not a problem but the problem is where to start
Thank you
| Per the product image on Amazon (See ASIN # B09L17C3PG):
Magnetic cards are read-only.
|
70,575,466 | 70,575,557 | Reversible string transformation | I have a string returned from an external C++ lib after saving a record.
This string is a key to be used if you want to retrieve the saved record via the lib.
I would like to hide the specific key format returned by the lib and return something like a hash code to the user, so that the user can use this key string to query the saved record from the system.
The key string transformation should be reversible as I still need the original value to communicate with the external lib. In this case, a Hash code is not appropriate.
The questions I would like to ask:
Is there any existing standard lib to do that?
As it is a key to the user directly, if possible, I would like it to be a string of printable chars.
It seems that encryption and decryption is my best choice? If so, which algo/lib is suggested? A simple code example is enough.
Any help is highly appreciated.
| It sounds like you want to obfuscate the string so that the user can't use it directly. The question is, how obfuscated does it need to be? If a trivial amount of obfuscation is all that is required, there are any number of simple algorithms that can do that (ROT13, XOR, nybbleizing, etc). You could combine them or come up with your own, although keep in mind that if you release an executable or library containing the algorithm, then any sufficiently determined user could reverse-engineer the algorithm or step through your code with a debugger to figure it out, if they really wanted to.
If it's really important that nobody figure it out, then the best thing to do is to never give the user the obfuscated information or the algorithm at all. For that, you could simply create a unique ID for each string (e.g. by computing a sufficiently large hash code) and store the mapping between generated IDs and their source-strings on a server that you control. Then you only give your user the generated ID, which he later hands back to your server, and your servers looks up the corresponding original string in its database. (That's pretty much the algorithm that sites like TinyURL.com use, FWIW).
Another option would be to use something like OpenSSL's libcrypto to encrypt the string using a secret key, then nybbleize or base64-encode the encrypted output and pass the results back to the user. That would avoid the need to maintain a database, but of course it only remains secure if the secret key is secure, which means it still needs to be done on a computer you control rather than on the user's computer, otherwise the user can simply run a debugger to find out what the secret encryption/decryption key is, and you're back to square one.
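For the trivial end of that spectrum, a sketch of XOR obfuscation plus hex encoding (so the result stays printable) might look like this; again, it only deters casual inspection and is NOT cryptographically secure:

```cpp
#include <cstddef>
#include <string>

// Hex-encode so the obfuscated key contains only printable characters.
std::string toHex(const std::string& s) {
    static const char* digits = "0123456789abcdef";
    std::string out;
    for (unsigned char c : s) {
        out += digits[c >> 4];
        out += digits[c & 0xF];
    }
    return out;
}

// Inverse of toHex (expects lowercase hex of even length).
std::string fromHex(const std::string& s) {
    auto val = [](char c) { return c <= '9' ? c - '0' : c - 'a' + 10; };
    std::string out;
    for (std::size_t i = 0; i + 1 < s.size(); i += 2)
        out += static_cast<char>(val(s[i]) << 4 | val(s[i + 1]));
    return out;
}

// XOR every byte with a key; applying it again with the same key reverses it.
std::string xorKey(std::string s, char key) {
    for (char& c : s) c ^= key;
    return s;
}
```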
|
70,576,019 | 70,576,379 | why do I have to use int &n instead of int n as parameter? | I have to define a function to delete an element in an array, here is the code
void delete_element(int a[], int n, int pos)
{
if (pos>=n)
pos=n-1;
else if (pos<0)
pos=0;
for (int i=pos-1;i<n-1;i++)
{
a[i]=a[i+1];
}
--n;
}
and here is an example:
int n;
printf("Enter the length of the array: ");
scanf("%d", &n);
int A[n];
for (int i=0;i<n;i++)
scanf("%d", &A[i]);
delete_element(A,n,2);
suppose that n=5 and A = {1,2,3,4,5}, after running the above code, it will print out {1,3,4,5,5}
When I use int n as parameter, the function deletes the element I want but the last element will appear twice in the array. I searched and then found out that by using int &n, the problem will be solved but I don't understand the reasons here. I would really appreciate if you could help me with this!
|
This function doesn't actually delete an element from the array; it just overwrites the data at pos with the following data, and the size is not changed.
It seems that n is the array's size. When you use int n, the size is passed by value, so the caller's n is not changed. When you use int& n, the size is passed by reference, so the caller's n will be decremented.
If you want to actually delete an element, you may refer to std::vector and its erase() and pop_back() member functions.
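The int n versus int& n difference in isolation:

```cpp
void shrinkByValue(int n) { --n; } // decrements a copy; the caller never sees it
void shrinkByRef(int& n)  { --n; } // decrements the caller's own variable
```

Calling shrinkByValue(n) leaves the caller's n at its old value, while shrinkByRef(n) really decrements it, which is exactly why the --n; inside delete_element only takes effect with the reference parameter.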
|
70,576,360 | 70,576,673 | Zlib Installation - mingw compiler | I just downloaded Zlib's source code from the website -> https://zlib.net/
zlib source code, version 1.2.11, zipfile format ....
- US (zlib.net)
And I'm struggling with setting up this library, So I'm trying to get some help from some experienced people. And an example will be helpful for me to start with.
I'm using gcc 8.1.0, windows.
Thanks!
| the steps I use:
open cmd.exe
type sh
You should see a prompt like that:
sh-3.1$
once in sh, change dir to your lib dir., so for me is:
cd /c/Users/ing.conti/Documents/zlib1211/zlib-1.2.11/
when there, you should ber allowed to call ./configure
You should see a message saying:
"Please use win32/Makefile.gcc instead." if so:
type:
make -fwin32/Makefile.gcc; make test testdll -fwin32/Makefile.gcc
as per readme for windows.
You should see:
Now the tests should run fine.
You will see a bunch of *.o and *.exe inside.
(delete manually if You want to see recompiling again, OR use:
make clean -fwin32/Makefile.gcc)
You can run *.exe BOTH from "sh" AND from cmd line of windows.
Now You can start modifying sources and / or Makefile.gcc, or duplicate it....
|
70,576,797 | 70,578,075 | How to Halt Sound Effect in SDL2 | How would I halt the playing of a sound effect in SDL2?
Currently I'm playing sound effects using the SDL2 Mixer with this code.
Mix_PlayChannel(-1, soundEffect, 0);
However I want the player to be able to skip the rest of the sound effect, and when they leave the menu the sound effect should stop.
I've tried Mix_HaltMusic(); however that doesn't seem to apply to Mix_Chunk.
How would I do so?
| To stop the Mix_Chunk started with Mix_PlayChannel, you have to use Mix_HaltChannel as explained in this answer for the opposite problem.
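A sketch of how that could look, assuming soundEffect is the Mix_Chunk* from the question (Mix_PlayChannel returns the channel number it picked, or -1 on error):

```cpp
// remember which channel the mixer chose for this chunk
int channel = Mix_PlayChannel(-1, soundEffect, 0);

// later, e.g. when the player leaves the menu:
if (channel != -1)
    Mix_HaltChannel(channel); // stop just that effect

// or halt every currently playing chunk at once:
// Mix_HaltChannel(-1);
```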
|
70,577,320 | 70,577,860 | Why does getline() cut off CSV Input? | I'm trying to read and parse my CSV files in C++ and ran into an error.
The CSV has 1-1000 rows and always 8 columns.
Generally what i would like to do is read the csv and output only lines that match a filter criteria. For example column 2 is timestamp and only in a specific time range.
My problem is that my program cuts off some lines.
At the point where the data is in the string record variable, it's not cut off. As soon as I push it into the map of int/vector, it's cut off. Am I doing something wrong here?
Could someone help me identify what the problem truly is or maybe even give me a better way to do this?
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
#include <sstream>
#include <iostream>
#include <map>
#include "csv.h"
using std::cout; using std::cerr;
using std::endl; using std::string;
using std::ifstream; using std::ostringstream;
using std::istringstream;
string readFileIntoString(const string& path) {
auto ss = ostringstream{};
ifstream input_file(path);
if (!input_file.is_open()) {
cerr << "Could not open the file - '"
<< path << "'" << endl;
exit(EXIT_FAILURE);
}
ss << input_file.rdbuf();
return ss.str();
}
int main()
{
int filterID = 3;
int filterIDIndex = filterID;
string filter = "System";
/*Filter ID's:
0 Record ID
1 TimeStamp
2 UTC
3 UserID
4 ObjectID
5 Description
6 Comment
7 Checksum
*/
string filename("C:/Storage Card SD/Audit.csv");
string file_contents;
std::map<int, std::vector<string>> csv_contents;
char delimiter = ',';
file_contents = readFileIntoString(filename);
istringstream sstream(file_contents);
std::vector<string> items;
string record;
int counter = 0;
while (std::getline(sstream, record)) {
istringstream line(record);
while (std::getline(line, record, delimiter)) {
items.push_back(record);
cout << record << endl;
}
csv_contents[counter] = items;
//cout << csv_contents[counter][0] << endl;
items.clear();
counter += 1;
    }
    return 0;
}
| I can't see a reason why your data is being cropped, but I have refactored your code slightly, and using this it might be easier for you to debug the problem, if it doesn't just disappear on its own.
int main()
{
string path("D:/Audit.csv");
ifstream input_file(path);
if (!input_file.is_open())
{
cerr << "Could not open the file - '" << path << "'" << endl;
exit(EXIT_FAILURE);
}
std::map<int, std::vector<string>> csv_contents;
std::vector<string> items;
string record;
char delimiter = ';';
int counter = 0;
while (std::getline(input_file, record))
{
istringstream line(record);
while (std::getline(line, record, delimiter))
{
items.push_back(record);
cout << record << endl;
}
csv_contents[counter] = items;
items.clear();
++counter;
}
return counter;
}
I have tried your code and (after fixing the delimiter) had no problems, but I only had three lines of data, so if it is a memory issue it would have been unlikely to show.
|
70,577,560 | 70,595,754 | Are seq-cst fences exactly the same as acq-rel fences in absence of seq-cst loads? | I'm trying to understand the purpose of std::atomic_thread_fence(std::memory_order_seq_cst); fences, and how they're different from acq_rel fences.
So far my understanding is that the only difference is that seq-cst fences affect the global order of seq-cst operations ([atomics.order]/4). And said order can only be observed if you actually perform seq-cst loads.
So I'm thinking that if I have no seq-cst loads, then I can replace all my seq-cst fences with acq-rel fences without changing the behavior. Is that correct?
And if that's correct, why am I seeing code like this "implementation Dekker's algorithm with Fences", that uses seq-cst fences, while keeping all atomic reads/writes relaxed? Here's the code from that blog post:
std::atomic<bool> flag0(false),flag1(false);
std::atomic<int> turn(0);
void p0()
{
flag0.store(true,std::memory_order_relaxed);
std::atomic_thread_fence(std::memory_order_seq_cst);
while (flag1.load(std::memory_order_relaxed))
{
if (turn.load(std::memory_order_relaxed) != 0)
{
flag0.store(false,std::memory_order_relaxed);
while (turn.load(std::memory_order_relaxed) != 0)
{
}
flag0.store(true,std::memory_order_relaxed);
std::atomic_thread_fence(std::memory_order_seq_cst);
}
}
std::atomic_thread_fence(std::memory_order_acquire);
// critical section
turn.store(1,std::memory_order_relaxed);
std::atomic_thread_fence(std::memory_order_release);
flag0.store(false,std::memory_order_relaxed);
}
void p1()
{
flag1.store(true,std::memory_order_relaxed);
std::atomic_thread_fence(std::memory_order_seq_cst);
while (flag0.load(std::memory_order_relaxed))
{
if (turn.load(std::memory_order_relaxed) != 1)
{
flag1.store(false,std::memory_order_relaxed);
while (turn.load(std::memory_order_relaxed) != 1)
{
}
flag1.store(true,std::memory_order_relaxed);
std::atomic_thread_fence(std::memory_order_seq_cst);
}
}
std::atomic_thread_fence(std::memory_order_acquire);
// critical section
turn.store(0,std::memory_order_relaxed);
std::atomic_thread_fence(std::memory_order_release);
flag1.store(false,std::memory_order_relaxed);
}
| As I understand it, they're not the same, and a counterexample is
below. I believe the error in your logic is here:
And said order can only be observed if you actually perform seq-cst
loads.
I don't think that's true. In atomics.order p4 which defines the
axioms of the sequential consistency total order S, items 2-4 all may
involve operations which are not seq_cst. You can observe the
coherence ordering between such operations, and this can let you infer
how the seq_cst operations have been ordered.
As an example, consider the following version of the StoreLoad litmus test, akin to Peterson's algorithm:
std::atomic<bool> a,b; // initialized to false
void thr1() {
a.store(true, std::memory_order_seq_cst);
std::atomic_thread_fence(std::memory_order_seq_cst);
if (b.load(std::memory_order_relaxed) == false)
std::cout << "thr1 wins";
}
void thr2() {
b.store(true, std::memory_order_seq_cst);
std::atomic_thread_fence(std::memory_order_seq_cst);
if (a.load(std::memory_order_relaxed) == false)
std::cout << "thr2 wins";
}
Note all the loads are relaxed.
I claim that if thr1 prints "thr1 wins", then we deduce that
a.store(true) preceded b.store(true) in the sequential consistency
order S.
To see this, let A be b.load() and B be b.store(true). If
b.load() == false then we have that A is coherence-ordered before
B. (Apply atomics.order p3.3 with A,B as above, and X the
initialization of b to false.)
Now let X be the fence in thr1. Then X happens before A by
sequencing, so X precedes B in S by atomics.order p4.3. That is, the
thr1 fence precedes b.store(true). And a.store(true), which is
also seq_cst, precedes the thr1 fence, because the store strongly
happens before the fence by sequencing. So by
transitivity, a.store(true) precedes b.store(true), as claimed.
Similarly, if thr2 prints, then b.store(true) precedes
a.store(true). They can't both precede each other, so we have
therefore proved that the program cannot print both messages.
If you downgrade the fences to acq_rel, the proof breaks down. In
that case, as far as I can see, nothing prevents the program from
printing thr1 wins even if b.store(true) precedes a.store(true)
in the order S. As such, with acq_rel fences, I believe it is
allowed for both threads to print. Though I'm not sure whether there
is any real implementation where it could actually happen.
We can get an even simpler example if we make all the loads and stores relaxed, so that the only seq_cst operations are the fences. Then we can use (4.4) instead to show that if b.load(relaxed) returns false, the thr1 fence precedes the thr2 fence, and vice versa if a.load() returns false. We therefore conclude, as before, that the program cannot print both messages.
However, if we keep the loads and stores relaxed and weaken the fences to acq_rel, it is then more clear that we have lost this guarantee. Indeed, with a little prodding, a similar example actually fails on x86, where the acq_rel fence is a no-op because the loads/stores are already acquire/release. So that's a clearer case where a seq_cst fence really achieves something that acq_rel does not.
|
70,577,774 | 70,603,292 | Why does the function about getting current time return a wrong time point | I'm working with C++11 and I wrote a function to get the current time point:
template <typename T = std::chrono::milliseconds>
using Clock = std::chrono::time_point<std::chrono::system_clock, T>;
// get current time point
template <typename T = std::chrono::milliseconds>
inline Clock<T> getCurrentTimePoint(int8_t timeZone = 0) {
return std::chrono::time_point_cast<T>(std::chrono::system_clock::now()) +
std::chrono::hours {timezone};
}
However, I just tested the function and it gave me a very strange output.
auto now1 = std::chrono::time_point_cast<std::chrono::seconds>(std::chrono::system_clock::now());
auto now2 = getCurrentTimePoint<std::chrono::seconds>();
LOG(INFO) << "debug - now1:" << now1.time_since_epoch().count() << " now2:" << now2.time_since_epoch().count();
LOG(INFO) can print stream into a log file. So in the log file, I got this:
debug - now1:1641294039 now2:1537614039
So, now1 works as expected, but now2 is really weird because its value is a time point from about three years ago, which is 22/09/2018.
However, I've tried to make a demo here: https://godbolt.org/z/ns3116e63 and it always gives me a correct result.
I'm really confused.
My machine is Ubuntu 16.04.4 LTS. I'm using CMake to compile my project. I added add_definitions(-std=c++14) in the file CMakeLists.txt.
Update
I tested again the next day and here is the result.
I added two more functions, which are exactly the same except their names:
std::chrono::time_point<std::chrono::system_clock, std::chrono::seconds> getCurrentTimePoint1(int8_t timeZone = 0) {
return std::chrono::time_point_cast<std::chrono::seconds>(std::chrono::system_clock::now()) + std::chrono::hours {timeZone};
}
std::chrono::time_point<std::chrono::system_clock, std::chrono::seconds> getCurrentTimePoint2(int8_t timeZone = 0) { // OK
return std::chrono::time_point_cast<std::chrono::seconds>(std::chrono::system_clock::now()) + std::chrono::hours {timeZone};
}
getCurrentTimePoint1 gave me the wrong result, just like now2 above. But getCurrentTimePoint2 gave me the correct result, just like now1 above. This is really weird...
Then I added more functions, such as the function without parameter to do more tests. At some moment, getCurrentTimePoint1 could generate a correct result too!
It seems that adding more functions returning std::chrono::time_point or calling these functions more times could solve this issue!
| Well, this is a very stupid mistake.
template <typename T = std::chrono::milliseconds>
inline Clock<T> getCurrentTimePoint(int8_t timeZone = 0) {
return std::chrono::time_point_cast<T>(std::chrono::system_clock::now()) +
std::chrono::hours {timezone}; // typo error! timeZone, instead of timezone
}
The variable timezone is already defined (man7.org/linux/man-pages/man3/tzset.3.html) and gets pulled in by chrono. That's why the compiler didn't generate any error.
|
70,577,852 | 70,577,952 | How to map generic templated compile-time functions | I'd like to have some sort of structure/type/map which can contain std::function specialisations (to contain my callbacks) which have the type known at compile time, without having to do any sort of virtual inheritance or making all my types inherit from a base type.
e.g. I'd like GenericFunction in here to hold the std::function without erasing the known type, so I can get the type later on during the callback()
#include <unordered_map>
#include <string>
#include <functional>
#include <iostream>
// Example struct that I want
template<class CallbackDatatype>
struct GenericFunction{
    GenericFunction(const std::function<void(const CallbackDatatype&)> callback)
        : m_callback(callback) {}
    std::function<void(const CallbackDatatype&)> m_callback;
    static CallbackDatatype datatypePlaceholder;
};
class Test
{
public:
Test() = default;
template <class CallbackDatatype>
void attachCallback(const std::string& key, const std::function<void(const CallbackDatatype&)>& callbackFn)
{
// How do I implement GenericFunction/a map that can hold GenericFunction??
        m_callbackMap[key] = GenericFunction<CallbackDatatype>(callbackFn);
}
// Called during runtime
void callback(const std::string& key, void* payload)
{
const auto mapIter = m_callbackMap.find(key);
if (mapIter != m_callbackMap.end()) {
// I need to get the datatype of the argument specified in the std::function stored in m_callbackMap to use for deserialising my payload
decltype(mapIter->second.datatypePlaceholder) data = m_deserialiser.deserialise<decltype(mapIter->second.datatypePlaceholder)>(payload);
mapIter->second.m_callback(data);
}
}
private:
Deserialiser m_deserialiser;
std::unordered_map<std::string, GenericFunction> m_callbackMap;
};
struct SomeStruct{
int a;
int b;
float c;
};
int main(){
// Example int function (though I want to use more complicated structs)
    const auto fn1 = [](const int& data) -> void {
        std::cout << data << std::endl;
    };
    const auto fn2 = [](const SomeStruct& data) -> void {
        std::cout << data.c << std::endl;
    };
Test test;
test.attachCallback("Route1", fn1);
test.attachCallback("Route2", fn2);
return 0;
}
Is there any way to do this?
Thank you!
| You can store std::function<void(void*)>, something like:
class Test
{
public:
Test() = default;
template <class CallbackDatatype>
void attachCallback(
const std::string& key,
const std::function<void(const CallbackDatatype&)>& callbackFn)
{
m_callbackMap[key] = [=](void* payload){
callbackFn(m_deserialiser.deserialise<CallbackDatatype>(payload));
};
}
// Called during runtime
void callback(const std::string& key, void* payload)
{
const auto mapIter = m_callbackMap.find(key);
if (mapIter != m_callbackMap.end()) {
mapIter->second(payload);
}
}
private:
Deserialiser m_deserialiser;
std::unordered_map<std::string, std::function<void(void*)>> m_callbackMap;
};
|
70,578,205 | 70,579,488 | std::ctype Derived Class Fails to Compile for char | The code below fails to compile with: override did not override any base class methods & do_is is not a member of ctype.
It works fine for wchar_t.
Tested on VC++ 2022, default settings. [EDIT] I got the same result for online GCC. It looks like it is a feature, but why?
#include <locale>
struct fail_t : std::ctype<char> // works for wchar_t
{
bool do_is(mask m, char_type c) const override
{
return ctype::do_is(m, c);
}
};
int main()
{
// nop
}
| Perhaps not a complete answer, but the cppreference site page for the std::ctype<char> specialization1 does briefly explain (bolding mine):
This specialization of std::ctype encapsulates character
classification features for type char. Unlike general-purpose
std::ctype, which uses virtual functions, this specialization
uses table lookup to classify characters (which is generally faster).
Note also, on that page, that there is no do_is() member function (inherited or otherwise).
As for the "but why" part of your question: I guess the last (parenthesised) phrase covers that: which is generally faster.
1 I appreciate that cppreference does not represent any official C++ Standard; however, it is generally very reliable and the language used is often rather more understandable than that in those Standard documents.
Looking through this C++17 Draft Standard, there is another possible answer to the "but why" question:
25.4.1.3 ctype specializations [facet.ctype.special]
1 A specialization ctype<char> is provided so that the member functions on type char can be implemented inline.
|
70,578,277 | 70,578,906 | Using perfect fowarding in STL predicate | Does it make sense to use perfect forwarding in some STL algorithm? What will be the deduced type?
auto it = std::find_if(cont.cbegin(),
cont.cend(),
[](auto&& element){ return myFunction(std::forward<decltype(element)>(element)); })
I suppose the L-value version will be used (what about const?), but what determines that? Are there other algorithms that would send an R-value? I don't think so, because it would mean the predicate could "consume" the element, whatever the result of the predicate.
| The deduced type of element depends on how the lambda is used within find_if.
find_if will never move by itself (you could pass a movable-iterator to find_if, but then it's not find_if that's doing the move, but the dereference operator on the iterator). find_if will pass the value returned by the dereference operator to the lambda, so in your case it's most likely1 const T& (it could well be T& if you had a non-const container and used begin/end).
In both const T& and T& cases, std::forward will do nothing, so you will just pass the parameter to myFunction.
Note that the standard requires the predicate of std::find_if to not modify its argument, so if you have myFunction(T&) that modifies its argument, your code has undefined behavior.
You should not std::forward here since there is no reason to accept an argument in find_if using anything else than const T&, so a better version would be2:
auto it = std::find_if(cont.cbegin(), cont.cend(),
[](const auto& element){ return myFunction(element); });
1 Some weird containers (std::vector<bool>) will not return a const bool& but a proxy-object, so your lambda needs to take a const auto& or auto&&, auto& will be a compile-time error.
2 Some people (a lot?) use auto&& in these cases, without the std::forward of course, because it's shorter and gives you consistent lambdas (you use auto&& everywhere for the lambda).
|
70,578,331 | 70,620,067 | SFML: Object's shape not rendered in window | I want to be able to render sf::CircleShape (representing pointwise charges) when pressing mouse buttons on the window. The problem is easy enough, however the shapes that I want to draw are attributes of a class Charge. The Scene class implements window management and event polling/ rendering methods and it has an attribute std::vector<Charge> distribution.
The idea is to update the distribution variable every time a sf::Event::MouseButtonPressed event is recorded, and then draw the shapes of the charges in that vector. For some reason I cannot make it work, and I think it's due to the object being created and destroyed within the event loop.
I have a main that looks like this
#include "Scene.h"
int main(){
Scene scene;
while(scene.running())
{
scene.update();
scene.render();
}
return 0;
}
with the header Scene.h declaring the class methods for window management and event polling
#include "Charge.h"
class Scene
{
private:
sf::RenderWindow* window;
sf::Event event;
std::vector<Charge> distribution;
public:
Scene();
virtual ~Scene();
bool running();
void polling();
void render();
void update();
};
The definitions of the methods instantiated in the game loop are
void Scene::update(){this -> polling();}
void Scene::polling()
{
while(this -> window -> pollEvent(this -> event))
{
switch(this -> event.type)
{
case sf::Event::Closed: this -> window -> close();
break;
case sf::Event::MouseButtonPressed:
            this -> distribution.push_back(Charge(*this -> window, sf::Mouse::getPosition(*this -> window)));
std::cout << "Distribution size = " << distribution.size() << "\n";
break;
}
}
}
void Scene::render()
{
this -> window -> clear(sf::Color::Black);
for(auto charge: this -> distribution)
{
charge.render(*this -> window);
}
this -> window -> display();
}
The window object is instantiated in the constructor of Scene. Now Charge.h declares the class
class Charge
{
private:
sf::CircleShape shape;
public:
Charge(const sf::RenderWindow& window, sf::Vector2i position);
virtual ~Charge();
void render(sf::RenderTarget& target);
};
and the definition of its methods is the following
Charge::Charge(const sf::RenderWindow& window, sf::Vector2i position)
{
std::cout << "Charge's object created!" << std::endl;
this -> shape.setOrigin(sf::Vector2f(static_cast<float>(position.x), static_cast<float>(position.y)));
this -> shape.setFillColor(sf::Color(255,50,50));
this -> shape.setRadius(25.f);
}
Charge::~Charge(){std::cout << "Charge's object destroyed!" << std::endl;}
void Charge::render(sf::RenderTarget& target)
{
target.draw(this -> shape);
}
I added printing on terminal in the constructor and destructor. The execution of the program does not render any of the objects' shapes when mouse buttons are pressed. The terminal however reports
Charge's object created! # <-- Here I pressed the mouse's button
Charge's object destroyed!
Distribution size = 1
Charge's object destroyed!
Charge's object destroyed!
Charge's object destroyed!
Charge's object destroyed!
Charge's object destroyed!
Charge's object destroyed!
Charge's object destroyed! # <-- And it goes on and on as long as the window is not closed.
I tried to approach the problem in various ways but none have worked so far. Any idea?
| Your range-based for loop uses auto instead of auto&, thus you're constantly creating temporary copies of the Charge class.
Additionally, you shouldn't mix sf::Mouse::getPosition() with events. The mouse events already provide the position: event.mouseButton.x / event.mouseButton.y.
|
70,578,669 | 70,578,716 | Using 'new' when creating a linked list | I've been trying to create a linked list in C++. I am trying to avoid using the keyword new when creating it, but it doesn't seem to work
// Linked lists
struct Node
{
int value;
Node *next;
Node( int val ) : value( val ), next(nullptr) {};
};
int main()
{
vector<int> vec = { 2,5,7,1,4,7 };
//insertian
Node head(0); // we need to use a pointer
Node* ptr = &head;
for ( auto v : vec)
{
Node* temp = new Node( v ); // Node temp( v );
ptr->next = temp; //ptr->next = &temp;
ptr = ptr->next;
}
}
The code above works fine. But if I replace the code inside the loop with the commented lines, then it fails. I'm not sure why.
I was also informed that you need to perform a delete when using new. If using new cant be avoided then how can the delete be performed?
| Saying Node temp( v ) will create a local variable scoped to the for loop. On each iteration the local variable will be created, and at the end of the iteration destroyed.
You're storing a pointer to the local variable, which is undefined behaviour. What is probably happening is that the local variable is being created over the top of the old one on each iteration (also this is implementation dependent).
As you want the node to outlive the lifetime of the iteration you will need to allocate the node on the heap via new
|
70,578,807 | 70,579,111 | bitwise conversion of decimal numbers to binary | The code works fine for some values; e.g. for 10 the output is 1010, which is correct. But for 20 or 50 or 51 the output is wrong, or at least it seems so to me.
please help !
#include <iostream>
#include <math.h>
using namespace std;
int main()
{
int n;
cin >> n;
int ans = 0;
int i = 0;
while (n != 0)
{
int bit = n & 1;
ans = (bit * pow(10, i)) + ans;
n = n >> 1;
i++;
}
cout << " Answer is " << ans << endl;
}
| After trying to run your code, it works: 51 correctly comes out as 110011, 50 as 110010, and 20 as 10100. Those are the correct bit values; you can verify them by converting the numbers by hand.
|
70,578,986 | 70,579,579 | ARMv7 NEON: Unpack 32 bit mask to 64 bit mask | I have a 32-bit NEON mask that I need to unpack to 64 bits like so:
uint32x4_t mask = { 0xFFFFFFFF, 0xFFFFFFFF, 0, 0 };
uint64x2_t mask_lo = { 0xFFFFFFFFFFFFFFFF, 0xFFFFFFFFFFFFFFFF };
uint64x2_t mask_hi = { 0, 0 };
What I came up with so far is this:
uint64x2_t mask_lo = vmovl_u32(vget_low_u32(mask)); // { 0x00000000FFFFFFFF, 0x00000000FFFFFFFF }
uint64x2_t mask_hi = vmovl_u32(vget_high_u32(mask)); // { 0, 0 }
The problem is that the result is missing the upper bytes of ones. I think it could be solved with vtstq_u64, but I am working with ARMv7, so it is sadly not available to me.
Thanks!
EDIT: My bit mask elements are either all ones or all zeros!
EDIT 2: I just used vmovl_s32 instead of vmovl_u32:
mask_lo = vmovl_s32(vget_low_s32(vreinterpretq_s32_u32(mask)));
mask_hi = vmovl_s32(vget_high_s32(vreinterpretq_s32_u32(mask)));
| It’s unclear why do you want 0xFFFFFFFF to unpack into 0xFFFFFFFFFFFFFFFF
If you want sign extend, use reinterpret intrinsics, and vmovl_s32 for the unpacking. This will unpack 0x80000000 into 0xFFFFFFFF80000000
If instead you want to duplicate the uint32_t lanes, use vzipq_u32 intrinsic with your source vector in both arguments. This will unpack 0x80000000 into 0x8000000080000000
|
70,579,550 | 70,581,590 | Why am I unable to enter elements in the linked list while the function is working otherwise? | I wrote a program to merge two sorted linked lists into one, and this function was the one I used to do it, but it's not working. The code of the function is as follows:
void combine(Node **temp, Node *temp_1, Node *temp_2){
while(temp_1 != NULL || temp_2 != NULL){
if(temp_1->data > temp_2->data){
push(temp, temp_2->data);
temp_2 = temp_2->next;
}
else{
push(temp, temp_1->data);
temp_1 = temp_1->next;
}
}
while(temp_1 != NULL){
push(temp, temp_1->data);
temp_1 = temp_1->next;
}
while(temp_2 != NULL){
push(temp, temp_2->data);
temp_2 = temp_2->next;
}
}
Now, this code doesn't add anything to the final linked list. If I write something like
push(temp, temp_1->data);
it will add elements just fine, so the problem definitely isn't with the push function. Can someone tell me what the problem with the above code is?
The full code is in the following URL:
https://ide.geeksforgeeks.org/FZ8IS4PADE
| The issue is the while condition:
while(temp_1 != NULL || temp_2 != NULL){
This will allow the execution of the body of the loop when just one of those two pointers is null, and this will result in undefined behaviour on the first statement in that body:
if(temp_1->data > temp_2->data){
The || should be an &&. This will fix your issue.
Other remarks on your code
Don't use NULL for comparing your pointer variables against, but nullptr
The use of push makes your code inefficient: at every push, your code starts an iteration through the whole list to find its end. Since you actually know which node is last (it was created in the previous iteration of the loop), this is a waste of time. Instead, keep a reference to the tail of the list that is being created. As there is no tail at the start of the combine process, it might be useful to create a "sentinel" node that comes before the real list that will be returned.
Use better variable names. temp is not temporary at all. It is the result that the caller wants to get: this name is misleading.
Avoid code repetition. The last two loops are the same except for the list that is copied from, and this code is again similar to the parts in the main loop. So create a function that does this job of copying a node from a source list to the end of another list, and that advances both pointers.
Here is what that would look like:
void copyNode(Node **source, Node **targetTail) {
*targetTail = (*targetTail)->next = new Node((*source)->data);
*source = (*source)->next;
}
void combine(Node **result, Node *head_1, Node *head_2){
Node *sentinel = new Node(0); // Dummy
Node *current = sentinel;
while(head_1 != nullptr && head_2 != nullptr){
if(head_1->data > head_2->data){
copyNode(&head_2, ¤t);
}
else{
copyNode(&head_1, ¤t);
}
}
if (head_1 == nullptr) {
head_1 = head_2;
}
while (head_1 != NULL) {
copyNode(&head_1, ¤t);
}
*result = sentinel->next;
delete sentinel;
}
|
70,579,657 | 70,816,049 | How to install and use fmtlib in Visual Studio? | I am trying to install fmtlib; I have downloaded the zip folder and extracted it. What do I do next to use it in my Visual Studio 2022 project? It's my first time installing an external library. I'm using Windows 10.
| Once you have downloaded and extracted fmtlib, open Visual Studio and create a new project: New Project -> Console App.
Replace the application file containing the main method with the code below.
The first line (#define FMT_HEADER_ONLY) is mandatory; it tells the compiler to compile the fmt header implementation as well.
#define FMT_HEADER_ONLY
#include <iostream>
#include <fmt/color.h>
int main()
{
std::cout << "Hello World!\n";
fmt::print(fmt::emphasis::bold | fg(fmt::color::red),
"Elapsed time: {0:.2f} seconds", 1.23);
}
Go to the project's Properties and open the Include Directories dropdown, as shown in the screenshot.
Click the new-line folder icon, click the line to edit it, and browse to the include directory of the extracted fmt lib (C:\Users\username\Downloads\fmt-8.1.0\fmt-8.1.0\include).
Cross-check that the path has been added.
Now build the project and execute the code; you should see the colorful text output.
Happy Coding!
|
70,579,990 | 70,580,105 | Why does binary search algorithm require returning the recursive calls? | I was implementing recursive binary search and I ran into this problem that really confused me. Here is the code that I was initially running:
int recursiveBinarySearch(int* arr, int start, int end, int key){
int middle = (start + end) / 2;
if (start >= end)return -1;
if (arr[middle] == key)return middle;
if (arr[middle] < key) {
recursiveBinarySearch(arr, middle+1, end, key);
}
if (arr[middle] > key) {
recursiveBinarySearch(arr, start, middle-1, key);
}
}
Now, as far as I understand, as soon as a base case is reached, the only thing returned should be the value that is returned from the base-case because no other call is returning anything.
However, clearly I was wrong because this wasn't giving correct answers and apparently I needed a return statement infront of each of the recursive call for this algo to work, as I found out later.
So my question is, why do we need to have the return statement? What exactly is the reason that my solution doesn't work?
| Let's assume, X is the function we are calling from the main function. And there is a function Y which is being called from function X. Function Y does some computation and calculates the result for X. So you should just call function Y and return it's result like below.
Y() {
// Some computation
return result;
}
X() {
return Y();
}
main() {
res = X();
}
Now think of it like this, each recursive call is a separate function that is doing some task. So your recursiveBinarySearch function should return the answer but it can't compute that itself, so it is calling some other functions (which in this case the same function, thus call recursion) to get the result. So it will keep calling the function and break down the problems into smaller ones until it reaches the base case where it will get the answer and finally return it.
|
70,580,020 | 70,580,203 | I try to write something from a class to a file but it has undefined refrence error c++ | I try to write something from my class to a file but it has this error
C:\Users\Lenovo\AppData\Local\Temp\ccaLsCIe.o:main.cpp:(.text+0xe7): undefined reference to `CoronaVaccine::CoronaVaccine(std::__cxx11::basic_string<char, std::char_traits, std::allocator >, std::__cxx11::basic_string<char, std::char_traits, std::allocator >, int, std::__cxx11::basic_string<char, std::char_traits, std::allocator >)'
collect2.exe: error: ld returned 1 exit status
and this is my code
#include <iostream>
#include <fstream>
#include <cstdlib>
using namespace std;
class CoronaVaccine{
string name;
string nationalID;
int dose = 0;
string nextDate;
public:
CoronaVaccine (string="", string="",int=0,string="");
void setName(string a){
name = a;
}
string getName() const{
return name;
}
void setNatinalID(string b){
nationalID = b;
}
string getNtinalID() const{
return nationalID;
}
void setDoseDate(int c, string d){
dose = c;
if (dose == 1){
nextDate = d;
}else{
nextDate = "Done";
}
}
int getDose() const{
return dose;
}
string getNextDte() const{
return nextDate;
}
};
int main(){
char cont;
string nameMain;
string natinalIdMain;
int doseMain;
string nextDateMain;
CoronaVaccine client;
ofstream fp("coronaVaccine.txt");
if (!fp){
cout << "something goes wrong!";
exit(0);
}
while (1)
{
cout << "Name natinalID dose date: \n";
cin >> nameMain;
cin >> natinalIdMain;
cin >> doseMain;
cin >> nextDateMain;
client.setName(nameMain);
client.setNatinalID(natinalIdMain);
client.setDoseDate(doseMain,nextDateMain);
cout << "do you want to countinue(y/n): ";
cin >> cont;
if (cont == 'n'){
break;
}
fp.write((char *) &client,sizeof(CoronaVaccine));
}
cout << "\n==============================\n";
fp.close();
return 0;
}
| The problem is that you have provided only a declaration for the parameterised constructor CoronaVaccine (string="", string="",int=0,string="");.
You can solve this by providing the corresponding definition as shown below:
//define the constructor. This uses constructor initializer list
CoronaVaccine::CoronaVaccine (string pname, string pnationalID,int pDose,string pnextDate)
:name(pname), nationalID(pnationalID), dose(pDose), nextDate(pnextDate)
{
}
So the modified code should look like:
class CoronaVaccine{
string name;
string nationalID;
int dose = 0;
string nextDate;
public:
CoronaVaccine (string="", string="",int=0,string="");//THIS IS A DECLARATION
//other members here
};
//THIS IS THE DEFINITION. This uses constructor initializer list
CoronaVaccine::CoronaVaccine (string pname, string pnationalID,int pDose,string pnextDate)
:name(pname), nationalID(pnationalID), dose(pDose), nextDate(pnextDate)
{
}
|
70,580,209 | 70,580,287 | Getting mismatched types error when I try to assign a numeric value to an array object using a pointer | I have initialized an array of 6 elements and tried to print it using a function called 'print'. I used a std::array object from the standard library. I passed the address of the array object to the print function. When I tried to change a value of the array object in the print function, I got a mismatched types error.
#include <bits/stdc++.h>
using namespace std;
void print(array<int, 6> *arr){
for(int i : *arr){
cout<<i<<" ";
}
*(*arr+2)=2;
cout<<endl;
}
int main(){
array<int, 6> arr2={1, 2, 3, 4, 5, 6};
print(&arr2);
print(&arr2);
}
| In *(*arr+2)=2; you dereference the array pointer and try to add 2 to it and then dereference that result to assign 2. I assume you want to assign 2 to the element at index 2 in the array.
You do not need to use pointers here though, take the array by reference.
And, never #include <bits/stdc++.h>.
#include <array> // include the proper header files
#include <iostream>
void print(std::array<int, 6>& arr) { // by reference
for (int i : arr) {
std::cout << i << ' ';
}
arr[2] = 2; // assign 2 to the element at index 2
std::cout << '\n'; // std::endl flushes the stream which is usually unnecessary
}
int main() {
std::array<int, 6> arr2 = {1, 2, 3, 4, 5, 6};
print(arr2); // and don't take the array address here
print(arr2);
}
If you really want to use a pointer here, this could be an option:
#include <array>
#include <iostream>
void print(std::array<int, 6>* arr_ptr) { // pointer
if (arr_ptr == nullptr) return; // check that it's pointing at something
std::array<int, 6>& arr = *arr_ptr; // dereference it
for (int i : arr) {
std::cout << i << ' ';
}
arr[2] = 2;
std::cout << '\n';
}
int main() {
std::array<int, 6> arr2 = {1, 2, 3, 4, 5, 6};
print(&arr2);
print(&arr2);
}
|
70,580,647 | 70,580,730 | Libcurl - how can you explain this behavior curl_multi_poll | This is my third question about curl_multi_poll, but now I seem to have done everything according to the rules:
#include <iostream>
#include <curl.h>
int main()
{
curl_global_init(CURL_GLOBAL_ALL);
CURLM* CURLM_ = curl_multi_init();
CURL* CURL_ = curl_easy_init();
curl_easy_setopt(CURL_, CURLOPT_URL, "https://stackoverflow.com");
int num_desc_events;
int running_now;
curl_multi_add_handle(CURLM_, CURL_);
curl_multi_perform(CURLM_, &running_now);
while (1)
{
std::cout << "curl_multi_poll_start" << std::endl;
curl_multi_poll(CURLM_, NULL, 0, 1000000, &num_desc_events);
std::cout << "curl_multi_poll_awakened" << std::endl;
std::cout << "num_desc_events:" << num_desc_events << std::endl;
curl_multi_perform(CURLM_, &running_now);
std::cout << "curl_multi_perform_start" << std::endl;
std::cout << "running_now:" << running_now << std::endl;
}
curl_easy_cleanup(CURL_);
curl_multi_cleanup(CURLM_);
curl_global_cleanup();
}
The console output will be as follows:
curl_multi_poll_start //Begin loop
curl_multi_poll_awakened
num_desc_events:1 //curl_multi_poll awakened - and the number of sockets on which the event occurred is indicated
curl_multi_perform_start //If there are sockets on which events have occurred, I run curl_multi_perform
running_now:1 //number of sockets still in operation
curl_multi_poll_start
curl_multi_poll_awakened
num_desc_events:1
curl_multi_perform_start
running_now:1
curl_multi_poll_start
curl_multi_poll_awakened
num_desc_events:0 //Attention!!! Why does curl_multi_poll wake up if the number of sockets on which events have occurred is 0?
curl_multi_perform_start
running_now:1
curl_multi_poll_start
curl_multi_poll_awakened
num_desc_events:1
curl_multi_perform_start
running_now:1
curl_multi_poll_start
curl_multi_poll_awakened
num_desc_events:0 //The same
curl_multi_perform_start
running_now:1
curl_multi_poll_start
curl_multi_poll_awakened
num_desc_events:1
curl_multi_perform_start
running_now:1
curl_multi_poll_start
curl_multi_poll_awakened
num_desc_events:1
curl_multi_perform_start
running_now:0 //the number of sockets in operation is 0. Request completed.
curl_multi_poll_start //curl_multi_poll pending
From curl_multi_poll description: https://curl.se/libcurl/c/curl_multi_poll.html
On completion, if numfds is non-NULL, it will be populated with the
total number of file descriptors on which interesting events occurred.
Actually the question is: why did curl_multi_poll wake up twice if there were no events on the socket?
| As is explained in the documentation, curl_multi_poll() can return "early" without any socket activities when libcurl has "other stuff" to do. Most notably things that are based on timers or timeouts.
Sometimes enabling CURLOPT_VERBOSE and watching that output helps explain what it does at a specific moment in time.
|
70,580,681 | 70,581,229 | function returning - unique_ptr VS passing result as parameter VS returning by value | In C++, what is the preferred/recommended way to create an object in a function/method and return it to be used outside the creating function's scope?
In most functional languages, option 3 (and sometimes even option 1) would be preferred, but what's the best C++ way of handling this?
Option 1 (return unique_ptr)
pros: function is pure and does not change input params
cons: is this an unnecessarily complicated solution?
std::unique_ptr<SomeClass> createSometing(){
auto s = std::make_unique<SomeClass>();
return s;
}
Option 2 (pass result as a reference parameter)
pros: simple and does not involve pointers
cons: input parameter is changed (makes function less pure and more unpredictable - the result reference param could be changed anywhere within the function and it could get hard/messy to track in larger functions).
void createSometing(SomeClass& result){
SomeClass s;
result = s;
}
Option 3 (return by value - involves copying)
pros: simple and clear
cons: involves copying an object - which could be expensive. But is this ok?
SomeClass createSometing(){
SomeClass s;
return s;
}
| In modern C++, the rule is that the compiler is smarter than the programmer. Said differently, the programmer is expected to write code that is easy to read and maintain, and unless profiling has proven that there is an unacceptable bottleneck, low-level concerns should be left to the optimizing compiler.
For that reason, and unless profiling has proven that another way is required, I would first try option 3 and return a plain object. If the object is movable, moving it is generally not too expensive. Furthermore, most compilers are able to fully elide the copy/move operation. If I remember correctly, copy elision is even mandatory starting with C++17 for statements like this:
T foo = functionReturningT();
|
70,580,799 | 70,581,965 | Should I use compare_exchange_weak(or strong) when check atomic<bool> variable is set? | Below code is an usage example of atomic<bool> from book c++ concurrency in action chapter 5.
Why do they use compare_exchange_weak for checking b is set and why do they use !expected inside while loop ?
bool expected=false;
extern atomic<bool> b; // set somewhere else
while(!b.compare_exchange_weak(expected,true) && !expected);
Can I change above code to simple code like below ?
extern atomic<bool> b; // set somewhere else
while(!b.load());
| Wow, that is a confusing piece of code. First, the obvious point: the compare_exchange_weak version (might) change the value of the underlying atomic. b.load() does not, so they are not equivalent.
To explain...
while(!b.compare_exchange_weak(expected,true) && !expected);
b.compare_exchange_weak(expected,true) says
if b is false make it true, otherwise set expected = b(true)
Now you might ask, why does the code check the value after the exchange?
It is checking to see if another thread has already set the value, which causes the loop to exit. But then, if the code exits no matter what the value of b, why the loop?
The loop is there because the weak version of the function can fail spuriously: it may return false without performing the exchange, even when b equals expected.
So this code is equivalent to
b.compare_exchange_strong(expected,true);
I'm not sure why the author didn't write it this way, but I'm missing the context for it. I'd certainly have problems with someone who put that in production code, without a nice explanatory comment above it!
See https://en.cppreference.com/w/cpp/atomic/atomic/compare_exchange for details.
See https://newbedev.com/understanding-std-atomic-compare-exchange-weak-in-c-11 for a similar discusion
|
70,581,597 | 70,588,507 | Why does this_thread::sleep_for not reduce CPU usage of while loop | I have a while loop as follows:
while(true){
//do stuff
std::this_thread::sleep_for(std::chrono::milliseconds(100));
}
When I look at the CPU usage it is almost 100% ... is there any way to do something like this while preserving CPU cycles without having to use complicated condition variables?
EDIT: "do stuff" checks a queue to see if there are any events to send... if there are none, then it does nothing (super fast), otherwise it sends the events via HTTP. It is very rare to have an event (e.g. once every 10 minutes).
I know you could re-write this to use a condition variable but I'd prefer to figure out why there is so much CPU usage.
| I figured out the reason: During the logic of the code ("// do stuff") there was a continue statement. The continue statement caused the thread to skip over the sleep statement causing it to loop continuously. I moved the thread_sleep to the top of the while loop and the CPU usage went from 99% to 0.1%
|
70,581,832 | 70,581,932 | How do I replace this raw loop with an STL algorithm or iterators? (avoiding unchecked subscript operator) | I'm implementing a generic clone of the Snake game in C++ as an exercise in following the C++ Recommended Guidelines. I'm currently working on the Snake's update() method, which is tiggering a Guideline-violation for using the unchecked subscript operator in a for loop:
Prefer to use gsl::at() instead of unchecked subscript operator
(bounds.4).
I want to avoid the redundant range check of gsl::at() / container::at(), and don't want to suppress the warning either. With that in mind, how can I replace this raw loop with something more Guideline-friendly? (e.g. an STL algorithm, iterators, what have you)
void Snake::update() noexcept {
if (hasTrailingSegments()) {
for (auto i = body_segments.size() - 1; i > 0; i--) {
body_segments[i] = body_segments[i - 1];
}
}
head() += heading;
}
The code explained, bottom up:
the loop needs to ensure each body segment follows (= takes the position of) the one in front of it.
body_segments is a std::vector of integer pairs.
hasTrailingSegments() ensure that "trailing_pieces" is size() > 1 (eg. player has more than just the head).
head() is simply body_segments[0]
| You can std::rotate the snake so that the old tail is the front element, and then overwrite that with the new segment.
void Snake::update() noexcept {
auto new_head = head() + heading;
std::rotate(body_segments.begin(), body_segments.end() - 1, body_segments.end());
head() = std::move(new_head);
}
Alternatively, you can copy_backward the body, overwriting the tail.
void Snake::update() noexcept {
std::copy_backward(body_segments.begin(), body_segments.end() - 1, body_segments.end());
head() += heading;
}
These are safe so long as a Snake always has at least one segment, which head() requires anyway.
|
70,581,961 | 70,582,049 | Multilevel static inheritance member accessing with CRTP | I've been trying to implement a multilevel inheritance using CRTP in C++.
But I'm facing the problem of accessing members with more than 2 levels.
There is no problem with 2 levels, I'm using the friend and private constructor technique.
The problem faces when I try to add another level to the hierarchy.
Here's my example:
template<typename CRTPType>
class State
{
private:
State() = default;
friend CRTPType;
protected:
float value = 0.0f;
};
template<typename CRTPType>
class AnimState : public State<AnimState<CRTPType>>
{
private:
AnimState() = default;
friend CRTPType;
public:
void test() { value = 5.0f; } //Error, value undeclared
};
class IdleState : AnimState<IdleState>
{
public:
void test() { value = 5.0f; } //It works, obviously
};
I'm aware of the problem, the IdleState class will be friend of AnimState and State so it can access to both class memberers. But I also want the AnimState to be able to access to the State class members.
Any good solution out there?
Thanks in advance.
| In AnimState, value is inherited from a base that depends on a template parameter.
Because of that, it must be accessed using this->value, or AnimState::value, or State<...>::value.
That's probably because value could also be the name of a global variable, and may or may not exist in the parent depending on the value of the template parameter, so for your sanity, the compiler doesn't want to have to resort to switching between a global variable and an inhertied variable depending on the template argument. So it refuses to search for value in the base classes, unless you qualify it as suggested above.
|
70,582,103 | 70,595,658 | Windows Toast Notification callback not being invoked | I managed to send Toast messages but once clicked, the callback is not invoked. This is the toast-tutorial that was used.
The messages should be sent through classic Win32 and in order to do this, a shortcut needs to be created which contains the AUMID and the CLSID. This is explained in Step 5 of the tutorial, where for MSIX and WiX these id's are put in their config files. There isn't an explanation how to generate the shortcut in Win32, but can be found in another aumid-tutorial.
After following the steps provided, sending the toast works fine, but clicking it does not invoke the callback for handling the feedback.
One thing that stands out is that the installShortcut function uses only the AUMID in the creation of the shortcut; the CLSID is only used when registering the COM server, whereas the configuration for MSIX and WiX shortcuts uses both.
It seems as if the link that Windows needs to route the feedback back into the app is missing.
Toasts use the "ToastGeneric" binding.
Any idea why this is happening?
| Just on the name alone it seems to me like you need to set the PKEY_AppUserModel_ToastActivatorCLSID property on the .lnk and not just the AUMID.
MSDN says:
Used to CoCreate an INotificationActivationCallback interface to notify about toast activations.
This page is marked as pre-release but does have a different InstallShortcut function that sets this property.
|
70,582,109 | 70,582,374 | C++ std::thread arguments must be invocable after conversion to rvalues | I'm clueless on whats going on here
here is a simple code example of what I'm trying to achieve:
main.cpp
#include <iostream>
#include <thread>
int main(int argc, char** argv) {
constexpr int SIZE = 10;
std::array<int, SIZE> arr{0};
auto add = []<typename T>(std::array<T, SIZE>& arr) {
for (int i = 0; i < SIZE; i++)
arr[i] = i + 1;
};
std::thread t1(std::ref(add), std::ref(arr));
t1.join();
return 0;
}
compile command:
g++ -std=c++20 -Wall main.cpp -pthread -o t1
error:
In file included from /usr/include/c++/11.1.0/stop_token:35,
from /usr/include/c++/11.1.0/thread:40,
from main.cpp:2:
/usr/include/c++/11.1.0/bits/std_thread.h: In instantiation of ‘std::thread::thread(_Callable&&, _Args&& ...) [with _Callable = std::reference_wrapper<main(int, char**)::<lambda(std::array<T, 10>&)> >; _Args = {std::reference_wrapper<std::array<int, 10> >}; <template-parameter-1-3> = void]’:
main.cpp:15:48: required from here
/usr/include/c++/11.1.0/bits/std_thread.h:130:72: error: static assertion failed: std::thread arguments must be invocable after conversion to rvalues
130 | typename decay<_Args>::type...>::value,
| ^~~~~
/usr/include/c++/11.1.0/bits/std_thread.h:130:72: note: ‘std::integral_constant<bool, false>::value’ evaluates to false
/usr/include/c++/11.1.0/bits/std_thread.h: In instantiation of ‘struct std::thread::_Invoker<std::tuple<std::reference_wrapper<main(int, char**)::<lambda(std::array<T, 10>&)> >, std::reference_wrapper<std::array<int, 10> > > >’:
/usr/include/c++/11.1.0/bits/std_thread.h:203:13: required from ‘struct std::thread::_State_impl<std::thread::_Invoker<std::tuple<std::reference_wrapper<main(int, char**)::<lambda(std::array<T, 10>&)> >, std::reference_wrapper<std::array<int, 10> > > > >’
/usr/include/c++/11.1.0/bits/std_thread.h:143:29: required from ‘std::thread::thread(_Callable&&, _Args&& ...) [with _Callable = std::reference_wrapper<main(int, char**)::<lambda(std::array<T, 10>&)> >; _Args = {std::reference_wrapper<std::array<int, 10> >}; <template-parameter-1-3> = void]’
main.cpp:15:48: required from here
/usr/include/c++/11.1.0/bits/std_thread.h:252:11: error: no type named ‘type’ in ‘struct std::thread::_Invoker<std::tuple<std::reference_wrapper<main(int, char**)::<lambda(std::array<T, 10>&)> >, std::reference_wrapper<std::array<int, 10> > > >::__result<std::tuple<std::reference_wrapper<main(int, char**)::<lambda(std::array<T, 10>&)> >, std::reference_wrapper<std::array<int, 10> > > >’
252 | _M_invoke(_Index_tuple<_Ind...>)
| ^~~~~~~~~
/usr/include/c++/11.1.0/bits/std_thread.h:256:9: error: no type named ‘type’ in ‘struct std::thread::_Invoker<std::tuple<std::reference_wrapper<main(int, char**)::<lambda(std::array<T, 10>&)> >, std::reference_wrapper<std::array<int, 10> > > >::__result<std::tuple<std::reference_wrapper<main(int, char**)::<lambda(std::array<T, 10>&)> >, std::reference_wrapper<std::array<int, 10> > > >’
256 | operator()()
| ^~~~~~~~
note:
if I change the lambda to:
auto add = []<typename T>(std::array<T, SIZE> arr)
and call the thread like this:
std::thread t1(std::ref(add), arr);
it compiles, but obviously the array doesn't change because it's a copy and not a reference
| std::reference_wrapper is not a std::array, so T cannot be deduced.
add(std::ref(arr)); doesn't compile neither.
You might use
std::thread t1([&](auto arg){ add(arg.get()); }, std::ref(arr));
Demo
|
70,582,161 | 70,582,463 | std::string constructed from subrange of char array calls strlen | It is similar to LeetCode C++ Convert char[] to string, throws AddressSanitizer: stack-buffer-overflow error
The code is
#include <string>
int main() {
char buf[10] = {6, 6, 6, 6, 6, 6, 6, 6, 6, 6};
std::string s{buf, 2, 3};
return 0;
}
Execution ends up with address sanitizer complaining about strlen's stack-buffer-overflow:
$ clang++ -g -fsanitize=address foo.cpp ; ./a.out
=================================================================
==1001715==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffd76b2510a at pc 0x00000042f029 bp 0x7ffd76b250b0 sp 0x7ffd76b24870
READ of size 23 at 0x7ffd76b2510a thread T0
#0 0x42f028 in strlen (/tmp/a.out+0x42f028)
#1 0x7fd6de786e9b in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, std::allocator<char> const&) (/usr/lib/x86_64-linux-gnu/libstdc++.so.6+0x145e9b)
#2 0x4c6cfe in main /tmp/foo.cpp:6:19
#3 0x7fd6de2d60b2 in __libc_start_main /build/glibc-eX1tMB/glibc-2.31/csu/../csu/libc-start.c:308:16
#4 0x41c3fd in _start (/tmp/a.out+0x41c3fd)
I'd expect that std::string s{buf, 2, 3}; calls a constructor overload with known bounds (start at 2, length is 3). Why is it calling strlen()? Which overload is used?
| Check cpp insights. It is a great tool to see what was used during overload resolution.
It generates this:
#include <string>
int main()
{
char buf[10] = {6, 6, 6, 6, 6, 6, 6, 6, 6, 6};
std::string s = std::basic_string<char, std::char_traits<char>, std::allocator<char> >{std::basic_string<char, std::char_traits<char>, std::allocator<char> >(buf, std::allocator<char>()), 2, 3};
return 0;
}
After cleaning up to make this more readable:
#include <string>
int main()
{
char buf[10] = {6, 6, 6, 6, 6, 6, 6, 6, 6, 6};
std::string s = std::string{std::string(buf), 2, 3};
return 0;
}
So note that buf is first converted to std::string, and this conversion requires strlen. Since your array does not contain a terminating zero, a buffer overflow happens.
|
70,582,321 | 70,661,018 | glfw window with no title bar | I am trying to make a way to toggle my window between windowed mode and fullscreen mode. I had done it successfully except for one problem. The title bar is not working! You can’t move the window either. Without this piece of code everything works just fine.
setFullscreen method:
void Window::setFullscreen(bool fullscreen)
{
GLFWmonitor* monitor = glfwGetPrimaryMonitor();
const GLFWvidmode* mode = glfwGetVideoMode(monitor);
if (fullscreen) {
glfwSetWindowMonitor(m_window, monitor, 0, 0, mode->width, mode->height, mode->refreshRate);
glViewport(0, 0, mode->width, mode->height);
}
if (!fullscreen) {
glfwSetWindowMonitor(m_window, nullptr, 0, 0, m_width, m_height, GLFW_DONT_CARE);
glViewport(0, 0, m_width, m_height);
}
}
The result of the code:
| @tomasantunes helped me figure this out.
In setFullscreen I am setting the window to be at 0, 0, the top left of the screen. The title bar didn't actually disappear; it was just off screen. So if I set the window to be at 100, 100 instead, I get the title bar back. This was a pretty dumb mistake on my part.
if (!fullscreen) {
glfwSetWindowMonitor(m_window, nullptr, 0, 0, m_width, m_height, GLFW_DONT_CARE); // I set the position to 0,0 in the 3rd and 4th parameter
glViewport(0, 0, m_width, m_height);
}
Fixed code:
if (!fullscreen) {
glfwSetWindowMonitor(m_window, nullptr, 100, 100, m_width, m_height, GLFW_DONT_CARE); // I set the position to 100, 100
glViewport(0, 0, m_width, m_height);
}
New result:
|
70,582,577 | 70,605,436 | Static linking of SDL2 2.0.18 with VS 2019, (memcpy already defined bug MT setting.) | I tried to compile and link a very simple SDL2 example code.
It works for all the following configurations:
Win32 release and debug for both MD and MT setting in runtime lib.
x64 release and debug with runtime lib \MD
When I compile and link with x64, release and \MT I get this error:
Error LNK2005 memcpy already defined in SDL2.lib(SDL_stdlib.obj) c:\SDL2\libcruntime.lib
(memcpy.obj)
This is the same problem kind of.
https://github.com/libsdl-org/SDL/issues/3662
Below is my c++ compiler command line:
/permissive- /ifcOutput "x64 Release" /GS /GL /W3 /Gy /Zc:wchar_t /Zi /Gm- /O2 /sdl /Fd"x64 Release\vc142.pdb" /Zc:inline /fp:precise /D "NDEBUG" /D "_CONSOLE" /D "_UNICODE" /D "UNICODE" /errorReport:prompt /WX- /Zc:forScope /Gd /Oi /MT /FC /Fa"x64 Release" /EHsc /nologo /Fo"x64 Release" /Fp"x64 Release\SDL2.pch" /diagnostics:column
Here are the linker command line:
/OUT:"C:\Users\jerry\source\repos\SDL2\x64 Release\SDL2.exe" /MANIFEST /LTCG:incremental /NXCOMPAT /PDB:"C:\Users\jerry\source\repos\SDL2\x64 Release\SDL2.pdb" /DYNAMICBASE "SDL2main.lib" "zlib.lib" "libpng16.lib" "libwebp.lib" "jpeg.lib" "SDL2_image.lib" "SDL2.lib" "winmm.lib" "imm32.lib" "version.lib" "Setupapi.lib" "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "comdlg32.lib" "advapi32.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib" "odbccp32.lib" /DEBUG /MACHINE:X64 /OPT:REF /INCREMENTAL:NO /PGD:"C:\Users\jerry\source\repos\SDL2\x64 Release\SDL2.pgd" /SUBSYSTEM:CONSOLE /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /ManifestFile:"x64 Release\SDL2.exe.intermediate.manifest" /LTCGOUT:"x64 Release\SDL2.iobj" /OPT:ICF /ERRORREPORT:PROMPT /ILK:"x64 Release\SDL2.ilk" /NOLOGO /LIBPATH:"D:\SDL2Lib\x64" /TLBID:1
So how to make this work with VS 2019 and x64 and \MT setting? can pragma guards work in someway. What I learned is that the optimizer is doing memcpy and other things.
| Thanks to keltar's advice about the SDL_LIBC flag.
The solution is in the file SDL_config.h: change the following, starting from row 32
#if defined(__WIN32__)
#include "SDL_config_windows.h"
#elif defined(__WINRT__)
....
and change it to this
#if defined(__WIN32__)
#if defined(_WIN64)
#define HAVE_LIBC 1
#endif
#include "SDL_config_windows.h"
#elif defined(__WINRT__)
....
|
70,582,675 | 70,582,757 | What order does gcc __attribute__((constructor)) run in relation to global variables in same translation unit? | I saw this question answering some of this, but at least not clearly my question.
I suspect that I should probably not access any global variables that require code to execute (e.g. std::string), but how about POD variables?
std::string s = "hello";
const char* c = "world";
extern std::string s2; // (actually below in the same TU)
__attribute__((constructor)) void init()
{
// safe to assume that !strcmp(c, "world");
// not safe to assume s == "hello"?
// even less safe to assume s2 == "foo";
}
std::string s2 = "foo";
| The documentation says:
However, at present, the order in which constructors for C++ objects with static storage duration and functions decorated with attribute constructor are invoked is unspecified.
!strcmp(c, "world") is probably safe to assume.
char* c = "world";
This is ill-formed because string literal doesn't implicitly convert to a pointer to non-const char (since C++11).
|
70,582,957 | 70,584,031 | Threads appear to run randomly.. Reliable only after slowing down the join after thread creation | I am observing strange behavior using pthreads. Note the following code -
#include <iostream>
#include <string>
#include <algorithm>
#include <vector>
#include <pthread.h>
#include <unistd.h>
typedef struct _FOO_{
int ii=0;
std::string x="DEFAULT";
}foo;
void *dump(void *x)
{
foo *X;
X = (foo *)x;
std::cout << X->x << std::endl;
X->ii+=1;
}
int main(int argc, char **argv)
{
foo X[2];
const char *U[2] = {"Hello", "World"};
pthread_t t_id[2];
int t_status[2];
/*initalize data structures*/
for(int ii=0; ii < 2; ii+=1){
X[ii].x=U[ii];
}
foo *p = X;
for(int ii=0; ii < 2; ii+=1){
t_status[ii] = pthread_create(&t_id[ii], NULL, dump, (void *)p);
std::cout << "Thread ID = " << t_id[ii] << " Status = " << t_status[ii] << std::endl;
p+=1;
}
//sleep(1); /*if this is left commented out, one of the threads do not execute*/
for(int ii=0; ii < 2; ii+=1){
std::cout << pthread_join(t_status[ii], NULL) << std::endl;
}
for(int ii=0; ii < 2; ii+=1){
std::cout << X[ii].ii << std::endl;
}
}
When I leave the sleep(1) call (between thread create and join) commented out, I get erratic behavior where, randomly, only 1 of the 2 threads runs.
rajatkmitra@butterfly:~/mpi/tmp$ ./foo
Thread ID = 139646898239232 Status = 0
Hello
Thread ID = 139646889846528 Status = 0
3
3
1
0
When I uncomment sleep(1), both threads execute reliably.
rajatkmitra@butterfly:~/mpi/tmp$ ./foo
Thread ID = 140072074356480 Status = 0
Hello
Thread ID = 140072065963776 Status = 0
World
3
3
1
1
The pthread_join() should hold up exit from the program until both threads complete, but in this example I am unable to get that to happen without the sleep() function. I really do not like the implementation with sleep(). Can someone tell me if I am missing something?
| See Peter's note -
pthread_join should be called with the thread id, not the status value that pthread_create returned. So: pthread_join(t_id[ii], NULL), not pthread_join(t_status[ii], NULL). Even better, since the question is tagged C++, use std::thread. –
Pete Becker
|
70,583,131 | 70,583,255 | How to specify default initialization conditionally based on templated member variable type | I have a templated class that uses std::conditional to determine the type of a particular member variable. However, I'd like to also change the default initialisation behaviour (either in the ctor list initialisation, or in the member declaration itself) depending on that condition, as one of the options is a singleton.
e.g.
#include <type_traits>
class NotSingleton{
public:
NotSingleton() = default;
};
class Singleton{
public:
static Singleton& getInstance(){
static Singleton singleton;
return singleton;
}
// delete copy ctor/assignment etc.
private:
Singleton() = default;
~Singleton() = default;
};
template<bool UseSingleton>
class MyClass{
public:
MyClass() {}
private:
std::conditional_t<UseSingleton, Singleton&, NotSingleton> m_member;
};
This does not compile when UseSingleton = true as m_member cannot use the default ctor for Singleton. How can I set m_member to initialise from MultipleSubscriberListener::getInstance() if UseSingleton = true?
| std::conditional_t<UseSingleton, Singleton&, NotSingleton> m_member =
[]() -> decltype(m_member) {
if constexpr (UseSingleton)
return Singleton::getInstance();
else
return {};
}();
If you don't want this to be a default initializer, you can also put it in the member initializer list of the constructor.
Instead of a lambda you can also use a member function if that looks cleaner to you. In either case the return type should be decltype(m_member), so that the reference is passed on instead of decayed.
You could also use decltype(auto), however with that there is potential for undefined behavior. If you use decltype(auto) and don't construct the NotSingleton object directly in the return statement (instead of passing it on from e.g. a local variable), then you would return a dangling reference, causing undefined behavior in the construction of the member.
|
70,583,395 | 70,587,711 | Why is std::regex notoriously much slower than other regular expression libraries? | This Github repository added std::regex to the list of regular expression engines and got decimated by the others.
Why is that std::regex - as implemented in libstdc++ - so much slower than others? Is that because of the C++ standard requirements or it is just that that particular implementation is not very well optimized?
Also in the shootout std::regex was unable to compile several regular expressions that all the others accepted, even after adding the flag std::regex::extended. They were (?i)Twain, \b\w+nn\b, (?i)Tom|Sawyer|Huckleberry|Finn, \s[a-zA-Z]{0,12}ing\s, ([A-Za-z]awyer|[A-Za-z]inn)\s and \p{Sm}.
UPDATE: Added comparison with boost::regex.
UPDATE2: added ctre
|
Is that because of the C++ standard requirements or it is just that that particular implementation is not very well optimized?
The answer is yes. Kinda.
There is no question that libstdc++'s implementation of <regex> is not well optimized. But there is more to it than that. It's not that the standard requirements inhibit optimizations so much as the standard requirements inhibit changes.
The regex library is defined through a bunch of templates. This allows people to choose between char and wchar_t, which is in theory good. But there's a catch.
Template libraries are used by copy-and-pasting the code directly into the code compiling against those libraries. Because of how templates get included, even types that nobody outside of the template library knows about are effectively part of the library's ABI. If you change them, two libraries compiled against different versions of the standard library cannot work with each other. And because the template parameter for regex is its character type, those implementation details touch basically everything about the implementation.
The minute libstdc++ (and other standard library implementations) started shipping an implementation of C++ regular expressions, they bound themselves to a specific implementation that could not be changed in a way that impacted the ABI of the library. And while they could cause another ABI break to fix it, standard library implementers don't like breaking ABI because people don't upgrade to standard libraries that break their code.
When C++11 forbade basic_string copy-on-write implementations, libstdc++ had an ABI problem. Their COW string was widely used, and changing it would make code that compiled against the new one break when used with code compiled against the old one. It took years before libstdc++ bit the bullet and actually implemented C++11 strings.
If Regex had been defined without templates, implementations could use traditional mechanisms to hide implementation details. The ABI for the interface to external code could be fixed and unchanging, with only the implementation of the functions behind that ABI changing from version to version.
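The compile failures listed in the question also follow from the grammar rather than the implementation: std::regex's default ECMAScript grammar has no inline flags such as (?i) and no \p{...} property classes. As a hedged sketch, a case-insensitive search has to be spelled with the std::regex::icase flag instead:

```cpp
#include <cassert>
#include <regex>
#include <string>

// std::regex rejects inline flags like (?i); case-insensitivity is
// requested through std::regex::icase on the regex object instead.
bool containsTwain(const std::string& text) {
    static const std::regex re("twain", std::regex::icase);
    return std::regex_search(text, re);
}
```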
|
70,584,096 | 70,585,784 | Exception: STATUS_ACCESS_VIOLATION at rip=0010040108D when executing program | I have a problem with execution of the project compiled in eclipse Version: 2021-12 (4.22.0)
The program is just 2 files:
function.asm
.code32
.global array
.section .text
array: pushl %ebp
movl %esp, %ebp
pushl %ecx
pushl %esi
movl 12(%ebp), %ecx
movl 8(%ebp), %esi
xorl %eax, %eax
lp: addl (%esi), %eax
addl $4, %esi
loop lp
popl %esi
popl %ecx
popl %ebp
ret
main.cpp
#include <iostream>
using namespace std;
extern "C" int array(int a[], int length); // external ASM procedure
int main()
{
int a[] = {1, 3, 5, 7, 9, 2, 4, 6, 8, 0}; // array declaration
int array_length = 10; // length of the array
int sum = array(a, array_length); // call of the ASM procedure
cout << "sum=" << sum << endl; // displaying the sum
}
The program is compiled without any problems
make all
Building file: ../src/function.asm
Invoking: GCC Assembler
as -o "src/function.o" "../src/function.asm"
Finished building: ../src/function.asm
Building file: ../src/main.cpp
Invoking: Cygwin C++ Compiler
g++ -O0 -g3 -Wall -c -fmessage-length=0 -MMD -MP -MF"src/main.d" -MT"src/main.o" -o "src/main.o" "../src/main.cpp"
Finished building: ../src/main.cpp
Building target: first.exe
Invoking: Cygwin C++ Linker
g++ -o "first.exe" ./src/function.o ./src/main.o
Finished building target: first.exe
but when I execute I get the following error:
0 [main] first 1941 cygwin_exception::open_stackdumpfile: Dumping stack trace to first.exe.stackdump
And the stack dump looks as follows:
Exception: STATUS_ACCESS_VIOLATION at rip=0010040108D
rax=0000000000000000 rbx=00000000FFFFCC30 rcx=0000000000000001
rdx=000000000000000A rsi=0000000000401109 rdi=0000000000008000
r8 =000000060001803F r9 =0000000000000000 r10=00000000FFFFCA50
r11=0000000100401189 r12=0000000180248C20 r13=00000000FFFFCC77
r14=0000000000000000 r15=00000000FFFFCC77
rbp=00000000FFFFCB80 rsp=00000000FFFFCB70
program=C:\WORKSPACE.ICE\first\Debug\first.exe, pid 102, thread main
cs=0033 ds=002B es=002B fs=0053 gs=002B ss=002B
Stack trace:
Frame Function Args
000FFFFCB80 0010040108D (000FFFFCBE0, 00100401109, 000FFFFCC77, 00180333F78)
000FFFFCB80 0018027FB40 (00100401109, 000FFFFCC77, 00180333F78, 0018027FB40)
000FFFFCB80 000FFFFCBB0 (000FFFFCC77, 00180333F78, 0018027FB40, 000FFFFCC30)
000FFFFCB80 000FFFFCBE0 (00180333F78, 0018027FB40, 000FFFFCC30, 00300000001)
000FFFFCB80 00100401109 (00180333F78, 0018027FB40, 000FFFFCC30, 00300000001)
000FFFFCBE0 00100401109 (00000000020, 70700000006FF00, 0018004A7AA, 00000000000)
000FFFFCCD0 0018004A816 (00000000000, 00000000000, 00000000000, 00000000000)
00000000000 00180048353 (00000000000, 00000000000, 00000000000, 00000000000)
000FFFFFFF0 00180048404 (00000000000, 00000000000, 00000000000, 00000000000)
End of stack trace
The same program is compiled and executed without any problems under Linux but the assemblies are written in INTEL syntax like that:
global array ; required for linker and NASM
section .text ; start of the "CODE segment"
array: push ebp
mov ebp, esp ; set up the EBP
push ecx ; save used registers
push esi
mov ecx, [ebp+12] ; array length
mov esi, [ebp+8] ; array address
xor eax, eax ; clear the sum value
lp: add eax, [esi] ; fetch an array element
add esi, 4 ; move to another element
loop lp ; loop over all elements
pop esi ; restore used registers
pop ecx
pop ebp
ret ; return to caller
Under the Linux the 32-bit code is compiled using:
nasm -f elf32 function.asm
g++ -m32 main.cpp function.asm
Could anybody please help me to identify where I go wrong?
Thanks
Marek
| OK, I've managed to figure this out
I've finally successfully executed the program by compiling at my cygwin64 that is linked to my ECLIPSE
And I've used two simple commands:
as function.asm -o function.o
g++ main.cpp function.o
And the AT&T assembly syntax for the external function:
.code32
.global array
.section .text
array: pushl %ebp
movl %esp, %ebp
pushl %ecx
pushl %esi
movl 12(%ebp), %ecx
movl 8(%ebp), %esi
xorl %eax, %eax
lp: addl (%esi), %eax
addl $4, %esi
loop lp
popl %esi
popl %ecx
popl %ebp
ret
For some reason the compiler requires me to add a carriage return to the end of the assembler file (function.asm)
So now, to launch my 32-bit source: I just probably need to hook up the above commands with my ECLIPSE
|
70,585,114 | 70,595,092 | Imported target "Boost::system" includes non-existent path "/include" | I am a newbie with CMake please bear with me. I have a library (libvpop) which I created in c++ using some Boost components (system and date_time). I can link to it without a problem in windows but on Ubuntu, I am getting an error that implies the path to the boost include files cannot be found. Here is the simple CMakeLists.txt file.
cmake_minimum_required(VERSION 3.0.0)
set (Boost_DEBUG 1)
project(vpoplibuser)
find_package(fmt CONFIG REQUIRED)
find_package(Boost CONFIG REQUIRED system )
find_package(Boost CONFIG REQUIRED date_time)
add_executable(vpoplibuser vpoplibuser.cpp vpoplib.h)
find_library(VPLIB libvpop HINTS ~/projects/vpoplibuser/ )
message(STATUS "VPLib include dir: ${VPLIB}")
target_include_directories(vpoplibuser PUBLIC ${PROJECT_SOURCE_DIR} )
target_link_libraries(vpoplibuser PUBLIC ${VPLIB})
target_link_libraries(vpoplibuser PRIVATE fmt::fmt)
target_link_libraries(vpoplibuser PRIVATE Boost::system Boost::date_time)
When I run CMake, I get message:
CMake Error in CMakeLists.txt
Imported target "Boost::system" includes non-existent path "/include"
in its INTERFACE_INCLUDE_DIRECTORIES. Possible reasons include:
The path was deleted, renamed, or moved to another location.
An install or uninstall procedure did not complete successfully.
The installation package was faulty and references files it does not provide.
I have removed and reinstalled Boost. My Boost libraries are at /lib/x86_64-linux-gnu. I cannot figure out exactly where CMake is searching for the boost include file. When I inspect the _BOOST_INCLUDEDIR variable in boost_header-1.71.0/boost_headers-config.cmake it tells me _BOOST_INCLUDEDIR is "/include". I have read something about the PATH variable being an issue so I added /usr to the beginning of my PATH (there is a folder /usr/include/boost which has the boost .hpp files so I was making an assumption that is what CMake is looking for). I have been stuck on this for a couple of days so I would appreciate any advice from the expert community.
| I have found a work around thanks to this article: https://github.com/VowpalWabbit/vowpal_wabbit/issues/3003
Something in the Boost cmake process is causing boost to look for the include files at /include when they are really at /usr/include. I created a symbolic link for /include to point to /usr/include and this allowed cmake to find everything. I have not solved the root cause but can move forward with this approach.
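The workaround can be sketched as a single command. This is a system-configuration fragment, not something to run blindly: the paths are the ones from this report, it needs root, and you should verify that /usr/include/boost actually exists first.

```shell
# Point /include at the real header location so Boost's generated CMake
# config files can resolve their INTERFACE_INCLUDE_DIRECTORIES.
sudo ln -s /usr/include /include
```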
|
70,585,249 | 70,585,451 | Is there any way to convert a array pointer back into a regular array? | I am trying to pass an array through a function but when I try to get the length of the array it gives me the length of the pointer. Is there any way to convert the array pointer back into a regular array?
float arr[] = {10, 9, 8};
void func(float arr[])
{
// now I want to figure out the size of the array
int lenArr = sizeof(arr) / sizeof(arr[0]); // this will get the size of the pointer to the array and not the actual array's size
}
| You can declare the parameter a reference to an array.
void func(float (&arr)[10])
{
// but you have to know the size of the array.
}
To get around having to know the size, you can template on size
template<int Size>
void func(float (&arr)[Size])
{
// Now the size of the array is in "Size"
// So you don't need to calculate it.
}
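Putting the template version to use, a minimal sketch (the function name and body are illustrative, not from the original post):

```cpp
#include <cassert>
#include <cstddef>

// Sketch: the array size is deduced as a template parameter, so no
// sizeof arithmetic is needed inside the function.
template <std::size_t Size>
float sumArray(const float (&arr)[Size]) {
    float total = 0.0f;
    for (std::size_t i = 0; i < Size; ++i)
        total += arr[i];
    return total;
}
```

Note that this only works when the caller still has the real array; once the array has decayed to a pointer, the size information is gone for good.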
|
70,585,477 | 70,585,623 | How to use enum as starting args with aliases | I am trying to use enum as starting args.
It should work as alias pairs, so "i" and "info" should have the same value, etc...
I know it is possible to use if/else with flags, but I would like to do this using, for example, a switch on an int value.
#include <iostream>
#include <string>
namespace startFlags {
enum class flag {
i, info = 0,
e, encrypt = 1,
d, decrypt = 2,
c, check = 3,
h, help = 4
};
void printFlag(startFlags::flag input) {
std::cout << "Output: " << input << std::endl; //error
}
}
Is there any other way to deal with starting args with aliases.
| You need to cast enum classes if you'd like to print them as int (or something else) even though the underlying type is int:
Example:
#include <iostream>
namespace startFlags {
enum class flag {
i, info = i, // both will be 0
e, encrypt = e, // both will be 1
d, decrypt = d, // ...
c, check = c,
h, help = h
};
void printFlag(startFlags::flag input) {
// Note cast below:
std::cout << "Output: " << static_cast<int>(input) << '\n';
}
}
int main() {
printFlag(startFlags::flag::i);
printFlag(startFlags::flag::info);
}
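For the actual argument parsing, a hedged sketch (parseFlag is a hypothetical helper, not part of the original code) that maps both spellings to the same enumerator, so a later switch handles "i" and "info" identically:

```cpp
#include <cassert>
#include <optional>
#include <string>

enum class flag { info = 0, encrypt = 1, decrypt = 2, check = 3, help = 4 };

// Hypothetical helper: both the short and the long spelling of each
// argument map to a single enumerator; unknown input yields nullopt.
std::optional<flag> parseFlag(const std::string& arg) {
    if (arg == "i" || arg == "info")    return flag::info;
    if (arg == "e" || arg == "encrypt") return flag::encrypt;
    if (arg == "d" || arg == "decrypt") return flag::decrypt;
    if (arg == "c" || arg == "check")   return flag::check;
    if (arg == "h" || arg == "help")    return flag::help;
    return std::nullopt;
}
```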
|
70,586,050 | 70,586,170 | How to construct an array using make_unique | How can I use std::make_unique to construct a std::array?
In the following code uptr2's declaration does not compile:
#include <iostream>
#include <array>
#include <memory>
int main( )
{
// compiles
const std::unique_ptr< std::array<int, 1'000'000> > uptr1( new std::array<int, 1'000'000> );
// does not compile
const std::unique_ptr< std::array<int, 1'000'000> > uptr2 { std::make_unique< std::array<int, 1'000'000> >( { } ) };
for ( const auto elem : *uptr1 )
{
std::cout << elem << ' ';
}
std::cout << '\n';
}
| std::make_unique< std::array<int, 1'000'000> >( { } ) does not compile because the std::make_unique function template takes an arbitrary number of arguments by forwarding reference, and you can't pass {} to a forwarding reference because it has no type.
However, std::make_unique< std::array<int, 1'000'000> >() works just fine. It initializes the std::array object in the same manner as the declaration auto a = std::array<int, 1'000'000>();: that is, value-initialization. Just like aggregate initialization from {}, in the case of this particular type, value-initialization will initialize it to all zeroes.
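As a compact sketch of the working form (smaller array and function name are illustrative):

```cpp
#include <array>
#include <cassert>
#include <memory>

// Calling make_unique with no arguments value-initializes the array,
// which zeroes every element -- no braces needed.
std::unique_ptr<std::array<int, 1000>> makeZeroedArray() {
    return std::make_unique<std::array<int, 1000>>();
}
```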
|
70,586,056 | 70,586,114 | How to change c++ code to make clang-tidy modernize-use-transparent-functors happy | We have the following c++ code using catch2 framework:
auto check2 = [](size_t exp, size_t val, auto comp) {
REQUIRE(comp(exp, val));
};
check2(10, 20, std::equal_to<size_t>{});
and clang-tidy generates the following
/test_lingua.cpp:1236:36: warning: prefer transparent functors 'equal_to<>' [modernize-use-transparent-functors]
check2(10, 20, std::equal_to<size_t>{});
^
Can any (reasonable) change to the code make clang-tidy happy?
UPDATE:
The original code compared size_t and nlohmann::json (something like REQUIRE(comp(val, exp_dict[s]));) and the problem using std::equal_to<>{} resulted in this compilation error
1>test_lingua.cpp(1224,25): error C2672: 'operator __surrogate_func': no matching overloaded function found
1>test_lingua.cpp(1227): message : see reference to function template instantiation 'void maz::tests::test_superscript_remover::<lambda_afa1c9ba8dd5ae9bea1ac934f5f796ab>::()::<lambda_9d6a94e643f0b9fe3677202d1edfa8f2>::operator ()<std::equal_to<void>>(const std::string &,size_t,std::equal_to<void>) const' being compiled
1>test_lingua.cpp(1224,1): error C2893:
Failed to specialize function template 'unknown-type std::equal_to<void>::operator ()(_Ty1 &&,_Ty2 &&) noexcept(<expr>) const'
1>E:\VS019\VC\Tools\MSVC\14.29.30037\include\xstddef(198): message : see declaration of 'std::equal_to<void>::operator ()'
In order to fix this, you had to also use .get<size_t>() in order for the perfect forwarding to work.
| You simply replace
std::equal_to<size_t>{}
with
std::equal_to<>{}
(C++14 and above, uses template default argument) or
std::equal_to{}
(C++17 and above, uses CTAD).
This way the std::equal_to<void> specialization is used, which generically compares two arguments a and b of any types as if by a == b (plus perfect forwarding).
It avoids having to specify the correct types again, so you don't have to repeat yourself, which may sometimes be a source of error (e.g. if the types are changed but the std::equal_to template argument is not correctly updated).
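A minimal sketch of the transparent form in a helper mirroring the question's lambda (the function name is illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <functional>

// The transparent std::equal_to<> deduces its argument types at the
// call site, so no element type has to be repeated here.
template <typename Comp>
bool compare(std::size_t exp, std::size_t val, Comp comp) {
    return comp(exp, val);
}
```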
|
70,586,376 | 70,586,449 | Assigning values to std::array of std::optional objects | I am trying to fill a std::array of std::optional objects as below.
class MyClass
{
private:
int distance;
MyClass(int x, int y);
friend class MasterClass;
};
MyClass::MyClass(int x, int y)
{
distance = x+y;
}
class MasterClass
{
public:
MasterClass(std::array<std::optional<int>,5> xs, std::array<std::optional<int>,5> ys);
private:
std::array<std::optional<MyClass>, 5> myclassarray{};
};
MasterClass::MasterClass(std::array<std::optional<int>,5> xs, std::array<std::optional<int>,5> ys)
{
for(int i=0; i<5;i++)
{
myclassarray[i].emplace(new MyClass(*xs[i], *ys[i])); //---(1)
}
}
From the line commented with (1) above, I get the following error,
error: no matching function for call to std::optional<MyClass>::emplace(MyClass&)
I also tried replacing the same line with
myclassarray[i] = new MyClass(*xs[i], *ys[i]) ; //---(2)
This will give me
error: no match for ‘operator=’ (operand types are ‘std::array<std::optional<MyClass>,5>::value_type’ {aka ‘std::optional<MyClass>’} and ‘MyClass*’)
How do I solve this issue?
| Looks like maybe you are coming from Java, or C#. When you assign a value in c++, it is rare that you will use new. The issue is, that you are basically doing this:
std::optional<MyClass> o = new MyClass();
o is of type std::optional<MyClass> and new MyClass() is of type MyClass *. You can see from here that there is no operation that converts a pointer to an object into an optional of an object. Let's take this back to basics; what we want to do is something like:
std::optional<int> o; // defaults to std::nullopt
o = value; // We set it with some value.
And actually this is all there is to it. So lets expand to an array:
std::array<std::optional<int>, 5> a; // An array with 5 optional, all are std::nullopt
a[2] = value; // Set the optional at position 2 to a value.
And this extends easily to your example:
for(int i=0; i<5;i++) {
myclassarray[i] = MyClass(*xs[i], *ys[i]);
}
Just be careful with this bit:
*xs[i], *ys[i]
because from here:
The behavior is undefined if *this does not contain a value.
Which will cause you much grief if this is the case.
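The question's original attempt with emplace can also work without new, since optional::emplace forwards constructor arguments and builds the object in place. One caveat: with the original private constructor this would still not compile, because std::optional itself is not a friend of MyClass. A simplified sketch with a public constructor (Point and firstDistance are illustrative names):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <optional>

struct Point {
    int distance;
    Point(int x, int y) : distance(x + y) {}
};

int firstDistance() {
    std::array<std::optional<int>, 5> xs{1, 2, 3, 4, 5};
    std::array<std::optional<int>, 5> ys{5, 4, 3, 2, 1};
    std::array<std::optional<Point>, 5> points{};
    for (std::size_t i = 0; i < points.size(); ++i)
        points[i].emplace(*xs[i], *ys[i]);  // constructs Point(x, y) in place
    return points[0]->distance;
}
```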
|
70,586,470 | 70,587,501 | How to speed up my Print all partitions of an n-element set into k unordered sets | how to speed up my program?
my task: 1<=k<=n<=10, time 1 sec
Print all partitions of an n-element set into k unordered sets. Partitions
can be output in any order. Within a partition, sets can be displayed in any order.
Within the set, numbers must be displayed in ascending order. Follow the format from the example.
example:
example
my solve:
#include <iostream>
#include <vector>
//#pragma GCC optimize ("O3")
using namespace std;
int n;
int num(vector<vector<int>> sets) {
int res = 0;
for (int i = 0; i < sets.size(); i++)
for (int j = 0; j < sets[i].size(); j++)
if (sets[i][j] != 0)
res++;
return res;
}
bool testSets(vector<vector<int>> sets) {
for (int i = 0; i < sets.size(); i++) {
if (sets[i].size() == 0) {
return false;
} else {
for (int j = 0; j < sets[i].size() - 1; j++)
if (sets[i][j] >= sets[i][j + 1])
return false;
}
}
if (num(sets) == n)
return true;
else
return false;
}
void out(vector<vector<int>> a) {
for (int i = 0; i < a.size(); i++) {
for (int j = 0; j < a[i].size(); j++)
cout << a[i][j] << " ";
cout << endl;
}
}
void func(vector<vector<int>> sets, vector<int> a, vector<bool> used) {
if (num(sets) == n) {
if (testSets(sets)) {
out(sets);
cout << endl;
} else {
return;
}
} else {
int i, x;
for (i = 0; i < used.size(); i++) {
if (used[i] == false)
break;
}
if (i == used.size())
i--;
for (x = 0; x < sets.size(); x++) {
if (sets[x].size() == 0)
break;
}
if (x == sets.size())
x--;
used[i] = true;
for (int j = 0; j <= x; j++) {
sets[j].push_back(a[i]);
//out(sets);
func(sets, a, used);
sets[j].pop_back();
}
used[i] = false;
}
}
int main() {
int k;
cin >> n >> k;
vector<int> a(n);
vector<vector<int>> sets;
vector<bool> used(n, false);
for (int i = 0; i < a.size(); i++)
a[i] = i + 1;
sets.resize(k);
func(sets, a, used);
return 0;
}
| Thanks everyone.
I removed the num function and added a sum variable to func, which is incremented when an element is pushed into a set and decremented when it is popped. That way the number of placed elements is tracked incrementally instead of being recounted by scanning every set on each recursive call.
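The same ideas can be sketched as follows. This is not the poster's exact program: it passes the partition state by reference, mutates it with push_back/pop_back, and counts the partitions (Stirling numbers of the second kind) instead of printing them, so the result can be checked. Placing each new element into at most the first empty set avoids generating the same unordered partition twice.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Count partitions of {1..n} into exactly sets.size() non-empty sets.
// State is passed by reference, so no vectors are copied per call.
int countPartitions(std::vector<std::vector<int>>& sets, int next, int n) {
    if (next > n) {
        for (const auto& s : sets)
            if (s.empty()) return 0;  // every set must be non-empty
        return 1;
    }
    int total = 0;
    for (std::size_t j = 0; j < sets.size(); ++j) {
        sets[j].push_back(next);
        total += countPartitions(sets, next + 1, n);
        sets[j].pop_back();
        if (sets[j].empty()) break;  // only the first empty set is tried
    }
    return total;
}
```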
|
70,587,080 | 70,705,202 | Alpaca Traders API Unable to Connect with httplib | I am attempting to use the C++ wrapper for the Alpaca Traders API found here:
https://github.com/marpaia/alpaca-trade-api-cpp#client-instantiation
However, I'm having trouble even connecting to my paper trading account.
Here is the code from the wrapper for getting the Alpaca account:
httplib::Headers headers(const Environment& environment) {
return {
{"APCA-API-KEY-ID", environment.getAPIKeyID()},
{"APCA-API-SECRET-KEY", environment.getAPISecretKey()},
};
}
std::pair<Status, Account> Client::getAccount() const {
Account account;
httplib::SSLClient client(environment_.getAPIBaseURL());
auto resp = client.Get("/v2/account", headers(environment_));
if (!resp) {
return std::make_pair(Status(1, "Call to /v2/account returned an empty response"), account);
}
}
The problem is that I get an error back that it's unable to connect:
Error: resp.error(): Connection (2)
I've checked the environment, and it's been parsed correctly, I even tried the following curl command, and it was able to get the http page.
curl -X GET -H "APCA-API-KEY-ID: {YOUR_API_KEY_ID}"
-H "APCA-API-SECRET-KEY: {YOUR_API_SECRET_KEY}"
https://paper-api.alpaca.markets/v2/account
So my machine can find, and get the page, thus it must be something in the code that is wrong. Any help would be appreciated.
| I found the problem. After looking over the documentation on the cpp-httplib github, the SSLClient doesn't have the https:// at the beginning of the URL, and me having that in there was causing the problem.
So you want:
httplib::SSLClient client("paper-api.alpaca.markets");
and not:
httplib::SSLClient client("https://paper-api.alpaca.markets");
The second one will not connect.
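If the base URL comes from configuration, a small hypothetical helper (not part of the wrapper) can strip the scheme before handing the host to httplib::SSLClient:

```cpp
#include <cassert>
#include <string>

// Strip an https:// prefix, since httplib::SSLClient expects a bare host.
std::string toHost(const std::string& url) {
    const std::string prefix = "https://";
    if (url.rfind(prefix, 0) == 0)
        return url.substr(prefix.size());
    return url;
}
```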
|
70,587,148 | 74,369,544 | Why is AUDCLNT_E_ENDPOINT_CREATE_FAILED triggered when I use WASAPI to create an audio endpoint on a Windows computer? | I used Core Audio to collect audio on a Windows computer. There was no problem at first, but after calling the initialize interface many times, the AUDCLNT_E_ENDPOINT_CREATE_FAILED error message appeared. Does anyone know the reason?
API link is as follows:https://learn.microsoft.com/en-us/windows/win32/api/audioclient/nf-audioclient-iaudioclient-initialize
| Finally, I found the answer: the computer raised the AUDCLNT_E_ENDPOINT_CREATE_FAILED error message because the Kaspersky anti-virus software blocked the audio stream from the computer to the SDK. Therefore, I added the software to Kaspersky's whitelist, and the SDK ran normally.
|
70,587,488 | 70,596,088 | Conan on windows claims setting isn't set, it is set | I am trying to port a program from Linux to windows. The program is built with conan.
Currently I run:
conan install . -if build -s build_type=Debug
I get this error:
ERROR: : 'settings.compiler.cppstd' value not defined
I have this in my conan.py:
class ConanFileNeverEngine(ConanFile):
generators = "pkg_config"
requires = [
"eigen/3.4.0",
"libffi/3.4.2", # Not sure why but without this wayland causes a conflict.
"wayland/1.19.0",
"glfw/3.3.4",
"shaderc/2019.0",
"freetype/2.10.4",
"eigen/3.4.0",
"harfbuzz/2.8.2",
"vulkan-memory-allocator/2.3.0",
"gtest/1.10.0",
"benchmark/1.5.3",
"tinygltf/2.5.0",
"stb/20200203",
"vulkan-headers/1.2.182",
"vulkan-validationlayers/1.2.182",
"cereal/1.3.0"
]
settings = {
"os": None,
"compiler" : None,
"cppstd": ["20"],
"build_type" : None}
....
I also tried to manually set it:
def config_options(self):
# self.output.warn("test")
self.settings.compiler.cppstd = "20"
self.settings.compiler.runtime = "dynamic"
if os_info.is_linux:
if os_info.linux_distro == 'ubuntu':
window_system = os.environ['XDG_SESSION_TYPE']
if window_system == 'wayland':
self.output.error("System is using wayland, switch to x11.")
I get the exact same error.
I don't understand I AM setting the value.
| Settings are external, project wide configuration, they cannot be defined or assigned values in conanfile.py files.
Settings are defined in your profile, like the "default", you can see it printed when you type conan install, something like:
Configuration:
[settings]
arch=x86_64
arch_build=x86_64
build_type=Release
compiler=Visual Studio
You can check that your defined value is not there, because you are trying to define its value way later.
So you can:
Define your compiler.cppstd in one profile file. The "default" one, or your own custom profile (you can share profiles with conan config install command)
Note you can set settings per package, with mypkg:compiler.cppstd=xxx if necessary, typically as something exceptional, as settings are intended to be project-wide
Pass it in command line -s compiler.cppstd=xxx
The reason for this is that "recipes/conanfile" describe how things are done, but they are generic, they should work for any configuration. Specific configuration values, like cppstd or used compiler then must be external to recipes.
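Concretely, the profile route can be sketched as a fragment like this (values are illustrative) added to ~/.conan/profiles/default or a custom profile:

```
[settings]
compiler.cppstd=20
```

Alternatively, pass it on the command line, e.g. conan install . -if build -s build_type=Debug -s compiler.cppstd=20.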
|
70,587,536 | 70,588,156 | What is function name in c++ | I know the function name in ASM is just the address of the first instruction in the function.
So I think the function name should be the right value in c++. But why I also can get the address of the function name. And both can be assigned to the function pointer like these:
typedef int (*Func)();
int A(){
return 1;
}
int main()
{
Func* f = &A;
f = A;
(*f)();
}
So I want to know what the function name is in C++, and the difference between the function name and the address of the function name.
Update:
Because of my fault, I submit the wrong code, the code I want to ask is:
typedef int (*Func)();
int A(){
return 1;
}
int main()
{
Func f = &A;
f = A;
(*f)();
}
| Let's take a look at a line-by-line explanation of your code snippet.
typedef int (*Func)(); //Statement 1: This means Func is just another name for a "pointer to a function that takes no parameter and returns an int
//Here you are defining a function named A that takes no parameter and returns an int
int A(){
return 1;
}
int main()
{
Func* f = &A;//Statement 2
f = A;//Statement 3
(*f)();//statement 4
}
Statement 1
Here we have the statement
typedef int (*Func)();
This means, that
Func is just another name for a pointer to a function that takes no parameter and returns an int. Technically this is nothing but int(*)().
Statement 2
Here we have the statement,
Func* f = &A;
On the left hand side we are defining an object named f of type Func*. That is, the type of f is pointer to Func. But since Func itself is a pointer to a function that takes no parameter and returns an int.
This implies that the type of f is
a pointer to a pointer to a function that takes no parameter and returns an int. Technically it is nothing but int(**)().
Now we are going to look at the right hand side of statement 2. But for that you must know this:
The type of A is int() which is called a function type.
So when you wrote the expression &A this means you'll get:
a pointer to a function type. Which in this case is int(*)().
Why Problem in Statement 2
So essentially on the left hand side you have an expression of type int(**)() while on the right hand side you have an expression of type int(*)(). So there is a mismatch in the types and this is exactly what the error says:
error: cannot convert ‘int (*)()’ to ‘int (**)()’ in initialization
Statement 3
Here we have the statement,
f = A;
Here on the left hand side you have an expression of type int(**)() as explained in above(in statement 2's explanation). In particular, f has the type int(**)()
While on the right hand side we have expression of type int() as already explained above(in statement 2's explanation). In particular, since A is a function type and has the type int().
Why Problem in Statement 3
Here again there is mismatch in types of expressions on the left hand side and the right and side which is exactly what the error says:
error: cannot convert ‘int()’ to ‘int (**)()’ in assignment
Statement 4
Here we have the statement
(*f)();
Here you are doing 2 things:
Dereferencing the pointer f. But since f's type is int(**)(), the result of dereferencing f will be int(*)() which is nothing but a pointer to a function that takes no parameter and returns an int.
Next, you are calling the function through that resulting pointer. In this case there is no syntactic error. That is, there is no syntactic error in statement 4.
Edit
Now since you have edited statement 2 to Func f = &A; i will explain this one.
In this case on the left hand side the type of f is Func. But since Func is nothing but int(*)(). This means :
f itself is a pointer to a function that takes no parameter and returns an int. That is f has type int(*)().
Now on the right hand side you have &A. But note as i already explained, A is a function type which is int() in this case. So this means, when you write &A:
&A is a pointer to a function that takes no parameter and returns an int. This means, &A has type int(*)().
So essentially the types on the right hand side and left hand side match and there is no error in this case.
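The symmetry between the two spellings can be sketched compactly (callBoth is an illustrative name): the function name decays to a pointer to the function, so &A and plain A initialize the same pointer, and (*f)() and f() are equivalent calls.

```cpp
#include <cassert>

using Func = int (*)();

int A() { return 1; }

int callBoth() {
    Func f = &A;  // explicit address-of
    Func g = A;   // implicit function-to-pointer decay
    return (*f)() + g();
}
```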
|
70,588,019 | 70,588,256 | Converting between instances of a class template using a member function | I have a couple of strong types that are just aliases of a class template that contains useful shared code.
template <typename T>
struct DiscretePosition {
public:
int x{0};
int y{0};
// ... useful generic functions
};
struct ChunkTag{};
struct TileTag{};
using ChunkPosition = DiscretePosition<ChunkTag>;
using TilePosition = DiscretePosition<TileTag>;
I would like to write a constructor or helper function to help me convert one of these types to the other.
TilePosition tilePosition{1, 1};
ChunkPosition chunkPosition{tilePosition};
// or
ChunkPosition chunkPosition{tilePosition.asChunkPosition()};
This constructor would just take the underlying x, y and scale them to match the other type using a constant.
Is there a way to do this, or would I need to use a free function?
| Here is one possible generic solution that allows you to specialize a get_scaling_factor function for conversions you want to allow:
godbolt link
#include <iostream>
struct ChunkTag{};
struct TileTag{};
template<typename T, typename U>
consteval double get_scaling_factor();
template<>
consteval double get_scaling_factor<ChunkTag, TileTag>() {
return 2;
}
template <typename T>
struct DiscretePosition {
public:
int x{0};
int y{0};
// ... useful generic functions
DiscretePosition(int x, int y) : x(x), y(y) {}
template<typename U>
explicit DiscretePosition(const DiscretePosition<U>& other)
: x(other.x * get_scaling_factor<T, U>())
, y(other.y * get_scaling_factor<T, U>())
{}
};
using ChunkPosition = DiscretePosition<ChunkTag>;
using TilePosition = DiscretePosition<TileTag>;
int main() {
TilePosition tilePosition{1, 1};
ChunkPosition chunkPosition{tilePosition};
std::cout << chunkPosition.x << ", " << chunkPosition.y << std::endl;
// or
// ChunkPosition chunkPosition{tilePosition.asChunkPosition()};
}
|
70,588,183 | 70,588,273 | How can I transform int to string then join with std::ranges::views? | #include <iostream>
#include <numeric>
#include <ranges>
#include <vector>
#include <string>
#include <string_view>
int main() {
auto str = (
std::views::iota(1)
| std::ranges::views::take(5)
| std::ranges::views::transform([](int x) -> std::string_view {
return std::to_string(x) + "|";
})
| std::ranges::views::join
);
for (const char ch : str) {
std::cout << ch;
}
return 0;
}
I'm new to functional programming in cpp. I want to generate first five natural numbers and translate them to string, then join them and print it.
If I use std::string as the return type of the lambda for transform, it throws many errors at compile time. I thought I should change it to std::string_view. I did so, and it compiled without errors. However, with std::string_view the lambda returns only a reference to the string, and the temporary string on the stack is destroyed when the lambda returns. So, the program doesn't print anything.
How can I fix it?
|
If I use std::string for return type of lambda for transform, it
throws many error on compile time.
What you have observed is a C++20 defect that has been resolved by P2328. If you use a newer compiler version that has already implemented P2328 (such as gcc-11.2), your code will be well-formed.
Before P2328, I think there is no simple solution in the standard.
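For reference, this is the string the pipeline is meant to produce, computed here without ranges so it can be checked on any compiler; under a P2328-implementing compiler, the std::string-returning lambda version in the question yields the same output.

```cpp
#include <cassert>
#include <string>

// Equivalent of iota(1) | take(count) | transform(to_string + "|") | join.
std::string joinedNumbers(int count) {
    std::string out;
    for (int x = 1; x <= count; ++x)
        out += std::to_string(x) + "|";
    return out;
}
```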
|
70,588,538 | 70,588,780 | How to efficiently write a function for this? | I have a long string which I separate to shorter strings and parallelize them. How to write a function which passes the thread count and separates the string to that many shorter segments?
This is how I've been doing
//Thread count is 4
seg = content.length() / 4;
string dataSeg1, dataSeg2, dataSeg3, dataSeg4;
dataSeg1 = content.substr(0, seg);
dataSeg2 = content.substr(seg, seg * 2);
dataSeg3 = content.substr(seg * 2, seg * 3);
dataSeg4 = content.substr(seg * 3, seg * 4);
thread t1(printLen, dataSeg1);
thread t2(printLen, dataSeg2);
thread t3(printLen, dataSeg3);
thread t4(printLen, dataSeg4);
if (t1.joinable())
{
t1.join();
}
if (t2.joinable())
{
t2.join();
}
if (t3.joinable())
{
t3.join();
}
if (t4.joinable())
{
t4.join();
}
What is a better way to write this?
| I think you should use a loop because you have a number of lines of code that are almost identical.
Using your code as a base:
static const unsigned int numberOfThreads = 4;
const size_t segmentLength = content.length() / numberOfThreads;
std::vector<thread> threads;
for (int threadCount = 0; threadCount < numberOfThreads; ++threadCount)
{
threads.push_back(thread(printLen, content.substr(segmentLength * threadCount, segmentLength)));
}
for (auto& thread : threads)
{
if (thread.joinable())
thread.join();
}
This is just the next step (I haven't compiled it), there are still issues you will need to handle, such as what happens if content.length() is not divisible by 4. There is going to be a better way of handling the "has the thread finished" check as well.
Hopefully it should help you a bit.
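A variant sketch that also deals with the divisibility caveat, by letting the last segment absorb the remainder (splitForThreads is an illustrative name; the threading itself is unchanged):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Split content into `parts` segments; the final segment takes the
// leftover characters when the length is not evenly divisible.
std::vector<std::string> splitForThreads(const std::string& content, unsigned parts) {
    std::vector<std::string> segments;
    const std::size_t base = content.length() / parts;
    for (unsigned i = 0; i < parts; ++i) {
        const std::size_t start = base * i;
        const std::size_t len = (i + 1 == parts) ? std::string::npos : base;
        segments.push_back(content.substr(start, len));
    }
    return segments;
}
```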
|
70,588,853 | 71,017,830 | UE4. Widget component doesn't show up when its outer is controller | As it's said, in UE 4.27 widget component doesn't show up when its outer is player controller. But in 4.18 version it worked.
Could you please explain why ?
And how ( if it's possible ) can I make it work in 4.27 ?
Widget is set in the widget component.
To reproduce:
.h:
UPROPERTY(EditDefaultsOnly)
TSubclassOf<UComicFXWidgetComponent> ComicWidgetClass;
UComicFXWidgetComponent* NewComponent;
My .cpp code that does work:
"BeginPlay":
// If I change "this" with PlayerController ref then it will not work
NewComponent = NewObject<UComicFXWidgetComponent>(this, ComicWidgetClass);
NewComponent->RegisterComponent();
"InteractButtonPressed":
NewComponent->AttachToComponent(equippedWeapon_->mesh_, FAttachmentTransformRules::SnapToTargetNotIncludingScale, TEXT("DamageSocket"));
My .cpp code that does NOT work:
"BeginPlay":
APlayerController* PC = UGameplayStatics::GetPlayerController(this, 0);
NewComponent = NewObject<UComicFXWidgetComponent>(PC, ComicWidgetClass); // "this" replaced with "PC"
NewComponent->RegisterComponent();
"InteractButtonPressed":
NewComponent->AttachToComponent(equippedWeapon_->mesh_, FAttachmentTransformRules::SnapToTargetNotIncludingScale, TEXT("DamageSocket"));
| Okay. The answer is:
APlayerController derives from AController, and AController in its constructor calls SetHidden(true); this affects its children (or actors whose outer is the controller).
To solve just add in your Controller constructor:
SetActorHiddenInGame(false);
|
70,588,880 | 70,589,821 | what is the logic behind it? how output came | In c++, I am not able to understand this code logic. Can someone explain it?
The output is 0 3 5 7 9 11 13 15; mainly I don't understand the logic behind if(i&1){continue;}.
#include<iostream>
using namespace std;
int main(){
for(int i = 0; i<=15; i+=2){
cout<<i<<" ";
if(i&1){
continue;
}
i++;
}
return 0;
}
| i & 1 performs a bitwise AND between i and 1. So if i == 13, then you are performing:
1101
AND 0001
---------
0001
So when i == 13, if(i & 1) would be essentially if(1) or if(true).
Similarly, if i == 10, 1010 AND 0001 = 0000, so if(i & 1) would be essentially if(false).
Now you might see a pattern: what you are doing is querying the rightmost bit of i (its LSB), which can also be used to test whether i is an odd number.
|
70,589,479 | 70,589,519 | Nesting comments is not allowed in C/C++. But what do the following errors mean when you try to put one comment pair inside the other? | Consider the following code from 'C++ Primer' by Lippman,
#include <iostream>
/*
* comment pairs /* */ cannot nest.
* "cannot nest" is considered source code,
* as is the rest of the program
*/
int main(){
return 0;
}
On compiling,
$cl -EHsc .\Program.cc
Microsoft (R) C/C++ Optimizing Compiler Version 19.30.30706 for x86
Copyright (C) Microsoft Corporation. All rights reserved.
Program.cc
.\Program.cc(4): error C4430: missing type specifier - int assumed. Note: C++ does not
support default-int
.\Program.cc(4): error C2146: syntax error: missing ';' before identifier 'nest'
.\Program.cc(7): warning C4138: '*/' found outside of comment
.\Program.cc(10): error C2143: syntax error: missing ';' before '{'
.\Program.cc(10): error C2447: '{': missing function header (old-style formal list?)
Question 1: What does the following mean from the aforementioned code in the book, ""cannot nest" is considered source code, as is the rest of the program"?
Question 2: What happens under the hood when comments are nested? What do these rather confusing errors produced by the compiler mean?
| A comment starts with a /* and ends at */. So in the example, the comment is
/*
* comment pairs /* */
The fact that there is a second /* inside the comment does not "restart" it. It still ends at the first */.
So then the compiler tries to interpret cannot nest. as source code. Its best guess seems to be that cannot could have been a variable declaration, but that the type specifier is missing (hence "int assumed"). And then nest is also wrong, and the errors just continue...
|
70,589,728 | 70,589,801 | No Function to Pointer Decay during Assignment | In the below given code snippet, when i wrote f = A; then why doesn't A decay to a pointer to a function?
//Func is alias for "pointer to a function that returns an int and does not take any parameter"
typedef int (*Func)();
int A(){
return 1;
}
int main()
{
Func* f = &A;//cannot convert ‘int (*)()’ to ‘int (**)()’ in initialization - I UNDERSTAND THIS ERROR
f = A;//error: cannot convert ‘int()’ to ‘int (**)()’ in assignment - My QUESTION IS THAT IN THIS CASE WHY DOESN'T "A" decay to a pointer to a function and give the same error as the above
}
I know why Func *f = &A; produces error. But i expected f = A; to produce the same error because i thought in this case A will decay to a pointer to a function and hence should produce the same error as Func*f = &A;. To be precise, i thought i would get the error
error: cannot convert ‘int(*)()’ to ‘int (**)()’ in assignment
But to my surprise, there is no decay and i do not get the above error.
Why/How is this so? That is, why is there no decay.
|
why doesn't A decay to a pointer to a function?
The error message says that a function (int()) cannot be implicitly converted to a pointer to a pointer to a function (int (**)()), because the type of the expression (A) is a function.
The function would decay if there were a valid conversion sequence to the target type through the decayed type. But there is no such conversion sequence, and so the program is ill-formed.
|
70,589,732 | 70,590,127 | Transpose a 2D vector matrix | Is there a way to find transpose a 2D vector matrix without allocating another 2D vector??
Sample Testcase
Input: matrix = [[1,2,3],[4,5,6],[7,8,9]]
Output: [[1,4,7],[2,5,8],[3,6,9]]
Code that I tried
class Solution {
public:
void rotate(vector<vector<int>>& matrix) {
int i,j, n=matrix.size(),temp;
for(i=0; i<n; i++) {
for(j=0; j<n; j++) {
temp=matrix[i][j];
matrix[i][j]=matrix[j][i];
matrix[j][i]=temp;
}
}
for(i=0; i<n; i++) {
for(j=0; j<n; j++) {
cout<<matrix[i][j]<<" ";
}
cout<<endl;
}
}
};
Printed Output:
1 2 3
4 5 6
7 8 9
Expected Output:
1 4 7
2 5 8
3 6 9
Edited:
New Code:
class Solution {
public:
void transpose(vector<vector<int>>& mat, int n) {
int i,j,temp;
for(i=0; i<n; i++) {
for(j=0; j<n; j++) {
temp=mat[i][j];
mat[i][j]=mat[j][i];
mat[j][i]=temp;
}
}
}
void rotate(vector<vector<int>>& matrix) {
int n=matrix.size(),i,j;
transpose(matrix, n);
for(i=0; i<n; i++) {
for(j=0; j<n; j++) {
cout<<matrix[i][j]<<" ";
}
cout<<endl;
}
}
};
Thanks in advance
| Below is the in-place (fixed space) code for an n*n matrix:
class Solution {
public:
void rotate(vector<vector<int>>& matrix) {
int i,j,n=matrix.size();
for(i=0; i<n; i++) {
// Instead of j starting at 0 every time, it needs to start from i+1
for(j=i+1; j<n; j++) {
swap(matrix[i][j], matrix[j][i]);
}
}
// Note: reversing each row afterwards turns the transpose into a
// 90-degree clockwise rotation; drop this loop if only the transpose is needed.
for(int i=0; i<n; i++) {
reverse(matrix[i].begin(), matrix[i].end());
}
}
};
|
70,590,059 | 70,590,426 | C++ how to define custom key in map(It's a little different from similar problems)? | I want to define a map like:
#include<map>
struct key{
vector<int> start_idx;
vector<int> len;
};
map<key, int> m;
I looked at other questions and found that I could write the comparison function like this
struct Class1Compare
{
bool operator() (const key1& lhs, const key2& rhs) const
{
.....
}
};
In fact, start_idx means a start index in a file, len means length, so I need to use other parameter in the comparsion function like:
bool operator() (const key1& lhs, const key2& rhs) const
{
... //in this field, i can use (char *file).
}
and char *file may not be global, because I use multi-thread which means in different thread, char *file is different.
| You can have data members in your comparer.
struct Class1Compare
{
bool operator() (const key& lhs, const key& rhs) const
{
// uses lhs, rhs and file
}
char * file;
};
Your map will require a non-default constructed Class1Compare.
char * file = /* some value */
map<key, int, Class1Compare> m({ file });
key k = /* key's data relating to file */
m[k] = 42;
|
70,590,092 | 71,620,646 | Is it guaranteed to be 2038-safe if sizeof(std::time_t) == sizeof(std::uint64_t) in C++? | Excerpted from the cppref:
Implementations in which std::time_t is a 32-bit signed integer (many
historical implementations) fail in the year 2038.
However, the documentation doesn't say how to detect whether the current implementation is 2038-safe. So, my question is:
Is it guaranteed to be 2038-safe if sizeof(std::time_t) == sizeof(std::uint64_t) in C++?
| Practically speaking yes. In all modern implementations in major OSes time_t is the number of seconds since POSIX epoch, so if time_t is larger than int32_t then it's immune to the y2038 problem
You can also check if __USE_TIME_BITS64 is defined in 32-bit Linux and if _USE_32BIT_TIME_T is not defined in 32-bit Windows to know if it's 2038-safe
However regarding the C++ standard, things aren't as simple. time_t in C++ is defined in <ctime> which has the same content as <time.h> in C standard. And in C time_t isn't defined to have any format
3. The types declared are size_t (described in 7.19);
clock_t
and
time_t
which are real types capable of representing times;
4. The range and precision of times representable in clock_t and time_t are implementation-defined
http://port70.net/~nsz/c/c11/n1570.html#7.27.1p3
So it's permitted for some implementation to have for example double as time_t and store picoseconds from 1/1 year 16383 BCE, or even a 64-bit integer with only 32 value bits and 32 padding bits. That may be one of the reasons difftime() returns a double
To check y2038 issue portably at run time you can use mktime
The mktime function returns the specified calendar time encoded as a value of type time_t. If the calendar time cannot be represented, the function returns the value (time_t)(-1).
http://port70.net/~nsz/c/c11/n1570.html#7.27.2.3p3
struct tm time_str;
time_str.tm_year = 2039 - 1900;
time_str.tm_mon = 1 - 1;
time_str.tm_mday = 1;
time_str.tm_hour = 0;
time_str.tm_min = 0;
time_str.tm_sec = 1;
time_str.tm_isdst = -1;
if (mktime(&time_str) == (time_t)(-1))
std::cout << "Not y2038 safe\n";
|
70,590,194 | 70,590,515 | small object optimization useless in using std::function | Many topics told us that use small object like lambda expression could avoid heap allocation when using std::function. But my study shows not that way.
This is my experiment code, very simple
#include <iostream>
#include <functional>
using namespace std;
typedef std::function<int(int, int)> FUNC_PROTO;
class Test
{
public:
int Add(int x, int y) { return x + y; }
};
int main()
{
Test test;
FUNC_PROTO functor = [&test](int a, int b) {return test.Add(a, b); };
cout << functor(1, 2) << endl;
}
And I compile it on centos7, with gcc version 4.8.5 20150623
But the valgrind shows this:
==22903== HEAP SUMMARY:
==22903== in use at exit: 0 bytes in 0 blocks
==22903== total heap usage: 1 allocs, 1 frees, 8 bytes allocated
Even if I remove the reference capture, just a plain lambda function. It still cost 1 byte heap allocation.
Is there somthing wrong of me to get small object optimization.
Update:
Thanks for the repies. I think I should add more detail of my experiment.
In order to eliminate the possible cause of refrence capturing. I removed capture, code like this:
FUNC_PROTO functor = [](int a, int b) {return a + b; };
Valgrind shows this:
==16691== total heap usage: 1 allocs, 1 frees, 1 bytes allocated
Still 1 byte heap allocation.
I also tried this to eliminate the possible influence of the lambda itself (which I doubt):
FUNC_PROTO functor = [](int a, int b) {return a + b; };
FUNC_PROTO functor2 = [](int a, int b) {return a * b; };
FUNC_PROTO test = nullptr;
for ( int i = 0; i < 10; ++i)
{
if (i % 2 == 0)
{
test = functor;
}
else
{
test = functor2;
}
}
Valgrind shows:
==17414== total heap usage: 12 allocs, 12 frees, 12 bytes allocated
That could prove that the functors are not fully stack based object.
And this is my build script:
g++ test.cpp -o test -std=c++11 -g -O3 -DNDEBUG
This is my valgrind script:
valgrind --log-file=valgrind.log --tool=memcheck --leak-check=full --show-leak-kinds=all ./test
| Older versions of libstdc++, like the one shipped by gcc 4.8.5, seem to only optimise function pointers to not allocate (as seen here).
Since the std::function implementation does not have the small object optimisation that you want, you will have to use an alternative implementation. Either upgrade your compiler or use boost::function, which is essentially the same as std::function.
|
70,592,633 | 70,592,911 | Why does the sizeof a class give different output in C++? | According to the cppreference,
When applied to a reference type, the result is the size of the
referenced type.
But in the following program, compiler is giving different output.
#include <iostream>
using namespace std;
class A
{
private:
char ch;
const char &ref = ch;
};
int main()
{
cout<<sizeof(A)<<endl;
return 0;
}
Output:
16
Here ch is of a character type and the reference is also of type character. So output would be 2 bytes instead of 16 bytes.
Online compiler: GDB
| Firstly you're asking for the size of the object, not of the reference type itself.
sizeof(A::ref) will equal 1:
class A
{
public:
char ch;
const char &ref = ch;
};
int main()
{
cout<<sizeof(A::ref)<<endl;
return 0;
}
The object size is 16 because:
The actual size taken up by the reference type inside the object is equal to the size of a pointer (8 in this case).
Because the object alignment has increased to 8 due to the reference type, the char now also takes up 8 bytes even though it only really uses 1 byte of that space.
I.e. If you were to change char ch to char ch[8], sizeof(A) would still equal 16:
class A
{
private:
char ch[8];
const char &ref = ch[0];
};
int main()
{
cout<<sizeof(A)<<endl;
return 0;
}
|
70,592,980 | 70,593,488 | C++ error: no matching function for call to ''" | Although this question has been asked previously in the community, those other cases are different from mine, and their solutions cannot be applied in my case.
So I have a very big header "rrc_nbiot.h" file with the following struct:
#include "rrc.h"
namespace asn1 {
namespace rrc {
...
// SystemInformationBlockType2-NB-r13 ::= SEQUENCE
struct sib_type2_nb_r13_s {
struct freq_info_r13_s_ {
bool ul_carrier_freq_r13_present = false;
carrier_freq_nb_r13_s ul_carrier_freq_r13;
uint8_t add_spec_emission_r13 = 1;
};
using multi_band_info_list_r13_l_ = bounded_array<uint8_t, 8>;
struct freq_info_v1530_s_ {
tdd_ul_dl_align_offset_nb_r15_e tdd_ul_dl_align_offset_r15;
};
// member variables
bool ext = false;
bool multi_band_info_list_r13_present = false;
bool late_non_crit_ext_present = false;
rr_cfg_common_sib_nb_r13_s rr_cfg_common_r13;
ue_timers_and_consts_nb_r13_s ue_timers_and_consts_r13;
freq_info_r13_s_ freq_info_r13;
time_align_timer_e time_align_timer_common_r13;
multi_band_info_list_r13_l_ multi_band_info_list_r13;
dyn_octstring late_non_crit_ext;
// ...
// group 0
bool cp_reest_r14_present = false;
// group 1
bool serving_cell_meas_info_r14_present = false;
bool cqi_report_r14_present = false;
// group 2
bool enhanced_phr_r15_present = false;
bool cp_edt_r15_present = false;
bool up_edt_r15_present = false;
copy_ptr<freq_info_v1530_s_> freq_info_v1530;
// sequence methods
SRSASN_CODE pack(bit_ref& bref) const;
SRSASN_CODE unpack(cbit_ref& bref);
void to_json(json_writer& j) const;
};
...
} // namespace rrc
} // namespace asn1
The member function "pack" that is declared in the struct above is defined in another file. "rrc_nbiot.cc"
// SystemInformationBlockType2-NB-r13 ::= SEQUENCE
SRSASN_CODE sib_type2_nb_r13_s::pack(bit_ref& bref) const
{
/* some code */
return SRSASN_SUCCESS;
}
In my main file "main.cpp" I instantiate the struct and try to call the member function as such:
#include "srsran/asn1/rrc_nbiot.h"
struct asn1::rrc::sib_type2_nb_r13_s sib2;
struct asn1::rrc::sib_type2_nb_r13_s *sib2_ref = &sib2;
int main(int argc, char** argv)
{
uint8_t buf[1024];
uint32_t nof_bytes = 2;
asn1::bit_ref bref;
sib2_ref->pack(&bref);
...
When I compile, I get the error:
" error: no matching function for call to
‘asn1::rrc::sib_type2_nb_r13_s::pack(asn1::bit_ref*)’
sib2_ref->pack(&bref); "
Although as you see in the header file "pack" is clearly declared under "sib_type2_nb_r13_s".
Other members of the struct that are not functions I can access and modify without any problems, but calling the member functions like "pack" is giving me this error.
Any help is appreciated
| Thanks to @molbdnilo in the comments, I fixed the error. It was just as he said.
When I call the member function, I should not pass the argument as a pointer in my case, because pack takes a bit_ref&.
The argument should be like this:
sib2_ref->pack(bref);
|
70,593,236 | 70,594,325 | Boost.Spirit X3 -- operator minus does not work as expected | Consider the following code:
TEST_CASE("Requirements Parser Description", "[test]")
{
namespace x3 = ::boost::spirit::x3;
std::string s = "### Description\n\nSome\nmultiline\ntext."
"\n\n### Attributes";
std::string expectedValue = "Some\nmultiline\ntext.";
auto rule = x3::lit("### ") >> x3::lit("Description")
>> (x3::lexeme
[+x3::char_
- (x3::lit("###") >> *x3::space >> x3::lit("Attributes"))]);
std::string value;
bool success = x3::phrase_parse(s.begin(), s.end(), rule, x3::space, value);
REQUIRE(success);
REQUIRE(value == expectedValue);
}
which yields the following output:
test_boost_spirit_x3_parser.cpp:183: FAILED:
REQUIRE( value == expectedValue )
with expansion:
"Some
multiline
text.
### Attributes"
==
"Some
multiline
text."
Any explanation why the minus operator does not work as I expect? Any fixes at hand?
| Probably operator precedence. The unary + operator takes precedence over the binary - operator. This leads to:
From the boost manual: The - operator difference parser matches LHS but not RHS.
LHS is +x3::char_
RHS is (x3::lit("###") >> *x3::space >> x3::lit("Attributes"))
Now LHS +x3::char_ matches as many characters as it gets (greedy match). So LHS evaluates to
Some
multiline
text.
### Attributes
after that there are no characters left, so RHS cannot match anything. As a result, the - operator succeeds (LHS yes, RHS no), which is exactly what you are seeing.
Or, to put it otherwise: Your +x3::char_ eats up all remaining characters, before the - operator gets a chance.
To fix it I guess you need to write
+(x3::char_ - (x3::lit...))
That's at least what I gather from the example here: https://www.boost.org/doc/libs/1_78_0/libs/spirit/doc/html/spirit/qi/reference/operator/difference.html
test_parser("/*A Comment*/", "/*" >> *(char_ - "*/") >> "*/");
Note the brackets around (char_ - "*/")
|
70,593,479 | 70,593,868 | Is it possible to change value of a constant variable via reinterpret_cast? | all. I have read a code snippet from a book where the author tries to set the value of a register via direct memory access (he simulates this process). He used reinterpret_cast<volatile uint8_t*> for this. So, after reading his code, out of curiosity I have tried to apply the same code for a constant variable, and I experienced a very interesting output. I inserted the code below which is very simple:
int main()
{
const std::uint8_t a = 5;
const std::uintptr_t address = reinterpret_cast<std::uintptr_t>(&a);
*reinterpret_cast<volatile uint8_t*>(address) = 10;
std::cout << unsigned(a) << std::endl;
return 0;
}
So, my purpose is to change the value of constant variable via direct memory access. I have written this code in Visual Studio C++ 2019 and compiled and run it. There was no any error or warning, but the output was very interesting. The value of a is printed as 5. So, I switched to the debug mode in order to see at each step what is happening. I will insert the images below:
Step 1
Step 2
Step 3
Step 4
Step 5
I am sorry to include debugging output as images, but I thought that it would be better to include images, so I will not miss any important detail. The thing that is interesting for me, how the program output is 5, while debugger clearly indicates the value of a is 10? (I even printed the addresses of a before and after the reinterpret_cast and they were the same.) Thank you very much.
|
The thing that is interesting for me, how the program output is 5, while debugger clearly indicates the value of a is 10?
This depends entirely on the compiler. It could output 10, 5, crash, ... because it is undefined behavior.
If you want to know why the output of the binary created by a particular compiler has a certain result for undefined behavior, you have to look at the generated output of the compiler. This can be done using e.g. godbolt.org
For your code the output gcc (11.2) generates is:
push rbp
mov rbp, rsp
sub rsp, 16
mov BYTE PTR [rbp-9], 5
lea rax, [rbp-9]
mov QWORD PTR [rbp-8], rax
mov rax, QWORD PTR [rbp-8]
mov BYTE PTR [rax], 10
mov esi, 5
mov edi, OFFSET FLAT:_ZSt4cout
call std::basic_ostream<char, std::char_traits<char> >::operator<<(unsigned int)
mov esi, OFFSET FLAT:_ZSt4endlIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_ES6_
mov rdi, rax
call std::basic_ostream<char, std::char_traits<char> >::operator<<(std::basic_ostream<char, std::char_traits<char> >& (*)(std::basic_ostream<char, std::char_traits<char> >&))
mov eax, 0
leave
ret
Here you can see that the compiler correctly assumes that the value of a will not change. And replaces std::cout << unsigned(a) << std::endl; with std::cout << unsigned(5) << std::endl;:
mov esi, 5
mov edi, OFFSET FLAT:_ZSt4cout
call std::basic_ostream<char, std::char_traits<char> >::operator<<(unsigned int)
If you remove the const from a the output is:
movzx eax, BYTE PTR [rbp-9]
movzx eax, al
mov esi, eax
mov edi, OFFSET FLAT:_ZSt4cout
call std::basic_ostream<char, std::char_traits<char> >::operator<<(unsigned int)
|
70,593,602 | 70,593,891 | Socket recv() made my string into a char "C:\User" 'C' | Why does string a return 'C' instead of "C:\Users\Desktop\Project phoneedge\ForMark\Top"?
When I tested it in an empty C++ project, before I moved some of my code from ThreadFunction to StartButton, it worked. (The UI is supposed to update constantly, but the blocking socket recv() caused it to update only once, so I moved the UI code to the start button.)
This is the server code, after start button is pressed, initiate the socket and there is a thread created to run the listen() accept() and recv(). The close button closes the socket and the thread.
Server Code(MFC project)
void CUIServerDlg::StartButton()
{
WSADATA Winsockdata;
int iTCPClientAdd = sizeof(TCPClientAdd);
WSAStartup(MAKEWORD(2, 2), &Winsockdata);
TCPServerAdd.sin_family = AF_INET;
TCPServerAdd.sin_addr.s_addr = inet_addr("127.0.0.1");
TCPServerAdd.sin_port = htons(8000);
TCPServersocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
bind(TCPServersocket, (SOCKADDR*)&TCPServerAdd, sizeof(TCPServerAdd));
bRunning = true;
hthread = CreateThread(NULL, 0, ThreadFunction, this, 0, &ThreadID);
WaitForSingleObject(hthread, INFINITE);
funRunning = true;
while (funRunning == true) {
vector<string> caseOne;
/*string a;
char RecvBuffer[512];//this is the declaration in member class
int iRecvBuffer = strlen(RecvBuffer) + 1;*/
**a = RecvBuffer;**//a is a String, RecvBuffer is a path name like c:\user..
//Find files,This part of code is left out because it should not effect the question
//put the files found in a vector, then display it on a listbox
for (string fileVec : caseOne) {
CString fileunderPath;
string filevector1 = fileVec;
fileunderPath = filevector1.c_str();//conversion for AddString
list1.AddString(fileunderPath);
}
Sleep(1000);//The code updates every 1 second , when file names are modified is displays immediately.
}
}
I am suppose to change Sleep(1000) into WaitForSingleObject() to replace WM_Timer for the assignment but I don't know how since you need a handle, do I create another thread?
void CUIServerDlg::CloseButton()
{
bRunning = false;
funRunning = false;
WaitForSingleObject(hthread, INFINITE);
CloseHandle(hthread);
closesocket(TCPServersocket);
}
So I have never learned anything about socket and thread prior to this project, the idea of the code below is to use a thread to run a while loop to constantly check for new cilents to send things in, do make sure to correct me if the thought process is wrong.
DWORD WINAPI CUIServerDlg::ThreadFunction(LPVOID lpParam) {
CUIServerDlg* This = (CUIServerDlg*)lpParam;
while (This->bRunning == true) {
int iListen = listen(This->TCPServersocket, 10);
if (iListen == INVALID_SOCKET)
OutputDebugString(_T("FAIL LISTEN\n"));
This->sAccecpSocket = accept(This->TCPServersocket, (SOCKADDR*)&This->iTCPClientAdd, &This->iTCPClientAdd);
recv(This->sAccecpSocket, This->RecvBuffer, This->iRecvBuffer, 0);
}
return 0;
}
Client Code (Empty c++ project)
int main(){
string a = "C:\\Users\\Desktop\\Project phoneedge\\ForMark\\Top";
const char* SenderBuffer = a.c_str();
int iSenderBuffer = strlen(SenderBuffer) + 1;
WSAStartup(MAKEWORD(2, 2), &WinSockData);
TCPClientSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
TCPServerAdd.sin_family = AF_INET;
TCPServerAdd.sin_addr.s_addr = inet_addr("127.0.0.1");
TCPServerAdd.sin_port = htons(8000);
connect(TCPClientSocket,(SOCKADDR*)&TCPServerAdd,sizeof(TCPServerAdd));
send(TCPClientSocket, SenderBuffer, iSenderBuffer, 0);
closesocket(TCPClientSocket);
WSACleanup();
system("PAUSE");
return 0;
}
| TCP is a byte-stream protocol.
That means the data arrives as a stream of bytes rather than as one complete message: a single recv call may return only part of what the client sent.
So when you receive data from the client, the transfer may not be finished yet.
After each recv call you need to check whether the whole message has arrived, and keep appending the partial data to a buffer until all of it has been received.
|
70,593,668 | 70,593,874 | Changing bool on this from inside lambda | I have a bool on a AActor, that I'd like to change from a lambda function, how shall I capture the bool so it is actually changed? I currently use [&], which should pass this by reference as I understand it, however changing the bool from inside the lambda function doesn't change it on the actor.
[&] () { bMyBool = true; };
EDIT 1: more info
bool is defined in the header of the class as protected.
protected:
UPROPERTY(BlueprintReadOnly, VisibleAnywhere)
bool bMagBoots;
I have a function to bind a delegate to an input action, that should call the lambda.
void ASPCharacterActor::BindLambdaToAction(UInputComponent* InputComponent, FName ActionName,
EInputEvent InputEventType, TFunctionRef<void()> ActionHandler)
{
FInputActionHandlerSignature ActionHandlerSignature;
ActionHandlerSignature.BindLambda(ActionHandler);
FInputActionBinding ActionBinding = FInputActionBinding(ActionName, InputEventType);
ActionBinding.ActionDelegate = ActionHandlerSignature;
InputComponent->AddActionBinding(ActionBinding);
}
Then I call the function inside BeginPlay. The lambda gets called when I press the button, however the bool won't change outside the lambda function. If I print it out inside the lambda it did change, so I think it just gets copied instead of referenced.
void ASPCharacterActor::BeginPlay()
{
Super::BeginPlay();
BindLambdaToAction(InputComponent, "MagBoots", IE_Pressed, [&]
{
bMagBoots = true;
});
}
| I have no idea what you have done, but your code will do what we expect by using the following environment around your single code line:
int main()
{
bool bMyBool = false;
auto l= [&] () { bMyBool = true; };
l();
std::cout << bMyBool << std::endl;
}
And as in your edit mentioned, you use it in a class context:
// Same in class context:
class X
{
private:
bool bMyBool = false;
std::function<void(void)> lam;
public:
void CreateAndStoreLambda()
{
lam= [&] () { bMyBool = true; };
// or you can capture this to access all vars of the instance like:
// lam= [this] () { bMyBool = true; };
}
void CallLambda()
{
lam();
std::cout << bMyBool << std::endl;
}
};
int main()
{
X x;
x.CreateAndStoreLambda();
x.CallLambda();
}
see it running
|
70,593,988 | 70,594,687 | mingw vs msvc on implicit conversion of string literals | I have a std::variant of different types including int32_t, int64_t, float, double, std::string and bool.
When I assign a string literal (const char*, which is not present in this variant), I assumed it will be implicitly converted to std::string and it worked as I expected with MinGW (9.0.0 64-bit). But with MSVC (2019 64-bit) it implicitly converted to bool.
If I explicitly converts it to std::string and then assign it to variant it works fine with both compilers.
Here's the code
#include <iostream>
#include <variant>
#if defined(__MINGW64__) || defined(__MINGW32__)
#define CMP_NAME "[ MinGW ]"
#elif defined(_MSC_VER)
#define CMP_NAME "[ MSVC ]"
#else
#define CMP_NAME "[ Others ]"
#endif
using KInt32 = int32_t;
using KInt64 = int64_t;
using KFloat = float;
using KDouble = double;
using KString = std::string;
using KBoolean = bool;
using Variant = std::variant<
KInt32 /*0*/,
KInt64 /*1*/,
KFloat /*2*/,
KDouble /*3*/,
KString /*4*/,
KBoolean /*5*/
>;
int main()
{
//passing a const char* to Variant [target is to initialize as KString]
Variant var = "ABCDE";
std::cout << "Build With Compiler Set " CMP_NAME << std::endl;
std::cout << "index = " << var.index() << std::endl;
try{
KString &str = std::get<KString>(var);
std::cout << "\t[string = " << str << "]" << std::endl;
}
catch(const std::exception &e){
std::cout << "\t[exception = " << e.what() << "]" << std::endl;
}
return 0;
}
Here's the output
With MinGW 9.0.0
Build With Compiler Set [ MinGW ]
index = 4
[string = ABCDE]
With MSVC 2019
Build With Compiler Set [ MSVC ]
index = 5
[exception = bad variant access]
Index 4 denotes to KString (aka std::string) and 5 denotes to KBoolean (aka bool).
So my question is why both compilers are giving different results?
| The behavior of variant changed for this exact case in C++20.
See What is the best way to disable implicit conversion from pointer types to bool when constructing an std::variant? for a longer discussion.
|
70,594,562 | 70,594,596 | Why can't conversion to non-scalar types be performed if a suitable assignment operator exists? | struct Foo {
int val;
Foo() : val(-1) {}
explicit Foo(int val_) : val(val_) {}
Foo& operator=(int val_) { val = val_; return *this; }
operator int () const { return val; }
};
int main()
{
Foo foo = 1; // error
Foo foo2;
foo2 = 2; // works fine
return 0;
}
error: conversion from 'int' to non-scalar type 'Foo' requested
Foo foo = 1;
The code should be self-explanatory. I would like to understand why the direct assignment is illegal when a suitable assignment operator has been defined.
| Unlike foo2 = 2;, Foo foo = 1; is not assignment, but initialization. Only constructors would be considered (to construct foo). The appropriate constructor Foo::Foo(int) is marked as explicit then can't be used in copy initialization like Foo foo = 1;.
If you make Foo::Foo(int) non-explicit then the code would work; direct initialization (which considers explicit constructor) like Foo foo(1); works too.
|
70,594,842 | 70,595,459 | Why does my vector store a copy of my object, and not the original value, in c++? | I have a mid-term assignment in which we conduct 3 sets of unit tests with documentation etc for a program from our course. The program I chose is a physics simulation.
Within this program, there are two classes, Thing and World. I am able to independently create these objects. I tried adding the Thing object to the World object by creating a std::vector<Thing> things, and then creating a function to add the Thing to the vector of things. However, when I do that, it seems as though the World object creates its own copy of the Thing, because when I change the Thing's position, the version of the Thing in the things vector remains the same.
Please provide some guidance on the matter; I feel my problem might be with how I use pointers in this situation.
void testBoundaryBounce()
{
//Create the world
World world{10.0f, 10.0f, 1.0f};
//Thing, 2 units away from boundary
Thing thing{8.0f, 5.0f, 1.0f};
//Add thing to world (adding it the the things vector)
world.addThing(&thing);
//Apply force that should trigger bounce back to x = 7
thing.applyForce(1.0f, 0.0f);
//Updating thing so that movement takes effect
thing.update();
//Running world update to account for collisions, bounces etc
world.update();
std::cout << "thing x : " << thing.getX() << std::endl;
CPPUNIT_ASSERT(thing.getX() == 7);
}
Thing::Thing(float x, float y, float radius)
: x{x}, y{y}, dX{0}, dY{0}, radius{radius}
{
}
World::World(float width, float height, float gravity)
: width{width}, height{height}, gravity{gravity}
{
std::vector<Thing> things;
}
void World::addThing(Thing* thing)
{
float thingX = thing->getX();
float thingY = thing->getY();
float thingRad = thing->getRad();
std::cout << "Radius : " << thingRad << std::endl;
if (thingX + thingRad > width || thingX - thingRad <= 0 || thingY + thingRad >
height|| thingY - thingRad <= 0)
{
std::cout << "Thing is out of bounds or is too large" << std::endl;
}
else {
std::cout << "Thing is good" << std::endl;
things.push_back(*thing);
}
}
void World::update()
{
for (Thing& thing : things)
{
thing.update();
float thingX = thing.getX();
float thingY = thing.getY();
float thingRad = thing.getRad();
float worldGrav = this->gravity;
std::cout << "thing x: " << thingX << std::endl;
std::cout << "thing rad: " << thingRad << std::endl;
//World Boundary Bounces
if (thingX + thingRad >= width)
{
std::cout << "Bounce left" << std::endl;
thing.applyForce(-2.0f, 0.0f);
thing.update();
}
if (thingX + thingRad <= 0)
{
thing.applyForce(2.0f, 0.0f);
thing.update();
std::cout << "Bounce right" << std::endl;
}
if (thingY + thingRad >= height)
{
thing.applyForce(0.0f, -2.0f);
thing.update();
std::cout << "Bounce up" << std::endl;
}
if (thingY - thingRad <= 0)
{
thing.applyForce(0.0f, 2.0f);
thing.update();
std::cout << "Bounce down" << std::endl;
}
//Thing Collision Bounces
for (Thing& otherthing : things)
{
float thing2X = otherthing.getX();
float thing2Y = otherthing.getY();
float thing2Rad = otherthing.getRad();
if (thingX + thingRad == thing2X + thing2Rad && thingY + thingRad ==
thing2Y + thing2Rad)
{
thing.applyForce(-2.0f, -2.0f);
thing.update();
otherthing.applyForce(2.0f, 2.0f);
otherthing.update();
}
}
//Gravitational Pull
thing.applyForce(0.0f, worldGrav);
}
}
| It's right there in the definition of void push_back (const value_type& val);...
Adds a new element at the end of the vector, after its current last element. The content of val is copied (or moved) to the new element.
so when you call things.push_back(*thing);, you are adding a new element to the 'things' vector which is a copy of the value pointed to by the thing pointer.
You want to change your vector to hold pointers to Thing types instead:
std::vector<Thing *> things;
and add the pointers instead of copies:
things.push_back(thing);
Note you will later have to access fields via -> instead of ., or you can create a reference to it such as:
for (Thing* pt : things)
{
Thing& thing = *pt;
thing.update();
//etc...
}
|
70,595,062 | 70,595,260 | Using class type as template argument type when create class definition | I have a base class BaseCmd like:
template<typename T>
class BaseCmd {
public:
private:
T m;
};
and then derived class Cmd1:
class Cmd1 : public BaseCmd<Cmd1::A> {
public:
struct A {
int c, d;
};
};
but I'm getting error:
error: incomplete type ‘Cmd1‘ used in nested name specifier
Is it even possible to define it like that? Thanks.
| You cannot have a member of type Cmd1::A before Cmd1 is complete. The simple fix is to define A outside of Cmd1. However, if for whatever reason you want to define A inside Cmd1, you can add a layer of indirection like this:
template<typename T>
class BaseCmd {
public:
private:
T m;
};
class Cmd1 {
public:
struct A {
int c, d;
};
struct impl : public BaseCmd<A> {};
};
Cmd1::impl can inherit from BaseCmd<A> because A's definition is complete by the time BaseCmd<A> is used as a base class.
|
70,595,133 | 70,598,756 | Is there a way to call a function without adding to the call stack? | I've some goto-laden C++ code that looks like
#include <stdlib.h>
void test0()
{
int i = 0;
loop:
i++;
if (i > 10) goto done;
goto loop;
done:
exit(EXIT_SUCCESS);
}
I'd like to get rid of the gotos while (mostly) preserving the appearance of the original code; that more-or-less rules out for, while, etc. (unless they're hidden behind macros), as that changes the appearance of the existing code too much. Imagine wanting to minimize the "diffs" between existing code and changed code.
One idea is to use a class with methods:
class test1 final
{
int i = 0;
void goto_loop()
{
i++;
if (i > 10) goto_done();
goto_loop();
}
void goto_done()
{
exit(EXIT_SUCCESS);
}
public:
test1() { goto_loop(); }
};
This works, but every call to goto_loop() adds to the call stack. Is there some way to do an exec-like function call? That is, call a function "inline" somehow...execute additional code without adding to the call stack? Is there a way to make tail-recursion explicit?
Using C++20 (or even C++23) is acceptable, although a C++17 solution would be "nice."
For all those wondering about "why?" The real original code is BASIC ...
| My solution is to write an exec() routine that stops the recursion:
template<typename Func>
void exec(const Func& f)
{
using function_t = Func;
static std::map<const function_t*, size_t> functions;
const auto it = functions.find(&f);
if (it == functions.end())
{
functions[&f] = 1;
while (functions[&f] > 0)
{
f();
functions[&f]--;
}
functions.erase(&f);
}
else
{
functions[&f]++;
}
}
With that utility, I can more-or-less preserve the appearance of the existing code
class test4 final
{
int i = 0;
void goto_loop_()
{
i++;
if (i > 10) goto_done();
}
void goto_loop()
{
goto_loop_();
static const auto f = [&]() { goto_loop(); };
exec(f);
}
void goto_done()
{
exit(EXIT_SUCCESS);
}
public:
test4() { goto_loop(); }
};
(Using a lambda avoids hassles with pointers to members functions.)
|
70,595,235 | 70,595,523 | CMake way of wildcard values on set variables | I have the next snippet on a CMake based project
set(Headers
./include/MyLib/main.hpp
)
set(Sources
src/main.cpp
)
add_library(${This} STATIC ${Headers} ${Sources})
How can I tell CMake to recursively include all the interface files under:
./include/MyLib/{ /* File name here */ }.ixx
and all the source files under the
src/{ /* File name here */ }.cpp
| One solution is to replace set() by:
file(
GLOB_RECURSE
Headers
./include/MyLib/*.ixx
)
and same thing for your source files.
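One caveat worth adding: file(GLOB) is evaluated at configure time, so newly added files are not picked up until CMake re-runs. CMake 3.12 and later offer CONFIGURE_DEPENDS to re-check the glob on each build (a sketch, assuming the same layout as the question):

```cmake
file(
    GLOB_RECURSE
    Headers
    CONFIGURE_DEPENDS   # CMake >= 3.12: re-run the glob when the build is invoked
    ${CMAKE_CURRENT_SOURCE_DIR}/include/MyLib/*.ixx
)
file(
    GLOB_RECURSE
    Sources
    CONFIGURE_DEPENDS
    ${CMAKE_CURRENT_SOURCE_DIR}/src/*.cpp
)
```

Even with CONFIGURE_DEPENDS, the CMake documentation still recommends listing source files explicitly.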
|
70,595,264 | 70,624,756 | Measuring elapsed time, storing the start time as a primitive type |
I need to measure elapsed time in ms
I need to store the start time as a primitive type
I need to retrieve the start time as a primitive type, when making the comparison to determine how much time has elapsed
Any suggestions?
I have C++17 and do not want to use any external libraries (like boost).
std::chrono would be fine if someone could explain to me how to convert the elapsed time to/from a primitive. I'm not very good at C++.
Resolution accuracy is not important; if it is off by tens of ms that's OK. I just need to implement a delay, e.g. 100 ms or 1.5 s.
| You can simply do this by:
double time = 1000 * ((double)clock()) / (double)CLOCKS_PER_SEC;
See the below code for better understanding.
#include <iostream>
#include <time.h>
using namespace std;
int main() {
double start = 1000 * ((double)clock()) / (double)CLOCKS_PER_SEC;
for(int i=0;i<1e9;i++);
double end = 1000 * ((double)clock()) / (double)CLOCKS_PER_SEC;
double time_taken = end - start;
cout << time_taken << "ms\n";
return 0;
}
Include the header file:
#include <time.h> // include this
|
70,596,101 | 70,596,287 | Why is fstream put function filling 4GB of space on my drive? | For whatever reason, my program is filling up 4GB of space on my drive. Why?
I narrowed it down to this for loop using breakpoints:
int blockPos = 1;
char blockAddressPos = 0x00;
for (int d = 0; d < img.width * img.height * img.channels; d++) {
tf.write(blockPos, blockAddressPos, (char)img.data[d]);
//printf("Byte write: %i\n", (unsigned int)img.data[d]);
blockAddressPos++;
break; // Debug purposes
if (blockAddressPos >= 0xFF) {
blockPos++;
blockAddressPos = 0x00;
}
}
The tf.write() function:
void TableFormatter::write(int block, char blockAddr, char data) {
if (_valid) {
if (block == 0) {
if (blockAddr <= 0x0F) {
// Core file metadata is located here, disallow write access or shift address to 0x10
blockAddr = 0x10;
_states.write.TableMetadataWarning = true;
}
}
unsigned int location = (block << 8) | blockAddr;
_table.seekp(location, FileBeginning);
_table.put(data);
} else {
_states.fileSignatureInvalid = true;
}
}
Anyone know why this is happening?
| According to the /J compiler option documentation ("Default char Type Is unsigned"), char is signed by default in Visual C++. So after blockAddressPos exceeds 0x7F, it wraps around and most likely becomes negative, e.g. 0x80 = -128.
When you pass this negative value to tf.write(), the line unsigned int location = (block << 8) | blockAddr; promotes blockAddr to int, which sign-extends. So you do the equivalent of location = (block << 8) | 0xFFFFFF80, which is where your ~4 GB comes from.
You probably want to change blockAddressPos and the blockAddr parameter to be unsigned char, or better, uint8_t.
(By the way, with that fixed, your test blockAddressPos >= 0xFF will write blocks of size 255 bytes, not 256; is that really what you want?)
|
70,596,564 | 70,605,124 | Why does calling methods on a protobuf Message throw a pure virtual method called error? | I'm trying to use the Google Protobuf library and I want to store a bunch of different message types together in a container and get their names as I pull them out of the container. I think I can use the interface type google::protobuf::Message to do this. Here is what I have so far.
#include <iostream>
#include "addressbook.pb.h"
using namespace std;
int main(void) {
vector<shared_ptr<google::protobuf::Message>> vec;
{
tutorial::AddressBook address_book;
vec.push_back(shared_ptr<google::protobuf::Message>(&address_book));
}
cout << "Typename is " << vec.back()->GetTypeName() << endl;
return 0;
}
Calling GetTypeName throws the following error:
pure virtual method called
terminate called without an active exception
Aborted (core dumped)
Note this is me playing around with the tutorial found here:
https://developers.google.com/protocol-buffers/docs/cpptutorial
| address_book is on the stack; it will be destroyed when it goes out of scope, and no smart pointer can prevent that. Worse, the shared_ptr will then try to delete the stack object itself, which is undefined behavior.
Just create your book with std::make_shared; that allocates it on the heap, and its lifetime will be managed by the std::shared_ptr.
{
auto address_book = std::make_shared<tutorial::AddressBook>();
vec.push_back(address_book);
}
|
70,596,946 | 70,596,987 | Why does this function not print the coordinates properly? | As I said in the title, I'm having an issue trying to print coordinate values like this while using a std::thread:
#include <array>
#include <iostream>
#include <thread>
struct Vec2
{
int x;
int y;
};
void dostuff2(Vec2 x)
{
std::cout << x.x << x.y << " ";
}
void dostuff(Vec2 Oven[3])
{
for (int i=0; i<3; ++i)
{
dostuff2(Oven[i]);
}
}
int main()
{
Vec2 Oven[3]{ {63,21},{63,22},{63,23} };
std::thread thread_obj(dostuff,std::ref(Oven));
thread_obj.detach();
}
Any ideas why this code isn't working? It was working before I executed the function on a separate thread.
| The main function could end before the thread finishes, meaning the life-time of Oven ends and any references or pointers to it will become invalid.
If you don't detach the thread (and instead join it) then it should work fine.
Another solution is to use std::array instead, in which case the thread would have its own copy of the array object.
On a side note, there's no need for std::ref here, as the dostuff function expects a pointer, not a reference, and a plain Oven argument will decay to exactly that pointer.
Plain
std::thread thread_obj(dostuff,Oven);
would work exactly the same, and even have the same problem.
|