question_id int64 25 74.7M | answer_id int64 332 74.7M | title stringlengths 20 150 | question stringlengths 23 4.1k | answer stringlengths 20 4.1k |
|---|---|---|---|---|
71,490,658 | 71,490,708 | Detect Nvidia's NVC++ (not NVCC) compiler and compiler version | I am using Nvidia's HPC compiler nvc++.
Is there a way to detect that the program is being compiled with this specific compiler, and to detect the compiler version?
I couldn't find anything in the manual https://docs.nvidia.com/hpc-sdk/index.html.
Another Nvidia-related compiler nvcc has these macros
__NVCC__
Defined when compiling C/C++/CUDA source files.
__CUDACC__
Defined when compiling CUDA source files.
__CUDACC_RDC__
Defined when compiling CUDA source files in relocatable device code mode (see NVCC Options for Separate Compilation).
__CUDACC_EWP__
Defined when compiling CUDA source files in extensible whole program mode (see Options for Specifying Behavior of Compiler/Linker).
__CUDACC_DEBUG__
Defined when compiling CUDA source files in the device-debug mode (see Options for Specifying Behavior of Compiler/Linker).
__CUDACC_RELAXED_CONSTEXPR__
Defined when the --expt-relaxed-constexpr flag is specified on the command line. Refer to CUDA C++ Programming Guide for more details.
__CUDACC_EXTENDED_LAMBDA__
Defined when the --expt-extended-lambda or --extended-lambda flag is specified on the command line. Refer to CUDA C++ Programming Guide for more details.
__CUDACC_VER_MAJOR__
Defined with the major version number of nvcc.
__CUDACC_VER_MINOR__
Defined with the minor version number of nvcc.
__CUDACC_VER_BUILD__
Defined with the build version number of nvcc.
__NVCC_DIAG_PRAGMA_SUPPORT__
Defined when the CUDA frontend compiler supports diagnostic control with the nv_diag_suppress, nv_diag_error, nv_diag_warning, nv_diag_default, nv_diag_once, nv_diagnostic pragmas.
but I couldn't find the equivalent for nvc++.
The strings command didn't show any good candidate for macro names.
$ strings /opt/nvidia/hpc_sdk/Linux_x86_64/22.1/compilers/bin/nvc++ | grep D
| __NVCOMPILER __NVCOMPILER_MAJOR__ __NVCOMPILER_MINOR__
Found them accidentally in a random third-party library https://github.com/fmtlib/fmt/blob/master/include/fmt/core.h
#ifdef __NVCOMPILER
# define FMT_NVCOMPILER_VERSION \
(__NVCOMPILER_MAJOR__ * 100 + __NVCOMPILER_MINOR__)
#else
# define FMT_NVCOMPILER_VERSION 0
#endif
|
71,490,765 | 71,491,587 | Searching a set<> via Custom Criterion | Language Version
I am using a version of the GNU C++ compiler that supports C++17 (though I am not yet familiar with all that was added in C++11/14/17).
The Problem
In the code below, I have found that I can insert into a set<> using a custom sorting criterion, but I cannot test for the presence of an element using set<>::find(). The following compilation error results upon a call to set<>::find():
error: no matching function for call to
‘std::set<std::shared_ptr<node_t>, node_comp_t>::find(char&) const’
return children.find(c) != children.end();
This is confusing to me since element equivalence is defined by:
!(lhs < rhs) && !(rhs < lhs)
As can be seen below, I have the required operator defined.
Undesirable Alternative
As an alternative, I can search through the set<> element-by-element, but that is not preferable since doing so would run in linear time.
My Questions
What am I misunderstanding?
Is there a way I can search in logarithmic time, as one normally could with set<>::find()?
Code
struct node_t;
using node_sp_t = shared_ptr<node_t>;
class node_comp_t
{
public:
inline bool operator()(const node_sp_t &lhs, const node_sp_t &rhs) const;
};
using node_sp_set_t = set<node_sp_t, node_comp_t>;
class node_t
{
public:
explicit node_t(char c_p): c{c_p} {}
inline char get_c() const
{
return c;
}
void add_child(char c)
{
if (!child_exists(c))
children.insert(make_shared<node_t>(c));
}
bool child_exists(char c) const
{
// c is not of the children set element type (node_sp_t).
// So, find() cannot be used with c.
//
// Why does node_comp_t not get used by find()?
//
// How may I search for c in logarithmic time if not by using find()?
//
return children.find(c) != children.end();
// This works, but it runs in linear time!
// for (const node_sp_t &node_sp : children)
// {
// if (node_sp->get_c() == c)
// return true;
// }
// return false;
}
private:
char c;
node_sp_set_t children;
};
inline bool node_comp_t::operator()(const node_sp_t &lhs, const node_sp_t &rhs) const
{
return lhs->get_c() < rhs->get_c();
}
| Your comparator takes two arguments of type const node_sp_t&, so you must have a node_sp_t object to compare against.
In C++14, you can use a transparent comparator, allowing you to compare against a different type (in this case, char)
class node_comp_t
{
public:
inline bool operator()(char lhs, const node_sp_t &rhs) const;
inline bool operator()(const node_sp_t &lhs, char rhs) const;
inline bool operator()(const node_sp_t &lhs, const node_sp_t &rhs) const;
using is_transparent = void;
};
inline bool node_comp_t::operator()(char lhs, const node_sp_t &rhs) const
{
return lhs < rhs->get_c();
}
inline bool node_comp_t::operator()(const node_sp_t &lhs, char rhs) const
{
return lhs->get_c() < rhs;
}
inline bool node_comp_t::operator()(const node_sp_t &lhs, const node_sp_t &rhs) const
{
return lhs->get_c() < rhs->get_c();
}
If you were limited to C++11, this wouldn't be an option with std::set (you could use this or something like it with alternative implementations, like boost::container::set); since your compiler supports C++17, though, the transparent comparator above works out of the box.
An "easy" way to create a shared_ptr for cheap without dynamically allocating is to use the aliasing constructor:
bool child_exists(char c) const
{
node_t compare_against{c};
return children.find(node_sp_t{node_sp_t{nullptr}, &compare_against}) != children.end();
}
Unfortunately, you have to pay for the cost of constructing a node_t in C++11.
You might also want to move from this in add_child if default constructing is expensive:
void add_child(char c)
{
node_t compare_against{c};
if (children.find(node_sp_t{node_sp_t{nullptr}, &compare_against}) == children.end())
children.insert(std::make_shared<node_t>(std::move(compare_against)));
}
Alternatively, you can use something like std::map<char, node_sp_t> instead.
|
71,490,784 | 71,491,818 | Vector iterator does not start at index 0 | I'm new to vectors and iterators. How come the iterator in the second for-loop does NOT start at index 0?
int main() {
PS1Solution instance;
std::vector<int> result;
std::vector<int> testCase = {2, 7, 11, 15};
int target = 9;
result = instance.twoSum(testCase, target);
for (auto it = result.begin(); it != result.end(); it++)
printf("%d\n", result[*it]);
testCase.clear();
result.clear();
testCase = {3, 2, 4};
target = 6;
result = instance.twoSum(testCase, target);
for (auto it = result.begin(); it != result.end(); it++) // for (auto& val : result) also doesn't work
printf("%d\n", result[*it]);
return 0;
}
The range-for loop doesn't work either. Nor does: for (auto it = &*result[0]; ...) If necessary, I could post my implementation of twoSum. Though, it's pretty simple: it uses a simple nested for-loop (indexed, NOT iterator, since I need the indices).
| An iterator is not an index.
An iterator acts like a pointer to a specific element. Dereferencing an iterator gives you the value it refers to, not the index of that value. So using result[*it] is wrong; it should be just *it by itself, e.g.:
for (auto it = result.begin(); it != result.end(); it++)
printf("%d\n", *it);
A range-for loop wraps this logic internally for you. The loop variable is the dereferenced value, e.g.:
for (auto& val : result)
printf("%d\n", val);
|
71,490,817 | 71,495,503 | How do you link libraries in Qt? | I have a project that I've written in VS2017 that has a lot of static libraries and I've got to the point where I want to start refining the gui. To make sure I can use Qt I made a test subdir program using the tips in https://www.toptal.com/qt/vital-guide-qmake, the https://wiki.qt.io/SUBDIRS_-_handling_dependencies example, the https://github.com/rainbyte/example-qmake-subdirs and https://github.com/219-design/qt-qml-project-template-with-ci/tree/6af6488d74e1dc97a8cef545080074055a696e9a projects as well as several of the similar questions here on SO.
This consists of a subdir project called "project" with 2 sub-projects "app" and "library". These are both simple QtApplications created using the New Subproject wizard, with all the files except for main.cpp removed from "app", everything left in except for main.cpp in "library", the point being to make sure mainwindow.h in "library" is referenced from main.cpp in "app". Despite trying every example I can find I'm still getting the "'mainwindow.h' file not found" error.
According to all the examples I could find, you should only need to add a few lines (marked ## below) to the wizard-produced .pro files and add an additional .pri file to the "library" project;
Directory structure;
project/
project.pro
app/
app.pro
main.cpp
library/
library.pro
mainwindow.h
mainwindow.cpp
mainwindow.ui
library.pri
project.pro;
TEMPLATE = subdirs
TARGET = project ##
SUBDIRS += \
app \
library
app.depends = library ##
app.pro;
TEMPLATE = app ##
TARGET = app ##
QT += core gui
greaterThan(QT_MAJOR_VERSION, 4): QT += widgets
CONFIG += c++11
SOURCES += \
main.cpp
include(../library/library.pri) ##
library.pro;
TEMPLATE = lib ##
TARGET = library ##
QT += core gui
greaterThan(QT_MAJOR_VERSION, 4): QT += widgets
CONFIG += c++11
CONFIG += shared ##
CONFIG += lib_bundle ##
SOURCES += \
mainwindow.cpp
HEADERS += \
mainwindow.h
FORMS += \
mainwindow.ui
DISTFILES += \
library.pri ##
and the added library.pri file;
LIBTARGET = library
BASEDIR = $${PWD}
INCLUDEPATH *= $${BASEDIR}/include
LIBS += -L$${DESTDIR} -llibrary
I know this has been asked many times already but I can find no definitive solution, or rather no-one seems to have bothered to post their working solution. I'd really appreciate any help with this, I'd love to be able to use Qt and escape the purgatory of win32 ui creation.
| You have this:
INCLUDEPATH *= $${BASEDIR}/include
But you don't seem to have a directory named include anywhere. So you should probably remove the /include part above.
|
71,490,966 | 71,515,128 | How to call org.jdom.Element APIs using JNI C++ | New to JNI. I am trying to call the Element::getChild and Element::getChildText APIs (from java org.jdom.Element) to get the version number of a system that is stored in "settings.xml". This xml file is archived in a JAR file. Assuming the root element is available, here is what I am doing:
jstring fileNameStr = env->NewStringUTF("settings.xml");
jobject rootElement = env->CallStaticObjectMethod(a_class, xmlRootElement_mid, fileNameStr);
jclass cls_element = env->FindClass("org/jdom/Element");
/*get method ID for getChild() & getChildText() */
jmethodID getChild_mid = env->GetMethodID(cls_element, "getChild", "(Ljava/lang/String;)Lorg/jdom/Element;");
jmethodID getChildText_mid = env->GetMethodID(cls_element, "getChildText", "(Ljava/lang/String;)Ljava/lang/String;");
jstring aboutStr = env->NewStringUTF("about");
jobject about = env->CallStaticObjectMethod(cls_element, getChild_mid, aboutStr); ---> Seg Faults!!!
Basically, I want to do Java equivalent of this:
In Java:
import org.jdom.Element;
...
Element element = SomeMethodToReadXmlFile("settings.xml");
version = element.getChild("about").getChildText("version"); <---- works
How should I do this?
| Using JNI, here is what worked for me with org.jdom.Element. My settings.xml file looks like this:
<?xml version="1.0"?>
<settings>
<about>
<version>1.0</version>
</about>
</settings>
JNI C++:
jstring fileNameStr = env->NewStringUTF("settings.xml");
// assuming xmlRootElement_mid is known
jobject element_obj = env->CallStaticObjectMethod(a_class, xmlRootElement_mid, fileNameStr);
jclass cls_element = env->FindClass("org/jdom/Element");
/*get method ID for getChild() & getChildText() */
jmethodID getChild_mid = env->GetMethodID(cls_element, "getChild", "(Ljava/lang/String;)Lorg/jdom/Element;");
jmethodID getChildText_mid = env->GetMethodID(cls_element, "getChildText", "(Ljava/lang/String;)Ljava/lang/String;");
jstring jstr = env->NewStringUTF("about");
element_obj = env->CallObjectMethod(element_obj, getChild_mid, jstr);
jstr = env->NewStringUTF("version");
jstring version_str = (jstring)env->CallObjectMethod(element_obj, getChildText_mid, jstr);
//Convert to std::string
std::string version_std_str;
if (version_str)
{
const char *c = env->GetStringUTFChars(version_str, 0);
version_std_str = std::string(c);
env->ReleaseStringUTFChars(version_str, c);
}
std::cout << version_std_str << std::endl;
Outputs:
1.0
|
71,491,330 | 71,491,504 | if constexpr std::is_same under VS 2022 | I have converted one of my projects from VS 2019 to VS 2022 and the following conditional compilation template doesn't compile properly anymore:
struct T_USER;
struct T_SERVICE;
template<typename T>
class system_state
{
public:
system_state();
};
template<typename T>
system_state<T>::system_state()
{
if constexpr (std::is_same<T, T_USER>)
{
std::cout << "User templ\n";
}
else if constexpr (std::is_same<T, T_SERVICE>)
{
std::cout << "Service templ\n";
}
else
{
//Bad type
static_assert(false, "Bad template type in T: must be either T_USER or T_SERVICE");
std::cout << "Unknown templ\n";
}
}
The idea was to compile parts of code in system_state depending on a specific template, as such:
int main()
{
system_state<T_USER> user_state;
}
But now the if constexpr std::is_same doesn't seem to detect my T and I'm always getting my static_assert clause:
Bad template type in T: must be either T_USER or T_SERVICE
What has changed? It used to work in VS 2019.
| The code is ill-formed; as the cppreference notes on constexpr if explain:
Note: the discarded statement can't be ill-formed for every possible
specialization:
template <typename T>
void f() {
if constexpr (std::is_arithmetic_v<T>)
// ...
else
static_assert(false, "Must be arithmetic"); // ill-formed: invalid for every T
}
The common workaround for such a catch-all statement is a
type-dependent expression that is always false:
template<class> inline constexpr bool dependent_false_v = false;
template <typename T>
void f() {
if constexpr (std::is_arithmetic_v<T>)
// ...
else
static_assert(dependent_false_v<T>, "Must be arithmetic"); // ok
}
You can use the type-dependent expression above in the else branch too, e.g.
//Bad type
static_assert(dependent_false_v<T>, "Bad template type in T: must be either T_USER or T_SERVICE");
BTW: In if constexpr (std::is_same<T, T_USER>), std::is_same<T, T_USER> is a type, not a bool value; you should change it to std::is_same_v<T, T_USER>, or std::is_same<T, T_USER>::value, or std::is_same<T, T_USER>() (std::is_same<T, T_USER>{}).
|
71,491,613 | 71,493,966 | How to sort non-numeric strings by converting them to integers? Is there a way to convert strings to unique integers while being ordered? | I am trying to convert strings to integers and sort them based on the integer value. These values should be unique to the string; no other string should be able to produce the same value. And if string1 is greater than string2, its integer value should be greater. Ex: since "orange" > "apple", "orange" should have a greater integer value. How can I do this?
I know there are an infinite number of possibilities between just 'a' and 'b' but I am not trying to fit every single possibility into a number. I am just trying to possibly sort, let say 1 million values, not an infinite amount.
I was able to get the values to be unique using the following:
long int order = 0;
for (auto letter : word)
order = order * 26 + letter - 'a' + 1;
return order;
but this obviously does not work since the value for "apple" will be greater than the value for "z".
This is not a homework assignment or a puzzle, this is something I thought of myself. Your help is appreciated, thank you!
| You are almost there ... just a few minor tweaks are needed:
You are multiplying by 26;
however, you have 26 letters (a..z) plus an empty slot, so you should multiply by 27 instead!
Add zero padding
In order to make the starting letter the most significant digit, you should zero-pad/align the strings to a common length... if you are using 32-bit integers then the max string length is:
floor(log27(2^32)) = 6
floor(32/log2(27)) = 6
Here is a small example:
int lexhash(char *s)
{
int i,h;
for (h=0,i=0;i<6;i++) // process string
{
if (s[i]==0) break;
h*=27;
h+=s[i]-'a'+1;
}
for (;i<6;i++) h*=27; // zeropad missing letters
return h;
}
returning these:
14348907 a
28697814 b
43046721 c
373071582 z
15470838 abc
358171551 xyz
23175774 apple
224829626 orange
ordered by hash:
14348907 a
15470838 abc
23175774 apple
28697814 b
43046721 c
224829626 orange
358171551 xyz
373071582 z
This will handle all lowercase a..z strings up to 6 characters in length, which is:
26^6 + 26^5 + 26^4 + 26^3 + 26^2 + 26^1 = 321272406 possibilities
For more, just use a bigger bit width for the hash. Do not forget to use an unsigned type if you use its highest bit too (not the case for 32 bits here).
|
71,491,721 | 71,495,414 | void pointer subtraction can't compile in C++, but can compile in C, what's the reason for the difference? | int arr[10];
void* p1 = arr;
void* p2 = arr + 10;
size_t sz = p2 - p1;
The same code, on the C++ side, it doesn't compile. But on the C side, it compiles. And the result sz is 40.
I know why it doesn't compile on the C++ side: void doesn't have a size, so the subtraction can't be done. But what about the C side?
I guess C treats void* as int*, right?
| GCC defines an extension to the C language in which addition and subtraction with void * acts like arithmetic on char *.
This makes the compiler non-conforming to the C standard in its default mode because the standard requires the compiler to issue a diagnostic for addition and subtraction on a pointer to an incomplete type. If -pedantic is used, the compiler will issue a diagnostic but still compiles the program (in the absence of additional switches that prevent that, such as -Werror), and then the compiler conforms to the C standard in this regard.
When compiling for C++, which has stricter typing rules, GCC does not provide this extension.
|
71,491,967 | 71,492,042 | default copy move constructor efficiency different | If the default copy constructor provided by the compiler only makes a shallow copy (copying the pointer of a member on the heap to the target object's corresponding member field), what is the difference between the default copy constructor and the default move constructor?
I think the default move constructor should not be more efficient than the default copy constructor, as no deep copy happens. Am I right?
|
what is the difference between default copy constructor and default move constructor?
A default copy constructor does a memberwise copy of the data members, while a default move constructor does a memberwise move of the data members. That is, the default move constructor steals resources instead of copying them from the passed argument. Note that this only helps for members that have their own move constructors; for raw pointers and other trivially copyable members, a move is indeed identical to a copy.
|
71,492,830 | 71,494,895 | QProcess Backup Database on QT C++ | I want to back up my database with QProcess in a Qt program. The code is as follows, but the backup file is 0 KB, and when I look at the error I see: QProcess: Destroyed while process ("mysqldump.exe") is still running.
QProcess dump(this);
QStringList args;
QString path="C:/Users/mahmut/Desktop/dbbackupfile/deneme.sql";
args<<"-uroot"<<"-proot"<<"kopuz"<<">";
dump.setStandardOutputFile(path);
dump.start("mysqldump.exe",args);
if(!dump.waitForStarted(1000))
{
qDebug()<<dump.errorString();
}
Can you help me? I do not understand this error and the 0 KB backup file.
| Your program terminates before the process has finished; you need to either use static bool QProcess::startDetached(program, arguments, workingDirectory) or add dump.waitForFinished(); at the end.
Also, you don't need to add ">" to the arguments. You already redirected the output with dump.setStandardOutputFile(path). ">" does not work here because redirection requires a shell to interpret the command; QProcess runs a single process directly, not a shell expression.
|
71,493,678 | 71,493,736 | Why doesn't the optimizer optimize this code? | Compiling and running this code with the maximum optimization settings seems to give the same result.
#include <stdio.h>
class A
{
public:
A() { }
const int* begin() const { return a; };
const int* end() const { printf("end\n"); return a + 3; };
bool isTrue() const { return true; }
int a[4];
};
const A a{};
class B
{
public:
const A& operator[](size_t) const { printf("B called\n"); return a; }
};
int main()
{
const B b{};
if (!b[0].isTrue()) return -1;
for (const auto& x : b[0]) printf("%d\n", x);
}
I call b[0] twice on a constant object of type B, whose operator[] returns a reference to a constant object of type A.
Why does "B called" get printed twice? Why can't the compiler evaluate b[0] once, save the result, and reuse it? (Since the functions are const functions, they will return the same result...)
|
Why does "B called" get printed twice?
For starters, because you reference b[0] twice in main. And because the printf statement inside the operator function dictates that there's a side effect for accessing b[0]. So the compiler can't assume that your printf is just for debugging - it has to invoke it once for each call.
Using godbolt, if we remove the printf statements and evaluate, we can see the code is heavily optimized to print 0 three times without any calls whatsoever.
Optimized: https://godbolt.org/z/E8czPdr5W
|
71,493,740 | 71,494,150 | Can reference be compared with pointer? | I'm studying copy assignment in C++. If you look at line 5 in the code below, there is "this == &rhs". Is this expression legal? this is a pointer to an object and rhs is a reference to an object, so they are different things.
Or can a reference be compared with a pointer?
Thank you.
class Mystring{
//class
};
Mystring& Mystring::operator=(const Mystring &rhs){
if (this==&rhs) //<===========this line
return *this;
delete [] str;
str = new char[std::strlen(rhs.str)+1];
std::strcpy(str, rhs.str);
return *this;
}
|
there is "this == &rhs". Is this expression legal?
Yes.
this is a pointer to an object and rhs is a reference to an object, so they are different things.
Yes.
Or Can reference be compared with pointer?
Potentially yes (if the reference is to a class type with an operator overload for comparing with a pointer), but that's not what the example program does, because it doesn't compare a pointer with rhs itself. The example compares this with &rhs, and &rhs is also a pointer.
|
71,494,090 | 71,502,909 | Define and use MOCK_METHOD with gtest and gmock | I am new to googletest/googlemock and I have the following questions:
First question: I have this DatabaseClient class which has a method query_items that I want to mock. I'm unable to find the syntax to do it. Here is my attempt:
class DatabaseClient {
public:
// constructors
DatabaseClient();
DatabaseClient(std::string base_url, std::string master_key);
// destructor
~DatabaseClient();
// properties
some properties
// methods
// last parameter is optional with default value NULL
CURLcode query_items(std::string databaseId, std::string containerId, std::string query, size_t (*p_callback_function)(void*, size_t, size_t, std::string*) = NULL);
};
class MockDatabaseClient {
public:
MOCK_METHOD(CURLcode, query_items, (std::string databaseId, std::string containerId, std::string query, size_t (*p_callback_function)(void*, size_t, size_t, std::string*));
};
The MOCK_METHOD has compile errors, can you help in how to write it correctly?
SECOND Question: The Database client is used in a test service class which I want to test: Here is the sample of the method which I want to test:
#include "db/DatabaseClient.hpp"
#include "db/LibcurlCallbacks.hpp"
TestService::create() {
DatabaseClient* databaseClient = new DatabaseClient();
std::string body = "some query"
try {
auto respone = databaseClient->query_items(this->databaseName, this->containerName, body, \
LibcurlCallbacks::CallbackFunctionOnCreate);
return SomeObject;
}
How can I mock the databaseClient->query_items in my test because I do not want to hit the database when running tests.
Here is the basic test I started:
#include <gtest/gtest.h>
#include "gmock/gmock.h"
#include "../src/service/TestService.hpp"
#include "../src/db/DatabaseClient.hpp"
#include "../src/db/LibcurlCallbacks.hpp"
using ::testing::AtLeast;
TEST(HelloTest, BasicAssertions) {
// Expect two strings not to be equal.
MockCosmosClient cosmosClient;
//EXPECT_CALL(cosmosClient, query_items("test", "test", "test", LibcurlCallbacks::CallbackFunctionOnDestinationCreate))
TestService ds;
auto response = ds.create();
EXPECT_STRNE(response, "some string");
}
Any help is appreciated!
| Somehow you need to tell your TestService class to use the mock object instead of the real object.
Currently you instantiate the DatabaseClient in create():
TestService::create() {
DatabaseClient* databaseClient = new DatabaseClient();
//...
}
You should tell it to use MockDatabaseClient instead. This can be done in several ways:
Use dependency injection:
Create an abstract class interface called DatabaseClientInterface that has virtual functions, then have your MockDatabaseClient and DatabaseClient inherit and implement that class.
Then rather than instantiating databaseClient in TestService::create, have a member variable in TestService of type DatabaseClientInterface* which can be initialized in the constructor of TestService by an object of either MockDatabaseClient for testing or DatabaseClient for production.
In your test you should then call:
TEST(HelloTest, BasicAssertions) {
MockDatabaseClient mock_client;
TestService ds(&mock_client);
//...
Templatize your class:
Make TestService a template class which creates the databaseClient object based on the template parameter. Something like this:
template <typename T>
class TestService
{
//...
create() {
T* databaseClient = new T();
//...
}
};
Now for testing instantiate TestService with MockDatabaseClient.
See GMock documentations or here for an example of the first method and here for an example of the second method.
Compile issues
As for the compile issues with MOCK_METHOD, note that macros are limited and the preprocessor gets confused by seeing , in a macro parameter. I think you are hitting this problem.
|
71,494,610 | 71,495,139 | Statically link all dependencies so that the end user will never be asked to install vc_redist.exe | I'm building a Windows executable with VS 2019. When I run it on my machine, it works, but I'm not 100% sure it will work for end users who don't have vc_redist.x64.exe version 2019. (Especially users on Win7 - it's a niche where users still run this version).
How to statically link everything so that the end user will never be asked to download and install Visual C++ Redistributable "vc_redist"?
I'm using msbuild.exe, and no IDE. Which setting to add in the .vcxproj file or in the .cpp file to enable full static linking, to prevent the need for vcredist?
My .cpp code asks for these libraries:
#pragma comment(lib, "comctl32.lib")
#pragma comment(lib, "winmm.lib")
Sample .vcxproj:
<Project DefaultTargets="Build" ToolsVersion="16.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup>
<ProjectConfiguration Include="Release|x64">
<Configuration>Release</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
</ItemGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.default.props" />
<PropertyGroup>
<ConfigurationType>Application</ConfigurationType>
<PlatformToolset>v142</PlatformToolset>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
<ItemGroup>
<ClCompile Include="main.cpp" />
</ItemGroup>
<ItemGroup>
<ClInclude Include="main.h" />
</ItemGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.Targets" />
</Project>
Note: it is linked with the topic How to deploy a Win32 API application as an executable, but here it's about doing it specifically directly from the .vcxproj file and without the IDE.
| Add a specific ClCompile property for the compilation configuration:
<Project DefaultTargets="Build" ToolsVersion="16.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
...
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<ClCompile>
<RuntimeLibrary>MultiThreaded</RuntimeLibrary>
</ClCompile>
</ItemDefinitionGroup>
</Project>
Possible values are here: runtimeLibraryOption Enum
(remove the "rt" prefix)
rtMultiThreaded 0 : Multi-threaded (/MT)
rtMultiThreadedDebug 1 : Multi-threaded Debug (/MTd)
rtMultiThreadedDebugDLL 3 : Multi-threaded Debug DLL (/MDd)
rtMultiThreadedDLL 2 : Multi-threaded DLL (/MD)
More info about MT / MD can be found here: /MD, /MT, /LD (Use Run-Time Library)
|
71,494,809 | 71,495,018 | Why is header including sufficient for definitions? | As far as I understand, header files declare things. Including a header file like #include <iostream> pulls its declarations into the translation unit. This tells the compiler, for example: "there is something called cout".
QUESTION: How does the compiler get to the definition of cout (or of all the other functions)? In my understanding, the compiler only gets to know names and types, but no definitions.
Thanks in advance.
| Actually: it doesn't. It only needs to know what the objects look like and what interfaces they offer (so for std::cout, that's some std::ostream object, or an instance of a subclass of it), and that such objects do exist somewhere. That's it. What the compiler then does is add placeholders for that object, just as it does for function calls.
After compilation there's a second stage: the linker. As its name suggests, it links all those compilation units together. If it sees such a placeholder, it replaces it with the address of the object or function, which must exist, of course (for std::cout, there's an extern declaration in the header, and some other source file must define it, possibly pre-compiled into a library); otherwise a linker error is thrown.
|
71,494,909 | 71,495,306 | Is it possible to check if two classes have the same members | struct Test1 : public Base {
enum { type = 1 };
int a;
char ch;
virtual void func1();
};
struct Test2 : public Base {
enum { type = 2 };
int a;
char ch;
virtual void func1();
};
I'm developing a project with C++14. For some compatibility reason, I have to declare two classes as above, whose members are exactly the same.
My question is if there is some metaprogramming method, which allows me to check if the two classes have exactly the same members?
real issue
Yeah, I know it's so weird but I did get such an issue...
I'm developing a web server, each class above is a protocol.
Now the problem is that some Android developer has wrongly used the protocol Test1, so we can't touch it as users may not update their App. What I could do is just add another protocol. So I add the class Test2.
Since the two protocols do the exactly the samething, I want to make sure that they are always the same. Meaning that if someone adds a member into Test1 someday but he forgets to add the member into Test2, I want to get a compile-time error.
BTW, I only care about the data member, not member function.
| I don't like the premise of this question because it essentially asks for a way to keep code duplication. However, in practice shit happens, and if someone wants two classes with the same content, the better idea would be not to declare two classes and then check them for compatibility, but to declare them just once. This can be done by using a base class or by using the preprocessor. The latter approach also prevents any new members from sneaking into the derived classes, while adding new members into both classes will require just a single modification:
#define MAKE_CLASS(mp_name) \
struct mp_name : public Base { \
enum { type = 1 }; \
int a; \
char ch; \
void func1(); \
};
MAKE_CLASS(test1)
MAKE_CLASS(test2)
|
71,495,222 | 71,501,051 | Boost Graph Library: Adding vertices with same identification | How can I represent a file path using BGL?
Consider path like: root/a/a/a/a/a
Corresponding graph would be 'root'->'a'->'a'->...
Is it possible to add multiple vertices sharing the same name?
I could not find a clear answer.
| Sure. As long as the name is not the identifier (identity implies unique).
The whole idea of filesystem paths is that the paths are unique. So, what you would probably want is to have the unique name be the path to the node, and when displaying, choose what part of the path you want to display.
For an elegant demonstration using internal vertex names¹:
using G = boost::adjacency_list<boost::vecS, boost::vecS, boost::directedS, fs::path>;
using V = G::vertex_descriptor;
Now you can add any path to the graph:
void add_path(fs::path p, G& g) {
if (p.empty()) return;
if (!p.has_root_path()) p = fs::absolute(p);
std::optional<V> prev;
fs::path curr;
for (auto const& el : p) {
curr /= el;
auto v = add_vertex(curr, g);
if (prev)
add_edge(*prev, v, g);
prev = v;
}
}
We'll have to tell BGL to use std::identity to get the internal name from fs::path:
template <> struct boost::graph::internal_vertex_name<fs::path> {
struct type : std::identity {
using result_type = fs::path;
};
};
Now, demonstrating:
G g;
add_path("/root/a/a/a/a/a", g);
add_path("test.cpp", g);
To print using the vertex ids:
print_graph(g);
To print using the unique node paths:
auto paths = get(boost::vertex_bundle, g);
print_graph(g, paths);
To print using only the local names:
auto filename = std::mem_fn(&fs::path::filename);
auto local = make_transform_value_property_map(filename, paths);
print_graph(g, local);
Live Demo
Live On Compiler Explorer
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/graph_utility.hpp>
#include <boost/property_map/transform_value_property_map.hpp>
#include <filesystem>
using std::filesystem::path;
template <>
struct boost::graph::internal_vertex_name<path> {
struct type : std::identity {
using result_type = path;
};
};
using G =
boost::adjacency_list<boost::vecS, boost::vecS, boost::directedS, path>;
using V = G::vertex_descriptor;
void add_path(path p, G& g) {
if (p.empty()) return;
if (!p.has_root_path()) p = absolute(p);
std::optional<V> prev;
path curr;
for (auto const& el : p) {
curr /= el;
auto v = add_vertex(curr, g);
if (prev) add_edge(*prev, v, g);
prev = v;
}
}
int main() {
G g;
add_path("/root/a/a/a/a/a", g);
add_path("test.cpp", g);
// To print using the vertex ids:
print_graph(g, std::cout << " ---- vertex index\n");
// To print using the unique node paths:
auto paths = get(boost::vertex_bundle, g);
print_graph(g, paths, std::cout << " --- node path\n");
// To print using only the local names:
auto filename = std::mem_fn(&path::filename);
auto local = make_transform_value_property_map(filename, paths);
print_graph(g, local, std::cout << " --- local name\n");
}
Prints (on my machine, where test.cpp exists in /home/sehe/Projects/stackoverflow):
---- vertex index
0 --> 1 7
1 --> 2
2 --> 3
3 --> 4
4 --> 5
5 --> 6
6 -->
7 --> 8
8 --> 9
9 --> 10
10 --> 11
11 -->
--- node path
"/" --> "/root" "/home"
"/root" --> "/root/a"
"/root/a" --> "/root/a/a"
"/root/a/a" --> "/root/a/a/a"
"/root/a/a/a" --> "/root/a/a/a/a"
"/root/a/a/a/a" --> "/root/a/a/a/a/a"
"/root/a/a/a/a/a" -->
"/home" --> "/home/sehe"
"/home/sehe" --> "/home/sehe/Projects"
"/home/sehe/Projects" --> "/home/sehe/Projects/stackoverflow"
"/home/sehe/Projects/stackoverflow" --> "/home/sehe/Projects/stackoverflow/test.cpp"
"/home/sehe/Projects/stackoverflow/test.cpp" -->
--- local name
"" --> "root" "home"
"root" --> "a"
"a" --> "a"
"a" --> "a"
"a" --> "a"
"a" --> "a"
"a" -->
"home" --> "sehe"
"sehe" --> "Projects"
"Projects" --> "stackoverflow"
"stackoverflow" --> "test.cpp"
"test.cpp" -->
BONUS
Graphviz output:
write_graphviz(std::cout, g, boost::label_writer{local});
Gives this graphviz
¹ see e.g. How to configure boost::graph to use my own (stable) index for vertices?
|
71,495,508 | 71,497,810 | Error using llvm-11 in combination with standard library headers from gcc-11 compiling with -std=c++2a | I am trying to use clang together with gcc standard library headers as follows:
/opt/rh/llvm-toolset-11.0/root/usr/bin/clang -MD -MF bazel-out/k8-fastbuild/bin/external/com_google_googletest/_objs/gtest/gtest-typed-test.d '-frandom-seed=bazel-out/k8-fastbuild/bin/external/com_google_googletest/_objs/gtest/gtest-typed-test.o' -iquote external/com_google_googletest -iquote bazel-out/k8-fastbuild/bin/external/com_google_googletest -isystem external/com_google_googletest/googlemock -isystem bazel-out/k8-fastbuild/bin/external/com_google_googletest/googlemock -isystem external/com_google_googletest/googlemock/include -isystem bazel-out/k8-fastbuild/bin/external/com_google_googletest/googlemock/include -isystem external/com_google_googletest/googletest -isystem bazel-out/k8-fastbuild/bin/external/com_google_googletest/googletest -isystem external/com_google_googletest/googletest/include -isystem bazel-out/k8-fastbuild/bin/external/com_google_googletest/googletest/include -isystem /opt/rh/devtoolset-11/root/usr/include/c++/11 -isystem /opt/rh/devtoolset-11/root/usr/include/c++/11/bits -isystem /opt/rh/devtoolset-11/root/include/c++/11/x86_64-redhat-linux/bits -fdiagnostics-color -Wfatal-errors '-std=c++2a' -Wall -Wno-sign-compare '--gcc-toolchain=/opt/rh/devtoolset-11/root' -Wheader-guard -pthread -c external/com_google_googletest/googletest/src/gtest-typed-test.cc -o bazel-out/k8-fastbuild/bin/external/com_google_googletest/_objs/gtest/gtest-typed-test.o
Then I get this error:
In file included from external/com_google_googletest/googletest/include/gtest/gtest.h:62:
In file included from external/com_google_googletest/googletest/include/gtest/internal/gtest-internal.h:40:
In file included from external/com_google_googletest/googletest/include/gtest/internal/gtest-port.h:395:
/opt/rh/devtoolset-11/root/usr/include/c++/11/bits/regex.h:56:9: fatal error: use of undeclared identifier 'regex_constants'
regex_constants::match_flag_type __flags);
What could be the reason for the error? Is there an incompatibility between gcc and clang? Should I instead install clang headers and libc++ and is that made by installing package llvm-dev?
| The gtest-port.h file includes a file with #include <regex.h> (see here for the code). It expects the file to be the POSIX regex.h which is normally installed directly under the prefix /usr/include. As you can see in the error message, the compiler instead tries to include the /usr/include/c++/11/bits/regex.h which is the wrong file.
The header files in .../bits/ are not meant to be included directly by user code. They are internal to the standard library implementation. Thus it is no surprise that directly including them fails (the missing symbol is probably defined in another internal header file).
To solve your problem I suggest you try to leave out the .../bits directories* in your compile command. I do not know who told you to include them, but they are not meant to be added to the compiler search path.
* drop these two flags from the compiler command line:
-isystem /opt/rh/devtoolset-11/root/usr/include/c++/11/bits
-isystem /opt/rh/devtoolset-11/root/include/c++/11/x86_64-redhat-linux/bits
|
71,495,536 | 71,495,823 | How can I minimize both boilerplate and coupling in object construction? | I have a C++20 program where the configuration is passed externally via JSON. According to the “Clean Architecture” I would like to transfer the information into a self-defined structure as soon as possible. The usage of JSON is only to be apparent in the “outer ring” and not spread through my whole program. So I want my own Config struct. But I am not sure how to write the constructor in a way that is safe against missing initializations, avoids redundancy and also separates the external library from my core entities.
One way of separation would be to define the structure without a constructor:
struct Config {
bool flag;
int number;
};
And then in a different file I can write a factory function that depends on the JSON library.
Config make_config(json const &json_config) {
return {.flag = json_config["flag"], .number = json_config["number"]};
}
This is somewhat safe to write, because one can directly see how the struct field names correspond to the JSON fields. Also, I don't have that much redundancy. But I won't notice if fields are left uninitialized.
Another way would be to have an explicit constructor. Clang-tidy would warn me if I forget to initialize a field:
struct Config {
Config(bool const flag, int const number) : flag(flag), number(number) {}
bool flag;
int number;
};
And then the factory would use the constructor:
Config make_config(json const &json_config) {
return Config(json_config["flag"], json_config["number"]);
}
I just have to specify the name of the field five times now. And in the factory function the correspondence is not clearly visible. Surely the IDE will show parameter hints, but it feels brittle.
A really compact way of writing it would be to have a constructor that takes JSON, like this:
struct Config {
Config(json const &json_config)
: flag(json_config["flag"]), number(json_config["number"]) {}
bool flag;
int number;
};
That is really short, it would warn me about uninitialized fields, and the correspondence between fields and JSON is directly visible. But I need to include the JSON header in my Config.h file, which I really dislike. It also means that I need to recompile everything that uses the Config class if I change the way the configuration is loaded.
Surely C++ is a language where a lot of boilerplate code is needed. And in theory I like the second variant the best. It is the most encapsulated, the most separated one. But it is the worst to write and maintain. Given that in the realistic code the number of fields is significantly larger, I would sacrifice compilation time for less redundancy and more maintainability.
Is there some alternative way to organize this, or is the most separated variant also the one with the most boilerplate code?
| I'd go with the constructor approach, however:
// header, possibly config.h
// only pre-declare!
class json;
struct Config
{
Config(json const& json_config); // only declare!
bool flag;
int number;
};
// now have a separate source file config.cpp:
#include "config.h"
#include <json.h>
Config::Config(json const& json_config)
: flag(json_config["flag"]), number(json_config["number"])
{ }
Clean approach, and you avoid indirect inclusions of the json header. Sure, the constructor is duplicated as declaration and definition, but that's the usual C++ way. (One caveat: if json is actually nlohmann::json, the plain class json; forward declaration won't compile, because that name is an alias of a class template; nlohmann ships a dedicated nlohmann/json_fwd.hpp header for exactly this purpose.)
|
71,495,789 | 71,495,907 | C++ - Making an HTML Validator using Stack | I received an assignment from my lecturer to make an HTML validator using stacks. I can't wrap my head around the algorithm: since stacks can only do LIFO, what am I supposed to do to check whether a tag has been closed or not? Is it possible? Any answer would be helpful, since I've been stuck for a few days now. Thanks in advance!
Btw, my lecturer gave me an example to run:
<html>
<head>
<title>
Example
</title>
</head>
<body>
<h1>Hello, world</h1>
</body>
</html>
| In a stack you push new elements on top and remove them by popping the most recent element. So you can push each opening tag onto the stack; when a closing tag appears, peek at the element on top of the stack and check whether it matches. If it matches, pop it; otherwise the document is invalid. If another opening tag appears after a closing tag, simply push it and continue until the input is exhausted. If every opening tag found its matching closing tag and the stack ends up empty, the HTML document is valid.
Pseudo code:
while (has_unresolved_tags) {
if (is_opening_tag) {
stack.push(opening_tag);
} else {
// peek the last element of the stack
if (stack.peek() == closing tag of opening tag) {
stack.pop()
} else {
// not a valid html document, exit
}
}
|
71,495,807 | 71,503,419 | error: nested name specifier for declaration does not refer into a class, class template or class template partial specialization | Why error about "error: nested name specifier 'h::TYPE::' for declaration does not refer into a class, class template or class template partial specialization" show is only mark place?
#include <iostream>
namespace h
{
enum TYPE
{
A, B, C
};
struct test
{
test(TYPE type = B) { std::cout << type << std::endl; }
};
};
int main()
{
h::test t{h::TYPE::A};
h::test r(h::TYPE::A);
h::test {h::TYPE::A};
h::test (h::TYPE::A); <--- only here
}
DEMO
| As I found out from my colleagues, C++ is a pretty funny language, mainly due to its long history.
The compilation error means that the compiler parsed the problematic line of code, h::test (h::TYPE::A);, as "declare a variable of type h::test with the name h::TYPE::A". And then the compiler complains, because a variable cannot be given a qualified name like that.
All this is because someone once decided that it might be convenient to declare variables in this style:
int (value);
value = 42;
std::string (str);
str = "hello";
In the other cases, the compiler parses the code correctly - it understands that each of those is a call to the constructor of the h::test class.
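To force the expression interpretation instead of a declaration, you can for example wrap the whole expression in extra parentheses, use braces, or name the result. A small sketch based on the question's code (the constructed counter is just an illustrative addition for observing the three constructions):

```cpp
#include <iostream>

namespace h {
    enum TYPE { A, B, C };
    int constructed = 0; // illustrative only: counts constructor calls

    struct test {
        test(TYPE type = B) { ++constructed; std::cout << type << std::endl; }
    };
}

int demo() {
    (h::test(h::TYPE::A));         // extra parentheses: unambiguously an expression
    h::test{h::TYPE::A};           // braces cannot start a declaration here
    auto t = h::test(h::TYPE::A);  // naming the result also removes the ambiguity
    (void)t;
    return h::constructed;
}
```

All three lines construct an h::test, so demo() returns 3.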
|
71,495,886 | 71,496,373 | Error when compiling source file which includes stb_image.h | I get this particular error when compiling a C++ source file which includes stb_image.h.
In file included from /home/zeux/Documents/Projects/cube-game/./lib/stb/stb_image.h:723,
from /home/zeux/Documents/Projects/cube-game/src/core/stbi_impl.cpp:2:
/usr/lib/gcc/x86_64-pc-linux-gnu/11.2.0/include/emmintrin.h: In function ‘stbi_uc* stbi__resample_row_hv_2_simd(stbi_uc*, stbi_uc*, stbi_uc*, int, int)’:
/usr/lib/gcc/x86_64-pc-linux-gnu/11.2.0/include/emmintrin.h:1230:10: error: the last argument must be an 8-bit immediate
1230 | return (__m128i)__builtin_ia32_pslldqi128 (__A, __N * 8);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/lib/gcc/x86_64-pc-linux-gnu/11.2.0/include/emmintrin.h:1224:10: error: the last argument must be an 8-bit immediate
1224 | return (__m128i)__builtin_ia32_psrldqi128 (__A, __N * 8);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
make[2]: *** [CMakeFiles/App.dir/build.make:205: CMakeFiles/App.dir/src/core/stbi_impl.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:158: CMakeFiles/App.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
I did not get this error when I directly had
#define STB_IMAGE_IMPLEMENTATION
#include <stb_image.h>
In my main.cpp file.
My setup right now is, I include the stb image header file in a precompiled header, which is configured using CMake, and I include that precompiled header in my main.cpp file. Would this be the issue?
main.cpp file,
#include "./epch.hpp"
/*
Code that uses STB Image
*/
cmake file,
add_executable(App ${app_src})
target_precompile_headers(App PRIVATE src/epch.hpp)
epch.hpp file,
/*Other includes*/
#include <stb_image.h>
/*Other includes*/
stbi_impl.cpp file,
#define STB_IMAGE_IMPLEMENTATION
#include <stb_image.h>
| There seems to be a problem with your compiler configuration for SIMD instruction generation. You should first disable SIMD:
#define STBI_NO_SIMD
#define STB_IMAGE_IMPLEMENTATION
#include <stb_image.h>
If the program then works correctly, you can investigate SSE2 support further and try adding the compiler option -msse2.
|
71,496,512 | 71,497,041 | How to avoid controls flickering in a CDialog (MFC C++) | Hello, I've been looking for a couple of days now for a way to keep the controls themselves from flickering in a CDialog.
I am using CMemDC and erasing the background to draw some basic shapes with GDI+
void CCustomDialog::OnPaint()
{
CPaintDC pDC(this);
CMemDC dc(&pDC);
Gdiplus::Graphics graphics(dc.GetSafeHdc());
CRect clip;
dc.GetClipBox(&clip);
dc.FillSolidRect(clip, GetSysColor(COLOR_WINDOW));
DefWindowProc(WM_PAINT, (WPARAM)dc.GetSafeHdc(), (LPARAM)0);
Gdiplus::Pen pen(Gdiplus::Color(150, 125, 255, 100), 5.0);
graphics.DrawEllipse(&pen, 200, 50+m_interator, 100, 100);
}
This class inherits from CDialog and will then itself be a base class to other dialogs to control the "theme".
However, when I invalidate and then update the window on a mouse-move event,
void CCustomDialog::OnMouseMove(UINT nFlags, CPoint point)
{
m_interator++;
Invalidate();
UpdateWindow();
CDialog::OnMouseMove(nFlags, point);
}
The ellipse doesn't flicker at all, but all the other buttons, labels and edit controls do.
I haven't found anything to prevent this, and I myself do not know enough about MFC to fix it.
Any ideas?
I was thinking maybe I can set the DC of the controls to be the same CMemDC, but I'm not sure how to do that yet; I will post here if I figure it out.
| You can set the style WS_CLIPCHILDREN in the dialog resource, for example:
IDD_STEP_DLG DIALOGEX 0, 0, 344, 215
// here:
STYLE DS_SETFONT | DS_FIXEDSYS | WS_MAXIMIZEBOX | WS_POPUP | WS_CLIPCHILDREN | WS_CAPTION | WS_SYSMENU | WS_THICKFRAME
CAPTION "Dialog"
FONT 8, "MS Shell Dlg", 400, 0, 0x1
BEGIN
LTEXT "Static",IDC_PREP_HISTOGRAM_PLACE,0,0,343,214,SS_NOTIFY | WS_TABSTOP
END
|
71,497,336 | 71,501,389 | Trigraphs not compiling with MS compiler? | I have a C++14 project with the Microsoft compiler in Visual Studio 2019, and I'm trying to understand digraphs and trigraphs, so my code is a bit weird:
#include "Trigraphs.h"
void Trigraphs::assert_graphs()
??<
// How does this ever compile ????/
ouch!
??>
Reading about the /Zc:trigraphs switch
Through C++14, trigraphs are supported as in C. The C++17 standard removes trigraphs from the C++ language.
I understand that trigraphs should be supported until C++14, because they were removed only in C++17. Yet the above code does not compile with C++14 settings until I add the additional command-line switch. I am not a native English speaker; did I get something wrong about the sentence that trigraphs are supported through C++14?
| MSDN also says:
The /Zc:trigraphs option is off by default
and that seems to apply for C++14 already. Although that results in a compilation that is not 100% conformant, most programmers will actually prefer not having to deal with the strange symbols of C++ trigraphs.
|
71,497,344 | 71,497,770 | no matching function for call to <unresolved overloaded function type> | I couldn't relate this to similar questions. Here is my MRE; basically I'd like to overload fun with a version accepting a template reference. It all works until std::thread enters the game. It seems I'm missing something about its constructor.
Error shown on g++-10 is
error: no matching function for call to ‘std::thread::thread(<unresolved overloaded function type>, std::string, std::shared_ptr<MySem>)’
43 | std::make_shared<MySem>());
#include <string>
#include <memory>
#include <thread>
class MyData
{};
class MySem
{};
template <typename T, typename Sem>
void fun(T & t, const std::string & TAG, std::shared_ptr<Sem> upSem)
{}
template <typename T, typename Sem>
void fun(const std::string & TAG, std::shared_ptr<Sem> upSem)
{
T t;
fun(t, TAG, upSem); // NO std::ref(t)
}
int main(int argc, char ** argv)
{
MyData d;
fun<MyData, MySem>(
"works",
std::make_shared<MySem>());
fun<MyData, MySem>(
d,
"this too",
std::make_shared<MySem>());
std::thread t1(fun<MyData, MySem>,
std::string("this doesn't"),
std::make_shared<MySem>()); // line 43
std::thread t2(fun<MyData, MySem>,
d,
std::string("this neither"),
std::make_shared<MySem>());
return 0;
}
| The constructor of std::thread cannot resolve which overload of fun you're trying to call: fun<MyData, MySem> still names an overload set (both function templates accept those explicit template arguments), and because std::thread's constructor takes its callable as a deduced template parameter, there is no target function type that would let the compiler pick one overload.
Having only one version of fun such as
template <typename T, typename sem>
void fun(const std::string&, std::shared_ptr<sem>)
{
...
}
Allows you to construct t1 fine (but t2 will obviously fail).
A workaround is to pass a lambda instead, such as:
std::thread t3([&](){ fun<MyData, MySem>(d, "works again", std::make_shared<MySem>()); });
std::thread t4([&](){ fun<MyData, MySem>("this too", std::make_shared<MySem>()); });
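Another workaround, if you prefer passing the function itself, is to disambiguate the overload set with a static_cast to the exact function-pointer type (plus std::ref for the reference parameter, since std::thread decay-copies its arguments). A sketch based on the question's types, with an atomic counter added purely to observe that both calls happened:

```cpp
#include <atomic>
#include <functional>
#include <memory>
#include <string>
#include <thread>

struct MyData {};
struct MySem {};

std::atomic<int> calls{0};  // illustrative only: counts the thread invocations

template <typename T, typename Sem>
void fun(T&, const std::string&, std::shared_ptr<Sem>) { ++calls; }

template <typename T, typename Sem>
void fun(const std::string&, std::shared_ptr<Sem>) { ++calls; }

int run() {
    MyData d;

    // The cast selects the two-parameter overload from the overload set.
    std::thread t1(
        static_cast<void (*)(const std::string&, std::shared_ptr<MySem>)>(
            &fun<MyData, MySem>),
        std::string("two-arg overload"), std::make_shared<MySem>());

    // This cast selects the three-parameter overload; std::ref keeps d a
    // reference, because std::thread would otherwise copy it.
    std::thread t2(
        static_cast<void (*)(MyData&, const std::string&, std::shared_ptr<MySem>)>(
            &fun<MyData, MySem>),
        std::ref(d), std::string("three-arg overload"), std::make_shared<MySem>());

    t1.join();
    t2.join();
    return calls.load();
}
```

Both threads run their respective overload, so run() returns 2. The lambda version is usually less noisy, but the cast makes the resolution explicit.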
|
71,497,660 | 71,504,284 | Can I implement this kind of profiling code with a macro? | Not an expert on preprocessor macro tricks, so if the problem here is just that I'm not familiar with some common macro idiom I'd be happy with just a term to Google. X macros are about as far as I've got before and I'm pretty sure I can't do anything with them.
Right now I do some stuff like this in code:
std::size_t trial = 0;
std::array<std::array<uint64_t, 5>, MAX_TRIALS> results_f;
void f()
{
unsigned int core;
results_f[trial][0] = __rdtscp(&core);
// do stuff
results_f[trial][1] = __rdtscp(&core);
// do some more stuff
results_f[trial][2] = __rdtscp(&core);
// do yet more stuff
results_f[trial][3] = __rdtscp(&core);
// do even more stuff
results_f[trial][4] = __rdtscp(&core);
if(++trial == MAX_TRIALS)
{
process_timestamps_f(results);
trial = 0;
}
}
(Here __rdtscp is an x86 intrinsic that gets a tick number from the CPU.)
I would instead like to be able to write something like so:
STOPWATCH_BOILERPLATE_PRE(f);
void f()
{
STOPWATCH_BEGIN;
// do stuff
STOPWATCH_LAP;
// do some more stuff
STOPWATCH_LAP;
// do yet more stuff
STOPWATCH_LAP;
// do even more stuff
STOPWATCH_END;
};
STOPWATCH_BOILERPLATE_POST(f);
So basically, I need to be able to count the number of times that STOPWATCH_LAP appears inside of f and use it to set the size of an array which is visible inside of f.
Bonus: it would be nice if, inside of process_timestamps_f, I can write something like LINE_ID(f, 3) and get the results of the __LINE__ preprocessor macro at the third instance of STOPWATCH_LAP in f.
(I imagine the actual code probably can't look exactly like what I wrote above. Open to whatever modifications are required to make this work. The only real requirement is that I don't have to constantly count how many of these lap points I've put into a function and update the corresponding code to match.)
| I am assuming you do not want to have any dynamic memory management involved? Because otherwise you could simply use a std::vector and do a push_back() for each result...
Otherwise, I do not think this can be achieved easily by just using standard language elements. But MSVC, clang and gcc support __COUNTER__, which is a special macro that is incremented in each use and that can be exploited here. Storing the initial value before the function, then using it in every "LAP", you can compute the number of laps within the function. Moreover, you can declare the result array without needing to specify the first dimension before the function if you use a C-array via extern, and then define it afterwards with the now known number of laps.
You can also simply store the __LINE__ result at the same time when you store the __rdtscp() result.
See the following example. It is all quite fragile and assumes that the macros are used in that order, but depending on the actual code, it might be sufficient (https://godbolt.org/z/crrGY7n4P):
#include <array>
#include <cstdint>
#include <iostream>
#ifndef _MSC_VER
// https://code-examples.net/en/q/e19526
uint64_t __rdtscp( uint32_t * aux )
{
uint64_t rax,rdx;
asm volatile ( "rdtscp\n" : "=a" (rax), "=d" (rdx), "=c" (*aux) : : );
return (rdx << 32) + rax;
}
#else
#include <intrin.h>
#endif
constexpr std::size_t MAX_TRIALS = 3;
struct ResultElem
{
uint64_t timing;
unsigned line;
};
void process_timestamps(ResultElem results[][MAX_TRIALS], std::size_t numResult, char const * const func)
{
std::cout << func << ": Num = " << numResult << std::endl;
for (std::size_t trial = 0; trial < MAX_TRIALS; ++trial) {
std::cout << "\tTrial=" << trial << std::endl;
for (std::size_t i = 0; i < numResult; ++i) {
std::cout << "\t\tLine=" << results[i][trial].line << ", time=" << results[i][trial].timing << std::endl;
}
}
}
#define STOPWATCH_BOILERPLATE_PRE(f) \
extern ResultElem results_ ## f[][MAX_TRIALS]; \
constexpr std::size_t counterStart_ ## f = __COUNTER__; \
std::size_t trial_ ## f = 0;
#define STOPWATCH_BEGIN(f) uint32_t core; STOPWATCH_LAP(f)
#define STOPWATCH_LAP(f) results_ ## f[__COUNTER__ - counterStart_ ## f - 1][trial_ ## f] = {__rdtscp(&core), __LINE__}
#define STOPWATCH_END(f) \
STOPWATCH_LAP(f); \
if(++trial_ ## f == MAX_TRIALS) { \
process_timestamps(results_ ## f, __COUNTER__ - counterStart_ ## f - 1, #f); \
trial_ ## f = 0; \
}
// Needs to be used directly after STOPWATCH_END() because we subtract 2 from __COUNTER__.
#define STOPWATCH_BOILERPLATE_POST(f) \
constexpr std::size_t numResult_ ## f = __COUNTER__ - counterStart_ ## f - 2; \
ResultElem results_ ## f[numResult_ ## f][MAX_TRIALS];
STOPWATCH_BOILERPLATE_PRE(f)
void f()
{
STOPWATCH_BEGIN(f);
// do stuff
STOPWATCH_LAP(f);
// do some more stuff
STOPWATCH_LAP(f);
// do even more stuff
STOPWATCH_END(f);
}
STOPWATCH_BOILERPLATE_POST(f)
Alternatives I could think of:
Without dynamic allocations and staying in the standard, you might build something using BOOST_PP_COUNTER. The STOPWATCH_LAP would then probably turn into some form of #include statement.
I could also imagine that it might be possible to build something without macros using a weird loophole in C++14, but that gets terribly complicated.
|
71,498,406 | 71,498,667 | VSCode Makefile no longer creating executable, which fails when make is invoked | So I was practicing with a tutorial series on C++ projects for Linux. In order to create the makefile I did CTRL+SHIFT+P to open the command palette, searched for makefile, selected the correct option, and selected C++ project. In the tutorial, the person changed src in the makefile to a static path, i.e. pwd. That worked. When he changed back to src, and after moving the files into the src folder like he did, doing make clean and then make, I get this:
[termg@term-ms7c02 HelloWorldnonVR]$ make
g++ -std=c++11 -Wall -o obj/list.o -c src/list.cpp
g++ -std=c++11 -Wall -o obj/main.o -c src/main.cpp
g++ -std=c++11 -Wall -o HelloWorld obj/list.o obj/main.o
/usr/bin/ld: obj/main.o: in function `List::List()':
main.cpp:(.text+0x0): multiple definition of `List::List()'; obj/list.o:list.cpp:(.text+0x0): first defined here
/usr/bin/ld: obj/main.o: in function `List::List()':
main.cpp:(.text+0x0): multiple definition of `List::List()'; obj/list.o:list.cpp:(.text+0x0): first defined here
/usr/bin/ld: obj/main.o: in function `List::~List()':
main.cpp:(.text+0x2c): multiple definition of `List::~List()'; obj/list.o:list.cpp:(.text+0x2c): first defined here
/usr/bin/ld: obj/main.o: in function `List::~List()':
main.cpp:(.text+0x2c): multiple definition of `List::~List()'; obj/list.o:list.cpp:(.text+0x2c): first defined here
collect2: error: ld returned 1 exit status
make: *** [Makefile:37: HelloWorld] Error 1
[termg@term-ms7c02 HelloWorldnonVR]$
There are three files. A class and its header, and then the main. list.h is in the include subfolder for src.
I currently cannot find any information about this problem, so any help would be appreciated. The tutorial had no issues with making or running the files. The VSCode extension is C/C++ Makefile Project. My system is Manjaro. I did do make clean and delete the makefile to start fresh, in case I hit the keyboard or something but same result persists. From what I am seeing, the issue is that there is no HelloWorld executable being created. HelloWorld is the appname in the template.
Narrowed the issue down to the header. Without constructors it works but with them it does not.
#include <iostream>
#include <vector>
using namespace std;
class List
{
private:
/* data */
protected:
public:
void print_menu(); //Prototype
void print_list();
void add_item();
void delete_item();
vector<string> list;
string name;
//constructor
List(/* args */);
//destructor
~List();
};
List::List(/* args */)
{
}
List::~List()
{
}
Any ideas on what is causing this?
| The error message tells you exactly what the problem is, if you learn the compiler-ese to interpret it:
main.cpp:...: multiple definition of `List::List()'; obj/list.o:list.cpp:...: first defined here
Here it's saying you have defined the constructor twice: once in main.cpp and once in list.cpp.
And, as is the case 99.99999% of the time, the compiler (well in this case technically the linker) is correct. You have defined the constructor and destructor in the header file, and you've included the header file in both the main.cpp file and in the list.cpp file, so the constructor and destructor are defined in both, just as the error says.
You need to put the constructor and destructor in the list.cpp source file, not in the header file.
Alternatively you can put them inside the class itself, which makes them inline, and then you won't have this issue:
class List
{
private:
/* data */
protected:
public:
List(/* args */) {}
~List() {}
void print_menu(); //Prototype
// ... rest of the class as before
};
Of course this only makes sense if they are small and simple enough to inline like this.
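A third option, if you want the definitions in the header but outside the class body, is to mark them inline; that also satisfies the one-definition rule when the header is included from several .cpp files. A trimmed, self-contained sketch:

```cpp
#include <string>
#include <vector>

class List {
public:
    List();
    ~List();
    std::vector<std::string> list;
    std::string name;
};

// 'inline' lets these definitions live in a header that is included
// by multiple translation units without a multiple-definition error.
inline List::List() {}
inline List::~List() {}

int make_list_ok() {
    List l;
    l.list.push_back("item");
    return static_cast<int>(l.list.size());
}
```

For anything non-trivial, though, moving the definitions into list.cpp as described above keeps compile times and header clutter down.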
|
71,498,522 | 71,498,695 | class's friend function are incompatible? | The IDE throws a warning that the class's friend function is not compatible with the function's declaration outside of the class.
What is the cause of the warning?
namespace CommonUtility
{
Interface::CellType Foo(int);
}
// when placed as friend of class Interface
class Interface
{
public:
static enum class CellType
{
Group,
NoSuchType
};
friend Interface::CellType CommonUtility::Foo(int); // IDE warning not compatible to the declaration
}
// definition
Interface::CellType CommonUtility::Foo(int i)
{
if (i == 1)
return Interface::CellType::Group;
else
return Interface::CellType::NoSuchType;
}
|
For Interface::CellType Foo(int); the type Interface::CellType is unknown at that point and should result in a compiler error.
static enum class CellType would also result in a compiler error, because static is not valid here.
And finally:
The declaration of Interface::CellType CommonUtility::Foo(int); has to exist before friend Interface::CellType CommonUtility::Foo(int); can be used. But Interface::CellType Foo(int); can only be declared as soon as Interface::CellType is known. CellType is a nested type and cannot be forward-declared.
And these conditions conflict with each other.
You would need to move the whole enum class CellType {} outside of class Interface to get that working.
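For illustration, one possible restructuring along those lines (names kept from the question; this is a sketch, not the only valid arrangement):

```cpp
// CellType moved out of the class so it can be named before Interface exists.
enum class CellType { Group, NoSuchType };

namespace CommonUtility {
    CellType Foo(int);  // declared before the friend declaration below
}

class Interface {
    friend CellType CommonUtility::Foo(int);  // now refers to a known declaration
public:
    // ...
};

CellType CommonUtility::Foo(int i) {
    return i == 1 ? CellType::Group : CellType::NoSuchType;
}
```

With the enum at namespace scope, both the free function declaration and the friend declaration can name CellType, and the definition matches them exactly.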
|
71,498,764 | 71,502,739 | How can I prevent this memory leak? | Below is a stripped-down version of the problem I'm hitting with memory management in relation to using the Python interpreter from C++.
The code as it is below will run properly, but its memory footprint will gradually grow over time. I added a line to manually invoke the Python garbage collection; this didn't solve the issue.
What do I need to change with this code to prevent the growing memory leak?
[edit]: As per the suggestion from below, I've cut down the pythonTest function even further. All it does is create an environment, reset it, and close it. The memory leak persists.
I'm using Python 3.10.2 on Windows 10. C++ is being compiled by Visual Studio to the C++14 standard. I have OpenAI-Gym version 0.22.0 installed.
void pythonTest(PyObject* inModule)
{
// Section 1: Get the make function:
PyObject* pMakeFunc = PyObject_GetAttrString(inModule, "make");
PyObject* pMakeArgs = PyTuple_New(1);
PyTuple_SetItem(pMakeArgs, 0, PyUnicode_FromString("LunarLanderContinuous-v2"));
// Section 2: Get the environment and its functions:
PyObject* pEnv = PyObject_CallObject(pMakeFunc, pMakeArgs);
PyObject* pEnvReset = PyObject_GetAttrString(pEnv, "reset");
PyObject* pEnvStep = PyObject_GetAttrString(pEnv, "step");
PyObject* pEnvClose = PyObject_GetAttrString(pEnv, "close");
PyObject* pEnvRender = PyObject_GetAttrString(pEnv, "render");
// Section 3: Reset the environment to get the initial observation:
PyObject* pInitialObsArray = PyObject_CallNoArgs(pEnvReset);
PyObject* pInitialObsListFunc = PyObject_GetAttrString(pInitialObsArray, "tolist");
PyObject* pInitialObsList = PyObject_CallNoArgs(pInitialObsListFunc);
// Clear section 3:
Py_CLEAR(pInitialObsList);
Py_CLEAR(pInitialObsListFunc);
Py_CLEAR(pInitialObsArray);
// Clear section 2: Close the environment, first:
PyObject_CallNoArgs(pEnvClose);
Py_CLEAR(pEnvRender);
Py_CLEAR(pEnvClose);
Py_CLEAR(pEnvStep);
Py_CLEAR(pEnvReset);
Py_CLEAR(pEnv);
// Clear section 1:
Py_CLEAR(pMakeArgs);
Py_CLEAR(pMakeFunc);
}
int main()
{
Py_Initialize();
// Get gym module:
PyObject* pGymName = PyUnicode_FromString("gym");
PyObject* pModule = PyImport_Import(pGymName);
// Get garbage collection module and collect function:
PyObject* pgcName = PyUnicode_FromString("gc");
PyObject* pgcModule = PyImport_Import(pgcName);
PyObject* pgcFunction = PyObject_GetAttrString(pgcModule, "collect");
for (int k = 0; k < 1000000; ++k)
{
pythonTest(pModule);
// Manually invoke the garbage collection:
PyObject* pGCReturn = PyObject_CallNoArgs(pgcFunction);
auto objectsCollected = PyLong_AsLong(pGCReturn);
std::cout << "Iteration " << k << " objects collected: "
<< objectsCollected << std::endl;
Py_CLEAR(pGCReturn);
}
Py_CLEAR(pgcFunction);
Py_CLEAR(pgcModule);
Py_CLEAR(pgcName);
Py_CLEAR(pModule);
Py_CLEAR(pGymName);
Py_Finalize();
return 0;
}
| The problem is neither in Python nor its interface to C++. The problem is in Box2D, which is used by some of the OpenAI Gym environments.
I can repeat the above code while creating a different environment that doesn't use Box2D (such as "CartPole-v1") and let it run endlessly without any memory leak. As soon as I put a Box2D environment back in (such as "BipedalWalker-v3" or "LunarLander-v2"), the memory leak comes back.
I can repeat the above process entirely in Python and get the same results. Even with manually running garbage collection after every environment destruction, the memory allocated by the application grows without limit.
The reset function on any environment is where it does a lot of preparation to run and it is where the memory leak occurs. If Box2D environments are created and destroyed endlessly, there is no memory leak. Created, reset, then destroyed? Memory leak.
Thank you all for the help, but this is a bug in the underlying library. I'll need to go submit it there.
|
71,498,932 | 71,499,018 | Decide which member function to call by ternary operator | If I want to call function foo on an object thisThing (in C# jargon, this is called the "receiver", so I'll call it that here too) and pass the argument myStuff, I'll do it like this:
thisThing.foo(myStuff);
Super simple, nothing surprising going on here.
If I want to change the argument to yourStuff if a bool value b is false, I'll do it like this:
thisThing.foo(b ? myStuff : yourStuff);
Also very simple, basic use of the ternary operator.
If I want to change the receiver to otherThing if b is false, I'll do it like this:
(b ? thisThing : otherThing).foo(myStuff);
A little bit weirder, you probably don't do this super often, but it's nothing crazy either.
But if I want to change the called function to bar if b is false, how do I do that?
I would think something like this:
thisThing.(b ? foo : bar)(myStuff);
But of course, this does not work.
Is there a simple, neat-looking, performant way of doing this, preferably without redundantly specifying anything?
There will probably have to be some compromises made, but the point is to not have to repeat the receiver and arguments. So the following works:
if (b)
{
thisThing.foo(myStuff);
}
else
{
thisThing.bar(myStuff);
}
But you have to repeat the receiver and arguments. Imagine that thisThing and myStuff are placeholders for much more complex expressions. You might want to put those in local variables first, but that has implications for copying, and it does not play nicely if you have many arguments.
You might be able to take function pointers to those member functions and then do something like (b ? pointerToFoo : pointerToBar)(myStuff);, but dealing with function pointers tends to be messy (think function overloading) and it does not seem like something that the compiler would properly optimize away. But I'd be happy to be proven wrong here.
| You can use member function pointers, but you need special syntax to call the function via the pointer:
struct X {
void foo() {}
void bar() {}
};
int main() {
X thisThing;
bool b = false;
(thisThing.*(b ? &X::foo : &X::bar))();
}
However, I would not recommend actually using it like this (unless there are reasons not shown in the question). Note that it won't work like this when foo or bar are overloaded.
Anyhow, in my opinion your other examples are not good use-cases for the conditional operator either. The conditional operator is not just an equivalent replacement for if-else; it has slightly different use-cases. Sometimes one uses the conditional operator to determine the common type of two expressions. Sometimes you cannot use an if-else at all, for example when initializing a reference:
int& x = a ? c : d; // cannot be done with if/else
My advice is: don't use the conditional operator just to save on typing compared to an if-else. The difference between if-else and the conditional operator is almost never only the amount of code you have to write.
|
71,498,992 | 71,499,960 | How can I use sub-projects in Qt? | I'm trying to move onto Qt to rewrite a win32 project that has a lot of static libraries so as a preliminary test project I tried creating a subdir project following the instructions in https://www.toptal.com/qt/vital-guide-qmake. I've seen plenty of other examples, and similar questions on this site, but none of them are a bare bones example that actually works when I build it so I'm presenting this minimal example in the hope that someone can tell me what it is that I'm missing.
Here's the file structure;
project/
project.pro
app/
app.pro
main.cpp
library/
library.pro
mainwindow.h
library.pri
Both app and library projects are simple Qt Widgets Applications, created with the New Subproject wizard. The app project has main.cpp;
#include "mainwindow.h"
#include <QApplication>
int main(int argc, char *argv[]) {
QApplication a(argc, argv);
MainWindow w;
w.show();
return a.exec();
}
The library project has just the header file mainwindow.h defining the MainWindow class;
#ifndef MAINWINDOW_H
#define MAINWINDOW_H
#include <QMainWindow>
class MainWindow : public QMainWindow {
Q_OBJECT
public:
MainWindow(QWidget *parent = nullptr) : QMainWindow(parent) {}
~MainWindow() {}
};
#endif // MAINWINDOW_H
Here are the pro. files;
project.pro;
TEMPLATE = subdirs
TARGET = project
SUBDIRS = app library
app.depends = library
app.pro;
TEMPLATE = app
TARGET = app
QT += core gui
greaterThan(QT_MAJOR_VERSION, 4): QT += widgets
CONFIG += c++11
SOURCES += \
main.cpp
# Default rules for deployment.
qnx: target.path = /tmp/$${TARGET}/bin
else: unix:!android: target.path = /opt/$${TARGET}/bin
!isEmpty(target.path): INSTALLS += target
include(../library/library.pri)
library.pro;
TEMPLATE = lib
TARGET = library
QT += core gui
greaterThan(QT_MAJOR_VERSION, 4): QT += widgets
CONFIG += c++11
HEADERS += \
mainwindow.h
# Default rules for deployment.
qnx: target.path = /tmp/$${TARGET}/bin
else: unix:!android: target.path = /opt/$${TARGET}/bin
!isEmpty(target.path): INSTALLS += target
DISTFILES += \
library.pri
Lastly, the library.pri file;
LIBTARGET = library
BASEDIR = $${PWD}
INCLUDEPATH *= $${BASEDIR}/include
LIBS += -L$${DESTDIR} -llibrary
The point of this is to check that I can write a project in Qt where a function in one project can reference a class in another. When compiled as a single project, with main.cpp and mainwindow.h in the same project, you get the usual application window on screen with no errors so I know both main.cpp and the MainWindow class are good; compiled as-is, with the two sub-projects, I get "'mainwindow.h' file not found" on line 1, main.cpp, and "unknown type name 'MainWindow'" on line 6, main.cpp, so the library project is obviously not getting referenced properly by the app project. I'm new to Qt and have absolutely no experience with qmake; any help would be appreciated, and as I know I'm not the only person struggling with this, if anyone can help get this working I'll put it up on github.
| To use the library you need to set up two things: append to INCLUDEPATH and to LIBS. You can do both in the pri file and then include it in app. If the error says "file *.h not found", it means INCLUDEPATH is incomplete.
Here's how you can do it:
Project structure
project/
├── app
│ ├── app.pro
│ └── main.cpp
├── library
│ ├── library.pri
│ ├── library.pro
│ ├── mainwindow.cpp
│ ├── mainwindow.h
│ └── mainwindow.ui
└── project.pro
project.pro
TEMPLATE = subdirs
SUBDIRS += \
app \
library
app.depends = library
library.pri
INCLUDEPATH += $$PWD
win32 {
CONFIG(debug, debug|release) {
LIBS += -L$$PWD/debug
} else {
LIBS += -L$$PWD/release
}
} else {
LIBS += -L$$PWD
}
LIBS += -llibrary
library.pro
QT += gui widgets
TEMPLATE = lib
CONFIG += staticlib
SOURCES += \
mainwindow.cpp
HEADERS += \
mainwindow.h
FORMS += \
mainwindow.ui
app.pro
QT += gui widgets
SOURCES += main.cpp
include($$PWD/../library/library.pri)
QtCreator uses shadow build by default (it can be disabled on the projects page), so if you build from the QtCreator IDE rather than from a shell (command line), you are likely using a shadow build. In this case all build artifacts, including static libraries, are built outside the source directory in a separate directory named build-${project-name}-${device-type}-${build-config} (build-project-Desktop-Debug in our case), so -L$$PWD won't work. There is OUT_PWD for that, but unfortunately for some reason it points not to the pri directory but to the pro directory (build-project-Desktop-Debug/app in our case), so things get a little hairier. Here's a library.pri that works for both shadow and normal builds:
INCLUDEPATH += $$PWD
win32 {
CONFIG(debug, debug|release) {
LIBS += -L$$PWD/debug -L$$OUT_PWD/../library/debug
} else {
LIBS += -L$$PWD/release -L$$OUT_PWD/../library/release
}
} else {
LIBS += -L$$PWD -L$$OUT_PWD/../library
}
LIBS += -llibrary
Full source: https://github.com/mugiseyebrows/app-and-library.git
|
71,499,092 | 71,499,607 | Generic function to accurately round floating-point to the nearest multiple of X | I am trying to write a generic function which rounds a double input value to the nearest multiple of X.
Due to floating-point precision reasons, the naive approach of just scaling and rounding can fail:
double round(double in, double multiple)
{
return std::round(in / multiple) * multiple;
}
For example, since 0.15 is actually stored as ~0.14999999999999999445, the above function when called to round to a multiple of 0.1 will return 0.1 rather than 0.2, as can be seen in the following godbolt.
So perhaps I should add an epsilon value, but what epsilon would be generic and sufficient?
I see there is a std::numeric_limits<double>::epsilon(), and I see from this SO answer, I need to scale the value to the magnitude of the number I'm working with.
So should I use the multiple input parameter as my scaling factor?
If so, my round function becomes:
double round(double in, double multiple)
{
const double epsilon = std::numeric_limits<double>::epsilon() * multiple;
return std::round((in + epsilon) / multiple) * multiple;
}
Finally to make my function work for negative numbers, I use std::signbit to work out if I should subtract epsilon instead of add:
double round(double in, double multiple)
{
const double epsilon = std::numeric_limits<double>::epsilon() * multiple;
const double incr = std::signbit(in) ? -epsilon : epsilon;
return std::round((in + incr) / multiple) * multiple;
}
I've tested this with a range of inputs and multiples, and for the inputs I've tried it seems to work... but I'm uncertain.
Is my final version of round accurate?
If not, under what circumstances / for what inputs will it fail to produce the result I want it to, and is it possible to produce an accurate and generic function?
| Your round function is working correctly. The value 0.14999999999999999445 is rounded downwards, just as you would expect it to be. The problem you face is that double values cannot represent arbitrary values due to their limited precision. Now consider the following program with your round() function:
double x = 0.15;
double y = 0.14999999999999999445;
std::cout << round(x, 0.1) << " "
          << round(y, 0.1) << std::endl;
It outputs something like 0.1 0.1. Because x and y have exactly the same value internally, you will never get different results. Now if you add a small value before rounding (the epsilon you mentioned) you will get 0.2 0.2. This might be correct for the 0.15 value, but no longer for the other one.
What I am trying to show you here is that no matter how carefully you design your round() function, there will always be inconsistencies due to the precision limit. The rounding cutoff will always fluctuate: sometimes it is a little bit above the real cutoff, sometimes a little bit below. There is nothing you can do about that.
So I am afraid there is no satisfying solution to your problem. You must accept a small fuzziness if you want to work with binary floating point numbers.
Addition
As you asked for an example where your second implementation (the one with the epsilon) fails, take a look here. For 0.35 the rounded result is rounded towards 0.3.
|
71,499,177 | 71,499,386 | Template specialization for constructor based on type | I've been looking for this for quite a while, and I maybe I just don't know what words to use to find it.
I have a template class that accepts a type, and would like the constructor to be different depending on if that type is a pointer or not. Here is some code to explain what I mean.
template <class T> class Example
{
bool choice;
public:
//Only if T is not a pointer type
Example() : choice{false}
{}
//Only if T is a pointer type
Example(bool choice) : choice{choice}
{}
}
I have experimented with std::enable_if and std::is_pointer<T> but with no luck.
| You can either specialize the whole class:
template <class T> struct Example {
bool choice;
Example() : choice{false} {}
};
template <class T> struct Example<T*> {
bool choice;
Example(bool choice) : choice{choice} {}
};
int main() {
Example<int> e;
Example<int*> f(false);
}
Or via std::enable_if:
#include <type_traits>
template <class T> struct Example {
bool choice;
template <typename U = T, std::enable_if_t<!std::is_pointer_v<U>,bool> = true>
Example() : choice{false} {}
template <typename U = T, std::enable_if_t<std::is_pointer_v<U>,bool> = true>
Example(bool choice) : choice{choice} {}
};
int main() {
Example<int> e;
Example<int*> f(false);
}
Only one of the conditions for std::enable_if is true. Either std::is_pointer_v<T> is true then the first constructor is a substitution failure or it is false then the second is discarded.
|
71,499,436 | 71,499,721 | Why does selects-behaviour differ when trying to read and write sockets? | Lets say we have a client file-descriptor accepted with accept()
client_socket = accept(_socket, (sockaddr *)&client_addr, &len)
We now set this file-descriptor in a read and write fd_set:
fd_set readfds;
fd_set writefds;
//zero them
FD_ZERO(&readfds);
FD_ZERO(&writefds);
//set the client_socket
FD_SET(client_socket, &readfds);
FD_SET(client_socket, &writefds);
now we use select the to check if the socket is readable or writable:
select(FD_SETSIZE, &readfds, &writefds, NULL, NULL)
We now check if we can read first and read all bytes from it.
if (FD_ISSET(client_socket, &readfds)) {
read(client_socket, &buf, 4096);
}
//assume that buf is big enough and that read returns less than 4096
In the next loop we reset the fd_sets just like before.
Now select will allow us to write our response to the client:
if (FD_ISSET(client_socket, &writefds)) {
write(client_socket, &buf, len(buf));
}
Till here everything works fine but now the weird behaviour occurs.
Lets assume that our client told us to keep the connection alive, in that
case we would set the fd_set the same way as before like this:
//zero them
FD_ZERO(&readfds);
FD_ZERO(&writefds);
//set the client_socket
FD_SET(client_socket, &readfds);
FD_SET(client_socket, &writefds);
// reading not allowed
When using select now, it will allow us to write again but it won't allow us to read from the client_socket.
BUT if we change the setting of the writefds to zero, it will allow us to read, although we did not change anything in the readfds.
//zero them
FD_ZERO(&readfds);
FD_ZERO(&writefds);
//set the client_socket
FD_SET(client_socket, &readfds);
//FD_SET(client_socket, &writefds); -> don't set the file-descriptors for write
// now reading is allowed
Can someone explain to me whether this is the correct behaviour of select, or whether the fault is mine, maybe in other parts of the code that I didn't show (way too complex).
For me it seems like the behaviour of select is random when setting both sets (writing and reading).
I know that there is a way around this by keeping a kind of state to decide if we want to set the reading-file-descriptors or the writing-file-descriptors, but I was hoping for a cleaner solution.
| The purpose of select() is to not return until there is something for your program to do. That way your program can sleep inside select() until I/O is ready, wake up immediately to do the I/O, and then go back to sleep as quickly as possible afterwards.
So the question is, how does select() know when to return? The answer is, you have to tell it what should cause it to return, by calling FD_SET() in the appropriate ways.
Usually you will want select() to return when data is ready-for-read on any of your sockets (so you can read the newly-arrived data), so you should usually call FD_SET(mySock, &readFD) on all of your sockets.
FD_SET(mySock, &writeFD) is a bit more nuanced. It tells select() to return when the socket has buffer-space available to write output bytes to. However, in many cases you don't want select() to return when a socket has buffer-space available, simply because you don't currently have any data that you want to write to the socket anyway. In that scenario, if you always call FD_SET(mySocket, &writeFD) then select() will keep returning immediately even though you don't have any task you want to perform, and that will cause your program to use up lots of CPU cycles for no good purpose.
So the only time you should call FD_SET(mySocket, &writeFD) is if you know that you want to write some data to that socket ASAP.
In your program, what is likely happening is that FD_SET(mySocket, &writeFD) is causing select() to return immediately (because mySocket currently has buffer-space available to write to), and then your program is (mistakenly) assuming that because select() has returned, the socket is ready-for-read, only to find out that it isn't. In the case where you've commented out the FD_SET(mySocket, &writeFD), OTOH, select() doesn't return until the socket is ready-for-read, and so you get the behavior you expect when you call FD_ISSET(mySocket, &readFD).
|
71,499,469 | 71,500,139 | Strange behavior in std::make_pair call with CRTP class | Problem
I have a simple CRTP-pattern class BaseInterface, and two classes, derived from this class: test_dint and test_dint2.
The difference between test_dint and test_dint2: in test_dint the dtor is explicitly declared as ~test_dint() = default;.
I'm trying to make a std::pair with types <std::intptr_t, test_dint> by calling std::make_pair, and compilation fails with an error:
MSVC - error C2440: '<function-style-cast>': cannot convert from 'initializer list' to '_Mypair'
CLang 11 - error: no matching constructor for initialization of '__pair_type' (aka 'pair<long, test_dint>')
But, if types in pair change to <std::intptr_t, test_dint2> - all compiles without errors.
I can't understand why the explicit dtor declaration changes the behavior of the std::pair template.
Full code
#include <memory>
#include <unordered_map>
template<typename DerivedT>
class enable_down_cast
{
public:
DerivedT const& impl() const
{
// casting "down" the inheritance hierarchy
return *static_cast<DerivedT const*>(this);
}
DerivedT& impl()
{
return *static_cast<DerivedT*>(this);
}
//~enable_down_cast() = default;
protected:
// disable deletion of Derived* through Base*
// enable deletion of Base* through Derived*
~enable_down_cast() = default;
private:
using Base = enable_down_cast;
};
template<typename Impl>
class BaseInterface : public enable_down_cast<Impl>
{
public:
using handle_type = std::intptr_t;
BaseInterface() = default;
// Disable copy
BaseInterface(const BaseInterface&) = delete;
BaseInterface& operator=(const BaseInterface&) = delete;
// Enable move
BaseInterface(BaseInterface&&) = default;
BaseInterface& operator=(BaseInterface&&) = default;
~BaseInterface() = default;
handle_type handle() const
{
return m_handle;
}
protected:
handle_type m_handle{ 0 };
private:
using enable_down_cast<Impl>::impl;
};
class test_dint : public BaseInterface<test_dint> {
public:
test_dint() = delete;
test_dint(const handle_type handle) :
BaseInterface<test_dint>()
{
m_handle = handle;
}
~test_dint() = default;
};
class test_dint2 : public BaseInterface<test_dint2> {
public:
test_dint2() = delete;
test_dint2(const handle_type handle) :
BaseInterface<test_dint2>()
{
m_handle = handle;
}
};
int main()
{
test_dint::handle_type handle = 100500;
std::make_pair(handle, test_dint{ handle }); // <--- failed ctor
std::make_pair(handle, test_dint2{ handle });
return 0;
}
Live demo
https://godbolt.org/z/eee7h47v7
| It's because when you declare the destructor, you prevent the compiler from generating a move constructor, so test_dint is no longer move-constructible (nor copy-constructible, since its base deletes copying).
Explicitly declaring the move constructor makes it work:
test_dint(test_dint&&)=default;
|
71,499,571 | 71,499,766 | Why are deques used as the underlying container for stacks by default, when vectors would do the trick? | As I understand, any container that supports push_back(), pop_back() and back() can be used as the underlying container for stacks, but by default, deques are used. I understand the pros of deques over vectors generally (possibility to add elements at the beginning as well as at the end), but in the case of stacks, I don't see any reason to prefer deques.
|
I don't see any reason to prefer deques.
A reason to prefer deque that applies to the stack use case is that an individual push_back has worst-case constant complexity, compared to vector whose individual push_back is linear in the worst case (it has amortised constant complexity over multiple push_backs). This was particularly significant prior to C++11, when a reallocating vector had to copy the elements, which could be very expensive. Consider the case where the elements themselves are long strings.
Another reason to prefer deques is that they release memory as they shrink. Vectors don't. Hence, if you have a stack that temporarily grows large, then shrinks and remains small for the rest of the execution, then an underlying vector would be wasting a lot of memory.
Historically, when STL was designed and thus when the default was chosen, there used to also be issues with very large vectors because the size of the address space didn't exceed (significantly, or at all) the amount of memory (this was before 64 bit processing was common). The consequence of the limited address space was that memory fragmentation would make it expensive or impossible to allocate large contiguous blocks of memory that a large vector would require. Furthermore, the way that vector grows by deallocating old buffers is a behaviour that causes such fragmentation.
|
71,500,261 | 71,506,382 | Converting ctype float to python float properly | I am receiving a ctype c_float via a C library that I would like to convert to a regular python float.
The problem is that the ctypes c_float uses 32-bit precision while Python floats use 64-bit precision.
Say a user enters the number 1.9 into the C user interface. The number is represented as 1.899999976158142. When I print it on the C side, this representation is taken into account and I get an output of 1.9.
However on the python side, when I convert the c_float to python with its higher precision, then the number will not be treated as a 1.9 when printing, but as 1.899999976158142 instead.
How do I solve this?
I found a way, but it's a bit clunky (and unnecessarily inefficient), so I was hoping for a more elegant solution:
float_c = ctypes.c_float(1.9) # c_float(1.899999976158142)
float_32 = float_c.value # 1.899999976158142
float_str = f"{float_32:g}" # "1.9"
float_py = float(float_str) # 1.9
Since the precision will only become a problem during conversion to string (printing and writing to file), I could just live with it and omit the last step, but I find it inelegant to keep the number in its "problematic" state where I have to worry about the conversion every time I may want to print it.
Instead I think it's cleaner to just convert it once at the handover point between the C lib and python and never worry about it again.
So should I just do the conversion from c_float to python float like this or is there a better way?
| People have pointed out that the number converted to float64 from float32 is exactly the same value stored in C, and that you would need knowledge of the original number to meet your definition of "more precise". However, you can round the resulting number to the same number of decimal places as C (or to whatever you think the user intended), and the resulting float64 will be closer to that value:
>>> import struct
>>> x=struct.unpack('f',struct.pack('f',1.9))[0] # trick to achieve converted number
>>> x
1.899999976158142
>>> y=round(x,7) # or to whatever places you expect the user to enter
>>> y
1.9
>>> format(x,'.20f')
'1.89999997615814208984'
>>> format(y,'.20f')
'1.89999999999999991118'
|
71,500,440 | 71,500,698 | How to deduce a return type in C++ | I want to create some kind of Variant in C++. Actually I want to use templates as less as possible. The idea is to store the value in union both with the type of the variable and return the value according to the stored type.
So the test code looks like following:
#include <iostream>
#include <vector>
#include <cstring>
#include <typeinfo>
#include <typeindex>
using namespace std;
constexpr uint64_t mix(char m, uint64_t s)
{
return ((s << 7) + ~(s >> 3)) + static_cast<uint64_t>(~m);
}
constexpr uint64_t _(const char* str)
{
return (*str) ? mix(*str,_(str + 1)) : 0;
}
class Variant
{
public:
template<typename T>
Variant(T value):
m_info(typeid(value))
{
std::memcpy(&m_val, &value, sizeof(T));
}
auto toValue() ->decltype(???) // what have I use here ???
{
switch(_(m_info.name()))
{
case _("b"):
return m_val.bval;
case _("i"):
return m_val.ival;
case _("d"):
return m_val.dval;
break;
}
return 0;
}
char cval;
unsigned char ucval;
private:
union Types
{
bool bval;
int ival;
double dval;
} m_val;
std::type_index m_info;
};
Usage:
int main()
{
std::vector<Variant> arr = { 1, 2.2, true };
for(auto &v: arr)
{
cout << "value is: " << v.toValue() << endl;
}
return 0;
}
But decltype requires an expression as a parameter and that's where I'm stuck. What expression have I use here?
| As per @UnholySheep's comment, what you're trying to do is have a function whose return type is deduced at runtime, which is simply not possible. The return type has to be known at compile time. So you're going to have to change your API. There are a few different options here.
This seems similar to std::variant, whose API equivalent to your toValue() looks like this:
std::get<double>(variant)
std::get<int>(variant)
std::get<bool>(variant)
This function call std::get will throw std::bad_variant_access if you try to get the value with the wrong type. You could do that here.
Another option is to extract the union { bool, int, double } type out of the Variant class so you can use it as the return type. Then it'd probably be advisable to have another function call so the caller can tell at runtime which type the union actually is. You could return an enum or just return your m_type variable for this.
|
71,501,272 | 71,501,877 | Fence Post Errors When Displaying Arrays | I am currently using C++ on a program called CodeZinger for one of my classes. I was asked to make a program that will output an array with input that the program gives me.
See screenshot below.
The issue is that the program outputs an extra space at the end of my array, which is making the program say that I have not gotten the question right.
#include <iostream>
using namespace std;
int main()
{
int rows = 1;
int cols = 1;
long int arr[100][100];
for (int i = 0; i < rows; i++)
{
for (int j = 0; j < cols; j++)
{
cin >> arr[i][j];
}
}
for (int i = 0; i < rows; i++)
{
for (int j = 0; j < cols; j++)
{
cout << arr[i][j] << " ";
}
cout << endl;
}
return 0;
}
My output (see screenshot above) is showing that my code is right and it was going well, except for that extra space at the end. Is there any way to remove that extra space without adding an if statement into my code somewhere?
| Have the inner loop iterate until j < cols - 1 and then write one more output line after it ends, without a space (e.g.: std::cout << arr[i][cols-1];) –
UnholySheep
|
71,501,516 | 71,504,659 | How to implement user input in a linked list? | For this code I need to be able to use user input of books they have read and put them in a linked list. I have most of the code done but when I try putting books in the list the code isn't adding the books to the list. How can I fix this?
this is the function.cpp file
void displayMenu()
{
cout << "[1] Add Book\n"
<< "[2] Size Of List\n"
<< "[3] Display List\n"
<< "[4] Remove Last Book\n"
<< "[5] Delete List\n"
<< "[6] Quit Program\n"
<< "Enter Choice: ";
}
int getChoice(int & choice1)
{
cin >> choice1;
while (choice1 < 1 || choice1 > 6) {
cout << endl;
cout << "Invalid Entry!!" << endl;
cout << "Enter Choice: ";
cin >> choice1;
}
return choice1;
}
int endProgram(bool & start2)
{
start2 = false;
cout << "\n\n\t\tThank you for using this system!!\n\n";
return start2;
}
void clear()
{
system("clear");
}
void linkedList::addBook()
{
Book *ptr;
bool quit = false;
string temp = "";
while (!quit)
{
cout << "Enter a book(enter quit to stop): ";
cin >> temp;
if (temp == "quit")
{
quit = true;
return;
}
ptr = new Book;
ptr->data = temp;
ptr->next = NULL;
if(head == NULL)
{
head = ptr;
tail = ptr;
}
else
{
tail->next = ptr;
tail = tail->next;
}
}
return;
}
void linkedList::displayList()
{
Book *ptr;
ptr = head;
while (ptr != NULL)
{
cout << ptr->data << endl;
ptr = ptr->next;
}
}
void linkedList::listSize()
{
Book *ptr;
int counter = 0;
ptr = head;
while (ptr != NULL)
{
ptr = ptr->next;
counter++;
}
cout << "Number of books in the list: " << counter;
}
void linkedList::deleteLast()
{
if (head == NULL)
return;
if (head->next == NULL)
{
delete head;
head = NULL;
return;
}
// Find the second last node
Book* ptr = head;
while (ptr->next->next != NULL)
ptr = ptr->next;
// Delete last node
delete (ptr->next);
// Change next of second last
ptr->next = NULL;
if(ptr == NULL)
cout << "Last book is cleared!" << endl;
}
void linkedList::deleteList()
{
Book *ptr;
while (head != NULL)
{
ptr = head->next;
delete head;
head = ptr;
}
if(head == NULL)
cout <<"List is cleared!" << endl;
}
This is the main.cpp file
int main() {
int choice = 0;
bool start = true;
linkedList a;
while(start != false)
{
while(choice != 6)
{
displayMenu();
getChoice(choice);
if(choice == 1)
{
clear();
a.addBook();
}
if(choice == 2)
{
clear();
a.listSize();
}
if(choice == 3)
{
clear();
a.displayList();
cout << endl;
}
if(choice == 4)
{
clear();
a.deleteLast();
}
if(choice == 5)
{
clear();
a.deleteList();
}
if(choice == 6)
{
clear();
endProgram(start);
}
}
}
}
Last is the function.h file
#include <iostream>
#include <string>
using namespace std;
struct Book
{
string data;
Book *next;
};
class linkedList
{
private:
Book *head,*tail;
public:
linkedList()
{
head = NULL;
tail = NULL;
}
void addBook();
void displayList();
void listSize();
void deleteList();
void deleteLast();
};
void displayMenu();
int getChoice(int & choice1);
int endProgram(bool & start2);
void clear();
| OK got it
In deleteLast
if (head->next == NULL)
{
delete head;
return;
}
You forgot to update head
if (head->next == NULL)
{
delete head;
head = NULL; <<<<=====
return;
}
Also, in the deleteLast function you don't update tail; you need:
Book* ptr = head;
while (ptr->next->next != NULL)
ptr = ptr->next;
// Delete last node
delete (ptr->next);
tail = ptr;<<<<<<<<<<<===================
// Change next of second last
ptr->next = NULL;
You really need to learn to use your debugger
|
71,501,540 | 71,501,646 | How to join a number of threads which don't stop in C++ | In C++ I have an std::vector of threads, each running a function running forever [while(true)].
I'm joining them in a for loop:
for (auto& thread : threads)
{
thread.join();
}
When the program finishes I'm getting a std::terminate() call inside the destructor of one of the threads. I think I understand why that happens: the join on the first thread blocks forever, so the join calls for the other threads are never reached.
What is the correct way of joining those threads?
And is it actually necessary to join them? (assuming they are not supposed to join under normal circumstances)
| If the threads cannot be joined because they never exit then you could use std::thread::detach (https://en.cppreference.com/w/cpp/thread/thread/detach). Either way before joining you should always check std::thread::joinable (https://en.cppreference.com/w/cpp/thread/thread/joinable).
The std::terminate is indeed most likely due to a running thread being destroyed and not being detached or joined before that. Note however that what happens to detached threads on application exit is implementation defined. If possible you should probably change the logic in those threads to allow graceful exit (std::jthread or std::atomic could help make stoppable threads):
EDIT:
Semi-complete C++17 "correct" code:
std::atomic stop{false};
std::vector<std::thread> threads;
threads.emplace_back(std::thread{[&] { while (!stop.load()) { /* */ }}});
threads.emplace_back(std::thread{[&] { while (!stop.load()) { /* */ }}});
//...
stop.store(true);
for (auto& thread : threads)
{
if (thread.joinable())
{
thread.join();
}
}
Semi-complete C++20 "correct" code:
std::vector<std::jthread> threads;
threads.emplace_back(std::jthread{[] (std::stop_token stopToken) { while (!stopToken.stop_requested()) { /* */ }}});
threads.emplace_back(std::jthread{[] (std::stop_token stopToken) { while (!stopToken.stop_requested()) { /* */ }}});
The C++20 std::jthread allows functions that take std::stop_token to receive a signal to stop. The destructor std::~jthread() first requests stop via the token and then joins so in the above setup basically no manual cleanup is necessary. Unfortunately only MSVC STL and libstdc++ currently support it while Clang's libc++ does not. But it is easy enough to implement yourself atop of std::thread if you'd fancy a bit of exercise.
|
71,502,101 | 71,502,217 | Can i cast method pointer to long(int, size_t) | My problem is i need to represent a pointer to class's method like integer number. So it's not problem with functions, for example void (*func)() easy cast to number, but when i trying to cast void (&SomeClass::SomeMethod) to integer with any ways compiles says it's impossible
C-style cast from 'void(ForthInterpreter::*)()' to long is not alowed
I tried (size_t)&ForthInterpreter::CodeFLiteral, static_cast<size_t>(&ForthInterpreter::CodeFLiteral) but i got the same errors. Should to suppose there is a principal differense between pointer to function and method but what is it? And how can i cast it succesfully?
I use clang++ with C++11 version.
| A pointer-to-member is not just a simple pointer, it is much more complex. Depending on compiler implementation, it could be 2 pointers, one to the object and one to the method. Or it could be an object pointer and an offset into a method table. And so on.
As such, a pointer-to-member simply cannot be stored as-is in an integer, like you are attempting to do. So you need to find another solution to whatever problem you are trying to solve by storing a pointer inside an integer.
|
71,502,700 | 71,503,863 | cannot run code (error: no such file or directory) but can compile file C++ VSCode | File structure in folder /home/cyan/proj10
fst
| -- include
| |-- fstlib
| |-- fst_reader.h
|
| -- lib
|-- libfst.so
include
| -- A.h
| -- B.h
src
| -- A.cc
| -- B.cc
main.cc
CMakeLists.txt
fst folder is a library I added.
CMakeLists.txt
cmake_minimum_required(VERSION 3.0.0)
project(READ_FST VERSION 0.1.0)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall")
set(CMAKE_BUILD_TYPE Debug)
include_directories(${CMAKE_SOURCE_DIR}/include)
include_directories(${CMAKE_SOURCE_DIR}/fst/include)
link_directories(${CMAKE_SOURCE_DIR}/fst/lib)
add_executable(READ_FST main.cc src/A.cc src/B.cc)
target_link_libraries(READ_FST ${CMAKE_SOURCE_DIR}/fst/lib/libfst.so)
A.h
#pragma once
#include <B.h>
namespace me {
class A{
public:
void init();
public:
double func(double a, double b, double c);
private:
B b_;
};
}
A.cc
#include <A.h>
namespace me {
void A::init() {
b_.init(1.0, 2.0, 1.0);
}
double A::func(double a, double b, double c) {
return b_.func(a, b, c);
}
}
main.cc
#include <A.h>
#include <B.h>
#include <iostream>
#include <./fstlib/fst_reader.h>
int main()
{
std::cout << "Hello World" << std::endl;
}
c_cpp_properties.json
{
"configurations": [
{
"name": "Linux",
"defines": [],
"configurationProvider": "ms-vscode.cmake-tools",
"includePath": ["${workspaceFolder}/**"],
"intelliSenseMode": "linux-gcc-x64"
}
],
"version": 4
}
tasks.json
{
"tasks": [
{
"type": "shell",
"label": "cmake",
"command": "cmake",
"args": [
"-g",
"-Wall",
"-I${workspaceFolder}/include",
"-I${workspaceFolder}/fst/include",
"${file}",
"-o",
"${workspaceFolder}/build/${fileBasenameNoExtension}",
],
"options": {
"cwd": "${workspaceFolder}/build"
},
"group": {
"kind": "build",
"isDefault": true
},
}
],
"version": "2.0.0"
}
Problem
I can compile the active file and IntelliSense is fine. But every time I click the Run Code button, it gives the following error:
cyan@machine:~/proj10$ cd "/home/cyan/proj10/" && g++ main.cc -o main && "/home/cyan/proj10/"main
main.cc:1:10: fatal error: A.h: No such file or directory
#include <A.h>
^~~~~
compilation terminated.
| Okay
First
Try using the CMake extension
Second
Run Code does one simple thing - it compiles and runs the program from your current file.
cd "/home/cyan/proj10/" && g++ main.cc -o main && "/home/cyan/proj10/"main
Compilation at this point knows nothing about your CMake project, compilation flags, include paths, etc.
If you're trying to build your entire CMake project this way, it won't work.
You need to build the CMake project itself, and run the compiled application itself (the easiest way to do this is with the extension from the first point).
|
71,503,147 | 71,503,289 | Linked list SIGSEGV, Segmentation fault | I was doing a practice problem using linked lists (I wanted to practice them a bit more) and I got the following error
Program received signal SIGSEGV, Segmentation fault.
0x0000555555555888 in LinkedList::getLink (this=0x0) at main.cpp:24
24          return link;
I can't tell what the problem with this method is, since looking back at similar code I wrote in the past, it seems the same as this one.
#include<iostream>
#include<string>
class LinkedList{
char sign;
int count;
LinkedList *link;
public:
LinkedList(char sign) : sign(sign),count(1),link(nullptr) {}
inline void Increment()
{
count++;
}
inline int getCount() const
{
return count;
}
inline void setLink(LinkedList *whereTo)
{
link=whereTo;
}
inline LinkedList* getLink() const
{
return link;
}
inline char getSign() const
{
return sign;
}
};
void stringInput(std::string &var)
{
std::cout<<"Enter some text:";
std::getline(std::cin,var);
}
unsigned int factorial(unsigned int n)
{
return (n!=0)? n*factorial(n-1) : 1;
}
void addList(char sign,LinkedList *&start,LinkedList *&helper,LinkedList *&end)
{
end=new LinkedList(sign);
if(start==nullptr)
{
start=end;
}
else
{
helper->setLink(end);
}
helper=end;
}
bool isInList(char sign,LinkedList *helper)
{
while(helper!=nullptr)
{
if(sign==helper->getSign())
{
helper->Increment();
return true;
}
helper=helper->getLink();
}
return false;
}
void addSignsToList(const std::string &var,LinkedList *&start,LinkedList *&helper,LinkedList *&end)
{
for(int i=0;i<var.size();i++)
{
if(!isInList(var[i],start))
{
addList(var[i],start,helper,end);
}
}
}
void freeLinkedLists(LinkedList *start)
{
LinkedList *helper=start->getLink();
while(start!=nullptr)
{
delete start;
start=helper;
helper=helper->getLink();
}
}
unsigned int factorialSum(LinkedList *helper)
{
unsigned int sum=1;
while(helper!=nullptr)
{
sum*=factorial(helper->getCount());
helper=helper->getLink();
}
return sum;
}
unsigned int comb(const std::string &var)
{
LinkedList *start=nullptr,*helper=nullptr,*end=nullptr;
addSignsToList(var,start,helper,end);
unsigned int upperHalf=factorial(var.size());
double lowerHalf=factorialSum(start);
freeLinkedLists(start);
return upperHalf/lowerHalf;
}
int main()
{
std::string var;
stringInput(var);
std::cout<<"The word \""<<var<<"\" has "<<comb(var)<<" possible combinations!\n";
std::cin.get();
return 0;
}
| In
void freeLinkedLists(LinkedList *start)
{
LinkedList *helper=start->getLink(); // fails immediately if start is null
while(start!=nullptr)
{
delete start;
start=helper;
helper=helper->getLink(); // too late. Helper may already be null.
// This won't be spotted until start is
// tested on the next loop iteration
}
}
Instead use
void freeLinkedLists(LinkedList *start)
{
while(start!=nullptr) // handles empty list case
{
LinkedList *helper=start->getLink(); // get next node while we know
// current node is valid
delete start;
start=helper; // may be null and will be caught by the while
}
}
|
71,504,035 | 71,668,051 | how to working with files in c++ in this sample code? | in this project, we can add product information and save them into a file <product.txt>.notice that the file name can include space and it can be muluti_piece -->Ex)Samsung air conditioner.
but for reading the name from file in the 'edit' Function I have to use getline function. but when I use getline in edit function for reading data, File entries are corrupted. how can I fix it ?
#include <iostream>
#include<string.h>
#include<string>
#include <fstream>
#include <windows.h>
using namespace std;
void edit(int& j)
{
int step = 0;
int price[100], number[100], code[100];
string name[100];
ifstream Product_For_read;
Product_For_read.open("C:\\PROJECTS\\PROJECT2\\ConsoleApplication1\\products.txt");
while (Product_For_read >> code[step])
{
getline(Product_For_read,name[step],'\n'); // this Is the problem !!!!!!
Product_For_read >> number[step];
Product_For_read >> price[step];
step++;
}
Product_For_read.close();
int counter = step;
bool flag = false;
for (int step1 = 0; step1 <= counter; step1++) // editing data from file
{
if (step1 == j)
{
flag = true;
cout << "\n edite the product name : \n";
cin.ignore();
getline(cin, name[step1], '\n');
cout << "\nedite the product numebr of count : ";
cin >> number[step1];
cout << "\nedite the product price : ";
cin >> price[step1];
}
}
if (flag == true)
{
ofstream product_for_write; //write data to file
product_for_write.open(("C:\\PROJECTS\\PROJECT2\\ConsoleApplication1\\products.txt"));
for (int step2 = 0; step2 < counter; step2 ++)
{
product_for_write << code[step2] << endl << name[step2] << endl << number[step2] << endl << price[step2] << endl;
}
product_for_write.close();
}
}
void apply_product()
{
string title;
int code, number, price;
cout << " product code : ";
cin >> code;
cin.ignore();
cout << " product name : ";
getline(cin, title, '\n');
cout << " product number : ";
cin >> number;
cout << " product price : ";
cin >> price;
ofstream Product_For_write;
Product_For_write.open("C:\\PROJECTS\\PROJECT2\\ConsoleApplication1\\products.txt", ios::app);
Product_For_write << code << endl << title << endl << number << endl << price << endl;
Product_For_write.close();
cout << "\n\n product added successfully ....\n";
}
int main()
{
int i;
cout << "1-add a product\n2-edit product\n";
cin >> i;
if (i == 1)
apply_product();
else if (i == 2)
{
int j;
cout << "which one of items do you want to edit?-->";
cin >> j; //user will enter the code of item which is the product code [0,1,2,...]
edit(j);
}
}
The way to solve this problem is to put a Product_For_read.ignore(); immediately before the problem line, so the newline left behind by >> is consumed before getline runs.
|
71,504,552 | 71,504,654 | Split variadic template params up to use as single template parameters for other classes | I'm trying to figure out how to give arbitrary template parameters to a class, then have that class use each of those parameters to instantiate a bass class. Something along these lines:
template<class T>
SingleParamClass;
template<class ... TYPE_LIST>
MultiParamClass : SingleParamClass<TYPE_LIST[0]>, SingleParamClass<TYPE_LIST[1]>... SingleParamClass<TYPE_LIST[N]>;
I've written it with indexing into the parameter pack just for demonstration purposes obviously.
I know how to unpack the type list, but not how to unpack it in a way that I can use it as above.
Edit:
As requested I'll expand on what I'm trying to do...
I want to make a subclass that constructs a series of pure virtual methods using the types in the variadic template params. These pure virtual methods are there for another component to call, and simultaneously forces the dev to implement those methods in the derived class.
In a world where C++ magically works the way I want, I'd do something like this:
template<class ... TYPE_LIST>
MultiParamClass {
virtual void func(TYPE_LIST[0] arg) = 0;
virtual void func(TYPE_LIST[1] arg) = 0;
...
virtual void func(TYPE_LIST[N] arg) = 0;
}
I don't know a way to do this, so I'm trying to find a way around it using subclasses, something like this:
template<class T>
SingleParamClass {
virtual void func(T arg) = 0;
}
template<class ... TYPE_LIST>
MultiParamClass : SingleParamClass<TYPE_LIST[0]>, SingleParamClass<TYPE_LIST[1]>... SingleParamClass<TYPE_LIST[N]>;
| You can expand the parameter pack like so:
template<class T>
struct SingleParamClass {};
template<class ... TYPE_LIST>
struct MultiParamClass : SingleParamClass<TYPE_LIST>... {};
|
71,504,735 | 71,505,204 | template template parameter deduction with C++ class templates | Is there a way, without partial template specialization, to determine the template parameter of a template parameter that a class is templatized with, assuming that a class can only be templatized with a template parameter that itself is a template?
To make things concrete, here's an example:
template <typename T>
struct A {
// Needed: a function to print the size of the template parameter
// that "T" is templatized with, e.g. "char" in the example
// below
};
template <typename A>
struct X{};
int main() {
A<X<char>> a;
}
| It is for situations like this why standard containers expose a value_type member, eg:
template <class T>
struct A {
// Needed: a function to print the size of the template parameter
// that "T" is templatized with, e.g. "char" in the example
// below
void printSize() const { cout << sizeof(typename T::value_type); }
};
template <typename A>
struct X {
using value_type = A;
};
Online Demo
Otherwise, you can use a separate helper that utilizes a template template parameter and template argument deduction to determine what the value type is, eg:
template<template<class, class...> class C, class T, class... OtherTs>
size_t A_helper_printSize(C<T, OtherTs...>&&) { return sizeof(T); }
template <class T>
struct A {
// Needed: a function to print the size of the template parameter
// that "T" is templatized with, e.g. "char" in the example
// below
void printSize() const { cout << A_helper_printSize(T{}); }
};
Online Demo
Alternatively:
template<template<class, class...> class C, class T, class... OtherTs>
T A_helper_valueType(C<T, OtherTs...>&&);
template <class T>
struct A {
// Needed: a function to print the size of the template parameter
// that "T" is templatized with, e.g. "char" in the example
// below
void printSize() const { cout << sizeof(decltype(A_helper_valueType(std::declval<T>()))); }
};
Online Demo
|
71,504,813 | 71,505,031 | C++: algorithmic complexity of std::next and std::nth_element for std::multiset | What is the algorithmic (time) complexity of std::next and std::nth_element for a std::multiset in C++? I am interested in the latest gcc implementation, in particular, if it matters. I have a set of numbers in std::multiset and I need to compute their median. I have a feeling that it should be possible to do it in O(log N) for a balanced binary tree (or even in O(1) if the tree is perfectly balanced and, so the median element is at the root of the tree).
using Set = std::multiset<double>;
const Set x = {/*some numbers*/};
if(x.empty()) return 0;
const Set::size_type size = x.size();
const Set::size_type size2 = size/2;
const Set::const_iterator it = std::next(x.begin(), size2);
return size%2==1 ? *it : 0.5*(*std::next(it,-1) + *it);
If the numbers were in a std::vector, I could do it in O(1). To find the Nth element in a std::multiset I use std::next, but I am wondering if std::nth_element could be better (I hope they both use the fact that std::multiset is sorted).
If what I am doing to compute the median is not optimal, please, advise on a better way to do it.
Thank you very much for your help!
| std::next is O(k) in the distance you move.
There is no other way to get the median, practically, if you have a multiset.
It is possible to write a more efficient advance operation on containers like this, but std does not contain it.
You could keep around two multisets and balance them using splicing operations such that the median is always the first element of the 2nd container. This will have asymptotically good performance characteristics.
It could make other operations a bit more of a pain.
|
71,505,155 | 71,505,217 | C++ and MSVC #define directive with optional arguments | I'm having trouble trying to get a macro that I'm writing to function correctly. I've read the docs and can't find anything online to help with what I'm looking for.
I am attempting to write a macro that is used for info and debugging purposes. The exact macro should look like:
INFODUMP("Some format string here with a variable %f", someVariableInCode);
Which would expand to:
std::printf("INFO: Some format string here with a variable %f\n", someVariableInCode);
The macro that I have written that doesn't function correctly is:
#define INFODUMP(s, ...) std::printf("INFO: %s\n", s, __VA_ARGS__)
While technically, this macro is functioning, it doesn't do the formatting.
For example, I have:
INFODUMP("Size of width buffer: %i, size of height buffer: %i", wCells.size(), hCells.size());
And in the console when the program is run, I get:
INFO: Size of width buffer: %i, size of height buffer: %i
So the macro works; it just isn't formatting the string. It's as if it formats s correctly, but not any of the optional arguments afterwards.
If I had to guess, it has something to do with strings and how they are formatted during runtime. If that's the case, I'm still completely lost on what to look up.
Notes:
I'm using Visual Studio 2022 Community with the Visual C++ (v143) compiler (latest).
From my research online, I don't believe the __VA_OPT__ preprocessor feature is available, but I could be entirely wrong.
| That's not how printf() works.
Your macro expands:
INFODUMP("%d", 42);
To:
std::printf("INFO: %s\n", "%d", 42)
You need:
#define INFODUMP(s, ...) std::printf("INFO: " s "\n", __VA_ARGS__)
(Adjacent string literals are concatenated by the compiler; the ## token-pasting operator cannot be used to join string literals.)
Which will then expand:
INFODUMP("%d", 42);
To:
std::printf("INFO: %d\n", 42)
|
71,505,162 | 71,505,277 | Clicking on command window seemingly halts program execution, how to prevent this? | I just started a simple text program in C++ (MSVS) that outputs to the command window; it is a simple timer that counts up by 1 every second. When I click inside the command window (on the blank text), it appears to halt program execution until I press a key, click off the window, or click the title bar. Is it possible to prevent this?
Normally in the title bar it says "C:/.../program.exe", and when I click in the window it says "Select C:/.../program.exe".
You have selected text in the window, so the window is expecting you to do a copy operation. The console window stops accepting new output from the app while you have text selected. It does this trying to be helpful (if you are copying text, you don't want to have to chase it around the screen).
People also use this as a quick way to freeze an app's output (a replacement for the less used 'scroll lock').
Press ESC to unselect and resume running
|
71,505,316 | 71,505,369 | Does this downcasting lead to undefined behavior? | Given this sample:
class Base
{
public:
void foo() {};
};
class Derived : public Base
{
};
int main()
{
Base b;
Derived* d = static_cast<Derived*>(&b);
d->foo();
}
I just have three cases: when void foo():
is member of Base,
and when it is member of Derived,
and when it is member of Base and Derived.
My questions are:
Does the member access expression d->foo() is undefined behavior in all three cases?.
If the member access expression is UB, Does the only workaround is to use dynamic_cast?
| From the C++ standard §7.6.1.9.11
A prvalue of type “pointer to cv1 B”, where B is a class type, can be converted to a prvalue of type “pointer to cv2 D”, where D is a complete class derived from B,
...
If the prvalue of type “pointer to cv1 B” points to a B that is actually a base class subobject of an object of type D, the resulting pointer points to the enclosing object of type D.
Otherwise, the behavior is undefined.
So using static_cast to downcast is only valid if you know (via some other means) that the cast is valid. If you take a Base that isn't a Derived and use a static_cast to pretend that it is, you've invoked undefined behavior, even before you try to dereference the pointer. So you could remove the d->foo() line and still have undefined behavior. It's the cast that's bad.
If your goal is to conditionally check whether you've got an instance of a subclass, then you want dynamic_cast.
|
71,505,585 | 71,506,689 | How to get the expiration date based on week code not on current date using c++ | How will I get the expiration date of an item, which is based on week code? Whenever I run the code that I made, the program reads the current date and disregards the week code. For example:
Week Code: 2138 (2021 week 38)
Shelf_life : 6 months
CTime weekcode = CTime::GetCurrentTime();
CTimeSpan shelf_no = CTimeSpan(atoi(view->m_pODBCPartsNo->m_shelf_life), 0, 0, 0);
CTime expiration_date = weekcode + shelf_no;
Week code is a date code, for example 2138, year 2021(21) week 38(38). Week 38 is around September 19, 2021 to September 25, 2021.
Now the question is: how will I get the expiration date based on the week code? Will I still use "GetCurrentTime"?
| Here's how I would do it using Howard Hinnant's free, open-source, header-only date library:
#include "date/iso_week.h"
#include "date/date.h"
#include <chrono>
#include <iostream>
int
main()
{
using namespace date;
int weekcode = 2138;
int shelf_life = 6;
iso_week::year y{weekcode / 100 + 2000};
iso_week::weeknum wk(weekcode % 100);
auto prod_date = y/wk/iso_week::wed;
auto exp_date = floor<days>(sys_days{prod_date} + months{shelf_life});
std::cout << "Production date : " << prod_date << '\n';
std::cout << "Expiration date : " << exp_date << '\n';
}
This assumes that you are using ISO week date as the definition of "week number", which is an internationally accepted standard. The program begins by simply decoding weekcode into a year and weeknum. It then uses the iso_week library to create a production date with this year and weeknum. To complete this date, a weekday must be supplied. I've chosen Wednesday since this is the middle of the ISO work week.
auto prod_date = y/wk/iso_week::wed;
The expiration date can then be computed by simply adding 6 months to that date. Note that this addition is done using a chronological computation because product aging is a physical process that does not care that different months have different lengths. Using an "average month" length is adequate.
To do this chronological arithmetic, the production date must first be converted to a chronological date, sys_days. This is simply a count of days since the Unix Time epoch.
auto exp_date = floor<days>(sys_days{prod_date} + months{shelf_life});
The result has a precision finer than days since the average month duration is not an integral number of days. So it must be truncated back to the precision of days to create a date, as opposed to a date-time.
The above program prints out:
Production date : 2021-W38-Wed
Expiration date : 2022-03-23
|
71,506,169 | 71,506,322 | Passing an entire group of variables as one argument to a function C++ | I'm trying to simulate a system that requires multiple variables and parameters and requires multiple nested functions. I was wondering if there was a way to pass them as a group so I can just pass one or two arguments without having to itemize each parameter and variable within each function and trying to keep track of the location of them all in an array/vector. With R, one can use with() to evaluate a function using a list of named parameters (see here). Can you do this with a map in C++?
| Write a struct with the parameters.
The multiple nested functions either need an overload taking such a struct, or need to be rewritten to consume it as their argument.
Now you can pass parameters around in a bundle.
If the set of parameters is not uniform, you might be out of luck. You also might be able to solve it using inheritance and the like. It will depend a lot on details.
If there is a significant set of parameters that is uniform, making them a struct can reduce the number of parameters you pass around.
void do_math(int x, int y, int z, int w);
vs
struct coords {
int x,y,z,w;
};
void do_math(coords);
if we want to keep the original do_math around we can:
inline void do_math(coords c) { do_math(c.x, c.y, c.z, c.w); }
now we can have code that wants to call do_math a bunch of times with tweaks;
void do_math_over_and_over(coords c, int x_range) {
for(int i = 0; i < x_range; ++i) {
coords tmp = c;
tmp.x += i;
do_math(tmp)
}
}
|
71,506,213 | 71,566,954 | How Can I implement MSAA on DX12? | I searched many other questions and samples, but I still can't understand what I must do.
What I know about this process is
Create a Render Target for msaa. - Different from SwapChain's Backbuffer.
Draw everything (like meshes) on msaa render target.
Copy the contents of the msaa Render Target to the current BackBuffer using the ResolveSubresource function.
Is this the right process? Or is there a part that I left out?
| These samples demonstrate using MSAA with DirectX12:
https://github.com/microsoft/Xbox-ATG-Samples/tree/master/PCSamples/IntroGraphics/SimpleMSAA_PC12
https://github.com/microsoft/Xbox-ATG-Samples/tree/master/UWPSamples/IntroGraphics/SimpleMSAA_UWP12
I also cover this (among other topics) in this blog series.
Per the comments, you can also find MSAA covered in the DirectX Tool Kit for DX12 tutorials.
|
71,506,421 | 71,506,442 | How to overload function template function inside a class template? | How can I overload the Contains function template in class template Range?
When I compile this code, I get the error below:
template <typename T>
class Range {
public:
Range(T lo, T hi) : low(lo), high(hi)
{}
typename std::enable_if<!std::numeric_limits<T>::is_integer, bool>::type
Contains(T value, bool leftBoundary = true, bool rightBoundary = true) const
{
// do sth
return true;
}
typename std::enable_if<std::numeric_limits<T>::is_integer, bool>::type
Contains(T value, bool leftBoundary = true, bool rightBoundary = true) const
{
// do sth
return true;
}
};
error: ‘typename std::enable_if<std::numeric_limits<_Tp>::is_integer, bool>::type Range<T>::Contains(T, bool, bool) const’ cannot be overloaded
Contains(T value, bool leftBoundary = true, bool rightBoundary = true) const
^~~~~~~~
error: with ‘typename std::enable_if<(! std::numeric_limits<_Tp>::is_integer), bool>::type Range<T>::Contains(T, bool, bool) const’
Contains(T value, bool leftBoundary = true, bool rightBoundary = true) const
You need to make the Contains functions templates themselves too. E.g.
template <typename X>
typename std::enable_if<!std::numeric_limits<X>::is_integer, bool>::type
Contains(X value, bool leftBoundary = true, bool rightBoundary = true) const
{
// do sth
return true;
}
template <typename X>
typename std::enable_if<std::numeric_limits<X>::is_integer, bool>::type
Contains(X value, bool leftBoundary = true, bool rightBoundary = true) const
{
// do sth
return true;
}
Since C++17 you can use Constexpr If instead of overloading.
bool
Contains(T value, bool leftBoundary = true, bool rightBoundary = true) const
{
if constexpr (!std::numeric_limits<T>::is_integer) {
// do sth
return true;
} else {
// do sth
return true;
}
}
|
71,506,510 | 71,506,542 | How to resolve the error: called object type 'char' is not a function or function pointer | So, in a program I was trying to print a pair from a stack. The code is as follows:
#include <iostream>
#include <stack>
#include <utility>
using namespace std;
int main()
{
stack<pair<char, int>> deleteOperations;
stack<pair<pair<char, char>, int>> replaceOperations;
deleteOperations.push(make_pair('a', 1));
replaceOperations.push(make_pair(make_pair('b', 'c'), 2));
cout << deleteOperations.top().first();
cout << replaceOperations.top().first().first();
return 0;
}
The error is:
test.cpp:12:41: error: called object type 'char' is not a function or function pointer
cout << deleteOperations.top().first();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
test.cpp:13:13: error: type 'std::__1::pair<char, char>' does not provide a call operator
cout << replaceOperations.top().first().first();
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2 errors generated.
| std::pair<>::first is a member variable, not a function; just use deleteOperations.top().first and replaceOperations.top().first.first
|
71,506,554 | 71,506,842 | SetTokenInformation fails with 24, but the length is correct | I'm trying to create an elevated SYSTEM token, but the code below fails:
#include <windows.h>
#include <stdio.h>
BOOL Elevate()
{
PSID pSID = NULL;
HANDLE hToken = NULL, hToken2 = NULL;
SID_IDENTIFIER_AUTHORITY NtAuthority = SECURITY_NT_AUTHORITY;
if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ALL_ACCESS, &hToken))
{
fprintf(stderr, "OpenProcessToken(): %d\n", GetLastError());
goto done;
}
if (!DuplicateTokenEx(hToken, MAXIMUM_ALLOWED, NULL, SecurityImpersonation, TokenPrimary, &hToken2))
{
fprintf(stderr, "DuplicateTokenEx(): %d\n", GetLastError());
goto done;
}
if (!AllocateAndInitializeSid(
&NtAuthority,
1,
SECURITY_MANDATORY_SYSTEM_RID,
0,
0, 0, 0, 0, 0, 0,
&pSID))
{
fprintf(stderr, "AllocateAndInitializeSid(): %d\n", GetLastError());
goto done;
}
if (!SetTokenInformation(hToken2, TokenIntegrityLevel, &pSID, sizeof(pSID)))
{
fprintf(stderr, "SetTokenInformation(): %d\n", GetLastError());
goto done;
}
done:
if (pSID)
{
FreeSid(pSID);
pSID = NULL;
}
CloseHandle(hToken);
CloseHandle(hToken2);
return TRUE;
}
int main(int argc, char** argv)
{
Elevate();
}
It fails on SetTokenInformation, and the error code is 24: ERROR_BAD_LENGTH. Does anyone know what's wrong?
EDIT
Remy Lebeau was right, and I found an example here: https://wiki.sei.cmu.edu/confluence/display/c/WIN02-C.+Restrict+privileges+when+spawning+child+processes
| Per the TOKEN_INFORMATION_CLASS documentation
TokenIntegrityLevel
The buffer receives a TOKEN_MANDATORY_LABEL structure that specifies the token's integrity level.
Where TOKEN_MANDATORY_LABEL is defined as:
typedef struct _SID_AND_ATTRIBUTES {
#if ...
PISID Sid;
#else
PSID Sid;
#endif
DWORD Attributes;
} SID_AND_ATTRIBUTES, *PSID_AND_ATTRIBUTES;
typedef struct _TOKEN_MANDATORY_LABEL {
SID_AND_ATTRIBUTES Label;
} TOKEN_MANDATORY_LABEL, *PTOKEN_MANDATORY_LABEL;
So, you need to give SetTokenInformation() a pointer to a TOKEN_MANDATORY_LABEL, not a pointer to a SID, eg:
#include <windows.h>
#include <stdio.h>
BOOL Elevate()
{
TOKEN_MANDATORY_LABEL tml = {};
HANDLE hToken = NULL, hToken2 = NULL;
SID_IDENTIFIER_AUTHORITY NtAuthority = SECURITY_NT_AUTHORITY;
BOOL result = FALSE;
if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ALL_ACCESS, &hToken))
{
fprintf(stderr, "OpenProcessToken(): %lu\n", GetLastError());
goto done;
}
if (!DuplicateTokenEx(hToken, MAXIMUM_ALLOWED, NULL, SecurityImpersonation, TokenPrimary, &hToken2))
{
fprintf(stderr, "DuplicateTokenEx(): %lu\n", GetLastError());
goto done;
}
if (!AllocateAndInitializeSid(
&NtAuthority,
1,
SECURITY_MANDATORY_SYSTEM_RID,
0,
0, 0, 0, 0, 0, 0,
&(tml.Label.Sid)))
{
fprintf(stderr, "AllocateAndInitializeSid(): %lu\n", GetLastError());
goto done;
}
tml.Label.Attributes = ...; // desired integrity level
if (!SetTokenInformation(hToken2, TokenIntegrityLevel, &tml, sizeof(tml)))
{
fprintf(stderr, "SetTokenInformation(): %lu\n", GetLastError());
goto done;
}
result = TRUE;
done:
if (tml.Label.Sid) FreeSid(tml.Label.Sid);
if (hToken) CloseHandle(hToken);
if (hToken2) CloseHandle(hToken2);
return result;
}
int main(int argc, char** argv)
{
Elevate();
}
|
71,506,735 | 71,506,782 | C++ How to add two arrays of unequal sizes using the for loop? | So the aim is to take two arrays as shown below
int x[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
int k[4] = {1, 2, 3, 4};
and add each element of k to each element of x in a loop as shown
1 2 3 4 5 6 7 8 9 10
+1 +2 +3 +4 +1 +2 +3 +4 +1 +2
This should give us a final array [2, 4, 6, 8, 6, 8, 10, 12, 10, 12].
Any suggestions as to how I could achieve this in C++?
| Loop through the indexes of the larger array, using the modulus (%) operator to wrap around the indexes when accessing the smaller array.
int x[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
int k[4] = {1, 2, 3, 4};
int res[10];
for (int i = 0; i < 10; ++i) {
res[i] = x[i] + k[i % 4];
}
Online Demo
|
71,507,110 | 71,507,571 | Is it undefined behavior to not have a return statement for a non-void function in which control can never flow off the end? | I am learning C++ using the books listed here. In particular, I read that flowing off the end of a non-void function is undefined behavior. Then I looked at this answer that says:
In C++ just flowing off the end of a value returning function is always undefined behavior (regardless of whether the function's result is used by the calling code). In C this causes undefined behavior only if the calling code tries to use the returned value.
But in this answer I read:
It is legal under C/C++ to not return from a function that claims to return something.
As you can see in the first quoted answer, the user says that in C++ it is always UB but the second quoted answer says that it is legal. They seem to be contradicting each other.
Which of the above quoted answer is correct in C++?
Also I have the following example in C++:
int func(int a, int b)
{
if(a > b)
{
return a;
}
else if(a < b)
{
return b;
}
}
int main()
{
int x =0, y =0;
std::cin>> x >> y;
if(x!=y)
{
func(x,y); //Question 1: IS THIS UB?
std::cout<<"max is: "<<func(x,y); //Question 2: IS THIS UB?
}
else
{
std::cout<<"both are equal"<<std::endl;
}
return 0;
}
I have 2 question from the above given code snippet which I have mentioned in the comments of the code above.
As can be seen from the code, the control can never flow over the end of the function func because a will never be equal to b inside the function since I have checked that condition in main separately.
| The two statements are in no way contradictory.
The first statement is about what happens when control flow exits a non-void function without executing a return statement. The second statement is about what happens when control flow does not exit the function at all. Calls to functions like exit or std::terminate do not ever have control flow proceed past the point when those functions are called.
But that has nothing to do with the nature of the return value.
The behavior of the program when a non-void function runs out of stuff to do without an explicit return statement (or throw. Or co_return these days) is governed by [stmt.return]/2:
Flowing off the end of a function is equivalent to a return with no value; this results in undefined behavior in a value-returning function.
|
71,507,133 | 71,507,163 | What does this line mean? Can we assign something to an object other than an attribute? | Greetings, this is my first question here.
I'm really new to C++, and to object-oriented programming as well.
So, my current task requires wrapping this C++ library; the code is:
#include "cavc/polylineoffset.hpp"
int main(int argc, char *argv[]) {
(void)argc;
(void)argv;
// input polyline
cavc::Polyline<double> input;
// add vertexes as (x, y, bulge)
input.addVertex(0, 25, 1);
input.addVertex(0, 0, 0);
input.addVertex(2, 0, 1);
input.addVertex(10, 0, -0.5);
input.addVertex(8, 9, 0.374794619217547);
input.addVertex(21, 0, 0);
input.addVertex(23, 0, 1);
input.addVertex(32, 0, -0.5);
input.addVertex(28, 0, 0.5);
input.addVertex(39, 21, 0);
input.addVertex(28, 12, 0);
input.isClosed() = true;
// below this is the line that i dont understand
std::vector<cavc::Polyline<double>> results = cavc::parallelOffset(input, 3.0);
}
So, what I don't understand is the last line. The basic C++ OOP that I understand is that we can create an object and can assign an attribute to it:
class MyClass { // The class
public: // Access specifier
int myNum; // Attribute (int variable)
string myString; // Attribute (string variable)
};
MyClass myObject;
myObject.myNum = 1;
myObject.myString = "something";
But what I don't understand about the last line (from the library) is that it creates an object, results, and then directly assigns something to it:
results = cavc::parallelOffset(input, 3.0);
This is the header file:
https://github.com/jbuckmccready/CavalierContours/blob/master/include/cavc/polylineoffset.hpp
| The line in question is calling a function named parallelOffset, which was declared in a namespace called cavc. The function returns an object of type std::vector<cavc::Polyline<double>>, so the line is declaring an object of that type and initializing it with the value returned by the function.
The syntax is the same as e.g.
float x = sin(3.0);
... just with a more complicated return-type (std::vector<cavc::Polyline<double>>, a.k.a a vector of cavc::Polyline<double> objects)
std::vector<cavc::Polyline<double>> results = cavc::parallelOffset(input, 3.0);
|
71,507,970 | 71,514,472 | Is bitshifting from an unsigned to a signed smaller type portable? | I have an unsigned short (which is 16 bits on the target platforms)
It contains two 8-bit signed values, one in the lower byte, one in the higher byte.
#include <vector>
#include <iostream>
int main() {
unsigned short a = 0xE00E;
signed char b = a & 0xFF;
signed char c = ((a >> 8) & 0xFF);
std::cout << (int)b << std::endl;
std::cout << (int)c << std::endl;
}
Is this portable, or am I relying on platform dependent behaviour here?
On all major compilers (gcc, msvc, clang), the result is 14 and -32, which is the expected output.
| Disclaimer: I am no language lawyer
Is this portable, or am I relying on platform dependent behaviour here?
Since no version was specified, I used the latest draft.
So what do we need to check:
Can unsigned short hold 0xE00E and signed char can hold 8 bits?
How a & 0xFF and ((a >> 8) & 0xFF) are transformed into signed char?
How signed char is transformed into int?
1. Can unsigned short hold 0xE00E?
Type | Minimum width
signed char 8
short int 16
int 16
long int 32
long long int 64
Source : https://eel.is/c++draft/basic.fundamental
2. How a & 0xFF and ((a >> 8) & 0xFF) are transformed into signed char?
In particular, arithmetic operators do not accept types smaller than int as arguments, and integral promotions are automatically applied
unsigned char or unsigned short can be converted to int if it can hold its entire value range, and unsigned int otherwise;
Source: https://en.cppreference.com/w/cpp/language/implicit_conversion Integral promotion
AFAIU, in a & 0xFF, a can be an int or unsigned int.
If sizeof(unsigned short) == sizeof(int), a will be an unsigned int, and an int otherwise.
If the types are the same, that type is the common type.
If the unsigned type has conversion rank greater than or equal to the rank of the signed type, then the operand with the signed type is implicitly converted to the unsigned type.
Source: https://en.cppreference.com/w/c/language/conversion Usual arithmetic conversions
Now we need to go to signed char from int or unsigned int
Otherwise, the result is the unique value of the destination type that is congruent to the source integer modulo 2N, where N is the width of the destination type.
Source: https://eel.is/c++draft/conv.integral#3
Note: this is implementation-defined until C++20 (source: https://en.cppreference.com/w/cpp/language/implicit_conversion)
And the memory representation of an negative signed:
An unsigned integer type has the same object representation, value representation, and alignment requirements ([basic.align]) as the corresponding signed integer type.
For each value x of a signed integer type, the value of the corresponding unsigned integer type congruent to x modulo 2N has the same value of corresponding bits in its value representation.
[Example 1: The value −1 of a signed integer type has the same representation as the largest value of the corresponding unsigned type.
— end example]
Source: https://eel.is/c++draft/basic.fundamental#3
All good.
signed char b = a & 0xFF;
signed char c = ((a >> 8) & 0xFF);
are well-defined and behave as you expect.
3. How signed char is transformed into int?
It's https://eel.is/c++draft/conv.integral#3 again, and it won't modify the value.
Conclusion
Is bitshifting from an unsigned to a signed smaller type portable?
In C++20
Yes
Before C++20
The unsigned-to-signed conversion is implementation-defined. To prevent this from happening we need:
static_assert(sizeof(unsigned short) < sizeof(int));
And the code is fully portable.
|
71,508,125 | 71,508,173 | OpenCV: Unable to get a red line in Hough transform | I have written a simple code to perform Hough transform and display the lines. The code is as follows,
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int lowThreshold=0;
int const max_lowThreshold = 100;
int kernel_size = 3;
int ratio = 3;
Mat img;
Mat display;
Mat temp;
void CannyThreshold()
{
cvtColor(img, display, COLOR_RGB2GRAY);
// GaussianBlur(display,display,Size(7,7),3,3);
GaussianBlur(display, display, Size(1, 1), 1,1);
// printf("%d\n",lowThreshold);
Canny(display,display,lowThreshold,3);
imshow("Canny",display);
}
void Hough()
{
Canny(temp,display,50,3);
vector<Vec2f> lines; // will hold the results of the detection
HoughLines(display, lines, 1, CV_PI/180, 150, 0, 0 ); // runs the actual detection
for( size_t i = 0; i < lines.size(); i++ )
{
float rho = lines[i][0], theta = lines[i][1];
Point pt1, pt2;
double a = cos(theta), b = sin(theta);
double x0 = a*rho, y0 = b*rho;
pt1.x = cvRound(x0 + 1000*(-b));
pt1.y = cvRound(y0 + 1000*(a));
pt2.x = cvRound(x0 - 1000*(-b));
pt2.y = cvRound(y0 - 1000*(a));
line(display, pt1, pt2, Scalar(0,0,255), 3, LINE_AA);
}
printf("Lines = %ld\n",lines.size());
imshow("Hough",display);
}
int main()
{
VideoCapture cap(0);
namedWindow("Canny");
createTrackbar("Min Threshold: ","Canny",&lowThreshold,max_lowThreshold);
while(1)
{
cap.read(img);
temp = img;
CannyThreshold();
Hough();
waitKey(1);
}
cap.release();
return 0;
}
I am unable to get a red line (or any color) in the output Image in the window "Hough". I just get a black and white image. I'm also running a simple Canny edge detection before the Hough transform. Could that be causing an issue?
Any suggestions on how I could get a color line on to be displayed?
| You are drawing the Hough lines on the single-channel grayscale image produced by Canny. A 1-channel image cannot hold a red line, so Scalar(0,0,255) is reduced to its first channel and the display stays black and white.
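A hedged sketch of a fix (untested against the full program; the variable names edges/colorView are mine): run HoughLines on the single-channel edge image, but draw onto a 3-channel conversion of it so Scalar(0,0,255) can render as red (OpenCV channel order is BGR):

```cpp
// In Hough(): detect on a 1-channel image, draw on a BGR copy of it.
Mat edges, colorView;
Canny(temp, edges, 50, 3);
vector<Vec2f> lines;
HoughLines(edges, lines, 1, CV_PI/180, 150, 0, 0); // needs 8-bit, 1-channel input
cvtColor(edges, colorView, COLOR_GRAY2BGR);        // 3 channels so colors survive
// ... compute pt1/pt2 as before, then:
// line(colorView, pt1, pt2, Scalar(0,0,255), 3, LINE_AA);
imshow("Hough", colorView);
```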
|
71,508,298 | 71,508,447 | Access mouse.button variable in QML | I was trying something to strengthen my experience with C++ and QML.
I have a MouseArea item. I want to pass the "onPressed" , "onReleased" and "onPositionChanged" events to the backend side that I am trying to write in C++. Actually I want this for clean and simple code. I can do whatever I want by writing in QML.
The problem is that I couldn't declare a parameter for MouseArea's "mouse.button" value on the C++ side. I am getting an error like:
qrc:/main.qml:58: Error: Unknown method parameter type: Qt::MouseButton
My QML script:
.
.
Item{
id: item
anchors.fill: parent
Viewer{
id: viewer
}
MouseArea{
id: viewerMouseArea
anchors.fill: parent
hoverEnabled: true
acceptedButtons: Qt.RightButton | Qt.LeftButton | Qt.MiddleButton
onPressed: {
//console.log("Mouse buttons in mouse area pressed.");
viewer.mousePressEvent(mouseX, mouseY, mouse.button);
}
onReleased:{
//console.log("Mouse buttons in mouse area released.")
viewer.mouseReleaseEvent(mouseX, mouseY, mouse.button);
}
onPositionChanged:{
//console.log("Position of cursor in mouse area changed.")
//viewer.mouseMoveEvent(x, mouseY);
}
}
}
.
.
My C++ backend code:
.
.
void Viewer::mousePressEvent(double x, double y, Qt::MouseButton button) {
qDebug() << "Viewer::mousePressEvent()";
}
void Viewer::mouseReleaseEvent(double x, double y, Qt::MouseButton button) {
qDebug() << "Viewer::mouseReleaseEvent()";
}
void Viewer::mouseMoveEvent(double x, double y) {
qDebug() << "Viewer::mouseMoveEvent()";
}
.
.
How can I access mouse.button variable in QML in C++?
| I looked at the documentation here. https://doc.qt.io/qt-6/qt.html#MouseButton-enum I solved it by working directly with the unsigned integer.
void Viewer::mousePressEvent(double x, double y, quint32 button) {
qDebug() << "Viewer::mousePressEvent()";
qDebug() << "x: " << x << " y: " << y << " button: " << button;
}
void Viewer::mouseReleaseEvent(double x, double y, quint32 button) {
qDebug() << "Viewer::mouseReleaseEvent()";
qDebug() << "x: " << x << " y: " << y << " button: " << button;
}
void Viewer::mouseMoveEvent(double x, double y) {
qDebug() << "Viewer::mouseMoveEvent()";
}
If you have a better solution suggestion, please let me know. I would be very grateful.
Console output:
Viewer::mousePressEvent()
x: 243 y: 161 button: 2
Viewer::mouseReleaseEvent()
Viewer::mousePressEvent()
x: 282 y: 183 button: 1
Viewer::mouseReleaseEvent()
Viewer::mousePressEvent()
x: 277 y: 138 button: 4
Viewer::mouseReleaseEvent()
|
71,508,553 | 71,512,833 | Backup the database file as sql inside the zip file on Qt C++ | I want to back up my database by creating a zip file with QProcess in my Qt program; the code below dumps it as a .sql file. How can I put the backup inside a zip file?
QProcess dump(this);
QStringList args;
QString path="C:/Users/ali/Desktop/dbfile/db.sql";
args<<"-uroot"<<"-proot"<<"denemesql";
dump.setStandardOutputFile(path);
dump.start("mysqldump.exe",args);
if(!dump.waitForStarted(1000))
{
qDebug()<<dump.errorString();
}
dump.waitForFinished(-1);
can you help me?
| The easy way is perform it in two steps: dump and then zip
bool dump1() {
QString path = "C:/Users/ali/Desktop/dbfile/db.sql";
QString zipPath = "C:/Users/ali/Desktop/dbfile/db.zip";
QProcess dump;
dump.setProgram("mysqldump.exe");
dump.setArguments({"-uroot", "-proot", "denemesql"});
dump.setStandardOutputFile(path);
dump.start();
if (!dump.waitForStarted()) {
qDebug() << dump.error();
return false;
}
if (!dump.waitForFinished()) {
qDebug() << dump.error();
return false;
}
QProcess zip;
zip.setProgram("C:\\Program Files\\7-Zip\\7z.exe");
zip.setArguments({"a", zipPath, path});
zip.start();
if (!zip.waitForStarted()) {
qDebug() << dump.error();
return false;
}
if (!zip.waitForFinished()) {
qDebug() << dump.error();
return false;
}
QFile(path).remove();
return true;
}
You can avoid creating the temporary file by piping the output of dump directly to 7z, but you will be limited to the gzip format.
bool dump2() {
QString gzipPath = "C:/Users/ali/Desktop/dbfile/db.gz";
QString fileName = "db.sql";
QProcess dump;
dump.setProgram("mysqldump.exe");
dump.setArguments({"-uroot", "-proot", "denemesql"});
QProcess zip;
zip.setProgram("C:\\Program Files\\7-Zip\\7z.exe");
zip.setArguments({"a", gzipPath, "-si" + fileName});
dump.setStandardOutputProcess(&zip);
dump.start();
zip.start();
if (!dump.waitForStarted()) {
qDebug() << dump.error();
return false;
}
if (!zip.waitForStarted()) {
qDebug() << zip.error();
return false;
}
if (!dump.waitForFinished()) {
qDebug() << dump.error();
return false;
}
if (!zip.waitForFinished()) {
qDebug() << zip.error();
return false;
}
return true;
}
The hard way is to use zlib: add it to your project and use the
zipOpen
zipOpenNewFileInZip
zipWriteInFileInZip
zipCloseFileInZip
zipClose
functions. This requires linking the library and including the headers and sources from zlib\contrib\minizip.
|
71,509,342 | 71,514,771 | Why does static_cast conversion speed up an un-optimized build of my integer division function? | ... or rather, why does not static_cast-ing slow down my function?
Consider the function below, which performs integer division:
int Divide(int x, int y) {
int ret = 0, i = 32;
long j = static_cast<long>(y) << i;
while (x >= y) {
while (x < j) --i, j >>= 1;
ret += 1 << i, x -= j;
}
return ret;
}
This performs reasonably well, as one might expect.
However, if we remove the static_cast on line 3, like so:
int Divide(int x, int y) {
int ret = 0, i = 32;
long j = y << i;
while (x >= y) {
while (x < j) --i, j >>= 1;
ret += 1 << i, x -= j;
}
return ret;
}
This version performs noticeably slower, sometimes several hundreds times slower (I haven't measured rigorously, but shouldn't be far off) for pathological inputs where x is large and y is small. I was curious and wanted to look into why, and tried digging into the assembly code. However, apart from the casting differences on line 3, I get the exact same output. Here's the line 3 output for reference (source):
With static_cast:
movsxd rax, dword ptr [rbp - 8]
mov ecx, dword ptr [rbp - 16]
shl rax, cl
mov qword ptr [rbp - 24], rax
Without static_cast:
mov eax, dword ptr [rbp - 8]
mov ecx, dword ptr [rbp - 16]
shl eax, cl
cdqe
mov qword ptr [rbp - 24], rax
The rest is identical.
I'm really curious where the overhead is occurring.
EDIT: I've tested a bit further, and it looks like the while loop is where most of the time is spent, not when y is initialized. The additional cdqe instruction doesn't seem to be significant enough to warrant the total increase in wall time.
Some disclaimers, since I've been getting a lot of comments peripheral to the actual question:
I'm aware that shifting an int further than 32 bits is UB.
I'm assuming only positive inputs.
long is 8 bytes long on my platform, so it doesn't overflow.
I'd like to know what might be causing the increased runtime, which the comments criticizing the above don't actually address.
| Widening after the shift reduces your loop to naive repeated subtraction
It's not the run-time of cdqe or movsxd vs. mov that's relevant, it's the different starting values for your loop, resulting in a different iteration count, especially for pathological cases.
Clang without optimization compiled your source exactly the way it was written, doing the shift on an int and then sign-extending the result to long. The shift-count UB is invisible to the compiler with optimization disabled because, for consistent debugging, it assumes variable values can change between statements, so the behaviour depends on what the target machine does with a shift-count by the operand-size.
When compiling for x86-64, that results in long j = (long)(y<<0);, i.e. long j = y;, rather than having those bits at the top of a 64-bit value.
x86 scalar shifts like shl eax, cl mask the count with &31 (except with 64-bit operand size) so the shift used a count of 32 % 32 == 0. AArch64 would I think saturate the shift count, i.e. let you shift out all the bits.
Notice that it does a 32-bit operand-size shl eax, cl and then sign-extends the result with cdqe, instead of doing a sign-extending reload of y and then a 64-bit operand-size shl rax,cl.
Your loop has a data-dependent iteration count
If you single-step with a debugger, you could see the local variable values accurately. (That's the main benefit of an un-optimized debug build, which is not what you should be benchmarking.) And you can count iterations.
while (x >= y) {
while (x < j) --i, j >>= 1;
ret += 1 << i, x -= j;
}
With j = y, if we enter the outer loop at all, then the inner loop condition is always false.
So it never runs, j stays constant the whole time, and i stays constant at 32.
1<<32 again compiles to a variable-count shift with 32-bit operand-size, because 1 has type int. (1LL has type long long, and can safely be left-shifted by 32). On x86-64, this is just a slow way to do ret += 1;.
x -= j; is of course just x -= y;, so we're counting how many subtractions to make x < y.
It's well-known that division by repeated subtraction is extremely slow for large quotients, since the run time scales linearly with the quotient.
You do happen to get the right result, though. Yay.
BTW, long is only 32-bit on some targets like Windows x64 and 32-bit platforms; use long long or int64_t if you want a type twice the width of int. And maybe static_assert to make sure int isn't that wide.
With optimization enabled, I think the same things would still hold true: clang looks like it's compiling to similar asm just without the store/reload. So it's effectively / de-facto defining the behaviour of 1<<32 to just compile to an x86 shift instruction.
But I didn't test, that's just from a quick look at the asm https://godbolt.org/z/M33vqGj5P and noting things like mov r8d, 1 ; shl r8d, cl (32-bit operand-size) ; add eax, r8d
|
71,509,445 | 71,522,528 | How do I get rid of the default macOS menu items in wxWidgets? | "Toggle Sidebar" is the only item I have added, how do I remove the other items which I don't really need? I'm stuck
I'm on macOS 12.2 with wxWidgets v3.1.5
here's the code I used to add the menu:
wxMenuBar *mainMenuBar = new wxMenuBar();
wxMenu *viewMenu = new wxMenu();
viewMenu->Append(wxID_ANY, "Toggle Sidebar");
mainMenuBar->Append(viewMenu, "&View");
this->SetMenuBar(mainMenuBar);
| As said in the comments, calling SetMenuBar() on the frame first and then appending the menus fixed the issue.
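In code, the fix is just a reordering of the snippet from the question (sketch, untested): attach the still-empty menu bar to the frame first, then append the menus to it:

```cpp
wxMenuBar *mainMenuBar = new wxMenuBar();
this->SetMenuBar(mainMenuBar);            // attach to the frame first

wxMenu *viewMenu = new wxMenu();
viewMenu->Append(wxID_ANY, "Toggle Sidebar");
mainMenuBar->Append(viewMenu, "&View");   // then populate the attached bar
```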
|
71,509,586 | 71,509,670 | usage of import with plain header files | Is it good practice to get rid of #include and only use the import keyword instead even for headers (like <span> or "Foo.h")? Are there any benefits to this? Any possible downsides? Does it add to the length of build time?
cppreference has an example in which it says this:
import <set>; // imports a synthesized header unit formed from header
What exactly does synthesized mean in this context?
|
Are there any benefits to this?
Importing synthesised header units instead of including the header may satisfy a style guide that wants to use one type of directive for both modules and headers.
Any possible downsides?
It won't work in pre-C++20 code, nor with compilers that haven't yet implemented header-unit imports. This won't matter if you also use modules, since those won't work pre-C++20 either.
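Regarding the build-time part of the question (not covered above): header units must be precompiled once per build, after which importing is typically cheaper than repeated textual inclusion. A rough sketch of the workflow using GCC's -fmodules-ts mode (exact flags vary by compiler and version, so treat this as illustrative only):

```sh
# Pre-build a header unit for the standard header <set>:
g++ -std=c++20 -fmodules-ts -x c++-system-header set

# Compile a translation unit that contains `import <set>;`:
g++ -std=c++20 -fmodules-ts main.cpp
```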
|
71,509,935 | 71,511,394 | How does mixing relaxed and acquire/release accesses on the same atomic variable affect synchronises-with? | I have a question about the definition of the synchronises-with relation in the C++ memory model when relaxed and acquire/release accesses are mixed on one and the same atomic variable. Consider the following example consisting of a global initialiser and three threads:
int x = 0;
std::atomic<int> atm(0);
[thread T1]
x = 42;
atm.store(1, std::memory_order_release);
[thread T2]
if (atm.load(std::memory_order_relaxed) == 1)
atm.store(2, std::memory_order_relaxed);
[thread T3]
int value = atm.load(std::memory_order_acquire);
assert(value != 1 || x == 42); // Hopefully this is guaranteed to hold.
assert(value != 2 || x == 42); // Does this assert hold necessarily??
My question is whether the second assert in T3 can fail under the C++ memory model. Note that the answer to this SO question suggests that the assert could not fail if T2 used load/acquire and store/release; please correct me if I got this wrong. However, as stated above, the answer seems to depend on how exactly the synchronises-with relation is defined in this case. I was confused by the text on cppreference, and I came up with the following two possible readings.
The second assert fails. The store to atm in T1 could be conceptually understood as storing 1_release where _release is annotation specifying how the value was stored; along the same lines, the store in T2 could be understood as storing 2_relaxed. Hence, if the load in T3 returns 2, the thread actually read 2_relaxed; thus, the load in T3 does not synchronise-with the store in T1 and there is no guarantee that T3 sees x == 42. However, if the load in T3 returns 1, then 1_release was read, and therefore the load in T3 synchronises-with the store in T1 and T3 is guaranteed to see x == 42.
The second assert success. If the load in T3 returns 2, then this load reads a side-effect of the relaxed store in T2; however, this store of T2 is present in the modification order of atm only if the modification order of atm contains a preceding store with a release semantics. Therefore, the load/acquire in T3 synchronises-with the store/release of T1 because the latter necessarily precedes the former in the modification order of atm.
At first glance, the answer to this SO question seems to suggest that my reading 1 is correct. However, that answer seems to be different in a subtle way: all stores in the answer are release, and the crux of the question is to see that load/acquire and store/release establishes synchronises-with between a pair of threads. In contrast, my question is about how exactly synchronises-with is defined when memory orders are heterogeneous.
I actually hope that reading 2 is correct since this would make reasoning about concurrency easier. Thread T2 does not read or write any memory other than atm; therefore, T2 itself has no synchronisation requirements and should therefore be able to use relaxed memory order. In contrast, T1 publishes x and T3 consumes it -- that is, these two threads communicate with each other so they should clearly use acquire/release semantics. In other words, if interpretation 1 turns out to be correct, then the code T2 cannot be written by thinking only about what T2 does; rather, the code of T2 needs to know that it should not "disturb" synchronisation between T1 and T3.
In any case, knowing what exactly is sanctioned by the standard in this case seems absolutely crucial to me.
| Because you use relaxed ordering on a separate load & store in T2, the release sequence is broken and the second assert can trigger (although not on a TSO platform such as X86).
You can fix this by either using acq/rel ordering in thread T2 (as you suggested) or by modifying T2 to use an atomic read-modify-write operation (RMW), like this:
[Thread T2]
int ret;
do {
int val = 1;
ret = atm.compare_exchange_weak(val, 2, std::memory_order_relaxed);
} while (ret != 0);
The modification order of atm is 0-1-2 and T3 will pick up on either 1 or 2 and no assert can fail.
Another valid implementation of T2 is:
[thread T2]
if (atm.load(std::memory_order_relaxed) == 1)
{
atm.exchange(2, std::memory_order_relaxed);
}
Here the RMW itself is unconditional and it must be accompanied by an if-statement & (relaxed) load to ensure that the modification order of atm is 0-1 or 0-1-2
Without the if-statement, the modification order could be 0-2 which can cause the assert to fail. (This works because we know there is only one other write in the whole rest of the program. Separate if() / exchange is of course not in general equivalent to compare_exchange_strong.)
In the C++ standard, the following quotes are related:
[intro.races]
A release sequence headed by a release operation A on an atomic object M is a maximal contiguous subsequence
of side effects in the modification order of M, where the first operation is A, and every subsequent
operation is an atomic read-modify-write operation.
[atomics.order]
An atomic operation A that performs a release operation on an atomic object M synchronizes with an atomic
operation B that performs an acquire operation on M and takes its value from any side effect in the release
sequence headed by A.
A related question discusses why an RMW works in a release sequence.
|
71,510,209 | 71,510,324 | SFINAE still produces an error while using an exception specification | I am learning about SFINAE in C++. After reading about it, I am trying out different examples to better understand the concept. Below I have given two snippets; I can understand the first, but not the second, where I have used noexcept in the declaration.
Example 1: I am able to understand this.
#include <iostream>
template<typename T>
decltype(func(T())) timer(T a)
{
std::cout<<"template timer called"<<std::endl;
return func(T());
}
void timer(...)
{
std::cout<<"ordinary timer called"<<std::endl;
}
int main()
{
timer(5);
return 0;
}
The output of the above program(as expected) is:
ordinary timer called
I can understand that due to SFINAE the deduction will result in failure and so the ordinary timer will be called.
Example 2: Why do we get error in this example.
#include <iostream>
template<typename T>
void timer(T a) noexcept(func(T()))
{
std::cout<<"template timer called"<<std::endl;
}
void timer(...)
{
std::cout<<"ordinary timer called"<<std::endl;
}
int main()
{
timer(5);
return 0;
}
This second example results in an error saying that func was not declared. My question is: why isn't the ordinary timer selected here due to deduction failure, just as in example 1?
I expected that here also ordinary timer should have been called but it isn't the case. Can someone explain the reason behind it.
| The problem is that exception specifications do not participate in template argument deduction (TAD). This is explained in more detail below. Source: C++ Templates: The Complete Guide, page 290.
Case 1
Here we consider example 1. In this case, since there is no func, the invalid expression in the declaration of the function template timer triggers a template argument deduction failure (aka SFINAE), which allows the call timer(5); to succeed by selecting the ordinary function timer, and you get the expected output.
Case 2
Here we consider example 2. In this case, because exception specifications do not participate in TAD, overload resolution selects the function template version, and so when the exception specification is instantiated at a later time, the program becomes ill-formed and you get the mentioned error.
In other words, exception specifications are only instantiated when needed just like default call arguments.
|
71,510,249 | 71,511,141 | Different char values require different sizes in file | I have this code snippet to write a buffer to a file
int WriteBufferToFile(std::string path, const char* buffer, int bufferSize) {
std::ofstream ofs;
ofs.open(path);
if (!ofs) {
return 1;
}
ofs.write(buffer, bufferSize);
if (!ofs) {
return 2;
}
ofs.close();
return 0;
}
Case 1
std::vector<char> buffer(1000000, 0);
WriteBufferToFile("myRawData", buffer.data(), 1000000);
Case 2
std::vector<char> buffer(1000000);
for (int i = 0; i < 1000000; i++) {
buffer[i] = char(i);
}
WriteBufferToFile("myRawData2", buffer.data(), 1000000);
In Case 1 I'm writing 1 MB of just zeros to a file, which will also be 1 MB in size. But in the second case I write arbitrary chars (still 1 MB in RAM) to a file, and now the file size increases (in my tests it seems to happen especially when chars >= 10 are contained).
Why is that, and is there a way to fix this?
| I'm going to take a wild guess and say that you're running this code on a Windows system.
Here's what I think is probably happening.
ofs.open(path) is opening the file in text mode. On Windows, text mode means that every newline character (1 byte) will be replaced by a CRLF sequence (2 bytes). Your buffer contains 1 million characters, filled with the values 0 to 999999 modulo 256; the byte 10 ('\n') occurs whenever i % 256 == 10. So 1 in 256 characters (3,907 to be exact) will be replaced by a 2-byte sequence, which accounts for the file size difference.
To fix this, open the file in binary mode:
ofs.open(path, std::ios_base::out | std::ios_base::binary);
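A compilable sketch putting that together (the size-checking helper is my own addition, just for verification):

```cpp
#include <fstream>
#include <string>

// Same interface as the question's function, but opened in binary mode so
// bytes are written verbatim (no '\n' -> CR-LF expansion on Windows).
int WriteBufferToFile(const std::string& path, const char* buffer, int bufferSize) {
    std::ofstream ofs(path, std::ios_base::out | std::ios_base::binary);
    if (!ofs) return 1;
    ofs.write(buffer, bufferSize);
    if (!ofs) return 2;
    return 0;  // ofs closes itself when it goes out of scope
}

// Helper (my own addition) to check the resulting size on disk.
long FileSizeOnDisk(const std::string& path) {
    std::ifstream ifs(path, std::ios_base::binary | std::ios_base::ate);
    return ifs ? static_cast<long>(ifs.tellg()) : -1;
}
```

With this, writing 256 bytes with values 0..255 produces a file of exactly 256 bytes on every platform.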
|
71,510,298 | 71,510,384 | Dangling reference solution | T&& operator[](std::size_t n) && noexcept {std::cout<<"move"<<std::endl; return std::move(vec[n]); }
I cannot get the expected result in this part.
I predict a dangling reference happens.
T operator[](std::size_t n) && noexcept {std::cout<<"move"<<std::endl; return std::move(vec[n]); }
This works well.
Why doesn't T&& extend the lifetime?
#include <iostream>
#include <stdlib.h>
#include<vector>
template<typename T, typename Alloc = std::allocator<T>>
class my_vector
{
std::vector<T, Alloc> vec;
public:
my_vector(std::initializer_list<T> init) : vec{init} {}
T&& operator[](std::size_t n) && noexcept {std::cout<<"move"<<std::endl; return std::move(vec[n]); } // referencing the element after the temporary dies means referencing garbage
//T operator[](std::size_t n) && noexcept {std::cout<<"move"<<std::endl; return std::move(vec[n]); }
};
int main()
{
auto&& vec = my_vector<int>{1, 2, 3}[0]; //case3 move
std::cout<<vec<<std::endl; //0
}
| For auto&& vec = my_vector<int>{1, 2, 3}[0];, since the reference isn't bound to the temporary (i.e. my_vector<int>{1, 2, 3}) directly, the temporary's lifetime won't be extended.
In general, the lifetime of a temporary cannot be further extended by "passing it on": a second reference, initialized from the reference variable or data member to which the temporary was bound, does not affect its lifetime.
On the other hand, if you change the return type of operator[] to T, then what it returns (i.e. my_vector<int>{1, 2, 3}[0]) is a temporary and gets bound to vec, then its lifetime is extended to the lifetime of vec.
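A reduced version of the question's class (my own cut-down sketch) showing the by-value variant behaving correctly:

```cpp
#include <cstddef>
#include <initializer_list>
#include <utility>
#include <vector>

template<typename T>
class my_vector {
    std::vector<T> vec;
public:
    my_vector(std::initializer_list<T> init) : vec{init} {}
    // Returning by value: the call expression is a prvalue, and binding a
    // prvalue to a reference extends the lifetime of the returned object.
    T operator[](std::size_t n) && noexcept { return std::move(vec[n]); }
};

int first_element() {
    auto&& v = my_vector<int>{1, 2, 3}[0];  // the returned int lives as long as v
    return v;
}
```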
|
71,510,314 | 71,522,125 | How to wrap a class derived from vector in swig | I want to wrap a class derived from std::vector, with some extended functions, into C# with SWIG. The functions from vector are also needed, like push_back to add a new item into the class (which is named Add in C#).
I tried with the default SWIG settings; IntArray is valid in C#, but the vector's functions are invalid.
if i try to define a vector in the .i file:
namespace std{
%template(ScalarVec) vector<ScalarTest>;
}
a class named ScalarVec that has vector-like functions is valid in C#, but without the extended function.
How do I wrap ScalarArray to C# with SWIG?
The following is a simple example.
#include <vector>
#include <numeric>
namespace test
{
struct ScalarTest {
int val;
};
struct ScalarArray : public std::vector<ScalarTest>
{
int sum() const {
int res = 0;
for (const ScalarTest &item : *this) {
res += item.val;
}
return res;
}
};
}
| SWIG is picky about order of declarations. Below correctly wraps your example code and can call the sum function. I'm not set up for C# so the demo is created for Python:
test.i
%module test
%{
// Code to wrap
#include <vector>
#include <numeric>
namespace test
{
struct ScalarTest {
int val;
};
struct ScalarArray : public std::vector<ScalarTest>
{
int sum() const {
int res = 0;
for (const ScalarTest &item : *this) {
res += item.val;
}
return res;
}
};
}
%}
namespace test
{
struct ScalarTest {
int val;
};
}
%include <std_vector.i>
// Must declare ScalarTest above before instantiating template here
%template(ScalarVec) std::vector<test::ScalarTest>;
// Now declare the interface for SWIG to wrap
namespace test
{
struct ScalarArray : public std::vector<ScalarTest>
{
int sum() const;
};
}
demo.py
import test
x = test.ScalarArray()
a = test.ScalarTest()
a.val = 1
b = test.ScalarTest()
b.val = 2
x.push_back(a)
x.push_back(b)
print('sum',x.sum())
print(x[0].val,x[1].val)
Output:
sum 3
1 2
|
71,510,678 | 71,510,789 | Static class variable initializing to 100 by itself | This is my first question on here, so excuse me if I've formatted everything in a wrong way.
So, to get to the problem - this is a university assignment of mine. The goal is to create a class called Student, which has a few fields, and store the instances in an array of Student objects. One of the tasks is to have a static variable inside the class that keeps track of how many Student instances have been created. To clarify, I have a getData() method that asks the user for the values, and then sets the current object's values to those entered (basically just like a constructor, which makes the constructors obsolete, but they want us to do it that way for some reason). In the getData() function, the static count variable gets raised by 1, as well as in the constructors, and gets decremented in the destructor.
The issue is that for some reason, the Student::amount variable sets itself to 100. Every time that I try to access it from the main() function, it's 100 plus the amount of students created, so if we have created 2 students, Student::amount will be equal to 102. I've nowhere explicitly set it to that number, which also matches the size of the array, by the way.
I'm still rather new to C++, I've read and watched some material on how OOP works here, but I've barely even scratched the surface.
Sorry again if my question is badly formulated or/and badly formatted, and I hope you can help me with this!
#include <iostream>
#include <string>
using namespace std;
class Date {
...
};
class Student {
// Fields
private:
string name = "";
string PID = "";
int marks[5]{ 0 };
short course = 0;
public:
// Count of all students
static int amount;
// Getters
...
// Setters
...
// Constructors
Student(); // Default one, Student::amount gets raised by 1 there
Student(string name, string PID, int marks[5], short course);
// Destructor
~Student();
// Methods
void getData();
void display(); // Display the information of a student
};
// Array of students
Student students[100];
// Student::Count
int Student::amount; // Initializes with 0 if not explicitly initialized
void Student::getData() {
cin.ignore();
cout << "\nName: "; getline(cin, name);
cout << "PID: "; getline(cin, PID);
cout << "Marks:\n";
for (int i = 0; i < 5; i++)
{
cin >> marks[i];
}
cout << "Course: "; cin >> course;
Student::amount++;
}
Student::Student(string name, string PID, int marks[5], short course) {
this->setName(name);
this->setPID(PID);
this->setMarks(marks);
this->setCourse(course);
Student::amount++;
}
| The global declaration Student students[100]; calls the default Student constructor 100 times, before main is reached. According to your comment (you don't supply the constructor implementation), that constructor increases amount by 1.
A solution here is to remove Student::amount and instead use
std::vector<Student> students;
students.size() will give you the number of students in that container. Use push_back to put students into the vector.
A very crude alternative that is at least broadly compliant with the question's constraints is to remove the amount increment from the default constructor, and from all other places apart from the four-argument constructor.
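A rough sketch of the vector-based approach (the Student fields here are trimmed down, and the names are illustrative):

```cpp
#include <cstddef>
#include <string>
#include <vector>

struct Student {
    std::string name;
    short course = 0;
};

// The container's size replaces the hand-maintained static counter:
// only students that were actually pushed into the vector are counted.
std::size_t demo() {
    std::vector<Student> students;
    students.push_back({"Alice", 1});
    students.push_back({"Bob", 2});
    return students.size();  // number of students actually created
}
```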
|
71,511,313 | 71,511,413 | Qchart Remove the line point from bottom | I am new to Qt and I want to build a chart. In the chart, I want to show only the line. You can see the attached picture. How can I remove this point? Thank you.
| You should hide the legend:
chart->legend()->hide();
For example :
QChart *chart = new QChart();
chart->addSeries(series);
chart->setTitle("Simple areachart example");
chart->createDefaultAxes();
chart->axes(Qt::Horizontal).first()->setRange(0, 20);
chart->axes(Qt::Vertical).first()->setRange(0, 10);
chart->legend()->hide();
QChartView *chartView = new QChartView(chart);
chartView->setRenderHint(QPainter::Antialiasing);
Before/after screenshots (not reproduced here) showed the legend marker disappearing.
I use this Qt Example
|
71,512,488 | 71,512,538 | Only Printing the First Value of Linked List | I have no idea why the display function is not displaying anything other than the first node's data. I've tried switching the while (p != NULL) to while (p->next != NULL), but when I do that, instead of only the first node's data being displayed, no data is displayed at all.
#include <iostream>
using namespace std;
class Node {
public:
int no;
Node* next;
};
Node* createNode(int no1) {
Node* n = new Node();
n->no = no1;
n->next = NULL;
return n;
}
void addValue(int x, Node** head) {
//insert first node into linked list
Node* n = createNode(x),*p = *head;
if (*head == NULL) {
*head = n;
}
//insert second node onwards into linked list
else {
while (p->next!= NULL) {
p->next = n;
p = p->next;
}
}
}
void display(Node *head) {
Node* temp = head;
// temp is equal to head
while (temp->next!=NULL) {
cout << temp->no;
temp = temp->next;
}
}
int main() {
int num; char choice;
Node* head = NULL;
do {
cout << "Enter a number : ";
cin >> num;
addValue(num,&head);
cout << "Enter [Y] to add another number : ";
cin >> choice;
} while (choice == 'Y');
cout << "List of existing record : ";
display(head);
return 0;
}
I've tried changing the contents of the else while loop in the addRecord function to p = p->next; p->next = n; in that order, to no avail.
| In the while loop, it should be
while (p->next!= NULL) {
p = p->next;
}
p->next = n;
Traverse until the end of the linked list is reached, and then add the new entry.
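For completeness, a sketch with the fixed loop; note that display has the mirror-image problem (looping on temp->next skips the last node), so the traversal below loops on temp itself:

```cpp
struct Node {
    int no;
    Node* next;
};

// Fixed insertion: walk to the tail first, then link the new node once.
void addValue(int x, Node** head) {
    Node* n = new Node{x, nullptr};
    if (*head == nullptr) { *head = n; return; }
    Node* p = *head;
    while (p->next != nullptr)
        p = p->next;
    p->next = n;
}

// Stand-in for display(): visits every node, including the last one,
// by looping while temp != nullptr rather than temp->next != nullptr.
int sumList(Node* head) {
    int sum = 0;
    for (Node* temp = head; temp != nullptr; temp = temp->next)
        sum += temp->no;
    return sum;
}
```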
|
71,512,649 | 71,512,930 | "Failed to specialize alias template" errors for the most simple SFINAE bool condition | I'm trying to implement a simple conditional implementation and failing tremendously... I tried this:
template<class T, bool COND=0> class A
{
public:
template< typename TT=T >
std::enable_if_t<!COND> METHOD() const { };
template< typename TT=T >
std::enable_if_t<COND> METHOD() const { };
};
and this:
template<class T, bool COND=0> class A
{
public:
template< typename TT=T, std::enable_if_t<!COND, int> = 0 >
void METHOD() const { };
template< typename TT=T, std::enable_if_t<COND, int> = 0 >
void METHOD() const { };
};
and quite a few others... and always get "Failed to specialize alias template". What am I doing wrong?
EDIT: Using newest MSVC 2022, just A<int, 0> will trigger the error.
| What about as follows?
template<class T, bool COND=0> class A
{
public:
template< bool CC=COND >
std::enable_if_t<!CC> METHOD() const { };
template< bool CC=COND >
std::enable_if_t<CC> METHOD() const { };
};
I mean... if you want to enable/disable a method of a class through std::enable_if, the test has to depend on a template parameter (type or value) of the method itself, not of the class.
So
std::enable_if_t<!COND>
doesn't work because COND is a template parameter of the class; you have to use a template parameter of the method, so you can add a template value CC, defaulted to COND
template< bool CC=COND >
and check CC instead of COND
std::enable_if_t<!CC>
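A compilable version of that pattern (the int return values are my addition, just to make the selected overload observable):

```cpp
#include <type_traits>

template<class T, bool COND = false>
class A {
public:
    // CC defaults to the class's COND, but is a parameter of the method
    // itself, so substitution happens per call and SFINAE applies.
    template<bool CC = COND>
    std::enable_if_t<!CC, int> METHOD() const { return 0; }  // selected when COND is false
    template<bool CC = COND>
    std::enable_if_t<CC, int> METHOD() const { return 1; }   // selected when COND is true
};
```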
|
71,512,973 | 71,513,105 | Usage of decltype in return type of function template removes error due to exception specification | I saw an answer to a question here. There the author of the answer made use of the fact that
exception specifications do not participate1 in template argument deduction.
In the answer linked above it is explained why the following doesn't compile:
#include <iostream>
template<typename T>
void timer(T a) noexcept(func(T()))
{
std::cout<<"template timer called"<<std::endl;
}
void timer(...)
{
std::cout<<"ordinary timer called"<<std::endl;
}
int main()
{
timer(5);//won't compile
return 0;
}
it is said that because exception specifications do not participate in TAD, overload resolution selects the function template version, and so when the exception specification is instantiated at a later time, the program becomes ill-formed and we get the error.
Now I modified the above program to see if I understand the topic correctly. In particular, I have added decltype(func(T())) as the return type of the function template version (as shown below) and then the error goes away:
template<typename T>
decltype(func(T())) timer(T a) noexcept(func(T())) //return type added here
{
std::cout<<"template timer called"<<std::endl;
return func(T());//return statement added here
}
void timer(...)
{
std::cout<<"ordinary timer called"<<std::endl;
}
int main()
{
timer(5); //works now?
return 0;
}
My question is: why, when we add the return type to the function template as shown above, does the error go away?
PS: My question is not about why the first example gives an error, but rather why the modified example doesn't.
1This quote is from C++ Templates: The Complete Guide.
| Since there is no func, during the substitution of the template argument(s) in the return type of the function template we get a substitution failure, and due to SFINAE this function template is not added to the overload set. In other words, it is ignored.
Thus the call timer(5); uses the ordinary function timer since it is the only viable option now that the function template has been ignored. Hence the program compiles and gives the output:
ordinary timer called
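The selection can be made observable with a small marker (my own addition; no func exists anywhere, so the template overload drops out and the fallback runs):

```cpp
#include <string>

std::string last_called;  // marker, my own addition

// No 'func' is declared anywhere. func(T()) is a dependent expression, so
// its lookup is deferred; when it fails during substitution of the return
// type, that is a failure in the immediate context -> SFINAE, not an error.
template<typename T>
auto timer(T) -> decltype(func(T())) {
    last_called = "template";
    return func(T());
}

void timer(...) { last_called = "ordinary"; }
```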
|
71,513,265 | 71,515,300 | Visual Studio Code: Theme One Monokai: Change / Custom Highlight Color for C/C++ `const` | I am using the (amazing) One Monokai theme in Visual Studio Code. One thing that bothers me is that variable modifiers like const and control-flow keywords like for, if, while, ... are displayed using the same color. Based on this tutorial, I tried a custom coloring by adding to settings.json:
"editor.semanticTokenColorCustomizations": {
"[One Monokai]": {
"rules": {
"<KEYWORD>": {
"foreground": "#A2142F"
}
}
}
}
I tried multiple <KEYWORDS> listed here, where I expected either readonly or property to work. I tried a different, unambiguous, keyword, e.g. variable - that worked as expected, so the general "frame" is working.
Any idea what keyword has to be? Or do I have to make a workaround using a third party package for specific word highlighting?
| With the comment pushing me in the right direction, and this tutorial, the working code is:
"editor.tokenColorCustomizations": {
"[One Monokai]": {
"textMateRules": [
{
"scope": "storage.modifier.specifier.const.cpp",
"settings": {
"foreground": "#A2142F"
}
}
]
}
}
|
71,513,853 | 71,514,308 | Segmentation fault when using threads on function with large arrays -C++ | I am using threads for the first time and came across a weird segmentation error whenever the called function takes very large arrays.
#include <iostream>
#include <thread>
#include <cmath>
const int dimension = 100000; // Dimension of the array
// Create a simple function of an array
void absolut(double *vec) {
double res = 0.;
for (int i = 0; i < dimension; i++) {
res += vec[i] * vec[i];
}
std::cout << std::sqrt(res) << std::endl;
}
int main() {
// Define arrays
double p[dimension], v[dimension];
for (int i = 1; i < dimension; i++) {
p[i] = 1./double(i);
v[i] = 1./double(i)/double(i);
}
// use multithreading
std::thread t1(absolut, p);
std::thread t2(absolut, v);
t1.join();
t2.join();
return 0;
}
The program runs fine like this, but if I increase the dimension of the arrays by a factor 10, then I get a segmentation fault. Does anybody know why this occurs and how to fix it?
| double* p = new double[dimension];
double* v = new double[dimension];
The two arrays in main are allocated on the stack, which has a fixed and fairly small size limit (commonly 1-8 MB). At dimension = 100000 the arrays take 2 x 800 KB, which fits; at ten times that they take 2 x 8 MB, which overflows the stack and causes the segmentation fault. Allocating them dynamically, as above (or better, with std::vector), puts the data on the heap, which is not subject to that limit.
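A sketch of the std::vector variant (my own rewrite of the question's setup), which puts the big buffers on the heap:

```cpp
#include <vector>

// Same computation as the question's absolut(), over a pointer + length.
double norm_squared(const double* vec, int n) {
    double res = 0.0;
    for (int i = 0; i < n; i++)
        res += vec[i] * vec[i];
    return res;
}

double demo(int dimension) {
    // The vector's elements live on the heap; only the small vector object
    // itself occupies stack space, so dimension = 1000000 (8 MB) is fine.
    std::vector<double> p(dimension, 0.0);
    for (int i = 1; i < dimension; i++)
        p[i] = 1.0 / double(i);
    // A thread can still receive the raw buffer: std::thread t(absolut, p.data());
    return norm_squared(p.data(), dimension);
}
```

For p as defined in the question, the sum of 1/i^2 converges to pi^2/6, so the result is close to 1.6449.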
|
71,514,176 | 71,514,236 | Initialization of member variable via parentheses doesn't work | For the following codes, can anyone explain why we can't initialize the variable data by parentheses?
#include <iostream>
using namespace std;
class X
{
private:
int data(1); // wrong here
public:
void print()
{
cout << data << endl;
}
};
int main()
{
X temp;
temp.print();
return 0;
}
| There isn't actually much to explain; it's just not valid syntax. Default member initializers are
class X
{
private:
int data{1}; // ok
int data2 = 42; // also ok
public:
void print()
{
cout << data << endl;
}
};
int data(1); is not valid syntax for a default member initializer. See here for details: https://en.cppreference.com/w/cpp/language/data_members
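Both valid forms in one compilable snippet:

```cpp
struct X {
    int data{1};     // brace form: OK
    int data2 = 42;  // equals form: OK
    // int data3(7); // parenthesis form: not valid here
};
```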
|
71,515,071 | 71,515,245 | Accessing entries of multidimensional variables using the gams-c++ api | I am generating the following gams program with my c++ program
variable x(*) /1.lo = -1,1.up = 1,2.lo = -1,2.up = 1/;
variable obj; equation eqobj; eqobj.. obj =e= x['1']+x['2'];
parameter ms, ss, lbd, ubd, cpu;
model mod /all/;
option decimals = 8;
solve mod minimizing obj using minlp;
lbd=mod.objest; ubd=obj.l;
ms=mod.modelstat; ss=mod.solvestat; cpu=mod.resusd;
and using the gams-c++ api to let gams solve it. Afterwards, I want to obtain the results in c++ using this method:
auto value = m_job.outDB().getVariable(var).findRecord().level();
where m_job is my GAMSJob, which was used to solve the program above, and var is a string containing the name of the variable whose value I want to obtain.
This methods works perfectly with one dimensional variables, e.g. when my variable looks as follows
variable x;
x.up = 1;
x.lo = 0;
and I am passing "x" as var in the code above.
I have tried to access the entries of the multidimensional variables now with a string like "x['1']", but then always 0 is returned. What is the correct way to obtain the values I want, i.e. the entries of the multidimensional variable?
| I guess you want to iterate over all records of x? There is actually an example in the tutorial doing this for a two-dimensional variable:
for (GAMSVariableRecord rec : m_job.outDB().getVariable("x"))
cout << "x(" << rec.key(0) << "," << rec.key(1) << "):" << " level=" << rec.level() << " marginal=" << rec.marginal() << endl;
|
71,515,127 | 71,516,243 | D3D11CreateDeviceAndSwapChain Fails With E_ACCESSDENIED When Using Same HWND | If I create a window and pass the HWND to D3D11CreateDeviceAndSwapChain, it works. However, after I release the device, context, swapchain, etc and try to repeat the process using the same HWND, D3D11CreateDeviceAndSwapChain fails with E_ACCESSDENIED. This tells me something must be holding onto the HWND, but what? I release all my global variables in the destructor of the class. Anyone have an idea what the problem is?
~decoder()
{
m_VertexShader->Release();
m_VertexShader = nullptr;
m_PixelShader->Release();
m_PixelShader = nullptr;
m_InputLayout->Release();
m_InputLayout = nullptr;
device->Release();
device = nullptr;
context->Release();
context = nullptr;
swapchain->Release();
swapchain = nullptr;
rendertargetview->Release();
rendertargetview = nullptr;
m_SamplerLinear->Release();
m_SamplerLinear = nullptr;
HRESULT hr = S_OK;
hr = decoder_transform->ProcessMessage(MFT_MESSAGE_NOTIFY_END_OF_STREAM, NULL);
hr = decoder_transform->ProcessMessage(MFT_MESSAGE_NOTIFY_END_STREAMING, NULL);
hr = decoder_transform->ProcessMessage(MFT_MESSAGE_COMMAND_FLUSH, NULL);
decoder_transform.Release();
color_transform.Release();
hr = MFShutdown();
}
| While the documentation for D3D11CreateDeviceAndSwapChain does not mention why this is happening, the function is essentially just a wrapper around creating a D3D11Device and a swap chain. The documentation for IDXGIFactory2::CreateSwapChainForHwnd does go into detail on why this is happening.
Because you can associate only one flip presentation model swap chain at a time with an HWND, the Microsoft Direct3D 11 policy of deferring the destruction of objects can cause problems if you attempt to destroy a flip presentation model swap chain and replace it with another swap chain. For more info about this situation, see Deferred Destruction Issues with Flip Presentation Swap Chains.
The documentation regarding Deferred Destruction Issues with Flip Presentation Swap Chains advises calling ID3D11DeviceContext::ClearState followed by ID3D11DeviceContext::Flush.
However, if an application must actually destroy an old swap chain and create a new swap chain, the application must force the destruction of all objects that the application freed. To force the destruction, call ID3D11DeviceContext::ClearState (or otherwise ensure no views are bound to pipeline state), and then call Flush on the immediate context. You must force destruction before you call IDXGIFactory2::CreateSwapChainForHwnd, IDXGIFactory2::CreateSwapChainForCoreWindow, or IDXGIFactory2::CreateSwapChainForComposition again to create a new swap chain.
|
71,515,356 | 71,515,571 | c++ doesn't set some of the array elements to nullptr | Inside of int main() I declared double* arr = new double[2]; (an array whose items should be set to a double or to nothing)
then i tried to change the values of the array elements from another function void func(double* arr_pointer[2]) by setting them to
arr_pointer[1] = nullptr;
arr_pointer[0] = nullptr;
but after I checked whether their values changed to nullptr (in the main function) by printing out
cout << (nullptr == &arr[0]) << ' ' << (nullptr == &arr[1])
i got 1 0 in the console,
why?
minimal reproducible example:
#include <iostream>
void func(double* arr_pointer[2], double& abc) {
if (abc > 0) {
(*arr_pointer)[0] = 20;
(*arr_pointer)[1] = 22;
}
else if (abc == 0) {
(*arr_pointer)[0] = 10;
arr_pointer[1] = nullptr;
}
else {
arr_pointer[1] = nullptr;
arr_pointer[0] = nullptr;
}
}
int main() {
double* arr = new double[2];
double abc = -1.2;
func(&arr, abc);
std::cout << (nullptr == &arr[0]) << ' ' << (nullptr == &arr[1]);
}
|
double* arr = new double[2];
(an array that its items should be set to double or to nothing)
arr is a pointer to a double. It points to the first element of an array of 2 doubles. The elements are double objects; they cannot be "nothing" objects. Furthermore, the array doesn't contain any pointers.
Sidenote: Avoid owning bare pointers. Prefer using std::vector.
void func(double* arr_pointer[2])
Function parameters of array types are adjusted to be pointers to element of such array. The parameter of this function is double** i.e. pointer to pointer to double.
You will not be able to call func(arr) because a double* is not implicitly convertible to double**.
cout << (nullptr == &arr[0]) << ' ' << (nullptr == &arr[1])
You are comparing the addresses of the array elements with null. The address of neither element should compare equal to null unless you've modified arr after its initialisation.
Regarding your edit:
func(&arr, abc)
arr is a singular pointer object. It is not an element in an array of multiple pointers.
arr_pointer[1] = nullptr;
arr_pointer[1] points past the end of the arr object, and assigning to it causes undefined behaviour. If the output of the program is something that you don't expect, it's probably because the behaviour of the program is undefined. If the program is what you do expect, it's probably in spite of the behaviour being undefined. Don't do this.
arr_pointer[0] = nullptr;
Since arr_pointer points to arr, indirecting through arr_pointer and assigning will cause arr to be assigned. In other words, this will set arr to point to null. Since arr was the sole pointer to the dynamic array, this means that the allocation can no longer be deallocated; This is a memory leak.
After this assignment, it's questionable whether &arr[0] is allowed because that subscript operator would be indirecting through a null pointer. However if we assume it to be allowed, then it's reasonable that arr == nullptr implies &arr[0] == nullptr.
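As an aside beyond the original answer: if each slot really should hold "a double or nothing", std::optional<double> models that directly, with no pointer juggling (a sketch using the question's three branches):

```cpp
#include <optional>
#include <vector>

// Each element is either a double or empty - no nullptr tricks needed.
std::vector<std::optional<double>> make_slots(double abc) {
    std::vector<std::optional<double>> arr(2);
    if (abc > 0) { arr[0] = 20.0; arr[1] = 22.0; }
    else if (abc == 0) { arr[0] = 10.0; }  // arr[1] stays empty
    // abc < 0: both stay empty
    return arr;
}
```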
|
71,515,602 | 71,515,744 | Random number generator generating low numbers more frequently than high numbers C++ | So, I have made a program that simulates things and in it I noticed that the c++ function rand() seemed to generate low numbers too often, so I tried to test it.
#include <iostream>
#include <fstream>
#include <stdio.h>
#include <vector>
#include <cstdlib>
#include <time.h>
#include <cfloat>
#include <iomanip>
using namespace std;
int main(){
srand(time(NULL));
int qwerty=0;
for(int i=0; i<10000000;i++){
if(rand()%10000<2800){
qwerty++;
}
}
cout << qwerty << endl;
return 0;
}
If I run the file with this "for tester" in it, I consistently get a number near 3400000, or 34%, which matches the 34% I had seen appear inside my real program. The problem is that the output should be near 2800000, or 28%.
I then tried to run this "for tester" on a new project(the same I wrote here) where only the libraries and the srand(time(NULL)) were present, same output.
I then tried to copy this file inside an online compiler, this time instead of 3400000 I got the correct number 2800000.
I can't find why this is happening, anyone who knows?
Additional info:
I am using Dev-C++ as an IDE with the TDM-GCC 4.9.2 64-bit release and ISO C++11. If I take the executable generated by my computer and run it on another one, I get the same 34% result. Windows 10 is the operating system. This problem also happens if I use different numbers.
| For a uniformly distributed random variable E over the integers 0 to 32767 (the RAND_MAX of this toolchain),
the probability of mod(E, 10000) < 2800 is around 34%. Intuitively you can think of mod(E, 10000) < 2800 as favouring the bucket of numbers in the range [30000, 32767]: that bucket modulo 10000 is always less than 2800. So that has the effect of pushing the result above 28%.
That's the behavior you are observing here.
It's not a function of the quality of the random generator, although you would get better results if you were to use a uniform generator with a larger periodicity. Using rand() out of your C++ standard library is ill-advised as the standard is too relaxed about the function requirements for it to be portable. <random> from C++11 will cause you far less trouble: you'd be able to avoid explicit % too.
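The 34% figure can be checked exhaustively, assuming RAND_MAX == 32767 as on that toolchain:

```cpp
// Exhaustively count which of the 32768 equally likely rand() outputs
// pass the test from the question.
int biased_count() {
    int count = 0;
    for (int e = 0; e <= 32767; e++)
        if (e % 10000 < 2800)
            count++;
    return count;  // 3 full cycles of 2800, plus all 2768 values in [30000, 32767]
}
```

11168 / 32768 is about 34.08%, matching the observed output; with std::uniform_int_distribution<int>(0, 9999) every residue is equally likely and the fraction is exactly 28%.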
|
71,515,992 | 71,517,984 | rapidjson schema how to get the keyword from the error | I'm making physics software where we deploy a JSON solution, and I wanted to use JSON Schema. When the schema expects a key like "length" and the user gives something wrong like "length2", I don't know how to detect it with rapidjson. I obtained these results:
Invalid schema: #/properties/TEST
Invalid keyword: required
Invalid document: #/TEST
But I want an output like ""length" key is missing" in order to inform the user.
My test.json file:
{
"$schema": "./schema.json",
"TEST": {
"Length2": {
"Value":20,
"Unit":"mm"
}
}
}
Edit (following Ether's comment): here is my schema.json, but this doesn't change my output; see "Invalid schema" above.
{
"type": "object",
"required": ["TEST"],
"properties": {
"TEST": {
"type": "object",
"required": ["Length"],
"properties": {
"Length":{
"type": "object",
"required": ["Value","Unit"],
"properties": {
"Value": {"type": "number"},
"Unit": {"type": "string"}
}
}
}
}
}
}
and my cpp code:
#include "rapidjson/document.h"
#include "rapidjson/error/en.h"
#include "rapidjson/schema.h"
#include "rapidjson/stringbuffer.h"
#include "rapidjson/filereadstream.h"
#include<iostream>
#include<string>
#include<fstream>
using namespace std;
using namespace rapidjson;
int main()
{
char readBuffer[65536];
FILE* fp = fopen("test.json", "r");
FileReadStream is(fp, readBuffer, sizeof(readBuffer));
Document d;
d.ParseStream(is);
FILE* fp2 = fopen("schema.json", "r");
FileReadStream fs(fp2, readBuffer, sizeof(readBuffer));
Document sd;
sd.ParseStream(fs);
SchemaDocument schema(sd);
SchemaValidator validator(schema);
if(!d.Accept(validator))
{
rapidjson::StringBuffer sb;
validator.GetInvalidSchemaPointer().StringifyUriFragment(sb);
printf("Invalid schema: %s\n", sb.GetString());
printf("Invalid keyword: %s\n", validator.GetInvalidSchemaKeyword());
sb.Clear();
validator.GetInvalidDocumentPointer().StringifyUriFragment(sb);
printf("Invalid document: %s\n", sb.GetString());
}
else
printf("\nJson file validated with the given schema successfully\n");
return 0;
}
| So I found my answer by following the schemavalidator.cpp example in the rapidjson folder.
I provide the example for my case here:
https://github.com/faudard/rapidjson_scheme
I use "CreateErrorMessages", the same as in the example.
|
71,516,283 | 71,516,362 | Do I need to delete pointer in stack, which is created in heap? | int *createInt()
{
int *a = new int;
return a;
}
int main()
{
int *x = createInt();
*x = 10;
cout << *x << '\n';
delete x; // do I need this?
return 0;
}
Do I need delete x? If I don't, will it cause a memory leak problem?
| You appear confused about the distinction between a pointer and the pointed at object. a and x are pointers. Think of a piece of paper with a party address written on it.
int *a = new int; allocates a new object on the heap and assigns the address to the pointer a. This is like starting a party at a house, and then writing the address on a paper named a.
return a; returns the pointer to the caller. This is like handing the address-paper to the person that told you to start a party.
*x = 10; accesses the object, and modifies it. This is like going to the address on the paper, and passing around silly hats, you've changed it to a costume party.
delete x; // do I need this? Yes, you need to delete the object that you allocated. This says "end the party at the address on this paper".
Other questions:
what if I don't delete memory? Then the party continues forever, and nobody else can party at that address ever again because the first party is still going.
what if I delete the memory twice? Ending a party twice makes no sense. Nothing might happen. Or it might burn down. Or it might be different each time.
what if I use the pointer after deleting it? Going to a party that ended makes no sense. Maybe nothing happens. Or maybe someone replaced the house with a Military base, and you're trespassing and get arrested. Or it might be different each time.
what if I reassign the pointer without deleting? If you erase the paper and write a new address on it, that doesn't do anything to the party. Except that you probably won't ever find it again, and the party continues forever.
I was told that members of classes are destroyed when the class is destroyed. Does that delete the object? No, burning the piece of paper with the address on it doesn't end the party.
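As an aside beyond the original answer: std::unique_ptr is a paper that ends the party for you when the paper itself is destroyed, so there is nothing to forget:

```cpp
#include <memory>

// The unique_ptr deletes the int automatically - no manual delete needed.
int party_demo() {
    std::unique_ptr<int> x = std::make_unique<int>();
    *x = 10;
    return *x;
}  // x goes out of scope here: the party ends automatically
```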
|
71,516,427 | 71,517,869 | Is it possible to detect WASM compiler in code via compiler directives? | Is there a standard #define I can detect within my own C++ code that would indicate if WASM is compiling the code?
In C++ on Android I can use #ifdef __ANDROID__ but I'm not sure for WebAssembly. I'm actually using the emcc compiler, so maybe there's a standard #define for the EMCC compiler...
Thanks
| You can use __wasm__ to detect the Wasm architecture in general or __wasm32__/__wasm64__ to be more precise. Or you can use __EMSCRIPTEN__ to specifically detect the emscripten target.
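A small sketch of the detection pattern; compiled with an ordinary native compiler, the last branch is taken:

```cpp
#include <string>

// Pick a label for the compilation target from the predefined macros.
std::string target_name() {
#if defined(__EMSCRIPTEN__)
    return "emscripten";
#elif defined(__wasm32__)
    return "wasm32";
#elif defined(__wasm__)
    return "wasm";
#else
    return "native";
#endif
}
```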
|
71,517,035 | 71,517,111 | Accept and return lambda (lock wrapper) | I want to accept any lambda, so that it can perform some operation safely under the lock_guard and then return anything, but using it as shown below produces an error:
#include <iostream>
#include <functional>
#include <mutex>
#include <memory>
class Test {
public:
template<typename Ret>
Ret DoLocked(std::function<Ret(Test&)> func) {
std::lock_guard<std::mutex> lock(*mtx);
return func(*this);
}
private:
std::unique_ptr<std::mutex> mtx = std::make_unique<std::mutex>();
};
int main() {
Test a;
a.DoLocked([](Test &s) {
std::cout << "in do locked" << std::endl;
});
return 0;
}
[build] /some-path/test.cc:21:6: error: no matching function for call to ‘Test::DoLocked(main()::<lambda(Test&)>)’
[build] 21 | });
[build] | ^
[build] /some-path/test.cc:9:13: note: candidate: ‘template<class Ret> Ret Test::DoLocked(std::function<Ret(Test&)>)’
[build] 9 | Ret DoLocked(std::function<Ret(Test&)> func) {
[build] | ^~~~~~~~
[build] /some-path/test.cc:9:13: note: template argument deduction/substitution failed:
[build] /some-path/test.cc:21:6: note: ‘main()::<lambda(Test&)>’ is not derived from ‘std::function<Ret(Test&)>’
[build] 21 | });
| This is easily solved by getting rid of std::function and making the function parameter a template parameter. That would look like
template<typename Func>
auto DoLocked(Func func) -> decltype(func(*this)) {
std::lock_guard<std::mutex> lock(*mtx);
return func(*this);
}
The reason it doesn't work with the std::function is that a lambda expression does not create a std::function. It creates an object of an unnamed class type that has an operator() defined with the body of the lambda expression. Because of this, template deduction fails, as it is expecting a std::function but that is not what is being provided.
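A full compilable sketch of the template approach (I dropped the question's unique_ptr around the mutex for brevity):

```cpp
#include <mutex>

class Test {
public:
    // Func is deduced at each call site; any callable taking Test& works,
    // and the deduced return type is whatever the callable returns.
    template<typename Func>
    auto DoLocked(Func func) -> decltype(func(*this)) {
        std::lock_guard<std::mutex> lock(mtx);
        return func(*this);
    }
private:
    std::mutex mtx;
};
```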
|
71,517,342 | 71,517,560 | vtkDelaunay3D::SetAlpha() does not accept values below 0 | I'm trying to use VTK's Delaunay3D() to get a minimal bounding surface on my data using the alphaShapes algorithm. The particular dataset I'm working on is generally toroidally- or cylindrically-shaped, so by my understanding I should be trying to find a value < 0 for alpha. The class, however, does not seem to be able to handle negative floats. This can be confirmed by this minimal example:
#include <iostream>
#include <vtkSmartPointer.h>
#include <vtkDelaunay3D.h>
void test() {
vtkSmartPointer<vtkDelaunay3D> dataMesh = vtkSmartPointer<vtkDelaunay3D>::New();
dataMesh->SetAlpha(-.1);
std::cout << dataMesh->GetAlpha() << std::endl;
}
int main(void) {
test();
}
I get an output of 0, and this is reflected in visualizations of my actual data -- I get a big ugly diamond instead of a beautiful donut. If SetAlpha() is given a positive value, VTK responds as expected.
Is this a known issue? Are there workarounds?
SYSTEM: Ubuntu 20.04, using gcc version 9.4.0 with CMake. VTK 9.1 for c++.
| It looks like your code is doing 3D Delaunay triangulation, not alpha shapes.
From the documentation for Delaunay3D:
For a non-zero alpha value, only verts, edges, faces, or tetra contained within the circumsphere (of radius alpha) will be output.
In this implementation of Delaunay triangulation, alpha is a radius that can't be negative.
Looks like VTK is silently changing it to 0 right here in the code: https://github.com/Kitware/VTK/blob/01f5cc18fd9f0c8e34a5de313d53d2178ff6e325/Filters/Core/vtkDelaunay3D.h#L129
The documentation also mentions explicitly that this "alpha" is not the same as the alpha in alpha shapes; it merely means something similar.
(The notion of alpha value is derived from Edelsbrunner's work on "alpha shapes".) Note that a modification to alpha shapes enables output of combinations of tetrahedra, triangles, lines, and/or verts (see the boolean ivars AlphaTets, AlphaTris, AlphaLines, AlphaVerts).
I'm not a PhD in CS/Geometry so maybe I'm missing something, but it seems like that class is not really what you want.
So try setting alpha to something small and positive; if your data is toroidal, it will probably give you what you want.
|
71,517,568 | 71,520,419 | What's the best way to get a list of all the macros passed as compiler arguments? | I'm working on a code base that uses quite a bit of conditional compilation via macros passed as arguments to the compiler (i.e. gcc -DMACRO_HERE file.cpp). I would like to have a way to get a list of all the macros defined this way within the code so that I can write out all the used macros to the console and save files when the application is run, that way we know exactly what build was used.
I also need to do the same thing with the git hash but I think I can do that easily with a macro.
Edit 1: Note that this is not the same question as GCC dump preprocessor defines since I want the list available within the program and I only want the macros that are declared by being passed to the compiler with the -D argument
Edit 2: I also need it to be cross-compiler compatible, since we use GCC, XL, CCE, Clang, NVCC, HIP, and the MPI versions of those. Note that we're building with Make
| Here's an outline of a possible solution.
The request is not well-specified because there is no guarantee that all object files will be built with the same conditional macros. So let's say that you want to capture the conditional macros specified for some designated source file.
On that basis, we can play a build trick, easy to do with make: the build recipe for that designated source file actually invokes a script, by inserting the path to the script at the beginning of the compile line.
The script runs through its arguments, selects the ones which start -D, and uses them to create a simple C source file which defines an array const char* build_options[], populating it with stringified versions of the command line arguments. (Unless you're a perfectionist, you don't need to do heroics to correctly escape the strings, because no sane build configuration would use -D arguments which require heroic escaping.)
Once the source file is built, the script saves it and either uses the command-line it was passed as its arguments to compile it, or leaves it to be compiled by some later build step.
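As a rough sketch of that script (all names here, such as capture_defines and build_options.c, are hypothetical). The wrapper logic is shown as a function and demonstrated on a fake compile line; in the real wrapper it would finish by exec'ing the original command:

```shell
# Hypothetical wrapper sketch. In the makefile, the designated file's
# recipe would become something like:
#   sh capture_defines.sh $(CXX) $(CPPFLAGS) -c special.cpp

capture_defines() {
  out=build_options.c
  {
    echo 'const char* build_options[] = {'
    for arg in "$@"; do
      case "$arg" in
        -D*) printf '    "%s",\n' "$arg" ;;   # keep only -D arguments
      esac
    done
    echo '    0'
    echo '};'
  } > "$out"
  # In the real wrapper you would now run the original command:  exec "$@"
}

# Demo on a fake compile line (cc is not actually invoked here):
capture_defines cc -DUSE_FEATURE_X -DVERSION=3 -O2 -c file.c
cat build_options.c
```

The generated build_options.c can then be compiled and linked in, giving the program a null-terminated array of its own -D flags.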
|
71,517,790 | 71,518,704 | Cannot Convert in_addr to Unsigned Int in Socket Programming for Linux | I'm building a reverse echo server in TCP using c++.
My problem occurs when I run my client.
When I compile my client.cpp, I get this error:
error: cannot convert ‘in_addr’ to ‘in_addr_t {aka unsigned int}’ in assignment
serverAddress.sin_addr.s_addr = *((struct in_addr*)host->h_addr);
This is my code for creating a connection:
serverName = argv[1];
struct sockaddr_in serverAddress;
struct hostent* host;
host = gethostbyname(serverName);
memset(&serverAddress, 0, sizeof(serverAddress));
serverAddress.sin_family = AF_INET;
serverAddress.sin_addr.s_addr = *((struct in_addr*)host->h_addr);
serverAddress.sin_port = htons(serverPort);
memmove(&serverAddress.sin_addr.s_addr, host->h_addr_list[0], host->h_length);
connectValue = connect(clientSocket, (struct sockaddr*)&serverAddress, sizeof(serverAddress));
Am I missing something?
| You are trying to assign a whole in_addr struct instance to a single integer. That will not work.
The sockaddr_in::sin_addr member is an in_addr struct, and the in_addr::s_addr member is the actual IP address. Just drop the s_addr part to assign the whole in_addr struct as-is:
serverAddress.sin_addr = *((struct in_addr*)host->h_addr);
Otherwise, you can memcpy() (or, in your example, memmove()) the in_addr struct into the sin_addr member as a whole. You are trying to copy/move to its s_addr member instead, so just drop that:
memcpy(&serverAddress.sin_addr, host->h_addr, host->h_length);
|
71,518,089 | 71,518,117 | A reference to a class member | A reference to a class member is kind of an offset relative to the class, if I understood everything correctly. But why is 1 always output here?
#include <iostream>
struct user
{
int id;
double name;
std::string last;
};
template<class V>
void kek(V b)
{
std::cout << b << std::endl;
}
int main()
{
kek(&user::id);
kek(&user::name);
kek(&user::last);
}
DEMO
| &foo::bar is a pointer-to-member. There are no references-to-members.
cout can't print member pointers directly, the closest thing it can print is bool. Your pointer was converted to bool, and since it was non-zero, you got true.
If you want to get an offset from a pointer-to-member, you could try std::bit_casting it to std::size_t, but note that it's not guaranteed to work.
But if you just want an offset to a hardcoded member, use offsetof. (thanks @TedLyngmo)
|
71,518,200 | 71,518,785 | How to get timestamp from date time format in C++? | I would like to get a timestamp from a date-time string in C++. I wrote a C-style solution, but this->cache_time doesn't promise \0 at the end because it is a std::string_view.
std::time_t client::get_cache_time() {
struct tm time;
if(strptime(this->cache_time.data(), "%a, %d %b %Y %X %Z", &time) != NULL) {
return timelocal(&time);
}
return 0;
}
Is there a strptime() alternative in C++ that can work with std::string_view?
Thank you for help.
| I don't know if this solution is clean and memory efficient, but it works.
std::time_t client::get_cache_time() {
std::tm time{}; // zero-initialize: std::get_time may not set every field
std::istringstream buffer(std::string(this->cache_time));
buffer >> std::get_time(&time, "%a, %d %b %Y %X %Z");
if(!buffer.fail()) {
return timelocal(&time);
}
return 0;
}
std::string_view is nice, but not supported everywhere.
|
71,518,396 | 71,519,613 | Is it okay to delete an object that is created in the app from dll | I have a dll with a class that uses an abstract class for customizing the behaviour and it also has an implementation defined in dll
With this the app allocates a Child object and passes it into the class A which it deallocates the object when it is deleted
Can deleting, from inside the DLL, an object that was created in the app cause a problem?
If it does, is there a way to fix it?
The code roughly translates into this
// Dll
DLLEXPORT class A
{
private:
Base* ptr;
public:
A(Base* ptr) { this->ptr = ptr; };
~A() { delete ptr; }
};
DLLEXPORT class Base
{
virtual int foo() = 0;
};
DLLEXPORT class Child : public Base
{
virtual int foo() { return 1; }
};
// App
int main()
{
A obj(new Child);
return 0;
}
| MS says this:
The DLL allocates memory from the virtual address space of the calling process (docs)
And in this answer you can see:
If your DLL allocates memory using C++ functions, it will do so by calling operator new in the C++ runtime DLL. That memory must be returned by calling operator delete in the (same) C++ runtime DLL. Again, it doesn't matter who does that.
You can have trouble if the DLL, c++ runtime and/or app are compiled with a different compiler; or if some of those are compiled statically.
To avoid those problems, you can pass a "deleter" object/function to the DLL so it makes the deletion in the app:
DLL:
class Base;
typedef void (*deleter_t)(Base * ptr);
deleter_t app_deleter {nullptr};
DLLEXPORT class A
{
private:
Base* ptr;
public:
A(Base* ptr) { this->ptr = ptr; };
~A() { app_deleter(ptr); }
};
DLLEXPORT void set_deleter (deleter_t func)
{
app_deleter = func;
}
APP:
void deleter (Base * obj)
{
delete obj;
}
int main()
{
set_deleter (deleter);
A obj(new Child);
return 0;
}
More details at: SO answer SO answer
|
71,518,890 | 71,518,981 | Couldn't resolve LNK2019 on VS | while coding I got the following error: 1>giochino.obj : error LNK2019: unresolved external symbol "public: void __thiscall entity::print_Pv(int,int)" (?print_Pv@entity@@QAEXHH@Z) referenced in function _main 1>C:\Users\tomma\source\repos\giochino\Debug\giochino.exe : fatal error LNK1120: 1 unresolved externals 1>Done building project "giochino.vcxproj" -- FAILED.
it seems I couldn't find the solution
here's the code, although quite short I can't find the problem
The lib that I've created:
#include <iostream>
using namespace std;
class entity{
public:
int pv_left;
int Max_pv;
//find the percentage of life left and print a bar filled of the same percentage
void print_Pv(int pv_now, int pv_max) {
char pv_bar[10];
int pv_perc = ( pv_now / pv_max) * 10;
for (int i = 0; i < 10; i++) {
if (i <= pv_perc) {
pv_bar[i] = '*';
}
else if (i > pv_perc) {
pv_bar[i] = '°';
}
}
for (int i = 0; i < 10; i++) {
cout << pv_bar[i];
}
}
};
the header of the lib
#pragma once
#include <iostream>
class entity {
public:
int pv_left;
int Max_pv;
void print_Pv(int pv_now, int max_pv);
};
and the main method
#include "game_library.h"
using namespace std;
entity Hero;
int main()
{
Hero.Max_pv = 100;
Hero.pv_left = 10;
Hero.print_Pv(Hero.pv_left, Hero.Max_pv);
}
| Your implementation file is wrong, you need
#include "your.h"
void entity::print_Pv(int pv_now, int pv_max) {
char pv_bar[10];
int pv_perc = (pv_now * 10) / pv_max; // multiply first: pv_now / pv_max is integer division and would truncate to 0
for (int i = 0; i < 10; i++) {
if (i <= pv_perc) {
pv_bar[i] = '*';
}
else if (i > pv_perc) {
pv_bar[i] = '°';
}
}
for (int i = 0; i < 10; i++) {
cout << pv_bar[i];
}
}
You must not declare the class again; provide only the method bodies that you didn't put in the .h file.
|
71,519,156 | 71,519,409 | How can i make it display 5 different asterisk using functions | I need to make a program that displays five different asterisks using functions in C++ for a test; currently it's saying "too many arguments" and also that it can't take one argument
#include <iostream>
using namespace std;
void asterisk();
int main()
{
int k;
int i;
// Asking the user to input 5 random between 1 to 30
cout << "Enter 5 numbers between 1 to 30" << endl;
cin >> k;
asterisk(i);
return 0;
}
void display(int a)
{
// Using for loop to get the asterisk and print it out to the console
// How many numbers its looking for the program
for (int i = 1; i <= +a; ++i) // How many asterisk is getting printing out at a time
cout << '*'; // asterisk
cout << '\n'; // new line
}
| If I have understood correctly, you are looking for something like this:
#include<iostream>
#include<vector>
using namespace std;
void display(int a);
vector<unsigned int> asterisks(5); // note: plain "uint" is not a standard C++ type
int main()
{
// Asking the user to input 5 random between 1 to 30
cout << "Enter "<<asterisks.size()<<" numbers between 1 to 30" << endl;
for(int i=0;i<asterisks.size();i++) {
cout << i << ":\t";
cin >> asterisks[i];
}
for(int i=0;i<asterisks.size();i++)
display(asterisks[i]);
return 0;
}
void display(int a) {
for (int i = 1; i <= +a; ++i) cout << '*'; // asterisk
cout << '\n'; // new line
}
Now you store what the user chooses in a vector and then print the content of the vector using the display function
Hope it will be helpful
|
71,519,706 | 71,519,998 | Appending to vector of union | I have a union, defined like so:
union CellType {
std::string str;
int i;
double d;
Money<2> m; // Custom class for fixed-decimal math.
};
Then, I have a vector of such unions.
std::vector<CellType> csvLine;
My question is, how do I append a value to the end of the vector? I normally use push_back for vectors of strings, ints, etc., but can't quite figure out the syntax when the vector elements are unions. Any help would be appreciated.
(Yes, this class is reading CSV files a line at a time, if that makes any difference.)
Here is a failing MRE:
#include <vector>
#include <iostream>
union CellType {
std::string str;
int i;
double d;
// Money<2> m; // Custom class for fixed-decimal math.
};
int main() {
std::vector<CellType> csvLine;
CellType c;
c.str = "foo";
csvLine.push_back(c);
}
Money is commented out because it is irrelevant to the MRE.
Error
Severity Code Description Project File Line Suppression State
Error C2280 'CellType::CellType(void)': attempting to reference a deleted function ConsoleApplication1 C:\work\ConsoleApplication1\ConsoleApplication1.cpp 22
| There is no clean way to do this, for a simple reason: given an arbitrary instance of this union, a generic, template-based function/method like std::vector::push_back has no authoritative way to know which member of the union is active, and therefore cannot execute the member-specific copy/move operation when at least one member of the union is not a POD. Nothing inherent to any particular instance of a union states "this specific union currently holds a std::string, so to copy/move it you must use std::string's copy/move operations". C++ simply does not work this way; this is fundamental to the language. There is no workaround, and no syntax for this.
In general, due to C++'s foundation in type-safety, an inherently type-unsafe union, when used together with non-POD members like std::string, produces an end result with quite limited capabilities.
The least amount of pain for you would be to replace your union with C++17's std::variant, a type-safe union. push_back then becomes a big, fat, nothing-burger.
|
71,519,720 | 71,520,044 | On acquire/release semantics not being commutative to other threads | The gcc wiki here provides an example on memory ordering constraints.
In the below example, the wiki asserts that, if the memory ordering in use is an acquire/release, then thead-2's assert is guaranteed to succeed while thread-3's assert can fail.
-Thread 1- -Thread 2- -Thread 3-
y.store (20); if (x.load() == 10) { if (y.load() == 10)
x.store (10); assert (y.load() == 20) assert (x.load() == 10)
y.store (10)
}
I don't see how that is possible. My understanding is that since Thread-2's y.store(10) synchronizes with Thread-3's y.load() and since y can be 10 if and only if x is 10, I say the assert in Thread-3 should succeed. The wiki disagrees. Can someone explain why?
|
and since y can be 10 if and only if x is 10,
And that's the part that is incorrect.
An acquire/release pair works between a releasing store operation and an acquiring load operation which reads the value that was release-stored.
In thread 1, we have a releasing store to x. In thread 2, we have an acquiring load from x. If thread 2 read the value stored by thread 1, then the two threads have an acquire/release relationship. What that means is that any values written to other objects in thread 1 before the releasing store are visible to thread 2 after its acquiring load.
In thread 2, we have a releasing store to y. In thread 3, we have an acquiring load from y. If thread 3 read the value stored by thread 2, then the two threads have an acquire/release relationship. What that means is that any values written to other objects in thread 2 before the releasing store are visible to thread 3 after its acquiring load.
But notice what I said: "any values written to other objects in thread 2".
x was not written by thread 2; it was written by thread 1. And thread 3 has no relationship to thread 1.
Pure acquire/release is not transitive. If you need transitive access to memory, then you need sequential consistency. Indeed, that's what seq_cst is for: ensuring consistency across all acquire/release operations that transitively rely on each other.
Note that this is primarily about caches. A pure acquire/release operation may only flush certain caches, particularly if the compiler can clearly see exactly what memory a thread is making available in a release.
|
71,519,972 | 71,520,227 | Constructor with exception handling for invalid input c++ | I'm trying to create a constructor that validates input and throws an exception if the input is invalid.
Let's say I have a constructor that only accepts values in mod 12 for int a, values of mod 16 for b, and values greater than 0 for c. I'm trying to use the std::invalid_argument. How would I implement the exception handler? Would this throw an exception? If the values entered were out of bound?
Mod(int a, int b, int c) {
try {
if (a > 11 || a < 0 || b > 15 || b < 0 || c < 0) {
throw std::invalid_argument("Invalid Argument");
}
} catch (std::invalid_argument const& value) {
std::cerr << "Invalid Argument " << value.what() << '\n';
}
}
|
How would I implement the exception handler?
By NOT implementing it in the constructor that is throwing. Implement it in the code that is trying to pass invalid input to the constructor, e.g.:
Mod(int a, int b, int c){
if (a > 11 || a < 0 || b > 15 || b < 0 || c < 0 ) {
throw std::invalid_argument("Invalid Argument");
}
}
try {
Mod m(1, 2, -1);
// use m as needed...
}
catch (std::invalid_argument const& value) {
std::cerr << "Invalid Argument " << value.what() << '\n';
}
Would this throw an exception? If the values entered were out of bound?
Yes, it does, but then it immediately catches and discards the exception, so the caller will never see it. It may as well have never been thrown.
|
71,520,009 | 71,522,920 | OMP for loop condition not a simple relational comparison | I have this program that isn't compiling due to error: condition of OpenMP for loop must be a relational comparison ('<', '<=', '>', '>=', or '!=') of loop variable 'i', referring to for (size_t i = 2; i * i <= n; i++). How can I modify and fix it without affecting performance? Is this an issue due to having an old OpenMP version? (Because I remember having a different issue before on another computer with an older version that is resolved now.)
#include <iostream>
#include <cstdio>
int main(int argc, char **argv)
{
if (argc != 2)
return EXIT_FAILURE;
size_t n;
if (sscanf(argv[1], "%zu", &n) == 0)
return EXIT_FAILURE;
auto *prime = new size_t[n + 1];
for (size_t i = 2; i <= n; i++)
prime[i] = i;
#pragma omp parallel for
for (size_t i = 2; i * i <= n; i++)
for (size_t j = i * i; j <= n; j += i)
prime[j] = 0;
size_t N = 0;
for (size_t i = 2; i <= n; i++)
if (prime[i] != 0)
N++;
std::cout << N << '\n';
}
| The loop after the #pragma omp parallel for has to be in canonical form to be conforming. In your case the problem is with the test expression, which must be one of the following:
var relational-op b
b relational-op var
So, you have to use what was suggested by @Yakk: calculate the sqrt of n and compare it to i:
const size_t max_i = sqrt(n); // sqrt from <cmath>
#pragma omp parallel for
for (size_t i = 2; i <= max_i; i++)
....
|