Dataset columns: question_id (int64), answer_id (int64), title (string), question (string), answer (string).
423,335
423,580
What can C++ do that is too hard or messy in any other language?
I still feel C++ offers some things that can't be beaten. It's not my intention to start a flame war here, please, if you have strong opinions about not liking C++ don't vent them here. I'm interested in hearing from C++ gurus about why they stick with it. I'm particularly interested in aspects of C++ that are little known, or underutilised.
I have stayed with C++ as it is still the highest performing general purpose language for applications that need to combine efficiency and complexity. As an example, I write real time surface modelling software for hand-held devices for the surveying industry. Given the limited resources, Java, C#, etc... just don't provide the necessary performance characteristics, whereas lower level languages like C are much slower to develop in given the weaker abstraction characteristics. The range of levels of abstraction available to a C++ developer is huge, at one extreme I can be overloading arithmetic operators such that I can say something like MaterialVolume = DesignSurface - GroundSurface while at the same time running a number of different heaps to manage the memory most efficiently for my app on a specific device. Combine this with a wealth of freely available source for solving pretty much any common problem, and you have one heck of a powerful development language. Is C++ still the optimal development solution for most problems in most domains? Probably not, though at a pinch it can still be used for most of them. Is it still the best solution for efficient development of high performance applications? IMHO without a doubt.
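A toy illustration of the operator-overloading style this answer describes; the Surface type and its contents are invented for the example, not taken from the poster's surveying software:

    #include <vector>
    #include <cstddef>

    struct Surface
    {
        std::vector<double> heights;   // elevation samples over a common grid
    };

    // Subtracting one surface from another, e.g. design surface minus ground surface.
    Surface operator-(const Surface& a, const Surface& b)
    {
        Surface result;
        result.heights.resize(a.heights.size());
        for (std::size_t i = 0; i < a.heights.size(); ++i)
            result.heights[i] = a.heights[i] - b.heights[i];
        return result;
    }

    // Usage, mirroring the expression in the answer:
    //   Surface MaterialVolume = DesignSurface - GroundSurface;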
423,730
450,293
How to pass a COM method as a function argument? And Microsoft Compiler error C3867
I would like to pass a COM method as a function argument but I get this error (Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 15.00.30729.01 for 80x86): error C3867: 'IDispatch::GetTypeInfoCount': function call missing argument list; use '&IDispatch::GetTypeInfoCount' to create a pointer to member What am I missing? Thank you very much. #include <atlbase.h> void update( HRESULT(*com_uint_getter)(UINT*), UINT& u ) { UINT tmp; if ( S_OK == com_uint_getter( &tmp ) ) { u = tmp; } } // only for compile purpose, it will not work at runtime int main(int, char*[]) { // I choose IDispatch::GetTypeInfoCount just for the sake of exemplification CComPtr< IDispatch > ptr; UINT u; update( ptr->GetTypeInfoCount, u ); return 0; }
As morechilli pointed out this is a C++ issue. Here it is the solution, thanks to my colleague Daniele: #include <atlbase.h> template < typename interface_t > void update( interface_t* p, HRESULT (__stdcall interface_t::*com_uint_getter)(UINT*), UINT& u ) { UINT tmp; if ( S_OK == (p->*com_uint_getter)( &tmp ) ) { u = tmp; } } // only for compile purpose, it will not work at runtime int main(int, char*[]) { // I choose IDispatch::GetTypeInfoCount just for the sake of exemplification CComPtr< IDispatch > ptr; UINT u; update( ptr.p, &IDispatch::GetTypeInfoCount, u ); return 0; }
423,961
424,115
easy way to add 1 month to a time_t in C/C++
I have some code that uses the Oracle function add_months to increment a Date by X number of months. I now need to re-implement the same logic in a C / C++ function. For reasons I don't want/need to go into I can't simply issue a query to oracle to get the new date. Does anyone know of a simple and reliable way of adding X number of months to a time_t? Some examples of the types of calculations are shown below. 30/01/2009 + 1 month = 28/02/2009 31/01/2009 + 1 month = 28/02/2009 27/02/2009 + 1 month = 27/03/2009 28/02/2009 + 1 month = 31/03/2009 31/01/2009 + 50 months = 31/03/2013
Method AddMonths_OracleStyle does what you need. Perhaps you would want to replace IsLeapYear and GetDaysInMonth with some library methods. #include <ctime> #include <algorithm> #include <assert.h> bool IsLeapYear(int year) { if (year % 4 != 0) return false; if (year % 400 == 0) return true; if (year % 100 == 0) return false; return true; } int daysInMonths[] = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31}; int GetDaysInMonth(int year, int month) { assert(month >= 0); assert(month < 12); int days = daysInMonths[month]; if (month == 1 && IsLeapYear(year + 1900)) // February of a leap year (tm_year counts from 1900) days += 1; return days; } tm AddMonths_OracleStyle(const tm &d, int months) { bool isLastDayInMonth = d.tm_mday == GetDaysInMonth(d.tm_year, d.tm_mon); int year = d.tm_year + months / 12; int month = d.tm_mon + months % 12; if (month > 11) { year += 1; month -= 12; } int day; if (isLastDayInMonth) day = GetDaysInMonth(year, month); // Last day of month maps to last day of result month else day = std::min(d.tm_mday, GetDaysInMonth(year, month)); tm result = tm(); result.tm_year = year; result.tm_mon = month; result.tm_mday = day; result.tm_hour = d.tm_hour; result.tm_min = d.tm_min; result.tm_sec = d.tm_sec; return result; } time_t AddMonths_OracleStyle(const time_t &date, int months) { tm d = tm(); localtime_s(&d, &date); tm result = AddMonths_OracleStyle(d, months); return mktime(&result); }
424,031
443,104
Symbian C++ - Use a TTF font in your application?
Is it possible to package a .TTF file in your application and use it to render text at runtime, and have the application release the font after use? I've found bits of information scattered around the forum, but nothing conclusive. Can anyone offer any advice?
The accepted answer above is wrong. You can use TTF in Symbian without converting it into GRD. I did it two years back on older versions of Symbian (probably 8). Newer versions probably have built-in support. In any case, search the net for a TrueType font file driver for Symbian (a .dll file). Install it and you can use any TTF simply by copying it into the fonts folder. Symbian has excellent support for fonts, even for your own home-built proprietary formats. Just write a driver and install it. Symbian will scan the fonts directory and call each driver with each font file. Whichever driver can handle the font file extension responds, and the font loads.
424,104
424,990
Can I access private members from outside the class without using friends?
Disclaimer Yes, I am fully aware that what I am asking about is totally stupid and that anyone who would wish to try such a thing in production code should be fired and/or shot. I'm mainly looking to see if it can be done. Now that that's out of the way, is there any way to access private class members in C++ from outside the class? For example, is there any way to do this with pointer offsets? (Naive and otherwise non-production-ready techniques welcome) Update As noted in the comments, I asked this question because I wanted to write a blog post on over-encapsulation (and how it affects TDD). I wanted to see if there was a way to say "using private variables isn't a 100% reliable way to enforce encapsulation, even in C++." In the end, I decided to focus more on how to solve the problem rather than why it's a problem, so I didn't feature some of the stuff brought up here as prominently as I had planned, but I still left a link. At any rate, if anyone's interested in how it came out, here it is: Enemies of Test Driven Development part I: encapsulation (I suggest reading it before you decide that I'm crazy).
If the class contains any template member functions you can specialize that member function to suit your needs. Even if the original developer didn't think of it. safe.h class safe { int money; public: safe() : money(1000000) { } template <typename T> void backdoor() { // Do some stuff. } }; main.cpp: #include <safe.h> #include <iostream> class key; template <> void safe::backdoor<key>() { // My specialization. money -= 100000; std::cout << money << "\n"; } int main() { safe s; s.backdoor<key>(); s.backdoor<key>(); } Output: 900000 800000
424,105
424,136
What kind of code library should I build for distribution?
I need to build a C++ library to distribute among our customers. The library must be able to be accessed from a wide range of languages including VB6, C++, VB.net and C#. I've been using ActiveX controls (ocx files) until now. But I wonder if there is a better kind of library (dll, etc.) that I can build. What do you recommend? I'm limited to C++ as the library language, but you can mention other languages for reference to other developers. P.S. Sorry if the question was already asked. I had some trouble finding a suitable title. Feel free to correct my English. Edit: Seems like the best choice is either DLLs or OCX (i.e., COM), but I'm still having some doubts about which one to choose. Which one is more suitable for modern languages (.NET for instance)? Which one would be easier to use from an end developer's perspective?
Almost every language has a way of loading dynamic libraries and accessing exported C functions from them. There is nothing preventing you from using C++ inside the dll but for maximum portability, export only C functions. I have some more about this in this post.
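A minimal sketch of the "export only C functions" approach described here; the names and the trivial doubling function are illustrative, not from the answer's linked post:

    // mylib.h -- the only thing client languages need to see
    #ifdef __cplusplus
    extern "C" {
    #endif

    __declspec(dllexport) void* widget_create(void);
    __declspec(dllexport) int   widget_compute(void* handle, int input);
    __declspec(dllexport) void  widget_destroy(void* handle);

    #ifdef __cplusplus
    }
    #endif

    // mylib.cpp -- full C++ is fine inside the DLL
    class Widget
    {
    public:
        int compute(int input) { return input * 2; }
    };

    void* widget_create(void)            { return new Widget(); }
    int   widget_compute(void* h, int i) { return static_cast<Widget*>(h)->compute(i); }
    void  widget_destroy(void* h)        { delete static_cast<Widget*>(h); }

VB6, C#, and other languages can then call these three functions through their normal DLL-import mechanisms without knowing anything about the C++ class behind them.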
424,503
424,621
How do I open a new document in running application without opening a new instance of the application?
I have a situation that has been partially covered by other answers at SO, but I cannot find a complete answer. In short, we are trying to use URL's for our specific data types that when double clicked will open up our application and load those data sets into that app. We have this part working. (for example, an URL might look like this: resource://shaders/basic_shader.hlsl) What we would like to do is to prevent new instances of the application from opening when a new URL is double clicked. For example, let's say we have a URL that opens up a shader in our shader editor. When clicking this resource URL, it will open our shader editor. When a new shader URL is clicked, we'd like to be able to open up the shader in the currently running application and have it open up the new shader in a new tab in our editor. We can easily detect if another instance of our application is running. The problem that we don't know how to easily solve is how to tell the currently running application to open up this new file for editing. This behavior is very much like the Apple Finder. In unix, you could emulate this behavior by having your application open some named pipe and then new apps could check if this pipe is active and then send the document data down the pipe. Is there a more standard windows way of achieving this behavior? We need a C/C++ solution. Thanks.
Create a named mutex when the application launches, as David Grant said. Then, before displaying the UI for the second URL, check for this mutex; if it already exists, pass the new URL to the first launched application and quit (provide an interface in the application to set the URL and tell it to redirect programmatically).
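A rough sketch of the mutex check, plus one common way to hand the URL to the running instance (WM_COPYDATA); the window class name and message handling are illustrative assumptions, not part of the answer:

    #include <windows.h>
    #include <string.h>

    int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR lpCmdLine, int)
    {
        HANDLE mutex = CreateMutexA(NULL, FALSE, "MyApp_SingleInstance");
        if (GetLastError() == ERROR_ALREADY_EXISTS)
        {
            // Another instance owns the mutex: forward the URL and quit.
            HWND existing = FindWindowA("MyAppMainWindowClass", NULL);
            if (existing != NULL)
            {
                COPYDATASTRUCT cds = {0};
                cds.dwData = 1;                                // arbitrary message id
                cds.cbData = (DWORD)(strlen(lpCmdLine) + 1);   // include the terminator
                cds.lpData = lpCmdLine;
                SendMessageA(existing, WM_COPYDATA, 0, (LPARAM)&cds);
            }
            CloseHandle(mutex);
            return 0;
        }

        // ... first instance: register "MyAppMainWindowClass", create the window,
        // handle WM_COPYDATA in its window procedure, run the message loop ...

        CloseHandle(mutex);
        return 0;
    }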
424,549
424,653
Difference between C/C++ Runtime Library and C/C++ Standard Library
Can you guys tell me the difference between them? By the way, is there something called C++ library or C library?
The C++ Standard Library and C Standard Library are the libraries that the C++ and C Standards define and that are provided to C++ and C programs to use. That's the common meaning of those words; I haven't ever seen another definition of it, and C++ itself defines it as this: The C++ Standard Library provides an extensible framework, and contains components for: language support, diagnostics, general utilities, strings, locales, containers, iterators, algorithms, numerics, and input/output. The language support components are required by certain parts of the C++ language, such as memory allocation (5.3.4, 5.3.5) and exception processing (clause 15). C++ Runtime Library and C Runtime Library aren't used as consistently. Some say a runtime library is the part that a program uses at runtime (like the code that implements std::type_info or the code supporting signal handlers) as opposed to stuff that is only used at compile time (like macro definitions). Other people say that a runtime library is one that is linked to a program dynamically at load time, as opposed to statically at compile time, though this usage is rare; "shared library" or "dynamically linked library" are better terms for that. C++ Library and C Library are very broad terms. They just mean that a library is written in C++ and/or C. The above is not limited to C++ and/or C: there are Python libraries and there is a Python Standard Library too.
425,400
425,436
C++ DLL: Not exposing the entire class
How can I "hide" parts of a class so that whoever is using the libary does not have to include headers for all the types used in my class. Ie take the MainWindow class below, ho can I have it so when compiled in a static/dynamic libary, whoever is useing the libary does NOT have to include windows.h, ie HWND, CRITICAL_SECTION, LRESULT, etc do not have to be defined. I know I could split it into two classes, an abstract class with just the public interface, and an implementation class which is hidden that contains the members that require windows.h. The problem here is that the visible class can no longer be created itsself, and an additional create function (eg CreateMainWindow) is required. That is fine in this case since it is most likly that just a single instance created on the heap is wanted but for other classes this is not. class MainWindow { HWND hwnd; int width, height; std::string caption; bool started,exited; bool closeRequest; unsigned loopThread; CRITICAL_SECTION inputLock; Input *input; public: static void init_type(); Py::Object getattr(const char *name); MainWindow(int width, int height, std::string caption); ~MainWindow(); bool CloseRequest(const Py::Tuple &args); bool CloseRequestReset(const Py::Tuple &args); HWND GetHwnd(); int GetWidth(); int GetHeight(); Input* GetInput(); protected: unsigned static __stdcall loopThreadWrap(void *arg); unsigned LoopThreadMain(); LRESULT WndProc(UINT msg, WPARAM wParam, LPARAM lParam); LRESULT static CALLBACK WndProcWrapper(HWND hwnd, UINT message, WPARAM wParam, LPARAM lParam); };
You can hide parts of a class using the so-called "cheshire cat", "letter/envelope", or "pimpl" technique (all different names for the same idea): class MainWindow { private: // opaque data class ImplementationDetails; ImplementationDetails* m_data; public: // ... declare your public methods here ... }; The best way is probably the abstract class mentioned in your 2nd paragraph (though I didn't quite follow your last sentence, where you explain your counter-argument to that approach).
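A slightly fuller sketch of the pimpl layout this answer describes, showing how the windows.h-dependent members move into the implementation file; member names are borrowed from the question, the rest is illustrative:

    // MainWindow.h -- clients no longer need windows.h
    #include <string>

    class MainWindow
    {
    public:
        MainWindow(int width, int height, const std::string& caption);
        ~MainWindow();
        int GetWidth() const;
    private:
        class Impl;        // forward declaration only
        Impl* m_impl;      // opaque pointer to the real data
    };

    // MainWindow.cpp -- the only file that includes windows.h
    #include <windows.h>

    class MainWindow::Impl
    {
    public:
        HWND hwnd;
        CRITICAL_SECTION inputLock;
        int width, height;
        std::string caption;
    };

    MainWindow::MainWindow(int width, int height, const std::string& caption)
        : m_impl(new Impl)
    {
        m_impl->width   = width;
        m_impl->height  = height;
        m_impl->caption = caption;
        m_impl->hwnd    = 0;                       // real window creation omitted
        InitializeCriticalSection(&m_impl->inputLock);
    }

    MainWindow::~MainWindow()
    {
        DeleteCriticalSection(&m_impl->inputLock);
        delete m_impl;
    }

    int MainWindow::GetWidth() const { return m_impl->width; }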
425,430
425,456
How do you return a vector iterator from a variable in a templated class?
I'm trying to return an iterator for a vector in a templated class (I'm not sure if that makes a difference, but I've read that may, so I thought I'd mention it). The problem is that I get an error about C++ not supporting default-int when I try this. I've looked online and from what I can see in forums and explanaions, I don't think I'm that far off, it just won't compile. template<class T> class Table { public: ... vector<shared_ptr<vector<T>>>::iterator GetRowIterator(); //vector<shared_ptr<vector<CellValueType> > >::const_iterator GetRowIterator(); ... protected: vector<shared_ptr<vector<CellValueType> > > data; //outside vector is rows, inside vector is columns ... }; vector<shared_ptr<vector<T> > >::const_iterator Table<T>::GetRowIterator() { return data.begin(); } The errors that I get are: error C2146: syntax error : missing ';' before identifier 'GetRowIterator' error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Edit: Changed the end angle brackets so they are not as close together - same error. Any thoughts as to why this is occurring? As always, thanks for advice/help in advance!
Also remember to use typename when declaring the template-dependent return type: typename vector< shared_ptr< vector< T > > >::iterator GetRowIterator(); and the method definition typename vector< shared_ptr< vector< T > > >::const_iterator Table<T>::GetRowIterator() { return data.begin(); } Notice also that when defining a template class method outside the class definition, you have to use the template keyword: template <class T> typename vector< shared_ptr< vector< T > > >::const_iterator Table<T>::GetRowIterator() { return data.begin(); } So that the compiler can know what the T is about.
425,457
425,615
Inheriting from a non-templated class that has a templated constructor - how to resolve ambiguity?
Let's say we have a class, MyParent: class MyParent { public: template<typename T> MyParent() { T* Something; } }; And a derived class, which uses this constructor: class MyDerived : public MyParent { public: MyDerived() : MyParent<int>() { } }; Then I get a compiler error, because there's ambiguity. The compiler thinks that the int is a template argument to the class, not the constructor. How do I specify that I want the int to be an argument to the constructor?
It is not possible. From the standard section 14.8.1 Explicit template argument, it notes: [Note: because the explicit template argument list follows the function template name, and because conversion member function templates and constructor member function templates are called without using a function name, there is no way to provide an explicit template argument list for these function templates. ] As noted in the comments, you would need to have the constructor take a parameter of type T (or const T &), and then have MyDerived call MyParent::MyParent with a parameter of type int.
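A small sketch of the workaround mentioned in the last paragraph, with T deduced from a constructor argument instead of being supplied explicitly (the body is just a placeholder):

    class MyParent
    {
    public:
        template<typename T>
        MyParent(const T& value)          // T is deduced from the argument type
        {
            T* something = new T(value);
            delete something;
        }
    };

    class MyDerived : public MyParent
    {
    public:
        MyDerived() : MyParent(0) { }     // passing an int deduces T as int
    };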
425,576
425,715
Template instantiation with VARIANT return type
An explicit instantiation of a static template member function keeps failing to compile with the message error C2785: 'at_Intermediate CUtil::convert_variant(const VARIANT &)' and '<Unknown>' have different return types When I make a corresponding class with non-static member functions, the compiler likes me. // utility class - static methods struct CUtil { template< typename at_Intermediate > static at_Intermediate convert_variant( const VARIANT &v ) ; template<> static VARIANT convert_variant<VARIANT >( const VARIANT &v ) { return v; } // template<> static double convert_variant<double >( const VARIANT &v ) { return v.dblVal; } template<> static long convert_variant<long >( const VARIANT &v ) { return v.lVal ; } template<> static BSTR convert_variant<BSTR >( const VARIANT &v ) { return v.bstrVal; } }; This is a composed question: Why does the compiler complain about a function "Unknown" while it's clearly known? What triggers this message - it disappears when the function is made global or non-static. EDIT: after some useful hints from Josh: is it not allowed to explicitly instantiate template functions within the class declaration?
Apparently you may only use explicit template specialization at namespace scope although I can't find this in the standard (but GCC says as much). The following works for me (on GCC): struct CUtil { template< typename at_Intermediate > static at_Intermediate convert_variant( const VARIANT &v ) ; }; template<> VARIANT CUtil::convert_variant<VARIANT >( const VARIANT &v ) { return v; } template<> double CUtil::convert_variant<double >( const VARIANT &v ) { return v.dblVal; } template<> long CUtil::convert_variant<long >( const VARIANT &v ) { return v.lVal ; } template<> BSTR CUtil::convert_variant<BSTR >( const VARIANT &v ) { return v.bstrVal; } EDIT It is in the standard: 14.7.2.5: An explicit instantiation of a class or function template specialization is placed in the namespace in which the template is defined. An explicit instantiation for a member of a class template is placed in the namespace where the enclosing class is defined. An explicit instantiation for a member template is placed in the namespace where the enclosing class or class template is defined. (All emphasis added by me.)
425,891
426,214
Is it possible to manage C++ application via JMX?
We have a distributed application containing C++ and Java modules, interacting via CORBA. Are there any C++ libraries/tools for exposing "variables" and "methods" to JMX tools (to create unified management) ?
So even though your application is C++ and Java, you're only looking to expose C++ module attributes to the JMX manager ? If so, or actually, even if you are exposing both, I would look at using an SNMP library for both instances, since trying to get C++ to support JMX directly could be hairy. JMX and SNMP are broadly interoperable, I am pretty sure you can find SNMP libraries for your C++ components (and I know you can for Java) and you may as well stick to one protocol rather than have one for C++ and one for Java. The last piece (and probably the easiest) is to find an SNMP "bridge" for your JMX console, manager or whatever you're using.
425,987
426,000
How to prevent memory leaks while cancelling an operation in a worker thread?
Currently I am working on a desktop application which performs mathematical analyses. I am using Qt for the GUI and the project is written in C++. When the user starts an analysis, I open a worker thread and start a progress bar. Everything is OK up to now; the problem starts when the user cancels the operation. The operation is complex, I am using several functions and objects, and I allocate/deallocate memory in several places. I want to learn what I should do to recover when the operation is cancelled, because there can be memory leaks. Which pattern or method should I use to make cancelling the operation robust and safe? My idea is throwing an exception, but the operation is really complex, so should I put try-catch into all of my functions, or is there a more generic way or pattern? Edit: The problem is that my objects are transferred between scopes, so shared_ptr or auto_ptr doesn't solve my problem. The flag idea could work, but I think it requires a lot of code and there should be an easier way.
A pretty common way to close down worker threads is to mark them with a flag, and let the worker thread inspect this flag at regular intervals. If the flag is set, it should discontinue its work, clean up, and exit. Is that a possibility in your situation?
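A minimal sketch of the flag approach combined with scope-owned allocations, so that an early return on cancellation cannot leak; the class and function names are invented for the example, not taken from the poster's code:

    #include <vector>

    class Analysis
    {
    public:
        Analysis() : cancelled_(false) {}

        void cancel() { cancelled_ = true; }              // called from the GUI thread

        void run()                                        // runs in the worker thread
        {
            std::vector<double> work(1000 * 1000);        // owned by this scope
            for (int step = 0; step < 1000; ++step)
            {
                if (cancelled_)
                    return;                               // early exit: work is freed automatically
                computeStep(step, work);
            }
        }

    private:
        void computeStep(int step, std::vector<double>& work)
        {
            work[step] = step * 0.5;                      // placeholder for the real maths
        }

        volatile bool cancelled_;                         // a sketch; prefer a proper atomic or mutex in real code
    };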
426,162
426,592
Is Polymorphism worth an increase in coupling?
I'm writing a simplistic game to get some more C++ experience, and I have an idea where I feel polymorphism almost works, but doesn't. In this game, the Party moves fairly linearly through a Map, but can occasionally encounter a Fork in the road. A fork is (basically) an std::vector<location*>. Originally I was going to code something like the following into a Party member function: if(!CurrLocation->fork_.empty()) // Loop through forks and show options to the player, go where s/he wants else (CurrLocation++) But I was wondering if some variant of the following might be better: CurrLocation = CurrLocation->getNext(); With Fork actually being derived from Location, and overriding some new function getNext(). But in the latter case, the location (a low-level structure) would have to be the one to present the message to the user instead of "passing this back up", which I don't feel is elegant, as it couples location to UserInterface::*. Your opinions?
All problems can be solved by adding a level of indirection. I would use your suggested variant, and decouple Location from Party by allowing getNext to accept an object that resolves directional choices. Here is an example (untested): class Location; class IDirectionChooser { public: virtual bool ShouldIGoThisWay(Location & way) = 0; }; class Location { public: virtual Location * GetNext(IDirectionChooser & chooser) { return nextLocation; } virtual Describe(); private: Location * nextLocation; }; class Fork : public Location { public: virtual Location * GetNext(IDirectionChooser & chooser) { for (int i = 0; i < locations.size(); i++) if (chooser.ShouldIGoThisWay(*locations[i])) return locations[i]; } virtual Describe(); private: vector<Location *> locations; }; class Party : public IDirectionChooser { public: void Move() { currentLocation = currentLocation->GetNext(GetDirectionChooser()); } virtual IDirectionChooser & GetDirectionChooser() { return *this; } virtual bool ShouldIGoThisWay(Location & way) { way.Describe(); cout << "Do you want to go that way? y/n" << endl; char ans; cin >> ans; return ans == 'y'; } };
426,736
426,767
Error: macro names must be identifiers using #ifdef 0
I have the source code of an application written in C++ and I just want to comment something out using: #ifdef 0 ... #endif And I get this error: error: macro names must be identifiers Why is this happening?
The #ifdef directive is used to check if a preprocessor symbol is defined. The standard (C11 6.4.2 Identifiers) mandates that identifiers must not start with a digit: identifier: identifier-nondigit identifier identifier-nondigit identifier digit identifier-nondigit: nondigit universal-character-name other implementation-defined characters> nondigit: one of _ a b c d e f g h i j k l m n o p q r s t u v w x y z A B C D E F G H I J K L M N O P Q R S T U V W X Y Z digit: one of 0 1 2 3 4 5 6 7 8 9 The correct form for using the pre-processor to block out code is: #if 0 : : : #endif You can also use: #ifdef NO_CHANCE_THAT_THIS_SYMBOL_WILL_EVER_EXIST : : : #endif but you need to be confident that the symbols will not be inadvertently set by code other than your own. In other words, don't use something like NOTUSED or DONOTCOMPILE which others may also use. To be safe, the #if option should be preferred.
426,737
426,768
Does this type of memory get allocated on the heap or the stack?
In the context of C++ (not that it matters): class Foo{ private: int x[100]; public: Foo(); } What I've learnt tells me that if you create an instance of Foo like so: Foo bar = new Foo(); Then the array x is allocated on the heap, but if you created an instance of Foo like so: Foo bar; Then it's created on the stack. I can't find resources online to confirm this.
Given a slight modification of your example: class Foo{ private: int x[100]; int *y; public: Foo() { y = new int[100]; } ~Foo() { delete[] y; } } Example 1: Foo *bar = new Foo(); x and y are on the heap: sizeof(Foo*) is created on the stack. sizeof(int) * 100 * 2 + sizeof(int *) is on the heap Example 2: Foo bar; x is on the stack, and y is on the heap sizeof(int) * 100 is on the stack (x) + sizeof(int*) sizeof(int) * 100 is on the heap (y) Actual sizes may differ slightly due to class/struct alignment depending on your compiler and platform.
427,266
427,279
Why does std::auto_ptr operator = trash objects?
Seem to be having an issue with std::auto_ptr and assignment, such that the object referenced seems to get trashed for some reason. std::auto_ptr<AClass> someVar = new AClass(); // should work, but mangles content std::auto_ptr<AClass> someVar( new AClass() ); // works fine. std::auto_ptr<AClass> someVar = std::auto_ptr<AClass>(new AClass()); // works fine. std::auto_ptr<AClass> someVar; someVar.reset( new AClass() ); // works fine. I've traced it through, and it appears (via watching the values in the debugger) that the problem occurs in the transfer of the pointer from the temporary std::auto_ptr_byref() that is created to wrap the rhs pointer. That is the value contained in _Right on entering the auto_ptr(auto_ptr_ref<_Ty> _Right) function is correct, but the value in _Myptr on leaving is junk. template<class _Ty> struct auto_ptr_ref { // proxy reference for auto_ptr copying auto_ptr_ref(void *_Right) : _Ref(_Right) { // construct from generic pointer to auto_ptr ptr } void *_Ref; // generic pointer to auto_ptr ptr }; template<class _Ty> class auto_ptr { // wrap an object pointer to ensure destruction public: typedef _Ty element_type; explicit auto_ptr(_Ty *_Ptr = 0) _THROW0() : _Myptr(_Ptr) { // construct from object pointer } auto_ptr(auto_ptr<_Ty>& _Right) _THROW0() : _Myptr(_Right.release()) { // construct by assuming pointer from _Right auto_ptr } auto_ptr(auto_ptr_ref<_Ty> _Right) _THROW0() { // construct by assuming pointer from _Right auto_ptr_ref _Ty **_Pptr = (_Ty **)_Right._Ref; _Ty *_Ptr = *_Pptr; *_Pptr = 0; // release old _Myptr = _Ptr; // reset this } auto_ptr<_Ty>& operator=(auto_ptr_ref<_Ty> _Right) _THROW0() { // assign compatible _Right._Ref (assume pointer) _Ty **_Pptr = (_Ty **)_Right._Ref; _Ty *_Ptr = *_Pptr; *_Pptr = 0; // release old reset(_Ptr); // set new return (*this); } At first I thought it was messing up the inheritance and slicing off interfaces, however this happens even if the class only has one parent class. We can avoid doing = new if we remember, either by using brackets or changing to have a explicit std::auto_ptr temp on the rhs, this is of course error prone. Is it just this version of the library being broken, or some underlying thing I'm just not getting? We also noticed a similar issue with assigning a std::auto_ptr to a boot::shared_ptr though we removed that entirely now and I don't recall which syntax caused the issue.
The first line: std::auto_ptr<AClass> someVar = new AClass(); // should work, but mangles content should result in a compiler error. Because there is no implicit conversion from the raw AClass pointer to an auto_ptr (the constructor for an auto_ptr that takes a raw pointer is marked explicit), initialization using the 'copy initializer' form is not permitted. VC9 gives the following error: C:\temp\test.cpp(23) : error C2440: 'initializing' : cannot convert from 'AClass *' to 'std::auto_ptr<_Ty>' Other compilers I've tried (GCC 3.4.5, Comeau C/C++ 4.3.10.1, Digital Mars) give a similar error. EDIT: It looks like this is in fact a bug in VS2005's implementation of auto_ptr<> (not sure if it was introduced in SP1 or was in VS2005 from the start) that got fixed in VS2008. Here's the MS Connect bug record for the problem: https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=101842&wa=wsignin1.0
427,289
427,310
Access a COM Interface method C++
I have both a CLSID and an IID, and I am using CoCreateInstance() to return a single uninitialised object of the class specified by the CLSID above. How can I then access an interface's method from C++? Without ATL or MFC, just plain C++. I'm having trouble using CreateInstance() - specifically with the last parameter, ppv. Using oleview, I can see methods of the specified IID above, such as: interface IS8Simulation : IDispatch { HRESULT Open([in] BSTR FileName); }; How can I then access the above? Examples/guidance please. Regards
By doing a CoCreateInstance you get an interface pointer. Through QueryInterface(...) method you can get the interface pointer of some other interface easily. e.g., IUnknown* pUnk = NULL; HRESULT hr = ::CoCreateInstance(clsid,NULL,CLSCTX_ALL,__uuidof(IUnknown),(void**)&pUnk); IS8Simulation* pSim = NULL; hr = pUnk->QueryInterface(__uuidof(IS8Simulation), (void**)&pSim); After doing this, you will get the pointer to IS8Simulation in pSim and through that you can call methods of that interface. Remember you need to provide a valid clsid in the CoCreateInstance call.
427,329
428,796
Reading from 2 sockets in 2 threads causes data loss
I have a multi-threaded Windows C++ app written in Visual Studio 6. Within the app 2 threads are running each trying to read UDP packets on different ports. If I protect the reading from the socket with a critical section then all the date read is fine. Without that protection data is lost from both sockets. Is reading from a socket not thread safe? I've written many socket apps in the past and don't remember having to use this sort of thread protection.
Within the app 2 threads are running each trying to read UDP packets on different ports. How much UDP data are you sending/reading? How fast are you sending it? How much of your data is lost? This could be a race condition... Not between the two threads, but between the thread and the socket! I've seen problems in the past porting code from Linux to Windows. Windows uses (used) a default UDP buffersize of 8k. Naturally, we were sending 12k bursts, and there's just no way to read it fast enough even with a dedicated read thread! You can change the UDP buffersize (under Windows) with something like: int newBufferSize = 128 * 1024; // 128k setsockopt( readSocketFd, SOL_SOCKET, SO_RCVBUF, (char *) &newBufferSize, sizeof(newBufferSize) );
427,589
427,633
Inspecting standard container (std::map) contents with gdb
Supposing to have something like this: #include <map> int main(){ std::map<int,int> m; m[1] = 2; m[2] = 4; return 0; } I would like to be able to inspect the contents of the map running the program from gdb. If I try using the subscript operator I get: (gdb) p m[1] Attempt to take address of value not located in memory. Using the find method does not yield better results: (gdb) p m.find(1) Cannot evaluate function -- may be inlined Is there a way to accomplish this?
I think there isn't, at least not if your source is optimized etc. However, there are some macros for gdb that can inspect STL containers for you: http://sourceware.org/ml/gdb/2008-02/msg00064.html However, I don't use this, so YMMV
427,678
427,936
Design of a polling event API
Say you were designing a C++ windowing library. It may or may not provide a callback API, but needs to provide a polling API to facilitate a functional style of programming. What would the polling API look like? Some options SDL style struct Event { enum { MousePress, KeyPress } type; union { struct { Point pos; MouseButton b; } mousePress; struct { Modifiers mods; char key; } keyPress; }; }; void userCode() { for(;;) { Event e; if(pollEvent(&e)) { switch(e.type) { case MousePress: cout<<event.mousePress.pos.x; break; // not typesafe case KeyPress: cout<<event.keyPress.key; break; } } } } State style struct Input { enum { Mouse, Keyboard, Nothing } whatChanged; MouseButtonsBitfield pressedButtons; bool keysPressed[keyCount]; }; void userCode() { for(;;) { Input in = pollInput(); switch(in.whatChanged) { // typesafe yay case Mouse: cout << "is LMB pressed? " << bool(in.pressedButtons&LeftButton); break; case Keyboard: cout << "is A pressed? " << in.keysPressed['A']; break; } } } Fun functional pseudo-C++ style struct Event { // transforms listener by notifying it of event, // returns transormed listener. nondestructive. template<class Listener> // sadly invalid, templates can't be virtual. // a solution is to make Listener the base // of a hierarchy and make Listener::handle virtual // but then we're forced to use imperative style virtual Listener transform(Listener const&) =0; }; struct MousePress : Event { // yay we're extensible via inheritance template<class Listener> virtual Listener transform(Listener const& listener) { return listener.handle(*this); // calls the MousePress overload } Point pos; MouseButton b; }; struct KeyPress : Event { template<class Listener> virtual Listener transform(Listener const& listener) { return listener.handle(*this); // calls the KeyPress overload } Modifiers mods; char key; }; struct NoEvent : Event { template<class Listener> virtual Listener transform(Listener const& listener) { return listener.handle(*this); } }; struct UserWidget { UserWidget handle(NoEvent) { return UserWidget(); } UserWidget handle(MousePress p) { return (UserWidget) { string("pressed at")+lex_cast<string>(p.pos)) }; } UserWidget handle(KeyPress k) { return (UserWidget) { string("pressed key=")+lex_cast<string>(k.key)) }; } string pendingOutput; }; void userTick(UserWidget const& w) { cout<<w.pendingOutput; userTick(pollEvent().transform(w)); } void userCode() { userTick(UserWidget()); } Answers for other languages than C++ are OK, if they provide interesting insight. No comments on encapsulation please - yes public fields should really be accessors, i left that out for clarity.
To answer your question quickly, I prefer the simplicity of the "SDL-style code". Mainly because your slightly more complicated "State Style" wastes memory and buys you absolutely nothing (see below), and the recursion in your tortured "Functional pseudo-C++" style will overflow the stack within a few milliseconds. "State Style": Your "typesafe yay" in the "State Style" code is a bit unwarranted. You are still deciding which member to access based on a switch on another member, so the code has all the same weaknesses that the "SDL Style" code has -- for any mistake that you could make with the SDL-style code that leads to interpreting memory as the wrong type, you would make the equally bad mistake of accessing an uninitialised member with the State-style code. "Functional pseudo-C++ style": Now you're getting somewhere, inheriting different event types from a base event type. Obviously the silly recursion needs to become a loop, and there are a few little things to tidy up (I think your 3 methods named transform() in UserWidget want to be called handle(); I'm guessing that you can resolve the problem of no template virtual methods using Boost.Function or similar). I think this approach has potential, though I prefer the simplicity of SDL style. But more fundamentally: I question the need for a polling interface. Is there a reason why pollEvent() cannot block? As it stands, all 3 code segments are burning CPU time doing nothing 99.99% of the time.
427,693
429,395
How to use a program which is not in the source code's folder?
For example: I'm on MS-DOS and I have source code in the folder C:\Documents and Settings\Programs. Can I make my source code use a program (for example gnuplot) that is in some other folder?
Here are some options: (1) search the system PATH for the executable you want to run; (2) allow the user to specify the location on the command line; (3) store the location in a configuration file, and allow the user to specify it during install (if you have an install process) or by editing the file by hand. Ideally you'd do all three.
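A small sketch of the first two options; the gnuplot command line and paths are only examples:

    #include <cstdlib>
    #include <iostream>
    #include <string>

    int main(int argc, char* argv[])
    {
        // By default rely on the system PATH to locate the executable.
        std::string gnuplot = "gnuplot";

        // Or let the user pass an explicit location on the command line,
        // e.g. "C:\\Tools\\gnuplot\\bin\\gnuplot.exe".
        if (argc > 1)
            gnuplot = argv[1];

        std::string command = gnuplot + " myscript.plt";
        if (std::system(command.c_str()) != 0)
            std::cerr << "gnuplot failed or was not found\n";
        return 0;
    }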
427,761
427,797
Variable sized class - C++
I've seen a class which is defined like this: class StringChild : public StringBase { public: //some non-virtual functions static StringChild* CreateMe(int size); private: unsigned char iBuf[1]; }; The static factory function has the following implementation: return new(malloc(__builtin_offsetof(StringChild ,iBuf[size]))) StringChild(); So as far as I understand it, this function is using placement new to extend this class. Is this safe only because there is only 1 member and it's allocated on the heap?
It's an old C trick that was used to work around the non-availability of variable length arrays in plain C. Yes, it also works in C++ as long as you use suitable allocator constructs (like allocating a bunch of raw memory of the desired size and then placement-newing the object in there). It's safe as long as you don't wander over the end of the allocated memory, but it does tend to confuse at least some memory debuggers. One thing you have to make absolutely certain of when using this technique is that the variable length array is the last element in the object layout, otherwise you'll walk over other internal variables. I am however a little dubious about the implementation of the factory function - I assume the 'size' parameter is actually the desired array size? Also, don't forget that you'd have to release the memory above using 'free' and not 'delete', even though the latter might work in most cases. Unless there's a compelling reason as to why the memory has to be managed this way, I would simply replace the array with a std::vector.
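A stripped-down sketch of the trick being discussed, with the trailing one-element array, over-allocation, placement new and the matching free; this is illustrative, not the original StringChild code:

    #include <cstddef>
    #include <cstdlib>
    #include <cstring>
    #include <new>

    struct VarString
    {
        int length;
        unsigned char buf[1];                 // must remain the last member

        static VarString* create(const char* text)
        {
            std::size_t len = std::strlen(text);
            // Room for the header plus the characters that extend past buf[0].
            void* mem = std::malloc(offsetof(VarString, buf) + len + 1);
            VarString* s = new (mem) VarString;   // placement new on the raw block
            s->length = (int)len;
            std::memcpy(s->buf, text, len + 1);
            return s;
        }

        static void destroy(VarString* s)
        {
            s->~VarString();
            std::free(s);                     // malloc'd memory, so free, not delete
        }
    };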
427,927
439,121
C++ Library to Convert HTML to PDF?
I am looking for a C/C++ library to convert HTML (Actually XHTML + CSS) documents to PDF. It is for commercial use and source would be nice but not essential. Anybody have any recommendations or experience doing this? UPDATE: To clarify, I am targeting the Windows platform only. I am developing with Borland C++ Builder 2006, but the library does not have to be a VCL component. Many thanks in advance. Steve.
Just to bump this, I have evaluated both VisPDF and PDFDoc Scout and will probably go with PDFDoc Scout as it can format HTML input. Thanks for everybody else's input.
428,013
428,117
Regex Replacing &#58; to ":" etc
I've got a bunch of strings like: "Hello, here's a test colon&#58;. Here's a test semi-colon&#59;" I would like to replace that with "Hello, here's a test colon:. Here's a test semi-colon;" And so on for all printable ASCII values. At present I'm using boost::regex_search to match &#(\d+);, building up a string as I process each match in turn (including appending the substring containing no matches since the last match I found). Can anyone think of a better way of doing it? I'm open to non-regex methods, but regex seemed a reasonably sensible approach in this case. Thanks, Dom
The big advantage of using a regex is to deal with the tricky cases like &#38;#38; Entity replacement isn't iterative, it's a single step. The regex is also going to be fairly efficient: the two lead characters are fixed, so it will quickly skip anything not starting with &#. Finally, the regex solution is one without a lot of surprises for future maintainers. I'd say a regex was the right choice. Is it the best regex, though? You know you need two digits and if you have 3 digits, the first one will be a 1. Printable ASCII is after all &#32;-&#126;. For that reason, you could consider &#1?\d\d;. As for replacing the content, I'd use the basic algorithm described for boost::regex::replace: for each match (using regex_iterator<>), print the prefix of the match, remove the first 2 and last characters of the match (&#, ;), lexical_cast the result to int, then truncate to char and append; finally, print the suffix of the last match.
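A sketch of that replacement loop using boost::sregex_iterator; the function name and the exact regex are illustrative:

    #include <boost/lexical_cast.hpp>
    #include <boost/regex.hpp>
    #include <iostream>
    #include <string>

    std::string decode_entities(const std::string& input)
    {
        static const boost::regex entity("&#(\\d+);");
        std::string output;
        std::string::const_iterator last = input.begin();

        boost::sregex_iterator it(input.begin(), input.end(), entity), end;
        for (; it != end; ++it)
        {
            const boost::smatch& m = *it;
            output.append(last, m[0].first);                 // text before this match
            int code = boost::lexical_cast<int>(m[1].str()); // the digits between &# and ;
            output += static_cast<char>(code);               // truncate to char
            last = m[0].second;
        }
        output.append(last, input.end());                    // suffix after the last match
        return output;
    }

    int main()
    {
        std::cout << decode_entities("Hello, a colon&#58; and a semi-colon&#59;") << "\n";
        return 0;
    }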
428,153
428,179
How do I call a C++ static library from Perl?
I'm writing a C++ static library that needs to be shared among several applications, one of them written in Perl. Unfortunately, I only barely know the core Perl language (I read the Llama book), and I'm not all that familiar with its libraries. How do you make calls to an external C++ binary from a Perl script? By Google search, I found information on the Perl Inline module, but if I understand it correctly, that isn't what I need. I'm pretty sure that's for writing C and C++ code directly in your Perl scripts, not for calling external C++ libraries. The C++ static library is being cross-compiled for an ARM processor, so there will be no C++ compiler on the target machine where the Perl script will be running. (If that makes a difference.)
You want to look at using XS, which is how Perl normally interfaces with C/C++ libraries. It's not quite trivial. A couple of relevant portions of the Perl documentation: perlxs perlxstut
428,553
530,860
Unable to get hudson to parse JUnit test output XML
EDIT: This issue has been fixed by google in gtest 1.4.0; see the original bug report for more information. I've recently switched to gtest for my C++ testing framework, and one great feature of it which I am presently unable to use is the ability to generate JUnit-style XML test reports, which could then be read in by our hudson build server. The XML output generated by the gtest test suite all looks legit: <?xml version="1.0" encoding="UTF-8"?> <testsuite tests="370" failures="0" disabled="0" errors="0" time="45.61" name="AllTests"> <testsuite name="application" tests="7" failures="0" disabled="0" errors="0" time="8.953"> <testcase name="zero_tasks_on_bootup" status="run" time="0" classname="application" /> ...etc. </testsuite> </testsuite> I've also tried adding a JUnitReport task to my ant build script, which works fine, and generates XML like so: <?xml version="1.0" encoding="UTF-8"?> <testsuite tests="370" failures="0" disabled="0" errors="0" time="45.61" name="AllTests"> <testsuite name="application" tests="7" failures="0" disabled="0" errors="0" time="8.953"> <testcase name="zero_tasks_on_bootup" status="run" time="0" classname="application" /> ...etc. </testsuite> </testsuite> The problem is, whenever I tell ant to publish the JUnit test results, and then point it to either the raw test result XML, or the compiled result generated in the ant JUnitReport task, hudson always complains about finding no test results there. I'm not a java guy, so I can't tell what's going on here, and I can't find an example of how the JUnit XML ought to look like. Can someone help to point me in the right direction?
Edit: Google test has fixed this issue, which is included in the gtest 1.4.0 release. See the original bug report for more info. Bah! I've finally found the cause of this problem -- it's because gtest produces one giant XML file for all test results, and hudson expects one XML test report per class. I've written a perl script as a workaround for this issue. To use it, you would make a target in your ant xml script which looks something like this: <target name="runtests"> <exec executable="wherever/${ant.project.name}Test" failonerror="false" dir="tests"> <arg value="--gtest_output=xml:${build.dir}\reports\${ant.project.name}.xml"/> </exec> <!-- Workaround for broken gtest output --> <mkdir dir="${build.dir}/reports/output"/> <exec executable="perl" failonerror="false" dir="tests"> <arg value="gtest-hudson.pl"/> <arg value="${build.dir}/reports/${ant.project.name}.xml"/> <arg value="${build.dir}/reports/output"/> </exec> </target> For some reason, gtest also doesn't like the wrong style of slashes being passed to it from ant, so I made my exec for windows only, as my hudson is running on a windows server. Change to '/' for unix, obviously. I've also filed an issue for this on the gtest page, and also one on hudson's issue tracker, so hopefully one of the two teams will pick up on the issue, as I don't have enough time to jump in and make a patch myself.... though if this doesn't get fixed in the near future, I might just have to. ;)
428,630
428,674
Assigning cout to a variable name
In ANSI C++, how can I assign the cout stream to a variable name? What I want to do is, if the user has specified an output file name, I send output there, otherwise, send it to the screen. So something like: ofstream outFile; if (outFileRequested) outFile.open("foo.txt", ios::out); else outFile = cout; // Will not compile because outFile does not have an // assignment operator outFile << "whatever" << endl; I tried doing this as a Macro function as well: #define OUTPUT outFileRequested?outFile:cout OUTPUT << "whatever" << endl; But that gave me a compiler error as well. I supposed I could either use an IF-THEN block for every output, but I'd like to avoid that if I could. Any ideas?
Use a reference. Note that the reference must be of type std::ostream, not std::ofstream, since std::cout is an std::ostream, so you must use the least common denominator. std::ofstream realOutFile; if(outFileRequested) realOutFile.open("foo.txt", std::ios::out); std::ostream & outFile = (outFileRequested ? realOutFile : std::cout);
429,125
429,236
Override and overload in C++
Yes, I do understand the difference between them. What I want to know is: why OVERRIDE a method? What good does it do? And in the case of overloading: is the only advantage that you don't have to think of different names for functions?
Overloading generally means that you have two or more functions in the same scope having the same name. The function that better matches the arguments when a call is made wins and is called. Important to note, as opposed to calling a virtual function, is that the function that's called is selected at compile time. It all depends on the static type of the argument. If you have an overload for B and one for D, and the argument is a reference to B, but it really points to a D object, then the overload for B is chosen in C++. That's called static dispatch as opposed to dynamic dispatch. You overload if you want to do the same as another function having the same name, but you want to do that for another argument type. Example: void print(Foo const& f) { // print a foo } void print(Bar const& bar) { // print a bar } they both print their argument, so they are overloaded. But the first prints a foo, and the second prints a bar. If you have two functions that do different things, it's considered bad style when they have the same name, because that can lead to confusion about what will happen actually when calling the functions. Another usecase for overloading is when you have additional parameters for functions, but they just forward control to other functions: void print(Foo & f, PrintAttributes b) { /* ... */ } void print(Foo & f, std::string const& header, bool printBold) { print(f, PrintAttributes(header, printBold)); } That can be convenient for the caller, if the options that the overloads take are often used. Overriding is something completely different. It doesn't compete with overloading. It means that if you have a virtual function in a base class, you can write a function with the same signature in the derived class. The function in the derived class overrides the function of the base. Sample: struct base { virtual void print() { cout << "base!"; } } struct derived: base { virtual void print() { cout << "derived!"; } } Now, if you have an object and call the print member function, the print function of the derived is always called, because it overrides the one of the base. If the function print wasn't virtual, then the function in the derived wouldn't override the base function, but would merely hide it. Overriding can be useful if you have a function that accepts a base class, and every one that's derived from it: void doit(base &b) { // and sometimes, we want to print it b.print(); } Now, even though at compile time the compiler only knows that b is at least base, print of the derived class will be called. That's the point of virtual functions. Without them, the print function of the base would be called, and the one in the derived class wouldn't override it.
429,322
429,884
Unresolved external symbol CompleteAuthToken referenced
While porting a desktop application to windows mobile I've reached the following error: Error LNK2019: unresolved external symbol CompleteAuthToken referenced in function Reading MSDN it tell me that CompleteAuthToken is supported on Windows CE 2.10 and later and I should link against Secur32.lib, but adding that library didn't fix the unresolved symbol error. Any ideas? Note: I'm using Visual Studio 2008 + Windows Mobile 6.0 SDK.
I think the MSDN docs are wrong. Looking in the Platform Builder source, I see that CompleteAuthToken() is in schannel.def and that it gets built into schannel.lib, not secur32.lib. See if making that change helps.
429,632
430,668
How to speed up floating-point to integer number conversion?
We're doing a great deal of floating-point to integer number conversions in our project. Basically, something like this for(int i = 0; i < HUGE_NUMBER; i++) int_array[i] = float_array[i]; The default C function which performs the conversion turns out to be quite time consuming. Is there any work around (maybe a hand tuned function) which can speed up the process a little bit? We don't care much about a precision.
Most of the other answers here just try to eliminate loop overhead. Only deft_code's answer gets to the heart of what is likely the real problem -- that converting floating point to integers is shockingly expensive on an x86 processor. deft_code's solution is correct, though he gives no citation or explanation. Here is the source of the trick, with some explanation and also versions specific to whether you want to round up, down, or toward zero: Know your FPU Sorry to provide a link, but really anything written here, short of reproducing that excellent article, is not going to make things clear.
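For reference, a sketch of the kind of "magic number" conversion the linked article describes. It assumes IEEE-754 doubles, a little-endian layout, a 32-bit int, and the FPU left in its default round-to-nearest mode, and it rounds rather than truncates:

    #include <cstring>

    // Adding 1.5 * 2^52 pushes the value's integer part into the low 32 bits
    // of the double's mantissa; reading those bits back gives the rounded int.
    inline int fast_double_to_int(double d)
    {
        d += 6755399441055744.0;                    // the "magic" constant, 1.5 * 2^52
        int result;
        std::memcpy(&result, &d, sizeof(result));   // low half of the double on little-endian machines
        return result;
    }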
429,995
430,049
How do C and C++ store large objects on the stack?
I am trying to figure out how C and C++ store large objects on the stack. Usually, the stack is the size of an integer, so I don't understand how larger objects are stored there. Do they simply take up multiple stack "slots"?
The stack is a piece of memory. The stack pointer points to the top. Values can be pushed on the stack and popped to retrieve them. For example if we have a function which is called with two parameters (1 byte sized and the other 2 byte sized; just assume we have an 8-bit PC). Both are pushed on the stack this moves the stack pointer up: 03: par2 byte2 02: par2 byte1 01: par1 Now the function is called and the return addres is put on the stack: 05: ret byte2 04: ret byte1 03: par2 byte2 02: par2 byte1 01: par1 OK, within the function we have 2 local variables; one of 2 bytes and one of 4. For these a position is reserved on the stack, but first we save the stack pointer so we know where the variables start by counting up and the parameters are found by counting down. 11: var2 byte4 10: var2 byte3 09: var2 byte2 08: var2 byte1 07: var1 byte2 06: var1 byte1 --------- 05: ret byte2 04: ret byte1 03: par2 byte2 02: par2 byte1 01: par1 As you see, you can put anything on the stack as long as you have space left. And else you will get the phenomena that gives this site its name.
430,386
430,396
C# is probably not the best choice for a security application?
I have stumbled upon this phrase on the web: "C# is probably not the best choice for a system-level application like this. I believe plain C++ is much better here as you will need to do lots of low-level API calls." I was searching for security programs made using C#, like firewalls, parental control, anti-virus, anti-spyware, etc. So, is it true that C# is probably not the best choice for a security application? [edit] While researching C# security programs, I found 2 programs: 1) Keepass (http://keepass.info/compare.html), an impressive password manager that is migrating from C++ to C#!!! 2) Home Again (http://www.codeplex.com/totty530), a packet sniffer made using C# [/edit]
Things like firewalls often need hooks into kernel-level facilities which aren't (trivially) exposed by managed runtimes like .NET. You can jump through hoops to use the C# language to do this sort of thing, but then, well, you'd be jumping through hoops.
430,401
430,513
C++ Standard and Global Symbol Removal
Anyone have a link to what the C++ standard says regarding a compiler removing global and static symbols? I thought you weren't guaranteed that the compiler will remove global symbols if they're not referenced. A colleague of mine asserts that if your global symbols are included in the main translation unit, those symbols will not be removed even if they're not referenced.
Interestingly, all I can find on this in the C++2003 standard is this: 3.7.1 Static storage duration [basic.stc.static] All objects which neither have dynamic storage duration nor are local have static storage duration. The storage for these objects shall last for the duration of the program (3.6.2, 3.6.3). If an object of static storage duration has initialization or a destructor with side effects, it shall not be eliminated even if it appears to be unused, except that a class object or its copy may be eliminated as specified in 12.8. This implies that the standard permits elimination of items in static storage if initialization and destruction of them have no side effects and they are otherwise unused. If there's a more direct permission, I didn't see it (but maybe someone else will). However, it should be noted that having the linker eliminate unused objects in the final image is a very common optimization.
430,424
430,435
Are there any macros to determine if my code is being compiled to Windows?
I would like to detect whether the OS I'm compiling on is Windows. Is there a simple macro I can check to verify that?
[Edit: I assume you want to use compile-time macros to determine which environment you're on. Maybe you want to determine if you're running on Wine under Linux or something instead of Windows, but in general, your compiler targets a specific environment, and that is either Windows (DOS) or it isn't, but it's rarely (never?) both.] Some compilers offer macros to indicate a Windows build environment. But these will vary from compiler to compiler, and even on the same compiler on Windows if the target environment is not exclusively windows. Usually it's __WIN32__, but not always. #if defined (__WIN32__) // Windows stuff #endif Sometimes it can be _WIN32, __CYGWIN32__, or possibly just the compiler indicator (_MSC_VER). If you know the environment you'll be building in (from the makefile) then you can usually pass in the #define on the command line, like "g++ -D __WIN32__ yourfile.c". A little more info here
431,082
431,114
pthread_join - multiple threads waiting
Using POSIX threads & C++, I have an "insert operation" which can only be done safely one at a time. I have multiple threads waiting to insert using pthread_join, each spawning a new thread when the current one finishes. Will they all receive the "thread complete" signal at once and spawn multiple inserts, or is it safe to assume that the thread which receives the "thread complete" signal first will spawn a new thread, blocking the others from creating new threads? /* --- GLOBAL --- */ pthread_t insertThread; /* --- DIFFERENT THREADS --- */ // Wait for Current insert to finish pthread_join(insertThread, NULL); // Done start a new one pthread_create(&insertThread, NULL, Insert, Data); Thank you for the replies. The program is basically a huge hash table which takes requests from clients through sockets. Each new client connection spawns a new thread from which it can then perform multiple operations, specifically lookups or inserts. Lookups can be conducted in parallel, but inserts need to be "re-combined" into a single thread. You could say that lookup operations could be done without spawning a new thread for the client; however, they can take a while, causing the server to lock up and drop new requests. The design tries to minimize system calls and thread creation as much as possible. But now that I know it's not safe the way I first thought, I should be able to cobble something together. Thanks
From opengroup.org on pthread_join: The results of multiple simultaneous calls to pthread_join() specifying the same target thread are undefined. So, you really should not have several threads joining your previous insertThread. First, as you use C++, I recommend boost.thread. It resembles the POSIX model of threads, and also works on Windows. And it helps you with C++, i.e. by making function-objects usable more easily. Second, why do you want to start a new thread for inserting an element, when you always have to wait for the previous one to finish before you start the next one? That doesn't seem to be a classical use of multiple threads. Although... One classical solution to this would be to have one worker-thread getting jobs from an event-queue, and the other threads posting the operation onto the event-queue. If you really just want to keep it more or less the way you have it now, you'd have to do this: Create a condition variable, like insert_finished. All the threads which want to do an insert wait on the condition variable. As soon as one thread is done with its insertion, it fires the condition variable. As the condition variable requires a mutex, you can just notify all waiting threads; they all want to start inserting, but as only one thread can acquire the mutex at a time, all threads will do the insert sequentially. But you should take care that your synchronization is not implemented in too ad-hoc a way. As this is called insert, I suspect you want to manipulate a data-structure, so you probably want to implement a thread-safe data-structure first, instead of sharing the synchronization between data-structure accesses and all clients. I also suspect that there will be more operations than just insert, which will need proper synchronization...
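The serializing effect that answer describes can be sketched with nothing more than a mutex around the insert; this is only an illustration (names invented, error handling omitted), not code from the question:

#include <pthread.h>

pthread_mutex_t insert_mutex = PTHREAD_MUTEX_INITIALIZER;

void *Insert(void *data)
{
    pthread_mutex_lock(&insert_mutex);    // only one insert runs at a time
    /* ... modify the hash table here ... */
    pthread_mutex_unlock(&insert_mutex);  // the next waiting thread proceeds
    return NULL;
}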
431,206
433,891
No type definitions in pre-defined IDL FILE
Firstly, Using plain C++, without ATL, MFC attempting to use COM Object interface. Using oleview (OLE/COM Object viewer) - used to engineer the IDL code. At this stage, using MIDL Compiler, now I'm having trouble trying to produce the following: Syntax on cmd line: midl /nologo /env win32 /tlb ".\S8_.tlb" /h ".\S8_.h" /iid ".\S8_i.c" S8.idl A corresponding .TLB (Type Library) A .H (header) An IID definitions include file (*_i.c) A proxy (*_p.c) MIDL compiler error: S8.IDL(513) : error MIDL2025 : syntax error : expecting a type specification near "S8SimObject" HRESULT LinkSimObjects( [in] S8SimObject* SourceObject, ####line 513 #### [in] S8SimObject* DestObject, [in] float TravelTime);
MIDL compiler error S8.IDL(513) : error MIDL2025 : syntax error : expecting a type specification near "S8SimObject" SOLVED Add a forward declaration - at the TOP of IDL file: import "ocidl.idl"; interface S8SimObject;
431,311
431,319
How to decide where to handle an exception - in the scope of the function it was thrown from, or in the global one?
I`m writing client-server app for windows using WinSock and I have class for server. while initialising server I have such code: class Server { static const int MaxClients = 10; std::vector connections; CRITICAL_SECTION cs; int port; SOCKET ServerSocket; sockaddr_in ServerAddress; void init(); public: Server(int Port); void addConnection(const Client& newClient); void closeConnection(int index); void Listen(); int size(); /*virtual void ClientService(); virtual void SendMsg(const std::string& msg);*/ virtual ~Server(); }; void Server::init() { WSADATA wsaData; if(WSAStartup(MAKEWORD(2, 2), &wsaData)) throw Exception("WinSock init failed"); ServerSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP); if (ServerSocket == INVALID_SOCKET) throw Exception("Socket failed to create!"); ServerAddress.sin_family = AF_INET; ServerAddress.sin_addr.s_addr = htonl(INADDR_ANY); ServerAddress.sin_port = htons(port); if(bind(ServerSocket,(sockaddr*)&ServerAddress,sizeof(ServerAddress)) == SOCKET_ERROR) { throw Exception("Binding failed"); closesocket(ServerSocket); } } where do I need to handle exceptions? Here, in init() private method (which is called from constructor) or in main function? Any rules existing where to place handlers for exceptions and what does it depend on?
It depends on whether you're actually going to handle the exception, e.g. retry with a slightly different input, or decide to ignore the problem and proceed anyway (rarely appropriate, but can be useful). In this case, you may well want to catch the exception close to its source. In most cases, the only thing you can really do with an exception is log it and abort the operation (e.g. serve the user an error page in a web app, or pop up a friendly "this program has crashed, sorry" dialog and exit). In this case, your catch block is likely to be quite close to the top of the stack - e.g. in the main method.
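For instance, with the Server class from the question, that usually boils down to something like the following in main - just a sketch based on the question's classes; the port number is made up and std::cerr needs <iostream>:

int main()
{
    try
    {
        Server server(8080);   // the constructor calls init(), which may throw
        server.Listen();
    }
    catch (const Exception&)
    {
        // nothing sensible to retry at this level, so report and exit
        std::cerr << "Server failed to start or run" << std::endl;
        return 1;
    }
    return 0;
}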
431,429
431,434
Class vs Struct for data only?
Is there any advantage over using a class over a struct in cases such as these? (note: it will only hold variables, there will never be functions) class Foo { private: struct Pos { int x, y, z }; public: Pos Position; }; Versus: struct Foo { struct Pos { int x, y, z } Pos; }; Similar questions: When should you use a class vs a struct in C++? What are the differences between struct and class in C++? When should I use a struct instead of a class?
There is no real advantage of using one over the other: in C++, the only difference between a struct and a class is the default visibility of its members (structs default to public, classes default to private). Personally, I tend to prefer structs for POD types and use classes for everything else. EDIT: litb made a good point in the comment so I'm going to quote him here: one other important difference is that structs derive from other classes/structs publicly by default, while classes derive privately by default.
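To make those two defaults concrete (purely illustrative):

struct S { int x; };   // x is public by default
class  C { int x; };   // x is private by default

struct SD : S {};      // public inheritance by default
class  CD : C {};      // private inheritance by default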
431,469
431,489
How expensive is it to dereference a pointer?
How expensive is it to perform the dereference operation on a pointer? I can imagine that the memory transfer is somehow proportional to the object size, but I want to know how expensive the dereference operation part is.
Dereferencing, when translated into machine code, can mean different things depending on what you do with the dereferenced object. Accessing a single member of a class through a pointer is typically cheap. For example if c is a pointer to an instance of class C with an int member n then something like this: int n = c->n; Might translate into one or two machine instructions and might load a register with a single memory access. On the other hand this implies making a complete copy of the object pointed to by c: C d = *c; The cost of this will depend on the size of C, but note that it is the copy that is the major expense and the 'dereference' part is really just 'using' the pointer address in the copy instructions. Note that accessing members of large objects typically requires pointer offset calculation and memory access whether or not the object is a local object or not. Typically only very small objects are optimized to live only in registers. If you are concerned about the cost of pointers over references then don't be. The difference between these are a language semantics difference and by the time the machine code is generated pointer and reference access look exactly the same.
431,470
431,548
window border width and height in Win32 - how do I get it?
::GetSystemMetrics (SM_CYBORDER) ...comes back with 1 and I know the title bar is taller than ONE pixel :/ I also tried: RECT r; r.left = r.top = 0; r.right = r.bottom = 400; ::AdjustWindowRect (& r, WS_OVERLAPPED, FALSE); _bdW = (uword)(r.right - r.left - 400); _bdH = (uword)(r.bottom - r.top - 400); But border w,h came back as 0. In my WM_SIZE handler, I need to make sure the window's height changes in "steps" so, for example a whole new line of text could fit in the window with no "junky partial line space" at the bottom. But ::MoveWindow needs the dimensions WITH the border space added in. SOMEbody must have done this before... Thanks for any help :)
The GetWindowRect and GetClientRect functions can be used to calculate the size of all the window borders. Suite101 has an article on resizing a window and keeping the client area at a known size. Here is their sample code: void ClientResize(HWND hWnd, int nWidth, int nHeight) { RECT rcClient, rcWind; POINT ptDiff; GetClientRect(hWnd, &rcClient); GetWindowRect(hWnd, &rcWind); ptDiff.x = (rcWind.right - rcWind.left) - rcClient.right; ptDiff.y = (rcWind.bottom - rcWind.top) - rcClient.bottom; MoveWindow(hWnd,rcWind.left, rcWind.top, nWidth + ptDiff.x, nHeight + ptDiff.y, TRUE); }
431,533
431,578
C++: Dynamically loading classes from dlls
For my current project I want to be able to load some classes from a dll (which is not always the same, and may not even exist when my app is compiled). There may also be several alternative dll's for a given class (eg an implementation for Direct3D9 and one for OpenGL), but only one of the dlls will be loaded/used at any one time. I have a set of base classes that define the interface plus some basic methods/members (ie the ones for refrence counting) of the classes I want to load, which the dll projects then derive from when creating there classes. //in namespace base class Sprite : public RefCounted//void AddRef(), void Release() and unsigned refCnt { public: virtual base::Texture *GetTexture()=0; virtual unsigned GetWidth()=0; virtual unsigned GetHeight()=0; virtual float GetCentreX()=0; virtual float GetCentreY()=0; virtual void SetCentre(float x, float y)=0; virtual void Draw(float x, float y)=0; virtual void Draw(float x, float y, float angle)=0; virtual void Draw(float x, float y, float scaleX, flota scaleY)=0; virtual void Draw(float x, float y, float scaleX, flota scaleY, float angle)=0; }; The thing is I'm not sure how to do it all so that the executable and other dlls can load and use these classes since ive only ever used dlls where there was only one dll and I could have the Visual Studio linker sort it all out using the .lib file I get when compileing dll's. I dont mind using factory methods for instancing the classes, many of them do already by design (Ie a sprite class is created by the main Graphics class, eg Graphics->CreateSpriteFromTexture(base::Texture*) EDIT: When I needed to write some c++ dlls for use in python I used a library called pyCxx. The resulting dll basicly only exported one method, which created an instance of the "Module" class, which could then contain factory methods to create other classes etc. The resulting dll could be imported in python just with "import [dllname]". //dll compiled as cpputill.pyd extern "C" void initcpputill()//only exported method { static CppUtill* cpputill = new CppUtill; } class CppUtill : public Py::ExtensionModule<CppUtill> { public: CppUtill() : Py::ExtensionModule<CppUtill>("cpputill") { ExampleClass::init_type(); add_varargs_method("ExampleClass",&CppUtill::ExampleClassFactory, "ExampleClass(), create instance of ExampleClass"); add_varargs_method("HelloWorld", &CppUtill::HelloWorld, "HelloWorld(), print Hello World to console"); initialize("C Plus Plus module"); } ... class ExampleClass ... static void init_type() { behaviors().name("ExampleClass"); behaviors().doc ("example class"); behaviors().supportGetattr(); add_varargs_method("Random", &ExampleClass::Random, "Random(), get float in range 0<=x<1"); } How exactly does that work, and could I use it in a purely c++ enviroment to solve my problem here?
Easiest way to do this, IMHO, is to have a simple C function that returns a pointer to an interface described elsewhere. Then your app can call all of the functions of that interface, without actually knowing what class it is using. Edit: Here's a simple example. In your main app code, you create a header for the interface: class IModule { public: virtual ~IModule(); // <= important! virtual void doStuff() = 0; }; The main app is coded to use the interface above, without any details on the actual implementation of the interface. class ActualModule: public IModule { /* implementation */ }; Now, the modules - the DLLs - have the actual implementations of that interface, and those classes don't even have to be exported - __declspec (dllexport) isn't needed. The only requirement for the modules is to export a single function, that would create and return an implementation of the interface: __declspec (dllexport) IModule* CreateModule() { // call the constructor of the actual implementation IModule * module = new ActualModule(); // return the created object return module; } note: error checking left out - you'd usually want to check if new returned a valid pointer and you should protect yourself from the exceptions that might be thrown in the constructor of the ActualModule class. Then, in your main app, all you need is to simply load the module (LoadLibrary function) and find the function CreateModule (GetProcAddress function). Then, you use the class through the interface. Edit 2: your RefCount (base class of the interface) can be implemented in (and exported from) the main app. Then your modules would just need to link to the lib file of the main app (yes! EXE files can have LIB files just like DLL files!) And that should be enough.
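The loading side might look roughly like this - a sketch only, following the names above; the DLL file name is invented, and note that CreateModule should be exported with extern "C" (or a .def file) so GetProcAddress can find it by its unmangled name:

typedef IModule* (*CreateModuleFn)();

HMODULE dll = LoadLibrary(TEXT("RendererD3D9.dll"));   // pick the D3D9 or OpenGL DLL at runtime
if (dll)
{
    CreateModuleFn create =
        (CreateModuleFn)GetProcAddress(dll, "CreateModule");
    if (create)
    {
        IModule* module = create();   // concrete type stays hidden in the DLL
        module->doStuff();            // use it only through the interface
        delete module;                // the virtual destructor routes destruction into the DLL
    }
    FreeLibrary(dll);
}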
431,827
432,058
Message Window C++ Win32 class/example
Is there a class/example application for a message-only window that is in C++ Win32?
If I recall, the standard solution is to create a basic styleless window with a message pump as you normally would, but never call ShowWindow on it. This way you can receive and process the standard messages like WM_QUERYENDSESSION which are sent to all windows.
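A bare-bones sketch of that approach (the class name is invented, error checks are omitted, and the non-function-scope part would live in WinMain or your init code):

LRESULT CALLBACK MsgWndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    // react to the broadcast/system messages you care about here
    return DefWindowProc(hWnd, msg, wParam, lParam);
}

WNDCLASS wc = {0};
wc.lpfnWndProc   = MsgWndProc;
wc.hInstance     = GetModuleHandle(NULL);
wc.lpszClassName = TEXT("HiddenMessageWindow");
RegisterClass(&wc);

HWND hWnd = CreateWindow(wc.lpszClassName, TEXT(""), 0,
                         0, 0, 0, 0, NULL, NULL, wc.hInstance, NULL);
// note: ShowWindow() is never called, so the window stays invisible

MSG msg;
while (GetMessage(&msg, NULL, 0, 0) > 0)
{
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}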
432,143
432,166
Multiple inheritance in C++ leading to difficulty overriding common functionality
In a C++ physics simulation, I have a class called Circle, and Square. These are Shapes, and have a method called push(), which applies force to it. There is then a special case of Circle, call it SpecialCircle, in which push() should exhibit slightly different properties. But in fact, there is also SpecialSquare() which should exhibit the same force properties. So I'd like to have an abstract base class called Shape which takes care of Circles and Squares, but then I'd also like an abstract base class called Special, which applies special properties to force(). What's the best way to design this class structure? So far, I've got: class Shape { virtual void push(); }; class Circle : public Shape {}; class Square : public Shape {}; class Special { virtual void push(); }; class SpecialCircle : public Circle, Special {}; class SpecialSquare : public Square, Special {}; Of course, the above won't compile, since Special::push() and Shape::push() conflict. I get "error: request for member ‘push’ is ambiguous", as expected. How can I re-organize my class structure so that Circle and Square can share certain properties with each other, but SpecialCircle and SpecialSquare can still inherit from Shape, and also inherit modified functionality from Special? Thanks. ps., is this the diamond inheritance problem?
Another solution (it may or may not fit your needs, it depends on the details of your implementation): Have the class Behavior, and let NormalBehavior and SpecialBehavior inherit from it. Have the class Shape, and let Square and Circle inherit from it. Let Shape be an aggregate type, with a Behavior member (i.e. you pass a Behavior object to the various Shape constructors). In other words, let a Shape have a Behavior. Delegate the actual differences in the behavior of shapes to methods of the Behavior hierarchy. Conversely, you can: Have the class PhysicalObject, and let NormalObject and SpecialObject inherit from it; Have the class Shape, and let Square and Circle inherit from it; Let a PhysicalObject have a Shape. Prefer aggregation over inheritance. This is an application of the Bridge pattern. The advantage of this strategy with respect to having Square, SpecialSquare, Circle, and SpecialCircle, is that tomorrow you'll have to add Rectangle, Hexagon and so on, and for each shape you add you'll have to implement two classes (duplicated code is evil); this is, in my opinion, the real issue that Bridge addresses.
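Sketched in code, the first variant could look like this (all names here are invented for illustration; ownership/lifetime of the Behavior is glossed over):

class Shape;   // forward declaration

class Behavior {
public:
    virtual ~Behavior() {}
    virtual void push(Shape& shape, float force) = 0;
};

class NormalBehavior : public Behavior {
public:
    virtual void push(Shape& /*shape*/, float /*force*/) { /* ordinary force response */ }
};

class SpecialBehavior : public Behavior {
public:
    virtual void push(Shape& /*shape*/, float /*force*/) { /* the "special" force response */ }
};

class Shape {
public:
    explicit Shape(Behavior* b) : behavior_(b) {}
    void push(float force) { behavior_->push(*this, force); }  // delegate to the Behavior
private:
    Behavior* behavior_;   // a Shape has a Behavior
};

class Circle : public Shape {
public:
    explicit Circle(Behavior* b) : Shape(b) {}
    // circle geometry...
};

class Square : public Shape {
public:
    explicit Square(Behavior* b) : Shape(b) {}
    // square geometry...
};

// Circle c(new SpecialBehavior());  // a "special circle" with no SpecialCircle class needed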
432,173
438,532
What are the disadvantages of the Spirit parser-generator framework from boost.org?
In several questions I've seen recommendations for the Spirit parser-generator framework from boost.org, but then in the comments there is grumbling from people using Spirit who are not happy. Will those people please stand forth and explain to the rest of us what are the drawbacks or downsides to using Spirit?
It is a quite cool idea, and I liked it; it was especially useful to really learn how to use C++ templates. But their documentation recommends the usage of spirit for small to medium-size parsers. A parser for a full language would take ages to compile. I will list three reasons. Scannerless parsing. While it's quite simpler, when backtracking is required it may slow down the parser. It's optional though - a lexer might be integrated, see the C preprocessor built with Spirit. A grammar of ~300 lines (including both .h and .cpp files) compiles (unoptimized) to a file of 6M with GCC. Inlining and maximum optimizations gets that down to ~1,7M. Slow parsing - there is no static checking of the grammar, neither to hint about excessive lookahead required, nor to verify basic errors, such as for instance usage of left recursion (which leads to infinite recursion in recursive-descent parsers LL grammars). Left recursion is not a really hard bug to track down, though, but excessive lookahead might cause exponential parsing times. Heavy template usage - while this has certain advantages, this impacts compilation times and code size. Additionally, the grammar definition must normally be visible to all other users, impacting even more compilation times. I've been able to move grammars to .cpp files by adding explicit template instantiations with the right parameters, but it was not easy. UPDATE: my response is limited to my experience with Spirit classic, not Spirit V2. I would still expect Spirit to be heavily template-based, but now I'm just guessing.
432,230
432,236
Why can I assign an existing reference to a literal value in C++?
Consider the following: int ival = 1.01; int &rval = 1.01; // error: non-const reference to a const value. int &rval = ival; rval = 1.01; The first assignment of &rval to a literal value fails as expected. If I comment out that line the code compiles and runs. I understand why the initialization fails, but I'm confused why the assignment to rval works in the last line. I didn't think it was allowed to assign a reference to a literal value. EDIT: Thanks for the quick answers. I'm tempted to delete this to hide my shame, but I think I'll leave it here so everyone else can point and laugh. In my own defense, I'm working through the exercises in a book (C++ Primer) and this problem is about reference initialization. Still, it's pretty embarrassing to have so completely overlooked the point of a reference in the first place. :)
ival isn't a literal value; 1.01 is the literal value. It's been copied (converted to int) into ival, which is a variable, and a reference can certainly be bound to a variable. The last line isn't re-binding the reference to a literal at all - it just assigns through rval, so it's ival that receives the converted value.
432,361
432,365
Is it necessary to remember lots of Win32 Api for a C++ programmer?
I always work in a Windows environment and most often write programs in C/C++. Is it necessary for a Windows application programmer to remember as many Win32 APIs as possible?? Dazza
Well, I can't say it would hurt, but I think that it's better to remember how to quickly reference the API documentation instead of actually remembering the documentation itself. That mental currency (of which there is a limited amount, of course) could be better used remembering other things to make you a better developer in your environment.
432,443
432,453
Why are there digraphs in C and C++?
I learned today that there are digraphs in C99 and C++. The following is a valid program: %:include <stdio.h> %:ifndef BUFSIZE %:define BUFSIZE 512 %:endif void copy(char d<::>, const char s<::>, int len) <% while (len-- >= 0) <% d<:len:> = s<:len:>; %> %> My question is: why do they exist?
Digraphs (like the older trigraphs) were created for programmers whose keyboards or national character sets - the ISO 646 variants - lacked characters such as {, }, [, ] and #. http://en.wikipedia.org/wiki/C_trigraph
432,520
432,563
Is there a O(1) way in windows api to concatenate 2 files?
Is there a O(1) way in windows API to concatenate 2 files? O(1) with respect to not having to read in the entire second file and write it out to the file you want to append to. So as opposed to O(n) bytes processed. I think this should be possible at the file system driver level, and I don't think there is a user mode API available for this, but I thought I'd ask.
If the "new file" is only going to be read by your application, then you can get away without actually concatenating them on disk. You can just implement a stream interface that behaves as if the two files have been concatenated, and then use that stream as opposed to what ever the default filestream implementation used by your app framework is. If that won't work for you, and you are using windows, you could always create a re parse point and a file system filter. I believe if you create a "mini filter" that it will run in user mode, but I'm not sure. You can probably find more information about it here: http://www.microsoft.com/whdc/driver/filterdrv/default.mspx
432,567
435,605
Debugging Best Practices for C++ STL/Boost with gdb
Debugging with gdb, any C++ code that uses STL/boost is still a nightmare. Anyone who has used gdb with STL knows this. For example, see sample runs of some debugging sessions in code here. I am trying to reduce the pain by collecting tips. Can you please comment on the tips I have collected below (particularly which ones you have been using and any changes you would recommend on them) - I have listed the tips in decreasing order of technicality. Is anyone using "Stanford GDB STL utils" and "UCF GDB utils"? Are there any such utils for boost data structures? The utils above do not seem to be usable recursively, for example for printing a vector of boost::shared_ptr in a legible manner within one command. Write your .gdbinit file. Include, for example, C++ related beautifiers, listed at the bottom of UCF GDB utils. Use a checked/debug STL/Boost library, such as STLport. Use logging (for example as described here) Update: GDB has a new C++ branch.
Maybe not the sort of "tip" you were looking for, but I have to say that my experience after a few years of moving from C++ & STL to C++ & boost & STL is that I now spend a lot less time in GDB than I used to. I put this down to a number of things: boost smart pointers (particularly "shared pointer", and the pointer containers when performance is needed). I can't remember the last time I had to write an explicit delete (delete is the "goto" of C++ IMHO). There goes a lot of GDB time tracking down invalid and leaking pointers. boost is full of proven code for things you'd probably hack together an inferior version of otherwise. e.g boost::bimap is great for the common pattern of LRU caching logic. There goes another heap of GDB time. Adopting unittesting. boost::test's AUTO macros mean it's an absolute doddle to set up test cases (easier than CppUnit). This catches lots of stuff long before it gets built into anything you'd have to attach a debugger to. Related to that, tools like boost::bind make it easier to design-for-test. e.g algorithms can be more generic and less tied up with the types they operate on; this makes plugging them into test shims/proxies/mock objects etc easier (that and the fact that exposure to boost's template-tasticness will encourage you to "dare to template" things you'd never have considered before, yielding similar testing benefits). boost::array. "C array" performance, with range checking in debug builds. boost is full of great code you can't help but learn from
432,570
433,214
How to get a user token from Logonuser for a user account with no password?
How can you get a user token from Logonuser for a user account with no password? In particular Logonuser will fail for accounts that do not have passwords. You can validate an account by checking for a blank password + checking for GetLastError() == ERROR_ACCOUNT_RESTRICTION. But I need to actually get a token returned, so I need this function to succeed.
This will fail if the registry setting LimitBlankPasswordUse is enabled, which it is by default. In order to disable this change the LimitBlankPasswordUse value under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa to zero. Or change the group policy setting under Security Options. Be aware that this creates a security hole since it allows remote logons for accounts with null passwords.
432,982
434,074
Is it feasible to convert a desktop based MFC C++ application to a web app
I have a very large app, 1.5 million lines of C++, which is currently MFC based using the Document/View architecture. The application includes a lot of 3d vector graphics, spreadsheets, and very many dialogs and windows. Within the constraints of the DVA it is fairly well written, in that there is no significant program logic in the user interface, and anything that can be done using the user interface can also be carried out programmatically using a COM/OLE Automation interface. At the request of a number of users, I have been toying with the idea of creating a browser interface to the program, where the program itself runs on a server. Thoughts so far are to convert all the COM interfaces to DCOM and rewrite/port the UI to Java. Initial experimentation shows that this will be a huge amount of work. Anyone else out there got any ideas for an easier implementation? Anyone come across any refactoring or similar tools specifically to help this kind of a port?
The short answer is that it is feasible, don't use java, and that it will be a considerable amount of work. A good few years ago (around the time of IE5) I was asked by a client to answer a similar question to this one. The application in question was a well structured three tier desktop application. The upshot of the study was that it is possible. The options considered were Java, and CGI, using either CORBA or COM/DCOM. Building a Java applet was considered, but ruled out because it wouldn't have been too different to the C++ desktop version. The approach taken was to take the backend tier and turn that into a server application sitting behind a cgi layer. The UI was largely re-written using what we now know as Ajax, i.e. Javascript and HTML. The UI was split between browser elements and server elements. I did consider writing a tool to convert documents, views and dialogs and wrapping these into a compatible format, however after much analysis it became obvious that it wasn't really feasible to do this because the MFC is actually quite closely coupled to the Win32 API, and itself. Depending on the amount of dialogs it may be worth writing something to convert these. I did find, that even in a well structured system that a fair bit of code that should have been in the backend had leaked into the front end. If I were to do the same thing now, there are some excellent javascript libraries that would help, but I'd still take the same approach of doing as much as possible in the browser using Ajax, with possibly something like qooxdoo or YUI. I'd also probably move look at using XUL and moving at least most of the dialogs into the backend. The only thing that concerns me from your list of requirements is the 3d vector graphics, although there maybe some mileage in this JS toy or JS3D. This is only a brief summary, as I'm trying to avoid writing a dissertation.
433,164
433,207
What happens to an STL iterator after erasing it in VS, UNIX/Linux?
Please consider the following scenario: map<T,S*>& GetMap(); //Forward declaration map<T, S*> T2pS = GetMap(); for(map<T, S*>::iterator it = T2pS.begin(); it != T2pS.end(); ++it) { if(it->second != NULL) { delete it->second; it->second = NULL; } T2pS.erase(it); //In VS2005, after the erase, we will crash on the ++it of the for loop. //In UNIX, Linux, this doesn't crash. }//for It seems to me that in VS2005, after the "erase", the iterator will be equal to end(), hence the crash while trying to increment it. Are there really differences between compilers in the behavior presented here? If so, what will the iterator be equal to after the "erase" in UNIX/Linux? Thanks...
Yes, if you erase an iterator, that iterator gets a so-called singular value, which means it doesn't belong to any container anymore. You can't increment, decrement or read it out/write to it anymore. The correct way to do that loop is: for(map<T, S*>::iterator it = T2pS.begin(); it != T2pS.end(); T2pS.erase(it++)) { // wilhelmtell in the comments is right: no need to check for NULL. // delete of a NULL pointer is a no-op. if(it->second != NULL) { delete it->second; it->second = NULL; } } For containers that could invalidate other iterators when you erase one iterator, erase returns the next valid iterator. Then you do it with it = T2pS.erase(it) That's how it works for std::vector and std::deque, but not for std::map or std::set.
433,220
433,269
QNX c++ thread question
I have a question concerning this code which I want to run on QNX: class ConcreteThread : public Thread { public: ConcreteThread(int test) { testNumber = test; } void *start_routine() { for(int i = 0; i < 10; i++) { sleep(1); cout << testNumber << endl; } } private: int testNumber; }; class Thread { public: Thread(){}; int Create() { pthread_t m_id; return pthread_create(&m_id, NULL, &(this->start_routine_trampoline), this); } protected: virtual void *start_routine() = 0; private: static void *start_routine_trampoline(void *p) { Thread *pThis = (Thread *)p; return pThis->start_routine(); } }; Now, when I run this code without the sleep in *start_routine, it will simply print the number 10 times, before continuing on to the next line of code (sequential instead of parallel). However, when I use a sleep like in the code, it doesn't print any numbers at all and simply goes on to the next line of code. Why doesn't sleep work and how can I make a thread like this work, instead of running sequential?
Note 1: If you only have 1 processor the code can only be done sequentially no matter how many threads you create. Each thread is given a slice of processor time before it is swapped out for the next thread. Note 2: If the main thread exits, pthreads will kill all child threads before they have a chance to execute. Now to answer your questions: Without the sleep: the thread, once started, has enough time in the single slice it was given to execute the loop 10 times completely. With the sleep: Your worker thread is going to sleep for a full second. So your main thread has time to do a lot of work. If the main thread exits in this time the worker will be killed. I would make the following changes: // Remove the Create() method // Put thread creation in the constructor. // Make the thread variable part of the object pthread_t m_id; Thread() { if (pthread_create(&m_id, NULL, &(this->start_routine_trampoline), this) != 0) { throw std::runtime_error("Thread was not created"); } } // Make sure the destructor waits for the thread to exit. ~Thread() { pthread_join(m_id, NULL); } If you go and look at the boost threading library, you will find that all the little mistakes like this have already been taken care of, thus making threading easier to use. Also note that using a static member function may work, but it is non-portable. This is because pthreads is a C library and is thus expecting a function pointer with a C ABI. You are just getting lucky for your platform here. You need to define this as a function and declare the ABI by using extern "C" // This needs to be a standard function with C Interface. extern "C" void *start_routine_trampoline(void *p) { }
433,274
433,405
C++ Parameter's Value Changes Between Stack Frames in std::vector
I've run into a really strange bug, that I'm hoping someone can explain. I have a simple std::vector<V3x>, where V3x is a 3d vector (the linear algebra kind.) The following code causes a std::length_error exception to be thrown: std::vector<V3x> vertices; int vertexCount = computeVertexCount(); vertices.resize(vertexCount); // throws std::length_error I've verified that computeVertexCount() returns a value of 35, which is far far below vector::max_size() so there's no way it's asking for too much memory. I traced the exception down into the definition of std::vector, to the following two functions. void resize(size_type _Newsize, _Ty _Val) { // determine new length, padding with _Val elements as needed if (size() < _Newsize) // NOTE: here, _Newsize - size() = 35 _Insert_n(end(), _Newsize - size(), _Val); else if (_Newsize < size()) erase(begin() + _Newsize, end()); } void _Insert_n(const_iterator _Where, size_type _Count, const _Ty& _Val) { // insert _Count * _Val at _Where // NOTE: here, _Count = 3435973836 ... } So when the _Count parameter is passed between resize() and _Insert_n(), the value changes from 35 to 3435973836. I'm assuming the memory has somehow become corrupted, but I have no idea how that could be. For a little more context in case it's part of the problem, this code sits in a .dll plugin that I'm loading from Softimage XSI. Does anyone know what might cause something like this to happen? EDIT: SOLUTION nobugz, I could kiss you. The size of std::vector was changing inside my .dll, because of _HAS_ITERATOR_DEBUGGING in VS2008. The search led me to someone with the same problem, and it was fixed by adding the following at the top of my project: // fix stack corruption errors caused by VS2008 #define _HAS_ITERATOR_DEBUGGING 0 #define _SECURE_SCL 0
The value 3435973836 is significant. In hex, that's 0xcccccccc. That's the value assigned to local variables in Debug mode by the stack frame initialization code. When you see it back while debugging, you'd say "ah, variable not initialized". Maybe that gets you a bit closer to solving this. You mention DLL. That's relevant too. Iterator debugging might get you into trouble, you cannot mix code that has it turned off with code that doesn't. Since the DLL is probably compiled without it, try #define _HAS_ITERATOR_DEBUGGING 0.
433,301
433,343
Portable and simple unicode string library for C/C++?
I'm looking for a portable and easy-to-use string library for C/C++, which helps me to work with Unicode input/output. In the best case, it will store its strings in memory in UTF-8, and allow me to convert strings from ASCII to UTF-8/UTF-16 and back. I don't need much more besides that (ok, a liberal license won't hurt). I have seen that C++ comes with a <locale> header, but this seems to work on wchar_t only, which may or may not be UTF-16 encoded, plus I'm not sure how good this is actually. Uses cases are for example: On Windows, the unicode APIs expect UTF-16 strings, and I need to convert ASCII or UTF-8 strings to pass it on to the API. Same goes for XML parsing, which may come with UTF-16, but I actually only want to process internally with UTF-8 (or, for that matter, if I switch internally to UTF-16, I'll need a conversion to that anyway). So far, I've taken a look at the ICU, which is quite huge. Moreover, it wants to be built using it own project files, while I'd prefer a library for which there is either a CMake project or which is easy to build (something like compile all these .c files, link and good to go), instead of shipping something large as the ICU along my application. Do you know such a library, which is also being maintained? After all, this seems to be a pretty basic problem.
UTF8-CPP seems to be exactly what you want.
433,325
433,666
Running background services on a PocketPC
I've recently bought myself a new cellphone, running Windows Mobile 6.1 Professional. And of course I am currently looking into doing some coding for it, on a hobby basis. My plan is to have a service running as a DLL, loaded by Services.exe. This needs to gather som data, and do som processing at regular intervals (every 5-10 minutes). Since I need to run this at regular intervals, it is a bit of a problem for me, that the system typically goes to sleep (suspend) after a short period of inactivity by the user. I have been reading all the documentation I could find on MSDN, and MSDN blogs about this subject, and it seems to me, that there are three possible solutions to this problem: Keep the system in an "Always On"-state, by calling SystemIdleTimerReset periodically. This seems a bit excessive, and is therefore out of the question. Have the system periodically waken up with CeRunAppAtTime, and enter the unattended state, to do my processing. Use the unattended state instead of going into a full suspend. This would be transparent to the user, but the system would never go into sleep. The second approach seems to be preferred, however, this would require an executable to be called by the system on wake up, with the only task of notifying my service that it should commence processing. This seems a bit unnecessary and I would like to avoid this extra executable. I could of course move all my processing into this extra executable, but I would like to use some of the facilities provided when running as a service, and also not have a program pop up (even if its in the background) whenever processing starts. At first glance, the third approach seems to have the same basic problem as the first. However, I have read on some of the MSDN blogs, that it might be possible to actually conserve battery consumption with this approach, instead of going in and out of suspend mode often (The arguments for this was that the nature of the WM platform is to have a very little battery consumption, when the system is idle. And that going in and out of suspend require quite a bit of processing). So I guess my questions are as following: Which approach would you recommend in my situation? With respect to keeping a minimum battery consumption, and a nice clean implementation. In the case of approach number two, is it possible to eliminate the need for a notifying executable? Either through alternative API functions, or existing generic applications on the platform? In the case of approach number three, do you know of any information/statistics relevant to the claim, that it is possible to extend the battery lifetime when using unattended mode over going into suspend. E.g. how often do you need to pull the system out of suspend, before unattended mode is to be preferred. Implementation specific (bonus) question: Is it necessary to regularly call SystemIdleTimerReset to stay in unattended mode? And finally, if you think I have prematurely eliminated approach number one, please tell me why. Please include in your response whether you base your response on knowledge, or are merely guessing (the latter is also very welcome!). Please leave a comment, if you think I need to clarify any parts of this question.
CeRunAppAtTime is a much-misunderstood API (largely because of the terrible name). It doesn't have to run an app. It can simply set a named system event (see the description of the pwszAppName parameter in the MSDN docs). If you care to know when it has fired (to let your app put the device to sleep again when it's done processing) simply have a worker thread that is doing a WaitForSingleObject on that same named event. Unattended state is often used for devices that need to keep an app running continuously (like an MP3 player) but conserve power by shutting down the backlight (probably the single most power consuming subsystem). Obviously unattended mode uses significantly more power than suspend, because in suspend the only power draw is for RAM self-refresh. In unattended mode the processor is still powered and running (and several peripherals may be too - depends on how the OEM defined their unattended mode). SystemIdleTimerReset simply prevents the power manager from putting the device into low-power mode due to inactivity. This mode, whether suspended, unattended, flight or other, is defined by the OEM. Use it sparingly because when you do, it impacts the power consumption of the device. Doing it in unattended mode is especially problematic from a user perspective because they might think the device is off (it looks that way) but now their battery life has gone south.
433,327
434,587
Minimal XML library for C++?
What XML libraries are out there, which are minimal, easy to use, come with little dependencies (ideally none), can be linked statically and come with a liberal license? So far, I've been a pretty happy user of TinyXML, but I'm curious what alternatives I have missed so far.
I recommend rapidxml. It's an order of magnitude smaller than tinyxml, and doesn't choke on doctypes like tinyxml does. If you need entity support or anything advanced, forget about static linking and use expat or libxml2.
433,356
433,633
Editing the IDL created by OLE view for a C++ component
What things do I need to consider when doing this: What to remove/manipulate/add Regards
Defining interfaces in IDL for client neutrality. A number of issues to take note of are described in that link.
433,853
433,877
What are some common Java pitfalls/gotchas for C++ programmer?
As the question says, what are some common/major issues that C++ programmers face when switching to Java? I am looking for some broad topic names or examples and day to day adjustments that engineers had to make. I can then go and do an in-depth reading on this. I am specifically interested in opinions of engineers who have worked in C++ for years and had to work with Java but any pointers from others or even book recommendations are more than welcome.
In C++ you'd use destructors to clean up file descriptors, database connections and the like. The naive equivalent is to use finalizers. Don't. Ever. Instead use this pattern: OutputStream os; try { os = ... // do stuff } finally { try { os.close(); } catch (Exception e) { } } You'll end up doing stuff like that a lot. If you specify no access modifier, in Java the members are package-private by default, unlike C++ in which they are private. Package-private is an annoying access level meaning it's private but anything in the same package can access it too (which is an idiotic default access level imho); There is no stack/heap separation. Everything is created on the heap (well, that's not strictly true but we'll pretend it is); There is no pass-by-reference; The equivalent to function pointers is anonymous interfaces.
433,895
433,988
Why are C character literals ints instead of chars?
In C++, sizeof('a') == sizeof(char) == 1. This makes intuitive sense, since 'a' is a character literal, and sizeof(char) == 1 as defined by the standard. In C however, sizeof('a') == sizeof(int). That is, it appears that C character literals are actually integers. Does anyone know why? I can find plenty of mentions of this C quirk but no explanation for why it exists.
discussion on same subject "More specifically the integral promotions. In K&R C it was virtually (?) impossible to use a character value without it being promoted to int first, so making character constant int in the first place eliminated that step. There were and still are multi character constants such as 'abcd' or however many will fit in an int."
433,965
433,980
Protected derived class
#include <iostream> using namespace std; class Base { public: Base(){cout <<"Base"<<endl;} virtual ~Base(){cout<<"~Base"<<endl;} virtual void foo(){ cout<<"foo base"<<endl;} }; class Derived: private Base { public: Derived(){cout<<"Derived"<<endl;} virtual ~Derived(){cout<<"~Derived"<<endl;} virtual void foo(){ cout<<"foo dervied"<<endl;} }; int main(int argc, char *argv[]) { Base *pb = new Derived; Derived d; d.foo(); return 0; } when I execute the above sample program I get following error: protected.cpp: In function ‘int main(int, char**)’: protected.cpp:26: error: ‘Base’ is an inaccessible base of ‘Derived’ Why its not possible to create Derived object with base pointer???? So I can create an instanse of Derived class like Derived d Derived d1= new Derived; But creating instance from Base class pointer like Base * b = new derived will fail. This is because Derived is not actaully a derived class from Base when derived procted and privately?? Is this correct?????
Why is it not possible to create a Derived object through a Base pointer? Because the base is private. This explicitly forbids treating your class as a Base instance from the outside. Seen from the outside, your class Derived is not a subclass of Base; it only is from inside the class itself. The same goes for protected inheritance, with the only difference being that the base is then accessible not just to the class itself but also to any further derived class. To the outside, though, it behaves just like private inheritance.
434,099
434,173
Mac OSX - Xcode/Leaks problem
I'm new to development using Xcode, and am having trouble using the built-in Leaks Instrument. I have enabled guard malloc and put MallocStackLogging YES & MallocStackLoggingNoCompact YES in the environment variables for the executable. Then I run the process by clicking Run->Start with performance tool->Leaks. But only object allocations show in the UI. In the leaks frame all it says is "Analyzing process" (I have left it for over 20mins like this with no change) I deliberately created a leak in main() void * leak = malloc(100); leak=NULL; but no change. If I run from the console ~$ leaks ProcessName It correctly reports Process 1014: 223344 nodes malloced for 9017 KB Process 1014: 1 leak for 112 total leaked bytes. Leak: 0x100100020 size=112 string '' Call stack: [thread 0x7fff70767720]: | start | main | malloc | malloc_zone_malloc but does anyone know why the UI won't show the same thing? Thanks
The problem seems to be compiling the app as 64 bit. A simple test Xcode project #include <iostream> void leaks(){ char * newleak = (char* ) malloc(2000); return; } int main (int argc, char * const argv[]) { void * leak = malloc(100); leak = NULL; while(1){ leaks(); sleep(10); } } Leaks will display correctly if architecture is i386 however if i compile as x86_64 (64 bit math required) it will hang on "Analyzing process". Is there something i'm doing wrong here? I'm using a Intel Core 2 Duo macbook pro
434,140
434,153
Array of structs and new / delete
I have a struct like this: class Items { private: struct item { unsigned int a, b, c; }; item* items[MAX_ITEMS]; } Say I wanted to 'delete' an item, like so: items[5] = NULL; And I created a new item on that same spot later: items[5] = new item; Would I still need to call delete[] to clean this up? Or won't this be needed since bounds of array items[] are known before compiling? Is setting that pointer to NULL valid or should I be calling delete there?
You need to call delete before setting it to NULL. (Setting it to NULL isn't required, it just helps reduce bugs if you accidentally try to dereference the pointer after deleting it.) Remember that every time you use new, you will need to use delete later on the same pointer. Never use one without the other. Also, new [] and delete [] go together in the same way, but you should never mix new [] with delete or new with delete []. In your example, since you created the object with new (rather than new [] which would create an array of objects) you must delete the object with delete (rather than delete []).
434,522
434,529
Anyone knows how to fix compile error: LNK2005? (Source Code inside)
I have the below code in stdafx.h. using namespace std; typedef struct { DWORD address; DWORD size; char file[64]; DWORD line; } ALLOC_INFO; typedef list<ALLOC_INFO*> AllocList; //AllocList *allocList; Without the commented code (last line), it compiles just fine. But when I add the commented code, Im getting the following error. error LNK2005: "class std::list > * allocList" (?allocList@@3PAV?$list@PAUALLOC_INFO@@V?$allocator@PAUALLOC_INFO@@@std@@@std@@A) already defined in test.obj Im using Visual Studio .NET 2003. Anyone has any idea what that is and how to solve it?
Don't put definitions in header files, just declarations. Declarations specify that something exists while definitions actually define them (by allocating space). For example typedef, extern and function prototypes are all declarations, while things like struct, int and function bodies are definitions. What's happening is that you're most likely including stdafx.h in multiple compilation units (C++ source files) and each of the resulting object files is getting its own copy of allocList. Then when you link the objects together, there's two (or more) things called allocList, hence the link error. You would be better off declaring the variable: extern AllocList *allocList; in your header file and defining it somewhere in a C++ source file (such as a main.cpp): AllocList *allocList; That way, every compilation unit that includes stdafx.h will know about the external variable, but it's only defined in one compilation unit. Based on your further information: I was trying to follow http://www.flipcode.com/archives/How_To_Find_Memory_Leaks.shtml, I assume that all those code are meant to be placed in the stdafx.h. Any other alternatives, pax? My response is as follows. I wouldn't put them in stdafx.h myself since I think that uses some MS magic for pre-compiled headers. Make a separate header file mymemory.h and put your function prototypes in it, for example (note that this has no body): inline void * __cdecl operator new( unsigned int size, const char *file, int line); Also in that header, put the other prototypes for AddTrack(), DumpUnfreed(), etc., and the #define, typedef and extern statements: extern AllocList *allocList; Then, in a new file mymemory.cpp (which also contains #include "mymemory.h"), put the actual definition of allocList along with all the real functions (not just the prototypes) and add that file to your project. Then, #include "mymemory.h" in every source file in which you need to track memory (probably all of them). Because there are no definitions in the header file, you won't get duplicates during the link and because the declarations are there, you won't get undefined references either. Keep in mind that this won't track memory leaks in code that you don't compile (e.g., third-party libraries) but it should let you know about your own problems.
434,592
434,644
Create user exception derived from std::exception?
How is a user exception class created from a standard exception? It should address the cases below. Say I have a class with an enum that indicates the type of object, so that only some member functions are available depending on the type. Calling a member function that is not available should throw an exception. Similarly, when a getter of an uninitialized object is called, an exception should also be thrown (I am using a default argument to check for the uninitialized object).
You should probably derive from std::runtime_error or one of the other standard exceptions (see <stdexcept>) rather than directly from std::exception. It defines the what() method correctly in addition to describing what is exactly happening. Also it is usually better to define a different exception for each problem rather than using an enum. This allows the try catch mechanism to catch the appropriate problems but this all depends on your exact situation and without further info it is hard to know. class myException: public std::runtime_error { public: myException(std::string const& msg): std::runtime_error(msg) {} }; But: so based on type, member functions are available.Calling member function that is not available should throw an exception. This smells funny (As in a Martin Fowler Smell (Check out his book)).
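Using it then looks like the usual throw/catch pattern; the class and member names below are made up just to illustrate the cases from the question:

int Foo::getWidth() const
{
    if (!initialized_)                     // hypothetical "uninitialized" flag
        throw myException("getWidth() called on an uninitialized Foo");
    return width_;
}

try {
    int w = foo.getWidth();
}
catch (const std::runtime_error& e) {      // catches myException as well
    std::cerr << e.what() << std::endl;
}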
434,702
438,052
Normal main to WinCE main
I'm porting an existing (mostly) cross-platform application to WinCE 4.2. The current entry point for the function is int main(int argc, char *argv[]){} I would like to keep this part as-is, and have the WinCE entry point simply call it. I believe something like the following should work: int WINAPI WinMain( HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow) { int argc = _tcslen(lpCmdLine); char *argv = new char[argc]; wcstombs(argv,lpCmdLine,1024); argc = main(argc,&argv); delete [] argv; return argc; } It compiles, and should run once I figure out why the linker is throwing up, but is it right?
Thank you both for your helpful answers. I wrote the following, which works as well as we need it for now. Only our code will be calling this executable, and never with quotes, although that wouldn't be too hard to add. Also, it might not do well if there is more than one space between arguments, but again, we don't have to worry about other people using this program, it's just for academic purposes. If you think improvements are necessary, edit this post, and justify it in your comment. int WINAPI WinMain( HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow ) { assert(lpCmdLine != NULL); int argc = 1, ret = 0; std::vector<char*> args; // Copy the entire array to a regular cstr int cmdLineLen = _tcslen(lpCmdLine); char *argv = new char[cmdLineLen]; wcstombs(argv,lpCmdLine,cmdLineLen); args.push_back(&argv[0]); // Replace spaces with nulls to effectively create array of cstr for(int i=0; i<cmdLineLen; i++){ if(argv[i] == ' '){ argv[i] = '\0'; args.push_back(&argv[i+1]); // Keep track of the first char in each word argc++; } } // argv[argc] should be NULL. args.push_back(NULL); try{ // Run the program ret = main(argc,&args[0]); } catch(...){ // TODO: Report error here. Commented code works OK for WinCE .NET // delete argv; // throw; ret = -1; } delete argv; return ret; } Also, for those interested, running this at the command line >myprogam.exe -a shortargument -b -c will put the following in lpCmdLine "-a shortargument -b -c" Also, my first guess was that argv needed to be delete[]'d (because I new char[]'d), but when I did this, the program had a fatal error. When I switched to the above, it worked. Doesn't that violate the new->delete/new[]->delete[] rule?
435,147
435,190
What is the best way to store UTF-8 strings in memory in C/C++?
Looking at the unicode standard, they recommend to use plain chars for storing UTF-8 encoded strings. Does this work as expected with C++ and the basic std::string, or do cases exist in which the UTF-8 encoding can create problems? For example, when computing the length, it may not be identical to the number of bytes - how is this supposed to be handled? Reading the standard, I'm probably fine using a char array for storage, but I'll still need to write functions like strlen etc. on my own, which work on encoded text, cause as far as I understand the problem, the standard routines are either ASCII only, or expect wide literals (16bit or more), which are not recommended by the unicode standard. So far, the best source I found about the encoding stuff is a post on Joel's on Software, but it does not explain what we poor C++ developer should use :)
There's a library called "UTF8-CPP", which lets you store your UTF-8 strings in standard std::string objects, and provides additional functions to enumerate and manipulate utf-8 characters. I haven't tested it yet, so I don't know what it's worth, but I am considering using it myself.
435,322
435,334
Good or Bad C++ Idiom - Objects used purely for constructor/destructor?
I have a few classes which do nothing except in their constructors/destructors. Here's an example class BusyCursor { private: Cursor oldCursor_; public: BusyCursor() { oldCursor_ = CurrentCursor(); SetCursor(BUSY_CURSOR); } ~BusyCursor() { SetCursor(oldCursor_); } } // example of use void DoSlowThing { BusyCursor busy; ... do something time-consuming ... } I'm a little concerned about future readability. Am I being too "tricksy" here, with a variable ("busy") which is never actually used in the code? Could some static analysis tool suggest they be removed, or is this idiom sufficiently common not to worry about?
This technique is very common and is known as the design pattern: Resource Acquisition Is Initialization (RAII). I would not hesitate to use this design pattern at all. It's much better that you are coding using this design pattern because you will avoid bugs caused by forgetting to reset the cursor, or whatever the resource in question is. If you are concerned that other programmers might not understand it, then those programmers should be more educated. Always strive to code in the most error-free way, where you make it impossible for you and others to shoot yourself/themselves in the foot. "Could some static analysis tool suggest they be removed?" No static analysis tool will see this as a problem. No compiler warning will be given. No compiler optimization will cause any problems. The reason is that the object is created and the constructor/destructor are called. So it is not an unreferenced variable.
435,429
483,736
Browser agnostic C++ DOM interface
When programming in C++ against the browser's DOM each engine has a different set of interfaces, IE has the COM based [MSHTML](http://msdn.microsoft.com/en-us/library/aa752279(VS.85).aspx), Mozilla has the XPCOM based Gecko DOM etc. Is there a common API that has adapters for major browsers (and versions)? As a clarification, the application in question is a desktop application written in C++ which interacts with browsers, currently we have separate code bases for support of IE and Mozilla and I'm trying to reduce duplications of logic and allow adding new browsers with less effort. A concrete example can be getting the innerHTML property of an HTML element. // Firefox nsAutoString html; nsCOMPtr<nsIDOMNSHTMLElement> elem = do_QueryInterface(obj); if (elem) elem->GetInnerHTML(html); // IE CComBSTR html; MSHTML::IHTMLElementPtr elem = obj; if (elem) elem->get_innerHTML(&html);
Your best bet seems to be to define your own interface and write different adapters. In the likeliness that your C++ code won't be employing all DOM traversing capabilities, your interface will only have to define a compact set of traversal functions. As a bonus of defining your own interface, you can easily write a mock adapter for your unit testing.
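A minimal sketch of what that could look like for the innerHTML example from the question - the interface and class names here are hypothetical, and the IE adapter assumes the same MSHTML import and ATL headers as the snippet above:

#include <string>

class IBrowserElement
{
public:
    virtual ~IBrowserElement() {}
    virtual std::wstring GetInnerHtml() const = 0;
};

// One adapter per engine wraps the engine-specific calls shown in the question.
class IeElement : public IBrowserElement
{
public:
    explicit IeElement(MSHTML::IHTMLElementPtr elem) : elem_(elem) {}
    virtual std::wstring GetInnerHtml() const
    {
        CComBSTR html;
        if (elem_) elem_->get_innerHTML(&html);
        return html.m_str ? std::wstring(html.m_str) : std::wstring();
    }
private:
    MSHTML::IHTMLElementPtr elem_;
};

A GeckoElement adapter would wrap nsIDOMNSHTMLElement the same way, and the rest of the application only ever sees IBrowserElement, which is also the natural seam for mock adapters in unit tests.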
435,627
439,278
How to correctly benchmark a [templated] C++ program
< background> I'm at a point where I really need to optimize C++ code. I'm writing a library for molecular simulations and I need to add a new feature. I already tried to add this feature in the past, but I then used virtual functions called in nested loops. I had bad feelings about that and the first implementation proved that this was a bad idea. However this was OK for testing the concept. < /background> Now I need this feature to be as fast as possible (well, without assembly code or GPU calculation; this still has to be C++ and remain reasonably readable). Now I know a little bit more about templates and policy classes (from Alexandrescu's excellent book) and I think that compile-time code generation may be the solution. However I need to test the design before doing the huge work of implementing it into the library. The question is about the best way to test the efficiency of this new feature. Obviously I need to turn optimizations on, because without this g++ (and probably other compilers as well) would keep some unnecessary operations in the object code. I also need to make heavy use of the new feature in the benchmark, because a delta of 1e-3 seconds can make the difference between a good and a bad design (this feature will be called millions of times in the real program). The problem is that g++ is sometimes "too smart" while optimizing and can remove a whole loop if it considers that the result of a calculation is never used. I've already seen that once when looking at the output assembly code. If I add some printing to stdout, the compiler will then be forced to do the calculation in the loop, but I will probably mostly benchmark the iostream implementation. So how can I do a correct benchmark of a little feature extracted from a library? Related question: is it a correct approach to do this kind of in vitro testing on a small unit, or do I need the whole context? Thanks for the advice! There seem to be several strategies, from compiler-specific options allowing fine tuning to more general solutions that should work with every compiler, like volatile or extern. I think I will try all of these. Thanks a lot for all your answers!
If you want to force any compiler to not discard a result, have it write the result to a volatile object. That operation cannot be optimized out, by definition. template<typename T> void sink(T const& t) { volatile T sinkhole = t; } No iostream overhead, just a copy that has to remain in the generated code. Now, if you're collecting results from a lot of operations, it's best not to discard them one by one. These copies can still add some overhead. Instead, somehow collect all results in a single non-volatile object (so all individual results are needed) and then assign that result object to a volatile. E.g. if your individual operations all produce strings, you can force evaluation by adding all char values together modulo 1<<32. This adds hardly any overhead; the strings will likely be in cache. The result of the addition will subsequently be assigned to a volatile, so each char in each string must in fact be calculated, no shortcuts allowed.
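To make the usage concrete, a hypothetical benchmark loop around the feature under test (compute_feature stands in for your templated code) could look like this:

// Fold every individual result into one accumulator, then sink only the total,
// so the compiler cannot drop any iteration but the measurement overhead stays tiny.
unsigned long acc = 0;
for (int i = 0; i < 1000000; ++i)
{
    acc += compute_feature(i);   // the code you actually want to measure
}
sink(acc); // a single volatile store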
436,013
436,029
Is there any way to check if an iterator is valid?
With two threads manipulating a container (a map, for example), what is the correct way to test whether an iterator is still valid (for performance reasons)? Or can this only be done in some indirect way? The sample code for this: #define _SECURE_SCL 1 //http://msdn2.microsoft.com/en-us/library/aa985973.aspx #define _SECURE_SCL_THROWS 1 #include <map> #include <string> #include <exception> #include <iostream> using namespace std; int main(void) { map<string, string> map_test; map<string, string>::iterator iter_map_test; map_test [ "AAAAA" ] = "11111"; map_test [ "BBBBB" ] = "22222"; map_test [ "CCCCC" ] = "33333"; iter_map_test = map_test.find ("BBBBB"); map_test.erase ("BBBBB"); try { string value = (*iter_map_test).second; } catch ( exception & e ) { cout << e.what() << endl; } catch ( ... ) { cout << "generic exception." << endl; } }
std::maps are not at all thread-safe. You'll end up with much worse problems than invalidated iterators, if you have more than one thread at a time modifying the same map. I don't even think you have a guarantee that you can read anything out of a map while it's being modified by another thread. Some pages on the STL and threading: MSDN SGI GCC
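If both threads really must touch the same map, every access (including plain lookups) has to be serialized externally, for example with Boost.Threads - a rough sketch, with the caveat that you must not keep using an iterator after the lock has been released:

#include <map>
#include <string>
#include <boost/thread/mutex.hpp>

std::map<std::string, std::string> shared_map;
boost::mutex map_mutex;

void erase_key(const std::string& key)
{
    boost::mutex::scoped_lock lock(map_mutex);
    shared_map.erase(key);
}

std::string find_value(const std::string& key)
{
    boost::mutex::scoped_lock lock(map_mutex);
    std::map<std::string, std::string>::const_iterator it = shared_map.find(key);
    return it != shared_map.end() ? it->second : std::string();
}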
436,265
436,284
Emptying a C++ object
Often I add an Empty method to my C++ objects to clear the internal state using code similar to the following. class Foo { private: int n_; std::string str_; public: Foo() : n_(1234), str_("Hello, world!") { } void Empty() { *this = Foo(); } }; This seems to be better than duplicating code in the constructor, but I wondered if *this = Foo() is a common approach when wanting to clear an object? Are there any problems with this waiting to bite me on the backside? Are there any other better ways to achieve this sort of thing?
I'd let the constructor call my function instead: class Foo { private: int n_; std::string str_; public: Foo() { Reset(); } void Reset() { n_ = 1234; str_ = "Hello, world!"; } }; Yes, you're unnecessarily initializing the string as an empty string first, then doing an assignment, but this is much clearer.
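An alternative worth mentioning: the questioner's *this = Foo(); assignment, or a swap with a freshly constructed temporary, avoids initializing the members twice. A sketch of the swap variant (assuming <algorithm> is included for std::swap):

void Foo::Reset()
{
    Foo fresh;             // default-constructed state
    std::swap(n_, fresh.n_);
    str_.swap(fresh.str_); // std::string has a cheap member swap
}

Which form reads better is largely a matter of taste; the extra default construction in the version above is rarely a measurable cost.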
436,367
436,549
Best way to safely printf to a string?
Does anyone know a good safe way to redirect the output of a printf-style function to a string? The obvious ways result in buffer overflows. Something like: string s; output.beginRedirect( s ); // redirect output to s ... output.print( "%s%d", foo, bar ); output.endRedirect(); I think the problem is the same as asking, "how many characters will print produce?" Ideas?
This StackOverflow question has a similar discussion. Also in that question I present my favorite solution, a "format" function that takes identical arguments to printf and returns a std::string.
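That linked helper is essentially a wrapper around vsnprintf. A sketch of the idea - it assumes a C99-style vsnprintf that reports the required length; older MSVC runtimes signal truncation differently, which is why the fallback simply doubles the buffer:

#include <cstdarg>
#include <cstdio>
#include <string>
#include <vector>

std::string format(const char* fmt, ...)
{
    std::vector<char> buf(256);
    for (;;)
    {
        va_list args;
        va_start(args, fmt);
        int needed = vsnprintf(&buf[0], buf.size(), fmt, args);
        va_end(args);
        if (needed >= 0 && needed < static_cast<int>(buf.size()))
            return std::string(&buf[0], needed);
        // Grow: C99 tells us the exact size, otherwise just double and retry.
        buf.resize(needed >= 0 ? needed + 1 : buf.size() * 2);
    }
}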
436,513
436,561
char!=(signed char), char!=(unsigned char)
The code below compiles, but has different behavior for the char type than for the int types. In particular cout << getIsTrue< isX<int8>::ikIsX >() << endl; cout << getIsTrue< isX<uint8>::ikIsX >() << endl; cout << getIsTrue< isX<char>::ikIsX >() << endl; result in 3 instantiations of templates for three types: int8, uint8 and char. What gives? The same is not true for ints: int and uint32 which result in the same template instantiation, and signed int another. The reason seems to be that C++ sees char, signed char and unsigned char as three different types. Whereas int is the same as a signed int. Is this right or am I missing something? #include <iostream> using namespace std; typedef signed char int8; typedef unsigned char uint8; typedef signed short int16; typedef unsigned short uint16; typedef signed int int32; typedef unsigned int uint32; typedef signed long long int64; typedef unsigned long long uint64; struct TrueType {}; struct FalseType {}; template <typename T> struct isX { typedef typename T::ikIsX ikIsX; }; // This int==int32 is ambiguous //template <> struct isX<int > { typedef FalseType ikIsX; }; // Fails template <> struct isX<int32 > { typedef FalseType ikIsX; }; template <> struct isX<uint32 > { typedef FalseType ikIsX; }; // Whay isn't this ambiguous? char==int8 template <> struct isX<char > { typedef FalseType ikIsX; }; template <> struct isX<int8 > { typedef FalseType ikIsX; }; template <> struct isX<uint8 > { typedef FalseType ikIsX; }; template <typename T> bool getIsTrue(); template <> bool getIsTrue<TrueType>() { return true; } template <> bool getIsTrue<FalseType>() { return false; } int main(int, char **t ) { cout << sizeof(int8) << endl; // 1 cout << sizeof(uint8) << endl; // 1 cout << sizeof(char) << endl; // 1 cout << getIsTrue< isX<int8>::ikIsX >() << endl; cout << getIsTrue< isX<uint8>::ikIsX >() << endl; cout << getIsTrue< isX<char>::ikIsX >() << endl; cout << getIsTrue< isX<int32>::ikIsX >() << endl; cout << getIsTrue< isX<uint32>::ikIsX >() << endl; cout << getIsTrue< isX<int>::ikIsX >() << endl; } I'm using g++ 4.something
Here is your answer from the standard: 3.9.1 Fundamental types [basic.fundamental] Objects declared as characters (char) shall be large enough to store any member of the implementation's basic character set. If a character from this set is stored in a character object, the integral value of that character object is equal to the value of the single character literal form of that character. It is implementation-defined whether a char object can hold negative values. Characters can be explicitly declared unsigned or signed. Plain char, signed char, and unsigned char are three distinct types. A char, a signed char, and an unsigned char occupy the same amount of storage and have the same alignment requirements (basic.types); that is, they have the same object representation. For character types, all bits of the object representation participate in the value representation. For unsigned character types, all possible bit patterns of the value representation represent numbers. These requirements do not hold for other types. In any particular implementation, a plain char object can take on either the same values as a signed char or an unsigned char; which one is implementation-defined.
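A quick way to see the consequence of that wording is plain overload resolution - the three overloads below are all distinct and each call picks a different one, whereas overloading on both int and signed int would not even compile because they are the same type:

#include <iostream>

void which(char)          { std::cout << "plain char\n"; }
void which(signed char)   { std::cout << "signed char\n"; }
void which(unsigned char) { std::cout << "unsigned char\n"; }

int main()
{
    which('a');                             // plain char
    which(static_cast<signed char>('a'));   // signed char
    which(static_cast<unsigned char>('a')); // unsigned char
}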
436,622
436,644
Quick and dirty operator!=
In my classes I often write a quick operator!= by returning !(*this == rhs), e.g.: class Foo { private: int n_; std::string str_; public: ... bool operator==(const Foo& rhs) const { return n_ == rhs.n_ && str_ == rhs.str_; } bool operator!=(const Foo& rhs) const { return !(*this == rhs); } }; I can't see any obvious problems with doing this but thought I'd ask if anyone knows of any.
I believe that's the preferred method of implementing operator!= so that you don't repeat yourself, and you have a guaranteed correct relationship with operator==.
436,671
438,852
How can I guarantee catching a EXCEPTION_STACK_OVERFLOW structured exception in C++ under Visual Studio 2005?
Background I have an application with a Poof-Crash[1]. I'm fairly certain it is due to a blown stack. The application is Multi-Threaded. I am compiling with "Enable C++ Exceptions: Yes With SEH Exceptions (/EHa)". I have written an SE Translator function and called _set_se_translator() with it. I have written functions for and setup set_terminate() and set_unexpected(). To get the Stack Overflow, I must run in release mode, under heavy load, for several days. Running under a debugger is not an option as the application can't perform fast enough to achieve the runtime necessary to see the issue. I can simulate the issue by adding infinite recursion on execution of one of the functions, and thus test the catching of the EXCEPTION_STACK_OVERFLOW exception. I have WinDBG setup as the crash dump program, and get good information for all other crash issues but not this one. The crash dump will only contain one thread, which is 'Sleep()'ing. All other threads have exited. The Question None of the things I've tried has resulted in picking up the EXCEPTION_STACK_OVERFLOW exception. Does anyone know how to guarantee getting a a chance at this exception during runtime in release mode? Definitions Poof-Crash: The application crashes by going "poof" and disappearing without a trace. (Considering the name of this site, I'm kind of surprised this question isn't on here already!) Notes An answer was posted briefly about adjusting the stack size to potentially force the issue sooner and allow catching it with a debugger. That is a clever thought, but unfortunately, I don't believe it would help. The issue is likely caused by a corner case leading to infinite recursion. Shortening the stack would not expose the issue any sooner and would likely cause an unrelated crash in validly deep code. Nice idea though, and thanks for posting it, even if you did remove it.
Prior to Windows XP it was generally not possible (or at least much harder) to trap stack overflows. Starting with XP, you can register a vectored exception handler, which gets a chance at the stack overflow before any stack-based (structured exception) handlers - which is the very reason: structured exception handlers live on the stack. But there's really not much you can do even if you're able to trap such an exception. In his blog, cbrumme (sorry, do not have his/her real name) discusses the stack page neighboring the guard page (the one that generates the stack overflow) that can potentially be used for backout. If you can squeeze your backout code into just one stack page, you can free as much as your logic allows. Otherwise, the application is pretty much dead upon encountering a stack overflow. The only other reasonable thing to do, having trapped it, is to write a dump file for later debugging. Hope it helps.
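For completeness, a sketch of registering such a handler - the function name is made up, and the key constraints are to do almost nothing inside the handler (there is barely any stack left) and to return EXCEPTION_CONTINUE_SEARCH so normal processing continues:

#include <windows.h>

LONG CALLBACK StackOverflowLogger(PEXCEPTION_POINTERS info)
{
    if (info->ExceptionRecord->ExceptionCode == EXCEPTION_STACK_OVERFLOW)
    {
        // Keep this path as shallow as possible; consider writing a minidump here.
        OutputDebugStringA("stack overflow detected\n");
    }
    return EXCEPTION_CONTINUE_SEARCH;
}

// Somewhere during start-up:
// AddVectoredExceptionHandler(1 /* call before other handlers */, StackOverflowLogger);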
436,804
436,895
Lifetime management of encapsulated objects
What is the best approach to encapsulate objects and manage their lifetime? Example: I have a class A, that contains an object of type B and is solely responsible for it. Solution 1, clone b object to ensure that only A is able to clean it up. class A { B *b; public: A(B &b) { this->b = b.clone(); } ~A() { delete b; // safe } }; Solution 2, directly use the passed object, we risk a potential double free here. class A { B *b; public: A(B *b) { this->b = b; } ~A() { delete b; // unsafe } }; In my actual case, solution #2 would fit best. However I wonder if this is considered bad code because someone might not know about the behavior of A, even if it's documented. I can think of these scenarios: B *myB = new B(); A *myA = new A(myB); delete myB; // myA contains a wild pointer now Or, B *myB = new B(); A *firstA = new A(myB); A *secondA = new A(myB); // bug! double assignment delete firstA; // deletes myB, secondA contains a wild pointer now delete secondA; // deletes myB again, double free Can I just ignore these issues if I properly document the behavior of A? Is it enough to declare the responsibility and leave it up to the others to read the docs? How is this managed in your codebase?
You should define your object so that the ownership semantics are, as much as possible, defined by the interface. As David Thornley pointed out, std::auto_ptr is the smart pointer of choice to indicate transfer of ownership. Define your class like so: class A { std::auto_ptr<B> b; public: A(std::auto_ptr<B> b) { this->b = b; } // Don't need to define this for this scenario //~A() //{ // delete b; // safe //} }; Since the contract of std::auto_ptr is that assignment = transfer of ownership, your constructor now states implicitly that an A object has ownership of the pointer to B it's passed. In fact, if a client tries to do something with a std::auto_ptr<B> that they used to construct an A after the construction, the operation will fail, as the pointer they hold will be invalid.
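Client code built on the class above then reads like this, and the two buggy scenarios from the question become hard to write by accident:

std::auto_ptr<B> myB(new B());
A myA(myB);   // ownership transfers into myA; myB now holds a null pointer
// There is no delete left for the caller to forget, and constructing a second A
// from myB would hand it a null pointer rather than silently sharing ownership.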
437,150
437,178
Can someone explain this template code that gives me the size of an array?
template<typename T, size_t n> size_t array_size(const T (&)[n]) { return n; } The part that I don't get is the parameters for this template function. What happens with the array when I pass it through there that gives n as the number of elements in the array?
Well, first you have to understand that trying to get a value out of an array can give you a pointer to its first element: int a[] = {1, 2, 3}; int *ap = a; // a pointer, size is lost int (&ar)[3] = a; // a reference to the array, size is not lost References refer to objects using their exact type or their base-class type. The key is that the template takes arrays by reference. Arrays (not references to them) as parameters do not exist in C++. If you give a parameter an array type, it will be a pointer instead. So using a reference is necessary when we want to know the size of the passed array. The size and the element type are automatically deduced, as is generally the case for function templates. The following template template<typename T, size_t n> size_t array_size(const T (&)[n]) { return n; } Called with our previously defined array a will implicitly instantiate the following function: size_t array_size(const int (&)[3]) { return 3; } Which can be used like this: size_t size_of_a = array_size(a); There's a variation I made up some time ago [Edit: turns out someone already had that same idea here] which can determine a value at compile time. Instead of returning the value directly, it gives the template a return type depending on n: template<typename T, size_t n> char (& array_size(const T (&)[n]) )[n]; You say if the array has n elements, the return type is a reference to an array having size n and element type char. Now, you can get a compile-time determined size of the passed array: size_t size_of_a = sizeof(array_size(a)); Because an array of char having n elements has sizeof n, that will give you the number of elements in the given array too. At compile time, so you can do int havingSameSize[sizeof(array_size(a))]; Because the function never is actually called, it doesn't need to be defined, so it doesn't have a body. Hope I could clear the matter up a little bit.
437,279
437,295
Is it safe to use getenv() in static initializers, that is, before main()?
I looked in Stevens, and in the Posix Programmer's Guide, and the best I can find is An array of strings called the enviroment is made available when the process begins. This array is pointed to by the external variable environ, which is defined as: extern char **environ; It's that environ variable that has me hesitating. I want to say -The calling process/shell has already allocated the block of null terminated strings -the 'external' variable environ is used as the entry point by getenv(). -ipso facto feel free to call getenv() within a static initializer. But I can't find any guarantee that the 'static initialization' of environ precedes all the other static initialization code. Am I overthinking this? Update On my platform (AMD Opteron, Redhat 4, GCC 3.2.3), setting LD_DEBUG shows that environ gets set before my static initializers are called. This is a nice thing to know; thanks, @codelogic. But it is not necessarily the result I'd get on all platforms. Also, while I agree intuitively with @ChrisW on the behavior of the C/C++ runtime library, this is just my intuition based on experience. So anyone who can pipe up with a quote from someplace authoritative guaranteeing that environ is there before static initializers are called, bonus points!
I think you can run your program with LD_DEBUG set to see the exact order: LD_DEBUG=all <myprogram> EDIT: If you look at the source code of the runtime linker (glibc 2.7), specifically in files: sysdeps/unix/sysv/linux/init-first.c sysdeps/i386/init-first.c csu/libc-start.c sysdeps/i386/elf/start.S you will see that argc, argv and environ (alias for __environ) are set before any global constructors are called (the init functions). You can follow the execution starting right from _start, the actual entry point (start.S). As you've quoted Stevens "An array of strings called the enviroment is made available when the process begins", suggesting that environment assignment happens at the very beginning of the process initialization. This backed by the linker code, which does the same, should give you sufficient peace of mind :-) EDIT 2: Also worth mentioning is that environ is set early enough that even the runtime linker can query it to determine whether or not to output verbosely (LD_DEBUG).
437,416
437,452
C++ Builder or Visual Studio for native C++ development?
I've decided I want to get more into native code development with C++. I'm trying to decide if I would be better served using CodeGear C++ Builder 2009 or Visual Studio 2008. I currently use Delphi 2007, so I'm very comfortable with C++ Builder's IDE (its the same as Delphi), as well as the VCL and RTL. I've never been a big fan of MFC (from the first time I played around with it in the VS 6.0 days), but haven't taken a close look at it since then. I'm interested in hearing from some experts that have experience with both IDE's, whether they are the most recent versions or not. Right now, I'm leaning towards C++ Builder because I believe the VCL is much more robust and easier to work with than MFC --- but as I said, it's been a while since I've used MFC. I'm not interested in building programs that rely on the .NET Framework because I'm partly teaching myself native development. Is MFC still king for Windows C++? Or is WTL or ATL the big thing? Any C++ gurus out there want to share their opinions? EDIT: I understand MFC is not the only gui toolkit for Visual Studio. However, I'm looking for some recommendations based on GUI toolkit + IDE. For C++ Builder, there is only 1 real option, which is C++ Builder + the VCL. For VS 2008, it's VS + MFC/ATL/WTL/QT....confusing for me since I don't know much about them.
Coming from Delphi, you'll find the VCL straightforward to use with C++ Builder. There are a few oddities, like C++ doesn't hide the fact that TObjects are all really pointers (which Delphi hides from you), and some things like array properties are accessed differently. Two or three years back, I was looking for any way out of C++Builder, but now, with recent releases (and Embarcadero's purchase of Codegear), I'm happy with the product and the direction. You'll find the number of string types and the assorted potential incompatibilities quite painful with C++Builder, but you'll get used to it! (std::string, char[], wchar_t[], TCHAR, AnsiString, WideString, UnicodeString and String to name a few) Personally I'd vote for C++ Builder - because of two-way RAD and the VCL, although it may not be the best way of learning modern C++ idioms.
437,432
443,714
Is there a way to find all the functions exposed by a dll
I've been searching for a way to get all the strings that map to function names in a dll. By this I mean all the strings for which you can call GetProcAddress. If you do a hex dump of a dll the symbols (strings) are there, but I figure there must be a system call to acquire those names.
It takes a bit of work, but you can do this programmatically using the DbgHelp library from Microsoft. Debugging Applications for Microsoft .NET and Microsoft Windows, by John Robbins, is an excellent (if a little older) book which contains usage details and full source. And you can pick it up on Amazon on the cheap!
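If you only need the list once, dumpbin /exports yourdll.dll (or a tool like Dependency Walker) already prints it. Programmatically, one route that goes through DbgHelp is to map the DLL and walk its export directory - a sketch with minimal error handling:

#include <windows.h>
#include <dbghelp.h>
#include <iostream>
#pragma comment(lib, "dbghelp.lib")

int main(int argc, char* argv[])
{
    if (argc < 2) return 1;
    HMODULE mod = LoadLibraryExA(argv[1], NULL, DONT_RESOLVE_DLL_REFERENCES);
    if (!mod) return 1;

    ULONG size = 0;
    PIMAGE_EXPORT_DIRECTORY exports = (PIMAGE_EXPORT_DIRECTORY)
        ImageDirectoryEntryToData(mod, TRUE, IMAGE_DIRECTORY_ENTRY_EXPORT, &size);
    if (exports)
    {
        // AddressOfNames is an array of RVAs to the exported-by-name strings.
        DWORD* names = (DWORD*)((BYTE*)mod + exports->AddressOfNames);
        for (DWORD i = 0; i < exports->NumberOfNames; ++i)
            std::cout << (const char*)((BYTE*)mod + names[i]) << "\n";
    }
    FreeLibrary(mod);
    return 0;
}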
437,685
437,718
Reduce windows executable size
I have a C++/MFC app on Windows - dynamically linked it's only 60 KB, statically linked it's > 3 MB. It is being distributed to customers by email and so needs to be as small as possible. It statically links the MFC and MSVCRT libraries - because it is a fix for some problems and I don't want more support calls about missing libs - especially the very helpful 'application configuration is incorrect' one! Is there any way to strip a Windows exe of all the extra MFC stuff that is in the DLL but that I'm not using? Or tell it to dynamically link MSVCRT (which must be present) but statically link the MFC DLL which might not be? Edit - thanks, that got it down to 1.6 MB, which will have to do!
You can't mix the CRT/MFC DLLs. Going from memory... As the other answer suggested, you can #define WIN32_LEAN_AND_MEAN and VC_EXTRALEAN. These probably won't help though; they tend to be about minimizing build time - not the final exe size. Short of rebuilding MFC (which is an option - you could rebuild it /Os, or if you are feeling really cheeky, /GL - but this will probably lead to more downstream fun - hopefully it's already built /Gy)... OK, simple things to try: build your app with /GL /Os /GF /Gy /GA. In the linker you more or less want /OPT:REF and /OPT:ICF and /LTCG. I have to say - a release build going from tens of kilobytes to megabytes is a bit much. You could also pass /map: to the linker and see what's taking all that space - but that's very, very tedious. It almost sounds like MFC wasn't built with /Gy - but that would be surprising.
438,003
438,083
Environment overrides for Linux linker/loader
Earlier today I asked a question about environ, and one of the more interesting replies suggested that I could gather information using LD_DEBUG. Now, I've known about some linker/loader environment variables (such as LD_PRELOAD) for a while, but this one was new to me. Googling, I found a Linux-specific man page discussing environment variables used by the linker/loader, including LD_DEBUG. So theoretically, I'm set. But in practice, I'm sure I'm missing a whole bundle of tricks relating to this fundamental tool in the toolchain. Put simply, what else am I missing?
My favorite is using LD_PRELOAD to work around bugs or misfeatures in GNU libc; for a while connect was doing strange things with IPv6 and I just wrote my own version that always, always used IPv4. Linux users can try man ld.so Also, the ldd command, which tells how dynamic libraries are resolved, deserves to be more widely known.
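A minimal version of that LD_PRELOAD trick, glibc/Linux specific and stripped down to the interposition mechanics (the real fix would inspect or rewrite addr before forwarding):

// shim.cpp - build: g++ -shared -fPIC -o shim.so shim.cpp -ldl
// run:   LD_PRELOAD=./shim.so ./your_app
#include <dlfcn.h>
#include <sys/types.h>
#include <sys/socket.h>

extern "C" int connect(int fd, const struct sockaddr* addr, socklen_t len)
{
    typedef int (*connect_fn)(int, const struct sockaddr*, socklen_t);
    static connect_fn real_connect = (connect_fn)dlsym(RTLD_NEXT, "connect");
    // Inspect or rewrite 'addr' here before handing it to the real libc call.
    return real_connect(fd, addr, len);
}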
438,012
438,057
STL __merge_without_buffer algorithm?
Where can I get a decent high-level description of the algorithm used in __merge_without_buffer() in the C++ STL? I'm trying to reimplement this code in the D programming language, with some enhancements. I can't seem to grok what it's doing at the algorithmic level from just reading the STL source code because there are too many low-level details obscuring it. Also, I don't want to just blindly translate the code because then, if it didn't work I wouldn't know why, and I wouldn't be able to add my enhancements.
__merge_without_buffer() is performing an in-place merge, as the merge step of an in-place merge sort. It takes as input two ranges of data [first, middle) and [middle, last) which are assumed to already be sorted. The len1 and len2 parameters are equal to the lengths of the two input ranges, namely (middle - first) and (last - middle) respectively. First, it picks a pivot element. Then, it rearranges the data into the order A1 B1 A2 B2, where A1 is the set of elements in [first, middle) that are less than the pivot, A2 is the set of elements in [first, middle) greater than or equal to the pivot, B1 is the set of elements in [middle, last) less than the pivot, and B2 is the set of elements in [middle, last) greater than or equal to the pivot. Note that the data is originally in the order A1 A2 B1 B2, so all we need to do is to turn A2 B1 into B1 A2. This is with a call to std::rotate(), which does just that. Now we've separated out the elements which are less than the pivot (A1 and B1) from those that are greater than or equal to the pivot (A2 and B2), so now we can recursively merge the two subranges A1 A2 and B1 B2. How do we choose a pivot? In the implementation I'm looking at, it chooses the median element from the larger subrange (i.e. if [first, middle) has more elements than [middle, last), it chooses the median of [first, middle); otherwise, it chooses the median of [middle, last)). Since the subranges are already sorted, choosing the median is trivial. This pivot choice ensures that when recursively merging the two subranges, each subproblem is no more than 3/4 the size of the current problem, because in the worst case, at least 1/4 of the elements are larger than or smaller than the pivot. What is the running time of this? The std::rotate() call takes O(N) time, and we make two recursive calls to ourselves. This equates to a running time of O(N log N). However, note that this is only one step in mergesort: remember that in mergesort you first recursively sort both halves and then merge. Thus, the recurrence relation for the running time of mergesort is now: T(N) = 2T(N/2) + O(N log N) Plug this into the Master theorem, and you get that mergesort now runs in O(N log2 N) time! As an interesting final point, consider the following three qualities of a comparison-based sorting algorithm: In-place Stable Runs in O(N log N) time You can usually only get 2 of these at once - quicksort gets you (1) and (3), mergesort gets you (2) and (3), and an in-place mergesort gets you (1) and (2). Non-comparison-based sorts such as count sort can achieve all 3, but those sorts can only sort certain data types. It's possible there exists a comparison-based sort which achieves all 3, but if there is, I'm not aware of its existence, and it's almost certainly much more complicated.
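A condensed sketch of that structure, assuming random-access iterators and only operator< (the actual libstdc++ code is equivalent but handles more iterator categories and small-range special cases):

#include <algorithm>

template <typename Iter>
void merge_without_buffer(Iter first, Iter middle, Iter last)
{
    if (first == middle || middle == last) return;
    if (last - first == 2)                    // two elements: just fix the order
    {
        if (*middle < *first) std::iter_swap(first, middle);
        return;
    }
    Iter cut1, cut2;
    if (middle - first >= last - middle)      // pivot = median of the larger half
    {
        cut1 = first + (middle - first) / 2;
        cut2 = std::lower_bound(middle, last, *cut1);
    }
    else
    {
        cut2 = middle + (last - middle) / 2;
        cut1 = std::upper_bound(first, middle, *cut2);
    }
    std::rotate(cut1, middle, cut2);          // A1 A2 B1 B2  ->  A1 B1 A2 B2
    Iter new_middle = cut1 + (cut2 - middle);
    merge_without_buffer(first, cut1, new_middle);  // merge the "< pivot" parts
    merge_without_buffer(new_middle, cut2, last);   // merge the ">= pivot" parts
}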
438,059
438,098
Use C++ class from C#
Alright so I have this C++ image capturing class. I was wondering if I could get some help..I know basic C++ (I have done one intro to c and one intro to c++ class) and I have NO idea how to do this: I need to use this class (ie create a new c++ project in my solution) and use c# to reference it and use it to save a screenshot of the screen. When I try to add a new project I dont know which one to choose (win32, mfc, atl, clr, abc, xyz, and so on) . The image capturing class is here: http://pastebin.com/m72414ab I don't know if I need to create .h files or just a .cpp file would do.. like honestly I have no clue where to start lol. I am looking for a solution, but i want to learn in the process so I dont have to ask next time (not to mention that I like c++ so I am gonna continue coding with it)
You cannot easily use C++ classes from C# without knowing some fairly specialized information about C++/CLI - either rewrite your C++ class in C and use P/Invoke, or find a fully C# solution. "But I'd like to use this C++ class for speed and memory." I question this: unless you are capturing images thousands of times a second, there's no way choosing C++ would be of any benefit, and it makes your program much more complicated. Stick with C# until you know you need the slight performance boost.
438,067
438,100
What's the best way to get started in modern game programming and application programming?
I want to get started game programming in C/C++, or C# with DirectX or OpenGL. I'm not really sure what I want to get started, but a smallish project like a simple engine would be nice to get started with. Additionally, I would like to learn about designing applications in Windows with C#, .NET 3.5, and WPF. Utilizing C# and .NET, I intend to create a reasonably large project that would integrate with the windows shell. However, I do not really know where to start. Moreover, I'm well versed in C/C++, JAVA, and I'm taking an advanced Computer Science class focused on Algorithm Complexity and Classic Computer Science problems. Any help on where to get started would be appreciated.
Modern game programming is tremendously complex as a whole, but it is also very fractionated in terms of division of labor, which helps reduce this complexity. Modern games are usually too complicated and large for one person to have a detailed and in-depth knowledge of the entire process. There are a lot of specialties within game programming that you might want to familiarize yourself with and decide if there is a particular aspect you really like doing: Game Physics Programmer, Artificial Intelligence Programmer, Graphics Programmer / 3D Modeling, Sound Programmer, Scripter (using in-house scripting for nearly all of the in-game events), UI Programmer, Network Programmer, Tools Programmer ... and more. Recently, for my Computer Graphics course in college we used the open source OGRE 3D engine. Not only is this an extremely robust 3D engine, but it was a blast! Develop a medium-sized game using it and you will get a good taste of many of the above-mentioned game programming specialties. You'll find yourself doing 3D modeling, sound effects, physics programming, AI, the works. Screenshot of a recent OGRE 3D game: http://www.ogre3d.org/gallery/albums/album76/snapshot20081030015244.jpg There are some caveats to the game programming world. I've often heard and read articles surrounding the industry talking about the long hours, the crunch time, the unpaid overtime and the relatively lower pay when compared to other types of programming. But honestly, if this is something you want to do then don't let those things get you down. I like to think that good pay and working conditions will come to most people who are sufficiently driven. As far as school goes, you mentioned you were taking some CS courses. Take some math courses as well! Game programming is one area where math is used extensively in several phases of development. Most of the articles I have read involving the industry suggest that someone with no professional experience looking to break into gaming should put together a portfolio of small games you have written demonstrating your abilities. It is also fairly typical for junior game programmers to start out as tools programmers, writing in-house software for the rest of the game development team - things like map-making programs, diagnostic tools, format conversion, etc. You can then work on advancing beyond this. Good luck!
438,192
438,210
A 'using' declaration with an enum
A using declaration does not seem to work with an enum type: class Sample{ public: enum Colour {RED, BLUE, GREEN}; } using Sample::Colour; does not work! Do we need to add a using declaration for every enumerators of enum type? Like below: using sample::Colour::RED;
A class does not define a namespace, and therefore "using" isn't applicable here. Also, you need to make the enum public. If you're trying to use the enum within the same class, here's an example: class Sample { public: enum Colour { RED, BLUE, GREEN }; void foo(); }; void Sample::foo() { Colour foo = RED; } And to access it from outside the class: void bar() { Sample::Colour colour = Sample::RED; }
438,227
438,797
exception hierarchy vs error enumeration
I read somewhere (can't find it now) that large exception hierarchies are a waste of time. The justification for this statement seemed sound at the time and the idea stuck with me. In my own code when I have a code base that can have a range of error conditions I use a single exception with an enumeration member to differentiate between them. If the case arises that I need to catch on one of these errors, I catch it, check the enum and rethrow if it was not what I expected. Ideally, this should be rare. I was working with exceptions again and the I had a reflective moment where I questioned my exception habits. I am curious what everyone else does and why? Hierarchy or one exception with data members. BTW, I am assuming you agree with the idea of exception vs error codes. I don't wish to open that can of worms.
Simple rule of thumb: if you end up rethrowing exceptions after examining them, then you need a more fine-grained exception hierarchy (except for the rare case where the examination entails considerable logic). If you have exception classes that are never caught (only their supertypes are), then you need a less fine-grained exception hierarchy.
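As a small illustration of the first case: with a hierarchy, the handler below catches exactly what it can deal with and never has to inspect-and-rethrow (the class and function names are made up for the example):

#include <stdexcept>
#include <string>

class parse_error : public std::runtime_error
{
public:
    explicit parse_error(const std::string& msg) : std::runtime_error(msg) {}
};

class io_error : public std::runtime_error
{
public:
    explicit io_error(const std::string& msg) : std::runtime_error(msg) {}
};

void load_config(const std::string& path); // may throw either type

void example(const std::string& path)
{
    try
    {
        load_config(path);
    }
    catch (const parse_error&)
    {
        // recoverable here: fall back to default settings
    }
    // io_error is deliberately not caught: it propagates to a caller that can handle it
}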
438,444
438,480
Passing object ownership in C++
What is the best way to indicate that an object wants to take ownership of another object? So far, I've been using a std::auto_ptr in the public interface, so the client knows that the interface wants to take ownership of the passed object. However, the latest GCC tells me auto_ptr is deprecated, so I wonder what is recommended? boost::interprocess::unique_pointer looks like a good candidate, but is this really the best solution out there?
boost::interprocess is a library for interprocess communication, so I wouldn't use it for other purposes. As discussed on this forum: http://objectmix.com/c/113487-std-auto_ptr-deprecated.html std::auto_ptr will be declared deprecated in the next version of the standard, and the recommended replacement will be std::unique_ptr, which requires rvalue references and move semantics to be implemented (that's a fairly complicated feature). Until the new standard is released, I would simply try to disable the warning if possible, or ignore it, for maximum portability. If you want to switch to the next language standard already, it is possible where rvalue references have been implemented (see http://russ.yanofsky.org/rref/), so std::unique_ptr should also be supported. One of the advantages of the new semantics is that you can also pass a temporary or any other rvalue to the move constructor; in other cases, this makes it possible to avoid copying (for instance) the objects contained inside a std::vector during reallocation before destroying the original ones.
438,515
439,876
How to track memory allocations in C++ (especially new/delete)
How can I track the memory allocations in C++, especially those done by new/delete. For an object, I can easily override the operator new, but I'm not sure how to globally override all allocations so they go through my custom new/delete. This should be not a big problem, but I'm not sure how this is supposed to be done (#define new MY_NEW?). As soon as this works, I would assume it's enough to have a map somewhere of pointer/location of the allocation, so I can keep track of all allocations which are currently 'active' and - at the end of the application - check for allocations which have not been freed. Well, this seems again like something that surely has been done several times at least, so any good library out there (preferably a portable one)?
I would recommend you to use valgrind for linux. It will catch not freed memory, among other bugs like writing to unallocated memory. Another option is mudflap, which tells you about not freed memory too. Use -fmudflap -lmudflap options with gcc, then start your program with MUDFLAP_OPTIONS=-print-leaks ./my_program. Here's some very simple code. It's not suitable for sophisticated tracking, but intended to show you how you would do it in principle, if you were to implement it yourself. Something like this (left out stuff calling the registered new_handler and other details). template<typename T> struct track_alloc : std::allocator<T> { typedef typename std::allocator<T>::pointer pointer; typedef typename std::allocator<T>::size_type size_type; template<typename U> struct rebind { typedef track_alloc<U> other; }; track_alloc() {} template<typename U> track_alloc(track_alloc<U> const& u) :std::allocator<T>(u) {} pointer allocate(size_type size, std::allocator<void>::const_pointer = 0) { void * p = std::malloc(size * sizeof(T)); if(p == 0) { throw std::bad_alloc(); } return static_cast<pointer>(p); } void deallocate(pointer p, size_type) { std::free(p); } }; typedef std::map< void*, std::size_t, std::less<void*>, track_alloc< std::pair<void* const, std::size_t> > > track_type; struct track_printer { track_type * track; track_printer(track_type * track):track(track) {} ~track_printer() { track_type::const_iterator it = track->begin(); while(it != track->end()) { std::cerr << "TRACK: leaked at " << it->first << ", " << it->second << " bytes\n"; ++it; } } }; track_type * get_map() { // don't use normal new to avoid infinite recursion. static track_type * track = new (std::malloc(sizeof *track)) track_type; static track_printer printer(track); return track; } void * operator new(std::size_t size) throw(std::bad_alloc) { // we are required to return non-null void * mem = std::malloc(size == 0 ? 1 : size); if(mem == 0) { throw std::bad_alloc(); } (*get_map())[mem] = size; return mem; } void operator delete(void * mem) throw() { if(get_map()->erase(mem) == 0) { // this indicates a serious bug std::cerr << "bug: memory at " << mem << " wasn't allocated by us\n"; } std::free(mem); } int main() { std::string *s = new std::string; // will print something like: TRACK: leaked at 0x9564008, 4 bytes } We have to use our own allocator for our map, because the standard one will use our overridden operator new, which would result in an infinite recursion. Make sure if you override operator new, you use the map to register your allocations. Deleting memory allocated by placement forms of new will use that delete operator too, so it can become tricky if some code you don't know has overloaded operator new not using your map, because operator delete will tell you that it wasn't allocated and use std::free to free the memory. Also note, as Pax pointed out for his solution too, this will only show leaks that are caused by code using our own defined operator new/delete. So if you want to use them, put their declaration in a header, and include it in all files that should be watched.
438,809
438,947
Testing framework for functional/system testing for C/C++?
For C++, there are lots of good unit test frameworks out there, but I couldn't find a good one for functional testing. By functional testing, I mean stuff which touches the disk, requires the whole application to be in place, etc. Case in point: what framework helps with testing things like whether your I/O works? I've got a hand-rolled system in place, which creates temporary folders and copies around a bunch of data, so the tests are always done in the same environment, but before I spend more time on my custom framework -- is there a good one out there already?
I wrote one from scratch three times already - twice for testing C++ apps that talked to exchanges using FIX protocol, once for a GUI app. The problem is, you need to emulate the outside world to do proper system testing. I don't mean "outside of your code" - outside of your application. This involves emulating end users, outside entities, the Internet and so on. I usually use perl to write my system testing framework and tests, mostly because it's good with accessing all sorts of OS facilities and regexps are first-class citizens. Some tips: make sure your logs are easy to parse, detailed but not too verbose. Have a sane default configuration. Make it easy to "reset" the application - you need to do it after each test. The approach I usually use is to have some sort of "adapter" that turns the application's communications with the outside world into stdin/stdout of some executable. Then I build a perl framework on top of that, and then the test cases use the framework.
439,097
439,168
Query building in a database agnostic way
In a C++ application that can use just about any relational database, what would be the best way of generating queries that can be easily extended to allow for a database engine's eccentricities? In other words, the code may need to retrieve data in a way that is not consistent among the various database engines. What's the best way to design the code on the client side to generate queries in a way that will make supporting a new database engine a relatively painless affair. For example, if I have (MFC)code that looks like this: CString query = "SELECT id FROM table" results = dbConnection->Query(query); and we decide to support some database that uses, um, "AVEC" instead of "FROM". Now whenever the user uses that database engine, this query will fail. Options so far: Worst option: have the code making the query check the database type. Better option: Create query request method on the db connection object that takes a unique query "code" and returns the appropriate query based on the database engine in use. Betterer option: Create a query builder class that allows the caller to construct queries without using any SQL directly. Once the query is completed, caller can invoke a "Generate" method which returns a query string approrpriate for the active database engine Best option: ?? Note: The database engine itself is abstracted away through some thin layers of our own creation. It's the queries themselves are the only remaining problem. Solution: I've decided to go with the "better" option (query "selector") for two reasons. Debugging: As mentioned below, debugging is going to be slightly easier with the selector approach since the queries are pre-built and listed out in a readable form in code. Flexibility: It occurred to me that there are some databases which might have vastly better and completely different ways of solving a particular query. For example, with Access I perform a complicated query on multiple tables each time because I have to, but on Sql Server I'd like to setup a view. Selecting from the view and from several tables are completely different queries (i think) and this query selector would handle it easily.
I would think that what you would want to do, if you needed the ability to support multiple databases, would be to create a data provider interface (or abstract class) and associated concrete implementations. The data provider would need to support your standard query operators and other common, supported functionality required support your query operations (have a look at IEnumerable extension methods in .NET 3.5). Each concrete provider would then translate these into specific queries based on the target database engine. Essentially, what you do is create a database abstraction layer and have your code interact with it. If you can find one of these for C++, it would probably be worth buying instead of writing. You may also want to look for Inversion of Control (IoC) containers for C++ that would basically do this and more. I know of several for Java and C#, but I'm not familiar with any for C++.
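A stripped-down sketch of that provider idea, using hypothetical names - calling code asks for a query by intent and never sees the dialect:

#include <string>

class QueryProvider
{
public:
    virtual ~QueryProvider() {}
    virtual std::string SelectColumn(const std::string& column,
                                     const std::string& table) const = 0;
};

class StandardSqlProvider : public QueryProvider
{
public:
    virtual std::string SelectColumn(const std::string& column,
                                     const std::string& table) const
    {
        return "SELECT " + column + " FROM " + table;
    }
};

// A provider for the hypothetical "AVEC" dialect would override the same method;
// the calling code from the question stays unchanged apart from asking the provider:
//   CString query = provider->SelectColumn("id", "table").c_str();
//   results = dbConnection->Query(query);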
439,219
442,993
How to make a single static library from multiple static libraries?
We recently converted a C++ project from Visual Studio 6 to Visual Studio 8. Everything went well until we reached the compilation of a project that puts all the static libraries inside one big static library. By default, after the conversion between the two project versions, the project didn't do anything (no big static library was created, it only said that the project was up-to-date). We found a working solution by adding a dummy (empty) file to the project (it had no other files before, in the Visual Studio 6 solution). Everything went well and we managed to compile the project with the new development environment (Visual Studio 8). My question is: is there any other way to do this, without adding dummy files to the project? Thanks in advance for your responses.
You can use the method described in the answer by nobugz also with multiple configurations and different directories for debug and release input libs. Just add all input libs, debug and release, and use "exclude from build". In the debug configuration exclude all release input libs from the build, in the release configuration exclude all debug input libs.
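For reference, the combining step itself is just the librarian, so if you ever need to script it outside the IDE, a command along these lines (the output and input names are placeholders) produces the same kind of merged library:

lib.exe /OUT:combined.lib first.lib second.lib third.lib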
439,288
439,306
Gently kill a process
I have a Windows service (C#) that spawns a few native child processes (C++). I'd like to gently kill those processes once in a while (gently = let the processes finalize their work before going down). I tried to use the SetConsoleCtrlHandler() routine to register the child processes for console events and to call CloseMainWindow() from the C# code to raise the "close console" events. This didn't work for me, mainly because the child processes are not console applications. Does anyone know what other methods can be used in order to fulfill this requirement?
Kernel event objects come to mind: your "manager" raises a named event. Your child processes should check the state of this event at least once in a while (or have a thread that continuously checks it).
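A sketch of the C++ side of that approach: each child polls a named manual-reset event in its work loop, while the manager (the C# service can create and signal the same event via System.Threading.EventWaitHandle) sets it when shutdown is wanted. The event name below is a placeholder both sides must agree on:

#include <windows.h>

bool ShutdownRequested()
{
    // Opens the event if it already exists, creates it otherwise.
    static HANDLE evt = CreateEventA(NULL, TRUE /* manual reset */, FALSE,
                                     "Global\\MyServiceShutdown");
    return evt && WaitForSingleObject(evt, 0) == WAIT_OBJECT_0;
}

// while (!ShutdownRequested()) { DoOneUnitOfWork(); }
// ...flush and finalize, then exit cleanly.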
439,402
439,559
Building a subset of boost in windows
I'm trying setup a subset of boost and get it properly compiled using bjam, however I'm not getting the result I'm looking for. I'm working on windows using boost 1.37.0. Let's say I want the libraries smart_ptr and filesystem built/installed. I intentionally chose a header only library and one library needing to compile a library. I want them to be built into a directory structure similar to the one I would get if I built a complete boost installation with bjam (i.e not specifying any --with-libraryX) but of course without all the libraries I'm not interested in. My first approach was to use the --with-filesystem --with-smart_ptr in bjam, however it seemed like it didn't recognize smart_ptr (I tried smartptr and smart-ptr without success). I guess this is because it's a header only library. When I removed it and only had --with-filesystem it seemed to copy ALL libraries header files to the install path and only build the libraries for filesystem. The library building behavior was the one I hoped for, but I got tons of header files for libraries I'm not interested in installed in my boost directory. My second approach was to use bcp to copy the relevant projects. This did work out in the sense that I only got the projects I cared about (and their dependencies). However they didn't provide any make files for building the libraries that was copied. This means I would need to setup projects files for all the libraries that are not header only and manually build them. So my question is basically, is there a way of selectively building boost that only copies the headers for the libraries I'm interested in and only building library files for the non header only libraries I'm interested in (and the boost libraries they are dependent on course)? There are probably tons of manual/scripting based solutions for this, but if I could get something running only using bjam would be way more useful for me since adding new libraries or upgrading to a new boost version would be very simple. EDIT: Added the complete command line for the first approach: bjam install --prefix=c:\temp\boostsmall --build-dir=C:\temp\boostsmalltemp --layout=system --with-filesystem variant=debug link=static threading=multi runtime-link=static Changed scoped_ptr to smart_ptr
Solved it. The bcp solution had make files for the projects; however, I needed to copy the tools directory and the root of the boost directory to the place I copied all my libs to get things up and running.
439,540
439,587
Function pointer to class member function problems
First of all I have to admit that my programming skills are pretty limited and I took over a (really small) existing C++ OOP project where I try to push my own stuff in. Unfortunately I'm experiencing a problem which goes beyond my knowledge and I hope to find some help here. I'm working with a third party library (which cannot be changed) for grabbing images from a camera and will use some placeholder names here. The third party library has a function "ThirdPartyGrab" to start a continuous live grab and takes a pointer to a function which will be called every time a new frame arrives. So in a normal C application it goes like this: ThirdPartyGrab (HookFunction); "HookFunction" needs to be declared as: long _stdcall HookFunction (long, long, void*); or "BUF_HOOK_FUNCTION_PTR" which is declared as typedef long (_stdcall *HOOK_FUNCTION_PTR) (long, long, void*); Now I have a C++ application and a class "MyFrameGrabber" which should encapsulate everything I do. So I put in the hook function as a private member like this: long _stdcall HookFunction (long, long, void*); Also there is a public void function "StartGrab" in my class which should start the Grab. Inside I try to call: ThirdPartyGrab (..., HookFunction, ...); which (not surprisingly) fails. It says that the function call to MyFrameGrabber::HookFunction misses the argument list and I should try to use &MyFrameGrabber::HookFunction to create a pointer instead. However passing "&MyFrameGrabber::HookFunction" instead results in another error that this cannot be converted to BUF_HOOK_FUNCTION_PTR. After reading through the C++ FAQ function pointers I think I understand the problem but can't make up a solution. I tried to make the hook function static but this also results in a conversion error. I also thought of putting the hook function outside of the class but I need to use class functions inside the hook function. Is there another way or do I need to change my whole concept? EDIT 14.01.08: I tested the singleton workaround since I cannot change the third party library and the void pointer is only for data that is used inside the hook function. Unfortunately it didn't worked out of the box like I hoped.... I don't know if the static function needs to be in a separate class so I put it in my "MyFrameGrabber" class: static MyFrameGrabber& instance() { static MyFrameGrabber _instance; return _instance; } long Hook(long, long, void*); // Implementation is in a separate cpp file In my cpp file I have the call_hook function: long MFTYPE call_hook(long x, MIL_ID y, void MPTYPE *z) { return MyFrameGrabber::instance().Hook(x,y,z); } void MyFrameGrabber::grab () { ThirdPartyGrab(..., call_hook, ...); } But this gives me an error in static MatroxFrameGrabber _instance; that no matching standard constructor is found. That's correct because my MyFrameGrabber constructor looks like this: MyFrameGrabber (void* x, const std::string &y, int z, std::string &zz); I tried to put in an empty constructor MyFrameGrabber(); but this results in a linker error. Should I pass empty parameters to the MyFrameGrabber constructor in the singleton? Or do I need to have a separate Hook Class and if yes how could I access MyFrameGrabber functions? Thanks in advance. SECOND EDIT 15.01.08: I applied the changes and it compiles and links now. Unfortunately I cannot test this at runtime yet because it's a DLL and I have no Debug Caller Exe yet and there are other problems during initialization etc. 
I will mark the post as answer because I'm sure this is the right way to do this.
The reason "&MyFrameGrabber::HookFunction" cannot be converted to a BUF_HOOK_FUNCTION_PTR is that, being a member of the class, it has implicitly as first parameter the "this" pointer, thus you cannot convert a member function to a non-member function: the two signatures look the same but are actually different. I would declare an interface, defining the function to call, have your class implement it and pass the object itself instead of the callback (you can think of an interface as the object-oriented replacement of a function pointer): class IHookInterface{ public: virtual long HookFunction(long, long, void*) = 0; }; class HookClass : public IHookInterface{ public: virtual long Hook(long, long, void*) { // your code here... } }; // new definition: ThirdPartyGrab (..., IHookInterface, ...); EDIT - other possible solution in case you cannot modify the library: use a singleton rather than a static function. class HookClass{ public: static HookClass& instance(){ static HookClass _instance; return _instance; } long Hook(long, long, void*) { // your code here... } }; long call_hook(long x,long y,void * z){ return HookClass::instance().Hook(x,y,z); } SECOND EDIT: you might slightly modify the singleton class with an initialization method to call the constructor with the proper parameters, but maybe it is not more elegant than the following solution, which is simpler: class HookClass{ public: HookClass(string x,string y...){ } long Hook(long, long, void*) { // your code here... } }; static HookClass * hook_instance = 0; long call_hook(long x,long y,void * z){ if (0 != hook_instance){ return hook_instance->Hook(x,y,z); } } int main(){ hook_instance = new HookClass("x","y"); ThirdPartyGrab(..., call_hook, ...); }
439,561
439,650
Regular Expression to match {if cond}foo{else}bar{/if}
I'm having difficulty throwing away the bits of the expression I don't want, and keeping the bits I do. The problem is - given the input string: {if cond}foo{else}bar{/if} I'd like just to have: 0: {if cond}foo{else}bar{/if} 1: cond 2: foo 3: bar And for the input string: {if cond}foo{/if} I'd like just to have: 0: {if cond}foo{else}bar{/if} 1: cond 2: foo 3: The regex I've got at present looks like this: \{if ([a-z0-9]+)\}([^\{]*?)(((?:\{else\})?)(.*?)?)\{/if\} I get the following data back: 0: {if cond}foo{else}bar{/if} 1: cond 2: 3: foo{else}bar 4: 5: foo{else}bar Which would require further parsing of the foo{else}bar bit to get the respective values. Is my regex anywhere near close? I'm not too concerned about the sort of data this might be run over - including { in the body of an if statement is allowed to break the regex. The values of foo and bar can be anything that does not include a {. Thanks, Dom
What about this? \{if ([a-z0-9]+)\}([^\{]*)(?:\{else\})?([^\{]*)\{/if\}
439,573
439,589
How to convert a single char into an int
I have a string of digits, e.g. "123456789", and I need to extract each one of them to use them in a calculation. I can of course access each char by index, but how do I convert it into an int? I've looked into atoi(), but it takes a string as argument. Hence I must convert each char into a string and then call atoi on it. Is there a better way?
You can utilize the fact that the character encodings for digits are all in order from 48 (for '0') to 57 (for '9'). This holds true for ASCII, UTF-8 and practically all other encodings (see comments below for more on this). Therefore the integer value for any digit is the digit minus '0' (or 48). char c = '1'; int i = c - '0'; // i is now equal to 1, not '1' This is synonymous with: char c = '1'; int i = c - 48; // i is now equal to 1, not '1' However, I find the first form, c - '0', far more readable.
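Putting that together for the original problem - walking a string of digits and using each value in a calculation - might look like this (here just summing them):

#include <iostream>
#include <string>

int main()
{
    std::string digits = "123456789";
    int sum = 0;
    for (std::string::size_type i = 0; i < digits.size(); ++i)
    {
        int value = digits[i] - '0';  // each char becomes 0..9
        sum += value;
    }
    std::cout << sum << std::endl;    // prints 45
}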