Serialize and send a data structure using Boost?
I have a data structure that looks like this: typedef struct { unsigned short m_short1; unsigned short m_short2; unsigned char m_character; } MyDataType; I want to use boost::serialization to serialize this data structure, then use boost::asio to transmit it via TCP/IP, then have another application receive the data and de-serialize it using the same boost libraries. I'm trying to following boost::serialization tutorial, (as some other SO questions have suggested) but the example is specifically for writing/reading to a file, not to a socket using boost::asio. I'm pretty sure I've got the right tools for the job -- I just need help making them work together. Writing to a socket can't be that different from writing to a file, right? Any suggestions are very much appreciated. Thanks!
For such a simple structure, boost::serialization is overkill and adds huge overhead. Do it more simply: vector<uint16_t> net(3,0); net[0]=htons(data.m_short1); net[1]=htons(data.m_short2); net[2]=htons(data.m_character); asio::async_write(socket,buffer((char*)&net.front(),6),callback); On the receiving side: vector<uint16_t> net(3,0); asio::async_read(socket,buffer((char*)&net.front(),6),callback); and in the callback: data.m_short1=ntohs(net[0]); data.m_short2=ntohs(net[1]); data.m_character=ntohs(net[2]); That saves you the HUGE overhead that boost::serialization has. And if you have a private protocol between computers with the same byte order (big/little endian), just send the structure as-is -- it's a POD.
unix-fork-monitor-child-progress
I have an application where a bit of parallel processing would be of benefit. For the purposes of the discussion, let's say there is a directory with 10 text files in it, and I want to start a program that forks off 10 processes, each taking one of the files and uppercasing its contents. I acknowledge that the parent program can wait for the children to complete using one of the wait functions, or using the select function. What I would like to do is have the parent process monitor the progress of each forked process, and display something like a progress bar as the processes run. My question: what reasonable alternatives do I have for the forked processes to communicate this information back to the parent? What IPC techniques would be reasonable to use?
In this kind of situation where you only want to monitor the progress, the easiest alternative is to use shared memory. Every process updates its progress value (e.g. an integer) in a shared memory block, and the master process reads the block regularly. Basically, you don't need any locking in this scheme. It is also a "polling"-style design: the master can read the information whenever it wants, so you do not need any event processing to handle the progress data.
Strange SAL annotation warning
I'm trying to use Microsoft's SAL annotation for my project, however I get the following warning, and I don't know why. As an example, I created a new C++ console application with this code: #include <sal.h> class Whatever { public: _Check_return_ int Method(__in int number) ; }; int main() { return 0; } When I compile using Visual Studio 2008, I get the following warning: warning C6540: The use of attribute annotations on this function will invalidate all of its existing __declspec annotations In the file "c1xxast" What am I doing wrong? If I remove either the _Check_return_ or the __in, the warning goes away. I cannot find any reference to warning C6540. However, the same text can be found here: http://msdn.microsoft.com/en-us/library/dd445322.aspx, but it's not very helpful.
The problem may be that you are mixing SAL annotation types. Although not made very clear on MSDN, there are two types of SAL annotation: attribute and ... er ... not. The #defines in VC2005's <sal.h> use the non-attribute versions and start with an underscore followed by a lowercase letter. The newer VC2008 versions expand to compiler attributes and start (and end) with an underscore followed by a capital letter. You have mixed the two types: Attribute: _In_ _Check_return_ Non-attribute: __in __checkReturn Try changing your annotations to use a single type consistently. This blog post explains a bit more about this.
Why won't MFC::CHttpFile 'PUT' for me?
My code talks to a little Java application provided by a vendor. This Java app sets up a web server at localhost:57000 which is used to control the state of 'the machine'. For the purpose of this question, I need to change the state of 'the machine' from 'off' to 'on'. To make this happen I'm supposed to HTTP PUT the following string to 'the machine' at http://localhost:57000/settings.xml: <settings><machine_state><status>on</status></machine_state></settings> This Curl command works perfectly: curl -X PUT -H "Content-Type:application/xml" -d @settings.xml http://localhost:57000/settings.xml where the local file 'settings.xml' has the above xml string in it. I want to do what Curl is doing with MFC's WININET classes. The following code should IMHO do exactly the same thing that curl does. Sadly, although the localhost web server returns a code 200 it ignores my xml string. What little thing am I missing? int MyHttp::HttpPutThread() NOTHROW { try { m_xml = "<settings><machine_state><status>on</status></machine_state></settings>"; m_url = "settings.xml"; CInternetSession session; SetSessionOptions(session); CString server = "localhost:57920"; boost::scoped_ptr<CHttpConnection> phttp(session.GetHttpConnection(server)); LPCTSTR accept = 0;//"text/xml"; boost::scoped_ptr<CHttpFile> phttpfile(phttp->OpenRequest( "PUT", //verb "settings.xml", //object name 0, //referer 1, //context &accept, // accept types 0, //version INTERNET_FLAG_EXISTING_CONNECT)); CString header = "Content-Type:application/xml\r\n"; if(phttpfile->SendRequest(header,(LPVOID)m_xml.GetBuffer(), m_xml.GetLength())) { // LOG_DEBUG (Same as TRACE) output are shown in comment DWORD code(0); phttpfile->QueryInfoStatusCode(code); LOG_DEBUG("HttpPutThread result code: %d", code); // '200' CString object = phttpfile->GetObject(); LOG_DEBUG("object: %s", object); // 'settings.xml' CString statustxt; phttpfile->QueryInfo(HTTP_QUERY_STATUS_TEXT,statustxt); LOG_DEBUG("status text:%s", statustxt); // 'HTTP/1.0 200 OK' CString rawheaders; phttpfile->QueryInfo(HTTP_QUERY_RAW_HEADERS,rawheaders); LOG_DEBUG("raw headers:%s", rawheaders); // http://localhost:57000/settings.xml LOG_DEBUG("File url:%s",phttpfile->GetFileURL()); LOG_DEBUG("Verb:%s", phttpfile->GetVerb()); // 'PUT' } else { //This does not happen LOG_DEBUG("PUT failed in AffHttp::HttpPutThread"); } } catch(CInternetException* pe) { //No exceptions are thrown LOG_DEBUG("Exception HttpPutThread:%d", pe->m_dwError); pe->Delete(); } return 0; } Thanks in advance.
I wound up replacing the MFC classes with my own low level socket code to send exactly the same text in exactly the same order as Curl did. It seemed like the little embedded 'jetty' Java server just objected to one of the headers generated by the MFC classes.
G++ Compiler won't allow recursion?
I have created a very simple program that uses recursion. I'm using the g++ compiler. I can compile it, but when I try to run it I get an error message that says SEGMENTATION FAULT. Here's my code: #include <iostream.h> using namespace std; int Recurse(int); int main(int argc, char *argv[]) { Recurse(10); cout << endl; } int Recurse(int numTimes) { if (numTimes == 0) return 0; else { cout << numTimes << " "; Recurse(numTimes--); } }
In your recursive call, you're using the postfix -- (numTimes--), rather than the prefix version (--numTimes). As a result, the value of numTimes is decremented after the recursive call. This means that Recurse is called with 10 infinitely. Use the prefix version (which will decrement it before the call), or just pass numTimes-1 (since the numTimes value doesn't need to be modified). The reason you're seeing a segfault is that your stack overflows into protected memory.
std::allocator construct/destroy vs. placement new/p->~T()
For a project of mine, I am writing some STL containers from scratch (I have my reasons). Since I am mimicking the functionality and interfaces of the STL so closely, I am doing my best to keep to the policy "if it has the same name as a standard construct, it will conform to the standard as much as possible." So, of course my containers take allocators as a template parameter, which is very nice as it allows for some custom allocation schemes. On to my question. The std::allocator interface separates memory allocation from object construction. Likewise it separates deallocation from destruction. This makes sense since where you get memory from is more or less irrelevant to properly constructing an object in C++. So there are two construction/destruction functions which look like this for the default implementation (lifted straight from a book): void construct(pointer p, const T& val) { new(p) T(val); } void destroy(pointer p) { p->~T(); } As you can see, construct simply calls placement new and destroy simply calls the destructor. Is there any reason to use these over just using placement new and destructor syntax? Can a "correct" allocator implement these in another way? Or am I guaranteed that all allocator implementations which conform to the standard will have their construct/destroy methods implemented in this way? More to the point, is it safe to say that I can always use std::uninitialized_copy and std::uninitialized_fill for constructing the elements of my containers? Thanks.
The allocator could add logging statements before and after construction/destruction, or have any other side effects it cared to. Of course the actual construction has to occur by calling placement new and the destructor, but nothing in the rulebook says that nothing else may happen in the construct/destroy functions.
Automatically separate class definitions from declarations?
I am using a library that consists almost entirely of templated classes and functions in header files, like this: // foo.h template<class T> class Foo { Foo(){} void computeXYZ() { /* heavy code */ } }; template<class T> void processFoo(const Foo<T>& foo) { /* more heavy code */ } Now this is bad because compile times are unbearable whenever I include one of those header files (and actually I include many of them in each of my compilation units). Since as a template parameter I only use one or two types anyway I am planning to create, for each library header file, a file that contains only declarations, without the heavy code, like this: // NEW: fwd-foo.h template<class T> class Foo { Foo(); void computeXYZ(); }; template<class T> void processFoo(const Foo<T>& foo); And then one file that creates all the instantiations that I'll need. That file can be compiled separately once and for all: // NEW: foo.cpp #include "foo.h" template class Foo<int>; template class Foo<double>; template void processFoo(const Foo<int>& foo); template void processFoo(const Foo<double>& foo); Now I can just include fwd-foo.h in my code and have short compile times. I'll link against foo.o at the end. The downside, of course, is that I have to create these new fwd-foo.h and foo.cpp files myself. And of course it's a maintenance problem: When a new library version is released I have to adapt them to that new version. Are there any other downsides? And my main question is: Is there any chance I can create these new files, especially fwd-foo.h, automatically from the original foo.h? I have to do this for many library header files (maybe 20 or so), and an automatic solution would be best especially in case a new library version is released and I have to do this again with the new version. Are any tools available for this task? EDIT: Additional question: How can the newly supported extern keyword help me in this case?
We use lzz which splits out a single file into a separate header and translation unit. By default, it would normally put the template definitions into the header too, however, you can specify that you don't want this to happen. To show you how you might use it consider the following: // t.cc #include "b.h" #include "c.h" template <typename T> class A { void foo () { C c; c.foo (); b.foo (); } B b; } Take the above file and copy it to 't.lzz' file. Place any #include directives into separate $hdr and $src blocks as necessary: // t.lzz $hdr #include "b.h" $end $src #include "c.h" $end template <typename T> class A { void foo () { C c; c.foo (); b.foo (); } B b; } Now finally, run lzz over the file specifying that it places the template definitions into the source file. You can either do this using a $pragma in the source file, or you can use the command line option "-ts": This will result in the following files being generated: // t.h // #ifndef LZZ_t_h #define LZZ_t_h #include "b.h" #undef LZZ_INLINE #ifdef LZZ_ENABLE_INLINE #define LZZ_INLINE inline #else #define LZZ_INLINE #endif template <typename T> class A { void foo (); B b; }; #undef LZZ_INLINE #endif And: // t.cpp // #include "t.h" #include "c.h" #define LZZ_INLINE inline template <typename T> void A <T>::foo () { C c; c.foo (); b.foo (); } #undef LZZ_INLINE You can then run these through some grep/sed commands to remove the LZZ helper macros.
What is the worst real-world macros/pre-processor abuse you've ever come across?
What is the worst real-world macros/pre-processor abuse you've ever come across (please no contrived IOCCC answers *haha*)? Please add a short snippet or story if it is really entertaining. The goal is to teach something instead of always telling people "never use macros". p.s.: I've used macros before... but usually I get rid of them eventually when I have a "real" solution (even if the real solution is inlined so it becomes similar to a macro). Bonus: Give an example where the macro was really was better than a not-macro solution. Related question: When are C++ macros beneficial?
From memory, it looked something like this: #define RETURN(result) return (result);} int myfunction1(args) { int x = 0; // do something RETURN(x) int myfunction2(args) { int y = 0; // do something RETURN(y) int myfunction3(args) { int z = 0; // do something RETURN(z) Yes that's right, no closing braces in any of the functions. Syntax highlighting was a mess, so he used vi to edit (not vim, it has syntax coloring!) He was a Russian programmer who had mostly worked in assembly language. He was fanatical about saving as many bytes as possible because he had previously worked on systems with very limited memory. "It was for satellite. Only very few byte, so we use each byte over for many things." (bit fiddling, reusing machine instruction bytes for their numeric values) When I tried to find out what kinds of satellites, I was only able to get "Orbiting satellite. For making to orbit." He had two other quirks: A convex mirror mounted above his monitor "For knowing who is watching", and an occasional sudden exit from his chair to do a quick ten pushups. He explained this last one as "Compiler found error in code. This is punishment".
Macros and Visual C++
I'm trying to get a better understanding of what place (if any) macros have in modern C++ and Visual C++, also with reference to Windows programming libraries: What problem (if any) do macros solve in these situations that cannot be solved without using them? I remember reading about Google Chrome's use of WTL for macros (amongst other things) from this blog post, and they are also used in MFC - here is an example of a macro from that blog post I'd like explained in a superb amount of detail if possible: // CWindowImpl BEGIN_MSG_MAP(Edit) MSG_WM_CHAR(OnChar) MSG_WM_CONTEXTMENU(OnContextMenu) MSG_WM_COPY(OnCopy) MSG_WM_CUT(OnCut) MESSAGE_HANDLER_EX(WM_IME_COMPOSITION, OnImeComposition) MSG_WM_KEYDOWN(OnKeyDown) MSG_WM_LBUTTONDBLCLK(OnLButtonDblClk) MSG_WM_LBUTTONDOWN(OnLButtonDown) MSG_WM_LBUTTONUP(OnLButtonUp) MSG_WM_MBUTTONDOWN(OnNonLButtonDown) MSG_WM_MOUSEMOVE(OnMouseMove) MSG_WM_MOUSELEAVE(OnMouseLeave) MSG_WM_NCCALCSIZE(OnNCCalcSize) MSG_WM_NCPAINT(OnNCPaint) MSG_WM_RBUTTONDOWN(OnNonLButtonDown) MSG_WM_PASTE(OnPaste) MSG_WM_SYSCHAR(OnSysChar) // WM_SYSxxx == WM_xxx with ALT down MSG_WM_SYSKEYDOWN(OnKeyDown) END_MSG_MAP() I've read these articles to do with macros on MSDN but am trying to also pick up best practices for writing or avoiding macros and where/when to use them.
All of those Macros are defined in public sdk header files, so you can go read what they do yourself if you want. Basically what is happening here is you're generating a WndProc function using Macros. Each MSG_WM_* entry is a case statement that handles the given window message by translating its arguments from wParam and lParam into their original types and calling a helper function (which you then get to go implement) that takes those types as arguments (if appropriate—some window messages have 0 or 1 arguments). In my opinion all this is crap. You're trading off having to write a bunch of boiler-plate up front but in exchange you make it much harder to debug later. Macros don't have much of a place in my modern programming world, other than to check if they are defined or not. Macros that do flow control, or assume specific local variables are always defined, etc, make me quite unhappy. Especially when I have to debug them.
Has anyone ever had a use for the __COUNTER__ pre-processor macro?
The __COUNTER__ symbol is provided by VC++ and GCC, and gives an increasing non-negative integral value each time it is used. I'm interested to learn whether anyone's ever used it, and whether it's something that would be worth standardising?
It's used in the xCover code coverage library, to mark the lines that execution passes through, to find ones that are not covered.
Equivalence of <limits> and <climits>
Is this guaranteed to be always true: std::numeric_limits<int>::max() == INT_MAX What does the C++ standard say about it? I could not find any reference in the standard that would explicitly state this, but I keep reading that those should be equivalent. What about C99 types that are not in the C++98 standard, for compilers that implement both C99 (at least the long long part) and C++98? I am not sure whether there is any guarantee that this always holds true: std::numeric_limits<unsigned long long>::max() == ULLONG_MAX Is this a reasonable assumption?
My copy of the C++ 2003 standard says that the numeric_limits<>::max() and min() templates will return values: Equivalent to CHAR_MIN, SHRT_MIN, FLT_MIN, DBL_MIN, etc. Equivalent to CHAR_MAX, SHRT_MAX, FLT_MAX, DBL_MAX, etc However, those are in footnotes. ISO/IEC Directives Part 3: "[Footnotes] shall not contain requirements." Though footnotes to tables or figures may be requirements.
"Unable to find an entry point named [function] in dll" (c++ to c# type conversion)
I have a dll which comes from a third party, which was written in C++. Here is some information that comes from the dll documentation: //start documentation RECO_DATA{ wchar_t Surname[200]; wchar_t Firstname[200]; } Description: Data structure for receiving the function result. All function result will be stored as Unicode (UTF-8). Method: bool recoCHN_P_Name(char *imgPath,RECO_DATA *o_data); Input: char * imgPath the full path of the image location for this function to recognize RECO_DATA * o_data data object for receiving the function result. Function return: True if Success, otherwise false will return. //end documentation I am trying to call the recoCHN_P_Name from my C# application. To this end, I came up with this code: The code to import the dll: public class cnOCRsdk { [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)] public struct RECO_DATA{ [MarshalAs(UnmanagedType.ByValTStr, SizeConst=200)] public string FirstName; [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 200)] public string Surname; } [DllImport(@"cnOCRsdk.dll", EntryPoint="recoCHN_P_Name")] public static extern bool recoCHN_P_Name(byte[] imgPath, RECO_DATA o_data); } The code to call the function: cnOCRsdk.RECO_DATA recoData = new cnOCRsdk.RECO_DATA(); string path = @"C:\WINDOWS\twain_32\twainrgb.bmp"; System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding(); byte[] bytes = encoding.GetBytes(path); bool res = cnOCRsdk.recoCHN_P_Name(bytes, recoData); And the error I'm getting is ""Unable to find an entry point named 'recoCHN_P_Name' in DLL 'cnOCRsdk.dll'." I'm suspecting that I'm having an error in converting a type from C++ to C#. But where exactly ... ?
First make sure the function is actually exported: In the Visual Studio Command Prompt, use dumpbin /exports whatever.dll
STL class for reference-counted pointers?
This should be trivial but I can't seem to find it (unless no such class exists!) What's the STL class (or set of classes) for smart pointers? UPDATE Thanks for the responses, I must say I'm surprised there's no standard implementation. I ended up using this one: http://archive.gamedev.net/reference/articles/article1060.asp
With the exception of the already mentioned TR1 shared_ptr, there is no reference-counted pointer in the STL. I suggest you use boost::shared_ptr (downloading Boost will be enough; there is nothing to compile, its implementation is header-only). You may also want to have a look at the smart pointers from the Loki library (again, a header-only implementation). For an overview of available smart pointers, see there.
Should a buffer of bytes be signed or unsigned char buffer?
Should a buffer of bytes be signed char or unsigned char or simply a char buffer? Any differences between C and C++? Thanks.
Should a buffer of bytes be signed char or unsigned char or simply a char buffer? Any differences between C and C++? A minor difference in how the language treats it. A huge difference in how convention treats it. char = ASCII (or UTF-8, but the signedness gets in the way there) textual data unsigned char = byte signed char = rarely used And there is code that relies on such a distinction. Just a week or two ago I encountered a bug where JPEG data was getting corrupted because it was being passed to the char* version of our Base64 encode function — which "helpfully" replaced all the invalid UTF-8 in the "string". Changing to BYTE aka unsigned char was all it took to fix it.
pointer to objects within a class, C++ newbie question
why in C++, for objects A,B //interface, case #1 class A { B bb; } A::A() { //constructor bb = B(); } //interface, case #2 class A { B *bb; } A::A() { //constructor bb = new B(); } Why case #2 work but not #1?? Edit: I got it now. But for case #1, if an instance of A is freed, will its bb also be automatically freed? Case #2 you have to explicitly call bb = NULL right?
As I understand it, the question was really how to write A::A() { //constructor bb = new B(); } but without new. And I suppose the code in the question is not real code; the point is why 'new' works but a simple assignment doesn't. My answer is the following (if I have understood the question wrongly, or the answer itself is wrong, please let me know). Change A::A() { //constructor bb = B(); } to A::A(): bb() { // some logic } and, to be consistent with the 'new' case, it is better to implement that one like A::A(): bb( new B() ) { // some logic } In both cases, when 'some logic' starts its execution you can be sure that the object bb is either initialized or an exception has been thrown. To make your case compilable, B must have an assignment operator implemented. To your edit: your guess about case 1 is correct. In case 2 you have to call delete bb in the class destructor. Please leave a message with the reason for the '-1'. I am really confused.
Direct3D10: No 32bit ARGB format?
I'm starting to add D3D10 support to go along with my existing D3D9 graphics backend. The problem is all the existing code (in several applications...) uses ARGB formatted colours, however I couldn't find a format mode that matches for D3D10. Does D3D10 not support ARGB colour formats at all, or have I just missed something? If I haven't missed something, what is a good way to convert between them? It just requires the first byte to be moved to the end, which seems like a pretty simple concept, however I can't see any way to do it other than breaking the colour into its components and reconstructing it... e.g.: //unsigned colIn, colOut unsigned char a = (colIn & 0xFF000000) >> 24, r = (colIn & 0x00FF0000) >> 16, g = (colIn & 0x0000FF00) >> 8, b = (colIn & 0x000000FF); colOut = (r << 24) | (g << 16) | (b << 8) | a;
Looking at the relevant enum type, I (too) fail to find any AxRxGxBx formats. So it seems you need to do the swizzling by yourself, then. This is extremely suitable for SSE optimization of course, check if your compiler is able to optimize the code into something that uses SSE and performance should be fine. Think about endianness issues when doing this code, it's easy to make a mistake and this code is hard to write without caring about endianness.
List of common C++ Optimization Techniques
Can I have a great list of common C++ optimization practices? What I mean by optimization is that you have to modify the source code to be able to run a program faster, not changing the compiler settings.
Two ways to write better programs: 1. Make the best use of the language. Read Code Complete by Steve McConnell, Effective C++, and Exceptional C++. 2. Profile your application. Identify which areas of code are taking how much time, then see if you can use better data structures/algorithms to make things faster. There is not much language-specific optimization one can do; it is limited to using language constructs well (learn from #1). The main benefit comes from #2 above.
Profiling programs written in C or C++
What would you suggest as the best tool to profile C/C++ code and determine which parts are taking the most time? Currently, I'm just relying on logs, but of course that information is not accurate since unnecessary delays are introduced. Preferably, the tool would also be able to detect/suggest areas which could be optimized, if such a tool exists. Platform: Linux. The application will be used in an embedded environment, so the tool should be lightweight and external (not a plugin for some IDE).
I can heartily recommend callgrind in combination with KCachegrind.
Port GNU C++ programs to Visual C++
How do you port C++ programs with makefile made from GNU C++ in Linux to Visual C++?
One thing I can suggest is to use CMake. If you implement your build system with CMake to auto-generate the makefiles for GCC on Linux, it takes only minor modifications to auto-generate projects and solutions for VC++. Of course, this means learning a whole new build tool, so it may not be for you. It's only a suggestion.
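To give a feel for what the suggestion involves, here is a hypothetical minimal CMakeLists.txt (the project, target, and file names are made up); the same file drives both the Linux and Visual C++ builds:

```cmake
# Hypothetical CMakeLists.txt for a small two-file program.
cmake_minimum_required(VERSION 2.6)
project(myapp)
add_executable(myapp main.cpp util.cpp)
# On Linux:   cmake . && make                       (Makefiles for GCC)
# On Windows: cmake -G "Visual Studio 9 2008" .     (a .sln for VC++)
```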
Is using .h as a header for a c++ file wrong?
Is using .h as a header for a c++ file wrong? I see it all over the place, especially with code written in the "C style". I noticed that Emacs always selects C highlighting style for a .h header, but c++ for hpp or hh. Is it actually "wrong" to label your headers .h or is it just something which annoys me? EDIT: There is a good (ish) reason why this annoys me, if I have project files labelled, 'hpp & cpp' I can get away with 'grep something *pp' etc. otherwise I have to type '.h cpp'
It's not wrong to call your C++ headers .h. There are no rules about what extensions your code must use. For your non-headers, .cpp is the norm with MSVC, and .cc is also common on Linux. There is no one global standard, and .h is definitely very widely used. But I'd say calling your headers .hpp (I've seen .hh a few times as well) is a lot more consistent and informative than just using .h.
Reporting tool for C++ that shows number of times executed given a line
I would to know if there is a tool in C++ that gives you a report where it displays the following: The source code of the whole project. Usually one HTML page per source file. Beside the source code, there are line numbers, for readability purposes of course. And for each line, at the left of the line number, there is a value displayed for how many times that line is executed. Of course, you need to execute a script that runs say unit tests and from there generate a report. If you are familiar with code coverage tool Cobertura for Java and NCover for C# that's one of its important features.
Have a look at that list of Code Coverage Tools for C and C++.
What is the order in which the destructors and the constructors are called in C++
What is the order in which the destructors and the constructors are called in C++? Using the examples of some Base classes and Derived Classes
The order is: Base constructor Derived constructor Derived destructor Base destructor Example: class B { public: B() { cout<<"Construct B"<<endl; } virtual ~B() { cout<<"Destruct B"<<endl; } }; class D : public B { public: D() { cout<<"Construct D"<<endl; } virtual ~D() { cout<<"Destruct D"<<endl; } }; int main(int argc, char **argv) { D d; return 0; } Output of example: Construct B Construct D Destruct D Destruct B Multiple levels of inheritance works like a stack: If you consider pushing an item onto the stack as construction, and taking it off as destruction, then you can look at multiple levels of inheritance like a stack. This works for any number of levels. Example D2 derives from D derives from B. Push B on the stack, push D on the stack, push D2 on the stack. So the construction order is B, D, D2. Then to find out destruction order start popping. D2, D, B More complicated examples: For more complicated examples, please see the link provided by @JaredPar
How much functionality is "acceptable" for a C++ struct?
My first post so please go easy on me! I know that there's no real difference between structs and classes in C++, but a lot of people including me use a struct or class to show intent - structs for grouping "plain old data" and classes for encapsulated data that has meaningful operations. Now, that's fine but at what point do you start to think that something isn't just a struct anymore and should become a class? Things I think are reasonable for structs to have: constructors with simple initialisation code only. serialization code such as stream insertion / extraction operators. Things I'm not so sure about, but would probably do: comparison operators Simple transformation functions - for example byteswapping all the members after receiving data from an external source. I don't think structs should have: dynamic memory allocation. destructor. complex member functions. Where do the boundaries lie??? Also, is it reasonable to have class instances as members of a struct? e.g. class C {private: int hiddenData; public: void DoSomething();}; struct S {int a; float b; C c; }; S s; s.c.DoSomething(); Remember, I'm not on about what you CAN do with C++, I'm interested in what you SHOULD do when designing good software. Thoughts?
Class vs. struct Using class or struct keyword is a matter of taste together with the 'feeling' it produces on the reader. Technically they are equivalent, but readability is better if structs are used for PODs and C-struct types and classes for anything else. Basic things that should go in a C++ struct: constructor that initializes the data (I dislike using memset, and it can later bite back if the POD evolves into something different) or construction from other types but not copy constructor. If you need to define a copy constructor or assignment operator because the compiler generated is not good enough, make it a class. It is common to use structs also for functors that will be passed to STL algorithms and template metaprogramming, as in struct square_int { int operator()( int value ) { return value*value; } }; std::transform( v.begin(), v.end(), v.begin(), square_int() ); or // off the top of my head template <typename T> struct is_pointer { enum { value = false } }; template <typename T> struct is_pointer<T*> { enum { value = true } }; Member methods vs. free functions Besides what I have said before, that do not add to what others already answered, I wanted to put some focus on other types of functions that you comment in your post, as comparison operators and the like. Operators that are meant to be symmetric (comparison, arithmetic), insertion and deletion operators and transformations are usually better implemented as free functions regardless of whether you declare it as a class or struct. Symmetric operators (with regard to data types) are not symmetric if they are implemented as member functions. The lookup rules won't cast the left hand side to call a member function, but it will apply the same cast to match a free function. 
// Example of symmetry with free functions where method would be asymmetric int main() { std::string( "Hello " ) + "world"; // compiles as free / member function "Hello " + std::string( "world" ); // compiles with free function, fails with member function definition of + } In the code above, if operator+ were a member method of std::string the compiler would fail to compile as it cannot cast the const char* literal into a std::string to use the member method. Insertion and extraction from streams must always be implemented as free functions as the stream is always the left hand side of the operation. Keeping transformations as free functions decouple the two different types. If A and A' can be converted into one another and you decide to implement transformations as members of A, then A must know A' and all uses of A will depend on A' whether you use it or not. If you define the transformation as a free function, A is complete without A' and the coupling between the two classes/structs will be smaller. The same goes for transformations to/from network, serialization and deserialization. When you implement them inside the class/struct you are forcing all users to know about those transforms.
654,645
654,727
using const to prevent datatype changing and value changing
Is there a difference between using const: Cannot change the datatype but can change the value of a or b int add(const int a, const int b); Can change the datatype but cannot change the value of a or b int add(int const a, int const b); Cannot change the datatype and cannot change the value of a or b int add(const int const a, const int const b); Many thanks for any suggestions
I don't know how one is supposed to change the datatype of a variable in C++ ... 'const' is a promise you make to the compiler about not modifying a value. It complains when you break that promise (probably uncovering a bug in the process). It also helps it to do various optimizations. Here are some const examples and what they mean : f ( const int a ) f cannot change the value of 'a' f ( int const a ) the same but written in a weird way f ( const int const a ) means nothing, gcc tells me "duplicate const" f ( const int * pa ) f cannot change the value pointed to by pa f ( int * const pa ) f cannot change the value of the pointer f ( const int * const pa ) f cannot change the value of the pointer nor the value pointed to f ( int a ) const The member function f cannot modify its object Hope it makes things clearer ..
654,713
654,724
.o files vs .a files
What is the difference between these two file types. I see that my C++ app links against both types during the construction of the executable. How to build .a files? links, references, and especially examples, are highly appreciated.
.o files are objects. They are the output of the compiler and input to the linker/librarian. .a files are archives. They are groups of objects or static libraries and are also input into the linker. Additional Content I didn't notice the "examples" part of your question. Generally you will be using a makefile to generate static libraries. AR = ar CC = gcc objects := hello.o world.o libby.a: $(objects) $(AR) rcu $@ $(objects) %.o: %.c $(CC) $(CFLAGS) -c $< -o $@ This will compile hello.c and world.c into objects and then archive them into library. Depending on the platform, you may also need to run a utility called ranlib to generate the table of contents on the archive. An interesting side note: .a files are technically archive files and not libraries. They are analogous to zip files without compression though they use a much older file format. The table of contents generated by utilities like ranlib is what makes an archive a library. Java archive files (.jar) are similar in that they are zip files that have some special directory structures created by the Java archiver.
654,751
654,801
Using a C++ header with .NET language
I'm trying to use a ".h" file from the Windows SDK in a .NET language (maybe C#), but without success. This header exposes some Windows Media Player functionality through COM. If I use Win32 C++, I can use it with no problems, so I thought that I could use Managed C++ as a "bridge" to expose it to C#. The header file is subscriptionservices.h, which comes with the Windows Media Player 11 SDK (part of Windows SDK 6). Is that possible? How could I use that header file in a .NET application? Thanks, Eduardo Cobuci
You can use PInvoke to Interop with Win32. If you are trying to use a COM object you should be able to add a reference to the project. Have you looked at this article? More practically you need to understand the kind of work that you are doing. If you are going to be doing lots of pointer arithmetic then I would recommend managed c++. If not C#. Good luck.
654,853
654,881
Why would one use function pointers to member method in C++?
A lot of C++ books and tutorials explain how to do this, but I haven't seen one that gives a convincing reason to choose to do this. I understand very well why function pointers were necessary in C (e.g., when using some POSIX facilities). However, AFAIK you can't send them a member function because of the "this" parameter. But if you're already using classes and objects, why not just use an object oriented solution like functors? Real world examples of where you had to use such function pointers would be appreciated. Update: I appreciate everyone's answers. I have to say, though, that none of these examples really convinces me that this is a valid mechanism from a pure-OO perspective...
Functors are not a priori object-oriented (in C++, the term "functor" usually means a struct defining an operator () with arbitrary arguments and return value that can be used as syntactical drop-in replacements to real functions or function pointers). However, a fully object-oriented solution has a lot of issues, first and foremost usability. It's just a whole lot of complicated boilerplate code. To build a decent signalling framework, as in most dialog frameworks, a whole lot of inheritance mess becomes necessary. Instance-bound function pointers would be very beneficial here (.NET demonstrates this amply with delegates). However, C++ member function pointers satisfy another need still. Imagine, for example, that you've got a list of objects on each of which you want to call one method, say its print(). A member function pointer to YourType::print helps here because it lets you write such code: std::for_each(lst.begin(), lst.end(), std::mem_fun(&YourType::print))
655,027
655,144
when should a member function be both const and volatile together?
I was reading about volatile member function and came across an affirmation that member function can be both const and volatile together. I didn't get the real use of such a thing. Can anyone please share their experience on practical usage of having member function as const and volatile together. I wrote small class to test the same: class Temp { public: Temp(int x) : X(x) { } int getX() const volatile { return X; } int getBiggerX() { return X + 10; } private: int X; }; void test( const volatile Temp& aTemp) { int x = aTemp.getX(); } int main(int argc, char* argv[]) { const volatile Temp aTemp(10); test(aTemp); return 0; }
You asked for a practical example of volatile member functions. Well, I can't think of one, because the only situations I could imagine are so low-level that I would not consider using a member function in the first place, but just a plain struct with data members accessed by a volatile reference. However, let's put a const volatile function into it just for the sake of answering the question. Assume you have a port at address 0x378 that contains 2 integers, 4 bytes each. Then you could write struct ints { int first; int second; int getfirst() const volatile { return first; } int getsecond() const volatile { return second; } // note that you could also overload on volatile-ness, just like // with const-ness }; // could also be mapped by the linker. ints const volatile &p = *reinterpret_cast<ints*>(0x378L); You are stating: "I'm not changing them, but something outside this abstract semantics could change them, so always do a real load from the address." Actually, volatile signals that the value of an object might not be the value last stored into it but is actually unknown and might have been changed in between by external (not observable by the compiler) conditions. So when you read from a volatile object, the compiler has to emulate the exact abstract semantics, and perform no optimizations: a = 4; a *= 2; // can't be optimized to a = 8 if a is volatile, because the abstract // semantics described by the language contain two assignments and one load. The following already determines what volatile does. Everything can be found in 1.9 of the Standard. The parameters it talks about are implementation-defined things, like the sizeof of some type. The semantic descriptions in this International Standard define a parameterized nondeterministic abstract machine. This International Standard places no requirement on the structure of conforming implementations. In particular, they need not copy or emulate the structure of the abstract machine. 
Rather, conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below. [...] A conforming implementation executing a well-formed program shall produce the same observable behavior as one of the possible execution sequences of the corresponding instance of the abstract machine with the same program and the same input. [...] The observable behavior of the abstract machine is its sequence of reads and writes to volatile data and calls to library I/O functions.
655,065
655,086
When should I use the new keyword in C++?
I've been using C++ for a short while, and I've been wondering about the new keyword. Simply, should I be using it, or not? With the new keyword... MyClass* myClass = new MyClass(); myClass->MyField = "Hello world!"; Without the new keyword... MyClass myClass; myClass.MyField = "Hello world!"; From an implementation perspective, they don't seem that different (but I'm sure they are)... However, my primary language is C#, and of course the 1st method is what I'm used to. The difficulty seems to be that method 1 is harder to use with the std C++ classes. Which method should I use? Update 1: I recently used the new keyword for heap memory (or free store) for a large array which was going out of scope (i.e. being returned from a function). Where before I was using the stack, which caused half of the elements to be corrupt outside of scope, switching to heap usage ensured that the elements were intact. Yay! Update 2: A friend of mine recently told me there's a simple rule for using the new keyword; every time you type new, type delete. Foobar *foobar = new Foobar(); delete foobar; // TODO: Move this to the right place. This helps to prevent memory leaks, as you always have to put the delete somewhere (i.e. when you cut and paste it to either a destructor or otherwise).
Method 1 (using new) Allocates memory for the object on the free store (This is frequently the same thing as the heap) Requires you to explicitly delete your object later. (If you don't delete it, you could create a memory leak) Memory stays allocated until you delete it. (i.e. you could return an object that you created using new) The example in the question will leak memory unless the pointer is deleted; and it should always be deleted, regardless of which control path is taken, or if exceptions are thrown. Method 2 (not using new) Allocates memory for the object on the stack (where all local variables go) There is generally less memory available for the stack; if you allocate too many objects, you risk stack overflow. You won't need to delete it later. Memory is no longer allocated when it goes out of scope. (i.e. you shouldn't return a pointer to an object on the stack) As far as which one to use; you choose the method that works best for you, given the above constraints. Some easy cases: If you don't want to worry about calling delete, (and the potential to cause memory leaks) you shouldn't use new. If you'd like to return a pointer to your object from a function, you must use new
655,122
655,152
Doing a run-around of existing application to make database changes, good idea?
We have an existing "legacy" app written in C++/PowerBuilder running on Unix with its own Sybase databases. For complex organizational reasons (existing apps have to go through a lot of red tape to be modified) and code reasons (no refactoring has been done in years, so the code is spaghetti), it's difficult to get modifications made to this application. Hence I am considering writing a new, modern, maybe Grails-based web app to do some "admin" type things directly against the database. For example, to add users, or to add "constraint rows". What does the software community think of this approach of doing a run-around of the existing app like this? Good idea? Tips and hints?
Good idea? No. Sometimes necessary? Yes. Living in a world where you sometimes have to do things you know aren't a good idea? Priceless. In general, you should always follow best practices. For everything else, there's kludges.
655,202
655,304
git and C++ workflow, how to handle object and archive files?
I use git to interface with an SVN repository. I have several git branches for the different projects I work on. Now, whenever I switch from one branch to another using 'git checkout ', all the compiled executables and object files from the previous branch are still there. What I would like to see is that switching from branch A to B results in a tree with all object files and binaries from the last time I worked on branch B. Is there a way to handle this without creating multiple git repositories? Update: I understand that executables and binaries should not end up in the repository. I'm a bit disappointed in the fact that all the branching stuff in git is useless to me, as it turns out I'll have to clone my proxy git repository for every branch I want to start. Something I already did for SVN and hoped to avoid with git. Of course, I don't have to do it, but it would result in me doing a new make most of the time after switching between branches (not fun).
What you want is a full context, not just the branch... which is generally out of scope for a version control tool. The best way to do that is to use multiple repositories. Don't worry about the inefficiency of that though... Make your second repository a clone of the first. Git will automatically use links to avoid having multiple copies on disk. Here's a hack to give you what you want Since you have separate obj directories, you could modify your Makefiles to make the base location dynamic using something like this: OBJBASE = `git branch --no-color 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/\1\//'` OBJDIR = "$(OBJBASE).obj" # branch master: OBJBASE == "master/", OBJDIR == "master/.obj" # non-git checkout: OBJBASE == "", OBJDIR == ".obj" That will put your branch name into OBJBASE, which you can use to build your actual objdir location from. I'll leave it to you to modify it to fit your environment and make it friendly to non-git users of your Makefiles.
655,257
655,275
Two's complement binary form
In a TC++ compiler, the binary representation of 5 is (0000000000000101). I know that negative numbers are stored as 2's complement, thus -5 in binary is (1111111111111011). The most significant bit (sign bit) is 1 which tells that it is a negative number. So how does the compiler know that it is -5? If we interpret the binary value given above (1111111111111011) as an unsigned number, it will turn out completely different? Also, why is the 1's complement of 5 -6 (1111111111111010)?
The compiler doesn't know — it doesn't need to. The declared type (signed vs. unsigned) tells it how to interpret the bit pattern and which instructions to use. If you cast -5 to a 16-bit unsigned int you'll get 65531 (that is, 65536 − 5), the same bit pattern read without a sign.
655,378
655,448
Howto read chunk of memory as char in c++
Hello I have a chunk of memory (allocated with malloc()) that contains bits (bit literal), I'd like to read it as an array of char, or, better, I'd like to printout the ASCII value of 8 consecutively bits of the memory. I have allocated he memory as char *, but I've not been able to take characters out in a better way than evaluating each bit, adding the value to a char and shifting left the value of the char, in a loop, but I was looking for a faster solution. Thank you What I've wrote for now is this: for allocation: char * bits = (char*) malloc(1); for writing to mem: ifstream cleartext; cleartext.open(sometext); while(cleartext.good()) { c = cleartext.get(); for(int j = 0; j < 8; j++) { //set(index) and reset(index) set or reset the bit at bits[i] (c & 0x80) ? (set(index)):(reset(index));//(*ptr++ = '1'):(*ptr++='0'); c = c << 1; }.. }.. and until now I've not been able to get character back, I only get the bits printed out using: printf("%s\n" bits); An example of what I'm trying to do is: input.txt contains the string "AAAB" My program would have to write "AAAB" as "01000001010000010100000101000010" to memory (it's the ASCII values in bit of AAAB that are 65656566 in bits) Then I would like that it have a function to rewrite the content of the memory to a file. So if memory contains again "01000001010000010100000101000010" it would write to the output file "AAAB".
int numBytes = 512; char *pChar = (char *)malloc(numBytes); for( int i = 0; i < numBytes; i++ ){ pChar[i] = '8'; } Since this is C++, you can also use "new": int numBytes = 512; char *pChar = new char[numBytes]; for( int i = 0; i < numBytes; i++ ){ pChar[i] = '8'; }
655,450
655,464
Passing a smart pointer as argument inside a class: scoped_ptr or shared_ptr?
I have a class that creates an object inside one public method. The object is private and not visible to the users of the class. This method then calls other private methods inside the same class and pass the created object as a parameter: class Foo { ... }; class A { private: typedef scoped_ptr<Foo> FooPtr; void privateMethod1(FooPtr fooObj); public: void showSomethingOnTheScreen() { FooPtr fooObj(new Foo); privateMethod1(fooObj); }; }; I believe the correct smart pointer in this case would be a scoped_ptr, however, I can't do this because scoped_ptr makes the class non copyable if used that way, so should I make the methods like this: void privateMethod1(FooPtr& fooObj); privateMethod1 doesn't store the object, neither keeps references of it. Just retrieves data from the class Foo. The correct way would probably be not using a smart pointer at all and allocating the object in the stack, but that's not possible because it uses a library that doesn't allow objects on the stack, they must be on the Heap. After all, I'm still confused about the real usage of scoped_ptr.
Use a simple std::auto_ptr here, since you can't create the object on the stack. And it is better for your private function to simply accept a raw pointer. The real benefit of the smart pointer is that you don't have to catch every possible exception just to perform the manual delete. In fact, since your function doesn't modify the object, you are better off using void privateMethod1(const Foo& fooObj); and passing the object as privateMethod1(*fooObj.get());
655,650
655,679
How to program a RPG game in C++ with SDL?
I want to know how to program a 2D RPG game in C++ with SDL. I have searched, but was unable to find anything good. Many of the articles were too basic and didn't delve into anything practical. Can anyone help give me some articles, free books or other resources so I can learn how to program a RPG using C++ and SDL? EDIT: I don't need documentation of SDL. I'm programming a RPG Game with SDL, and I need some examples that help me in this task.
Do you have examples of articles that are too simple? Are these too simple: devshed or gpwiki? You might consider studying the topics separately. For example, Bruce Eckel has, IMO, the best C++ books, "Thinking in C++ I & II", that will take you from novice to expert (including SQA techniques like unit testing), and they are available for free. I've found that C++ is a harsh mistress, and if you aren't prepared for the language, no tutorial specific to 'SDL and Games' will help much - this is true for any other advanced libraries, toolkits, etc.
655,957
2,363,905
Why do I keep getting "Must declare the Scalar variable "@Param1"" when performing a parameterized query in C++ using ADO?
Here is the code. A Few notes about it. DStr is an string class that is internal to our company. It functions much like CString. I can connect to the database and run non-parameterized queries. Also this insert works fine if I do not use parameters. The cmd->Execute statment throws an exception. This is where I am getting the error msg from. void CEmailArchiveDatabase::TestAddUser( DStr username, DStr displayname ) { DStr sql = "INSERT INTO [User] (UserName, DisplayName) VALUES (@Param1, @Param2)"; _variant_t vUser; _variant_t vDisp; vUser.vt = VT_BSTR; vUser.bstrVal = username.AllocSysString(); vDisp.vt = VT_BSTR; vDisp.bstrVal = displayname.AllocSysString(); _CommandPtr cmd = NULL; _ConnectionPtr conn = NULL; _ParameterPtr prm = NULL; DStr connStr = "driver={SQL Server};server=DEMETER\\SQLEXPRESS;database=ExtractoMundo"; try { conn.CreateInstance(__uuidof(Connection)); cmd.CreateInstance(__uuidof(Command)); if( !(conn->GetState() & adStateOpen) ) { conn->ConnectionString = connStr.AllocSysString(); conn->Open("", "", "", NULL); conn->CursorLocation = adUseClient; } cmd->ActiveConnection = conn; cmd->CommandText = sql.AllocSysString(); cmd->CommandType = adCmdText; prm = cmd->CreateParameter("@Param1", adLongVarChar, adParamInput, -1, vUser); cmd->Parameters->Append(prm); prm = cmd->CreateParameter("@Param2", adLongVarChar, adParamInput, -1, vDisp); cmd->Parameters->Append(prm); _variant_t recAff; cmd->Execute(&recAff, NULL, adExecuteNoRecords); } catch(_com_error &ex) { //ClearParameters(); DStr err; err += PrintProviderError(conn); err += PrintComError(ex); } SysFreeString(vUser.bstrVal); SysFreeString(vDisp.bstrVal); SysFreeString(cmd->CommandText); SysFreeString(conn->ConnectionString); }
ADO refuses to work with named parameters in dynamic queries. You have to either convert named parameters into parameter placeholders: DStr sql = "INSERT INTO [User] (UserName, DisplayName) VALUES (?, ?)"; or use a stored procedure instead. Create the procedure: CREATE PROCEDURE spUserInsert @Param1 nvarchar(50), @Param2 nvarchar(50) AS SET NOCOUNT ON; INSERT INTO [User] (UserName, DisplayName) VALUES (@Param1, @Param2) GO and modify your code to call it: DStr sql = "spUserInsert"; ... cmd->CommandType = adCmdStoredProcedure;
655,997
656,015
Modern C++ Design Generic programming and Design Patterns Applied
I have purchased this book for our group in the company, perhaps to improve our design skills and ultimately arrive at better programming practices. As I read it, I find mostly a set of nifty tricks that can be done with templates, and I'm not sure whether it is worthwhile - and not detrimental - to incorporate them into our code, given the cost in complexity, readability and maintainability. I would like to hear from fellow practitioners: what do you use from that book? What is relevant and important to professional software development, and what should be avoided? Any general thoughts about this book and how it fits into software development of large-scale systems (on the same theme, I love John Lakos's book)? What is the Alexandrescu effect?
Outside of standard template uses, the technique I find most useful from the book's brand of generic C++ programming is the ability to use templates to turn invalid code scenarios into compile-time errors. Once you get the hang of it, you can become very efficient at turning a whole class of what would be runtime bugs into compile-time errors. I've had a lot of success doing that at work. Sure, it produces completely unreadable messages, and my coworkers occasionally drop by and say "what in the world is that?". But every time it was a bug, and they weren't able to compile it. I also heavily comment the particular errors in the hope that they're moderately understandable.
656,224
656,523
When should I use C++ private inheritance?
Unlike protected inheritance, C++ private inheritance found its way into mainstream C++ development. However, I still haven't found a good use for it. When do you guys use it?
Note after answer acceptance: This is NOT a complete answer. Read other answers like here (conceptually) and here (both theoretic and practic) if you are interested in the question. This is just a fancy trick that can be achieved with private inheritance. While it is fancy it is not the answer to the question. Besides the basic usage of just private inheritance shown in the C++ FAQ (linked in other's comments) you can use a combination of private and virtual inheritance to seal a class (in .NET terminology) or to make a class final (in Java terminology). This is not a common use, but anyway I found it interesting: class ClassSealer { private: friend class Sealed; ClassSealer() {} }; class Sealed : private virtual ClassSealer { // ... }; class FailsToDerive : public Sealed { // Cannot be instantiated }; Sealed can be instantiated. It derives from ClassSealer and can call the private constructor directly as it is a friend. FailsToDerive won't compile as it must call the ClassSealer constructor directly (virtual inheritance requirement), but it cannot as it is private in the Sealed class and in this case FailsToDerive is not a friend of ClassSealer. EDIT It was mentioned in the comments that this could not be made generic at the time using CRTP. The C++11 standard removes that limitation by providing a different syntax to befriend template arguments: template <typename T> class Seal { friend T; // not: friend class T!!! Seal() {} }; class Sealed : private virtual Seal<Sealed> // ... Of course this is all moot, since C++11 provides a final contextual keyword for exactly this purpose: class Sealed final // ...
656,651
656,667
Virtual inheritance in C++ usages/tricks
I've never used it in the professional software even though in our shop, and others I have worked for, we design large scale systems. The only time I messed with virtual inheritance was during my interview in a company. Nonetheless, I played with it during afterhours. Do you guys use it? Do you understand how it works in depth (how most popular compiler vendors implement it)? I would really like to know how it is used in professional software, if at all. Tricks and tips would be appreciated, too. To me, virtual inheritance was just a solution to an infamous diamond problem. Hence, it never found its way in our software, as we don't have in our architecture MI resulting in the diamond. Thanks.
The main point with virtual inheritance is to prevent derived classes from inheriting multiple copies of different superior classes. This can occur in any case where there may be multiple inheritance -- as you correctly note, the "diamond problem", which is to say where the inheritance graph is a DAG instead of a strict tree. The C++ FAQ goes into it in some detail. I'd also recommend the C++ FAQ Book; I used to work for the authors and they're quite good.
656,655
656,853
GetOpenFileName() with OFN_ALLOWMULTISELECT flag set
I'm trying to use the GetOpenFileName() common dialog box call to pop open a dialog box and allow the user to select multiple files. I've got the OFN_ALLOWMULTISELECT flag set, as well as OFN_EXPLORER set so I get the "new style" file selection box. When I set up my OPENFILENAME structure, I have ofn.lpstrFile pointing to a buffer allocated to hold the results, and ofn.nMaxFile set to its length. The problem I'm having is that if the user selects so many filenames that the buffer would overflow, the call to GetOpenFileName returns FALSE, and then CommDlgExtendedError() returns FNERR_BUFFERTOOSMALL. That's fine for error detection, and I could up the size of the buffer to fix it, but sooner or later the user will select enough filenames to overflow that buffer. I've seen the note in MSDN that says if the buffer is too small, the first two bytes of the lpstrFile buffer will contain the required size, but the size it's returning seems to way too small (maybe this is correct when OFN_ALLOWMULTISELECT isn't set). Plus, this would require me to open the dialog again! Another thought I had was creating a dialog hook procedure, and then detecting the size of the filenames when I get a CDN_SELCHANGE notify message and dynamically allocate a buffer of the correct size, but while it will write the data in the new buffer, it seems to remember the orignal value of ofn.nMaxFile. Does anyone know the correct way to dynamically allocate a buffer to hold the results of the GetOpenFile call without making the dialog appear twice? So, it turns out that Martlark's article is right on the money. My 2 mistakes were: 1) I forgot to add MAX_PATH into the size to applcate in the hook, and 2) This only works in the unicode version of GetOpenFileName. (I was compiling with UNICODE not defined)
An interesting problem. I guess you could just allocate all of memory; just in case! But this document suggests using a hook proc: http://support.microsoft.com/kb/131462 And all in delightfully understandable non-OO C!
656,706
656,735
Installing Root CA Cert via code on Win32
We've just set up a new remote access solution using Microsoft's TS Gateway, which requires a couple of somewhat fiddly steps on the end users behalf in order to get it working (installing our root ca cert, requirement of RDP 6.1 client etc). In order to make this setup process as easy as possible (a lot of these users aren't technically minded), I'm looking to create a program to perform all these tasks automatically. I have most of it working, however I'm not entirely sure how to go about importing the Root CA cert into the Windows certificate store. Because this can potentially be run on a wide range of computers with varying levels of patches and updates, I'm steering well clear of .NET and anything that isn't native - the tool should 'just run' without the user having to install anything extra (well, I will say windows XP, no service packs, is the bare minimum required version of windows). In saying that, I don't mind using something third party if it can be bundled in with the tool, as long as it's not huge, and doesn't introduce any interactive steps. Ideally something in the windows API would be best, however I can't seem to track down anything relevant. Currently the tool is a C++ application, so I don't mind if it's quite low level stuff.
First you need to open the root certificate store... HCERTSTORE hRootCertStore = CertOpenSystemStore(NULL,"ROOT"); Then add the certificate using one of the CertAdd functions, such as CertAddEncodedCertificateToStore. CertAddEncodedCertificateToStore(hRootCertStore,X509_ASN_ENCODING | PKCS_7_ASN_ENCODING,pCertData,cbCertData,CERT_STORE_ADD_USE_EXISTING,NULL); pCertData and cbCertData would likely point to the certificate data that you read from a file (not sure if the certificate will be in a file, or how you will include it in your application). Then close the store with... CertCloseStore(hRootCertStore,0); NOTE: This code, if run as the user, installs the certificate to the user's root store, not the computer's. It also results in a warning dialog that the user must understand and select "Yes" in to authorize the import. If your setup program can run this code in a system account, the import will affect the computer's root store and no warning dialog will be shown.
656,948
657,113
a class-key must be declared when declaring a friend
The g++ compiler complains with this error when I declare a friend thusly: friend MyClass; instead of friend class MyClass; Why should the class keyword be required? (the Borland C++ compiler, BTW, does not require it.) Couldn't the compiler simply look-up MyClass in the symbol table and tell it was declared as a class? (it is obviously doing the look-up anyway because it complains when MyClass it not declared) It is not like it is making a forward declaration of the class: I still have to have either declared the class above or at least have forward declared it. It would make sense to me (would be great actually) if friend class MyClass; makes a forward declaration if needed, otherwise it just seems like syntactic salt to me. I have been merrily using friend statements without the class or struct keyword with no compiler complaints for almost 20 years. Is this something fairly new?
I was surprised about this (and as a result deleted a previous incorrect answer). The C++03 standard says in 11.4: An elaborated-type-specifier shall be used in a friend declaration for a class. Then to make sure there's no misunderstanding, it footnotes that with: The class-key of the elaborated-type-specifier is required. GCC is the only compiler that I have that complains about the missing class-key, but it looks like other compilers are letting us get away with something non-standard... Now as for the rationale - you'd have to ask someone who knows more about compilers (or standards) than I do.
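A minimal sketch of the required form (the class names here are illustrative, not from the question). Note that with the class-key present, the friend declaration itself also serves as a first declaration of the class; and as an aside, C++11's extended friend declarations later made the plain friend MyClass; form legal for an already-declared class, which may be what the more permissive compilers were anticipating:

```cpp
#include <cassert>

class Holder {
    int secret;
public:
    Holder() : secret(42) {}
    // C++03 11.4: the elaborated-type-specifier (the class-key) is required.
    friend class MyClass;   // OK - also declares MyClass if not seen before
    // friend MyClass;      // rejected by a strictly conforming C++03 compiler
};

class MyClass {
public:
    // Allowed to read Holder's private data because of the friend declaration.
    int peek(const Holder& h) const { return h.secret; }
};
```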
657,141
657,166
How to avoid blocking (C++, Win32)
I'm making a DLL that has to respond to an application's requests. One of the application's requirements is that a call should not take long to complete. Say, I have a function foo(), which is called by the host application: int foo(arg){ // some code I need to execute, say, LengthyRoutine(); return 0; } Let's say foo has to perform a task (or call a function) that is certain to take a long time. The application allows me to set a wait variable; if this variable is non-zero when foo returns, it calls foo again and again (resetting the wait variable before each call) until wait is returned as 0. What's the best approach to this? Do I go: int foo(arg){ if (inRoutine == TRUE) { wait = 1; return 0; } else { if (doRoutine == TRUE) { LengthyRoutine(); return 0; } } return 0; } This doesn't really solve the problem that LengthyRoutine is gonna take a long time to complete. Should I spawn a thread of some sort that updates inRoutine depending on whether or not it has finished its task? Thanks.
Spawning another thread is pretty much the best way to do it; just make sure you set the result variables before you set the variable that says you're finished, to avoid race conditions. If this is called often you might want to spawn a worker thread ahead of time and reuse it, to avoid thread-start overhead. There is another possible solution: do part of the work each time the function is called. However, this spends more time in the DLL and probably isn't optimal, and the worker code is more complex to implement for most algorithms.
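A sketch of the worker-thread approach, expressed with standard C++11 threads for portability (the original Win32 context would use CreateThread or _beginthreadex instead; foo, LengthyRoutine and the wait flag follow the question's naming). The point from the answer is visible in LengthyRoutine: the result is published before the finished flag is set:

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// Shared state between the DLL entry point and its worker thread.
std::atomic<bool> finished(false);
int result = 0;   // hypothetical result of the long-running work

void LengthyRoutine() {
    result = 123;                                     // publish the result first...
    finished.store(true, std::memory_order_release);  // ...then signal completion
}

// Each call either starts the work or reports "still busy" via wait = 1.
int foo(std::thread& worker, int& wait) {
    if (!worker.joinable()) {                 // first call: kick off the work
        worker = std::thread(LengthyRoutine);
        wait = 1;
        return 0;
    }
    if (!finished.load(std::memory_order_acquire)) {
        wait = 1;                             // still running: host should call again
        return 0;
    }
    worker.join();                            // done: reap the thread, return result
    wait = 0;
    return result;
}
```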
657,155
669,621
How to enable_shared_from_this of both parent and derived
I have a simple base and derived class, and I want both to have shared_from_this(). This simple solution: class foo : public enable_shared_from_this<foo> { void foo_do_it() { cout<<"foo::do_it\n"; } public: virtual function<void()> get_callback() { return boost::bind(&foo::foo_do_it,shared_from_this()); } virtual ~foo() {}; }; class bar1 : public foo , public enable_shared_from_this<bar1> { using enable_shared_from_this<bar1>::shared_from_this; void bar1_do_it() { cout<<"foo::do_it\n"; } public: virtual function<void()> get_callback() { return boost::bind(&bar1::bar1_do_it,shared_from_this()); } }; causes a tr1::bad_weak_ptr exception in the following code: shared_ptr<foo> ptr(shared_ptr<foo>(new bar1)); function<void()> f=ptr->get_callback(); f(); So after "googling" I found the following solution: class bar2 : public foo { void bar2_do_it() { cout<<"foo::do_it\n"; } shared_ptr<bar2> shared_from_this() { return boost::static_pointer_cast<bar2>(foo::shared_from_this()); } public: virtual function<void()> get_callback() { return boost::bind(&bar2::bar2_do_it,shared_from_this()); } }; And now it works. Is there any better, more convenient and correct way to enable_shared_from_this for both parent and child? Thanks
Sorry, but there isn't. The problem is that shared_ptr<foo> and shared_ptr<bar1> are different types. I don't understand everything that's going on under the hood, but I think that when the constructor returns and is assigned to a shared_ptr<foo>, the internal weak_ptr<bar1> sees that nothing is pointing to it (because only a shared_ptr<bar1> would increment the counter) and resets itself. When you call bar1::shared_from_this in get_callback, you get the exception because the internal weak_ptr isn't pointing to anything. Essentially, enable_shared_from_this only seems to work transparently from a single class in a hierarchy. If you try implementing it manually, the problem should become obvious.
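For reference, the asker's static_pointer_cast workaround reduced to its core, written with the std:: equivalents of the tr1/boost components: only the base inherits enable_shared_from_this, and the derived class downcasts the result instead of inheriting it a second time:

```cpp
#include <cassert>
#include <memory>

struct foo : std::enable_shared_from_this<foo> {
    virtual ~foo() {}
};

struct bar : foo {
    // Shadow the base version with one returning shared_ptr<bar>.
    std::shared_ptr<bar> shared_from_this() {
        return std::static_pointer_cast<bar>(foo::shared_from_this());
    }
    int answer() const { return 42; }   // illustrative member
};
```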
657,248
657,258
Is it acceptable for a C++ programmer to not know how null-terminated strings work?
Is there any way for a C++ programmer with 1.5 years of experience to have no idea that null-terminated strings exist as a concept and are widely used in a variety of applications? Is this a sign that he is potentially a bad hire?
What does he use -- std::string only? Does he know about string literals? What is his take on string literals? There's too little detail to tell you if he's a bad hire, but he sounds like he needs a bit more talking to than most.
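For anyone following along, the concept itself in two lines: a string literal is a char array with a hidden '\0' sentinel at the end, which is what the C string functions rely on to find the end:

```cpp
#include <cassert>
#include <cstring>

// "hi" occupies 3 bytes: 'h', 'i', and the terminating '\0'.
const char greeting[] = "hi";
```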
657,281
657,312
How do you model application states?
I'm writing a game, and I want to model its different states (the Game Maker analogy would be frames, I guess) in a clean, object-oriented way. Previously, I've done it in the following way: class Game { enum AppState { APP_STARTING, APP_TITLE, APP_NEWGAME, APP_NEWLEVEL, APP_PLAYING, APP_PAUSED, APP_ENDED }; typedef AppState(Game::*StateFn)(); typedef std::vector<StateFn> StateFnArray; void Run() { // StateFn's to be registered here AppState lastState(APP_STARTING); while(lastState != APP_ENDED) { lastState = GetCycle_(lastState); } // cleanup } protected: // define StateFn's here AppState GetCycle_(AppState a) { // pick StateFn based on passed variable, call it and return its result. } StateFnArray states_; }; This was hardly manageable for a smaller project. All the variables that the states were using were dumped in the Game class, however I'd want to keep object-orientedness to a maximum, only exposing variables that are shared by more than one state. I also want to be able to initialize a new state when switching to it rather than having to do it in the state that's just finishing (as it might have multiple outcomes - APP_PLAYING can switch to APP_PAUSED, APP_GAMEOVER, APP_NEWLEVEL, etc.). I thought of something like this (CAUTION! FUZZY STUFF!): struct AppState { enum { LAST_STATE = -1 }; typedef int StateID; typedef std::vector<AppState*> StateArray; static bool Add(AppState *state, StateID desiredID); // return false if desiredID is an id already taken static void Execute(StateID state) { while(state != LAST_STATE) { // bounds check etc. state = states_[state]->Execute(); } } AppState() {}; virtual ~AppState() {}; virtual StateID Execute() =0; // return the ID for the next state to be executed protected: static StateArray states_; }; The problem here is that the class and instance levels are getting jumbled up (static vs virtual).
The states need to inherit from AppState, but - as I imagine it - most of them would be classes with all-static members, or at least I won't need more than one instance of each class (TitleState, LevelIntroState, PlayingState, GameOverState, EndSequenceState, EditorState... - pausing would no longer be a state, but rather be taken care of in the states where it makes sense). How can it be done elegantly and efficiently?
The following article gives a nice, simple way to manage game states: http://gamedevgeek.com/tutorials/managing-game-states-in-c/ Basically, you maintain a stack of game states, and just run the top state. You're right that many states would only have one instance, but this isn't really a problem. Actually, though, many of the states you're talking about could have multiple instances. E.g.: push TitleState push MenuState push LevelIntroState change_to PlayingState change_to GameOverState pop (back to MenuState) ... and you can start over with a new instance of LevelIntroState, and so on.
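A minimal sketch of that stack-of-states idea (the names here are illustrative, not the article's exact API): the manager owns a stack of heap-allocated states, push/pop enter and leave states, and change_to replaces the state on top:

```cpp
#include <cassert>
#include <string>
#include <vector>

struct GameState {
    std::string name;   // stand-in for real per-state data and behaviour
    explicit GameState(const std::string& n) : name(n) {}
    virtual ~GameState() {}
};

class StateManager {
    std::vector<GameState*> stack_;   // top of the stack is the active state
public:
    ~StateManager() { while (!stack_.empty()) pop(); }
    void push(GameState* s)      { stack_.push_back(s); }
    void pop()                   { delete stack_.back(); stack_.pop_back(); }
    void change_to(GameState* s) { pop(); push(s); }   // replace the top state
    GameState* top() const       { return stack_.back(); }
};
```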
657,472
657,479
What programs are there for parser generation?
I recently took a class at school where we had to learn Scheme to build a parser for a simple made-up Scheme-like language. As the semester went on, we added to our parser to make it more and more interesting. Since then, on my own time, I've started writing my own parser that's quite a bit neater than what I did in class, but it parses my C++ code, gathers a list of all the data structures and public members, and generates a recursive descent parser for it. For example, if I have the following class Bar { public: int a; }; class Foo { public: Bar* myBar; int x; }; and I run my parser generator on it, it spits out a new class simply called Parser that I can use like this to read from a file: Parser p; Foo* f = p.parseFoo("example.txt"); where example.txt would be something like this: Foo { myBar { a 5 } x 10 } (This is just a simple example, there are some other neat things too like recognizing when it should be push_back-ing onto a vector, and being able to assign function callbacks) This seems like the type of thing that other (probably smarter) people ought to have done before me. I did some quick Google searches to see what was out there, but there's a lot of stuff to sift through. So my question is this: are there tools out there that do what my parser generator does right now? Is this project worth continuing* or are there better tools out there already? *of course it's always worth continuing as a learning experience. Rather, I mean from the point of view of a user who would want to read data structures to and from text.
lex and yacc (or rather, flex and bison) are powerful tools that will help you generate parsers with ease: flex generates the scanner from regular expressions describing your tokens, and bison generates a parser from a context-free grammar built on top of those tokens.
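Before reaching for a generator, it is worth seeing how little code the question's format actually needs. A hand-rolled recursive-descent parser for the flat "name value" / "name { ... }" blocks might look like the following hedged sketch (nested blocks are flattened under a dotted prefix; a tool like bison automates generating exactly this kind of code from a grammar):

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

typedef std::map<std::string, std::string> Values;

// Parse the tokens of one { ... } block, recursing into nested blocks.
void parseBlock(std::istringstream& in, const std::string& prefix, Values& out) {
    std::string tok;
    while (in >> tok && tok != "}") {
        std::string next;
        in >> next;
        if (next == "{")
            parseBlock(in, prefix + tok + ".", out);  // nested block
        else
            out[prefix + tok] = next;                 // plain "name value" pair
    }
}

// Parse a whole document such as "Foo { myBar { a 5 } x 10 }".
Values parse(const std::string& text) {
    std::istringstream in(text);
    std::string name, brace;
    in >> name >> brace;              // e.g. "Foo" then "{"
    Values out;
    parseBlock(in, name + ".", out);
    return out;
}
```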
657,511
657,521
C++ compiler that supports C++0x features?
Is there any C++ compiler that already supports C++0x features?
Both the 2008 Visual C++ 'Feature Pack' and g++ support some features. The list of C++0x features supported by g++. The Visual C++ 2008 Feature Pack ... includes an implementation of TR1. Portions of TR1 are scheduled for adoption in the upcoming C++0x standard as the first major addition to the ISO 2003 standard C++ library. Our implementation includes a number of important features such as: Smart pointers Regular expression parsing New containers (tuple, array, unordered set, etc) Sophisticated random number generators Polymorphic function wrappers Type traits And more!
657,724
657,780
How to add a conditional breakpoint in Visual C++
I want to add a breakpoint condition to my code in VC++ Express 2005, so that the breakpoint only triggers if a local variable meets a specified criteria. e.g. bool my_test(UIDList test_list) { foo(test_list); bar(test_list); // I have a breakpoint here, but only want it to trigger if test_list.Length() > 0 print(test_list); } Having right-clicked on my breakpoint and selected "Condition..." I have found a dialog that appears to do what I want, however anything I try typing into the text field results in the following error: Unable to evaluate the breakpoint condition: CX0052: Error: member function not present I tried the help documentation, but I couldn't find my answer. I'm hoping someone experienced in VC++ might be able to point me in the right direction... I have previously tried upgrading to a more recent version of VC++ Express, but the project did not import cleanly. Due to the complexity of the project and my current time scales I can't consider upgrading as a solution at this point.
use the DebugBreak() function: bool my_test(UIDList test_list) { foo(test_list); bar(test_list); if (test_list.Length() > 0) // break into the debugger only when the condition holds DebugBreak(); print(test_list); } Or you can always use assert(expression): bool my_test(UIDList test_list) { foo(test_list); bar(test_list); assert(test_list.Length() > 0); // will break here print(test_list); }
657,783
661,228
How does Intel TBB's scalable_allocator work?
What does the tbb::scalable_allocator in Intel Threading Building Blocks actually do under the hood? It can certainly be effective. I've just used it to take 25% off an app's execution time (and see an increase in CPU utilization from ~200% to 350% on a 4-core system) by changing a single std::vector<T> to std::vector<T,tbb::scalable_allocator<T> >. On the other hand, in another app I've seen it double an already large memory consumption and send things to swap city. Intel's own documentation doesn't give a lot away (e.g. a short section at the end of this FAQ). Can anyone tell me what tricks it uses before I go and dig into its code myself? UPDATE: Just using TBB 3.0 for the first time, and I've seen my best speedup from scalable_allocator yet. Changing a single vector<int> to a vector<int,scalable_allocator<int> > reduced the runtime of something from 85s to 35s (Debian Lenny, Core2, with TBB 3.0 from testing).
There is a good paper on the allocator: The Foundations for Scalable Multi-core Software in Intel Threading Building Blocks My limited experience: I overloaded the global new/delete with the tbb::scalable_allocator for my AI application. But there was little change in the time profile. I didn't compare the memory usage though.
657,920
657,935
Should lookup tables be static
I have a Message class which parses text messages using lookup tables. I receive a lot of messages and create and destroy a lot of objects, so I thought I'd declare those lookup tables as static members to prevent initializing the same tables with the same values again and again. Is this the correct approach, or is there a more appropriate C++ way? Thanks.
This sounds like the right way to do it, although I'd expect the compiler to optimize this. Have you benchmarked your application, and does declaring the tables as static speed it up? Also note that if you have many large lookup tables, performance will increase, but the tables will be held in memory all the time.
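One common C++ shape for this - a hedged sketch, with made-up table contents - is a function-local static, which is initialized on first use rather than once per Message object:

```cpp
#include <cassert>
#include <map>
#include <string>

class Message {
public:
    // Built once, on the first call, and shared by every Message afterwards.
    static const std::map<std::string, int>& table() {
        static std::map<std::string, int> t;
        if (t.empty()) {
            t["ACK"] = 1;   // illustrative table contents
            t["NAK"] = 2;
        }
        return t;
    }
    static int code(const std::string& name) {
        std::map<std::string, int>::const_iterator it = table().find(name);
        return it == table().end() ? 0 : it->second;
    }
};
```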
657,964
657,968
Is using NULL references OK?
I came across this code: void f(const std::string &s); And then a call: f( *((std::string*)NULL) ); And I was wondering what others think of this construction; it is used to signal that function f() should use some default value (which it computes) instead of some user-provided value. I am not sure what to think of it; it looks weird. But what do you think of this construction?
No. It is undefined behaviour and can cause the code to do anything (including reformatting your hard disk, core dumping or insulting your mother). If you need to be able to pass NULL, then use pointers. Code that takes a reference can assume it refers to a valid object. Addendum: The C++03 Standard (ISO/IEC 14882, 2nd edition 2003) says, in §8.3.2 "References", paragraph 4: A reference shall be initialized to refer to a valid object or function. [Note: in particular, a null reference cannot exist in a well-defined program, because the only way to create such a reference would be to bind it to the “object” obtained by dereferencing a null pointer, which causes undefined behavior. As described in 9.6, a reference cannot be bound directly to a bit-field. ] [Bold added for emphasis]
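If "no value" genuinely needs to be expressible, the conventional alternative looks like this hypothetical reworking of f() - a pointer parameter, since NULL is a legal pointer value while a null reference is not:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// NULL now signals "use the computed default" without any undefined behaviour.
std::string f(const std::string* s) {
    if (s == NULL)
        return "default";   // the computed default would go here
    return *s;
}
```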
658,016
658,135
expat parser: memory consumption
I am using the expat parser to parse an XML file of around 15 GB. The problem is that it throws an "Out of Memory" error and the program aborts. I want to know: has anybody faced a similar issue with the expat parser, or is it a known bug that has been rectified in later versions?
I've used expat to parse large files before and never had any problems. I'm assuming you're using SAX and not one of the expat DOM wrappers. If you are using DOM, then that's your problem right there - it would be essentially trying to load the whole file into memory. Are you allocating objects as you parse the XML and maybe not deallocating them? That would be the first thing I would check for. One way to check if the problem is really with expat or not - if you reduce the program to a simple version that has empty tag handlers (i.e. it just parses the file and does nothing with the results) does it still run out of memory?
658,133
658,180
c++: when to use pointers?
After reading some tutorials I came to the conclusion that one should always use pointers for objects. But I have also seen a few exceptions while reading some Qt tutorials (http://zetcode.com/gui/qt4/painting/) where a QPainter object is created on the stack. So now I am confused. When should I use pointers?
If you don't know when you should use pointers just don't use them. It will become apparent when you need to use them, every situation is different. It is not easy to sum up concisely when they should be used. Do not get into the habit of 'always using pointers for objects', that is certainly bad advice.
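As a rough illustration of the usual rule of thumb (not a complete treatment): prefer automatic (stack) objects, and reach for new only when the object must outlive the scope that created it - at the price of manual cleanup:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Automatic storage: s lives on the stack and is destroyed at the closing brace.
std::size_t stack_length() {
    std::string s("stack");
    return s.size();
}

// Heap storage: the object outlives this function, so the caller owns it and
// must remember to delete it - one reason not to use pointers by default.
std::string* heap_string() {
    return new std::string("heap");
}
```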
658,355
658,529
Why a child window may not receive mouse events?
I have a custom WTL control which is a panel with a list and a custom scroll bar. class Panel : public ATL::CWindowImpl<Panel>, public WTL::CDoubleBufferImpl<Panel> { public: DECLARE_WND_CLASS("Panel") BEGIN_MSG_MAP_EX(Panel) MSG_WM_CREATE(OnCreate) MSG_WM_DESTROY(OnDestroy) MSG_WM_SIZE(OnSize) CHAIN_MSG_MAP(CDoubleBufferImpl<Panel>) REFLECT_NOTIFICATIONS() END_MSG_MAP() The scroll bar is created by the panel in OnCreate(): m_scrollBar.Create(m_hWnd, WTL::CRect(...)); That scroll bar works fine in many other dialog windows. However, inside that panel control the scroll bar appears, but receives no mouse events at all. If I add WM_MOUSEMOVE handler to the panel, it does get called. What could be the problem?
Found it. The problem was in the scroll bar class declaration: class CScrollBase : public ATL::CWindowImpl<CScrollBase, WTL::CStatic> Changing to: class CScrollBase : public ATL::CWindowImpl<CScrollBase> makes the scroll bar work on the panel.
658,403
658,495
How would you implement a basic event-loop?
If you have worked with GUI toolkits, you know that there is an event-loop/main-loop that should be executed after everything is done, and that will keep the application alive and responsive to different events. For example, for Qt, you would do this in main(): int main() { QApplication app(argc, argv); // init code return app.exec(); } In this case, app.exec() is the application's main-loop. The obvious way to implement this kind of loop would be: void exec() { while (1) { process_events(); // create a thread for each new event (possibly?) } } But this pegs the CPU at 100% and is practically useless. Now, how can I implement such an event loop that is responsive without eating the CPU altogether? Answers are appreciated in Python and/or C++. Thanks. Footnote: For the sake of learning, I will implement my own signals/slots, and I would use those to generate custom events (e.g. go_forward_event(steps)). But if you know how I can use system events manually, I would like to know about that too.
I used to wonder a lot about the same! A GUI main loop looks like this, in pseudo-code: void App::exec() { for(;;) { vector<Waitable> waitables; waitables.push_back(m_networkSocket); waitables.push_back(m_xConnection); waitables.push_back(m_globalTimer); Waitable* whatHappened = System::waitOnAll(waitables); switch(whatHappened) { case &m_networkSocket: readAndDispatchNetworkEvent(); break; case &m_xConnection: readAndDispatchGuiEvent(); break; case &m_globalTimer: readAndDispatchTimerEvent(); break; } } } What is a "Waitable"? Well, it's system-dependent. On UNIX it's called a "file descriptor" and "waitOnAll" is the ::select system call. The so-called vector<Waitable> is a ::fd_set on UNIX, and "whatHappened" is actually queried via FD_ISSET. The actual waitable-handles are acquired in various ways, for example m_xConnection can be taken from ::XConnectionNumber(). X11 also provides a high-level, portable API for this -- ::XNextEvent() -- but if you were to use that, you wouldn't be able to wait on several event sources simultaneously. How does the blocking work? "waitOnAll" is a syscall that tells the OS to put your process on a "sleep list". This means you are not given any CPU time until an event occurs on one of the waitables. This, then, means your process is idle, consuming 0% CPU. When an event occurs, your process will briefly react to it and then return to idle state. GUI apps spend almost all their time idling. What happens to all the CPU cycles while you're sleeping? Depends. Sometimes another process will have a use for them. If not, your OS will busy-loop the CPU, or put it into temporary low-power mode, etc. Please ask for further details!
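Here is one blocking wait from that pseudo-code made concrete on POSIX (names are illustrative), with a pipe standing in for the X connection or network socket. select() is the point where the process sleeps at 0% CPU until a descriptor becomes readable:

```cpp
#include <cassert>
#include <sys/select.h>
#include <unistd.h>

// One iteration of the loop, watching a single descriptor.
char wait_for_one_event(int fd) {
    fd_set readable;
    FD_ZERO(&readable);
    FD_SET(fd, &readable);               // the vector<Waitable>
    select(fd + 1, &readable, 0, 0, 0);  // System::waitOnAll - process sleeps here
    assert(FD_ISSET(fd, &readable));     // "whatHappened"
    char event = '\0';
    if (read(fd, &event, 1) != 1)        // readAndDispatch...Event()
        return '\0';
    return event;
}
```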
659,000
659,586
RegisterDeviceNotification Returns NULL but notifications still received
I'm using RegisterDeviceNotification to watch for changes to a USB device, using the WM_DEVICECHANGE event. However, when I call RegisterDeviceNotification() it returns NULL for the notification handle, which should indicate that it failed. But GetLastError() returns ERROR_SUCCESS and the notifications actually go through. This causes real trouble when I, later on, need to UnRegister for notifications and then re-register. Since the handle it returned was NULL, I can't actually re-register. If I try, RegisterDeviceNotification() just hangs. My code is as follows: DEV_BROADCAST_HANDLE devBrHdr; ::ZeroMemory( &devBrHdr,sizeof( DEV_BROADCAST_HANDLE ) ); devBrHdr.dbch_size = sizeof( DEV_BROADCAST_HANDLE ); devBrHdr.dbch_devicetype = DBT_DEVTYP_HANDLE; devBrHdr.dbch_handle = m_hDriver; m_hDevNotify = ::RegisterDeviceNotification( hWnd,&devBrHdr,DEVICE_NOTIFY_WINDOW_HANDLE ); m_hDriver is the Handle to the driver, which I opened earlier, upon connecting to the device (a USB barcode scanner). Any ideas what I'm doing wrong here?
Make sure you are not making another Win32 API call between RegisterDeviceNotification and GetLastError. Check the value of devBrHdr.dbch_hdevnotify. It should contain the same handle returned by RegisterDeviceNotification. Was the m_hDriver value obtained from a call to CreateFile?
659,166
661,791
Write C++ in a graphical scratch-like way?
I am considering the possibility of designing an application that would allow people to develop C++ code graphically. I was amazed when I discovered Scratch (see site and tutorial videos). I believe most of C++ can be represented graphically, with the exceptions of preprocessor instructions and possibly function pointers. What C++ features do you think could be (or not be) represented by graphical items? What would be the pros and cons of such an application? How much simpler would it be than "plain" C++? RECAP and MORE: Pros: intuitive; simple for small applications; helps avoid typos. Cons: may become unreadable for large (medium?)-sized applications; manual coding is faster for experienced programmers; C++ is too complicated a language for such an approach. Considering that we - at my work - already have quite a bit of existing C++ code, I am not looking for a completely new way of programming. I am considering an alternate way of programming that is fully compatible with legacy code. Some kind of "viral language" that people would use for new code and, hopefully, would eventually use to replace existing code as well (where it could be useful). How do you feel towards this viral approach? When it comes to manual vs graphical programming, I tend to agree with your answers. This is why, ideally, I'll find a way to let the user always choose between typing and graphical programming. A line-by-line parser (+partial interpreter) might be able to convert typed code into graphical design. It is possible. Let's all cross our fingers. Are there caveats to providing both typing and graphical programming capabilities that I should think about and analyze carefully? I have already worked on template classes (and more generally type-level C++) and their graphical representation. See there for an example of graphical representation of template classes. Boxes represent classes or class templates.
The first top node is the class itself; the next ones (if any) are typedef instructions inside the class. Bottom nodes are template arguments. Edges, of course, connect classes to template arguments for instantiations. I already have a prototype for working on such type-level diagrams. If you feel this way of representing template classes is plain wrong, don't hesitate to say so and why!
Writing code is the easiest part of a developer's day. I don't think we need more help with that. Reading, understanding, maintaining, comparing, annotating, documenting, and validating is where - despite a gargantuan amount of tools and frameworks - we are still lacking. To dissect your pros: Intuitive and simple for small applications - replace that with "misleading". It makes it look simple, but it isn't: As long as it is simple, VB.NET is simpler. When it gets complicated, visual design would get in the way. Help avoid typos - that's what a good style, consistency and last but not least intellisense are for. The things you need anyway when things aren't simple anymore. Wrong level You are thinking on the wrong level: C++ statements are not reusable, robust components, they are more like a big bag of gears that need to be put together correctly. C++ with its complexity and exceptions (to rules) isn't even particularly suited. If you want to make things easy, you need reusable components at a much higher level. Even if you have these, plugging them together is not simple. Despite years of struggle, and many attempts in many environments, this sometimes works and often fails. Viral - You are correct IMO about that requirement: allow incremental adoption. This is closely related to switching smoothly between source code and visual representation, which in turn probably means you must be able to generate the visual representation from modified source code. IDE Support - here's where most language-centered approaches go astray. A modern IDE is more than just a text editor and a compiler. What about debugging your graph - with breakpoints, data inspection etc? Will profilers, leak detectors etc. highlight nodes in your graph? Will source control give me a Visual Diff of yesterday's graph vs. today's? Maybe you are on to something, despite all my "no"s: a better way to visualize code, a way to put different filters on it so that I see just what I need to see.
659,231
659,447
Crossing assignment operators?
It's time for my first question now. How do you cross assignments operators between two classes? class B; class A { public: A &operator = ( const B &b ); friend B &B::operator = ( const A &a ); //compiler error }; class B { public: B &operator = ( const A &a ); friend A &A::operator = ( const B &b ); }; I searched for how to forward declare a member function like: class B; B &B::operator = ( const A &a ); //error But I didn't find anything. And I don't want to make the classes all-out friends with each other. How do I do this?
The reason for the compiler error is a circular dependency. Each of your operator=() functions requires knowledge of the operator=() function inside the other class, so no matter which order you define your classes in, there will always be an error. Here is one way to sort it out. It isn't very elegant, but it will do what you want: class A; class B; A & set_equal(A & a, const B & b); B & set_equal(B & b, const A & a); class A { private: int x; public: A & operator=(const B & b) { return set_equal(*this, b); } friend B & set_equal(B & b, const A & a); friend A & set_equal(A & a, const B & b); }; class B { private: int y; public: B & operator=(const A & a) { return set_equal(*this, a); } friend A & set_equal(A & a, const B & b); friend B & set_equal(B & b, const A & a); }; A & set_equal(A & a, const B & b) { a.x = b.y; return a; } B & set_equal(B & b, const A & a) { b.y = a.x; return b; } You may also be able to solve this problem with inheritance. edit: here is an example using inheritance. This will work if the copying procedure only needs access to some common data shared by both A and B, which would seem likely if the = operator is to have any meaning at all. class A; class B; class common { protected: int x; void copyFrom(const common & c) { x = c.x; } }; class A : public common { public: A & operator=(const common & c) { copyFrom(c); return *this; } }; class B : public common { public: B & operator=(const common & c) { copyFrom(c); return *this; } };
659,270
659,278
Why is there a special new and delete for arrays?
What is wrong with using delete instead of delete[]? Is there something special happening under the covers for allocating and freeing arrays? Why would it be different from malloc and free?
Objects created with new[] must use delete[]. Using delete is undefined on arrays. With malloc and free you have a more simple situation. There is only 1 function that frees the data you allocate, there is no concept of a destructor being called either. The confusion just comes in because delete[] and delete look similar. Actually they are 2 completely different functions. Using delete won't call the correct function to delete the memory. It should call delete[](void*) but instead it calls delete(void*). For this reason you can't rely on using delete for memory allocated with new[] See this C++ FAQ [16.13] Can I drop the [] when deleteing array of some built-in type (char, int, etc)? No! Sometimes programmers think that the [] in the delete[] p only exists so the compiler will call the appropriate destructors for all elements in the array. Because of this reasoning, they assume that an array of some built-in type such as char or int can be deleted without the []. E.g., they assume the following is valid code: void userCode(int n) { char* p = new char[n]; ... delete p; // ← ERROR! Should be delete[] p ! } But the above code is wrong, and it can cause a disaster at runtime. In particular, the code that's called for delete p is operator delete(void*), but the code that's called for delete[] p is operator delete[](void*). The default behavior for the latter is to call the former, but users are allowed to replace the latter with a different behavior (in which case they would normally also replace the corresponding new code in operator new[](size_t)). If they replaced the delete[] code so it wasn't compatible with the delete code, and you called the wrong one (i.e., if you said delete p rather than delete[] p), you could end up with a disaster at runtime. Why does delete[] exist in the first place? Whether you do x or y: char * x = new char[100]; char * y = new char; Both are stored in char * typed variables. 
I think the reason for the decision to have both delete and delete[] goes along with a long list of decisions that are in favor of efficiency in C++: it means there is no enforced price of looking up how much needs to be deleted for a normal delete operation. And given that there are the two forms new and new[], it seems only logical to have delete and delete[] anyway, for symmetry.
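The rule in practice, as a tiny sketch: each allocation form paired with its matching deallocation form - the brackets are what tell the runtime to run the destructor for every element, not just the first:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Mixing the two forms (delete on a new[] pointer, or vice versa)
// is undefined behaviour, even for built-in types.
bool correct_pairing(std::size_t n) {
    std::string* one  = new std::string("x");   // from new ...
    std::string* many = new std::string[n];     // from new[] ...
    bool ok = (one->size() == 1) && many[n - 1].empty();
    delete one;       // ... to delete
    delete[] many;    // ... to delete[] - the [] is mandatory here
    return ok;
}
```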
659,376
659,438
Are there any good beginner tutorials for threads in windows? C++
Looking for a good site or book that explains windows threads, preferably for a beginner. Maybe has a example program to run, etc....
You want Chapter 20 of Programming Windows by Charles Petzold "Multitasking and Multithreading". It also covers related things like synchronization, and events. This book is a classic, and probably one of the best ways to get a very good understanding of how Windows Win32 programming works with C++. Otherwise you can start on this MSDN pages for CreateThread. For a more portable solution, boost threads are another way to go as well. Combined with boost::bind and several boost synchronization objects, it makes for a very powerful threading library.
659,581
659,593
Replace giant switch statement with what?
I have code that parses some template files and, when it finds a placeholder, replaces it with a value. Something like: <html> <head> <title>%title%</title> </head> <body bgcolor="%color%"> ...etc. In code, the parser finds those, calls this function: string getContent(const string& name) { if (name == "title") return page->getTitle(); else if (name == "color") return getBodyColor(); ...etc. } and then replaces the original placeholder with the returned value. In the real case, it is not a dummy web page, and there are many (50+) different placeholders that can occur. My code is C++, but I guess this problem exists with any language. It's more about algorithms and OO design I guess. The only important thing is that this must be compiled; even if I wanted to, I couldn't have any dynamic/eval'd code. I thought about implementing the Chain of Responsibility pattern, but it doesn't seem it would improve the situation much. UPDATE: and I'm also concerned about this comment in another thread. Should I care about it?
Use a dictionary that maps tag names to a tag handler.
659,825
659,863
System::Windows::Forms::Form::ShowDialog()
I have a two-form setup. My first form calls the other form with the ShowDialog() method when I receive data from the serial port. ShowDialog() throws InvalidOperationException with the additional information "DragDrop registration did not succeed". Any hints on how to solve this?
Cheesy google search answer... but it seems to address your issue: http://social.msdn.microsoft.com/forums/en-US/vbgeneral/thread/269b8950-2ef9-483f-8124-e2725ffd5d83/
660,517
660,596
Minor question regarding templated functions in templated class
I am trying to understand some C++ syntax: template<class T> class Foo { Foo(); template<class U> Foo(const Foo<U>& other); }; template<class T> Foo<T>::Foo() { /*normal init*/ } template<class T> template<class U> Foo<T>::Foo(const Foo<U>& other) { /*odd copy constructed Foo*/ } So, I wrote code like this, and it happens to compile fine on Windows and Linux. What I don't understand is why the copy constructor has two templates defined as so. Basically, I had to experiment a bit before I found the correct syntax, and I would like to know why that particular syntax is correct, and not something like template<class T, class U>.
It has to have separate template clauses for each template that is involved. Here, two templates are involved, each of which deserves its own (non-empty) template clause: The class template Foo The constructor template Consider this case which fails because of the ambiguity as to where the parameter U belongs to template<typename T> struct A { template<typename U> void f(); }; template<typename T, typename U> void A<T>::f() { } Now, what is up with the parameter U? Sure the compiler could guess it could belong to f, but guesswork is not what the compiler likes :) The existing rule says that depending on the nesting of templates, template clauses appear in the right order. Everything is clear then. Even if one came up with a rule for how to match the parameters to arguments of the templates involved (so far I don't see a real difficulty in doing that), it would be inconsistent. Because as of now, one template clause lists all parameters that the corresponding template accepts. Much like a function parameter list. If we put everything into one clause, that clear semantic could be broken - not to mention that when we put the definition into the class again, all of a sudden the template would get its own clause: // provides arguments for A's parameters, then for f ones // when it's called A<int> a; a.f<bool>(); It's much more natural when we have separate template clauses that each catch their own arguments. So, the syntax for the above wrong definition is template<typename T> template<typename U> void A<T>::f() { } Now, also the reader of the code immediately sees that this is a definition of a member template, and not a (potentially accidentally declared but unused) second parameter for A.
660,667
660,753
iostream and large file support
I'm trying to find a definitive answer and can't, so I'm hoping someone might know. I'm developing a C++ app using GCC 4.x on Linux (32-bit OS). This app needs to be able to read files > 2GB in size. I would really like to use iostream stuff vs. FILE pointers, but I can't find if the large file #defines (_LARGEFILE_SOURCE, _LARGEFILE64_SOURCE, _FILE_OFFSET_BITS=64) have any effect on the iostream headers. I'm compiling on a 32-bit system. Any pointers would be helpful.
This has already been decided for you when libstdc++ was compiled, and normally depends on whether or not _GLIBCXX_USE_LFS was defined in c++config.h. If in doubt, pass your executable (or libstdc++.so, if linking against it dynamically) through readelf -r (or through strings) and see if your binary/libstdc++ linked against fopen/fseek/etc. or fopen64/fseek64/etc. UPDATE You don't have to worry about the 2GB limit as long as you don't need/attempt to fseek or ftell (you just read from or write to the stream.)
660,838
660,864
C++ Optimization on negative integers
Let's say we have a negative integer, say int a; is there a faster implementation of -a? Do I have to do some bitwise operation on this?
To clarify Pax's statement, C++ compilers are not mandated to use two's complement, except in 1 case. When you convert a signed type to an unsigned type, if the number is negative, the result of the conversion must be the 2's complement representation of the integer. In short, there is not a faster way than -a; even if there were, it would not be portable. Keep in mind as well that premature optimization is evil. Profile your code first and then work on the bottlenecks. See The C++ Programming Language, 3rd Ed., section C.6.2.1.
661,273
661,285
What happens when passing reference to literal in C++?
What happens here: double foo( const double& x ) { // do stuff with x } foo( 5.0 ); Does the compiler create an anonymous variable and sets its value to 5.0? Does the x reference a memory location in read-only memory? This is a weird phrasing, I know... edit: I forgot the const keyword...
A temporary variable is created for this purpose, and it's usually created on the stack. You could try to const_cast, but it's pointless anyway, since you can no longer access the variable once the function returns.
661,338
662,250
SSE SSE2 and SSE3 for GNU C++
Is there a simple tutorial for me to get up to speed in SSE, SSE2 and SSE3 in GNU C++? How can you do code optimization in SSE?
Sorry, I don't know of a tutorial. Your best bet (IMHO) is to use SSE via the "intrinsic" functions Intel provides to wrap (generally) single SSE instructions. These are made available via a set of include files named *mmintrin.h, e.g. xmmintrin.h is the original SSE instruction set. Becoming familiar with the contents of Intel's Optimization Reference Manual is a good idea (see section 4.3.1.2 for an example of intrinsics), and the SIMD sections are essential reading. The instruction set reference manuals are pretty helpful too, in that each instruction's documentation includes the "intrinsic" function it corresponds to. Do spend some time inspecting the assembler produced by the compiler from intrinsics (you'll learn a lot) and on profiling/performance measurement (you'll avoid wasting time SSE-ing code for little return on the effort). Update 2011-05-31: There is some very nice coverage of intrinsics and vectorization in Agner Fog's optimization PDFs (thanks) although it's a bit spread about (e.g. section 12 of the first one and section 5 of the second one). These aren't exactly tutorial material (in fact there's a "these manuals are not for beginners" warning) but they do rightly treat SIMD (whether used via asm, intrinsics or compiler vectorization) as just one part of the larger optimization toolbox. Update 2012-10-04: A nice little Linux Journal article on gcc vector intrinsics deserves a mention here. More general than just SSE (covers PPC and ARM extensions too). There's a good collection of references on the last page, which drew my attention to Intel's "intrinsics guide".
661,346
680,077
Moving an engineering application from standalone to internal to CAD
I have a large MFC C++ application that I would be very keen to port into AutoCAD and IntelliCAD. AutoDesk offer Object ARX for this purpose, which replaces the older and slower ADS technology. IntelliCAD, afaik only supports ADS. Has anyone out there done this, and if so which tools did you use and what pitfalls did you encounter? I'm specifically interested in resources that will simplify the transition, and allow me to maintain seperate CAD based and standalone versions going forward.
Have a look at my answers to a couple of previous AutoCAD questions: Open source cad drawing (dwg) library in C# .Net CAD component that can read/write dxf/dwg files If you were looking for the same code base to work both inside and outside of AutoCAD, then the RealDWG approach may work for you, since the code is the same - RealDWG doesn't need AutoCAD as a host application. The Open Design Alliance libraries are for making stand-alone applications. Both have supported C++ for years & can be considered stable - well, as stable as CAD gets. This blog (http://through-the-interface.typepad.com/) is a good one for RealDWG
661,464
661,477
How internally this works int const iVal = 5; (int&)iVal = 10;
I wanted to know how the following works at the compiler level. int const iVal = 5; (int&)iVal = 10; A bit of machine- or compiler-level explanation would be great. Thanks in advance.
It is undefined behavior. In the first line you define a constant integer. Henceforth, in your program, the compiler is permitted to just substitute iVal with the value 5. It may load it from memory instead, but probably won't, because that would bring no benefit. The second line writes to the memory location that your compiler tells you contains the number 5. However, this is not guaranteed to have any effect, as you've already told the compiler that the value won't change. For example, the following will define an array of 5 elements, and print an undefined value (or it can do anything it wants! it's undefined) int const iVal = 5; (int&)iVal = 10; char arr[iVal]; cout << iVal; The generated assembly might look something like: sub ESP, 9 ; allocate mem for arr and iVal. hardcoded 5+sizeof(int) bytes ; (iVal isn't _required_ to have space allocated to it) mov $iVal, 10 ; the compiler might do this, assuming that you know what ; you're doing. But then again, it might not. push $cout push 5 call $operator_ltlt__ostream_int add ESP, 9
661,481
661,495
Nested struct in templated class with std::map::const_iterator?
The following code generates a syntax error at the line where the iterator is declared: template <typename T> class A { public: struct B { int x, y, z; }; void a() { std::map<int, B>::const_iterator itr; // error: ; expected before itr } std::vector<T> v; std::map<int, B> m; }; This only happens when A is a templated class. What's wrong with this code? If I move B out of A, the code compiles fine.
You need a typename: typename std::map<int, B>::const_iterator itr; The iterator is a dependant type (depends on B) and when you have this situation the compiler requires you to clarify it with a typename. There is a reasonable discussion of the issue here.
661,617
661,737
Is there a better way to design this message passing code?
Class A was using the two functions below to build and send messages 1 & 2: builder::prepareAndDeliverMsg1(msg1_arg1,msg1_arg2) { } builder::prepareAndDeliverMsg2(msg2_arg1,msg2_arg2) { } Now, a new class B is introduced, which would like to do what A was doing in two stages: stage1->prepare stage2->deliver I was thinking of extending the builder class like below: ///---- builder::prepareMsg1(msg1_arg1,msg1_arg2) { } builder::prepareMsg2(msg2_arg1,msg2_arg2) { } builder::deliverMsg1(msg1_arg1) { This function inserts the re-calculated msg1_arg1 into the message prepared in stage1 } builder::deliverMsg2(msg2_arg1) { This function inserts the re-calculated msg2_arg1 into the message prepared in stage1 } // These two functions are still retained for the usage of class A builder::prepareAndDeliverMsg1(msg1_arg1,msg1_arg2) { } builder::prepareAndDeliverMsg2(msg2_arg1,msg2_arg2) { } //--- I would like to know if there is any better way of designing this?
To expand on Dark's idea, you can have a base class that implements the combined prepare-and-deliver functions in terms of the separate functions and allows deriving classes to override those as required: class base { virtual bool prepareMsg1() = 0; virtual bool prepareMsg2() = 0; virtual bool deliverMsg1() = 0; virtual bool deliverMsg2() = 0; bool prepareAndDeliverMsg1(){ prepareMsg1(); deliverMsg1(); } bool prepareAndDeliverMsg2(){ prepareMsg2(); deliverMsg2(); } }; You may find that a lot of the functionality from the two derived classes is the same, in which case you won't want to use pure virtuals in the base class: class base { virtual bool prepareMsg1(args) {//not pure virtual //do the common stuff } }; class derived { bool prepareMsg1( args ) { base::prepareMsg1(args); //code to specialise the message } }; It could even be that the base class implements your original class but allows your second class to be derived without having to repeat the common code.
661,862
661,983
C++ Memory Management for Texture Streaming in Videogames
this is a "hard" question. I've found nothing interesting over the web. I'm developing a Memory Management module for my company. We develop games for next-gen consoles (Xbox 360, PS3 and PC... we consider PC a console!). We'll need in future, for our next games, to handle texture streaming for large game worlds that cannot be loaded all in main console memory (not talking about PC for now). We are going to stream at the beginning hi-res mipmaps of textures (that is about 70% of the size of world data). Maybe in the future we'll also have to stream geometry, smaller mipmaps, audio, etc. I'm developing a Memory Manager for that issue, focused on X360 (because on PS3 we can use host memory and the associated, auto-defragmenting GMM allocator). The problem I'm facing is the following: We have decided to reserve a specific memory area for texture streaming (for example 64 megabytes) and we want to handle all allocations and deallocations in that area. We have allocated the area at the beginning of the application and the area is physically guaranteed to be contiguous (not just virtually, because we need to store textures there). I've implemented an auto-defragmenting allocator, using handles instead of pointers. Time is not an issue; the problem is memory fragmentation. In game we continuously load and unload streaming targets, so we'd like to use the maximum amount of our buffer (64 megabytes). With this allocator we can use all of the allocated space, but the defragmentation routine runs in an unacceptable time (sometimes 60 milliseconds, more than a frame!) while the algorithm is not too bad... there are just too many unavoidable memcpys! I'm looking for a solution to solve this issue. I'd like to find at least a good paper, or a post-mortem, or someone who has faced the same problem as me. Now I'm choosing between two strategies: 1) move the defragmentation routine to a dedicated thread (good for X360 with 6 hw threads, bad for PS3 with just one hw thread... 
and don't tell me to use SPU's!) with all the multithreading problems of locking regions, of accessing a region which is being moved,... 2) find an "incremental" solution to the defragmentation problem: we can give each frame a time budget (for example up to 1 millisecond) for defragmentation and the Memory Manager will do what it can within the budget each frame. Can someone tell me about their experience with this?
I did a lot of study recently regarding memory management, and this is the most informative and helpful article I found on the net: http://www.ibm.com/developerworks/linux/library/l-memory/ Based on that paper, the best and fastest result you will get is to divide your 64 MB into equal-sized chunks. The size of the chunks will depend on your object size. And allocate or deallocate a full chunk at a time. It's faster than incremental garbage collection, simpler, and solves that "too much fragmentation" problem by some amount. Read it; you will find excellent information on every possible solution there is, with merits and demerits for each.
661,907
664,970
VC++ : How to prevent esc from closing a dialog box (not mfc)
How could I prevent Esc from closing a dialog box? I searched for this topic, but all I found was for MFC (you can override the PreTranslateMessage function in MFC), but my program is written with the Windows API, not MFC. I tried to catch all keyboard messages in the dialog procedure, but none of them works. I also tried subclassing the dialog box, but that didn't work either. How could I do that? Thanks.
You can determine whether it's from the system menu easily enough -- check out the "return value" section of this MSDN page to see how. If you can't determine whether it's from the ESCAPE key or a button, you could always get around that by using a different identifier for your Cancel button.
662,084
662,095
What's the difference between the WIN32 and _WIN32 defines in C++
I know that WIN32 denotes win32 compilation but what is _WIN32 used for?
WIN32 is a name that you could use and even define in your own code and so might clash with Microsoft's usage. _WIN32 is a name that is reserved for the implementor (in this case Microsoft) because it begins with an underscore and an uppercase letter - you are not allowed to define reserved names in your own code, so there can be no clash.
662,328
662,356
What is a simple C or C++ TCP server and client example?
I need to quickly implement a very small C or C++ TCP server/client solution. This is simply to transfer literally an array of bytes from one computer to another - doesn't need to be scalable / over-complicated. The simpler the better. Quick and dirty if you can. I tried to use the code from this tutorial, but I couldn't get it to build using g++ in Linux: http://www.linuxhowtos.org/C_C++/socket.htm If possible, I'd like to avoid 3rd party libraries, as the system I'm running this on is quite restricted. This must be C or C++ as the existing application is already implemented. Thanks to emg-2's answer, I managed to make the above mentioned code sample compatible with C++ using the following steps: Add these headers to both client and server: #include <cstdlib> #include <cstring> #include <unistd.h> In server.c, change the type of clilen to socklen_t. int sockfd, newsockfd, portno/*, clilen*/; socklen_t clilen; In client.c, change the following line: if (connect(sockfd,&serv_addr,sizeof(serv_addr)) < 0) { ... } To: if (connect(sockfd,(const sockaddr*)&serv_addr,sizeof(serv_addr)) < 0)
I've used Beej's Guide to Network Programming in the past. It's in C, not C++, but the examples are good. Go directly to section 6 for the simple client and server example programs.
662,378
662,408
Error in returning a pointer from a function that points to an array
I'm in a bit of a fiddle in that I don't know why my code brings up the following error when compiling: 1>..\SA.cpp(81) : error C2664: 'CFE' : cannot convert parameter 1 from 'int' to 'int []' 1> Conversion from integral type to pointer type requires reinterpret_cast, C-style cast or function-style cast Essentially I am trying to: Step1: Convert from a vector to an array using: int* VecToArray(vector<int> Vec) { int ary[Vec.size()]; for(int i = 0; i < Vec.size(); i++) ary[i] = Vec[i]; return ary; } Step2: Calling upon a function into which the array is a parameter and it returns a pointer from a newly generated array: int* CFE(int density[]) { ...do stuff to generate 'double Energy[]'.... return Energy; } Step 3: Using this pointer in a third function to calculate the sum of Energy[]: double ObjFunction (double *E_Array) { double SumEnergy = 0; int n = 10; // Is irrelevant for (int i = 0; i < n; i++) { SumEnergy += E_Array[i]; } return SumEnergy; } To make for simpler coding I've used the functions like so, where VectorName is an integer vector: double TotalEnergy = ObjFunction ( CFE ( VecToArray ( VectorName ) ) ); I am obviously getting the parameter types wrong somewhere, though I just can't see why myself. Could anyone with a more experienced eye assist in spotting it/them?
Where does Energy come from? If it's a double[] then you can't just cast it to an int*. std::vector<int> is guaranteed to be contiguous, so if you want to convert a std::vector<int> VectorName to a const int*, use &VectorName[0]. If, on the other hand, your CFE function modifies the array it is passed, it's probably better off creating it locally.
662,440
662,614
Palette Animation in OpenGL
I am making an old-school 2D game, and I want to animate a specific color in my texture. The only ways I know are: OpenGL shaders; animating one color channel only; a white texture under the color-animated texture. But I don't want to use shaders; I want to make this game as simple as possible, without many extra OpenGL functions etc. And the color-channel animating doesn't fit this because I need all color channels in my textures. Currently I am doing it with 2 textures: a white texture under the other texture, with the specific pixel color translated into transparent; then I change the white texture's color with the glColor3f() function to whatever I want, and I see the "palette animation" on that specific color. But that style sounds pretty hacky, so I am wondering if there is some better trick for this?
How about just using paletted textures? There are extensions to do just that. If using extensions is out of the question, you can do the palette handling on your own: just do your palette tricks (there are lots of them) and write out an RGB texture using the palette. Of course this limits the number of colors, but that's the whole point of using a palette. Paint programs that are good for palette handling are rare nowadays. That's why I will not remove Deluxe Paint from my drive.
662,845
662,922
Why is std::for_each a non-modifying sequence operation?
I just read in the C++ standard that std::for_each is a non-modifying sequence operation, along with find, search and so on. Does that mean that the function applied to each element should not modify them? Why is that? What could possibly go wrong? Here is a sample code, where the sequence is modified. Can you see anything wrong with it? void foo(int & i) { i = 12; } int main() { std::vector<int> v; v.push_back(0); std::for_each(v.begin(), v.end(), foo); // v now contains 12 } I suspect this to be just an interpretation issue, but I wanted to have your opinion about that. PS: I know I could use std::transform instead of for_each, but that's not the point.
See this defect report where they say: The LWG believes that nothing in the standard prohibits function objects that modify the sequence elements. The problem is that for_each is in a section entitled "nonmutating algorithms", and the title may be confusing. A non-normative note should clarify that. But also note this one. They seem to call it "non-modifying" because for_each itself does not explicitly modify the elements of the sequence.
662,918
662,936
How do I concatenate multiple C++ strings on one line?
C# has a syntax feature where you can concatenate many data types together on 1 line. string s = new String(); s += "Hello world, " + myInt + niceToSeeYouString; s += someChar1 + interestingDecimal + someChar2; What would be the equivalent in C++? As far as I can see, you'd have to do it all on separate lines as it doesn't support multiple strings/variables with the + operator. This is OK, but doesn't look as neat. string s; s += "Hello world, " + "nice to see you, " + "or not."; The above code produces an error.
#include <sstream> #include <string> std::stringstream ss; ss << "Hello, world, " << myInt << niceToSeeYouString; std::string s = ss.str(); Take a look at this Guru Of The Week article from Herb Sutter: The String Formatters of Manor Farm
662,976
662,981
How do I convert from stringstream to string in C++?
How do I convert from std::stringstream to std::string in C++? Do I need to call a method on the string stream?
yourStringStream.str()
663,071
663,114
Stopping a thread in Win32/MFC
I was reading through some threading-related code and found this piece of code: MyThread::start() { //Create a thread m_pThread = AfxBeginThread(/*some parameters*/) //Create a duplicate handle for the created thread m_hDuplicateHandle = DuplicateHandle(/* some more parameters*/) } MyThread::stop() { //Set some variables so that the thread comes out of its run() function WaitForSingleObject(m_hDuplicateHandle, defaultTimeout); CloseHandle(m_hDuplicateHandle); } My question: why is the duplicate handle required? Can't we directly wait on the original thread handle? Does it somehow become invalid?
AfxBeginThread returns a CWinThread* and MFC assumes it will be managing the handle associated with the thread. So in order to safely use the handle directly you need to duplicate it, otherwise when the thread ends MFC may have closed the handle before you get to the WaitForSingleObject call. If you were working directly with the win32 CreateThread API then you could certainly wait directly on the returned handle.
663,072
663,091
How to catch an exception thrown in a critical section?
I'm working on Win32 multithreading with C++. Scenario: I have a function used by multiple threads. This function has a critical section (or any kind of construct that can lock a resource). In the critical section an exception is thrown. At this point I need to take care of unlocking the resource in the exception catch block. Is there any other way that this can be done? I mean, let's say that I don't want to have to remember to release the lock in the catch block; is there any common way to handle this problem and avoid this error-prone scenario?
The idea is to encapsulate the act of acquiring and releasing the critical section in an object such that constructing the object acquires the CS and destroying the object releases it. struct CSHolder { explicit CSHolder(CRITICAL_SECTION& cs): lock(cs) { ::EnterCriticalSection(&lock); } ~CSHolder() { ::LeaveCriticalSection(&lock); } CRITICAL_SECTION& lock; }; CRITICAL_SECTION gLock; void foo() { CSHolder lockIt(gLock); // lock is held until lockIt is destroyed } The concept is called RAII - Resource Acquisition is Initialization. It is a very common idiom in modern C++.
663,145
663,164
Any library that overloaded boolean operators?
Have you ever seen any library/code that overloaded boolean operators, which is said to be evil? And what advantages does it give to the user?
The standard library itself overloads operator ! for input streams, so perhaps "evil" is a touch strong? But I suspect that you were talking about && and ||. The reason for not overloading these is that their short-circuiting abilities cannot be duplicated in user-defined overloads, and no, I am not aware of any library that overloads them.
663,209
664,401
Can someone explain about Linux library naming?
When I create a library on Linux, I use this method: Build: libhelloworld.so.1.0.0 Link: libhelloworld.so.1.0.0 libhelloworld.so Link: libhelloworld.so.1.0.0 libhelloworld.so.1 The versioning is so that if you change the public facing methods, you can build to libhelloworld.so.2.0.0 for example (and leave 1.0.0 where it is), so that applications using the old library won't break. However, what's the point in naming it 1.0.0 - why not just stick with libhelloworld.so and libhelloworld.so.1? Also, is it best practice to name your library using 1.0.0 for example, or just 1? g++ ... -Wl,-soname,libhelloworld.1 Or: g++ ... -Wl,-soname,libhelloworld.1.0.0
From an old email I sent to a colleague about this question: Let's look at libxml as an example. First of all, shared objects are stored in /usr/lib with a series of symlinks to represent the version of the library available: lrwxrwxrwx 1 root root 16 Apr 4 2002 libxml.so -> libxml.so.1.8.14 lrwxrwxrwx 1 root root 16 Apr 4 2002 libxml.so.1 -> libxml.so.1.8.14 -rwxr-xr-x 1 root root 498438 Aug 13 2001 libxml.so.1.8.14 If I'm the author of libxml and I come out with a new version, libxml 2.0.0, that breaks interface compatibility with the previous version, I can install it as libxml.so.2 and libxml.so.2.0.0. Note that it is up to the application programmer to be responsible about what he links to. If I'm really anal, I can link directly to libxml.so.1.8.14 and any other version will result in my program not running. Or I can link against libxml.so.1 and hope that the libxml developer doesn't break symbol compatibility on me in the 1.X versions. Or if you don't care and are reckless, just link to libxml.so and get whatever version there is. Sometimes, when enough people do this, the library author has to get creative with later versions. Hence, libxml2: lrwxrwxrwx 1 root root 17 Apr 4 2002 libxml2.so.2 -> libxml2.so.2.4.10 -rwxr-xr-x 1 root root 692727 Nov 13 2001 libxml2.so.2.4.10 Note that there's no libxml2.so in this one. Looks like the developer got fed up with irresponsible application developers.
663,280
663,334
What's the advantage of this indirect function call?
I found the following code in a library: class Bar { public: bool foo(int i) { return foo_(i); } private: virtual bool foo_(int i) = 0; }; Now I'm wondering: Why would you use this indirection? Could there be any reasons why the above would be better than the simple alternative: class Bar { public: virtual bool foo(int i) = 0; };
This is the Non-Virtual Interface Idiom (NVI). That page by Herb Sutter has a good bit of detail about it. However, temper what you read there with what the C++ FAQ Lite says here and here. The primary advantage of NVI is separating interface from implementation. A base class can implement a generic algorithm and present it to the world while its subclasses can implement the details of the algorithm through virtual functions. Outside users are shielded from changes in the algorithm details, especially if you later decide you want to do add pre- and post-processing code. The obvious disadvantage is that you have to write extra code. Also, private virtual functions are confusing to a lot of people. Many coders mistakenly think you can't override them. Herb Sutter seems to like private virtuals, but IMHO it's more effective in practice to follow the C++ FAQ Lite's recommendation and make them protected.
663,326
663,348
if (!this) { return false; }
I stumbled upon this piece of code which seems totally broken to me, but it does happen that this is null. I just don't get how this can be null: it is inside a normal method call such as myObject->func(); inside MyObject::func() we have if (!this) { return false; } Is there any way I can have the first line throw a NullPointerException instead of going inside the method on a null pointer?
If you have: MyObject *o = NULL; o->func(); What happens next depends on whether func is virtual. If it is, then it will crash, because it needs an object to get the vtable from. But if it's not virtual the call proceeds with the this pointer set to NULL. I believe the standard says this is "undefined behaviour", so anything could happen, but typical compilers just generate the code to not check whether the pointer is NULL. Some well known libraries rely on the behaviour I described: MFC has a function called something like SafeGetHandle that can be called on a null pointer, and returns NULL in that case. You might want to write a reusable helper function: void CheckNotNull(void *p) { if (p == NULL) throw NullPointerException(); } You can then use that at the start of a function to check all its arguments, including this: CheckNotNull(this);
663,447
663,577
Getting Started with OLE - What's a good learning project choice?
I suspect that I will shortly have a need to write an "integration" library that will need to call an OLE object on Windows from Java. I have done Java to C/C++ integration on Windows before (using C/C++ and JNI) - so I'm not new to that part of the equation. However, I'd like to try out writing a C/C++ wrapper around an OLE object to give that part of the project a test-spin. So, some questions: Can I do OLE with VS C++ Express Edition? I believe the answer is, "Yes, but you don't have MFC, so you can't call the Microsoft OLE Objects (e.g. Excel, Word, etc.)" If my presumption about #1 is correct, I need a "test" application that provides an OLE object that I can call - what are suggestions for something that would be good practice?
Yes, you can do OLE with VC++ Express; I'd recommend installing the Windows Platform SDK. You don't need MFC to use Office applications. With VC++ Express 2005 you can install the older Platform SDK 2003 R1, which includes ATL, a set of convenient wrappers around COM functionality. If you can't install it, you can create your own wrappers; see this more advanced article, COM in Plain C. A well-known application which provides COM objects is Internet Explorer; Windows Media Player and Flash can also be accessed via COM. Also you can take a look at Jacob, "The JACOB Project: A JAva-COM Bridge". You can think of COM as the layer on which OLE functionality is built.
663,449
663,502
What is the best way to attach a debugger to a process in VC++ at just the right point in time?
When debugging, sometimes you need to attach to an already running process instead of just starting the application in a debugger. I commonly put in a Sleep() or MessageBox call so that it's easier to attach a debugger. I worry that some of these may eventually be committed to source control. What is the best way to avoid this while still delaying long enough that you can attach your debugger to the running process? Guarding the Sleep or message box with an #ifdef _DEBUG is one way, but I'm wondering if there is a better one. With a Sleep you also have the problem that you may not attach in time. With a MessageBox you have the problem that you may be debugging remotely, or debugging a process with no visible GUI (for example, one running as a service on Vista).
You can use DebugBreak(); check these links: http://www.epsilon-delta.net/articles/vc6_debug.html#breaking-with-debugbreak http://blogs.msdn.com/calvin_hsia/archive/2006/08/25/724572.aspx
663,546
663,809
can one make concurrent scalable reliable programs in C as in erlang?
A theoretical question. After reading Armstrong's 'Programming Erlang' book I was wondering the following: it will take some time to learn Erlang, let alone master it. It really is fundamentally different in a lot of respects. So my question: is it possible to write 'like Erlang', or with some 'Erlang-like framework', such that, given you take care not to create functions with side effects, you can create scalable, reliable apps as well as in Erlang? Maybe with the same message-passing, loads-of-mini-processes paradigm. The advantage would be not throwing all your accumulated C/C++ knowledge over the fence. Any thoughts about this would be welcome.
Yes, it is possible, but... Probably the best answer to this question is given by Robert Virding’s First Rule: “Any sufficiently complicated concurrent program in another language contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Erlang.” A very good rule is to use the right tool for the task. Erlang excels at concurrency and reliability; C/C++ was not designed with these properties in mind. If you don't want to throw away your C/C++ knowledge and experience, and your project allows this kind of division, a good approach is a mixed solution: write the concurrent, communication, and error-handling code in Erlang, then add the C/C++ parts that do the CPU- and IO-bound work.
663,624
663,685
What is the best practice for a shared library primary header file in C++?
When I create shared libraries, I have a header file (with no file name extension) in the root of the library source, named the same as the library. So for example, if my library was called libirock.so, I'd have a file called irock in the project root. This file includes all of the most important headers in the library, so that when the library is to be used, all you need is this include line:

```cpp
#include <irock> // Instead of <irock.h>
```

I got the idea from a compiler warning similar to:

    #include <string.h> is obsolete, use #include <string> instead

Two questions:

1. Is using irock instead of irock.h best practice?
2. Is it correct to use a single header file instead of many headers?

Course of action: thanks for your answers! From the answers, I've decided that I will use <irock.h> instead of <irock>, and I will continue to use a 'primary' header file.
There is nothing in the standard governing "allowed", "prohibited" or "best practices" regarding file name extensions. Use whichever form you prefer. On some platforms there's a convenience factor to having a file extension for registered types. For what it's worth, <string.h> and <string> are totally different headers. The C++ namespaced equivalent of <string.h> is actually <cstring>.
663,724
663,775
wrong argument conversion preferred when calling function
I'm writing a program under MS Visual C++ 6.0 (yes, I know it's ancient; no, there's nothing I can do to upgrade). I'm seeing some behavior that I think is really weird. I have a class with two constructors defined like this:

```cpp
class MyClass
{
public:
    explicit MyClass(bool bAbsolute = true, bool bLocation = false)
        : m_bAbsolute(bAbsolute), m_bLocation(bLocation) { ; }
    MyClass(const RWCString& strPath, bool bLocation = false);
private:
    bool m_bAbsolute;
    bool m_bLocation;
};
```

When I instantiate an instance of this class with the syntax MyClass("blah"), it calls the first constructor. As you can see, I added the explicit keyword in the hope that it wouldn't do that... no dice. It appears to prefer the conversion from const char * to bool over the conversion to RWCString, which has a converting constructor that takes a const char *. Why does it do this? I would assume that given two possible choices like this, it would say the call is ambiguous. What can I do to prevent it? If at all possible, I'd like to avoid explicitly casting the strPath argument to an RWCString, as it's going to be used with literals a lot and that's a lot of extra typing (plus a really easy mistake to make).
Explicit will not help here, as the constructor is not part of the implicit conversion, just its recipient. There's no way to control the preferred order of conversions, but you can add a third constructor that takes a const char* directly. E.g:

```cpp
class MyClass
{
public:
    MyClass(bool bAbsolute = true, bool bLocation = false);
    MyClass(const RWCString& strPath, bool bLocation = false);
    MyClass(const char* strPath, bool bLocation = false);
private:
    bool m_bAbsolute;
    bool m_bLocation;
};
```
663,811
664,183
Why do I get the "unrecognised emulation mode: 32" error in Eclipse?
How come I get this error when compiling with the -m32 argument? unrecognised emulation mode: 32 I'm compiling using g++ on an x86_64 machine. It happens in one project, but not the other... Any hints? Note: I'm using Eclipse CDT, so perhaps this is an IDE specific gocha? Rephrased question Perhaps a better question would be: What is the best method of building an x86 binary in Eclipse when running on an x86_64 machine?
It seems that in Eclipse, you must specify -m32 in the miscellaneous linker options if it's specified in the compiler options. The "unrecognised emulation mode: 32" error message appears when you reference a library path that doesn't exist - which is weird!
663,870
663,889
c++ class why need main?
Hello, I'm writing a little project in C++ where I would like some classes that do some work. I wrote the interfaces and the implementations of the classes. The thing that surprises me is that I cannot have a simple class without a main(); I would like a class that, once instantiated, has methods that can be called and do things, but I don't need (nor want) a main() in the class implementation. Here's an example of what I'd like to have:

file animal.h:

```cpp
class animal
{
public:
    animal();
    ~animal();
public:
    int method1(int arg1);
private:
    int var1;
};
```

file animal.cpp:

```cpp
#include "animal.h"

animal::animal() { /*...*/ }
animal::~animal() { /*...*/ }
int animal::method1(int arg1) { return var1; }
```

And I would like to call the animal class from another file and have it work, something like this:

file app.cpp:

```cpp
#include <neededlib>
#include "animal.h"

int main()
{
    animal dog;
    cout << dog.method1(42);
    return 0;
}
```

But the compiler gives me

    /usr/lib/gcc/i686-pc-linux-gnu/4.3.3/../../../crt1.o: In function _start:
    (.text+0x18): undefined reference to `main`
    collect2: ld returned 1 exit status

for animal.cpp, but I don't need a main there, or do I? Where am I wrong?
> but I don't need (nor want) a main() in the class implementation.

The function main is your entry point. That is where execution begins. You need one, and only one, such function.

> But the compiler gives me "undefined reference to main" for animal.cpp, but I don't need a main there, or do I?

Your problem is that you have not linked the compiled forms of app.cpp and animal.cpp together.

> I'm not so strong in Makefiles; I used something like g++ animal.h -o animal and g++ animal.cpp, but it gives me the error above

You don't compile headers, so don't use:

```shell
g++ animal.h
```

When you compiled animal.cpp separately, g++ created an object file. You also need to compile app.cpp, because that is where main lives, and then link it with the animal object file created earlier. If the file containing main never gets linked in, you hit the error you are getting now. However, g++ can take care of all of this in one step. Try something like:

```shell
g++ animal.cpp app.cpp -o test
```

This creates an executable called test, which you can run with:

```shell
./test
```
664,123
664,134
'default' as a variable name
While debugging some code, I came across an array named default. I thought that keywords were not allowed as variable names.

```cpp
#include "stdafx.h"
#include <stdio.h>

int main()
{
    int default = 5;
    printf("%d\n", default);
    return 0;
}
```

Now the above code compiles without a hitch on VS 2008. Isn't 'default' a keyword? How come it works as a variable name? Side effects? PS: Infragistics::Win::UltraWinToolbars::ToolbarsCollection has a property with this name!
It's a known issue in VC++. Basically by-design for C++/CLI compatibility.
664,273
664,289
Boost lambda bewilderment
Why is callback called only once?

```cpp
bool callback()
{
    static bool res = false;
    res = !res;
    return res;
}

int main(int argc, char* argv[])
{
    vector<int> x(10);
    bool result = false;
    for_each(x.begin(), x.end(), var(result) = var(result) || bind(callback));
    return 0;
}
```
The || expression short-circuits after the first time the bound callback returns true. The first time you evaluate

```cpp
result = result || bind(...) // result is false at this point
```

callback is called, because that's the only way to determine the value of false || bind(...). Because it returns true, result is set to true. Every subsequent time you evaluate

```cpp
result = result || bind(...) // result is true at this point
```

the bind(...) expression isn't evaluated, because its return value doesn't matter; true || anything is always true, so the || expression short-circuits. One way to ensure that callback is always called is to move the bind to the left side of the ||, or to change the || to an &&, depending on what you are trying to accomplish with result.
664,320
664,361
Can I pass C++ strings into a method in the style of a stream?
I'd like to do this:

```cpp
MyClass mc = MyClass("Some string" << anotherString);
```

Thanks for your answers; I have decided to rewrite this question based on what you've told me, as it's gotten a little messy. Eventually, I read C++ format macro / inline ostringstream and decided to use a macro, as it's not really possible to do this using a constructor. Some answers may no longer be relevant. Now, what I can actually do is:

```cpp
MY_CLASS("Some string" << anotherString << " more string!");
```

Using this macro:

```cpp
#include <sstream>

#define MY_CLASS(stream) \
    MyClass( ( dynamic_cast<std::ostringstream &> ( \
        std::ostringstream() . seekp( 0, std::ios_base::cur ) << stream ) \
    ) . str() )
```

Where the MyClass constructor takes a string:

```cpp
MyClass::MyClass(string s) { /* ... */ }
```
I think you should look at this question for some hints as to what will be required to get the behavior you want. This sort of thing seems to be a bit difficult.
664,698
664,775
How do I draw pixels as a texture on a polygon in OpenGL?
In C++ OpenGL, I want to draw each pixel manually (as a texture, I assume) onto a simple square primitive, or indeed two polygons forming a square. I have no idea where to start, or what phrases to look for. Am I looking for texture mapping or for creating textures? Most examples load from a file, but I don't want to do that. I've tried reading my OpenGL Programming Guide book, but it's just a maze as I'm quite new to OpenGL. Please help.
Take a close look at glTexImage2D. This is the call that loads the image in OpenGL for a 2D Texture. glTexImage2D actually takes a raw pointer to the pixel data in the image. You can allocate memory yourself, and set the image data directly (however you want), then call this function to pass the image to OpenGL. Once you've created the texture, you can bind it to the state so that your triangles use that texture. There is nothing in OpenGL that directly requires images used as textures to be loaded from files. A good resource for textures can be found in this GameDev article.
665,112
665,147
How the buffer of cout work?
I learned a few days ago that cout has a buffer, and when I googled it, it was said that the buffer works like a stack, taking the output of cout and printf from right to left, then putting it out (to the console or a file) from top to bottom. Like this:

```cpp
a = 1; b = 2; c = 3;
cout << a << b << c << endl;
```

    buffer: |3|2|1|<-   (take "<-" as a pointer)
    output: |3|2|<-     (output 1)
            |3|<-       (output 2)
            |<-         (output 3)

Then I wrote the code below:

```cpp
#include <iostream>
using namespace std;

int c = 6;
int f()
{
    c += 1;
    return c;
}

int main()
{
    int i = 0;
    cout << "i=" << i << " i++=" << i++ << " i--=" << i-- << endl;
    i = 0;
    printf("i=%d i++=%d i--=%d\n", i, i++, i--);
    cout << f() << " " << f() << " " << f() << endl;
    c = 6;
    printf("%d %d %d\n", f(), f(), f());
    system("pause");
    return 0;
}
```

Under VS2005, the output is

    i=0 i++=-1 i--=0
    i=0 i++=-1 i--=0
    9 8 7
    9 8 7

It seems that the stack idea is right. However, I read C++ Primer Plus yesterday, and it said that cout works from left to right, every time returning an object (cout), so "That’s the feature that lets you concatenate output by using insertion". But the left-to-right explanation cannot explain the output above. Then Alnitak told me that "The << operator is really ostream& operator<<(ostream& os, int), so another way of writing this is: operator<< ( operator<< ( operator<< ( cout, a ), b ), c )". If the rightmost argument is evaluated first, that can explain it somewhat. Now I'm confused about how cout's buffer works; can somebody help me?
You are mixing up several things:

- implementation details of cout,
- chained calls,
- calling conventions.

Try to read up on them separately, and don't think about them all in one go.

```cpp
printf("i=%d i++=%d i--=%d\n", i, i++, i--);
```

The above line invokes undefined behavior; read FAQ 3.2. Note that what you observe is a side effect of the function's calling convention and the way a particular implementation (i.e. yours) passes parameters on the stack. This is not guaranteed to be the same on other machines. I think you are confusing the order of function calls with buffering. When you have a cout statement followed by multiple insertions (<<), you are actually invoking multiple function calls, one after the other. So, if you write:

```cpp
cout << 42 << 0;
```

it really means: you call operator<<(cout, 42), and then use the returned stream reference in another call to the same operator: operator<<(cout, 0). What you have tested above tells you nothing about cout's internal representation. I suggest you take a look at the header files to learn more.
665,509
7,857,736
Redirecting standard output to syslog
I'm planning to package OpenTibia Server for Debian. One of the things I want to do is add startup via /etc/init.d and daemonization of the otserv process. Thing is, we should probably redirect output to syslog. This is usually done via the syslog() function. Currently, the code is swarmed with calls like:

```cpp
std::cout << "Stuff to printout" << std::endl;
```

Is there a proper, easy-to-add way to redirect standard output and standard error into syslog without replacing every single "call" to std::cout and friends?
Not sure whether a straight "C" answer suffices, but in C you can use the underlying stdio features to plug the (FILE*) directly into syslog calls, without an intervening "logger" process. Check out http://mischasan.wordpress.com/2011/05/25/redirecting-stderr-to-syslog/