615,100
615,128
Should I learn Python after C++?
I'm currently studying C++ and want to learn another language. For work I use C# + ASP (just started learning it, actually), but I want something "less Microsoft" and powerful. I have heard Python is a popular and powerful language, and not as complicated as C++. But many people mentioned it was hard for them to get back to C++/Java from Python, because they started thinking in it and got used to the absence of manual memory management, etc. What do you recommend?
There's no right or wrong answer, really. But I think you'll benefit more from learning Python. Given the similarities between C# and C++, you'll learn a different way of thinking from Python. The more ways you learn to think about a problem, the better a programmer it makes you, regardless of the language.
615,264
616,538
C++ Parallelization Libraries: OpenMP vs. Thread Building Blocks
I'm going to retrofit my custom graphics engine so that it takes advantage of multicore CPUs. More exactly, I am looking for a library to parallelize loops. It seems to me that both OpenMP and Intel's Threading Building Blocks are very well suited for the job. Both are supported by Visual Studio's C++ compiler and most other popular compilers, and both libraries seem quite straightforward to use. So, which one should I choose? Has anyone tried both libraries and can give me some pros and cons of using either one? Also, what did you choose to work with in the end? Thanks, Adrian
I haven't used TBB extensively, but my impression is that they complement each other more than they compete. TBB provides thread-safe containers and some parallel algorithms, whereas OpenMP is more of a way to parallelise existing code.

Personally I've found OpenMP very easy to drop into existing code where you have a parallelisable loop or a bunch of sections that can be run in parallel. However, it doesn't help you much in a case where you need to modify some shared data, which is where TBB's concurrent containers might be exactly what you want.

If all you want is to parallelise loops whose iterations are independent (or can fairly easily be made so), I'd go for OpenMP. If you need more interaction between the threads, I think TBB may offer a little more in that regard.
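To make the "drop it into an existing loop" point concrete, here is a minimal sketch (the function and names are invented for illustration) of the kind of independent-iteration loop OpenMP handles well. If OpenMP is not enabled at compile time, the pragma is simply ignored and the loop runs serially with identical results:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical example: each iteration is independent, so OpenMP can
// split the range across cores. Without -fopenmp (or /openmp) the
// pragma is ignored and the loop runs serially, producing the same output.
std::vector<double> scale_all(const std::vector<double>& in, double factor)
{
    std::vector<double> out(in.size());
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(in.size()); ++i)
        out[i] = in[i] * factor;  // no shared mutable state between iterations
    return out;
}
```

This retrofit-friendliness is exactly the trade-off described above: the serial structure of the code is untouched, at the cost of only handling loop-shaped parallelism well.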
615,355
615,366
Is there any reason to check for a NULL pointer before deleting?
I often see legacy code checking for NULL before deleting a pointer, similar to:

    if (NULL != pSomeObject)
    {
        delete pSomeObject;
        pSomeObject = NULL;
    }

Is there any reason to check for a NULL pointer before deleting it? And what is the reason for setting the pointer to NULL afterwards?
It's perfectly "safe" to delete a null pointer; it effectively amounts to a no-op. The reason you might want to check for null before you delete is that trying to delete a null pointer could indicate a bug in your program.

Edit: NOTE that if you overload the delete operator, it may no longer be "safe" to delete NULL.
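A minimal sketch of both points above: the delete itself needs no guard, and nulling the pointer afterwards protects later code from a double delete (the Widget type is just a stand-in for illustration):

```cpp
#include <cassert>

// Widget is a stand-in type for illustration only.
struct Widget { int value; };

// delete on a null pointer is defined as a no-op by the C++ standard,
// so no guard is needed. Resetting the pointer afterwards means a
// later accidental delete through the same pointer is also harmless.
void destroy(Widget*& p)
{
    delete p;      // safe even when p is already null
    p = nullptr;   // guards against double-delete through this pointer
}
```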
615,523
615,584
Is it OK to pass parameters to a Factory method?
One of the ways to implement Dependency Injection correctly is to separate object creation from business logic. Typically, this involves using a Factory for object creation. Up until this point, I've never seriously considered using a Factory, so I apologize if this question seems a little simplistic.

In all the examples of the Factory pattern that I've run across, I always see very simple examples that have no parameterization. For example, here's a Factory stolen from Misko Hevery's excellent "How To Think About the 'new' Operator" article:

    class ApplicationBuilder {
      House* build() {
        return new House(new Kitchen(new Sink(),
                                     new Dishwasher(),
                                     new Refrigerator()));
      }
    };

However, what happens if I want each house that I build to have a name? Am I still using the Factory pattern if I rewrite this code as follows?

    class ApplicationBuilder {
      House* build(const std::string & house_name) {
        return new House(house_name,
                         new Kitchen(new Sink(),
                                     new Dishwasher(),
                                     new Refrigerator()));
      }
    };

Note that my Factory method call has changed from this:

    ApplicationBuilder builder;
    House * my_house = builder.build();

To this:

    ApplicationBuilder builder;
    House * my_house = builder.build("Michaels-Treehouse");

By the way, I think the concept of separating object instantiation from business logic is great; I'm just trying to figure out how I can apply it to my own situation. What confuses me is that all the examples I've seen of the Factory pattern never pass any parameters into the build() function. To be clear: I don't know the name of the house until the moment before I need to instantiate it.
I've seen quite a lot of examples that use a fixed set of arguments, like in your name example, and have used them myself too, and I can't see anything wrong with it.

However, there is a good reason that many tutorials or small articles avoid showing factories that forward parameters to the constructed objects: in current C++ it is practically impossible to forward an arbitrary number of arguments, even for a sane limit like six. Each parameter you forward has to be accepted as both const T& and T& if you want to do it generically, so you need an exponentially growing set of overloads (for each parameter, a const and a non-const version), and perfect forwarding is not possible at all (so that temporaries are forwarded as temporaries, for example).

For the next C++ standard, that issue is solved:

    class ApplicationBuilder {
      template<typename... T>
      House* build(T&&... t) {
        return new House(std::forward<T>(t)...,
                         new Kitchen(new Sink(),
                                     new Dishwasher(),
                                     new Refrigerator()));
      }
    };

That way, you can call

    builder.build("Hello", 13);

and it will return

    new House("Hello", 13, new Kitchen(new Sink(...

Read the article I linked above.
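For reference, here is a hedged, compilable variant of the variadic-forwarding idea above using C++11 features; House, Kitchen, and the member layout are stand-ins invented for illustration, not the article's actual classes:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <utility>

// Stand-in types: House and Kitchen here are invented for illustration.
struct Kitchen {};

struct House {
    std::string name;
    int floors;
    Kitchen kitchen;
    House(std::string n, int f, Kitchen k)
        : name(std::move(n)), floors(f), kitchen(k) {}
};

class ApplicationBuilder {
public:
    // Perfect-forwards the caller's arguments to House, then appends
    // the parts the factory itself is responsible for wiring up.
    template <typename... T>
    std::unique_ptr<House> build(T&&... t) {
        return std::unique_ptr<House>(
            new House(std::forward<T>(t)..., Kitchen{}));
    }
};
```

The caller supplies only the run-time-known arguments (such as the name), while the factory keeps ownership of the object graph it is wiring up.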
615,551
615,555
How to get width and height from CreateWindowEx() window? C++
I have made a window with the CreateWindowEx() function; now how do I get the width and height of the window I created? This sounds like a very basic thing to do, but I just couldn't find any answer. This is needed because the window height is chosen automatically, depending on how Windows wants to create it. Language: C or C++.
Use GetWindowRect. Subtract the left from the right to get the width, and the top from the bottom to get the height:

    RECT rect;
    if (GetWindowRect(hwnd, &rect))
    {
        int width = rect.right - rect.left;
        int height = rect.bottom - rect.top;
    }

As a side note, if you'd like the client area instead of the entire window, you can use GetClientRect. For other information about the window you can use GetWindowInfo.
615,701
615,711
InterlockedIncrement usage
While reading about the function InterlockedIncrement I saw the remark that the variable passed must be aligned on a 32-bit boundary. Normally I have seen code which uses InterlockedIncrement like this:

    class A
    {
    public:
        A();
        void f();
    private:
        volatile long m_count;
    };

    A::A() : m_count(0)
    {
    }

    void A::f()
    {
        ::InterlockedIncrement(&m_count);
    }

Does the above code work properly on multi-processor systems, or should I take some more care with this?
It depends on your compiler settings. However, by default, anything eight bytes and under will be aligned on a natural boundary; thus an "int" will be aligned on a 32-bit boundary. Also, the "#pragma pack" directive can be used to change alignment inside a compilation unit.

I would like to add that this answer assumes the Microsoft C/C++ compiler. Packing rules might differ from compiler to compiler, but in general I would assume that most C/C++ compilers for Windows use the same packing defaults, just to make working with Microsoft SDK headers a bit easier.
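If you want more than an assumption about the defaults, a sketch like the following can check the alignment requirement at compile time using only standard C++11 (Counter is a stand-in for the class A in the question):

```cpp
#include <cassert>
#include <cstddef>

// Counter is a stand-in for the class A in the question.
struct Counter {
    volatile long m_count;
};

// Standard C++11 checks: both fire at compile time, so a build that
// violated the documented 32-bit alignment requirement would fail to
// compile instead of misbehaving at run time.
static_assert(alignof(long) >= 4,
              "long must be aligned on a 32-bit boundary");
static_assert(offsetof(Counter, m_count) % 4 == 0,
              "m_count must stay 32-bit aligned inside the struct");
```

This way a stray #pragma pack around the class definition is caught immediately rather than showing up as a rare multi-processor bug.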
615,969
631,356
How to get the running version of Power Point using C++ unmanaged?
I am using unmanaged C++ with PowerPoint (2003 and 2007). How do I get the running version of PowerPoint (2003 or 2007) through IDispatch? Thanks, any help would be awesome.
I am sorry, I was working on another project. I found a simple way to get the version using a CComDispatchDriver instance:

    CComVariant ccVersion;
    // disp is of type CComDispatchDriver
    disp.GetPropertyByName(L"Version", &ccVersion);

Doing that, I get ccVersion = "11.0" for 2003 and "12.0" for 2007. To convert it to a string I used the CString class:

    CString version;
    version = CString(V_BSTR(&ccVersion));

Thanks for your help; I hope this can be useful for someone else.
615,993
616,025
Parsing a string in C++
I have a huge set of log lines and I need to parse each line (so efficiency is very important). Each log line is of the form:

    cust_name time_start time_end (IP or URL)*

So: a customer name, a time, a time, and a possibly empty list of IP addresses or URLs separated by semicolons. If there is only one IP or URL in the last list there is no separator; if there is more than one, they are separated by semicolons. I need a way to parse this line and read it into a data structure. time_start or time_end could be either system time or GMT. cust_name could also consist of multiple strings separated by spaces. I can do this by reading character by character and essentially writing my own parser. Is there a better way to do this?
Maybe the Boost.Regex library will help you: http://www.boost.org/doc/libs/1_38_0/libs/regex/doc/html/index.html
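As a hedged alternative to a full regex, the trailing list can be split cheaply with std::getline, which often beats a regex on hot parsing paths. This sketch (the function name is invented) assumes the cust_name and the two time fields have already been stripped off the line:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical helper: splits the trailing semicolon-separated list of
// IPs/URLs. std::getline with a ';' delimiter naturally handles both
// the single-item case (no separator) and the multi-item case.
std::vector<std::string> split_hosts(const std::string& field)
{
    std::vector<std::string> hosts;
    std::istringstream in(field);
    std::string item;
    while (std::getline(in, item, ';'))
        if (!item.empty())          // tolerate stray separators
            hosts.push_back(item);
    return hosts;
}
```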
616,573
616,703
Isn't the Factory pattern the same thing as global state?
Let's say I have a class like this:

    class MonkeyFish
    {
    public:
        MonkeyFish(GlobalObjectA & a, GlobalObjectB & b, GlobalObjectC & c);
    private:
        GlobalObjectA & m_a;
        GlobalObjectB & m_b;
        GlobalObjectC & m_c;
    };

Without a factory, I need to do the following in order to instantiate a MonkeyFish:

    GlobalObjectA a;
    GlobalObjectB b;
    GlobalObjectC c;

    int main()
    {
        MonkeyFish * monkey_fish = new MonkeyFish(a, b, c);
        monkey_fish->go();
    }

On the other hand, if I have a MonkeyFishFactory, it seems like I have to do this:

    GlobalObjectA a;
    GlobalObjectB b;
    GlobalObjectC c;

    int main()
    {
        MonkeyFishFactory mf_factory(a, b, c);
        MonkeyFish * monkey_fish = mf_factory.buildMonkeyFish("Bob");
        monkey_fish->go();
    }

I still have global objects. Even if the MonkeyFishFactory itself created the GlobalObjects internally (so they are now inside the MonkeyFishFactory instead of true globals), it seems like the MonkeyFishFactory itself still needs to be a global object so that I can access it any time I want to create a MonkeyFish. Isn't the Factory pattern the same thing as global state in this case? (I'm currently operating under the assumption that global state is a Bad Thing, and eliminating it is a Good Thing.)
Are you confusing concepts here?

The Factory pattern is usually applied when you are returning an instance of a concrete class that hides behind an abstract interface. The idea is that the caller sees just the interface and doesn't even have to know what the concrete type of the object is. It is all about creating an object instance based on parameters, and decoupling the logic that decides which object to create from the user creating the object.

What you are describing is a mixture of Singleton (or MonoState) and Factory. Your Factory has state, so it cannot be made static. In this case, you will need to apply something like the Singleton pattern to control the creation of a single Factory instance with the appropriate globals hidden within it:

    class IMonkeyFish {
    public:
        virtual ~IMonkeyFish() {}
        virtual void go() = 0;
    };

    class Factory {
    public:
        static Factory& instance();
        IMonkeyFish* createMonkeyFish();
    protected:
        Factory(GlobalObjectA& a, GlobalObjectB& b, GlobalObjectC& c);
    private:
        static Factory* theInstance;
        GlobalObjectA& instanceOfA;
        GlobalObjectB& instanceOfB;
        GlobalObjectC& instanceOfC;
    };

    Factory& factory = Factory::instance();
    IMonkeyFish* fishie = factory.createMonkeyFish();
    fishie->go();

The Singleton pattern governs the creation of the factory instance, and the Factory pattern hides the details surrounding the creation of objects that implement the IMonkeyFish interface. The Good Thing (TM) is the hiding of the global state and the decoupling of the MonkeyFish concrete details from creating an instance.

Whether using Singleton here is correct at all is a whole other issue, though. There are probably a bunch of threads floating around about that as well.
616,653
616,695
Portable C++ Stack Trace on Exception
I am writing a library that I would like to be portable. Thus, it should not depend on glibc or Microsoft extensions or anything else that is not in the standard. I have a nice hierarchy of classes derived from std::exception that I use to handle errors in logic and input. Knowing that a particular type of exception was thrown at a particular file and line number is useful, but knowing how the execution got there would potentially be much more valuable, so I have been looking at ways of acquiring the stack trace. I am aware that this data is available when building against glibc using the functions in execinfo.h (see question 76822) and through the StackWalk interface in Microsoft's C++ implementation (see question 126450), but I would very much like to avoid anything that is not portable. I was thinking of implementing this functionality myself in this form:

    class myException : public std::exception
    {
    public:
        ...
        void AddCall(std::string s)
        {
            m_vCallStack.push_back(s);
        }

        std::string ToStr() const
        {
            std::string l_sRet = "";
            ...
            l_sRet += "Call stack:\n";
            for (size_t i = 0; i < m_vCallStack.size(); i++)
                l_sRet += "  " + m_vCallStack[i] + "\n";
            ...
            return l_sRet;
        }

    private:
        ...
        std::vector<std::string> m_vCallStack;
    };

    ret_type some_function(param_1, param_2, param_3)
    {
        try
        {
            ...
        }
        catch (myException & e)
        {
            e.AddCall("some_function( " + param_1 + ", " + param_2 + ", " + param_3 + " )");
            throw;
        }
    }

    int main(int argc, char * argv[])
    {
        try
        {
            ...
        }
        catch (const myException & e)
        {
            std::cerr << "Caught exception:\n" << e.ToStr();
            return 1;
        }
        return 0;
    }

Is this a terrible idea? It would mean a lot of work adding try/catch blocks to every function, but I can live with that. It would not work when the cause of the exception is memory corruption or lack of memory, but at that point you are pretty much screwed anyway. It may provide misleading information if some functions in the stack do not catch exceptions, add themselves to the list, and rethrow, but I can at least guarantee that all of my library functions do so. Unlike a "real" stack trace I will not get the line numbers in calling functions, but at least I would have something.

My primary concern is the possibility that this will cause a slowdown even when no exceptions are actually thrown. Do all of these try/catch blocks require additional set-up and tear-down on each function invocation, or is it somehow handled at compile time? Or are there other issues I have not considered?
I think this is a really bad idea. Portability is a very worthy goal, but not when it results in a solution that is intrusive, performance-sapping, and an inferior implementation.

Every platform I've worked on (Windows/Linux/PS2/iPhone/etc.) has offered a way to walk the stack when an exception occurs and match addresses to function names. Yes, none of these are portable, but the reporting framework can be, and it usually takes less than a day or two to write a platform-specific version of stack-walking code. Not only is this less time than it'd take to create and maintain a cross-platform solution, but the results are far better:

- No need to modify functions
- Traps crashes in standard or third-party libraries
- No need for a try/catch in every function (slow and memory intensive)
616,936
633,337
How to get Last Active Cell in Excel 2007
I am working with unmanaged C++ and Excel 2007. I am using a call to the Excel4 API to get the range of cells selected by the user. When the user selects what I call a "common" range, this call returns a range like "R1C1:R4C3", which is exactly the format that I need for doing other operations in my application. However, when the user decides to select whole columns, this API call returns a range with columns only, something like "C1:C3", which of course is a valid Excel range, but it means I have to modify it so that it looks like the range above; in other words, I have to manually add the row range. So I take this range "C1:C3" and change it to "R1C1:R65534C3".

Perhaps at this point you have already noticed my problem: as you can see, I'm hard-coding the row range to be the maximum number of rows supported by Excel 2003 (the application also works with that version of Excel). This of course is not what I'm looking for, because once I get the selected range, I read the info in that range and use it to make a query to the server. Imagine that the user only entered data up to row 534; my query would still include 65,000 rows with nothing in them.

So my question is: how can I get the last cell in which the user entered data? I think it is called the last active cell. That way I can bound my range at that row. Please remember that I'm working with unmanaged code.
The Excel 4 API? Really? There's a command xlcSelectEnd which you can use to jump to the last cell with text entered in it in any direction from a given cell.
617,185
617,560
Approximating a shape boundary using Fourier descriptors
I am trying to approximate shape boundaries using Fourier descriptors. I know this can be done because I've learned about it in class and read about it in several sources. To obtain the Fourier descriptors of a boundary of (x,y) coordinates, I do the following:

1) Turn the (x,y) coordinates into complex numbers of the form x + iy
2) Feed this new set of numbers into the 1D Fourier transform
3) The output is the Fourier descriptors

To approximate the boundary, I simply remove (set to zero) the high frequencies, then apply the inverse Fourier transform, then convert the complex numbers back to (x,y) coordinates, and then reconstruct the image from this new set of coordinates. The goal of my project is to find out how well I can approximate boundaries depending on how many of the terms I set to zero.

My problem is that whenever I set ANY of the frequencies to 0, my output image is very small and comes out as very weird patterns. I've included an example below. The input image is a normal square. The first output image is the reconstruction using all the Fourier descriptors as normal. Note that the whole boundary is not there, because the number of boundary pixels was sampled down to 256 and I didn't bother connecting the dots in the output. Also note that the output is translated to the bottom-left corner; this was on purpose. The second output image is when I use only the first 128 frequencies.

Input image: http://img19.imageshack.us/my.php?image=square0.bmp
Output image 1 (all frequencies): http://img27.imageshack.us/my.php?image=square0normal.bmp
Output image 2 (first half of frequencies): http://img23.imageshack.us/my.php?image=square0out.bmp

Does anyone have any idea why this might be happening?

Edit: First time putting images on here, not sure why they aren't showing up; the links are above. Also, here is a link to a document that talks about this a bit; it starts at the end of page 5.
The result you are getting is what would be expected if you threw out the low frequencies instead of the high ones. Are you sure about which frequencies are which?
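One detail worth checking, sketched below under the usual FFT output-ordering assumption: low frequencies sit at BOTH ends of the descriptor array (index k mirrors index N-k as a conjugate pair), so "removing high frequencies" means zeroing the middle of the array, not its second half. The helper below (invented for illustration) computes which indices to keep:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical helper: builds a keep/zero mask for an n-point DFT.
// In the usual DFT output ordering, index k and index n-k form a
// conjugate frequency pair, so the lowest frequencies live at both
// ends of the array and the highest frequencies sit in the middle.
std::vector<bool> keep_mask(std::size_t n, std::size_t keep)
{
    std::vector<bool> mask(n, false);
    if (n == 0)
        return mask;
    mask[0] = true;                      // DC component (index 0)
    for (std::size_t k = 1; k <= keep && k < n - k; ++k) {
        mask[k] = true;                  // positive frequency k
        mask[n - k] = true;              // mirrored negative frequency
    }
    return mask;
}
```

Keeping only the first half of the indices, as described in the question, throws away one member of each conjugate pair, which produces exactly the small, scrambled reconstructions shown.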
617,248
617,261
Can the HWND from CreateWindow/CreateDialog be GetMessage'd from another thread?
Using the Win32 APIs, is it possible to create a window or dialog in one thread and then collect events for it from another thread? Are HWNDs tied to threads? Trying the contrived example below, I never see GetMessage() fire:

    HWND g_hWnd;

    DWORD WINAPI myThreadProc(LPVOID lpParam)
    {
        MSG msg;
        while (GetMessage(&msg, g_hWnd, 0, 0) > 0)
        {
            ...
        }
    }

    int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                       LPSTR lpCmdLine, int nShowCmd)
    {
        g_hWnd = CreateDialog(hInstance, MAKEINTRESOURCE(IDD_MYDIALOG), 0, myDlgProc);
        CreateThread(NULL, 0, myThreadProc, NULL, 0, NULL);
        ...
    }

But here, I do:

    HWND g_hWnd;
    HINSTANCE g_hInstance;

    DWORD WINAPI myThreadProc(LPVOID lpParam)
    {
        MSG msg;
        g_hWnd = CreateDialog(g_hInstance, MAKEINTRESOURCE(IDD_MYDIALOG), 0, myDlgProc);
        while (GetMessage(&msg, g_hWnd, 0, 0) > 0)
        {
            ...
        }
    }

    int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                       LPSTR lpCmdLine, int nShowCmd)
    {
        g_hInstance = hInstance;
        CreateThread(NULL, 0, myThreadProc, NULL, 0, NULL);
        ...
    }

Can somebody explain what I'm seeing?
No. GetMessage returns messages on the current thread's input queue. The HWND parameter is a filter, so that GetMessage only returns messages in the current thread's input queue intended for that window. Windows have thread affinity - messages intended for a window get handled on the thread that created and therefore owns the window.
617,358
617,364
Get optarg as a C++ string object
I am using getopt_long to process command-line arguments in a C++ application. The examples all show something like printf("Username: %s\n", optarg) in the processing code. This is great for an example, but I want to actually store the values for use later. Much of the rest of the code uses string objects instead of char*, so I need to copy the contents of optarg into a std::string:

    std::string bar;
    while (1)
    {
        c = getopt_long(argc, argv, "s:U:", long_options, &option_index);
        if (c == -1)
            break;

        switch (c)
        {
        case 'U':
            // What do I need to do here to get
            // the value of optarg into the string
            // object bar?
            bar.assign(optarg);
            break;
        }
    }

The above code compiles, but when it executes I get an "Illegal instruction" error if I try to print out the value of bar using printf (it seems to work just fine with cout):

    // Runs just fine, although I'm not certain it is actually safe!
    cout << " bar: " << bar << "\n";

    // 'Illegal instruction'
    printf(" bar: %s\n", bar);

I do not know enough about command-line debugging to dig deeper into what the illegal instruction might be. I had been running valgrind, but the sheer volume of memory errors resulting from this error has made it difficult for me to pinpoint exactly what is causing it.
You told printf that you were supplying a C-style string (a null-terminated array of chars) when you specified %s, but you passed a string object instead. Assuming you are using std::string, try:

    printf("bar: %s\n", bar.c_str());
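A small sketch of the fix in context (the function name is invented; snprintf writes into a buffer here so the result can be checked without capturing stdout):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// format_bar is an invented name for illustration. printf-family
// functions are C variadic functions and cannot see through class
// types, so the std::string must be handed over as a C string via
// c_str().
std::string format_bar(const std::string& bar)
{
    char buf[128];
    std::snprintf(buf, sizeof buf, " bar: %s", bar.c_str());
    return std::string(buf);
}
```

Passing the std::string object itself pushes the raw object bytes onto the variadic argument list, which is undefined behavior; the crash the question describes is one possible outcome.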
617,513
617,518
How to identify if an object should be on the stack or not?
I was looking for a rule of thumb for allocating objects on the stack or the heap in C++. I have found many discussions here on SO. Many people said it's about the lifetime of an object: if you need a lifetime longer than the scope of the function, put it on the heap. That makes perfect sense. But what confused me is that many people also said to allocate objects on the stack if they are small, and on the heap if they are big. None of them said how to identify whether an object is big or not. I have the following questions:

1) How do I identify whether an object is big or not?
2) What is the maximum stack size? Does each OS have a different stack size?
3) I have a wrapper class which wraps vector<string>. It will have around 100 items. Will it cause a stack overflow if I allocate this class on the stack? I tried this and it worked perfectly; I'm not sure whether I'm doing something wrong.
Well, firstly, vectors (and all the STL container classes) always allocate their element storage from the heap, so you don't have to worry about that. For any container with a variable size it's pretty much impossible to use the stack. If you think about how stack allocation works (at compile time, basically by incrementing a pointer for each object), it should be clear that vector element memory has to come from the heap:

    std::vector<int> myInts;
    std::string myString;
    SomeOtherClass someObject;

    // This memory must come from the heap; there's no way it can be
    // allocated on the stack, since other objects follow these on the
    // stack and the sizes below are only known at run time.
    myString = "Some String";
    myInts.reserve(256);

Unless you are in a recursive function, you can place several kilobytes of data on the stack without much worry. Stack sizes are controlled by the program (not the OS), and the default tends to range from 32 KB to 1 MB; most desktop software comes in at the 1 MB range.

Individual objects are almost never a concern. In general they will either be small enough for the stack, or will allocate internally from the heap. If objects are local to a function, put them on the stack; if not, put them on the heap. Use the heap for large buffers you allocate for loading, sorting, or manipulating data.
617,555
617,696
Error with two ways of linking boost regex
I understand that the Boost.Regex static library is created with the ar utility by archiving the individual object files. I linked the Boost.Regex library by passing the archive directly to g++, and this worked very well:

    g++ *.o libboost_regex-gcc-1_37.a -o sairay.out

I then individually compiled the Boost.Regex source files and tried to link the object files of my application and the object files of Boost.Regex into a single executable. But now I am getting errors:

    g++ *.o -o sairay.out
    Undefined                       first referenced
     symbol                             in file
    main                                /opt/csw/gcc3/lib/gcc/sparc-sun-solaris2.8/3.4.5/crt1.o

I wish to know what the difference is between linking the static library and linking the individual object files of the static library with the application.
I think it's supposed to be:

    g++ *.o -L. -lboost_regex-gcc -o sairay.out -static
617,571
617,586
Microsoft _s functions, are they part of the C++ standard now?
I just recently changed my IDE to MS Visual Studio 2005, coming from MSVC++ 6, and I've gotten a lot of deprecation warnings. Rather than ignore the warnings, I started to change the calls to the _s equivalents. However, I then found out that these were Microsoft-only implementations. I read somewhere that they were pushing for these to become part of the standard. Did that happen? Is it a good idea to use these _s functions, or should I use something else? Thanks.
The *_s() functions are not part of the C standard, but there is a pending Technical Report proposing that they be added (I'm not sure whether the routines in the TR are exactly the same as Microsoft's or just similar): TR 24731-1, "Extensions to the C Library, Part I: Bounds-checking interfaces": http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1225.pdf

If you want to continue to use the old functions, you can keep the deprecation warnings quiet by defining the macro _CRT_SECURE_NO_WARNINGS (formerly _CRT_SECURE_NO_DEPRECATE, which might still be supported).
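If portability matters, a hedged sketch of a bounds-checked copy that uses only standard functions (no TR 24731 / Microsoft extensions) might look like this; safe_copy is an invented name, not a standard API:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <cstring>

// safe_copy is an invented name, not a standard API. snprintf never
// writes past dst_size and always null-terminates when dst_size > 0,
// which gives strcpy_s-like bounds checking with standard C/C++ only.
// Returns false on bad arguments or truncation.
bool safe_copy(char* dst, std::size_t dst_size, const char* src)
{
    if (dst == nullptr || dst_size == 0 || src == nullptr)
        return false;
    int n = std::snprintf(dst, dst_size, "%s", src);
    return n >= 0 && static_cast<std::size_t>(n) < dst_size;
}
```

Unlike the _s variants, this compiles everywhere a C99/C++11 library is available, and the boolean return makes truncation explicit instead of invoking a constraint handler.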
617,726
617,783
How to get debug information for an abstract(?) pimpl in C++?
I have a wrapper class that delegates its work to a pimpl, and the pimpl is a pointer to a base class/interface with no data that is specialized in several different ways. Like this:

    class Base
    {
        virtual void doStuff() = 0;
    };

    class Derived : public Base
    {
        int x, y;
        void doStuff()
        {
            x = (x + y*2) * x; // whatever
        }
    };

    class Wrapper
    {
        Base* _pimpl;
        void doStuff()
        {
            _pimpl->doStuff();
        }
    };

Now this works fine most of the time, but when going into the debugger I can't view x and y of the Derived class (because the pointee could be anything). Normally this is irrelevant, but when something goes wrong, seeing the state of Derived can be important. (Then again, hiding that state is the original idea of a pimpl, so I guess I can't really complain.) Right now I have a tostring() function that prints the state out for debugging purposes, but I was wondering if there is a better solution for debugging this sort of construct, in Visual Studio in particular, though a general solution would be better. Thanks.
Just keep expanding the tree in the Autos window or one of the Watch windows: http://www.freeimagehosting.net/uploads/626b4a37ee.png
617,809
617,853
Stack unwinding in case of structured exceptions
This question provides more clarity on the problem described here. I did some more investigation and found that stack unwinding is not happening in the following piece of code:

    class One
    {
    public:
        int x;
    };

    class Wrapper
    {
    public:
        Wrapper(CString csText) : mcsText(csText)
        {
            CString csTempText;
            csTempText.Format("Wrapper constructor :: %s\n", mcsText);
            OutputDebugString(csTempText);
        }

        ~Wrapper()
        {
            CString csTempText;
            csTempText.Format("Wrapper destructor :: %s\n", mcsText);
            OutputDebugString(csTempText);
        }

        CString mcsText;
    };

    class Test
    {
    public:
        void notifyError()
        {
            try
            {
                int x = 10;
            }
            catch(...) {}
        }

        void OnRecvBuffer()
        {
            try
            {
                Wrapper a("AddRef");
                One* p = NULL;
                p->x = 10;
            }
            catch(...)
            {
                notifyError();
            }
        }
    };

    int main()
    {
        Test* pTest = new Test;
        pTest->OnRecvBuffer();
        OutputDebugString("Test");
    }

I compiled this code with the VC6 SP5 compiler, and the output is only "Wrapper constructor :: AddRef"; that is, the destructor of the Wrapper object constructed on the stack is never called. Is this the expected behavior, or is it a bug in the VC compiler? Can I use some compiler flag so that stack unwinding happens in this case?
Dereferencing a null pointer raises a structured (hardware) exception, not a C++ exception, and by default the compiler does not guarantee stack unwinding when a structured exception passes through. If you want catch(...) to handle SEH exceptions with proper unwinding, you must use the _set_se_translator function and the /EHa compiler option.
617,825
617,834
Searching for Junk Characters in a String
Friends, I want to integrate the following code into the main application code. Junk characters that sometimes come populated in the output string crash the application, and the following code snippet doesn't work:

    void stringCheck(char*);

    int main()
    {
        char some_str[] = "Common Application FE LBS Serverr is down";
        stringCheck(some_str);
    }

    void stringCheck(char * newString)
    {
        for (int i = 0; i < strlen(newString); i++)
        {
            if ((int)newString[i] > 128)
            {
                TRACE(" JUNK Characters in Application Error message FROM DCE IS = "
                      << (char)newString[i] << "++++++" << (int)newString[i]);
            }
        }
    }

Can someone please show me a better approach to finding junk characters in a string? Many thanks.
Your char is probably signed. Cast to unsigned char instead, to avoid the value becoming a negative integer when it is converted to int:

    if ((unsigned char)newString[i] > 128)

Depending on your needs, isprint might do a better job, checking for a printable character (including space):

    if (!isprint((unsigned char)newString[i]))
        ...

Note that you have to cast to unsigned char here too: the input to isprint must be a value between 0 and UCHAR_MAX (or EOF).
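Putting the two casts together, a minimal sketch (the function name is invented) of a whole-string junk test in the default locale:

```cpp
#include <cassert>
#include <cctype>
#include <string>

// has_junk is an invented name for illustration. The loop binds each
// byte as unsigned char, so values >= 128 are never negative by the
// time they reach std::isprint (which requires 0..UCHAR_MAX or EOF).
bool has_junk(const std::string& s)
{
    for (unsigned char c : s)
        if (!std::isprint(c))
            return true;
    return false;
}
```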
617,943
617,987
Best Replacement for a Character Array
We have a data structure:

    struct MyData
    {
        int  length;
        char package[MAX_SIZE];
    };

where MAX_SIZE is a fixed value. Now we want to change it to support "unlimited" package lengths greater than MAX_SIZE. One proposed solution is to replace the static array with a pointer and then dynamically allocate whatever size we require:

    struct MyData
    {
        int   length;
        char* package;
    };

and then:

    package = (char*)malloc(SOME_RUNTIME_SIZE);

My question is: is this the most efficient way to meet the requirement, or is there another method, maybe using STL data structures like growable arrays? We want a solution where most of the code that works for the static char array also works for the new structure.
I would also wrap a vector:

    // Wraps a vector. Provides convenience conversion constructors
    // and assign functions.
    struct bytebuf
    {
        explicit bytebuf(size_t size) : c(size) {}

        template<size_t size>
        bytebuf(char const (&v)[size])
        {
            assign(v);
        }

        template<size_t size>
        void assign(char const (&v)[size])
        {
            c.assign(v, v + size);
        }

        // provide access to the wrapped vector
        std::vector<char> & buf() { return c; }

    private:
        std::vector<char> c;
    };

    int main()
    {
        bytebuf b("data");
        process(&b.buf()[0], b.buf().size()); // process 5 bytes
        std::string str(&b.buf()[0]);
        std::cout << str; // outputs "data"

        bytebuf c(100);
        read(&c.buf()[0], c.buf().size()); // read 100 bytes
        // ...
    }

There is no need to add many more functions to it, I think. You can always get the vector using buf() and operate on it directly. Since a vector's storage is contiguous, you can use it like a C array, but it is still resizable: c.buf().resize(42). The template conversion constructor and assign function allow you to initialize or assign from a C array directly. If you like, you can add more constructors that initialize from a pair of iterators or from a pointer and a length. But I would try to keep the amount of added functionality low, so it remains a tight, transparent vector-wrapping struct.
618,111
618,152
C++ (Standard) Exceptions and Unicode
I'm running into an issue where I'm processing unicode strings and I want to do some error reporting with standard exceptions. The error messages contained in standard exceptions are not unicode. Usually that hasn't been a problem for me because I can define the error message in non-unicode and have enough information, but in this case I want to include data from the original strings, and these can be unicode. How do you handle unicode messages in your exceptions? Do you create your own custom exception class, do you derive from the standard exceptions extending them to unicode, or do you have even other solutions to this problem (such as a rule "don't use unicode in exceptions")?
I think Peter Dimov's rationale as pointed out in the Boost error handling guidelines covers this well: Don't worry too much about the what() message. It's nice to have a message that a programmer stands a chance of figuring out, but you're very unlikely to be able to compose a relevant and user-comprehensible error message at the point an exception is thrown. Certainly, internationalization is beyond the scope of the exception class author. Peter Dimov makes an excellent argument that the proper use of a what() string is to serve as a key into a table of error message formatters. Now if only we could get standardized what() strings for exceptions thrown by the standard library...
618,169
618,175
Clean vector every loop iteration. What is the most memory efficient way?
I have a question about std::vector. I have a very memory-intensive algorithm where I foresee that predicting vector sizes and reserving enough memory for the vectors in advance will help me a lot with reducing memory usage. Which of the following is better:

for ( ... )
{
    std::vector<Type> my_vector;
    my_vector.reserve(stuff_count);
    // Do stuff, and append stuff to my_vector.
}

Or this:

std::vector<Type> my_vector;
for ( ... )
{
    my_vector.clear();
    my_vector.reserve(stuff_count);
    // Do stuff, and append stuff to my_vector.
}

Please tell me which is best, or if there is an even better way of doing this. Thank you very much in advance!
With the first variant you reallocate the vector's buffer on each iteration, which is usually quite costly. With the second variant you only reallocate occasionally. The second variant is better since speed is a priority for you. It's unclear from your question where the number of elements is known from. Maybe you can even quickly calculate the maximum number of elements across all iterations, set that as the buffer size, and have no reallocation at all.
618,204
618,220
De facto list of primitive types usable in C++
If, for example, you're going to write a variant type class, you will naturally need identification of what type an instance of that class is carrying. I'm wondering if anyone knows of any official or semi-official (de-facto?) reference of all primitive datatypes one would possibly be interested in? Only primitives, and no need for abstract types like string or handle. Thanks.
The only official reference is the ISO/IEC 14882 C++ Standard.
618,215
72,674,082
How do I catch system-level exceptions in Linux C++?
The following catch() is not called: void test(void) { int i=1,j=0,k; try { k = i/j; } catch(...) { ...handle it... } } Is there a way to catch this kind of exception?
The code below simulates the __try/__except mechanism of Visual C++ for gcc/g++. Note that on Linux an integer division by zero raises the SIGFPE signal rather than a C++ exception, so a plain catch(...) will never see it; you have to intercept the signal and jump back out of the handler:

#include <stdio.h>
#include <signal.h>
#include <setjmp.h>

static __thread sigjmp_buf gJumpBuf; // one jump buffer per thread

void ExceptionHandler(int sig)
{
    printf("exception handler, signal id=%d\n", sig);
    // jump back to the sigsetjmp() call; it will return the value 10001
    siglongjmp(gJumpBuf, 10001);
}

int main()
{
    // install the handler for access violations and arithmetic faults
    signal(SIGSEGV, ExceptionHandler);
    signal(SIGFPE,  ExceptionHandler);

    // sigsetjmp saves the current context (including the signal mask,
    // because of the non-zero second argument)
    int rc = sigsetjmp(gJumpBuf, 1);
    if (rc == 0)
    {
        *(int*)0 = 1; // generate an exception (access violation)
    }
    printf("returned from exception, rc=%d\n", rc);
    return 0;
}
618,472
618,580
using of std::accumulate
Need prettier solution of below example but with std::accumulate. #include <algorithm> #include <vector> #include <iostream> class Object { public: Object( double a, double b ): a_( a ), b_( b ) {} double GetA() const { return a_; } double GetB() const { return b_; } // other methods private: double a_; double b_; }; class Calculator { public: Calculator( double& result ): result_( result ) {} void operator() ( const Object& object ) { // some formula result_ += object.GetA() * object.GetB(); } private: double& result_; }; int main() { std::vector< Object > collection; collection.push_back( Object( 1, 2 ) ); collection.push_back( Object( 3, 4 ) ); double result = 0.0; std::for_each( collection.begin(), collection.end(), Calculator( result ) ); std::cout << "result = " << result << std::endl; return 0; }
Make these changes to Calculator and the main function (note that std::accumulate lives in <numeric>, so add that include):

struct Calculator
{
    double operator() ( double result, const Object& obj )
    {
        return result + ( obj.GetA() * obj.GetB() );
    }
};

int main()
{
    std::vector< Object > collection;
    collection.push_back( Object( 1, 2 ) );
    collection.push_back( Object( 3, 4 ) );
    double result = std::accumulate( collection.begin(), collection.end(), 0.0, Calculator() );
    std::cout << "result = " << result << std::endl;
    return 0;
}

The initial value must be 0.0, not 0: std::accumulate deduces its accumulator type from the initial value, so an int literal would truncate every partial sum to an integer.

A plain function works just as well:

double sumABProduct( double result, const Object& obj )
{
    return result + ( obj.GetA() * obj.GetB() );
}

double result = std::accumulate( collection.begin(), collection.end(), 0.0, sumABProduct );
618,581
618,597
Registering each C/C++ source file to create a runtime list of used sources
For a debugging and logging library, I want to be able to find, at runtime, a list of all of the source files that the project has compiled and linked. I assume I'll be including some kind of header in each source file, and the preprocessor __FILE__ macro can give me a character constant for that file, so I just need to somehow "broadcast" that information from each file to be gathered by a runtime function. The question is how to elegantly do this, and especially if it can be done from C as opposed to C++. In C++ I'd probably try to make a class with a static storage to hold the list of filenames. Each header file would create a file-local static instance of that class, which on creation would append the FILE pointer or whatever into the class's static data members, perhaps as a linked list. But I don't think this will work in C, and even in C++ I'm not sure it's guaranteed that each element will be created.
I wouldn't do that sort of thing right in the code. I would write a tool which parsed the project file (vcproj, makefile or even just scan the project directory for *.c* files) and generated an additional C source file which contained the names of all the source files in some kind of pre-initialized data structure. I would then make that tool part of the build process so that every time you do a build this would all happen automatically. At run time, all you would have to do is read that data structure that was built.
618,829
618,869
openGL glDrawElements with interleaved buffers
Thus far I have only used glDrawArrays and would like to move over to using an index buffer and indexed triangles. I am drawing a somewhat complicated object with texture coords, normals and vertex coords. All this data is gathered into a single interleaved vertex buffer and drawn using calls similar to (assuming all the setup is done correctly): glVertexPointer( 3, GL_FLOAT, 22, (char*)m_vertexData ); glNormalPointer( GL_SHORT, 22, (char*)m_vertexData+(12) ); glTexCoordPointer( 2, GL_SHORT, 22, (char*)m_vertexData+(18) ); glDrawElements(GL_TRIANGLES, m_numTriangles, GL_UNSIGNED_SHORT, m_indexData ); Does this allow for m_indexData to also be interleaved with the indices of my normals and texture coords as well as the standard position index array? Or does it assume a single linear list of indices that apply to the entire vertex format (POS, NOR, TEX)? If the latter is true, how is it possible to render the same vertex with different texture coords or normals? I guess this question could also be rephrased as: if I had 3 separate indexed lists (POS, NOR, TEX) where the latter 2 cannot be rearranged to share the same index list as the first, what is the best way to render that?
You cannot have different indexes for the different lists. When you specify glArrayElement(3), OpenGL is going to take the 3rd element of every list. What you can do is play with the pointer you specify, since essentially the place in the list that is eventually accessed is the pointer offset from the start of the list plus the index you specify. This is useful if you have a constant offset between the lists. If the lists are just a random permutation, then this kind of juggling for every vertex is probably going to be as costly as just using plain old glVertex3fv(), glNormal3fv() and glTexCoord3fv().
618,859
618,874
How can I make my own C++ compiler understand templates, nested classes, etc. strong features of C++?
It is a university task in my group to write a compiler of a C-like language. Of course I am going to implement a small part of our beloved C++. The exact task is absolutely stupid, and the lecturer told us it needs to be self-compilable (should be able to compile itself) - so, he meant not to use libraries such as Boost and STL. He also does not want us to use templates because they are hard to implement. The question is: is it realistic for me, as I'm going to write this project on my own, with the deadline at the end of May - the middle of June (this year), to implement not only templates, but also nested classes, namespaces, and virtual function tables at the level of syntax analysis? PS: I am not a noob in C++
Stick to doing a C compiler. Believe me, it's hard enough work building a decent C compiler, especially if its expected to compile itself. Trying to support all the C++ features like nested classes and templates will drive you insane. Perhaps a group could do it, but on your own, I think a C compiler is more than enough to do. If you are dead set on this, at least implement a C-like language first (so you have something to hand in). Then focus on showing off.
620,137
620,402
Do the parentheses after the type name make a difference with new?
If 'Test' is an ordinary class, is there any difference between: Test* test = new Test; and Test* test = new Test();
Let's get pedantic, because there are differences that can actually affect your code's behavior. Much of the following is taken from comments made to an "Old New Thing" article.

Sometimes the memory returned by the new operator will be initialized, and sometimes it won't, depending on whether the type you're newing up is a POD (plain old data), or if it's a class that contains POD members and is using a compiler-generated default constructor.

In C++98 there are 2 types of initialization: zero and default. In C++03 a 3rd type of initialization, value initialization, was added.

Assume:

struct A { int m; };                        // POD
struct B { ~B(); int m; };                  // non-POD, compiler generated default ctor
struct C { C() : m() {}; ~C(); int m; };    // non-POD, default-initialising m

In a C++98 compiler, the following should occur:

new A    - indeterminate value
new A()  - zero-initialize
new B    - default construct (B::m is uninitialized)
new B()  - default construct (B::m is uninitialized)
new C    - default construct (C::m is zero-initialized)
new C()  - default construct (C::m is zero-initialized)

In a C++03 conformant compiler, things should work like so:

new A    - indeterminate value
new A()  - value-initialize A, which is zero-initialization since it's a POD.
new B    - default-initializes (leaves B::m uninitialized)
new B()  - value-initializes B, which zero-initializes all fields since its default ctor is compiler generated as opposed to user-defined.
new C    - default-initializes C, which calls the default ctor.
new C()  - value-initializes C, which calls the default ctor.

So in all versions of C++ there's a difference between new A and new A() because A is a POD. And there's a difference in behavior between C++98 and C++03 for the case new B(). This is one of the dusty corners of C++ that can drive you crazy. When constructing an object, sometimes you want/need the parens, sometimes you absolutely cannot have them, and sometimes it doesn't matter.
620,378
620,437
Why do C++ templates let me circumvent incomplete types (forward declarations)?
I tried three iterations of the following simple program. This is a highly simplified attempt to write a container-and-iterator pair of classes, but I was running into issues with incomplete types (forward declarations). I discovered that this was in fact possible once I templatized everything - but only if I actually used the template parameter! (I realized this by looking at the Google sparsetable code.) Any hints explaining why the second works while the third doesn't? (I know why the first one doesn't work - the compiler needs to know the memory layout of the container.) Thanks in advance. // This doesn't work: invalid use of incomplete type. #if 0 struct container; struct iter { container &c; int *p; iter(container &c) : c(c), p(&c.value()) {} }; struct container { int x; int &value() { return x; } iter begin() { return iter(*this); } }; int main() { container c; c.begin(); return 0; } #endif // This *does* work. template<typename T> struct container; template<typename T> struct iter { container<T> &c; T *p; iter(container<T> &c) : c(c), p(&c.value()) {} }; template<typename T> struct container { T x; T &value() { return x; } iter<T> begin() { return iter<T>(*this); } }; int main() { container<int> c; c.begin(); return 0; }; // This doesn't work either. #if 0 template<typename T> struct container; template<typename T> struct iter { container<int> &c; int *p; iter(container<int> &c) : c(c), p(&c.value()) {} }; template<typename T> struct container { int x; int &value() { return x; } iter<int> begin() { return iter<int>(*this); } }; int main() { container<int> c; c.begin(); return 0; } #endif
The first requires a definition of container because iter's constructor calls a member of an incomplete type. If you define the constructor of iter after container's definition you'd be okay. So:

struct container;

struct iter {
    container &c;
    int *p;
    iter(container &c);
};

struct container {
    int x;
    int &value() { return x; }
    iter begin() { return iter(*this); }
};

iter::iter(container &c) : c(c), p(&c.value()) {}

int main() {
    container c;
    c.begin();
    return 0;
}

The second example works because of two-phase name lookup: inside a template, expressions that depend on the template parameter - like c.value(), where c has type container<T>& - are only checked at instantiation time. No code exists until you actually instantiate container<int> in main, and by then all the types are fully defined. Try moving any of the iter or container template definitions after main and you'll hit an error.

The third example fails for the same reason the first one does: container<int> does not depend on iter's template parameter T, so the compiler must resolve c.value() at the point where the template is defined - and there container<int> is still an incomplete type. The unused parameter T buys you nothing; only genuinely dependent names get their lookup deferred.

Moreover, iterators are modeled well by pointers. Passing this's value will not be of much help. Iterators are typically required for a sequence and not an individual object. Though, there is nothing that can stop you from building one. And you don't need a ; after a function body.
620,498
620,563
How to detect PowerPoint 2007 from a C++ addin?
I need to detect if my addin is in PowerPoint 2007 via my C++ addin. The PowerPoint object model exposes Application.Version, which should work, but I do not know enough about how to use this with IDispatch. How to detect PowerPoint 2007 from a C++ addin?
First, call IDispatch::GetIDsOfNames to get the DISPID for the "Version" property. Then call IDispatch::Invoke with that DISPID to get the property value. Alternatively, you could generate wrappers with the #import directive and use easier methods to get the value of this property. Maybe this article will help you: http://support.microsoft.com/kb/238393 (How To Use Visual C++ to Access DocumentProperties with Automation)
620,604
620,634
difference between a pointer and reference parameter?
Are these the same: int foo(bar* p) { return p->someInt(); } and int foo(bar& r) { return r.someInt(); } Ignore the null pointer potential. Are these two functions functionally identical no matter if someInt() is virtual or if they are passed a bar or a subclass of bar? Does this slice anything: bar& ref = *ptr_to_bar;
C++ references are intentionally not specified in the standard to be implemented using pointers. A reference is more like a "synonym" to a variable than a pointer to it. This semantics opens some possible optimizations for the compiler when it's possible to realize that a pointer would be an overkill in some situations. A few more differences: You can't assign NULL to a reference. This is a crucial difference and the main reason you'd prefer one over the other. When you take the address of a pointer, you get the address of the pointer variable. When you take the address of a reference, you get the address of the variable being referred to. You can't reassign a reference. Once it is initialized it points to the same object for its entire life.
620,745
705,474
C++: Rotating a vector around a certain point
I am trying to rotate a vector around a certain point on the vector(in C++): 1 2 3 4 5 6 7 8 9 rotated around the point (1,1) (which is the "5") 90 degrees would result in: 7 4 1 8 5 2 9 6 3 Right now I am using: x = (x * cos(90)) - (y * sin(90)) y = (y * cos(90)) + (x * sin(90)) But I don't want it rotated around (0,0)
The answer depends on your coordinate system. Computer graphics coordinate system, with (0,0) at Top left If you are using a computer graphics vector implementation where (0,0) is the top left corner and you are rotating around the point (dx, dy), then the rotation calculation, including the translation back into the original coordinate system, would be: x_rotated = ((x - dx) * cos(angle)) - ((dy - y) * sin(angle)) + dx y_rotated = dy - ((dy - y) * cos(angle)) + ((x - dx) * sin(angle)) Physics/Maths coordinate system, with (0,0) at Bottom left If you are using a more traditional real world coordinate system, where (0,0) is the bottom left corner, then the rotation calculation, around the point (dx, dy) including the translation back into the original coordinate system, would be: x_rotated = ((x - dx) * cos(angle)) - ((y - dy) * sin(angle)) + dx y_rotated = ((x - dx) * sin(angle)) + ((y - dy) * cos(angle)) + dy Thanks to mmx for their comment on Pesto's post, and to SkeletorFromEterenia for highlighting an error in my implementation.
620,752
620,813
Compiling C++ program on Windows XP
I have a problem running a simple Hello-world program in C++ on my Windows XP. I have written a post here: Using the g++ C++ compiler from cygwin where I stated the problem and I received many helpful replies, which solved some things. However, I still cannot run my hello-world program. Please, have a look at the post, I have included the program there. A friend of mine is able to run this program on his machine without problems. So, the problem should be in my system or libraries or something else, but I cannot figure what. In the "Edited" section of my post, I have included what error messages exactly I get. Here is what I receive when I try to compile the program like this: g++ hello.cpp From the cygwin shell: When I try g++ hello.cpp, I get: hello.cpp:1: parse error before character 0357 hello.cpp: In function 'int main()': hello.cpp:'cout' undeclared (first use this function) hello.cpp: (Each undeclared identifier is reported only once hello.cpp: for each function it appears in.) hello.cpp: 'endl' undeclared (first use this function) g++.exe: hello.cpp: No such file or directory g++.exe: No input files g++.exe: hello.cpp: No such file or directory g++.exe: No input files g++.exe: hello.cpp: No such file or directory g++.exe: No input files g++.exe: hello.cpp: No such file or directory g++.exe: No input files g++.exe: hello.cpp: No such file or directory g++.exe: No input files And here is what I get when I try to compile the program (again with g++ hello.cpp) from the DOS command prompt: When I try g++ hello.cpp, I get: hello.cpp:1: parse error before character 0357 hello.cpp: In function 'int main()': hello.cpp:'cout' undeclared (first use this function) hello.cpp: (Each undeclared identifier is reported only once hello.cpp: for each function it appears in.) hello.cpp: 'endl' undeclared (first use this function)
It looks like you have somehow got an illegal character in your code - "parse error before character 0357" points at a byte 0xEF, which is the first byte of a UTF-8 byte-order mark (BOM), so your editor has probably saved the file as UTF-8 with a BOM that your (old) g++ cannot skip. Did you type the code in yourself or copy & paste it? If the latter, the source you copied from may be suspect in some way - type the code into the editor yourself, save it as plain ASCII (or UTF-8 without a BOM) and recompile.
620,795
620,817
Function-wide exception handling in c++ - is it a bad style?
There is a try-catch thing about functions, which I think sometimes may be quite useful: bool function() try { //do something } catch(exception_type & t) { //do something } So the first part of the question: is this style considered bad in general case? And the concrete example I used this approach in: We had project with quite a lot of code in c and c++. And there we had custom exception types (not std::exception derived). I needed to integrate XML library and cast all exception to our types. So, basically, the last step was to catch all exceptions from XML library and convert them. Function before: bool readEntity(...) { while(...) { if(...) { //lot's of code... } } } after: bool readEntity(...) try { while(...) { if(...) { //lot's of code... } } } catch(XMLBaseException & ex) { //create our exception and throw } My thoughts went something like this: I clearly state my intentions to convert all exception derived from one type into custom type AND we keep our screen without horizontal scroll bar (cause horizontal scroll bars are bad). Well, I actually was quite criticized for this approach as for non-clear one during code review. So I'd like to hear you thoughts. UPDATE: just to be clear: refactoring the function wasn't an option. And actually it was good written one.
Really the only reason to use function-level try blocks is for constructors; otherwise it's a somewhat obscure feature that doesn't buy you that much. It's just as easy to do it this way:

bool readEntity(...)
{
    try {
        while(...) {
            if(...) {
                //lot's of code...
            }
        }
    } catch(XMLBaseException & ex) {
        //create our exception and throw
    }
}

If you are having trouble with horizontal scrolling then the thing to do is to split up your code. try/catches are complexity and this should be represented in the nesting level, not hidden.

In constructors, this is a different issue: there is no other way to catch exceptions in an initializer list. Note that the try keyword goes before the initializer list:

SomeClass::SomeClass(parameter p1, parameter p2)
try
    : Member1(p1), Member2(p2)
{
}
catch(Exception &ex)
{
    // handle and rethrow
}

Of course, if you have an exception mid-construction, there's not likely much you can do to recover except log and rethrow (it's going to get rethrown anyway in the constructor case). Your object isn't completely constructed yet and there's nothing you can really do with it. The only thing that you can trust to be valid are the parameters (although if the initialization failed, that will likely be due to bad parameters). See this GOTW for a discussion on this.
620,843
620,849
How do I create an array of pointers?
I am trying to create an array of pointers. These pointers will point to a Student object that I created. How do I do it? What I have now is: Student * db = new Student[5]; But each element in that array is the student object, not a pointer to the student object. Thanks.
Student** db = new Student*[5]; // To allocate it statically: Student* db[5];
620,914
620,979
C++ concept check vs inheritance
What is the relationship between using virtual functions and C++ inheritance mechanisms versus using templates and something like boost concepts? It seems like there is quite an overlap of what is possible. Namely, it appears to be possible to achieve polymorphic behavior with either approach. So, when does it make sense to favor one over the other? The reason why I bring this up is because I have a templated container, where the containers themselves have a hierarchical relationship. I would like to write algorithms that use these containers without caring about which specific container it is. Also, some algorithms would benefit from knowing that the template type satisfied certain concepts (Comparable, for example). So, on one hand, I want containers to behave polymorphicly. On the other, I still have to use concepts if I want to correctly implement some algorithms. What is a junior developer to do?
I think of concepts as a kind of meta-interface. They categorize types by their abilities. The next C++ version supplies native concepts; I hadn't understood them until I came across C++1x's concepts and how they allow putting different yet unrelated types together. Imagine you have a Range interface. You can model that in two ways. One is a subtype relationship:

class Range {
    virtual Iterator * begin() = 0;
    virtual Iterator * end() = 0;
    virtual size_t size() = 0;
};

Of course, every class that derives from that implements the Range interface and can be used with your functions. But now you see it is limited. What about an array? It's a range too!

T t[N];
begin() => t
end()   => t + size()
size()  => N

Sadly, you cannot derive an array from that Range class to implement the interface. You need an extra method (overloading). And what about third-party containers? A user of your library might want to use their containers together with your functions, but he can't change the definition of those containers. Here, concepts come into the game:

auto concept Range<typename T> {
    typename iterator;
    iterator T::begin();
    iterator T::end();
    size_t T::size();
}

Now, you say something about the supported operations of some type, which is fulfilled if T has the appropriate member functions. In your library, you would write the function generically. This allows you to accept any type as long as it supports the required operations:

template<Range R>
void assign(R const& r) {
    ... iterate from r.begin() to r.end().
}

It's a great kind of substitutability: any type that adheres to the concept will fit the bill, not only those types that actively implement some interface. The next C++ Standard goes further: it defines a Container concept that will be fit by plain arrays (by something called a concept map, which defines how some type fits some concept) and other, existing standard containers.
The reason why I bring this up is because I have a templated container, where the containers themselves have a hierarchical relationship. I would like to write algorithms that use these containers without caring about which specific container it is. Also, some algorithms would benefit from knowing that the template type satisfied certain concepts (Comparable, for example). You can actually do both with templates. You can keep your hierarchical relationship to share code, and then write the algorithms in a generic fashion - for example, to communicate that your container is comparable. That's how the standard random-access/forward/output/input iterator categories are implemented:

// tag types for the comparator category
struct not_comparable { };
struct basic_comparable : not_comparable { };

template<typename T>
class MyVector : public BasicContainer<T> {
public:
    typedef basic_comparable comparator_kind;
};

/* Container concept */
T::comparator_kind: comparator category

It's a reasonably simple way to do it, actually. Now you can call a function and it will forward to the correct implementation:

template<typename Container>
void takesAdvantage(Container const& c) {
    takesAdvantage(c, typename Container::comparator_kind());
}

// implementation for basic_comparable containers
template<typename Container>
void takesAdvantage(Container const& c, basic_comparable) { ... }

// implementation for not_comparable containers
template<typename Container>
void takesAdvantage(Container const& c, not_comparable) { ... }

There are different techniques that can be used to implement that. Another way is to use boost::enable_if to enable or disable different implementations each time.
621,084
621,090
Comparing string data received from a socket in C
I have a question on sockets. I have this code: while(bytes = recv(sClient, cClientMessage, 599, 0)){ This puts the message it recives into cClientMessage and the message is always "Message". How I made an if statement like if(cClientMessage == "Message"){//do func}. Now this code will not do the function I want. I think this is because it's not receiving the message right. Can someone help me?
Try:

if ( strcmp( cClientMessage, "Message" ) == 0 ) {
    // do something
}

Note that strcmp needs the buffer to be null-terminated; recv does not add a terminator, so set cClientMessage[bytes] = '\0'; first.

Edit, following a suggestion from strager: a better solution, which does not depend on the received data being null-terminated at all, is to use memcmp:

if ( memcmp( cClientMessage, "Message", strlen( "Message" ) ) == 0 ) {
    // do something
}
621,157
621,161
How to restart a sockets program?
I need my server to stay connected to the server. Does anyone know how to do this? Or post links tutorials anything? Also it says when it restarts 'could not accept client' so how would I clear everything and make it accept it?
Server code: For your server-side code, wrap the accept call in a loop. For each accepted socket, create a new thread, so that the next accept can be called right away. On server startup you may also want to use the SO_REUSEADDR flag. That way, if you had a crash or even a fast restart of the program, your server will be able to use the same port again without a problem. Client code: For your client code you would just check for a socket error and, if that occurs, establish a new connection. Other resources: Beej's guide to network programming is a great resource for learning socket programming. Frostbytes.com also has a great tutorial on socket programming. If you want something more in depth, check out Unix Network Programming 3rd Edition by W. Richard Stevens. Other options: Instead of plain BSD-style sockets, you could also try using Boost.Asio for easier socket programming. You could check out their examples page.
621,233
621,268
How to intentionally delete a boost::shared_ptr?
I have many boost::shared_ptr<MyClass> objects, and at some point I intentionally want to delete some of them to free some memory. (I know at that point that I will never need the pointed-to MyClass objects anymore.) How can I do that? I guess you can't just call delete() with the raw pointer that I get with get(). I've seen a function get_deleter(shared_ptr<T> const & p) in boost::shared_ptr, but I'm not sure how to use it, and also it says experimental right next to it. (I think I have Boost 1.38.) Maybe just assign a new empty boost::shared_ptr to the variable? That should throw away the old value and delete it.
You just do ptr.reset(); - see the shared_ptr manual. It is equivalent to shared_ptr<T>().swap(ptr). You call reset on every smart pointer that should no longer reference the object. The last such reset (or any other action that causes the reference count to drop to zero, actually) will cause the object to be freed using the deleter automatically. Maybe you are interested in the Smart Pointer Programming Techniques - it has an entry about delayed deallocation.
621,255
621,308
Force screen redraw after drawing to screen's DC C++
I'm creating a Windows Mobile custom SIP and as the user presses or "hovers" over a button on the keyboard I draw it's corresponding selected image (iPhone-esque) to the screen's DC using ::GetDC(NULL). It is developed in Win32 C++. My problem is that I can never get the screen to repaint itself, erasing the previously drawn key. I've tried everything I could think of to force it, ::RedrawWindow() with all different combos of params using NULL or any of the Get..Window functions as the HWND, and all combo of flags (INVALIDATE | UPDATENOW | ERASENOW | ALLCHILDREN etc). I've also tried invalidating the rect and forcing a repaint of the underlying app using ::GetActiveWindow and ::GetForegroundWindow, variations using ::GetDesktopWindow, etc. I've essentially run out of ideas and it's the last thing holding me up from using it on my Windows Mobile device. Does anyone have any ideas? Any help would be greatly appreciated. I've included some screenshots to illustrate my dilemma. Before Screenshot After Screenshot Before http://img13.imageshack.us/img13/9861/cezoom1.png After http://img3.imageshack.us/img3/7481/cezoom2.png Thanks in advance!
Rather than drawing to the background DC, why don't you create a temporary window and draw into that? When you destroy the window, the background should get repainted automatically. I'm just guessing, because I don't know Windows Mobile, but it could be that Windows is caching the last thing that was drawn into the DC and repainting it every time you invalidate the window.
621,262
621,296
How to run a Qt application?
I have been using Qt creator to make applications using the Qt libraries. I can run these applications by just clicking the play button, but I want to learn how to make applications run just by double clicking on a .exe. So how can I do this?
I'm assuming you are running Windows since you mention an .exe file extension. Look in the debug and/or release subdirectories of your project to find the produced binary. You can double-click on it to run the application. However, there are several DLLs that will be required, so make sure they are in your PATH or copy them into the same directory. Dependency Walker is useful for finding out what you need.
621,535
621,537
What are data breakpoints?
I just came to know that there are data breakpoints. I have worked for the last 5 years in C++ using Visual Studio, and I have never used data breakpoints. Can someone throw some light on what data breakpoints are, when to use them and how to use them with VS? As per my understanding we can set a data breakpoint when we want to check for changes to a variable's value. In this case, we can set a data breakpoint with a condition on the variable value. Any other examples?
Definition: Data breakpoints allow you to break execution when the value stored at a specified memory location changes. From MSDN: How to: Set a Data Breakpoint: How to Set a Memory Change Breakpoint From the Debug Menu, choose New Breakpoint and click New Data Breakpoint —or— in the Breakpoints window Menu, click the New dropdown and choose New Data Breakpoint. The New Breakpoint dialog box appears. In the Address box, enter a memory address or expression that evaluates to a memory address. For example, &foo to break when the contents of variable foo change. In the Byte Count box, enter the number of bytes you want the debugger to watch. For example, if you enter 4, the debugger will watch the four bytes starting at &foo and break if any of those bytes change value. Click OK.
621,542
621,548
Compilers and argument order of evaluation in C++
Okay, I'm aware that the standard dictates that a C++ implementation may choose in which order arguments of a function are evaluated, but are there any implementations that actually 'take advantage' of this in a scenario where it would actually affect the program? Classic Example: int i = 0; foo(i++, i++); Note: I'm not looking for someone to tell me that the order of evaluation can't be relied on, I'm well aware of that. I'm only interested in whether any compilers actually do evaluate out of a left-to-right order because my guess would be that if they did lots of poorly written code would break (rightly so, but they would still probably complain).
It depends on the argument type, the called function's calling convention, the architecture and the compiler. On x86, the Pascal calling convention evaluates arguments left to right, whereas in the C calling convention (__cdecl) it is right to left. Most programs which run on multiple platforms do take the calling conventions into account to avoid surprises. There is a nice article on Raymond Chen's blog if you are interested. You may also want to take a look at the Stack and Calling section of the GCC manual. Edit: So long as we are splitting hairs: My answer treats this not as a language question but as a platform one. The language standard does not guarantee or prefer one over the other and leaves it as unspecified. Note the wording. It does not say this is undefined. Unspecified in this sense means something you cannot count on, non-portable behavior. I don't have the C spec/draft handy but it should be similar to that from my n2798 draft (C++): Certain other aspects and operations of the abstract machine are described in this International Standard as unspecified (for example, order of evaluation of arguments to a function). Where possible, this International Standard defines a set of allowable behaviors. These define the nondeterministic aspects of the abstract machine. An instance of the abstract machine can thus have more than one possible execution sequence for a given program and a given input.
621,573
621,581
How new keyword works in c#
There's a class which is compiled into a dll //HeaderFile.h //version 1.0 class __declspec(dllexport) A { int variable; //member functions omitted for clarity }; //implementation file omitted for clarity You build an exe which uses above class from the dll it was compiled into #include "HeaderFile.h" int main() { A *obj = new A(); obj->CallSomeFuncOnObj(); // //whatever // } Up til now your program is working fine. But now you recompile your dll with the following code //HeaderFile.h //version 2.0 class __declspec(dllexport) A { int variable; int anotherVariable; //member functions omitted for clarity }; //implementation file omitted for clarity and you do not recompile your exe but start using the recompiled dll from the old exe. What will happen now is that your exe has the code that will allocate memory = sizeof(class A version 1.0) but the constructor in your new dll has code that assumes it is being passed a memory block = sizeof(class A version 2.0). There's an integer's worth size difference between the two - a recipe for unpredictability. A similar example is shown in the first chapter of an excellent book - Essential COM by Don Box. Now for the question. In a similar situation in c#(or any other .Net languare) what would happen?
In COM, your DLL implements an object factory: The DLL creates the object itself in order to avoid such 'synchro' problems. In .NET, the CLR instantiates the object based on type knowledge pulled from the DLL where the type is implemented. In both cases, the problem you mention is avoided.
621,616
621,648
C++: What is the size of an object of an empty class?
I was wondering what could be the size of an object of an empty class. It surely could not be 0 bytes since it should be possible to reference and point to it like any other object. But, how big is such an object? I used this small program: #include <iostream> using namespace std; class Empty {}; int main() { Empty e; cerr << sizeof(e) << endl; return 0; } The output I got on both Visual C++ and Cygwin-g++ compilers was 1 byte! This was a little surprising to me since I was expecting it to be of the size of the machine word (32 bits or 4 bytes). Can anyone explain why the size of 1 byte? Why not 4 bytes? Is this dependent on compiler or the machine too? Also, can someone give a more cogent reason for why an empty class object will not be of size 0 bytes?
Quoting Bjarne Stroustrup's C++ Style and Technique FAQ, the reason the size is non-zero is "To ensure that the addresses of two different objects will be different." And the size can be 1 because alignment doesn't matter here, as there is nothing to actually look at.
621,745
621,748
Signaling an error in file streams in C++
I have got the following sample: #include <iostream> #include <fstream> using namespace std; int main() { ifstream file; cout << file << endl; // 0xbffff3e4 file.open("no such file"); cout << file << endl; // 0 cout << (file == NULL) << endl; // 1 cout << file.fail() << endl; // 1 } If the file is NULL, how is it possible to call the fail member function? I am not very familiar with C++, is this normal behaviour? What am I getting wrong here?
file is an object - it cannot be null. However, ifstream has an operator void*() overload which returns 0 when the file is in a bad state. When you say (for example): cout << file << endl; the compiler converts this to: cout << file.operator void*() << endl; This conversion will be used in all sorts of places - basically anywhere that a pointer or integer type could be used. It is used when you say: (file == NULL) You compare the zero returned by operator void*() with NULL and get 1.
621,776
626,144
How to reduce CPU usage of a program?
I wrote a multi-threaded program which does some CPU heavy computation with a lot of floating point operations. More specifically, it's a program which compares animation sequences frame by frame. I.e. it compares frame data from animation A with all the frames in animation B, for all frames in animation A. I carry out this intensive operation for different animations in parallel, so the program can be working on the A-B pair, B-C pair and C-A pair in parallel. The program is using QtConcurrent and a "map" function which maps a container with motions onto a function. QtConcurrent manages the thread pool for me; I am working on an Intel Quad Core processor, so it spawns 4 threads. Now, the problem is that my process destroys my CPU. The usage is 100% constant and I actually get a Blue Screen of Death if I run my program on a big enough set of motions (Page fault in non-paged area). I suspect that this is because my computer is overclocked. However, could this be because of the way I coded my program? Some very intensive benchmarking tools I used to test my machine's stability never crashed my PC. Is there any way to control how my program uses my CPU to reduce the load? Or perhaps I am misunderstanding my problem?
There are some excellent answers here. I would only add, from the perspective of having done lots of performance tuning, unless each thread has been optimized aggressively, chances are it has lots of room for cycle-reduction. To make an analogy with a long-distance auto race, there are two ways to try to win: Make the car go faster Make fewer stops and side-trips In my experience, most software as first written is quite far from taking the most direct route, especially as the software gets large. To find wasted cycles in your program, as Kenneth Cochran said, never guess. If you fix something without having proved that it is a problem, you are investng in a guess. The popular way to find performance problems is to use profilers. However, I do this a lot, and my method is this: http://www.wikihow.com/Optimize-Your-Program%27s-Performance
622,017
622,028
list of public functions/classes with their corresponded header files
I tried to find a place where I can find a ready-to-copy list of all functions and classes available in each STL header file. Looking through /usr/include/c++ is not as convenient as I expected. Google very often shows http://www.cplusplus.com/reference/ which is not so convenient to copy and paste. Does anyone know a good place to look? Thanks. Edit: This is for an "auto stl header" plugin, so I don't need examples. Just the correspondence of each std::xxx to <yyy>. For Vim users and those who might be interested: Using half of the given links and the std header files, I've created an auto std include vim plugin. I can't choose the right answer just yet, and I am still considering a method that would let me obtain this kind of information more automatically. Thanks.
I'm not sure if I understood correctly, but if you need a reference and a list of functions from the headers, then maybe dinkumware manuals. If you want examples then try this. If you want an absolute and the true reference then go to the ISO standard. I forgot to mention SGI STL programmers guide...
622,019
622,907
How can I get better profiling?
I need to profile a program to see whether any changes need to be made regarding performance. I suspect there is a need, but measuring first is the way to go. This is not that program, but it illustrates the problem I'm having: #include <stdio.h> int main (int argc, char** argv) { FILE* fp = fopen ("trivial.c", "r"); if (fp) { char line[80]; while (fgets (line, 80, fp)) printf (line); fclose (fp); } return 0; } Here's what I did with it: % gcc trivial.c -pg -o trivial % ./trivial ... % gprof trivial gmon.out Granted, this is a trivial program, but I would have thought it would make some kind of blip on the profiling radar. It didn't: called/total parents index %time self descendents called+self name index called/total children 0.00 0.00 1/1 __start [1704] [105] 0.0 0.00 0.00 1 _main [105] ----------------------------------------------- % cumulative self self total time seconds seconds calls ms/call ms/call name 0.0 0.00 0.00 1 0.00 0.00 _main [105] Index by function name [105] _main Can anyone guide me here? I would like the output to reflect that it called fgets and printf at least 14 times, and it did hit the disk after all - there should be some measured time, surely. When I run the same command on the real program, I get more functions listed, but even then it is not a complete list - just a sample. Perhaps gprof is not the right tool to use. What is? This is on OS X Leopard. Edit: I ran the real program and got this: % time real_program real 4m24.107s user 2m34.630s sys 0m38.716s
There are certain commonly-accepted beliefs in this business, that I would suggest you examine closely. One is that the best (if not only) way to find performance problems is to measure the time each subroutine takes and count how many times it is called. That is top-down. It stems from a belief that the forest is more important than the trees. It is based on myths about "speed of code" and "bottlenecks". It is not very scientific. A performance problem is more like a bug than a quantitative thing. What it is doing wrong is it is wasting time, and it needs to be fixed. It is based on a simple observation: Slowness consists of time being spent for poor reasons. To find it, sample the program state at random slivers of clock time, and investigate their reasons. If something is causing slowness, then that fact alone exposes it to your samples. So if you take enough samples, you will see it. You will know approximately how much time it is costing you, by the fraction of samples that show it. A good way to tell if a sliver of time is being spent for a good reason is to look carefully at the call stack. Every function invocation on the stack has an implicit reason, and if any of those reasons are poor, then the reason for the entire sample is poor. Some profilers tell you, at the statement level, what each statement is costing you. Personally, I just randomly halt the program several times. Any invocations showing up on multiple samples are likely candidates for suspicion. It never fails. You may say "It's not accurate." It's extremely accurate. It precisely pinpoints the instructions causing the problem. It doesn't give you 3 decimal places of timing accuracy. I.e. it is lousy for measurement, but superb for diagnosis. You may say "What about recursion?". Well, what about it? You may say "I think that could only work on toy programs." That would be just wishing. In fact large programs tend to have more performance problems, because they have deeper stacks, thus more opportunity for invocations with poor reasons, and sampling finds them just fine, thanks. Sorry to be a curmudgeon. I just hate to see myths in what should be a scientifically-based field.
622,048
622,068
Functor class doing work in constructor
I'm using C++ templates to pass in Strategy functors to change my function's behavior. It works fine. The functor I pass is a stateless class with no storage and it just overloads the () operator in the classic functor way. template <typename Operation> int foo(int a) { int b=Operation()(a); /* use b here, etc */ } I do this often, and it works well, and often I'm making templates with 6 or 7 templated functors passed in! However I worry both about code elegance and also efficiency. The functor is stateless so I assume the Operation() constructor is free and the evaluation of the functor is just as efficient as an inlined function, but like all C++ programmers I always have some nagging doubt. My second question is whether I could use an alternate functor approach.. one that does not override the () operator, but does everything in the constructor as a side effect! Something like: struct Operation { Operation(int a, int &b) { b=a*a; } }; template <typename Operation> int foo(int a) { int b; Operation(a,b); /* use b here, etc */ } I've never seen anyone use a constructor as the "work" of a functor, but it seems like it should work. Is there any advantage? Any disadvantage? I do like the removal of the strange doubled parenthesis "Operator()(a)" , but that's likely just aesthetic.
Any disadvantage? Ctors do not return any useful value - they cannot be used in chained calls (e.g. foo(bar())). They can throw. Design point of view - ctors are object creation functions, not really meant to be workhorses.
622,129
622,138
Error Logging C++ Preprocessor Macros __LINE__, __FUNCTION__
I'm trying to incorporate simple error logging into my existing app; at the moment it reports errors just using cout, so I was hoping to keep a similar interface using the << operator. However I want it to log the line and function the error occurred in, but I don't want to have to type __LINE__, __FUNCTION__ every time I need to log. Does anyone know a trick I can use to allow the __LINE__ macro to be used inside another function, reporting the calling line instead? Hope that makes sense. class myLogClass { uint8_t level; public: bool operator<<( const char * input ); }; bool myLogClass::operator<<( const char * input ) { logItInSQL( input ); return true; } Instead of this every time myLogClass << "Line No: " << __LINE__ << " Function: " << __FUNCTION__ << " Error: " << "This is my error to be logged"; I would like to just be able to do: myLogClass << "This is my error to be logged"; bool myLogClass::operator<<( const char * input ) { logItInSQL( " Line No: __LINE__" ); logItInSQL( " Function: __FUNCTION__" ); logItInSQL( " Error: " + input ); return true; }
myLogClass << "Line No: " << __LINE__ ... With your operator<< chaining will not work, since it returns a bool: bool myLogClass::operator << (const char * input) It is customary to define stream insertion as a non-member function that returns the stream, so that calls can chain: std::ostream& operator << (std::ostream& o, const char * input) { // do something return o; } Do this: #define log(o, s) o << "Line No: " << __LINE__ << \ " Function: " << __FUNCTION__ << \ " Error: " << s // note I leave ; out Additionally, you can wrap the macro in a do-while loop: #define log(o, s) do { o << "Line No: " << __LINE__ << \ " Function: " << __FUNCTION__ << \ " Error: " << s; \ } while(0) // here, I leave ; out Then you can happily write: myLogClass myLogger; // do this // use it log(myLogger, "This is my error to be logged"); // note the ;
622,210
622,438
Problem calling a function when it is in a .lib
I have a class with a static method that looks roughly like: class X { static float getFloat(MyBase& obj) { return obj.value(); // MyBase::value() is virtual } }; I'm calling it with an instance of MyDerived which subclasses MyBase: MyDerived d; float f = X::getFloat(d); If I link the obj file containing X into my executable, everything works as expected. If I'm expecting to get 3.14, I get it. If I create a .lib that contains the X.obj file and link in the .lib, it breaks. When I call getFloat(), it's returning -1.#IND00. Is this some type of sentinel value that should tell me what's wrong here? Is anything different when you link in a lib rather than an obj directly? I don't get any compiler warnings or errors. Edit: I'm using Visual Studio 2005 on Windows XP Pro SP3. To make sure I wasn't linking old files, I cloned the value() method into a new value2() method and called that instead. The behavior was the same. Edit #2: So, if I trace into the call with my debugger, I'm finding that it isn't going into my value() method at all. Instead it's going into a different (unrelated) method. This makes me think my vtable is corrupted. I think the behavior I'm seeing must be a side effect of some other problem. Solved! (thanks to Vlad) It turns out I was violating the one definition rule (ODR) although it wasn't evident from the code I posted. This is a great article from the Visual C++ guys that explains the problem and one way to track it down. The /d1reportSingleClassLayout compiler flag is a fantastic learning tool. When I dumped out the class layout for MyBase and MyDerived in the two different projects, I found differences between the calling code and the library code. It turns out I had some #ifdef blocks in my header files and the corresponding #define statement was in the precompiled header for the main project but not in the subproject (the library). Have I mentioned how evil I think preprocessor macros are? Anyway, I'm only posting this stuff because it might be helpful to somebody else. This question was also very helpful to me.
This problem will occur when the lib and the executable have been compiled with different definitions of the MyDerived class (i.e. different versions of the .h/.hh/.hpp file that declares MyDerived). Completely clean and rebuild your projects. Barring this, different compiler options could be responsible, though it is somewhat unlikely. If the problem persists after rebuilding everything from scratch, then nail it by instantiating a dummy MyDerived object inside getFloat, in the library. Use the debugger to compare the vtable of the dummy MyDerived (instantiated in the library) and the vtable of the MyDerived object reference passed as a parameter (instantiated in the executable). Something should stand out right away.
622,229
622,239
error LNK2005: already defined - C++
Background I have a project named PersonLibrary which has two files. Person.h Person.cpp This library produces a static library file. Another project is TestProject which uses the PersonLibrary (Added though project dependencies in VS008). Everything worked fine until I added a non-member function to Person.h. Person.h looks like class Person { public: void SetName(const std::string name); private: std::string personName_; }; void SetPersonName(Person& person,const std::string name) { person.SetName(name); } Person.cpp defines SetName function. When I try to use SetPersonName from TestProject, I get error LNK2005: already defined. Here is how I used it #include "../PersonLibrary/Person.h" int main(int argc, char* argv[]) { Person person; SetPersonName(person, "Bill"); return 0; } Workarounds tried 1 - I have removed the Person.cpp and defined the whole class in Person.h. Error gone and everything worked. 2 - Changed the SetPersonName modifier to static. Like the below static void SetPersonName(Person& person,const std::string name) { person.SetName(name); } Questions Why the code shown first is not working as I expected? What difference static made here? What is the approapriate solution for this problem? Thanks
You either have to move SetPersonName's definition to a .cpp file, compile it, and link against the resulting target, or make SetPersonName inline. This is a well known case of One Definition Rule violation. The static keyword makes the function's linkage internal, i.e. only available to the translation unit it is included in. This however is hiding the real problem. I'd suggest moving the definition of the function to its own implementation file but keeping the declaration in the header.
622,339
622,346
Which is the best, standard (and hopefully free) C++ compiler?
Greetings to everyone on stackoverflow.com!! So... I'm a C++ newbie currently taking the subject of Data Structures, and I want to consult something with you guys: Since I started studying Systems Engineering, I've been using the latest version of Dev-C++ for all my programming projects. It has done its job well so far, but it has a FATAL flaw... trying to debug your code in it is just... awfully painful!! Every time you activate the debug mode you have to cross your fingers and hope it doesn't crash in your face just before the variable you are watching even changes. Then, I thought about using Visual C++ from Microsoft, as it is free too; however, I asked my teacher and was told not to use it in the course, as it is not standard (programs compiled in it will only work on Windows); we are required to write code that can be compiled in both a Windows and a Linux environment, and once compiled in either, be able to use the application on both. So people, I want you to please give me advice: Which C++ compiling software out there has a strong and/or stable debugging mode, implements the standard compiler (is it called g++, by the way??), and is hopefully free??? Thanks for your attention. Thanks and see you later ^_^ (BTW, happy first post!!! ^o^)
Code::Blocks - it's free, it's cross-platform, it's pretty good. You can download a package consisting of the CB IDE, the MinGW C++ compiler and the gdb debugger. Installation is very straightforward.
622,402
622,462
boost thread compiler error with GCC
on linux, gcc 4.3, compiling a class with boost::thread implementation and mutexes / condition variables I get the following strange error, apparently due to type conflicts with the posix thread library: *Compiling: filter.cpp /usr/include/boost/thread/condition.hpp: In member function »void boost::condition::wait(L&) [with L = boost::mutex]«: /host/.../filter.cpp:68: instantiated from here /usr/include/boost/thread/condition.hpp:90: Error: no match für »operator!« in »!lock«* */usr/include/boost/thread/condition.hpp:90: Notice: candidates are: operator!(bool) <built in>* */usr/include/boost/thread/mutex.hpp:66: Error: »pthread_mutex_t boost::mutex::m_mutex« is private /usr/include/boost/thread/condition.hpp:93: Error: in this context* The code is: void CFilter::process( CData **s ) { boost::mutex::scoped_lock bufferLock(m_mutex); while (!m_bStop) m_change.wait(bufferLock); //<- line 68 // ... further processing } with the class declaration #include <boost/shared_ptr.hpp> #include <boost/bind.hpp> #include <boost/thread/condition.hpp> #include <boost/thread/thread.hpp> #include <boost/thread/mutex.hpp> class CFilter { // ... boost::shared_ptr<boost::thread> m_thread; boost::mutex m_mutex; boost::condition m_change; // ... process( CData **s ); } The operator error takes place in boost's condition.hpp, in if (!lock) throw lock_error(); I'm using Boost 1.38.0, on Windows I don't find any problems. Any help is appreciated!
You must wait on the bufferLock, not m_mutex: while (!m_bStop) m_change.wait(bufferLock); Condition<>::wait() takes a ScopedLock as parameter, not a Mutex.
622,418
622,427
How to deploy a Qt application on Windows?
So now I can make a .exe of my application. Now how do I get my application ready to deploy for windows? This is meant to be the canonical question for Qt application deployment issues on Windows.
The Qt documentation has pages for that: Qt 5, Qt 4.
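In outline, deployment on Windows means putting the required DLLs next to the .exe. A hedged sketch of the usual workflow (it assumes Qt 5's windeployqt tool is on PATH; module names and paths vary with your Qt version and compiler):

```shell
# Build in release mode, then let windeployqt collect the Qt DLLs and
# plugin folders your .exe actually links against, next to the .exe.
windeployqt --release release\myapp.exe

# Qt 4 has no windeployqt; copy the DLLs by hand instead, e.g. for a
# minimal GUI app: QtCore4.dll, QtGui4.dll, the needed plugins\ subfolders,
# and the compiler runtime (mingwm10.dll / the MSVC redistributable).
```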
622,592
622,666
Win32 programming hiding console window
I'm learning C++ and I made a new program. I deleted some of my code and now my console window is not hidden. Is there a way to make it hide on startup without them seeing it?
If you're writing a console program and you want to disconnect your program from the console it started with, then call FreeConsole. Ultimately, you probably won't be satisfied with what that function really does, but that's the literal answer to the question you asked. If you're writing a program that you never want to have a console in the first place, then configure your project so that it is not a console program. "Consoleness" is a property of the EXE file. The OS reads that setting and decides whether to allocate a console for your program before any of your code ever runs, so you can't control it within the program. Sometimes a non-console program is called a "GUI program," so you might look for a choice between "console" and "GUI" in the configuration options of your development environment. Setting it to GUI doesn't require that you have any user interface at all, though. The setting merely controls whether your program starts with a console. If you're trying to write a program that can sometimes have a console and sometimes not, then please see an earlier question, Can one executable be both a console and GUI app?
622,659
622,722
What are the good and bad points of C++ templates?
I've been talking with friends and some completely agree that templates in C++ should be used, others disagree entirely. Some of the good things are: They are more safe to use (type safety). They are a good way of doing generalizations for APIs. What other good things can you tell me about C++ templates? What bad things can you tell me about C++ templates? Edit: One of the reasons I'm asking this is that I am studying for an exam and at the moment I am covering the topic of C++ templates. So I am trying to understand a bit more on them.
Templates are a very powerful mechanism which can simplify many things. However to use them properly requires much time and experience - in order to decide when their usage is appropriate. For me the most important advantages are: reducing the repetition of code (generic containers, algorithms) reducing the repetition of code advanced (MPL and Fusion) static polymorphism (=performance) and other compile time calculations policy based design (flexibility, reusability, easier changes, etc) increasing safety at no cost (i.e. dimension analysis via Boost Units, static assertions, concept checks) functional programming (Phoenix), lazy evaluation, expression templates (we can create Domain-specific embedded languages in C++, we have the great Proto library, we have Blitz++) other less spectacular tools and tricks used in everyday life: STL and the algorithms (what's the difference between for and for_each) bind, lambda (or Phoenix) (write clearer code, simplify things) Boost Function (makes writing callbacks easier) tuples (how to generically hash a tuple? Use Fusion for example...) TBB (parallel_for and other STL like algorithms and containers) Can you imagine C++ without templates? Yes I can, in the early times you couldn't use them because of compiler limitations. Would you write in C++ without templates? No, as I would lose many of the advantages mentioned above. Downsides: Compilation time (for example throw in Spirit, Phoenix, MPL and some Fusion and you can go for a coffee) People who can use and understand templates are not that common (and these people are useful) People who think that they can use and understand templates are quite common (and these people are dangerous, as they can make a hell out of your code. However most of them after some education/mentoring will join the group mentioned in the previous point) template export support (lack of) error messages could be less cryptic (after some learning you can find what you need, but still...) I highly recommend the following books: C++ Templates: The Complete Guide by David Vandevoorde and Nicolai Josuttis (thorough introduction to the subject of templates) Modern C++ Design. Generic Programming and Design Patterns Applied by Andrei Alexandrescu (a less known way of using templates to simplify your code, make development easier and result in code robust to changes) C++ Template Metaprogramming by David Abrahams and Aleksey Gurtovoy (again - a different way of using the templates) More C++ Idioms from Wikibooks presents some nice ideas.
622,762
627,400
Running music as SDL_Mixer chunks
Currently, SDL_Mixer has two types of sound resources: chunks and music. Apart from the API and supported formats limitations, are there any reasons not to load and play music as a SDL_Chunk and channel? (memory, speed, etc.)
The API is the real issue. The "music" APIs are designed to deal with streaming compressed music, while the "sound" APIs aren't. Then again, if you manage to make it work in your app, then it works.
622,978
623,031
Error in outputting to a file in C++ that I can't find
Not as in "can't find the answer on stackoverflow", but as in "can't see what I'm doing wrong", big difference! Anywho, the code is attached below. What it does is fairly basic, it takes in a user created text file, and spits out one that has been encrypted. In this case, the user tells it how many junk characters to put between each real character. (IE: if I wanted to encrypt the word "Hello" with 1 junk character, it would look like "9H(eal~l.o") My problem is that for some reason, it isn't reading the input file correctly. I'm using the same setup to read in the file as I had done previously on decrypting, yet this time it's reading garbage characters, and when I tell it to output to file, it prints it on the screen instead, and it seems like nothing is being put in the output file (though it is being created, so that means I've done something correctly, point for me! code: string start; char choice; char letter; int x; int y; int z; char c; string filename; while(start == "enc") { x = 1; y = 1; cout << "How many garbage characters would you like between each correct character?: " ; cin >> z; cout << endl << "Please insert the name of the document you wish to encrypt, make sure you enter the name, and the file type (ie: filename.txt): " ; cin >> filename; ifstream infile(filename.c_str()); ofstream outfile("encrypted.txt", ios::out); while(!infile.eof()) { infile.get(letter); while ((x - y) != z) { outfile << putchar(33 + rand() % 94); x++; } while((x - y) == z) { outfile << letter; y = 1; x = 1; } } outfile.close(); cout << endl << "Encryption complete...please return to directory of program, a new file named encrypted.txt will be there." << endl; infile.close(); cout << "Do you wish to try again? Please press y then enter if yes (case sensitive)."; cin >> choice; What I pasted above the start of the while loop are the declaration variables, this is part of a much larger code that not only will encrypt, but decrypt as well, I left the decryption part out as it works perfectly, it's this part I'm having an issue with. Thanks in advance for the assist! EDIT:: I'm using visual C++ express 2008, and it shoots back that there are no errors at all, nor any warnings. IMPORTANT EDIT It turns out it is outputting to the file! However, it is outputting numbers instead of ascii characters, and it is also outputting the garbage character for the letter it should be. When it goes back to the "infile.get(letter)", it doesn't get a new character. So right now it seems to be the issues are 2 fold: 1) Printing numbers instead of ascii characters. 2) Using garbage instead of the actual character it should be getting. Question Answered Found out the second part in the "Important Edit" ...it turns out if you name something test.txt...that means it is actually called test.txt.txt when you type it into a C++ program. Just goes to show it's the tiny, minute, simple details that cause any program to go pooey. Thank you to George Shore. Your comment about the input file being in the wrong place is what gave me the idea to try the actual items name. Thank you to everyone who helped with the answer!
Further to the previous answers, I believe it's because the file you wish to encrypt is not being found by the original code. Is it safe to assume that you're running the code from the IDE? If so, then the file that is to be encrypted has to be in the same directory as the source. Also: outfile << putchar(33 + rand() % 94); seems to be the source of your garbage to the screen; the 'putchar' function echoes to the screen whilst returning the integer value of that character. What is then going to happen is that number will be output to the file, as opposed to the character. Changing that block to something like: while ((x - y) != z) { c = (33 + rand() % 94); outfile << c; x++; } should enable the code to run as you want it to.
623,040
623,060
C++ development on linux - where do I start?
I decided to leave my Windows install behind and am now running Debian as my default OS. I have always coded in Windows, specifically with Visual Studio. I am currently trying to get used to compiling my code under Linux. Although I still have a lot of documentation to read, and don't expect you guys to make it too easy for me, it'd still be nice to get some pointers on where to start. I have some specific questions, but feel free to suggest/recommend anything else regarding the subject.
- What are recommended guides on creating a makefile, and how do I compile from this makefile (do I call g++ myself, do I use 'make')?
- Looking at other Linux software, it almost always seems to have a 'configure' file. What exactly does it do? Does it only check whether the required libraries are installed, or does it do more than just check requirements?
- How do I link libraries, and how does this relate to my makefile or g++ parameters? In Windows I would compile the library, include some header files, tell my linker what additional lib file to link, and copy a dll file. How exactly does this process work in Linux?
- Recommendations for code editors? I am currently using nano and I've heard of vim and emacs, but don't know what the benefits of them are over each other. Are there any others, and why would I consider them over any of the previous three? Note: I am not looking for an IDE.
Any help, links to guides & documentation (preferably those that are aimed at beginners) are very much appreciated!
What are recommended guides on creating a make file, how do I compile from this makefile (do I call g++ myself, do I use 'make'?) You build from the makefile by invoking "make". And inside your makefile, you compile and link using g++ and ld. Looking at other linux software, they almost always seem to have a 'configure' file. What exactly does it do? Does it only check if the required libraries are installed or does it more than just checking requirements? It's a script usually used to set up various things based on the environment being used for building. Sometimes it's just a basic shell script, other times it invokes tools like Autoconf to discover what is available when building. The "configure" script is usually also a place for the user to specify various optional things to be built or excluded, like support for experimental features. How do I link libraries, and how does this relate to my makefile or g++ parameters? In windows I would compile the library, include some header files, tell my linker what additional lib file to link, and copy a dll file. How exactly does this process work in linux? ld is the GNU linker. You can invoke it separately (which is what most makefiles will end up doing), or you can have g++ delegate to it. The options you pass to g++ and ld determine where to look for included headers, libraries to link, and how to output the result. Recommendations for code editors? I am currently using nano and I've heard of vim and emacs, but don't know what the benefits of them are over eachother. Are there any others, and why would I consider them over any of the previous three? Note: I am not looking for an IDE. Vim and Emacs are very flexible editors that support a whole bunch of different usages. Use whatever feels best to you, though I'd suggest you might want a few minimal things like syntax highlighting.
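To make the first point concrete, a minimal hand-written Makefile for a small project might look like the sketch below (the project name, directory layout, and flags are hypothetical, just to show the shape of the rules):

```makefile
# Hypothetical layout: sources in src/, headers in include/
CXX      := g++
CXXFLAGS := -Wall -Wextra -O2 -Iinclude
LDLIBS   := -lm              # libraries to link, e.g. -lpthread

SRCS := $(wildcard src/*.cpp)
OBJS := $(SRCS:.cpp=.o)

myprog: $(OBJS)
	$(CXX) $(OBJS) -o $@ $(LDLIBS)

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@

clean:
	rm -f $(OBJS) myprog
```

Running `make` builds the default (first) target. One classic gotcha: the command lines under each rule must be indented with a real tab character, not spaces.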
623,127
623,160
OpenGL coordinate problem
I am creating a simple 2D OpenGL application, but I seem to be experiencing some camera issues. When I draw a rectangle at (20,20) it is drawn at (25,20) or so. When I draw it at (100, 20) it is drawn at (125, 20) or so. For some reason everything is being shifted to the right by a few percent. I have pasted a trimmed-down version here http://pastebin.com/m56491c4c Is there something wrong with the way I am setting up GLUT? I know it's not my objects doing anything weird, since the same thing happens when I disable them. Thanks in advance.
You need to set the projection matrix inside the reshape function (resize()), which also automatically solves the problem of the user resizing the window: void resize(int w, int h) { glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluOrtho2D(0, w, h, 0); } And then in your draw function, make sure that the matrix mode is model-view: void draw() { glMatrixMode(GL_MODELVIEW); glLoadIdentity(); ... } Other problems with your code: You probably shouldn't be calling glutPostRedisplay() at the end of draw(). This is going to make your CPU run at 100%. You can instead use glutTimerFunc() to still have updates every some number of milliseconds. In processMouse(), you're using wsprintf() on an array of chars: wsprintf() takes an array of wide characters (wchar_t), so you should make the local variable s of type wchar_t[], or use sprintf() and MessageBoxA() instead of wsprintf() and MessageBoxW() (to which MessageBox() expands as a macro when compiling a Unicode application, which I'm assuming you're doing). You're also vulnerable to a buffer overflow -- you should use a buffer of at least 12 characters, even though realistically you'll never be passed a very large x value. Finally, you should also use snprintf()/wsnprintf() instead of sprintf()/wsprintf() to protect against the buffer overflow.
623,268
623,278
C++: Searching in Process Memory
By already given specific process' handle, how can I move further to search for a specific keywords(bytes, ints(2 bytes), text(an array)) in its memory in code, using VC++ ?
I take it you want to scan through another process's raw memory. By definition, processes are and should be isolated from one another and have totally independent address spaces (your address 0x06573AF8 contains something entirely different from the corresponding address in another process' address space). However, there are a number of ways around this: in Windows, the classic answer is to write a specific DLL which you then inject into the address space of the other process. Take a look at CodeProject for some hands-on advice, or slightly more generally, wikipedia.
623,373
623,388
Catching exception in code
I was trying this piece of code to check whether the divide by zero exception is being caught: int main(int argc, char* argv[]) { try { //Divide by zero int k = 0; int j = 8/k; } catch (...) { std::cout<<"Caught exception\n"; } return 0; } When I complied this using VC6, the catch handler was executed and the output was "Caught exception". However, when I compiled this using VS2008, the program crashed without executing the catch block. What could be the reason for the difference?
In VC6, catch (...) caught hardware (structured) exceptions by default; in VS2008 it no longer does unless you enable asynchronous exception handling: set 'Enable C++ Exceptions' to 'Yes With SEH Exceptions' (/EHa) under Project -> Properties -> Configuration Properties -> C/C++ -> Code Generation. Alternatively, use __try/__except, ideally with a filter that checks the exception code and then returns the constant signalling whether it would like to catch. I have skipped that here, but I recommend you see here for examples of the filter. #include <iostream> #include <windows.h> int main(int argc, char* argv[]) { __try { //Divide by zero int k = 0; int j = 8/k; } __except(EXCEPTION_EXECUTE_HANDLER) { if(GetExceptionCode()==EXCEPTION_INT_DIVIDE_BY_ZERO) std::cout << "Caught int division exception\n"; else std::cout << "Caught exception\n"; system("pause"); } return 0; }
623,475
623,493
Reverse engineer C++ DLL
I have a small utility that was originally written in VS2005. I need to make a small change, but the source code for one of the dlls has been lost somewhere. Is there a free or reasonably priced tool to reverse engineer the dll back to C++ code.
Hex-Rays decompiler is a great tool, but the code will be quite hard to read and you will have to spend a lot of time to reverse engineer the whole DLL.
623,605
623,675
C++ vim IDE. Things you'd need from it
I was going to create the C++ IDE Vim extendable plugin. It is not a problem to make one which will satisfy my own needs. This plugin was going to work with workspaces, projects and its dependencies. This is for unix like system with gcc as c++ compiler. So my question is what is the most important things you'd need from an IDE? Please take in account that this is Vim, where almost all, almost, is possible. Several questions: How often do you manage different workspaces with projects inside them and their relationships between them? What is the most annoying things in this process. Is is necessary to recreate "project" from the Makefile? Thanks. Reason to create this plugin: With a bunch of plugins and self written ones we can simulate most of things. It is ok when we work on a one big "infinitive" project. Good when we already have a makefile or jam file. Bad when we have to create our owns, mostly by copy and paste existing. All ctags and cscope related things have to know about list of a real project files. And we create such ones. This <project#get_list_of_files()> and many similar could be a good project api function to cooperate with an existing and the future plugins. Cooperation with an existing makefiles can help to find out the list of the real project files and the executable name. With plugin system inside the plugin there can be different project templates. Above are some reasons why I will start the job. I'd like to hear your one.
- a debugger
- source code navigation tools (now I am using the http://www.vim.org/scripts/script.php?script_id=1638 plugin and ctags)
- compile a lib/the project/one source file from the IDE
- navigation by files in the project
- work with a source control system
- easy access to a file's change history
- file/variable/method rename functions
- easy access to C++ help
- easy changing of project settings (Makefiles, jam, etc.)
- fast autocompletion for paths/variables/methods/parameters
- smart indentation for new scopes (it would also be good if the developer could set up the indentation rules)
- highlighting of indentation that is incorrect by the code convention (tabs instead of spaces, spaces after ";", spaces near "(" or ")", etc.)
- reformatting a selected block by the convention
623,638
623,650
Compiling a C++ .lib with only header files?
I'm compiling a C++ static library and as all the classes are templated, the class definitions and implementations are all in header files. As a result, it seems (under visual studio 2005) that I need to create a .cpp file which includes all the other header files in order for it to compile correctly into the library. Why is this?
The compiler doesn't compile header files, since these are meant to be included into source files. Prior to any compilation taking place, the preprocessor takes all the code from any included header files and places that code into the source files where they're included, at the very location they're included. If the compiler compiled the header files as well, you'd for example get multiple definitions of many things. Example, this is what the preprocessor sees: [foo.h] void foo(); -- [mysource.cpp] #include "foo.h" int main() { foo(); } And this is what the compiler sees: [mysource.cpp] void foo(); int main() { foo(); }
623,692
623,783
Handcode GUI or use gui-designer tool
I would like to hear some opinions on hand coding your GUIs as one typically do when using Java or Qt with C++, vs using a gui-designer tool? Examples of GUI designer tools would be MFC GUI-designer, Qt designer, Interface Builder (Apple). I used to be a fan of hand coding but from recent experience I have switched. The problem I have seen with hand coding is that it is fairly quick and flexible to write the GUIs but once you need to make a change to a GUI written a long time ago it can be very difficult. Finding the right element in big panel can be difficult. The second problem is that it makes it far too easy to add a lot of logic in the GUI creation and layout code. I have often had to take over maintenance of GUI code which is really hard to reuse because its behavior is mixed with its appearance and mixing layout and behavior often makes the class very large and difficult to understand. Using a GUI designer tool force a much clearer separation between appearance and logic in my view.
I feel strongly that you should use an interface builder instead of hand-coding a GUI. As mentioned in the question, it's a much cleaner separation, and once something has to be edited it's much easier. Qt Designer has a feature to generate a class out of a .ui file1), but I think not using this feature is the best way, as it just creates more code that shouldn't exist at all. The speed cost of creating the window from a .ui file is negligible because the window has to be loaded only once. This is PyQt, but something similar is possible in C++: class SelectDateDialog(QDialog): def __init__(self): QDialog.__init__(self) uic.loadUi("resources/SelectDate.ui", self) Essentially this has the same effect as including all your UI code in the __init__() method, but the UI is almost completely separated from the code. 1) .ui files are XML files that describe a user interface
623,693
623,709
Accessing struct members directly
I have a testing struct definition as follows: struct test{ int a, b, c; bool d, e; int f; long g, h; }; And somewhere I use it this way: test* t = new test; // create the testing struct int* ptr = (int*) t; ptr[2] = 15; // directly manipulate the third word cout << t->c; // look if it really affected the third integer This works correctly on my Windows - it prints 15 as expected, but is it safe? Can I be really sure the variable is on the spot in memory I want it to be - expecially in case of such combined structs (for example f is on my compiler the fifth word, but it is a sixth variable)? If not, is there any other way to manipulate struct members directly without actually having struct->member construct in the code?
It looks like you are asking two questions. Is it safe to treat &test as a 3-element int array? It's probably best to avoid this. It may be defined behaviour in the C++ standard, but even if it is, it's unlikely that everyone you work with will understand what you are doing here. I believe the standard does not guarantee it, because of the potential for padding within structs, but I am not sure. Is there a better way to access a member without its name? Yes. Try using the offsetof macro/operator. This will provide the memory offset of a particular member within a structure and will allow you to correctly position a pointer to that member. size_t offset = offsetof(mystruct,c); int* pointerToC = (int*)((char*)&someTest + offset); Another way, though, would be to just take the address of c directly: int* pointerToC = &(someTest->c);
623,784
623,803
How does one write the hex values of a char in ASCII to a text file?
Here is what I currently have so far: void WriteHexToFile( std::ofstream &stream, void *ptr, int buflen, char *prefix ) { unsigned char *buf = (unsigned char*)ptr; for( int i = 0; i < buflen; ++i ) { if( i % 16 == 0 ) { stream << prefix; } stream << buf[i] << ' '; } } I've tried doing stream.hex, stream.setf( std::ios::hex ), as well as searching Google for a bit. I've also tried: stream << stream.hex << (int)buf[i] << ' '; But that doesn't seem to work either. Here is an example of some output that it currently produces: Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í Í I would like the output to look like the following: FF EE DD CC BB AA 99 88 77 66 55 44 33 22 11 00 FF EE DD CC BB AA 99 88 77 66 55 44 33 22 11 00 FF EE DD CC BB AA 99 88 77 66 55 44 33 22 11 00 FF EE DD CC BB AA 99 88 77 66 55 44 33 22 11 00 FF EE DD CC BB AA 99 88 77 66 55 44 33 22 11 00 FF EE DD CC BB AA 99 88 77 66 55 44 33 22 11 00
#include <iostream> using namespace std; int main() { char c = 123; cout << hex << int(c) << endl; } Edit: with zero padding: #include <iostream> #include <iomanip> using namespace std; int main() { char c = 13; cout << hex << setw(2) << setfill('0') << int(c) << endl; }
623,903
623,932
Defining a class within a namespace
Is there a more succinct way to define a class in a namespace than this: namespace ns { class A {}; } I was hoping something like class ns::A {}; would work, but alas not.
You're close, you can forward declare the class in the namespace and then define it outside if you want: namespace ns { class A; // just tell the compiler to expect a class def } class ns::A { // define here }; What you cannot do is define the class in the namespace without members and then define the class again outside of the namespace. That violates the One Definition Rule (or somesuch nonsense).
623,929
623,961
Which floating-point image format should I use?
In the past I've saved RGB images (generated from physical simulations) as 8-bits/channel PPM or PNG or JPEG. Now I want to preserve the dynamic range of the simulation output, which means saving a floating point image and then treating conversion to 8-bits/channel as a post-processing step (so I can tweak the conversion to 8-bit without running the lengthy simulation again). Has a "standard" floating point image format emerged ? Good free supporting libraries/viewers/manipulation tools, preferably available in Debian, would be a bonus.
Have you looked into Radiance RGBE (.hdr) and OpenEXR (.exr)? RGBE has some source code here. NVIDIA and ATI both support EXR data in their graphics cards. There are source code and binaries on the OpenEXR download page. ILM created OpenEXR and it has wide support. OpenEXR supports 16- and 32-bit floating point per channel, and is what most people use these days, unless they've written their own format. The Pixel image editor for Linux has EXR support for editing, too. pfstools is also worth having if you're going to work with HDR on Linux. It's a set of command-line programs for reading, writing and manipulating HDR images, and it has Qt and OpenGL viewers. There's also jpeg2exr for Linux. Here are some other Debian packages for OpenEXR viewers. Based on this, it looks like there's also a Gimp plugin somewhere.
624,037
624,056
32 bit Linux clock() overflow
I need to measure, in C++ on Linux (and other Unix-like systems), the CPU (not wall clock) time taken by a long computation, so I'm using clock(). Problem: on 32 bit systems, this wraps around after about 2000 or 4000 seconds. What's the recommended workaround for this?
You want to use getrusage which fills the following struct: struct rusage { struct timeval ru_utime; /* user time used */ struct timeval ru_stime; /* system time used */ ... }; For completion, struct timeval: struct timeval { time_t tv_sec; /* seconds */ suseconds_t tv_usec; /* microseconds */ }; As always, consult the manpage for details (man getrusage)
624,250
624,258
How do I read / write gzipped files in C++?
How do I read / write gzipped files in C++? The iostream wrapper classes here look good, and here is a simple usage example: gz::igzstream in(filename); std::string line; while(std::getline(in, line)){ std::cout << line << std::endl; } But I wasn't able to actually link it (although I have a /usr/lib/libz.a). A simple g++ test-gzstream.cpp -lz didn't do it (undefined reference to gz::gzstreambase::~gzstreambase()).
Obviously you need the cpp file where the gzstreambase destructor is defined as well, i.e. gzstream.cpp (that's what causes the link failure). libz is just a C API for gzip; it knows nothing of C++ stdlib streams. Boost's Iostreams library has gzip and bzip2 streams too. EDIT: Updated the link to point to the latest version of the code that includes a major bug fix.
624,260
624,291
How to reuse an ostringstream?
I'd like to clear out and reuse an ostringstream (and the underlying buffer) so that my app doesn't have to do as many allocations. How do I reset the object to its initial state?
I've used a sequence of clear and str in the past: // clear, because eof or other bits may still be set. s.clear(); s.str(""); That has done the trick for both input and output stringstreams. Alternatively, you can manually clear, then seek to the beginning with the appropriate function: s.clear(); s.seekp(0); // for outputs: seek put ptr to start s.seekg(0); // for inputs: seek get ptr to start That will prevent some reallocations done by str by overwriting whatever is currently in the output buffer instead. Results are like this: std::ostringstream s; s << "hello"; s.seekp(0); s << "b"; assert(s.str() == "bello"); If you want to use the string with C functions, you can use std::ends to put in a terminating null, like this: std::ostringstream s; s << "hello"; s.seekp(0); s << "b" << std::ends; assert(s.str().size() == 5 && std::strlen(s.str().data()) == 1); std::ends is a relic of the deprecated std::strstream, which was able to write directly to a char array you allocated on the stack. You had to insert a terminating null manually. However, std::ends is not deprecated; I think that's because it's still useful, as in the above cases.
624,348
625,371
How can I build different versions of a project using the Jam make tool?
I have a C++ project that compiles to different versions, including release, debug, shared library, and executable, with different compiler flags for each. I am trying out Jam as an alternative to Make, because it looks like a simpler system. Is Jam capable of this? The main problem is that it always places the .o files into the same folder as the source file, so it overwrites them when building multiple versions. Update I found a solution that seems to work. Using this file, I can build debug and release configurations of a library or executable. Command to build release library: jam -s config=lib -s release=1 If you only type jam, it builds the debug executable. Here is the Jamfile: FILES = main.cpp ; BASENAME = steve ; OBJ = obj ; if $(release) { OBJ = $(OBJ)r ; } else { DEFINES += DEBUG ; OBJ = $(OBJ)d ; } if $(config) = lib { OBJ = $(OBJ)_lib ; OUTFILE = lib$(BASENAME).so ; DEFINES += SHARED_LIBRARY ; LINKFLAGS += -shared -Wl,-soname,$(OUTFILE) -fvisibility=hidden -fPICS ; } else { OUTFILE = $(BASENAME) ; } LOCATE_TARGET = $(OBJ) ; MkDir $(LOCATE_TARGET) ; Main $(OUTFILE) : $(FILES) ;
I'm not familiar with Perforce's Jam however bjam allows this - and it's trivially easy. bjam does not place the intermediate files in the same directory as the source; it creates debug/release/static/shared directories depending on the type of project you're building. For example if you wanted to build release and debug version of a library and you wanted to build it statically: bjam debug release link=static bjam does have some quirks but we've found it very effective. Currently we're using (almost) identical build scripts to build our system using msvc (8.0 and 9.0), gcc 4.3 on x86, gcc 3.4 on ARM and gcc 4.3 for the PowerPC. Very nice.
624,540
624,559
Algorithm to return the maximum possible sum of subsequences in a sequence
int maxSumRec(const vector<int> &a, int left, int right){ if (left == right){ if (a[left] > 0){ return a[left]; } else return 0; } int center = (left + right)/2; int maxLeftSum = maxSumRec(a, left, center); int maxRightSum = maxSumRec(a, center+1, right); int leftBorderSum = 0; int leftBorderMax = 0; for (int i = center; i >= left; i--){ leftBorderSum += a[i]; if (leftBorderSum > leftBorderMax){ leftBorderMax = leftBorderSum; } } int rightBorderSum = 0; int rightBorderMax = 0; for (int i = center+1; i <= right; i++){ rightBorderSum += a[i]; if (rightBorderSum > rightBorderMax){ rightBorderMax = rightBorderSum; } } int crossSum = rightBorderMax + leftBorderMax; return max3(maxLeftSum, maxRightSum, crossSum); } This is a O(NlogN) algorithm, I know it is not the best one. But just have several questions on it: in the first if statement, why return 0 if a[left] < 0? why need the 2 for loops? isn't the logic like this, find the max of the first half and the second half, and then add them to see if the addition is bigger than either. if it is this case, then we can directly jump to the last 2 lines? Thanks very much, Yue Harriet
in the first if statement, why return 0 if a[left] < 0? Because then the empty subsequence has the maximum sum, which is 0.
624,691
624,697
Passing a pointer to process spawned with exec()
I would like to pass a pointer (I am putting a file with data in memory with mmap) to processes spawned using fork + exec, but I am stuck on how to pass a pointer to the exec() spawned process? UPDATE1: Thanks for your inputs, I do use shared memory creating it with mmap with MAP_INHERIT flag: Each mapped file and shared memory region created with the mmap() function is unmapped by a successful call to any of the exec functions, except those regions mapped with the MAP_INHERIT option. Regions mapped with the MAP_INHERIT option remain mapped in the new process image. source: http://www.uwm.edu/cgi-bin/IMT/wwwman?topic=exec(2)&msection= UPDATE2: This is homework excercise, but I think I must stop thinking about pointers and think about the IPC itself. I guess I will go with trying to mmap the same file in child process. Short code example much appreciated. Thanks in advance for your help.
If you use shared memory, you can't pass the pointer. The pointer will contain the virtual address, which is different from one process to another. You have to exchange offset values, based on the start of the shared memory area. If you don't use shared memory, you can't exchange pointers of any kind: The other process won't be able to access the memory of your process.
624,854
624,858
static_cast with boost::shared_ptr?
What is the equivalent of a static_cast with boost::shared_ptr? In other words, how do I have to rewrite the following Base* b = new Derived(); Derived* d = static_cast<Derived*>(b); when using shared_ptr? boost::shared_ptr<Base> b(new Derived()); boost::shared_ptr<Derived> d = ???
Use boost::static_pointer_cast: boost::shared_ptr<Base> b(new Derived()); boost::shared_ptr<Derived> d = boost::static_pointer_cast<Derived>(b);
625,023
625,044
C++ class best practice
I'd like to know some best practice when designing c++ classes. To put it in context, I have a c++ class named Vec3. class Vec3{ private: float elements[3]; public: Vec3(Vec3 v1){...} Vec3(int x, int y, int z){...} Vec3 add(Vec3 v1){...} Vec3 add(int x, int y, int z){...} ... Vec3 multiply(Vec3 v1){...} ... int dotProduct(Vec3 v1){...} Vec3 normalize(){...} .... int operator[](int pos){...} }; So, I have this class that does computing over a Vector of size 3. I'd like to know what's better. Working with pointers or not. Should I return pointer and have my parameters as Pointers or not. Vec3 add(Vec3 v1) or Vec3* add(Vec3 v1) or Vec3* add(Vec3* v1) or .... Now I'm confused, I don't know if I should use pointer or not in my class. I guess there is always a way to send my arguments to function that don't handle pointers... Vec3* v2 = new Vec3(1,1,1); Vec3 sum = v1.add(*v2); And there is a solution that is probably the best of all I can come up with.. having both functions Vec3 add(Vec3 v2){...} Vec3* add(Vec3* v2){...} But I fear this will lead to duplicate code and may be overhead. Thank you for answers...btw, I could use a template to change the size of the Vector but I prefer to keep my Vec3 class alone and create a Vec4 class or name it Quaternion. EDIT Here is the solution I came with. Feel free to comment or modify or reuse the code. One thing. I just want to mention that, in my case, This class is supposed to be transparent. Just like we add numbers. int i = 10; int j = 15; int k = i + k; If the add overload modify the object that is calling the function in this case i. I would endup with a k being a reference to i and i being equal to 25. But what we really want here is a k equal to 25 and i,k unchanged. Thats how my class work. Vec3 k = i + k will not modify i or k because we are creating a new number from these values. The only case where I return a reference is for +=, -=, ++, --..., set([XYZ])? and normalize. 
It could be fun to do something like myvec.setX(10).normalize().scale(10) NOTE: scale should return a reference. I didn't see it but I guess it should be better this way. Vec3 t = myvec.normalize().scale(100).copy(); http://pastebin.com/f413b7ffb Thank you all, I'll be working on the Matrix class now.
These are the rules I usually stick to. Note 'usually', sometimes there are reasons for doing things differently... For parameters I don't intend to modify I pass by value if they aren't too large since they will be copied. If they are a bit large or aren't copyable, you could use a const reference or a pointer (I prefer const reference). For parameters I do intend to modify, I use a reference. For return values I will return a copy whenever possible. Some times it's handy to return a reference (this works well for a single function for get/set where you don't need to do any special processing when the item is fetched or set). Where pointers really shine in my opinion is for instance variables where I want control over when it is constructed or destructed. Hope that helps.
625,105
625,113
Compact way to extract parts of strings (FASTA header)
Given the following string: string Header = ">day11:1:356617"; How do you extract everything except ">", yielding only: day11:1:356617 I could do a standard loop over the string's characters and keep everything other than '>': string nStr = ""; for (size_t i = 0; i < Header.size(); i++) { if (Header[i] != '>') { nStr = nStr + Header[i]; } } But the approach seems too clumsy and slow; in particular, I need to do such extraction for millions of lines.
if (Header[0] == '>') Header = Header.substr(1);
625,247
625,261
How do you read a word in from a file in C++?
So I was feeling bored and decided I wanted to make a hangman game. I did an assignment like this back in high school when I first took C++. But this was before I even took geometry, so unfortunately I didn't do well in any way, shape or form in it, and after the semester I trashed everything in a fit of rage. I'm looking to make a txt document and just throw in a whole bunch of words (ie: test love hungery flummuxed discombobulated pie awkward you get the idea ) So here's my question: How do I get C++ to read a random word from the document? I have a feeling #include <ctime> will be needed, as well as srand(time(0)); to get some kind of pseudorandom choice...but I haven't the foggiest on how to have a random word taken from a file...any suggestions? Thanks ahead of time!
Here's a rough sketch, assuming that the words are separated by whitespace (space, tab, newline, etc.): vector<string> words; ifstream in("words.txt"); string word; while(in >> word) { words.push_back(word); } string r=words[rand()%words.size()]; Note that testing the extraction itself in the while condition stops cleanly at end of file, so no empty or duplicated trailing entry gets pushed.
625,333
625,440
How to limit the impact of implementation-dependent language features in C++?
The following is an excerpt from Bjarne Stroustrup's book, The C++ Programming Language: Section 4.6: Some of the aspects of C++’s fundamental types, such as the size of an int, are implementation- defined (§C.2). I point out these dependencies and often recommend avoiding them or taking steps to minimize their impact. Why should you bother? People who program on a variety of systems or use a variety of compilers care a lot because if they don’t, they are forced to waste time finding and fixing obscure bugs. People who claim they don’t care about portability usually do so because they use only a single system and feel they can afford the attitude that ‘‘the language is what my compiler implements.’’ This is a narrow and shortsighted view. If your program is a success, it is likely to be ported, so someone will have to find and fix problems related to implementation-dependent features. In addition, programs often need to be compiled with other compilers for the same system, and even a future release of your favorite compiler may do some things differently from the current one. It is far easier to know and limit the impact of implementation dependencies when a program is written than to try to untangle the mess afterwards. It is relatively easy to limit the impact of implementation-dependent language features. My question is: How to limit the impact of implementation-dependent language features? Please mention implementation-dependent language features then show how to limit their impact.
Few ideas:

Unfortunately you will have to use macros to avoid some platform-specific or compiler-specific issues. You can look at the headers of the Boost libraries to see how cumbersome it can easily get, for example the files boost/config/compiler/gcc.hpp, boost/config/compiler/intel.hpp, boost/config/platform/linux.hpp, and so on.

The integer types tend to be messy among different platforms; you will have to define your own typedefs or use something like Boost's cstdint.hpp.

If you decide to use any library, check that the library is supported on the given platform. Use libraries with good support and clearly documented platform support (for example Boost).

You can abstract yourself from some C++ implementation-specific issues by relying heavily on libraries like Qt, which provide an "alternative" in the sense of types and algorithms. They also attempt to make coding in C++ more portable. Does it work? I'm not sure.

Not everything can be done with macros. Your build system will have to be able to detect the platform and the presence of certain libraries. Many would suggest autotools for project configuration; I on the other hand recommend CMake (rather nice language, no more M4).

Endianness and alignment might be an issue if you do some low-level meddling (i.e. reinterpret_cast and friends, and "friends" was always a loaded word in a C++ context).

Throw in a lot of warning flags for the compiler; for gcc I would recommend at least -Wall -Wextra. But there is much more, see the documentation of the compiler or this question.

You have to watch out for everything that is implementation-defined and implementation-dependent. If you want the truth, only the truth, nothing but the truth, then go to the ISO standard.
625,353
625,412
Creating Map from Alternate Key Value Input
I have data that looks like this: >day11:1:356617 ACTTCTGATTCTGACAGACTCAGGAAGAAACCAT >day11:2:283282 CTCAGCCCGTAGCCCGTCGGTTCCGGAGTAAGTT >day11:3:205058 NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN >day11:4:202520 AGTTCGATCGGTAGCGGGAGCGGAGAGCGGACCC >day11:5:107099 AGGCATTCAGGCAGCGAGAGCAGAGCAGCGTAGA >day11:6:106715 CTCTTTGCCCCATCTACTGCGAGGATGAAGACCA What I want to do is to create a map, with the line starting with ">" as the key and the ACGT line as the value. However, this construct of mine doesn't work. The map seems to fail to capture the values as I expected. #include <iostream> #include <fstream> #include <sstream> #include <string> #include <map> int main() { ifstream myfile ("mydata.txt"); map <string,string>FastaMap; cerr << "Read Fasta File to Map" << endl; if (myfile.is_open()) { while (getline(myfile,line) ) { stringstream ss(line); string Fasta; string Header = ""; string Tag = ""; ss >> Fasta; // read first column if ( Fasta[0] == '>') { // get header only Header = Fasta.substr(1); //cerr << Header << endl; } else { Tag = Fasta; } if (Header != "" || Tag != "") { FastaMap[Header] = Tag; //cout << "TAG: " << Tag << endl; //cout << "Head: " << Header << endl; // FastaMap.insert(make_pair(Header,Tag)); } } myfile.close(); } else { cout << "Unable to open file"; } // This doesn't print the second value, only prints the first for (map<string,string>::iterator it = FastaMap.begin(); it!= FastaMap.end(); it++) { cout << "Head: " << (*it).first << ", End: " << (*it).second << endl; } } The expected output is: Head: day11:1:356617, End: ACTTCTGATTCTGACAGACTCAGGAAGAAACCAT Head: day11:2:283282, End: CTCAGCCCGTAGCCCGTCGGTTCCGGAGTAAGTT ...etc...
You're re-creating Fasta, Header and Tag on every loop iteration. What you have to do is: declare those variables outside the while (just before it); change the if (Header != "" || Tag != "") line to use && instead of || (there is a logic error there); and reset the Tag and Header variables when you add them to the map. The correct code follows: #include <iostream> #include <fstream> #include <sstream> #include <string> #include <map> using namespace std; int main() { string line; ifstream myfile ("test"); map <string,string> FastaMap; cerr << "Read Fasta File to Map" << endl; if (myfile.is_open()) { string Fasta; string Header = ""; string Tag = ""; while (getline(myfile,line) ) { stringstream ss(line); ss >> Fasta; // read first column if ( Fasta[0] == '>') { // get header only Header = Fasta.substr(1); //cerr << Header << endl; } else { Tag = Fasta; } if (Header != "" && Tag != "") { FastaMap[Header] = Tag; cout << "TAG: " << Tag << endl; cout << "Head: " << Header << endl; Header = ""; Tag = ""; // FastaMap.insert(make_pair(Header,Tag)); } } myfile.close(); } else { cout << "Unable to open file"; } for (map<string,string>::iterator it = FastaMap.begin(); it!= FastaMap.end(); it++) { cout << "Head: " << (*it).first << ", End: " << (*it).second << endl; } } Note there are other possible enhancements to the code, but as it is it now works.
625,416
625,421
Can you set the size of the call stack in c++? (vs2008)
I'm working from an example piece of code that allocates a relatively large local array (32768 elements, to be precise). When I try the same I'm getting behaviour that appears to be a stack overflow. Now I was wondering if my example has maybe set the stack to be larger than my application's. Is this possible? If so, how?
With the Microsoft compiler you can use /F to set the stack size, however it seems like you should just allocate the object on the heap. You should have a reason you're allocating this on the stack rather than the heap. Edit: This page gives a good cross-platform breakdown, though it may be dated.
625,504
625,514
Can Events be Inter-Process?
I have created an event in one process and to test, sent the event handle via a pipe to a totally separate process (not a child thread) When I fire the event in the first, WaitForSingleObject does not detect the event so I am guessing the answer is no unless I missed some trick in the SECURITY_ATTRIBUTES structure? Or perhaps I need to use a named event and call OpenEvent()? In this case I cannot use window messages because I am trying to signal a windows service. I could use the pipe, but there will be many of these applications, and I would like to find a "low cost" solution if possible. Other options like Memory mapped files have even more overhead than the pipe? How would you do this?
You need to create a named event and open it in both processes. If you have multiple processes listening, you may consider using a semaphore.
625,507
625,521
Static vs. member variable
For debugging, I would like to add some counter variables to my class. But it would be nice to do it without changing the header, which causes much recompiling. If I've understood the keyword correctly, the following two snippets would be quite identical. Assuming of course that there is only one instance. class FooA { public: FooA() : count(0) {} ~FooA() {} void update() { ++count; } private: int count; }; vs. class FooB { public: FooB() {} ~FooB() {} void update() { static int count = 0; ++count; } }; In FooA, count can be accessed anywhere within the class, and also bloats the header, as the variable should be removed when not needed anymore. In FooB, the variable is only visible within the one function where it exists. Easy to remove later. The only drawback I can think of is the fact that FooB's count is shared among all instances of the class, but that's not a problem in my case. Is this a correct use of the keyword? I assume that once count is created in FooB, it stays created and is not re-initialized to zero on every call to update. Are there any other caveats or heads-ups I should be aware of? Edit: After being notified that this would cause problems in multithreaded environments, I clarify that my codebase is single-threaded.
Your assumptions about static function variables are correct. If you access this from multiple threads, it may not be correct. Consider using InterlockedIncrement().
625,616
625,634
template default argument in a template
I am trying to compile this : template <class T, class U = myDefaultUClass<T> > class myClass{ ... }; Although it seems quite intuitive to me, it is not to my compiler. Does anyone know how to do this? edit : Ok, the problem was not actually coming from this but from a residual try ... Sorry about this, thanks for your answers anyway.
The following works for me using g++. Please post more code, the error messages you are getting and the compiler version. class A {}; template <class T> class T1 {}; template <class T, class U = T1<T> > class T2 { }; T2 <A> t2;
625,799
628,079
Resolve build errors due to circular dependency amongst classes
I often find myself in a situation where I am facing multiple compilation/linker errors in a C++ project due to some bad design decisions (made by someone else :) ) which lead to circular dependencies between C++ classes in different header files (can happen also in the same file). But fortunately(?) this doesn't happen often enough for me to remember the solution to this problem for the next time it happens again. So for the purposes of easy recall in the future I am going to post a representative problem and a solution along with it. Better solutions are of-course welcome. A.h class B; class A { int _val; B *_b; public: A(int val) :_val(val) { } void SetB(B *b) { _b = b; _b->Print(); // COMPILER ERROR: C2027: use of undefined type 'B' } void Print() { cout<<"Type:A val="<<_val<<endl; } }; B.h #include "A.h" class B { double _val; A* _a; public: B(double val) :_val(val) { } void SetA(A *a) { _a = a; _a->Print(); } void Print() { cout<<"Type:B val="<<_val<<endl; } }; main.cpp #include "B.h" #include <iostream> int main(int argc, char* argv[]) { A a(10); B b(3.14); a.Print(); a.SetB(&b); b.Print(); b.SetA(&a); return 0; }
The way to think about this is to "think like a compiler". Imagine you are writing a compiler. And you see code like this. // file: A.h class A { B _b; }; // file: B.h class B { A _a; }; // file main.cc #include "A.h" #include "B.h" int main(...) { A a; } When you are compiling the .cc file (remember that the .cc and not the .h is the unit of compilation), you need to allocate space for object A. So, well, how much space then? Enough to store B! What's the size of B then? Enough to store A! Oops. Clearly a circular reference that you must break. You can break it by allowing the compiler to instead reserve as much space as it knows about upfront - pointers and references, for example, will always be 32 or 64 bits (depending on the architecture) and so if you replaced (either one) by a pointer or reference, things would be great. Let's say we replace in A: // file: A.h class A { // both these are fine, so are various const versions of the same. B& _b_ref; B* _b_ptr; }; Now things are better. Somewhat. main() still says: // file: main.cc #include "A.h" // <-- Houston, we have a problem #include, for all intents and purposes (if you take the preprocessor out) just copies the file into the .cc. So really, the .cc looks like: // file: partially_pre_processed_main.cc class A { B& _b_ref; B* _b_ptr; }; #include "B.h" int main (...) { A a; } You can see why the compiler can't deal with this - it has no idea what B is - it has never even seen the symbol before. So let's tell the compiler about B. This is known as a forward declaration, and is discussed further in this answer. // main.cc class B; #include "A.h" #include "B.h" int main (...) { A a; } This works. It is not great. But at this point you should have an understanding of the circular reference problem and what we did to "fix" it, albeit the fix is bad. The reason this fix is bad is because the next person to #include "A.h" will have to declare B before they can use it and will get a terrible #include error.
So let's move the declaration into A.h itself. // file: A.h class B; class A { B* _b; // or any of the other variants. }; And in B.h, at this point, you can just #include "A.h" directly. // file: B.h #include "A.h" class B { // note that this is cool because the compiler knows by this time // how much space A will need. A _a; }; HTH.
625,849
625,868
Why does the output fail to show the content of a variable after merely adding a cout line?
#include <iostream> using namespace std; class Marks { public: char* name(); }; char* Marks::name() { char temp[30]; cout<<"Enter a name:"<<endl; cin.getline(temp,30); return temp; } int main () { char *name; Marks test1; name=test1.name(); //cout<<"name:"; //uncomment this line to see the problem cout<<name<<endl; return 0; }
The problem is because the value that name is pointing to has been destroyed. You are returning the address of a local variable from Marks::name(). Most likely a side effect of the first cout is causing the contents of name to be destroyed. You're probably just getting lucky when the first cout is commented out. The correct way to do this is to allocate some memory, return that, and then destroy it when you're done: char* Marks::name() { char* temp = new char[30]; cout<<"Enter a name:"<<endl; cin.getline(temp,30); return temp; } int main () { char *name; Marks test1; name=test1.name(); cout<<"name:"; cout<<name<<endl; delete[] name; return 0; } Don't forget to use delete[], rather than just delete, since the memory was allocated with new[]; using plain delete on it would be undefined behaviour.
625,990
626,265
Are there any reasons not to use Visual Studio 6 for C++?
Are there any reasons why I shouldn't use Visual Studio 6 for C++ development? Where can I find some resources explaining why this would or wouldn't be a good idea? Are there any lists of issues I would have with this?
std::string multicore/multiprocessor issues in the runtime (re: KB813810)
poor STL support
even poorer Standard C++ support
Don't do it.
626,067
626,986
Inheritance issues with template classes
Can anyone figure out a nice way to get the following code to work? (This is, once again, an incredibly simplified way of doing this) template <class f, class g> class Ptr; class RealBase { }; template <class a, class b, class c = Ptr<a,b> > class Base : public RealBase { public: Base(){}; }; template <class d, class e> class Derived : public Base <d,e> { public: Derived(){}; void DerivedMethod(){}; }; template <class f, class g> class Ptr { public: Ptr(){}; Ptr(Base<f,g,Ptr<f,g> >* a){in = a;}; Base<f,g,Ptr<f,g> >* operator->() { return in; }; private: Base<f,g,Ptr<f,g> >* in; }; typedef Derived<double,double> DDerived; int main() { Base<int,int> b = Base<int,int>(); Derived<double,double> d = Derived<double,double>(); DDerived dd = DDerived(); Ptr<double,double> p(&dd); p->DerivedMethod(); return 1; } At the moment the compiler moans because a Base<> doesn't have a DerivedMethod(); EDIT I do understand the problem here, my bad for a lack of explanation in the first case (I was in a rush). [This is very cut down code by the way so, yes, for the sake of simplicity I don't have destructors or methods that actually DO anything. Just ignore that, it makes no difference to the problem.] What I need to happen is that an iterator (that isn't detailed here) that deals only with the derived class (D_Derived) need to access a method that is contained within a derived class and not the base class. Simply plonking this method into the base class isn't really an option because this error is going to occur a LOT if I have to do that. (There are about 10 classes built on the real base class, and they all have derived methods of their own. Like 20 of them. Each.) Can anyone think of another way to achieve this functionality?
OK. After trying about 10 different methods (static_cast, reinterpret_cast etc.) you can just cast it. int main() { Base<int,int> b = Base<int,int>(); Derived<double,double> d = Derived<double,double>(); DDerived dd = DDerived(); Ptr<double,double> p(&dd); DDerived * fauxpointer; fauxpointer = (DDerived*) &(*p); fauxpointer->DerivedMethod(); return 1; } Nasty. But it works, and that's all that counts. Cheers to everyone who repeated the question back at me.
626,123
626,138
Question on using realloc implementation in C++ code
Friends, in our C++ code I am currently using the realloc method to resize memory allocated by malloc. realloc() usage is done as below: my_Struct *strPtr =(my_struct*)malloc(sizeof(my_Struct)); /* and later */ strPtr = (my_struct*)realloc(strPtr,sizeof(my_Struct)*NBR); Now Wikipedia (http://en.wikipedia.org/wiki/Malloc) says that if instead one did void *p = malloc(orig_size); /* and later... */ p = realloc(p, big_size); then in case it is not possible to obtain big_size bytes of memory, p will have value NULL and we no longer have a pointer to the memory previously allocated for p, creating a memory leak. And it also says that the correct way to rectify the above error is void *p = malloc(orig_size); /* and later... */ void *tmp = realloc(p, big_size); if (tmp != NULL) { p = tmp; /* OK, assign new, larger storage to p */ } else { /* handle the problem somehow */ } Can you tell me which is the best way to use realloc()? Also, once I have a pointer to a structure, can I later use a pointer to void when calling realloc? Many thanks
Of course you must protect against the case that realloc() returns NULL. It is a memory allocation, and memory allocations can always fail. (realloc() is mostly used in C; I think C++ programmers consider raw realloc() calls a bit low-level/quaint.) Directly overwriting the pointer with the return value is an error, as that drops the original pointer and makes the memory leak in case the reallocation failed.
626,160
626,289
Threading in a DLL where the DLL must return before child thread finishes
I am working on writing a wrapper DLL to interface a communication DLL for a yokogawa WT1600 power meter, to a PC based automation package. I got the communication part to work but I need to thread it so that a 50ms scan time of the automation package can be maintained. (The Extended Function Block (EFB) Call will block the scan until it returns) These are the steps I need to do. Call EFB EFB creates a thread to perform communication setup (takes about 200ms to do) EFB returns EFB_BUSY while the thread is doing the work 3a. (automation program continues scanning until it comes back to the EFB call) Call EFB passing in that it returned busy on the last call EFB checks if the thread has returned If the thread returned Then the EFB returns success, Else return EFB_BUSY repeat 3a-6 until efb returns success So my problem is, how do I create a thread that exists past the life of the function that called it? And how do I get that thread return value when I call back into the DLL? EDIT #1 HeavyFunction::HeavyFunction^ hf; //HeavyFunction is a class that has a time consuming function in it ThreadStart^ efbThreadDelegate; Thread^ efbThread; if( pEfbData->nBlockingRecall != DOEFB_BUSY ) { hf = gcnew HeavyFunction::HeavyFunction; hf->iiStart = (int)(pEfbData->uParams[0].dw); hf->iiEnd = (int)(pEfbData->uParams[1].dw); efbThreadDelegate = gcnew ThreadStart( hf, &HeavyFunction::HeavyFunction::iGetPrime ); efbThread = gcnew Thread( efbThreadDelegate ); efbThread->Start(); return DOEFB_BUSY; }else if ( efbThread->IsAlive ) { return DOEFB_BUSY; }else { uRetValue->dw = hf->iReturn; return 0; } Will efbThread still have the same thread handle upon a subsequent call? EDIT #2 I got it to work by creating a global HANDLE for a Mutex and a thread. Initializing the mutex in the init entry point (done upon dll loading) and creating the thread in the main function when a call is actually made to the dll. I used the sample code from MSDN: Creating Threads as my model.
Any thread created (whether in a DLL or elsewhere) will not stop spontaneously. In particular, the function that created the thread may return; the new thread will still run even if the creator thread exits. That is, assuming it didn't hit the end of its own entry function. Windows threads return a DWORD when they finish. To peek, call WaitForSingleObject on the thread handle with a zero timeout, and if that succeeds, call GetExitCodeThread. I don't understand your whole "EFB" thing, though, neither what it is nor what it does. If it is doing funny things to normal Windows threads, all bets are off.
626,199
626,215
Overloading << operator C++ - Pointer to Class
class logger { .... }; logger& operator<<(logger& log, const std::string& str) { cout << "My Log: " << str << endl; return log; } logger log; log << "Lexicon Starting"; Works fine, but I would like to use a pointer to a class instance instead, i.e.: logger * log = new logger(); log << "Lexicon Starting"; Is this possible? If so what is the syntax? Thanks Edit: The compiler error is error: invalid operands of types 'logger*' and 'const char [17]' to binary 'operator<<'
You'd have to dereference the pointer to your logger object and obviously check if it's not 0. Something like this should do the job: log && ((*log) << "Lexicon starting") As a general aside, I would shy away from referencing objects like a logger (which you normally unconditionally expect to be present) via a pointer due to the uncertainty you get with a pointer, AKA is there an object or not?
626,345
8,786,996
Cannot load symbols in GlowCode x64
This question might be too application specific to be out here on SO, but here goes. I am trying to profile a simple native c++ application using GlowCode-x64 6.2 . The problem is that no matter which settings I set in the "Options->Symbol server and search path" the symbols are never loaded. My .pdb files are all in the same folder as the .exe file too. When entering the "Running->Hooks" menu and adding modules I always get "Not loaded" in the "Address" column. I have read the startup tutorial several times and followed it every step of the way. I have even tried profiling the same program using AQTime6, which worked perfectly. So I know that the debug information is there. I also recall using GlowCode 2 years ago on my 32bit laptop, and got it working at once. Any help beyond "read the manual again" is greatly appreciated. Edit: Just wanted to point out that my application is built with the VS2005 SP1 compiler. Edit2: Just got home from work and tried GlowCode at home on Win32. Worked like a charm.
I visited the GlowCode site and found that version 6.2 is only a 32-bit build, not a 64-bit one. GlowCode 7.0 has both installers, 32-bit and 64-bit. This might be your problem.
626,373
626,413
How to conditionally choose the C# class I invoke via COM in my C++ DLL?
After much help from all my StackOverFlow brethren, I managed to create a C++ DLL that calls my C# classes via COM and passes data back and forth to an external application. There was much celebration in the kingdom after that code started working. Now I have a new problem. I'm expanding the DLL so that it can call different classes (all implementing the same interface). I need to decide what class to call depending on a char array passed to the DLL when it is loaded. The methods I call are exactly the same regardless of which class I use. What is the best way to switch between classes when calling the DLL? // Initialize COM. HRESULT hr = CoInitialize(NULL); // I want to do something like this....but how? if (strcmp(modelType, "Model1") == 0) { IUnitModelPtr pIUnit(__uuidof(ClassOne)); } if (strcmp(modelType, "Model2") == 0) { IUnitModelPtr pIUnit(__uuidof(ClassTwo)); } //call method 1 //call method 2 CoUninitialize(); //exit This is probably a fairly simple question, but I really do not know any C++. Just getting COM going was a major challenge for me. edit: I'm sure there are some super elegant ways to achieve this (reflection?) but please limit your suggestions to stuff that can be implemented easily....efficiency is not important here and maintainability is not really an issue.
Do something like this: GUID classId = GUID_NULL; if( strcmp( modelType, "Model1" ) == 0 ) { classId = __uuidof( class1 ); } else if( strcmp( modelType, "Model2" ) == 0 ) { classId = __uuidof( class2 ); } else if(... etc, continue for all possible model types } IUnitModelPtr unit; unit.CreateInstance( classId ); // interface methods can be called here
626,510
626,521
What is the impact of namespaces in c++ linkages compared to linkages in c?
What is the impact of namespaces on C++ linkage compared to linkage in C? Is it possible to change a name that has internal linkage to external linkage just by using a namespace? Similarly, the other way around?
In general, the namespace name is prepended to any enclosed entity's name before the name is mangled and goes to the linker. If you have two functions with the same signature in different namespaces, they link into one binary just fine. If you have two classes with the same name, each with at least one method of the same signature, and these classes are in different namespaces, they again link together just fine.