734,958
735,007
C++ empty-paren member initialization - zeroes out memory?
I originally wrote some code like this:

```cpp
class Foo {
public:
    Foo() : m_buffer() {}
private:
    char m_buffer[1024];
};
```

Someone who is smarter than me said that having the m_buffer() initializer would zero out the memory. My intention was to leave the memory uninitialized. I didn't have time to discuss it further, but it piqued my curiosity. Previously, I had thought it was wise to always list each member in the initializer list. Could someone please describe this behavior further?

1) Why does the empty-paren initializer fill in memory?
2) Does it only hold for POD datatypes? I heard that it was so, but don't have the standard handy.

Thanks
If you have a member initialized like that, it will be value-initialized. That is also true for PODs. For a struct, every member is value-initialized that way, and for an array, every element of it is value-initialized. Value-initialization of a scalar type like a pointer or integer initializes it to 0 converted to the right type, so you will get null pointers, false, or zero, depending on the concrete type. Note that the rule changed subtly from C++98 to C++03 (what we have right now), which can have surprising effects. C++98 didn't have that value-initialization; it said default initialization happens, which for a non-POD type always meant its default constructor is invoked. But value-initialization in C++03 has a special meaning if there is no user-declared constructor: every member is value-initialized then. Here is the difference:

```cpp
struct A { int c; ~A() { } }; // non-POD, but no user-declared ctor
struct B { A a; B() : a() { } } b;
```

Now, in C++03, you are guaranteed that b.a.c is zero, while in C++98, b.a.c will have some indeterminate value.
734,996
735,120
How to marshal C++/CLI array of struct to unmanaged C++
I'm searching for the correct syntax to pass a struct array to an unmanaged C++ DLL. My DLL imports are called like this:

```cpp
#define _DllImport [DllImport("Controller.dll", CallingConvention = CallingConvention::Cdecl)]

static _DllImport bool _Validation(/* array of struct somehow */);
```

In my client code I have:

```cpp
List<MyStruct^> list;
MyObject::_Validation(/* list */);
```

I know System::Runtime::InteropServices::Marshal has a lot of useful methods for doing stuff like this, but I'm not sure which to use.
Create a managed version of the unmanaged struct using StructLayout.Sequential (make sure to put things in the same order). You should then be able to pass it like you'd pass it to any managed function (e.g., Validation(MyStruct[] pStructs)). For example, let's say our native function has this prototype:

```cpp
extern "C" {
    STRUCTINTEROPTEST_API int fnStructInteropTest(MYSTRUCT *pStructs, int nItems);
}
```

and the native MYSTRUCT is defined as follows:

```cpp
struct MYSTRUCT {
    int a;
    int b;
    char c;
};
```

Then in C#, you define a managed version of the struct as follows:

```csharp
[System.Runtime.InteropServices.StructLayoutAttribute(System.Runtime.InteropServices.LayoutKind.Sequential)]
public struct MYSTRUCT {
    public int a;
    public int b;
    public byte c;
}
```

And the managed prototype as follows:

```csharp
[System.Runtime.InteropServices.DllImportAttribute("StructInteropTest.dll", EntryPoint = "fnStructInteropTest")]
public static extern int fnStructInteropTest(MYSTRUCT[] pStructs, int nItems);
```

You can then call the function, passing it an array of MYSTRUCT structs, as follows:

```csharp
static void Main(string[] args) {
    MYSTRUCT[] structs = new MYSTRUCT[5];
    for (int i = 0; i < structs.Length; i++) {
        structs[i].a = i;
        structs[i].b = i + structs.Length;
        structs[i].c = (byte)(60 + i);
    }
    NativeMethods.fnStructInteropTest(structs, structs.Length);
    Console.ReadLine();
}
```
735,021
735,094
Most useful or amazing STL short liners
I'm looking for practical and educational samples of C++ / STL code fitting in a few lines. My actual favorites are:

Empty a vector, freeing its reserved memory (swap with a temporary):

```cpp
vector<...>().swap(v);
```

Copy a map to a vector:

```cpp
map<T1, T2> myMap;
vector< pair<T1, T2> > myVec(myMap.begin(), myMap.end());
// or
myVec.assign(myMap.begin(), myMap.end());
```

Custom, non-Boost split:

```cpp
vector<string> &mysplit(const string &s, char delim, vector<string> &elems) {
    stringstream ss(s);
    string item;
    while (getline(ss, item, delim)) {
        elems.push_back(item);
    }
    return elems;
}
```
```cpp
// std::back_inserter usage (std::inserter for map)
std::copy(source.begin(), source.end(), std::back_inserter(container));

// mem_fun and bind usage (but Boost is better)
std::some_algorithm(..., std::mem_fun(func));
```

Not so useful, but powerful: check whether a container is sorted:

```cpp
std::adjacent_find(container.begin(), container.end(),
                   greater<Container::value_type>()) == container.end()
```

Also the examples mentioned by you and dirkgently.
735,170
735,304
Can the "Application Error" dialog box be disabled?
I am using Hudson as a continuous integration server to test C/C++ code. Unfortunately, I have a bug somewhere that causes memory corruption, so on some Windows machines I will sometimes get an "Application Error" dialog box explaining that an instruction referenced memory that could not be read. This dialog box pops up and basically hangs the test run, as it requires manual intervention. Is there a way to prevent this dialog box from appearing, so that the test run simply fails and is reported as such in Hudson? Is it possible to automatically generate a minidump instead of showing the dialog?
Use "Disable error reporting", as Mr. Gently suggests. See also this PC World article.

If you happen to have MS Visual Studio on your build machine, it will catch Application Errors and pop up a dialog box. To disable these dialogs (and also the Just-In-Time Debugging feature of Visual Studio), run the command

```
drwtsn32.exe -i
```

to set Dr. Watson as the default system debugger. Dr. Watson will generate a core dump and silently exit. (See this Microsoft Knowledge Base article: http://support.microsoft.com/kb/q121434/.)
735,204
735,241
Convert a String In C++ To Upper Case
How can one convert a string to upper case? The examples I have found from Googling only deal with chars.
Boost string algorithms:

```cpp
#include <boost/algorithm/string.hpp>
#include <string>

std::string str = "Hello World";
boost::to_upper(str);

std::string newstr = boost::to_upper_copy<std::string>("Hello World");
```
735,249
735,338
Blocking socket returns EAGAIN
One of my projects on Linux uses blocking sockets. Things happen very serially, so non-blocking would just make things more complicated. Anyway, I am finding that often a recv() call is returning -1 with errno set to EAGAIN. The man page only really mentions this happening for non-blocking sockets, which makes sense. With non-blocking, the socket may or may not be available, so you might need to try again.

What would cause it to happen for a blocking socket? Can I do anything to avoid it?

At the moment, my code to deal with it looks something like this (I have it throw an exception on error, but beyond that it is a very simple wrapper around recv()):

```cpp
int ret;
do {
    ret = ::recv(socket, buf, len, flags | MSG_NOSIGNAL);
} while (ret == -1 && errno == EAGAIN);

if (ret == -1) {
    throw socket_error(strerror(errno));
}
return ret;
```

Is this even correct? The EAGAIN condition gets hit pretty often.

EDIT: Some things which I've noticed which may be relevant: I do set a read timeout on the socket using setsockopt(), but it is set to 30 seconds. The EAGAINs happen way more often than once every 30 seconds. CORRECTION: my debugging was flawed; EAGAINs don't happen as often as I thought they did. Perhaps it is the timeout triggering.

For connecting, I want to be able to have the connect time out, so I temporarily set the socket to non-blocking. That code looks like this:

```cpp
int error = 0;
fd_set rset;
fd_set wset;
int n;
const SOCKET sock = m_Socket;

// set the socket as nonblocking IO
const int flags = fcntl(sock, F_GETFL, 0);
fcntl(sock, F_SETFL, flags | O_NONBLOCK);

errno = 0;

// we connect, but it will return soon
n = ::connect(sock, addr, size_addr);

if (n < 0) {
    if (errno != EINPROGRESS) {
        return -1;
    }
} else if (n == 0) {
    goto done;
}

FD_ZERO(&rset);
FD_ZERO(&wset);
FD_SET(sock, &rset);
FD_SET(sock, &wset);

struct timeval tval;
tval.tv_sec = timeout;
tval.tv_usec = 0;

// We "select()" until connect() returns its result or timeout
n = select(sock + 1, &rset, &wset, 0, timeout ? &tval : 0);
if (n == 0) {
    errno = ETIMEDOUT;
    return -1;
}

if (FD_ISSET(sock, &rset) || FD_ISSET(sock, &wset)) {
    socklen_t len = sizeof(error);
    if (getsockopt(sock, SOL_SOCKET, SO_ERROR, &error, &len) < 0) {
        return -1;
    }
} else {
    return -1;
}

done:
// We change the socket options back to blocking IO
if (fcntl(sock, F_SETFL, flags) == -1) {
    return -1;
}
return 0;
```

The idea is that I set it to non-blocking, attempt a connect, and select on the socket so I can enforce a timeout. Both the set and restore fcntl() calls return successfully, so the socket should end up in blocking mode again when this function completes.
It's possible that you have a nonzero receive timeout set on the socket (via setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, ...)), as that would also cause recv to return EAGAIN.
735,346
735,402
Cannot Load C++ DLL in C#
(See my previous thread.) I created this one because I installed WinXP in a VM and I cannot get it working again. This time I created an OnLoad event on my form:

```csharp
if (LoadLibrary("blowfish.dll") == 0) {
    Misc.LogToFile("Could not load dll", true);
    Application.Exit();
}
```

It runs fine on my PC, but on the VM LoadLibrary returns 0. Some users mentioned that the problem could be mixing an older .NET Framework (2.0) with DLLs made in the newest Visual Studio (2008 SP1), so I took action, and now the program's properties are set to target .NET 3.5. On the VM I have .NET 2.0, but this is not the problem; the program itself runs fine. I also have the C++ Redistributable (2005, 2005 SP1 and 2008). What could be the problem?
To troubleshoot further, you could call Marshal.GetLastWin32Error(), which should give you an error code.

Is it possible that you deployed a debug version of your native DLL, which also requires a debug version of MSVCR90D.DLL? You should have distributed the release version, because the debug version requires a different set of DLLs to be present on the target system. It obviously works on your development machine because all debug versions of the required libraries come with Visual Studio.

This is how you would get the message belonging to an error code:

```csharp
[DllImport("kernel32.dll")]
private static extern int FormatMessage(int dwFlags, IntPtr lpSource, int dwMessageId, int dwLanguageId, out string lpBuffer, int nSize, IntPtr pArguments);

public static string GetErrorMessage(int errorCode) {
    const int FORMAT_MESSAGE_ALLOCATE_BUFFER = 0x00000100;
    const int FORMAT_MESSAGE_IGNORE_INSERTS = 0x00000200;
    const int FORMAT_MESSAGE_FROM_SYSTEM = 0x00001000;

    string lpMsgBuf;
    int dwFlags = FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS;

    int retVal = FormatMessage(dwFlags, IntPtr.Zero, errorCode, 0, out lpMsgBuf, 0, IntPtr.Zero);
    if (0 == retVal) {
        return null;
    }
    return lpMsgBuf;
}
```
735,647
735,656
#ifdef for 32-bit platform
In an application I maintain, we've encountered a problem with file descriptor limitations affecting the stdlib. This problem only affects the 32-bit version of the standard lib. I have devised a fix for my code and would like to implement it, but only when compiling for a 32-bit executable. What pre-processor symbol can I #ifdef on to determine whether the code is being compiled for a 32- or 64-bit target?

EDIT: Sorry, I didn't mention: the code is cross-platform (Linux, Windows, Solaris and a few other Unix flavors), mostly using GCC for compilation. Any de facto standards I can use cross-platform?

EDIT 2: I've found some definitions, "__ILP32" and "__LP64", that seem like they may work; a discussion here explains the background on the Unix platforms. Has anyone had any experience with using these defines? Are they going to be usable?
I'm not sure there is a universal #ifdef that is appropriate. The C++ standard almost certainly does not define one. There are certainly platform-specific ones, though. For example, on Windows:

```cpp
#if _WIN64
// 64 bit build
#else
// 32 bit build
#endif
```

EDIT: The OP mentioned this is cross-compiled between Windows and non-Windows using GCC and other compilers. There is no universal macro that can be used for all platforms and compilers. A little bit of preprocessor magic, though, can do the trick. Assuming you're only working on x86 and amd64 chips, the following should do the trick. It can easily be expanded for other platforms:

```cpp
#if _WIN64 || __amd64__
#define PORTABLE_64_BIT
#else
#define PORTABLE_32_BIT
#endif
```
736,061
736,111
C/C++ pattern to USE_HEAP or USE_STACK
Is there a way to define a macro (or something similar) that would allow objects to be allocated on the stack or on the heap, cleanly? E.g., current code:

```cpp
A a;
a.someFunc();
```

The simplest suggestion might be the following, but as you can see below, it's not very clean to maintain two sets of code:

```cpp
#ifdef USE_STACK
A a;
a.someFunc();
#elif defined(USE_HEAP)
A* a = new A();
a->someFunc();
#endif
```

I am looking for a design pattern / proxy class that can be used to compile the code either way, depending on the needs of our customer.

Edit: The code is used to build a library for embedded devices / (embedded) Linux / Windows Mobile. Most customers want stack-based allocation only. A few others have asked to trade stack for heap.

Thanks, Charles
EDIT: improved to allow calling of wrapped member functions through operator->

Expanding on Manuel's answer to make it more complete, try this:

```cpp
#include <iostream>

#define USE_STACK

template <class T>
class HeapWrapper {
#ifdef USE_STACK
    T obj_;
#else
    T *obj_;
#endif

public:
#ifdef USE_STACK
    HeapWrapper() : obj_() {}
    template <class A1> HeapWrapper(const A1 &a1) : obj_(a1) {}
    template <class A1, class A2> HeapWrapper(const A1 &a1, const A2 &a2) : obj_(a1, a2) {}
    // etc
#else
    HeapWrapper() : obj_(new T()) {}
    // deep-copy so two wrappers never delete the same object
    HeapWrapper(const HeapWrapper &other) : obj_(new T(*other.obj_)) {}
    ~HeapWrapper() { delete obj_; }
    template <class A1> HeapWrapper(const A1 &a1) : obj_(new T(a1)) {}
    template <class A1, class A2> HeapWrapper(const A1 &a1, const A2 &a2) : obj_(new T(a1, a2)) {}
    // etc
#endif

#ifdef USE_STACK
    operator const T &() const { return obj_; }
    operator T &() { return obj_; }
    T *operator->() { return &obj_; }
    T &operator*() { return obj_; }
#else
    operator const T &() const { return *obj_; }
    operator T &() { return *obj_; }
    T *operator->() { return obj_; }
    T &operator*() { return *obj_; }
#endif

    // cast operators make this work nicely
    HeapWrapper &operator=(const T &rhs) { *obj_ = rhs; return *this; }
};

class A {
public:
    void member(int x) { std::cout << x << std::endl; }
};

int main() {
    HeapWrapper<int> x1(5);
    HeapWrapper<int> x2;
    HeapWrapper<int> x3 = x1;
    HeapWrapper<int> x4 = 3;
    std::cout << x1 << " " << x2 << " " << x3 << " " << x4 << std::endl;

    // example using a custom class's members
    HeapWrapper<A> a1;
    a1->member(5);
}
```
736,188
736,210
Iterating a read() from a socket
Is this the proper way to iterate over a read on a socket? I am having a hard time getting this to work properly. data.size is an unsigned int that is populated from the socket as well (it is correct). data.data is an unsigned char *.

```cpp
if (data.size > 0) {
    data.data = (unsigned char*)malloc(data.size);
    memset(&data.data, 0, data.size);
    int remainingSize = data.size;
    unsigned char *iter = data.data;
    int count = 0;
    do {
        count = read(connect_fd, iter, remainingSize);
        iter += count;
        remainingSize -= count;
    } while (count > 0 && remainingSize > 0);
} else {
    data.data = 0;
}
```

Thanks in advance.
You need to check the return value from read before you start adding it to other values. You'll get a zero when the socket reports EOF, and -1 on error. Keep in mind that for a socket EOF is not the same as closed.
736,533
736,550
How do you convert a string to ascii to binary in C#?
A while back (freshman year of high school) I asked a really good C++ programmer, who was a junior, to make a simple application to convert a string to binary. He gave me the following code sample:

```cpp
void ToBinary(char* str) {
    char* tempstr;
    int k = 0;
    tempstr = new char[90];
    while (str[k] != '\0') {
        itoa((int)str[k], tempstr, 2);
        cout << "\n" << tempstr;
        k++;
    }
    delete[] tempstr;
}
```

So I guess my question is: how do I get an equivalent to the itoa function in C#? Or if there is not one, how could I achieve the same effect?
This is very easy to do with C#.

```csharp
var str = "Hello world";
```

With LINQ:

```csharp
foreach (string letter in str.Select(c => Convert.ToString(c, 2))) {
    Console.WriteLine(letter);
}
```

Pre-LINQ:

```csharp
foreach (char letter in str.ToCharArray()) {
    Console.WriteLine(Convert.ToString(letter, 2));
}
```
736,701
752,324
class & function names highlighting in Vim
I just recently set up my Vim environment after moving from TextMate, having become addicted to its modal input. However, syntax highlighting seems to be not so beautiful in Vim. I code in C++, and since function calls and class names can't be highlighted, the code is more difficult to read. I played with the color scheme for a bit, but couldn't find any field that corresponded to "class name" or "function name". In the picture below, notice how DroughtLayer:: and *.size() are not highlighted on the right in MacVim. (source: ivzhao.com) Any ideas how to solve this? It really annoys me, as I am very much a visually sensitive guy.
Interestingly, the syntax highlighters in Vim don't support applying a syntax to identifiers or function names, at least not the syntax highlighters for C and C++. So, even if you do:

```
:hi Function guifg=red
```

or

```
:hi Identifier guifg=red
```

it doesn't give these a color. It just seems to be not much more than keywords and constants for these languages. Here, someone has started extending the cpp syntax file to support method names. It's a start, I guess: http://vim.wikia.com/wiki/Highlighting_of_method_names_in_the_definition
736,782
736,790
CoInitializeEx fails when calling OpenCV method cvLoadImage()
I am trying to integrate some OpenCV functionality into my application. Currently I have code set up with DirectShow to get a video feed from my camera, which is then shown in an MFC window. This code cannot be changed or removed. The code runs completely fine, but regardless of the location I place the following line of code:

```cpp
IplImage *img = cvLoadImage("C:/well.jpg");
```

the webcam fails to initialize correctly and breaks the program. More directly, I get a FAILED HRESULT at:

```cpp
CoInitializeEx(NULL, COINIT_APARTMENTTHREADED)
```

More specifically, at some point in my code I call CDialog::DoModal(), which then hits CoInitializeEx() and thus makes the program fail. Would anyone know what is going on here?
CoInitialize will fail if the thread was previously initialized as a different apartment, i.e., if there was a previous CoInitializeEx(NULL, COINIT_MULTITHREADED). I would guess that OpenCV calls CoInitializeEx(NULL, COINIT_MULTITHREADED), causing your subsequent calls to CoInitializeEx to fail. You can confirm this by checking the return value of CoInitializeEx; it'll be RPC_E_CHANGED_MODE in this case. There is no straightforward fix; the simplest approach is to move the OpenCV calls into a separate thread.
736,981
738,264
How do I deal with "Project Files" in my Qt application?
My Qt application should be able to create/open/save a single "Project" at once. What is the painless way to store project's settings in a file? Should it be XML or something less horrible? Of course data to be stored in a file is a subject to change over time. What I need is something like QSettings but bounded to a project in my application rather than to the whole application.
You can use QSettings to store data in a specific .ini file. From the docs:

Sometimes you do want to access settings stored in a specific file or registry path. On all platforms, if you want to read an INI file directly, you can use the QSettings constructor that takes a file name as first argument and pass QSettings::IniFormat as second argument. For example:

```cpp
QSettings settings("/home/petra/misc/myapp.ini", QSettings::IniFormat);
```
736,982
736,994
C++ polymorphism not supported for pointer-to-pointer
I'm searching for a proper way to clean up my pointers. Here is the example code:

```cpp
class Parent {
protected:
    int m_Var;
public:
    Parent() : m_Var(0) {}
    virtual ~Parent() {}
    void PubFunc();
};

class Child : public Parent {
protected:
    bool m_Bool;
public:
    Child() : m_Bool(false) {}
    virtual ~Child() {}
    void ChildFunc();
};

void RemoveObj(Parent **ppObj) {
    (*ppObj)->PubFunc();
    delete *ppObj;
    *ppObj = NULL;
}

int main() {
    Parent* pPObj = NULL;
    Child* pCObj = NULL;
    pPObj = new Parent();
    pCObj = new Child();
    RemoveObj(&pPObj);
    RemoveObj(&pCObj); // This is line 33
    return 1;
}
```

But the compiler gives the error:

```
classes.cpp:33: error: invalid conversion from 'Child**' to 'Parent**'
classes.cpp:33: error: initializing argument 1 of 'void RemoveObj(Parent**)'
```
There are many ways to handle memory correctly. The one closest to your example would be:

```cpp
template <typename T>
void RemoveObj(T **p) {
    if (p == NULL) return;
    delete *p;
    *p = NULL;
}
```

Additionally, you might want to use std::auto_ptr instead. It would look like:

```cpp
int main() {
    std::auto_ptr<Parent> pPObj(new Parent);
    std::auto_ptr<Child> pCObj(new Child);
    // no deletes needed anymore
}
```
737,108
737,133
Converting from Derived* to Base*&
I was trying to answer the question mentioned here by passing a reference to the pointer instead of a pointer to pointer, like this:

```cpp
class Parent { };
class Child : public Parent { };

void RemoveObj(Parent*& pObj) {
    delete pObj;
    pObj = NULL;
}

int main() {
    Parent* pPObj = new Parent();
    Child* pCObj = new Child();
    RemoveObj(pPObj);
    RemoveObj(pCObj); // This is line 32
    return 1;
}
```

But this produces the following compiler error at line 32:

```
error C2664: 'RemoveObj' : cannot convert parameter 1 from 'Child *' to 'Parent *&'
```

I agree that conversion from Child** to Parent** is not allowed. But why is this conversion also not allowed?
An object of type Child* cannot be bound to a Parent*& for exactly the same reason that a Child** cannot be converted to a Parent**. Allowing it would allow the programmer (intentionally or not) to break type safety without a cast.

```cpp
class Animal {};
class DangerousShark : public Animal {};
class CuteKitten : public Animal {};

void f(Animal*& animalPtrRef, Animal* anotherAnimalPtr) {
    animalPtrRef = anotherAnimalPtr;
}

void g() {
    DangerousShark myPet;
    CuteKitten* harmlessPetPtr;
    f(harmlessPetPtr, &myPet); // Fortunately, an illegal function call.
}
```

Edit: I think that some of the confusion arises because of the loose use of the words 'convert' and 'conversion'. References can't be rebound, unlike objects, which can be reassigned, so in the context of references, when we speak of conversion we can only be concerned with initializing a new reference. References are always bound to an object, and from the OP's question it was clear that he is aiming to get a reference that is a direct bind to an existing object. This is only allowed if the object used to initialize the reference is reference-compatible with the type of the reference. Essentially, this is only if the types are the same, or the type of the object is derived from the type of the reference and the reference type is at least as cv-qualified as the initializing object. In particular, pointers to different types are not reference-compatible, regardless of the relationship of the pointed-to types.

In other cases, a reference can be initialized with something that can be converted to the reference type. In these cases, though, the reference must be const and not volatile, the conversion will create a temporary, and the reference will be bound to this temporary and not the original object. As pointed out, this is not suitable for the requirements of the OP's motivating example.

In summary, a Child can be bound directly to a Parent&, but a Child* cannot be directly bound to a Parent*&. A Parent* const& can be initialized with a Child*, but the reference will actually bind to a temporary Parent* object copy-initialized from the Child* object.
737,240
737,245
Array size at run time without dynamic allocation is allowed?
I've been using C++ for a few years, and today I saw some code, but how can this be perfectly legal?

```cpp
int main(int argc, char **argv) {
    size_t size;
    cin >> size;
    int array[size];
    for (size_t i = 0; i < size; i++) {
        array[i] = i;
        cout << i << endl;
    }
    return 0;
}
```

Compiled under GCC. How can the size be determined at run time without new or malloc? Just to double-check, I've Googled some, and all codes similar to mine are claimed to give a storage size error. Even Deitel's C++ How To Program, p. 261, states under Common Programming Error 4.5:

Only constants can be used to declare the size of automatic and static arrays.

Enlighten me.
This is valid in C99. The C99 standard supports variable-sized arrays on the stack. Probably your compiler has chosen to support this construct too. Note that this is different from malloc and new. GCC allocates the array on the stack, just like it does with int array[100], by simply adjusting the stack pointer. No heap allocation is done. It's pretty much like _alloca.
737,409
737,437
Are get and set functions popular with C++ programmers?
I'm from the world of C# originally, and I'm learning C++. I've been wondering about get and set functions in C++. In C# usage of these is quite popular, and tools like Visual Studio promote usage by making them very easy and quick to implement. However, this doesn't seem to be the case in the C++ world.

Here's the C# 2.0 code:

```csharp
public class Foo {
    private string bar;

    public string Bar {
        get { return bar; }
        set { bar = value; }
    }
}
```

Or, in C# 3.0:

```csharp
public class Foo {
    public string Bar { get; set; }
}
```

Many people will say: well, what's the point of that? Why not just create a public field and then make it a property later if you need to? Honestly, I'm actually not sure. I just do it out of good practice because I've seen it done so many times. Now, because I'm so used to doing it, I feel like I should carry the habit over to my C++ code, but is this really necessary? I don't see it done as often as with C#.

Anyway, here's the C++, from what I gather:

```cpp
class Foo {
public:
    std::string GetBar() const; // Thanks for the tip, @Daniel Earwicker.
    void SetBar(std::string bar);
private:
    std::string bar;
};

std::string Foo::GetBar() const {
    return bar;
}

void Foo::SetBar(std::string bar) {
    // Also, I always wonder if using 'this->' is good practice.
    this->bar = bar;
}
```

Now, to me that seems like a whole lot of legwork; considering that using Visual Studio's tools the C# implementation would take literally seconds to implement, and the C++ took me a lot longer to type, I feel it's not worth the effort, especially when the alternative is five lines long:

```cpp
class Foo {
public:
    std::string Bar;
};
```

From what I gather, these are the advantages:

- You can change implementation details for the get and set functions, so instead of returning a private field you can return something more interesting.
- You can remove a get/set later on and make it read/write only (but for a public-facing interface, this seems not good).

And the disadvantages:

- Takes ages to type; is this really worth the effort? Generally speaking.

In some cases the advantages make it worth the effort, but I mean, speaking in terms of "good practice", is it?

Answer: Why did I choose the answer with fewer votes? I was actually very close to choosing veefu's answer; however, my personal opinion (which is apparently controversial) is that that answer over-egged the pudding. The answer I chose, on the other hand, seems to argue both sides; I think getters and setters are evil if used excessively (by that I mean, when it's not necessary and would break the business model), but why shouldn't we have a function called GetBalance()? Surely this would be far more versatile than PrintBalance(); what if I wanted to show it to the user in another way than the class wanted me to? Now, in some sense GetBalance() may not be relevant enough to argue that "getters and setters are good" because it doesn't (or maybe shouldn't) have an accompanying setter, and speaking of which, a function called SetBalance(float f) could be bad (in my opinion) because it would imply to the implementer of the function that the account must be manipulated outside of the class, which is not a good thing.
I'd argue that providing accessors is more important in C++ than in C#. C++ has no built-in support for properties. In C# you can change a public field to a property mostly without changing the user code. In C++ this is harder. For less typing, you can implement trivial setters/getters as inline methods:

```cpp
class Foo {
public:
    const std::string& bar() const { return _bar; }
    void bar(const std::string& bar) { _bar = bar; }
private:
    std::string _bar;
};
```

And don't forget that getters and setters are somewhat evil.
737,575
737,584
Must I use pointers for my C++ class fields?
After reading a question on the difference between pointers and references, I decided that I'd like to use references instead of pointers for my class fields. However, it seems that this is not possible, because they cannot be declared uninitialized (right?).

In the particular scenario I'm working on right now, I don't want to use normal variables (what's the correct term for them, by the way?) because they're automatically initialized when I declare them. In my snippet, bar1 is automatically instantiated with the default constructor (which isn't what I want), &bar2 causes a compiler error because you can't use uninitialized references (correct?), and *bar3 is happy as Larry because pointers can be declared uninitialized (by the way, is it best practice to set this to NULL?).

```cpp
class Foo {
public:
    Bar bar1;
    Bar &bar2;
    Bar *bar3;
};
```

It looks like I have to use pointers in this scenario; is this true? Also, what's the best way of using the variable? The -> syntax is a bit cumbersome... Tough luck? What about smart pointers, etc.? Is this relevant?

Update 1: After attempting to implement a reference variable field in my class and initializing it in the constructor, why might I receive the following error?

```
../src/textures/VTexture.cpp: In constructor 'vimrid::textures::VTexture::VTexture()':
../src/textures/VTexture.cpp:19: error: uninitialized reference member 'vimrid::textures::VTexture::image'
```

Here's the real code:

```cpp
// VTexture.h
class VTexture {
public:
    VTexture(vimrid::imaging::ImageMatrix &rImage);
private:
    vimrid::imaging::ImageMatrix &image;
};

// VTexture.cpp
VTexture::VTexture(ImageMatrix &rImage)
    : image(rImage)
{
}
```

I've also tried doing this in the header, but no luck (I get the same error):

```cpp
// VTexture.h
class VTexture {
public:
    VTexture(vimrid::imaging::ImageMatrix &rImage)
        : image(rImage)
    {
    }
};
```

Update 2: Fred Larson: Yes! There is a default constructor; I neglected it because I thought it wasn't relevant to the problem (how foolish of me). After removing the default constructor I caused a compiler error, because the class is used with a std::vector, which requires there to be a default constructor. So it looks like I must use a default constructor, and therefore must use a pointer. Shame... or is it? :)
Answer to Question 1: However it seems that this is not possible, because they [references] cannot be declared uninitialized (right?). Right. Answer to Question 2: In my snippet, bar1 is automatically instantiated with the default constructor (which isn't what I want), &bar2 causes a compiler error because you can't use uninitialized references (correct?), You initialize references of your class in your constructor's initializer list: class Foo { public: Foo(Bar &rBar) : bar2(rBar), bar3(NULL) { } Bar bar1; Bar &bar2; Bar *bar3; } Answer to Question 3: In the particular scenario I'm working on right now, I don't want to use normal variables (what's the correct term for them by the way?) There is no correct name for them, typically you can just say pointers for most discussions (except this one) and everything you need to discuss will also apply to references. You initialize non pointer, non reference members in the same way via the initailizer list. class Foo { public: Foo() : x(0), y(4) { } int x, y; }; Answer to Question 4: pointers can be declared uninitialized (by the way, is it best practice to set this to NULL?). They can be declared uninitialized yes. It is better to initialize them to NULL because then you can check if they are valid. int *p = NULL; //... //Later in code if(p) { //Do something with p } Answer to Question 5: It looks like I have to use pointers in this scenario, is this true? Also, what's the best way of using the variable? You can use either pointers or references, but references cannot be re-assigned and references cannot be NULL. A pointer is just like any other variable, like an int, but it holds a memory address. An array is an aliased name for another variable. A pointer has its own memory address, whereas an array should be seen as sharing the address of the variable it references. With a reference, after it is initialized and declared, you use it just like you would have used the variable it references. There is no special syntax. 
With a pointer, to access the value at the address it holds, you have to dereference the pointer. You do this by putting a * before it. int x=0; int *p = &x;//p holds the address of x int &r(x);//r is a reference to x //From this point *p == r == x *p = 3;//change x to 3 r = 4;//change x to 4 //Up until now p pointed to x int y=0; p = &y;//p now holds the address of y instead. Answer to Question 6: What about smart pointers, etc? Is this relevant? Smart pointers (see boost::shared_ptr) are used so that when you allocate on the heap, you do not need to manually free your memory. None of the examples I gave above allocated on the heap. Here is an example where the use of smart pointers would have helped. void createANewFooAndCallOneOfItsMethods(Bar &bar) { Foo *p = new Foo(bar); p->f(); //The memory for p is never freed here, but if you had used a smart pointer then it would have been freed here. } Answer to Question 7: Update 1: After attempting to implement a reference variable field in my class and initializing it in the constructor, why might I receive the following error? The problem is that you didn't specify an initializer list. See my answer to question 2 above. Everything after the colon: class VTexture { public: VTexture(vimrid::imaging::ImageMatrix &rImage) : image(rImage) { } private: vimrid::imaging::ImageMatrix &image; };
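The pointer/reference mechanics described above can be condensed into one self-contained, runnable sketch (the variable names are illustrative, not taken from the question):

```cpp
#include <cassert>

// Returns the final value of x after writing through a pointer
// and a reference that both alias it.
int alias_demo() {
    int x = 0;
    int *p = &x;   // p holds the address of x
    int &r = x;    // r is another name for x
    *p = 3;        // a write through the pointer changes x
    r = 4;         // a write through the reference changes x too
    int y = 7;
    p = &y;        // a pointer can be re-seated to another variable...
    // ...but a reference cannot: r still refers to x.
    *p = 8;        // modifies y, leaves x untouched
    return x;
}
```

Running it shows that x ends up as 4: the pointer was re-seated before the final write, so only y was changed by it.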
737,583
737,595
What are the reasons for preferring Singleton or function scope local static objects over one another?
Both Marshall Cline's "C++ FAQ Lite" and Scott Meyers' Effective C++ suggest using functions returning local static objects to avoid possible problems with non-local static object initialization order. In short (from "Effective C++", 3rd edition by Scott Meyers): FileSystem& tfs() { static FileSystem fs; return fs; } Both writers add that this is similar to the Singleton pattern, except that this does not ensure that the local fs is the only instance of a FileSystem. Now, in a situation where one instance of resource-managing class T is enough, what would be your reasons to prefer a Singleton class or this local static approach over one another? It is not strictly necessary to limit using the class T to just one instance, although our application does not need more than one. Obviously having a global object is an issue when doing TDD, but in this case both approaches are global.
You can use both: class Singleton { public: static Singleton & Instance() { static Singleton s; return s; } private: Singleton() {} }; Now the only way a Singleton can be created is via the Instance function (because the constructor is private) and so you can guarantee only one Singleton exists. If you want to use the free function approach described in your question, you could consider making the function a friend of the Singleton class while retaining the private constructor. Note that this construct (as with all constructs that involve static variables) is not thread safe. If thread safety is an issue, you need to consider using something like the double-checked locking pattern when accessing the static Singleton variable.
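The combined approach above can be exercised directly; every call to Instance() hands back the same object, which a quick address comparison confirms (the value member is added here purely for demonstration):

```cpp
#include <cassert>

class Singleton {
public:
    static Singleton &Instance() {
        static Singleton s;   // created on first use, destroyed at program exit
        return s;
    }
    int value;
private:
    Singleton() : value(0) {}              // only Instance() can construct one
    Singleton(const Singleton &);          // copying disabled (C++03 style)
    Singleton &operator=(const Singleton &);
};

// Every call to Instance() refers to the same object.
bool same_instance() {
    Singleton &a = Singleton::Instance();
    Singleton &b = Singleton::Instance();
    a.value = 42;
    return (&a == &b) && b.value == 42;
}
```

Because the copy constructor and assignment operator are declared private (and never defined), any accidental attempt to copy the singleton fails at compile or link time.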
737,653
737,656
What's the best technique for exiting from a constructor on an error condition in C++
What's the best technique for exiting from a constructor on an error condition in C++? In particular, this is an error opening a file. Thanks for the responses. I'm throwing an exception. Here's the code (don't know if it's the best way to do it, but it's simple) // Test to see if file is now open; die otherwise if ( !file.is_open() ) { cerr << "Failed to open file: " << m_filename << endl; throw ("Failed to open file"); } One thing I like about C++ is you don't have to declare thrown exceptions on the method declarations.
The best suggestion is probably what parashift says. But read my caution note below as well please. See parashift FAQ 17.2 [17.2] How can I handle a constructor that fails? Throw an exception. Constructors don't have a return type, so it's not possible to use return codes. The best way to signal constructor failure is therefore to throw an exception. If you don't have the option of using exceptions, the "least bad" work-around is to put the object into a "zombie" state by setting an internal status bit so the object acts sort of like it's dead even though it is technically still alive. The idea of a "zombie" object has a lot of down-side. You need to add a query ("inspector") member function to check this "zombie" bit so users of your class can find out if their object is truly alive, or if it's a zombie (i.e., a "living dead" object), and just about every place you construct one of your objects (including within a larger object or an array of objects) you need to check that status flag via an if statement. You'll also want to add an if to your other member functions: if the object is a zombie, do a no-op or perhaps something more obnoxious. In practice the "zombie" thing gets pretty ugly. Certainly you should prefer exceptions over zombie objects, but if you do not have the option of using exceptions, zombie objects might be the "least bad" alternative. A word of caution with throwing exceptions in a constructor: Be very careful though because if an exception is thrown in a constructor, the class's destructor is not called. So you need to be careful about destructing objects that you already constructed before the exception is thrown. The same warnings apply to exception handling in general, but it is maybe a little less obvious when dealing with a constructor. 
class B { public: B() { } virtual ~B() { //still called when D's constructor throws, because the B subobject was fully constructed } }; class D : public B { public: D() { p = new char[1024]; throw std::runtime_error("test"); } ~D() { delete[] p; //never called, so p causes a memory leak } char *p; }; int main(int argc, char **argv) { B *p; try { p = new D(); } catch(...) { } return 0; } Protected/Private constructors with CreateInstance method: Another way around this is to make your constructor private or protected and make a CreateInstance method that can return errors.
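The leak in the snippet above disappears if the raw array is replaced by a member that owns its own resource (RAII): members that were fully constructed before the constructor body throws are destroyed during unwinding. A minimal sketch of the same scenario using std::vector, with a small instrumented type so the cleanup is observable:

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

struct Tracker {             // counts live instances so the test can observe cleanup
    static int live;
    Tracker()  { ++live; }
    ~Tracker() { --live; }
};
int Tracker::live = 0;

class D {
public:
    D() : buffer(1024) {                   // buffer is fully constructed here
        throw std::runtime_error("test");  // buffer's destructor still runs
    }
private:
    std::vector<char> buffer;   // owns its memory; no manual delete[] needed
    Tracker t;                  // destroyed during stack unwinding
};

// Returns true iff the exception propagated and all members were cleaned up.
bool demo() {
    bool threw = false;
    try { D d; } catch (const std::runtime_error &) { threw = true; }
    return threw && Tracker::live == 0;
}
```

Note that D's own destructor is still never called, exactly as the answer warns, but that no longer matters because each member cleans up after itself.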
737,830
737,849
C++ functions accepting both pointers and references
I am writing a library in C++ and have some functions that work with modules. An example would look like this: void connect(Module *a, Module *b); The problem is that it would sometimes be handy if the function also accepted references (some of the Modules may be allocated on the stack and some on the heap, and all the &s and *s get boring and messy soon). Now I have an inline function that takes references, turns them into pointers and calls the original function. inline void connect(Module &a, Module &b){ connect(&a, &b); } I don't like this solution much, because for a few more functions it makes a lot of code to write, read, compile, ... Another thing I was thinking about is adding Module::operator Module *() that would just return this. What are your thoughts on this? Isn't there any potential for epic failure that I missed? Thanks.
Why not just call the function with connect(&a, &b); like in your inline function, whenever you have to call it with references? This makes it very clear that the function takes pointers, and that a and b are not pointers. You only have to type two more characters.
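The flip side works just as well: keep a single reference-taking connect and dereference once at the call site when the object lives on the heap. A quick sketch (Module here is a hypothetical stand-in with a link counter, not the asker's real class):

```cpp
#include <cassert>

struct Module { int links = 0; };  // illustrative stand-in for the library class

void connect(Module &a, Module &b) {   // one overload covers both cases
    ++a.links;
    ++b.links;
}

bool demo() {
    Module stackMod;                   // stack-allocated
    Module *heapMod = new Module;      // heap-allocated
    connect(stackMod, *heapMod);       // just dereference at the call site
    bool ok = (stackMod.links == 1) && (heapMod->links == 1);
    delete heapMod;
    return ok;
}
```

The reference signature also makes it impossible to pass a null module, which the pointer version would have to check for at runtime.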
737,996
738,024
Directory of running program on Linux?
Hey, I've been writing a program (a sort of e-Book viewing type thing) and it loads text files from a folder inside the folder where the executable is located. This gives me a bit of a problem, since if I run the program from another directory with the command "./folder/folder/program" for example, my program will not find the text, because the working directory isn't correct. I cannot hard-code an absolute path because I would like the program to be portable. Is there any way to get the precise directory that the executable is running from, even if it has been run from a different directory? I've heard you could combine argv[0] and getcwd(), but argv[0] isn't always reliable (I think?), so I would like to avoid that if possible. I'm on Linux using g++. Thanks in advance
EDIT - don't use getcwd(), it's just where the user is not where the executable is. See here for details. On linux /proc/<pid>/exe or /proc/self/exe should be a symbolic link to your executable. Like others, I think the more important question is "why do you need this?" It's not really UNIX form to use the executable path to find ancillary files. Instead you use an environment variable or a default location, or follow one of the other conventions for finding the location of ancillary files (ie, ~/.<myapp>rc).
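Reading the /proc/self/exe symlink mentioned above is a one-liner with readlink(2); the only trap is that readlink does not null-terminate the buffer. A minimal Linux-only sketch:

```cpp
#include <limits.h>
#include <string>
#include <unistd.h>

// Returns the absolute path of the running executable, or "" on failure.
// Linux-specific: relies on the /proc/self/exe symlink.
std::string exe_path() {
    char buf[PATH_MAX];
    ssize_t n = readlink("/proc/self/exe", buf, sizeof(buf) - 1);
    if (n < 0) return "";
    buf[n] = '\0';   // readlink(2) does not null-terminate
    return std::string(buf);
}
```

Stripping everything after the last '/' then gives the executable's directory, from which the data folder can be resolved, though as noted, an environment variable or a conventional install location is usually the more UNIX-like design.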
738,204
738,214
Why might my virtual function call be failing?
Update: This issue is caused by bad memory usage, see solution at the bottom. Here's some semi-pseudo code: class ClassA { public: virtual void VirtualFunction(); void SomeFunction(); }; class ClassB : public ClassA { public: void VirtualFunction(); }; void ClassA::VirtualFunction() { // Intentionally empty (code smell?). } void ClassA::SomeFunction() { VirtualFunction(); } void ClassB::VirtualFunction() { // I'd like this to be called from ClassA::SomeFunction() std::cout << "Hello world!" << endl; } The C# equivalent is as follows: Removed C# example, as it's not relevant to the actual problem. Why isn't the ClassB::VirtualFunction function being called when called from ClassA::SomeFunction? Instead ClassA::VirtualFunction is being called... When I force implementation of the virtual function ClassA::VirtualFunction, like so: class ClassA { public: virtual void VirtualFunction() = 0; void SomeFunction(); }; class ClassB : public ClassA { public: void VirtualFunction(); }; void ClassA::SomeFunction() { VirtualFunction(); } void ClassB::VirtualFunction() { // I'd like this to be called from ClassA::SomeFunction() std::cout << "Hello world!" << endl; } The following error occurs at runtime, despite the derived function definitely being declared and defined. pure virtual method called terminate called without an active exception Note: It seems like the error can be caused even by bad memory usage. See self-answer for details. Update 1 - 4: Comments removed (not relevant). Solution: Posted as an answer.
class Base { public: virtual void f() { std::cout << "Base" << std::endl; } void call() { f(); } }; class Derived : public Base { public: virtual void f() { std::cout << "Derived" << std::endl; } }; int main() { Derived d; Base& b = d; b.call(); // prints Derived } If in the Base class you do not want to implement the function you must declare so: class Base { public: virtual void f() = 0; // pure virtual method void call() { f(); } }; And the compiler won't allow you to instantiate the class: int main() { //Base b; // error b has a pure virtual method Derived d; // derive provides the implementation: ok Base & b=d; // ok, the object is Derived, the reference is Base b.call(); } As a side note, be careful not to call virtual functions from constructors or destructors as you might get unexpected results.
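The closing caveat about constructors deserves its own demonstration: while a base-class constructor runs, the object's dynamic type is still the base class, so a virtual call made there dispatches to the base version, not the derived override. A small runnable sketch (not the poster's code):

```cpp
#include <cassert>
#include <string>

struct Base {
    std::string constructed_as;
    Base() { constructed_as = name(); }  // virtual call during construction
    virtual ~Base() {}
    virtual std::string name() const { return "Base"; }
};

struct Derived : Base {
    virtual std::string name() const { return "Derived"; }
};

// While Base's constructor runs the object is still a Base,
// so Base::name() is chosen, not Derived::name().
std::string observed() {
    Derived d;
    return d.constructed_as;
}
```

After construction finishes, normal virtual dispatch resumes: calling name() on the finished Derived object returns "Derived", but the record made inside the constructor says "Base". Had Base::name() been pure virtual, the same call would have produced the "pure virtual method called" crash from the question.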
738,606
742,372
A C++ Application Compiled With VS2008 Doesn't Run In Other Computer
I have created a win32 project with Visual Studio 2008 and the Visual C++ language; it uses the ws2_32.lib library, and I compiled it in Release mode. It runs fine on the same computer, but when I copy the exe file to another computer (that doesn't have Visual Studio installed), it doesn't run. The message I see is: This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem. But, if I compile my application using DEV C++, it generates a bigger executable (738KB) compared with the Visual Studio 2008 executable (9.5 KB). However, the DEV C++ executable runs on the other computer. I have added the library ws2_32.lib to the linker properties of my project, in the Additional Dependencies field. How can I fix it to work with Visual Studio 2008? My code is the following: http://www.stan.com.mx/yupi/udpserver.cpp
I agree with JaredPar. The application you build with VS2008 is using dynamic linking, whereas DEV C++ is linking statically, hence the larger size and why one works and not the other. However, if it's a plain win32 application project you've got (and you don't want/need to distribute it with a setup), you may be able to get it to run on another machine without redistributing the CRT by getting VS2008 to link statically for you (if it's just the standard lib you're missing). I don't have a copy of VS2008 to hand, so I'll describe how to do it in VS2005 and hopefully it'll translate across. Bring up configuration properties for the project (right click the project name, then select "properties" from the menu) Expand "Configuration Properties", then "C/C++", then click "Code Generation" Under the "Runtime Library" item, for your particular configuration select the non-DLL version of the library i.e. for debug builds you want "Multi-threaded Debug (/MTd)" and for release builds you want "Multi-threaded (/MT)" Try that and see if it works. You'll obviously get a much bigger final binary now the library is statically linked.
738,933
739,121
Multiple Singleton Instances
I am writing a library of utility classes, many of which are singletons. I have implemented them as such using inheritance: template <class T> class Singleton { public: static T& getInstance() { if(m_instance == 0) { m_instance = new T; } return *m_instance; } private: static T* m_instance; }; class SomeClass : public Singleton<SomeClass> { public: SomeClass() {} virtual ~SomeClass() {} void doSomething() {;} }; Obviously this is a simple example, not an actual class. Anyway, I am finding that using code such as: SomeClass::getInstance().doSomething(); will create more than one instance of SomeClass. I am thinking this may be due to the fact that it is being used outside my library (.a) file as well as internally. For example, I am using a UI library not written by myself which is separately compiled and to which I am making additions. Some of these additions utilize singletons which are also being used in my .a library. Is the separate compilation causing this? Something else? The only way I have managed to get around the issue is to create a global object in my main.cpp file which I initialize with any singletons I will need. Then all code accesses this common global object with calls such as: GlobalObject::getSomeClass().doSomething() I hate having to add an additional method to this object every time I create another singleton. Plus the syntax seems clearer and more familiar using the first access method: SomeClass::getInstance().doSomething(); Please let me know if you have any thoughts, opinions, etc. Thanks.
Your problem is that your template is going to be instantiated in more than one compilation unit as it is completely inline. Therefore in every compilation unit that uses the template you will end up creating one singleton (per compilation unit). What you would need is to force global linkage, so that all compilation units reference the same template instantiation. The upcoming C++ standard will support this via extern template. What you can do now is to disable automatic instantiation in your project and manually instantiate the templates that you use explicitly. This way when you use the template in any compilation unit you will generate an unknown reference to the implementation which can then be satisfied by the linker from the (one) compilation unit where you do the explicit instantiation.
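In outline, the fix looks like the sketch below. A single translation unit cannot actually demonstrate the cross-TU duplication, so the extern template declaration that would go in the header is shown as a comment; the sketch also uses a function-local static rather than the question's heap pointer, purely to keep it self-contained:

```cpp
#include <cassert>

template <class T>
class Singleton {
public:
    static T &getInstance() {
        static T instance;   // one instance per template instantiation
        return instance;
    }
};

class SomeClass { public: int n = 0; };

// In exactly one .cpp of the library: force the instantiation here...
template class Singleton<SomeClass>;
// ...and in the header, tell every other translation unit NOT to
// instantiate it again (now-standard C++11 syntax):
//   extern template class Singleton<SomeClass>;

bool demo() {
    Singleton<SomeClass>::getInstance().n = 7;
    return Singleton<SomeClass>::getInstance().n == 7;
}
```

With the extern template declaration visible everywhere and exactly one explicit instantiation definition, the linker resolves all uses against that single instantiation, so only one static instance can exist.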
738,952
738,972
const float & x = something; // considered harmful?
There was some code like this: // Convenience to make things more legible in the following code const float & x = some.buried.variable.elsewhere; // Go on to use x in calculations... I have been told that the "const float &" is "bad" and should just be a plain float or const float. I, however, could not think of a compelling reason other than "you don't have to type '&'". In fact, it seems to me that in some cases the original could be better, since compiler might not allocate extra stack space to the variable. In other words, originally I could validly say: assert(&x == &some.buried.variable.elsewhere) Whereas in the second case I cannot. Also, the original seems to communicate intent better, in my view, since the whole point of a reference is to make an alias to another value. Can anyone give me examples of where the "const float &" version is worse than a plain "float" or "const float" in some tangible way?
I can't think of a reason why const float & would be better than const float. References make sense if you're either worried about copies being made (which is irrelevant with a primitive type like float) or you want to be able to update a value across all instances that share the reference (which is irrelevant with const members). On top of that, references in members are a huge pain in the neck* when it comes to initialization, and so they'd have to offer a significant advantage over the alternatives in order to be useful, and that's clearly not the case with const float. * The FQA on references is always amusing and thought provoking
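The semantic difference the two sides are debating is easy to make concrete: a const float & stays an alias of the buried variable and sees later updates, while a plain const float is a snapshot taken at initialization. A small sketch (the nested structs are illustrative, not from the original code):

```cpp
#include <cassert>

struct Deep   { float value; };   // stands in for some.buried.variable.elsewhere
struct Holder { Deep deep; };

// A const float & does not copy: it tracks the original variable.
float tracked() {
    Holder h; h.deep.value = 1.0f;
    const float &x = h.deep.value;   // alias, not a snapshot
    h.deep.value = 2.5f;             // later update is visible through x
    return x;
}

// A plain const float copies the value at initialization time.
float snapshot() {
    Holder h; h.deep.value = 1.0f;
    const float x = h.deep.value;
    h.deep.value = 9.0f;
    return x;                        // still the old value
}
```

So the reference version really is an alias, which is exactly why it also inherits the alias's danger: if the buried variable goes away first, the reference dangles.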
739,038
739,048
Why do priority queues mostly use 0 as the most important priority?
Why are most priority/heap queues implemented as 0 being the highest priority? I'm assuming I'm missing out some key mathematical principle. As I was implementing my own priority queue recently it seemed easier to write the insert function if priority went up with the integer value, but apparently people smarter than me think it should go the other way. Any ideas?
Most priority queues are implemented as a binary heap, Fibonacci heap, or something similar. Those data structures support finding the minimum in constant time and extracting it cheaply, which makes it natural to make 0 the highest priority, and take elements out of the queue by extracting the minimum.
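The standard library reflects the same convention either way: std::priority_queue is a max-heap by default, and supplying std::greater as the comparator flips it into a min-heap, so priority 0 comes out first:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

// A min-heap: the smallest value (highest priority, here 0) is at the top.
int most_urgent() {
    std::priority_queue<int, std::vector<int>, std::greater<int> > q;
    q.push(3);
    q.push(0);
    q.push(5);
    return q.top();   // smallest value == most important priority
}
```

Dropping the third template argument gives the default max-heap, where top() would be 5 instead; the choice of direction is one comparator away, so 0-is-highest is convention rather than necessity.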
739,095
739,115
Win32 LB_GETTEXT returns garbage
I have a problem which is most likely a simple problem, but nevertheless still a problem for me. I am using the Listbox in Win32 / C++, and when getting the selected text from my listbox the string returned is just garbage. Is it a handle to a struct or similar? Below is the code and an example of what I get. std::string Listbox::GetSelected() { int index = -1; int count = 0; count = SendMessage(control, LB_GETSELCOUNT, 0, 0); if(count > 0) { index = SendMessage(control, LB_GETSEL, 0, 0); } return GetString(index); } std::string Listbox::GetString(int index) { int count = 0; int length = 0; char * text; if(index >= 0) { count = GetItemCount(); if(index < count) { length = SendMessage(control, LB_GETTEXTLEN, (WPARAM)index, 0); text = new char[length + 1]; SendMessage(control, LB_GETTEXT, (WPARAM)index, (LPARAM)text); } } std::string s(text); delete[] text; return s; } GetItemCount just does that. It just gets the number of items currently in the listbox. The string I was grabbing from the Listbox is "Test String" and it returned ¨±é» Tzã Any help is appreciated, thanks. Ok, I narrowed it down to my GetSelected function, as GetString returns the correct string.
The LB_GETSEL message does not return the index of a selected item, it returns the selected STATE of the ITEM you pass in WPARAM. You also have a serious bug where if no items are selected you will attempt to retrieve the string of the item at index -1, which is clearly wrong. Checking the return values of these SendMessage calls would have helped you diagnose the problem. Here's an example of how to get the text of the first selected item; // get the number of items in the box. count = SendMessage(control, LB_GETCOUNT, 0, 0); int iSelected = -1; // go through the items and find the first selected one for (int i = 0; i < count; i++) { // check if this item is selected or not.. if (SendMessage(control, LB_GETSEL, i, 0) > 0) { // yes, we only want the first selected so break. iSelected = i; break; } } // get the text of the selected item if (iSelected != -1) SendMessage(control, LB_GETTEXT, (WPARAM)iSelected , (LPARAM)text); Alternatively you can use LB_GETSELITEMS to get a list of the items that are selected.
739,398
739,412
Transiting from COBOL to C++
I am a fairly junior programmer and have the task of interviewing an experienced mainframe COBOL programmer for a position doing C++ development for Windows Mobile. By experienced I mean that the candidate's entire programming career has been COBOL. I am not biased against any particular language, just slightly concerned about my ability to do a thorough interview. Inside of an interview how can I, with no COBOL experience, get a good feel for the abilities of this interviewee? Are there any specific things I should ask or look for? I have already read up on a lot of general interview stuff on SO, what would be helpful are any specific things I look for given the nature of the candidate's background.
The best thing is to give him a task similar to what he will have to do at your company. Tell him he can use pseudo code (so no Internet access is needed). Also, the main problem COBOL people have is grasping OO (since COBOL is mostly procedural...I am aware of the new OO versions). One more pitfall COBOL people have is grasping the scope concept, as in traditional COBOL each variable lives from the start to the end of the program it is in.
739,439
739,447
Is the PS3's Cell architecture the wrong platform to be learning game programming?
I have an opportunity to attend Sony licensed training classes to learn about programming with the PS3's cell architecture. However I only have a rudimentary knowledge of C++ and was wondering if the PS3 is a bit of an overkill for a starter aspiring game dev like me. And also what is the best resources to get me to a decent level in C++ graphics programming in about 2 months time?
The problem isn't so much that a PS3 is overkill, it's that the Cell processor is notoriously difficult to program to its potential. The highly parallelized architecture is potentially quite powerful, but it's not easy to actually get that performance.
739,539
741,234
Continuous Integration: Unmanaged C++ on Visual Studio 2008
I've spent 4 years developing C++ using Visual Studio 2008 for a commercial company; it's now time for me to upgrade my development process. Here's the problem: I don't have 1-button build automation. I also don't have a CI server that automatically builds when a commit happens and emails me whether a build is broken or not. Worse, we don't even have a single unit test!! Can someone please point me to how I can get started? I have looked at many, many tools and I think I might go with: Visual Build (for build automation) (Note: I also considered Final Builder) Cruise (for CI server) I am also just starting to practice TDD...so I will want to automate my unit tests as well. I chose Google Test/Mock for their extensive documentation. (Can't go wrong with the Google brand, can I? =p) Price is not the issue; I want what's best and easiest to get started. Can people that use a real CI/automation tool for unmanaged MSVC++ tell me their tools and how I can go about starting? Our source control is Subversion. Last point: I'm also considering a project management/tracking tool that integrates right into VSTD ..and thinking about using OnTime. VSTS costs too much. I tried FogBugz, but I think it's too simple. Any others?
I would take some time to seriously consider TeamCity. We used CruiseControl.NET for a while and TeamCity completely demolishes it. Plus it has built-in plugins for Boost and CppUnit, so your unit testing will come for free. Best of all, the tool is free for < 20 users and gives you three build agents. I just finished implementing our C++ product at work and it was fairly simple. We did it with msbuild and basically use the msbuild task to compile the solution. Other targets can be used to copy files, run unit tests, etc.
739,594
1,546,689
Static-code analyzer: unmanaged C++ Visual Studio 2008
I develop a commercial unmanaged C++ app on Visual Studio 2008, and I want to add a static-code analysis tool. Any recommendations? I think it would be really nice if the tool could be integrated into MSVC. I'm thinking about PC-Lint + Visual Lint. However, I have been taking a hard look at Coverity, Understand, and Klocwork as well. Price isn't really the issue. I want opinions from people who actually used the tool for unmanaged C++ on MSVC and just absolutely loved it. Lastly, VSTS and Intel Parallel Studio now also offer static code analysis. Nice~ Note: a related post suggests Coverity is the best (?) (see last 2 posts)
I work for RedLizard building Goanna, a C++ static analysis plugin for Visual Studio. Its focus is on desktop use by a programmer. You can run it on individual files, just as you do the compiler, and it can give you results quickly. There is a trial available. Right-click a file, select Run Goanna, and the results appear in the Visual Studio warnings list.
739,830
739,852
library for server side (c/c++) xmlrpc
I want to implement support for the XMLRPC protocol for my server, which is written in C and C++, and am now looking for the most widely adopted xmlrpc library. License is not an issue; GPL would be fine. What would you suggest? Is there any de facto standard xmlrpc C library for such a purpose?
The de facto standard would imo be this one: http://xmlrpc-c.sourceforge.net/ and it supports both C and C++ and it even has its own embedded http daemon for servicing the http requests ... edit: and it's available under a BSD-style license, so it allows you to boldly go where GPL didn't allow you to go before ;)
740,026
757,752
Boost Deserialization Optimizations?
I'm deserializing a fair amount of data through Boost.Serialization (one for each frame). However, when I output how long the deserialization takes, it varies wildly. It is not unusably slow at the moment, but it would be nice to make it faster. The data represents the same classes, arrays, maps and vectors but merely with different values. Looking at the memory spiking as each deserialization takes place, I have to believe there's a better way of doing this than continually allocating and deallocating all this memory. Here's a few of the read times as an example: Deserialization - 390 milliseconds Deserialization - 422 milliseconds Deserialization - 422 milliseconds Deserialization - 422 milliseconds Deserialization - 438 milliseconds Deserialization - 2156 milliseconds Deserialization - 1797 milliseconds Deserialization - 1656 milliseconds Deserialization - 1328 milliseconds Deserialization - 1219 milliseconds Deserialization - 1078 milliseconds Deserialization - 1078 milliseconds Is there a way of writing a custom deserialization function for the same data that uses Boost.Serialization so I can specify to allocate the memory at the beginning, and then just change their values for each frame? Update: I realised a minor issue with the optimization flags I was using was causing the serialization data to be written incorrectly, which resulted in the inconsistency of deserialization times. After fixing this, it is now consistently at 750 - 780 milliseconds each frame. However, my original question still stands, as currently I am serializing and deserializing an entire stl container, when I really want to only serialize the contents (as the size and indexing of the container will remain exactly the same). I'm not sure of the best way to go about doing this though.
You might want to reconsider the design of the feature using this serialization. From your description it seems as though you are serializing/deserializing an entire STL container very frequently. This shouldn't be required. Serialization shouldn't be used unless the data needs to be persisted so that it can be re-built later or by someone else. If serialization is required for you application you might consider serializing each item in the container separately and then only re-serializing when an item changes. This way you won't be re-doing all of the work un-necessarily.
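The "allocate once, only re-read values each frame" idea the asker is after can be sketched without Boost at all (Boost is omitted here so the example stays self-contained; the raw memcpy stands in for whatever serialization layer is actually used, and only works for flat types like float):

```cpp
#include <cassert>
#include <cstring>
#include <string>
#include <vector>

// Serialize only the contents; the container's size is fixed and known.
void write_frame(std::string &out, const std::vector<float> &values) {
    out.assign(reinterpret_cast<const char *>(&values[0]),
               values.size() * sizeof(float));
}

// Deserialize into a container that was allocated up front:
// no per-frame allocation or resize happens here.
void read_frame(const std::string &in, std::vector<float> &values) {
    std::memcpy(&values[0], in.data(), values.size() * sizeof(float));
}

bool demo() {
    std::vector<float> frame(4, 0.0f);     // allocated once, reused every frame
    std::vector<float> source(4, 1.5f);
    std::string buffer;
    write_frame(buffer, source);
    read_frame(buffer, frame);             // values overwritten in place
    return frame[0] == 1.5f && frame[3] == 1.5f;
}
```

With Boost.Serialization the same shape is achievable by serializing the elements individually into a pre-sized container instead of archiving the whole container object, which is where the repeated allocate/deallocate churn comes from.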
740,030
740,038
How to release the unused capacity of a string
I am dealing with a lot of strings in my program. This string data doesn't change throughout the program's whole lifetime after being read into my program. But since the C++ string reserves capacity, the strings waste a lot of space that will certainly never be used. I tried to release those spaces, but it didn't work. The following is the simple code that I tried: string temp = "1234567890123456"; string str; cout << str.capacity() << endl; str.reserve(16); cout << str.capacity() << endl; // capacity is 31 on my computer str += temp; cout << str.capacity() << endl; str.reserve(16); cout << str.capacity() << endl; // can't release. The capacity is still 31. (The compiler is Visual C++) How could I release it?
When you call reserve, you're making a request to change the capacity. Implementations will only guarantee that a number equal to or greater than this amount is reserved. Therefore, a request to shrink capacity may be safely ignored by a particular implementation. However, I encourage you to consider whether this isn't premature optimization. Are you sure that you're really making so many strings that it's a memory bottleneck for you? Are you sure that it's actually memory that's the bottleneck? From the documentation for reserve: This can expand or shrink the size of the storage space in the string, although notice that the resulting capacity after a call to this function is not necessarily equal to res_arg but can be either equal or greater than res_arg, therefore shrinking requests may or may not produce an actual reduction of the allocated space in a particular library implementation. In any case, it never trims the string content (for that purposes, see resize or clear, which modify the content).
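Since a shrink request via reserve may be ignored, the classic C++03 workaround is the swap trick: copy-construct a right-sized temporary and swap it with the bloated string (C++11 later standardized this as shrink_to_fit, which is also non-binding). A sketch:

```cpp
#include <cassert>
#include <string>

// Swap the string with a freshly built, right-sized copy of itself.
void shrink(std::string &s) {
    std::string(s).swap(s);
}

bool demo() {
    std::string s = "1234567890123456";
    s.reserve(1000);                          // capacity is now >= 1000
    std::string::size_type before = s.capacity();
    shrink(s);
    // Not guaranteed by the standard, but in practice the temporary is
    // allocated to fit, so the excess capacity is released.
    return s == "1234567890123456" && s.capacity() <= before;
}
```

The temporary's destructor then frees the old oversized buffer, which is exactly the release the asker was trying to get from reserve(16).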
740,131
740,147
C/C++ codehighlighter in visual studio 2005
I just started using VS2005 and I wish to have code highlighting in C/C++. The VS menu Tools->Options->TextEditor->C/C++ is very poor. I come from PHP, and there the IDEs are very friendly when it comes to highlighting. I didn't expect Visual Studio to be so poor at this kind of option. Can you recommend me a free tool/plugin/lib in order to have code highlighting for C/C++?
It's not free, but Visual Assist X has some really nice highlighting. Another (suboptimal for you) solution is to switch editors. Emacs is infinitely configurable. I believe that Eclipse CDT and NetBeans have better syntax highlighting.
740,169
740,192
lib to read a DVD FS (data disc)
I am thinking I might want to port a lib to read a DVD filesystem. I am not talking about movies but data discs. There's existing code for me to do raw reads from the disc. I need code that requests this data and allows me to browse files on the disc. What lib can I use for this? -edit- NOTE: I am using OS-less hardware. People seem to miss that, but Alnitak caught it and gave me a great answer :)
You need libudf which is part of GNU libcdio.
740,228
740,245
Define smallest possible datatype in c++ that can hold six values
I want to define my own datatype that can hold a single one of six possible values in order to learn more about memory management in C++. In numbers, I want to be able to hold 0 through 5. In binary, three bits would suffice (101=5), although some values (6 and 7) won't be used. The datatype should also consume as little memory as possible. I'm not sure how to accomplish this. First, I tried an enum with defined values for all the fields. As far as I know, the values are in hex there, so one "hexbit" should allow me to store 0 through 15. But comparing it to a char (with sizeof) it stated that it's 4 times the size of a char, and a char holds 0 through 255 if I'm not mistaken. #include <iostream> enum Foo { a = 0x0, b = 0x1, c = 0x2, d = 0x3, e = 0x4, f = 0x5, }; int main() { Foo myfoo = a; char mychar = 'a'; std::cout << sizeof(myfoo); // prints 4 std::cout << sizeof(mychar); // prints 1 return 1; } I've clearly misunderstood something, but fail to see what, so I turn to SO. :) Also, when writing this post I realised that I clearly lack some parts of the vocabulary. I've made this post a community wiki; please edit it so I can learn the correct words for everything.
A char is the smallest possible type. If you happen to know that you need several such 3-bit values in a single place you can use a structure with bitfield syntax: struct foo { unsigned int val1:3; unsigned int val2:3; }; and hence get two of them within one byte. In theory you could pack 10 such fields into a 32-bit "int" value.
740,263
740,272
C++ dll in C program
I'd like to create a dll library from C++ code and use it in C program. I'd like to export only one function: GLboolean load_obj (const char *filename, GLuint &object_list); Header file from library: #ifndef __OBJ__H__ #define __OBJ__H__ #include <windows.h> #include <GL/gl.h> #include <GL/glext.h> #include <GL/glu.h> #include <GL/glut.h> #if defined DLL_EXPORT #define DECLDIR __declspec(dllexport) #else #define DECLDIR __declspec(dllimport) #endif extern "C" GLboolean load_obj (const char *filename, GLuint &object_list); #endif // __3DS__H__ in .cpp (in library project) function is also declared as: extern "C" GLboolean load_obj (const char *filename, GLuint &object_list) { code... } File .lib is added in VS project options (Linker/Input/Additional dependencies). .dll is in folder where .exe is. When I compile C project - error: Error 1 error C2059: syntax error : 'string' It is about part "extern "C" " in header file. I've tried to change header file to: extern GLboolean load_obj (const char *filename, GLuint &object_list); then Error 1 error C2143: syntax error : missing ')' before '&' Error 2 error C2143: syntax error : missing '{' before '&' Error 3 error C2059: syntax error : '&' Error 4 error C2059: syntax error : ')' and even when I changed & to * appeared: Error 6 error LNK2019: unresolved external symbol _load_obj referenced in function _main main.obj I've no idea why it is wrong. .lib .h and .dll are properly added.
The parameter "GLuint &object_list" means "pass a reference to an GLuint here". C doesn't have references. Use a pointer instead. // declaration extern "C" GLboolean load_obj (const char *filename, GLuint *object_list); // definition GLboolean load_obj (const char *filename, GLuint *object_list) { code... }
740,423
740,486
Qt class for handling file paths
Possible Duplicate: Qt equivalent of PathAppend? Is there a class that handles file paths in Qt? Particularly I'm looking for something like .NET's Path.Combine. I know there's one in boost::filesystem but I was wondering if there's one in Qt.
There is QDir which might be of help (see QDir::relativeFilePath and QDir::canonicalPath and others). Quoting from QDir doc: A QDir is used to manipulate path names, access information regarding paths and files, and manipulate the underlying file system.
740,435
740,444
How to convert this VC++ 6 code to VC++ 2008?
Forgive me my C++ is incredibly rusty. But I am trying to take some old code and recompile it under Visual C++ 2008. It was originally written for Visual C++ 6.0 The error I am getting is this: error C4430: missing type specifier - int assumed. Note: C++ does not support default-int Ok seems simple enough. But then I look at the offending line of code: operator=(int i) {SetAsInt(i);}; And it appears the type IS declared. So what am I missing? FOLLOW UP: I took Micheals advice and added a return type of the function (the class), and added return this; to the end of each. Then I ran across this: operator=(const CString& str); There is no function body defined... what exactly does this mean?
You need to have the operator=() method return something (it would assume int if the diagnostic weren't an error, as the error message somewhat confusingly indicates). Generally it would be a reference to the object the operator is working on so the assignments can be chained as in a normal assignment expression. Something like: // where T is the class for this operator= implementation T& operator=( int i) { // ... return *this; }
740,577
740,686
sizeof a union in C/C++
What is the sizeof the union in C/C++? Is it the sizeof the largest datatype inside it? If so, how does the compiler calculate how to move the stack pointer if one of the smaller datatype of the union is active?
The Standard answers all questions in section 9.5 of the C++ standard, or section 6.5.2.3 paragraph 5 of the C99 standard (or paragraph 6 of the C11 standard, or section 6.7.2.1 paragraph 16 of the C18 standard): In a union, at most one of the data members can be active at any time, that is, the value of at most one of the data members can be stored in a union at any time. [Note: one special guarantee is made in order to simplify the use of unions: If a POD-union contains several POD-structs that share a common initial sequence (9.2), and if an object of this POD-union type contains one of the POD-structs, it is permitted to inspect the common initial sequence of any of POD-struct members; see 9.2. ] The size of a union is sufficient to contain the largest of its data members. Each data member is allocated as if it were the sole member of a struct. That means each member shares the same memory region. There is at most one member active, but you can't find out which one. You will have to store that information about the currently active member yourself somewhere else. Storing such a flag in addition to the union (for example having a struct with an integer as the type-flag and a union as the data-store) will give you a so-called "discriminated union": a union which knows what type in it is currently the "active one". One common use is in lexers, where you can have different tokens, but depending on the token, you have different information to store (putting line into each struct to show what a common initial sequence is): struct tokeni { int token; /* type tag */ union { struct { int line; } noVal; struct { int line; int val; } intVal; struct { int line; struct string val; } stringVal; } data; }; The Standard allows you to access line of each member, because that's the common initial sequence of each one. There exist compiler extensions that allow accessing all members regardless of which one currently has its value stored.
That allows efficient reinterpretation of stored bits with different types among each of the members. For example, the following may be used to dissect a float variable into 2 unsigned shorts: union float_cast { unsigned short s[2]; float f; }; That can come quite handy when writing low-level code. If the compiler does not support that extension, but you do it anyway, you write code whose results are not defined. So be certain your compiler has support for it if you use that trick.
740,700
740,714
ABC Virtual OStream Insertion Operator
Consider the following snippet: struct ObjectInterface { virtual ~ObjectInterface() {} virtual void Print(std::ostream& target) const = 0; }; struct Foo : ObjectInterface { virtual void Print(std::ostream& target) const { target << "Foo"; } }; struct Bar : ObjectInterface { virtual void Print(std::ostream& target) const { target << "Bar"; } }; Is there any way to change Print in ObjectInterface to the standard "std::ostream& operator<<"-type of output? I can't make it work. EDIT: I'm basically trying to figure out if I can make friend work with virtual.
You need a free function: ostream & operator << ( ostream & os, const ObjectInterface & oi ) { oi.Print( os ); return os; }
740,836
741,373
Compiling C++ Programs with Emacs on Windows
I've been using Emacs for quite some time for basic text editing but as of today I am attempting to use it for c++ compilation. I have looked for the past few hours about how to go about this but I keep hitting roadblocks in their techniques (I think some of this is having to do with the tutorials being outdated). Basically, all I want to do is be able to compile C++ programs that I write in Emacs through the 'M-x compile' command. So far I have installed Cygwin and downloaded the packages for gcc. I have attempted some other solutions presented by tutorials online but they didn't pan out. Thank you.
The M-x compile command calls out to a shell (e.g. linux bash, windows cmd.exe, etc) to run the make command. On windows I think emacs defaults to the cmd.exe shell (through a special C:/Program Files/Emacs/emacs/bin/cmdproxy.exe executable). If you want your M-x compile to use a different shell (probably cygwin bash in your case) then you need to tell emacs through changing shell-file-name variable or using the SHELL environment variable. You will also need to make sure that the cygwin make is found by changing exec-path variable (or using PATH environment variable). To do this: (setq shell-file-name "C:/cygwin/bin/bash.exe") (setq exec-path (cons "C:/cygwin/bin" exec-path)) And you could also look at setup-cygwin.el to set this up and some other things for cygwin.
741,000
744,595
Help storing an intrusive_ptr of a template class in a std::map
I have a small template class of type Locker contained within a boost::intrusive_ptr that I want to store inside a std::map: template <typename T> bool LockerManager<T>:: AddData(const std::string& id, T* pData) { boost::intrusive_ptr<Locker<T> > lPtr(Locker<T>(pData)); // Line 359 - compiles mMap.insert(make_pair(id, lPtr)); // Line 361 - gives error } Locker is just a container class; its constructor looks like: template <typename T> Locker<T>:: Locker(T* pData) : IntrusivePtrCountable(), mpData(pData), mThreadId(0), mDataRefCount(0) {} In my test of this class, I am trying to do the following: class Clayton { public: static int count; Clayton() { mNumber = count++;} void GetNumber() { cerr<<"My number is: "<<mNumber<<endl; } private: int mNumber; }; int Clayton::count = 0; class ClaytonManager { public: bool AddData(const std::string& id, Clayton* pData) { return mManager.AddData(id, pData); } private: LockerManager<Clayton> mManager; }; I get the following compile error: Compiling LockerManagerTest.cpp : /usr/local/lib/gcc/i686-pc-linux-gnu/3.4.6/../../../../include/c++/3.4.6/bits/stl_pair.h: In constructor `std::pair<_T1, _T2>::pair(const std::pair<_U1, _U2>&) [with _U1 = std::basic_string<char, std::char_traits<char>, std::allocator<char> >, _U2 = boost::intrusive_ptr<Locker<Clayton> > (*)(Locker<Clayton>), _T1 = const std::basic_string<char, std::char_traits<char>, std::allocator<char> >, _T2 = boost::intrusive_ptr<Locker<Clayton> >]': ../Utilities/include/LockerManager.h:361: instantiated from `bool LockerManager<T>::AddData(const std::string&, T*) [with T = Clayton]' src/LockerManagerTest.cpp:35: instantiated from here /usr/local/lib/gcc/i686-pc-linux-gnu/3.4.6/../../../../include/c++/3.4.6/bits/stl_pair.h:90: error: no matching function for call to `boost::intrusive_ptr<Locker<Clayton> >::intrusive_ptr(boost::intrusive_ptr<Locker<Clayton> > (* const&)(Locker<Clayton>))' /usr/local/boost-1.36.0/include/boost-1_36/boost/intrusive_ptr.hpp:94: note: 
candidates are: boost::intrusive_ptr<T>::intrusive_ptr(const boost::intrusive_ptr<T>&) [with T = Locker<Clayton>] /usr/local/boost-1.36.0/include/boost-1_36/boost/intrusive_ptr.hpp:70: note: boost::intrusive_ptr<T>::intrusive_ptr(T*, bool) [with T = Locker<Clayton>] /usr/local/boost-1.36.0/include/boost-1_36/boost/intrusive_ptr.hpp:66: note: boost::intrusive_ptr<T>::intrusive_ptr() [with T = Locker<Clayton>] Command exited with non-zero status 1 0:05.40 Please help
Actually, intrusive_ptr already has a < operator and a copy constructor defined, so that wasn't the problem. There were two main things that we were missing. First, we needed to use value_type, instead of make_pair, to avoid implicit type conversion in the insert statement. Second, we missed the fact that the intrusive_ptr constructor takes a pointer to the type that it is templated around. So, the final, working method looks like this: // --------------------------------------------------------------------------- // LockerManager::AddData // --------------------------------------------------------------------------- template <typename T> bool LockerManager<T>:: AddData(const std::string& id, T* pData) { Lock<MutualExclusion> lLock(mMutex); if ((pData == NULL) || (mMap.find(id) != mMap.end())) return false; mMap.insert(typename std::map<std::string, boost::intrusive_ptr<Locker<T> > >::value_type(id, new Locker<T>(pData))); return true; } // LockerManager::AddData
741,054
741,085
Mapping between stl C++ and C# containers
Can someone point out a good mapping between the usual C++ STL containers such as vector, list, map, set, multimap... and the C# generic containers? I'm used to the former ones and somehow I've accustomed myself to express algorithms in terms of those containers. I'm having some hard time finding the C# equivalent to those. Thank you!
Here's a rough equivalence: Dictionary<K,V> <=> unordered_map<K,V> HashSet<T> <=> unordered_set<T> List<T> <=> vector<T> LinkedList<T> <=> list<T> The .NET BCL (base class library) does not have red-black trees (stl map) or priority queues (make_heap(), push_heap(), pop_heap()). .NET collections don't use "iterators" the way C++ does. They all implement IEnumerable<T>, and can be iterated over using the "foreach statement". If you want to manually control iteration you can call "GetEnumerator()" on the collection, which will return an IEnumerator<T> object. IEnumerator<T>.MoveNext() is roughly equivalent to "++" on a C++ iterator, and "Current" is roughly equivalent to the pointer-dereference operator ("*"). C# does have a language feature called "iterators". They are not the same as "iterator objects" in the STL, however. Instead, they are a language feature that allows for automatic implementation of IEnumerable<T>. See documentation for the yield return and yield break statements for more information.
741,301
741,371
How can I add and subtract 128 bit integers in C or C++ if my compiler does not support them?
I'm writing a compressor for a long stream of 128 bit numbers. I would like to store the numbers as differences -- storing only the difference between the numbers rather than the numbers themselves because I can pack the differences in fewer bytes because they are smaller. However, for compression then I need to subtract these 128 bit values, and for decompression I need to add these values. Maximum integer size for my compiler is 64 bits wide. Anyone have any ideas for doing this efficiently?
If all you need is addition and subtraction, and you already have your 128-bit values in binary form, a library might be handy but isn't strictly necessary. This math is trivial to do yourself. I don't know what your compiler uses for 64-bit types, so I'll use INT64 and UINT64 for signed and unsigned 64-bit integer quantities. class Int128 { public: ... Int128 operator+(const Int128 & rhs) { Int128 sum; sum.high = high + rhs.high; sum.low = low + rhs.low; // check for overflow of low 64 bits, add carry to high if (sum.low < low) ++sum.high; return sum; } Int128 operator-(const Int128 & rhs) { Int128 difference; difference.high = high - rhs.high; difference.low = low - rhs.low; // check for underflow of low 64 bits, subtract carry to high if (difference.low > low) --difference.high; return difference; } private: INT64 high; UINT64 low; };
741,423
741,445
Creating popup menu in Qt for QTableView
I have a QTableView in the main UI of my program. I'd like to show popup menu when user right clicks on the cells of the table and take appropriate action when an option is selected from the menu. I am using Qt Creator 1 (Qt version 4.5). How can I do that?
Check out the customContextMenuRequested signal to get the event, and use a QMenu for the menu itself. Use QTableView::indexAt to find out what, if any, cell was clicked based on the coordinates given to the signal and take the appropriate action when a menu item is clicked.
741,452
741,565
C++ Tricky Inheritance Class Definition Problem
I'm getting this error when dealing with a number of classes including each other: error: expected class-name before '{' token I see what is going on, but I do not know how to properly correct it. Here is an abstracted version of the code: A.h #ifndef A_H_ #define A_H_ #include "K.h" class A { public: A(); }; #endif /*A_H_*/ A.cpp #include "A.h" A::A() {} B.h #ifndef B_H_ #define B_H_ #include "A.h" class B : public A { // error: expected class-name before '{' token public: B(); }; #endif /*B_H_*/ B.cpp #include "B.h" B::B() : A() {} J.h #ifndef J_H_ #define J_H_ #include "B.h" class J { public: J(); }; #endif /*J_H_*/ J.cpp #include "J.h" J::J() {} K.h #ifndef K_H_ #define K_H_ #include "J.h" class K : public J { // error: expected class-name before '{' token public: K(); }; #endif /*K_H_*/ K.cpp #include "K.h" K::K() : J() {} main.cpp #include "A.h" int main() { return 0; } Starting in main.cpp, I can determine that this is what the compiler sees: #include "A.h" #ifndef A_H_ #define A_H_ #include "K.h" #ifndef K_H_ #define K_H_ #include "J.h" #ifndef J_H_ #define J_H_ #include "B.h" #ifndef B_H_ #define B_H_ #include "A.h" class B : public A { // error: expected class-name before '{' token So, A's definition is not complete when we get to B. I've been told that sometimes you need to use a forward declaration and then move the #include statement into the .cpp file, but I'm not having any luck with that. If I try anything like that, I simply get the additional error: error: forward declaration of 'struct ClassName' I think maybe I'm just not doing things in the right places. Can someone please show me how to get this code to compile? Thank you very much! Edit: I want to point out that this is just abstracted version of the real code. I realize that there are no references to K in A or B in J, but there are in the real code and I feel that they're completely necessary. 
Perhaps if I give a brief description of the real classes, someone can help me restructure or fix my code. Class A is an abstract node class that acts as an interface for nodes in a graph. Class B is one of what will be a number of different implementations of A. In the same manner, class J is an abstract Visitor class and K is the corresponding implementation. Here is the code with a little more context: A.h (Abstract Node) #ifndef A_H_ #define A_H_ #include "K.h" class K; class A { public: A(); virtual void accept(const K&) const = 0; }; #endif /*A_H_*/ A.cpp #include "A.h" A::A() {} B.h (Concrete Node) #ifndef B_H_ #define B_H_ #include "A.h" class K; class B : public A { // error: expected class-name before '{' token public: B(); virtual void accept(const K&) const; }; #endif /*B_H_*/ B.cpp #include "B.h" B::B() : A() {} void B::accept(const K& k) const { k.visit(this); } J.h (Abstract Visitor) #ifndef J_H_ #define J_H_ #include "B.h" class B; class J { public: J(); virtual void visit(const B*) const = 0; }; #endif /*J_H_*/ J.cpp #include "J.h" J::J() {} K.h (Concrete Visitor) #ifndef K_H_ #define K_H_ #include "J.h" class B; class K : public J { // error: expected class-name before '{' token public: K(); virtual void visit(const B*) const; }; #endif /*K_H_*/ K.cpp #include "K.h" K::K() : J() {} void K::visit(const B*) const {}; main.cpp #include "A.h" int main() { return 0; } I had to add some forward declarations to make some additional errors that appeared (when I added detail) go away. Some of them may not be necessary or correct.
Circular inclusions do not work. Try to keep inclusions to a strict minimum. If you do that, you'll either be fine altogether, or you'll discover problems in your design. In your case, I don't see anything wrong with your design. When defining class K, you're only using a pointer to an object of type B. That does not require B to be defined (as in "include the header file"), only to be declared (a forward declaration is fine). So, in your case, removing the inclusion of header "B.h" and replacing it with "class B;" is sufficient. (The same goes for class J.)
741,476
741,477
Is it possible to put several objects together inside a union?
What if I have this: union{ vector<int> intVec ; vector<float> floatVec ; vector<double> doubleVec ; } ; Of course, I'll be using just one of the 3 vectors. But... what happens when all 3 vectors are constructed? Would the constructors of the 3 vectors interfere with each other? (since the 3 of them are at the same memory address) Thanks.
The current C++ standard does not allow non-POD types inside unions. You will get this compiler error from gcc: error: member ‘std::vector<int, std::allocator<int> > <anonymous union>::i’ with constructor not allowed in union error: member ‘std::vector<int, std::allocator<int> > <anonymous union>::i’ with destructor not allowed in union The new C++ standard (C++0x) proposes unrestricted unions, but it adds yet more object-lifetime pitfalls to C++.
741,597
741,606
Using types defined in template arguments
When using a container class like vector, list, etc., I can use the type of the elements by writing vector<type>::value_type. However, the following code template<class container> void foo(container& c) { typedef container::value_type elementtype; elementtype b; } fails with the error "expected initializer before ‘elementtype’". Is it possible to infer the element type when the container type is given as a template argument or do I have to give the element type as an extra template argument?
You're missing the required typename keyword: typedef typename container::value_type elementtype; This is because container is a dependent name in this template, so the compiler has no way of knowing whether container::value_type is always a type or not, as it may depend on the choice of container. Surely this question is a FAQ somewhere? Edit, it is: http://www.parashift.com/c++-faq-lite/templates.html#faq-35.18
741,719
741,747
What's the difference between APR (Apache Portable Runtime) 1.3 and 0.9?
I'm just getting started with APR and it seems that there are two supported versions developed side-by-side: http://apr.apache.org/ The docs don't explain the difference between 1.3.x and 0.9.x... Can anyone please shed light on the matter? Or in short, which should I use?
See http://www.apache.org/dist/apr/Announcement0.9.html : This version of APR is principally a bug fix release, and is provided only for users requiring APR 0.9 compatibility. Most developers are encouraged to adopt the latest APR 1.x version to ensure the most comprehensive support and access to the latest features and enhancements. For example, the Apache HTTP Server Project's httpd 2.0 release uses APR 0.9 for binary compatibility, while later httpd 2.2 releases require APR 1.2 for better support and additional features.
741,746
741,758
Problems with starting a program + DLL multiple times in Windows XP?
We develop a network library that uses TCP and UDP sockets. This DLL is used by a testclient, which is started multiple times at the same PC for a load test. In Windows Vista, it is no problem to start the testclient many times. In Windows XP, starting it up to 5 times is no problem, but if we start it 6 times or more, and then closing one client, ALL of them crash with apparently random stack traces. Yes, although we do not use any interprocess code (only sockets between the clients), the termination of one of the client leads to the crash of all of them. Our DLL is compiled with MSVC and uses Boost and Crypto++ libs (statically linked). Any idea why the different processes could influence each other?
An idea: you have some bug. Seriously, there is no way to know what your problem is without any information whatsoever. When a process crashes it usually has a very good reason to do so; find out what that is. Compile your DLLs and executables in debug, attach a debugger, and make sense of the stack trace you get. If you get a nonsense stack trace, find out why that is. As with many problems, this one is likely to be solved by "just debugging it".
741,834
741,849
How to avoid running out of memory in high memory usage application? C / C++
I have written a converter that takes OpenStreetMap XML files and converts them to a binary runtime rendering format that is typically about 10% of the original size. Input file sizes are typically 3 GB and larger. The input files are not loaded into memory all at once, but streamed as points and polys are collected; then a BSP is run on them and the file is output. Recently, on larger files, it runs out of memory and dies (the one in question has 14 million points and 1 million polygons). Typically my program is using about 1 GB to 1.2 GB of RAM when this happens. I've tried increasing virtual memory from 2 to 8 GB (on XP), but this change had no effect. Also, since this code is open source, I would like to have it work regardless of the available RAM (albeit slower). It runs on Windows, Linux and Mac. What techniques can I use to avoid having it run out of memory? Processing the data in smaller subsets and then merging the final results? Using my own virtual-memory-type handler? Any other ideas?
First, on a 32-bit system, you will always be limited to 4 GB of memory, no matter the pagefile settings. (And of those, only 2 GB will be available to your process on Windows. On Linux, you'll typically have around 3 GB available.) So the first obvious solution is to switch to a 64-bit OS and compile your application for 64-bit. That gives you a huge virtual memory space to use, and the OS will swap data in and out of the pagefile as necessary to keep things working. Second, allocating smaller chunks of memory at a time may help. It's often easier to find four free 256 MB chunks of memory than one 1 GB chunk. Third, split up the problem. Don't process the entire dataset at once, but try to load and process only a small section at a time.
742,034
743,455
qpThreads documentation
Is there any documentation on qpThreads? In what way is it different from pthreads?
Found some documentation finally. Sourceforge qpthreads
742,342
764,328
Simple OpenGL texture map not working?
I'm trying to figure out texture mapping in OpenGL and I can't get a simple example to work. The polygon is being drawn, though it's not textured but just a solid color. Also the bitmap is being loaded correctly into sprite1[] as I was successfully using glDrawPixels up til now. I use glGenTextures to get my tex name, but I notice it doesn't change texName1; this GLuint is whatever I initialize it to, even after the call to glGenTextures... I have enabled GL_TEXTURE_2D. Heres the code: GLuint texName1 = 0; glGenTextures(1, &texName1); glBindTexture(GL_TEXTURE_2D, texName1); glPixelStorei(GL_UNPACK_ALIGNMENT, 1); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); glTexImage2D(GL_TEXTURE_2D, 0, GL_BGRA_EXT, sprite1[18], sprite1[22], 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, &sprite1[54]); glColor3f(1, 1, 0); glBindTexture(GL_TEXTURE_2D, texName1); glBegin(GL_QUADS); glTexCoord2f (0.0, 0.0); glVertex3f (0.0, 0.0, -5.0f); glTexCoord2f (1.0, 0.0); glVertex3f (.5, 0.0, -5.0f); glTexCoord2f (1.0, 1.0); glVertex3f (.5, .5, -5.0f); glTexCoord2f (0.0, 1.0); glVertex3f (0.0, .5, -5.0f); glEnd(); UPDATE: I'm at a loss. Here's everything I've tried: Turns out I was initializing my texture before OGL was initialized. The texture is initialized (glGenTextures->glTexImage2D) in a class constructor and drawn (glBegin->glEnd) in a member function that is called every frame. genTextures appears to be working correctly now and I'm getting a name of 1. Every possible combination of GL_RGBA8, GL_BGRA_EXT (GL_BGRA doesn't work on my system; I need the _EXT), and I even removed the alpha channel from the bitmap and tried all combinations of GL_RGB, GL_BGR_EXT, etc etc. No luck. 
Tried procedurally creating a bitmap and using that Made sure GL_COLOR_MATERIAL isn't enabled. Changed bitmap size to 32x32. Tried glTexEnvi instead of glTexEnvf.
I found the problem. My call to glEnable was glEnable(GL_BLEND | GL_TEXTURE_2D). Using glGetError I saw I was getting a GL_INVALID_ENUM for this call, so I moved GL_TEXTURE_2D to its own enable call and bingo. glEnable takes a single capability enum rather than a bitmask, so OR-ing two capability values together produces a value that isn't a valid enum; each capability needs its own glEnable call.
742,415
742,422
C memset seems to not write to every member
I wrote a small coordinate class to handle both int and float coordinates. template <class T> class vector2 { public: vector2() { memset(this, 0, sizeof(this)); } T x; T y; }; Then in main() I do: vector2<int> v; But according to my MSVC debugger, only the x value is set to 0; the y value is untouched. I've never used sizeof() in a template class before; could that be what's causing the trouble?
No, don't use memset here. memset(this, 0, sizeof(this)) zeroes only sizeof(this) bytes, which is the size of a pointer (4 bytes on my x86 Intel machine), starting at the object this points to, so only part of the object gets cleared. Beyond the wrong size, this is a bad habit: you will also zero out virtual table pointers and pointers to virtual bases when using memset with a complex class. Instead do: template <class T> class vector2 { public: // use initializer lists vector2() : x(0), y(0) {} T x; T y; };
742,545
742,571
Operator = Overload with Const Variable in C++
I was wondering if you guys could help me. Here are my .h: Class Doctor { const string name; public: Doctor(); Doctor(string name); Doctor & Doctor::operator=(const Doctor &doc); } and my main: int main(){ Doctor d1 = Doctor("peter"); Doctor d2 = Doctor(); d2 = d1; } I want to do the operator= function. Can anyone help me? Notice the const member on Doctor. ************EDIT:********* My main problem is that I want another class to have an attribute which is a Doctor like a Pacient has a Doctor. But I want to be able to change my Doctor. Like i am seeing doctor A but I want to see Doctor B. That would be done using a setDoctor function in my other class (Pacient). If it was me doing the code I would say something like this: class Patient{ Doctor &d; }; and then change the pointer. However I am using a base code made by one of the teachers and it has the class defined like: class Patient{ Doctor d; } But I think this is impossible to do because with a setDoctor() in the Patient class I would either make a copy or alter the varable itself. The first doesn't make any difference and the second is impossible due to the const. Am I right?
You are almost there. A few noteworthy points: The name should not be const qualified. A const member cannot be modified, yet modifying it is exactly what the assignment operator needs to do. The C++ keyword is class and not Class as your code has it (it'll give you compile errors). As Michael Burr notes: "It should be noted though that if the class simply contains other classes that already properly support assignment (as in this case with a simple string member), the implicit, compiler-generated operator=() will work just fine." Here, in your case, the only member string has a proper op=, so explicitly defining one is redundant. Meeh's solution is almost there; the only thing it doesn't talk about is self-assignment. Read FAQ 12. Assignment is one of the Big Three member functions FAQ 27.10. Look it up: a requirement to implement any one of the copy ctor, op=, or the dtor usually implies that you'd need to implement the other two as well. The corrected code sample should be something like this: class Doctor { string name; public: Doctor& operator=(Doctor const& o) { if (&o != this) name = o.name; return *this; } // ... };
742,607
742,617
Using local classes with STL algorithms
I have always wondered why you cannot use locally defined classes as predicates to STL algorithms. In the question: Approaching STL algorithms, lambda, local classes and other approaches, BubbaT mentions that 'Since the C++ standard forbids local types to be used as arguments' Example code: int main() { int array[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 }; std::vector<int> v( array, array+10 ); struct even : public std::unary_function<int,bool> { bool operator()( int x ) { return !( x % 2 ); } }; std::remove_if( v.begin(), v.end(), even() ); // error } Does anyone know where in the standard this restriction is? What is the rationale for disallowing local types? EDIT: Since C++11, it is legal to use a local type as a template argument.
It's explicitly forbidden by the C++98/03 standard. C++11 removed that restriction. To be more complete: The restrictions on types that are used as template parameters are listed in section 14.3.1 of the C++03 (and C++98) standard: A local type, a type with no linkage, an unnamed type or a type compounded from any of these types shall not be used as a template-argument for a template type-parameter. template <class T> class Y { /* ... */ }; void func() { struct S { /* ... */ }; //local class Y< S > y1; // error: local type used as template-argument Y< S* > y2; // error: pointer to local type used as template-argument } Source and more details: http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=420 To sum up, the restriction was a mistake that would have been fixed sooner if the standard had evolved faster... That said, today most recent versions of common compilers do allow it, along with providing lambda expressions.
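Since C++11 the example from the question compiles unchanged, and a lambda does the same job with less ceremony. A minimal sketch of both (the function names are hypothetical):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Local class as a predicate: legal since C++11.
std::vector<int> keepOdd(std::vector<int> v) {
    struct even {
        bool operator()(int x) const { return !(x % 2); }
    };
    // erase-remove idiom: remove_if only reorders, erase shrinks
    v.erase(std::remove_if(v.begin(), v.end(), even()), v.end());
    return v;
}

// The more idiomatic C++11 spelling with a lambda.
std::vector<int> keepOddLambda(std::vector<int> v) {
    v.erase(std::remove_if(v.begin(), v.end(),
                           [](int x) { return x % 2 == 0; }),
            v.end());
    return v;
}
```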
743,055
743,100
Convert iterator to pointer?
I have a std::vector with n elements. Now I need to pass a pointer to a vector that has the last n-1 elements to a function. For example, my vector<int> foo contains (5,2,6,87,251). A function takes vector<int>* and I want to pass it a pointer to (2,6,87,251). Can I just (safely) take the iterator ++foo.begin(), convert it to a pointer and pass that to the function? Or use &foo[1]? UPDATE: People suggest that I change my function to take an iterator rather than a pointer. That seems not possible in my situation, since the function I mentioned is the find function of unordered_set<std::vector*>. So in that case, is copying the n-1 elements from foo into a new vector and calling find with a pointer to that the only option? Very inefficient! It's like Shlemiel the painter, especially since I have to query many subsets: the last n-1, then n-2, etc. elements and see if they are in the unordered_set.
That seems not possible in my situation, since the function I mentioned is the find function of unordered_set<std::vector*>. Are you using custom hash/predicate function objects? If not, then you must pass unordered_set<std::vector<int>*>::find() the pointer to the exact vector that you want to find. A pointer to another vector with the same contents will not work. This is not very useful for lookups, to say the least. Using unordered_set<std::vector<int> > would be better, because then you could perform lookups by value. I think that would also require a custom hash function object, because hash does not, to my knowledge, have a specialization for vector<int>. Either way, a pointer into the middle of a vector is not itself a vector, as others have explained. You cannot convert an iterator into a pointer to vector without copying its contents.
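A sketch of the by-value approach with a hand-rolled hash functor. The combine step below (multiply-by-31) is just one illustrative choice, and the type names are hypothetical:

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_set>
#include <vector>

// Hashing the vector's *contents*, so find() can look up by value
// instead of by pointer identity.
struct VecHash {
    std::size_t operator()(const std::vector<int>& v) const {
        std::size_t h = 0;
        for (int x : v)                       // simple polynomial combine
            h = h * 31 + static_cast<std::size_t>(x);
        return h;
    }
};

// Equality defaults to operator==, which compares elements for vectors.
using VecSet = std::unordered_set<std::vector<int>, VecHash>;
```

With this, two distinct vectors holding (2,6,87,251) compare and hash alike, so lookups by content work.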
743,191
743,848
How to Parse Lines With Differing Number of Fields in C++
I have data that looks like this: AAA 0.3 1.00 foo chr1,100 AAC 0.1 2.00 bar chr2,33 AAT 3.3 2.11 chr3,45 AAG 1.3 3.11 qux chr1,88 ACA 2.3 1.33 chr8,13 ACT 2.3 7.00 bux chr5,122 Note that the lines above are tab separated. Moreover, a line sometimes may contain 5 fields or 4 fields. What I want to do is to capture the 4th field in a variable as "" if it doesn't contain any value. I have the following code, but somehow it reads the 5th field as the 4th field when the 4th is empty. What's the correct way to do it? #include <iostream> #include <vector> #include <fstream> #include <sstream> using namespace std; int main ( int arg_count, char *arg_vec[] ) { string line; ifstream myfile (arg_vec[1]); if (myfile.is_open()) { while (getline(myfile,line) ) { stringstream ss(line); string Tag; double Val1; double Val2; double Field4; double Field5; ss >> Tag >> Val1 >> Val2 >> Field4 >> Field5; cout << Field4 << endl; //cout << Tag << "," << Val1 << "," << Val2 << "," << Field4 << "," << Field5 << endl; } myfile.close(); } else { cout << "Unable to open file"; } return 0; }
Another C++-only version that just uses the fact that the stream must set its failbit if operator>> fails to parse. Two details to get right: after a failed parse both failbit and eofbit are set, so test with fail() rather than comparing rdstate() to ios::failbit exactly, and rewind with seekg(0, ios::beg) after clear(). while(getline(myfile, line)) { stringstream sl(line); sl >> tag >> v1 >> v2 >> v3 >> v4; if(sl.fail()) // failed to parse 5 fields? { sl.clear(); sl.seekg(0, ios::beg); sl >> tag >> v1 >> v2 >> v4; // do it again with 4 v3 = "EMPTY"; // just a default value } cout << "tag: " << tag << std::endl << "v1: " << v1 << std::endl << "v2: " << v2 << std::endl << "v3: " << v3 << std::endl << "v4: " << v4 << std::endl << std::endl; }
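A different sketch worth considering: since the question says the fields are tab separated, splitting each line on '\t' with getline preserves a genuinely empty 4th column (two consecutive tabs) as "", something operator>> can never report because it skips whitespace. The helper name is hypothetical:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Split one line on tabs, keeping empty fields as empty strings.
std::vector<std::string> splitTabs(const std::string& line) {
    std::vector<std::string> fields;
    std::stringstream ss(line);
    std::string field;
    while (std::getline(ss, field, '\t'))  // '\t' as the delimiter
        fields.push_back(field);
    return fields;
}
```

After splitting, fields[3] is "" for the rows with the missing word, and the numeric columns can be converted with strtod/lexical_cast as needed.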
743,197
743,211
size() Vs empty() in vector - why empty() is preferred?
While debugging something, I saw the STL vector::empty() implementation: bool empty() const {return (size() == 0); } I believe that whenever we are probing the emptiness of a vector, it is always recommended to use empty() over size(). But seeing that implementation, I am wondering, what is the benefit of doing so? Instead, there is a function call overhead in calling empty() as it internally calls size() == 0. I thought empty() may be helpful in case of list as size() doesn't guarantee constant time in list. To verify my assumption, I checked the list implementation and surprisingly, found the same implementation in list as well, return (size() == 0); I am a bit confused now. If empty() internally uses size() then why should we prefer empty() over size()?
You would need to write the condition out every time you use size(). It's convenient to use empty(). This is, of course, provided you don't switch containers. As others have pointed out, it is up to the implementation to use size() in empty() or not. However, the standard does guarantee that: empty() is a constant-time operation for all standard containers.
743,203
743,205
Accessing static class variables in C++?
Duplicate: C++: undefined reference to static class member If I have a class/struct like this // header file class Foo { public: static int bar; int baz; int adder(); }; // implementation int Foo::adder() { return baz + bar; } This doesn't work. I get an "undefined reference to `Foo::bar'" error. How do I access static class variables in C++?
You must add the following line in the implementation file: int Foo::bar = your_initial_value_here; This is required because the in-class declaration is only a declaration; this line at namespace scope is the single definition that gives the static variable its storage.
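A complete minimal example of the two pieces together (the initial value 5 is chosen arbitrarily for illustration):

```cpp
#include <cassert>

// In-class: declaration only. No storage is reserved here.
class Foo {
public:
    static int bar;   // declaration
    int baz;
    int adder() { return baz + bar; }
};

// At namespace scope, in exactly one .cpp file: the definition.
int Foo::bar = 5;
```

Without the `int Foo::bar = ...;` line, every use of Foo::bar (including the one inside adder()) produces the "undefined reference" link error from the question.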
743,321
743,334
Is it good to send stack allocated object as a pointer parameter to some other function?
Is it good to send stack allocated object as a pointer parameter to some other function?
Yes, but the more common C++ idiom for this situation is to use a reference (and probably a const reference) instead of a pointer. So instead of void foo( sometype * p ) { p->func(); } you write: void foo( sometype & p ) { p.func(); } This has the advantage that you don't need to take the object's address in the caller: void afunc() { sometype t; foo( t ); } and also gives a subliminal hint to the reader that you do not intend the function to take ownership of the object.
743,395
743,397
How to refactor a class in C++ to make a certain function const?
I have a class which looks something like this: class MyClass { public: // some stuff omitted /*** A function which, in itself, is constant and doesn't change the class ***/ void myFunction( void ) const; private: /*** If loaded is true, then internal resources are loaded ***/ boolean loaded; }; Because I designed my class this way I am forced to do this: MyClass :: myFunction( void ) const { if( !loaded ) { // do something here loaded = true; /** <-- this violates const **/ } // carry out some computation } Because I need to set loaded flag the function now violates const qualifier. I have a lot of code that stores constant objects and it will be time consuming to go round and change them to non-const. Moreover, this will be slightly hacky since I really want the objects to be const. What is the best way to refactor my class in order to keep myFunction constant? P.S. The resources guarded by loaded are required only for several functions and therefore loading them in advance is not a very good solution.
Use the mutable keyword. class MyClass { public: void myFunction( void ) const; private: mutable bool loaded; }; This says that the loaded member should be treated as being logically const but that physically it may change.
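A sketch of the full lazy-loading pattern from the question, with the guarded resource reduced to a single cached int for illustration (expensiveLoad is a hypothetical stand-in for the real resource loading):

```cpp
#include <cassert>

// Logically-const query that fills its cache on first use.
// Both the flag and the cache are mutable, so the const member
// function may write to them.
class MyClass {
public:
    int myFunction() const {
        if (!loaded) {
            cached = expensiveLoad();   // done at most once
            loaded = true;
        }
        return cached;                  // cheap on every later call
    }
private:
    static int expensiveLoad() { return 42; }  // placeholder loader
    mutable bool loaded = false;
    mutable int cached = 0;
};
```

Callers holding const objects or const references can now call myFunction() freely; the first call pays the loading cost, later calls hit the cache.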
743,413
743,436
Tool Chain for WxWidgets explained
Where can I find a writeup that shows me how to set up a tool chain for wxWidgets (C++) on Linux/Ubuntu and/or OS X? I downloaded, compiled & installed wxWidgets both on Linux and OS X, compiled and tried the samples, but seem to be stuck setting up a compile environment in my own home directory. DialogBlocks from http://www.dialogblocks.com looked promising, but it insists on recompiling wxWidgets again and again .. must be something about it I don't understand. Writing code from scratch seems to fail due to a lack of paths to libraries, tools or whatnot .. again a lack of understanding on my part, I am sure. So, can anyone point me to a tool chain setup that has more than the bare minimum of instructions and fills in some of the "why" instead of only the minimal "what".
Like all C/C++ programs, the compiler has to know in what directories to look for include files, and the linker has to know what libraries it should link to. The WxWidgets package, if installed correctly, includes the program wx-config. This can be used while compiling and linking, like so: g++ $(wx-config --cxxflags) -c my_prog.cpp g++ my_prog.o $(wx-config --libs) -o my_prog
743,458
743,483
How does const correctness help write better programs?
This question is from a C# guy asking the C++ people. (I know a fair bit of C but only have some general knowledge of C++). A lot of C++ developers that come to C# say they miss const correctness, which seems rather strange to me. In .Net, in order to disallow changing of things, you need to create immutable objects or objects with read only properties/fields, or to pass copies of objects (by default structs are copied). Is C++ const correctness a mechanism to create immutable/readonly objects, or is there more to it? If the purpose is to create immutable/readonly objects, the same thing can be done in an environment like .Net.
A whole section is devoted to Const Correctness in the FAQ. Enjoy!
743,594
743,604
C++ Implementation of a Binary Heap
I need a min-heap implemented as a binary tree. Really fast access to the minimum node and insertion sort. Is there a good implementation in stl or boost that anyone can point me to?
I think std::priority_queue is what you are looking for.
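One caveat: std::priority_queue is a max-heap by default, so for the min-heap asked about you pass std::greater as the comparison. A minimal sketch:

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

// priority_queue with std::greater orders so that top() is the MINIMUM.
using MinHeap = std::priority_queue<int, std::vector<int>, std::greater<int> >;
```

top() gives O(1) access to the minimum; push() and pop() are O(log n), which matches the fast-insert requirement.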
743,669
743,685
Are inline functions in C/C++ a way to make them thread-safe?
I make the following reasoning; please tell me what's wrong (or right) about it: "If inlining a function duplicates the code in the place the function is called, then the static and local variables are duplicated for each function calling it, and if there is only one thread running the function that calls the inlined one at the same time, then the code is thread-safe". "And, if it doesn't help with static and global variables, does it with code that is creating temporary variables?" Thanks
When you declare a function as inline, it is merely a hint to the compiler. Static variables have a clear definition in the language. If the compiler does inline the function, it is still obligated to keep the static variables shared between all instances of the function. Therefore, they will remain global and have to be protected in a MT environment. As to the local variables, unless they are used outside of the function, they are thread safe regardless of the function being inlined or not.
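A small demonstration of the static-variable point: whether or not the compiler inlines the function, the static names one shared object, so every call site sees and mutates the same counter, and it would therefore still need protection in a multithreaded program:

```cpp
#include <cassert>

// Even if this function is inlined at every call site, `count` is
// ONE shared object, not a fresh copy per expansion.
inline int nextId() {
    static int count = 0;   // single instance across all calls
    return ++count;
}
```

By contrast, plain local (automatic) variables live on each caller's stack, which is why they are thread-safe regardless of inlining.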
743,697
743,702
What is the exact definition of instance variable?
I think instance variables are simple data types like int or double. Everything that is created automatically when the object is created. If an object creates additional objects - like everything that is done with the NEW keyword - these are not instance variables. Am I right or wrong? What is the exact definition?
Wrong. Anything that is bound within the instance (i.e. an instantiated object) is an instance variable. As opposed to static (class) variables, which are bound to the class. It doesn't matter if they are simple types or pointers to objects.
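A minimal illustration of the distinction (the names are hypothetical):

```cpp
#include <cassert>

struct Node {
    int value = 0;          // instance variable: simple type, one per object
    Node* child = nullptr;  // instance variable too -- being a pointer to
                            // an object obtained with new changes nothing
    static int shared;      // class (static) variable: one copy for all Nodes
};

int Node::shared = 0;       // definition of the class variable
```

The object pointed to by `child` is not itself an instance variable of Node; the pointer member is.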
743,732
744,440
About the MSDN Documentation on NOTIFYICONDATA's cbSize member
I am reading the NOTIFYICONDATA documentation in MSDN. It says the NOTIFYICONDATA structure has a cbSize member that should be set to the size of the structure, but the NOTIFYICONDATA structure has a different size in every Shell32.dll version, so you should get the Shell32.dll version before setting cbSize. The following quotes from MSDN: If it is version 5.0 or later, initialize the cbSize member as follows. nid.cbSize = sizeof(NOTIFYICONDATA); Setting cbSize to this value enables all the version 5.0 and 6.0 enhancements. For earlier versions, the size of the pre-6.0 structure is given by the NOTIFYICONDATA_V2_SIZE constant and the pre-5.0 structure is given by the NOTIFYICONDATA_V1_SIZE constant. Initialize the cbSize member as follows. nid.cbSize = NOTIFYICONDATA_V2_SIZE; Using this value for cbSize will allow your application to use NOTIFYICONDATA with earlier Shell32.dll versions, although without the version 6.0 enhancements. I found it a bit vague, because 'sizeof(NOTIFYICONDATA)' has a different value in Win98 (using Shell32.dll version 4.x), Win2K (version 5.0) and WinXP (version 6.0). How could it 'enable all version 5.0 and 6.0 enhancements'? So I looked for the definition of NOTIFYICONDATA_V1_SIZE (source code as below), and I see: NOTIFYICONDATA_V1_SIZE is for versions before Win 2K (doesn't include 2K) NOTIFYICONDATA_V2_SIZE is for Win XP NOTIFYICONDATA_V3_SIZE is for Vista (not sure if I am right) That's completely different from what MSDN says, and there is none for Win2K? So, I am totally confused right now. How should I set the cbSize member according to the Shell32.dll version?
//= = = = = = = = ShellAPI.h = = = = = = = = typedef struct _NOTIFYICONDATAA { DWORD cbSize; HWND hWnd; UINT uID; UINT uFlags; UINT uCallbackMessage; HICON hIcon; #if (NTDDI_VERSION < NTDDI_WIN2K) CHAR szTip[64]; #endif #if (NTDDI_VERSION >= NTDDI_WIN2K) CHAR szTip[128]; DWORD dwState; DWORD dwStateMask; CHAR szInfo[256]; union { UINT uTimeout; UINT uVersion; // used with NIM_SETVERSION, values 0, 3 and 4 } DUMMYUNIONNAME; CHAR szInfoTitle[64]; DWORD dwInfoFlags; #endif #if (NTDDI_VERSION >= NTDDI_WINXP) GUID guidItem; #endif #if (NTDDI_VERSION >= NTDDI_VISTA) HICON hBalloonIcon; #endif } NOTIFYICONDATAA, *PNOTIFYICONDATAA; typedef struct _NOTIFYICONDATAW { DWORD cbSize; HWND hWnd; UINT uID; UINT uFlags; UINT uCallbackMessage; HICON hIcon; #if (NTDDI_VERSION < NTDDI_WIN2K) WCHAR szTip[64]; #endif #if (NTDDI_VERSION >= NTDDI_WIN2K) WCHAR szTip[128]; DWORD dwState; DWORD dwStateMask; WCHAR szInfo[256]; union { UINT uTimeout; UINT uVersion; // used with NIM_SETVERSION, values 0, 3 and 4 } DUMMYUNIONNAME; WCHAR szInfoTitle[64]; DWORD dwInfoFlags; #endif #if (NTDDI_VERSION >= NTDDI_WINXP) GUID guidItem; #endif #if (NTDDI_VERSION >= NTDDI_VISTA) HICON hBalloonIcon; #endif } NOTIFYICONDATAW, *PNOTIFYICONDATAW; #define NOTIFYICONDATAA_V1_SIZE FIELD_OFFSET(NOTIFYICONDATAA, szTip[64]) #define NOTIFYICONDATAW_V1_SIZE FIELD_OFFSET(NOTIFYICONDATAW, szTip[64]) #ifdef UNICODE #define NOTIFYICONDATA_V1_SIZE NOTIFYICONDATAW_V1_SIZE #else #define NOTIFYICONDATA_V1_SIZE NOTIFYICONDATAA_V1_SIZE #endif #define NOTIFYICONDATAA_V2_SIZE FIELD_OFFSET(NOTIFYICONDATAA, guidItem) #define NOTIFYICONDATAW_V2_SIZE FIELD_OFFSET(NOTIFYICONDATAW, guidItem) #ifdef UNICODE #define NOTIFYICONDATA_V2_SIZE NOTIFYICONDATAW_V2_SIZE #else #define NOTIFYICONDATA_V2_SIZE NOTIFYICONDATAA_V2_SIZE #endif #define NOTIFYICONDATAA_V3_SIZE FIELD_OFFSET(NOTIFYICONDATAA, hBalloonIcon) #define NOTIFYICONDATAW_V3_SIZE FIELD_OFFSET(NOTIFYICONDATAW, hBalloonIcon) #ifdef UNICODE #define NOTIFYICONDATA_V3_SIZE 
NOTIFYICONDATAW_V3_SIZE #else #define NOTIFYICONDATA_V3_SIZE NOTIFYICONDATAA_V3_SIZE #endif (Seems like the code doesn't look good on the web site, but it's from ShellAPI.h, all the same)
Which features are available through the Platform SDK headers is controlled by _WIN32_WINNT, which should be defined to the lowest version of the operating system you are targeting. From http://msdn.microsoft.com/en-us/library/6sehtctf.aspx the correct values are: 0x0500 for the Windows 2000 operating system, 0x0501 for Windows XP, 0x0502 for Windows Server 2003, and 0x0600 for Windows Vista. So NOTIFYICONDATA_V1_SIZE refers to any version lower than 2K, NOTIFYICONDATA_V2_SIZE to 2K, NOTIFYICONDATA_V3_SIZE to XP and none to Vista (in this case you can use sizeof(NOTIFYICONDATA)). If you compile your project with _WIN32_WINNT set to the latest version, and detect at runtime which version of shell32.dll you are running on, you can set .cbSize to the correct size; the rest of the fields will be ignored. Something like this should work: NOTIFYICONDATA notify; ZeroMemory(&notify, sizeof(notify)); if(version >= VISTA) { notify.cbSize = sizeof(NOTIFYICONDATA); } else if(version >= XP) { notify.cbSize = NOTIFYICONDATA_V3_SIZE; } else if(version >= 2K) { notify.cbSize = NOTIFYICONDATA_V2_SIZE; } else { notify.cbSize = NOTIFYICONDATA_V1_SIZE; }
743,735
743,810
Why is printf showing -1.#IND for FPTAN results?
I am working on a program which produces assembler code from expressions. One of the functions required is tan(x) which currently works using the following sequence of code (the addresses are filled in at run time): fld [0x00C01288]; fld st(0); fsin; fld st(1); fcos; fdivp; fst [0x0030FA8C]; However, I would like to use the FPTAN opcode instead, so I tried using the following code: fld [0x00C01288]; fptan; fincstp; fst [0x0030FA8C]; The test program uses printf to show the result stored at 0x30FA8C, but for the second sequence the result is displayed as -1.#IND (the first using cos and sin works fine). If I try examining the value at the memory address, or on top of the floating point stack, in the debugger, it appears as the correct number. So, my question is: Why is printf showing -1.#IND and how can I fix it? The value at 0x00C01288 is double precision 0.5 The result in both cases is ~0.5463024898 My first thought was that the value being stored was a different representation of the same number, but inspecting the value stored at 0x0030FA8C shows it to be 0x3FE17B4F5BF3474A in both cases. I do not understand why identical inputs to the printf function could produce different output... Any help or suggestions would be greatly appreciated. Edit: Source where printf is called: #include "FridgeScript.h" #include <stdio.h> #include <math.h> char test[] = "a=tan(0.5);"; int main(int c, char** s) { unsigned int SC = FSCreateContext(); double a = 0.0; FSRegisterVariable(SC, "a", &a); unsigned int CH = FSCompile(SC, test); if(CH) FSExecute(SC, CH); printf("a: %.10g\r\n", a); printf("hex a: %I64X", a); FSDestroyContext(SC); return 0; }
Let me just throw something out there: how about using fstp st(0); instead of fincstp; The docs on fincstp say it's not equivalent to popping the item from the stack because it leaves that spot tagged as filled - perhaps this is messing up the float handling inside of printf? (You may be able to guess I don't know what I'm talking about. But maybe this will give you an idea?)
743,901
743,914
Complex statements in the member initialization part?
I have this: struct myClass{ multiset<string,binPred<string> > values ; myClass(const char param1, const char param2) : values(less<string>()) { } } ; I need to initialize the values member with a different functor depending on the values of param1 and param2. Unfortunately, the logic to decide which functor to use is not so simple, and once values is constructed I can't change its associated comparison functor. So... I would need to put all that decision logic in the member initialization part, but I don't know how aside from using the ?: operator. Is it possible to put more complex statements in there (like switch statements)? If not, is there a way to delay the construction of values so I can initialize it in the constructor's body? Thanks.
You can use a static member function that accepts the parameters you have and returns the necessary comparator. This solves the problem completely and allows for clean, easily debuggable code.
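A sketch of that suggestion applied to the question's class: std::function erases the comparator's concrete type so the element order can be chosen at run time, and a static member function holds the arbitrarily complex selection logic, called from the initializer list. The switch cases here are made up:

```cpp
#include <cassert>
#include <functional>
#include <set>
#include <string>

struct myClass {
    typedef std::function<bool(const std::string&, const std::string&)> Cmp;
    std::multiset<std::string, Cmp> values;

    // The static helper runs in the member initialization part.
    myClass(char p1, char p2) : values(pickCmp(p1, p2)) {}

    static Cmp pickCmp(char p1, char /*p2*/) {
        switch (p1) {                 // logic can be as complex as needed
        case 'r': return std::greater<std::string>();   // descending
        default:  return std::less<std::string>();      // ascending
        }
    }
};
```

The cost of the std::function indirection is paid on every comparison; if only a fixed set of functors is possible, a handwritten polymorphic comparator works the same way.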
744,110
744,129
CEdit numeric validation event C++ MFC
I have a CEdit text box which is a part of a property pane and only allows numeric values (positive integers). The box works fine when people enter non-numeric values, but when they delete the value in the box a dialog pops up saying: "Please enter a positive integer." Here is the situation: 1. I have a number (say 20) in the box. 2. I delete the number. 3. I get the error dialog. Could anybody tell me how I can intercept this event and put a default value in there? Here is what my property pane looks like: const int DEFAULT_VALUE = 20; class MyPropertyPane:public CPropertyPane { //.... private: CEdit m_NumericBox; int m_value; //.... public: afx_msg void OnEnChangeNumericBox(); //.... } void MyPropertyPane::MyPropertyPane() { // Set a default value m_value = DEFAULT_VALUE; } //.... void MyPropertyPane::DoDataExchange(CDataExchange* pDX) { DDX_Control(pDX, IDC_NUMERIC_BOX, m_NumericBox); // this sets the displayed value to 20 DDX_Text(pDX, IDC_NUMERIC_BOX, m_value); } //.... void MyPropertyPane::OnEnChangeNumericBox() { // Somebody deleted the value in the box and I got an event // saying that the value is changed. // I try to get the value from the box by updating my data UpdateData(TRUE); // m_value is still 20 although the value is // deleted inside the text box. }
The message you are receiving is coming from the data validation routines, not the data exchange routines. There is probably a call like this in DoDataExchange(): void MyPropertyPane::DoDataExchange(CDataExchange* pDX) { DDX_Control(pDX, IDC_NUMERIC_BOX, m_NumericBox); DDX_Text(pDX, IDC_NUMERIC_BOX, m_value); DDV_MinMaxInt(pDX, m_value, 1, 20); // if the value in m_value is outside the range 1-20, MFC will pop up an error dialog } You can fix this problem by removing the built-in MFC data validation and adding your own: void MyPropertyPane::DoDataExchange(CDataExchange* pDX) { DDX_Control(pDX, IDC_NUMERIC_BOX, m_NumericBox); DDX_Text(pDX, IDC_NUMERIC_BOX, m_value); if( m_value < 1 || m_value > 20 ) { m_value = DefaultValue; } }
744,750
744,773
Converting a C++ .exe project to a dll
Microsoft provides the source code of vshadow to manipulate VSS (Volume Shadow Service [shadow copy]), and I've modified it a bit but I want to make it into a dll so I can use it in my C# projects. I don't know exactly how to go about doing that, the source code is fairly simple, and it shouldn't be too hard, but I don't really know where to get started. How should I go about converting it to a usable dll instead of compiling to a executable? Update: Someone has already done this: http://www.alphaleonis.com/2008/08/alphavss-bringing-windows-shadow-copy-service-vss-to-net/
You will need to change your project settings in Visual Studio to create a DLL. In addition you will need to define DLL entry points. However, the VSS is a set of COM APIs, so you can call them directly from C# with pinvoke, instead of using this wrapper C++ executable. Since the SDK only contains libs, not DLLs, you'll have to create a DLL project to use it. This is a good blog-how-to. You'll need to download the VSS SDK (if you haven't already). Decent article on pinvoke. Hope this helps.
744,768
744,922
How to determine the actual level of development in a shop, e.g. C++ vs. C?
I imagine most of you know what I am getting at. You start a new job and within the first week or so of scanning through the code you realize you are in yet another C shop that throws in the occasional stream or hapless user defined class here and there. You quickly realize that not only aren't you going to learn anything new but it is just a matter of time before you are asked not use certain things because no one else understands them and won't be able to maintain your work. How often do you see some new technique on say, StackOverflow, only to realize that if you ever used it at work you would be met with bewilderment or annoyance at best? In your experience are these places the norm or the exception? How do (or would) you try to determine a group's level of sophistication and commitment to C++ in the interview setting? For instance I have tried asking questions about the company's use of things like STL, Boost, 3rd party libs, etc., but that only seems to get incrementally closer to the reality of situation you'll find once in it. Thoughts?
It's really all across the board. On one end of the spectrum, I've worked in one place where the code was recently rewritten in C. Recently being 10 years ago. Everyone was highly skeptical of this new-fangled technology. Slightly farther down the spectrum, you'll find C programmers who happen to have compilers with C++ features enabled. They'll dabble in the "class" keyword. But other than that, they are practically writing in C. Next down the line I think you'll find those who just grokked inheritance and polymorphism. These are the kind of guys and gals that would swoon over an MFC or wxWidgets program, thinking it was the greatest. You'll go a little farther and find the folks who just read Meyers. They are all about figuring out how to use STL efficiently. They've got a basic notion of templates that equates mostly to the way STL uses them. Then you've got the Modern C++ Template Metaprogramming geeks. Many of us these days use their work in the form of boost and all its goodness. Few actually dork around with template metaprogramming. That's probably a good thing. I have seen perfectly good code written at each level. The cool thing I always liked about C++ is that at each level of added complexity, the cost could be conceptualized in terms of "how this new abstraction might be implemented in C". So I can conceptualize that inheritance and polymorphism is adding the cost of a function pointer lookup. Since I have seen this and written this in C, I know what I'm in for. I also have a sense of how templates work in part because of how the preprocessor works in C. This applies to most features in the C++ and C world. Since it's easy to map concepts to how they are implemented in C, I can easily visualize how the underlying architecture runs my code.
745,043
745,090
Are there any tools that are able to do cyclomatics on Pro*C++ sources?
Are there any tools that are able to do code metrics on Pro*C++ sources? I haven't been able to find anything specific via Google. Does anyone have any experience with this?
Pro*C generates valid C++ files; for code metrics tools on standard C++, Google is your friend.
745,536
745,801
Small open source Unicode library for C/C++
Does anyone know of a great small open source Unicode handling library for C or C++? I've looked at ICU, but it seems way too big. I need the library to support: all the normal encodings normalization finding character types - finding if a character should be allowed in identifiers and comments validation - recognizing nonsense
I looked at UTF8-CPP and libiconv, and neither seemed to have all the features I needed. So, I guess I'll just use ICU, even though it is really big. I think there are some ways to strip out the unneeded functions and data, so I'll try that. This page (under "Customizing ICU's Data Library") describes how to cut out some of the data.
745,692
840,156
C++ pointer casting issue
I need to pass a pointer through a scripting language which just has a double and string type, for this I only have to worry about 32-bit pointers. Seeing as the pointers are 32-bit, I figured doubles had enough precision to safely store the pointer, which works; however, the problem arises with some pointers to stream, presumably due to multiple inheritance...and I'm really not sure how to solve it. Basically I've been casting the pointers to unsigned, and then to double. Then to get them back I've cast the double to unsigned and then to whatever type the pointer is. e.g.: double example() { int *i = new int[100]; return (double)(unsigned)i; } double example2(double i) { doSomething((int*)(unsigned)i); } However with the stream types it seems to not work... std::ofstream *fs = new std::ofstream("example.txt"); std::cout << fs << std::endl;//029D1DF8 for example double d = (double)(unsigned)fs; std::ios *p = (std::ios*)(unsigned)d; std::cout << p << std::endl;//029D1DF8 same thing seems fine std::cout << ((std::ios*)fs) << std::endl;//029D1E54, oops, apparently a cast to std::ios changes the pointer to some offset! Is there some way around this? I have got an idea using a map and id numbers but I'd rather avoid the cost of having such maps which may contain 1000's of entries. I'd much rather get the casting to work for this...
OK, the solution I came up with was to simplify the casts. I defined a structure, like the one below, that stores pointers already cast to the types I wanted. struct Streams { std::istream *is; std::ostream *os; std::stringstream *ss; std::fstream *fs; }; I then populate the struct (setting the pointers not supported by the "real" object to null), allocate it on the heap, and just cast its address to a double to return to my script, which is then safe to cast back to Streams*. This also has the added advantage of allowing a safe check that a stream passed to a method supports that operation, simply by checking whether the member used for that method (e.g. ss for the str() method of string streams) is not null.
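For reference, the underlying cause can be shown in a few lines: with multiple inheritance, a cast between related pointer types may add an offset, while a round trip through an integer (or a double) copies the bits unchanged. So the rule is to always cast back to the exact type you cast from, and let the compiler perform any base-class adjustment afterwards. A sketch with hypothetical type names:

```cpp
#include <cassert>
#include <cstdint>

struct Left  { int a; virtual ~Left()  {} };
struct Right { int b; virtual ~Right() {} };
struct Both : Left, Right {};   // like ofstream inheriting via ios

bool roundTripSafe(Both* p) {
    // Stash the pointer as raw bits (what the double trick does).
    std::uintptr_t raw = reinterpret_cast<std::uintptr_t>(p);

    Both* back = reinterpret_cast<Both*>(raw);  // same type back: value preserved
    Right* r = back;                            // NOW the compiler adjusts the
                                                // address to the Right subobject
    return back == p && static_cast<Both*>(r) == p;
}
```

Casting the raw bits directly to `Right*` would skip that adjustment, which is exactly the 029D1DF8-vs-029D1E54 discrepancy in the question.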
745,819
745,949
How can I implement metaclasses in C++?
I've been reading a bit about what metaclasses are, but I would like to know if they can be achieved in C++. I know that Qt library is using MetaObjects, but it uses an extension of C++ to achieve it. I want to know if it is possible directly in C++. Thanks.
It's possible to create meta-classes, however C++ is not about that, it's about statically compile-time based implementations, not runtime flexibility. Anyways it depends if you want Meta-Classes with methods or just Meta-Classes with data, the Data classes can be implemented with Boost constructs like boost::any, and if you want Classes with methods, you can use boost::bind to bind methods to the object, or you can implement them on your own with a single-entry point interface like COM-objects. However the "real" C++ way is to use generics so it can all be determined at compile-time, for best performance. To be honest, I've seen very few systems, although I have seen some, that truly need runtime flexibility, in most cases objects are born and die the same class, or at least enough to spend 95% of their lifetime as a single class once they come out of their factory. So in many situations one finds themselves paying too much for runtime meta-classes. Of course there is the school of thought that this provides better developer performance, but in many situations, each line of code will be run on hardware a few hundred million times more than the time it took to write it. So you can think about compile-time and runtime classes as paying up front or leasing for-ever. Personally I like to pay up front.
745,850
745,861
Issues with C++ 'new' operator?
I've recently come across this rant. I don't quite understand a few of the points mentioned in the article: The author mentions the small annoyance of delete vs delete[], but seems to argue that it is actually necessary (for the compiler), without ever offering a solution. Did I miss something? In the section 'Specialized allocators', in function f(), it seems the problems can be solved with replacing the allocations with: (omitting alignment) // if you're going to the trouble to implement an entire Arena for memory, // making an arena_ptr won't be much work. basically the same as an auto_ptr, // except that it knows which arena to deallocate from when destructed. arena_ptr<char> string(a); string.allocate(80); // or: arena_ptr<char> string; string.allocate(a, 80); arena_ptr<int> intp(a); intp.allocate(); // or: arena_ptr<int> intp; intp.allocate(a); arena_ptr<foo> fp(a); fp.allocate(); // or: arena_ptr<foo>; fp.allocate(a); // use templates in 'arena.allocate(...)' to determine that foo has // a constructor which needs to be called. do something similar // for destructors in '~arena_ptr()'. In 'Dangers of overloading ::operator new[]', the author tries to do a new(p) obj[10]. Why not this instead (far less ambiguous): obj *p = (obj *)special_malloc(sizeof(obj[10])); for(int i = 0; i < 10; ++i, ++p) new(p) obj; 'Debugging memory allocation in C++'. Can't argue here. The entire article seems to revolve around classes with significant constructors and destructors located in a custom memory management scheme. While that could be useful, and I can't argue with it, it's pretty limited in commonality. Basically, we have placement new and per-class allocators -- what problems can't be solved with these approaches? Also, in case I'm just thick-skulled and crazy, in your ideal C++, what would replace operator new? Invent syntax as necessary -- what would be ideal, simply to help me understand these problems better.
Well, the ideal would probably be to not need delete of any kind: have a garbage-collected environment, and let the programmer avoid the whole problem. The complaints in the rant seem to come down to:

- "I liked the way malloc does it"
- "I don't like being forced to explicitly create objects of a known type"

He's right about the annoying fact that you have to implement both new and new[], but you're forced into that by Stroustrup's desire to maintain the core of C's semantics. Since you can't tell a pointer from an array, you have to tell the compiler yourself. You could fix that, but doing so would mean changing the semantics of the C part of the language radically; you could no longer make use of the identity *(a+i) == a[i], which would break a very large subset of all C code.

So, you could have a language which:

- implements a more complicated notion of an array, and eliminates the wonders of pointer arithmetic, implementing arrays with dope vectors or something similar
- is garbage collected, so you don't need your own delete discipline

Which is to say, you could download Java. You could then extend that by changing the language so it isn't strongly typed, so type checking of the void * upcast is eliminated... but that means that you can write code that transforms a Foo into a Bar without the compiler seeing it. This would also enable duck typing, if you want it. The thing is, once you've done those things, you've got Python or Ruby with a C-ish syntax.

I've been writing C++ since Stroustrup sent out tapes of cfront 1.0; a lot of the history involved in C++ as it is now comes out of the desire to have an OO language that could fit into the C world. There were plenty of other, more satisfying, languages that came out around the same time, like Eiffel. C++ seems to have won. I suspect it won because it could fit into the C world.
745,897
746,266
Reading parts of an input file
I would like to read an input file in C++, for which the structure (or lack of) would be something like a series of lines with text = number, such as

input1 = 10
input2 = 4
set1 = 1.2
set2 = 1.e3

I want to get the number out of the line, and throw the rest away. Numbers can be either integers or doubles, but I know when they are one or the other. I also would like to read it such as

input1 = 10
input2=4
set1 =1.2
set2= 1.e3

so as to be more robust to the user. I think this means that it shouldn't be read in a formatted fashion. Anyway, is there a smart way to do that? I have already tried the following, but with minimal knowledge of what I've been doing, so the result was as expected... no success.

#include <stdio.h>
#include <stdlib.h>
#include <float.h>
#include <math.h>
#include <iostream>
#include <fstream>
#include <iomanip>
#include <cstdlib>
#include <boost/lexical_cast.hpp>
#include <string>

using namespace std;
using namespace boost;

int main(){
    string tmp;
    char temp[100];
    int i,j,k;

    ifstream InFile("input.dat");

    //strtol
    InFile.getline(temp,100);
    k=strtol(temp,0,10);
    cout << k << endl;

    //lexical_cast
    InFile.getline(temp,100);
    j = lexical_cast<int>(temp);
    cout << j << endl;

    //Direct read
    InFile >> tmp >> i;
    cout << i << endl;

    return 0;
}
Simply read one line at a time. Then split each line on the '=' sign. Use the stream functionality to do the rest.

#include <sstream>
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream data("input.dat");
    std::string line;
    while(std::getline(data,line))
    {
        std::stringstream str(line);
        std::string text;
        std::getline(str,text,'=');
        double value;
        str >> value;
    }
}

With error checking:

#include <sstream>
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream data("input.dat");
    std::string line;
    while(std::getline(data,line))
    {
        std::stringstream str(line);
        std::string text;
        double value;
        if ((std::getline(str,text,'=')) && (str >> value))
        {
            // Happy Days..
            // Do processing.
            continue; // To start next iteration of loop.
        }
        // If we get here. An error occurred.
        // By doing nothing the line will be ignored.
        // Maybe just log an error.
    }
}
746,034
747,240
glDrawPixels in grayscale?
I have an image that I'm rendering like this: glDrawPixels(image->width, image->height, GL_BGR, GL_UNSIGNED_BYTE, image->imageData); Is there anyway I can draw it in grayscale instead (without loading it into a texture first)? I don't care if only, say, the blue component is used for the gray value rather than the L2 norm or something, I just need a quick and dirty output. GL_LUMINANCE would be great, except that it won't work on a 3-channel image. @timday:
A perverse idea for you to try, and I've no idea if it'll work:

glPixelZoom(1.0f/3.0f, 1.0f);
glDrawPixels(3*width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, data);

i.e. treat your 3-channel image as being a 1-channel (grayscale) image 3 times as wide, and compensate for this by squishing the width using the x zoom factor. I believe GL always does nearest-neighbour sampling for zoomed glDrawPixels, so it ought to consistently pick out the same component from each triple of samples, as you require.
746,298
747,415
How do you build a debug .exe (MSVCRTD.lib) against a release built lib (MSVCRT.lib)?
I'm using Visual C++ 2008, SP1. I have a QT app (gui, .exe) in debug build config. It's set to use the Multi-threaded Debug DLL version of the CRT, aka MSVCRTD.lib. I'm linking against a 3rd party library that is built in release mode and using the Multi-threaded DLL (non-debug) version of the CRT, aka MSVCRT.lib. It links and runs but crashes at startup. While linking I get the warning: LINK : warning LNK4098: defaultlib 'MSVCRT' conflicts with use of other libs; use /NODEFAULTLIB:library I tried setting /NODEFAULTLIB:msvcrt.lib but that resulted in 5 linking errors due to missing symbols. So is it not possible to use two different libs? What is the alternative? Can I create a DLL out of the 3rd party lib I have? Or is that something the 3rd party would have to do? The exception at startup is: "Unhandled exception at ....... in MyApp.exe: ...... Access violation reading location 0x00000000f" The following is the call stack after the app is run and it crashes: MyApp.exe!std::_Aux_cont::_Getcont() + 0xa bytes C++ MyApp.exe!std::_Iterator_base_aux::_Getmycont() + 0x1b bytes C++ MyApp.exe!std::_Tree<std::_Tmap_traits<std::basic_string<char,std::char_traits<char>,std::allocator<char> >,unsigned int,std::less<std::basic_string<char,std::char_traits<char>,std::allocator<char> > >,std::allocator<std::pair<std::basic_string<char,std::char_traits<char>,std::allocator<char> > const ,unsigned int> >,0> >::const_iterator::operator*() + 0x28 bytes C++ MyApp.exe!std::_Tree<std::_Tmap_traits<std::basic_string<char,std::char_traits<char>,std::allocator<char> >,unsigned int,std::less<std::basic_string<char,std::char_traits<char>,std::allocator<char> > >,std::allocator<std::pair<std::basic_string<char,std::char_traits<char>,std::allocator<char> > const ,unsigned int> >,0> >::iterator::operator*() + 0xf bytes C++ MyApp.exe!std::map<std::basic_string<char,std::char_traits<char>,std::allocator<char> >,unsigned 
int,std::less<std::basic_string<char,std::char_traits<char>,std::allocator<char> > >,std::allocator<std::pair<std::basic_string<char,std::char_traits<char>,std::allocator<char> > const ,unsigned int> > >::operator[]() + 0xe9 bytes C++ MyApp.exe!ThirdPartyLib::client::`anonymous namespace'::init_xlt_mode() + 0x5f bytes C++ MyApp.exe!_GetCommandLineW@0() + 0x8d8f3 bytes C++ msvcr90d.dll!_initterm(void (void)* * pfbegin=0x006c4468, void (void)* * pfend=0x006cb0b8) Line 903 C MyApp.exe!__tmainCRTStartup() Line 501 + 0xf bytes C MyApp.exe!WinMainCRTStartup() Line 403 C kernel32.dll!7c817067()
You could build your project to link against the release CRT and enable debug information for your code. In "Project Properties" go to C++/General and change the Debug Information Format. In the "Optimization" section turn off optimization. Switch to the "Linker/Debugging" section and enable generation of debug info. Make sure to set the program database file (PDB). At this point your application will emit debugging information for everything in your code and link against the non-debug DLL CRT. This enables you to debug your application in a Release configuration while avoiding the problems associated with using multiple CRTs in the same application.
746,391
746,471
Deriving from a class with operator overloading
I want to create a collection of classes that behave like math vectors, so that multiplying an object by a scalar multiplies each field by that amount, etc. The thing is that I want the fields to have actual names, instead of being treated as an index. My original idea to implement this was creating a base class Rn with the overloads and then create derived classes with the pretty names. Something like this:

#include <iostream>
#include <algorithm>
using namespace std;

template<int N, class X=double>
struct Base{
    X xs[N];
    Base(){};
    Base(X *data){
        copy(data, data+N, xs);
    }
    Base operator*= (double d){
        for(int i=0; i<N; i++){
            xs[i] *= d;
        }
        return *this;
    }
    Base operator* (double d){
        Base answer = *this;
        answer *= d;
        return answer;
    }
    //also operators for +=, +, multiplication from left, maybe [] too
};

struct Derived : public Base<2>{
    Derived(double a, double b){
        foo() = a;
        bar() = b;
    }
    double &foo(){ return xs[0]; }
    double &bar(){ return xs[1]; }
};

int main()
{
    //this is OK:
    double data[2] = {0.0, 2.0};
    Base<2> b(data);
    b = b*17.0;
    cout << b.xs[0] << endl;

    //I can't do this:
    Derived x(0.0, 2.0);
    x = x*17.0;
    cout << x.foo() << endl;

    return 0;
}

I get a compiler error whenever I try to use one of the operators that requires copying. gcc gave me the following compiler error:

teste.cpp: In function ‘int main()’:
teste.cpp:52: error: no match for ‘operator=’ in ‘x = x.Derived::<anonymous>.Base<N, X>::operator* [with int N = 2, X = double](1.7e+1)’
teste.cpp:31: note: candidates are: Derived& Derived::operator=(const Derived&)

I think the problem is that the overloading functions deal with Base objects that can't be converted to Derived ones, so I can't use them in the derived class. However, I can't come up with a solution. Is there a way around this or should I use a totally different approach? Bonus question: is there some way that I can use std::valarray to keep from having to type lots and lots of operator overloads?
I'll only address the technical difficulty, not whether this is a good idea or not. The problem is that the result of operator* of Derived is a Base, and operator= of Derived (which is a default operator=) doesn't know how to "eat" a Base. A simple solution is to create a constructor of Derived that takes a Base, and does whatever is needed to initialize itself correctly. This would allow an on-the-fly conversion of a Base to a Derived, and would work for all other operators of Derived that expect a Base. Something along the lines of:

Derived(const Base<2>& B) : Base<2>( B ) { }
746,414
746,608
What is the best way to get all Windows startup processes using Windows API?
I know there are startup folders and certain registry keys I need to look into. But how do I do that using the Windows API? I'm interested to know for Windows XP and Vista. Thanks for your time.
There is no single API to get all the programs that run while the system is starting up. Consider all the things that Autoruns shows. Updates to that program occasionally allow it to show new classes of programs, and since those are updates to the program and not to the OS, it's obviously not some API that's changing to include more things. Rather, it's the program that's gradually expanding its notion of what an "auto-run program" is. Work out what you wish to consider a "startup process," and then you can determine what APIs or other techniques you can combine to find out what all those processes are. The places to look include the following:

- The "Run" registry key, like AJ mentioned.
- The "Startup" menu, as given by various constants used with SHGetFolderPath and some of its related functions.
- Tasks scheduled to run on login.
- Services.

Each bullet point includes at least one link to a page on MSDN that should get you started. If you need more, consider asking some more specific questions.
746,604
746,653
Bind pointer to member operators in C++
What is the point of them? I've never used them for anything, and I can't see myself needing to use them at all. Am I missing something about them or are they pretty much useless? EDIT: I don't know much about them, so a description about them might be necessary...
A PMF (pointer to member function) is like a normal (static) function pointer, except that, because non-static member functions require a this object, the PMF invocation syntax (.* or ->*) allows the this object to be specified (on the left-hand side). Here's an example of PMFs in use (note the "magic" line with the .* operator being used, (lhs.*opit->second)(...), and the syntax for creating a PMF, &class::func):

#include <complex>
#include <iostream>
#include <map>
#include <stack>
#include <stdexcept>
#include <string>

namespace {
    using std::cin;
    using std::complex;
    using std::cout;
    using std::invalid_argument;
    using std::map;
    using std::stack;
    using std::string;
    using std::underflow_error;

    typedef complex<double> complexd;
    typedef complexd& (complexd::*complexd_pmf)(complexd const&);
    typedef map<char, complexd_pmf> opmap;

    template <typename T>
    typename T::reference top(T& st) {
        if (st.empty()) throw underflow_error("Empty stack");
        return st.top();
    }
}

int main() {
    opmap const ops{{'+', &complexd::operator+=},
                    {'-', &complexd::operator-=},
                    {'*', &complexd::operator*=},
                    {'/', &complexd::operator/=}};
    char op;
    complexd val;
    stack<complexd> st;

    while (cin >> op) {
        opmap::const_iterator opit(ops.find(op));
        if (opit != ops.end()) {
            complexd rhs(top(st));
            st.pop();
            // For example of ->* syntax:
            complexd& lhs(top(st));     // complexd* lhs(&top(st));
            (lhs.*opit->second)(rhs);   // (lhs->*opit->second)(rhs);
            cout << lhs << '\n';        // cout << *lhs << '\n';
        } else if (cin.unget() && cin >> val) {
            st.push(val);
        } else {
            throw invalid_argument(string("Unknown operator ") += op);
        }
    }
}

It's a simple RPN calculator using complex numbers instead of real numbers (mostly because std::complex is a class type with overloaded operators). I've tested this with clang; your mileage may vary with other platforms. Input should be of the form (0,1). Spaces are optional, but can be added for readability.
746,702
4,024,686
"endpoint is a duplicate" when starting an RPC server
My program uses Microsoft RPC for interprocess communications. To prepare for receiving RPC calls the program runs the following sequence: RpcServerUseProtseqEp(), then RpcServerRegisterIf(), then RpcServerListen() The program starts its RPC server with the sequence above, works for some time, then terminates and may later be restarted by another program. The set of parameters values for RpcServerUseProtseqEp() is the same each time the program is run. When the sequence is run the first time after reboot it always succeeds, but on subsequent runs RpcServerUseProtseqEp() returns RPC_S_DUPLICATE_ENDPOINT ("The endpoint is a duplicate.") Currently I just ignore this particular error code and treat it as success, then all the other primitives usually work fine. What is the correct way of using RpcServerUseProtseqEp()? Should I do any cleanup to revoke the registered endpoint or just keep ignoring the RPC_S_DUPLICATE_ENDPOINT error code?
I had the same problem. I couldn't fix it completely, but this code works for me:

UCHAR* pszProtocolSequence = (UCHAR*)"ncacn_ip_tcp"; // Use RPC over TCP/IP
UCHAR* pszSecurity = NULL;
UCHAR* pszEndpoint = (UCHAR*)"9300";
UINT cMinCalls = 1;
UINT cMaxCalls = m_dwConcurrentChannels;
UINT fDontWait = FALSE;
int RPC_tries, MAX_RPC_Tries;

RPC_tries = 0;
MAX_RPC_Tries = 60;
do {
    status = ::RpcServerUseProtseqEp(
        pszProtocolSequence,
        cMaxCalls,
        pszEndpoint,
        pszSecurity);
    Sleep(1000);
    RPC_tries += 1;
} while (status != RPC_S_OK && RPC_tries < MAX_RPC_Tries);

For some reason you have to wait some time before using RpcServerUseProtseqEp again when you restart a Windows service.
746,735
746,753
Is there a way to use an "ostream" to write over an existing instance of std::string
I know all kinds of ostreams hold their own internal buffers. I want to know whether there is some kind of ostream which accepts an instance of std::string and writes to that instance. (I want to avoid redundant copies.) Note: My question is about the standard library, don't offer me other libraries that can do that, I know they exist. :)

Edit: After a request to be more specific ... Here is what I want, consider the following code:

std::string str = "bla bla bla ";
std::ospecialstream o(str);
o << 34 << " bla bla";
std::cout << str; //console output : "bla bla bla 34 bla bla"

I want ospecialstream such that it won't copy str's contents into some internal buffer but rather write to the same instance of str.

Edit #2: I need it for performance reasons; ostringstream will make a memcpy when created with a string and will also make a memcpy when the contents are retrieved.
If you are asking whether you can alter the buffering of ostreams, then the answer is yes. However, depending on what you actually want the buffer to do, this is not a particularly simple task. You will want to consult a book like Langer & Kreft for more info. L&K have an example of how to implement an unbuffered output stream; it begins on page 229 of the book. It's too long to reproduce here, but basically you need to redefine the overflow() method of a derived streambuf class. Note the book is not available on-line but the source code apparently is - see this page for details.
746,879
754,531
Using MFC MDI with multiple top level window
I am working on an application with multiple top-level windows. The main window is a MDIFrameWnd; I put some code in CWinApp to switch m_pMainWnd when switching top-level windows. It works fine, but fires an assert when I close one of the main windows. This assert is from CMDIChildWnd:

void CMDIChildWnd::AssertValid() const
{
    CFrameWnd::AssertValid();
    ASSERT(m_hMenuShared == NULL || ::IsMenu(m_hMenuShared));
}

It looks like the window being closed releases the shared menu. I tried to find out where the menu is released in the MFC framework, but could not find it. Could somebody help me? Thanks..
I found what the problem is. A menu bar control in the main frame will destroy the menu in its dtor. Thanks everyone.
747,310
747,344
Debugging on Linux for Windows Developer
Primarily I've done basic (novice level) software development on a Windows machine, but I've always had MS Visual Studio to help me step through the process of debugging. Now, however, it looks like I will be on Linux, so in order to get ready for the jump I want to make sure I have a tool/tools lined up to help me step through the code and debug. Unfortunately when I've verbally asked folks how they go about debugging on Linux, I typically get the following answer, "Oh, I just put a bunch of print statements." OMG! No way you say, but yes that is their answer. Since it is on Linux, and I will be working with C++ code on the CentOS 32-bit OS, I am hoping there is a preferred open-source solution. So, I guess I'm asking for the preferred open-source IDE for C++ code on CentOS Linux. Thanks for any insight and suggestions.
I would suggest using Eclipse. Eclipse is a mature IDE with plenty of support available. There is also Code::Blocks if you want to try something different.
747,440
747,462
TypeDef as an overridable class feature
If I have a class that contains a number of typedef'd variables, like so:

class X
{
    typedef token TokenType;
    bool doStuff()
    {
        TokenType data;
        fillData(&data);
        return true;
    }
};

Is there any way to override the typedef for TokenType in a derived class? N.B. This is NOT a good place to use templates (this is already a templated class and any changes are likely to result in infinitely recursive definitions [class X<class Y = class X<class Y ...> > etc.].)
What you can do is shadow, but not override. That is: you can define a derived class Y with its own typedefs for TokenType, but that will only come into play if somebody references Y::TokenType directly or via an object statically typed as Y. Any code that references X::TokenType statically will do so even for objects of type Y.
748,014
748,059
Do I need to manually close an ifstream?
Do I need to manually call close() when I use a std::ifstream? For example, in the code:

std::string readContentsOfFile(std::string fileName) {
  std::ifstream file(fileName.c_str());
  if (file.good()) {
      std::stringstream buffer;
      buffer << file.rdbuf();
      file.close();
      return buffer.str();
  }
  throw std::runtime_error("file not found");
}

Do I need to call file.close() manually? Shouldn't ifstream make use of RAII for closing files?
NO. This is what RAII is for; let the destructor do its job. There is no harm in closing it manually, but it's not the C++ way: it's programming in C with classes. If you want to close the file before the end of a function you can always use a nested scope. In the standard (27.8.1.5 Class template basic_ifstream), ifstream is to be implemented with a basic_filebuf member holding the actual file handle. It is held as a member so that when an ifstream object destructs, it also calls the destructor on basic_filebuf. And from the standard (27.8.1.2), that destructor closes the file:

virtual ~basic_filebuf();
Effects: Destroys an object of class basic_filebuf<charT,traits>. Calls close().
748,085
748,100
How to convert a double* to an array<double>(6)
I have a function that returns an array of 6 doubles.

double* Validation();

I would like to cast this return value in managed code.

array<double>^ validationPosition = gcnew array<double>(6);
validationPosition = Validation();

I get this error:

error C2440: '=' : cannot convert from 'double *' to 'cli::array<Type> ^'

How should I do this? Thanks.
If you want this to be in a managed array, you will need to copy it into the array. The native double* array will not be usable directly as a managed array. You can use Marshal::Copy to copy this, or just loop through your 6 values. You will also want to (probably) delete[] your return values, since it sounds like the array is allocated inside your Validation() routine.
748,547
754,405
Internet Explorer control won't load CSS and JS
I have embedded an IE control in a Win32 app. The only purpose of this app is to ensure that a URL is always loaded and being refreshed every N minutes. My problem is that almost always the first time the URL is accessed the CSS and JS files are not loaded. This behavior repeats randomly while the application is running. My webserver is not even registering the attempt by the browser to obtain the CSS and JS files. I don't know why, but the browser control is not asking for them from time to time. Anyone have an idea what could be causing this erratic behavior? How can I debug this?
If the purpose is to load URIs, why use the IE control? You can't predict when it will cache. Instead use the the WinInet API http://msdn.microsoft.com/en-us/library/aa383630(VS.85).aspx Here's a tutorial You need to set the desired caching behavior, but you can just program what you want it to do directly without depending on the behavior of the control. You can parse the result to make sure you get it right (check for the right HTTP result, parse the text, etc)
749,061
749,097
In C++, how can I get a pointer into a vector?
I'm writing some C++ code that manipulates a bunch of vectors that are changing in size and are thus being reallocated constantly. I would like to get a "pointer" into these vectors that remains valid even after reallocation of the vector. More specifically, I just want these "pointers" to remember which vector they point into and the index to which they point. When I dereference them using the standard (*ptr) syntax, I just want them to do the obvious lookup. Obviously, actual pointers will not be valid after reallocation, and my understanding is that iterators aren't valid after reallocation either. Note also that I don't care if elements are inserted before my objects, so these "pointers" really don't have to remember anything but a vector and an index. Now, I could easily write such a class myself. Has anyone (Boost? STL?) done it for me already? Edit: The answers don't address my question. I asked whether this functionality is in any standard library. I take the responses as a "no"?
Try a std::pair<vector<T>*, int>, as neither the position of the vector nor the index of the element changes. Or, as a class:

template<class T>
class VectorElementPointer
{
    vector<T>& vectorref;
    typename vector<T>::size_type index;
public:
    VectorElementPointer(vector<T>& vref, typename vector<T>::size_type index)
        : vectorref(vref), index(index) {}
    T& operator*() const { return vectorref[index]; }
    T* operator->() const { return &vectorref[index]; }
};

This is the easiest solution that comes to my mind, as neither the STL nor Boost contains anything that makes it easier.
749,129
749,483
std::set filled with boost::variant elements cannot be sorted descendantly?
typedef boost::variant<long long,double,string> possibleTypes ; set<possibleTypes,less<possibleTypes> > ascSet ; set<possibleTypes,greater<possibleTypes> > descSet ; When I try to compile I get a bunch of errors in some library headers. But, if I remove the third line (the one with descSet ) the code compile just fine. What's the problem? Can't boost::variant objects be sorted in descendant order?? Edit: I'm using Visual Studio 2005 and Boost 1.38.0 and the following command line: cl /EHsc /I"C:\boost_1_38_0" test.cpp Edit2 As it was suggested, by Doug T, if I define this: bool operator>(const possibleTypes& a, const possibleTypes& b){ return b < a ; } Then the following code doesn't compile: possibleTypes pt1="a", pt2="b" ; greater<possibleTypes> func ; cout << func(pt1,pt2) << endl ; However, this code compiles just fine: possibleTypes pt1="a", pt2="b" ; cout << (pt1 > pt2) << endl ; Could anyone help me to understand why?? I tried with VC++ 2005 and GCC 3.4.6
As it was suggested, if I define this: bool operator>(const possibleTypes& a, const possibleTypes& b){ return b < a ; } Then the following code doesn't compile: possibleTypes pt1="a", pt2="b" ; greater<possibleTypes> func ; cout << func(pt1,pt2) << endl ; However, this code compiles just fine: possibleTypes pt1="a", pt2="b" ; cout << (pt1 > pt2) << endl ; Could anyone help me to understand why?? I tried with VC++ 2005 and GCC 3.4.6
749,171
5,326,024
How to attach a winforms dialog to an existing toolbar/menubar (compiled C++ app)?
How do I attach a WinForms dialog to the Microsoft WordPad toolbar/menubar?
First you need to get your assembly loaded into the target process. Then you'll need to use Win32 API functions to create the new menu item, with a unique child ID. Finally, you'll need to subclass the window procedure and process WM_COMMAND messages, which are generated by Windows when a native menu item is selected. Be sure to pass all other messages, including WM_COMMAND messages that don't match your menu item, to the original window procedure by calling base.WndProc. Of course, this won't work on Windows 7 WordPad, since it doesn't have menus or toolbars; it uses the new "ribbon" control. The Ribbon API may give you a way to add new items, however.
749,517
749,555
How to download an image from an URL to a local dir?
I'm using C++ without .NET on Win32, how can I download an image over HTTP from a website without having to re-invent the wheel? Is there an API or library that provides a single function to do this? http://mywebsite/file.imgext --> C:\path\to\dir\file.imgext
You could use cURLpp. I haven't used it yet, but example20 looks like it could solve your problem.