494,064
494,077
C++ Visual Studio Compilation error
I get the following compilation error: fatal error C1189: #error : ERROR: Use of C runtime library internal header file. I have absolutely no idea what it means. Can anyone throw some light on it? The complete error: C:\Program Files\Microsoft Visual Studio 8\VC\ce\include\crtdefs.h(100) : fatal error C1189: #error : ERROR: Use of C runtime library internal header file. Generating Code...
You've probably got crt/src in your include directory search path. The headers in there are used to build the C Runtime - they aren't intended for use in user programs (even though they may have the same names as files that are intended to be included). If you look in the header that's causing the problem, you'll probably see something like this:

/* This version of the header files is NOT for user programs.
 * It is intended for use when building the C runtimes ONLY.
 * The version intended for public use will not have this message.
 */

You need to fix your include search path. I see you have ce/include in your include search path - are you building a WinCE application? If so, your build should be defining _WIN32_WCE to prevent this problem. If not, this directory should not be in the include path.
494,448
494,492
The procedure entry point _wsplitpath_s could not be located in the dynamic link library msvcrt.dll
I recently upgraded an MFC C++ project which includes .NET assemblies from Visual Studio 2005 to 2008. Now whenever it's installed it displays the following message: The procedure entry point _wsplitpath_s could not be located in the dynamic link library msvcrt.dll. I've installed the Microsoft Visual C++ 2008 SP1 Redistributable Package (x86), still no luck. And Google didn't prove to be very helpful either.
The following URLs may be of interest to you:

http://msdn.microsoft.com/en-us/library/bb166245.aspx
http://blogs.msdn.com/jameslau/archive/2008/02/13/upgrading-vs-2005-packages-to-vs-2008-a-more-advanced-guide.aspx
http://blogs.msdn.com/quanto/archive/2008/02/19/migrating-vs-2005-packages-to-vs-2008.aspx
http://blogs.msdn.com/quanto/archive/2008/01/19/migrating-from-vs-2005-to-vs-2008-with-microsoft-visualstudio-shell-references.aspx
494,530
543,419
Why does MicroQuill SmartHeap throw "mem_bad_pointer" errors after I embed Perl?
I am embedding Perl in a C++ application that uses SmartHeap. Regardless of whether I compile the Perl to use its own malloc or the system's, I get a bunch of mem_bad_pointer error dialogs. It seems to work fine when I just click "OK" and ignore the errors, but obviously I need to actually solve the problem. Do I maybe need to compile SmartHeap into my Perl build? Is that even feasible? Below is the only documentation page about mem_bad_pointer I could find, but I am no closer to solving the problem. I do not understand how or where Perl and SmartHeap are conflicting with each other. Any pointers appreciated.

The pointer was allocated by a memory manager other than SmartHeap, such as from another DLL or EXE, or from the compiler runtime library. Examine your map file to see that the SmartHeap version of malloc, _fmalloc/farmalloc, or operator new is being linked in.

The pointer is "wild" (uninitialized), is allocated on the stack (local variable), or is otherwise invalid.

The pointer was previously freed. If SmartHeap has freed the page from which the pointer was originally allocated, SmartHeap won't be able to detect that it's a double free. However, SmartHeap will report the invalid pointer. Use dbgMemDeferFreeing to catch this type of bug.

The pointer was incremented or decremented since the time of allocation.

For 16-bit x86, the pointer was cast to a near pointer after allocation, in which case the segment portion of the pointer has been lost.

The memory pool from which the pointer was allocated has been freed, or SmartHeap has been unregistered from the task.

The task from which the pointer was allocated has terminated (see section B.4).
Without seeing the code it is hard to debug the problem. Perhaps you are allocating memory using both SmartHeap and the regular memory manager. This can happen when you allocate memory in a DLL built without SmartHeap. Depending on your code, the allocation could be fine and you may be writing outside the allocated memory area.
494,571
494,585
Controlling object creation
I have a class whose objects must be created on the heap. Is there any better way of doing this than the following?

class A
{
public:
    static A* createInstance();      // Allocate using new and return
    static void deleteInstance(A*);  // Free the memory using delete

private:
    // Constructor and destructor are private so that the object cannot be created on the stack
    A();
    ~A();
};
This is pretty much the standard pattern for making a class heap-only. It can't really be simplified much, except that you could just make the destructor private without forcing the use of a factory method for creation.
494,597
494,630
C++ member variable aliases?
I'm pretty sure this is possible, because I'm pretty sure I've seen it done. I think it is awesome, but I will gladly accept answers along the lines of "this is a terrible idea because ____". Say we have a basic struct:

struct vertex
{
    float x, y, z;
};

Now, I want to implement aliases on these variables:

vertex pos;
vertex col;
vertex arr;

pos.x = 0.0f; pos.y = 0.5f; pos.z = 1.0f;
col.r = 0.0f; col.g = 0.5f; col.b = 1.0f;
arr[0] = 0.0f; arr[1] = 0.5f; arr[2] = 1.0f;

Ideally the third syntax would be indistinguishable from an array. That is, if I sent arr as a reference parameter to a function expecting an array of floats into which it will store data (e.g. many of the OpenGL glGet functions), it would work fine. What do you think? Possible? Possible but stupid?
What I would do is make accessors:

struct Vertex
{
    float& r() { return values_[0]; }
    float& g() { return values_[1]; }
    float& b() { return values_[2]; }
    float& x() { return values_[0]; }
    float& y() { return values_[1]; }
    float& z() { return values_[2]; }

    float  operator [] (unsigned i) const { return values_[i]; }
    float& operator [] (unsigned i)       { return values_[i]; }

    operator float*() { return values_; }
    operator const float*() const { return values_; }

private:
    float values_[3];
};
494,629
495,404
Building Boost for static linking (MinGW)
I'm building Boost (I'm using System and Filesystem) for MinGW using bjam:

bjam --toolset=gcc stage

And it builds fine, but I want to be able to statically link to it (I have to have a single file for the final product), so I tried:

bjam --link=static --toolset=gcc stage

But I get the same output. Any ideas?

edit: second question in a row I've answered moments after posting :p Guess I'll leave this up here for others, though.

bjam --build-type=complete --toolset=gcc stage

will build both dynamic and static for sure.
I think link is a property as opposed to an option for bjam. That means that there should be no -- before it. This is my command line for building only static libraries (Visual C++ though):

bjam install --toolset=msvc variant=release link=static threading=multi runtime-link=static

Mapping that to your original build command, I would say it should look something like this:

bjam --toolset=gcc link=static stage

or perhaps:

bjam stage --toolset=gcc link=static

Try running bjam --help for more info on properties and options for bjam.
494,653
495,277
How can I use the TRACE macro in non-MFC projects?
I want to use the TRACE() macro to get output in the debug window in Visual Studio 2005 in a non-MFC C++ project, but which additional header or library is needed? Is there a way of putting messages in the debug output window and how can I do that?
Build your own.

trace.h:

#include <windows.h>

bool _trace(TCHAR *format, ...);

#ifdef _DEBUG
#define TRACE _trace
#else
#define TRACE false && _trace
#endif

trace.cpp:

#include "trace.h"
#include <stdarg.h>

bool _trace(TCHAR *format, ...)
{
    TCHAR buffer[1000];

    va_list argptr;
    va_start(argptr, format);
    wvsprintf(buffer, format, argptr);
    va_end(argptr);

    OutputDebugString(buffer);
    return true;
}

(Note the prototype is declared unconditionally: the release-mode expansion false && _trace still references the symbol, so wrapping the declaration in #ifdef _DEBUG would break release builds.) Then just #include "trace.h" and you're all set. Disclaimer: I just copy/pasted this code from a personal project and took out some project-specific stuff, but there's no reason it shouldn't work. ;-)
495,021
495,056
Why can templates only be implemented in the header file?
Quote from The C++ standard library: a tutorial and handbook: The only portable way of using templates at the moment is to implement them in header files by using inline functions. Why is this? (Clarification: header files are not the only portable solution. But they are the most convenient portable solution.)
Caveat: It is not necessary to put the implementation in the header file; see the alternative solution at the end of this answer.

Anyway, the reason your code is failing is that, when instantiating a template, the compiler creates a new class with the given template argument. For example:

template<typename T>
struct Foo
{
    T bar;
    void doSomething(T param) { /* do stuff using T */ }
};

// somewhere in a .cpp
Foo<int> f;

When reading this line, the compiler will create a new class (let's call it FooInt), which is equivalent to the following:

struct FooInt
{
    int bar;
    void doSomething(int param) { /* do stuff using int */ }
};

Consequently, the compiler needs to have access to the implementation of the methods, to instantiate them with the template argument (in this case int). If these implementations were not in the header, they wouldn't be accessible, and therefore the compiler wouldn't be able to instantiate the template.

A common solution to this is to write the template declaration in a header file, then implement the class in an implementation file (for example .tpp), and include this implementation file at the end of the header.

Foo.h

template <typename T>
struct Foo
{
    void doSomething(T param);
};

#include "Foo.tpp"

Foo.tpp

template <typename T>
void Foo<T>::doSomething(T param)
{
    //implementation
}

This way, implementation is still separated from declaration, but is accessible to the compiler.

Alternative solution

Another solution is to keep the implementation separated, and explicitly instantiate all the template instances you'll need:

Foo.h

// no implementation
template <typename T> struct Foo { ... };

Foo.cpp

// implementation of Foo's methods

// explicit instantiations
template class Foo<int>;
template class Foo<float>;
// You will only be able to use Foo with int or float

If my explanation isn't clear enough, you can have a look at the C++ Super-FAQ on this subject.
495,452
495,486
Cancel a DeferWindowPos
I am doing a series of window resizings using the DeferWindowPos functionality. Suppose I have already opened the DeferWindowPos handle and called DeferWindowPos a few times, and now I want to cancel everything: not call EndDeferWindowPos. I tried CloseHandle( hDWP ), but it does not work (it crashes). If I simply return from my function, I assume it will leak a handle. Is it possible to terminate the DeferWindowPos without calling EndDeferWindowPos?

// Initialize
HDWP hDWP = BeginDeferWindowPos( ... );
for( ... )
{
    // Calculate new rectangle
    CRect rcNew;
    ...
    // Oh, now I want to return from my function; I want to cancel everything

    // Accumulate
    hDWP = DeferWindowPos( hDWP, hWnd, 0, rcNew.left, rcNew.top,
                           rcNew.Width(), rcNew.Height(), SWP_NOZORDER );
}
// Finally
BOOL bResult = EndDeferWindowPos( hDWP );

If this is not possible, I will simply accumulate them in a temporary vector and call the Defer stuff at the end, when I am certain I want to do them all.
The only reference to any kind of "abort" functionality I see is this:

If any of the windows in the multiple-window-position structure have the SWP_HIDEWINDOW or SWP_SHOWWINDOW flag set, none of the windows are repositioned.

This is coming from here.
495,795
495,899
How do I use a third-party DLL file in Visual Studio C++?
I understand that I need to use LoadLibrary(). But what other steps do I need to take in order to use a third-party DLL file? I simply jumped into C++ and this is the only part that I do not get (as a Java programmer). I am just looking into how I can use a Qt Library and tesseract-ocr, yet the process makes no sense to me and is so difficult to google. How do I tell the compiler of the functions that I am using? Should there be an include file from the third-party vendor?
As everyone else says, LoadLibrary is the hard way to do it, and is hardly ever necessary. The DLL should have come with a .lib file for linking, and one or more header files to #include into your sources. The header files will define the classes and function prototypes that you can use from the DLL. You will need this even if you use LoadLibrary. To link with the library, you might have to add the .lib file to the project configuration under Linker/Input/Additional Dependencies.
495,922
495,951
What is the best way to implement a heartbeat in C++ to check for socket connectivity?
Hey gang. I have just written a client and server in C++ using sys/socket. I need to handle the situation where the client is still active but the server is down. One suggested way to do this is to use a heartbeat to periodically assert connectivity, and if there is none, to try to reconnect every X seconds for Y period of time and then time out. Is this "heartbeat" the best way to check for connectivity? The socket I am using might have information on it; is there a way to check that there is a connection without messing with the buffer?
If you're using TCP sockets over an IP network, you can use the TCP protocol's keepalive feature, which will periodically check the socket to make sure the other end is still there. (This also has the advantage of keeping the forwarding record for your socket valid in any NAT routers between your client and your server.) Here's a TCP keepalive overview which outlines some of the reasons you might want to use TCP keepalive; this Linux-specific HOWTO describes how to configure your socket to use TCP keepalive at runtime. It looks like you can enable TCP keepalive in Windows sockets by setting SIO_KEEPALIVE_VALS using the WSAIoctl() function. If you're using UDP sockets over IP you'll need to build your own heartbeat into your protocol.
496,034
496,076
Most efficient replacement for IsBadReadPtr?
I have some Visual C++ code that receives a pointer to a buffer with data that needs to be processed by my code and the length of that buffer. Due to a bug outside my control, sometimes this pointer comes into my code uninitialized or otherwise unsuitable for reading (i.e. it causes a crash when I try to access the data in the buffer.) So, I need to verify this pointer before I use it. I don't want to use IsBadReadPtr or IsBadWritePtr because everyone agrees that they're buggy. (Google them for examples.) They're also not thread-safe -- that's probably not a concern in this case, though a thread-safe solution would be nice. I've seen suggestions on the net of accomplishing this by using VirtualQuery, or by just doing a memcpy inside an exception handler. However, the code where this check needs to be done is time sensitive so I need the most efficient check possible that is also 100% effective. Any ideas would be appreciated. Just to be clear: I know that the best practice would be to just read the bad pointer, let it cause an exception, then trace that back to the source and fix the actual problem. However, in this case the bad pointers are coming from Microsoft code that I don't have control over so I have to verify them. Note also that I don't care if the data pointed at is valid. My code is looking for specific data patterns and will ignore the data if it doesn't find them. I'm just trying to prevent the crash that occurs when running memcpy on this data, and handling the exception at the point memcpy is attempted would require changing a dozen places in legacy code (but if I had something like IsBadReadPtr to call I would only have to change code in one place).
a thread-safe solution would be nice

I'm guessing it's only IsBadWritePtr that isn't thread-safe.

just doing a memcpy inside an exception handler

This is effectively what IsBadReadPtr is doing ... and if you did it in your code, then your code would have the same bug as the IsBadReadPtr implementation: http://blogs.msdn.com/oldnewthing/archive/2006/09/27/773741.aspx

Edit: The only problem with IsBadReadPtr that I've read about is that the bad pointer might be pointing to (and so you might accidentally touch) a stack's guard page. Perhaps you could avoid this problem (and therefore use IsBadReadPtr safely) by:

- knowing what threads are running in your process
- knowing where the threads' stacks are, and how big they are
- walking down each stack, deliberately touching each page of the stack at least once, before you begin to call IsBadReadPtr

Also, some of the comments associated with the URL above suggest using VirtualQuery.
496,204
496,230
How do I write binary data for 7z archive format?
I've been poring over the format description and source code for the 7z archive format, but I'm still having trouble writing a valid container. I assume I can create an empty container... anyway, here's my start:

std::ofstream ofs(archivename.c_str(), std::ios::binary|std::ios::trunc);

Byte signature[6] = {'7', 'z', 0xBC, 0xAF, 0x27, 0x1C};
Byte major = 0;
Byte minor = 3;
ofs.write((const char*)signature, 6);
ofs.write((const char*)major, 1);
ofs.write((const char*)minor, 1);

UInt64 offset = 0;
UInt64 size = 0;
UInt32 crc = 0;
ofs.write((const char*)offset, 4);
ofs.write((const char*)size, 8);
ofs.write((const char*)crc, 8);
ofs.write((const char*)CrcCalc(0, 0), 8);

ofs.close();

I think my main problem is a lack of understanding of std::ofstream::write(). Byte is an 'unsigned char'; UInt64 and UInt32 are both 'unsigned long'.

UPDATE0: As everyone points out, it'd be a problem if I ran this on a big-endian machine. That's not the case here. Per Fredrik Janssen, I should be casting the address of the non-arrays. I should also mention that CrcCalc() is a function in the LZMA SDK. Adding & helps a bit; it's that first unsigned char[6] that's having some problems.

UPDATE1: Working code to get an empty archive file below.
static void SetUInt32(Byte *p, UInt32 d)
{
    for (int i = 0; i < 4; i++, d >>= 8)
        p[i] = (Byte)d;
}

static void SetUInt64(Byte *p, UInt64 d)
{
    for (int i = 0; i < 8; i++, d >>= 8)
        p[i] = (Byte)d;
}

void make_7z_archive()
{
    CrcGenerateTable();

    std::ofstream ofs(archivename.c_str(), std::ios::binary|std::ios::trunc);

    Byte signature[6] = {'7', 'z', 0xBC, 0xAF, 0x27, 0x1C};
    Byte major = 0;
    Byte minor = 3;
    ofs.write((const char*)signature, 6);
    ofs.write((const char*)&major, 1);
    ofs.write((const char*)&minor, 1);

    UInt64 offset = 0;
    UInt64 size = 0;
    UInt32 crc = 0;

    Byte buf[24];
    SetUInt64(buf + 4, offset);
    SetUInt64(buf + 12, size);
    SetUInt32(buf + 20, crc);
    SetUInt32(buf, CrcCalc(buf + 4, 20));
    ofs.write((const char*)buf, 24);

    ofs.close();
}

NOTE: CrcGenerateTable() and CrcCalc() are from the LZMA SDK.
I don't know the format of 7z, but I notice that when you write out offset, size and crc, these will be written to the file in your CPU's native byte order (I assume you have a little-endian CPU). Edit: And probably worse, you are missing the & before major, minor, offset, size and crc, i.e. you are casting the actual values to a pointer.
496,214
502,367
How to use T4 code generation templates with VS C++ projects?
T4 template files are automatically recognizable by the IDE under C# projects, but I have no clue on how they can be integrated into C++ projects (other than using make files). Any ideas?
T4 Template files can be integrated into C++ projects, but it's a bit more work than with a C#/VB project. Create a new text file in your C++ project and give it a .tt extension. Then write your template as normal. A C++ project then needs further work to get it to transform the templates. The quick and dirty way I got it to work was to add a custom build step and have it call "C:\Program Files\Common Files\Microsoft Shared\TextTemplating\1.1\TextTransform.exe" directly. Another way I found was to add a custom MSBuild task. Instructions can be found here This page has more information and some good links to other pages on using T4 code generation.
496,304
496,608
Calling Ruby class methods from C++
I'm trying to call a class method from C++. I've tried all combinations of rb_intern I could think of to make it work, but I've gotten nothing.

Example class:

class CallTest
  def go
    (do something here)
  end
end

Trying to call in C++:

rb_funcall(?, rb_intern("go"), 0);

What goes in the ? space? I know if I use Qnil there, it will call global functions, but I'd prefer class methods. Am I heading in the wrong direction? Also, I'd prefer not to have to know the class name ahead of time if possible, but if I have to require that I know what it is, I can try passing it by name to my application. I'm using SWIG to generate the binding.
First off, go is, as you've defined it, not a class method but an instance method. As an object-oriented language, all Ruby methods require a receiver, that is, an object that the method is invoked on. For instance methods, the receiver is an instance of the class; for class methods, the receiver is the class object itself. The ? placeholder you have is the slot for the receiver of the method call.

If you want to leave it as an instance method, then you need to do this:

rb_funcall(a_CallTest_instance, rb_intern("go"), 0);

where a_CallTest_instance is an instance of CallTest you created using rb_class_new_instance().

If you make it into a class method:

class CallTest
  def self.go
    # ...
  end
end

then you need to use the CallTest class itself as the receiver:

rb_funcall(klass, rb_intern("go"), 0);

You can get a reference to the CallTest class using rb_const_get():

VALUE klass = rb_const_get(rb_cObject, rb_intern("CallTest"));

Use rb_cObject there, since CallTest is defined in the global context. I'd suggest reading the chapter in the Pickaxe about extending Ruby.
496,305
496,347
Widget wag = *new Widget()
I just came across a C++ SDK that makes heavy use of this really weird *new pattern. I don't understand why they do it like that at all. What's the point of constructing objects with *new, e.g. Widget wag = *new Widget();? Update: Interesting, they are actually doing XPtr<T> p = *new T; - must be the semantics of some smart-pointer magic. Still doesn't make much sense. I do trust the SDK is high quality.
It constructs a new object and then makes a copy of it. The pointer to the original object is discarded, so there may be a memory leak. There isn't necessarily a memory leak, though. It could be that Widget maintains a list of all its instances, and it updates that list in its constructor and destructor. There might be some other way of attaining the contents of the list. But it seems rather pointless. The reason was probably due to a misunderstanding of how the code really works. Your best bet for finding out why the code was made that way is to ask the ones who wrote it. There might be clues in the code's comments. Can you reveal more specifics about what code you're looking at?
496,440
496,450
C++ virtual function from constructor
Why does the following example print "0", and what must change for it to print "1" as I expected?

#include <iostream>

struct base
{
    virtual const int value() const { return 0; }
    base() { std::cout << value() << std::endl; }
    virtual ~base() {}
};

struct derived : public base
{
    virtual const int value() const { return 1; }
};

int main(void)
{
    derived example;
}
Because base is constructed first and hasn't "matured" into a derived yet. It can't call methods on an object when it can't guarantee that the object is already properly initialized.
496,473
496,494
Why does the g++ 4.0 version of map<T>::erase(map<T>::iterator) not return an iterator?
I'm porting a medium-sized C++ project from Visual Studio 2005 to Mac OS, Xcode / GCC 4.0. One of the differences I have just stumbled across has to do with erasing an element from a map. In Visual Studio I could erase an element specified by an iterator and assign the return value to the iterator to get the position of the next element. This way the iterator would not point to some invalid address after erasing. In other words, in Visual Studio I could do this:

itor = m_ResourceMap.erase(itor);

In GCC 4.0, the erase function returns void, so I can't do that. Does that mean that the following map elements are shifted one backwards, so the iterator points automatically to the next element, or does that mean that I have to increment the iterator afterwards? The online STL documentation is not very concise on the subject, and Xcode does not seem to have any.

Thanks for your help,
Adrian
No. You have to increment the iterator before erasing. Like this:

m_ResourceMap.erase(itor++);

The iterator is invalidated by the erase, so you can't increment it afterwards.
496,664
497,158
C++ Dynamic Shared Library on Linux
This is a follow-up to Dynamic Shared Library compilation with g++. I'm trying to create a shared class library in C++ on Linux. I'm able to get the library to compile, and I can call some of the (non-class) functions using the tutorials that I found here and here. My problems start when I try to use the classes that are defined in the library. The second tutorial that I linked to shows how to load the symbols for creating objects of the classes defined in the library, but stops short of using those objects to get any work done. Does anyone know of a more complete tutorial for creating shared C++ class libraries that also shows how to use those classes in a separate executable? A very simple tutorial that shows object creation, use (simple getters and setters would be fine), and deletion would be fantastic. A link or a reference to some open source code that illustrates the use of a shared class library would be equally good. Although the answers from codelogic and nimrodm do work, I just wanted to add that I picked up a copy of Beginning Linux Programming since asking this question, and its first chapter has example C code and good explanations for creating and using both static and shared libraries. These examples are available through Google Book Search in an older edition of that book.
myclass.h

#ifndef __MYCLASS_H__
#define __MYCLASS_H__

class MyClass
{
public:
    MyClass();

    /* use virtual otherwise linker will try to perform static linkage */
    virtual void DoSomething();

private:
    int x;
};

#endif

myclass.cc

#include "myclass.h"
#include <iostream>

using namespace std;

extern "C" MyClass* create_object()
{
    return new MyClass;
}

extern "C" void destroy_object( MyClass* object )
{
    delete object;
}

MyClass::MyClass()
{
    x = 20;
}

void MyClass::DoSomething()
{
    cout << x << endl;
}

class_user.cc

#include <dlfcn.h>
#include <iostream>
#include "myclass.h"

using namespace std;

int main(int argc, char **argv)
{
    /* on Linux, use "./myclass.so" */
    void* handle = dlopen("myclass.so", RTLD_LAZY);

    MyClass* (*create)();
    void (*destroy)(MyClass*);

    create = (MyClass* (*)())dlsym(handle, "create_object");
    destroy = (void (*)(MyClass*))dlsym(handle, "destroy_object");

    MyClass* myClass = (MyClass*)create();
    myClass->DoSomething();
    destroy( myClass );
}

On Mac OS X, compile with:

g++ -dynamiclib -flat_namespace myclass.cc -o myclass.so
g++ class_user.cc -o class_user

On Linux, compile with:

g++ -fPIC -shared myclass.cc -o myclass.so
g++ class_user.cc -ldl -o class_user

If this were for a plugin system, you would use MyClass as a base class and define all the required functions virtual. The plugin author would then derive from MyClass, override the virtuals and implement create_object and destroy_object. Your main application would not need to be changed in any way.
497,179
498,034
How does one get the instance of a Ruby class running in the current RB file? (Embedding Ruby in C++)
I have embedded Ruby inside my C++ application. I have generated the bindings using SWIG. Basically, I run the Ruby file and then Ruby takes over and calls my C++ class. Based on my previous question, I would like to get the current instance of the class that is defined in the Ruby file back to the C++ class so that I may execute instance methods. I execute the Ruby file as follows:

rb_eval_string_protect(<ruby script string>, &status);
rb_funcall(Qnil, rb_intern("main"), 0);

The global main method in the script creates an instance of the defined class in the file. That's the instance I am after. If I have to, I will add a parameter or another function to pass the instance back; however, I'm not sure how to define that in C++ so that when SWIG generates the binding, it all works OK. Any help would be appreciated.

Previous question: Calling Ruby class methods from C++
The C API for Ruby does its best to preserve Ruby's functional nature, so rb_eval_string_protect() returns the VALUE of the last line of the script given, and rb_funcall() returns the VALUE of the last line of the method invoked. So the trick is really to think of it as: how would you get that instance value in pure Ruby? If it's just the return value of main, like

# I'm a ruby script!
main_retval = main()

then capturing the return value in C is similar:

// I'm some C (or C++) code
VALUE main_retval;
// ...
rb_eval_string_protect("...", &status);
main_retval = rb_funcall(Qnil, rb_intern("main"), 0);

and would give you a reference to the Ruby object returned by main. You can use this object as normal, calling methods and the like:

VALUE main_retval_as_string = rb_funcall(main_retval, rb_intern("to_s"), 0);
497,347
2,089,909
How do I find out whether a ResourceManager contains a key without calling ResourceManager.GetString() and catching the exception?
So I have a ResourceManager that points to a resource file with a bunch of strings in it. When I call GetString() with a key that doesn't exist in the file, I get a System.Resources.MissingManifestResourceException. I need to find out whether the Resource contains the specified key without using exception handling to control program flow. Is there a keys.exists() method or something?
Note that by default, it appears that a new .net project's Resources.resx is going to be in the Properties folder, so you'll need to create the ResourceManager like this: rm = new ResourceManager("MyNamespace.Properties.MyResource", assembly); Alternatively, by getting frustrated and deleting/recreating Resources.resx, you'll probably create it in the root of the project, in which case the thing you were doing before, namely this: rm = new ResourceManager("MyNamespace.MyResource", assembly); will work. This is exactly what happened to me today, and I'm adding this post in the hope that it will spare someone some grief.
497,408
497,497
Template typedef error
Can anyone explain why this code gives the error error C2039: 'RT' : is not a member of 'ConcreteTable' (at least when compiled with VS2008 SP1)?

class Record {};

template <class T>
class Table
{
public:
    typedef typename T::RT Zot;  // << error occurs here
};

class ConcreteTable : public Table<ConcreteTable>
{
public:
    typedef Record RT;
};

What can be done to fix it up? Thanks!

Update: Thanks for pointing out the issue, and for all the suggestions. This snippet was based on code that was providing an extensibility point within an existing code base, and the primary design goal was reducing the amount of work (typing) required to add new extensions using this mechanism. A separate 'type traits' style class actually fits into the solution best. Especially as I could even wrap it in a C-style macro if the style police aren't looking!
That's because the class ConcreteTable is still incomplete when Table<ConcreteTable> is instantiated (it is instantiated in ConcreteTable's own base-class list), so the compiler doesn't see T::RT yet. I'm not really sure how exactly the C++ standard handles this kind of recursion (I suspect it's undefined), but it doesn't work how you'd expect (and this is probably good, otherwise things would be much more complicated - you could express a logical paradox with it, like a const bool which is false iff it is true).

Fixing

With typedefs, I think you cannot hope for more than passing RT as an additional template parameter, like this:

template <class T, class RT>
class Table
{
public:
    typedef RT Zot;
};

class ConcreteTable : public Table<ConcreteTable, Record>
{
public:
    typedef Record RT;
};

If you don't insist on RT being available as Table<>::Zot, you can put it inside a nested struct, which isn't instantiated until it's actually used:

template <class T>
class Table
{
public:
    struct S
    {
        typedef typename T::RT Zot;
    };
};

class ConcreteTable : public Table<ConcreteTable>
{
public:
    typedef Record RT;
};

Or even an external traits struct:

template <class T>
struct TableTraits;

template <class T>
struct TableTraits<Table<T> >
{
    typedef typename T::RT Zot;
};

If you only want the type to be an argument/return type of a method, you can do it by templatizing this method, e.g.:

void f(typename T::RT*);  // this won't work

template <class U>
void f(U*);               // this will

The point of all these manipulations is to postpone the need for T::RT as late as possible, particularly till after ConcreteTable is a complete class.
497,428
504,907
How do I import an RSA Public Key from .NET into OpenSSL
I have a .NET program and a Borland Win32 program that need to pass some cryptographically secure information. The plan right now is to have the .NET app create a public/private key pair, store the public key on disk and keep the private key in memory for as long as the .NET program is running. The Borland app will then read the public key from disk and use the OpenSSL library to encrypt the data with the public key and write that result to disk. Finally the .NET app will read the encrypted data and decrypt it with the private key. What is the best way to export the key from .NET and in turn import it into the OpenSSL library?
In the .NET program create a new RSACryptoServiceProvider. Export the public key as RSAParameters and write the Modulus and Exponent values to disk. Like this: RSACryptoServiceProvider rsa = new RSACryptoServiceProvider(4096); //4096 bit key RSAParameters par = rsa.ExportParameters(false); // export the public key File.WriteAllBytes(@"C:\modulus.bin", par.Modulus); // write the modulus and the exponent to disk File.WriteAllBytes(@"C:\exponent.bin", par.Exponent); On the C++ side you'll need to read the modulus and exponent values from disk and convert them into BIGNUM values. These values will be loaded into an RSA key and then you can encrypt the plain text and write the cipher text to disk. Like this: RSA * key; unsigned char *modulus; unsigned char *exp; FILE * fp = fopen("c:\\modulus.bin", "rb"); // Read the modulus from disk modulus = new unsigned char[512]; memset(modulus, 0, 512); fread(modulus, 512, 1, fp); fclose(fp); fp = fopen("c:\\exponent.bin", "rb"); // Read the exponent from disk exp = new unsigned char[3]; memset(exp, 0, 3); fread(exp, 3, 1, fp); fclose(fp); BIGNUM * bn_mod = NULL; BIGNUM * bn_exp = NULL; bn_mod = BN_bin2bn(modulus, 512, NULL); // Convert both values to BIGNUM bn_exp = BN_bin2bn(exp, 3, NULL); key = RSA_new(); // Create a new RSA key key->n = bn_mod; // Assign in the values key->e = bn_exp; key->d = NULL; key->p = NULL; key->q = NULL; unsigned char plain[] = "some plaintext"; // the data to encrypt int maxSize = RSA_size(key); // Find the length of the cipher text unsigned char * cipher = new unsigned char[maxSize]; memset(cipher, 0, maxSize); RSA_public_encrypt(strlen((char*)plain), plain, cipher, key, RSA_PKCS1_PADDING); // Encrypt plaintext fp = fopen("C:\\cipher.bin", "wb"); // write ciphertext to disk fwrite(cipher, maxSize, 1, fp); fclose(fp); (Note: assigning key->n and key->e directly only works in OpenSSL versions before 1.1.0; from 1.1.0 on the RSA struct is opaque and you call RSA_set0_key(key, bn_mod, bn_exp, NULL) instead.) Finally you can take the ciphertext and decrypt it in C# without any difficulty. 
byte[] cipher = File.ReadAllBytes(@"c:\cipher.bin"); // Read ciphertext from file byte[] plain = rsa.Decrypt(cipher, false); // Decrypt ciphertext Console.WriteLine(ASCIIEncoding.ASCII.GetString(plain)); // Decode and display plain text
497,630
11,609,941
Safely override C++ virtual functions
I have a base class with a virtual function and I want to override that function in a derived class. Is there some way to make the compiler check if the function I declared in the derived class actually overrides a function in the base class? I would like to add some macro or something that ensures that I didn't accidentally declare a new function, instead of overriding the old one. Take this example: class parent { public: virtual void handle_event(int something) const { // boring default code } }; class child : public parent { public: virtual void handle_event(int something) { // new exciting code } }; int main() { parent *p = new child(); p->handle_event(1); } Here parent::handle_event() is called instead of child::handle_event(), because the child's method misses the const declaration and therefore declares a new method. This could also be a typo in the function name or some minor difference in the parameters types. It can also easily happen if the interface of the base class changes and somewhere some derived class wasn't updated to reflect the change. Is there some way to avoid this problem, can I somehow tell the compiler or some other tool to check this for me? Any helpful compiler flags (preferably for g++)? How do you avoid these problems?
Since g++ 4.7 it does understand the new C++11 override keyword: class child : public parent { public: // force handle_event to override an existing function in parent // error out if no function with the correct signature exists void handle_event(int something) override; };
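A sketch of the pitfall and the fix side by side (class names invented; assumes a C++11 compiler). Without `override`, the const mismatch silently creates a new function and the base version gets called through a base reference; with `override`, the same mistake would be a compile error:

```cpp
#include <string>

struct Parent {
    virtual ~Parent() {}
    virtual std::string handle() const { return "parent"; }
};

struct BuggyChild : Parent {
    // Missing const: this silently declares a brand-new function,
    // it overrides nothing. With `override` appended, the compiler
    // would reject this line.
    virtual std::string handle() { return "child"; }
};

struct FixedChild : Parent {
    // const matches the base exactly, and `override` makes the compiler
    // verify that a matching virtual really exists in Parent.
    std::string handle() const override { return "child"; }
};

std::string callThroughBase(const Parent& p) { return p.handle(); }
```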
497,659
520,282
Can I add x-headers to a Lotus Notes email message without COM?
Trying to add a custom header item to a Lotus Notes email item, from the context of a Notes Client Extension before the mail is sent from the Lotus Notes client app. Is this possible? I'm looking along the lines of using something in the NSFItemSetText family of functions if at all possible, as opposed to the lotus script object model which I gather is COM components. I'm implementing this in inherited code and have no reference on where I should be looking for a tutorial on this sort of thing. Basic googling has led me into the realms of the horrible IBM reference site, and there simply must be something out there better than that.
You can add headers using the MailAddHeaderItem function, which is in mailsrv.h. If you're using NSFItemSetText you might have code based on the SENDMEMO example. Have a look at the SENDMAIL example, which references using MailAddHeaderItemByHandle. The examples are included in the C API toolkit, which you can download here if you don't already have it.
497,766
497,800
What is the difference between c++0x concepts and c# constraints?
C++0x introduces concepts, which let you define, basically, a type of a type. They specify the properties required of a type. C# lets you specify constraints on a generic with the "where" clause. Is there any semantic difference between them? Thank you.
One thing to keep in mind is that C++ templates and C# generics are not exactly the same. See this answer for more details on those differences. From the page you linked to explaining C++0x concepts, it sounds like the idea is that in C++ you want to be able to specify that the template type implements certain properties. In C#, the constraint goes further than that and forces the generic type to be "of" that constraint. For example, the following C# code: public class GenericList<T> where T : IDisposable { } says that any type used in place of T must implement the IDisposable interface. Likewise, the following code: public abstract class ABC {} public class XYZ : ABC {} public class GenericList<T> where T : ABC { } says that any type used in place of T must be of type ABC or derived from ABC. The C++0x concept idea says only that the type used in place of T must have the same properties as defined by ABC (or IDisposable), not that it must be of that type.
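To make the contrast concrete, here's a hedged C++ sketch of the "properties, not inheritance" idea (types invented for illustration): the template compiles for any argument that structurally provides what the template body uses, with no common base class in sight - which is exactly the requirement concepts were meant to let you state explicitly, where a C# `where T : ...` clause would instead demand a real inheritance relationship:

```cpp
#include <cstring>

// Neither type shares a base class or interface, yet both satisfy
// the template below purely structurally.
struct Dog  { const char* noise() const { return "woof"; } };
struct Duck { const char* noise() const { return "quack"; } };

// The only "constraint" is whatever the body uses: T must have a
// callable noise().
template <class T>
const char* describe(const T& animal) {
    return animal.noise();
}
```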
497,786
497,869
Why would anybody use C over C++?
Although people seem to like to complain about C++, I haven't been able to find much evidence as to why you would want to choose C over C++. C doesn't seem to get nearly as much flak and if C++ has all these problems why can't you just restrict yourself to the C subset? What are your thoughts/experience?
Joel's answer is good for reasons you might have to use C, though there are a few others: You must meet industry guidelines, which are easier to prove and test for in C You have tools to work with C, but not C++ (think not just about the compiler, but all the support tools, coverage, analysis, etc) Your target developers are C gurus You're writing drivers, kernels, or other low-level code You know the C++ compiler isn't good at optimizing the kind of code you need to write Your app not only doesn't lend itself to being object-oriented but would be harder to write in that form In some cases, though, you might want to use C rather than C++: You want the performance of assembler without the trouble of coding in assembler (C++ is, in theory, capable of 'perfect' performance, but the compilers aren't as good at seeing optimizations a good C programmer will see) The software you're writing is trivial, or nearly so - whip out the tiny C compiler, write a few lines of code, compile and you're all set - no need to open a huge editor with helpers, no need to write practically empty and useless classes, deal with namespaces, etc. You can do nearly the same thing with a C++ compiler and simply use the C subset, but the C++ compiler is slower, even for tiny programs. You need extreme performance or small code size and know the C++ compiler will actually make it harder to accomplish due to the size and performance of the libraries. You contend that you could just use the C subset and compile with a C++ compiler, but you'll find that if you do that you'll get slightly different results depending on the compiler. Regardless, if you're doing that, you're using C. Is your question really "Why don't C programmers use C++ compilers?" If it is, then you either don't understand the language differences, or you don't understand the compiler theory.
498,106
498,130
How do I compile a Visual Studio project from the command-line?
I'm scripting the checkout, build, distribution, test, and commit cycle for a large C++ solution that is using Monotone, CMake, Visual Studio Express 2008, and custom tests. All of the other parts seem pretty straight-forward, but I don't see how to compile the Visual Studio solution without getting the GUI. The script is written in Python, but an answer that would allow me to just make a call to: os.system would do.
I know of two ways to do it. Method 1 The first method (which I prefer) is to use msbuild: msbuild project.sln /Flags... Method 2 You can also run: vcexpress project.sln /build /Flags... The vcexpress option returns immediately and does not print any output. I suppose that might be what you want for a script. Note that DevEnv is not distributed with Visual Studio Express 2008 (I spent a lot of time trying to figure that out when I first had a similar issue). So, the end result might be: os.system("msbuild project.sln /p:Configuration=Debug") You'll also want to make sure your environment variables are correct, as msbuild and vcexpress are not by default on the system path. Either start the Visual Studio build environment and run your script from there, or modify the paths in Python (with os.putenv).
498,169
498,196
Qt and VC++ 6.0 IDE?
First of all I was wondering if I should use Qt and Windows together. If so should I use VC++ 6.0 IDE or another one?
I'm of the opinion that no one should be using VC++ 6.0 for anything at this point. You're better off just getting the Express version of 2008 if you're worried about cost.
498,319
543,452
Visual Studio: Intellisense Problems and Linux Compatibility
Two somewhat unrelated questions: Sometimes when I am working on a C++ project in Visual Studio 2008 Express, intellisense just does not want to "work" even though it really should. Auto completion box does not show and status bar says something along the lines of: "Intellisense: No further information is available". Sometimes it can be fixed by either rebuilding the solution or re-opening the solution, and sometimes even that doesn't work. Is this a known problem? If so: are there any known fixes? Is there any C++ IDE for Linux that has compatibility with MSVC++'s .sln files? I sometimes want to work on some project without having to go through the hassle of creating a new project and adding the files or manually creating a Make file. edit: To answer my own questions: Apparently there's no real fix other than to try and delete the .ncb file. Alternative would be a different IDE or to use a commercial package replacing intellisense. Code::Blocks seems to be able to open Visual Studio files. Or at least import them easily. I posted these together as they both related to visual studio and I didn't deem them important enough to both deserve their own topic. Do think the downvote is a little harsh though!
Intellisense failing is usually because of a "corrupt" .ncb file. The usual solution is to delete it. Reportedly the next version, VS 2010, will not use .ncb files anymore.
498,433
498,442
Resolving "only static const integral data members can be initialized within a class" compilation error
The following for creating a Global Object is resulting in compilation errors. #include "stdafx.h" #include <iostream> using namespace System; using namespace std; #pragma hdrstop class Tester; void input(); class Tester { static int number = 5; public: Tester(){}; ~Tester(){}; void setNumber(int newNumber) { number = newNumber; } int getNumber() { return number; } } Tester testerObject; void main(void) { cout << "Welcome!" << endl; while(1) { input(); } } void input() { int newNumber = 0; cout << "The current number is " << testerObject.getNumber(); cout << "Change number to: "; cin >> newNumber; cout << endl; testerObject.setNumber(newNumber); cout << "The number has been changed to " << testerObject.getNumber() << endl; } Here are the compile errors: 1>------ Build started: Project: test, Configuration: Debug Win32 ------ 1>Compiling... 1>test.cpp 1>.\test.cpp(15) : error C2864: 'Tester::number' : only static const integral data members can be initialized within a class 1>.\test.cpp(33) : error C2146: syntax error : missing ';' before identifier 'testerObject' 1>.\test.cpp(33) : error C4430: missing type specifier - int assumed. 
Note: C++ does not support default-int 1>.\test.cpp(49) : error C2039: 'getNumber' : is not a member of 'System::Int32' 1> c:\windows\microsoft.net\framework\v2.0.50727\mscorlib.dll : see declaration of 'System::Int32' 1>.\test.cpp(55) : error C2039: 'setNumber' : is not a member of 'System::Int32' 1> c:\windows\microsoft.net\framework\v2.0.50727\mscorlib.dll : see declaration of 'System::Int32' 1>.\test.cpp(57) : error C2039: 'getNumber' : is not a member of 'System::Int32' 1> c:\windows\microsoft.net\framework\v2.0.50727\mscorlib.dll : see declaration of 'System::Int32' 1>Build log was saved at "file://c:\Users\Owner\Documents\Visual Studio 2008\Projects\test\test\Debug\BuildLog.htm" 1>test - 6 error(s), 0 warning(s) ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== How do I create a Global Class Object correctly like I've attempted here. And how do I fix that "only static const integral data members can be initialized within a class" And basically how do I fix the rest of the errors so I can get this to compile? I like declaring Global Class Objects at file scope (I like declaring all globals at file scope) because when I have to create separate source files and do "extern" and everything it becomes extremely complicated and never works for me. Although, I do want to figure out how to do that eventually... it seems every tutorial I look at won't compile though and unless it compiles I have no idea how to recreate it! If I can just get this to compile...then I can successfully learn how to do this. So if someone could rewrite the above to where it literally copies & pastes into Visual C++ Express 2008 and works I will finally be able to figure out how to recreate it. I'm extremely excited on seeing the fix for this! It is just I can't get Global Objects to work right! Any other information on declaring Global Class Objects...or anything for that matter is welcome!
Just start addressing the errors one by one. A lot of the errors are just cascaded from the initial errors, so it looks like there are a lot of problems when there's only a couple. Just start from the top: 1>.\test.cpp(15) : error C2864: 'Tester::number' : only static const integral data members can be initialized within a class You can't initialize a member in the class definition unless it's static, const, and one of the integral types. Leave the "= 5" off of the declaration of number. Then you'll need to have a definition of Tester::number outside of the class definition, like so: int Tester::number = 5; Problem #2: 1>.\test.cpp(33) : error C2146: syntax error : missing ';' before identifier 'testerObject' Almost exactly what it says (missing semi-colon errors can be a bit inexact in saying where the semicolon should be) - you need a semi-colon after the definition of the Tester class. Fix those and your compilation problems go away. The key thing is to try and take compiler errors one at a time from the top. If you get more than about 3 of them, you can probably just ignore everything after the 3rd or so because the initial errors just send the compiler off into the weeds (and if they are real errors, they'll show up again in the next compile anyway).
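For reference, a minimal compilable sketch with both fixes applied (the accessors are made static here purely to keep the sketch self-contained; the question uses an instance, which works the same way):

```cpp
// Fix 1: the static member is only *declared* in the class;
// Fix 2: the class definition ends with a semicolon.
class Tester {
    static int number;            // declaration only: no initializer here
public:
    static void setNumber(int n) { number = n; }
    static int  getNumber()      { return number; }
};                                // <- the semicolon the compiler wanted

int Tester::number = 5;           // the one definition, at namespace scope
```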
498,455
498,483
Best way to iterate through a container
What are the Advantages/Drawbacks of these two ways of iterating through a container / which one do you prefer and why: for (MyClass::iterator i = m.begin(), e = m.end() ; i != e ; i++) { // ... } or for (MyClass::iterator i = m.begin() ; i != m.end() ; i++) { // ... } Subsidiary question: i++ or ++i? Why?
If the iterator is non-trivial (ie. not a pointer), ++i is definitely faster as it doesn't involve a copy to a temporary, which may or may not be optimized out. The first form is a little faster but could be wrong if you erase or insert things in the loop. For simple iteration over a container I use #define foreach BOOST_FOREACH // in some header foreach(MyType &element, any_container) { // deal with element } most of the time for succinctness and clarity.
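A hedged sketch of the trade-off (function names invented): the cached-end form for read-only traversal, and the re-evaluated-end form for a loop that erases, where `erase()` hands back the next valid iterator:

```cpp
#include <vector>
#include <cstddef>

// Read-only traversal: caching end() in `e` is safe and avoids a call
// per iteration; ++i avoids the temporary a postfix i++ would create.
int sumCachedEnd(const std::vector<int>& v) {
    int total = 0;
    for (std::vector<int>::const_iterator i = v.begin(), e = v.end(); i != e; ++i)
        total += *i;
    return total;
}

// Mutating traversal: end() must be re-evaluated because erase()
// invalidates iterators at and after the erased position.
std::size_t eraseEvens(std::vector<int>& v) {
    for (std::vector<int>::iterator i = v.begin(); i != v.end(); /* no ++ here */) {
        if (*i % 2 == 0)
            i = v.erase(i);   // erase returns the next valid iterator
        else
            ++i;
    }
    return v.size();
}
```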
498,526
498,559
Native vs. Protothreads, what is easier?
I just stumbled on Protothreads. They seem superior to native threads since context switches are explicit. My question is: does this make multi-threaded programming an easy task again? (I think so. But have I missed something?)
They're not "superior" - they're just different and fit another purpose. Protothreads are simulated, and hence aren't real threads. They won't run on multiple cores, and they will all block on a single system call (socket recv() and such). Hence you shouldn't see it as a "silver bullet" that solves all multithreading problems. Such threads have existed for Java, Ruby and Python for quite some time now. On the other hand, they're very lightweight so they do make some tasks quicker and simpler. They're suitable for small embedded systems because of low code and memory footprint. If you design your whole system (including an "OS", as is customary on small embedded devices) from the ground up, protothreads can provide a simple way to achieve concurrency. Also read on green threads.
498,582
499,187
Smart pointers with a library written in C
I'm using C++ with the OpenCV library, which is an image-processing library, although that's not relevant for this question. Currently I have a design decision to make. OpenCV, being a C library, has its data structures (such as CvMat) declared as structs. To create them, you use functions like cvCreateMat, and to release them, you use functions like cvReleaseMat. Being a C++ programmer, I created a special cv_scoped class which would automatically call cvReleaseMat when it went out of scope (like boost::scoped_ptr). What I'm realising now is that I wish I could use auto_ptr and shared_ptr in some cases as well. I just feel that writing code for my own cv_auto_ptr and cv_shared_ptr classes would be a bad idea, not to mention a waste of time. So I've been looking for solutions, and I've come up with three possibilities. First, I could use the cv_scoped class I've already made. I'd rename it to cv_ptr and then use smart pointers like so: std::auto_ptr<cv_ptr>. The annoying thing about this though is, I'd always have to dereference twice: std::auto_ptr<cv_ptr> matrix(cv_ptr(cvCreateMat(320, 240, CV_32FC3))); cvPow(matrix.get()->get()); // one get for the auto_ptr, one for the cv_ptr I know it looks like I could declare an implicit conversion, but I couldn't actually - most of OpenCV's functions have the parameter void* - so no implicit conversion would be called. I would really like a way of doing this where I didn't have to do the double dereference. Second, I could somehow override operator delete. I don't want to override the global operator delete because I'd only want this to apply to CvMat (and a few other) types. However, I can't change the library, so I can't add operator delete to the CvMat struct. So I don't know how this would work. Third, I could just rewrite my own auto_ptr, scoped_ptr, and shared_ptr. They're not large classes so it wouldn't be too difficult, but I just feel like this is bad design. 
If I were to do this, I would probably do something along these lines: class cv_auto_ptr { public: cv_auto_ptr(); ~cv_auto_ptr(); // each method would just be a proxy for the smart pointer CvMat* get() { return this->matrix_.get()->get(); } // all the other operators/methods in auto_ptr would be the same, you get the idea private: auto_ptr<cv_ptr> matrix_; // cv_ptr deletes CvMat properly } What would you do in my situation? Please help me figure this one out.
auto_ptr is really designed for RAII on C++ classes with constructors/destructors; you are pushing its use here toward things it probably should not be used for (but can be). Anyway, don't you want to be able to use your C++ object as if it were a normal stack variable, without dynamically allocating each time? The standard solution to your problem is to create a wrapper with a constructor/destructor. But to make it usable by the C functions, just add an internal cast operator so it auto-magically converts itself back to the C object when passed to a C function. Write a wrapper class. class Mat { CvMat* impl; public: Mat(/* Constructor Arguments */) { impl = cvCreateMat(/* BLAH */); } ~Mat() { cvReleaseMat(impl); } operator CvMat*() { // Cast operator. Convert your C++ wrapper object into C object // when you use it with all those C functions that come with the // library. return impl; } }; void Plop(CvMat* x) { // Some C function dealing with CvMat } int main() { // Don't need to dynamically allocate Mat m; // Just create on the stack. Plop(m); // Call Plop directly std::auto_ptr<Mat> mP(new Mat); Plop(*mP); }
498,757
498,821
Factory method returning an concrete instantiation of a C++ template class
I have a class template <unsigned int N> class StaticVector { // stuff }; How can I declare and define in this class a static factory method returning a StaticVector<3> object, sth like StaticVector<3> create3dVec(double x1, double x2, double x2); ?
"How can I declare and define in this class" In what class? You've defined a class template, not a class. You can't call a static function of a class template itself, you have to call a particular version of the static function that's part of a real class. So, do you want the template (and hence all instantiations of it) to have a function returning a StaticVector<3>, or do you want one particular instantiation of that template to have a function returning a StaticVector<3>? If the former: template <unsigned int N> struct SV { int contents[N]; static SV<3> get3dVec(int x, int y, int z) { SV<3> v; v.contents[0] = x; v.contents[1] = y; v.contents[2] = z; return v; } }; int main() { SV<3> v = SV<1>::get3dVec(1,2,3); } works for me. If the latter (you only want get3dVec to be a member of SV<3>, not of all SV<whatever>), then you want template specialisation: template <unsigned int N> struct SV { int contents[N]; }; template<> struct SV<3> { int contents[3]; // must be re-declared in the specialization static SV<3> get3dVec(int x, int y, int z) { SV<3> v; v.contents[0] = x; v.contents[1] = y; v.contents[2] = z; return v; } }; int main() { SV<3> v = SV<1>::get3dVec(1,2,3); // compile error SV<3> v = SV<3>::get3dVec(1,2,3); // OK } If for no other reason than to make the calling code look nicer by omitting the basically irrelevant template parameter, I agree with Iraimbilanja that normally a free function (in a namespace if you're writing for re-use) would make more sense for this example. C++ templates mean that you don't need static functions as much in C++ as you do in Java: if you want a "foo" function that does one thing for class Bar and another thing for class Baz, you can declare it as a function template with a template parameter that can be Bar or Baz (and which may or may not be inferred from function parameters), rather than making it a static function on each class. 
But if you do want it to be a static function, then you have to call it using a specific class, not just a template name.
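A compilable sketch of the free-function variant mentioned near the end (the function name make3dVec is invented; StaticVector mirrors the question's template): callers get a factory with no template-parameter noise at the call site.

```cpp
// Mirror of the question's class template, reduced to a data member.
template <unsigned int N>
struct StaticVector {
    double data[N];
};

// Free-function factory: no "which instantiation do I call it on?" question.
inline StaticVector<3> make3dVec(double x1, double x2, double x3) {
    StaticVector<3> v;
    v.data[0] = x1;
    v.data[1] = x2;
    v.data[2] = x3;
    return v;
}
```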
498,783
498,793
Instantiating a queue as a class member in C++
Suppose I need to have a class which wraps a priority queue of other objects (meaning the queue is a member of the class), and additionally gives it some extra functions. I am not quite sure what the best way is to define that vector and, mainly, how to instantiate it. Currently I have something like this in the header file: // in SomeClass.h: class SomeClass { public: SomeClass(); // constructor // other methods private: std::priority_queue<OtherClass> queue; }; while the source file is something like this: // in SomeClass.cpp SomeClass::SomeClass(): queue() // removed the constructor, no need to explicitly call it {} // other methods EDIT: removed the call, per Ray's answer.
Just write: SomeClass::SomeClass(): queue() { } C++ knows to call the constructor automatically from there with no arguments.
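A slightly fuller sketch of the same pattern (TaskQueue is a made-up wrapper around std::priority_queue<int>): the member needs no explicit mention in the constructor at all, since its default constructor runs automatically.

```cpp
#include <queue>
#include <cstddef>

class TaskQueue {
public:
    TaskQueue() {}                        // queue_ constructed automatically
    void add(int priority)     { queue_.push(priority); }
    int  highest() const       { return queue_.top(); }   // largest first
    std::size_t size() const   { return queue_.size(); }
private:
    std::priority_queue<int> queue_;
};
```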
498,835
498,842
A question related to deriving standard exception classes
/* user-defined exception class derived from a standard class for exceptions*/ class MyProblem : public std::exception { public: ... MyProblem(...) { //special constructor } virtual const char* what() const throw() { //what() function ... } }; ... void f() { ... //create an exception object and throw it throw MyProblem(...); ... } My question is: why is there a "const throw()" after what()? Normally, if there is a throw(), it implies that the function before throw() can throw exceptions. However, why is there a throw() here?
The empty parentheses in throw() form an exception specification: they declare that what() promises not to throw any exceptions (important, since what() is typically called while another exception is already being handled). The const before it is ordinary member-function const - what() can be called on a const exception object and won't modify it. So the declaration reads: a const member function that throws nothing. (Since C++11 the empty specification is spelled noexcept.)
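A minimal sketch putting both qualifiers to work (names invented; `noexcept` is used as the C++11 spelling of the question's empty throw()):

```cpp
#include <exception>
#include <cstring>

class MyProblem : public std::exception {
public:
    // const: callable on a const exception object, doesn't modify it.
    // noexcept (empty throw() pre-C++11): promises not to throw while
    // another exception is already in flight.
    virtual const char* what() const noexcept { return "my problem"; }
};

const char* describeFailure() {
    const char* msg = "no failure";
    try {
        throw MyProblem();
    } catch (const std::exception& e) {   // const object: what() must be const
        msg = e.what();                   // the literal it returns outlives e
    }
    return msg;
}
```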
499,016
499,066
Erase all members of a class
Yesterday I read some code of a colleague and came across this: class a_class { public: a_class() {...} int some_method(int some_param) {...} int value_1; int value_2; float value_3; std::vector<some_other_class*> even_more_values; /* and so on */ }; a_class a_instances[10]; void some_function() { do_stuff(); do_more_stuff(); memset(a_instances, 0, 10 * sizeof(a_class)); // <===== WTF? } Is that legal (the WTF line, not the public attributes)? To me it smells really, really bad... The code ran fine when compiled with VC8, but it throws an "unexpected exception" when compiled with VC9 when calling a_instances[0].even_more_values.push_back(whatever), but not when accessing any of the other members. Any insights? EDIT: Changed the memset from memset(&a_instances... to memset(a_instances.... Thanks for pointing it out Eduard. EDIT2: Removed the ctor's return type. Thanks litb. Conclusion: Thanks folks, you confirmed my suspicion.
This is a widely accepted method for initializing C structs. In C++ it doesn't work of course, because you can't assume anything about a vector's internal structure. Zeroing it out is very likely to leave it in an illegal state, which is why your program crashes.
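A hedged sketch of a memset-free reset for such a class (members reduced to the ones that make memset fatal): assigning a freshly constructed object runs the vector's destructor and constructor properly instead of zeroing over its internals.

```cpp
#include <string>
#include <vector>

// Reduced stand-in for the question's a_class.
struct a_class {
    int value_1;
    std::vector<std::string> even_more_values;
    a_class() : value_1(0) {}
};

// The C++-safe "reset": each element is replaced by a default-constructed
// object, so the vector member is torn down and rebuilt correctly.
void reset_all(a_class (&instances)[10]) {
    for (int i = 0; i < 10; ++i)
        instances[i] = a_class();
}
```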
499,106
499,122
What does template <unsigned int N> mean?
When declaring a template, I am used to having this kind of code: template <class T> But in this question, they used: template <unsigned int N> I checked that it compiles. But what does it mean? Is it a non-type parameter? And if so, how can we have a template without any type parameter?
It's perfectly possible to template a class on an integer rather than a type. We can assign the templated value to a variable, or otherwise manipulate it in a way we might with any other integer literal: unsigned int x = N; In fact, we can create algorithms which evaluate at compile time (from Wikipedia): template <int N> struct Factorial { enum { value = N * Factorial<N - 1>::value }; }; template <> struct Factorial<0> { enum { value = 1 }; }; // Factorial<4>::value == 24 // Factorial<0>::value == 1 void foo() { int x = Factorial<4>::value; // == 24 int y = Factorial<0>::value; // == 1 }
499,153
499,159
Passing a qualified non-static member function as a function pointer
I have a function in an external library that I cannot change with the following signature: void registerResizeCallback(void (*)(int, int)) I want to pass in a member function as the callback, as my callback needs to modify instance variables. Obviously this isn't possible with a simple: registerResizeCallback(&Window::Resize); so I'm not really sure how to solve the problem.
Check "[33.2] How do I pass a pointer-to-member-function to a signal handler, X event callback, system call that starts a thread/task, etc?" at the C++ FAQ Lite: Don't. Because a member function is meaningless without an object to invoke it on, you can't do this directly ... As a patch for existing software, use a top-level (non-member) function as a wrapper which takes an object obtained through some other technique.
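The usual wrapper looks something like this sketch: resizeThunk has the exact flat signature the library wants and forwards to the member function through a file-scope pointer (the "other technique" the FAQ alludes to). fakeRegisterResizeCallback stands in for the real library call so the sketch can run on its own - all names here are invented.

```cpp
class Window {
public:
    Window() : width(0), height(0) {}
    void resize(int w, int h) { width = w; height = h; }
    int width, height;
};

static Window* g_activeWindow = 0;        // the instance the callback targets

static void resizeThunk(int w, int h) {
    if (g_activeWindow)
        g_activeWindow->resize(w, h);     // the actual member-function call
}

// Stand-in for registerResizeCallback: pretend the library fires the event.
static void fakeRegisterResizeCallback(void (*cb)(int, int)) {
    cb(640, 480);
}

bool runCallbackDemo() {
    Window win;
    g_activeWindow = &win;                // pick the target instance
    fakeRegisterResizeCallback(&resizeThunk);
    return win.width == 640 && win.height == 480;
}
```

A void* "user data" parameter, when the library offers one, is the cleaner way to smuggle the object pointer through instead of a global.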
499,236
499,250
A Question on Template Specialization and the Resulting Code Duplication
To specialize a class template, one has to redefine all of the member functions in the underlying base template (i.e. the unspecialized class template) even if they are expected to remain mostly unchanged. What are some of the accepted methods and "best practices" to avoid this code duplication? Thanks.
You can fully specialize a member selectively: template<int N> struct Vector { int calculate() { return N; } }; // put into the .cpp file, or make inline! template<> int Vector<3>::calculate() { return -1; } You do a full specialization. Meaning you cannot partial specialize it: template<int N, int P> struct Vector { int calculate() { return N; } }; // WROOONG! template<int N> int Vector<N, 3>::calculate() { return -1; } If you need that, you can use enable_if: template<int N, int P> struct Vector { int calculate() { return calculate<P>(); } private: // enable for P1 == 3 template<int P1> typename enable_if_c<P1 == P && P1 == 3, int>::type calculate() { return -1; } // disable for P1 == 3 template<int P1> typename enable_if_c<!(P1 == P && P1 == 3), int>::type calculate() { return N; } }; An alternative approach is to split your stuff up (common stuff into a base class, and specialized stuff into the derived class) like Nick recommends. I usually would take the second approach. But i prefer the first one if i don't need to partial specialize the functions.
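The first technique is easy to check with a compilable sketch (Vec is a stand-in name so it doesn't clash with the answer's Vector):

```cpp
template <int N>
struct Vec {
    int calculate() { return N; }   // shared definition for every N...
};

// ...except N == 3, whose calculate() alone is fully specialized.
template <>
inline int Vec<3>::calculate() { return -1; }
```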
499,450
499,474
On Linux, is the command-line program mktemp less safe than the C-function mkstemp?
Both operations create an empty file and return the filename but mkstemp leaves the file open in exclusive mode and gives you the handle. Is there a safety benefit to the C-function? Does this imply that there is a safety hole in the command-line version? As an aside, it is interesting that there are several related functions in the C api on Linux and most of them say "Don't use this function" (or similar) in their man page.
As you can easily see from the mktemp(1) source code, it essentially does nothing but call mkstemp(3). Exclusive mode in Linux means that the function will fail if the file already exists; it does not guarantee locking. Another process can delete this file, create it again and fill it with data, despite the file being open in your process. There is no additional safety in the C function compared to the command line utility.
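A POSIX-only sketch of why the returned descriptor is the valuable part of mkstemp: the XXXXXX suffix is rewritten in place, and the file is already created (with O_CREAT|O_EXCL semantics) and open by the time the call returns, so there is no window between "pick a name" and "open it". The function name is invented for the demo.

```cpp
#include <cstdlib>    // mkstemp (POSIX, declared via <stdlib.h>)
#include <cstring>
#include <unistd.h>   // write, close, unlink (POSIX only)

long createTempAndWrite(const char* data) {
    char name[] = "/tmp/demoXXXXXX";      // must be a modifiable array
    int fd = mkstemp(name);               // creates AND opens atomically
    if (fd == -1)
        return -1;
    long written = (long)write(fd, data, std::strlen(data));
    close(fd);
    unlink(name);                         // remove the demo file again
    return written;
}
```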
499,636
499,654
How to create a std::ofstream to a temp file?
Okay, mkstemp is the preferred way to create a temp file in POSIX. But it opens the file and returns an int, which is a file descriptor. From that I can only create a FILE*, but not an std::ofstream, which I would prefer in C++. (Apparently, on AIX and some other systems, you can create an std::ofstream from a file descriptor, but my compiler complains when I try that.) I know I could get a temp file name with tmpnam and then open my own ofstream with it, but that's apparently unsafe due to race conditions, and results in a compiler warning (g++ v3.4. on Linux): warning: the use of `tmpnam' is dangerous, better use `mkstemp' So, is there any portable way to create an std::ofstream to a temp file?
I think this should work: char *tmpname = strdup("/tmp/tmpfileXXXXXX"); ofstream f; int fd = mkstemp(tmpname); f.attach(fd); EDIT: Well, this might not be portable. If you can't use attach and can't create a ofstream directly from a file descriptor, then you have to do this: char *tmpname = strdup("/tmp/tmpfileXXXXXX"); mkstemp(tmpname); ofstream f(tmpname); As mkstemp already creates the file for you, race condition should not be a problem here.
499,713
500,934
Transforming an object between two coordinate spaces
So I'm reading the "3D Math Primer For Graphics And Game Development" book, coming from pretty much a non-math background I'm finally starting to grasp vector/matrix math - which is a relief. But, yes there's always a but, I'm having trouble understanding the translation of an object from one coordinate space to another. In the book the author takes an example of a gun shooting at a car (image) that is turned 20 degrees (just a 2D space for simplicity) in "world space". So we have three spaces: World Space, Gun Object Space and Car Object Space - correct? The book then states this: "In this figure, we have introduced a rifle that is firing a bullet at the car. As indicated by the coordinate space on the left, we would normally begin by knowing about the gun and the trajectory of the bullet in world space. Now, imagine transforming the coordinate space in line with the car’s object space while keeping the car, the gun, and the trajectory of the bullet still. Now we know the position of the gun and the trajectory of the bullet in the object space of the car, and we could perform intersection tests to see if and where the bullet would hit the car." And I follow this explanation, and when I know beforehand that the car is rotated 20 degrees in world space this isn't a problem - but how does this translate into a situation, say, when I have an archer in a game shooting from a hill down on someone else? I don't know the angle at which everything is displaced there? And which object space is rotated here? The World or Gun space? Yeah as you can see I'm a bit confused. I think the ideal response would be using the car and gun example using arbitrary variables for positions, angle, etc.
You should read up on how to change basis, and think in vectors: not arrays, but the math ones :P
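To make that concrete with made-up numbers (a generic sketch, not tied to the book's figure): transforming a world-space point into an object's local space is a translation by the object's position followed by a rotation by the negative of the object's heading.

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { double x, y; };

const double kPi = 3.14159265358979323846;

// World-space point 'p' expressed in the local space of an object located
// at 'pos' and rotated by 'angle' radians in world space.
Vec2 worldToObject(Vec2 p, Vec2 pos, double angle) {
    double dx = p.x - pos.x;        // 1. undo the translation
    double dy = p.y - pos.y;
    double c = std::cos(-angle);    // 2. undo the rotation
    double s = std::sin(-angle);
    Vec2 local = { dx * c - dy * s, dx * s + dy * c };
    return local;
}
```

With the car at (1, 0) rotated 90 degrees, the world point (1, 2) lands at (2, 0) in the car's space; from there any intersection test can be done against the car's own axis-aligned bounds.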
499,780
499,796
Is PThread a good choice for multi-platform C/C++ multi-threading program?
Been doing mostly Java and smattering of .NET for last five years and haven't written any significant C or C++ during that time. So have been away from that scene for a while. If I want to write a C or C++ program today that does some multi-threading and is source code portable across Windows, Mac OS X, and Linux/Unix - is PThread a good choice? The C or C++ code won't be doing any GUI, so won't need to worry with any of that. For the Windows platform, I don't want to bring a lot of Unix baggage, though, in terms of unix emulation runtime libraries. Would prefer a PThread API for Windows that is a thin-as-possible wrapper over existing Windows threading APIs. ADDENDUM EDIT: Am leaning toward going with boost:thread - I also want to be able to use C++ try/catch exception handling too. And even though my program will be rather minimal and not particularly OOPish, I like to encapsulate using class and namespace - as opposed to C disembodied functions.
Well, pthreads is the old POSIX standard for writing threaded programs. It's the lowest-level threading routines, so it's a good choice for cross-platform threading. However, there are alternatives: boost::thread - an STL style threading library Intel's Thread Building Blocks OpenMP - both these are a higher-level way of writing threaded apps without needing to do any threading calls. As the latter are all fully supported on all platforms (pthreads requires a bit of compiler setup, as it's only part of Windows' POSIX subsystem, unless you want to use Pthreads-w32), perhaps the latter ones are a better choice. boost::thread is more like a threading library; the other 2 are high-level ways of achieving parallelism without needing to code 'threads' - they allow you to write loops that run concurrently automatically (subject to common-sense conditions). Boost::thread is not a C-compatible library, though. edit: cross-platform abilities of the above: Intel TBB is cross-platform (Windows*, Linux*, and Mac OS* X), supports 32-bit and 64-bit applications and works with Intel, Microsoft and GNU compilers. OpenMP depends on the compiler you want to use, but GCC and/or Intel compilers have supported OpenMP on Windows, Linux and MacOS.
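For comparison, here is roughly what the raw pthreads route looks like from C++ (a minimal sketch; everything except the pthread_* calls is a made-up name):

```cpp
#include <cassert>
#include <pthread.h>

// pthreads only knows about free functions with this exact signature,
// so C++ state has to be smuggled through the void* argument.
void* worker(void* arg) {
    *static_cast<int*>(arg) = 42; // the "work"
    return 0;
}

int runWorker() {
    int result = 0;
    pthread_t thread;
    if (pthread_create(&thread, 0, worker, &result) != 0)
        return -1;                // could not start the thread
    pthread_join(thread, 0);      // wait for completion
    return result;
}
```

boost::thread wraps exactly this kind of boilerplate (plus the Win32 equivalent) behind a functor-based interface, which is a large part of its appeal.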
500,006
500,008
What is the purpose of anonymous { } blocks in C style languages?
What is the purpose of anonymous { } blocks in C style languages (C, C++, C#) Example - void function() { { int i = 0; i = i + 1; } { int k = 0; k = k + 1; } } Edit - Thanks for all of the excellent answers!
It limits the scope of variables to the block inside the { }.
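A small illustration: each block gets its own i, and anything with a destructor (a lock guard, say) is released at the closing brace.

```cpp
#include <cassert>

int demo() {
    int total = 0;
    {
        int i = 40;   // this 'i' exists only inside these braces
        total += i;
    }
    {
        int i = 2;    // a fresh, unrelated 'i'; no redefinition clash
        total += i;
    }
    // 'i' is not visible here at all
    return total;
}
```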
500,182
500,189
c++: header function not being linked properly from library into exe
I have a header file in a library (alibrary.lib). The library is a static library (.lib) and it links properly to the exe. Now, I have a class: Vector3d. class Vector3d { void amethod() { blah } }; Vector3d cross(const Vector3d &v0, const Vector3d &v1) { float x,y,z; x = v0.y*v1.z-v0.z*v1.y; y = v0.z*v1.x-v0.x*v1.z; z = v0.x*v1.y-v0.y*v1.x; return Vector3d(x,y,z); } Vector3d is declared and defined in a header file (Vector3d.h). After the class declaration, I define the cross function. The lib compiles fine, but when it links to the unit test exe I get this error: flywindow.obj :error LNK2005: "class Vector3d __cdecl cross(class Vector3d const &,class Vector3d const &)" (?cross@@YA?AVVector3d@@ABV1@0@Z) already defined in fly.obj Any ideas? Thanks
If you define a free (not a member of a class) function, it has to be defined in a .cpp file separately compiled, or in a header and marked inline. So in your case, you can get away making it compile by this: inline Vector3d cross(const Vector3d &v0, const Vector3d &v1) { float x,y,z; x = v0.y*v1.z-v0.z*v1.y; y = v0.z*v1.x-v0.x*v1.z; z = v0.x*v1.y-v0.y*v1.x; return Vector3d(x,y,z); } The error is caused because you have the definition of the function in the header, but haven't marked it inline. If you now include that header into two files that are compiled separately, the linker, when trying to link the compiled object files, will throw out an error, because it then sees a cross function being defined twice. It works without explicitly putting inline for member functions of a class, because member functions that are defined inside the class definition are implicitly inline. It is, however, generally not a good idea to make function definitions in the header. If your function would depend on other types than just the vector (in your case it's fine IMHO, but it's debatable of course - some people don't like it), then you would be required to include the headers for those types. That will unnecessarily bloat the code that's indirectly included by your header. Instead, in those cases you would solely put only a declaration of your function inside the header: Vector3d cross(const Vector3d &v0, const Vector3d &v1); But define it within the .cpp file that's compiled separately. The inline, of course, should then be dropped. Let me add a small list of definitions and declarations, just to help in keeping things clear about what declaration and definition means for functions and classes. Note that every definition is also a declaration, but not the other way around: // class _declaration_ of boo class boo; // class _definition_ of foo. class foo { // member function _declaration_ of bar void bar(); // member function _definition_ of baz void baz() { } }; // function _definition_ of fuzz inline void fuzz() { } // function _declaration_ of fezz void fezz();
500,244
500,251
Is there a favored idiom for mimicing Java's try/finally in C++?
Been doing Java for number of years so haven't been tracking C++. Has finally clause been added to C++ exception handling in the language definition? Is there a favored idiom that mimics Java's try/finally? Am also bothered that C++ doesn't have an ultimate super type for all possible exceptions that could be thrown - like Java's Throwable class. I can write: try { // do something } catch(...) { // alas, can't examine the exception // can only do cleanup code and perhaps rethrow, ala: throw; } ADDENDUM EDIT: I ended up accepting the answer that had the most up votes, i.e., use destructors to do cleanup. Of course, from my own comments, it is clear I don't entirely agree with that. However, C++ is what it is and so in the application endeavor I have in mind, I'm going to more or less strive to adhere to common community practice. I'll use template classes to wrap resources that don't already have a class destructor (i.e., C library resources), thus bestowing on them destructor semantics. NEW ADDENDUM EDIT: Hmm, instead of finally then a closure feature perhaps? A closure combined with ScopeGuard approach (see one of the answers below) would be a way to accomplish cleanup with arbitrary actions and access to the cleanup code's outer scope context. Cleanup could be done in the idiom fashion that is seen in Ruby programming where they supply cleanup blocks when a resource is being opened. Isn't a closure feature being considered for C++?
By making effective use of destructors. When an exception is thrown in a try block, any object created within it will be destroyed immediately (and hence its destructor called). This is different from Java where you have no idea when an object's finalizer will be called. UPDATE: Straight from the horse's mouth: Why doesn't C++ provide a "finally" construct?
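A sketch of the idiom (using a C++11 lambda for brevity; the question predates it, but the destructor mechanics are the same): the "finally" code lives in a destructor, so it runs on both the normal and the exceptional path.

```cpp
#include <cassert>
#include <functional>

// Runs the stored action when it goes out of scope -- a poor man's 'finally'.
class Finally {
    std::function<void()> action_;
public:
    explicit Finally(std::function<void()> a) : action_(a) {}
    ~Finally() { action_(); }
};

int cleanups = 0;

void work(bool fail) {
    Finally guard([] { ++cleanups; }); // always runs, like a finally block
    if (fail)
        throw 1;                       // guard's destructor still fires
}
```

In practice most C++ code doesn't even need a general-purpose guard: the resource itself (file, lock, buffer) is wrapped in a class whose destructor releases it, so cleanup is attached to the resource rather than to each call site.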
500,362
500,367
Alternative Keyword Representations
The C++ standard (ISO/IEC 14882:03) states the following (2.11/2): Furthermore, the alternative representations shown in Table 4 for certain operators and punctuators (2.5) are reserved and shall not be used otherwise: and, and_eq, bitand, bitor, compl, not, not_eq, or, or_eq, xor, xor_eq I have tried using these with multiple compilers without problems. From my understanding I'm only forbidden to use these as identifier names. English isn't my first language, so it would be nice if someone could verify that I can indeed use these bit-wise mnemonics in expressions (i.e., "int i = not 0;"). EDIT: I should probably point out that I'm not going to obscure code by using these (if they are valid). It's just for general knowledge. =) Thanks!
Yes, you can use them as alternatives to the symbolic tokens. For example: struct foo { // defines a destructor compl foo() { } }; Your example would work too. It would, however, store a 1 into i. If you want to use bitwise not, you use compl (~): int i = compl 0;
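A few of the alternative representations used in ordinary expressions (GCC and Clang accept these out of the box; MSVC historically required including <ciso646> first):

```cpp
#include <cassert>

int demo() {
    int a = 6, b = 3;
    int x = a bitand b;        // same as a & b  -> 2
    int y = a bitor b;         // same as a | b  -> 7
    bool z = not (a == b);     // same as !(a == b) -> true
    int w = compl 0;           // same as ~0 -> -1 on two's complement
    return x + y + (z ? 1 : 0) + w;
}
```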
500,387
500,500
Keeping track of an objects local coordinate space
Ok, so - this is heavily related to my previous question Transforming an object between two coordinate spaces, but a lot more direct and it should have an obvious answer. An objects local coordinate space, how do I "get a hold of it"? Say that I load an Orc into my game, how do I know programatically where it's head, left arm, right arm and origin (belly button?) is? And when I know where it is do I need to save it manually or is it something that magically exists in the DirectX API? In my other question one person said something about storing a vertice for X,Y,Z directions and and origin? How do I find these vertices? Do I pick them arbitrarily, assign them in the model before loading it into my game? etc. This is not about transforming between coordinate spaces
Actually, it is about transforming between coordinate spaces. Ok, you understand that you can have a Matrix that does Translation or Rotation. (Or scaling. Or skew. Etc.) That you can multiply such a Matrix by a point (V) to get the new (translated/rotated/etc) point. E.g.: Mtranslate = [ 1 0 0 Tx ] * V = [ Vx ] = [ Vx + Tx ] [ 0 1 0 Ty ] [ Vy ] [ Vy + Ty ] [ 0 0 1 Tz ] [ Vz ] [ Vz + Tz ] [ 0 0 0 1 ] [ 1 ] [ 1 ] Or rotation about the X/Y/Z axis: Mrotate_x = [ 1 0 0 0 ] * V = [ Vx ] = [ Vx ] [ 0 cos -sin 0 ] [ Vy ] [ Vy*cos - Vz*sin ] [ 0 sin cos 0 ] [ Vz ] [ Vy*sin + Vz*cos ] [ 0 0 0 1 ] [ 1 ] [ 1 ] That you can combine multiple operations by multiplying the matrices together. For example: Mnew = Mtranslate * Mrotate_x. (And yes, the order does matter!) So given a real-world base origin, you could translate to your (known/supplied) orc's xy location, deduce from the terrain at that point what your orc's feet's z location is and translate to that. Now you have your orc's "feet" in global coordinates. From there, you might want to rotate your orc so he faces a particular direction. Then translate upwards (based on your orc's model) to find its neck. From the neck, we can translate upwards (and perhaps rotate) to find its head or over to find its shoulder. From the shoulder, we can rotate and translate to find the elbow. From the elbow we can rotate and translate to find the hand. Each component of your orc model has a local reference frame. Each is related to the reference frame where it is attached by one or more transforms (matrices). By multiplying through all the local matrices (transforms), we can eventually see how to go from the orc's feet to his hand. (Mind you, I'd animate the orc and just use his feet x,y,z location. Otherwise it's too much work.) You see a lot of this sort of thing in robotics which can provide a nice elementary introduction to forward kinematics. In robotics, at least with the simpler robots, you have a motor which rotates, and a limb-length to translate by. And then there's another motor. (In contrast, the human shoulder-joint can rotate about all three axes at once.) You might look over the course HANDOUTS for CMU's Robotic Manipulation class (15-384). Perhaps something like their Forward Kinematics Intro.
500,456
500,496
Pure Virtual Method VS. Function Pointer
Recently I've been designing a Thread class library; I've made a Thread abstract class like the following: class Thread { public: void run() { /*start the thread*/ } void kill() { /*stop the thread*/ } protected: virtual int doOperation(unsigned int, void *) = 0; }; Real thread classes would inherit this abstract class and implement the doOperation method with their own logic, something similar to the Strategy Pattern. The problem is that I'm relying on a C back-end library which defines running the thread in the following function: int startThread(char* name, (int)(*)(unsigned int, void*), int, int, int, void*); As you can see, the second parameter is a function pointer to the thread's loop (main function), and here is the problem: since I use this C function to start the thread in the run method, I pass the address of doOperation as the second parameter, and this cannot be done, because of the type mismatch. I've tried to use reinterpret_cast to get a plain function pointer, but ISO C++ forbids taking the address of a non-static member function like that. I don't know how to overcome this conflict; using a static method is the only solution I guess, but it blows up my design pattern!
First, be sure to read the link Michael Burr provided, as it contains good information. Then, here is C++ish pseudo-code for it: int wrapperDoOperation(unsigned int v, void *ctx) { Thread *thread = (Thread *)ctx; return thread->doOperation(v); } class Thread { public: run() { startThread("bla", wrapperDoOperation, bla, bla, bla, (void *)this); } kill() { /*stop the thread*/ } protected: virtual int doOperation(unsigned int) = 0; friend wrapperDoOperation ......; }; The idea is that doOperation, being a member function of Thread, doesn't need a void *context; you can just keep whatever you would pass as a context in the object itself. Therefore, you can use the void pointer to pass the actual this pointer to the doOperation. Notice that the void * details are hidden from the users of your class, which is nice.
500,493
500,495
C++ equivalent of java's instanceof
What is the preferred method to achieve the C++ equivalent of java's instanceof?
Try using: if(NewType* v = dynamic_cast<NewType*>(old)) { // old was safely casted to NewType v->doSomething(); } This requires your compiler to have rtti support enabled. EDIT: I've had some good comments on this answer! Every time you need to use a dynamic_cast (or instanceof) you'd better ask yourself whether it's a necessary thing. It's generally a sign of poor design. A typical workaround is putting the special behaviour for the class you are checking for into a virtual function on the base class, or perhaps introducing something like a visitor where you can introduce specific behaviour for subclasses without changing the interface (except for adding the visitor acceptance interface of course). As pointed out dynamic_cast doesn't come for free. A simple and consistently performing hack that handles most (but not all) cases is basically adding an enum representing all the possible types your class can have and checking whether you got the right one. if(old->getType() == BOX) { Box* box = static_cast<Box*>(old); // Do something box specific } This is not good oo design, but it can be a workaround and its cost is more or less only a virtual function call. It also works regardless of whether RTTI is enabled or not. Note that this approach doesn't support multiple levels of inheritance, so if you're not careful you might end up with code looking like this: // Here we have a SpecialBox class that inherits Box, since it has its own type // we must check for both BOX or SPECIAL_BOX if(old->getType() == BOX || old->getType() == SPECIAL_BOX) { Box* box = static_cast<Box*>(old); // Do something box specific }
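A complete, compilable version of the dynamic_cast check (the types are made up for illustration):

```cpp
#include <cassert>

struct Animal { virtual ~Animal() {} }; // a virtual function is required for dynamic_cast
struct Dog : Animal { int bark() const { return 1; } };
struct Cat : Animal {};

// Java's `a instanceof Dog` becomes a dynamic_cast that yields null on mismatch.
int speak(const Animal* a) {
    if (const Dog* d = dynamic_cast<const Dog*>(a))
        return d->bark();
    return 0; // not a Dog
}
```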
500,656
500,743
C++ using scoped_ptr as a member variable
Just wanted opinions on a design question. If you have a C++ class than owns other objects, would you use smart pointers to achieve this? class Example { public: // ... private: boost::scoped_ptr<Owned> data; }; The 'Owned' object can't be stored by value because it may change through the lifetime of the object. My view of it is that on the one side, you make it clear that the object is owned and ensure its deletion, but on the flipside, you could easily just have a regular pointer and delete it in the destructor. Is this overkill? Follow up: Just wanted to say thanks for all your answers. Thanks for the heads-up about auto_ptr leaving the other object with a NULL pointer when the whole object is copied, I have used auto_ptr extensively but had not thought of that yet. I make basically all my classes boost::noncopyable unless I have a good reason, so there's nothing to worry about there. And thanks also for the information on memory leaks in exceptions, that's good to know too. I try not to write things which could cause exceptions in the constructor anyway - there are better ways of doing that - so that shouldn't be a problem. I just had another question though. What I wanted when I asked this question was to know whether anyone actually did this, and you all seem to mention that it's a good idea theoretically, but no one's said they actually do it. Which surprises me! Certainly one object owning a pointer to another is not a new idea, I would have expected you all would have done it before at some point. What's going on?
It's a good idea. It helps simplify your code, and ensure that when you do change the Owned object during the lifetime of the object, the previous one gets destroyed properly. You have to remember that scoped_ptr is noncopyable, though, which makes your class noncopyable by default until/unless you add your own copy constructor, etc. (Of course, using the default copy constructor in the case of raw pointers would be a no-no too!) If your class has more than one pointer field, then use of scoped_ptr actually improves exception safety in one case: class C { Owned * o1; Owned * o2; public: C() : o1(new Owned), o2(new Owned) {} ~C() { delete o1; delete o2;} }; Now, imagine that during construction of a C the second "new Owned" throws an exception (out-of-memory, for example). o1 will be leaked, because C::~C() (the destructor) won't get called, because the object has not been completely constructed yet. The destructor of any completely constructed member field does get called though. So, using a scoped_ptr instead of a plain pointer will allow o1 to be properly destroyed.
500,663
531,160
How to determine the supported thread model of an out-of-process COM server?
Question: How to find the threading models supported by a predefined out-of-process (EXE-based) Server: Using oleview? Or any other valid methods? Note: Attempting to connect to the above described server to receive event notifications
I'm afraid the question is wrong. Threading models (STA, MTA, etc) are a necessary evil that apply only to in-process COM objects, where objects and clients need to coexist in the same process and somehow they must prevent stepping on each other's toes (a fun and lengthy topic). Out-of-process (EXE) COM servers live in their own process and manage their own threading as they see fit. COM doesn't know -- and doesn't care. So there isn't any "threading model" (in a COM sense) associated with an EXE. It sounds like you have a specific problem: If you post more details, I/someone/we might be able to help.
500,748
500,765
What is static_case operator in C++?
I've heard of the static_cast operator. Recently I've come across static_case, for instance: *ppv = static_case<IUnknown> What does this mean?
It's a typo: there is no static_case, only static_cast, dynamic_cast, const_cast and reinterpret_cast. You can see on Google that the docs where you find "static_case" have typos and use static_cast and static_case as if they were the same word. To be sure, just try to use static_case in available compilers.
501,060
501,136
How to efficiently implement an event loop?
COM Object (Server) sends event notification successfully to COM Client, without ATL or MFC. How to efficiently get the main thread to wait/sleep (infinitely) until the COM Server notifies the COM Client of a particular event?
With event objects. The main thread calls CreateEvent() in its initialisation to create an auto-reset event object. The main thread then enters an event loop in which it calls MsgWaitForMultipleObjects() repeatedly. (here is an example of a message loop.) And you generally do need to check for window messages, even if the main thread has no GUI. In the client thread (the one that creates the sink object) call SetEvent() within the sink method, after any necessary state update. This will wake up the main thread. And read this and this, if you haven't already.
501,163
501,182
A question about auto_ptr
template<class Y> operator auto_ptr_ref<Y>() throw() { return auto_ptr_ref<Y>(release()); } It is part of the implementation of the class auto_ptr in the standard library. What is this meant to do? Why is there an "auto_ptr_ref" between "operator" and "()"?
That is a conversion operator in action: it converts the auto_ptr into an auto_ptr_ref<Y>. For conversion operators, the target type is written between the operator keyword and the parameter list, which is why auto_ptr_ref appears there. This particular conversion is the trick that lets auto_ptr be copied from temporaries (e.g. when returned from a function), even though its copy constructor takes a non-const reference.
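A toy version of the same syntax, stripped of the auto_ptr machinery (the types here are invented for illustration): operator TargetType() declares a user-defined conversion, and the name between operator and () is the type being converted to.

```cpp
#include <cassert>

struct Celsius { double deg; };

struct Fahrenheit {
    double deg;
    operator Celsius() const {      // conversion operator: Fahrenheit -> Celsius
        Celsius c = { (deg - 32.0) * 5.0 / 9.0 };
        return c;
    }
};
```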
501,486
3,984,156
Getting GDB to save a list of breakpoints
OK, info break lists the breakpoints, but not in a format that would work well with reusing them using the --command as in this question. Does GDB have a method for dumping them into a file acceptable for input again? Sometimes in a debugging session, it is necessary to restart GDB after building up a set of breakpoints for testing. The .gdbinit file has the same problem as --command. The info break command does not list commands, but rather a table for human consumption. To elaborate, here is a sample from info break: (gdb) info break Num Type Disp Enb Address What 1 breakpoint keep y 0x08048517 <foo::bar(void)+7>
As of GDB 7.2 (2011-08-23) you can now use the save breakpoints command. save breakpoints <filename> Save all current breakpoint definitions to a file suitable for use in a later debugging session. To read the saved breakpoint definitions, use the `source' command. Use source <filename> to restore the saved breakpoints from the file.
501,774
501,792
Adding C++ template classes to a list
I have a template class, C_Foo<T>, which is specialised in a number of ways. struct Bar_Base { ... }; struct Bar_1 : public Bar_Base { ... }; struct Bar_2 : public Bar_Base { ... }; struct Bar_3 : public Bar_Base { ... }; template<class T> class C_Foo { ... }; class C_Foo_1 : public C_Foo<Bar_1> { ... }; class C_Foo_2 : public C_Foo<Bar_2> { ... }; class C_Foo_3 : public C_Foo<Bar_3> { ... }; And instantiations as follows: C_Foo_1 foo1; C_Foo_2 foo2; C_Foo_3 foo3; I have a set of common operations, all of which are defined on C_Foo, that I want to perform on foo1, foo2, and foo3. I've tried the following: vector<C_Foo *> v; v.push_back(&foo1); v.push_back(&foo2); v.push_back(&foo3); But I get compile errors, presumably because the compiler isn't sure how to go from a C_Foo_1 to a C_Foo. Is it possible to do something like this? I want to be able to loop through foo1 .. fooN and perform the same operations on all of them, without having to copy and paste boilerplate code like so: foo1.do_stuff(); foo2.do_stuff(); foo3.do_stuff(); Thanks for your help.
You can do that, if the function does not depend on the template parameter: // note: not a template class C_Foo_Common { public: virtual void do_stuff() = 0; }; template<typename T> class C_Foo : public C_Foo_Common { virtual void do_stuff() { // do stuff... } }; vector<C_Foo_Common *> v; v.push_back(&foo1); v.push_back(&foo2); v.push_back(&foo3); // now, you can iterate and call do_stuff on them. But if the function in C_Foo_Common needs to know the type T (for example to have another return type that depends on T), then that's not possible anymore. C_Foo<Bar_1> is a different type than C_Foo<Bar_2>. You can use discriminated unions instead. Those keep track about what is stored in them and are completely generic: typedef boost::variant< C_Foo<Bar_1>*, C_Foo<Bar_2>*, C_Foo<Bar_3>* > variant_type; vector<variant_type> v; v.push_back(&foo1); v.push_back(&foo2); v.push_back(&foo3); The variant knows what it stores, and can call functions overloaded on the types of what can be stored in it. Read the documentation of boost::variant for more information on how to get at what the variants contain.
501,816
501,828
Why does cout print char arrays differently from other arrays?
I'm using C++ to understand how exactly pointers work. I have this piece of code using arrays, which I'm using just to understand how the equivalent works with pointers. int main() { int arr[10] = {1,2,3}; char arr2[10] = {'c','i','a','o','\0'}; cout << arr << endl; cout << arr2 << endl; } However when I run this, arr outputs the address of the first element of the array of ints (as expected) but arr2 doesn't output the address of the first element of the array of chars; it actually prints "ciao". What is it that I'm missing or that I haven't learned yet about this?
It's the operator<< that is overloaded for const void* and for const char*. Your char array is converted to const char* and passed to that overload, because it fits better than to const void*. The int array, however, is converted to const void* and passed to that version. The version of operator<< taking const void* just outputs the address. The version taking the const char* actually treats it like a C-string and outputs every character until the terminating null character. If you don't want that, convert your char array to const void* explicitly when passing it to operator<<: cout << static_cast<const void*>(arr2) << endl;
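The two overloads can be observed directly with a string stream (a small self-check, not from the original answer):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Returns true if the const char* overload printed the contents while the
// const void* overload printed something else (the address).
bool overloadsDiffer() {
    char arr[] = "ciao";
    std::ostringstream text, addr;
    text << arr;                            // const char* overload: contents
    addr << static_cast<const void*>(arr);  // const void* overload: address
    return text.str() == "ciao" && text.str() != addr.str();
}
```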
501,899
501,908
In Inheritance: Can I override base class data members?
Let's say I have two classes like the following: class A { public: .. private: int length; }; class B : public A { public: .. private: float length; }; What I would like to know is: Is overriding of base class data members allowed? If yes, is it a good practice? If no, what is the best way to extend the type of the data members of a class? There is a class that satisfies my needs and I want to reuse it. However for my program needs, its data members should be of another type. I have some books, but all of them refer only to overriding of base class member methods.
You can make the class a template, i.e., use generic members whose type is a template parameter, instead of overriding the members. You can also declare a VARIANT (COM)-like union: struct MyData { int vt; // To store the type union { LONG lVal; BYTE bVal; SHORT iVal; FLOAT fltVal; . . } };
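A sketch of the template suggestion: instead of a derived class redeclaring length with a new type, the type of length becomes a parameter (names here are illustrative):

```cpp
#include <cassert>

template <typename T>
class Shape {
public:
    explicit Shape(T len) : length(len) {}
    T getLength() const { return length; }
private:
    T length;   // int for Shape<int>, float for Shape<float>, ...
};
```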
501,962
501,971
Erasing items from an STL list
I want to make a function which moves items from one STL list to another if they match a certain condition. This code is not the way to do it. The iterator will most likely be invalidated by the erase() function and cause a problem: for(std::list<MyClass>::iterator it = myList.begin(); it != myList.end(); it++) { if(myCondition(*it)) { myOtherList.push_back(*it); myList.erase(it); } } So can anyone suggest a better way to do this ?
Erase returns an iterator pointing to the element after the erased one: std::list<MyClass>::iterator it = myList.begin(); while (it != myList.end()) { if(myCondition(*it)) { myOtherList.push_back(*it); it = myList.erase(it); } else { ++it; } }
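Since both containers are std::lists, splice can move the matching nodes without copying the elements at all; a sketch of that variant (using "even number" as a stand-in for myCondition):

```cpp
#include <cassert>
#include <list>

// Move every even number from 'src' to the back of 'dst' without copies.
void moveEvens(std::list<int>& src, std::list<int>& dst) {
    std::list<int>::iterator it = src.begin();
    while (it != src.end()) {
        std::list<int>::iterator cur = it++;   // advance before relinking
        if (*cur % 2 == 0)
            dst.splice(dst.end(), src, cur);   // relinks the node; 'it' stays valid
    }
}
```

splice never invalidates iterators to other elements, so advancing it before the splice is all the bookkeeping required.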
502,022
502,040
How to find mantissa length on a particular machine?
I'm wanting to find the number of mantissa digits and the unit round-off on a particular computer. I have an understanding of what these are, just no idea how to find them - though I understand they can vary from computer to computer. I need this number in order to perform certain aspects of numerical analysis, like analyzing errors. What I'm currently thinking is that I could write a small c++ program to slowly increment a number until overflow occurs, but I'm not sure what type of number to use. Am I on the right track? How exactly does one go about calculating this?
I would think that whatever language you were using would specify how floats were stored. I know Java does this by use of a specific IEEE standard (754, I think). If it's not specified, I would think you could just do your own check by adding 0.5 to 1 to see if the actual number changes. If it does, then add 0.25 to 1, then 0.125 to 1, and so on until the number doesn't change, something like: float a = 1; float b = 0.5; int bits = 0; while (a + b != a) { bits = bits + 1; b = b / 2; } If you only had 3 mantissa bits, then 1 + 1/16 would be equal to 1. Then you've exhausted your mantissa bits. You might actually need the base number to be 2 rather than 1, since IEEE754 uses an implied '1+' at the start. EDIT: It appears the method described above may have some issues as it gives 63 bits for a system that clearly has 4-byte floats. Whether that's to do with intermediate results (I doubt it since the same code with explicit casts [while ((float)(a + b) != (float)(a))] has similar problems) or (more likely, I believe) the possibility that the unit value a can be represented with bits closer to the fractional b by adjusting the exponent, I don't yet know. For now, it's best to rely on the language information I mentioned above such as use of IEEE754 (if that information is available). I'll leave the problematic code in as a trap for unwary players. Maybe someone with more floating point knowledge than I can leave a note explaining why it acts strangely (no conjecture, please :-). EDIT 2: This piece of code fixes it by ensuring intermediates are stored in floats. Turns out Jonathan Leffler was right - it was intermediate results. #include <stdio.h> #include <float.h> int main(void) { float a = 1; float b = 0.5; float c = a + b; int bits = 1; while (c != a) { bits = bits + 1; b = b / 2; c = a + b; } printf("%d\n",FLT_MANT_DIG); printf("%d\n",bits); return 0; } This code outputs (24,24) to show that the calculated value matches the one in the header file. Whilst written in C, it should be applicable to any language (specifically one where the information isn't available in a header or by virtue that it's specified in the language documentation). I only tested in C because Eclipse takes so long to start on my Ubuntu box :-).
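In C++ the same figures are exposed directly through std::numeric_limits, so no probing loop is needed (the values below assume IEEE 754 single precision):

```cpp
#include <cassert>
#include <limits>

// Mantissa length including the implicit leading bit: 24 under IEEE 754.
int mantissaBits() { return std::numeric_limits<float>::digits; }

// Unit round-off: half the gap between 1.0f and the next representable float.
float unitRoundoff() { return std::numeric_limits<float>::epsilon() / 2.0f; }
```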
502,278
502,280
Create GUID menu item missing in Visual C++
I do not see "Create GUID" option under the menu item Tools -> . I am using Visual Studio 2005 . Do I have to install anything for that .
It's likely missing because the profile you chose does not include that particular option. Try the following Tools -> Import and Export Settings Select "Reset all Settings" and hit Next Select "No, just reset settings, overwriting my current settings" and hit Next Select "General Development Settings" and hit Finish Now the "Create GUID" menu item should be present in the menu.
502,430
502,467
Passing "const" variable to method in Java
Is there an equivalent in Java to the passing on const references in C++? Isn't leaving out the "constness" misleading in regard to the method signature?
No, there isn't. Java "final" is not an exact equivalent of C++ "const". The following (delayed initialization of a final variable) works in Java: final double x; int w = 1; if (w > 2) { x = 0.5; } else { x = - 0.5; } but it doesn't work in C++ with "final" replaced by "const". Using "final" on a variable in the method declaration can be useful in Java, because it allows you to use this variable inside any anonymous class created inside your method. PS. I was first disappointed by the lack of "const" in Java but later learned to live with "final". PS2. The Java glossary (http://mindprod.com/jgloss/immutable.html) linked to in this thread has one thing wrong: no, you are not given a 100% guarantee that the final variable doesn't change its value: 1) it changes from "undefined" to "defined", but the compiler will tell you if you reference it before initialization 2) on Linux, a double has 80-bit precision when stored in a register, but 64-bit when stored in memory. When a final double variable is pushed out of the register, it will be truncated and change its value. As Joel Spolsky says, "abstraction has sprung a leak".
502,435
502,459
Should I wrap all my c++ code in its own namespace?
I come from a c# background where everything has its own namespace, but this practice appears to be uncommon in the c++ world. Should I wrap my code in it's own namespace, the unnamed namespace, or no namespace?
Many C++ developers do not use namespaces, sadly. When I started with C++, I didn't use them for a long time, until I came to the conclusion that I could do better using namespaces. Many libraries work around namespaces by putting prefixes before names. For example, wxWidgets puts the characters "wx" before everything. Qt puts "Q" before everything. There's nothing really wrong with that, but it requires you to type that prefix over and over, even when it can be deduced from the context which declarations you mean. Namespaces have a hierarchical order. Names that are lexically closer to the point that references them are found earlier. So if you reference "Window" within your GUI framework, it will find "my::gui::Window", instead of "::Window". Namespaces enable some nice features that can't be used without them. For example, if you put your class into a namespace, you can define free functions within that namespace. You then call the function without putting the namespace in front by importing all names, or selectively only some of them, into the current scope ("using declaration"). Nowadays, I don't do any project anymore without using them. They make it so easy not to type the same prefix all over again, while still having good organization and avoiding name-pollution of the global namespace.
502,443
503,545
Prime numbers program
I'm currently trying out some questions just to practice my programming skills. ( Not taking it in school or anything yet, self taught ) I came across this problem which required me to read in a number from a given txt file. This number would be N. Now I'm suppose to find the Nth prime number for N <= 10 000. After I find it, I'm suppose to print it out to another txt file. Now for most parts of the question I'm able to understand and devise a method to get N. The problem is that I'm using an array to save previously found prime numbers so as to use them to check against future numbers. Even when my array was size 100, as long as the input integer was roughly < 15, the program crashes. #include <cstdio> #include <iostream> #include <cstdlib> #include <fstream> using namespace std; int main() { ifstream trial; trial.open("C:\\Users\\User\\Documents\\trial.txt"); int prime; trial >> prime; ofstream write; write.open("C:\\Users\\User\\Documents\\answer.txt"); int num[100], b, c, e; bool check; b = 0; switch (prime) { case 1: { write << 2 << endl; break; } case 2: { write << 3 << endl; break; } case 3: { write << 5 << endl; break; } case 4: { write << 7 << endl; break; } default: { for (int a = 10; a <= 1000000; a++) { check = false; if (((a % 2) != 0) && ((a % 3) != 0) && ((a % 5) != 0) && ((a % 7) != 0)) // first filter { for (int d = 0; d <= b; d++) { c = num[d]; if ((a % c) == 0) { check = true; // second filter based on previous recorded primes in array break; } } if (!check) { e = a; if (b <= 100) { num[b] = a; } b = b + 1; } } if ((b) == (prime - 4)) { write << e << endl; break; } } } } trial.close(); write.close(); return 0; } I did this entirely base on my dummies guide and myself so do forgive some code inefficiency and general newbie-ness of my algorithm. Also for up to 15 it displays the prime numbers correctly. Could anyone tell me how I should go about improving this current code? I'm thinking of using a txt file in place of the array. Is that possible? 
Any help is appreciated.
#include <cstdio> #include <iostream> #include <cstdlib> #include <fstream> using namespace std; int main() { ifstream trial; trial.open("C:\\Users\\User\\Documents\\trial.txt"); int prime, e; trial>>prime; ofstream write; write.open("C:\\Users\\User\\Documents\\answer.txt"); int num[10000], currentPrime, c, primePrint; bool check; currentPrime=0; num[currentPrime] = 2; currentPrime=1; for(int currentInt=2; currentInt<=1000000; currentInt++) {check = false; for( int arrayPrime=0; arrayPrime<currentPrime; arrayPrime++) { c=num[arrayPrime]; if ((currentInt%c)==0) { check = true;// second filter based on previous recorded primes in array break;} } if (!check) { e=currentInt; if( currentInt!= 2 ) { num[currentPrime]= currentInt;} currentPrime = currentPrime+1;} if(currentPrime==prime) { write<<e<<endl; break;} } trial.close(); write.close(); return 0; } This is the finalized version base on my original code. It works perfectly and if you want to increase the range of prime numbers simply increase the array number. Thanks for the help =)
502,640
504,125
Disable/Enable Ribbon Buttons for MFC Feature Pack
I am using the MFC Feature Pack and I have some buttons on a ribbon bar, instances of CMFCRibbonButton. The problem is that I would like to enable and disable some of them in certain conditions, but at runtime. How can I do this? because there is no specific method for this...I heard that a solution would be to attach/detach the event handlers at runtime, but I do not know how...
When you create the CMFCRibbonButton object you have to specify the associated command ID (see the documentation for the CMFCRibbonButton constructor here). Enabling and disabling of ribbon buttons is then done using the usual command update mechanism in MFC, using the CCmdUI class. For example, if you have a ribbon button whose command ID is ID_MYCOMMAND and you want to handle this command in your application's view class, you should add these functions to the class: // MyView.h class CMyView : public CView { // ... private: afx_msg void OnMyCommand(); afx_msg void OnUpdateMyCommand(CCmdUI* pCmdUI); DECLARE_MESSAGE_MAP() }; and implement them in the .cpp file: // MyView.cpp void CMyView::OnMyCommand() { // add command handler code. } void CMyView::OnUpdateMyCommand(CCmdUI* pCmdUI) { BOOL enable = ...; // set flag to enable or disable the command. pCmdUI->Enable(enable); } You should also add ON_COMMAND and ON_UPDATE_COMMAND_UI entries to the message map for the CMyView class: // MyView.cpp BEGIN_MESSAGE_MAP(CMyView, CView) ON_COMMAND(ID_MYCOMMAND, &CMyView::OnMyCommand) ON_UPDATE_COMMAND_UI(ID_MYCOMMAND, &CMyView::OnUpdateMyCommand) END_MESSAGE_MAP() For more information on message maps in MFC, refer to TN006: Message Maps in MSDN. I hope this helps!
502,856
502,862
What's the difference between size_t and int in C++?
In several C++ examples I see a use of the type size_t where I would have used a simple int. What's the difference, and why size_t should be better?
From the friendly Wikipedia: The stdlib.h and stddef.h header files define a datatype called size_t which is used to represent the size of an object. Library functions that take sizes expect them to be of type size_t, and the sizeof operator evaluates to size_t. The actual type of size_t is platform-dependent; a common mistake is to assume size_t is the same as unsigned int, which can lead to programming errors, particularly as 64-bit architectures become more prevalent. Also, check Why size_t matters
503,030
504,745
What function was used to code these passwords in AFX?
I am trying to work out the format of a password file which is used by a LOGIN DLL of which the source cannot be found. The admin tool was written in AFX, so I hope that it perhaps gives a clue as to the algorithm used to encode the passwords. Using the admin tool, we have two passwords that are encoded. The first is "dinosaur123456789" and the hex of the encryption is here: The resulting hex values for the dinosaur password are 00h: 4A 6E 3C 34 29 32 2E 59 51 6B 2B 4E 4F 20 47 75 ; Jn<4)2.YQk+NO Gu 10h: 6A 33 09 ; j3. 20h: 64 69 6E 6F 73 61 75 72 31 32 33 34 35 36 37 38 ; dinosaur12345678 30h: 39 30 ; 90 Another password "gertcha" is encoded as e8h: 4D 35 4C 46 53 5C 7E ; GROUT M5LFS\~ I've tried looking for a common XOR, but failed to find anything. The passwords are of the same length in the password file so I assume that these are a reversible encoding (it was of another age!). I'm wondering if the AFX classes may have had a means that would be used for this sort of thing? If anyone can work out the encoding, then that would be great! Thanks, Matthew [edit:] Okay, first, I'm moving on and going to leave the past behind in the new solution. It would have been nice to use the old data still. Indeed, if someone wants to solve it as a puzzle, then I would still like to be able to use it. For those who want to have a go, I got two passwords done. All 'a' - a password with 19 a's: 47 7D 47 38 58 57 7C 73 59 2D 50 ; G}G8XW|sY-P 79 68 29 3E 44 52 31 6B 09 ; yh)>DR1k. All 'b' - a password with 16 b's. 48 7D 2C 71 78 67 4B 46 49 48 5F ; H},qxgKFIH_ 69 7D 39 79 5E 09 ; i}9y^. This convinced me that there is no simple solution involved, and that there is some feedback.
Well, I did a quick cryptanalysis on it, and so far, I can tell you that each password appears to start off with it's ascii value + 26. The next octet seems to be the difference between the first char of the password and the second, added to it's ascii value. The 3d letter, I haven't figured out yet. I think it's safe to say you are dealing with some kind of feedback cipher, which is why XOR turns up nothing. I think each octets value will depend on the previous. I can go on, but this stuff takes a lot of time. Hopefully this may give you a start, or maybe give you a couple of ideas.
503,401
879,569
How to debug file change notifications obtained by FindFirstChangeNotification?
So, the question is: I get some notifications I don't want to get. But I don't know for what file/dir I got them. Is there a way to know why given notification was fired? If you think about ReadDirectoryChangesW, please include a meaningful code sample.
If you would like Windows to tell you what specific file or subdirectory changed, you will need to use ReadDirectoryChangesW. The asynchronous mode is fairly simple if you use a completion routine. On the other hand, you will probably get better performance by using the slightly more complicated I/O completion ports approach. I would recommend downloading Wes Jones' excellent CDirectoryChangeWatcher source code as a starting point. There are several gotchas that his code will help you avoid, particularly in parsing the FILE_NOTIFY_INFORMATION records.
503,421
506,128
How can I get the current users permission groups?
I have a Qt/C++ project and an old VB6 project. The user base might not have permissions to HKEY_LOCAL_MACHINE due to lack of administrator rights but I need to update a registry entry. How can I get a list of the groups to which a user belongs?
I have this is_admin program bookmarked. It's a good example of how to do this.
503,526
503,575
Image Processing Library for C++
I need a library that can detect objects in an image (uses edge detection). This is NOT related to captchas. I am working on an MTGO bot that uses OCR and that works in any screen resolution. In order for it to port to any screen resolution my idea is to scan down narrow range on a results page (the cards that a player has can be listed in rows of text) and to find each object in that range. Then to take the lowest and highest pixel coordinates of each object to find where the row starts and ends (on the y axis) so that I can use OCR to read each line.
If you don't know of the OpenCV collection of examples, then they could help you in the right direction... there's also Camellia which doesn't use "edge detection" per-se but could get the results you need with a bit of work.
503,664
503,861
Member functions for derived information in a class
While designing an interface for a class I normally get caught in two minds whether should I provide member functions which can be calculated / derived by using combinations of other member functions. For example: class DocContainer { public: Doc* getDoc(int index) const; bool isDocSelected(Doc*) const; int getDocCount() const; //Should this method be here??? //This method returns the selected documents in the contrainer (in selectedDocs_out) void getSelectedDocs(std::vector<Doc*>& selectedDocs_out) const; }; Should I provide this as a class member function or probably a namespace where I can define this method? Which one is preferred?
In general, you should probably prefer free functions. Think about it from an OOP perspective. If the function does not need access to any private members, then why should it be given access to them? That's not good for encapsulation. It means more code that may potentially fail when the internals of the class is modified. It also limits the possible amount of code reuse. If you wrote the function as something like this: template <typename T> bool getSelectedDocs(T& container, std::vector<Doc*>&); Then the same implementation of getSelectedDocs will work for any class that exposes the required functions, not just your DocContainer. Of course, if you don't like templates, an interface could be used, and then it'd still work for any class that implemented this interface. On the other hand, if it is a member function, then it'll only work for this particular class (and possibly derived classes). The C++ standard library follows the same approach. Consider std::find, for example, which is made a free function for this precise reason. It doesn't need to know the internals of the class it's searching in. It just needs some implementation that fulfills its requirements. Which means that the same find() implementation can work on any container, in the standard library or elsewhere. Scott Meyers argues for the same thing. If you don't like it cluttering up your main namespace, you can of course put it into a separate namespace with functionality for this particular class.
503,833
504,432
What is the best way to implement smart pointers in C++?
I've been evaluating various smart pointer implementations (wow, there are a LOT out there) and it seems to me that most of them can be categorized into two broad classifications: 1) This category uses inheritance on the objects referenced so that they have reference counts and usually up() and down() (or their equivalents) implemented. IE, to use the smart pointer, the objects you're pointing at must inherit from some class the ref implementation provides. 2) This category uses a secondary object to hold the reference counts. For example, instead of pointing the smart pointer right at an object, it actually points at this meta data object... Who has a reference count and up() and down() implementations (and who usually provides a mechanism for the pointer to get at the actual object being pointed to, so that the smart pointer can properly implement operator ->()). Now, 1 has the downside that it forces all of the objects you'd like to reference count to inherit from a common ancestor, and this means that you cannot use this to reference count objects that you don't have control over the source code to. 2 has the problem that since the count is stored in another object, if you ever have a situation that a pointer to an existing reference counted object is being converted into a reference, you probably have a bug (I.E., since the count is not in the actual object, there is no way for the new reference to get the count... ref to ref copy construction or assignment is fine, because they can share the count object, but if you ever have to convert from a pointer, you're totally hosed)... Now, as I understand it, boost::shared_pointer uses mechanism 2, or something like it... That said, I can't quite make up my mind which is worse! I have only ever used mechanism 1, in production code... Does anyone have experience with both styles? Or perhaps there is another way thats better than both of these?
"What is the best way to implement smart pointers in C++" Don't! Use an existing, well tested smart pointer, such as boost::shared_ptr or std::tr1::shared_ptr (std::unique_ptr and std::shared_ptr with C++ 11) If you have to, then remember to: use safe-bool idiom provide an operator-> provide the strong exception guarantee document the exception requirements your class makes on the deleter use copy-modify-swap where possible to implement the strong exception guarantee document whether you handle multithreading correctly write extensive unit tests implement conversion-to-base in such a way that it will delete on the derived pointer type (policied smart pointers / dynamic deleter smart pointers) support getting access to raw pointer consider cost/benifit of providing weak pointers to break cycles provide appropriate casting operators for your smart pointers make your constructor templated to handle constructing base pointer from derived. And don't forget anything I may have forgotten in the above incomplete list.
503,866
505,961
timer class in linux
I need a timer to execute callbacks with relatively low resolution. What's the best way to implement such C++ timer class in Linux? Are there any libraries I could use?
If you're writing within a framework (Glib, Qt, Wx, ...), you'll already have an event loop with timed callback functionalities. I'll assume that's not the case. If you're writing your own event loop, you can use the gettimeofday/select pair (struct timeval, microsecond precision) or the clock_gettime/nanosleep pair (struct timespec, nanosecond precision) for your own event dispatcher. Even though latter interface is higher resolution, scheduling is never that accurate anyways, so take whatever fits best. #include <algorithm> #include <functional> #include <vector> #include <errno.h> #include <sys/time.h> #include <unistd.h> using namespace std; class scheduler { public: scheduler(); int events(); void addEvent(const struct timeval, int (*)(void *), void *); int dispatchUntil(const struct timeval &); bool waitUntil(const struct timeval * = NULL); int loopUntil(const struct timeval * = NULL); private: static bool tv_le(const struct timeval &, const struct timeval &); struct event { struct timeval when; int (*callback)(void *); void *data; }; static struct _cmp : public binary_function<bool, const struct event &, const struct event &> { bool operator()(const struct event &a, const struct event &b) { return !tv_le(a.when, b.when); } } cmp; vector<struct event> heap; }; bool scheduler::tv_le(const struct timeval &a, const struct timeval &b) { return a.tv_sec < b.tv_sec || a.tv_sec == b.tv_sec && a.tv_usec <= b.tv_usec; } scheduler::scheduler() : heap() {} int scheduler::events() { return heap.size(); } void scheduler::addEvent(const struct timeval when, int (*callback)(void *), void *data) { struct event ev = {when, callback, data}; heap.push_back(ev); push_heap(heap.begin(), heap.end(), cmp); } int scheduler::dispatchUntil(const struct timeval &tv) { int count = 0; while (heap.size() > 0 && tv_le(heap.front().when, tv)) { struct event ev = heap.front(); pop_heap(heap.begin(), heap.end(), cmp); heap.pop_back(); ev.callback(ev.data); count++; } return count; } bool 
scheduler::waitUntil(const struct timeval *tv) { if (heap.size() > 0 && (!tv || tv_le(heap.front().when, *tv))) tv = &heap.front().when; if (!tv) return false; struct timeval tv2; do { gettimeofday(&tv2, NULL); if (tv_le(*tv, tv2)) break; tv2.tv_sec -= tv->tv_sec; if ((tv2.tv_usec -= tv->tv_usec) < 0) { tv2.tv_sec--; tv2.tv_usec += 1000000; } } while (select(0, NULL, NULL, NULL, &tv2) < 0 && errno == EINTR); return heap.size() > 0 && tv_le(*tv, heap.front().when); } int scheduler::loopUntil(const struct timeval *tv) { int counter = 0; while (waitUntil(tv)) counter += dispatchUntil(heap.front().when); return counter; } Warning: I love C. I never write C++. I'm just pretending to know the language. Disclaimer: written just now and totally untested. The basic idea is to keep events in a priority queue, wait until the first one, run it, and repeat.
503,916
503,953
Extract element from 2 vectors?
I have 2 vector of with one has vec1{e1,e2,e3,e4} and the other one with vec2 {e2,e4,e5,e7} How to effectively get three vector from above vectors such that 1.has elements that is available only in vec1 similarly 2 has only vec2 elements and 3.with common elements
std::set_intersection should do the trick, if both vectors are sorted: http://msdn.microsoft.com/en-us/library/zfd331yx.aspx std::set_intersection(vec1.begin(), vec1.end(), vec2.begin(), vec2.end(), std::back_inserter(vec3)); A custom predicate can be used for the comparison too: std::set_intersection(vec1.begin(), vec1.end(), vec2.begin(), vec2.end(), std::back_inserter(vec3), my_equal_functor()); If they are not sorted, you may of course sort them first, or alternatively, you can iterate through vec1, and for each element, use std::find to see if it exists in vec2.
504,257
504,418
Public and private access for the same member functions
I have a class (class A) that is designed to be inherited by other classes written by other people. I also have another class (class B), that also inherits from A. B has to access some A's member functions that shouldn't be accessed by other inheriting classes. So, these A's member functions should be public for B, but private for others. How can I solve it without using 'friend' directive? Thank you. EDIT: Example why I need it. class A { public: void PublicFunc() { PrivateFunc(); // and other code } private: virtual void PrivateFunc(); }; class B : public class A { private: virtual void PrivateFunc() { //do something and call A's PrivateFunc A::PrivateFunc(); // Can't, it's private! } };
What you say is: there are two sets of subclasses of A. One set should have access, the other set shouldn't. It feels wrong to have only one brand of subclasses (i.e. B) 'see' A's members. If what you mean is: only we can use this part of functionality, while our clients can't, there are other resorts. (Functionality reuse by inheritance often corners you with this kind of problems. If you go towards reuse by aggregation, you may get around it.) A suggestion: // separate the 'invisible' from the 'visible'. class A_private_part { protected: int inherited_content(); public: int public_interface(); }; class B_internal : public A_private_part { }; class A_export : private A_private_part { public: int public_interface() { A_private_part::public_interface(); } }; // client code class ClientClass : public A_export { }; But better would be to go the aggregation way, and split the current "A" into a visible and an invisible part: class InvisibleFunctionality { }; class VisibleFunctionality { }; class B { InvisibleFunctionality m_Invisible; VisibleFunctionality m_Visible; }; // client code uses VisibleFunctionality only class ClientClass { VisibleFunctionality m_Visible; };
505,021
505,047
Get bytes from std::string in C++
I'm working in a C++ unmanaged project. I need to know how can I take a string like this "some data to encrypt" and get a byte[] array which I'm gonna use as the source for Encrypt. In C# I do for (int i = 0; i < text.Length; i++) buffer[i] = (byte)text[i]; What I need to know is how to do the same but using unmanaged C++. Thanks!
If you just need read-only access, then c_str() will do it: char const *c = myString.c_str(); If you need read/write access, then you can copy the string into a vector. vectors manage dynamic memory for you. You don't have to mess with allocation/deallocation then: std::vector<char> bytes(myString.begin(), myString.end()); bytes.push_back('\0'); char *c = &bytes[0];
505,024
505,068
Writing string (REG_SZ) values to the registry in C++
I've got most of the code for writing a value to the windows registry, however when I change the path to a dummy key and value that I've set up for testing it fails. My code is below: HKEY hKey; LPCTSTR sk = TEXT("SOFTWARE\TestSoftware"); LONG openRes = RegOpenKeyEx(HKEY_LOCAL_MACHINE, sk, 0, KEY_ALL_ACCESS , &hKey); if (openRes==ERROR_SUCCESS) { printf("Success opening key."); } else { printf("Error opening key."); } LPCTSTR value = TEXT("TestSoftwareKey"); LPCTSTR data = "TestData\0"; LONG setRes = RegSetValueEx (hKey, value, 0, REG_SZ, (LPBYTE)data, strlen(data)+1); if (setRes == ERROR_SUCCESS) { printf("Success writing to Registry."); } else { printf("Error writing to Registry."); } LONG closeOut = RegCloseKey(hKey); if (closeOut == ERROR_SUCCESS) { printf("Success closing key."); } else { printf("Error closing key."); } All three tests yield error statuses. The part that confuses me is that I was able to run this code when pointing it at other portions of the registry. Any ideas? thanks, brian
I feel silly. The solution is that need to properly escape the slash in the string as follows: LPCTSTR sk = TEXT("SOFTWARE\\TestSoftware"); Hopefully someone finds this useful...
505,151
2,017,166
CUDA + Visual Studio = suppressed output window
Normally, when I use Visual Studio to do a build, I see warnings and errors shown in the output pane, e.g. 1>------ Build started: Project: pdcuda, Configuration: Release x64 ------ Compiling... foo.cpp Linking... foo.obj : error LNK2001: unresolved external symbol "foo" ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== I'm doing some GPU programming with CUDA. Upon upgrading to 2.1, I no longer get any useful output in Visual Studio. For example, all I now see is: 1>------ Build started: Project: pdcuda, Configuration: Release x64 ------ ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== The other details can be found in BuildLog.htm, but it's inconvenient to hunt that file down all the time. Does anyone know how to force Visual Studio to show the output in its output pane? Things that don't help: uninstalling CUDA: the problem persists in all projects Tools > Options > Projects and Solutions > Build and Run > MSBuild project build output verbosity: changing this pulldown, even to "Diagnostic" has no discernable effect. EDIT: Additional things that don't help: devenv.exe /resetsettings devenv.exe /resetuserdata UPDATE (in response to Die in Sente): It's now working on one of the two machines (I'm not sure what I did to fix it though). The machine that's still having problems has a CUDA Visual Studio Wizard installed that has caused similar problems before. The still-broken machine had version 15.00.21022.8 of the compiler. The working machine has 15.00.30729.1. After making a backup, I transferred "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64" from the working to the broken machine. I observe no changes in behavior on the broken machine.
Final solution: reinstall the OS.
505,261
505,308
How to create threads in VC++
I tend to use POSIX Threads, when programming in C, under Linux. Without MFC Question: How would I then create threads in VC++? Find more information on threads under win32? Edit: Brief illustrations I LOVE stackoverflow - best resource for students! Regards
You should not use the raw Win32 CreateThread() API. Use the C runtime's _beginthreadex() so the runtime has an opportunity to set up its own thread support.
505,476
505,519
Why does c_str() print the string twice?
So... when I go: cout<<stringName<<endl; I get: NT But when I go: cout<<stringName.c_str()<<endl; I get: NTNT Why?
A quick test with the following code: #include <string> #include <iostream> using namespace std; int main(void) { string str = "NT"; cout << str.c_str() << endl; return 0; } produces one instance of NT so it looks like you probably have another output call somewhere.
505,593
505,603
Efficiently wait for a flag state change without blocking resources?
Thread to wait infinitely in a loop until a flag state change, then call function. pseudo code illustration: while (true) { while (!flag) { sleep(1); } clean_upfunction(); } Currently: Using the multithreaded versions of the C run-time libraries only No: MFC Question: Is there a more efficient way of implementing the above A waitForStateChange() - similar to above - in the threading library
For Windows (which you have this tagged for), you want to look at WaitForSingleObject. Use a Windows Event (with CreateEvent), then wait on it; the other thread should call SetEvent. All native Windows, no MFC or anything else required.
505,627
505,631
tracking after cluster group status c++
I would like to write a cluster aware application that will track after the status of a cluster group. to be more specific, I'd like to probe after the group's Owner. The application should know if the local machine is the owner of the group or not and behave accordingly. can I probe the registry for that? if yes, where? if not, does someone know any win32 api calls that will provide this info? many thanks, ofer
Read up on the Cluster API. I'm not going to write up an entire walk-through, but using the API you can get all (or virtually all) of the information exposed by the OS tools, and then some.
505,647
505,672
Interprocess communication between 32- and 64-bit apps on Windows x64
We'd like to support some hardware that has recently been discontinued. The driver for the hardware is a plain 32-bit C DLL. We don't have the source code, and (for legal reasons) are not interested in decompiling or reverse engineering the driver. The hardware sends tons of data quickly, so the communication protocol needs to be pretty efficient. Our software is a native 64-bit C++ app, but we'd like to access the hardware via a 32-bit process. What is an efficient, elegant way for 32-bit and 64-bit applications to communicate with each other (that, ideally, doesn't involve inventing a new protocol)? The solution should be in C/C++. Update: several respondents asked for clarification whether this was a user-mode or kernel-mode driver. Fortunately, it's a user-mode driver.
If this is a real driver (kernel mode), you're SOL. Vista x64 doesn't allow installing unsigned drivers. If this is just a user-mode DLL, you can get a fix by using any of the standard IPC mechanisms. Pipes, sockets, out-of-proc COM, roughly in that order. It all operates at bus speeds, so as long as you can buffer enough data, the context switch overhead shouldn't hurt too much.
505,863
505,984
How do I zip a directory of files using C++?
I'm working on a project using C++, Boost, and Qt. I understand how to compress single files and bytestreams using, for example, the qCompress() function in Qt. How do I zip a directory of multiple files, including subdirectories? I am looking for a cross-platform (Mac, Win, Linux) solution; I'd prefer not to fire off a bunch of new processes. Is there a standard way to combine bytestreams from multiple files into a zipped archive, or maybe there is a convenience function or method that would be available in the Boost iostream library? Many thanks for the assistance. Update: The QuaZip library looks really great. There is an example in the download package (in the "tests" dir) that shows very clearly how to zip up a directory of files. Update 2: After completing this task on my Linux build environment, I discovered that QuaZip doesn't work at all with the Visual Studio compiler. It may be possible to tackle all those compiler errors, but a word of caution to anyone looking down this path.
I have found the following two libraries: ZipIOS++. Seems to be "pure" C++. They don't list Windows explicitly as a supported platform, so I think you should try your luck yourself. QuaZIP. Based on Qt4. Actually looks nice. They list Windows explicitly (using MinGW). Apparently, it is a C++ wrapper for [this] library. Ah, and of course, I have ripped those sites from this Qt mailing-list question about zipping/unzipping of directories :)
505,939
505,967
VS2005 C++ compiler problem including <comdef.h> in MFC application
I am having some trouble converting an old project from VS6 to VS2005. At one place in the code it uses the type variant_t so it includes comdef.h for this purpose. comdef.h then includes comutil.h which generates these errors for me: c:\program files\microsoft visual studio 8\vc\include\comutil.h(978) : error C2535: '_variant_t::_variant_t(int) throw()' : member function already defined or declared c:\program files\microsoft visual studio 8\vc\include\comutil.h(970) : see declaration of '_variant_t::_variant_t' c:\program files\microsoft visual studio 8\vc\include\comutil.h(1007) : error C2535: '_variant_t::operator int(void) const' : member function already defined or declared c:\program files\microsoft visual studio 8\vc\include\comutil.h(998) : see declaration of '_variant_t::operator int' c:\program files\microsoft visual studio 8\vc\include\comutil.h(1037) : error C2535: '_variant_t &_variant_t::operator =(int)' : member function already defined or declared c:\program files\microsoft visual studio 8\vc\include\comutil.h(1029) : see declaration of '_variant_t::operator =' c:\program files\microsoft visual studio 8\vc\include\comutil.h(1331) : error C2084: function '_variant_t::_variant_t(int) throw()' already has a body c:\program files\microsoft visual studio 8\vc\include\comutil.h(970) : see previous definition of '{ctor}' c:\program files\microsoft visual studio 8\vc\include\comutil.h(1588) : error C2084: function '_variant_t::operator int(void) const' already has a body c:\program files\microsoft visual studio 8\vc\include\comutil.h(998) : see previous definition of '.H' c:\program files\microsoft visual studio 8\vc\include\comutil.h(2006) : error C2084: function '_variant_t &_variant_t::operator =(int)' already has a body c:\program files\microsoft visual studio 8\vc\include\comutil.h(1029) : see previous definition of '=' There is probably some configuration that is incorrect, some define missing or some include file I should have included but I can't 
seem to find the problem. Any pointers in the right direction are much appreciated.
Does your own code do something like this: #define long int
506,358
506,508
excelApp.CreateDispatch() returns a zero value : failure
I have the following piece of code in Visual C++ 2005: class _Application : public COleDispatchDriver {....}; _Application excelApp; excelApp.CreateDispatch((LPCTSTR)_T("Excel.Application")) But the call to excelApp.CreateDispatch((LPCTSTR)_T("Excel.Application")) returns a zero value, indicating a failure. Could you please tell me what the possible reason could be? PS: I copied the above piece of code from another solution (Visual C++ 2005) where this works perfectly fine on the same machine.
I got the answer, people: I just had to call CoInitialize(0) before the above piece of code, and then CoUninitialize() after all is done. Cool it was!
506,441
506,491
Keeping the GUI separate
I have a program that (amongst other things) has a command line interface that lets the user enter strings, which will then be sent over the network. The problem is that I'm not sure how to connect the events, which are generated deep inside the GUI, to the network interface. Suppose for instance that my GUI class hierarchy looks like this: GUI -> MainWindow -> CommandLineInterface -> EntryField Each GUI object holds some other GUI objects and everything is private. Now the entryField object generates an event/signal that a message has been entered. At the moment I'm passing the signal up the class hierarchy so the CLI class would look something like this: public: sig::csignal<void, string> msgEntered; And in the c'tor: entryField.msgEntered.connect(sigc::mem_fun(this, &CLI::passUp)); The passUp function just emits the signal again for the owning class (MainWindow) to connect to until I can finally do this in the main loop: gui.msgEntered.connect(sigc::mem_fun(networkInterface, &NetworkInterface::sendMSG)); Now this seems like a really bad solution. Every time I add something to the GUI I have to wire it up all through the class hierarchy. I do see several ways around this. I could make all objects public, which would allow me to just do this in the main loop: gui.mainWindow.cli.entryField.msgEntered.connect(sigc::mem_fun(networkInterface, &NetworkInterface::sendMSG)); But that would go against the idea of encapsulation. I could also pass a reference to the network interface all over the GUI, but I would like to keep the GUI code as separate as possible. It feels like I'm missing something essential here. Is there a clean way to do this? Note: I'm using GTK+/gtkmm/LibSigC++, but I'm not tagging it as such because I've had pretty much the same problem with Qt. It's really a general question.
Short of having some global pub/sub hub, you aren't going to get away from passing something up or down the hierarchy. Even if you abstract the listener to a generic interface or a controller, you still have to attach the controller to the UI event somehow. With a pub/sub hub you add another layer of indirection, but there's still a duplication - the entryField still says 'publish message ready event' and the listener/controller/network interface says 'listen for message ready event', so there's a common event ID that both sides need to know about, and if you're not going to hard-code that in two places then it needs to be passed into both files (though as global it's not passed as an argument; which in itself isn't any great advantage). I've used all four approaches - direct coupling, controller, listener and pub-sub - and in each successor you loosen the coupling a bit, but you don't ever get away from having some duplication, even if it's only the id of the published event. It really comes down to variance. If you find you need to switch to a different implementation of the interface, then abstracting the concrete interface as a controller is worthwhile. If you find you need to have other logic observing the state, change it to an observer. If you need to decouple it between processes, or want to plug into a more general architecture, pub/sub can work, but it introduces a form of global state, and isn't as amenable to compile-time checking. But if you don't need to vary the parts of the system independently it's probably not worth worrying about.
506,496
506,509
Strange behavior in constructor
I have a class made up of several fields, and I have several constructors. I also have a constructor that doesn't take any parameters, but when I try to use it: int main { A a; } The compiler generates an error, while if I use it like this: int main { A a(); } It's ok. What's that? Thank you
The first main uses A's default constructor. The second one declares a function that takes no parameters and returns an A by value, which probably isn't what you intend. So what does the definition of A look like and what is the error that the compiler generates? Oh, and you need to provide a parameter list in the declaration of main: int main() { //... , not int main { //...
506,518
506,590
Is there any guarantee of alignment of address return by C++'s new operation?
Most experienced programmers know that data alignment is important for a program's performance. I have seen some programmers write programs that allocate a bigger buffer than they need and use an aligned pointer into it as the beginning. I am wondering whether I should do that in my program; I have no idea whether there is any guarantee about the alignment of addresses returned by C++'s new operation. So I wrote a little program to test: for(size_t i = 0; i < 100; ++i) { char *p = new char[123]; if(reinterpret_cast<size_t>(p) % 4) { cout << "*"; system("pause"); } cout << reinterpret_cast<void *>(p) << endl; } for(size_t i = 0; i < 100; ++i) { short *p = new short[123]; if(reinterpret_cast<size_t>(p) % 4) { cout << "*"; system("pause"); } cout << reinterpret_cast<void *>(p) << endl; } for(size_t i = 0; i < 100; ++i) { float *p = new float[123]; if(reinterpret_cast<size_t>(p) % 4) { cout << "*"; system("pause"); } cout << reinterpret_cast<void *>(p) << endl; } system("pause"); The compiler I am using is Visual C++ Express 2008. It seems that all the addresses returned by new are aligned, but I am not sure. So my question is: is there any guarantee? If there is, I don't have to align myself; if not, I have to.
The alignment has the following guarantee from the standard (3.7.3.1/2): The pointer returned shall be suitably aligned so that it can be converted to a pointer of any complete object type and then used to access the object or array in the storage allocated (until the storage is explicitly deallocated by a call to a corresponding deallocation function). EDIT: Thanks to timday for highlighting a bug in gcc/glibc where the guarantee does not hold. EDIT 2: Ben's comment highlights an interesting edge case. The requirements on the allocation routines apply to those provided by the standard library only. If the application provides its own version, then there's no such guarantee on the result.
506,522
507,302
Mysterious relative path library dependency
After loading an existing MFC application in Visual Studio 2008, I am left with one linking error: LINK : fatal error LNK1104: cannot open file '..\..\xpressmp\lib\xprm_rt.lib' I have looked "everywhere", but I can't figure out where the relative path is set. The lib file is located in C:\xpressmp\lib, and I have added this directory both under Tools-Options->Projects and Solutions->VC++Directories->Library files and Project->Properties->Linker->Additional Library Directories. I also searched all files in the project to no avail. I have the library file (xprm_rt.lib) listed under Additional Dependencies for both Debug and Release. I also tried adding the path there, but that did not help. I cannot find any #pragma comment-directives. About the LNK1104, the file clearly does not exist in the location that the linker is searching. But I can't see why it is searching there (..\..\...) as I have not specified any relative paths. Any help appreciated :-) UPDATE: In the project .vcproj file, I found the following xml: <File RelativePath="..\..\XpressMP\lib\xprm_rt.lib"></File> <File RelativePath="..\..\XpressMP\lib\xprs.lib"></File> After deleting these lines (where were they set?), I was able to link successfully. Thanks for your help, it seems the relative library path was indeed being appended "automatically" by VS. Thanks both of you, I think it was Nick that put me on the right track.
It sounds like one of a couple possibilities to me: The library itself is setting the lib include path via a #pragma comment(lib, ...) directive; search library headers to see if that's the case You have a project for the library included in your solution which your main project is dependent on, and the relative library path is being appended automatically by VC; check the command line property page for the main project to see if that's the case That's what I can think of which could cause the error; hope it helps.
506,898
506,905
What are the common misuse of using STL containers with iterators?
What are the common misuse of using STL containers with iterators?
Forgetting that iterators are quite often invalidated if you change the container by inserting or erasing container members. For many great tips on using STL I highly recommend Scott Meyers's book "Effective STL" (sanitised Amazon link)
507,043
507,047
Virtual function invocation from constructor
Maybe I am wrong, but this seems to be a very basic question. Suddenly my inheritance chain stopped working. Writing a small basic test application proved that it was me that was wrong (so I can't blame the compiler). I have a base class, with the default behavior in a virtual function. A child class derives from that and changes the behavior. #include <iostream> class Base { public: Base() { print(); } ~Base() {} protected: virtual void print() { std::cout << "base\n"; } }; class Child : public Base { public: Child() {} ~Child() {} protected: virtual void print() { std::cout << "child\n"; } }; int main() { Base b; Child c; } This prints: base base When a Child instance is created, why is Base::print() called? I thought that by using the virtual keyword, the function can be replaced for the derived class. At what point did I get myself confused?
You are calling a virtual method in the constructor, which is not going to work, as the child class isn't fully initialized yet. See also this StackOverflow question.
507,249
507,397
Is there a way to monitor heap usage in C++/MacOS?
I fear that some of my code is causing memory leaks, and I'm not sure about how to check it. Is there a tool or something for MacOS X? Thank you
Apple has a good description of how to use MallocDebug on OS X on their developer pages: there's a document on finding leaks in general, and one on enabling the debug features of malloc in particular.
507,446
507,511
GCC 4.0: "no matching function to call" in template function
I am wondering why the following contrived example code works perfectly fine in Visual Studio 2005, but generates an error in GCC ("no matching function to call" when calling Interpolate() as shown below). Also, how do I work around this? It seems that the error message is just a generic message because GCC did not have a more specific message for the actual reason of the problem and it had to output something. I'm a bit at a loss on how to proceed porting this class without some really ugly workarounds. namespace Geo { template <class T> class TMyPointTemplate { T X,Y; public: inline TMyPointTemplate(): X(0), Y(0) {} inline TMyPointTemplate(T _X,T _Y): X(_X), Y(_Y) {} inline T GetX ()const { return X; } inline T GetY ()const { return Y; } //... template<T> TMyPointTemplate<T> Interpolate(const TMyPointTemplate<T> &OtherPoint)const { return TMyPointTemplate((X+OtherPoint.GetX())/2,(Y+OtherPoint.GetY())/2); } }; typedef TMyPointTemplate<int> IntegerPoint; } Geo::IntegerPoint Point1(0,0); Geo::IntegerPoint Point2(10,10); Geo::IntegerPoint Point3=Point1.Interpolate(Point2); //GCC PRODUCES ERROR: no matching function for call to 'Geo::TMyPointTemplate<int>::Interpolate(Geo::IntegerPoint&)' Thanks for your help, Adrian
I don't think you need the template there at all in the function definition, since it is defined inline with the class TMyPointTemplate Interpolate(const TMyPointTemplate &OtherPoint)const { should do. And when you do use the template for defining the function not inline, I think you need the class keyword in there like this. template<class T> // <- here TMyPointTemplate<T> TMyPointTemplate<T>::Interpolate(const TMyPointTemplate<T> &OtherPoint)const {
507,477
1,493,765
How to convert a float to a string regardless of regional settings?
My product is targeted to a Portuguese audience where the comma is the decimal symbol. I usually use CString::Format to input numbers into strings, and it takes into account the computer's regional settings. While in general this is a good approach, I'm having problems in formatting SQL queries, for instance: CString szInsert; szInsert.Format("INSERT INTO Vertices (X, Y) VALUES (%f, %f)", pt.X, pt.Y); When values are passed I get this string which is an incorrect query: INSERT INTO Vertices (X, Y) VALUES (3,56, 4,67) How do I enforce the dot as the decimal symbol in these strings, without changing the regional settings and without having to make specialized strings for each float value? Note: this is intended as a general question, not a SQL one.
Here's what I did. CString FormatQuery(LPCTSTR pszFormat, ...) { CString szLocale = setlocale(LC_NUMERIC, NULL); setlocale(LC_NUMERIC, "English"); va_list args; va_start(args, pszFormat); CString szFormatted; int nSize = (_vscprintf(pszFormat, args) + 1) * sizeof(char); _vsnprintf_s(szFormatted.GetBuffer(nSize), nSize, nSize, pszFormat, args); szFormatted.ReleaseBuffer(); va_end(args); setlocale(LC_NUMERIC, szLocale); return szFormatted; } You should use it like sprintf. You must #include <locale.h> in order for it to work. I'm a bit stubborn so I didn't use prepared statements/parametrized queries. If you have a similar problem, I suggest you do that. Meanwhile, if your problem is not SQL-related, my answer should help. Edit: Here's a thread safe version: CString FormatQuery(LPCTSTR pszFormat, ...) { _locale_t locale = _create_locale(LC_NUMERIC, "English"); va_list args; va_start(args, pszFormat); CString szFormatted; int nSize = (_vscprintf_l(pszFormat, locale, args) + 1) * sizeof(char); _vsnprintf_s_l(szFormatted.GetBuffer(nSize), nSize, nSize, pszFormat, locale, args); szFormatted.ReleaseBuffer(); va_end(args); return szFormatted; }
507,560
508,081
Adding resource file to VC6 dll
I have a number of VC 6.0 projects (dsps) which build into dlls which don't have resource files. Any idea how to add resources into an existing project? The project is due for a major release shortly and I want to add a file version to those dlls currently lacking one. The dlls will be recompiled before release, so I'm just trying to make these dsps like all the others I've inherited with this project (that do have a file and product version etc. so that we can easily tell exactly what is running on a customer's machine). One answer: Create an *.rc and resource.h file (copy from another project?) and add it to the source folder of your project in the VC6 file view. The resource view is automatically created. Thanks for your help guys, gave me the pointers I needed.
Just add a VERSIONINFO block to the resource file for the DLL. Open the .rc file, and use "Insert/Resource.../Version" and you'll get a new VERSIONINFO resource with a bunch of defaults. If the project does not already have a resource file, you can add one using "File/New.../Resource Script". If you want to roll your own, an example VERSIONINFO block is given on the MSDN page for VERSIONINFO: #define VER_FILEVERSION 3,10,349,0 #define VER_FILEVERSION_STR "3.10.349.0\0" #define VER_PRODUCTVERSION 3,10,0,0 #define VER_PRODUCTVERSION_STR "3.10\0" #ifndef DEBUG #define VER_DEBUG 0 #else #define VER_DEBUG VS_FF_DEBUG #endif VS_VERSION_INFO VERSIONINFO FILEVERSION VER_FILEVERSION PRODUCTVERSION VER_PRODUCTVERSION FILEFLAGSMASK VS_FFI_FILEFLAGSMASK FILEFLAGS (VER_PRIVATEBUILD|VER_PRERELEASE|VER_DEBUG) FILEOS VOS__WINDOWS32 FILETYPE VFT_DLL FILESUBTYPE VFT2_UNKNOWN BEGIN BLOCK "StringFileInfo" BEGIN BLOCK "040904E4" BEGIN VALUE "CompanyName", VER_COMPANYNAME_STR VALUE "FileDescription", VER_FILEDESCRIPTION_STR VALUE "FileVersion", VER_FILEVERSION_STR VALUE "InternalName", VER_INTERNALNAME_STR VALUE "LegalCopyright", VER_LEGALCOPYRIGHT_STR VALUE "LegalTrademarks1", VER_LEGALTRADEMARKS1_STR VALUE "LegalTrademarks2", VER_LEGALTRADEMARKS2_STR VALUE "OriginalFilename", VER_ORIGINALFILENAME_STR VALUE "ProductName", VER_PRODUCTNAME_STR VALUE "ProductVersion", VER_PRODUCTVERSION_STR END END BLOCK "VarFileInfo" BEGIN /* The following line should only be modified for localized versions. */ /* It consists of any number of WORD,WORD pairs, with each pair */ /* describing a language,codepage combination supported by the file. */ /* */ /* For example, a file might have values "0x409,1252" indicating that it */ /* supports English language (0x409) in the Windows ANSI codepage (1252). */ VALUE "Translation", 0x409, 1252 END END
507,884
507,905
What is the nicest way to find a specific string in vector?
For instance, I have some structure: struct s_Some { std::string lable; s_some_junk some_junk; }; And a vector: std::vector<s_Some> mSome; And then I fill this vector with a lot of s_Somes. I need to find an iterator to a single s_Some in this vector which has a specific lable. So far I just iterate through all of this junk and match every lable with the one wanted. This looks a little bit stupid to me. Is there a better way to do so?
Option 1) If you are compelled to use std::vector, but once the vector is filled it stays unchanged, then you could sort the vector and use binary search. The only cost would be the one-time sort, and there would be no additional overhead afterwards. Search time is logarithmic, O(log N). Option 2) If you have the freedom to choose a different data structure, then consider using a map (also logarithmic) or unordered_map (expected O(1), worst case O(n)). I have just noticed that you said you wanted to match every label with the one being looked for, so I conclude you can have duplicate labels. In that case, for point 2 use the corresponding multimap containers, while for point 1 things get a bit messier.