679,145
679,529
How to set up a Winsock UDP socket?
I want to create a Winsock UDP socket that only sends data to a client. I want the kernel to choose an available port for me. On the other hand, I want to indicate which local IP to use, since I'm running a few NICs. I've tried combing through the maze of socket options, as well as binding with the port in the socket address set to 0, to no avail. My code is in Win32 C++.
Please excuse the lack of error checking:

```cpp
char pkt[...];
size_t pkt_length = ...;
sockaddr_in dest;
sockaddr_in local;
WSAData data;
WSAStartup( MAKEWORD( 2, 2 ), &data );

local.sin_family = AF_INET;
local.sin_addr.s_addr = inet_addr( <source IP address> );
local.sin_port = 0; // choose any

dest.sin_family = AF_INET;
dest.sin_addr.s_addr = inet_addr( <destination IP address> );
dest.sin_port = htons( <destination port number> );

// create the socket
SOCKET s = socket( AF_INET, SOCK_DGRAM, IPPROTO_UDP );
// bind to the local address
bind( s, (sockaddr *)&local, sizeof(local) );
// send the pkt
int ret = sendto( s, pkt, pkt_length, 0, (sockaddr *)&dest, sizeof(dest) );
```
679,210
679,339
How can I use a dynamically sized texture array with glTexImage2D?
Currently, I'm able to load in a static sized texture which I have created. In this case it's 512 x 512. This code is from the header:

```cpp
#define TEXTURE_WIDTH 512
#define TEXTURE_HEIGHT 512

GLubyte textureArray[TEXTURE_HEIGHT][TEXTURE_WIDTH][4];
```

Here's the usage of glTexImage2D:

```cpp
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    TEXTURE_WIDTH, TEXTURE_HEIGHT,
    0, GL_RGBA, GL_UNSIGNED_BYTE, textureArray);
```

And here's how I'm populating the array (rough example, not exact copy from my code):

```cpp
for (int i = 0; i < getTexturePixelCount(); i++)
{
    textureArray[column][row][0] = (GLubyte)pixelValue1;
    textureArray[column][row][1] = (GLubyte)pixelValue2;
    textureArray[column][row][2] = (GLubyte)pixelValue3;
    textureArray[column][row][3] = (GLubyte)pixelValue4;
}
```

How do I change that so that there's no need for TEXTURE_WIDTH and TEXTURE_HEIGHT? Perhaps I could use a pointer style array and dynamically allocate the memory...

Edit: I think I see the problem; in C++ it can't really be done. The workaround, as pointed out by Budric, is to use a single-dimensional array but use all 3 dimensions multiplied to represent what would be the indexes:

```cpp
GLbyte *array = new GLbyte[xMax * yMax * zMax];
```

And to access, for example x/y/z of 1/2/3, you'd need to do:

```cpp
GLbyte byte = array[1 * 2 * 3];
```

However, the problem is, I don't think the glTexImage2D function supports this. Can anyone think of a workaround that would work with this OpenGL function?

Edit 2: Attention OpenGL developers, this can be overcome by using a single-dimensional array of pixels...

    [0]: column 0 > [1]: row 0 > [2]: channel 0 ... n > [n]: row 1 ... n > [n]: column 1 .. n

... no need to use a 3-dimensional array. In this case I've had to use this workaround as 3-dimensional arrays are apparently not strictly possible in C++.
You can use:

```cpp
int width = 1024;
int height = 1024;
GLubyte * texture = new GLubyte[4*width*height];
...
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    width, height,
    0, GL_RGBA, GL_UNSIGNED_BYTE, texture);
delete [] texture; // remove the un-needed local copy of the texture
```

However you still need to specify the width and height to OpenGL in the glTexImage2D call. This call copies the texture data, and that data is then managed by OpenGL. You can delete, resize, or change your original texture array all you want and it won't make a difference to the texture you specified to OpenGL.

Edit: C/C++ deals only with 1-dimensional arrays. The fact that you can do texture[a][b] is hidden and converted by the compiler at compile time. The compiler must know the number of columns and will do texture[a*cols + b]. Use a class to hide the allocation and the access to the texture.

For academic purposes, if you really want dynamic multi-dimensional arrays, the following should work:

```cpp
int rows = 16, cols = 16;
char * storage = new char[rows * cols];
char ** accessor2D = new char *[rows];
for (int i = 0; i < rows; i++)
{
    accessor2D[i] = storage + i*cols;
}
accessor2D[5][5] = 2;
assert(storage[5*cols + 5] == accessor2D[5][5]);
delete [] accessor2D;
delete [] storage;
```

Notice that in all the cases I'm using 1D arrays. They are just arrays of pointers, and arrays of pointers to pointers. There's memory overhead to this. Also, this is done for a 2D array without colour components. For 3D dereferencing this gets really messy. Don't use this in your code.
679,370
679,393
DXGetErrorString newbie question
I'm new to C++ and DirectX, and I was wondering what the proper use of DXGetErrorString and DXGetErrorDescription is. According to http://msdn.microsoft.com/en-us/library/bb173057(VS.85).aspx and http://msdn.microsoft.com/en-us/library/bb173056(VS.85).aspx, these functions return a pointer to a string. However, in all the examples I've seen on the web, they directly use the return value without freeing it afterward. For example:

```cpp
char buf[2048];
sprintf(buf, "Error: %s error description: %s\n",
        DXGetErrorString(hr), DXGetErrorDescription(hr));
```

Does that mean there is a memory leak, because the memory allocated for the error string and the error description is never released? If not, how is it released? Thank you for the help!
Most likely, the functions return a static string, so it doesn't need to be freed. It'd be similar to writing code like this, where you would not worry about freeing the return value:

```cpp
PCWSTR GetErrorCode(int error)
{
    switch (error)
    {
    case 1:
        return L"File not found";
    ...
    default:
        return L"Unknown error";
    }
}
```
679,520
679,526
Advice on a better way to extend C++ STL container with user-defined methods
I inherited from a C++ STL container and added my own methods to it. The rationale was that, to the clients, it would look and act like a regular list, yet have application-specific methods they can readily call. This works fine, but I have read numerous posts about not inheriting from the STL. Can someone provide concrete advice on how I might write the code below in a better way?

```cpp
class Item
{
    int a;
    int b;
    int c;

    int SpecialB()
    {
        return a * b + c;
    }
};

class ItemList : public std::vector<Item>
{
    int MaxA()
    {
        if( this->empty() )
            throw;
        int maxA = (*this)[0].a;
        for( int idx = 1; idx < this->size(); idx++ )
        {
            if( (*this)[idx].a > maxA )
            {
                maxA = (*this)[idx].a;
            }
        }
        return maxA;
    }

    int SpecialB()
    {
        if( this->empty() )
            throw;
        int specialB = (*this)[0].SpecialB();
        for( int idx = 1; idx < this->size(); idx++ )
        {
            if( (*this)[idx].SpecialB() < specialB )
            {
                specialB -= (*this)[idx].c;
            }
        }
        return specialB;
    }

    int AvgC()
    {
        if( this->empty() )
            throw;
        int cSum = 0;
        for( int idx = 0; idx < this->size(); idx++ )
        {
            cSum += (*this)[idx].c;
        }
        return cSum / this->size(); // average
    }
};
```

EDIT: Thanks for a bunch of thoughtful answers. I will create helper functions instead, and from now on will never inherit from STL containers.
Why do you need to extend vector in this way? Use the standard &lt;algorithm&gt; facilities with your own functors, e.g. std::min_element, std::max_element:

```cpp
int max_a = std::max_element( v.begin(), v.end(),
    boost::bind( std::less< int >(),
        boost::bind( &Item::a, _1 ),
        boost::bind( &Item::a, _2 ) ) )->a;
```

std::accumulate, for calculating the average:

```cpp
const double avg_c =
    std::accumulate( v.begin(), v.end(), double( 0 ),
        boost::bind( std::plus<double>(), _1,
            boost::bind( &Item::c, _2 ) ) )
    / v.size(); // of course, check the size before dividing
```

Your ItemList::SpecialB() could be rewritten as:

```cpp
int accumulate_func( int start_from, int result, const Item& item )
{
    if ( item.SpecialB() < start_from )
    {
        result -= item.c;
    }
    return result;
}

if ( v.empty() )
{
    throw something( "empty vector" );
}
const int result =
    std::accumulate( v.begin(), v.end(), v.front().SpecialB(),
        boost::bind( &accumulate_func, v.front().SpecialB(), _1, _2 ) );
```

BTW: if you don't need access to protected members, you don't need inheritance.
679,571
679,580
When to use "new" and when not to, in C++?
Possible Duplicate: When should I use the new keyword in C++?

When should I use the "new" operator in C++? I'm coming from a C#/Java background and instantiating objects is confusing for me. If I've created a simple class called "Point", when I create a point should I do:

```cpp
Point p1 = Point(0, 0);
```

or:

```cpp
Point* p1 = new Point(0, 0);
```

Can someone clarify for me when to use the new operator and when not to?

Duplicate of: When should I use the new keyword in C++?

Related: About constructors/destructors and new/delete operators in C++ for custom objects; Proper stack and heap usage in C++?
You should use new when you wish an object to remain in existence until you delete it. If you do not use new then the object will be destroyed when it goes out of scope. Some examples of this are:

```cpp
void foo()
{
    Point p = Point(0,0);
} // p is now destroyed.

for (...)
{
    Point p = Point(0,0);
} // p is destroyed after each loop
```

Some people will say that the use of new decides whether your object is on the heap or the stack, but that is only true of variables declared within functions. In the example below, the location of 'p' will be where its containing object, Foo, is allocated. I prefer to call this 'in-place' allocation.

```cpp
class Foo
{
    Point p;
}; // p will be automatically destroyed when Foo is.
```

Allocating (and freeing) objects with the use of new is far more expensive than if they are allocated in-place, so its use should be restricted to where necessary.

A second example of when to allocate via new is for arrays. You cannot* change the size of an in-place or stack array at run-time, so where you need an array of undetermined size it must be allocated via new. E.g.

```cpp
void foo(int size)
{
    Point* pointArray = new Point[size];
    ...
    delete [] pointArray;
}
```

(* pre-emptive nitpicking: yes, there are extensions that allow variable sized stack allocations).
679,660
679,666
How do I define a generic std::list of a custom type?
I'm getting the following error trying to define a list of "Lines":

    line-clip.cpp:44: error: expected initializer before '<' token

```cpp
#include <list>
using namespace std;

class Line {
public:
    Point p1;
    Point p2;

    Line(Point P1, Point P2)
    {
        p1 = P1;
        p2 = P2;
    }
}

// Line List
list <Line> lineList;
```

How do you define a generic List of type "Line"?
You need a semicolon after your class declaration :-). class Line { ... }; // put a semicolon here.
679,759
679,769
Should C++ template be used in this case?
I have a class that my client uses to Get() a packet. The packet contains a std::vector whose type is not known until the packet is generated internally in my Interface class (in this example, it depends on the Packet::type variable). I was wondering if a template could be used for this, since the Packet class is just a generic class whose type can be pretty much anything. The problem with it, as far as I can think of, is that the client has no clue what type of packet it is until he gets the packet and looks at the Packet::type member. So this wouldn't work, because he wouldn't be able to declare a variable that Get() would return (?). Could a template be used elegantly in this case?

One alternative I could think of is to define a base class, and create a child class for each type. Then, the Get() method could return a pointer to the base class. The client can simply look at the Packet::type (packet->type) and cast it to the appropriate child class. But that's a bit messy? Is there a more elegant solution?

The code below demonstrates the scenario roughly:

```cpp
enum { T_FLOAT, T_COMPLEX } TYPE_ENUM;

// T can either be of type float or std::complex
template<typename T>
class Packet
{
public:
    TYPE_ENUM type;
    std::vector<T> data;
};

class Interface
{
public:
    // Method that client calls to obtain the packet
    Packet<> Get()
    {
        return queue.pop(); // return current packet in queue
    }

private:
    Queue<Packet> queue;
};
```
You would want to create a base class, and keep the template Packet class that you have. Call this base class PacketBase. Your Packet class would derive from the new PacketBase class. That is to say, each Packet&lt;T&gt; type that is generated at compile time will derive from PacketBase.

```cpp
class PacketBase
{
public:
    virtual ~PacketBase() {} // a virtual member makes the type polymorphic,
                             // which dynamic_cast below requires
};

// T can either be of type float or std::complex<float> or ...
template<typename T>
class Packet : public PacketBase
{
public:
    std::vector<T> data;
    // Put other members you need here.
    // Note you don't need the type member that you had before.
};
```

Interface::Get would return a PacketBase*. Here PacketBase is just used as a generic name that holds any Packet&lt;T&gt; type. The queue would store a collection of PacketBase*.

```cpp
class Interface
{
public:
    // Method that client calls to obtain the packet
    PacketBase* Get()
    {
        return queue.pop(); // return current packet in queue
    }

private:
    Queue<PacketBase*> queue;
};
```

To figure out which type of Packet you have, you can use RTTI with dynamic_cast:

```cpp
Interface o;
// fill the queue
PacketBase *pPacket = o.Get();
if (dynamic_cast<Packet<float> *>(pPacket) != NULL)
    ; // You have a Packet<float>
else if (dynamic_cast<Packet<std::complex<float> > *>(pPacket) != NULL)
    ; // You have a Packet<std::complex<float> >
// ... and so on for each Packet<T> type you have
```

You can also beef up your PacketBase with some virtual methods. Then you can call those directly instead of the RTTI dynamic_cast stuff.
679,862
679,900
Defining Function Pointers
I am trying to call the internal Windows NT API function NtOpenProcess. I know calling internal APIs can be a bad idea, but for this particular tool I need the low-level access this API provides. My problem is that to use such an internal API, I need to use run-time dynamic linking, as specified in this article. To do that, I need to define a function pointer to NtOpenProcess. Here's my declaration:

```cpp
typedef NTSTATUS (NTAPI *_NtOpenProcess) (
    OUT PHANDLE,
    IN ACCESS_MASK,
    IN POBJECT_ATTRIBUTES,
    IN PCLIENT_ID OPTIONAL);

class procManager
{
    HINSTANCE hNTDLL;
public:
    procManager()
    {
        hNTDLL = LoadLibrary(L"ntdll.dll");
        if (!hNTDLL)
            throw std::runtime_error("NTDLL.DLL failure.");
        _NtOpenProcess NtOpenProcess;
        NtOpenProcess = reinterpret_cast<_NtOpenProcess>
            (GetProcAddress(hNTDLL, L"NtOpenProcess"));
        if (!NtOpenProcess)
            throw std::runtime_error("NtOpenProcess not found.");
        // Use NtOpenProcess for stuff here
    };
    ~procManager()
    {
        FreeLibrary(hNTDLL);
    };
};
```

Problem is, apparently there is an error in my typedef above. The compiler returns:

    error C2059: syntax error : '__stdcall'

I used the handy-dandy "Go To Definition" feature of my IDE (Visual Studio 2008) and found that NTAPI in the declaration is defined as __stdcall. Unfortunately, removing NTAPI from my declaration, making it this:

```cpp
typedef NTSTATUS (*_NtOpenProcess) (
    OUT PHANDLE,
    IN ACCESS_MASK,
    IN POBJECT_ATTRIBUTES,
    IN PCLIENT_ID OPTIONAL);
```

results in another error:

    error C2065: '_NtOpenProcess' : undeclared identifier

At this point I'm saying "Of course it's undefined, that's why it's a typedef!" Does anyone see my error in the declaration?
Did you include "ntdef.h" and "ntstatus.h"? The compiler probably can't understand NTSTATUS.
679,888
679,899
How can I easily work with a char**?
I have a char** that I frequently need to insert into or perform a lookup. It is very tedious to realloc(), malloc() the array and insert strings. Is there any standard way that I can add strings to or do lookups in a char**? I guess I'm looking for something like string, but using char**'s instead.
If you're frequently inserting into this structure, you shouldn't be using a char** at all; an array isn't a suitable data structure for these kinds of operations. Consider a std::vector<string> or something similar if possible.
680,011
692,039
Finding Strings Neighbors By Up To 2 Differing Positions
Given a seed string, I want to find its neighbors that differ in at most 2 positions. The digits involved in generating a string are only four (i.e. 0, 1, 2, 3). This is an example of what I mean:

    # In this example, the first column
    # holds neighbors with only 1 position differing.
    # The rest of the columns differ in 2 positions.

    Seed = 000
    100   110 120 130 101 102 103
    200   210 220 230 201 202 203
    300   310 320 330 301 302 303
    010   011 012 013
    020   021 022 023
    030   031 032 033
    001
    002
    003

    Seed = 001
    101   111 121 131 100 102 103
    201   211 221 231 200 202 203
    301   311 321 331 300 302 303
    011   010 012 013
    021   020 022 023
    031   030 032 033
    000
    003
    002

Hence, given a tag of length L we will have 3*L + 9L(L-1)/2 neighbors.

But why does this code of mine fail to generate the neighbors correctly, especially when the seed string is other than "000"? Other approaches are also welcome, especially with speed improvements, since we will be processing millions of seed tags of length 34 to 36.

```cpp
#include <iostream>
#include <vector>
#include <fstream>
#include <sstream>
using namespace std;

string ConvertInt2String(int IntVal)
{
    std::string S;
    std::stringstream out;
    out << IntVal;
    S = out.str();
    return S;
}

string Vec2Str(vector<int> NTg)
{
    string StTg = "";
    for (unsigned i = 0; i < NTg.size(); i++) {
        StTg += ConvertInt2String(NTg[i]);
    }
    return StTg;
}

template <typename T>
void prn_vec(const std::vector<T>& arg, string sep = "")
{
    for (unsigned n = 0; n < arg.size(); n++) {
        cout << arg[n] << sep;
    }
    return;
}

vector<int> neighbors(vector<int>& arg, int posNo, int baseNo)
{
    // pass base position and return neighbors
    vector<int> transfVec;
    transfVec = arg; // modified according to strager's first post
    transfVec[posNo % arg.size()] = baseNo;
    return transfVec;
}

int main()
{
    vector<int> numTag;
    numTag.push_back(0);
    numTag.push_back(0);
    numTag.push_back(1);
    // If "000" this code works, but not 001 or others.
    // Note that in actual practice numTag can be longer than 3.

    int TagLen = static_cast<int>(numTag.size());

    for (int p = 0; p < TagLen; p++) {
        // First loop is to generate tags 1 position differ
        for (int b = 1; b <= 3; b++) {
            int bval = b;
            if (numTag[p] == b) {
                bval = 0;
            }
            vector<int> nbnumTag = neighbors(numTag, p, bval);
            string SnbnumTag = Vec2Str(nbnumTag);
            cout << SnbnumTag;
            cout << "\n";

            // Second loop for tags in 2 position differ
            for (int l = p + 1; l < TagLen; l++) {
                for (int c = 1; c <= 3; c++) {
                    int cval = c;
                    if (nbnumTag[l] == c) {
                        cval = c;
                    }
                    vector<int> nbnumTag2 = neighbors(nbnumTag, l, cval);
                    string SnbnumTag2 = Vec2Str(nbnumTag2);
                    cout << "\t" << SnbnumTag2;
                    cout << "\n";
                }
            }
        }
    }
    return 0;
}
```
Would this do it? It enumerates the tree of possible strings, pruning all with >2 differences from the original.

```cpp
#include <cstdio>

void walk(char* s, int i, int ndiff) {
    char c = s[i];
    if (ndiff > 2) return;
    if (c == '\0') {
        if (ndiff > 0) puts(s);
    } else {
        s[i] = '0'; walk(s, i + 1, (s[i] == c ? ndiff : ndiff + 1));
        s[i] = '1'; walk(s, i + 1, (s[i] == c ? ndiff : ndiff + 1));
        s[i] = '2'; walk(s, i + 1, (s[i] == c ? ndiff : ndiff + 1));
        s[i] = '3'; walk(s, i + 1, (s[i] == c ? ndiff : ndiff + 1));
        s[i] = c;
    }
}

char seed[] = "000";

int main() {
    walk(seed, 0, 0);
    return 0;
}
```
680,066
680,115
Calling C++ dll function from C#: Of structs, strings and wchar_t arrays
Here's a simple problem I need to solve, but it makes me feel my hair turning gray, as all my attempts return the same error: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt."

I have a sample app written in C++ which makes a call to the dll. Here is the relevant code:

```cpp
// function I need to call
bool convertHKID_Name(char *code, RECO_DATA *o_data); // hkid

// struct definition
struct RECO_DATA {
    wchar_t FirstName[200];
    wchar_t Surname[200];
};

// how it is used in C++ code
CString code;
RECO_DATA data;
GetDlgItemText(IDC_CODE, code);
char _code[200];
WideCharToMultiByte(CP_UTF8, 0, code, -1, (char *)_code, 200, NULL, NULL);
ocr->convertHKID_Name(_code, &data);
```

Now when I debug the C++ code, it does the proper thing: it writes some Unicode data into the data struct. Here is my attempt to do the same in C#:

```csharp
// my C# wrapper class
public class cnOCRsdk
{
    [StructLayout(LayoutKind.Sequential, Size=400, CharSet=CharSet.Unicode), Serializable]
    public struct RECO_DATA
    {
        [MarshalAsAttribute(UnmanagedType.ByValTStr, SizeConst = 200)]
        public string FirstName;
        [MarshalAsAttribute(UnmanagedType.ByValTStr, SizeConst = 200)]
        public string Surname;
    };

    [DllImport(@"cnOCRsdk.dll", CharSet=CharSet.Auto,
        EntryPoint = "?convertHKID_Name@CcnOCRsdk@@QAE_NPADPAURECO_DATA@@@Z")]
    public static extern bool convertHKID_Name(ref string num, ref RECO_DATA o_data);

    [DllImport("Kernel32.dll")]
    public static extern int WideCharToMultiByte(
        uint CodePage,
        uint dwFlags,
        [In, MarshalAs(UnmanagedType.LPWStr)] string lpWideCharStr,
        int cchWideChar,
        [Out, MarshalAs(UnmanagedType.LPStr)] StringBuilder lpMultiByteStr,
        int cbMultiByte,
        IntPtr lpDefaultChar,    // Defined as IntPtr because in most cases it is
        IntPtr lpUsedDefaultChar // better to pass NULL
    );
}

// my attempt to call the function from the dll
cnOCRsdk.RECO_DATA recoData = new cnOCRsdk.RECO_DATA();
string num = "262125355174";
StringBuilder sb = new StringBuilder(200, 200);
cnOCRsdk.WideCharToMultiByte(65001, 0, num, -1, sb, 200, IntPtr.Zero, IntPtr.Zero);
string sbTostring = sb.ToString();
// the next line generates the 'Attempted to read or write protected memory' error
bool res = cnOCRsdk.convertHKID_Name(ref sbTostring, out recoData);
```

My guess is that I'm not marshaling the RECO_DATA structure properly, because it is this struct that gets written to by the convertHKID_Name function. But how should I fix it?
I believe it should work if you:

1. Change the CharSet in the declaration of convertHKID_Name to CharSet.Ansi
2. Remove the "ref" from the string parameter
3. Pass the string num directly to convertHKID_Name instead of calling WideCharToMultiByte
680,097
680,114
I've heard i++ isn't thread safe, is ++i thread-safe?
I've heard that i++ isn't a thread-safe statement since in assembly it reduces down to storing the original value as a temp somewhere, incrementing it, and then replacing it, which could be interrupted by a context switch. However, I'm wondering about ++i. As far as I can tell, this would reduce to a single assembly instruction, such as 'add r1, r1, 1' and since it's only one instruction, it'd be uninterruptable by a context switch. Can anyone clarify? I'm assuming that an x86 platform is being used.
You've heard wrong. It may well be that "i++" is thread-safe for a specific compiler and specific processor architecture, but it's not mandated in the standards at all. In fact, since multi-threading isn't part of the ISO C or C++ standards (a), you can't consider anything to be thread-safe based on what you think it will compile down to.

It's quite feasible that ++i could compile to an arbitrary sequence such as:

    load r0,[i]  ; load memory into reg 0
    incr r0      ; increment reg 0
    stor [i],r0  ; store reg 0 back to memory

which would not be thread-safe on my (imaginary) CPU that has no memory-increment instructions. Or it may be smart and compile it into:

    lock         ; disable task switching (interrupts)
    load r0,[i]  ; load memory into reg 0
    incr r0      ; increment reg 0
    stor [i],r0  ; store reg 0 back to memory
    unlock       ; enable task switching (interrupts)

where lock disables and unlock enables interrupts. But even then, this may not be thread-safe in an architecture that has more than one of these CPUs sharing memory (the lock may only disable interrupts for one CPU).

The language itself (or libraries for it, if it's not built into the language) will provide thread-safe constructs, and you should use those rather than depend on your understanding (or possibly misunderstanding) of what machine code will be generated. Things like Java synchronized and pthread_mutex_lock() (available to C/C++ under some operating systems) are what you need to look into (a).

(a) This question was asked before the C11 and C++11 standards were completed. Those iterations have now introduced threading support into the language specifications, including atomic data types (though they, and threads in general, are optional, at least in C).
680,125
680,132
Can I use a grayscale image with the OpenGL glTexImage2D function?
I have a texture which has only 1 channel, as it's a grayscale image. When I pass the pixels in to glTexImage2D, it comes out red (obviously, because channel 1 is red; RGB).

```cpp
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,
    dicomImage->GetColumns(), dicomImage->GetRows(),
    0, GL_RGBA, GL_UNSIGNED_BYTE, pixelArrayPtr);
```

Do I change GL_RGBA? If so, what to?
Change it to GL_LUMINANCE: since your source data has a single channel, both the internalformat (third) argument and the format (seventh) argument should be GL_LUMINANCE, so the one channel is replicated into R, G and B when the texture is sampled. See https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glTexImage2D.xhtml (Note that GL_LUMINANCE was later removed from core-profile OpenGL; the modern equivalent is GL_RED plus a texture swizzle.)
680,300
680,361
Operator overloading with memory allocation?
The sentence below is from The Positive Legacy of C++ and Java by Bruce Eckel, about operator overloading in C++:

    C++ has both stack allocation and heap allocation and you must overload your operators to handle all situations and not cause memory leaks. Difficult indeed.

I do not understand how operator overloading has anything to do with memory allocation. Can anyone please explain how they are correlated?
I can imagine a couple of possible interpretations:

First, in C++ new and delete are both actually operators; if you choose to provide custom allocation behavior for an object by overloading these operators, you must be very careful in doing so to ensure you don't introduce leaks.

Second, some types of objects require that you overload operator= to avoid memory management bugs. For example, if you have a reference counting smart pointer object (like the Boost shared_ptr), you must implement operator=, and you must be sure to do so correctly. Consider this broken example:

```cpp
template <class T>
class RefCountedPtr {
public:
    RefCountedPtr(T *data) : mData(data) { mData->incrRefCount(); }
    ~RefCountedPtr() { mData->decrRefCount(); }
    RefCountedPtr<T>& operator=(const RefCountedPtr<T>& other) {
        mData = other.mData;
        return *this;
    }
    ...
protected:
    T *mData;
};
```

The operator= implementation here is broken because it doesn't manage the reference counts on mData and other.mData: it does not decrement the reference count on mData, leading to a leak; and it does not increment the reference count on other.mData, leading to a possible memory fault down the road, because the object being pointed to could be deleted before all the actual references are gone.

Note that if you do not explicitly declare your own operator= for your classes, the compiler will provide a default implementation which has behavior identical to the implementation shown here; that is, completely broken for this particular case.

So, as the article says, in some cases you must overload operators, and you must be careful to handle all situations correctly.

EDIT: Sorry, I didn't realize that the reference was an online article rather than a book. Even after reading the full article it's not clear what was intended, but I think Eckel was probably referring to situations like the second one I described above.
680,489
680,500
Why do we use 'this->' and not 'this.' to access members?
I was looking at a library a person has made for Facebook in C++. The header file is this:

```cpp
#ifndef __FACEBOOK_H__
#define __FACEBOOK_H__

/**
 * Facebook Class
 * Joel Seligstein
 * Last mod: Aug 22, 2006
 *
 * This is the beginnings of a facebook class set and REST client. Its not documented
 * yet nor nearly complete. But this is a release to demonstrate its usefulness.
 * Please email joel@seligstein.com with suggestions or additions.
 *
 * TODO: Create classes/parsers for each request type
 * TODO: Linux URL launcher
 */

// uncomment to have verbose output turned on
//#define fb_debug 1

// define which platform you're compiling for
#define fb_windows 1
//#define fb_linux 1

#include <string>
#include <sstream>
#include <list>
using namespace std;

#ifdef fb_windows
#include <windows.h>
#endif

#include "curl/curl.h"
#include "xmlParser/xmlParser.h"
#include "md5.h"

class facebook
{
public:
    // app/session vars
    string api_key;
    string secret;
    string token;
    string server;
    string session_key;
    string session_secret;
    string uid;
    bool has_session;

    facebook( string my_key, string my_secret, string my_server );
    bool authenticate( );
    bool request( string method, list<string> params, string *res );
    bool load_token( );
    void launch_login( string url );
    bool get_session( );
    void clean_up( );

private:
    // curl info
    CURL *curl;
    CURLcode res;
    int call_id;

    // internal functions
    string get_signature( list<string> params );
    static string md5( string str );
    static string get_param_string( list<string> params, bool separate );
    static size_t write_callback( void *ptr, size_t size, size_t nmemb, void *userp );
};

#endif //__FACEBOOK_H__
```

Then, inside the cpp file (my question is regarding this), the below is the constructor:

```cpp
facebook::facebook( string my_key, string my_secret, string my_server )
{
    this->api_key = my_key;
    this->secret = my_secret;
    this->server = my_server;
    this->has_session = false;
    this->call_id = 0;
}
```

Why have they used the -> operator and not .? I have a limited understanding that -> accesses properties and methods of the type directly in memory, but I am confused. I would, through ignorance, expect to see:

```cpp
facebook::facebook( string my_key, string my_secret, string my_server )
{
    this.api_key = my_key;
    this.secret = my_secret;
    this.server = my_server;
    this.has_session = false;
    this.call_id = 0;
}
```

All I want to know is the theory behind why -> is used over the dot notation.

UPDATE: For anyone else in the same boat as me learning C++, I have expanded on an example posted by a member in this question. I have also wrapped in an initialization list for a member field.

```cpp
#include "stdafx.h"
#include <iostream>
using namespace std;

class A
{
private:
    int x;
public:
    A() : x(0) {}
    int getX() const { return x; }
    void setX(int xx) { x += xx; }
};

int main()
{
    A a;
    a.setX(13);

    A *pa = &a;
    pa->setX(2);

    A b = a;
    b.setX(5);

    cout << "a"  << a.getX()   << endl;
    cout << "a*" << pa->getX() << endl;
    cout << "b"  << b.getX()   << endl;

    return 0;
}
```
this is a pointer to the current object i.e., inside methods (or constructor) of class A, this is of type A *. (Note that, if the method is tagged as const, this is of type A const *.) Hence the use of -> (designed only for pointers) and not . (designed only for class objects A or references to class objects A&).
680,561
693,724
Beginner questions regarding to "building a library", in Xcode on iPhone specifically
I have never clearly understood all this linking/building/dependency business. Now, I am trying to build the FreeType library (which is in C++) into a *.a library file for the iPhone (because another library I am trying to use, openFrameworks, depends on FreeType). I know that to compile C++ for the iPhone I simply need to rename *.cpp to *.mm. (I have tried with some simple programs in main() and it works.) But how can I build the library in Xcode then? (Without the main(), I suppose.) On the other hand, it would be great if you guys could recommend some books or documents on such linking/building topics, which I have found myself most confused about.

PS. I have already got the paid version of Xcode and some sample apps compiled onto the iPhone.
Renaming .cpp files to .mm would mean they'll be treated as Objective-C++ code rather than C++ code. I don't think that'd be a good idea, even if it should still work. Besides, FreeType is written in C, not C++.

Google for "compiler linker" and you'll find quite a few documents on how they work. That should help for documentation.

It's entirely possible to compile static libraries for the iPhone; what you can't do is compile dynamic libraries or frameworks (you could, but it's not encouraged).

1. Open your project.
2. In the source/target browser, select the "Targets" node (with the little red and white target icon).
3. Right-click on the node, and in the popup menu select "Add", followed by "New Target". A dialog opens.
4. On the left hand side, there's an iPhone OS and a Mac OS X section. The iPhone OS section should already be selected; if not, do so.
5. You should have three choices of targets, one of which is a static library. Select it, click OK.
6. Give the library a name on the next page. Click Finish.

Your "Targets" node now includes a child for your static library. You can add sources either via the file menu, or by dragging them onto the "Compile Sources" child node.

Hope that helps.
680,672
680,693
How to delay the initialisation of a member in a C++ base class until the ctor of derived class is executed?
Here's the scenario: I want to define a base class that allocates a buffer once for all derived classes, while the size of the buffer varies among the derived classes. I could achieve it this way: class base { public: base():size(), p_array(0){} protected: size_t size; boost::shared_array<unsigned char> p_array; }; in the derived class: class derived : public base { public: derived(size_t array_size) { size = array_size; p_array.reset(new unsigned char[size]); } }; However, in order to simplify the design of the derived class, I really want to put this line: p_array.reset(new unsigned char[size]); somewhere in the base class, thus writing it only once. Is there any C++ design pattern that could achieve this? Thanks.
Sorry, but why do you have two arrays (and two sizes)? If you create a derived class this way, it ends up with its own p_array and size members hiding the base's ones. Don't you want this? class base { public: base():size(), p_array(0){} base(size_t array_size):size(array_size) { p_array.reset(new unsigned char[size]); } private: size_t size; boost::shared_array<unsigned char> p_array; }; class derived : public base { public: derived(size_t array_size):base(array_size) { } };
680,715
680,741
Silverlight with C++.Net
Can you use C++.Net for writing a Silverlight application? That is, not using C# or VB.Net as the backend language, but C++.Net.
Check this. A Google search would have answered your query. Update: I was wrong. Apologies for that. Mark has answered it correctly. Mark: You can use any language as long as it compiles to pure managed code. For example, there are Silverlight applications using IronPython and IronRuby. The only restriction is you can't have any native code, or use parts of the FCL that are restricted by Silverlight (e.g. P/Invoke).
680,721
681,508
msvcp80d.dll not found while using TBB
I am using Intel TBB C++ for multithreading an application on Visual Studio 2008. When I run the executable I get a dialog saying "MSVCP80D.dll" was not found. There is so much on the net about this that it confuses me. Please help. EDIT: Based on the answers, I was finally able to fix the "dll missing" problem. I had given a path to the TBB lib for vc8, leading to a dependency on vc8 DLLs, which are used with Visual Studio 2005, not with 2008. (Using Depends (http://www.dependencywalker.com/) it is easy to determine the run-time dependencies of an executable.) I changed my project to depend on vc9 DLLs instead of vc8 and then it worked fine. Another thing to note is the use of manifest files on Windows. Manifest files describe dependencies, and they must be generated when building an application.
You can find them online at various places. Just scan the file for viruses and put it in your program's path, and everything should work fine. You may need more than one of the debug DLLs; you can use depends.exe (Dependency Walker) to see what you are missing.
680,738
680,798
Law of Demeter or return the whole vector
Which one is better: public: const vector<int> & GetPointsVector(); private: vector<int> PointsVector; Or: public: int GetCurrentPoint(); void MoveToFirstPoint(); void MoveToNextPoint(); bool IsAtLastPoint(); size_t GetNumberOfPoints(); private: vector<int> PointsVector;
There is no right answer. The answer will vary with context, of which, at present, we have very little. It depends on the clients of your class. In most situations you would not want a casual inspector to change the object's state, so it is better to a certain degree to return a reference to a const object. However, in such a case, I'd also make the function const, i.e. /* a design const - this accessor does not change the state */ const vector<int> & GetPointsVector() const; This is also efficient since you are not passing around heavy objects as return values (it is a different question that most compilers do RVO these days). If your clients need to use the vector in algorithms, yes, you'd better provide iterators. But two pairs, i.e. typedef vector<int>::iterator iterator; typedef vector<int>::const_iterator const_iterator; iterator begin(); const_iterator begin() const; iterator end(); const_iterator end() const; This would be in keeping with the way the STL is designed. But be careful in specifying what sort of iterator the client can expect (for example: if the class specification says that the iterators returned are RandomIterators, it sets a certain amount of expectation on your implementation).
681,142
681,150
cpp iterator problem
template<class T> class mStack { private: vector<T> a; vector<T>::iterator top; public: void push(T); T pop(); mStack(); void printStack(); }; The code with above class is not getting compiled... why? What is the problem? The compiler says "expected ; above top".
You need a typename: typename vector<T>::iterator top; This reassures the compiler that vector<T>::iterator really is a type. For a discussion of this and other template gotchas, see the C++ FAQ.
681,148
683,318
How to print a Qt dialog or window?
How do I get Qt to print a complete dialog or window? I could dump the window contents with an external program like xwd and print that, but I would prefer to do it all with Qt.
While you can use grabWidget to get the pixmap representation of the dialog, essentially you will be printing the pixels of the pixmap, i.e. the dialog is rasterized at the screen resolution and then scaled to the printer resolution. This may or may not result in some artifacts. Another way to do it is by using the QWidget::render() function that takes a paint device. This way, you can pass your printer as the paint device. The dialog is now "drawn" onto the printer with the printer's resolution.
681,192
681,202
How to get Vector of Complex numbers from two vectors (real & imag)
I have two vectors of floats and I want them to become one vector of complex numbers. I'm stuck. I don't mind using iterators, but I am sure I'd be reinventing a wheel I'm not informed about. Is my code leading me in the right direction? typedef std::vector<float> CVFloat; CVFloat vA, vB; // fill vectors typedef std::complex<CVFloat> myComplexVector; myComplexVector* vA_Complex = new myComplexVector(vA, vB); The code above gets through the compiler, but when I want to get single numbers from myComplexVector using an iterator I get the error "Undefined symbol 'const_iterator'" (Borland C++): myComplexVector::const_iterator it = vA_Complex->begin();
Here you are creating a "complex" object whose real and imaginary parts are vectors of floats. Maybe what you actually want to do is create a vector of complex objects whose real and imaginary parts are floats? EDIT: myComplexVector is not a vector, it is a complex. That's why a const_iterator for it is not defined.
681,200
681,220
How to use 2 C libs that export the same function names
Duplicate of the following question: C function conflict Hi, in my current project I have to use a kind of interface lib. The function names are given by this interface; what these functions do is the developer's choice. As far as I can tell, a project shall use these functions, and when it comes to compiling you choose the lib and with it the functionality. What I try to do is to use an existing lib and my lib at the same time by wrapping the other one and calling it in my functions: otherlib: int function1 (int a) { // do something } mylib: int function1 (int a) { //my code here otherlib::function1(a); } Problem is I don't have access to the other lib and the other lib doesn't have any namespaces. I already tried namespace old { #include "otherlib.h" } and then called the old function by old::function1 in my function. This works as long as it's only a header file. The lib exports its symbols back into global space. Also something like namespace new { function1 (int a) { ::function1(a); } } didn't work. Last but not least I tried the ifdefs and defines suggested here, but I wasn't successful. Any ideas how to solve this? Thanks in advance. EDIT: I neither have access to the old lib nor to the project both libs shall be used in. EDIT2: at least the old lib is a static one
Namespace clashes in C are solved using library-name prefixes, like: libfoo --> foo_function1 libbar --> bar_function1 These prefixes are actual namespaces. So in libbar you write int bar_function1(int a) { function1(a); } This is the way to solve such problems. C has namespaces --- they are just called prefixes ;) Another option is to do various dirty tricks with dynamic loading of libraries, like: h1=dlopen("libfoo.so", RTLD_NOW); foo_function1=dlsym(h1,"function1"); h2=dlopen("libbar.so", RTLD_NOW); bar_function1=dlsym(h2,"function1");
681,221
682,694
Writing files to USB stick causes file corruption/lockup on surprise removal
I'm writing a background application to copy files in a loop to a USB stick with the "Optimize for quick removal" policy set. However, if the stick is removed part way through this process (specifically in the WriteFile() call below, which returns ERROR_FILE_NOT_FOUND) the application hangs, the drive is then permanently inaccessible from any other application and the PC cannot be shut down/logged off/restarted etc. All running instances of Windows Explorer also hang as a result. I have traced the issue to the CloseHandle() call made after the stick is removed and the above error occurs. It's almost as if CloseHandle() is blocking indefinitely in the driver somewhere because the stick is no longer there? Anyway, I have managed to get past this issue by simply skipping the CloseHandle() call if WriteFile() returns ERROR_FILE_NOT_FOUND. However, this leads to another problem where, every so often, a file gets irrecoverably corrupted and the only way to fix it is using chkdsk, or reformatting the stick. Note that this only happens on XP (SP2 and 3); Vista does not seem to suffer from the issue. A snippet of the code follows: HANDLE hFile = CreateFile(szFile, GENERIC_WRITE, FILE_SHARE_WRITE | FILE_SHARE_READ | FILE_SHARE_DELETE, NULL, CREATE_ALWAYS, FILE_FLAG_WRITE_THROUGH, NULL); if (hFile != INVALID_HANDLE_VALUE) { int nLastError = ERROR_SUCCESS; if (!WriteFile(hFile, pBuffer, dwBufferSize, &dwWritten, NULL)) { nLastError = GetLastError(); } // If usb stick is removed during WriteFile(), ERROR_FILE_NOT_FOUND usually results. // If the handle is closed at this point then the drive is inaccessible. // If CloseHandle() is skipped, then file corruption occurs instead if (nLastError != ERROR_FILE_NOT_FOUND) { CloseHandle(hFile); } } I've tried pretty much every combination of flags for CreateFile(), all to no avail. Has anybody seen this before or got any good ideas how to avoid either of the two problems occurring? Is what I'm seeing a driver problem which has been silently fixed under Vista? Thanks for any help.
It's almost as if CloseHandle() is blocking indefinitely in the driver somewhere because the stick is no longer there? Sounds reasonable. CloseHandle() will ultimately emit a file system IRP, and you're not using non-blocking I/O, so that IRP will be synchronous. But it looks like where the actual file system has abruptly disappeared from underneath the file system driver, that IRP is never completed. Which means you're stuffed - the user-mode function call which led to the file system IRP being issued will never return. Try using non-blocking I/O - that will probably get you around this problem, at least from the point of view of not hanging. You will still be experiencing resource loss and the like, since the IRP will still be passed down and almost certainly still won't be coming back up, but at least you won't be blocking on it. BTW, "optimize for quick removal" is, I would say, designed to reduce the amount of caching that goes on and perhaps influence the order of writes to the file system to reduce the chance of corruption; I very much doubt it is intended to preserve the file system in the event of file system departure during a write! You should not be surprised that this kills the file system.
681,243
681,259
In C++, how can I avoid #including a header file when I need to use an enumeration?
In my C++ header files I try to use forward declarations (class MyClass;) instead of #including the class header, as recommended in many C++ coding standards (the Google C++ Style Guide is one). Unfortunately, when I introduce enumerations, I can't do the forward declaration any more. Like this: //// myclass1.hpp //// class MyClass1 { public: enum MyEnum1 { Enum_A, Enum_B, Enum_C }; }; //// myclass2.hpp //// // I want to avoid this #include "myclass1.hpp" // I'd prefer to do this (forward declaration) class MyClass1; class MyClass2 { // This is o.k.: I only need to forward declare MyClass1 MyClass1* ptr; // This forces me to #include, but I don't want to! void func( MyClass1::MyEnum1 e ); }; The best solution I can think of so far is to replace enums with member constants: //// myclass1.hpp //// class MyClass1 { public: static const int Enum_A; static const int Enum_B; static const int Enum_C; }; //// myclass1.cpp //// const int MyClass1::Enum_A = 1; const int MyClass1::Enum_B = 2; const int MyClass1::Enum_C = 3; In this case, though, the solution seems worse than the problem. I'm currently looking through Large Scale C++ Software Design (Lakos) and Working Effectively with Legacy Code (Feathers) for dependency breaking techniques, but I haven't found a good solution yet.
You cannot forward-declare enums - and your workaround is a step down the path to complete madness. Are you experiencing any major compilation slowdowns caused by #including headers? If not, just #include them. Use of forward declarations is not "best practice"; it is a hack.
681,518
681,536
C++ enum not properly recognized by compiler
Can anyone explain why the following code does not compile (on g++ (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-49))? struct X { public: enum State { A, B, C }; X(State s) {} }; int main() { X(X::A); } The message I get is: jjj.cpp: In function 'int main()': jjj.cpp:10: 'X X::A' is not a static member of 'struct X' jjj.cpp:10: no matching function for call to 'X::X()' jjj.cpp:1: candidates are: X::X(const X&) jjj.cpp:5: X::X(X::State)` Is this bad code or a compiler bug? Problem solved by Neil+Konrad. See the comments to Neil's answer below.
X(X::A); is being seen as a declaration, not as an expression - the parentheses around X::A are ignored, so it reads like X X::A;. If you really want this code, use: (X)(X::A);
681,725
681,736
What do I get from front() of empty std container?
If front() returns a reference and the container is empty what do I get, an undefined reference? Does it mean I need to check empty() before each front()?
You get undefined behaviour - you need to check that the container is non-empty with empty() before calling front().
681,746
681,778
2D matrix and overloading operator() / ugly syntax
I'm using a 2D matrix in one of my projects. It's something like it is suggested at C++ FAQ Lite. The neat thing is that you can use it like this: int main() { Matrix m(10,10); m(5,8) = 106.15; std::cout << m(5,8); ... } Now, I have a graph composed of vertices and each vertex has a public (just for simplicity of the example) pointer to 2D matrix like above. Now I do have a pretty ugly syntax to access it. (*sampleVertex.some2DTable)(0,0) = 0; //bad sampleVertex.some2DTable->operator()(0,0) = 0; //even worse... Probably I'm missing some syntactic sugar here due to my inexperience with operator overloading. Is there a better solution?
Consider using references instead of pointers (provided it can't be null and you can initialize it in the constructor). Consider adding a getter, or an instance of a matrix wrapper class on the vertex, that returns a reference to the 2D matrix (provided it can't be null): sampleVertex.some2DTable()(0,0) = 0; sampleVertex.some2DTableWrap(0,0) = 0; However, to me it sounds like too minor an issue to justify going through all the trouble.
681,772
681,799
Equivalent of IllegalArgumentException of Java in C++
In Java if an input argument to a method is invalid, we can throw an IllegalArgumentException (which is of type RuntimeException). In C++, there is no notion of checked and unchecked exceptions. Is there a similar exception in standard C++ which can be used to indicate a runtime exception? Or is there a common style not in the standard but everyone follows in practice for a situation like this? Or, should I just create my own custom exception and throw it?
Unlike Java, C++ does not have a "standard framework" but only a small (and optional) standard library. Moreover, there are different opinions among C++ programmers on whether to use exceptions at all. Therefore you will find different recommendations by different people: Some like to use exception types from the standard library, some libraries (e.g. Poco) use a custom exception hierarchy (derived from std::exception), and others don't use exceptions at all (e.g. Qt). If you want to stick to the standard library, there exists a specialized exception type: invalid_argument (derived from logic_error). #include <stdexcept> // ... throw std::invalid_argument("..."); For the reference: Here is an overview of standard exception types defined (and documented) in stdexcept: exception logic_error domain_error invalid_argument length_error out_of_range runtime_error range_error overflow_error underflow_error
681,943
681,997
How can I get a std::set of the keys of a std::map
I was writing an algorithm this morning and I ran into a curious situation. I have two std::maps. I want to perform a set intersection on the sets of the keys of each (to find which keys are common to both maps). At some point in the future, I think it's likely I'll also want to perform set subtraction here as well. Luckily, the STL includes functions for both of those operations. The problem is, I can't seem to get a std::set of the keys out of a std::map. Is there any way to do this? I'm looking for something that would be this simple, like it is in Java: std::set<Foo> keys = myMap.getKeySet(); My understanding is that I can't use the std::set_intersection() function directly on iterators into the maps because the maps expose std::pair objects instead of just keys. Also, I don't think the map guarantees order. I'm also interested in performing this same operation on a pair of std::multimaps, if that makes any difference. EDIT: I forgot to mention initially that due to the age of the compiler I'm forced to use (MSVC++ 6), most of the nifty template tricks that are available in boost can not be used.
You can use the versatile boost::transform_iterator to return an iterator that returns only the keys (and not the values). See How to retrieve all keys (or values) from a std::map and put them into a vector?
682,304
699,149
What am I missing from my environment variables for my linker to fail with LNK1181?
I have a Qt project for which I have had a debug console displayed whilst developing. I am about to ship the product, so I removed the qmake console option: CONFIG += console However when I do that I get the following error: link /LIBPATH:"c:\Qt\4.5.0\lib" /NOLOGO /INCREMENTAL:NO /LTCG /MANIFEST /MANIFESTFILE:"./_obj/win32\Lynx.intermediate.manifest" /SUBSYSTEM:WINDOWS "/MANIFESTDEPENDENCY:type='win32' name='Microsoft.Windows.Common-Controls' version='6.0.0.0' publicKeyToken='6595b64144ccf1df' language='*' processorArchitecture='*'" /VERSION:4.00 /OUT:bin\win32\Lynx.exe @C:\DOCUME~1\hannentp\LOCALS~1\Temp\nm1C9.tmp link_.exe /LIBPATH:c:\Qt\4.5.0\lib /NOLOGO /INCREMENTAL:NO /LTCG /MANIFEST /MANIFESTFILE:./_obj/win32\Lynx.intermediate.manifest /SUBSYSTEM:WINDOWS /MANIFESTDEPENDENCY:type='win32' name='Microsoft.Windows.Common-Controls' version='6.0.0.0' publicKeyToken='6595b64144ccf1df' language='*' processorArchitecture='*' /VERSION:4.00 /OUT:bin\win32\Lynx.exe @C:\DOCUME~1\hannentp\LOCALS~1\Temp\nm1C9.tmp~ LINK_ : fatal error LNK1181: cannot open input file 'name='Microsoft.Windows.Common-Controls'' NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio 9.0\VC\Bin\link.EXE"' : return code '0x49d' Stop. I think that a lib path is missing; however, I do have a large number of environment variables set up and working already. Has anybody seen this and does anyone know the location of Microsoft.Windows.Common-Controls? Also I am running this from buildbot, so help involving the VS2008 IDE is not really what I'm after.
It seems that the command line is just underquoted: "/MANIFESTDEPENDENCY:type='win32' name='Microsoft.Windows.Common-Controls' version='6.0.0.0' publicKeyToken='6595b64144ccf1df' language='*' processorArchitecture='*'" On the second line, the quotes are gone and the linker treats each word as an object to link. You should probably just re-add the quotes (if it was you who added this argument), i.e. begin and end with "\"" (three characters in place of one quote). It seems other sources suggest that too, so try experimenting with that.
682,432
682,489
VC++ Resource Files and Lengthy String Resources
In our app we have resource strings that are apparently too long for the compiler. The build breaks, stating "line length is too long." I have found little information about the topic of lengthy string resources and even had a difficult time finding what the limit on such a resource string is. Eventually I found this article which gives the limit: MSDN. Have you had any experience with limits on string resources? Is there some way to concatenate these without doing any coding? Any other suggestions would be greatly appreciated.
I would have a look at RCDATA resources. I used them to store large text files in my application. Edit: Here is my MFC code; it should be able to give you some pointers. CString CWSApplication::LoadTextResource(UINT nID) { HRSRC hResInfo; HGLOBAL hResData; HINSTANCE hInst = AfxGetResourceHandle(); hResInfo = ::FindResource(hInst, MAKEINTRESOURCE(nID), RT_RCDATA); if ( hResInfo == NULL ) { return CString(); } hResData = ::LoadResource(hInst, hResInfo); if ( hResData == NULL ) { return CString(); } char *data = (char*)(::LockResource(hResData)); DWORD len = ::SizeofResource(hInst, hResInfo); return CString(data, len); }
682,434
682,521
What's the difference between these two classes?
Below, I'm not declaring my_ints as a pointer. I don't know where the memory will be allocated. Please educate me here! #include <iostream> #include <vector> class FieldStorage { private: std::vector<int> my_ints; public: FieldStorage() { my_ints.push_back(1); my_ints.push_back(2); } void displayAll() { for (int i = 0; i < my_ints.size(); i++) { std::cout << my_ints[i] << std::endl; } } }; And in here, I'm declaring the field my_ints as a pointer: #include <iostream> #include <vector> class FieldStorage { private: std::vector<int> *my_ints; public: FieldStorage() { my_ints = new std::vector<int>(); my_ints->push_back(1); my_ints->push_back(2); } void displayAll() { for (int i = 0; i < my_ints->size(); i++) { std::cout << (*my_ints)[i] << std::endl; } } ~FieldStorage() { delete my_ints; } }; main() function to test: int main() { FieldStorage obj; obj.displayAll(); return 0; } Both of them produces the same result. What's the difference?
In terms of memory management, these two classes are virtually identical. Several other responders have suggested that there is a difference between the two in that one is allocating storage on the stack and the other on the heap, but that's not necessarily true, and even in the cases where it is true, it's terribly misleading. In reality, all that's different is where the metadata for the vector is allocated; the actual underlying storage in the vector is allocated from the heap regardless. It's a little bit tricky to see this because you're using std::vector, so the specific implementation details are hidden. But basically, std::vector is implemented like this: template <class T> class vector { public: vector() : mCapacity(0), mSize(0), mData(0) { } ~vector() { if (mData) delete[] mData; } ... protected: int mCapacity; int mSize; T *mData; }; As you can see, the vector class itself only has a few members -- capacity, size and a pointer to a dynamically allocated block of memory that will store the actual contents of the vector. In your example, the only difference is where the storage for those few fields comes from. In the first example, the storage is allocated from whatever storage you use for your containing class -- if it is heap allocated, so too will be those few bits of the vector. If your container is stack allocated, so too will be those few bits of the vector. In the second example, those bits of the vector are always heap allocated. In both examples, the actual meat of the vector -- the contents of it -- are allocated from the heap, and you cannot change that. One last caveat about the second example: you do delete the vector in the destructor, but the class still violates the "rule of three" -- without a copy constructor and assignment operator, copying a FieldStorage would leave two objects deleting the same vector.
682,444
682,473
How can you find out the maximum size of the memory stack for a C++ program on linux? (gnu compiler)
I am curious about how to find out what the maximum stack size is for a particular compiler/os combo. I am using Ubuntu/GNU compiler. A few questions I have in addition are: Who controls the default maximum stack size; OS or compiler? Is the default maximum scaled according to total memory? (ie a machine with 2gb memory would have larger default size than a machine with only 512mb) For this example both machines are same os/compiler setup, just different amounts of system RAM. Thanks!
Who controls the default maximum stack size; OS or compiler? On Linux it is effectively the OS: the default is 8 MB IIRC, and you can inspect or change it with ulimit -s. On Windows the linker embeds a stack size in the executable, so there the toolchain controls it. Is the default maximum scaled according to total memory? (ie a machine with 2gb memory would have larger default size than a machine with only 512mb) For this example both machines are same os/compiler setup, just different amounts of system RAM. No, not unless you do it yourself. You can alter stack sizes via linker switches (on MinGW/Windows targets): ld --stack=<STACK_SIZE> or gcc -Wl,--stack=<STACK_SIZE> The C++ Standard's take on the issue of stacks and heaps: The standard is based on an abstract machine and does not really concern itself with hardware or stacks or heaps. It talks about automatic storage and a free store. The free store is where you'd be if you are calling new (mostly). FWIW, an implementation can have only one memory area masquerading as both stack and heap when it comes to object allocation. Your question, therefore, boils down to an implementation-specific issue rather than a language issue. Hope this helps.
682,658
682,753
Negative Speedup on Multithreading my Program
On my laptop with Intel Pentium dual-core processor T2370 (Acer Extensa) I ran a simple multithreading speedup test. I am using Linux. The code is pasted below. While I was expecting a speedup of 2-3 times, I was surprised to see a slowdown by a factor of 2. I tried the same with gcc optimization levels -O0 ... -O3, but everytime I got the same result. I am using pthreads. I also tried the same with only two threads (instead of 3 threads in the code), but the performance was similar. What could be the reason? The faster version took reasonably long - about 20 secs - so it seems is not an issue of startup overhead. NOTE: This code is a lot buggy (indeed it does not make much sense as the output of serial and parallel versions would be different). The intention was just to "get" a speedup comparison for the same number of instructions. #include <stdio.h> #include <time.h> #include <unistd.h> #include <pthread.h> class Thread{ private: pthread_t thread; static void *thread_func(void *d){((Thread *)d)->run();} public: Thread(){} virtual ~Thread(){} virtual void run(){} int start(){return pthread_create(&thread, NULL, Thread::thread_func, (void*)this);} int wait(){return pthread_join(thread, NULL);} }; #include <iostream> const int ARR_SIZE = 100000000; const int N = 20; int arr[ARR_SIZE]; int main(void) { class Thread_a:public Thread{ public: Thread_a(int* a): arr_(a) {} void run() { for(int n = 0; n<N; n++) for(int i=0; i<ARR_SIZE/3; i++){ arr_[i] += arr_[i-1];} } private: int* arr_; }; class Thread_b:public Thread{ public: Thread_b(int* a): arr_(a) {} void run() { for(int n = 0; n<N; n++) for(int i=ARR_SIZE/3; i<2*ARR_SIZE/3; i++){ arr_[i] += arr_[i-1];} } private: int* arr_; }; class Thread_c:public Thread{ public: Thread_c(int* a): arr_(a) {} void run() { for(int n = 0; n<N; n++) for(int i=2*ARR_SIZE/3; i<ARR_SIZE; i++){ arr_[i] += arr_[i-1];} } private: int* arr_; }; { Thread *a=new Thread_a(arr); Thread *b=new Thread_b(arr); Thread *c=new Thread_c(arr); clock_t 
start = clock(); if (a->start() != 0) { return 1; } if (b->start() != 0) { return 1; } if (c->start() != 0) { return 1; } if (a->wait() != 0) { return 1; } if (b->wait() != 0) { return 1; } if (c->wait() != 0) { return 1; } clock_t end = clock(); double duration = (double)(end - start) / CLOCKS_PER_SEC; std::cout << duration << "seconds\n"; delete a; delete b; } { clock_t start = clock(); for(int n = 0; n<N; n++) for(int i=0; i<ARR_SIZE; i++){ arr[i] += arr[i-1];} clock_t end = clock(); double duration = (double)(end - start) / CLOCKS_PER_SEC; std::cout << "serial: " << duration << "seconds\n"; } return 0; } See also: What can make a program run slower when using more threads?
The times you are reporting are measured using the clock function: The clock() function returns an approximation of processor time used by the program. $ time bin/amit_kumar_threads.cpp 6.62seconds serial: 2.7seconds real 0m5.247s user 0m9.025s sys 0m0.304s The real time will be less for multiprocessor tasks, but the processor time will typically be greater. When you use multiple threads, the work may be done by more than one processor, but the amount of work is the same, and in addition there may be some overhead such as contention for limited resources. clock() measures the total processor time, which will be the work + any contention overhead. So it should never be less than the processor time for doing the work in a single thread. It's a little hard to tell from the question whether you knew this, and were surprised that the value returned by clock() was twice that for a single thread rather than being only a little more, or you were expecting it to be less. Using clock_gettime() instead (you'll need the realtime library librt, g++ -lrt etc.) gives: $ time bin/amit_kumar_threads.cpp 2.524 seconds serial: 2.761 seconds real 0m5.326s user 0m9.057s sys 0m0.344s which still is less of a speed-up than one might hope for, but at least the numbers make some sense. 100000000*20/2.5s ≈ 800 MHz, the bus frequency is 1600 MHz, so I suspect with a read and a write for each iteration (assuming some caching), you're memory bandwidth limited as tstenner suggests, and the clock() value shows that most of the time some of your processors are waiting for data. (does anyone know whether clock() time includes such stalls?)
682,859
682,946
Restrict functionality to a certain computer
I have a program that is using a configuration file. I would like to tie the configuration file to the PC, so copying the file to another PC with the same configuration won't work. I know that the Windows Activation mechanism monitors hardware to detect changes and that it can tolerate some minor changes to the hardware. Is there any library that can help me do that? My other option is to use WMI to get the hardware configuration and to program my own tolerance mechanism. Thanks a lot, Nicolas
Microsoft Software Licensing and Protection Services has functionality to bind a license to hardware. It might be worth looking into. Here's a blog posting that might be of interest to you as well.
683,059
683,085
How to validate LPVOID to <Bad Ptr>
I'm working with unmanaged C++. The problem I have happens when I call a method that returns an LPVOID: LPVOID MyMethod(...); The problem is that this method sometimes returns a bad pointer, and I want to know if there is a way of detecting whether the returned value is a bad pointer. I have tried checking it against NULL with no luck. The only way I can tell that the result is a bad pointer is while I'm debugging; I have tried some different approaches but am still unable to do it.
No, there is no easy way to determine if a pointer is bad. Windows does have IsBadReadPtr, IsBadWritePtr. These functions are inherently flawed - they only determine if a pointer is readable or writable in your address space at the moment of the call. They can also be the cause of security issues and should never be used. The main issue is that there is no way to differentiate between a "bad" pointer that is still accessible to your process, and a good pointer. For instance, int g[5]; int somethingElse; void* GetPointer() { return &g[5]; // Whoops, off by one. } &g[5] is probably a valid address in your process, and might be pointing to somethingElse; you'll be able to access it without crashing, but writing to it will corrupt your state.
683,180
683,199
Is this extern harmless?
main.h extern int array[100]; main.c #include "main.h" int array[100] = {0}; int main(void) { /* do_stuff_with_array */ } In the main.c module, the array is defined, and declared. Does the act of also having the extern statement included in the module, cause any problems? I have always visualized the extern statement as a command to the linker to "look elsewhere for the actual named entity. It's not in here. What am I missing? Thanks. Evil.
The correct interpretation of extern is that you tell something to the compiler. You tell the compiler that, despite not being present right now, the variable declared will somehow be found by the linker (typically in another object (file)). The linker will then be the lucky guy to find everything and put it together, whether you had some extern declarations or not. To avoid exposure of names (variables, functions, ..) outside of a specific object (file), you would have to use static.
683,544
684,913
AMF0 parser in ObjectiveC
I am writing an application which connects to an FLV stream and parses the data to extract SCRIPTDATAOBJECT. I successfully get the content of a SCRIPTDATAOBJECT and am trying to parse the AMF0 data to get some metadata. Do you know of sample code for AMF0 parsing? Thanks a lot. Thierry
Licensed under MIT style: Objective-C implementation of the Flash Remoting format (AMF0/AMF3) for servers and clients
684,112
684,130
The precision of the long double output is not correct. What might be wrong?
I have a long double constant that I am setting either as const or not-const. It is longer (40 digits) than the precision of a long double on my test workstation (19 digits). When I print it out, it no longer is displayed at 19 digits of precision, but at 16. Here is the code I am testing: #include <iostream> #include <iomanip> #include <limits> #include <cstdio> int main () { const long double constLog2 = 0.6931471805599453094172321214581765680755; long double log2 = 0.6931471805599453094172321214581765680755; std::cout << std::numeric_limits<long double>::digits10 + 1 << std::endl; std::cout << "const via cout: " << std::setprecision(19) << constLog2 << std::endl; std::cout << "non-const via cout: " << std::setprecision(19) << log2 << std::endl; std::fprintf(stdout, "const via printf: %.19Lf\n", constLog2); std::fprintf(stdout, "non-const via printf: %.19Lf\n", log2); return 0; } Compile: $ g++ -Wall precisionTest.cpp Output: $ ./a.out 19 const via cout: 0.6931471805599452862 non-const via cout: 0.6931471805599452862 const via printf: 0.6931471805599452862 non-const via printf: 0.6931471805599452862 I would expect 0.6931471805599453094 but instead get 0.6931471805599452862. Is there a reason that the 19 digits of precision are cut to 16 digits? Here is my environment: $ gcc --version i686-apple-darwin9-g++-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5490) I am seeing the same problem with other versions of gcc, e.g.: $ gcc --version g++ (GCC) 3.4.6 20060404 (Red Hat 3.4.6-10) I can look into NTL or other libraries but I'm curious what is causing this. Thanks for your insight.
I get this output: 19 const via cout: 0.6931471805599453094 non-const via cout: 0.6931471805599453094 const via printf: 0.6931471805599453094 non-const via printf: 0.6931471805599453094 But i'm using long double literals instead of double literals: const long double constLog2 = 0.6931471805599453094172321214581765680755L; long double log2 = 0.6931471805599453094172321214581765680755L;
684,133
684,181
Passing C# data type parameters to dll written in C++?
Still working on a problem that started from here Calling C++ dll function from C#: Of structs, strings and wchar_t arrays., but with a different approach. Following the example Calling Managed Code from Unmanaged Code and vice-versa I wrote a managed wrapper in C++ to access the unmanages class in the unmanaged C++ dll. It looks like this: //in header file public __gc class TSSLDllWrapper { public: TSSLDllWrapper(); //this is the unmanaged class CcnOCRsdk * _sdk; bool convertHKID_Name(char *code, RECO_DATA *o_data); }; //in .cpp file TSSLDllWrapper::TSSLDllWrapper(void) { _sdk = new CcnOCRsdk(); } bool TSSLDllWrapper::convertHKID_Name(char *code, RECO_DATA *o_data) { return _sdk->convertHKID_Name(code, o_data); } //C++ RECO_DATA structure definition: struct RECO_DATA{ wchar_t FirstName[200]; wchar_t Surname[200]; }; Now I have a dll that I can import into my C# project. Here is the problem however: When I want to call the method from the dll file, like this: TSSLDllWrapper wrapper = new TSSLDllWrapper(); bool res = wrapper.convertHKID_NameSimple( //need to pass parameters here ); It expects the C++ parameters - pointers to char and RECO_DATA. How can I fix this and pass C++ types from C# code?
One way to convert most C data types is to use the PInvoke Interop Assistant. It will create proper C# / VB.Net types for most C structures. Here is the output for RECO_DATA [System.Runtime.InteropServices.StructLayoutAttribute(System.Runtime.InteropServices.LayoutKind.Sequential, CharSet=System.Runtime.InteropServices.CharSet.Unicode)] public struct RECO_DATA { /// wchar_t[200] [System.Runtime.InteropServices.MarshalAsAttribute(System.Runtime.InteropServices.UnmanagedType.ByValTStr, SizeConst=200)] public string FirstName; /// wchar_t[200] [System.Runtime.InteropServices.MarshalAsAttribute(System.Runtime.InteropServices.UnmanagedType.ByValTStr, SizeConst=200)] public string Surname; } For the char* parameter, you can pass IntPtr.Zero or use Marshal.StringToCoTaskMemAnsi to get the job done.
684,191
684,199
Howto Construct Char Arrays
I have the following code that tries to enumerate strings. #include <string> #include <iostream> using namespace std; string base = "000"; char values[] = {'0', '1', '2', '3' }; // Error Here for (int i = 0; i < base.length(); ++i) { for (int j = 0; j < countof(values); ++j) { if (base[i] != values[j]) { string copy = base; copy[i] = values[j]; cout << copy << endl; for (int k = i+1; k < base.length(); ++k) { for (int l = 0; l < countof(values); ++l) { if (copy[k] != values[l]) { string copy2 = copy; copy[k] = values[l]; cout << copy2 << endl; } } } } } } But how come upon compilation it gave error: test.cc:9: error: expected unqualified-id before 'for' test.cc:9: error: expected constructor, destructor, or type conversion before '<' token test.cc:9: error: expected unqualified-id before '++' token
The error is actually in the following line, at the for loop: your code needs to be contained in a function of some sort, most likely int main(void)
684,449
684,483
Best practices for a C++ portable opensource application
I am starting an open source cross platform project in C++. My development environment is Linux. There may be other developers who develop from different platforms as well. So I need some help in getting started with the configuration and development environment setup, so that all developers from multiple platforms can develop easily. Following are my questions Compiler : I plan to use g++ and heard that it is cross platform. Is that a good choice? Make files : I have seen Code::Blocks editor and it generates make files on fly and you don't have to write one manually. Is this the best practice or do I need to create make files? What are the other settings to be taken care when developing cross-platform applications? Any thoughts? Edit Thanks for the answers. One more question. Do you create makefiles by hand? Or is there any tool which can generate it?
The most important thing for your project to catch on is portability. It should be easy to build & run for everybody. GCC (g++) is indeed the compiler of choice. It comes from the opensource world and is therefore most widely adopted by it. However, a simple Makefile won't cut it. Generating it using CodeBlocks or any other IDE has a problem: Due to their platform, other developers will probably have to generate their own, but won't necessarily have CodeBlocks at hand, or just don't want to use it. There exist several different cross-platform build systems, which are IDE-agnostic. Some of them create Makefiles, others don't use make but build on their own. The most widely adopted build system is Autotools. However, it is hard to learn, cluttered, and an overall pain in the ass. Out of many other choices, I recommend Waf. It is proven by several larger open source projects already, XMMS2 being a good example (while not a very popular project, it has a large build with a lot of plugins and builds on a lot of platforms including OS X and Windows). While waf is not very broadly adopted, it is meant to be shipped with the source and easy to set up. My recommendation for you. Edit: to get started with your Open Source project, I also recommend this book by Karl Fogel (available for reading online). Have fun!
684,475
684,527
C++ how to copy a map to a vector
What's the best way in C++ to copy a pair from a map to vector? I'm doing this so I can subsequently sort the vector.
This should do what you want: #include <iostream> #include <vector> #include <map> #include <algorithm> #include <iterator> using namespace std; bool cmp(const pair<int, int> &p1, const pair<int, int> &p2) { return p1.second < p2.second; } int main() { map<int, int> m; for(int i = 0; i < 10; ++i) m[i] = i * -i; vector<pair<int, int> > v; copy(m.begin(), m.end(), back_inserter(v)); sort(v.begin(), v.end(), cmp); for(int i = 0; i < v.size(); ++i) cout << v[i].first << " : " << v[i].second << endl; return 0; }
684,623
684,636
Howto Pass Filehandle to a Function for Output Streaming
I have the following template function which prints to cout: template <typename T> void prn_vec(const std::vector < T >&arg, string sep="") { for (unsigned n = 0; n < arg.size(); n++) { cout << arg[n] << sep; } return; } // Usage: //prn_vec<int>(myVec,"\t"); // I tried this but it fails: /* template <typename T> void prn_vec_os(const std::vector < T >&arg, string sep="",ofstream fn) { for (unsigned n = 0; n < arg.size(); n++) { fn << arg[n] << sep; } return; } */ How can I modify it so that it also takes file handle as input and print out to that file as referred by the filehandle? So that we can do something like: #include <fstream> #include <vector> #include <iostream> int main () { vector <int> MyVec; MyVec.push_back(123); MyVec.push_back(10); ofstream myfile; myfile.open ("example.txt"); myfile << "Writing this to a file.\n"; // prn_vec(MyVec,myfile,"\t"); myfile.close(); return 0; }
template <typename T> ostream& prn_vec(ostream& o, const std::vector < T >&arg, string sep="") { for (unsigned n = 0; n < arg.size(); n++) { o << arg[n] << sep; } return o; } int main () { vector <int> MyVec; // ... ofstream myfile; // ... prn_vec(myfile, MyVec, "\t"); myfile.close(); return 0; }
684,625
684,859
"Already listening" when invoking an RPC call
I use Microsoft RPC for interprocess communications. I have an interface with a set of methods accepting a byte pipe as an "in" parameter (IDL description): [ uuid(ActualGuidHere), version(1.0), pointer_default(unique) ] interface IMyInterface { //other irrelevant methods here error_status_t rpcDoAction( [in] pipe byte params ); //more irrelevant methods here } every once in a while when I invoke such methods on a client side through a client stub generated by MIDL and delegated to NdrClientCall2(), it doesn't reach the server side, but NdrClientCall2() returns RPC_S_ALREADY_LISTENING ("The server is already listening") which is really confusing. I added logging into the functions used for push/pull operations during transfer - they are not ever called for any of the calls that fail this way. This implies that the transmission is not even started. I never encounter such behaviour when calling methods that don't have pipes as parameters. What might be the reason and the workaround?
The documentation seems quite minimal - I don't think there is a huge user community for direct use of the RPC APIs - but my guess is that in order to set up the pipe parameter, it is necessary for RPC to internally call RpcServerListen. Only one call to that API is able to block at a time. The fact that you see the problem only occasionally suggests that it is time-related, so is it possible that you are making two such calls on different threads and sometimes they will be simultaneous? Either that or there is a timing-related bug in the RPC library's implementation and you will have to work around it by retrying the call (that may be an effective workaround in any case).
684,669
684,680
Why does operator< need to be overloaded when implementing class-based priority queues in c++?
note, I am not asking for answers. I simply am curious regarding why things work I need to implement a priority queue for a printer simulator for a class assignment. After looking at examples on the internet, I noticed that operator< was being overloaded in order to arrange the priority queue correctly. code in question: java2s priority queue example Why does operator< need to be overloaded? Where is '<' even used to make the comparison? Does implementing the operator overload change the way the queue STL works? This implementation doesn't seem intuitive to me at all: why isn't operator> being overloaded instead? How is one supposed to learn that operator< needs to be overloaded in order for the priority_queue to work correctly?
STL containers use operator< by default to order the contents, for those containers that order the contents. You can override this by passing in a comparison functor to the constructor of the container, which allows you to decouple the sorting/ordering from the container object. Operator> could have been chosen, but one had to be picked and that was operator<, and is then used everywhere for consistency.
684,684
684,720
Normalize file path with WinAPI
Possible Duplicate: Best way to determine if two path reference to same file in C/C++ Given two file path strings with potentially different casing and slashes ('\' vs '/'), is there a quick way (that does not involve writing my own function) to normalize both paths to the same form, or at least to test them for equivalence? I'm restricted to WinAPI and standard C++. All files are local.
Depending on whether the paths could be relative, or contain "..", or junction points, or UNC paths this may be more difficult than you think. The best way might be to use the GetFileInformationByHandle() function as in this answer. Edit: I agree with the comment by RBerteig that this may become hard to impossible to do if the paths are not pointing to a local file. Any comment on how to safely handle this case would be greatly appreciated.
684,715
684,736
What does it mean when you get a compile error "looks like a function definition" for a class declaration?
I recently encountered this problem. I found many instances of people asking the question—here, for example—but no concrete answers. Here's the sample code hoisted from that link: class AFX_BASE_APPLICATION_APP_CLASS CFileExtension { public: CFileExtension (); virtual ~CFileExtension (); }; The error this generates is: c:\FileExtension.h(14) : error C2470: 'CFileExtension' : looks like a function definition, but there is no formal parameter list; skipping apparent body
You've almost certainly missed the header which defines AFX_BASE_APPLICATION_APP_CLASS. In that case, it would be passed through unaltered and VC++ would assume that CFileExtension was a function that returned class AFX_BASE_APPLICATION_APP_CLASS. And, since it thinks it's a function, it also thinks it needs parentheses. You just need to find where AFX_BASE_APPLICATION_APP_CLASS is defined and #include that file.
684,763
684,778
How to Write Strings Into Binary File
With this code I tried to print the string "foo" 10 times in binary format. But why doesn't the function to do it work? #include <iostream> #include <fstream> using namespace std; template <typename T> void WriteStr2BinFh (string St, ostream &fn) { for (unsigned i = 0; i < St.size(); i++) { char CStr = St[i]; fn.write(&CStr.front(), CStr.size()); } return; } int main() { string MyStr = "Foo"; ofstream myfile; myfile.open("OuputFile.txt", ios::binary|ios::out); // We want to print it 10 times horizontally // separated with tab for (int i = 0; i < 9; i++) { WriteStr2BinFh(Mystr+"\t", myfile); } myfile.close(); }
There is so much wrong here, I'm just going to list everything I see: Your for loop condition should be i < 10. Why are you using a template but not the templatized parameter T? You're calling the method front() on CStr, but CStr is a char, not a string, so I don't even know how that compiles. Assuming CStr was a string, you don't want to take the address of the front() iterator using &, instead you want to say something like: fn.write(St.c_str(), St.size()); And you don't want to loop for St.size() iterations. Just do the above.
684,931
684,969
Content of Binary Output File Created With Output Stream
This code compiles and does execute. It simply print the content into a binary format. However the output differs from what I expected, namely: Output file size should be much smaller that those created with std::cout. The content of output file should be compressed, hence when we open it in editor, we should not be able to see the content. But why the code below doesn't do as I hope it does? How can I modify it accordingly? #include <iostream> #include <vector> #include <fstream> #include <string> #include <sstream> using namespace std; void WriteStr2BinFh(const string& St, ostream &fn) { fn.write(St.c_str(), St.length()); } int main ( int arg_count, char *arg_vec[] ) { vector <string> Tags; // In principle we have millions of such tags // Hence we need to compress it into binary output. Tags.push_back("0000000000"); Tags.push_back("0000101110"); Tags.push_back("0133030122"); Tags.push_back("0133132033"); Tags.push_back("1002013320"); Tags.push_back("1111111111"); Tags.push_back("1301013133"); Tags.push_back("3010112111"); Tags.push_back("3203012113"); Tags.push_back("3203012212"); //prn_vec<string>(Tags, "\t"); //cout << endl; ofstream outfile; outfile.open("MyOut.bin", ios::binary|ios::out); for (unsigned i=0; i <Tags.size(); i++) { WriteStr2BinFh(Tags[i]+"\t",outfile); } outfile.close(); return 0; }
You must write the data in binary form (not as text): void WriteStr2BinFh(const string& St, ostream &fn) { char *p = 0; long l = strtol(St.c_str(), &p, 10); fn.write(reinterpret_cast<const char*>(&l), sizeof(l)); } Note that strtol takes three arguments (the third is the numeric base), and that fn << l would write the number back out as text; ostream::write() is what emits the raw bytes. You must also be aware that types like long have a maximum value, so you will probably have to split your string into n pieces and save them as n longs.
685,439
685,727
"Multiple definition of" C++ compiler error
I can't seem to get rid of these seemingly random compiles errors in one of my classes. I get about 4 errors such as: multiple definition of `draw_line(float, float, float, float)' and multiple definition of `near_far_clip(float, float, float*, float*, float*, float*, float*, float*)' that are flagged in the middle of the method. I also consistently get multiple definition of `stack' in the middle of another method. stack is a global variable in a totally different file. It isn't even mentioned in the file I'm getting the error in. I tried separating the error prone file into .h and .cpp files (was originally just a .cpp) and nothing about the error changed... I don't have duplicate methods. I only have one #include of lines.h and there is an #ifndef clause at the beginning. All these errors appear in the .cpp file. Any ideas what it could be? Alright, I got the code up: lines.cpp ThreeD.cpp ThreeD.h makefile The lines.cpp is a converted .c file I received from my instructor. I included the makefile just in case, because I always have problems with it. I also annotated exactly where the errors were flagged in the file, but they seem pretty random so I don't know if it's particularly important. I abandoned the .h file because it wasn't solving anything or helping. I believe it will be easier to find the mistake without it. Here is the requested main.cpp file (there is no .h). I remade the lines.h file due to and I'm still receiving the: multiple definition of `draw_line(float, float, float, float)' and multiple definition of `near_far_clip(float, float, float*, float*, float*, float*, float*, float*)' errors in the lines.cpp file, but the multiple definition of `stack' error is now in a random place in the ThreeD.cpp file (and is marked by a comment now). 
Update: This error has been fixed and the files have been revised to show this: lines.h lines.cpp ThreeD.cpp ThreeD.h I messed around with labeling some of the global variables extern, but it didn't seem to affect anything.
Why do you #include lines.cpp in ThreeD.cpp? This is very unusual. Your makefile wants lines.o, so you're going to compile lines.cpp. Anything defined in lines.cpp will be in lines.o and also in ThreeD.o. There is an intriguing comment in lines.cpp: Don't forget to put declarations in your .h files. I think the instructor wants you to break lines.cpp into a .h and a .cpp. Excerpt from lines.cpp: /* These go in your .h file or in lines.h */ /* Line drawing header. */ void draw_line(float, float, float, float); int near_far_clip(float, float, float *, float *, float *, float *, float *, float *); I suspect that these two declarations are the only thing that should be in lines.h.
685,601
685,616
About C/C++ stack allocation
While studying C++ (and C) I had some particular doubts regarding the working of stack allocation, that I can't find a solution to: Does stack allocation call malloc/free functions implicitly? If not; how does it assure there is no conflict between stack allocation and heap allocation? If yes; does stack allocation in C++ implicitly call new/delete too? If yes; does overloading the new operator for a class affect its stack allocation? It yielded confusing results in VC++; but as VC++ isn't entirely standard-compliant (or so I heard) I decided I better ask here...
Stack allocation doesn't use anything like malloc/free. It uses a piece of memory called program stack which is just a contiguous segment of memory. There's a special register that stores the top of the stack. When a new object is created on stack the top is raised thus increasing the stack, when an object is deallocated (goes out of scope) the top is lowered thus decreasing the stack. If you try to allocate a too large object on stack or go too deep into recursion the top will outgrow the maximum allowed size of the stack and this is called stack overflow. Note: actual direction of stack growth (increasing or decreasing addresses) will vary by system, but general idea is the same regardless of actual direction.
685,907
686,271
How do I do floating point rounding with a bias (always round up or down)?
I want to round floats with a bias, either always down or always up. There is a specific point in the code where I need this, the rest of the program should round to the nearest value as usual. For example, I want to round to the nearest multiple of 1/10. The closest floating point number to 7/10 is approximately 0.69999998807, but the closest number to 8/10 is approximately 0.80000001192. When I round off numbers, these are the two results I get. I'd rather get them rounded the same way. 7/10 should round to 0.70000004768 and 8/10 should round to 0.80000001192. In this example I am always rounding up, but I have some places where I want to always round down. Fortunately, I am only dealing with positive values in each of these places. The line I am using to round is floor(val * 100 + 0.5) / 100. I am programming in C++.
I think the best way to achieve this is to rely on the fact that according to the IEEE 754 floating point standard, for positive values the bit patterns of floating point numbers, reinterpreted as integers, are monotonically ordered. I.e. you could simply add one ulp (unit in the last place) to get the next floating point representation (which will always be slightly larger than your threshold if it was smaller, since the rounding error is at most 1/2 ulp) e.g. float floatValue = 7.f/10; std::cout << std::setprecision(20) << floatValue << std::endl; int asInt = *(int*)&floatValue; asInt += 1; floatValue = *(float*)&asInt; std::cout << floatValue << std::endl; prints (on my system) 0.69999998807907104492 0.70000004768371582031 To know when you need to add one ulp, you'll have to rely on the difference of floor and a rounded floor if (std::floor(floatValue * 100.) != std::floor(floatValue * 100. + 0.5)) { int asInt = *(int*)&floatValue; asInt += 1; floatValue = *(float*)&asInt; } Would correctly convert 0.69.. to 0.70.. but leave 0.80.. alone. Note that the float gets promoted to a double via the multiplication with 100. before the floor is applied. If you don't do this you risk getting in the situation that for 7.f/10.f * 100.f The (limited in precision) float representation would be 70.00...
685,934
685,947
Does C# clean up C++ allocated memory?
I have a hypothetical COM object with the following signature void MemAlloc(ref double[] test, int membercount) where the memory is allocated in C++ using new/malloc. Once this is in C#, using the RCW, how do I ensure that the memory is freed correctly? I would think it would be difficult for .NET to free, considering in C++ you need to know if it was allocated with new/malloc/mm_malloc before you can correctly free it. So, what is the appropriate way to clean up my C++ allocated array? Thanks.
I believe you should use CoTaskMemAlloc() for memory that you want to explicitly free from the managed side. The CLR will take care of freeing the memory once it's no longer reachable. If you want to free it explicitly you can use the managed Marshal.FreeCoTaskMem() routine. In general the interop marshaler and CLR abide by COM conventions for freeing memory; the recipient is responsible for freeing memory. So the CLR/Interop marshaler will usually take care of freeing memory that was allocated in a native call if that memory is returned to the managed caller. From Memory Management with the Interop Marshaler (msdn): The interop marshaler always attempts to free memory allocated by unmanaged code. This behavior complies with COM memory management rules, but differs from the rules that govern native C++. Confusion can arise if you anticipate native C++ behavior (no memory freeing) when using platform invoke, which automatically frees memory for pointers. For example, calling the following unmanaged method from a C++ DLL does not automatically free any memory. The runtime always uses the CoTaskMemFree method to free memory. If the memory you are working with was not allocated with the CoTaskMemAlloc method, you must use an IntPtr and free the memory manually using the appropriate method.
685,951
685,981
How can I check if a client disconnected through Winsock in C++?
How can I check if a client disconnected through Winsock in C++?
Per Beej's Network Programming Guide: if you call recv in blocking mode and it returns with 0 bytes read, the socket has disconnected; otherwise it waits for bytes to be received. Also look at the select() example in FAQ 2.12 on this page. int nRet; if(( nRet = select( 0, &fdread, NULL, NULL, NULL )) == SOCKET_ERROR ) { // Error condition // Check WSAGetLastError } if( nRet > 0 ) { // select() will return 1 because I'm using only one socket // At this point, it should be checked whether the // socket is part of a set. if( FD_ISSET( s, &fdread )) { // A read event has occurred on socket s } }
686,131
686,144
Does C++ have a sequential search function?
I have a small unsorted array and I'd like to find the index of a particular value. Does C++ have a built-in sequential search function for this, or do you just write the loop yourself each time it comes up? I'm specifically using a C-style array like: std::string arr[5] = { "EVEN", "ODD", "NONE", "MARK", "SPACE" }; and I need the index of a value that the user supplies.
std::find() should work: #include <stdio.h> #include <algorithm> #include <string> using std::string; std::string arr[5] = { "EVEN", "ODD", "NONE", "MARK", "SPACE" }; int main() { string* pArrEnd = arr + sizeof( arr)/sizeof(arr[0]); string* pFound = std::find( arr, pArrEnd, "MARK"); if (pFound == pArrEnd) { printf( "not found\n"); } else { printf( "%s was found at index %d\n", pFound->c_str(), (int)(pFound - arr)); printf( "or using STL: %d\n", (int)std::distance( arr, pFound)); } return 0; } (Note the casts to int: pFound - arr yields a ptrdiff_t, which doesn't match %d in printf's variadic argument list.)
686,353
686,373
Random float number generation
How do I generate random floats in C++? I thought I could take the integer rand and divide it by something, would that be adequate enough?
rand() can be used to generate pseudo-random numbers in C++. In combination with RAND_MAX and a little math, you can generate random numbers in any arbitrary interval you choose. This is sufficient for learning purposes and toy programs. If you need truly random numbers with normal distribution, you'll need to employ a more advanced method. This will generate a number from 0.0 to 1.0, inclusive. float r = static_cast <float> (rand()) / static_cast <float> (RAND_MAX); This will generate a number from 0.0 to some arbitrary float, X: float r2 = static_cast <float> (rand()) / (static_cast <float> (RAND_MAX/X)); This will generate a number from some arbitrary LO to some arbitrary HI: float r3 = LO + static_cast <float> (rand()) /( static_cast <float> (RAND_MAX/(HI-LO))); Note that the rand() function will often not be sufficient if you need truly random numbers. Before calling rand(), you must first "seed" the random number generator by calling srand(). This should be done once during your program's run -- not once every time you call rand(). This is often done like this: srand (static_cast <unsigned> (time(0))); In order to call rand or srand you must #include <cstdlib>. In order to call time, you must #include <ctime>.
686,378
691,009
C++ Native Way to Pack and Unpack String
Following my earlier question. Is there a way to write a string in a compressed/bit version using C++ native idiom. I am thinking something like Perl's native pack and unpack.
Based on reading your previous question, I think you mean to say that you want a binary encoded output, rather than a "compressed" output. Generally, "compressed" is used to refer specifically to data that has been reduced in size through the application of an algorithm such as LZW encoding. In your case, you may find that the output is "compressed" in the sense that it is smaller because for a wide variety of numbers a binary representation is more efficient than an ASCII representation, but this is not "compression" in the standard sense, which may be why you are having trouble getting the answer you are looking for. I think you are really asking the following: Given a number in ASCII format (stored in a std::string, for example), how can I write this to a file as a binary encoded integer? There are two parts to the answer. First, you must convert the ASCII encoded string to an integer value. You may use a function such as strtol, which will return a long integer equivalent in value to your ASCII encoded number. Do be aware that there are limitations on the magnitude of the number that may be represented in a long integer, so if your numbers are very, very large, you may need to be more creative in translating them. Second, you must write the data to the output stream using ostream::write(), which does not attempt to format the bytes you give it. If you simply use the default operator<<() stream operation to write the values, you'll find that your numbers just get translated back to ASCII and written out that way. Put this all together like this: #include <stdlib.h> // For strtol(). #include <arpa/inet.h> // For htonl(). #include <fstream> // For fstream. #include <string> // For string. int main(int argc, char *argv[]) { char *dummy = 0; std::string value("12345"); // Use strtol to convert to an int; "10" here means the string is // in decimal, as opposed to, eg, hexadecimal or octal, etc. 
long intValue = strtol(value.c_str(), &dummy, 10); // Convert the value to "network order"; not strictly necessary, // but it is good hygiene. Note that if you do this, you will // have to convert back to "host order" with ntohl() when you read // the data back. uint32_t netValue = htonl(intValue); // Create an output stream; make sure to open the file in binary mode. std::fstream output; output.open("out.dat", std::fstream::out | std::fstream::binary); // Write out the data using fstream::write(), not operator<<()! output.write(reinterpret_cast<char *>(&netValue), sizeof(netValue)); output.close(); }
686,452
687,904
Create WCF service for unmanaged C++ clients
I need to get unmanaged Windows C++ clients to talk to a WCF service. C++ clients could be running on Win2000 and later. I have control over both the WCF service and which C++ API is being used. Since it's for a proprietary application, it is preferable to use Microsoft stuff where possible, definitely not GNU-licensed APIs. Those of you who have it working, can you share a step-by-step process for how to make it work? I have researched the following options so far: WWSAPI - not good, will not work on Win 2000 clients. ATL Server, used the following guide as a reference. I followed the steps outlined (remove policy refs and flatten WSDL), however the resulting WSDL is still not usable by sproxy. Any more ideas? Please answer only if you actually have it working yourself. Edit1: I apologize to anyone I might have confused: what I was looking for was a way to call a WCF service from client(s) where no .NET framework is installed, so using a .NET-based helper library is not an option; it must be pure unmanaged C++
For those who are interested, I found one semi-working ATL Server solution. Following is the host code, notice it is using BasicHttpBinding, it's the only one which works with ATL Server: var svc = new Service1(); Uri uri = new Uri("http://localhost:8200/Service1"); ServiceHost host = new ServiceHost(typeof(Service1), uri); var binding = new BasicHttpBinding(); ServiceEndpoint endpoint = host.AddServiceEndpoint(typeof(IService1), binding, uri); endpoint.Behaviors.Add(new InlineXsdInWsdlBehavior()); host.Description.Behaviors.Add(new ServiceMetadataBehavior() { HttpGetEnabled = true }); var mex = host.AddServiceEndpoint(typeof(IMetadataExchange), MetadataExchangeBindings.CreateMexHttpBinding(), "mex"); host.Open(); Console.ReadLine(); code for InlineXsdInWsdlBehavior could be found here . One important change needs to be done to the InlineXsdInWsdlBehavior in order for it to work properly with sproxy when complex types are involved. It is caused by the bug in sproxy, which does not properly scope the namespace aliases, so wsdl cannot have repeating namespace aliases or sproxy will crap out. Here's the functions which needs to change: public void ExportEndpoint(WsdlExporter exporter, WsdlEndpointConversionContext context) { int tnsCount = 0; XmlSchemaSet schemaSet = exporter.GeneratedXmlSchemas; foreach (WsdlDescription wsdl in exporter.GeneratedWsdlDocuments) { // // Recursively find all schemas imported by this wsdl // and then add them. 
In the process, remove any // <xsd:imports/> // List<XmlSchema> importsList = new List<XmlSchema>(); foreach (XmlSchema schema in wsdl.Types.Schemas) { AddImportedSchemas(schema, schemaSet, importsList, ref tnsCount); } wsdl.Types.Schemas.Clear(); foreach (XmlSchema schema in importsList) { RemoveXsdImports(schema); wsdl.Types.Schemas.Add(schema); } } } private void AddImportedSchemas(XmlSchema schema, XmlSchemaSet schemaSet, List<XmlSchema> importsList, ref int tnsCount) { foreach (XmlSchemaImport import in schema.Includes) { ICollection realSchemas = schemaSet.Schemas(import.Namespace); foreach (XmlSchema ixsd in realSchemas) { if (!importsList.Contains(ixsd)) { var new_namespaces = new XmlSerializerNamespaces(); foreach (var ns in ixsd.Namespaces.ToArray()) { var new_pfx = (ns.Name == "tns") ? string.Format("tns{0}", tnsCount++) : ns.Name; new_namespaces.Add(new_pfx, ns.Namespace); } ixsd.Namespaces = new_namespaces; importsList.Add(ixsd); AddImportedSchemas(ixsd, schemaSet, importsList, ref tnsCount); } } } } Next step is to generate C++ header: sproxy.exe /wsdl http://localhost:8200/Service1?wsdl and then C++ program looks like this: using namespace Service1; CoInitializeEx( NULL, COINIT_MULTITHREADED ); { CService1T<CSoapWininetClient> cli; cli.SetUrl( _T("http://localhost:8200/Service1") ); HRESULT hr = cli.HelloWorld(); //todo: analyze hr } CoUninitialize(); return 0; Resulting C++ code handles complex types pretty decently, except that it cannot assign NULL to the objects.
686,470
686,489
Why is my return type meaningless?
I am trying to use a return type of const MyClass * const. However, I get a warning: Warning: #815-D: type qualifier on return type is meaningless. Is this not a valid type? I want a pointer that cannot be changed, and I want the thing it points to to not be changed either.
The pointer itself is returned by value, so the caller only ever gets a copy of it, and it doesn't make sense to make that copy const: what the caller function does with the returned value can't be restricted by the called function. The const on the pointee (the const MyClass * part) is meaningful and is kept; only the trailing const on the pointer itself is discarded. This is akin to trying to define something like: const int getInt(); getInt(), in this case, just returns an int value (not a reference). It goes to a register, then the caller function receives it and does whatever it wants with its copy.
687,049
687,074
Is it possible to provide exceptions in C++ virtual(pure) class member?
If so how? I know how to provide exception specifications for members such as class SOMEClass { public: void method(void) throw (SOMEException); virtual void pure_method(void) = 0; }; So that the method throws only SOMEException. If I want to ensure that sub-classes of SOMEClass throw SOMEException for pure_method, is it possible to add the exception specification?. Is this approach feasible or do I need to understand more on exceptions and abstract methods to find out why it can(not) be done?
Yes, a pure virtual member can have an exception specification. I recommend you read this: http://www.gotw.ca/publications/mill22.htm before getting too involved in exception specifications, though.
687,135
687,144
What is the origin of the throw/catch exception naming?
Was the creator of this construct a baseball fan?
See Stroustrup's book "The Design & Evolution of C++" - basically, "raise" was already taken.
687,448
687,455
C++ Undefined Reference (Even with Include)
I cannot get this simple piece of code to compile without including the TestClass.cpp file explicitly in my main.cpp file. What am I doing wrong? Thanks in advance! Here is the code: TestClass.h #ifndef TESTCLASS_H_ #define TESTCLASS_H_ class TestClass { public: static int foo(); }; #endif TestClass.cpp #include "TestClass.h" int TestClass::foo() { return 42; } main.cpp #include <iostream> #include "TestClass.h" using namespace std; int main() { cout << TestClass::foo() << endl; return 0; } Here is the error: g++ main.cpp -o main.app /tmp/ccCjOhpy.o: In function `main': main.cpp:(.text+0x18e): undefined reference to `TestClass::foo()' collect2: ld returned 1 exit status
Include TestClass.cpp on the command line, so the linker can find the function definition: g++ main.cpp TestClass.cpp -o main.app Alternatively, compile each file to its own object file, then tell the compiler to link them together (it will forward them to the linker): g++ -c main.cpp -o main.o g++ -c TestClass.cpp -o TestClass.o g++ main.o TestClass.o -o main.app
687,718
690,061
How Do You Call an MSSQL System Function From ADO/C++?
...specifically, the fn_listextendedproperty system function in MSSQL 2005. I have added an Extended Property to my database object, named 'schemaVersion'. In my MSVC application, using ADO, I need to determine if that Extended Property exists and, if it does, return the string value out of it. Here is the T-SQL code that does what I want. How do I write this in C++/ADO, or otherwise get the job done? select value as schemaVer from fn_listextendedproperty(default, default, default, default, default, default, default) where name=N'schemaVersion' Here's the code I tried at first. It failed with the error listed below the code: _CommandPtr cmd; cmd.CreateInstance(__uuidof(Command)); cmd->ActiveConnection = cnn; cmd->PutCommandText("select value " "from fn_listextendedproperty(default, default, default, default, default, default, default) " "where name=N'schemaVersion'"); VARIANT varCount; cmd->Execute(NULL, NULL, adCmdText); ...here are the errors I peeled out of the ADO errors collection. The output is from my little utility function which adds the extra text like the thread ID etc, so ignore that. (Proc:0x1930, Thread:0x8A0) INFO : === 1 Provider Error Messages : ======================= (Proc:0x1930, Thread:0x8A0) INFO : [ 1] (-2147217900) 'Incorrect syntax near the keyword 'default'.' (Proc:0x1930, Thread:0x8A0) INFO : (SQLState = '42000') (Proc:0x1930, Thread:0x8A0) INFO : (Source = 'Microsoft OLE DB Provider for SQL Server') (Proc:0x1930, Thread:0x8A0) INFO : (NativeError = 156) (Proc:0x1930, Thread:0x8A0) INFO : ========================================================== EDIT: Updated the call according to suggestions. Also changed "SELECT value AS schemaVer" to just "SELECT value". EDIT: Changed the first parameter of Execute() to NULL per suggestion. This fixed my original problem, and I proceeded to the next. :)
I still have not figured out how to do this directly. To get on with my life, I wrote a stored procedure which called the function: set ANSI_NULLS ON set QUOTED_IDENTIFIER ON go ALTER PROCEDURE [dbo].[mh_getSchemaVersion] @schemaVer VARCHAR(256) OUTPUT AS select @schemaVer = CAST( (select value from fn_listextendedproperty(default, default, default, default, default, default, default) where name=N'schemaVersion') AS varchar(256) ) return @@ROWCOUNT ...and then called that sproc from my ADO/C++ code: _CommandPtr cmd; cmd.CreateInstance(__uuidof(Command)); cmd->ActiveConnection = cnn; cmd->PutCommandText("mh_getSchemaVersion"); _variant_t schemaVar; _ParameterPtr schemaVarParam = cmd->CreateParameter("@schemaVer", adVarChar, adParamOutput, 256); cmd->GetParameters()->Append((IDispatch*)schemaVarParam); cmd->Execute(NULL, NULL, adCmdStoredProc); std::string v = (const char*)(_bstr_t)schemaVarParam->GetValue(); ver->hasVersion_ = true; ...which works, but I didn't want to have to deploy a new stored procedure. So if anyone can come up with a solution to the original problem and show me how to call the system function directly from ADO/C++, I will accept that as the answer. Otherwise I'll just accept this.
687,789
687,805
C++ const std::map reference fails to compile
Is there a reason why passing a reference to a std::map as const causes the [] operator to break? I get this compiler error (gcc 4.2) when I use const: error: no match for ‘operator[]’ in ‘map[name]’ Here's the function prototype: void func(const char ch, std::string &str, const std::map<std::string, std::string> &map); And, I should mention that there is no problem when I remove the const keyword in front of std::map. If I've been instructed correctly, the [] operator will actually insert a new pair into the map if it doesn't find the key, which would of course explain why this happens, but I can't imagine that this would ever be acceptable behavior. If there is a better method, like using find instead of [], I'd appreciate it. I can't seem to get find to work either though... I receive const mismatched iterator errors.
Yes you can't use operator[]. Use find, but note it returns const_iterator instead of iterator: std::map<std::string, std::string>::const_iterator it; it = map.find(name); if(it != map.end()) { std::string const& data = it->second; // ... } It's like with pointers. You can't assign int const* to int*. Likewise, you can't assign const_iterator to iterator.
687,813
687,831
Are C++ int operations atomic on the mips architecture
I wonder if I can read or write a shared int value without locking on a MIPS CPU (especially Amazon or Danube). What I mean is whether such a read or write is atomic (another thread can't interrupt it). To be clear - I don't want to prevent the race between threads, but I care that the int value itself is not corrupted. Assuming that the compiler aligns all ints on CPU word boundaries, it should be possible. I use gcc (g++). Tests also show that it seems to work correctly. But maybe someone knows for sure?
Use gcc's builtin atomic operations and you'll get warnings if they're not supported: http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html It looks like combinations of addition/subtraction and testing (at least) are possible on the hardware: http://rswiki.csie.org/lxr/http/source/include/asm-mips/atomic.h
688,039
688,068
How can I build a std::vector<std::string> and then sort them?
I have a bunch of strings that I need to sort. I think a std::vector would be the easiest way to do this. However, I've never used vectors before and so would like some help. I just need to sort them alphanumerically, nothing special. Indeed, the string::compare function would work. After that, how can I iterate through them to verify that they're sorted? Here's what I have so far: std::sort(data.begin(), data.end(), std::string::compare); for(std::vector<std::string>::iterator i = data.begin(); i != data.end(); ++i) { printf("%s\n", i.c_str); }
You can just do std::sort(data.begin(), data.end()); And it will sort your strings. Then go through them checking whether they are in order: if(data.empty()) return true; // empty vector sorted correctly for(std::vector<std::string>::iterator i=data.begin(), j=i+1; j != data.end(); ++i, ++j) if(*i > *j) return false; return true; // sort verified In particular, std::string::compare couldn't be used as a comparator, because it doesn't do what sort wants it to do: Return true if the first argument is less than the second, and return false otherwise. If you use sort like above, it will just use operator<, which will do exactly that (i.e. std::string makes it return first.compare(second) < 0).
688,053
688,091
What to put in precompiled header? (MSVC)
What are the best candidates for a precompiled header file? Can I put STL and Boost headers there, even though they have templates? And will that reduce compile times? Also, what are the best IDE settings to reduce compile times?
The quick answer: the STL and Boost headers do indeed belong in the precompiled header file, even though these header files define template classes. When generating a precompiled header file, a compiler parses the header text (a significant task!), and converts it into a binary format that is optimised for the compiler's benefit. Even though the template classes will be instantiated when other .cpp files are compiled, they will be instantiated from information in the precompiled header, which is significantly faster for the compiler to read. (later addition) One thing that you should not include in a precompiled header is files that are part of your project and are changed frequently, even if every single .CPP file includes these files. The reason is this - the generation of the precompiled header can take a long time, because the Boost, STL and Windows libraries are very large. You might have a simple file (e.g. "StringDefs.h") that everything uses. If StringDefs.h is included in stdafx.h, and one developer touches StringDefs.h, then every developer has to wait until the entire precompiled header recompiles. It would be much faster if StringDefs.h was left out of the precompiled header, and parsed along with each .CPP file.
688,386
688,428
IS C++ converted into MSIL?
I have been a long-time C# and .Net developer, and have been playing with the idea of learning C++. One of the primary reasons I have been thinking about this is how much faster C++ can be over apps using the .Net framework. But am I right in assuming that if I write a C++ app in Visual Studio, and/or reference .Net libraries in a C++ application, that C++ is converted into MSIL (just like C#) - and therefore I'd lose any benefit from coding in it? So my question is really this: are C++ components of an application referencing .Net assemblies compiled in the "traditional" way, or compiled into MSIL?
Well it's a bit more complicated than that. There are actually two totally different versions of .NET-supporting C++. The old one, Managed Extensions for C++, was the only option available in Visual C++ 2002/2003. It's available in newer compilers under the option /clr:oldSyntax. It's kinda clumsy as it tries hard to integrate with standard C++, so all new keywords (and there's lots of them) are prefixed with double underscores, etc. Code generated by this compiler is a mixture of native and MSIL code, dubbed IJW "it just works". The new one, called C++/CLI, is a clean new language available in Visual C++ 2005 and newer. Most importantly, it supports several modes of code generation. The /clr option again generates a IJW mixture of native and MSIL code. /clr:pure results in a managed-only assembly, although it may translate native types into corresponding .net structures. The code therefore may not be type-safe and can use pointer arithmetic, pretty much like C# with /unsafe. And the strictest of options is /clr:safe, which produces type-safe, verifiable MSIL-only assembly, exactly like C# compiler does (without /unsafe, that is). For differences between MC++ and C++/CLI, see wikipedia. For description of the compiler switches, see MSDN. PS. The .NET byte-code is called either MSIL (Microsoft Intermediate Language) or CIL (Common Intermediate Language). MIL can stand for Media Integration Layer, the undocumented low-level library used by WPF and Vista Desktop Window Manager.
688,447
688,477
Best practices for defining your own exception classes?
I have some special exception cases that I want to throw and catch, so I want to define my own exception classes. What are the best practices for that? Should I inherit from std::exception or std::runtime_error?
Yes, it's good practice to inherit from std::runtime_error or the other standard exception classes like std::logic_error, std::invalid_argument and so on, depending on which kind of exception it is. If all the exceptions inherit some way from std::exception it's easy to catch all common errors by a catch(const std::exception &e) {...}. If you have several independent hierarchies this gets more complicated. Deriving from the specialized exception classes makes these exceptions carry more information, but how useful this really is depends on how you do your exception handling.
688,550
690,710
Windows Named-Pipe ACL under IIS
I am trying to connect a CGI process to my Windows service with a named pipe. My code runs fine using another server on my development machine, but on IIS there are security issues when I call CreateFile() in the CGI process. The Windows service is the named pipe server, and so the CGI process is trying to connect to that named pipe as a client. I have dug up some code that creates a global security descriptor with the flag SECURITY_WORLD_SID_AUTHORITY by calling: InitializeAcl() AddAccessAllowedAce() SetSecurityDescriptorDacl() But I don't really understand the concept. I am assuming that the pipe server's CreateNamedPipe() must be called with this ACL in the SECURITY_ATTRIBUTES structure, but what about the named pipe client? Do I create another global security descriptor and call CreateFile() with that? Do I inherit it with SecurityAttributes.bInheritHandle = TRUE? Do I just pass NULL in the CreateFile() call?
Since the pipe is created by the server, only the server needs to specify the ACL, the client uses NULL for the ACL. Inheritance only applies if the named pipe is created in one process and that processes creates a new process in which you want that spawned process to have direct access to the handle (it doesn't reopen the handle, rather it gets the value some other way, like a command line). You can use the Process Explorer on sysinternals to view named pipes open in a process and then look at the ACL.
688,760
696,026
How to create a UTF-8 string literal in Visual C++ 2008
In VC++ 2003, I could just save the source file as UTF-8 and all strings were used as is. In other words, the following code would print the strings as is to the console. If the source file was saved as UTF-8 then the output would be UTF-8. printf("Chinese (Traditional)"); printf("中国語 (繁体)"); printf("중국어 (번체)"); printf("Chinês (Tradicional)"); I have saved the file in UTF-8 format with the UTF-8 BOM. However compiling with VC2008 results in: warning C4566: character represented by universal-character-name '\uC911' cannot be represented in the current code page (932) warning C4566: character represented by universal-character-name '\uAD6D' cannot be represented in the current code page (932) etc. The characters causing these warnings are corrupted. The ones that do fit the locale (in this case 932 = Japanese) are converted to the locale encoding, i.e. Shift-JIS. I cannot find a way to get VC++ 2008 to compile this for me. Note that it doesn't matter what locale I use in the source file. There doesn't appear to be a locale that says "I know what I'm doing, so don't f$%##ng change my string literals". In particular, the useless UTF-8 pseudo-locale doesn't work. #pragma setlocale(".65001") => error C2175: '.65001' : invalid locale Neither does "C": #pragma setlocale("C") => see warnings above (in particular locale is still 932) It appears that VC2008 forces all characters into the specified (or default) locale, and that locale cannot be UTF-8. I do not want to change the file to use escape strings like "\xbf\x11..." because the same source is compiled using gcc which can quite happily deal with UTF-8 files. Is there any way to specify that compilation of the source file should leave string literals untouched? To ask it differently, what compile flags can I use to specify backward compatibility with VC2003 when compiling the source file. i.e. do not change the string literals, use them byte for byte as they are. 
Update Thanks for the suggestions, but I want to avoid wchar. Since this app deals with strings in UTF-8 exclusively, using wchar would then require me to convert all strings back into UTF-8 which should be unnecessary. All input, output and internal processing is in UTF-8. It is a simple app that works fine as is on Linux and when compiled with VC2003. I want to be able to compile the same app with VC2008 and have it work. For this to happen, I need VC2008 to not try to convert it to my local machine's locale (Japanese, 932). I want VC2008 to be backward compatible with VC2003. I want a locale or compiler setting that says strings are used as is, essentially as opaque arrays of char, or as UTF-8. It looks like I might be stuck with VC2003 and gcc though, VC2008 is trying to be too smart in this instance.
Update: I've decided that there is no guaranteed way to do this. The solution that I present below works for English version VC2003, but fails when compiling with Japanese version VC2003 (or perhaps it is Japanese OS). In any case, it cannot be depended on to work. Note that even declaring everything as L"" strings didn't work (and is painful in gcc as described below). Instead I believe that you just need to bite the bullet and move all text into a data file and load it from there. I am now storing and accessing the text in INI files via SimpleIni (cross-platform INI-file library). At least there is a guarantee that it works as all text is out of the program. Original: I'm answering this myself since only Evan appeared to understand the problem. The answers regarding what Unicode is and how to use wchar_t are not relevant for this problem as this is not about internationalization, nor a misunderstanding of Unicode, character encodings. I appreciate your attempt to help though, apologies if I wasn't clear enough. The problem is that I have source files that need to be cross-compiled under a variety of platforms and compilers. The program does UTF-8 processing. It doesn't care about any other encodings. I want to have string literals in UTF-8 like currently works with gcc and vc2003. How do I do it with VC2008? (i.e. backward compatible solution). 
This is what I have found: gcc (v4.3.2 20081105): string literals are used as is (raw strings) supports UTF-8 encoded source files source files must not have a UTF-8 BOM vc2003: string literals are used as is (raw strings) supports UTF-8 encoded source files source files may or may not have a UTF-8 BOM (it doesn't matter) vc2005+: string literals are massaged by the compiler (no raw strings) char string literals are re-encoded to a specified locale UTF-8 is not supported as a target locale source files must have a UTF-8 BOM So, the simple answer is that for this particular purpose, VC2005+ is broken and does not supply a backward compatible compile path. The only way to get Unicode strings into the compiled program is via UTF-8 + BOM + wchar which means that I need to convert all strings back to UTF-8 at time of use. There isn't any simple cross-platform method of converting wchar to UTF-8, for instance, what size and encoding is the wchar in? On Windows, UTF-16. On other platforms? It varies. See the ICU project for some details. In the end I decided that I will avoid the conversion cost on all compilers other than vc2005+ with source like the following. #if defined(_MSC_VER) && _MSC_VER > 1310 // Visual C++ 2005 and later require the source files in UTF-8, and all strings // to be encoded as wchar_t otherwise the strings will be converted into the // local multibyte encoding and cause errors. To use a wchar_t as UTF-8, these // strings then need to be convert back to UTF-8. This function is just a rough // example of how to do this. # define utf8(str) ConvertToUTF8(L##str) const char * ConvertToUTF8(const wchar_t * pStr) { static char szBuf[1024]; WideCharToMultiByte(CP_UTF8, 0, pStr, -1, szBuf, sizeof(szBuf), NULL, NULL); return szBuf; } #else // Visual C++ 2003 and gcc will use the string literals as is, so the files // should be saved as UTF-8. gcc requires the files to not have a UTF-8 BOM. 
# define utf8(str) str #endif Note that this code is just a simplified example. Production use would need to clean it up in a variety of ways (thread-safety, error checking, buffer size checks, etc). This is used like the following code. It compiles cleanly and works correctly in my tests on gcc, vc2003, and vc2008: std::string mText; mText = utf8("Chinese (Traditional)"); mText = utf8("中国語 (繁体)"); mText = utf8("중국어 (번체)"); mText = utf8("Chinês (Tradicional)");
688,802
688,862
Can C++ method names be qualified by the class name, in the header?
Simple question, is this valid C++: class Foo { void Foo::doSomething(); }; The point of the question: is that repeated use of the class name and double colon before the method name valid inside the class declaration? I'm having issues compiling code that does this using g++ 4.2.3. I would love to see a reference to something describing the syntax here, before digging in and changing the code. Or downgrading the compiler; this does build with g++ 3.3.6. The error I'm getting is (roughly): Foo.h:3: error: extra qualification ‘Foo::’ on member ‘doSomething’ I did Google, but couldn't come up with something. I don't have the standard, and even if I did it would probably take me quite a while to find anything authoritative. I'm no C++ language lawyer.
I took a look at the standard, section 9.2 would be the relevant portion. I'm not that great with BNF but I didn't see anything in the BNF for class members that would indicate this was allowed. The identifier is even named "unqualified-id" in the BNF. G++ changed the behavior in version 4.1, and apparently a lot of other compilers accepted this, but I've never seen this style used and I have no idea why anyone would do it. Since it seems to not trigger an error on a pretty wide variety of compilers, there may be some historical reason for this style, but as far as I can tell it's indeed not valid. The only good reference I found through Google was this page, which just attempts to explain some of the changes in G++ 4.1.
688,945
688,948
How do I make g++ ignore -mregparm for certain code?
Some background: As a personal project, I've been developing a kernel in C++. Things are going well; in fact I have very good support for much of C++ available in kernel land (I've implemented nearly the entire libc and libstdc++). One of the more difficult and compiler-specific things is RTTI and exception support. For now I'm disabling exceptions entirely, but RTTI is something I want since things like dynamic_cast can be very useful. To make this work, I have a basic implementation of std::type_info which matches what g++ expects, and then I link to g++'s libsupc++.a and libgcc_eh.a. This works great. RTTI works like a champ! The question: I've been toying with some optimization options and would like to someday have -mregparm as a compile-time choice. Obviously this being a kernel and having to interact with assembly code, there are certain functions which don't play nice with not having the parameters on the stack. To solve this, I use the following macro: #define asmlinkage __attribute__((regparm(0))) Once again, this works very well. The problem arises when you do a dynamic_cast. The compiler emits calls to some implicitly defined internal functions (defined in the support libraries previously mentioned) and does so respecting the -mregparm flag. Of course since I linked to the system's support libraries, they may or may not (they don't in my case) have a compatible calling convention... leading to a nice pretty kernel panic. Since these functions are implicit (no prototype in any of my files) and they have long, mangled names, it is (nearly) impossible to add my asmlinkage attribute to them. There are 3 possible solutions which come to mind: 1) forget about supporting -mregparm altogether; 2) recompile those 2 support libs with the same flags as the kernel. This might be annoying and slightly impractical (I don't know if they can be cleanly isolated from the gcc build, and toolchain upgrades could be very painful), but should work.
3) somehow make the compiler ignore -mregparm when calling code found in a particular .a/.o file. Is option 3 possible? My gut feeling is no, but I figured that I would ask since there are a few g++ gurus on here :-).
You are probably best going with options 1 or 2 (1 is clearly easier). To my knowledge g++ has no specific switch for option 3.
689,677
689,716
Why cast unused return values to void?
int fn(); void whatever() { (void) fn(); } Is there any reason for casting an unused return value to void, or am I right in thinking it's a complete waste of time?
David's answer pretty much covers the motivation for this: to explicitly show other "developers" that you know this function returns a value but you're explicitly ignoring it. This is a way to ensure that where necessary error codes are always handled. I think for C++ this is probably the only place that I prefer to use C-style casts too, since using the full static_cast notation just feels like overkill here. Finally, if you're reviewing a coding standard or writing one, then it's also a good idea to explicitly state that calls to overloaded operators (not using function call notation) should be exempt from this too: class A {}; A operator+(A const &, A const &); int main () { A a; a + a; // Not a problem (void)operator+(a,a); // Using function call notation - so add the cast. }
689,959
690,482
simple cross-platform c++ GUI console -- how to?
I'm writing a game and I'm wound up in needing a console for simple text input; filenames and simple values. Using SDL, my console looks the following at it's simplest: class Console { public: typedef std::list<String> InputList; enum Result { NOTHING = 0, ENTERED, ESCAPED }; static const String& GetInput() { return input; } static Result Query(SDLKey lastKey) { if(lastResult == ENTERED || lastResult == ESCAPED) { input.clear(); } switch (lastKey) { case SDLK_a: case SDLK_b: case SDLK_c: case SDLK_d: case SDLK_e: case SDLK_f: case SDLK_g: case SDLK_h: case SDLK_i: case SDLK_j: case SDLK_k: case SDLK_l: case SDLK_m: case SDLK_n: case SDLK_o: case SDLK_p: case SDLK_q: case SDLK_r: case SDLK_s: case SDLK_t: case SDLK_u: case SDLK_v: case SDLK_w: case SDLK_x: case SDLK_y: case SDLK_z: case SDLK_0: case SDLK_1: case SDLK_2: case SDLK_3: case SDLK_4: case SDLK_5: case SDLK_6: case SDLK_7: case SDLK_8: case SDLK_9: case SDLK_SLASH: case SDLK_BACKSLASH: case SDLK_PERIOD: case SDLK_COMMA: case SDLK_SPACE: case SDLK_UNDERSCORE: case SDLK_MINUS: input += static_cast<char> (lastKey); lastResult = NOTHING; break; case SDLK_RETURN: lastResult = ENTERED; break; case SDLK_ESCAPE: lastResult = ESCAPED; break; } return lastResult; } protected: static Result lastResult; static String input; }; This would be called from the application's main event loop, if the console is active and the last event was a keypress, then the result of the input is processed at a state where it's necessary. Of course, it looks incredibly awkward... What's a better way to implement a simple console that can be easily rendered in my game's window? (Not going anywhere near to highly unportable solutions like having to reroute std::cout or writing code to bring up a UNIX console etc.)
One suggestion I would offer is to use if statements instead of a switch in this case: if(lastKey == SDLK_RETURN) lastResult = ENTERED; else if(lastKey == SDLK_ESCAPE) lastResult = ESCAPED; else if(lastKey >= SDLK_SPACE && lastKey <= SDLK_z) { input += static_cast<char> (lastKey); lastResult = NOTHING; } I took some liberties and included some characters that you didn't have in your code above, such as the ampersand, quotes, parentheses, brackets, etc. If you don't want those keys, you can add a few more if statements to break it down a bit more. This assumes that the enum for the keys doesn't change a lot. If it does change a lot you may be better off with what you had.
690,176
690,252
C/C++: Optimization of pointers to string constants
Have a look at this code: #include <iostream> using namespace std; int main() { const char* str0 = "Watchmen"; const char* str1 = "Watchmen"; char* str2 = "Watchmen"; char* str3 = "Watchmen"; cerr << static_cast<void*>( const_cast<char*>( str0 ) ) << endl; cerr << static_cast<void*>( const_cast<char*>( str1 ) ) << endl; cerr << static_cast<void*>( str2 ) << endl; cerr << static_cast<void*>( str3 ) << endl; return 0; } Which produces an output like this: 0x443000 0x443000 0x443000 0x443000 This was on the g++ compiler running under Cygwin. The pointers all point to the same location even with no optimization turned on (-O0). Does the compiler always optimize so much that it searches all the string constants to see if they are equal? Can this behaviour be relied on?
It's an extremely easy optimization, probably so much so that most compiler writers don't even consider it much of an optimization at all. Setting the optimization flag to the lowest level doesn't mean "Be completely naive," after all. Compilers will vary in how aggressive they are at merging duplicate string literals. They might limit themselves to a single subroutine — put those four declarations in different functions instead of a single function, and you might see different results. Others might do an entire compilation unit. Others might rely on the linker to do further merging among multiple compilation units. You can't rely on this behavior, unless your particular compiler's documentation says you can. The language itself makes no demands in this regard. I'd be wary about relying on it in my own code, even if portability weren't a concern, because behavior is liable to change even between different versions of a single vendor's compiler.
690,356
690,368
Re-assigning an "auto_ptr" and Managing Memory
I've a situation like this: class MyClass { private: std::auto_ptr<MyOtherClass> obj; public: MyClass() { obj = auto_ptr<MyOtherClass>(new MyOtherClass()); } void reassignMyOtherClass() { // ... do funny stuff MyOtherClass new_other_class = new MyOtherClass(); // Here, I want to: // 1) Delete the pointer object inside 'obj' // 2) Re-assign the pointer object of 'obj' to 'new_other_class' // so that 'obj' now manages 'new_other_class' instead of the // object that just got deleted manually } }; Is there a way to achieve this? Will the following code do what I want? void MyClass::reassignMyOtherClass() { // ... still, do more funny stuff (flashback humor :-) MyOtherClass new_other_class = new MyOtherClass(); obj.reset(new_other_class); } Will the memory of new_other_class be de-allocated in the default destructor of MyClass?
Yes, it will. You can use obj.reset( new MyOtherClass() ); Better still, initialize the member in the constructor's initializer list: MyClass(): obj( new MyOtherClass() ) { }
690,382
690,526
How can I put an array inside a struct in C#?
C++ code: struct tPacket { WORD word1; WORD word2; BYTE byte1; BYTE byte2; BYTE array123[8]; } static char data[8192] = {0}; ... some code to fill up the array ... tPacket * packet = (tPacket *)data; We can't do that as easy in C#. Please note there is an array in the C++ structure. Alternatively, using this source file could do the job for us, but not if there is an array in the structure.
I think what you are looking for (if you are using a structure definition similar to the one JaredPar posted) is something like this: tPacket t = new tPacket(); byte[] buffer = new byte[Marshal.SizeOf(typeof(tPacket))]; socket.Receive(buffer, 0, buffer.Length, 0); GCHandle pin = GCHandle.Alloc(buffer, GCHandleType.Pinned); t = (tPacket)Marshal.PtrToStructure(pin.AddrOfPinnedObject(), typeof(tPacket)); pin.Free(); //do stuff with your new tPacket t
690,579
690,597
What is the scope of variables declared in a class constructor?
I was curious, what is the scope of variables declared inside a class constructor which are not data members of that class? For example, if a constructor needs an iterating int i, will this variable be destroyed after the constructor finishes, or is it then global for the program?
In this sense a constructor is like any other function: any variable declared inside it has the usual scope limitations, and all such variables go out of scope and get destroyed once the constructor finishes.
691,014
691,034
How to allocate array in base constructor with size based on derived class?
I have a hierarchy of classes. The base class uses some tuning parameters that are loadable from file (and reloadable during runtime). Each derived class may add some additional parameters. I am looking for a way to allocate a correctly sized parameters array in the base constructor, so that I don't have to deallocate and reallocate in the derived class. I was hoping for something like this, but it's not working (parameters always has 2 elements): class Base { static int nParms; virtual int getNParms() { return nParms;} float *parameters; public: Base() { parameters= new float[this->getNParms()]; parameters[0] = globalReloadableX; parameters[1] = globalReloadableY; } }; int Base::nParams =2; class Derived : public Base { static int nParms; virtual int getNParms() { return nParms;} public: Derived() : Base() { parameters[2] = globalReloadableZ; } } int Derived::nParams =3; I've seen this question, but the solution there doesn't quite work for me. I also tried making parameters a regular array in each class: class Base { float parameters[2] ... class Derived : public Base { float parameters[3] ... but that makes Derived have 2 separate arrays. Any ideas?
Why not pass the required array size as a parameter in the constructor of the base class? (The reason the virtual function doesn't call the derived class is because that is how C++ virtual functions work; conceptually, until the derived class constructor completes, the object's type is still the base class.)
691,079
691,100
Is there a standard #include convention for C++?
This is a rather basic question, but it's one that's bugged me for awhile. My project has a bunch of .cpp (Implementation) and .hpp (Definition) files. I find that as I add additional classes and more class inter-dependencies, I have to #include other header files. After a week or two, I end up with #include directives in lots of places. Later, I'll try removing some of the #includes and discover that everything still works because some OTHER included class is also #including what I just removed. Is there a simple, easy rule for putting in #includes that will stop this ugly mess from happening in the first place? What is the best practice? For example, I've worked on projects where the Implementation .cpp file ONLY includes the corresponding Definition .hpp file, and nothing else. If there are any other .hpp files that need to be used by the Implementation .cpp, they are all referenced by the Definition .hpp file.
I always use the principle of least coupling. I only include a file if the current file actually needs it; if I can get away with a forward declaration instead of a full definition, I'll use that instead. My .cpp files always have a pile of #includes at the top. Bar.h: class Foo; class Bar { Foo * m_foo; }; Bar.cpp: #include "Foo.h" #include "Bar.h"
691,194
691,202
Why is my implementation of C++ map not storing values?
I have a class called ImageMatrix, which implements the C++ map in a recursive fashion; the end result is that I have a 3 dimensional array. typedef uint32_t VUInt32; typedef int32_t VInt32; class ImageMatrix { public: ImageMatrixRow operator[](VInt32 rowIndex) private: ImageMatrixRowMap rows; }; typedef std::map <VUInt32, VInt32> ImageMatrixChannelMap; class ImageMatrixColumn { public: VInt32 &operator[](VUInt32 channelIndex); private: ImageMatrixChannelMap channels; }; typedef std::map<VUInt32, ImageMatrixColumn> ImageMatrixColumnMap; class ImageMatrixRow { public: ImageMatrixColumn operator[](VUInt32 columnIndex); private: ImageMatrixColumnMap columns; }; typedef std::map<VUInt32, ImageMatrixRow> ImageMatrixRowMap; Each operator simply returns a map-wrapper class within, like so: ImageMatrixRow ImageMatrix::operator[](VInt32 rowIndex) { return rows[rowIndex]; } ImageMatrixColumn ImageMatrixRow::operator[](VUInt32 columnIndex) { return columns[columnIndex]; } VInt32 &ImageMatrixColumn::operator[](VUInt32 channelIndex) { return channels[channelIndex]; } Basically, when I set the value as say 100, and test the value to cout, it shows as 0, and not the number to which I had set it. for (VUInt32 a = 0; a < GetRowCount(); a++) { for (VUInt32 b = 0; b < GetColumnCount(); b++) { for (VUInt32 c = 0; c < GetChannelCount(); c++) { VInt32 value = 100; matrix[a][b][c] = value; VInt32 test = matrix[a][b][c]; // pixel = 100, test = 0 - why? cout << pixel << "/" << test << endl; } } } Note: I've altered the original code for this example so that it takes up less space, so some syntax errors may occur (please don't point them out).
The following operators return by value, so every write goes into a temporary copy and never reaches the actual data: ImageMatrixRow ImageMatrix::operator[](VInt32 rowIndex); ImageMatrixColumn ImageMatrixRow::operator[](VUInt32 columnIndex); Return them by reference instead: ImageMatrixRow& ImageMatrix::operator[](VInt32 rowIndex) ImageMatrixColumn& ImageMatrixRow::operator[](VUInt32 columnIndex)
691,223
691,296
handling a special character in a string while storing to a record on sqlite
in the following piece of code, I see that when my 'description' is something like: " ' ' ", I have a problem updating the description to the sqlite record. How do i handle the ' character. thanks! sql = wxString::Format( "UPDATE event SET event_description='%s' WHERE id=%d", description.c_str(), event_id); rc = sqlite3_exec((sqlite3 *)_theDB, sql.c_str(), NULL, 0, &sqlError); The OP answered his own question: check this out FAQ we need to replace the occurences of ' with '' in the string
Doubling up all the single quotes in the description string is one way to do it. This way you can avoid malicious descriptions (see Bobby Tables). ' ' becomes: '' '' And more importantly, the potentially dangerous description: ' WHERE 1=1 DELETE FROM Event -- becomes the harmless: '' WHERE 1=1 DELETE FROM Event -- Another (safer) way, is to use prepared statements.
691,347
691,366
optimize time(NULL) call in c++
I have a system that spend 66% of its time in a time(NULL) call. It there a way to cache or optimize this call? Context: I'm playing with Protothread for c++. Trying to simulate threads with state machines. So Therefore I cant use native threads. Here's the header: #ifndef __TIMER_H__ #define __TIMER_H__ #include <time.h> #include <iostream> class Timer { private: time_t initial; public: Timer(); unsigned long passed(); }; #endif and the source file: #include "Timer.h" using namespace std; Timer::Timer() { initial = time(NULL); } unsigned long Timer::passed() { time_t current = time(NULL); return (current - initial); } UPDATE: Final solution! The cpu cycles it going away somewhere, and if I spend them being correct. That is not so bad after all. #define start_timer() timer_start=time(NULL) #define timeout(x) ((time(NULL)-timer_start)>=x)
I presume you are calling it within some loop which is otherwise stonkingly efficient. What you could do is keep a count of how many iterations your loop goes through before the return value of time changes. Then don't call it again until you've gone through that many iterations again. You can dynamically adjust this count upwards or downwards if you find you're going adrift, but you should be able to engineer it so that on average, it calls time() once per second. Here's a rough idea of how you might do it (there are many variations on this theme) int iterations_per_sec=10; //wild guess int iterations=0; int lasttime=time(NULL); while(looping) { //do the real work //check our timing if (++iterations>iterations_per_sec) { int t=time(NULL); if (t==lasttime) { iterations_per_sec++; } else { iterations_per_sec=iterations/(t-lasttime); iterations=0; lasttime=t; //do whatever else you want to do on a per-second basis } } }
691,686
692,512
How to build C++ app which runs on plain old XP SP2 with Visual Studio 2008 and no Side-by-Side DLLs?
I'd like to compile a C++ project with just a single call to WinExec in order to launch another executable with some command line parameters. I've no idea what settings to specify in my project in order to get produce an executable that works without requiring Microsoft side-by-side DLLs, which I don't want to have to install on my target system. Any hints? The symptom is an application which fails to start and the following event getting written to the application log (freely translated from French): Error, SideBySide, event #33 Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8" cannot be found. UPDATE: I know that using WinExec is bad practice, but it works like a charm, and Microsoft can't possibly remove it in any future release of the API, can't they?
The solution has been answered (partially) by both jachymko and Josh. Here is the full solution: Set Project Properties / Configuration / Linker / Input / Ignore All Default Libraries to Yes and add kernel32.lib to Additional Dependencies. This alone won't link, as the code automatically refers to __security_check_cookie and _WinMainCRTStartup. Remove /GS switch to instruct the compiler not to inject the security check code. For this, set Project Properties / Configuration / C/C++ / Code Generation / Buffer Security Check to No. Set Project Properties / Configuration / C/C++ / Code Generation / Runtime Library to Multi-threaded (/MT). The initial Visual Studio 2008 generated code contains an entry point named _tWinMain(HINSTANCE, HINSTANCE, LPTSTR, int). Modify it by renaming it WinMain and convert the third argument to LPSTR. Set Project Properties / Configuration / Linker / Advanced / Entry Point to WinMain. With these changes to a default C++ project, the code finally compiles and links, and runs on a freshly installed Vista or XP, which lacks the runtime library.
691,719
691,742
C++ display stack trace on exception
I want to have a way to report the stack trace to the user if an exception is thrown. What is the best way to do this? Does it take huge amounts of extra code? To answer questions: I'd like it to be portable if possible. I want information to pop up, so the user can copy the stack trace and email it to me if an error comes up.
It depends on the platform. On GCC it's pretty trivial; see this post for more details. On MSVC you can use the StackWalker library, which handles all of the underlying API calls needed for Windows. You'll have to figure out the best way to integrate this functionality into your app, but the amount of code you need to write should be minimal.
691,773
691,786
Visual Studio unmanaged C++ smart file explorer
I use Visual Studio to develop unmanaged C++ apps for a living. Is there an add-on tool (price isnt the issue) that is smart enough to detect folders that I create within my solution folder and add them to the include path (/I)? It would automatically do this for win32, or x64 platform settings, or debug, and release, etc. It should also be smart enough to remove the include paths if folder is deleted Arent you frustrated sometimes when you double click a file under solution explorer and Visual Studio says it cant find the file because somebody moved or renamed it? ---> Is there a tool that would periodically scan the files in Solution Explorer and automatically removes or prompts the user to let them know those files are no longer valid? Like John Lakos once said, I too would pay for an add-on that can automatically add #include into my code by just parsing my project's .h/.cpp files. When is Visual Studio going to have an Eclipse-like solution explorer that allows user to directly manipuate the files/folders structure directly from within Visual Studio?
You can manipulate files/folders from the IDE! Just click the "show all files" button in the solution explorer (2nd from left, at the top). I just recently found that out.
691,954
691,957
Should I comment the declaration or the definition in C++?
Which is more practical to comment, the declaration (in the header file) or the definition (in the source file)? Maybe I should comment both, or comment neither and put it all in a separate file...
You should completely document the header file with highest priority. Comments in the definition should be concentrated on implementation details, while header comments should be concentrated on the interface. A third source of documentation, as you suggested, is useful as well. It should describe the overall concept. A big plus of commenting header files is that you can create documentation automatically from them if you adhere to some simple syntax. Say hello to doxygen!
692,225
692,278
What are the differences between C, C# and C++ in terms of real-world applications?
As I posted earlier here I've decided to try my hand at one of these, but given my interests as a web developer, I'd like to know the difference between them in their real-world applications. Edit Note: While I'm a web developer, please don't let that limit your answer. I'm 30...I've got years of career changing ahead of me.
Both C and C++ give you a lower level of abstraction that, with increased complexity, provides a breadth of access to underlying machine functionality that is not necessarily exposed with other languages. Compared to C, C++ adds the convenience of a fully object-oriented language (reduced development time), which can potentially add an additional performance cost. In terms of real-world applications, I see these languages applied in the following domains: C Kernel-level software. Hardware device drivers Applications where access to old, stable code is required. C, C++ Application or server development where memory management needs to be fine-tuned (and can't be left to generic garbage collection solutions). Development environments that require access to libraries that do not interface well with more modern managed languages. Although managed C++ can be used to access the .NET framework, it is not a seamless transition. C# provides a managed memory model that adds a higher level of abstraction again. This level of abstraction adds convenience and improves development times, but complicates access to lower-level APIs and makes specialized performance requirements problematic. It is certainly possible to implement extremely high-performance software in a managed memory environment, but awareness of the implications is essential. The syntax of C# is certainly less demanding (and error-prone) than C/C++ and has, for the initiated programmer, a shallower learning curve. C# Rapid client application development. High-performance server development (StackOverflow for example) that benefits from the .NET framework. Applications that require the benefits of the .NET framework in the language it was designed for. Johannes Rössel makes the valid point that the use of C# pointers and the unsafe and unchecked keywords breaks through the layer of abstraction upon which C# is built. I would emphasize that that type of programming is the exception in most C# development scenarios and not a fundamental part of the language (as is the case with C/C++).
692,438
692,449
Is boost shared_ptr <XXX> thread safe?
I have a question about boost::shared_ptr<T>. There are lots of thread. using namespace boost; class CResource { // xxxxxx } class CResourceBase { public: void SetResource(shared_ptr<CResource> res) { m_Res = res; } shared_ptr<CResource> GetResource() { return m_Res; } private: shared_ptr<CResource> m_Res; } CResourceBase base; //---------------------------------------------- // Thread_A: while (true) { //... shared_ptr<CResource> nowResource = base.GetResource(); nowResource.doSomeThing(); //... } // Thread_B: shared_ptr<CResource> nowResource; base.SetResource(nowResource); //... Q1 If Thread_A do not care the nowResource is the newest, will this part of code have problem? I mean when Thread_B do not SetResource() completely, Thread_A get a wrong smart point by GetResource()? Q2 What does thread-safe mean? If I do not care about whether the resource is newest, will the shared_ptr<CResource> nowResource crash the program when the nowResource is released or will the problem destroy the shared_ptr<CResource>?
From the boost documentation: shared_ptr objects offer the same level of thread safety as built-in types. A shared_ptr instance can be "read" (accessed using only const operations) simultaneously by multiple threads. Different shared_ptr instances can be "written to" (accessed using mutable operations such as operator= or reset) simultaneously by multiple threads (even when these instances are copies, and share the same reference count underneath.) Any other simultaneous accesses result in undefined behavior. So your usage is not safe, since it uses simultaneous read and write of m_res. Example 3 in the boost documentation also illustrates this. You should use a separate mutex that guards the access to m_res in SetResource/GetResource.
692,541
692,547
C++ template static pointer-to-member initialization
I have a template class which has a static pointer-to-member, like this: template<class T, T* T::*nextptr> class Queue { T* head; T* tail; static T* T::*pnext; }; My question is how to write the initializer of the static pointer-to-member. I tried the obvious case: template<class T, T* T::*nextptr> T* Queue<T, nextptr>::*pnext(nextptr); But that didn't work. Any idea?
Queue<T, nextptr>::pnext is declared as type T* T::*, so it should look like this: template<class T, T* T::*nextptr> T* T::* Queue<T, nextptr>::pnext(nextptr);
692,752
692,759
Naming conventions for template types?
Traditionally, the names of template types are just a single upper-case letter: template<class A, class B, class C> class Foo {}; But I hesitate to do this because it's non-descriptive and hard therefore to read. So, wouldn't something like this be better: template<class AtomT, class BioT, class ChemT> class Foo {}; I also tend to think the following would not be a bad idea: template<class ATOM, class BIO, class CHEM> class Foo {}; It makes them stand out (and also, it's upper-case letters again). What's your opinion?
For C++ templates I have a couple of patterns If there is just a single template parameter, I name it T (or U,V for nested templates). When there are multiple parameters and the use is not immediately obvious then I use descriptive names prefixed with T. For example, TKey, TValue, TIdentifiier, etc ... This makes the parameters fairly easy to spot throughout the template usage. I would avoid the all upper case version though. Most people use all upper case identifiers in C/C++ to represent a macro definition. Repeating that pattern for a template parameter is likely to confuse people down the road.
692,880
692,897
TCP: How are the seq / ack numbers generated?
I am currently working on a program which sniffs TCP packets being sent and received to and from a particular address. What I am trying to accomplish is replying with custom tailored packets to certain received packets. I've already got the parsing done. I can already generated valid Ethernet, IP, and--for the most part--TCP packets. The only thing that I cannot figure out is how the seq / ack numbers are determined. While this may be irrelevant to the problem, the program is written in C++ using WinPCap. I am asking for any tips, articles, or other resources that may help me.
When a TCP connection is established, each side generates a random number as its initial sequence number. It is a strongly random number: there are security problems if anybody on the internet can guess the sequence number, as they can easily forge packets to inject into the TCP stream. Thereafter, for every byte transmitted the sequence number will increment by 1. The ACK field is the sequence number from the other side, sent back to acknowledge reception. RFC 793, the original TCP protocol specification, can be of great help.
692,885
692,894
Cross-platform C++ IDEs?
I'm looking for a good IDE for C++ that has most or all of the following properties (well, the first 4 or 5 ones are mandatory): cross-platform (at least Mac, Linux) of course, syntax highlighting and other basic coding editor functionality reasonably responsive GUI, not too sluggish on mid-size (say, 100 files) projects (both my Linux and Mac computers are 5 years old) code completion integration with gdb SVN integration would be nice too refactoring (rename a variable semi-automatically throughout the codebase, etc.) can display class hierarchy can add hypertext-style links to the code (so you can click on a function/class name and it brings you to the declaration), can show code parts that (potentially) call my function (I know that's hard to do for C++ so I added the potentially) So far, I've been using Emacs for everything, but I think advanced features that give me more overview and search capabilities like the last three would be nice. I'm tired of just using grep and find on my codebase. Basically, I want most of the things for C++ development that Java IDEs usually do. I see Netbeans can be used for C++, as well as Eclipse. I think they would fit my requirements. Has anyone actually used them for daily coding? What are the differences? What are your experiences? Code::Blocks is more lightweight and doesn't do many of the things listed above, right? Note: I don't need a GUI builder. Update: With the bullet point 10 I mean that I can give it a function name (e.g. Foo::setValue), and it shows me all occurrences where this particular function (and not other functions of the same name, like Bar::setValue) is called in my codebase.
Code::Blocks does the first 5 and it's also got class method browsing (though not a hierarchy display). It's a much more lightweight solution than Eclipse or NetBeans, but if you like the minimalist approach it's pretty good. To summarise CB versus your requirements: Yes Yes Yes Yes Yes No - but you can add it easily No No, but similar functionality No Can go from definition to decl and vice versa, but no callers list As all the solutions you mention are free, you could always try them all and then make a decision on which one to stick with. That's what I did.
692,893
693,314
How can I code my own custom splash screen for Linux?
This is NOT a question on plain old boring customization; I actually want to create an program, you know, with source code, etc... I'm thinking about programming my own media centre interface, and I figured it'd look better if I coded my own splash screen for when the OS is loading. Note: The media centre interface will be run in X, but this question is regarding what will happen before the X server loads. Simply, I'd like to make a splash screen application to hide the linux kernel boot messages. Is there a way I can program some animation in to this like some sort of animated progress bar for example? I assume that I won't be able to code any 2D/3D graphics (as that'd require X to be running, right?), so how would I go about generating that? I'd prefer to do this in C++, but C is also an option. Note: I'm not looking to use any existing "themes" or anything like that, just interested in the programming side of things. Update: Some suggestions have been to use standard images (.bmp, .jpeg, etc), I am not interested in loading images in to an existing application. But obviously I may want to load images in to the boot screen application that I will make. I'm not tied to a Linux distro, so this can be for anything, although Debian or a Debian-based distro would be nice. I like the suggestion about loading the X server early and running a loading screen from there, however is there not a more direct approach? Surely you can make a program which hides the boot messages and shows a custom program? Obviously this would be very low level programming, but that's what I'm looking for... Also, I'm not interested in altering the boot loader (LILO, GRUB, etc). Update 2: So far good suggestions have been looking at the source code for applications like splashy and fbsplash. Can anyone better this suggestion?
For the graphical output you can use the Linux framebuffer; for application development you can use GTK, which supports rendering directly to the framebuffer (GtkFB). For video and such you can use mplayer, which also supports rendering to the framebuffer. For the initialization you have to look at the init system in use: Debian uses a SysV-init-style initialization (http://www.debian-administration.org/articles/212), while Ubuntu uses Upstart.
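As a sketch of the raw framebuffer route (Linux-specific, error handling trimmed, 32bpp assumed): the device is mmap'd and pixels are addressed by byte offset. The offset arithmetic is the portable part; the open/ioctl/mmap calls need a real console framebuffer, so they fail gracefully under X or over ssh:

```cpp
#include <fcntl.h>
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>
#include <cstring>

// Byte offset of pixel (x, y). line_length is the stride in bytes, which may
// be wider than xres * bytes_per_pixel because of padding.
std::size_t fb_offset(std::size_t x, std::size_t y,
                      std::size_t bytes_per_pixel, std::size_t line_length) {
    return y * line_length + x * bytes_per_pixel;
}

// Fill the whole screen with one colour. Returns false when no framebuffer
// is available (inside X, ssh sessions, containers, ...).
bool fill_framebuffer(unsigned colour) {
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) return false;
    fb_var_screeninfo v;
    fb_fix_screeninfo f;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &v) < 0 ||
        ioctl(fd, FBIOGET_FSCREENINFO, &f) < 0) { close(fd); return false; }
    std::size_t size = (std::size_t)v.yres * f.line_length;
    unsigned char* fb = (unsigned char*)mmap(0, size, PROT_READ | PROT_WRITE,
                                             MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { close(fd); return false; }
    for (std::size_t y = 0; y < v.yres; ++y)
        for (std::size_t x = 0; x < v.xres; ++x)
            std::memcpy(fb + fb_offset(x, y, 4, f.line_length), &colour, 4);
    munmap(fb, size);
    close(fd);
    return true;
}
```

From there, an animated progress bar is just repeated fills of a growing rectangle on a timer.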
693,206
694,481
Seeking suggestions for Unit Testing C++ app spread over several dlls
New to unit testing and have an app that is spread out over several dlls. What are some suggestions for unit testing the app? If I put the unit tests in their own project, i can only test the published interface for each dll. Is that what most people do? Or should I put the unit tests in with the code in the dlls and test there? What would be the best way to coordinate all the tests at that point? Most of the examples I've seen for using the various frameworks don't really address this and I've search elsewhere but haven't found much information. Thanks. Update: I guess to clarify a bit I have class A and class B in a dll. Class B is used by Class A, but only Class A is exposed. I'm wanting to get unit tests in place before refactoring this existing code. So the question is should I put unit tests in with the dll code to test class A and class B directly and/or have the unit tests in a separate project and test class A and class B through the exposed class A?
Unit testing in C++ really means class testing; DLL testing would rather be integration testing. Both are necessary, but it's better to test things at as low a level as possible. Take a look at the V-model: http://en.wikipedia.org/wiki/V-Model_(software_development).
693,361
706,960
Visual Studio code metrics plugin for C++
Is there a VS2008 plugin for code metrics for C++? I have Team System but it doesn't work for non- .NET code. I tried studioTools but it just freezes. So, does anyone know one that actually works?
Well, I found out that CodeRush does this.