336,759
336,918
How does the loader map a DLL into the process address space?
I am curious to know how the loader maps a DLL into a process's address space, and how the loader does that magic. An example would be highly appreciated. Thanks in advance.
What level of detail are you looking for? On the basic level, all dynamic linkers work pretty much the same way:
1. Dynamic libraries are compiled to relocatable code (using relative jumps instead of absolute, for example).
2. The linker finds an appropriately-sized empty space in the memory map of the application, and reads the DLL's code and any static data into that space.
3. The dynamic library contains a table of offsets to the start of each exported function, and calls to the DLL's functions in the client program are patched at load time with a new destination address, based on where the library was loaded.
Most dynamic linker systems have some way of setting a preferred base address for a particular library. If a library is loaded at its preferred address, then the relocation in steps 2 and 3 can be skipped.
336,771
917,695
How can I run code on Windows Mobile while being suspended?
I'd like to run some C++ code while the Windows Mobile PocketPC is (or appears to be) suspended. An example of what I mean is the HTC Home plugin, which shows (among others) a tab where the HTC Audio Manager can be used to play back mp3 files. When I press the on/off button, the display goes black, but the audio keeps playing. The only button to switch back on is the on/off button, as expected. What I have tried so far is to capture hardware button presses (works) and switch off the video display (works). The problem with this approach is that pressing any key on the device (even accidentally) switches the video display back on. I think this isn't the approach taken in the HTC Audio Manager. I'm guessing at some low-level API magic for this to work, or that the code to play back audio runs at some interrupt level, or the device goes into a different suspend mode.
I found source code on the xda-developers forum that explains what to do, and it works as described. The main points are:
- Set the device to send a notification when going into "unattended" mode. This is done with PowerPolicyNotify(PPN_UNATTENDEDMODE, TRUE).
- For every device that you need during unattended mode, call SetPowerRequirement(L"gpd0:", D0, POWER_NAME|POWER_FORCE, NULL, NULL). The "gpd0:" device is the GPS intermediate driver; replace or duplicate the call for any device you need, e.g. "wav1:" for audio, "dsk1:" for the memory card or "com1:" for serial port 1.
- Create a message queue and request power notifications using RequestPowerNotifications(hMsgQueue, PBT_POWERINFOCHANGE | PBT_TRANSITION).
- Every time a power notification is sent, the message queue is filled with a struct of type POWER_BROADCAST. Look for the PBT_TRANSITION message type. The field pPwrBroadcast->SystemPowerState then contains the string "unattended" when the device is shut off, e.g. by the user pressing the off button.
- On this transition, just call SystemIdleTimerReset() to tell the device not to shut off. Repeat when the transition occurs again.
- When shutting down, call PowerPolicyNotify() to leave unattended mode, release any devices with ReleasePowerRequirement(), and stop receiving power notifications with StopPowerNotifications().
336,801
338,456
Memory management practices and tools for Symbian C++
I have pretty much finished my first working Symbian application, but in my hasty learning I have paid little attention to memory management and to pushing to and cleaning up the cleanup stack. Could somebody please point me in the direction of some of the best practices to use here, and maybe some of the best leak-detection/memory-profiling tools? For example, if I grab a TDesC or a TPtrC16 inside a function, how do I then clean them up the best way? Is it simply
TPtrC16 temp = ...
temp.CleanupClosePushL();
CleanupStack::PopAndDestroy();
...for everything? Thanks, and please forgive me, I am a self-confessed Symbian n00b.
I have in the past used HookLogger from Symbian to trace and investigate memory leaks. It is not the best, but it sure does help. Also, the heap markers raise ALLOC panics in case of memory leaks whenever you exit your application. The information those panics provide is barely helpful, but it is a good indication that there is a memory leak somewhere. From there, you can use tools like HookLogger to narrow down which part of the code the leak is being introduced in. I mentioned a few Symbian resources in reply to this thread. While all of those have been most useful for me, I found the book Accredited Symbian Developer Primer immensely helpful. If you can get your hands on it, I would definitely recommend it. It is a quick read, but it explains the concepts brilliantly, including things about memory management and descriptors.
337,620
337,652
Matrix Template Library matrix inversion
I'm trying to invert a matrix with Boost boost_1_37_0 and MTL mtl4-alpha-1-r6418. I can't seem to locate the matrix inversion code. I've googled for examples and they seem to reference lu.h, which seems to be missing in the above release(s). Any hints? @Matt suggested copying lu.h, but that seems to be from MTL2 rather than MTL4, and I'm having trouble compiling MTL2 with VS05 or higher. So, any idea how to do a matrix inversion in MTL4? Update: I think I understand Matt better now, and I'm heading down the ITL path.
Looks like you use lu_factor, and then lu_inverse. I don't remember what you have to do with the pivots, though. And yeah, like you said, their documentation says you need lu.h. From the documentation:
How do I invert a matrix? The first question you should ask yourself is whether you really want to compute the inverse of a matrix or whether you really want to solve a linear system. For solving a linear system of equations, it is not necessary to explicitly compute the matrix inverse. Rather, it is more efficient to compute triangular factors of the matrix and then perform forward and backward triangular solves with the factors. More about solving linear systems is given below. If you really want to invert a matrix, there is a function lu_inverse() in mtl/lu.h.
If nothing else, you can look at lu.h on their site.
338,400
338,418
Warnings using format strings with sprintf() in C++
Compiling these lines:
long int sz;
char tmpret[128];
// take a substring of c, convert it to a C string, convert that to an int,
// and multiply by 1024
sz = atoi(c.substr(0, pos).c_str()) * 1024;
snprintf(tmpret, 128, "%l", sz);
I get two warnings on the snprintf line:
warning: conversion lacks type at end of format
warning: too many arguments for format
Why? The type is specified (long int sz, and %l in snprintf) and there is only one argument in snprintf. Can anybody help me? Thanks.
Your format lacks a type because l is only a length modifier; it has to be followed by a conversion specifier. It should be %ld.
338,427
338,447
How to find out if a pointer is on the stack on PC/Visual C++
[This is for PC/Visual C++ specifically (although any other answers would be quite illuminating :))]
How can you tell if a pointer comes from an object on the stack? For example:
int g_n = 0;

void F()
{
    int *pA = &g_n;
    ASSERT_IS_POINTER_ON_STACK(pA);
    int i = 0;
    int *pB = &i;
    ASSERT_IS_POINTER_ON_STACK(pB);
}
so only the second assert (pB) should trip. I'm thinking of using some inline assembly to figure out whether it's within the SS segment register, or something like that. Does anybody know if there are any built-in functions for this, or a simple way to do it? Thanks! RC
Technically speaking, in portable C you can't know. A stack for arguments is a hardware detail that is honored by many, but not all, compilers; some compilers will pass arguments in registers when they can (e.g. fastcall). If you are working specifically on Windows NT, you want to grab the Thread Environment Block by calling NtCurrentTeb(). Joe Duffy's blog has information on this, and from it you can get the stack range. Check whether the pointer is in that range and you should be good to go.
338,467
338,507
Is there a tutorial on C++ programming in Visual Studio 2008?
Can anyone link me to a decent C++ tutorial that's actually up to date? Almost everything I find applies to 2005, and the code examples are riddled with errors that won't run in my 2008 version of the Visual compiler.
The book Accelerated C++ is a good start to learn C++.
338,509
338,553
When will C++0x be released?
Possible Duplicates: C++0X when? When will C++0x be finished? When will C++0x be released? Anyone here know anything?
Edit: We have a new standard now: http://herbsutter.com/2011/08/12/we-have-an-international-standard-c0x-is-unanimously-approved/
Edit: The FDIS is done, so officially it should be released in a few months. See: http://herbsutter.com/2011/03/25/we-have-fdis-trip-report-march-2011-c-standards-meeting/
Herb Sutter is a useful source of information on this, as the convener of the ISO C++ committee (until recently). See his blog post from March 13, 2010 for an update on recent progress: C++0x is now a Final Committee Draft, and "... assuming all goes well, C++0x could officially be published as soon as next year as ISO C++ 2011, and we can stop with the 'x-is-hex' jokes and just start calling it C++11." P.J. Plauger has taken over as the new convener, but I expect that Herb will continue to provide updates on the committee's progress and, as Herb also works for Microsoft, early clues as to when a Microsoft implementation of C++0x will be available.
338,809
338,819
How to insert a null value with Qt?
Please give me a tip: how do I insert a null value into a table using the Trolltech Qt 4.x SQL classes? QSqlQuery, I guess, or something else from QtSql. As an analog, in .NET there is the System.DBNull class, which represents SQL NULL. And what type should I use for an object's property that can hold both a null value and a QString? In C# I could use System.Object.
From QSqlQuery::addBindValue documentation: To bind a NULL value, use a null QVariant; for example, use QVariant(QVariant::String) if you are binding a string.
339,150
339,161
Extracting individual digits from a float
I have been banging my head on this one all day. The C++ project I am currently working on has a requirement to display an editable value. The currently selected digit displays the incremented value above and decremented value below for said digit. It is useful to be able to reference the editable value as both a number and collection of digits. What would be awesome is if there was some indexable form of a floating point number, but I have been unable to find such a solution. I am throwing this question out there to see if there is something obvious I am missing or if I should just roll my own. Thanks for the advice! I was hoping for a solution that wouldn't convert from float -> string -> int, but I think that is the best way to get away from floating point quantization issues. I ended up going with boost::format and just referencing the individual characters of the string. I can't see that being a huge performance difference compared to using combinations of modf and fmod to attempt to get a digit out of a float (It probably does just that behind the scenes, only more robustly than my implementation).
The internal representation of floating-point numbers isn't like what you see. You can only convert to a string. To do that:
char string[99];
sprintf(string, "%f", floatValue);
Or see this: http://www.parashift.com/c++-faq-lite/misc-technical-issues.html#faq-39.1
The Wikipedia article can explain more about the representation: http://en.wikipedia.org/wiki/Floating_point
339,971
339,988
How to include sub-directories in Visual Studio?
I have to include many header files, which are in different sub-directories. Is there a way in Visual Studio (I am using the 2005 edition) to set one include path such that Visual Studio will also search the sub-directories for header files?
Setting the folder search paths in the Visual Studio settings to fix an include issue is generally not a good idea from a design point of view: your code will be less portable to different machines with different directory layouts. My suggestion would be to settle on an organisation of your code so that the relative paths of the sub-directories (relative to your including code) are fixed:
- Add the "base folder" to the project (project properties -> Configuration Properties -> C/C++ -> Additional Include Directories).
- Add the subdirectories to the #include statements, i.e. #include "subdirectory/somefile.h".
This has the added bonus of letting you see which folder in your solution contains the file; that is often useful information when you're trying to find your way around or trying to figure out what a file is for.
340,003
340,117
What is the C++ equivalent to GetObject in JavaScript and VBScript?
What is the C++ equivalent to GetObject in JavaScript and VBScript? The closest match I found to my question is: http://codewiz51.blogspot.com/2008/06/vb-script-getobject-c-api-cogetobject.html However, the sample uses a nonexistent interface, and asking for the IUnknown returns null. Does someone have an example that works?
I figured out the issue. The object I wanted to access was
winmgmts:{impersonationLevel=impersonate}!\\.\root\default:StdRegProv
I mistakenly took \\ for an escape sequence. In C++ the correct query is:
::CoGetObject(L"winmgmts:{impersonationLevel=impersonate}!\\\\.\\root\\default:StdRegProv", NULL, IID_IUnknown, (void**)&pUnk);
Thank you :)
340,071
340,103
Looking for OSS for OSI Layer 2 Traffic Generator
I am looking for layer 2 traffic generator [open source]. Some OSS using winpcap or libpcap. Thanks a lot.
- Scapy
- bittwist
- anything built on Perl's Net::Pcap, or on Python's or Ruby's pcap interface...
340,122
340,291
::FindWindow fails from Service application
The Windows API ::FindWindow function fails when called from a service application. GetLastError() also returns 0 (success?). Is this some privilege/access-rights problem? Do you think it's a design problem and I should use another IPC method?
leppie's right: Windows services are usually denied interaction with the desktop. You can bypass that in XP and earlier versions, but won't be able to in Vista and above. You'd better delegate desktop and user interaction to a GUI application. See this document for details.
340,185
389,332
using GDAL/OGR api to read vector data (shapefile)--How?
I am working on an application that involves some GIS work. There are some .shp files to be read and plotted onto an OpenGL screen. The current OpenGL screen uses the orthographic projection as set by glOrtho() and already displays a map using coordinates from a simple text file. Now the map to be plotted is to be read from a shapefile. I have the following doubts:
1. How do I use the WGS84 projection of the .shp file (as read from the shapefile's .prj file, in WKT format) with my existing glOrtho projection? Is there any conversion that needs to be done, and how is it different from what glOrtho() sets up? Basically, how do I use this information?
2. My application needs to be set up in such a way that I can know the exact lat/long of a point on the map. For example, if I am hovering over city X, its correct lat/long can be fetched. I know that this can be done using open-source utilities/APIs like GDAL/OGR, but I am messed up, as the documentation of these APIs is not getting into my head. I tried to find some sample C++ programs but couldn't find one.
I have already written my own logic to read the coordinates from a shapefile containing points/polylines/polygons (using the C Shapelib) and plotted them on my OpenGL screen. I found an OGR sample in the docs to read a POINTS shapefile, but none for a POLYGON shapefile. And the problem is that this application has to be so dynamic that, upon loading the shapefile, it should correctly set up the projection of the OpenGL screen depending on the projection of the .shp file being read, e.g. WGS84, LCC, Everest Modified, etc. How do I achieve this with the OGR API? Kindly give your inputs on this problem. I am really keen to make this work, but I'm not getting the right start.
Shapefile rendering is quite straightforward in OpenGL. You may require "shapelib", a free shapefile-parsing library in C (Google it). Use GL_POINTS for a point shapefile, GL_LINES for a line shapefile and GL_LINE_LOOP for a polygon shapefile. Set your bounding-box coords in the ortho projection. What you read from the .prj file is projection info. WGS84 gives you lat/long (spherical) coords, but your display system is 2D (rectangular), so you need to convert 3D spherical coords to 2D rectangular coords; this is the meaning of projection. Projection types are numerous, depending on the area of interest on the globe (remember that projection distorts the area/shape/size of features): they range from Polyconic, Modified Everest, NAD, UTM, etc. If you simply need WGS84, then read the bounding-box coords of your .shp file and assign them to glOrtho. If you have another projection (e.g. UTM), then convert your bounding-box coords into projection coords and assign the newly projected coords to glOrtho. For converting lat/long into any projection, you may require projection libraries like "Projlib" or "GeotransEngine", etc. For further clarifications you may contact me on dgplinux@ y a h o o . c o m
340,282
340,305
i = ++i + ++i; in C++
Can someone explain to me why this code prints 14? I was just asked by another student and couldn't figure it out. int i = 5; i = ++i + ++i; cout<<i;
The order of side effects is undefined in C++. Additionally, modifying a variable twice in a single expression has no defined behavior (see the C++ standard, §5/4, physical page 87 / logical page 73). Solution: don't use side effects in complex expressions, and don't use more than one in simple ones. And it does not hurt to enable all the warnings the compiler can give you: adding -Wall (gcc) or /Wall /W4 (Visual C++) to the command line yields a fitting warning:
test-so-side-effects.c: In function 'main':
test-so-side-effects.c:5: warning: operation on 'i' may be undefined
test-so-side-effects.c:5: warning: operation on 'i' may be undefined
On this particular compiler, the code apparently compiles as if it were:
i = i + 1;
i = i + 1;
i = i + i;
340,345
342,657
Web service can't open named pipe - access denied
I've got a C++ service which provides a named pipe to clients, created with a NULL SECURITY_ATTRIBUTES as follows:
hPipe = CreateNamedPipe(
    lpszPipename,
    PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED,
    PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
    PIPE_UNLIMITED_INSTANCES,
    BUFSIZE,
    BUFSIZE,
    0,
    NULL);
There is a DLL which uses this pipe to get services. There is a C# GUI which uses the DLL and works fine. There is a .NET web site which also uses this DLL (the exact same one, on the same PC) but always gets "permission denied" when it tries to open the pipe. Does anyone know why this might happen and how to fix it? Also, does anyone know of a good tutorial on SECURITY_ATTRIBUTES? I haven't understood the MSDN info yet. Thanks, Patrick
The default ACL for named pipes (what you get with a null security descriptor) grants write access only to LocalSystem, Administrators, and the pipe's owner/creator. Unless your web application is running under one of those accounts (which by default it would not be) you won't be able to get write access. (I'm assuming you're asking for read/write.) There are a couple of options:
1. Have the web application run under the same account as the service that created the pipe.
2. Configure the web application to use impersonation, either by specifying a specific user with write access in web.config, or by setting it to use the user passed in by IIS (and accessing the application from a user account with write access).
3. Manually impersonate a user with write access for the duration of the pipe access (e.g., with WindowsIdentity.Impersonate).
4. Use a non-default security descriptor on the pipe that grants write access to Everyone (or to the specific account running the application, although that would be more complicated to set up). There's an example of creating a simple security descriptor here; you should be able to modify it to suit your needs.
340,413
341,330
How do C/C++ compilers handle type casting between types with different value ranges?
How does type casting happen without loss of data inside the compiler? For example:
int i = 10;
UINT k = (UINT) i;

float fl = 10.123;
UINT ufl = (UINT) fl; // data loss here?

char *p = "Stackoverflow Rocks";
unsigned char *up = (unsigned char *) p;
How does the compiler handle this kind of typecasting? A low-level example showing the bits would be highly appreciated.
Well, first note that a cast is an explicit request to convert a value of one type to a value of another type. A cast will also always produce a new object, which is a temporary returned by the cast operator. Casting to a reference type, however, will not create a new object; the object referenced by the value is reinterpreted as a reference of a different type. Now to your question. Note that there are two major kinds of conversions:
- Promotions: these can be thought of as casting from a possibly narrower type to a wider type. Casting from char to int, short to int, or float to double are all promotions.
- Conversions: these allow casting from long to int, int to unsigned int and so forth. They can in principle cause loss of information. There are rules for what happens if you assign -1 to an unsigned-typed object, for example. In some cases a wrong conversion results in undefined behavior: if you assign a double larger than what a float can store to a float, the behavior is undefined.
Let's look at your casts:
int i = 10;
unsigned int k = (unsigned int) i; // :1

float fl = 10.123;
unsigned int ufl = (unsigned int) fl; // :2

char *p = "Stackoverflow Rocks";
unsigned char *up = (unsigned char *) p; // :3
1. This cast causes a conversion to happen. No loss of data occurs, since 10 is guaranteed to be storable by an unsigned int. If the integer were negative, the value would basically wrap around the maximal value of an unsigned int (see 4.7/2).
2. The value 10.123 is truncated to 10. Here it obviously does cause loss of information. As 10 fits into an unsigned int, the behavior is defined.
3. This actually requires more attention. First, there is a deprecated conversion from a string literal to char*, but let's ignore that here (see here). More importantly, what happens if you cast to an unsigned type? Actually, the result of that is unspecified per 5.2.10/7 (note that the semantics of that cast are the same as using reinterpret_cast in this case, since that is the only C++ cast able to do it): "A pointer to an object can be explicitly converted to a pointer to an object of different type. Except that converting an rvalue of type 'pointer to T1' to the type 'pointer to T2' (where T1 and T2 are object types and where the alignment requirements of T2 are no stricter than those of T1) and back to its original type yields the original pointer value, the result of such a pointer conversion is unspecified." So you are only safe to use the pointer after you cast back to char * again.
340,437
340,446
Missing 'virtual' qualifier in function declarations
Whilst trawling through some old code I came across something similar to the following:
class Base
{
public:
    virtual int Func();
    ...
};

class Derived : public Base
{
public:
    int Func();  // Missing 'virtual' qualifier
    ...
};
The code compiles fine (MS VS2008) with no warnings (level 4) and it works as expected: Func is virtual even though the virtual qualifier is missing in the derived class. Now, other than causing some confusion, are there any dangers with this code, or should I change it all, adding the virtual qualifier?
The virtual will be carried down to all overriding functions in derived classes. The only real benefit to adding the keyword is to signify your intent: a casual observer of the Derived class definition will immediately know that Func is virtual. Even classes that extend Derived will have a virtual Func method. Reference: Virtual Functions on MSDN. Scroll down the page to see: "The virtual keyword can be used when declaring overriding functions in a derived class, but it is unnecessary; overrides of virtual functions are always virtual."
340,687
341,989
Unable to load C++ DLL in C# application in Vista x64
I have a DLL written in C++ that needs to be used by an application in C#. It works great under Vista x86, but under x64 it fails to load. So I build an x64 version of the DLL and I detect whether the OS is x86 or x64 and use the appropriate interop call to the appropriate DLL. This works fine under Vista x86, but under Vista x64 I get a "side-by-side" error when it tries to load the DLL. Why exactly is it failing to load it, and what can be done to correct this? (Please let me know if you need more information, I'm not sure what information is relevant in trouble-shooting this issue.)
The VC90 x64 redistributable will need to be installed on the client machine. As far as the manifest goes, I think you can alter it to remove the processorArchitecture tag, or have it say "any".
340,943
340,990
C++ Multi-dimensional Arrays on the Heap
How would I go about dynamically allocating a multi-dimensional array?
If you already know the size of the nested dimensions, you can also literally allocate a multi-dimensional array using new:
typedef int dimensions[3][4];

dimensions * dim = new dimensions[10];
dim[/* from 0 to 9 */][/* from 0 to 2 */][/* from 0 to 3 */] = 42;
delete [] dim;
Instead of 10, a runtime-determined value can be passed; since it's not part of the type operator new returns, that's allowed. This is nice if you know the number of columns but want to keep the number of rows variable, for example. The typedef makes the code easier to read.
341,117
1,754,941
Display Outlook icon in notification area for messages, not in inbox
I have rules set up to move some email messages into different folders. I would like these to still show the envelope in the notification area, but there is no option in the Rules Wizard to do this. It looks like I would have to have the rule either "run a script" or "perform a custom action", allowing VBA or C/C++ respectively. Does anyone have a better solution?
You can also achieve it not by using a rule, but by doing the rule-like action in code. For example:
Private Sub Application_NewMailEx(ByVal EntryIDCollection As String)
    Dim mai As Object
    Dim strEntryId
    For Each strEntryId In Split(EntryIDCollection, ",")
        Set mai = Application.Session.GetItemFromID(strEntryId)
        If mai.Parent = "Inbox" Then
            If mai.SenderEmailAddress = "the-email-address-the-rule-applies-to" Then
                mai.Move Application.GetNamespace("MAPI").GetFolderFromID("the-entry-ID-of-the-folder-you-want-to-move-the-message-to")
            End If
        End If
        Set mai = Nothing
    Next
End Sub
How to get the folder ID (i.e., the EntryID of the folder): this is just a manual way; you could write a recursive procedure, but for simple purposes this is fine. For instance, I had a structure like:
Mailbox - My_Name_Here
    Inbox
        The Subfolder I'm Looking For
    Sent Items
    ...
So in the Immediate window I typed:
? Application.GetNamespace("MAPI").Folders(1)
and increased the number until I got "Mailbox - My_Name_Here". Then I typed:
? Application.GetNamespace("MAPI").Folders(the_number_of_my_mailbox).Folders(1)
increasing the number until I got "Inbox". Then:
? Application.GetNamespace("MAPI").Folders(the_number_of_my_mailbox).Folders(the_number_of_my_Inbox).Folders(1)
increasing the number until I got "The Subfolder I'm Looking For". Then:
? Application.GetNamespace("MAPI").Folders(the_number_of_my_mailbox).Folders(the_number_of_my_Inbox).Folders(the_number_of_the_subfolder_i_was_looking_for).EntryID
And that was it: the EntryID of the folder I wanted to move the message to. You get the point, I'm sure :)
341,130
341,170
Hide Comments in Code in Unix Environment
I work in a Unix environment with the typical Unix tools (emacs, vim, gvim, Sun Studio, etc.). My project has huge, gross boilerplate comments on every method. It makes the files thousands of lines long, with a couple hundred lines of actual code. I may be exaggerating a bit, but you get the idea. I am looking for a way, when viewing these files, to hide (not remove) all comments so I can quickly go through the code. C++ comments ('//') only.
It all depends on which editor you use. In vim, you can enable folding with:
set foldenable
Then you'll be able to use different folding methods. For mainstream languages, you can set:
set foldmethod=syntax
which enables syntax folding. There are half a dozen folding methods; I think the best thing would be to read :help folding, which should answer everything.
341,192
341,833
Segmentation fault using SDL with C++, trying to Blit images
OK - I have an interesting one here. I'm working on a tetris clone (basically to "level-up" my skills). I was trying to refactor my code to get it abstracted the way I wanted it. While it was working just fine before, now I get a segmentation fault before any images can be blitted. I've tried debugging it to no avail. I have posted my SVN working copy of the project here. It's just a small project and someone with more knowledge than me and a good debugger will probably figure it out in a snap. The only dependency is SDL. Kudos to the person that can tell me what I'm doing wrong. Edit: As far as I can tell, what I have now and what I had before are logically the same, so I wouldn't think that what I have now would cause a segmentation fault. Just run an svn revert on the working copy, recompile and you can see that it was working...
Look at lines 15 to 18 of Surface.cpp:
    surface = SDL_DisplayFormatAlpha( tempSurface );
    surface = tempSurface;
}
SDL_FreeSurface( tempSurface );
I assume it segfaults because when you use this surface later, you are actually operating on tempSurface, because of the line
surface = tempSurface;
and not on the surface returned by SDL_DisplayFormatAlpha(). Since you free tempSurface, surface is now pointing to invalid memory. To fix, simply remove the second line in the else block.
341,345
342,305
Viewing language (C/C++) reference/documentation in CodeBlocks
My first question on StackOverflow... Does anybody know a way of viewing the reference/documentation manual of a language through CodeBlocks? Specifically for C/C++. Example: Say I want to look up the reference for strncpy(). In a very old Borland system (which we use at school) I would write the word and middle-click on it, thus being taken to its reference. It would also be nice if one can browse through the manual in some way. Have I completely overlooked this (also searching google), or is it missing? If it's missing, what's the quickest way of searching the C and/or reference manual without being online? Regards!
Yes, it is possible, though I'm not sure about the help files themselves. The procedure is documented on the forums (Re: F1 - help and function reference, September 15, 2008): "If you have the help plugin installed, you can set help files through Settings->Environment and clicking on 'Help files' in the list to the left. The default file is called when pressing F1." A follow-up reply notes: "You are right, but is there any up-to-date documentation? Now I use MSDN or the cppreference.com web pages."
341,462
343,590
What's the difference between BSTR and _bstr_t?
Can anyone explain the difference between the types mentioned above and some sample usage to clearly explain the difference between the two? Any help would be highly appreciated! Note: this question is a spin-off from this other question
BSTR is the string data type used with COM. _bstr_t is a wrapper class that works like a smart pointer, so it will free the allocated memory when the variable is destroyed or goes out of scope. _bstr_t also has reference counting, which increases every time you pass the _bstr_t variable by value (avoiding an unnecessary copy) and decreases when it is no longer used. When all references are destroyed, the memory allocated for the string is freed. An alternative to _bstr_t is CComBSTR. It also manages the memory for the BSTR, but has no reference counting.
341,536
341,563
Is it possible to build a DLL in C++ that has no dependencies?
I would like to deploy a very simple DLL with my C# application, but any DLL that I build in Visual Studio 2008 seems to have a dependency on "Microsoft.VC90.CRT". Is it possible to build a DLL using VS2008 without this dependency? How can I tell what is causing the dependency?
I'm not sure about the latest VC++ versions, but previously you could tell the compiler/linker to link with a static version of the MSVCRT runtime library instead of the dynamic (DLL) version, via the /MT family of compiler switches rather than /MD. It's possible this option still exists.
341,550
341,627
On Windows, when should you use the "\\?\" filename prefix?
I came across a C library for opening files given a Unicode filename. Before opening the file, it first converts the filename to a path by prepending "\\?\". Is there any reason to do this other than to increase the maximum number of characters allowed in the path, per this MSDN article? It looks like these "\\?\" paths require the Unicode versions of the Windows API and standard library.
Yes, it's just for that purpose. However, you will likely see compatibility problems if you decide to create paths over MAX_PATH in length. For example, the explorer shell and the command prompt (at least on XP, I don't know about Vista) can't handle paths over that length and will return errors.
341,594
341,613
How do I read system information in C++?
I'm trying to get information like OS version, hard disk space, disk space available, and installed RAM on a Linux system in C++. I know I can use system() to run different Linux commands and capture their output (which is what I'm currently doing) but I was wondering if there's a better way? Is there something in the C++ standard library that I can use to get information from the operating system?
If you are calling *nix commands via system(), run man on the command and scroll to the bottom of the man page; it will usually show the related C system calls. Example: man uname:

SEE ALSO
    uname(2), getdomainname(2), gethostname(2)

Explanation of the section numbers:

(1): User UNIX commands
(2): Unix and C system calls
(3): C library routines
(4): Special file names
(5): File formats
(6): Games
(7): Miscellaneous
(8): System admin commands

So if you are using system("uname"), from the man page you can see that there is also a uname C system call (uname(2)). You can now do 'man 2 uname' to get information about how to use the C system call uname.
341,668
341,686
C++: invalid conversion from ‘BaseNode*’ to ‘Match*’
All objects in my program inherit from a Container class. The Container class has a virtual BaseNode* getParent() const; method and a virtual void setParent(BaseNode *p); method. I have a Set class (a set in a tennis match, not a data structure) which has the Match class as its parent (via setParent()). Since Set inherits from Container, and the program creates a tree structure from the top down in which the Set class is a child, Set doesn't need methods to track and maintain information about its parent beyond what Container provides. The error C++: invalid conversion from ‘BaseNode*’ to ‘Match*’ shows up in the method below when I try to compile my program. (Player* getPlayer1() const; only exists in the Match class)

Player* Set::getPlayer1() const{
    return getParent()->getPlayer1();
}

This is my inheritance structure for Match. (Note that TreeNode is a template)

Match -> TreeNode<Set> -> BaseNode -> Container

I don't understand why I'm getting a conversion error. I have tried reading my textbook but it's a rather poor reference, and Google just provided too much irrelevant information.

Edit

Player* Set::getPlayer1() const{
    return dynamic_cast<Match>(getParent())->getPlayer1();
}

causes

error: cannot dynamic_cast ‘#‘obj_type_ref’ not supported by dump_expr#<expression error>((&((const Set*)this)->Set::<anonymous>))’ (of type ‘class BaseNode*’) to type ‘class Match’ (target is not pointer or reference)

Edit 2 I just realized I need dynamic_cast<Match*> which works.
The problem is that getParent() returns a BaseNode*, which could be a pointer to any type of BaseNode - it might point to an unrelated class that also derives from BaseNode. If you're 100% sure that the parent must be of type Match, you should cast the parent to a Match* first, and then you can call getPlayer1() on that:

Player* Set::getPlayer1() const{
    return dynamic_cast<Match*>(getParent())->getPlayer1();
}

If the parent isn't necessarily a Match, then dynamic_cast might return NULL, so be sure to check for that.
341,743
341,774
C++ string that can be NULL
I'm used to passing around string like this in my C++ applications: void foo(const std::string& input) { std::cout << input.size() << std::endl; } void bar() { foo("stackoverflow"); } Now I have a case where I want the string to be NULL: void baz() { foo("stackoverflow"); foo(NULL); // very bad with foo implementation above } I could change foo to: void foo(const std::string* input) { // TODO: support NULL input std::cout << input->size() << std::endl; } But to pass a string literal or copy a char* to that implementation of foo I need to write something like this: void bar() { string input("hi"); // annoying temporary foo(&input); foo(NULL); // will work as long as foo handles NULL properly } I started thinking about inheriting from std::string and adding a null property, but I'm not so sure it's a good idea. Maybe it is better to simply use a const char* string for parameters that can be NULL, but what if I want to save a copy of the string (or NULL) without having to manage its memory myself? (See What are some of the drawbacks to using C-style strings? etc.) Any clever solution around?
If you want the type to be null, then make it a pointer. Pass string pointers around instead of references, since this is precisely what pointers can do, and references can't. References always refer to the same valid object. Pointers can be set to null, or be reseated to point to another object. Thus, if you need the things pointers can do, use pointers. Alternatively, use boost::optional, which allows a more type-safe way to specify "this variable may or may not contain a value". Or, of course, change the semantics so you either use empty strings instead of null, pass a separate bool parameter specifying whether the string is available or not, or refactor so you don't need this in the first place.
341,817
826,027
Is there a replacement for unistd.h for Windows (Visual C)?
I'm porting a relatively simple console program written for Unix to the Windows platform (Visual C++ 8.0). All the source files include "unistd.h", which doesn't exist. Removing it, I get complaints about missing prototypes for 'srandom', 'random', and 'getopt'. I know I can replace the random functions, and I'm pretty sure I can find/hack-up a getopt implementation. But I'm sure others have run into the same challenge. My question is: is there a port of "unistd.h" to Windows? At least one containing those functions which do have a native Windows implementation - I don't need pipes or forking. EDIT: I know I can create my very own "unistd.h" which contains replacements for the things I need - especially in this case, since it is a limited set. But since it seems like a common problem, I was wondering if someone had done the work already for a bigger subset of the functionality. Switching to a different compiler or environment isn't possible at work - I'm stuck with Visual Studio.
Since we can't find a version on the Internet, let's start one here. Most ports to Windows probably only need a subset of the complete Unix file. Here's a starting point. Please add definitions as needed.

#ifndef _UNISTD_H
#define _UNISTD_H 1

/* This is intended as a drop-in replacement for unistd.h on Windows.
 * Please add functionality as needed.
 * https://stackoverflow.com/a/826027/1202830
 */

#include <stdlib.h>
#include <io.h>
#include <getopt.h> /* getopt at: https://gist.github.com/ashelly/7776712 */
#include <process.h> /* for getpid() and the exec..() family */
#include <direct.h> /* for _getcwd() and _chdir() */

#define srandom srand
#define random rand

/* Values for the second argument to access. These may be OR'd together. */
#define R_OK 4 /* Test for read permission. */
#define W_OK 2 /* Test for write permission. */
//#define X_OK 1 /* execute permission - unsupported in windows*/
#define F_OK 0 /* Test for existence. */

#define access _access
#define dup2 _dup2
#define execve _execve
#define ftruncate _chsize
#define unlink _unlink
#define fileno _fileno
#define getcwd _getcwd
#define chdir _chdir
#define isatty _isatty
#define lseek _lseek
/* read, write, and close are NOT being #defined here, because while there are
 * file handle specific versions for Windows, they probably don't work for
 * sockets. You need to look at your app and consider whether to call
 * e.g. closesocket(). */

#ifdef _WIN64
#define ssize_t __int64
#else
#define ssize_t long
#endif

#define STDIN_FILENO 0
#define STDOUT_FILENO 1
#define STDERR_FILENO 2
/* should be in some equivalent to <sys/types.h> */
typedef __int8 int8_t;
typedef __int16 int16_t;
typedef __int32 int32_t;
typedef __int64 int64_t;
typedef unsigned __int8 uint8_t;
typedef unsigned __int16 uint16_t;
typedef unsigned __int32 uint32_t;
typedef unsigned __int64 uint64_t;

#endif /* unistd.h */
342,152
342,192
Why can't variable names start with numbers?
I was working with a new C++ developer a while back when he asked the question: "Why can't variable names start with numbers?" I couldn't come up with an answer except that some numbers can have text in them (123456L, 123456U) and that wouldn't be possible if the compilers were thinking everything with some amount of alpha characters was a variable name. Was that the right answer? Are there any more reasons? string 2BeOrNot2Be = "that is the question"; // Why won't this compile?
Because then a string of digits would be a valid identifier as well as a valid number. int 17 = 497; int 42 = 6 * 9; String 1111 = "Totally text";
342,167
342,260
Keyboard Hook... not getting Lower or Upper case characters
The function below is logging the "0", "z" and the "1" ok... but it's not capturing the "Z" (shift-z)... any help would be appreciated...

__declspec(dllexport) LRESULT CALLBACK HookProc (UINT nCode, WPARAM wParam, LPARAM lParam)
{
    if ((nCode == HC_ACTION) && (wParam == WM_KEYUP))
    {
        // This struct gets info on the typed key
        KBDLLHOOKSTRUCT hookstruct = *((KBDLLHOOKSTRUCT*)lParam);

        // Bytes written counter for WriteFile()
        DWORD Counter;

        wchar_t Logger[1];

        switch (hookstruct.vkCode)
        {
            case 060: Logger[0] = L'0'; break;
            case 061: Logger[0] = L'1'; break;
            case 90:  Logger[0] = L'z'; break;
            case 116: Logger[0] = L'Z'; break;
        }

        // Opening of a logfile. Creating it if it does not exist
        HANDLE hFile = CreateFile(L"C:\\logfile.txt", GENERIC_WRITE,
            FILE_SHARE_READ, NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

        // Put the file pointer at the end
        SetFilePointer(hFile, NULL, NULL, FILE_END);

        // Write the typed key to the logfile
        WriteFile(hFile, &Logger, sizeof(Logger), &Counter, NULL);
        //WriteFile(hFile, &hookstruct.vkCode, sizeof(hookstruct.vkCode), &Counter, NULL);

        // Close the file
        CloseHandle(hFile);
    }
}
The keyboard does not send characters. It sends keys. Whether you're typing z or Z, you're still pressing the same key, and that key has the same VK code both times. You should also get notification when the Shift key is pressed or released. You can use those notifications to translate the keystrokes into characters. The caps-lock state will also be relevant for that. You may also be concerned about dead keys. You can check whether the Shift key is pressed. GetAsyncKeyState will tell you the state of the key right now, and GetKeyState will tell you the state of the key as of the last message removed from the message queue.
342,171
342,372
multiple CComboBox sharing the same data
I have a MFC dialog with 32 CComboBoxes on it that all have the same data in the listbox. Its taking a while to come up, and it looks like part of the delay is the time I need to spend using InsertString() to add all the data to the 32 controls. How can I subclass CComboBox so that the 32 instances share the same data?
Turn off window redrawing when filling the combos, e.g.:

m_wndCombo.SetRedraw(FALSE);
// Fill combo here
...
m_wndCombo.SetRedraw(TRUE);
m_wndCombo.Invalidate();

This might help.
342,772
342,787
Convert LPTSTR to char*
Would anyone happen to know how to convert type LPTSTR to char * in C++?
It depends on whether the build is Unicode. LPTSTR is char* if not Unicode, or wchar_t* if it is. Discussed better here (accepted answer worth reading).
342,839
342,869
Problem with SDL_DisplayFormatAlpha (c++)
As I stated in this question, I am using SDL for a small game I'm developing. Now I am having problems with SDL_DisplayFormatAlpha. I am trying to create a surface with an alpha channel from a PNG image. It was working before, but now that I've done some slight refactoring something got broken. I've narrowed it down to this constructor: Surface::Surface( tfilename file ) { // initialize the surface data member to the image indicated by filename SDL_Surface *tempSurface; tempSurface = IMG_Load( file.c_str() ); if ( !tempSurface ) { surface = NULL; exit(1); } else { surface = SDL_DisplayFormatAlpha( tempSurface ); //surface = tempSurface; } SDL_FreeSurface( tempSurface ); } This compiles just fine, but when I run it I get a Segmentation fault. The error reported by gdb: Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 0xb79c16c0 (LWP 8089)] 0xb7e8b9a3 in SDL_DisplayFormatAlpha () from /usr/lib/libSDL-1.2.so.0 The stack trace is as follows: #0 0xb7e8b9a3 in SDL_DisplayFormatAlpha () from /usr/lib/libSDL-1.2.so.0 #1 0x0804987e in Surface (this=0x804d060, file=@0xbfb20760) at Surface.cpp:16 #2 0x0804a159 in Image (this=0x804d038, x=0, y=0, file=@0xbfb207a0) at Image.cpp:16 #3 0x0804a3de in Object (this=0x804d028, imageFile=@0xbfb207dc) at Object.cpp:4 #4 0x080491cb in Application (this=0xbfb20810) at Application.cpp:8 #5 0x08048e0d in main () at main.cpp:5 If I comment out surface = SDL_DisplayFormatAlpha( tempSurface ); and SDL_FreeSurface( tempSurface ); and uncomment surface = tempSurface; like so: Surface::Surface( tfilename file ) { // initialize the surface data member to the image indicated by filename SDL_Surface *tempSurface; tempSurface = IMG_Load( file.c_str() ); if ( !tempSurface ) { surface = NULL; exit(1); } else { //surface = SDL_DisplayFormatAlpha( tempSurface ); surface = tempSurface; } //SDL_FreeSurface( tempSurface ); } Then it seems to work just fine. Can anyone tell me what's going on? 
Actually, the transparency seems to work too when I comment out SDL_DisplayFormatAlpha. Is that function only meant to be used with images that do not already have an alpha channel?
IMG_Load should handle transparent PNG's automatically, as the end of your post notes. What is the actual exception/error being thrown? Your stack trace doesn't show that.
343,025
343,132
Need of Formula for Accurate Bandwidth for 1 Gigabit NIC Card
I am in need of a formula to accurately calculate bandwidth for a 1 Gigabit NIC card. What I am doing is sending Layer 2 packets at 1 Gbps, but my software is showing 600 Mbps. The whole experiment is back to back: no switch, no router. Here is what I did.

// LinkSpeed = 1Gb
UINT nBandwidth = LinkSpeed/100; //Mbps
nBandwidth = nBandwidth/8; //Bytes/sec
nBandwidth = nBandwidth/FrameLength; //Frames/Sec. Frame Length = 1518
UINT FramesPerBurst = (nBandwidth*Sleeptime)/1000; //Frames/Burst
UINT nBufferSpaceNeededPerFrame = FrameLength-4 + sizeof(dump_bpf_hdr));
UINT nTxBufferSize = FramesPerBurst * nBufferSpaceNeededPerFrame;
unsigned char* pTxBuffer = new unsigned char[m_nTxBufferSize];
In Ethernet, you also have to take into account the interframe gap, which is at minimum 96 bit times (a bit time being the time to send one bit: 1 ns on Gigabit Ethernet). Also, if you get a collision, there will be backoff time, whose number of quanta is chosen randomly between 0 and 2^<nb collisions> - 1.
343,219
344,495
Is it possible to use signal inside a C++ class?
I am doing something like this: #include <signal.h> class myClass { public: void myFunction () { signal(SIGIO,myHandler); } void myHandler (int signum) { /** * Handling code */ } } I am working on Ubuntu, using gcc. But it won't compile. It is complaining with: error: the argument with type void (MyClass::)(int) doesn't agree with void (*) (int) Any clues? Or maybe it is just that I cannot use a signal inside classes? Are signals only allowed in C? The error message is an approximate translation because my compiler is not in English.
The second parameter of signal should be a pointer to a function accepting an int and returning void. What you're passing to signal is a pointer to a member function accepting an int and returning void (its type being void (myClass::*)(int)). I can see three possibilities to overcome this issue: 1 - Your method myHandler can be static: this is great, make it static class myClass { public: void myFunction () { signal(SIGIO, myClass::myHandler); } static void myHandler (int signum) { // handling code } }; 2 - Your method shouldn't be static: if you're planning to use signal with only one instance, you can create a private static object, and write a static method that simply call the method on this object. Something along the lines of class myClass { public: void myFunction () { signal(SIGIO, myClass::static_myHandler); } void myHandler (int signum) { // handling code } static void static_myHandler(int signum) { instance.myHandler(signum); } private: static myClass instance; }; 3 - However, if you're planning on using the signal with multiple instances, things will get more complicated. Perhaps a solution would be to store each instance you want to manipulate in a static vector, and invoking the method on each of these : class myClass { public: void myFunction () // registers a handler { instances.push_back(this); } void myHandler (int signum) { // handling code } static void callHandlers (int signum) // calls the handlers { std::for_each(instances.begin(), instances.end(), std::bind2nd(std::mem_fun(&myClass::myHandler), signum)); } private: static std::vector<myClass *> instances; }; and somewhere, do a single call to signal(SIGIO, myClass::callHandlers); But I think that if you end up using the last solution, you should probably think about changing your handling design :-)!
343,368
343,467
error LNK2005: _DllMain@12 already defined in MSVCRT.lib
I am getting this linker error. mfcs80.lib(dllmodul.obj) : error LNK2005: _DllMain@12 already defined in MSVCRT.lib(dllmain.obj) Please tell me the correct way of eliminating this bug. I read solution on microsoft support site about this bug but it didnt helped much. I am using VS 2005 with Platform SDK
If you read the linker error thoroughly and apply some knowledge, you may get there yourself. The linker links a number of compiled objects and libraries together to get a binary. Each object/library describes which symbols it expects to be present in other objects, and which symbols it defines. If two objects define the same symbol, you get exactly this linker error. In your case, both mfcs80.lib and MSVCRT.lib define the _DllMain@12 symbol. Getting rid of the error: find out which of the two libraries you actually need, then find out how to tell the linker not to use the other one (using e.g. the tip from James Hopkin).
343,437
343,456
Memcached client for Windows in C or C++?
I need a portable C/C++ solution, so I'm looking for a C/C++ client library for Memcached that work on both Windows and Unix. Any suggestions?
There's libmemcached in C. Should be most portable :)
343,488
1,470,091
Signing data with smartcards on Mac in C++
is there any support in Mac OS X for signing data using smartcards? I have looked through the system headers and found only vauge references to smart card support (in SecKeychain.h), which didn't really take me anywhere. If there's no built-in support, which are my options (ie. what free/non-free libraries exist that can help me)?
I'm answering my own question here, for reference. The OpenSC libraries provides everything you need to deal with smartcards, and it is cross-platform (Windows, Linux and Mac), and its license is good for commercial projects.
343,605
343,640
How do you validate an object's internal state?
I'm interested in hearing what technique(s) you're using to validate the internal state of an object during an operation that, from it's own point of view, only can fail because of bad internal state or invariant breach. My primary focus is on C++, since in C# the official and prevalent way is to throw an exception, and in C++ there's not just one single way to do this (ok, not really in C# either, I know that). Note that I'm not talking about function parameter validation, but more like class invariant integrity checks. For instance, let's say we want a Printer object to Queue a print job asynchronously. To the user of Printer, that operation can only succeed, because an asynchronous queue result with arrive at another time. So, there's no relevant error code to convey to the caller. But to the Printer object, this operation can fail if the internal state is bad, i.e., the class invariant is broken, which basically means: a bug. This condition is not necessarily of any interest to the user of the Printer object. Personally, I tend to mix three styles of internal state validation and I can't really decide which one's the best, if any, only which one is absolutely the worst. I'd like to hear your views on these and also that you share any of your own experiences and thoughts on this matter. The first style I use - better fail in a controllable way than corrupt data: void Printer::Queue(const PrintJob& job) { // Validate the state in both release and debug builds. // Never proceed with the queuing in a bad state. if(!IsValidState()) { throw InvalidOperationException(); } // Continue with queuing, parameter checking, etc. // Internal state is guaranteed to be good. } The second style I use - better crash uncontrollable than corrupt data: void Printer::Queue(const PrintJob& job) { // Validate the state in debug builds only. // Break into the debugger in debug builds. // Always proceed with the queuing, also in a bad state. 
DebugAssert(IsValidState()); // Continue with queuing, parameter checking, etc. // Generally, behavior is now undefined, because of bad internal state. // But, specifically, this often means an access violation when // a NULL pointer is dereferenced, or something similar, and that crash will // generate a dump file that can be used to find the error cause during // testing before shipping the product. } The third style I use - better silently and defensively bail out than corrupt data: void Printer::Queue(const PrintJob& job) { // Validate the state in both release and debug builds. // Break into the debugger in debug builds. // Never proceed with the queuing in a bad state. // This object will likely never again succeed in queuing anything. if(!IsValidState()) { DebugBreak(); return; } // Continue with defenestration. // Internal state is guaranteed to be good. } My comments to the styles: I think I prefer the second style, where the failure isn't hidden, provided that an access violation actually causes a crash. If it's not a NULL pointer involved in the invariant, then I tend to lean towards the first style. I really dislike the third style, since it will hide lots of bugs, but I know people that prefers it in production code, because it creates the illusion of a robust software that doesn't crash (features will just stop to function, as in the queuing on the broken Printer object). Do you prefer any of these or do you have other ways of achieving this?
The question is best considered in combination with how you test your software. It's important that hitting a broken invariant during testing is filed as a high severity bug, just as a crash would be. Builds for testing during development can be made to stop dead and output diagnostics. It can be appropriate to add defensive code, rather like your style 3: your DebugBreak would dump diagnostics in test builds, but just be a break point for developers. This makes less likely the situation where a developer is prevented from working by a bug in unrelated code. Sadly, I've often seen it done the other way round, where developers get all the inconvenience, but test builds sail through broken invariants. Lots of strange behaviour bugs get filed, where in fact a single bug is the cause.
343,946
373,970
Fixed width font - Symbian C++ CEikLabel
I want to change the font I am using in a CEikLabel on S60 device I believe I can do the following const CFont* aPlainFont = LatinPlain12(); aLabel->SetFont(aPlainFont); where LatinPlain12 is one from this list.. Albi12 Alp13 Alpi13 Albi13 alp17 Alb17b albi17b alpi17 Aco13 Aco21 Acalc21 LatinBold12 LatinBold13 LatinBold17 LatinBold19 LatinPlain12 Acb14 Acb30 Acp5 However, who can help me find out which ones from this list are fixed width.. Thanks :)
Programatically, you can determine if a font is proportional using: const CFont* myFont; // Initialize your font // .... TBool isProportional = (myFont->FontSpecInTwips().iTypeface.Attributes() & TTypeFace::EProportional); BTW you might be better off enumerating the fonts on the device and/or using the logical font API than relying on the static font accessor functions.
344,012
344,029
How do I replace this preprocessor macro with a #include?
UPDATE: Obviously, you'd want to do this using templates or a base class rather than macros. Unfortunately for various reasons I can't use templates, or a base class. At the moment I am using a macro to define a bunch of fields and methods on various classes, like this: class Example { // Use FIELDS_AND_METHODS macro to define some methods and fields FIELDS_AND_METHODS(Example) }; FIELDS_AND_METHODS is a multi-line macro that uses stringizing and token-pasting operators. I would like to replace this with the following kind of thing class Example { // Include FieldsNMethods.h, with TYPE_NAME preprocessor symbol // defined, to achieve the same result as the macro. #define TYPE_NAME Example #include "FieldsNMethods.h" }; Here I #define the name of the class (previously the parameter to the macro), and the FieldsNMethods.h file contains the content of the original macro. However, because I'm #including I can step into the code at runtime, when debugging. However I am having trouble 'stringizing' and 'token pasting' the TYPE_NAME preprocessor symbol in the FieldsNMethods.h file. For example, I want to define the destructor of the class in FieldsNMethods.h, so this would need to use the value of TYPE_NAME as below: ~TYPE_NAME() { //... } But with TYPE_NAME replaced by its value. Is what I'm attempting possible? I can't use the stringizing and token-pasting operators directly, because I'm not in a macro definition.
This cries out for a template. class Example<class T> { ...class definition... }; The direct answer to the last part of your question - "given that I'm not in a macro definition any more, how do I get pasting and stringizing operators to work" - is "You can't". Those operators only work in macros, so you'd have to write macro invocations in order to get them to work. Added: @mackenir said "templates are not an option". Why are templates not an option? The code is simulating templates the old-fashioned pre-standard, pre-template way, and does so causing much pain and grief. Using templates would avoid that pain -- though there'd be a conversion operation. @mackenir asked "is there a way to make things work with macros?" Yes, you can, but you should use templates - they are more reliable and maintainable. To make it work with macros, then you'd have to have the function names in the code in the included header be macro invocations. You need to go through a level of indirection to get this to work correctly: #define PASTE_NAME(x, y) PASTE_TOKENS(x, y) #define PASTE_TOKENS(x, y) x ## y #define TYPE_NAME Example int PASTE_NAME(TYPE_NAME, _function_suffix)(void) { ... } This level of indirection is an often necessary idiom for both tokenizing and stringizing operators. Additional comments from @mackenir indicate continued problems. Let's make it concrete. At the moment I am using a macro to define a bunch of fields and methods on various classes, like this: class Example { // Use FIELDS_AND_METHODS macro to define some methods and fields FIELDS_AND_METHODS(Example) }; FIELDS_AND_METHODS is a multi-line macro that uses stringizing and token-pasting operators. I would like to replace this with the following kind of thing class Example { // Include FieldsNMethods.h, with TYPE_NAME preprocessor symbol // defined, to achieve the same result as the macro. #define TYPE_NAME Example #include "FieldsNMethods.h" }; OK. 
To make this concrete, we need a FIELDS_AND_METHODS(type) macro that is multi-line and uses token-pasting (I'm not going to deal with stringizing - the same basic mechanisms will apply, though). #define FIELDS_AND_METHODS(type) \ type *next; \ type() : next(0) { } \ type * type ## _next() { return next; } With luck, this declares a member of the type 'pointer to argument type', a constructor for that type, and a method (Example_next in this case) that returns that pointer. So, this might be the macro - and we need to replace it such that the '#include' does the equivalent job. The content of fieldsNmethods.h becomes: #ifndef TYPE_NAME #error TYPE_NAME not defined #endif #define FNM_PASTE_NAME(x, y) FNM_PASTE_TOKENS(x, y) #define FNM_PASTE_TOKENS(x, y) x ## y TYPE_NAME *next; TYPE_NAME() : next(0) { } TYPE_NAME * FNM_PASTE_NAME(TYPE_NAME, _next)() { return next; } #undef FNM_PASTE_NAME #undef FNM_PASTE_TOKENS Note that the header would not contain multiple-inclusion guards; its raison d'etre is to allow it to be included multiple times. It also undefines its helper macros to permit multiple inclusion (well, since the redefinitions would be identical, they're 'benign' and wouldn't cause an error), and I prefixed them with FNM_ as a primitive namespace control on the macros. This generates the code I'd expect from the C pre-processor. and G++ doesn't witter but produces an empty object file (because the types declared are not used in my example code). Note that this does not require any changes to the calling code except the one outlined in the question. I think the question should be improved using the SPOT "Single Point of Truth" principle (or DRY "Don't Repeat Yourself"): #define TYPE_NAME Example class TYPE_NAME { // Include FieldsNMethods.h, with TYPE_NAME preprocessor symbol // defined, to achieve the same result as the macro. #include "FieldsNMethods.h" };
344,031
344,087
shared_ptr: what's it used for
I make a lot of use of boost::scoped_ptr in my code and it is great but I'm currently working with software that uses shared_ptr all over the place and I'm wondering if I'm missing something. AFAIK a shared_ptr is only useful if different threads are going to be accessing the same data and you don't know what order the threads are going to finish (with the shared_ptr ensuring that the object exists until the last thread has finished with it). Are there other use cases?
Threads are irrelevant here. What's relevant is whether it's easy to specify a point at which the object is no longer of use. Suppose several different objects want to use the same object. It might be a pack of data, or for input/output, or some geometric object, or whatever. You want the shared object to be deleted after all of the using objects are deleted, and not a clock cycle before. Rather than figure out which owning object is going to have the longest lifespan (and that can change if you change the program, or perhaps through user interaction), you can use a shared_ptr to force this behavior. It doesn't matter whether the using objects are in the same or different threads. Objects can have unpredictable lifetimes even if they're all in the same thread.
344,101
344,726
Multithreading using the boost library
Wish to simultaneously call a function multiple times. I wish to use threads to call a function which will utilize the machines capability to the fullest. This is a 8 core machine, and my requirement is to use the machine cpu from 10% to 100% or more. My requirement is to use the boost class. Is there any way I can accomplish this using the boost thread or threadpool library? Or some other way to do it? Also, if I have to call multiple functions with different parameters each time (with separate threads), what is the best way to do this? [using boost or not using boost] and how? #include <iostream> #include <fstream> #include <string.h> #include <time.h> #include <boost/thread/mutex.hpp> #include <boost/bind.hpp> using namespace std; using boost::mutex; using boost::thread; int threadedAPI1( ); int threadedAPI2( ); int threadedAPI3( ); int threadedAPI4( ); int threadedAPI1( ) { cout << "Thread0" << endl; } int threadedAPI2( ) { cout << "Thread1" << endl; } int threadedAPI3( ) { cout << "Thread2" << endl; } int threadedAPI4( ) { cout << "Thread3" << endl; } int main(int argc, char* argv[]) { boost::threadpool::thread_pool<> threads(4); // start a new thread that calls the "threadLockedAPI" function threads.schedule(boost::bind(&threadedAPI1,0)); threads.schedule(boost::bind(&threadedAPI2,1)); threads.schedule(boost::bind(&threadedAPI3,2)); threads.schedule(boost::bind(&threadedAPI4,3)); // wait for the thread to finish threads.wait(); return 0; } The above is not working and I am not sure why? :-(
I suggest that you read up on the documentation for the functions you use. From your comment in James Hopkin's answer, it seems like you don't know what boost::bind does, but simply copy-pasted the code. boost::bind takes a function (call it f), and optionally a number of parameters, and returns a function which, when called, calls f with the specified parameters. That is, boost::bind(threadedAPI1, 0)() (creating a function which takes no arguments and calls threadedAPI1() with the argument 0, and then calling that) is equivalent to threadedAPI1(0). Since your threadedAPI functions don't actually take any parameters, you can't pass any arguments to them. That is just fundamental C++. You can't call threadedAPI1(0), but only threadedAPI1(), and yet when you call the function, you try (via boost::bind) to pass the integer 0 as an argument. So the simple answer to your question is to simply define threadedAPI1 as follows: int threadedAPI1(int i); However, one way to avoid the boost::bind calls is to call a functor instead of a free function when launching the thread. Declare a class something like this: struct threadedAPI { threadedAPI(int i) : i(i) {} // A constructor taking the arguments you wish to pass to the thread, and saves them in the class instance. void operator()() { // The () operator is the function that is actually called when the thread starts, and because it is just a regular class member function, it can see the 'i' variable initialized by the constructor cout << "Thread" << i << endl; // No need to create 4 identical functions. We can just reuse this one, and pass a different `i` each time we call it. } private: int i; }; Finally, depending on what you need, plain threads may be better suited than a threadpool. In general, a thread pool only runs a limited number of threads, so it may queue up some tasks until one of its threads finish executing. It is mainly intended for cases where you have many short-lived tasks. 
If you have a fixed number of longer-duration tasks, creating a dedicated thread for each may be the way to go.
344,317
344,525
Where does gcc look for C and C++ header files?
On a Unix system, where does gcc look for header files? I spent a little time this morning looking for some system header files, so I thought this would be good information to have here.
`gcc -print-prog-name=cc1plus` -v This command asks gcc which C++ preprocessor it is using, and then asks that preprocessor where it looks for includes. You will get a reliable answer for your specific setup. Likewise, for the C preprocessor: `gcc -print-prog-name=cpp` -v
344,715
344,738
Getting control of the main thread in windows c++ app
I'm writing some code that I'd like to be able to work with any window, such as a window created through the Windows API, MFC, wxWidgets, etc. The problem is that for some things I need to use the same thread that created the window, which in many cases is just sat in a message loop. My first thought was to post a callback message to the window, which would then call a function in my code when it receives the message, using one of the params and a function pointer of some sort. However there doesn't seem to be a standard windows message to do this, and I can't create my own message since I don't control the windows code, so I can't add the needed code to the message handler to implement the callback... Is there some other way to get the thread that created the window to enter my function? EDIT: John Z suggested that I hook the window's messages. If I do that, is there some way to get "ids" for custom messages without the risk of conflicting with any custom messages the window already has? e.g. I might do WM_CALLBACK = WM_APP+1 But if the window I'm hooking has already done something with WM_APP+1 I'm going to run into problems. EDIT2: just found RegisterWindowMessage :)
If you are in the same process as the window you can hook its messages by subclassing it. Check out http://msdn.microsoft.com/en-us/library/ms633570(VS.85).aspx The key API is SetWindowLong. // Subclass the edit control. wpOrigEditProc = (WNDPROC) SetWindowLong(hwndEdit, GWL_WNDPROC, (LONG)EditSubclassProc); // Remove the subclass from the edit control. SetWindowLong(hwndEdit, GWL_WNDPROC, (LONG)wpOrigEditProc);
344,829
344,841
What is __kernel_vsyscall?
I got a core that looks very different from the ones I usually get - most of the threads are in __kernel_vsyscall() : 9 process 11334 0xffffe410 in __kernel_vsyscall () 8 process 11453 0xffffe410 in __kernel_vsyscall () 7 process 11454 0xffffe410 in __kernel_vsyscall () 6 process 11455 0xffffe410 in __kernel_vsyscall () 5 process 11474 0xffffe410 in __kernel_vsyscall () 4 process 11475 0xffffe410 in __kernel_vsyscall () 3 process 11476 0xffffe410 in __kernel_vsyscall () 2 process 11477 0xffffe410 in __kernel_vsyscall () 1 process 11323 0x08220782 in MyClass::myfunc () What does that mean? EDIT: In particular, I usually see a lot of threads in "pthread_cond_wait" and "___newselect_nocancel" and now those are on the second frame in each thread - why is this core different?
__kernel_vsyscall is the method used by linux-gate.so (a part of the Linux kernel) to make a system call using the fastest available method, preferably the sysenter instruction. The thing is properly explained by Johan Petersson.
345,003
345,474
Is it safe to use STL (TR1) shared_ptr's between modules (exes and dlls)
I know that new-ing something in one module and delete-ing it in another can often cause problems in VC++. Problems with different runtimes. Mixing modules with statically linked runtimes and/or dynamically linked versioning mismatches can both screw stuff up if I recall correctly. However, is it safe to use VC++ 2008's std::tr1::shared_ptr across modules? Since there is only one version of the runtime that even knows what a shared_ptr is, static linking is my only danger (for now...). I thought I've read that boost's version of a shared_ptr was safe to use like this, but I'm using Redmond's version... I'm trying to avoid having a special call to free objects in the allocating module (or something like a "delete this" in the class itself). If this all seems a little hacky, I'm using this for unit testing. If you've ever tried to unit test existing C++ code you can understand how creative you need to be at times. My memory is allocated by an EXE, but ultimately will be freed in a DLL (if the reference counting works the way I think it does).
Freeing the memory is safe, so long as it all came from the same memory management context. You've identified the most common issue (different C++ runtimes); having separate heaps is another less-common issue you can run into. Another issue which you didn't mention, but which can be exacerbated by shared pointers, is when an object's code exists in the DLL and is created by the DLL, but another object outside the DLL ends up with a reference to it (via shared pointer). If that object is destroyed after the DLL is unloaded (for example, if it's a module-level static, or if the DLL is explicitly unloaded by FreeLibrary()), the shared object's destructor will crash. This can bite you if you attempt to write DLL-based, loosely-coupled plugins. It's also the reason that COM lets DLLs decide when they can be unloaded, rather than letting COM servers demand-unload them.
345,045
345,213
accessing bitmap resources in a C++ DLL from VB6
I have a C++ DLL including bitmap resources created by Visual Studio. Though I can load the DLL in VB6 using LoadLibrary, I cannot load the image resources either by using LoadImage or by using LoadBitmap. When I try to get the error using GetLastError(), it does not return any errors. I have tried using LoadImage and LoadBitmap in another C++ program with the same DLL and they work without any problems. Is there any other way of accessing the resource bitmaps in C++ DLLs using VB6?
Since you are using the numeric ID of the bitmap as a string, you have to add a "#" in front of it: DLLHandle = LoadLibrary("Mydll.dll") myimage = LoadBitmap(DLLHandle, "#101") ' note the "#" In C++ you could also use the MAKEINTRESOURCE macro, which is simply a cast to LPCTSTR: imagehandle = LoadBitmap(DLLHandle, MAKEINTRESOURCE(101));
345,305
345,339
Could anyone suggest a good packet sniffer class for c++?
Could anyone suggest a good packet sniffer class for C++? I'm looking for an easily insertable class I can use in my C++ program, nothing complicated.
You will never be able to intercept network traffic just by inserting a class into your project. Packet capture functionality requires kernel mode support, hence you will at the very least need to have your application require or install libpcap/WinPcap, as Will Dean pointed out. Most modern Unix-like distributions include libpcap out of the box, in which case you could take a look at this very simple example: http://www.tcpdump.org/pcap.htm If you're using Windows, you're more or less on your own, although WinPcap programming is extremely similar to libpcap programming (unsurprisingly, since it's a libpcap port to Win32.) The SDK can be found here: http://www.winpcap.org/devel.htm At any rate, no matter the operating system, you will need root / Administrator access to actually perform a capture. Just using the library to replay or analyze precaptured data doesn't require any special privilege, of course.
345,382
345,446
Capturing MSN Chat via Memory
Is it possible, or does anyone know the best way to capture MSN chats? I was thinking about attaching to the MSN process, and messing about searching for static addresses for conversations, to capture them. (This is all on the windows platform using c++)
It would probably be easiest to sniff packets on the ports known to be used by MSN. That has the added benefit of working with clients other than the Microsoft one (such as Pidgin).
345,610
346,224
Remove all but the last 500,000 bytes from a file with the STL
Our logging class, when initialised, truncates the log file to 500,000 bytes. From then on, log statements are appended to the file. We do this to keep disk usage low; we're a commodity end-user product. Obviously keeping the first 500,000 bytes is not useful, so we keep the last 500,000 bytes. Our solution has some serious performance problems. What is an efficient way to do this?
"I would probably create a new file, seek in the old file, do a buffered read/write from old file to new file, rename the new file over the old one." I think you'd be better off simply: #include <fstream> std::ifstream ifs("logfile"); //One call to start it all. . . ifs.seekg(-512000, std::ios_base::end); // One call to find it. . . char tmpBuffer[512000]; ifs.read(tmpBuffer, 512000); //One call to read it all. . . ifs.close(); std::ofstream ofs("logfile", std::ios::trunc); ofs.write(tmpBuffer, 512000); //And to the FS bind it. This avoids the file rename stuff by simply copying the last 512K to a buffer, opening your logfile in truncate mode (clears the contents of the logfile), and spitting that same 512K back into the beginning of the file. Note that the above code hasn't been tested, but I think the idea should be sound. You could load the 512K into a buffer in memory, close the input stream, then open the output stream; in this way, you wouldn't need two files since you'd input, close, open, output the 512K, then go. You avoid the rename / file relocation magic this way. If you don't have an aversion to mixing C with C++ to some extent, you could also perhaps: (Note: pseudocode; I don't remember the mmap call off the top of my head) int myfd = open("mylog", O_RDONLY); // Grab a file descriptor (char *) myptr = mmap(mylog, myfd, filesize - 512000) // mmap the last 512K std::string mystr(myptr, 512000) // pull 512K from our mmap'd buffer and load it directly into the std::string munmap(mylog, 512000); //Unmap the file close(myfd); // Close the file descriptor Depending on many things, mmap could be faster than seeking. Googling 'fseek vs mmap' yields some interesting reading about it, if you're curious. HTH
345,930
346,671
Efficient Methods for a Life Simulation
Having read up on quite a few articles on Artificial Life (A subject I find very interesting) along with several questions right here on SO, I've begun to toy with the idea of designing a (Very, very, very) simple simulator. No graphics required, even. If I've overlooked a question, please feel free to point it out to me. Like I said, this will hardly be a Sims level simulation. I believe it will barely reach "acceptable freeware" level, it is simply a learning exercise and something to keep my skills up during a break. The basic premise is that a generic person is created. No name, height or anything like that (Like I said, simple), the only real thing it will receive is a list of "associations" and generic "use", "pick up" and "look" abilities. My first question is in regards to the associations. What does SO recommend as an efficient way to handle such things? I was thinking a multimap, with the relatively easy set up of the key being what it wants (Food, eat, rest, et cetera) and the other bit (Sorry, my mind has lapsed) being what it associates with that need. For example, say we have a fridge. The fridge contains food (Just a generic base object). Initially the person doesn't associate fridge with food, but it does associate food with hunger. So when its hunger grows it begins to arbitrarily look for food. If no food is within reach it "uses" objects to find food. Since it has no known associations with food it uses things willy-nilly (Probably looking for the nearest object and expanding out). Once it uses/opens the fridge it sees food, making the connection (Read: inserting the pair "food, fridge") that the fridge contains food. Now, I realise this will be far more complex than it appears, and I'm prepared to hammer it out. The question is, would a multimap be suitable for a (Possibly) exponentially expanding list of associations? If not, what would be? The second question I have is probably far easier. 
Simply put, would a generic object/item interface be suitable for most any item? In other words, would a generic "use" interface work for what I intend? I don't think I'm explaining this well. Anyway, any comments are appreciated.
If you were doing this as a hard-core development project, I'd suggest using the equivalent of Java reflection (substitute the language of your choice there). If you want to do a toy project as a starter effort, I'd suggest at least rolling your own simple version of reflection, per the following rationale. Each artifact in your environment offers certain capabilities. A simple model of that fact is to ask what "verbs" are applicable to each object your virtual character encounters (including possible dependence on the current state of that object). For instance, your character can "open" a refrigerator, a box of cereal, or a book, provided that each of them is in a "closed" state. Once a book is opened, your character can read it or close it. Once a refrigerator is opened, your character can "look-in" it to get a list of visible contents, can remove an object from it, put an object in it, etc. The point is that a typical situation might involve your character looking around to see what is visible, querying an object to determine its current state or what can be done with it (i.e. "what-state" and "what-can-i-do" are general verbs applicable to all objects), and then use knowledge about its current state, the state of the object, and the verb list for that object to try doing various things. By implementing a set of positive and negative feedback, over time your character can "learn" under what circumstances it should or should not engage in different behaviors. (You could obviously make this simulation interactive by having it ask the user to participate in providing feedback.) The above is just a sketch, but perhaps it can give you some interesting ideas to play with. Have fun! ;-)
345,936
345,961
Will .NET take over C/C++ any time?
This is a subjective question. I worked in Visual Basic 6.0 before coming into the .NET space. One thing I see is that there are a lot of things for which there is a need to deal with the Win32 API. As a Visual Basic 6.0 developer, there were a lot of limitations. .NET fixes some of the old problems; however, the need to rely on Win32 has not been taken care of. Will there ever be a time when there is no need to rely on Win32? (I guess only when .NET has support at OS level.) I understand that the domains of .NET (writing LOB applications/websites) and C/C++ are different so far. Can .NET or any other infrastructure really make C/C++ less significant? Am I expecting too much? EDIT: Doesn't it look like building a wrapper over another wrapper (and bringing in a new set of complexities along with it)?
It can't go away any time soon, but in Windows, at least, new projects are going less and less with C/C++. You can even see examples of Microsoft starting to eat their own dog food. A great example of this is the C# compiler. Currently, it is written in C/C++. The C# team is currently working on re-writing it purely in managed code. A big reason for this has to do with CAS policies in .NET. Invoking unmanaged code requires FullTrust, so compiling code also requires FullTrust. As they move over to managed code, this limitation goes away. As more and more developers go to .NET as their development platform, we are seeing Microsoft follow suit. Another great example is Visual Studio 2010. They are re-writing large portions of the IDE using WPF and MEF so that it is easily extensible. Next, look at the Silverlight runtime. It doesn't rely (specifically) on Win32 at all. It runs on a Mac just as well as it runs on Windows. As Silverlight becomes more capable, we may find many of the .NET calls that currently rely on Win32 no longer do so. I guess my point is that we ARE starting to see some changes in the Windows world at least. Of course, as soon as you need to be more cross-platform, C/C++ becomes more appealing again...
346,012
346,036
What is the usefulness of project1st<Arg1, Arg2> in the STL?
I was browsing the SGI STL documentation and ran into project1st<Arg1, Arg2>. I understand its definition, but I am having a hard time imagining a practical usage. Have you ever used project1st or can you imagine a scenario?
My guess is that if you were using the strategy pattern and had a situation where you needed to pass an identity object, this would be a good choice. For example, there might be a case where an algorithm takes several such objects, and perhaps it is possible that you want one of them to do nothing under some situation.
346,024
346,082
If CHttpConnection::OpenRequest returns NULL how do I find out why
C++, MFC: if CHttpConnection::OpenRequest returns NULL, what can I use to get the internet error? The MFC article doesn't say what a bad response looks like. It just says it returns a handle to a CHttpFile.
Did you check the error code returned by GetLastError()? Get the error code and perform an error lookup (Tools->Error Lookup) to get the description for that code. Normally you will get the exact reason for the failure this way.
346,058
346,098
C++ class header files organization
What are the C++ coding and file organization guidelines you suggest for people who have to deal with lots of interdependent classes spread over several source and header files? I have this situation in my project and solving class definition related errors crossing over several header files has become quite a headache.
Some general guidelines: Pair up your interfaces with implementations. If you have foo.cxx, everything defined in there had better be declared in foo.h. Ensure that every header file #includes all other headers or forward-declarations necessary for independent compilation. Resist the temptation to create an "everything" header. They're always trouble down the road. Put a set of related (and interdependent) functionality into a single file. Java and other environments encourage one-class-per-file. With C++, you often want one set of classes per file. It depends on the structure of your code. Prefer forward declaration over #includes whenever possible. This allows you to break the cyclic header dependencies. Essentially, for cyclical dependencies across separate files, you want a file-dependency graph that looks something like this: A.cxx requires A.h and B.h B.cxx requires A.h and B.h A.h requires B.h B.h is independent (and forward-declares classes defined in A.h) If your code is intended to be a library consumed by other developers, there are some additional steps that are important to take: If necessary, use the concept of "private headers". That is, header files that are required by several source files, but never required by the public interface. This could be a file with common inline functions, macros, or internal constants. Separate your public interface from your private implementation at the filesystem level. I tend to use include/ and src/ subdirectories in my C or C++ projects, where include/ has all of my public headers, and src/ has all of my sources and private headers. I'd recommend finding a copy of John Lakos' book Large-Scale C++ Software Design. It's a pretty hefty book, but if you just skim through some of his discussions on physical architecture, you'll learn a lot.
346,613
346,616
How do you exit from a void function in C++?
How can you prematurely exit from a function without returning a value if it is a void function? I have a void method that needs to not execute its code if a certain condition is true. I really don't want to have to change the method to actually return a value.
Use a return statement! return; or if (condition) return; You don't need to (and can't) specify any values, if your method returns void.
346,622
346,764
Opinions on type-punning in C++?
I'm curious about conventions for type-punning pointers/arrays in C++. Here's the use case I have at the moment: Compute a simple 32-bit checksum over a binary blob of data by treating it as an array of 32-bit integers (we know its total length is a multiple of 4), and then summing up all values and ignoring overflow. I would expect such a function to look like this: uint32_t compute_checksum(const char *data, size_t size) { const uint32_t *udata = /* ??? */; uint32_t checksum = 0; for (size_t i = 0; i != size / 4; ++i) checksum += udata[i]; return checksum; } Now the question I have is, what do you consider the "best" way to convert data to udata? C-style cast? udata = (const uint32_t *)data C++ cast that assumes all pointers are convertible? udata = reinterpret_cast<const uint32_t *>(data) C++ cast between arbitrary pointer types using intermediate void*? udata = static_cast<const uint32_t *>(static_cast<const void *>(data)) Cast through a union? union { const uint32_t *udata; const char *cdata; }; cdata = data; // now use udata I fully realize that this will not be a 100% portable solution, but I am only expecting to use it on a small set of platforms where I know it works (namely unaligned memory accesses and compiler assumptions on pointer aliasing). What would you recommend?
As far as the C++ standard is concerned, litb's answer is completely correct and the most portable. Casting const char *data to a const uint32_t *, whether it be via a C-style cast, static_cast, or reinterpret_cast, breaks the strict aliasing rules (see Understanding Strict Aliasing). If you compile with full optimization, there's a good chance that the code will not do the right thing. Casting through a union (such as litb's my_reint) is probably the best solution, although it does technically violate the rule that if you write to a union through one member and read it through another, it results in undefined behavior. However, practically all compilers support this, and it gives the expected result. If you absolutely desire to conform to the standard 100%, go with the bit-shifting method. Otherwise, I'd recommend going with casting through a union, which is likely to give you better performance.
346,650
347,100
Is there a "nice" way to deal with reassembling multicasts from multiple sources?
I'm currently reworking our existing proprietary socket wrapper code to use Boost.Asio so that it can do some of the heavy lifting for us. Perhaps the most complex area of our existing code is the multicast handling code. The code allows our middle-tier servers (of which there can be many in one system) to send multicasts to client boxes, which use these to present updates to the users of the system. The reason the code is complex and error-prone is that it uses a number of raw buffers to reassemble the multicast streams according to where they have come from. It appears that even with Boost.Asio I'm going to have to deal with this same issue, so before I get stuck in I thought it would be worth asking how other people have dealt with this situation. It seems like a very common use-case. Is there anything out there that can help me do this job without the kind of code I have now? Or is there an established C++ template (Boost or otherwise) that can do this kind of work? Obviously I could make things easier on myself and use STL containers to buffer the packets instead of raw arrays, but this code needs to be really high performance. On large installs there are a huge number of packets flying around and it needs to respond as near to real time as possible. Thanks in advance for any thoughts on this matter. Jamie
It doesn't sound like you've given enough information for detailed answers, but there are a few general pointers to consider for realtime handling of multicast data. If you're using raw UDP multicast, you're probably doing some sort of protocol sequencing in userspace in order to deal with lost or duplicated packets. Whatever optimizations you want to do, resist the temptation to break the layering between your application and your protocol layer. std::vector, for most purposes, is identical to a raw dynamically-allocated character buffer. Don't shy away from using it just because it's an abstraction layer. There are two cases where you should avoid it, however: If you can get away with statically-allocated buffers If you need to transfer ownership of the buffer downstream (though if you design carefully, swap() may be sufficient) Preallocation is your friend. If you can have a set of buffers available for usage when data comes in, you can remove most dynamic allocation from the fast path of execution. Minimize memory copies. If you can process data in a single call stack, you have a chance to avoid copies. If you have to pass data off to a different thread, you may be forced to copy data. If your application can handle chunked buffers (rather than aggregating all data into a single buffer), look into writev and readv. I don't believe any canned solution will solve your problems. Boost ASIO, libevent, etc. will all handle socket abstractions for you, but what you do with your data is still your responsibility, once it's been delivered.
346,697
346,713
Can the "using" declaration be used with templates?
Is it possible to use the "using" declaration with template base classes? I have read it isn't here but is that because of a technical reason or is it against the C++ standard, and does it apply to gcc or other compilers? If it is not possible, why not? Example code (from the link above): struct A { template<class T> void f(T); }; struct B : A { using A::f<int>; };
What you linked to is a using directive. A using declaration can be used fine with templated base classes (I haven't looked it up in the standard, but just tested it with a compiler): template<typename T> struct c1 { void foo() { std::cout << "empty" << std::endl; } }; template<typename T> struct c2 : c1<T> { using c1<T>::foo; void foo(int) { std::cout << "int" << std::endl; } }; int main() { c2<void> c; c.foo(); c.foo(10); } The compiler correctly finds the parameter-less foo function because of our using-declaration re-declaring it into the scope of c2, and outputs the expected result. Edit: the question was updated; here is the updated answer: The article is right that you are not allowed to use a template-id (template name and arguments). But you can put a template name: struct c1 { template<int> void foo() { std::cout << "empty" << std::endl; } }; struct c2 : c1 { using c1::foo; // using c1::foo<10> is not valid void foo(int) { std::cout << "int" << std::endl; } }; int main() { c2 c; c.foo<10>(); c.foo(10); }
346,814
346,890
how to check performance of a c++ api
My web server has a lot of dependencies for sending back data when it gets a request. I am testing one of these dependency applications within the web server. The application is decoupled from the main web server, and only queries go to it in the form of exposed APIs. My question is: if I wish to check these APIs in a multithreaded environment (C++ functions on a machine with two quad-core processors), what is the best way to go about doing it? Do I call each API in a separate thread or process? If so, how do I implement such code? From what I can figure out, I would be duplicating the functioning of the web server, but I can find no other better way to figure out the performance improvements given by that component alone.
It depends on whether your app deals with data that's shared if it is run in parallel processes, because that'll most likely determine where the speed bottleneck awaits. E.g., if the app accesses a database or disk files, you'll probably have to simulate multiple threads/processes querying the app in order to see how they get along with each other, i.e. whether they have to wait for each other while accessing the shared resource. But if the app only does some internal calculation, all on its own, then it may scale well, as long as all its data fits into memory (i.e. no virtual memory access, e.g. disk access, is necessary). Then you can test the performance of just one instance and focus on optimizing its speed. It also might help to state the OS you're planning to use. Mac OS X offers tools for performance testing and optimization that Windows and Linux may not, and vice versa.
346,858
346,867
How do you search a std::string for a substring in C++?
I'm trying to parse a simple string in C++. I know the string contains some text with a colon, followed immediately by a space, then a number. I'd like to extract just the number part of the string. I can't just tokenize on the space (using sstream and <<) because the text in front of the colon may or may not have spaces in it. Some example strings might be: Total disk space: 9852465 Free disk space: 6243863 Sectors: 4095 I'd like to use the standard library, but if you have another solution you can post that too, since others with the same question might like to see different solutions.
std::string strInput = "Total disk space: 9852465"; std::string strNumber = "0"; size_t iIndex = strInput.rfind(": "); if(iIndex != std::string::npos && iIndex + 2 < strInput.length()) { strNumber = strInput.substr(iIndex + 2); }
347,031
347,041
Accessing a bidimensional(or tridimensional) array through a pointer
When you have an array like this: int foo[3][2][2]; and you make: int *bar = &foo[0][0][0]; Is this the way it works? *bar == foo[0][0][0]; *(bar+1) == foo[0][0][1]; *(bar+2) == foo[0][1][0]; *(bar+3) == foo[0][1][1]; *(bar+4) == foo[1][0][0]; I'm not sure and have a bit of code dependent on if that works.
Yes, that's called row major ordering, and is what C is using.
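A small sketch can make the claim concrete (names are illustrative; strictly speaking, walking one pointer across the inner-array boundaries is a grey area in the standard, but it matches how every mainstream compiler lays the array out):

```cpp
#include <cassert>

// For int foo[3][2][2], element foo[i][j][k] lives at flat offset
// i*(2*2) + j*2 + k from the first element -- row-major order.
bool row_major_matches() {
    int foo[3][2][2];
    int v = 0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                foo[i][j][k] = v++;  // fill with 0..11

    int *bar = &foo[0][0][0];
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                if (*(bar + i * 4 + j * 2 + k) != foo[i][j][k])
                    return false;
    return true;
}
```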
347,096
347,107
How can I get a specialized template to use the unspecialized version of a member function?
Consider the following code: template <int dim> struct vec { vec normalize(); }; template <> struct vec<3> { vec cross_product(const vec& second); vec normalize(); }; template <int dim> vec<dim> vec<dim>::normalize() { // code to normalize vector here return *this; } int main() { vec<3> direction; direction.normalize(); } Compiling this code produces the following error: 1>main.obj : error LNK2019: unresolved external symbol "public: struct vec<3> __thiscall vec<3>::normalize(void)" (?normalize@?$vec@$02@@QAE?AU1@XZ) referenced in function _main
You can't :) What you want is to specialize the member functions instead: template <int dim> struct vec { // leave the function undefined for everything except dim==3 vec cross_product(const vec& second); vec normalize(); }; template<> vec<3> vec<3>::cross_product(const vec& second) { // ... } template <int dim> vec<dim> vec<dim>::normalize() { // code to normalize vector here return *this; } Another, slightly more complicated solution is to use boost::enable_if: template <int dim> struct vec { // function can't be called for dim != 3. Error at compile-time template<int dim1> typename boost::enable_if_c< dim == dim1 && dim1 == 3, vec<dim1> >::type cross_product(const vec<dim1>& second) { // ... } vec normalize(); // delegate to the template version void without_params() { // delegate this->without_params<dim>(); } private: // function can't be called for dim != 3. Error at compile-time template<int dim1> typename boost::enable_if_c< dim == dim1 && dim1 == 3 >::type without_params() { // ... } }; template <int dim> vec<dim> vec<dim>::normalize() { // code to normalize vector here return *this; } That will cause a compile-time error if cross_product is called for any dim != 3. Note that the 'trick' only works for functions with parameters, since only then can the template parameter be auto-deduced. For cases without parameters, I have provided a without_params function above :).
347,132
347,140
Append an int to char*
How would you append an integer to a char* in C++?
First convert the int to a char* using sprintf(): char integer_string[32]; int integer = 1234; sprintf(integer_string, "%d", integer); Then to append it to your other char*, use strcat(): char other_string[64] = "Integer: "; // make sure you allocate enough space to append the other string strcat(other_string, integer_string); // other_string now contains "Integer: 1234"
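If buffer overruns are a concern, snprintf() (C99, and <cstdio> since C++11) is a bounds-checked alternative; this helper is just an illustrative sketch, not part of the answer above:

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>

// Appends the decimal form of 'value' to 'dest' without ever writing
// past dest_size bytes (snprintf truncates instead of overflowing).
void append_int(char *dest, std::size_t dest_size, int value) {
    std::size_t used = std::strlen(dest);
    if (used < dest_size)
        std::snprintf(dest + used, dest_size - used, "%d", value);
}
```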
347,187
347,217
Best Flags for Crash dumps
I currently have some code that will produce a crash dump when my application crashes; however, I can't work out the best flags to use for it. At the moment I have it using full memory (MiniDumpWithFullMemory), but this produces 32 MB crash files. What flags should I use so as not to make the crash file huge, but still give me the most power when coming to debug using it? More info on the flags can be found at: http://www.debuginfo.com/articles/effminidumps.html
This is not always such a general answer. The flags desired will depend somewhat on what you are trying to accomplish or for what you may be searching. Perhaps you are having threading issues; then MiniDumpWithThreadInfo or MiniDumpWithProcessThreadData would be appropriate. If your program is corrupting its in-memory data, then MiniDumpWithFullMemory may be the choice. From my own uses, having the full memory isn't always very useful -- I'll get what I need from the PEB or TEB structures, or just from the thread stack traces. Also, look at the flags listed in a section of the site to which you linked: http://www.debuginfo.com/articles/effminidumps2.html#strategies
347,191
347,215
How do I get the source IP address from a datagram's IP header with Winsock?
I have a port that is bind()'d to INADDR_ANY. I am receiving datagrams successfully. After receipt, I need to read the IP header to get the source IP address.
I don't believe you can get it if you're using the standard recv or read function calls. The recvfrom call, as follows: int recvfrom( __in SOCKET s, __out char *buf, __in int len, __in int flags, __out struct sockaddr *from, __inout_opt int *fromlen ); includes a structure (the second-to-last field above) which will receive the source address, which you can examine for whatever purpose you desire.
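For an IPv4 socket you would pass a sockaddr_in as the from argument and read its sin_addr field after the call; that field holds the source address in network byte order. The helper below is a hypothetical sketch (in practice inet_ntoa/InetNtop do this for you), showing only the byte-to-string part, since it needs no live socket:

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Formats the four octets of an IPv4 address (as stored, in order,
// inside sockaddr_in::sin_addr) into dotted-decimal notation.
std::string ipv4_to_string(const unsigned char bytes[4]) {
    char buf[16];
    std::snprintf(buf, sizeof buf, "%u.%u.%u.%u",
                  (unsigned)bytes[0], (unsigned)bytes[1],
                  (unsigned)bytes[2], (unsigned)bytes[3]);
    return buf;
}
```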
347,358
434,784
Inheriting constructors
Why does this code: class A { public: explicit A(int x) {} }; class B: public A { }; int main(void) { B *b = new B(5); delete b; } Result in these errors: main.cpp: In function ‘int main()’: main.cpp:13: error: no matching function for call to ‘B::B(int)’ main.cpp:8: note: candidates are: B::B() main.cpp:8: note: B::B(const B&) Shouldn't B inherit A's constructor? (this is using gcc)
If your compiler supports C++11 standard, there is a constructor inheritance using using (pun intended). For more see Wikipedia C++11 article. You write: class A { public: explicit A(int x) {} }; class B: public A { using A::A; }; This is all or nothing - you cannot inherit only some constructors, if you write this, you inherit all of them. To inherit only selected ones you need to write the individual constructors manually and call the base constructor as needed from them. Historically constructors could not be inherited in the C++03 standard. You needed to inherit them manually one by one by calling base implementation on your own. For templated base classes, refer to this example: using std::vector; template<class T> class my_vector : public vector<T> { public: using vector<T>::vector; ///Takes all vector's constructors /* */ };
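A minimal, compilable version of the C++11 case (the member name x_ is illustrative):

```cpp
#include <cassert>

struct A {
    explicit A(int x) : x_(x) {}
    int x_;
};

struct B : A {
    using A::A;  // inherits A(int); B(5) now compiles
};
```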
347,441
347,478
Erasing elements from a vector
I want to clear a element from a vector using the erase method. But the problem here is that the element is not guaranteed to occur only once in the vector. It may be present multiple times and I need to clear all of them. My code is something like this: void erase(std::vector<int>& myNumbers_in, int number_in) { std::vector<int>::iterator iter = myNumbers_in.begin(); std::vector<int>::iterator endIter = myNumbers_in.end(); for(; iter != endIter; ++iter) { if(*iter == number_in) { myNumbers_in.erase(iter); } } } int main(int argc, char* argv[]) { std::vector<int> myNmbers; for(int i = 0; i < 2; ++i) { myNmbers.push_back(i); myNmbers.push_back(i); } erase(myNmbers, 1); return 0; } This code obviously crashes because I am changing the end of the vector while iterating through it. What is the best way to achieve this? I.e. is there any way to do this without iterating through the vector multiple times or creating one more copy of the vector?
Use the remove/erase idiom: std::vector<int>& vec = myNumbers; // use shorter name vec.erase(std::remove(vec.begin(), vec.end(), number_in), vec.end()); What happens is that remove compacts the elements that differ from the value to be removed (number_in) in the beginning of the vector and returns the iterator to the first element after that range. Then erase removes these elements (whose value is unspecified).
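Applied to the code in the question, the whole erase function collapses to one call; a sketch:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Removes every occurrence of 'number_in', no matter how many times
// it appears, in a single pass over the vector.
void erase(std::vector<int>& myNumbers_in, int number_in) {
    myNumbers_in.erase(
        std::remove(myNumbers_in.begin(), myNumbers_in.end(), number_in),
        myNumbers_in.end());
}
```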
347,736
351,037
How to get file list from a Debian package using eptlib libraries?
Simple question: I have loaded an APT package record with libept. Is there a method to get the file list? It should look something like record.GetFileList(); and it should return a vector of strings like the output of dpkg -L packagename
The libept main developer (Petr Rockai) explained to me that, unfortunately, at this time libept has no such method. What they do in Adept is this: QString flfilename = "/var/lib/dpkg/info/" + u8(t.package()) + ".list"; QFile flfile(flfilename); QTextStream flInStream(&flfile); while (!flInStream.atEnd()) { QString line = flInStream.readLine(); // do stuff with line } flfile.close();
347,747
383,887
IDebugControl::WaitForEvent works once then returns E_HANDLE
I'm trying to make a small tool that makes use of the Debugger Engine API, but I'm having very limited success. I can get my IDebugClient and IDebugControl instances, and from there I am able to attach into an already running user process. I then enter a main loop where I call WaitForEvent, OutputStackTrace, SetExecutionStatus(DEBUG_STATUS_GO), and repeat. In essence this will be a very crude sampling based profiler. Good so far.. My loop runs for one full iteration, I can see a stack trace being displayed and then the target process going back into a running state. The problem I have is that on my 2nd iteration the call to WaitForEvent returns E_HANDLE ("The handle is invalid"). I cannot see in the documentation why this error should be returned. Does anyone know why this might be happening?
The problem turned out to be that I was compiling, linking, and running against an old version of the SDK. Now that I've upgraded my SDK to the latest version (which I presume is the version that the online docs refer to) I get behaviour that is at least consistent with the docs. I still have problems, but no longer this problem.
347,920
347,940
What do 1.#INF00, -1.#IND00 and -1.#IND mean?
I'm messing around with some C code using floats, and I'm getting 1.#INF00, -1.#IND00 and -1.#IND when I try to print floats to the screen. What do those values mean? I believe that 1.#INF00 means positive infinity, but what about -1.#IND00 and -1.#IND? I also sometimes saw this value: 1.$NaN, which is Not a Number, but what causes those strange values, and how can they help me with debugging? I'm using MinGW, which I believe uses IEEE 754 representation for floating point numbers. Can someone list all those invalid values and what they mean?
From IEEE floating-point exceptions in C++ : This page will answer the following questions. My program just printed out 1.#IND or 1.#INF (on Windows) or nan or inf (on Linux). What happened? How can I tell if a number is really a number and not a NaN or an infinity? How can I find out more details at runtime about kinds of NaNs and infinities? Do you have any sample code to show how this works? Where can I learn more? These questions have to do with floating point exceptions. If you get some strange non-numeric output where you're expecting a number, you've either exceeded the finite limits of floating point arithmetic or you've asked for some result that is undefined. To keep things simple, I'll stick to working with the double floating point type. Similar remarks hold for float types. Debugging 1.#IND, 1.#INF, nan, and inf If your operation would generate a larger positive number than could be stored in a double, the operation will return 1.#INF on Windows or inf on Linux. Similarly your code will return -1.#INF or -inf if the result would be a negative number too large to store in a double. Dividing a positive number by zero produces a positive infinity and dividing a negative number by zero produces a negative infinity. Example code at the end of this page will demonstrate some operations that produce infinities. Some operations don't make mathematical sense, such as taking the square root of a negative number. (Yes, this operation makes sense in the context of complex numbers, but a double represents a real number and so there is no double to represent the result.) The same is true for logarithms of negative numbers. Both sqrt(-1.0) and log(-1.0) would return a NaN, the generic term for a "number" that is "not a number". Windows displays a NaN as -1.#IND ("IND" for "indeterminate") while Linux displays nan. Other operations that would return a NaN include 0/0, 0*∞, and ∞/∞. See the sample code below for examples. 
In short, if you get 1.#INF or inf, look for overflow or division by zero. If you get 1.#IND or nan, look for illegal operations. Maybe you simply have a bug. If it's more subtle and you have something that is difficult to compute, see Avoiding Overflow, Underflow, and Loss of Precision. That article gives tricks for computing results that have intermediate steps overflow if computed directly.
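Rather than matching the printed text (which differs between Windows and Linux, as noted above), the <cmath> classification functions std::isinf and std::isnan (standardized in C++11) detect these values directly; a sketch:

```cpp
#include <cassert>
#include <cmath>

// Produces an infinity and a NaN in the usual ways and checks them with
// std::isinf / std::isnan. 'zero' is volatile so the division really
// happens at run time under IEEE 754 rules.
bool classify_demo() {
    volatile double zero = 0.0;
    double pos_inf = 1.0 / zero;        // division by zero -> +inf
    double nan_val = std::sqrt(-1.0);   // undefined operation -> NaN
    return std::isinf(pos_inf)
        && std::isnan(nan_val)
        && !(nan_val == nan_val);       // a NaN never compares equal to itself
}
```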
347,949
347,959
How to convert a std::string to const char* or char*
How can I convert an std::string to a char* or a const char*?
If you just want to pass a std::string to a function that needs const char *, you can use .c_str(): std::string str; const char * c = str.c_str(); And if you need a non-const char *, call .data(): std::string str; char * c = str.data(); .data() was added in C++17. Before that, you can use &str[0]. Note that if the std::string is const, .data() will return const char * instead, like .c_str(). The pointer becomes invalid if the string is destroyed or reallocates memory. The pointer points to a null-terminated string, and the terminator doesn't count against str.size(). You're not allowed to assign a non-null character to the terminator.
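The termination and mutability points above can be checked directly; a small sketch:

```cpp
#include <cassert>
#include <cstring>
#include <string>

bool cstr_demo() {
    std::string s = "hello";
    const char *p = s.c_str();
    if (std::strlen(p) != s.size()) return false;  // terminator not counted in size()
    if (p[s.size()] != '\0') return false;         // but it is always there
    char *q = &s[0];  // pre-C++17 spelling of the non-const .data()
    q[0] = 'H';       // writing through it modifies the string
    return s == "Hello";
}
```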
347,951
347,999
Simple anonymous pipes - what wrapper model you use? (WinAPI, C++)
I have two running processes in Windows, and each process has a pipe to the other. I want to serialize a complicated class and transmit it from one process to the other. I already have the serialization procedure worked out, and I understand that the pipes are sending binary streams. How should I go about sending my serialized data? I'm using WinAPI and C++. Should I develop a custom protocol? If so, should it be generic or unique to this particular class? Can I preserve virtual tables when sending the serialized class? Are there any models or design patterns that are commonly used in this case? A little bit of sample code would be greatly appreciated. Thank you!
Here is the tutorial for boost::serialization. I could imagine it would work fine sending the data over the pipe and deserializing on the other side: http://www.boost.org/doc/libs/1_37_0/libs/serialization/doc/tutorial.html
348,320
348,350
In C++ is it possible to have a defined purely virtual function?
Here's the deal. I have a big class hierarchy and I have this one method that is extended all the way through. The method always has to look at one or two more variable at each new level and these variable depend on the actual class in the hierarchy. What I want to do is check those two extra variables then call the superclass's version of that same function. I want to be able to define this function as all it's immediate children will use it, but I want to force any children of that class to have to redefine that method (because they will have to look at their new data members) So how would I write this? I usually use =0; in the .h file, but I assume I can't use that and define it...
Actually you can declare a function as pure virtual and still define an implementation for it in the base class. class Abstract { public: virtual void pure_virtual(int x) = 0; }; void Abstract::pure_virtual(int x) { // do something } class Child : public Abstract { virtual void pure_virtual(int x); }; void Child::pure_virtual(int x) { // do something with x Abstract::pure_virtual(x); }
348,616
348,624
Domain Specific Language in C/C++: is this Kosher?
I was just fooling around with some Domain Specific Language designs for a new project in C/C++ when I thought up this "odd" solution: #define DSL(...) MakeCommand(#__VA_ARGS__)->Exec()->GetResults() MyResults results = DSL( for p in people do something ); The nice part is that this is correct by the standards (but so is Duff's device), and cross-platform, portable, etc... However, this method is really not any better than writing strings into code; but since the DSL engine parses strings anyway, it seems to look prettier this way and reduces the clutter. But I was wondering what other folks think about it. Thanks
Hmm, while variadic macros are C99, they are not standard C++. I wouldn't do it like that :) A simple dsl function taking a std::string (or whatever string class your framework uses) and returning MakeCommand(str)->Exec()->GetResults() would be my preferred option, since it's more debug-friendly and you can put it into a namespace. You will also be able to build up command strings dynamically and pass them to the function. Using your macro approach, passing a str variable would actually pass "str" to the MakeCommand function. You would need another macro for dynamic commands to make it work, which I wouldn't be comfortable with.
348,730
348,740
Why is there "class" in "template <class x>"?
What does the "class" part of a template statement do? Example: template <class T> class Something { public: Something(const T &something); } And what else can go there? I usually only see "class".
The class keyword means the same thing as the typename keyword for the most part. They both indicate that T is a type. The only difference between the keywords class and typename is that class can be used to declare template template parameters, whereas typename can't. Consider: template<template <class T> class U> // must be "class" std::string to_string(const U<char>& u) { return std::string(u.begin(),u.end()); } The only other thing you can put in place of the class or typename keywords is an integral type. For example: template<std::size_t max> class Foo{...}; ... Foo<10> f; For a concrete example of this, take a look at std::bitset<N> in the standard library.
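Spelling out the integral-parameter case from the answer (Foo is just the answer's placeholder name):

```cpp
#include <cassert>
#include <cstddef>

// Like std::bitset<N>: the size is a compile-time constant baked into
// the type, so Foo<10> and Foo<20> are two distinct types.
template <std::size_t Max>
class Foo {
    char storage[Max];  // fixed-size member sized by the parameter
public:
    std::size_t capacity() const { return Max; }
};
```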
348,763
348,780
c++ template instantiation
I have a template class like below. template<int S> class A { private: char string[S]; public: A() { for(int i =0; i<S; i++) { . . } } int MaxLength() { return S; } }; If I instantiate the above class with different values of S, will the compiler create different instances of the A() and MaxLength() functions? Or will it create one instance and pass S as some sort of argument? How will it behave if I move the definitions of A and MaxLength to a different cpp file?
The template will be instantiated for each different value of S. If you move the method implementations to a different file, you'll need to #include that file. (Boost, for instance, uses the .ipp convention for such source files that need to be #included.) If you want to minimise the amount of code that is generated with each template instantiation (and hence needs to be made available in the .ipp file), you should try to factor it out by removing the dependency on S. For example, you could derive from a (private) base class which provides member functions taking S as a parameter.
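A hedged sketch of that factoring for the question's class (ABase is an invented name): once the size is passed as a runtime argument, the loop body no longer depends on S, so it compiles once instead of once per instantiation:

```cpp
#include <cassert>

class ABase {
protected:
    ABase(char *buf, int size) {
        for (int i = 0; i < size; ++i)
            buf[i] = '\0';            // the S-independent work, compiled once
    }
};

template <int S>
class A : private ABase {
    char string_[S];
public:
    A() : ABase(string_, S) {}        // thin per-S shell
    int MaxLength() const { return S; }
};
```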
348,833
350,378
How to know the exact line of code where an exception has been caused?
If I generate an exception on my own, I can include any info in the exception: the line number and the name of the source file. Something like this: throw std::exception("myFile.cpp:255"); But what about unhandled exceptions, or exceptions that were not generated by me?
It seems everyone is trying to improve your code to throw exceptions in your code, and no one is attempting the actual question you asked. Which is because it can't be done. If the code that's throwing the exception is only presented in binary form (e.g. in a LIB or DLL file), then the line number is gone, and there's no way to connect the object code to a line in the source code.
348,953
348,972
How do I resolve: "error C2039: '{ctor}' : is not a member of" in Visual Studio 2005?
I am extending a template class using C++ in Visual Studio 2005. It is giving me an error when I try to extend the template base class with: template <class K, class D> class RedBlackTreeOGL : public RedBlackTree<K, D>::RedBlackTree // Error 1 { public: RedBlackTreeOGL(); ~RedBlackTreeOGL(); and a second error when I try to instantiate the object: RedBlackTreeOGL<double, std::string> *tree = new RedBlackTreeOGL<double, std::string>; // error 2 Error 1: **redblacktreeopengl.hpp(27) : error C2039: '{ctor}' : is not a member of 'RedBlackTree' with [ K=double, D=std::string ] ** Error 2: main.cpp(50) : see reference to class template instantiation 'RedBlackTreeOGL' being compiled
The code is trying to inherit a constructor, not a class :-) The start of the class declaration should be template <class K, class D> class RedBlackTreeOGL : public RedBlackTree<K, D>
349,004
349,015
Why won't my program run unless Visual Studio 2008 is installed?
I have written a game that uses GLUT, OpenGL and FMOD. The problem is that the binary won't run, unless Visual Studio 2008 is installed on the computer. Why is this?
Most likely you're linking with DLL versions of the C/C++ runtime. Go to project properties -> C++ -> Code Generation, and set Runtime Library to not be one of "DLL" kinds. Alternatively, you can link to DLL runtimes, but then you have to redistribute the runtime with your application. MSDN has more information on various aspects of C++ application deployment: http://msdn.microsoft.com/en-us/library/zebw5zk9.aspx Also, Dependency Walker (depends.exe) will show what libraries your executable depends on. It ships with some versions of Visual Studio as well.
349,025
349,030
Is a string literal in C++ created in static memory?
Is a string literal in C++ created in static memory and destroyed only when the program exits?
Where it's created is an implementation decision by the compiler writer, really. Most likely, string literals will be stored in read-only segments of memory since they never change. In the old compiler days, you used to have static data like these literals, and global but changeable data. These were stored in the TEXT (code) segment and DATA (initialised data) segment. Even when you have code like char *x = "hello";, the hello string itself is stored in read-only memory while the variable x is on the stack (or elsewhere in writeable memory if it's a global). x just gets set to the address of the hello string. This allows all sorts of tricky things like string folding, so that "invalid option" (0x1000) and "valid option" (0x1002) can use the same memory block as follows: +-> plus:0 1 2 3 4 5 6 7 8 9 A B C D E | +---+---+---+---+---+---+---+---+---+---+---+---+---+---+----+ 0x1000 | i | n | v | a | l | i | d | | o | p | t | i | o | n | \0 | +---+---+---+---+---+---+---+---+---+---+---+---+---+---+----+ Keep in mind I don't mean read-only memory in terms of ROM, just memory that's dedicated to storing unchangeable stuff (which may be marked really read-only by the OS). They're also never destroyed until main() exits.
349,446
349,501
adjacency matrix in java or c++ to find connected nodes
I am given a problem where I have N nodes in a graph that are interconnected, and a matrix which lists whether one node is connected to another (1 if it is, 0 if not). I am wondering how best to approach this problem. I think this is an adjacency matrix? But how would I implement it... Basically what I am trying to get out of this is to find whether a particular node is connected to all other nodes in a given set 'S', and whether the selected items form a clique or not... I'd appreciate any hints.
You can implement this using a 2-dimensional array of booleans. So, if node i is connected to node j, then myarray[i][j] would be true. If your edges are not directional, then myarray[j][i] would be true whenever myarray[i][j] is. This can also be extended to weighted edges by using integers (or another numeric type) instead of booleans as the elements of the array.
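A sketch in C++ (names illustrative): with the 2-dimensional boolean array, "S is a clique" reduces to checking every distinct pair in S:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<bool>>;

// True iff every pair of distinct nodes in s is connected in adj
// (assumes an undirected graph, i.e. adj is symmetric).
bool is_clique(const Matrix& adj, const std::vector<int>& s) {
    for (std::size_t i = 0; i < s.size(); ++i)
        for (std::size_t j = i + 1; j < s.size(); ++j)
            if (!adj[s[i]][s[j]])
                return false;
    return true;
}
```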
349,544
349,599
Threading issue (chase your own tail)
There is a ConnectionManager which waits for incoming connections. For each incoming connection it creates a Connection instance which handles the inbound and outbound traffic on that connection. Each Connection has a Watchdog which handles "bad connection" conditions and calls registered "Listeners". One "Listener" is the ConnectionManager, which closes the connection and deletes the Connection instance, which in turn deletes the corresponding Watchdog. Wait. A. Minute. The Watchdog calls the ConnectionManager, which deletes the Connection, which deletes the Watchdog? The Watchdog chases its own tail. I am completely blocked. How do I resolve this? Solution: I will make the Listener thingy asynchronous, although I don't know yet how to do that without too much pain. The Watchdog doesn't know about the ConnectionManager. It is fairly generic. Also the Win32 thread API doesn't have something like "join", so I might need to roll my own with GetExitCodeThread() and STILL_ACTIVE... Thanks, guys.
If the watchdog is running in a different thread, then the problem isn't too bad - the watchdog signals the ConnectionManager to delete with an asynchronous message, then exits its own thread. Meanwhile, the ConnectionManager thread gets the delete message and starts deleting the watchdog. To avoid race conditions, the watchdog destructor should join the watchdog thread and clean it up. (Probably also signal the watchdog thread, or assert() something about the watchdog thread being ready to exit.)
349,835
350,137
WM_GETTEXT button action
I would like to tie an action to a Pocket PC button (the biggest button at the center - I don't know what it is called). I tried to tie the action to the WM_GETTEXT message, but this message is sent on every key press and I don't know how to tie an action to just the specific button. Does anyone know how this could be done? Thanks!
Are you talking about the Action button in the middle of a "d-pad"? Each device has its own hardware layout; there's no such thing as the "biggest button at the center" common to all devices. Perhaps you can post a picture? Anyway, take a look at this page: Keys and Key Codes for Windows Mobile. Also, WM_GETTEXT is definitely not the message you want to process. Its purpose is to retrieve the "window text" of a window (the caption of a button, the contents of an edit control, etc.). You should handle WM_KEYDOWN/WM_KEYUP or WM_CHAR. Also, you may want to look at Accelerators.
349,889
350,046
How do you determine the amount of Linux system RAM in C++?
I just wrote the following C++ function to programmatically determine how much RAM a system has installed. It works, but it seems to me that there should be a simpler way to do this. Am I missing something? getRAM() { FILE* stream = popen("head -n1 /proc/meminfo", "r"); std::ostringstream output; int bufsize = 128; while( !feof(stream) && !ferror(stream)) { char buf[bufsize]; int bytesRead = fread(buf, 1, bufsize, stream); output.write(buf, bytesRead); } std::string result = output.str(); std::string label, ram; std::istringstream iss(result); iss >> label; iss >> ram; return ram; } First, I'm using popen("head -n1 /proc/meminfo") to get the first line of the meminfo file from the system. The output of that command looks like MemTotal: 775280 kB Once I've got that output in an istringstream, it's simple to tokenize it to get at the information I want. Is there a simpler way to read in the output of this command? Is there a standard C++ library call to read in the amount of system RAM?
On Linux, you can use the function sysinfo which sets values in the following struct: #include <sys/sysinfo.h> int sysinfo(struct sysinfo *info); struct sysinfo { long uptime; /* Seconds since boot */ unsigned long loads[3]; /* 1, 5, and 15 minute load averages */ unsigned long totalram; /* Total usable main memory size */ unsigned long freeram; /* Available memory size */ unsigned long sharedram; /* Amount of shared memory */ unsigned long bufferram; /* Memory used by buffers */ unsigned long totalswap; /* Total swap space size */ unsigned long freeswap; /* swap space still available */ unsigned short procs; /* Number of current processes */ unsigned long totalhigh; /* Total high memory size */ unsigned long freehigh; /* Available high memory size */ unsigned int mem_unit; /* Memory unit size in bytes */ char _f[20-2*sizeof(long)-sizeof(int)]; /* Padding for libc5 */ }; If you want to do it solely using functions of C++ (I would stick to sysinfo), I recommend taking a C++ approach using std::ifstream and std::string: unsigned long get_mem_total() { std::string token; std::ifstream file("/proc/meminfo"); while(file >> token) { if(token == "MemTotal:") { unsigned long mem; if(file >> mem) { return mem; } else { return 0; } } // Ignore the rest of the line file.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); } return 0; // Nothing found }
349,899
349,906
What C++ compilers are supporting lambda already?
Are there C++ compilers already supporting C++0x lambda expressions?
Visual Studio 2010 CTP supports it already. Update: It is now Visual Studio 2010 Beta 2
350,419
350,448
How do you reconcile common C++ naming conventions with those of the libraries
Most C++ naming conventions dictate the use of camelCaseIdentifiers: names that start with an uppercase letter for classes (Person, Booking) and names that start with a lowercase letter for fields and variables (getPrice(), isValid(), largestValue). These recommendations are completely at odds with the naming conventions of the C++ library, which involve lowercase names for classes (string, set, map, fstream) and names_joined_with_an_underscore for methods and fields (find_first_of, lower_bound, reverse_iterator, first_type). Further complicating the picture are operating system and C library functions, which involve compressed lowercase names in C and Unix and functions starting with an uppercase letter in Windows. As a result my code is a mess, because some identifiers use the C++ library, C, or operating system naming convention, and others use the prescribed C++ convention. Writing classes or methods that wrap functionality of the library is painful, because one ends with different-style names for similar things. So, how do you reconcile these disparate naming conventions?
One way is to adopt the C++ library naming_convention; this is what most code examples in the literature do nowadays. I slowly see these conventions move into production code, but it's a battle against the MFC naming conventions that still prevail in many places. Other style differences that fight against old standards include using trailing underscores rather than m_ to denote members.
350,507
350,545
Starting an application under windows using start
I noticed that I can start a program with its associated handler by writing start filename. However, for some files, all I get is a console, and I don't know why. I'm trying to populate a list control in MFC, and I want the program and its associated handler to run when I double-click the selection. Is there a better way, or an explanation of why this doesn't work? This is the code that could be the problem: int selection = listControl.GetCurSel(); CString text; listControl.GetText(selection,text); string std_str = StringUtils::CStringToString(text); string st = string("start \"")+std_str+string("\""); const char* command = st.c_str(); system(command);
If the first parameter on the start command line is enclosed in double-quotes, it uses that as the window title instead of the command. It's lame, but that's what it does... Try string st = string("start \"\" \"")+std_str+string("\""); instead. But if you're trying to get the shell handler for a file to execute from within your process, a better, cleaner way to do this instead of invoking the start command is to use the ShellExecute() or ShellExecuteEx() Win32 API.
350,586
351,809
Set a default cursor for an application
In a Qt application, is there an equivalent to QApplication::setFont that sets the application's default cursor, to be overridden by setting one on a specific widget? QApplication::setOverrideCursor overrides all widget-specific ones; I want local ones to take precedence over this one, but still use my cursor if I didn't specify one.
A QWidget either uses the cursor specified with QWidget::setCursor or falls back to its parent's cursor setting. So, simply setting the cursor on your main window should do the trick. New top-level windows and dialogs will need to have the cursor set when created, since there is no parent from which to inherit.
350,811
351,200
MFC Equivalent to Java File#isDirectory()
Is there an equivalent to the Java File method isDirectory() in MFC? I tried using this : static bool isDirectory(CString &path) { return GetFileAttributes(path) & FILE_ATTRIBUTE_DIRECTORY; } but it doesn't seem to work.
Sorry if this isn't exactly an answer to the question as asked, but you may find it useful: whenever I need something like this on Windows I use the plain Windows API rather than MFC: //not completely tested but after some debug I'm sure it'll work bool IsDirectory(LPCTSTR sDirName) { //First define the special structure defined by Windows WIN32_FIND_DATA findFileData; ZeroMemory(&findFileData, sizeof(WIN32_FIND_DATA)); //then call the WinAPI function that finds a file/directory //(don't forget to close the handle afterwards!) HANDLE hf = ::FindFirstFile(sDirName, &findFileData); if (hf == INVALID_HANDLE_VALUE) //also a predefined value - 0xFFFFFFFF return false; //closing handle! ::FindClose(hf); // true if the directory flag is on return (findFileData.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) != 0; } Incidentally, the original snippet fails for paths that don't exist, because GetFileAttributes returns INVALID_FILE_ATTRIBUTES (0xFFFFFFFF) on error, a value that also has the directory bit set; checking for that value first would make the GetFileAttributes approach work too.
351,122
351,233
Size of data obtained from SQL query via ODBC API
Does anybody know how I can get the number of the elements (rows*cols) returned after I do an SQL query? If that can't be done, then is there something that's going to be relatively representative of the size of data I get back? I'm trying to make a status bar that indicates how much of the returned data I have processed, so I want to be somewhere relatively close. Any ideas? Please note that SQLRowCount only returns the number of rows affected by an UPDATE, INSERT, or DELETE statement; not the number of rows returned from a SELECT statement (as far as I can tell). So I can't multiply that directly by the SQLColCount. My last option is to have a status bar that goes back and forth, indicating that data is being processed.
That is frequently a problem when you want to reserve dynamic memory to hold the entire result set. One technique is to return the count as part of the result set. WITH data AS ( SELECT interesting-data FROM interesting-table WHERE some-condition ) SELECT COUNT(*), data.* from data If you don't know beforehand what columns you are selecting, or use a * like the example above, then the number of columns can be selected out of the USER_TAB_COLS table SELECT COUNT(*) FROM USER_TAB_COLS WHERE TABLE_NAME = 'interesting-table'
351,236
351,255
copy hdc contents to bitmap
How could you copy the contents of an HDC to a bitmap?
Off the top of my head I think you need to: Create a new DC compatible with the source DC. Call this the memory DC. Create a new bitmap of the correct size. Select the bitmap into the memory DC. BitBlt the source DC into the memory DC. The bitmap should now contain a copy of the source DC. I'm at home so can't give you any code, so I hope this is enough to get you started. There is a good GDI section on Code Project. http://www.codeproject.com/KB/graphics/
351,360
351,414
An algorithm to get the next weekday set in a bitmask
I've got this small question - given a bitmask of weekdays (e.g., Sunday = 0x01, Monday = 0x02, Tuesday = 0x04, etc...) and today's day (in a form of Sunday = 1, Monday = 2, Tuesday = 3, etc...) - what's the most elegant way to find out the next day from today, that's set in the bitmask? By elegant I mean, is there a way to do this without if/switch/etc..., because I know the non-elegant way? Edit I probably should've mentioned (to make this more clear) that the variable holding the bitmask can have several of the days set, so for example (roughly): uDay = Sunday | Monday; today = Tuesday; I need to get "Sunday"
int getNextDay(int days_mask, int today) { if (!days_mask) return -1; // no days set days_mask |= days_mask << 7; // duplicate days into next week int mask = 1 << (today % 7); // keep track of the day while (!(mask & days_mask)) { mask <<= 1; ++today; } return today % 7; } So that's just one if at the beginning and a while loop. How's that? Edit: I just realized there was a degenerate case where if the user passes today>=14 (or greater than the highest bit set) the while loop becomes infinite. The (today % 7) on line 4 fixes this case. And if I may grouse (light-heartedly) about the other version getting the checkmark, my version only has 2 modulus calls, while the checked solution will have a minimum of 1 and a maximum of 6 modulus calls. Also, the comment about whether the function returns "today" if today is set is interesting. If the function should not return today unless today is the only day in the set, you would need to pre-increment today on line 3 of my solution.
351,522
351,548
C++ Huffman Code Header
basically, I've got my Huffman table as std::map<std::string, char> ciMap; where string is the bit pattern and char is the value represented by said pattern. The problem is how do I store that as a header of my compressed file so I can build the same map again when I want to decode it? Trying to store it as binary: size_t mapLen = ciMap.size(); outFile.write(reinterpret_cast<char*>(&mapLen), sizeof(size_t)); outFile.write(reinterpret_cast<char*>(&ciMap), sizeof(ciMap)); And later building with: inFile.read(reinterpret_cast<char*>(&mapLen), sizeof(size_t)); inFile.read(reinterpret_cast<char*>(&ciMap), sizeof(mapLen)); Doesn't work, I get a string initialization error... something to do with NULL. Any suggestions? If you have a better way of storing the bits and values I'd like to hear it.
You can do it yourself, or you can do it with boost: http://www.boost.org/doc/libs/1_37_0/libs/serialization/doc/index.html. What you are currently trying treats the map as a plain old datatype, which essentially means a C datatype. But it isn't one, so it fails to save/load. boost serialization does it correctly. Have a look at it. If you don't want to use it, you can do something like this: typedef std::map<std::string, char> my_map; my_map ciMap; // saving std::ofstream stream("file.txt"); for(my_map::const_iterator it = ciMap.begin(); it != ciMap.end(); ++it) { stream << it->first << " " << it->second << std::endl; } // loading char c; std::string bits; std::ifstream stream("file.txt"); while(stream >> bits >> c) ciMap.insert(std::make_pair(bits, c)); Note that the above needs some changes if the characters stored could be whitespace characters too. Because of that, it's probably best to first convert to an int before writing out, and then read an int back when loading. Actually, I recommend boost serialization, and boost iostreams (http://www.boost.org/doc/libs/1_37_0/libs/iostreams/doc/index.html), which include a compression stream that can transparently compress your data too.
351,582
351,601
How can I discover/control the level of internal buffering in a C++ fstream?
Say I do this (a contrived example): #include <iostream> #include <fstream> using namespace std; int main(int argc, char* argv[]) { ifstream ifs(argv[1]); char ch; while(ifs.read(&ch, 1)) { cout << ch; } } I assume(hope) that the iostream library does some internal buffering here and doesn't turn this into gazillions of one-byte file-read operations at the OS level. Is there a way of: a) Finding out the size of ifstream's internal buffer? b) Changing the size of ifstream's internal buffer? I'm writing a file filter that needs to read multi-gigabyte files in small chunks and I'd like to experiment with different buffer sizes to see if it affects performance.
You can use ios::rdbuf() to get a pointer to a streambuf object. This object represents the internal buffer for the stream. You can call streambuf::pubsetbuf(char * s, streamsize n) to set a new internal buffer with a given size. See this link for more details. edit: Here is how it would look in your case (note that pubsetbuf should be called before the file is opened, since on many implementations it has no effect afterwards): #include <iostream> #include <fstream> using namespace std; int main(int argCount, char * argList[]) { char myBuffer [512]; ifstream inStream; inStream.rdbuf()->pubsetbuf(myBuffer, sizeof(myBuffer)); inStream.open(argList[1]); char ch; while(inStream.read(&ch, 1)) { cout << ch; } } edit: as pointed out by litb, the actual behavior of streambuf::pubsetbuf is "implementation-defined". If you really want to play around with the buffers, you may have to roll your own buffering class that inherits from streambuf.