369,788
371,717
How to SetFocus to a CButton so that the border and focus dotted line are visible?
I created a simple dialog-based application and, in the default CDialog, added three buttons (by drag-and-dropping them) using the Visual Studio editor. The default OK and Cancel buttons are there too. I want to set the focus to button 1 when I click button 3. I set the Flat property to true for my buttons. I coded this: void CbuttonfocusDlg::OnBnClickedButton3() { // TODO: Add your control notification handler code here GetDlgItem(IDC_BUTTON1)->SetFocus(); Invalidate(); } But the border of button1 is never drawn. The caret (the dotted line indicating focus) is only drawn if I pressed TAB at some point before clicking button 3. I want the button to look exactly as it looks after I click it. Showing the dotted line inside the button programmatically would be a plus.
This draws the thick border around the button: static_cast<CButton*>(GetDlgItem(IDC_BUTTON1))->SetButtonStyle(BS_DEFPUSHBUTTON); A more elegant way to do this would be to define a CButton member variable in CbuttonfocusDlg, associate it with the IDC_BUTTON1 control, and then call this->m_myButton.SetButtonStyle(BS_DEFPUSHBUTTON); This makes the button to which I'm setting the focus the default button, but note that when focus moves to a control (inside the dialog) that is not a button, the default button reverts to the original default button set in the dialog resource, in this case the "OK" button.
369,948
370,172
Event-driven simulation class
I am working through some of the exercises in The C++ Programming Language by Bjarne Stroustrup. I am confused by problem 11 at the end of Chapter 12: (*5) Design and implement a library for writing event-driven simulations. Hint: <task.h>. ... An object of class task should be able to save its state and to have that state restored so that it can operate as a coroutine. Specific tasks can be defined as objects of classes derived from task. The program to be executed by a task might be defined as a virtual function. ... There should be a scheduler implementing a concept of virtual time. ... The tasks will need to communicate. Design a class queue for that. ... I am not sure exactly what this is asking for. Is a task a separate thread? (As far as I know it is not possible to create a new thread without system calls, and since this is a book about C++ I do not believe that is the intent.) Without interrupts, how is it possible to start and stop a running function? I assume this would involve busy waiting (which is to say, continually loop and check a condition) although I cannot see how that could be applied to a function that might not terminate for some time (if it contains an infinite loop, for example). EDIT: Please see my post below with more information.
Hint: <task.h>. is a reference to an old cooperative multi-tasking library that shipped with early versions of CFront (you can also download it at that page). If you read the paper "A Set of C++ Classes for Co-routine Style Programming" things will make a lot more sense. Adding a bit: I'm not an old enough programmer to have used the task library. However, I know that C++ was designed after Stroustrup wrote a simulation in Simula that had many of the same properties as the task library, so I've always been curious about it. If I were to implement the exercise from the book, I would probably do it like this (please note, I haven't tested this code or even tried to compile it): struct ITask { virtual ~ITask() { } virtual void run() = 0; }; class Scheduler { std::list<ITask*> tasks; public: void run() { while (1) // or at least until some message is sent to stop running for (std::list<ITask*>::iterator itor = tasks.begin(), end = tasks.end(); itor != end; ++itor) (*itor)->run(); // yes, two dereferences } void add_task(ITask* task) { tasks.push_back(task); } }; I know people will disagree with some of my choices. For instance, using a struct for the interface; but structs have the behavior that inheriting from them is public by default (where inheriting from classes is private by default), and I don't see any value in inheriting privately from an interface, so why not make public inheritance the default? The idea is that calls to ITask::run() will block the scheduler until the task arrives at a point where it can be interrupted, at which point the task returns from the run method and waits until the scheduler calls run again to continue. The "cooperative" in "cooperative multitasking" means "tasks say when they can be interrupted" ("coroutine" usually means "cooperative multitasking").
A simple task may only do one thing in its run() method; a more complex task may implement a state machine, and may use its run() method to figure out what state the object is currently in and make calls to other methods based on that state. The tasks must relinquish control once in a while for this to work, because that is the definition of "cooperative multitasking." It's also the reason why modern operating systems don't use cooperative multitasking. This implementation does not (1) follow fair scheduling (maybe keep a running total of clock ticks spent in each task's run() method, and skip tasks that have used too much time relative to the others until the other tasks "catch up"), (2) allow for tasks to be removed, or even (3) allow for the scheduler to be stopped. As for communicating between tasks, you may consider looking at Plan 9's libtask or Rob Pike's newsqueak for inspiration (the "UNIX implementation of Newsqueak" download includes a paper, "The Implementation of Newsqueak", that discusses message passing in an interesting virtual machine). But I believe this is the basic skeleton Stroustrup had in mind.
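To make the skeleton above concrete, here is a minimal compilable variant. The bounded stop condition and the CountingTask are my own additions for illustration (the original deliberately loops forever); everything else follows the sketch in the answer:

```cpp
#include <list>
#include <vector>

// Interface: public inheritance by default via struct, as argued above.
struct ITask {
    virtual ~ITask() {}
    virtual void run() = 0;  // runs until the task chooses to yield
};

// Cooperative scheduler: round-robins over registered tasks.
class Scheduler {
    std::list<ITask*> tasks;
public:
    void add_task(ITask* task) { tasks.push_back(task); }
    // Unlike the open-ended loop in the answer, this sketch runs a
    // fixed number of rounds so a demo can terminate.
    void run(int rounds) {
        for (int r = 0; r < rounds; ++r)
            for (std::list<ITask*>::iterator it = tasks.begin();
                 it != tasks.end(); ++it)
                (*it)->run();
    }
};

// Hypothetical task: records its id each time it gets the CPU.
struct CountingTask : ITask {
    int id;
    std::vector<int>* log;
    CountingTask(int i, std::vector<int>* l) : id(i), log(l) {}
    void run() { log->push_back(id); }  // one cooperative time slice
};
```

Two tasks added to the scheduler and run for two rounds produce the interleaving 1, 2, 1, 2: each run() call is one cooperative time slice, and the task "yields" simply by returning.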
370,250
370,316
How to debug COM object in Visual Studio 6.0 that is created in an ASP page?
I have an old C++ COM component which has to stay in Visual Studio 6.0 format. I can't for the life of me figure out how to debug the code in the actual COM component. I'm able to build it in debug mode, add breakpoints and attach it to the dllhost.exe process, but the Visual Studio environment will only show me the disassembly from dllhost.exe and not my source code (It also doesn't stop on the breakpoints). I remember a while ago there was a way to make IIS (or inetinfo.exe) the target of your debug action (i.e. when Visual Studio 6.0 asks you for the executable to launch your dll) and IIS would run in a single process mode. There was a registry key you needed to change, but googling hasn't helped.
If it is a VB6-based COM component, you can open the project in VB6 and run it (normally a DLL project cannot be run, but the project properties have an option whereby it can be asked to run, so that it runs and registers itself). Now try hitting the ASP page that makes a call to the COM component. The breakpoints set in the class files will be hit, as the calls are made from the ASP page.
370,283
370,311
Why can't I have a non-integral static const member in a class?
I noticed C++ will not compile the following: class No_Good { static double const d = 1.0; }; However it will happily allow a variation where the double is changed to an int, unsigned, or any integral type: class Happy_Times { static unsigned const u = 1; }; My solution was to alter it to read: class Now_Good { static double d() { return 1.0; } }; and figure that the compiler will be smart enough to inline where necessary... but it left me curious. Why would the C++ designer(s) allow me to static const an int or unsigned, but not a double? Edit: I am using visual studio 7.1 (.net 2003) on Windows XP. Edit2: Question has been answered, but for completion, the error I was seeing: error C2864: 'd' : only const static integral data members can be initialized inside a class or struct
The problem is that with an integer, the compiler usually doesn't have to ever create a memory address for the constant. It doesn't exist at runtime, and every use of it gets inlined into the surrounding code. It can still decide to give it a memory location: if its address is ever taken (or if it's passed by const reference to a function), then it must have one. In order to give it an address, it needs to be defined in some translation unit. And in that case, you need to separate the declaration from the definition, since otherwise it would get defined in multiple translation units. Using g++ with no optimization (-O0), it automatically inlines constant integer variables but not constant double values. At higher optimization levels (e.g. -O1), it inlines constant doubles. Thus, the following code compiles at -O1 but NOT at -O0: // File a.h class X { public: static const double d = 1.0; }; void foo(void); // File a.cc #include <stdio.h> #include "a.h" int main(void) { foo(); printf("%g\n", X::d); return 0; } // File b.cc #include <stdio.h> #include "a.h" void foo(void) { printf("foo: %g\n", X::d); } Command line: g++ a.cc b.cc -O0 -o a # Linker error: ld: undefined symbols: X::d g++ a.cc b.cc -O1 -o a # Succeeds For maximal portability, you should declare your constants in header files and define them once in some source file. With no optimization, this will not hurt performance, since you're not optimizing anyway, but with optimizations enabled, this can hurt performance, since the compiler can no longer inline those constants into other source files, unless you enable "whole program optimization".
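As a quick illustration of the integral case that does compile in-class (the class and names here are my own stand-ins, not from the question):

```cpp
struct Limits {
    // An integral static const may be initialized inside the class.
    // Used this way it is a compile-time constant and needs no
    // out-of-class definition, as long as its address is never taken.
    static const int max_items = 16;
};

// Usable where a constant expression is required, e.g. an array bound:
int buffer[Limits::max_items];

// Taking the address (&Limits::max_items) or binding it to a const
// reference would require "const int Limits::max_items;" in one
// source file, which is exactly the definition discussed above.
```

This is the asymmetry the question runs into: the double version cannot be a compile-time constant under the C++98/03 rules, so the in-class initializer is only permitted for integral types.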
370,366
370,373
Why put the constant before the variable in a comparison?
I've noticed for a while now the following syntax in some of our code: if( NULL == var){ //... } or if( 0 == var){ //... } and similar things. Can someone please explain why the person who wrote this chose this notation instead of the common var == 0 way? Is it a matter of style, or does it somehow affect performance?
It's a mechanism to avoid mistakes like this: if ( var = NULL ) { // ... } If you write it with the variable name on the right-hand side, the compiler will be able to catch certain mistakes: if ( NULL = var ) { // not legal, won't compile // ... } Of course this won't work if variable names appear on both sides of the equals sign, and some people find this style unappealing. Edit: As Evan mentioned in the comments, any decent compiler will warn you about this if you enable warnings; for example, gcc -Wall will give you the following: warning: suggest parentheses around assignment used as truth value You should always enable warnings on your compiler; it is the cheapest way to find errors. Lastly, as Mike B points out, this is a matter of style and doesn't affect the performance of the program.
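A small sketch of the bug this style guards against (the function and variable names are mine, for illustration):

```cpp
// Hypothetical helper: the intent was "if (p == 0)", but the single
// '=' assigns instead. The if then tests the assigned value, which
// is a null pointer, i.e. false, so the branch is silently skipped.
bool check(int* p) {
    if (p = 0) {    // compiles; gcc -Wall warns about it
        return true;
    }
    return false;   // always taken: p was just clobbered to null
}
```

Because the condition assigns rather than compares, check() returns false for every input, which is exactly the kind of silent logic error the NULL == var ordering turns into a compile error.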
370,543
372,549
Combining two PDF files in C++
In C++ I'm generating a PDF report with libHaru. I'm looking for someway to append two pages from an existing PDF file to the end of my report. Is there any free way to do that? Thanks.
Try PoDoFo http://podofo.sourceforge.net/ You should be able to open both of the PDFs as PDFMemDocuments using PDFMemDocument.Load( filename ). Then, acquire references to the two pages you want to copy and add them to the end of the document using InsertPages, or optionally, remove all but the last two pages of the source document, then call PDFDocument::Append and pass the second document. Hard to say which would be faster or more stable. Hope that helps, Troy
371,018
371,052
Create modified HFONT from HFONT
I'm using the Win32 API and C/C++. I have an HFONT and want to use it to create a new HFONT. The new font should use the exact same font metrics except that it should be bold. Something like: HFONT CreateBoldFont(HFONT hFont) { LOGFONT lf; GetLogicalFont(hFont, &lf); lf.lfWeight = FW_BOLD; return CreateFontIndirect(&lf); } The "GetLogicalFont" is the missing API (as far as I can tell, anyway). Is there some other way to do it? Preferably something that works on Windows Mobile 5+.
You want to use the GetObject function. GetObject ( hFont, sizeof(LOGFONT), &lf );
371,465
371,828
Quadtree vs Red-Black tree for a game in C++?
I have been looking for a quadtree/quadtree node implementation on the net for ages. There is some basic stuff, but nothing that I would really be able to use in a game. My purpose is to store objects in a game for processing things such as collision detection. I am not 100% certain that a quadtree is the best data structure to use, but from what I have read it is. I have already coded a red-black tree, but I don't really know if its performance would be good enough for my game (which will be a 3rd-person adventure game like Ankh). How would I write a basic but complete quadtree class (or octree) in C++? How would you use the quadtree for collisions?
Quadtrees are used when you only need to store things that are effectively on a plane, like units in a classic RTS where they are all on the ground or just a little bit above it. Essentially each node has links to 4 children that divide the node's space up into evenly distributed quarters. Octrees do the same but in all three dimensions rather than just two, and thus they have 8 child nodes and partition the space up into eighths. They should be used when the game entities are distributed more evenly among all three dimensions. If you are looking for a binary tree - like a red-black tree - then you want to use a data structure called a binary space partitioning tree (BSP tree) or a version of it called the KD tree. These partition space into halves using a plane; in the KD tree the planes are axis-aligned, so sometimes it works better in a 3D scene. BSP trees divide the scene up using planes in any orientation, but they can be quite useful, and they were used as far back as Doom. Now, because you've partitioned the game space, you no longer have to test every game entity against every other game entity to see if they collide, which is an O(n^2) algorithm. Instead you query the data structure to return the game entities within a sub-region of the game space, and only perform collision detection for those nodes against each other. This means that collision detection for all game entities should be an O(n log n) operation (at worst). A couple of extra things to watch out for: Make sure you test game entities from adjacent nodes, not just the ones in the current node, since they could still collide. Rebalance the data structure after the entities have moved, since you may now have empty nodes in the data structure, or ones that contain too many entities for good performance (also the degenerate case of all entities being in the same node).
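A minimal point-quadtree along the lines described: 2D points, fixed node capacity, recursive subdivision. All names are illustrative; a real engine version would store bounding boxes rather than points and rebalance as entities move, as noted above.

```cpp
#include <vector>
#include <cstddef>

struct Point { float x, y; };

// Axis-aligned rectangle given by centre and half-extents.
struct Rect {
    float cx, cy, hw, hh;
    bool contains(const Point& p) const {
        return p.x >= cx - hw && p.x <= cx + hw &&
               p.y >= cy - hh && p.y <= cy + hh;
    }
    bool intersects(const Rect& o) const {
        return !(o.cx - o.hw > cx + hw || o.cx + o.hw < cx - hw ||
                 o.cy - o.hh > cy + hh || o.cy + o.hh < cy - hh);
    }
};

class Quadtree {
    static const std::size_t kCapacity = 4;  // max points before splitting
    Rect bounds;
    std::vector<Point> points;
    Quadtree* child[4];  // the four quadrants; null until subdivided

    void subdivide() {
        float hw = bounds.hw / 2, hh = bounds.hh / 2;
        float xs[2] = { bounds.cx - hw, bounds.cx + hw };
        float ys[2] = { bounds.cy - hh, bounds.cy + hh };
        for (int i = 0; i < 4; ++i) {
            Rect r = { xs[i % 2], ys[i / 2], hw, hh };
            child[i] = new Quadtree(r);
        }
    }
public:
    explicit Quadtree(const Rect& b) : bounds(b) {
        for (int i = 0; i < 4; ++i) child[i] = 0;
    }
    ~Quadtree() { for (int i = 0; i < 4; ++i) delete child[i]; }

    bool insert(const Point& p) {
        if (!bounds.contains(p)) return false;
        if (points.size() < kCapacity && child[0] == 0) {
            points.push_back(p);
            return true;
        }
        if (child[0] == 0) subdivide();
        for (int i = 0; i < 4; ++i)
            if (child[i]->insert(p)) return true;
        return false;
    }

    // Collect every point inside `range`: these are the candidates you
    // would then run narrow-phase collision tests against.
    void query(const Rect& range, std::vector<Point>& out) const {
        if (!bounds.intersects(range)) return;
        for (std::size_t i = 0; i < points.size(); ++i)
            if (range.contains(points[i])) out.push_back(points[i]);
        if (child[0])
            for (int i = 0; i < 4; ++i) child[i]->query(range, out);
    }
};
```

For collisions, each frame you would query with a rectangle around each moving entity (or around the camera region) and only test the returned candidates pairwise, which is the O(n log n) behaviour described above.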
371,503
374,151
Why is ++i considered an l-value, but i++ is not?
Why is ++i an l-value while i++ is not?
Well, as another answerer already pointed out, the reason ++i is an lvalue is so that it can be bound to a reference. int v = 0; int const & rcv = ++v; // would work if ++v is an rvalue too int & rv = ++v; // would not work if ++v is an rvalue The reason for the second rule is to allow initializing a reference using a literal, when the reference is a reference to const: void taking_refc(int const& v); taking_refc(10); // valid, 10 is an rvalue though! Why do we introduce an rvalue at all, you may ask. Well, these terms come up when building the language rules for these two situations: We want to have a locator value. That will represent a location which contains a value that can be read. We want to represent the value of an expression. The above two points are taken from the C99 Standard, which includes this quite helpful footnote: [ The name "lvalue" comes originally from the assignment expression E1 = E2, in which the left operand E1 is required to be a (modifiable) lvalue. It is perhaps better considered as representing an object "locator value". What is sometimes called "rvalue" is in this International Standard described as the "value of an expression". ] The locator value is called lvalue, while the value resulting from evaluating that location is called rvalue. That's right according also to the C++ Standard (talking about the lvalue-to-rvalue conversion): 4.1/2: The value contained in the object indicated by the lvalue is the rvalue result. Conclusion Using the above semantics, it is clear now why i++ is not an lvalue but an rvalue. Because the expression returned is not located in i anymore (it's incremented!), it is just the value that can be of interest. Modifying the value returned by i++ would make no sense, because we don't have a location from which we could read that value again. And so the Standard says it is an rvalue, and it thus can only bind to a reference-to-const.
However, in contrast, the expression returned by ++i is the location (lvalue) of i. Provoking an lvalue-to-rvalue conversion, as in int a = ++i;, will read the value out of it. Alternatively, we can make a reference point to it, and read out the value later: int &a = ++i;. Note also the other occasions where rvalues are generated. For example, all temporaries are rvalues, as are the results of binary/unary + and -, and all return-value expressions that are not references. All those expressions are not located in a named object, but carry values only. Those values can of course be backed up by objects that are not constant. The next C++ version will include so-called rvalue references that, even though they point to nonconst, can bind to an rvalue. The rationale is to be able to "steal" resources away from those anonymous objects, and avoid copies in doing so. Assuming a class type that has overloaded prefix ++ (returning Object&) and postfix ++ (returning Object), the following would cause a copy in the first case, and in the second case it will steal the resources from the rvalue: Object o1(++a); // lvalue => can't steal. It will deep copy. Object o2(a++); // rvalue => steal resources (like just swapping pointers)
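A short sketch making the distinction observable (the helper names and variables are mine):

```cpp
// ++i yields an lvalue referring to i itself; i++ yields an rvalue
// holding the old value. These two helpers make the difference visible.

int prefix_then_write(int i) {
    int& r = ++i;   // binds: ++i is an lvalue aliasing i (now 1 more)
    r = 10;         // writes through the reference into i
    return i;       // 10, regardless of the starting value
}

int postfix_old_value(int i) {
    // "int& b = i++;" would not compile: i++ is an rvalue.
    // Only a reference-to-const may bind to it:
    const int& c = i++;  // c holds the value before the increment
    return c;
}
```

The first helper shows that ++i is a usable location; the second shows that i++ hands back only a value (the pre-increment one), which a non-const reference cannot bind to.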
372,094
372,827
Changing terminalinterface command after showing it on qt/kdelibs app
I have this terminal in my qt/kde application: KLibFactory* factory = KLibLoader::self()->factory("libkonsolepart"); KParts::Part* p = static_cast<KParts::Part*> (factory->create(this,"terminal",QStringList() << "terminal")); assert(p); TerminalInterface* terminalInterface= qobject_cast<TerminalInterface*> (p); terminalInterface->showShellInDir(QDir::home().path()); QWidget* terminal = p->widget(); So the widget I see in my window is "terminal", but how can I change its inner command when a defined event happens (and, obviously, when the terminal is already displayed)? I've tried with: terminalInterface->startProgram( QString::fromUtf8( "/usr/bin/python" ), QStringList() << "python"); but it works only if I do it before the last line: QWidget* terminal = p->widget(); i.e., instead of the ->showShellInDir(QDir::home().path()); method. Any help? Thanks
Hello, friend! You can send any command to the TerminalInterface prompt with terminalInterface->sendInput("/usr/bin/python\n"); I hope it will be useful :D
372,198
372,207
Best way for interprocess communication in C++
I have two processes; one will query the other for data. There will be a huge number of queries in a limited time (10,000 per second), and data (>100 MB) will be transferred per second. The data will be of integral type (double, int). My question is: which way should I connect these processes? Shared memory, message queues, LPC (Local Procedure Call), or something else? And also, which library would you suggest? By the way, please do not suggest MPI. edit: under Windows XP 32-bit
One word: Boost.Interprocess. If it really needs to be fast, shared memory is the way to go. You have nearly zero overhead, as the operating system does the usual mapping between virtual and physical addresses and no copy is required for the data. You just have to look out for concurrency issues. For actually sending commands like shutdown and query, I would use message queues. I previously used localhost network programming to do that, and used manual shared memory allocation, before I knew about Boost. If I had to rewrite the app, I would immediately pick Boost. Boost.Interprocess makes this much easier for you. Check it out.
372,391
377,737
How to automatically sort a QTreeWidget column?
I'm using a QTreeWidget to display some simple items. I've made the list sortable by calling .setSortingEnabled(true). This way, the list is sorted only when the user clicks the column title, not automatically whenever a new item is inserted. Is there a way to force automatic sorting on a specified column without calling .sortItems(column) at every item insertion? If possible, I would also like to highlight the whole sorted column.
To do this, use QTreeView::setSortingEnabled() and QTreeView::sortByColumn. Just remember to turn this on /after/ you initially populate the widget, and turn it off again /before/ doing any bulk updates for efficiency.
372,453
372,480
Populate a vector<int> from integers in a char *
char *values = " 3 1 4 15"; vector<int> array; I want to populate the array with the values, 3,1,4,15 Is there a slick way to do it with the stl copy algorithm?
Indeed there is: std::istringstream iss(values); std::copy(std::istream_iterator<int>(iss), std::istream_iterator<int>(), std::back_inserter(array));
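Put together as a complete, compilable helper (the variable names follow the question; wrapping it in a function is my addition):

```cpp
#include <algorithm>
#include <iterator>
#include <sstream>
#include <vector>

// istream_iterator<int> skips whitespace and parses each integer in
// turn, so std::copy pulls 3, 1, 4, 15 straight out of the string.
std::vector<int> parse_ints(const char* values) {
    std::vector<int> array;
    std::istringstream iss(values);
    std::copy(std::istream_iterator<int>(iss),
              std::istream_iterator<int>(),
              std::back_inserter(array));
    return array;
}
```

The leading spaces and uneven separation in " 3 1 4 15" are handled for free, since formatted stream extraction skips whitespace between tokens.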
372,484
372,491
How do I programmatically check memory use in a fairly portable way? (C/C++)
I'm writing cross platform C++ code (Windows, Mac). Is there a way to check how much memory is in use by the current process? A very contrived snippet to illustrate: unsigned long m0 = GetMemoryInUse(); char *p = new char[ random_number ]; unsigned long m1 = GetMemoryInUse(); printf( "%d bytes used\n", (m1-m0) ); Of course (m1-m0) should equal random_number, but I'm trying to do this on a more complicated level, including possible library calls that could allocate memory. The following are not preferable: Use Valgrind (or its ilk) Use a custom memory allocator to track allocated memory.
There is no portable way to do that. For most operating systems, there isn't even a reliable way to do it specific to that OS.
372,665
372,673
C++ Instance Initialization Syntax
Given a class like this: class Foo { public: Foo(int); Foo(const Foo&); Foo& operator=(int); private: // ... }; Are these two lines exactly equivalent, or is there a subtle difference between them? Foo f(42); Foo f = 42; Edit: I confused matters by making the Foo constructor "explicit" in the original question. I've removed that, but appreciate the answers. I've also added declaration of a copy constructor, to make it clear that copying may not be a trivial operation. What I really want to know is, according to the C++ standard, will "Foo f = 42" directly call the Foo(int) constructor, or is the copy constructor going to be called? It looks like fasih.ahmed has the answer I was looking for (unless it's wrong).
Foo f = 42; This is copy-initialization: conceptually it creates a temporary Foo from the value 42 and then copies it into f, which is why the copy constructor must be accessible (in practice, compilers are allowed to elide the actual copy). Foo f(42); This is direct-initialization: it calls the Foo(int) constructor directly, so there is one less conceptual function call.
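A sketch that makes the two forms concrete (this Foo is my own stand-in for the one in the question; note that compilers routinely elide the copy in the copy-initialization form, so both typically end up invoking Foo(int) exactly once):

```cpp
class Foo {
    int v;
public:
    Foo(int x) : v(x) {}           // converting constructor
    Foo(const Foo& o) : v(o.v) {}  // must be accessible for "Foo f = 42"
    int value() const { return v; }
};
```

Both Foo a(42); and Foo b = 42; yield value() == 42. The observable difference appears if Foo(int) is marked explicit: the direct form still compiles, while the copy-initialization form becomes ill-formed, because it relies on the implicit conversion from int.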
372,695
372,922
Is there a standard C++ function object for taking apart a std::pair?
Does anyone know if there's a de-facto standard (i.e., TR1 or Boost) C++ function object for accessing the elements of a std::pair? Twice in the past 24 hours I've wished I had something like the keys function for Perl hashes. For example, it would be nice to run std::transform on a std::map object and dump all the keys (or values) to another container. I could certainly write such a function object but I'd prefer to reuse something that's had a lot of eyeballs on it.
boost::bind is what you are looking for. boost::bind(&std::pair<K, V>::second, _1); // returns the .second of a pair Example: typedef std::map<std::string, int> map_type; std::vector<int> values; // will contain all values map_type map; std::transform(map.begin(), map.end(), std::back_inserter(values), boost::bind(&map_type::value_type::second, _1));
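If you'd rather avoid Boost, a tiny hand-rolled function object does the same job. The name select2nd below is my own, echoing the non-standard SGI STL extension of the same name:

```cpp
#include <algorithm>
#include <iterator>
#include <map>
#include <string>
#include <vector>

// Function object returning the .second of any std::pair.
struct select2nd {
    template <class K, class V>
    const V& operator()(const std::pair<K, V>& p) const {
        return p.second;
    }
};

typedef std::map<std::string, int> map_type;

// Dump all mapped values of m into a vector, in key order.
std::vector<int> dump_values(const map_type& m) {
    std::vector<int> values;
    std::transform(m.begin(), m.end(),
                   std::back_inserter(values), select2nd());
    return values;
}
```

A select1st twin returning p.first would give you the Perl-style keys operation mentioned in the question.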
372,714
372,756
Problem with a constructor c++
So I have this code for these Constructors of the Weapon class: Weapon(const WeaponsDB * wepDB); Weapon(const WeaponsDB * wepDB_, int * weaponlist); ~Weapon(void); And I keep getting an error: 1>c:\users\owner\desktop\bosconian\code\bosconian\weapon.h(20) : error C2062: type 'int' unexpected and ensuing errors (more than listed): 1>c:\users\owner\desktop\bosconian\code\bosconian\weapon.h(21) : error C2059: syntax error : '(' 1>c:\users\owner\desktop\bosconian\code\bosconian\weapon.h(21) : error C2238: unexpected token(s) preceding ';' 1>c:\users\owner\desktop\bosconian\code\bosconian\weapon.h(33) : error C2327: '<unnamed-tag>::maxWeapons' : is not a type name, static, or enumerator 1>c:\users\owner\desktop\bosconian\code\bosconian\weapon.h(33) : error C2065: 'maxWeapons' : undeclared identifier 1>c:\users\owner\desktop\bosconian\code\bosconian\weapon.h(38) : warning C4094: untagged 'class' declared no symbols I'm a semi-newbie and I haven't been able to figure it out. Line 21 is the second constructor, the first one doesn't cause an error. Also, if I comment out this constructor I still get all the errors listed after that constructors. Any idea what the problem might be? Here is the preceding code for reference: #ifndef Weapon #define Weapon #include <allegro.h> #include <stdio.h> #include <iostream> using namespace std; class WeaponsDB; class MenuDriver; class Ammo; class Weapon { public: .....
#ifndef Weapon #define Weapon This is almost certainly the problem: after this #define, the preprocessor replaces every later occurrence of the token Weapon with nothing, so class Weapon becomes an unnamed class and all the constructor declarations fall apart, producing exactly the cascade of errors you're seeing. Call the guard macro WEAPON_H instead.
372,862
372,948
C++ programming style
I'm an old (but not too old) Java programmer, that decided to learn C++. But I have seen that much of C++ programming style, is... well, just damn ugly! All that stuff of putting the class definition in a header file, and the methods in a different source file- Calling functions out of nowhere, instead of using methods inside classes. All that just seems... wrong! So finally, is there any reason for me to continue with this massacre to the OOP, and anything that is good and righteous in programming, or can I just ignore that old-fashioned C++ conventions, and use my good Java programing style? By the way I'm learning C++, because I want to do game programing. Here is an example: In an C++ website I found a Windows implementation: class WinClass { public: WinClass (WNDPROC wndProc, char const * className, HINSTANCE hInst); void Register () { ::RegisterClass (&_class); } private: WNDCLASS _class; }; That class is located in a header file and the constructor: WinClass::WinClass (WNDPROC wndProc, char const * className, HINSTANCE hInst) { _class.style = 0; _class.lpfnWndProc = wndProc; // Window Procedure: mandatory _class.cbClsExtra = 0; _class.cbWndExtra = 0; _class.hInstance = hInst; // Owner of the class: mandatory _class.hIcon = 0; _class.hCursor = ::LoadCursor (0, IDC_ARROW); // Optional _class.hbrBackground = (HBRUSH) (COLOR_WINDOW + 1); // Optional _class.lpszMenuName = 0; _class.lpszClassName = className; // Mandatory } Is located at a .cpp source file. 
What I could just do is: class WinClass { public: WinClass (WNDPROC wndProc, char const * className, HINSTANCE hInst) { _class.style = 0; _class.lpfnWndProc = wndProc; // Window Procedure: mandatory _class.cbClsExtra = 0; _class.cbWndExtra = 0; _class.hInstance = hInst; // Owner of the class: mandatory _class.hIcon = 0; _class.hCursor = ::LoadCursor (0, IDC_ARROW); // Optional _class.hbrBackground = (HBRUSH) (COLOR_WINDOW + 1); // Optional _class.lpszMenuName = 0; _class.lpszClassName = className; // Mandatory } void Register () { ::RegisterClass (&_class); } private: WNDCLASS _class; }; And now the constructor is inside its class.
In addition to what others have said here, there are even more important problems: 1) Large translation units lead to longer compile times and larger object file sizes. 2) Circular dependencies! And this is the big one. And it can almost always be fixed by splitting up headers and source: // Vehicle.h #include <iostream> #include <vector> class Wheel { private: Car& m_parent; public: Wheel( Car& p ) : m_parent( p ) { std::cout << "Car has " << m_parent.numWheels() << " wheels." << std::endl; } }; class Car { private: std::vector< Wheel > m_wheels; public: Car() { for( int i=0; i<4; ++i ) m_wheels.push_back( Wheel( *this ) ); } int numWheels() { return m_wheels.size(); } }; No matter what order you put these in, one will always be lacking the definition of the other; even using forward declarations it won't work, since the function bodies use specifics about each class's symbol. But if you split them up into proper .h and .cpp files and use forward declarations, it will satisfy the compiler: //Wheel.h //------- class Car; class Wheel { private: Car& m_parent; public: Wheel( Car& p ); }; //Wheel.cpp //--------- #include <iostream> #include "Wheel.h" #include "Car.h" Wheel::Wheel( Car& p ) : m_parent( p ) { std::cout << "Car has " << m_parent.numWheels() << " wheels." << std::endl; } //Car.h //----- #include <vector> class Wheel; class Car { private: std::vector< Wheel > m_wheels; public: Car(); int numWheels(); }; //Car.cpp //------- #include "Car.h" #include "Wheel.h" Car::Car() { for( int i=0; i<4; ++i ) m_wheels.push_back( Wheel( *this ) ); } int Car::numWheels() { return m_wheels.size(); } Now the code that actually has to know specifics about the second class can just include the header file, which doesn't need to know specifics about the first class. Headers just provide the declarations while source files provide the definitions. Or, another way to say it: headers tell you what is there (what symbols are valid to use) and source tells the compiler what the symbols actually do.
In C++ you don't need anything more than a valid symbol to begin using whatever it is. Trust that C++ has a reason for this idiom, because if you don't you will make a lot of headaches for yourself down the line. I know :/
373,012
408,302
MFC Edit Box - Multiple Characters per Keystroke?
I am trying to create a simple dialog in MFC using Visual C++. My problem is that when I get the dialog on the screen and try to type in an Edit Box field, if I type the letter 'a' once, it appears in the edit box as 'aaaaaaaaaaaa' (that's 12 a's). Furthermore, if I try to navigate around in the box using the arrow keys, the caret moves 12 characters at a time. It's not just a display error, as the output from the edit box is still "aaaaaaaaaaaa". I'd post code, but there's really none to post. I added the edit box using the Toolbox in Visual Studio and assigned a variable to it in my class, so this isn't any sort of special edit box. If anyone has any thoughts as to what might be happening, it would be greatly appreciated. Unfortunately, I don't know where to begin. Thanks as always.
Are you capturing any events such as WM_KEYUP in your PreTranslateMessage() function or anywhere else in your app ? If you have overridden the default handling for keyboard events, it might cause the symptoms you are seeing.
373,462
373,502
Help with C++ List erase function
I'm trying to do a simple erase and keep getting errors. Here is the snippet of code for my erase: std::list<Mine*>::iterator iterMines = mines.begin(); for(int i = oldSizeOfMines; i >0 ; i--, iterMines++) { if(player->distanceFrom(*iterMines) < radiusOfOnScreen) { onScreen.push_back(*iterMines); iterMines = onScreen.erase(iterMines); iterMines--; } } I keep getting a compiler message: 1>c:\users\owner\desktop\bosconian\code\bosconian\environment.cpp(158) : error C2664: 'std::list<_Ty>::_Iterator<_Secure_validation> std::list<_Ty>::erase(std::list<_Ty>::_Iterator<_Secure_validation>)' : cannot convert parameter 1 from 'std::list<_Ty>::_Iterator<_Secure_validation>' to 'std::list<_Ty>::_Iterator<_Secure_validation>' 1> with 1> [ 1> _Ty=SpaceObject *, 1> _Secure_validation=true 1> ] 1> and 1> [ 1> _Ty=Mine *, 1> _Secure_validation=true 1> ] 1> and 1> [ 1> _Ty=SpaceObject *, 1> _Secure_validation=true 1> ] 1> No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called I'm puzzled because I believe I'm giving it the correct iterator. Mine is a subclass of SpaceObject (a second generation subclass that is) Does this have anything to do with it? And how would I fix it?
The problem is you are trying to use the iterator of mines as an iterator in the onScreen list. This will not work. Did you mean to call mines.erase(iterMines) instead of onScreen.erase(iterMines)?
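To make this concrete, here is a minimal, hedged sketch of the erase-while-iterating idiom the fix leads to, with the container and predicate simplified to plain ints; the names are illustrative, not from the question:

```cpp
#include <cassert>
#include <cstddef>
#include <list>

// Sketch of the erase-while-iterating idiom: remove every element matching
// a predicate from a std::list, always using the iterator returned by
// erase() so the loop never touches an invalidated iterator.
inline std::size_t erase_evens(std::list<int>& values) {
    std::size_t removed = 0;
    for (std::list<int>::iterator it = values.begin(); it != values.end(); ) {
        if (*it % 2 == 0) {
            it = values.erase(it);  // erase() returns the next valid iterator
            ++removed;
        } else {
            ++it;                   // only advance when nothing was erased
        }
    }
    return removed;
}
```

The key point matching the answer: the iterator passed to erase() must belong to the same container the erase is called on.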
373,662
373,782
Need a good unmanaged C++ OCX example
I need a very simple and clear example of how to create an OCX in unmanaged C++ code. Ideally, I'd like to consume it in Office, but any container (i.e. VB6, .NET WinForms) should be good. I am having trouble seeing how I can add controls to the OCX canvas... I have seen examples of opening dialogs from within the OCX's load event... but that is not what I want... I need the GUI in the OCX itself.
Have you looked at this Microsoft tutorial? It uses MFC. If you want to create a windowless control you would need to use ATL.
374,217
375,270
Convert C++ Header Files To Python
I have a C++ header that contains #define statements, Enums and Structures. I have tried using the h2py.py script that is included with Python to no avail (except giving me the #defines converted). Any help would be greatly appreciated.
I don't know h2py, but you may want to look at 'ctypes' and 'ctypeslib'. ctypes is included with python 2.5+, and is targeted at creating binary compatibility with c-structs. If you add ctypeslib, you get a sub-tool called codegen, which has a 'h2xml.py' script and a 'xml2py.py', the combination of which will auto-generate the python code you're looking for from C++ headers. ctypeslib: http://pypi.python.org/pypi/ctypeslib/0.5.4a h2xml.py will require another tool called gccxml (http://www.gccxml.org/HTML/Index.html); it's best to check out (via CVS) the latest version of gccxml and build it yourself (actually easier done than said). The pre-packaged version is old.
374,263
374,320
Porting Application from Solaris to Linux
I am to take up the task of porting a C++ networking application code base of quite a size from Solaris to Linux platform. The code also uses third party libraries like ACE. The application when written initially was not planned for a possible porting in future. I would like to get some advice and suggestions as to how I go about the task. What would be the best methods to follow. -Prabhu. S
"There is no such thing as a portable application only applications that have been ported" First start with using the same tools on both platforms, if you can. I.E. if the Solaris version has not been changed to use GCC and GNU make etc, I advise you to change this first and get the Solaris build working. You will find that you will fix compiler problems first and not try to fix them on Linux at the same time as trying to port the application. Second make sure you can get all the same libraries on each platform at the same version. I think you can get ACE for Linux. Make sure that the libraries at that version work on Solaris. This will limit compatibility problems. Once you have done that then the real work starts. You will need to compile each source file one at a time and find the functions that are not available in Linux. First look for a replacement that is available in both OSs. If there is no simple replacement then create two libraries one for Solaris and one for Linux. Create wrapper classes or functions to abstract the incompatibilities away. If this sounds like a lot of work - it is.
374,303
374,352
Getter and setter wrappers for TiXmlElement*'s
I am rewriting a project so that it uses getters and setters to reference the TiXmlElement*'s. However, I quickly run into problems that seem to be related to debug mode: Excerpt from my class's header: TiXmlElement *_rootElement; TiXmlElement *_dialogsElement; TiXmlElement *_dialogElement; TiXmlDocument _document; void setDocument (TiXmlDocument doc) { this->_document = doc; } void setRootElement (TiXmlElement * element) { this->_rootElement = element; } void setDialogsElement (TiXmlElement * element) { this->_dialogsElement = element; } TiXmlDocument getDocument () { return this->_document; } TiXmlElement* getRootElement () { return this->_rootElement; } TiXmlElement* getDialogsElement () { return this->_dialogsElement; } Excerpt from class constructor: DCXML::DCXML(const char *dialogMark,const char *dialogName,TiXmlDocument doc) { ... this->setDocument(doc); this->setRootElement(this->getDocument().FirstChildElement("dcxml")); this->setDialogsElement(this->getRootElement()->FirstChildElement("dialogs")); Excerpt from instantiating the class: TiXmlDocument doc(input.gettok(2,"\"").to_chr()); bool dcxmlFile = doc.LoadFile(); ... DCXML *dcxml = new DCXML(input.gettok(2).to_chr(),input.gettok(3).to_chr(),doc); Now for the weird part. This works up until this->setDialogsElement(this->getRootElement()->FirstChildElement("dialogs")); in the constructor. ->FirstChildElement("dialogs") throws a "CXX0039: Error: symbol is ambiguous" error in VS2008 when in debug mode. The weird part is IntelliSense picks up on the FirstChildElement method and the compiler doesn't throw any errors. What's even weirder is that when in release mode it just silently fails to get the dialogs element. What am I doing wrong? Or if you have successfully implemented getter/setter wrappers for TiXmlElement*'s, let me know how you did it!
For complete reference here's an excerpt from the XML file: <?xml version="1.0" encoding="utf-8"?> <dcxml> <dialogs> <dialog name="mediaplayer" center="" w="300" h="400" caption="Mamp 4.0 BETA" border="btmnzy"> </dialog> </dialogs> </dcxml> Feedback would be much appreciated as I am at a dead end :)
Make sure that TiXmlDocument getDocument () { return this->_document; } Will not deep copy its contained TiXmlElement's. Otherwise you return a temporary, use that in the constructor to set the root node, which then will be destructed already. I haven't looked in its API, but just be aware of such pitfalls. The reason for the ambiguous call is because: There are three overloads of FirstChildElement taking one argument: const TiXmlElement * FirstChildElement (const char *value) const // :1 const TiXmlElement * FirstChildElement (const std::string &_value) const // :2 TiXmlElement * FirstChildElement (const std::string &_value) // :3 You access the TiXmlElement through TiXmlElement& (using a TiXmlElement* pointer). But the version taking a const char* has an implicit object parameter of TiXmlElement const&. That is, a qualification conversion is required to make the call work. For the other versions taking a std::string const&, there are conversions required too: <implied obj param> <implicit obj param> <arg1> <param1> TiXmlElement& TiXmlElement const& char const* char const* // :1 TiXmlElement& TiXmlElement const& char const* std::string const& // :2 TiXmlElement& TiXmlElement& char const* std::string const& // :3 There is an ambiguity between the first and the third overload. An easy fix is to do this->setDialogsElement( this->getRootElement()->FirstChildElement(std::string("dialogs"))); Instead, which will call the last version. Another fix is to const_cast: this->setDialogsElement( const_cast<TiXmlElement const*>(this->getRootElement())-> FirstChildElement("dialogs")); Which will call the first version. As for why it only happens in DEBUG... I remember that TiXML has an option to disable the use of the STL. Maybe in release mode you disabled it (and thus the overloads taking std::string), but in debug mode you forgot?
374,322
374,395
Being specialised and keeping up
One of the things mentioned recently, maybe in the SO podcast or by Joel, was that the best way to succeed at business when you start out is to start specialised and concentrate on one thing only. If you say you're a jack of all trades, you're just another jack! If you say you're a specialist in - I think Joel's example was some type of faucets - then even though you think you've just removed most of your market, you've actually increased the odds of getting business in a specific market, as you'll stand out more against the rest; somebody somewhere will want you more than if you just say you do everything. So anyway, my question is: I'd like to specialise myself in something. I've been doing C/C++ for a long time but never really pegged myself as either win32, mfc, directx, wxWidgets, qt, boost, sdl, allegro, etc. I've tried everything and done different things in each; I've also started moving into actually writing for linux (and now osx) as opposed to just using it, so my opportunities have grown even further. My view was that the more I know, the less likely I am to become stagnant and stuck; if I know bits of each I can get a job anywhere and then learn more as I need it. Over the last couple of years I also took on .NET and then PHP/LAMP; I feel quite comfortable in either, but I guess in my heart I'm still a C++ guy (although I think I've learned more commercial things in PHP than I have with C++). The other issue I feel is that the world is becoming highly web-based, and technologies such as C++ are becoming less and less used (ignoring Google's new tool - can't remember the name) and other things like Flash and Flex are becoming more popular. I do agree that the tools we use to view the web are written in C++ or the like, but in terms of market share I think all the bases are already covered. I was wondering how people decide which peg to eventually pick and why; and is this for business reasons (i.e. it makes more money) or for selfish reasons (i.e. I feel complete by writing things in C++ as opposed to PHP)? Maybe I'm asking the wrong question; maybe I should be specialising in a specific product rather than a language as a service; didn't think of that... Cheers, Chris
I'd guess that most people choose a reason (interest, money) to head in a particular direction at the beginning of their careers, and the rest is largely serendipitous. I still try to keep a broad general knowledge across the technologies related to my current and imminent responsibilities (and those of my friends and coworkers, so I can understand our conversations). But I've specialized in those that are necessary to stay "ahead of the curve" in my specific domain, or are unrelated but interesting to me, and I've dabbled in still others by osmosis. You noted that "the world is becoming highly web-based, and technologies such as C++ are becoming less and less used." So you can either "follow the herd" and chase the new tools and methods, or you can find a niche in the areas that are creating them. I've chosen the latter because, although there may be fewer positions available, I find them more challenging and rewarding both intellectually and financially. Oh, and there's less competition! As you advance through your career, you'll find that many technologies are very similar, with a lot of overlap. Once you've covered the basics, it becomes less important to know any specific platform or language, and more important to understand the techniques and processes that lead to effective, robust, high-quality products. Good luck!
374,399
374,423
Why do we actually need Private or Protected inheritance in C++?
In C++, I can't think of a case in which I would like to inherit private/protected from a base class: class Base; class Derived1 : private Base; class Derived2 : protected Base; Is it really useful?
It is useful when you want to have access to some members of the base class, but without exposing them in your class interface. Private inheritance can also be seen as some kind of composition: the C++ faq-lite gives the following example to illustrate this statement class Engine { public: Engine(int numCylinders); void start(); // Starts this Engine }; class Car { public: Car() : e_(8) { } // Initializes this Car with 8 cylinders void start() { e_.start(); } // Start this Car by starting its Engine private: Engine e_; // Car has-a Engine }; To obtain the same semantic, you could also write the car Class as follow: class Car : private Engine { // Car has-a Engine public: Car() : Engine(8) { } // Initializes this Car with 8 cylinders using Engine::start; // Start this Car by starting its Engine }; However, this way of doing has several disadvantages: your intent is much less clear it can lead to abusive multiple inheritance it breaks the encapsulation of the Engine class since you can access its protected members you're allowed to override Engine virtual methods, which is something you don't want if your aim is a simple composition
374,670
375,071
Returning Japanese characters via char* in an Excel XLOPER
I am retrieving Japanese characters from a data source and I want to return this data to Excel in an XLOPER. I am using a Japanese version of Excel 2003 (hence XLOPERs and not XLOPER12s). wchar_t* pszW = OLE2W(bstrResult); //I have the data I am trying to copy in a CComBSTR ULONG ulSize = ::WideCharToMultiByte( CP_THREAD_ACP, 0, pszW, -1, NULL, 0, NULL, NULL ); if ( ulSize ) { char* tmp = new char[ulSize + 1]; tmp[ulSize]='\0'; ::WideCharToMultiByte( CP_THREAD_ACP, 0, pszW, -1, LPSTR( tmp ), ulSize, NULL, NULL ); pszReturn = tmp; } wchar_t* pwszOut = new wchar_t[bstrResult.Length () + 1]; //now turn it back to test that that the correct code page was used. For debugging purposes only! ::MultiByteToWideChar (CP_THREAD_ACP,0, LPSTR(pszReturn),-1,pwszOut,bstrResult.Length () + 1); //pwszOut and bstrResult look the same in the debugger delete [] pwszOut; The parameter pszReturn is assigned to an XLOPER. The problem I have is that “アフリカの女王” is displayed as “ƒAƒtƒŠƒJ‚Ì—‰¤” in Excel. Manually setting the code page to 932 yields the same results as CP_THREAD_ACP so I think that that part is correct. Any help would be greatly appreciated.
User Error! The above code is good. The problem is that Excel was using the wrong code page. I hadn't set the language for non-unicode programs to Japanese in Control Panel. The code now works for the English version of Excel too. That was a day and a half well spent...
374,715
374,916
VC++ 6.0 access violation when run in debugger
I am trying to add enhancements to a 4 year old VC++ 6.0 program. The debug build runs from the command line but not in the debugger: it crashes with an access violation inside printf(). If I skip the printf, then it crashes in malloc() (called from within fopen()) and I can't skip over that. This means I cannot run in the debugger and have to rely on the old printf statements to see what's going on. This obviously makes it a lot harder. Any idea why printf() and malloc() would fail when running under the VC++ debugger? I am no good at this low level stuff! Here is the call stack after the access violation: _heap_alloc_dbg(unsigned int 24, int 2, const char * 0x0046b3d8 `string', int 225) line 394 + 8 bytes _nh_malloc_dbg(unsigned int 24, int 0, int 2, const char * 0x0046b3d8 `string', int 225) line 242 + 21 bytes _malloc_dbg(unsigned int 24, int 2, const char * 0x0046b3d8 `string', int 225) line 163 + 27 bytes _lock(int 2) line 225 + 19 bytes _getstream() line 55 + 7 bytes _fsopen(const char * 0x00468000 `string', const char * 0x00466280 `string', int 64) line 61 + 5 bytes fopen(const char * 0x00468000 `string', const char * 0x00466280 `string') line 104 + 15 bytes open_new_log(const char * 0x00468000 `string') line 66 + 14 bytes log_open(const char * 0x00468000 `string', int 0) line 106 + 9 bytes Xlog_open(const char * 0x00468000 `string', int 0) line 51 + 13 bytes service_start(unsigned long 1, char * * 0x009a0e50) line 3152 + 12 bytes service_init2(char * 0x00471fcc char * NTPROGRAM, char * 0x004723c4 char * NTSERVICE, char * 0x00466540 `string', unsigned long 1, char * * 0x009a0e50) line 508 + 13 bytes service_init(char * 0x00471fcc char * NTPROGRAM, char * 0x004723c4 char * NTSERVICE, unsigned long 2, char * * 0x009a0e50) line 548 main(unsigned long 2, char * * 0x009a0e50) line 3131 mainCRTStartup() line 206 + 25 bytes KERNEL32! 
7c817067() Here is the debug disassembly up to the operation that fails: 0041EA7E jmp _heap_alloc_dbg+2B3h (0041eb23) 0041EA83 mov edx,dword ptr [_lTotalAlloc (004b4294)] 0041EA89 add edx,dword ptr [nSize] 0041EA8C mov dword ptr [_lTotalAlloc (004b4294)],edx 0041EA92 mov eax,[_lCurAlloc (004b429c)] 0041EA97 add eax,dword ptr [nSize] 0041EA9A mov [_lCurAlloc (004b429c)],eax 0041EA9F mov ecx,dword ptr [_lCurAlloc (004b429c)] 0041EAA5 cmp ecx,dword ptr [_lMaxAlloc (004b42a0)] 0041EAAB jbe _heap_alloc_dbg+249h (0041eab9) 0041EAAD mov edx,dword ptr [_lCurAlloc (004b429c)] 0041EAB3 mov dword ptr [_lMaxAlloc (004b42a0)],edx 0041EAB9 cmp dword ptr [_pFirstBlock (004b4298)],0 0041EAC0 je _heap_alloc_dbg+25Fh (0041eacf) 0041EAC2 mov eax,[_pFirstBlock (004b4298)] 0041EAC7 mov ecx,dword ptr [pHead] 0041EACA mov dword ptr [eax+4],ecx Here is our source that calls fopen() and fails in malloc(): FILE *open_new_log( const char *logfile ) { FILE *fp; int retry = 0; while( ( fp = fopen( logfile, "w" ) ) == NULL && ++retry < 300 ) Sleep( 1000 ); return( fp ); } The error I get is Unhandled exception in PISCOOP.exe: 0xC0000005: Access Violation Regards, --- Alistair.
You can use _CrtSetDbgFlag() to enable a bunch of useful heap debugging techniques. There's a host of other CRT debugging functions available that should help you track down where your problem is.
374,832
377,579
(QT/kdelib) how to know the end of a command in KTerminalInterface?
I have a KTerminalInterface (from KParts) in my Qt/kdelibs app, and I run a command within it. The problem is: how can I know when the program has terminated? I need to know because at its end I must reset some variables. Has anyone had a similar problem?
It isn't possible to communicate with the process once launched via TerminalInterface in KDE 4, since KonsolePart (which implements TerminalInterface) no longer has processExited() or receivedData() signals. In KDE 4, KonsolePart is for interactive terminal sessions, not for your app to use. In KDE 4, use KProcess to spawn an external command from your app, communicate with it, and detect when it terminates. See the QProcess parts of its API.
375,141
375,171
How to pass method result as parameter to base class constructor in C++?
I'm trying to achieve something like this: class Base { public: Base(string S) { ... }; } class Derived: Base { public: int foo; string bar() { return stringof(foo); // actually, something more complex }; Derived(int f) : foo(f), Base(bar()) { }; } Now, this doesn't work as I want, because bar() is called in the Derived constructor before foo is initialized. I considered adding a static function similar to bar() which takes foo as a parameter - and using that in the initialization list, but thought I'd ask if there were any other techniques that could be used to dig myself out of this one... Edit: Thanks for the feedback - here's how I was going to handle the static function. Not sure if the overload between a static and non-static function is too clever, but... class Derived: Base { public: int foo; static string bar(int f) { return stringof(f); // actually, something more complex } string bar() { return bar(foo); }; Derived(int f) : Base(bar(f)) , foo(f) { }; }
Yes, using a function (static class method or regular function) that takes foo as a parameter and returns a string is a good solution. You can call this same function from Derived::bar to prevent code duplication. So, your constructor would look like this: Derived(int f) : Base(stringof(f)), foo(f) {} I place the call to the Base constructor first in the list to emphasize the order in which the initializations occur. The ordering of the initializer list has no effect as all class members are initialized in the order that they are declared in the class body. This is a very clean, functional approach to the problem. However, if you still would like to weigh alternatives then consider using composition instead of inheritance for the relationship between the Derived and Base classes: class Base { public: Base(string S) { ... } void bat() { ... } }; class Derived { Base *base; int foo; public: Derived(int f) : base(NULL), foo(f) { base = new Base(bar()); } ~Derived() { delete base; } string bar() { return stringof(foo); // actually, something more complex } void bat() { base->bat(); } }; You will need to consider the pros and cons for your specific situation. With Derived holding a reference to Base you gain greater control over the initialization order.
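Here is a compilable sketch of the static-helper pattern described above. Since stringof() from the question isn't a real function, it is replaced with a std::ostringstream conversion, and the class contents (label_, the "value=" format) are invented for illustration:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Base stores the string computed from the derived member's value.
class Base {
public:
    explicit Base(const std::string& s) : label_(s) {}
    const std::string& label() const { return label_; }
private:
    std::string label_;
};

class Derived : public Base {
public:
    // Safe: the static helper uses only its argument, so it can run
    // before Base (and before foo) is initialized.
    explicit Derived(int f) : Base(bar(f)), foo(f) {}

    int foo;

    static std::string bar(int f) {
        std::ostringstream os;      // stand-in for the question's stringof()
        os << "value=" << f;
        return os.str();
    }
    std::string bar() const { return bar(foo); }  // non-static overload
};
```

This matches the questioner's "Edit": the static and non-static bar() overload cleanly, and the initializer list passes the raw argument rather than the not-yet-initialized member.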
375,598
375,608
Finding the owner of an STL iterator
Is there any way that I can find the container pointed to by an iterator? Specifically, I want to be able to find the std::vector pointed to by a particular std::vector::iterator so that I can check the range, without having to actually pass references to that vector around. If (as I suspect) the answer is no, why not? edit: thanks for a number of quick and (largely) accurate answers. Evan Teran nails it. I wasn't thinking about optimization at all, but it's obvious now. A couple of people asked what I want to do this for. It's nothing terribly important. I have an object which is initialized with a vector and an iterator pointing into the vector. It would be cute and convenient if I could initialize the object just with an iterator because then I could convert vector::iterators directly to this object (this sounds strange but does make sense in the particular case). But it's not crucial at all.
I don't believe so. If iterators had to keep a reference/pointer to their owning container, then it would be impossible for them to be optimized down to a lightweight pointer (which can be done with containers guaranteeing contiguous storage like vectors and such).
375,647
375,683
Is requiring a certain order for #includes in c++ a sign of bad library/header design?
I've used some very large scale systems and never seen a required order, but came across it recently. Does the STL or STD library or even Boost have any cases where certain includes must come in a certain order?
Does the STL or STD library or even Boost have any cases where certain includes must come in a certain order? For the standard, the answer is emphatically, no. I imagine the same is true for Boost, though I haven't looked it up. From the C standard: Standard headers may be included in any order; each may be included more than once in a given scope, with no effect different from being included only once, except that the effect of including <assert.h> depends on the definition of NDEBUG (see 7.2). the C++ standard has similar wording. My preference is that headers should include their own dependencies, but I've worked with people who believe this to be 'wasteful'. In my opinion, not having headers include their dependencies is a worthless early optimization.
375,722
375,736
Boost.format and wide characters
Is there a way to get boost.format to use and return wide (Unicode) character strings? I'd like to be able to do things like: wcout << boost::format(L"...") % ... and wstring s = boost::str(boost::format(L"...") % ...) Is this possible?
format_fwd.hpp contains this typedef: typedef basic_format<wchar_t > wformat; I think this will get you started. This works here: std::wcout << boost::wformat(L"...") % ... Also the boost::str works using wformat.
376,136
376,172
Changing the value of stdout in a C++ program
I have a Windows C++ program that is doing something like: FILE* pf = ...; *stdout = *pf; // stdout is defined in stdio.h I'm looking for an explanation about what happens when you change the value of the stdout file handle. Is this just a way of redirecting stdout? -cr
If you change stdout by assignment instead of by using the tool designated (in C, freopen() as Adam Rosenfield said - and by extension, in C++), then you leave yourself open to all sorts of liabilities. It is not clear that cout will also be redirected. You will likely leak a file descriptor (which may not matter). You might not flush the original stdout properly - losing information. You might leak memory associated with the original file pointer (which again may not matter). If anything closes pf, then you are liable for double-free errors (crashes). It is far better to do the job cleanly. (Demo code isn't necessarily written by, or even scrutinized by, the most experienced people in a vendor's coding team. If it looks dubious, that may be because it is dubious.)
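For comparison, here is a minimal sketch of the clean approach: redirecting stdout with freopen() rather than by struct assignment, then verifying the redirected output through a separate handle. The file name and message are arbitrary examples:

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>

// freopen() reuses the existing stdout FILE object instead of overwriting
// the struct by assignment, so buffering and the underlying descriptor
// stay consistent. Note this demo leaves stdout redirected afterwards.
inline bool redirect_stdout_demo(const char* path) {
    if (std::freopen(path, "w", stdout) == NULL)
        return false;
    std::printf("hello redirect\n");
    std::fflush(stdout);                 // push the text out to the file

    // Read the file back through a separate handle to verify the output.
    std::FILE* in = std::fopen(path, "r");
    if (in == NULL)
        return false;
    char buf[64] = {0};
    if (std::fgets(buf, sizeof buf, in) == NULL) {
        std::fclose(in);
        return false;
    }
    std::fclose(in);
    return std::strcmp(buf, "hello redirect\n") == 0;
}
```

Unlike the *stdout = *pf assignment, this leaks no FILE, loses no buffered output, and cannot double-free the stream.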
376,154
376,241
Is std::map + std::tr1::bind + standard algorithms worthwhile?
This is a follow-up to my question from yesterday. I have Scott Meyers' warning about write-only code on my mind. I like the idea in principle of using standard algorithms to access the keys or values of a std::map, but the syntax required is a little baroque IMHO. Let's say I want to dump all the keys of a map to a vector. Given the following declarations, typedef std::map<int, int> MyMap; MyMap m; std::vector<int> v; which code is more maintainable (i.e., potentially less confusing)? Option #1: std::transform(m.begin(), m.end(), std::back_inserter(v), std::tr1::bind(&MyMap::value_type::first, _1)); Option #2: for (MyMap::iterator i = m.begin(); i != m.end(); ++i) { v.push_back(i->first); } Option 1 is more standard library-ish but I have to mentally decompose it to understand what's going on. Option 2 seems easier to read at the expense of a possible small runtime penalty. I'm not hurting for CPU time so I'm leaning toward option 2. Do you guys agree? Is there a third option I should consider? P.S. Over the course of writing this question I concluded that the best way (for my project) to read the keys of a std::map is to store them off in a side container and iterate over that. The maintainability question still stands though.
Clarity always beats clever. Do what you can read later. You're not alone in thinking that the standard code is a little obtuse. The next C++ standard will introduce lambda functions so you can write more legible code with the standard algorithms.
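To show the trade-off side by side, here is a hedged sketch of both options; the second version assumes the C++11 lambdas the answer anticipates, which make std::transform read nearly as plainly as the loop:

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <map>
#include <vector>

typedef std::map<int, int> MyMap;

// Option 2 from the question: the explicit loop.
inline std::vector<int> keys_loop(const MyMap& m) {
    std::vector<int> v;
    for (MyMap::const_iterator i = m.begin(); i != m.end(); ++i)
        v.push_back(i->first);
    return v;
}

// What the answer anticipates: with a lambda (C++11), the algorithm
// version no longer needs tr1::bind and a pointer-to-member.
inline std::vector<int> keys_transform(const MyMap& m) {
    std::vector<int> v;
    std::transform(m.begin(), m.end(), std::back_inserter(v),
                   [](const MyMap::value_type& p) { return p.first; });
    return v;
}
```

Both produce the keys in the map's sorted order; the choice comes down to which one the next reader decodes faster.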
376,232
376,287
Forward declaration problem
I'm using C++/CLI on .NET 2.0. I have 2 forms; the first one is declared as follows: #include "stdafx.h" namespace myNamespace{ public ref class frmMain : public System::Windows::Forms::Form { /*... snip ...*/ public void addNewRow(String^ text){ /*... snip... */ } public void launchSubForm() { SubForm^ sf = gcnew SubForm(this); sf->Show(); } }; } The second one goes like this: #include "stdafx.h" namespace myNamespace{ ref class frmMain; public ref class SubForm : public System::Windows::Forms::Form { frmMain^ myMain; SubForm ( frmMain^ pMain){ myMain = pMain; } /*... snip ...*/ public void exportRows(String^ text){ /*... snip... */ } myMain->addNewRow("myNewText"); <--- This line causes a compile error }; } In stdafx.h I have: /*... snip... */ #include "SubForm.h" #include "frmMain.h" Now to the question! The line in SubForm causes the compiler to tell me "use of undefined type myNamespace::frmMain". I really have no clue why the "ref class frmMain" doesn't solve this problem.
This is because both of these header files include "stdafx.h", and stdafx.h includes "SubForm.h" before "frmMain.h". So, in "SubForm.h", the compiler wants to define SubForm before frmMain has been defined, leading to the error. The proper way to solve this problem is to keep all of the code for your classes in the appropriate source file, and not in the header. If your header file simply declares: public void exportRows(String^ text); then you can define: public void SubForm::exportRows(String^ text) { /*... snip ...*/ myMain->addNewRow("myNewText"); } in SubForm.cpp, and everything should work out splendidly. edit: Good Object-Oriented design involves separating interface from implementation, and the best way to to this in C++ is to keep interfaces in header files and implementation code in the corresponding source files. The bottom line is that your header files should contain only declarations. Think of these as the interface to your classes. The header file shows only the function signatures that your class will implement. The source files, on the other hand, contain all of the definitions, which are the implementation of your classes.
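The question is C++/CLI, but the same cure applies in standard C++. This single-file sketch (with invented member names) simulates the layout: the class bodies play the role of the headers, with a forward declaration where the "header" would have one, and the member using the full type is defined after both classes, where the ".cpp" would be:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

class Main;  // forward declaration: enough to hold a Main* member

class SubForm {
public:
    explicit SubForm(Main* m) : myMain(m) {}
    void exportRows(const std::string& text);  // declared only, like a header
private:
    Main* myMain;
};

class Main {
public:
    void addNewRow(const std::string& text) { rows.push_back(text); }
    std::size_t rowCount() const { return rows.size(); }
private:
    std::vector<std::string> rows;
};

// "Source file": here Main is a complete type, so the call through
// the forward-declared pointer now compiles.
void SubForm::exportRows(const std::string& text) {
    myMain->addNewRow(text);
}
```

The forward declaration lets SubForm hold the pointer; moving the call out of the class body defers it until Main is fully defined, which is exactly what moving code into SubForm.cpp achieves.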
376,296
4,634,756
Building both DLL and static libs from the same project
I have a number of native C++ libraries (Win32, without MFC) compiling under Visual Studio 2005, and used in a number of solutions. I'd like to be able to choose to compile and link them as either static libraries or DLLs, depending on the needs of the particular solution in which I'm using them. What's the best way to do this? I've considered these approaches: 1. Multiple project files Example: "foo_static.vcproj" vs "foo_dll.vcproj" Pro: easy to generate for new libraries, not too much manual vcproj munging. Con: settings, file lists, etc. in two places get out of sync too easily. 2. Single project file, multiple configurations Example: "Debug | Win32" vs "Debug DLL | Win32", etc. Pro: file lists are easier to keep in sync; compilation options are somewhat easier to keep in sync Con: I build for both Win32 and Smart Device targets, so I already have multiple configurations; I don't want to make my combinatorial explosion worse ("Static library for FooPhone | WinMobile 6", "Dynamic library for FooPhone | WinMobile 6", "Static library for BarPda | WinMobile 6", etc. Worse Con: VS 2005 has a bad habit of assuming that if you have a configuration defined for platform "Foo", then you really need it for all other platforms in your solution, and haphazardly inserts all permutations of configuration/platform configurations all over the affected vcproj files, whether valid or not. (Bug filed with MS; closed as WONTFIX.) 3. Single project file, selecting static or dynamic via vsprops files Example: store the appropriate vcproj fragments in property sheet files, then apply the "FooApp Static Library" property sheet to config/platform combinations when you want static libs, and apply the "FooApp DLL" property sheet when you want DLLs. Pros: This is what I really want to do! Cons: It doesn't seem possible. 
It seems that the .vcproj attribute that switches between static and dynamic libraries (the ConfigurationType attribute of the Configuration element) isn't overrideable by the .vsprops file. Microsoft's published schema for these files lists only <Tool> and <UserMacro> elements. EDIT: In case someone suggests it, I've also tried a more "clever" version of #3, in which I define a .vsprops containing a UserMacro called "ModuleConfigurationType" with a value of either "2" (DLL) or "4" (static library), and changed the configuration in the .vcproj to have ConfigurationType="$(ModuleConfigurationType)". Visual Studio silently and without warning removes the attribute and replaces it with ConfigurationType="1". So helpful! Am I missing a better solution?
I may have missed something, but why can't you define the DLL project with no files, and just have it link the lib created by the other project? And, with respect to settings, you can factor them out into vsprops files...
376,335
376,409
Are data structures an appropriate place for shared_ptr?
I'm in the process of implementing a binary tree in C++. Traditionally, I'd have a pointer to left and a pointer to right, but manual memory management typically ends in tears. Which leads me to my question... Are data structures an appropriate place to use shared_ptr?
I think it depends on where you'd be using them. I'm assuming that what you're thinking of doing is something like this: template <class T> class BinaryTreeNode { //public interface ignored for this example private: shared_ptr<BinaryTreeNode<T> > left; shared_ptr<BinaryTreeNode<T> > right; T data; } This would make perfect sense if you're expecting your data structure to handle dynamically created nodes. However, since that's not the normal design, I think it's inappropriate. My answer would be that no, it's not an appropriate place to use shared_ptr, as the use of shared_ptr implies that the object is actually shared - however, a node in a binary tree is not ever shared. However, as Martin York pointed out, why reinvent the wheel - there's already a smart pointer type that does what we're trying to do - auto_ptr. So go with something like this: template <class T> class BinaryTreeNode { //public interface ignored for this example private: auto_ptr<BinaryTreeNode<T> > left; auto_ptr<BinaryTreeNode<T> > right; T data; } If anyone asks why data isn't a shared_ptr, the answer is simple - if copies of the data are good for the client of the library, they pass in the data item, and the tree node makes a copy. If the client decides that copies are a bad idea, then the client code can pass in a shared_ptr, which the tree node can safely copy.
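As a sketch of this design in later C++: auto_ptr was subsequently deprecated and removed, so the equivalent exclusive-ownership node today uses std::unique_ptr (C++11). The size() helper is added purely for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>

// Each node exclusively owns its children, mirroring the answer's point
// that tree children are never shared: destroying the root frees the
// entire tree with no manual delete calls.
template <class T>
struct BinaryTreeNode {
    explicit BinaryTreeNode(const T& value) : data(value) {}

    std::unique_ptr<BinaryTreeNode<T> > left;   // exclusive ownership
    std::unique_ptr<BinaryTreeNode<T> > right;
    T data;

    std::size_t size() const {                  // nodes in this subtree
        std::size_t n = 1;
        if (left)  n += left->size();
        if (right) n += right->size();
        return n;
    }
};
```

unique_ptr expresses the same "owned, not shared" intent as the answer's auto_ptr version, but with safe move semantics instead of auto_ptr's surprising copy-that-transfers behavior.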
376,698
378,379
How to load default settings with KConfig on kdelib?
I have a question about KConfig usage. I'm able to write and read settings in my .kde4/share/config/_appname_rc configuration file like this: KConfig basicconf; KConfigGroup conf = KConfigGroup(basicconf.group("Settings")); conf.writeEntry("filepath",QString("/path/")); basicconf.sync(); But I don't understand how to use a "default" configuration file to read from the first time I run my application, or in case the application settings need to be reset. I'd prefer not to use KConfig XT because my project is tiny and KConfigXT with kcfgc* files seems excessive. Thank you in advance
First, this: KConfigGroup conf = KConfigGroup(basicconf.group("Settings")); can be written more clearly, at least imho, as: KConfigGroup conf(&basicconf, "Settings"); Also note that "General" is the most common "generic" group name used. Anyways... You can install a default config file with your application; install it to $PREFIX/share/config/, which is easily achieved with this in your CMakeLists.txt file: install(FILES <your config file> DESTINATION ${CONFIG_INSTALL_DIR}) KConfig handles all the magic of merging from there; you don't have to do a thing. As for KConfigXT being overkill, there are many benefits to using it, including automating your config dialogs, ensuring bounds and legal values are enforced, etc. Writing a small file, popping an entry in the CMakeLists.txt file is usually much less work than doing what it gives you for free by hand. There's a great tutorial on TechBase on this.
376,966
377,049
Using C/C++ static libraries from iPhone ObjectiveC Apps
Is it possible to have a C static library API, which uses C++ internally and hides this from users of the library? I have written a portable C++ library I wish to statically link to an iPhone application. I have created an Xcode project using the Mac OS X 'static library' template, and copied the source across, as well as writing a C wrapper (to deal with exceptions) using (extern "C"). I am trying to use the generated library (.a file) in another Cocoa iPhone application. Everything works well if I use (.mm) extensions on the calling ObjectiveC file and (.cpp) on the implementation class in the library. But I get unresolved symbols on linking when I try and change the wrapper file to a (.c) extension, even though all the wrapper functions are plain C functions. Just because C++ is used internally in a library, does that mean that externally it still must be treated as a C++ program? Is there not any way to enforce this abstraction? Edit: Thanks for the replies, I had been using extern "C", I was just unsure about what configurations were needed in the calling project. i.e. if the calling project would need to know whether it used C++ or could be ignorant and think it's a purely C library. It would seem I cannot, and I must use (.mm) files on my ObjectiveC classes.
It's too hard to do this in comments, so I'm just going to demonstrate for you quickly what the linking issues are that you're having. When Xcode encounters files, it uses build rules based on the suffix to decide which compiler to use. By default, gcc links the files to the standard C library, but does not link with the standard C++ library. Archive files (static libraries) have no linking resolution done at all. They are basically an archive of object files which need to be linked. Since you have no .mm or .cpp files in your project, g++ is never called and your files are never linked to the standard libraries. To correct this, just add the standard C++ libraries to your other linker flags in your Xcode project, or just simply add them to the pre-defined other flags option as -l (e.g., -lstdc++). Here is a quick demonstration: stw.h: #ifdef __cplusplus extern "C" #endif void show_the_world(void); stw.cpp: #include <iostream> #include "stw.h" using namespace std; extern "C" void show_the_world() { cout << "Hello, world!\n"; } Build the library: $ g++ -c stw.cpp -o stw.o -O0 -g $ ar rcs stw.a stw.o Using the library from a C application: myapp.c: #include "stw.h" int main() { show_the_world(); return 0; } Building the C application: $ gcc -o myapp myapp.c stw.a -lstdc++ -g -O0 $ ./myapp Hello, world! $ If you try to compile without the -lstdc++ you will get all the unresolved issues because the C compiler has absolutely NO idea that it should link to the C++ runtime (and why would it, right!?!?) so you have to add this manually. The other option you have is to change the build rule for your project... instead of having Xcode use gcc to build .c and .m files, tell it to use g++ and your issues will be resolved.
377,094
410,051
GDI+ Dithering Problem
I have a C++ application that uses the Win32 API for Windows, and I'm having a problem with GDI+ dithering when I don't see why there should be any. I have a custom control (custom window). When I receive the WM_PAINT message, I draw some polygons using FillPolygon on a Graphics device. This Graphics device was created using the HDC from BeginPaint. When the polygons appear on the screen, though, they are dithered instead of transparent, and only seem to show a few colors (maybe 256?). When I do the same thing in C# using the .NET interface into GDI+, it works fine, which leaves me wondering what's going on. I'm not doing anything special; this is a simple example that should work fine, as far as I know. Am I doing something wrong? Edit: Never mind. It only happens over Remote Desktop, even though the C# example doesn't dither over Remote Desktop. Remote Desktop is set at 32-bit color, so I don't know what's up with that.
Hmm... The filling capabilities are determined by the target device. When working over Remote Desktop, AFAIK Windows substitutes the display driver, so that can change the supported features of the display. When drawing in WM_PAINT, you actually draw directly on the screen surface, while .NET usually uses double buffering (draws to an in-memory bitmap and then blits the entire bitmap). There are also some settings in GDI+ that affect the drawing quality. Maybe there are different defaults for on-screen, off-screen and remote painting?
377,178
377,208
What does the C++ new operator do other than allocation and a ctor call?
What are all the other things the new operator does other than allocating memory and calling a constructor?
The C++ standard has this to say about the single object form (the form usually used) of the new operator from the <new> header: Required behavior: Return a nonnull pointer to suitably aligned storage (3.7.3), or else throw a bad_alloc exception. This requirement is binding on a replacement version of this function. Default behavior: — Executes a loop: Within the loop, the function first attempts to allocate the requested storage. Whether the attempt involves a call to the Standard C library function malloc is unspecified. — Returns a pointer to the allocated storage if the attempt is successful. Otherwise, if the last argument to set_new_handler() was a null pointer, throw bad_alloc. — Otherwise, the function calls the current new_handler (18.4.2.2). If the called function returns, the loop repeats. — The loop terminates when an attempt to allocate the requested storage is successful or when a called new_handler function does not return. The standard has a lot of other stuff to say about the new operator and dynamic memory allocation (an awful lot to say), but I think the "Default behavior" list sums up the basics of the new operator pretty well.
377,322
377,325
Can someone explain the c++ FAILED function?
I've seen a lot of example c++ code that wraps function calls in a FAILED() function/method/macro. Could someone explain to me how this works? And if possible does anyone know a c# equivalent?
It generally checks COM function errors. But checking any function that returns a HRESULT is what it's meant for, specifically. FAILED returns a true value if the HRESULT value is negative, which means that the function failed ("error" or "warning" severity). Both S_OK and S_FALSE are >= 0 and so they are not used to convey an error. With "negative" I mean that the high bit is set for HRESULT error codes, i.e., their hexadecimal representation, which can be found in, e.g., winerror.h, begins with an 8, as in 0x8000FFFF.
377,973
377,984
Does anybody have any experience with SSEPlus?
SSEPlus is an open source library from AMD for unified handling of SSE processor extensions. I'm considering using this library for my next small project and would like to know if anybody has experience with it. Can I use it on Intel machines? Any performance issues in comparison to direct SSE calls? Any issues on 64-bit machines? What projects other than Framewave use it?
Yes, you can use it on Intel machines too. Performance should not differ, except that it adds checks for supported processor features, which might cost a little.
378,207
378,235
C++ How can I iterate till the end of a dynamic array?
Suppose I declare a dynamic array like int *dynArray = new int [1]; which is initialized with an unknown number of int values at some point. How would I iterate till the end of my array of unknown size? Also, if it reads a blank space, would its corresponding position in the array end up junked? Copying input from the user's post below: Thing is: a) I'm not allowed to use the STL (means: no <vector>) b) I want to decompose a string into its characters and store them. So far I wanted to use a function like this: string breakLine (string line){ int lineSize = line.size(); const char *aux; aux=line.data(); int index=0; while (index<=lineSize){ mySynonyms[index]=aux[index]; index++; } } I thought that the array aux would end up junked if there was a large blank space between the two numbers to be stored (apparently not). And I was wondering if there was a way to iterate till an undefined end in this type of array. Thanks for your answers.
No portable way of doing this. Either pass the size together with the array, or, better, use a standard container such as std::vector
378,213
378,290
Debugging GUI Applications in C++
Background: I'm currently debugging an application written over a custom-built GUI framework in C++. I've managed to pin down most bugs, but the bugs I'm having the most trouble with tend to have a common theme. All of them seem to be to do with the screen refreshing, redrawing or updating to match provided data. This is a pain to debug, because I can't break on every refresh, and most of this stuff is time-sensitive, so breakpoints occasionally "fix" the bug. Q: Does anyone have any tips for debugging Windows-based GUIs, especially regarding the refreshing of individual components?
I agree with dual monitors or even remote debugging to reduce interfering with the messages. I also highly recommend Spy utilities. These let you see what messages are being sent in the system. One such program is Winspector. http://www.windows-spy.com/
378,296
379,059
Why is my WM_UNICHAR handler never called?
I have an ATL control that I want to be Unicode-aware. I added a message handler for WM_UNICHAR: MESSAGE_HANDLER( WM_UNICHAR, OnUniChar ) But, for some reason, the OnUniChar handler is never called. According to the documentation, the handler should first be called with "UNICODE_NOCHAR", on which the handler should return TRUE if you want to receive UTF-32 characters. But, as I said, the handler is never called. Is there anything special that needs to be done to activate this?
What are you doing that you think should generate a WM_UNICHAR message? If your code (or the ATL code) ultimately calls CreateWindowW, then your window is already Unicode aware, and WM_CHAR messages will be UTF-16 format. The documentation is far from clear on when, exactly, a WM_UNICHAR message gets generated, but from what I can gather in very limited poking around on Google Groups and on the Internet it looks like it gets sent by 3rd party apps and not by Windows itself, unless the Window is an ANSI window (CreateWindowA and all that). Have you tried manually sending a WM_UNICHAR message to your window to see what happens? If you get the message then there's nothing wrong with your message dispatch code and there's just nothing happening that would cause WM_UNICHAR. You can also check with Spy++ and see whether you're getting that message, though I suspect it's just not being sent.
378,380
378,399
How can I debug a win32 process that unexpectedly terminates silently?
I have a Windows application written in C++ that occasionally evaporates. I use the word evaporate because there is nothing left behind: no "we're sorry" message from Windows, no crash dump from the Dr. Watson facility... On the one occasion the crash occurred under the debugger, the debugger did not break---it showed the application still running. When I manually paused execution, I found that my process no longer had any threads. How can I capture the reason this process is terminating?
You could try using the adplus utility in the windows debugging tool package. adplus -crash -p yourprocessid The auto dump tool provides mini dumps for exceptions and a full dump if the application crashes.
378,515
378,576
What's the deal with boost.asio and file i/o?
I've noticed that boost.asio has a lot of examples involving sockets, serial ports, and all sorts of non-file examples. Google hasn't really turned up a lot for me that mentions whether asio is a good or valid approach for doing asynchronous file I/O. I've got gobs of data I'd like to write to disk asynchronously. This can be done with native overlapped I/O in Windows (my platform), but I'd prefer a platform-independent solution. I'm curious whether: boost.asio has any kind of file support; boost.asio file support is mature enough for everyday file I/O; file support will ever be added. What's the outlook for this?
Has boost.asio any kind of file support? Starting with (I think) Boost 1.36 (which contains Asio 1.2.0) you can use [boost::asio::]windows::stream_handle or windows::random_access_handle to wrap a HANDLE and perform asynchronous read and write methods on it that use the OVERLAPPED structure internally. User Lazin also mentions boost::asio::windows::random_access_handle that can be used for async operations (e.g. named pipes, but also files). Is boost.asio file support mature enough for everyday file i/o? As Boost.Asio in itself is widely used by now, and the implementation uses overlapped IO internally, I would say yes. Will file support ever be added? Whats the outlook for this? As there's no roadmap found on the Asio website, I would say that there will be no new additions to Boost.Asio for this feature. Although there's always the chance of contributors adding code and classes to Boost.Asio. Maybe you can even contribute the missing parts yourself! :-)
378,630
378,751
Specify ordinals of C++ exported functions in a DLL
I am writing a DLL with mixed C/C++ code. I want to specify the ordinals of the functions I'm exporting. So I created a .DEF file that looks like this LIBRARY LEONMATH EXPORTS sca_alloc @1 vec_alloc @2 mat_alloc @3 sca_free @4 vec_free @5 mat_free @6 ... I would like to specify the ordinals of my C++ functions and class methods too. I have tried using the Dependency Walker to add the mangled names of my functions to the .DEF file: ??0CScalar@@QAE@XZ @25 ??0CScalar@@QAE@O@Z @26 ??0CScalar@@QAE@ABV0@@Z @27 ??1CScalar@@QAE@XZ @28 But this has failed. Any ideas why this could be happening? EDIT: kauppi made a good observation, so I'm adding more information to the question. Platform: Windows (and I'm not interested in portability) Compiler: Microsoft's C++ compiler (I'm using VS2005) Why I want to do this?: Using the ordinals has the advantage of letting me call exported C++ functions from C code.
Well, I don't have experience with ordinals (which look like some ugly, compiler-specific thing), but I can help you with making C++/C code compatible. Suppose, in C++, that your header file looks like this: class MyClass { void foo(int); int bar(int); double bar(double); void baz(MyClass); }; You can make it C-compatible by doing the following: #ifdef __cplusplus #define EXTERN_C extern "C" // Class definition here; unchanged #else #define EXTERN_C typedef struct MyClass MyClass; #endif EXTERN_C void MyClass_foo (MyClass*, int); EXTERN_C int MyClass_bar_int (MyClass*, int); EXTERN_C double MyClass_bar_double (MyClass*, double); EXTERN_C void MyClass_baz (MyClass*, MyClass*); In the C++ source file, you just define the various extern "C" functions to pass to the desired member functions, like this (this is only one; the rest work similarly) extern "C" void MyClass_foo (MyClass* obj, int i) { obj->foo(i); } The code will then have a C interface, without having to change the C++ code at all (except for declarations in the header; but those could also be moved to another file "myclass_c.h" or the like). All the functions declared/defined extern "C" won't be mangled, so you can do other operations on them easily. You will also probably want functions to construct/destroy instances of MyClass (you can, of course, use new/delete for this).
378,808
378,962
How do you change an IP address in C++?
I need to do a number of network-related things in C++ that I would normally do with ifconfig in Linux, but I'd like to do it without parsing the output of a group of system calls. Which C or C++ libraries can I use to tell if a network adapter is up or down, read or change an adapter's IP address and netmask, and change the default DNS gateway?
Basically you need to make a bunch of ioctl calls using a socket handle (SIOCGIFADDR, SIOCADDRT). You can find sample programs that use it in the Linux kernel source under Documentation/networking. Some other links that might be helpful: Network Interface operations on AIX XBMC's implementation (check out CNetworkInterfaceLinux) EDIT: Let me also add that if your target is desktop linux, then you can consider using the DBUS API to query network parameters using the NetworkManager. http://people.redhat.com/dcbw/NetworkManager/NetworkManager DBUS API.txt (sorry there's some issue inserting links in edits)
378,876
378,913
C++ namespaces: cross-usage
Consider the following example. It consists of two header files, declaring two different namespaces: // a1.h #pragma once #include "a2.h" namespace a1 { const int x = 10; typedef a2::C B; } and the second one is // a2.h #pragma once #include "a1.h" namespace a2 { class C { public: int say() { return a1::x; } }; } And a single source file, main.cpp: #include <iostream> #include "a1.h" #include "a2.h" int main() { a2::C c; std::cout << c.say() << std::endl; } This way it doesn't compile (tried GCC and MSVC). The error is that a1 namespaces is not declared (C2653 on Windows). If you change include order in main.cpp this way: #include "a2.h" #include "a1.h" you get a symmetric error message, i.e. a2 namespace is not declared. What's the problem?
You need to use a forward declaration in your header files because you have a circular reference. Something like this: // a1.h #pragma once namespace a2 { class C; } namespace a1 { const int x = 10; typedef a2::C B; }
379,136
379,970
Can I extend lisp with c++?
Can I call a function from a library written in C or C++ from Lisp? How can I extend Lisp? This is useful when you want to do some system calls or stuff like that.
It is unusual to call non-lisp code from lisp, and rarely necessary. CLX (the X11 client implementation for CL) doesn't link to the Xlib implementation but "speaks" X11 directly. On any system, your CL implementation is likely to already have excellent operating system hooks rendering this unnecessary. That said, the answer depends on the lisp implementation: In ECL you can actually host a CL environment under C and simply call cl_eval() with the code to execute. This can let you write your application (or application host) in C (or C++) and "call to" lisp code. In CCL, there is a C-compatible calling interface which lets you do something like this: (with-cstrs ((x "Hello World")) (#_puts x)) In most other CL implementations (like SBCL; and yes this works in ECL and CCL as well) you can use UFFI (or CFFI) which simply lets you call C functions, which is what other people are talking about. If this is all you want to do, then CFFI is a good, safe place to start.
379,172
379,189
To GOTO or not to GOTO?
Currently I am working on a project where goto statements are heavily used. The main purpose of the goto statements is to have one cleanup section in a routine rather than multiple return statements. Like below: BOOL foo() { BOOL bRetVal = FALSE; int *p = NULL; p = new int; if (p == NULL) { cout<<" OOM \n"; goto Exit; } // Lot of code... Exit: if(p) { delete p; p = NULL; } return bRetVal; } This makes it much easier, as we can track our cleanup code in one section of the code, that is, after the Exit label. However, I have read in many places that it's bad practice to have goto statements. Currently I am reading the Code Complete book, and it says that we need to use variables close to their declarations. If we use goto then we need to declare/initialize all variables before the first use of goto; otherwise the compiler will give errors that initialization of variable xx is skipped by the goto statement. Which way is right? From Scott's comment: It looks like using goto to jump from one section to another is bad as it makes the code hard to read and understand. But if we use goto just to go forward and to one label then it should be fine(?).
I am not sure what you mean by cleanup code, but in C++ there is a concept called "resource acquisition is initialization" and it should be the responsibility of your destructors to clean up stuff. (Note that in C# and Java, this is usually solved by try/finally.) For more info check out this page: http://www.research.att.com/~bs/bs_faq2.html#finally EDIT: Let me clear this up a little bit. Consider the following code: void MyMethod() { MyClass *myInstance = new MyClass("myParameter"); /* Your code here */ delete myInstance; } The problem: What happens if you have multiple exits from the function? You have to keep track of each exit and delete your objects at all possible exits! Otherwise, you will have memory leaks and zombie resources, right? The solution: Use automatic (stack) objects instead, as they get cleaned up automatically when control leaves the scope. void MyMethod() { MyClass myInstance("myParameter"); /* Your code here */ /* You don't need delete - myInstance will be destructed and deleted * automatically on function exit */ } Oh yes, and use std::unique_ptr or something similar, because the example above as it is is obviously imperfect.
379,238
380,857
C++ and SOAP -> how to start well
My project is about to introduce SOAP. It's going to be used for C++ <-> Java and C++ <-> Flex communication. I'm responsible for refactoring our apps to take advantage of a Java business rules engine and a new Flex GUI. What resources are must-reads for C++ SOAP? I've read the W3C materials. We'll probably be using gSOAP on Solaris boxes.
There are some reasonably good books on SOAP, like Programming Web Services with SOAP by Snell, Tidwell, and Kulchenko; I've given that to people to introduce them to SOAP on projects in the past. I don't know of a C++-specific book, but the gSOAP site has pretty decent documentation. I think the really key thing is probably to not dive into a complicated project right at first; there are some SOAP tutorials around, like this one, that build simple web services using gSOAP. Get the SOAP version of "Hello, world!" going first, then maybe build that eBay client in the example, and you'll learn a lot of things you'd otherwise learn much more slowly and painfully trying to build a big example first.
379,380
379,443
Avoiding Inheritance Madness
So, I have an API that I need to implement into an existing framework. This API manages interactions with an external server. I've been charged with coming up with a way to create an easily repeatable "pattern," so that people working on new projects in the given framework have a simple solution for integrating the API. My first idea was to create a class for your "main" class of the framework to extend; that would provide all the virtual functions necessary to interact with the API. However, my boss vetoed this, since the existing framework is "inheritance heavy" and he wants to avoid adding to the madness. I obviously can't encapsulate my API, because that is what the API itself is supposed to be doing, and doing so might hide functionality. Short of asking future developers to copy and paste my example, what do I do?
If your boss is hostile to inheritance, try aggregation. (Has-a relationships rather than inheritance's is-a relationship.) Assuming you interface with the API in question via an object, maybe you can just keep that object in a property of your framework 'main' class, so you'd interact with it like main->whateverapi->doWhatever(). If the API isn't object-implemented or you need to load a lot of functionality specific to your environment onto it, that points toward making your own class that goes into that role and relates to the third party API however it needs to. Yeah, this basically means you're building an API to the API. Aggregation allows you to avoid the masking-functionality problem, though; even if you do have to do an intermediary layer, you can expose the original API as main->yourobject->originalapi and not have to worry about inheritance mucking things up.
379,383
379,507
Finding gaps in sequence of numbers
I have a std::vector containing a handful of numbers, which are not in any particular order, and may or may not have gaps between the numbers - for example, I may have { 1,2,3, 6 } or { 2,8,4,6 } or { 1, 9, 5, 2 }, etc. I'd like a simple way to look at this vector and say 'give me the lowest number >= 1 which does not appear in the vector'. So, for the three examples above, the answers would be 4, 1 and 3 respectively. It's not performance critical, and the list is short so there aren't any issues about copying the list and sorting it, for example. I am not really stuck for a way to do this, but my STL skills are seriously atrophied and I can feel that I'm about to do something inelegant - I would be interested to see what other people came up with.
The checked answer uses < for comparison. != is much simpler: int find_gap(std::vector<int> vec) { std::sort(vec.begin(), vec.end()); int next = 1; for (std::vector<int>::iterator it = vec.begin(); it != vec.end(); ++it) { if (*it != next) return next; ++next; } return next; } find_gap(1,2,4,5) = 3 find_gap(2) = 1 find_gap(1,2,3) = 4 I'm not passing a reference to the vector since a) he said time doesn't matter and b) so I don't change the order of the original vector.
379,442
379,589
How much slower is a wxWidget written in Python versus C++?
I'm looking into writing a wxWidget that displays a graphical node network, and therefore does a lot of drawing operations. I know that using Python to do it is going to be slower, but I'd rather get it working and port it later when its functional. Ideally, if the performance hit isn't too great, I'd prefer to keep the codebase in Python for easy updates. What I'm wondering is how much slower should I expect things to go? I realize this is vague and open ended, but I just need a sense of what to expect. Will drawing 500 circles bog down? Will it be noticeable at all? What are your experiences?
IMHO, the main bottleneck will be the data structures you use to represent the network graph. I have coded a similar application for tracing dependencies between various component versions in a system, and graphics was the last thing I had to worry about, and I was certainly drawing more than 500 objects, with gradient fills for some of them! If you are getting bogged down, you should check out PyGame for drawing things.
379,581
379,663
How to diagnose Access Violation on application exit
I have an application that I'm trying to debug a crash in. However, it is difficult to detect the problem for a few reasons: The crash happens at shutdown, meaning the offending code isn't on the stack The crash only happens in release builds, meaning symbols aren't available By crash, I mean the following exception: 0xC0000005: Access violation reading location 0x00000000. What strategy would you use to diagnose this problem? What I have done so far is remove as much code from my program until I get the bare minimum that will cause the crash. It seems to be happening in code that is statically linked to the project, so that doesn't help, either.
You can make the symbol files even for the release build. Do that, run your program, attach the debugger, close it, and see the cause of the crash in the debugger.
379,916
379,951
How to implement One Definition Rule
This post refers to the One Definition Rule. Wikipedia is pretty bad at explaining how to implement it. Where can I find good resources about guidelines to follow in C++ .NET?
The one definition rule basically means that a variable/function can only be located at one place in the address space of the compiled executable. One way to think of it is that while you compile, there is an array of memory to be used in the compiled program (object code), and a lookup table to reference variable/function locations. This is done on a per-process level. Suppose that the following is a simple program: file1.cpp int square(int x); // this is a declaration extern int someVariable; // this is a declaration int square(int x) // this is a definition { return x * someVariable; } file2.cpp int square(int x); // this is a declaration int someVariable; // this is a definition int main() { someVariable = 12; someVariable = square(4); } When the compiler starts compiling the object code, it reads in the declarations and puts things in its table. At the end of compiling file1.cpp, it will end up with something like this: declarations: square (XX): function that returns int, and takes a single int as parameter [4 bytes] someVariable (YY): integer [4 bytes] data: 12 34 56 78 aa XX XX XX XX ab cd definition: square: starts at address 0 This assumes that the function gets compiled to those particular assembly instructions. At link time, XX XX XX XX will get replaced by the address of someVariable. File2 ends up something like: declarations: square (XX): function that returns int, and takes a single int as parameter [4 bytes] someVariable (YY): integer [4 bytes] data: 00 00 00 00 12 34 56 78 12 34 56 YY YY YY YY 23 21 definitions: someVariable: starts at address 0 main: starts at address 4 And in this case, the YY will be replaced by the address of square. That's where the linker comes into play. The linker's job is to go through the list and build up a table of where everything is in the address space of the program at link time. However, there is a problem if two object files have the same definition of a variable when it tries to link. If there were two definitions of someVariable in the above example, then it wouldn't know what to replace YY with. Likewise, if there's no definition, then you get ugly linker errors. The "solution" to the rule is to partition your files such that you have definitions only in .cpp files, and declarations of things in your .h files, so the example above would become: file1.cpp #include "file2.h" int square(int x) // this is a definition { return x * someVariable; } file1.h int square(int x); // this is a declaration file2.cpp #include "file1.h" int someVariable; // this is a definition int main() { someVariable = 12; someVariable = square(4); } file2.h extern int someVariable; Note that this is an incredibly simple example, and that it doesn't really apply in .NET since there isn't a concept of a distinction between a declaration and a definition.
380,442
380,447
What is the point of pointer types in C++?
Let's say I have some pointers called: char * pChar; int * pInt; I know they both simply hold memory addresses that point to some other location, and that the types declare how big the memory location pointed to by the particular pointer is. So for example, a char might be the size of a byte on a system, while an int may be 4 bytes. So when I do: pChar++; // I am actually incrementing the address pointed to by pChar by 1 byte; pInt++; // I am actually incrementing the address pointed to by pInt by 4 bytes; But what if I do this: pChar+2; // increment the address pointed to by pChar by 2 bytes? pInt+2; // increment the address pointed to by pInt by 2 bytes? What happens to the other two bytes? Thanks.. Would appreciate any clarification here.. Is the pointer type simply for the ++ operation? EDIT: So avp answered my question fittingly, but I have a follow-up question: what happens when I do: memcpy(pChar,pInt,2); Will it copy 2 bytes? Or 4 bytes? Will I have an access violation? EDIT: The answer, according to Ryan Fox, is 2 bytes, because they are typecast to a (void*). Thanks! CLOSED! EDIT: Just so that future searchers may find this.. Another piece of info I discovered.. memcpy(pChar+5,pInt+5,2); doesn't copy 2 bytes of the memory block at pInt + 5 byte locations to pChar + 5 byte locations.. what happens is that 2 bytes are copied to pChar + 5 byte locations from pInt + (4*5) byte locations.. no wonder I got access violations, I was trying to read off somewhere I wasn't supposed to be reading.. :)
"++" is just another name for X = X + 1; for pointers it doesn't matter whether you increment by 1 or by N. In either case the address changes by sizeof(type)*N, which for N = 1 is just sizeof(type). So when you add 2 (your second case): for a char that is 2*sizeof(char) = 2*1 = 2 bytes, and for an int it is 2*sizeof(int) = 2*4 = 8 bytes (on a system where int is 4 bytes).
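The scaling can be checked directly. Here is a small sketch; the helper names are made up for illustration, and they measure how many bytes a typed pointer actually moves when you add n to it:

```cpp
#include <cassert>
#include <cstddef>

// Pointer arithmetic always scales by sizeof(T): p + n advances
// n * sizeof(T) bytes, whether you use ++ or + n.
std::ptrdiff_t char_advance(std::ptrdiff_t n) {
    char buf[16];
    char* p = buf;
    return reinterpret_cast<char*>(p + n) - reinterpret_cast<char*>(p);
}

std::ptrdiff_t int_advance(std::ptrdiff_t n) {
    int buf[16];
    int* p = buf;
    return reinterpret_cast<char*>(p + n) - reinterpret_cast<char*>(p);
}
```

On a platform with 4-byte int, int_advance(2) yields 8, matching the arithmetic above.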
380,529
380,553
Multiple Inheritance
#include<iostream> using namespace std; class A { int a; int b; public: void eat() { cout<<"A::eat()"<<endl; } }; class B: public A { public: void eat() { cout<<"B::eat()"<<endl; } }; class C: public A { public: void eat() { cout<<"C::eat()"<<endl; } }; class D: public B, C { }; int foo(A *ptr) { ptr->eat(); } main() { D obj; foo(&(obj.B)); //error. How do i call with D's B part. } The above foo call is a compile time error. I want to call foo with obj's B part without using virtual inheritance. How do i do that. Also, in case of virtual inheritance, why the offset information need to be stored in the vtable. This can be determined at the compile time itself. In the above case, if we pass foo with D's object, at compile time only we can calculate the offset of D's A part.
Inheriting twice With double inheritance you have an ambiguity - the compiler cannot know which of the two A bases you want to use. If you want to have two A bases (sometimes you may want to do this), you may select between them by casting to B or C. The most appropriate of the default casts here is static_cast (as the weakest available), however it is not really needed (it is still stronger than your case needs), as you are not casting to a derived type. A custom safe_cast template should do the job: /// cast using implicit conversions only template <class To,class From> inline To safe_cast( const From &from ) {return from;} main() { D obj; foo(safe_cast<B *>(&obj)); //error. How do i call with D's B part. } Compile time types - use templates Also, in case of virtual inheritance, why the offset information need to be stored in the vtable. This can be determined at the compile time itself. In the above case, if we pass foo with D's object, at compile time only we can calculate the offset of D's A part. This is a misconception. The foo function as it is written now has no compile-time type information about the ptr type other than it is A *, even if you pass B * or C*. If you want foo to be able to act based on the type passed at compile time, you need to use templates: template <class TypeDerivedFromA> int foo(TypeDerivedFromA *ptr) { ptr->eat(); } Virtual Inheritance Your question mentions virtual inheritance. If you want to use virtual inheritance, you need to specify so: class B: public virtual A ... class C: public virtual A ... With this the code would compile, but with this solution there is no way you could select between B::A or C::A (there is only one A), therefore this is probably not what you are after.
Virtual functions Furthermore, your question seems to be confusing two different concepts: virtual inheritance (which means sharing one base class between two intermediate base classes) and virtual functions (which means allowing a derived-class function to be called via a base class pointer). If you want B::eat to be called through an A pointer, you can do this without virtual inheritance (actually, virtual inheritance would prevent you from doing so, as explained above), using virtual functions: class A { int a; int b; public: virtual void eat() { cout<<"A::eat()"<<endl; } }; If virtual functions are not acceptable for you, the compile-time mechanism for this is templates, as explained above.
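The disambiguation described above can be sketched in a compilable form. The class names mirror the question, safe_cast is the custom template from the answer, and the feed helpers are made-up names for illustration:

```cpp
#include <cassert>
#include <string>

struct A { virtual std::string eat() { return "A"; } virtual ~A() {} };
struct B : A { std::string eat() { return "B"; } };
struct C : A { std::string eat() { return "C"; } };
struct D : B, C {};  // two A subobjects: D* -> A* is ambiguous

// cast using implicit conversions only
template <class To, class From>
To safe_cast(const From& from) { return from; }

std::string feed(A* ptr) { return ptr->eat(); }  // the foo() of the question

// Casting to the intermediate base first picks one of the two A subobjects.
std::string feed_b_part(D& d) { return feed(safe_cast<B*>(&d)); }
std::string feed_c_part(D& d) { return feed(static_cast<C*>(&d)); }
```

Because eat() is virtual and D does not override it, the B path dispatches to B::eat and the C path to C::eat.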
380,559
381,019
Using .NET class from native C++ using C++/CLI as a 'middleware'
I have to use a class/assembly made in C# .NET from a native C++ application. I suppose I need to make a wrapper class in C++/CLI that would expose native methods in header files, but use the .NET class as needed. The thing that's unclear to me is how to convert data types from .NET to standard C++ types. Does anybody have some sample code to learn from? The only text I have found on this is: http://msdn.microsoft.com/en-us/magazine/cc300632.aspx But that text is very old (using Managed C++, not C++/CLI), and a lot of things remain unclear
Yes, you need to build a wrapper with C++/CLI around your managed assembly. For mixing native and managed types you may want to check this article for sample code. Primitive types such as int, float, and byte are converted for you, but other types such as String must be handled yourself. For example, if you have a native char* pointer, then your C++/CLI class will have to convert it to a String in order to pass it to your managed C# assembly.
381,161
383,217
Do XCode projects need special settings to use pre-compiled boost libraries?
I have just installed boost for the first time on my Intel Mac, and it works fine in general, as long as I use only boost's header files. But when I try to use a lib, in my case the regex lib, my app links and launches fine, but then it soon crashes in a regex related destructor. This even happens with the simple test program that's given on boost.org's unix install page . I wonder if I need to set special compile/link options (I've used the defaults for a C++ Tool) other than setting the search paths and including the regex lib, of course? (BTW, I have verified that the default compiler on my system is indeed gcc 4.01, which is the same my test project in XCode uses)
Answer to self: While it's still not clear what causes this problem, there is a workaround: instead of using the libs that the boost installer process builds, add the source code for the lib (in boost-main-folder/libs/libname/src/) to your Xcode project and have Xcode compile the sources for you.
381,164
382,077
friend AND inline method, what's the point ?
I see in a header that I didn't write myself the following: class MonitorObjectString: public MonitorObject { // some other declarations friend inline bool operator==(MonitorObjectString& lhs, MonitorObjectString& rhs) { return(lhs.fVal==rhs.fVal); } I can't understand why this method is declared as friend. I thought it would make sense if the function is defined in another place and needs to access the internal member of the class, but this is not the case here as it is inline and doesn't even need to have access to the members. What do you think? Is the "friend" useless?
friend inline bool operator==(MonitorObjectString& lhs, MonitorObjectString& rhs) { return(lhs.fVal==rhs.fVal); } is sometimes called a friend definition, because it is a friend declaration that also defines the function. It will define the function as a non-member function of the namespace surrounding the class it appears in. Actually, the inline there is redundant: it's implicitly declared inline if it's a friend definition. Some pros and cons of it: It makes the operator not visible to normal lookup. The only way you can call it is using argument-dependent lookup. This will keep the namespace free of lots of operator declarations visible normally. Note that this will also disable the ability to call it using implicit conversions to MonitorObjectString (because if both argument types do not match at the time of looking for candidates to be called, argument-dependent lookup won't find the function). The lookup for names starts in the scope of the class the friend definition appears in. This means that no long type-names or other names need to be written out. Just refer to them as you would in a normal member function of the class. As it is a friend, the function sees the internals of MonitorObjectString. But that's neither good nor bad; it depends on the situation. For example, if there is a function getFVal(), making the function a friend is pretty pointless. You could just use getFVal() then as well. I used to like this friend-definition style of operators, because they have direct access to class members and appear within the class definition - so I could have everything in one place. Recently, however, I came to the conclusion that it's not always a good idea. If you can (and you should) implement the operator purely using public member functions of the class, you should make it a non-friend (and non-member) operator, defined in the same namespace as the class.
It makes sure that if you change some implementation - but keep the interface of the class the same - the operator will still work and you have fewer cascading changes, because you know it can't access implementation details. However, I prefer this style over writing member operators, because operator functions at namespace scope have the added feature of being symmetric with their arguments: they don't treat the left side specially, because both sides are just normal arguments and not object arguments that are bound to *this. If either the left or the right side is of the type of your class, the other side can be implicitly converted - regardless of whether it's left or right. For functions that are defined without the friend-definition syntax (traditionally, at namespace scope), you also have the option of selectively including headers that make those operators available or not.
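A minimal compilable sketch of a friend definition, with the argument-dependent lookup behavior described above (the namespace name is made up for illustration):

```cpp
#include <cassert>

namespace demo {
    class MonitorObjectString {
    public:
        explicit MonitorObjectString(int v) : fVal(v) {}
        // Friend definition: declared and defined inside the class, lives
        // in namespace demo, found only through argument-dependent lookup.
        friend bool operator==(MonitorObjectString& lhs, MonitorObjectString& rhs) {
            return lhs.fVal == rhs.fVal;   // friend: may read private members
        }
    private:
        int fVal;
    };
}
```

A call like a == b on two demo::MonitorObjectString lvalues finds the operator via ADL even though it is not visible to ordinary lookup.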
381,267
381,577
Looped push_back against resize() + iterator
Simple question; what's better and why? out.resize( in.size() ); T1::iterator outit = out.begin(); for( inIt = in.begin() to end, ++inIt, ++outIt ) *outit = *inIt OR out.erase(); for( inIt = in.begin() to end, ++inIt ) out.push_back( inIt ); I'm assuming the memory assignment implicit in push_back is worth avoiding but want to make sure. Thanks EDIT: Thanks for the out = in suggestions guys ;). The actual code I'm playing with is: template <class T1, class T2> bool asciihex( T1& out, const T2& in ) { //out.erase(); out.resize( in.size() / 2 ); if( std::distance( in.begin(), in.end() ) % 2 )//use distance rather than size to minimise the requirements on T2? return false; for( T2::const_iterator it = in.begin(); it != in.end(); it += 2 ) { out.push_back(((( (*it > '9' ? *it - 0x07 : *it) - 0x30) << 4) & 0x00f0) | (((*(it+1) > '9' ? *(it+1) - 0x07 : *(it+1)) - 0x30) & 0x000f)); } return true; } template <class T1, class T2> bool asciihex( T1& out, const T2& in ) { size_t size = in.size(); if( size % 2 )//use distance rather than size to minimise the requirements on T2? return false; out.resize( size / 2 ); T1::iterator outit = out.begin(); for( T2::const_iterator it = in.begin(); it != in.end(); it += 2, ++outit ) { *outit = ((( (*it > '9' ? *it - 0x07 : *it) - 0x30) << 4) & 0x00f0) | (((*(it+1) > '9' ? *(it+1) - 0x07 : *(it+1)) - 0x30) & 0x000f); } return true; } Edit: I've marked push_back as the answer as it seems to be the consensus and, therefore, more useful to anyone else with the same problem. However I have ended up using the iterator method as one of the container classes I'm interested in doesn't support push_back... mileage varies.
The second; and if you're concerned about multiple extensions, use out.reserve(). The right answer for adding to a vector is almost always push_back or back_inserter, which avoid some possible problems (exception guarantees, constructors, writing past the end, for example) that you'd have to pay attention to with other methods.
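A sketch of the reserve-plus-push_back pattern suggested above; the function name is made up for illustration. reserve() pre-allocates capacity (leaving size at zero), so the subsequent push_backs never reallocate:

```cpp
#include <cassert>
#include <vector>

std::vector<int> doubled(const std::vector<int>& in) {
    std::vector<int> out;
    out.reserve(in.size());   // one allocation up front; size stays 0
    for (std::vector<int>::const_iterator it = in.begin(); it != in.end(); ++it)
        out.push_back(*it * 2);
    return out;
}
```

Unlike resize() followed by push_back, this never produces a vector with stale default-constructed elements in front of the appended ones.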
381,324
381,352
An operator == whose parameters are non-const references
In this post, I've seen this: class MonitorObjectString: public MonitorObject { // some other declarations friend inline bool operator==(/*const*/ MonitorObjectString& lhs, /*const*/ MonitorObjectString& rhs) { return lhs.fVal==rhs.fVal; } } Before we can continue, THIS IS VERY IMPORTANT: I am not questioning anyone's ability to code. I am just wondering why someone would need non-const references in a comparison. The poster of that question did not write that code. This was just in case. This is important too: I added both /*const*/s and reformatted the code. Now, we get back to the topic: I can't think of a sane use of the equality operator that lets you modify its by-ref arguments. Do you?
Perhaps the classes use a form of lazy initialization. When the data is accessed, proper initialization must occur, and the data must be fetched. This may change class members. However, lazy initialization can be arranged so that no visible modification to the class is necessary. This can be accomplished by using the Pimpl idiom (via a pointer to a private implementation class) or by using the mutable keyword (not recommended!).
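A compilable sketch of the mutable-based lazy initialization mentioned above; the class and member names are made up for illustration:

```cpp
#include <cassert>

class LazySquare {
public:
    explicit LazySquare(int raw) : raw_(raw), computed_(false), value_(0) {}
    int value() const {          // logically const interface...
        if (!computed_) {        // ...that may still fill the cache
            value_ = raw_ * raw_;
            computed_ = true;    // legal only because these members are mutable
        }
        return value_;
    }
private:
    int raw_;
    mutable bool computed_;
    mutable int value_;
};
```

Callers holding a const reference can still trigger the one-time computation, which is exactly why an operator== taking const references would continue to work.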
381,363
381,412
lightweight boost::bind
I'm so sick of the pass-callback-data-as-void*-struct anti-pattern. Boost bind solves it nicely, but is an unacceptable dependency. What's a lightweight alternative? How would I write it myself as simply as possible?
I'm not familiar with boost::bind, but is it something like this? #include <iostream> #include <string> void foo (int const& x) { std::cout << "x = " << x << std::endl; } void bar (std::string const& s) { std::cout << "s = " << s << std::endl; } template<class T> void relay (void (*f)(T const&), T const& a) { f(a); } int main (int argc, char *argv[]) { std::string msg("Hello World!"); relay (foo, 1138); relay (bar, msg); } Output -- x = 1138 s = Hello World!
381,621
381,656
Using arrays or std::vectors in C++, what's the performance gap?
In our C++ course they suggest not to use C++ arrays on new projects anymore. As far as I know Stroustrup himself suggests not to use arrays. But are there significant performance differences?
Using C++ arrays with new (that is, using dynamic arrays) should be avoided. There is the problem that you have to keep track of the size, and you need to delete them manually and do all sorts of housekeeping. Using arrays on the stack is also discouraged because you don't have range checking, and passing the array around will lose any information about its size (array to pointer conversion). You should use std::array in that case, which wraps a C++ array in a small class and provides a size function and iterators to iterate over it. Now, std::vector vs. native C++ arrays (taken from the internet): // Comparison of assembly code generated for basic indexing, dereferencing, // and increment operations on vectors and arrays/pointers. // Assembly code was generated by gcc 4.1.0 invoked with g++ -O3 -S on a // x86_64-suse-linux machine. #include <vector> struct S { int padding; std::vector<int> v; int * p; std::vector<int>::iterator i; }; int pointer_index (S & s) { return s.p[3]; } // movq 32(%rdi), %rax // movl 12(%rax), %eax // ret int vector_index (S & s) { return s.v[3]; } // movq 8(%rdi), %rax // movl 12(%rax), %eax // ret // Conclusion: Indexing a vector is the same damn thing as indexing a pointer. int pointer_deref (S & s) { return *s.p; } // movq 32(%rdi), %rax // movl (%rax), %eax // ret int iterator_deref (S & s) { return *s.i; } // movq 40(%rdi), %rax // movl (%rax), %eax // ret // Conclusion: Dereferencing a vector iterator is the same damn thing // as dereferencing a pointer. void pointer_increment (S & s) { ++s.p; } // addq $4, 32(%rdi) // ret void iterator_increment (S & s) { ++s.i; } // addq $4, 40(%rdi) // ret // Conclusion: Incrementing a vector iterator is the same damn thing as // incrementing a pointer. 
Note: If you allocate arrays with new and allocate non-class objects (like plain int) or classes without a user defined constructor and you don't want to have your elements initialized initially, using new-allocated arrays can have performance advantages because std::vector initializes all elements to default values (0 for int, for example) on construction (credits to @bernie for reminding me).
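The initialization point in the note can be checked directly: std::vector value-initializes its elements on construction, so a vector<int> starts out all zeros, whereas a raw new int[n] would leave them indeterminate. The function name here is made up for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

bool starts_zeroed(std::size_t n) {
    std::vector<int> v(n);           // every element value-initialized to 0
    for (std::size_t i = 0; i < n; ++i)
        if (v[i] != 0) return false;
    return true;
}
```

This initialization pass is the (usually small) cost the note is referring to.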
381,695
381,704
Best base type to deal with linear algebra
I'm writing a small and inadequate linear algebra library in C++ for a project (I'm sorry). I'm implementing matrices and operations using double precision numbers. Am I doing it right? Should I implement a template class instead? Is there a more precise type around?
I've written a C++ linear algebra library using templates. My thought was that we might want to use complex numbers or extended precision numbers some day. That was maybe seven years ago, and we haven't done it yet. We almost always use doubles as the template type, and we have typedefs to make that easy. A few times we've gone the other way, using types smaller than a double. For example, we've used float rather than double in a memory-bound application described here. But 99.9 percent of the time we use doubles. If you do use a template argument, watch out for using an integer type but implicitly requiring a floating point type. For example, say you have a matrix whose entries are all integers and so you use a matrix<int> class. But then you pass that to a linear solver. Now your arithmetic is done using integer division, and your results are wrong. (I've done that!)
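Both points above (the convenience typedef and the integer-division pitfall) can be sketched in a few lines. Vec is a made-up, minimal stand-in for a templated linear-algebra type, not code from any library:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

template <typename T>
struct Vec {
    std::vector<T> data;
    T mean() const {
        T sum = T(0);
        for (std::size_t i = 0; i < data.size(); ++i)
            sum += data[i];
        // Integer division if T is an integer type: the pitfall above.
        return sum / static_cast<T>(data.size());
    }
};

typedef Vec<double> DVec;   // the "almost always double" convenience typedef
```

mean() of {1.0, 2.0} is 1.5 for DVec, but mean() of {1, 2} truncates to 1 for Vec<int>.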
381,718
381,749
Check for pointer definedness in C++
How do I check if a variable, specifically a pointer, is defined in C++? Suppose I have a class: class MyClass { public: MyClass(); ~MyClass() { delete pointer; // if defined! } initializePointer() { pointer = new OtherClass(); } private: OtherClass* pointer; };
Why worry about checking for the pointer's value? Just initialize it to a null pointer value and then call delete on it. delete on a null pointer does nothing (the standard guarantees it). class MyClass { public: MyClass():pointer(0) { } ~MyClass() { delete pointer; pointer = 0; } void initializePointer() { pointer = new OtherClass(); } private: OtherClass* pointer; }; And every time you call delete on it, you should set the pointer to a null pointer value. Then you are all fine.
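The guarantee the answer relies on (delete on a null pointer is a no-op) can be demonstrated directly; the names below are illustrative:

```cpp
#include <cassert>

struct Holder {
    Holder() : p(0) {}                           // always start from null
    ~Holder() { delete p; p = 0; }               // fine whether or not init() ran
    void init() { delete p; p = new int(42); }   // re-init without leaking
    int* p;
};

bool construct_and_destroy_without_init() {
    { Holder h; }   // destructor deletes a null pointer: harmless no-op
    return true;
}
```

Because the constructor nulls the pointer and the destructor nulls it again after deleting, no "is it defined?" check is ever needed.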
381,968
392,777
Play an mp3 on a Pocket PC with VSC++ code
Does anyone know of some mp3-playing code for the Pocket PC? I have 3 mp3s that I want to play when my application loads up, depending on how you log in. I've used VS2005 C++ to code what I've got now. I think code that plays mp3s on the desktop might do the job, but I might not have access to the library; that's why I've been specific.
You can use DirectShow. Here is an example (it plays a video file, but exactly the same code will work for audio). Unfortunately, Windows Mobile lacks a suitable splitter to decode plain .mp3 files, but there is a workaround: you can add a RIFF header to your MP3s (producing MPEG-compressed WAV files).
382,166
382,177
smart pointers + "this" considered harmful?
In a C++ project that uses smart pointers, such as boost::shared_ptr, what is a good design philosophy regarding use of "this"? Consider that: It's dangerous to store the raw pointer contained in any smart pointer for later use. You've given up control of object deletion and trust the smart pointer to do it at the right time. Non-static class members intrinsically use a this pointer. It's a raw pointer and that can't be changed. If I ever store this in another variable or pass it to another function which could potentially store it for later or bind it in a callback, I'm creating bugs that are introduced when anyone decides to make a shared pointer to my class. Given that, when is it ever appropriate for me to explicitly use a this pointer? Are there design paradigms that can prevent bugs related to this?
While I don't have a general answer or idiom, there is boost::enable_shared_from_this. It allows you to get a shared_ptr managing an object that is already managed by a shared_ptr. Since in a member function you have no reference to the managing shared_ptrs, enable_shared_from_this allows you to get a shared_ptr instance and pass that when you need to pass the this pointer. But this won't solve the issue of passing this from within the constructor, since at that time no shared_ptr is managing your object yet.
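A sketch of the pattern, written here against the std:: equivalent of boost::enable_shared_from_this (standardized in C++11); the class name is made up for illustration:

```cpp
#include <cassert>
#include <memory>

class Task : public std::enable_shared_from_this<Task> {
public:
    // A member function can hand out a shared_ptr to *this that shares
    // ownership with the shared_ptr already managing the object, instead
    // of leaking the raw this pointer into a callback.
    std::shared_ptr<Task> self() { return shared_from_this(); }
};
```

Note the precondition mentioned above: shared_from_this() may only be called once the object is already owned by a shared_ptr, which is why it cannot be used from the constructor.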
382,272
382,288
Safe to use the compiler generated assignment operator?
I'm using the CPoint class from MFC. There is no explicitly defined assignment operator or copy constructor (AFAIK). Yet, this works: CPoint p1(1, 2), p2; p2 = p1; // p2 now is equal to p1 I'm assuming this is working automagically because of a compiler generated assignment operator. Correct? If so, can I be confident that this isn't doing anything unexpected? In this case CPoint is so simple I think all is well, but in general this is something that worries me a bit. Is it better form to do: p2.SetPoint(p1.x, p2.x); -cr
This is safe - if an assignment operator wasn't meant to be supplied then the MFC designers could have made sure it wasn't available (by making it private for example). IIRC the compiler will perform a member-by-member copy, so for a class containing POD like this, you won't have a problem. It can get messy if you have a class that allocates memory and neglects to override operator= and perform a deep-copy. FWIW I asked a question about what the compiler can and cannot do a while back: Why don't C++ compilers define operator== and operator!=? Some of the answers make for interesting reading.
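The member-by-member copy described above can be sketched with a CPoint-like type; Pt and assign_copy are made-up names for illustration:

```cpp
#include <cassert>

// Two plain ints, no user-declared operator=: the compiler generates a
// memberwise assignment operator, which is exactly what p2 = p1 uses.
struct Pt {
    Pt(int px = 0, int py = 0) : x(px), y(py) {}
    int x, y;
};

Pt assign_copy(const Pt& src) {
    Pt dst;
    dst = src;    // compiler-generated operator= copies x and y
    return dst;
}
```

For a POD-like type such as this, the generated operator does the same work as SetPoint would, with no surprises.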
382,480
382,586
C++ experts: is the offset of a member variable to its class constant under these conditions?
Given a variable foo of type FooClass* and a member variable in that class named bar, is the distance between foo and &(foo->bar) the same in any situation with some constraints: FooClass is a non-POD type. We know that foo will always point to an instance of FooClass, and not some subtype of it. We only care about behaviour under a single compiler and a single compilation; that is, the value this may result in under gcc is never used in code compiled with MSVC, and it is never saved to be re-used between compilations. It is computed in the binary and used in the binary, and that is it. We don't use a custom new, although some instances of the class may be stack-allocated and some heap-allocated. There is no explicit ctor for FooClass; it relies upon the compiler-generated one (and each of the fields in FooClass is either POD or default-constructable). I can't find a guarantee either way on this in the standard (nor did I expect to), but my rudimentary testing with gcc leads me to believe that it will always be the case there. I also know that this guarantee is made for POD-types, but let us assume this type can't be POD. An update/clarification: this is just for a single compilation of a single binary; the calculated offsets will never leave that single execution. Basically, I want to be able to uniquely identify the fields of a class in a static map and then be able to lookup into that map for some macro/template/EVIL trickery. It is merely for my own amusement, and no life support machines will rely on this code.
After you have compiled your program, Yes*. The offset will remain constant. There is one very important restriction, however: foo must be pointing specifically to a FooClass object. Not a class derived from FooClass, or anything else for that matter. The reason that C++ makes the POD distinction regarding member offsets is because both multiple inheritance and the location (or lack of) a vtable pointer can create situations where the address of an object is not the same as the address of that object's base.
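A sketch of the offset computation being discussed, under the answer's restriction (the pointer refers to exactly this type, not a derived one). Foo is a made-up stand-in, deliberately non-POD via its virtual destructor:

```cpp
#include <cassert>
#include <cstddef>

struct Foo {
    virtual ~Foo() {}
    int bar;
};

// Byte offset of the member from the start of the object. Within one
// compiled binary this is the same for every plain Foo instance.
std::ptrdiff_t offset_of_bar(Foo* foo) {
    return reinterpret_cast<char*>(&foo->bar) - reinterpret_cast<char*>(foo);
}
```

Stack-allocated and heap-allocated instances give the same result, matching the question's mixed-allocation scenario.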
382,822
382,825
Destructor C++: type '***' argument given to 'delete', expected pointer
I declared a private variable vector<SomeClass> theVector; someplace inside my SomeClass class. Why can't I say: delete theVector inside my SomeClass destructor? The compiler error says: type `class Vector<SomeClass>' argument given to `delete', expected pointer What expected pointer?
new and delete go hand in hand. To delete something you need to create it via new (which gives you a pointer); you can then delete that pointer. The way you are declaring the vector, it is being created on the stack (not the heap) and will be deallocated when it goes out of scope. int main() { vector<SomeClass> theVector; vector<SomeClass>* ptrVctor = new vector<SomeClass>(); delete ptrVctor; // ptrVctor must be deleted manually // theVector destroyed automatically here }
383,016
383,084
How do stl containers get deleted?
How do container objects like the STL vector get destroyed even though they are created on the heap? EDIT If the container holds pointers, then how do I destroy those pointed-to objects?
An STL container of pointers will NOT clean up the data pointed at. It will only clean up the space holding the pointer. If you want the vector to clean up pointer data you need to use some kind of smart pointer implementation: { std::vector<SomeClass*> v1; v1.push_back(new SomeClass()); std::vector<boost::shared_ptr<SomeClass> > v2; boost::shared_ptr<SomeClass> obj(new SomeClass); v2.push_back(obj); } When that scope ends both vectors will free their internal arrays. v1 will leak the SomeClass that was created since only the pointer to it is in the array. v2 will not leak any data.
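When no smart pointer is available, the cleanup for a vector of raw pointers has to be done by hand, as sketched below; the Counted type is a made-up helper that just tracks how many objects are alive:

```cpp
#include <cassert>
#include <vector>

struct Counted {
    static int live;
    Counted()  { ++live; }
    ~Counted() { --live; }
};
int Counted::live = 0;

void fill_then_free(std::vector<Counted*>& v, int n) {
    for (int i = 0; i < n; ++i)
        v.push_back(new Counted);
    for (std::vector<Counted*>::iterator it = v.begin(); it != v.end(); ++it)
        delete *it;   // frees the pointed-at objects...
    v.clear();        // ...then drops the now-dangling pointers
}
```

Forgetting the delete loop reproduces exactly the v1 leak described above.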
383,173
384,115
QX11EmbedContainer and QProcess problem
I've been trying to put a QX11EmbedContainer in my app, and I need to start a terminal within it (because with konsolepart I can practically do nothing). QX11EmbedContainer* container = new QX11EmbedContainer(this); // with or without "this" I got the same result container->show(); QProcess process(container); QString executable("xterm -into "); QStringList arguments; arguments << QString::number(container->winId()); process.start(executable, arguments); compilation goes fine,but I got this message: QProcess: Destroyed while process is still running. and I'm not able to see the container, suggestions?????? Thanks
The QProcess is allocated on the stack and will be deleted as soon as it goes out of scope. This is likely to happen before the "xterm" child process quits (hence the output). Try allocating the QProcess on the heap instead: QProcess * process = new QProcess(container); ... process->start(executable, arguments); You can delete the QProcess in three ways: Do nothing. Let the QX11EmbedContainer delete it. It is a child of the QX11EmbedContainer and will be deleted when the QX11EmbedContainer is deleted. Hook up the finished() signal to its own deleteLater() slot. connect( process, SIGNAL(finished(int,QProcess::ExitStatus)), process, SLOT(deleteLater()) ); Delete it yourself by retaining a pointer to it and deleting that pointer later. As an extra note, I'm suspicious of the first parameter to QProcess::start(). It should be the path to your executable, and further arguments should be added to the QStringList. QProcess * process = new QProcess(container); QString executable("xterm"); // perhaps try "/usr/X11/bin/xterm" QStringList arguments; arguments << "-into"; arguments << QString::number(container->winId()); process->start(executable, arguments);

383,371
383,385
What is the best way to implement a cross-platform, multi-threaded server in C/C++?
Part of the development team I work with has been given the challenge of writing a server for integration with our product. We have some low-level sensor devices that provide a C SDK, and we want to share them over a network for use by people collecting data. Sounds simple, right? Someone would connect a sensor device to their machine in one part of the building and run our server, thus sharing the device(s) with the rest of the network. Then a client would connect to that server via our application and collect sensor readings from the device. I created a simple, language-agnostic network protocol, and a reference implementation in Java. The problem is creating an implementation that will work with our devices that only provide an SDK written in C. We were thinking of doing the following: Create polling threads that collect and store the most recent readings from each connected device. Use a multi-threaded server to spin off each incoming connection to a worker thread. When a worker thread receives a request for a sensor reading, the most recent value collected by the polling thread is sent back to the client. That's a lot of threading, especially in C. So, to review, the general requirements are: Runs on Windows XP/Vista, Linux, and OS X machines Written in C or C++, to interact with the C SDK we have Accepts a variable number of simultaneous connections (worker threads) Must use threads, not forking (don't want to deal with another layer of IPC) Can anyone suggest a library and preferably some example code to get use started?
I've used Boost.Thread & Boost.Asio to build a multi-threaded server on Windows & Linux systems. The tutorials made it easy to get started.
383,973
383,976
Is args[0] guaranteed to be the path of execution?
This is a fundamental question, but an important one none the less... When starting a C++ program whose main method has the following common signature: int main(int argc, char* args[]) { //Magic! return 0; } is args[0] always guaranteed to be the path to the currently running program? What about cross platform (since I am in a Linux environment but may port later on.)?
It is not always. It's the value given to the program by the operating system. For example, when starting a program using exec you can set it to an arbitrary value: int execve(const char *filename, char *const argv[], char *const envp[]); The first parameter is the file to start, and argv will contain argv[0] and all other parameters for main. envp contains the environment variables (not defined by standard C or C++; this is a POSIX thing). More precisely, this is the definition of argv in C++: An implementation shall not predefine the main function. This function shall not be overloaded. It shall have a return type of type int, but otherwise its type is implementation-defined. All implementations shall allow both of the following definitions of main: int main() { /* ... */ } and int main(int argc, char* argv[]) { /* ... */ } In the latter form argc shall be the number of arguments passed to the program from the environment in which the program is run. If argc is nonzero these arguments shall be supplied in argv[0] through argv[argc-1] as pointers to the initial characters of null-terminated multibyte strings (NTMBSs) (17.3.2.1.3.2) and argv[0] shall be the pointer to the initial character of a NTMBS that represents the name used to invoke the program or "". The value of argc shall be nonnegative. The value of argv[argc] shall be 0. [Note: it is recommended that any further (optional) parameters be added after argv. ] It's pretty much up to the implementation what defines a "name used to invoke the program". If you want to get the full path of your executable, you can use GetModuleFileName on Windows, or argv[0] (the name used to invoke the program, which may be relative) together with getcwd (to get the current working directory and make the name absolute).
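Given the quoted wording (argv[0] is either the invocation name or "", and argv[argc] is null), a defensive reader could look like the sketch below; the function name is made up for illustration:

```cpp
#include <cassert>
#include <string>

std::string program_name(int argc, char* argv[]) {
    // The standard only promises argv[0] exists when argc > 0, and even
    // then it may be the empty string rather than a path.
    if (argc > 0 && argv[0] != 0 && argv[0][0] != '\0')
        return std::string(argv[0]);   // may be relative or absolute
    return std::string("<unknown>");
}
```

This never assumes argv[0] is an absolute path, which is exactly the portability trap the answer warns about.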
384,036
384,485
c++ continuous integration with performance metrics
I want to set up a continuous integration and test framework for my open source C++ project. The desired features are: 1. check out the source code 2. run all the unit and other tests 3. run performance tests (these measure the software quality - for example how long does it take the system to complete the test) 4. produce a report based on 3. and 4. daily 5. archive the reports for future reference To implement this, which test framework and what continuous integration process would you recommend? Right now I am leaning towards Google Test Framework (I am aware of some of the comparisons of unit test frameworks discussed in other questions) for tests and Cruisecontrol for continuous integration. But I don't know if Cruisecontrol allows easy integration of performance metrics. Edit: To answer Wilhelmtell, code should work with both Windows and Linux.
I am using CruiseControl and UnitTest++ today for exactly this task. UnitTest++ is really nice, although I sometimes feel limited by it in corner cases. At least it is 10 times better than cppunit. I still haven't tried the Google testing framework; that will be for my next project. I have been extremely disappointed by CruiseControl. There are a few bugs, and questions asked on the mailing list were never answered. Many of the default "features" for managing program execution and logging were not satisfying. We had to write our own report generation, our own stdout and stderr capturing, and our own status mailer. There is not much left for CruiseControl, apart from running tests at regular intervals and gathering them on a central web server.
384,110
384,708
Smart pointers for Windows Mobile 6 SDK
I cannot get std::tr1::shared_ptr for my WinMobile project since the STL for WinCE is maintained by a different team at Microsoft :( aarrgh... Anyone worked with another thread-safe, reference counting smart pointers? I'm actually using yasper which seems to be good. Thank you very much.
I'd also recommend boost::shared_ptr. You can do what I did for a library, and use a #define to switch between std::tr1::shared_ptr and boost::shared_ptr, depending on the capabilities of the compiler. That way your code doesn't need to be modified [much] if/when the CE team add tr1 support. Just my 2c.
384,159
384,169
Can't assign a member which is a pointer to a templatized class
My problem is that in my "Widget" class i've the following declaration: MouseEvent* X; In a member function I initialize the pointer with an address the normal way: X = new MouseEvent; Ok, this last line makes the compiler stop at: error C2166: l-value specifies const object All right, a MouseEvent is declared as a typedef to simplify things: typedef Event__2<void, Widget&, const MouseEventArgs&> MouseEvent; And Event__2 is, as you may imagine as: (basic structure shown): template <typename return_type, typename arg1_T, typename arg2_T> class Event__2 { ... }; I don't know where the Event__2 class gets the const qualifier. Any tips ? Thanks.
Likely, the member function where you are initializing X is marked as const - something like this: class Foo { int *Bar; public: void AssignAndDoStuff() const { Bar = new int; // Can't assign to a const object. // other code } }; The solution here is either to assign to Bar in a separate non-const method, change AssignAndDoStuff to be non-const, or mark Bar as mutable. Pick one of the above: class Foo { mutable int *Bar; // 3 public: void Assign() // 1 { Bar = new int; } void DoStuff() const { // Other code } void AssignAndDoStuff() // 2 { Bar = new int; // other code } };
384,200
384,211
C++ Operator Ambiguity
Forgive me, for I am fairly new to C++, but I am having some trouble regarding operator ambiguity. I think it is compiler-specific, for the code compiled on my desktop. However, it fails to compile on my laptop. I think I know what's going wrong, but I don't see an elegant way around it. Please let me know if I am making an obvious mistake. Anyhow, here's what I'm trying to do: I have made my own vector class called Vector4 which looks something like this: class Vector4 { private: GLfloat vector[4]; ... } Then I have these operators, which are causing the problem: operator GLfloat* () { return vector; } operator const GLfloat* () const { return vector; } GLfloat& operator [] (const size_t i) { return vector[i]; } const GLfloat& operator [] (const size_t i) const { return vector[i]; } I have the conversion operator so that I can pass an instance of my Vector4 class to glVertex3fv, and I have subscripting for obvious reasons. However, calls that involve subscripting the Vector4 become ambiguous to the compiler: enum {x, y, z, w} Vector4 v(1.0, 2.0, 3.0, 4.0); glTranslatef(v[x], v[y], v[z]); Here are the candidates: candidate 1: const GLfloat& Vector4:: operator[](size_t) const candidate 2: operator[](const GLfloat*, int) <built-in> Why would it try to convert my Vector4 to a GLfloat* first when the subscript operator is already defined on Vector4? Is there a simple way around this that doesn't involve typecasting? Am I just making a silly mistake? Thanks for any help in advance.
This is explained in the book "C++ Templates - The Complete Guide". It's because your operator[] takes size_t, but you pass a different type which first has to undergo an implicit conversion to size_t. On the other side, the conversion operator can be chosen too, and then the returned pointer can be subscripted. Hence the ambiguity. The solution is to drop the conversion operator. They should generally be avoided as they just introduce problems, as you see. Provide begin and end member functions that return vector and vector + 4 respectively. Then you can use v.begin() if you want to pass to native OpenGL functions. There is a bit of confusion in the comments. I think I will update this answer now to reflect the most recent concept of this. struct Vector4 { // some of container requirements typedef GLfloat value_type; typedef GLfloat& reference; typedef GLfloat const& const_reference; typedef GLfloat * iterator; typedef GLfloat const * const_iterator; typedef std::ptrdiff_t difference_type; typedef std::size_t size_type; static const size_type static_size = 4; // returns iterators to the begin and end iterator begin() { return vector; } iterator end() { return vector + size(); } const_iterator begin() const { return vector; } const_iterator end() const { return vector + size(); } size_type size() const { return static_size; } size_type max_size() const { return static_size; } void swap(Vector4 & that) { std::swap(*this, that); } // some of sequences reference operator[](size_type t) { return vector[t]; } const_reference operator[](size_type t) const { return vector[t]; } // specific for us. returns a pointer to the begin of our buffer. 
// compatible with std::vector, std::array and std::string of c++1x value_type * data() { return vector; } value_type const* data() const { return vector; } // comparison stuff for containers friend bool operator==(Vector4 const&a, Vector4 const&b) { return std::equal(a.begin(), a.end(), b.begin()); } friend bool operator!=(Vector4 const&a, Vector4 const&b) { return !(a == b); } friend bool operator<(Vector4 const&a, Vector4 const&b) { return std::lexicographical_compare(a.begin(), a.end(), b.begin(), b.end()); } friend bool operator> (Vector4 const&a, Vector4 const&b) { return b < a; } friend bool operator<=(Vector4 const&a, Vector4 const&b) { return !(b < a); } friend bool operator>=(Vector4 const&a, Vector4 const&b) { return !(a < b); } private: GLfloat vector[4]; };
384,431
384,434
C++ Header files - Confused!
game.h needs: - packet.h - socket.h server.h needs: - socket.h socket.h needs: - game.h The problem comes when I try to include socket.h into game.h, because socket.h has game.h included already. How do I solve these kind of problems?
The usual way: use #ifndef and #define include guards in your header files, e.g. inside game.h: #ifndef GAME_H #define GAME_H .. rest of your header file here #endif This way, the file may be included multiple times, but its contents are only processed once. If one header only needs a pointer or reference to a class from the other, a forward declaration (e.g. class Game;) can replace the #include and break the cycle entirely. Edit: Removed underscores at start and end of identifier per comments.
384,886
384,936
How to make warnings persist in visual studio
Suppose I have files a.cpp and b.cpp and I get warnings in a.cpp and an error in b.cpp. I fix the error in b.cpp and recompile -- since Visual Studio doesn't have to recompile a.cpp, it doesn't remind me of the warnings it found before. I'd like to somehow have the warnings persist; however, I don't want it to treat warnings as errors (I'd like it to still compile/run even with warnings). Is this possible?
Essentially, you're out of luck. The C++ compilation will discard all of the errors and warnings. Because it only recompiles .CPP files that have a missing .OBJ file (i.e. the ones that had errors and failed last time), you'll only see the errors. You have a few options. Off the top of my head: Write a macro that responds to the build complete event. If it sees any warnings, it could delete the .OBJ file. The .CPP file would be compiled again next time. Unfortunately, this means that your program may not run without recompilation. You could write a macro that works the other way: on build start, look to see if there are any warnings, and then delete the .OBJ file. Write a VS addin that remembers warnings until the .CPP file is compiled again.
384,913
384,924
Problems using EnterCriticalSection
I need to work with array from several threads, so I use CRITICAL SECTION to give it an exclusive access to the data. Here is my template: #include "stdafx.h" #ifndef SHAREDVECTOR_H #define SHAREDVECTOR_H #include <vector> #include <windows.h> template<class T> class SharedVector { std::vector<T> vect; CRITICAL_SECTION cs; SharedVector(const SharedVector<T>& rhs) {} public: SharedVector(); explicit SharedVector(const CRITICAL_SECTION& CS); void PushBack(const T& value); void PopBack(); unsigned int size() const; T& operator[](int index); virtual ~SharedVector(); }; template<class T> SharedVector<T>::SharedVector() { InitializeCriticalSection(&cs); } template<class T> SharedVector<T>::SharedVector(const CRITICAL_SECTION& r): cs(r) { InitializeCriticalSection(&cs); } template<class T> void SharedVector<T>::PushBack(const T& value) { EnterCriticalSection(&cs); vect.push_back(value); LeaveCriticalSection(&cs); } template<class T> void SharedVector<T>::PopBack() { EnterCriticalSection(&cs); vect.pop_back(); LeaveCriticalSection(&cs); } template<class T> unsigned int SharedVector<T>::size() const { EnterCriticalSection(&cs); unsigned int result = vect.size(); LeaveCriticalSection(&cs); return result; } template<class T> T& SharedVector<T>::operator[](int index) { EnterCriticalSection(&cs); T result = vect[index]; LeaveCriticalSection(&cs); return result; } template<class T> SharedVector<T>::~SharedVector() { DeleteCriticalSection(&cs); } While compiling I have such a problem for calling EnterCriticalSection(&cs) and LeaveCriticalSection(&cs): 'EnterCriticalSection' : cannot convert parameter 1 from 'const CRITICAL_SECTION *' to 'LPCRITICAL_SECTION' I do not know what is wrong. May be you can see. Just because I always used it this way and it was alright. windows.h is included
Just declare cs as: mutable CRITICAL_SECTION cs; or else remove the const clause on size() Entering a critical section modifies the CRITICAL_SECTION, and leaving modifies it again. Since entering and leaving a critical section doesn't make the size() method call logically non-const, I'd say leave it declared const, and make cs mutable. This is the type of situation mutable was introduced for. Also - take a look at Martin York's and Joe Mucchiello's suggestions - use RAII whenever possible to deal with any kind of resources that need to be cleaned up. This works just as well for critical sections as it does for pointers and file handles.
385,039
385,076
Are there cases where a "finally" construct would be useful in C++?
Bjarne Stroustrup writes in his C++ Style and Technique FAQ, emphasis mine: Because C++ supports an alternative that is almost always better: The "resource acquisition is initialization" technique (TC++PL3 section 14.4). The basic idea is to represent a resource by a local object, so that the local object's destructor will release the resource. That way, the programmer cannot forget to release the resource. For example: class File_handle { FILE* p; public: File_handle(const char* n, const char* a) { p = fopen(n,a); if (p==0) throw Open_error(errno); } File_handle(FILE* pp) { p = pp; if (p==0) throw Open_error(errno); } ~File_handle() { fclose(p); } operator FILE*() { return p; } // ... }; void f(const char* fn) { File_handle f(fn,"rw"); // open fn for reading and writing // use file through f } In a system, we need a "resource handle" class for each resource. However, we don't have to have an "finally" clause for each acquisition of a resource. In realistic systems, there are far more resource acquisitions than kinds of resources, so the "resource acquisition is initialization" technique leads to less code than use of a "finally" construct. Note that Bjarne writes "almost always better" and not "always better". Now for my question: What situation would a finally construct be better than using the alternative construct (RAII) in C++?
The only reason I can think of that a finally block would be "better" is when it takes less code to accomplish the same thing. For example, if you have a resource that, for some reason doesn't use RAII, you would either need to write a class to wrap the resource and free it in the destructor, or use a finally block (if it existed). Compare: class RAII_Wrapper { Resource *resource; public: RAII_Wrapper() : resource(acquire_resource()) {} ~RAII_Wrapper() { free_resource(resource); delete resource; } Resource *getResource() const { return resource; } }; void Process() { RAII_Wrapper wrapper; do_something(wrapper.getResource()); } versus: void Process() { try { Resource *resource = acquire_resource(); do_something(resource); } finally { free_resource(resource); delete resource; } } Most people (including me) would still argue that the first version is better, because it doesn't force you to use the try...finally block. You also only need to write the class once, not duplicate the code in every function that uses the resource. Edit: Like litb mentioned, you should use an auto_ptr instead of deleting the pointers manually, which would simplify both cases.
385,060
385,071
How do I do lots of processing without gobbling cpu?
I know the question title isn't the best. Let me explain. I do a TON of text processing which converts natural language to xml. These text files get uploaded fairly fast and thrown into a queue. From there they are pulled one-by-one into a background worker that calls our parser (using boost spirit) to transform the text into xml and load relevant portions into our db. The parser can do about 100 of these at a time. I have rate-limiters on the background worker to only poll our queue every so often right now so it doesn't perform as fast. I can't throw up more than one background worker right now because my http requests start to drop -- the background worker and the webserver exist on the same machine and I believe it is because of cpu usage hitting 80-95%, although we could use more ram on it as well. I need to scale this better. How would you go about doing it? In answers to several questions: we use amazon web services so buying cheap extra hardware is a bit different from spawning a new amazon instance -- maybe somebody has done some code that autospawns instances on amount of load? we do have a http server that just stuffs our files into a queue so the only reason it would be affected is because the cpu is busy dealing with tons of parsing related stuff I already rate-limit our background workers, although we don't utilize that in the parser itself I haven't tried nice yet but I've used it in the past -- I need to write down some benchmarks on that the parser is completely seperate from the web server -- we have nginx/merb as our web/application server and a rake task calling c++ as our background worker -- yet they do exist on the same machine
I would buy a couple of cheap computers and do the text processing on those. As Jeff says in his latest post, "Always try to spend your way out of a performance problem first by throwing faster hardware at it."
385,370
385,373
User Breakpoint from nowhere
I have some code in MS VC++ 6.0 that I am debugging. For some reason, at this certain point where I am trying to delete some dynamically allocated memory, it breaks and I get a pop up message box saying "User Breakpoint called from code at blah blah".. then the Disassembly window pops up and I see *memory address* int 3 The odd thing is, there is NOWHERE in the code that I am calling an assembly instruction like this (I think asm int 3 is a hardware break command for x86?).. what could be causing this? EDIT: ANSWER: My code was "walking off the end" of an array, but only in the locations marked by Visual Studio debug with 0xFDFDFDFD, which is called a NoMan'sLand fence.. I think its also called an Off-by-one error.. This array was unrelated to the point where i was freeing the memory when the error was occuring. Which made it harder to spot.. :(
You're probably hitting code in the debug heap routines that have found heap corruption. What does the call stack look like when you've hit the Int 3? Edit: Based on the stack trace in your comments, the routine _CrtIsValidHeapPointer() is saying that the pointer being freed is bad. Here's the snippet of code from MSVC's DBGHEAP.C source: /* * If this ASSERT fails, a bad pointer has been passed in. It may be * totally bogus, or it may have been allocated from another heap. * The pointer MUST come from the 'local' heap. */ _ASSERTE(_CrtIsValidHeapPointer(pUserData)); pUserData would be the value of the pointer you're deleting.
385,556
385,568
How does C++ pick which overloaded function to call?
Say I have three classes: class X{}; class Y{}; class Both : public X, public Y {}; I mean to say I have two classes, and then a third class which extends both (multiple-inheritance). Now say I have a function defined in another class: void doIt(X *arg) { } void doIt(Y *arg) { } and I call this function with an instance of both: doIt(new Both()); This causes a compile-time error, stating that the function call is ambiguous. What are the cases, besides this one, where the C++ compiler decides the call is ambiguous and throws an error, if any? How does the compiler determine what these cases are?
Simple: if it's ambiguous, then the compiler gives you an error, forcing you to choose. In your snippet, doIt(new Both()) is ambiguous because a Both* converts equally well to an X* or a Y*: both overloads match, and neither conversion ranks better than the other. If you want to explicitly call one or the other, you cast the argument appropriately: void doIt(X *arg) { } void doIt(Y *arg) { } Both *both = new Both; doIt((X*)both); // calls doIt(X*) doIt((Y*)both); // calls doIt(Y*) delete both;
385,629
385,643
Draw Order in OpenGL
I am rendering an OpenGL scene that include some bitmap text. It is my understanding the order I draw things in will determine which items are on top. However, my bitmap text, even though I draw it last, is not on top! For instance, I am drawing: 1) Background 2) Buttons 3) Text All at the same z depth. Buttons are above the background, but text is invisible. It I change the z depth of the text, I can see it, but I then have other problems. I am using the bitmap text method from Nehe's Tutorials. How can I make the text visible without changing the z depth?
You can simply disable the z-test via glDisable (GL_DEPTH_TEST); // or something related.. If you do so the Z of your text-primitives will be ignored. Primitives are drawn in the same order as you call the gl-functions. Another way would be to set some constant z-offset via glPolygonOffset (not recommended) or set the depth-compare mode to GL_LEQUAL via glDepthFunc (the EQUAL part is the important one). That makes sure that primitives drawn with the same depth are rendered on top of each other. Hope that helps.
385,660
387,017
How do I setup a callback mechanism for RichEdit in win32
In win32, how do I setup a callback mechanism for RichEdit I have not created myself? PART 1 I'm reading from a textedit field in another application's GUI. This works just fine now, except after the first read I'd like to fetch only new or modified lines. In GTK+ or Qt I'd just install a callback on some signal the field edits when its changed, but how does it work on Win32? My MSDN searches result with nothing useful, probably because I don't know the exact term to search for. The class of the textedit is RichText20W, and it has some messages that are probably used somehow, though that article just discusses using them for the parent of the class. PART 2 Also, if there is no such "text changed, here is the newly inserted text" callback which returns the new content immediately, I need some way to easily detect what is new. From top-of-my-head: Have a marker at the end of the text block we've read, and only read between that and the end. Store what we've read previously, and after a second read, remove the duplicate part from the latter to have the newly inserted stuff. Option 2 might not be viable, since the textedit can contain any amount of text. The marker part sounds doable, but yet again, my feeble Win32 skills and horrible Win32 function names prevent me from finding the right way to do it. Note that all these must be doable for a textedit I do not own and have not created, they belong to a third party process. Code samples in C++ highly appreciated. Disclaimer Obviously, if there is some better way of doing it, let me know. I only assumed callback would be the way to go based on my previous experience with GTK+/Qt. Feel free to show me the path :)
Win32 controls don't work on message-specific callbacks that you can subscribe to. They just send messages to their parent window when something happens, in this case EN_UPDATE, EN_CHANGE and all that. Even these events don't tell you what text changed. They only tell you that it did change. You could subclass the parent, but the documentation for SetWindowLongPtr explicitly says you "should not subclass a window class created by another process." Something like this is probably possible with hooks, but I haven't used them enough to say for certain how you'd actually do it.
386,252
386,263
operator ThisClass() causing stack overflow
I want to keep the class simple and not defined a constructor so i can do Pt data = {0, 5}; so i figured the best way convert Pt_t from a short to long or vice versa is to do something like this. template <class T> struct Pt_t { T x, y; template <class T2> operator Pt_t<T2>() { Pt_t pt = {x, y}; return pt; } }; The compiler doesnt like this and calls operator Pt_t on return pt; thus getting a stack overflow. How do i prevent this? the only solution i can think of is having Pt_t use constructors removing Pt_t pt = {1, 2}; which i prefer to keep if i can.
I'm pretty sure the unqualified Pt_t in your function's body is Pt_t<T>, but don't you want it to be Pt_t<T2>? You'll need to qualify it explicitly.
386,345
386,365
C++ classes - constructor declaration in derived class
Socket has a constructor that takes a winsock SOCKET as parameter and stores it in a private variable: Socket::Socket(SOCKET s) { this->s = s; } I'm trying to make a class "GameSocket" that will parse data from my Socket class: class GameSocket : public Socket { protected: void ParseData(unsigned char* data, int size); }; Next to these classes, I have a "Server" class that creates new sockets when needed: GameSocket* Server::Accept() { SOCKET a = accept(s, 0, 0); if(a==SOCKET_ERROR) { return 0; } else { return new GameSocket(a); } } However, this gives me an error on the last "else": error C2664: 'GameSocket::GameSocket' : cannot convert parameter 1 from 'SOCKET' to 'const GameSocket &' I must be missing something with constructors when dealing with derived classes... Don't go too hard on me, I'm relatively new to C++ and OOP
Add in a constructor for GameSocket class GameSocket : public Socket { public: // you need to add GameSocket(SOCKET s) : Socket(s) {} protected: void ParseData(unsigned char* data, int size); };
386,379
418,190
DirectShow - How to overlay video with text
I am writing an application for Windows Mobile where I want to display some text over the video preview on the screen. Does anyone know how this could be done or if there's any good tutorial about it? Thanks!
Try this: http://www.codeguru.com/cpp/g-m/directx/article.php/c1229 Not sure if it applies to Windows Mobile, though.
386,438
386,478
Boolean operations on rectangle polygons
Avast there fellow programmers! I have the following problem: I have two rectangles overlapping like shown on the picture below. I want to figure out the polygon consisting of point ABCDEF. Alternate christmas description: The red cookie cutter is cutting away a bit of the black cookie. I want to calculate the black cookie. Each rectangle is a data structure with 4 2d-vertices. What is the best algorithm to achieve this?
This is a special case of general 2D polygon clipping. A good place to start is the Weiler-Atherton algorithm. Wikipedia has a summary and links to the original paper. The algorithm seems to match the data structure you've described pretty well. Note that it's quite possible you'll end up with a rectangle with a hole in it (if the red one is entirely inside the black one) or even two rectangles (eg if the red is taller and skinnier than the black). If you're certain there is only one corner of the red rectangle inside the black one then the solution should be much simpler.
386,517
386,596
Need a c++ compiler to work with libraries (boost, ...)
Currently I`m using Visual Studio for writing code in C++. But it seems so weighty that I decided to switch for another one, preferably free, not so strict to system resources (I mean memory, of course) as VS to learn libraries, such as Boost and Qt. What compiler do you suggest?
Code::Blocks is exactly what you are after. You can download it here: http://www.codeblocks.org/downloads/5 Choose the version with the MinGW compiler bundled with it (a Windows port of GCC). You can switch between that and the VC++ compiler as and when you like. Code::Blocks has all the stuff you want: debugger integration, code completion, class browser, todo list etc. It even imports Visual C++ projects. Don't use Dev C++ which has already been recommended. It's very very old and outdated.
386,559
386,565
How to delete a file from a C++ app without console window in Windows?
I need to delete a temporary file from my C++ windows application (developed in Borland C++ Builder). Currently I use a simple: system("del tempfile.tmp"); This causes a console window to flash in front of my app and it doesn't look very professional. How do I do this without the console window?
It sounds like you need the Win32 function DeleteFile(). You will need to #include <windows.h> to use it.
387,092
390,758
Hand Coded GUI Versus Qt Designer GUI
I'm spending these holidays learning to write Qt applications. I was reading about Qt Designer just a few hours ago, which made me wonder : what do people writing real world applications in Qt use to design their GUIs? In fact, how do people design GUIs in general? I, for one, found that writing the code by hand was conceptually simpler than using Qt Designer, although for complex GUIs Designer might make sense. Large GUIs might be possible using Designer, but with time they might become very difficult to manage as complexity increases (this is just my opinion). I also downloaded the AmaroK source code to take a peek at what those guys were doing, and found many calls to addWidget() and friends, but none of those XML files created by Designer (aside: AmaroK has to be my favorite application ever on any platform). What, then, is the "right" way to create a GUI? Designer or code? Let us, for this discussion, consider the following types of GUIs : Simple dialogs that just need to take input, show some result and exit. Let's assume an application that takes a YouTube URL and downloads the video to the user's hard disk. The sort of applications a newbie is likely to start out with. Intermediate level GUIs like, say, a sticky notes editor with a few toolbar/menu items. Let's take xPad for example (http://getxpad.com/). I'd say most applications falling in the category of "utilities". Very complex GUIs, like AmaroK or OpenOffice. You know 'em when you see 'em because they make your eyes bleed.
Our experience with Designer started in Qt3. Qt3 At that point, Designer was useful mainly to generate code that you would then compile into your application. We started using it for that purpose, but with all generated code, once you edit it you can no longer go back and regenerate it without losing your edits. We ended up just taking the generated code and doing everything by hand henceforth. Qt4 Qt4 has improved on Designer significantly. No longer does it only generate code: you can dynamically load in your Designer files (in XML) and dynamically connect them to the running objects in your program -- no generated code; however, you do have to name the items in Designer and stick with those names to not break your code. My assessment is that it's nowhere near as useful as Interface Builder on Mac OS X, but at this point, I could see using the Designer files directly in a program. We haven't moved back to Designer since Qt3, but still use it to prototype, and debug layouts. For your problems: You could probably get away with using the standard dialogs that Qt offers, such as QInputDialog; or if you subclass QDialog, use QDialogButtonBox so that your buttons get the proper platform layout. You could probably do something more limited like xPad with limited Designer functionality. I wouldn't think you could write something like OpenOffice solely with Designer but maybe that's not the point. I'd use Designer as another tool, just like your text editor. Once you find the limitations, try a different tool for that new problem. I totally agree with Steve S that one advantage of Designer is that someone else who's not a programmer can do the layout.