806,300
806,461
Optimization settings in VS
I am working on an application/GUI created with Win32/ATL. There is a UI field (a text field) which works fine as designed in a debug build. With a release build it gets malformed: it looks as if the width of the text field is infinite. So my question is: how can the same code work in a debug build but not in a release build? Is there any chance it could be related to the optimization settings in release builds? In our release build the optimization settings are set to Minimize Size, Favor Small Code. Also, when I try to debug the release build, it again works fine. Does debugging a release build remove optimization? I want to reproduce the issue while debugging the release build, or if possible in the debug build. Any suggestions?
VC++ fills its runtime memory in debug mode with special values; it does not guarantee any fixed or zero initialization, though. Read this SO thread for details: Uninitialized Memory Blocks. These values may not crash the application every time. I have run into similar situations a few times. One common example is accessing past array boundaries: many a time this was not evident in debug mode but crashed in release builds. In general, the runtime is more lenient in debug mode and provides more cover for programmer errors. As others have pointed out, there is definitely some problem with the code; most probably uninitialized variables.
806,429
806,448
What is a good tutorial for C++ policy-based class design?
I have just started reading Modern C++ Design: Generic Programming and Design Patterns Applied and I am wondering if I need to go through some very basic tutorial on policy-based class design before I dive in. Will chapter 1 provide all I need to follow it? I am already experienced with template usage (STL/Boost/Poco) and writing template classes/methods.
In my experience it's pretty much a self-contained book, as long as you know your way around C++. This is kind of a silly question. Just dive into the book; if something's not clear, look it up. Why ask questions about chapter 1 when you can just read it and find out? Fear is the mind-killer.
806,543
806,560
C++ Macros: manipulating a parameter (specific example)
I need to replace GET("any_name") with String str_any_name = getFunction("any_name"); The hard part is how to trim off the quote marks. Possible? Any ideas?
How about: #define UNSAFE_GET(X) String str_##X = getFunction(#X); Or, to safeguard against nested macro issues: #define STRINGIFY2(x) #x #define STRINGIFY(x) STRINGIFY2(x) #define PASTE2(a, b) a##b #define PASTE(a, b) PASTE2(a, b) #define SAFE_GET(X) String PASTE(str_, X) = getFunction(STRINGIFY(X)); Usage: SAFE_GET(foo) And this is what is compiled: String str_foo = getFunction("foo"); Key points: Use ## to combine macro parameters into a single token (token => variable name, etc.) and # to stringify a macro parameter (very useful when doing "reflection" in C/C++). Use a prefix for your macros, since they all share the same "namespace" and you don't want collisions with any other code. The wrapper macros (STRINGIFY/PASTE) help if you nest macros, i.e. call SAFE_GET from another macro with other merged/stringized parameters (as per the comment below, thanks!).
806,614
852,403
How to trace a certain object in VC++ 6
I was wondering how to trace a certain object in VC++ 6. In more recent versions of Visual Studio it can be done by tracing the unique object ID with "tracepoints", but I still haven't figured out how to do this in VC++ 6. Maybe you guys can point me in the right direction. Thanks in advance! Best regards, zhengtonic
Click on the object, go to the breakpoint and select the tracepoint. :)
806,830
806,880
Capture CPU and Memory usage dynamically
I am running a shell script to execute a C++ application which measures the performance of an API. I can capture the latency (the time taken to return a value for a given set of parameters) of the API, but I also wish to capture the CPU and memory usage alongside it, at intervals of say 5-10 seconds. Is there a way to do this without affecting the performance of the system too much, and within the same script? I have found many examples where one can do it outside (independently) of the script we are running, but not one where we can do it within the same script.
I'd suggest using the 'time' command and also the 'vmstat' command. The first will give the CPU usage of the executable's run, and the second a periodic (e.g. once per second) dump of the CPU/memory/IO of the system. Example: time dd if=/dev/zero bs=1K of=/dev/null count=1024000 1024000+0 records in 1024000+0 records out 1048576000 bytes (1.0 GB) copied, 0.738194 seconds, 1.4 GB/s 0.218u 0.519s 0:00.73 98.6% 0+0k 0+0io 0pf+0w <== that's the output of time
806,846
806,869
Query on Static member variables of a class in C++
Sorry if this question seems trivial to many here. In some C++ code there is something like the following: class Foo { public: static int bands; ... ... private: ... ... }; // class definition ends int Foo::bands; // Note: here it's not initialized to any value! Why is the above statement needed again when 'bands' is already declared inside the class as static? Also, can a static variable be declared as a private member variable in a class?
C++ makes a distinction between declaring and defining. bands is declared within the class, but not defined. A non-static data member would be defined when you define an object of that type, but since a static member is not part of any one specific object, it needs its own definition.
806,976
807,018
How to resolve this bad_alloc problem?
I'm developing an application that needs to interact over FTP. For this communication I am currently using C++, Visual Studio and Poco on Windows. The following line results in a bad_alloc exception... ftp = new FTPClientSession("127.0.0.1", 21); So I went down and tried initializing a StreamSocket first, also fails... StreamSocket ss = new SocketAddress("127.0.0.1", 21); When going even further down, and it seems the bad_alloc is coming from: IPAddress * ip = new IPAddress("127.0.0.1"); That constructor contains: (I see in the debugger that _pImpl isn't initialised) IPAddress::IPAddress(const std::string& addr) { _pImpl = IPv4AddressImpl::parse(addr); if (!_pImpl) throw InvalidAddressException(addr); } IPv4AddressImpl::parse contains: static IPv4AddressImpl* parse(const std::string& addr) { if (addr.empty()) return 0; #if defined(_WIN32) struct in_addr ia; ia.s_addr = inet_addr(addr.c_str()); if (ia.s_addr == INADDR_NONE && addr != "255.255.255.255") return 0; else return new IPv4AddressImpl(&ia); #else #if __GNUC__ < 3 struct in_addr ia; ia.s_addr = inet_addr(addr.c_str()); if (ia.s_addr == INADDR_NONE && addr != "255.255.255.255") return 0; else return new IPv4AddressImpl(&ia); #else struct in_addr ia; if (inet_aton(addr.c_str(), &ia)) return new IPv4AddressImpl(&ia); else return 0; #endif #endif } The following code with inet_addr from Winsock2.h (ws2_32.lib) results in "SOMETHING ELSE". unsigned long ulAddr = INADDR_NONE; ulAddr = inet_addr("91.183.48.210"); if (ulAddr == INADDR_NONE) msg("NONE"); else msg("SOMETHING ELSE"); I don't see what is going wrong here... Is there a way to debug this further or does someone know what goes wrong?
I'm no rocket scientist, but it looks like you'll have to step into IPv4AddressImpl() with ia.s_addr populated from the string "127.0.0.1". Just out of interest: do you get the error when you use your real IP address instead of the loopback? And do you have an FTP server running on that machine? And are you absolutely certain the string is "127.0.0.1"? Update: There are really only three things that can cause the exception. addr.empty() is true, unlikely if addr is "127.0.0.1". ia.s_addr is INADDR_NONE, again unlikely since inet_addr() should have no trouble with 127.0.0.1 (as you've tested). the constructor for IPv4AddressImpl() returns NULL (or throws the bad_alloc exception itself). That last one seems the most likely; what happens when you write your own code to do it? struct in_addr ia; ia.s_addr = inet_addr("127.0.0.1"); IPv4AddressImpl *x = new IPv4AddressImpl(&ia); In addition, if you breakpoint on the "if (!_pImpl) throw..." line and examine _pImpl: zero means ::parse returned NULL and it's the IPAddress() constructor throwing bad_alloc. non-zero means ::parse itself threw an exception, which could only be from the IPv4AddressImpl() constructor. You may have to step into that to debug further.
807,073
807,088
What does object* foo(bar) do?
For some class C: C* a = new C(); C* b(a); //what does it do? C* b = a; //is there a difference?
C* b(a) and C* b = a are equivalent: the first is direct-initialization and the second copy-initialization, but for a pointer both simply make b point to the same object a points to. As with many languages, there's more than one way to do it...
807,105
807,788
How to upload a file in C/C++ using HTTP with libcurl?
I would like to upload a file (a picture, in my case) in C/C++ using HTTP with libcurl. It would be great to have a working sample in C/C++, with (optionally) the PHP code for the server side.
Check out the tutorial at the libcurl site.
807,245
808,242
Good, simple configuration library for large c++ project?
We are developing a rather large project in C++, where many components require configuration parameters. We would like to use a central place to configure everything (like a registry), preferably with a nice and simple GUI (e.g. like Firefox's about:config) and a simple API. I am pretty sure that many applications have this kind of problem, but I could not find any libraries that can be readily used for this. Does anyone know of a good (preferably free) library to use for this? It should work cross-platform on Windows and Linux.
boost::program_options provides unified (and cross platform) support for configuration from command line, environment variables and configuration files. It seems like it ought to scale to multiple bits of a large software system registering an interest in various parameters (e.g option groups). Not much help with the GUI or persistence side of things though (but then what's wrong with editing a config file with a text editor ?).
807,846
807,864
Are there any articles or advice on creating a C++ chat server with a C# client?
My current class is planning on creating a basic networked game, but we have decided to take on the task of making a C++ server with a C# client. I understand this is probably a difficult task, but I was wondering if there is any advice on making this happen. I am sorry I do not have any more information than this. We are just getting started and just want to make sure this is possible within the time span we have.
This works fine. C# and C++ both support TCP and UDP network connections, in many flavors. Either would work fine for the client or server. The main issues you'll need to watch for are deciding how to transmit your data, and making sure that any packets you pass over the wire are serialized/deserialized the same way. This would be simpler if you were to use the same language (or at least the same libraries) on both sides, i.e. if you used C++/CLI for the server and stuck with some of the same managed .NET tools for working with data transmission.
807,939
807,949
What is the difference between "new" and "malloc" and "calloc" in C++?
What is the difference between "new" and "malloc" and "calloc" and others in family? (When) Do I need anything other than "new" ? Is one of them implemented using any other?
new and delete are C++-specific features. They didn't exist in C. malloc is the old-school C way to do things. Most of the time, you won't need it in C++. malloc allocates uninitialized memory; the allocated memory has to be released with free. calloc is like malloc but initializes the allocated memory to zero; it also needs to be freed with free. new initializes the allocated memory by calling the constructor (if it's an object). Memory allocated with new should be released with delete (which in turn calls the destructor). new does not require you to manually specify the size you need or to cast the result to the appropriate type. Thus, it's more modern and less prone to errors.
807,979
807,990
Efficient Huffman tree search while remembering path taken
As a follow-up to my question regarding an efficient way of storing Huffman trees, I was wondering what would be the fastest and most efficient way of searching a binary tree (based on the Huffman coding output) while storing the path taken to a particular node. This is what I currently have: Add the root node to the queue. While the queue is not empty, pop an item off the queue and check whether it is what we are looking for. If yes: follow a head pointer back to the root node, on each node we visit checking whether we came from the left or the right and making a note of it, then break out of the search. If no: enqueue the left and right nodes. Since this is a Huffman tree, all of the entries that I am looking for will exist. The above is a breadth-first search, which is considered the best for Huffman trees since items that occur in the source more often are higher up in the tree to get better compression. However, I can't figure out a good way to keep track of how we got to a particular node without backtracking using the head pointer I put in the node. In this case, I am also getting all of the right/left paths in reverse order; for example, if we follow the head to the root and find that from the root it is right, left, left, we get left, left, right, or 001 in binary, when what I am looking for is to get 100 in an efficient way. Storing the path from the root to the node as a separate value inside the node was also suggested; however, this would break down if we ever had a tree deeper than however many bits the variable we created for that purpose could hold, and at that point storing the data would also take up huge amounts of memory.
Create a dictionary of value -> bit-string, that would give you the fastest lookup. If the values are a known size, you can probably get by with just an array of bit-strings and look up the values by their index.
808,148
808,160
When to use Malloc instead of New
Duplicate of: In what cases do I use malloc vs new? Just re-reading this question: What is the difference between "new" and "malloc" and "calloc" in C++? I checked the answers but nobody answered the question: When would I use malloc instead of new? There are a couple of reasons (I can think of two). Let the best float to the top.
A couple that spring to mind: When you need code to be portable between C++ and C. When you are allocating memory in a library that may be called from C, and the C code has to free the allocation.
808,215
808,248
Is there a data structure that doesn't allow duplicates and also maintains order of entry?
Duplicate: Choosing a STL container with uniqueness and which keeps insertion ordering I'm looking for a data structure that acts like a set in that it doesn't allow duplicates to be inserted, but also knows the order in which the items were inserted. It would basically be a combination of a set and list/vector. I would just use a list/vector and check for duplicates myself, but we need that duplicate verification to be fast as the size of the structure can get quite large.
Take a look at Boost.MultiIndex. You may have to write a wrapper over this.
808,403
808,465
member template specialization and its scope
It appears to me that C++ does not allow member template specialization in any scope other than namespace and global scope (MS VSC++ Error C3412). But to me it makes sense to specialize a base class's primary member template in the derived class because that is what derived classes do - specialize things in the base class. For instance, consider the following example: struct Base { template <class T> struct Kind { typedef T type; }; }; struct Derived : public Base { /* Not Allowed */ using Base::Kind; template <> struct Kind <float> { typedef double type; }; }; int main(void) { Base::Kind<float>::type f; // float type desired Derived::Kind<float>::type i; // double type desired but does not work. } My question is why isn't it allowed?
I get what you're trying to do, but you are not doing it right. Try this : struct Base{}; struct Derived{}; // Original definition of Kind // Will yield an error if Kind is not used properly template<typename WhatToDo, typename T> struct Kind { }; // definition of Kind for Base selector template<typename T> struct Kind<Base, T> { typedef T type; }; // Here is the inheritance you wanted template<typename T> struct Kind<Derived, T> : Kind<Base, T> { }; // ... and the specialization for float template<> struct Kind<Derived, float> { typedef double type; };
808,464
808,525
C++: new call that behaves like calloc?
Is there a call I can make to new to have it zero out memory like calloc?
Contrary to what some are saying in their answers, it is possible. char * c = new char[N](); will zero-initialize all the characters (in reality, it's called value-initialization, but value-initialization amounts to zero-initialization for the elements of an array of scalar type), if that's what you are after. Worth noting that it also works for (arrays of) class types without a user-declared constructor, in which case every member of them is value-initialized: struct T { int a; }; T *t = new T[1](); assert(t[0].a == 0); delete[] t; It's not some extension or anything; it worked and behaved the same way in C++98 too, just there it was called default-initialization instead of value-initialization. Zero-initialization, however, is done in both cases for scalars or arrays of scalar or POD types.
809,227
809,341
Is it safe to use -1 to set all bits to true?
I've seen this pattern used a lot in C & C++. unsigned int flags = -1; // all bits are true Is this a good portable way to accomplish this? Or is using 0xffffffff or ~0 better?
I recommend you do it exactly as you have shown, since it is the most straightforward way. Initializing to -1 works always, independent of the actual sign representation, while ~ will sometimes have surprising behavior because you have to have the right operand type — only then do you get the highest value of the unsigned type. For an example of a possible surprise, consider this one: unsigned long a = ~0u; It won't necessarily store a pattern with all bits 1 into a. It first creates a pattern with all bits 1 in an unsigned int, and then assigns it to a. When unsigned long has more bits, not all of them end up 1. And consider this one, which will fail on a non-two's-complement representation: unsigned int a = ~0; // Should have done ~0u ! ~0 has to invert all the bits. Doing so yields -1 on a two's-complement machine (which is the value we need!), but not on other representations: on a one's-complement machine it yields zero, and thus the above would initialize a to zero there. The thing you should understand is that it's all about values, not bits. The variable is initialized with a value. If in the initializer you modify the bits of the value used for initialization, the value is generated according to those bits. The value you need, to initialize a to the highest possible value, is -1 or UINT_MAX. The second depends on the type of a — you would need ULONG_MAX for an unsigned long. The first, however, does not depend on the type, and it's a nice way of getting the highest value. We are not talking about whether -1 has all bits one (it doesn't always). And we're not talking about whether ~0 has all bits one (it does, of course). What we are talking about is what the resulting flags variable is. And for that, only -1 will work with every type and machine.
809,289
809,315
C++ Concurrent GET requests
I am writing a C++ application and would like to request several data files through HTTP GET requests simultaneously; where should I look to get started (it needs to be cross-platform)? Run the application. Create a list of URLs { "http://host/file1.txt", "http://host/file2.txt", "http://host/file3.txt" }. Request all the URLs simultaneously and load the contents into variables (I don't want disk writes). Each file has about 10 kB of data. What libraries would you recommend I use? libcurl? curlpp? Boost.Asio? Would I need to roll my own multithreading to request all the files simultaneously? Is there an easier way? Edit: I will need to make about 1000 GET requests simultaneously. Most likely I will do this in batches (100 at a time, creating more connections as earlier ones complete).
I would recommend libcurl. I'm not super-familiar with it, but it does have a multi-interface for performing multiple simultaneous HTTP operations. Depending on what solution you go with, it's possible to do asynchronous I/O without using multithreading. The key is to use the select(2) system call. select() takes a set of file descriptors and tells you if any of them have data available. If they do, you can then proceed to use read(2) or recv(2) on them without worrying about blocking.
809,324
809,443
linker error: undefined reference c++
I've tried looking at similar problems, but could not easily find one that helped my problem. I've created a project in C++ and am working on UNIX to compile, link, and run it. My specific problem is an undefined reference to a method I declare in a separate file. In the file SharedCache.cpp, I have the following method: int SharedCache::replaceLine(ullong address){ int evictPID = -1; int cacheSet = calcCacheSet( address ); //random uniformly-distributed value for cache line float numLines = static_cast<float>(CACHE_LINES_PER_SET); uint cacheLine = static_cast<uint>(uniformDistr( numLines )); if(cache[cacheSet][cacheLine] != NULL){ evictPID = cache[cacheSet][cacheLine]->PID; } uint PID= calcPID(address); uint tag= calcTag(address); cache[cacheSet][cacheLine]->setLine(PID, tag); //mutex method return PID; } The line uint cacheLine = static_cast<uint>( uniformDistr( numLines )); makes a call to the function I want to use from another file. The linker error is an undefined reference to this method. uniformDistr( float ); is declared in the header distributions.h and defined in distributions.cpp. In my project options I set the linker flag -distributions and I also compiled the distributions.cpp to make sure a distributions.o file exists to link with. From here, I don't know where to go, because this has not solved the problem.
I've not used KDevelop, however, on the command line you would just add distributions.o as an input file to the linking process. No need for dashes or leaving off the .o extension. Alternatively, can you just add distributions.cpp to your KDevelop project? That way it should get compiled and linked automatically (this is how it works in things like Visual Studio or Eclipse).
809,334
809,342
Making a Point class in c++
Right now I am using std::pair to represent a 2D point in C++. However, I am getting annoyed with having to write typedef std::pair<double, double> Point; Point difference = Point(p2.first - p1.first, p2.second - p1.second); instead of being able to overload operator+ and operator-. So, my question is: to make my Point class, should I
1. Publicly derive from std::pair and add my own member functions? This is nice because all my code can stay the same. I am not going to be doing anything like std::pair<double, double>* p = new Point; so I don't have to worry about things like virtual destructors.
2. Roll my own Point class? This is annoying since I am duplicating std::pair's functionality, however I am "doing it the pure way".
3. Make template specializations of operator+ and operator- for std::pair? Admittedly I don't remember if they go in source or header files.
I guess it's up for debate; I'd really like to do #1 but I don't know if it's a bad idea, since I've heard that inheriting from the STL is a no-no.
You could roll your own Point class, but use std::pair internally to store the data. This prevents the inheritance from STL issue, but still uses std::pair's functionality.
809,562
813,137
isAbstract template and visual studio
The following template will decide if T is abstract with g++. /** isAbstract<T>::result is 1 if T is abstract, 0 if otherwise. */ template<typename T> class isAbstract { class No { }; class Yes { No no[3]; }; template<class U> static No test( U (*)[1] ); // not defined template<class U> static Yes test( ... ); // not defined public: enum { result = sizeof( isAbstract<T>::template test<T>( 0 ) ) == sizeof(Yes) }; }; For example: struct myClass2 { virtual void f() {} }; struct myClass1 { virtual void f() = 0; }; bool twoAbstract = isAbstract<myClass2>::result; bool oneAbstract = isAbstract<myClass1>::result; However, it fails in visual studio 9.0 with the error: error C2784: 'AiLive::isAbstract<T>::No AiLive::isAbstract<T>::test(U (*)[1])' : could not deduce template argument for 'U (*)[1]' from 'myClass2' Does anyone have an idea of what the problem is and how to fix this? MSDN Reports that they now have an is_abstract class since VS2008 as part of TR1 (inside the header type_traits). However, it seems to be missing from my installation. PS. For reasons that are long and boring, I can't reimplement this via Boost. Update Also, tried replacing, template<class U> static No test( U (*)[1] ); with each of, template<class U> static No test( U (*x)[1] ); template<class U> static No test( U (*)() ); template<class U> static No test( U (*x)() ); and template <typename U> struct U2 : public U { U2( U* ) {} }; // Match if I can make a U2 template <typename U> static No test( U2<U> x ); and // Match if I can make a U2 template <typename U> static No test( U2<T> x ); No luck - all say that template argument cannot be deduced for U.
This works for me in VC9: template<typename T> class isAbstract { class No { }; class Yes { No no[3]; }; template<class U> static No test( U (*)[1] ); // not defined template<class U> static Yes test( ... ); // not defined public: enum { result = sizeof( test<T>( 0 ) ) == sizeof(Yes) }; }; Notice I just had to remove isAbstract<T>:: from the call to test.
809,807
809,817
How can I inhibit warning 4200 in Visual Studio 2005?
I can inhibit many warnings in Visual Studio 2005 SP1 in the C/C++ Advanced property page, which causes the IDE to use the /wd switch on the command line which invokes the compiler. However, when I try to inhibit warning 4200 (nonstandard extension used : zero-sized array in struct/union), it still appears when I compile. (Of course it's justified; I'm just not in a position to fix the code, nor would it be worth bothering, because it's generated, and the warning is entirely benign under the circumstances.) Does anybody happen to know if this is a bug in the compiler? Or might there be something I can do about it?
You mean like with a pragma? #pragma warning( disable : 4200 )
809,827
809,846
What is this programming technique? (Boost Library)
I'm trying to understand the example from program_options in the Boost library (http://www.boost.org/doc/libs/1_38_0/doc/html/program_options/tutorial.html#id3761458), especially this part: desc.add_options() ("help", "produce help message") ("compression", po::value<int>(), "set compression level") ; What exactly is it doing here, and which technique is that? The part desc.add_options() could be a function call, but how do the other () fit in? Is this some kind of operator overloading? Thanks!
The add_options() function actually returns a functor, that is, an object that overloads the () operator. This means that the following function call desc.add_options() ("help", "produce help message"); actually expands to desc.add_options().operator()("help", "produce help message"); operator() also returns a functor (a reference to the same object), so the calls can be chained as you have shown.
809,948
810,109
DLL References in Visual C++
I have had C++ experience but not MSVC. What I am trying to do is incorporate a .dll from an open source project into my project. The code is available and I have built it. I have the .dll as well as the .lib which as I understand it is required for C++ projects. Now unfortunately there is no simple "Add Reference", drop my .dll into an include directory and add that to my solution. I have edited the project property pages, the C/C++ Additional Include Directories option as well as adding the .lib as an additional linker dependency. I have created an include directory for the dll and lib inside my solution tree. My problem is when I try to include the header files from the documentation, VS output spits out error messages. Now I realize that I am using the dll/lib combo and that the .h files are not present in my solution so how do I add the proper includes? I am using QT toolkit also which is working but how I add the other header / dll from the open source library eludes me. Can someone please point me in the right direction.
You need to do a couple of things to use the library:
1. Make sure that you have both the *.lib and the *.dll from the library you want to use. If you don't have the *.lib, skip step 2.
2. Put a reference to the *.lib in the project. Right-click the project name in the Solution Explorer, then select Configuration Properties -> Linker -> Input and put the name of the lib in the Additional Dependencies property.
3. You have to make sure that VS can find the lib you just added, so go to the Tools menu and select Options... Then under Projects and Solutions select VC++ Directories and edit the Library Directories option. From here you can set the directory that contains your new lib by selecting 'Library Files' in the 'Show Directories For:' drop-down box and adding the path to your lib file to the list of directories. If you don't have a lib you can omit this, but while you're here you will also need to set the directory which contains your header files under 'Include Files'. Do it the same way you added the lib.
After doing this you should be good to go and can use your library. If you don't have a lib file you can still use the dll by importing it yourself: during your application's startup you can explicitly load the dll by calling LoadLibrary (see http://msdn.microsoft.com/en-us/library/ms684175(VS.85).aspx for more info). Cheers!
EDIT: Remember to use #include <foo.h> as opposed to #include "foo.h". The former searches only the include path; the latter searches the directory of the including file first, then the include path.
810,082
810,237
How does const after a function optimize the program?
I've seen some methods like this: void SomeClass::someMethod() const; What does this const declaration do, and how can it help optimize a program? Edit I see that the first part of this question has been asked before... BUT, it still doesn't answer the second part: how would this optimize the program?
If the compiler knows that the fields of a class instance are not modified across a const member function call, it doesn't have to reload any fields that it may have kept in registers before the call. This is sort of referred to in the C++ FAQ in the discussion on const_cast.
810,137
810,156
C++ - Passing Arrays To Methods
Here is a function similar to the one I've defined: void Function( BYTE *data ); What I would like to do is something like this: Function( new BYTE { 0x00, 0x00 } );
You cannot use the array initialiser syntax with dynamically allocated arrays using new. You could do something like this: BYTE *ary=new BYTE[2]; ary[0] = 0; ary[1] = 0; Function(ary); delete [] ary; But why are you using dynamically allocated memory here? Is the array held onto outside of the scope of the current function? If not, you can use an array allocated on the stack: BYTE ary[2] = {0}; Function(ary); In C++, a preferred method is to use the STL class std::vector which acts like a dynamically allocated (but type safe) array: std::vector<BYTE> ary(2); Function(&ary[0]);
810,177
810,551
Portable shared objects?
Is it possible to use shared object files in a portable way, like DLLs in Windows? I'm wondering if there is a way I could provide a compiled library, ready to use, for Linux. The same way you can compile a DLL in Windows and it can be used on any other Windows (ok, not ANY other, but on most of them it can). Is that possible in Linux? EDIT: I've just woken up and read the answers. There are some very good ones. I'm not trying to hide the source code. I just want to provide an already-compiled-and-ready-to-use library, so users with no compilation experience don't need to do it themselves. Hence the idea is to provide a .so file that works on as many different Linuxes as possible. The library is written in C++, using the STL and Boost libraries.
I highly, highly recommend using the LSB app / library checker. It's going to tell you quickly if you:

- Are using extensions that aren't available on some distros
- Introduce bash-isms in your install scripts
- Use syscalls that aren't available in all recent kernels
- Depend on non-standard libraries (it will tell you what distros lack them)
- And lots upon lots of other very good checks

You can get more information here as well as download the tool. It's easy to run: just untar it, run a perl script and point your browser at localhost; the rest is browser driven. Using the tool, you can easily get your library / app LSB certified (for both versions) and make the distro packager's job much easier. Beyond that, just use something like libtool (or similar) to make sure your library is installed correctly, provide a static object for people who don't want to link against the DSO (it will take time for your library to appear in most distributions, so writing a portable program, I can't count on it being present) and comment your public interface well. For libraries, I find that Doxygen works the best. Documentation is very important; it surely influences my choice of library to use for any given task. Really, again, check out the app checker; it's going to give you portability problem reports that would take a year of having the library out in the wild to obtain otherwise. Finally, try to make your library easy to drop 'in tree', so I don't have to statically link against it. As I said, it could take a couple of years before it becomes common in most distributions. It's much easier for me to just grab your code, drop it in src/lib and use it, until and if your library is common. And please, please give me unit tests; TAP (test anything protocol) is a good and portable way to do that. If I hack your library, I need to know (quickly) if I broke it, especially when modifying it in tree or in situ (if the DSO exists).
810,651
814,311
Help installing C++ for Netbeans
I am trying to install C++ support for NetBeans. I tried installing Cygwin and MinGW, but I can't compile because the make file that comes with MinGW is incompatible.
Try these Instructions from the Netbeans Website
810,657
1,413,048
Fastest code C/C++ to select the median in a set of 27 floating point values
This is the well-known selection algorithm; see http://en.wikipedia.org/wiki/Selection_algorithm. I need it to find the median value of a set of 3x3x3 voxel values. Since the volume is made of a billion voxels and the algorithm is recursive, it had better be a little bit fast. In general it can be expected that values are relatively close. The fastest known algorithm I have tried out so far uses the quicksort partition function. I would like to know if there is a faster one. I've "invented" a 20% faster one using two heaps, but expected an even faster one using a hash. Before implementing this, I'd like to know if a blitz fast solution already exists out there. The fact that I'm using floats shouldn't matter since they can be considered as unsigned integers after inverting the sign bit. The order will be preserved. EDIT: benchmark and source code moved into a separate answer as suggested by Davy Landman. See below for the answer by chmike. EDIT: The most efficient algorithm so far was referenced below by Boojum as a link to the Fast Median and Bilateral Filtering paper, which is now the answer to this question. The first smart idea of this method is to use radix sort; the second is to combine the median search of adjacent pixels that share a lot of pixels.
Since it sounds like you're performing a median filter on a large array of volume data, you might want to take a look at the Fast Median and Bilateral Filtering paper from SIGGRAPH 2006. That paper deals with 2D image processing, but you might be able to adapt the algorithm for 3D volumes. If nothing else, it might give you some ideas on how to step back and look at the problem from a slightly different perspective.
810,677
810,713
What utf format should boost wdirectory_iterator return?
If a file contains a £ (pound) sign then directory_iterator correctly returns the utf8 character sequence \xC2\xA3 wdirectory_iterator uses wide chars, but still returns the utf8 sequence. Is this the correct behaviour for wdirectory_iterator, or am I using it incorrectly? AddFile(testpath, "pound£sign"); wdirectory_iterator iter(testpath); TS_ASSERT_EQUALS(iter->leaf(),L"pound\xC2\xA3sign"); // Succeeds TS_ASSERT_EQUALS(*iter, L"pound£sign"); // Fails
The encoding for wide chars (wchar_t objects) is implementation dependent. For the second statement (i.e. L"pound£sign") to work, you will probably need to change the underlying locale. The default is "C" which does not know about the pound character. The hex value succeeds since this does not require mapping the glyph to a value in a particular encoding. Note: I am skipping the exact wording of the standard w.r.t wchar_t, extended character sets etc for brevity.
810,839
810,850
Throwing exceptions from constructors
I'm having a debate with a co-worker about throwing exceptions from constructors, and thought I would like some feedback. Is it OK to throw exceptions from constructors, from a design point of view? Let's say I'm wrapping a POSIX mutex in a class, it would look something like this:

class Mutex {
public:
    Mutex() {
        if (pthread_mutex_init(&mutex_, 0) != 0) {
            throw MutexInitException();
        }
    }

    ~Mutex() {
        pthread_mutex_destroy(&mutex_);
    }

    void lock() {
        if (pthread_mutex_lock(&mutex_) != 0) {
            throw MutexLockException();
        }
    }

    void unlock() {
        if (pthread_mutex_unlock(&mutex_) != 0) {
            throw MutexUnlockException();
        }
    }

private:
    pthread_mutex_t mutex_;
};

My question is, is this the standard way to do it? Because if the pthread_mutex_init call fails the mutex object is unusable, so throwing an exception ensures that the mutex won't be created. Should I rather create a member function init for the Mutex class that calls pthread_mutex_init and returns a bool based on pthread_mutex_init's return value? This way I don't have to use exceptions for such a low-level object.
Yes, throwing an exception from the failed constructor is the standard way of doing this. Read this FAQ about Handling a constructor that fails for more information. Having a init() method will also work, but everybody who creates the object of mutex has to remember that init() has to be called. I feel it goes against the RAII principle.
810,872
810,895
Best library for audio file meta data?
I am looking for a library to read meta data from compressed and uncompressed audio files (i.e. mp3, ogg, etc.). In the past I have used libvorbis and id3lib, but I'm wondering if there are better libraries around? Ideally I would like a library that provides a common API to reading meta data from all the various formats. I realize more advanced libraries probably have id3lib and libvorbis as dependencies, and I don't mind that, I just would like to avoid having to write my own wrapper.
TagLib seems like a good candidate.
810,894
810,911
Reading cookies from default browser in C++
I want to create a C++ application that works together with a website. In order to keep the application synchronized with the website I want to be able to read some cookies from the user's default browser. Is there any way to do this?
Not in the general sense - there's no real defined format for cookie storage, so each browser is free to keep the cookie database wherever, and in whatever style, it prefers. You could implement cookie reading functions for the mainstream browsers (IE, Firefox), but that would leave some people out. It would also be non-robust - what happens when the user clears their cookie cache, or uses more than one browser? If you want to keep your application synchronised with a website, I'd recommend that you have it call directly into web services (RESTful, etc) on that site, with a username/identifier your application users configure. This way, you're not tied to an arbitrary 3rd party that is, at best, only "maybe" right.
811,438
811,789
Which Logging tools do you use for Windows?
I'm looking at adding logging to an application, and I'm considering using Kiwi syslogd and a freeware library (clSyslog) to send logging messages to the daemon. I briefly looked at log4c, and found compiling it with VC++ would take me more time than I had. What tools do you use and recommend for logging messages?
In C++ I use log4cxx a lot. I don't see why it's a problem to compile; it works like a champ. It brings lots of benefits. To name just one: you can redirect your log statements into syslog or the Windows event log without ever touching your code base, just by changing the configuration.
811,641
811,659
Windows Threading Wait Method
I'm creating a thread class to encapsulate the windows thread methods. I'm trying to create a method that makes the application wait for the thread to complete before it exits the application. If I use a while loop and boolean flag, it works but obviously it spikes my CPU use and it's just not ideal. What ways would you use to wait for the completion of a thread? I'm not really looking for code here, just areas to look into.
After you use CreateThread to get a thread handle, pass it into the Win32 API WaitForSingleObject: WaitForSingleObject(threadhandle, INFINITE); If you do not use CreateThread (because you use another threading package), or perhaps your thread is always alive... Then you can still use WaitForSingleObject. Just create an event first with the Win32 API CreateEvent, and wait for the event to be set with WaitForSingleObject. At the end of your thread set the event with SetEvent and you can reset the event with ResetEvent. Most threading packages though will have their own way to wait for a thread. Like in boost::thread you can use .join() or a boost::condition.
811,720
811,890
C++ How to compile dll in a .exe
I am creating a C++ program, but I want to be able to offer just a .exe file to the user. However, I am using libraries (curl among others) which have some dll's. Is it possible to compile these dll's into the .exe file? I use Code::Blocks and MinGW.
In order to achieve that you will need static linking. This requires that all your libraries (and the libraries they depend upon, recursively) be available as static libraries. Be aware that the size of your executable will be large, as it will carry all the code from those static libraries. This is why shared libraries (DLLs) were invented in the first place: to be able to share common code among applications. However, that does not always work so well on Windows. I think what you may really want is an installer that installs your executable and all its dependent libraries.
811,724
811,807
Why is the Visual C++ compiler calling the wrong overload here?
Why is the Visual C++ compiler calling the wrong overload here? I have a subclass of ostream that I use to define a buffer for formatting. Sometimes I want to create a temporary and immediately insert a string into it with the usual << operator like this:

M2Stream() << "the string";

Unfortunately, the program calls the operator<<(ostream, void *) member overload, instead of the operator<<(ostream, const char *) nonmember one. I wrote the sample below as a test where I define my own M2Stream class that reproduces the problem. I think the problem is that the M2Stream() expression produces a temporary and this somehow causes the compiler to prefer the void * overload. But why? This is borne out by the fact that if I make the first argument for the nonmember overload const M2Stream &, I get an ambiguity. Another strange thing is that it calls the desired const char * overload if I first define a variable of type const char * and then call it, instead of a literal char string, like this:

const char *s = "char string variable";
M2Stream() << s;

It's as if the literal string has a different type than the const char * variable! Shouldn't they be the same? And why does the compiler cause a call to the void * overload when I use the temporary and the literal char string?
#include "stdafx.h"
#include <iostream>
using namespace std;

class M2Stream
{
public:
    M2Stream &operator<<(void *vp)
    {
        cout << "M2Stream bad operator<<(void *) called with " << (const char *) vp << endl;
        return *this;
    }
};

/* If I make first arg const M2Stream &os, I get
\tests\t_stream_insertion_op\t_stream_insertion_op.cpp(39) : error C2666: 'M2Stream::operator <<' : 2 overloads have similar conversions
\tests\t_stream_insertion_op\t_stream_insertion_op.cpp(13): could be 'M2Stream &M2Stream::operator <<(void *)'
\tests\t_stream_insertion_op\t_stream_insertion_op.cpp(20): or 'const M2Stream &operator <<(const M2Stream &,const char *)'
while trying to match the argument list '(M2Stream, const char [45])'
note: qualification adjustment (const/volatile) may be causing the ambiguity
*/
const M2Stream & operator<<(M2Stream &os, const char *val)
{
    cout << "M2Stream good operator<<(const char *) called with " << val << endl;
    return os;
}

int main(int argc, char *argv[])
{
    // This line calls the void * overload, outputs: M2Stream bad operator<<(void *) called with literal char string on constructed temporary
    M2Stream() << "literal char string on constructed temporary";

    const char *s = "char string variable";
    // This line calls the const char * overload, and outputs: M2Stream good operator<<(const char *) called with char string variable
    M2Stream() << s;

    // This line calls the const char * overload, and outputs: M2Stream good operator<<(const char *) called with literal char string on prebuilt object
    M2Stream m;
    m << "literal char string on prebuilt object";
    return 0;
}

Output:

M2Stream bad operator<<(void *) called with literal char string on constructed temporary
M2Stream good operator<<(const char *) called with char string variable
M2Stream good operator<<(const char *) called with literal char string on prebuilt object
The compiler is doing the right thing: Stream() << "hello"; should use the operator<< defined as a member function. Because the temporary stream object cannot be bound to a non-const reference but only to a const reference, the non-member operator that handles char const* won't be selected. And it's designed that way, as you see when you change that operator. You get ambiguities, because the compiler can't decide which of the available operators to use, because all of them were designed with rejection of the non-member operator<< in mind for temporaries. Then, yes, a string literal has a different type than a char const*. A string literal is an array of const characters. But that wouldn't matter in your case, I think. I don't know what overloads of operator<< MSVC++ adds. It's allowed to add further overloads, as long as they don't affect the behavior of valid programs. For why M2Stream() << s; works even when the first parameter is a non-const reference... Well, MSVC++ has an extension that allows non-const references to bind to temporaries. Put the warning level on level 4 to see a warning about that (something like "non-standard extension used..."). Now, because there is a member operator<< that takes a void const*, and a char const* can convert to that, that operator will be chosen and the address will be output, as that's what the void const* overload is for. I've seen in your code that you actually have a void* overload, not a void const* overload. Well, a string literal can convert to char*, even though the type of a string literal is char const[N] (with N being the number of characters you put). But that conversion is deprecated. It is not standard for a string literal to convert to void*; it looks to me like that is another extension of the MSVC++ compiler. But that would explain why the string literal is treated differently than the char const* pointer.
This is what the Standard says: A string literal (2.13.4) that is not a wide string literal can be converted to an rvalue of type "pointer to char"; a wide string literal can be converted to an rvalue of type "pointer to wchar_t". In either case, the result is a pointer to the first element of the array. This conversion is considered only when there is an explicit appropriate pointer target type, and not when there is a general need to convert from an lvalue to an rvalue. [Note: this conversion is deprecated. See Annex D. ]
811,951
811,980
/MT and /MD builds crashing, but only when debugger isn't attached: how to debug?
I have a small single-threaded C++ application, compiled and linked using Visual Studio 2005, that uses boost (crc, program_options, and tokenizer), a smattering of STL, and assorted other system headers. (Its primary purpose is to read in a .csv and generate a custom binary .dat and a paired .h declaring structures that "explain" the format of the .dat.) The tool is crashing (access violation on NULL) when run outside the debugger, only in release. E.g. pressing F5 does not cause the tool to crash, Ctrl-F5 does. When I re-attach the debugger, I get this stack:

ntdll.dll!_RtlAllocateHeap@12() + 0x26916 bytes
csv2bin.exe!malloc(unsigned int size=0x00000014) Line 163 + 0x63 bytes C
csv2bin.exe!operator new(unsigned int size=0x00000014) Line 59 + 0x8 bytes C++
>csv2bin.exe!Record::addField(const char * string=0x0034aac8) Line 62 + 0x7 bytes C++
csv2bin.exe!main(int argc=0x00000007, char * * argv=0x00343998) Line 253 C++
csv2bin.exe!__tmainCRTStartup() Line 327 + 0x12 bytes C

The line it's crashing on is a somewhat innocuous-looking allocation:

pField = new NumberField(this, static_cast<NumberFieldInfo*>(pFieldInfo));

...I don't believe it has reached the constructor yet; it's just allocating memory before jumping to the constructor. It has also executed this code dozens of times by the time it crashes, usually in a consistent (but otherwise non-suspicious) location. The problem goes away when compiling with /MTd or /MDd (debug runtime), and comes back when using /MT or /MD. The NULL is loaded from the stack, and I can see it in memory view. _RtlAllocateHeap@12 + 0x26916 bytes seems like a huge offset, like an incorrect jump has been made. I've tried _HAS_ITERATOR_DEBUGGING in a debug build and that hasn't brought up anything suspicious. Dropping a HeapValidate at the beginning and end of Record::addField shows an OK heap right up to when it crashes.
This used to work -- I'm not entirely sure what changed between now and the last time we compiled the tool (probably years ago, maybe under an older VS). We've tried an older version of boost (1.36 vs 1.38). Before dropping back to manual investigation of the code or feeding this to PC-Lint and combing through its output, any suggestions on how to effectively debug this? [I'll be happy to update the question with more info, if you request info in the comments.]
One little-known difference between running with the debugger attached or not is the OS debug heap (see also Why does my code run slowly when I have a debugger attached?). You can turn the debug heap off by using the environment variable _NO_DEBUG_HEAP. You can specify this either in your computer properties, or in the Project Settings in Visual Studio. Once you turn the debug heap off, you should see the same crash even with the debugger attached. That said, be aware memory corruptions can be hard to debug, as often the real cause of the corruption (like some buffer overrun) may be very far from where you see the symptoms (the crash).
811,974
812,336
Search Outlook Contact using COM?
I want to add support for searching for local Outlook contacts to my ATL/WTL app. Does anyone know if the Outlook COM interface (Office 2003 or greater) allows you to search for contacts? I already have LDAP lookup support but users want to be able to search their private contacts as well. Any information would be welcome.
To get access to the contacts you first have to get a Namespace object using the Application's GetNamespace function, passing "MAPI" as the namespace name. Then you use Namespace's GetDefaultFolder function, which gives you a MAPIFolder interface which contains an Items property. Next you call the Find function on the items.
812,409
812,423
How to design my classes to leverage a factory and be extensible?
My C++ SOA app has a concept of a "session" that is used to exchange data between services. For example, it's used for checking the legality of some service A operations before executing session B, which commits or rolls back changes. Whatever. I have 2 session modes: normal and what-if. Going further, I have different sessions: a session for legality, a session for assign, a session for commit, etc. This is the main problem: a legality session can be what-if or real, etc. How can I fix that and avoid code duplication? I can make an ISessionFactory interface and have WhatIfFactory and RealFactory implement it. Then I could make an ILegalitySession and make WhatIfLegalitySession and RealLegalitySession implement it. Then my factories would return the appropriate objects. This has 2 major problems. What if a new mode comes? I will have to implement a new factory and new classes for all sessions! What if a new session type comes? I have to change both factories... Perhaps resign from the 2 hierarchies and have what-if sessions "decorate" real sessions? How can I localize the change?
Try to implement your WhatIf with decorators, or extract some 'what if'-specific parts into a kind of strategy. Another option is using the State pattern: a 'WhatIf' state and a 'Real' state.
812,717
815,197
Is there any reason to use C instead of C++ for embedded development?
Question

I have two compilers on my hardware: C++ and C89. I'm thinking about using C++ with classes but without polymorphism (to avoid vtables). The main reasons I'd like to use C++ are:

- I prefer to use "inline" functions instead of macro definitions.
- I'd like to use namespaces, as prefixes clutter the code.
- I see C++ as a bit more type safe, mainly because of templates and verbose casting.
- I really like overloaded functions and constructors (used for automatic casting).

Do you see any reason to stick with C89 when developing for very limited hardware (4 KB of RAM)?

Conclusion

Thank you for your answers, they were really helpful! I thought the subject through and I will stick with C, mainly because:

- It is easier to predict actual code in C, and this is really important if you have only 4 KB of RAM.
- My team consists mainly of C developers, so advanced C++ features won't be frequently used.
- I've found a way to inline functions in my C compiler (C89).

It is hard to accept one answer as you provided so many good answers. Unfortunately I can't create a wiki and accept it, so I will choose the one answer that made me think most.
Two reasons for using C over C++:

1. For a lot of embedded processors, either there is no C++ compiler, or you have to pay extra for it.
2. My experience is that a significant proportion of embedded software engineers have little or no experience of C++, either because of (1), or because it tends not to be taught on electronic engineering degrees, and so it would be better to stick with what they know.

Also, the original question, and a number of comments, mention the 4 KB of RAM. For a typical embedded processor, the amount of RAM is (mostly) unrelated to the code size, as the code is stored in, and run from, flash. Certainly, the amount of code storage space is something to bear in mind, but as new, more capacious processors appear on the market, it's less of an issue than it used to be for all but the most cost-sensitive projects. On the use of a subset of C++ for embedded systems: there is now a MISRA C++ standard, which may be worth a look. EDIT: See also this question, which led to a debate about C vs C++ for embedded systems.
813,898
814,034
Handling TCHARs in header files for libraries with different character sets
I have a project that uses two third party libraries, both of which make use of TCHARs in their header files. Unfortunately one library is compiled as multi-byte (call it library a), and the other is compiled as Unicode (call it library b). Now the way I understand it is that TCHAR is replaced by the preprocessor with either wchar or char depending on the build options. So when library a was compiled any method that takes a parameter of type TCHAR was set to expect a parameter of type char, and methods in library b are set to expect a parameter of type wchar. Unfortunately my consuming application has to pick a character set too. If I pick Unicode then the header file I have included for library a tells me that the method wants a wchar, because when I compile, the TCHARs in the header are interpreted as wchars. This includes TCHARs defined inside of structures. I have confirmed this behavior in practice: when I allocate and pass a TCHAR buffer I get back garbage because it fills my wchar buffer with multi-byte data. My questions are: Is there a clean way to consume both of these libraries in the same application? Am I maybe doing something wrong with how I'm using these libraries?
Assuming you're not using too many class/function in either one of these libraries, I would wrap one of the library completely. Let's say if you decided to use mbc in your app and wrap library b (unicode), your wrapper header file can use wchar_t instead of TCHAR so #define will not affect your interface. Inside your wrapper's cpp file where you #include library b's headers, you #define TCHAR to match library b. No code other than your wrapper should be allowed to see library b. If you're using more than a few class/function in both of these libraries, maintaining the wrapper code will quickly become a problem of its own.
814,326
815,302
hardware buffering using SDL, question about how it works
I'm deciding to do my first game; it's going to be simple, but I want to use C++ and I chose SDL for my learning. So my question is about how the "buffers" are handled when writing code. I'll post my related code at the bottom. Ok, so basically the way I understand it is that SDL takes care of which buffer is actually being drawn to the screen. When I am writing to a buffer it is always the backbuffer I am writing to, or the buffer currently not being drawn on the screen. So, when I call SDL_Flip(screen) it "blits" my screen surface onto the backbuffer, then moves the pointer to which buffer is being drawn to that buffer which used to be the backbuffer, the one I had been working on, and the old buffer that was showing now becomes the backbuffer. At this point if I call SDL_FillRect(arguments) it will be performed on the now back buffer? I'm going to post my entire "heartbeat" of my learning game as it may help clarify my question:

//While the user hasn't quit
while( quit == false )
{
    //If there's an event to handle
    if( SDL_PollEvent( &event ) )
    {
        //If a key was pressed
        if( event.type == SDL_KEYDOWN )
        {
            //Set the proper message surface
            switch( event.key.keysym.sym )
            {
                case SDLK_UP: message = upMessage; break;
                case SDLK_DOWN: message = downMessage; break;
                case SDLK_LEFT: message = leftMessage; break;
                case SDLK_RIGHT: message = rightMessage; break;
            }
        }
        else if( event.type == SDL_QUIT ) //if the user clicks the little X in the upper right corner.
        {
            quit = true;
        }
    }

    //If a message needs to be displayed
    if( message != NULL )
    {
        // Clear the back buffer.
        SDL_FillRect( SDL_GetVideoSurface(), NULL, 0 );

        //Draw the background to the back buffer.
        apply_surface( 0, 0, background, screen );

        // Draw the "message" to the back buffer.
        apply_surface( ( SCREEN_WIDTH - message->w ) / 2, ( SCREEN_HEIGHT - message->h ) / 2, message, screen );

        //Null the surface pointer
        message = NULL;
    }

    //Swap the current and back buffer.
    if( SDL_Flip( screen ) == -1 )
    {
        return 1;
    }
}
It highly depends on your system (i.e. X11, Linux framebuffer, Windows), the backend SDL uses to interact with it, and also which flags you pass to SDL_SetVideoMode. There are basically software surfaces, which sit in a region of memory in your program, and hardware surfaces, which are placed in the graphics card's memory. What you describe seems to me to be a double buffer, which is enabled if you pass SDL_HWSURFACE | SDL_DOUBLEBUF to SDL. Just remember this is not supported on all platforms and configurations, and you may get something different instead.
815,002
815,096
In Qt how do I get a button press to set a spinbox to a certain value?
I'm trying to get to grips with Qt's signal and slots mechanism. I have an app with a QPushButton and a QSpinBox. When I click the button I want the spinbox to change to 20. What signal and slot do I need to set up? The code below shows the app; the connect function is the one I am having trouble with. As I understand it the setValue(int) slot of QSpinBox will not work here because the clicked() signal of QPushButton has a different signature, and anyway how would I pass the value 20 to the spinbox? Do I need to write some sort of auxiliary function to act as a slot which calls spinbox->setValue(20)? If so, what form would that take?

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QWidget *window = new QWidget;
    QSpinBox *spinbox = new QSpinBox;
    QPushButton *button = new QPushButton("Set to 20");
    spinbox->setRange(0, 100);

    // What should I put below?
    QObject::connect(button, SIGNAL(clicked()), spinbox, SLOT(???????));

    QLayout *layout = new QHBoxLayout;
    layout->addWidget(spinbox);
    layout->addWidget(button);
    window->setLayout(layout);
    window->show();
    return app.exec();
}
You can either do:

class AuxSignals : public QObject
{
    Q_OBJECT
    ...
signals:
    void valueChanged(int);
public slots:
    void buttonClicked() { emit valueChanged(20); }
};

...

// On main.cpp
AuxSignals *auxSignals = new AuxSignals;
QObject::connect(button, SIGNAL(clicked()), auxSignals, SLOT(buttonClicked()));
QObject::connect(auxSignals, SIGNAL(valueChanged(int)), spinbox, SLOT(setValue(int)));

or

class AuxSignals : public QObject
{
    Q_OBJECT
    ...
    QSpinBox *m_spinBox;
public:
    AuxSignals(QSpinBox *spinBox) : m_spinBox(spinBox) {}
public slots:
    void buttonClicked() { m_spinBox->setValue(20); }
};

...

// On main.cpp
AuxSignals *auxSignals = new AuxSignals(spinBox);
QObject::connect(button, SIGNAL(clicked()), auxSignals, SLOT(buttonClicked()));

I prefer the first option because it doesn't require the AuxSignals class to have a pointer to a specific QWidget.
815,380
815,494
Wait until QWidget closes
I'm working on a project in C++ and Qt, and I want to open a new QWidget window, have the user interact with it, etc., then have execution return to the method that opened the window. Example (MyClass inherits QWidget):

void doStuff() {
    MyClass *newWindow = new MyClass();
    /* I don't want the code down here to execute until newWindow has been closed */
}

I feel like there is most likely a really easy way to do this, but for some reason I can't figure it out. How can I do this?
Have MyClass inherit QDialog. Then open it as a modal dialog with exec().

void MainWindow::createMyDialog()
{
    MyClass dialog(this);
    dialog.exec();
}

Check out http://qt-project.org/doc/qt-4.8/qdialog.html
815,423
816,496
C/C++ Machine Learning Libraries for Clustering
What are some C/C++ machine learning libraries that support clustering of multi-dimensional data? (for example, K-means) So far I have come across SGI MLC++ http://www.sgi.com/tech/mlc/ and OpenCV MLL. I am tempted to roll my own, but I am sure pre-existing ones are far better performance optimized, with more eyes on the code.
The Open Source C Clustering Library from the Human Genome team at the University of Tokyo looks promising. It has K-means as well as other flat and hierarchical clustering algorithms. Scroll down in their page for the bare library without the GUI. The Wikipedia-Clustering project seems nice and a bit lighter. Here's a specialized K-means library from The University of Maryland. I suggest you look at these considering your type of data. Preprocessing data to create feature vectors may be challenging.
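If you do end up rolling your own, the core of Lloyd's k-means is quite small. A minimal 1-D sketch for illustration only (the function name is made up; real libraries like the ones above handle multi-dimensional points, empty clusters, and smarter seeding):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal 1-D k-means (Lloyd's algorithm), for illustration only.
// "centers" seeds the clusters; a few iterations usually converge.
std::vector<double> kmeans1d(const std::vector<double>& data,
                             std::vector<double> centers, int iters) {
    for (int it = 0; it < iters; ++it) {
        std::vector<double> sum(centers.size(), 0.0);
        std::vector<int> count(centers.size(), 0);
        for (std::size_t i = 0; i < data.size(); ++i) {
            std::size_t best = 0;                     // assign to nearest center
            for (std::size_t c = 1; c < centers.size(); ++c)
                if (std::fabs(data[i] - centers[c]) <
                    std::fabs(data[i] - centers[best]))
                    best = c;
            sum[best] += data[i];
            ++count[best];
        }
        for (std::size_t c = 0; c < centers.size(); ++c)
            if (count[c] > 0)                         // skip empty clusters
                centers[c] = sum[c] / count[c];
    }
    return centers;
}
```

Generalizing to multiple dimensions only changes the distance function and the mean update, which is why a tuned library is still worth it for real data.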
815,429
815,433
is it possible to use regex in c++?
Duplicate of: There is a function to use pattern matching (using regular expressions) in C++? I'm not sure where one would use it... are there any parser-type functions that take some regex as an argument or something? I just found out that my editor will highlight a line after / as "regex" for C/C++ syntax which I thought was weird...
In the vanilla C++ language there is no support for regular expressions. However there are several libraries available that support Regex's. Boost is a popular one. Check out Boost's Regex implementation. http://www.onlamp.com/pub/a/onlamp/2006/04/06/boostregex.html http://www.boost.org/doc/libs/1_39_0/libs/regex/doc/html/boost_regex/syntax.html
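The Boost interface is the one that was later standardised (first as TR1, then as std::regex in C++11), so the usage pattern is the same either way. A minimal sketch, shown here with the std:: spelling; the Boost version just swaps the header for <boost/regex.hpp> and the namespace for boost:: (the function name is made up for illustration):

```cpp
#include <cassert>
#include <regex>    // <boost/regex.hpp> for the Boost original
#include <string>

// Full-string match against a simple date pattern.
bool looks_like_date(const std::string& s) {
    static const std::regex re("\\d{4}-\\d{2}-\\d{2}");
    return std::regex_match(s, re);
}
```

regex_match requires the whole string to match; use regex_search to find a pattern anywhere inside a larger string.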
815,581
816,073
Qt: creating an "svg image button"
I'm new to Qt so please excuse my ignorance. I am attempting to create a an 'svg image button' with QSizePolicy::Preferred for both horizontal and vertical. That part works. When the window is resized, the button grows and shrinks exactly how I want... But the image within the button stays the same size. I would like the image to scale with the button. I tried to overload resizeEvent, and call setImageSize, but that infinitely recurses. #ifndef SVGPUSHBUTTON_H #define SVGPUSHBUTTON_H #include <QtGui> class SVGPushButton : public QPushButton { public: SVGPushButton(QString path, QString name = ""); ~SVGPushButton(); void resizeEvent(QResizeEvent * event); private: }; #endif // SVGPUSHBUTTON_H #include "SVGPushButton.h" SVGPushButton::SVGPushButton(QString svgPath, QString name) : QPushButton(name) { QIcon icon(svgPath); setSizePolicy( QSizePolicy::Preferred, QSizePolicy::Preferred ); setFlat(true); setIcon(icon); } SVGPushButton::~SVGPushButton() {} void SVGPushButton::resizeEvent(QResizeEvent * event) { setIconSize( event->size() ); }
This is how I eventually solved it: SVGPushButton::SVGPushButton(QString svgPath, QString name) : QPushButton() { setSizePolicy( QSizePolicy::Preferred, QSizePolicy::Preferred ); QSvgWidget *icon = new QSvgWidget(svgPath,this); setLayout( new QHBoxLayout(this) ); layout()->addWidget( icon ); }
815,956
3,593,296
Anyone have some TAP or SNPP examples?
Does anyone know of a good resource with some examples of using the Telocator Alphanumeric Protocol (TAP) and/or the Simple Network Paging Protocol (SNPP) in either C++ or C#? Thanks!
TAP Not C++ but: Perl Net::SNPP module
815,993
816,048
Visual Studio - New Filter instead of New Folder when using Create Project From Existing Source Wizard
I used the Create Project From Existing Code Wizard for Visual Studio 2008, but apparently projects created this way do not allow you to create virtual solution folders, and instead only allow you to create actual system folders. I would like to have the standard project setup of Header Files, Resource Files, and Sources Files, but I seem to be unable to create the virtual files required. Is there any way to change the project so that it allows virtual folders to be created?
At least for a C++ project created this way (not sure about other languages), the "Show All Files" button/option in the Solution Explorer is enabled which gives the behavior you're seeing. Disable that option by clicking the icon and you'll get the behavior you're looking for.
816,001
816,008
Removing non-integers from a string in C++
There was a passing comment in a book of mine about people inputting commas into integers and messing up your program, but it didn't elaborate. That got me thinking, so I tried writing a little algorithm to take an std::string and remove all non-integer characters. This code compiles but skips over output. Why isn't anything being assigned to newstring? Does if(isdigit(fstring[i])) evaluate to true if the character at that position is a digit? //little algorithm to take the non-integers out of a string //no idea on efficiency #include <iostream> #include <string> int main() { std::cout << "Enter a number with non-integer characters: "; std::string fstring; getline(std::cin, fstring); std::string newstring; int i = 0, x = 0; while (i != fstring.length()) { if (isdigit(fstring[i])) { newstring[x] = fstring[i]; i++; x++; } else { i++; } } std::cout << std::endl; std::cout << newstring; system("PAUSE"); } Secondary question, that perhaps belongs elsewhere: how do you convert a string to an int (or a floating point number)?
newstring is of length 0, so newstring[x] where x=0 is actually illegal. You should append to the string using: newstring.append(1, fstring[i]) For the secondary question, look at the atoi(), atof(), strtol() and strtof() functions.
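Putting that together, a sketch of the corrected filter (the function name is made up for illustration); out += ch grows the string, which is what indexing into the empty newstring failed to do:

```cpp
#include <cassert>
#include <cctype>
#include <cstdlib>
#include <string>

// Keep only the digit characters of the input.
std::string keep_digits(const std::string& in) {
    std::string out;
    for (std::string::size_type i = 0; i < in.size(); ++i) {
        unsigned char ch = in[i];            // isdigit wants a non-negative value
        if (std::isdigit(ch))
            out += static_cast<char>(ch);    // append, never index past the end
    }
    return out;
}
```

After filtering, std::strtol(keep_digits(s).c_str(), NULL, 10) gives the integer value, and strtod does the same for floating point.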
816,092
816,613
Typographic apostrophe + wide string literal broke my wofstream (C++)
I’ve just encountered some strange behaviour when dealing with the ominous typographic apostrophe ( ’ ) – not the typewriter apostrophe ( ' ). Used with wide string literal, the apostrophe breaks wofstream. This code works ofstream file("test.txt"); file << "A’B" ; file.close(); ==> A’B This code works wofstream file("test.txt"); file << "A’B" ; file.close(); ==> A’B This code fails wofstream file("test.txt"); file << L"A’B" ; file.close(); ==> A This code fails... wstring test = L"A’B"; wofstream file("test.txt"); file << test ; file.close(); ==> A Any idea ?
You should "enable" locale before using wofstream: std::locale::global(std::locale()); // Enable locale support wofstream file("test.txt"); file << L"A’B"; So if you have system locale en_US.UTF-8 then the file test.txt will contain UTF-8 encoded data (5 bytes; the typographic apostrophe alone takes 3); if you have system locale en_US.ISO8859-1, it would be written in that 8-bit encoding (3 bytes), except that ISO 8859-1 in fact has no typographic apostrophe, so the conversion fails for that character. wofstream file("test.txt"); file << "A’B" ; file.close(); This code works because "A’B" is actually a UTF-8 byte string, and the narrow stream saves it to the file byte by byte. Note: I assume you are using a POSIX-like OS with a default locale different from "C" (which is the default locale).
816,143
830,580
image subdirectory in c++
Basically, I was hoping to sort of keep my files sorted instead of having them all in the same folder as my executable, but referencing files in sub folders relative to my executable has proven difficult. // DEFINES #define IMAGE_BACKGROUND "\\content\\images\\background.bmp" #define FONT_MAIN "\\content\\fonts\\sai.ttf" The above code obviously does not work. I read supposedly args[0] is somehow my path? Anyone care to elaborate a little more? int main(int argc, char* args[]) {
I actually solved it by using the following code, thank you all for the responses: // DEFINES #define IMAGE_BACKGROUND ".\\content\\images\\background.png" #define IMAGE_BLUEBLOCK ".\\content\\images\\blueblock.png" #define FONT_MAIN ".\\content\\fonts\\sai.ttf" Turns out the leading . refers to the current working directory.
816,232
816,483
Traverse from end to front ( C++ LL Q:1 )
int LinkedList::DoStuff() { Node *Current = next_; while ( Current != NULL ) { Current = Current->next_; length_++; } // At the last iteration we have reached the end/tail/last node return length_; } there are no more nodes beyond the last. How can i traverse to the tail-end to the front-head?
Recursion can work, as can building an auxiliary data structure, such as an array with one entry for each element of the original list. If you want a solution for a single-threaded list without requiring O(n) extra storage, the best bet is to reverse the list in place as Michael suggests. I wrote an example for this, [but I'll leave it out given the concern about homework]. One caution about reversing the list: if there are any other data structures that hold pointers to the original list, and you might be accessing them during your traversal, they won't work if they need to access the list while it's reversed, and this might lead to data corruption if they try to modify the list. Update: Ok, here's the (C++) routine to reverse a list in place. And I'll note that if this is homework, the poster still needs to figure out how to use this routine correctly to get a complete answer. Node *ReverseList(Node *head) { // Reverse a single-threaded list in place, return new head if (head == NULL) return NULL; // guard the empty list Node *prev=NULL; Node *cur=head; while (Node *next=cur->next_) { cur->next_ = prev; prev = cur; cur = next; } cur->next_ = prev; return cur; }
816,463
816,470
Why won't a derived class work in an array? (C++)
I've created a class, called vir, with a function move: class vir { public: vir(int a,int b,char s){x=a;y=b;sym=s;} void move(){} }; (It's derived from a class with variables int x, int y, and char sym) I have derived a class from this, called subvir: class subvir:public vir { public: subvir(int a,int b,char s){x=a;y=b;sym=s;} void move(); }; subvir::move() { x++; return; } And then I created an array of vir, and put a subvir into it subvir sv1(0,0,'Q'); vir vir_RA[1]={sv1}; But when I try to use sv1.move(): vir_RA[0].move(); It uses the vir move ({}) rather than the subvir move ({x++}). I have tried making sv1 a vir and vir_RA a vir, and it works, and it also works when I make them both subvir, but I need them to be different. I tried making vir::move() a pure virtual, but then I get an error substantiating the array. Does anyone know how I can get move() to work when I use it from the array?
You need an array of pointers in this case, rather than an array of instances: assigning a subvir into a vir array slices it down to a plain vir. Use vir*[] instead of vir[]. You also need to declare move() as virtual in the base class (virtual void move();), otherwise the call is resolved statically and vir::move() is chosen even through a pointer.
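A minimal sketch of both required changes together, virtual in the base and pointers in the array (classes renamed for brevity):

```cpp
#include <cassert>

struct Base {
    int x;
    Base() : x(0) {}
    virtual void move() {}       // virtual: calls dispatch to the derived override
    virtual ~Base() {}
};

struct Derived : Base {
    void move() { ++x; }         // overrides Base::move
};

// With pointers there is no slicing, so the Derived override runs.
int demo() {
    Derived d;
    Base* arr[1] = { &d };
    arr[0]->move();              // resolved through the vtable at runtime
    return arr[0]->x;
}
```

Without the virtual keyword, arr[0]->move() would call Base::move even though the object is really a Derived.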
816,714
816,734
A good (and free) VCL GUI alternative
I've got a project with a rather messy VCL codebase built on Borland C++ Builder 6. I intend to rewrite most parts of it since it's hardly maintainable in it's current state. I'm looking for a good and free alternative to VCL. It is a Windows-only closed source commercial project. So main requirements are: Free for commercial closed-source projects Manage Windows GUI. Other components (db links and stuff) not required Be extendible (so one could write my own GUI components based on existing ones) Be STL-friendly. Please post your suggestions, with pros and cons if possible. EDIT Thanks for all the answers. I've decided to go with Qt as it has some other nice features like Qt Linguist translation suite.
Try Qt. Its LGPL so it can be used in closed source software. It provides widgets, networking functions, database access, web rendering via WebKit, animations and many more. Its documentation is one of the best of its kind.
817,263
817,266
Is it possible to create a vector of pointers?
Just wondering, because of a problem I am running into, is it possible to create a vector of pointers? And if so, how? Specifically concerning using iterators and .begin() with it, ie: How would I turn this vector into a vector of pointers: class c { void virtual func(); }; class sc:public c { void func(){cout<<"using func";} }; sc cobj; vector<c>cvect cvect.push_back(cobj); vector<c>::iterator citer for(citer=cvect.begin();citer<cvect.end();citer++) { citer->func(); }
vector <c> cvect is not a vector of pointers. It is a vector of objects of type c. You want vector <c*> cvect. and the you probably want: cvect.push_back( new c ); And then, given an iterator, you want something like: (*it)->func(); Of course, it's quite probable you didn't want a vector of pointers in the first place...
817,337
817,348
I notice ints and longs have the same size. Why?
Just noticed this on OSX and I found it curious as I expected long to be bigger than int. Is there any good reason for making them the same size?
This is a result of the loose nature of size definitions in the C and C++ language specifications. C guarantees minimum ranges for each type (long must span at least 32 bits' worth of values), which C++ effectively inherits, but the core rule in both is: 1 == sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) Moreover, sizeof(int) and sizeof(long) are not the same size on all platforms. Every 64-bit Unix-like platform I've worked with has had long fit the natural word size, so 32 bits on a 32-bit architecture, and 64 bits on a 64-bit architecture. (64-bit Windows is the notable exception: its LLP64 model keeps long at 32 bits.)
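Only the ordering is guaranteed; a quick check you can compile anywhere to see your own platform's sizes (the numbers printed will differ by platform and data model):

```cpp
#include <cassert>
#include <cstdio>

// Report the fundamental integer sizes on this platform.
void report_sizes() {
    std::printf("char=%lu short=%lu int=%lu long=%lu\n",
                (unsigned long)sizeof(char), (unsigned long)sizeof(short),
                (unsigned long)sizeof(int),  (unsigned long)sizeof(long));
}
```

On a typical 32-bit build this prints 1, 2, 4, 4; on an LP64 64-bit build, 1, 2, 4, 8.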
817,429
817,449
Why is my vector code asserting? What is an assert anyway?
What exactly is an "assert", or more specifically, how do I get rid of an error. When I create a vector of pointers to a class with data member int x, and then do this: for(I=antiviral_data.begin();I<antiviral_data.end();I++) { if((*I)->x>maxx) { antiviral_data.erase(I); } } And run the program, I get no errors until x is greater than maxx and I use .erase(), at which point I get this error: Debug Assertion Failed! Program: ...My Documents\O.exe File: ...include\vector Line: 116 Expression: ("this->_Has_container()",0) For information on how your program can cause an assertion failure, see the Visual C++ documentation on asserts. (Press Retry to debug the application) [Abort][Retry][Ignore] Also, if I try to use cout: cout<<(*antiviral_data.begin())->x<<endl; I get this error: Debug Assertion Failed! Program: ...My Documents\O.exe File: ...include\vector Line: 98 Expression: vector iterator not deferencable For information on how your program can cause an assertion failure, see the Visual C++ documentation on asserts. (Press Retry to debug the application) [Abort][Retry][Ignore] Could somebody please tell me why I can't USE any of the data in the vector, and how to fix it? ALSO: antiviral_data is a vector of pointers, with a single element: antiviral_data.push_back(new aX1(player.x,player.y,'>')); If that helps.
The most probable reason why you get the assertion is that you increment I after an erase. Try this instead: for(I=antiviral_data.begin();I!=antiviral_data.end();) { if((*I)->x>maxx) I=antiviral_data.erase(I); else ++I; } See also http://www.cppreference.com/wiki/stl/vector/erase , search for invalid iterators on that page.
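An alternative that avoids the manual iterator bookkeeping entirely is the standard erase-remove idiom. A sketch with plain ints for clarity (your predicate would test (*p)->x > maxx through the pointers; the lambda needs C++11, or a small functor on older compilers):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Erase every element greater than limit in one pass.
void erase_greater(std::vector<int>& v, int limit) {
    v.erase(std::remove_if(v.begin(), v.end(),
                           [limit](int x) { return x > limit; }),
            v.end());
}
```

remove_if shuffles the kept elements to the front and returns an iterator to the new logical end; the single erase call then trims the tail, so there is never a live invalidated iterator to misuse.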
817,491
817,579
C++ Exp vs. Log: Which is faster?
I have a C++ application in which I need to compare two values and decide which is greater. The only complication is that one number is represented in log-space, the other is not. For example: double log_num_1 = log(1.23); double num_2 = 1.24; If I want to compare num_1 and num_2, I have to use either log() or exp(), and I'm wondering if one is easier to compute than the other (i.e. runs in less time, in general). You can assume I'm using the standard cmath library. In other words, the following are semantically equivalent, so which is faster: if(exp(log_num_1) > num_2)) cout << "num_1 is greater"; or if(log_num_1 > log(num_2)) cout << "num_1 is greater";
AFAIK the complexity of the two algorithms is the same; the difference should be only a (hopefully negligible) constant factor. Due to this, I'd use exp(log_num_1) > num_2, simply because it doesn't break on invalid input (log(num_2) is undefined for num_2 <= 0).
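If you would rather stay in log space, a sketch of the comparison with the invalid-input case handled explicitly (the function name is made up for illustration); since exp(log_num_1) is always positive, any num_2 <= 0 compares smaller without calling log() at all:

```cpp
#include <cassert>
#include <cmath>

// True when the value stored as log_a exceeds b; never calls log()
// on a non-positive argument.
bool log_space_greater(double log_a, double b) {
    if (b <= 0.0)
        return true;               // exp(log_a) > 0 >= b always holds
    return log_a > std::log(b);
}
```

Because log is strictly increasing on positive inputs, the comparison of logs agrees exactly with the comparison of the original values.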
817,604
817,683
Faster abs-max of float array
I need to draw peak meters for audio in realtime. Minimum 44100 samples per second times a minimum 40 streams. Each buffer is between 64 and 1024 samples. I need to grab the abs max from each buffer. (These are then fed through a kind of lowpass filter and drawn at about 20ms intervals.) for(int i = 0; i < numSamples; i++) { absMaxOfBuffer = MAX( fabs( buffer[i] ), absMaxOfBuffer); } That's how I do it now. I'd like to do it much faster. The buffers have floats in the -1 to 1 range, hence the fabs. Question, is there some tricky comp-sci quicksort-esque way of doing this faster? Failing that, branchless ABS and MAX functions for floats, do they exist? edit: Primary platform is Linux/gcc but a windows port is planned (probably with mingw). edit, the second: I gave the accept to onebyone because of the bit regarding the actual algo structure which was central to the question. I'll try unrolling the loop to four at the time, zeroing the signbits and then getting the max with SSE (maxps instruction) and see if that doesn't peel the banana. Thanks for the suggestions, I've up-voted a few of you, as runners up. :)
fabs and comparison are both really fast for IEEE floats (like, single-integer-op fast in principle). If the compiler isn't inlining both operations, then either poke it until it does, or find the implementation for your architecture and inline it yourself. You can maybe get something out of the fact that positive IEEE floats go in the same order as the integers with the same bit patterns. That is, f > g iff *(int*)&f > *(int*)&g So once you've fabs'ed, I think that a branch-free max for int will also work for float (assuming they're the same size of course). There's an explanation of why this works here: http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm. But your compiler already knows all this, as does your CPU, so it may not make any difference. There is no complexity-faster way of doing it. Your algorithm is already O(n), and you can't beat that and still look at every sample. I guess there's probably something in your processor's SIMD (that is, SSE2 on Intel) that would help, by processing more data per clock cycle than your code. But I don't know what. If there is, then it quite possibly will be several times faster. You could probably parallelize on a multi-core CPU, especially since you're dealing with 40 independent streams anyway. That will be at best a few factors faster. "Just" launch the appropriate number of extra threads, split the work between them, and use the lightest-weight primitive you can to indicate when they're all complete (maybe a thread barrier). I'm not quite clear whether you're plotting the max of all 40 streams, or the max of each separately, so maybe you don't actually need to synchronise the worker threads, other than to ensure results are delivered to the next stage without data corruption. It's probably worth taking a look at the disassembly to see how much the compiler has unrolled the loop. Try unrolling it a bit more, see if that makes any difference. 
Another thing to think about is how many cache misses you are getting, and whether it's possible to reduce the number by giving the cache a few clues so it can load the right pages ahead of time. But I have no experience with this, and I wouldn't hold out much hope. __builtin_prefetch is the magic incantation on gcc, and I guess the first experiment would be something like "prefetch the beginning of the next block before entering the loop for this block". What percentage of the required speed are you currently at? Or is it a case of "as fast as possible"?
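A sketch of the sign-bit trick, written with memcpy to avoid the strict-aliasing pitfalls of *(int*)&f; this assumes 32-bit IEEE-754 floats with sizeof(unsigned int) == sizeof(float), and a decent compiler reduces the copies to plain register moves:

```cpp
#include <cassert>
#include <cstring>

// Branch-free |x|: clear the IEEE-754 sign bit.
inline float abs_bits(float f) {
    unsigned int u;
    std::memcpy(&u, &f, sizeof u);   // reinterpret the float's bits safely
    u &= 0x7fffffffu;                // drop the sign bit
    std::memcpy(&f, &u, sizeof f);
    return f;
}

// Same O(n) abs-max loop as the question; the compare typically
// compiles to a maxss instruction rather than a branch.
float abs_max(const float* buf, int n) {
    float m = 0.0f;
    for (int i = 0; i < n; ++i) {
        float a = abs_bits(buf[i]);
        if (a > m) m = a;
    }
    return m;
}
```

This is the scalar baseline; unrolling it four-wide and switching to SSE (andps to clear sign bits, maxps for the running maximum) is the natural next step described above.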
817,627
817,649
Brand new to C (how to compile?)
I would like to know what I should use to compile in C. I am brand new to programing in general and would greatly appreciate a comprehensive explanation of this process. I am on Windows Vista. I heard about something called "djgpp" that is free and effective for windows.
For the answer to this question and many others you may have as you start out, try this website, which has beginner tutorials: http://www.cprogramming.com/tutorial/c/lesson1.html Here's the page on compilers and getting set up: http://www.cprogramming.com/compilers.html An excerpt on what compilers you can use: Windows/DOS - Code::Blocks and MinGW, Borland, DJGPP, Dev-C++ and Digital Mars Windows Only - Microsoft Visual C++ *nix - g++ is a C++ compiler that comes with most *nix distributions; gcc is a C compiler that comes with most *nix distributions. Macintosh - XCode Personally, I'd also strongly recommend you read The C Programming Language by Kernighan and Ritchie (http://www.amazon.com/C-Programming-Language-2nd-Ed/dp/0131103709/ref=ed_oe_h). It will really help you understand both core principles of programming as well as C-specific details.
817,769
817,802
How to log stuff in console in Visual Studio C++
I'm working on a little C++-Game in Visual Studio 2008. I want to see the content of a vector after a couple of seconds or after I pressed some buttons. Breakpoints are useless in this case, because they stop me at every call of the gameloop (~60 times per second). How do I debug in this case? Thanks!
Use function OutputDebugString from Windows API. You can call it anytime you want e.g. every 100th loop in your code. Function info is here Please read all comments on this page - some people claim that in your IDE (VS2008) output of this function is shown in "Immediate Window" not the "Output".
817,975
817,989
How can I get rid of the warning with rand()? (C++)
Whenever I use the rand function in C++: #include<iostream> #include<time.h> #include<stdlib.h> using namespace std; int main(){ srand(time(0)); int n=(rand()%6)+1; cout<<"The dice roll is "<<n<<"."<<endl; } I get a warning about conversion from time_t to int at line 5: srand(time(0)); Is there any way to get rid of this warning?
Actually, you should be using an unsigned int with srand(): srand((unsigned) time(0));
818,141
818,305
SQLite C++ Access Columns by Name
Is there a way to access SQLite results by column name (like a C++ Map) instead of index number in C/C++? For example, Python's SQLite access allows dictionary access Results = Query("SELECT * FROM table"); print Results['colname'] print Results['anothercol'] Any similar methods available in C++ for the SQLite's interface?
I'd go with Daniel in recommending SQLAPI++ at www.sqlapi.com -- at http://www.sqlapi.com/HowTo/fetch.html you can find a simple example of fetching fields by name. The example is a bit verbose so here's a code-only gist: void showemps(SAConnection* pconn, int minage) { SACommand cmd(pconn, "select name, age from employees where age>:1"); cmd << minage; cmd.execute(); while(cmd.FetchNext()) { SAString sName = cmd.Field("name"); long nAge = cmd.Field("age"); printf("Name: %s, age: %d \n", sName, nAge); } }
818,155
818,173
SQLite Alternatives for C++
I am developing a application that needs to store data with many writes and reads as requiring fast searching of data (the need for indexes of some sort), and also be able to serialize and save the data. Currently I am thinking about using SQLite, which gets the job done, but I am open for alternatives. The SQLite's syntax really doesn't fit elegantly in C++ code (for example I am forced to access the results by column index numbers instead of column names, etc).
Stay with SQLite but find a good C++ library for this. This StackOverflow question should help you ...
818,208
818,215
Trouble with seekp() to replace portion of file in binary mode
I'm having some trouble with replacing a portion of a file in binary mode. For some reason my seekp() line is not placing the file pointer at the desired position. Right now its appending the new contents to the end of the file instead of replacing the desired portion. long int pos; bool found = false; fstream file(fileName, ios::binary|ios::out|ios::in); file.read(reinterpret_cast<char *>(&record), sizeof(Person)); while (!file.eof()) { if (record.getNumber() == number) { pos=file.tellg(); found = true; break; } // the record object is updated here file.seekp(pos, ios::beg); //this is not placing the file pointer at the desired place file.write(reinterpret_cast<const char *>(&record), sizeof(Person)); cout << "Record updated." << endl; file.close(); Am I doing something wrong? Thanks a lot in advance.
I don't see how your while() loop can work. In general, you should not test for eof() but instead test if a read operation worked. The following code writes a record to a file (which must exist) and then overwrites it: #include <iostream> #include <fstream> using namespace std; struct P { int n; }; int main() { fstream file( "afile.dat" , ios::binary|ios::out|ios::in); P p; p.n = 1; file.write( (char*)&p, sizeof(p) ); p.n = 2; int pos = 0; file.seekp(pos, ios::beg); file.write( (char*)&p, sizeof(p) ); }
818,259
818,268
C++ how to call a parent class method from contained class?
I am trying to make a call to a Parent class method from a contained object, but have no luck with the following code. What is the standard way to do it? I have searched around and this seems to work for inherited objects, but not for contained objects. Is it right to call it a Parent class even? Or is it called an Owner class? class Parent{ private: Child mychild; public: void doSomething(); } class Child{ public: void doOtherThing(); } void Child::doOtherThing(){ Parent::doSomething(); }
A contained object has no special access to the class that contains it, and in general does not know that it is contained. You need to pass a reference or a pointer to the containing class somehow - for example: class Child{ public: void doOtherThing( Parent & p ); }; void Child::doOtherThing( Parent & p ){ p.doSomething(); }
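A compilable sketch of the whole pattern: the forward declaration lets Child mention Parent before it is defined, and Child's method body comes after Parent is complete (the counter member is only there to make the effect observable):

```cpp
#include <cassert>

class Parent;                       // forward declaration

class Child {
public:
    void doOtherThing(Parent& p);   // body defined once Parent is complete
};

class Parent {
public:
    Parent() : counter(0) {}
    void doSomething() { ++counter; }
    void kickChild() { mychild.doOtherThing(*this); }  // hand the child a reference
    int counter;
private:
    Child mychild;                  // Child must be fully defined here
};

void Child::doOtherThing(Parent& p) { p.doSomething(); }
```

If the child needs the back-reference for its whole lifetime, store a Parent* member set in the child's constructor instead of passing it on every call; the ordering constraints on the declarations are the same.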
818,534
819,007
How do I set the color of a single pixel in a Direct3D texture?
I'm attempting to draw a 2D image to the screen in Direct3D, which I'm assuming must be done by mapping a texture to a rectangular billboard polygon projected to fill the screen. (I'm not interested or cannot use Direct2D.) All the texture information I've found in the SDK describes loading a bitmap from a file and assigning a texture to use that bitmap, but I haven't yet found a way to manipulate a texture as a bitmap pixel by pixel. What I'd really like is a function such as void TextureBitmap::SetBitmapPixel(int x, int y, DWORD color); If I can't set the pixels directly in the texture object, do I need to keep around a DWORD array that is the bitmap and then assign the texture to that every frame? Finally, while I'm initially assuming that I'll be doing this on the CPU, the per-pixel color calculations could probably also be done on the GPU. Is the HLSL code that sets the color of a single pixel in a texture, or are pixel shaders only useful for modifying the display pixels? Thanks.
First, your direct question: You can, technically, set pixels in a texture. That would require use of LockRect and UnlockRect API. In D3D context, 'locking' usually refers to transferring a resource from GPU memory to system memory (thereby disabling its participation in rendering operations). Once locked, you can modify the populated buffer as you wish, and then unlock - i.e., transfer the modified data back to the GPU. Generally locking was considered a very expensive operation, but since PCIe 2.0 that is probably not a major concern anymore. You can also specify a small (even 1-pixel) RECT as a 2nd argument to LockRect, thereby requiring the memory-transfer of a negligible data volume, and hope the driver is indeed smart enough to transfer just that (I know for a fact that in older nVidia drivers this was not the case). The more efficient (and code-intensive) way of achieving that, is indeed to never leave the GPU. If you create your texture as a RenderTarget (that is, specify D3DUSAGE_RENDERTARGET as its usage argument), you could then set it as the destination of the pipeline before making any draw calls, and write a shader (perhaps passing parameters) to paint your pixels. Such usage of render targets is considered standard, and you should be able to find many code samples around - but unless you're already facing performance issues, I'd say that's an overkill for a single 2D billboard. HTH.
818,647
818,664
Where can I find good C++ source code?
I am learning C++ as a first language. I feel like I am about to hit a ceiling on my learning (I am not learning through a class) if I don't start looking at actual code soon. Here are my two main questions: Where can I find source code What is a good litmus test on code's quality (I've obviously never developed in a work environment) I hope this is relevant to SO, but I can see the need to close this. Thanks for the help. Related: Examples of "modern C++" in action?
I would recommend Boost. Using Boost will simplify your program design. Reading Boost source code can show you how to use C++ to solve some challenging problems in a concise way. This add on library is itself written in C++, in a peer-reviewed fashion, and has a high standard of quality.
818,665
819,528
How to check if socket is closed in Boost.Asio?
What is the easiest way to check if a socket was closed on the remote side of the connection? socket::is_open() returns true even if it is closed on the remote side (I'm using boost::asio::ip::tcp::socket). I could try to read from the stream and see if it succeeds, but I'd have to change the logic of my program to make it work this way (I do not want data to be extracted from the stream at the point of the check).
If the connection has been cleanly closed by the peer you should get an EOF while reading. Otherwise I generally ping in order to figure out if the connection is really alive.
818,762
819,002
Unflip wxImage loading
I have the code here working fine except that all the non-power of 2 images are flipped in the y direction. In the wxImageLoader file there is this loop which I believe is the culprit: for(int y=0; y<newHeight; y++) { for(int x=0; x<newWidth; x++) { if( x<(*imageWidth) && y<(*imageHeight) ){ imageData[(x+y*newWidth)*bytesPerPixel+0]= bitmapData[( x+(rev_val-y)*(*imageWidth))*old_bytesPerPixel + 0]; imageData[(x+y*newWidth)*bytesPerPixel+1]= bitmapData[( x+(rev_val-y)*(*imageWidth))*old_bytesPerPixel + 1]; imageData[(x+y*newWidth)*bytesPerPixel+2]= bitmapData[( x+(rev_val-y)*(*imageWidth))*old_bytesPerPixel + 2]; if(bytesPerPixel==4) imageData[(x+y*newWidth)*bytesPerPixel+3]= alphaData[ x+(rev_val-y)*(*imageWidth) ]; } else { imageData[(x+y*newWidth)*bytesPerPixel+0] = 0; imageData[(x+y*newWidth)*bytesPerPixel+1] = 0; imageData[(x+y*newWidth)*bytesPerPixel+2] = 0; if(bytesPerPixel==4) imageData[(x+y*newWidth)*bytesPerPixel+3] = 0; } }//next }//next But I can't figure out how to un-flip the images.
The correct for loop is: for(int y=0; y<newHeight; y++) { for(int x=0; x<newWidth; x++) { if( x<(*imageWidth) && y<(*imageHeight) ){ imageData[(x+y*newWidth)*bytesPerPixel+0]= bitmapData[( x+y*(*imageWidth))*old_bytesPerPixel + 0]; imageData[(x+y*newWidth)*bytesPerPixel+1]= bitmapData[( x+y*(*imageWidth))*old_bytesPerPixel + 1]; imageData[(x+y*newWidth)*bytesPerPixel+2]= bitmapData[( x+y*(*imageWidth))*old_bytesPerPixel + 2]; if(bytesPerPixel==4) imageData[(x+y*newWidth)*bytesPerPixel+3]= alphaData[ x+y*(*imageWidth) ]; } else { imageData[(x+y*newWidth)*bytesPerPixel+0] = 0; imageData[(x+y*newWidth)*bytesPerPixel+1] = 0; imageData[(x+y*newWidth)*bytesPerPixel+2] = 0; if(bytesPerPixel==4) imageData[(x+y*newWidth)*bytesPerPixel+3] = 0; } }//next }//next
818,947
819,004
chaining c++ streams
I was thinking of "chaining" a couple of c++ iostreams together to filter input twice. I'm using gzstreams to read zlib compressed files and I was thinking of coding a stream that reads from a stream and performs encoding conversions. Perhaps by passing an opened stream as constructor parameter... How do you think this could be best accomplished?
I haven't used this but boost's filtering_stream may help. As an example I found a mailing list post with indent.hpp, which implements an output filter that indents outputs: boost::iostreams::filtering_ostream out; indent_filter::push(out,2); out.push(std::cout); And use it like so: out << "Hello Filter!\n" << indent_in << "this is\n" << "indented\n" << indent_out << "until here\n" ; Which will result in output: Hello Filter! this is indented until here
818,981
820,596
In qt, how do I implement a widget that stays consistent with variables in the code
Here's a sample of a SpinBox that writes its changes to underlying variables. The main problem that I'm having is valueChanged is called when the widget is constructed. Is there a more elegant way to do this? I think it's weird that I connected a widget to itself, but valueChanged isn't virtual. class ValueWriterInt: public QSpinBox { Q_OBJECT public: ValueWriterInt(vector<int*> const& value): myValue(value) { QObject::connect(this, SIGNAL(valueChanged(int)), this, SLOT(valueChanged(int))); } ~ValueWriterInt() {} private slots: void valueChanged(int new_value) { for (auto it = myValue.begin(); it != myValue.end(); ++it) **it = new_value; } private: vector<int*> myValue; };
I see nothing particularly weird about connecting a widget to itself. Having a single method of detecting and responding to data updates actually sounds like a good thing because you have fewer points of failure to check when you are debugging. In your specific case, it is causing some undesired behavior, but in general it is a fine solution. Now, having expressed the opinion that a reflexive connection isn't inherently inelegant, I am going to suggest a less than "elegant" solution to prevent the calling of valueChanged after construction. You can have a flag to determine whether the object was just constructed and return early to prevent the code being run immediately after construction. In your example:

class ValueWriterInt: public QSpinBox {
    Q_OBJECT
public:
    ValueWriterInt(vector<int*> const& value): myValue(value), myAfterInit(true) {
        QObject::connect(this, SIGNAL(valueChanged(int)), this, SLOT(valueChanged(int)));
    }
    ~ValueWriterInt() {}

private slots:
    void valueChanged(int new_value) {
        if (myAfterInit) {
            myAfterInit = false;
            return;
        }
        for (auto it = myValue.begin(); it != myValue.end(); ++it)
            **it = new_value;
    }

private:
    vector<int*> myValue;
    bool myAfterInit;
};

That isn't too bad of a solution. It will at least give you your desired behavior until (and if) you can find a more elegant method.
819,268
819,275
How to get the underlying stdio FILE* of a managed System.IO.FileStream?
I'm writing a .NET adaptor for a C/C++ library where a method "bar" takes a regular stdio FILE*. Is it possible to build an interface so that managed code user can pass a managed (File)Stream? That is without creating an intermediary buffer and code to pipe the data between. Also does the assumption that bar() reads only make things any better? // native code void bar(FILE*); // interface for managed code void foo(System::IO::FileStream^ file) { FILE* stdio_handle = ???; bar(stdio_handle); }
System.IO.FileStream.Handle It's not necessarily stdio handle. It's a Windows handle. I don't think FileStream is built upon stdio to have a stdio handle. As Marc pointed out and mentioned in the MSDN link, you might want to consider using SafeFileHandle property (if you are on .NET 2.0+) instead of Handle (which is now considered obsolete). Only Handle is available in older versions, though.
819,425
819,988
Why does C++ need language modifications to be "managed"?
Why can't a compiler be written that manages what needs to be managed in C++ code (i.e. to make it "CLR compatible")? Maybe with some compromise, like prohibiting void pointers in some situations etc. But all these extra keywords etc. What's the problem that has to be solved by these additions? I have my thoughts about some aspects and what might be hard to solve, but a good solid explanation would be highly appreciated!
I'd have to disagree with the answers so far. The main problem to understand is that a C++ compiler creates code which is suitable for a very dumb environment. Even a modern CPU does not know about virtual functions, hell, even functions are a stretch. A CPU really doesn't care that exception handling code to unwind the stack is outside any function, for instance. CPU's deal in instruction sequences, with jumps and returns. Functions certainly do not have names as far as the CPU is concerned. Hence, everything that's needed to support the concept of a function is put there by the compiler. E.g. vtables are just arrays of the right size, with the right values from the CPUs viewpoint. __func__ ends up as a sequence of bytes in the string table, the last one of which is 00. Now, there's nothing that says the target environment has to be dumb. You could definitely target the JVM. Again, the compiler has to fill in what's not natively offered. No raw memory? Then allocate a big byte array and use it instead. No raw pointers? Just use integer indices into that big byte array. The main problem is that the C++ program looks quite unrecognizable from the hosting environment. The JVM isn't dumb, it knows about functions, but it expects them to be class members. It doesn't expect them to have < and > in their name. You can circumvent this, but what you end up with is basically name mangling. And unlike name mangling today, this kind of name mangling isn't intended for C linkers but for smart environments. So, its reflection engine may become convinced that there is a class c__plus__plus with member function __namespace_std__for_each__arguments_int_pointer_int_pointer_function_address, and that's still a nice example. I don't want to know what happens if you have a std::map of strings to reverse iterators. The other way around is actually a lot easier, in general. Pretty much all abstractions of other languages can be massaged away in C++. Garbage collection? 
That's already allowed in C++ today, so you could support that even for void*. One thing I didn't address yet is performance. Emulating raw memory in a big byte array? That's not going to be fast, especially if you put doubles in it. You can play a whole lot of tricks to make it faster, but at what price? You're probably not going to get a commercially viable product. In fact, you might end up with a language that combines the worst parts of C++ (lots of unusual implementation-dependent behavior) with the worst parts of a VM (slow).
819,481
819,500
Better alternative for c++ old feature?
c++ has come a long way, it has lot of feature that lead to do same task in n number of ways. What feature do you think should be avoided and list better alternative for the same. Like use SmartPointers in places of pointers
Avoid malloc, use new. Avoid memcpy, use a copy constructor. Avoid #defines, use consts. Avoid (C-style) casts, use static_cast<C++>( style casts ). Avoid macros, use templates. Avoid post-increment if you can use pre-increment. Avoid new, use collections of value types, let the collection deal with memory management. Avoid pointers to new'd objects, use boost::scoped_ptr or boost::shared_ptr. Avoid writing your own code to traverse collections, use <algorithm>s Avoid reinventing the wheel, see if boost has what you need. Avoid "char * name" for storing strings, use std::string. Avoid using namespace foo, use using foo::bar. Avoid "static (internal linkage) declarations" use unnamed namespaces.
819,483
829,175
Is there a WinXP/WinXP Embedded API to change Locale (specifically, IME)?
I am looking for an API on WinXP to switch between installed IMEs. The scenario is to be able to plug in a language keyboard (say Spanish) and change the IME by clicking on a UI button (say a button named Spanish). E.g. I plug in a Spanish keyboard and click on the UI button named Spanish. This should internally change the IME to Spanish, which is already installed. Thanks
I was trying to change the locale/IME (which falls under the locale). I found that there is an API named 'SystemParametersInfo' which allows us to make settings at the system level. In my case, I had to go to Control Panel > Regional Settings > and then switch between installed locales under the Language tab. This could finally be achieved programmatically as shown in the code:

#include "stdafx.h"
#include "windows.h"

int _tmain(int argc, _TCHAR* argv[])
{
    HKL hLangId = 0;
    bool isFine;
    DWORD errorCode;

    errorCode = GetLastError();
    isFine = SystemParametersInfo(SPI_GETDEFAULTINPUTLANG, 0, &hLangId, 0);
    errorCode = GetLastError();

    HKL spanishLanguage = (HKL) (0x040a0c0a);
    isFine = SystemParametersInfo(SPI_SETDEFAULTINPUTLANG, 0, &spanishLanguage, 0);
    errorCode = GetLastError();

    return 0;
}
819,487
822,128
Efficiently convert between Hex, Binary, and Decimal in C/C++
I have 3 base representations for positive integer numbers: Decimal, in unsigned long variable (e.g. unsigned long int NumDec = 200). Hex, in string variable (e.g. string NumHex = "C8") Binary, in string variable (e.g. string NumBin = "11001000") I want to be able to convert between numbers in all 3 representations in the most efficient way. I.e. to implement the following 6 functions: unsigned long int Binary2Dec(const string & Bin) {} unsigned long int Hex2Dec(const string & Hex) {} string Dec2Hex(unsigned long int Dec) {} string Binary2Hex(const string & Bin) {} string Dec2Binary(unsigned long int Dec) {} string Hex2Binary(const string & Hex) {} What is the most efficient approach for each of them? I can use C and C++, but not boost. Edit: By "efficiency" I mean time efficiency: Shortest execution time.
As others have pointed out, I would start with sscanf(), printf() and/or strtoul(). They are fast enough for most applications, and they are less likely to have bugs. I will say, however, that these functions are more generic than you might expect, as they have to deal with non-ASCII character sets, with numbers represented in any base and so forth. For some domains it is possible to beat the library functions. So, measure first, and if the performance of these conversions is really an issue, then: 1) In some applications / domains certain numbers appear very often, for example zero, 100, 200, 19.95, may be so common that it makes sense to optimize your functions to convert such numbers with a bunch of if() statements, and then fall back to the generic library functions. 2) Use a table lookup for the most common 100 numbers, and then fall back on a library function. Remember that large tables may not fit in your cache and may require multiple indirections for shared libraries, so measure these things carefully to make sure you are not decreasing performance. You may also want to look at boost lexical_cast functions, though in my experience the latter are relatively slow compared to the good old C functions. Though many have said it, it is worth repeating over and over: do not optimize these conversions until you have evidence that they are a problem. If you do optimize, measure your new implementation to make sure it is faster and make sure you have a ton of unit tests for your own version, because you will introduce bugs :-(
819,525
819,898
How do I disable exp/lib generation when building an exe?
I realize this is probably caused by some _dllexport() somewhere, not in my code but in some third-party piece. (Qt, Boost, OpenSG, ...) Is there a simple linker option to disable this? I've searched but not found anywhere.
AFAIK, no, because the relevant #pragma's override the linker settings.
819,536
819,629
How to call Java functions from C++?
How can I call Java functions from a C++ application? I know about calling them from CMD (or similar techniques), but I would rather not use them.
As an example, check Creating a JVM from C. It shows a sample procedure to create a JVM and invoke a method. If the JVM already exists, e.g. your C program is invoked by the Java program (a callback situation), you can cache the JNIEnv* pointer. As a word of advice, be careful caching pointers to the JVM from C/C++; there are some semantics involved as to what you can cache and how it may be used later on. For that, as Brian Agnew pointed out, check the JNI reference.
819,708
821,280
PID from socket number on Windows?
I need to count amount of bytes sent and received from the network by various applications. First I thought about using LSP, but there is a lot of applications that do not use LSP at all (SMB for example). This is why I have written a small sniffer. This application works on the IP level and collects data using recvfrom. So I have the address of the remote host, local and remote ports. This is pretty cool, but I also need to have the PID of the local socket owner. So, is there a simple way to obtain such PID? The workaround is using GetTcpTable, GetUdpTable Or AllocateAndGetUdpExTableFromStack and AllocateAndGetTcpExTableFromStack (there are a lot of differences in those functions between Windows 2000, Windows XP, Windows XP SP2 and Windows Vista) and to lookup result tables, but it seems inelegant and inefficient... So is there a kind of A "GetPIDOfSocket" function? The resolution should be Win32 (no .NET) only, as various languages can be used, e.g. C++ and Delphi.
Using GetTcpTable or AllocateAndGetTcpExTableFromStack is not a workaround. It's actually how other netstat-type applications work. As far as I know, there isn't any Win32 "GetPIDOfSocket" function. Your only option is to poll using the port table functions. But at least you can code it up yourself and don't have to spawn the netstat process. See the Sysinternals C source code for netstatp here. There's not a lot of it and it's well worth a look.
819,710
819,735
DuplicateHandle(), use in first or second process?
The Windows API DuplicateHandle() http://msdn.microsoft.com/en-us/library/ms724251(VS.85).aspx Requires the Object handle to be duplicated and a handle to both the original process AND the other process that you want to use the duplicated handle in. I am assuming that if I have two UNRELATED processes, I could call DuplicateHandle() in either one so long as I had the required handles available? My question is about using a Pipe to communicate between the two processes to achieve this with an Event. In the first process I CreateEvent(). Now I want to use WaitForSingleObject() in the second process. If I try to duplicate the handle in the first process, I will need to first send the second process handle to the first process via the pipe, duplicate the handle and then send the handle over to the second process? Alternatively, I could begin by sending the first process handle and the Event handle to the second process and just duplicate it there. Is there a reason I should choose one over the other? To add a wrinkle, the Event handle is actually inherited from the parent process that actually called the first process (which is a CGI application). If that Event handle was created with HANDLE_DO_NOT_DUPLICATE (something like that) then can I in fact use DuplicateHandle() to duplicate it for the second process? Response: Well I could create a new NAMED event in the first process and find it in the second process as suggested, but I am trying to DUPLICATE the event that was created in the parent of the first process and forward it to the second process. This event is not a named event so I need to use DuplicateHandle(). I am using a pipe for the IPC. I am realizing that DuplicateHandle() will have to be called in the first process because the event handle is out of context when sent to the second process.
hProcPseudo = GetCurrentProcess()

// Then call either:
lpRealHandle = OpenProcess( PROCESS_DUP_HANDLE, 0, hProcPseudo )
// This fails with GetLastError= 87 - The parameter is incorrect ???
// same thing with PROCESS_ALL_ACCESS ??

// OR
lRet = DuplicateHandle( hProcPseudo, hProcPseudo, hProcPseudo, lpRealHandle, DUPLICATE_SAME_ACCESS, 0, 0 )

// then I can duplicate my Event handle in the first thread with:
lRet = DuplicateHandle( hLocalProcess, hEvent, lpRealHandle, hDupEvent, DUPLICATE_SAME_ACCESS, 0, 0)

The second process converts its handle with DuplicateHandle() as detailed above, converting hProcPseudo= 4294967295 to hProcess= 152. Then I pass this process handle to the first process via the named pipe. In the first process (where the event handle is valid) I call DuplicateHandle:

DuplicateHandle( hFirstProcess, hEvent, hSecondProc, hDupEvent, DUPLICATE_SAME_ACCESS, 0, 0)

Unfortunately I get the error: DuplicateHandle hPipeFCGI GetLastError= 6 - The handle is invalid. Further testing (substituting hFirstProcess) reveals it is hSecondProc that is invalid!?? Big mystery.
Use a named pipe or mailslots for IPC, this should work reliably for your purpose. If you need to wait, use named wait handles. Otherwise, I'd choose to do DuplicateHandle in the second process in order to set the handle ownership correctly.
819,953
897,288
How to start writing a music visualizer in C++?
I'm interested in learning to use OpenGL and I had the idea of writing a music visualizer. Can anyone give me some pointers of what elements I'll need and how I should go about learning to do this?
If you use C++/CLI, here's an example that uses WPF four (fourier that is;) display. He references this site (archived) that has considerable information about what you're asking; here's an outline from the specific page:

How do we split sound into frequencies? Our ears do it by mechanical means, mathematicians do it using Fourier transforms, and computers do it using FFT.

The Physics of Sound
Harmonic Oscillator
Sampling Sounds
Fourier Analysis
Complex Numbers
Digital Fourier Transform
FFT

Ahhh, I found this (archived) a few minutes later, it's a native C++ analyzer. Code included, that should get you off and running.
820,074
820,109
Which IDE does Google use for C++ and Java development
I am curious which IDE does Google use for C++ and Java development?
Mehrdad is very correct that it is highly unlikely that they standardize on one IDE for each language. However, there is probably a popular one or two for each language. A good way to tell is to look at the source code they release that would need an IDE plugin, and see what they support. I notice (regarding Java) Intellij IDEA from JetBrains tends to get early plugin support, but that could just be selection bias (because that is what I use).
820,213
820,238
New to C++: should I use Visual Studio?
I'm about to start work on my first C++ project. I've done lots of C# and VB (VB6 and VB.NET), plus Java and a few other things over the past 10 years or so, just never had a requirement for C++ until now. I plan to use Visual Studio 2008, but I'm interested to find out from experienced C++ programmers whether Visual Studio is a good IDE for a C++ newbie. Will VS in any way make it harder for me to learn to program C++ correctly? Will VS encourage any bad habits? Are there any pitfalls I should look out for?
First off, VS 2008 is quite powerful and probably one of the best IDEs for C++ programming (at least with a supporting plugin such as Visual Assist X). Beware, however, that C++ is a hard language to get right for the compilers and that the default warning level is quite lenient to boot. So it will tolerate bad/wrong code quite often. It is always advisable to check the code on other compilers as well – at the very least in g++ with high warning level in strict mode. Also setting the warning level higher in Visual Studio is encouraged.
820,569
820,861
Is there a Perl script to implement C++ Class get/set member functions?
I was reading this morning the book The Pragmatic Programmer Chapter 3 on Basic Tools every programmer should have and they mentioned Code Generation Tools. They mentioned one Perl script for C++ programs which helped automate the process of implementing the get/set() member functions for private data members. Does anyone know about such a script and where to find it? I've been unable to come up with the right google keywords to find it.
Although it doesn't directly answer your question, you may find that generated code is actually unnecessary for managing properties in C++. The following template code will allow you to declare and use properties conveniently:

// Declare your class containing a few properties
class my_class {
public:
    property<int> x;
    property<string> y;
    ...
};

...

my_class obj;
cout << obj.x();        // Get
obj.y("Hello, world!"); // Set

Here is the code:

// Utility template to choose the 2nd type if the 1st is void
template <typename T, typename U>
struct replace_void { typedef T type; };

template <typename T>
struct replace_void<void, T> { typedef T type; };

// Getter/setter template
template <typename T, typename D = void>
class property {
    typedef typename replace_void<D, property>::type derived_type;
    derived_type& derived() { return static_cast<derived_type&>(*this); }

public:
    property() {} // May be safer to omit the default ctor
    explicit property(T const& v) : _v(v) {}
    property(property const& p) : _v(p._v) {}
    property& operator=(property const& p) { _v = p._v; return *this; }

    T operator()() const { return _v; }                         // Getter
    void operator()(T const& v) { derived().check(v); _v = v; } // Setter

protected:
    // Default no-op check (derive to override)
    void check(T const& v) const { (void)v; } // avoid unused variable warning

private:
    T _v;
};

check() is a function that tests whether the value being assigned is valid. You can override it in a subclass:

class nonnegative_int : public property<int, nonnegative_int> {
public:
    // Have to redeclare all relevant ctors unfortunately :(
    nonnegative_int(int v) : property<int, nonnegative_int>(v) {}

    void check(int const& v) const {
        if (v < 0) { throw "Yikes! A negative integer!"; }
    }
};

There you have it -- all of the advantages of externally-generated getter/setter functions, with none of the mess! :) You could choose to have check() return a bool indicating validity instead of throwing an exception.
And you could in principle add a similar method, access(), for catching read references to the property. EDIT: As Mr. Fooz notes in the comments, the class author can later change the implementation without modifying the logical structure of the class (e.g. by replacing the property<int> x member with a pair of x() methods), although binary compatibility is lost so users will need to recompile their client code whenever such a change is made. This ability to painlessly incorporate future changes is actually the main reason people use getter/setter functions instead of public members in the first place. Performance note: Because we are using the CRTP to achieve "compile-time polymorphism", there is no virtual-call overhead for providing your own check() in a subclass, and you need not declare it virtual.
820,664
820,744
'mpirun' is not recognized as an internal ort external commands,
I need to make a small openMP project. I took the example from the www.openmp.org. I can compile it with /openmp option within VC++ 2005. But, When I try to run the program, I am facing the "'mpirun' is not recognized as an internal ort external commands, operable program or batch file" error. When I search the net. I found this mpirun and mpiexec is not located XP desktop. They are only in server distribution. How can run/debug openMP programs on windows? Or should I need to go Linux?
You mention that you are using OpenMP - you shouldn't actually need to use mpirun or mpiexec, as you would with MPICH or OpenMPI programs. OpenMP works in a fairly different way than message-passing libraries: OpenMP uses multiple threads within the same process, but MPICH and OpenMPI use multiple processes. So if I understand your question correctly, you should be able to just run your executable without using mpirun or mpiexec. If, on the other hand, you're using MPICH, the mpiexec program lives in {ProgramFiles}\MPICH2\bin. Good luck!
820,846
821,189
RegOpenKeyEx fails on HKEY_LOCAL_MACHINE
Hi, I'm trying to read a registry value that gives me the path to firefox.exe. This is stored under HKEY_LOCAL_MACHINE\SOFTWARE\Mozilla\Mozilla Firefox 3.0.10\bin (the version number can be found somewhere else). But I can't seem to get RegOpenKeyEx to return ERROR_SUCCESS for anything under HKEY_LOCAL_MACHINE, so this test fails: if(RegOpenKeyEx(HKEY_LOCAL_MACHINE,TEXT("\\SOFTWARE"),0,KEY_QUERY_VALUE,&keyHandle) == ERROR_SUCCESS) while this test passes: if(RegOpenKeyEx(HKEY_CLASSES_ROOT,TEXT("\\Shell"),0,KEY_QUERY_VALUE,&keyHandle) == ERROR_SUCCESS)
The following code failed on my machine with error code 161, which means "bad path" (look it up in winerror.h):

long n = RegOpenKeyEx(HKEY_LOCAL_MACHINE, TEXT("\\SOFTWARE"), 0, KEY_QUERY_VALUE, &hk );

I then changed the call to RegOpenKeyEx to use "SOFTWARE" (note no leading slashes) and it worked:

#include <windows.h>
#include <iostream>
using namespace std;

int main()
{
    HKEY hk;

    // Notice that it's SOFTWARE instead of \\SOFTWARE:
    long n = RegOpenKeyEx(HKEY_LOCAL_MACHINE, TEXT("SOFTWARE"), 0, KEY_QUERY_VALUE, &hk );

    if ( n == ERROR_SUCCESS ) {
        cout << "OK" << endl;
    } else {
        cout << "Failed with value " << n << endl;
    }
}
820,859
820,890
Determining whether an object is in a std::set
I'm trying to determine whether an object is already contained within a std::set. According to msdn (and other sources) the set::find function is supposed to return end() if it doesn't find the element you asked for. However when I implement code like the following, set::find returns junk (0xbaadf00d) instead. set<Cell*> cellSet; Cell* cell = new Cell(); if (cellSet.find(cell) == cellSet.end()) { ... } Am I using this correctly? I'm working in Visual C++ 2005.
Your code as posted will always execute the code within the if, and 0xbaadf00d is the implementation's "one-past-the-end" marker.
821,667
821,709
C++ Problem initializing an object twice
I'm relatively new to C++ and am having a hard time understanding the instantiation of objects and pointers to objects. What's the difference between these two declarations in terms of memory and usage? : MyClass obj1; MyClass *obj2; And also the specific problem I am having is that I have a class which has an unsigned short array where the last space of the array changes if I do this: MyClass obj; obj = MyClass("123"); MyClass has two constructors: one which will take an int and as default will assign it to zero and splice it in parts of 3 digits or less, and another which will take a string representation of a number and do the same... hope that makes sense! It works well if I declare it MyClass obj = MyClass("123123123"); but not if I do it the other way. Why?
The difference: MyClass obj1; MyClass *obj2; Here obj1 is an instance of MyClass. While obj2 can potentially hold the address of an instance of MyClass. Also obj1 will automatically be initialized by the constructors, while obj2 is not initialized by default (and thus points to random memory). Once initialized obj2 may take the special value NULL which indicates that it is not pointing at an object. obj2 = &obj1; Here we initialize obj2 to point at the address in memory of obj1. If you change any of the members of obj1 then you can see the changes by looking at them through obj2 (but because obj2 is a pointer you need to de-reference). obj1.plop = 5; std::cout << obj1.plop << "\n"; std::cout << obj2->plop << "\n"; Should print the same values. The following is actually two different things: MyClass obj; obj = MyClass("123"); Line one initializes 'obj' with the default constructor. Line two: creates a temporary object constructed with the string "123". Once this temporary object is created it is copied onto 'obj' using the assignment operator. If you did not define an assignment operator the compiler will have generated one for you. If your class contains pointers then the default version will probably not work correctly (in most other situations the default assignment operator should work fine). This line probably works: MyClass obj = MyClass("123123123"); Because the compiler has optimised this into: MyClass obj("123123123");
821,676
821,698
How do I decide whether to use ATL, MFC, Win32 or CLR for a new C++ project?
I'm just starting my first C++ project. I'm using Visual Studio 2008. It's a single-form Windows application that accesses a couple of databases and initiates a WebSphere MQ transaction. I basically understand the differences among ATL, MFC, Win32 (I'm a little hazy on that one actually) and CLR, but I'm at a loss as to how I should choose. Is one or more of these just there for backward-compatibility? Is CLR a bad idea? Any suggestions appreciated. Edit: I've chosen C++ for this project for reasons I didn't go into in the post, which are not entirely technical. So, assuming C++ is the only/best option, which should I choose?
It depends on your needs. Using the CLR will provide you with the most expressive set of libraries (the entire .NET framework), at the cost of restricting your executable to requiring the .NET framework to be installed at runtime, as well as limiting you to the Windows platform (however, all 4 listed technologies are Windows only, so the platform limitation is probably the least troublesome). However, CLR requires you to use the C++/CLI extensions to the C++ language, so you'll, in essence, need to learn some extra language features in order to use this. Doing so gives you many "extras," such as access to the .NET libraries, full garbage collection, etc. ATL & MFC are somewhat trickier to decide between. I'd refer you to MSDN's page for choosing in order to decide between them. The nice thing about ATL/MFC is that you don't need the .NET framework, only the VC/MFC runtimes to be installed for deployment. Using Win32 directly provides the smallest executables, with the fewest dependencies, but is more work to write. You have the least amount of helper libraries, so you're writing more of the code.
821,873
822,032
How to open an std::fstream (ofstream or ifstream) with a unicode filename?
You wouldn't imagine something as basic as opening a file using the C++ standard library for a Windows application was tricky ... but it appears to be. By Unicode here I mean UTF-8, but I can convert to UTF-16 or whatever, the point is getting an ofstream instance from a Unicode filename. Before I hack up my own solution, is there a preferred route here ? Especially a cross-platform one ?
The C++ standard library is not Unicode-aware. char and wchar_t are not required to be Unicode encodings. On Windows, wchar_t is UTF-16, but there's no direct support for UTF-8 filenames in the standard library (the char datatype is not Unicode on Windows) With MSVC (and thus the Microsoft STL), a constructor for filestreams is provided which takes a const wchar_t* filename, allowing you to create the stream as: wchar_t const name[] = L"filename.txt"; std::fstream file(name); However, this overload is not specified by the C++11 standard (it only guarantees the presence of the char based version). It is also not present on alternative STL implementations like GCC's libstdc++ for MinGW(-w64), as of version g++ 4.8.x. Note that just like char on Windows is not UTF8, on other OS'es wchar_t may not be UTF16. So overall, this isn't likely to be portable. Opening a stream given a wchar_t filename isn't defined according to the standard, and specifying the filename in chars may be difficult because the encoding used by char varies between OS'es.
822,059
824,873
SFINAE with invalid function-type or array-type parameters?
Please consider this code: template<typename T> char (&f(T[1]))[1]; template<typename T> char (&f(...))[2]; int main() { char c[sizeof(f<void()>(0)) == 2]; } I expected it doing SFINAE and chosing the second overload, since substitution of T into T[1] yields void [1]() Which is an invalid type, of course. Adjustment of parameter types (array->pointer) is done after substituting template parameters into function parameters and checking for valid resulting types like 14.8.2 [temp.deduct] describes. But both comeau and GCC fail to compile the above. Both with different diagnostics. Comeau says: "ComeauTest.c", line 2: error: array of functions is not allowed char (&f(T[1]))[1]; GCC says (version 4.3.3): error: ISO C++ forbids zero-size array c Meaning, GCC does not fail to substitute, but it chooses the first overload of f, returning a sizeof of 1, instead of failing to substitute it up front like Comeau. What compiler is right and is my code valid at all? Please refer to or quote the proper Standard section in your answer. Thanks! Update: The Standard itself contains such an example in the list at 14.8.2/2. I don't know, why I overlooked it first: template <class T> int f(T[5]); int I = f<int>(0); int j = f<void>(0); // invalid array While the example is only informative, it shows the intention of all those mysterious paragraphs and seems to show the code above should work and reject the first overload.
A small note: although very rare, I have found some occasions where I believe that the Comeau compiler has it wrong - although, these occasions are so rare that it's always worth double and triple checking your assumptions! I may have a reason for the behaviour of g++. I'm not sure it's specified exactly when parameter types are adjusted. Consider the following:

template<typename T> struct A { void bar (T[10]); };
template<typename T> void A<T>::bar (T*) { }

The definition of 'bar' is legal, as "T[10]" decays to "T*". I do not see anything in the standard that prohibits the compiler from performing the adjustments of 8.3.5 against the template declaration, and it also improves performance when it comes to overload matching. Applying this to your example, g++ might be treating it as:

template<typename T> char (&f( T* ))[1];
template<typename T> char (&f(...))[2];

int main() {
    char c[sizeof(f<void()>(0)) == 2];
}

In the above, the substituted parameter is a legal pointer to function, rather than an array of functions. So, the question for me is whether there is something that prohibits performing the adjustments for the function parameters (8.3.5) twice. Personally, I think it makes sense to allow the adjustments to happen twice, since otherwise it complicates the matching of function template overloads. In conclusion, I think it's valid for g++ to select the first overload based on how it treats decaying array parameters, and Comeau is wrong not to have a deduction failure for the array of functions. Of course this now means that (if Comeau was fixed) then each compiler would choose a different overload and would still be standards compliant!
:( EDIT: Just to illustrate my point, consider the following code: template <typename T> void foo ( T * ); template <typename T> void foo ( T * const ); template <typename T> void foo ( T [] ); template <typename T> void foo ( T [10] ); template <typename T> void foo ( T [100] ); void bar () { foo < void() > ( 0 ); } Here, foo has been declared and redeclared several times. Which declaration, and so which parameter type, should the compiler apply the rules listed in 14.8.2? My point is that the standard doesn't say anything about the above. I would also go as far as to say that any wording on this would have to leave it as either "undefined" or "implementation defined" behaviour.
822,066
822,236
Native C++ or .NET for Business App?
Here's the deal: I'm in the process of planning a mid-sized business application that absolutely must support Win2k. AFAIK, official .NET support for Win2k was scrapped a while ago (IIRC, it stopped at version 2.0). Now, I already wrote (ages ago) libraries in C++ that allow me to accomplish the end result (i.e., finish this project) just as quickly as if I was writing this application with the help of the .NET Framework -- so .NET's RAD "advantage" is almost negated. I'm sure a lot of people here deal with business applications that need to support old OS's. So, given my library situation, what advantage(s) are there for me in using .NET over native C++ and vice versa? I'm just not sure which of the two is right for the job -- because it seems that I could use either. Then again, there's that framework support issue to deal with... I will gladly add more information, if required.
The last .NET version that runs under Windows 2000 is .NET 2.0 SP2. It does include the features required by System.Core.dll (which is part of .NET 3.5).

So the answer is YES: you can use .NET 3.5 SP1 under Windows 2000 if you're not going to use the .NET 3.0 libraries (WCF, WF, WPF, CardSpace). You still have LINQ, LINQ to XML, and LINQ to SQL. The only thing you need to do is deploy three core .NET 3.5 SP1 files:

    System.Core.dll
    System.Xml.Linq.dll (LINQ to XML)
    System.Data.Linq.dll (LINQ to SQL)

Disadvantages of this method (read carefully):

    It is not clear whether this is permitted or forbidden by the EULA (end-user license agreement).
    This scenario is not supported by Microsoft.
822,182
822,283
What is the most random function in C++?
I've used

    #include <cstdlib>
    #include <ctime>
    using namespace std;

    srand((unsigned)time(0));
    int n = (rand() >> 8) % 4;

but what other random functions are there, or what other functions could be used as random number generators?

EDIT: I don't really have a particular reason for asking this question, I just wanted to know if C++ had any other random functions.
The rand() and srand() functions are all the C++ Standard specifies. And if it comes to writing your own, be aware of what John von Neumann said: "Anyone who considers arithmetical methods of producing random digits is of course in a state of sin"
822,581
822,591
What C++ library should I use to implement an HTTP client?
I'm looking for a C++ library that implements, or enables the implementation of, an HTTP client. It should handle cookies as well. What would you propose?
Curl++ is an option, particularly if you want things in more of a C++ style.

cpp-netlib is very good and simple to use; on Ubuntu it is available via

    sudo apt-get install libcppnetlib-dev

Example:

    using namespace boost::network;
    using namespace boost::network::http;

    client::request request_("http://127.0.0.1:8000/");
    request_ << header("Connection", "close");
    client client_;
    client::response response_ = client_.get(request_);
    std::string body_ = body(response_);
822,714
822,857
How to download a file with WinHTTP in C/C++?
I know how to download an html/txt page. For example:

    // Variables
    DWORD dwSize = 0;
    DWORD dwDownloaded = 0;
    LPSTR pszOutBuffer;
    vector<string> vFileContent;
    BOOL bResults = FALSE;
    HINTERNET hSession = NULL, hConnect = NULL, hRequest = NULL;

    // Use WinHttpOpen to obtain a session handle.
    hSession = WinHttpOpen(L"WinHTTP Example/1.0",
                           WINHTTP_ACCESS_TYPE_DEFAULT_PROXY,
                           WINHTTP_NO_PROXY_NAME,
                           WINHTTP_NO_PROXY_BYPASS, 0);

    // Specify an HTTP server.
    if (hSession)
        hConnect = WinHttpConnect(hSession, L"nytimes.com",
                                  INTERNET_DEFAULT_HTTP_PORT, 0);

    // Create an HTTP request handle.
    if (hConnect)
        hRequest = WinHttpOpenRequest(hConnect, L"GET",
                                      L"/ref/multimedia/podcasts.html",
                                      NULL, WINHTTP_NO_REFERER,
                                      NULL, NULL);

    // Send a request.
    if (hRequest)
        bResults = WinHttpSendRequest(hRequest,
                                      WINHTTP_NO_ADDITIONAL_HEADERS, 0,
                                      WINHTTP_NO_REQUEST_DATA, 0, 0, 0);

    // End the request.
    if (bResults)
        bResults = WinHttpReceiveResponse(hRequest, NULL);

    // Keep checking for data until there is nothing left.
    if (bResults)
        do {
            // Check for available data.
            dwSize = 0;
            if (!WinHttpQueryDataAvailable(hRequest, &dwSize))
                printf("Error %u in WinHttpQueryDataAvailable.\n", GetLastError());

            // Allocate space for the buffer.
            pszOutBuffer = new char[dwSize + 1];
            if (!pszOutBuffer) {
                printf("Out of memory\n");
                dwSize = 0;
            } else {
                // Read the Data.
                ZeroMemory(pszOutBuffer, dwSize + 1);
                if (!WinHttpReadData(hRequest, (LPVOID)pszOutBuffer,
                                     dwSize, &dwDownloaded)) {
                    printf("Error %u in WinHttpReadData.\n", GetLastError());
                } else {
                    printf("%s", pszOutBuffer);
                    // Data in vFileContent
                    vFileContent.push_back(pszOutBuffer);
                }

                // Free the memory allocated to the buffer.
                delete[] pszOutBuffer;
            }
        } while (dwSize > 0);

    // Report any errors.
    if (!bResults)
        printf("Error %d has occurred.\n", GetLastError());

    // Close any open handles.
    if (hRequest) WinHttpCloseHandle(hRequest);
    if (hConnect) WinHttpCloseHandle(hConnect);
    if (hSession) WinHttpCloseHandle(hSession);

    // Write vFileContent to file
    ofstream out("test.txt", ios::binary);
    for (int i = 0; i < (int)vFileContent.size(); i++)
        out << vFileContent[i];
    out.close();

When I try to download a picture, I get only the first lines of the file and no error message. The problem seems related to the ppwszAcceptTypes parameter of the WinHttpOpenRequest function.
It looks like this thread on MSDN describes the same problem and has the solution: http://social.msdn.microsoft.com/forums/en-US/vclanguage/thread/45ccd91c-6794-4f9b-8f4f-865c76cc146d
822,728
822,999
How to create/run a .exe from a program? (C++)
Is it possible (and if so, how) to write a program in C++ that takes parameters, and then run that program from another C++ program? I.e., I can create a function that adds two numbers:

    int add(int a, int b) { return a + b; }

Is it possible to do the same thing, except instead of a function, as a separate C++ .exe?

EDIT: Since a lot of people don't understand my question, I'll try to say exactly what I want to do. I want to have a program, let's call it "p1". I want that program to get two numbers from the user:

    int x, y;
    cin >> x;
    cin >> y;

Now I want to create another program that takes two numbers and adds them, the same way a function would, except as a separate program. Then I want to run the second program from the first program with the numbers it got from the user. I.e.:

    #include <iostream>
    using namespace std;

    int main()
    {
        int x, y;
        cin >> x;
        cin >> y;
        add(x, y); // this is how I would call a function named "add".
                   // I want to know how to do that with a separate
                   // program instead of just a separate function.
        return 0;
    }

EDIT: I figured out how to use

    (void)system("C:\\Program Files\\test.exe");

How would I use that to pass arguments, and how could I write a program that takes those arguments?
On most OSes, you can pass arguments in system() by space-separating them:

    system("someapp.exe 42 24");

This string, of course, can be built up with sprintf or std::ostringstream or what have you. Then you just need to process your arguments in the other program:

    int main(int argc, char **argv)
    {
        // important - make sure we have enough args
        if (argc != 3) {
            // the first (zeroth) argument is always the program name,
            // so 1 + 2 args are needed
            fprintf(stderr, "Wrong number of arguments\n");
            return EXIT_FAILURE;
        }

        int a = atoi(argv[1]);
        int b = atoi(argv[2]);

        // do stuff
    }