| question_id | answer_id | title | question | answer |
|---|---|---|---|---|
268,121
| 268,335
|
When to build your own buffer system for I/O (C++)?
|
I have to deal with very large text files (2 GB); reading and writing them line by line is mandatory. Writing 23 million lines with ofstream is really slow, so at first I tried to speed up the process by writing large chunks of lines into a memory buffer (for example 256 MB or 512 MB) and then writing the buffer to the file. This did not work; the performance is more or less the same. I have the same problem reading the files. I know the I/O operations are buffered by the STL I/O system, and this also depends on the disk scheduler policy (managed by the OS, in my case Linux).
Any idea about how to improve the performance?
PS: I have been thinking about using a background child process (or a thread) to read/write the data chunks while the program is processing data, but I do not know (mainly in the case of the subprocess) whether this would be worthwhile.
|
A 2GB file is pretty big, and you need to be aware of all the possible areas that can act as bottlenecks:
The HDD itself
The HDD interface (IDE/SATA/RAID/USB?)
Operating system/filesystem
C/C++ Library
Your code
I'd start by doing some measurements:
How long does your code take to read/write a 2GB file,
How fast can the 'dd' command read and write to disk? Example...
dd if=/dev/zero bs=1024 count=2000000 of=file_2GB
How long does it take to write/read using just big fwrite()/fread() calls
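For the third measurement, a minimal timing sketch (file name and chunk size are arbitrary choices for illustration; time() has one-second resolution, which is fine at this scale):
#include <cstdio>
#include <ctime>
#include <vector>
int main()
{
    const std::size_t chunk = 4 * 1024 * 1024;               // 4 MB per fwrite()
    const unsigned long long total = 2048ULL * 1024 * 1024;  // 2 GB in total
    std::vector<char> buf(chunk, 'x');
    std::time_t start = std::time(0);
    std::FILE* fp = std::fopen("file_2GB", "wb");
    if (!fp) return 1;
    for (unsigned long long written = 0; written < total; written += chunk)
        std::fwrite(&buf[0], 1, chunk, fp);          // one big write per chunk
    std::fclose(fp);
    std::printf("took %.0f s\n", std::difftime(std::time(0), start));
    return 0;
}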
Assuming your disk is capable of reading/writing at about 40 MB/s (which is probably a realistic figure to start from), your 2 GB file can't be read or written in much less than about 50 seconds.
How long is it actually taking?
Hi Roddy, using the fstream read method with 1.1 GB files and large buffers (128, 255 or 512 MB) it takes about 43-48 seconds, and it is the same using fstream getline (line by line). cp takes almost 2 minutes to copy the file.
In which case, you're hardware-bound. cp has to read and write, and will be seeking back and forth across the disk surface like mad as it does so. So it will (as you see) be more than twice as bad as the simple 'read' case.
To improve the speed, the first thing I'd try is a faster hard drive, or an SSD.
You haven't said what the disk interface is? SATA is pretty much the easiest/fastest option. Also (obvious point, this...) make sure the disk is physically on the same machine your code is running, otherwise you're network-bound...
|
268,260
| 268,286
|
How do I write a console application in Windows that would minimize to the system tray?
|
I have written a Visual C++ console application (i.e. subsystem:console) that prints useful diagnostic messages to the console.
However, I would like to keep the application minimized most of the time, and instead of minimizing to the taskbar, appear as a nice icon on the system tray. I would also like to restore the console when the system tray icon is clicked.
How should I change my program to do this?
|
This is going to be an ugly hack.
First, you have to retrieve the hWnd / hInstance of your console application. Right now, I can only come up with one way:
Create a Guid with CoCreateGuid()
Convert it to a string
Set the title of the console window to this guid with SetConsoleTitle()
Find the hWnd of your window, with the Guid as the title, using FindWindow()
From this point on you can do it the usual way. See http://www.gidforums.com/t-9218.html for more info.
Don't forget to rename your console window back to the original title once you're done.
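A hedged sketch of those steps in plain Win32 (no error handling); note that on Windows 2000 and later, GetConsoleWindow() returns the console's HWND directly, which makes this trick unnecessary there:
#include <windows.h>
#include <objbase.h>
HWND GetConsoleHwnd()
{
    GUID guid;
    CoCreateGuid(&guid);
    wchar_t unique[64];
    StringFromGUID2(guid, unique, 64);   // the GUID as a string
    wchar_t oldTitle[1024];
    GetConsoleTitleW(oldTitle, 1024);    // remember the real title
    SetConsoleTitleW(unique);            // set the unique title
    Sleep(40);                           // let the title change propagate
    HWND hwnd = FindWindowW(NULL, unique);
    SetConsoleTitleW(oldTitle);          // restore the original title
    return hwnd;
}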
As you can see, even though this is possible to do, it's a horrible and painful solution. Please don't do it. Please do not minimize console applications to the system tray. It is not something you are supposed to be able to do in the Windows API.
|
268,587
| 268,591
|
Can I use identical names for fields and constructor parameters?
|
class C {
T a;
public:
C(T a): a(a) {;}
};
Is it legal?
|
Yes, it is legal and works on all platforms.
It will correctly initialize your member variable a to the passed-in value a.
Some consider it cleaner to name them differently, but not everyone does. I personally use it a lot :)
Initialization lists with the same variable name works because the syntax of an initialization item in an initialization list is as follows:
<member>(<value>)
You can verify what I wrote above by creating a simple program that does this: (It will not compile)
class A
{
A(int a)
: a(5)//<--- try to initialize a non member variable to 5
{
}
};
You will get a compile error, something like: 'A' does not have a field named 'a'.
On a side note:
One reason why you may not want to use the same member name as parameter name is that you would be more prone to the following:
class A
{
A(int myVarriable)
: myVariable(myVariable)//<--- Bug, there was a typo in the parameter name, myVariable will never be initialized properly
{
}
int myVariable;
};
On a side note(2):
One reason why you may want to use the same member name as parameter name is that you would be less prone to the following:
class A
{
A(int myVariable_)
{
//<-- do something with _myVariable, oops _myVariable wasn't initialized yet
...
_myVariable = myVariable_;
}
int _myVariable;
};
This could also happen with large initialization lists, if you use _myVariable before initializing it in the initialization list.
|
268,604
| 268,693
|
How to set the width of the exponent field for an ostream?
|
This code:
#include <iostream>
int main( int, char **argv )
{
std::cout << 1.23e45 << std::endl;
}
prints
1.23e+045
when compiled with MS Visual Studio 2003, and
1.23e+45
on my Linux machine.
How can I specify the width of the exponent field (and why is there a difference in the first place)?
|
I don't think this is possible with standard manipulators. (if it is, I'd love to be corrected and learn how)
Your only remaining option is creating a streambuf yourself, and intercepting all exponent numbers that go to the stream, reformat them by hand, and pass them on to the underlying stream.
Seems a lot of work, and while not rocket science, no trivial task either.
On the 'why' question: I know Linux defines the exponent as a minimum of two digits; I suppose Windows specifies it as a minimum of three?
// on linux
std::cout << std::scientific << 1.23e4 << std::endl;
Also adds a leading zero:
1.230000e+04
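If post-processing the formatted string is acceptable, here is a minimal workaround sketch (formatExp is a hypothetical helper, not a standard manipulator) that normalizes the exponent to a fixed number of digits on any platform:
#include <iostream>
#include <sstream>
#include <string>
// Format v in scientific notation, then pad or trim the exponent digits.
std::string formatExp(double v, int expDigits)
{
    std::ostringstream os;
    os << std::scientific << v;
    std::string s = os.str();
    std::string::size_type e = s.find_first_of("eE");
    if (e == std::string::npos) return s;
    std::string mantissa = s.substr(0, e + 1);   // up to and including 'e'
    char sign = s[e + 1];                        // '+' or '-'
    std::string digits = s.substr(e + 2);
    while ((int)digits.size() < expDigits) digits.insert(0, "0");
    while ((int)digits.size() > expDigits && digits[0] == '0') digits.erase(0, 1);
    return mantissa + sign + digits;
}
int main()
{
    std::cout << formatExp(1.23e45, 2) << std::endl; // 1.230000e+45 everywhere
}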
|
268,772
| 268,780
|
Run-Time Check Failure #2 - Stack around the variable 'x' was corrupted
|
I receive this Run-Time Check Failure upon the return in the following code. I believe similar code is running fine elsewhere in the program. Any ideas?
CString GetVariableName(CString symbol, CString filepath)
{
char acLine[512];
char acPreviousLine[512];
CString csFile;
FILE *fp;
csFile.Format("%svariables.txt", filepath);
fp = fopen(csFile, "r");
if (! fp)
return("");
for (;;)
{
strcpy(acPreviousLine, acLine);
// NULL means we are out of lines in the file.
if (myfgets(acLine, 511, fp) == NULL)
break;
// "END" indicates end of file
if (! strcmp(acLine, "END"))
break;
if (! strcmp(acLine, symbol))
{
// Previous line should be variable name
fclose(fp);
// Following line results in Check Failure while in Debug mode
return(acPreviousLine);
}
}
fclose(fp);
return("");
}
|
There is no variable 'x' in the above example, but I'll presume you edited the error message!
acLine isn't initialised, so the first time you copy it to acPreviousLine, you are copying whatever happens to be on the stack. This can give you a buffer overflow and therefore stack corruption in some situations - not all, because you might be lucky and find a null in acLine before you get to 512 bytes.
The stack gets checked for corruption on return, because there are guard words (on this platform and build configuration - which I presume is on Windows, compiling on VS in debug mode) inserted around all stack variables to check for just that problem.
Initialise acLine[0] to 0.
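That is, zero-initialize the buffers so the first strcpy() copies a valid (empty) string instead of stack garbage:
char acLine[512] = { 0 };
char acPreviousLine[512] = { 0 };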
|
269,054
| 269,079
|
How To Set Errorlevel On Exit of MFC App
|
I have an MFC legacy app that I help to maintain. I'm not quite sure how to identify the version of MFC and I don't think it would make a difference anyway.
The app can take some parameters on the command line; I would like to be able to set an errorlevel on exiting the app to allow a bat/cmd file to check for failure and respond appropriately.
I don't believe that exit() would work (hadn't tried it yet to be honest) because of the fact that this is an MFC app. Anyone know how to set the errorlevel returned by an MFC app? Can I just use exit()?
|
I can't take credit for this, so please don't upvote this reply.
CWinApp::ExitInstance();
return myExitCode;
This will return the errorlevel to the calling batch file for you to then evaluate and act upon.
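A hedged sketch of where that snippet lives: override ExitInstance() in your CWinApp-derived class and return the desired code (CMyApp and m_myExitCode are illustrative names):
// MFC passes the value returned from ExitInstance() to the OS as the
// process exit code, which batch files see as ERRORLEVEL.
int CMyApp::ExitInstance()
{
    CWinApp::ExitInstance();   // let MFC do its normal cleanup
    return m_myExitCode;       // hypothetical member holding your code
}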
|
269,081
| 269,206
|
is it possible to have templated classes within a template class?
|
template <class M, class A> class C { std::list<M> m_List; ... }
Is the above code possible? I would like to be able to do something similar.
The reason I ask is that I get the following error:
Error 1 error C2079: 'std::_List_nod<_Ty,_Alloc>::_Node::_Myval' uses undefined class 'M' C:\Program Files\Microsoft Visual Studio 9.0\VC\include\list 41
|
My guess: you forward declared class M somewhere, and only declared it fully after the template instantiation.
My hint: give your formal template arguments a different name than the actual ones. (i.e. class M)
// template definition file
#include <list>
template< class aM, class aT >
class C {
std::list<aM> m_List;
...
};
Example of a bad forward declaration, resulting in the mentioned error:
// bad template usage file causing the aforementioned error
class M;
...
C<M,OtherClass> c; // this would result in your error
class M { double data; };
Example of proper declaration, not resulting in the error:
// better template usage file
class M { double data; }; // or #include the class header
...
C<M,OtherClass> c; // this would have to compile
|
269,135
| 269,210
|
How do you structure unit tests for cross-compiled code?
|
My new project is targeting an embedded ARM processor. I have a build system that uses a cross-compiler running on an Ubuntu linux box. I like to use unit testing as much as possible, but I'm a little bit confused about how to proceed with this setup.
I can't see how to run unit tests on the ARM device itself (somebody correct me if I'm wrong). I think that my best option is to compile the code on the build machine using its own native compiler for the unit tests. Is this approach fundamentally flawed? Is unit testing on a different platform a waste of time?
I'm planning to use CppUnit on the build machine using the native compiler for the unit tests. Then I'll cross compile the code for the ARM processor and do integration and system testing on the target device itself. How would you structure the source code and the test code to keep this from turning into a tangled mess?
|
With an embedded device, it depends on what (hardware) interfaces you have.
For example, the motion control cards I deal with use a command-line interface. The IDE they ship uses it as its primary method of interacting with the cards. It works the same way regardless of whether I am using PCI, IDE, Serial, or Ethernet.
The DLL they ship for programming gives access to the command-line interface. So I can send a string and read back the response. What I do for my unit tests is have a physical card hooked up to (or in) my development machine. I send it commands after uploading the software, read the response, and if they are correct it passes the test.
I also have extra hardware, a black box if you will, that simulates a machine that the motion control card is normally hooked up to. It helps with the automated tests, but there is a manual phase, as I have to set switches to simulate different setups on the machine.
I have achieved a greater degree of automation by taking a digital I/O card and using its outputs to feed into the inputs of the motion control card, and the same in reverse.
I found that for most hardware you have to have some type of simulator hardware.
The exception is the rare package that comes with a software simulator.
I know this probably isn't ideal, as not every developer can have one of these on their desk. My hardware simulator is portable, so I can give it to whoever is working on the motion control software at the time. If it can't be portable, then having a dedicated testing or hardware development computer would be in order.
Finally, it boils down to the specifics of your hardware and what support the manufacturer gives in terms of software and simulators. To help you more, you will need to post more specifics.
|
269,223
| 269,251
|
Could C++ have not obviated the pimpl idiom?
|
As I understand it, the pimpl idiom exists only because C++ forces you to place all the private class members in the header. If the header contained only the public interface, then, in theory, any change in the class implementation would not necessitate a recompile of the rest of the program.
What I want to know is why C++ is not designed to allow such a convenience. Why does it demand at all for the private parts of a class to be openly displayed in the header (no pun intended)?
|
This has to do with the size of the object. The h file is used, among other things, to determine the size of the object. If the private members are not given in it, then you would not know how large an object to new.
You can simulate, however, your desired behavior by the following:
class MyClass
{
public:
// public stuff
private:
#include "MyClassPrivate.h"
};
This does not enforce the behavior, but it gets the private stuff out of the .h file.
On the down side, this adds another file to maintain.
Also, in visual studio, the intellisense does not work for the private members - this could be a plus or a minus.
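For comparison, a minimal pimpl sketch (names are illustrative) that keeps the private members out of the header entirely, at the cost of a heap allocation and an extra indirection:
// MyClass.h - only the public interface and an opaque pointer are visible
class MyClassImpl;   // defined only in the .cpp file
class MyClass
{
public:
    MyClass();
    ~MyClass();
    void doSomething();
private:
    MyClassImpl* m_impl;   // sizeof(MyClass) no longer depends on the privates
};
// MyClass.cpp
class MyClassImpl
{
public:
    MyClassImpl() : secretState(0) {}
    int secretState;
};
MyClass::MyClass() : m_impl(new MyClassImpl) {}
MyClass::~MyClass() { delete m_impl; }
void MyClass::doSomething() { ++m_impl->secretState; }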
|
269,241
| 269,313
|
Is there a way to declare the copy constructor non-public AND use the default copy constructor?
|
I have a not-so-small class under development (it changes often), and I must not provide a public copy constructor or copy assignment.
The class has objects with value semantics, so default copy and assignment work.
The class is in a hierarchy, with virtual methods, so I provide a virtual Clone() to avoid slicing and to perform a "polymorphic copy".
I don't want to declare copy assignment and construction protected AND define them (and maintain them in sync with changes) unless I have something special to perform.
Does someone know if there's another way?
thanks!
UgaSofT
|
An object from a polymorphic hierarchy, and with value semantics? Something is wrong here.
If you really do need your class to have value semantics, have a look at J. Coplien's Envelope-Letter Idiom, or at this article about Regular Objects [1].
[1] Sean Parent. “Beyond Objects”. Understanding The Software We Write. http://stlab.adobe.com/wiki/index.php/Papers_and_Presentations. C++ Connections. Nov 2005.
HTH,
|
269,268
| 269,289
|
How to implement big int in C++
|
I'd like to implement a big int class in C++ as a programming exercise—a class that can handle numbers bigger than a long int. I know that there are several open source implementations out there already, but I'd like to write my own. I'm trying to get a feel for what the right approach is.
I understand that the general strategy is get the number as a string, and then break it up into smaller numbers (single digits for example), and place them in an array. At this point it should be relatively simple to implement the various comparison operators. My main concern is how I would implement things like addition and multiplication.
I'm looking for a general approach and advice as opposed to actual working code.
|
Things to consider for a big int class:
Mathematical operators: +, -, /, *, %. Don't forget that your class may be on either side of the operator, that the operators can be chained, and that one of the operands could be an int, float, double, etc.
I/O operators: >>, <<. This is where you figure out how to properly create your class from user input, and how to format it for output as well.
Conversions/casts: figure out what types/classes your big int class should be convertible to, and how to properly handle the conversion. A quick list would include double and float, and may include int (with proper bounds checking) and complex (assuming it can handle the range).
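As a concrete starting point for the arithmetic, here is a minimal sketch of schoolbook addition over base-10 digit arrays stored least-significant-digit first; a real implementation would use a larger base and handle signs:
#include <vector>
// Add two non-negative numbers stored as base-10 digit vectors,
// least significant digit first (e.g. 123 is stored as {3,2,1}).
std::vector<int> add(const std::vector<int>& a, const std::vector<int>& b)
{
    std::vector<int> sum;
    int carry = 0;
    for (std::size_t i = 0; i < a.size() || i < b.size() || carry; ++i)
    {
        int digit = carry;
        if (i < a.size()) digit += a[i];
        if (i < b.size()) digit += b[i];
        sum.push_back(digit % 10);   // current digit
        carry = digit / 10;          // propagate the carry
    }
    return sum;
}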
|
269,384
| 269,502
|
Understanding wxWidgets sizers
|
I'm still getting used to the sizers in wxWidgets, and as such can't seem to make them do what I want.
I want a large panel that will contain a list of other panels/boxes, which each then contain a set of text fields
----------------------
| label text box |
| label2 text box2 |
----------------------
----------------------
| label text box |
| label2 text box2 |
----------------------
----------------------
| label text box |
| label2 text box2 |
----------------------
I also need to be able to add (at the end), and remove(anywhere) these boxes.
If there's too many to fit in the containing panel a vertical scroll bar is also required.
This is what I've tried so far. It works for the first box that's created with the containing panel, but additional added items are just a small box in the top left of the main panel, even though the sizer code is the same for all boxes.
//itemsList is a container holding a list of Item* pointers, one per box/panel
Items::Items(wxWindow *parent)
:wxPanel(parent, wxID_ANY, wxDefaultPosition, wxDefaultSize, wxBORDER_SUNKEN)
{
//one sstarting item
OnAdd(wxCommandEvent());
}
void Items::OnAdd(wxCommandEvent &event)
{
unsigned id = itemsList.size();
Item *item = new Item(this,id);
itemsList.push_back(item);
RebuildSizer();
}
void Items::RebuildSizer()
{
this->SetSizer(0,true);
wxBoxSizer *sizerV = new wxBoxSizer(wxVERTICAL);
for(std::vector<Item*>::iterator it = itemsList.begin(); it != itemsList.end(); ++it)
sizerV->Add(*it, 1, wxEXPAND | wxLEFT | wxRIGHT, 5);
SetSizer(sizerV);
}
void Items::OnRemove (wxCommandEvent &event, unsigned itemId)
{
delete itemsList[itemId];
itemsList.erase(itemsList.begin()+itemId);
for(std::vector<Item*>::iterator it = itemsList.begin()+itemId; it != itemsList.end(); ++it)
(*it)->ChangeId(itemId++);
RebuildSizer();
}
Also, what's the best way to lay out the contents of each box? I was thinking of using a 2-by-2 grid sizer, but I'm not sure how to make the text boxes expand to be as wide as possible while the labels stay as small as possible (but also maintaining the alignment between the two text boxes)?
|
"If theres too many to fit in the containing panel a vertical scroll bar is also required."
You could have a look at wxScrolledWindow.
"additional added items are just a small box in the top left of the main panel"
I am not sure, but, maybe a call to wxSizer::Layout() will help.
"Also whats the best way to lay out the contents of each box?"
Have a look at this sizerdemo. If it is not mandatory, that the labels stay as small as possible, you could give the labels a fixed width and only let the text boxes grow. If you want to adapt the size when adding or removing new boxes, you could implement the OnSize() event handler.
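To make that concrete, a hedged sketch of the two-column layout using wxFlexGridSizer, where the text-box column grows and the label column keeps its natural width (widget names are illustrative):
// Inside Item's constructor: 2 columns, column 1 (the text boxes) growable.
wxFlexGridSizer* grid = new wxFlexGridSizer(2, 2, 5, 5); // rows, cols, vgap, hgap
grid->AddGrowableCol(1);                                 // text boxes stretch
grid->Add(new wxStaticText(this, wxID_ANY, "label"),
          0, wxALIGN_CENTER_VERTICAL);
grid->Add(new wxTextCtrl(this, wxID_ANY), 1, wxEXPAND);
grid->Add(new wxStaticText(this, wxID_ANY, "label2"),
          0, wxALIGN_CENTER_VERTICAL);
grid->Add(new wxTextCtrl(this, wxID_ANY), 1, wxEXPAND);
SetSizer(grid);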
|
269,633
| 269,688
|
MFC: Accessing Views from Mainframe
|
I am trying to access a view inside a splitter from my mainframe. At the moment I have this:
CWnd* pView = m_wndSplitter.GetPane( 0, 0 );
However this gets me a pointer to the CWnd not the CMyViewClass object.
Can anyone explain to me what I need to do in order to access the view object itself so I can access member functions in the form pView->ViewFunction(...);
|
Just cast it:
// using MFC's dynamic cast macro
CMyViewClass* pMyView =
DYNAMIC_DOWNCAST(CMyViewClass, m_wndSplitter.GetPane(0,0));
if ( NULL != pMyView )
// whatever you want to do with it...
or:
// standard C++
CMyViewClass* pMyView =
dynamic_cast<CMyViewClass*>(m_wndSplitter.GetPane(0,0));
if ( NULL != pMyView )
// whatever you want to do with it...
If you know that the view in pane 0,0 will always be of type CMyViewClass, then you could just use static_cast... but I recommend you don't - no sense risking problems should you ever change your layout.
|
269,794
| 270,160
|
Can't initialize an object in a member initialization list
|
I have this code:
CCalcArchive::CCalcArchive() : m_calcMap()
{
}
m_calcMap is defined as this:
typedef CTypedPtrMap<CMapStringToPtr, CString, CCalculation*> CCalcMap;
CCalcMap& m_calcMap;
When I compile in Visual Studio 2008, I get this error:
error C2440: 'initializing' : cannot convert from 'int' to 'CCalcArchive::CCalcMap &'
I don't even understand where it gets the "int" error from, and also why this doesn't work? It feels like I'm actually having some sort of syntax error, but isn't this how member initialization lists are supposed to be used? Also, AFAIK, the MFC class CTypedPtrMap has no constructor taking arguments.
|
The int is coming from the fact that CTypedPtrMap has a constructor that takes an int argument that is defaulted to 10.
The real problem you're running into is that the m_calcMap reference initialization you have there is trying to default-construct a temporary CTypedPtrMap object to bind the reference to. However, only const references can be bound to temporary objects. No doubt the error message is not very informative.
But even if the m_calcMap member were a const reference, you'd still have a problem binding it to a temporary. In this case, the MSVC 2008 compiler gives a pretty clear warning:
mfctest.cpp(72) : warning C4413: '' : reference member is initialized to a temporary
that doesn't persist after the constructor exits
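Assuming CCalcArchive is meant to own the map, the usual fix is to make the member a plain object rather than a reference, so it is default-constructed automatically and needs no initializer at all:
typedef CTypedPtrMap<CMapStringToPtr, CString, CCalculation*> CCalcMap;
CCalcMap m_calcMap;   // an object, not a reference: default-constructed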
|
269,837
| 867,675
|
How do I display custom tooltips in a CTreeCtrl?
|
I have a class derived from CTreeCtrl. In OnCreate() I replace the default CToolTipCtrl object with a custom one:
int CMyTreeCtrl::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
if (CTreeCtrl::OnCreate(lpCreateStruct) == -1)
return -1;
// Replace tool tip with our own which will
// ask us for the text to display with a TTN_NEEDTEXT message
CTooltipManager::CreateToolTip(m_pToolTip, this, AFX_TOOLTIP_TYPE_DEFAULT);
m_pToolTip->AddTool(this, LPSTR_TEXTCALLBACK);
SetToolTips(m_pToolTip);
// Update: Added these two lines, which don't help either
m_pToolTip->Activate(TRUE);
EnableToolTips(TRUE);
return 0;
}
My message handler looks like this:
ON_NOTIFY_EX(TTN_NEEDTEXT, 0, &CMyTreeCtrl::OnTtnNeedText)
However I never receive a TTN_NEEDTEXT message. I had a look with Spy++ and it also looks like this message never gets sent.
What could be the problem here?
Update
I'm not sure whether this is relevant: The CTreeCtrl's parent window is of type CDockablePane. Could there be some extra work needed for this to work?
|
Finally! I (partially) solved it:
It looks like the CDockablePane parent window indeed caused this problem...
First I removed all the tooltip-specific code from the CTreeCtrl-derived class. Everything is done in the parent pane window.
Then I edited the parent window's OnCreate() method:
int CMyPane::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
if (CDockablePane::OnCreate(lpCreateStruct) == -1)
return -1;
const DWORD dwStyle = WS_CHILD | WS_VISIBLE | WS_CLIPSIBLINGS | WS_CLIPCHILDREN |
TVS_CHECKBOXES | TVS_DISABLEDRAGDROP | TVS_HASBUTTONS | TVS_HASLINES | TVS_LINESATROOT |
TVS_INFOTIP | TVS_NOHSCROLL | TVS_SHOWSELALWAYS;
// TREECTRL_ID is a custom member constant, set to 1
if(!m_tree.Create(dwStyle, m_treeRect, this, TREECTRL_ID ) )
{
TRACE0("Failed to create trace tree list control.\n");
return -1;
}
// m_pToolTip is a protected member of CDockablePane
m_pToolTip->AddTool(&m_tree, LPSTR_TEXTCALLBACK, &m_treeRect, TREECTRL_ID);
m_tree.SetToolTips(m_pToolTip);
return 0;
}
Unfortunately we cannot simply call AddTool() with fewer parameters, because the base class will complain, in the form of an ASSERT about a uFlags member, if there is no tool ID set.
And since we need to set the ID, we also need to set a rectangle. I created a CRect member and set it to (0, 0, 10000, 10000) in the CTor. I have not yet found a working way to change the tool's rect size so this is my very ugly workaround. This is also why I call this solution partial. Update: I asked a question regarding this.
Finally there is the handler to get the tooltip info:
// Message map entry
ON_NOTIFY(TVN_GETINFOTIP, TREECTRL_ID, &CMyPane::OnTvnGetInfoTip)
// Handler
void CMyPane::OnTvnGetInfoTip(NMHDR *pNMHDR, LRESULT *pResult)
{
LPNMTVGETINFOTIP pGetInfoTip = reinterpret_cast<LPNMTVGETINFOTIP>(pNMHDR);
// This is a CString member
m_toolTipText.ReleaseBuffer();
m_toolTipText.Empty();
// Set your text here...
pGetInfoTip->pszText = m_toolTipText.GetBuffer();
*pResult = 0;
}
|
269,918
| 270,143
|
C++ string diff (a la Python's difflib)
|
I'm trying to diff two strings to determine whether or not they solely vary in one numerical subset of the string structure; for example,
varies_in_single_number_field('foo7bar', 'foo123bar')
# Returns True, because 7 != 123, and there's only one varying
# number region between the two strings.
In Python I can use the difflib to accomplish this:
import difflib, doctest
def varies_in_single_number_field(str1, str2):
"""
A typical use case is as follows:
>>> varies_in_single_number_field('foo7bar00', 'foo123bar00')
True
Numerical variation in two dimensions is no good:
>>> varies_in_single_number_field('foo7bar00', 'foo123bar01')
False
Varying in a nonexistent field is okay:
>>> varies_in_single_number_field('foobar00', 'foo123bar00')
True
Identical strings don't *vary* in any number field:
>>> varies_in_single_number_field('foobar00', 'foobar00')
False
"""
in_differing_substring = False
passed_differing_substring = False # There should be only one.
differ = difflib.Differ()
for letter_diff in differ.compare(str1, str2):
letter = letter_diff[2:]
if letter_diff.startswith(('-', '+')):
if passed_differing_substring: # Already saw a varying field.
return False
in_differing_substring = True
if not letter.isdigit(): return False # Non-digit diff character.
elif in_differing_substring: # Diff character not found - end of diff.
in_differing_substring = False
passed_differing_substring = True
return passed_differing_substring # No variation if no diff was passed.
if __name__ == '__main__': doctest.testmod()
But I have no idea how to find something like difflib for C++. Alternative approaches welcome. :)
|
This might work, it at least passes your demonstration test:
EDIT: I've made some modifications to deal with some string indexing issues. I believe it should be good now.
#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
#include <cctype>
bool starts_with(const std::string &s1, const std::string &s2) {
return (s1.length() <= s2.length()) && (s2.substr(0, s1.length()) == s1);
}
bool ends_with(const std::string &s1, const std::string &s2) {
return (s1.length() <= s2.length()) && (s2.substr(s2.length() - s1.length()) == s1);
}
bool is_numeric(const std::string &s) {
for(std::string::const_iterator it = s.begin(); it != s.end(); ++it) {
if(!std::isdigit(*it)) {
return false;
}
}
return true;
}
bool varies_in_single_number_field(std::string s1, std::string s2) {
size_t index1 = 0;
size_t index2 = s1.length() - 1;
if(s1 == s2) {
return false;
}
if((s1.empty() && is_numeric(s2)) || (s2.empty() && is_numeric(s1))) {
return true;
}
if(s1.length() < s2.length()) {
s1.swap(s2);
}
while(index1 < s1.length() && starts_with(s1.substr(0, index1), s2)) { index1++; }
while(ends_with(s1.substr(index2), s2)) { index2--; }
return is_numeric(s1.substr(index1 - 1, (index2 + 1) - (index1 - 1)));
}
int main() {
std::cout << std::boolalpha << varies_in_single_number_field("foo7bar00", "foo123bar00") << std::endl;
std::cout << std::boolalpha << varies_in_single_number_field("foo7bar00", "foo123bar01") << std::endl;
std::cout << std::boolalpha << varies_in_single_number_field("foobar00", "foo123bar00") << std::endl;
std::cout << std::boolalpha << varies_in_single_number_field("foobar00", "foobar00") << std::endl;
std::cout << std::boolalpha << varies_in_single_number_field("7aaa", "aaa") << std::endl;
std::cout << std::boolalpha << varies_in_single_number_field("aaa7", "aaa") << std::endl;
std::cout << std::boolalpha << varies_in_single_number_field("aaa", "7aaa") << std::endl;
std::cout << std::boolalpha << varies_in_single_number_field("aaa", "aaa7") << std::endl;
}
Basically, it looks for a string which has three parts: string2 begins with part1, string2 ends with part3, and part2 is only digits.
|
269,932
| 269,973
|
string to char* marshaling
|
I wrote a managed C++ class that has the following function:
void EndPointsMappingWrapper::GetLastError(char* strErrorMessage)
{
strErrorMessage = (char*) Marshal::StringToHGlobalAnsi(_managedObject->GetLastError()).ToPointer();
}
As you can see, this is a simple method to copy the managed string of the last error to the unmanaged world (char*).
From my unmanaged class I call the method like this:
char err[1000];
ofer->GetLastError(err);
Putting a breakpoint at the managed C++ method shows that the string is successfully translated into the char*. However, once I return to the unmanaged class, the content of err[1000] is lost and it's empty again.
|
You are assigning a new value to the passed pointer (strErrorMessage) itself, instead of copying the contents of the buffer returned by Marshal::StringToHGlobalAnsi to the address it points to.
A correct implementation should be:
void EndPointsMappingWrapper::GetLastError(char* strErrorMessage, int len)
{
char *str = (char*) Marshal::StringToHGlobalAnsi(_managedObject->GetLastError()).ToPointer();
strncpy(strErrorMessage, str, len);
strErrorMessage[len-1] = '\0';
Marshal::FreeHGlobal(IntPtr(str));
}
The length is the size of the buffer passed.
strncpy() will copy at most len bytes. If there is no null byte among the first len bytes of str, the destination string won't be null-terminated. For that reason we force a '\0' into the last byte of the buffer.
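The call site from the question then passes the buffer size too:
// Caller side (unmanaged): pass the buffer and its size.
char err[1000];
ofer->GetLastError(err, sizeof(err));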
|
270,347
| 271,644
|
Replacing auto_ptr in VC++ 8
|
std::auto_ptr is broken in VC++ 8 (which is what we use at work). My main gripe with it is that it allows auto_ptr<T> x = new T();, which of course leads to horrible crashes, while being simple to do by mistake.
From an answer to another question here on stackoverflow:
Note that the implementation of std::auto_ptr in Visual Studio 2005 is horribly broken.
http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=98871
http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=101842
I want to use
boost::scoped_ptr, for pointers that shouldn't pass ownership.
boost::shared_ptr, for pointers in containers and elsewhere where they are required.
std::auto_ptr, for pointers that should/can pass ownership.
But since std::auto_ptr is broken for me, I wonder what would be the best approach:
Replace std::auto_ptr with something from the net, like this one from Rani Sharoni (I haven't tried it yet).
Use boost::shared_ptr instead. Will of course work, although there will be some minor overhead that I don't care about. But I want to use auto_ptr to signal the intent of the pointer. (See this answer for a vote on this approach.)
I will never need to pass ownership in practice, so I shouldn't worry about this.
Update:
Here is what I did:
I copied the aforementioned auto_ptr implementation by Rani Sharoni. From here.
Did some minor tests:
class T
{
public:
T() {
OutputDebugStringA("T\n");
};
~T() {
OutputDebugStringA("~T\n");
};
};
{
fix::auto_ptr<T> x(new T); // This just works.
}
{
fix::auto_ptr<T> x = (new T); // Doesn't compile. Great!
}
{
fix::auto_ptr<T> x = fix::auto_ptr<T>(new T); // Transfer of ownership works also.
}
Of course these tests are by no means exhaustive and you shouldn't trust them. Implementing an exception safe templated class is hairy business. At least this works better than the built in one.
Note: I don't know if I'm allowed to use this implementation yet, with respect to copyright. I have emailed Rani and I'm waiting for a reply. I'll update this post when I know more.
Permission is granted for everyone to use Rani Sharoni's auto_ptr implementation as you wish.
Thank you for all your replies.
|
Move to boost smart pointers.
In the meantime, you may want to extract a working auto_ptr implementation from an old / another STL, so you have working code.
I believe that auto_ptr semantics are fundamentally broken - it saves typing, but the interface actually is not simpler: you still have to track which instance is the current owner and make sure the owner leaves last.
unique-ptr "fixes" that, by making release not only give up ownership, but also setting the RHS to null. It is the closest replacement for auto-ptr, but with its different semantics it is not a drop-in replacement.
There's an introductory article to boost smart pointers, by, ahem, me.
|
270,408
| 270,435
|
Is it better in C++ to pass by value or pass by reference-to-const?
|
Is it better in C++ to pass by value or pass by reference-to-const?
I am wondering which is better practice. I realize that pass by reference-to-const should provide for better performance in the program because you are not making a copy of the variable.
|
It used to be generally recommended best practice[1] to use pass by const ref for all types, except for builtin types (char, int, double, etc.), for iterators and for function objects (lambdas, classes deriving from std::*_function).
This was especially true before the existence of move semantics. The reason is simple: if you passed by value, a copy of the object had to be made and, except for very small objects, this is always more expensive than passing a reference.
With C++11, we have gained move semantics. In a nutshell, move semantics permit that, in some cases, an object can be passed “by value” without copying it. In particular, this is the case when the object that you are passing is an rvalue.
In itself, moving an object is still at least as expensive as passing by reference. However, in many cases a function will internally copy an object anyway, i.e. it will take ownership of the argument.[2]
In these situations we have the following (simplified) trade-off:
We can pass the object by reference, then copy internally.
We can pass the object by value.
“Pass by value” still causes the object to be copied, unless the object is an rvalue. In the case of an rvalue, the object can be moved instead, so that the second case is suddenly no longer “copy, then move” but “move, then (potentially) move again”.
For large objects that implement proper move constructors (such as vectors, strings …), the second case is then vastly more efficient than the first. Therefore, it is recommended to use pass by value if the function takes ownership of the argument, and if the object type supports efficient moving.
A historical note:
In fact, any modern compiler should be able to figure out when passing by value is expensive, and implicitly convert the call to use a const ref if possible.
In theory. In practice, compilers can’t always change this without breaking the function’s binary interface. In some special cases (when the function is inlined) the copy will actually be elided if the compiler can figure out that the original object won’t be changed through the actions in the function.
But in general the compiler can’t determine this, and the advent of move semantics in C++ has made this optimisation much less relevant.
[1] E.g. in Scott Meyers, Effective C++.
[2] This is especially often true for object constructors, which may take arguments and store them internally to be part of the constructed object's state.
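A small sketch of that sink-argument guideline in C++11 (the Person class is illustrative): take the argument by value and move it into place, so lvalue callers pay one copy and rvalue callers pay only moves:
#include <string>
#include <utility>
class Person
{
public:
    // Sink argument: pass by value, then move into the member.
    explicit Person(std::string name) : name_(std::move(name)) {}
private:
    std::string name_;
};
int main()
{
    std::string n = "Ada";
    Person a(n);                  // lvalue: one copy, one move
    Person b(std::string("Bob")); // rvalue: moves only, no copy
}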
|
270,455
| 270,464
|
Is it possible to program iPhone in C++
|
I'm all for language diversity, but Objective C is insane. So I'm curious: is it possible to code iPhone apps with C++ while using the Cocoa API, etc?
|
Short answer, yes, sort of. You can use Objective-C++, which you can read about at Apple Developer Connection.
If you know C++ already, learning Objective-C would be pretty simple, if you decided to give that a try. More info on that topic is at the ADC as well.
|
270,488
| 270,515
|
Is assert(false) ignored in release mode?
|
I am using VC++. Is assert(false) ignored in release mode?
|
If compiling in release mode includes defining NDEBUG, then yes.
See assert (CRT)
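A quick illustration (the default VC++ Release configuration defines NDEBUG):
#include <cassert>
int main()
{
    // With NDEBUG defined, the assert macro expands to nothing,
    // so this line compiles away entirely in Release builds.
    assert(false);
    return 0;
}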
|
270,527
| 270,608
|
ASSERT vs. ATLASSERT vs. assert
|
I am refactoring some MFC code that is littered with ASSERT statements, and in preparation for a future Linux port I want to replace them with the standard assert. Are there any significant differences between the two implementations that people know of that could bite me on the backside?
Similarly, I have also come across some code that uses ATLASSERT which I would also like to replace.
|
No. The MFC version just includes an easy-to-debug break point.
|
270,542
| 270,704
|
Testing for assert in the Boost Test framework
|
I use the Boost Test framework to unit test my C++ code and wondered if it is possible to test if a function will assert? Yes, sounds a bit strange but bear with me! Many of my functions check the input parameters upon entry, asserting if they are invalid, and it would be useful to test for this. For example:
void MyFunction(int param)
{
assert(param > 0); // param cannot be less than 1
...
}
I would like to be able to do something like this:
BOOST_CHECK_ASSERT(MyFunction(0), true);
BOOST_CHECK_ASSERT(MyFunction(-1), true);
BOOST_CHECK_ASSERT(MyFunction(1), false);
...
You can check for exceptions being thrown using Boost Test so I wondered if there was some assert magic too...
|
I don't think so. You could always write your own assert which throws an exception, and then use BOOST_CHECK_THROW() / BOOST_CHECK_NO_THROW() on that exception.
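A sketch of that approach (MY_ASSERT and assert_failure are hypothetical project-local names; the BOOST_CHECK_THROW / BOOST_CHECK_NO_THROW macros are from Boost.Test):
#include <stdexcept>
// Project-local assert that throws instead of aborting, so tests can see it.
struct assert_failure : std::logic_error
{
    explicit assert_failure(const char* expr) : std::logic_error(expr) {}
};
#define MY_ASSERT(expr) \
    do { if (!(expr)) throw assert_failure(#expr); } while (0)
void MyFunction(int param)
{
    MY_ASSERT(param > 0); // param cannot be less than 1
}
// In the test suite:
// BOOST_CHECK_THROW(MyFunction(0), assert_failure);
// BOOST_CHECK_NO_THROW(MyFunction(1));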
|
270,592
| 270,599
|
If abstract base class is an interface, is it obligatory to call base class constructor in derived class constructor?
|
class AbstractQuery {
virtual bool isCanBeExecuted()=0;
public:
AbstractQuery() {}
virtual bool Execute()=0;
};
class DropTableQuery: public AbstractQuery {
vector< std::pair< string, string> > QueryContent;
QueryValidate qv;
public:
explicit DropTableQuery(const string& qr): AbstractQuery(), qv(qr) {}
bool Execute();
};
Is it necessary to call the base constructor in the derived class constructor?
|
No. In fact, it is unnecessary for the base class to have an explicitly defined constructor at all (though make sure you have a virtual destructor).
So for a typical interface you could have something like this:
class MyInterface {
public:
virtual ~MyInterface() {}
virtual void execute() = 0;
};
EDIT: Here's a reason why you should have a virtual destructor:
MyInterface* iface = GetMeSomeThingThatSupportsInterface();
delete iface; // this is undefined behaviour if MyInterface doesn't have a virtual destructor
|
270,623
| 270,731
|
How to build an ActiveX object in C++ that can be scripted using Javascript
|
I can use VS08's MFC/ActiveX template to create a C++ ActiveX object that I can load into an HTML page and script with Javascript. But I can't figure out how to create an interface that allows me to call custom methods on my component from Javascript.
Could you please tell me how to accomplish that? I have spent over two hours on Google with no luck.
Thanks.
|
I'm not extremely familiar with the MFC ActiveX wrapper, but I can answer the question in the general sense:
A COM object with an interface which is derived from IDispatch can be called through automation languages (eg: Javascript). The methods must also be "automation-compatible", which means that the parameters are convertible to the VARIANT type, or are explicitly of type VARIANT. Note that for in/out parameters, the type must be VARIANT* for the automation "hookup" to work.
I don't know how to make the ActiveX object accessible in the client script (eg: embed it into the page), but if it has a single interface derived from IDispatch, that makes it callable from Javascript (and other automation languages). I hope that helps...
|
270,917
| 270,925
|
Why should I declare a virtual destructor for an abstract class in C++?
|
I know it is a good practice to declare virtual destructors for base classes in C++, but is it always important to declare virtual destructors even for abstract classes that function as interfaces? Please provide some reasons and examples why.
|
It's even more important for an interface. Any user of your class will probably hold a pointer to the interface, not a pointer to the concrete implementation. When they come to delete it, if the destructor is non-virtual, they will call the interface's destructor (or the compiler-provided default, if you didn't specify one), not the derived class's destructor. Instant memory leak.
For example
class Interface
{
virtual void doSomething() = 0;
};
class Derived : public Interface
{
Derived();
~Derived()
{
// Do some important cleanup...
}
};
void myFunc(void)
{
Interface* p = new Derived();
// The behaviour of the next line is undefined. It probably
// calls Interface::~Interface, not Derived::~Derived
delete p;
}
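The fix is a one-liner in the interface, which makes deleting through the base pointer well-defined:
class Interface
{
public:
    virtual ~Interface() {}          // virtual: delete via base pointer is safe
    virtual void doSomething() = 0;
};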
|
270,947
| 270,957
|
Can any one provide me a sample of Singleton in c++?
|
I wrote a C++ singleton in the following way:
class A {
private:
static A* m_pA;
A();
virtual ~A();
public:
static A* GetInstance();
static void FreeInstance();
void WORK1();
void WORK2();
void WORK3();
};
A* A::GetInstance() {
if (m_pA == NULL)
m_pA = new A();
return m_pA;
}
A::~A() {
FreeInstance(); // Can I write this? Are there any potential errors?
}
void A::FreeInstance() {
delete m_pA;
m_pA = NULL;
}
Thanks! Evan Teran's and sep61.myopenid.com's answers are right, and really good!
My way was wrong; I hope anyone writing such code can avoid my silly mistake.
The singleton A in my project has a vector of smart pointers, and another thread can also edit this vector, so when the application was closing it always became unstable, even though I added lots of CMutex locks. The multithreading error plus the singleton error wasted a whole day.
//-----------------------------------------------------------
A new singleton; you are welcome to edit it if you think there is any problem with the following sample:
class A {
private:
static A* m_pA;
explicit A();
A(const A& a);
A(A& a);
const A& operator=(const A& a);
virtual ~A();
public:
static A* GetInstance();
static void FreeInstance();
void WORK1();
void WORK2();
void WORK3();
};
A* A::GetInstance() {
if (m_pA == NULL){
static A self;
m_pA = &self;
}
return m_pA;
}
A::~A() {
}
|
You can avoid needing to delete it by using a static object like this:
if(m_pA == 0) {
static A static_instance;
m_pA = &static_instance;
}
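For reference, the simplest variant of this (often called the Meyers singleton) avoids the raw pointer entirely; a sketch, with copying disabled the C++03 way:
class A
{
public:
    static A& GetInstance()
    {
        static A instance;   // constructed on first use, destroyed at exit
        return instance;
    }
private:
    A() {}
    A(const A&);             // not copyable (declared, never defined)
    A& operator=(const A&);  // not assignable
};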
|
270,984
| 270,995
|
g++ partial linking instead of archives?
|
I'm pretty new to the C++ build flow, and I'm thinking of switching to use partial linking for my libraries instead of creating ar archives. I'm hoping to reduce link time in an inevitable final compilation step that I have, and I figure partial linking some libraries once could save me time over linking everything in that final step.
Is what I'm describing possible? I figure it should be something along the lines of ld -Ur -o mylib.o [components]. Are there important build considerations that I'm not taking into account?
|
You lose an important effect of having the object files in an ar archive, which is that only the referenced objects will be linked in.
If you have both foo.o with the symbol foo and bar.o with the symbol bar in an ar archive, and only reference the foo symbol, only foo.o would be linked in. If you instead do a partial link, the contents of both will end up in the executable, even if bar is never referenced anywhere.
You could also try a faster linker, like gold.
|
271,042
| 271,091
|
invoking functions while debugging with Visual Studio 2005?
|
Here's something I know is probably possible, but I've never managed to do:
In VS2005 (C++), while debugging, invoke a function from the code I'm debugging.
This feature is sometimes essential when debugging complex data structures which can't be explored easily using just the normal capabilities of the watch window.
The watch window seems to allow writing function calls, but every time I try it, it gives me one error or another.
Error: symbol "func" not found
Error: argument list does not match function
Error: member function not present
Did anyone ever succeed in making this work properly?
What am I missing here?
Edit: clearly, the function called should be a symbol that exists in the current scope the debugger is in.
|
OK, here's what I found:
CXX0040 means that "The C expression evaluator does not support implicit conversions involving constructor calls."
CXX0047 means that "Overloaded functions can be called only if there is an exact parameter match or a match that does not require the construction of an object."
So combined, it means that if I want to call a function, none of the arguments should require an implicit conversion and none of the arguments should need construction.
"implicit conversion" in this context seem to include trivial things like converting 'String' to 'const String&'.
"construction" seem to include trivial copy-construction. so passing by value anything that is not a primitive type will result in an error.
So this basically leaves functions that take only primitive types or pointers.
I have just tested this theory successfully.
So if you want to be able to call a method from the watch window, add an overload which takes only pointers and primitives and in the watch window pass the arguments appropriately. To pass an object that is not a primitive pass its address.
|
271,076
| 271,087
|
What is the difference between an int and a long in C++?
|
Correct me if I am wrong,
int is 4 bytes, with a range of values from -2,147,483,648 to 2,147,483,647 (2^31)
long is 4 bytes, with a range of values from -2,147,483,648 to 2,147,483,647 (2^31)
What is the difference in C++? Can they be used interchangeably?
|
It is implementation dependent.
For example, under Windows they are the same, but for example on Alpha systems a long was 64 bits whereas an int was 32 bits. This article covers the rules for the Intel C++ compiler on variable platforms. To summarize:
OS arch size
Windows IA-32 4 bytes
Windows Intel 64 4 bytes
Windows IA-64 4 bytes
Linux IA-32 4 bytes
Linux Intel 64 8 bytes
Linux IA-64 8 bytes
Mac OS X IA-32 4 bytes
Mac OS X Intel 64 8 bytes
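You can check what your own compiler does with a two-liner; the standard itself only guarantees minimum ranges (int at least 16 bits, long at least 32 bits, and long able to hold every int):
#include <iostream>
int main()
{
    // Both sizes are implementation-defined.
    std::cout << "int:  " << sizeof(int)  << " bytes\n"
              << "long: " << sizeof(long) << " bytes\n";
}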
|
271,204
| 271,445
|
What do you think is making this C++ code slow? (It loops through an ADODB recordset, converts COM types to strings, and fills an ostringstream)
|
This loop is slower than I would expect, and I'm not sure where yet. See anything?
I'm reading an Access DB, using client-side cursors. When I have 127,000 rows with 20 columns, this loop takes about 10 seconds. The 20 columns are string, int, and date types. All the types get converted to ANSI strings before they are put into the ostringstream buffer.
void LoadRecordsetIntoStream(_RecordsetPtr& pRs, ostringstream& ostrm)
{
ADODB::FieldsPtr pFields = pRs->Fields;
char buf[80];
::SYSTEMTIME sysTime;
_variant_t var;
while(!pRs->EndOfFile) // loop through rows
{
for (long i = 0L; i < nColumns; i++) // loop through columns
{
var = pFields->GetItem(i)->GetValue();
if (V_VT(&var) == VT_BSTR)
{
ostrm << (const char*) (_bstr_t) var;
}
else if (V_VT(&var) == VT_I4
|| V_VT(&var) == VT_UI1
|| V_VT(&var) == VT_I2
|| V_VT(&var) == VT_BOOL)
{
ostrm << itoa(((int)var),buf,10);
}
else if (V_VT(&var) == VT_DATE)
{
::VariantTimeToSystemTime(var,&sysTime);
_stprintf(buf, _T("%4d-%02d-%02d %02d:%02d:%02d"),
sysTime.wYear, sysTime.wMonth, sysTime.wDay,
sysTime.wHour, sysTime.wMinute, sysTime.wSecond);
ostrm << buf;
}
}
pRs->MoveNext();
}
}
EDIT: After more experimentation...
I know now that about half the time is used by this line:
var = pFields->GetItem(i)->GetValue();
If I bypass the Microsoft generated COM wrappers, will my code be faster? My guess is no.
The other half of the time is spent in the statements which convert the data and stream it into the ostringstream.
I don't know right now as I write this whether it's the conversions or the streaming that is taking more time.
Would it be faster if I didn't use ostringstream and instead managed my own buffer, with my own logic to grow the buffer (re-alloc, copy, delete)? Would it be faster if my logic made a pessimistic guesstimate and reserved a lot of space for the ostringstream buffer up front? These might be experiments worth trying.
Finally, the conversions themselves. None of the three stand out in my timings as being bad. One answer says that my itoa might be slower than an alternative. Worth checking out.
|
Try commenting out the code in the for loop and comparing the time. Once you have a reading, start uncommenting various sections until you hit the bottle-neck.
|
271,612
| 271,654
|
Parse a file using C++, load the value to a structure
|
I have the following file/line:
pc=1 ct=1 av=112 cv=1100 cp=1700 rec=2 p=10001 g=0 a=0 sz=5 cr=200
pc=1 ct=1 av=113 cv=1110 cp=1800 rec=2 p=10001 g=0 a=10 sz=5 cr=200
and so on.
I wish to parse this and take the key value pairs and put them in a structure:
struct pky
{
// members inferred from the initializer list below
int a_id, sz_id, cr_id, cp_id, cv_id, ct_id, fr;
char g;
int a, pc, p_id;
pky() :
a_id(0),
sz_id(0),
cr_id(0),
cp_id(0),
cv_id(0),
ct_id(0),
fr(0),
g('U'),
a(0),
pc(0),
p_id(0)
{ }
};
wherein either all the structure fields are used, or some might be omitted.
How do I create a C++ class which will do this? I am new to C++ and not aware of any functions or libraries which would do this work.
Each line is to be processed, and the structure will be populated from one line at a time and used, before it is flushed. The structure is later used as a parameter to a function.
|
You can do something like this:
std::string line;
std::map<std::string, std::string> props;
std::ifstream file("foo.txt");
while(std::getline(file, line)) {
std::string token;
std::istringstream tokens(line);
while(tokens >> token) {
std::size_t pos = token.find('=');
if(pos != std::string::npos) {
props[token.substr(0, pos)] = token.substr(pos + 1);
}
}
/* work with those keys/values by doing properties["name"] */
Line l(props["pc"], props["ct"], ...);
/* clear the map for the next line */
props.clear();
}
I hope it's helpful. Line can be like this:
struct Line {
std::string pc, ct;
Line(std::string const& pc, std::string const& ct):pc(pc), ct(ct) {
}
};
Now, that works only if the delimiter is a space. You can make it work with other delimiters too. Change
while(tokens >> token) {
into, for example, the following if you want a semicolon:
while(std::getline(tokens, token, ';')) {
Actually, it looks like you have only integers as values, and whitespace as delimiters. You might want to change
std::string token;
std::istringstream tokens(line);
while(tokens >> token) {
std::size_t pos = token.find('=');
if(pos != std::string::npos) {
props[token.substr(0, pos)] = token.substr(pos + 1);
}
}
into this then:
int value;
std::string key;
std::istringstream tokens(line);
while(tokens >> std::ws && std::getline(tokens, key, '=') &&
tokens >> std::ws >> value) {
props[key] = value;
}
std::ws just eats whitespace. You should then change the type of props to
std::map<std::string, int> props;
and make Line accept int instead of std::string. I hope this is not too much information at once.
|
271,833
| 272,337
|
Is it possible to start a custom thread in an IIS hosted C++ application?
|
We host a C++ based WebServices application in IIS and we're finding that when we try to start our own C++ threads IIS has a fit and crashes. The threads are based on boost.thread which clearly dribbles down to the standard Windows threading API underneath.
The reason I need to start the thread is to listen for multicasts from our middle-tier server to keep local caches up to date. Short of writing another process to listen for us, I'm at a loss what else I can do.
So the question is, should this work? Are there inherent restrictions about doing this kind of thing with IIS?
|
It sounds like you're creating a persistent thread, which lives longer than the lifetime of the request that initiates it. You don't mention whether it's ASP.NET C++/CLI, Managed C++ or an ISAPI extension or filter, or even CGI.
Conceptually, code that is called by IIS is only supposed to "live" for the lifetime of the request. Code that runs for longer will be at the mercy of IIS' recycling of application pools.
Your best bet is to have another process that does the listening for notifications, and maintain your cache in that process. You can then use shared memory (see Boost.Interprocess) to access that cache from your Web service.
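A minimal Boost.Interprocess sketch of the shared-cache idea, from the listener process's side (the segment name and size are illustrative; the web service process would open the same name read-only):
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <cstring>
namespace bip = boost::interprocess;
int main()
{
    // Listener process: create a small shared segment and publish data.
    bip::shared_memory_object shm(bip::open_or_create, "my_cache",
                                  bip::read_write);
    shm.truncate(4096);                              // segment size in bytes
    bip::mapped_region region(shm, bip::read_write); // map it into this process
    std::memcpy(region.get_address(), "cached-value", 13);
    return 0;
}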
|
271,845
| 271,942
|
MFC: Changing font of a List control
|
I need to change the font of a List control at run time so as to use a fixed-width font. I have seen some examples which suggest I should trap the NM_CUSTOMDRAW message, but I was wondering if there is a better way of doing it.
Thanks.
|
Create an appropriate CFont object, and set the control's font by calling SetFont(), passing in the CFont, like so:
m_font.CreatePointFont(90,"Courier New");
m_listCtrl.SetFont(&m_font);
This assumes that you've got a window or dialog object with a "CFont m_font" member, and an "m_listCtrl" member attached to the list control.
|
271,939
| 271,958
|
Singleton getInstance in thread worker methods
|
This question is about using getter methods of a singleton object in worker threads. Here is some pseudo code first:
// Singleton class which contains data
class MyData
{
static MyData* sMyData ;
int mData1[1024];
int mData2[1024];
int mData3[1024];
MyData* getInstance()
{
// sMyData is created in the very beginning.
return sMyData ;
}
void getValues(int idx, int& data1,int& data2,int& data3)
{
data1 = mData1[idx];
data2 = mData2[idx];
data3 = mData3[idx];
}
int* getData1()
{
return &mData1[0];
}
}
class MyThread
{
void workerMethod()
{
MyData* md = MyData::getInstance();
int d1,d2,d3;
md->getValues( 12, d1,d2,d3 );
int* data1 = md->getData1();
d1 = data1[34];
}
}
Now, as you see, I have some getter methods (all read-only): MyData::getInstance(), MyData::getValues() and MyData::getData1(). The first question is: how thread-safe are these methods?
Since they are frequently called methods, protecting them with a mutex is something I am trying to avoid.
The second question is: what is the suggested way of reading data from central sources in a multi-threaded application, especially in worker methods?
Thanks !
Paul
|
Provided that no other thread will try to write to the data in your singleton object, you don't need to protect them: by definition, multiple readers in the absence of a writer is thread-safe. This is a common pattern where the program's initialization code sets up a singleton, which is then only read from by worker threads.
However, if any thread ever writes to this data while others are reading from it, you must protect it in some way. If you have lots of readers and only the occasional writer, it is worth considering some sort of "read-write" lock, which allows multiple readers in the absence of any writers.
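If a writer does eventually appear, here is a sketch of the read-write-lock idea using boost::shared_mutex from Boost.Thread (names follow the question's class; setValue is a hypothetical writer):
#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>
class MyData
{
public:
    void getValues(int idx, int& d1, int& d2, int& d3)
    {
        boost::shared_lock<boost::shared_mutex> read(mLock); // many readers OK
        d1 = mData1[idx]; d2 = mData2[idx]; d3 = mData3[idx];
    }
    void setValue(int idx, int d1)
    {
        boost::unique_lock<boost::shared_mutex> write(mLock); // exclusive
        mData1[idx] = d1;
    }
private:
    boost::shared_mutex mLock;
    int mData1[1024], mData2[1024], mData3[1024];
};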
|
271,971
| 272,024
|
How can I improve/replace sprintf, which I've measured to be a performance hotspot?
|
Through profiling I've discovered that the sprintf here takes a long time. Is there a better performing alternative that still handles the leading zeros in the y/m/d h/m/s fields?
SYSTEMTIME sysTime;
GetLocalTime( &sysTime );
char buf[80];
for (int i = 0; i < 100000; i++)
{
sprintf(buf, "%4d-%02d-%02d %02d:%02d:%02d",
sysTime.wYear, sysTime.wMonth, sysTime.wDay,
sysTime.wHour, sysTime.wMinute, sysTime.wSecond);
}
Note: The OP explains in the comments that this is a stripped-down example. The "real" loop contains additional code that uses varying time values from a database. Profiling has pinpointed sprintf() as the offender.
|
If you were writing your own function to do the job, a lookup table of the string values of 0 .. 61 would avoid having to do any arithmetic for everything apart from the year.
edit: Note that to cope with leap seconds (and to match strftime()) you should be able to print seconds values of 60 and 61.
static const char LeadingZeroIntegerValues[62][3] = { "00", "01", "02", ... "59", "60", "61" };
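A sketch of the table-driven formatter built on that lookup table (assuming the table above is fully written out; only the year needs arithmetic, and buf must hold at least 20 chars):
#include <cstring>
void FormatSysTime(char* buf, int year, int month, int day,
                   int hour, int minute, int second)
{
    // Year: four digits, computed the slow way once per call.
    buf[0] = '0' + (year / 1000) % 10;
    buf[1] = '0' + (year / 100) % 10;
    buf[2] = '0' + (year / 10) % 10;
    buf[3] = '0' + year % 10;
    buf[4] = '-';
    // Everything else: two bytes straight from the lookup table.
    std::memcpy(buf + 5,  LeadingZeroIntegerValues[month],  2);
    buf[7] = '-';
    std::memcpy(buf + 8,  LeadingZeroIntegerValues[day],    2);
    buf[10] = ' ';
    std::memcpy(buf + 11, LeadingZeroIntegerValues[hour],   2);
    buf[13] = ':';
    std::memcpy(buf + 14, LeadingZeroIntegerValues[minute], 2);
    buf[16] = ':';
    std::memcpy(buf + 17, LeadingZeroIntegerValues[second], 2);
    buf[19] = '\0';
}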
Alternatively, how about strftime()? I've no idea how the performance compares (it could well just be calling sprintf()), but it's worth looking at (and it could be doing the above lookup itself).
|
272,036
| 272,070
|
Alternative to GetProcessID for Windows 2000
|
I've accidentally removed Win2K compatibility from an application by using GetProcessID.
I use it like this, to get the main HWND for the launched application.
ShellExecuteEx(&info); // Launch application
HANDLE han = info.hProcess; // Get process
cbinfo.han = han;
//Call EnumWindows to enumerate windows....
//with this as the callback
static BOOL CALLBACK enumproc(HWND hwnd, LPARAM lParam)
{
DWORD id;
GetWindowThreadProcessId(hwnd, &id);
if (id == GetProcessId(cbinfo.han))
setResult(hwnd)
...
}
Any ideas how the same function could be achieved on Win2K?
|
There is a 'sort-of-unsupported' function, ZwQueryInformationProcess(); see
http://msdn.microsoft.com/en-us/library/ms687420.aspx
This will give you the process id (amongst other things), given the handle. This isn't guaranteed to work with future Windows versions, so I'd suggest having a helper function that tests the OS version and then uses GetProcAddress() to call either GetProcessId() for XP and above, and ZwQueryInformationProcess() for Win2K only.
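A minimal sketch of such a helper (my illustration; the Win2K fallback branch is only indicated, not implemented):
#include <windows.h>
typedef DWORD (WINAPI *GetProcessIdFn)(HANDLE);
DWORD processIdFromHandle(HANDLE process)
{
    // Resolve GetProcessId at run time so the binary still loads on Win2K,
    // where kernel32.dll lacks the export.
    static GetProcessIdFn fn = (GetProcessIdFn)GetProcAddress(
        GetModuleHandleA("kernel32.dll"), "GetProcessId");
    if (fn)
        return fn(process);  // XP and later
    // On Win2K, fall back to ZwQueryInformationProcess(process,
    // ProcessBasicInformation, ...) resolved from ntdll.dll here.
    return 0;
}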
|
272,161
| 272,240
|
Why can't I use static members, for example static structures, in my classes in VS2008?
|
When I write code like this in VS 2008:
.h
struct Patterns {
string ptCreate;
string ptDelete;
string ptDrop;
string ptUpdate;
string ptInsert;
string ptSelect;
};
class QueryValidate {
string query;
string pattern;
static Patterns pts;
public:
friend class Query;
QueryValidate(const string& qr, const string& ptn):
query(qr), pattern(ptn) {}
bool validate() {
boost::regex rg(pattern);
return boost::regex_match(query, rg);
}
virtual ~QueryValidate() {}
};
I then initialize my structure like this:
.cpp
string QueryValidate::pts::ptCreate = "something";
string QueryValidate::pts::ptDelete = "something";
//...
The compiler gives the following errors:
'Patterns' : the symbol to the left of a '::' must be a type
'ptSelect' : is not a member of 'QueryValidate'
What am I doing wrong? Is this a problem with Visual Studio or with my code? I know that static members except for const ones must be defined outside the class they were declared in.
|
You're trying to define a non-static member (ptCreate) of a static member (pts) on its own; that won't work.
You have two options. Either use an aggregate initializer list for the whole Patterns member:
Patterns QueryValidate::pts = {"CREATE", "DELETE"}; // etc. for every string
Or, much safer (and better in my opinion), provide a constructor in Patterns and define the member using it:
struct Patterns {
    Patterns() { /*...*/ }
    /* ... */
};
Patterns QueryValidate::pts; // the static member still needs exactly one definition
On a side note, your code wouldn't work in any C++ compiler; it's not a Visual Studio issue.
|
272,414
| 272,485
|
Boost Graph Library: Is there a neat algorithm built into BGL for community detection?
|
Anybody out there using BGL for large production servers?
How many node does your network consist of?
How do you handle community detection?
Does BGL have any cool ways to detect communities?
Sometimes two communities might be linked together by one or two edges, but these edges are not reliable and can fade away. Sometimes there are no edges at all.
Could someone speak briefly on how to solve this problem.
Please open my mind and inspire me.
So far I have managed to work out whether two nodes are on an island (in a community)
in the least expensive manner, but now I need to work out which two nodes on separate islands are closest to each other. We can only make minimal use of unreliable geographical data.
If we compare it figuratively to a mainland and an island, taking it out of the social-distance context: I want to work out which two bits of land are closest together across a body of water.
|
I've used the BGL for graphs with millions of nodes, but the size of the graph you can use depends on what algorithm you are trying to run. You can quickly compute distances between nodes. There are 4 shortest path algorithms which are most applicable depending on your data: (single pairs of points, for all pairs of points, sparse and dense graphs,...).
As for community detection, there aren't any algorithms built-into the BGL specifically for that (but maybe you can contribute one when you are finished with your project). There are a few algorithms that might be helpful in building a community detection algorithm. The max-flow/min-cut algorithms are typically used in community detection (if there is a lot of flow possible between two nodes, then they are likely to be in the same community, if there isn't much flow, then the min-cut is likely to represent roads between communities). There are also heuristics to order the nodes of the graph to reduce bandwidth. Nodes making up "communities" are likely to be close to each other in such an ordering.
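For a flavour of the BGL API, here is a minimal sketch of one of those shortest path algorithms (Dijkstra) on a small weighted graph - my illustration, not part of the original answer:
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/dijkstra_shortest_paths.hpp>
#include <vector>
typedef boost::adjacency_list<boost::vecS, boost::vecS, boost::undirectedS,
    boost::no_property, boost::property<boost::edge_weight_t, int> > Graph;
int main()
{
    Graph g(4);
    boost::add_edge(0, 1, 1, g);  // edge 0-1 with weight 1
    boost::add_edge(1, 2, 2, g);
    boost::add_edge(0, 3, 7, g);
    boost::add_edge(2, 3, 1, g);
    std::vector<int> dist(boost::num_vertices(g));
    boost::dijkstra_shortest_paths(g, 0,
        boost::distance_map(&dist[0]));  // dist[v] = shortest distance from vertex 0
    return 0;
}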
|
272,479
| 284,499
|
How to get equivalent of printf_l on Linux?
|
This function exists on OS X and allows you to pass a custom locale to the function. setlocale is not thread-safe; passing the locale as a parameter is.
If there is no equivalent, any way of doing a locale-independent printf, or a printf just for doubles (%g), will be OK.
|
There are locale-independent double-to-string conversion routines at http://www.netlib.org/fp/. String-to-double conversion is available too. The API is not very nice, but the code works.
|
272,523
| 274,370
|
Socket Exception: "There are no more endpoints available from the endpoint mapper"
|
I am using Winsock and C++ to set up a server application. The problem I'm having is that the call to listen results in a first-chance exception. I guess normally these can be ignored(?), but I've found others having the same issue I am, where it causes the application to hang every once in a while. Any help would be greatly appreciated.
The first chance exception is:
First-chance exception at 0x*12345678* in MyApp.exe: 0x000006D9: There are no more endpoints available from the endpoint mapper.
I've found some evidence that this could be caused by the socket setup. The code I'm working with is as follows; the exception occurs on the call to listen, five lines from the bottom.
m_accept_fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
if (m_accept_fd == INVALID_SOCKET)
{
return false;
}
int optval = 1;
if (setsockopt (m_accept_fd, SOL_SOCKET, SO_REUSEADDR,
(char*)&optval, sizeof(optval)))
{
closesocket(m_accept_fd);
m_accept_fd = INVALID_SOCKET;
return false;
}
struct sockaddr_in local_addr;
local_addr.sin_family = AF_INET;
local_addr.sin_addr.s_addr = INADDR_ANY;
local_addr.sin_port = htons(m_port);
if (bind(m_accept_fd, (struct sockaddr *)&local_addr,
sizeof(struct sockaddr_in)) == SOCKET_ERROR)
{
closesocket(m_accept_fd);
return false;
}
if (listen (m_accept_fd, 5) == SOCKET_ERROR)
{
closesocket(m_accept_fd);
return false;
}
|
On a very busy server, you may be running out of Sockets. You may have to adjust some TCPIP parameters. Adjust these two in the registry:
HKLM\System\CurrentControlSet\Services\Tcpip\Parameters
MaxUserPort REG_DWORD 65534 (decimal)
TcpTimedWaitDelay REG_DWORD 60 (decimal)
By default, there's a few minutes delay between releasing a network port (socket) and when it can be reused. Also, depending on the OS version, there's only a few thousand in the range that windows will use. On the server, run this at a command prompt:
netstat -an
and look at the results (piping to a file is easiest: netstat -an > netstat.txt). If you see a large number of ports from 1025->5000 in the TIME_WAIT state, then this is your problem, and it's solved by adjusting the max user port up from 5000 to 65534 using the registry entry above. You can also adjust the delay by using the registry entry above to recycle the ports more quickly.
If this is not the problem, then the problem is likely the backlog of pending connections that you pass to listen().
|
272,750
| 272,784
|
Kill a blocked Boost::Thread
|
I am writing an application which blocks on input from two istreams.
Reading from either istream is a synchronous (blocking) call, so, I decided to create two Boost::threads to do the reading.
Either one of these threads can get to the "end" (based on some input received), and once the "end" is reached, both input streams stop receiving. Unfortunately, I cannot know which will do so.
Thus, I cannot join() on both threads, because only one thread (cannot be predetermined which one) will actually return (unblock).
I must somehow force the other to exit, but it is blocked waiting for input, so it cannot itself decide it is time to return (condition variables or what not).
Is there a way to either:
Send a signal a boost::thread, or
Force an istream to "fail", or
Kill a Boost::thread?
Note:
One of the istreams is cin
I am trying to restart the process, so I cannot close the input streams in a way that prohibits resetting them.
Edit:
I do know when the "end" is reached, and I do know which thread has successfully finished and which needs to be killed. It's the killing I need to figure out (or a different strategy for reading from an istream).
I need both threads to exit and cleanup properly :(
Thanks!
|
I don't think there is a way to do it cross-platform, but pthread_cancel should be what you are looking for. With a Boost thread you can get the native_handle from the thread and call pthread_cancel on it.
A better way, though, might be to use the Boost.Asio equivalent of a select call on multiple files. That way a single thread blocks waiting for input, but the input could come from either stream. I don't know how easy it is to do something like this with iostreams, though.
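A rough sketch of the pthread_cancel approach (POSIX-only, and assuming the blocking read sits in a cancellation point such as read(2) - my illustration, not from the original answer):
#include <boost/thread.hpp>
#include <pthread.h>   // POSIX only
#include <iostream>
#include <string>
void reader()
{
    std::string line;
    std::getline(std::cin, line);  // blocks until input or EOF
}
int main()
{
    boost::thread t(reader);
    // ... the other stream finished; time to stop the blocked thread:
    pthread_cancel(t.native_handle());
    t.join();
}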
|
272,900
| 272,965
|
Undefined reference to static class member
|
Can anyone explain why following code won't compile? At least on g++ 4.2.4.
And more interesting, why it will compile when I cast MEMBER to int?
#include <vector>
class Foo {
public:
static const int MEMBER = 1;
};
int main(){
std::vector<int> v;
v.push_back( Foo::MEMBER ); // undefined reference to `Foo::MEMBER'
v.push_back( (int) Foo::MEMBER ); // OK
return 0;
}
|
You need to actually define the static member somewhere (after the class definition). Try this:
class Foo { /* ... */ };
const int Foo::MEMBER;
int main() { /* ... */ }
That should get rid of the undefined reference.
|
272,964
| 273,032
|
Will the c++ compiler optimize away unused return value?
|
If I have a function that returns an object, but this return value is never used by the caller, will the compiler optimize away the copy? (Possibly an always/sometimes/never answer.)
Elementary example:
ReturnValue MyClass::FunctionThatAltersMembersAndNeverFails()
{
//Do stuff to members of MyClass that never fails
return successfulResultObject;
}
void MyClass::DoWork()
{
// Do some stuff
FunctionThatAltersMembersAndNeverFails();
// Do more stuff
}
In this case, will the ReturnValue object get copied at all? Does it even get constructed? (I know it probably depends on the compiler, but let's narrow this discussion down to the popular modern ones.)
EDIT: Let's simplify this a bit, since there doesn't seem to be a consensus in the general case. What if ReturnValue is an int, and we return 0 instead of successfulResultObject?
|
If the ReturnValue class has a non-trivial copy constructor, the compiler must not eliminate the call to the copy constructor - it is mandated by the language that it is invoked.
If the copy constructor is inline, the compiler might be able to inline the call, which in turn might cause an elimination of much of its code (also depending on whether FunctionThatAltersMembersAndNeverFails is inline).
|
273,209
| 273,287
|
Are memory leaks ever ok?
|
Is it ever acceptable to have a memory leak in your C or C++ application?
What if you allocate some memory and use it until the very last line of code in your application (for example, a global object's destructor)? As long as the memory consumption doesn't grow over time, is it OK to trust the OS to free your memory for you when your application terminates (on Windows, Mac, and Linux)? Would you even consider this a real memory leak if the memory was being used continuously until it was freed by the OS?
What if a third party library forced this situation on you? Would you refuse to use that third party library no matter how great it otherwise might be?
I only see one practical disadvantage, and that is that these benign leaks will show up with memory leak detection tools as false positives.
|
No.
As professionals, the question we should not be asking ourselves is, "Is it ever OK to do this?" but rather "Is there ever a good reason to do this?" And "hunting down that memory leak is a pain" isn't a good reason.
I like to keep things simple. And the simple rule is that my program should have no memory leaks.
That makes my life simple, too. If I detect a memory leak, I eliminate it, rather than run through some elaborate decision tree structure to determine whether it's an "acceptable" memory leak.
It's similar to compiler warnings – will the warning be fatal to my particular application? Maybe not.
But it's ultimately a matter of professional discipline. Tolerating compiler warnings and tolerating memory leaks is a bad habit that will ultimately bite me in the rear.
To take things to an extreme, would it ever be acceptable for a surgeon to leave some piece of operating equipment inside a patient?
Although it is possible that a circumstance could arise where the cost/risk of removing that piece of equipment exceeds the cost/risk of leaving it in, and there could be circumstances where it was harmless, if I saw this question posted on SurgeonOverflow.com and saw any answer other than "no," it would seriously undermine my confidence in the medical profession.
–
If a third party library forced this situation on me, it would lead me to seriously suspect the overall quality of the library in question. It would be as if I test drove a car and found a couple loose washers and nuts in one of the cupholders – it may not be a big deal in itself, but it portrays a lack of commitment to quality, so I would consider alternatives.
|
273,451
| 273,462
|
Which platform should I use : native C++ or C#?
|
I want to develop a Windows application. If I use native C++ and MFC for the user interface then the application will be very fast and tiny, but using MFC is very complicated. If I use C# then the application will be slower than native code and requires the .NET Framework to run, but developing the GUI is very easy using WinForms. Which one do you prefer?
|
"fast" and "slow" are subjective, especially with today's PC's. I'm not saying deliberately make the thing slow, but there isn't nearly as much overhead in writing a managed application as you might think. The JIT etc work very well to make the code execute very fast. And you can also NGEN for extra start-up speed if you really need.
Actually, if you have time to learn it, you might want to consider WPF rather than winform - this is a different skill-set, but allows you to make very good use of graphics hardware etc.
Also - .NET framework comes with new OS installs, and is still very common on those that pre-date it. So for me it would be a fairly clear choice to develop with C#/.NET. The time to develop a robust and fully tested C++ app (with no leaks, etc) is (for me at least) much greater than the same with C#.
|
273,630
| 273,633
|
If I use explicit constructor, do I need to put the keyword in both .h and .cpp files?
|
Actually my question is all in the title.
Anyway:
I have a class and I use explicit constructor:
.h
class MyClass
{
public:
explicit MyClass(const string& s): query(s) {}
private:
string query;
};
Is it obligatory or not to put explicit keyword in implementation(.cpp) file?
|
No, it is not. The explicit keyword is only permitted on the constructor declaration inside the class definition. My gcc says:
test.cpp:6: error: only declarations of constructors can be 'explicit'
for the following code:
class foo {
public:
explicit foo(int);
};
explicit foo::foo(int) {}
|
273,720
| 273,748
|
Singleton Destructors
|
Should Singleton objects that don't use instance/reference counters be considered memory leaks in C++?
Without a counter that calls for explicit deletion of the singleton instance when the count is zero, how does the object get deleted? Is it cleaned up by the OS when the application is terminated? What if that Singleton had allocated memory on the heap?
In a nutshell, do I have to call a Singleton's destructor, or can I rely on it getting cleaned up when the application terminates?
|
You can rely on it being cleaned up by the operating system.
That said, if you are in a garbage-collected language with finalizers rather than destructors, you may want to have a graceful shutdown procedure that can cleanly shut down your singletons directly, so they can free any critical resources in case they are using system resources that won't be correctly cleaned up by merely ending the application.
This is because finalizers run on a sort of 'best effort' basis in most languages. On the other hand, there are very few resources that need this sort of reliability; file handles, memory, etc. all go back to the OS cleanly regardless.
If you are using a singleton that is lazily allocated (i.e. with a double-checked locking idiom) in a language like C++ with real destructors rather than finalizers, then you cannot rely on its destructor being invoked during program shutdown. If you are using a single static instance, then the destructor will run at some point after main completes.
Regardless, when the process ends, all memory returns to the operating system.
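For completeness, a function-local static (the "Meyers singleton") is one way to get a destructor that runs automatically at program exit - a minimal sketch, not from the original answer (note the first call is not thread-safe before C++11):
class Config
{
public:
    static Config& instance()
    {
        static Config inst;  // constructed on first use,
        return inst;         // destroyed automatically after main() returns
    }
private:
    Config() {}
    ~Config() {}
};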
|
273,836
| 288,306
|
Why Build Fails with CruiseControl.NET but it builds fine manually with same settings?
|
I have a project that builds fine If I build it manually but it fails with CC.NET.
The error that shows up on CC.NET is basically related to an import that's failing because a file was not found; one of the projects (a C++ DLL) tries to import a DLL built by another project. The DLL should be in the right place since there's a dependency between the projects - indeed, when I build manually everything works fine (note that when I say manually I am getting everything fresh from the source code repository and then invoking a Rebuild from VS2005 to simulate CC.NET automation).
It looks like dependencies are ignored when the build is automated through CC.NET.
I am building in Release MinDependency mode.
Any help would be highly appreciated!
|
Can you change CC to use msbuild instead of devenv? That seems like the optimal solution to me, as it means the build is the same in both situations.
|
273,869
| 273,945
|
Performance implications of &p[0] vs. p.get() in boost::scoped_array
|
The topic generically says it all. Basically in a situation like this:
boost::scoped_array<int> p(new int[10]);
Is there any appreciable difference in performance between doing: &p[0] and p.get()?
I ask because I prefer the first one, it has a more natural pointer like syntax. In fact, it makes it so you could replace p with a native pointer or array and not have to change anything else.
I am guessing that since get is a one-liner "return ptr;" the compiler will inline it, and I hope that it is smart enough to inline operator[] in such a way that it does not dereference and then immediately take the address again.
Anyone know?
|
OK, I've done some basic tests as per Martin York's suggestions.
It seems that g++ (4.3.2) is actually pretty good about this. At both -O2 and -O3 optimization levels, it outputs slightly different but functionally equivalent assembly for both &p[0] and p.get().
At -Os, as expected, it takes the path of least complexity and emits a call to operator[]. One thing to note is that the &p[0] version does cause g++ to emit a copy of the operator[] body, but it is never used, so there is slight code bloat if you never use operator[] otherwise.
The tested code was this (with the #if both 0 and 1):
#include <boost/scoped_array.hpp>
#include <cstdio>
int main() {
boost::scoped_array<int> p(new int[10]);
#if 1
printf("%p\n", &p[0]);
#else
printf("%p\n", p.get());
#endif
}
|
273,908
| 8,362,045
|
c++ integer->std::string conversion. Simple function?
|
Problem: I have an integer; this integer needs to be converted to a stl::string type.
In the past, I've used stringstream to do a conversion, and that's just kind of cumbersome. I know the C way is to do a sprintf, but I'd much rather do a C++ method that is typesafe(er).
Is there a better way to do this?
Here is the stringstream approach I have used in the past:
std::string intToString(int i)
{
std::stringstream ss;
std::string s;
ss << i;
s = ss.str();
return s;
}
Of course, this could be rewritten as so:
template<class T>
std::string t_to_string(T i)
{
std::stringstream ss;
std::string s;
ss << i;
s = ss.str();
return s;
}
However, I have the notion that this is a fairly 'heavy-weight' implementation.
Zan noted that the invocation is pretty nice, however:
std::string s = t_to_string(my_integer);
At any rate, a nicer way would be... nice.
Related:
Alternative to itoa() for converting integer to string C++?
|
Now in C++11 we have:
#include <string>
std::string s = std::to_string(123);
Link to reference: http://en.cppreference.com/w/cpp/string/basic_string/to_string
|
274,066
| 830,877
|
Search entire project for includes in Eclipse CDT
|
I have a large existing C++ codebase. Typically the users of the codebase edit the source with gvim, but we'd like to start using the nifty IDE features in Eclipse. The codebase has an extensive directory hierarchy, but the source files use include directives without paths due to some voodoo we use in our build process. When I link the source to my project in Eclipse, the indexer complains that it can't find any header files (because we don't specify paths in our includes). If I manually add the directories from the workspace to the include path then everything works wonderfully, but obviously adding hundreds of directories manually isn't feasible. Is there a simple method to tell Eclipse to look anywhere in the project for the include files without having to add them one by one? If not, can anyone suggest a good starting place, like what classes to extend, for writing a plugin to scan the project at creation/modification and programmatically add all directories to the include path?
|
This feature has already been implemented in the current CDT development stream and will be available in CDT 6.0, which will be released along with Eclipse 3.5 in June 2009.
Basically if you have an #include and the header file exists somewhere in your project then CDT will be able to find it without the need to manually set up include paths.
If you need the feature now you can download and install the latest CDT development build.
Eclipse Bugzilla: https://bugs.eclipse.org/bugs/show_bug.cgi?id=213562
Latest CDT 6.0 Builds: http://download.eclipse.org/tools/cdt/builds/6.0.0/index.html
|
274,375
| 274,481
|
Intercepting traffic to memcached for statistics/analysis
|
I want to set up a statistics monitoring platform to watch a specific service, but I'm not quite sure how to go about it. Processing the intercepted data isn't my concern, just how to intercept it. One idea was to set up a proxy between the client application and the service so that all TCP traffic goes first to my proxy; the proxy would then delegate the intercepted messages to an awaiting thread/fork to pass the message on and receive the results. The other was to try and sniff the traffic between client & service.
My primary goal is to avoid any serious loss in transmission speed between client & application but get 100% complete communications between client & service.
Environment: Ubuntu 8.04
Language: C/C++
In the background I was thinking of using a SQLite DB running completely in memory, or a 20-25MB memcached daemon slaved to my process.
Update:
Specifically I am trying to track the usage of keys for a memcache daemon, storing the # of sets/gets success/fails on the key. The idea is that most keys have some sort of separating character [`|_-#] to create a sort of namespace. The idea is to step in between the daemon and the client, split the keys apart by a configured separator and record statistics on them.
|
You didn't mention one approach: you could modify memcached or your client to record the statistics you need. This is probably the easiest and cleanest approach.
Between the proxy and the libpcap approach, there are a couple of tradeoffs:
- If you do the packet capture approach, you have to reassemble the TCP streams into something usable yourself. OTOH, if your monitor program gets bogged down, it'll just lose some packets; it won't break the cache. Same if it crashes. You also don't have to reconfigure anything; packet capture is transparent.
- If you do the proxy approach, the kernel handles all the TCP work for you. You'll never lose requests. But if your monitor bogs down, it'll bog down the app. And if your monitor crashes, it'll break caching. You probably will have to reconfigure your app and/or memcached servers so that the connections go through the proxy.
In short, the proxy will probably be easier to code, but implementing it may be a royal pain, and it had better be perfect or it's taking down your caching. Changing the app or memcached seems like the sanest approach to me.
BTW: Have you looked at memcached's built-in statistics reporting? I don't think it's granular enough for what you want, but if you haven't seen it, take a look before doing actual work :-D
|
274,841
| 275,036
|
How can I correctly downcast the pointer from void* to TMemo* in C++Builder2009?
|
I am writing multi-thread socket chat in C++Builder 2009.
It is almost complete in accordance with what I need to do but I have a little problem.
I need to pass the TMemo* pointer into CreateThread WinAPI function which upcasts it to void*.
I tried this way:
HANDLE xxx = MemoChat->Handle;
hNetThread = CreateThread(NULL, 0, NetThread, xxx, 0, &dwNetThreadId);
//...
and then, in NetThread function,
TMemo* MyMemo((HANDLE)lpParam);
TMemo* MyMemo((TMemo*)lpParam);
but it didn't work :(
The question is how I can really downcast it correctly so I can use my Memo Component in this new thread?
|
Call:
TMemo* MemoChat = // You defined that somewhere I assume
HANDLE hNetThread = CreateThread(NULL, 0, NetThread, MemoChat, 0, &dwNetThreadId);
What is happening here is that any pointer you pass as the third parameter is being auto-converted into a void pointer (or in Win terms, LPVOID). That's fine; it does not change the pointer, it just loses the type information, as the system does not know anything about your object.
The new Thread Start point:
DWORD WINAPI NetThread(LPVOID lpParameter)
{
TMemo* MemoChat = reinterpret_cast<TMemo*>(lpParameter);
// Do your thread stuff here.
}
Once your thread start method is called, just convert the void pointer back into the correct type and you should be able to start using it again.
Just to clear up other misconceptions.
A HANDLE is a pointer, so you could have passed it as the parameter to NetThread().
More precisely, a HANDLE is a pointer to a pointer under system control, which points at the object you are using. So why the double indirection? It allows the system to move the object (and update its pointer) without finding all owners of the object: the owners all have handles that point at the pointer that was just updated.
It is an old-fashioned computer science concept that is used infrequently on modern systems because of the OS/hardware ability to swap main memory into secondary storage, but for certain resources handles are still useful. Nowadays, when handles are required, they are hidden inside objects away from the user.
|
274,861
| 274,913
|
How do I calculate the week number given a date?
|
If I have a date, how do I calculate the week number for that date within that year?
For example, in 2008, January 1st to January 6th are in week 1 and January 7th to the 13th are in week 2, so if my date was January 10th 2008, my week number would be 2.
An algorithm would be great to get me started and sample code would also help - I'm developing in C++ on Windows.
Related:
Getting week number off a date in MS SQL Server 2005?
|
Pseudocode:
int julian = getDayOfYear(myDate) // Jan 1 = 1, Jan 2 = 2, etc...
int dow = getDayOfWeek(myDate) // Sun = 0, Mon = 1, etc...
int dowJan1 = getDayOfWeek("1/1/" + thisYear) // find out first of year's day
// int badWeekNum = (julian / 7) + 1 // Get our week# (wrong! Don't use this)
int weekNum = ((julian + 6) / 7) // probably better. CHECK THIS LINE. (See comments.)
if (dow < dowJan1) // adjust for being after Saturday of week #1
++weekNum;
return (weekNum)
To clarify, this algorithm assumes you number your weeks like this:
S M T W R F S
1 2 3 <-- week #1
4 5 6 7 8 9 10 <-- week #2
[etc.]
getDayOfWeek() and getDayOfYear() are standard date-object operations in most languages. If yours doesn't have them, you can count-forward from some known date (Jan 1, 1970 is a common one), after looking up to see what day of the week it was.
If you're going to implement your own date counting routines, remember that years that are divisible by 100 are NOT leap years, unless they are also divisible by 400. So 1900 was not a leap year, but 2000 was. If you're going to work far back in time, you have to mess with Gregorian vs Julian calendars, etc., see Wikipedia for loads of info on that.
This link talks about date/time functions in Windows/C++ in greater detail.
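For reference, a minimal C++ sketch of the pseudocode above using <ctime> (my illustration; it assumes the same Sunday-to-Saturday week-numbering convention as the diagram):
#include <ctime>
int weekNumber(int year, int month, int day)
{
    std::tm date = {};
    date.tm_year = year - 1900;
    date.tm_mon  = month - 1;
    date.tm_mday = day;
    date.tm_isdst = -1;
    std::mktime(&date);               // fills in tm_yday and tm_wday
    std::tm jan1 = {};
    jan1.tm_year = year - 1900;
    jan1.tm_mday = 1;
    jan1.tm_isdst = -1;
    std::mktime(&jan1);
    int julian = date.tm_yday + 1;    // Jan 1 = 1
    int week = (julian + 6) / 7;
    if (date.tm_wday < jan1.tm_wday)  // past Saturday of week #1
        ++week;
    return week;
}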
|
274,951
| 275,033
|
Chaining of ordering predicates (e.g. for std::sort)
|
You can pass a function pointer, function object (or boost lambda) to std::sort to define a strict weak ordering of the elements of the container you want sorted.
However, sometimes (enough that I've hit this several times), you want to be able to chain "primitive" comparisons.
A trivial example would be if you were sorting a collection of objects that represent contact data. Sometimes you will want to sort by last name, first name, area code. Other times first name, last name - yet other times age, first name, area code... etc
Now, you can certainly write an additional function object for each case, but that violates the DRY principle - especially if each comparison is less trivial.
It seems like you should be able to write a hierarchy of comparison functions - the low level ones do the single, primitive, comparisons (e.g. first name < first name), then higher level ones call the lower level ones in succession (probably chaining with && to make use of short circuit evaluation) to generate the composite functions.
The trouble with this approach is that std::sort takes a binary predicate - the predicate can only return a bool. So if you're composing them you can't tell if a "false" indicates equality or greater than. You can make your lower level predicates return an int, with three states - but then you would have to wrap those in higher level predicates before they could be used with std::sort on their own.
In all, these are not insurmountable problems. It just seems harder than it should be - and certainly invites a helper library implementation.
Therefore, does anyone know of any pre-existing library (esp. if it's a std or boost library) that can help here - of have any other thoughts on the matter?
[Update]
As mentioned in some of the comments, I've gone ahead and written my own implementation of a class to manage this. It's fairly minimal and probably has some issues in general, but on that basis, for anyone interested, the class is here:
http://pastebin.com/f52a85e4f
And some helper functions (to avoid the need to specify template args) are here:
http://pastebin.com/fa03d66e
|
You could build a little chaining system like so:
struct Type {
string first, last;
int age;
};
struct CmpFirst {
bool operator () (const Type& lhs, const Type& rhs) { return lhs.first < rhs.first; }
};
struct CmpLast {
bool operator () (const Type& lhs, const Type& rhs) { return lhs.last < rhs.last; }
};
struct CmpAge {
bool operator () (const Type& lhs, const Type& rhs) { return lhs.age < rhs.age; }
};
template <typename First, typename Second>
struct Chain {
Chain(const First& f_, const Second& s_): f(f_), s(s_) {}
bool operator () (const Type& lhs, const Type& rhs) {
if(f(lhs, rhs))
return true;
if(f(rhs, lhs))
return false;
return s(lhs, rhs);
}
template <typename Next>
Chain <Chain, Next> chain(const Next& next) const {
return Chain <Chain, Next> (*this, next);
}
First f;
Second s;
};
struct False { bool operator() (const Type& lhs, const Type& rhs) { return false; } };
template <typename Op>
Chain <False, Op> make_chain(const Op& op) { return Chain <False, Op> (False(), op); }
Then to use it:
vector <Type> v; // fill this baby up
sort(v.begin(), v.end(), make_chain(CmpLast()).chain(CmpFirst()).chain(CmpAge()));
The last line is a little verbose, but I think it's clear what's intended.
|
275,004
| 275,231
|
Timer function to provide time in nano seconds using C++
|
I wish to calculate the time it took for an API to return a value.
The time taken for such an action is in the space of nanoseconds. As the API is a C++ class/function, I am using the timer.h to calculate the same:
#include <ctime>
#include <iostream>
using namespace std;
int main(int argc, char** argv) {
clock_t start;
double diff;
start = clock();
diff = ( std::clock() - start ) / (double)CLOCKS_PER_SEC;
cout<<"printf: "<< diff <<'\n';
return 0;
}
The above code gives the time in seconds. How do I get the same in nanoseconds, and with more precision?
|
What others have posted about running the function repeatedly in a loop is correct.
For Linux (and BSD) you want to use clock_gettime().
#include <time.h>  // for clock_gettime(); link with -lrt on older glibc
int main()
{
timespec ts;
// clock_gettime(CLOCK_MONOTONIC, &ts); // Works on FreeBSD
clock_gettime(CLOCK_REALTIME, &ts); // Works on Linux
}
For Windows you want to use QueryPerformanceCounter. And here is more on QPC.
Apparently there is a known issue with QPC on some chipsets, so you may want to make sure you do not have one of those chipsets. Additionally, some dual-core AMDs may also cause a problem. See the second post by sebbbi, where he states:
QueryPerformanceCounter() and
QueryPerformanceFrequency() offer a
bit better resolution, but have
different issues. For example in
Windows XP, all AMD Athlon X2 dual
core CPUs return the PC of either of
the cores "randomly" (the PC sometimes
jumps a bit backwards), unless you
specially install AMD dual core driver
package to fix the issue. We haven't
noticed any other dual+ core CPUs
having similar issues (p4 dual, p4 ht,
core2 dual, core2 quad, phenom quad).
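For completeness, a minimal QPC usage sketch (my illustration, not part of the original answer):
#include <windows.h>
#include <cstdio>
int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);  // counts per second
    QueryPerformanceCounter(&t0);
    // ... code under test ...
    QueryPerformanceCounter(&t1);
    long long ns = (t1.QuadPart - t0.QuadPart) * 1000000000LL / freq.QuadPart;
    printf("%lld ns\n", ns);
    return 0;
}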
EDIT 2013/07/16:
It looks like there is some controversy on the efficacy of QPC under certain circumstances as stated in http://msdn.microsoft.com/en-us/library/windows/desktop/ee417693(v=vs.85).aspx
...While QueryPerformanceCounter and QueryPerformanceFrequency typically adjust for
multiple processors, bugs in the BIOS or drivers may result in these routines returning
different values as the thread moves from one processor to another...
However this StackOverflow answer https://stackoverflow.com/a/4588605/34329 states that QPC should work fine on any MS OS after Win XP service pack 2.
This article shows that Windows 7 can determine if the processor(s) have an invariant TSC and falls back to an external timer if they don't. http://performancebydesign.blogspot.com/2012/03/high-resolution-clocks-and-timers-for.html Synchronizing across processors is still an issue.
Other fine reading related to timers:
https://blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks
http://lwn.net/Articles/209101/
http://performancebydesign.blogspot.com/2012/03/high-resolution-clocks-and-timers-for.html
QueryPerformanceCounter Status?
See the comments for more details.
|
275,128
| 275,149
|
Parallel assignment in C++
|
Is there any way of doing parallel assignment in C++? Currently, the below compiles (with warnings)
#include <iostream>
int main() {
int a = 4;
int b = 5;
a, b = b, a;
std::cout << "a: " << a << endl
<< "b: " << b << endl;
return 0;
}
and prints:
a: 4
b: 5
What I'd like it to print ... if it weren't obvious, is:
a: 5
b: 4
As in, say, ruby, or python.
|
That's not possible. Your code example
a, b = b, a;
is interpreted in the following way:
a, (b = b), a
It does nothing. The comma operator makes it return the value of a (the rightmost operand). Because assignment binds tighter, b = b is in parens.
The proper way doing this is just
std::swap(a, b);
Boost includes a tuple class with which you can do
tie(a, b) = make_tuple(b, a);
It internally creates a tuple of references to a and b, and then assigns a tuple of the values b and a to them.
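Spelled out as a complete program, the tuple approach looks roughly like this (my sketch, not from the original answer):
#include <boost/tuple/tuple.hpp>
#include <iostream>
int main()
{
    int a = 4, b = 5;
    boost::tie(a, b) = boost::make_tuple(b, a);  // parallel assignment
    std::cout << "a: " << a << "\nb: " << b << '\n';  // a: 5, b: 4
}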
|
275,214
| 275,228
|
Scope and return values in C++
|
I am starting again with C++ and was thinking about the scope of variables.
If I have a variable inside a function and then I return that variable will the variable not be "dead" when it's returned because the scope it was in has ended?
I have tried this with a function returning a string and it did work.
Can anyone explain this? Or at least point me to some place that can explain this to me please.
Thanks
|
When the function terminates, the following steps happen:
1. The function's return value is copied into the placeholder that was put on the stack for this purpose.
2. Everything after the stack frame pointer is popped off. This destroys all local variables and arguments.
3. The return value is popped off the stack and is assigned as the value of the function. If the value of the function isn't assigned to anything, no assignment takes place, and the value is lost.
4. The address of the next instruction to execute is popped off the stack, and the CPU resumes execution at that instruction.
The stack and the heap
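To illustrate why returning a local std::string works - a minimal sketch of my own: the value is copied (or moved) out to the caller before the local is destroyed.
#include <string>
std::string greet()
{
    std::string s = "hello";  // local, destroyed when greet() returns
    return s;                 // but its value is copied out to the caller first
}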
|
275,249
| 275,310
|
When would I use uncaught_exception?
|
What is a good use case for uncaught_exception?
|
Herb Sutter seems to give good advice here. He doesn't know of a good use for it and says that some cases where it appears to be useful don't really work.
|
275,355
| 275,405
|
C++ Reading file Tokens
|
another request sorry..
Right now I am reading the tokens in one by one and it works, but I want to know when there is a newline.
if my file contains
Hey Bob
Now
should give me
Hey
Bob
[NEW LINE]
NOW
Is there a way to do this without using getline?
|
Yes - operator>>, when used with a string, reads whitespace-separated words. Whitespace includes the space, tab, and newline characters.
If you want to read a line at a time use std::getline()
The line can then be tokenized separately with a string stream.
std::string line;
while(std::getline(std::cin,line))
{
// If you then want to tokenize the line use a string stream:
std::stringstream lineStream(line);
std::string token;
while(lineStream >> token)
{
std::cout << "Token(" << token << ")\n";
}
std::cout << "New Line Detected\n";
}
Small addition:
Without using getline():
So you really want to be able to detect a newline. This means that a newline becomes another type of token. So let's assume that you have whitespace-separated words as tokens and the newline as its own token.
Then you can create a Token type.
Then all you have to do is write the stream operators for a token:
#include <iostream>
#include <fstream>
#include <string>
#include <cctype>
class Token
{
private:
friend std::ostream& operator<<(std::ostream&,Token const&);
friend std::istream& operator>>(std::istream&,Token&);
std::string value;
};
std::istream& operator>>(std::istream& str,Token& data)
{
// Check to make sure the stream is OK.
if (!str)
{
    return str;
}
char x;
// Drop leading space
do
{
x = str.get();
}
while(str && isspace(x) && (x != '\n'));
// If the stream is done. exit now.
if (!str)
{
return str;
}
// We have skipped all white space up to the
// start of the first token. We can now modify data.
data.value ="";
// If the token is a '\n' We are finished.
if (x == '\n')
{ data.value = "\n";
return str;
}
// Otherwise read the next token in.
str.unget();
str >> data.value;
return str;
}
std::ostream& operator<<(std::ostream& str,Token const& data)
{
return str << data.value;
}
int main()
{
std::ifstream f("PLOP");
Token x;
while(f >> x)
{
std::cout << "Token(" << x << ")\n";
}
}
|
275,375
| 275,398
|
Inspecting STL containers in Xcode
|
From googling around it looks like Xcode (3.1 in my case) should be at least trying to give me a sane debug view of STL containers - or at least vectors.
However, whenever I go to look at a vector in the debugger I just see M_impl, with M_start and M_finish members (and a couple of others) - but nothing in-between! (it's a debug build, btw).
Am I missing a setting or something somewhere?
I've also read that there are macros available that can augment the debug viewer even further to inspect more complex containers - but have been unable to find any.
I'd also like to be able to view std::wstrings, without having to drop to the memory viewer. It shows std::string fine. Is there anything I can do to show std::wstring?
I realise this is a bit of a composite question - but it's all really part of the same subject.
|
The ability to view the container's items may rely on the complexity of the templated type. For trivial objects like int, bool, etc., and even simple class templates like
template <class T> struct S { T m_t; };
I normally have no problem viewing vector items in the debugger variable view. I say normally because there seem to be occasional bugs that cause stuff--particularly when debugging--not to behave the way I expected. One of those things is garbage or totally useless information in the variable view. Usually a clean rebuild of the target (or sometimes even a more drastic restarting of XCode followed by a clean rebuild) fixes the problem.
As for the other container types, it's most likely hard to efficiently view this information. For example a map is often implemented as a red-black tree. The debugger would have to know that in advance in order to properly walk the tree and show you all the keys and values. That's probably asking a lot from Xcode or GDB--especially since the former focuses more on Objective-C and plain C than C++ (hence the fact that namespaces tend to kill code completion despite their ubiquity and importance).
|
275,484
| 275,494
|
Align cout format as table's columns
|
I'm pretty sure this is a simple question in regards to formatting but here's what I want to accomplish:
I want to output data onto the screen using cout. I want to output this in the form of a table format. What I mean by this is the columns and rows should be properly aligned. Example:
Test                1
Test2               2
Iamlongverylongblah 2
Etc                 1
I am only concerned with the individual line so my line to output now (not working) is
cout << var1 << "\t\t" << var2 << endl;
Which gives me something like:
Test        1
Test2       2
Iamlongverylongblah         2
Etc         1
|
Use setw:
#include <iostream>
#include <iomanip>
using namespace std;
int main () {
cout << setw(21) << left << "Test" << 1 << endl;
cout << setw(21) << left << "Test2" << 2 << endl;
cout << setw(21) << left << "Iamlongverylongblah" << 2 << endl;
cout << setw(21) << left << "Etc" << 1 << endl;
return 0;
}
|
275,853
| 275,873
|
acceptable fix for majority of signed/unsigned warnings?
|
I myself am convinced that in a project I'm working on signed integers are the best choice in the majority of cases, even though the value contained within can never be negative. (Simpler reverse for loops, less chance for bugs, etc., in particular for integers which can only hold values between 0 and, say, 20, anyway.)
The majority of the places where this goes wrong is a simple iteration of a std::vector, often this used to be an array in the past and has been changed to a std::vector later. So these loops generally look like this:
for (int i = 0; i < someVector.size(); ++i) { /* do stuff */ }
Because this pattern is used so often, the amount of compiler warning spam about this comparison between signed and unsigned types tends to hide more useful warnings. Note that we definitely do not have vectors with more than INT_MAX elements, and note that until now we used two ways to fix the compiler warning:
for (unsigned i = 0; i < someVector.size(); ++i) { /*do stuff*/ }
This usually works but might silently break if the loop contains any code like 'if (i-1 >= 0) ...', etc.
for (int i = 0; i < static_cast<int>(someVector.size()); ++i) { /*do stuff*/ }
This change does not have any side effects, but it does make the loop a lot less readable. (And it's more typing.)
So I came up with the following idea:
template <typename T> struct vector : public std::vector<T>
{
typedef std::vector<T> base;
int size() const { return base::size(); }
int max_size() const { return base::max_size(); }
int capacity() const { return base::capacity(); }
vector() : base() {}
vector(int n) : base(n) {}
vector(int n, const T& t) : base(n, t) {}
vector(const base& other) : base(other) {}
};
template <typename Key, typename Data> struct map : public std::map<Key, Data>
{
typedef std::map<Key, Data> base;
typedef typename base::key_compare key_compare;
int size() const { return base::size(); }
int max_size() const { return base::max_size(); }
int erase(const Key& k) { return base::erase(k); }
int count(const Key& k) { return base::count(k); }
map() : base() {}
map(const key_compare& comp) : base(comp) {}
template <class InputIterator> map(InputIterator f, InputIterator l) : base(f, l) {}
template <class InputIterator> map(InputIterator f, InputIterator l, const key_compare& comp) : base(f, l, comp) {}
map(const base& other) : base(other) {}
};
// TODO: similar code for other container types
What you see is basically the STL classes with the methods which return size_type overridden to return just 'int'. The constructors are needed because these aren't inherited.
What would you think of this as a developer, if you'd see a solution like this in an existing codebase?
Would you think 'whaa, they're redefining the STL, what a huge WTF!', or would you think this is a nice simple solution to prevent bugs and increase readability? Or maybe you'd rather see that we had spent (half) a day or so changing all these loops to use std::vector<>::iterator?
(In particular if this solution was combined with banning the use of unsigned types for anything but raw data (e.g. unsigned char) and bit masks.)
|
I made this community wiki... Please edit it. I don't agree with the advice against "int" anymore. I now see it as not bad.
Yes, I agree with Richard. You should never use 'int' as the counting variable in a loop like those. The following is how you might want to do various loops using indices (although there is little reason to; occasionally this can be useful).
Forward
for(std::vector<int>::size_type i = 0; i < someVector.size(); i++) {
/* ... */
}
Backward
You can do this, which is perfectly defined behavior:
for(std::vector<int>::size_type i = someVector.size() - 1;
i != (std::vector<int>::size_type) -1; i--) {
/* ... */
}
Soon, with C++1x (the next C++ version) coming along nicely, you can do it like this:
for(auto i = someVector.size() - 1; i != (decltype(i)) -1; i--) {
/* ... */
}
Decrementing below 0 will cause i to wrap around, because it is unsigned.
But unsigned will make bugs creep in
That should never be an argument to make it the wrong way (using 'int').
Why not use std::size_t above?
The C++ Standard, in 23.1 p5 (Container Requirements), defines T::size_type, for T being some container, to be some implementation-defined unsigned integral type. Now, using std::size_t for i above could let bugs creep in silently. If T::size_type is smaller or larger than std::size_t, it will overflow i, or never even reach (std::size_t)-1 if someVector.size() == 0. Likewise, the condition of the loop would be broken completely.
|
275,871
| 275,886
|
How to overcome GCC restriction "could not convert template argument '0' to 'Foo*'"?
|
Suppose I have code like this:
template<class T, T initial_t> class Bar {
// something
};
And then try to use it like this:
Bar<Foo*, NULL> foo_and_bar_whatever_it_means_;
GCC bails out with error (on the above line):
could not convert template argument
'0' to 'Foo*'
I found this thread: http://gcc.gnu.org/ml/gcc-help/2007-11/msg00066.html, but I have to use NULL in this case (ok, I could probably refactor - but it would not be trivial; any suggestions?). I tried to overcome the problem by creating a variable with value of NULL, but GCC still complains that I pass variable and not address of variable as a template argument. And reference to a variable initialized with default ctor would not be the same as NULL.
|
Rethinking your code is probably the best way to get around it. The thread you linked to includes a clear quote from the standard indicating that this isn't allowed.
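One common workaround (my suggestion, not from the original answer) is to drop the pointer from the template parameter list and pass the initial value at run time instead; newer compilers with C++11 support also accept nullptr as a pointer-type non-type template argument.
class Foo;  // forward declaration, as in the question
template<class T>
class Bar {
public:
    explicit Bar(T initial = T()) : value_(initial) {}  // T() is a null pointer for T = Foo*
private:
    T value_;
};
Bar<Foo*> foo_and_bar_whatever_it_means_;  // value_ starts out null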
|
275,994
| 276,056
|
What's the best way to do a backwards loop in C/C#/C++?
|
I need to move backwards through an array, so I have code like this:
for (int i = myArray.Length - 1; i >= 0; i--)
{
// Do something
myArray[i] = 42;
}
Is there a better way of doing this?
Update: I was hoping that maybe C# had some built-in mechanism for this like:
foreachbackwards (int i in myArray)
{
// so easy
}
|
While admittedly a bit obscure, I would say that the most typographically pleasing way of doing this is
for (int i = myArray.Length; i --> 0; )
{
//do something
}
|
276,010
| 276,060
|
std::wcout to console window in Xcode
|
In an Xcode project, if I use std::cout to write to the console the output is fine.
However, if I use std::wcout I get no output.
I know that this is a thorny issue in C++, and I've been googling around to try and find a specific solution in the Xcode case. A couple of things I found that it was suggested should work were:
std::cout.imbue( std::locale("") );
and
std::setlocale(LC_ALL, "");
Neither of these have made any difference. Before I resign myself to spending the next couple of weeks studying the facets API just to be able to write to the console I thought I'd check with the esteemed audience here.
[Update]
I think the reason for the problem I've been having is actually to do with the specific encoding of some of the strings I'm trying to print.
If I send just a string literal, using the L"my string" syntax it works! It appears this is using UTF32 - little endian encoding.
However, I've been mixing this with strings I've been passed from Objective C++ code using NSUTF32BigEndianStringEncoding encoding. It's this mix of encodings that's causing the problems.
I think we can consider this matter closed. Thanks for reading.
|
std::wcout should work just like std::cout.
The following works fine on my Mac:
#include <iostream>
int main()
{
std::cout << "HI" << std::endl;
std::wcout << L"PLOP" << std::endl;
}
Maybe (though some code would have been nice) it's because you are not flushing the buffer. Remember that std::cout and std::wcout are buffered. This means the output will not be pushed to the console until the buffer is filled or you explicitly flush the buffer.
You can flush the buffer with:
std::wcout << std::flush;
// or
std::wcout << std::endl; // This also puts a '\n' on the stream.
|
276,066
| 276,068
|
Better way to implement count_permutations?
|
I need a function count_permutations() that returns the number of permutations of a given range. Assuming that the range is allowed to be modified, and starts at the first permutation, I could naively implement this as repeated calls to next_permutation() as below:
template<class Ret, class Iter>
Ret count_permutations(Iter first, Iter last)
{
Ret ret = 0;
do {
++ret;
} while (next_permutation(first, last));
return ret;
}
Is there a faster way that doesn't require iterating through all the permutations to find the answer? It could still assume that the input can be modified and starts in the first permutation, but obviously if it is possible to implement without those assumptions, that would be great too.
|
The number of permutations for a range where all the elements are unique is n! where n is the length of the range.
If there are duplicate elements, you can use n!/(n_0!)...(n_m!) where n_0...n_m are the lengths of duplicate ranges.
So for example [1,2,3] has 3! = 6 permutations while [1,2,2] has 3!/2! = 3 permutations.
EDIT: A better example is [1,2,2,3,3,3] which has 6!/2!3! = 60 permutations.
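A minimal sketch of that formula (my illustration, not from the original answer), computed incrementally so intermediate values stay integral and as small as possible:
#include <iterator>
#include <map>
template<class Ret, class Iter>
Ret count_permutations(Iter first, Iter last)
{
    typedef typename std::iterator_traits<Iter>::value_type Value;
    std::map<Value, Ret> seen;  // running count of each distinct value
    Ret result = 1;
    Ret n = 0;
    for (Iter it = first; it != last; ++it) {
        ++n;
        // Prefix multinomial: result stays an exact integer at every step.
        result = result * n / ++seen[*it];
    }
    return result;
}
// count_permutations<long>(v.begin(), v.end()) on {1,2,2} yields 3.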
|
276,102
| 276,110
|
Catching all unhandled C++ exceptions?
|
Is there some way to catch exceptions which are otherwise unhandled (including those thrown outside the catch block)?
I'm not really concerned about all the normal cleanup stuff done with exceptions, just that I can catch it, write it to a log, notify the user, and exit the program, since the exceptions in these cases are generally fatal, unrecoverable errors.
something like:
global_catch()
{
MessageBox(NULL,L"Fatal Error", L"A fatal error has occured. Sorry for any inconvience", MB_ICONERROR);
exit(-1);
}
global_catch(Exception *except)
{
MessageBox(NULL,L"Fatal Error", except->ToString(), MB_ICONERROR);
exit(-1);
}
|
This can be used to catch unexpected exceptions.
try
{
    // code that may throw an unexpected exception
}
catch (...)
{
    std::cout << "OMG! an unexpected exception has been caught" << std::endl;
}
Without a try/catch block, I don't think you can catch exceptions, so structure your program so that the exception-throwing code is under the control of a try/catch.
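If you want a truly global handler for exceptions that escape everything - including those thrown outside any try block - std::set_terminate is worth knowing about. A minimal sketch, not part of the original answer:
#include <cstdlib>
#include <exception>
#include <iostream>
void onTerminate()
{
    // Runs when an exception escapes all handlers; log/notify, then die.
    std::cerr << "A fatal error has occurred." << std::endl;
    std::abort();
}
int main()
{
    std::set_terminate(onTerminate);
    throw 42;  // no handler anywhere: onTerminate runs
}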
|
276,173
| 276,570
|
What are your favorite C++ Coding Style idioms
|
What are your favorite C++ coding style idioms? I'm asking about style or coding typography such as where you put curly braces, are there spaces after keywords, the size of indents, etc. This is opposed to best-practices or requirements such as always deleting arrays with delete[].
Here is an example of one of my favorites: In C++ Class initializers, we put the separators at the front of the line, rather than the back. This makes it easier to keep this up to date. It also means that source code control diffs between versions are cleaner.
TextFileProcessor::
TextFileProcessor( class ConstStringFinder& theConstStringFinder )
: TextFileProcessor_Base( theConstStringFinder )
, m_ThreadHandle ( NULL )
, m_startNLSearch ( 0 )
, m_endNLSearch ( 0 )
, m_LineEndGetIdx ( 0 )
, m_LineEndPutIdx ( 0 )
, m_LineEnds ( new const void*[ sc_LineEndSize ] )
{
;
}
|
When creating enumerations, put them in a namespace so that you can access them with a meaningful name:
namespace EntityType {
enum Enum {
Ground = 0,
Human,
Aerial,
Total
};
}
void foo(EntityType::Enum entityType)
{
if (entityType == EntityType::Ground) {
/*code*/
}
}
EDIT: However, this technique has become obsolete in C++11. Scoped enumeration (declared with enum class or enum struct) should be used instead: it is more type-safe, concise, and flexible. With old-style enumerations the values are placed in the outer scope. With new-style enumeration they are placed within the scope of the enum class name.
Previous example rewritten using scoped enumeration (also known as strongly typed enumeration):
enum class EntityType {
Ground = 0,
Human,
Aerial,
Total
};
void foo(EntityType entityType)
{
if (entityType == EntityType::Ground) {
/*code*/
}
}
There are other significant benefits from using scoped enumerations: absence of implicit cast, possible forward declaration, and ability to use custom underlying type (not the default int).
|
276,543
| 276,547
|
How to run a console application with command line parameters in Visual C++ 6.0?
|
I've got a console application that compiles and executes fine with Visual C++ 6.0, except that it will then only get as far as telling me about missing command line parameters. There doesn't seem to be anywhere obvious to enter these. How do I run or debug it with command line parameters?
|
I assume you're talking about setting the command line parameters for running in the IDE.
Open the Project/Settings property page and go to the Debug tab.
There's a "Program arguments" field you can put them into.
|
276,562
| 276,584
|
Using windows DLLs in a portable app
|
I have built a Windows C++ application that I'd like to port to Linux. The main reason to do this is ease of system maintenance for our IT staff. Apart from the one Windows machine that runs this application, we're a Linux-only operation.
The reason this application was built on, and runs on, Windows is that it uses a Windows API (DLL and header) to connect to a server belonging to an external party. This connection uses a proprietary protocol that I don't really want to mess with (and I think I'm contractually not allowed to), so I'm stuck with the .dll.
This dll is windows only because of (I suspect) windows sockets specific code in the dll. Apart from that it doesn't do a whole lot of advanced stuff. Is there a way somewhere between just running the app on linux in WINE and sniffing out the protocol and reimplementing the DLL myself that will allow me to get this application to run on a linux machine?
This idea got inspired by this item about QT creator so any solution that allows me to play with that would be extra cool.
|
The most obvious middle ground would be to use Winelib. I do not know if it can link directly to a native DLL, but if not you probably could load it with LoadLibrary().
You could then split your application in two parts: a wrapper which only calls the DLL, and the rest of the code talking to your wrapper. You could have both in separate processes, and thus only the wrapper would have to be compiled with Winelib. The rest of the application could then use whatever framework you want.
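A rough sketch of the wrapper side ("vendor.dll" and "Connect" are placeholder names of my own, not from the original answer):
#include <windows.h>  // provided by Winelib when building with winegcc
typedef int (WINAPI *ConnectFn)(const char* host);
int main()
{
    HMODULE lib = LoadLibraryA("vendor.dll");
    if (!lib)
        return 1;
    ConnectFn connect = (ConnectFn)GetProcAddress(lib, "Connect");
    int rc = connect ? connect("server.example.com") : -1;
    FreeLibrary(lib);
    return rc;
}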
|
276,761
| 277,306
|
Exposing a C++ API to Python
|
I'm currently working on a project where I had to wrap the C++ classes with Python to be able to script the program. So my specific experience also involved embedding the Python interpreter in our program.
The alternatives I tried were:
Boost.Python
I liked the cleaner API produced by Boost.Python, but the fact that it would have required that users install an additional dependency made us switch to SWIG.
SWIG
SWIG's main advantage for us was that it doesn't require end users to install it to use the final program.
What have you used to do this, and what has been your experience with it?
|
I've used both (for the same project): Boost is better integrated with the STL, and especially C++ exceptions. Also, its memory management mechanism (which tries to bridge C++ memory management and Python GC) is way more flexible than SWIG's. However, SWIG has much better documentation, no external dependencies, and if you get the library wrapped in SWIG for Python you're more than half-way there to getting a Java/Perl/Ruby wrapper as well.
I don't think there's a clear-cut choice: for smaller projects, I'd go with Boost.Python again, for larger long-lived projects, the extra investment in SWIG is worth it.
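To give a feel for the Boost.Python side, a minimal module looks roughly like this (the function and module names are just placeholders):
#include <boost/python.hpp>

int add(int a, int b) { return a + b; }

// Exposes 'add' to Python as example.add(a, b).
BOOST_PYTHON_MODULE(example)
{
    boost::python::def("add", add);
}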
|
276,769
| 277,687
|
How to expose std::vector<int> as a Python list using SWIG?
|
I'm trying to expose this function to Python using SWIG:
std::vector<int> get_match_stats();
And I want SWIG to generate wrapping code for Python so I can see it as a list of integers.
Adding this to the .i file:
%include "typemaps.i"
%include "std_vector.i"
namespace std
{
%template(IntVector) vector<int>;
}
I'm running SWIG Version 1.3.36 and calling swig with -Wall and I get no warnings.
I'm able to get access to a list but I get a bunch of warnings when compiling with -Wall (with g++ (GCC) 4.2.4 ) the generated C++ code that say:
warning: dereferencing type-punned pointer will break strict-aliasing rules
Am I exposing the function correctly? If so, what does the warning mean?
These are the lines before the offending line in the same function:
SWIGINTERN PyObject *_wrap_IntVector_erase__SWIG_0(PyObject *SWIGUNUSEDPARM(self), PyObject *args) {
PyObject *resultobj = 0;
std::vector<int,std::allocator<int> > *arg1 = (std::vector<int,std::allocator<int> > *) 0 ;
std::vector<int,std::allocator<int> >::iterator arg2 ;
std::vector<int,std::allocator<int> >::iterator result;
void *argp1 = 0 ;
int res1 = 0 ;
swig::PySwigIterator *iter2 = 0 ;
int res2 ;
PyObject * obj0 = 0 ;
PyObject * obj1 = 0 ;
if (!PyArg_ParseTuple(args,(char *)"OO:IntVector_erase",&obj0,&obj1)) SWIG_fail;
res1 = SWIG_ConvertPtr(obj0, &argp1,SWIGTYPE_p_std__vectorT_int_std__allocatorT_int_t_t, 0 | 0 );
if (!SWIG_IsOK(res1)) {
SWIG_exception_fail(SWIG_ArgError(res1), "in method '" "IntVector_erase" "', argument " "1"" of type '" "std::vector<int,std::allocator<int> > *""'");
}
arg1 = reinterpret_cast< std::vector<int,std::allocator<int> > * >(argp1);
And this is the offending line:
res2 = SWIG_ConvertPtr(obj1, SWIG_as_voidptrptr(&iter2), swig::PySwigIterator::descriptor(), 0);
More code follows that.
The warning generated when compiling with g++ 4.2.4 is:
swig_iss_wrap.cxx: In function ‘PyObject* _wrap_IntVector_erase__SWIG_0(PyObject*, PyObject*)’:
swig_iss_wrap.cxx:5885: warning: dereferencing type-punned pointer will break strict-aliasing rules
|
%template(IntVector) vector<int>;
|
276,847
| 276,862
|
Slow performance of AddString in MFC
|
I've got a dialog with several largeish combo boxes in it (maybe several hundred items apiece). There's a noticeable delay at construction while these are populated (confirmed that it's them by profiling).
My initial thought was that sorting was killing its performance, but disabling sort and using InsertString instead doesn't seem to make things much better. It doesn't strike me as an excessive number of items; is there something else I should be doing or considering here?
The MFC calls are trivial wrappers to Win32 message calls so I don't think there's any significant overhead there.
DUPLICATE How to load a large array of strings in to an MFC combobox control fast as possible?
|
You should be using CWnd::SetRedraw around your adds, to prevent the control updating all its internal state after each add.
If you're not already doing it, then do this:
combo.SetRedraw(FALSE);
... All the adds
combo.SetRedraw(TRUE);
combo.Invalidate();
You should also consider using the CComboBox::InitStorage function, which preallocates memory for the combo-box.
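Putting both suggestions together, the populate code might look like this (the item count and average string length are just guesses you would tune; 'items' and 'nItems' are hypothetical):
// Preallocate for ~500 strings of ~32 TCHARs each, then batch the adds.
combo.InitStorage(500, 32 * sizeof(TCHAR));
combo.SetRedraw(FALSE);
for (int i = 0; i < nItems; ++i)
    combo.AddString(items[i]);
combo.SetRedraw(TRUE);
combo.Invalidate();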
|
276,884
| 276,889
|
Learning C# after C++
|
In a progression of languages, I have been learning C and C++. Now I would like to learn C#. I know there are some drastic differences between them - such as the removal of pointers and garbage collection. However, I don't know many of the differences between the two.
What are the major differences that a C++ programmer would need to know when moving to C#? (For example, what can I use instead of STL, syntactic differences between them, or anything else that might be considered important.)
|
C# for C++ Developers is a great place to start. It is a table that lists the most important comparisons between the two languages.
Once you have explored some of these differences, you might choose a self-contained project you have written in the past in C++, and re-write it in C#. In your first pass, you will probably just end up translating directly across, using the same design and algorithms. As you become more comfortable with C#, you will recognize ways to take advantage of language features only available in C#, as well as the incredibly versatile .NET Framework.
|
277,258
| 277,362
|
How do I see a C/C++ source file after preprocessing in Visual Studio?
|
Let's say I have a source file with many preprocessor directives. Is it possible to see how it looks after the preprocessor is done with it?
|
cl.exe, the command line interface to Microsoft Visual C++, has three different options for outputting the preprocessed file (hence the inconsistency in the previous responses about Visual C++):
/E: preprocess to stdout (similar to GCC's -E option)
/P: preprocess to file
/EP: preprocess to stdout without #line directives
If you want to preprocess to a file without #line directives, combine the /P and /EP options.
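For example, from a Visual Studio command prompt (file names are placeholders):
cl /nologo /E myfile.cpp
cl /nologo /P /EP myfile.cpp
The first prints the preprocessed source to stdout; the second writes it, without #line directives, to myfile.i.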
|
277,655
| 277,684
|
Why do C++ streams use char instead of unsigned char?
|
I've always wondered why the C++ Standard library has instantiated basic_[io]stream and all its variants using the char type instead of the unsigned char type. char means (depending on whether it is signed or not) you can have overflow and underflow for operations like get(), which will lead to implementation-defined values of the variables involved. Another example is when you want to output a byte, unformatted, to an ostream using its put function.
Any ideas?
Note: I'm still not really convinced. So if you know the definitive answer, you can still post it indeed.
|
Possibly I've misunderstood the question, but conversion from unsigned char to char isn't unspecified, it's implementation-dependent (4.7-3 in the C++ standard).
The type of a 1-byte character in C++ is "char", not "unsigned char". This gives implementations a bit more freedom to do the best thing on the platform (for example, the standards body may have believed that there exist CPUs where signed byte arithmetic is faster than unsigned byte arithmetic, although that's speculation on my part). Also for compatibility with C. The result of removing this kind of existential uncertainty from C++ is C# ;-)
Given that the "char" type exists, I think it makes sense for the usual streams to use it even though its signedness isn't defined. So maybe your question is answered by the answer to, "why didn't C++ just define char to be unsigned?"
|
277,809
| 277,816
|
Call C# methods from C++ without using COM
|
Is there a way to create C# objects and call methods from unmanaged C++, but without using COM Iterop? I am looking for something like JNI (but for .Net), where you can manually create the VM, create objects, etc.
|
If you are using C++/CLI then you can interact directly with both the managed world and unmanaged code, so interop is trivial.
You can also host the CLR yourself, and whilst the hosting API is COM based, you can then create any managed object. The process isn't a difficult as it sounds as a few API calls encapsulate a lot of functionality. There is a lot of info online, for example the MSDN documentation on "Hosting the Common Language Runtime".
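For the C++/CLI route, a thin mixed-mode wrapper compiled with /clr is enough; a minimal sketch (the assembly, namespace, and method names below are hypothetical):
#using <MyCSharpLib.dll>

// Native callers can link against this function; the body runs managed code.
void CallIntoCSharp()
{
    MyNamespace::MyClass^ obj = gcnew MyNamespace::MyClass();
    obj->DoWork();
}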
|
278,025
| 280,880
|
Calling an XLL from (unmanaged) C++
|
I have an XLL Excel addin and now another team wants to use the same functionality in their project (unmanaged C++). Is there a way to interface with this XLL directly from C++?
|
Is your XLL managed or unmanaged code?
As far as I know, an unmanaged C++ XLL is in fact a DLL that exports specific methods called by Excel.
If your XLL has a .def file, maybe you could add a method that the other team could call.
|
278,149
| 278,197
|
Programmatically navigating to the Windows Mobile home screen
|
We have an application that downloads some files in the background. Our application pops up when an Internet connection is made, and after prompting the user to accept the downloads, we'd like to switch back to the home screen while we do our stuff.
We can't work out how to do to this. We can emulate pressing "back" a few times, which sometimes works, but where you end up depends on what the user was doing when the Internet connection happened.
So, can someone provide pointers to how to do this?
Thanks.
Paul.
|
Can you try setting the today screen as the foreground window?
HWND hWnd = FindWindow(_T("DesktopExplorerWindow"), _T("Desktop"));
SetForegroundWindow(hWnd);
|
278,260
| 281,230
|
How to implement CEditListCtrl
|
How to implement CEditListCtrl?. List control with edit capabality (Report/Grid view).
I have a list view in Report View. It has some values. I need to extend this to edit the values present in the list view.
I declared a class which inherits from CListCtrl. And I have handled the two Window messages to start and end the edit. Upon getting the messages I am displaying a Text box. But I am not getting the control inside these message handlers. Is there a way to know the reason?
Or Is there a other way to implement this.
|
Thanks for all the answers. I got it done easily:
I handled WM_LBUTTONDOWN; the handler pops up the edit box to get the new value for the field.
I handled LVN_ENDLABELEDIT to know when the edit is finished, and updated the values after receiving that message.
One thing I had forgotten was to set the "Edit Labels" flag to TRUE for the CListCtrl in the resource view.
We also have to implement OnPaint() in CListCtrl's derived class, otherwise the UI won't update properly.
|
278,273
| 278,353
|
How to guarantee files that are decrypted during run time are cleaned up?
|
Using C or C++, on Windows and Linux: after I decrypt a file to disk, how can I guarantee it is deleted if the application crashes, or if the system powers off and the application can't clean up properly?
|
Don't write the decrypted file to disk at all.
If the system is powered off, the file is still on disk, so the disk and therefore the file can be accessed.
The exception would be an encrypted file system, but that is outside the control of your program.
|
278,429
| 280,031
|
What could cause a dynamic_cast to crash?
|
I have a piece of code looking like this :
TAxis *axis = 0;
if (dynamic_cast<MonitorObjectH1C*>(obj))
axis = (dynamic_cast<MonitorObjectH1C*>(obj))->GetXaxis();
Sometimes it crashes :
Thread 1 (Thread -1208658240 (LWP 11400)):
#0 0x0019e7a2 in _dl_sysinfo_int80 () from /lib/ld-linux.so.2
#1 0x048c67fb in __waitpid_nocancel () from /lib/tls/libc.so.6
#2 0x04870649 in do_system () from /lib/tls/libc.so.6
#3 0x048709c1 in system () from /lib/tls/libc.so.6
#4 0x001848bd in system () from /lib/tls/libpthread.so.0
#5 0x0117a5bb in TUnixSystem::Exec () from /opt/root/lib/libCore.so.5.21
#6 0x01180045 in TUnixSystem::StackTrace () from /opt/root/lib/libCore.so.5.21
#7 0x0117cc8a in TUnixSystem::DispatchSignals ()
from /opt/root/lib/libCore.so.5.21
#8 0x0117cd18 in SigHandler () from /opt/root/lib/libCore.so.5.21
#9 0x0117bf5d in sighandler () from /opt/root/lib/libCore.so.5.21
#10 <signal handler called>
#11 0x0533ddf4 in __dynamic_cast () from /usr/lib/libstdc++.so.6
I have no clue why it crashes. obj is not null (and even if it were, that would not be a problem, would it?).
What could be the reason for a dynamic cast to crash?
If it can't cast, it should just return NULL, no?
|
Some possible reasons for the crash:
obj points to an object with a non-polymorphic type (a class or struct with no virtual methods, or a fundamental type).
obj points to an object that has been freed.
obj points to unmapped memory, or memory that has been mapped in such a way as to generate an exception when accessed (such as a guard page or inaccessible page).
obj points to an object with a polymorphic type, but that type was defined in an external library that was compiled with RTTI disabled.
Not all of these problems necessarily cause a crash in all situations.
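A small illustration of the second case (using hypothetical Base/Derived types): dynamic_cast has to read the object's vtable pointer before it can decide to return NULL, so a dangling pointer crashes instead of failing cleanly:
struct Base { virtual ~Base() {} };
struct Derived : Base {};

void demo()
{
    Base* obj = new Derived();
    delete obj;
    // Undefined behavior: dynamic_cast dereferences freed memory here,
    // so it may crash rather than return NULL.
    Derived* d = dynamic_cast<Derived*>(obj);
    (void)d;
}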
|
278,784
| 278,830
|
What is the best compiler to use when you want to experiment with C++0x features?
|
What is the best compiler to experiment with C++0x features? I have been experimenting with GNU g++ 4.4.
|
Definitely GCC trunk. ConceptGCC misses many features that GCC trunk has (though ConceptGCC is currently being merged into GCC). Trunk has all these features, including the new auto-typed variables (no new function declaration syntax yet, though): http://gcc.gnu.org/projects/cxx0x.html .
There is a GCC branch containing partial lambda support, which also contains other C++0x features. I would recommend trying that one out too. It's in use on #geordi at irc.freenode.org, so you can experiment with it there.
|
278,870
| 278,904
|
Does VS2008 have somewhat C++0x support?
|
Does VS2008 have somewhat C++0x standard support?
DUPLICATE Visual Studio support for new C / C++ standards?
|
Not as far as I know. I believe VS2010 will have more support:
http://www.microsoft.com/downloads/details.aspx?FamilyId=922B4655-93D0-4476-BDA4-94CF5F8D4814&displaylang=en
http://blogs.msdn.com/vcblog/archive/2008/10/28/lambdas-auto-and-static-assert-c-0x-features-in-vc10-part-1.aspx
|
279,358
| 279,452
|
Invalid lock sequence error in an OpenSceneGraph application
|
I have an application that is built against OpenSceneGraph (2.6.1) and therefore indirectly OpenGL. The application initializes and begins to run, but then I get the following exception "attempt was made to execute an invalid lock sequence" in OpenGL32.dll. When I re-run it, I sometimes get this exception, and sometimes an exception about a "privileged instruction". The call stack looks like it is corrupted, so I can't really tell exactly where the exception is being thrown from. I ran the app quite a bit a couple of days ago and never saw this behavior. Since then I have added an else clause to a couple of ifs, and that is all. My app is a console application, is built with Visual Studio 2008, and it sets OpenSceneGraph to SingleThreaded mode. Has anybody seen this before? Any debugging tips?
|
Can you reproduce it with one of the standard examples?
Can you create a minimal app that causes this?
Do you have a machine with a different brand of video card you can test it on (e.g. Nvidia vs. ATI)? There are known issues with OpenSceneGraph and bad OpenGL drivers.
Have you tried posting to osg-users@lists.openscenegraph.org?
|
279,601
| 279,689
|
Boost.Lambda: Insert into a different data structure
|
I have a vector that I want to insert into a set. This is one of three different calls (the other two are more complex, involving boost::lambda::if_()), but solving this simple case will help me solve the others.
std::vector<std::string> s_vector;
std::set<std::string> s_set;
std::for_each(s_vector.begin(), s_vector.end(), s_set.insert(boost::lambda::_1));
Unfortunately, this fails with a conversion error message (trying to convert boost::lambda::placeholder1_type to std::string).
So... what's wrong with this?
|
The error is really nasty, but boils down to the fact that it can't figure out which set::insert to use, since there's three overloads.
You can work around the ambiguity by giving bind a helpful hand, by specifying a pointer to the function you wish to use:
typedef std::set<std::string> s_type;
typedef std::pair<s_type::iterator, bool>(s_type::*insert_fp)(const s_type::value_type&);
std::for_each(s_vector.begin(), s_vector.end(), boost::bind(static_cast<insert_fp>(&s_type::insert), &s_set, _1));
It's not pretty, but it should work.
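For this particular call (no if_ involved), note that a plain std::copy with an insert iterator sidesteps the overload ambiguity entirely:
#include <algorithm>
#include <iterator>
std::copy(s_vector.begin(), s_vector.end(), std::inserter(s_set, s_set.end()));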
|
279,729
| 279,761
|
How to wait until all child processes called by fork() complete?
|
I am forking a number of processes and I want to measure how long it takes to complete the whole task, that is, until all forked processes have completed. Please advise how to make the parent process wait until all child processes have terminated; I want to make sure that I stop the timer at the right moment.
Here is as a code I use:
#include <iostream>
#include <string>
#include <fstream>
#include <sys/time.h>
#include <sys/wait.h>
using namespace std;
struct timeval first, second, lapsed;
struct timezone tzp;
int main(int argc, char* argv[])// query, file, num. of processes.
{
int pCount = 5; // process count
gettimeofday (&first, &tzp); //start time
pid_t* pID = new pid_t[pCount];
for(int indexOfProcess=0; indexOfProcess<pCount; indexOfProcess++)
{
pID[indexOfProcess]= fork();
if (pID[indexOfProcess] == 0) // child
{
// code only executed by child process
// magic here
// The End
exit(0);
}
else if (pID[indexOfProcess] < 0) // failed to fork
{
cerr << "Failed to fork" << endl;
exit(1);
}
else // parent
{
// if(indexOfProcess==pCount-1) and a loop with waitpid??
gettimeofday (&second, &tzp); //stop time
if (first.tv_usec > second.tv_usec)
{
second.tv_usec += 1000000;
second.tv_sec--;
}
lapsed.tv_usec = second.tv_usec - first.tv_usec;
lapsed.tv_sec = second.tv_sec - first.tv_sec;
cout << "Job performed in " <<lapsed.tv_sec << " sec and " << lapsed.tv_usec << " usec"<< endl << endl;
}
}//for
}//main
|
I'd move everything after the line "else //parent" down, outside the for loop. After the loop of forks, do another for loop with waitpid, then stop the clock and do the rest:
for (int i = 0; i < pidCount; ++i) {
int status;
while (-1 == waitpid(pids[i], &status, 0));
if (!WIFEXITED(status) || WEXITSTATUS(status) != 0) {
cerr << "Process " << i << " (pid " << pids[i] << ") failed" << endl;
exit(1);
}
}
gettimeofday (&second, &tzp); //stop time
I've assumed that if the child process fails to exit normally with a status of 0, then it didn't complete its work, and therefore the test has failed to produce valid timing data. Obviously if the child processes are supposed to be killed by signals, or exit non-0 return statuses, then you'll have to change the error check accordingly.
An alternative using wait:
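// Needs <sys/wait.h> for wait() and <cerrno> for errno/ECHILD.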
while (true) {
int status;
pid_t done = wait(&status);
if (done == -1) {
if (errno == ECHILD) break; // no more child processes
} else {
if (!WIFEXITED(status) || WEXITSTATUS(status) != 0) {
cerr << "pid " << done << " failed" << endl;
exit(1);
}
}
}
This one doesn't tell you which process in sequence failed, but if you care then you can add code to look it up in the pids array and get back the index.
|
279,854
| 279,878
|
How do I sort a vector of pairs based on the second element of the pair?
|
If I have a vector of pairs:
std::vector<std::pair<int, int> > vec;
Is there an easy way to sort the list in increasing order based on the second element of the pair?
I know I can write a little function object that will do the work, but is there a way to use existing parts of the STL and std::less to do the work directly?
EDIT: I understand that I can write a separate function or class to pass as the third argument to sort. The question is whether or not I can build it out of standard stuff. I'd really like something that looks like:
std::sort(vec.begin(), vec.end(), std::something_magic<int, int, std::less>());
|
EDIT: Using C++14, the best solution is very easy to write thanks to lambdas, whose parameters can now be declared with auto. This is my current favorite solution:
std::sort(v.begin(), v.end(), [](auto &left, auto &right) {
return left.second < right.second;
});
ORIGINAL ANSWER:
Just use a custom comparator (it's an optional 3rd argument to std::sort)
struct sort_pred {
bool operator()(const std::pair<int,int> &left, const std::pair<int,int> &right) {
return left.second < right.second;
}
};
std::sort(v.begin(), v.end(), sort_pred());
If you're using a C++11 compiler, you can write the same using lambdas:
std::sort(v.begin(), v.end(), [](const std::pair<int,int> &left, const std::pair<int,int> &right) {
return left.second < right.second;
});
EDIT: in response to your edits to your question, here are some thoughts ...
If you really want to be creative and be able to reuse this concept a lot, just make a template:
template <class T1, class T2, class Pred = std::less<T2> >
struct sort_pair_second {
bool operator()(const std::pair<T1,T2>&left, const std::pair<T1,T2>&right) {
Pred p;
return p(left.second, right.second);
}
};
then you can do this too:
std::sort(v.begin(), v.end(), sort_pair_second<int, int>());
or even
std::sort(v.begin(), v.end(), sort_pair_second<int, int, std::greater<int> >());
Though to be honest, this is all a bit overkill, just write the 3 line function and be done with it :-P
|
279,956
| 279,984
|
Is there any way to programmatically set the comment attribute on a file in XP?
|
Links to point me in the correct direction, or sample code will be appreciated.
|
You can do it with some unmanaged calls; check the OLE32 functions StgOpenPropStg() and StgOpenStorageEx() at MSDN:
http://msdn.microsoft.com/en-us/library/aa380342(VS.85).aspx
There's quite a bit to this, but the function names should get you going.
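A rough sketch of the flow from unmanaged C++ (hedged: the exact flags and the VT_LPSTR convention for SummaryInformation are worth double-checking against MSDN; error handling is omitted):
#include <windows.h>
#include <objbase.h>

void SetFileComment(const wchar_t* path, const char* comment)
{
    IPropertySetStorage* pSetStg = NULL;
    // STGFMT_FILE reads/writes the NTFS property set of a plain (non-compound) file.
    if (FAILED(StgOpenStorageEx(path, STGM_READWRITE | STGM_SHARE_EXCLUSIVE,
                                STGFMT_FILE, 0, NULL, NULL,
                                IID_IPropertySetStorage, (void**)&pSetStg)))
        return;
    IPropertyStorage* pStg = NULL;
    if (SUCCEEDED(pSetStg->Open(FMTID_SummaryInformation,
                                STGM_READWRITE | STGM_SHARE_EXCLUSIVE, &pStg)))
    {
        PROPSPEC spec;
        spec.ulKind = PRSPEC_PROPID;
        spec.propid = PIDSI_COMMENTS; // the standard "Comments" property
        PROPVARIANT var;
        PropVariantInit(&var);
        var.vt = VT_LPSTR; // SummaryInformation traditionally stores ANSI strings
        var.pszVal = const_cast<char*>(comment);
        pStg->WriteMultiple(1, &spec, &var, PID_FIRST_USABLE);
        pStg->Release();
    }
    pSetStg->Release();
}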
|
280,069
| 283,937
|
GCC: program doesn't work with compilation option -O3
|
I'm writing a C++ program that doesn't work (I get a segmentation fault) when I compile it with optimizations (options -O1, -O2, -O3, etc.), but it works just fine when I compile it without optimizations.
Is there any chance that the error is in my code, or should I assume that this is a bug in GCC?
My GCC version is 3.4.6.
Is there any known workaround for this kind of problem?
There is a big difference in speed between the optimized and unoptimized version of my program, so I really need to use optimizations.
This is my original functor. The one that works fine with no levels of optimizations and throws a segmentation fault with any level of optimization:
struct distanceToPointSort{
indexedDocument* point ;
distanceToPointSort(indexedDocument* p): point(p) {}
bool operator() (indexedDocument* p1,indexedDocument* p2){
return distance(point,p1) < distance(point,p2) ;
}
} ;
And this one works flawlessly with any level of optimization:
struct distanceToPointSort{
indexedDocument* point ;
distanceToPointSort(indexedDocument* p): point(p) {}
bool operator() (indexedDocument* p1,indexedDocument* p2){
float d1=distance(point,p1) ;
float d2=distance(point,p2) ;
std::cout << "" ; //without this line, I get a segmentation fault anyways
return d1 < d2 ;
}
} ;
Unfortunately, this problem is hard to reproduce because it happens with some specific values. I get the segmentation fault upon sorting just one out of more than a thousand vectors, so it really depends on the specific combination of values each vector has.
|
Now that you posted the code fragment and a working workaround was found (@Windows programmer's answer), I can say that perhaps what you are looking for is -ffloat-store.
-ffloat-store
Do not store floating point variables in registers, and inhibit other options that might change whether a floating point value is taken from a register or memory.
This option prevents undesirable excess precision on machines such as the 68000 where the floating registers (of the 68881) keep more precision than a double is supposed to have. Similarly for the x86 architecture. For most programs, the excess precision does only good, but a few programs rely on the precise definition of IEEE floating point. Use -ffloat-store for such programs, after modifying them to store all pertinent intermediate computations into variables.
Source: http://gcc.gnu.org/onlinedocs/gcc-3.4.6/gcc/Optimize-Options.html
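For example, a hypothetical invocation adding the flag on top of optimization:
g++ -O3 -ffloat-store myprogram.cpp -o myprogram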
|
280,143
| 280,447
|
Making a custom run Dialog with C or C++?
|
Well, I would like to make a custom run dialog within my program so that the user can test commands without opening it themselves. The only problem is that MSDN does not provide any coverage of this. If I cannot make my own custom run dialog and send the data to shell32.dll (where the run dialog is stored), I will settle for code that can open the run dialog from a button. So far the only information I have found is how to open it with VBScript, and I would like to know how to access shell objects directly from C/C++.
Here is the VBScript if it helps (save as .vbs if you want to see)
<script language="VBScript">
function fnShellFileRunVB()
dim objShell
set objShell = CreateObject("Shell.Application")
objShell.FileRun
set objShell = nothing
end function
</script>
|
VBScript's CreateObject() function just creates an instance of a COM object. You can do exactly the same thing in C++, you just need to read a tutorial on how to access COM objects using C++ first.
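A minimal C++ sketch of the same call (error handling trimmed; it relies on the Shell.Application object declared in shldisp.h):
#include <windows.h>
#include <shldisp.h> // IShellDispatch, CLSID_Shell

int main()
{
    CoInitialize(NULL);
    IShellDispatch* pShell = NULL;
    HRESULT hr = CoCreateInstance(CLSID_Shell, NULL, CLSCTX_INPROC_SERVER,
                                  IID_IShellDispatch, (void**)&pShell);
    if (SUCCEEDED(hr))
    {
        pShell->FileRun(); // opens the standard Run dialog
        pShell->Release();
    }
    CoUninitialize();
    return 0;
}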
|
280,162
| 280,526
|
Is there a way to do a C++ style compile-time assertion to determine machine's endianness?
|
I have some low-level serialization code that is templated, and I obviously need to know the system's endianness at compile time (because the templates specialize based on the system's endianness).
Right now I have a header with some platform defines, but I'd rather have some way to make assertions about endianness with a templated test (like a static_assert or boost_if), because my code will need to be compiled and run on a wide range of machines, from many specialized vendors, and probably on devices that don't exist in 2008, so I can't really guess what might need to go into that header years down the road. And since the code-base has an expected lifetime of about 10 years, I can't maintain that header forever.
Hopefully this makes my situation clear.
So does anyone know of a compile-time test that can determine endianness, without relying on vendor specific defines?
|
If you're using autoconf, you can use the AC_C_BIGENDIAN macro, which is fairly guaranteed to work (setting the WORDS_BIGENDIAN define by default)
Alternatively, you could try something like the following (taken from autoconf) to get a test that will probably be optimized away (GCC, at least, removes the other branch):
int is_big_endian()
{
union {
long int l;
char c[sizeof (long int)];
} u;
u.l = 1;
if (u.c[sizeof(long int)-1] == 1)
{
return 1;
}
else
return 0;
}
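A note from well after this answer was written: since C++20, the standard library exposes this directly in <bit>, making a genuinely compile-time assertion trivial:
#include <bit>
static_assert(std::endian::native == std::endian::little,
              "this code assumes a little-endian target");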
|
280,209
| 280,235
|
Creating a multithreading application in vc6 with boost library?
|
Is it possible to create a multithreaded application in VC6 with the boost library?
If it is possible, what are some relevant tutorials?
|
Yes, I have done this successfully, but with Boost v1.30.0. So if you have trouble with the latest versions of the Boost libraries, you might want to go back a year or five. I recall I started getting all sorts of internal compiler errors, et al., when trying to upgrade Boost -- so I didn't, but rather went on using v1.30.0 until I was able to upgrade Visual C++ as well. Even the old versions of Boost are very stable and useful, they just have less features.
|
280,345
| 280,364
|
How do I use PostThreadMessage to close internet explorer from C++
|
I'm trying to start iexplore.exe, let it run for 5 seconds, and then close it again.
iexplore opens just fine; however, it doesn't close when I call PostThreadMessage.
Can anyone see what I'm doing wrong? Here is my code:
CString IEPath = "C:\\Program Files\\Internet Explorer\\IEXPLORE.EXE";//GetIEPath();
//IEPath += ' ' + url;
std::string strCommand((LPCTSTR)IEPath);
PROCESS_INFORMATION procinfo;
STARTUPINFO startupinfo;
GetStartupInfo(&startupinfo);
CreateProcess(
NULL,
(char *)strCommand.c_str(),// name of executable module
NULL, // lpProcessAttributes
NULL, // lpThreadAttributes
false, // handle inheritance option
CREATE_SHARED_WOW_VDM, // creation flags
NULL, // new environment block
NULL, // current directory name
&startupinfo, // startup information
&procinfo // process information
);
Sleep(5000);
::PostThreadMessage(procinfo.dwThreadId, WM_QUIT, 0, 0); // <--- Doesn't close Internet Explorer!
Anyone have an idea of what I'm doing wrong? Or is there a better way to do the trick?
|
If you can enumerate the windows on the desktop and send a WM_CLOSE to the IE window, it might work. You can use the Spy++ program to get the window class of the IE window.
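A sketch of that suggestion ("IEFrame" is the top-level window class most IE versions use; confirm it with Spy++):
#include <windows.h>
#include <tchar.h>

void CloseInternetExplorer()
{
    // Find IE's top-level frame window by class name and ask it to close.
    HWND hIE = FindWindow(_T("IEFrame"), NULL);
    if (hIE != NULL)
        PostMessage(hIE, WM_CLOSE, 0, 0); // WM_CLOSE to a window, not WM_QUIT to a thread
}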
|
280,347
| 280,443
|
How to convert Unicode string into a utf-8 or utf-16 string?
|
How to convert Unicode string into a utf-8 or utf-16 string?
My VS2005 project is using Unicode char set, while sqlite in cpp provide
int sqlite3_open(
const char *filename, /* Database filename (UTF-8) */
sqlite3 **ppDb /* OUT: SQLite db handle */
);
int sqlite3_open16(
const void *filename, /* Database filename (UTF-16) */
sqlite3 **ppDb /* OUT: SQLite db handle */
);
for opening a database file.
How can I convert string, CString, or wstring into UTF-8 or UTF-16 charset?
Thanks very much!
|
Short answer:
No conversion required if you use Unicode strings such as CString or wstring. Use sqlite3_open16().
You will have to make sure you pass a WCHAR pointer, cast to void *, to the API. (Seems lame! Even if this lib is cross-platform, I guess they could have defined a platform-dependent wide char type that is friendlier than a void *.) For a CString that would be: (void*)(LPCWSTR)strFilename
The longer answer:
You don't have a Unicode string that you want to convert to UTF8 or UTF16. You have a Unicode string represented in your program using a given encoding: Unicode is not a binary representation per se. Encodings say how the Unicode code points (numerical values) are represented in memory (binary layout of the number). UTF8 and UTF16 are the most widely used encodings. They are very different though.
When a VS project says "Unicode charset", it actually means "characters are encoded as UTF16". Therefore, you can use sqlite3_open16() directly. No conversion required. Characters are stored in the WCHAR type (as opposed to char), which takes 16 bits (it falls back on the standard C type wchar_t, which takes 16 bits on Win32 but might be different on other platforms. Thanks for the correction, Checkers).
There's one more detail that you might want to pay attention to: UTF16 exists in 2 flavors, Big Endian and Little Endian; that's the byte ordering of these 16 bits. The function prototype you give for UTF16 doesn't say which ordering is used, but you're pretty safe assuming that sqlite uses the same endianness as Windows (Little Endian IIRC. I know the order but have always had problems with the names :-) ).
EDIT: Answer to comment by Checkers:
UTF16 uses 16-bit code units. Under Win32 (and only on Win32), wchar_t is used for such a storage unit. The trick is that some Unicode characters require a sequence of 2 such 16-bit code units. They are called Surrogate Pairs.
In the same way, UTF8 represents one character using a sequence of 1 to 4 bytes, yet UTF8 is used with the char type.
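And if you ever do need the UTF-8 variant, sqlite3_open(), the Win32 conversion is a couple of calls (a minimal sketch, no error handling):
#include <windows.h>
#include <string>

std::string WideToUtf8(const std::wstring& wide)
{
    // First call computes the required buffer size (including the NUL).
    int len = WideCharToMultiByte(CP_UTF8, 0, wide.c_str(), -1, NULL, 0, NULL, NULL);
    std::string utf8(len, '\0');
    WideCharToMultiByte(CP_UTF8, 0, wide.c_str(), -1, &utf8[0], len, NULL, NULL);
    utf8.resize(len - 1); // drop the trailing NUL the API wrote
    return utf8;
}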
|
280,477
| 280,530
|
_CRT_DEBUGGER_HOOK throws exception
|
I'm having a problem converting my program from VS2005 to VS2008. When I run the program in VS2008, the application starts up fine, but when I start playing around with the application it crashes, giving me this error:
"Microsoft Visual Studio C Runtime Library has detected a fatal error"
And then the debugger points me to this function:
__declspec(noinline)
void __cdecl _CRT_DEBUGGER_HOOK(int _Reserved)
{
/* assign 0 to _debugger_hook_dummy so that the function is not folded
in retail */
(_Reserved);
_debugger_hook_dummy = 0;
}
Compiling the application as release works perfectly...
By the way, this is native code calling managed C++ code that wraps .NET code.
How can I debug such a situation?
Ofer
|
Make sure all of your dependencies are also compiled with VS2008 debug.
I experienced this same issue when compiling a program in VS2008-debug while some of the dependent DLLs were compiled in VS2003, and also when compiling a program in VS2008-debug while some of the dependencies were compiled as release.
|
280,624
| 280,665
|
Which is faster - C# unsafe code or raw C++
|
I'm writing an image processing program to perform real time processing of video frames. It's in C# using the Emgu.CV library (C#) that wraps the OpenCV library dll (unmanaged C++). Now I have to write my own special algorithm and it needs to be as fast as possible.
Which will be a faster implementation of the algorithm?
Writing an 'unsafe' function in C#
Adding the function to the OpenCV library and calling it through Emgu.CV
I'm guessing C# unsafe is slower because it goes through the JIT compiler, but would the difference be significant?
Edit:
Compiled for .NET 3.5 under VS2008
|
it needs to be as fast as possible
Then you're asking the wrong question.
Code it in assembler, with different versions for each significant architecture variant you support.
Use as a guide the output from a good C++ compiler with optimisation, because it probably knows some tricks that you don't. But you'll probably be able to think of some improvements, because C++ doesn't necessarily convey to the compiler all information that might be useful for optimisation. For example, C++ doesn't have the C99 keyword restrict. Although in that particular case many C++ compilers (including MSVC) do now support it, so use it where possible.
Of course if you mean, "I want it to be fast, but not to the extent of going outside C# or C++", then the answer's different ;-)
I would expect C# to at least approach the performance of similar-looking C++ in a lot of cases. I assume of course that the program will be running long enough that the time the JIT itself takes is irrelevant, but if you're processing much video then that seems likely. But I'd also expect there to be certain things which if you do them in unsafe C#, will be far slower than the equivalent thing in C++. I don't know what they are, because all my experience of JITs is in Java rather than CLR. There might also be things which are slower in C++, for instance if your algorithm makes any calls back into C# code.
Unfortunately the only way to be sure how close it is would be to write both and test them, which kind of misses the point that writing the C++ version is a bunch of extra effort. However, you might be able to get a rough idea by hacking some quick code which approximates the processing you want to do, without necessarily doing all of it or getting it right. If your algorithm is going to loop over all the pixels and do a few FP ops per pixel, then hacking together a rough benchmark should take all of half an hour.
Usually I would advise against starting out thinking "this needs to be as fast as possible". Requirements should be achievable, and by definition "as X as possible" is only borderline achievable. Requirements should also be testable, and "as X as possible" isn't testable unless you somehow know a theoretical maximum. A more friendly requirement is "this needs to process video frames of such-and-such resolution in real time on such-and-such a speed CPU", or "this needs to be faster than our main competitor's product". If the C# version does that, with a bit to spare to account for unexpected minor issues in the user's setup, then job done.
|
280,654
| 280,657
|
Why can't I store boost::function in std::list?
|
I get the following compilation error:
error: expected `;' before 'it'
Here's my code:
#include <boost/function.hpp>
#include <list>
template< class T >
void example() {
std::list< boost::function<T ()> >::iterator it;
}
Why does this happen? How can I fix it?
|
You need to put typename in front of that line, since the type you do ::iterator upon is dependant on the template-parameter T. Like this:
template< class T >
void example() {
typename std::list< boost::function<T ()> >::iterator it;
}
Consider the line
std::list< boost::function<T ()> >::iterator * it;
which could mean a multiplication, or a pointer declaration. That's why you need typename to make your intention clear. Without it, the compiler assumes the name is not a type, and thus syntactically requires an operator or a semicolon there.
Also consult the new C++ FAQ entry Where to put template and typename on dependent names.
|