| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I'm working on an ASP.Net application and working to add some Ajax to it to speed up certain areas. The first area that I am concentrating is the attendance area for the teachers to report attendance (and some other data) about the kids. This needs to be fast.
I've created a dual-control setup where the user clicks on an icon, and via JavaScript and jQuery I pop up the second control. Then I use a \_\_doPostBack() to refresh the pop-up control and load all of the relevant data.
Here's a little video snippet to show how it works: <http://www.screencast.com/users/cyberjared/folders/Jing/media/32ef7c22-fe82-4b60-a74a-9a37ab625f1f> (:21 and ignore the audio background).
It's slower than I would like at 2-3 seconds in Firefox and Chrome for each "popping up", but it's entirely unworkable in IE, taking easily 7-8 seconds for each time it pops up and loads. And that disregards any time that is needed to save the data after it's been changed.
Here's the javascript that handles the pop-up:
```
function showAttendMenu(callingControl, guid) {
    var myPnl = $get('" + this.MyPnl.ClientID + @"');
    if (myPnl) {
        var displayIDFld = $get('" + this.AttendanceFld.ClientID + @"');
        var myStyle = myPnl.style;
        if (myStyle.display == 'block' && (guid == '' || guid == displayIDFld.value)) {
            myStyle.display = 'none';
        } else {
            // Get a reference to the PageRequestManager.
            var prm = Sys.WebForms.PageRequestManager.getInstance();
            // Unblock the form when a partial postback ends.
            prm.add_endRequest(function() {
                $('#" + this.MyPnl.ClientID + @"').unblock({ fadeOut: 0 });
            });
            var domEl = Sys.UI.DomElement;
            // Move it into position.
            var loc = domEl.getLocation(callingControl);
            var width = domEl.getBounds(callingControl).width;
            domEl.setLocation(myPnl, loc.x + width, loc.y - 200);
            // Show it and block it until we finish loading the data.
            myStyle.display = 'block';
            $('#" + this.MyPnl.ClientID + @"').block({ message: null, overlayCSS: { backgroundColor: '#fff', opacity: '0.7' } });
            // Load the data.
            if (guid != '') { displayIDFld.value = guid; }
            __doPostBack('" + UpdatePanel1.ClientID + @"', '');
        }
    }
}
```
First, I don't understand why the \_\_doPostBack() introduces such a delay in IE. If I take that and the prm.add\_endRequest out, it's VERY speedy as no postback is happening.
Second, I need a way to pop up this control and refresh the data so that it is still interactive. I'm not married to an UpdatePanel, but I haven't been able to figure out how to do it with a Web Service/static page method. As you can see this control is loaded many times on the same page so page size and download speed is an issue.
I'd appreciate any ideas.
Edit: It's the same in IE 6 or 7. I'm thinking it has to do with IE's handling of the UpdatePanel, because the same code is much faster in FF and Chrome.
|
If speed/performance is a major concern for you, I would strongly suggest against UpdatePanels: each partial update still posts the full form (ViewState and all) back to the server and forces the page through the whole life cycle every time, even though the user doesn't see this.
You should be able to (relatively easily) use PageMethods to accomplish your task.
```
// In your aspx.cs, define the server-side method. It must be public
// static and marked with the WebMethod attribute
// (System.Web.Services namespace).
[WebMethod]
public static string HelloWorld(string name)
{
    return "Hello World - by " + name;
}

// Call the method from JavaScript. This requires EnablePageMethods="true"
// on the page's ScriptManager.
PageMethods.HelloWorld("Jimmy", callbackMethod, failMethod);
```
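For context, `callbackMethod` and `failMethod` in the call above are plain JavaScript functions you supply. A minimal sketch (the message formatting here is made up; a real page would write into the DOM rather than return strings):

```javascript
// Success handler: ASP.NET AJAX passes the server's return value
// as the first argument.
function callbackMethod(result) {
    return 'Server said: ' + result; // a real handler would update the page
}

// Failure handler: receives an error object (reduced here to a plain
// message for clarity).
function failMethod(error) {
    return 'Call failed: ' + error;
}
```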
|
It's a known issue with IE only; see [KB 2000262](http://support.microsoft.com/kb/2000262). A workaround/fix can be found [here](http://blog.devlpr.net/2009/09/01/updatepanel-async-postsback-slow-in-ie-part-3/). I worked with them on the script, and it's a shame they cannot put out a real fix.
|
UpdatePanel Slowness in IE
|
[
"",
"c#",
"jquery",
"asp.net",
"performance",
"updatepanel",
""
] |
It seems like there is a lot of overhead involved in rapidly opening and closing SqlConnections. Should I persist a connection (one, per client, per database), or continue declaring a new SqlConnection object whenever I need one, making sure I clean up after myself?
What have you done? What worked well and what worked poorly?
|
In most cases, .NET connection pooling handles this for you. Even though you're opening and closing connections in code, that's not what's happening behind the scenes. When you instantiate and open a connection, .NET looks for an existing connection in the connection pool with the same connection string and gives you that instead. When you close the connection, it returns to the pool for future use.
If you're using SQL Server: <http://msdn.microsoft.com/en-us/library/8xx3tyca.aspx>
OLE DB, ODBC, Oracle: <http://msdn.microsoft.com/en-us/library/ms254502.aspx>
Dino Esposito article: <http://www.wintellect.com/Articles/ADO%20NET%20Connection.pdf>
You can override default pooling behavior with connectionstring name/values: <http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.connectionstring.aspx>. See the second table of settings containing 'Connection Lifetime'.
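For instance, a connection string that tweaks those defaults might look like this (server and database names are placeholders; the pooling keywords are real SqlClient settings):

```
Server=myServer;Database=myDb;Integrated Security=true;Min Pool Size=5;Max Pool Size=100;Connection Lifetime=300;Pooling=true
```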
|
There is not much overhead since, with the default settings, connections are kept in the connection pool. Thus, when you open a connection, you'll often just get a ready connection from the pool. Creating SqlConnections has never given me any trouble.
|
Should I persist a SqlConnection in my data access layer?
|
[
"",
".net",
"sql",
"ado.net",
""
] |
How do I create, execute and control a winform from within a console application?
|
The easiest option is to start a Windows Forms project, then change the output type to Console Application. Alternatively, just add a reference to System.Windows.Forms.dll and start coding:
```
using System.Windows.Forms;
[STAThread]
static void Main() {
Application.EnableVisualStyles();
Application.Run(new Form()); // or whatever
}
```
The important bit is the `[STAThread]` on your `Main()` method, required for full COM support.
|
I recently wanted to do this and found that I was not happy with any of the answers here.
If you follow Marc's advice and set the output type to Console Application, there are two problems:
1) If you launch the application from Explorer, you get an annoying console window behind your Form which doesn't go away until your program exits. We can mitigate this problem by calling FreeConsole prior to showing the GUI (Application.Run). The annoyance here is that the console window still appears: it immediately goes away, but it is there for a moment nonetheless.
2) If you launch it from a console, and display a GUI, the console is blocked until the GUI exits. This is because the console (cmd.exe) thinks it should launch Console apps synchronously and Windows apps asynchronously (the unix equivalent of "myprocess &").
If you leave the output type as Windows Application but *correctly* call AttachConsole, you don't get a second console window when invoked from a console, and you don't get the unnecessary console when invoked from Explorer. The correct way to call AttachConsole is to pass -1 to it. This causes our process to attach to the console of our parent process (the console window that launched us).
However, this has two different problems:
1) Because the console launches Windows apps in the background, it immediately displays the prompt and allows further input. On the one hand this is good news, the console is not blocked on your GUI app, but in the case where you want to dump output to the console and never show the GUI, your program's output comes after the prompt and no new prompt is displayed when you're done. This looks a bit confusing, not to mention that your "console app" is running in the background and the user is free to execute other commands while it's running.
2) Stream redirection gets messed up as well; e.g. "myapp some parameters > somefile" fails to redirect. The stream redirection problem requires a significant amount of P/Invoke to fix up the standard handles, but it is solvable.
After many hours of hunting and experimenting, I've come to the conclusion that there is no way to do this perfectly. You simply cannot get all the benefits of both console and window without any side effects. It's a matter of picking which side effects are least annoying for your application's purposes.
|
how to run a winform from console application?
|
[
"",
"c#",
"winforms",
"console-application",
""
] |
One of the "best practices" is accessing data via stored procedures. I understand why this scenario is good.
My motivation is to split database and application logic (the tables can be changed as long as the stored procedures behave the same), to defend against SQL injection (users cannot execute "select \* from some\_tables"; they can only call stored procedures), and security (a stored procedure can contain "anything" needed to ensure users cannot select/insert/update/delete data that is not meant for them).
What I don't know is how to access data with dynamic filters.
I'm using MSSQL 2005.
If I have table:
```
CREATE TABLE tblProduct (
ProductID uniqueidentifier -- PK
, IDProductType uniqueidentifier -- FK to another table
, ProductName nvarchar(255) -- name of product
, ProductCode nvarchar(50) -- code of product for quick search
, Weight decimal(18,4)
, Volume decimal(18,4)
)
```
then I should create 4 stored procedures ( create / read / update / delete ).
The stored procedure for "create" is easy.
```
CREATE PROC Insert_Product ( @ProductID uniqueidentifier, @IDProductType uniqueidentifier, ... etc ... ) AS BEGIN
INSERT INTO tblProduct ( ProductID, IDProductType, ... etc .. ) VALUES ( @ProductID, @IDProductType, ... etc ... )
END
```
The stored procedure for "delete" is easy too.
```
CREATE PROC Delete_Product ( @ProductID uniqueidentifier, @IDProductType uniqueidentifier, ... etc ... ) AS BEGIN
DELETE tblProduct WHERE ProductID = @ProductID AND IDProductType = @IDProductType AND ... etc ...
END
```
The stored procedure for "update" is similar to the one for "delete", but I'm not sure it is the right way to do it. I think updating all columns is not efficient.
```
CREATE PROC Update_Product( @ProductID uniqueidentifier, @Original_ProductID uniqueidentifier, @IDProductType uniqueidentifier, @Original_IDProductType uniqueidentifier, ... etc ... ) AS BEGIN
UPDATE tblProduct SET ProductID = @ProductID, IDProductType = @IDProductType, ... etc ...
WHERE ProductID = @Original_ProductID AND IDProductType = @Original_IDProductType AND ... etc ...
END
```
And the last one, the stored procedure for "read", is a little bit of a mystery to me. How do I pass filter values for complex conditions? I have a few suggestions:
Using XML parameter for passing where condition:
```
CREATE PROC Read_Product ( @WhereCondition XML ) AS BEGIN
DECLARE @SELECT nvarchar(4000)
SET @SELECT = 'SELECT ProductID, IDProductType, ProductName, ProductCode, Weight, Volume FROM tblProduct'
DECLARE @WHERE nvarchar(4000)
SET @WHERE = dbo.CreateSqlWherecondition( @WhereCondition ) --dbo.CreateSqlWherecondition is some function which returns text with WHERE condition from passed XML
DECLARE @LEN_SELECT int
SET @LEN_SELECT = LEN( @SELECT )
DECLARE @LEN_WHERE int
SET @LEN_WHERE = LEN( @WHERE )
DECLARE @LEN_TOTAL int
SET @LEN_TOTAL = @LEN_SELECT + @LEN_WHERE
IF @LEN_TOTAL > 4000 BEGIN
-- RAISE SOME CONCRETE ERROR, BECAUSE DYNAMIC SQL ACCEPTS MAX 4000 chars
END
DECLARE @SQL nvarchar(4000)
SET @SQL = @SELECT + @WHERE
    EXEC sp_executesql @SQL
END
```
But, I think the limitation of "4000" characters for one query is ugly.
The next suggestion is using filter tables for every column. Insert filter values into the filter table and then call stored procedure with ID of filters:
```
CREATE TABLE tblFilter (
PKID uniqueidentifier -- PK
, IDFilter uniqueidentifier -- identification of filter
, FilterType tinyint -- 0 = ignore, 1 = equals, 2 = not equals, 3 = greater than, etc ...
, BitValue bit , TinyIntValue tinyint , SmallIntValue smallint, IntValue int
, BigIntValue bigint, DecimalValue decimal(19,4), NVarCharValue nvarchar(4000)
, GuidValue uniqueidentifier, etc ... )
CREATE PROC Read_Product ( @Filter_ProductID uniqueidentifier, @Filter_IDProductType uniqueidentifier, @Filter_ProductName uniqueidentifier, ... etc ... ) AS BEGIN
    SELECT ProductID, IDProductType, ProductName, ProductCode, Weight, Volume
    FROM tblProduct
    WHERE ( @Filter_ProductID IS NULL
            OR ( ProductID IN ( SELECT GuidValue FROM tblFilter WHERE IDFilter = @Filter_ProductID AND FilterType = 1 )
                 AND ProductID NOT IN ( SELECT GuidValue FROM tblFilter WHERE IDFilter = @Filter_ProductID AND FilterType = 2 ) ) )
        AND ( @Filter_IDProductType IS NULL
            OR ( IDProductType IN ( SELECT GuidValue FROM tblFilter WHERE IDFilter = @Filter_IDProductType AND FilterType = 1 )
                 AND IDProductType NOT IN ( SELECT GuidValue FROM tblFilter WHERE IDFilter = @Filter_IDProductType AND FilterType = 2 ) ) )
        AND ( @Filter_ProductName IS NULL OR ( ... etc ... ) )
END
```
But this suggestion is a little bit complicated, I think.
Is there some "best practice" to do this type of stored procedures?
|
First: for your delete routine, your where clause should only include the primary key.
Second: for your update routine, do not try to optimize before you have working code. In fact, do not try to optimize until you can profile your application and see where the bottlenecks are. I can tell you for sure that updating one column of one row and updating all columns of one row are nearly identical in speed. What takes time in a DBMS is (1) finding the disk block where you will write the data and (2) locking out other writers so that your write will be consistent. Finally, writing the code necessary to update only the columns that need to change will generally be harder to do and harder to maintain. If you really wanted to get picky, you'd have to compare the speed of figuring out which columns changed compared with just updating every column. If you update them all, you don't have to read any of them.
Third: I tend to write one stored procedure for each retrieval path. In your example, I'd make one by primary key, one by each foreign key, and then I'd add one for each new access path as I needed it in the application. Be agile; don't write code you don't need. I also agree with using views instead of stored procedures; however, a stored procedure can return multiple result sets (in some versions of MSSQL) or pivot rows into columns, which can be useful.
If you need to get, for example, 7 rows by primary key, you have some options. You can call the stored procedure that gets one row by primary key seven times. This may be fast enough if you keep the connection open between all the calls. If you know you never need more than a certain number (say 10) of IDs at a time, you can write a stored procedure that includes a where clause like "and ID in (arg1, arg2, arg3...)" and make sure that unused arguments are set to NULL. If you decide you need to generate dynamic SQL, I wouldn't bother with a stored procedure, because it is just as easy to make a mistake in TSQL as in any other language. Also, you gain no benefit from using the database to do string manipulation: the database is almost always your bottleneck, so there is no point in giving it any more work than necessary.
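A sketch of that IN-list approach against the question's schema (the procedure name and the fixed argument count of three are illustrative):

```sql
CREATE PROC Read_Product_ByIDs (
      @ID1 uniqueidentifier = NULL
    , @ID2 uniqueidentifier = NULL
    , @ID3 uniqueidentifier = NULL
) AS BEGIN
    SELECT ProductID, IDProductType, ProductName, ProductCode, Weight, Volume
    FROM tblProduct
    -- Unused arguments stay NULL; NULL never matches inside IN, so they are ignored.
    WHERE ProductID IN ( @ID1, @ID2, @ID3 )
END
```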
|
For reading data, you do not need a stored procedure for security or to separate out logic, you can use views.
Just grant only select on the view.
You can limit the records shown, change field names, join many tables into one logical "table", etc.
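A minimal sketch against the question's tblProduct (the view and role names here are made up):

```sql
-- Expose only the columns users should see.
CREATE VIEW vwProductList AS
    SELECT ProductID, ProductName, ProductCode
    FROM tblProduct
GO
-- Members of this role can read through the view but cannot touch tblProduct directly.
GRANT SELECT ON vwProductList TO ReadOnlyRole
```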
|
Accessing data with stored procedures
|
[
"",
"sql",
"sql-server-2005",
"stored-procedures",
"crud",
""
] |
I hear that tr1::result\_of gets used frequently inside of Boost... I'm wondering if there are any good (simple) use cases for tr1::result\_of I can use at home.
|
A description of result\_of is given at [open\_std.org](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1454.html). Microsoft has a quick example of a [unit test wrapper](http://msdn.microsoft.com/en-us/library/bb982028.aspx) that uses result\_of.
|
There are no simple cases. However, it's used in `BOOST_AUTO`, which can be used, e.g., in
```
BOOST_AUTO(x, make_pair(a, b));
```
|
What is a good use case for tr1::result_of?
|
[
"",
"c++",
"stl",
"boost",
"tr1",
"use-case",
""
] |
I am designing a simple internal framework for handling time series data.
Given that LINQ is my current toy hammer, I want to hit everything with it.
I want to implement methods in class TimeSeries (Select(), Where() and so on) so that I can use LINQ syntax to handle time series data
Some things are straightforward, e.g. (from x in A select x+10), giving a new time series.
What is the best syntax design for combining two or more time series?
(from a in A from b in B select a+b) is not great, since it expresses a nested loop.
Maybe some join? This should correspond to join on the implicit time variable.
(What I have in mind corresponds to the lisp 'zip' function)
---
**EDIT:** *Some clarification is necessary.*
A time series is a kind of function depending on time, e.g. stock quotes.
A combination of time series could be the difference between two stock prices, as a function of time.
```
Stock1.MyJoin(Stock2, (a,b)=>a-b)
```
is possible, but can this be expressed neatly using some LINQ syntax?
I am expecting to implement LINQ methods in `class MyTimeSeries` myself.
|
If I'm understanding the question correctly, you want to join multiple sequences based on their position within the sequence?
There isn't anything in the `System.Linq.Enumerable` class to do this as both the `Join` and `GroupJoin` methods are based on join keys. However, by coincidence I wrote a `PositionalJoin` method for just this purpose a few days back, used as in your example:
```
sequenceA.PositionalJoin(sequenceB, (a, b) => new { a, b });
```
The semantics of the method shown below are that it does not require the sequences to be of equal length, but it would be trivial to modify it to require this. I also commented out where the argument checking should be, as it was using our internal helper classes.
```
public static IEnumerable<TResult> PositionalJoin<T1, T2, TResult>(
this IEnumerable<T1> source1,
IEnumerable<T2> source2,
    Func<T1, T2, TResult> selector)
{
// argument checking here
return PositionalJoinIterator(source1, source2, selector);
}
private static IEnumerable<TResult> PositionalJoinIterator<T1, T2, TResult>(
IEnumerable<T1> source1,
IEnumerable<T2> source2,
Func<T1, T2, TResult> selector)
{
using (var enumerator1 = source1.GetEnumerator())
using (var enumerator2 = source2.GetEnumerator())
{
bool gotItem;
do
{
gotItem = false;
T1 item1;
if (enumerator1.MoveNext())
{
item1 = enumerator1.Current;
gotItem = true;
}
else
{
item1 = default(T1);
}
T2 item2;
if (enumerator2.MoveNext())
{
item2 = enumerator2.Current;
gotItem = true;
}
else
{
item2 = default(T2);
}
if (gotItem)
{
yield return selector(item1, item2);
}
}
while (gotItem);
}
}
```
Not sure if this is exactly what you're looking for, but hopefully of some help.
|
`Union` sounds like the right way to go - no query expression support, but I think it expresses what you mean.
You might be interested in looking at the Range-based classes in [MiscUtil](http://pobox.com/~skeet/csharp/miscutil) which can be nicely used for times. Combined with a bit of extension method fun, you can do:
```
foreach (DateTime day in 19.June(1976).To(DateTime.Today).Step(1.Day()))
{
Console.WriteLine("I'm alive!");
}
```
I'm not suggesting this should replace whatever you're doing, just that you might be able to take some ideas to make it even neater. Feel free to contribute back, too :)
|
"Join" of time series
|
[
"",
"c#",
"linq",
"linq-to-objects",
"time-series",
""
] |
For example, I'm writing a multi-threaded time-critical application that processes and streams audio in real-time. Interruptions in the audio are totally unacceptable. Does this mean I cannot use the STL because of the potential slow down when an exception is thrown?
|
It's not clearly written in the previous answers, so:
## Exceptions happen in C++
Using the STL or not won't remove the RAII code that frees the resources of the objects you allocated.
For example:
```
void doSomething()
{
MyString str ;
doSomethingElse() ;
}
```
In the code above, the compiler will generate the code to free the MyString resources (i.e. it will call the MyString destructor) no matter what happens in the meantime, including if an exception is thrown by doSomethingElse or if you "return" before the end of the function scope.
If you have a problem with that, then either you should revise your mindset, or try C.
## Exceptions are supposed to be exceptional
Usually, when an exception occurs ([and only when](https://stackoverflow.com/questions/1897940/in-what-ways-do-c-exceptions-slow-down-code-when-there-are-no-exceptions-thown)), you'll have a performance hit.
But then, an exception should only be thrown:
* When you have an exceptional event to handle (i.e. some kind of error)
* In very exceptional cases (i.e. a "massive return" from multiple function calls in the stack, like when doing a complicated search, or unwinding the stack prior to a thread's graceful interruption)
The keyword here is "exceptional", which is good because we are discussing "exceptions" (see the pattern?).
In your case, if an exception is thrown, chances are good that something so bad happened that your program would have crashed anyway without exceptions.
In this case, your problem is not dealing with the performance hit. It is to deal with a graceful handling of the error, or, at worse, graceful termination of your program (including a "Sorry" messagebox, saving unsaved data into a temporary file for later recovery, etc.).
This means (unless in very exceptional cases), don't use exceptions as "return data". Throw exceptions when something very bad happens. Catch an exception only if you know what to do with that. Avoid try/catching (unless you know how to handle the exception).
## What about the STL ?
Now that we know that:
* You still want to use C++
* Your aim is not to throw thousand exceptions each and every seconds just for the fun of it
We should discuss STL:
STL will (if possible) usually verify if you're doing something wrong with it. And if you do, it will throw an exception. Still, in C++, you usually won't pay for something you won't use.
An example of that is the access to a vector data.
If you **know** you won't go out of bounds, then you should use the operator [].
If you **don't know** whether the index is valid, then you should use the method at().
Example A:
```
typedef std::vector<std::string> Vector ;
void outputAllData(const Vector & aString)
{
for(Vector::size_type i = 0, iMax = aString.size() ; i != iMax ; ++i)
{
std::cout << i << " : " << aString[i] << std::endl ;
}
}
```
Example B:
```
typedef std::vector<std::string> Vector ;
void outputSomeData(const Vector & aString, Vector::size_type iIndex)
{
std::cout << iIndex << " : " << aString.at(iIndex) << std::endl ;
}
```
Example A "trusts" the programmer, and no time is lost in verification (and thus there is less chance of an exception being thrown *at that moment* even if there is an error, which usually means the error/exception/crash will happen later, won't help debugging, and will let more data be corrupted).
Example B asks the vector to verify that the index is correct, and to throw an exception if it is not.
The choice is yours.
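To make the trade-off concrete, here is a small sketch (the helper name is made up) that wraps `at()` and converts its `std::out_of_range` exception into a fallback value:

```cpp
#include <stdexcept>
#include <string>
#include <vector>

// Checked lookup built on at(): returns the element, or the fallback
// string when the index is invalid (at() throws std::out_of_range).
std::string getOrDefault(const std::vector<std::string>& v,
                         std::vector<std::string>::size_type i,
                         const std::string& fallback)
{
    try {
        return v.at(i);
    } catch (const std::out_of_range&) {
        return fallback;
    }
}
```

With `operator[]` the same invalid index would be undefined behavior, silently reading past the end; `at()` turns it into a catchable error.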
|
Generally, the only exception that STL containers will throw by themselves is std::bad\_alloc when new fails. The other times are when user code (for example constructors, assignments, copy constructors) throws. If your user code never throws, then you only have to guard against new throwing, which you most likely would have had to do anyway.
Other things that can throw exceptions:
- at() functions can throw std::out\_of\_range if you access them out of bounds. This is a serious program error anyway.
Secondly, exceptions aren't always slow. If an exception occurs in your audio processing, it's probably because of a serious error that you will need to handle anyway. The error-handling code is probably going to be significantly more expensive than the exception-handling code that transports the exception to the catch site.
|
Can I use the STL if I cannot afford the slow performance when exceptions are thrown?
|
[
"",
"c++",
"performance",
"exception",
""
] |
I'm using Java's [Transformer](http://java.sun.com/j2se/1.5.0/docs/api/javax/xml/transform/Transformer.html) class to process an XML Document object.
This is the code that creates the Transformer:
```
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
Transformer transformer = TransformerFactory.newInstance().newTransformer();
transformer.setOutputProperty(OutputKeys.INDENT, "no");
transformer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
transformer.transform(source, result);
```
Currently, my output looks like this: `<svg ... />`. I'd like it to include the namespace prefix on each element, as in `<svg:svg ... />`.
How can I do that?
|
Note that `<svg xmlns="SVGNS" />` is the same as `<svg:svg xmlns:svg="SVGNS" />`.
Did you check that you called `setNamespaceAware(true)` on your `DocumentBuilderFactory` instance?
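A minimal sketch of the parsing side (the class name is made up); without `setNamespaceAware(true)`, the DOM reports `null` namespaces and the transform has no prefix information to output:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class NamespaceParse {
    // Parse an XML string into a DOM with namespace information preserved.
    public static Document parse(String xml) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true); // without this, getNamespaceURI() returns null
        DocumentBuilder builder = factory.newDocumentBuilder();
        return builder.parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
    }
}
```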
|
The package description for [javax.xml.transform](http://java.sun.com/j2se/1.5.0/docs/api/javax/xml/transform/package-summary.html#package_description) has a section *Qualified Name Representation* which seems to imply that it is possible to get the namespace represented in both input and output.
It isn't really clear to me what the result would look like, other than the namespace URI would be included.
Give it a try - hopefully someone else will have more concrete experience.
|
Java XML: how to output the namespace of child elements?
|
[
"",
"java",
"xml",
""
] |
Does anyone know of a good free WinForms HTML editor for .NET? Ideally I would like HTML and preview modes, along with the possibility of exporting to a PDF, Word doc, or similar.
Although the export I could probably create myself from the HTML output.
Another nice feature would be a paste-from-Word that removes all the extra tags you usually end up with, but again, it's a nice-to-have, not a requirement.
|
You can use the [WebBrowser](https://learn.microsoft.com/en-us/dotnet/framework/winforms/controls/webbrowser-control-windows-forms) control in design mode with a second `WebBrowser` control set in view mode.
In order to put the `WebBrowser` control in design mode, you can use the following code.
This code is a super stripped down version of a WYSIWYG editor for one of our software products.
Simply create a new Form, drop a `WebBrowser` control on it, and put this in the Form.Load:
```
Me.WebBrowser1.Navigate("")
Application.DoEvents()
Me.WebBrowser1.Document.OpenNew(False).Write("<html><body><div id=""editable"">Edit this text</div></body></html>")
'turns off document body editing
For Each el As HtmlElement In Me.WebBrowser1.Document.All
el.SetAttribute("unselectable", "on")
el.SetAttribute("contenteditable", "false")
Next
'turns on editable div editing
With Me.WebBrowser1.Document.Body.All("editable")
.SetAttribute("width", Me.Width & "px")
.SetAttribute("height", "100%")
.SetAttribute("contenteditable", "true")
End With
'turns on edit mode
Me.WebBrowser1.ActiveXInstance.Document.DesignMode = "On"
'stops right click->Browse View
Me.WebBrowser1.IsWebBrowserContextMenuEnabled = False
```
|
```
//CODE in C#
webBrowser1.Navigate("about:blank");
Application.DoEvents();
webBrowser1.Document.OpenNew(false).Write("<html><body><div id=\"editable\">Edit this text</div></body></html>");
foreach (HtmlElement el in webBrowser1.Document.All)
{
el.SetAttribute("unselectable", "on");
el.SetAttribute("contenteditable", "false");
}
webBrowser1.Document.Body.SetAttribute("width", this.Width.ToString() + "px");
webBrowser1.Document.Body.SetAttribute("height", "100%");
webBrowser1.Document.Body.SetAttribute("contenteditable", "true");
webBrowser1.Document.DomDocument.GetType().GetProperty("designMode").SetValue(webBrowser1.Document.DomDocument, "On", null);
webBrowser1.IsWebBrowserContextMenuEnabled = false;
```
|
winforms html editor
|
[
"",
"c#",
".net",
"html",
"winforms",
""
] |
## Scenario
I have two wrappers around Microsoft Office, one for 2003 and one for 2007. Since having two versions of Microsoft Office running side by side is "not officially possible" nor recommended by Microsoft, we have two boxes, one with Office 2003 and the other with Office 2007. We compile the wrappers separately. The DLLs are included in our solution; each box has the *same* checkout, but with either the Office 2003 or 2007 project "unloaded" so it doesn't attempt to compile that particular DLL. Failure to do that will throw errors on compilation due to the Office COM DLLs not being available.
We use .NET 2.0 and Visual Studio 2008.
## Facts
Since Microsoft mysteriously changed the Office 2003 API in 2007, renaming and changing some methods (*sigh*) thus making them not backwards compatible, we *need* the two wrappers.
We have each build machine with the solution and one Office DLL activated. E.g.: the machine with Office 2003 has the "Office 2007" DLL unloaded, and therefore doesn't compile it. The other box is the same idea but the other way around. All this because we can't have two different Office versions on the same box for programming purposes (you could technically have two Offices together, according to Microsoft, but *not* for programming, and not without some issues).
## Problem
When we change the Application Version (from 1.5.0.1 to 1.5.0.2 for example) we need to recompile the DLL to match the new version of the application, this is automatically done, because the Office wrapper is included in the solution. Since the wrappers are contained in the solution, those inherit the APP Version, but we have to do it twice and then "copy" the other DLL to the machine that creates the installer. (A Pain…)
## Question
Is it possible to compile a DLL that will work with *any* version of the application, despite being "older"? I've read something about manifests but I have never had to interact with those. Any pointers will be appreciated.
The secret reason for this is that we haven't changed our wrappers in "ages" and neither did Microsoft with their ancient APIs, yet we are recompiling the DLL to match the app version on *every* release we make. I'd like to automate this process instead of having to rely on *two* machines.
I can't remove the DLL from the project (neither of them) because there are dependencies.
I could create a third "master wrapper" but haven't thought about it yet.
Any ideas? Anyone else with the same requirement?
## UPDATE
Clarifying:
I have 1 solution with N projects.
"Application" + Office11Wrapper.dll + Office12Wrapper.dll.
Both "wrappers" use dependencies for application + other libraries in the solution (datalayer, businesslayer, framework, etc.)
Each wrapper has references for the respective Office package (2003 and 2007).
If I compile and don't have office 12 installed, I get errors from Office12Wrapper.dll not finding the Office 2007 libraries.
So what I have are two building machines, one with Office 2003, one with Office 2007. After a full SVN update + compile on each machine, we simply use office12.dll in the "installer" to have the wrapper compiled against the "same code, same version".
Note: The Office 2007 build machine has the wrapper for Office 2003 "unloaded", and vice versa.
Thanks in advance.
|
When the .NET assembly resolver is unable to find a referenced assembly at runtime (in this case, it cannot find the particular wrapper DLL version the application was linked against), its default behavior is to fail and essentially crash the application. However, this behavior can be overridden by hooking the AppDomain.AssemblyResolve event. This event is fired whenever a referenced assembly cannot be found, and it gives you the opportunity to substitute another assembly in place of the missing one (provided that they are compatible). So, for instance, you could substitute an older version of the wrapper DLL that you load yourself.
The best way I've found to do this is to add a static constructor on the main class of the application that hooks the event, e.g.:
```
using System;
using System.Reflection;

static Program()
{
    AppDomain.CurrentDomain.AssemblyResolve += delegate(object sender, ResolveEventArgs e)
    {
        AssemblyName requestedName = new AssemblyName(e.Name);
        if (requestedName.Name == "Office11Wrapper")
        {
            // Put code here to load whatever version of the assembly you actually have
            return Assembly.LoadFile("Office11Wrapper.DLL");
        }
        else
        {
            return null;
        }
    };
}
```
By putting this in a static constructor of the main application class, it is guaranteed to run before any code attempts to access anything in the wrapper DLL, ensuring that the hook is in place ahead of time.
You can also use policy files to do version redirection, but that tends to be more complex.
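For illustration, an application-level binding redirect (a simpler relative of a full publisher policy) might look like the following sketch; the assembly name and version ranges are placeholders, and the public key token is left elided:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Office11Wrapper" publicKeyToken="..." culture="neutral" />
        <!-- Requests for any version in the old range are redirected to the one actually deployed -->
        <bindingRedirect oldVersion="1.0.0.0-2.0.0.0" newVersion="1.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```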
|
Just a thought - could you use TlbExp to create two interop assemblies (with different names and assemblies), and use an interface/factory to code against the two via your own interface? Once you have the interop dll, you don't need the COM dependency (except of course for testing etc).
TlbImp has a /asmversion flag for the version, so it could be done as part of the build script; but I'm not sure you even need this: just make sure that "Specific Version" is false on the reference (Solution Explorer)?
Also - I know it doesn't help, but C# 4.0 with `dynamic` and/or "No PIA" might help here (in the future; maybe).
|
Compile a version agnostic DLL in .NET
|
[
"",
"c#",
".net",
"visual-studio-2008",
"dll",
"version",
""
] |
Is it possible to get the originating port from an ActionExecutingContext object? If so, how?
|
Yes, look at `ActionExecutingContext.HttpContext.Request.Url.Port;`
|
use this:
context.HttpContext.Request.Path
OR
context.HttpContext.Request.PathBase
|
Get originating port from ActionExecutingContext?
|
[
"",
"c#",
"asp.net-mvc",
"logging",
""
] |
We're seeing `JTable` selection get cleared when we do a `fireTableDataChanged()` or `fireTableRowsUpdated()` from the `TableModel`.
Is this expected, or are we doing something wrong? I didn't see any property on the `JTable` (or other related classes) about clearing/preserving selection on model updates.
If this is default behavior, is there a good way to prevent this? Maybe some way to "lock" the selection before the update and unlock after?
The developer has been experimenting with saving the selection before the update and re-applying it. It's a little slow.
This is Java 1.4.2 on Windows XP, if that matters. We're limited to that version based on some vendor code we use.
|
You need to preserve the selection and then re-apply it.
First of all you will need to get a list of all the selected cells.
Then when you re-load the JTable with the new data you need to programmatically re-apply those same selections.
The other point I want to make is: if the number of rows or columns in your table increases or decreases after each table model reload, then don't bother preserving the selection.
The user could have selected row 2, column 1, holding a value such as "Duck", before the model update. But after the update that same data may now be in row 4, column 1, and the original cell at row 2, column 1 could hold new data such as "Pig". If you forcibly restore the selection to what it was before the model update, that may not be what the user wanted.
So programmatically selecting cells could be a double edged sword. Don't do it, if you are not sure.
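A minimal sketch of the save-and-reapply approach (the class and sample data are illustrative; it assumes plain row selection and that the row indices still line up after the reload):

```java
import javax.swing.JTable;
import javax.swing.table.DefaultTableModel;

public class PreserveSelection {
    public static void main(String[] args) {
        DefaultTableModel model = new DefaultTableModel(
                new Object[][] { { "Duck" }, { "Pig" }, { "Cow" } },
                new Object[] { "Animal" });
        JTable table = new JTable(model);
        table.setRowSelectionInterval(1, 1);        // the user has row 1 selected

        int[] selected = table.getSelectedRows();   // 1. save the selection
        model.fireTableDataChanged();               // 2. reload: JTable clears the selection
        for (int row : selected) {                  // 3. re-apply whatever still fits
            if (row < table.getRowCount()) {
                table.addRowSelectionInterval(row, row);
            }
        }
        System.out.println(table.isRowSelected(1)); // prints: true
    }
}
```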
|
You can automatically preserve a table's selection if the STRUCTURE of that table hasn't changed (i.e. if you haven't added/removed any columns/rows), as follows.
If you've written your own implementation of TableModel, you can simply override the fireTableDataChanged() method:
```
@Override
public void fireTableDataChanged() {
    fireTableChanged(new TableModelEvent(this,                        // tableModel
                                         0,                           // firstRow
                                         getRowCount() - 1,           // lastRow
                                         TableModelEvent.ALL_COLUMNS, // column
                                         TableModelEvent.UPDATE));    // changeType
}
```
and this should ensure that your selection is maintained, provided that only the data and not the structure of the table has changed. The only difference between this and what would be called if this method weren't overridden is that `getRowCount() - 1` is passed for the `lastRow` argument instead of `Integer.MAX_VALUE`, the latter of which acts as a signifier that not only has all the data in the table changed, but that the number of rows may have as well.
|
Preserve JTable selection across TableModel change
|
[
"",
"java",
"swing",
"jtable",
"tablemodel",
""
] |
It seems like there should be a simpler way than:
```
import string
s = "string. With. Punctuation?" # Sample string
out = s.translate(string.maketrans("",""), string.punctuation)
```
Is there?
|
From an efficiency perspective, you're not going to beat
```
s.translate(None, string.punctuation)
```
For higher versions of Python use the following code:
```
s.translate(str.maketrans('', '', string.punctuation))
```
It's performing raw string operations in C with a lookup table - there's not much that will beat that but writing your own C code.
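For instance, a quick check of the Python 3 form (same sample string as above):

```python
import string

s = "string. With. Punctuation?"
print(s.translate(str.maketrans('', '', string.punctuation)))
# string With Punctuation
```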
If speed isn't a worry, another option though is:
```
exclude = set(string.punctuation)
s = ''.join(ch for ch in s if ch not in exclude)
```
This is faster than s.replace with each char, but won't perform as well as non-pure python approaches such as regexes or string.translate, as you can see from the below timings. For this type of problem, doing it at as low a level as possible pays off.
Timing code:
```
import re, string, timeit
s = "string. With. Punctuation"
exclude = set(string.punctuation)
table = string.maketrans("","")
regex = re.compile('[%s]' % re.escape(string.punctuation))
def test_set(s):
return ''.join(ch for ch in s if ch not in exclude)
def test_re(s): # From Vinko's solution, with fix.
return regex.sub('', s)
def test_trans(s):
return s.translate(table, string.punctuation)
def test_repl(s): # From S.Lott's solution
for c in string.punctuation:
s=s.replace(c,"")
return s
print "sets :",timeit.Timer('f(s)', 'from __main__ import s,test_set as f').timeit(1000000)
print "regex :",timeit.Timer('f(s)', 'from __main__ import s,test_re as f').timeit(1000000)
print "translate :",timeit.Timer('f(s)', 'from __main__ import s,test_trans as f').timeit(1000000)
print "replace :",timeit.Timer('f(s)', 'from __main__ import s,test_repl as f').timeit(1000000)
```
This gives the following results:
```
sets : 19.8566138744
regex : 6.86155414581
translate : 2.12455511093
replace : 28.4436721802
```
|
Regular expressions are simple enough, if you know them.
```
import re
s = "string. With. Punctuation?"
s = re.sub(r'[^\w\s]','',s)
```
|
Best way to strip punctuation from a string
|
[
"",
"python",
"string",
"punctuation",
""
] |
I recently added JQuery's date-picker control to a project. In Internet Exploder, I get the following error message:
> Internet Explorer cannot open the
> Internet site
>
> <http://localhost/>
>
> Operation aborted
What is causing this problem?
|
**There was a related question earlier today**:
[**Operation Aborted Error in IE**](https://stackoverflow.com/questions/266585/operation-aborted-error-in-ie7)
This is a common problem.
It occurs in IE when a script tries to modify the DOM before the page is finished loading.
Take a look at what sort of scripts are executing. You'll find that something is getting started before the page is finished loading. You can use the window.onload event to correct the problem (or one of the onDomReady library functions).
|
Just elaborating Keparo's answer.
You can put your script inside one of the following functions (depending on the library you are using) and that will resolve the issue.
```
// prototype.js:
document.observe('dom:loaded', function () { /* your script goes here */ });

// jQuery:
jQuery(document).ready(function () { /* your script goes here */ });

// MooTools:
window.addEvent('domready', function () { /* your script goes here */ });
```
|
What is the "Operation Aborted" error in Internet Explorer?
|
[
"",
"javascript",
"internet-explorer",
""
] |
Is it possible to create an attribute that can be initialized with a variable number of arguments?
For example:
```
[MyCustomAttribute(new int[3,4,5])] // this doesn't work
public MyClass ...
```
|
Attributes will take an array. Though if you control the attribute, you can also use `params` instead (which is nicer to consumers, IMO):
```
class MyCustomAttribute : Attribute {
public int[] Values { get; set; }
public MyCustomAttribute(params int[] values) {
this.Values = values;
}
}
[MyCustomAttribute(3, 4, 5)]
class MyClass { }
```
Your syntax for array creation just happens to be off:
```
class MyCustomAttribute : Attribute {
public int[] Values { get; set; }
public MyCustomAttribute(int[] values) {
this.Values = values;
}
}
[MyCustomAttribute(new int[] { 3, 4, 5 })]
class MyClass { }
```
|
You can do it, but it isn't CLS compliant:
```
[assembly: CLSCompliant(true)]
class Foo : Attribute
{
public Foo(string[] vals) { }
}
[Foo(new string[] {"abc","def"})]
static void Bar() {}
```
Shows:
```
Warning 1 Arrays as attribute arguments is not CLS-compliant
```
For regular reflection usage, it may be preferable to have multiple attributes, i.e.
```
[Foo("abc"), Foo("def")]
```
However, this won't work with `TypeDescriptor`/`PropertyDescriptor`, where only a single instance of any attribute is supported (either the first or last wins, I can't recall which).
|
Can I initialize a C# attribute with an array or other variable number of arguments?
|
[
"",
"c#",
"attributes",
""
] |
**I have class A:**
```
public class ClassA<T>
```
**Class B derives from A:**
```
public class ClassB : ClassA<ClassB>
```
**Class C derives from class B:**
```
public class ClassC : ClassB
```
**Now I have a generic method with constraints**
```
public static T Method<T>() where T : ClassA<T>
```
OK, now I want to call:
```
ClassC c = Method<ClassC>();
```
but I get the compile error saying:
`Type argument 'ClassC' does not inherit from or implement the constraint type 'ClassA<ClassC>'.`
Yet, the compiler will allow:
```
ClassB b = Method<ClassB>();
```
My understanding is that this fails because `ClassC` inherits `ClassA<ClassB>` instead of `ClassA<ClassC>`
**My real question is, is it possible to create a class deriving from `ClassB` that can be used in some way with the generic method?**
This may seem like generics are overused and I would agree. I am trying to create business layer objects deriving from the subsonic data objects in a separate project.
Note: I have put the < T > with extra spaces otherwise they get stripped from the question.
|
Well, you could change Method to:
```
public static T Method<T,U>() where T : ClassA<U> where U : T
```
Does that help at all? It's not much use if you can't change Method of course...
|
No. You must change or wrap this method.
Here is the reason.
ClassC inherits from ClassB, which inherits from `ClassA<ClassB>`.
ClassC does not inherit from `ClassA<ClassC>`.
No child of ClassB will inherit from `ClassA<child class>`, because they instead inherit from ClassB, and ClassB does not inherit from `ClassA<child class>`.
[Generic types are invariant](http://msdn.microsoft.com/en-us/library/aa479859.aspx#fundamentals_topic12).
|
Generic constraints on derived classes
|
[
"",
"c#",
"generics",
"subsonic",
"constraints",
""
] |
This code does not seem to compile; I just need to write something to a small log text file (a new row at the end of the file).
```
<%@ Import Namespace="System.IO" %>
void Page_Load( object sender, EventArgs e ){
FileSystem myFileSystem = new FileSystem();
myFileSystem.WriteAllText(logFile, hash, false);
```
|
I can't see any class called FileSystem in [the `System.IO` namespace](http://msdn.microsoft.com/en-us/library/system.io.aspx). Is this something new in .NET 4.0 which you're trying to use?
Note that the [`File`](http://msdn.microsoft.com/en-us/library/system.io.file.aspx) class has a *static* method called [`WriteAllText`](http://msdn.microsoft.com/en-us/library/system.io.file.writealltext.aspx). Is that what you meant?
EDIT: To append to a file instead, use [`File.AppendAllText`](http://msdn.microsoft.com/en-us/library/system.io.file.appendalltext.aspx).
|
FileSystem is a class from the VisualBasic namespace:
<http://msdn.microsoft.com/en-us/library/microsoft.visualbasic.filesystem.aspx>
Have a look at the FileStream class in C#:
<http://msdn.microsoft.com/en-us/library/system.io.filestream.aspx>
|
How to write something to .txt log file from .aspx page (c#)
|
[
"",
"c#",
".net",
"logging",
"asp.net",
""
] |
I'm trying to create a table with two columns comprising the primary key in MySQL, but I can't figure out the syntax. I understand single-column PKs, but the syntax isn't the same to create a primary key with two columns.
|
```
CREATE TABLE table_name
(
c1 INT NOT NULL,
c2 INT NOT NULL,
PRIMARY KEY (c1, c2)
)
```
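The same composite-key DDL can be checked quickly with SQLite (used here purely for convenience; the `PRIMARY KEY (c1, c2)` syntax is the portable part):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE table_name (
        c1 INT NOT NULL,
        c2 INT NOT NULL,
        PRIMARY KEY (c1, c2)
    )
""")
conn.execute("INSERT INTO table_name VALUES (1, 1)")
conn.execute("INSERT INTO table_name VALUES (1, 2)")      # same c1, different c2: allowed
try:
    conn.execute("INSERT INTO table_name VALUES (1, 1)")  # duplicate (c1, c2) pair
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```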
|
Try:
```
create table .....
primary key (`id1`, `id2`)
)
```
|
How do I declare a multi-column PK in MySQL
|
[
"",
"mysql",
"sql",
"database",
"ddl",
""
] |
If I had a phone number like this
```
string phone = "6365555796";
```
Which I store with only numeric characters in my database **(as a string)**, is it possible to output the number like this:
```
"636-555-5796"
```
Similar to how I could if I were using a number:
```
long phone = 6365555796;
string output = phone.ToString("000-000-0000");
```
I've tried searching and all I could find online were numeric formatting documents.
The reason I ask is because I think it would be an interesting idea to be able to store only the numeric values in the DB and allow for different formatting using a Constant string value to dictate how my phone numbers are formatted. Or am I better off using a number for this?
EDIT: The question is to format a string that contains numbers, not a number itself.
|
The best I can think of, without converting to a long/number, that still fits on one line is:
```
string number = "1234567890";
string formattedNumber = string.Format("{0}-{1}-{2}", number.Substring(0,3), number.Substring(3,3), number.Substring(6));
```
|
Be aware that not everyone uses the North American 3-3-4 format for phone numbers. European phone numbers can be up to 15 digits long, with significant punctuation, e.g. +44-XXXX-XXXX-XXXX is different from 44+XXXX-XXXX-XXXX. You are also not considering PBXs and extensions, which can require over 30 digits.
Military and radio phones can have alphabetic characters; no, this is not the "2" = "ABC" you see on touch-tone phones.
|
Can I Format A String Like A Number in .NET?
|
[
"",
"c#",
".net",
"formatting",
""
] |
I have been asked to write a testing application that needs to test a new stored procedure on multiple rows in a database, in essence I want to do something like this:
```
[Test]
public void TestSelect()
{
foreach(id in ids)
{
DataTable old = Database.call("old_stored_proc",id);
DataTable new_ = Database.call("new_stored_proc",id);
Assert.AreEqual(old.Rows[0]["column"], new_.Rows[0]["column"]);
}
}
```
When I run this test, if 1 row doesn't match the other, the entire test fails; instead I would like to count how many times the assertion was passed and how many times it has failed. Is there a way to do this with NUnit?
I realize that NUnit might be overkill and this is a simple task without it...I just wanted to learn it. ;)
|
1) If the id's are constant and not looked up at test run time, create a separate unit test fixture for each id. That way you will know which id's are actually failing. See here for a write up on the problems with data driven tests:
<http://googletesting.blogspot.com/2008/09/tott-data-driven-traps.html>
2) If you need to dynamically look up the id's making it impossible to create a fixture for each id, use akmad's suggestion with one change. Keep a list of id's where the values are not equal and add the list to the error message. It will be extremely difficult to diagnose a failing test that only states the number of errors, as you won't know what id's cause the errors.
3) I don't know how difficult it would be to do in NUnit, but in PyUnit, when we need to run tests on dynamically generated data, we dynamically create tests fixtures and attach them to the TestCase class so that we have a failed test for each piece of data that does not pass. Though I imagine this would be much more difficult without python's dynamic abilities.
|
Seems like you are just Asserting the wrong thing. If you want to check all the values and then assert that there are no errors (or show the number of errors) then try this:
```
[Test]
public void TestSelect()
{
int errors = 0;
foreach(id in ids)
{
DataTable old = Database.call("old_stored_proc",id);
DataTable new_ = Database.call("new_stored_proc",id);
if (!Equals(old.Rows[0]["column"], new_.Rows[0]["column"]))
{
errors++;
}
}
Assert.AreEqual(0, errors, "There were " + errors + " errors.");
}
```
|
NUnit: Running multiple assertions in a single test
|
[
"",
"c#",
"unit-testing",
"nunit",
""
] |
In Visual Studio, I can select the "Treat warnings as errors" option to prevent my code from compiling if there are any warnings. Our team uses this option, but there are two warnings we would like to keep as warnings.
There is an option to suppress warnings, but we DO want them to show up as warnings, so that won't work.
It appears that the only way to get the behavior we want is to enter a list of every C# warning number into the "Specific warnings" text box, except for the two we want treated as warnings.
Besides the maintenance headache, the biggest disadvantage to this approach is that a few warnings do not have numbers, so they can't be referenced explicitly. For example, "Could not resolve this reference. Could not locate assembly 'Data....'"
Does anyone know of a better way to do this?
---
Clarifying for those who don't see immediately why this is useful. Think about how most warnings work. They tell you something is a little off in the code you just wrote. It takes about 10 seconds to fix them, and that keeps the code base cleaner.
The "Obsolete" warning is very different from this. Sometimes fixing it means just consuming a new method signature. But if an entire class is obsolete, and you have usage of it scattered through hundreds of thousands of lines of code, it could take weeks or more to fix. You don't want the build to be broken for that long, but you definitely DO want to see a warning about it. This isn't just a hypothetical case--this has happened to us.
Literal "#warning" warnings are also unique. I often *want* to check it in, but I don't want to break the build.
|
In Visual Studio 2022 we have a new Project Properties UI which includes an editor for this.
Under *Build | Errors and Warnings* if you set *Treat warnings as errors* to *All*, then another property appears which allows you to exempt specific warnings from being treated as errors:
[](https://i.stack.imgur.com/nOUWX.png)
This will add the following property to your project:
```
<WarningsNotAsErrors>618,1030,1701,1702</WarningsNotAsErrors>
```
|
You can add a `WarningsNotAsErrors`-tag in the project file.
```
<PropertyGroup>
...
...
<WarningsNotAsErrors>618,1030,1701,1702</WarningsNotAsErrors>
</PropertyGroup>
```
Note: `612` and `618` are both warnings about Obsolete; I don't know the difference, but the project I'm working on reports Obsolete with warning 618.
|
"Treat all warnings as errors except..." in Visual Studio
|
[
"",
"c#",
"visual-studio",
"msbuild",
""
] |
Basically I want to know the best way to hide/show an ASP.NET control from a Javascript function. I figured I would just access the control in Javascript using:
```
var theControl = document.getElementById("txtEditBox");
```
Then just set the control's Visible property to true/false. It doesn't seem to be working; I can't figure out how to set "Visible" to true/false. How can I do that? Also, is that the best way to hide/show an ASP.NET control from a Javascript function?
Thanks,
Jeff
|
The "Visible" property of an ASP.NET control determines whether or not it will be rendered on the client (i.e. sent to the client). If it is false when the page is rendered, it will never arrive at the client.
So, you cannot, technically, set that property of the control.
That said, if the control *is* rendered on the client because the Visible property is true when the page is rendered, you can then hide it using javascript like this:
```
var theControl = document.getElementById("txtEditBox");
theControl.style.display = "none";
// to show it again:
theControl.style.display = "";
```
That assumes that the control's `id` attribute really is "txtEditBox" on the client and that it is already visible.
> Also, is that the best way to hide/show a ASP.NET control from a Javascript function?
There is not necessarily a "best" way, although one better approach is to use CSS class definitions:
```
.invisible { display: none; }
```
When you want to hide something, dynamically apply that class to the element; when you want to show it again, remove it. Note, I believe this will only work for elements whose `display` value starts off as `block`.
|
Instead of using `Visible`, set its CSS to `display:none`:
```
//css:
.invisible { display:none; }
//C#
txtEditBox.CssClass = "invisible";
txtEditBox.CssClass = ""; // visible again
//javascript
document.getElementById('txtEditBox').className = 'invisible'
document.getElementById('txtEditBox').className = ''
```
|
How do you set the "Visible" property of a ASP.NET control from a Javascript function?
|
[
"",
"asp.net",
"javascript",
""
] |
Following techniques from 'Modern C++ Design', I am implementing a persistence library with various compile-time optimisations. I would like the ability to dispatch a function to a templated member variable if that variable derives from a given class:
```
template<class T, template <class> class Manager = DefaultManager> class Data
{
private:
T *data_;
public:
void Dispatch()
{
if(SUPERSUBCLASS(Container, T))
{
data_->IKnowThisIsHere();
}
else
{
Manager<T>::SomeGenericFunction(data_);
}
}
};
```
Where SUPERSUBCLASS is a compile-time macro to determine object inheritance. Of course, this fails in all cases where T does not inherit from Container (or T is an intrinsic type, etc.) because the compiler rightly complains that IKnowThisIsHere() is not a member, even though this code path will never be followed, as shown here after preprocessing with T = int:
```
private:
int *data_;
public:
void Dispatch()
{
if(false)
{
data_->IKnowThisIsHere();
```
Compiler clearly complains at this code, even though it will never get executed. A suggestion of using a dynamic\_cast also does not work, as again a type conversion is attempted at compile time that is not possible (for example with T=double, std::string):
```
void Dispatch()
{
if(false)
{
dynamic_cast<Container*>(data_)->IKnowThisIsHere();
error: cannot dynamic_cast '((const Data<double, DefaultManager>*)this)->Data<double, DefaultManager>::data_' (of type 'double* const') to type 'class Container*' (source is not a pointer to class)
error: cannot dynamic_cast '((const Data<std::string, DefaultManager>*)this)->Data<std::string, DefaultManager>::data_' (of type 'struct std::string* const') to type 'class Container*' (source type is not polymorphic)
```
I really need to emulate (or indeed persuade!) the compiler into emitting one set of code if T inherits from Container, and another if it does not.
Any suggestions?
|
Overloading can be useful to implement compile-time dispatching, as proposed by *Alexandrescu* in his book "Modern C++ Design".
You can use a class like this to transform at compile time a boolean or integer into a type:
```
template <bool n>
struct int2type
{ enum { value = n}; };
```
The following source code shows a possible application:
```
#include <iostream>
#define MACRO() true // <- macro used to dispatch
template <bool n>
struct int2type
{ enum { value = n }; };
void method(int2type<false>)
{ std::cout << __PRETTY_FUNCTION__ << std::endl; }
void method(int2type<true>)
{ std::cout << __PRETTY_FUNCTION__ << std::endl; }
int
main(int argc, char *argv[])
{
// MACRO() determines which function to call
//
method( int2type<MACRO()>());
return 0;
}
```
Of course, what really does the job here is MACRO(), or better, a proper metafunction implementation in its place.
|
You require a kind of compile-time `if`. This then calls a function depending on which case is `true`. This way, the compiler won't stumble upon code which it can't compile (because that is safely stored away in another function template that never gets instantiated).
There are several ways of realizing such a compile-time `if`. The most common is to employ the SFINAE idiom: [substitution failure is not an error](http://en.wikipedia.org/wiki/SFINAE). Boost's `is_base_of` is actually an instance of this idiom. To employ it correctly, you wouldn't write it in an `if` expression, but rather use it as the return type of your function.
Untested code:
```
void Dispatch()
{
    myfunc(data_);
}

private:

// EDIT: disabled the default case where the specialisation matched
template <typename U>
typename enable_if_c<is_base_of<Container, U>::value, void>::type
myfunc(U* data) {
    data->IKnowThisIsHere();
}

template <typename U>
typename disable_if_c<is_base_of<Container, U>::value, void>::type
myfunc(U* data) { // default case
    Manager<U>::SomeGenericFunction(data);
}
```
|
Compile-time type based dispatch
|
[
"",
"c++",
"templates",
""
] |
Similar question as [this one](https://stackoverflow.com/questions/56722/automated-processing-of-an-email-in-java) but for a Microsoft Environment.
Email --> Exchange Server -->[something]
For the [something] I was using Outlook 2003 & C# but it *feels* messy (A program is trying to access outlook, this could be a virus etc)
```
Microsoft.Office.Interop.Outlook.Application objOutlook = new Microsoft.Office.Interop.Outlook.Application();
Microsoft.Office.Interop.Outlook.NameSpace objNS = objOutlook.GetNamespace("MAPI");
objNS.Logon("MAPIProfile", "MAPIPassword", false, true);
```
Is this the best way to do it? Is there a better way of retrieving and processing emails in a Microsoft environment?
|
[This](http://www.codeproject.com/KB/IP/NetPopMimeClient.aspx) library provides you basic support for the POP3 protocol and MIME, you can use it to check specified mailboxes and retrieve emails and attachments, you can tweak it to your needs.
Here is [another library](http://www.codeproject.com/KB/IP/imaplibrary.aspx), this one is for the IMAP protocol, it's very basic but also allows you to fetch complete messages, including attachments...
|
I've been happy with the [Rebex components](http://www.rebex.net/mail.net/) which provide IMAP access. Of course you need to ensure your Exchange administrators will open an IMAP port on your Exchange servers.
|
Automated processing of an Email in C#
|
[
"",
"c#",
"outlook-2003",
""
] |
Should I be writing Doc Comments for all of my java methods?
|
@Claudiu
> When I write code that others will use - Yes. Every method that somebody else can use (any public method) should have a javadoc at least stating its obvious purpose.
@Daniel Spiewak
> I thoroughly document every public method in every API class. Classes which have public members but which are not intended for external consumption are prominently marked in the class javadoc. I also document every protected method in every API class, though to a lesser extent. This goes on the idea that any developer who is extending an API class will already have a fair concept of what's going on.
>
> Finally, I will occasionally document private and package private methods for my own benefit. Any method or field that I think needs some explanation in its usage will receive documentation, regardless of its visibility.
@Paul de Vrieze
> For things like trivial getters and setters, share the comment between them, and describe the purpose of the property, not of the getter/setter
```
/**
* Get the current value of the foo property.
* The foo property controls the initial guess used by the bla algorithm in
* {@link #bla}
* @return The initial guess used by {@link #bla}
*/
int getFoo() {
return foo;
}
```
And yes, this is more work.
@VonC
When you break a huge complex method (because of [high cyclomatic complexity](https://stackoverflow.com/questions/105852/conditional-logging-with-minimal-cyclomatic-complexity) reason) into:
* one public method calling
* several private methods which represent internal steps of the public one
, it is very useful to javadoc the private methods as well, even though that documentation will not be visible in the javadoc API files.
Still, it allows you to remember more easily the precise nature of the different steps of your complex algorithm.
And remember: **[limit values or boundary conditions](https://stackoverflow.com/questions/61604)** should be part of your javadoc as well.
Plus, ***javadoc is way better than simple "//comment"***:
* It is recognized by IDE and used to display a pop-up when you move your cursor on top of one of your - javadoc-ed - function. For instance, a *constant* - that is private static final variable -, should have a javadoc, especially when its value is not trivial. Case in point: ***regexp*** (its javadoc should includes the regexp in its non-escaped form, what is purpose is and a literal example matched by the regexp)
* It can be parsed by external tools (like [xdoclet](http://xdoclet.sourceforge.net/xdoclet/index.html))
@Domci
> For me, if somebody will see it or not doesn't matter - it's not likely I'll know what some obscure piece of code I wrote does after a couple of months. [...]
> In short, comment logic, not syntax, and do it only once, on a proper place.
@Miguel Ping
> In order to comment something, you have to understand it first. When you are trying to comment a function, you are actually thinking about what the method/function/class does, and this makes you more specific and clear in your javadoc, which in turn makes you write clearer and more concise code, which is good.
|
If the method is, obviously self evident, I might skip a javadoc comment.
Comments like
```
/** Does Foo */
void doFoo();
```
Really aren't that useful. (Overly simplistic example, but you get the idea)
|
Do you use Javadoc for every method you write?
|
[
"",
"java",
"api",
"javadoc",
""
] |
I can't seem to get my application up and running on my dev server and I'm not sure why.
I have compiled my code in VS 2008 with a target framework of 3.5. I am using 3.5 mainly because I have implemented LINQ rather extensively. Compiling and runs local without any problems.
The hang up is that my server only has the 2.0 .Net framework and upgrading to 3.5 is apparently not going to happen.
I was under the impression after doing some research that as long as I was trying to execute compiled code the server would not need 3.5 installed.
Today I am trying to publish to the server and I can't get past this error in my WEB.CONFIG
Configuration Error
Parser Error Message: Child nodes not allowed.
`<providerOption name="CompilerVersion" value="v3.5"/>`
EDIT ADD ON QUESTION:
I have seen some posts about possibly setting my references to "copy local" which might allow me to run on the 2.0 server. Thoughts?
|
You can just copy over the 3.5 dlls onto the server. You can absolutely run 3.5 code on a 2.0 server.
|
You are right in that 3.5 runs on the 2.0 CLR, but 3.5 contains libraries and if you have used any of those, you're out of luck unless you install 3.5 on that server.
There are plenty of options for a 3.5 program to not run correctly on only 2.0, so I'd consider downgrading the program, or upgrading the server.
---
Note regarding *copy local*. Even if you copy all the 3.5 libraries that your app uses, there is no guarantee it'll work, and most likely it won't. Moreover, distributing the framework libraries with your app is expressly prohibited by the .NET license.
Since you have stated you use LINQ, the only legal way to get your app running is to install .NET 3.5 on the server.
Or, you can rewrite your app using only 2.0.
|
Problems executing compiled 3.5 code on a server which only has the 2.0 framework
|
[
"",
"c#",
"asp.net",
".net-3.5",
""
] |
I'm just trying to get a general idea of what views are used for in RDBMSes. That is to say, I know what a view is and how to make one. I also know what I've used them for in the past.
But I want to make sure I have a thorough understanding of what a view is useful for and what a view shouldn't be useful for. More specifically:
1. What is a view useful for?
2. Are there any situations in which it is tempting to use a view when you shouldn't use one?
3. Why would you use a view in lieu of something like a table-valued function or vice versa?
4. Are there any circumstances that a view might be useful that aren't apparent at first glance?
(And for the record, some of these questions are intentionally naive. This is partly a concept check.)
|
*1) What is a view useful for?*
> **IOPO** In One Place Only
>
> •Whether you consider the data itself or the queries that reference the joined tables, utilizing a view avoids unnecessary redundancy.
>
> •Views also provide an abstracting layer preventing direct access to the tables (and the resulting handcuffing referencing physical dependencies). In fact, I think it's *good practice*1 to offer only abstracted access to your underlying data (using views & table-valued functions), including views such as
>
> ```
> CREATE VIEW AS
> SELECT * FROM tblData
> ```
>
> 1I hafta admit there's a good deal of "Do as I say; not as I do" in that advice ;)
*2) Are there any situations in which it is tempting to use a view when you shouldn't use one?*
> Performance in view joins used to be a concern (e.g. SQL 2000). I'm no expert, but I haven't worried about it in a while. (Nor can I think of where I'm presently using view joins.)
>
> Another situation where a view might be overkill is when the view is only referenced from one calling location and a derived table could be used instead. Just like an anonymous type is preferable to a class in .NET if the anonymous type is only used/referenced once.
>
> • See the derived table description in http://msdn.microsoft.com/en-us/library/ms177634.aspx
*3) Why would you use a view in lieu of something like a table-valued function or vice versa?*
> (Aside from performance reasons) A table-valued function is functionally equivalent to a parameterized view. In fact, a common simple table-valued function use case is simply to add a WHERE clause filter to an already existing view in a single object.
*4) Are there any circumstances that a view might be useful that aren't apparent at first glance?*
> I can't think of any non-apparent uses off the top of my head. (I suppose if I could, that would make them apparent ;)
|
In a way, a view is like an interface. You can change the underlying table structure all you want, but the view gives a way for the code to not have to change.
Views are a nice way of providing something simple to report writers. If your business users want to access the data from something like Crystal Reports, you can give them some views in their account that simplify the data -- maybe even denormalize it for them.
|
What are views good for?
|
[
"",
"sql",
"view",
"rdbms-agnostic",
""
] |
We need to handle this event in the base form, regardless of which controls currently have focus. We have a couple of global key commands that need to work regardless of control focus.
This works by handling the PreviewKeyDown event in the form normally. When we add a user control to the form, the event no longer fires.
Am I missing something trivial here? Or do we need to handle the event in the user control first?
Thanks for your help!
Thanks Factor. When I get more time :) I'll get it working 'properly'!
|
We ended up doing this:
I found a workaround for this by setting up a hidden menu item by setting:
```
ToolStripMenuItem.Visible = false
```
(Thanks to [this article)](http://blogs.msdn.com/jfoscoding/archive/2005/01/24/359334.aspx).
It appears that the Main Menu of a form always gets searched for your shortcut key combination. This works regardless of which control has focus.
|
The hidden menu you are using works fine for shortcuts that are valid menu item shortcuts, but if you want to use any key as a shortcut (such as Page Up/Page Down), you'll need a different trick.
Another way to do this that doesn't involve P/Invoke is to set the `Form.KeyPreview` property of your form to true. This will cause all key presses to be sent to the form first, regardless of which control has focus. You can then override OnKeyDown, OnKeyPress, and/or OnKeyUp to handle the key press before any of your controls.
|
Always handle the PreviewKeyDown event in a base form
|
[
"",
"c#",
"winforms",
""
] |
I've got a collection (List<Rectangle>) which I need to sort left-right. That part's easy. Then I want to iterate through the Rectangles in their *original* order, but easily find their index in the sorted collection. indexOf() won't work, since I may have a number of equal objects. I can't help feeling there should be an easy way to do this.
|
I've found a solution - but perhaps there is a neater/more optimal one out there.
```
List<Rectangle> originalRects = ...;
/* record index of each rectangle object.
* Using a hash map makes lookups efficient,
* and using an IdentityHashMap means we lookup by object identity
* not value.
*/
IdentityHashMap<Rectangle, Integer> originalIndices = new IdentityHashMap<Rectangle, Integer>();
for(int i=0; i<originalRects.size(); i++) {
originalIndices.put(originalRects.get(i), i);
}
/* copy rectangle list */
List<Rectangle> sortedRects = new ArrayList<Rectangle>();
sortedRects.addAll(originalRects);
/* and sort */
Collections.sort(sortedRects, new LeftToRightComparator());
/* Loop through the sorted list */
for(int i=0; i<sortedRects.size(); i++) {
    Rectangle rect = sortedRects.get(i);
    /* Look up the original index efficiently */
    int origIndex = originalIndices.get(rect);
    /* Now I know the original and sorted indices, plus the rectangle itself */
    ...
}
```
|
If you don't have tens of thousands of objects, you could just store them in two separate collections, one original, one sorted. Remember that collection classes in Java only store *references* to objects, so this doesn't take up as much memory as it might seem.
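The same bookkeeping can also be done by sorting the *indices* rather than the objects, which sidesteps the duplicate-value problem entirely. A Python sketch of the idea (my addition, not from the original answers; the `(x, width)` tuples are stand-ins for `Rectangle`s):

```python
# Stand-ins for Rectangles: (x, width) pairs; sort left-to-right by x.
rects = [(30, 5), (10, 2), (20, 7)]

# order[k] = original index of the k-th leftmost rectangle
order = sorted(range(len(rects)), key=lambda i: rects[i][0])

# Invert it: sorted_pos[i] = position of original rectangle i in the sorted order
sorted_pos = [0] * len(rects)
for k, i in enumerate(order):
    sorted_pos[i] = k

print(order)       # [1, 2, 0]
print(sorted_pos)  # [2, 0, 1]
```

Because only index lists are built, the original collection is never copied or mutated.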
|
How to map sorted index back to original index for collection I'm sorting
|
[
"",
"java",
"sorting",
"collections",
""
] |
What's a good way to generate an icon in-memory in python? Right now I'm forced to use pygame to draw the icon, then I save it to disk as an .ico file, and then I load it from disk as an ICO resource...
Something like this:
```
if os.path.isfile(self.icon):
icon_flags = win32con.LR_LOADFROMFILE | win32con.LR_DEFAULTSIZE
hicon = win32gui.LoadImage(hinst,
self.icon,
win32con.IMAGE_ICON,
0,
0,
icon_flags)
```
...where self.icon is the filename of the icon I created.
Is there any way to do this in memory? EDIT: All I want to do is create an icon with a 2-digit number displayed on it (weather-taskbar style).
|
You can use [wxPython](http://wxpython.org/) for this.
```
from wx import EmptyIcon
icon = EmptyIcon()
icon.CopyFromBitmap(your_wxBitmap)
```
The [wxBitmap](http://docs.wxwidgets.org/stable/wx_wxbitmap.html#wxbitmap) can be generated in memory using [wxMemoryDC](http://docs.wxwidgets.org/stable/wx_wxmemorydc.html#wxmemorydc), look [here](http://docs.wxwidgets.org/stable/wx_wxdc.html) for operations you can do on a DC.
This icon can then be applied to a wxFrame (a window) or a wxTaskBarIcon using:
```
frame.SetIcon(icon)
```
|
You can probably create an object that mimics the python file-object interface.
<http://docs.python.org/library/stdtypes.html#bltin-file-objects>
|
Create an icon in memory with win32 in python
|
[
"",
"python",
"windows",
"winapi",
"icons",
""
] |
I need to check the `RequestType` of an `HttpRequest` in ASP.NET (or `WebRequest.Method`). I know that I can just use the string values "`POST`" or "`GET`" for the request type, but I could have sworn there was a constant somewhere in some class in .NET that contained the values.
Out of curiosity I was wondering if anyone knew what class these string constants for `GET` and `POST` were in. I've tried searching online but I've had no luck, so I thought I'd ask here.
|
```
System.Net.WebRequestMethods.Http
.Connect = "CONNECT"
.Get = "GET"
.Head = "HEAD"
.MkCol = "MKCOL"
.Post = "POST"
.Put = "PUT"
```
Ultimately, though, since `const` expressions are burned into the caller, this is identical to using "GET" etc., just without the risk of a typo.
|
There is also [`System.Net.Http.HttpMethod`](https://learn.microsoft.com/en-us/dotnet/api/system.net.http.httpmethod), which can serve as an enum-like alternative. You can compare instances: `aMethod == HttpMethod.Get`, etc. To get the string method name, call e.g. `HttpMethod.Get.Method`.
|
Where is the constant for "HttpRequest.RequestType" and "WebRequest.Method" values in .NET?
|
[
"",
"c#",
".net",
"asp.net",
""
] |
Let's imagine I got this:
index.php generates a form with an unpredictable number of inputs with certain IDs/names and different values that can be edited by the user and saved by script.php
```
<form action="script.php" method="post">
<input id="1" name="1" type="text" value="1"/>
<input id="24" name="24" type="text" value="2233"/>
<input id="55" name="55" type="text" value="231321"/>
</form>
```
Script.php:
Here I need to get something like an array of all inputs that were generated by index.php and save every value that corresponds to its id/name.
Is there a way to do this?
|
I may be missing something in your question, but the `$_POST` variable will contain all the name => value pairs you're asking for. For example, with your above HTML snippet:
```
print_r($_POST);
// contains:
array
(
[1] => 1
[24] => 2233
[55] => 231321
)
// example access:
foreach($_POST as $name => $value) {
print "Name: {$name} Value: {$value} <br />";
}
```
|
Use `array_keys` on the `$_POST` variable in script.php to pull out the names you created, and use those to get the values.
```
$keys = array_keys( $_POST );
foreach( $keys as $key ) {
echo "Name=" . $key . " Value=" . $_POST[$key];
}
```
|
Is there some list of input's IDs or names of Form after the script was sent?
|
[
"",
"php",
"html",
"forms",
"numbers",
"input",
""
] |
I often have to convert a retrieved value (usually as a string) - and then convert it to an int. But in C# (.Net) you have to choose either Int16, Int32 or Int64 - how do you know which one to choose when you don't know how big your retrieved number will be?
|
Everyone here who has mentioned that declaring an Int16 saves ram should get a downvote.
The answer to your question is to use the keyword "int" (or if you feel like it, use "Int32").
That gives you a range of roughly ±2.1 billion... Also, 32bit processors will handle those ints better... also (and **THE MOST IMPORTANT REASON**) is that if you plan on using that int for almost any reason... it will likely need to be an "int" (Int32).
In the .Net framework, 99.999% of numeric fields (that are whole numbers) are "ints" (Int32).
Example: Array.Length, Process.ID, Windows.Width, Button.Height, etc, etc, etc 1 million times.
EDIT: I realize that my grumpiness is going to get me down-voted... but this is the right answer.
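For reference (my addition, not part of the original answer), the ranges of the three signed types, computed directly from their bit widths:

```python
# Two's-complement range for each signed integer width.
for bits in (16, 32, 64):
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    print(f"Int{bits}: {lo:,} .. {hi:,}")

# Int16: -32,768 .. 32,767
# Int32: -2,147,483,648 .. 2,147,483,647
# Int64: -9,223,372,036,854,775,808 .. 9,223,372,036,854,775,807
```

So Int32 covers about 4.3 billion distinct values, i.e. roughly ±2.1 billion.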
|
Just wanted to add that... I remembered that in the days of .NET 1.1 the compiler was optimized so that **'int' operations are actually faster than byte or short operations.**
I believe it still holds today, but I'm running some tests now.
---
EDIT: I have got a surprise discovery: the add, subtract and multiply operations for short(s) actually return int!
|
Converting to int16, int32, int64 - how do you know which one to choose?
|
[
"",
"c#",
"integer",
"typeconverter",
""
] |
[This article](http://blogs.msdn.com/larryosterman/archive/2004/09/10/228068.aspx) gives a good overview on why structured exception handling is bad. Is there a way to get the robustness of stopping your server from crashing, while getting past the problems mentioned in the article?
I have a server software that runs about 400 connected users concurrently. But if there is a crash all 400 users are affected. We added structured exception handling and enjoyed the results for a while, but eventually had to remove it because of some crashes causing the whole server to hang (which is worse than just having it crash and restart itself).
So we have this:
* With SEH: only 1 user of the 400 get a problem for most crashes
* Without SEH: If any user gets a crash, all 400 are affected.
* But sometimes with SEH: Server hangs, all 400 are affected and future users that try to connect.
|
Break your program up into worker processes and a single server process. The server process will handle initial requests and then hand them off to the worker processes. If a worker process crashes, only the users on that worker are affected. Don't use SEH for general exception handling - as you have found out, it can and will leave you wide open to deadlocks, and you can still crash anyway.
|
Using SEH because your program crashes randomly is a bad idea. It's not magic pixie dust that you can sprinkle on your program to make it stop crashing. Tracking down and fixing the bugs that cause the crashes is the right solution.
Using SEH when you really need to handle a structured exception is fine. Larry Osterman made a followup post explaining [what situations require SEH: memory mapped files, RPC, and security boundary transitions](http://blogs.msdn.com/larryosterman/archive/2006/10/16/so-when-is-it-ok-to-use-seh.aspx).
|
Structured exception handling with a multi-threaded server
|
[
"",
"c++",
"exception",
"seh",
""
] |
I've never used [SOAP](http://en.wikipedia.org/wiki/SOAP) before and I'm sort of new to Python. I'm doing this to get myself acquainted with both technologies. I've installed [SOAPlib](http://trac.optio.webfactional.com/wiki/soaplib) and I've tried to read their [Client](http://trac.optio.webfactional.com/wiki/Client) documentation, but I don't understand it too well. Is there anything else I can look into which is more suited for being a SOAP Client library for Python?
Edit: Just in case it helps, I'm using Python 2.6.
|
***Update (2016):***
If you only need a SOAP client, there is a well-maintained library called [zeep](https://pypi.org/project/zeep/). It supports both Python 2 and 3 :)
---
***Update:***
Additionally to what is mentioned above, I will refer to [Python WebServices](https://wiki.python.org/moin/WebServices#SOAP) page which is always up-to-date with all actively maintained and recommended modules to **SOAP** and all other webservice types.
---
Unfortunately, at the moment, I don't think there is a "best" Python SOAP library. Each of the mainstream ones available has its own pros and cons.
Older libraries:
* [SOAPy](http://soapy.sourceforge.net): Was the "best," but no longer maintained. Does not work on Python 2.5+
* [ZSI](http://pywebsvcs.sourceforge.net): Very painful to use, and development is slow. Has a module called "SOAPpy", which is different than SOAPy (above).
"Newer" libraries:
* [SUDS](https://fedorahosted.org/suds): Very Pythonic, and easy to create WSDL-consuming SOAP clients. Creating SOAP servers is a little bit more difficult. (This package does not work with Python3. For Python3 see SUDS-py3)
* [SUDS-py3](https://pypi.org/project/suds-py3/): The Python3 version of SUDS
* [spyne](https://github.com/arskom/spyne): Creating servers is easy, creating clients a little bit more challenging. Documentation is somewhat lacking.
* [ladon](http://pypi.python.org/pypi/ladon): Creating servers is much like in soaplib (using a decorator). Ladon exposes more interfaces than SOAP at the same time without extra user code needed.
* [pysimplesoap](http://code.google.com/p/pysimplesoap/): very lightweight but useful for both client and server - includes a web2py server integration that ships with web2py.
* [SOAPpy](https://github.com/kiorky/SOAPpy): Distinct from the abandoned SOAPpy that's hosted at the ZSI link above, this version was actually maintained until 2011, now it seems to be abandoned too.
* [soaplib](http://www.python.org/pypi/soaplib): Easy to use python library for writing and calling soap web services. Webservices written with soaplib are simple, lightweight, work well with other SOAP implementations, and can be deployed as WSGI applications.
* [osa](https://bitbucket.org/sboz/osa): A fast/slim easy to use SOAP python client library.
Of the above, I've only used SUDS personally, and I liked it a lot.
|
I followed the advice of other answers to this question and gave [SUDS](https://fedorahosted.org/suds "SUDS") a try. After using it "in anger" I must agree: SUDS is very nice! Highly recommended!
I did run into trouble calling HTTPS-based web services from behind a proxy. At the time of this writing, this affects **all** Python web-service clients that use `urllib2`, so I'll document the solution here.
The `urllib2` module shipping with python 2.6.2 and below will not issue a `CONNECT` to the proxy for HTTPS-over-HTTP-proxy sessions. This results in a long timeout, or if you are lucky, an error that looks like:
```
abort: error: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
```
This was [issue1424152](http://bugs.python.org/issue1424152) on the Python bug tracker. There are patches attached to the bug report that will fix this in Python 2.x and Python 3.x. **The issue is already fixed**.
|
What SOAP client libraries exist for Python, and where is the documentation for them?
|
[
"",
"python",
"soap",
"soap-client",
""
] |
I want to do something like:
```
exec sproc1 and sproc2 at the same time
when they are both finished exec sproc3
```
I can do this in dts.
Is there a way to do it in transact sql?
Or is there a way to do it with a batch script (eg vbs or powershell)?
|
`sp_start_job`

I'm doing a similar thing at the moment, and the only way I've found to avoid using SSIS or some external shell is to split my load routine into 'threads' manually, and then fire a single master sqlagent job which in turn executes as many `sp_start_job` calls as I have threads. From that point, they all run autonomously.

It's not exactly what we're looking for, but the result is the same. If you test the job status of the sub-jobs, you can implement your conditional start of sproc3 as well.

What's the point in 8 cores if we can't use them all at once?
|
You could create a CLR Stored Procedure that (using C#) would call the first two on their own threads, and then block until both are complete... then run the third one.
Are you able to use CLR sprocs in your situation? If so, I'll edit this answer to have more detail.
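The fork-join pattern described above (run two in parallel, block until both finish, then run the third) can be sketched outside the database as well. A Python illustration with stand-in functions; a real implementation would execute the procedures via a database driver such as pyodbc (names here are hypothetical):

```python
import threading

def run_sproc(name, results):
    # Stand-in for executing a stored procedure (e.g. cursor.execute("EXEC " + name)).
    results[name] = f"{name} done"

results = {}
threads = [threading.Thread(target=run_sproc, args=(n, results))
           for n in ("sproc1", "sproc2")]
for t in threads:
    t.start()
for t in threads:
    t.join()                    # block until both have finished

run_sproc("sproc3", results)    # only then run the third
print(sorted(results))          # ['sproc1', 'sproc2', 'sproc3']
```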
|
How can I run sql server stored procedures in parallel?
|
[
"",
"sql",
"sql-server",
"t-sql",
"parallel-processing",
"dts",
""
] |
I just noticed that java.beans.Introspector getBeanInfo does not pickup any superinterface's properties. Example:
```
public interface Person {
String getName();
}
public interface Employee extends Person {
int getSalary();
}
```
Introspecting on Employee only yields salary even though name is inherited from Person.
Why is this? I would rather not have to use reflection to get all the getters.
|
This issue is covered in Sun bug
[java.beans.Introspector doesn't work for interfaces](https://bugs.java.com/bugdatabase/view_bug?bug_id=4275879)
|
The Java VM does not support this out of the box as [Phil](https://stackoverflow.com/questions/185004/java-beans-introspector-getbeaninfo-does-not-pickup-any-superinterfaces-properti/867423#867423) wrote. I also needed this and implemented a [helper class](http://kenai.com/projects/deut/sources/repo/content/trunk/src/diergo/util/BeanHelper.java) as part of [Diergo Utils 1.5](http://kenai.com/projects/deut).
|
java.beans.Introspector getBeanInfo does not pickup any superinterface's properties
|
[
"",
"java",
"reflection",
"javabeans",
""
] |
I have a table with columns
> Index, Date
where an Index may have multiple Dates, and my goal is the following: select a list that looks like
> Index, MinDate, MaxDate
where each Index is listed only once, and MinDate (MaxDate) represents the earliest (latest) date present *in the entire table for that index*. That's easy enough, but then let's constrain this list to appear only for Indexes that are present in a given range of dates.
So far, I have the following:
```
SELECT
Index,
MIN([Date]),
MAX([Date])
FROM myTable
WHERE
Index IN
(SELECT Index From myTable WHERE [Date] BETWEEN '1/1/2000' AND '12/31/2000')
GROUP BY Index
ORDER BY Index ASC
```
This is excruciatingly slow. Any way to speed this up? [I am running SQL Server 2000.]
Thanks!
Edited: For clarity.
|
I would recommend a derived table approach. Like this:
```
SELECT
myTable.Index,
MIN(myTable.[Date]),
MAX(myTable.[Date])
FROM myTable
Inner Join (
SELECT Index
From myTable
WHERE [Date] BETWEEN '1/1/2000' AND '12/31/2000') As AliasName
On myTable.Index = AliasName.Index
GROUP BY myTable.Index
ORDER BY myTable.Index ASC
```
EDIT: Upon further review, there is another way you can create this query. The following query may be faster, slower, or execute in the same amount of time. This, of course, depends on how the table is indexed.
```
Select [Index],
Min([Date]),
Max([Date])
From myTable
Group By [Index]
Having Sum(Case When [Date] Between '1/1/2000' And '12/31/2000' Then 1 Else 0 End) > 0
```
Under the best circumstances, this query will cause an index scan (not a seek) to filter out rows you don't want to display. I encourage you to run both queries and pick the one that executes the fastest.
|
I am not an SQL Server expert, but if you can do sub-selects like so, this is potentially faster.
```
SELECT Index,
       (SELECT MIN([Date]) FROM myTable WHERE Index = m.Index),
       (SELECT MAX([Date]) FROM myTable WHERE Index = m.Index)
From myTable m
WHERE [Date] BETWEEN '1/1/2000' AND '12/31/2000'
```
|
Help with SQL query (Joining views?)
|
[
"",
"sql",
""
] |
Similar to [this](https://stackoverflow.com/questions/5415/convert-bytes-to-floating-point-numbers-in-python) question, I am trying to read in an ID3v2 tag header and am having trouble figuring out how to get individual bytes in python.
I first read all ten bytes into a string. I then want to parse out the individual pieces of information.
I can grab the two version number chars in the string, but then I have no idea how to take those two chars and get an integer out of them.
The struct package seems to be what I want, but I can't get it to work.
Here is my code so far (I am very new to python btw... so take it easy on me):
```
def __init__(self, ten_byte_string):
self.whole_string = ten_byte_string
self.file_identifier = self.whole_string[:3]
self.major_version = struct.pack('x', self.whole_string[3:4]) #this
self.minor_version = struct.pack('x', self.whole_string[4:5]) # and this
self.flags = self.whole_string[5:6]
self.len = self.whole_string[6:10]
```
Printing out any value except the file identifier is obviously crap, because they are not formatted correctly.
|
If you have a string, with 2 bytes that you wish to interpret as a 16 bit integer, you can do so by:
```
>>> s = '\0\x02'
>>> struct.unpack('>H', s)
(2,)
```
Note that the > is for big-endian (the largest part of the integer comes first). This is the format id3 tags use.
For other sizes of integer, you use different format codes. eg. "i" for a signed 32 bit integer. See help(struct) for details.
You can also unpack several elements at once. eg for 2 unsigned shorts, followed by a signed 32 bit value:
```
>>> a,b,c = struct.unpack('>HHi', some_string)
```
Going by your code, you are looking for (in order):
* a 3 char string
* 2 single byte values (major and minor version)
* a 1 byte flags variable
* a 32 bit length quantity
The format string for this would be:
```
ident, major, minor, flags, len = struct.unpack('>3sBBBI', ten_byte_string)
```
|
Why write your own? (Assuming you haven't checked out these other options.) There's a couple options out there for reading in ID3 tag info from MP3s in Python. Check out my [answer](https://stackoverflow.com/questions/8948/accessing-mp3-meta-data-with-python#102285) over at [this](https://stackoverflow.com/questions/8948/accessing-mp3-meta-data-with-python) question.
|
How Does One Read Bytes from File in Python
|
[
"",
"python",
"id3",
""
] |
How do you remove the jagged edges from a wide button in internet explorer? For example:

|
You can also eliminate Windows XP's styling of buttons (and every other version of Windows) by setting the `background-color` and/or `border-color` on your buttons.
Try the following styles:
```
background-color: black;
color: white;
border-color: red green blue yellow;
```
You can of course make this much more pleasing to the eyes. But you get my point :)
Stack Overflow uses this approach.
|
As a workaround, you can remove the blank spaces on each end of the button, which has the effect of decreasing the jagged edges. This is accomplished with the following css and a bit of jQuery:
```
input.button {
padding: 0 .25em;
width: 0; /* for IE only */
overflow: visible;
}
input.button[class] { /* IE ignores [class] */
width: auto;
}
```
---
```
$(function(){
$('input[type=button]').addClass('button');
});
```
The jQuery is for adding the button class. A more in depth write up can be found [here](http://latrine.dgx.cz/the-stretched-buttons-problem-in-ie).
|
Jagged Button edges in Internet Explorer
|
[
"",
"javascript",
"css",
"internet-explorer",
""
] |
I have a class which implements UserControl. In .NET 2005, a Dispose method is automatically created in the MyClass.Designer.cs partial class file that looks like this:
```
protected override void Dispose(bool disposing)
{
if (disposing && (components != null))
{
components.Dispose();
}
base.Dispose(disposing);
}
```
If I want to add my own Dispose functionality, where would I put it? Since this file is generated, I don't want to add code here and risk it getting blown away.
|
In such a case I move the generated `Dispose` method to the main file and extend it. Visual Studio respects this.
Another approach would be using a partial method (C# 3.0).
|
All `Component` classes implement a `Disposed` event. You can add an event handler for that event and clean up things in there.
For example, in your `UserControl` you could add following method:
```
private void OnDispose(object sender, EventArgs e)
{
// do stuff on dispose
}
```
And in constructor (or in `Load` event handler) add the following line:
```
Disposed += OnDispose;
```
|
How do I add Dispose functionality to a C# UserControl?
|
[
"",
"c#",
"user-controls",
"dispose",
""
] |
I've got a PHPUnit mock object that returns `'return value'` no matter what its arguments:
```
// From inside a test...
$mock = $this->getMock('myObject', 'methodToMock');
$mock->expects($this->any())
->method('methodToMock')
->will($this->returnValue('return value'));
```
What I want to be able to do is return a different value based on the arguments passed to the mock method. I've tried something like:
```
$mock = $this->getMock('myObject', 'methodToMock');
// methodToMock('one')
$mock->expects($this->any())
->method('methodToMock')
->with($this->equalTo('one'))
->will($this->returnValue('method called with argument "one"'));
// methodToMock('two')
$mock->expects($this->any())
->method('methodToMock')
->with($this->equalTo('two'))
->will($this->returnValue('method called with argument "two"'));
```
But this causes PHPUnit to complain if the mock isn't called with the argument `'two'`, so I assume that the definition of `methodToMock('two')` overwrites the definition of the first.
So my question is: Is there any way to get a PHPUnit mock object to return a different value based on its arguments? And if so, how?
|
Use a callback. e.g. (straight from PHPUnit documentation):
```
<?php
class StubTest extends PHPUnit_Framework_TestCase
{
public function testReturnCallbackStub()
{
$stub = $this->getMock(
'SomeClass', array('doSomething')
);
$stub->expects($this->any())
->method('doSomething')
->will($this->returnCallback('callback'));
// $stub->doSomething() returns callback(...)
}
}
function callback() {
$args = func_get_args();
// ...
}
?>
```
Do whatever processing you want in the callback() and return the result based on your $args as appropriate.
|
From the latest phpUnit docs: "Sometimes a stubbed method should return different values depending on a predefined list of arguments. You can use [returnValueMap()](https://phpunit.de/manual/current/en/test-doubles.html#test-doubles.stubs.examples.StubTest5.php) to create a map that associates arguments with corresponding return values."
```
$mock->expects($this->any())
->method('getConfigValue')
->will(
$this->returnValueMap(
array(
array('firstparam', 'secondparam', 'retval'),
array('modes', 'foo', array('Array', 'of', 'modes'))
)
)
);
```
|
How can I get PHPUnit MockObjects to return different values based on a parameter?
|
[
"",
"php",
"unit-testing",
"mocking",
"phpunit",
""
] |
I have a table ("venues") that stores all the possible venues a volunteer can work, each volunteer is assigned to work one venue each.
I want to create a select drop down from the venues table.
Right now I can display the venue each volunteer is assigned, but I want it to display the drop down box, with the venue already selected in the list.
```
<form action="upd.php?id=7">
<select name="venue_id">
<?php //some sort of loop goes here
print '<option value="'.$row['venue_id'].'">'.$row['venue_name'].'</option>';
//end loop here ?>
</select>
<input type="submit" value="submit" name="submit">
</form>
```
For example, volunteer with the id of 7, is assigned to venue\_id 4
```
<form action="upd.php?id=7">
<select name="venue_id">
<option value="1">Bagpipe Competition</option>
<option value="2">Band Assistance</option>
<option value="3">Beer/Wine Pouring</option>
<option value="4" selected>Brochure Distribution</option>
<option value="5">Childrens Area</option>
<option value="6">Cleanup</option>
<option value="7">Cultural Center Display</option>
<option value="8">Festival Merch</option>
</select>
<input type="submit" value="submit" name="submit">
</form>
```

The Brochure Distribution option will already be selected when the drop down list displays, because in the `volunteers_2009` table, column `venue_id` is 4.
I know it will take a form of a for or while loop to pull the list of venues from the venues table
My query is:
```
$query = "SELECT volunteers_2009.id, volunteers_2009.comments, volunteers_2009.choice1, volunteers_2009.choice2, volunteers_2009.choice3, volunteers_2009.lname, volunteers_2009.fname, volunteers_2009.venue_id, venues.venue_name FROM volunteers_2009 AS volunteers_2009 LEFT OUTER JOIN venues ON (volunteers_2009.venue_id = venues.id) ORDER by $order $sort";
```
How do I populate the select drop down box with the venues (**volunteers\_2009.venue\_id**, **venues.id**) from the venues table and have it pre-select the venue in the list?
|
```
$query = "SELECT volunteers_2009.id, volunteers_2009.comments, volunteers_2009.choice1, volunteers_2009.choice2, volunteers_2009.choice3, volunteers_2009.lname, volunteers_2009.fname, volunteers_2009.venue_id, venues.venue_name FROM volunteers_2009 AS volunteers_2009 LEFT OUTER JOIN venues ON (volunteers_2009.venue_id = venues.id) ORDER by $order $sort";
$res = mysql_query($query);
echo "<select name = 'venue'>";
// $selected_venue_id holds the venue currently assigned to this volunteer
while (($row = mysql_fetch_assoc($res)) != null) // assoc: rows keyed by column name
{
    echo "<option value = '{$row['venue_id']}'";
    if ($selected_venue_id == $row['venue_id'])
        echo " selected = 'selected'";
    echo ">{$row['venue_name']}</option>";
}
echo "</select>";
```
|
Assuming you have an array of venues... Personally, I don't like to mix the SQL with other wizardry.
```
function displayDropDown($items, $name, $label, $default='') {
if (count($items)) {
echo '<select name="' . $name . '">';
echo '<option value="">' . $label . '</option>';
echo '<option value="">----------</option>';
foreach($items as $item) {
      $selected = ($item['id'] == $default) ? ' selected="selected"' : '';
      echo '<option value="' . $item['id'] . '"' . $selected . '>' . $item['name'] . '</option>';
}
echo '</select>';
} else {
echo 'There are no venues';
}
}
```
|
Populate select drop down from a database table
|
[
"",
"php",
"mysql",
""
] |
How can I bind arguments to a Python function so that I can call it later without arguments (or with fewer additional arguments)?
For example:
```
def add(x, y):
return x + y
add_5 = magic_function(add, 5)
assert add_5(3) == 8
```
What is the `magic_function` I need here?
---
It often happens with frameworks and libraries that people accidentally call a function immediately when trying to give arguments to a callback: for example `on_event(action(foo))`. The solution is to bind `foo` as an argument to `action`, using one of the techniques described here. See for example [How to pass arguments to a Button command in Tkinter?](https://stackoverflow.com/questions/6920302) and [Using a dictionary as a switch statement in Python](https://stackoverflow.com/questions/21962763/).
Some APIs, however, allow you to pass the to-be-bound arguments separately, and will do the binding for you. Notably, the threading API in the standard library works this way. See [thread starts running before calling Thread.start](https://stackoverflow.com/questions/11792629). If you are trying to set up your own API like this, see [How can I write a simple callback function?](https://stackoverflow.com/questions/40843039).
Explicitly binding arguments is also a way to avoid problems caused by late binding when using closures. This is the problem where, for example, a `lambda` inside a `for` loop or list comprehension produces separate functions that compute the same result. See [What do lambda function closures capture?](https://stackoverflow.com/questions/2295290/) and [Creating functions (or lambdas) in a loop (or comprehension)](https://stackoverflow.com/questions/3431676/).
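The late-binding pitfall above can be seen in a minimal sketch (the doubling function here is purely illustrative): each `lambda` in the first list closes over the loop variable itself, so all of them see its final value, while `functools.partial` freezes the current value at creation time.

```python
import functools

# Late binding: every callback reads i *when called*, after the loop has ended.
late = [lambda: i * 2 for i in range(3)]
print([f() for f in late])    # -> [4, 4, 4]

# Early binding: partial captures the value of i at creation time.
early = [functools.partial(lambda n: n * 2, i) for i in range(3)]
print([f() for f in early])   # -> [0, 2, 4]
```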
|
[`functools.partial`](http://docs.python.org/2/library/functools.html#functools.partial "Python 2 Documentation: functools module: partial function") returns a callable wrapping a function with some or all of the arguments frozen.
```
import sys
import functools
print_hello = functools.partial(sys.stdout.write, "Hello world\n")
print_hello()
```
```
Hello world
```
The above usage is equivalent to the following `lambda`.
```
print_hello = lambda *a, **kw: sys.stdout.write("Hello world\n", *a, **kw)
```
|
Using [`functools.partial`](https://docs.python.org/library/functools.html#functools.partial):
```
>>> from functools import partial
>>> def f(a, b):
... return a+b
...
>>> p = partial(f, 1, 2)
>>> p()
3
>>> p2 = partial(f, 1)
>>> p2(7)
8
```
|
How can I bind arguments to a function in Python?
|
[
"",
"python",
"partial-application",
""
] |
What is the functional equivalent of [Windows Communication Foundation](http://msdn.microsoft.com/en-us/netframework/aa663324.aspx) in Java 6?
|
WCF offers several communication options. A nice presentation is [this white paper](http://www.davidchappell.com/articles/white_papers/WCF_Diversity_v1.0.docx) by David Chappel. There the following options are described:
* Interoperable Communication using SOAP and WS-\*
* Binary Communication Between WCF Applications
* RESTful Communication
* Communication using POX, RSS, and ATOM
* Communication with Line-of-Business Applications using Adapters
* Communication via Message Queues
* Communication via Windows Peer-to-Peer Networking
* Communication Between Processes on the Same Machine
* Custom Communication
Although some options are not relevant to Java (e.g. the second one), it may help you to identify the corresponding functional equivalents in Java.
|
I don't know what all WCF contains, but [JAX-WS](https://jax-ws.dev.java.net/) (and its reference implementation [Metro](https://metro.dev.java.net/)) might be a good starting point.
Some of the other [technologies in J2EE](http://java.sun.com/javaee/technologies/) may apply as well.
|
What is the functional equivalent of Windows Communication Foundation in Java 6?
|
[
"",
"java",
"wcf",
""
] |
Often times I need a collection of non-sequential objects with numeric identifiers. I like using the KeyedCollection for this, but I think there's a serious drawback. If you use an int for the key, you can no longer access members of the collection by their index (collection[index] is now really collection[key]). Is this a serious enough problem to avoid using the int as the key? What would a preferable alternative be? (maybe int.ToString()?)
I've done this before without any major problems, but recently I hit a nasty snag where XML serialization against a KeyedCollection does *not* work if the key is an int, due to [a bug in .NET](http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=120461).
|
Basically you need to decide if users of the class are likely to be confused by the fact that they can't, for example, do:
```
for(int i=0; i < myCollection.Count; i++)
{
... myCollection[i] ...
}
```
though they can of course use foreach, or use a cast:
```
for(int i=0; i < myCollection.Count; i++)
{
... ((Collection<MyType>)myCollection)[i] ...
}
```
It's not an easy decision, as it can easily lead to heisenbugs. I decided to allow it in one of my apps, where access from users of the class was almost exclusively by key.
I'm not sure I'd do so for a shared class library though: in general I'd avoid exposing a KeyedCollection in a public API: instead I would expose IList<T> in a public API, and consumers of the API who need keyed access can define their own internal KeyedCollection with a constructor that takes an IEnumerable<TItem> and populates the collection with it. This means you can easily build a new KeyedCollection from a list retrieved from an API.
Regarding serialization, there is also a performance problem [that I reported to Microsoft Connect](http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=302290): the KeyedCollection maintains an internal dictionary as well as a list, and serializes both - it is sufficient to serialize the list as the dictionary can easily be recreated on deserialization.
For this reason as well as the XmlSerialization bug, I'd recommend you avoid serializing a KeyedCollection - instead only serialize the KeyedCollection.Items list.
I don't like [the suggestion of wrapping your int key in another type](https://stackoverflow.com/questions/201956/is-it-ok-to-use-an-int-for-the-key-in-a-keyedcollection#201983). It seems to me wrong to add complexity simply so that a type can be used as an item in a KeyedCollection. I'd use a string key (ToString) rather than doing this - this is rather like the VB6 Collection class.
FWIW, I asked [the same question](http://social.msdn.microsoft.com/Forums/en-US/vstscode/thread/d2163d71-636e-4521-926a-a7cce260143c/) some time ago on the MSDN forums. There is a response from a member of the FxCop team, but no conclusive guidelines.
|
An easy solution might be to wrap the `int` into another type to create a distinct type for overload resolution. If you use a `struct`, this wrapper doesn't have any additional overhead:
```
struct Id {
public int Value;
public Id(int value) { Value = value; }
    public override int GetHashCode() { return Value.GetHashCode(); }
    public override bool Equals(object obj) { return obj is Id && ((Id)obj).Value == Value; }
}
```
|
Is it OK to use an int for the key in a KeyedCollection
|
[
"",
"c#",
".net",
"generics",
"collections",
""
] |
I have to read a binary file in a legacy format with Java.
In a nutshell the file has a header consisting of several integers, bytes and fixed-length char arrays, followed by a list of records which also consist of integers and chars.
In any other language I would create `struct`s (C/C++) or `record`s (Pascal/Delphi) which are byte-by-byte representations of the header and the record. Then I'd read `sizeof(header)` bytes into a header variable and do the same for the records.
Something like this: (Delphi)
```
type
THeader = record
Version: Integer;
Type: Byte;
BeginOfData: Integer;
ID: array[0..15] of Char;
end;
...
procedure ReadData(S: TStream);
var
Header: THeader;
begin
S.ReadBuffer(Header, SizeOf(THeader));
...
end;
```
What is the best way to do something similar with Java? Do I have to read every single value on its own or is there any other way to do this kind of "block-read"?
|
To my knowledge, Java forces you to read a file as bytes rather than being able to block read. If you were serializing Java objects, it'd be a different story.
The other examples shown use the [DataInputStream](http://java.sun.com/javase/6/docs/api/java/io/DataInputStream.html) class with a File, but you can also use a shortcut: The [RandomAccessFile](http://java.sun.com/javase/6/docs/api/java/io/RandomAccessFile.html) class:
```
RandomAccessFile in = new RandomAccessFile("filename", "r");
int version = in.readInt();
byte type = in.readByte();
int beginOfData = in.readInt();
byte[] tempId = new byte[16];
in.readFully(tempId);
String id = new String(tempId);
```
Note that you could turn the response values into a class, if that would make it easier.
|
If you would be using [Preon](https://github.com/preon/preon), then all you would have to do is this:
```
public class Header {
@BoundNumber int version;
@BoundNumber byte type;
@BoundNumber int beginOfData;
    @BoundString(size="16") String id;
}
```
Once you have this, you create Codec using a single line:
```
Codec<Header> codec = Codecs.create(Header.class);
```
And you use the Codec like this:
```
Header header = Codecs.decode(codec, file);
```
|
Best way to read structured binary files with Java
|
[
"",
"java",
"file",
"binaryfiles",
""
] |
I'm running Python 2.6 on Unix and when I run the interactive prompt ([SQLite](http://en.wikipedia.org/wiki/SQLite) is supposed to be preinstalled) I get:
```
[root@idev htdocs]# python
Python 2.6 (r26:66714, Oct 23 2008, 16:25:34)
[GCC 3.2.2 20030222 (Red Hat Linux 3.2.2-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named sqlite
>>>
```
How do I resolve this?
|
The error:
```
ImportError: No module named _sqlite3
```
means that [SQLite](http://en.wikipedia.org/wiki/SQLite) 3 does not find the associated shared library. On Mac OS X it's \_sqlite3.so and it should be the same on other Unix systems.
To resolve the error you have to locate the \_sqlite3.so library on your computer and then check your PYTHONPATH for this directory location.
To print the Python search path enter the following in the Python shell:
```
import sys
print sys.path
```
If the directory containing your library is missing you can try adding it interactively with
```
sys.path.append('/your/dir/here')
```
and try
```
import sqlite3
```
again. If this works you have to add this directory permanently to your PYTHONPATH environment variable.
PS: If the library is missing you should (re-)install the module.
|
```
import sqlite3
```
[sqlite3](http://docs.python.org/library/sqlite3.html#module-sqlite3) - DB-API 2.0 interface for SQLite databases.
You are missing the `.so` (shared object) - probably an installation step. In my Linux python installation, `_sqlite3` is at:
```
${somewhere}/lib/python2.6/lib-dynload/_sqlite3.so
```
|
Cannot import SQLite with Python 2.6
|
[
"",
"python",
"sqlite",
""
] |
FindBugs has found a potential bug in my code. But it is not a bug.
Is it possible to mark this occurrence as 'not a bug' AND have it removed from the bug list?
I have documented quite clearly why for each case it is not a bug.
For example. A class implements the comparable interface. it has the compareTo method. I have however not overridden the equals method.
FindBugs does not like this as the JavaDocs state that it is recommended that
```
(x.compareTo(y)==0) == (x.equals(y))
```
Although in my case the above condition is and always will be true.
|
Instead of using filters, you can also use the [SuppressWarnings](http://findbugs.sourceforge.net/api/edu/umd/cs/findbugs/annotations/SuppressWarnings.html "SuppressWarnings") annotation. You must use the annotation from the FindBugs package, meaning you either need an import or have to use its fully qualified name. This is because, unlike the JDK's [SuppressWarnings](http://java.sun.com/javase/6/docs/api/java/lang/SuppressWarnings.html), it has retention "Class", which is needed because FindBugs operates on the compiled bytecode rather than the source code.
Example:
```
@edu.umd.cs.findbugs.annotations.SuppressWarnings(
value="EQ_COMPARETO_USE_OBJECT_EQUALS",
justification="because I know better")
```
There's one corner case where you probably should not be using the annotation: If your code is library code that ends up in a jar, that could be used by other projects **and** you're still on Java5. The reason for this is a [bug](https://bugs.java.com/bugdatabase/view_bug?bug_id=6365854) in the JDK which crashes javac if the annotation is not in the classpath.
|
Probably by adding a [filter](http://findbugs.sourceforge.net/manual/filter.html) as parameter of findbugs
> Match clauses can only match information that is actually contained in the bug instances
```
<Match>
<Class name="com.foobar.MyClass" />
<Method name="myMethod" />
<Bug pattern="EQ_COMPARETO_USE_OBJECT_EQUALS" />
</Match>
```
|
When using Eclipse with FindBugs can you mark a bug as not a bug and have it removed from the bug list?
|
[
"",
"java",
"eclipse",
"findbugs",
""
] |
I am writing a script at the moment that will grab certain information from HTML using dom4j.
Since Python/Jython does not have a native **switch** statement I decided to use a whole bunch of **if** statements that call the appropriate method, like below:
```
if type == 'extractTitle':
extractTitle(dom)
if type == 'extractMetaTags':
extractMetaTags(dom)
```
I will be adding more depending on what information I want to extract from the HTML and thought about taking the dictionary approach which I found elsewhere on this site, example below:
```
{
'extractTitle': extractTitle,
'extractMetaTags': extractMetaTags
}[type](dom)
```
I know that each time I run the script the dictionary will be built, but at the same time if I were to use the **if** statements the script would have to check through all of them until it hits the correct one. What I am really wondering is which one performs better, or is generally better practice to use?
**Update:** @Brian - Thanks for the great reply. I have a question, if any of the extract methods require more than one object, e.g.
```
handle_extractTag(self, dom, anotherObject)
# Do something
```
How would you make the appropriate changes to the **handle** method to implement this? Hope you know what I mean :)
Cheers
|
To avoid specifying the tag and handler in the dict, you could just use a handler class with methods named to match the type. Eg
```
class MyHandler(object):
def handle_extractTitle(self, dom):
# do something
def handle_extractMetaTags(self, dom):
# do something
def handle(self, type, dom):
func = getattr(self, 'handle_%s' % type, None)
if func is None:
raise Exception("No handler for type %r" % type)
return func(dom)
```
Usage:
```
handler = MyHandler()
handler.handle('extractTitle', dom)
```
**Update:**
When you have multiple arguments, just change the handle function to take those arguments and pass them through to the function. If you want to make it more generic (so you don't have to change both the handler functions and the handle method when you change the argument signature), you can use the \*args and \*\*kwargs syntax to pass through all received arguments. The handle method then becomes:
```
def handle(self, type, *args, **kwargs):
func = getattr(self, 'handle_%s' % type, None)
if func is None:
raise Exception("No handler for type %r" % type)
return func(*args, **kwargs)
```
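For instance, with the generic `handle` above an extra argument passes straight through untouched; a self-contained sketch (the tag and argument names here are hypothetical):

```python
class MyHandler(object):
    def handle_extractTag(self, dom, another_object):
        # A handler that needs two arguments.
        return (dom, another_object)

    def handle(self, type, *args, **kwargs):
        func = getattr(self, 'handle_%s' % type, None)
        if func is None:
            raise Exception("No handler for type %r" % type)
        return func(*args, **kwargs)

handler = MyHandler()
print(handler.handle('extractTag', 'dom-object', 'another-object'))
# -> ('dom-object', 'another-object')
```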
|
With your code as written, the dictionary is rebuilt every time the expression runs. Build it once and reuse it:
```
handlers = {
'extractTitle': extractTitle,
'extractMetaTags': extractMetaTags
}
handlers[type](dom)
```
Would work like your original `if` code.
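If the dictionary is built once, say at module level, rather than inside the dispatch expression, each lookup is a constant-time hash access no matter how many handlers you register; a sketch with stand-in extractor functions:

```python
def extract_title(dom):
    return 'title of %s' % dom

def extract_meta_tags(dom):
    return 'meta tags of %s' % dom

# Built once, reused on every dispatch.
HANDLERS = {
    'extractTitle': extract_title,
    'extractMetaTags': extract_meta_tags,
}

def dispatch(type, dom):
    try:
        return HANDLERS[type](dom)
    except KeyError:
        raise ValueError('unknown type: %r' % type)

print(dispatch('extractTitle', 'page.html'))   # -> title of page.html
```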
|
Dictionary or If statements, Jython
|
[
"",
"python",
"switch-statement",
"jython",
""
] |
C++ is mostly a superset of C, but not always. In particular, while enumeration values in both C and C++ implicitly convert into int, the reverse isn't true: only in C do ints convert back into enumeration values. Thus, bitflags defined via enumeration declarations don't work correctly. Hence, this is OK in C, but not in C++:
```
typedef enum Foo
{
Foo_First = 1<<0,
Foo_Second = 1<<1,
} Foo;
int main(void)
{
Foo x = Foo_First | Foo_Second; // error in C++
return 0;
}
```
How should this problem be handled efficiently and correctly, ideally without harming the debugger-friendly nature of using Foo as the variable type (it decomposes into the component bitflags in watches etc.)?
Consider also that there may be hundreds of such flag enumerations, and many thousands of use-points. Ideally some kind of efficient operator overloading would do the trick, but it really ought to be efficient; the application I have in mind is compute-bound and has a reputation of being fast.
Clarification: I'm translating a large (>300K) C program into C++, so I'm looking for an efficient translation in both run-time and developer-time. Simply inserting casts in all the appropriate locations could take weeks.
|
Why not just cast the result back to a Foo?
```
Foo x = Foo(Foo_First | Foo_Second);
```
EDIT: I didn't understand the scope of your problem when I first answered this question. The above will work for doing a few spot fixes. For what you want to do, you will need to define a | operator that takes 2 Foo arguments and returns a Foo:
```
Foo operator|(Foo a, Foo b)
{
return Foo(int(a) | int(b));
}
```
The int casts are there to prevent undesired recursion.
|
It sounds like an ideal application for a cast - it's up to you to tell the compiler that yes, you DO mean to instantiate a Foo with a random integer.
Of course, technically speaking, Foo\_First | Foo\_Second isn't a valid value for a Foo.
|
How should C bitflag enumerations be translated into C++?
|
[
"",
"c++",
"c",
"enums",
"flags",
""
] |
I need an easy way to allow users to upload multiple files at once (ie I need to allow a user to upload a folder). I do not wish to put the burden of zipping on the user.
*I would prefer to avoid Flash or variants if possible.* I'm looking for a straight javascript / HTML solution if it is possible. Please note, this rules out the answers at: [What is the best client side browser library to upload multiple files over http?](https://stackoverflow.com/questions/159600/multiple-file-upload).
|
You won't be able to do it with just HTML and Javascript. I'd recommend trying [Fancy Upload](http://digitarald.de/project/fancyupload/), a [MooTools](http://mootools.net/) plugin for multiple file uploads. It uses a mixture of JavaScript and Flash, but degrades gracefully. It works with all major browsers including IE6 and there is also a Flash 10 compatible release available for download (though the demo hasn't been updated yet).
---
Update (2012-11-26):
**Multiple file** uploads are possible with [valums](https://github.com/valums/file-uploader) or [blueimp](https://github.com/blueimp/jQuery-File-Upload) file uploaders.
For **recursive directory** uploads, your best solution is using [Chrome 11's new **folder upload API**](https://stackoverflow.com/a/5849341/486547). It also seems to work on Firefox if you use a vendor prefix.
|
With Firefox 42 and Edge having implemented the new [directory upload proposal](https://microsoftedge.github.io/directory-upload/proposal.html), we're finally able to do cross-browser directory uploads. The APIs are nasty enough that you may want to check out my wrapper, [uppie](https://github.com/silverwind/uppie).
|
What is the best way to upload a folder to a website?
|
[
"",
"javascript",
"html",
"file-upload",
""
] |
I have some code that looks like:
```
template<unsigned int A, unsigned int B>
int foo() {
int v = 1;
const int x = A - B;
if (x > 0) {
v = v << x;
}
bar(v);
}
```
gcc will complain about x being negative for certain instantiations of A, B; however, I do perform a check to make sure it is non-negative. What's the best way around this? I know I can cast `x` to be `unsigned int` but that will cause warnings about `x` being larger than the width of `v` (since it is casting a negative number to be positive). I know there is a work-around that involves creating a new templatized `shift` function, but I'd like to avoid that if possible.
|
Since A and B are known at compile time, not only can you get rid of your warning, but you can also get rid of a runtime `if`, without any casts, like this:
```
#include <iostream>
using namespace std;
template< bool P > struct bool_ {};
template< unsigned int A, unsigned int B >
struct my
{
    // Tag dispatch instead of in-class explicit specialization,
    // which gcc rejects (it is a Microsoft extension).
    static void shift_if( int &, bool_< false > ) {}
    static void shift_if( int & v, bool_< true > ) { v <<= A - B; }
    static void op( int & v ) { shift_if( v, bool_< (A > B) >() ); }
};
template< unsigned int A, unsigned int B >
int foo()
{
int v = 1;
my< A, B >::op( v );
return v;
}
int main() {
cout << foo< 1, 3 >() << endl;
cout << foo< 3, 1 >() << endl;
cout << foo< 300, 1 >() << endl;
cout << foo< 25, 31 >() << endl;
return 0;
}
```
|
Why not make x an unsigned char type and cast it? Surely you don't need to shift by more than 255 bits.
```
const unsigned char x = static_cast<unsigned char>(A - B);
```
or perhaps use masking to ensure that the shift is in bounds like this:
```
const unsigned int x = static_cast<unsigned int>(A - B) & 0x1f; // limit A-B to have a range of (0 - 31)
```
EDIT:
in response to the comment here's an idea:
```
template<unsigned int A, unsigned int B>
int foo() {
int v = 1;
const int x = A - B;
if (x > 0) {
v = v << (static_cast<unsigned int>(x) & 0x1f);
}
bar(v);
}
```
NOTE: you can replace 0x1f with something like: (CHAR\_BIT \* sizeof(T) - 1)
EDIT: in response to the latest comment, this code does not issue any warning compiling with: g++ -W -Wall -ansi -pedantic test.cc -o test
```
#include <iostream>
template<unsigned int A, unsigned int B>
int foo() {
int v = 1;
const int x = A - B;
if (x > 0) {
v = v << (static_cast<unsigned int>(x) & 0x1f);
}
return v;
}
int main() {
std::cout << foo<1, 3>() << std::endl;
std::cout << foo<3, 1>() << std::endl;
std::cout << foo<300, 1>() << std::endl;
std::cout << foo<25, 31>() << std::endl;
}
```
|
Getting rid of gcc shift by negative warning
|
[
"",
"c++",
"gcc",
""
] |
How can I write a C++ function returning true if a real number is exactly representable with a double?
```
bool isRepresentable( const char* realNumber )
{
bool answer = false;
// what goes here?
return answer;
}
```
Simple tests:
```
assert( true==isRepresentable( "0.5" ) );
assert( false==isRepresentable( "0.1" ) );
```
|
Parse the number into the form a + N / (10^k), where a and N are integers, and k is the number of decimal places you have.
Example: 12.0345 -> 12 + 345 / 10^4, a = 12, N = 345, k = 4
Now, 10^k = (2 \* 5) ^ k = 2^k \* 5^k
You can represent your number as exact binary fraction if and only if you get rid of the 5^k term in the denominator.
So the test is simply checking (N mod 5^k) == 0.
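The same test can be sketched compactly (in Python rather than C++, for brevity): after reducing the fraction to lowest terms, the 5^k factor is gone exactly when the remaining denominator is a power of two. Note this deliberately ignores the 53-bit mantissa and exponent limits of an actual double, just as the divisibility test above does.

```python
from fractions import Fraction

def is_representable(real_number):
    # Reduce the decimal string to lowest terms; it is an exact binary
    # fraction iff the remaining denominator is a power of two.
    denominator = Fraction(real_number).denominator
    return denominator & (denominator - 1) == 0

print(is_representable("0.5"))   # -> True
print(is_representable("0.1"))   # -> False
```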
|
Holy homework, batman! :)
What makes this interesting is that you can't simply do an (atof|strtod|sscanf) -> sprintf loop and check whether you got the original string back. sprintf on many platforms detects the "as close as you can get to 0.1" double and prints it as 0.1, for example, even though 0.1 isn't precisely representable.
```
#include <stdio.h>
int main() {
printf("%llx = %f\n",0.1,0.1);
}
```
prints:
3fb999999999999a = 0.100000
on my system.
The real answer probably would require parsing out the double to convert it to an exact fractional representation (0.1 = 1/10) and then making sure that the atof conversion times the denominator equals the numerator.
I think.
|
How can I write a C++ function returning true if a real number is exactly representable with a double?
|
[
"",
"c++",
"floating-point",
""
] |
I have to use unsigned integers that could span to more than 4 bytes, what type should I use?
PS Sorry for the "noobism" but that's it :D
NB: I need integers because I have to do divisions and care only about the integer parts, and this is where integer types are useful.
|
*long long*, 64 bit integer... [here](http://msdn.microsoft.com/en-us/library/s3f49ktz(VS.80).aspx) you can find some reference about the data types and ranges...
|
Simply include <stdint.h> and use int64\_t and uint64\_t (since you want unsigned, you want uint64\_t).
There are several other useful variants on that header, like the least variants (uint\_least64\_t is a type with at least 64 bits) and the fast variants (uint\_fast64\_t is the fastest integer type with at least 64 bits). Also very useful are intptr\_t/uintptr\_t (large enough for a void \* pointer) and intmax\_t/uintmax\_t (largest type).
And if for some reason your compiler doesn't have a <stdint.h> (since IIRC it's a C standard, not a C++ one), you can use Boost's [boost/cstdint.hpp](http://www.boost.org/doc/libs/1_36_0/libs/integer/cstdint.htm) (which you can use even if you do have a <stdint.h>, since in that case it should simply forward to the compiler's header).
|
What type for an integer of more than 4 bytes?
|
[
"",
"c++",
"integer",
""
] |
I want to print the full length of a C-string in GDB. By default it's being abbreviated, how do I force GDB to print the whole string?
|
```
set print elements 0
```
[From the GDB manual](https://sourceware.org/gdb/onlinedocs/gdb/Print-Settings.html#index-number-of-array-elements-to-print):
> `set print elements` *`number-of-elements`*
> Set a limit on how many elements of an array GDB will print. If GDB is printing a large array, it stops printing after it has printed the number of elements set by the `set print elements` command. This limit also applies to the display of strings. When GDB starts, this limit is set to 200. **Setting *number-of-elements* to zero means that the printing is unlimited**.
|
As long as your program's in a sane state, you can also `call (void)puts(your_string)` to print it to stdout. Same principle applies to all functions available to the debugger, actually.
|
How do I print the full value of a long string in gdb?
|
[
"",
"c++",
"c",
"string",
"debugging",
"gdb",
""
] |
Let's say we have defined a CSS class that is being applied to various elements on a page.
```
colourful
{
color: #DD00DD;
background-color: #330033;
}
```
People have complained about the colour, that they don't like pink/purple. So you want to give them the ability to change the style as they wish, and they can pick their favourite colours. You have a little colour-picker widget that invokes a Javascript function:
```
function changeColourful(colorRGB, backgroundColorRGB)
{
// answer goes here
}
```
What goes in the body of that function?
The intent being that when the user picks a new colour on the colour-picker all the elements with `class="colourful"` will have their style changed.
|
I don't know about manipulating the class directly, but you can effectively do the same thing. Here's an example in jQuery.
```
$('.colourful').css('background-color', 'purple').css('color','red');
```
In plain javascript, you would have to do more work.
|
```
var setStyleRule = function(selector, rule) {
var stylesheet = document.styleSheets[(document.styleSheets.length - 1)];
if(stylesheet.addRule) {
stylesheet.addRule(selector, rule)
} else if(stylesheet.insertRule) {
stylesheet.insertRule(selector + ' { ' + rule + ' }', stylesheet.cssRules.length);
}
};
```
|
How to redefine CSS classes with Javascript
|
[
"",
"javascript",
"css",
""
] |
We frequently have users that create multiple accounts and then end up storing the same lesson activity data more than once. Once they realize the error, then they contact us to merge the accounts into a single one that they can use.
I've been beating myself to death trying to figure out how to write a query in MySQL that will merge their activity logs into a single profile so that I can then delete the other profiles, but I still can't find the query that will work.
The tables look like this:
```
CREATE TABLE user_rtab (
user_id int PRIMARY KEY,
username varchar,
last_name varchar,
first_name varchar
);
CREATE TABLE lessonstatus_rtab (
lesson_id int,
user_id int,
accessdate timestamp,
  score double
);
```
What happens is that a user ends up taking the same lessons and also different lessons under two or more accounts and then they want to take all of their lesson statuses and have them assigned under one user account.
Can anyone provide a query that would accomplish this based on the lastname and firstname fields from the user table to determine all user accounts and then use only the user or username field to migrate all necessary statuses to the one single account?
|
One of my current clients is facing a similar problem, except that they have dozens of tables that have to be merged. This is one reason to use a real life primary key (natural key). Your best bet is to try to avoid this problem before it even happens.
Another thing to keep in mind, is that two people can share both the same first and last name. Maybe you don't consider this an issue because of your user base, but if they're already creating multiple accounts how long is it until they start making up fake names or creating names that are almost the same, but not quite. Names are generally not a great thing to match on to determine if two people are the same or not.
As to the technical part of your question, it depends a lot on what the business rules are. If they have the same lesson in there twice with different scores do you use the highest score? How do you decide to which user account to link everything? No matter what, it's going to probably be a multi-step process.
|
Attempting to merge this data via last/first name is a horrible idea; the more users you have, the more likely you are to merge incorrect entries. You have IDs on your tables for a reason, so use them.
I don't see any reason why you can't say "I want to merge user 7 into 12" and then do the following:
```
UPDATE lessonstatus_rtab SET user_id=12 WHERE user_id=7;
DELETE FROM user_rtab WHERE user_id=7;
```
|
How to Write a Query to Merge Two Accounts and their Activity Logs into One?
|
[
"",
"mysql",
"sql",
""
] |
How can I move items from one list box control to another listbox control using JavaScript in ASP.NET?
|
This code assumes that you have an anchor or button that will trigger the movement when it is clicked:
```
document.getElementById('moveTrigger').onclick = function() {
var listTwo = document.getElementById('secondList');
var options = document.getElementById('firstList').getElementsByTagName('option');
while(options.length != 0) {
listTwo.appendChild(options[0]);
}
}
```
|
If you're happy to use jQuery, it's very, very simple.
```
$('#firstSelect option:selected').appendTo('#secondSelect');
```
Where #firstSelect is the ID of the select box.
I've included a working example here:
<http://jsbin.com/aluzu> (to edit: <http://jsbin.com/aluzu/edit>)
|
Moving items in Dual Listboxes
|
[
"",
"asp.net",
"javascript",
"listbox",
"listbox-control",
""
] |
I recently built a program that parses a remote file from `\\some_server\c$\directory\file.xls` and it works fine on my local machine as just a normal aspx page.
> Then I put the program into a web part on my VM SharePoint server and I get this error: Access to the path `\\some_server\c$\directory\file.xls` is denied.
The file is shared to **Domain\Authenticated Users** so I am not sure why it would be denied? Is it possible my SharePoint is trying to call it with a local or network service account? How can I get it to read? Thank you.
|
Salamander is right; SharePoint doesn't run with enough trust to do this.
Changing the trust level for SharePoint in its web.config from WSS\_Medium to Full is the quick solution, but there are security implications.
|
Just a quick note, you could be running into the classic NTLM Double-Hop issue. You can authenticate to the front end, but because the front end does not have your password, it cannot then authenticate to a resource on another server.
Running with elevated privileges, and setting permissions based on the application pool identity, could be one way of moving you forward.
|
Accessing a remote file with a SharePoint Web Part
|
[
"",
"c#",
"sharepoint",
"web-parts",
""
] |
This question is a follow up to:
[Why can’t I call a method outside of an anonymous class of the same name](https://stackoverflow.com/questions/252267/why-cant-i-call-a-method-outside-of-an-anonymous-class-of-the-same-name)
This previous question answered **why**, but now I want to know whether javac **should** find run(int bar). (See the previous question for why run(42) fails.)
If it shouldn't, is that mandated by the spec? Does it produce ambiguous code? My point is, I think this is a bug. While the previous question explained why this code fails to compile, I feel it should compile if javac searched higher in the tree when it fails to find a match at the current level. I.e., if this.run() does not match, it should automatically check NotApplicable.this for a run method.
Also note that foo(int bar) is correctly found. If you give any reason why run(int bar) shouldn't be found, it must also explain why foo(int bar) is found.
```
public class NotApplicable {
public NotApplicable() {
new Runnable() {
public void run() {
// this works just fine, it automatically used NotApplicable.this when it couldn't find this.foo
foo(42);
// this fails to compile; javac finds this.run(), and it does not match
run(42);
// to force javac to find run(int bar) you must use the following
//NotApplicable.this.run(42);
}
};
}
private void run(int bar) {
}
public void foo(int bar) {
}
}
```
|
This behavior of javac conforms to the spec. See [§15.12 Method Invocation Expressions](http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#20448) in the Java Language Specification, specifically the paragraph under "Compile Time Step 1" explaining the meaning of an unqualified method invocation:
> If the Identifier appears within the scope (§6.3) of a visible method declaration with that name, then there must be an enclosing type declaration of which that method is a member. Let T be the innermost such type declaration. The class or interface to search is T.
In other words, the unqualified method *name* is searched for in all enclosing scopes, and the innermost "type declaration" (which means either a class or an interface declaration) in which the name is found is the one that will be searched for the whole signature (in "Compile Time Step 2").
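To make that rule concrete, here is a minimal sketch (hypothetical class name) showing both the unqualified call that fails and the qualified form that reaches the enclosing overload:

```java
public class Shadowing {
    static final StringBuilder log = new StringBuilder();

    void run(int bar) { log.append("outer:").append(bar); }

    void demo() {
        new Runnable() {
            public void run() {
                // run(42);             // would not compile: the innermost type whose
                //                      // scope declares a method named "run" is this
                //                      // anonymous class, and it has no run(int)
                Shadowing.this.run(42); // qualifying skips the anonymous class's "run"
            }
        }.run();
    }

    public static void main(String[] args) {
        new Shadowing().demo();
        System.out.println(log); // outer:42
    }
}
```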
|
Sounds like a recipe for ambiguity and fragility to me - as soon as a new method is added in your base class (okay, not so likely for an interface...) the meaning of your code changes completely.
Anonymous classes are pretty ugly already - making this bit of explicit doesn't bother me at all.
|
Should javac find methods outside of an anonymous class of the same name?
|
[
"",
"java",
"methods",
"javac",
"anonymous-class",
""
] |
On an ASP.NET MVC (Beta) site that I am developing, calls to ActionLink sometimes return URLs containing query strings. I have isolated the circumstances that produce this behavior, but I still do not understand why, instead of producing a clean URL, it decides to use a query string parameter. I know that functionally they are the same, but for consistency (and appearance) of the URLs this is not what I want.
Here are my routes:
```
routes.MapRoute(
"Photo Gallery Shortcut",
"group/{groupname}",
new { controller = "Photos", action = "All", Id = "" });
routes.MapRoute(
"Tagged Photos", //since the Tagged action takes an extra parameter, put it first
"group/{groupname}/Photos/Tagged/{tagname}/{sortby}",
new { controller = "Photos", action = "Tagged", Id = "", SortBy = "" });
routes.MapRoute(
"Photo Gallery", //since the Gallery's default action is "All" not "Index" it's listed separately
"group/{groupname}/Photos/{action}/{sortby}",
new { controller = "Photos", action = "All", Id = "", SortBy = "" });
routes.MapRoute(
"Group", //<-- "Group" Category defined above
"group/{groupname}/{controller}/{action}/{id}",
new {controller = "Photos", action = "Index", Id = ""});
```
Now the problem only occurs when I am looking at the view described by the route named "Tagged Photos" and execute ActionLink via:
```
Html.ActionLink<PhotosController>(p => p.All((string)ViewData["group"], ""), "Home")
```
Which produces the URL:
```
http://domain/group/GROUPNAME?sortBy=
```
From any other view the URL produced is:
```
http://domain/group/GROUPNAME
```
I have pulled down Phil's [ASP.NET Routing Debugger](http://haacked.com/archive/2008/03/13/url-routing-debugger.aspx), and everything appears in order. This one has me stumped. Any ideas?
|
Not sure why different views are producing different URLs.
But you can get rid of that sortBy param by assigning a default value to the first route.
new { sortBy = "" }
During generation, if sortBy matches the default, the route engine will skip that parameter (if it's in the query string).
|
You're going to have to use named routes here, not action routes, because of the way routing works in ASP.NET: it does "first match", not "best match".
|
Why the query strings in my ASP.NET MVC route?
|
[
"",
"c#",
"asp.net",
"asp.net-mvc",
"routes",
""
] |
I am using a cursor in my stored procedure. It works on a database that has a huge amount of data. For every item in the cursor I do an update operation. This is taking a huge amount of time to complete, almost 25 minutes. :( Is there any way I can reduce the time this takes?
|
When you need to do a more complex operation on each row than what a simple update would allow, you can try:
* Write a User Defined Function and use that in the update (probably still slow)
* Put data in a temporary table and use that in an UPDATE ... FROM:
Did you know about the UPDATE ... FROM syntax? It is quite powerful when things get more complex:
```
UPDATE
MyTable
SET
Col1 = CASE WHEN b.Foo = 'Bar' THEN LOWER(b.Baz) ELSE '' END,
Col2 = ISNULL(c.Bling, 0) * 100 / Col3
FROM
MyTable
INNER JOIN MySecondTable AS b ON b.Id = MyTable.SecondId
LEFT JOIN ##MyTempTable AS c ON c.Id = b.ThirdId
WHERE
MyTable.Col3 > 0
AND b.Foo IS NOT NULL
AND MyTable.TheDate > GETDATE() - 10
```
The example is completely made-up and may not make much sense, but you get the picture of how to do a more complex update without having to use a cursor. Of course, a temp table would not necessarily be required for it to work. :-)
|
The quick answer is not to use a cursor. The most efficient way to update lots of records is to use an update statement. There are not many cases where you have to use a cursor rather than an update statement; you just have to get clever about how you write the update statement.
If you posted a snapshot of your SQL you might get some help to achieve what you're after.
|
Why is my SQL Server cursor very slow?
|
[
"",
"sql",
"sql-server-2005",
""
] |
I was just wondering how many experienced programmers out there actually map out their processes or algorithms in a program like MS Visio or GNOME Dia?
I am trying to code some complex PHP for my website and just seem to be missing something. Is a diagram program going to help or should I be looking in another area?
|
I use Visio only for quick graphs that don't need to follow UML rules. It is sometimes useful for documentation that isn't about the detail of the code: if you need to show a high-level, abstract view of your code, Visio does the job. For example, documentation that shows how each big part communicates, or a simple activity diagram...
You can find a [SO list of free UML](https://stackoverflow.com/questions/168512/best-free-professional-looking-uml-tool#168548) editors if you need to do intensive UML design.
|
Every time I've tried to make a truly useful diagram in Visio, it has ended up being more work than it's worth. Never underestimate the power of pencil and paper, or better yet, a whiteboard.
But yes, explaining or writing out your problems will more quickly lead to a solution than merely sitting there and thinking about it.
|
Using Visio (Dia) to Map Out Algorithms
|
[
"",
"php",
"diagram",
"visio",
"dia",
""
] |
With a `TreeMap` it's trivial to provide a custom `Comparator`, thus overriding the semantics provided by `Comparable` objects added to the map. `HashMap`s however cannot be controlled in this manner; the functions providing hash values and equality checks cannot be 'side-loaded'.
I suspect it would be both easy and useful to design an interface and retrofit it into `HashMap` (or a new class). Something like this, except with better names:
```
interface Hasharator<T> {
int alternativeHashCode(T t);
boolean alternativeEquals(T t1, T t2);
}
class HasharatorMap<K, V> {
HasharatorMap(Hasharator<? super K> hasharator) { ... }
}
class HasharatorSet<T> {
HasharatorSet(Hasharator<? super T> hasharator) { ... }
}
```
The [case insensitive `Map`](https://stackoverflow.com/questions/212562/is-there-a-good-way-to-have-a-mapstring-get-and-put-ignore-case) problem gets a trivial solution:
```
new HasharatorMap(String.CASE_INSENSITIVE_EQUALITY);
```
Would this be doable, or can you see any fundamental problems with this approach?
Is the approach used in any existing (non-JRE) libs? (Tried google, no luck.)
EDIT: Nice workaround presented by hazzen, but I'm afraid this is the workaround I'm trying to avoid... ;)
EDIT: Changed title to no longer mention "Comparator"; I suspect this was a bit confusing.
EDIT: Accepted answer with relation to performance; would love a more specific answer!
EDIT: There is an implementation; see the accepted answer below.
EDIT: Rephrased the first sentence to indicate more clearly that it's the side-loading I'm after (and not ordering; ordering does not belong in HashMap).
|
[Trove4j](http://trove4j.sourceforge.net/html/overview.html) has the feature I'm after and they call it hashing strategies.
Their map has an implementation with different limitations and thus different prerequisites, so this does not implicitly mean that an implementation for Java's "native" HashMap would be feasible.
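For completeness, the pure-JDK alternative is to adapt the keys rather than the map, which is likely the kind of workaround the question's edit alludes to. A sketch with hypothetical names:

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

// Wrapper key whose equals/hashCode carry the alternative semantics
// (here: case-insensitive strings). Names are illustrative.
final class CaseInsensitiveKey {
    final String value;
    CaseInsensitiveKey(String value) { this.value = value; }
    @Override public boolean equals(Object o) {
        return o instanceof CaseInsensitiveKey
            && value.equalsIgnoreCase(((CaseInsensitiveKey) o).value);
    }
    @Override public int hashCode() {
        // must agree with equals: same hash for any casing
        return value.toLowerCase(Locale.ROOT).hashCode();
    }
}

public class CiDemo {
    public static void main(String[] args) {
        Map<CaseInsensitiveKey, Integer> map = new HashMap<>();
        map.put(new CaseInsensitiveKey("Hello"), 1);
        System.out.println(map.get(new CaseInsensitiveKey("HELLO"))); // 1
    }
}
```

The obvious cost is an allocation per lookup, which is exactly the overhead a side-loaded `Hasharator` would avoid.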
|
.NET has this via IEqualityComparer (for a type which can compare two objects) and IEquatable (for a type which can compare itself to another instance).
In fact, I believe it was a mistake to define equality and hashcodes in java.lang.Object or System.Object at all. Equality in particular is hard to define in a way which makes sense with inheritance. I keep meaning to blog about this...
But yes, basically the idea is sound.
|
Why not allow an external interface to provide hashCode/equals for a HashMap?
|
[
"",
"java",
"collections",
"hashmap",
"trove4j",
""
] |
`std::auto_ptr` is broken in VC++ 8 (which is what we use at work). My main gripe with it is that it allows `auto_ptr<T> x = new T();`, which of course leads to horrible crashes, while being simple to do by mistake.
From an [answer](https://stackoverflow.com/questions/106508/what-is-a-smart-pointer-and-when-should-i-use-one#110706) to another question here on stackoverflow:
> Note that the implementation of std::auto\_ptr in Visual Studio 2005 is horribly broken.
> <http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=98871>
> <http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=101842>
I want to use
* `boost::scoped_ptr`, for pointers that shouldn't pass ownership.
* `boost::shared_ptr`, for pointers in containers and elsewhere where they are required.
* `std::auto_ptr`, for pointers that should/can pass ownership.
But since `std::auto_ptr` is broken for me, I wonder what would be the best approach:
* Replace `std::auto_ptr` with something from the net, like [this](http://groups.google.com/group/comp.std.c++/msg/97a2539a1bbbe491) one from Rani Sharoni (haven't tried it yet).
* Use `boost::shared_ptr` instead. Will of course work, although there will be some minor overhead that I don't care about. But I want to use `auto_ptr` to signal the intent of the pointer. (See [this](https://stackoverflow.com/questions/197048/idiomatic-use-of-stdautoptr-or-only-use-sharedptr#197083) answer for a vote on this approach.)
* I will never need to pass ownership in practice, so I shouldn't worry about this.
---
Update:
Here is what I did:
I copied the aforementioned auto\_ptr implementation by Rani Sharoni. [From here](http://groups.google.com/group/comp.std.c++/msg/97a2539a1bbbe491).
Did some minor tests:
```
class T
{
public:
T() {
OutputDebugStringA("T\n");
};
~T() {
OutputDebugStringA("~T\n");
};
};
{
fix::auto_ptr<T> x(new T); // This just works.
}
{
fix::auto_ptr<T> x = (new T); // Doesn't compile. Great!
}
{
fix::auto_ptr<T> x = fix::auto_ptr<T>(new T); // Transfer of ownership works also.
}
```
Of course these tests are by no means exhaustive and you shouldn't trust them. Implementing an exception-safe templated class is hairy business. At least this works better than the built-in one.
Note: I don't know if I'm allowed to use this implementation yet, with respect to copyright. I have emailed Rani and I'm waiting for a reply. I'll update this post when I know more.
Permission is granted for everyone to use Rani Sharoni's auto\_ptr implementation as you wish.
Thank you for all your replies.
|
Move to boost smart pointers.
In the meantime, you may want to extract a working auto\_ptr implementation from an old / another STL, so you have working code.
I believe that auto\_ptr semantics are fundamentally broken - it saves typing, but the interface actually is not simpler: you still have to track which instance is the current owner and make sure the owner leaves last.
unique-ptr "fixes" that, by making release not only give up ownership, but also setting the RHS to null. It is the closest replacement for auto-ptr, but with its different semantics it is not a drop-in replacement.
There's an introductory article to [boost smart pointers](http://www.codeproject.com/KB/stl/boostsmartptr.aspx), by, ahem, me.
|
Have you considered using [STLPort](http://www.stlport.org/)?
|
Replacing auto_ptr in VC++ 8
|
[
"",
"c++",
"smart-pointers",
""
] |
How would I get the length of an `ArrayList` using a JSF EL expression?
```
#{MyBean.somelist.length}
```
does not work.
|
Yes, since some genius in the Java API creation committee decided that, even though certain classes have `size()` members or `length` attributes, they won't implement `getSize()` or `getLength()` which JSF and most other standards require, you can't do what you want.
There's a couple ways to do this.
One: add a function to your Bean that returns the length:
```
In class MyBean:
public int getSomelistLength() { return this.somelist.size(); }
In your JSF page:
#{MyBean.somelistLength}
```
Two: If you're using Facelets (Oh, God, why aren't you using Facelets!), you can add the fn namespace and use the length function
```
In JSF page:
#{ fn:length(MyBean.somelist) }
```
|
You mean size() don't you?
```
#{MyBean.somelist.size()}
```
works for me (using JBoss Seam which has the Jboss EL extensions)
|
How do you get the length of a list in the JSF expression language?
|
[
"",
"java",
"jsp",
"jsf",
"jstl",
"el",
""
] |
The WPF control WindowsFormsHost inherits from IDisposable.
If I have a complex WPF visual tree containing some of the above controls what event or method can I use to call IDispose during shutdown?
|
Building from Todd's answer, I came up with this generic solution for any WPF control that is hosted by a Window and wants to guarantee disposal when that window is closed.
(Obviously if you can avoid inheriting from IDisposable do, but sometimes you just can't)
Dispose is called when the first parent window in the hierarchy is closed.
(Possible improvement - change the event handling to use the weak pattern)
```
public partial class MyCustomControl : IDisposable
{
public MyCustomControl() {
InitializeComponent();
Loaded += delegate(object sender, RoutedEventArgs e) {
System.Windows.Window parent_window = Window.GetWindow(this);
if (parent_window != null) {
parent_window.Closed += delegate(object sender2, EventArgs e2) {
Dispose();
};
}
};
...
}
...
}
```
|
In the case of application shutdown there is nothing you need to do to properly dispose of the WindowsFormsHost. Since it derives from HwndHost, disposal is handled when the Dispatcher is shut down. If you use Reflector you will see that when HwndHost is initialized it creates a WeakEventDispatcherShutdown.
If you are using it in a dialog, the best I can suggest is to override OnClosed and dispose of your host there; otherwise the HwndHost will hang around until the Dispatcher is shut down.
```
public partial class Dialog : Window
{
public Dialog()
{
InitializeComponent();
}
protected override void OnClosed(EventArgs e)
{
if (host != null)
host.Dispose();
base.OnClosed(e);
}
}
```
A simple way to test when dispose gets called is to derive a custom class from WindowsFormsHost and play around with different situations. Put a break point in dispose and see when it gets called.
```
public class CustomWindowsFormsHost : WindowsFormsHost
{
protected override void Dispose(bool disposing)
{
base.Dispose(disposing);
}
}
```
|
When can I dispose an IDisposable WPF control e.g. WindowsFormsHost?
|
[
"",
"c#",
".net",
"wpf",
"idisposable",
""
] |
At what point is it better to switch from java.net to java.nio? java.net (not the Microsoft entity) is easier to understand and more familiar, while nio is scalable and comes with some extra nifty features.
Specifically, I need to make a choice for this situation: we have one control center managing hardware at several remote sites (each site with one computer managing multiple hardware units: a transceiver, TNC, and rotator). My idea was to write a server app on each machine that acts as a gateway from the control center to the radio hardware, with one socket for each unit. From my understanding, NIO is meant for one server, many clients, but what I'm thinking of is one client, many servers.
I suppose a third option is to use MINA, but I'm not sure if that's throwing too much at a simple problem.
---
Each remote server will have up to 8 connections, all from the same client (to control all the hardware, and separate TX/RX sockets). The single client will want to connect to several servers at once, though. Instead of putting each server on different ports, is it possible to use channel selectors on the client side, or is it better to go multi-threaded io on the client side of things and configure the servers differently?
---
Actually, since the remote machines serve only to interact with other hardware, would RMI or IDL/CORBA be a better solution? Really, I just want to be able to send commands and receive telemetry from the hardware, and not have to make up some application layer protocol to do it.
|
Scalability will probably drive your choice of package. java.net will require one thread per socket. Coding it will be significantly easier. java.nio is much more efficient, but can be hairy to code around.
I would ask yourself how many connections you expect to be handling. If it's relatively few (say, < 100), I'd go with java.net.
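On the client-side selector question: yes, a single `Selector` can drive outbound connections to many servers, so the client needs no thread per socket. A rough sketch of that "one client, many servers" shape (in-process listeners stand in for the remote site servers; all names are illustrative):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class MultiServerClient {
    // Opens one non-blocking connection per "site" and completes them all
    // from a single Selector; returns how many connections finished.
    public static int connectAll(int serverCount) throws IOException {
        ServerSocketChannel[] sites = new ServerSocketChannel[serverCount];
        for (int i = 0; i < serverCount; i++) {
            sites[i] = ServerSocketChannel.open();
            sites[i].bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        }
        Selector selector = Selector.open();
        int connected = 0;
        for (ServerSocketChannel site : sites) {
            SocketChannel ch = SocketChannel.open();
            ch.configureBlocking(false);
            if (ch.connect(site.getLocalAddress())) {
                connected++;                                // finished immediately
                ch.register(selector, SelectionKey.OP_READ);
            } else {
                ch.register(selector, SelectionKey.OP_CONNECT);
            }
        }
        while (connected < serverCount) {
            selector.select();
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isConnectable()
                        && ((SocketChannel) key.channel()).finishConnect()) {
                    connected++;
                    key.interestOps(SelectionKey.OP_READ);  // ready for telemetry
                }
            }
            selector.selectedKeys().clear();
        }
        for (SelectionKey key : selector.keys()) key.channel().close();
        selector.close();
        for (ServerSocketChannel site : sites) site.close();
        return connected;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(connectAll(3)); // 3
    }
}
```

With only eight connections per site, though, plain blocking java.net sockets with a thread each would be simpler and plenty fast.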
|
Avoid NIO unless you have a good reason to use it. [It's not much fun and may not be as beneficial as you would think](http://blog.uncommons.org/2008/09/03/avoid-nio-get-better-throughput/). You may get better scalability once you are dealing with tens of thousands of connections, but at lower numbers you'll probably get better throughput with blocking IO. As always though, make your own measurements before committing to something you might regret.
Something else to consider is that if you want to use SSL, NIO makes it extremely painful.
|
java.net versus java.nio
|
[
"",
"java",
"networking",
"io",
"nio",
""
] |
Why does this lambda expression not compile?
```
Action a = () => throw new InvalidOperationException();
```
Conjecture is fine, but I would really appreciate references to the C# language specification or other documentation.
And yes, I know that the following is valid and will compile:
```
Action a = () => { throw new InvalidOperationException(); };
```
The context where I would use something like this is described on [this blog post](http://jacobcarpenter.wordpress.com/2008/10/06/c-compiler-eccentricity-of-the-day-throwing-lambda/).
|
Hmm. I've got an answer, but it's not great.
I don't believe that there's a "throw" *expression*. There's a throw *statement*, but not just an expression. Compare this with "Console.WriteLine()" which is a method invocation expression with a void type.
As a parallel, you can't have a switch statement, or an if statement etc as the body of a lambda on its own. You can only have an expression or a block (section 7.14).
Is that any help?
|
Here's my take:
`throw` is a statement, not an expression.
And the reference:
> 12.3.3.11 Throw statements
>
> For a statement stmt of the form
>
> `throw expr;`
>
> the definite assignment state of v
> at the beginning of expr is the same
> as the definite assignment state of v
> at the beginning of stmt.
To explain the essence perhaps one should think about what an expression implies within the C# lambda construct. It is simply syntactic sugar for:
```
delegate () { return XXX; }
```
where `XXX` is an expression
|
C# Action lambda limitation
|
[
"",
"c#",
"compiler-construction",
"lambda",
""
] |
First off, I know next to nothing about language theory, and I barely know any languages other than Java, but I had an idea that I think would be cool. I need you guys to tell me:
a: why it sucks
b: how language x has had that for years
c: how my mind sucks
d: all of the above
The idea would give composition the same ease of code reuse that `extends` does.
So if you had a class like this:
```
public interface A {
public void methodInA();
}
```
And then you had a class like this:
```
public class B {
private composed A;
public B() {
// construct A within constructor
}
}
```
You would then be able to do this:
```
B myB = new B();
myB.methodInA();
```
Without having to add the delegation in B's class. But you could also do the same as with inheritance, i.e.:
```
@Override
public void methodInA(){
// B's own delegation method
}
```
Disadvantages include:
* methods are hidden in the source code, making it less obvious where the call is coming from, but this is also the case with `extends`
* if composed fields share the same method signature, the conflict needs to be resolved somehow (how do conflicting interfaces solve this?)
* if you wanted to have several composed fields of the same type, there would be an obvious conflict for which field to delegate to
* probably 100 other things I've not thought of
Like I say, I'm obviously no language theorist, and I haven't spent ages thinking about it, the idea just popped in my head and I wanted to know how wrong I am. I just think it would be kind of cool.
|
I think if you restricted it such that a class could only use this feature to compose a single class it would be somewhat useful and would avoid a lot of the headaches that are being discussed.
Personally I hate inheritance of concrete classes. I'm a big proponent of Item 14 from Bloch's *Effective Java*, [Favor composition over inheritence](http://books.google.com/books?id=ZZOiqZQIbRMC&pg=PA71&lpg=PA71&dq=effective+java+composition+over+inheritance&source=bl&ots=UZM06thG21&sig=MQ4wfulnG28_z6TzvxL8uZaszn4&hl=en&sa=X&oi=book_result&resnum=1&ct=result). I think that something like this would make it a little easier to implement the idiom he recommends in that item.
Honestly, if you really knew what you were doing I'll bet you could write a compiler annotation that would handle this. So assuming you had a class Bar that implemented the interface IBar, your class would look like this:
```
public class Foo {
@Delegate(IBar.class)
private Bar bar;
// initialize bar via constructor or setter
}
```
Then during compilation Foo could be made to implement IBar and any of the methods on that interface that weren't already implemented by Foo would end up being generated to look like this:
```
public Baz method1(Qux val) {
return bar.method1(val);
}
```
As mentioned above you would want to make the restriction that only one field per class could use this annotation. If multiple fields had this annotation you'd probably want to throw a compilation error. Alternatively you could figure out a way to encode some sort of precedence model into the parameters passed to it.
Now that I've written this out that seems kinda cool. Maybe I'll play around with it next week. I'll update this if I manage to figure anything out.
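For comparison, here is the boilerplate the proposal (or the annotation above) would save you from writing by hand today. `A` and `B` are the names from the question; `AImpl` is an assumed implementation, and `methodInA` returns a `String` here purely so the delegation is observable:

```java
interface A {
    String methodInA();
}

class AImpl implements A {
    public String methodInA() { return "from A"; }
}

// What "private composed A" would expand to: an ordinary field
// plus one hand-written forwarding method per interface method.
class B implements A {
    private final A a = new AImpl();                    // construct A within constructor
    public String methodInA() { return a.methodInA(); } // explicit delegation
}

public class CompositionDemo {
    public static void main(String[] args) {
        A myB = new B();
        System.out.println(myB.methodInA()); // from A
    }
}
```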
|
It sounds cool but I think it makes for some horrible language constructs. Obviously there is a problem if you declare more than one 'composition' of the same class, but even if you forbid that what about the case where a call matches a method in more than one of the (different) composed classes? You would have to specify which one was called in the main class, and you would need extra syntax for that. The situation becomes even worse if there are public members in the classes.
Composition is used to prevent problems with multiple inheritance. Allowing composition like this is effectively permitting multiple inheritance, at least in terms of resolving which method to call. Since a key design decision with Java was to disallow multiple inheritance (for good reasons) I think it unlikely that this would ever be introduced to Java.
|
Would syntax for composition be a useful addition to Java?
|
[
"",
"java",
"inheritance",
"compiler-construction",
"programming-languages",
"composition",
""
] |
What is the difference between a hash map and the dictionary ADT, and when should one be preferred over the other? For my programming assignment my instructor has asked me to use one of them, but I don't see any difference between the two. The program is supposed to work with a huge number of strings. Any suggestions?
|
In terms of Java, both the class `HashMap` and the class `Dictionary` are *implementations* of the "Map" abstract data type. Abstract data types are not specific to any one programming language, and the Map ADT can also be known as a Hash, or a Dictionary, or an Associative Array (others at <http://en.wikipedia.org/wiki/Associative_array>). (Notice we're making a distinction between the `Dictionary` class and the Dictionary ADT.)
The `Dictionary` *class* has been marked as obsolete, so it's best not to use it.
|
This Stack Overflow post does a good job explaining the key differences:
[Java hashmap vs hashtable](https://stackoverflow.com/questions/40471/java-hashmap-vs-hashtable)
Note that Hashtable is simply an implementation of the Dictionary ADT. Also note that Java considers Dictionary ["obsolete"](http://java.sun.com/j2se/1.4.2/docs/api/java/util/Dictionary.html).
The fact that Hashtable is synchronized doesn't buy you much for most uses. Use HashMap.
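A quick illustration of that advice: both classes implement the same `Map` interface (the ADT), so the practical differences come down to synchronization and null handling. Names here are illustrative:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class MapChoice {
    public static void main(String[] args) {
        // HashMap: unsynchronized, permits one null key and null values.
        Map<String, Integer> counts = new HashMap<>();
        counts.put(null, 0);    // fine here; Hashtable would throw NullPointerException
        counts.put("alpha", 1);

        // If you later need thread safety, wrap it rather than
        // switching to the legacy Hashtable.
        Map<String, Integer> shared = Collections.synchronizedMap(counts);
        System.out.println(shared.get("alpha")); // 1
    }
}
```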
|
Difference between a HashMap and a dictionary ADT
|
[
"",
"java",
"data-structures",
""
] |
Is there a quick way to sort the items of a select element?
Or do I have to resort to writing JavaScript?
Any ideas are appreciated.
```
<select size="4" name="lstALL" multiple="multiple" id="lstALL" tabindex="12" style="font-size:XX-Small;height:95%;width:100%;">
<option value="0"> XXX</option>
<option value="1203">ABC</option>
<option value="1013">MMM</option>
</select>
```
|
This will do the trick. Just pass it your select element a la: `document.getElementById('lstALL')` when you need your list sorted.
```
function sortSelect(selElem) {
    // Copy each option's [text, value] pair into a temporary array
    var tmpAry = new Array();
    for (var i = 0; i < selElem.options.length; i++) {
        tmpAry[i] = new Array();
        tmpAry[i][0] = selElem.options[i].text;
        tmpAry[i][1] = selElem.options[i].value;
    }
    // The default sort compares the stringified pairs, i.e. by option text
    tmpAry.sort();
    // Empty the list, then rebuild it in sorted order
    while (selElem.options.length > 0) {
        selElem.options[0] = null;
    }
    for (var i = 0; i < tmpAry.length; i++) {
        selElem.options[i] = new Option(tmpAry[i][0], tmpAry[i][1]);
    }
}
```
|
This solution worked very nicely for me using jquery, thought I'd cross reference it here as I found this page before the other one. Someone else might do the same.
```
$("#id").html($("#id option").sort(function (a, b) {
return a.text == b.text ? 0 : a.text < b.text ? -1 : 1
}))
```
from [Sorting dropdown list using Javascript](https://stackoverflow.com/questions/667010/sorting-dropdown-list-using-javascript/667198#667198)
|
Javascript to sort contents of select element
|
[
"",
"javascript",
"select",
""
] |
I'm starting to develop a web application in PHP that I hope will become incredibly popular and make me famous and rich. :-)
If that time comes, my decision whether to parse the API's data as XML with SimpleXML or to use json\_decode could make a difference in the app's scalability.
Does anyone know which of these approaches is more efficient for the server?
**Update:** I ran a rudimentary test to see which method was more performant. It appears that `json_decode` executes slightly faster than `simplexml_load_string`. This isn't terribly conclusive because it doesn't test things like the scalability of concurrent processes. My conclusion is that I will go with SimpleXML for the time being because of its support for XPath expressions.
```
<?php
$xml = file_get_contents('sample.xml');
$json = file_get_contents('sample.js');
$iters = 1000;
// simplexml_load_string
$start_xml = microtime(true);
for ($i = 0; $i < $iters; ++$i) {
$obj = simplexml_load_string($xml, 'SimpleXMLElement', LIBXML_NOCDATA);
}
$end_xml = microtime(true);
// json_decode
$start_json = microtime(true);
for ($i = 0; $i < $iters; ++$i) {
$obj = json_decode($json);
}
$end_json = microtime(true);
?>
<pre>XML elapsed: <?=sprintf('%.4f', ($end_xml - $start_xml))?></pre>
<pre>JSON elapsed: <?=sprintf('%.4f', ($end_json - $start_json))?></pre>
```
Result:
```
XML elapsed: 9.9836
JSON elapsed: 8.3606
```
|
As the "lighter" format, I'd expect JSON to be slightly less stressful on the server, but I doubt it will be the biggest performance issue you find yourself dealing with as your site grows in popularity. Use whichever format you're more comfortable with.
Alternatively, if you know how you'll be structuring your data, you could try making an XML-formatted version and a JSON-formatted version and just run it against your setup a few hundred thousand times to see if it makes a noticeable difference.
|
Not really an answer to the question, but you could just wait until you have lots of users hitting your system. You may be surprised where your bottlenecks actually lie:
<http://gettingreal.37signals.com/ch04_Scale_Later.php>
|
What puts less load on a PHP server: SimpleXML or json_decode?
|
[
"",
"php",
"performance",
"scalability",
"simplexml",
"json",
""
] |
What are the advantages and disadvantages of using inline functions in C++? I see that they only increase the performance of the code that the compiler outputs, but with today's optimized compilers, fast CPUs, huge memory, etc. (not like in the 1980s, where memory was scarce and everything had to fit in 100 KB of it), what advantages do they really have today?
|
Inline functions are faster because you don't need to push and pop things on/off the stack like parameters and the return address; however, it does make your binary slightly larger.
Does it make a significant difference? Not noticeably enough on modern hardware for most. But it can make a difference, which is enough for some people.
Marking something inline does not give you a guarantee that it will be inlined. It's just a suggestion to the compiler. Sometimes it's not possible, such as when you have a virtual function or when recursion is involved. And sometimes the compiler just chooses not to use it.
I could see a situation like this making a detectable difference:
```
inline int aplusb_pow2(int a, int b) {
return (a + b)*(a + b) ;
}
for(int a = 0; a < 900000; ++a)
for(int b = 0; b < 900000; ++b)
aplusb_pow2(a, b);
```
|
## Advantages
* By inlining your code where it is needed, your program will spend less time on the function call and return overhead. It is supposed to make your code go faster, even as it grows larger (see below). Inlining trivial accessors is an example of effective inlining.
* By marking a function as inline, you can put its definition in a header file (i.e. it can be included in multiple compilation units without the linker complaining)
## Disadvantages
* It can make your code larger (i.e. if you use inline for non-trivial functions). As such, it could provoke paging and defeat optimizations from the compiler.
* It slightly breaks your encapsulation because it exposes the internals of your object's processing (but then, every "private" member would, too). This means you must not use inlining in a PImpl pattern.
* It slightly breaks your encapsulation in a second way: C++ inlining is resolved at compile time, which means that should you change the code of an inlined function, you would need to recompile all the code using it to be sure it is updated (for the same reason, I avoid default values for function parameters)
* When used in a header, it makes your header file larger, and thus will dilute interesting information (like the list of a class's methods) with code the user doesn't care about (this is the reason I declare inlined functions inside a class, but define them in a header after the class body, and never inside the class body).
## Inlining Magic
* The compiler may or may not inline the functions you marked as inline; it may also decide to inline functions not marked as inline at compilation or linking time.
* Inline works like a copy/paste controlled by the compiler, which is quite different from a pre-processor macro: The macro will be forcibly inlined, will pollute all the namespaces and code, won't be easily debuggable, and will be done even if the compiler would have ruled it as inefficient.
* Every method of a class defined inside the body of the class itself is considered "inlined" (even if the compiler can still decide not to inline it)
* Virtual methods are not supposed to be inlinable. Still, sometimes, when the compiler can know for sure the type of the object (i.e. the object was declared and constructed inside the same function body), even a virtual function will be inlined because the compiler knows exactly the type of the object.
* Template methods/functions are not always inlined (their presence in an header will not make them automatically inline).
* The next step after `inline` is template metaprogramming, i.e. by "inlining" your code at compile time, sometimes the compiler can deduce the final result of a function... So a complex algorithm can sometimes be reduced to a kind of `return 42;` statement. This is for me *extreme inlining*. It happens rarely in real life, it makes compilation time longer, it will not bloat your code, and it will make your code faster. But like the grail, don't try to apply it everywhere, because most processing cannot be resolved this way... Still, this is cool anyway...
:-p
|
What are the benefits of inline functions?
|
[
"",
"c++",
"inline",
"inline-functions",
""
] |
I'm trying to define a table to store student grades for a online report card. I can't decide how to do it, though.
The grades are given by subject, in a trimestral period. Every trimester has an average grade, the total missed classes and a "recovering grade" (I don't know the right term in English, but it's an extra test you take to try to raise your grade if you're below the average). I also have to store the year average and the final "recovering grade". Basically, it's like this:
```
|1st Trimester |2nd Trimester |3rd Trimester
Subj. |Avg. |Mis. |Rec |Avg. |Mis. |Rec |Avg. |Mis. |Rec |Year Avg. |Final Rec.
Math |5.33 |1 |4 |8.0 |0 |7.0 |2 |6.5 |7.0
Sci. |5.33 |1 |4 |8.0 |0 |7.0 |2 |6.5 |7.0
```
I could store this information in a single DB row, with each row like this:
```
1tAverage | 1tMissedClasses | 1tRecoveringGrade | 2tAverage | 2tMissedClasses | 2tRecoveringGrade
```
And so on, but I figured this would be a pain to maintain if the school ever decides to grade by bimester or some other period (like it used to be up until 3 years ago).
I could also generalize the table fields, and use a tinyint for flagging for which trimester those grades are, or if they're the year finals.
But this one would call for a lot of subqueries to write the report card, which is also a pain to maintain.
Which of the two is better, or is there some other way?
Thanks
|
You could try structuring it like this with your tables. I didn't have all the information so I made some guesses at what you might need or do with it all.
TimePeriods:
* ID(INT)
* PeriodTimeStart(DateTime)
* PeriodTimeEnd(DateTime)
* Name(VARCHAR(50))
Students:
* ID(INT)
* FirstName(VARCHAR(60))
* LastName(VARCHAR(60))
* Birthday(DateTime)
* [any other relevant student field information added... like contact info, etc]
Grading:
* ID(INT)
* StudentID(INT)
* GradeValue(float)
* TimePeriodID(INT)
* IsRecoveringGrade(boolean)
MissedClasses:
* ID(INT)
* StudentID(INT)
* ClassID(INT)
* TimePeriodID(INT)
* DateMissed(DateTime)
Classes:
* ID(INT)
* ClassName (VARCHAR(50))
* ClassDescription (TEXT)
|
I think the best solution is to store one row per period. So you'd have a table like:
```
grades
------
studentID
periodNumber
averageGrade
missedClasses
recoveringGrade
```
So if it's 2 semesters, you'd have periods 1 and 2. I'd suggest using period 0 to mean "overall for the year".
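To see how the one-row-per-period layout composes, here is a rough sketch (illustrative JavaScript, not SQL — in the database this would be a simple `GROUP BY`; the row and function names are mine): computing a year average is just a filter and a reduce over the per-period rows.

```javascript
// Hypothetical rows from the one-row-per-period "grades" table.
var grades = [
  { studentID: 1, periodNumber: 1, averageGrade: 5.33, missedClasses: 1 },
  { studentID: 1, periodNumber: 2, averageGrade: 8.0,  missedClasses: 0 },
  { studentID: 1, periodNumber: 3, averageGrade: 7.0,  missedClasses: 2 }
];

// Year average for one student: filter that student's rows, then average them.
function yearAverage(rows, studentID) {
  var own = rows.filter(function (r) { return r.studentID === studentID; });
  var sum = own.reduce(function (acc, r) { return acc + r.averageGrade; }, 0);
  return sum / own.length;
}

console.log(yearAverage(grades, 1)); // ≈ 6.78
```

The point of the sketch: switching from trimesters to bimesters only changes the number of rows, not the schema or the aggregation logic.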
|
Database table for grades
|
[
"",
"sql",
"database",
"data-modeling",
""
] |
All,
As part of an application I'm writing, I need to have an HTTP PUT webservice which accepts incoming image data, which will be analyzed, validated, and added to a local file store.
My issue arises after the size validation as the
> `$_SERVER['CONTENT_LENGTH']`
has a > 0 value, and this value is identical to the test file size, so I can assume that all is going well up to this point *but* when I try to read the incoming stream data using
> `file_get_contents('php://stdin');`
I get an empty string. I've also tried using
> `file_get_contents('php://input');`
And this give me the same result of an empty string.
Any help, suggestions or direction will be appreciated.
NB: I'm using
* PHP 5.2.6
* Apache 2.0
|
My best guess is that you need to alter httpd.conf to not deny PUT requests. Have you checked that?
|
Apache HTTPD denies PUT requests by default. You might check out mod\_put:
<http://perso.ec-lyon.fr/lyonel.vincent/apache/mod_put.html>
and add this to httpd.conf:
```
<Location /upload/dir>
EnablePut On
AuthType Basic
AuthName "Web publishing"
AuthUserFile /www/etc/passwd
AuthGroupFile /www/etc/group
<Limit PUT>
require valid-user
</Limit>
</Location>
```
|
Unable to access HTTP PUT data in webservice code
|
[
"",
"php",
"web-services",
"rest",
""
] |
I have an array of integers:
```
int[] number = new int[] { 2,3,6,7 };
```
What is the easiest way of converting these into a single string where the numbers are separated by a character (like: `"2,3,6,7"`)?
I'm using C# and .NET 3.5.
|
```
var ints = new int[] {1, 2, 3, 4, 5};
var result = string.Join(",", ints.Select(x => x.ToString()).ToArray());
Console.WriteLine(result); // prints "1,2,3,4,5"
```
As of (at least) .NET 4.5,
```
var result = string.Join(",", ints.Select(x => x.ToString()).ToArray());
```
is equivalent to:
```
var result = string.Join(",", ints);
```
I see several solutions advertise usage of StringBuilder. Someone complains that the Join method should take an IEnumerable argument.
I'm going to disappoint you :) String.Join requires an array for a single reason - performance. The Join method needs to know the size of the data to effectively preallocate the necessary amount of memory.
Here is a part of the internal implementation of String.Join method:
```
// length computed from length of items in input array and length of separator
string str = FastAllocateString(length);
fixed (char* chRef = &str.m_firstChar) // note that we use direct memory access here
{
UnSafeCharBuffer buffer = new UnSafeCharBuffer(chRef, length);
buffer.AppendString(value[startIndex]);
for (int j = startIndex + 1; j <= num2; j++)
{
buffer.AppendString(separator);
buffer.AppendString(value[j]);
}
}
```
|
Although the OP specified .NET 3.5, people wanting to do this in .NET 2.0 with C# 2.0 can do this:
```
string.Join(",", Array.ConvertAll<int, String>(ints, Convert.ToString));
```
I find there are a number of other cases where the use of the Convert.xxx functions is a neater alternative to a lambda, although in C# 3.0 the lambda might help the type-inferencing.
A fairly compact C# 3.0 version which works with .NET 2.0 is this:
```
string.Join(",", Array.ConvertAll(ints, item => item.ToString()))
```
|
How can I join int[] to a character-separated string in .NET?
|
[
"",
"c#",
".net",
".net-3.5",
""
] |
I'm working through [Practical Web 2.0 Applications](https://rads.stackoverflow.com/amzn/click/com/1590599063) currently and have hit a bit of a roadblock. I'm trying to get PHP, MySQL, Apache, Smarty and the Zend Framework all working correctly so I can begin to build the application. I have gotten the bootstrap file for Zend working, shown here:
```
<?php
require_once('Zend/Loader.php');
Zend_Loader::registerAutoload();
// load the application configuration
$config = new Zend_Config_Ini('../settings.ini', 'development');
Zend_Registry::set('config', $config);
// create the application logger
$logger = new Zend_Log(new Zend_Log_Writer_Stream($config->logging->file));
Zend_Registry::set('logger', $logger);
// connect to the database
$params = array('host' => $config->database->hostname,
'username' => $config->database->username,
'password' => $config->database->password,
'dbname' => $config->database->database);
$db = Zend_Db::factory($config->database->type, $params);
Zend_Registry::set('db', $db);
// handle the user request
$controller = Zend_Controller_Front::getInstance();
$controller->setControllerDirectory($config->paths->base .
'/include/Controllers');
// setup the view renderer
$vr = new Zend_Controller_Action_Helper_ViewRenderer();
$vr->setView(new Templater());
$vr->setViewSuffix('tpl');
Zend_Controller_Action_HelperBroker::addHelper($vr);
$controller->dispatch();
?>
```
This calls the IndexController. The error comes with the use of this Templater.php to implement Smarty with Zend:
```
<?php
class Templater extends Zend_View_Abstract
{
protected $_path;
protected $_engine;
public function __construct()
{
$config = Zend_Registry::get('config');
require_once('Smarty/Smarty.class.php');
$this->_engine = new Smarty();
$this->_engine->template_dir = $config->paths->templates;
$this->_engine->compile_dir = sprintf('%s/tmp/templates_c',
$config->paths->data);
$this->_engine->plugins_dir = array($config->paths->base .
'/include/Templater/plugins',
'plugins');
}
public function getEngine()
{
return $this->_engine;
}
public function __set($key, $val)
{
$this->_engine->assign($key, $val);
}
public function __get($key)
{
return $this->_engine->get_template_vars($key);
}
public function __isset($key)
{
return $this->_engine->get_template_vars($key) !== null;
}
public function __unset($key)
{
$this->_engine->clear_assign($key);
}
public function assign($spec, $value = null)
{
if (is_array($spec)) {
$this->_engine->assign($spec);
return;
}
$this->_engine->assign($spec, $value);
}
public function clearVars()
{
$this->_engine->clear_all_assign();
}
public function render($name)
{
return $this->_engine->fetch(strtolower($name));
}
public function _run()
{ }
}
?>
```
The error I am getting when I load the page is this:
`Fatal error: Call to a member function fetch() on a non-object in /var/www/phpweb20/include/Templater.php on line 60`
I understand it doesn't see $name as an object, but I don't know how to go about fixing this. Isn't the controller supposed to refer to the index.tpl? I haven't been able to discover what the $name variable represents and how to fix this to get the foundation working.
Any help you have is much appreciated!
|
The problem isn't with the `$name` variable but rather with the `$_engine` variable. It's currently empty. You need to verify that the path specification to Smarty.class.php is correct.
You might try this to begin your debugging:
```
$this->_engine = new Smarty();
print_r($this->_engine);
```
If it turns out that `$_engine` is correct at that stage, then verify that it is still correctly populated within the render() function.
|
Zend has an example of creating a templating system which implements the Zend\_View\_Interface here: <http://framework.zend.com/manual/en/zend.view.scripts.html#zend.view.scripts.templates.interface>
That might save you some time from trying to debug a custom solution.
|
Call to a member function on a non-object
|
[
"",
"php",
"zend-framework",
"smarty",
""
] |
How do I remove the key 'bar' from an array foo so that 'bar' won't show up in
```
for(key in foo){alert(key);}
```
|
Don't use **delete**, as it won't remove an element from an array; it will only leave that index undefined, which will then not be reflected correctly in the length of the array.
If you know the key you should use **splice** i.e.
```
myArray.splice(key, 1);
```
For someone in Steven's position you can try something like this:
```
for (var key in myArray) {
if (key == 'bar') {
myArray.splice(key, 1);
}
}
```
or
```
for (var key in myArray) {
if (myArray[key] == 'bar') {
myArray.splice(key, 1);
}
}
```
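As a quick illustration of the difference (my own sketch, not part of the original answer), here is what **delete** and **splice** do to a plain numeric array:

```javascript
// delete leaves a hole: the length stays the same, the index no longer
// exists in the array, and reading it yields undefined.
var withDelete = ['a', 'bar', 'c'];
delete withDelete[1];
console.log(withDelete.length);   // 3
console.log(1 in withDelete);     // false

// splice actually removes the element and reindexes what follows.
var withSplice = ['a', 'bar', 'c'];
withSplice.splice(1, 1);
console.log(withSplice.length);   // 2
console.log(withSplice[1]);       // "c"
```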
|
```
delete foo[key];
```
:D
|
How do I unset an element in an array in javascript?
|
[
"",
"javascript",
"arrays",
""
] |
I need a way to determine the type of an HTML element in JavaScript. It has the ID, but the element itself could be a `<div>`, a `<form>` field, a `<fieldset>`, etc. How can I achieve this?
|
[`nodeName`](https://developer.mozilla.org/en-US/docs/Web/API/Node/nodeName) is the attribute you are looking for. For example:
```
var elt = document.getElementById('foo');
console.log(elt.nodeName);
```
Note that `nodeName` returns the element name capitalized and without the angle brackets, which means that if you want to check whether an element is a `<div>` element you could do it as follows:
```
elt.nodeName == "DIV"
```
While this would not give you the expected results:
```
elt.nodeName == "<div>"
```
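One wrinkle worth knowing: in XML/XHTML documents `nodeName` can come back in the document's original case rather than upper-case, so a case-insensitive comparison is safer. A small sketch (the helper name is mine, not part of any API):

```javascript
// Hypothetical helper: case-insensitive tag-name check.
// Works on anything exposing a nodeName property, including DOM elements.
function isTag(elt, tagName) {
  return elt.nodeName.toUpperCase() === tagName.toUpperCase();
}

// A plain stand-in object is used here so the sketch runs anywhere;
// a real element from document.getElementById works the same way.
var fakeDiv = { nodeName: 'DIV' };
console.log(isTag(fakeDiv, 'div'));   // true
console.log(isTag(fakeDiv, 'span'));  // false
```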
|
What about [`element.tagName`](http://www.w3.org/TR/2000/REC-DOM-Level-2-Core-20001113/core.html#ID-104682815)?
See also [`tagName` docs on MDN](https://developer.mozilla.org/en-US/docs/Web/API/Element/tagName).
|
How can I determine the type of an HTML element in JavaScript?
|
[
"",
"javascript",
"dom",
""
] |
We're using Prototype for all of our Ajax request handling, and to keep things simple we simply render HTML content which is then assigned to the appropriate div using the following function:
```
function ajaxModify(controller, parameters, div_id)
{
var div = $(div_id);
var request = new Ajax.Request
(
controller,
{
method: "post",
parameters: parameters,
onSuccess: function(data) {
div.innerHTML = data.responseText;
},
onFailure: function() {
div.innerHTML = "Information Temporarily Unavailable";
}
}
);
}
```
However, I occasionally need to execute Javascript within the HTML response and this method appears incapable of doing that.
I'm trying to keep the list of functions for Ajax calls to a minimum for a number of reasons so if there is a way to modify the existing function without breaking everywhere that it is currently being used or a way to modify the HTML response that will cause any embedded javascript to execute that would great.
By way of note, I've already tried adding "evalJS : 'force'" to the function to see what it would do and it didn't help things any.
|
The parameter is:
```
evalScripts:true
```
Note that you should be using **Ajax.Updater**, not **Ajax.Request**
See: <http://www.prototypejs.org/api/ajax/updater>
Ajax.Request will only process JavaScript if the response headers are:
> application/ecmascript,
> application/javascript,
> application/x-ecmascript,
> application/x-javascript,
> text/ecmascript, text/javascript,
> text/x-ecmascript, or
> text/x-javascript
Whereas Ajax.Updater will process JS if `evalScripts: true` is set. Ajax.Request is geared toward data transport, such as getting a JSON response.
Since you are updating HTML, you should be using Ajax.Updater anyway.
|
Does setting `evalScripts: true` as an option help?
|
How to Force Javascript to Execute within HTML Response to Ajax Request
|
[
"",
"javascript",
"html",
"ajax",
"prototypejs",
""
] |
A colleague recently asked me how to deep-clone a Map and I realized that I probably have never used the clone() method- which worries me.
What are the most common scenarios you have found where you need to clone an object?
|
I assume you are referring to `Object.clone()` in Java. If yes, be advised that `Object.clone()` has some major problems, and its use is discouraged in most cases. Please see Item 11, from ["Effective Java"](http://java.sun.com/docs/books/effective/) by Joshua Bloch for a complete answer. I believe you can safely use `Object.clone()` on primitive type arrays, but apart from that you need to be judicious about properly using and overriding clone. You are probably better off defining a copy constructor or a static factory method that explicitly clones the object according to your semantics.
|
Most commonly, when I have to return a mutable object to a caller that I'm worried the caller might muck with, often in a thread-unfriendly way. Lists and Date are the ones that I do this to most. If the caller is likely to want to iterate over a List and I've got threads possibly updating it, it's safer to return a clone or copy of it.
Actually, that brings up something I'm going to have to open up another question for: copy constructors or clone? When I did C++, we ALWAYS did a copy constructor and implemented clone with it, but FindBugs does not like it if you implement your clone using a copy constructor.
|
What have you used Object.clone() for?
|
[
"",
"java",
""
] |
This probably sounds really stupid, but I have no idea how to implement jQuery's rounded corners (<http://www.methvin.com/jquery/jq-corner-demo.html>). My JavaScript-fu is complete fail and I can't seem to get it to work on my page. Can anyone show me a simple example of the HTML and JavaScript you would use to get them to show? Apologies for my idiocy.
|
1. This thing does not work in Safari & Google Chrome.
2. You need to include [jquery.js](http://jqueryjs.googlecode.com/files/jquery-1.2.6.js) in your page. Don't forget to have a separate closing tag.
`<script type="text/javascript" src="jquery.js"></script>`
3. You need to include the jQuery Corner Plugin JavaScript file ([jquery.corner.js](http://www.methvin.com/jquery/jquery.corner.js)) in your page as well.
`<script type="text/javascript" src="jquery.corner.js"></script>`
4. Somewhere in your page you should have the `<div>` you want to have corners:
`<div id="divToHaveCorners" style="width: 200px; height: 100px; background-color: #701080;">Hello World!</div>`
5. Somewhere else in your page, preferably not before the div itself, issue the following JavaScript command. This will execute the inner function when the page is loaded and is ready to be manipulated.
`<script type="text/javascript">$(function() { $('#divToHaveCorners').corner(); } );</script>`
6. You're done! If not, let me know.
|
jQuery corners by Methvin (<http://www.methvin.com/jquery/jq-corner-demo.html>) are OK and working fine, but... there is a more beautiful alternative:
```
http://blue-anvil.com/jquerycurvycorners/test.html
```
You can even use that lib to round images.
And what is very important:
- 18th July 2008 - Now works in IE6,7, safari, and all other modern browsers. Fixed centering bug.
So, please download jQuery Curvy Corners 2.0.2 Beta 3 from:
```
http://code.google.com/p/jquerycurvycorners/downloads/list
```
and again you have to load jquery core lib first so example of your HEAD can be:
```
<head>
<script src="http://path.to.your.downloaded.jquery.library/jquery-1.2.6.min.js" type="text/javascript"></script>
<script src="http://path.to.your.downloaded.jquery.library/jquery.curvycorners.min.js" type="text/javascript"></script>
<script type="text/javascript">
$(function(){
$('.myClassName').corner({
tl: { radius: 6 },
tr: { radius: 6 },
bl: { radius: 6 },
br: { radius: 6 }
});
});
</script>
</head>
```
where:
tl, tr, bl, br are the top-left, top-right, bottom-left and bottom-right corners.
Next, in your BODY area, add an element with that class (see the full example below), and that's it :)
Link to an image of the result:
```
http://img44.imageshack.us/img44/3638/corners.jpg
```
... and ready to use code:
```
<html>
<head>
<script src="http://blue-anvil.com/jquerycurvycorners/jquery-1.2.6.min.js" type="text/javascript"></script>
<script src="http://blue-anvil.com/jquerycurvycorners/jquery.curvycorners.min.js" type="text/javascript"></script>
<script type="text/javascript">
$(function(){
$('.myClassName').corner({
tl: { radius: 12 },
tr: { radius: 12 },
bl: { radius: 12 },
br: { radius: 12 }
});
});
</script>
<style>
.myClassName
{
width:320px;
height:64px;
background-color:red;
text-align:center;
margin:auto;
margin-top:30px;
}
</style>
</head>
<body>
<div class="myClassName">content</div>
</body>
</html>
```
just copy, paste, enjoy :)
|
jquery round corners
|
[
"",
"javascript",
"jquery",
"rounded-corners",
""
] |
I need a way to determine whether the computer running my program is joined to any domain. It doesn't matter what specific domain it is part of, just whether it is connected to anything. I'm coding in vc++ against the Win32 API.
|
Straight from Microsoft:
[How To Determine If a Windows NT/Windows 2000 Computer Is a Domain Member](http://support.microsoft.com/kb/179891)
This approach uses the Windows API. From the article summary:
> This article describes how to
> determine if a computer that is
> running Windows NT 4.0 or Windows 2000
> is a member of a domain, is a member
> of a workgroup, or is a stand-alone
> computer using the Local Security
> Authority APIs.
The article also provides sample code for a small program that outputs whether the computer the program is running on is part of a domain, part of a workgroup, or a standalone computer.
|
I think the [NetServerEnum](http://msdn.microsoft.com/en-us/library/aa370623%28VS.85%29.aspx) function will help you in what you want; I would ask for the primary domain controllers with the `SV_TYPE_DOMAIN_CTRL` constant for *servertype* parameter. If you don't get any, then you're not in a domain.
|
How do you programmatically determine whether a Windows computer is a member of a domain?
|
[
"",
"c++",
"windows",
"winapi",
"dns",
""
] |
I have this bit of script to widen a text box on mouseover and shrink it back on mouseout.
The problem I am having is that Internet Explorer doesn't seem to extend it's hover over the options of a select box.
This means in IE I can click the select, have the options drop down, but if I try to select one, they vanish and the select box re-sizes as soon as I move off the select box itself.
Example Code:
```
<script type='text/javascript'>
$(function() {
$('#TheSelect').hover(
function(e){
$('#TheText').val('OVER');
$(this).width( 600 );
},
function(e){
$('#TheText').val('OUT');
$(this).width( 50 );
}
);
});
</script>
```
And:
```
<input type='text' id='TheText' /><br /><br />
<select id='TheSelect' style='width:50px;'>
<option value='1'>One</option>
<option value='2'>Two</option>
<option value='3'>Three</option>
<option value='42,693,748,756'>Forty-two billion, six-hundred and ninety-three million, seven-hundred-forty-some-odd..... </option>
<option value='5'>Five</option>
<option value='6'>Six</option>
<option value='7'>Seven...</option>
</select>
```
Are there any workarounds for select boxes in IE? I would even consider a jquery replacement if anyone can recommend one that is really reliable.
Thanks!
|
Apparently IE doesn't consider the drop down bit part of the select element. It's doable, but it takes a bit of cheating with expando properties and blur/focus events to enable and disable the 'hide' effect to stop it kicking in when the mouse enters the drop-down part of the element.
Have a go with this:
```
$(function() {
var expand = function(){ $(this).width(600) }
var contract = function(){ if (!this.noHide) $(this).width(50) }
var focus = function(){ this.noHide = true }
var blur = function(){ this.noHide = false; contract.call(this) }
$('#TheSelect')
.hover(expand, contract)
.focus(focus)
.click(focus)
.blur(blur)
.change(blur)
});
```
(Apologies if this isn't how one is supposed to use jQuery - I've never used it before :))
|
Looks like this post has been up for a while, but hopefully there are still folks interested in a workaround. I experienced this issue while building out a new site that I'm working on. On it is a product slider, and for each product, mousing over the product pops up a large informational bubble with info about the product, a dropdown to select buy options, and a buy button. I quickly discovered that anytime my mouse left the initial visible area of the select menu (i.e., trying to select an option), the entire bubble would disappear.
The answer (thank you, smart folks who developed jQuery) was all about event bubbling. I knew that the most straightforward way to fix the issue had to be temporarily "disabling" the out state of the hover. Fortunately, jQuery has functionality built in to deal with event bubbling (they call it propagation).
Basically, with only about a line or so of new code, I attached a method to the "onmouseleave" event of the dropdown (mousing over one of the options in the select list seems to trigger this event reliably - I tried a few other events, but this one seemed to be pretty solid) which turned off event propagation (i.e., parent elements were not allowed to hear about the "onmouseleave" event from the dropdown).
That was it! Solution was much more elegant that I expected it to be. Then, when the mouse leaves the bubble, the out state of the hover triggers normally and the site goes on about its business. Here's the fix (I put it in the document.ready):
```
$(document).ready(function(){
$(".dropdownClassName select").mouseleave(function(event){
event.stopPropagation();
});
});
```
|
JQuery/Javascript: IE hover doesn't cover select box options?
|
[
"",
"javascript",
"cross-browser",
""
] |
How do I write the SQL code to INSERT (or UPDATE) an array of values (with probably an attendant array of fieldnames, or with a matrix with them both) without simple iteration?
|
I construct the list as an xml string and pass it to the stored procs. In SQL 2005, it has enhanced xml functionalities to parse the xml and do a bulk insert.
check this post:
[Passing lists to SQL Server 2005 with XML Parameters](http://weblogs.asp.net/jgalloway/archive/2007/02/16/passing-lists-to-sql-server-2005-with-xml-parameters.aspx)
|
A simple way: concatenate the values into a delimited list and pass it to the stored procedure.
In the stored procedure, use the dbo.Split UDF to convert it back to a resultset (table).
Create this function:
```
CREATE FUNCTION dbo.Split(@String nvarchar(4000), @Delimiter char(1))
returns @Results TABLE (Items nvarchar(4000))
as
begin
    declare @index int
    declare @slice nvarchar(4000)

    select @index = 1
    if @String is null return

    while @index != 0
    begin
        select @index = charindex(@Delimiter, @String)

        if @index != 0
            select @slice = left(@String, @index - 1)
        else
            select @slice = @String

        insert into @Results(Items) values(@slice)

        select @String = right(@String, len(@String) - @index)
        if len(@String) = 0 break
    end
    return
end
```
and then try:
```
select * from dbo.split('a,b,c,d,e,f,g,h,i,j,k,l', ',')
```
|
How to INSERT an array of values in SQL Server 2005?
|
[
"",
"sql",
"sql-server",
"sql-server-2005",
"bulkinsert",
""
] |
I've been tasked with building an accessible RSS feed for my company's job listings. I already have an RSS feed from our recruiting partner, so I'm transforming their RSS XML into our own proxy RSS feed to add additional data as well as limit the number of items in the feed so we only list the latest jobs.
The RSS validates via feedvalidator.org (with warnings), but the problem is this: unfortunately, no matter how many times I tell them not to, my company's HR team directly copies and pastes their Word documents into our recruiting partner's CMS when inserting new job listings, leaving WordML in my feed. I believe this WordML is causing issues with Feedburner's BrowserFriendly feature, which we want to show up to make it easier for people to subscribe. Therefore, I need to remove the WordML markup from the feed.
Anybody have experience doing this? Can anyone point me to a good solution to this problem?
Preferably; I'd like to be pointed to a solution in .Net (VB or C# is fine) and/or XSL.
Any advice on this is greatly appreciated.
Thanks.
|
I haven't yet worked with WordML, but assuming that its elements are in a different namespace from RSS, it should be quite simple to do with XSLT.
Start with a basic identity transform (a stylesheet that adds all nodes from the input doc "as is" to the output tree). You need these two templates:
```
<!-- Copy all elements, and recur on their child nodes. -->
<xsl:template match="*">
<xsl:copy>
<xsl:apply-templates select="@*"/>
<xsl:apply-templates/>
</xsl:copy>
</xsl:template>
<!-- Copy all non-element nodes. -->
<xsl:template match="@*|text()|comment()|processing-instruction()">
<xsl:copy/>
</xsl:template>
```
A transformation using a stylesheet containing just the above two templates would exactly reproduce its input document on output, modulo those things that standards-compliant XML processors are permitted to change, such as entity replacement.
Now, add in a template that matches any element in the WordML namespace. Let's give it the namespace prefix 'wml' for the purposes of this example:
```
<!-- Do not copy WordML elements or their attributes to the
output tree; just recur on child nodes. -->
<xsl:template match="wml:*">
<xsl:apply-templates/>
</xsl:template>
```
The beginning and end of the stylesheet are left as an exercise for the coder.
|
I would do something like this:
```
char[] charToRemove = { (char)8217, (char)8216, (char)8220, (char)8221, (char)8211 };
char[] charToAdd = { (char)39, (char)39, (char)34, (char)34, '-' };
string cleanedStr = "Your WordML filled Feed Text.";
for (int i = 0; i < charToRemove.Length; i++)
{
cleanedStr = cleanedStr.Replace(charToRemove.GetValue(i).ToString(), charToAdd.GetValue(i).ToString());
}
```
This looks for the characters in question (which are the Word special characters that mess up everything) and replaces them with their ASCII equivalents.
|
Strip WordML from a string
|
[
"",
"c#",
"asp.net",
"xml",
"rss",
"xslt",
""
] |
Is there a way to test the type of an element in JavaScript?
The answer may or may not require the prototype library, however the following setup does make use of the library.
```
function(event) {
var element = event.element();
// if the element is an anchor
...
// if the element is a td
...
}
```
|
You can use `typeof(N)` to get the actual object type, but what you want to do is check the tag, not the type of the DOM element.
In that case, use the `elem.tagName` or `elem.nodeName` property.
If you want to get really creative, you can use a dictionary of tag names and anonymous closures instead of a switch or if/else.
|
```
if (element.nodeName == "A") {
...
} else if (element.nodeName == "TD") {
...
}
```
|
Testing the type of a DOM element in JavaScript
|
[
"",
"javascript",
"prototypejs",
""
] |
Is it better in C++ to pass by value or pass by reference-to-const?
I am wondering which is better practice. I realize that pass by reference-to-const should provide for better performance in the program because you are not making a copy of the variable.
|
It used to be generally recommended best practice1 to **use pass by const ref for *all types*, except for builtin types (`char`, `int`, `double`, etc.), for iterators and for function objects** (lambdas, classes deriving from `std::*_function`).
This was especially true before the existence of *move semantics*. The reason is simple: if you passed by value, a copy of the object had to be made and, except for very small objects, this is always more expensive than passing a reference.
With C++11, we have gained [*move semantics*](https://stackoverflow.com/q/3106110/1968). In a nutshell, move semantics permit that, in some cases, an object can be passed “by value” without copying it. In particular, this is the case when the object that you are passing is an [*rvalue*](https://stackoverflow.com/q/3601602/1968).
In itself, moving an object is still at least as expensive as passing by reference. However, in many cases a function will internally copy an object anyway — i.e. it will take *ownership* of the argument.2
In these situations we have the following (simplified) trade-off:
1. We can pass the object by reference, then copy internally.
2. We can pass the object by value.
“Pass by value” still causes the object to be copied, unless the object is an rvalue. In the case of an rvalue, the object can be moved instead, so that the second case is suddenly no longer “copy, then move” but “move, then (potentially) move again”.
For large objects that implement proper move constructors (such as vectors, strings …), the second case is then *vastly* more efficient than the first. Therefore, it is recommended to **use pass by value if the function takes ownership of the argument, and if the object type supports efficient moving**.
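As a small sketch of that recommendation (the type and member names here are illustrative, not from the question): a constructor that takes ownership of its argument accepts it by value and moves it into place, so an rvalue argument is moved all the way in, while an lvalue costs exactly one copy.

```cpp
#include <string>
#include <utility>

// Takes ownership of `name`: accept by value, then move into the member.
// Passing an rvalue costs a move; passing an lvalue costs one copy.
class Widget {
public:
    explicit Widget(std::string name) : name_(std::move(name)) {}
    const std::string& name() const { return name_; }
private:
    std::string name_;
};
```

Calling `Widget w(make_name());` with a temporary moves it straight into `name_` without any copy.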
---
A historical note:
In fact, any modern compiler should be able to figure out when passing by value is expensive, and implicitly convert the call to use a const ref if possible.
*In theory.* In practice, compilers can’t always change this without breaking the function’s binary interface. In some special cases (when the function is inlined) the copy will actually be elided if the compiler can figure out that the original object won’t be changed through the actions in the function.
But in general the compiler can’t determine this, and the advent of move semantics in C++ has made this optimisation much less relevant.
---
1 E.g. in Scott Meyers, *Effective C++*.
2 This is especially often true for object constructors, which may take arguments and store them internally to be part of the constructed object’s state.
|
**Edit:** New article by Dave Abrahams on cpp-next:
## [Want speed? Pass by value.](http://web.archive.org/web/20140113221447/http://cpp-next.com/archive/2009/08/want-speed-pass-by-value/)
---
Pass by value for structs where the copying is cheap has the additional advantage that the compiler may assume that the objects don't alias (are not the same objects). With pass-by-reference the compiler cannot always assume that. A simple example:
```
foo * f;
void bar(foo g) {
g.i = 10;
f->i = 2;
g.i += 5;
}
```
The compiler can optimize it into
```
g.i = 15;
f->i = 2;
```
since it knows that `f` and `g` don't share the same location. If `g` were a reference (`foo &`), the compiler couldn't have assumed that: `g.i` could then be aliased by `f->i` and would have to have a value of 7, so the compiler would have to re-fetch the new value of `g.i` from memory.
For more practical rules, here is a good set found in the [Move Constructors](http://www.ddj.com/database/184403855) article (highly recommended reading).
* If the function intends to change the argument as a side effect, take it by non-const reference.
* If the function doesn't modify its argument and the argument is of primitive type, take it by value.
* Otherwise take it by const reference, except in the following cases
+ If the function would then need to make a copy of the const reference anyway, take it by value.
"Primitive" above means basically small data types that are a few bytes long and aren't polymorphic (iterators, function objects, etc.) or expensive to copy. In that paper, there is one other rule. The idea is that sometimes one wants to make a copy (in case the argument can't be modified), and sometimes one doesn't want to (in case one wants to use the argument itself in the function if the argument was a temporary anyway, for example). The paper explains in detail how that can be done. In C++1x that technique can be used natively with language support. Until then, I would go with the above rules.
Examples: To make a string uppercase and return the uppercase version, one should always pass by value: One has to take a copy of it anyway (one couldn't change the const reference directly) - so better make it as transparent as possible to the caller and make that copy early so that the caller can optimize as much as possible - as detailed in that paper:
```
my::string uppercase(my::string s) { /* change s and return it */ }
```
However, if you don't need to change the parameter anyway, take it by reference to const:
```
bool all_uppercase(my::string const& s) {
/* check to see whether any character is uppercase */
}
```
However, if the purpose of the parameter is to write something into the argument, then pass it by non-const reference:
```
bool try_parse(T text, my::string &out) {
/* try to parse, write result into out */
}
```
|
Is it better in C++ to pass by value or pass by reference-to-const?
|
[
"",
"c++",
"variables",
"pass-by-reference",
"constants",
"pass-by-value",
""
] |
I want to write a program that would print every combination of a set of variables to a text file, creating a word list. Each answer should be written on a separate line and write all of the results for 1 digit, 2 digits, and 3 digits to a single text file.
Is there a simple way I can write a python program that can accomplish this? Here is an example of the output I am expecting when printing all the binary number combinations possible for 1, 2, and 3 digits:
```
Output:
0
1
00
01
10
11
000
001
010
011
100
101
110
111
```
|
A naïve solution which solves the problem and is general enough for any application you might have is this:
```
def combinations(words, length):
    if length == 0:
        return []
    result = [[word] for word in words]
    while length > 1:
        new_result = []
        for combo in result:
            new_result.extend(combo + [word] for word in words)
        result = new_result[:]
        length -= 1
    return result
```
Basically, this gradually builds up a tree in memory of all the combinations, and then returns them. It is memory-intensive, however, and so is impractical for large-scale combinations.
Another solution for the problem is, indeed, to use counting, but then to transform the numbers generated into a list of words from the wordlist. To do so, we first need a function (called `number_to_list()`):
```
def number_to_list(number, words):
    list_out = []
    while number:
        list_out = [number % len(words)] + list_out
        number = number // len(words)
    return [words[n] for n in list_out]
```
This is, in fact, a system for converting decimal numbers to other bases. We then write the counting function; this is relatively simple, and will make up the core of the application:
```
def combinations(words, length):
    numbers = xrange(len(words)**length)
    for number in numbers:
        combo = number_to_list(number, words)
        if len(combo) < length:
            combo = [words[0]] * (length - len(combo)) + combo
        yield combo
```
This is a Python generator; making it a generator allows it to use up less RAM. There is a little work to be done after turning the number into a list of words; this is because these lists will need padding so that they are at the requested length. It would be used like this:
```
>>> list(combinations('01', 3))
[['0', '0', '0'], ['0', '0', '1'],
['0', '1', '0'], ['0', '1', '1'],
['1', '0', '0'], ['1', '0', '1'],
['1', '1', '0'], ['1', '1', '1']]
```
As you can see, you get back a list of lists. Each of these sub-lists contains a sequence of the original words; you might then do something like `map(''.join, list(combinations('01', 3)))` to retrieve the following result:
```
['000', '001', '010', '011', '100', '101', '110', '111']
```
You could then write this to disk; a better idea, however, would be to use the built-in optimizations that generators have and do something like this:
```
fileout = open('filename.txt', 'w')
fileout.writelines(
    ''.join(combo) + '\n' for combo in combinations('01', 3))
fileout.close()
```
This will only use as much RAM as necessary (enough to store one combination). I hope this helps.
|
```
# Given two lists of strings, return a list of all ways to concatenate
# one from each.
def combos(xs, ys):
    return [x + y for x in xs for y in ys]

digits = ['0', '1']
for c in combos(digits, combos(digits, digits)):
    print c
#. 000
#. 001
#. 010
#. 011
#. 100
#. 101
#. 110
#. 111
```
|
I want a program that writes every possible combination to a different line of a text file
|
[
"",
"python",
"recursion",
""
] |
Can I access a users microphone in Python?
Sorry I forgot not everyone is a mind reader:
Windows at minimum XP but Vista support would be VERY good.
|
Best way to go about it would be to use the ctypes library and call WinMM from that. waveInOpen will open a microphone device, and you can read the data easily from there. Should be very straightforward.
|
I got the job done with [pyaudio](http://people.csail.mit.edu/hubert/pyaudio/)
It comes with a binary installer for windows and there's even an example on how to record through the microphone and save to a wave file. Nice! I used it on Windows XP, not sure how it will do on Vista though, sorry.
|
Microphone access in Python
|
[
"",
"python",
"windows",
"microphone",
""
] |
How can I change a file's extension using PHP?
Ex: photo.jpg to photo.exe
|
In modern operating systems, filenames very well might contain periods long before the file extension, for instance:
```
my.file.name.jpg
```
PHP provides a way to find the filename without the extension that takes this into account, then just add the new extension:
```
function replace_extension($filename, $new_extension) {
    // Note: pathinfo()'s 'filename' entry carries no directory component,
    // so any leading path in $filename is discarded here.
    $info = pathinfo($filename);
    return $info['filename'] . '.' . $new_extension;
}
```
|
```
substr_replace($file, 'png', strrpos($file, '.') + 1)
```
Will change any extension to what you want. Replace `png` with whatever your desired extension would be.
|
How can I change a file's extension using PHP?
|
[
"",
"php",
"file",
""
] |
I'm looking for a "quick and dirty" C++ testing framework I can use on my Windows/Visual Studio box. It's just me developing, so it doesn't have to be enterprise class software.
Staring at a list of testing frameworks, I am somewhat befuddled...
<http://en.wikipedia.org/wiki/List_of_unit_testing_frameworks#C.2B.2B>
|
[Here's a great article about C++ TDD frameworks](http://www.gamesfromwithin.com/articles/0412/000061.html). For the record, my personal preference is CxxTest, which I have been happily using for about six months now.
|
I have used both [UnitTest++](http://unittest-cpp.sourceforge.net/) and [Boost.Test](http://www.boost.org/doc/libs/1_36_0/libs/test/doc/html/index.html). They are both easy to setup and use.
Although, I wouldn't use Boost.Test if you're not already using the Boost libraries. It's a bit much to install all of Boost just to get the testing framework.
|
C++ testing framework: recommendation sought
|
[
"",
"c++",
"windows",
"unit-testing",
"frameworks",
""
] |
I have always found this to be a very useful feature in Visual Studio. For those who don't know about it, it allows you to edit code while you are debugging a running process, re-compile the code *while the binary is still running* and continue using the application seamlessly with the new code, without the need to restart it.
How is this feature implemented? If the code I am modifying is in a DLL loaded by the application, does the application simply unload the DLL and reload it again? This would seem to me like it would be prone to instability issues, so I assume it would be smarter than this. Any ideas?
|
My understanding is that when the app is compiled with support for Edit and Continue enabled, the compiler leaves extra room around the functions in the binary image to allow for adding additional code. Then the debugger can compile a new version of the function, replace the existing version (using the padding space as necessary), fix up the stack, set the instruction pointer, and keep going. That way you don't have to fix up any jump pointers, as long as you have enough padding.
Note that Edit and Continue doesn't usually work on code in libs/dlls, only with the main executable code.
|
My *guess* is that it recompiles the app (and for small changes this wouldn't mean very much would have to be recompiled). Then since Microsoft makes both the compiler and debugger they can make guarantees about how memory and the like are laid out. So, they can use the debugging API to re-write the code segments with the new ones as long as the changes are small enough.
If the changes redirect to entirely new code, this can obviously be loaded into memory in a similar style as DLLs.
Microsoft also has a mechanism for "hot-patching". Functions begin with a 2-byte no-op instruction, usually `mov edi, edi`, before any real code. This allows them to redirect the execution of a function cleanly. This may be an option as well.
The key thing to remember is that the application isn't "running"; all its threads are in the stopped state. So as far as the process is concerned, any modifications the debugger makes are entirely atomic.
Of course, this is all speculation ;)
|
How does "Edit and continue" work in Visual Studio?
|
[
"",
"c++",
"visual-studio",
""
] |
In a web application I am building, I need to upload photos without using a form; otherwise it will end up with a form embedded in an outer form, which is not permitted in XHTML.
I think uploading files is what jQuery.post should be able to do, but I currently can't figure out how to do this using jQuery.
|
Take a look at [swfupload](http://swfupload.org/), it's a pretty nifty little app that should accomplish what you want. Otherwise, you could separate out the forms, and just use CSS/JavaScript to reposition them if necessary.
|
With [jQuery.post](http://docs.jquery.com/Ajax/jQuery.post) you will only be able to send key/value pairs to the server in a POST request. However, you can use plugins like the [jQuery Multiple File Upload Plugin](http://www.fyneworks.com/jquery/multiple-file-upload/).
|
How can I upload files without using an HTML <form>?
|
[
"",
"javascript",
"jquery",
"http",
"upload",
""
] |
In C# code can you catch a native exception thrown from deep in some unmanaged library? If so do you need to do anything differently to catch it or does a standard try...catch get it?
|
You can use [Win32Exception](http://msdn.microsoft.com/en-us/library/system.componentmodel.win32exception.aspx) and use its NativeErrorCode property to handle it appropriately.
```
// http://support.microsoft.com/kb/186550
const int ERROR_FILE_NOT_FOUND = 2;
const int ERROR_ACCESS_DENIED = 5;
const int ERROR_NO_APP_ASSOCIATED = 1155;
void OpenFile(string filePath)
{
Process process = new Process();
try
{
// Calls native application registered for the file type
// This may throw native exception
process.StartInfo.FileName = filePath;
process.StartInfo.Verb = "Open";
process.StartInfo.CreateNoWindow = true;
process.Start();
}
catch (Win32Exception e)
{
if (e.NativeErrorCode == ERROR_FILE_NOT_FOUND ||
e.NativeErrorCode == ERROR_ACCESS_DENIED ||
e.NativeErrorCode == ERROR_NO_APP_ASSOCIATED)
{
MessageBox.Show(this, e.Message, "Error",
MessageBoxButtons.OK,
MessageBoxIcon.Exclamation);
}
}
}
```
|
Catch without () will catch non-CLS compliant exceptions including native exceptions.
```
try
{
}
catch
{
}
```
See the following FxCop rule for more info
<http://msdn.microsoft.com/en-gb/bb264489.aspx>
|
Can you catch a native exception in C# code?
|
[
"",
"c#",
".net",
"exception",
""
] |
What is the best method of performing an scp transfer via the Java programming language? It seems I may be able to perform this via JSSE, JSch or the bouncy castle java libraries. None of these solutions seem to have an easy answer.
|
I ended up using [Jsch](http://www.jcraft.com/jsch/)- it was pretty straightforward, and seemed to scale up pretty well (I was grabbing a few thousand files every few minutes).
|
plug: sshj is the only sane choice! See these examples to get started: [download](https://github.com/hierynomus/sshj/blob/master/examples/src/main/java/net/schmizz/sshj/examples/SCPDownload.java), [upload](https://github.com/hierynomus/sshj/blob/master/examples/src/main/java/net/schmizz/sshj/examples/SCPUpload.java).
|
scp transfer via java
|
[
"",
"java",
"scp",
"bouncycastle",
"jsse",
"jsch",
""
] |
How would you go about converting a reasonably large (>300K), fairly mature C codebase to C++?
The kind of C I have in mind is split into files roughly corresponding to modules (i.e. less granular than a typical OO class-based decomposition), using internal linkage in lieu of private functions and data, and external linkage for public functions and data. Global variables are used extensively for communication between the modules. There is a very extensive integration test suite available, but no unit (i.e. module) level tests.
I have in mind a general strategy:
1. Compile everything in C++'s C subset and get that working.
2. Convert modules into huge classes, so that all the cross-references are scoped by a class name, but leaving all functions and data as static members, and get that working.
3. Convert huge classes into instances with appropriate constructors and initialized cross-references; replace static member accesses with indirect accesses as appropriate; and get that working.
4. Now, approach the project as an ill-factored OO application, and write unit tests where dependencies are tractable, and decompose into separate classes where they are not; the goal here would be to move from one working program to another at each transformation.
Obviously, this would be quite a bit of work. Are there any case studies / war stories out there on this kind of translation? Alternative strategies? Other useful advice?
Note 1: the program is a compiler, and probably millions of other programs rely on its behaviour not changing, so wholesale rewriting is pretty much not an option.
Note 2: the source is nearly 20 years old, and has perhaps 30% code churn (lines modified + added / previous total lines) per year. It is heavily maintained and extended, in other words. Thus, one of the goals would be to increase maintainability.
[For the sake of the question, assume that translation into **C++** is mandatory, and that leaving it in C is **not** an option. The point of adding this condition is to weed out the "leave it in C" answers.]
|
Having just started on pretty much the same thing a few months ago (on a ten-year-old commercial project, originally written with the "C++ is nothing but C with smart `struct`s" philosophy), I would suggest using the same strategy you'd use to eat an elephant: take it one bite at a time. :-)
As much as possible, split it up into stages that can be done with minimal effects on other parts. Building a facade system, as [Federico Ramponi](https://stackoverflow.com/questions/199627/converting-c-source-to-c#199644) suggested, is a good start -- once everything has a C++ facade and is communicating through it, you can change the internals of the modules with fair certainty that they can't affect anything outside them.
We already had a partial C++ interface system in place (due to previous smaller refactoring efforts), so this approach wasn't difficult in our case. Once we had everything communicating as C++ objects (which took a few weeks, working on a completely separate source-code branch and integrating all changes to the main branch as they were approved), it was very seldom that we couldn't compile a totally working version before we left for the day.
The change-over isn't complete yet -- we've paused twice for interim releases (we aim for a point-release every few weeks), but it's well on the way, and no customer has complained about any problems. Our QA people have only found one problem that I recall, too. :-)
|
What about:
1. Compiling everything in C++'s C subset and get that working, and
2. Implementing a set of [facades](http://en.wikipedia.org/wiki/Facade_pattern) leaving the C code unaltered?
Why is "translation into C++ mandatory"? You can wrap the C code without the pain of converting it into huge classes and so on.
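A minimal sketch of that facade idea (the `legacy_*` names below are hypothetical stand-ins for an untouched C module and the global it communicates through):

```cpp
// --- untouched "C" module: internal linkage, a global for communication ---
static int legacy_total = 0;
static void legacy_reset()    { legacy_total = 0; }
static void legacy_add(int n) { legacy_total += n; }

// --- thin C++ facade: callers see a class, the C code is unchanged ---
class Accumulator {
public:
    Accumulator()      { legacy_reset(); }
    void add(int n)    { legacy_add(n); }
    int total() const  { return legacy_total; }
};
```

Once everything talks through facades like this, each module's internals can be reworked without touching its callers.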
|
Converting C source to C++
|
[
"",
"c++",
"c",
"refactoring",
"legacy",
"program-transformation",
""
] |
In writing some test code I have found that Selector.select() can return without Selector.selectedKeys() containing any keys to process. This is happening in a tight loop when I register an accept()ed channel with
```
SelectionKey.OP_READ | SelectionKey.OP_CONNECT
```
as the operations of interest.
According to the docs, select() should return when:
1) There are channels that can be acted upon.
2) You explicitly call Selector.wakeup() - no keys are selected.
3) You explicitly Thread.interrupt() the thread doing the select() - no keys are selected.
If I get no keys after the select() I must be in case (2) or (3). However, my code is not calling wakeup() or interrupt() to initiate these returns.
Any ideas as to what is causing this behaviour?
|
Short answer: remove `OP_CONNECT` from the list of operations you are interested in for the accepted connection -- an accepted connection is already connected.
I managed to reproduce the issue, which might be exactly what's happening to you:
```
import java.net.*;
import java.nio.channels.*;
public class MyNioServer {
public static void main(String[] params) throws Exception {
final ServerSocketChannel serverChannel = ServerSocketChannel.open();
serverChannel.configureBlocking(true);
serverChannel.socket().bind(new InetSocketAddress("localhost", 12345));
System.out.println("Listening for incoming connections");
final SocketChannel clientChannel = serverChannel.accept();
System.out.println("Accepted connection: " + clientChannel);
final Selector selector = Selector.open();
clientChannel.configureBlocking(false);
final SelectionKey clientKey = clientChannel.register(selector, SelectionKey.OP_READ | SelectionKey.OP_CONNECT);
System.out.println("Selecting...");
System.out.println(selector.select());
System.out.println(selector.selectedKeys().size());
System.out.println(clientKey.readyOps());
}
}
```
After the above server receives a connection, the very first `select()` on the connection exits without blocking and there are no keys with ready operations. I don't know why Java behaves in this way, but it appears many people get bitten by this behavior.
The outcome is the same on Sun's JVM 1.5.0\_06 on Windows XP as well as Sun's JVM 1.5.0\_05 and 1.4.2\_04 on Linux 2.6.
|
The reason is that `OP_CONNECT` and `OP_WRITE` are the same thing under the hood, so you should never be registered for both simultaneously (ditto `OP_ACCEPT` and `OP_READ`), and you should never be registered for `OP_CONNECT` at all when the channel is already connected, as it is in this case, having been accepted.
And `OP_WRITE` is almost always ready, except when the socket send buffer in the kernel is full, so you should only register for that after you get a zero-length write. So by registering the already connected channel for `OP_CONNECT`, you were really registering for `OP_WRITE`, which was ready, so `select()` got triggered.
|
Java NIO select() returns without selected keys - why?
|
[
"",
"java",
"select",
"nio",
""
] |