| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
Why does it (apparently) make a difference whether I pass `null` as an argument directly, or pass an `Object` that I assigned the *value* `null`?
```
Object testVal = null;
test.foo(testVal); // dispatched to foo(Object)
// test.foo(null); // compilation problem -> "The method foo(String) is ambiguous"
public void foo(String arg) { // More-specific
System.out.println("foo(String)");
}
public void foo(Object arg) { // Generic
System.out.println("foo(Object)");
}
```
In other words, why is the (commented-out) second call to `foo(...)` not dispatched to `foo(Object)`?
**Update:** I use Java 1.6. I could compile Hemal's code without problems, but mine still doesn't compile. The only difference I see is that Hemal's methods are static while mine are not. But I really don't see why this should make a difference...?
**Update 2:** Solved. I had another method foo(Runnable) in my class, so the dispatcher couldn't unambiguously select the single most specific method. (See my comment in Hemal's second answer.) Thanks everyone for your help. | Which version of Java are you using? With 1.6.0\_11 the code (pasted below) compiles and runs.
I am sure it's obvious why `foo(testVal)` goes to `foo(Object)`.
The reason why `foo(null)` goes to `foo(String)` is a little more complex. The literal `null` has the null type, which is a subtype of every reference type. So the null type is a subtype of `String`, which in turn extends `Object`.
When you call `foo(null)`, the compiler looks for the overload with the most specific parameter type. Since `String` is more specific than `Object`, that is the method that gets called.
If you had another overload that was as specific as `String`, say `foo(Integer)`, then you would get an ambiguous overload error.
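To see that rule in action, a small sketch (the class and method names below are illustrative, not from the question's code) showing both the ambiguity and how an explicit cast resolves it:

```java
public class NullDispatch {
    // Two overloads that are equally specific for a bare null argument.
    static String foo(String arg)  { return "foo(String)"; }
    static String foo(Integer arg) { return "foo(Integer)"; }

    public static void main(String[] args) {
        // foo(null);  // would not compile: String and Integer are equally specific
        System.out.println(foo((String) null));   // the cast selects foo(String)
        System.out.println(foo((Integer) null));  // the cast selects foo(Integer)
    }
}
```

An explicit cast such as `foo((String) null)` is the standard way to pick an overload when the compiler cannot choose one on its own.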
```
class NullType {
public static final void main(final String[] args) {
foo();
}
static void foo()
{
Object testVal = null;
foo(testVal); // dispatched to foo(Object)
foo(null); // compilation problem -> "The method foo(String) is ambiguous"
}
public static void foo(String arg) { // More-specific
System.out.println("foo(String)");
}
public static void foo(Object arg) { // Generic
System.out.println("foo(Object)");
}
}
``` | Because the second commented out invocation with null is ambiguous to the compiler. The literal null could be a string or an object. Whereas the assigned value has a definite type. You need to cast the null, e.g. test.foo((String)null) to remove the ambiguity. | Java method dispatch with null argument | [
"",
"java",
"null",
"method-dispatch",
""
] |
Is there any way that I could get the source of a website (preferably as a string), let's say www.google.com, from some C# code inside the code-behind of an ASP.NET website?
Edit: of course I mean HTML code - in every browser you can view it using "view **source**" in the context menu. | Assuming you want to retrieve the HTML:
```
class Program
{
static void Main(string[] args)
{
using (WebClient client = new WebClient())
using (Stream stream = client.OpenRead("http://www.google.com"))
using (StreamReader reader = new StreamReader(stream))
{
Console.WriteLine(reader.ReadToEnd());
}
}
}
``` | For C#, I prefer to use [HttpWebRequest](http://www.codeproject.com/KB/IP/httpwebrequest_response.aspx) over WebClient because you get more options later, such as GET/POST parameters, cookies, etc.
You can find a short explanation at [MSDN](http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.begingetrequeststream.aspx).
Here is the example from MSDN:
```
// Create a new HttpWebRequest object.
HttpWebRequest request=(HttpWebRequest) WebRequest.Create("http://www.contoso.com/example.aspx");
// Set the ContentType property.
request.ContentType="application/x-www-form-urlencoded";
// Set the Method property to 'POST' to post data to the URI.
request.Method = "POST";
// Start the asynchronous operation.
request.BeginGetRequestStream(new AsyncCallback(ReadCallback), request);
// Keep the main thread from continuing while the asynchronous
// operation completes. A real world application
// could do something useful such as updating its user interface.
allDone.WaitOne();
// Get the response.
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
Stream streamResponse = response.GetResponseStream();
StreamReader streamRead = new StreamReader(streamResponse);
string responseString = streamRead.ReadToEnd();
Console.WriteLine(responseString);
// Close the stream object.
streamResponse.Close();
streamRead.Close();
// Release the HttpWebResponse.
response.Close();
``` | Get the source of some website from asp.net code | [
"",
"c#",
"asp.net-2.0",
""
] |
I posted this [question](https://stackoverflow.com/questions/385442/linq-submitchanges-not-updating-certain-fields) yesterday evening, which has led me to discover a **huge** problem!
I have a decimal column in my database called Units; anytime I set the column to a NON-ZERO value and SubmitChanges, the column updates with the new value. If I try to set the value of the column to ZERO, SubmitChanges does not update the column.
```
data.Units = this.ReadProperty<decimal>(UnitsProperty);
data.UnitPrice = this.ReadProperty<decimal>(UnitPriceProperty);
data.Price = this.ReadProperty<decimal>(PriceProperty);
```
I've taken a look at the DataContext log and I can see that the field with the ZERO value is not included in the query. Even if I try to hard-code the change, LINQ ignores it.
```
data.Units = 0;
data.UnitPrice = 0;
data.Price = 0;
```
Needless to say this is killing me! Any ideas why this happening?
**Solution**
I figured out my problem with the help of the SO community. My problem was being caused by the fact that when I created my entity to attach, the default value of the column was set to zero, so when it tried to assign the value to zero ... LinqToSql says hey ... nothing changed, so I am not updating the value.
What I am doing now ... just to make it work is the following:
```
ctx.DataContext.InvoiceItems.Attach(data, true);
```
That seems to force all the values to write themselves to the database. This works for now. | I figured out my problem with the help of the SO community. My problem was being caused by the fact that when I created my entity to attach, the default value of the column was set to zero, so when it tried to assign the value to zero ... LinqToSql says hey ... nothing changed, so I am not updating the value.
What I am doing now ... just to make it work is the following:
```
ctx.DataContext.InvoiceItems.Attach(data, true);
```
That seems to force **all** the values to write themselves to the database. This works for now. | I have tried to reproduce this with a the following code, but for me it works.
```
using (DataClasses1DataContext ctx = new DataClasses1DataContext())
{
var obj = ctx.DecimalColumnTables.First();
Debug.Assert(obj.B != 0);
obj.B = 0;
ctx.SubmitChanges();
}
```
So I think there must be something special in your domain that causes this. I suggest you create a similarly simple repro with your domain model and see what happens.
LINQ to SQL ignores updates to the current value, so if the field was already zero, you may not see any updates.
Off: The OR/M you use is *LINQ to SQL*. LINQ is the name of the querying capability in .NET, but LINQ does not define nor implement any update logic. So the issue relates to LINQ to SQL, not LINQ. | Linq-to-Sql SubmitChanges not updating fields ... why? | [
"",
"c#",
".net",
"linq",
"linq-to-sql",
""
] |
I am using the following methods:
```
public void M1(Int32 a)
{
// acquire MyMutex
DoSomething(a);
// release MyMutex
}
```
and
```
public void M2(String s, String t)
{
// acquire MyMutex
DoSomethingElse(s, t);
// release MyMutex
}
```
From what I have found so far it seems that it is not possible to use a single delegate for two methods with different signatures.
Are there any other alternatives to write something like this:
```
public void UsingMutex(...)
{
// acquire MyMutex
...
// release MyMutex
}
UsingMutex(M1);
UsingMutex(M2);
```
All I can think of for the moment is to use two delegates and a boolean flag to know which delegate to call, but it is not a long-term solution.
Is it possible to combine generics with delegates? And if so, do you have links to any kind of documentation?
Environment: C# 2.0 | Absolutely you can mix delegates with generics. In 2.0, `Predicate<T>` etc are good examples of this, but you must have the same number of args. In this scenario, perhaps an option is to use captures to include the args in the delegate?
i.e.
```
public delegate void Action();
static void Main()
{
DoStuff(delegate {Foo(5);});
DoStuff(delegate {Bar("abc","def");});
}
static void DoStuff(Action action)
{
action();
}
static void Foo(int i)
{
Console.WriteLine(i);
}
static void Bar(string s, string t)
{
Console.WriteLine(s+t);
}
```
Note that `Action` is defined for you in .NET 3.5, but you can re-declare it for 2.0 purposes ;-p
Note that the anonymous method (`delegate {...}`) can also be parameterised:
```
static void Main()
{
DoStuff(delegate (string s) {Foo(5);});
DoStuff(delegate (string s) {Bar(s,"def");});
}
static void DoStuff(Action<string> action)
{
action("abc");
}
static void Foo(int i)
{
Console.WriteLine(i);
}
static void Bar(string s, string t)
{
Console.WriteLine(s+t);
}
```
Finally, C# 3.0 makes this all a lot easier and prettier with "lambdas", but that is another topic ;-p | Yes, it's possible to combine generics with delegates.
```
public delegate void Action<T>(T x);
public delegate void Action<T,U>(T x, U y);
public void UsingMutex<T>(Action<T> x, T t) {
// acquire mutex...
x(t);
// release mutex...
}
public void UsingMutex<T,U>(Action<T,U> x, T t, U u) {
// acquire mutex...
x(t, u);
// release mutex...
}
```
But you still have to handle different numbers of parameters using overloads.
"",
"c#",
"generics",
"delegates",
""
] |
How do I handle the scenario where I am making a synchronous request to the server using XMLHttpRequest and the server is not available?
```
xmlhttp.open("POST","Page.aspx",false);
xmlhttp.send(null);
```
Right now this scenario results in a JavaScript error:
"The system cannot locate the resource specified" | OK, I resolved it by using try...catch around xmlhttprequest.send:
```
xmlhttp.open("POST","Page.aspx",false);
try
{
xmlhttp.send(null);
}
catch(e)
{
alert('there was a problem communicating with the server');
}
``` | Try the timeout property.
```
xmlHTTP.timeout = 2000;
``` | Problem: XMLHttpRequest - handle server connection lost | [
"",
"javascript",
"xmlhttprequest",
""
] |
The [C# 3.0 spec](http://msdn.microsoft.com/en-us/vcsharp/aa336809.aspx) has the following code example in section 10.6.1.3 "Output parameters":
```
using System;
class Test
{
static void SplitPath(string path, out string dir, out string name) {
int i = path.Length;
while (i > 0) {
char ch = path[i – 1];
if (ch == '\\' || ch == '/' || ch == ':') break;
i--;
}
dir = path.Substring(0, i);
name = path.Substring(i);
}
static void Main() {
string dir, name;
SplitPath("c:\\Windows\\System\\hello.txt", out dir, out name);
Console.WriteLine(dir);
Console.WriteLine(name);
}
}
```
I cannot get this code to compile in VS2005/C#2.0. Did the behavior of strings in C# 3.0 change so that a string can be referred to as a char[] array without explicitly converting it (the statement "ch = path[i - 1]")? | It is an invalid character '–' (an en-dash). Change '–' to '-'. | What error are you getting?
System.String has had [] accessors since .NET v1.0 | String as char[] array in C# 3.0? | [
"",
"c#",
".net",
"c#-3.0",
""
] |
I wonder if people (meaning the company/developers) really care about having [SuppressMessage] attributes lying around in the shipping assemblies.
Creating separate configs in the project files that include CODE\_ANALYSIS in Release mode and then yanking it off in the final build seems like avoidable overhead to me.
What would be the best strategy if one does not want these to remain in the final assembly but still wants to use them in code?
And are there any advantages/disadvantages to storing them in FxCop project files?
[I'm coming from a VS2008 Pro+FxCop 1.36, rather than VS2008 Team System] | In the grand scheme of things, I don't think it really matters. Since this is an attribute (effectively meta-data), it doesn't impact code performance. That being said, do remember that the information in the attribute is available to anyone using a disassember like Reflector.
The problem with storing them in the FxCop project file is that you must then ensure that everyone uses the same project file and that the project file always travels with the project (it's checked in to source control, which means you must check it out each time you want to run FxCop).
If you don't want the SuppressMessage attributes in your production code you would need to only define the CODE\_ANALYSIS symbol in the build you are running FxCop against. This does mean defining it either on your Debug configuration or adding additional configurations. The attributes will only be compiled in to the code when the symbol is defined.
From an automated/nightly build viewpoint, you can build using a configuration that has the symbol defined and then build the production release without the symbol or do two builds - one with the symbol defined, run FxCop to get your violations, and then another build without the symbol defined. | The SuppressMessage attribute will only be added to your code if the CODE\_ANALYSIS preprocessor definition is present during a compile. You can verify this by looking at the definition of the attribute in Reflector.exe. By default this is not defined in Release so it won't affect production code.
Typically, I only run FxCop on DEBUG builds of my assembly where CODE\_ANALYSIS is defined. | .NET [SuppressMessage] attributes in shipping assemblies fxcop | [
"",
"c#",
".net",
"fxcop",
""
] |
I want to list (a sorted list) all my entries from an attribute called streetNames in my table/relation Customers.
eg. I want to achieve the following order:
Street\_1A
Street\_1B
Street\_2A
Street\_2B
Street\_12A
Street\_12B
A simple order by streetNames will do a lexical comparision and then Street\_12A and B will come before Street\_2A/B, and that is not correct. Is it possible to solve this by pure SQL? | The reliable way to do it (reliable in terms of "to sort your data correctly", not "to solve your general problem") is to split the data into street name and house number and sort both of them on their own. But this requires knowing where the house number starts. And this is the tricky part - making the assumption best fits your data.
You should use something like the following to refactor your data and from now on store the house number in a separate field. All this string-juggling won't perform too well when it comes to sorting large data sets.
Assuming it is the last thing in the street name, and it contains a number:
```
DECLARE @test TABLE
(
street VARCHAR(100)
)
INSERT INTO @test (street) VALUES('Street')
INSERT INTO @test (street) VALUES('Street 1A')
INSERT INTO @test (street) VALUES('Street1 12B')
INSERT INTO @test (street) VALUES('Street 22A')
INSERT INTO @test (street) VALUES('Street1 200B-8a')
INSERT INTO @test (street) VALUES('')
INSERT INTO @test (street) VALUES(NULL)
SELECT
street,
CASE
WHEN LEN(street) > 0 AND CHARINDEX(' ', REVERSE(street)) > 0
THEN CASE
WHEN RIGHT(street, CHARINDEX(' ', REVERSE(street)) - 1) LIKE '%[0-9]%'
THEN LEFT(street, LEN(street) - CHARINDEX(' ', REVERSE(street)))
END
END street_part,
CASE
WHEN LEN(street) > 0 AND CHARINDEX(' ', REVERSE(street)) > 0
THEN CASE
WHEN RIGHT(street, CHARINDEX(' ', REVERSE(street)) - 1) LIKE '%[0-9]%'
THEN RIGHT(street, CHARINDEX(' ', REVERSE(street)) - 1)
END
END house_part,
CASE
WHEN LEN(street) > 0 AND CHARINDEX(' ', REVERSE(street)) > 0
THEN CASE
WHEN RIGHT(street, CHARINDEX(' ', REVERSE(street)) - 1) LIKE '%[0-9]%'
THEN CASE
WHEN PATINDEX('%[a-z]%', LOWER(RIGHT(street, CHARINDEX(' ', REVERSE(street)) - 1))) > 0
THEN CONVERT(INT, LEFT(RIGHT(street, CHARINDEX(' ', REVERSE(street)) - 1), PATINDEX('%[^0-9]%', LOWER(RIGHT(street, CHARINDEX(' ', REVERSE(street)) - 1))) - 1))
END
END
END house_part_num
FROM
@test
ORDER BY
street_part,
house_part_num,
house_part
```
This assumes these conditions:
* a street address *can* have a house number
* a house number *must* be the last thing in a street address (no "525 Monroe Av.")
* a house number *should* start with a digit to be sorted correctly
* a house number *can* be a range ("200-205"), this would be sorted below 200
* a house number *must not* contain spaces or recognition fails (When you look at your data, you could apply something like `REPLACE(street, ' - ', '-')` to sanitize common patterns beforehand.)
* the whole thing is still an approximation that certainly deviates from what it would look like in a telephone book, for example | ```
SELECT street_name FROM tablex
ORDER BY udf_getStreetNumber(street_name)
```
In your udf\_getStreetNumber, write your business rule for stripping out the number.
EDIT
I think you can use regex functionality in SQL Server now (via CLR integration). I'd just strip out all non-numeric characters from the input. | SQL query (order by) | [
"",
"sql",
"sorting",
""
] |
```
function holiday_hitlist($tablename, $hit_user){
global $host, $user, $pass, $dbname;
$link = mysql_connect($host, $user, $pass, $dbname);
print "<div class=\"hit_list\">
<h3>My Holiday Hitlist</h3>
<p>Five things I want the most, based on my desirability ratings.<br/>You can't go wrong with this stuff!</p>
<ol>";
$sql = "SELECT title, URL, price FROM $dbname.$tablename WHERE user='$hit_user' AND rank >= 3 ORDER BY date DESC LIMIT 5";
$result = mysql_query($sql) or die ("Couldn't retrieve holiday hit list for this user. " . mysql_error());
while($row = mysql_fetch_array($result)) {
$title = $row['title'];
$url = $row['URL'];
$price = "$" . $row['price'];
$output = print "<li><a href=\"$url\" target=\"_blank\">$title</a> $price</li>";
}
print "</ol></div>";
return $output;
}
```
On an HTML page, it puts the "1" immediately following the closing `div` tag. Why? | See the line
```
$output = print "<li><a href=\"$url\" target=\"_blank\">$title</a> $price</li>";
```
you should probably remove the print after the $output =
Or maybe you just need to remove the $output =
I am not quite sure what you intend.
To explain: `$output` is getting the return value of `print "..."`, which is always 1.
"Return Values
Returns 1, always."
<http://ca.php.net/manual/en/function.print.php>
You should assign $output to be the output that you would like, then use print to display that output. | PHP is generating a numeral "1" on output of a function. I've never seen it do this before | [
"",
"php",
""
] |
I am considering using Maven for a Java open source project I manage.
In the past, however, Maven has not always had the best reputation. What are your impressions of Maven, at this time? | For an open-source project, Maven has some advantages, especially for your contributors (eg mvn eclipse:eclipse).
If you do go with Maven, the one rule you must follow religiously is: don't fight the tool. Lay out your project exactly how Maven recommends, follow all its conventions and best practices. Every little fight you get into with Maven is a day you won't be spending writing code for your project.
Also consider up front where you want to deploy your artifacts (are you going to host your own repository?).
And don't be afraid to go with something other than Maven (eg Ant). The success of your project will be the project itself, not its build tool (so long as you choose a best-of-breed build tool, which both Ant and Maven are). | Personally, I'm not a fan. I agree with most of what Charles Miller [says about it being broken by design](http://fishbowl.pastiche.org/2007/12/20/maven_broken_by_design/). It does solve some problems, but it also [introduces others](http://tapestryjava.blogspot.com/2007/11/maven-wont-get-fooled-again.html).
[Ant](http://ant.apache.org) is far from perfect, but it is a lot more robust and far better documented. It does take [some discipline to use it in a modular way](http://blog.uncommons.org/2007/10/25/15-tips-for-better-ant-builds/) though (which is one of the things Maven is trying to address). I think that inventing something better than both Ant and Maven wouldn't be that difficult, but that tool doesn't seem to exist yet.
If you like Maven's dependency management but not Maven, you can get something similar in Ant using [Ivy](http://ant.apache.org/ivy/). My problem with this style of dependency management is that it is [fragile](http://blog.uncommons.org/2008/02/03/maven-revisited-fallacies-and-usefulness/) due to factors outside of your control. The one use case where it does make some sense is if you have lots of projects internal to your organisation that depend on each other. In this case everything is under your control and it might work quite well.
**EDIT**: I forgot to add that even if you don't like Maven, you can't ignore it. If you write open source libraries that other people use, they will expect them to be available in a Maven repository so that they can use them easily from their Maven builds.
**EDIT2**: Since you've clarified that your main interest is in providing an Open Source library to other Maven users, it's worth noting that you don't necessarily have to use Maven to achieve this. There is [a set of Ant Tasks for publishing to a Maven repository](http://maven.apache.org/ant-tasks/index.html). So, if you want to continue using Ant to build your project, you can do but still satisfy your Maven-using users. | What are your impressions of Maven? | [
"",
"java",
"open-source",
"maven-2",
""
] |
Using the code below, I am returning an nvarchar field from *MS SQL 2005* and keep getting a System.InvalidCastException.
```
vo.PlacementID = dr.IsDBNull(0) ? null : dr.GetString(0);
```
The vo.PlacementID variable is of type String so there shouldn't be a problem.
The values I am trying to return are like this (number, number, letter): 00F, 02A, 01F, etc
```
System.InvalidCastException: Unable to cast object of type 'System.Int32' to
type 'System.String'.
at System.Data.SqlClient.SqlBuffer.get_String()
at System.Data.SqlClient.SqlDataReader.GetString(Int32 i)
```
Any help much appreciated. | If you read the exception again it gives you a clue as to the problem:
> System.**InvalidCastException**:
> ***Unable to cast object of type 'System.Int32' to type
> 'System.String'***. at
> System.Data.SqlClient.SqlBuffer.get\_String()
> at
> System.Data.SqlClient.SqlDataReader.GetString(Int32
> i)
Basically the underlying data type being returned in column 0 of your SqlDataReader isn't a string compatible type, hence the cast exception.
I'd suggest setting a breakpoint on the offending line and then execute the following line in the immediate window:
```
? dr.GetValue(0).GetType()
```
This will tell you what type is being returned. | The cast exception is not raised in the assignment, but in the DataReader's GetString().
Try `dr.GetValue(0).ToString()` | Problem returning field in ms sql 2005 - System.InvalidCastException: | [
"",
"sql",
"sql-server",
""
] |
Compiling a C++ file takes a very long time when compared to C# and Java. It takes significantly longer to compile a C++ file than it would to run a normal size Python script. I'm currently using VC++ but it's the same with any compiler. Why is this?
The two reasons I could think of were loading header files and running the preprocessor, but that doesn't seem like it should explain why it takes so long. | Several reasons
# Header files
Every single compilation unit requires hundreds or even thousands of headers to be (1) loaded and (2) compiled.
Every one of them typically has to be recompiled for every compilation unit,
because the preprocessor ensures that the result of compiling a header *might* vary between every compilation unit.
(A macro may be defined in one compilation unit which changes the content of the header).
This is probably *the* main reason, as it requires huge amounts of code to be compiled for every compilation unit,
and additionally, every header has to be compiled multiple times
(once for every compilation unit that includes it).
# Linking
Once compiled, all the object files have to be linked together.
This is basically a monolithic process that can't very well be parallelized, and has to process your entire project.
# Parsing
The syntax is extremely complicated to parse, depends heavily on context, and is very hard to disambiguate.
This takes a lot of time.
# Templates
In C#, `List<T>` is the only type that is compiled, no matter how many instantiations of List you have in your program.
In C++, `vector<int>` is a completely separate type from `vector<float>`, and each one will have to be compiled separately.
Add to this that templates make up a full Turing-complete "sub-language" that the compiler has to interpret,
and this can become ridiculously complicated.
Even relatively simple template metaprogramming code can define recursive templates that create dozens and dozens of template instantiations.
Templates may also result in extremely complex types, with ridiculously long names, adding a lot of extra work to the linker.
(It has to compare a lot of symbol names, and if these names can grow into many thousand characters, that can become fairly expensive).
And of course, they exacerbate the problems with header files, because templates generally have to be defined in headers,
which means far more code has to be parsed and compiled for every compilation unit.
In plain C code, a header typically only contains forward declarations, but very little actual code.
In C++, it is not uncommon for almost all the code to reside in header files.
# Optimization
C++ allows for some very dramatic optimizations.
C# or Java don't allow classes to be completely eliminated (they have to be there for reflection purposes),
but even a simple C++ template metaprogram can easily generate dozens or hundreds of classes,
all of which are inlined and eliminated again in the optimization phase.
Moreover, a C++ program must be fully optimized by the compiler.
A C# program can rely on the JIT compiler to perform additional optimizations at load-time,
C++ doesn't get any such "second chances". What the compiler generates is as optimized as it's going to get.
# Machine
C++ is compiled to machine code which may be somewhat more complicated than the bytecode Java or .NET use (especially in the case of x86).
(This is mentioned out of completeness only because it was mentioned in comments and such.
In practice, this step is unlikely to take more than a tiny fraction of the total compilation time).
# Conclusion
Most of these factors are shared by C code, which actually compiles fairly efficiently.
The parsing step is a lot more complicated in C++, and can take up significantly more time, but the main offender is probably templates.
They're useful, and make C++ a far more powerful language, but they also take their toll in terms of compilation speed. | Parsing and code generation are actually rather fast. The real problem is opening and closing files. Remember, even with include guards, the compiler still has to open the .H file and read each line (and then ignore it).
A friend once (while bored at work) took his company's application and put everything -- all source and header files -- into one big file. Compile time dropped from 3 hours to 7 minutes. | Why does C++ compilation take so long? | [
"",
"c++",
"performance",
"compilation",
""
] |
I hope someone can guide me as I'm stuck... I need to write an emergency broadcast system that notifies workstations of an emergency and pops up a little message at the bottom of the user's screen. This seems simple enough but there are about 4000 workstations over multiple subnets. The system needs to be almost realtime, lightweight and easy to deploy as a windows service.
The problem started when I discovered that the routers do not forward UDP broadcast packets x.x.x.255. Later I made a simple test hook in VB6 to catch net send messages but even those didn't pass the routers. I also wrote a simple packet sniffer to filter packets only to find that the network packets never reached the intended destination.
Then I took a look and explored using MSMQ over HTTP, but this required IIS to be installed on the target workstation. Since there are so many workstations it would be a major security concern.
Right now I've finished a web service with asynchronous callback that sends an event to subscribers. It works perfectly on a small scale but once there are more than 15 subscribers performance degrades considerably. Polling a server isn't really an option because of the load it will generate on the server (plus I've tried it too)
I need your help to guide me as to what technology to use. Has anyone used the Comet approach with this many clients, or should I look at WCF?
I'm using Visual C# 2005. Please help me out of this predicament.
Thanks | Consider using WCF callbacks mechanism and events. There is [good introduction](http://msdn.microsoft.com/en-us/magazine/cc163537.aspx) by Juval Lowy.
Another pattern is to implement [blocking web-service calls](http://xmpp.org/extensions/xep-0124.html). This is how GMail chat works, for example. However, you will have to deal with sessions and timeouts here. It works when clients are behind NATs and Firewalls and not reachable directly. But it may be too complicated for simple alert within intranet. | This is exactly what [Multicast](http://en.wikipedia.org/wiki/Multicast) was designed for.
A normal network broadcast (by definition) stays on the local subnet, and will not be forwarded through routers.
Multicast transmissions on the other hand can have various scopes, ranging from subnet local, through site local, even to global. All you need is for the various routers connecting your subnets together to be multicast aware. | Whats the best way to send an event to all workstations | [
"",
"c#",
"wcf",
"web-services",
"tcp",
"udp",
""
] |
I've seen a lot of example C++ code that wraps function calls in a FAILED() function/method/macro.
Could someone explain to me how this works? And if possible does anyone know a c# equivalent? | It generally checks COM function errors. But checking any function that returns a `HRESULT` is what it's meant for, specifically. `FAILED` returns a true value if the `HRESULT` value is negative, which means that the function failed ("error" or "warning" severity). Both `S_OK` and `S_FALSE` are >= 0 and so they are not used to convey an error. With "negative" I mean that the high bit is set for `HRESULT` error codes, *i.e.*, their hexadecimal representation, which can be found in, *e.g.*, winerror.h, begins with an 8, as in 0x8000FFFF. | [This page](http://msdn.microsoft.com/en-us/library/ms819775.aspx) shows the half of the WinError.h include file that defines `FAILED()`. It's actually just a very simple macro, the entire definition goes like this:
```
#define FAILED(Status) ((HRESULT)(Status)<0)
``` | Can someone explain the c++ FAILED function? | [
"",
"c++",
"windows",
"com",
""
] |
suppose I have an enum
```
[Flags]
public enum E {
zero = 0,
one = 1
}
```
then I can write
```
E e;
object o = 1;
e = (E) o;
```
and it will work.
BUT if I try to do that at runtime, like
```
(o as IConvertible).ToType(typeof(E), null)
```
it will throw InvalidCastException.
So, is there something that I can invoke at runtime, and it will convert from int32 to enum, in the same way as if I wrote a cast as above? | ```
object o = 1;
object z = Enum.ToObject(typeof(E), o);
``` | You can also use
```
Enum.Parse(typeof(E), o.ToString())
``` | what's the runtime equivalent of c# 'bracketed' type cast | [
"",
"c#",
"casting",
"enums",
"runtime",
""
] |
Can all .NET exception objects be serialized? | Yes and no. As other answers here have pointed out, all exception classes should be, and almost always are, serializable. If you run across a particular exception class that is not serializable, it is quite possibly a bug.
But when considering serializability you need to consider both the immediate class and all types which are members of this type (this is a recursive process). The base Exception class has a [Data](http://msdn.microsoft.com/en-us/library/system.exception.data.aspx) property which is of type IDictionary. This is problematic because it allows you, or anyone else, to add any object into the Exception. If this object is not serializable then serialization of the Exception will fail.
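The same pitfall is easy to demonstrate in any serializer that walks object members. As an illustrative aside (Python's pickle here, not .NET), an exception round-trips fine until a non-serializable member is attached to it:

```python
import pickle

err = ValueError("boom")
restored = pickle.loads(pickle.dumps(err))  # a plain exception round-trips
print(restored.args)  # ('boom',)

# Attach a non-serializable member, analogous to putting a
# non-serializable object into the Data dictionary.
err.data = lambda: None
try:
    pickle.dumps(err)
    raised = False
except Exception:
    raised = True
print("serialization failed after adding the member:", raised)  # True
```

The failure only surfaces when serialization is actually attempted, which mirrors the .NET situation described above.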
The default implementation of the Data property does do some basic checking to ensure an object being added is serializable. But it is flawed in several ways
* Only does top level checking for serializability of a type and this check is only done by checking Type.IsSerializable (not a 100% solid check).
* The property is virtual. A derived exception type could override and use a Hashtable which does 0 checking. | Yes, but historically there have been (no pun intended) exceptions.
In other words, **all Exceptions SHOULD be serializable**, but some custom Exceptions from third party code may *not* be, depending on how they're implemented.
For example, in the .NET 1.0 era, exceptions from the official Microsoft Oracle database provider were not serializable, due to bugs in their code. | Are all .NET exceptions serializable? | [
"",
"c#",
".net",
"exception",
""
] |
At my current job, we're looking to implement our own odbc driver to allow many different applications to be able to connect to our own app as a datasource. Right now we are trying to weigh the options of developing our own driver to the implementation spec, which is massive, *or* using an SDK that allows for programmers to 'fill in' the data specific parts and allow higher levels of abstraction.
Has anyone else implemented a custom odbc driver? What pitfalls did you run into? What benefits did you see from doing it yourself? How many manhours would you approximate it took? Did you use an SDK, and if so, what benefits/downsides did you see from that approach?
Any comments and answers would be greatly appreciated. Thanks!
**EDIT:** We are trying to maintain portability with our code, which is written in C. | I have not, but I once interviewed at a company that had done exactly this. They made
a 4GL/DBMS product called AMPS of the same sort of architecture as MUMPS - a hierarchical database with integrated 4GL (a whole genre of such systems came out during the 1970s). They had quite a substantial legacy code base and customers wishing to connect to it using MS Access.
The lead developer who interviewed me shared some war stories about this. Apparently it is exceedingly painful to do and shouldn't be taken lightly. However, they did actually succeed in implementing it.
One alternative to doing this would be to provide a data mart/BI product (along the lines of SAP BW) that presents your application data in an external database and massages it into a more friendly format such as a star or snowflake schema.
This would suffer from not supporting real-time access, but might be considerably easier to implement (and more importantly maintain) than an ODBC driver. If your real-time access requirements are reasonably predictable and limited, you could possibly expose a web service API to support those.
Your users can then download and use for instance the Postgresql ODBC driver.
Exactly what back-end database you choose to emulate should probably depend the most on how well the wire protocol format is documented.
Both [Postgres](https://www.postgresql.org/docs/current/protocol.html) and [MySQL](http://dev.mysql.com/doc/internals/en/client-server-protocol.html) have decent documentation for their client-server protocols.
A simple Python 2.7 example of a server backend that understands parts of the Postgresql wire protocol is below. The example script creates a server that listens to port 9876. I can use the command `psql -h localhost -p 9876` to connect to the server. Any query executed will return a result set with columns abc and def and two rows, all values NULL.
Reading the Postgresql docs and using something like wireshark to inspect real protocol traffic would make it pretty simple to implement a Postgresql-compatible back end.
```
import SocketServer
import struct
def char_to_hex(char):
retval = hex(ord(char))
if len(retval) == 4:
return retval[-2:]
else:
assert len(retval) == 3
return "0" + retval[-1]
def str_to_hex(inputstr):
return " ".join(char_to_hex(char) for char in inputstr)
class Handler(SocketServer.BaseRequestHandler):
def handle(self):
print "handle()"
self.read_SSLRequest()
self.send_to_socket("N")
self.read_StartupMessage()
self.send_AuthenticationClearText()
self.read_PasswordMessage()
self.send_AuthenticationOK()
self.send_ReadyForQuery()
self.read_Query()
self.send_queryresult()
def send_queryresult(self):
fieldnames = ['abc', 'def']
HEADERFORMAT = "!cih"
fields = ''.join(self.fieldname_msg(name) for name in fieldnames)
rdheader = struct.pack(HEADERFORMAT, 'T', struct.calcsize(HEADERFORMAT) - 1 + len(fields), len(fieldnames))
self.send_to_socket(rdheader + fields)
rows = [[1, 2], [3, 4]]
DRHEADER = "!cih"
for row in rows:
dr_data = struct.pack("!ii", -1, -1)
dr_header = struct.pack(DRHEADER, 'D', struct.calcsize(DRHEADER) - 1 + len(dr_data), 2)
self.send_to_socket(dr_header + dr_data)
self.send_CommandComplete()
self.send_ReadyForQuery()
def send_CommandComplete(self):
HFMT = "!ci"
msg = "SELECT 2\x00"
self.send_to_socket(struct.pack(HFMT, "C", struct.calcsize(HFMT) - 1 + len(msg)) + msg)
def fieldname_msg(self, name):
tableid = 0
columnid = 0
datatypeid = 23
datatypesize = 4
typemodifier = -1
format_code = 0 # 0=text 1=binary
return name + "\x00" + struct.pack("!ihihih", tableid, columnid, datatypeid, datatypesize, typemodifier, format_code)
def read_socket(self):
print "Trying recv..."
data = self.request.recv(1024)
print "Received {} bytes: {}".format(len(data), repr(data))
print "Hex: {}".format(str_to_hex(data))
return data
def send_to_socket(self, data):
print "Sending {} bytes: {}".format(len(data), repr(data))
print "Hex: {}".format(str_to_hex(data))
return self.request.sendall(data)
def read_Query(self):
data = self.read_socket()
msgident, msglen = struct.unpack("!ci", data[0:5])
assert msgident == "Q"
print data[5:]
def send_ReadyForQuery(self):
self.send_to_socket(struct.pack("!cic", 'Z', 5, 'I'))
def read_PasswordMessage(self):
data = self.read_socket()
b, msglen = struct.unpack("!ci", data[0:5])
assert b == "p"
print "Password: {}".format(data[5:])
def read_SSLRequest(self):
data = self.read_socket()
msglen, sslcode = struct.unpack("!ii", data)
assert msglen == 8
assert sslcode == 80877103
def read_StartupMessage(self):
data = self.read_socket()
msglen, protoversion = struct.unpack("!ii", data[0:8])
print "msglen: {}, protoversion: {}".format(msglen, protoversion)
assert msglen == len(data)
parameters_string = data[8:]
print parameters_string.split('\x00')
def send_AuthenticationOK(self):
self.send_to_socket(struct.pack("!cii", 'R', 8, 0))
def send_AuthenticationClearText(self):
self.send_to_socket(struct.pack("!cii", 'R', 8, 3))
if __name__ == "__main__":
server = SocketServer.TCPServer(("localhost", 9876), Handler)
try:
server.serve_forever()
except:
server.shutdown()
```
Example command line psql session:
```
[~]
$ psql -h localhost -p 9876
Password:
psql (9.1.6, server 0.0.0)
WARNING: psql version 9.1, server version 0.0.
Some psql features might not work.
Type "help" for help.
codeape=> Select;
abc | def
-----+-----
|
|
(2 rows)
codeape=>
```
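As an aside on the framing the script above relies on (this snippet is Python 3, while the server example is Python 2): every message is a one-byte type code followed by a big-endian length that counts itself but not the type byte. The RowDescription header from `send_queryresult()` looks like this:

```python
import struct

HEADERFORMAT = "!cih"  # type code, message length, number of fields

# Header for a RowDescription ('T') message announcing two fields and,
# for simplicity, no field payload: the length covers everything after
# the type byte, including the length field itself.
header = struct.pack(HEADERFORMAT, b"T", struct.calcsize(HEADERFORMAT) - 1, 2)

msgtype, length, nfields = struct.unpack(HEADERFORMAT, header)
print(msgtype, length, nfields)  # b'T' 6 2
```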
An ODBC driver that speaks the Postgresql protocol should work as well (but I have not tried it yet). | Creating a custom ODBC driver | [
"",
"sql",
"database",
"sdk",
"odbc",
"odbc-bridge",
""
] |
In a C# program, I have an abstract base class with a static "Create" method. The Create method is used to create an instance of the class and store it locally for later use. Since the base class is abstract, implementation objects will always derive from it.
I want to be able to derive an object from the base class, call the static Create method (implemented once in the base class) through the derived class, and create an instance of the derived object.
Are there any facilities within the C# language that will allow me to pull this off? My current fallback position is to pass an instance of the derived class as one of the arguments to the Create function, i.e.:
```
objDerived.Create(new objDerived(), "Arg1", "Arg2");
``` | Try using generics:
```
public static T Create<T>() where T : BaseClass, new()
{
    T newVar = new T();
    // Do something with newVar
    return newVar;
}
```
Sample use:
```
DerivedClass d = BaseClass.Create<DerivedClass>();
```
``` | ## Summary
There are two main options. The nicer and newer one is to use generics, the other is to use reflection. I'm providing both in case you need to develop a solution that works prior to .NET 2.0.
## Generics
```
abstract class BaseClass
{
    public static T Create<T>() where T : BaseClass, new()
{
return new T();
}
}
```
Where the usage would be:
```
DerivedClass derivedInstance = BaseClass.Create<DerivedClass>();
```
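As a cross-language aside (not part of the original answer): the reason C# needs generics or reflection here is that types are not ordinary values. In a dynamic language the type object itself is the factory; a Python sketch of the same pattern, with made-up class names:

```python
class Base:
    @classmethod
    def create(cls, derived_type):
        # Mirrors the "where T : BaseClass" constraint at runtime.
        if not issubclass(derived_type, cls):
            raise TypeError("expected a subclass of Base")
        return derived_type()  # calling the type object instantiates it

class Derived(Base):
    pass

obj = Base.create(Derived)
print(type(obj).__name__)  # Derived
```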
## Reflection
```
abstract class BaseClass
{
public static BaseClass Create(Type derivedType)
{
// Cast will throw at runtime if the created class
// doesn't derive from BaseClass.
return (BaseClass)Activator.CreateInstance(derivedType);
}
}
```
Where the usage would be (split over two lines for readability):
```
DerivedClass derivedClass
= (DerivedClass)BaseClass.Create(typeof(DerivedClass));
``` | C#: Determine derived object type from a base class static method | [
"",
"c#",
"static-methods",
"derived-class",
""
] |
In my Java application I would like to download a JPEG, convert it to a PNG, and do something with the resulting bytes.
I am almost certain I remember a library to do this exists, but I cannot remember its name. | [ImageIO](http://docs.oracle.com/javase/6/docs/api/javax/imageio/ImageIO.html) can be used to load JPEG files and save PNG files (also into a `ByteArrayOutputStream` if you don't want to write to a file). | This is what I ended up doing; I was thinking too far outside of the box when I asked the question:
```
// these are the imports needed
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import java.io.ByteArrayOutputStream;
// read a jpeg from a inputFile
BufferedImage bufferedImage = ImageIO.read(new File(inputFile));
// write the bufferedImage back to outputFile
ImageIO.write(bufferedImage, "png", new File(outputFile));
// this writes the bufferedImage into a byte array called resultingBytes
ByteArrayOutputStream byteArrayOut = new ByteArrayOutputStream();
ImageIO.write(bufferedImage, "png", byteArrayOut);
byte[] resultingBytes = byteArrayOut.toByteArray();
``` | Image transcoding (JPEG to PNG) with Java | [
"",
"java",
"png",
"jpeg",
"image-transcoding",
""
] |
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;
using System.Text;
namespace ConsoleApplication1
{
public class Class1
{
static void Main(string[] args)
{
List<Car> mylist = new List<Car>();
Car car1;
Car car2;
Car car3;
car1 = new Car()
{
make = "Honda",
id = 1
};
car2 = new Car()
{
make = "toyota",
id = 2
};
car3 = new Car()
{
make = "Honda",
id = 3,
color = "red"
};
mylist.Add(car1);
mylist.Add(car2);
////mylist.Where(p => p.id == 1).SingleOrDefault() = car3;
}
}
public class Car
{
public int id { get; set; }
public string make { get; set; }
public string color { get; set; }
}
}
```
How can I best update the list, replacing the Honda car with id 1 with the Honda car with id 3? | Everything leppie said - plus:
```
int index = mylist.FindIndex(p => p.id == 1);
if(index<0) {
mylist.Add(car3);
} else {
mylist[index] = car3;
}
```
This just uses the existing FindIndex to locate a car with id 1, then replace or add it. No LINQ; no SQL - just a lambda and `List<T>`. | If you wanted to do an update to multiple elements...
```
foreach (var f in mylist.FindAll(x => x.id == 1))
{
f.id = car3.id;
f.color = car3.color;
f.make = car3.make;
}
``` | List Manipulation in C# using Linq | [
"",
"c#",
"linq",
""
] |
The answers provided have all been great. I mentioned in the comments of Alnitak's answer that I would need to go take a look at my CSV generation script because, for whatever reason, it wasn't outputting UTF-8.
As was correctly pointed out, it WAS outputting UTF-8 - the problem existed with Ye Olde Microsoft Excel which wasn't picking up the encoding the way I would have liked.
My existing CSV generation looked something like:
```
// Create file and exit;
$filename = $file."_".date("Y-m-d_H-i",time());
header("Content-type: application/vnd.ms-excel");
header("Content-disposition: csv" . date("Y-m-d") . ".csv");
header( "Content-disposition: filename=".$filename.".csv");
echo $csv_output;
```
It now looks like:
```
// Create file and exit;
$filename = $file."_".date("Y-m-d_H-i",time());
header("Content-type: text/csv; charset=ISO-8859-1");
header("Content-disposition: csv" . date("Y-m-d") . ".csv");
header("Content-disposition: filename=".$filename.".csv");
echo iconv('UTF-8', 'ISO-8859-1', $csv_output);
```
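As an aside, the conversion above silently drops any character that has no ISO-8859-1 equivalent. A common alternative is to stay in UTF-8 and prepend a byte-order mark, which Excel uses to detect the encoding; the idea, sketched in Python rather than PHP:

```python
BOM = b"\xef\xbb\xbf"  # UTF-8 byte-order mark that Excel looks for

csv_output = "amount\n\u00a310.93\n"  # a pound sign in the data
payload = BOM + csv_output.encode("utf-8")

print(payload.startswith(BOM))  # True: the file now opens as UTF-8
```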
## -------------------------------------------------------
**ORIGINAL QUESTION**
Hi,
I've got a form which collects data, form works ok but I've just noticed that if someone types or uses a '£' symbol, the MySQL DB ends up with '£'.
Not really sure where or how to stop this from happening, code and DB information to follow:
**MySQL details**
```
mysql> SHOW COLUMNS FROM fraud_report;
+--------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------+--------------+------+-----+---------+----------------+
| id | mediumint(9) | | PRI | NULL | auto_increment |
| crm_number | varchar(32) | YES | | NULL | |
| datacash_ref | varchar(32) | YES | | NULL | |
| amount | varchar(32) | YES | | NULL | |
| sales_date | varchar(32) | YES | | NULL | |
| domain | varchar(32) | YES | | NULL | |
| date_added | datetime | YES | | NULL | |
| agent_added | varchar(32) | YES | | NULL | |
+--------------+--------------+------+-----+---------+----------------+
8 rows in set (0.03 sec)
```
**PHP Function**
```
function processFraudForm($crm_number, $datacash_ref, $amount, $sales_date, $domain, $agent_added) {
// Insert Data to DB
$sql = "INSERT INTO fraud_report (id, crm_number, datacash_ref, amount, sales_date, domain, date_added, agent_added) VALUES (NULL, '$crm_number', '$datacash_ref', '$amount', '$sales_date', '$domain', NOW(), '$agent_added')";
$result = mysql_query($sql) or die (mysql_error());
if ($result) {
$outcome = "<div id=\"success\">Emails sent and database updated.</div>";
} else {
$outcome = "<div id=\"error\">Something went wrong!</div>";
}
return $outcome;
}
```
**Example DB Entry**
```
+----+------------+--------------+---------+------------+--------------------+---------------------+------------------+
| id | crm_number | datacash_ref | amount | sales_date | domain | date_added | agent_added |
+----+------------+--------------+---------+------------+--------------------+---------------------+------------------+
| 13 | 100xxxxxxx | 10000000 | £10.93 | 18/12/08 | blargh.com | 2008-12-22 10:53:53 | agent.name |
```
``` | What you're seeing is [UTF-8](http://en.wikipedia.org/wiki/UTF-8) encoding - it's a way of storing Unicode characters in a relatively compact format.
The pound symbol has value `0x00a3` in Unicode, but when it's written in UTF-8 that becomes `0xc2 0xa3` and that's what's stored in the database. It seems that your database table is already set to use UTF-8 encoding. This is a *good thing*!
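This is easy to verify from almost any language; a quick check in Python (an aside, not part of the original answer):

```python
pound = "\u00a3"  # £
encoded = pound.encode("utf-8")
print(hex(encoded[0]), hex(encoded[1]))  # 0xc2 0xa3

# Reading those two bytes back as Latin-1 reproduces the mojibake
print(encoded.decode("latin-1"))  # Â£
```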
If you pull the value back out from the database and display it on a UTF-8 compatible terminal (or on a web page that's declared as being UTF-8 encoded) it will look like a normal pound sign again. | £ is 0xC2 0xA3 which is the UTF-8 encoding for £ symbol - so you're storing it as UTF-8, but presumably viewing it as Latin-1 or something other than UTF-8
It's useful to know how to spot and decode UTF-8 by hand - check the [wikipedia page](http://en.wikipedia.org/wiki/UTF-8) for info on how the encoding works:
* 0xC2A3 = 110 **00010** 10 **100011**
* The bold parts are the actual
"payload", which gives 10100011,
which is 0xA3, the pound symbol. | MySQL or PHP is appending a  whenever the £ is used | [
"",
"php",
"mysql",
"character",
""
] |
I'm trying to write rules for detecting some errors in *annotated* multi-threaded java programs. As a toy example, I'd like to detect if any method annotated with @ThreadSafe calls a method without such an annotation, without synchronization. I'm looking for a tool that would allow me to write such a test.
I've looked at source analyzers, like CheckStyle and PMD, and they don't really have cross-class analysis capabilities. Bytecode analysers, like FindBugs and JLint, seem rather difficult to extend.
I'd settle for a solution to something even simpler, but posing the same difficulty: writing a *custom* rule that checks whether each overriden method is annotated with @Override. | Have you tried [FindBugs](http://findbugs.sourceforge.net/)? It actually supports a set of [annotations for thread safety](http://www.javaconcurrencyinpractice.com/annotations/doc/net/jcip/annotations/package-summary.html) (the same as those used in [Java Concurrency in Practice](http://www.javaconcurrencyinpractice.com)). Also, you can write your own custom rules. I'm not sure whether you can do cross-class analysis, but I believe so.
[Peter Ventjeer](http://pveentjer.wordpress.com) has a [concurrency checking tool](http://pveentjer.wordpress.com/2008/05/26/the-concurrency-detector/) (that uses ASM) to detect stuff like this. I'm not sure if he's released it publicly, but he might be able to help you.
And I believe [Coverity's](http://www.coverity.com/) static/dynamic analysis tools for thread safety do checking like this. | You can do [cross-class analysis in PMD](http://pmd.sourceforge.net/howtowritearule.html#I_want_to_implement_a_rule_that_analyse_more_than_the_class__) (though I've never used it for this specific purpose). I think it's possible using this visitor pattern that they document, though I'll leave the specifics to you. | Cross-class-capable extendable static analysis tool for java? | [
"",
"java",
"multithreading",
"annotations",
"static-analysis",
""
] |
I had this question earlier and it was concluded it was a bug in 5.2.5. Well, it's still broken in 5.2.6, at least for me:
Please let me know if it is broken or works for you:
```
$obj = new stdClass();
$obj->{"foo"} = "bar";
$obj->{"0"} = "zero";
$arr = (array)$obj;
//foo -- bar
//0 -- {error: undefined index}
foreach ($arr as $key=>$value){
echo "$key -- " . $arr[$key] . "\n";
}
```
Any ways to "fix" the array after it has been cast from a stdClass? | Definitely seems like a bug to me (PHP 5.2.6).
You can fix the array like this:
```
$arr = array_combine(array_keys($arr), array_values($arr));
```
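The root of the problem is that a PHP array has a single key space in which `0` and `"0"` collapse into the same integer key, so the string-keyed entry produced by the cast becomes unreachable. In languages whose dictionaries keep integer and string keys distinct, the two coexist; a quick Python illustration (an aside, not a PHP fix):

```python
d = {0: "integer key", "0": "string key"}  # two distinct keys
print(d[0], d["0"])  # integer key string key
print(len(d))  # 2
```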
It's been reported in [this bug report](http://bugs.php.net/bug.php?id=45346) but marked as bogus... [the documentation](http://www.php.net/manual/en/language.types.array.php#language.types.array.casting) says:
> The keys are the member variable
> names, with a few notable exceptions:
> integer properties are unaccessible; | A bit of experimentation shows that PHP's own functions don't persist this fubarity.
```
function noopa( $a ){ return $a; }
$arr = array_map('noopa', $arr );
$arr[0]; # no error!
```
This, in effect, just creates a copy of the array, and the fix occurs during the copy.
Ultimately, it's a design failure across the board; try using array\_merge the way you think it works on an array with mixed numeric and string keys.
All numbered elements get copied and some get re-numbered, even if some merely happen to be string-encapsulated-numbers, and as a result, there are dozens of homebrew implementations of array\_merge just to solve this problem.
Back when php tried to make a clone of perl and failed, they didn't grasp the concept of arrays and hashes being distinct concepts, an instead globbed them together into one universal umbrella. Good going!.
For their next trick, they manage to break namespace delimiters because of some technical problem that no other language has for some reason encountered. | PHP bug with converting object to arrays | [
"",
"php",
"arrays",
"stdclass",
""
] |
Which version of JavaScript does Google Chrome support in relation to Mozilla Firefox? In other words, does Chrome support JavaScript 1.6, 1.7, or 1.8, which Firefox also supports, or some combination of them? | While Chrome will execute JavaScript marked as "javascript1.7", it does not support JS 1.7 features like the scoped "let" variable declaration.
This code will run on Firefox 3.5 but not on Chrome using V8:
```
<script language="javascript" type="application/javascript;version=1.7">
function foo(){ let a = 4; alert(a); }; foo();
</script>
```
If you change language to "javascript1.7" and omit the type, it won't run with JS 1.7 features in Firefox 3.5. The type section is necessary.
This seems to be related to a general WebKit bug, <https://bugs.webkit.org/show_bug.cgi?id=23097>; it may be that Chrome emulates the Safari behavior even though it uses a different engine.
[When asked about supporting JS 1.8 features](http://www.mail-archive.com/v8-users@googlegroups.com/msg00731.html), the V8 team said they were trying to track the version used in Safari so pages would act the same in both browsers. | This thread is still relevant. As of 2012, Chrome supports most of Javascript 1.6, not including string and array generics. It supports none of 1.7. It supports reduce and reduceRight from 1.8, all of 1.8.1, and Getters and setters and all the non-version specific things listed on [this page](http://robertnyman.com/javascript/index.html). This page is linked from the Mozilla Developer Network, which specifies the versions of javascript, found [here](https://developer.mozilla.org/en/JavaScript). | Google Chrome - JavaScript version | [
"",
"javascript",
"firefox",
"google-chrome",
""
] |
Ok, I'm doing a bunch of RIA/AJAX stuff and need to create a "pretty", custom confirm box which is a DIV (not the built-in javascript confirm). I'm having trouble determining how to accomplish a pause in execution to give the user a chance to accept or decline the condition before either resuming or halting execution. (depending upon their answer)
So here's the general flow of logic I'm dealing with:
1. User selects an item from dropdown and clicks button.
2. In client-side javascript eventhandler for button, I need to check a (this is the key) SERIES of conditions for the item they chose in dropdown.
These conditions could possibly result in not showing any confirmation at all, or only some of the conditions may evaluate to true, which means I'll need to ask the user to accept or decline each condition before proceeding. Only one confirmation should be shown at a time.
To demonstrate the logic:
```
function buttonEventHandler() {
if (condition1) {
if(!showConfirmForCondition1) // want execution to pause while waiting for user response.
return; // discontinue execution
}
if (condition2) {
if (!showConfirmForCondition2) // want execution to pause while waiting for user response.
return; // discontinue execution
}
if (condition3) {
if (!showConfirmForCondition3) // want execution to pause while waiting for user response.
return; // discontinue execution
}
...
}
```
If anybody has dealt with this challenge before and found a solution, help would be greatly appreciated. As a note, I'm also using the **MS Ajax** and **jQuery** libraries although I haven't found any functionality that may already be included in those for this problem. | I'm afraid to say that it's not possible to pause the Javascript runtime in the same way that the "confirm" and "alert" dialogs pause it. To do it with a DIV you're going to have to break up your code into multiple chunks and have the event handler on the custom confirm box call the next section of code.
There have been some projects to bring "continuations" support into Javascript, such as [Narrative Javascript](http://www.neilmix.com/narrativejs/doc/), so if you're really keen on getting it to work in a single block of code you could look into that. | The way I did this:
1. Create your own confirm dialog box
with buttons, let's say "Yes" and
"No".
2. Create function that triggers the
dialog box, let's say `confirmBox(text,
callback)`.
3. Bind events on "Yes" and "No"
buttons - "Yes" - `callback(true)`,
"No" - `callback(false)`.
4. When you are calling the function
use this syntax:
```
confirmBox("Are you sure", function(callback){
if (callback) {
// do something if user pressed yes
}
else {
// do something if user pressed no
}
});
``` | How to create a custom "confirm" & pause js execution until user clicks button? | [
"",
"jquery",
"ajax",
"ria",
"javascript",
""
] |
I'm trying to animate a block level element using jQuery. The page loads with the element styled with `display: none`. Id like it to `slideDown` whilst transparent and then `fadeIn` the content using the callback, however `slideDown` appears to set the visibility to full before `fadeIn` is called, resulting in the content being visible during the `slideDown` animation.
Does anyone know of a way to accomplish this? | A few probable issues with your code: are you setting the content to hide as well in the beginning? Are you calling `fadeIn` during the `slideDown` callback?
here's some example HTML/code that will `fadeIn` after the `slideDown`
```
$('div').hide(); // make sure you hide both container/content
$('#container').slideDown('slow', function() {
$('#content').fadeIn('slow'); // fade in of content happens after slidedown
});
```
html:
```
<div id="container">
<div id="content">stuff</div>
</div>
``` | Your description sounds like a variation of [this](http://dev.jquery.com/ticket/2423) problem, which I had to deal with the other day.
It only happens in IE, and only if you are rendering in quirks mode. So the easy solution is to switch to standards mode. You can find a table of doctypes [here](http://hsivonen.iki.fi/doctype/) that make IE render in standards mode.
But if you are in the same situation as me, where that's not an option, then you can apply a patch to your jQuery library, which you can find [here](http://dev.jquery.com/attachment/ticket/1726/1726.diff). | How do I slideDown whilst maintaining transparency in jQuery? | [
"",
"javascript",
"jquery",
"animation",
""
] |
I am wondering if it is possible to validate parameters to (custom) .NET attributes. E.g., if I had an attribute that takes a positive integer, could I force a compile-time error when a negative value was supplied?
[DonkeyAttribute(1)] //OK
[DonkeyAttribute(-828)] //error
In this example I could use an unsigned integer (but that is non-CLS-compliant, I believe?)
Recommendations? | I don't think it is, normally; however, [this](http://karlagius.wordpress.com/2008/01/29/validating-attribute-usage-with-postsharp-aspects/) article details a solution using [PostSharp](http://www.postsharp.org/). Not sure if it's fit for your purpose, but give it a go! | You could enforce this with unit tests; a similar solution to the one I proposed for [this question](https://stackoverflow.com/questions/19454/enforce-attribute-decoration-of-classesmethods#19455), maybe. | Possible to validate a .NET Attribute parameter? | [
"",
"c#",
"attributes",
""
] |
Avast there fellow programmers!
I have the following problem:
I have two rectangles overlapping like shown on the picture below.

I want to figure out the polygon consisting of point ABCDEF.
Alternate Christmas description: the red cookie cutter is cutting away a bit of the black cookie. I want to calculate the black cookie.
Each rectangle is a data structure with 4 2d-vertices.
What is the best algorithm to achieve this? | This is a special case of general 2D polygon clipping. A good place to start is the Weiler-Atherton algorithm. [Wikipedia has a summary](http://en.wikipedia.org/wiki/Weiler-Atherton) and [links to the original paper](http://www.cs.drexel.edu/~david/Classes/CS430/HWs/p214-weiler.pdf). The algorithm seems to match the data structure you've described pretty well.
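If your rectangles happen to be axis-aligned, there is a much simpler special case: the overlap is itself a rectangle, and the remaining "cookie" is the black rectangle minus that overlap. A sketch of the overlap computation (illustrative Python, not the general algorithm above):

```python
def intersect(a, b):
    """Axis-aligned rectangles as (x1, y1, x2, y2); returns the overlap
    rectangle, or None if the two do not overlap."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    if x1 >= x2 or y1 >= y2:
        return None
    return (x1, y1, x2, y2)

print(intersect((0, 0, 4, 4), (2, 2, 6, 6)))  # (2, 2, 4, 4)
print(intersect((0, 0, 1, 1), (2, 2, 3, 3)))  # None
```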
Note that it's quite possible you'll end up with a rectangle with a hole in it (if the red one is entirely inside the black one) or even two rectangles (eg if the red is taller and skinnier than the black). If you're certain there is only one corner of the red rectangle inside the black one then the solution should be much simpler. | [constructive solid geometry](http://en.wikipedia.org/wiki/Constructive_solid_geometry) | Boolean operations on rectangle polygons | [
"",
"c++",
"algorithm",
"geometry",
"boolean",
""
] |
Does anyone know how to calculate time difference in C++ in milliseconds?
I used [`difftime`](http://linux.die.net/man/3/difftime) but it doesn't have enough precision for what I'm trying to measure. | You have to use one of the more specific time structures, either timeval (microsecond-resolution) or timespec (nanosecond-resolution), but you can do it manually fairly easily:
```
#include <time.h>
int diff_ms(timeval t1, timeval t2)
{
return (((t1.tv_sec - t2.tv_sec) * 1000000) +
(t1.tv_usec - t2.tv_usec))/1000;
}
```
This obviously has some problems with integer overflow if the difference in times is really large (or if you have 16-bit ints), but that's probably not a common case. | I know this is an old question, but there's an updated answer for C++0x. There is a new header called `<chrono>` which contains modern time utilities. Example use:
```
#include <iostream>
#include <thread>
#include <chrono>
int main()
{
typedef std::chrono::high_resolution_clock Clock;
typedef std::chrono::milliseconds milliseconds;
Clock::time_point t0 = Clock::now();
std::this_thread::sleep_for(milliseconds(50));
Clock::time_point t1 = Clock::now();
milliseconds ms = std::chrono::duration_cast<milliseconds>(t1 - t0);
std::cout << ms.count() << "ms\n";
}
```
Output:
```
50ms
```
More information can be found here:
<http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2661.htm>
There is also now a boost implementation of `<chrono>`. | Time difference in C++ | [
"",
"c++",
"time",
""
] |
So I have a client whose current host does not allow me to use tar via exec()/passthru()/etc., and I need to back up the site periodically and programmatically, so is there a solution?
This is a Linux server. | At <http://pear.php.net/package/Archive_Tar> you can download the PEAR tar package and use it like this to create the archive:
```
<?php
require 'Archive/Tar.php';
$obj = new Archive_Tar('archive.tar');
$path = '/path/to/folder/';
$handle=opendir($path);
$files = array();
while (false !== ($file = readdir($handle)))
{
    if ($file === '.' || $file === '..') {
        continue; // skip the . and .. pseudo-entries
    }
    $files[] = $path . $file;
}
if ($obj->create($files))
{
//Sucess
}
else
{
//Fail
}
?>
``` | PHP 5.3 offers a much easier way to solve this issue.
Look here: <http://www.php.net/manual/en/phardata.buildfromdirectory.php>
```
<?php
$phar = new PharData('project.tar');
// add all files in the project
$phar->buildFromDirectory(dirname(__FILE__) . '/project');
?>
``` | Build Tar file from directory in PHP without exec/passthru | [
"",
"php",
"linux",
"backup",
"archive",
""
] |
Do you know a good source to learn how to design SQL solutions?
Beyond the basic language syntax, I'm looking for something to help me understand:
1. What tables to build and how to link them
2. How to design for different scales (small client APP to a huge distributed website)
3. How to write effective / efficient / elegant SQL queries | I started with this book: [Relational Database Design Clearly Explained (The Morgan Kaufmann Series in Data Management Systems) (Paperback)](http://www.amazon.co.uk/Relational-Database-Explained-Kaufmann-Management/dp/1558608206/ref=sr_1_3?ie=UTF8&s=books&qid=1229597641&sr=8-3) by Jan L. Harrington and found it very clear and helpful
and as you get up to speed this one was good too [Database Systems: A Practical Approach to Design, Implementation and Management (International Computer Science Series)](http://www.amazon.co.uk/Database-Systems-Implementation-Management-International/dp/0321210255/ref=cm_lmf_tit_5_rsrrsi0) (Paperback)
I think SQL and database design are **different** (but complementary) skills. | I started out with this article
<http://en.tekstenuitleg.net/articles/software/database-design-tutorial/intro.html>
It's pretty concise compared to reading an entire book and it explains the basics of database design (normalization, types of relationships) very well. | A beginner's guide to SQL database design | [
"",
"sql",
"database",
"database-design",
"scalability",
""
] |
I was hoping to automate some tasks related to Subversion, so I got SharpSvn. Unfortunately I can't find much documentation for it.
I want to be able to view the changes after a user commits a new revision so I can parse the code for special comments that can then be uploaded into my ticket system. | If you just want to browse SharpSvn you can use <http://docs.sharpsvn.net/current/>. The documentation there is far from complete as the focus is primarily on providing features. Any help on enhancing the documentation (or SharpSvn itself) is welcome ;-)
To use log messages for your issue tracker you can use two routes:
1. A post-commit hook that processes changes one at a time
2. A scheduled service that calls 'svn log -r <last-retrieved>:HEAD' every once in a while.
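For the second route, the periodic call might look something like this (the revision number and repository URL here are invented for illustration):

```
svn log -r 1234:HEAD http://svn.example.com/repo/trunk
```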
The last daily builds of SharpSvn have some support for commit hooks, but that part is not really api-stable yet.
You could create a post commit hook (post-commit.exe) with:
```
static void Main(string[] args)
{
SvnHookArguments ha;
if (!SvnHookArguments.ParseHookArguments(args, SvnHookType.PostCommit, false, out ha))
{
Console.Error.WriteLine("Invalid arguments");
Environment.Exit(1);
}
using (SvnLookClient cl = new SvnLookClient())
{
SvnChangeInfoEventArgs ci;
cl.GetChangeInfo(ha.LookOrigin, out ci);
// ci contains information on the commit e.g.
Console.WriteLine(ci.LogMessage); // Has log message
foreach(SvnChangeItem i in ci.ChangedPaths)
{
//
}
}
}
```
(For a complete solution you would also have to hook the post-revprop-change, as your users might change the log message after the first commit) | Is this of any use?
<http://blogs.open.collab.net/svn/2008/04/sharpsvn-brings.html> | How can I view the changes made after a revision is committed and parse it for comments? | [
"",
"c#",
"svn",
"sharpsvn",
""
] |
I've recently bumped into facelift, an alternative to sIFR and I was wondering if those who have experience with both sIFR and FLIR could shed some light on their experience with FLIR.
For those of you who've not yet read about how FLIR does it: FLIR works by taking the text from targeted elements using JavaScript, then making calls to a PHP app that uses PHP's GD to render and return transparent PNG images. These get placed as the background for the said element, the overflow is set to hidden, and padding equal to the element's dimensions is applied to effectively push the text out of view.
This is what I've figured so far:
* The good
+ No flash (+for mobiles)
+ FLIR won't break the layout
+ Images range from some 1KB (say one h3 sentence) to 8KB (very very large headline)
+ Good documentation
+ Easy to implement
+ Customizable selectors
+ Support for jQuery/prototype/scriptaculous/mooTools
+ FLIR has implemented cache
+ Browsers cache the images themselves!
* The bad
+ Text can't be selected
+ Requests are processed from all sources (you need to restrict FLIR yourself to process requests from your domain only)
My main concerns are, first, how well it scales, that is, how expensive it is to work with the GD library on a shared host (does anyone have experience with it?); and second, how search engines treat sIFR or FLIR implementations, knowing that a) the text isn't explicitly hidden and b) it renders only with a JavaScript engine. | Over the long term, sIFR should cache better because rendering is done on the client side, from one single Flash movie. Flash text acts more like browser text than an image, and it's easy to style the text within Flash (different colors, font weights, links, etc). You may also prefer the quality of text rendered in Flash, versus that rendered by the server side image library. Another advantage is that you don't need any server side code.
Google has stated that sIFR is OK, since it's replacing HTML text by the same text, but rendered differently. I'd say the same holds true for FLIR. | I know that with sIFR, and I assume with FLIR that you perform your markup in the same way as usual, but with an extra class tag or similar, so it can find the text to replace. Search engines will still read the markup as regular text so that shouldn't be an issue.
Performance-wise: if you're just using this for headings (and they're not headings which will change each page load), then the caching of the images in browsers, and also presumably on the server's disk should remove any worries about performance. Just make sure you set up your HTTP headers correctly! | sIFR or FLIR? | [
"",
"php",
"seo",
"sifr",
"gd",
"flir",
""
] |
I want to convert a string into a series of Keycodes, so that I can then send them via PostMessage to a control. I need to simulate actual keyboard input, and I'm wondering if a massive switch statement is the only way to convert a character into the correct keycode, or if there's a simpler method.
====
Got my solution - <http://msdn.microsoft.com/en-us/library/ms646329(VS.85).aspx>
VkKeyScan will return the correct keycode for any character.
(And yes, I wouldn't do this in general, but when doing automated testing, and making sure that keyboard presses are responded to correctly, it works reliably enough). | Got my solution - <http://msdn.microsoft.com/en-us/library/ms646329(VS.85).aspx>
VkKeyScan will return the correct keycode for any character.
(And yes, I wouldn't do this in general, but when doing automated testing, and making sure that keyboard presses are responded to correctly, it works reliably enough). | Raymond says this is a bad idea.
<http://blogs.msdn.com/oldnewthing/archive/2005/05/30/423202.aspx> | How do I convert a Char into a Keycode in .Net? | [
"",
"c#",
"winapi",
""
] |
Currently I'm using Visual Studio for writing code in C++. But it seems so heavyweight that I decided to switch to another one, preferably free and not as hard on system resources (I mean memory, of course) as VS, to learn libraries such as Boost and Qt. What compiler do you suggest? | Code::Blocks is exactly what you are after. You can download it here: <http://www.codeblocks.org/downloads/5>
Choose the version with the MinGW compiler bundled with it (the Windows port of GCC). You can switch between that and the VC++ compiler as and when you like.
Code::Blocks has all the stuff you want: debugger integration, code completion, class browser, todo list, etc. It even imports Visual C++ projects.
Don't use Dev C++ which has already been recommended. It's very very old and outdated. | I'd suggest using Visual Studio's compiler from the command-line. You get the same high-quality compiler, without the resource-hogging IDE.
Although the IDE is pretty good too, and probably worth the resources it uses. | Need a c++ compiler to work with libraries (boost, ...) | [
"",
"c++",
"compiler-construction",
""
] |
I am trying to use the actual numerical value for the month on a sql query to pull results. Is there any way to do this without having a function to change the numbers to actual month names, then back to month numbers? The following code works for Names, what works for numbers?
> datename(month,(convert(datetime,DTSTAMP)))=
> 'October' | month,(convert(datetime,DTSTAMP)) should do it, but why on earth are you not storing the data correctly as a datetime to begin with? All that additional conversion stuff to use the dates adds unnecessary load to your server and slows down your application. | [Datepart](http://msdn.microsoft.com/en-us/library/aa258265(SQL.80).aspx) is an alternative to the month command and it is more flexable as you can extract other parts of the date.
```
DATEPART(mm, convert(datetime,DTSTAMP))
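-- e.g. (sketch) filtering on the month number directly, October = 10:
-- WHERE DATEPART(mm, convert(datetime, DTSTAMP)) = 10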
``` | Is there a way to search a SQL Query with Month Numbers, and not Month Name? | [
"",
"sql",
"mysql",
"sql-server",
""
] |
OK, I have a simple scenario:
I have two pages:
a login page and a welcome page.
I'm using FormsAuthentication with my own table that has four columns: ID, UserName, Password, FullName.
When login is pressed I'm setting my username like:
```
FormsAuthentication.SetAuthCookie(userName, rememberMe ?? false);
```
On the welcome page I can't simply use:
```
Page.User.Identity.Name
```
to show which user is currently logged in, because I don't want to use the username like all the examples on the <http://asp.net> web site do; I want to use the FullName field.
I think that always going to the DB to request the full name when the page loads is crazy, and I don't like using Sessions or a simple cookie. Maybe the FormsAuth provider has custom fields for this? | I would store the user's full name in the session cookie after your call to FormsAuth
```
FormsAuth.SetAuthCookie(userName, rememberme);
// get the full name (ex "John Doe") from the datbase here during login
string fullName = "John Doe";
Response.Cookies["FullName"].Value = fullName;
Response.Cookies["FullName"].Expires = DateTime.Now.AddDays(30);
```
and then retrieve it in your view pages via:
```
string fullName = HttpContext.Current.Request.Cookies["FullName"].Value
``` | Forms authentication works using cookies. You could construct your own auth cookie and put the full name in it, but I think I would go with putting it into the session. If you use a cookie of any sort, you'll need to extract the name from it each time. Tying it to the session seems more natural and makes it easy for you to access. I agree that it seems a waste to go back to the DB every time and I would certainly cache the value somewhere.
Info on constructing your own forms authentication cookie can be found [here](http://msdn.microsoft.com/en-us/library/aa480476.aspx). | FormsAuthentication after login | [
"",
"c#",
"asp.net",
"asp.net-mvc",
""
] |
I was shocked to find out today that C# does not support dynamic sized arrays. How then does a [VB.NET](http://en.wikipedia.org/wiki/Visual_Basic_.NET) developer used to using [ReDim Preserve](http://msdn.microsoft.com/en-us/library/w8k3cys2.aspx) deal with this in C#?
At the beginning of the function I am not sure of the upper bound of the array. This depends on the rows returned from the database. | Use ArrayLists or Generics instead | VB.NET doesn't have the idea of dynamically sized arrays, either - the CLR doesn't support it.
The equivalent of "Redim Preserve" is [`Array.Resize<T>`](http://msdn.microsoft.com/en-us/library/bb348051.aspx) - but you *must* be aware that if there are other references to the original array, they won't be changed at all. For example:
```
using System;
class Foo
{
static void Main()
{
string[] x = new string[10];
string[] y = x;
Array.Resize(ref x, 20);
Console.WriteLine(x.Length); // Prints out 20
Console.WriteLine(y.Length); // Still prints out 10
}
}
```
Proof that this is the equivalent of Redim Preserve:
```
Imports System
Class Foo
Shared Sub Main()
Dim x(9) as String
Dim y as String() = x
Redim Preserve x(19)
Console.WriteLine(x.Length)
Console.WriteLine(y.Length)
End Sub
End Class
```
The two programs are equivalent.
If you truly want a dynamically sized collection, you should use `List<T>` (or something similar). There are various issues with using arrays directly - see [Eric Lippert's blog post](http://blogs.msdn.com/ericlippert/archive/2008/09/22/arrays-considered-somewhat-harmful.aspx) for details. That's not to say you should always avoid them, by any means - but you need to know what you're dealing with. | Redim Preserve in C#? | [
"",
"c#",
"arrays",
""
] |
How do I get my Python program to sleep for 50 milliseconds? | Use [`time.sleep()`](https://docs.python.org/library/time.html#time.sleep)
```
from time import sleep
sleep(0.05)
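# (added sketch, not from the original answer) sleep() guarantees at
# least roughly the requested delay, not an exact one; you can measure
# the delay actually obtained:
import time
_t0 = time.perf_counter()
time.sleep(0.05)
elapsed_ms = (time.perf_counter() - _t0) * 1000.0
print("slept for about %.1f ms" % elapsed_ms)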
``` | Note that if you rely on sleep taking *exactly* 50 ms, you won't get that. It will just be about it. | How do I get my program to sleep for 50 milliseconds? | [
"",
"python",
"timer",
"sleep",
""
] |
I'm trying to write some C# code that calls a method from an unmanaged DLL. The prototype for the function in the dll is:
```
extern "C" __declspec(dllexport) char *foo(void);
```
In C#, I first used:
```
[DllImport(_dllLocation)]
public static extern string foo();
```
It seems to work on the surface, but I'm getting memory corruption errors during runtime. I think I'm pointing to memory that happens to be correct, but has already been freed.
I tried using a PInvoke code gen utility called "P/Invoke Interop Assistant". It gave me the output:
```
[System.Runtime.InteropServices.DLLImportAttribute(_dllLocation, EntryPoint = "foo")]
public static extern System.IntPtr foo();
```
Is this correct? If so, how do I convert this IntPtr to a string in C#? | You must return this as an IntPtr. Returning a System.String type from a PInvoke function requires great care. The CLR must transfer the memory from the native representation into the managed one. This is an easy and predictable operation.
The problem though comes with what to do with the native memory that was returned from `foo()`. The CLR assumes the following two items about a PInvoke function which directly returns the string type
1. The native memory needs to be freed
2. The native memory was allocated with CoTaskMemAlloc
Therefore it will marshal the string and then call `CoTaskMemFree(...)` on the native memory blob. Unless you actually allocated this memory with CoTaskMemAlloc this will at best cause a crash in your application.
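To make the direct `string` return safe, the native side would therefore have to allocate the buffer with CoTaskMemAlloc. A sketch of such a `foo` (Windows-only, purely illustrative; the message text is made up):

```
// illustrative native export whose return value the CLR may safely
// free with CoTaskMemFree after marshaling
#include <objbase.h>
#include <string.h>

extern "C" __declspec(dllexport) char *foo(void)
{
    const char *msg = "hello from native code";
    char *buf = (char *)CoTaskMemAlloc(strlen(msg) + 1);
    if (buf != NULL)
        strcpy(buf, msg);
    return buf;
}
```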
In order to get the correct semantics here you must return an IntPtr directly. Then use Marshal.PtrToString\* in order to get to a managed String value. You may still need to free the native memory but that will depend upon the implementation of foo. | You can use the Marshal.PtrToStringAuto method.
```
IntPtr ptr = foo();
string str = Marshal.PtrToStringAuto(ptr);
``` | PInvoke for C function that returns char * | [
"",
"c#",
"pinvoke",
""
] |
Academia has it that table names should be the singular of the entity that they store attributes of.
I dislike any T-SQL that requires square brackets around names, but I have renamed a `Users` table to the singular, forever sentencing those using the table to sometimes have to use brackets.
My gut feel is that it is more correct to stay with the singular, but my gut feel is also that brackets indicate undesirables like column names with spaces in them etc.
Should I stay, or should I go? | Others have given pretty good answers as far as "standards" go, but I just wanted to add this... Is it possible that "User" (or "Users") is not actually a full description of the data held in the table? Not that you should get too crazy with table names and specificity, but perhaps something like "Widget\_Users" (where "Widget" is the name of your application or website) would be more appropriate. | I had same question, and after reading all answers here I definitely stay with ***SINGULAR***, reasons:
~~**Reason 1** (Concept). You can think of bag containing apples like "AppleBag", it doesn't matter if contains 0, 1 or a million apples, it is always the same bag. Tables are just that, containers, the table name must describe what it contains, not how much data it contains. Additionally, the plural concept is more about a spoken language one (actually to determine whether there is one or more).~~
**Reason 2**. (Convenience). It is easier to come up with singular names than with plural ones. Objects can have irregular plurals or no plural at all, but will always have a singular one (with few exceptions like News).
* Customer
* Order
* User
* Status
* News
**Reason 3**. (Aesthetic and Order). Specially in master-detail scenarios, this reads better, aligns better by name, and have more logical order (Master first, Detail second):
* 1.Order
* 2.OrderDetail
Compared to:
* 1.OrderDetails
* 2.Orders
**Reason 4** (Simplicity). Put all together, Table Names, Primary Keys, Relationships, Entity Classes... is better to be aware of only one name (singular) instead of two (singular class, plural table, singular field, singular-plural master-detail...)
* `Customer`
* `Customer.CustomerID`
* `CustomerAddress`
* `public Class Customer {...}`
* `SELECT FROM Customer WHERE CustomerID = 100`
Once you know you are dealing with "Customer", you can be sure you will use the same word for all of your database interaction needs.
**Reason 5**. (Globalization). The world is getting smaller, you may have a team of different nationalities, not everybody has English as a native language. It would be easier for a non-native English language programmer to think of "Repository" than of "Repositories", or "Status" instead of "Statuses". Having singular names can lead to fewer errors caused by typos, save time by not having to think "is it Child or Children?", hence improving productivity.
**Reason 6**. (Why not?). It can even save you writing time, save you disk space, and even make your computer keyboard last longer!
* `SELECT Customer.CustomerName FROM Customer WHERE Customer.CustomerID = 100`
* `SELECT Customers.CustomerName FROM Customers WHERE Customers.CustomerID = 103`
You have saved 3 letters, 3 bytes, 3 extra keyboard hits :)
And finally, you can name those ones messing up with reserved names like:
* User > LoginUser, AppUser, SystemUser, CMSUser,...
Or use the infamous square brackets [User] | Table Naming Dilemma: Singular vs. Plural Names | [
"",
"sql",
"sql-server",
"naming-conventions",
""
] |
We have a large (about 580,000 loc) application which in Delphi 2006 builds (on my machine) in around 20 seconds. When you have build times in seconds, you tend to use the compiler as a tool. i.e. write a little code, build, write some more code and build some more etc etc As we move some of our stuff over to C#, does anyone have a comparison of how long something that size would take to build? I only have small apps and components at the moment, so can't really compare. If things are going to take a lot longer to build, then I may need to change my style! Or is my style just lazy?
For example, if I'm changing the interface of a method call, rather than do a full search on all the app to find out where I need to make changes to calls, I'll use the compiler to find them for me. | Visual Studio 2008 SP1 now has background compilation for C# (it's always had it for VB.NET). Back in my VB days, I often used this to find where something was referenced by changing the name and then seeing where the background compiler said there was an error.
I never worked on anything quite this large. At my last job we had about 60,000 loc spread over about 15 projects and it took about 10 seconds to compile. Maybe someone else can post a slightly larger case study | I used to use the compiler as you describe, but since I've been using [ReSharper](http://www.jetbrains.com/resharper) I do this a lot less.
Also, for things like rename, the refactoring support (both in Visual Studio 2005 upwards and, even better, from ReSharper) mean I don't have to do search + replace to rename things. | Do you use regular builds as a coding tool? | [
"",
"c#",
"performance",
"delphi",
"compiler-construction",
""
] |
I've built the x86 Boost libraries many times, but I can't seem to build x64 libraries. I start the "Visual Studio 2005 x64 Cross Tools Command Prompt" and run my usual build:
```
bjam --toolset=msvc --build-type=complete --build-dir=c:\build install
```
But it still produces x86 .lib files (I verified this with dumpbin /headers).
What am I doing wrong? | You need to add the `address-model=64` parameter.
Look e.g. [here](http://devsql.blogspot.com/2007/05/building-boost-134-for-x86-x64-and-ia64.html). | The accepted answer is correct. Adding this in case somebody else googles this answer and still fails to produce x64 version.
Following is what I had to do to build Boost 1.63 on Visual Studio 15 2017 Community Edition.
Commands executed from VS environment cmd shell. Tools -> Visual Studio Command Prompt
```
C:\Work\Boost_1_63> C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvarsall.bat amd64
C:\Work\Boost_1_63> bootstrap.bat
C:\Work\Boost_1_63> bjam -j4 architecture=x86 address-model=64 link=static stage
C:\Work\Boost_1_63> bjam --prefix=C:\opt\boost architecture=x86 address-model=64 link=static install
```
You can verify that the resulting .lib is x64 with dumpbin:
```
C:\Work> dumpbin /headers C:\work\boost_1_63\stage\lib\libboost_locale-vc140-mt-1_63.lib | findstr machine
8664 machine (x64)
8664 machine (x64)
8664 machine (x64)
8664 machine (x64)
...
``` | How do you build the x64 Boost libraries on Windows? | [
"",
"c++",
"visual-studio-2005",
"boost",
"64-bit",
"boost-build",
""
] |
If I am posting a question about a query against an Oracle database, what should I include in my question so that people have a chance to answer me? How should I get this information?
Simply providing the poorly performing query may not be enough. | Ideally, get the full query plan using DBMS\_XPLAN.DISPLAY\_CURSOR using the sql\_id and child\_cursor\_id from v$sql. Failing that (ie on older versions), try v$sql\_plan and include filter and access predicates. EXPLAIN PLAN is fine if it actually shows the plan that was used.
DB version and edition (Express/Standard/Enterprise). Maybe the OS too.
`SELECT * FROM V$VERSION`
If you have any non-standard database parameters, it is useful to know (especially anything optimizer related).
`select * from v$parameter where rownum < 5 and isdefault != 'TRUE';`
*If you do an `alter session set events '10053 trace name context forever, level 1'` and parse a query, there'll be a log file that will include all the parameters used when optimizing the query.*
Real world table sizes and column distributions (eg it is a million row table, with 30% of rows being "Red" etc). And the relevant stats off USER\_TABLES, USER\_TAB\_COLUMNS.
How long it actually took, plus any SQL stats you have available (consistent gets, physical reads) from v$sql.
Also, why do you THINK it should be able to run faster? Do you think there's a better plan, or are you just crossing your fingers? | * The schema definition of the tables involved.
* The indexes defined on those tables.
* The query you are executing.
* The resulting query execution plan | If I'm posting a question about Oracle SQL query performance, what should I include in my question? | [
"",
"sql",
"performance",
"oracle",
"database-design",
""
] |
```
SaveFileDialog savefileDialog1 = new SaveFileDialog();
DialogResult result = savefileDialog1.ShowDialog();
switch (result == DialogResult.OK)
{
    case true:
        // do something
        break;
    case false:
        MessageBox.Show("are you sure?", "", MessageBoxButtons.YesNo, MessageBoxIcon.Question);
        break;
}
```
How do I show the message box over the Save dialog after clicking "Cancel" on the Save dialog, i.e. with the Save dialog still present in the background? | If the reason for needing the message box on Cancel of the File Save dialogue is because you're shutting things down with unsaved changes, then I suggest putting the call to the File Save dialogue in a loop that keeps going until a flag is set to stop the loop, and calling the message box if you don't get OK as the result. For example:
```
// lead-up code
SaveFileDialog sft = new SaveFileDialog();
bool bDone = false;
do
{
if (DialogResult.OK == sft.ShowDialog())
bDone = true;
else
{
DialogResult result = MessageBox.Show("Are you sure you don't want to save the changed file?", "", MessageBoxButtons.YesNo, MessageBoxIcon.Question);
bDone = (result == DialogResult.Yes);
}
} while (!bDone);
// carry on
```
This way, the File Save dialogue behaves consistently with the way it does in other in Windows apps, *and* you get to let the user have another go at saving the file(s) if he accidentally hits Cancel in the File Save dialogue. | You can't do that with `SaveFileDialog` class. | Show messagebox over Save dialog in C# | [
"",
"c#",
"winforms",
""
] |
I've got a simple class that inherits from Collection and adds a couple of properties. I need to serialize this class to XML, but the XMLSerializer ignores my additional properties.
I assume this is because of the special treatment that XMLSerializer gives ICollection and IEnumerable objects. What's the best way around this?
Here's some sample code:
```
using System.Collections.ObjectModel;
using System.IO;
using System.Xml.Serialization;
namespace SerialiseCollection
{
class Program
{
static void Main(string[] args)
{
var c = new MyCollection();
c.Add("Hello");
c.Add("Goodbye");
var serializer = new XmlSerializer(typeof(MyCollection));
using (var writer = new StreamWriter("test.xml"))
serializer.Serialize(writer, c);
}
}
[XmlRoot("MyCollection")]
public class MyCollection : Collection<string>
{
[XmlAttribute()]
public string MyAttribute { get; set; }
public MyCollection()
{
this.MyAttribute = "SerializeThis";
}
}
}
```
This outputs the following XML (note MyAttribute is missing in the MyCollection element):
```
<?xml version="1.0" encoding="utf-8"?>
<MyCollection xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<string>Hello</string>
<string>Goodbye</string>
</MyCollection>
```
What I *want* is
```
<MyCollection MyAttribute="SerializeThis"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<string>Hello</string>
<string>Goodbye</string>
</MyCollection>
```
Any ideas? The simpler the better. Thanks. | Collections generally don't make good places for extra properties. Both during serialization and in data-binding, they will be ignored if the item looks like a collection (`IList`, `IEnumerable`, etc - depending on the scenario).
If it was me, I would encapsulate the collection - i.e.
```
[Serializable]
public class MyCollectionWrapper {
[XmlAttribute]
public string SomeProp {get;set;} // custom props etc
[XmlAttribute]
public int SomeOtherProp {get;set;} // custom props etc
public Collection<string> Items {get;set;} // the items
}
```
The other option is to implement `IXmlSerializable` (quite a lot of work), but that still won't work for data-binding etc. Basically, this isn't the expected usage. | If you do encapsulate, as Marc Gravell suggests, the beginning of this post explains how to get your XML to look exactly like you describe.
<http://blogs.msdn.com/youssefm/archive/2009/06/12/customizing-the-xml-for-collections-with-xmlserializer-and-datacontractserializer.aspx>
That is, instead of this:
```
<MyCollection MyAttribute="SerializeThis"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<Items>
<string>Hello</string>
<string>Goodbye</string>
<Items>
</MyCollection>
```
You can have this:
```
<MyCollection MyAttribute="SerializeThis"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<string>Hello</string>
<string>Goodbye</string>
</MyCollection>
``` | XmlSerialize a custom collection with an Attribute | [
"",
"c#",
".net",
"xml-serialization",
""
] |
In your workplace, where do you store your common, non-database specific scripts that you use in SQL Server? Do you keep them in .SQL scripts on the file server, do you store them in the Master database, or do you keep them in a database you defined specifically for these kinds of things? | We store them as regular source code, so in version-control.
You then have previous versions of each script available, and you avoid the "someone deleted the XY script" risk. | We store them in a wiki where everyone can access them. | Where do you keep your common sql task scripts? | [
"",
"sql",
"sql-server",
"scripting",
""
] |
I keep seeing people say that exceptions are slow, but I never see any proof. So, instead of asking if they are, I will ask how do exceptions work behind the scenes, so I can make decisions of when to use them and whether they are slow.
From what I know, exceptions are the same as doing a return a bunch of times, except that it also checks after each return whether it needs to do another one or to stop. How does it check when to stop returning? I guess there is a second stack that holds the type of the exception and a stack location; it then does returns until it gets there. I am also guessing that the only time this second stack is touched is on a throw and on each try/catch. AFAICT implementing a similar behaviour with return codes would take the same amount of time. But this is all just a guess, so I want to know what really happens.
How do exceptions really work? | Instead of guessing, I decided to actually look at the generated code with a small piece of C++ code and a somewhat old Linux install.
```
class MyException
{
public:
MyException() { }
~MyException() { }
};
void my_throwing_function(bool throwit)
{
if (throwit)
throw MyException();
}
void another_function();
void log(unsigned count);
void my_catching_function()
{
log(0);
try
{
log(1);
another_function();
log(2);
}
catch (const MyException& e)
{
log(3);
}
log(4);
}
```
I compiled it with `g++ -m32 -W -Wall -O3 -save-temps -c`, and looked at the generated assembly file.
```
.file "foo.cpp"
.section .text._ZN11MyExceptionD1Ev,"axG",@progbits,_ZN11MyExceptionD1Ev,comdat
.align 2
.p2align 4,,15
.weak _ZN11MyExceptionD1Ev
.type _ZN11MyExceptionD1Ev, @function
_ZN11MyExceptionD1Ev:
.LFB7:
pushl %ebp
.LCFI0:
movl %esp, %ebp
.LCFI1:
popl %ebp
ret
.LFE7:
.size _ZN11MyExceptionD1Ev, .-_ZN11MyExceptionD1Ev
```
`_ZN11MyExceptionD1Ev` is `MyException::~MyException()`, so the compiler decided it needed a non-inline copy of the destructor.
```
.globl __gxx_personality_v0
.globl _Unwind_Resume
.text
.align 2
.p2align 4,,15
.globl _Z20my_catching_functionv
.type _Z20my_catching_functionv, @function
_Z20my_catching_functionv:
.LFB9:
pushl %ebp
.LCFI2:
movl %esp, %ebp
.LCFI3:
pushl %ebx
.LCFI4:
subl $20, %esp
.LCFI5:
movl $0, (%esp)
.LEHB0:
call _Z3logj
.LEHE0:
movl $1, (%esp)
.LEHB1:
call _Z3logj
call _Z16another_functionv
movl $2, (%esp)
call _Z3logj
.LEHE1:
.L5:
movl $4, (%esp)
.LEHB2:
call _Z3logj
addl $20, %esp
popl %ebx
popl %ebp
ret
.L12:
subl $1, %edx
movl %eax, %ebx
je .L16
.L14:
movl %ebx, (%esp)
call _Unwind_Resume
.LEHE2:
.L16:
.L6:
movl %eax, (%esp)
call __cxa_begin_catch
movl $3, (%esp)
.LEHB3:
call _Z3logj
.LEHE3:
call __cxa_end_catch
.p2align 4,,3
jmp .L5
.L11:
.L8:
movl %eax, %ebx
.p2align 4,,6
call __cxa_end_catch
.p2align 4,,6
jmp .L14
.LFE9:
.size _Z20my_catching_functionv, .-_Z20my_catching_functionv
.section .gcc_except_table,"a",@progbits
.align 4
.LLSDA9:
.byte 0xff
.byte 0x0
.uleb128 .LLSDATT9-.LLSDATTD9
.LLSDATTD9:
.byte 0x1
.uleb128 .LLSDACSE9-.LLSDACSB9
.LLSDACSB9:
.uleb128 .LEHB0-.LFB9
.uleb128 .LEHE0-.LEHB0
.uleb128 0x0
.uleb128 0x0
.uleb128 .LEHB1-.LFB9
.uleb128 .LEHE1-.LEHB1
.uleb128 .L12-.LFB9
.uleb128 0x1
.uleb128 .LEHB2-.LFB9
.uleb128 .LEHE2-.LEHB2
.uleb128 0x0
.uleb128 0x0
.uleb128 .LEHB3-.LFB9
.uleb128 .LEHE3-.LEHB3
.uleb128 .L11-.LFB9
.uleb128 0x0
.LLSDACSE9:
.byte 0x1
.byte 0x0
.align 4
.long _ZTI11MyException
.LLSDATT9:
```
Surprise! There are no extra instructions at all on the normal code path. The compiler instead generated extra out-of-line fixup code blocks, referenced via a table at the end of the function (which is actually put on a separate section of the executable). All the work is done behind the scenes by the standard library, based on these tables (`_ZTI11MyException` is `typeinfo for MyException`).
OK, that was not actually a surprise for me, I already knew how this compiler did it. Continuing with the assembly output:
```
.text
.align 2
.p2align 4,,15
.globl _Z20my_throwing_functionb
.type _Z20my_throwing_functionb, @function
_Z20my_throwing_functionb:
.LFB8:
pushl %ebp
.LCFI6:
movl %esp, %ebp
.LCFI7:
subl $24, %esp
.LCFI8:
cmpb $0, 8(%ebp)
jne .L21
leave
ret
.L21:
movl $1, (%esp)
call __cxa_allocate_exception
movl $_ZN11MyExceptionD1Ev, 8(%esp)
movl $_ZTI11MyException, 4(%esp)
movl %eax, (%esp)
call __cxa_throw
.LFE8:
.size _Z20my_throwing_functionb, .-_Z20my_throwing_functionb
```
Here we see the code for throwing an exception. While there was no extra overhead simply because an exception might be thrown, there is obviously a lot of overhead in actually throwing and catching an exception. Most of it is hidden within `__cxa_throw`, which must:
* Walk the stack with the help of the exception tables until it finds a handler for that exception.
* Unwind the stack until it gets to that handler.
* Actually call the handler.
Compare that with the cost of simply returning a value, and you see why exceptions should be used only for exceptional returns.
To finish, the rest of the assembly file:
```
.weak _ZTI11MyException
.section .rodata._ZTI11MyException,"aG",@progbits,_ZTI11MyException,comdat
.align 4
.type _ZTI11MyException, @object
.size _ZTI11MyException, 8
_ZTI11MyException:
.long _ZTVN10__cxxabiv117__class_type_infoE+8
.long _ZTS11MyException
.weak _ZTS11MyException
.section .rodata._ZTS11MyException,"aG",@progbits,_ZTS11MyException,comdat
.type _ZTS11MyException, @object
.size _ZTS11MyException, 14
_ZTS11MyException:
.string "11MyException"
```
The typeinfo data.
```
.section .eh_frame,"a",@progbits
.Lframe1:
.long .LECIE1-.LSCIE1
.LSCIE1:
.long 0x0
.byte 0x1
.string "zPL"
.uleb128 0x1
.sleb128 -4
.byte 0x8
.uleb128 0x6
.byte 0x0
.long __gxx_personality_v0
.byte 0x0
.byte 0xc
.uleb128 0x4
.uleb128 0x4
.byte 0x88
.uleb128 0x1
.align 4
.LECIE1:
.LSFDE3:
.long .LEFDE3-.LASFDE3
.LASFDE3:
.long .LASFDE3-.Lframe1
.long .LFB9
.long .LFE9-.LFB9
.uleb128 0x4
.long .LLSDA9
.byte 0x4
.long .LCFI2-.LFB9
.byte 0xe
.uleb128 0x8
.byte 0x85
.uleb128 0x2
.byte 0x4
.long .LCFI3-.LCFI2
.byte 0xd
.uleb128 0x5
.byte 0x4
.long .LCFI5-.LCFI3
.byte 0x83
.uleb128 0x3
.align 4
.LEFDE3:
.LSFDE5:
.long .LEFDE5-.LASFDE5
.LASFDE5:
.long .LASFDE5-.Lframe1
.long .LFB8
.long .LFE8-.LFB8
.uleb128 0x4
.long 0x0
.byte 0x4
.long .LCFI6-.LFB8
.byte 0xe
.uleb128 0x8
.byte 0x85
.uleb128 0x2
.byte 0x4
.long .LCFI7-.LCFI6
.byte 0xd
.uleb128 0x5
.align 4
.LEFDE5:
.ident "GCC: (GNU) 4.1.2 (Ubuntu 4.1.2-0ubuntu4)"
.section .note.GNU-stack,"",@progbits
```
Even more exception handling tables, and assorted extra information.
So, the conclusion, at least for GCC on Linux: the cost is extra space (for the handlers and tables) whether or not exceptions are thrown, plus the extra cost of parsing the tables and executing the handlers when an exception is thrown. If you use exceptions instead of error codes, and an error is rare, it can be *faster*, since you do not have the overhead of testing for errors anymore.
In case you want more information, in particular what all the `__cxa_` functions do, see the original specification they came from:
* [Itanium C++ ABI](https://itanium-cxx-abi.github.io/cxx-abi/) | Exceptions being slow **was** true in the old days.
In most modern compilers this no longer holds true.
Note: Just because we have exceptions does not mean we do not use error codes as well. When an error can be handled locally, use error codes. When errors require more context for correction, use exceptions. I wrote about it much more eloquently here: [What are the principles guiding your exception handling policy?](https://stackoverflow.com/questions/106586/what-are-the-principles-guiding-your-exception-handling-policy#106749)
The cost of exception handling code when no exceptions are being used is practically zero.
When an exception is thrown there is some work done.
But you have to compare this against the cost of returning error codes and checking them all the way back to the point where the error can be handled. Both are more time-consuming to write and maintain.
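That trade-off can be sketched in a few lines; here is a language-agnostic illustration in Python (all function names are invented for the example), showing the per-caller checking that error codes force versus the check-free happy path of exceptions.

```python
# Error-code style: every caller must check for the sentinel and propagate it.
def parse_port_errcode(text):
    if not text.isdigit():
        return None              # sentinel "error code"
    return int(text)

def load_config_errcode(lines):
    ports = []
    for line in lines:
        port = parse_port_errcode(line)
        if port is None:         # a check at every level, all the way up
            return None
        ports.append(port)
    return ports

# Exception style: the happy path carries no checks; a failure unwinds
# directly to whichever caller is able to handle it.
def parse_port(text):
    if not text.isdigit():
        raise ValueError("bad port: %r" % (text,))
    return int(text)

def load_config(lines):
    return [parse_port(line) for line in lines]
```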
Also there is one gotcha for novices:
Though exception objects are supposed to be small, some people put lots of stuff inside them. Then you have the cost of copying the exception object. The solution there is twofold:
* Don't put extra stuff in your exception.
* Catch by const reference.
In my opinion, I would bet that the same code with exceptions is either more efficient than, or at least comparable to, the code without exceptions (which has all the extra code to check function error results). Remember, you are not getting anything for free: the compiler is generating the code you should have written in the first place to check error codes (and usually the compiler is much more efficient than a human). | How do exceptions work (behind the scenes) in c++ | [
"c++",
"performance",
"exception",
"throw",
"try-catch"
] |
We have a large enterprise consisting of many apps both old and new backed by Oracle 10G. When blocking sessions or deadlocks occur and we get trace reports it would be super handy to be able to embed some details in the SQL so that we know both what app it came from, and specifically where it was executed. In addition to helping speed up the resolution time, it can help us find places where business processes were stepping on each other. We have a few thoughts on how this could be achieved, but I am curious if this is already a solved problem. Any thoughts or details on any successful (or otherwise) attempts would be appreciated. | You can tag chunks of SQL being executed via the Oracle [DBMS\_APPLICATION\_INFO](http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28419/d_appinf.htm) package. This works in any application language, works within the database engine itself, and does not clutter your SQL statements with comments (smart comments are evil anyway). More importantly, Oracle's various tools recognize the data and help you use it when troubleshooting or tuning. | We dynamically modify our SQL statements so that the command which executed them is in a comment at the start of the query. This works because we do our own transaction management and have a strict framework. But the base code is simple (in Java... not sure how other languages will deal with this):
```
String sql = "SELECT * FROM USERS WHERE ID = ?";
Connection con = getConnection();
PreparedStatement ps = con.prepareStatement(getComment() + sql);
// etc
String getComment() {
return " /* " + getCommandName() + " */ ";
}
```
The JDBC driver passes the comment through intact and it shows up in the database when the DBAs are examining queries. Our command architecture maintains a thread-local stack of command names for this purpose. Also our connection factory wraps the JDBC connection with our own connection object so that this code is present even if people program against the bare-metal Connection instance, instead of using the friendly helper methods we normally use. | Tagging sql statements for tracing and debugging | [
"sql",
"oracle",
"debugging",
"tags",
"trace"
] |
I need to delete a temporary file from my C++ windows application (developed in Borland C++ Builder). Currently I use a simple:
```
system("del tempfile.tmp");
```
This causes a console window to flash in front of my app and it doesn't look very professional. How do I do this without the console window? | It sounds like you need the Win32 function [DeleteFile](http://msdn.microsoft.com/en-us/library/aa363915(VS.85).aspx)(). You will need to `#include <windows.h>` to use it. | Or, even the standard C library function `int remove( const char *path );`. | How to delete a file from a C++ app without console window in Windows? | [
"c++",
"windows",
"winapi",
"file-io"
] |
This sure seems like a simple error to fix. However, I somehow can't figure it out. I found some posts here and went through them and couldn't get it to work. It sure seems like a famous error.
The error I'm getting is: "Unable to start debugging on the web server. An authentication error occurred while communicating with the web server."
I'm running VS2k8 on the Server 2k3sp2 web server. I've verified debugging is set in the web.config and verified HTTP keep-alives are set in IIS. I've set the web to run using integrated authentication. I've even tried to attach to the process. However, my breakpoints say that the running code and the current code don't match, even though I compiled the web and ran it... Sounds like I'm attached to the wrong process, but there is only one w3wp process there to attach to, which should be just my process.
I even did an aspnet\_regiis. Still notta.
Thoughts? | After looking around, I noticed there were 537 - 'An error occurred during logon' security events every time I tried to launch in debug mode. I found this article on M$-
<http://support.microsoft.com/?id=896861>
After setting a key in the reg to disable the loopback check per the article and a reboot, problem solved! Since it's a dev box on the inside, shouldn't be a security risk.
Thanks for the help!!! | Make sure to check that in IIS, the authentication mode is set properly. If you're on a Domain, make sure to note such, and make sure that Anonymous access is checked (if required for your application).
[This MSDN Link](http://msdn.microsoft.com/en-us/library/aa292114(VS.71).aspx) has more details.
There are a few other things you can try as well:
* [IIS 7.0 Debugging Hotfix](http://blogs.msdn.com/webdevtools/archive/2007/06/20/downloadable-hotifx-f5-debugging-using-iis-fails-on-vista-home-editions.aspx)
* [IIS 7.0 Issues](http://mvolo.com/blogs/serverside/archive/2006/12/28/Fix-problems-with-Visual-Studio-F5-debugging-of-ASP.NET-applications-on-IIS7-Vista.aspx)
* [Other things to Try](http://social.msdn.microsoft.com/forums/en-US/asmxandxml/thread/bf530e90-5d42-4909-a858-6be0ae28717d/)
Let me know how that works out for you. | Getting 'unable to start debugging' 'An authentication error occurred while...' | [
"c#",
"asp.net",
"debugging"
] |
I'd like to be able to detect Vista IE7 Protected Mode within a page using javascript, preferably. My thinking is to perform an action that would violate protected mode, thus exposing it. The goal is to give appropriate site help messaging to IE7 Vista users. | What are you trying to accomplish that is substantially different for protected users? I've seen some window popup issues, but otherwise, clean JavaScript tends to be less affected. If you're finding that a block of code won't execute, why not do a check after attempted execution to see if the document is the state you expect, and alert if not.
If it's it using ActiveX, MS has a Protected Mode API: <http://msdn.microsoft.com/en-us/library/ms537316.aspx>
Protected mode is customizable, so you don't really have a reliable way of detecting it based on action violation. And many users disable UAC, so detecting via version won't work either (not to mention you may have the same headache with IE 8. Always best to test if something worked instead of assume by version.) | I reviewed the MSDN documentation of the [Protected Mode API](http://msdn.microsoft.com/en-us/library/ms537316) from [6t7m4](https://stackoverflow.com/users/42608/meyahoocoma6t7m4awxtoekdfhpitmg2nk224sinfwk3st), and if I could call the ieframe.dll from javascript, such as:
```
var axo = new ActiveXObject("ieframe.dll");
```
I would be able to access the `IEIsProtectedModeProcess()` function, but I believe it's inaccessible for security reasons. | Detecting Vista IE7 Protected Mode with Javascript | [
"javascript",
"windows-vista",
"internet-explorer-7",
"protected",
"mode"
] |
Can we use Java in Silverlight? | In short: No. Silverlight only supports [.NET languages](http://en.wikipedia.org/wiki/.NET_languages), such as Visual Basic, C#, Managed JavaScript, IronPython and IronRuby.
However, J# or IKVM.NET could be of use to you. | According to [Wikipedia - Future of J#](http://en.wikipedia.org/wiki/J_Sharp#Future_of_J.23), Microsoft's own major Java support is soon to be retired.
Since J# will get removed from the full .NET CLR itself...
I don't think there is much hope for Silverlight. | Java in Silverlight? | [
"java",
"silverlight"
] |
Modern browsers have a multi-tab interface, but the JavaScript function `window.showModalDialog()` creates a modal dialog that blocks *all* of the tabs.
I'd like to know if there is a way to create a modal dialog that blocks only the tab it's been created in? | You could use one of the more 'Ajax-like' modal dialogs, which are just absolute positioned divs, floating on top of everything else.
Those are modal to the 'document' and not the browser.
For instance, take a look at [this jQuery plugin](http://www.ericmmartin.com/projects/simplemodal/)
P.S. `showModalDialog()` is an IE only call, so you might want to not use that altogether. | Nope. It's conceivable IE8's ‘loose coupling’ might behave like this at some point, but it doesn't in the current betas.
I second Michiel's recommendation. A pseudo-modal-dialogue working by obscuring the rest of the page, floating a div on top, and calling the script back when it's finished, is both:
* much more usable/less annoying than a real modal dialog
* compatible with any browser
showModalDialog/showModelessDialog themselves are generally to be avoided. | Can a JavaScript modal dialog be modal only to the tab and not to the whole browser window? | [
"javascript",
"modal-dialog"
] |
In F#, you can generate a set of numbers, just by saying [1..100].
I want to do something similar in C#. This is what I have come up with so far:
```
public static int[] To(this int start, int end)
{
var result = new List<int>();
for(int i = start; i <= end; i++)
result.Add(i);
return result.ToArray();
}
```
By doing this, I can now create a set by saying 1.To(100)
Unfortunately, this is not nearly as readable as [1..100]. Has anyone come up with a better way to do this in C#? Is it more readable if it is lowercase? 1.to(100), for instance? Or, is "To" a bad word? Is something like 1.Through(100) more readable?
Just looking for some thoughts. Has anyone else come up with a more elegant solution?
**EDIT:**
After reading the responses, I have re-written my To method using the range:
```
public static int[] To(this int start, int end)
{
return Enumerable.Range(start, end - start + 1).ToArray();
}
```
I am still looking for thoughts on the readability of 1.To(100) | Enumerable.Range(1, 100); | I like the idea of using `To`. The alternative `Enumerable.Range` has a subtle flaw imo. The second parameter is **not** the *value* of the last element, it is the *length* of the enumeration. This is what I've done in the past:
```
public static IEnumerable<int> To(this int start, int stop)
{
while (start <= stop)
yield return start++;
}
```
EDIT: If you want the result as an `int[]`, just add `.ToArray()`:
```
int[] theSet = 1.To(100).ToArray();
``` | Generating sets of integers in C# | [
"c#",
"set"
] |
I'm building a small web app in PHP that stores some information in a plain text file. However, this text file is used/modified by all users of my app at some given point in time, and possibly at the same time.
So the question is: what would be the best way to make sure that only one user can make changes to the file at any given point in time? | You should put a lock on the file
```
$fp = fopen("/tmp/lock.txt", "r+");
if (flock($fp, LOCK_EX)) { // acquire an exclusive lock
ftruncate($fp, 0); // truncate file
fwrite($fp, "Write something here\n");
fflush($fp); // flush output before releasing the lock
flock($fp, LOCK_UN); // release the lock
} else {
echo "Couldn't get the lock!";
}
fclose($fp);
```
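For comparison only (not part of the original answer), the same exclusive-lock pattern sketched with Python's POSIX-only `fcntl` module; the function name is invented for the example:

```python
import fcntl

def write_exclusively(path, text):
    # Open (creating the file if needed), then take an exclusive lock
    # before touching the contents -- mirrors the PHP flock() snippet above.
    with open(path, "a+") as fp:
        fcntl.flock(fp, fcntl.LOCK_EX)   # blocks until the lock is free
        try:
            fp.seek(0)
            fp.truncate()                # truncate file
            fp.write(text)
            fp.flush()                   # flush output before releasing the lock
        finally:
            fcntl.flock(fp, fcntl.LOCK_UN)
```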
Take a look at the <http://www.php.net/flock> | My suggestion is to use SQLite. It's fast, lightweight, stored in a file, and has mechanisms for preventing concurrent modification. Unless you're dealing with a preexisting file format, SQLite is the way to go. | PHP and concurrent file access | [
"php",
"file",
"concurrency"
] |
I have had a few problems with log files growing too big on my SQL Servers (2000). Microsoft doesn't recommend using auto shrink for log files, but since it is a feature it must be useful in some scenarios. Does anyone know when it is proper to use the auto shrink property? | Your problem is not that you need to autoshrink periodically but that you need to back up the log files periodically. (We back ours up every 15 minutes.) Backing up the database itself is not sufficient; you must do the log as well. If you do not back up the transaction log, it will grow until it takes up all the space on the drive. If you back it up, it frees the space to be reused (you will still probably need to shrink after the first backup to get the log down to a more reasonable size). If you don't need to be able to recover from transactions (which you should need to be able to do, unless your entire database consists of tables that are loaded from another source and can easily be re-loaded), then set your log to simple recovery mode.
One reason why autoshrinking isn't so good an idea is that you will be growing the transaction log frequently, which slows down performance. If you back up the log, once you get to a relatively stable size (the amount of space normally used by the transaction log in the time period between backups), the log will only need to grow occasionally if there is an unusually heavy amount of transactions. | My take on this is that auto-shrink is useful when you have many fairly small databases that frequently get larger due to added data, and then have a lot of empty space afterwards. You also need to not mind that the files will be fragmented on the disk when they frequently grow and shrink. I'd never use auto-shrink on a critical database or one larger than 2 GB, as you never know when the shrink operation will kick in, and access to the database will be blocked until the shrink has completed. | When should one use auto shrink on log files in SQL Server? | [
"sql",
"sql-server",
"database"
] |
Is it possible to use any chart modules with wxpython? And are there any good ones out there?
I'm thinking of the likes of PyCha (<http://www.lorenzogil.com/projects/pycha/>) or any equivalent. Many modules seem to require PyCairo, but I can't figure out if I can use those with my wxpython app.
My app has a notebook pane, and I'd like to place the chart inside it. The chart has to be dynamic -- ie the user can choose what kind of data to view -- so I'm guessing modules that make chart images are out.
Just for clarity, by charts I mean things like pies, lines and bars etc. | I recently revisited [matplotlib](http://matplotlib.sourceforge.net/index.html), and am pretty happy with the results.
If you're on windows, there are windows installers available to make your installation process a little less painful.
One potential drawback though is that it requires [numpy](http://sourceforge.net/project/showfiles.php?group_id=1369&package_id=175103) to be installed.
I don't have experience with the interactivity of it, but it does support [event handling](http://matplotlib.sourceforge.net/users/event_handling.html?highlight=wxpython). | Use matplotlib. It integrates nicely with wxPython. [Here's a sample](http://eli.thegreenplace.net/2008/08/01/matplotlib-with-wxpython-guis/) of an interactive chart with wxPython and matplotlib. | Can I use chart modules with wxpython? | [
"python",
"wxpython"
] |
What's the best way to extract the key and value from a string like this:
```
var myString = 'A1234=B1234';
```
I originally had something like this:
```
myString.split('=');
```
And that works fine, BUT an equals (=) sign could appear inside a key or value, plus the string could have quotes, like this:
```
var myString = '"A123=1=2=3=4"="B1234"';
```
The string could also only have one pair of quotes and spaces:
```
var myString = ' "A123=1=2=3=4" = B1234 ';
```
I'm not very good at regular expressions but I'm guessing that's the way forward?
What I want to end up with is two variables, key and value, in the case above, the key variable would end up being *A123=1=2=3=4* and the value variable would be *B1234*.
If there is no value present, for example if this were the original string:
```
var myString = 'A1234';
```
Then I would want the key variable to be 'A1234' and for the value variable to be null or false - or something I can test against.
Any help is appreciated. | What I've tended to do in config files is ensure that there's **no** possibility that the separator character can get into either the key or value.
Sometimes that's easy if you can just say "no '=' characters allowed" but I've had to resort to encoding those characters in some places.
I generally hex them up so that if you wanted a '=' character, you would have to put in %3d (and %25 for the '%' character so you don't think it's a hex-starter character). You can also use %xx for any character but it's only **required** for those two.
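A minimal Python sketch of this escaping idea (the helper names are invented for illustration): escape the escape character and the separator inside each field, and splitting becomes unambiguous.

```python
# Hypothetical helpers illustrating the scheme described above:
# '%' becomes '%25' and '=' becomes '%3d', so a raw '=' can only be the separator.
def encode_field(s):
    return s.replace('%', '%25').replace('=', '%3d')

def decode_field(s):
    return s.replace('%3d', '=').replace('%25', '%')

def split_pair(line):
    # At most one raw '=' can remain, so a plain partition is safe.
    key, sep, value = line.partition('=')
    return decode_field(key), (decode_field(value) if sep else None)
```

Round-tripping works because '%' is escaped first on encode and restored last on decode.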
That way you can check the line to ensure it has one and only one '=' character then post-process the key and value to turn the hex'ed characters back into real ones. | can't help with a one-liner, but I'll suggest the naive way:
```
var inQuote = false;
for(var i=0; i<str.length; i++) {
if (str.charAt(i) == '"') {
inQuote = !inQuote;
}
if (!inQuote && str.charAt(i)=='=') {
key = str.slice(0,i);
value = str.slice(i+1);
break;
}
}
``` | Easiest way to split string into Key / value | [
"javascript",
"regex"
] |
How can I get the (physical) installed path of a DLL that is (may be) registered in GAC? This DLL is a control that may be hosted in things other than a .Net app (including IDEs other than VS...).
When I use `System.Reflection.Assembly.GetExecutingAssembly().Location`, it gives the path of the GAC folder in winnt\system32 - or, in design mode in VS, it gives the path to the VS IDE.
I need to get the path where the physical DLL is actually installed - or the bin/debug (or release) folder for VS.
The reason is that there is an XML file I need to get at in this folder, with config settings that are used both in design mode and at runtime.
Or how is it best to handle this scenario? I have a dubious network location I am using for design mode at the moment... (I don't think the `ApplicationData` folder is going to cut it, but I have the .NET version solved, as that's installed via ClickOnce and can use the ClickOnce Data folder.) | If something gets put in the GAC, it actually gets copied into a spot under %WINDIR%\assembly, like
```
C:\WINDOWS\assembly\GAC_32\System.Data\2.0.0.0__b77a5c561934e089\System.Data.dll
```
I assume you're seeing something like that when you check the Location of the assembly in question when it's installed in the GAC. That's actually correct. (In .NET 1.1 there was a "Codebase" listed when you looked at a GAC assembly's properties, but that was only to show you where the original file was located when you ran gacutil - it didn't actually indicate what would be loaded.) You can read [more about that here](http://www.grimes.nildram.co.uk/workshops/fusWSFour.htm).
Long story short, you may not be able to do what you want to do. Instead of looking in relation to some assembly that's being loaded (`Assembly.GetExecutingAssembly()`), you might want to switch the behavior to look relative to the primary application assembly (`Assembly.GetEntryAssembly()`) or put the file in some well-known location, possibly based on an environment variable that gets set. | After the assembly is shadow copied into the Global Assembly Cache, I don't think there is any metadata to trace back the location of the source assemblies.
What are you trying to achieve by deploying to the GAC? If it's just for the sake of CLR resolving purposes, then there is an alternate way that solves your problem.
Don't GAC-install the DLL; rather, add the following key in the registry (this registry location is looked up by the CLR when trying to resolve assemblies):
```
32 bit OS : HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319\AssemblyFoldersEx\foo
64 bit OS : HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319\AssemblyFoldersEx\foo
```
For the foo key (Use your favourite name instead of foo), you will see a Key Name "Default". Double click it and set the value to wherever your assembly exists. (absolute path is preferred)
Now from Visual Studio, your client should be able to see your assemblies in the "Add Reference" Dialog and can use it.
Now coming to your actual problem,
`Assembly.GetExecutingAssembly()` will return the path of the location where the installed DLLs are present. Find the XML file from there. :)
Note: In the registry key the 4.0.30319 is the version of the .NET Framework your application targets. Use whatever version your application targets instead. | Path of DLL installed to the GAC | [
"c#",
"dll",
"file",
"gac"
] |
What's another option we can use for multiple inheritance, other than implementing interfaces? | A direct answer is to use inner classes. That gives you two (or more) objects that are intimately linked but cover independent base classes.
In general, prefer composition over inheritance. It's a common mistake to use inheritance everywhere. However, that leaves inflexible solutions that are difficult to follow. | First of all, it's better to avoid multiple inheritance and use interfaces. Java does not actually support multiple inheritance of classes.
But you can use mixins to fake multiple inheritance. There are some articles about this:
[Multiple Inheritance in Java](http://csis.pace.edu/~bergin/patterns/multipleinheritance.html)
[The Java Mixin Pattern, or Faking Multiple Inheritance](http://jonaquino.blogspot.com/2005/07/java-mixin-pattern-or-faking-multiple.html)
And if you want to make something composite, I advise to take a look at the [Qi4j](http://www.qi4j.org/) framework:
[Composite Oriented Programming with Qi4j](http://www.infoq.com/articles/Composite-Programming-Qi4j) | option for inheritance | [
"java",
"multiple-inheritance"
] |
I'm looking for a simple recommended "minimal" C++ makefile for Linux which will use g++ to compile and link a single .cpp file and .h file. Ideally the makefile will not even have the physical file names in it and will only have a .cpp to .o transform. What is the best way to generate such a makefile without diving into the horrors of autoconf?
The current dir contains, for example
t.cpp
t.h
and I want a makefile to be created for that. I tried autoconf, but it assumes .h is gcc instead of g++. Yes, while not a beginner, I am relearning best approaches to project management from years ago, and hence am looking for automated ways to create and maintain makefiles for small projects. | If it is a single file, you can type
```
make t
```
And it will invoke
```
g++ t.cpp -o t
```
This doesn't even require a Makefile in the directory, although it will get confused if you have a t.cpp and a t.c and a t.java, etc etc.
Also a real Makefile:
```
SOURCES := t.cpp
# Objs are all the sources, with .cpp replaced by .o
OBJS := $(SOURCES:.cpp=.o)
all: t
# Compile the binary 't' by calling the C++ compiler with CXXFLAGS, LFLAGS, and any libs (if defined) and the list of objects.
t: $(OBJS)
	$(CXX) $(CXXFLAGS) -o t $(OBJS) $(LFLAGS) $(LIBS)
# Get a .o from a .cpp by calling the C++ compiler with CXXFLAGS and includes (if defined)
.cpp.o:
	$(CXX) $(CXXFLAGS) $(INCLUDES) -c $<
``` | Here is a generic makefile from my code snippets directory:
```
SOURCES=$(wildcard *.cpp)
OBJECTS=$(SOURCES:.cpp=.o)
DEPS=$(SOURCES:.cpp=.d)
BINS=$(SOURCES:.cpp=)
CFLAGS+=-MMD
CXXFLAGS+=-MMD
all: $(BINS)
.PHONY: clean
clean:
$(RM) $(OBJECTS) $(DEPS) $(BINS)
-include $(DEPS)
```
As long as you have one .cpp source producing one binary, you don't need anything more. I have only used it with GNU make, and the dependency generation uses gcc syntax (also supported by icc). If you are using the SUN compilers, you need to change "-MMD" to "-xMMD". Also, ensure that the tab on the start of the line after `clean:` does not get changed to spaces when you paste this code or `make` will give you a missing separator error. | minimum c++ make file for linux | [
"c++",
"makefile",
"compilation"
] |
I'm trying to figure out if I should start using the `internal` access modifier more.
I know that if we use `internal` and set the assembly attribute `InternalsVisibleTo`, we can test functions that we don't want to declare public from the testing project.
This makes me think that I should just always use `internal` because at least each project (should?) have its own testing project.
Why shouldn't one do this? When should one use `private`? | Internal classes need to be tested and there is an assembly attribute:
```
using System.Runtime.CompilerServices;
[assembly:InternalsVisibleTo("MyTests")]
```
Add this to the project info file, e.g. `Properties\AssemblyInfo.cs`, for the project under test. In this case "MyTests" is the test project. | Adding to Eric's answer, you can also configure this in the `csproj` file:
```
<ItemGroup>
<AssemblyAttribute Include="System.Runtime.CompilerServices.InternalsVisibleTo">
<_Parameter1>MyTests</_Parameter1>
</AssemblyAttribute>
</ItemGroup>
```
Or if you have one test project per project to be tested, you could do something like this in your `Directory.Build.props` file:
```
<ItemGroup>
<AssemblyAttribute Include="System.Runtime.CompilerServices.InternalsVisibleTo">
<_Parameter1>$(MSBuildProjectName).Test</_Parameter1>
</AssemblyAttribute>
</ItemGroup>
```
See: <https://stackoverflow.com/a/49978185/1678053>
Example: <https://github.com/gldraphael/evlog/blob/master/Directory.Build.props#L5-L12> | C# "internal" access modifier when doing unit testing | [
"c#",
".net",
"unit-testing",
"tdd"
] |
I am trying to talk to a device using python. I have been handed a tuple of bytes which contains the storage information. How can I convert the data into the correct values:
response = (0, 0, 117, 143, 6)
The first 4 values are a 32-bit int telling me how many bytes have been used and the last value is the percentage used.
I can access the tuple as response[0] but cannot see how I can get the first 4 values into the int I require. | See [Convert Bytes to Floating Point Numbers in Python](https://stackoverflow.com/questions/5415/)
You probably want to use the struct module, e.g.
```
import struct
response = (0, 0, 117, 143, 6)
struct.unpack(">I", ''.join([chr(x) for x in response[:-1]]))
```
Assuming an unsigned int. There may be a better way to build the string for unpack; a list comprehension with join was just the first thing that I came up with.
**EDIT**: See also ΤΖΩΤΖΙΟΥ's comment on this answer regarding endianness as well.
**EDIT #2**: If you don't mind using the array module as well, here is an alternate method that obviates the need for a list comprehension. Thanks to @[JimB](https://stackoverflow.com/questions/386753/how-do-i-convert-part-of-a-python-tuple-byte-array-into-an-integer#386998) for pointing out that unpack can operate on arrays as well.
```
import struct
from array import array
response = (0, 0, 117, 143, 6)
bytes = array('B', response[:-1])
struct.unpack('>I', bytes)
``` | Would,
```
num = (response[0] << 24) + (response[1] << 16) + (response[2] << 8) + response[3]
```
meet your needs?
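As an aside (not part of either answer): the shift-based approach above and `struct.unpack` agree, and on Python 3.2+ `int.from_bytes` computes the same value directly:

```python
response = (0, 0, 117, 143, 6)

# Big-endian assembly by shifting, as in the answer above:
num = (response[0] << 24) + (response[1] << 16) + (response[2] << 8) + response[3]

# Python 3.2+ equivalent:
assert num == int.from_bytes(bytes(response[:4]), byteorder="big")
print(num)  # 30095, i.e. 117*256 + 143
```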
aid | How do I convert part of a python tuple (byte array) into an integer | [
"python",
"tuples"
] |
I have a class called DatabaseHelper that wraps a DbConnection. What's the proper way to setup this class for a using statement? I have implemented IDisposible, but I'm not sure when and where I should be calling Connection.Close() or Connection.Dispose().
When I simply call Connection.Dispose() in my own Dispose() method, I'll sometimes get a SocketException from my DbConnection object. I assume this is because old connections are being left open, but there's no details attached the to exception, so I can't know for sure. | Call connection.Dispose() from within your dispose method. You should look at the standard pattern for implementing IDisposable, which goes above and beyond simply implementing the IDisposable interface and allows for disposing unmanaged objects etc:
```
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
if (!disposed)
{
if (disposing)
{
// Dispose managed resources.
}
// There are no unmanaged resources to release, but
// if we add them, they need to be released here.
}
disposed = true;
// If it is available, make the call to the
// base class's Dispose(Boolean) method
base.Dispose(disposing);
}
```
(Taken from <http://msdn.microsoft.com/en-us/library/system.idisposable.aspx>). | According to [this](http://www.devnewsgroups.net/group/microsoft.public.dotnet.framework.adonet/topic41208.aspx) newsgroup:
Here is how IDbConnection.Dispose() is implemented (as Reflector utility shows):
SqlClient:
```
protected override void Dispose(bool disposing)
{
if (disposing)
{
switch (this._objectState)
{
case ConnectionState.Open:
{
this.Close();
break;
}
}
this._constr = null;
}
base.Dispose(disposing);
}
Odbc:
protected override void Dispose(bool disposing)
{
if (disposing)
{
this._constr = null;
this.Close();
CNativeBuffer buffer1 = this._buffer;
if (buffer1 != null)
{
buffer1.Dispose();
this._buffer = null;
}
}
base.Dispose(disposing);
}
OleDb:
protected override void Dispose(bool disposing)
{
if (disposing)
{
if (this.objectState != 0)
{
this.DisposeManaged();
if (base.DesignMode)
{
OleDbConnection.ReleaseObjectPool();
}
this.OnStateChange(ConnectionState.Open, ConnectionState.Closed);
}
if (this.propertyIDSet != null)
{
this.propertyIDSet.Dispose();
this.propertyIDSet = null;
}
this._constr = null;
}
base.Dispose(disposing);
}
```
Your dispose method should only attempt to close the connection if it is open. | Properly disposing of a DbConnection | [
"",
"c#",
"dispose",
"dbconnection",
""
] |
**JavaFX** is now out, and there are promises that Swing will improve along with JavaFX. Gone will be the days of ugly default UI, and at long last we can create engaging applications that are comparable to **Flash, Air, and Silverlight** in terms of quality.
1. Will this mean that **Java Applets** that hail from 1990's are dead and not worth going back to?
2. Same with **Java Desktop**: What will be compelling for us Java Developers to use it rather than JavaFX? | In my opinion Java Applets have been dead for years. I wrote some in the late 90s - a Tetris game during an internship to demonstrate on a 40MHz ARM Acorn Set Top Box for example. Of course I bet there are some casual game sites that have tonnes of them still, and thus it will remain supported, but active development will/has dropped off.
Java Web Start is a handy technology in my opinion. That will still work with JavaFX, it's just another library for that system.
JavaFX will give Java opportunities beyond technical tools (like SQL Developer), in-house business applications and server applications (where it excels). I think it's one of those libraries that is worth learning for any Java developer, if they can get the time. There's no arguing that user interface libraries for Java have been sorely lacking, or overly complex, for many a year.
However there's a lot of competition out there, and it is very new (which means the development tool support is very raw, compared to Flash and Silverlight). Also people don't like downloading massive runtime environments, although broadband makes it less painful than 5 years ago for many! | I think this discussion is somewhat misleading. I am no fan of applet technology either (and I have been underwhelmed by JavaFX). But the point that this thread is missing is that, unless I am mistaken, **JavaFX is built on top of applet technology**. They are not competing or mutually exclusive. See these articles [here](http://www.javaworld.com/javaworld/jw-05-2008/jw-05-newapplet.html) and [here](http://www.javaworld.com/javaworld/jw-05-2008/jw-05-applets.html).
It could be that the confusion is somewhat intentional on Sun's part as they do not want JavaFX and applets to be mentioned in the same sentence, since Applets had so many problems. | JavaFX is now out: Are Applets and Java Desktop officially dead/dying? | [
"",
"java",
"applet",
"javafx",
""
] |
I've got several functions where I need to do a one-to-many join, using count(), group\_by, and order\_by. I'm using the sqlalchemy.select function to produce a query that will return me a set of id's, which I then iterate over to do an ORM select on the individual records. What I'm wondering is if there is a way to do what I need using the ORM in a single query so that I can avoid having to do the iteration.
Here's an example of what I'm doing now. In this case the entities are Location and Guide, mapped one-to-many. I'm trying to get a list of the top locations sorted by how many guides they are related to.
```
def popular_world_cities(self):
query = select([locations.c.id, func.count(Guide.location_id).label('count')],
from_obj=[locations, guides],
whereclause="guides.location_id = locations.id AND (locations.type = 'city' OR locations.type = 'custom')",
group_by=[Location.id],
order_by='count desc',
limit=10)
return map(lambda x: meta.Session.query(Location).filter_by(id=x[0]).first(), meta.engine.execute(query).fetchall())
```
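For reference, the single grouped-join query this boils down to can be sketched end-to-end with the stdlib's sqlite3 (schema and data improvised from the example above, purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE locations (id INTEGER PRIMARY KEY, type TEXT);
    CREATE TABLE guides (id INTEGER PRIMARY KEY,
                         location_id INTEGER REFERENCES locations(id));
""")
conn.executemany("INSERT INTO locations VALUES (?, ?)",
                 [(1, "city"), (2, "custom"), (3, "country")])
conn.executemany("INSERT INTO guides (location_id) VALUES (?)",
                 [(1,), (1,), (2,), (3,)])
# One join + group by + order by: locations ranked by guide count.
top = conn.execute("""
    SELECT l.id, COUNT(g.id) AS cnt
    FROM locations l JOIN guides g ON g.location_id = l.id
    WHERE l.type IN ('city', 'custom')
    GROUP BY l.id ORDER BY cnt DESC LIMIT 10
""").fetchall()
print(top)  # [(1, 2), (2, 1)]
```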
**Solution**
I've found the best way to do this. Simply supply a `from_statement` instead of a `filter_by` or some such. Like so:
```
meta.Session.query(Location).from_statement(query).all()
``` | I've found the best way to do this. Simply supply a `from_statement` instead of a `filter_by` or some such. Like so:
```
meta.Session.query(Location).from_statement(query).all()
``` | What you're trying to do maps directly to a SQLAlchemy join between a subquery [made from your current select call] and a table. You'll want to move the ordering out of the subselect and create a separate, labeled column with count(desc); order the outer select by that column.
Other than that, I don't see much that's nonobvious. | SQLAlchemy with count, group_by and order_by using the ORM | [
"",
"python",
"sqlalchemy",
""
] |
I have an XP client that is experiencing an issue. My dev box is Vista Home 64. I start up the debugger on the client machine and when I try to run on the dev box, it says 'Unable to debug. Remote debugger doesn't support this version of Windows'.
Looks like I'm SOL. Is there another way to find out how to debug this? The application is throwing a .NET runtime error, event ID 5000, with a System.Security error. Not sure why the try/catch isn't getting it. Is there some way to find out what's going on?
Thanks! | Turns out the .NET Framework was hosed on the client. I couldn't get a stack trace or anything. Removed and reinstalled the .NET Framework and everything worked great!
Thanks for all the help! | You may be running into issues with the 64-bit debugger not being able to deal with the 32-bit debugger client.
There was a question a while ago talking about problems connecting a 32-bit debugger to a 64-bit target (which I think is the opposite from your situation):
* [x86 Remote Debugger Service on x64](https://stackoverflow.com/questions/76939/x86-remote-debugger-service-on-x64)
Even though there's no resolution there, it might give you some ideas - maybe it's as simple as making sure you're running the 32-bit version of the debugger on the x64 machine (if that can be done simply).. | Remote debugging - Remote debugger doesn't support this version of Windows error | [
"",
"c#",
".net",
"debugging",
"remote-debugging",
""
] |
I'm writing some RSS feeds in PHP and struggling with character-encoding issues. Should I utf8\_encode() before or after htmlentities() encoding? For example, I've got both ampersands and Chinese characters in a description element, and I'm not sure which of these is proper:
```
$output = utf8_encode(htmlentities($source)); or
$output = htmlentities(utf8_encode($source));
```
And why? | It's important to pass the character set to the htmlentities function, as the default is ISO-8859-1:
```
utf8_encode(htmlentities($source,ENT_COMPAT,'utf-8'));
```
You should apply htmlentities first, so as to allow utf8\_encode to encode the entities properly.
(EDIT: Based on the comments, I've changed my earlier opinion that the order didn't matter. This code is tested and works well.) | First: The [`utf8_encode` function](http://docs.php.net/utf8_encode) converts from ISO 8859-1 to UTF-8. So you only need this function if your input encoding/charset is ISO 8859-1. But why don’t you use UTF-8 in the first place?
Second: You don’t need [`htmlentities`](http://docs.php.net/htmlentities). You just need [`htmlspecialchars`](http://docs.php.net/htmlspecialchars) to replace the special characters by character references. `htmlentities` would replace “too much” characters that can be encoded directly using UTF-8. Important is that you use the `ENT_QUOTES` quote style to replace the single quotes as well.
So my proposal:
```
// if your input encoding is ISO 8859-1
htmlspecialchars(utf8_encode($string), ENT_QUOTES)
// if your input encoding is UTF-8
htmlspecialchars($string, ENT_QUOTES, 'UTF-8')
``` | utf-8 and htmlentities in RSS feeds | [
"",
"php",
"utf-8",
"rss",
""
] |
I know that the compiler will sometimes initialize memory with certain patterns such as `0xCD` and `0xDD`. What I want to know is **when** and **why** this happens.
## When
Is this specific to the compiler used?
Do `malloc/new` and `free/delete` work in the same way with regard to this?
Is it platform specific?
Will it occur on other operating systems, such as `Linux` or `VxWorks`?
## Why
My understanding is this only occurs in `Win32` debug configuration, and it is used to detect memory overruns and to help the compiler catch exceptions.
Can you give any practical examples as to how this initialization is useful?
I remember reading something (maybe in Code Complete 2) saying that it is good to initialize memory to a known pattern when allocating it, and certain patterns will trigger interrupts in `Win32` which will result in exceptions showing in the debugger.
How portable is this? | A quick summary of what Microsoft's compilers use for various bits of unowned/uninitialized memory when compiled for debug mode (support may vary by compiler version):
```
Value Name Description
------ -------- -------------------------
0xCD Clean Memory Allocated memory via malloc or new but never
written by the application.
0xDD Dead Memory Memory that has been released with delete or free.
It is used to detect writing through dangling pointers.
0xED or Aligned Fence 'No man's land' for aligned allocations. Using a
0xBD different value here than 0xFD allows the runtime
to detect not only writing outside the allocation,
but to also identify mixing alignment-specific
allocation/deallocation routines with the regular
ones.
0xFD Fence Memory Also known as "no mans land." This is used to wrap
the allocated memory (surrounding it with a fence)
and is used to detect indexing arrays out of
bounds or other accesses (especially writes) past
the end (or start) of an allocated block.
0xFD or Buffer slack Used to fill slack space in some memory buffers
0xFE (unused parts of `std::string` or the user buffer
passed to `fread()`). 0xFD is used in VS 2005 (maybe
some prior versions, too), 0xFE is used in VS 2008
and later.
0xCC When the code is compiled with the /GZ option,
uninitialized variables are automatically assigned
to this value (at byte level).
// the following magic values are done by the OS, not the C runtime:
0xAB (Allocated Block?) Memory allocated by LocalAlloc().
0xBAADF00D Bad Food Memory allocated by LocalAlloc() with LMEM_FIXED,but
not yet written to.
0xFEEEFEEE OS fill heap memory, which was marked for usage,
but wasn't allocated by HeapAlloc() or LocalAlloc().
Or that memory just has been freed by HeapFree().
```
Disclaimer: the table is from some notes I have lying around - they may not be 100% correct (or coherent).
Many of these values are defined in vc/crt/src/dbgheap.c:
```
/*
* The following values are non-zero, constant, odd, large, and atypical
* Non-zero values help find bugs assuming zero filled data.
* Constant values are good, so that memory filling is deterministic
* (to help make bugs reproducible). Of course, it is bad if
* the constant filling of weird values masks a bug.
* Mathematically odd numbers are good for finding bugs assuming a cleared
* lower bit.
* Large numbers (byte values at least) are less typical and are good
* at finding bad addresses.
* Atypical values (i.e. not too often) are good since they typically
* cause early detection in code.
* For the case of no man's land and free blocks, if you store to any
* of these locations, the memory integrity checker will detect it.
*
* _bAlignLandFill has been changed from 0xBD to 0xED, to ensure that
* 4 bytes of that (0xEDEDEDED) would give an inaccessible address under 3gb.
*/
static unsigned char _bNoMansLandFill = 0xFD; /* fill no-man's land with this */
static unsigned char _bAlignLandFill = 0xED; /* fill no-man's land for aligned routines */
static unsigned char _bDeadLandFill = 0xDD; /* fill free objects with this */
static unsigned char _bCleanLandFill = 0xCD; /* fill new objects with this */
```
There are also a few times where the debug runtime will fill buffers (or parts of buffers) with a known value, for example, the 'slack' space in `std::string`'s allocation or the buffer passed to `fread()`. Those cases use a value given the name `_SECURECRT_FILL_BUFFER_PATTERN` (defined in `crtdefs.h`). I'm not sure exactly when it was introduced, but it was in the debug runtime by at least VS 2005 (VC++8).
Initially, the value used to fill these buffers was `0xFD` - the same value used for no man's land. However, in VS 2008 (VC++9) the value was changed to `0xFE`. I assume that's because there could be situations where the fill operation would run past the end of the buffer, for example, if the caller passed in a buffer size that was too large to `fread()`. In that case, the value `0xFD` might not trigger detecting this overrun since if the buffer size were too large by just one, the fill value would be the same as the no man's land value used to initialize that canary. No change in no man's land means the overrun wouldn't be noticed.
So the fill value was changed in VS 2008 so that such a case would change the no man's land canary, resulting in the detection of the problem by the runtime.
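The no-man's-land check this all relies on is conceptually simple; here is a toy model (Python purely for illustration; the real implementation is C code inside the debug CRT, and these constants mirror the table above):

```python
FENCE, FILL, CLEAN = 4, 0xFD, 0xCD

def toy_alloc(n):
    # Layout: [fence][payload filled with the "clean" pattern][fence]
    return bytearray([FILL] * FENCE + [CLEAN] * n + [FILL] * FENCE)

def fences_intact(block, n):
    lead = block[:FENCE]
    trail = block[FENCE + n:]
    return all(b == FILL for b in lead) and all(b == FILL for b in trail)

buf = toy_alloc(8)
assert buf[FENCE] == CLEAN          # "allocated but never written" pattern
assert fences_intact(buf, 8)
buf[FENCE + 8] = 0                  # a one-byte overrun past the payload...
assert not fences_intact(buf, 8)    # ...corrupts the trailing fence
```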
As others have noted, one of the key properties of these values is that if a pointer variable with one of these values is de-referenced, it will result in an access violation, since on a standard 32-bit Windows configuration, user mode addresses will not go higher than 0x7fffffff. | One nice property about the fill value 0xCCCCCCCC is that in x86 assembly, the opcode 0xCC is the [int3](https://pdos.csail.mit.edu/6.828/2008/readings/i386/INT.htm) opcode, which is the software breakpoint interrupt. So, if you ever try to execute code in uninitialized memory that's been filled with that fill value, you'll immediately hit a breakpoint, and the operating system will let you attach a debugger (or kill the process). | When and why will a compiler initialise memory to 0xCD, 0xDD, etc. on malloc/free/new/delete? | [
"",
"c++",
"c",
"memory",
"memory-management",
""
] |
I'm working with a Java program that has multiple components (with Eclipse & Ant at the moment).
Is there some way to start multiple programs with one launch configuration? I have an Ant target that does the job (launches multiple programs) but there are things I would like to do:
* I would like to debug the programs with Eclipse, hence the need for Eclipse launch.
* I would like to see the outputs for the programs at separate consoles.
Also other ways to launch multiple Java programs "with one click" with separate consoles and/or debugging would be ok. | [Multiple launch part]
If you have an ant launch configuration which does what you want, you can always transform it into a java launcher calling ant.
```
Main Class: org.apache.tools.ant.Main
-Dant.home=${resource_loc:/myPath/apache_ant}
-f ${resource_loc:/myProject/config/myFile-ant.xml}
```
You can then launch this ant session as a regular java application, with all eclipse debugging facilities at your disposal.
Add to your classpath in the User Entries section (*before* your project and default path):
* ant.jar
* ant-launcher.jar
---
[Multiple console part]
Maybe a possible solution would be to make sure your ant launcher actually launches the different applications in their own JVM processes (one javaw.exe for each application).
That way, you could use the ability of the [native eclipse console to switch between different process](http://dev.eclipse.org/newslists/news.eclipse.webtools/msg04776.html).
> The Console view clearly separates output from each distinct "process" and keeps them in several "buffers". The Console has a built-in "switch" feature that will automatically switch the view to display the buffer of the last process that performed output, however you can easily switch the display to any "process buffer" you want to look at.
>
> To switch the Console "buffer" display, just click on the black "Down arrow" next to the 4th toolbar button from the right in the Console View's title bar (the button
> that resembles a computer screen):
> this will show a pop-down menu listing the "names" of all active process buffers, preceded by an "order number".
> The one currently displayed will have a check-mark before its "order number". You can switch the view to another display buffer simply by clicking on its name. | The question and selected answer here are both 6 years old.
[Eclipse Launch Groups](http://help.eclipse.org/juno/index.jsp?topic=%2Forg.eclipse.cdt.doc.user%2Freference%2Fcdt_u_run_dbg_launch_group.htm) provides UI to run multiple launch configs. Launch Groups is apparently part of CDT but can be [installed separately](/a/11369639/1086034) without CDT by installing "C/C++ Remote Launch" (org.eclipse.cdt.launch.remote). | How to launch multiple Java programs with one configuration on separate consoles (with Eclipse) | [
"",
"java",
"eclipse",
"launch",
""
] |
In Java 1.4 you could use ((SunToolkit) Toolkit.getDefaultToolkit()).getNativeWindowHandleFromComponent() but that was removed.
It looks like you have to use JNI to do this now. Do you have the JNI code and sample Java code to do this?
I need this to call the Win32 GetWindowLong and SetWindowLong API calls, which can be done via the Jawin library.
I would like something very precise so I can pass a reference to the JDialog or JFrame and get the window handle.
[Swing transparency using JNI](https://stackoverflow.com/questions/265602/swing-transparency-using-jni) may be related. | The following code lets you pass a Component to get the window handle (HWND) for it. To make sure that a Component has a corresponding window handle, call isLightweight() on the Component and verify that it returns false. If it doesn't, try its parent by calling Component.getParent().
Java code:
```
package win32;
public class Win32 {
public static native int getWindowHandle(Component c);
}
```
Header file main.h:
```
/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>
/* Header for class win32_Win32 */
#ifndef _Included_win32_Win32
#define _Included_win32_Win32
#ifdef __cplusplus
extern "C" {
#endif
/*
* Class: win32_Win32
* Method: getWindowHandle
* Signature: (Ljava/awt/Component;Ljava/lang/String;)I
*/
JNIEXPORT jint JNICALL Java_win32_Win32_getWindowHandle
(JNIEnv *, jclass, jobject);
#ifdef __cplusplus
}
#endif
#endif
```
The C source main.c:
```
#include<windows.h>
#include <jni.h>
#include <jawt.h>
#include <jawt_md.h>
HMODULE _hAWT = 0;
JNIEXPORT jint JNICALL Java_win32_Win32_getWindowHandle
(JNIEnv * env, jclass cls, jobject comp)
{
HWND hWnd = 0;
typedef jboolean (JNICALL *PJAWT_GETAWT)(JNIEnv*, JAWT*);
JAWT awt;
JAWT_DrawingSurface* ds;
JAWT_DrawingSurfaceInfo* dsi;
JAWT_Win32DrawingSurfaceInfo* dsi_win;
jboolean result;
jint lock;
//Load AWT Library
if(!_hAWT)
//for Java 1.4
_hAWT = LoadLibrary("jawt.dll");
if(!_hAWT)
//for Java 1.3
_hAWT = LoadLibrary("awt.dll");
if(_hAWT)
{
PJAWT_GETAWT JAWT_GetAWT = (PJAWT_GETAWT)GetProcAddress(_hAWT, "_JAWT_GetAWT@8");
if(JAWT_GetAWT)
{
awt.version = JAWT_VERSION_1_4; // Init here with JAWT_VERSION_1_3 or JAWT_VERSION_1_4
//Get AWT API Interface
result = JAWT_GetAWT(env, &awt);
if(result != JNI_FALSE)
{
ds = awt.GetDrawingSurface(env, comp);
if(ds != NULL)
{
lock = ds->Lock(ds);
if((lock & JAWT_LOCK_ERROR) == 0)
{
dsi = ds->GetDrawingSurfaceInfo(ds);
if(dsi)
{
dsi_win = (JAWT_Win32DrawingSurfaceInfo*)dsi->platformInfo;
if(dsi_win)
{
hWnd = dsi_win->hwnd;
}
else {
hWnd = (HWND) -1;
}
ds->FreeDrawingSurfaceInfo(dsi);
}
else {
hWnd = (HWND) -2;
}
ds->Unlock(ds);
}
else {
hWnd = (HWND) -3;
}
awt.FreeDrawingSurface(ds);
}
else {
hWnd = (HWND) -4;
}
}
else {
hWnd = (HWND) -5;
}
}
else {
hWnd = (HWND) -6;
}
}
else {
hWnd = (HWND) -7;
}
return (jint)hWnd;
}
``` | You don't have write any C/JNI code. From Java:
```
import sun.awt.windows.WComponentPeer;
public static long getHWnd(Frame f) {
return f.getPeer() != null ? ((WComponentPeer) f.getPeer()).getHWnd() : 0;
}
```
Caveats:
* This uses a sun.\* package. Obviously this is not public API. But it is unlikely to change (and I think less likely to break than the solutions above).
* This will compile and run on Windows only. You would need to turn this into reflection code for this to be portable. | In Java Swing how do you get a Win32 window handle (hwnd) reference to a window? | [
"",
"java",
"winapi",
"swing",
"java-native-interface",
"hwnd",
""
] |
SQL Server 2005
I have 10 million rows in the DB, and run a select (with lots of "where" clauses and joins.. pretty complex). The results are presented in a grid (think Google results) and because of that, the user cannot possibly use more than 1000 results.
So I limit my SQL with a TOP 1000.
**Problem**: User still wants to know that there are 5432 results for his search.
Can I get that info without paying the price (or in other words, while still getting the speed benefits the "top 1000" is giving me?)
**Assumptions**
- Assume that the TOP 1000 brings 1000 out of **100K** rows. So even the network price of moving 100K might be an issue.
**Conclusions**
There is no free lunch! You can get the elegant way (accepted answer), but it still takes as long as the more expensive operation (i.e. counting all results).
In real life, I will go with the 2 SQL approach, one to return top 1000 rows for display, and one that is ASYNC and updates some AJAX panel with the count(\*) results that will take much longer to compute | ```
SELECT TOP 1000 x, y, z, COUNT(*) OVER () AS TotalCount
FROM dbo.table
``` | Personally I'd opt for two statements hitting the database. One to retrieve the count, one to retrieve the first 1000 records.
You could run both queries in a batch to squeeze a little extra performance by saving a round-trip to the database.
```
-- Get the count
select count(*) from table where [criteria]
-- Get the data
select [cols] from table where [criteria]
``` | select top 1000, but know how many rows are there? | [
"",
"sql",
"sql-server-2005",
"performance",
""
] |
My MS Visual C# program was compiling and running just fine.
I close MS Visual C# to go off and do other things in life.
I reopen it and (before doing anything else) go to "Publish" my program and get the following error message:
> Program C:\myprogram.exe does not contain a static 'Main' method suitable for an entry point
Huh? Yes it does... and it was all working 15 min earlier. Sure, I can believe that I accidentally hit something or done something before I closed it up... but what? How do I troubleshoot this?
My Program.cs file looks like this:
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms;
using System.Threading;
namespace SimpleAIMLEditor
{
static class Program
{
[STAThread]
static void Main()
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new mainSAEForm());
}
}
}
```
...and there are some comments in there. There are no other errors.
Help? | Are the properties on the file set to Compile? | I was struggle with this error just because **one of my `class library` projects** `was set acceddentaly` to be an **console application**
so make sure your class library project really is a class library in Output type
[](https://i.stack.imgur.com/pzRCw.png) | Troubleshooting "program does not contain a static 'Main' method" when it clearly does...? | [
"",
"c#",
"compilation",
""
] |
Does anyone know of a link to a reference on the web that contains a sample English dictionary word script that can be used to populate a dictionary table in SQL Server?
I can handle a .txt or .csv file, or something similar.
Alternatively, I'm adding custom spellchecking functionality to my web apps...but I don't want to integrate the standard components. If there are good references (with samples) on building a dictionary word checker, I'd love that too! | [Downloadable Dictionaries](http://www.dicts.info/uddl.php)
[GNU version of The Collaborative International Dictionary of English (XML)](http://www.ibiblio.org/webster/) | As used on [Debian](http://packages.debian.org/changelogs/pool/main/s/scowl/scowl_6-2.3/wamerican-large.copyright) and [Ubuntu](http://changelogs.ubuntu.com/changelogs/pool/main/s/scowl/scowl_6-2.1/wbritish.copyright):
[SCOWL (Spell Checker Oriented Word Lists)](http://wordlist.sourceforge.net/) is a collection of English word lists maintained by Kevin Atkinson... | Sample [English] Dictionary SQL Script to populate table? | [
"",
"sql",
"dictionary",
"scripting",
"spell-checking",
""
] |
This seems to me to be the kind of issue that would crop up all the time with SQL/database development, but then I'm new to all this, so forgive my ignorance.
I have 2 tables:
```
CREATE TABLE [dbo].[Tracks](
[TrackStringId] [bigint] NOT NULL,
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[Time] [datetime] NOT NULL,
CONSTRAINT [PK_Tracks] PRIMARY KEY CLUSTERED
(
[Id] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Tracks] CHECK CONSTRAINT [FK_Tracks_AudioStreams]
GO
ALTER TABLE [dbo].[Tracks] WITH CHECK ADD CONSTRAINT
[FK_Tracks_TrackStrings] FOREIGN KEY([TrackStringId])
REFERENCES [dbo].[TrackStrings] ([Id])
GO
ALTER TABLE [dbo].[Tracks] CHECK CONSTRAINT [FK_Tracks_TrackStrings]
GO
```
and
```
CREATE TABLE [dbo].[TrackStrings](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[String] [nvarchar](512) NOT NULL,
CONSTRAINT [PK_Strings] PRIMARY KEY CLUSTERED
(
[Id] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
```
I want to insert a new entry into the tracks table. This will also involve inserting a new entry in the trackstrings table, and ensuring that the foreign key column trackstringid in tracks points to the new entry in trackstrings. What is the most efficient means of achieving this? | First, insert into `TrackStrings`, omitting the primary key column from the column list. This invokes its `IDENTITY` column which generates a value automatically.
```
INSERT INTO [dbo].[TrackStrings] ([String])
VALUES ('some string');
```
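(As an aside, the same two-step pattern can be sketched runnably with the stdlib's sqlite3, where the DB-API cursor's `lastrowid` plays the role `SCOPE_IDENTITY()` plays below; table names are from the question, the rest is illustrative:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TrackStrings (Id INTEGER PRIMARY KEY, String TEXT)")
conn.execute("CREATE TABLE Tracks (Id INTEGER PRIMARY KEY, "
             "TrackStringId INTEGER REFERENCES TrackStrings(Id), Time TEXT)")
cur = conn.execute("INSERT INTO TrackStrings (String) VALUES (?)",
                   ("some string",))
new_id = cur.lastrowid  # the key generated by the first insert
conn.execute("INSERT INTO Tracks (TrackStringId, Time) "
             "VALUES (?, datetime('now'))", (new_id,))
```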
Second, insert into `Tracks` and specify as its `TrackStringId` the function [`SCOPE_IDENTITY()`](http://msdn.microsoft.com/en-us/library/ms190315(SQL.90).aspx), which returns the most recent value generated by an `IDENTITY` column *in your current scope*.
```
INSERT INTO [dbo].[Tracks] ([TrackStringId], [Time])
VALUES (SCOPE_IDENTITY(), CURRENT_TIMESTAMP);
``` | If you are using SQL Server 2005 or later and are inserting a lot of records in a single `INSERT`, you can look into `OUTPUT` or `OUTPUT INTO` options [here](http://msdn.microsoft.com/en-us/library/ms177564.aspx) to use the identities from the first insert in the second without haveing to "re-find" the rows to get all the `IDENTITY` values. | SQL insert into related tables | [
"",
"sql",
"sql-server",
"insert",
""
] |
How do you gracefully handle failed future feature imports? If a user is running using Python 2.5 and the first statement in my module is:
```
from __future__ import print_function
```
Compiling this module for Python 2.5 will fail with a:
```
File "__init__.py", line 1
from __future__ import print_function
SyntaxError: future feature print_function is not defined
```
I'd like to inform the user that they need to rerun the program with Python >= 2.6 and maybe provide some instructions on how to do so. However, to quote [PEP 236](http://www.python.org/dev/peps/pep-0236/):
> The only lines that can appear before
> a future\_statement are:
>
> * The module docstring (if any).
> * Comments.
> * Blank lines.
> * Other future\_statements.
So I can't do something like:
```
import __future__
if hasattr(__future__, 'print_function'):
from __future__ import print_function
else:
raise ImportError('Python >= 2.6 is required')
```
Because it yields:
```
File "__init__.py", line 4
from __future__ import print_function
SyntaxError: from __future__ imports must occur at the beginning of the file
```
This snippet from the PEP seems to give hope of doing it inline:
> Q: I want to wrap future\_statements
> in try/except blocks, so I can use
> different code depending on which
> version of Python I'm running. Why
> can't I?
>
> A: Sorry! try/except is a runtime
> feature; future\_statements are
> primarily compile-time gimmicks, and
> your try/except happens long after the
> compiler is done. That is, by the
> time you do try/except, the semantics
> in effect for the module are already a
> done deal. Since the try/except
> wouldn't accomplish what it *looks*
> like it should accomplish, it's simply
> not allowed. We also want to keep
> these special statements very easy to
> find and to recognize.
>
> Note that you *can* import \_\_future\_\_
> directly, and use the information in
> it, along with sys.version\_info, to
> figure out where the release you're
> running under stands in relation to a
> given feature's status.
Ideas? | "I'd like to inform the user that they need to rerun the program with Python >= 2.6 and maybe provide some instructions on how to do so."
Isn't that what a README file is for?
Here's your alternative. A "wrapper": a little blob of Python that checks the environment before running your target app.
File: appwrapper.py
```
import sys
major, minor, micro, releaselevel, serial = sys.version_info
if (major,minor) <= (2,5):
# provide advice on getting version 2.6 or higher.
sys.exit(2)
import app
app.main()
```
---
What "direct import" means. You can examine the contents of `__future__`. You're still bound by the fact that a `from __future__ import print_function` is information to the compiler, but you can poke around before importing the module that does the real work.
```
import __future__, sys
if hasattr(__future__, 'print_function'):
    # Could also check sys.version_info >= __future__.print_function.optional
import app
app.main()
else:
print "instructions for upgrading"
``` | A rather hacky but simple method I've used before is to exploit the fact that byte literals were introduced in Python 2.6 and use something like this near the start of the file:
```
b'This module needs Python 2.6 or later. Please do xxx.'
```
This is harmless in Python 2.6 or later, but a `SyntaxError` in any earlier versions. Anyone trying to compile your file will still get an error, but they also get whatever message you want to give.
You might think that as you will have to have this line after your `from __future__ import print_function` then it will be the import that generates the `SyntaxError` and you won't get to see the useful error message, but strangely enough the later error takes precedence. I suspect that as the error from the import isn't really a syntax error in itself it isn't raised on the first compilation pass, and so real syntax errors get raised first (but I'm guessing).
This might not meet your criteria for being 'graceful', and it is very Python 2.6 specific, but it is quick and easy to do. | How to gracefully deal with failed future feature (__future__) imports due to old interpreter version? | [
"",
"python",
"python-import",
""
] |
Setup is following:
Drupal project, one svn repo with trunk/qa/production-ready branches, vhosts for every branch, post-commit hook that copies files from repository to docroots.
Problem is following: Drupal website often relies not only on source code but on DB data too (node types, their settings, etc.).
I'm looking for solution to make this changes versionable. But not like 'diffing' all data in database, instead something like fixtures in unit tests.
Fixture-like scripts with SQL data and files for content that should be versionable and be applied after main post-commit hook.
Is there anything written for that purpose, or maybe it would be easy to adapt some kind of build tool (like Apache Ant) or unit testing framework? And it would be great if this tool knew about Drupal, so in scripts I could do things like `variable_set()`, `drupal_execute()`.
Any ideas? Or should I start coding right now instead of asking this? :) | It sounds like you've already got some infrastructure there that you've written.
So I'd start coding! There isn't anything I'm aware of that's especially good for this at the moment. And if there is, I imagine it would take some effort to get it going with your existing infrastructure. So starting to code seems the way to go.
My approach to this is to use sql patch files (files containing the sql statements to upgrade the db schema/data) with a version number at the start of the filename. The database then contains a table with config info in (you may already have this) that includes info on which version the database is at.
You can then take a number of approaches to automatically apply the patch. One would be a script that you call from the postcommit that checks the version the database is at, and then checks to see if the latest version you have a patch for is newer than the version the db is at, and applies it/them (in order) if so.
The db patch should always finish by updating the aforementioned version number in the config table.
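As a rough illustration of that post-commit flow, here is a hypothetical shell sketch (the `patches` directory, the `db_version` file standing in for the config table, and the commented-out `mysql` call are all assumptions):

```shell
#!/bin/sh
# Apply numbered SQL patches that are newer than the recorded schema version.
PATCH_DIR="patches"
VERSION_FILE="db_version"   # stand-in for the version row in the config table

current=$(cat "$VERSION_FILE" 2>/dev/null || echo 0)
for patch in "$PATCH_DIR"/*.sql; do
    [ -e "$patch" ] || continue            # no patches present
    num=$(basename "$patch" .sql)
    if [ "$num" -gt "$current" ]; then
        echo "applying $patch"
        # mysql drupal < "$patch"          # the real apply step
        echo "$num" > "$VERSION_FILE"
        current="$num"
    fi
done
echo "database is now at version $current"
```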
This approach can be extended to include the ability to set up a new database based on a full dump file and then applying any necessary patches to it to upgrade it as well. | Did a presentation on this at a recent conference ([slideshare link](http://www.slideshare.net/eaton/drupal-deployment-presentation)) -- I would STRONGLY suggest that you use a site-specific custom module whose .install file contains versioned 'update' functions that do the heavy lifting for database schema changes and settings/configuration changes.
It's definitely superior to keeping .sql files around, because Drupal will keep track of which ones have run and gives you a batch-processing mechanism for anything that requires long-running bulk operations on lots of data. | Drupal deployment/testing/don't-how-to-call it tool | [
"",
"php",
"deployment",
"drupal",
"continuous-integration",
""
] |
I have searched high and low and cannot find a Samsung Omnia SDK.
I know it's possible to use the .NET Framework for development, but I want more: specifically, being able to access the motion sensor and maybe the GPS as well.
Any idea or directions are welcome. | [Here](http://innovator.samsungmobile.com/down/cnts/category.main.list.do?cateId=147&cateAll=all&platformId=2) you can find SDK and Emulators for Samsung's Omnia, the [second](http://innovator.samsungmobile.com/down/cnts/detail.view.do?platformId=2&cateId=147&childCateId=All&childCateId2=&cntsId=1641&imgType=&parentCateId=147&cateAll=all&listReturnUrl=http%3A%2F%2Finnovator.samsungmobile.com%3A80%2Fdown%2Fcnts%2Fcategory.main.list.do%3FcateId%3D147%26cateAll%3Dall%26platformId%3D2&previousUrl=http%3A%2F%2Finnovator.samsungmobile.com%3A80%2Fdown%2Fcnts%2Fcategory.main.list.do%3FcateId%3D147%26cateAll%3Dall%26platformId%3D2) link should lead you to SDK for Samsung's specific API like accelerometer.
GPS should be accessible through the common API ([GPS Intermediate Driver Reference](http://msdn.microsoft.com/en-us/library/ms850332.aspx)); to access it from .NET, simply use P/Invoke.
I don't know where to find the Omnia SDK, but if you can't find something in [xda-developers](http://www.xda-developers.com/), it would be very hard to locate it.
EDIT: A little bit of searching and [here](http://forum.xda-developers.com/showthread.php?p=2737837) is a forum post linking to an [SDK](http://www.koushikdutta.com/2008/10/using-samsung-omnia-accelerometergsenso.html) for the GSensor. | How or where to obtain the Official Samsung Omnia's SDK | [
"",
"c#",
"windows-mobile",
""
] |
You can do it in .NET by using the keyword "ref". Is there any way to do so in Java? | What are you doing in your method? If you're merely populating an existing array, then you don't need pass-by-reference semantics - either in .NET or in Java. In both cases, the reference will be passed by value - so changes to the *object* will be visible by the caller. That's like telling someone the address of your house and asking them to deliver something to it - no problem.
If you *really* want pass-by-reference semantics, i.e. the caller will see any changes made to the parameter itself, e.g. setting it to null or a reference to a different byte array, then either method needs to return the new value, or you need to pass a reference to some sort of "holder" which contains a reference to the byte array, and which can have the (possibly changed) reference grabbed from it later.
In other words, if your method looks likes this:
```
public void doSomething(byte[] data)
{
for (int i=0; i < data.length; i++)
{
data[i] = (byte) i;
}
}
```
then you're fine. If your method looks like this:
```
public void createArray(byte[] data, int length)
{
// Eek! Change to parameter won't get seen by caller
data = new byte[length];
for (int i=0; i < data.length; i++)
{
data[i] = (byte) i;
}
}
```
then you need to change it to either:
```
public byte[] createArray(int length)
{
byte[] data = new byte[length];
for (int i=0; i < data.length; i++)
{
data[i] = (byte) i;
}
return data;
}
```
or:
```
public class Holder<T>
{
public T value; // Use a property in real code!
}
public void createArray(Holder<byte[]> holder, int length)
{
holder.value = new byte[length];
for (int i=0; i < length; i++)
{
holder.value[i] = (byte) i;
}
}
```
For more details, read [Parameter passing in C#](http://pobox.com/~skeet/csharp/parameters.html) and [Parameter passing in Java](http://pobox.com/~skeet/java/parameters.html). (The former is better written than the latter, I'm afraid. One day I'll get round to doing an update.) | Actually, in Java, the **references are passed-by-value**.
In this case, the reference is a `byte[]` object. Any changes that affect the object itself will be seen from the caller method.
However, if you try to replace the reference, for example using `new byte[length]`, you are only replacing the reference that you obtained by pass-by-value, so you are not changing the reference in the caller method.
Here's an interesting read about this issue: [Java is Pass-by-Value Dammit!](http://javadude.com/articles/passbyvalue.htm)
---
Here's a concrete example:
```
public class PassByValue
{
public static void modifyArray(byte[] array)
{
System.out.println("Method Entry: Length: " + array.length);
array = new byte[16];
System.out.println("Method Exit: Length: " + array.length);
}
public static void main(String[] args)
{
byte[] array = new byte[8];
System.out.println("Before Method: Length: " + array.length);
modifyArray(array);
System.out.println("After Method: Length: " + array.length);
}
}
```
This program will create a `byte` array of length `8` in the `main` method, which will call the `modifyArray` method, where a new `byte` array of length `16` is created.
It may appear that, by creating a new `byte` array in the `modifyArray` method, the length of the `byte` array upon returning to the `main` method will be `16`. However, running this program reveals something different:
```
Before Method: Length: 8
Method Entry: Length: 8
Method Exit: Length: 16
After Method: Length: 8
```
The length of the `byte` array upon returning from the `modifyArray` method reverts to `8` instead of `16`.
Why is that?
That's because the `main` method called the `modifyArray` method and sent **a copied reference to the `new byte[8]`** by using **pass-by-value**. Then, the `modifyArray` method **threw away the copied reference by creating a `new byte[16]`**. By the time we leave `modifyArray`, the reference to the `new byte[16]` is out of scope (and eventually will be garbage collected.) However, the `main` method still has reference to the `new byte[8]` as it only **sent the copied reference** and not an actual reference to the reference.
That should demonstrate that Java will pass reference using pass-by-value. | Java: How to pass byte[] by reference? | [
"",
"java",
"pass-by-reference",
""
] |
I have an action handling a form post, but I want to make sure they are authenticated before the action. The problem is that the post data is lost because the user is redirected to the login page, and then back.
```
[AcceptVerbs(HttpVerbs.Post)]
[Authorize]
public ActionResult AskQuestion(string question)
{
....
}
```
Any ideas?
Cheers | You need to serialize your form values and a RedirectUrl to a hidden field.
After authentication deserialize the data in your hidden field and redirect based on the value of the RedirectUrl.
You will need a custom Authorize class to handle this. | A POST is usually used for an add, update, or delete of data. By the time the user is doing this, if authentication is needed, you should have already authenticated them. So I would suggest that you change the flow of your app to authenticate before the POST. | ASP.NET MVC Preserve Post Data After Authorize | [
"",
"c#",
"asp.net-mvc",
""
] |
I keep a cache of transactions to flush (to persistent storage) on the event of a watermark or object finalization. Since `__del__` is [no longer guaranteed to be called](http://docs.python.org/reference/datamodel.html?highlight=__del__#object.__del__) on every object, is the appropriate approach to hook a similar function (or `__del__` itself) into `atexit.register` (during initialization)?
If I'm not mistaken, this *will* cause the object to which the method is bound to hang around until program termination. This isn't likely to be a problem, but maybe there's a more elegant solution?
Note: I know using `__del__` is non-ideal because [it can cause uncatchable exceptions](http://docs.python.org/reference/datamodel.html?highlight=__del__#object.__del__), but I can't think of another way to do this short of cascading `finalize()` calls all the way through my program. TIA! | If you don't need your object to be alive at the time you perform the flush, you could use [weak references](http://docs.python.org/library/weakref.html)
This is similar to your proposed solution, but rather than using a real reference, store a list of weak references, with a callback function to perform the flush. This way, the references aren't going to keep those objects alive, and you won't run into any circular garbage problems with `__del__` methods.
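A hypothetical sketch of that weak-reference idea (all names are invented; the termination sweep flushes whatever is still alive):

```python
import weakref

class TxnCache(object):
    """Hypothetical cache of transactions, flushed to a storage backend."""
    def __init__(self, storage):
        self.storage = storage
        self.pending = []

    def flush(self):
        self.storage.extend(self.pending)
        self.pending = []

# Weak references: tracking a cache here does not keep it alive.
_tracked = []

def track(cache):
    _tracked.append(weakref.ref(cache))

def flush_survivors():
    """Run at termination (e.g. via atexit.register) to flush live caches."""
    for ref in _tracked:
        cache = ref()          # None if the cache was already collected
        if cache is not None:
            cache.flush()

storage = []
cache = TxnCache(storage)
track(cache)
cache.pending.append('txn-1')
flush_survivors()
print(storage)
```

Registering `flush_survivors` with `atexit.register` gives the guaranteed end-of-program sweep without any strong reference pinning the caches in memory.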
You can run through the list of weak references on termination to manually flush any still alive if this needs to be guaranteed done at a certain point. | If you have to handle resources the preferred way is to have an explicit call to a `close()` or `finalize()` method. Have a look at the `with` statement to abstract that. In your case the `weakref` module might be an option. The cached object can be garbage collected by the system and have its `__del__()` method called, or you finalize them if they are still alive. | Python: flush a buffer before program termination via a finalizer | [
"",
"python",
"buffer",
"destructor",
"finalizer",
""
] |
for the following:
```
( a != b ) ? cout<<"not equal" : cout<<"equal";
```
suppose I don't care if it's equal, how can I use the above statement by substituting `cout<<"equal"` with a no-op? | If it really is a ternary operator that doesn't need a second action, the best option would be to replace it with an if:
```
if (a!=b) cout << "not equal";
```
it will smell a lot less. | Simple: I would code it as
```
if (a != b)
cout << "not equal";
```
The ternary operator requires the two results to be of the same type. So you might also be able to get away with
```
(a != b) ? cout << "not equal" : cout;
```
because the stream operator (<<) just returns the ostream reference. That's ugly and unnecessary in my opinion though. | How does one execute a no-op in C/C++? | [
"",
"c++",
""
] |
I'm working on a PHP CMS like project and I'm trying to find out what's the most convenient way of dealing with the CRUD functionality in PHP.
The CMS is programmed completely in procedural PHP (no OOP - I know that many of you will not agree with this...) and was designed keeping everything as simple and light as possible, as well as creating highly reusable functions and snippets of code.
The CMS allows multiple modules to be installed / activated on needs basis. These modules describe different types of content so I will probably end up having something like pages, news, blogs just to name a few.
For each of this content types I will have to create the CRUD operations and now I'm trying to find the most convenient way to achieve this.
One requirement would be that the form for each of these content types is contained in a single external file (for both insert and edit) and if there is some way to integrate server side input validation that would be a plus. | By CRUD operations do you mean just the (tedious) database queries?
You could just as easily setup your database so that except for a few common fields amongst content types, all data for a particular content type is stored as a serialized associative array in a TEXT field.
This way you only need 1 set of queries to CRUD any particular content type since the data passed to the CRUD functions is just blindly serialized.
For example, say we declare that content title, created/updated date, tags, and short description are considered common data. From there we have a blog and a page content type.
I would possibly create a database table as such:
```
CREATE TABLE `content` (
`id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`name` VARCHAR(255) NOT NULL,
`short_description` TEXT NOT NULL,
`tags` TEXT,
`data` TEXT,
`content_type` INT NOT NULL,
`created_at` DATETIME NOT NULL,
`updated_at` DATETIME NOT NULL,
PRIMARY KEY (`id`)
)
```
*(Go ahead and assume we'll create the reference tables for content\_type)*
And while the blog might require data like "pingbacks" and the page might require nothing but the body, you just store the output of something like the below example for a blog:
```
$data = serialize(array(
"body" => "Lorem ipsum",
"pingbacks" => array()
));
```
Updates are easy, whenever you grab the data from the database you unserialize the data for editing into a form selected based on the content type. Displaying works the same way, just grab a template based on the content type and send it the unserialized data array. The template never needs to worry how the data is stored, just that it gets a $data['pingbacks'].
As for your forms, my suggestion is to break your anti OOP covenant and find a form generation library. If you can extract it from the framework, using [Zend\_Form](http://framework.zend.com/manual/en/zend.form.html) with [Zend\_Config](http://framework.zend.com/manual/en/zend.config.html) and [Zend\_Validate](http://framework.zend.com/manual/en/zend.validate.html) from the [Zend Framework](http://framework.zend.com/) (all Zend\_Config amounts to in this situation is a convenient interface to load and traverse XML and INI files) makes life really nice. You can have your XML files define the form for each content type, and all you would do is just render the form on your page (grabbing the XML based off of content type), grabbing the filtered data, removing the "common fields" like name, created/updated dates, then serializing what is left over into the database. No knowledge of the schema for a particular content type is required (unless you wish to be strict).
Though as a personal aside I would highly suggest you look into grabbing Zend\_Form (with Zend\_Validate and Zend\_Config) as well as using [Doctrine](http://www.doctrine-project.org/) as a ORM/database abstraction layer. You might find that at least Doctrine will make your life so much easier when it comes to running operations on the database. | > Though as a personal aside I would highly suggest you look into grabbing Zend\_Form (with Zend\_Validate and Zend\_Config) as well as using Doctrine as a ORM/database abstraction layer. You might find that at least Doctrine will make your life so much easier when it comes to running operations on the database.
I agree with dcousineau. Why roll your own when it's already done? I would also have a look at [Zend DB](http://framework.zend.com/manual/en/zend.db.html), and if you need a PHP4 and 5 solution [PHP ADOdb](http://adodb.sourceforge.net/).
I started an academic project recently and had the same desire as you; eventually I went with PHP ADOdb. | Simple / reusable CRUD in PHP (NO framework or big classes) | [
"",
"php",
"crud",
"code-reuse",
""
] |
I've written a PHP script that handles file downloads, determining which file is being requested and setting the proper HTTP headers to trigger the browser to actually download the file (rather than displaying it in the browser).
I now have a problem where some users have reported certain files being identified incorrectly (so regardless of extension, the browser will consider it a GIF image). I'm guessing this is because I haven't set the "Content-type" in the response header. Is this most likely the case? If so, is there a fairly generic type that could be used for all files, rather than trying to account for every possible file type?
Currently I'm only setting the value "Content-disposition: attachment; filename=arandomf.ile"
**Update:** I followed this guide here to build a more robust process for file downloads (<http://w-shadow.com/blog/2007/08/12/how-to-force-file-download-with-php/>), but there is a significant delay between when the script is executed and when the browser's download dialog appears. Can anyone identify the bottleneck that is causing this?
Here's my implementation:
```
/**
* Outputs the specified file to the browser.
*
* @param string $filePath the path to the file to output
* @param string $fileName the name of the file
* @param string $mimeType the type of file
*/
function outputFile($filePath, $fileName, $mimeType = '') {
// Setup
$mimeTypes = array(
'pdf' => 'application/pdf',
'txt' => 'text/plain',
'html' => 'text/html',
'exe' => 'application/octet-stream',
'zip' => 'application/zip',
'doc' => 'application/msword',
'xls' => 'application/vnd.ms-excel',
'ppt' => 'application/vnd.ms-powerpoint',
'gif' => 'image/gif',
'png' => 'image/png',
'jpeg' => 'image/jpg',
'jpg' => 'image/jpg',
'php' => 'text/plain'
);
$fileSize = filesize($filePath);
$fileName = rawurldecode($fileName);
$fileExt = '';
// Determine MIME Type
if($mimeType == '') {
$fileExt = strtolower(substr(strrchr($filePath, '.'), 1));
if(array_key_exists($fileExt, $mimeTypes)) {
$mimeType = $mimeTypes[$fileExt];
}
else {
$mimeType = 'application/force-download';
}
}
// Disable Output Buffering
@ob_end_clean();
// IE Required
if(ini_get('zlib.output_compression')) {
ini_set('zlib.output_compression', 'Off');
}
// Send Headers
header('Content-Type: ' . $mimeType);
header('Content-Disposition: attachment; filename="' . $fileName . '"');
header('Content-Transfer-Encoding: binary');
header('Accept-Ranges: bytes');
// Send Headers: Prevent Caching of File
header('Cache-Control: private');
header('Pragma: private');
header('Expires: Mon, 26 Jul 1997 05:00:00 GMT');
// Multipart-Download and Download Resuming Support
if(isset($_SERVER['HTTP_RANGE'])) {
list($a, $range) = explode('=', $_SERVER['HTTP_RANGE'], 2);
list($range) = explode(',', $range, 2);
list($range, $rangeEnd) = explode('-', $range);
$range = intval($range);
if(!$rangeEnd) {
$rangeEnd = $fileSize - 1;
}
else {
$rangeEnd = intval($rangeEnd);
}
$newLength = $rangeEnd - $range + 1;
// Send Headers
header('HTTP/1.1 206 Partial Content');
header('Content-Length: ' . $newLength);
header('Content-Range: bytes ' . $range . '-' . $rangeEnd . '/' . $fileSize);
}
else {
$newLength = $fileSize;
header('Content-Length: ' . $fileSize);
}
// Output File
$chunkSize = 1 * (1024*1024);
$bytesSend = 0;
if($file = fopen($filePath, 'rb')) { // 'b' for binary-safe reads
if(isset($_SERVER['HTTP_RANGE'])) {
fseek($file, $range);
while(!feof($file) && !connection_aborted() && $bytesSend < $newLength) {
$buffer = fread($file, $chunkSize);
echo $buffer;
flush();
$bytesSend += strlen($buffer);
}
fclose($file);
}
}
}
``` | Acoording to [RFC 2046 (Multipurpose Internet Mail Extensions)](http://www.rfc-editor.org/rfc/rfc2046.txt):
> The recommended action for an
> implementation that receives an
> "application/octet-stream" entity is
> to simply offer to put the data in
> a file
So I'd go for that one. | As explained by Alex's link you're probably missing the header `Content-Disposition` on top of `Content-Type`.
So something like this:
```
Content-Disposition: attachment; filename="MyFileName.ext"
``` | HTTP Headers for File Downloads | [
"",
"php",
"http-headers",
"download",
"content-type",
""
] |
Are there any scientific packages for Lua comparable to Scipy? | There is the basis for one in [Numeric Lua](http://numlua.luaforge.net/). | You should try **[Torch7](http://www.torch.ch)** ([github](https://github.com/andresy/torch)).
Torch7 has a very nice and efficient vector/matrix/tensor numerical library
with a Lua front-end. It also has a bunch of functions for computer vision
and machine learning.
It's pretty recent but getting better quickly. | Scientific libraries for Lua? | [
"",
"python",
"lua",
"scipy",
"scientific-computing",
"torch",
""
] |
Is there a way to tidy-up the following code, rather than a series of nested try/except statements?
```
try:
import simplejson as json
except ImportError:
try:
import json
except ImportError:
try:
from django.utils import simplejson as json
        except ImportError:
            raise ImportError("Requires either simplejson, Python 2.6 or django.utils!")
``` | I found the following function at <http://mail.python.org/pipermail/python-list/2007-May/441896.html>. It seems to work quite well, and I'm pretty sure the way its importing won't stomp on any existing imports you might already have.
```
def module_exists(module_name):
try:
mod = __import__(module_name)
except ImportError:
return False
else:
return True
if module_exists('simplejson'):
import simplejson as json
elif module_exists('json'):
import json
elif module_exists('django.utils'):
from django.utils import simplejson as json
else:
raise ImportError('Requires either simplejson, Python 2.6 or django.utils')
```
I know this seems like more code, but the function is reusable elsewhere if you're doing a lot of this. | ```
def import_any(*mod_list):
    for mod in mod_list:
        try:
            # fromlist makes __import__ return the rightmost module for a
            # dotted name, rather than the top-level package
            return __import__(mod, fromlist=['_'])
        except ImportError:
            pass
    raise ImportError("Requires one of " + ', '.join(mod_list))
json = import_any('simplejson', 'json', 'django.utils.simplejson')
``` | Tidier way of trying to import a module from multiple locations? | [
"",
"python",
"refactoring",
"python-import",
""
] |
We are porting an app which formerly used Openbase 7 to now use MySQL 5.0.
OB 7 did have quite badly defined (i.e. undocumented) behavior regarding case-sensitivity. We only found this out now when trying the same queries with MySQL.
It appears that OB 7 treats lookups using "=" differently from those using "LIKE": If you have two values "a" and "A", and make a query with WHERE f="a", then it finds only the "a" field, not the "A" field. However, if you use LIKE instead of "=", then it finds both.
Our tests with MySQL showed that if we're using a non-binary collation (e.g. latin1), then both "=" and "LIKE" compare case-insensitively. However, to simulate OB's behavior, we need to get only "=" to be case-sensitive.
We're now trying to figure out how to deal with this in MySQL without having to add a lot of LOWER() function calls to all our queries (there are a lot!).
We have full control over the MySQL DB, meaning we can choose its collation mode as we like (our table names and unique indexes are not affected by the case sensitivity issues, fortunately).
Any suggestions how to simulate the OpenBase behaviour on MySQL with the least amount of code changes?
(I realize that a few smart regex replacements in our source code to add the LOWER calls might do the trick, but we'd rather find a different way.) | Another idea: does MySQL offer something like user-defined functions? You could then write a UDF version of LIKE that is case-insensitive (`ci_like` or so) and change all LIKEs to `ci_like`. Probably easier to do than regexing in a call to LOWER everywhere. | These two articles talk about case sensitivity in mysql:
* [Case Sensitive mysql](http://dev.mysql.com/doc/refman/5.1/en/case-sensitivity.html)
* [mySql docs "Case Sensitivity"](http://dev.mysql.com/doc/refman/5.1/en/case-sensitivity.html)
Both were early hits in this Google search:
* [case sensitive mysql](http://www.google.com/search?client=safari&rls=en-us&q=case+sensitive+mysql&ie=UTF-8&oe=UTF-8) | Openbase SQL case-sensitivity oddities ('=' vs. LIKE) - porting to MySQL | [
"",
"sql",
"mysql",
"case-sensitive",
"case-insensitive",
"openbase",
""
] |
I've done quite a bit of programming on Windows but now I have to write my first Linux app.
I need to talk to a hardware device using UDP. I have to send 60 packets a second with a size of 40 bytes. If I send less than 60 packets within 1 second, bad things will happen.
The data for the packets may take a while to generate. But if the data isn't ready to send out on the wire, it's ok to send the same data that was sent out last time.
The computer is a command-line only setup and will only run this program.
I don't know much about Linux so I was hoping to get a general idea how you might set up an app to meet these requirements.
I was hoping for an answer like:
Make 2 threads, one for sending packets and the other for the calculations.
But I'm not sure it's that simple (maybe it is). Maybe it would be more reliable to make some sort of daemon that just sent out packets from shared memory or something and then have another app do the calculations? If it is some multiple process solution, what communication mechanism would you recommend?
Is there some way I can give my app more priority than normal or something similar?
PS: The more bulletproof the better! | I posted this answer to illustrate a quite different approach to the "obvious" one, in the hope that someone discovers it to be exactly what they need. I didn't expect it to be selected as the best answer! Treat this solution with caution, because there are potential dangers and concurrency issues...
You can use the [setitimer()](http://linux.die.net/man/2/setitimer) system call to have a SIGALRM (alarm signal) sent to your program after a specified number of milliseconds. Signals are asynchronous events (a bit like messages) that interrupt the executing program to let a signal handler run.
A set of default signal handlers are installed by the OS when your program begins, but you can install a custom signal handler using [sigaction()](http://linux.die.net/man/2/sigaction).
So all you need is a single thread; use global variables so that the signal handler can access the necessary information and send off a new packet or repeat the last packet as appropriate.
Here's an example for your benefit:
```
#include <stdio.h>
#include <signal.h>
#include <sys/time.h>
int ticker = 0;
void timerTick(int dummy)
{
printf("The value of ticker is: %d\n", ticker);
}
int main()
{
int i;
struct sigaction action;
struct itimerval time;
//Here is where we specify the SIGALRM handler
action.sa_handler = &timerTick;
sigemptyset(&action.sa_mask);
action.sa_flags = 0;
//Register the handler for SIGALRM
sigaction(SIGALRM, &action, NULL);
time.it_interval.tv_sec = 1; //Timing interval in seconds
time.it_interval.tv_usec = 000000; //and microseconds
time.it_value.tv_sec = 0; //Initial timer value in seconds
time.it_value.tv_usec = 1; //and microseconds
//Set off the timer
setitimer(ITIMER_REAL, &time, NULL);
//Be busy
while(1)
for(ticker = 0; ticker < 1000; ticker++)
for(i = 0; i < 60000000; i++)
;
}
``` | I've done a similar project: a simple software on an embedded Linux computer, sending out CAN messages at a regular speed.
I would go for the two threads approach. Give the sending thread a slightly higher priority, and make it send out the same data block once again if the other thread is slow in computing those blocks.
60 UDP packets per second is pretty relaxed on most systems (including embedded ones), so I would not spend much sweat on optimizing the sharing of the data between the threads and the sending of the packets.
In fact, I would say: **keep it simple**! If you really are the only app in the system, and you have reasonable control over that system, you have nothing to gain from a complex IPC scheme and other tricks. Keeping it simple will help you produce better code with fewer defects and in less time, which actually means more time for testing. | Architectural Suggestions in a Linux App | [
"",
"c++",
"c",
"linux",
"structure",
"ethernet",
""
] |
How do I properly set the default character encoding used by the JVM (1.5.x) programmatically?
I have read that `-Dfile.encoding=whatever` used to be the way to go for older JVMs. I don't have that luxury, for reasons I won't get into.
I have tried:
```
System.setProperty("file.encoding", "UTF-8");
```
And the property gets set, but it doesn't seem to cause the final `getBytes` call below to use UTF8:
```
System.setProperty("file.encoding", "UTF-8");
byte inbytes[] = new byte[1024];
FileInputStream fis = new FileInputStream("response.txt");
fis.read(inbytes);
FileOutputStream fos = new FileOutputStream("response-2.txt");
String in = new String(inbytes, "UTF8");
fos.write(in.getBytes());
``` | Unfortunately, the `file.encoding` property has to be specified as the JVM starts up; by the time your main method is entered, the character encoding used by `String.getBytes()` and the default constructors of `InputStreamReader` and `OutputStreamWriter` has been permanently cached.
As [another user points out,](https://stackoverflow.com/a/623036/3474) in a special case like this, the environment variable `JAVA_TOOL_OPTIONS` *can* be used to specify this property, but it's normally done like this:
```
java -Dfile.encoding=UTF-8 … com.x.Main
```
`Charset.defaultCharset()` will reflect changes to the `file.encoding` property, but most of the code in the core Java libraries that need to determine the default character encoding do not use this mechanism.
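To illustrate that per-call-site approach, a small sketch (it operates on in-memory bytes; the literal string is just an example):

```java
import java.nio.charset.Charset;

public class Main {
    public static void main(String[] args) throws Exception {
        // The default is cached at JVM startup and may be anything:
        System.out.println("default: " + Charset.defaultCharset().name());

        // Naming the encoding explicitly makes the round trip independent
        // of whatever file.encoding happened to be at startup.
        String text = "r\u00e9ponse";
        byte[] utf8 = text.getBytes("UTF-8");
        String decoded = new String(utf8, "UTF-8");
        System.out.println(decoded.equals(text));
    }
}
```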
When you are encoding or decoding, you can query the `file.encoding` property or `Charset.defaultCharset()` to find the current default encoding, and use the appropriate method or constructor overload to specify it. | From the [JVM™ Tool Interface](http://java.sun.com/javase/6/docs/platform/jvmti/jvmti.html#tooloptions) documentation…
> Since the command-line cannot always be accessed or modified, for example in embedded VMs or simply VMs launched deep within scripts, a `JAVA_TOOL_OPTIONS` variable is provided so that agents may be launched in these cases.
By setting the (Windows) environment variable `JAVA_TOOL_OPTIONS` to `-Dfile.encoding=UTF8`, the (Java) `System` property will be set automatically every time a JVM is started. You will know that the parameter has been picked up because the following message will be posted to `System.err`:
> `Picked up JAVA_TOOL_OPTIONS: -Dfile.encoding=UTF8` | Setting the default Java character encoding | [
"",
"java",
"utf-8",
"character-encoding",
""
] |
I want to stop reading my text input file when the word "synonyms" appears. I'm using ifstream and I don't know how to break the loop. I tried using a stringstream "synonyms" but it ended up junking my BST. I included the complete project files below in case you want to avoid typing.
Important part:
```
for(;;) /* here, I want to break the cycle when it reads "synonyms" */
{
inStream >> word;
if (inStream.eof()) break;
wordTree.insert(word);
}
wordTree.graph(cout);
```
dictionary.txt
```
1 cute
2 hello
3 ugly
4 easy
5 difficult
6 tired
7 beautiful
synonyms
1 7
7 1
antonyms
1 3
3 1 7
4 5
5 4
7 3
```
Project.cpp
```
#include <iostream>
#include <fstream>
#include <string>
#include <sstream>
#include "MiBST.h"
using namespace std;
class WordInfo{
public:
//--id accessor
int id ()const {return myId; }
/* myId is the number that identifies each word*/
//--input function
void read (istream &in)
{
in>>myId>>word;
}
//--output function
void print(ostream &out)
{
out<<myId<<" "<<word;
}
//--- equals operator
bool operator==(const WordInfo & otherword) const
{ return myId == otherword.myId; }
//--- less-than operator
bool operator<(const WordInfo & otherword) const
{ return myId < otherword.myId; }
private:
int myId;
string word;
};
//--- Definition of input operator
istream & operator>>(istream & in, WordInfo & word)
{
 word.read(in);
 return in; // return the stream so extractions can be chained and tested
}
//---Definition of output operator
ostream & operator <<(ostream &out, WordInfo &word)
{
 word.print(out);
 return out; // return the stream
}
int main(){
// Open stream to file of ids and words
string wordFile;
cout << "Enter name of dictionary file: ";
getline(cin, wordFile);
ifstream inStream(wordFile.c_str()); // c_str() guarantees a null-terminated string
if (!inStream.is_open())
{
cerr << "Cannot open " << wordFile << "\n";
exit(1);
}
// Build the BST of word records
BST<WordInfo> wordTree; // BST of word records
WordInfo word; // a word record
for(;;) /* here, I want to break the cycle when it reads "synonyms" */
{
inStream >> word;
if (inStream.eof()) break;
wordTree.insert(word);
}
wordTree.graph(cout);
//wordTree.inorder(cout);
system ("PAUSE");
return 0;
}
```
MiBST.h (in case you wanna run it)
```
#include <iostream>
#include <iomanip>
#ifndef BINARY_SEARCH_TREE
#define BINARY_SEARCH_TREE
template <typename DataType>
class BST
{
public:
/***** Function Members *****/
BST();
bool empty() const;
bool search(const DataType & item) const;
void insert(const DataType & item);
void remove(const DataType & item);
void inorder(std::ostream & out) const;
void graph(std::ostream & out) const;
private:
/***** Node class *****/
class BinNode
{
public:
DataType data;
BinNode * left;
BinNode * right;
// BinNode constructors
// Default -- data part is default DataType value; both links are null.
BinNode()
: left(0), right(0)
{}
// Explicit Value -- data part contains item; both links are null.
BinNode(DataType item)
: data(item), left(0), right(0)
{}
}; //end inner class
typedef BinNode * BinNodePointer;
/***** Private Function Members *****/
void search2(const DataType & item, bool & found,
BinNodePointer & locptr, BinNodePointer & parent) const;
/*------------------------------------------------------------------------
Locate a node containing item and its parent.
Precondition: None.
Postcondition: locptr points to node containing item or is null if
not found, and parent points to its parent.
------------------------------------------------------------------------*/
void inorderAux(std::ostream & out,
BST<DataType>::BinNodePointer subtreePtr) const;
/*------------------------------------------------------------------------
Inorder traversal auxiliary function.
Precondition: ostream out is open; subtreePtr points to a subtree
of this BST.
Postcondition: Subtree with root pointed to by subtreePtr has been
output to out.
------------------------------------------------------------------------*/
void graphAux(std::ostream & out, int indent,
BST<DataType>::BinNodePointer subtreeRoot) const;
/*------------------------------------------------------------------------
Graph auxiliary function.
Precondition: ostream out is open; subtreePtr points to a subtree
of this BST.
Postcondition: Graphical representation of subtree with root pointed
to by subtreePtr has been output to out, indented indent spaces.
------------------------------------------------------------------------*/
/***** Data Members *****/
BinNodePointer myRoot;
}; // end of class template declaration
//--- Definition of constructor
template <typename DataType>
inline BST<DataType>::BST()
: myRoot(0)
{}
//--- Definition of empty()
template <typename DataType>
inline bool BST<DataType>::empty() const
{ return myRoot == 0; }
//--- Definition of search()
template <typename DataType>
bool BST<DataType>::search(const DataType & item) const
{
typename BST<DataType>::BinNodePointer locptr = myRoot;
typename BST<DataType>::BinNodePointer parent =0;
/* BST<DataType>::BinNodePointer locptr = myRoot;
parent = 0; */ // the typename was missing in the original declaration
bool found = false;
while (!found && locptr != 0)
{
if (item < locptr->data) // descend left
locptr = locptr->left;
else if (locptr->data < item) // descend right
locptr = locptr->right;
else // item found
found = true;
}
return found;
}
//--- Definition of insert()
template <typename DataType>
inline void BST<DataType>::insert(const DataType & item)
{
typename BST<DataType>::BinNodePointer
locptr = myRoot, // search pointer
parent = 0; // pointer to parent of current node
bool found = false; // indicates if item already in BST
while (!found && locptr != 0)
{
parent = locptr;
if (item < locptr->data) // descend left
locptr = locptr->left;
else if (locptr->data < item) // descend right
locptr = locptr->right;
else // item found
found = true;
}
if (!found)
{ // construct node containing item
locptr = new typename BST<DataType>::BinNode(item);
if (parent == 0) // empty tree
myRoot = locptr;
else if (item < parent->data ) // insert to left of parent
parent->left = locptr;
else // insert to right of parent
parent->right = locptr;
}
else
std::cout << "Item already in the tree\n";
}
//--- Definition of remove()
template <typename DataType>
void BST<DataType>::remove(const DataType & item)
{
bool found; // signals if item is found
typename BST<DataType>::BinNodePointer
x, // points to node to be deleted
parent; // " " parent of x and xSucc
search2(item, found, x, parent);
if (!found)
{
std::cout << "Item not in the BST\n";
return;
}
//else
if (x->left != 0 && x->right != 0)
{ // node has 2 children
// Find x's inorder successor and its parent
typename BST<DataType>::BinNodePointer xSucc = x->right;
parent = x;
while (xSucc->left != 0) // descend left
{
parent = xSucc;
xSucc = xSucc->left;
}
// Move contents of xSucc to x and change x
// to point to successor, which will be removed.
x->data = xSucc->data;
x = xSucc;
} // end if node has 2 children
// Now proceed with case where node has 0 or 1 child
typename BST<DataType>::BinNodePointer
subtree = x->left; // pointer to a subtree of x
if (subtree == 0)
subtree = x->right;
if (parent == 0) // root being removed
myRoot = subtree;
else if (parent->left == x) // left child of parent
parent->left = subtree;
else // right child of parent
parent->right = subtree;
delete x;
}
//--- Definition of inorder()
template <typename DataType>
inline void BST<DataType>::inorder(std::ostream & out) const
{
inorderAux(out, myRoot);
}
//--- Definition of graph()
template <typename DataType>
inline void BST<DataType>::graph(std::ostream & out) const
{ graphAux(out, 0, myRoot); }
//--- Definition of search2()
template <typename DataType>
void BST<DataType>::search2(const DataType & item, bool & found,
BST<DataType>::BinNodePointer & locptr,
BST<DataType>::BinNodePointer & parent) const
{
locptr = myRoot;
parent = 0;
found = false;
while (!found && locptr != 0)
{
if (item < locptr->data) // descend left
{
parent = locptr;
locptr = locptr->left;
}
else if (locptr->data < item) // descend right
{
parent = locptr;
locptr = locptr->right;
}
else // item found
found = true;
}
}
//--- Definition of inorderAux()
template <typename DataType>
void BST<DataType>::inorderAux(std::ostream & out,
BST<DataType>::BinNodePointer subtreeRoot) const
{
if (subtreeRoot != 0)
{
inorderAux(out, subtreeRoot->left); // L operation
out << subtreeRoot->data << " "; // V operation
inorderAux(out, subtreeRoot->right); // R operation
}
}
//--- Definition of graphAux()
template <typename DataType>
void BST<DataType>::graphAux(std::ostream & out, int indent,
BST<DataType>::BinNodePointer subtreeRoot) const
{
if (subtreeRoot != 0)
{
graphAux(out, indent + 8, subtreeRoot->right);
out << std::setw(indent) << " " << subtreeRoot->data << std::endl;
graphAux(out, indent + 8, subtreeRoot->left);
}
}
#endif
``` | You can do it like this:
```
/* here, it stops when reading "synonyms" or when failing to extract a word. */
while(inStream >> word && word != "synonyms") {
wordTree.insert(word);
}
wordTree.graph(cout);
```
Note that when it fails to read a sequence of non-whitespace characters, it sets the fail-bit of the stream, and `inStream` then evaluates to false; that is what makes the loop work. Be careful when using `.eof()`: it only returns true after you have tried to read beyond the end of the file. So, for example, you will exit the loop and miss the word *3 tree* here:
```
1 house 2 garden 3 tree
```
Assuming there is no white-space after *tree*. The same will of course happen if you have newlines between the word information and no trailing newline after the last word. Using `if(inStream)` (which the while loop does implicitly) is safe: it would still evaluate to true in this case, and only to false if it didn't read anything except white-space. | You should make an `operator==` on WordInfo to compare it to a string; then you can just do this in the reading loop:
```
if ( word == "synonyms" ) break;
``` | How can I break the reading this text file (using ifstream)? C++ | [
"",
"c++",
"ifstream",
""
] |
I'm having issues getting Firefox to update a webpage when its class is changed dynamically.
I'm using an HTML `table` element. When the user clicks a cell in the table header, my script toggles the class back and forth between `sorted_asc` and `sorted_des`. I have a pseudo-element which adds an arrow glyph (pointing up or down) depending on which class the cell currently has.
```
.thead .tr .sorted_asc .cell:after {
content: ' \25B2';
}
```
The problem is that when you click the cell header a second time, the page doesn't update the arrow... until the user mouses away from the element. I think it's a bug, as it works fine in Safari, and I don't see any `:hover` rules in my CSS or other entries that might interfere.
Anyone seen this before, or know how to work around the issue? | It's kind of cheesy, but since you're using JavaScript anyway, try this after you've changed the className:
```
document.body.style.display = 'none';
document.body.style.display = 'block';
```
This will re-render the layout and often solves these kinds of bugs. Not always, though. | This is 2014 and none of the proposed solutions on this page seem to work. I found another way: detach the element from the DOM and append it back where it was. | How can I convince Firefox to redraw CSS Pseudo-elements? | [
"",
"javascript",
"html",
"css",
"firefox",
""
] |
Having a table with a column like: `mydate DATETIME` ...
I have a query such as:
```
SELECT SUM(foo), mydate FROM a_table GROUP BY a_table.mydate;
```
This will group by the full `datetime`, including hours and minutes. I wish to group by only the date, `YYYY/MM/DD`, not by `YYYY/MM/DD/HH/mm`.
How to do this? | Cast the datetime to a date, then GROUP BY using this syntax:
```
SELECT SUM(foo), DATE(mydate) FROM a_table GROUP BY DATE(a_table.mydate);
```
Or you can GROUP BY the alias as @orlandu63 suggested:
```
SELECT SUM(foo), DATE(mydate) DateOnly FROM a_table GROUP BY DateOnly;
```
Though I don't think it'll make any difference to performance, it is a little clearer. | I found that I needed to group by month and year, so neither of the above worked for me. Instead I used `DATE_FORMAT`:
```
SELECT date
FROM blog
GROUP BY DATE_FORMAT(date, "%m-%y")
ORDER BY YEAR(date) DESC, MONTH(date) DESC
``` | Group by date only on a Datetime column | [
"",
"sql",
"mysql",
""
] |
A few weeks back I was using std::ifstream to read in some files, and it was failing immediately on open because the file was larger than 4GB. At the time I couldn't find a decent answer as to why it was limited to 32-bit file sizes, so I wrote my own using the native OS API.
So, my question then: is there a way to handle files greater than 4GB in size using std::ifstream/std::ostream (i.e., standard C++)?
EDIT: Using the STL implementation from the VC 9 compiler (Visual Studio 2008).
EDIT2: Surely there has to be a standard way to support file sizes larger than 4GB. | Apparently it depends on how `off_t` is implemented by the library.
```
#include <ios>     // std::streamsize
#include <limits>  // std::numeric_limits
std::streamsize temp = std::numeric_limits<std::streamsize>::max();
```
gives you what the current max is.
[STLport](http://stlport.sourceforge.net/) supports larger files. | I ran into this problem several years ago using gcc on Linux. The OS supported large files, and the C library (fopen, etc.) supported it, but the C++ standard library didn't. It turned out that I had to recompile the C++ standard library using the correct compiler flags.
"",
"c++",
"file-io",
"iostream",
""
] |
Is there a way to configure Velocity to use something other than toString() to convert an object to a string in a template? For example, suppose I'm using a simple date class with a format() method, and I use the same format every time. If all of my velocity code looks like this:
```
$someDate.format('M-D-yyyy')
```
is there some configuration I could add that would let me just say
```
$someDate
```
instead? (Assuming I'm not in a position to just edit the date class and give it an appropriate toString()).
I'm doing this in the context of a webapp built with WebWork, if that helps. | You could also create your own `ReferenceInsertionEventHandler` that watches for your dates and automatically does the formatting for you. | Velocity allows for a JSTL-like utility called velocimacros:
<http://velocity.apache.org/engine/devel/user-guide.html#Velocimacros>
This would allow you to define a macro like:
```
#macro( d $date)
$date.format('M-D-yyyy')
#end
```
And then call it like so:
```
#d($someDate)
``` | Configure velocity to render an object with something other than toString? | [
"",
"java",
"templates",
"velocity",
"tostring",
"webwork",
""
] |
I'm reading "[Pro JavaScript Techniques](http://jspro.org/)" by [John Resig](https://en.wikipedia.org/wiki/John_Resig), and I'm confused by an example. This is the code:
```
// Create a new user object that accepts an object of properties
function User( properties ) {
// Iterate through the properties of the object, and make sure
// that it's properly scoped (as discussed previously)
for ( var i in properties ) { (function(){
// Create a new getter for the property
this[ "get" + i ] = function() {
return properties[i];
};
// Create a new setter for the property
this[ "set" + i ] = function(val) {
properties[i] = val;
};
})(); }
}
// Create a new user object instance and pass in an object of
// properties to seed it with
var user = new User({
name: "Bob",
age: 44
});
// Just note that the name property does not exist, as it's private
// within the properties object
alert( user.name == null );
// However, we're able to access its value using the new getname()
// method, that was dynamically generated
alert( user.getname() == "Bob" );
// Finally, we can see that it's possible to set and get the age using
// the newly generated functions
user.setage( 22 );
alert( user.getage() == 22 );
```
Now, running that in the [Firebug](https://en.wikipedia.org/wiki/Firebug_%28software%29) console (on Firefox 3) reports that *user.getname()* is not a function. I tried doing this:
```
var other = User
other()
window.getname() // --> This works!
```
And it worked!
Why?
Doing:
```
var me = this;
```
seems to work a bit better, but when executing "getname()" it returns '44' (the second property)...
Also I find it strange that it worked on the window object without modification...
And a third question, what's the difference between PEZ's solution and the original? (He doesn't use an anonymous function.) | I started this post with the sole purpose of learning why these things happened, and I finally did. So in case there's someone else interested in the "whys", here they are:
**Why does 'this' change inside the anonymous function?**
A new function, even an anonymous one, declared inside an object or another function *always changes the scope*; in this case it goes back to the global scope ([window](https://developer.mozilla.org/en-US/docs/Web/API/Window)).
*Solution:* all stated in the post above; I think the clearest is executing the anonymous function with *.call(this)*.
**Why does getname() always return the age?**
While the anonymous function gets executed right away, the getters/setters get executed for the first time when they are called. At that moment, the value of **i** will always be the last one, because the loop has already iterated over all the properties... so they will always return properties[i] with the last value, which in this case is the age.
*Solution:* save the `i` value in a variable like this
```
for ( var i in properties ) { (function(){
    var j = i;
    // From now on, use properties[j]
})(); }
``` | **I think it's best not to use the `new` keyword at all when working in JavaScript.**
This is because if you then instantiate the object **without** using the `new` keyword by mistake (e.g. `var user = User()`), *very bad things will happen*... the reason being that in the function (if instantiated without the `new` keyword), `this` will refer to the **global object**, i.e. the `window`...
So therefore, I suggest a better way on how to use class-like objects.
Consider the following example :
```
var user = function (props) {
var pObject = {};
for (p in props) {
(function (pc) {
pObject['set' + pc] = function (v) {
props[pc] = v;
return pObject;
}
pObject['get' + pc] = function () {
return props[pc];
}
})(p);
}
return pObject;
}
```
In the above example, I am creating a new object inside of the function, and then attaching getters and setters to this newly created object.
Finally, I am returning this newly created object. *Note that the `this` keyword is not used anywhere.*
Then, to 'instantiate' a `user`, I would do the following:
```
var john = user({name : 'Andreas', age : 21});
john.getname(); //returns 'Andreas'
john.setage(19).getage(); //returns 19
```
**The best way to avoid falling into pitfalls is by not creating them in the first place**... In the above example, I am avoiding the `new` keyword pitfall (*as I said, not using the `new` keyword when it's supposed to be used will cause bad things to happen*) by **not using `new` at all.** | JavaScript automatic getter/setters (John Resig book) | [
"",
"javascript",
"metaprogramming",
""
] |
I'm trying to design an MMO game using Python...
I have evaluated Stackless, but since it is not mainstream Python and it is a fork, I don't want to use it.
I am trying to choose between
pysage
candygram
dramatis
and
parley
Has anyone tried any of these libraries?
Thanks a lot for your responses | I would go for [pysage](http://code.google.com/p/pysage/).
It has the highest level of abstraction and a lightweight messaging API which will give you lots of flexibility. I would imagine when designing an MMO you will want as much flexibility as possible.
It also takes a page from Erlang's Actor model which is really solid.
It's great that you are trying to build an MMO in Python! Python has good OpenGL bindings for when you want to add graphics.
Hope that helps. | Initially [Twisted Python](http://twistedmatrix.com/trac/) was designed for writing MMOs, but it's not really easy to use. I don't know if there is an Actor implementation for it; perhaps in the [tx project in Launchpad](https://launchpad.net/tx)? | Good python library for designing a mmo? Actor based design | [
"",
"python",
"stackless",
"python-stackless",
"mmo",
""
] |
I have been learning more and more JavaScript; it's a necessity at my job. We have a web application that uses a lot of JavaScript, and I will be doing more and more every day. I have read bits and pieces about design patterns, but was wondering if someone could just give me a cut-and-dried example and definition. Are they something that would benefit me? Or is it more high level? | Design patterns are generic and usually elegant solutions to well-known programming problems. Without knowing what problem you're working on, I would say "Yes", they can help make your code more manageable.
[This link](https://addyosmani.com/resources/essentialjsdesignpatterns/book/) and [this link](https://www.oreilly.com/library/view/ajax-design-patterns/0596101805/ch03.html) make some reference to design patterns in JavaScript. They may be worth reviewing. | One of the most practical and easy-to-use JavaScript-specific design patterns I've encountered is the [Module Pattern](http://yuiblog.com/blog/2007/06/12/module-pattern/), which is a modified [Singleton pattern](http://en.wikipedia.org/wiki/Singleton_pattern) that "namespaces" related code and prevents the global scope from getting cluttered with variables and functions that might conflict with each other in a complicated page. | Are design patterns in JavaScript helpful? And what exactly are they? | [
"",
"javascript",
"design-patterns",
""
] |
How can I send keyboard input messages to either the currently selected window or the previously selected window?
I have a program which I use to type some characters which are not present on my keyboard and I would like it if I could just send the input directly rather than me having to copy and paste all the time.
EDIT:
The application of this is typing the German umlauts. I'm an American and I work in Germany. I'm working on an American keyboard, and from time to time I have to type the umlauts / the euro symbol / the sharp S. Currently I have a simple WinForms application with a textfield and some buttons with the extra characters on it. I type into the textfield and I can press the buttons to append text to the textfield. I then copy the text and paste it wherever. What would be nice, though, is if I could just hit one of the buttons and it would send the text wherever I'm typing / was typing. The current program works fairly well, but I could make it better. | [SendKeys.Send](http://msdn.microsoft.com/en-us/library/system.windows.forms.sendkeys.send.aspx) will help you with that. | Look at System.Windows.Forms.SendKeys.Send( string ). This allows key presses to be sent to the currently active application.
Update: just found this on MSDN forums: [MSDN Forum](http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2766639&SiteID=1) | C# Sending Keyboard Input | [
"",
"c#",
"windows",
""
] |
Does Java 6 consume more memory than you expect for largish applications?
I have an application I have been developing for years, which has, until now, taken about 30-40 MB in my particular test configuration; now, with Java 6u10 and 11, it is taking several hundred MB while active. It bounces around a lot, anywhere between 50M and 200M, and when it idles, it *does* GC and drops the memory right down. In addition, it generates millions of page faults. All of this is observed via Windows Task Manager.
So, I ran it up under my profiler (jProfiler) and using jVisualVM, and both of them indicate the usual moderate heap and perm-gen usages of around 30M combined, even when fully active doing my load-test cycle.
So I am mystified! And it is not just requesting more memory from the Windows Virtual Memory pool - this is showing up as 200M "Mem Usage".
CLARIFICATION: I want to be perfectly clear on this - observed over an 18 hour period with Java VisualVM the class heap and perm gen heap have been perfectly stable. The allocated volatile heap (eden and tenured) sits unmoved at 16MB (which it reaches in the first few minutes), and the use of this memory fluctuates in a perfect pattern of growing evenly from 8MB to 16MB, at which point GC kicks in an drops it back to 8MB. Over this 18 hour period, the system was under constant maximum load since I was running a stress test. This behavior is *perfectly* and *consistently* reproducible, seen over numerous runs. The only anomaly is that while this is going on the memory taken from Windows, observed via Task Manager, fluctuates all over the place from 64MB up to 900+MB.
UPDATE 2008-12-18: I have run the program with -Xms16M -Xmx16M without any apparent adverse affect - performance is fine, total run time is about the same. But memory use in a short run still peaked at about 180M.
Update 2009-01-21: It seems the answer may be in the number of threads - see my answer below.
---
EDIT: And I mean millions of page faults literally - in the region of 30M+.
EDIT: I have a 4G machine, so the 200M is not significant in that regard. | Over the last few weeks I had cause to investigate and correct a problem with a thread pooling object (a pre-Java 6 multi-threaded execution pool), where it was launching far more threads than required. In the jobs in question there could be up to 200 unnecessary threads. And the threads were continually dying and new ones replacing them.
Having corrected that problem, I thought to run a test again, and now it seems the memory consumption is stable (though 20 or so MB higher than with older JVMs).
So my conclusion is that the spikes in memory were related to the number of threads running (several hundred). Unfortunately I don't have time to experiment.
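For anyone hitting the same symptom: a bounded pool from `java.util.concurrent` (standard since Java 5) reuses a fixed set of long-lived workers instead of continually creating and destroying threads. A minimal sketch, not the actual pool from my application:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        // Four long-lived worker threads handle all 100 jobs; no thread churn,
        // and no unbounded growth in thread count (and thus stack memory).
        ExecutorService pool = Executors.newFixedThreadPool(4);
        final AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < 100; i++) {
            pool.execute(new Runnable() {
                public void run() { completed.incrementAndGet(); }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(completed.get()); // prints 100
    }
}
```

Since each thread reserves its own stack, keeping the worker count fixed also keeps that part of the process footprint fixed.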
If someone would like to experiment and answer this with their conclusions, I will accept that answer; otherwise I will accept this one (after the 2 day waiting period).
Also, the page fault rate is way down (by a factor of 10).
Also, the fixes to the thread pool corrected some contention issues. | In response to a discussion in the comments to Ran's answer, here's a test case that proves that the JVM *will* release memory back to the OS under certain circumstances:
```
public class FreeTest
{
public static void main(String[] args) throws Exception
{
byte[][] blob = new byte[60][1024*1024];
for(int i=0; i<blob.length; i++)
{
Thread.sleep(500);
System.out.println("freeing block "+i);
blob[i] = null;
System.gc();
}
}
}
```
I see the JVM process' size decrease when the count reaches around 40, on both Java 1.4 and Java 6 JVMs (from Sun).
You can even tune the exact behaviour with the [-XX:MaxHeapFreeRatio and -XX:MinHeapFreeRatio options](http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp#PerformanceTuning) -- some of the options on that page may also help with answering the original question. | Java 6 Excessive Memory Usage | [
"",
"java",
"memory-management",
""
] |