Prompt (string, lengths 10–31k) | Chosen (string, lengths 3–29.4k) | Rejected (string, lengths 3–51.1k) | Title (string, lengths 9–150) | Tags (list, lengths 3–7) |
|---|---|---|---|---|
I know what PermGen is, what it's used for, why it fails, how to increase it etc.
What I don't know is what PermGen actually stands for. Permanent... Gen... something?
Does anyone know what PermGen actually stands for? | Permanent Generation. Details are of course implementation specific.
Briefly, it contains the Java objects associated with classes and interned strings. In Sun's client implementation with sharing on, `classes.jsa` is memory mapped to form the initial data, with about half read-only and half copy-on-write.
Java objects that are merely old are kept in the Tenured Generation. | PermGen is used by the JVM to hold loaded classes. You can increase it using:
`-XX:MaxPermSize=384m`
if you're using the Sun JVM or OpenJDK.
So if you get an `OutOfMemoryError: PermGen space`, you need to either make PermGen bigger or you might be having class loader problems. | What does PermGen actually stand for? | [
"java",
"permgen"
] |
For my Java game server I send the Action ID of the packet which basically tells the server what the packet is for. I want to map each Action ID (an integer) to a function. Is there a way of doing this without using a switch? | What about this one?
```
HashMap<Integer, Runnable> map = new HashMap<Integer, Runnable>();
map.put(Register.ID, new Runnable() {
public void run() { functionA(); }
});
map.put(NotifyMessage.ID, new Runnable() {
public void run() { functionB(); }
});
// ...
map.get(id).run();
```
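If the handlers also need the packet data, the same dispatch pattern works with a small one-method interface in place of `Runnable`. A hedged sketch (the `PacketHandler` interface, action IDs, and behaviors below are made up for illustration; Java 8 lambdas are used for brevity, but anonymous classes work the same way):

```java
import java.util.HashMap;
import java.util.Map;

public class Dispatcher {
    // Hypothetical handler interface -- like Runnable, but it receives the packet payload.
    interface PacketHandler {
        String handle(String payload);
    }

    private final Map<Integer, PacketHandler> handlers = new HashMap<>();

    public Dispatcher() {
        // Action IDs and behaviors are purely illustrative.
        handlers.put(1, payload -> "registered:" + payload);
        handlers.put(2, payload -> "notified:" + payload);
    }

    public String dispatch(int actionId, String payload) {
        PacketHandler handler = handlers.get(actionId);
        if (handler == null) {
            throw new IllegalArgumentException("Unknown action ID: " + actionId);
        }
        return handler.handle(payload);
    }

    public static void main(String[] args) {
        Dispatcher d = new Dispatcher();
        System.out.println(d.dispatch(1, "alice")); // prints "registered:alice"
    }
}
```

The lookup stays identical to the `Runnable` version — only the handler signature changes.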
(If you need to pass some arguments, define your own interface with a function having a suitable parameter, and use that instead of Runnable). | Another similar approach could be using Java 8's Suppliers:
```
Map<Integer, Supplier<T>> suppliers = new HashMap<>();
suppliers.put(1, () -> methodOne());
suppliers.put(2, () -> methodTwo());
// ...
public T methodOne() { ... }
public T methodTwo() { ... }
// ...
T obj = suppliers.get(id).get(); // Supplier exposes get(), not run()
``` | Function pointers/delegates in Java? | [
"java",
"delegates",
"function-pointers"
] |
I haven't used a statically typed language in many years and have set myself the task of getting up to speed with C#. I'm using my usual trick of following the fifteen exercises here <http://www.jobsnake.com/seek/articles/index.cgi?openarticle&8533> as my first task.
I've just finished the second Fibonacci task, which didn't take too long and works just fine, but in my opinion it looks ugly and I'm sure it could be achieved in far fewer lines of more elegant code.
I usually like to learn by pair programming with someone who already knows what they're doing, but that option isn't open to me today, so I'm hoping posting here will be the next best thing.
So to all the C# Jedis out there: if you were going to refactor the code below, what would it look like?
```
using System;
using System.Collections;
namespace Exercises
{
class MainClass
{
public static void Main(string[] args)
{
Console.WriteLine("Find all Fibonacci numbers between:");
int from = Convert.ToInt32(Console.ReadLine());
Console.WriteLine("And:");
int to = Convert.ToInt32(Console.ReadLine());
Fibonacci fibonacci = new Fibonacci();
fibonacci.PrintArrayList(fibonacci.Between(from, to));
}
}
class Fibonacci
{
public ArrayList Between(int from, int to)
{
int last = 1;
int penultimate = 0;
ArrayList results = new ArrayList();
results.Add(penultimate);
results.Add(last);
while(last<to)
{
int fib = last + penultimate;
penultimate = last;
last = fib;
if (fib>from && fib<to) results.Add(fib.ToString());
}
return results;
}
public void PrintArrayList(ArrayList arrayList)
{
Console.WriteLine("Your Fibonacci sequence:");
Console.Write(arrayList[0]);
for(int i = 1; i<arrayList.Count; i++)
{
Console.Write("," + arrayList[i]);
}
Console.WriteLine("");
}
}
}
```
Regards,
Chris | As an iterator block:
```
using System;
using System.Collections.Generic;
using System.Linq;
static class Program {
static IEnumerable<long> Fibonacci() {
long n = 0, m = 1;
yield return 0;
yield return 1;
while (true) {
long tmp = n + m;
n = m;
m = tmp;
yield return m;
}
}
static void Main() {
foreach (long i in Fibonacci().Take(10)) {
Console.WriteLine(i);
}
}
}
```
This is now fully lazy, and using LINQ's `Skip`/`Take` etc allows you to control the start/end easily. For example, for your "between" query:
```
foreach (long i in Fibonacci().SkipWhile(x=>x < from).TakeWhile(x=>x <= to)) {...}
``` | If you prefer recursion instead of the loop:
```
public static void Main(string[] args)
{
Func<int, int> fib = null;
fib = n => n > 1 ? fib(n - 1) + fib(n - 2) : n;
int start = 1;
int end = 10;
var numbers = Enumerable.Range(start, end).Select(fib);
foreach (var number in numbers)
{
Console.WriteLine(number);
}
}
``` | Refactoring Fibonacci Algorithm | [
"c#",
"refactoring"
] |
Supposing to have something like this:
```
#include <map>
int main(){
std::map<int,int> m;
m[1] = 2;
m[2] = 4;
return 0;
}
```
I would like to be able to inspect the contents of the map running the program from gdb.
If I try using the subscript operator I get:
```
(gdb) p m[1]
Attempt to take address of value not located in memory.
```
Using the find method does not yield better results:
```
(gdb) p m.find(1)
Cannot evaluate function -- may be inlined
```
Is there a way to accomplish this? | I think there isn't, at least not if your source is optimized etc. However, there are some macros for gdb that can inspect STL containers for you:
<http://sourceware.org/ml/gdb/2008-02/msg00064.html>
However, I don't use this, so YMMV. | The existing answers to this question are **very** out of date. With a recent GCC and GDB it Just Works™ thanks to the built-in Python support in GDB 7.x and the libstdc++ pretty printers that come with GCC.
For the OP's example I get:
```
(gdb) print m
$1 = std::map with 2 elements = {[1] = 2, [2] = 4}
```
If it doesn't work automatically for you see the first bullet point on the [STL Support](http://sourceware.org/gdb/wiki/STLSupport) page of the GDB wiki.
You can write Python pretty printers for your own types too, see [Pretty Printing](http://sourceware.org/gdb/current/onlinedocs/gdb/Pretty-Printing.html#Pretty-Printing) in the GDB manual. | Inspecting standard container (std::map) contents with gdb | [
"c++",
"stl",
"dictionary",
"gdb"
] |
I'm trying to work out a way of passing the web current http context to a service class (or initialising the class with a reference to it). I am doing this to abstract the rest of the app away from needing to know anything about the http context.
I also want the service to be testable using TDD, probably using one of the Mockable frameworks. Hence it would be preferable to use an interface rather than an actual class.
An example of what I'd like to achieve:
```
class WebInstanceService
{
private IHttpContext _Context;
public WebInstanceService( ... , IHttpContext HttpContext )
{
....
_Context = HttpContext;
}
// Methods...
public string GetInstanceVariable(string VariableName)
{
return _Context.Current.Session[VariableName];
}
}
```
One of the main issues I have is that there is no IHttpContext, the .net http context is a subclass of an abstract class which can't be mocked (easily?).
Another issue is that I can't initialise global instances of the class as then the context won't be relevant for most requests.
I could make the class static, and require the Context to be passed to each function as it is called i.e.
```
public static string GetInstanceVariable(string VariableName, HttpContext Context)
{ ... }
```
but this doesn't make the class any easier to test, I still need to create an HttpContext and additionally any non-web-aware services which want to use this class suddenly need to be able to retrieve the Context requiring them to be closely coupled to the web server - the whole reason for wanting to create this class in the first place.
I'm open to ALL suggestions - particularly those which people know facilitate easy tdd testing. **How would people suggest I tackle this problem?**
Cheers | This is why HttpContextBase and HttpContextWrapper were introduced. You probably want to use HttpContextBase and when passing the real context in, use `new HttpContextWrapper( httpContext )`, although, I think that what is available to you in the controller is already of type HttpContextBase. I would create one of these in my controller each time rather than trying to reference the current context from the static, global HttpContext.Current instance. If you need it in your view, pass a reference to your strongly typed context in ViewData.
I mock up HttpContextBase frequently in my tests.
```
class WebInstanceService
{
private HttpContextBase _Context;
public WebInstanceService( ... , HttpContextBase HttpContext )
{
....
_Context = HttpContext;
}
// Methods...
public string GetInstanceVariable(string VariableName)
{
return _Context.Session[VariableName];
}
}
``` | What we do is spin one of these up <http://haacked.com/archive/2007/06/19/unit-tests-web-code-without-a-web-server-using-httpsimulator.aspx>
Easy as pie: just instantiate an HttpSimulator and fill in the values, and HttpContext.Current gets filled up with whatever you specify.
IHttpContext is something that is in MVC, and apparently one day will be in WebForms. Hopefully that day will be .NET 4. | Passing web context to a 'service' in ASP MVC app | [
"c#",
".net",
"asp.net-mvc",
"tdd",
"mocking"
] |
In the [Boost Signals](http://www.boost.org/doc/html/signals.html "Boost Signals documentation") library, they are overloading the () operator.
Is this a convention in C++? For callbacks, etc.?
I have seen this in code of a co-worker (who happens to be a big Boost fan). Of all the Boost goodness out there, this has only led to confusion for me.
Any insight as to the reason for this overload? | One of the primary goals when overloading operator() is to create a functor. A functor acts just like a function, but it has the advantage of being stateful, meaning it can keep data reflecting its state between calls.
Here is a simple functor example:
```
struct Accumulator
{
int counter = 0;
int operator()(int i) { return counter += i; }
};
...
Accumulator acc;
cout << acc(10) << endl; //prints "10"
cout << acc(20) << endl; //prints "30"
```
Functors are heavily used with generic programming. Many STL algorithms are written in a very general way, so that you can plug your own function/functor into the algorithm. For example, the algorithm std::for\_each allows you to apply an operation to each element of a range. It could be implemented something like this:
```
template <typename InputIterator, typename Functor>
void for_each(InputIterator first, InputIterator last, Functor f)
{
while (first != last) f(*first++);
}
```
You see that this algorithm is very generic since it is parametrized by a function. By using the operator(), this function lets you use either a functor or a function pointer. Here's an example showing both possibilities:
```
void print(int i) { std::cout << i << std::endl; }
...
std::vector<int> vec;
// Fill vec
// Using a functor
Accumulator acc;
std::for_each(vec.begin(), vec.end(), acc);
// acc.counter contains the sum of all elements of the vector
// Using a function pointer
std::for_each(vec.begin(), vec.end(), print); // prints all elements
```
---
Concerning your question about operator() overloading, well yes it is possible. You can perfectly write a functor that has several parenthesis operators, as long as you respect the basic rules of method overloading (e.g. overloading only on the return type is not possible). | It allows a class to act like a function. I have used it in a logging class where the call should be a function, but I wanted the extra benefit of the class.
so something like this:
```
logger.log("Log this message");
```
turns into this:
```
logger("Log this message");
``` | Why override operator()? | [
"c++",
"boost",
"operator-overloading",
"functor",
"function-call-operator"
] |
Is there C# look-alike for Linux? What about a compiler? | There is another language called [Vala](http://live.gnome.org/Vala). It's not well known, but as you can see from the page, an interesting amount of projects have been produced already. | You could actually use C# with [Mono](http://www.mono-project.com/). | Is there C# look-alike for Linux? | [
"c#",
"linux",
"compiler-construction"
] |
I'm looking for a good, simple PHP function to get my latest Facebook status updates. Anyone know of one?
Thanks!
**EDIT:** I've added a half-solution below.
Or if anyone knows a good way to read in the RSS feed and spit out the recent status update? | Since I couldn't use the API route, I went with the RSS found at: <http://www.new.facebook.com/minifeed.php?filter=11>
And used the following PHP function, [called StatusPress](http://www.paradoxica.net/2007/08/30/statuspress/), with some of my own modifications, to parse the RSS feed for my Facebook status. Works great! | A quick check on [PEAR](http://pear.php.net) found [Services\_Facebook](http://pear.php.net/package/Services_Facebook) | PHP function to get Facebook status? | [
"php",
"facebook",
"status",
"updates"
] |
Most C++ naming conventions dictate the use of `camelCaseIdentifiers`: names that start with an uppercase letter for classes (`Person`, `Booking`) and names that start with a lowercase letter for fields and variables (`getPrice()`, `isValid()`, `largestValue`). These recommendations are completely at odds with the naming conventions of the C++ library, which involve lowercase names for classes (`string`, `set`, `map`, `fstream`) and `names_joined_with_an_underscore` for methods and fields (`find_first_of`, `lower_bound`, `reverse_iterator`, `first_type`). Further complicating the picture are operating system and C library functions, which involve compressed lowercase names in C and Unix and functions starting with an uppercase letter in Windows.
As a result my code is a mess, because some identifiers use the C++ library, C, or operating system naming convention, and others use the prescribed C++ convention. Writing classes or methods that wrap functionality of the library is painful, because one ends with different-style names for similar things.
So, how do you reconcile these disparate naming conventions? | One way is to adopt the C++ `naming_convention`; this is what most code examples in the literature do nowadays.
I slowly see these conventions move into production code but it's a battle against MFC naming conventions that still prevail in many places.
Other style differences that fight against old standards are using trailing underscores rather than `m_` to denote members. | Diomidis, I share your pain and have spent a lot of time switching between different schemes over the years, trying to find something that works with the different libraries/frameworks that I use (MFC and/or STL/Boost). When working with a single framework, such as the STL, you can try and copy the naming convention it uses, but when you introduce a different framework, it easily falls apart.
In the end I have adopted a single style for all new code that I write (based on the Google C++ style guidelines) and I refactor older code to use this style when appropriate. You cannot reconcile the different naming conventions very easily, so don't waste time trying. Enforce a scheme for your team/dept./company and stick to it - but don't get hung up on how 'ugly' the code may look when using a mixture of schemes.
The Google C++ guidelines are pretty good IMHO - with some minor amendments. Check the guide out here:
<https://google.github.io/styleguide/cppguide.html#Naming> | How do you reconcile common C++ naming conventions with those of the libraries | [
"c++",
"coding-style",
"naming-conventions"
] |
I have a spec in my current project that requires us to advise the user which browsers are best to use the web application. If their current browser version they are using is not in our list of "ideal" browsers we want to display a message.
What is the best way to check a specific version of the users browser. I am aware of the following using jQuery but this doesn't help with specific versions.
```
$(document).ready(function() {
var b = '';
$.each($.browser, function(i, val) {
if (i=='safari' && val==true) { b = 'safari'; }
if (i=='opera' && val==true) { b = 'opera'; }
if (i=='msie' && val==true) { b = 'msie'; }
if (i=='mozilla' && val==true) {b = 'mozilla'; }
});
//Do Something With b, Like $('#dis').html(b);
});
```
We want to be able to say: is your browser Firefox 2 or greater, IE6 or greater, etc.? | [Here](http://davecardwell.co.uk/javascript/jquery/plugins/jquery-browserdetect/) is a jQuery plugin that'll help | Also check for $.browser.version in the jQuery docs (docs.jquery.com).
It can return 2.0 for Firefox 2.x.x, check the docs :) | Browser version Detection | [
"javascript",
"jquery",
"browser"
] |
I want to display an image at 'true size' in my application. For that I need to know the pixel size of the display.
I know windows display resolution is nominally 96dpi, but for my purposes I want a better guess. I understand this information may not always be available or accurate (e.g. older CRT displays), but I imagine with the prevelance of LCD displays that this should be possible!
Is there a way to get the pixel size of my display?
Is there a way to determine if the pixel size is accurate?
.NET API's preferred (I couldn't find them), but Win32 is OK too, I'm happy to P/Invoke. | For the display size you'll want [`Screen`](http://msdn.microsoft.com/en-us/library/system.windows.forms.screen.aspx)`.PrimaryScreen.Bounds.Size` (or `Screen.GetBounds(myform)`).
If you want the DPI, use the [DpiX](http://msdn.microsoft.com/en-us/library/system.drawing.graphics.dpix.aspx) and [DpiY](http://msdn.microsoft.com/en-us/library/system.drawing.graphics.dpiy.aspx) properties of [Graphics](http://msdn.microsoft.com/en-us/library/system.drawing.graphics.aspx):
```
PointF dpi = PointF.Empty;
using(Graphics g = this.CreateGraphics()){
dpi.X = g.DpiX;
dpi.Y = g.DpiY;
}
```
---
Oh, wait! You wanted actual, hold a ruler up to the monitor and measure, size?! No. Not possible using *any* OS services. The OS doesn't know the actual dimensions of the monitor, or how the user has it calibrated. Some of this information is theoretically detectable, but it's not deterministic enough for the OS to use it reliably, so it doesn't.
As a work around, you can try a couple of things.
* You can try to query the display string of the installed monitor device (I'm not sure how to do that) and see if you can parse out a sensible size out of that. For example, the monitor might be a "ValueBin E17p", and you *might* deduce that it's a 17" monitor from that. Of course, this display string is likely to be "Plug and Play Monitor". This scheme is pretty sketchy at best.
* You could *ask* the user what size monitor they have. *Maybe* they'll know.
Once you know (or *think* you know) the monitor's diagonal size, you need to find its physical aspect ratio. Again, a couple of things:
* Assume the current pixel aspect ratio matches the monitor's physical aspect ratio. This assumes that (A) the user has chosen a resolution that is ideal for their monitor, and that (B) the monitor has square pixels. I don't know of a current consumer-oriented computer monitor that doesn't have square pixels, but older ones did and newer ones might.
* Ask the user. *Maybe* they'll know.
Once you know (or *think* you know) the monitor's diagonal size and physical aspect ratio, you can calculate its physical width and height. A² + B² = C², so a few calculations will give it to you:
> If you found out that it's a 17" monitor, and its current resolution is 1280 x 1024:
> 1280² + 1024² = 2686976
> Sqrt(2686976) = 1639.1998047828092637409837247032
> 17" \* 1280 / 1639.2 = 13.274768179599804782820888238165"
> 17" \* 1024 / 1639.2 = 10.619814543679843826256710590532"
This puts the physical width at 13.27" and the physical height at 10.62". This makes the pixels 13.27" / 1280 = 10.62" / 1024 = 0.01037" or about 0.263 mm.
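The arithmetic above can be packaged as a small helper. A sketch (written in Java purely for illustration — the formula itself is language-neutral — and it assumes square pixels, which, as noted, may not hold):

```java
public class MonitorMath {
    // Given the diagonal size in inches and the pixel resolution, returns
    // { widthInches, heightInches }, assuming square pixels.
    static double[] physicalSize(double diagonalInches, int pxWidth, int pxHeight) {
        double diagonalPx = Math.sqrt((double) pxWidth * pxWidth + (double) pxHeight * pxHeight);
        return new double[] {
            diagonalInches * pxWidth / diagonalPx,
            diagonalInches * pxHeight / diagonalPx
        };
    }

    public static void main(String[] args) {
        // The 17" / 1280x1024 example worked through above.
        double[] size = physicalSize(17.0, 1280, 1024);
        System.out.printf("%.2f x %.2f inches%n", size[0], size[1]); // 13.27 x 10.62 inches
    }
}
```

For the 17" monitor at 1280 x 1024, this reproduces the 13.27" by 10.62" result from the worked example.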
Of course, all of this is invalid if the user doesn't have a suitable resolution, the monitor has wacky non-square pixels, or it's an older analog monitor and the controls aren't adjusted properly for the display to fill the entire physical screen. Or worse, it could be a projector.
In the end, you may be best off performing a calibration step where you have the user *actually hold a ruler up to the screen*, and measure the size of something for you. You could:
* Have the user click the mouse on any two points an inch (or a centimeter) apart.
* Draw a box on the screen and have the user press the up and down arrows to adjust its height, and the left and right arrows to adjust its width, until the box is exactly one inch (or centimeter) square according to their ruler.
* Draw a box on the screen and have the user tell you how many inches/centimeters it is in each dimension.
**No matter what you do**, don't expect your results to be 100% accurate. There are way too many factors at play for you (or the user) to get this exactly correct, every time.
Be aware that 96 dpi is usually *pretty close* to accurate. Modern pixels on non-projected screens all tend to be about 0.25 mm, give or take, so you usually end up with *about* 100 physical pixels per inch, give or take, if the monitor is set to its native resolution. (Of course, this is a huge generalization and does **not** apply to all monitors. Eee PCs, for example, have pixels about 0.19 mm in size, if I remember the specs correctly.) | sorry, you've got to P/Invoke for this information.
Here's the link that I utilized for it a while ago:
<http://www.davidthielen.info/programming/2007/05/get_screen_dpi_.html> | How do I determine the true pixel size of my Monitor in .NET? | [
"c#",
".net",
"windows"
] |
I haven't looked at this in a while, but if I recall correctly both ant and maven still rely on JUnit 3 for unit tests (as of maven 2.09 the default POM file still has JUnit 3.81).
Does it still make sense to stick to JUnit 3 instead of using the latest and greatest? Any good reason I might be missing? | I don't see a reason to stick to the 3.x versions. Most tools have been compatible with 4.x for a while now. The only reason I would stick to 3.x is in a java 1.4 environment (because there is no other way).
By the way, maven is switching to Java 5 in 2.1, so there is a chance they will propose junit 4.x | JUnit 4 has lots of advantages over 3.x. The most important is that you no longer have to extend TestCase, nor do your test methods have to begin with "test." It's all annotation-based now. You can also add the Hamcrest matchers, which gives you a really nice and expressive way of writing test assertions.
If you're stuck on a pre-Java-1.5 project, you may have to stick with JUnit 3.x, though. | Is running tests with JUnit 3.x vs JUnit 4.x still a best practice? | [
"java",
"unit-testing",
"junit"
] |
I'm currently wrestling with an Oracle SQL DATE conversion problem using iBATIS from Java.
Am using the Oracle JDBC thin driver ojdbc14 version 10.2.0.4.0. iBATIS version 2.3.2. Java 1.6.0\_10-rc2-b32.
The problem revolves around a column of DATE type that is being returned by this snippet of SQL:
```
SELECT *
FROM TABLE(pk_invoice_qry.get_contract_rate(?,?,?,?,?,?,?,?,?,?)) order by from_date
```
The package procedure call returns a ref cursor that is being wrapped in a TABLE to where is then easy to read the result set as though were a select query against a table.
In PL/SQL Developer, one of the columns returned, FROM\_DATE, of SQL DATE type, has precision to time of day:
```
Tue Dec 16 23:59:00 PST 2008
```
But when I access this via iBATIS and JDBC, the value only retains precision to day:
```
Tue Dec 16 12:00:00 AM PST 2008
```
This is clearer when displayed like so:
Should have been:
```
1229500740000 milliseconds since epoch
Tuesday, December 16, 2008 11:59:00 PM PST
```
But getting this instead:
```
1229414400000 milliseconds since epoch
Tuesday, December 16, 2008 12:00:00 AM PST
(as instance of class java.sql.Date)
```
No matter what I try, I am unable to expose the full precision of this DATE column to be returned via Java JDBC and iBATIS.
What iBATIS is mapping from is this:
```
FROM_DATE : 2008-12-03 : class java.sql.Date
```
The current iBATIS mapping is this:
```
<result property="from_date" jdbcType="DATE" javaType="java.sql.Date"/>
```
I've also tried:
```
<result property="from_date" jdbcType="DATETIME" javaType="java.sql.Date"/>
```
or
```
<result property="from_date" jdbcType="TIMESTAMP" javaType="java.sql.Timestamp"/>
```
But all attempted mappings yield the same truncated Date value. It's as though JDBC has already done the damage of losing data precision before iBATIS even touches it.
Clearly I'm losing some of my data precision by going through JDBC and iBATIS that is not happening when I stay in PL/SQL Developer running the same SQL snippet as a test script. Not acceptable at all, very frustrating, and ultimately very scary. | The full info (and it's more complex than described here and might depend upon which particular version of the Oracle drivers are in use) is in Richard Yee's answer here - [now expired link to Nabble]
---
Quick grab before it expires from nabble...
Roger,
See: <http://www.oracle.com/technetwork/database/enterprise-edition/jdbc-faq-090281.html#08_01>
Specifically:
Simple Data Types
What is going on with DATE and TIMESTAMP?
This section is on simple data types. :-)
Prior to 9.2, the Oracle JDBC drivers mapped the DATE SQL type to java.sql.Timestamp. This made a certain amount of sense because the Oracle DATE SQL type contains both date and time information as does java.sql.Timestamp. The more obvious mapping to java.sql.Date was somewhat problematic as java.sql.Date does not include time information. It was also the case that the RDBMS did not support the TIMESTAMP SQL type, so there was no problem with mapping DATE to Timestamp.
In 9.2 TIMESTAMP support was added to the RDBMS. The difference between DATE and TIMESTAMP is that TIMESTAMP includes nanoseconds and DATE does not. So, beginning in 9.2, DATE is mapped to Date and TIMESTAMP is mapped to Timestamp. Unfortunately if you were relying on DATE values to contain time information, there is a problem.
There are several ways to address this problem:
Alter your tables to use TIMESTAMP instead of DATE. This is probably rarely possible, but it is the best solution when it is.
Alter your application to use defineColumnType to define the columns as TIMESTAMP rather than DATE. There are problems with this because you really don't want to use defineColumnType unless you have to (see What is defineColumnType and when should I use it?).
Alter you application to use getTimestamp rather than getObject. This is a good solution when possible, however many applications contain generic code that relies on getObject, so it isn't always possible.
Set the V8Compatible connection property. This tells the JDBC drivers to use the old mapping rather than the new one. You can set this flag either as a connection property or a system property. You set the connection property by adding it to the java.util.Properties object passed to DriverManager.getConnection or to OracleDataSource.setConnectionProperties. You set the system property by including a -D option in your java command line.
java -Doracle.jdbc.V8Compatible="true" MyApp
Oracle JDBC 11.1 fixes this problem. Beginning with this release the driver maps SQL DATE columns to java.sql.Timestamp by default. There is no need to set V8Compatible to get the correct mapping. V8Compatible is strongly deprecated. You should not use it at all. If you do set it to true it won't hurt anything, but you should stop using it.
Although it was rarely used that way, V8Compatible existed not to fix the DATE to Date issue but to support compatibility with 8i databases. 8i (and older) databases did not support the TIMESTAMP type. Setting V8Compatible not only caused SQL DATE to be mapped to Timestamp when read from the database, it also caused all Timestamps to be converted to SQL DATE when written to the database. Since 8i is desupported, the 11.1 JDBC drivers do not support this compatibility mode. For this reason V8Compatible is desupported.
As mentioned above, the 11.1 drivers by default convert SQL DATE to Timestamp when reading from the database. This always was the right thing to do and the change in 9i was a mistake. The 11.1 drivers have reverted to the correct behavior. Even if you didn't set V8Compatible in your application you shouldn't see any difference in behavior in most cases. You may notice a difference if you use getObject to read a DATE column. The result will be a Timestamp rather than a Date. Since Timestamp is a subclass of Date this generally isn't a problem. Where you might notice a difference is if you relied on the conversion from DATE to Date to truncate the time component or if you do toString on the value. Otherwise the change should be transparent.
If for some reason your app is very sensitive to this change and you simply must have the 9i-10g behavior, there is a connection property you can set. Set mapDateToTimestamp to false and the driver will revert to the default 9i-10g behavior and map DATE to Date.
If possible, you should change your column type to TIMESTAMP instead of DATE.
-Richard
| I found out how to solve this problem. iBATIS permits custom type handlers to be registered. So in my sqlmap-config.xml file I added this:
```
<typeAlias alias="OracleDateHandler" type="com.tideworks.ms.CustomDateHandler"/>
<typeHandler callback="OracleDateHandler" jdbcType="DATETIME" javaType="date"/>
```
And then added this class which implements the iBATIS TypeHandlerCallback interface:
```
// corrected getResult()/setParameter() to correctly deal with when value is null
public class CustomDateHandler implements TypeHandlerCallback {
@Override
public Object getResult(ResultGetter getter) throws SQLException {
final Object obj = getter.getTimestamp();
return obj != null ? (Date) obj : null;
}
@Override
public void setParameter(ParameterSetter setter,Object value) throws SQLException {
setter.setTimestamp(value != null ? new Timestamp(((Date)value).getTime()) : null);
}
@Override
public Object valueOf(String datetime) {
return Timestamp.valueOf(datetime);
}
}
```
Whenever I need to map an Oracle DATE I now describe it like so:
```
<result property="from_date" jdbcType="DATETIME" javaType="date"/>
``` | Oracle SQL DATE conversion problem using iBATIS via Java JDBC | [
"",
"java",
"oracle",
"date",
"jdbc",
"ibatis",
""
] |
I'm building a new app that is using NHibernate to generate the database schema but i can see a possible problem in the future.
Obviously all the data in your database is cleared when you update the schema, but what strategies do people use to restore any data to the new database? I am aware that massive changes to the schema will make this hard, but I was wondering how other people have dealt with this problem.
Cheers
Colin G
PS I will not be doing this against the live database, only using it to restore test data for integration tests and continuous integration | When testing, we use NHibernate to create the database, then a series of [builders](http://en.wikipedia.org/wiki/Builder_pattern) to create the data for each test fixture. We also use Sqlite for these tests, so they are lightning fast.
Our builders look something like this:
```
public class CustomerBuilder : Builder<Customer>
{
string firstName;
string lastName;
Guid id = Guid.Empty;
public override Customer Build()
{
return new Customer() { Id = id, FirstName = firstName, LastName = lastName };
}
public CustomerBuilder WithId(Guid newId)
{
id= newId;
return this;
}
public CustomerBuilder WithFirstName(string newFirstName)
{
firstName = newFirstName;
return this;
}
public CustomerBuilder WithLastName(string newLastName)
{
lastName = newLastName;
return this;
}
}
```
and usage:
```
var customer = new CustomerBuilder().WithFirstName("John").WithLastName("Doe").Build();
```
Because every line of code is written with TDD, we build up a comprehensive suite of data from scratch and will generally refactor some of it to factories that will wrap the above and make it a breeze to get dummy data in. | I think it is a good thing in many situations to let NHibernate generate the schema for you. To recreate the test data you either use code driven by a testing framework (such as NUnit) or you could export your test data as a SQL script which you can run after you have updated the schema. | Strategies for using NHibernate to generate a schema | [
"",
"sql",
"nhibernate",
""
] |
[Mono](http://www.mono-project.com/) claims to be compatible with .NET.
Have you tried it?
Can you share any tips or guidelines for making a running .NET application compatible with mono? | Maybe [MoMA](http://www.mono-project.com/MoMA) helps you | See these documents for in-depth answers:
* General document about [Application Portability](http://www.mono-project.com/Guidelines:Application_Portability)
* [Porting Winforms Applications](http://www.mono-project.com/Guide:_Porting_Winforms_Applications)
* [Porting ASP.NET Applications](http://www.mono-project.com/Guide:_Porting_ASP.NET_Applications) | Guidelines to write C# applications for .NET and mono | [
"",
"c#",
".net",
"mono",
"cross-platform",
"porting",
""
] |
In my everlasting quest to suck less I'm trying to understand the "yield" statement, but I keep encountering the same error.
> The body of [someMethod] cannot be an iterator block because
> 'System.Collections.Generic.List< AClass>' is not an iterator interface type.
This is the code where I got stuck:
```
foreach (XElement header in headersXml.Root.Elements()){
yield return (ParseHeader(header));
}
```
What am I doing wrong? Can't I use yield in an iterator? Then what's the point?
In this example it said that `List<ProductMixHeader>` is not an iterator interface type.
`ProductMixHeader` is a custom class, but I imagine `List` is an iterator interface type, no?
**--Edit--**
Thanks for all the quick answers.
I know this question isn't all that new and the same resources keep popping up.
It turned out I was thinking I could use `List<AClass>` as the return type, but since `List<T>` isn't lazy, that can't work. Changing my return type to `IEnumerable<T>` solved the problem :D
A somewhat related question (not worth opening a new thread): is it worth giving `IEnumerable<T>` as a return type if I'm sure that 99% of the cases I'm going to go .ToList() anyway? What will the performance implications be? | A method using *yield return* must be declared as returning one of the following two interfaces:
```
IEnumerable<SomethingAppropriate>
IEnumerator<SomethingApropriate>
```
(thanks [Jon](https://stackoverflow.com/users/22656/jon-skeet) and [Marc](https://stackoverflow.com/users/23354/marc-gravell) for pointing out IEnumerator)
Example:
```
public IEnumerable<AClass> YourMethod()
{
foreach (XElement header in headersXml.Root.Elements())
{
yield return (ParseHeader(header));
}
}
```
yield is a lazy producer of data, only producing another item after the first has been retrieved, whereas returning a list will return everything in one go.
So there is a difference, and you need to declare the method correctly.
For more information, read [Jon's answer here](https://stackoverflow.com/questions/317462/some-help-understanding-yield#317502), which contains some very useful links. | It's a tricky topic. In a nutshell, it's an easy way of implementing IEnumerable and its friends. The compiler builds you a state machine, transforming parameters and local variables into instance variables in a new class. Complicated stuff.
I have a few resources on this:
* [Chapter 6 of C# in Depth](https://www.manning.com/books/c-sharp-in-depth) (free download from that page)
* [Iterators, iterator blocks and data pipelines](http://csharpindepth.com/Articles/Chapter11/StreamingAndIterators.aspx) (article)
* [Iterator block implementation details](http://csharpindepth.com/Articles/Chapter6/IteratorBlockImplementation.aspx) (article) | Some help understanding "yield" | [
"",
"c#",
"iterator",
"yield",
""
] |
I have a small application to convert several file formats, with a main windows form which contains several text boxes, buttons, etc. I have another "Document" class which handles all of the actual conversion work, and my form essentially iterates through all of the files, and calls the proper Document methods to convert the files. While this is happening, however, the form stops responding. I created another simple form with a progress bar and a cancel button to spawn when the conversion starts to provide some feedback to our (easily rattled) users. When the new form loads, however, all of the controls are white boxes, and it too stops responding. Once the conversion completes, the progress bar closes properly, and the main form becomes responsive again.
Is there a simple way to make these two forms independent, so that the progress bar can operate even while the other form is unresponsive? | The simplest solution is to have your processing done on a background thread using the [BackgroundWorker](http://msdn.microsoft.com/en-us/library/system.componentmodel.backgroundworker.aspx "BackgroundWorker") component. You can drag it in from the VS toolbox. It allows you to update the UI thread with progress notifications so you can update your progress bar to show realistic values (something much more user-friendly than having a "marquee" style progress bar). | You should use two threads so that the form continues to respond while you do work.
This is so common that .NET has a BackgroundWorker control that wraps some of this up for you. | Windows Forms: Progress Bar Unresponsive | [
"",
"c#",
"winforms",
""
] |
I'm always surprised that even after using C# for all this time now, I still manage to find things I didn't know about...
I've tried searching the internet for this, but using the "~" in a search isn't working for me so well and I didn't find anything on MSDN either (not to say it isn't there)
I saw this snippet of code recently, what does the tilde(~) mean?
```
/// <summary>
/// Enumerates the ways a customer may purchase goods.
/// </summary>
[Flags]
public enum PurchaseMethod
{
All = ~0,
None = 0,
Cash = 1,
Check = 2,
CreditCard = 4
}
```
I was a little surprised to see it so I tried to compile it, and it worked... but I still don't know what it means/does. Any help?? | ~ is the unary one's complement operator -- it flips the bits of its operand.
```
~0 = 0xFFFFFFFF = -1
```
in two's complement arithmetic, `~x == -x-1`
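Since `~` behaves the same in every C-family language, a small Java sketch (illustrative only; the C# semantics are identical) makes the values concrete:

```
public class TildeDemo {
    public static void main(String[] args) {
        System.out.println(~0);           // -1: every bit set
        System.out.println(~0 == -1);     // true
        System.out.println(~5 == -5 - 1); // true: ~x == -x-1

        // With [Flags]-style values, All = ~0 contains every flag bit,
        // including any flags added later:
        int cash = 1, check = 2, creditCard = 4;
        int all = ~0;
        System.out.println((all & cash) != 0 && (all & check) != 0 && (all & creditCard) != 0); // true
    }
}
```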
the ~ operator can be found in pretty much any language that borrowed syntax from C, including Objective-C/C++/C#/Java/Javascript. | I'd think that:
```
[Flags]
public enum PurchaseMethod
{
None = 0,
Cash = 1,
Check = 2,
CreditCard = 4,
All = Cash | Check | CreditCard
}
```
Would be a bit more clear. | What is the tilde (~) in the enum definition? | [
"",
"c#",
"enums",
"language-features",
"enumeration",
""
] |
I've added a label control to a webform. It appears when I run the page but when I try to reference it using the codebehind, I get no intellisense and the code I write somehow doesn't work. I've verified the project is running in debug, that debug compilation is set in the web.config. In fact, I can debug all pages. However, when I go to this page, debugging runs over it as if there is no breakpoint set. Also, on the breakpoint, it says 'The breakpoint will currently not be hit. The source code is different from the current code' which has to be the main issue. Not sure why that's happening as I recompiled the whole project.
I have this at the top of the page:
`<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="MemberSearch.aspx.cs" Inherits="Portal.EmployerPortal.MemberSearch" Debug="true" %>`
Any ideas? | In the past I've had problems with this because the auto-generated designer files were bad. Try deleting them and then allow VS to recreate them, this may fix the problem. | Something similar happens to me from time to time...I add a control and I get no intellisense...VS basically has no idea the control is on the page/designer. If I close the file (codebehind and HTML) and come back...everything is hunky-dory. Clunky, but it works. Worth a try. I'm using 2008 BTW. | Intellisense not showing up for new controls in code behind | [
"",
"c#",
".net",
"asp.net",
"debugging",
""
] |
Why does C++ have header files and .cpp files? | Well, the main reason would be for separating the interface from the implementation. The header declares "what" a class (or whatever is being implemented) will do, while the cpp file defines "how" it will perform those features.
This reduces dependencies so that code that uses the header doesn't necessarily need to know all the details of the implementation and any other classes/headers needed only for that. This will reduce compilation times and also the amount of recompilation needed when something in the implementation changes.
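As a toy illustration of that split (a hypothetical `Widget`, with both files shown in one snippet for brevity):

```
// widget.h: the interface ("what"), which is all clients ever see
#ifndef WIDGET_H
#define WIDGET_H
class Widget {
public:
    int value() const;  // declared here...
};
#endif

// widget.cpp: the implementation ("how"), compiled on its own
// (in a real project this file would start with #include "widget.h")
int Widget::value() const { return 42; }  // ...defined here
```

Changing the body of `value()` only recompiles widget.cpp; files that merely include widget.h are untouched.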
It's not perfect, and you would usually resort to techniques like the [Pimpl Idiom](http://aszt.inf.elte.hu/~gsd/halado_cpp/ch09s03.html) to properly separate interface and implementation, but it's a good start. | ## C++ compilation
A compilation in C++ is done in 2 major phases:
1. The first is the compilation of "source" text files into binary "object" files: The CPP file is the file that gets compiled, and it is compiled without any knowledge about the other CPP files (or even libraries), unless fed to it through raw declaration or header inclusion. The CPP file is usually compiled into a .OBJ or a .O "object" file.
2. The second is the linking together of all the "object" files, and thus, the creation of the final binary file (either a library or an executable).
Where does the HPP fit in all this process?
## A poor lonesome CPP file...
The compilation of each CPP file is independent from all other CPP files, which means that if A.CPP needs a symbol defined in B.CPP, like:
```
// A.CPP
void doSomething()
{
doSomethingElse(); // Defined in B.CPP
}
// B.CPP
void doSomethingElse()
{
// Etc.
}
```
It won't compile because A.CPP has no way to know "doSomethingElse" exists... Unless there is a declaration in A.CPP, like:
```
// A.CPP
void doSomethingElse() ; // From B.CPP
void doSomething()
{
doSomethingElse() ; // Defined in B.CPP
}
```
Then, if you have C.CPP which uses the same symbol, you then copy/paste the declaration...
## COPY/PASTE ALERT!
Yes, there is a problem. Copy/pastes are dangerous, and difficult to maintain. Which means that it would be cool if we had some way to NOT copy/paste, and still declare the symbol... How can we do it? By the include of some text file, which is commonly suffixed by .h, .hxx, .h++ or, my preferred for C++ files, .hpp:
```
// B.HPP (here, we decided to declare every symbol defined in B.CPP)
void doSomethingElse() ;
// A.CPP
#include "B.HPP"
void doSomething()
{
doSomethingElse() ; // Defined in B.CPP
}
// B.CPP
#include "B.HPP"
void doSomethingElse()
{
// Etc.
}
// C.CPP
#include "B.HPP"
void doSomethingAgain()
{
doSomethingElse() ; // Defined in B.CPP
}
```
### How does `include` work?
Including a file will, in essence, parse and then copy-paste its content in the CPP file.
For example, in the following code, with the A.HPP header:
```
// A.HPP
void someFunction();
void someOtherFunction();
```
... the source B.CPP:
```
// B.CPP
#include "A.HPP"
void doSomething()
{
// Etc.
}
```
... will become after inclusion:
```
// B.CPP
void someFunction();
void someOtherFunction();
void doSomething()
{
// Etc.
}
```
## One small thing - why include B.HPP in B.CPP?
In the current case, this is not needed, and B.HPP has the `doSomethingElse` function declaration, and B.CPP has the `doSomethingElse` function definition (which is, by itself a declaration). But in a more general case, where B.HPP is used for declarations (and inline code), there could be no corresponding definition (for example, enums, plain structs, etc.), so the include could be needed if B.CPP uses those declaration from B.HPP. All in all, it is "good taste" for a source to include by default its header.
## Conclusion
The header file is thus necessary, because the C++ compiler is unable to search for symbol declarations alone, and thus, you must help it by including those declarations.
One last word: You should put header guards around the content of your HPP files, to be sure multiple inclusions won't break anything, but all in all, I believe the main reason for existence of HPP files is explained above.
```
#ifndef B_HPP_
#define B_HPP_
// The declarations in the B.hpp file
#endif // B_HPP_
```
or even simpler (although not standard)
```
#pragma once
// The declarations in the B.hpp file
``` | Why have header files and .cpp files? | [
"",
"c++",
"header-files",
""
] |
Using Python, how would I go about reading in (be from a string, file or url) a mathematical expression (1 + 1 is a good start) and executing it?
Aside from grabbing a string, file or url I have no idea of where to start with this. | Because python supports some algebraic forms, you could do:
```
eval("1 + 1")
```
But this allows the input to execute just about anything defined in your env:
```
eval("__import__('sys').exit(1)")
```
Also, if you want to support something python doesn't support, the approach fails:
```
x³ + y² + c
----------- = 0
z
```
Instead of doing this, you can implement a tokenizer and a parser with [ply](http://www.dabeaz.com/ply/). Evaluating a thing like '1 + 1' ought not take more than ten lines or so.
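As a middle ground between bare `eval` and a full ply grammar, one sketch (Python 3.8+, stdlib `ast` only; the function name is illustrative) walks the parsed tree and allows nothing but arithmetic nodes:

```
import ast
import operator

# Whitelisted operators; anything else is rejected
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def safe_eval(expr):
    """Evaluate a plain arithmetic expression like '1 + 1' without eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression: %r" % node)
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("1 + 1"))        # 2
print(safe_eval("2 * (3 - 1)"))  # 4
```

Anything outside the whitelist, such as a function call, raises `ValueError` instead of executing.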
You could also implement the tokenizer and the parser by hand. Read about LL and LR parsers. Before attempting this, it's better to learn to use parser generators first. | If you are receiving an expression as a string you'll need to parse it into its operators and operands and then process the resulting tree. This isn't a python problem per se, but a general issue of how to deal with mathematical expressions delivered as strings.
A quick google reveals a bunch of lexical parsers for python. | Reading and running a mathematical expression in Python | [
"",
"python",
""
] |
I have a container element which I need to resize as its contents change. It contains 2 absolutely positioned divs which can both change height. If I don't specify the height of the container then anything after the container disappears under the contents.
At the moment I am doing the following but I'd be pleased to find a less laborious alternative:
(container has position:relative, #main and #sidebar are position:absolute, the contents of #sidebar have no positioning specified)
css:
```
div#mapcontainer { position:relative; width:100%; height: 600px; }
div#main { position:absolute; top: 0; left: 10px; width: 500px; height: 400px; }
div#sidebar { position:absolute; top:10px; right:10px; width: 155px; height: 405px;}
```
html:
```
<div id="container">
<div id="main">variable height content here</div>
<div id="sidebar">
<div id="foo">...</div>
<div id="bar">....</div>
...
</div>
</div>
```
js:
```
fixHeights = function() {
var children_height = 0;
$('#sidebar').children().each(function(){children_height += $(this).height();});
$('#container').height(Math.max(children_height, $('#main').height()));
};
``` | This is a very odd question, as div's height is always the height of its children.
Are you floating content in your container div? When you float child content the containing div doesn't act the same anymore.
If you're floating content that extends past the bottom of the container div, add the following div to the very bottom of the children of the container div:
```
<div style="clear:both;"></div>
```
That will not allow children to float over it, thus forcing the containing div to be the height of its tallest child...
```
<div id="container">
<div id="dynamic" style="float:left;width:100px;">dynamic content goes here</div>
<div id="static" style="margin-left:104px;">Lots of static stuff here</div>
<div style="clear:both;"></div>
</div>
```
---
Okay, I'm not sure why you're doing the positioning the way you are, but I've done something similar for a website that had to look like a desktop application. I don't believe there is any way to do this other than with javascript. Html documents are designed to flow, not be rigid. If you want to bail on the javascript, you'll have to let go of the positioning styles and use your floating and clearing divs. It's not *that* horrible... | if you're floating the container div "overflow: auto" can also work magically, esp with regard to the whole IE hasLayout debacle | How to resize a container div to the total height of its children? | [
"",
"javascript",
"jquery",
"dom",
""
] |
I have a string with markup in it which I need to find using Java.
eg.
```
string = abc<B>def</B>ghi<B>j</B>kl
desired output..
segment [n] = start, end
segment [1] = 4, 6
segment [2] = 10, 10
``` | Regular expressions should work wonderfully for this.
Refer to your JavaDoc for
* java.lang.String.split()
* java.util.regex package
* java.util.Scanner
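For instance, `Matcher.find()` with `start()`/`end()` can produce the segment offsets from the example (a sketch: it assumes literal `<B>`/`</B>` tags, and that the desired positions are 1-based offsets counted in the tag-stripped text, which is what the example output implies):

```
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BoldSegments {
    // Returns {start, end} pairs, 1-based, counted in the tag-stripped text
    static int[][] segments(String s) {
        Matcher m = Pattern.compile("<B>(.*?)</B>").matcher(s);
        java.util.List<int[]> out = new java.util.ArrayList<int[]>();
        int removed = 0; // markup characters seen so far
        while (m.find()) {
            removed += 3; // the "<B>" just consumed
            out.add(new int[] { m.start(1) - removed + 1, m.end(1) - removed });
            removed += 4; // the "</B>"
        }
        return out.toArray(new int[0][]);
    }

    public static void main(String[] args) {
        int[][] segs = segments("abc<B>def</B>ghi<B>j</B>kl");
        for (int i = 0; i < segs.length; i++) {
            System.out.println("segment [" + (i + 1) + "] = " + segs[i][0] + ", " + segs[i][1]);
        }
        // segment [1] = 4, 6
        // segment [2] = 10, 10
    }
}
```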
Note: StringTokenizer is not what you want since it splits around *characters*, not strings - the string delim is a list of characters, any one of which will split. It's good for the very simple cases like an unambiguous comma separated list. | Given your example I think I'd use regex and particularly I'd look at the grouping functionality offered by Matcher.
Tom
```
String inputString = "abc<B>def</B>ghi<B>j</B>kl";
String stringPattern = "(<B>)([a-zA-Z]+)(<\\/B>)";
Pattern pattern = Pattern.compile(stringPattern);
Matcher matcher = pattern.matcher(inputString);
while (matcher.find()) { // find() locates each tagged region; matches() would require the entire input to match
String firstGroup = matcher.group(1);
String secondGroup = matcher.group(2);
String thirdGroup = matcher.group(3);
}
``` | What is the best way to find specific tokens in a string (in Java)? | [
"",
"java",
"string",
""
] |
In my current project, I'm producing weekly releases. I've been using the technique described in [this post](http://jebsoft.blogspot.com/2006/04/consistent-version-numbers-across-all.html) to keep the version numbers of all of the assemblies in my project in sync. (I don't presently have any good reason to track the assemblies' version numbers separately, though I'm sure that day will eventually come.)
When I push out a release, I build a new version of the installer. Unlike all of the assemblies, which can get their version numbers from a shared SolutionInfo.cs file, the version number of the installer isn't, as best I can tell, an assembly property. So my release process includes manually advancing the version number in the setup project.
Or, I should say, *usually* includes doing that. I'd like to turn that into something I can't screw up. I'm finding the documentation of setup and deployment projects to be surprisingly opaque (it was quite a bit harder to find out how to make it possible for the MSI to uninstall properly if the user installed it to a non-default path, which is a pretty freaking common use case to be undocumented) and have no idea if it's even possible to do this.
Any ideas?
**Edit:**
Just to clarify, this is a Visual Studio setup and deployment project I'm talking about. | CodeProject has a script to set the version number of an MSI file, which you could run in the pre-built step of the setup project. You find it here:
> <http://www.codeproject.com/KB/install/NewSetupVersion.aspx>
**More Details**
Be aware that with Windows Installer things are a bit more complicated. MSI files (as the one that you create using a VS Setup and Deployment project) not only have a version number but also a product code which is a GUID value. This product code is used by Windows Installer to uniquely identify your product e.g. in Control Panel -> Add Or Remove programs where you can decide to uninstall or repair a product.
However, when changing you MSI version number, this product code must also be changed in a number of cases. MSI technology is poorly documented but you can find some recommendations when to also change the product code on the following MSDN page: <http://msdn.microsoft.com/en-us/library/aa367850(VS.85).aspx>.
In my projects I always generate a new product code for every new version. The script on CodeProject will also change the product code for you.
And one more thing: Windows Installer only checks the first three places of the version number afaik, anything in the fourth place will be ignored, i.e. 2.3.0.1234 is considered equal to 2.3.0.5678. ([ProductVersion](http://msdn.microsoft.com/en-us/library/aa370859%28v=vs.85%29.aspx))
(There is a related article on CodeProject which might also be interesting to you: <http://www.codeproject.com/KB/install/VersionVDProj.aspx>) | Its going to depend on the installer toolkit you are using.
We use TFS Team Build and [WiX v3](http://wix.sourceforge.net/). I have a custom build task that increments the build number in Team build (5.0.0.X for example), then this version number is pushed to the common AssemblyInfo.cs AssemblyFileVersion field. It is also passed by MSBuild to our solutions/projects as a property which is then passed into WiX and used to update the installer version as well.
We probably will need to do better with the assembly versioning someday as well, but right now this has been working pretty well for us. | How to keep the installer's version number in sync with the installed assemblies' version numbers? | [
"",
"c#",
"visual-studio",
"deployment",
""
] |
I want to get the UCS-2 code points for a given UTF-8 string. For example the word "hello" should become something like "0068 0065 006C 006C 006F". Please note that the characters could be from any language including complex scripts like the east asian languages.
So, the problem comes down to "convert a given character to its UCS-2 code point"
But how? Please, any kind of help will be very very much appreciated since I am in a great hurry.
---
*Transcription of questioner's response posted as an answer*
Thanks for your reply, but it needs to be done in PHP v 4 or 5 but not 6.
The string will be a user input, from a form field.
I want to implement a PHP version of utf8to16 or utf8decode like
```
function get_ucs2_codepoint($char)
{
// calculation of ucs2 codepoint value and assign it to $hex_codepoint
return $hex_codepoint;
}
```
Can you help me with PHP or can it be done with PHP with version mentioned above? | [Scott Reynen](https://stackoverflow.com/users/10837/scott-reynen) wrote a function to [convert UTF-8 into Unicode](http://randomchaos.com/documents/?source=php_and_unicode). I found it looking at the [PHP documentation](http://us.php.net/manual/en/function.unicode-encode.php#73422).
```
function utf8_to_unicode( $str ) {
$unicode = array();
$values = array();
$lookingFor = 1;
for ($i = 0; $i < strlen( $str ); $i++ ) {
$thisValue = ord( $str[ $i ] );
if ( $thisValue < ord('A') ) {
// exclude 0-9
if ($thisValue >= ord('0') && $thisValue <= ord('9')) {
// number
$unicode[] = chr($thisValue);
}
else {
$unicode[] = '%'.dechex($thisValue);
}
} else {
if ( $thisValue < 128)
$unicode[] = $str[ $i ];
else {
if ( count( $values ) == 0 ) $lookingFor = ( $thisValue < 224 ) ? 2 : 3;
$values[] = $thisValue;
if ( count( $values ) == $lookingFor ) {
$number = ( $lookingFor == 3 ) ?
( ( $values[0] % 16 ) * 4096 ) + ( ( $values[1] % 64 ) * 64 ) + ( $values[2] % 64 ):
( ( $values[0] % 32 ) * 64 ) + ( $values[1] % 64 );
$number = dechex($number);
$unicode[] = (strlen($number)==3)?"%u0".$number:"%u".$number;
$values = array();
$lookingFor = 1;
} // if
} // if
}
} // for
return implode("",$unicode);
} // utf8_to_unicode
``` | Use an existing utility such as [iconv](http://www.gnu.org/software/libiconv/), or whatever libraries come with the language you're using.
If you insist on rolling your own solution, read up on the [UTF-8](http://en.wikipedia.org/wiki/Utf-8) format. Basically, each code point is stored as 1-4 bytes, depending on the value of the code point. The ranges are as follows:
* U+0000 — U+007F: 1 byte: 0xxxxxxx
* U+0080 — U+07FF: 2 bytes: 110xxxxx 10xxxxxx
* U+0800 — U+FFFF: 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
* U+10000 — U+10FFFF: 4 bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
Where each x is a data bit. Thus, you can tell how many bytes compose each code point by looking at the first byte: if it begins with a 0, it's a 1-byte character. If it begins with 110, it's a 2-byte character. If it begins with 1110, it's a 3-byte character. If it begins with 11110, it's a 4-byte character. If it begins with 10, it's a non-initial byte of a multibyte character. If it begins with 11111, it's an invalid character.
Once you figure out how many bytes are in the character, it's just a matter of bit twiddling. Also note that UCS-2 cannot represent characters above U+FFFF.
Since you didn't specify a language, here's some sample C code (error checking omitted):
```
wchar_t utf8_char_to_ucs2(const unsigned char *utf8)
{
if(!(utf8[0] & 0x80)) // 0xxxxxxx
return (wchar_t)utf8[0];
else if((utf8[0] & 0xE0) == 0xC0) // 110xxxxx
return (wchar_t)(((utf8[0] & 0x1F) << 6) | (utf8[1] & 0x3F));
else if((utf8[0] & 0xF0) == 0xE0) // 1110xxxx
return (wchar_t)(((utf8[0] & 0x0F) << 12) | ((utf8[1] & 0x3F) << 6) | (utf8[2] & 0x3F));
else
return ERROR; // uh-oh, UCS-2 can't handle code points this high
}
``` | How to get code point number for a given character in a utf-8 string? | [
"",
"php",
"unicode",
""
] |
In SQL server you can use the DATENAME function to get the day of week as a string
```
declare @date datetime
set @date = '12/16/08'
select datename(dw, @date)
```
which returns "Tuesday"
and you can use the DATEPART function to get the day of week as an integer
```
declare @date datetime
set @date = '12/16/08'
select datepart(dw, @date)
```
Which returns 3
But say I have a varchar that contains the string "Tuesday" and I want to convert it to its integer representation of 3. Sure, I could write out the conversion without much hassle, but I'd much rather use a built-in function. Does such a function exist? | Rather than write a function, you should create a days of the week table with the description and the numeric value. THen you can simply join to the table to get the numeric.
And if you have days stored multiple ways (likely in a character-based system), you can put all the variants into the table, so TUE, Tues., Tuesday would all map to the same integer. | Unfortunately there isn't a built-in function, but you can create your own like this:
```
CREATE FUNCTION dbo.WeekDay(@DayOfWeek Varchar(9))
RETURNS INT
AS
BEGIN
DECLARE @iDayofWeek INT
SELECT @iDayofWeek = CASE @DayOfWeek
WHEN 'Sunday' THEN 1
WHEN 'Monday' THEN 2
WHEN 'Tuesday' THEN 3
WHEN 'Wednesday' THEN 4
WHEN 'Thursday' THEN 5
WHEN 'Friday' THEN 6
WHEN 'Saturday' THEN 7
END
RETURN (@iDayofWeek)
END
GO
``` | Converting the name of a day to its integer representation | [
"",
"sql",
"sql-server",
"type-conversion",
""
] |
In trying to make my Javascript unobtrusive, I'm using `onLoad`s to add functionality to `<input>`s and such. With Dojo, this looks something like:
```
var coolInput = dojo.byId('cool_input');
if(coolInput) {
dojo.addOnLoad(function() {
coolInput.onkeyup = function() { ... };
});
}
```
Or, approximately equivalently:
```
dojo.addOnLoad(function() {
dojo.forEach(dojo.query('#cool_input'), function(elt) {
elt.onkeyup = function() { ... };
});
});
```
Has anyone written an implementation of Ruby's [andand](http://github.com/raganwald/andand/tree/master) so that I could do the following?
```
dojo.addOnLoad(function() {
// the input's onkeyup is set iff the input exists
dojo.byId('cool_input').andand().onkeyup = function() { ... };
});
```
or
```
dojo.byId('cool_input').andand(function(elt) {
// this function gets called with elt = the input iff it exists
dojo.addOnLoad(function() {
elt.onkeyup = function() { ... };
});
});
``` | The exact syntax you want is not possible in JavaScript. The way JavaScript executes would need to change in a pretty fundamental fashion. For example:
```
var name = getUserById(id).andand().name;
// ^
// |-------------------------------
// if getUserById returns null, execution MUST stop here |
// otherwise, you'll get a "null is not an object" exception
```
However, JavaScript doesn't work that way. It simply doesn't.
The following line performs almost exactly what you want.
```
var user, name = (user = getUserById(id)) ? user.name : null;
```
But readability won't scale to larger examples. For example:
```
// this is what you want to see
var initial = getUserById(id).andand().name.andand()[0];
// this is the best that JavaScript can do
var user, name, initial = (name = (user = getUserById(id)) ? user.name : null) ? name[0] : null;
```
And there is the side-effect of those unnecessary variables. I use those variables to avoid the double lookup. The variables are mucking up the context, and if that's a huge deal, you can use anonymous functions:
```
var name = (function() { var user = getUserById(id); return user ? user.name : null; })();
```
Now, the user variable is cleaned-up properly, and everybody's happy. But wow! what a lot of typing! :) | I don't know Dojo, but shouldn't your first example read
```
dojo.addOnLoad(function() {
var coolInput = dojo.byId('cool_input');
if(coolInput)
coolInput.onkeyup = function() { ... };
});
```
Otherwise, you might end up trying to access the element before the DOM has been built.
Back to your question: In JavaScript, I'd implement `andand()` as
```
function andand(obj, func, args) {
return obj && func.apply(obj, args || []);
}
```
Your example could then be written as
```
dojo.addOnLoad(function() {
andand(dojo.byId('cool_input'), function() {
this.onkeyup = function() { ... };
});
});
```
which isn't really that much shorter than using the explicit `if` statement - so why bother? | Is there a Javascript equivalent of Ruby's andand? | [
"",
"javascript",
"ruby",
"andand",
""
] |
I have a lot of CPU- and IO-intensive code in a shutdown hook. Sometimes I see that it is not executed properly.
So is there any limitation, such as a rule that we should not have a lot of code in a shutdown hook?
Thanks
D. L. Kumar | Simply No. [Read carefully API](http://java.sun.com/j2se/1.4.2/docs/api/java/lang/Runtime.html#addShutdownHook(java.lang.Thread)) and you will see. There is limited time to shutdown. If you extend over this time slot the JVM will simply end/terminate without notification. If you are writing data, you will write down corrupted data. | I have placed some fairly substantial code in a shutdown hook for one of my apps, and it executes very reliably on a variety of O/S's (Windows XP, Vista, AS400/iSeries/Systemi, Linux) under a variety of circumstances, including sigterms and Windows shutdown.
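For reference, a hook is just a `Thread` registered with `Runtime.addShutdownHook`; a minimal sketch (the class name and message here are made up for illustration):

```java
// Minimal shutdown-hook registration. Keep the hook's work short:
// the JVM gives hooks only a limited window before terminating.
public class HookDemo {
    public static void main(String[] args) {
        Thread hook = new Thread(new Runnable() {
            public void run() {
                // quick cleanup only - flush/close, no heavy CPU or IO work
                System.out.println("shutting down");
            }
        });
        Runtime.getRuntime().addShutdownHook(hook);
        // ... normal application work ...
    }
}
```

A hook can also be deregistered with `Runtime.removeShutdownHook` if the cleanup later becomes unnecessary.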
But beware of any system's shutdown timeout (a la Windows' good ol' "this program is not responding" dialog).
Also beware true O/S kill commands, like Linux sigkill (?) and Windows task manager's end process. | Can I place lot of CPU and IO intensive code in ShutDown Hook | [
"",
"java",
""
] |
I know Java and would normally put in getter/setter methods. I am interested in doing it in C# with the following code, but it throws a StackOverflowException. What am I doing wrong?
Calling Code
```
c.firstName = "a";
```
Property Code
```
public String firstName
{
get
{
return firstName;
}
set
{
firstName = value;
}
}
``` | It's because you're recursively calling the property - in the `set` you are setting the property again, which continues *ad infinitum* until you blow the stack.
You need a private backing field to hold the value, e.g.
```
private string firstName;
public string FirstName
{
get
{
return this.firstName;
}
set
{
this.firstName = value;
}
}
```
Alternatively, if you're using C# 3.0, you could use an auto-property, which creates a hidden backing field for you, e.g.
```
public string FirstName { get; set; }
``` | You are setting the property name inside your property--not the field name. This would work better:
```
private string m_firstName;
public String firstName
{
get
{
return m_firstName;
}
set
{
m_firstName = value;
}
}
``` | Why does Property Set throw StackOverflow exception? | [
"",
"c#",
""
] |
I have an MS SQL server application where I have defined my relationships and primary keys.
However, do I need to further define indexes on relationship fields which are sometimes not used in joins but only as part of a WHERE clause?
I am working on the assumption that defining a relationship creates an index, which the sql engine can reuse. | No indexes will be automatically created on foreign keys constraint. But unique and primary key constraints will create theirs.
Creating indexes on the queries you use, be it on joins or on the WHERE clause is the way to go. | Some very thick books have been written on this subject!
Here are some rules of thumb:
Don't bother indexing (apart from the PK) any table with < 1000 rows.
Otherwise index all your FKs.
Examine your SQL and look for the WHERE clauses that will most reduce your result sets and index that column.
E.g. given:
```
SELECT OWNER FROM CARS WHERE COLOUR = 'RED' AND MANUFACTURER = 'BMW' AND ECAP = '2.0';
```
You may have 5000 red cars out of 20,000, so indexing this won't help much.
However, you may only have 100 BMWs, so indexing MANUFACTURER will immediately reduce your result set to 100, and you can eliminate the blue and white cars by simply scanning through the hundred rows.
Generally the dbms will pick one or two of the indexes available based on cardinality so it pays to second guess and define only those indexes that are likely to be used. | SQL Relationships and indexes | [
"",
"sql",
"sql-server",
"t-sql",
""
] |
I'm having problems allocating and deallocating my memory in a recursive C++ program. So without using an automatic memory management solution, I wonder if anyone can help me resolve the memory leak I am experiencing.
The following code essentially explains the problem (although it's a contrived example, please correct any mistakes or simplifications I've made).
A number class to hold the value of a number:
```
class Number {
public:
Number() { value = 1; };
Number& operator + (const Number& n1) const {
Number result = value + n1.value;
return result;
};
int value;
};
```
Two functions to perform the recursion:
```
Number& recurse(const Number& v1) {
Number* result = new Number();
Number one = Number();
*result = *result + recurse(one);
return *result;
}
int main(...) {
Number answer = Number();
answer = recurse(answer);
}
```
As you can see the memory allocated in the recurse function is leaked, but I'm not sure where I can free up this memory from based on the nature of the recursion? | The problem is here:
```
Number& operator + (const Number& n1) const {
Number result = value + n1.value;
return result;
};
```
You're returning a local variable (`result`) by reference, and that's a big NO-NO. Local variables are allocated on the stack, and when the function exits, the variables are gone. Returning a reference to a local variable is returning a pointer into the stack that's now being used for something else, and that's going to cause lots of badness.
What you should instead do is return by value (just change the return type from `Number&` to `Number`). Make sure you have an appropriate copy constructor, or that the compiler's automatically generated copy constructor suits your needs. This means when `operator+` returns, it makes a copy (which can often by optimized away), and since there's no pointers or references involved, you can't get a corrupted return value.
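Putting that together, a leak-free version might look like this (illustrative sketch: the depth parameter is invented here to give the recursion a base case, which the original code lacks):

```cpp
// The question's Number class with operator+ returning by value.
class Number {
public:
    Number() : value(1) {}
    // Return by value, not by reference: the result is copied out,
    // so there is no dangling reference and nothing to delete.
    Number operator+(const Number& n) const {
        Number result;
        result.value = value + n.value;
        return result;
    }
    int value;
};

// No new/delete anywhere, so there is nothing left to leak.
Number recurse(const Number& v, int depth) {
    if (depth == 0) return v;           // invented base case so the recursion stops
    Number one;
    return v + recurse(one, depth - 1);
}
```

With value semantics throughout, the compiler manages every temporary on the stack and no explicit cleanup point is needed.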
To fix your memory leak, you can use smart pointers such as `boost::shared_ptr`. Alternatively, ditch pointers and dynamic memory altogether, and just return your results by value from `recurse()`. | I don't see why you're allocating the memory on the heap to begin with:
```
Number& recurse(const Number& v1) {
Number result;
Number one;
// I assume there is a step here to determine if the recursion should stop
result += recurse(one);
return result;
}
```
By allocating only on the stack you're guaranteed that the variables will be cleaned up when the function returns.
Otherwise I think you'd have to use some sort of smart pointer. | Memory Allocation in Recursive C++ Calls | [
"",
"c++",
"memory",
"memory-leaks",
"recursion",
""
] |
My employers website has multiple hostnames that all hit the same server and we just show different skins for branding purposes.
Unfortunately WCF doesn't seem to work well in this situation.
I've tried [overriding the default host with a custom host factory](http://www.robzelt.com/blog/2007/01/24/WCF+This+Collection+Already+Contains+An+Address+With+Scheme+Http.aspx).
That's not an acceptable solution because it needs to work from all hosts, not just 1.
I've also looked at [this blog post](http://blogs.msdn.com/rampo/archive/2008/02/11/how-can-wcf-support-multiple-iis-binding-specified-per-site.aspx) but either I couldn't get it to work or it wasn't meant to solve my problem.
The error I'm getting is "This collection already contains an address with scheme http"
There's got to be a way to configure this, please help :) | If you don't put an address in the endpoint then it should resolve to whatever server hits the service. I use this code and it resolves both to my .local address and to my .com address from IIS.
```
<system.serviceModel>
<services>
<service name="ServiceName" behaviorConfiguration="ServiceName.Service1Behavior">
<endpoint address="" binding="wsHttpBinding" contract="iServiceName">
</endpoint>
<endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
</service>
</services>
<behaviors>
<serviceBehaviors>
<behavior name="ServiceName.Service1Behavior">
<!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
<serviceMetadata httpGetEnabled="true"/>
<!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
<serviceDebug includeExceptionDetailInFaults="true"/>
</behavior>
</serviceBehaviors>
</behaviors>
</system.serviceModel>
``` | I don't think that the **host base addresses** solution posted above will work for IIS-hosted websites (the OP did mention that this was for his employer's *website*).
See [this blog post](http://www.pluralsight.com/community/blogs/aaron/archive/2006/07/13/31879.aspx "this blog post")
Also, the other answer further up by **thaBadDawg** won't work where multiple host headers are specified - you'll just get the errors that the OP mentions ("This collection already contains an address with scheme http".)
I don't think any of the solutions mentioned so far will work, because it doesn't look like WCF allows a single service to be accessible for a single site with multiple host headers from all of the sites. The only workaround I could find for .Net 3.5 (and under) is to create a different contract for each of the host headers, and use the custom ServiceHostFactory to use the correct host header based on which contract is specified. This isn't at all practical. Apparently [.Net 4.0 will resolve this issue](http://blogs.msdn.com/rampo/archive/2008/02/11/how-can-wcf-support-multiple-iis-binding-specified-per-site.aspx). | WCF and Multiple Host Headers | [
"",
"c#",
"wcf",
"iis",
"iis-6",
""
] |
I have a MySQL Left Join problem.
I have three tables which I'm trying to join.
A person table:
```
CREATE TABLE person (
id INT NOT NULL AUTO_INCREMENT,
type ENUM('student', 'staff', 'guardian') NOT NULL,
first_name CHAR(30) NOT NULL,
last_name CHAR(30) NOT NULL,
gender ENUM('m', 'f') NOT NULL,
dob VARCHAR(30) NOT NULL,
PRIMARY KEY (id)
);
```
A student table:
```
CREATE TABLE student (
id INT NOT NULL AUTO_INCREMENT,
person_id INT NOT NULL,
primary_guardian INT NOT NULL,
secondary_guardian INT,
join_date VARCHAR(30) NOT NULL,
status ENUM('current', 'graduated', 'expelled', 'other') NOT NULL,
tutor_group VARCHAR(30) NOT NULL,
year_group VARCHAR(30) NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (person_id) REFERENCES person(id) ON DELETE CASCADE,
FOREIGN KEY (primary_guardian) REFERENCES guardian(id),
FOREIGN KEY (secondary_guardian) REFERENCES guardian(id),
FOREIGN KEY (tutor_group) REFERENCES tutor_group(name),
FOREIGN KEY (year_group) REFERENCES year_group(name)
);
```
And an incident table:
```
CREATE TABLE incident (
id INT NOT NULL AUTO_INCREMENT,
student INT NOT NULL,
staff INT NOT NULL,
guardian INT NOT NULL,
sent_home BOOLEAN NOT NULL,
illness_type VARCHAR(255) NOT NULL,
action_taken VARCHAR(255) NOT NULL,
incident_date DATETIME NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (student) REFERENCES student(id),
FOREIGN KEY (staff) REFERENCES staff(id),
FOREIGN KEY (guardian) REFERENCES guardian(id)
);
```
What I'm trying to select is the first name, last name and the number of incidents for each student in year 9.
Here's my best attempt at the query:
```
SELECT p.first_name, p.last_name, COUNT(i.student)
FROM person p, student s LEFT JOIN incident i ON s.id = i.student
WHERE p.id = s.person_id AND s.year_group LIKE "%Year 9%";
```
However, it ignores any students without an incident which is not what I want - they should be displayed but with a count of 0. If I remove the left join and the count then I get all the students as I would expect.
I've probably misunderstood LEFT JOIN, but I thought it was supposed to do essentially what I'm trying to do?
Thanks for your help,
Adam | What you are doing is fine, you just missed off the group by clause
```
SELECT p.first_name, p.last_name, COUNT(i.student)
FROM person p, student s LEFT JOIN incident i ON s.id = i.student
WHERE p.id = s.person_id AND s.year_group LIKE "%Year 9%"
GROUP BY p.first_name, p.last_name;
```
Here's some test data
```
insert into person values(1, 'student', 'Alice', 'Foo', 'f','1970-01-01');
insert into person values(2, 'student', 'Bob', 'Bar', 'm','1970-01-01');
insert into student values(1,1,0,0,'', 'current','','Year 9');
insert into student values(2,2,0,0,'', 'current','','Year 9');
insert into incident values(1,1,0,0,0,'flu','chicken soup', '2008-01-08');
```
And here's the output of the query with the group by added to it:
```
+------------+-----------+------------------+
| first_name | last_name | COUNT(i.student) |
+------------+-----------+------------------+
| Alice | Foo | 1 |
| Bob | Bar | 0 |
+------------+-----------+------------------+
```
You could further clean up the query by making join clauses from your where clause, and grouping on the person id:
```
SELECT p.first_name, p.last_name, COUNT(i.student)
FROM person p
INNER JOIN student s ON(p.id = s.person_id)
LEFT JOIN incident i ON(s.id = i.student)
WHERE s.year_group LIKE "%Year 9%"
GROUP BY p.id;
``` | Would that not be a left outer join you are looking for? I may have my terminology mixed up? Would not be the first time. But Aron's answer would work. | Why is this LEFT JOIN eliminating records with nothing in the other table? | [
"",
"mysql",
"sql",
"join",
"left-join",
""
] |
For example: I'm on MS-DOS and I have source code in the folder C:\Documents and Settings\Programs. Can I make my source code use a program (for example, gnuplot) that is in an arbitrary folder? | Here are some options:
1. Search in the system PATH for the executable you want to run
2. Allow the user to specify the location on the command-line
3. Store the location in a configuration file, and allow the user to specify it during install (if you have an install process) or by editing the file by hand
Ideally you'd do all 3 | <http://www.codeproject.com/KB/system/newbiespawn.aspx>
ShellExecute will look into the PATH environment variable, so you don't need to specify the full PATH. Now, if it's really a random location and it's not even in the PATH environment variable, then I guess you are out of luck.
If it isn't even in the PATH, then you have to search for it in the candidate folders. Here's sample code on how to [traverse a file system](http://www.daniweb.com/forums/thread31081.html) path in C++.
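Before a full directory scan, you can also walk the PATH entries yourself; a rough sketch (the helper name `findInPath` is made up for illustration):

```cpp
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <string>

// Return the first "dir/name" from the PATH environment variable that
// names a readable file, or an empty string if none is found.
std::string findInPath(const std::string& name) {
    const char* path = std::getenv("PATH");
    if (path == 0) return "";
#ifdef _WIN32
    const char sep = ';';   // PATH entries are ;-separated on Windows
#else
    const char sep = ':';   // and :-separated elsewhere
#endif
    std::istringstream dirs(path);
    std::string dir;
    while (std::getline(dirs, dir, sep)) {
        std::string candidate = dir + "/" + name;
        std::ifstream probe(candidate.c_str());
        if (probe) return candidate;   // readable file found
    }
    return "";
}
```

This only covers executables reachable via PATH; for truly arbitrary locations you still need a directory traversal like the one below.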
And an example using Boost:
directoryList.h
```
#ifndef DIRECTORYLIST_H_INCLUDED
#define DIRECTORYLIST_H_INCLUDED
#define BOOST_FILESYSTEM_NO_DEPRECATED
#include <iostream>
#include <list>
#include <string>
class directoryList {
public:
directoryList();
~directoryList();
std::list<std::string> getListing(std::string path);
};
#endif // DIRECTORYLIST_H_INCLUDED
```
directoryList.cpp
```
#include "boost/filesystem/operations.hpp"
#include "boost/filesystem/convenience.hpp"
#include "boost/filesystem/path.hpp"
#include "boost/progress.hpp"
#include "directoryList.h"
using namespace std;
namespace fs = boost::filesystem;
directoryList::directoryList() {}
directoryList::~directoryList() {}
list<string> directoryList::getListing(string base_dir) {
list<string> rv;
fs::path p(base_dir);
for (fs::recursive_directory_iterator it(p);
it != fs::recursive_directory_iterator(); ++it) {
string complete_filename = it->path().string();
rv.insert(rv.begin(),complete_filename);
}
return rv;
}
```
Usage sample:
```
directoryList *dl = new directoryList();
filenames = dl->getListing("C:\\Program Files");
//search for the file here, or modify the getListing to supply a filter
``` | How to use a program which is not in the source code's folder? | [
"",
"c++",
"dos",
""
] |
I am currently writing an application to connect to the device "BTLink Bluetooth to Serial Adapter".
More information about device: [device specification](http://cgi.ebay.pl/Bluetooth-to-RS232-Serial-Adapter-Dongle-100m-UK_W0QQitemZ300266522534QQcmdZViewItem?hash=item300266522534&_trkparms=72%3A1399|39%3A1|66%3A2|65%3A12|240%3A1318&_trksid=p3286.c0.m14)
I have created the following code:
```
BluetoothAddress btAddress = null;
if (!BluetoothAddress.TryParse(comboBoxDevices.SelectedValue.ToString().Trim(), out btAddress))
throw new Exception(String.Format("Adress: {0} wrong !", comboBoxDevices.SelectedValue.ToString().Trim()));
BluetoothEndPoint endPoint = new BluetoothEndPoint(btAddress, BluetoothService.DialupNetworking);
_bluetoothClient.Connect(endPoint);
```
I have tested two service GUIDs: BluetoothService.DialupNetworking and BluetoothService.SerialPort.
I received this message: "No connection could be made because the target machine actively refused it"
What is wrong ?
Best regards,
mykhaylo | I would first try to connect to the target machine using the device's built-in Bluetooth capabilities. Only after this succeeds would I try to connect to it programatically.
To be able to connect to a Bluetooth device you need to know the following:
* The Bluetooth profile to use. You've tried both Serial and Dialup, but which of them is actually the desired one? Many devices only support a single profile and will refuse to connect to another device requesting a non supported service.
* Authentication information. Some devices require a pre-defined password.
* Many devices can only form a connection to a single BT device. Is the target device already connected somewhere else?
The error you are experiencing is most probably related to one of the above and has not to do with the use of the bluetooth library. | Have you checked with 32Feet.net or on [their support Forums](http://inthehand.com/forums/default.aspx?GroupID=29) (the provider of the [classes you're using](http://inthehand.com/content/32feet.aspx))? | Bluetooth to Serial Adapter - Connection Exception | [
"",
"c#",
"windows-mobile",
"compact-framework",
"bluetooth",
""
] |
Is there a convention for naming the private method that I have called "`_Add`" here? I am not a fan of the leading underscore but it is what one of my teammates suggests.
```
public Vector Add(Vector vector) {
// check vector for null, and compare Length to vector.Length
return _Add(vector);
}
public static Vector Add(Vector vector1, Vector vector2) {
// check parameters for null, and compare Lengths
Vector returnVector = vector1.Clone()
return returnVector._Add(vector2);
}
private Vector _Add(Vector vector) {
for (int index = 0; index < Length; index++) {
this[index] += vector[index];
}
return this;
}
``` | I usually see and use either "AddCore" or "InnerAdd" | I've never seen any coding convention in C# that distinguished between public and private methods. I don't suggest doing it, since I don't see the benefit.
If the method name conflicts with public methods, it’s time to become more descriptive; if, as in your case, it contains the actual method *implementation* for the public method, one convention is to call it `*Impl`. I.e. `AddImpl` in your case. | Private method naming convention | [
"",
"c#",
"coding-style",
"private-methods",
""
] |
> **Possible Duplicate:**
> [How to properly clean up Excel interop objects in C#](https://stackoverflow.com/questions/158706/how-to-properly-clean-up-excel-interop-objects-in-c-sharp)
I've read many of the other threads here about managing COM references while using the .Net-Excel interop to make sure the Excel process exits correctly upon exit, and so far the techniques have been working very well, but I recently came across a problem when adding new worksheets to an existing workbook file.
The code below leaves a zombie Excel process.
If I add a worksheet to a newly created workbook file, it exits fine. If I run the code excluding the `.Add()` line, it exits fine. (The existing file I'm reading from is an empty file created by the commented out code)
Any ideas?
```
//using Excel = Microsoft.Office.Interop.Excel;
//using System.Runtime.InteropServices;
public static void AddTest()
{
string filename = @"C:\addtest.xls";
object m = Type.Missing;
Excel.Application excelapp = new Excel.Application();
if (excelapp == null) throw new Exception("Can't start Excel");
Excel.Workbooks wbs = excelapp.Workbooks;
//if I create a new file and then add a worksheet,
//it will exit normally (i.e. if you uncomment the next two lines
//and comment out the .Open() line below):
//Excel.Workbook wb = wbs.Add(Excel.XlWBATemplate.xlWBATWorksheet);
//wb.SaveAs(filename, m, m, m, m, m,
// Excel.XlSaveAsAccessMode.xlExclusive,
// m, m, m, m, m);
//but if I open an existing file and add a worksheet,
//it won't exit (leaves zombie excel processes)
Excel.Workbook wb = wbs.Open(filename,
m, m, m, m, m, m,
Excel.XlPlatform.xlWindows,
m, m, m, m, m, m, m);
Excel.Sheets sheets = wb.Worksheets;
//This is the offending line:
Excel.Worksheet wsnew = sheets.Add(m, m, m, m) as Excel.Worksheet;
//N.B. it doesn't help if I try specifying the parameters in Add() above
wb.Save();
wb.Close(m, m, m);
//overkill to do GC so many times, but shows that doesn't fix it
GC();
//cleanup COM references
//changing these all to FinalReleaseComObject doesn't help either
while (Marshal.ReleaseComObject(wsnew) > 0) { }
wsnew = null;
while (Marshal.ReleaseComObject(sheets) > 0) { }
sheets = null;
while (Marshal.ReleaseComObject(wb) > 0) { }
wb = null;
while (Marshal.ReleaseComObject(wbs) > 0) { }
wbs = null;
GC();
excelapp.Quit();
while (Marshal.ReleaseComObject(excelapp) > 0) { }
excelapp = null;
GC();
}
public static void GC()
{
System.GC.Collect();
System.GC.WaitForPendingFinalizers();
System.GC.Collect();
System.GC.WaitForPendingFinalizers();
}
``` | I don't have the code to hand, but I did run into a similar problem.
If I recall correctly, I ended up retrieving the process id of the excel instance, and killing it (after a suitable wait period, and when the other method failed).
I think I used:
`GetWindowThreadProcessId` (via P/Invoke) on the excel object hwnd property to get the process id, and then used `Process.GetProcessById` to get a process object.
Once I'd done that, I'd call `Kill` on the process.
EDIT: I have to admit, this isn't the ideal solution, but if you can't find the rogue interface that isn't being released, then this will fix it in true eggshell/sledgehammer fashion. ;)
EDIT2: You don't have to call `Kill` on the process object immediately... You could first try calling `Close` before resorting to `Kill`. | I have done a similar thing. I create an Excel file or open an existing. I delete all the sheets and add my own. here is the code I use to ensure all references are closed:
```
workbook.Close(true, null, null);
excelApp.Quit();
if (newSheet != null)
{
System.Runtime.InteropServices.Marshal.ReleaseComObject(newSheet);
}
if (rangeSelection != null)
{
System.Runtime.InteropServices.Marshal.ReleaseComObject(rangeSelection);
}
if (sheets != null)
{
System.Runtime.InteropServices.Marshal.ReleaseComObject(sheets);
}
if (workbook != null)
{
System.Runtime.InteropServices.Marshal.ReleaseComObject(workbook);
}
if (excelApp != null)
{
System.Runtime.InteropServices.Marshal.ReleaseComObject(excelApp);
}
newSheet = null;
rangeSelection = null;
sheets = null;
workbook = null;
excelApp = null;
GC.Collect();
```
I have tested this with many different options and not had it fail on me yet. | C# interop: excel process not exiting after adding new worksheet to existing file | [
"",
"c#",
"excel",
"com",
"interop",
"pia",
""
] |
### Disclaimer
Yes, I am fully aware that what I am asking about is totally stupid and that anyone who would wish to try such a thing in production code should be fired and/or shot. I'm mainly looking to see if *can* be done.
Now that that's out of the way, is there any way to access private class members in C++ from outside the class? For example, is there any way to do this with pointer offsets?
(Naive and otherwise non-production-ready techniques welcome)
### Update
As noted in the comments, I asked this question because I wanted to write a blog post on over-encapsulation (and how it affects TDD). I wanted to see if there was a way to say "using private variables isn't a 100% reliable way to enforce encapsulation, even in C++." At the end, I decided to focus more on how to solve the problem rather than why it's a problem, so I didn't feature some of the stuff brought up here as prominently as I had planned, but I still left a link.
At any rate, if anyone's interested in how it came out, here it is: [Enemies of Test Driven Development part I: encapsulation](http://jasonmbaker.wordpress.com/2009/01/08/enemies-of-test-driven-development-part-i-encapsulation/) (I suggest reading it before you decide that I'm crazy). | If the class contains any template member functions you can specialize that member function to suit your needs. Even if the original developer didn't think of it.
safe.h
```
class safe
{
int money;
public:
safe()
: money(1000000)
{
}
template <typename T>
void backdoor()
{
// Do some stuff.
}
};
```
main.cpp:
```
#include "safe.h"
#include <iostream>
class key;
template <>
void safe::backdoor<key>()
{
// My specialization.
money -= 100000;
std::cout << money << "\n";
}
int main()
{
safe s;
s.backdoor<key>();
s.backdoor<key>();
}
```
Output:
```
900000
800000
``` | I've added an [entry to my blog](http://bloglitb.blogspot.de/2011/12/access-to-private-members-safer.html) (see below) that shows how it can be done. Here is an example on how you use it for the following class
```
struct A {
private:
int member;
};
```
Just declare a struct for it where you describe it and instantiate the implementation class used for robbery
```
// tag used to access A::member
struct A_member {
typedef int A::*type;
friend type get(A_member);
};
template struct Rob<A_member, &A::member>;
int main() {
A a;
a.*get(A_member()) = 42; // write 42 to it
std::cout << "proof: " << a.*get(A_member()) << std::endl;
}
```
The `Rob` class template is defined like this, and needs only be defined once, regardless how many private members you plan to access
```
template<typename Tag, typename Tag::type M>
struct Rob {
friend typename Tag::type get(Tag) {
return M;
}
};
```
However, this doesn't show that C++'s access rules aren't reliable. The language rules are designed to protect against accidental mistakes - if you try to rob data from an object, the language *by design* does not go to great lengths to prevent you.
"",
"c++",
"encapsulation",
"private-members",
""
] |
I need to build a C++ library to distribute among our customers. The library must be able to be accessed from a wide range of languages including VB6, C++, VB.net and C#.
I've been using ActiveX controls (OCX files) until now. But I wonder if there is a better kind of library (DLL, etc.) that I can build. What do you recommend?
I'm limited to C++ as the library language, but you can mention other languages for reference to other developers.
P.S. Sorry if the question was already asked. I had some trouble finding a suitable title. Feel free to correct my English.
**Edit:** Seems like the best choice is either DLLs or OCX (i.e., COM), but I'm still having some doubts on which one will I choose. Which one is more suitable to modern languages (.NET for instance)? Which one would be easier to use from an end developer perspective? | Almost every language has a way of loading dynamic libraries and accessing exported C functions from them.
There is nothing preventing you from using C++ inside the dll but for maximum portability, export only C functions.
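In practice that usually means wrapping the C++ class behind an opaque handle plus a flat set of `extern "C"` functions (illustrative sketch with invented names; on Windows you would additionally mark these `__declspec(dllexport)` or list them in a .def file):

```cpp
#include <vector>

extern "C" {

// Opaque handle: callers in VB6/C#/etc. never see the C++ type behind it.
typedef void* StackHandle;

StackHandle stack_create() {
    return new std::vector<int>();
}

void stack_push(StackHandle h, int v) {
    static_cast<std::vector<int>*>(h)->push_back(v);
}

int stack_pop(StackHandle h) {
    std::vector<int>* s = static_cast<std::vector<int>*>(h);
    int v = s->back();
    s->pop_back();
    return v;
}

void stack_destroy(StackHandle h) {
    delete static_cast<std::vector<int>*>(h);
}

} // extern "C"
```

Because only plain functions and a `void*` cross the boundary, the library's C++ internals can change freely without breaking callers.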
I have some more about this in this [post](https://stackoverflow.com/questions/62398/middleware-api-best-practices-what-are-they#71496). | If you're looking at supporting both VB6 and .NET, you're pretty much stuck with exposing interfaces via COM, but at least that'll get you out of having to create more than one wrapper based on the language/runtime system you're trying to interact with. | What kind of code library should I build for distribution? | [
"",
"c++",
"com",
"dll",
"vb6",
"ocx",
""
] |
So I've just recently made the step from ad hoc debugging with `dump`, `print_r` and `echo` to some more sophisticated methods, and I'm struggling.
I work with Zend Framework, Aptana and Zend Debugger.
At the moment I'm trying to debug a custom controller, and whatever I try I never reach my breakpoint, which I understand, since there is authentication in between and navigation happens via the menu.
**Questions**:
1. How can I make my application break at the point of authentication, login in the browser, navigate to a certain uri and then continue debugging?
2. What are good places to set breakpoints in Zend Framework with MVC? | You want to change the current user's authentication details mid-way through a request?
I don't think this is possible. Zend Debugger is pretty much a read-only tool. Even if it were, you're assuming that whatever framework you're using can handle this. That would mean it would have to constantly try to synchronize it's internal state with changing input data.
I think instead of asking us how to solve this specific problem, you should be telling us why you need to change authentication. It sounds like you're launching a script in your debugger, which fails because you have no user session.
Zend Debugger has a browser toolbar (<http://files.zend.com/help/Zend-Studio-Eclipse-Help/zend_debugger_toolbar.htm>) that allows you to start the debugger for your current page; the debugger will have all information that the browser would have sent: cookies, post data, etc. There's even a "debug next page" setting which helps you debug POST forms. It sounds like this is what you want. | Wouldn't it be easier to set up a constant such as:
```
define('MODE_DEBUG', 1);
```
Then check in the authentication process:
```
if($obj->myLoginMethod() || constant('MODE_DEBUG') == 1){
}
```
No one will be able to inject into that constant, and the worst thing that can happen is that you end up leaving debug mode on by mistake...
To guard against that, you could put a check before the definition:
```
define('MODE_DEBUG', (false !== strpos($_SERVER['HTTP_HOST'], 'dev.mysite.com') ? 1 : 0));
``` | PHP debugging - where to set the breakpoints? How to authenticate? | [
"",
"php",
"debugging",
"zend-framework",
"breakpoints",
"zend-debugger",
""
] |
Could you explain the difference between setting methods in the constructor and through prototype object? The following code shows these two ways of setting the methods - `say_hello` and `say_bye` both work fine:
```
function MessageClass() {
this.say_bye = function() { alert('see ya'); };
}
MessageClass.prototype.say_hello = function() { alert('hello'); };
x = new MessageClass();
x.say_hello();
x.say_bye();
``` | foxxtrot and annakata are both correct, but I'll throw in my 2 cents.
If you use the prototype then each instance of the "MessageClass" is really referencing the same functions. The functions exist in memory only once and are used for all instances. If you declare the methods in the constructor (or otherwise add them to a specific instance) rather than the prototype then a new function is created for each instance of MessageClass.
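You can see this directly with the class from the question (alerts swapped for return values so it runs outside a browser):

```javascript
function MessageClass() {
  this.say_bye = function () { return 'see ya'; };
}
MessageClass.prototype.say_hello = function () { return 'hello'; };

var a = new MessageClass();
var b = new MessageClass();

// One shared function on the prototype...
console.log(a.say_hello === b.say_hello); // true
// ...but a fresh function per instance from the constructor.
console.log(a.say_bye === b.say_bye);     // false
```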
That being said, there is probably not any noticeable performance difference for most cases and it is unlikely that you will see a memory usage difference either. I would go with the prototype method unless you have a compelling reason to do otherwise. The only reason I can think of that you might want to declare a method in the constructor is if you need a closure. For example, if you have event handlers or you wanted to simulate private properties with getters/setters you might do:
```
function MessageClass() {
var self = this;
this.clickHandler = function(e) { self.someoneClickedMe = true; };
var _private = 0;
this.getPrivate = function() { return _private; };
this.setPrivate = function(val) { _private = val; };
}
```
**EDIT:** Because there has been discussion about how this affects objects extended by another object with functions assigned in the constructor I'm adding a bit more detail. I might use the term "class" to simplify the discussion, but it is important to note that js does not support classes (that doesn't mean we can't do good OO development) or we would not be discussing this issue.
Most javascript libraries call the constructor on the base class and the sub class. (e.g. Prototype.js's Object.extend) This means that methods assigned in the constructor of each will be available on the resulting objects. However, if you are extending objects yourself there can be unexpected consequences.
If I take the MessageClass above and extend it:
```
function ErrorMessageClass() {}
ErrorMessageClass.prototype = new MessageClass();
errorMsg = new ErrorMessageClass();
```
Then errorMsg will have a getPrivate and setPrivate method on it, but they may not behave as you would expect. Because those functions were scoped when they were assigned (i.e. at "ErrorMessageClass.prototype = new MessageClass()"), not only are the get/setPrivate methods shared, the \_private variable gets shared across all instances of ErrorMessageClass as well. This essentially makes \_private a static property for ErrorMessageClass. For example:
```
var errorA = new ErrorMessageClass();
var errorB = new ErrorMessageClass();
errorA.setPrivate('A');
console.log(errorA.getPrivate()); // prints 'A'
console.log(errorB.getPrivate()); // prints 'A'
errorB.setPrivate('B');
console.log(errorA.getPrivate()); // prints 'B'
```
Likewise with the clickHandler function and someoneClickedMe property:
```
errorA.clickHandler();
console.log(errorA.someoneClickedMe); // prints 'true'
console.log(errorB.someoneClickedMe); // prints 'true'
```
However, change those function definitions to use this.\_private:
```
this.getPrivate = function() { return this._private; };
this.setPrivate = function(val) { this._private = val; };
```
and behavior of instances of ErrorMessageClass becomes more of what you would expect:
```
errorA.setPrivate('A');
errorB.setPrivate('B');
console.log(errorA.getPrivate()); // prints 'A'
console.log(errorB.getPrivate()); // prints 'B'
``` | If you bind methods by prototype JS only has to do it once and binds to an object class (which makes it eligible for OO JS extensions).
If you do the binding within the "class" function, JS has to do the work of creating and assigning for each and every instance. | Setting methods through prototype object or in constructor, difference? | [
"",
"javascript",
"constructor",
"prototype",
""
] |
Here's the .jsp code:
```
<table>
<s:iterator value="allAgents">
<tr>
<td><s:property value="firstName" /></td>
<td><s:property value="middleName" /></td>
<td><s:property value="lastName" /></td>
<td><s:checkbox name="ss"/></td>
</tr>
</s:iterator>
</table>
```
When rendered, the checkbox would occupy a whole row below the 'names', centered.
Here's the generated html for what's supposed to be a single row:
```
<tr>
<td>first</td>
<td>middle</td>
<td>last</td>
<td>
<tr>
<td valign="top" align="right"></td>
<td valign="top" align="left"><input type="checkbox" name="ss"
value="true" id="agent_ss" /> <input type="hidden"
name="__checkbox_ss" value="true" /></td>
</tr>
</td>
</tr>
```
Is it me or struts?
TIA. | Add the property theme="simple"
like `<s:checkbox name="ss" theme="simple"/>` | Struts2 renders s:checkbox as a table cell itself. The reason is that struts2 uses a template system for tag rendering. The default is (as defined in struts-default.properties)
### Standard UI theme
struts.ui.theme=xhtml
struts.ui.templateDir=template
struts.ui.templateSuffix=ftl
You need to make this change -- struts.ui.theme:simple
It can be done by adding
`<constant name="struts.ui.theme" value="simple" />` tag
in the "struts.xml". This will suffice. | struts2: s:checkbox doesn't go on the same row with s:checkbox | [
"",
"java",
"struts2",
"rendering",
""
] |
I have 3 projects in my VS solution. One of them is a Web app, the second one is a Windows Service and the last one a Setup project for my Web app.
What I want is by the end of the installation of the web app in my setup project, within my custom action to try and install my windows service given that I have the location of the assembly by then. | Ok, here is what REALLY worked for me, it has been tested on multiple machines with different OS ( Vista, XP, Win2k, Win2003 server )
The code has been taken from [here](http://www.tech-archive.net/Archive/VB/microsoft.public.vb.winapi/2006-08/msg00238.html) so full credit goes to whoever wrote this piece of code.
Once you add the dll or source file into your project, make sure to add the ServiceTools namespace, and then you have access to some very handy functionality such as...
```
//Installs and starts the service
ServiceInstaller.InstallAndStart("MyServiceName", "MyServiceDisplayName", "C:\\PathToServiceFile.exe");
//Removes the service
ServiceInstaller.Uninstall("MyServiceName");
//Checks the status of the service
ServiceInstaller.GetServiceStatus("MyServiceName");
//Starts the service
ServiceInstaller.StartService("MyServiceName");
//Stops the service
ServiceInstaller.StopService("MyServiceName");
//Check if service is installed
ServiceInstaller.ServiceIsInstalled("MyServiceName");
```
I hope this helps. | I found several errors in the code that you reused and have fixed these and also cleaned it up a little. Again, the original code is taken from [here](http://www.tech-archive.net/Archive/VB/microsoft.public.vb.winapi/2006-08/msg00238.html).
```
public static class ServiceInstaller
{
private const int STANDARD_RIGHTS_REQUIRED = 0xF0000;
private const int SERVICE_WIN32_OWN_PROCESS = 0x00000010;
[StructLayout(LayoutKind.Sequential)]
private class SERVICE_STATUS
{
public int dwServiceType = 0;
public ServiceState dwCurrentState = 0;
public int dwControlsAccepted = 0;
public int dwWin32ExitCode = 0;
public int dwServiceSpecificExitCode = 0;
public int dwCheckPoint = 0;
public int dwWaitHint = 0;
}
#region OpenSCManager
[DllImport("advapi32.dll", EntryPoint = "OpenSCManagerW", ExactSpelling = true, CharSet = CharSet.Unicode, SetLastError = true)]
static extern IntPtr OpenSCManager(string machineName, string databaseName, ScmAccessRights dwDesiredAccess);
#endregion
#region OpenService
[DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Auto)]
static extern IntPtr OpenService(IntPtr hSCManager, string lpServiceName, ServiceAccessRights dwDesiredAccess);
#endregion
#region CreateService
[DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Auto)]
private static extern IntPtr CreateService(IntPtr hSCManager, string lpServiceName, string lpDisplayName, ServiceAccessRights dwDesiredAccess, int dwServiceType, ServiceBootFlag dwStartType, ServiceError dwErrorControl, string lpBinaryPathName, string lpLoadOrderGroup, IntPtr lpdwTagId, string lpDependencies, string lp, string lpPassword);
#endregion
#region CloseServiceHandle
[DllImport("advapi32.dll", SetLastError = true)]
[return: MarshalAs(UnmanagedType.Bool)]
static extern bool CloseServiceHandle(IntPtr hSCObject);
#endregion
#region QueryServiceStatus
[DllImport("advapi32.dll")]
private static extern int QueryServiceStatus(IntPtr hService, SERVICE_STATUS lpServiceStatus);
#endregion
#region DeleteService
[DllImport("advapi32.dll", SetLastError = true)]
[return: MarshalAs(UnmanagedType.Bool)]
private static extern bool DeleteService(IntPtr hService);
#endregion
#region ControlService
[DllImport("advapi32.dll")]
private static extern int ControlService(IntPtr hService, ServiceControl dwControl, SERVICE_STATUS lpServiceStatus);
#endregion
#region StartService
[DllImport("advapi32.dll", SetLastError = true)]
private static extern int StartService(IntPtr hService, int dwNumServiceArgs, int lpServiceArgVectors);
#endregion
public static void Uninstall(string serviceName)
{
IntPtr scm = OpenSCManager(ScmAccessRights.AllAccess);
try
{
IntPtr service = OpenService(scm, serviceName, ServiceAccessRights.AllAccess);
if (service == IntPtr.Zero)
throw new ApplicationException("Service not installed.");
try
{
StopService(service);
if (!DeleteService(service))
throw new ApplicationException("Could not delete service " + Marshal.GetLastWin32Error());
}
finally
{
CloseServiceHandle(service);
}
}
finally
{
CloseServiceHandle(scm);
}
}
public static bool ServiceIsInstalled(string serviceName)
{
IntPtr scm = OpenSCManager(ScmAccessRights.Connect);
try
{
IntPtr service = OpenService(scm, serviceName, ServiceAccessRights.QueryStatus);
if (service == IntPtr.Zero)
return false;
CloseServiceHandle(service);
return true;
}
finally
{
CloseServiceHandle(scm);
}
}
public static void InstallAndStart(string serviceName, string displayName, string fileName)
{
IntPtr scm = OpenSCManager(ScmAccessRights.AllAccess);
try
{
IntPtr service = OpenService(scm, serviceName, ServiceAccessRights.AllAccess);
if (service == IntPtr.Zero)
service = CreateService(scm, serviceName, displayName, ServiceAccessRights.AllAccess, SERVICE_WIN32_OWN_PROCESS, ServiceBootFlag.AutoStart, ServiceError.Normal, fileName, null, IntPtr.Zero, null, null, null);
if (service == IntPtr.Zero)
throw new ApplicationException("Failed to install service.");
try
{
StartService(service);
}
finally
{
CloseServiceHandle(service);
}
}
finally
{
CloseServiceHandle(scm);
}
}
public static void StartService(string serviceName)
{
IntPtr scm = OpenSCManager(ScmAccessRights.Connect);
try
{
IntPtr service = OpenService(scm, serviceName, ServiceAccessRights.QueryStatus | ServiceAccessRights.Start);
if (service == IntPtr.Zero)
throw new ApplicationException("Could not open service.");
try
{
StartService(service);
}
finally
{
CloseServiceHandle(service);
}
}
finally
{
CloseServiceHandle(scm);
}
}
public static void StopService(string serviceName)
{
IntPtr scm = OpenSCManager(ScmAccessRights.Connect);
try
{
IntPtr service = OpenService(scm, serviceName, ServiceAccessRights.QueryStatus | ServiceAccessRights.Stop);
if (service == IntPtr.Zero)
throw new ApplicationException("Could not open service.");
try
{
StopService(service);
}
finally
{
CloseServiceHandle(service);
}
}
finally
{
CloseServiceHandle(scm);
}
}
private static void StartService(IntPtr service)
{
SERVICE_STATUS status = new SERVICE_STATUS();
StartService(service, 0, 0);
var changedStatus = WaitForServiceStatus(service, ServiceState.StartPending, ServiceState.Running);
if (!changedStatus)
throw new ApplicationException("Unable to start service");
}
private static void StopService(IntPtr service)
{
SERVICE_STATUS status = new SERVICE_STATUS();
ControlService(service, ServiceControl.Stop, status);
var changedStatus = WaitForServiceStatus(service, ServiceState.StopPending, ServiceState.Stopped);
if (!changedStatus)
throw new ApplicationException("Unable to stop service");
}
public static ServiceState GetServiceStatus(string serviceName)
{
IntPtr scm = OpenSCManager(ScmAccessRights.Connect);
try
{
IntPtr service = OpenService(scm, serviceName, ServiceAccessRights.QueryStatus);
if (service == IntPtr.Zero)
return ServiceState.NotFound;
try
{
return GetServiceStatus(service);
}
finally
{
CloseServiceHandle(service);
}
}
finally
{
CloseServiceHandle(scm);
}
}
private static ServiceState GetServiceStatus(IntPtr service)
{
SERVICE_STATUS status = new SERVICE_STATUS();
if (QueryServiceStatus(service, status) == 0)
throw new ApplicationException("Failed to query service status.");
return status.dwCurrentState;
}
private static bool WaitForServiceStatus(IntPtr service, ServiceState waitStatus, ServiceState desiredStatus)
{
SERVICE_STATUS status = new SERVICE_STATUS();
QueryServiceStatus(service, status);
if (status.dwCurrentState == desiredStatus) return true;
int dwStartTickCount = Environment.TickCount;
int dwOldCheckPoint = status.dwCheckPoint;
while (status.dwCurrentState == waitStatus)
{
// Do not wait longer than the wait hint. A good interval is
// one tenth the wait hint, but no less than 1 second and no
// more than 10 seconds.
int dwWaitTime = status.dwWaitHint / 10;
if (dwWaitTime < 1000) dwWaitTime = 1000;
else if (dwWaitTime > 10000) dwWaitTime = 10000;
Thread.Sleep(dwWaitTime);
// Check the status again.
if (QueryServiceStatus(service, status) == 0) break;
if (status.dwCheckPoint > dwOldCheckPoint)
{
// The service is making progress.
dwStartTickCount = Environment.TickCount;
dwOldCheckPoint = status.dwCheckPoint;
}
else
{
if (Environment.TickCount - dwStartTickCount > status.dwWaitHint)
{
// No progress made within the wait hint
break;
}
}
}
return (status.dwCurrentState == desiredStatus);
}
private static IntPtr OpenSCManager(ScmAccessRights rights)
{
IntPtr scm = OpenSCManager(null, null, rights);
if (scm == IntPtr.Zero)
throw new ApplicationException("Could not connect to service control manager.");
return scm;
}
}
public enum ServiceState
{
Unknown = -1, // The state cannot be (has not been) retrieved.
NotFound = 0, // The service is not known on the host server.
Stopped = 1,
StartPending = 2,
StopPending = 3,
Running = 4,
ContinuePending = 5,
PausePending = 6,
Paused = 7
}
[Flags]
public enum ScmAccessRights
{
Connect = 0x0001,
CreateService = 0x0002,
EnumerateService = 0x0004,
Lock = 0x0008,
QueryLockStatus = 0x0010,
ModifyBootConfig = 0x0020,
StandardRightsRequired = 0xF0000,
AllAccess = (StandardRightsRequired | Connect | CreateService |
EnumerateService | Lock | QueryLockStatus | ModifyBootConfig)
}
[Flags]
public enum ServiceAccessRights
{
QueryConfig = 0x1,
ChangeConfig = 0x2,
QueryStatus = 0x4,
EnumerateDependants = 0x8,
Start = 0x10,
Stop = 0x20,
PauseContinue = 0x40,
Interrogate = 0x80,
UserDefinedControl = 0x100,
Delete = 0x00010000,
StandardRightsRequired = 0xF0000,
AllAccess = (StandardRightsRequired | QueryConfig | ChangeConfig |
QueryStatus | EnumerateDependants | Start | Stop | PauseContinue |
Interrogate | UserDefinedControl)
}
public enum ServiceBootFlag
{
Start = 0x00000000,
SystemStart = 0x00000001,
AutoStart = 0x00000002,
DemandStart = 0x00000003,
Disabled = 0x00000004
}
public enum ServiceControl
{
Stop = 0x00000001,
Pause = 0x00000002,
Continue = 0x00000003,
Interrogate = 0x00000004,
Shutdown = 0x00000005,
ParamChange = 0x00000006,
NetBindAdd = 0x00000007,
NetBindRemove = 0x00000008,
NetBindEnable = 0x00000009,
NetBindDisable = 0x0000000A
}
public enum ServiceError
{
Ignore = 0x00000000,
Normal = 0x00000001,
Severe = 0x00000002,
Critical = 0x00000003
}
```
Please let me know if anyone finds anything wrong with this code! | How to install a windows service programmatically in C#? | [
"",
"c#",
".net",
"windows-services",
"setup-project",
"visual-studio-setup-proje",
""
] |
I work on a web-based tool where we offer customized prints.
Currently we build an XML structure with Java, feed it to the [XMLmind XSL-FO Converter](http://www.xmlmind.com/foconverter/) along with customized XSL-FO, which then produces an RTF document.
This works fine on simple layouts, but there are some problem areas where I'd like greater control, or where I can't do what I want at all. For example: tables in the header, footers (e.g., page numbers), columns, having a separate column setup or different page number info on the first page, etc.
Do any of you know of better alternatives, either to XMLmind or to the way we get from data to RTF, i.e., Java-> XML, XML+XSL-> RTF? (The only practical limitation for us is the JVM.) | If you could afford spending some money, you could use [Aspose.Words](http://www.aspose.com/categories/file-format-components/aspose.words-for-.net-and-java/default.aspx), a professional library for creating Word and RTF documents for Java and .NET. | You can take a look at a new library called [jRTF](http://code.google.com/p/jrtf/). It allows you to create new RTF documents and to fill RTF templates. | How do I generate RTF from Java? | [
"",
"java",
"xml",
"xslt",
"rtf",
""
] |
I've got a function creating some XmlDocument:
```
public string CreateOutputXmlString(ICollection<Field> fields)
{
XmlWriterSettings settings = new XmlWriterSettings();
settings.Indent = true;
settings.Encoding = Encoding.GetEncoding("windows-1250");
StringBuilder builder = new StringBuilder();
XmlWriter writer = XmlWriter.Create(builder, settings);
writer.WriteStartDocument();
writer.WriteStartElement("data");
foreach (Field field in fields)
{
writer.WriteStartElement("item");
writer.WriteAttributeString("name", field.Id);
writer.WriteAttributeString("value", field.Value);
writer.WriteEndElement();
}
writer.WriteEndElement();
writer.Flush();
writer.Close();
return builder.ToString();
}
```
I set an encoding, but after I create the XmlWriter it has utf-16 encoding. I know it's because strings (and StringBuilder, I suppose) are encoded in utf-16 and you can't change it.
So how can I easily create this xml with the encoding attribute set to "windows-1250"? It doesn't even have to be encoded in this encoding; it just has to have the specified attribute.
edit: it has to be in .Net 2.0 so any new framework elements cannot be used. | You need to use a StringWriter with the appropriate encoding. Unfortunately StringWriter doesn't let you specify the encoding directly, so you need a class like this:
```
public sealed class StringWriterWithEncoding : StringWriter
{
private readonly Encoding encoding;
public StringWriterWithEncoding (Encoding encoding)
{
this.encoding = encoding;
}
public override Encoding Encoding
{
get { return encoding; }
}
}
```
([This question](https://stackoverflow.com/questions/371930/-net-xmlwriter-unexpected-encoding-is-confusing-me) is similar but not quite a duplicate.)
EDIT: To answer the comment: pass the StringWriterWithEncoding to [XmlWriter.Create](http://msdn.microsoft.com/en-us/library/ms162620.aspx) instead of the StringBuilder, then call ToString() on it at the end. | Just some extra explanation as to why this is so.
Strings are sequences of characters, not bytes. Strings, per se, are not "encoded", because they are using characters, which are stored as Unicode codepoints. Encoding DOES NOT MAKE SENSE at String level.
An encoding is a mapping from a sequence of codepoints (characters) to a sequence of bytes (for storage on byte-based systems like filesystems or memory). The framework does not let you specify encodings, unless there is a compelling reason to, like to make 16-bit codepoints fit on byte-based storage.
So when you're trying to write your XML into a StringBuilder, you're actually building an XML sequence of characters and writing them as a sequence of characters, so no encoding is performed. Therefore, no Encoding field.
If you want to use an encoding, the XmlWriter has to write to a Stream.
About the solution that you found with the MemoryStream, no offense intended, but it's just flapping around arms and moving hot air. You're encoding your codepoints with 'windows-1252', and then parsing it back to codepoints. The only change that may occur is that characters not defined in windows-1252 get converted to a '?' character in the process.
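To make the codepoints-versus-bytes distinction concrete — illustrated in Python because the concept is language-agnostic, not because it is specific to this API:

```python
text = "Łódź α"          # Polish letters plus a Greek alpha
# Encoding maps codepoints to bytes; windows-1250 covers Central
# European scripts, so the Polish characters survive a round trip...
raw = text.encode("windows-1250", errors="replace")
assert raw.decode("windows-1250").startswith("Łódź")
# ...but alpha has no windows-1250 mapping and degrades to '?'.
assert raw.endswith(b"?")
```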
To me, the right solution might be the following one. Depending on what your function is used for, you could pass a Stream as a parameter to your function, so that the caller decides whether it should be written to memory or to a file. So it would be written like this:
```
public static void WriteFieldsAsXmlDocument(ICollection fields, Stream outStream)
{
XmlWriterSettings settings = new XmlWriterSettings();
settings.Indent = true;
settings.Encoding = Encoding.GetEncoding("windows-1250");
using(XmlWriter writer = XmlWriter.Create(outStream, settings)) {
writer.WriteStartDocument();
writer.WriteStartElement("data");
foreach (Field field in fields)
{
writer.WriteStartElement("item");
writer.WriteAttributeString("name", field.Id);
writer.WriteAttributeString("value", field.Value);
writer.WriteEndElement();
}
writer.WriteEndElement();
}
}
``` | How to put an encoding attribute to xml other that utf-16 with XmlWriter? | [
"",
"c#",
"encoding",
"xmlwriter",
""
] |
I know that in C#, if you write `~MyClass()`, this basically translates to `override System.Object.Finalize()`. So, whether you write the *destructor* or not, every type in CLR will have a `Finalize()` method in it (of `System.Object` at least).
1] So, does it mean that every object, by default, **has** a finalizer?
2] What is the basis for the CLR to decide that an object should be put through the finalization queue?
I'm asking this, because, I had a class, say `ManagedResourceHolder` that implemented `IDisposable`, but did not call `GC.SuppressFinalize(this)` in its `IDisposable.Dispose()` method. The class did not hold any unmanaged resources, and there was no need for the `~ManagedResourceHolder()` method, which in turn meant no need for the `GC.SuppressFinalize(this)` call as there was no *finalizer*.
3] In the context of the above scenario, is it **always** necessary to provide a finalizer when you implement IDisposable? (even on a class that holds no unmanaged resources)
The FxCop rule [CA1816](http://msdn.microsoft.com/en-us/library/ms182269.aspx) was giving me a violation on this and the response I got [here](http://social.msdn.microsoft.com/Forums/en-US/vstscode/thread/4e8a3155-6f12-4888-9ca1-f6762761e038) when I asked in the CA forum on MSDN confused me.
Thanks. | **Questions 1 and 2**: The CLR basically checks whether or not the finalizer is overridden. If it's not, it treats it as not having a finalizer.
The benefit of having a finalizer in System.Object is that compilers know they can *always* put a call to `base.Finalize()` in. This avoids versioning issues. Consider a world without `System.Object.Finalize()`:
* System.Object (no Finalize)
* Acme.BaseClass (no Finalize)
* MyCompany.DerivedClass (Finalize)
Without a `Finalize` method in object, the finalizer in MyCompany.DerivedClass can't call anything. Which leads to a problem when version 2 of Acme.BaseClass comes out *with* a finalizer. Unless you recompile MyCompany.DerivedClass, an instance of DerivedClass will be finalized without calling BaseClass.Finalize, which is clearly a Bad Thing.
Now consider the same situation *with* System.Object.Finalize - the compiler inserts a call to base.Finalize automatically in DerivedClass.Finalize, which in version 1 just calls the no-op implementation in System.Object. When version 2 of Acme.BaseClass comes out, the call to `base.Finalize` will (without recompilation of DerivedClass) call BaseClass.Finalize.
**Question 3**: No, you don't need to have a finalizer just because you implement IDisposable. Finalizers should only be used for unmanaged resources which *nothing else is going to clean up* - i.e. ones you have a *direct* reference to. For instance, suppose you have a class which has a `FileStream` member variable. You want to implement `IDisposable` so you can close the stream as soon as possible, if the caller remembers - but if they *don't* remember to call `Dispose()`, the stream will become eligible for garbage collection at the same time as your object. Trust that `FileStream` has an appropriate finalizer (or a reference to something else with a finalizer etc) rather than trying to clean it up in your own finalizer.
As of .NET 2.0, with the [SafeHandle](http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.safehandle.aspx) class, it should be *incredibly* rare for you to need your own finalizer. | 1: It only really counts (in the useful sense) if it has been overridden
2: As defined by 1, and GC.SuppressFinalize has not been called (plus re-register etc)
3: certainly not; in fact, unless you are directly handling an unmanaged resource, you **shouldn't** have a finalizer. You shouldn't add a finalizer just because it is IDisposable - but things that have finalizers should also generally be IDisposable. | When does CLR say that an object has a finalizer? | [
"",
"c#",
".net",
"memory-management",
"garbage-collection",
""
] |
I know a lot about C# but this one is stumping me and Google isn't helping.
I have an IEnumerable range of objects. I want to set a property on the first one. I do so, but when I enumerate over the range of objects after the modification, I don't see my change.
Here's a good example of the problem:
```
public static void GenericCollectionModifier()
{
// 1, 2, 3, 4... 10
var range = Enumerable.Range(1, 10);
// Convert range into SubItem classes
var items = range.Select(i => new SubItem() {Name = "foo", MagicNumber = i});
Write(items); // Expect to output 1,2,3,4,5,6,7,8,9,10
// Make a change
items.First().MagicNumber = 42;
Write(items); // Expect to output 42,2,3,4,5,6,7,8,9,10
// Actual output: 1,2,3,4,5,6,7,8,9,10
}
public static void Write(IEnumerable<SubItem> items)
{
Console.WriteLine(string.Join(", ", items.Select(item => item.MagicNumber.ToString()).ToArray()));
}
public class SubItem
{
public string Name;
public int MagicNumber;
}
```
What aspect of C# stops my "MagicNumber = 42" change from being output? Is there a way I can get my change to "stick" without doing some funky converting to List<> or array?
Thanks!
-Mike | When you call First() it enumerates over the result of this bit of code:
```
Select(i => new SubItem() {Name = "foo", MagicNumber = i});
```
Note that the Select is a lazy enumerator, meaning that it only does the select when you ask for an item from it (and does it *every* time you ask it). The results are not stored anywhere, so when you call items.First() you get a new `SubItem` instance. When you then pass items to Write, it gets a whole bunch of new `SubItem` instances - not the one you got before.
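As a hypothetical cross-language illustration, the same deferred-execution trap can be reproduced in Python with an object that re-runs a generator factory on every enumeration:

```python
class SubItem:
    def __init__(self, n):
        self.magic = n

class LazySelect:
    """Re-runs its factory on each enumeration, like an un-materialized Select."""
    def __init__(self, factory):
        self.factory = factory
    def __iter__(self):
        return self.factory()

items = LazySelect(lambda: (SubItem(i) for i in range(1, 11)))
next(iter(items)).magic = 42              # mutates a throwaway instance...
assert [x.magic for x in items][0] == 1   # ...so the change is "lost"

snapshot = list(items)                    # the ToList() equivalent
snapshot[0].magic = 42
assert [x.magic for x in snapshot][0] == 42   # now the change sticks
```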
If you want to store the result of your select and modify it, you need to do something like:
```
var items = range.Select(i => new SubItem() {Name = "foo", MagicNumber = i}).ToList();
``` | I suspect something is going on in the background. Most likely it is due to the fact that IEnumerables can only be iterated once.
Does it work if you add a 'ToList()' after the call to Select() when assigning to 'items'? | Why does setting a property on an enumerated object not work? | [
"",
"c#",
"ienumerable",
""
] |
Given the following code:
```
import ctypes
ip="192.168.1.1"
thisdll = ctypes.cdll['aDLL']
thisdll.functionThatExpectsAnIP(ip)
```
how can I correctly pack this for a DLL that expects it as a c\_ulong datatype?
I've tried using:
```
ip_netFrmt = socket.inet_aton(ip)
ip_netFrmt_c = ctypes.c_ulong(ip_netFrmt)
```
however, the `c_ulong()` method returns an error because it needs an integer.
is there a way to use `struct.pack` to accomplish this? | The inet\_aton returns a string of bytes. This used to be the *lingua franca* for C-language interfaces.
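Putting the pieces together for the original goal, a sketch of the full conversion (the actual DLL call stays commented out, since 'aDLL' and the function name are placeholders from the question):

```python
import ctypes
import socket
import struct

ip = "192.168.1.1"
# inet_aton gives 4 bytes in network order; "!L" unpacks them
# as one big-endian unsigned 32-bit integer...
(n,) = struct.unpack("!L", socket.inet_aton(ip))
# ...which is exactly what ctypes.c_ulong wants.
ip_c = ctypes.c_ulong(n)
assert ip_c.value == 0xC0A80101  # 192.168.1.1
# thisdll.functionThatExpectsAnIP(ip_c)  # placeholder DLL call from the question
```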
Here's how to unpack those bytes into a more useful value.
```
>>> import socket
>>> packed_n= socket.inet_aton("128.0.0.1")
>>> import struct
>>> struct.unpack( "!L", packed_n )
(2147483649L,)
>>> hex(_[0])
'0x80000001L'
```
This unpacked value can be used with ctypes. The hex thing is just to show you that the unpacked value looks a lot like an IP address. | First a disclaimer: This is just an educated guess.
An IP address is traditionally represented as four bytes - i.e. xxx.xxx.xxx.xxx, but is really an unsigned long. So you should convert the representation 192.168.1.1 to an unsigned int. You could convert it like this.
```
ip="192.168.1.1"
ip_long = reduce(lambda x,y:x*256+int(y), ip.split('.'), 0)
``` | Python: packing an ip address as a ctype.c_ulong() for use with DLL | [
"",
"python",
"dll",
"ip-address",
"ctypes",
""
] |
I have an HTML element with a large collection of unordered lists contained within it. I need to clone this element to place elsewhere on the page with different styles added (this is simple enough using jQuery).
```
$("#MainConfig").clone(false).appendTo($("#smallConfig"));
```
The problem, however, is that all the lists and their associated list items have IDs and `clone` duplicates them. Is there an easy way to replace all these duplicate IDs using jQuery before appending? | If you need a way to reference the list items after you've cloned them, you must use classes, not IDs. Change all id="..." to class="..."
If you are dealing with legacy code or something and can't change the IDs to classes, you must remove the id attributes before appending.
```
// strip the id from the clone itself, append it, then strip the ids of all descendants
$("#MainConfig").clone(false).removeAttr("id").appendTo($("#smallConfig")).find("*").removeAttr("id");
```
Just be aware that you don't have a way to reference individual items anymore. | Since the OP asked for a way to replace all the duplicate id's before appending them, maybe something like this would work. Assuming you wanted to clone MainConfig\_1 in an HTML block such as this:
```
<div id="smallConfig">
<div id="MainConfig_1">
<ul>
<li id="red_1">red</li>
<li id="blue_1">blue</li>
</ul>
</div>
</div>
```
The code could be something like the following, to find all child elements (and descendants) of the cloned block, and modify their id's using a counter:
```
var cur_num = 1; // Counter used previously.
//...
var cloned = $("#MainConfig_" + cur_num).clone(true, true).get(0);
++cur_num;
cloned.id = "MainConfig_" + cur_num; // Change the div itself.
$(cloned).find("*").each(function(index, element) { // And all inner elements.
if(element.id)
{
var matches = element.id.match(/(.+)_\d+/);
if(matches && matches.length >= 2) // Captures start at [1].
element.id = matches[1] + "_" + cur_num;
}
});
$(cloned).appendTo($("#smallConfig"));
```
To create new HTML like this:
```
<div id="smallConfig">
<div id="MainConfig_1">
<ul>
<li id="red_1">red</li>
<li id="blue_1">blue</li>
</ul>
</div>
<div id="MainConfig_2">
<ul>
<li id="red_2">red</li>
<li id="blue_2">blue</li>
</ul>
</div>
</div>
``` | jQuery clone duplicate IDs | [
"",
"javascript",
"jquery",
"html",
""
] |
I have a fairly expensive array calculation (SpectralResponse) which I like to keep to a minimum. I figured the best way is to store them and bring it back up when same array is needed again in the future. The decision is made using BasicParameters.
So right now, I use a LinkedList of objects for the arrays of SpectralResponse, and another LinkedList for the BasicParameters. And BasicParameters has an isParamsEqualTo(BasicParameters) method to compare the parameter set.
```
LinkedList<SpectralResponse> responses
LinkedList<BasicParameters> fitParams
LinkedList<Integer> responseNumbers
```
So to look up, I just go through the list of BasicParameters, check for match, if matched, return the SpectralResponse. If no match, then calculate the SpectralResponse.
Here's the for loop I used for the lookup.
```
// size: LinkedList size, limited to a reasonable value
// responseNumber: just another variable to distinguish the SpectralResponse.
for ( i = size-1; i > 0 ; i--) {
if (responseNumbers.get(i) == responseNum)
{
tempFit = fitParams.get(i);
if (tempFit.isParamsEqualTo(fit))
{
return responses.get(i);
}
}
}
```
But somehow, doing it this way not only takes up lots of memory, it's actually slower than just calculating SpectralResponse straight. Much slower.
So is it my implementation that's wrong, or was I mistaken that precalculating and lookup would be faster? | You are accessing a LinkedList by index, this is the worst possible way to access it ;)
You should use ArrayList instead, or use iterators for all your lists.
Possibly you should merge the three objects into one, and keep them in a map with responseNum as key.
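That map idea could look like the following sketch (Python for brevity; in Java a HashMap keyed on responseNum plus the parameter set does the same job — the helper names here are made up):

```python
# Cache keyed on (responseNum, params): one O(1) dict lookup replaces
# the repeated O(n) LinkedList.get(i) walks of the original loop.
cache = {}

def expensive_calculation(response_num, params):
    return sum(params) * response_num     # stand-in for the real work

def spectral_response(response_num, params):
    key = (response_num, params)          # params must be hashable, e.g. a tuple
    if key not in cache:
        cache[key] = expensive_calculation(response_num, params)
    return cache[key]

assert spectral_response(2, (1.0, 2.5)) == 7.0
assert spectral_response(2, (1.0, 2.5)) == 7.0  # second call is a cache hit
```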
Hope this helps! | You probably should use an array type (an actual array, like Vector or ArrayList), not linked lists. Linked lists are best for stack or queue operations, not indexing (since you have to traverse the list from one end). Vector is an auto-resizing array, which has less overhead when accessing indexes. | Storing & lookup double array | [
"",
"java",
"lookup",
""
] |
I have a scenario where I start 3..10 threads with the ThreadPool.
Each thread does its job and returns to the ThreadPool.
What are possible options to be notified in main thread when all background threads have finished?
Currently I'm using a homegrown method: incrementing a variable for each created thread and decrementing it when a background thread is about to finish.
This works just fine, but I was curious if there are better options. | Decrementing a variable (between threads) is a little bit risky unless done with `Interlocked.Decrement`, but that approach should be fine if you have the last thread (i.e. when it gets to zero) raise an event. Note that it would have to be in a "finally" block to avoid losing it in the case of exceptions (plus you don't want to kill the process).
In "Parallel Extensions" (or with .NET 4.0), you might also look at the `Parallel.ForEach` options here... that might be another way of getting everything done as a block. Without having to watch them all manually. | Try this: <https://bitbucket.org/nevdelap/poolguard>
```
using (var poolGuard = new PoolGuard())
{
for (int i = 0; i < ...
{
ThreadPool.QueueUserWorkItem(ChildThread, poolGuard);
}
// Do stuff.
poolGuard.WaitOne();
// Do stuff that required the child threads to have ended.
}
void ChildThread(object state)
{
var poolGuard = state as PoolGuard;
if (poolGuard.TryEnter())
{
try
{
// Do stuff.
}
finally
{
poolGuard.Exit();
}
}
}
```
Multiple PoolGuards can be used in different ways to track when threads have ended, and handles threads that haven't started when the pool is already closed. | be notified when all background threadpool threads are finished | [
"",
"c#",
"multithreading",
"threadpool",
""
] |
I'm building a questionnaire MVC web app, and I can't figure out how to pass an unknown number of arguments to the controller from the form.
My form is something like:
```
<% using (Html.BeginForm())
{ %>
<div id="Content">
<% foreach (var group in ViewData.Model.QuestionGroups)
{ %>
<div class="Group">
<%=group.Description %>
<% foreach (var question in group.Questions)
{%>
<div class="Question">
<div class="QuestionTitle">
<%=question.Title %>
</div>
<%=Html.Hidden("Id", question.ID) %>
<div class="QuestionText">
<%switch (question.TypeAsEnum)
{
case QuestionTypeEnum.Text:%>
<%=Html.TextBox("somename") %>
<% break;
case QuestionTypeEnum.Number:%>
<%=Html.TextBox("somename") %>
<% break;
case QuestionTypeEnum.PhoneNumber:%>
<%=Html.TextBox("somename")%>
<% break;
case QuestionTypeEnum.Email:%>
<%=Html.TextBox("somename")%>
<% break;
case QuestionTypeEnum.Date:%>
<%=Html.TextBox("somename")%>
<% break;
case QuestionTypeEnum.YesNo:%>
<%=Html.RadioButton("somename", true)%>
<%=Html.RadioButton("somename", false)%>
<% break;
case QuestionTypeEnum.Alternative:%>
<%=Html.DropDownList("somename", question.Answers)%>
<% break;
}%>
</div>
</div>
<% } %>
</div>
<% } %>
</div>
<div id="submittButton">
<%=Html.SubmitButton()%></div>
<% } %>
```
Now what I need in my controller is List< ResponseAnswer >,
where ResponseAnswer has the properties:
string questionID,
string AnswerText,
bool AnswerBool,
number AnswerNumber,
...
So how can I pass an unknown number of items containing questionID, AnswerType and Answer to the controller.
In WebForms I solved this by rendering the form with repeaters instead of foreach, and then iterating through the question repeater checking the control id, each repeater item containing a hidden questionid element and an input with id=AnswerType.
But wouldn't this seriously break separation of concerns in MVC?
So is there any way of getting my controller to accept List< ResponseAnswer > and somehow build this list without breaking SoC, and if not, how do I pass the entire form result back to the controller so I can do the iteration of the form data there instead of in the view? | Garry's answer will work (and hence, up voted). However, you can model bind directly to a list, and I think it's a bit more elegant. [There are instructions in this blog post.](http://haacked.com/archive/2008/10/23/model-binding-to-a-list.aspx) | You can add an argument to your action like
```
public ActionResult MyAction(FormCollection form)
```
Then the `form` parameter will contain all the data from the posted form. From that you can do what you want.
You could probably implement a binder that could map to ResponseAnswers but I have no experience of doing that so I'll leave that to someone else. | Asp.Net MVC form, with unknown parameters for Controller | [
"",
"c#",
"asp.net-mvc",
""
] |
I've been a fan of EasyMock for many years now, and thanks to SO I came across references to PowerMock and its ability to mock constructors and static methods, both of which cause problems when retrofitting tests to a legacy codebase.
Obviously one of the huge benefits of unit testing (and TDD) is the way it leads to (forces?) a much cleaner design, and it seems to me that the introduction of PowerMock may detract from that. I would see this mostly manifesting itself as:
1. Going back to initialising collaborators rather than injecting them
2. Using statics rather than making the method be owned by a collaborator
In addition to this, something doesn't quite sit right with me about my code being bytecode manipulated for the test. I can't really give a concrete reason for this, just that it makes me feel a little uneasy as it's just for the test and not for production.
At my current gig we're really pushing for the unit tests as a way for people to improve their coding practices, and it feels like introducing PowerMock into the equation may let people skip that step somewhat, so I'm loath to start using it. Having said that, I can really see where making use of it can cut down on the amount of refactoring that needs to be done to *start* testing a class.
I guess my question is, what are people's experiences of using PowerMock (or any other similar library) for these features? Would you make use of them, and overall, how much do you want your tests influencing your design? | I think you're right to be concerned. Refactoring legacy code to be testable isn't **that hard** in *most* cases once you've learned how.
Better to go a bit slower and have a supportive environment for learning than take a short cut and learn bad habits.
(And I just [read this](http://www.infoq.com/articles/levison-TDD-adoption-strategy) and feel like it is relevant.) | I have to strongly disagree with this question.
There is no justification for a mocking tool that limits design choices. It's not just static methods that are ruled out by EasyMock, EasyMock Class Extension, jMock, Mockito, and others. These tools also prevent you from declaring classes and methods `final`, and that alone is a very bad thing. (If you need one authoritative source that defends the use of `final` for classes and methods, see the "Effective Java" book, or watch this [presentation](http://www.youtube.com/watch?v=aAb7hSCtvGw) from the author.)
And "initialising collaborators rather than injecting them" often is the *best* design, in my experience. If you decompose a class that solves some complex problem by creating helper classes that are instantiated from that class, you can take advantage of the ability to safely pass specific data to those child objects, while at the same time hiding them from client code (which provided the full data used in the high-level operation). Exposing such helper classes in the public API violates the principle of information hiding, breaking encapsulation and increasing the complexity of client code.
The abuse of DI leads to stateless objects which really should be stateful because they will *almost always* operate on data that is specific to the business operation.
This is not only true for non-public helper classes, but also for public "business service" classes called from UI/presentation objects. Such service classes are usually internal code (to a single business application) that is inherently not reusable and have only a few clients (often only one) because such code is by nature *domain/use-case specific*.
In such a case (a very common one, by the way) it makes much more sense to have the UI class directly instantiate the business service class, passing data provided by the user through a constructor.
Being able to easily write unit tests for code like this is precisely what led me to create the [JMockit](https://jmockit.github.io/) toolkit. I wasn't thinking about legacy code, but about simplicity and economy of design. The results I achieved so far convinced me that *testability* really is a function of two variables: the *maintainability* of production code, and the limitations of the mocking tool used to test that code. So, if you remove *all* limitations from the mocking tool, what do you get? | Using PowerMock or How much do you let your tests affect your design? | [
"",
"java",
"unit-testing",
"dependency-injection",
"junit",
""
] |
I am looking at moving my company's internal business app from VB.NET to PHP. Some of the people were worried about losing GUI features that can be found in .NET. I am under the impression that with the right JavaScript framework, anything in a .NET GUI can be replicated.
While I am still researching this point, I would like to ask if form features in .NET GUI can in fact be replicated with javascript and slightly more importantly, will it take much longer to develop to get the same results? | First off: to answer your question.
A Tree Control is hard to emulate in a web environment. Doable, but hard (look at Yahoo's YUI for an example).
* **State**: you get it in WinForms, not in the web. This has more to do with how people use the application.
* **Interaction**: That is easier on WinForms than web. Again, it is doable, but more layers are involved.
* **Data Size**: How much data is being displayed? You don't see grids with thousands of records on the web; that can be common on WinForms. This can affect web load times more than WinForms.
* **Testing**: how many browsers do you have to test with? The JavaScript/CSS differences between the browsers can make life difficult. Ah heck, IE6 will make your life difficult if you have to develop for that.
* **Development time**: that is about the same for a developer experienced in both environments.
But, there are a number of other questions that pop up in your initial statement.
Why go from WinForms VB.Net to PHP? Instead, go to ASP.Net with VB.Net. Might save you from completely reinventing the wheel. Also, then you won't have to learn how to convert between frameworks. | I will say that yes, anything "CAN" be replicated, but the amount of time to do it might be a big bottleneck right away.
I am going to assume that your current application is an ASP.NET application and that you are not moving from WinForms. (If you are the answers are still pretty much the same...but I might add a few extra comments).
Out of the box, with drag-and-drop functionality, from a UI perspective you get data validation and many other general items that, although they are done client-side, don't require you to write a single line of JavaScript to get working. This is a big cost saver. Can it be replicated elsewhere? Yes, but it takes time.
Secondly, you have the very easy to work with ASP.NET AJAX functionality combined with the AJAX Control Toolkit. These again allow you to use .NET logic to put things together, but can they be replicated? Yes; jQuery and many other AJAX frameworks contain similar items.
The biggest point that I have to make is that if you already have something and are familiar with the language and technology behind it, why scrap it, risk not getting done on time, and enter the unknown world of a new language? That is just my $0.02, I guess. | Javascript RIA vs .NET GUI | [
"",
"javascript",
"vb.net",
""
] |
If an object has a property that is a collection, should the object create the collection object or make a consumer check for null? I know the consumer should not assume, just wondering if most people create the collection object if it is never added to. | You can also use the "Lazy initializer" pattern where the collection is not initialized until (and unless) someone accesses the property getter for it... This avoids the overhead of creating it in those cases where the parent object is instantiated for some other purpose that does not require the collection...
```
public class Division
{
private int divId;
public int DivisionId { get; set; }
private Collection<Employee> emps;
public Collection<Employee> Employees
{ get { return emps ?? (emps = new Collection<Employee>(DivisionId)); } }
}
```
EDIT: This implementation pattern is, in general, not thread safe... emps could be read by two different threads as null before the first thread finishes modifying it. In this case, it probably does not matter as DivisionId is immutable and although both threads would get different collections, they would both be valid. When the second thread finishes, therefore, emps would be a valid collection. The 'probably' is because it might be possible for the first thread to start using emps before the second thread resets it. That would not be thread-safe. Another slightly more complex implementation from Jon Skeet is thread-safe (see [This article on Singletons](http://www.yoda.arachsys.com/csharp/singleton.html) for his example/discussion on how to fix this). | This depends on the contract you have between your API and the user.
Personally, I like a contract that makes the Object manage its collections, i.e., instantiating them on creation, and ensuring that they can't be set to null via a setter - possibly by providing methods to manage the collection rather than setting the collection itself.
i.e., addFoo(Foo toAdd) instead of setFooSet(Set set), and so on.
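A minimal sketch of that contract (all names illustrative): the owner creates its collection up front, exposes add methods rather than a setter, and hands out a read-only view, so consumers never have to null-check:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Team {
    // Created by the owner at construction time; never null, never replaced.
    private final List<String> members = new ArrayList<>();

    // Manage the collection through the object's own methods...
    public void addMember(String name) {
        members.add(name);
    }

    // ...and expose only a read-only view, so callers can't mutate it.
    public List<String> getMembers() {
        return Collections.unmodifiableList(members);
    }
}
```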
But it's up to you. | Initialize a collection within an object? | [
"",
"c#",
".net",
"oop",
""
] |
Why are we not able to override an instance variable of a super class in a subclass? | Because if you changed the implementation of a data member it would quite possibly break the superclass (imagine changing a superclass's data member from a float to a String). | He perhaps meant to try and override the value used to **initialize** the variable.
For example,
## Instead of this (which is illegal)
```
public abstract class A {
String help = "**no help defined -- somebody should change that***";
// ...
}
// ...
public class B extends A {
// ILLEGAL
@Override
String help = "some fancy help message for B";
// ...
}
```
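Note that without the `@Override` annotation the redeclaration would actually compile, but it would merely *hide* the superclass field rather than override it; which field you get depends on the static type of the reference. A sketch of that pitfall:

```java
// Field access is resolved statically, so the superclass's own code
// still reads its own value even through a subclass instance.
class Base {
    String help = "base help";
    String describe() { return help; } // always reads Base.help
}

class Derived extends Base {
    String help = "derived help";      // hides, does not override
}

public class HidingDemo {
    public static void main(String[] args) {
        Base b = new Derived();
        System.out.println(b.help);             // base help
        System.out.println(b.describe());       // base help
        System.out.println(((Derived) b).help); // derived help
    }
}
```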
## One should do
```
public abstract class A {
public String getHelp() {
return "**no help defined -- somebody should change that***";
}
// ...
}
// ...
public class B extends A {
@Override
public String getHelp() {
return "some fancy help message for B";
// ...
    }
}
``` | Overriding a super class's instance variables | [
"",
"java",
"inheritance",
""
] |
I've seen a class which is defined like this...
```
class StringChild : public StringBase
{
public:
//some non-virtual functions
static StringChild* CreateMe(int size);
private:
unsigned char iBuf[1];
};
```
The static factory function has the following implementation..
```
return new(malloc(__builtin_offsetof(StringChild ,iBuf[size]))) StringChild();
```
So as far as I understand it this function is using placement new to extend this class.
Is this safe only because there is only 1 member and it's allocated on the heap? | It's an old C trick that was used to work around the non-availablity of variable length arrays in plain C. Yes, it also works in C++ as long as you use suitable allocator constructs (like allocating a bunch of raw memory the desired size and then placement newing the object in there). It's safe as long as you don't wander over the end of the allocated memory, but it does tend to confuse at least some memory debuggers.
One thing you have to make absolutely certain when using this technique is that the variable length array is the last element in the object layout, otherwise you'll walk over other internal variables.
I am however a little dubious about the implementation of the factory function - I assume the 'size' parameter is actually the desired array size? Also, don't forget that you'd have to release the memory above using 'free' and not 'delete', even though the latter might work in most cases.
Unless there's a compelling reason as to why the memory has to be managed this way, I would simply replace the array with a std::vector. | This should be OK for PODs provided iBuf is the last member of the structure. The problems with non-PODs could be that eg. compiler is free to reorder public/private/protected members, virtual base classes end up at the end of the most derived object IIUC, etc.
Your structure is non-POD (it has a base class) so I wouldn't recommend it.
Also, if you create instances like this
```
return new(malloc(__builtin_offsetof(StringChild ,iBuf[size]))) StringChild();
```
You should make sure that memory acquired by malloc should be freed with free, so delete your instances like this:
```
obj->~StringChild();
free(obj);
```
Maybe you'd like to use `::operator new()` for allocation | Variable sized class - C++ | [
"",
"c++",
"placement-new",
"memory-layout",
""
] |
Here is the situation:
User looks something up.
* Alert sound is played because there is a notice on the item he looked up
* User closes the notice - the application continues to retrieve information
* User is sent a 'ding' telling them the information has finished retrieving
* Application begins sending certain attributes to TextToSpeech
* The application encounters a second, internal notice on the data and sends another alert (TextToSpeech is paused, then resumed after the alert finishes)
* Text to speech is finished, another 'ding' is played letting the user know that is has read everything
We currently use a queue to process sounds in an orderly fashion to prevent overlapping of sounds. We currently only play a sound from the queue when the SoundAdded event is called. This is a custom event we made to be raised when we add a sound to the queue. Currently, only one sound will be played per call of the event. We have run into the problem where after certain sounds are played, they will 'lock' the queue, preventing further sounds from being processed. We got around this by always 'unlocking' the queue even if a sound doesn't play. However, this still leaves us with the problem of the queue getting backed up with sounds to play.
The only method I can think of to solve this is to add a loop to keep trying to play the sound if there is still a sound waiting to be played. Currently, the main thread is handling sounds; I would probably need to move sound handling to a separate thread, but I'm not sure what the best method of handling the sound would be.
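As a language-neutral sketch of that dedicated-thread idea (written in Java here purely for illustration; the `played` log is a hypothetical stand-in for actual playback): the worker blocks on the queue and plays each sound to completion before taking the next, so sounds never overlap and none are dropped:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SoundWorker implements Runnable {
    private final BlockingQueue<String> sounds = new LinkedBlockingQueue<>();
    final List<String> played = new ArrayList<>(); // stand-in for real playback

    // Called from the main thread; never blocks, never loses a sound.
    public void enqueue(String sound) {
        sounds.add(sound);
    }

    // One non-blocking step, handy outside the worker thread: play the
    // next queued sound (fully, before returning) if there is one.
    public boolean playNextIfAvailable() {
        String next = sounds.poll();
        if (next == null) return false;
        played.add(next);
        return true;
    }

    // The worker thread's loop: block for each sound and play it to
    // completion, so there is no need for per-sound lock/unlock bookkeeping.
    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                String next = sounds.take();
                played.add(next);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // shut down cleanly
        }
    }
}
```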
So... my question... Is there a good sound library that we can use that includes an event for the end of a sound. Or... what would be the best way to create such a thing? | I don't know if there's already a .NET library that would let you do this, but I think you could pretty easily P/Invoke [PlaySound](http://msdn.microsoft.com/en-us/library/ms712879.aspx) from the Windows API and use it to play your sounds. As long as you don't specify the SND\_ASYNC flag, it should block until the sound is done playing at which point you can play the next sound. As you mentioned, you'll definitely want to do this in a different thread. | Extending Jon's answer: in .NET 2.0 and above, you can use [My.Computer.Audio.Play](http://msdn.microsoft.com/en-us/library/cf1shcah.aspx) with the [AudioPlayMode.WaitToComplete](http://msdn.microsoft.com/en-us/library/17c3wc6k.aspx) option.
edit: To use this in a C# context, see [How to: Use the My Namespace (C# Programming Guide)](http://msdn.microsoft.com/en-us/library/ms173136.aspx) from Microsoft. | Determining when a sound has finished playing in C# | [
"",
"c#",
".net",
"audio",
""
] |
Is there a C# equivalent method to Java's `Exception.printStackTrace()` or do I have to write something myself, working my way through the InnerExceptions? | Try this:
```
Console.WriteLine(ex.ToString());
```
From <http://msdn.microsoft.com/en-us/library/system.exception.tostring.aspx>:
> The default implementation of ToString obtains the name of the class that threw the current exception, the message, the result of calling ToString on the inner exception, and the result of calling Environment.StackTrace. If any of these members is null, its value is not included in the returned string.
Note that in the above code the call to `ToString` isn't required as there's an overload that takes `System.Object` and calls `ToString` directly. | I would like to add: If you want to print the stack outside of an exception, you can use:
```
Console.WriteLine(System.Environment.StackTrace);
``` | C# equivalent to Java's Exception.printStackTrace()? | [
"",
"c#",
".net",
"exception",
"stack-trace",
""
] |
I need to get the lesser n numbers of a list in Python. I need this to be really fast because it's in a critical part for performance and it needs to be repeated a lot of times.
n is usually no greater than 10 and the list usually has around 20000 elements. The list is always different each time I call the function. Sorting can't be made in place.
Initially, I wrote this function:
```
def mins(items, n):
mins = [float('inf')]*n
for item in items:
for i, min in enumerate(mins):
if item < min:
mins.insert(i, item)
mins.pop()
break
return mins
```
But this function can't beat a simple sorted(items)[:n] which sort the entire list. Here is my test:
```
from random import randint, random
import time
test_data = [randint(10, 50) + random() for i in range(20000)]
init = time.time()
mins = mins(test_data, 8)
print 'mins(items, n):', time.time() - init
init = time.time()
mins = sorted(test_data)[:8]
print 'sorted(items)[:n]:', time.time() - init
```
Results:
```
mins(items, n): 0.0632939338684
sorted(items)[:n]: 0.0231449604034
```
sorted()[:n] is three times faster. I believe this is because:
1. insert() operation is costly because Python lists are not linked lists.
2. sorted() is an optimized c function and mine is pure python.
Is there any way to beat sorted()[:n] ?
Should I use a C extension, or Pyrex or Psyco or something like that?
Thanks in advance for your answers. | You actually want a sorted sequence of mins.
```
mins = items[:n]
mins.sort()
for i in items[n:]:
if i < mins[-1]:
mins.append(i)
mins.sort()
mins = mins[:n]
```
This runs *much* faster because you aren't even looking at mins unless it's provably got a value larger than the given item. About 1/10th the time of the original algorithm.
This ran in zero time on my Dell. I had to run it 10 times to get a measurable run time.
```
mins(items, n): 0.297000169754
sorted(items)[:n]: 0.109999895096
mins2(items)[:n]: 0.0309998989105
```
Using `bisect.insort` instead of append and sort may speed this up a hair further. | ```
import heapq
nlesser_items = heapq.nsmallest(n, items)
```
Here's a correct version of [S.Lott's algorithm](https://stackoverflow.com/questions/350519/getting-the-lesser-n-elements-of-a-list-in-python#350568):
```
from bisect import insort
from itertools import islice
def nsmallest_slott_bisect(n, iterable, insort=insort):
it = iter(iterable)
mins = sorted(islice(it, n))
for el in it:
if el <= mins[-1]: #NOTE: equal sign is to preserve duplicates
insort(mins, el)
mins.pop()
return mins
```
Performance:
```
$ python -mtimeit -s "import marshal; from nsmallest import nsmallest$label as nsmallest; items = marshal.load(open('items.marshal','rb')); n = 10"\
"nsmallest(n, items)"
```
```
nsmallest_heapq
100 loops, best of 3: 12.9 msec per loop
nsmallest_slott_list
100 loops, best of 3: 4.37 msec per loop
nsmallest_slott_bisect
100 loops, best of 3: 3.95 msec per loop
```
`nsmallest_slott_bisect` is **3 times faster** than `heapq`'s `nsmallest` (for n=10, len(items)=20000). `nsmallest_slott_list` is only marginally slower. It is unclear why heapq's nsmallest is so slow; its algorithm is almost identical to the presented above (for small n). | Getting the lesser n elements of a list in Python | [
"",
"python",
"algorithm",
"sorting",
""
] |
We use a number of different web services in our company: wiki (MoinMoin), bugtracker (internally), requestracker (customer connection), Subversion. Is there a way to parse the wiki pages so that if I write "... in Bug1234 you could ..." Bug1234 would be rendered as a link to `http://mybugtracker/bug1234` | check out the InterWiki page in MoinMoin (most wikis have them); we use Trac, for example, and you can set up different link paths to point to your different web resources. So in our Trac you can go [[SSGWiki:Some Topic]] and it will point to another internal wiki. | add to the file `data/intermap.txt` (create if not existing, but that should not happen) a line like
```
wpen http://en.wikipedia.org/wiki/
```
so that you can write `[[wpen:MoinMoin]]` instead of `http://en.wikipedia.org/wiki/MoinMoin`
I also have
```
wpfr http://fr.wikipedia.org/wiki/
wpde http://de.wikipedia.org/wiki/
```
the `data/intermap.txt` file also gives other examples that serve just like bookmarklets in Firefox. In your case, using:
```
tracker http://mybugtracker/
```
you would issue `[[tracker:bug1234]]` | How to use InterWiki links in moinmoin? | [
"",
"python",
"wiki",
"moinmoin",
""
] |
I recently upgraded a Web Application Project (as well as some dependent projects) from .NET 2.0 to .NET 3.5 using the built-in conversion tool. Everything works well, such as using MS AJAX 3.5 vs. the external MS AJAX libraries in 2.0.
My problem occurs when I try using the new lambda expression syntax. The compiler will not recognize lambda expressions as valid syntax. The target framework version is set to 3.5 in all projects in the solution. I was also able to successfully use lambda expressions in a Library Project in the same solution.
This is the code that is giving me the error. Nothing too special.
```
ObjectFactory.Initialize(x =>
{
x.ForRequestedType<IUnitIdSequencingService>().TheDefaultIsConcreteType<UnitIdSequencingService>();
x.ForRequestedType<IGadgetDAO>().TheDefault.Is.OfConcreteType<GadgetDAO>().WithCtorArg("instance").EqualToAppSetting("OSHAInspectionManager");
});
```
The specific errors I am getting are:
```
Error 102 Invalid expression term '>' D:\projects\bohlco\pmr\PMR\Web\App_Code\Bootstrapper.cs 13 41 D:\...\Web\
```
Any help would be greatly appreciated. I have been searching Google with little luck. | If any of the pages are being compiled by ASP.NET (i.e. you aren't pre-compiling the WAP), then you'll need to ensure that ASP.NET knows about the C# 3.0 (.NET 3.5) compiler. Ensure the following is in the `web.config`:
```
<system.codedom>
<compilers>
<compiler language="c#;cs;csharp"
extension=".cs"
warningLevel="4"
type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=
<providerOption name="CompilerVersion" value="v3.5"/>
<providerOption name="WarnAsError" value="false"/>
</compiler>
</compilers>
</system.codedom>
```
Also, if you are hosting in IIS, ensure that the correct folder is set as an application, and that it is using ASP.NET v2.blah (not v1.1.blah). | I don't have much experience with the VS 2008 conversion tool, but I know other project conversion tools have had "issues". I'd recommend you compare the .csproj file for your 'broken' project to one that is working. Maybe the conversion utility broke something in your project. You could also try creating a new project and copying over all the source files as well. | Visual Studio 2008 doesn't recognize Lambda Expression Syntax | [
"",
"c#",
"asp.net",
"visual-studio-2008",
".net-3.5",
"lambda",
""
] |
In a way following on from [reading a windows \*.dmp file](https://stackoverflow.com/questions/158534/reading-a-windows-dmp-file)
Having received a dump file from a random customer, and running the debug session to see the crash, you often find it is in an MS or other third-party library. The next issue is that you may not have knowledge of the PC setup to such an extent that you can ensure you have the actual modules available.
For instance I'm currently stuck trying to get symbols to load for ntdll.dll (5.01.2600.5512). In MSVC 2005 the path column in the modules list window shows a \* before the fully pathed file name, and refuses to load symbols I have downloaded for XP/SP1/SP1a/SP2/SP3.
I have the symbol server setup to download from the internet and store in a local cache which seems to have been working fine for modules that I do have on my PC.
Using the GUI equivalent to the method
```
Set _NT_SYMBOL_PATH=srv*d:\SymbolCache*\\server1\Third-Party-PDB;srv*d:\SymbolCache*\\server2\Windows\Symbols*http://msdl.microsoft.com/download/symbols
```
Perhaps I have the wrong symbols, but as new ones are not downloading, where do I go next? Do I have to contact the customer and ask what SP they have installed, and any other patches? Do I have to install that machine and then run up the debugger with the dmp file to get the symbols I need? | What are you using to debug the minidump? I.e., WinDBG or Visual Studio? And how was the minidump generated?
There should be enough information in the minidump to resolve system dll symbols correctly. Are you using a local download of symbols or <http://msdl.microsoft.com/>?
Update: You should be able to add the public microsoft symbol store to Tools->Options->Debugging->Symbols->Symbol file (.pdb) locations, and then manually load the symbols by right clicking on the module in the Modules window and loading them if it isn't done automatically.
It's also possible (likely) that VS 2005 doesn't look at `_NT_SYMBOL_PATH` to resolve minidump symbols. | If you are using WinDbg (part of the [Debugging Tools for Windows](http://www.microsoft.com/whdc/devtools/debugging/default.mspx) package), then it's simple to have it pull the right symbols for you from Microsoft automatically. Configure the symbol path using the ".symfix" (or ".symfix+", to simply append to your existing symbol search path) command.
Once you have that done and you have the crash dump loaded in WinDbg, type ".reload /f" to cause WinDbg to reload the symbols. It will use the information within the dump file itself to pull the correct symbols from Microsoft's public symbol server, regardless of what DLLs you have on your machine.
If for some reason the symbols aren't loading properly after you have done this, enter "!sym noisy" into WinDbg's command window and reload the symbols again. As WinDbg attempts to load them, you will see it output any errors that it encounters in its search/load process. These error messages will help you further diagnose what is going wrong and why the correct symbols aren't being loaded.
[This post](https://stackoverflow.com/questions/389103/debugging-tools-for-windows-symbol-proxy-doesnt-proxy#390243) has information that may also be of use. | How do you identify (and get access to) modules/debug symbols to use when provided a windows .dmp or .minidmp | [
"",
"c++",
"module",
"symbols",
"dump",
"minidump",
""
] |
How do you do it? Given a byte array:
```
byte[] foo = new byte[4096];
```
How would I get the first x bytes of the array as a separate array? (Specifically, I need it as an `IEnumerable<byte>`)
This is for working with `Socket`s. I figure the easiest way would be array slicing, similar to Perl's syntax:
```
@bar = @foo[0..40];
```
Which would return the first 41 elements into the `@bar` array. Is there something in C# that I'm just missing, or is there some other thing I should be doing?
LINQ is an option for me (.NET 3.5), if that helps any. | Arrays are enumerable, so your `foo` already is an `IEnumerable<byte>` itself.
Simply use LINQ sequence methods like [`Take()`](https://msdn.microsoft.com/en-us/library/bb503062(v=vs.110).aspx) to get what you want out of it (don't forget to include the `Linq` namespace with `using System.Linq;`):
```
byte[] foo = new byte[4096];
var bar = foo.Take(41);
```
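As an aside for comparison, the eager equivalent of that slice in Java is `Arrays.copyOfRange`, which copies elements 0..40 into a new array rather than wrapping them lazily:

```java
import java.util.Arrays;

public class SliceDemo {
    public static void main(String[] args) {
        byte[] foo = new byte[4096];
        foo[40] = 9;
        // Copies indices 0..40 (the 'to' bound is exclusive) into a new array.
        byte[] bar = Arrays.copyOfRange(foo, 0, 41);
        System.out.println(bar.length); // 41
    }
}
```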
If you really need an array from any `IEnumerable<byte>` value, you could use the `ToArray()` method for that. That does not seem to be the case here. | You could use [`ArraySegment<T>`](http://msdn.microsoft.com/en-us/library/1hsbd92d.aspx). It's very light-weight as it doesn't copy the array:
```
string[] a = { "one", "two", "three", "four", "five" };
var segment = new ArraySegment<string>( a, 1, 2 );
``` | Array slices in C# | [
"",
"c#",
"arrays",
""
] |
I have a DateTime class and wish to display it according to some format... in this case, I specifically want to format it as YYYYMMDD format.
What's the best C#/.NET API function for doing this? | I always use this site to get any dates formats etc.
<http://blog.stevex.net/index.php/string-formatting-in-csharp/> | ToString(format)? i.e.
```
string s = DateTime.Today.ToString("yyyyMMdd");
```
(note the case) | What's the simplest way to format a .NET DateTime according to YYYYMMDD or the like? | [
"",
"c#",
".net",
"datetime",
"formatting",
""
] |
```
double TotalMinute=300.0
double TotalMinutesAdded=1378.0
foreach(DataRow dr in ds.Tables[0].Rows)
{
//Add The above Timings to each Row's 2nd Column
DateTime correctDate=Convert.ToDateTime(dr[2]);
correctDate.AddMinutes(TotalMinute);
correctDate.AddMinutes(TotalMinutesAdded);
dr[2]=correctDate;
}
``` | As mentioned, due to `DateTime` objects being immutable you have to reassign the variable.
However, a point to note is that you can chain the manipulations as so:
```
correctDate = correctDate.AddMinutes(TotalMinute)
.AddMinutes(TotalMinutesAdded);
``` | The DateTime Add\* functions are not supposed to change the current DateTime value. They RETURN the new value.
If you want your value changed, type like this:
```
correctDate = correctDate.AddMinutes(TotalMinute);
``` | Adding Minutes to Date is not changing the Date C# .NET | [
"",
"c#",
"datetime",
""
] |
I have a table, users, in an Oracle 9.2.0.6 database. Two of the fields are varchar - last\_name and first\_name.
When rows are inserted into this table, the first name and last name fields are supposed to be in all upper case, but somehow some values in these two fields are mixed case.
I want to run a query that will show me all of the rows in the table that have first or last names with lowercase characters in it.
I searched the net and found REGEXP\_LIKE, but that must be for newer versions of oracle - it doesn't seem to work for me.
Another thing I tried was to translate "abcde...z" to "$$$$$...$" and then search for a '$' in my field, but there has to be a better way?
Thanks in advance! | How about this:
```
select id, first, last from mytable
where first != upper(first) or last != upper(last);
``` | I think BQ's SQL and Justin's second SQL will work, because in this scenario:
```
first_name last_name
---------- ---------
bob johnson
Bob Johnson
BOB JOHNSON
```
I want my query to return the first 2 rows.
I just want to make sure that this will be an efficient query though - my table has 500 million rows in it.
When you say upper(first\_name) != first\_name, is "first\_name" always pertaining to the current row that oracle is looking at? I was afraid to use this method at first because I was afraid I would end up joining this table to itself, but they way you both wrote the SQL it appears that the equality check is only operating on a row-by-row basis, which would work for me. | Oracle - Select where field has lowercase characters | [
"",
"sql",
"oracle",
"select",
"indexing",
"case-sensitive",
""
] |
With Symfony's Action Security if a user has not been identified he will be forwarded to the default login action as defined in the applications settings.yml file. How would I forward the user to the originally requested action after the user is successfully authenticated? | On first hit to your login action, store referer to the user session:
```
if(!$this->getUser()->hasParameter('referer'))
{
$this->getUser()->setParameter('referer',$this->getRequest()->getReferer());
}
```
and then when login succeeds, redirect user to stored referer with:
```
$this->redirect($this->getUser()->getParameter('referer'));
```
You have complete example in sfGuardPlugin:
<http://www.symfony-project.org/plugins/sfGuardPlugin> | More simply...
```
$this->getUser()->setReferer($this->getRequest()->getReferer());
```
like
```
setReferer($referer)
{
if (!$this->hasAttribute('referer'))
$this->setAttribute('referer', $referer);
}
``` | Symfony Action Security - How to forward after successful authentication? | [
"",
"php",
"security",
"symfony1",
"action",
""
] |
I have two tables in a DataSet where the ID field on each is the same. I have a Relation between the two tables. How do I, in C# code, pull the info from Table2 that relates to the info on Table1?
I have tried using a new DataRow and assigning it by using GetChildRow, but for some reason I cannot seem to make it work.
Also, I understand this question may not be that informative, let me know and I will try to provide more clarification. | The answer is different (and significantly easier) if your DataSet is strongly-typed (i.e. generated from a .xsd file). I'll assume that's not the case below, but if it is speak up.
For generic DataSet objects, the answer largely depends on what you have hold of to start. If you have simply an ID, then it's probably simplest to use the ID in a select on the relevant DataTable. This will work for either (or both) tables as it will return an array of DataRows with the information you're looking for.
If you have a parent DataRow (and it seems likely that you do), then the best method to use depends on the relationship—i.e. which is the parent. If Table1 is your parent and you want to navigate to relevant Table2 child rows, you're looking for GetChildRow (best to be as specific as you can in telling it which relation to follow). If Table2 is the parent and you're navigating from a Table1 DataRow to the parent in Table2, you'll want to use GetParentRow (again, be as specific in identifying the relation as you can—use the relation object if you have it handy). | In a strongly-typed DataSet, each DataRow object will have specific methods or properties for each relationship. If you have a parent DataRow and want to navigate to children in Table2, it will be a pluralized method (i.e. "row.GetTable2Rows()"). If you have a child and want to navigate to the parent, it will be a singular property ("row.Table2Row"). | Get Table2 data from a relation to Table1 in C# Code | [
"",
"c#",
"dataset",
"relation",
""
] |
I am really trying to show what htmlentities gives me, but it doesn't give & euro; for the € character.
I am trying
```
echo htmlentities(htmlentities("LISBOA-VIENA DESDE 99€ TAXAS INCLUÍDAS, RESERVE JÁ",ENT_COMPAT,ISO-8859-1),ENT_COMPAT,ISO-8859-1);
echo '<br>';
echo htmlentities(htmlentities("LISBOA-VIENA DESDE 99€ TAXAS INCLUÍDAS, RESERVE JÁ",ENT_COMPAT,UTF-8),ENT_COMPAT,UTF-8);
```
and for both I get
LISBOA-VIENA DESDE 99€ TAXAS INCLU& Iacute;DAS, RESERVE J& Aacute;
LISBOA-VIENA DESDE 99€ TAXAS INCLU& Iacute;DAS, RESERVE J& Aacute;
I never get a & euro;
Anyone know how to get this right? | This is discussed [here](http://www.cs.tut.fi/~jkorpela/html/euro.html); it seems € (`€`) works often. | What is the original file encoding of the file in which you use these statements?
If you're on Windows chances are high that the file is encoded with [Windows-1252](http://en.wikipedia.org/wiki/Windows-1252) (CP1252) and not in [ISO-8859-1](http://en.wikipedia.org/wiki/ISO/IEC_8859-1), [ISO-8859-2](http://en.wikipedia.org/wiki/ISO/IEC_8859-15) or [UTF-8](http://en.wikipedia.org/wiki/UTF-8).
The `€` sign is `0x80` in Windows-1252, ISO-8859-15 encodes the `€` sign with `0xA4` while ISO-8859-1 doesn't have a `€` sign altogether ([see answer](https://stackoverflow.com/questions/416161/getting-the-with-htmlentities#416196) from Aron Rotteveel).
You must ensure that you pass the correct charset used for the string into [`htmlentities()`](http://de.php.net/htmlentities). Best practice would be to use UTF-8 encoding for all of your files.
If `htmlentities("LISBOA-VIENA DESDE 99€ TAXAS INCLUÍDAS, RESERVE JÁ",ENT_COMPAT,'Windows-1252')` works then you're using the CP1252 charset.
**I also just noticed that you're missing quotes around the charsets in your example above. This could also be the cause of trouble.** | Getting the € with htmlentities | [
"",
"php",
"character-encoding",
"html-entities",
""
] |
```
ClassA* pa = NULL;
ClassB* pb = NULL;
void assignObject(ClassA* pa, ClassB* pb)
{
pa = new ClassA;
pb = new ClassB;
}
```
What will be the value of `pa` and `pb` after executing the function?
EDIT
How should the pointers be passed so that `pa` and `pb` are actually assigned (no longer NULL) on return? | As pointed out in other answers - both will still be NULL after the call. However, there are two possible solutions to this problem:
1) **references**
```
void assignObject(ClassA*& pa, ClassB*& pb)
{
pa = new ClassA;
pb = new ClassB;
}
ClassA* pa = NULL;
ClassB* pb = NULL;
assignObject(pa, pb); // both will be assigned as expected.
```
2) **pointers**
```
void assignObject(ClassA** pa, ClassB** pb)
{
assert(pa != NULL); assert(pb != NULL);
*pa = new ClassA;
*pb = new ClassB;
}
ClassA* pa = NULL;
ClassB* pb = NULL;
assignObject(&pa, &pb); // both will be assigned as expected.
```
Most programmers would probably choose references because then they don't need to assert anything (references can never be NULL). | They will be NULL, since you're passing them by value. If you want to pass it by reference, you'd do this:
```
ClassA* pa = NULL;
ClassB* pb = NULL;
assignObject(ClassA*& pa, ClassB*& pb)
{
pa = new ClassA;
pb = new ClassB;
}
```
Note, I'm not sure what you're trying to accomplish with the global variables. They're never used in this example, since the local variables (function parameters) hide them.
I think you also need to declare a return value type for your function in order for it to be valid C++. | Assignment inside function that is passed as pointer? | [
"",
"c++",
"function",
"pointers",
"parameters",
""
] |
Is there any difference in the performance of the following three SQL statements?
```
SELECT * FROM tableA WHERE EXISTS (SELECT * FROM tableB WHERE tableA.x = tableB.y)
SELECT * FROM tableA WHERE EXISTS (SELECT y FROM tableB WHERE tableA.x = tableB.y)
SELECT * FROM tableA WHERE EXISTS (SELECT 1 FROM tableB WHERE tableA.x = tableB.y)
```
They all should work and return the same result set. But does it matter if the inner SELECT selects all fields of tableB, one field, or just a constant?
Is there any best practice when all statements behave equal? | The truth about the EXISTS clause is that the SELECT clause is not evaluated in an EXISTS clause - you could try:
```
SELECT *
FROM tableA
WHERE EXISTS (SELECT 1/0
FROM tableB
WHERE tableA.x = tableB.y)
```
...and should expect a divide by zero error, but you won't because it's not evaluated. This is why my habit is to specify NULL in an EXISTS to demonstrate that the SELECT can be ignored:
```
SELECT *
FROM tableA
WHERE EXISTS (SELECT NULL
FROM tableB
WHERE tableA.x = tableB.y)
```
All that matters in an EXISTS clause is the FROM and beyond clauses - WHERE, GROUP BY, HAVING, etc.
This question wasn't marked with a database in mind, and it should be because vendors handle things differently -- so test, and check the explain/execution plans to confirm. It is possible that behavior changes between versions... | Definitely #1. It "looks" scary, but realize the optimizer will do the right thing and it is expressive of intent. Also there is a slight typo bonus should one accidentally think EXISTS but type IN. #2 is acceptable but not expressive. The third option stinks in my not so humble opinion. It's too close to saying "if 'no value' exists" for comfort.
In general it's important not to be scared to write code that merely looks inefficient if it provides other benefits and does not actually affect performance.
That is, the optimizer will almost always execute your complicated join/select/grouping wizardry to save a simple EXISTS/subquery the same way.
After having given yourself [kudos](http://en.wiktionary.org/wiki/kudos) for cleverly rewriting that nasty OR out of a join you will eventually realize the optimizer still used the same crappy execution plan to resolve the much easier to understand query with embedded OR anyway.
The moral of the story is to know your platform's optimizer. Try different things and see what is actually being done, because the rampant knee-jerk assumptions regarding 'decorative' query optimization are almost always incorrect and irrelevant in my experience. | Performance of SQL "EXISTS" usage variants | [
"",
"sql",
"sql-execution-plan",
""
] |
I am trying to play the Asterisk system sound from a C# program with
```
System.Media.SystemSounds.Asterisk.Play();
```
but no sound plays. My system does have a sound set up for Asterisk and other programs (not written by me) cause various system sounds to play.
Can anyone suggest any possible reasons for this? | I had ignored this problem until today. Some googling revealed that this is quite a common problem and totally unrelated to the .NET Play calls.
What happens is that while you can play/preview the sounds from the Control Panel Sounds and Audio Devices applet, they do not play when programs trigger the sounds. It seems to be corruption caused by program installations. The fix is quite simple.
The (Default) entry for HKEY\_CURRENT\_USER in the registry should be (value not set). If it is something else (mine was OfficeCompleted) delete the entry (right click and select delete) and it will be re-created as (value not set). The system sounds should then play. | Sorry if this is overstating the obvious...
1. Are you sure this line of code is being executed?
2. As RobS suggests do any of the other SystemSounds play?
I had a look in Reflector. Whichever of the SystemSounds you call returns a SystemSound instance initialised with the type (e.g. 0x40 for Asterisk) for the system sound you want to play. This is then passed to the Win32 bool MessageBeep( int type ) method. Imported as:
```
[DllImport("user32.dll", CharSet=CharSet.Auto, ExactSpelling=true)]
internal static extern bool MessageBeep(int type);
```
The bool return isn't preserved in any way - i.e. you can't get it :(
Exceptions are not swallowed so you should get any if thrown.
Hope that helps (though probably just telling you what you already know) | SystemSounds Play not working | [
"",
"c#",
"audio",
""
] |
Is there a canonical or recommended pattern for implementing arithmetic operator overloading in C++ number-like classes?
From the C++ FAQ, we have an exception-safe assignment operator that avoids most problems:
```
class NumberImpl;
class Number {
NumberImpl *Impl;
...
};
Number& Number::operator=(const Number &rhs)
{
NumberImpl* tmp = new NumberImpl(*rhs.Impl);
delete Impl;
Impl = tmp;
return *this;
}
```
But for other operators (+, +=, etc..) very little advice is given other than to make them behave like the operators on built-in types.
Is there a standard way of defining these? This is what I've come up with - are there pitfalls I'm not seeing?
```
// Member operator
Number& Number::operator+= (const Number &rhs)
{
Impl->Value += rhs.Impl->Value; // Obviously this is more complicated
return *this;
}
// Non-member non-friend addition operator
Number operator+(Number lhs, const Number &rhs)
{
return lhs += rhs;
}
``` | In Bjarne Stroustrup's book "[The C++ Programming Language](https://rads.stackoverflow.com/amzn/click/com/0201700735)", in chapter 11 (the one devoted to Operator Overloading) he goes through writing a class for a complex number type (section 11.3).
One thing I do notice from that section is that he implements mixed type operations... this is probably expected for any numeric class.
In general, what you've got looks good. | The big thing to consider when writing any operator is that member operators do not undergo conversions on the left parameter:
```
struct example {
example(int);
example operator + (example);
};
void foo() {
example e(3), f(6);
e + 4; // okay: right operand is implicitly converted to example
e + f; // okay: no conversions needed.
6 + e; // BAD: no matching call.
}
```
This is because conversion never applies to `this` for member functions, and this extends to operators. If the operator was instead `example operator + (example, example)` in the global namespace, it would compile (or if pass-by-const-ref was used).
As a result, symmetric operators like `+` and `-` are generally implemented as non-members, whereas the compound assignment operators like `+=` and `-=` are implemented as members (they also change data, meaning they should be members). And, since you want to avoid code duplication, the symmetric operators can be implemented in terms of the compound assignment ones (as in your code example, although convention recommends making the temporary inside the function). | Canonical operator overloading? | [
"",
"c++",
"operator-overloading",
""
] |
I am working on an application where I need to transfer mails from one mailbox to another. I cannot send these mails using SMTP because that would change the header information. I am using C# and the Outlook API to process mails. Is there any way I can transfer mails to the other mailbox without changing the mail headers?
---
By transfer I mean: I need to take a mail from one mailbox and move it to another mailbox without changing any header information. If I use SMTP, the header information will be changed. I have heard that using MAPI a mail can be moved from one mailbox to another. Any pointers? | If you cannot load all relevant mailboxes into a single Outlook profile, then this cannot be solved using the Outlook API. It should however be possible to run a standalone application from an administrative account that accesses the Exchange information store directly via Extended MAPI. You can then open the source mailboxes sequentially and move the relevant mail items to the target mailbox.
This would allow you to run a batch job harvesting all mailboxes from a central location in a single giant operation. If however your task is to move messages as they appear then maybe addressing this in a more decentralized fashion via Outlook addins installed on the source machines might be a more sensible approach after all. Maybe if you told us a little bit more about your motivation for moving those items we can come up with an even better solution.
If you go for the centralized harvester approach I strongly recommend using a helper library like [Redemption](http://dimastr.com/redemption/) for this though as otherwise it will probably take a couple of months before you have gathered enough knowledge to address the task. The [RDO](http://dimastr.com/redemption/rdo/) framework (Redemption Data Objects) should be especially well suited to get you running ASAP. | I was able to move the mails from one mailbox to another using Redemption. This is essentially copying a mail from one mailbox to another. First log on to the destination mailbox using Redemption.
Get a reference to the folder where you want to move the mail. In my case it was the inbox. Now convert the Outlook mail item to an RDOMail and copy the RDOMail to the destination folder. Here is the code -
```
rdoSession.LogonExchangeMailbox("TEST", "ServerName");
RDOExchangeMailboxStore mailBoxStore = (Redemption.RDOExchangeMailboxStore)
rdoSession.Stores.DefaultStore;
RDOFolder inboxFolder = null;
foreach (RDOFolder rdoFolder in mailBoxStore.IPMRootFolder.Folders)
{
if (rdoFolder.Name.Equals("Inbox", StringComparison.InvariantCultureIgnoreCase))
{
inboxFolder = rdoFolder;
break;
}
}
rdoMail.CopyTo(inboxFolder);
```
With this, the mail will be copied to the new mailbox without changing any header information. | Transfer mail to other mail box | [
"",
"c#",
"outlook",
"exchange-server",
"mapi",
""
] |
In our project I have several [JUnit](http://www.junit.org/) tests that e.g. take every file from a directory and run a test on it. If I implement a `testEveryFileInDirectory` method in the `TestCase` this shows up as only one test that may fail or succeed. But I am interested in the results on each individual file. How can I write a `TestCase` / `TestSuite` such that each file shows up as a separate test e.g. in the graphical TestRunner of Eclipse? (Coding an explicit test method for each file is not an option.)
Compare also the question [ParameterizedTest with a name in Eclipse Testrunner](https://stackoverflow.com/questions/385925/parameterizedtest-with-a-name-in-eclipse-testrunner). | Take a look at **Parameterized Tests** in JUnit 4.
Actually I did this a few days ago. I'll try to explain ...
First build your test class normally, as if you were just testing with one input file.
Decorate your class with:
```
@RunWith(Parameterized.class)
```
Build one constructor that takes the input that will change in every test call (in this case it may be the file itself)
Then, build a static method that will return a `Collection` of arrays. Each array in the collection will contain the input arguments for your class constructor e.g. the file. Decorate this method with:
```
@Parameters
```
Here's a sample class.
```
@RunWith(Parameterized.class)
public class ParameterizedTest {
private File file;
public ParameterizedTest(File file) {
this.file = file;
}
@Test
public void test1() throws Exception { }
@Test
public void test2() throws Exception { }
@Parameters
public static Collection<Object[]> data() {
// load the files as you want
Object[] fileArg1 = new Object[] { new File("path1") };
Object[] fileArg2 = new Object[] { new File("path2") };
Collection<Object[]> data = new ArrayList<Object[]>();
data.add(fileArg1);
data.add(fileArg2);
return data;
}
}
```
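As a plain-Java sketch (no JUnit dependency; the class and method names here are invented purely for illustration), this is roughly what the Parameterized runner does with your `data()` collection: it creates one instance of the test class per data row, which is why each file shows up as a separate test:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class ParameterizedSketch {

    // Stand-in for the test class: one instance per parameter row
    static class FileTest {
        private final String file;

        FileTest(String file) {
            this.file = file;
        }

        // Stand-in for an @Test method; the reported name includes the parameter
        String runTestX() {
            return "testX[" + file + "]";
        }
    }

    // What the runner effectively does with the @Parameters data
    static List<String> runAll(Collection<Object[]> data) {
        List<String> results = new ArrayList<>();
        for (Object[] row : data) {
            FileTest test = new FileTest((String) row[0]);
            results.add(test.runTestX()); // each row is reported separately
        }
        return results;
    }

    public static void main(String[] args) {
        Collection<Object[]> data = Arrays.asList(
                new Object[] { "a.txt" },
                new Object[] { "b.txt" });
        System.out.println(runAll(data)); // prints [testX[a.txt], testX[b.txt]]
    }
}
```

This also makes clear why the constructor is required: it is the only channel through which the runner hands each data row to the test instance.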
Also check this [example](http://www.nofluffjuststuff.com/blog/paul_duvall/2007/04/take_heed_of_mixing_junit_4_s_parameterized_tests) | **JUnit 3**
```
public class XTest extends TestCase {
public File file;
public XTest(File file) {
super(file.toString());
this.file = file;
}
public void testX() {
fail("Failed: " + file);
}
}
public class XTestSuite extends TestSuite {
public static Test suite() {
TestSuite suite = new TestSuite("XTestSuite");
File[] files = new File(".").listFiles();
for (File file : files) {
suite.addTest(new XTest(file));
}
return suite;
}
}
```
**JUnit 4**
```
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
@RunWith(Parameterized.class)
public class TestY {
@Parameters
public static Collection<Object[]> getFiles() {
Collection<Object[]> params = new ArrayList<Object[]>();
for (File f : new File(".").listFiles()) {
Object[] arr = new Object[] { f };
params.add(arr);
}
return params;
}
private File file;
public TestY(File file) {
this.file = file;
}
@Test
public void testY() {
fail(file.toString());
}
}
``` | JUnit test with dynamic number of tests | [
"",
"java",
"unit-testing",
"junit",
"parameterized-unit-test",
""
] |
I have a method that I'm writing that is calling another overloaded method inside it. I'd like to only write one outer method, since the parameter to the outer method is being passed to the inner one. Is there a way to do this?
I tried using generics, but I don't know enough about this so it isn't working:
```
public void OuterMethod<T>(T parameter)
{
InnerMethod(parameter); // InnerMethod accepts an int or a string
}
```
I know that I can do this:
```
public void OuterMethod(string parameter)
{
InnerMethod(parameter);
}
public void OuterMethod(int parameter)
{
InnerMethod(parameter);
}
```
But I'd rather do this the right way instead of copying/pasting code. What's the best way to accomplish this? | You can do this in C++ but not in C# (unless the inner method can also be generic instead of overloaded).
---
Alternatively (if you won't take 'no' for an answer), you can do a run-time switch on type, like for example ...
```
public void OuterMethod(object parameter)
{
if (parameter is int)
InnerMethod((int)parameter);
else if (parameter is string)
InnerMethod((string)parameter);
else
throw new SomeKindOfException();
}
```
... but obviously this is a run-time, not a compile-time check.
> But I'd rather do this the right way instead of copying/pasting code.
You can also write software to write your outer methods (e.g. using System.CodeDom classes) instead of writing them by hand, but this is probably more trouble than it's worth. | Like the others said, you can't really do what you are trying to do and the option you stated in your question is the best bet.
You would actually have to convert the value if you use the generic. Otherwise you can downcast by accepting an Object as ChrisW suggests.
```
public void OuterMethod<T>(T parameter)
{
T temp = parameter;
if (temp is string )
InnerMethod(Convert.ToString(temp));
if (temp is int)
InnerMethod(Convert.ToInt32(temp));// InnerMethod accepts an int or a string
}
```
Here is a link to the overview of Generics: <http://msdn.microsoft.com/en-us/library/ms172193.aspx> | Overloaded method calling overloaded method | [
"",
"c#",
".net",
"overloading",
""
] |
In Java, the idiomatic way to declare critical sections in the code is the following:
```
private void doSomething() {
// thread-safe code
synchronized(this) {
// thread-unsafe code
}
// thread-safe code
}
```
Almost all blocks synchronize on `this`, but is there a particular reason for this? Are there other possibilities? Are there any best practices on what object to synchronize on? (such as private instances of `Object`?) | First, note that the following code snippets are identical.
```
public void foo() {
synchronized (this) {
// do something thread-safe
}
}
```
and:
```
public synchronized void foo() {
// do something thread-safe
}
```
do **exactly the same thing**. No preference for either one of them except for code readability and style.
When you do synchronize methods or blocks of code, it's important to know **why** you are doing such a thing, and **what object** exactly you are locking, and for **what purpose**.
Also note that there are situations in which you will want to *client-side synchronize* blocks of code in which the monitor you are asking for (i.e. the synchronized object) is not necessarily `this`, like in this example :
```
Vector v = getSomeGlobalVector();
synchronized (v) {
// some thread-safe operation on the vector
}
```
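A further option, hinted at in the question's "private instances of `Object`" remark, is to synchronize on a dedicated private lock object so that no outside code can ever acquire your monitor. A minimal sketch (the `Counter` class is invented for illustration):

```java
public class Counter {
    // Private monitor: external code cannot synchronize on it,
    // so no caller can block or deadlock this class's critical sections.
    private final Object lock = new Object();
    private int count = 0;

    public void increment() {
        synchronized (lock) {   // instead of synchronized (this)
            count++;
        }
    }

    public int get() {
        synchronized (lock) {
            return count;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final Counter c = new Counter();
        Runnable work = new Runnable() {
            public void run() {
                for (int i = 0; i < 10000; i++) {
                    c.increment();
                }
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(c.get()); // prints 20000
    }
}
```

The trade-off is that callers can no longer take part in the class's locking, which is usually exactly what you want.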
I suggest you get more knowledge about concurrent programming, it will serve you a great deal once you know exactly what's happening behind the scenes. You should check out [Concurrent Programming in Java](https://rads.stackoverflow.com/amzn/click/com/0201310090), a great book on the subject. If you want a quick dive-in to the subject, check out [Java Concurrency @ Sun](http://java.sun.com/docs/books/tutorial/essential/concurrency/) | As earlier answerers have noted, it is best practice to synchronize on an object of limited scope (in other words, pick the most restrictive scope you can get away with, and use that.) In particular, synchronizing on `this` is a bad idea, unless you intend to allow the users of your class to gain the lock.
A particularly ugly case arises, though, if you choose to synchronize on a `java.lang.String`. Strings can be (and in practice almost always are) interned. That means that each string of equal content - in the **ENTIRE JVM** - turns out to be the same string behind the scenes. That means that if you synchronize on any String, another (completely disparate) code section that also locks on a String with the same content, will actually lock your code as well.
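A quick demonstration of that interning behaviour (a sketch; it relies on the language guarantee that compile-time string constants are interned):

```java
public class InternDemo {
    public static void main(String[] args) {
        String a = "LOCK";
        String b = "LO" + "CK";          // compile-time constant, interned
        String c = new String("LOCK");   // explicitly allocated, distinct object

        System.out.println(a == b);          // true: same interned instance
        System.out.println(a == c);          // false: different object
        System.out.println(a == c.intern()); // true: intern() returns the canonical copy
        // Synchronizing on a or b therefore contends on the SAME monitor,
        // even across completely unrelated classes that happen to use "LOCK".
    }
}
```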
I was once troubleshooting a deadlock in a production system and (very painfully) tracked the deadlock to two completely disparate open source packages that each synchronized on an instance of String whose contents were both `"LOCK"`. | In Java critical sections, what should I synchronize on? | [
"",
"java",
"multithreading",
"synchronized",
""
] |
If I am to follow [What should a developer know *before* building a public web site?](https://stackoverflow.com/questions/72394/what-should-a-developer-know-before-building-a-public-web-site) on authentication, then what options do I have ?
I have never used PEAR, and I'm not about to start. I have read about [phpGACL](http://phpgacl.sourceforge.net/) but have yet to try it.
Only after authentication do rights/access levels kick in. I'm not a big fan of using a single tinyint value which decides what a user can and cannot do; besides, it's not very flexible. I have recently written a section system where I specify what kind of access each user type has, but is there something better?
If you want a language, then PHP5. | > *I'm not a big fan of using a single tinyint value which decides what a
> user can and cannot do, besides its
> not very flexible.*
That depends... Are you referring to using the value as an integer or as a bitfield?
If you're using it simply as a number (level 5 user has all the abilities of level 1-4 users, plus a little more), then, yeah, that's not very flexible.
If you're using it as a bitfield, it gives you 8 (sets of) capabilities which can be turned on or off in any combination for any user. I'd call that flexible. And, if 8 capabilities isn't enough for you, it's trivial to change the tinyint (8 bits) to a smallint (16 bits/capabilities), int (32 bits), or bigint (64 bits), which should be more than sufficient for just about any application most of us are likely to write. | ACL and Auth are the things I'm working on at this very moment. I'm using [CakePHP](http://cakephp.org) at the moment, and it provides an extensive (albeit not simple) module for ACL, and a simple way to do authentication. I'm interested in answers too.
What I've gathered:
* Learn to validate input, especially the difference between blacklists and whitelists
* Consider carefully your email validation pattern
* Consider what languages you will have to support (pesky little accents, tildes and the like get in the way in names, e.g. Añagaza or Alérta).
* Roll-your-own or prebuilt?
* ACL: keep it simple or it could swallow you whole.
* Careful about [CSRF and XSRF](https://blog.codinghorror.com/preventing-csrf-and-xsrf-attacks/)! | User authentication | [
"",
"php",
"authentication",
"acl",
""
] |
I understand that the WITH RECOMPILE option forces the optimizer to rebuild the query plan for stored procs but when would you want that to happen?
What are some rules of thumb on when to use the WITH RECOMPILE option and when not to?
What's the effective overhead associated with just putting it on every sproc? | As others have said, you don't want to simply include `WITH RECOMPILE` in every stored proc as a matter of habit. By doing so, you'd be eliminating one of the primary benefits of stored procedures: the fact that it saves the query plan.
Why is that potentially a big deal? Computing a query plan is a lot more intensive than compiling regular procedural code. Because the syntax of a SQL statement only specifies **what** you want, and not (generally) **how** to get it, that allows the database a wide degree of flexibility when creating the physical plan (that is, the step-by-step instructions to actually gather and modify data). There are lots of "tricks" the database query pre-processor can do and choices it can make - what order to join the tables, which indexes to use, whether to apply `WHERE` clauses before or after joins, etc.
For a simple SELECT statement, it might not make a difference, but for any non-trivial query, the database is going to spend some serious time (measured in milliseconds, as opposed to the usual microseconds) to come up with an optimal plan. For really complex queries, it can't even guarantee an *optimal* plan, it has to just use heuristics to come up with a *pretty good* plan. So by forcing it to recompile every time, you're telling it that it has to go through that process over and over again, even if the plan it got before was perfectly good.
Depending on the vendor, there should be automatic triggers for recompiling query plans - for example, if the statistics on a table change significantly (like, the histogram of values in a certain column starts out evenly distributed by over time becomes highly skewed), then the DB should notice that and recompile the plan. But generally speaking, the implementers of a database are going to be smarter about that on the whole than you are.
As with anything performance related, don't take shots in the dark; figure out where the bottlenecks are that are costing 90% of your performance, and solve them first. | Putting it on every stored procedure is NOT a good idea, because compiling a query plan is a relatively expensive operation and you will not see any benefit from the query plans being cached and re-used.
The case of a dynamic where clause built up inside a stored procedure can be handled using `sp_executesql` to execute the TSQL rather than adding `WITH RECOMPILE` to the stored procedure.
Another solution (SQL Server 2005 onwards) is to hint at specific parameter values using the `OPTIMIZE FOR` query hint. This works well if the values in the rows are static.
SQL Server 2008 has introduced a [little known feature](http://blogs.msdn.com/sqlprogrammability/archive/2008/11/26/optimize-for-unknown-a-little-known-sql-server-2008-feature.aspx) called "`OPTIMIZE FOR UNKNOWN`":
> This hint directs the query optimizer
> to use the standard algorithms it has
> always used if no parameters values
> had been passed to the query at all.
> In this case the optimizer will look
> at all available statistical data to
> reach a determination of what the
> values of the local variables used to
> generate the queryplan should be,
> instead of looking at the specific
> parameter values that were passed to
> the query by the application. | Rule of thumb on when to use WITH RECOMPILE option | [
"",
"sql",
"sql-server",
""
] |
I have a problem with my web module classpath in Websphere v6.1.
In my WEB-INF/lib I have a largish number of jar files which include xercesImpl.jar and xmlparserv2.jar. I need both jars to be present, but they appear to conflict with each other. Specifically, each jar contains a META-INF/services directory so, when we try to get an instance of a DocumentBuilderFactory via JAXP, which instance we get depends upon the order in which these two jars appear in the classpath.
I **always** want to use the xerces instance of the DocumentBuilderFactory, so I want to push xercesImpl.jar to the front of the classpath. I've tried to do this by specifying a Class-Path section in the Manifest file for the war file, but the class path that I actually get in my WAS Module Compound Class Loader is very strange. I seem to get some standard stuff that WAS puts on, followed by the contents of WEB-INF/lib in alphabetical order, followed by the classpath specified by the Manifest file.
If I don't put a manifest file into the war at all, I get the standard stuff followed by the contents of WEB-INF/lib but in an arbitrary order.
What am I missing? Is there a way in which I can set the class path up to be exactly what I want?
Dave | I assume by WebSphere, you mean the regular J2EE Application Server (and not something like Community Edition; WebSphere is a brand name applied to a number of IBM products).
I think your options are limited. Since the dependencies look quite explicit, I would prefer a programmatic approach rather than relying on the vagaries of the classpath (like creating factory instances explicitly rather than relying on the SPI).
If that isn't an option, you might want to look at making one of your dependencies an EAR project utility JAR and configure MODULE classloading with a PARENT\_LAST classloading policy on the WAR. This can be configured via the browser admin console (or via the [RAD](http://en.wikipedia.org/wiki/IBM_Rational_Application_Developer) tooling if you use it).
Another thing I'd look at is the WAS [Shared Libraries](http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/topic/com.ibm.websphere.base.doc/info/aes/ae/tcws_sharedlib_create.html?resultof=%22%73%68%61%72%65%64%22%20%22%73%68%61%72%65%22%20%22%6c%69%62%72%61%72%69%65%73%22%20%22%6c%69%62%72%61%72%69%22%20) feature (under *Environment* in the browser admin console). These can be [associated with servers](http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/topic/com.ibm.websphere.base.doc/info/aes/ae/tcws_sharedlib_server.html?resultof=%22%73%68%61%72%65%64%22%20%22%73%68%61%72%65%22%20%22%6c%69%62%72%61%72%69%65%73%22%20%22%6c%69%62%72%61%72%69%22%20) or [applications](http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/topic/com.ibm.websphere.base.doc/info/aes/ae/tcws_sharedlib_app.html?resultof=%22%73%68%61%72%65%64%22%20%22%73%68%61%72%65%22%20%22%6c%69%62%72%61%72%69%65%73%22%20%22%6c%69%62%72%61%72%69%22%20). The downside is that this requires more configuration. | In IBM Websphere Application Server 6.1, web modules have their own class loaders that are usually used in the PARENT\_FIRST mode. This means that the web module class loaders attempt to delegate class loading to the parent class loaders, before loading any new classes.
If you wish to have the Xerces classes loaded before the XML parser v2 (I'm assuming Oracle XML v2 parser) classes, then the Xerces classes will have to be loaded by a parent class loader - in this case, preferably the application class loader. This can be done by placing the Xerces jar in the root of the EAR file (if you have one) or by preparing the EAR file with xerces.jar and your WAR file in the root. The xmlparserv2 jar should then be placed in WEB-INF\lib.
You could also attempt creating an Xerces shared library for usage by your application.
You can find more information about this in the [IBM WebSphere Application Server V6.1: System Management and Configuration](http://www.redbooks.ibm.com/abstracts/sg247304.html). Details are available in Chapter 12. | How do I manage the ClassPath in WebSphere | [
"",
"java",
"websphere",
""
] |
This is the ability to run your application on a cluster of servers with the intent to distribute the load and also provide additional redundancy.
I've seen a presentation for [GridGain](http://www.gridgain.com/) and I was very impressed with it.
Know of any others? | There are several:
* [Terracotta](http://www.terracotta.org/) ([open source, based on Mozilla Public License](http://www.terracotta.org/confluence/display/wiki/FAQ#FAQ-Q%3AWhat%27syourlicense%3F));
* [Oracle Coherence](http://www.oracle.com/technology/products/coherence/index.html) (formerly Tangosol Coherence; commercial; based on [JSR 107](http://jcp.org/en/jsr/detail?id=107), which was never adopted officially);
* [GigaSpaces](http://www.gigaspaces.com/) (commercial; based on [JavaSpaces API](http://www.jini.org/wiki/JavaSpaces_Specification), part of [Jini](http://www.jini.org/wiki/Main_Page));
* [GridGain](http://www.gridgain.com/), which you mentioned (open source: [LGPL](http://en.wikipedia.org/wiki/LGPL));
* [memcached](http://www.danga.com/memcached/) with a [Java client library](http://www.whalin.com/memcached/) (open source: [BSD License](http://en.wikipedia.org/wiki/BSD_License));
* [EHCache](http://ehcache.sourceforge.net/) (open source: [Apache Software License](http://ehcache.sourceforge.net/license.html));
* [OSCache](http://www.opensymphony.com/oscache/) (open source: [modified Apache License](http://www.opensymphony.com/oscache/license.action)); and
* no doubt several others.
Now I haven't used all of these but I've used or investigated the majority of them.
GridGain and GigaSpaces are more centred around [grid computing](http://en.wikipedia.org/wiki/Grid_computing) than caching and (imho) better suited to compute grids than data grids (see [this explanation of compute vs data grids](http://java.dzone.com/articles/compute-grids-vs-data-grids)). I find GigaSpaces to be a really interesting technology and it has several licensing options, including a free version and a free full version for startups.
Coherence and Terracotta try to treat caches as [Maps](http://java.sun.com/j2se/1.5.0/docs/api/java/util/Map.html), which is a fairly natural abstraction. I've used Coherence a lot and it's an excellent high-performance product but not cheap. Terracotta I'm less familiar with. The documentation for Coherence I find a bit lacking at times but it really is a powerful product.
OSCache I've primarily used as a means of reducing memory usage and fragmentation in Java Web applications as it has a fairly neat JSP tag. If you've ever looked at compiled JSPs, you'll see they do a lot of String concatenations. This tag allows you to effectively cache the results of a segment of JSP code and HTML into a single String, which can hugely improve performance in some cases.
EHCache is an easy caching solution that I've also used in Web applications. Never as a distributed cache though but it can do that. I tend to view it as a quick and dirty solution but that's perhaps my bias.
memcached is particularly prevalent in the PHP world (and used by such sites as Facebook). It's a really light and easy solution and has the advantage that it doesn't run in the same process and you'll have arguably better interoperability options with other technology stacks, if this is important to you. | You may want to check out Hazelcast also. [Hazelcast](http://www.hazelcast.com) is an open source transactional, distributed/partitioned implementation of queue, topic, map, set, list, lock and executor service. It is super easy to work with; just add hazelcast.jar into your classpath and start coding. Almost no configuration is required.
If you are interested in executing your Runnable, Callable tasks in a distributed fashion, then please check out Distributed Executor Service documentation at <http://code.google.com/docreader/#p=hazelcast>
[Hazelcast](http://www.hazelcast.com) is released under Apache license and enterprise grade support is also available. | What is the best library for Java to grid/cluster-enable your application? | [
"",
"java",
"grid",
"load-balancing",
"gridgain",
""
] |
Is there a way in JPA to map a collection of Enums within the Entity class? Or is the only solution to wrap the Enum with another domain class and use it to map the collection?
```
@Entity
public class Person {
public enum InterestsEnum {Books, Sport, etc... }
//@???
Collection<InterestsEnum> interests;
}
```
I am using the Hibernate JPA implementation, but would of course prefer an implementation-agnostic solution. | Using Hibernate you can do
```
@ElementCollection(targetElement = InterestsEnum.class)
@JoinTable(name = "tblInterests", joinColumns = @JoinColumn(name = "personID"))
@Column(name = "interest", nullable = false)
@Enumerated(EnumType.STRING)
Collection<InterestsEnum> interests;
``` | The link in Andy's answer is a great starting point for mapping collections of "non-Entity" objects in JPA 2, but isn't quite complete when it comes to mapping enums. Here is what I came up with instead.
```
@Entity
public class Person {
@ElementCollection(targetClass=InterestsEnum.class)
@Enumerated(EnumType.STRING) // Possibly optional (I'm not sure) but defaults to ORDINAL.
@CollectionTable(name="person_interest")
@Column(name="interest") // Column name in person_interest
Collection<InterestsEnum> interests;
}
``` | JPA map collection of Enums | [
"",
"java",
"jpa",
"jakarta-ee",
""
] |
I need to define new UI elements as well as data binding in code because they will be created at run-time. Here is a simplified version of what I am trying to do.
Data Model:
```
public class AddressBook : INotifyPropertyChanged
{
private int _houseNumber;
public int HouseNumber
{
get { return _houseNumber; }
set { _houseNumber = value; NotifyPropertyChanged("HouseNumber"); }
}
public event PropertyChangedEventHandler PropertyChanged;
protected void NotifyPropertyChanged(string sProp)
{
if (PropertyChanged != null)
{
PropertyChanged(this, new PropertyChangedEventArgs(sProp));
}
}
}
```
Binding in Code:
```
AddressBook book = new AddressBook();
book.HouseNumber = 123;
TextBlock tb = new TextBlock();
Binding bind = new Binding("HouseNumber");
bind.Source = book;
bind.Mode = BindingMode.OneWay;
tb.SetBinding(TextBlock.TextProperty, bind); // Text block displays "123"
myGrid.Children.Add(tb);
book.HouseNumber = 456; // Text block displays "123" but PropertyChanged event fires
```
When the data is first bound, the text block is updated with the correct house number. Then, if I change the house number in code later, the book's PropertyChanged event fires, but the text block is not updated. Can anyone tell me why?
Thanks,
Ben | The root of it was that the string I passed to PropertyChangedEventArgs did not EXACTLY match the name of the property. I had something like this:
```
public int HouseNumber
{
get { return _houseNumber; }
set { _houseNumber = value; NotifyPropertyChanged("HouseNum"); }
}
```
Where it should be this:
```
public int HouseNumber
{
get { return _houseNumber; }
set { _houseNumber = value; NotifyPropertyChanged("HouseNumber"); }
}
```
Yikes! Thanks for the push in the right direction. | Make sure you're updating the `AddressBook` reference that was used in the binding, and not some other `AddressBook` reference.
I got the following to work with the AddressBook code you gave.
```
<StackPanel>
<Button Click="Button_Click">Random</Button>
<Grid x:Name="myGrid">
</Grid>
</StackPanel>
```
Code behind:
```
public partial class Window1 : Window
{
private AddressBook book;
public Window1()
{
InitializeComponent();
book = new AddressBook();
book.HouseNumber = 13;
TextBlock tb = new TextBlock();
Binding bind = new Binding("HouseNumber");
bind.Source = book;
tb.SetBinding(TextBlock.TextProperty, bind);
myGrid.Children.Add(tb);
}
private void Button_Click(object sender, RoutedEventArgs e)
{
Random rnd = new Random();
book.HouseNumber = rnd.Next();
}
}
```
Note the same reference is used in the update code. | Problem with WPF Data Binding Defined in Code Not Updating UI Elements | [
"",
"c#",
"wpf",
"data-binding",
"inotifypropertychanged",
""
] |
Is it possible to get access to the spell checker that is incorporated in browsers for text areas from Javascript? I would like to be able to control spell checking from within my code. Most browsers (apart from IE) seem to have some kind of a spell checker built in to them nowadays. | The most access that I know of is disabling or enabling spellchecking on a field: [Inline Disabling of Firefox Spellcheck?](https://stackoverflow.com/questions/223940/inline-disabling-of-firefox-spellcheck)
I don't know of a way that you can directly access the spellchecker of a browser via javascript. If you aren't particular to the spell checker of the browser, there are many open source spell checkers for javascript. Just try googling javascript spell checker.
If you really want to use the browser's spellcheck you might want to create a textbox and set the display to none. You could then put each word in the textbox and then check to see if it's underlined or not. I'm not sure of the feasibility of this, just a thought. My suggestion would be to use a javascript spellchecker instead of trying to hack up a way of using the browser's spellchecker.
Best bet is to check with each browser vendor and see if they provide any javascript hooking of their spell checker.
I think the most they'll allow is what Bobo said; you can enable/disable it for textboxes, but I don't think they allow any further control than that. | Javascript access to spell checker on browsers | [
"",
"javascript",
""
] |
I am looking to extend jQuery so I can easily retrieve the tagName of the first element in a jQuery object. This is what I have come up with, but it doesn't seem to work:
```
$.fn.tagName = function() {
return this.each(function() {
return this.tagName;
});
}
alert($('#testElement').tagName());
```
Any ideas what's wrong?
BTW, I'm looking to use this more for testing than in production. | Try this instead:
```
$.fn.tagName = function() {
return this.get(0).tagName;
}
alert($('#testElement').tagName());
```
To explain a little bit more of why your original example didn't work, the `each()` method will always return the original jQuery object (unless the jQuery object itself was modified). To see what is happening in each with your code, here is some pseudocode that shows how the `each()` method works:
```
function each(action) {
for(var e in jQueryElements) {
action();
}
return jQueryObject;
}
```
This is not how `each()` really gets implemented (by a long shot probably), but it is to show that the return value of your `action()` function is ignored. | Why create a plugin at all? Seems a bit unnecessary...
```
alert( $('div')[0].tagName );
``` | How to extend jQuery to make it easier to retrieve the tagName | [
"",
"javascript",
"jquery",
""
] |
I'm trying to write a query for an advanced search page on my document archiving system. I'm attempting to search by multiple optional parameters. I have about 5 parameters that could be empty strings or search strings. I know I shouldn't have to check for each as a string or empty and create a separate stored procedure for each combination.
Edit:
Ended up using:
```
ISNULL(COALESCE(@var, a.col), '') = ISNULL(a.col, '')
``` | You could use COALESCE (or ISNULL) like so:
```
WHERE COALESCE(@var1, col1) = col1
AND COALESCE(@var2, col2) = col2
AND COALESCE(@var3, col3) = col3
``` | I usually do this :P
```
WHERE (@var1 IS NULL OR col1 = @var1)
AND (@var2 IS NULL OR col2 = @var2)
```
... | sql search query for multiple optional parameters | [
"",
"jquery",
"sql",
"sql-server",
"search",
"stored-procedures",
""
] |
How do you add a new variable to be inserted into a Java code template? How do I add a variable to the list in Window->Preferences->Java->Code Style->Code Templates->Code->New Java Files->Edit->Insert Variable... ?
Currently my new files get created with:
```
${filecomment}
${package_declaration}
${typecomment}
${type_declaration}
```
I'd like them to get created with something like:
```
${begin_filecomment}
${package_declaration}
${typecomment}
${type_declaration}
${end_filecomment}
```
where begin\_filecomment and end\_filecomment appear in the Insert Variable list. | I'm pretty sure that the list of "variables" is generated by Eclipse and there is no way to add a new template variable.
What do you want `${begin_filecomment}` and `${end_filecomment}` to be? Just type the content into the Edit box - there is nothing that says you cannot put static content in there. | Yes, you can indeed add a variable to this list. See the extension point called
org.eclipse.ui.editors.templates
and you'll find out how. | Eclipse Custom Variable for Java Code Templates | [
"",
"java",
"eclipse",
"templates",
""
] |
I have a database with 2 tables.
One of the tables holds a row containing numbers, from 0 to 10.
In PHP, I do this:
```
$query = "SELECT ".$param." FROM issues WHERE ".$param." >=0";
$result = @mysql_query($query) or showError("query failed");
if (!($record = mysql_fetch_array($result))) return null;
return $record;
```
The $param holds the name of the row.
I kinda expected to get an array holding the number 0 to 10, but instead I get an array with 2 elements:
```
array(
[0] = 0
[row_name] = 0
.
.
. // AND SO ON
)
```
And that's it.
I've never worked with these functions before and [www.php.net](http://www.php.net) doesn't have any examples that really help... | I'm not exactly sure what you are trying to achieve here, but I think what you want is:
```
// query...
$records = array();
while($r = mysql_fetch_array($result)) {
$records[] = $r;
}
return $records;
``` | You want to get all the results in one call. With your method you have to loop the results like Paolo showed you. But it might be better to use PDO with [fetchAll](http://no.php.net/manual/en/pdostatement.fetchall.php). If you are learning PHP database connections, learn [PDO](http://no.php.net/manual/en/intro.pdo.php). | Trouble with PHP MySQL fetch array function | [
"",
"php",
"mysql",
""
] |
I'm writing a semi-generic form plugin using jQuery in order to speed up the development of the project I'm working on.
The plan is that a [jTemplates](http://jtemplates.tpython.com/) template contains the fields, my plugin looks through the template to find any required multi-lingual resources, requests them from the server, and then packages everything up into a JavaScript object that is then passed to a custom function on "submit".
Everything is working nicely, except the standard "when enter is pressed, submit the form" code that you need to do when you're faking a form:
```
opts.coreElement.find('input[type=text]').keypress(function(evt) {
if ((evt.keyCode || evt.which) == 13) {
opts.coreElement.find('.saveButton').click();
}
});
```
The issue is that in Firefox (at least; I haven't checked other browsers yet), if you've entered information in a similarly-named textbox before, you get your form history. If you then select one of those suggested values by hitting enter, it submits the form. Not great if you're on the first input on the page. Really rather annoying, actually.
The obvious solution seems to be to insert a form element around the fields and stopping any possible submission of this dummy form via jQuery. Fortunately I have the luxury of doing this as I'm in ASP.NET MVC, but what if I wasn't? What if my plugin didn't know whether it was already inside a form and so had to keep itself to itself? What if I was in standard WebForms ASP.NET and I *had* to manually "target" each input's return key to the correct submit button?
Is there a way, perhaps through the event object itself, to detect the context of the keypress, so I can filter out the selection of form history items? | Set the `autocomplete` attribute in your text fields to `off`:
```
opts.coreElement.find('input[type=text]').each(function() {
$(this).attr('autocomplete', 'off');
});
```
This works for all the major browsers (Safari, Firefox, IE). | I have found that in order to prevent the default action for an [enter] or [tab] key event you have to listen for the keydown event and handle/cancel it.
By the time keyup or keypress is triggered the default for keydown has already happened. | JavaScript detection of keypress context (form history selection vs. form submit) | [
"",
"javascript",
"keypress",
"input-history",
""
] |
I'm trying to use the CRT memory leak detection but I keep getting the following message in Microsoft Visual Studio: "Detected memory leaks - skipping object dump." I can never get it to actually do an object dump.
I followed the directions in the Microsoft article on Memory Leak Detection (<http://msdn.microsoft.com/en-us/library/e5ewb1h3(VS.80).aspx>) with no luck.
In my stdafx.h I have (rebuilt the entire project):
```
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>
```
In the entry point of my application I have:
```
_CrtSetDbgFlag ( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );
```
I'm also hoping that it will display the file name and line number in the Microsoft advertised format:
```
Detected memory leaks!
Dumping objects ->
C:\PROGRAM FILES\VISUAL STUDIO\MyProjects\leaktest\leaktest.cpp(20) : {18}
normal block at 0x00780E80, 64 bytes long.
Data: < > CD CD CD CD CD CD CD CD CD CD CD CD CD CD CD CD
Object dump complete.
```
I've NEVER been able to even get an object dump, but I've noticed that other people say they see something along these lines (even with \_CRTDBG\_MAP\_ALLOC defined):
```
Detected memory leaks!
Dumping objects ->
{18} normal block at 0x00780E80, 64 bytes long.
Data: < > CD CD CD CD CD CD CD CD CD CD CD CD CD CD CD CD
Object dump complete.
```
I don't really want to spend time overriding the new and delete operators, so I was hoping that the CRT debugging would help me. If this doesn't work out I might end up overriding the new and delete operators, but I really want to get the allocation information with a file name and line number (crosses fingers).
Thanks,
Kiril | I don't have it here on my machine, but when you install MSVC you have the option of installing (most of the) source code for the C run-time library (i.e. for MSVCRTxx.xxx). If you look in that source code for "skipping object dump" then you might be able to work out why the object dump is being skipped. | I just used [Visual Leak Detector](http://vld.codeplex.com/) after getting a large dump of leaked objects with no filenames/line numbers using the _CrtDumpMemoryLeaks approach. VLD worked as advertised (it's free) and I'm pretty happy with that. | Visual Studio _CrtDumpMemoryLeaks always skipping object dump | [
"",
"c++",
"visual-studio",
"memory-leaks",
"msvcrt",
"crtdbg.h",
""
] |
I'm trying to format numbers. Examples:
```
1 => 1
12 => 12
123 => 123
1234 => 1,234
12345 => 12,345
```
It strikes as a fairly common thing to do but I can't figure out which filter I'm supposed to use.
Edit: If you have a generic Python way to do this, I'm happy to add a formatted field in my model. | Django's contributed [humanize](http://docs.djangoproject.com/en/dev/ref/contrib/humanize/#ref-contrib-humanize) application does this:
```
{% load humanize %}
{{ my_num|intcomma }}
```
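For the generic plain-Python route the question's edit mentions (no Django required), the `,` option of the format mini-language gives the same grouping — a minimal sketch (the helper name is mine, not a Django API):

```python
# Thousands grouping with the "," format option (Python 2.7+ / 3.x).
def comma_format(value):
    return "{:,}".format(value)

for n in (1, 12, 123, 1234, 12345):
    print(comma_format(n))
```

This could back a formatted model field or a tiny custom template filter, though `intcomma` already covers the template case.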
Be sure to add `'django.contrib.humanize'` to your `INSTALLED_APPS` list in the `settings.py` file. | Building on other answers, to extend this to floats, you can do:
```
{% load humanize %}
{{ floatvalue|floatformat:2|intcomma }}
```
Documentation: [`floatformat`](https://docs.djangoproject.com/en/stable/ref/templates/builtins/#floatformat), [`intcomma`](https://docs.djangoproject.com/en/stable/ref/contrib/humanize/#intcomma). | Format numbers in django templates | [
"",
"python",
"django",
""
] |
Building on what has been written in SO question [Best Singleton Implementation In Java](https://stackoverflow.com/questions/70689/best-singleton-implementation-in-java) - namely about using an enum to create a singleton - what are the differences/pros/cons between (constructor omitted)
```
public enum Elvis {
INSTANCE;
private int age;
public int getAge() {
return age;
}
}
```
and then calling `Elvis.INSTANCE.getAge()`
and
```
public enum Elvis {
INSTANCE;
private int age;
public static int getAge() {
return INSTANCE.age;
}
}
```
and then calling `Elvis.getAge()` | Suppose you're binding to something which will use the properties of any object it's given - you can pass Elvis.INSTANCE very easily, but you can't pass Elvis.class and expect it to find the property (unless it's deliberately coded to find static properties of classes).
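To make that concrete, here is a hedged sketch (the class name, helper, and age value are invented for illustration) of how a property-reading framework can work against `Elvis.INSTANCE`, while `Elvis.class` alone would give it nothing to invoke the getter on:

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

enum Elvis {
    INSTANCE;
    private int age = 73;                 // illustrative value only
    public int getAge() { return age; }
}

public class BindingSketch {
    // Generic bean-style property read, as a binding framework might do it.
    static Object readProperty(Object bean, String name) throws Exception {
        for (PropertyDescriptor pd :
                 Introspector.getBeanInfo(bean.getClass()).getPropertyDescriptors()) {
            if (pd.getName().equals(name)) {
                return pd.getReadMethod().invoke(bean); // needs an instance to invoke on
            }
        }
        throw new IllegalArgumentException("no property: " + name);
    }

    public static void main(String[] args) throws Exception {
        // Works because INSTANCE is a real object with real properties:
        System.out.println(readProperty(Elvis.INSTANCE, "age"));
    }
}
```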
Basically you only use the singleton pattern when you *want* an instance. If static methods work okay for you, then just use those and don't bother with the enum. | A great advantage is when your singleton must implement an interface. Following your example:
```
public enum Elvis implements HasAge {
INSTANCE;
private int age;
@Override
public int getAge() {
return age;
}
}
```
With:
```
public interface HasAge {
public int getAge();
}
```
It can't be done with statics... | What is the best approach for using an Enum as a singleton in Java? | [
"",
"java",
"singleton",
""
] |
More specifically, I'm trying to check if given string (a sentence) is in Turkish.
I can check if the string has Turkish characters such as Ç, Ş, Ü, Ö, Ğ etc. However that's not very reliable as those might be converted to C, S, U, O, G before I receive the string.
Another method is to have the 100 most used words in Turkish and check if the sentence includes any/some of those words. I can combine these two methods and use a point system.
What do you think is the most efficient way to solve my problem in Python?
Related question: [(human) Language of a document](https://stackoverflow.com/questions/257125/human-language-of-a-document) (Perl, Google Translation API) | One option would be to use a Bayesian Classifier such as [Reverend](http://www.divmod.org/trac/wiki/DivmodReverend). The Reverend homepage gives this suggestion for a naive language detector:
```
from reverend.thomas import Bayes
guesser = Bayes()
guesser.train('french', 'le la les du un une je il elle de en')
guesser.train('german', 'der die das ein eine')
guesser.train('spanish', 'el uno una las de la en')
guesser.train('english', 'the it she he they them are were to')
guesser.guess('they went to el cantina')
guesser.guess('they were flying planes')
guesser.train('english', 'the rain in spain falls mainly on the plain')
guesser.save('my_guesser.bay')
```
Training with more complex token sets would strengthen the results. For more information on Bayesian classification, [see here](http://en.wikipedia.org/wiki/Bayesian_analysis) and [here](http://en.wikipedia.org/wiki/Naive_Bayesian_classification). | A simple statistical method that I've used before:
Get a decent amount of sample training text in the language you want to detect. Split it up into trigrams, e.g.
"Hello foobar" in trigrams is:
'Hel', 'ell', 'llo', 'lo ', 'o f', ' fo', 'foo', 'oob', 'oba', 'bar'
For all of the source data, count up the frequency of occurrence of each trigram, presumably in a dict where key=trigram and value=frequency. You can limit this to the top 300 most frequent 3-letter combinations or something if you want. Pickle the dict away somewhere.
To tell if a new sample of text is written in the same language, repeat the above steps for the sample text. Now, all you have to do is compute a correlation between the sample trigram frequencies and the training trigram frequencies. You'll need to play with it a bit to pick a threshold correlation above which you are willing to consider input to be Turkish or not.
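A minimal Python sketch of those steps (cosine similarity stands in here for the correlation measure, and the tiny inline texts are placeholders for real training data):

```python
from collections import Counter
import math

def trigram_freqs(text):
    """Frequency of each overlapping 3-character window in the text."""
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def similarity(a, b):
    """Cosine similarity between two trigram profiles, 0.0 .. 1.0."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

trained = trigram_freqs("hello foobar hello world")       # stand-in corpus
print(similarity(trained, trigram_freqs("hello there")))  # relatively high
print(similarity(trained, trigram_freqs("qqqq")))         # near zero
```

In practice you would accumulate counts over a large Turkish corpus, keep only the top few hundred trigrams, and accept input whose similarity clears your tuned threshold.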
This method has been shown to be highly accurate, beating out more sophisticated methods, see
[Cavnar & Trenkle (1994): "N-Gram-Based Text Categorization"](http://lesfourmisrouges.com/bs/documentation/@%20work/sdair-94-bc.pdf)
Using trigrams solves the problem of using word lists, as there is a vast number of words in any given language, especially given different grammatical permutations. I've tried looking for common words, but the problem is that they often give a false positive for some other language, or themselves have many permutations. The statistical method doesn't require a lot of storage space and does not require complex parsing. By the way, this method only works for languages with a phonetic writing system; it works poorly, if at all, with languages that use an ideographic writing system (e.g. Chinese, Japanese, Korean).
Alternatively wikipedia has a section on Turkish in [its handy language recognition chart.](http://en.wikipedia.org/wiki/Language_recognition_chart#Turkic_languages) | Best way to return the language of a given string | [
"",
"python",
"algorithm",
"string",
""
] |
Let me describe the problem in details:
I want to show an absolute positioned div when hovering over an element. That's really simple with jQuery and works just fine. But when the mouse goes over one of the child elements, it triggers the mouseout event of the containing div. How do I keep javascript from triggering the mouseout event of the containing element when hovering a child element?
What's the best and shortest way to do that with jQuery?
Here is a simplified example to illustrate what I mean:
Html:
```
<a>Hover Me</a>
<div>
<input>Test</input>
<select>
<option>Option 1</option>
<option>Option 2</option>
</select>
</div>
```
Javascript/jQuery:
```
$('a').hover( function() { $(this).next().show() },
function() { $(this).next().hide() } );
``` | The question is a bit old, but I ran into this the other day.
The simplest way to do this with recent versions of jQuery is to use the `mouseenter` and `mouseleave` events rather than `mouseover` and `mouseout`.
You can test the behavior quickly with:
```
$(".myClass").on( {
'mouseenter':function() { console.log("enter"); },
'mouseleave':function() { console.log("leave"); }
});
``` | For simplicity's sake, I would just reorganize the HTML a bit to put the newly displayed content inside the element that the mouseover event is bound to:
```
<div id="hoverable">
<a>Hover Me</a>
<div style="display:none;">
<input>Test</input>
<select>
<option>Option 1</option>
<option>Option 2</option>
</select>
</div>
</div>
```
Then, you could do something like this:
```
$('#hoverable').hover( function() { $(this).find("div").show(); },
function() { $(this).find("div").hide(); } );
```
---
*Note: I don't recommend inline css, but it was done to make the example easier to digest.* | How to disable mouseout events triggered by child elements? | [
"",
"javascript",
"jquery",
"events",
""
] |
This is a fundamental question, but an important one nonetheless...
**When starting a C++ program whose main method has the following common signature:**
```
int main(int argc, char* args[]) {
//Magic!
return 0;
}
```
**is args[0] always guaranteed to be the path to the currently running program? What about cross platform (since I am in a Linux environment but may port later on.)?** | It is not always. It's the value that was handed to the program by whatever launched it, usually the operating system. For example, when starting a program using `exec`, you can set it to an arbitrary value:
```
int execve(const char *filename, char *const argv[],
char *const envp[]);
```
The first parameter is the file to start, and argv will contain argv[0] and all the other parameters for main. envp contains the environment variables (not defined by standard C or C++; this is a POSIX thing).
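To make this concrete, here is a small sketch (my illustration, not part of the original answer) that demonstrates the same POSIX behavior from Python, whose `subprocess` module exposes `execve`'s split between the file to execute and argv: the `executable` argument names the file actually run, while the first element of the args list becomes the child's `argv[0]`. The name "not-the-real-path" is an arbitrary made-up value:

```python
import subprocess

# Run /bin/sh, but hand it a made-up argv[0] ("not-the-real-path").
# "executable" is the file actually executed; the args list becomes argv.
# The shell's $0 is its argv[0], so it echoes the fake name, not /bin/sh.
result = subprocess.run(
    ["not-the-real-path", "-c", "echo $0"],
    executable="/bin/sh",
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # not-the-real-path
```

The child reports the fake name rather than `/bin/sh`, which is why `argv[0]` cannot be trusted to be the executable's path.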
More precisely, this is the definition of argv in C++:
> An implementation shall not predefine the main function. This function shall not be overloaded. It shall
> have a return type of type int, but otherwise its type is implementation-defined. All implementations
> shall allow both of the following definitions of main:
```
int main() { /* ... */ }
```
> and
```
int main(int argc, char* argv[]) { /* ... */ }
```
> In the latter form argc shall be the number of arguments passed to the program from the environment in
> which the program is run. If argc is nonzero these arguments shall be supplied in `argv[0]` through
> `argv[argc-1]` as pointers to the initial characters of null-terminated multibyte strings (NTMBSs)
> (17.3.2.1.3.2) and `argv[0]` shall be the pointer to the initial character of a NTMBS that represents the
> name used to invoke the program or "". The value of argc shall be nonnegative. The value of
> `argv[argc]` shall be 0. [Note: it is recommended that any further (optional) parameters be added after
> argv. ]
It's pretty much up to the implementation what defines a "name used to invoke the program". If you want to get the full path of your executable, you can use [GetModuleFileName](http://msdn.microsoft.com/en-us/library/ms683197%28VS.85%29.aspx) on Windows, and `argv[0]` (for getting the name used to execute, which may be relative) together with `getcwd` (for getting the current working directory, to try to make the name absolute). | No. On Windows, GetModuleFileName guarantees the exact full path to the currently executing program. On Linux there is a symlink, /proc/self/exe. Do a readlink on this symlink to get the full path of the currently executing program. Even if your program was called through a symlink, /proc/self/exe will always point to the actual program. | Is args[0] guaranteed to be the path of execution? | [
"",
"c++",
"argv",
""
] |
When writing plugins for media center, your plugin is hosted in `ehexthost.exe`. This exe gets launched by `ehshell.exe`, and you have no way of launching it directly; instead you pass a special parameter to `ehshell.exe`, which will launch the plugin in a separate process.
When we are debugging [media browser](http://code.google.com/p/videobrowser/source/checkout), I find the process of attaching to a second process kind of clunky. I know about Debugger.Attach and also about some [special registry](http://msdn.microsoft.com/en-us/library/a329t4ed.aspx) entries I can use.
Neither of these methods exactly fits the bill. What I want is to press F5 and have my current instance of Visual Studio attach to the child process automatically. Can this be done?
If there is a plugin for VS that allows me to achieve this functionality I would be happy with it.
**EDIT**
I ended up going with the following macro:
```
Public Sub CompileRunAndAttachToEhExtHost()
DTE.Solution.SolutionBuild.Build(True)
DTE.Solution.SolutionBuild.Debug()
Dim trd As System.Threading.Thread = New System.Threading.Thread(AddressOf AttachToEhExtHost)
trd.Start()
End Sub
Public Sub AttachToEhExtHost()
Dim i As Integer = 0
Do Until i = 50
i = i + 1
Try
For Each proc As EnvDTE.Process In DTE.Debugger.LocalProcesses
If (proc.Name.IndexOf("ehexthost.exe") <> -1) Then
proc.Attach()
Exit Sub
End If
Next
Catch e As Exception
' don't care - stuff may be busy
End Try
Threading.Thread.Sleep(100)
Loop
End Sub
```
Also, I outlined the process on how to [get this going](http://www.samsaffron.com/archive/2009/01/28/Simpler+debugging+of+Vista+Media+Center+plugins) on my blog. | I would use a macro. I've redefined my F5 function to attach to the asp.net process instead of the long build/validate it usually performs. This works pretty well for me and it's really easy.
```
For Each process In DTE.Debugger.LocalProcesses
If (process.Name.IndexOf("aspnet_wp.exe") <> -1) Then
process.Attach()
Exit Sub
End If
Next
``` | For VS2012, macros have been dropped, but you can still do it quite quickly with standard keyboard shortcuts. For instance, to attach to iisexpress.exe:
`Ctrl` + `Alt` + `p` - brings up the Attach To Process dialog
`i` - jumps to the first process beginning with i in the list (for me this is iisexpress.exe)
`Enter` - attaches
For super speed, you can also [Turn off Visual Studio Attach security warning when debugging IIS](https://stackoverflow.com/questions/1414769/turn-off-visual-studio-attach-security-warning-when-debugging-iis). | Attaching to a child process automatically in Visual Studio during Debugging | [
"",
"c#",
".net",
"visual-studio",
"debugging",
"visual-studio-debugging",
""
] |
I am trying to write a static function to Or two expressions, but receive the following error:
> The parameter 'item' is not in scope.
>
> Description: An unhandled exception
> occurred during the execution of the
> current web request. Please review the
> stack trace for more information about
> the error and where it originated in
> the code.
>
> Exception Details:
> System.InvalidOperationException: The
> parameter 'item' is not in scope.
the method:
```
public static Expression<Func<T, bool>> OrExpressions(Expression<Func<T, bool>> left, Expression<Func<T, bool>> right)
{
// Define the parameter to use
var param = Expression.Parameter(typeof(T), "item");
var filterExpression = Expression.Lambda<Func<T, bool>>
(Expression.Or(
left.Body,
right.Body
), param);
// Build the expression and return it
return (filterExpression);
}
```
**edit**: adding more info
The expressions being or'd are coming from the method below, which executes just fine. If there is a better way to or the results, I am all ears. Also, I do not know in advance how many are being or'd.
```
public static Expression<Func<T, bool>> FilterExpression(string filterBy, object Value, FilterBinaryExpression binaryExpression)
{
// Define the parameter to use
var param = Expression.Parameter(typeof(T), "item");
// Filter expression on the value
switch (binaryExpression)
{
case FilterBinaryExpression.Equal:
{
// Build an expression for "Is the parameter equal to the value" by employing reflection
var filterExpression = Expression.Lambda<Func<T, bool>>
(Expression.Equal(
Expression.Convert(Expression.Property(param, filterBy), typeof(TVal)),
Expression.Constant(Value)
),
param);
// Build the expression and return it
return (filterExpression);
}
```
**edit**: adding even more info
Alternatively, is there a better way to do an or? Currently `.Where(constraint)` works just fine, where constraint is of type `Expression<Func<T, bool>>`. How can I do `Where(constraint1 or constraint2)` (up to the n'th constraint)?
Thanks in advance! | The issue is that the Expression you're creating in the method OrExpressions reuses the body of the two expressions. Those bodies will contain references to their own ParameterExpression that has been defined in FilterExpression.
A fix would be to rewrite the left and right parts to use the new ParameterExpression, or to pass the original ParameterExpression along. Two ParameterExpressions having the same name does not mean they represent the same parameter. | As already suggested, [here](http://www.albahari.com/nutshell/predicatebuilder.aspx) you can find this very nice (working) code
```
public static Expression<Func<T, bool>> Or<T>(this Expression<Func<T, bool>> expr1, Expression<Func<T, bool>> expr2)
{
var invokedExpr = Expression.Invoke(expr2, expr1.Parameters.Cast<Expression>());
return Expression.Lambda<Func<T, bool>>(Expression.Or(expr1.Body, invokedExpr), expr1.Parameters);
}
```
that you can adapt to your needs and which isn't tied (IMHO) to LINQ. | Expression.Or, The parameter 'item' is not in scope | [
"",
"c#",
"linq",
"expression-trees",
"expression",
""
] |
I'm writing a web app (Java) which allows users to select contacts. The contacts details can be downloaded (currently in CSV format) and used to perform a mail merge in Word 2007.
I would like to use a format which is a bit more 'robust' than CSV. Those of you in non-English areas will know the comma/semicolon problems!
Which format would you use? | I prefer TSV (Tab Separated Values) for this sort of task. I have never encountered a dataset containing literal tabs that were desired in the output. | Not having much experience in "non-English" mail merges, what's wrong with exporting the contacts in xlsx format and using that as your datasource? | What is the best format to use when creating a mail list for use in a Word 2007? | [
"",
"java",
"ms-word",
"office-2007",
"mailmerge",
""
] |
I am looking for a C/C++ library to convert HTML (actually XHTML + CSS) documents to PDF.
It is for commercial use and source would be nice but not essential.
Anybody have any recommendations or experience doing this?
UPDATE: To clarify, I am targeting the Windows platform only. I am developing with Borland C++ Builder 2006, but the library does not have to be a VCL component.
Many thanks in advance.
Steve. | Just to bump this, I have evaluated both [VisPDF](http://www.vispdf.com/) and [PDFDoc Scout](http://bytescout.com/pdfdocscout.html) and will probably go with PDFDoc Scout as it can format HTML input.
Thanks for everybody else's input. | To do that I have successfully used wkhtmltopdf.
It uses WebKit and can be called from the command line or as a static library. It's great and simple to use.
[wkhtmltopdf website](http://wkhtmltopdf.org/)
Open source (LGPL) and free!
Hope it can help | C++ Library to Convert HTML to PDF? | [
"",
"c++",
"html",
"pdf",
""
] |