| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
I have the following code:
```
<script type="text/javascript">
function SubmitForm()
{
form1.submit();
}
function ShowResponse()
{
}
</script>
.
.
.
<div>
<a href="#" onclick="SubmitForm();">Click</a>
</div>
```
I want to capture the HTML response of `form1.submit`. How do I do this? Can I register a callback function for the `form1.submit` method? | You won't be able to do this easily with plain JavaScript. When you post a form, the form inputs are sent to the server and your page is refreshed - the data is handled on the server side. That is, the `submit()` function doesn't actually return anything; it just sends the form data to the server.
If you really wanted to get the response in Javascript (without the page refreshing), then you'll need to use AJAX, and when you start talking about using AJAX, you'll *need* to use a library. [jQuery](http://www.jquery.com) is by far the most popular, and my personal favourite. There's a great plugin for jQuery called [Form](http://malsup.com/jquery/form/) which will do exactly what it sounds like you want.
Here's how you'd use jQuery and that plugin:
```
$('#myForm')
.ajaxForm({
url : 'myscript.php', // or whatever
dataType : 'json',
success : function (response) {
alert("The server says: " + response);
}
})
;
``` | The non-jQuery vanilla Javascript way, extracted from 12me21's comment:
```
var xhr = new XMLHttpRequest();
xhr.open("POST", "/your/url/name.php");
xhr.onload = function(event){
alert("Success, server responded with: " + event.target.response); // raw response
};
// or onerror, onabort
var formData = new FormData(document.getElementById("myForm"));
xhr.send(formData);
```
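If you'd rather send an urlencoded body than the multipart one that `FormData` produces, `URLSearchParams` works as a drop-in body type. A small sketch (the field names here are made up):

```javascript
// Serialize a plain object as application/x-www-form-urlencoded.
function encodeForm(fields) {
  return new URLSearchParams(fields).toString();
}

console.log(encodeForm({ user: "joe", note: "two words" }));
// → user=joe&note=two+words

// A URLSearchParams body also works with XMLHttpRequest.send() or fetch(),
// and sets the urlencoded Content-Type header automatically, e.g.:
//   xhr.send(new URLSearchParams({ user: "joe" }));
```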
Note that when you `send()` a `FormData` object, as in the snippet above, the browser sets the content type to `multipart/form-data` automatically (plain HTML form posts default to `application/x-www-form-urlencoded`). If you want to send "other stuff" or tweak it somehow, see [here](https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Using_XMLHttpRequest) for some nitty gritty details. | How do I capture response of form.submit | [
"",
"javascript",
"forms",
"dom-events",
"form-submit",
""
] |
Does anyone know a way to open up an instance of the operating system (Windows/Linux/Mac) browser within a Swing window that is integrated into a Java application? No other actions would be performed other than opening a given URL. Currently, we open a new browser window because the Java embedded browsers have been insufficient. However, from a user interaction standpoint this is less than desirable.
I'm curious if a solution for this was part of the 1.6 Java release. So far my google searching has not turned up anything of note. Are there any closed-source libraries that do this? | [JDIC](https://jdic.dev.java.net/) | We use JDIC as well and it works for us in Windows; however, configuring it to work in \*nix/OS X can be a pain, as it simply utilizes a platform-native browser (supports IE and Mozilla), while on Linux/Mac you may have neither - that's the problem. | Embedding web browser window in Java | [
"",
"java",
"swing",
"browser",
"embed",
""
] |
I have a bunch of MDI child nodes that are all created in the same way and to reduce redundant code I'd like to be able to call a method, pass it a string (name of child node), have it create the node and add it to the parent.
I can do all the stuff except create the class from a string of a class name. How can I do this? | I'm currently using this in one of my applications to new up a class:
```
public static IMobileAdapter CreateAdapter(Type AdapterType)
{
return (IMobileAdapter)System.Activator.CreateInstance(AdapterType);
}
```
It's returning an instance of a class that implements IMobileAdapter, but you could use it equally easily with a string:
```
public static IMyClassInterface CreateClass(string MyClassType)
{
return (IMyClassInterface)System.Activator.CreateInstance(Type.GetType(MyClassType));
}
```
Call it using code similar to the following:
```
IMyClassInterface myInst = CreateClass("MyNamespace.MyClass, MyAssembly");
```
Of course, the class it creates must implement the interface IMyClassInterface in this case, but with a class factory, you'd likely have all your classes implementing the same interface anyway.
***Edit:***
In reference to your comment, for the purpose of this discussion, think of the term "assembly" as the set of files within your vb/cs project. I'm assuming that you're doing this all within a single project [assembly] and not spreading over multiple projects.
In your case as your classes will be extending the Form object, you would do something like this.
```
Form myInst = CreateClass("MyExtendedForm");
```
Or
```
Form myInst = CreateClass(Type.GetType("MyExtendedForm"));
```
Depending on whether you get the type within your CreateClass method or outside it. You would need to cast your instance to the correct type in order to access any custom members. Consider this:
```
class MyCustomForm : Form
{
public int myCustomField{ get; set; }
}
```
I've got a custom form that extends Form adding the myCustomField property. I want to instantiate this using Activator.CreateInstance():
```
public static Form CreateClass(string InstanceName)
{
return (Form)System.Activator.CreateInstance(Type.GetType(InstanceName));
}
```
I then call it using:
```
Form myInst = CreateClass("MyCustomForm");
```
So now I have my custom form stored in myInst. However, to access the custom property [myCustomField], you would need to cast your instance to the correct form:
```
int someVal = ((MyCustomForm)myInst).myCustomField;
``` | ```
Activator.CreateInstance(Type.GetType(typeName))
``` | Factory Model in C# | [
"",
"c#",
".net-3.5",
"factory",
""
] |
Two Questions:
Will I get different sequences of numbers for every seed I put into it?
Are there some "dead" seeds? (Ones that produce zeros or repeat very quickly.)
By the way, which, if any, other PRNGs should I use?
Solution: Since I'm going to be using the PRNG to make a game, I don't need it to be cryptographically secure. I'm going with the Mersenne Twister, both for its speed and huge period. | To some extent, random number generators are horses for courses. The Random class implements an LCG with reasonably chosen parameters. But it still exhibits the following features:
* fairly short period (2^48)
* bits are not equally random (see my article on [randomness of bit positions](http://www.javamex.com/tutorials/random_numbers/lcg_bit_positions.shtml))
* will only generate a small fraction of *combinations* of values (the famous problem of "[falling in the planes](http://www.javamex.com/tutorials/random_numbers/lcg_planes.shtml)")
If these things don't matter to you, then Random has the redeeming feature of being provided as part of the JDK. It's good enough for things like casual games (but not ones where money is involved). There are no weak seeds as such.
Another alternative is the [XORShift generator](http://www.javamex.com/tutorials/random_numbers/xorshift.shtml), which can be implemented in Java as follows:
```
long x; // generator state; seed with any non-zero value

public long randomLong() {
x ^= (x << 21);
x ^= (x >>> 35);
x ^= (x << 4);
return x;
}
```
For some very cheap operations, this has a period of 2^64-1 (zero is not permitted), and is simple enough to be inlined when you're generating values repeatedly. Various shift values are possible: see George Marsaglia's paper on XORShift Generators for more details. You can consider bits in the numbers generated as being equally random. One main weakness is that occasionally it will get into a "rut" where not many bits are set in the number, and then it takes a few generations to get out of this rut.
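For completeness, here is a self-contained version of that generator as a small class. This is a sketch; the seed check (any non-zero value) is the one constraint described above:

```java
// XORShift generator with 64 bits of state and period 2^64 - 1.
// The state must never be zero, or the generator gets stuck at zero.
public class XorShift64 {
    private long x;

    public XorShift64(long seed) {
        if (seed == 0) {
            throw new IllegalArgumentException("seed must be non-zero");
        }
        this.x = seed;
    }

    public long randomLong() {
        x ^= (x << 21);
        x ^= (x >>> 35);
        x ^= (x << 4);
        return x;
    }
}
```

Two instances seeded identically produce the same sequence, which is handy for reproducible game worlds.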
Other possibilities are:
* combine different generators (e.g. feed the output from an XORShift generator into an LCG, then add the result to the output of an XORShift generator with different parameters): this generally allows the weaknesses of the different methods to be "smoothed out", and can give a longer period if the periods of the combined generators are carefully chosen
* add a "lag" (to give a longer period): essentially, where a generator would normally transform the last number generated, store a "history buffer" and transform, say, the (n-1023)th.
I would say avoid generators that use a stupid amount of memory to give you a period longer than you really need (some have a period greater than the number of atoms in the universe-- you really don't usually need that). And note that "long period" doesn't necessarily mean "high quality generator" (though 2^48 is still a little bit low!). | As zvrba said, that JavaDoc explains the normal implementation. The [Wikipedia page on pseudo-random number generators](http://en.wikipedia.org/wiki/PRNG) has a fair amount of information and mentions the [Mersenne twister](http://en.wikipedia.org/wiki/Mersenne_twister), which is not deemed cryptographically secure, but is very fast and has various [implementations in Java](http://www.cs.gmu.edu/~sean/research/). (The last link has two implementations - there are others available, I believe.)
If you need cryptographically secure generation, read the Wikipedia page - there are various options available. | How good is java.util.Random? | [
"",
"java",
"random",
""
] |
When using contentEditable in Firefox, is there a way to prevent the user from inserting paragraph or line breaks by pressing enter or shift+enter? | You can attach an event handler to the keydown or keypress event for the contentEditable field and cancel the event if the keycode identifies itself as enter (or shift+enter).
This will disable enter/shift+enter completely when focus is in the contentEditable field.
If using jQuery, something like:
```
$("#idContentEditable").keypress(function(e){ return e.which != 13; });
```
...which will return false and cancel the keypress event on enter. | This is possible with Vanilla JS, with the same effort:
```
document.getElementById('idContentEditable').addEventListener('keypress', (evt) => {
if (evt.which === 13) {
evt.preventDefault();
}
});
```
You should not use jQuery for the most simple things. Also, you may want to use "key" instead of "which": <https://developer.mozilla.org/en-US/docs/Web/Events/keypress>
**Update, since `keypress` is deprecated:**
```
document.getElementById('idContentEditable').addEventListener('keydown', (evt) => {
if (evt.key === 'Enter') {
evt.preventDefault();
}
});
``` | Prevent line/paragraph breaks in contentEditable | [
"",
"javascript",
"firefox",
"contenteditable",
""
] |
How do you left pad an `int` with zeros when converting to a `String` in java?
I'm basically looking to pad out integers up to `9999` with leading zeros (e.g. 1 = `0001`). | Use [`java.lang.String.format(String,Object...)`](https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#format-java.lang.String-java.lang.Object...-) like this:
```
String.format("%05d", yournumber);
```
for zero-padding with a length of 5. For hexadecimal output replace the `d` with an `x` as in `"%05x"`.
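A quick runnable check of both cases (the values are arbitrary):

```java
public class ZeroPadDemo {
    public static void main(String[] args) {
        // Pad decimal 1 to five digits, and hex 255 to five digits.
        System.out.println(String.format("%05d", 1));   // prints 00001
        System.out.println(String.format("%05x", 255)); // prints 000ff
    }
}
```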
The full formatting options are documented as part of [`java.util.Formatter`](https://docs.oracle.com/javase/8/docs/api/java/util/Formatter.html). | Let's say you want to print `11` as `011`
**You could use a formatter**: `"%03d"`.
You can use this formatter like this:
```
int a = 11;
String with3digits = String.format("%03d", a);
System.out.println(with3digits);
```
Alternatively, some java methods directly support these formatters:
```
System.out.printf("%03d", a);
``` | How can I pad an integer with zeros on the left? | [
"",
"java",
"formatting",
"zero-pad",
""
] |
I am implementing Javascript Validation on an ASP.Net page that is inside a Master Page. The control IDs are changing, so I can't use the original control IDs for validation. I looked on the Net and got the suggestion to build the script dynamically, and add it to the page using RegisterStartupScript. I don't like that solution because it is not maintainable.
What are the ways I can perform JavaScript validation on this page while correctly handling the generated client IDs?
Another solution just asked me to go ahead and use the final clientIDs that appear in the browser. That is workable and I'd prefer it to creating the script at runtime. What's your opinion? | It's perfectly acceptable to refer to the ClientID via inline code (e.g. <% %> tags). The example given won't really work, though; you need to access the DOM...
```
<script language="javascript" type="text/javascript">
var control = document.getElementById('<%= control.ClientID %>');
</script>
``` | IMO, copying the final ClientID values that appear in the browser is trouble waiting to happen. If you change a control ID in your code and forget to change it in your javascript, you might not notice until the validation mysteriously stops working. By building it dynamically you'll at least get the benefit of compile-time errors.
When I need to access a control's client ID in javascript I put a script block at the top of the page thusly:
```
<script>
var SomeControl = '<%= Control1.ClientID %>';
var AnotherControl = '<%= Control2.ClientID %>';
// etc, etc.
</script>
```
Unfortunately it still feels like an ugly hack. Future versions of ASP.NET will supposedly address this scenario. | What's the best way to do javascript validation on ASP.Net page with Masterpage? | [
"",
".net",
"asp.net",
"javascript",
"master-pages",
""
] |
How can I validate user input using scanf? Right now I have something like this, but it doesn't work.
NOTE: I have the atoi just to validate that the scanf validation works.
```
scanf("%[0987654321.-]s",buf);
i = atoi(buf);
if(i)
index = i;
``` | Using `scanf()` is usually a bad idea for user input since failure leaves the `FILE` pointer at an unknown position. That's because `scanf` stands for "scan formatted" and there is little more *unformatted* than user input.
I would suggest using `fgets()` to get a line in, followed by `sscanf()` on the string to actually check and process it.
This also allows you to check the string for those characters you desire (either via a loop or with a regular expression), something which the `scanf` family of functions is not really suited for.
By way of example, using `scanf()` with a `"%d"` or `"%f"` will stop at the first non-number character so won't catch trailing errors like `"127hello"`, which will just give you 127. And using it with a non-bounded `%s` is just *begging* for a buffer overflow.
If you really must use the `[]` format specifier (in `scanf` *or* `sscanf`), I don't think it's meant to be followed by `s`.
And, for a robust input solution using that advice, see [here](https://stackoverflow.com/questions/4023895/how-to-read-string-entered-by-user-in-c/4023921#4023921). Once you have an input line as a string, you can `sscanf` to your heart's content. | You seem to want to validate a string as input. It depends on whether you want to validate that your string contains a double or an int. The following checks for a double (leading and trailing whitespace is allowed).
```
bool is_double(char const* s) {
int n;
double d;
return sscanf(s, "%lf %n", &d, &n) == 1 && !s[n];
}
```
`sscanf` returns the number of items converted (`%n` doesn't count toward that total). `n` is set to the number of input characters processed. If all input was processed, `s[n]` will be the terminating 0 character. The space between the two format specifiers accounts for optional trailing whitespace.
The following checks for an int, same techniques used:
```
bool is_int(char const* s) {
int n;
int i;
return sscanf(s, "%d %n", &i, &n) == 1 && !s[n];
}
```
There was a question on that [here](https://stackoverflow.com/questions/392981/how-can-i-convert-string-to-double-in-c), which also includes more C++-ish ways of doing this, like using string streams and functions from Boost, such as lexical\_cast. They should generally be preferred over functions like scanf, since it's very easy to forget to pass some '%' to scanf, or some address. scanf won't recognize that, but instead will do arbitrary things, while lexical\_cast, for instance, will throw an exception if anything isn't right. | How to validate input using scanf | [
"",
"c++",
"c",
"scanf",
""
] |
I have the following css code:
```
#Layer3
{
position:absolute;
width: 89%;
height: 40%;
left: 10%;
top: 56%;
background-color: #f1ffff;
}
#Layer3 h1
{
font-size: medium;
color: #000033;
text-decoration: none;
text-align: center;
}
.tableheader {
border-width:10px; border-style:solid;
}
.tablecontent {
height: 95%;
overflow:auto;
}
```
However, when I use this PHP to generate the html
```
echo '<div id="tableheader" class="tableheader">';
echo "<h1>{$query} Auctions</h1>" . "\n";
echo "</div>";
echo '<div id="tablecontent" class="tablecontent">';
echo "<table border='0' width='100%'><tr>" . "\n";
echo "<td width='15%'>Seller ID</td>" . "\n";
echo "<td width='10%'>Start Date</td>" . "\n";
echo "<td width='75%'>Description</td>" . "\n";
echo "</tr>\n";
// printing table rows
foreach ($rows as $row)
{
$pk = $row['ARTICLE_NO'];
echo '<tr>' . "\n";
// table contents generated here
echo '</tr>' . "\n";
}
echo "</table>";
}
echo "</div>";
```
which generates this html:
```
<div id="tableheader" class="tableheader">
<h1>hardy Auctions</h1>
</div>
<div id="tablecontent" class="tablecontent">
<table border='0' width='100%'>
<tr>
<td width='15%'>Seller ID</td>
<td width='10%'>Start Date</td>
<td width='75%'>Description</td>
the rest of table stuff
</div>
```
The stylesheet is correctly referenced so I am unsure what is causing the error. But there is no border around tableheader at all. Both of these layers are in Layer3 which no longer displays properly on the page. | ```
#tableheader {
border: 10px solid #000;
}
```
Try giving it a color.
EDIT: since its id is tableheader, try changing the style selector to be an id. You could also try using !important to see if anything is overriding your class selector.
Specificity values:
inline: 1000; id: 100; class: 10; element: 1
!important trumps all other non-important declarations. | Start by browsing the HTML DOM in the rendered page either using Firebug in Firefox or using the IE Developer Toolbar in IE.
That way, you can see what styles are actually associated with the element in the rendered page. It's a lot easier to debug the issue from there.
One possibility is that there's a syntax error somewhere in the CSS file causing the styles not to be applied correctly. | css div not being recognized | [
"",
"php",
"html",
"css",
""
] |
I read in the Essential C# 3.0 and .NET 3.5 book that:
> GetHashCode()’s returns over the life of a particular object should be
> constant (the same value), even if the object’s data changes. In many
> cases, you should cache the method return to enforce this.
Is this a valid guideline?
I have tried a couple built-in types in .NET and they didn't behave like this. | The answer is mostly, it is a valid guideline, but perhaps not a valid rule. It also doesn't tell the whole story.
The point being made is that for mutable types, you cannot base the hash code on the mutable data because two equal objects must return the same hash code and the hash code has to be valid for the lifetime of the object. If the hash code changes, you end up with an object that gets lost in a hashed collection because it no longer lives in the correct hash bin.
For example, object A returns hash of 1. So, it goes in bin 1 of the hash table. Then you change object A such that it returns a hash of 2. When a hash table goes looking for it, it looks in bin 2 and can't find it - the object is orphaned in bin 1. This is why the hash code must not change for the lifetime of the object, and just one reason why writing GetHashCode implementations is a pain in the butt.
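The failure mode described above takes only a few lines to reproduce. This sketch (hypothetical class names) uses Java's `HashSet`, but the same thing happens with .NET's hashed collections:

```java
import java.util.HashSet;

// A key whose hash code depends on mutable state -- the classic mistake.
class MutableKey {
    int value;
    MutableKey(int value) { this.value = value; }
    @Override public int hashCode() { return value; }
    @Override public boolean equals(Object o) {
        return o instanceof MutableKey && ((MutableKey) o).value == value;
    }
}

public class OrphanedKey {
    public static void main(String[] args) {
        HashSet<MutableKey> set = new HashSet<>();
        MutableKey key = new MutableKey(1);
        set.add(key);      // stored in the bin for hash 1
        key.value = 2;     // hash changes while the key is in the set
        System.out.println(set.contains(key)); // prints false -- orphaned
    }
}
```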
**Update**
[Eric Lippert has posted a blog](http://ericlippert.com/2011/02/28/guidelines-and-rules-for-gethashcode/) that gives excellent information on `GetHashCode`.
**Additional Update**
I've made a couple of changes above:
1. I made a distinction between guideline and rule.
2. I struck through "for the lifetime of the object".
A guideline is just a guide, not a rule. In reality, `GetHashCode` only has to follow these guidelines when things expect the object to follow the guidelines, such as when it is being stored in a hash table. If you never intend to use your objects in hash tables (or anything else that relies on the rules of `GetHashCode`), your implementation doesn't need to follow the guidelines.
When you see "for the lifetime of the object", you should read "for the time the object needs to co-operate with hash tables" or similar. Like most things, `GetHashCode` is about knowing when to break the rules. | It's been a long time, but nevertheless I think it is still necessary to give a correct answer to this question, including explanations about the whys and hows. The best answer so far is the one citing the MSDN exhaustively - don't try to make your own rules; the MS guys knew what they were doing.
But first things first:
The Guideline as cited in the question is wrong.
Now the whys - there are two of them
**First why**:
If the hashcode is computed in such a way that it does not change during the lifetime of an object, even if the object itself changes, then it would break the equals contract.
Remember:
"If two objects compare as equal, the GetHashCode method for each object must return the same value. However, if two objects do not compare as equal, the GetHashCode methods for the two object do not have to return different values."
The second sentence often is misinterpreted as "The only rule is, that at object creation time, the hashcode of equal objects must be equal". Don't really know why, but that's about the essence of most answers here as well.
Think of two objects containing a name, where the name is used in the equals method: Same name -> same thing.
Create Instance A: Name = Joe
Create Instance B: Name = Peter
Hashcode A and Hashcode B will most likely not be the same.
What would now happen, when the Name of instance B is changed to Joe?
According to the guideline from the question, the hashcode of B would not change. The result of this would be:
A.Equals(B) ==> true
But at the same time:
A.GetHashCode() == B.GetHashCode() ==> false.
But exactly this behaviour is forbidden explicitly by the equals&hashcode-contract.
**Second why**:
While it is - of course - true that changes in the hashcode could break hashed lists and other objects using the hashcode, the reverse is true as well. Not changing the hashcode will, in the worst case, produce hashed lists where a lot of different objects share the same hashcode and therefore sit in the same hash bin - this happens when objects are initialized with a standard value, for example.
---
Now coming to the hows
Well, on first glance, there seems to be a contradiction - either way, code will break.
But neither problem does come from changed or unchanged hashcode.
The source of the problems is well described in the MSDN:
From MSDN's hashtable entry:
> Key objects must be immutable as long
> as they are used as keys in the
> Hashtable.
This does mean:
Any object that creates a hashvalue should change the hashvalue when the object changes, but it must not - absolutely must not - allow any changes to itself while it is used inside a Hashtable (or any other hash-using object, of course).
First how
The easiest way would of course be to design immutable objects only for use in hashtables, which will be created as copies of the normal, mutable objects when needed.
Inside the immutable objects it's obviously OK to cache the hashcode, since it's immutable.
Second how
Or give the object a "you are hashed now" flag, make sure all object data is private, check the flag in all functions that can change object data, and throw an exception if a change is not allowed (i.e. the flag is set).
Now, when you put the object in any hashed area, make sure to set the flag, and - as well - unset the flag, when it is no longer needed.
For ease of use, I'd advise setting the flag automatically inside the "GetHashCode" method - this way it can't be forgotten. And the explicit call of a "ResetHashFlag" method will make sure that the programmer has to think about whether it is or is not allowed to change the object's data at that point.
Ok, what should be said as well: There are cases, where it is possible to have objects with mutable data, where the hashcode is nevertheless unchanged, when the objects data is changed, without violating the equals&hashcode-contract.
This does, however, require that the equals method is not based on the mutable data either.
So, if I write an object, and create a GetHashCode method that calculates a value only once and stores it inside the object to return it on later calls, then I must, again: absolutely must, create an Equals method that will use stored values for the comparison, so that A.Equals(B) will never change from false to true either. Otherwise, the contract would be broken. The result of this will usually be that the Equals method doesn't make any sense - it's neither the original reference equality nor value equality. Sometimes this may be intended behaviour (i.e. customer records), but usually it is not.
So, just make GetHashCode result change, when the object data changes, and if the use of the object inside of hash using lists or objects is intended (or just possible) then make the object either immutable or create a readonly flag to use for the lifetime of a hashed list containing the object.
(By the way: All of this is not C# or .NET specific - it is in the nature of all hashtable implementations, or more generally of any indexed list, that identifying data of objects should never change while the object is in the list. Unexpected and unpredictable behaviour will occur if this rule is broken. Somewhere there may be list implementations that monitor all elements inside the list and do automatic reindexing - but the performance of those will surely be gruesome at best.) | GetHashCode Guidelines in C# | [
"",
"c#",
".net",
"hashcode",
""
] |
I keep getting stuck conceptually on deciding an Exception-handling structure for my project.
Suppose you have, as an example:
```
public abstract class Data {
public abstract String read();
}
```
And two subclasses FileData, which reads your data from some specified file, and StaticData, which just returns some pre-defined constant data.
Now, upon reading the file, an IOException may be thrown in FileData, but StaticData will never throw. Most style guides recommend propagating an Exception up the call stack until a sufficient amount of context is available to effectively deal with it.
But I don't really want to add a throws clause to the abstract read() method. Why? Because Data and the complicated machinery using it knows nothing about files, it just knows about Data. Moreover, there may be other Data subclasses (and more of them) that never throw exceptions and deliver data flawlessly.
On the other hand, the IOException is necessary, for if the disk is unreadable (or some such) an error *must* be thrown. So the only way out that I see is catching the IOException and throwing some RuntimeException in its place.
Is this the correct philosophy? | You're right.
The exception should be at the same level of abstraction where it is used. This is the reason why, since Java 1.4, Throwable supports exception chaining. There is no point in throwing a FileNotFoundException from a service that uses a database, for instance, or from a service that is "store"-agnostic.
It could be like this:
```
public abstract class Data {
public abstract String read() throws DataUnavailableException;
}
class DataFile extends Data {
public String read() throws DataUnavailableException {
if( !this.file.exits() ) {
throw new DataUnavailableException( "Cannot read from " + file );
}
try {
....
} catch( IOException ioe ) {
throw new DataUnavailableException( ioe );
} finally {
...
}
}
}
class DataMemory extends Data {
public String read() {
// Everything is performed in memory. No exception expected.
}
}
class DataWebService extends Data {
public String read() throws DataUnavailableException {
// connect to some internet service
try {
...
} catch( UnknownHostException uhe ) {
throw new DataUnavailableException( uhe );
}
}
}
```
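The `DataUnavailableException` used above isn't shown; a minimal sketch of it would just wrap the store-specific cause:

```java
// Checked exception at the Data abstraction level. Callers of read()
// deal with this type instead of IOException, UnknownHostException, etc.
public class DataUnavailableException extends Exception {
    public DataUnavailableException(String message) {
        super(message);
    }

    public DataUnavailableException(Throwable cause) {
        super(cause);
    }
}
```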
Bear in mind that if you program with inheritance in mind, you should design carefully for the specific scenarios and test implementations with those scenarios. Obviously it is harder to code a general-purpose library, because you don't know how it is going to be used. But most of the time applications are constrained to a specific domain.
Should your new exception be Runtime or Checked? It depends; the general rule is to throw Runtime for programming errors and checked for recoverable conditions.
If the exception could be avoided by programming correctly ( such as NullPointerException or IndexOutOfBounds ) use Runtime
If the exception is due to some external resource out of control of the programmer ( the network is down for instance ) AND there is something THAT could be done ( Display a message of retry in 5 mins or something ) then a checked exception should be used.
If the exception is out of control of the programmer, but NOTHING can be done, you could use a RuntimeException. For instance, you're supposed to write to a file, but the file was deleted and you cannot re-create it or re-try then the program should fail ( there is nothing you can do about it ) most likely with a Runtime.
See these two items from Effective Java:
* Use checked exceptions for recoverable conditions and run-time exceptions for programming errors
* Throw exceptions appropriate to the abstraction
I hope this helps. | If you're not explicitly stating that `read()` can throw an exception, then you'll surprise developers when it does.
In your particular case I'd catch the underlying exceptions and rethrow them as a new exception class `DataException` or `DataReadException`. | Java Style: Properly handling exceptions | [
"",
"java",
"exception",
""
] |
I am trying to create a column in a table that's a foreign key, but in MySQL that's more difficult than it should be. It would require me to go back and make certain changes to an already-in-use table. So I wonder, **how necessary is it for MySQL to be sure that a certain value is appropriate? Couldn't I just do that with a language like PHP, which I'm using to access this database *anyway*?**
Similarly with NOT NULL. If I only access this database with PHP, couldn't I simply have PHP ensure that no null value is entered?
**Why should I use MySQL to do enforce these constraints, when I could just do it with PHP?**
---
I realize that NOT NULL is a very stupid part to neglect for the above reasons. But MySQL doesn't enforce foreign keys without a serious degree of monkeying around.
In your opinion, would it still be bad to use the "fake" foreign keys, and simply check if the values to be entered are matched in other tables, with PHP? | You are *going* to make mistakes with PHP, 100% guaranteed. PHP is procedural. What you want are declarative constraints. You want to tell the entire stack: "These are the constraints on the data, and these constraints cannot be violated." You don't want to much around with "Step 1 ... Step 2 ... Step 3 ... Step 432 ...", as your method of enforcing constraints on data, because
* you're going to get it wrong
* when you change it later, you will forget what you did now
* nobody else will know all of these implicit constraints like you know them now, and that includes your future self
* it takes a lot of code to enforce constraints properly and all the time - the database server has this code already, but are you prepared to write it?
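To make that concrete, the declarative version is a single statement (table and column names here are hypothetical):

```sql
-- Requires InnoDB tables in MySQL; MyISAM silently ignores FK definitions.
ALTER TABLE orders
  ADD CONSTRAINT fk_orders_customer
  FOREIGN KEY (customer_id) REFERENCES customers (id);
```

After this, every client - PHP or otherwise - is prevented from inserting an orphaned `customer_id`, with no procedural code to maintain.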
The question should actually be worded, "Why should I use PHP to enforce these constraints, when I could just do it with MySQL?" | You can't "just" do it with [PHP](http://en.wikipedia.org/wiki/PHP) for the same reason that programmers "just" can't write bug-free code. It's harder than you think. Especially if you think it's not that hard. | How important are constraints like NOT NULL and FOREIGN KEY if I'll always control my database input with PHP? | [
"",
"php",
"mysql",
"constraints",
""
] |
Do you have any formal or informal standards for reasonably achievable SQL query speed? How do you enforce them? Assume a production OLTP database under full realistic production load of a couple dozen queries per second, properly equipped and configured.
Personal example for illustrative purposes (not a recommendation, highly contingent on many factors, some outside your control):
Expectation:
Each transactional unit (single statement, multiple SQL statements from beginning to end transaction boundaries, or a single stored procedure, whichever is largest) must execute in 1 second or less on average, without anomalous outliers.
Resolution:
Slower queries must be optimized to standard. Slow queries for reports and other analysis are moved to an OLAP cube (best case) or a static snapshot database.
(Obviously some execution queries (Insert/Update/Delete) can't be moved, so must be optimized, but so far in my experience it's been achievable.) | I usually go by the one second rule when writing/refactoring stored procedures, although my workplace doesn't have any specific rules about this. It's just my common sense. Experience tells me that if it takes up to ten seconds or more for a procedure to execute, which doesn't perform any large bulk inserts, there are usually serious problems in the code that can easily be corrected.
The most common problem I encounter in SPs with poor performance is incorrect use of indexes, causing costly index seek operations. | Given that you can't expect deterministic performance on a system that could (at least in theory) be subject to transient load spikes, you want your performance SLA to be probabilistic. An example of this might be:
95% of transactions to complete within 2 seconds.
95% of search queries (more appropriate for a search screen) to complete within 10 seconds.
95% of operational reports to complete within 10 seconds.
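A window of measured latencies can be checked against such a probabilistic SLA mechanically; a minimal sketch (the sample numbers are invented):

```python
def meets_sla(latencies, threshold, quantile=0.95):
    """True if at least `quantile` of the sampled latencies (seconds)
    finished within `threshold`."""
    if not latencies:
        return True                      # vacuously met: nothing ran
    within = sum(1 for t in latencies if t <= threshold)
    return within / len(latencies) >= quantile

# Invented samples, purely to illustrate the check:
transactions = [0.5] * 19 + [30.0]       # 19 of 20 within 2 s -> exactly 95%
searches = [4.0] * 17 + [60.0] * 3       # 17 of 20 within 10 s -> only 85%
```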
Transactional and search queries can't be moved off the transactional system, so the only actions you can take are database or application tuning, or buying faster hardware.
For operational reports, you need to be ruthless about what qualifies as an operational report. Only reports that *absolutely* need to have access to up-to-date data should be run off the live system. Reports that do a lot of I/O are very anti-social on a production system, and normalised schemas tend to be quite inefficient for reporting. Move any reports that do not require real-time data off onto a data warehouse or some other separate reporting facility. | SQL Queries - How Slow is Too Slow? | [
"",
"sql",
""
] |
How do I check if a given object is nullable? In other words, how do I implement the following method...
```
bool IsNullableValueType(object o)
{
...
}
```
I am looking for nullable *value types.* I didn't have reference types in mind.
```
//Note: This is just a sample. The code has been simplified
//to fit in a post.
public class BoolContainer
{
bool? myBool = true;
}
var bc = new BoolContainer();
const BindingFlags bindingFlags = BindingFlags.Public
| BindingFlags.NonPublic
| BindingFlags.Instance
;
object obj;
object o = (object)bc;
foreach (var fieldInfo in o.GetType().GetFields(bindingFlags))
{
obj = (object)fieldInfo.GetValue(o);
}
```
`obj` now refers to an object of type `bool` (`System.Boolean`) with value equal to `true`. What I really wanted was an object of type `Nullable<bool>`
So now as a work around I decided to check if o is nullable and create a nullable wrapper around obj. | There are two types of nullable - `Nullable<T>` and reference-type.
Jon has corrected me that it is hard to get the type if boxed, but you can with generics:
- so how about below. This is actually testing type `T`, but using the `obj` parameter purely for generic type inference (to make it easy to call) - it would work almost identically without the `obj` param, though.
```
static bool IsNullable<T>(T obj)
{
if (obj == null) return true; // obvious
Type type = typeof(T);
if (!type.IsValueType) return true; // ref-type
if (Nullable.GetUnderlyingType(type) != null) return true; // Nullable<T>
return false; // value-type
}
```
But this won't work so well if you have already boxed the value to an object variable.
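For comparison, in Python the analogous question is usually asked of type annotations rather than boxed values; a rough sketch using `typing` introspection (not related to the C# answer above):

```python
import types
import typing

def is_optional(tp) -> bool:
    """True if the annotation admits None (Optional[X], Union[X, None], X | None)."""
    origin = typing.get_origin(tp)
    # X | None (Python 3.10+) has origin types.UnionType; Optional[X] has typing.Union
    if origin is typing.Union or origin is getattr(types, "UnionType", None):
        return type(None) in typing.get_args(tp)
    return False
```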
Microsoft documentation: <https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/nullable-types/how-to-identify-a-nullable-type> | There is a very simple solution using method overloads
<http://deanchalk.com/is-it-nullable/>
excerpt:
```
public static class ValueTypeHelper
{
public static bool IsNullable<T>(T t) { return false; }
public static bool IsNullable<T>(T? t) where T : struct { return true; }
}
```
then
```
static void Main(string[] args)
{
int a = 123;
int? b = null;
object c = new object();
object d = null;
int? e = 456;
var f = (int?)789;
bool result1 = ValueTypeHelper.IsNullable(a); // false
bool result2 = ValueTypeHelper.IsNullable(b); // true
bool result3 = ValueTypeHelper.IsNullable(c); // false
bool result4 = ValueTypeHelper.IsNullable(d); // false
bool result5 = ValueTypeHelper.IsNullable(e); // true
bool result6 = ValueTypeHelper.IsNullable(f); // true
``` | How to check if an object is nullable? | [
"",
"c#",
".net",
"nullable",
""
] |
When you double-click on a Word document, Word is automatically run and the document is loaded.
What steps are needed to do the same thing with my C# application?
In other words, assume my application uses ".XYZ" data files. I know how to tell Windows to start my application when a .XYZ data file is double clicked. **But how do I find out within my application what data file was chosen so I can load it?** | I did this in a project I was working on awhile ago and don't have the source code handy, but I believe it really came down to this:
```
//program.cs
[STAThread]
static void Main(string[] args)
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
if (args.Length > 0)
{
//launch same form as normal or different
Application.Run(new Form1(args));
}
else
{
Application.Run(new Form1());
}
}
```
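For comparison, the same decision in another runtime is just an inspection of the process arguments — a Python sketch (the function name and the .xyz extension check are mine, mirroring the question's example):

```python
import sys

def pick_startup_file(argv):
    """Return the data file the shell association handed us, or None for a
    plain start.  `argv` excludes the program name; the .xyz extension check
    is purely illustrative."""
    if argv and argv[0].lower().endswith(".xyz"):
        return argv[0]
    return None

if __name__ == "__main__":
    path = pick_startup_file(sys.argv[1:])
    if path is None:
        print("normal start")
    else:
        print("opening", path)
```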
args is empty when the application is normally started, but when you've properly linked in the association to your .xyz file, when one of those files is selected, your application will be launched with the file location as the first element of the string[]. Certainly either in program.cs or your launch form, I would add validation, but at the base level I believe this is what you need to do. | Granted this is a VB.NET solution, but [this article](http://www.codeproject.com/KB/vb/VBFileAssociation.aspx) has details on how to create the file association for your application in the registry and how to retrieve the command arguments when your application is fired up to do the proper file handling.
It looks easy enough to port over to C#. | Starting application from data file | [
"",
"c#",
"double-click",
""
] |
What is the difference between these two APIs?
Which one is faster and more reliable using the Python DB API?
**Upd:**
I see two psql drivers for Django. The first one is psycopg2.
What is the second one? pygresql? | For what it's worth, django uses psycopg2. | "PyGreSQL is written in Python only, easy to deployed but slower."
PyGreSQL contains a C-coded module, too. I haven't done speed tests, but they're not likely to be much different, as the real work will happen inside the database server. | PyGreSQL vs psycopg2 | [
"",
"python",
"postgresql",
""
] |
Does anybody know what is the best approach to accessing a sql view through Grails (or if this is even possible)? It seems an obvious way of doing this would be to use executeQuery against the view to select a collection of rows from the view which we would not treat as a list of domain objects. However, even in this case it is not obvious which domain class to run executeQuery against, since really we are just using that domain class in order to run the query against a completely unrelated entity (the view).
Would it be preferred to create a domain class representing the view and we could then just use list() against that domain class? It seems like there would be problems with this as Grails probably expects to be able to insert, update, delete, and modify the table schema of any domain class.
[Edit:
Follow up question here: [Grails Domain Class without ID field or with partially NULL composite field](https://stackoverflow.com/questions/430004/grails-domain-class-without-id-field-or-with-partially-null-composite-field)] | You can use plain SQL in Grails, which in the case of accessing a view is the preferable way (IMO):
For example in your controller:
```
import groovy.sql.Sql
class MyFancySqlController {
def dataSource // the Spring-Bean "dataSource" is auto-injected
def list = {
def db = new Sql(dataSource) // Create a new instance of groovy.sql.Sql with the DB of the Grails app
def result = db.rows("SELECT foo, bar FROM my_view") // Perform the query
[ result: result ] // return the results as model
}
}
```
and the view part:
```
<g:each in="${result}">
<tr>
<td>${it.foo}</td>
<td>${it.bar}</td>
</tr>
</g:each>
```
I hope the source is self-explanatory. The [Documentation can be found here](http://groovy.codehaus.org/Database+features) | You can put this in your domain class mappings:
```
static mapping = {
cache 'read-only'
}
```
But I'm not sure if it helps Hibernate understand it's a view... <http://docs.jboss.org/hibernate/stable/core/reference/en/html_single/#performance-cache-readonly>
Anyway, we use database views a lot as grails domain classes in our current project, because HQL is a pain in the ass and it's simpler to use SQL to join tables.
One thing you need to be careful about though, is the Hibernate batching of queries (and the whole flush business). If you insert something in a table, and then in the same transaction you select a view that depends on that table, you will not get the latest rows you inserted. This is because Hibernate will not actually have inserted the rows yet, whereas if you selected the table you inserted rows in, Hibernate would have figured out it needed to flush its pending queries before giving you the result of your select.
One solution is to use (`flush:true`) when saving a domain instance that you know you will need to read through a view thereafter in the same transaction.
It would be cool, however, to have some way to tell Hibernate which other domain classes a view/domain depends on, so that the Hibernate flushing works seamlessly. | SQL/Database Views in Grails | [
"",
"sql",
"database",
"grails",
"view",
""
] |
I believe that the usage of preprocessor directives like `#if UsingNetwork` is bad OO practice - other coworkers do not.
I think that when using an IoC container (e.g. Spring), components can be easily configured if programmed accordingly. In this context either a property `IsUsingNetwork` can be set by the IoC container or, if the "using network" implementation behaves differently, another implementation of that interface should be implemented and injected (e.g.: `IService`, `ServiceImplementation`, `NetworkingServiceImplementation`).
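For illustration, the injected-implementation idea sketched in Python (the class names just mirror the C# ones above and are not real framework APIs):

```python
# Illustrative only -- names mirror the IService example above.
class Service:
    """The role the IoC container wires up."""
    def send(self, msg):
        raise NotImplementedError

class ServiceImplementation(Service):
    def send(self, msg):
        return "stored locally: " + msg

class NetworkingServiceImplementation(Service):
    def send(self, msg):
        return "sent over the wire: " + msg

class Client:
    def __init__(self, service):
        self._service = service    # injected: the client never picks a concrete class

    def run(self, msg):
        return self._service.send(msg)
```

Swapping behaviour becomes a wiring decision at runtime, not a recompile with different symbols defined.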
Can somebody please provide **citations of OO-Gurus** or **references in books** which basically reads "Preprocessor usage is bad OO practice if you try to configure behaviour which should be configured via an IoC container"?
**I need these citations to convince coworkers to refactor...**
Edit: I do know and agree that using preprocessor directives to change target-platform-specific code during compilation is fine and that is what preprocessor directives are made for. However, I think that runtime configuration should be used rather than compile-time configuration to get well-designed and testable classes and components. In other words: Using #defines and #if's beyond what they are meant for will lead to difficult-to-test code and badly designed classes.
Has anybody read something along these lines and can give me a reference I can refer to? | Henry Spencer wrote a paper called [#ifdef Considered Harmful](https://www.usenix.org/legacy/publications/library/proceedings/sa92/spencer.pdf).
Also, Bjarne Stroustrup himself, in the chapter 18 of his book [The Design and Evolution of C++](https://books.google.com/books?id=GvivU9kGInoC), frowns on the use of preprocessor and wishes to eliminate it completely. However, Stroustrup also recognizes the necessity for #ifdef directive and the conditional compilation and goes on to illustrate that there is no good alternative for it in C++.
Finally, Pete Goodliffe, in chapter 13 of his book [Code Craft: The Practice of Writing Excellent Code](http://books.google.com/books?id=i4zCzpkrt4sC), gives an example how, even when used for its original purpose, #ifdef can make a mess out of your code.
Hope this helps. However, if your co-workers won't listen to reasonable arguments in the first place, I doubt book quotes will help convince them ;) | Preprocessor directives in C# have very clearly defined and practical uses cases. The ones you're specifically talking about, called conditional directives, are used to control which parts of the code are compiled and which aren't.
There is a very important difference between not compiling parts of code and controlling how your object graph is wired via IoC. Let's look at a real-world example: XNA. When you're developing XNA games that you plan to deploy on both Windows and XBox 360, your solution will typically have at least two platforms that you can switch between, in your IDE. There will be several differences between them, but one of those differences will be that the XBox 360 platform will define a conditional symbol XBOX360 which you can use in your source code with a following idiom:
```
#if (XBOX360)
// some XBOX360-specific code here
#else
// some Windows-specific code here
#endif
```
You could, of course, factor out these differences using a Strategy design pattern and control via IoC which one gets instantiated, but the conditional compilation offers at least three major advantages:
1. You don't ship code you don't need.
2. You can see the differences between platform-specific code for both platforms in the rightful context of that code.
3. There's no indirection overhead. The appropriate code is compiled, the other isn't and that's it. | Quote needed: Preprocessor usage is bad OO practice | [
"",
"c#",
"oop",
"c-preprocessor",
"dos-donts",
""
] |
What's a good algorithm for determining the remaining time for something to complete? I know how many total lines there are, and how many have completed already, how should I estimate the time remaining? | Why not?
~~`(linesProcessed / TimeTaken)`~~ `(timetaken / linesProcessed) * LinesLeft = TimeLeft`
`TimeLeft` will then be expressed in whatever unit of time `timeTaken` is.
# Edit:
Thanks for the comment; you're right, this should be:
`(TimeTaken / linesProcessed) * linesLeft = timeLeft`
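As a quick sketch in Python (guarding the cold start where nothing has been processed yet):

```python
def time_left(time_taken, lines_processed, lines_left):
    """(TimeTaken / linesProcessed) * linesLeft, guarding the cold start."""
    if lines_processed == 0:
        return None          # nothing measured yet, no estimate
    return time_taken / lines_processed * lines_left
```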
so we have
`(10 / 100) * 200` = 20 Seconds now 10 seconds go past
`(20 / 100) * 200` = 40 Seconds left now 10 more seconds and we process 100 more lines
`(30 / 200) * 100` = 15 Seconds and now we all see why the copy file dialog jumps from 3 hours to 30 minutes :-) | I'm surprised no one has answered this question with code!
The simple way to calculate time, as answered by @JoshBerke, can be coded as follows:
```
DateTime startTime = DateTime.Now;
for (int index = 0, count = lines.Count; index < count; index++) {
// Do the processing
...
// Calculate the time remaining:
TimeSpan timeRemaining = TimeSpan.FromTicks(DateTime.Now.Subtract(startTime).Ticks * (count - (index+1)) / (index+1));
// Display the progress to the user
...
}
```
This simple example works great for simple progress calculation.
However, for a more complicated task, there are many ways this calculation could be improved!
For example, when you're downloading a large file, the download speed could easily fluctuate. To calculate the most accurate "ETA", a good algorithm would be to only consider the past 10 seconds of progress. Check out [**ETACalculator.cs**](http://github.com/scottrippey/Progression/blob/master/Progression/Extras/ETACalculator.cs) for an implementation of this algorithm!
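That last-10-seconds idea is language-neutral; a minimal Python version keeping a sliding window of (time, work-done) samples (the window size is an arbitrary pick, not taken from ETACalculator.cs):

```python
import collections

class WindowedEta:
    """ETA from only the most recent progress samples, so a long-finished
    slow patch stops skewing the estimate."""

    def __init__(self, window=10):
        self._samples = collections.deque(maxlen=window)   # (timestamp, work_done)

    def update(self, now, done):
        self._samples.append((now, done))

    def eta(self, total):
        if len(self._samples) < 2:
            return None                        # not enough data yet
        (t0, d0), (t1, d1) = self._samples[0], self._samples[-1]
        if d1 == d0 or t1 == t0:
            return None                        # stalled: no rate to project
        rate = (d1 - d0) / (t1 - t0)           # work units per second
        return (total - d1) / rate
```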
**ETACalculator.cs** is from [**Progression**](http://github.com/scottrippey/Progression) -- an open source library that I wrote. It defines a very easy-to-use structure for all kinds of "progress calculation". It makes it easy to have nested steps that report different types of progress. If you're concerned about Perceived Performance (as @JoshBerke suggested), it will help you immensely. | Calculate Time Remaining | [
"",
"c#",
"algorithm",
""
] |
I'm working on a rather large project, and it's unlikely I will catch everything. I've found the event that notifies me of unhandled exceptions, however I haven't found a way to programmatically shut off the Windows error dialog. Ideally, if there is an unhandled exception, I would like to have that event fired, provide a dialog box telling the user that there is a problem, and then to close gracefully. Is there any way to do this? I realize I could wrap the highest layer in a try catch, but I was hoping for something a little more elegant. | This is what we did.
```
static void Main() {
try
{
SubMain();
}
catch (Exception e)
{
HandleUnhandledException(e);
}
}
private static void SubMain()
{
// Setup unhandled exception handlers
AppDomain.CurrentDomain.UnhandledException += // CLR
new UnhandledExceptionEventHandler(OnUnhandledException);
Application.ThreadException += // Windows Forms
new System.Threading.ThreadExceptionEventHandler(
OnGuiUnhandledException);
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new frmMain());
}
// CLR unhandled exception
private static void OnUnhandledException(Object sender,
UnhandledExceptionEventArgs e)
{
HandleUnhandledException(e.ExceptionObject);
}
// Windows Forms unhandled exception
private static void OnGuiUnhandledException(Object sender,
System.Threading.ThreadExceptionEventArgs e)
{
HandleUnhandledException(e.Exception);
}
``` | You sort of answered your own question. The best way to prevent the error dialog is to write code that handles your exceptions so the dialog will never appear. I would wager this is what Raymond Chen would suggest (if I may be so bold). | Prevent "Send error report to Microsoft" | [
"",
"c#",
"error-handling",
""
] |
```
template<typename T, size_t n>
size_t array_size(const T (&)[n])
{
return n;
}
```
The part that I don't get is the parameters for this template function. What happens with the array when I pass it through there that gives `n` as the number of elements in the array? | Well, first you have to understand that trying to get a value out of an array can give you a pointer to its first element:
```
int a[] = {1, 2, 3};
int *ap = a; // a pointer, size is lost
int (&ar)[3] = a; // a reference to the array, size is not lost
```
References refer to objects using their exact type or their base-class type. The key is that the template takes arrays by reference. Arrays (not references to them) as parameters do not exist in C++. If you give a parameter an array type, it will be a pointer instead. So using a reference is necessary when we want to know the size of the passed array. The size and the element type are automatically deduced, as is generally the case for function templates. The following template
```
template<typename T, size_t n>
size_t array_size(const T (&)[n]) {
return n;
}
```
Called with our previously defined array `a` will implicitly instantiate the following function:
```
size_t array_size(const int (&)[3]) {
return 3;
}
```
Which can be used like this:
```
size_t size_of_a = array_size(a);
```
---
There's a variation I made up some time ago *[Edit: turns out someone already had that same idea [here](http://blogs.msdn.com/the1/archive/2004/05/07/128242.aspx)]* which can determine a value at compile time. Instead of returning the value directly, it gives the template a return type depending on `n`:
```
template<typename T, size_t n>
char (& array_size(const T (&)[n]) )[n];
```
You say if the array has `n` elements, the return type is a reference to an array having size `n` and element type `char`. Now, you can get a compile-time determined size of the passed array:
```
size_t size_of_a = sizeof(array_size(a));
```
Because an array of `char` having `n` elements has sizeof `n`, that will give you the number of elements in the given array too. At compile time, so you can do
```
int havingSameSize[sizeof(array_size(a))];
```
Because the function never is actually called, it doesn't need to be defined, so it doesn't have a body. Hope I could clear the matter up a little bit. | Think of it this way, suppose you had a bunch of functions:
```
// Note that you don't need to name the array, since you don't
// actually reference the parameter at all.
size_t array_size(const int (&)[1])
{
return 1;
}
size_t array_size(const int (&)[2])
{
return 2;
}
size_t array_size(const int (&)[3])
{
return 3;
}
// etc...
```
Now when you call this, which function gets called?
```
int a[2];
array_size(a);
```
Now if you templatize the arraysize, you get:
```
template <int n>
size_t array_size(const int (&)[n])
{
return n;
}
```
The compiler will attempt to instantiate a version of array\_size that matches whatever parameter you call it with. So if you call it with an array of 10 ints, it will instantiate array\_size with n=10.
Next, just templatize the type, so you can call it with more than just int arrays:
```
template <typename T, int n>
size_t array_size(const T (&)[n])
{
return n;
}
```
And you're done.
**Edit**: A note about the `(&)`
The parentheses are needed around the `&` to differentiate between array of int references (illegal) and reference to array of ints (what you want). Since the precedence of `[]` is higher than `&`, if you have the declaration:
```
const int &a[1];
```
because of operator precedence, you end up with a one-element array of const references to int. If you want the `&` applied first, you need to force that with parentheses:
```
const int (&a)[1];
```
Now you have a const reference to a one-element array of ints. In the function parameter list, you don't need to specify the name of a parameter if you don't use it, so you can drop the name, but keep the parentheses:
```
size_t array_size(const int (&)[1])
``` | Can someone explain this template code that gives me the size of an array? | [
"",
"c++",
"arrays",
"templates",
"size",
""
] |
There seem to be many excellent web frameworks for Python. Has anyone used any of these (Pylons, Web2Py, Django) with IronPython? | Django [has been run on IronPython](http://unbracketed.org/2008/mar/16/pycon-2008-django-now-plays-dark-side/) before, but as a proof-of-concept. I know the IronPython team are interested in Django support as a metric for Python-compatibility.
Somewhat related is the possibility to use [IronPython with ASP.NET and ASP.NET MVC](http://haacked.com/archive/2008/09/23/refreshing-asp.net-dynamic-language-support.aspx), which is probably more mature. | You may want to read [this](http://groups.google.com/group/web2py/browse_thread/thread/857f9d7bc5c25822/278c5a8e209c83e5?lnk=gst&q=ironpython#278c5a8e209c83e5)
Basically web2py code runs unmodified and out of the box with IronPython, but
* no CSV module (so no database IO)
* no third party database drivers (not even SQLite, so no databases at all)
* no built-in web server (unless you cripple it by removing signals and logging)
This is because csv, signals, logging and sqlite are not present in IronPython.
As you can see from the thread above, there is work underway to find ways around this.
web2py also runs unmodified on Jython 2.5 beta, without any known limitation, except for a bug with regular expressions in Jython that makes it choke on some templates (re.compile(...).finditer goes into a loop). We are working to find a way around this as well. | IronPython Webframework | [
"",
"python",
"ironpython",
""
] |
Is it ok to do the following:
```
View(new { A = objectA, B = objectB })
```
Or should Object A and Object B be explicitly declared in a new type?
Thanks. | Although anonymous types are versatile for many MVC purposes, in this case I would use a regular named class, or at a push a dictionary (or the inbuilt one). Otherwise you will have to use reflection / `TypeDescriptor` to get the values out again. | Yes, it's fine to do so. To get the values, you can use ViewData.Eval("PropertyName") and the existing Html helpers will work fine with them. The only thing you won't be able to do is get strongly typed access to the properties using <%= ViewData.Model.PropertyName %> | Is it safe to pass anonymous type to MVC ViewData.Model? | [
"",
"c#",
"asp.net-mvc",
"model-view-controller",
""
] |
I'm writing a javascript based photo gallery with a horizontally scrollable thumbnail bar.
[>> My current work-in-progress is here <<](http://www.sharehost.co.uk/gallerytest/)
I would like the thumbnail bar to stop scrolling when it gets to the last thumbnail. To do this I need to find the total width of the contents of the div - preferably without adding up the widths of all the thumbnail images and margins.
I've put an alert in my window.onload function so I can test the various element dimension functions in different browsers. Currently it's showing the value of scrollWidth, which is reported as 1540px by IE and 920px by FireFox, Safari, Opera, etc.
The value 1540 is the correct one for my purposes. Can anyone tell me how to obtain this value in FireFox, etc.? | Maybe you're just referencing the wrong element.
```
document.getElementById('thumb_window').scrollWidth
```
is giving me 1540 on that page in both IE6 and firefox 2. Is that what you're looking for?
BTW in IE6 the thumbnails extend way past the right scroller. | I think you're looking for "offsetWidth".
For a cross-browser experience its either scroll[Width/Height] or offset[Width/Height], whichever is greater.
```
var elemWidth = (elem.offsetWidth > elem.scrollWidth) ? elem.offsetWidth : elem.scrollWidth;
var elemHeight = (elem.offsetHeight > elem.scrollHeight) ? elem.offsetHeight : elem.scrollHeight;
``` | Javascript/xhtml - discovering the total width of content in div with overflow:hidden | [
"",
"javascript",
"css",
"xhtml",
""
] |
What are the best practices for choosing the linking method in VC++? Can anything/everything be statically linked?
On a dynamically linked project, is the relative/absolute location of the linked library important?
What are the pros and cons?
**added**: I was mainly referring to lib files. Do they behave the same as DLL linking? | Dynamic links allow you to upgrade individual DLLs without recompiling your applications. That is why Windows can be upgraded without your application being recompiled, because the dynamic linker is able to determine the entry points in the DLL, provided that the method name exists.
Statically linking your application has a benefit in that calls to the linked code are not indirected, so they run faster. This may have an impact on extremely performance dependent code.
Using DLLs can also help you reduce your memory footprint, as effectively you only load the libraries as you need them and you can unload them when you're done (think application plugins: only load an image browsing library when you have an image open, etc.)
EDIT: Robert Gamble has added a comment which I missed: DLLs are loaded into memory shared by all processes in the operating system. This means if two programs (or two instances of your program) use the same DLL, they will use the same DLL loaded into memory, which will further reduce your overall memory usage. | DLLs *can* make for a smaller runtime working set, if the application were written in such a way as to manage the context switching between DLLs (for example, for larger applications, you could divide the application's functionality into logical boundaries to be implemented within self-contained DLLs and allow the loader to load them at runtime).
While it's true that DLLs are primarily installed/copied into the same folder as the .exe, the requirement is to adhere to the loader's loading rules (which include the system folder (bad idea), PATH, and the current directory [see the LoadLibrary API Help documentation for a full description of precedence]).
You "added" a comment regarding LIB files. In BOTH Dynamic and Static, you link using LIB files. But in the case of dynamic loading you deliver the .exe along with all dependent DLLs (the LIB files contain the exported entry points for the corresponding DLL).
I prefer DLLs as my applications tend to be larger and segmented and this allows me to deliver ONLY those updated components (DLLs). We even separate business logic from presentation in their own DLLs [this permits localization of the resource-only DLL independent of the logic].
Programming using DLLs DOES cause you to force yourself to adhere to the contract of the exported class/method or function. | Static/Dynamic Runtime Linking | [
"",
"c++",
"windows",
"linker",
""
] |
I am using the Apache Commons IO:
```
FileUtils.copyFileToDirectory(srcFile, destDir)
```
How do I make Windows lock the destination file during copy? Windows locks the file correctly if I use:
```
Runtime.getRuntime().exec(
"cmd /c copy /Y \"" + srcFile.getCanonicalPath() + "\" \""
+ destDir.getCanonicalPath() + "\"").waitFor();
```
Notes:
The contention is not with the local program, but an external one. The file is being copied to a remote system. The remote system is processing the file before it completes the copy. Because the systems are Windows, the normal copy locks the file and blocks the external program from access. | Java doesn't natively support file locking.
If contention for the file is coming from within your program, perhaps you need to build additional synchronization on top of the file copy to make sure concurrent writes don't clobber one another. However, if the contention is coming from somewhere external to your software then there isn't much you can do. You can try writing the file to a temporary directory and then renaming it since the rename is more or less atomic (depending on the filesystem).
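The temp-file-then-rename approach might look like this in Python — `os.replace` renames atomically on POSIX and replaces an existing destination in one step on Windows (same-filesystem only; the function name is mine):

```python
import os
import tempfile

def copy_then_publish(data, dest_path):
    """Write into a temp file in the destination directory, then swap it into
    place in one step.  Readers see either the old file or the complete new
    one, never a half-written copy."""
    dest_dir = os.path.dirname(dest_path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dest_dir)   # same dir => same filesystem
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())          # push the bytes to disk first
        os.replace(tmp_path, dest_path)   # atomic rename/replace
    except BaseException:
        os.unlink(tmp_path)               # clean up the partial temp file
        raise
```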
It would help to have more information on why you need to lock the file in the first place.
> The contention is not with the local
> program, but an external one. The file
> is being copied to a remote system.
> The remote system is processing the
> file before it completes the copy.
> Because the systems are Windows, the
> normal copy locks the file and blocks
> the external program from access.
In that case, you should try writing to a temporary file and then renaming it when the file is fully copied. File renames are atomic operations (on a non networked filesystem) so it should work for you. | [java.nio.channels.FileChannel](http://java.sun.com/javase/6/docs/api/java/nio/channels/FileChannel.html) will allow you to acquire a [FileLock](http://java.sun.com/javase/6/docs/api/java/nio/channels/FileLock.html) on a file, using a method native to the underlying file system, assuming such functionality is supported.
This lock operates across processes on the machine, even non-Java ones. (In fact, the lock is held on behalf of the specific JVM instance, so it is not suitable for managing contention between multiple threads within that same JVM.)
There are lots of caveats here, but it is worth investigating if you are working on Windows.
From the javadoc:
> This file-locking API is intended to map directly to the native locking facility of the underlying operating system. Thus the locks held on a file should be visible to all programs that have access to the file, regardless of the language in which those programs are written.
>
> Whether or not a lock actually prevents another program from accessing the content of the locked region is system-dependent and therefore unspecified. The native file-locking facilities of some systems are merely advisory, meaning that programs must cooperatively observe a known locking protocol in order to guarantee data integrity. On other systems native file locks are mandatory, meaning that if one program locks a region of a file then other programs are actually prevented from accessing that region in a way that would violate the lock. On yet other systems, whether native file locks are advisory or mandatory is configurable on a per-file basis. To ensure consistent and correct behavior across platforms, it is strongly recommended that the locks provided by this API be used as if they were advisory locks.
>
> On some systems, acquiring a mandatory lock on a region of a file prevents that region from being mapped into memory, and vice versa. Programs that combine locking and mapping should be prepared for this combination to fail.
>
> On some systems, closing a channel releases all locks held by the Java virtual machine on the underlying file regardless of whether the locks were acquired via that channel or via another channel open on the same file. It is strongly recommended that, within a program, a unique channel be used to acquire all locks on any given file.
>
> Some network filesystems permit file locking to be used with memory-mapped files only when the locked regions are page-aligned and a whole multiple of the underlying hardware's page size. Some network filesystems do not implement file locks on regions that extend past a certain position, often 2^30 or 2^31. In general, great care should be taken when locking files that reside on network filesystems. | Locking a file while copying using Commons IO | [
"",
"java",
"windows",
"file-io",
"apache-commons",
""
] |
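The write-to-a-temporary-file-then-rename approach suggested above can be sketched concretely. A minimal Python illustration (Python used purely for brevity; `os.replace` is the stdlib's atomic rename, and as the javadoc warns, none of this holds on network filesystems):

```python
import os
import tempfile

def atomic_write(path, data):
    """Write bytes to `path` so readers never observe a half-written file."""
    directory = os.path.dirname(os.path.abspath(path))
    # Create the temp file in the destination directory so the rename
    # stays on the same filesystem (a cross-filesystem rename is a copy).
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # make sure the bytes reach the disk
        os.replace(tmp, path)      # atomic rename on a local filesystem
    except BaseException:
        os.remove(tmp)
        raise

atomic_write("copied.dat", b"fully transferred contents")
print(open("copied.dat", "rb").read())
```

A reader polling for `copied.dat` sees either nothing or the complete file, which sidesteps the lock-contention problem entirely.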
I have the following code block, which I tried to optimize in the Optimized section:
```
DataSet dsLastWeighing = null;
DataSet ds = null;
DataSet dsWeight = null;
string strQuery = string.Empty;
string strWhere = string.Empty;
Database db = null;
#region Original Code Block
try
{
db = DatabaseFactory.CreateDatabase();
strWhere = "WHERE SESSION_ID = '"+pSessionID+"'";
strQuery = SQ.BusinessLogic.SQLQueryFactory.GetPatientLastWeighing("DeleteWeightRecent",db.ToString(),pFacilityID,pSessionID,strWhere,0,DateTime.Now.ToUniversalTime(),0,"");
db.ExecuteNonQuery(System.Data.CommandType.Text, strQuery);
strWhere = "WHERE LAB_ID = 0";
strQuery = SQ.BusinessLogic.SQLQueryFactory.GetPatientLastWeighing("InsertWeightRecent",db.ToString(),pFacilityID,pSessionID,strWhere,0,DateTime.Now.ToUniversalTime(),0,"");
db.ExecuteNonQuery(System.Data.CommandType.Text, strQuery);
strWhere = strWhere = "WHERE SESSION_ID = '"+pSessionID+"'";
strQuery = SQ.BusinessLogic.SQLQueryFactory.GetPatientLastWeighing("GetPatientID",db.ToString(),pFacilityID,pSessionID,strWhere,0,DateTime.Now.ToUniversalTime(),0,"");
ds = (DataSet) db.ExecuteDataSet(System.Data.CommandType.Text, strQuery);
foreach(DataRow dr in ds.Tables[0].Rows)
{
if (db.ToString() == "Microsoft.Practices.EnterpriseLibrary.Data.SqlBase.SqlBaseDatabase")
{
strWhere = "WHERE LAB_ID=0 AND PAT_ID ="+ int.Parse(dr["PAT_ID"].ToString())+" AND WHEN IN(SELECT MAX(WHEN) FROM PATIENT_LAB WHERE LAB_ID=0 AND PAT_ID="+ int.Parse(dr["PAT_ID"].ToString())+")";
}
else if (db.ToString() == "Microsoft.Practices.EnterpriseLibrary.Data.Sql.SqlDatabase")
{
strWhere = "WHERE LAB_ID=0 AND PAT_ID ="+ int.Parse(dr["PAT_ID"].ToString())+" AND [WHEN] IN(SELECT MAX([WHEN]) FROM PATIENT_LAB WHERE LAB_ID=0 AND PAT_ID="+ int.Parse(dr["PAT_ID"].ToString())+")";
}
strQuery = SQ.BusinessLogic.SQLQueryFactory.GetPatientLastWeighing("GetWeight",db.ToString(),pFacilityID,pSessionID,strWhere,0,DateTime.Now.ToUniversalTime(),0,"");
strMain.append(strQuery+" ");
dsWeight = (DataSet) db.ExecuteDataSet(System.Data.CommandType.Text, strQuery);
foreach(DataRow drWeight in dsWeight.Tables[0].Rows)
{
strWhere = "WHERE PAT_ID = "+int.Parse(dr["PAT_ID"].ToString())+" AND SESSION_ID ='"+pSessionID+"'";
strQuery = SQ.BusinessLogic.SQLQueryFactory.GetPatientLastWeighing("UpdateWeightRecent",db.ToString(),pFacilityID,pSessionID,strWhere,decimal.Parse(drWeight["LEVEL"].ToString()),DateTime.Parse(drWeight["WHEN"].ToString()).ToUniversalTime(),int.Parse(drWeight["IS_BAD"].ToString()),drWeight["NOTE"].ToString());
db.ExecuteNonQuery(System.Data.CommandType.Text, strQuery);
}
}
strWhere = " ORDER BY W.IS_BAD DESC, P.LASTNAME ASC, P.FIRSTNAME ASC,P.MIDDLENAME ASC";
strQuery = SQ.BusinessLogic.SQLQueryFactory.GetPatientLastWeighing("GetPatientLastWeight",db.ToString(),pFacilityID,pSessionID,strWhere,0,DateTime.Now.ToUniversalTime(),0,"");
dsLastWeighing = (DataSet) db.ExecuteDataSet(System.Data.CommandType.Text, strQuery);
}
catch(Exception ex)
{
throw ex;
}
finally
{
db = null;
ds= null;
dsWeight= null;
}
return dsLastWeighing;
#endregion
--Optimized Section--
#region Optimized Code Block
try
{
StringBuilder strMain=new StringBuilder();
db = DatabaseFactory.CreateDatabase();
//StartTime=DateTime.Now.ToLongTimeString();
strWhere = "WHERE SESSION_ID = '"+pSessionID+"'";
strQuery = SQ.BusinessLogic.SQLQueryFactory.GetPatientLastWeighing("DeleteWeightRecent",db.ToString(),pFacilityID,pSessionID,strWhere,0,DateTime.Now.ToUniversalTime(),0,"");
//EndTime=DateTime.Now.ToLongTimeString();
//db.ExecuteNonQuery(System.Data.CommandType.Text, strQuery);
strMain.append(strQuery+" ");
strWhere = "WHERE LAB_ID = 0";
//StartTime=DateTime.Now.ToLongTimeString();
strQuery = SQ.BusinessLogic.SQLQueryFactory.GetPatientLastWeighing("InsertWeightRecent",db.ToString(),pFacilityID,pSessionID,strWhere,0,DateTime.Now.ToUniversalTime(),0,"");
//EndTime=DateTime.Now.ToLongTimeString();
//db.ExecuteNonQuery(System.Data.CommandType.Text, strQuery);
strMain.append(strQuery+" ");
strWhere = strWhere = "WHERE SESSION_ID = '"+pSessionID+"'";
//StartTime=DateTime.Now.ToLongTimeString();
strQuery = SQ.BusinessLogic.SQLQueryFactory.GetPatientLastWeighing("GetPatientID",db.ToString(),pFacilityID,pSessionID,strWhere,0,DateTime.Now.ToUniversalTime(),0,"");
//EndTime=DateTime.Now.ToLongTimeString();
//ds = (DataSet) db.ExecuteDataSet(System.Data.CommandType.Text, strQuery);
strMain.append(strQuery+" ");
//StartTime=DateTime.Now.ToLongTimeString();
ds = (DataSet) db.ExecuteDataSet(System.Data.CommandType.Text, strMain.ToString());
//EndTime=DateTime.Now.ToLongTimeString();
strMain=null;
foreach(DataRow dr in ds.Tables[0].Rows)
{
//StartTime=DateTime.Now.ToLongTimeString();
if (db.ToString() == "Microsoft.Practices.EnterpriseLibrary.Data.SqlBase.SqlBaseDatabase")
{
strWhere = "WHERE LAB_ID=0 AND PAT_ID ="+ int.Parse(dr["PAT_ID"].ToString())+" AND WHEN IN(SELECT MAX(WHEN) FROM PATIENT_LAB WHERE LAB_ID=0 AND PAT_ID="+ int.Parse(dr["PAT_ID"].ToString())+")";
}
else if (db.ToString() == "Microsoft.Practices.EnterpriseLibrary.Data.Sql.SqlDatabase")
{
strWhere = "WHERE LAB_ID=0 AND PAT_ID ="+ int.Parse(dr["PAT_ID"].ToString())+" AND [WHEN] IN(SELECT MAX([WHEN]) FROM PATIENT_LAB WHERE LAB_ID=0 AND PAT_ID="+ int.Parse(dr["PAT_ID"].ToString())+")";
}
strQuery = SQ.BusinessLogic.SQLQueryFactory.GetPatientLastWeighing("GetWeight",db.ToString(),pFacilityID,pSessionID,strWhere,0,DateTime.Now.ToUniversalTime(),0,"");
strMain.append(strQuery+" ");
//EndTime=DateTime.Now.ToLongTimeString();
//dsWeight = (DataSet) db.ExecuteDataSet(System.Data.CommandType.Text, strQuery);
/*
foreach(DataRow drWeight in dsWeight.Tables[0].Rows)
{
strWhere = "WHERE PAT_ID = "+int.Parse(dr["PAT_ID"].ToString())+" AND SESSION_ID ='"+pSessionID+"'";
strQuery = SQ.BusinessLogic.SQLQueryFactory.GetPatientLastWeighing("UpdateWeightRecent",db.ToString(),pFacilityID,pSessionID,strWhere,decimal.Parse(drWeight["LEVEL"].ToString()),DateTime.Parse(drWeight["WHEN"].ToString()).ToUniversalTime(),int.Parse(drWeight["IS_BAD"].ToString()),drWeight["NOTE"].ToString());
db.ExecuteNonQuery(System.Data.CommandType.Text, strQuery);
}
*/
}
dsWeight = (DataSet) db.ExecuteDataSet(System.Data.CommandType.Text, strMain.ToString());
strMain=null;
//StartTime=DateTime.Now.ToLongTimeString();
for(int i=0;i<dsWeight.Tables.Count;i++)
{
foreach(DataRow drWeight in dsWeight.Tables[i].Rows)
{
strWhere = "WHERE PAT_ID = "+int.Parse(dr["PAT_ID"].ToString())+" AND SESSION_ID ='"+pSessionID+"'";
strQuery = SQ.BusinessLogic.SQLQueryFactory.GetPatientLastWeighing("UpdateWeightRecent",db.ToString(),pFacilityID,pSessionID,strWhere,decimal.Parse(drWeight["LEVEL"].ToString()),DateTime.Parse(drWeight["WHEN"].ToString()).ToUniversalTime(),int.Parse(drWeight["IS_BAD"].ToString()),drWeight["NOTE"].ToString());
strMain.append(strQuery+" ");
//db.ExecuteNonQuery(System.Data.CommandType.Text, strQuery);
}
}
db.ExecuteNonQuery(System.Data.CommandType.Text, strMain.ToString());
//EndTime=DateTime.Now.ToLongTimeString();
//StartTime=DateTime.Now.ToLongTimeString();
strWhere = " ORDER BY W.IS_BAD DESC, P.LASTNAME ASC, P.FIRSTNAME ASC,P.MIDDLENAME ASC";
strQuery = SQ.BusinessLogic.SQLQueryFactory.GetPatientLastWeighing("GetPatientLastWeight",db.ToString(),pFacilityID,pSessionID,strWhere,0,DateTime.Now.ToUniversalTime(),0,"");
dsLastWeighing = (DataSet) db.ExecuteDataSet(System.Data.CommandType.Text, strQuery);
//EndTime=DateTime.Now.ToLongTimeString();
}
catch(Exception ex)
{
throw ex;
}
finally
{
db = null;
ds= null;
dsWeight= null;
}
return dsLastWeighing;
#endregion
```
Can this be optimized further? Just concentrate on minimizing the loops; I am not getting any further ideas. Any help would be appreciated. | I would suggest:
* Use parameterised SQL
* Get rid of the catch block, or *at least* use "throw" instead of "throw ex" so you don't lose the information
* Get rid of the finally block - it's not helping you
* Declare variables when they're first needed, not all at the top of the method
* Break the method up into more manageable sections
When all those have been done, you'll be in a better place to tune the performance. At that point, profile the app and see whether the pain point is actually in the .NET code or in the interaction with the database. In my experience, database applications are usually improved by:
* Improving the SQL queries
* Reducing the round-trips
* Optimising the database (applying indexes etc) | Maybe you have optimized the creation of the SQL string, but I think this is peanuts compared to the time that it takes to communicate with the SQL server.
You win a few milliseconds by optimizing your strings, but lose a lot by using a DataSet.
I think you should focus on that part first. And not just the dataset thing, there is a lot more to gain if you optimize the SQL server. Maybe throw in a stored procedure, look at indexing etc.
Also, this code is **not safe at all** for SQL injection attacks. You should use parameters. | How to Optimize/Speedup Code Execution C#, Windows.Net | [
"",
"c#",
""
] |
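The "use parameterised SQL" advice above is the highest-impact fix. Since the original code uses a C#-specific data-access library, here is the same idea sketched with Python's built-in sqlite3 (the table and values are illustrative, not from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE weight_recent (session_id TEXT, level REAL)")

# Hostile input stays inert: the driver passes it as a value, never as SQL text.
session_id = "abc'; DROP TABLE weight_recent; --"
conn.execute(
    "INSERT INTO weight_recent (session_id, level) VALUES (?, ?)",
    (session_id, 72.5),
)
rows = conn.execute(
    "SELECT level FROM weight_recent WHERE session_id = ?",
    (session_id,),
).fetchall()
print(rows)
```

Besides the injection safety, parameterised statements also remove all of the string concatenation the original spends effort optimizing.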
Consider this example (typical in OOP books):
I have an `Animal` class, where each `Animal` can have many friends.
And subclasses like `Dog`, `Duck`, `Mouse` etc which add specific behavior like `bark()`, `quack()` etc.
Here's the `Animal` class:
```
public class Animal {
private Map<String,Animal> friends = new HashMap<>();
public void addFriend(String name, Animal animal){
friends.put(name,animal);
}
public Animal callFriend(String name){
return friends.get(name);
}
}
```
And here's some code snippet with lots of typecasting:
```
Mouse jerry = new Mouse();
jerry.addFriend("spike", new Dog());
jerry.addFriend("quacker", new Duck());
((Dog) jerry.callFriend("spike")).bark();
((Duck) jerry.callFriend("quacker")).quack();
```
Is there any way I can use generics for the return type to get rid of the typecasting, so that I can say
```
jerry.callFriend("spike").bark();
jerry.callFriend("quacker").quack();
```
Here's some initial code with return type conveyed to the method as a parameter that's never used.
```
public<T extends Animal> T callFriend(String name, T unusedTypeObj){
return (T)friends.get(name);
}
```
Is there a way to figure out the return type at runtime without the extra parameter using `instanceof`? Or at least by passing a class of the type instead of a dummy instance.
I understand generics are for compile time type-checking, but is there a workaround for this? | You could define `callFriend` this way:
```
public <T extends Animal> T callFriend(String name, Class<T> type) {
return type.cast(friends.get(name));
}
```
Then call it as such:
```
jerry.callFriend("spike", Dog.class).bark();
jerry.callFriend("quacker", Duck.class).quack();
```
This code has the benefit of not generating any compiler warnings. Of course this is really just an updated version of casting from the pre-generic days and doesn't add any additional safety. | You could implement it like this:
```
@SuppressWarnings("unchecked")
public <T extends Animal> T callFriend(String name) {
return (T)friends.get(name);
}
```
(Yes, this is legal code; see [Java Generics: Generic type defined as return type only](https://stackoverflow.com/questions/338887/java-generics-generic-type-defined-as-return-type-only).)
The return type will be inferred from the caller. However, note the `@SuppressWarnings` annotation: that tells you that **this code isn't typesafe**. You have to verify it yourself, or you could get `ClassCastExceptions` at runtime.
Unfortunately, the way you're using it (without assigning the return value to a temporary variable), the only way to make the compiler happy is to call it like this:
```
jerry.<Dog>callFriend("spike").bark();
```
While this may be a little nicer than casting, you are probably better off giving the `Animal` class an abstract `talk()` method, as David Schmitt said. | How do I make the method return type generic? | [
"",
"java",
"generics",
"return-value",
""
] |
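The accepted answer's class-token trick is not Java-specific; the same shape can be sketched in Python's typing module (class names mirror the question, and the `isinstance` check is exactly what passing the token buys you at runtime):

```python
from typing import Dict, Type, TypeVar

class Animal:
    pass

class Dog(Animal):
    def bark(self):
        return "woof"

class Duck(Animal):
    def quack(self):
        return "quack"

T = TypeVar("T", bound=Animal)

class Mouse(Animal):
    def __init__(self):
        self.friends: Dict[str, Animal] = {}

    def add_friend(self, name: str, animal: Animal) -> None:
        self.friends[name] = animal

    def call_friend(self, name: str, type_: Type[T]) -> T:
        friend = self.friends[name]
        if not isinstance(friend, type_):  # the runtime check the token enables
            raise TypeError(f"{name} is not a {type_.__name__}")
        return friend

jerry = Mouse()
jerry.add_friend("spike", Dog())
jerry.add_friend("quacker", Duck())
print(jerry.call_friend("spike", Dog).bark())
print(jerry.call_friend("quacker", Duck).quack())
```

As in the Java version, the caller still has to know the right type; the token just turns a wrong guess into an immediate, explicit error.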
In JUnit 3, I could get the name of the currently running test like this:
```
public class MyTest extends TestCase
{
public void testSomething()
{
System.out.println("Current test is " + getName());
...
}
}
```
which would print "Current test is testSomething".
Is there any out-of-the-box or simple way to do this in JUnit 4?
Background: Obviously, I don't want to just print the name of the test. I want to load test-specific data that is stored in a resource with the same name as the test. You know, [convention over configuration](http://en.wikipedia.org/wiki/Convention_over_Configuration) and all that. | JUnit 4.7 added this feature it seems using [TestName-Rule](https://github.com/junit-team/junit/wiki/Rules#testname-rule). Looks like this will get you the method name:
```
import org.junit.Rule;
public class NameRuleTest {
@Rule public TestName name = new TestName();
@Test public void testA() {
assertEquals("testA", name.getMethodName());
}
@Test public void testB() {
assertEquals("testB", name.getMethodName());
}
}
``` | # JUnit 4.9.x and higher
Since JUnit 4.9, the [`TestWatchman`](http://junit.org/javadoc/latest/org/junit/rules/TestWatchman.html) class has been deprecated in favour of the [`TestWatcher`](http://junit.org/javadoc/latest/org/junit/rules/TestWatcher.html) class, which has invocation:
```
@Rule
public TestRule watcher = new TestWatcher() {
protected void starting(Description description) {
System.out.println("Starting test: " + description.getMethodName());
}
};
```
Note: The containing class must be declared `public`.
# JUnit 4.7.x - 4.8.x
The following approach will print method names for all tests in a class:
```
@Rule
public MethodRule watchman = new TestWatchman() {
public void starting(FrameworkMethod method) {
System.out.println("Starting test: " + method.getName());
}
};
``` | Get name of currently executing test in JUnit 4 | [
"",
"java",
"unit-testing",
"junit",
""
] |
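For comparison, Python's unittest kept the JUnit 3-style affordance, so the same convention-over-configuration trick needs no rule object. A sketch (`self.id()` is the public API; `_testMethodName` is technically an implementation detail):

```python
import unittest

class ResourceNamedTest(unittest.TestCase):
    def test_something(self):
        # The running test's name, available for loading a same-named resource.
        self.assertEqual(self._testMethodName, "test_something")
        self.assertTrue(self.id().endswith("test_something"))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ResourceNamedTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```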
just wondering if anyone could suggest why I might be getting an error? I'm currently trying to execute a macro in a workbook by calling the Application.Run method that the interop exposes.
It's currently throwing the following COM Exception:
```
{System.Runtime.InteropServices.COMException (0x800A03EC): Cannot run the macro Macro1'.
The macro may not be available in this workbook or all macros may be disabled.
```
I've put the containing workbook in a trusted location, set all of the security settings to their minimum values, and trusted programmatic access to my object model. I'm entirely out of ideas and Google's failed me so far!
Has anyone done this, or can you suggest anything which I could try?
Many thanks! | It turned out that there were actually two identically-named macros which had been injected into the workbook. The exception above translates to a "Compile Error: Ambiguous name detected" error in VBA - I should have gone straight into VBA to look at this, rather than assuming that something odd was happening with the interop. | If the macro is stored in the personal workbook, you may have to reference the workbook name: personal.xls!macroname | Execute VBA Macro via C# Interop? | [
"",
"c#",
"vba",
"interop",
""
] |
I'm working on a simple tool that transfers files to a hard-coded location with the password also hard-coded. I'm a python novice, but thanks to ftplib, it was easy:
```
import ftplib
info= ('someuser', 'password') #hard-coded
def putfile(file, site, dir, user=(), verbose=True):
"""
upload a file by ftp to a site/directory
login hard-coded, binary transfer
"""
if verbose: print 'Uploading', file
local = open(file, 'rb')
remote = ftplib.FTP(site)
remote.login(*user)
remote.cwd(dir)
remote.storbinary('STOR ' + file, local, 1024)
remote.quit()
local.close()
if verbose: print 'Upload done.'
if __name__ == '__main__':
site = 'somewhere.com' #hard-coded
dir = './uploads/' #hard-coded
import sys, getpass
putfile(sys.argv[1], site, dir, user=info)
```
The problem is that I can't find any library that supports sFTP. What's the normal way to do something like this securely?
Edit: Thanks to the answers here, I've gotten it working with Paramiko and this was the syntax.
```
import paramiko
host = "THEHOST.com" #hard-coded
port = 22
transport = paramiko.Transport((host, port))
password = "THEPASSWORD" #hard-coded
username = "THEUSERNAME" #hard-coded
transport.connect(username = username, password = password)
sftp = paramiko.SFTPClient.from_transport(transport)
import sys
path = './THETARGETDIRECTORY/' + sys.argv[1] #hard-coded
localpath = sys.argv[1]
sftp.put(localpath, path)
sftp.close()
transport.close()
print 'Upload done.'
```
Thanks again! | [Paramiko](https://www.paramiko.org/) supports SFTP. I've used it, and I've used Twisted. Both have their place, but you might find it easier to start with Paramiko. | You should check out pysftp (<https://pypi.python.org/pypi/pysftp>). It depends on Paramiko, but wraps the most common use cases in just a few lines of code.
```
import pysftp
import sys
path = './THETARGETDIRECTORY/' + sys.argv[1] #hard-coded
localpath = sys.argv[1]
host = "THEHOST.com" #hard-coded
password = "THEPASSWORD" #hard-coded
username = "THEUSERNAME" #hard-coded
with pysftp.Connection(host, username=username, password=password) as sftp:
sftp.put(localpath, path)
print 'Upload done.'
``` | SFTP in Python? (platform independent) | [
"",
"python",
"sftp",
""
] |
Are there any classes/functions written in PHP, publicly available, that will take a timestamp and return the time passed since then in number of days, months, years etc.? Basically I want the same function that generates the time-since-posted presented together with each entry on this site (and on Digg and loads of other sites). | This is written as a WordPress plugin but you can extract the relevant PHP code no problem: [Fuzzy date-time](http://www.splee.co.uk/2005/04/21/fuzzy-datetime-v-06-beta/) | Here is a Zend Framework ViewHelper I wrote to do this, you could easily modify this to not use the ZF specific code:
```
/**
* @category View_Helper
* @package Custom_View_Helper
* @author Chris Jones <leeked@gmail.com>
* @license New BSD License
*/
class Custom_View_Helper_HumaneDate extends Zend_View_Helper_Abstract
{
/**
* Various time formats
*/
private static $_time_formats = array(
array(60, 'just now'),
array(90, '1 minute'), // 60*1.5
array(3600, 'minutes', 60), // 60*60, 60
array(5400, '1 hour'), // 60*60*1.5
array(86400, 'hours', 3600), // 60*60*24, 60*60
array(129600, '1 day'), // 60*60*24*1.5
array(604800, 'days', 86400), // 60*60*24*7, 60*60*24
array(907200, '1 week'), // 60*60*24*7*1.5
array(2628000, 'weeks', 604800), // 60*60*24*(365/12), 60*60*24*7
array(3942000, '1 month'), // 60*60*24*(365/12)*1.5
array(31536000, 'months', 2628000), // 60*60*24*365, 60*60*24*(365/12)
array(47304000, '1 year'), // 60*60*24*365*1.5
array(3153600000, 'years', 31536000), // 60*60*24*365*100, 60*60*24*365
);
/**
* Convert date into a pretty 'human' form
* Now with microformats!
*
* @param string|Zend_Date $date_from Date to convert
* @return string
*/
public function humaneDate($date_from)
{
$date_to = new Zend_Date(null, Zend_Date::ISO_8601);
if (!($date_from instanceof Zend_Date)) {
$date_from = new Zend_Date($date_from, Zend_Date::ISO_8601);
}
$dateTo = $date_to->getTimestamp(); // UnixTimestamp
$dateFrom = $date_from->getTimestamp(); // UnixTimestamp
$difference = $dateTo - $dateFrom;
$message = '';
if ($dateFrom <= 0) {
$message = 'a long time ago';
} else {
foreach (self::$_time_formats as $format) {
if ($difference < $format[0]) {
if (count($format) == 2) {
$message = $format[1] . ($format[0] === 60 ? '' : ' ago');
break;
} else {
$message = ceil($difference / $format[2]) . ' ' . $format[1] . ' ago';
break;
}
}
}
}
return sprintf('<abbr title="%sZ">%s</abbr>',
$date_from->get('YYYY-MM-ddTHH:mm:ss'),
$message
);
}
}
``` | Calculate time difference between two dates, and present the answer like "2 days 3 hours ago" | [
"",
"php",
"date",
""
] |
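Both answers above reduce to a threshold table: find the first bucket the elapsed time fits in, then format with that bucket's unit. A language-neutral sketch of the same bucketing in Python (thresholds and wording are illustrative, not either plugin's exact output):

```python
def time_since(seconds):
    """Render an elapsed number of seconds as a fuzzy 'time ago' string."""
    buckets = [
        (60,        lambda s: "just now"),
        (3600,      lambda s: "%d minutes ago" % (s // 60)),
        (86400,     lambda s: "%d hours ago" % (s // 3600)),
        (2592000,   lambda s: "%d days ago" % (s // 86400)),     # ~30-day month
        (31536000,  lambda s: "%d months ago" % (s // 2592000)),
    ]
    for limit, render in buckets:
        if seconds < limit:
            return render(seconds)
    return "%d years ago" % (seconds // 31536000)

print(time_since(45))
print(time_since(7200))
print(time_since(200000))
```

In production you would also handle singular/plural wording and clock skew (negative deltas), which the table approach accommodates by adding rows.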
I'm wanting to test that a class has an EventHandler assigned to an event. Basically I'm using my IoC Container to hook up EventHandlers for me, and I'm wanting to check they get assigned properly. So really, I'm testing my IoC config.
```
[Test]
public void create_person_event_handler_is_hooked_up_by_windsor()
{
IChangePersonService changePersonService = IoC.Resolve<IChangePersonService>();
// check that changePersonService.PersonCreated has a handler attached
}
```
I'm not sure how to test that changePersonService.PersonCreated has anything attached to it though.
Any ideas?
Thanks. | Not questioning what you intend with this: the only way of testing and enumerating the registered event handlers is to register them in your own collection.
See this example:
```
public class MyChangePersonService : IChangePersonService
{
private IList<EventHandler> handlers;
private EventHandler _personEvent;
public event EventHandler PersonCreated
{
add
{
_personEvent += value;
handlers.Add(value);
}
remove
{
_personEvent -= value;
handlers.Remove(value);
}
}
public IList<EventHandler> PersonEventHandlers { get { return handlers; } }
public MyChangePersonService()
{
handlers = new List<EventHandler>();
}
public void FirePersonEvent()
{
_personEvent(this, null);
}
}
```
Then you could access the registered handlers with prop `PersonEventHandlers`.
Can you implement something like this? | It looks to me like you're attempting to unit test Castle Windsor. Since it probably already has unit tests, I think this is a waste of effort. You'd be better off testing that your object raises the appropriate events at the correct times (probably by registering a mock object as the event handler). | Testing that an event has an EventHandler assigned | [
"",
"c#",
"unit-testing",
"events",
"event-handling",
""
] |
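The accepted answer's idea, keeping subscriptions in a collection the test can inspect, is language-agnostic. A minimal Python sketch (all names are hypothetical):

```python
class ChangePersonService:
    """Event publisher that keeps its handlers in an inspectable list."""

    def __init__(self):
        self.person_created_handlers = []

    def add_person_created(self, handler):
        self.person_created_handlers.append(handler)

    def fire_person_created(self, person):
        for handler in self.person_created_handlers:
            handler(person)

received = []
service = ChangePersonService()
service.add_person_created(received.append)

# The wiring test reduces to an assertion on the list...
assert len(service.person_created_handlers) == 1

# ...and firing the event proves the handler actually runs.
service.fire_person_created("alice")
print(received)
```

This also illustrates the second answer's point: asserting that the handler list is populated tests your wiring, while asserting on `received` tests behaviour, and the latter is usually the more valuable check.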
I want all buttons to perform an action before and after their normal onclick event. So I came up with the "brilliant" idea of looping through all those elements and creating a wrapper function.
This appeared to work pretty well when I tested it, but when I integrated it into our app, it fell apart. I traced it down to the 'this' value was changed by my wrapper. The sample code illustrates this; before you wrap the event handlers, each button displays the button id when you click, but after wrapping it the displayed name is 'undefined' in this example, or 'Form1' if you run it from within a form.
Does anybody know either a better way to do the same thing? Or a good way to maintain the originally intended 'this' values?
As you can imagine, I don't want to modify any of the existing event handler code in the target buttons.
Thanks in advance.
PS - The target browser is IE6 & up; cross-browser functionality is not required.
```
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<script language="javascript" type="text/javascript">
function btnWrap_onClick()
{
var btns = document.getElementsByTagName("button");
for( var i = 0; i < btns.length; i++)
{
var btn = btns[i];
// handle wrap button differently
if( "btnWrap" == btn.id)
{
btn.disabled = true;
continue; // skip this button
}
// wrap it
var originalEventHandler = btn.onclick;
btn.onclick = function()
{
alert("Starting event handler");
originalEventHandler();
alert("Finished event handler");
}
}
alert("Buttons wrapped successfully");
}
</script>
<body>
<p>
<button id="TestButton1" onclick="alert(this.id);">TestButton1</button>
<button id="TestButton2" onclick="alert(this.id);">TestButton2</button>
</p>
<button id="btnWrap" onclick="btnWrap_onClick();">Wrap Event Handlers</button>
</body>
</html>
``` | Like [Paul Dixon](https://stackoverflow.com/users/6521/) said, you could use [call](http://msdn.microsoft.com/en-us/library/wt1h2e5c.aspx) but I suggest you use [apply](http://msdn.microsoft.com/en-us/library/4zc42wh1(VS.85).aspx) instead.
However, the reason I am answering is that I found a disturbing bug: **You are actually replacing all your event handlers with the event handler of the last button.** I don't think that was what you intended, was it? (Hint: You are replacing the value for originalEventHandler in each iteration)
In the code below you find a working cross-browser solution:
```
function btnWrap_onClick()
{
var btns = document.getElementsByTagName("button");
for( var i = 0; i < btns.length; i++)
{
var btn = btns[i];
// handle wrap button differently
if( "btnWrap" == btn.id)
{
btn.disabled = true;
continue; // skip this button
}
// wrap it
var newOnClick = function()
{
alert("Starting event handler");
var src=arguments.callee;
src.original.apply(src.source,arguments);
alert("Finished event handler");
}
newOnClick.original = btn.onclick; // Save original onClick
newOnClick.source = btn; // Save source for "this"
btn.onclick = newOnClick; //Assign new handler
}
alert("Buttons wrapped successfully");
}
```
First I create a new anonymous function and store that in the variable *newOnClick*. Since a function is an object I can create properties on the function object like any other object. I use this to create the property *original* that is the original onclick-handler, and *source* that is the source element that will be the *this* when the original handler is called.
Inside the anonymous function I need to get a reference to the function itself to be able to read the properties *original* and *source*. Since the anonymous function doesn't have a name, I use [arguments.callee](http://msdn.microsoft.com/en-us/library/s4esdbwz(VS.85).aspx) (which has been supported since MSIE 5.5) to get that reference and store it in the variable src.
Then I use the method *apply* to execute the original onclick handler. *apply* takes two parameters: the first is going to be the value of *this*, and the second is an array of arguments. *this* has to be the element to which the original onclick handler was attached, and that value was saved in *source*. *arguments* is an internal property of all functions and holds all the arguments the function was called with (notice that the anonymous function doesn't have any parameters specified, but if it is called with some parameters anyway, they will be found in the *arguments* property).
The reason I use *apply* is that I can forward all the arguments that the anonymous function was called with, and this makes the function transparent and cross-browser. (Microsoft puts the event in window.event but the other browsers supply it in the first parameter of the handler call) | You can use the [call](https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/Function/call) method to resolve the binding, e.g. `originalEventHandler.call(btn);`
Alternatively, a library like prototype can help - its [bind](http://www.prototypejs.org/api/function/bind) method lets you build a new function bound to a specified object, so you'd have declared originalEventHandler as `var originalEventHandler = btn.onclick.bind(btn);`
Finally, for a good backgrounder on binding issues, see also [Getting Out of Binding Situations in JavaScript](http://www.alistapart.com/articles/getoutbindingsituations) | Is there anyway to prevent 'this' from changing, when I wrap a function? | [
"",
"javascript",
"binding",
""
] |
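For contrast, the same before/after wrapping is straightforward in a language with bound methods, because the receiver travels with the function. A Python sketch of the idea (names are hypothetical):

```python
class Button:
    def __init__(self, button_id):
        self.id = button_id

    def on_click(self):
        return self.id          # the analogue of `this.id` in the question

def wrap(handler, log):
    def wrapper(*args, **kwargs):
        # `handler` is a bound method, so it already carries its instance;
        # this is what apply/call has to reattach by hand for `this` in JS.
        log.append("before")
        result = handler(*args, **kwargs)
        log.append("after")
        return result
    return wrapper

events = []
button = Button("TestButton1")
button.on_click = wrap(button.on_click, events)  # instance attribute shadows the method
print(button.on_click())
print(events)
```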
I would like to call a C# web service using PHP. Does anyone know how to do it? Thanks | Create a SOAP XML document that matches up with the WSDL and send it via HTTP POST.
See [here for an example](http://www.w3schools.com/webservices/tempconvert.asmx?op=CelsiusToFahrenheit).
You send this:
```
POST /webservices/tempconvert.asmx HTTP/1.1
Host: www.w3schools.com
Content-Type: application/soap+xml; charset=utf-8
Content-Length: length
<?xml version="1.0" encoding="utf-8"?>
<soap12:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap12="http://www.w3.org/2003/05/soap-envelope">
<soap12:Body>
<CelsiusToFahrenheit xmlns="http://tempuri.org/">
<Celsius>string</Celsius>
</CelsiusToFahrenheit>
</soap12:Body>
</soap12:Envelope>
```
And get this back:
```
HTTP/1.1 200 OK
Content-Type: application/soap+xml; charset=utf-8
Content-Length: length
<?xml version="1.0" encoding="utf-8"?>
<soap12:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap12="http://www.w3.org/2003/05/soap-envelope">
<soap12:Body>
<CelsiusToFahrenheitResponse xmlns="http://tempuri.org/">
<CelsiusToFahrenheitResult>string</CelsiusToFahrenheitResult>
</CelsiusToFahrenheitResponse>
</soap12:Body>
</soap12:Envelope>
``` | There's no such thing as a "C# Web Service". What you mean is an XML Web Service based on SOAP remote calls and WSDL for description.
Indeed, all SOAP services are supposed to be interoperable, be it .NET, PHP or Java. But in practice, minor incompatibilities make it harder.
There are many different SOAP libraries for PHP, but for connecting to an ASP.NET XML Web Service from PHP, [nuSOAP](http://sourceforge.net/projects/nusoap/) gave the best results for me. It's basically a set of PHP classes for consuming SOAP-based web services. The simplest client code looks like this:
```
<?php
// Pull in the NuSOAP code
require_once('nusoap.php');
// Create the client instance
$client = new soapclient('http://localhost/phphack/helloworld.php');
// Call the SOAP method
$result = $client->call('hello', array('name' => 'Scott'));
// Display the result
print_r($result);
?>
```
See <http://www.scottnichol.com/nusoapintro.htm> for more examples. | Call C# Web Service with using PHP | [
"",
"php",
""
] |
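Since the request above is plain XML over HTTP POST, any XML library can assemble the envelope before sending it. A sketch of building the same payload (Python and the variable names are my choice for illustration, not part of the answer):

```python
import xml.etree.ElementTree as ET

SOAP12 = "http://www.w3.org/2003/05/soap-envelope"
TNS = "http://tempuri.org/"

ET.register_namespace("soap12", SOAP12)
ET.register_namespace("", TNS)   # default namespace, like the sample request

envelope = ET.Element("{%s}Envelope" % SOAP12)
body = ET.SubElement(envelope, "{%s}Body" % SOAP12)
call = ET.SubElement(body, "{%s}CelsiusToFahrenheit" % TNS)
ET.SubElement(call, "{%s}Celsius" % TNS).text = "37"

payload = ET.tostring(envelope, encoding="unicode")
print(payload)
# POSTing `payload` with Content-Type: application/soap+xml is all that remains.
```

Building the tree programmatically beats string templates once the parameters come from user input, since the library handles escaping and namespaces.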
I have models for `Books`, `Chapters` and `Pages`. They are all written by a `User`:
```
from django.db import models
class Book(models.Model):
    author = models.ForeignKey('auth.User')

class Chapter(models.Model):
    author = models.ForeignKey('auth.User')
    book = models.ForeignKey(Book)

class Page(models.Model):
    author = models.ForeignKey('auth.User')
    book = models.ForeignKey(Book)
    chapter = models.ForeignKey(Chapter)
```
What I'd like to do is duplicate an existing `Book` and update its `User` to someone else. The wrinkle is I would also like to duplicate all model instances related to the `Book` - all its `Chapters` and `Pages` as well!
Things get really tricky when you look at a `Page` - not only will the new `Pages` need to have their `author` field updated but they will also need to point to the new `Chapter` objects!
Does Django support an out of the box way of doing this? What would a generic algorithm for duplicating a model look like?
Cheers,
John
---
Update:
The classes given above are just an example to illustrate the problem I'm having! | This no longer works in Django 1.3 as CollectedObjects was removed. See [changeset 14507](http://code.djangoproject.com/changeset/14507)
[I posted my solution on Django Snippets.](http://www.djangosnippets.org/snippets/1282/) It's based heavily on the [`django.db.models.query.CollectedObject`](http://code.djangoproject.com/browser/django/trunk/django/db/models/query.py#L33) code used for deleting objects:
```
from django.db.models.query import CollectedObjects
from django.db.models.fields.related import ForeignKey
def duplicate(obj, value, field):
"""
Duplicate all related objects of `obj` setting
`field` to `value`. If one of the duplicate
objects has an FK to another duplicate object
update that as well. Return the duplicate copy
of `obj`.
"""
collected_objs = CollectedObjects()
obj._collect_sub_objects(collected_objs)
related_models = collected_objs.keys()
root_obj = None
# Traverse the related models in reverse deletion order.
for model in reversed(related_models):
# Find all FKs on `model` that point to a `related_model`.
fks = []
for f in model._meta.fields:
if isinstance(f, ForeignKey) and f.rel.to in related_models:
fks.append(f)
# Replace each `sub_obj` with a duplicate.
sub_obj = collected_objs[model]
for pk_val, obj in sub_obj.iteritems():
for fk in fks:
fk_value = getattr(obj, "%s_id" % fk.name)
# If this FK has been duplicated then point to the duplicate.
if fk_value in collected_objs[fk.rel.to]:
dupe_obj = collected_objs[fk.rel.to][fk_value]
setattr(obj, fk.name, dupe_obj)
# Duplicate the object and save it.
obj.id = None
setattr(obj, field, value)
obj.save()
if root_obj is None:
root_obj = obj
return root_obj
```
**For Django >= 2** there should be some minimal changes, so the output will be like this:
```
def duplicate(obj, value=None, field=None, duplicate_order=None):
"""
Duplicate all related objects of obj setting
field to value. If one of the duplicate
objects has an FK to another duplicate object
update that as well. Return the duplicate copy
of obj.
duplicate_order is a list of models which specify how
the duplicate objects are saved. For complex objects
this can matter. Check to save if objects are being
saved correctly and if not just pass in related objects
in the order that they should be saved.
"""
from django.db.models.deletion import Collector
from django.db.models.fields.related import ForeignKey
collector = Collector(using='default')
collector.collect([obj])
collector.sort()
related_models = collector.data.keys()
data_snapshot = {}
for key in collector.data.keys():
data_snapshot.update(
{key: dict(zip([item.pk for item in collector.data[key]], [item for item in collector.data[key]]))})
root_obj = None
# Sometimes it's good enough just to save in reverse deletion order.
if duplicate_order is None:
duplicate_order = reversed(related_models)
for model in duplicate_order:
# Find all FKs on model that point to a related_model.
fks = []
for f in model._meta.fields:
if isinstance(f, ForeignKey) and f.remote_field.related_model in related_models:
fks.append(f)
# Replace each `sub_obj` with a duplicate.
if model not in collector.data:
continue
sub_objects = collector.data[model]
for obj in sub_objects:
for fk in fks:
fk_value = getattr(obj, "%s_id" % fk.name)
# If this FK has been duplicated then point to the duplicate.
fk_rel_to = data_snapshot[fk.remote_field.related_model]
if fk_value in fk_rel_to:
dupe_obj = fk_rel_to[fk_value]
setattr(obj, fk.name, dupe_obj)
# Duplicate the object and save it.
obj.id = None
if field is not None:
setattr(obj, field, value)
obj.save()
if root_obj is None:
root_obj = obj
return root_obj
``` | Here's an easy way to copy your object.
Basically:
(1) set the id of your original object to None:
`book_to_copy.id = None`
(2) change the 'author' attribute and save the object:
`book_to_copy.author = new_author`
`book_to_copy.save()`
(3) INSERT performed instead of UPDATE
(It doesn't address changing the author in the Page--I agree with the comments regarding re-structuring the models) | Duplicating model instances and their related objects in Django / Algorithm for recursively duplicating an object | [
"",
"python",
"django",
"django-models",
"duplicates",
""
] |
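The core move in both answers above — copy objects in dependency order while remapping foreign keys to the freshly created duplicates — can be sketched without Django at all. Below is an illustrative model using plain dicts; `duplicate_graph` and the integer-id scheme are invented for this example:

```python
import copy

def duplicate_graph(objects, fk_fields, start_id=1000):
    """Clone dicts given in dependency order (parents first),
    remapping any FK field that points at an already-cloned object."""
    id_map = {}            # original id -> duplicate id
    clones = []
    next_id = start_id
    for obj in objects:
        clone = copy.deepcopy(obj)
        id_map[obj["id"]] = next_id
        clone["id"] = next_id
        next_id += 1
        for fk in fk_fields:
            if fk in clone and clone[fk] in id_map:
                # Point at the duplicate, not the original
                clone[fk] = id_map[clone[fk]]
        clones.append(clone)
    return clones

book = {"id": 1, "author": "alice"}
chapter = {"id": 2, "author": "alice", "book_id": 1}
page = {"id": 3, "author": "alice", "book_id": 1, "chapter_id": 2}
copies = duplicate_graph([book, chapter, page], ["book_id", "chapter_id"])
```

The Django snippets above do exactly this, except that the ordering comes from `CollectedObjects`/`Collector` and the new ids come from `obj.save()`.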
I learned today that there are [digraphs](https://en.cppreference.com/w/c/language/operator_alternative#Alternative_tokens.28C95.29) in C99 and C++. The following is a valid program:
```
%:include <stdio.h>
%:ifndef BUFSIZE
%:define BUFSIZE 512
%:endif
void copy(char d<::>, const char s<::>, int len)
<%
while (len-- >= 0)
<%
d<:len:> = s<:len:>;
%>
%>
```
My question is: why do they exist? | Digraphs were created for programmers whose keyboards only supported a national ISO 646 character set variant, which lacks C punctuation such as `{`, `}`, `[`, `]` and `#`.
<http://en.wikipedia.org/wiki/C_trigraph> | I believe that their existence can be traced back to the *possibility* that somewhere, somebody is using a compiler with an operating system whose character set is so archaic that it doesn't necessarily have all the characters that C or C++ need to express the whole language.
Also, it makes for good entries in the [IOCCC](http://www.ioccc.org). | Why are there digraphs in C and C++? | [
"",
"c++",
"c",
"c99",
"language-design",
"digraphs",
""
] |
I'm writing a php app to access a MySQL database, and on a tutorial, it says something of the form
```
mysql_connect($host, $user, $pass) or die("could not connect");
```
How does PHP know that the function failed so that it runs the die part? I guess I'm asking how the "or" part of it works. I don't think I've seen it before. | If the first statement returns `true`, then the entire statement must be `true`, so the second part is never executed.
For example:
```
$x = 5;
true or $x++;
echo $x; // 5
false or $x++;
echo $x; // 6
```
Therefore, if your query is unsuccessful, it will evaluate the `die()` statement and end the script. | PHP's **`or`** works like C's **`||`** (which incidentally is also supported by PHP - **`or`** just looks nicer and has different operator precedence - see [this page](http://www.php.net/manual/en/language.operators.precedence.php)).
It's known as a [short-circuit](http://en.wikipedia.org/wiki/Short-circuit_evaluation) operator because it will skip any evaluations once it has enough information to decide the final value.
In your example, if `mysql_connect()` returns TRUE, then PHP already knows that the whole statement will evaluate to TRUE no matter what `die()` evalutes to, and hence `die()` isn't evaluated.
If `mysql_connect()` returns FALSE, PHP doesn't know whether the whole statement will evaluate to TRUE or FALSE so it goes on and tries to evalute `die()` - ending the script in the process.
It's just a nice trick that takes advantage of the way **`or`** works. | How does "do something OR DIE()" work in PHP? | [
"",
"php",
"mysql",
"exception",
"conditional-statements",
""
] |
I have a python application that relies on a file that is downloaded by a client from a website.
The website is not under my control and has no API to check for a "latest version" of the file.
Is there a simple way to access the file (in Python) via a URL and check its date (or size) without having to download it to the client's machine each time?
**update:** Thanks to those who mentioned the "last-modified" date. This is the correct parameter to look at.
I guess I didn't state the question well enough. How do I do this from a python script? I want the application to check the file and then download it if (last-modified date < current file date). | Check the [Last-Modified](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.29) header.
EDIT: Try [urllib2](http://www.voidspace.org.uk/python/articles/urllib2.shtml#introduction).
EDIT 2: This [short tutorial](http://www.artima.com/forums/flat.jsp?forum=122&thread=15024) should give you a pretty good feel for accomplishing your goal. | There is no reliable way to do this. For all you know, the file can be created on the fly by the web server and the question "how old is this file" is not meaningful. The webserver may choose to provide Last-Modified header, but it could tell you whatever it wants. | How can I get the created date of a file on the web (with Python)? | [
"",
"python",
"http",
""
] |
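A sketch of the check described above, using the modern `urllib.request` rather than the `urllib2` the answer links to. The helper names are invented here, and `fetch_last_modified` is defined but not called, since it needs a live URL:

```python
import os
from email.utils import parsedate_to_datetime
from urllib.request import Request, urlopen

def fetch_last_modified(url):
    """Issue a HEAD request so only the headers travel, not the file body."""
    req = Request(url, method="HEAD")
    with urlopen(req) as resp:
        return resp.headers["Last-Modified"]

def remote_is_newer(last_modified_header, local_path):
    """True when the server copy is newer than the local file
    (or the local file does not exist yet)."""
    if not os.path.exists(local_path):
        return True
    remote_ts = parsedate_to_datetime(last_modified_header).timestamp()
    return remote_ts > os.path.getmtime(local_path)
```

As the rejected answer points out, the header is only as trustworthy as the server that sends it.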
Does anyone know if the IsNullOrEmpty bug is fixed in 3.0 or later? I recently came across the (NullReferenceException) bug in 2.0 and I have found documentation stating that it is supposed to be fixed in the next release, but no definitive answer. | I found some [info](http://msdn.microsoft.com/en-us/library/system.string.isnullorempty(VS.80).aspx) on the matter:
> This bug has been fixed in the
> Microsoft .NET Framework 2.0 Service
> Pack 1 (SP1). | Works with .NET 3.5SP1. Test program for those who want to try it (mostly taken from bug report):
```
using System;
class Test
{
static void Main(string[] args)
{
Console.WriteLine("starting");
ShowBug(null);
Console.WriteLine("finished");
Console.ReadLine();
}
static void ShowBug(string x)
{
for (int j = 0; j < 10; j++)
{
if (String.IsNullOrEmpty(x))
{
//TODO:
}
}
}
}
```
Compile with /o+ /debug- from the command line. | Is the IsNullOrEmpty bug fixed in .NET 3.0 or later? | [
"",
"c#",
"clr",
"jit",
"nullreferenceexception",
""
] |
I am about to start a personal project using Python and I will be using it on both Linux (Fedora) and Windows (Vista), although I might as well make it work on a Mac while I'm at it. I have found an API for the GUI that will work on all 3. The reason I am asking is because I have always heard of small differences that are easily avoided if you know about them before starting. Does anyone have any tips or suggestions that fall along these lines? | In general:
* Be careful with paths. Use os.path wherever possible.
* Don't assume that HOME points to the user's home/profile directory.
* Avoid using things like unix-domain sockets, fifos, and other POSIX-specific stuff.
More specific stuff:
* If you're using wxPython, note that there may be differences in things like which thread certain events are generated in. Don't assume that events are generated in a specific thread. If you're calling a method which triggers a GUI-event, don't assume that event-handlers have completed by the time your method returns. (And vice versa, of course.)
* There are always differences in how a GUI will appear. Layouts are not always implemented in the exact same way. | Some things I've noticed in my cross platform development in Python:
* OSX doesn't have a tray, so application notifications usually happen right in the dock. So if you're building a background notification service you may need a small amount of platform-specific code.
* os.startfile() apparently only works on Windows. Either that or Python 2.5.1 on Leopard doesn't support it.
* os.normpath() is something you might want to consider using too, just to keep your paths and volumes using the correct slash notation and volume names.
* icons are dealt with in fundamentally different ways in Windows and OSX, be sure you provide icons at all the right sizes for both (16x16, 24x24, 32x32, 48x48, 64x64, 128x128 and 256x256) and be sure to read up on setting up icons with wx widgets. | Python and different Operating Systems | [
"",
"python",
"cross-platform",
""
] |
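A runnable sketch of the path-handling tips from both answers (the file and directory names here are made up for illustration):

```python
import os
import sys

# Build paths with os.path.join instead of hard-coding '/' or '\\'
config_dir = os.path.join(os.path.expanduser("~"), ".myapp")
config_file = os.path.join(config_dir, "settings.ini")

# os.path.normpath cleans up doubled separators and '..' segments
clean = os.path.normpath("data//cache/../cache/file.txt")

# Branch on the platform only where behaviour genuinely differs,
# e.g. os.startfile exists on Windows only
if sys.platform.startswith("win"):
    opener = "os.startfile"
else:
    opener = "subprocess with open / xdg-open"
```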
So maybe I want to have values between -1.0 and 1.0 as floats. It's clumsy having to write this and bolt it on top using extension methods IMO. | ```
public static double NextDouble(this Random rnd, double min, double max){
return rnd.NextDouble() * (max - min) + min;
}
```
Call with:
```
double x = rnd.NextDouble(-1, 1);
```
to get a value in the range `-1 <= x < 1` | There's a [`.NextDouble()`](http://msdn.microsoft.com/en-us/library/system.random.nextdouble.aspx) method as well that does almost exactly what you want- just 0 to 1.0 instead of -1.0 to 1.0. | Why doesn't the Random.Next method in .NET support floats/double? | [
"",
"c#",
".net",
"math",
""
] |
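The accepted extension method just rescales `NextDouble()`'s `[0, 1)` output into `[min, max)`. The same trick shown in Python for illustration — `random.random()` has the same half-open range; this is not part of the C# answers:

```python
import random

def next_double(rnd, lo, hi):
    # random() is uniform on [0, 1), so this is uniform on [lo, hi)
    return rnd.random() * (hi - lo) + lo

rnd = random.Random(42)            # seeded for reproducibility
samples = [next_double(rnd, -1.0, 1.0) for _ in range(1000)]
```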
Since Actionscript is a proper superset of javascript, it should I suppose be possible.
Do you use/have you used any of the the javascript extension libraries with Actionscript/Flex/Air? | I ported some stuff from prototype.js to AS:
<http://wiki.alcidesfonseca.com/hacks/prototypeas> | The language itself is a proper superset, but the underlying API is not at all the same. The problems that jQuery and its ilk solve won't be useful for you in ActionScript, so you won't really get much from dropping them into your Flash/Flex project directly. Most things are already well covered by the default Flash/Flex API, so that probably explain the dearth of add-on libraries. I would, however, recommend [as3corelib](http://code.google.com/p/as3corelib/z) for processing JSON and handling other data in your ActionScript 3 projects.
Thanks - I'm already onto as3corelib for JSON. | Do you use jQuery, extJS, or other javascript libraries with Actionscript and Flex? | [
"",
"javascript",
"jquery",
"apache-flex",
"actionscript",
"extjs",
""
] |
Everyone's saying "Contract-First" approach to design WS is more inclined to SOA style design. Now, if we take the available open-source frameworks available to achieve that we have **Spring-ws** and also **Axis2**(which supports both styles). I have a task to design SOA based e-commerce app. where loose coupling, quick response, security and scalability are the key points. So it is very important to choose the right framework from the start.
Based on past experiences, which of them or something else do you guys think to be a more appropriate option for my requirements. | That is a tough question.
I have used Axis2 in the past but am relatively new to Spring WS. What I do like about Spring WS is the options I get with respect to which APIs I use to handle my incoming and outgoing requests (XmlBeans, JDOM, Castor etc.) and the excellent integration with a Spring-based stack.
You mentioned the Contract First approach. I am not sure if Axis 2 has something like this but Spring WS has a schema to wsdl generator. You can see an example of this here:
<http://static.springsource.org/spring-ws/sites/1.5/reference/html/tutorial.html>
Both the frameworks offer all that you ask for in terms of features such as loose coupling, response, scalability etc. Spring-ws may also offer good integration with Acegi as far as I think but I have really not dived deep into that topic. | For contract first I'd recommend using JAX-WS. Either [CXF](http://cxf.apache.org/), [JAX-WS RI](https://jax-ws.dev.java.net/) or [Metro](https://metro.dev.java.net/) ([Metro](https://metro.dev.java.net/) = JAX-WS RI + WSIT) seem to be the best implementations around that can take any WSDL contract and generate the POJOs (or vice versa). | Spring-ws or Axis2 or Something else for "Contract-First" approach to WS | [
"",
"java",
"web-services",
"soa",
"apache-axis",
"spring-ws",
""
] |
I've got a very strange bug cropping up in some PHP code I've got. The page is managing student enrolments in courses. On the page is a table of the student's courses, and each row has a number of dates: when they enrolled, when they completed, when they passed the assessment and when they picked up their certificate.
The table data is generated by PHP (drawing the data from the DB), and Javascript actually renders the table. The output from PHP is JS code which looks something like this:
```
var e = new Enrolment();
e.contactId = 5801;
e.enrolId = 14834;
e.courseId = 3;
e.dateEnrolled = new Date(1219672800000);
e.dateCompleted = new Date(-1000); // magic value meaning they haven't completed.
e.resultDate = new Date(1223647200000);
e.certDate = new Date(1223560800000);
e.result = 95;
e.passed = true;
enrolments[14834] = e;
```
In the database, all the date fields are stored as `DATE` (not `DATETIME`) fields.
The bug is that the dates are being displayed as one day off. I would suspect that this has a lot to do with the server being in an area which has daylight saving, whereas here there isn't any (meaning the server time is one hour off). This explains a lot, especially how the data preparation and rendering is being done in two different timezones. That is: the server is saying to the client that the person completed at midnight on the 15th August, and the client is interpreting that as 11pm on the 14th and therefore displaying 14th August.
But here's the confusing part: it's only doing that for the resultDate and certDate fields! I've copied the data to my local server and have found that the production server is actually sending a different timestamp (one which is off by 1 hour) just for those two fields, whereas the dateEnrolled field is the same.
Here's the output using the exact same code and data from the database:
```
// local server (timezone GMT+1000)
e.dateEnrolled = new Date(1219672800000); // 26 Aug 2008 00:00 +10:00
e.dateCompleted = new Date(-1000);
e.resultDate = new Date(1223647200000); // 11 Oct 2008 00:00 +10:00
e.certDate = new Date(1223560800000); // 10 Oct 2008 00:00 +10:00
// production server (timezone GMT+1100)
e.dateEnrolled = new Date(1219672800000); // 26 Aug 2008 00:00 +10:00
e.dateCompleted = new Date(-1000);
e.resultDate = new Date(1223643600000); // 10 Oct 2008 23:00 +10:00 **
e.certDate = new Date(1223557200000); // 09 Oct 2008 23:00 +10:00 **
```
I can understand if this was a problem with Daylight Saving not being accounted for, but notice how the dateEnrolled is the same?
The PHP code which converts the MySQL date to a unix timestamp is this:
```
list ($year, $month, $day) = explode ('-', $mysqlDT);
$timestamp = mktime (0,0,0, $month, $day, $year);
```
Any ideas about how to fix this? | That's because you use mktime, which is local-timezone dependent: it converts the broken-down local time into seconds since 00:00:00 1970-1-1 GMT, and that conversion differs by 1 hour between the two timezones.
You should also remember that the JavaScript runs in the timezone of the browser, not that of the server which generated the page.
```
e.resultDate = new Date(year, month - 1, day);
```
This will make sure the date is the same for every viewer from every timezone.
Or you can use gmmktime and use the UTC methods in Date. | It is most likely to be a daylight saving issue.
The reason it only happens for resultDate and certDate is that dateEnrolled is in August; daylight saving normally begins/ends in late September or early October. | PHP date issues with daylight saving | [
"",
"php",
"mysql",
"date",
"dst",
""
] |
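The accepted answer's distinction — `mktime` interprets a broken-down time in the *server's local* zone, while `gmmktime` pins it to UTC — exists in Python too, which makes it easy to illustrate. The tuple below is the question's 2008-10-11 result date; this snippet is illustrative and not part of the answers:

```python
import calendar
import time

date_tuple = (2008, 10, 11, 0, 0, 0, 0, 0, -1)

# UTC-based, identical everywhere: 1223683200
utc_midnight = calendar.timegm(date_tuple)
# Local-zone based: shifts with the machine's timezone setting
local_midnight = time.mktime(date_tuple)

# A server in GMT+11 and a client in GMT+10 disagree by exactly
# this kind of offset, which is what produced the one-day shift
offset_hours = (local_midnight - utc_midnight) / 3600
```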
I am wondering if there is away (possibly a better way) to order by the order of the values in an IN() clause.
The problem is that I have 2 queries, one that gets all of the IDs and the second that retrieves all the information. The first creates the order of the IDs which I want the second to order by. The IDs are put in an IN() clause in the correct order.
So it'd be something like (extremely simplified):
```
SELECT id FROM table1 WHERE ... ORDER BY display_order, name
SELECT name, description, ... WHERE id IN ([id's from first])
```
The issue is that the second query does not return the results in the same order that the IDs are put into the IN() clause.
One solution I have found is to put all of the IDs into a temp table with an auto incrementing field which is then joined into the second query.
Is there a better option?
**Note:** As the first query is run "by the user" and the second is run in a background process, there is no way to combine the 2 into 1 query using sub queries.
I am using MySQL, but I'm thinking it might be useful to have it noted what options there are for other DBs as well. | Use MySQL's [`FIELD()`](http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_field) function:
```
SELECT name, description, ...
FROM ...
WHERE id IN([ids, any order])
ORDER BY FIELD(id, [ids in order])
```
`FIELD()` will return the position of the first parameter within the list of the remaining parameters.
`FIELD('a', 'a', 'b', 'c')`
will return 1
`FIELD('a', 'c', 'b', 'a')`
will return 3
This will do exactly what you want if you paste the ids into the `IN()` clause and the `FIELD()` function in the same order. | See the following for how to get sorted data.
```
SELECT ...
FROM ...
WHERE zip IN (91709,92886,92807,...,91356)
AND user.status=1
ORDER
BY provider.package_id DESC
, FIELD(zip,91709,92886,92807,...,91356)
LIMIT 10
``` | Ordering by the order of values in a SQL IN() clause | [
"",
"mysql",
"sql",
"sql-order-by",
""
] |
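If you are on a database without a `FIELD()` equivalent, the same reordering can be done client-side after fetching. A Python sketch, with an invented helper name:

```python
def order_rows_like_ids(rows, ids):
    """Sort fetched rows so they follow the id order of the first query."""
    rank = {id_: i for i, id_ in enumerate(ids)}   # id -> position, O(1) lookups
    return sorted(rows, key=lambda row: rank[row["id"]])

ids = [7, 2, 9]   # order produced by the first query
rows = [{"id": 2, "name": "b"}, {"id": 9, "name": "c"}, {"id": 7, "name": "a"}]
ordered = order_rows_like_ids(rows, ids)
```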
What is the recommended method for escaping variables before inserting them into the database in Java?
As I understand, I can use PreparedStatement.setString() to escape the data, but PreparedStatement seems somewhat impractical if I don't plan to run the same query ever again. Is there a better way to do it without preparing every query? | Yes, use prepared statements for everything.
1. They're parsed once.
2. They're immune from SQL injection attacks.
3. They're a better design because you have to think about your SQL and how it's used.
If you think they're only used once, you aren't looking at the big picture. Some day, your data or your application will change.
---
**Edit.**
Why do prepared statements make you think about your SQL?
* When you assemble a string (or simply execute a literal block of text) you aren't creating a new `PreparedStatement` object. You're just executing SQL -- it can be done very casually.
* When you have to create (and save) a `PreparedStatement`, you have to think just a tiny bit more about encapsulation, allocation of responsibility. The preparation of a statement is a stateful event prior to doing any SQL processing.
The extra work is small, but not insignificant. It's what causes people to start thinking about ORM and a data caching layer, and things like that to optimize their database access.
With Prepared statements, database access is less casual, more intentional. | You should never construct a SQL query yourself using string concatenation. You should never manually escape variables/user data when constructing a SQL query. The actual escaping that is required varies depending on your underlying database, and at some point somebody WILL forget to escape.
The point is this: with prepared statements, it is *impossible* to create a SQL injectable statement. With custom escaping, it is possible. The choice is obvious. | Should I be using PreparedStatements for all my database inserts in Java? | [
"",
"java",
"mysql",
"connection",
"prepared-statement",
""
] |
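The advice above is not Java-specific — every mainstream driver exposes the same placeholder binding. A quick illustration with Python's built-in sqlite3 (not from the answers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

evil = "Robert'); DROP TABLE users;--"

# The '?' placeholder is bound by the driver, like setString() in JDBC:
# the value is stored verbatim and never parsed as SQL.
conn.execute("INSERT INTO users (name) VALUES (?)", (evil,))

stored = conn.execute("SELECT name FROM users").fetchone()[0]
```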
Is there a quick way to determine whether you are using certain namespaces in your application? I want to remove all the unnecessary using statements like using System.Reflection and so on, but I need a way to determine if I am using those libraries or not. I know that the tool Resharper does this for you, but is there a quick, dirty, and free way to do this? | Visual Studio 2008 will also do this for you, right click in your class file and select "Organize Usings" -> "Remove and Sort". | Visual Studio 2008 + [PowerCommands](http://code.msdn.microsoft.com/PowerCommands) = Remove and Sort usings across a whole solution. | Is there a quick way to remove using statements in C#? | [
"",
"c#",
"visual-studio-2008",
"namespaces",
"using-statement",
""
] |
Alright, so I just finished my last compiler error (so I thought) and these errors came up:
```
1>GameEngine.obj : error LNK2001: unresolved external symbol "public: static double WeaponsDB::PI" (?PI@WeaponsDB@@2NA)
1>Component.obj : error LNK2001: unresolved external symbol "public: static double WeaponsDB::PI" (?PI@WeaponsDB@@2NA)
1>Coordinate.obj : error LNK2019: unresolved external symbol "public: static double WeaponsDB::PI" (?PI@WeaponsDB@@2NA) referenced in function "public: double __thiscall Coordinate::distanceFrom(class Coordinate *)" (?distanceFrom@Coordinate@@QAENPAV1@@Z)
1>Driver.obj : error LNK2001: unresolved external symbol "public: static double WeaponsDB::PI" (?PI@WeaponsDB@@2NA)
1>Environment.obj : error LNK2001: unresolved external symbol "public: static double WeaponsDB::PI" (?PI@WeaponsDB@@2NA)
1>Environment.obj : error LNK2001: unresolved external symbol "public: static bool Environment::spyFlag" (?spyFlag@Environment@@2_NA)
1>Environment.obj : error LNK2001: unresolved external symbol "private: static class Environment * Environment::instance_" (?instance_@Environment@@0PAV1@A)
1>Environment.obj : error LNK2019: unresolved external symbol "public: static void __cdecl Environment::spyAlertOver(void)" (?spyAlertOver@Environment@@SAXXZ) referenced in function "public: void __thiscall Environment::notificationOfSpySuccess(void)" (?notificationOfSpySuccess@Environment@@QAEXXZ)
1>GameDriver.obj : error LNK2019: unresolved external symbol "public: static void __cdecl MainMenu::gameOver(int)" (?gameOver@MainMenu@@SAXH@Z) referenced in function "public: static void __cdecl GameDriver::run(void)" (?run@GameDriver@@SAXXZ)
1>GameDriver.obj : error LNK2019: unresolved external symbol "public: static void __cdecl GameDriver::gatherInput(void)" (?gatherInput@GameDriver@@SAXXZ) referenced in function "public: static void __cdecl GameDriver::run(void)" (?run@GameDriver@@SAXXZ)
1>GameDriver.obj : error LNK2019: unresolved external symbol "public: static void __cdecl GameDriver::ticker(void)" (?ticker@GameDriver@@SAXXZ) referenced in function "public: static void __cdecl GameDriver::run(void)" (?run@GameDriver@@SAXXZ)
1>GameDriver.obj : error LNK2001: unresolved external symbol "public: static int GameDriver::ticks" (?ticks@GameDriver@@2HA)
1>GameDriver.obj : error LNK2001: unresolved external symbol "public: static bool GameDriver::evaluatingInputFlag" (?evaluatingInputFlag@GameDriver@@2_NA)
1>GameDriver.obj : error LNK2001: unresolved external symbol "public: static bool GameDriver::keyQuitFlag" (?keyQuitFlag@GameDriver@@2_NA)
1>GameDriver.obj : error LNK2001: unresolved external symbol "public: static bool GameDriver::keyToggleWeaponRightFlag" (?keyToggleWeaponRightFlag@GameDriver@@2_NA)
1>GameDriver.obj : error LNK2001: unresolved external symbol "public: static bool GameDriver::keyToggleWeaponLeftFlag" (?keyToggleWeaponLeftFlag@GameDriver@@2_NA)
1>GameDriver.obj : error LNK2001: unresolved external symbol "public: static bool GameDriver::keyFireFlag" (?keyFireFlag@GameDriver@@2_NA)
1>GameDriver.obj : error LNK2001: unresolved external symbol "public: static bool GameDriver::keyLeftFlag" (?keyLeftFlag@GameDriver@@2_NA)
1>GameDriver.obj : error LNK2001: unresolved external symbol "public: static bool GameDriver::keyRightFlag" (?keyRightFlag@GameDriver@@2_NA)
1>GameDriver.obj : error LNK2001: unresolved external symbol "public: static bool GameDriver::keyUpFlag" (?keyUpFlag@GameDriver@@2_NA)
1>GameDriver.obj : error LNK2001: unresolved external symbol "public: static bool GameDriver::keyDownFlag" (?keyDownFlag@GameDriver@@2_NA)
1>GUI_Env.obj : error LNK2001: unresolved external symbol "private: static struct BITMAP * GUI_Env::buffer" (?buffer@GUI_Env@@0PAUBITMAP@@A)
1>GUI_Info.obj : error LNK2001: unresolved external symbol "private: static struct BITMAP * GUI_Info::buffer" (?buffer@GUI_Info@@0PAUBITMAP@@A)
1>MenuDriver.obj : error LNK2019: unresolved external symbol "public: static void __cdecl MainMenu::displayMenu(void)" (?displayMenu@MainMenu@@SAXXZ) referenced in function "public: static void __cdecl MenuDriver::start(void)" (?start@MenuDriver@@SAXXZ)
1>SpaceObjectFactory.obj : error LNK2001: unresolved external symbol "private: static class SpaceObjectFactory * SpaceObjectFactory::_instance" (?_instance@SpaceObjectFactory@@0PAV1@A)
1>Spy.obj : error LNK2019: unresolved external symbol "public: virtual bool __thiscall UnFormationable::sameTypeOfSpaceObjectAs(class SpaceObject *)" (?sameTypeOfSpaceObjectAs@UnFormationable@@UAE_NPAVSpaceObject@@@Z) referenced in function "public: virtual bool __thiscall Spy::sameTypeOfSpaceObjectAs(class SpaceObject *)" (?sameTypeOfSpaceObjectAs@Spy@@UAE_NPAVSpaceObject@@@Z)
1>WeaponsDB.obj : error LNK2001: unresolved external symbol "private: static class WeaponsDB * WeaponsDB::_instance" (?_instance@WeaponsDB@@0PAV1@A)
1>C:\Users\Owner\Desktop\Bosconian\code\Bosconian\Debug\Bosconian.exe : fatal error LNK1120: 23 unresolved externals
```
Alright, here's a brief overview.
PI is a static constant in WeaponsDB and is referenced by other classes using WeaponsDB::PI and the appropriate #include (what's wrong with this?)
Most other errors stem from static variables and static methods for timers from the allegro gaming library.
What causes these errors and how might I get rid of them?
Thanks in advance
---
Edits:
As requested, here is where WeaponsDB::PI is declared and defined.
It is declared in WeaponsDB.h:
```
public:
static double PI;
```
But it is defined in another class Driver.cpp:
```
WeaponsDB::PI = 4*atan(1.0);
```
If this is one of the problems with my code I would love to know why this causes an error. | Most often, when the linker fails to find a static member it is because you forgot to actually define it somewhere, as was pointed out before:
```
// header
class X {
static const int y;
};
// cpp
const int X::y = 1;
```
But in your case, as you are not only missing static variables but also all the rest of the members, my bet is that you are not linking the implementation file(s) (.cpp) in your project. You must provide a compilation unit that defines the symbols that were declared in the header, and make the environment compile and link it together. If the symbols belong to a compilation unit in an external library (external to your current project) then you must remember to link with the library. | This
```
WeaponsDB::PI = 4*atan(1.0);
```
assigns a value to PI. It does not create space for it (does not define it).
This creates space for (defines) PI and assigns a value to (initializes) it.
```
double WeaponsDB::PI = 4*atan(1.0);
```
You should probably also mark PI as "static const" and not just "static". static makes it owned by the class instead of by the instance. const makes it immutable (and enables various optimizations).
You might also consider using `M_PI` from math.h instead of recreating the constant.
**Edit**: parenthetically added the more precise terms (defines, initializes) due to WP's comment. | error LNK2001 and error LNK2019 (C++) -- Requesting some learning about these errors | [
"",
"c++",
"linker-errors",
"lnk2019",
""
] |
Is it possible to get the name of the currently logged in user (Windows/Unix) and the hostname of the machine?
I assume it's just a property of some static environment class.
I've found this for the user name
```
com.sun.security.auth.module.NTSystem NTSystem = new
com.sun.security.auth.module.NTSystem();
System.out.println(NTSystem.getName());
```
and this for the machine name:
```
import java.net.InetAddress;
...
String computerName;
...
try {
computerName = InetAddress.getLocalHost().getHostName();
}
catch(Exception ex) {
...
}
```
Is the first one just for Windows?
And what will the second one do, if you don't have a hostname set? | To get the currently logged in user:
```
System.getProperty("user.name"); //platform independent
```
and the hostname of the machine:
```
java.net.InetAddress localMachine = java.net.InetAddress.getLocalHost();
System.out.println("Hostname of local machine: " + localMachine.getHostName());
``` | To get the currently logged in user:
```
System.getProperty("user.name");
```
To get the host name of the machine:
```
InetAddress.getLocalHost().getHostName();
```
To answer the last part of your question, the [Java API](http://docs.oracle.com/javase/6/docs/api/java/net/InetAddress.html#getHostName%28%29) says that getHostName() will return
> the host name for this IP address, or if the operation is not allowed by the security check, the textual representation of the IP address. | Java current machine name and logged in user? | [
"",
"java",
"environment",
""
] |
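For comparison only (the question is about Java): the Python standard library exposes the same two pieces of information. `getpass.getuser()` and `socket.gethostname()` are real stdlib calls; the fallback string is an invention here for robustness:

```python
import getpass
import socket

try:
    user = getpass.getuser()      # checks LOGNAME/USER/... then the account database
except Exception:
    user = "unknown"              # e.g. stripped-down containers with no passwd entry

host = socket.gethostname()       # like InetAddress.getLocalHost().getHostName()
```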
[Tony Andrews](https://stackoverflow.com/users/18747/tony-andrews) in another [question](https://stackoverflow.com/questions/461985/better-way-to-structure-a-pl-sql-if-then-statement) gave an example of:
```
IF p_c_courtesies_cd
|| p_c_language_cd
|| v_c_name
|| v_c_firstname
|| v_c_function
|| p_c_phone
|| p_c_mobile
|| p_c_fax
|| v_c_email is not null
THEN
-- Do something
END IF;
```
as a clever (if not a tad obscure) alternative to the Oracle COALESCE function. Sure enough, it works, if any argument is not null, the IF test is true. My question: Is Oracle's implementation of the above concatenation operation SQL-92 conforming? Shouldn't an expression involving a NULL evaluate to NULL? If you don't think so, then why should the expression 1 + NULL evaluate to NULL? | No, Oracle's treatment of nulls is idiosyncratic, different from everyone else's, and inconsistent with the ANSI standards. In Oracle's defence however, it probably settled on and was committed to this treatment long before there was an ANSI standard to be consistent with!
It all starts from the fact that Oracle stores strings with a character count followed by the string data. A NULL is represented by a character count of zero with no following string data - which is exactly the same as an empty string (''). Oracle simply doesn't have a way to distinguish them.
This leads to some quirky behaviour, such as this concatenation case. Oracle also has a function LENGTH to return the length of a string, but this has been defined in an opposite manner, so that LENGTH('') returns NULL not zero. So:
```
LENGTH('abc') + LENGTH('') IS NULL
LENGTH('abc' || '') = 3
```
which seems to me to violate basic mathematical principles.
Of course, Oracle developers become so used to this that many of us can't even see anything wrong or odd about it - some will in fact argue that the rest of the world is wrong and that an empty string and a NULL **are** the same thing! | @Nezroy: Thanks for the link. As I read the standard, however, I believe it states that Oracle's implementation is in fact, incorrect. Section 6.13, General Rules, item 2a:
```
2) If <concatenation> is specified, then let S1 and S2 be the re-
sult of the <character value expression> and <character factor>,
respectively.
Case:
a) If either S1 or S2 is the null value, then the result of the
<concatenation> is the null value.
``` | Is this implementation SQL-92 conformant? | [
"",
"sql",
"oracle",
"plsql",
""
] |
I want to match a portion of a string using a [regular expression](https://en.wikipedia.org/wiki/Regular_expression) and then access that parenthesized substring:
```
var myString = "something format_abc"; // I want "abc"
var arr = /(?:^|\s)format_(.*?)(?:\s|$)/.exec(myString);
console.log(arr); // Prints: [" format_abc", "abc"] .. so far so good.
console.log(arr[1]); // Prints: undefined (???)
console.log(arr[0]); // Prints: format_undefined (!!!)
```
What am I doing wrong?
---
I've discovered that there was nothing wrong with the regular expression code above: the actual string which I was testing against was this:
```
"date format_%A"
```
Reporting that "%A" is undefined seems a very strange behaviour, but it is not directly related to this question, so I've opened a new one, *[Why is a matched substring returning "undefined" in JavaScript?](https://stackoverflow.com/questions/432826/why-is-a-matched-substring-returning-undefined-in-javascript)*.
---
The issue was that `console.log` takes its parameters like a `printf` statement, and since the string I was logging (`"%A"`) had a special value, it was trying to find the value of the next parameter. | ## Update: 2019-09-10
The old way to iterate over multiple matches was not very intuitive. This led to the proposal of the [`String.prototype.matchAll`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/matchAll) method. This new method is in the [ECMAScript 2020 specification](https://tc39.es/ecma262/#sec-string.prototype.matchall). It gives us a clean API and solves multiple problems. It has been available in major browsers and JS engines since [Chrome 73+ / Node 12+](https://v8.dev/features/string-matchall) and Firefox 67+.
The method returns an iterator and is used as follows:
```
const string = "something format_abc";
const regexp = /(?:^|\s)format_(.*?)(?:\s|$)/g;
const matches = string.matchAll(regexp);
for (const match of matches) {
console.log(match);
console.log(match.index)
}
```
As it returns an iterator, we can say it's lazy, which is useful when handling particularly large numbers of capturing groups or very large strings. But if you need it, the result can easily be transformed into an Array by using the *spread syntax* or the `Array.from` method:
```
function getFirstGroup(regexp, str) {
const array = [...str.matchAll(regexp)];
return array.map(m => m[1]);
}
// or:
function getFirstGroup(regexp, str) {
return Array.from(str.matchAll(regexp), m => m[1]);
}
```
In the meantime, while this proposal gets more wide support, you can use the [official shim package](https://www.npmjs.com/package/string.prototype.matchall).
Also, the internal workings of the method are simple. An equivalent implementation using a generator function would be as follows:
```
function* matchAll(str, regexp) {
const flags = regexp.global ? regexp.flags : regexp.flags + "g";
const re = new RegExp(regexp, flags);
let match;
while (match = re.exec(str)) {
yield match;
}
}
```
A copy of the original regexp is created; this is to avoid side-effects due to the mutation of the `lastIndex` property when going through the multiple matches.
Also, we need to ensure the regexp has the *global* flag to avoid an infinite loop.
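To make that concrete, here is a self-contained copy of the generator together with a small usage example (the sample string and pattern are made up for illustration):

```javascript
function* matchAll(str, regexp) {
  // Clone the regexp and force the global flag, as described above
  const flags = regexp.global ? regexp.flags : regexp.flags + "g";
  const re = new RegExp(regexp, flags);
  let match;
  while (match = re.exec(str)) {
    yield match;
  }
}

// Collect the first capturing group of every match
const groups = [...matchAll("format_abc format_def", /format_(\w+)/g)].map(m => m[1]);
console.log(groups); // ["abc", "def"]
```

Because the generator works on a clone, the caller's regexp object keeps its original `lastIndex`.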
I'm also happy to see that even this StackOverflow question was referenced in the [discussions of the proposal](https://github.com/tc39/proposal-string-matchall#previous-discussions).
## original answer
You can access capturing groups like this:
```
var myString = "something format_abc";
var myRegexp = /(?:^|\s)format_(.*?)(?:\s|$)/g;
var myRegexp = new RegExp("(?:^|\\s)format_(.*?)(?:\\s|$)", "g");
var matches = myRegexp.exec(myString);
console.log(matches[1]); // abc
```
And if there are multiple matches you can iterate over them:
```
var myString = "something format_abc";
var myRegexp = new RegExp("(?:^|\\s)format_(.*?)(?:\\s|$)", "g");
match = myRegexp.exec(myString);
while (match != null) {
// matched text: match[0]
// match start: match.index
// capturing group n: match[n]
console.log(match[0])
match = myRegexp.exec(myString);
}
``` | Here’s a method you can use to get the *n*th capturing group for each match:
```
function getMatches(string, regex, index) {
index || (index = 1); // default to the first capturing group
var matches = [];
var match;
while (match = regex.exec(string)) { // the regex must have the /g flag, or this will loop forever
matches.push(match[index]);
}
return matches;
}
// Example :
var myString = 'something format_abc something format_def something format_ghi';
var myRegEx = /(?:^|\s)format_(.*?)(?:\s|$)/g;
// Get an array containing the first capturing group for every match
var matches = getMatches(myString, myRegEx, 1);
// Log results
document.write(matches.length + ' matches found: ' + JSON.stringify(matches))
console.log(matches);
``` | How do you access the matched groups in a JavaScript regular expression? | [
"",
"javascript",
"regex",
""
] |
According to the [official (gregorian) calendar](http://www.timeanddate.com/calendar/custom.html?year=2008&country=22&month=12&typ=2&months=2&display=0&space=0&fdow=1&wno=1&hol=), the week number for 29/12/2008 is 1, because after the last day of week 52 (i.e. 28/12) there are three or less days left in the year. Kinda weird, but OK, rules are rules.
So according to this calendar, we have these boundary values for 2008/2009
* 28/12 is week 52
* 29/12 is week 1
* 1/1 is week 1
* 8/1 is week 2
C# offers a GregorianCalendar class, that has a function `GetWeekOfYear(date, rule, firstDayOfWeek)`.
The parameter `rule` is an enumeration with 3 possible values: `FirstDay, FirstFourDayWeek, FirstFullWeek`. From what I've understood I should go for the `FirstFourDayWeek` rule, but I tried all of them just in case.
The last parameter informs which week day should be considered the first day of the week, according to that calendar it's Monday so Monday it is.
So I fired up a quick and dirty console app to test this:
```
using System;
using System.Globalization;
namespace CalendarTest
{
class Program
{
static void Main(string[] args)
{
var cal = new GregorianCalendar();
var firstWeekDay = DayOfWeek.Monday;
var twentyEighth = new DateTime(2008, 12, 28);
var twentyNinth = new DateTime(2008, 12, 29);
var firstJan = new DateTime(2009, 1, 1);
var eightJan = new DateTime(2009, 1, 8);
PrintWeekDays(cal, twentyEighth, firstWeekDay);
PrintWeekDays(cal, twentyNinth, firstWeekDay);
PrintWeekDays(cal, firstJan, firstWeekDay);
PrintWeekDays(cal, eightJan, firstWeekDay);
Console.ReadKey();
}
private static void PrintWeekDays(Calendar cal, DateTime dt, DayOfWeek firstWeekDay)
{
Console.WriteLine("Testing for " + dt.ToShortDateString());
Console.WriteLine("--------------------------------------------");
Console.Write(CalendarWeekRule.FirstDay.ToString() + "\t\t");
Console.WriteLine(cal.GetWeekOfYear(dt, CalendarWeekRule.FirstDay, firstWeekDay));
Console.Write(CalendarWeekRule.FirstFourDayWeek.ToString() + "\t");
Console.WriteLine(cal.GetWeekOfYear(dt, CalendarWeekRule.FirstFourDayWeek, firstWeekDay));
Console.Write(CalendarWeekRule.FirstFullWeek.ToString() + "\t\t");
Console.WriteLine(cal.GetWeekOfYear(dt, CalendarWeekRule.FirstFullWeek, firstWeekDay));
Console.WriteLine("--------------------------------------------");
}
}
}
```
... and this what I get
```
Testing for 28.12.2008
--------------------------------------------
FirstDay 52
FirstFourDayWeek 52
FirstFullWeek 51
--------------------------------------------
Testing for 29.12.2008
--------------------------------------------
FirstDay 53
FirstFourDayWeek 53
FirstFullWeek 52
--------------------------------------------
Testing for 01.01.2009
--------------------------------------------
FirstDay 1
FirstFourDayWeek 1
FirstFullWeek 52
--------------------------------------------
Testing for 08.01.2009
--------------------------------------------
FirstDay 2
FirstFourDayWeek 2
FirstFullWeek 1
--------------------------------------------
```
So as we see, none of the combinations above matches the official calendar (if you are in a hurry, just see that 29/12 never gets week #1).
What am I getting wrong here? Maybe there's something glaring that I am missing? (it's Friday and late work hours here in Belgium, bear with me ;))
Edit: Maybe I should explain: what I need is a function that works for any year, returning the same results as the gregorian calendar I linked. So no special workarounds for 2008. | This [article](http://blogs.msdn.com/shawnste/archive/2006/01/24/iso-8601-week-of-year-format-in-microsoft-net.aspx) looks deeper into the issue and possible workarounds. The hub of the matter is that the .NET calendar implementation does not seem to faithfully implement the ISO standard. | @Conrad is correct. The .NET implementation of DateTime and the GregorianCalendar don't implement/follow the full ISO 8601 spec. That being said, the spec is extremely detailed and non-trivial to implement fully, at least for the parsing side of things.
Some more information is available at the following sites:
* <http://www.iso.org/iso/en/prods-services/popstds/datesandtime.html>
* <http://www.w3.org/TR/NOTE-datetime>
* <http://www.cl.cam.ac.uk/~mgk25/iso-time.html>
* <http://www.cs.tut.fi/~jkorpela/iso8601.html>
* <http://hydracen.com/dx/iso8601.htm>
* <http://www.probabilityof.com/ISO8601.shtml>
* <http://www.mcs.vuw.ac.nz/technical/software/SGML/doc/iso8601/ISO8601.html>
Simply put:
A week is identified by its number in a given year and begins with a Monday. The first week of a year is the one which includes the first Thursday, or equivalently the one which includes January 4.
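As a quick sanity check of that rule — shown in Python rather than C#, purely for illustration, because Python's `datetime` implements ISO 8601 week numbering out of the box — the boundary dates from the question come out exactly as the official calendar says:

```python
from datetime import date

# Boundary dates from the question and their ISO 8601 week numbers
for d in (date(2008, 12, 28), date(2008, 12, 29), date(2009, 1, 1), date(2009, 1, 8)):
    iso_year, iso_week, iso_weekday = d.isocalendar()
    print(d, "-> ISO week", iso_week)
# 2008-12-28 -> ISO week 52
# 2008-12-29 -> ISO week 1
# 2009-01-01 -> ISO week 1
# 2009-01-08 -> ISO week 2
```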
Here is part of the code I use for properly handling ISO 8601 dates:
```
#region FirstWeekOfYear
/// <summary>
/// Gets the first week of the year.
/// </summary>
/// <param name="year">The year to retrieve the first week of.</param>
/// <returns>A <see cref="DateTime"/>representing the start of the first
/// week of the year.</returns>
/// <remarks>
/// Week 01 of a year is per definition the first week that has the Thursday
/// in this year, which is equivalent to the week that contains the fourth
/// day of January. In other words, the first week of a new year is the week
/// that has the majority of its days in the new year. Week 01 might also
/// contain days from the previous year and the week before week 01 of a year
/// is the last week (52 or 53) of the previous year even if it contains days
/// from the new year.
/// A week starts with Monday (day 1) and ends with Sunday (day 7).
/// </remarks>
private static DateTime FirstWeekOfYear(int year)
{
int dayNumber;
// Get the date that represents the fourth day of January for the given year.
DateTime date = new DateTime(year, 1, 4, 0, 0, 0, DateTimeKind.Utc);
// A week starts with Monday (day 1) and ends with Sunday (day 7).
// Since DayOfWeek.Sunday = 0, translate it to 7. All of the other values
// are correct since DayOfWeek.Monday = 1.
if (date.DayOfWeek == DayOfWeek.Sunday)
{
dayNumber = 7;
}
else
{
dayNumber = (int)date.DayOfWeek;
}
// Since the week starts with Monday, figure out what day that
// Monday falls on.
return date.AddDays(1 - dayNumber);
}
#endregion
#region GetIsoDate
/// <summary>
/// Gets the ISO date for the specified <see cref="DateTime"/>.
/// </summary>
/// <param name="date">The <see cref="DateTime"/> for which the ISO date
/// should be calculated.</param>
/// <returns>An <see cref="Int32"/> representing the ISO date.</returns>
private static int GetIsoDate(DateTime date)
{
DateTime firstWeek;
int year = date.Year;
// If we are near the end of the year, then we need to calculate
// what next year's first week should be.
if (date >= new DateTime(year, 12, 29))
{
if (date == DateTime.MaxValue)
{
firstWeek = FirstWeekOfYear(year);
}
else
{
firstWeek = FirstWeekOfYear(year + 1);
}
// If the current date is less than next years first week, then
// we are still in the last month of the current year; otherwise
// change to next year.
if (date < firstWeek)
{
firstWeek = FirstWeekOfYear(year);
}
else
{
year++;
}
}
else
{
// We aren't near the end of the year, so make sure
// we're not near the beginning.
firstWeek = FirstWeekOfYear(year);
// If the current date is less than the current years
// first week, then we are in the last month of the
// previous year.
if (date < firstWeek)
{
if (date == DateTime.MinValue)
{
firstWeek = FirstWeekOfYear(year);
}
else
{
firstWeek = FirstWeekOfYear(--year);
}
}
}
// return the ISO date as a numeric value, so it makes it
// easier to get the year and the week.
return (year * 100) + ((date - firstWeek).Days / 7 + 1);
}
#endregion
#region Week
/// <summary>
/// Gets the week component of the date represented by this instance.
/// </summary>
/// <value>The week, between 1 and 53.</value>
public int Week
{
get
{
return this.isoDate % 100;
}
}
#endregion
#region Year
/// <summary>
/// Gets the year component of the date represented by this instance.
/// </summary>
/// <value>The year, between 1 and 9999.</value>
public int Year
{
get
{
return this.isoDate / 100;
}
}
#endregion
``` | Is .NET giving me the wrong week number for Dec. 29th 2008? | [
"",
"c#",
".net",
"calendar",
"gregorian-calendar",
"week-number",
""
] |
Platform Builder is a tool for building a Windows CE Operating system on your computer and then loading it on a Windows CE device.
All this is done through Platform Builder. And I do it all through the Microsoft Visual Studio Development Environment (IDE).
I want to automate the process of using the Platform Builder. So far, I only know how to use the Platform Builder through the IDE. I want to use the Platform Builder through another program written in C#.
Another approach to this goal is to learn how to use Platform Builder from the command line. If it can be done by the command line, then it could also be done through a C# program. I have seen some links here and there that say that you can do a Platform Builder build from the command line, but so far, they are not so good.
Any tips?
This link:
<http://msdn.microsoft.com/en-us/library/ms924536.aspx>
talks about "How to Use the Command Line to Create, Customize, and Build a Run-Time Image"
but it has links about Creating ai OS Design ( <http://msdn.microsoft.com/en-us/library/aa448498.aspx> ) that requires using the IDE.
It would be great to find a good link about this topic. If I can do what I need to do using the IDE, I should be able to do what I need to do from the command line. If I can do what I need to do from the command line, I should be able to do what I need to do from a C# program. That is my goal. | [Command-Line Tools](http://msdn.microsoft.com/en-us/library/ms938334.aspx) contains a complete list of all the Platform Builder command line commands.
Note: "The IDE and command-line environments are independent of each other".
Also, you need to use the Visual Studio command line under "Start" / "All Programs" / ... otherwise the environment will be wrong. | This link mentions Windows CE Build Environment tool (Wince.bat) that sets up the Windows CE build environment.
<http://msdn.microsoft.com/en-us/library/ms938334.aspx>
And this link describes how to use the Windows CE Build Environment Tool:
<http://msdn.microsoft.com/en-us/library/ms930978.aspx>
Basically, when you use wince.bat, you pass it three arguments: CPU, PROJECT, and PLATFORM.
I have figured out what to input for the CPU and the PLATFORM. But what to use for the "PROJECT" has me a little puzzled. It seems that for this argument, you can give it any project name you want. This does not work. So I have tried the names of a few existing project folders. This does not work either. So now I am thinking that I am supposed to define a project name prior to making the wince.bat call. How do I do this? | Platform Builder and C# | [
"",
"c#",
"command-line",
"windows-ce",
"platform-builder",
""
] |
Apparently, encoding Japanese emails is somewhat challenging, which I am slowly discovering myself. In case there are any experts (even those with limited experience will do), can I please have some guidelines as to how to do it, how to test it and how to verify it?
Bear in mind that I've never set foot anywhere near Japan, it is simply that the product I'm developing is used there, among other places.
What (I think) I know so far is following:
- Japanese emails should be encoded in ISO-2022-JP, Japanese JIS codepage 50220 or possibly SHIFT\_JIS codepage 932
- Email transfer encoding should be set to Base64 for plain text and 7Bit for Html
- Email subject should be encoded separately to start with "=?ISO-2022-JP?B?" (don't know what this is supposed to mean). I've tried encoding the subject with
```
"=?ISO-2022-JP?B?" + Convert.ToBase64String(Encoding.Unicode.GetBytes(subject))
```
which basically gives the encoded string as expected but it doesn't get presented as any japanese text in an email program
- I've tested in Outlook 2003, Outlook Express and GMail
Any help would be greatly appreciated
---
Ok, so to post a short update, thanks to the two helpful answers, I've managed to get the right format and encoding. Now, Outlook gives something that resembles the correct subject:
`=?iso-2022-jp?B?6 Japanese test に各々の視点で語ってもらった。 6相当の防水?=`
However, the exact same email in Outlook Express gives subject like this:
`=?iso-2022-jp?B?6 Japanese test 縺ォ蜷・・・隕也せ縺ァ隱槭▲縺ヲ繧ゅi縺」縺溘・ 6逶ク蠖薙・髦イ豌エ?=`
Furthermore, when viewed in the Inbox view in Outlook Express, the email subject is even more weird, like this:
`=?iso-2022-jp?B?6 Japanese test ??????????????? 6???????=`
Gmail seems to be working in the similar fashion to Outlook, which looks correct.
I just can't get my head around this one. | I've been dealing with Japanese encodings for almost 20 years and so I can sympathize with your difficulties. Websites that I've worked on send hundreds of emails daily to Japanese customers so I can share with you what's worked for us.
* First of all, do not use Shift-JIS. I personally receive tons of Japanese emails and almost never are they encoded using Shift-JIS. I think an old (circa Win 98?) version of Outlook Express encoded outgoing mail using Shift-JIS, but nowadays you just don't see it.
* As you've figured out, you need to use ISO-2022-JP as your encoding for at least anything that goes in the mail header. This includes the Subject, To line, and CC line. UTF-8 will also work in most cases, *but* it will not work on Yahoo Japan mail, and as you can imagine, many Japanese users use Yahoo Japan mail.
* You can use UTF-8 in the body of the email, but it is recommended that you base64 encode the UTF-8 encoded Japanese text and put that in the body instead of raw UTF-8 text. However, in practice, I believe that raw UTF-8 text will work fine these days, for the body of the email.
* As I alluded to above, you need to at least test on Outlook (Exchange), Outlook Express (IMAP/POP3), and Yahoo Japan web mail. Yahoo Japan is the trickiest because I believe they use EUC for the encoding of their web pages, and so you need to follow the correct standards for your emails or they won't work (ISO-2022-JP is the standard for sending Japanese emails).
* Also, your subject line should not exceed 75 characters per line. That is, 75 characters *after* you've encoded in ISO-2022-JP and base64, not 75 characters before conversion. If you exceed 75 characters, you need to break your encoded subject into multiple lines, starting with "=?iso-2022-jp?B?" and ending with "?=" on each line. If you don't do this, your subject might get truncated (depending on the email reader, and also the content of your subject text). According to RFC 2047:
"An 'encoded-word' may not be more than 75 characters long, including 'charset', 'encoding', 'encoded-text', and delimiters. If it is desirable to encode more text than will fit in an 'encoded-word' of 75 characters, multiple 'encoded-word's (separated by CRLF SPACE) may be used."
* Here's some sample PHP code to encode the subject:
```
// Convert Japanese subject to ISO-2022-JP (JIS is essentially ISO-2022-JP)
$subject = mb_convert_encoding ($subject, "JIS", "SJIS");
// Now, base64 encode the subject
$subject = base64_encode ($subject);
// Add the encoding markers to the subject
$subject = "=?iso-2022-jp?B?" . $subject . "?=";
// Now, $subject can be placed as-is into the raw mail header.
```
* See RFC 2047 for a complete description of how to encode your email header. | Check <http://en.wikipedia.org/wiki/MIME#Encoded-Word> for a description on how to encode header fields in MIME-compliant messages. You seem to be missing a “?=” at the end of your subject. | japanese email subject encoding | [
"",
"c#",
"email",
"unicode",
"encoding",
""
] |
I am trying to return the result that I found in my query to the ASP.net table. How do I do that? I already have the query, I am just having trouble getting the count result back.
```
string connString = configMan.ConnString["connect"].ToString();
iDB2Conn temp = new iDB2Conn(connString);
string query = "select Count(*) as total from test";
...
```
this is where I am having trouble. | This is where the SqlCommand object comes in handy.
```
int result = 0;
using(SqlConnection conn = new SqlConnection(connectionString))
{
conn.Open();
SqlCommand sql = new SqlCommand("SELECT COUNT(*) FROM test", conn);
result = (int)sql.ExecuteScalar();
}
``` | In ADO.Net, the simplest way is to use the ExecuteScalar() method on your command which returns a single result. You don't explicitly list what database or connection method you are using, but I would expect that most database access methods have something equivalent to ExecuteScalar(). | Read single value from query result | [
"",
"c#",
"asp.net",
""
] |
So, I have a json file and I want to get the names of all the fields in the file using javascript. Is there a better way other than using eval and then using reflection on the resulting object? Or does that make no sense? Thanks. | You should do what some frameworks use, like:
<http://www.prototypejs.org/api/string/evalJSON>
It checks if the JSON is sanitized. But other than that its just using old plain eval() at the end. | Unless you want to parse the text somehow I would suggest the easiest way being to eval it and the loop though it to get all the field names. | Is it possible to get the "schema" of a json file? | [
"",
"javascript",
"json",
""
] |
How to change the row position of virtual mode DataGridView?
I am using [Windows Forms](http://en.wikipedia.org/wiki/Windows_Forms). | Marcus's answer is correct, but you may also need to set the DataGridView's current cell property...
```
dgv.CurrentCell = dgv.Rows[0].Cells[0];
```
I believe this will scroll the grid. Also, to be absolutely safe, you may want to add this before the other line of code...
```
dgv.CurrentCell = null;
```
This will ensure that if the row you want is already the active row but just scrolled out of view, it will scroll it back into view. | You have to clear the old position and set a new one
The collection dataGridView1.SelectedRows holds the currently selected rows. Depending on the MultiSelect property of the grid, you may have to loop through all the rows in SelectedRows and mark them as unselected. If you are in single-selection mode, just setting the new row as selected should clear the old selection.
To select a particular row (in this case the one at index 0) you just add the line
`dataGridView1.Rows[0].Selected = true;` | How to change the row position of virtual mode DataGridView? | [
"",
"c#",
"winforms",
"datagridview",
"scroll",
"virtualmode",
""
] |
Is there a O(1) way in windows API to concatenate 2 files?
O(1) with respect to not having to read in the entire second file and write it out to the file you want to append to. So as opposed to O(n) bytes processed.
I think this should be possible at the file system driver level, and I don't think there is a user mode API available for this, but I thought I'd ask. | If the "new file" is only going to be read by your application, then you can get away without actually concatenating them on disk.
You can just implement a stream interface that behaves as if the two files have been concatenated, and then use that stream instead of whatever default filestream implementation your app framework uses.
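A rough, platform-neutral sketch of that idea — reading several streams as if they were one concatenated file, with nothing copied or rewritten on disk (illustrative only; a full solution would also need to forward seeks to the right underlying file):

```cpp
#include <cassert>
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Drain a list of streams in order, as if they were one concatenated file.
std::string read_concatenated(const std::vector<std::istream*>& parts) {
    std::string out;
    char buf[4096];
    for (std::istream* s : parts) {
        // Append full buffers, plus the final partial read at EOF
        while (s->read(buf, sizeof buf) || s->gcount() > 0) {
            out.append(buf, static_cast<std::size_t>(s->gcount()));
        }
    }
    return out;
}
```

In practice the streams would be `std::ifstream`s opened on the two real files; nothing forces a copy until someone actually reads.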
If that won't work for you, and you are using Windows, you could always create a reparse point and a file system filter. I believe that if you create a "mini filter" it will run in user mode, but I'm not sure.
You can probably find more information about it here:
<http://www.microsoft.com/whdc/driver/filterdrv/default.mspx> | No, there isn't.
The best you could hope for is O(n), where n is the length of the shorter of the two files. | Is there a O(1) way in windows api to concatenate 2 files? | [
"",
"c++",
"winapi",
"visual-c++",
""
] |
I am currently trying to validate a form server-side. The way it works is that all the data is put into an array, with the form field as the key and the field data as the value. However, I need to check that every key has a value associated with it; otherwise I want the submission to stop so the user can edit their details and re-submit. Is there a quick check I can run, rather than breaking the array apart and checking each element, or checking each field before it is put in the array with a switch or lots of if statements? | One option is to do a bit of both.
Have a separate array of field options with the field name as the key.
```
$fieldTypes = array('nameFirst' => 'name',
'nameLast' => 'name',
'phone' => 'phone',
'email' => 'email');
foreach($input as $key => $value) {
switch($fieldTypes[$key]) {
case 'name':
//run $value against name validation
break;
case 'phone':
//run $value against phone validation
break;
case 'email':
//run $value against email validation
break;
default:
//code here for unknown type
}
}
```
Now, that could be used in any number of ways, and easily expanded to include things like if the field is required or not, or even error messages. By turning the `$fieldTypes` array into a multidimensional array, or an array of objects containing the data.
This way, if you decide to add a field, it would probably not involve much change in the validation code. | Sico87,
More often than not, you don't want to test all of the fields simultaneously. For instance, you may have a contact form with a phone-number field that isn't mandatory, and rejecting the submission for that reason would be problematic.
In many cases, it's easier to test your fields individually so you can give each the special attention it needs. You'll want to give special attention to email addresses, and special attention to birthdays.
Treating all equal could cause very serious problems in the long-run. | Validate each element in an array? | [
"",
"php",
"arrays",
"validation",
""
] |
I spend my days in vim, currently writing a lot of JavaScript. I've been trying to find a way to integrate JSLint or something similar into vim to improve my coding. Has anyone managed to do something like this?
I tried this: [Javascript Syntax Checking From Vim](http://mikecantelon.com/story/javascript-syntax-checking-vim), unfortunately the output is very crude. | You can follow the intructions from [JSLint web-service + VIM integration](http://wiki.whatwg.org/wiki/IDE) or do what I did:
Download <http://jslint.webvm.net/mylintrun.js> and <http://www.jslint.com/fulljslint.js> and put them in a directory of your choice.
Then add the following line to the beginning of mylintrun.js:
```
var filename= arguments[0];
```
and change last line of code in mylintrun.js ("print( ...)") to:
```
print ( filename + ":" + (obj["line"] + 1) + ":" + (obj["character"] + 1) + ":" + obj["reason"] );
```
This makes mylintrun.js output an error list that can be used with the VIM quickfix window (:copen).
Now set the following in VIM:
```
set makeprg=cat\ %\ \\\|\ /my/path/to/js\ /my/path/to/mylintrun.js\ %
set errorformat=%f:%l:%c:%m
```
where you have to change */my/path/to/js* to the path to SpiderMonkey and */my/path/to/mylintrun.js* to the path where you put the JS files.
Now, you can use **:make** in VIM and use the *quickfix* window (:he quickfix-window) to jump from error to error. | The best-practice way IMO is:
1. Install [Syntastic Vim plugin](https://www.vim.org/scripts/script.php?script_id=2736) - Best syntax-checker around for plenty of languages, plus it integrates with Vim's *location-list* (==*quickfix*) window.
* I recommend [cloning from the GitHub repo](https://github.com/vim-syntastic/syntastic) and installing using a plugin manager like [Vundle](https://github.com/VundleVim/Vundle.vim) or [Pathogen](https://www.vim.org/scripts/script.php?script_id=2332), since it's more frequently updated.
2. Choose one of the two options below:
## JSLint
1. Install `jsl` (JSLint executable) using your favorite package manager (Ubuntu's `apt-get`, Mac's [home `brew`](https://brew.sh/), etc.).
## Community-driven [jshint.com](https://jshint.com/) (better than JSLint)
1. Install *node.js* using your favorite package manager.
2. Install *Node Package Manager*: 'curl <https://npmjs.org/install.sh> | sh' **EDIT: npm IS PART OF node.js NOW**
* See [http://npmjs.org](https://www.npmjs.com/) for more info.
3. Install *jshint* globally: 'npm install jshint -g'
4. Put your jshint *config* file in your *$HOME* dir: '~/.jshintrc'
* [Here's JSHint-Node's example configuration file](https://github.com/jshint/jshint/blob/master/examples/.jshintrc), they pulled from mine and added more stuff.
* [Here's my original copy, which you can also start from](https://github.com/oryband/dotfiles/blob/master/jshintrc).
5. Overwrite *Syntastic*'s `syntax_checkers/javascript.vim` file with [this one](https://github.com/factorylabs/vimfiles/blob/master/home/.vim/syntax_checkers/javascript.vim) - **EDIT: NO LONGER NECESSARY WITH NEWEST SYNTASTIC VERSION.**
* Script [copied](https://github.com/factorylabs/vimfiles/blob/master/home/.vim/syntax_checkers/javascript.vim) from [FactorayLab's really helpful Vim GitHub page](https://github.com/factorylabs/vimfiles/), which I recommend every Vim user to check out.
Enjoy! :) | How to do JSLint in Vim | [
"",
"javascript",
"vim",
"lint",
""
] |
I'm looking for a way to convert numbers to string format, dropping any redundant '.0'
The input data is a mix of floats and strings. Desired output:
0 --> '0'
0.0 --> '0'
0.1 --> '0.1'
1.0 --> '1'
I've come up with the following generator expression, but I wonder if there's a faster way:
```
(str(i).rstrip('.0') if i else '0' for i in lst)
```
The truth check is there to prevent 0 from becoming an empty string.
EDIT: The more or less acceptable solution I have for now is this:
```
('%d'%i if i == int(i) else '%s'%i for i in lst)
```
It just seems strange that there is no elegant way to handle this (fairly straightforward) case in python. | ```
(str(i)[-2:] == '.0' and str(i)[:-2] or str(i) for i in ...)
``` | See [PEP 3101](http://www.python.org/dev/peps/pep-3101/):
```
'g' - General format. This prints the number as a fixed-point
number, unless the number is too large, in which case
it switches to 'e' exponent notation.
```
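Note the trade-off mentioned in that description: `g` keeps a limited number of significant digits (six by default) and falls back to exponent notation for larger values. A quick sketch:

```python
# '%g' keeps at most 6 significant digits by default, then falls back
# to exponent notation for larger values:
print("%g" % 1.0)         # 1
print("%g" % 0.1)         # 0.1
print("%g" % 10000000.0)  # 1e+07
```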
Old style (not preferred):
```
>>> "%g" % float(10)
'10'
```
New style:
```
>>> '{0:g}'.format(float(21))
'21'
```
New style 3.6+:
```
>>> f'{float(21):g}'
'21'
``` | dropping trailing '.0' from floats | [
"",
"python",
""
] |
I am writing a simple application that lets a user upload images. After the upload, the user can tag them or remove them.
I figured out how to upload the files and save them once the files are uploaded. I am keeping tracking of a global path where images are kept. In the database I keep the meta data about the images like file name, tags, etc.
I am using Java/JSP (specifically Stripes framework but my problem is generic).
My question is where do I keep these image files once they are uploaded?
Right now I have two web applications deployed on a Tomcat server. One main web application and other one is the where I upload the images.
But this does not work as I can not see the uploaded images in the main application until I redeploy/restart Tomcat.
It seems like Tomcat does not pick newly uploaded images automatically.
Does anyone have any solutions?
This is a simple project, so I do not want to store them in a database or use Apache for images. That is all just too complicated for this small project.
Thanks. | Definitely don't store the images in the database, but you will want to store the image path in the database. This will allow you to store the image just about anywhere.
Since you are using two tomcat applications, your best bet may be to store the images outside of either app and stream the image back to the user instead of letting tomcat manage the files. Otherwise, I would ask why you are trying to do this with two web apps. | However, storing uploaded images inside the web-app directory is not a wise thing to do, and you know it.
By the way, you might want to look at this [stackoverflow thread](https://stackoverflow.com/questions/348363/what-is-the-best-practice-for-storing-uploaded-images), which recently discussed where to store uploaded images. It might not solve your issue, but it will surely give you more confidence in what you are doing. | Java/JSP Image upload. Where to keep these image files? | [
"",
"java",
"jsp",
"file-upload",
"stripes",
""
] |
I would like to be able to start a second script (either PHP or Python) when a page is loaded and have it continue to run after the user cancels/navigates away. Is this possible? | You can send `Connection: Close` headers, which finishes the page for your user, but enables you to execute things "after the page loads".
There is a simple way to ignore user abort (see [php manual](http://de3.php.net/ignore_user_abort) too):
```
ignore_user_abort(true);
``` | Use [process forking with pcntl](http://www.electrictoolbox.com/article/php/process-forking/).
It only works under Unix operating systems, however.
You can also do something like this:
```
exec("/usr/bin/php ./child_script.php > /dev/null 2>&1 &");
```
You can read more about the above example [here](http://www.welldonesoft.com/technology/articles/php/forking/). | Running PHP after request | [
"",
"php",
""
] |
I want to make the line marked with // THIS LINE SHOULD BE PRINTING do its thing, which is print the int values between "synonyms" and "antonyms".
This is the text file:
dictionary.txt
```
1 cute
2 hello
3 ugly
4 easy
5 difficult
6 tired
7 beautiful
synonyms
1 7
7 1
antonyms
1 3
3 1 7
4 5
5 4
7 3
```

And this is my code:

```
#include <iostream>
#include <fstream>
#include <string>
#include <sstream>
#include <vector>
using namespace std;
class WordInfo{
public:
WordInfo(){}
~WordInfo() {
}
int id() const {return myId;}
void readWords(istream &in)
{
in>>myId>>word;
}
void pushSynonyms (string synline, vector <WordInfo> wordInfoVector)
{
stringstream synstream(synline);
vector<int> synsAux;
int num;
while (synstream >> num) synsAux.push_back(num);
for (int i=0; i<synsAux.size(); i++){
cout<<synsAux[i]<<endl; //THIS LINE SHOULD BE PRINTING
}
}
void pushAntonyms (string antline, vector <WordInfo> wordInfoVector)
{
}
//--dictionary output function
void printWords (ostream &out)
{
out<<myId<< " "<<word;
}
//--equals operator for String
bool operator == (const string &aString)const
{
return word ==aString;
}
//--less than operator
bool operator <(const WordInfo &otherWordInfo) const
{ return word<otherWordInfo.word;}
//--more than operator
bool operator > (const WordInfo &otherWordInfo)const
{return word>otherWordInfo.word;}
private:
vector <int> mySynonyms;
vector <int> myAntonyms;
string word;
int myId;
};
//--Definition of input operator for WordInfo
istream & operator >>(istream &in, WordInfo &word)
{
word.readWords(in);
}
//--Definition of output operator
ostream & operator <<(ostream &out, WordInfo &word)
{
word.printWords(out);
}
int main() {
string wordFile;
cout<<"enter name of dictionary file: ";
getline (cin,wordFile);
ifstream inStream (wordFile.data());
if(!inStream.is_open())
{
cerr<<"cannot open "<<wordFile<<endl;
exit(1);
}
vector <WordInfo> wordVector;
WordInfo aword;
while (inStream >>aword && (!(aword=="synonyms")))
{
wordVector.push_back(aword);
}
int i=0;
while (i<wordVector.size()){
cout<<wordVector[i]<<endl;
i++;
}
vector <int> intVector;
string aLine; //suspect
// bad statement?
while (getline(inStream, aLine)&&(aLine!=("antonyms"))){
aword.pushSynonyms(aLine, wordVector);
}
system("PAUSE");
return 0;
}
``` | The problem seems to be here:
```
in>>myId>>word;
```
On the "synonyms" line the extraction of `myId` fails and sets `failbit` on the stream, which causes the following extractions to also fail. You have to reset the error control state before extracting further elements (like the word "synonyms") from the stream:
```
in.clear();
``` | First, turn on compiler warnings. It may help you find some things that you think are OK but which really aren't. For example, functions with non-`void` return types should always return something. If they don't, then your program's behavior is undefined, and undefined behavior includes "working exactly as you wanted, except for some subtle difference later in the program." If you're using g++, the option for warnings is `-Wall`.
Second, note that it's not just the highlighted line that isn't running. The *entire `pushSynonyms` function* never gets called. Has your class covered how to use the debugger yet? If so, then consider using it. If not, then try putting some "`cout`" statements in your program so you can see exactly how far your program gets before it goes wrong.
Third, note that when a stream read failure occurs, the stream's fail bit is set. Until you clear it ([as shown by sth's answer](https://stackoverflow.com/questions/456357/c-ifstream-failure-why-is-this-line-not-going-where-its-supposed-to#456390)), no further extraction can occur from that stream, so all further uses of `>>` and `getline` will fail. | C++ ifstream failure, why is this line not going where it's supposed to? | [
"",
"c++",
"ifstream",
""
] |
Using Python's Imaging Library I want to create a PNG file.
I would like it if when printing this image, without any scaling, it would always print at a known and consistent 'size' on the printed page.
Is the resolution encoded in the image?
If so, how do I specify it?
And even if it is, does this have any relevance when it goes to the printer? | As of PIL 1.1.5, there is a way to get the DPI:
```
im = ... # get image into PIL image instance
dpi = im.info["dpi"] # retrieve the DPI
print dpi # (x-res, y-res)
im.info["dpi"] = (600, 600) # set a new DPI (x-res, y-res)
im.save("out.png", "PNG", dpi=im.info["dpi"]) # saves with the new DPI
``` | I found a very simple way to get dpi information into the png:
```
im.save('myfile.png', dpi=[600, 600])
```
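For reference, the physical print size then follows directly from pixels divided by DPI; a quick sketch with hypothetical numbers:

```python
width_px, height_px = 1200, 600  # hypothetical image dimensions
xdpi, ydpi = 600, 600            # the DPI saved above
print(width_px / xdpi, "x", height_px / ydpi, "inches")  # 2.0 x 1.0 inches
```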
Unfortunately I did not find this documented anywhere and had to dig into the PIL source code. | When printing an image, what determines how large it will appear on a page? | [
"",
"python",
"python-imaging-library",
"dpi",
""
] |
Debugging with gdb, any c++ code that uses STL/boost is still a nightmare. Anyone who has used gdb with STL knows this. For example, see sample runs of some debugging sessions in code [here](http://www.yolinux.com/TUTORIALS/GDB-Commands.html#STLDEREF).
I am trying to reduce the pain by collecting tips. Can you please comment on the tips I have collected below (particularly which ones you have been using and any changes you would recommend on them) - I have listed the tips is decreasing order of technicality.
* Is anyone using ["Stanford GDB STL utils"](http://www.stanford.edu/~afn/gdb_stl_utils/) and ["UCF GDB utils"](http://www.yolinux.com/TUTORIALS/src/dbinit_stl_views-1.01.txt)? Are there similar utils for boost data structures? The utils above do not seem to be usable recursively, for example for printing a vector of boost::shared\_ptr in a legible manner within one command.
* Write your .gdbinit file. Include, for example, C++ related beautifiers, listed at the bottom of UCF GDB utils.
* Use checked/debug STL/Boost library, such as STLport.
* Use logging (for example as described [here](http://www.ddj.com/cpp/201804215))
*Update*: GDB has a [new C++ branch](http://sourceware.org/gdb/wiki/ProjectArcher). | Maybe not the sort of "tip" you were looking for, but I have to say that my experience after a few years of moving from C++ & STL to C++ & boost & STL is that I now spend a *lot* less time in GDB than I used to. I put this down to a number of things:
* boost smart pointers (particularly "shared pointer", and the pointer containers when performance is needed). I can't remember the last time I had to write an explicit delete (delete is the "goto" of C++ IMHO). There goes a lot of GDB time tracking down invalid and leaking pointers.
* boost is full of proven code for things you'd probably hack together an inferior version of otherwise. e.g `boost::bimap` is great for the common pattern of LRU caching logic. There goes another heap of GDB time.
* Adopting unittesting. `boost::test`'s AUTO macros mean it's an absolute doddle to set up test cases ([easier than CppUnit](http://gamesfromwithin.com/?p=29)). This catches lots of stuff long before it gets built into anything you'd have to attach a debugger to.
* Related to that, tools like `boost::bind` make it easier to design-for-test. e.g algorithms can be more generic and less tied up with the types they operate on; this makes plugging them into test shims/proxies/mock objects etc easier (that and the fact that exposure to boost's template-tasticness will encourage you to "dare to template" things you'd never have considered before, yielding similar testing benefits).
* `boost::array`. "C array" performance, with range checking in debug builds.
* boost is full of great code you can't help but learn from | You might look at:
[Inspecting standard container (std::map) contents with gdb](https://stackoverflow.com/questions/427589/inspecting-stdmap-contents-with-gdb) | Debugging Best Practices for C++ STL/Boost with gdb | [
"",
"c++",
"stl",
"boost",
"gdb",
""
] |
Is there in the JDK or Jakarta Commons (or anywhere else) a method that can parse the output of Arrays.toString, at least for integer arrays?
```
int[] i = fromString(Arrays.toString(new int[] { 1, 2, 3 }));
``` | Pretty easy to just do it yourself:
```
public class Test {
public static void main(String args[]){
int[] i = fromString(Arrays.toString(new int[] { 1, 2, 3} ));
}
private static int[] fromString(String string) {
String[] strings = string.replace("[", "").replace("]", "").split(", ");
int result[] = new int[strings.length];
for (int i = 0; i < result.length; i++) {
result[i] = Integer.parseInt(strings[i]);
}
return result;
}
}
``` | A sample with fastjson, a JSON library:
```
String s = Arrays.toString(new int[] { 1, 2, 3 });
Integer[] result = ((JSONArray) JSONArray.parse(s)).toArray(new Integer[] {});
```
Another sample with guava:
```
String s = Arrays.toString(new int[] { 1, 2, 3 });
Iterable<String> i = Splitter.on(",")
.trimResults(CharMatcher.WHITESPACE.or(CharMatcher.anyOf("[]"))).split(s);
Integer[] result = FluentIterable.from(i).transform(Ints.stringConverter())
.toArray(Integer.class);
``` | Reverse (parse the output) of Arrays.toString(int[]) | [
"",
"java",
"serialization",
"tostring",
""
] |
> **Possible Duplicate:**
> [Why is it a bad practice to return generated HTML instead of JSON? Or is it?](https://stackoverflow.com/questions/1284381/why-is-it-a-bad-practice-to-return-generated-html-instead-of-json-or-is-it)
It seems to me that intercepting this could cause instant trouble, because anyone could just send any HTML/script back to the client.
The only reason I'm interested in doing this is the huge pain it causes front-end developers every time there's a DOM structure/CSS change: you then have to go figure out what in the Javascript HTML-building process you may have to update.
How do you guys deal with this? Are there things I can do to reduce any risk or is ut just straight up bad idea? | I tend to use the following rules:
1. Request and return HTML for quick snippets, then use client-side (static) Javascript to insert them. Great for alert messages.
2. Request and return JSON for large datasets. This works great when you want to do filtering, grouping, or sorting on the client side without re-requesting the data in a different form.
3. Request and return JSON for large datasets, but include the (escaped) HTML snippet for each record in the JSON record. This means more rendering time and more bandwidth use than (2), but can reduce duplication of often complex HTML rendering.
4. Request and return Javascript, and `eval` it client-side. This works best for interactions such as hiding, showing, moving, and deleting. It can work for insertions as well, but often type (1) or (5) work better for that.
5. Request and return Javascript, and `eval` it client-side, but include escaped HTML in the Javascript so the server is doing the HTML rendering.
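As an illustration of option (3), a single record might carry both its raw fields and a server-rendered snippet (sketched here with Python's `json` module; the field names are hypothetical):

```python
import json

record = {
    "id": 42,
    "name": "widget",
    # server-rendered, escaped HTML snippet for this record:
    "html": "<tr><td>42</td><td>widget</td></tr>",
}
payload = json.dumps([record])
# the client inserts record["html"] directly, yet can still sort and
# filter on the raw "id" and "name" fields without another request
print(payload)
```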
I probably use 5 and 1 the most often. | It would seem to me that it would be an even bigger hassle to figure out what in the back-end server would need to be changed when there's a DOM structure or CSS change.
Keeping all of that in one place (the HTML file) is probably the best reason to limit ajax communication to JSON. | How dangerous is it to send HTML in AJAX as opposed to sending JSON and building the HTML? | [
"",
"javascript",
"html",
"ajax",
"security",
"json",
""
] |
How can I write a custom `IEnumerator<T>` implementation which needs to maintain some state and still get to use iterator blocks to simplify it? The best I can come up with is something like this:
```
public class MyEnumerator<T> : IEnumerator<T> {
private IEnumerator<T> _enumerator;
public int Position {get; private set;} // or some other custom properties
public MyEnumerator() {
Position = 0;
_enumerator = MakeEnumerator();
}
private IEnumerator<T> MakeEnumerator() {
// yield return something depending on Position
}
public bool MoveNext() {
bool res = _enumerator.MoveNext();
if (res) Position++;
return res;
}
// delegate Reset and Current to _enumerator as well
}
public class MyCollection<T> : IEnumerable<T> {
IEnumerator<T> IEnumerable<T>.GetEnumerator() {
return GetEnumerator();
}
public MyEnumerator<T> GetEnumerator() {
return new MyEnumerator<T>();
}
...
}
``` | Why do you want to write an iterator class? The whole point of an iterator block is so you don't have to...
i.e.
```
public IEnumerator<T> GetEnumerator() {
int position = 0; // state
while(whatever) {
position++;
yield return ...something...;
}
}
```
If you add more context (i,e, why the above can't work), we can probably help more.
But if possible, avoid writing an iterator class. They are lots of work, and easy to get wrong.
By the way, you don't really have to bother with `Reset` - it is largely deprecated, and shouldn't really ever be used (since it can't be relied upon to work for an arbitrary enumerator).
If you want to consume an inner iterator, that is fine too:
```
int position = 0;
foreach(var item in source) {
position++;
yield return item;
}
```
or if you only have an enumerator:
```
while(iter.MoveNext()) {
position++;
yield return iter.Current;
}
```
You might also consider adding the state (as a tuple) to the thing you yield:
```
class MyState<T> {
public int Position {get;private set;}
public T Current {get;private set;}
public MyState(int position, T current) {...} // assign
}
...
yield return new MyState<Foo>(position, item);
```
Finally, you could use a LINQ-style extension/delegate approach, with an `Action<int,T>` to supply the position and value to the caller:
```
static void Main() {
var values = new[] { "a", "b", "c" };
values.ForEach((pos, s) => Console.WriteLine("{0}: {1}", pos, s));
}
static void ForEach<T>(
this IEnumerable<T> source,
Action<int, T> action) {
if (source == null) throw new ArgumentNullException("source");
if (action == null) throw new ArgumentNullException("action");
int position = 0;
foreach (T item in source) {
action(position++, item);
}
}
```
Outputs:
```
0: a
1: b
2: c
``` | I'd have to concur with Marc here. Either write an enumerator class completely yourself if you really want to (just because you can?) or simply use an iterator block and yield statements and be done with it. Personally, I'm never touching enumerator classes again. ;-) | Writing custom IEnumerator<T> with iterators | [
"",
"c#",
"iterator",
""
] |
I'm a recent CS graduate and have learned very little on 'web 2.0' type stuff; we mainly focused on Java and C. I want to get into PHP, what would you guys recommend as the best book/website to get started with? There are a lot of them out there, and I don't want to drop 50 bucks on something that will finish with a hello world program. Thanks :-) | I would avoid books for PHP. MySQL will be reasonably familiar to you from your Database course at college - I have got most of what I need from their [Reference Manual](http://dev.mysql.com/doc/refman/5.1/en/).
PHP is pretty odd because there are as you say a million and one tutorials out there, but once you're past the very basics you will probably find once again that you end up going back to [their platform documentation](http://www.php.net/docs.php) more than anything else. You may find the tutorial there is as good a starting point as any. | **Start a project!**
Program yourself a personal blog. Write everything (and I mean everything!) yourself. This will help you get very familiar with the language very quickly.
Finished building your basic blog? Upgrade it! Make a spam filter for the comments, an RSS feed, and post email subscriptions, and make sure it's secure. After you've finished your blog, move on to a larger, more complex project, and the cycle continues.
Use the [PHP documentation](http://www.php.net/docs.php), and this site if you run into any walls. | New to PHP/MySQL | [
"",
"php",
""
] |
The naive way of writing building a menu in a Java Swing app is to do something like:
```
JMenu fileMenu = new JMenu("File");
JMenuItem openItem = new JMenuItem("Open...");
openItem.addActionListener(new ActionListener() { /* action listener stuff */ });
fileMenu.addMenuItem(openItem);
```
A more experienced developer will recognize that actions can be accessed through a variety of mechanisms - menus, toolbar buttons, maybe even other workflows in the system. That person is more likely to write:
```
Action openAction = new AbstractAction("Open...") {
    public void actionPerformed(ActionEvent e) { /* action stuff */ }
};
...
JMenuItem openItem = new JMenuItem(openAction);
```
My question is, **what is the best way to manage these Action objects so they can be used across menus, toolbars, etc?**
* Create a factory class that returns specific actions?
* Declare all of the actions as `private static final Action` in some utility class?
* Take advantage of a Java application framework?
* Something else? | Applications that I have developed that need to use the same actions across menus, toolbars, and other buttons have been done using Swing Application Framework.
[Swing Application Framework](https://appframework.dev.java.net/)
This framework will allow you to have a resource file where you can define all menu text, tooltips, and ICONS. I think the icons are the key, you do not have to load them yourself. Also, if you have any actions that you need to enable/disable you can override the method to control its state.
The website is worth the read. | You can group all your AbstractActions using the dedicated map `javax.swing.ActionMap`.
See <http://java.sun.com/javase/6/docs/api/javax/swing/ActionMap.html>
Moreover each JComponent has an internal actionMap (getActionMap()).
```
class MyComponent
extends JPanel
{
public static final String ACTION_NAME1="my.action.1";
public MyComponent()
{
AbstractAction action = new AbstractAction() { ... };
getActionMap().put(ACTION_NAME1,action);
...
menu.add(getActionMap().get(ACTION_NAME1));
}
}
```
Hope it helps | Correct way to use Actions to create menus, toolbars, and other components in Java | [
"",
"java",
"user-interface",
"swing",
""
] |
In C# should you have code like:
```
public static string importantRegex = "magic!";
public void F1(){
//code
if(Regex.IsMatch(importantRegex)){
//codez in here.
}
//more code
}
public void main(){
F1();
/*
some stuff happens......
*/
F1();
}
```
or should you persist an instance of a Regex containing the important pattern? What is the cost of using Regex.IsMatch? I imagine there is an NFA created in each Regex instance. From what I understand this NFA creation is non-trivial. | In a rare departure from my typical egotism, I'm kind of reversing myself on this answer.
My original answer, preserved below, was based on an examination of version *1.1* of the .NET framework. This is pretty shameful, since .NET 2.0 had been out for over three years at the time of my answer, and it contained changes to the `Regex` class that significantly affect the difference between the static and instance methods.
In .NET 2.0 (and 4.0), the static `IsMatch` function is defined as follows:
```
public static bool IsMatch(string input, string pattern){
return new Regex(pattern, RegexOptions.None, true).IsMatch(input);
}
```
The significant difference here is that little `true` as the third argument. That corresponds to a parameter named "useCache". When that is true, the parsed tree is retrieved from cache on the second and subsequent uses.
This caching eats up most—but not all—of the performance difference between the static and instance methods. In my tests, the static `IsMatch` method was still about 20% slower than the instance method, but that only amounted to about a half second increase when run 100 times over a set of 10,000 input strings (for a total of 1 million operations).
This 20% slowdown can still be significant in some scenarios. If you find yourself regexing hundreds of millions of strings, you'll probably want to take every step you can to make it more efficient. But I'd bet that 99% of the time, you're using a particular Regex no more than a handful of times, and the extra millisecond you lose to the static method won't be even close to noticeable.
Props to [devgeezer](https://stackoverflow.com/a/10838615/36388), who pointed this out almost a year ago, although no one seemed to notice.
My old answer follows:
---
The static `IsMatch` function is defined as follows:
```
public static bool IsMatch(string input, string pattern){
return new Regex(pattern).IsMatch(input);
}
```
And, yes, initialization of a `Regex` object is not trivial. You should use the static `IsMatch` (or any of the other static `Regex` functions) as a quick shortcut only for patterns that you will use only once. If you will reuse the pattern, it's worth it to reuse a `Regex` object, too.
As to whether or not you should specify `RegexOptions.Compiled`, as suggested by Jon Skeet, that's another story. The answer there is: it depends. For simple patterns or for patterns used only a handful of times, it may well be faster to use a non-compiled instance. You should definitely profile before deciding. The cost of compiling a regular expression object is quite large indeed, and may not be worth it.
---
Take, as an example, the following:
```
const int count = 10000;
string pattern = "^[a-z]+[0-9]+$";
string input = "abc123";
Stopwatch sw = Stopwatch.StartNew();
for(int i = 0; i < count; i++)
Regex.IsMatch(input, pattern);
Console.WriteLine("static took {0} seconds.", sw.Elapsed.TotalSeconds);
sw.Reset();
sw.Start();
Regex rx = new Regex(pattern);
for(int i = 0; i < count; i++)
rx.IsMatch(input);
Console.WriteLine("instance took {0} seconds.", sw.Elapsed.TotalSeconds);
sw.Reset();
sw.Start();
rx = new Regex(pattern, RegexOptions.Compiled);
for(int i = 0; i < count; i++)
rx.IsMatch(input);
Console.WriteLine("compiled took {0} seconds.", sw.Elapsed.TotalSeconds);
```
At `count = 10000`, as listed, the second output is fastest. Increase `count` to `100000`, and the compiled version wins. | If you're going to reuse the regular expression multiple times, I'd create it with `RegexOptions.Compiled` and cache it. There's no point in making the framework parse the regex pattern every time you want it. | using static Regex.IsMatch vs creating an instance of Regex | [
"",
"c#",
"regex",
"optimization",
""
] |
Say I have an ID row (int) in a database set as the primary key. If I query off the ID often do I also need to index it? Or does it being a primary key mean it's already indexed?
Reason I ask is because in MS SQL Server I can create an index on this ID, which as I stated is my primary key.
Edit: an additional question - will it do any harm to additionally index the primary key? | You are right, it's confusing that SQL Server allows you to create duplicate indexes on the same field(s). But the fact that you can create another doesn't indicate that the PK index doesn't also already exist.
The additional index does no good, but the only harm (very small) is the additional file size and row-creation overhead. | As everyone else has already said, primary keys are automatically indexed.
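The automatic index is easy to observe in other engines too; a quick sketch against SQLite (via Python's stdlib `sqlite3`; SQL Server behaves analogously for its primary-key index):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
# No CREATE INDEX was ever issued, yet the planner searches by the key:
rows = con.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM t WHERE id = 1"
).fetchall()
print(rows)  # the detail column mentions the PRIMARY KEY
```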
Creating more indexes on the primary key column only makes sense when you need to optimize a query that uses the primary key and some other specific columns. By creating another index on the primary key column and including some other columns with it, you may reach the desired optimization for a query.
For example you have a table with many columns but you are only querying ID, Name and Address columns. Taking ID as the primary key, we can create the following index that is built on ID but includes Name and Address columns.
```
CREATE NONCLUSTERED INDEX MyIndex
ON MyTable(ID)
INCLUDE (Name, Address)
```
So, when you use this query:
```
SELECT ID, Name, Address FROM MyTable WHERE ID > 1000
```
SQL Server will give you the result only using the index you've created and it'll not read anything from the actual table. | sql primary key and index | [
"",
"sql",
"sql-server",
"t-sql",
"indexing",
"primary-key",
""
] |
How can I track the memory allocations in C++, especially those done by `new`/`delete`. For an object, I can easily override the `operator new`, but I'm not sure how to globally override all allocations so they go through my custom `new`/`delete`. This should be not a big problem, but I'm not sure how this is supposed to be done (`#define new MY_NEW`?).
As soon as this works, I would assume it's enough to have a map somewhere of pointer/location of the allocation, so I can keep track of all allocations which are currently 'active' and - at the end of the application - check for allocations which have not been freed.
Well, this seems again like something that surely has been done several times at least, so any good library out there (preferably a portable one)? | I would recommend using `valgrind` on Linux. It will catch memory that is not freed, among other bugs like writing to unallocated memory. Another option is mudflap, which tells you about non-freed memory too. Use the `-fmudflap -lmudflap` options with gcc, then start your program with `MUDFLAP_OPTIONS=-print-leaks ./my_program`.
Here's some very simple code. It's not suitable for sophisticated tracking, but intended to show you how you would do it in principle, if you were to implement it yourself. Something like this (left out stuff calling the registered new\_handler and other details).
```
template<typename T>
struct track_alloc : std::allocator<T> {
typedef typename std::allocator<T>::pointer pointer;
typedef typename std::allocator<T>::size_type size_type;
template<typename U>
struct rebind {
typedef track_alloc<U> other;
};
track_alloc() {}
template<typename U>
track_alloc(track_alloc<U> const& u)
:std::allocator<T>(u) {}
pointer allocate(size_type size,
std::allocator<void>::const_pointer = 0) {
void * p = std::malloc(size * sizeof(T));
if(p == 0) {
throw std::bad_alloc();
}
return static_cast<pointer>(p);
}
void deallocate(pointer p, size_type) {
std::free(p);
}
};
typedef std::map< void*, std::size_t, std::less<void*>,
track_alloc< std::pair<void* const, std::size_t> > > track_type;
struct track_printer {
track_type * track;
track_printer(track_type * track):track(track) {}
~track_printer() {
track_type::const_iterator it = track->begin();
while(it != track->end()) {
std::cerr << "TRACK: leaked at " << it->first << ", "
<< it->second << " bytes\n";
++it;
}
}
};
track_type * get_map() {
// don't use normal new to avoid infinite recursion.
static track_type * track = new (std::malloc(sizeof *track))
track_type;
static track_printer printer(track);
return track;
}
void * operator new(std::size_t size) throw(std::bad_alloc) {
// we are required to return non-null
void * mem = std::malloc(size == 0 ? 1 : size);
if(mem == 0) {
throw std::bad_alloc();
}
(*get_map())[mem] = size;
return mem;
}
void operator delete(void * mem) throw() {
if(get_map()->erase(mem) == 0) {
// this indicates a serious bug
std::cerr << "bug: memory at "
<< mem << " wasn't allocated by us\n";
}
std::free(mem);
}
int main() {
std::string *s = new std::string;
// will print something like: TRACK: leaked at 0x9564008, 4 bytes
}
```
We have to use our own allocator for our map, because the standard one will use our overridden operator new, which would result in an infinite recursion.
Make sure if you override operator new, you use the map to register your allocations. Deleting memory allocated by placement forms of new will use that delete operator too, so it can become tricky if some code you don't know has overloaded operator new not using your map, because operator delete will tell you that it wasn't allocated and use `std::free` to free the memory.
Also note, as *Pax* pointed out for his solution too, this will only show leaks that are caused by code using our own defined operator new/delete. So if you want to use them, put their declaration in a header, and include it in all files that should be watched. | To be specific, use valgrind's massif tool. As opposed to memcheck, massif is not concerned with illegal use of memory, but tracking allocations over time. It does a good job of 'efficiently' measuring heap memory usage of a program. The best part is, you don't have to write any code. Try:
<http://valgrind.org/docs/manual/ms-manual.html>
Or if you are really impatient:
```
valgrind --tool=massif <executable> <args>
ms_print massif.out.<pid> | less
```
This will give you a graph of allocations over time, and back traces to where the big allocations occurred. This tool is best run on Linux; I don't know if there is a Windows variant. It *does* work on OS X.
Good luck! | How to track memory allocations in C++ (especially new/delete) | [
"",
"c++",
"debugging",
"memory-management",
""
] |
I want to try out SICP with Python.
Can anyone point to materials (video, article, ...) that teach Structure and Interpretation of Computer Programs in **python**.
Currently learning from SICP videos of Abelson, Sussman, and Sussman. | Don't think there is a complete set of materials, [this](http://www.codepoetics.com/wiki/index.php?title=Topics:SICP_in_other_languages) is the best I know.
If you are up to generating the material yourself, a bunch of us plan to work through SICP collectively [at](http://hn-sicp.pbwiki.com/). I know at least one guy will be using Haskell, so you will not be alone in pursuing an alternative route. | I think this would be great for you, [CS61A SICP](http://www-inst.eecs.berkeley.edu/~cs61a/fa11/61a-python/content/www/index.html) in Python by Berkeley
[sicp-python](https://github.com/dongchongyubing/sicp-python) code at Github | Materials for SICP with python? | [
"",
"python",
"sicp",
""
] |
Is it possible for exceptions other than RuntimeException to occur in Java? Thanks.
* **[ClassNotFoundException](http://java.sun.com/j2se/1.4.2/docs/api/java/lang/ClassNotFoundException.html)**: This exception is thrown to indicate that a class that is to be loaded cannot be found.
* **[CloneNotSupportedException](http://java.sun.com/j2se/1.4.2/docs/api/java/lang/CloneNotSupportedException.html)**: This exception is thrown when the clone() method has been called for an object that does not implement the Cloneable interface and thus cannot be cloned.
* **[Exception](http://java.sun.com/j2se/1.4.2/docs/api/java/lang/Exception.html)**: This is the superclass of all checked exceptions (and, through RuntimeException, of the unchecked ones). If a program defines its own exception classes, they should be subclasses of the Exception class.
* **[IllegalAccessException](http://java.sun.com/j2se/1.4.2/docs/api/java/lang/IllegalAccessException.html)**: This exception is thrown when a program tries to dynamically load a class (i.e., uses the forName() method of the Class class, or the findSystemClass() or the loadClass() method of the ClassLoader class) and the currently executing method does not have access to the specified class because it is in another package and not public. This exception is also thrown when a program tries to create an instance of a class (i.e., uses the newInstance() method of the Class class) that does not have a zero-argument constructor accessible to the caller.
* **[InstantiationException](http://java.sun.com/j2se/1.4.2/docs/api/java/lang/InstantiationException.html)**: This exception is thrown in response to an attempt to instantiate an abstract class or an interface using the newInstance() method of the Class class.
* **[InterruptedException](http://java.sun.com/j2se/1.4.2/docs/api/java/lang/InterruptedException.html)**: This exception is thrown to signal that a thread that is sleeping, waiting, or otherwise paused has been interrupted by another thread.
* **[NoSuchFieldException](http://java.sun.com/j2se/1.4.2/docs/api/java/lang/NoSuchFieldException.html)**: This exception is thrown when a specified variable cannot be found. This exception is new in Java 1.1.
* **[NoSuchMethodException](http://java.sun.com/j2se/1.4.2/docs/api/java/lang/NoSuchMethodException.html)**: This exception is thrown when a specified method cannot be found. | Yes, there are **Three** kinds.
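For example, a checked exception such as InterruptedException must be caught or declared before the code will even compile; a contrived, self-contained sketch (class and method names are ours):

```java
// Checked exceptions must be caught or declared with "throws";
// RuntimeExceptions carry no such requirement.
public class CheckedDemo {

    // Thread.sleep declares InterruptedException, so callers must deal with it.
    static void pause() throws InterruptedException {
        Thread.sleep(1);
    }

    // Returns true if the pause completed without interruption.
    static boolean pauseSafely() {
        try {
            pause();
            return true;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(pauseSafely()); // prints "true"
    }
}
```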
## **Checked exceptions**
The compiler will let you know when they can possibly be thrown, most likely due to a failure in the environment.
They should be caught if the program can do something about them; otherwise it is preferred to let them propagate.
Most of them inherit from
```
java.lang.Exception
```
or from
```
java.lang.Throwable
```
Although it is better to inherit from the former.
For example:
```
java.io.IOException
```
*Signals that an I/O exception of some sort has occurred. This class is the general class of exceptions produced by failed or interrupted I/O operations.*
## **Errors**
These are special kinds of exceptions. They **SHOULD NOT BE CAUGHT**, because when they occur it means that something really, really bad just happened.
All of them inherit from
```
java.lang.Error
```
For example:
```
java.lang.OutOfMemoryError
```
*Thrown when the Java Virtual Machine cannot allocate an object because it is out of memory, and no more memory could be made available by the garbage collector.*
or
```
java.lang.StackOverflowError
```
*Thrown when a stack overflow occurs because an application recurses too deeply.*
## RuntimeExceptions
Used to identify programmer failures, rather than resource failures.
A Runtime exception could "normally" be avoided while coding. If you have one most likely you're doing something wrong.
Sometimes runtime exceptions are caught, but, unless you know exactly what you're doing and **why**, catching them is a bad practice (again, unless that's exactly what you need).
They inherit from
```
java.lang.RuntimeException
```
For example
```
java.lang.ArrayIndexOutOfBoundsException
```
*Thrown to indicate that an array has been accessed with an illegal index. The index is either negative or greater than or equal to the size of the array*
or
```
java.lang.NullPointerException
```
*Thrown when an application attempts to use null in a case where an object is required*
About the last two: **MOST** of the time they can be avoided by programming carefully and understanding what the state of the program is (does this array have 5 elements? why would I try to access index -1 or 6? is this reference null? why would I call null.toString()?)
Although I have had arguments with guys that claim that all NPE should be caught. Well what can I say. | Exception other than RuntimeException | [
"",
"java",
"exception",
""
] |
I'm finally starting out with unit testing, having known that I should be doing it for a while, but I have a few questions:
* Should or shouldn't I retest parent classes when testing the children if no methods have been overridden?
* Conceptually, how do you test the submitted part of a form? I'm using PHP. (**Edit**: The reason I ask this is that I have a high-level form class that generates a form, validates it, filters it, and generates any error messages by taking a JSON-like array as input and delegating to a variety of smaller classes. However, I can't test the errors, etc. without submitting the form. **Edit**: [This](https://stackoverflow.com/questions/132342/testing-form-inputs-in-phpunit) looks like it might be an answer.)
* If you have an optional parameter in a method, should you write a test for both when it is present and when it is not?
* Should unit testing in any way be combined with testing code execution time, or should they remain completely separate?
* Is there any valid reason not to run your full test suite every time?
* Just so I'm getting my terminology right, to what does the unit in unit testing refer? The class being tested? The method? The parameter? Something else?
Something else? | * Test parent, test child; if the child doesn't override the parent method, no need to retest it.
* I'm not sure I understand the second one. You can use Selenium to automate testing a form. Is that what you mean?
* Tests should include the "happy path" and all edge cases. If you have an optional parameter, write tests to show proper operation with the value present and absent.
* Unit tests, integration tests, acceptance tests, load tests are all different ideas that may have some overlap.
* I'll bet there are valid reasons, but if you're doing automated builds that run the test suite automatically why would you not run them? Maybe long run times come to mind, but that's the only reason I can think of. The value is seeing that all of them continue to pass, and that changes you've made haven't broken anything.
* Unit test to me means the class you're testing, which can have several methods. I associate them with classes, not forms. Forms mean UI and integration testing to me. | (Not quite in the same order as your questions)
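The optional-parameter bullet above can be made concrete with a tiny sketch (illustrated in Python for brevity, though the question is about PHP where the same idea applies; the function is made up):

```python
def greet(name, greeting="Hello"):
    """Toy function with an optional parameter."""
    return f"{greeting}, {name}!"

# One test for the default, one for the explicit value: both paths covered.
assert greet("Ada") == "Hello, Ada!"
assert greet("Ada", greeting="Hi") == "Hi, Ada!"
```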
* If the full test suite doesn't take too long, you should always run it complete. You often don't know which side-effects may result from changes exactly.
* If you can combine speed tests with your favorite unit testing tool, you should do it. This gives you additional information about the quality of your changes. But only do this for time-critical parts of your code.
* From Wikipedia: "A unit is the smallest testable part of an application." | Unit Testing: Beginner Questions | [
"",
"php",
"unit-testing",
""
] |
For smaller websites which are view-only or require light online-editing, SQL Server 2008, Oracle, and MySQL are overkill.
In the PHP world, I used SQLite quite a bit: e.g. a 100K file holding hundreds of records which you speak to with standard SQL.
In the .NET world, what options do we have, I've seen:
* SQL Server 2008 Express (free but the files are bloated 2-10MB with just a couple records)
* SQL Server Compact 3.5 (seems interesting)
* Vista.db (not free?)
* What about SQLite with .NET, have any of you had success with that?
* Any other small, no-nonsense, SQL databases for .NET there?
I would particularly be interested in something like SQLite but that supports CLR type system and LINQ. | Maybe [this Wikipedia RDBMS comparison](http://en.wikipedia.org/wiki/Comparison_of_relational_database_management_systems) might help you in making your choice.
* SQLite [works just fine](http://sqlite.phxsoftware.com/) with .NET.
* I second Edoode's suggestion of Firebird - that works great for me.
* [Be very careful](http://social.msdn.microsoft.com/forums/en-US/sqlce/thread/c39ae613-7513-4608-9572-9afa556e1699/) when using SQL Server CE in multi-threaded applications. | I had to use Interbase at work, so I came to [Firebird](http://www.firebirdsql.org).
First I had to use it, now I love it.
There's a [.NET Data Provider](http://www.firebirdsql.org/index.php?op=files&id=netprovider) (ADO, DDEX).
You can even use it without setting up a server, like you do with SQLite (direct access to the database file).
It's actively developed and "open source". | Need an overview of non-enterprise databases for .NET | [
"",
"sql",
"database",
""
] |
I have a javascript routine that is performing actions on a group of checkboxes, but for the final action I want to set the clicked checkbox to checked or unchecked based on whether the user was checking or unchecking the box.
Unfortunately, every time I check for whether it is being checked or unchecked, it returns "on" indicating the user is always checking the box! Any help would be appreciated, I've also included the javascript.
```
// Uncheck all the checkboxs with the same Tax Credit
for (i=0; i<arrChecks.length; i++)
{
var attribute = arrChecks[i].getAttribute("xid")
if (attribute == elementName)
{
// if the current state is checked, unchecked and vice-versa
if (arrChecks[i].value == "on") // <-- This is always returning true, even if the box is being unchecked
{
arrChecks[i].checked = 1;
} else {
arrChecks[i].checked = 0;
}
} else {
arrChecks[i].checked = 0;
}
}
``` | You should be evaluating against the checked property of the checkbox element.
```
for (i=0; i<arrChecks.length; i++)
{
var attribute = arrChecks[i].getAttribute("xid")
if (attribute == elementName)
{
// if the current state is checked, unchecked and vice-versa
if (arrChecks[i].checked)
{
arrChecks[i].checked = false;
} else {
arrChecks[i].checked = true;
}
} else {
arrChecks[i].checked = false;
}
}
``` | The `value` attribute of a `checkbox` is what you set by:
```
<input type='checkbox' name='test' value='1'>
```
So when someone checks that box, the server receives a variable named `test` with a `value` of `1` - what you want to check for is not the `value` of it (which will never change, whether it is checked or not) but the `checked` status of the checkbox.
So, if you replace this code:
```
if (arrChecks[i].value == "on")
{
arrChecks[i].checked = 1;
} else {
arrChecks[i].checked = 0;
}
```
With this:
```
arrChecks[i].checked = !arrChecks[i].checked;
```
It should work. You should use `true` and `false` instead of `0` and `1` for this. | Javascript to check whether a checkbox is being checked or unchecked | [
"",
"javascript",
""
] |
I am looking for an open-source asp.net (preferably .net 2.0) project in c#. It doesn't matter if it is some kind of a shop or cms or anything else. What matters is the size of the project that must be at least of medium size (not a simple app that was done in 2 weeks by one developer) and it would be a great advantage if the project contained unit tests and some kind of case study.
I want to use this project as a learning source. I know there are many books and web resources about asp.net, but I would like to see how bigger projects are implemented. I am especially interested in the implementation of user rights to certain parts of the application, separation between HTML and application logic, and I would like to see whether developers use .NET web controls or write their own custom controls.
Thanks in advance. | Same question here: [**Good asp.net (C#) apps?**](https://stackoverflow.com/questions/90560/good-asp-net-c-apps)
[BlogEngine.Net](http://www.dotnetblogengine.net/) was mentioned twice:
> It implements a ton of different
> abilities and common needs in asp.net
> as well as allowing it to be fully
> customizable and very easy to
> understand. It can work with XML or
> SQL (your choice) and has a huge
> community behind it. | [ScrewTurn Wiki](http://www.screwturn.eu/) is a nice open source project.
> ScrewTurn Wiki is a fast, powerful and
> simple ASP.NET wiki engine, installs
> in minutes and it's available in
> different packages.
We use it for documentation in our team. There's a lot of different things to learn from the code: theming, compression, how to write an RSS feed, configuration, multi-language support ... | Asp.net open-source project as a learning source in c# | [
"",
"c#",
"asp.net",
"open-source",
""
] |
Can I access an authenticated web service using JSTL? (Form-based authentication)
If I can't do it using JSTL, is there any other way I do it from a JSP?
---
Maybe I need to give a little more information. I'm using the core library:
`<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>`
and the import tag, as in:
`<c:import var="xml" url="http://my.service.com/api/xml"/>`
Where `http://my.service.com/api/xml` is a web service requiring form-based authentication - if I try to access it in a browser, I'm prompted for a username and password in a web form, not a popup window.
*sorry, I'm no web services expert as you can tell - fingers crossed for an easy solution :)* | @Brabster as @duffymo said, jstl has nothing to do with web services authentication.
If you need to avoid display the authentication page inside some JSP, maybe you can get around with a custom tag which will deal with authentication. How would you authenticate to your web service in a simple console program with a main method? If you know how to do it, integrating that code in a custom tag is easy. | JSTL == JSP Standard Tag Library? If so, I don't see how JSTL and JSP are different.
If you add tokens to HTML or SOAP headers, you're assuming that the web service knows how to get at them and how to consume them. Form-based authentication uses j\_username and j\_password for the form element parameter names.
I'd say that security should be a cross-cutting concern. A filter or aspect should be able to get those values out of the HTTP headers and authenticate for you. | JSTL and authenticated web services | [
"",
"java",
"web-services",
"authentication",
"jstl",
""
] |
I am running a query against a Providex database that we use in MAS 90. The query joins three tables and has been slow, but not unbearably so, taking about 8 minutes per run. The query has a fair number of conditions in the where clause:
I'm going to omit the select part of the query as its long and simple, just a list of fields from the three tables that are to be used in the results.
But the tables and the where clauses in the 8 minute run time version are:
(The first parameter is the lower bound of the user-selected date range, the second is the upper bound.)
```
FROM "AR_InvoiceHistoryDetail" "AR_InvoiceHistoryDetail",
"AR_InvoiceHistoryHeader" "AR_InvoiceHistoryHeader", "IM1_InventoryMasterfile"
"IM1_InventoryMasterfile"
WHERE "AR_InvoiceHistoryDetail"."InvoiceNo" = "AR_InvoiceHistoryHeader"."InvoiceNo"
AND "AR_InvoiceHistoryDetail"."ItemCode" = "IM1_InventoryMasterfile"."ItemNumber"
AND "AR_InvoiceHistoryHeader"."SalespersonNo" = 'SMC'
AND "AR_InvoiceHistoryHeader"."OrderDate" >= @p_dr
AND "AR_InvoiceHistoryHeader"."OrderDate" <= @p_d2
```
However, it turns out that another date field in the same table needs to be the one that the Date Range is compared with. So I changed the Order Dates at the end of the WHERE clause to InvoiceDate. I haven't had the query run successfully at all yet. And I've waited over 40 minutes. I have no control over indexing because this is a MAS 90 database which I don't believe I can directly change the database characteristics of.
What could cause such a large (at least 5 fold) difference in performance. Is it that OrderDate might have been indexed while InvoiceDate was not? I have tried BETWEEN clauses but it doesn't seem to work in the providex dialect. I am using the ODBC interface through .NET in my custom report engine. I have been debugging the report and it is running at the database execution point when I asked VS to Break All, at the same spot where the 8 minute report was waiting, so it is almost certainly either something in my query or something in the database that is screwed up.
If its just the case that InvoiceDates aren't indexed, what else can I do in the providex dialect of SQL to optimize the performance of these queries? Should I change the order of my criteria? This report gets results for a specific salesperson which is why the SMC clause exists. The prior clauses are for the inner joins, and the last clause is for the date range.
I used an identical date range in both the OrderDate and InvoiceDate versions and have ran them all mulitiple times and got the same results. | I still don't know exactly why it was so slow, but we had another problem with the results coming from the query (we switched back to using OrderDate). We weren't getting some of the results because of the nature of the IM1 table.
So I added a Left Outer Join once I figured out Providex's syntax for that. And for some reason, even though we still have 3 tables, it runs a lot faster now.
The new query criteria are:
```
FROM "AR_InvoiceHistoryHeader" "AR_InvoiceHistoryHeader",
{OJ "AR_InvoiceHistoryDetail" "AR_InvoiceHistoryDetail"
LEFT OUTER JOIN "IM1_InventoryMasterfile" "IM1_InventoryMasterfile"
ON "AR_InvoiceHistoryDetail"."ItemCode" =
"IM1_InventoryMasterfile"."ItemNumber" }
WHERE "AR_InvoiceHistoryDetail"."InvoiceNo" =
"AR_InvoiceHistoryHeader"."InvoiceNo" AND
"AR_InvoiceHistoryHeader"."SalespersonNo" = 'SMC'
AND "AR_InvoiceHistoryHeader"."InvoiceDate" >= ?
AND "AR_InvoiceHistoryHeader"."InvoiceDate" <= ?
```
Strange, but at least I learned more of the world of Providex Sql in the process. | I've never used providex before.
A search turned up [this reference article](http://www.pvx.com/support/docs/reference/PVXLanguage0029.htm) on the syntax for creating an index.
Looking over your query, there's three tables and five criteria. Two of the criteria are "join criteria", and three criteria are filtering criteria:
```
AND "AR_InvoiceHistoryHeader"."SalespersonNo" = 'SMC'
AND "AR_InvoiceHistoryHeader"."OrderDate" >= @p_dr
AND "AR_InvoiceHistoryHeader"."OrderDate" <= @p_d2
```
I don't know how good SalespersonNo is for limiting return results, but it might be good to add an index on that. | Providex Query Performance | [
"",
"sql",
"database",
""
] |
When moving a file from `old.package` to `new.package` I want two things to happen:
1. Update all references to that class (in all files of the project) so that the new package is used
2. `svn move old/package/Foo.java new/package/Foo.java`
I use subversive within Eclipse Ganymede. When I just drag the file from one package to the other, all references get updated and the file is moved around on the filesystem. But SVN is unaware of this and therefore the `svn move old/package/foo.java new/package/Foo.java` command does not work (obviously) when `old/package/Foo.java` does not exist (because eclipse has moved it already).
Is there a way to have eclipse do it's refactoring but using svn to move the file on the filesystem? I hope I'm just missing something obvious :-) | I use [Subclipse](http://subclipse.tigris.org/) and it *does* support moving files between packages | I've been experimenting, and think I've found the way:
1) Tortoise-move the file using the right-click-drag context menu (or whatever SVN method you want).
2) Use the windows filesystem to move it back.
3) Drag-move the file in Eclipse to update all the references.
4) Commit - it shows as a delete/add, but it saves the history, and it does change the package references in one commit (keeping each commit buildable).
Old question, but I keep returning to it, so this is my documentation! | Conveniently move a class to a different package in eclipse without borking svn | [
"",
"java",
"eclipse",
"svn",
"move",
""
] |
I have a service that I am rewriting to use threading. I understand that state from one thread should not be accessed by another, but I'm a little confused by what constitutes 'state'. Does that mean *any* field/property/method outside of the method scope?
Specifically, my service looks something like this:
```
public class MyService
{
private IRepository<MyClass> repository;
private ILogger log;
...
public void MyMethod()
{
...
var t = new Thread(MyMethodAsync);
t.Start(someState);
}
//Is this OK???
public void MyMethodAsync(object state)
{
var someState = (MyState)state;
log.Log("Starting");
var someData = repository.GetSomeData(someState.Property);
//process data
log.Log("Done");
}
//Or should I be doing this:
public void MyMethodAsync2(object state)
{
var someState = (MyState)state;
lock(log){
log.Log("Starting"); }
lock(repository){
var someData = repository.GetSomeData(someState.Property);}
//process data
lock(log){
log.Log("Done"); }
}
}
``` | Er...nope, you don't need to lock resources that are read-only. The purpose of locking them is so that if you need to check the value of a resource before writing it then another resource can't change the value between your read and your write. i.e.:
```
SyncLock MyQueue
If MyQueue.Length = 0 Then
PauseFlag.Reset
End If
End SyncLock
```
If we were to check the length of our queue before we set the flag to pause our process queue thread and then another resource were to add an item to the queue, then our process queue thread would sit in a paused state while an item potentially could've been added between checking the queue length and setting the pause flag...
If all resources are only accessing the queue in a read only fashion (not that I could think of a single useful application of a read-only queue) then there's no need to lock it. | "State" is all the data contained in the class, and the real issue as far as concurrency goes is write access, so your intuition is right. | Do I need to lock "read only" services when using threading? | [
"",
"c#",
"multithreading",
""
] |
I have an input field where both regular text and sprintf tags can be entered.
Example: `some text here. %1$s done %2$d times`
How do I validate the sprintf parts so its not possible them wrong like `%$1s` ?
The text is UTF-8, and as far as I know regexes only match Latin-1 characters by default.
www.regular-expressions.info does not list `/u` anywhere, which I think is used to tell the engine that the string is Unicode.
Is the best way to just search the whole input field string for `%` or `$` and if either found then apply the regex to validate the sprintf parts ?
I think the regex would be: `/%\d\$(s|d|u|f)/u` | This is what I ended up with, and its working.
```
// Always use server validation even if you have JS validation
if (!isset($_POST['input']) || empty($_POST['input'])) {
// Do stuff
} else {
$matches = explode(' ',$_POST['input']);
$validInput = true;
foreach ($matches as $m) {
        // Check if a slice contains % or $, as it indicates a possible sprintf format
        if (preg_match('/[%$]/',$m) > 0) {
// Match found. Now check if its a valid sprintf format
if ($validInput === false || preg_match('/^%(?:\d+\$)?[dfsu]$/u',$m)===0) { // no match found
$validInput = false;
break; // Invalid sprintf format found. Abort
}
}
}
if ($validInput === false) {
// Do stuff when input is NOT valid
}
}
```
Thank you Gumbo for the regex pattern that matches both with and without order marking.
**Edit**: I realized that searching the whole string for % first is wrong, since nothing will be checked if it's forgotten/omitted. Above is the new code.
"$validInput === false ||" can be omitted in the last if-statement, but I included it for completeness. | I originally used Gumbo's regex to parse sprintf directives, but I immediately ran into a problem when trying to parse something like %1.2f. I ended up going back to PHP's sprintf manual and wrote the regex according to its rules. By far I'm not a regex expert, so I'm not sure if this is the cleanest way to write it:
```
/%(?:\d+\$)?[+-]?(?:[ 0]|'.{1})?-?\d*(?:\.\d+)?[bcdeEufFgGosxX]/
``` | Validate sprintf format from input field with regex | [
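The pattern above is PCRE, written for PHP's preg\_match. As a quick sanity check (our addition, not part of the original answer), the same expression minus the delimiters behaves the same way under Python's re module:

```python
import re

# The sprintf-directive pattern from above, delimiters stripped for Python's re.
DIRECTIVE = re.compile(r"%(?:\d+\$)?[+-]?(?:[ 0]|'.{1})?-?\d*(?:\.\d+)?[bcdeEufFgGosxX]")

# Well-formed directives match in full...
assert DIRECTIVE.fullmatch("%1$s")
assert DIRECTIVE.fullmatch("%1.2f")
assert DIRECTIVE.fullmatch("%05d")

# ...while the malformed "%$1s" from the question does not.
assert DIRECTIVE.fullmatch("%$1s") is None
```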
"",
"php",
"validation",
""
] |
I have developed a business index which combines e-commerce websites (in ASP.NET 2.0 + C#).
I'm looking for an in-site search engine that already handles issues like indexing, speed and quality.
Are there any well-known solutions that do this?
I need the search results to be customized on my design, so google search engine isn't an option.
Thanks,
Eytan | I haven't used it myself but I have read about [Lucene.net](http://incubator.apache.org/lucene.net/) a port of [Lucene](http://lucene.apache.org/java/docs/) for Java. | Have you looked at DTSearch? It is relatively inexpensive and pretty full-featured. Not great, but should be adequate. | Ready made insite search for asp.net website | [
"",
"c#",
"asp.net",
"search",
"e-commerce",
"search-engine",
""
] |
I am querying data from views that are subject to change. I need to know if the column exists before I do a `crs.get******()`.
I have found that I can query the *metadata* like this to see if a column exist before I request the data from it:
```
ResultSetMetaData meta = crs.getMetaData();
int numCol = meta.getColumnCount();
for (int i = 1; i < numCol + 1; i++)
if (meta.getColumnName(i).equals("name"))
return true;
```
Is there a simpler way of checking to see if a column exists?
---
### EDIT
It must be database agnostic. That is why I am referencing the `CachedRowSet` instead of the database. | There's not a simpler way with the general JDBC API (at least not that I know of, or can find...I've got exactly the same code in my home-grown toolset.)
Your code isn't complete:
```
ResultSetMetaData meta = crs.getMetaData();
int numCol = meta.getColumnCount();
for (int i = 1; i < numCol + 1; i++) {
if (meta.getColumnName(i).equals("name")) {
return true;
}
}
return false;
```
That being said, if you use proprietary, database-specific APIs and/or SQL queries, I'm sure you can find more elegant ways of doing the same thing. But you'd have to write custom code for each database you need to deal with. I'd stick with the JDBC APIs, if I were you.
Is there something about your proposed solution that makes you think it's incorrect? It seems simple enough to me. | you could take the shorter approach of using the fact that findColumn() will throw an SQLException for InvalidColumName if the column isn't in the CachedRowSet.
for example
```
try {
int foundColIndex = results.findColumn("nameOfColumn");
} catch (SQLException e) {
// do whatever else makes sense
}
```
Likely an abuse of exception handling (per *Effective Java*, 2nd ed., Item 57), but it is an alternative to looping through all the columns from the metadata.
"",
"java",
"jdbc",
"metadata",
"cachedrowset",
""
] |
Title pretty much sums it up.
The external style sheet has the following code:
```
td.EvenRow a {
display: none !important;
}
```
I have tried using:
```
element.style.display = "inline";
```
and
```
element.style.display = "inline !important";
```
but neither works. Is it possible to override an !important style using javascript.
This is for a greasemonkey extension, if that makes a difference. | I believe the only way to do this it to add the style as a new CSS declaration with the '!important' suffix. The easiest way to do this is to append a new <style> element to the head of document:
```
function addNewStyle(newStyle) {
var styleElement = document.getElementById('styles_js');
if (!styleElement) {
styleElement = document.createElement('style');
styleElement.type = 'text/css';
styleElement.id = 'styles_js';
document.getElementsByTagName('head')[0].appendChild(styleElement);
}
styleElement.appendChild(document.createTextNode(newStyle));
}
addNewStyle('td.EvenRow a {display:inline !important;}')
```
The rules added with the above method will (if you use the !important suffix) override other previously set styling. If you're not using the suffix then make sure to take concepts like '[specificity](http://www.w3.org/TR/CSS2/cascade.html#specificity)' into account. | There are a couple of simple one-liners you can use to do this.
1. Set a "style" *attribute* on the element:
```
element.setAttribute('style', 'display:inline !important');
```
or...
2. Modify the `cssText` property of the `style` object:
```
element.style.cssText = 'display:inline !important';
```
Either will do the job.
===
I've written a jQuery plugin called "important" to manipulate `!important` rules in elements, : <http://github.com/premasagar/important>
===
Edit:
As shared in the comments, the standard CSSOM interface (the API for JavaScript to interact with CSS) provides the [`setProperty`](https://developer.mozilla.org/en-US/docs/Web/API/CSSStyleDeclaration/setProperty) method:
```
element.style.setProperty(propertyName, value, priority);
```
E.g:
```
document.body.style.setProperty('background-color', 'red', 'important');
``` | Overriding !important style | [
"",
"javascript",
"css",
""
] |
I need a binary search algorithm that is compatible with the C++ STL containers, something like `std::binary_search` in the standard library's `<algorithm>` header, but I need it to return the iterator that points at the result, not a simple boolean telling me if the element exists.
(On a side note, what the hell was the standard committee thinking when they defined the API for binary\_search?!)
My main concern here is that I need the speed of a binary search, so although I can find the data with other algorithms, as mentioned below, I want to take advantage of the fact that my data is sorted to get the benefits of a binary search, not a linear search.
so far `lower_bound` and `upper_bound` fail if the datum is missing:
```
//lousy pseudo code
vector(1,2,3,4,6,7,8,9,0) //notice no 5
iter = lower_bound_or_upper_bound(start,end,5)
iter != 5 && iter !=end //not returning end as usual, instead it'll return 4 or 6
```
**Note:** I'm also fine using an algorithm that doesn't belong to the std namespace as long as its compatible with containers. Like, say, `boost::binary_search`. | There is no such functions, but you can write a simple one using [`std::lower_bound`](http://en.cppreference.com/w/cpp/algorithm/lower_bound), [`std::upper_bound`](http://en.cppreference.com/w/cpp/algorithm/upper_bound) or [`std::equal_range`](http://en.cppreference.com/w/cpp/algorithm/equal_range).
A simple implementation could be
```
template<class Iter, class T>
Iter binary_find(Iter begin, Iter end, T val)
{
// Finds the lower bound in at most log(last - first) + 1 comparisons
Iter i = std::lower_bound(begin, end, val);
if (i != end && !(val < *i))
return i; // found
else
return end; // not found
}
```
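The same find-or-`end` idiom is easy to sanity-check against another language's standard library; here is a hypothetical Python analogue using the stdlib `bisect` module (illustrative only, not part of the C++ answer):

```python
import bisect

def binary_find(sorted_list, value):
    """Mirror of the C++ binary_find above: index of value, or None if absent."""
    i = bisect.bisect_left(sorted_list, value)  # lower-bound search, O(log n)
    if i != len(sorted_list) and sorted_list[i] == value:
        return i
    return None

print(binary_find([1, 2, 3, 4, 6, 7, 8, 9], 6))  # 4
print(binary_find([1, 2, 3, 4, 6, 7, 8, 9], 5))  # None (no 5, just like the question)
```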
Another solution would be to use a `std::set`, which guarantees the ordering of the elements and provides a method `iterator find(T key)` that returns an iterator to the given item. However, your requirements might not be compatible with the use of a set (for example if you need to store the same element multiple times). | You should have a look at `std::equal_range`. It will return a pair of iterators to the range of all results. | Where can I get a "useful" C++ binary search algorithm? | [
"",
"c++",
"algorithm",
"stl",
"binary-search",
""
] |
I think this should be easy, but it's evading me. I've got a many-to-many relationship between Accounts and Account Groups. An Account can be in zero or more Groups, so I'm using the standard join table.
```
Accounts
--------
ID
BankName
AcctNumber
Balance
AccountGroups
-------------
ID
GroupName
JoinAccountsGroups
------------------
AID
GID
```
I'm using MS Access, FWIW. Also, this is for a low-bandwidth situation, so code optimization isn't as important as simplicity/readability.
I'm using php as a presentation layer, so a bare-bones result from Access is fine.
As for what to do with the multi-result situation, I actually have two things I'm trying to build. The first lists all the groups in one column thus:
```
Bank AcctNum Balance Groups
--------|--------------|----------|----------------
Citi 930938 400 Payroll
HSBC 8372933 100 Monthly, Payroll
Wells 09837 800 -
Chase 8730923 250 Monthly
```
The second is a master-detail list:
```
Name AcctNum Balance
------------|----------|----------
Payroll (2) 500
Citi 930938 400
HSBC 8372933 100
..................................
Monthly (2) 350
HSBC 8372933 100
Chase 8730923 250
..................................
Wells 09837 800
```
For the master-detail, my plan was just to get a big result set from the db, and munge it in php as needed. Since there's going to be some significant post-processing in php anyway, maybe I should just do three separate queries and do the joining there. (Since I'm more comfortable with that language.) | This
```
SELECT a.BankName, a.AcctNumber, a.Balance, ag.GroupName
FROM (Accounts a
LEFT JOIN JoinAccountsGroups jag
ON a.ID = jag.AID)
LEFT JOIN AccountGroups ag
ON jag.GID = ag.ID;
```
Will select the data for the first table, however to concatenate the groups (Monthly, Payroll), you would need a User Defined Function (UDF), which would not be available to Jet, so processing in PHP would be necessary.
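Since the group concatenation has to happen in the presentation layer anyway, the folding step can be sketched like this (hypothetical Python with an in-memory SQLite stand-in for the Access tables; table names match the question, the data is made up):

```python
import sqlite3

# In-memory stand-in for the Access schema from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Accounts (ID INTEGER PRIMARY KEY, BankName TEXT, AcctNumber TEXT, Balance INTEGER);
CREATE TABLE AccountGroups (ID INTEGER PRIMARY KEY, GroupName TEXT);
CREATE TABLE JoinAccountsGroups (AID INTEGER, GID INTEGER);
INSERT INTO Accounts VALUES (1,'Citi','930938',400),(2,'HSBC','8372933',100),
                            (3,'Wells','09837',800),(4,'Chase','8730923',250);
INSERT INTO AccountGroups VALUES (1,'Payroll'),(2,'Monthly');
INSERT INTO JoinAccountsGroups VALUES (1,1),(2,1),(2,2),(4,2);
""")

rows = conn.execute("""
    SELECT a.BankName, a.AcctNumber, a.Balance, ag.GroupName
    FROM Accounts a
    LEFT JOIN JoinAccountsGroups jag ON a.ID = jag.AID
    LEFT JOIN AccountGroups ag ON jag.GID = ag.ID
    ORDER BY a.ID, ag.ID
""").fetchall()

# Fold the one-row-per-group result into one row per account.
accounts = {}
for bank, acct, balance, group in rows:
    entry = accounts.setdefault(acct, {"bank": bank, "balance": balance, "groups": []})
    if group is not None:
        entry["groups"].append(group)

for acct, e in accounts.items():
    print(e["bank"], acct, e["balance"], ", ".join(e["groups"]) or "-")
```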
You may wish to read [Understanding SQL Joins](http://www.devshed.com/c/a/MySQL/Understanding-SQL-Joins/). It refers to MySQL but applies to Jet, for the most part. | Another thought... why not use the Query Designer in Access. This should take about 30 seconds to design the "View". Then go look at the SQL it wrote. | basic many-to-many sql select query | [
"",
"sql",
"ms-access",
"many-to-many",
""
] |
I need a generic container that keeps its elements sorted and can be asked where (at which position) it would insert a new element, without actually inserting it.
Does such a container exist in the .NET libraries?
The best illustration is an example (container sorts characters by ASCII value, let's assume unicode does not exist):
```
sortedContainer.Add('d');
sortedContainer.Add('b');
sortedContainer.Add('g');
//container contains elements ordered like 'b' 'd' 'g'
//index --------------------------------> 0 1 2
sortedContainer.GetSortedIndex('a'); //returns 0
sortedContainer.GetSortedIndex('b'); //returns 0
sortedContainer.GetSortedIndex('c'); //returns 1
sortedContainer.GetSortedIndex('d'); //returns 1
sortedContainer.GetSortedIndex('e'); //returns 2
sortedContainer.GetSortedIndex('f'); //returns 2
sortedContainer.GetSortedIndex('g'); //returns 2
sortedContainer.GetSortedIndex('h'); //returns 3
[...]
```
The search for the position should take advantage of the fact that the elements are sorted. | If you sort a [`List<T>`](http://msdn.microsoft.com/en-us/library/6sh2ey19.aspx) and then use [`List<T>.BinarySearch`](http://msdn.microsoft.com/en-us/library/3f90y839.aspx) it will give you the index of the entry if it exists, or the bitwise complement of the index of where it *would* be inserted if you inserted then sorted. From that, you should easily be able to build your method.
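(As an aside, that "index or insertion point" contract matches Python's `bisect_left` exactly, which makes the expected results in the question easy to sanity-check — a purely illustrative sketch:)

```python
import bisect

def get_sorted_index(sorted_list, value):
    # bisect_left returns the index of value if present,
    # otherwise the position at which it would be inserted.
    return bisect.bisect_left(sorted_list, value)

container = ['b', 'd', 'g']
print([get_sorted_index(container, c) for c in 'abcdefgh'])
# [0, 0, 1, 1, 2, 2, 2, 3] -- matching the question's table
```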
Sample code matching your example, but not the results - if you look at your sample, you've only got 3 entries, so it doesn't make sense for 'h' to return 4 or 'g' to return 3. I hope that's your example which is slightly off, rather than me misunderstanding the problem :) Note that the sorting isn't automatic - you'd have to sort the list explicitly before calling GetSortedIndex.
```
using System;
using System.Collections.Generic;
static class Test
{
static int GetSortedIndex<T>(this List<T> list, T entry)
{
int index = list.BinarySearch(entry);
return index >= 0 ? index : ~index;
}
static void Main()
{
List<char> container = new List<char> { 'b', 'd', 'g' };
Console.WriteLine(container.GetSortedIndex('a'));
Console.WriteLine(container.GetSortedIndex('b'));
Console.WriteLine(container.GetSortedIndex('c'));
Console.WriteLine(container.GetSortedIndex('d'));
Console.WriteLine(container.GetSortedIndex('e'));
Console.WriteLine(container.GetSortedIndex('f'));
Console.WriteLine(container.GetSortedIndex('g'));
Console.WriteLine(container.GetSortedIndex('h'));
}
}
``` | The closest class to what you are looking for is [SortedList<TKey,TValue>](http://msdn.microsoft.com/en-us/library/ms132319.aspx). This class will maintain a sorted list order.
However, there is no method by which you can get the to-be-added index of a new value. You can, however, write an extension method that will give you the new index:
```
public static int GetSortedIndex<TKey,TValue>(this SortedList<TKey,TValue> list, TKey key) {
var comp = list.Comparer;
for ( var i = 0; i < list.Count; i++ ) {
    if ( comp.Compare(key, list.Keys[i]) < 0 ) {
return i;
}
}
return list.Count;
}
``` | C#: Generic sorted container that can return the sorted position of a newly added object? | [
"",
"c#",
".net",
"generics",
"sorting",
""
] |
I have been tasked with creating a reusable process for our Finance Dept to upload our payroll to the State(WI) for reporting. I need to create something that takes a sheet or range in Excel and creates a specifically formatted text file.
***THE FORMAT***
* Column 1 - A Static Number, never changes, position 1-10
* Column 2 - A Dynamic Param filled at runtime for Quarter/Year, position 11-13
* Column 3 - SSN, no hyphens or spaces,
filled from column A, position 14-22
* Column 4 - Last Name, filled from
column B, Truncated at 10, Left
Justify & fill with blanks, position
23-32
* Column 5 - First Name, filled from C,
Truncate at 8, Left Justify & fill
with blanks, position 33-40
* Column 6 - Total Gross Wages/Quarter,
filled from D, strip all formatting,
Right Justify Zero Fill, position
41-49
* Column 7 - A Static Code, never
changes, position 50-51
* Column 8 - BLANKS, Fill with blanks,
position 52-80
I have, I assume, 3 options:
1. VBA
2. .NET
3. SQL
I had explored the .NET method first but I just couldn't find decent documentation to get me going. I still like this one but I digress.
Next I have some VBA that will dump a Sheet to a fixed width Text. I am currently pursuing this which leads, finally, to my actual question.
How do I transform a Range of text in Excel? Do I need to copy it over to another sheet and then pass over that data with the necessary formatting functions, then run my Dump to text routine? I currently had planned to have a function for each column but I am having trouble figuring out how to take the next step. I am fairly new at Office programming and developing in general so any insight will be greatly appreciated.
The SQL option would be my fallback as I have done similar exports from SQL in the past. I just prefer the other two on the *"I don't want to be responsible for running this"* principle.
Thanks in advance for any time given. | Using VBA seems like the way to go to me. This lets you write a macro that takes care of all of the various formatting options and should, hopefully, be simple enough for your finance people to run themselves.
You said you need something that takes a sheet or range in Excel. The first column never changes so we can store that in the macro, columns 3-7 come from the spreadsheet and column 8 is just blank. That leaves column 2 (the quarter/year as QYY) as an issue. If the quarter/year is specified somewhere in the workbook (e.g. stored in a cell, as a worksheet name, as part of the workbook title) then we can just read it in. Otherwise you will need to find some method for specifying the quarter/year when the macro runs (e.g. pop up a dialog box and ask the user to input it)
Some simple code (we'll worry about how to call this later):
```
Sub ProduceStatePayrollReportFile(rngPayrollData As Range, strCompanyNo As String, _
strQuarterYear As String, strRecordCode As String, strOutputFile As String)
```
The parameters are fairly obvious: the range that holds the data, the company number for column 1, the quarter/year for column 2, the fixed code for column 7 and the file we want to output the results to
```
' Store the file handle for the output file
Dim fnOutPayrollReport As Integer
' Store each line of the output file
Dim strPayrollReportLine As String
' Use to work through each row in the range
Dim indexRow As Integer
```
To output to a file in VBA we need to get a file handle so we need a variable to store that in. We'll build up each line of the report in the report line string and use the row index to work through the range
```
' Store the raw SSN, last name, first name and wages data
Dim strRawSSN As String
Dim strRawLastName As String
Dim strRawFirstName As String
Dim strRawWages As String
Dim currencyRawWages As Currency
' Store the corrected SSN, last name, first name and wages data
Dim strCleanSSN As String
Dim strCleanLastName As String
Dim strCleanFirstName As String
Dim strCleanWages As String
```
These sets of variables store the raw data from the worksheet and the cleaned data to be output to the file respectively. Naming them "raw" and "clean" makes it easier to spot errors where you accidentally output raw data instead of cleaned data. We will need to change the raw wages from a string value to a numeric value to help with the formatting
```
' Open up the output file
fnOutPayrollReport = FreeFile()
Open strOutputFile For Output As #fnOutPayrollReport
```
FreeFile() gets the next available file handle and we use that to link to the file
```
' Work through each row in the range
For indexRow = 1 To rngPayrollData.Rows.Count
' Reset the output report line to be empty
strPayrollReportLine = ""
' Add the company number to the report line (assumption: already correctly formatted)
strPayrollReportLine = strPayrollReportLine & strCompanyNo
' Add in the quarter/year (assumption: already correctly formatted)
strPayrollReportLine = strPayrollReportLine & strQuarterYear
```
In our loop to work through each row, we start by clearing out the output string and then adding in the values for columns 1 and 2
```
' Get the raw SSN data, clean it and append to the report line
strRawSSN = rngPayrollData.Cells(indexRow, 1)
strCleanSSN = cleanFromRawSSN(strRawSSN)
strPayrollReportLine = strPayrollReportLine & strCleanSSN
```
The `.Cells(indexRow, 1)` part just means the left-most column of the range at the row specified by indexRow. If the ranges starts in column A (which does not have to be the case) then this just means A. We'll need to write the `cleanFromRawSSN` function ourselves later
```
' Get the raw last and first names, clean them and append them
strRawLastName = rngPayrollData.Cells(indexRow, 2)
strCleanLastName = Format(Left$(strRawLastName, 10), "!@@@@@@@@@@")
strPayrollReportLine = strPayrollReportLine & strCleanLastName
strRawFirstName = rngPayrollData.Cells(indexRow, 3)
strCleanFirstName = Format(Left$(strRawFirstName, 8), "!@@@@@@@@")
strPayrollReportLine = strPayrollReportLine & strCleanFirstName
```
`Left$(string, length)` truncates the string to the given length. The format picture `!@@@@@@@@@@` formats a string as exactly ten characters long, left justified (the ! signifies left justify) and padded with spaces
```
' Read in the wages data, convert to numeric data, lose the decimal, clean it and append it
strRawWages = rngPayrollData.Cells(indexRow, 4)
currencyRawWages = CCur(strRawWages)
currencyRawWages = currencyRawWages * 100
strCleanWages = Format(currencyRawWages, "000000000")
strPayrollReportLine = strPayrollReportLine & strCleanWages
```
We convert it to currency so that we can multiply by 100 to move the cents value to the left of the decimal point. This makes it much easier to use `Format` to generate the correct value. This will not produce correct output for wages >= $10 million but that's a limitation of the file format used for reporting. The `0` in the format picture pads with 0s surprisingly enough
```
' Append the fixed code for column 7 and the spaces for column 8
strPayrollReportLine = strPayrollReportLine & strRecordCode
strPayrollReportLine = strPayrollReportLine & CStr(String(29, " "))
' Output the line to the file
Print #fnOutPayrollReport, strPayrollReportLine
```
The `String(number, char)` function produces a Variant with a sequence of `number` of the specified `char`. `CStr` turns the Variant into a string. The `Print #` statement outputs to the file without any additional formatting
```
Next indexRow
' Close the file
Close #fnOutPayrollReport
End Sub
```
Loop round to the next row in the range and repeat. When we have processed all of the rows, close the file and end the macro
We still need two things: a cleanFromRawSSN function and a way to call the macro with the relevant data.
```
Function cleanFromRawSSN(strRawSSN As String) As String
' Used to index the raw SSN so we can process it one character at a time
Dim indexRawChar As Integer
' Set the return string to be empty
cleanFromRawSSN = ""
' Loop through the raw data and extract the correct characters
For indexRawChar = 1 To Len(strRawSSN)
' Check for hyphen
If (Mid$(strRawSSN, indexRawChar, 1) = "-") Then
' do nothing
' Check for space
ElseIf (Mid$(strRawSSN, indexRawChar, 1) = " ") Then
' do nothing
Else
' Output character
cleanFromRawSSN = cleanFromRawSSN & Mid$(strRawSSN, indexRawChar, 1)
End If
Next indexRawChar
' Check for correct length and return empty string if incorrect
If (Len(cleanFromRawSSN) <> 9) Then
cleanFromRawSSN = ""
End If
End Function
```
`Len` returns the length of a string and `Mid$(string, start, length)` returns `length` characters from `string` beginning at `start`. This function could be improved as it doesn't currently check for non-numeric data
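Before wiring up the macro, the record layout itself can be cross-checked with a quick sketch (hypothetical Python, illustrative only — the VBA above is the actual solution; this assumes a full 9-digit SSN and wages already in cents):

```python
def build_record(company_no, quarter_year, ssn, last, first, wages_cents, code):
    """Assemble one 80-character record per the fixed-width layout in the question."""
    ssn = ssn.replace("-", "").replace(" ", "")   # strip hyphens/spaces, pos 14-22
    line = (
        company_no[:10].ljust(10)          # pos 1-10: static company number
        + quarter_year[:3]                 # pos 11-13: quarter/year (QYY)
        + ssn[:9]
        + last[:10].ljust(10)              # pos 23-32: truncate at 10, left justify
        + first[:8].ljust(8)               # pos 33-40: truncate at 8, left justify
        + str(wages_cents).rjust(9, "0")   # pos 41-49: right justify, zero fill
        + code[:2]                         # pos 50-51: static code
        + " " * 29                         # pos 52-80: blanks
    )
    assert len(line) == 80
    return line

print(repr(build_record("1234560007", "109", "123-45-6789",
                        "Lastname", "First", 40000, "01")))
```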
To call the macro:
```
Sub CallPayrollReport()
ProduceStatePayrollReportFile Application.Selection, "1234560007", "109", "01", "C:\payroll109.txt"
End Sub
```
This is the simplest way to call it. The Range is whatever the user has selected on the active worksheet in the active workbook and the other values are hard-coded. The user should select the range they want to output to the file then go Tools > Macro > Run and choose `CallPayrollReport`. For this to work, the macro would either need to be part of the workbook containing the data or in a different workbook which had been loaded before the user calls the macro.
Someone would need to change the hard-coded value of the quarter/year before each quarter's report was generated. As stated earlier, if the quarter/year is already stored in the workbook somewhere then it's better to read that in rather than hard-coding it
Hope that makes sense and is of some use | Thinking about this strictly from the viewpoint of what's easiest for you, and if you are comfortable with SQL, in the context of Access, you could use Access to attach to the spreadsheet as an external datasource. It would look like a table in Access, and work from there. | Export Excel range/sheet to formatted text file | [
"",
".net",
"sql",
"vba",
"excel",
""
] |
For the following SQL Server datatypes, what would be the corresponding datatype in C#?
**Exact Numerics**
```
bigint
numeric
bit
smallint
decimal
smallmoney
int
tinyint
money
```
---
**Approximate Numerics**
```
float
real
```
---
**Date and Time**
```
date
datetimeoffset
datetime2
smalldatetime
datetime
time
```
---
**Character Strings**
```
char
varchar
text
```
---
**Unicode Character Strings**
```
nchar
nvarchar
ntext
```
---
**Binary Strings**
```
binary
varbinary
image
```
---
**Other Data Types**
```
cursor
timestamp
hierarchyid
uniqueidentifier
sql_variant
xml
table
```
(source: [MSDN](http://msdn.microsoft.com/en-us/library/ms187752.aspx)) | This is for [SQL Server 2005](http://msdn.microsoft.com/en-us/library/ms131092(SQL.90).aspx). There are updated versions of the table for [SQL Server 2008](http://msdn.microsoft.com/en-us/library/ms131092%28v=sql.100%29.aspx), [SQL Server 2008 R2](http://msdn.microsoft.com/en-us/library/ms131092%28v=sql.105%29.aspx), [SQL Server 2012](http://msdn.microsoft.com/en-us/library/ms131092%28v=sql.110%29.aspx) and [SQL Server 2014](http://msdn.microsoft.com/en-us/library/ms131092(v=sql.120).aspx).
## SQL Server Data Types and Their .NET Framework Equivalents
The following table lists Microsoft SQL Server data types, their equivalents in the common language runtime (CLR) for SQL Server in the **System.Data.SqlTypes** namespace, and their native CLR equivalents in the Microsoft .NET Framework.
| SQL Server data type | CLR data type (SQL Server) | CLR data type (.NET Framework) |
| --- | --- | --- |
| varbinary | SqlBytes, SqlBinary | Byte[] |
| binary | SqlBytes, SqlBinary | Byte[] |
| varbinary(1), binary(1) | SqlBytes, SqlBinary | byte, Byte[] |
| image | None | None |
| varchar | None | None |
| char | None | None |
| nvarchar(1), nchar(1) | SqlChars, SqlString | Char, String, Char[] |
| nvarchar | SqlChars, SqlString | String, Char[] |
| nchar | SqlChars, SqlString | String, Char[] |
| text | None | None |
| ntext | None | None |
| uniqueidentifier | SqlGuid | Guid |
| rowversion | None | Byte[] |
| bit | SqlBoolean | Boolean |
| tinyint | SqlByte | Byte |
| smallint | SqlInt16 | Int16 |
| int | SqlInt32 | Int32 |
| bigint | SqlInt64 | Int64 |
| smallmoney | SqlMoney | Decimal |
| money | SqlMoney | Decimal |
| numeric | SqlDecimal | Decimal |
| decimal | SqlDecimal | Decimal |
| real | SqlSingle | Single |
| float | SqlDouble | Double |
| smalldatetime | SqlDateTime | DateTime |
| datetime | SqlDateTime | DateTime |
| sql\_variant | None | Object |
| User-defined type(UDT) | None | user-defined type |
| table | None | None |
| cursor | None | None |
| timestamp | None | None |
| xml | SqlXml | None | | SQL Server and .Net Data Type mapping
[](https://i.stack.imgur.com/CBhE9.png) | C# Equivalent of SQL Server DataTypes | [
"",
"c#",
".net",
"sql-server",
""
] |
[Searching on Google](http://www.google.co.uk/search?q=python+daemon) reveals two code snippets. The first result is [this code recipe](http://code.activestate.com/recipes/278731/), which has a lot of documentation and explanation, along with some useful discussion underneath.
However, [another code sample](https://gist.github.com/slor/5946334), whilst not containing so much documentation, includes sample code for passing commands such as start, stop and restart. It also creates a PID file which can be handy for checking if the daemon is already running etc.
These samples both explain how to create the daemon. Are there any additional things that need to be considered? Is one sample better than the other, and why? | **Current solution**
A reference implementation of [PEP 3143 (Standard daemon process library)](https://www.python.org/dev/peps/pep-3143/) is now available as [python-daemon](https://pypi.python.org/pypi/python-daemon).
**Historical answer**
Sander Marechal's [code sample](http://web.archive.org/web/20131017130434/http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/) is superior to the original, which was originally posted in 2004. I once contributed a daemonizer for Pyro, but would probably use Sander's code if I had to do it over. | There are **many fiddly things** to take care of when becoming a [well-behaved daemon process](http://www.python.org/dev/peps/pep-3143/#correct-daemon-behaviour):
* prevent core dumps (many daemons run as root, and core dumps can contain sensitive information)
* behave correctly inside a [`chroot` gaol](https://en.wikipedia.org/wiki/Chroot)
* set UID, GID, working directory, umask, and other process parameters appropriately for the use case
* relinquish elevated `suid`, `sgid` privileges
* close all open file descriptors, with exclusions depending on the use case
* behave correctly if started inside an already-detached context, such as `init`, `inetd`, etc.
* set up signal handlers for sensible daemon behaviour, but also with specific handlers determined by the use case
* redirect the standard streams `stdin`, `stdout`, `stderr` since a daemon process no longer has a controlling terminal
* handle a PID file as a cooperative advisory lock, which is a [whole can of worms in itself](https://stackoverflow.com/questions/688343/reference-for-proper-handling-of-pid-file-on-unix) with many contradictory but valid ways to behave
* allow proper cleanup when the process is terminated
* actually become a daemon process without leading to [zombies](https://en.wikipedia.org/wiki/Zombie_process)
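One of the fiddlier items above — the PID file as a cooperative advisory lock — can be sketched in its simplest flavour like this (hypothetical helper, not taken from either linked recipe; stale-PID handling is deliberately omitted):

```python
import errno
import os
import tempfile

def write_pidfile(path):
    """Claim the PID file with O_EXCL so two daemon instances can't both own it."""
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o644)
    except OSError as e:
        if e.errno == errno.EEXIST:
            raise RuntimeError("daemon already running? pidfile %s exists" % path)
        raise
    with os.fdopen(fd, "w") as f:
        f.write("%d\n" % os.getpid())

pidfile = os.path.join(tempfile.mkdtemp(), "mydaemon.pid")
write_pidfile(pidfile)
print(open(pidfile).read().strip())  # this process's PID
```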
Some of these are **standard**, as described in canonical Unix literature (*Advanced Programming in the UNIX Environment*, by the late W. Richard Stevens, Addison-Wesley, 1992). Others, such as stream redirection and [PID file handling](https://stackoverflow.com/questions/688343/reference-for-proper-handling-of-pid-file-on-unix), are **conventional behaviour** most daemon users would expect but that are less standardised.
All of these are covered by the **[PEP 3143](http://www.python.org/dev/peps/pep-3143) “Standard daemon process library” specification**. The [python-daemon](http://pypi.python.org/pypi/python-daemon/) reference implementation works on Python 2.7 or later, and Python 3.2 or later. | How do you create a daemon in Python? | [
"",
"python",
"daemon",
""
] |
What would be the best way to implement, store, and render spherical worlds, such as the ones in spore or infinity but without the in-between stages of spore, and multiple worlds ala infinity universe. Make no assumptions on how the planet itself is generated or its size/scale. | For rendering, you'll need to use some sort of level-of-detail algorithm in order to seamlessly move from close to the planet's surface to far away. There are many dynamic LOD algorithms ([see here](http://www.vterrain.org/LOD/Papers/)). An older algorithm, called ROAM, can be adapted to handle spherical objects, or planets ([spherical ROAM](http://www.gamasutra.com/features/20010810/oneil_01.htm)). Geometry clipmaps is a newer, more popular algorithm that can be adapted to spherical surfaces [as well](http://www.zib.de/clasen/?page_id=6).
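To make the subdivision idea behind those LOD schemes concrete, here is a toy sketch (hypothetical Python, not ROAM or clipmaps themselves): each refinement step splits every spherical triangle into four and re-projects the new midpoints onto the unit sphere.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def subdivide(faces):
    """One LOD step: split each spherical triangle into four,
    pushing the new midpoints back onto the unit sphere."""
    out = []
    for a, b, c in faces:
        ab = normalize([(x + y) / 2 for x, y in zip(a, b)])
        bc = normalize([(x + y) / 2 for x, y in zip(b, c)])
        ca = normalize([(x + y) / 2 for x, y in zip(c, a)])
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

# Start from an octahedron (8 triangles) for brevity.
X, Y, Z = (1, 0, 0), (0, 1, 0), (0, 0, 1)
neg = lambda v: tuple(-c for c in v)
faces = [(sx, sy, sz) for sx in (X, neg(X)) for sy in (Y, neg(Y)) for sz in (Z, neg(Z))]

for level in range(3):
    faces = subdivide(faces)
print(len(faces))  # 8 * 4**3 = 512 triangles after three refinement steps
```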
As for storing the data, you may want to look into procedural generation (depending on your needs) for texturing, heightmaps, etc. This is similar to how Infinity and Spore do things. You can read a little about procedural texturing [here](http://www.infinity-universe.com/Infinity/index.php?option=com_smf&Itemid=75&topic=7598.0). Procedural heightmaps are simpler, depending on how complex/realistic you want your terrain. On the simplest level, you can simply displace your vertex height by a perlin noise function. | If you are looking for something to store data about the surface in, you might look at [HEALpix](http://healpix.jpl.nasa.gov/). It is software developed by the astronomical community specifically for mapping the sky (another spherical surface).
HEALpix creates a mesh that describes the position and size of the surface faces and assigns each one an ID. You can then use that ID as the key or index to access as much detail as you want about that particular level.
HEASpix provides methods to find neighboring surface areas and can give center and vertex positions for each mesh point.
It is a hierarchical mesh that allows you to subdivide each face as much as you want so you could in theory have parts of the mesh at low resolution and other parts at higher levels of detail if you wanted. It has the nice property that it is infinitely divisible (up to your memory limits) and each pixel of the grid at a given resolution has the same area as all the other ones at that resolution level.
The distribution package provides a lot of stuff that you probably won't need but the core libraries should be useful. | Sphere World Implementation C++ | [
"",
"c++",
"data-structures",
"3d",
"3d-rendering",
""
] |
I'm working with an incorrectly built spring application. Rather than use IOC, objects that require references are pulling their references from the context:
```
BeanFactory beanFactory = SingletonBeanFactoryLocator.getInstance().
                  useBeanFactory("factory").getFactory();
Foo foo = (Foo) beanFactory.getBean("foo");
```
Putting aside the non-IOC design, what are the other detrimental effects of this? For example, does this have any particular performance implications? Is there any way that this could result in the creation of additional contexts or object references? Anything else really unpleasant that this could cause? | The old phrase "you can program fortran in any language" comes to mind. Using spring as a service locator sounds like you're missing out on most of the things that make spring nice. Spring isn't really a pretty service locator either. It also makes it harder to make unit tests, since you loose some of the really nice loose coupling. (You're tightening the coupling in this way)
IMO these arguments alone are reason enough to convert. I wouldn't even start talking about performance, which others have pointed out is not really an issue unless you're in some tight loops.
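The testability point is the crux; a toy contrast (hypothetical Python, since the pattern is language-agnostic — the real code here is of course Java/Spring):

```python
registry = {}  # stand-in for the global BeanFactory lookup

class NeedsLocator:
    def work(self):
        # Pulls its dependency out of a global registry: hidden coupling,
        # and every unit test must populate the registry first.
        return registry["repo"].fetch()

class NeedsInjection:
    def __init__(self, repo):   # the dependency is explicit in the constructor
        self.repo = repo
    def work(self):
        return self.repo.fetch()

class FakeRepo:
    def fetch(self):
        return "row"

registry["repo"] = FakeRepo()
print(NeedsLocator().work())              # row
print(NeedsInjection(FakeRepo()).work())  # row -- no global setup required
```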
On the plus side, you can probably really easily convert to spring annotations, which is probably the reason the original author of your code didn't do the "correct" spring; he didn't like all the xml. With annotations you don't need all that xml. | One possible issue that came straight to mind is that if that code is used in a functional method instead of some initialization method, the bean gets re-fetched a lot which most likely slows things down.
That's not really maintainable either, if the name of the bean is changed all the references to it have to be updated by hand or nothing will work. This of course extends to beans which depend on other beans and maybe even those beans depend on other beans too, something like DataSource as bean to common JDBC Template for non-transactional database queries which is injected to a general-purpose container class from which all the other classes fetch such objects. | Effects of incorrect spring initialization | [
"",
"java",
"spring",
""
] |
I'd like to dynamically generate content and then render to a PDF file. This processing would take place on a remote hosting server so using virtual printers etc is out. Does any have a recommendation for a .NET library (pref C#) that would work?
I know that I could generate a bunch of PS code and package it myself but I'd prefer something a little less tricksy at this stage.
Thanks! | I have had good success using [SharpPDF](http://sharppdf.sourceforge.net/index.html). | Have a look at <http://itextsharp.sourceforge.net/>. Its open source.
Tutorial: <http://itextdocs.lowagie.com/tutorial/> | .NET server based PDF generation | [
"",
"c#",
".net",
"pdf",
"printing",
"html2pdf",
""
] |
Is there a way to send a MIME email in which the body of the email is derived from a JSP? I need to send an email with Javamail which contains a table and I figure it would be convenient if I could use a JSP to do all the formatting and layout. | In this thread, [Suggestions for Java Email Templating](https://stackoverflow.com/questions/456148/suggestions-for-java-email-templating), a pal Jack Leow tells how he did that using JSP/Servlet. That might be of some help. | To do so I think you'd basically have to have a Tomcat (or equivalent) server running and, while you're constructing your email, send a request to the server to invoke your JSP, which would then send back the contents of the email (or the HTML part of the email) as the response. It doesn't seem very elegant, though.
You'd probably be better off with a separate templating system. Here's an interesting-looking [answer](https://stackoverflow.com/questions/195437/loading-jsp-pages-from-custom-sources#195503) from a related question.
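The separate-template idea is simple enough to sketch; a hypothetical illustration (Python's `string.Template` standing in for Velocity/FreeMarker/StringTemplate and friends):

```python
from string import Template

email_tmpl = Template("""\
<html><body>
<p>Hello $name,</p>
<table border="1">
$rows
</table>
</body></html>
""")

# Build the table rows from data, then fill the template placeholders.
rows = "\n".join("<tr><td>%s</td><td>%s</td></tr>" % (k, v)
                 for k, v in [("Item", "Widget"), ("Qty", "3")])
body = email_tmpl.substitute(name="Alice", rows=rows)
print("Widget" in body and "<table" in body)  # True
```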
(shameless plug: I happen to like [Clearsilver](http://www.clearsilver.net) as a template system, if you happen to be on UNIX/Linux) | JSP as Email template | [
"",
"java",
"email",
"jsp",
""
] |
I am getting a little confused and need some help please. Take these two classes
```
public class Comment
{
public string Message {get; set;}
public DateTime Created {get; set;}
}
public class Post
{
public int PostId {get; set;}
public string Content {get; set;}
public IList<Comment> Comments {get; set;}
}
```
I want to write a LINQ query which returns a single Post but ordered by the comment created date.
So I started off constructing my LINQ query as follows:
```
var query = from p in _repository.GetPosts()
where p.PostId == id
orderby p.Comments.Select(x => x.Created)
select p;
return query.Single();
```
But the orderby statement seems not to work! It just returns my list in the default sort order. Any suggestions on how I can make this work???
Thanks in advance! | Ordered by *which* comment date? the first? the last? You could try:
```
orderby p.Comments.Max(x=>x.Created)
```
for example.
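The Max ordering is easy to picture with a quick sketch (hypothetical Python; posts are (id, [comment dates]) pairs, and comment-less posts sort as oldest):

```python
from datetime import datetime

posts = [
    (1, [datetime(2009, 1, 5), datetime(2009, 3, 1)]),
    (2, [datetime(2009, 2, 10)]),
    (3, []),  # no comments yet
]

# Equivalent of orderby p.Comments.Max(x => x.Created), ascending.
ordered = sorted(posts, key=lambda p: max(p[1], default=datetime.min))
print([pid for pid, _ in ordered])  # [3, 2, 1]
```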
Also - your `Single` suggests you expect **exactly** one row, in which case there isn't much point sorting it. Do you mean `First()` ?
---
Or do you mean that you want to sort the `Comments`? In which case, get the `Post` first;
```
Post post = ...
```
Now... sorting the `Comments` is a little tricky because of your `IList<T>` - if you don't mind it being a little inefficient, this is simple:
```
post.Comments = post.Comments.OrderBy(x=>x.Created).ToList();
```
Of course, if the `Comments` was `List<T>`, you could do:
```
post.Comments.Sort((x, y) => (x.Created.CompareTo(y.Created)));
```
There are also tricks you can do to make an extension method of the form:
```
post.Comments.Sort(x=>x.Created);
```
i.e.
```
public static void Sort<TSource, TKey>(
this List<TSource> source,
Func<TSource, TKey> selector)
{
var comparer = Comparer<TKey>.Default;
source.Sort((x, y) => comparer.Compare(selector(x), selector(y)));
}
``` | Your orderby projection is returning an `IEnumerable<DateTime>` - that sounds unlikely to be what you want.
A post has many comments - which one do you want to take as the one to use the created date of for ordering? My guess is the first:
```
var query = from p in _repository.GetPosts()
            where p.PostId == id
            let comment = p.Comments.FirstOrDefault()
            orderby comment == null ? DateTime.MinValue : comment.Created
            select p;
``` | Help with linq orderby on class aggregate relationships | [
"",
"c#",
"linq-to-objects",
""
] |