| Prompt | Chosen | Rejected | Title | Tags |
|---|---|---|---|---|
What are the ways that you use to model and retrieve hierarchical info in a database?
|
The definitive pieces on this subject have been written by Joe Celko, who has worked a number of them into a book called *Joe Celko's Trees and Hierarchies in SQL for Smarties*.
He favours a technique called directed graphs. An introduction to his work on this subject can be found [here](http://web.archive.org/web/20081102031702/http://www.intelligententerprise.com/001020/celko.jhtml).
|
I like the Modified Preorder Tree Traversal Algorithm. This technique makes it very easy to query the tree.
But here is a list of links about the topic, which I copied from the Zend Framework (PHP) contributors webpage (posted there by Laurent Melmoux on Jun 05, 2007 15:52).
Many of the links are language agnostic:
There are two main representations and algorithms for representing hierarchical structures in databases:
* nested set also known as modified preorder tree traversal algorithm
* adjacency list model
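The two models above can be sketched concretely. Here is a minimal, runnable example of the adjacency list model (the table name and sample data are hypothetical), using SQLite, which, like MySQL 8+, can fetch a whole subtree in one query with a recursive common table expression; the nested set model exists largely to make this kind of query cheap on engines without recursive queries:

```python
# Adjacency list model: each row stores a pointer to its parent.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE category (
        id INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES category(id),
        name TEXT NOT NULL
    );
    INSERT INTO category (id, parent_id, name) VALUES
        (1, NULL, 'Electronics'),
        (2, 1,    'Televisions'),
        (3, 1,    'Portable Electronics'),
        (4, 3,    'MP3 Players');
""")

# Retrieve the whole tree in one query with a recursive CTE.
rows = conn.execute("""
    WITH RECURSIVE subtree(id, name, depth) AS (
        SELECT id, name, 0 FROM category WHERE parent_id IS NULL
        UNION ALL
        SELECT c.id, c.name, s.depth + 1
        FROM category AS c
        JOIN subtree AS s ON c.parent_id = s.id
    )
    SELECT name, depth FROM subtree
""").fetchall()

for name, depth in rows:
    print("  " * depth + name)  # each category indented by its depth
```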
It's well explained here:
* <http://www.sitepoint.com/article/hierarchical-data-database>
* [Managing Hierarchical Data in MySQL](http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/)
* [http://www.evolt.org/article/Four\_ways\_to\_work\_with\_hierarchical\_data/17/4047/index.html](http://evolt.org/node/4047/)
Here are some more links that I've collected:
* <http://en.wikipedia.org/wiki/Tree_%28data_structure%29>
* <http://en.wikipedia.org/wiki/Category:Trees_%28structure%29>
adjacency list model
* <http://www.sqlteam.com/item.asp?ItemID=8866>
nested set
* <http://www.sqlsummit.com/AdjacencyList.htm>
* <http://www.edutech.ch/contribution/nstrees/index.php>
* <http://www.phpriot.com/d/articles/php/application-design/nested-trees-1/>
* <http://www.dbmsmag.com/9604d06.html>
* <http://en.wikipedia.org/wiki/Tree_traversal>
* <http://www.cosc.canterbury.ac.nz/mukundan/dsal/BTree.html> (Java applet demonstrating how it works)
Graphs
* <http://www.artfulsoftware.com/mysqlbook/sampler/mysqled1ch20.html>
Classes :
Nested Sets DB Tree Adodb
* <http://www.phpclasses.org/browse/package/2547.html>
Visitation Model ADOdb
* <http://www.phpclasses.org/browse/package/2919.html>
PEAR::DB\_NestedSet
* <http://pear.php.net/package/DB_NestedSet>
* usage: <https://www.entwickler.com/itr/kolumnen/psecom,id,26,nodeid,207.html>
PEAR::Tree
* <http://pear.php.net/package/Tree/download/0.3.0/>
* <http://www.phpkitchen.com/index.php?/archives/337-PEARTree-Tutorial.html>
nstrees
* <http://www.edutech.ch/contribution/nstrees/index.php>
|
SQL - How to store and navigate hierarchies?
|
[
"",
"sql",
"sql-server",
"oracle",
"database-design",
"hierarchy",
""
] |
What is the best full text search alternative to Microsoft SQL? (which works with MS SQL)
I'm looking for something similar to [Lucene](http://lucene.apache.org/java/docs/index.html) and [Lucene.NET](http://incubator.apache.org/lucene.net/) but without the .NET and Java requirements. I would also like to find a solution that is usable in commercial applications.
|
Take a look at [CLucene](http://clucene.wiki.sourceforge.net/) - it's a well-maintained C++ port of Java Lucene. It's currently licensed under the LGPL and we use it in our commercial application.
Performance is incredible, however you do have to get your head around some of the strange API conventions.
|
[Sphinx](http://www.sphinxsearch.com) is one of the best solutions. It's written in C++ and has amazing performance.
|
Best full text search alternative to MS SQL, C++ solution
|
[
"",
"c++",
"sql-server",
"full-text-search",
"lucene",
"lucene.net",
""
] |
My website was recently attacked by what seemed to me to be innocent code:
```
<?php
if (isset($_GET['page'])) {
    include($_GET['page'] . ".php");
} else {
    include("home.php");
}
?>
```
There were no SQL calls, so I wasn't afraid of SQL injection. But, apparently, SQL isn't the only kind of injection.
This website has an explanation and a few examples of avoiding code injection: <http://www.theserverpages.com/articles/webmasters/php/security/Code_Injection_Vulnerabilities_Explained.html>
How would you protect this code from code injection?
|
Use a whitelist and make sure the page is in the whitelist:
```
$whitelist = array('home', 'page');
if (in_array($_GET['page'], $whitelist)) {
    include($_GET['page'] . '.php');
} else {
    include('home.php');
}
```
|
Another way to sanitize the input is to make sure that only allowed characters (no "/", ".", ":", ...) are in it. However, don't use a blacklist of *bad* characters, but a whitelist of allowed characters:
```
$page = preg_replace('/[^a-zA-Z0-9]/', '', $page);
```
... followed by a file\_exists.
That way you can make sure that only scripts you want to be executed are executed (for example this would rule out a "blabla.inc.php", because "." is not allowed).
Note: This is kind of a "hack", because then the user could request "h.o.m.e" and it would give the "home" page, since all it does is remove prohibited characters. It's not intended to stop "smartasses" who want to do cute stuff with your page, but it will stop people doing *really bad* things.
BTW: Another thing you could do in your **.htaccess** file is to prevent obvious attack attempts:
```
RewriteEngine on
RewriteCond %{QUERY_STRING} http[:%] [NC]
RewriteRule .* /–http– [F,NC]
RewriteRule http: /–http– [F,NC]
```
That way, all page accesses with "http:" in the URL (and query string) result in a "Forbidden" error message, without even reaching the PHP script. That means less server load.
However, keep in mind that no "http" is then allowed in the query string. Your website MIGHT require it in some cases (maybe when filling out a form).
BTW: If you can read German, I also have a [blog post](http://blogs.interdose.com/dominik/2008/03/20/sicherer-php-code-php-code-injection-verhindern/) on that topic.
|
Best way to avoid code injection in PHP
|
[
"",
"php",
"security",
"code-injection",
""
] |
I'm getting this problem:
```
PHP Warning: mail() [function.mail]: SMTP server response: 550 5.7.1 Unable to relay for chris.mahan@gmail.com in c:\inetpub\wwwroot\mailtest.php on line 12
```
from this script:
```
<?php
$to = "chris.mahan@gmail.com";
$subject = "test";
$body = "this is a test";
if (mail($to, $subject, $body)) {
    echo "mail sent";
} else {
    echo "problem";
}
?>
```
section from php.ini on the server:
```
[mail function]
; For Win32 only.
SMTP = server.domain.com; for Win32 only
smtp_port = 25
; For Win32 only.
sendmail_from = support@domain.com
; For Unix only. You may supply arguments as well (default: "sendmail -t -i").
;sendmail_path =
```
(note that "server" and "domain" refer accurately to the actual server and domain name)
In IIS, SMTP is running. Under the "Access" tab, "Relay" button, "Select which computers may relay through this virtual server" is set to "Only the list below", and the list contains "127.0.0.1 (xxx.xxx.xxx.xxx)" (x's representing the actual server IP address).
The server is running `Windows Server 2003 Service Pack 2`, fully patched as of 5 PM Sept 1st 2008. I assume it is running `IIS7` (how do I check?).
Any ideas?
In response to [Espo](https://stackoverflow.com/users/2257/espo): this machine is hosted at a datacenter. We do not want to use a Gmail account (we're doing that now and want to move away from it). Windows Server 2003 comes with its own SMTP server.
Update: Per Yaakov Ellis' advice, I dropped all relay restrictions and added the server IP to the allowed list (using the reverse DNS button provided) and the thing started working.
Thanks to both Espo and Yaakov for helping me out.
|
Try removing the IP restrictions for Relaying in the SMTP server, and opening it up to all relays. If it works when this is set, then you know that the problem has to do with the original restrictions. In this case, it may be a DNS issue, or perhaps you had the wrong IP address listed.
|
You are using the wrong SMTP server. If you are only going to send emails to your Gmail account, have a look at my answer [here](https://stackoverflow.com/questions/29988/how-to-send-email-from-a-program-without-using-a-preexisting-account#30001).
If you also need to send email to other accounts, ask your ISP for your SMTP details.
EDIT: I think it is always better to use the ISP's SMTP server, as they (should) have people monitoring the mail queues, checking for exploits and updating the mail software. If your business is developing web applications, it is almost always best to stick with what you do and let other people do their stuff (e.g. running mail servers).
If you still, for some reason, want to use your local SMTP server, the first thing to do is rule out the PHP part. Try following [KB153119](http://support.microsoft.com/kb/153119) and then check your SMTP server's IIS log for errors.
EDIT 2:
That KB article says it is for Exchange, but the same commands are used for other SMTP servers (including IIS) as well, so please try and see if you can send mail using the examples from the article.
|
php mail() not working windows 2003, IIS SMTP
|
[
"",
"php",
"iis",
"smtp",
""
] |
I find that getting Unicode support right in my cross-platform apps is a real pain in the butt.
I need strings that can go from C code, to a database, to a Java application and into a Perl module. Each of these uses a different Unicode encoding (UTF-8, UTF-16) or some other code page. The biggest thing I need is a cross-platform way of doing conversions.
What kind of tools, libraries or techniques do people use to make handling these things easier?
|
Have a look at this: [http://www.icu-project.org/](http://www.icu-project.org/ "International Components for Unicode")
|
Perl has [Encode](http://search.cpan.org/~dankogai/Encode/Encode.pm) as a standard library. It can be used to read/write any encoding you want, so that's not going to be a problem.
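Whatever library does the conversion, the underlying model is the same: decode the source bytes into one internal Unicode representation, then encode to the target. A small sketch of that round trip (shown here in Python purely for illustration; ICU and Perl's `Encode` follow the same decode/encode model):

```python
# Decode bytes into an internal Unicode string, then encode to the
# target encoding; converting between encodings is always a two-step trip.
text = "Grüße, 世界"                    # internal Unicode representation

utf8_bytes = text.encode("utf-8")
utf16_bytes = text.encode("utf-16")     # includes a BOM by default

# Converting UTF-8 -> UTF-16 goes through the internal form:
converted = utf8_bytes.decode("utf-8").encode("utf-16")
assert converted == utf16_bytes

# Legacy code pages work the same way, for the characters they can hold:
assert "Grüße".encode("latin-1").decode("latin-1") == "Grüße"
```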
|
cross platform unicode support
|
[
"",
"java",
"c",
"perl",
"unicode",
"cross-platform",
""
] |
I would like to test a string containing a path to a file for existence of that file (something like the `-e` test in Perl or the `os.path.exists()` in Python) in C#.
|
Use:
```
File.Exists(path)
```
MSDN: <http://msdn.microsoft.com/en-us/library/system.io.file.exists.aspx>
Edit: In System.IO
|
[System.IO.File](http://msdn.microsoft.com/en-us/library/system.io.file.exists.aspx):
```
using System.IO;

if (File.Exists(path))
{
    Console.WriteLine("file exists");
}
```
|
How to find out if a file exists in C# / .NET?
|
[
"",
"c#",
".net",
"io",
""
] |
I want to merge two dictionaries into a new dictionary.
```
x = {'a': 1, 'b': 2}
y = {'b': 3, 'c': 4}
z = merge(x, y)
>>> z
{'a': 1, 'b': 3, 'c': 4}
```
Whenever a key `k` is present in both dictionaries, only the value `y[k]` should be kept.
|
## How can I merge two Python dictionaries in a single expression?
For dictionaries `x` and `y`, their shallowly-merged dictionary `z` takes values from `y`, replacing those from `x`.
* In Python 3.9.0 or greater (released 17 October 2020, [`PEP-584`](https://www.python.org/dev/peps/pep-0584/), [discussed here](https://bugs.python.org/issue36144)):
```
z = x | y
```
* In Python 3.5 or greater:
```
z = {**x, **y}
```
* In Python 2, (or 3.4 or lower) write a function:
```
def merge_two_dicts(x, y):
    z = x.copy()    # start with keys and values of x
    z.update(y)     # modifies z with keys and values of y
    return z
```
and now:
```
z = merge_two_dicts(x, y)
```
### Explanation
Say you have two dictionaries and you want to merge them into a new dictionary without altering the original dictionaries:
```
x = {'a': 1, 'b': 2}
y = {'b': 3, 'c': 4}
```
The desired result is to get a new dictionary (`z`) with the values merged, and the second dictionary's values overwriting those from the first.
```
>>> z
{'a': 1, 'b': 3, 'c': 4}
```
A new syntax for this, proposed in [PEP 448](https://www.python.org/dev/peps/pep-0448) and [available as of Python 3.5](https://mail.python.org/pipermail/python-dev/2015-February/138564.html), is
```
z = {**x, **y}
```
And it is indeed a single expression.
Note that we can merge in with literal notation as well:
```
z = {**x, 'foo': 1, 'bar': 2, **y}
```
and now:
```
>>> z
{'a': 1, 'b': 3, 'foo': 1, 'bar': 2, 'c': 4}
```
It is now showing as implemented in the [release schedule for 3.5, PEP 478](https://www.python.org/dev/peps/pep-0478/#features-for-3-5), and it has now made its way into the [What's New in Python 3.5](https://docs.python.org/dev/whatsnew/3.5.html#pep-448-additional-unpacking-generalizations) document.
However, since many organizations are still on Python 2, you may wish to do this in a backward-compatible way. The classically Pythonic way, available in Python 2 and Python 3.0-3.4, is to do this as a two-step process:
```
z = x.copy()
z.update(y) # which returns None since it mutates z
```
In both approaches, `y` will come second and its values will replace `x`'s values, thus `b` will point to `3` in our final result.
## Not yet on Python 3.5, but want a *single expression*
If you are not yet on Python 3.5, or need to write backward-compatible code, and you want this in a *single expression*, the most performant correct approach is to put it in a function:
```
def merge_two_dicts(x, y):
    """Given two dictionaries, merge them into a new dict as a shallow copy."""
    z = x.copy()
    z.update(y)
    return z
```
and then you have a single expression:
```
z = merge_two_dicts(x, y)
```
You can also make a function to merge an arbitrary number of dictionaries, from zero to a very large number:
```
def merge_dicts(*dict_args):
    """
    Given any number of dictionaries, shallow copy and merge into a new dict,
    precedence goes to key-value pairs in latter dictionaries.
    """
    result = {}
    for dictionary in dict_args:
        result.update(dictionary)
    return result
```
This function will work in Python 2 and 3 for all dictionaries. e.g. given dictionaries `a` to `g`:
```
z = merge_dicts(a, b, c, d, e, f, g)
```
and key-value pairs in `g` will take precedence over dictionaries `a` to `f`, and so on.
## Critiques of Other Answers
Don't use what you see in the formerly accepted answer:
```
z = dict(x.items() + y.items())
```
In Python 2, you create two lists in memory for each dict, create a third list in memory with length equal to the length of the first two put together, and then discard all three lists to create the dict. **In Python 3, this will fail** because you're adding two `dict_items` objects together, not two lists -
```
>>> c = dict(a.items() + b.items())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'dict_items' and 'dict_items'
```
and you would have to explicitly create them as lists, e.g. `z = dict(list(x.items()) + list(y.items()))`. This is a waste of resources and computation power.
Similarly, taking the union of `items()` in Python 3 (`viewitems()` in Python 2.7) will also fail when values are unhashable objects (like lists, for example). Even if your values are hashable, **since sets are semantically unordered, the behavior is undefined in regards to precedence. So don't do this:**
```
>>> c = dict(a.items() | b.items())
```
This example demonstrates what happens when values are unhashable:
```
>>> x = {'a': []}
>>> y = {'b': []}
>>> dict(x.items() | y.items())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'
```
Here's an example where `y` should have precedence, but instead the value from `x` is retained due to the arbitrary order of sets:
```
>>> x = {'a': 2}
>>> y = {'a': 1}
>>> dict(x.items() | y.items())
{'a': 2}
```
Another hack you should not use:
```
z = dict(x, **y)
```
This uses the `dict` constructor and is very fast and memory-efficient (even slightly more so than our two-step process) but unless you know precisely what is happening here (that is, the second dict is being passed as keyword arguments to the dict constructor), it's difficult to read, it's not the intended usage, and so it is not Pythonic.
Here's an example of the usage being [remediated in django](https://code.djangoproject.com/attachment/ticket/13357/django-pypy.2.diff).
Dictionaries are intended to take hashable keys (e.g. `frozenset`s or tuples), but **this method fails in Python 3 when keys are not strings.**
```
>>> c = dict(a, **b)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: keyword arguments must be strings
```
From the [mailing list](https://mail.python.org/pipermail/python-dev/2010-April/099459.html), Guido van Rossum, the creator of the language, wrote:
> I am fine with
> declaring dict({}, \*\*{1:3}) illegal, since after all it is abuse of
> the \*\* mechanism.
and
> Apparently dict(x, \*\*y) is going around as "cool hack" for "call
> x.update(y) and return x". Personally, I find it more despicable than
> cool.
It is my understanding (as well as the understanding of the [creator of the language](https://mail.python.org/pipermail/python-dev/2010-April/099485.html)) that the intended usage for `dict(**y)` is for creating dictionaries for readability purposes, e.g.:
```
dict(a=1, b=10, c=11)
```
instead of
```
{'a': 1, 'b': 10, 'c': 11}
```
## Response to comments
> Despite what Guido says, `dict(x, **y)` is in line with the dict specification, which btw. works for both Python 2 and 3. The fact that this only works for string keys is a direct consequence of how keyword parameters work and not a short-coming of dict. Nor is using the \*\* operator in this place an abuse of the mechanism, in fact, \*\* was designed precisely to pass dictionaries as keywords.
Again, it doesn't work for 3 when keys are not strings. The implicit calling contract is that namespaces take ordinary dictionaries, while users must only pass keyword arguments that are strings. All other callables enforced it. `dict` broke this consistency in Python 2:
```
>>> foo(**{('a', 'b'): None})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: foo() keywords must be strings
>>> dict(**{('a', 'b'): None})
{('a', 'b'): None}
```
This inconsistency was bad given other implementations of Python (PyPy, Jython, IronPython). Thus it was fixed in Python 3, as this usage could be a breaking change.
I submit to you that it is malicious incompetence to intentionally write code that only works in one version of a language or that only works given certain arbitrary constraints.
More comments:
> `dict(x.items() + y.items())` is still the most readable solution for Python 2. Readability counts.
My response: `merge_two_dicts(x, y)` actually seems much clearer to me, if we're actually concerned about readability. And it is not forward compatible, as Python 2 is increasingly deprecated.
> `{**x, **y}` does not seem to handle nested dictionaries. the contents of nested keys are simply overwritten, not merged [...] I ended up being burnt by these answers that do not merge recursively and I was surprised no one mentioned it. In my interpretation of the word "merging" these answers describe "updating one dict with another", and not merging.
Yes. I must refer you back to the question, which is asking for a *shallow* merge of ***two*** dictionaries, with the first's values being overwritten by the second's - in a single expression.
Assuming two dictionaries of dictionaries, one might recursively merge them in a single function, but you should be careful not to modify the dictionaries from either source, and the surest way to avoid that is to make a copy when assigning values. As keys must be hashable and are usually therefore immutable, it is pointless to copy them:
```
from copy import deepcopy

def dict_of_dicts_merge(x, y):
    z = {}
    overlapping_keys = x.keys() & y.keys()
    for key in overlapping_keys:
        z[key] = dict_of_dicts_merge(x[key], y[key])
    for key in x.keys() - overlapping_keys:
        z[key] = deepcopy(x[key])
    for key in y.keys() - overlapping_keys:
        z[key] = deepcopy(y[key])
    return z
```
Usage:
```
>>> x = {'a':{1:{}}, 'b': {2:{}}}
>>> y = {'b':{10:{}}, 'c': {11:{}}}
>>> dict_of_dicts_merge(x, y)
{'b': {2: {}, 10: {}}, 'a': {1: {}}, 'c': {11: {}}}
```
Coming up with contingencies for other value types is far beyond the scope of this question, so I will point you at [my answer to the canonical question on a "Dictionaries of dictionaries merge"](https://stackoverflow.com/a/24088493/541136).
## Less Performant But Correct Ad-hocs
These approaches are less performant, but they will provide correct behavior.
They will be *much less* performant than `copy` and `update` or the new unpacking, because they iterate through each key-value pair at a higher level of abstraction, but they *do* respect the order of precedence (latter dictionaries have precedence).
You can also chain the dictionaries manually inside a [dict comprehension](https://www.python.org/dev/peps/pep-0274/):
```
{k: v for d in dicts for k, v in d.items()} # iteritems in Python 2.7
```
or in Python 2.6 (and perhaps as early as 2.4 when generator expressions were introduced):
```
dict((k, v) for d in dicts for k, v in d.items()) # iteritems in Python 2
```
`itertools.chain` will chain the iterators over the key-value pairs in the correct order:
```
from itertools import chain
z = dict(chain(x.items(), y.items())) # iteritems in Python 2
```
## Performance Analysis
I'm only going to do the performance analysis of the usages known to behave correctly. (Self-contained so you can copy and paste yourself.)
```
from timeit import repeat
from itertools import chain

x = dict.fromkeys('abcdefg')
y = dict.fromkeys('efghijk')

def merge_two_dicts(x, y):
    z = x.copy()
    z.update(y)
    return z
min(repeat(lambda: {**x, **y}))
min(repeat(lambda: merge_two_dicts(x, y)))
min(repeat(lambda: {k: v for d in (x, y) for k, v in d.items()}))
min(repeat(lambda: dict(chain(x.items(), y.items()))))
min(repeat(lambda: dict(item for d in (x, y) for item in d.items())))
```
In Python 3.8.1, NixOS:
```
>>> min(repeat(lambda: {**x, **y}))
1.0804965235292912
>>> min(repeat(lambda: merge_two_dicts(x, y)))
1.636518670246005
>>> min(repeat(lambda: {k: v for d in (x, y) for k, v in d.items()}))
3.1779992282390594
>>> min(repeat(lambda: dict(chain(x.items(), y.items()))))
2.740647904574871
>>> min(repeat(lambda: dict(item for d in (x, y) for item in d.items())))
4.266070580109954
```
```
$ uname -a
Linux nixos 4.19.113 #1-NixOS SMP Wed Mar 25 07:06:15 UTC 2020 x86_64 GNU/Linux
```
## Resources on Dictionaries
* [My explanation of Python's **dictionary implementation**, updated for 3.6.](https://stackoverflow.com/questions/327311/how-are-pythons-built-in-dictionaries-implemented/44509302#44509302)
* [Answer on how to add new keys to a dictionary](https://stackoverflow.com/questions/1024847/add-new-keys-to-a-dictionary/27208535#27208535)
* [Mapping two lists into a dictionary](https://stackoverflow.com/questions/209840/map-two-lists-into-a-dictionary-in-python/33737067#33737067)
* [The official Python docs on dictionaries](https://docs.python.org/3/tutorial/datastructures.html#dictionaries)
* [The Dictionary Even Mightier](https://www.youtube.com/watch?v=66P5FMkWoVU) - talk by Brandon Rhodes at Pycon 2017
* [Modern Python Dictionaries, A Confluence of Great Ideas](https://www.youtube.com/watch?v=npw4s1QTmPg) - talk by Raymond Hettinger at Pycon 2017
|
In your case, you can do:
```
z = dict(list(x.items()) + list(y.items()))
```
This will, as you want it, put the final dict in `z`, and make the value for key `b` be properly overridden by the second (`y`) dict's value:
```
>>> x = {'a': 1, 'b': 2}
>>> y = {'b': 10, 'c': 11}
>>> z = dict(list(x.items()) + list(y.items()))
>>> z
{'a': 1, 'c': 11, 'b': 10}
```
If you use Python 2, you can even remove the `list()` calls. To create z:
```
>>> z = dict(x.items() + y.items())
>>> z
{'a': 1, 'c': 11, 'b': 10}
```
If you use Python version 3.9.0a4 or greater, you can directly use:
```
>>> x = {'a': 1, 'b': 2}
>>> y = {'b': 10, 'c': 11}
>>> z = x | y
>>> z
{'a': 1, 'c': 11, 'b': 10}
```
|
How do I merge two dictionaries in a single expression in Python?
|
[
"",
"python",
"dictionary",
"merge",
""
] |
If I have interface IFoo, and have several classes that implement it, what is the best/most elegant/cleverest way to test all those classes against the interface?
I'd like to reduce test code duplication, but still 'stay true' to the principles of Unit testing.
What would you consider best practice? I'm using NUnit, but I suppose examples from any unit testing framework would be valid.
|
If you have classes that implement any one interface, then they all need to implement the methods in that interface. In order to test these classes, you need to create a unit test class for each of them.
Let's go with a smarter route instead; if your goal is to **avoid code and test code duplication**, you might want to create an abstract class that handles the **recurring** code.
E.g. you have the following interface:
```
public interface IFoo {
    void CommonCode();
    void SpecificCode();
}
```
You might want to create an abstract class:
```
public abstract class AbstractFoo : IFoo {
    public void CommonCode() {
        SpecificCode();
    }
    public abstract void SpecificCode();
}
```
Testing that is easy; implement the abstract class in the test class either as an inner class:
```
[TestFixture]
public class TestClass {
    private class TestFoo : AbstractFoo {
        public bool hasCalledSpecificCode = false;
        public override void SpecificCode() {
            hasCalledSpecificCode = true;
        }
    }

    [Test]
    public void TestCommonCallsSpecificCode() {
        TestFoo fooFighter = new TestFoo();
        fooFighter.CommonCode();
        Assert.That(fooFighter.hasCalledSpecificCode, Is.True);
    }
}
```
...or let the test class extend the abstract class itself if that fits your fancy.
```
[TestFixture]
public class TestClass : AbstractFoo {
    private bool hasCalledSpecificCode;

    public override void SpecificCode() {
        hasCalledSpecificCode = true;
    }

    [Test]
    public void TestCommonCallsSpecificCode() {
        AbstractFoo fooFighter = this;
        hasCalledSpecificCode = false;
        fooFighter.CommonCode();
        Assert.That(hasCalledSpecificCode, Is.True);
    }
}
```
Having an abstract class take care of common code that an interface implies gives a much cleaner code design.
I hope this makes sense to you.
---
As a side note, this is a common design pattern called the **[Template Method pattern](http://en.wikipedia.org/wiki/Template_method_pattern)**. In the above example, the template method is the `CommonCode` method and `SpecificCode` is called a stub or a hook. The idea is that anyone can extend behavior without the need to know the behind the scenes stuff.
A lot of frameworks rely on this behavioral pattern, e.g. [ASP.NET](http://msdn.microsoft.com/en-us/library/ms178472.aspx), where you have to implement the hooks in a page or a user control, such as the generated `Page_Load` method which is called by the `Load` event; the template method calls the hooks behind the scenes. There are many more examples of this. Basically, anything you have to implement that uses the words "load", "init", or "render" is called by a template method.
|
I disagree with [Jon Limjap](https://stackoverflow.com/questions/39003/nunit-how-to-test-all-classes-that-implement-a-particular-interface#39036) when he says,
> It is not a contract on either a.) how the method should be implemented and b.) what that method should be doing exactly (it only guarantees the return type), the two reasons that I glean would be your motive in wanting this kind of test.
There could be many parts of the contract not specified in the return type. A language-agnostic example:
```
public interface List {
    // adds o and returns the list
    public List add(Object o);
    // removes the first occurrence of o and returns the list
    public List remove(Object o);
}
```
Your unit tests on LinkedList, ArrayList, CircularlyLinkedList, and all the others should test not only that the lists themselves are returned, but also that they have been properly modified.
There was an [earlier question](https://stackoverflow.com/questions/26455/does-design-by-contract-work-for-you#34811) on design-by-contract, which can help point you in the right direction on one way of DRYing up these tests.
If you don't want the overhead of contracts, I recommend test rigs, along the lines of what [Spoike](https://stackoverflow.com/questions/39003/nunit-how-to-test-all-classes-that-implement-a-particular-interface#39034) recommended:
```
abstract class BaseListTest {
    abstract public List newListInstance();

    public void testAddToList() {
        // do some adding tests
    }

    public void testRemoveFromList() {
        // do some removing tests
    }
}

class ArrayListTest extends BaseListTest {
    List newListInstance() { return new ArrayList(); }

    public void arrayListSpecificTest1() {
        // test something about ArrayLists beyond the List requirements
    }
}
```
|
NUnit - How to test all classes that implement a particular interface
|
[
"",
"c#",
".net",
"unit-testing",
"nunit",
""
] |
I am working on localization for an ASP.NET application that consists of several projects.
For this, there are some strings that are used in several of these projects. Naturally, I would prefer to have only one copy of the resource file, rather than one in each project.
Since the resource files don't have a namespace (at least as far as I can tell), they can't be accessed like regular classes.
Is there any way to reference resx files in another project within the same solution?
|
You can just create a class library project, add a resource file there, and then refer to that assembly for common resources.
|
I have used this solution before to share an AssemblyInfo.cs file across all projects in a solution, and I would presume the same would work for a resource file.
Create a linked file in each individual project/class library. There will be only one copy, and every project will have a reference to the code via the linked file at compile time. It's a very elegant solution for sharing non-public resources without duplicating code.
```
<Compile Include="path to shared file, usually relative">
  <Link>filename for Visual Studio to display.resx</Link>
</Compile>
```
Add that code to the compile item group of a csproj file, then replace the path with the actual path to your resx file, and you should be able to open it.
Once you have done this for one project file, you should be able to copy & paste the linked file into other projects without having to hack the csproj.
|
Referencing resource files from multiple projects in a solution
|
[
"",
"c#",
"localization",
"resx",
""
] |
I want to loop over the contents of a text file, do a search and replace on some lines, and write the result back to the file. I could first load the whole file into memory and then write it back, but that probably isn't the best way to do it.
What is the best way to do this, within the following code?
```
f = open(file)
for line in f:
    if 'foo' in line:
        newline = line.replace('foo', 'bar')
        # how to write this newline back to the file?
```
|
I guess something like this should do it. It basically writes the content to a new file and replaces the old file with the new file:
```
from tempfile import mkstemp
from shutil import move, copymode
from os import fdopen, remove

def replace(file_path, pattern, subst):
    # Create temp file
    fh, abs_path = mkstemp()
    with fdopen(fh, 'w') as new_file:
        with open(file_path) as old_file:
            for line in old_file:
                new_file.write(line.replace(pattern, subst))
    # Copy the file permissions from the old file to the new file
    copymode(file_path, abs_path)
    # Remove original file
    remove(file_path)
    # Move new file
    move(abs_path, file_path)
```
|
The shortest way would probably be to use the [fileinput module](https://docs.python.org/3/library/fileinput.html#fileinput.input). For example, the following adds line numbers to a file, in-place:
```
import fileinput

for line in fileinput.input("test.txt", inplace=True):
    print('{} {}'.format(fileinput.filelineno(), line), end='')  # for Python 3
    # print "%d: %s" % (fileinput.filelineno(), line),  # for Python 2
```
What happens here is:
1. The original file is moved to a backup file
2. The standard output is redirected to the original file within the loop
3. Thus any `print` statements write back into the original file
`fileinput` has more bells and whistles. For example, it can be used to automatically operate on all files in `sys.argv[1:]`, without your having to iterate over them explicitly. Starting with Python 3.2 it also provides a convenient context manager for use in a `with` statement.
---
While `fileinput` is great for throwaway scripts, I would be wary of using it in real code because admittedly it's not very readable or familiar. In real (production) code it's worthwhile to spend just a few more lines of code to make the process explicit and thus make the code readable.
There are two options:
1. The file is not overly large, and you can just read it wholly to memory. Then close the file, reopen it in writing mode and write the modified contents back.
2. The file is too large to be stored in memory; you can move it over to a temporary file and open that, reading it line by line, writing back into the original file. Note that this requires twice the storage.
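A minimal sketch of the first option (the file name and contents here are just an example; the sketch creates its own input file so it is self-contained):

```python
# Suppose the file contains lines we want to edit in place
# (we create a small example file first, purely for illustration).
path = "example.txt"
with open(path, "w") as f:
    f.write("foo one\nnothing\nfoo two\n")

# Option 1: read the whole file into memory, transform it, write it back.
with open(path) as f:
    content = f.read()

content = content.replace("foo", "bar")

with open(path, "w") as f:
    f.write(content)
```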
|
Search and replace a line in a file in Python
|
[
"",
"python",
"file",
""
] |
I know in certain circumstances, such as long running processes, it is important to lock ASP.NET cache in order to avoid subsequent requests by another user for that resource from executing the long process again instead of hitting the cache.
What is the best way in C# to implement cache locking in ASP.NET?
|
Here's the basic pattern:
* Check the cache for the value, return if it's available
* If the value is not in the cache, then implement a lock
* Inside the lock, check the cache again, you might have been blocked
* Perform the value look up and cache it
* Release the lock
In code, it looks like this:
```
private static object ThisLock = new object();
public string GetFoo()
{
// try to pull from cache here
lock (ThisLock)
{
// cache was empty before we got the lock, check again inside the lock
    // cache is still empty, so retrieve the value here
// store the value in the cache here
}
// return the cached value here
}
```
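The double-checked pattern itself is language-agnostic. As a hedged illustration (not the ASP.NET cache API), here is the same shape in Python, with a hypothetical `compute` callback standing in for the expensive lookup:

```python
import threading

_cache = {}
_lock = threading.Lock()

def get_foo(key, compute):
    # First check without the lock: the common, cheap path.
    value = _cache.get(key)
    if value is not None:
        return value
    with _lock:
        # Check again inside the lock: another thread may have
        # populated the cache while we were waiting for it.
        value = _cache.get(key)
        if value is None:
            value = compute()   # the expensive lookup
            _cache[key] = value
    return value
```

With this shape, `compute` runs at most once per key even under concurrent callers.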
|
For completeness a full example would look something like this.
```
private static object ThisLock = new object();
...
object dataObject = Cache["globalData"];
if( dataObject == null )
{
lock( ThisLock )
{
dataObject = Cache["globalData"];
if( dataObject == null )
{
//Get Data from db
dataObject = GlobalObj.GetData();
Cache["globalData"] = dataObject;
}
}
}
return dataObject;
```
|
What is the best way to lock cache in asp.net?
|
[
"",
"c#",
".net",
"asp.net",
"caching",
""
] |
I'm working on a module for a CMS. This module is distributed as a class library DLL.
I have several utility libraries I'd like to use in this module. Is there anyway I can link these libraries statically so I won't have to distribute several DLL's (thereby distributing my utility libraries separately)?
I would like to have only one DLL.
|
You can merge your many DLLs with ILMERGE:
<http://research.microsoft.com/~mbarnett/ILMerge.aspx>
Haven't tried it myself. Hope it helps.
---
Download here:
<http://www.microsoft.com/downloads/details.aspx?familyid=22914587-B4AD-4EAE-87CF-B14AE6A939B0&displaylang=en>
**Brief Description** *(from download-page)*
ILMerge is a utility for merging multiple .NET assemblies into a single .NET assembly. It works on executables and DLLs alike and comes with several options for controlling the processing and format of the output. See the accompanying documentation for details.
|
If you don't want to use ILMerge, see this page:
<http://blogs.msdn.com/b/microsoft_press/archive/2010/02/03/jeffrey-richter-excerpt-2-from-clr-via-c-third-edition.aspx>
*editor's note*: Jeffrey Richter advices to put your dlls into exe file as resources (*For each DLL file you add, display its properties and change its “Build Action” to “Embedded Resource.”*). Then a custom class loader is needed to make the executable work (*At runtime, the CLR won’t be able to find the dependent DLL assemblies, which is a problem. To fix this, when your application initializes, register a callback method with the AppDomain’s ResolveAssembly event*).
Be sure to change the `resourceName` string to point to your actual resources. (e.g. change `AssemblyLoadingAndReflection` to your project name.)
|
Linking statically in C#
|
[
"",
"c#",
"visual-studio-2008",
"linker",
""
] |
I want to create a Java application bundle for Mac without using Mac.
According to [Java Deployment Options for Mac OS X](http://developer.apple.com/documentation/Java/Conceptual/Java14Development/03-JavaDeployment/JavaDeployment.html#//apple_ref/doc/uid/TP40001885-208447-TPXREF120), I can do this by using Xcode, Jar Bundler, or from the command line. Once the files and folders are set up, all I need for the command line method is to call /Developer/Tools/SetFile. Is there a SetFile clone on Linux or Windows? If not, do I have to get a Mac?
|
A Java application bundle on OS X is nothing more than a directory containing your .jars and a number of configuration files. The SetFile tool sets a custom HFS filesystem property on the directory to tell Finder that it is an app, but giving it a ".app" extension serves the same purpose. I don't think there's anything stopping you from building one on, say, Windows, though of course you have no way of testing that it works there. If you are able to test it at least once on a real Mac, you could then conceivably update the .jars within it on Windows to reflect code changes without too much difficulty.
Have a look at the [Bundle Programming Guide](http://developer.apple.com/documentation/CoreFoundation/Conceptual/CFBundles/CFBundles.html) for more info.
|
One way is to generate a zip file with the App using for example Ant. In ant you can specify that the file in Contents/MacOS should have execute-permissions using something like filemode="755".
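As a rough sketch of that idea using Python's `zipfile` instead of Ant (all paths and file contents below are placeholders, not a real bundle layout):

```python
import zipfile

# Build MyApp.app inside a zip, marking the launcher stub executable.
with zipfile.ZipFile("MyApp.zip", "w") as z:
    z.writestr("MyApp.app/Contents/Info.plist", "<plist/>")
    # Unix permission bits live in the high 16 bits of external_attr,
    # so 0o755 here plays the role of Ant's filemode="755".
    info = zipfile.ZipInfo("MyApp.app/Contents/MacOS/JavaApplicationStub")
    info.external_attr = 0o755 << 16
    z.writestr(info, "#!/bin/sh\nexec java -jar ../Resources/app.jar\n")
```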
|
Do I need a Mac to make a Java application bundle?
|
[
"",
"java",
"macos",
"deployment",
""
] |
I want to assign the decimal variable "trans" to the double variable "this.Opacity".
```
decimal trans = trackBar1.Value / 5000;
this.Opacity = trans;
```
When I build the app it gives the following error:
> Cannot implicitly convert type decimal to double
|
An explicit cast to `double` like this isn't necessary:
```
double trans = (double) trackBar1.Value / 5000.0;
```
Identifying the constant as `5000.0` (or as `5000d`) is sufficient:
```
double trans = trackBar1.Value / 5000.0;
double trans = trackBar1.Value / 5000d;
```
|
**A more generic answer for the generic question "Decimal vs Double?":**
**Decimal** is for monetary calculations to preserve precision. **Double** is for scientific calculations that do not get affected by small differences. Since Double is a type that is native to the CPU (internal representation is stored in *base 2*), calculations made with Double perform better than Decimal (which is represented in *base 10* internally).
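Python's `decimal` module illustrates the same base-2 vs. base-10 distinction:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly,
# so the sum picks up a tiny error...
assert 0.1 + 0.2 != 0.3

# ...while base-10 decimals represent these values exactly.
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```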
|
How to convert Decimal to Double in C#?
|
[
"",
"c#",
"floating-point",
"type-conversion",
"double",
"decimal",
""
] |
Given a `DateTime` representing a person's birthday, how do I calculate their age in years?
|
An easy to understand and simple solution.
```
// Save today's date.
var today = DateTime.Today;
// Calculate the age.
var age = today.Year - birthdate.Year;
// Go back a year if the birthday hasn't occurred yet this year
if (birthdate.Date > today.AddYears(-age)) age--;
```
However, this assumes you are looking for the *western* idea of the age and not using [*East Asian reckoning*](https://en.wikipedia.org/wiki/East_Asian_age_reckoning).
|
This is a strange way to do it, but if you format the date as `yyyymmdd`, subtract the date of birth from the current date, and then drop the last 4 digits, you've got the age :)
I don't know C#, but I believe this will work in any language.
```
20080814 - 19800703 = 280111
```
Drop the last 4 digits = `28`.
C# Code:
```
int now = int.Parse(DateTime.Now.ToString("yyyyMMdd"));
int dob = int.Parse(dateOfBirth.ToString("yyyyMMdd"));
int age = (now - dob) / 10000;
```
Or alternatively without all the type conversion in the form of an extension method. Error checking omitted:
```
public static Int32 GetAge(this DateTime dateOfBirth)
{
var today = DateTime.Today;
var a = (today.Year * 100 + today.Month) * 100 + today.Day;
var b = (dateOfBirth.Year * 100 + dateOfBirth.Month) * 100 + dateOfBirth.Day;
return (a - b) / 10000;
}
```
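Since the answer claims the trick works in any language, here is the same arithmetic sketched in Python:

```python
from datetime import date

def age(born, today):
    # Encode both dates as yyyymmdd integers, subtract,
    # then drop the last four digits (integer-divide by 10000).
    a = today.year * 10000 + today.month * 100 + today.day
    b = born.year * 10000 + born.month * 100 + born.day
    return (a - b) // 10000

# The example from the answer: 20080814 - 19800703
print(age(date(1980, 7, 3), date(2008, 8, 14)))  # 28
```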
|
How do I calculate someone's age based on a DateTime type birthday?
|
[
"",
"c#",
".net",
"datetime",
""
] |
Given a specific `DateTime` value, how do I display relative time, like:
* `2 hours ago`
* `3 days ago`
* `a month ago`
|
Jeff, [your code](https://stackoverflow.com/questions/11/how-do-i-calculate-relative-time/12#12) is nice but could be clearer with constants (as suggested in Code Complete).
```
const int SECOND = 1;
const int MINUTE = 60 * SECOND;
const int HOUR = 60 * MINUTE;
const int DAY = 24 * HOUR;
const int MONTH = 30 * DAY;
var ts = new TimeSpan(DateTime.UtcNow.Ticks - yourDate.Ticks);
double delta = Math.Abs(ts.TotalSeconds);
if (delta < 1 * MINUTE)
return ts.Seconds == 1 ? "one second ago" : ts.Seconds + " seconds ago";
if (delta < 2 * MINUTE)
return "a minute ago";
if (delta < 45 * MINUTE)
return ts.Minutes + " minutes ago";
if (delta < 90 * MINUTE)
return "an hour ago";
if (delta < 24 * HOUR)
return ts.Hours + " hours ago";
if (delta < 48 * HOUR)
return "yesterday";
if (delta < 30 * DAY)
return ts.Days + " days ago";
if (delta < 12 * MONTH)
{
int months = Convert.ToInt32(Math.Floor((double)ts.Days / 30));
return months <= 1 ? "one month ago" : months + " months ago";
}
else
{
int years = Convert.ToInt32(Math.Floor((double)ts.Days / 365));
return years <= 1 ? "one year ago" : years + " years ago";
}
```
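The threshold ladder is easy to port to other languages; as a hedged sketch (not production i18n), here is a Python version operating on a plain seconds delta:

```python
MINUTE, HOUR, DAY = 60, 3600, 86400
MONTH = 30 * DAY

def relative(delta_seconds):
    # Mirrors the threshold ladder from the C# answer above.
    d = abs(delta_seconds)
    if d < MINUTE:
        n = int(d)
        return "one second ago" if n == 1 else f"{n} seconds ago"
    if d < 2 * MINUTE:
        return "a minute ago"
    if d < 45 * MINUTE:
        return f"{int(d // MINUTE)} minutes ago"
    if d < 90 * MINUTE:
        return "an hour ago"
    if d < 24 * HOUR:
        return f"{int(d // HOUR)} hours ago"
    if d < 48 * HOUR:
        return "yesterday"
    if d < 30 * DAY:
        return f"{int(d // DAY)} days ago"
    months = int(d // MONTH)
    if months < 12:
        return "one month ago" if months <= 1 else f"{months} months ago"
    years = int(d // (365 * DAY))
    return "one year ago" if years <= 1 else f"{years} years ago"
```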
|
## [jquery.timeago plugin](https://timeago.yarp.com/)
Jeff, because Stack Overflow uses jQuery extensively, I recommend the [jquery.timeago plugin](https://timeago.yarp.com/).
Benefits:
* Avoid timestamps dated "1 minute ago" even though the page was opened 10 minutes ago; timeago refreshes automatically.
* You can take full advantage of page and/or fragment caching in your web applications, because the timestamps aren't calculated on the server.
* You get to use microformats like the cool kids.
Just attach it to your timestamps on DOM ready:
```
jQuery(document).ready(function() {
jQuery('abbr.timeago').timeago();
});
```
This will turn all `abbr` elements with a class of timeago and an [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) timestamp in the title:
```
<abbr class="timeago" title="2008-07-17T09:24:17Z">July 17, 2008</abbr>
```
into something like this:
```
<abbr class="timeago" title="July 17, 2008">4 months ago</abbr>
```
which yields: 4 months ago. As time passes, the timestamps will automatically update.
*Disclaimer: I wrote this plugin, so I'm biased.*
|
Calculate relative time in C#
|
[
"",
"c#",
"datetime",
"time",
"datediff",
"relative-time-span",
""
] |
The code below does not run correctly and throws an `InvalidOperationException`.
```
public void Foo()
{
DataContext context = new DataContext();
LinqEntity item = new LinqEntity(){ Id = 1, Name = "John", Surname = "Doe"} ;
context.LinqEntities.Attach(item, true);
}
```
|
By default, the entities will use all fields for checking concurrency when making edits. That's what's throwing the InvalidOperationException.
This can be fixed by setting the Update Check property to Never on every field; it must be done on all fields in order to attach the entity as modified. Once that is done, a call to context.SubmitChanges() will save the data.
Alternatively, if you know the original values, you can attach and then make the updates, but all values that are being checked must match the original values.
```
LinqEntity item = new LinqEntity(){ Id = 1, Name = "OldName", Surname = "OldSurname"};
context.LinqEntities.Attach(item);
item.Name = "John";
item.Surname = "Doe";
context.SubmitChanges();
```
|
I'm not sure what you mean by disconnected from the database.
It appears that you are trying to insert a new row into the LinqEntities table -- is that correct?
If that is the case you'll want to do
```
context.LinqEntities.InsertOnSubmit(item);
context.Submit();
```
|
How can I update in Linq an entity that is disconnected from database?
|
[
"",
"c#",
"linq",
""
] |
Given a Python object of any kind, is there an easy way to get the list of all methods that this object has?
Or if this is not possible, is there at least an easy way to check if it has a particular method, other than checking if an error occurs when the method is called?
|
**For many objects**, you can use this code, replacing 'object' with the object you're interested in:
```
object_methods = [method_name for method_name in dir(object)
if callable(getattr(object, method_name))]
```
I discovered it at [diveintopython.net](https://web.archive.org/web/20180901124519/http://www.diveintopython.net/power_of_introspection/index.html) (now archived), that should provide some further details!
**If you get an `AttributeError`, you can use this instead**:
`getattr()` can raise on some attributes, for example on pandas-style abstract virtual sub-classes (Python 3.6). The following code does the same as above but ignores such exceptions.
```
import pandas as pd
df = pd.DataFrame([[10, 20, 30], [100, 200, 300]],
columns=['foo', 'bar', 'baz'])
def get_methods(object, spacing=20):
methodList = []
for method_name in dir(object):
try:
if callable(getattr(object, method_name)):
methodList.append(str(method_name))
except Exception:
methodList.append(str(method_name))
processFunc = (lambda s: ' '.join(s.split())) or (lambda s: s)
for method in methodList:
try:
print(str(method.ljust(spacing)) + ' ' +
processFunc(str(getattr(object, method).__doc__)[0:90]))
except Exception:
print(method.ljust(spacing) + ' ' + ' getattr() failed')
get_methods(df['foo'])
```
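For a shorter route, the standard library's `inspect` module can produce a similar listing (dunder methods are filtered out here for brevity; the `Greeter` class is just an illustrative stand-in):

```python
import inspect

class Greeter:
    def __init__(self, name):
        self.name = name
    def hello(self):
        return f"hello, {self.name}"

# getmembers with the ismethod predicate returns (name, bound method) pairs.
methods = [name for name, _ in inspect.getmembers(Greeter("x"), inspect.ismethod)
           if not name.startswith("_")]
print(methods)  # ['hello']
```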
|
You can use the built in `dir()` function to get a list of all the attributes a module has. Try this at the command line to see how it works.
```
>>> import moduleName
>>> dir(moduleName)
```
Also, you can use the `hasattr(module_name, "attr_name")` function to find out if a module has a specific attribute.
See the [Python introspection](https://zetcode.com/python/introspection/) for more information.
|
Finding what methods a Python object has
|
[
"",
"python",
"introspection",
""
] |
So I have a Sybase stored proc that takes 1 parameter that's a comma-separated list of strings and runs a query with it in an IN() clause:
```
CREATE PROCEDURE getSomething @keyList varchar(4096)
AS
SELECT * FROM mytbl WHERE name IN (@keyList)
```
How do I call my stored proc with more than 1 value in the list?
So far I've tried
```
exec getSomething 'John' -- works but only 1 value
exec getSomething 'John','Tom' -- doesn't work - expects two variables
exec getSomething "'John','Tom'" -- doesn't work - doesn't find anything
exec getSomething '"John","Tom"' -- doesn't work - doesn't find anything
exec getSomething '\'John\',\'Tom\'' -- doesn't work - syntax error
```
**EDIT:** I actually found this [page](http://vyaskn.tripod.com/passing_arrays_to_stored_procedures.htm) that has a great reference of the various ways to pass an array to a sproc
|
If you're using Sybase 12.5 or earlier then you can't use functions. A workaround might be to populate a temporary table with the values and read them from there.
|
This is a little late, but I had this exact issue a while ago and I found a solution.
The trick is double quoting and then wrapping the whole string in quotes.
```
exec getSomething """John"",""Tom"",""Bob"",""Harry"""
```
Modify your proc to match the table entry to the string.
```
CREATE PROCEDURE getSomething @keyList varchar(4096)
AS
SELECT * FROM mytbl WHERE @keyList LIKE '%'+name+'%'
```
I've had this in production since ASE 12.5; we're now on 15.0.3.
|
How to pass a comma separated list to a stored procedure?
|
[
"",
"sql",
"sap-ase",
""
] |
Given a select with multiple options in jQuery.
```
$select = $("<select></select>");
$select.append("<option>Jason</option>") //Key = 1
.append("<option>John</option>") //Key = 32
.append("<option>Paul</option>") //Key = 423
```
How should the key be stored and retrieved?
The ID may be an OK place, but it would not be guaranteed unique if I had multiple selects sharing values (and other scenarios).
Thanks
and in the spirit of TMTOWTDI.
```
$option = $("<option></option>");
$select = $("<select></select>");
$select.addOption = function(value,text){
$(this).append($("<option/>").val(value).text(text));
};
$select.append($option.val(1).text("Jason").clone())
.append("<option value=32>John</option>")
.append($("<option/>").val(423).text("Paul"))
.addOption("321","Lenny");
```
|
Like lucas said the value attribute is what you need. Using your code it would look something like this ( I added an id attribute to the select to make it fit ):
```
$select = $('<select id="mySelect"></select>');
$select.append('<option value="1">Jason</option>') //Key = 1
.append('<option value="32">John</option>') //Key = 32
.append('<option value="423">Paul</option>') //Key = 423
```
jQuery lets you get the value using the val() method. Using it on the select tag you get the current selected option's value.
```
$( '#mySelect' ).val(); //Gets the value for the current selected option
$( '#mySelect > option' ).each( function( index, option ) {
    $( option ).val(); //The value for each individual option
} );
```
Just in case: the `.each` method loops through every element the query matched, passing each one as a plain DOM element, which is why it needs to be wrapped in `$()` before calling `val()`.
|
The HTML `<option>` tag has an attribute called "value", where you can store your key.
e.g.:
```
<option value=1>Jason</option>
```
I don't know how this will play with jQuery (I don't use it), but I hope this is helpful nonetheless.
|
Keeping key value pairs together in HTML <select/> with jQuery?
|
[
"",
"javascript",
"jquery",
"html",
""
] |
I love list comprehensions in Python, because they concisely represent a transformation of a list.
However, in other languages, I frequently find myself writing something along the lines of:
```
foreach (int x in intArray)
if (x > 3) //generic condition on x
x++
//do other processing
```
This example is in C#, where I'm under the impression LINQ can help with this, but is there some common programming construct which can replace this slightly less-than-elegant solution? Perhaps a data structure I'm not considering?
|
The increment in the original `foreach` loop will not affect the contents of the array, the only way to do this remains a `for` loop:
```
for(int i = 0; i < intArray.Length; ++i)
{
if(intArray[i] > 3) ++intArray[i];
}
```
Linq is not intended to modify existing collections or sequences. It creates new sequences based on existing ones. It is possible to achieve the above code using Linq, though it is slightly against its purposes:
```
var newArray1 = from i in intArray select ((i > 3) ? (i + 1) : (i));
var newArray2 = intArray.Select(i => (i > 3) ? (i + 1) : (i));
```
Using `where` (or equivalent), as shown in some of the other answers, will exclude any values less than or equal to 3 from the resulting sequence.
```
var intArray = new int[] { 10, 1, 20, 2 };
var newArray = from i in intArray where i > 3 select i + 1;
// newArray == { 11, 21 }
```
There is a `ForEach` method on arrays that will allow you to use a lambda function instead of a `foreach` block, though for anything more than a method call I would stick with `foreach`.
```
intArray.ForEach(i => DoSomething(i));
```
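For comparison, the Python list comprehension the question alludes to expresses the same conditional transformation as the Linq `Select` above:

```python
int_array = [10, 1, 20, 2]

# Increment only the values greater than 3, keep the rest as-is.
new_array = [x + 1 if x > 3 else x for x in int_array]
print(new_array)  # [11, 1, 21, 2]
```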
|
In C# you can apply selective processing on anything that lives inside an IEnumerable like this:
```
intArray.Where(i => i > 3).Select(i => i + 1);
DoStuff(intArray.Where(i => i > 3));
```
Etc..
|
Replacement for for... if array iteration
|
[
"",
".net",
"python",
"arrays",
"loops",
"iteration",
""
] |
Help! I have an Axis web service that is being consumed by a C# application. Everything works great, except that arrays of long values always come across as [0,0,0,0] - the right length, but the values aren't deserialized. I have tried with other primitives (ints, doubles) and the same thing happens. What do I do? I don't want to change the semantics of my service.
|
Here's what I ended up with. I have never found another solution out there for this, so if you have something better, by all means, contribute.
First, the long array definition in the wsdl:types area:
```
<xsd:complexType name="ArrayOf_xsd_long">
<xsd:complexContent mixed="false">
<xsd:restriction base="soapenc:Array">
<xsd:attribute wsdl:arrayType="soapenc:long[]" ref="soapenc:arrayType" />
</xsd:restriction>
</xsd:complexContent>
</xsd:complexType>
```
Next, we create a SoapExtensionAttribute that will perform the fix. It seems that the problem was that .NET wasn't following the multiref id to the element containing the double value. So, we process the array item, go find the value, and then insert the value into the element:
```
[AttributeUsage(AttributeTargets.Method)]
public class LongArrayHelperAttribute : SoapExtensionAttribute
{
private int priority = 0;
public override Type ExtensionType
{
get { return typeof (LongArrayHelper); }
}
public override int Priority
{
get { return priority; }
set { priority = value; }
}
}
public class LongArrayHelper : SoapExtension
{
private static ILog log = LogManager.GetLogger(typeof (LongArrayHelper));
public override object GetInitializer(LogicalMethodInfo methodInfo, SoapExtensionAttribute attribute)
{
return null;
}
public override object GetInitializer(Type serviceType)
{
return null;
}
public override void Initialize(object initializer)
{
}
private Stream originalStream;
private Stream newStream;
public override void ProcessMessage(SoapMessage m)
{
switch (m.Stage)
{
case SoapMessageStage.AfterSerialize:
newStream.Position = 0; //need to reset stream
CopyStream(newStream, originalStream);
break;
case SoapMessageStage.BeforeDeserialize:
XmlWriterSettings settings = new XmlWriterSettings();
settings.Indent = false;
settings.NewLineOnAttributes = false;
settings.NewLineHandling = NewLineHandling.None;
settings.NewLineChars = "";
XmlWriter writer = XmlWriter.Create(newStream, settings);
XmlDocument xmlDocument = new XmlDocument();
xmlDocument.Load(originalStream);
List<XmlElement> longArrayItems = new List<XmlElement>();
Dictionary<string, XmlElement> multiRefs = new Dictionary<string, XmlElement>();
FindImportantNodes(xmlDocument.DocumentElement, longArrayItems, multiRefs);
FixLongArrays(longArrayItems, multiRefs);
xmlDocument.Save(writer);
newStream.Position = 0;
break;
}
}
private static void FindImportantNodes(XmlElement element, List<XmlElement> longArrayItems,
Dictionary<string, XmlElement> multiRefs)
{
string val = element.GetAttribute("soapenc:arrayType");
if (val != null && val.Contains(":long["))
{
longArrayItems.Add(element);
}
if (element.Name == "multiRef")
{
multiRefs[element.GetAttribute("id")] = element;
}
foreach (XmlNode node in element.ChildNodes)
{
XmlElement child = node as XmlElement;
if (child != null)
{
FindImportantNodes(child, longArrayItems, multiRefs);
}
}
}
private static void FixLongArrays(List<XmlElement> longArrayItems, Dictionary<string, XmlElement> multiRefs)
{
foreach (XmlElement element in longArrayItems)
{
foreach (XmlNode node in element.ChildNodes)
{
XmlElement child = node as XmlElement;
if (child != null)
{
string href = child.GetAttribute("href");
if (href == null || href.Length == 0)
{
continue;
}
if (href.StartsWith("#"))
{
href = href.Remove(0, 1);
}
XmlElement multiRef = multiRefs[href];
if (multiRef == null)
{
continue;
}
child.RemoveAttribute("href");
child.InnerXml = multiRef.InnerXml;
if (log.IsDebugEnabled)
{
log.Debug("Replaced multiRef id '" + href + "' with value: " + multiRef.InnerXml);
}
}
}
}
}
public override Stream ChainStream(Stream s)
{
originalStream = s;
newStream = new MemoryStream();
return newStream;
}
private static void CopyStream(Stream from, Stream to)
{
TextReader reader = new StreamReader(from);
TextWriter writer = new StreamWriter(to);
writer.WriteLine(reader.ReadToEnd());
writer.Flush();
}
}
```
Finally, we tag all methods in the Reference.cs file that will be deserializing a long array with our attribute:
```
[SoapRpcMethod("", RequestNamespace="http://some.service.provider",
ResponseNamespace="http://some.service.provider")]
[return : SoapElement("getFooReturn")]
[LongArrayHelper]
public Foo getFoo()
{
object[] results = Invoke("getFoo", new object[0]);
return ((Foo) (results[0]));
}
```
This fix is long-specific, but it could probably be generalized to handle any primitive type having this problem.
|
Here's a more or less copy-pasted version of a [blog post](http://www.tomergabel.com/GettingWCFAndApacheAxisToBeFriendly.aspx) I wrote on the subject.
Executive summary: You can either change the way .NET deserializes the result set (see Chris's solution above), or you can reconfigure Axis to serialize its results in a way that's compatible with the .NET SOAP implementation.
If you go the latter route, here's how:
> ... the generated
> classes look and appear to function
> normally, but if you'll look at the
> deserialized array on the client
> (.NET/WCF) side you'll find that the
> array has been deserialized
> incorrectly, and all values in the
> array are 0. You'll have to manually
> look at the SOAP response returned by
> Axis to figure out what's wrong;
> here's a sample response (again,
> edited for clarity):
```
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv=http://schemas.xmlsoap.org/soap/envelope/>
<soapenv:Body>
<doSomethingResponse>
<doSomethingReturn>
<doSomethingReturn href="#id0"/>
<doSomethingReturn href="#id1"/>
<doSomethingReturn href="#id2"/>
<doSomethingReturn href="#id3"/>
<doSomethingReturn href="#id4"/>
</doSomethingReturn>
</doSomethingResponse>
<multiRef id="id4">5</multiRef>
<multiRef id="id3">4</multiRef>
<multiRef id="id2">3</multiRef>
<multiRef id="id1">2</multiRef>
<multiRef id="id0">1</multiRef>
</soapenv:Body>
</soapenv:Envelope>
```
> You'll notice that Axis does not
> generate values directly in the
> returned element, but instead
> references external elements for
> values. This might make sense when
> there are many references to
> relatively few discrete values, but
> whatever the case this is not properly
> handled by the WCF basicHttpBinding
> provider (and reportedly by gSOAP and
> classic .NET web references as well).
>
> It took me a while to find a solution:
> edit your Axis deployment's
> server-config.wsdd file and find the
> following parameter:
```
<parameter name="sendMultiRefs" value="true"/>
```
> Change it to false,
> then redeploy via the command line,
> which looks (under Windows) something
> like this:
```
java -cp %AXISCLASSPATH% org.apache.axis.client.AdminClient server-config.wsdl
```
> The web service's
> response should now be deserializable
> by your .NET client.
|
Why won't .NET deserialize my primitive array from a web service?
|
[
"",
"java",
".net",
"service",
"primitive",
""
] |
I would like to have a reference for the pros and cons of using include **files vs. objects (classes)** when developing PHP applications.
I know I would benefit from having one place to go for this answer...I have a few opinions of my own but I look forward to hearing others.
**A Simple Example:**
Certain pages on my site are only accessible to logged in users. I have two options for implementation (there are others but let's limit it to these two)
1. *Create an authenticate.php file and include it on every page. It holds the logic for authentication.*
2. *Create a user object, which has an authenticate function, reference the object for authentication on every page.*
**Edit**
I'd like to see some way weigh the benefits of one over the other.
My current (and weak reasons) follow:
Includes - Sometimes a function is just easier/shorter/faster to call
Objects - Grouping of functionality and properties leads for longer term maintenance.
**Includes** - Less code to write (no constructor, no class syntax) call me lazy but this is true.
**Objects** - Force formality and a single approach to functions and creation.
**Includes** - Easier for a novice to deal with
Objects - Harder for novices, but frowned upon by professionals.
I look at these factors at the start of a project to decide if I want to do includes or objects.
Those are a few pros and cons off the top of my head.
|
These are not really opposite choices. You will have to include the checking code anyway. I read your question as procedural programming vs. OO programming.
Writing a few lines of code, or a function, and including it in your page header was how things were done in PHP3 or PHP4. It's simple, it works (that's how we did it in [osCommerce](http://www.oscommerce.com/), for example, an eCommerce PHP application).
But it's not easy to maintain and modify, as many developers can confirm.
In PHP5 you'd write a user object which will carry its own data and methods for authentication. Your code will be clearer and easier to maintain as everything having to do with users and authentication will be concentrated in a single place.
|
While the question touches on a couple of very debatable issues (OOP, User authentication) I'll skip by those and second Konrad's comment about \_\_autoload. Anyone who knows C/C++ knows how much of a pain including files can be. With autoload, a PHP5 addition, if you choose to use OOP (which I do almost exclusively) you only need use some standard file naming convention and (I would recommend) restricting a single class per file and PHP will do the rest for you. Cleans up the code and you no longer have to worry about remembering to remove includes that are no longer necessary (one of the many problems with includes).
|
PHP includes vs OOP
|
[
"",
"php",
"coding-style",
""
] |
I have a PHP server that is running my domain name. For testing purposes I am running an ASP.NET server on a dotted-quad IP. I am hoping to link them together via either PHP or some kind of DNS/`.htaccess` voodoo.
So if I go to `www.mydomain.com/test`, it redirects (but keeps `www.mydomain.com/test` in the browser's address bar) and the pages are served by the dotted-quad IP ASP.NET box.
|
Instead of pointing `www.yourdomain.com/test` at your test server, why not use `test.yourdomain.com`?
Assuming you have access to the DNS records for `yourdomain.com`, you should just need to create an A record mapping `test.yourdomain.com` to your test server's IP address.
|
It is quite possible, if I understand what you're getting at.
You have a PHP server with your domain pointing to it. You also have a separate ASP.NET server that only has an IP address associated with it, no domain.
Is there any drawback to simply pointing your domain name to your ASP.NET box?
|
Redirect from domain name to a dotted quad hosted box
|
[
"",
"php",
"hosting",
"dns",
""
] |
Details:
* Only disable after user clicks the submit button, but before the posting back to the server
* ASP.NET Webforms (.NET 1.1)
* Prefer jQuery (if any library at all)
* Must be enabled if form reloads (i.e. credit card failed)
This isn't a necessity that I do this, but if there is a simple way to do it without having to change too much, I'll do it. (i.e. if there isn't a simple solution, I probably won't do it, so don't worry about digging too deep)
|
For all submit buttons, via JQuery, it'd be:
```
$('input[type=submit]').click(function() { this.disabled = true; });
```
Or it might be more useful to do so on form submission:
```
$('form').submit(function() {
$('input[type=submit]', this).attr("disabled","disabled");
});
```
But I think we could give a better answer to your question if we knew a bit more about the context.
If this is an ajax request, then you'll need to make sure you enable submit buttons again on either success or failure.
If this is a standard HTTP form submission (aside from disabling the button with JavaScript) and you're doing this to safeguard against multiple submissions of the same form, then you ought to have some sort of control in the code that deals with the submitted data, because disabling a button with JavaScript might not prevent multiple submissions.
|
You could do something like this:
```
$('form').submit(function() {
$(this)
.find(":submit,:image") // get all the submit buttons
.attr({ disabled : 'disabled' }) // disable them
.end() // go back to this form
.submit(function() { // change the onsubmit to always reject.
return false;
})
;
});
```
Benefits of this:
* It will work with all your forms, with all methods of submission:
+ clicking a submit element
+ pressing enter, or
+ calling `form.submit()` from some other code
* It will disable all submit elements:
+ `<input type="submit"/>`
+ `<button type="submit"></button>`
+ `<input type="image" />`
* it's really short.
|
What is the best approach for (client-side) disabling of a submit button?
|
[
"",
"asp.net",
"javascript",
"jquery",
"webforms",
".net-1.1",
""
] |
I'm using ant to generate javadocs, but get this exception over and over - why?
I'm using JDK version **1.6.0\_06**.
```
[javadoc] java.lang.ClassCastException: com.sun.tools.javadoc.ClassDocImpl cannot be cast to com.sun.javadoc.AnnotationTypeDoc
[javadoc] at com.sun.tools.javadoc.AnnotationDescImpl.annotationType(AnnotationDescImpl.java:46)
[javadoc] at com.sun.tools.doclets.formats.html.HtmlDocletWriter.getAnnotations(HtmlDocletWriter.java:1739)
[javadoc] at com.sun.tools.doclets.formats.html.HtmlDocletWriter.writeAnnotationInfo(HtmlDocletWriter.java:1713)
[javadoc] at com.sun.tools.doclets.formats.html.HtmlDocletWriter.writeAnnotationInfo(HtmlDocletWriter.java:1702)
[javadoc] at com.sun.tools.doclets.formats.html.HtmlDocletWriter.writeAnnotationInfo(HtmlDocletWriter.java:1681)
[javadoc] at com.sun.tools.doclets.formats.html.FieldWriterImpl.writeSignature(FieldWriterImpl.java:130)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.FieldBuilder.buildSignature(FieldBuilder.java:184)
[javadoc] at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
[javadoc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[javadoc] at java.lang.reflect.Method.invoke(Method.java:597)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.FieldBuilder.invokeMethod(FieldBuilder.java:114)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.AbstractBuilder.build(AbstractBuilder.java:90)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.AbstractMemberBuilder.build(AbstractMemberBuilder.java:56)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.FieldBuilder.buildFieldDoc(FieldBuilder.java:158)
[javadoc] at sun.reflect.GeneratedMethodAccessor51.invoke(Unknown Source)
[javadoc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[javadoc] at java.lang.reflect.Method.invoke(Method.java:597)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.FieldBuilder.invokeMethod(FieldBuilder.java:114)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.AbstractBuilder.build(AbstractBuilder.java:90)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.AbstractMemberBuilder.build(AbstractMemberBuilder.java:56)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.buildFieldDetails(ClassBuilder.java:301)
[javadoc] at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
[javadoc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[javadoc] at java.lang.reflect.Method.invoke(Method.java:597)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.invokeMethod(ClassBuilder.java:101)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.AbstractBuilder.build(AbstractBuilder.java:90)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.buildClassDoc(ClassBuilder.java:124)
[javadoc] at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
[javadoc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[javadoc] at java.lang.reflect.Method.invoke(Method.java:597)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.invokeMethod(ClassBuilder.java:101)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.AbstractBuilder.build(AbstractBuilder.java:90)
[javadoc] at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.build(ClassBuilder.java:108)
[javadoc] at com.sun.tools.doclets.formats.html.HtmlDoclet.generateClassFiles(HtmlDoclet.java:155)
[javadoc] at com.sun.tools.doclets.internal.toolkit.AbstractDoclet.generateClassFiles(AbstractDoclet.java:164)
[javadoc] at com.sun.tools.doclets.internal.toolkit.AbstractDoclet.startGeneration(AbstractDoclet.java:106)
[javadoc] at com.sun.tools.doclets.internal.toolkit.AbstractDoclet.start(AbstractDoclet.java:64)
[javadoc] at com.sun.tools.doclets.formats.html.HtmlDoclet.start(HtmlDoclet.java:42)
[javadoc] at com.sun.tools.doclets.standard.Standard.start(Standard.java:23)
[javadoc] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[javadoc] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[javadoc] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[javadoc] at java.lang.reflect.Method.invoke(Method.java:597)
[javadoc] at com.sun.tools.javadoc.DocletInvoker.invoke(DocletInvoker.java:215)
[javadoc] at com.sun.tools.javadoc.DocletInvoker.start(DocletInvoker.java:91)
[javadoc] at com.sun.tools.javadoc.Start.parseAndExecute(Start.java:340)
[javadoc] at com.sun.tools.javadoc.Start.begin(Start.java:128)
[javadoc] at com.sun.tools.javadoc.Main.execute(Main.java:41)
[javadoc] at com.sun.tools.javadoc.Main.main(Main.java:31)
```
|
It looks like this has been reported as a [Java bug](https://bugs.java.com/bugdatabase/view_bug?bug_id=6442982). It appears to be caused by using annotations from a 3rd party library (like JUnit) and not including the jar with that annotation in the javadoc invocation.
If that is the case, just use the -classpath option on javadoc and include the extra jar files.
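With Ant's `javadoc` task, the equivalent of `-classpath` is a nested `<classpath>` element — a sketch (the directory names and package pattern are placeholders):

```
<javadoc destdir="docs/api" sourcepath="src" packagenames="com.example.*">
  <classpath>
    <!-- jars defining the annotations used in the source, e.g. junit.jar -->
    <fileset dir="lib" includes="**/*.jar"/>
  </classpath>
</javadoc>
```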
|
This is not an exact solution, but a workaround: if you put a single comment line `//` before the annotation and run javadoc again, the problem goes away.
For example, if a `sample.java` file contains:
```
@ChannelPipeline
```
Change it to:
```
//@ChannelPipeline
```
Run the javadoc command again; the `ClassCastException` will no longer occur.
|
Why am I getting a ClassCastException when generating javadocs?
|
[
"",
"java",
"ant",
"javadoc",
"classcastexception",
""
] |
I am looking for good methods of manipulating HTML in PHP. For example, the problem I currently have is dealing with malformed HTML.
I am getting input that looks something like this:
```
<div>This is some <b>text
```
As you noticed, the HTML is missing closing tags. I could use regex or an XML Parser to solve this problem. However, it is likely that I will have to do other DOM manipulation in the future. I wonder if there are any good PHP libraries that handle DOM manipulation similar to how Javascript deals with DOM manipulation.
|
PHP has [a PECL extension that gives you access to the features of HTML Tidy](http://php.net/tidy). Tidy is a pretty powerful library that should be able to take code like that and close tags in an intelligent manner.
I use it to clean up malformed XML and HTML sent to me by a classified ad system prior to import.
|
I've found PHP Simple HTML DOM to be the most useful and straightforward library yet. Better than PECL, I would say.
I've written an article on [how to use it to scrape myspace artist tour dates](http://www.crainbandy.com/programming/using-php-and-simple-html-dom-parser-to-scrape-artist-tour-dates-off-myspace) (just an example.) Here's a link to the [php simple html dom parser.](http://simplehtmldom.sourceforge.net/)
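Another option that ships with PHP itself is the DOM extension: `DOMDocument::loadHTML()` tolerates malformed markup (libxml adds the missing closing tags) and then exposes a JavaScript-like DOM API. A minimal sketch:

```
<?php
$doc = new DOMDocument();
libxml_use_internal_errors(true);  // suppress warnings about the bad markup
$doc->loadHTML('<div>This is some <b>text');
libxml_clear_errors();
// The tree now contains the missing </b> and </div>; manipulate it with
// the usual DOM methods (getElementsByTagName, createElement, ...) and
// serialize it back out:
echo $doc->saveHTML();
```

Note that `loadHTML()` wraps a fragment in a full `<html><body>` skeleton, so you may need to pull the fragment back out when serializing.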
|
DOM manipulation in PHP
|
[
"",
"php",
"html",
"dom",
""
] |
How do you expose a LINQ query as an ASMX web service?
Usually, from the business tier, I can return a typed `DataSet` or a `DataTable` which can be serialized for transport over ASMX.
How can I do the same for a LINQ query?
Is there a way to populate a typed `DataSet` or a `DataTable` via a LINQ query?
```
public static MyDataTable CallMySproc()
{
string conn = "...";
MyDatabaseDataContext db = new MyDatabaseDataContext(conn);
MyDataTable dt = new MyDataTable();
// execute a sproc via LINQ
var query = from dr
in db.MySproc().AsEnumerable()
select dr;
// copy LINQ query resultset into a DataTable -this does not work !
dt = query.CopyToDataTable();
return dt;
}
```
How could I put the result set of a LINQ query into a `DataSet` or a `DataTable`?
Alternatively, can the LINQ query be serializable so that I can expose it as an ASMX web service?
|
As mentioned in the question, `IEnumerable` has a `CopyToDataTable` method:
```
IEnumerable<DataRow> query =
from order in orders.AsEnumerable()
where order.Field<DateTime>("OrderDate") > new DateTime(2001, 8, 1)
select order;
// Create a table from the query.
DataTable boundTable = query.CopyToDataTable<DataRow>();
```
Why won't that work for you?
|
To perform this query against a `DataContext` class, you'll need to do the following:
```
MyDataContext db = new MyDataContext();
IEnumerable<DataRow> query =
(from order in db.Orders.AsEnumerable()
select new
{
order.Property,
order.Property2
})
as IEnumerable<DataRow>;
return query.CopyToDataTable<DataRow>();
```
Without the `as IEnumerable<DataRow>;` you will see the following compilation error:
> Cannot implicitly convert type 'System.Collections.Generic.IEnumerable<AnonymousType#1>' to 'System.Collections.Generic.IEnumerable<System.Data.DataRow>'. An explicit conversion exists (are you missing a cast?)
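One caveat: the `as IEnumerable<DataRow>` cast satisfies the compiler, but an anonymous-type sequence is not actually a sequence of `DataRow` objects, so the cast can yield `null` at runtime. A safer sketch is to build the `DataTable` by hand (the `Order` entity and its property names here are placeholders):

```
MyDataContext db = new MyDataContext();
DataTable dt = new DataTable();
dt.Columns.Add("Property", typeof(string));
dt.Columns.Add("Property2", typeof(int));

// Copy each projected row into the table explicitly.
foreach (var order in db.Orders)
{
    dt.Rows.Add(order.Property, order.Property2);
}
return dt;
```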
|
Filling a DataSet or a DataTable from a LINQ query result set
|
[
"",
"c#",
"linq",
"web-services",
".net-3.5",
""
] |
I have lots of article store in MS SQL server 2005 database in a table called Articles-
```
"Articles (ArticleID, ArticleTitle, ArticleContent)"
```
Now I want a stored procedure or SQL query that returns similar articles for any user's input (much like "Similar Posts" on blogs, or "Related Questions" on Stack Overflow). The matching should work on both ArticleTitle and ArticleContent, and the query should be intelligent enough to sort the results by relevance.
Is it possible to do this in MS SQL Server 2005?
|
Something like this might work, a kind of ranking system. You would probably have to split the string in your application to build a SQL string, but I have used similar to build an effective site search.
```
Select
Top 10
ArticleID,
ArticleTitle,
ArticleContent
From
Articles
Order By
(Case When ArticleTitle = 'Article Title' Then 1 Else 0 End) Desc,
(Case When ArticleTitle = 'Article' Then 1 Else 0 End) Desc,
(Case When ArticleTitle = 'Title' Then 1 Else 0 End) Desc,
(Case When Soundex('Article Title') = Soundex(ArticleTitle) Then 1 Else 0 End) Desc,
(Case When Soundex('Article') = Soundex(ArticleTitle) Then 1 Else 0 End) Desc,
(Case When Soundex('Title') = Soundex(ArticleTitle) Then 1 Else 0 End) Desc,
(Case When PatIndex('%Article%Title%', ArticleTitle) > 0 Then 1 Else 0 End) Desc,
(Case When PatIndex('%Article%', ArticleTitle) > 0 Then 1 Else 0 End) Desc,
(Case When PatIndex('%Title%', ArticleTitle) > 0 Then 1 Else 0 End) Desc,
(Case When PatIndex('%Article%Title%', ArticleContent) > 0 Then 1 Else 0 End) Desc,
(Case When PatIndex('%Article%', ArticleContent) > 0 Then 1 Else 0 End) Desc,
(Case When PatIndex('%Title%', ArticleContent) > 0 Then 1 Else 0 End) Desc
```
You can then add/remove case statements from the order by clause to improve the list based on your data.
|
First of all you need to define what article similarity means.
For example you can associate some meta information with articles, like tags.
To be able to find similar articles, you need to extract some features from them; for example, you can build a full-text index.
You can take advantage of the full-text search capability of MS SQL Server 2005:
```
-- Assuming @Title contains the title of the current article, you can find related articles by running this query
SELECT * FROM Articles WHERE CONTAINS(ArticleTitle, @Title)
```
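If relevance ordering is needed, `CONTAINSTABLE` (rather than `CONTAINS`) exposes a `RANK` column you can sort by — a sketch, assuming a full-text index already exists on both columns:

```
SELECT TOP 10 a.ArticleID, a.ArticleTitle, k.RANK
FROM Articles AS a
JOIN CONTAINSTABLE(Articles, (ArticleTitle, ArticleContent), @Title) AS k
    ON a.ArticleID = k.[KEY]
ORDER BY k.RANK DESC
```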
|
"Similar Posts" like functionality using MS SQL Server?
|
[
"",
"sql",
"sql-server",
"database",
""
] |
Having been a PHP developer on LAMP servers for quite a while, is there anything that I will need to take into consideration while preparing an application for *IIS* on Windows?
|
Make sure you get the FastCGI extension for IIS 6.0 or IIS 7.0. It is the single most important thing you can have when running PHP under IIS. Also this article should get you setup:
<http://learn.iis.net/page.aspx/247/using-fastcgi-to-host-php-applications-on-iis-60/>
Everything beyond this is simple, MySQL and what not.
|
We just rolled out PHP 5.2.6 + FastCGI on our shared hosting platform without any problems. As long as you follow the steps outlined in [the article Nick linked](https://stackoverflow.com/questions/10515/php-on-iis#10519) to then you should be just fine.
My only additional piece of advice would be to forget about using the `fcgiconfig.js` script to modify the fcgiext.ini file, it's more of a hindrance than a help. Just edit it by hand, you also learn more about how it works.
If you're installing PHP onto IIS 7 then this link should be worth a read though:
> [Using FastCGI to Host PHP Applications on IIS 7](http://learn.iis.net/page.aspx/246/using-fastcgi-to-host-php-applications-on-iis7/)
|
What do I need to run PHP applications on IIS?
|
[
"",
"php",
"windows",
"iis",
"portability",
"lamp",
""
] |
Is there any way to check whether a file is locked without using a try/catch block?
Right now, the only way I know of is to just open the file and catch any `System.IO.IOException`.
|
No, unfortunately, and if you think about it, that information would be worthless anyway since the file could become locked the very next second (read: short timespan).
Why specifically do you need to know if the file is locked anyway? Knowing that might give us some other way of giving you good advice.
If your code would look like this:
```
if not locked then
open and update file
```
Then between the two lines, another process could easily lock the file, giving you the same problem you were trying to avoid to begin with: exceptions.
|
When I faced with a similar problem, I finished with the following code:
```
public class FileManager
{
private string _fileName;
private int _numberOfTries;
private int _timeIntervalBetweenTries;
private FileStream GetStream(FileAccess fileAccess)
{
var tries = 0;
while (true)
{
try
{
return File.Open(_fileName, FileMode.Open, fileAccess, FileShare.None);
}
catch (IOException e)
{
if (!IsFileLocked(e))
throw;
if (++tries > _numberOfTries)
throw new MyCustomException("The file is locked too long: " + e.Message, e);
Thread.Sleep(_timeIntervalBetweenTries);
}
}
}
private static bool IsFileLocked(IOException exception)
{
int errorCode = Marshal.GetHRForException(exception) & ((1 << 16) - 1);
return errorCode == 32 || errorCode == 33;
}
// other code
}
```
|
How to check for file lock?
|
[
"",
"c#",
".net",
"io",
"filelock",
""
] |
I wanted some of those spiffy rounded corners for a web project that I'm currently working on.
I thought I'd try to accomplish it using javascript and not CSS in an effort to keep the requests for image files to a minimum (yes, I know that it's possible to combine all required rounded corner shapes into one image) and I also wanted to be able to change the background color pretty much on the fly.
I already utilize jQuery, so I looked at the excellent [rounded corners plugin](http://plugins.jquery.com/project/corners) and it worked like a charm in every browser I tried. Being a developer, however, I noticed the opportunity to make it a bit more efficient. The script already includes code for detecting if the current browser supports webkit rounded corners (Safari-based browsers). If so, it uses raw CSS instead of creating layers of divs.
I thought that it would be awesome if the same kind of check could be performed to see if the browser supports the Gecko-specific `-moz-border-radius-*` properties and if so utilize them.
The check for webkit support looks like this:
```
var webkitAvailable = false;
try {
webkitAvailable = (document.defaultView.getComputedStyle(this[0], null)['-webkit-border-radius'] != undefined);
}
catch(err) {}
```
That, however, did not work for `-moz-border-radius` so I started checking for alternatives.
My fallback solution is, of course, to use browser detection, but that's far from recommended practice.
My best solution yet is as follows.
```
var mozborderAvailable = false;
try {
var o = jQuery('<div>').css('-moz-border-radius', '1px');
mozborderAvailable = $(o).css('-moz-border-radius-topleft') == '1px';
o = null;
} catch(err) {}
```
It's based on the theory that Gecko "expands" the composite -moz-border-radius to the four sub-properties
* `-moz-border-radius-topleft`
* `-moz-border-radius-topright`
* `-moz-border-radius-bottomleft`
* `-moz-border-radius-bottomright`
Is there any javascript/CSS guru out there that have a better solution?
(The feature request for this page is at <http://plugins.jquery.com/node/3619>)
|
How about this?
```
var mozborderAvailable = false;
try {
if (typeof(document.body.style.MozBorderRadius) !== "undefined") {
mozborderAvailable = true;
}
} catch(err) {}
```
I tested it in Firefox 3 (true) and false in: Safari, IE7, and Opera.
(Edit: better undefined test)
|
Why not use `-moz-border-radius` and `-webkit-border-radius` in the stylesheet? It's valid CSS and throwing an otherwise unused attribute would hurt less than having javascript do the legwork of figuring out if it should apply it or not.
Then, in the javascript you'd just check if the browser is IE (or Opera?) - if it is, it'll ignore the proprietary tags, and your javascript could do it's thing.
Maybe I'm missing something here...
|
The best way of checking for -moz-border-radius support
|
[
"",
"javascript",
"css",
""
] |
In Java 5 and above you have the foreach loop, which works magically on anything that implements `Iterable`:
```
for (Object o : list) {
doStuff(o);
}
```
However, `Enumeration` still does not implement `Iterable`, meaning that to iterate over an `Enumeration` you must do the following:
```
for(; e.hasMoreElements() ;) {
doStuff(e.nextElement());
}
```
Does anyone know if there is a reason why `Enumeration` still does not implement `Iterable`?
**Edit:** As a clarification, I'm not talking about the language concept of an [enum](http://en.wikipedia.org/wiki/Enumerated_type), I'm talking a Java-specific class in the Java API called '[Enumeration](http://java.sun.com/j2se/1.5.0/docs/api/java/util/Enumeration.html)'.
|
Enumeration hasn't been modified to support Iterable because it's an interface, not a concrete class (like Vector, which was modified to support the Collection interface).
If Enumeration was changed to support Iterable it would break a bunch of people's code.
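A common workaround is a small adapter that wraps an `Enumeration` in an `Iterable`, making the for-each loop work without touching the interface itself — a sketch (the class and method names are mine, not part of the JDK):

```java
import java.util.Enumeration;
import java.util.Iterator;

public class EnumerationUtil {
    // Adapt an Enumeration to Iterable for use in a for-each loop.
    // Note the result can only be iterated once, since the underlying
    // Enumeration is consumed.
    public static <T> Iterable<T> iterable(final Enumeration<T> e) {
        return new Iterable<T>() {
            public Iterator<T> iterator() {
                return new Iterator<T>() {
                    public boolean hasNext() { return e.hasMoreElements(); }
                    public T next() { return e.nextElement(); }
                    public void remove() { throw new UnsupportedOperationException(); }
                };
            }
        };
    }
}
```

Usage would then be `for (String s : EnumerationUtil.iterable(v.elements())) { ... }`.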
|
As an easy and **clean** way of using an Enumeration with the enhanced for loop, convert to an ArrayList with java.util.Collections.list.
```
for (TableColumn col : Collections.list(columnModel.getColumns())) {
```
(javax.swing.table.TableColumnModel.getColumns returns Enumeration.)
Note, this may be very slightly less efficient.
|
Why aren't Enumerations Iterable?
|
[
"",
"java",
"enumeration",
"iterable",
""
] |
I've had a lot of good experiences learning about web development on [w3schools.com](http://www.w3schools.com/). It's hit or miss, I know, but the PHP and CSS sections specifically have proven very useful for reference.
Anyway, I was wondering if there was a similar site for [jQuery](http://en.wikipedia.org/wiki/JQuery). I'm interested in learning, but I need it to be online/searchable, so I can refer back to it easily when I need the information in the future.
Also, as a brief aside, is jQuery worth learning? Or should I look at different JavaScript libraries? I know Jeff uses jQuery on Stack Overflow and it seems to be working well.
Thanks!
**Edit**: jQuery's website has a [pretty big list of tutorials](http://docs.jquery.com/Tutorials), and a seemingly comprehensive [documentation page](http://docs.jquery.com/Main_Page). I haven't had time to go through it all yet, has anyone else had experience with it?
**Edit 2**: It seems Google is now hosting the jQuery libraries. That should give jQuery a pretty big advantage in terms of publicity.
Also, if everyone uses a single unified jQuery library hosted at the same place, it should get cached for most Internet users early on and therefore not impact the download footprint of your site should you decide to use it.
## 2 Months Later...
**Edit 3**: I started using jQuery on a project at work recently and it is great to work with! Just wanted to let everyone know that I have concluded it is ***ABSOLUTELY*** worth it to learn and use jQuery.
Also, I learned almost entirely from the Official jQuery [documentation](http://docs.jquery.com/Main_Page) and [tutorials](http://docs.jquery.com/Tutorials). It's very straightforward.
## 10 Months Later...
jQuery is a part of just about every web app I've made since I initially wrote this post. It makes progressive enhancement a breeze, and helps make the code maintainable.
Also, all the jQuery plug-ins are an invaluable resource!
## 3 Years Later...
Still using jQuery just about every day. I now author jQuery plug-ins and consult full time. I'm primarily a Djangonaut but I've done several javascript only contracts with jQuery. It's a life saver.
From one jQuery user to another... You should look at [templating with jQuery](http://api.jquery.com/category/plugins/templates/) (or underscore -- see below).
Other things I've found valuable in addition to jQuery (with estimated portion of projects I use it on):
* [jQuery Form Plugin](http://jquery.malsup.com/form/) (95%)
* [jQuery Form Example Plugin](http://mudge.github.com/jquery_example/) (75%)
* [jQuery UI](http://jqueryui.com/) (70%)
* [Underscore.js](http://documentcloud.github.com/underscore/) (80%)
* [CoffeeScript](http://jashkenas.github.com/coffee-script/) (30%)
* [Backbone.js](http://documentcloud.github.com/backbone/) (10%)
|
Rick Strahl and Matt Berseth's blogs both tipped me into jQuery and man am I glad they did. jQuery completely changes a) your client programming perspective, b) the grief it causes it you, and c) how much fun it can be!
<http://www.west-wind.com/weblog/>
<http://mattberseth.com/>
I used the book jQuery in Action
[http://www.amazon.com/jQuery-Action-Bear-Bibeault/dp/1933988355/ref=sr\_1\_1?ie=UTF8&s=books&qid=1219716122&sr=1-1](https://rads.stackoverflow.com/amzn/click/com/1933988355) (I bought it used at Amazon for about $22). It has been a big help into bootstrapping me into jQuery. The documentation at jquery.com are also very helpful.
A place where jQuery falls a little flat is with its UI components. Those don't seem to be quite ready for primetime just yet.
It could be that [Prototype](http://en.wikipedia.org/wiki/Prototype_JavaScript_Framework) or [MooTools](http://en.wikipedia.org/wiki/MooTools) or [ExtJS](http://en.wikipedia.org/wiki/Ext_%28JavaScript_library%29) are as good as jQuery. But for me, jQuery seems to have a little more momentum behind it right now and that counts for something for me.
Check jQuery out. It is very cool!
|
A great resource for learning jQuery is: [Learning jQuery](http://www.learningjquery.com). The author, Karl Swedberg, also co-wrote the book titled... ready? Yup, *[Learning jQuery](https://rads.stackoverflow.com/amzn/click/com/1847192505)*. Remy Sharp also has great info geared towards the visual aspects of jQuery on [his blog](http://www.jqueryfordesigners.com/).
|
Where can I learn jQuery? Is it worth it?
|
[
"",
"javascript",
"jquery",
"review",
""
] |
I have tried both of :
```
ini_set('include_path', '.:/usr/share/php5:/usr/share/php5/PEAR:lib:app/classes');
```
and also :
```
php_value include_path ".:/usr/share/php5:/usr/share/php5/PEAR:lib:app/classes"
```
in the .htaccess file.
Both methods actually **do work**, but only intermittently. That is, they will work fine for about 37 page requests and then fail for about 42 page requests, resulting in a require() call causing a fatal error, effectively crashing the site.
I'm not even sure where to begin trying to find out what is going on!
---
@[cnote](https://stackoverflow.com/questions/41836/setting-include-path-in-php-intermittently-fails-why#41877)
> Looks like you duplicated the current directory in your include path. Try removing one of the '.:' from your string.
The in script version was originally
```
ini_set('include_path', ini_get('include_path') . PATH_SEPARATOR . 'lib' . PATH_SEPARATOR . 'app' . DIRECTORY_SEPARATOR . 'classes');
```
and thus the .:.: was coming from the existing path:
```
ini_get('include_path')
```
I tried removing it anyway and the problem persists.
|
It turned out the issue was related to a PHP bug in 5.2.5
Setting an "admin\_flag" for include\_path caused the include path to be empty in some requests, and Plesk sets an admin\_flag in the default config for something or other. An update of PHP solved the issue.
<http://bugs.php.net/bug.php?id=43677>
|
Have you tried [set\_include\_path()](http://www.php.net/manual/en/function.set-include-path.php)? As a benefit, this returns false on failure, allowing you to at least catch the occurrence and generate some meaningful debug data.
Additionally, you should be using the constant `PATH_SEPARATOR` as it differs between windows / \*nix.
As a specific example:
```
set_include_path('.' . PATH_SEPARATOR . './app/lib' . PATH_SEPARATOR . get_include_path());
```
(the get\_include\_path() on the end means whatever your ini / htaccess path is set to will remain)
|
Setting include path in PHP intermittently fails
|
[
"",
"php",
"include-path",
""
] |
When should I include PDB files for a production release? Should I use the `Optimize code` flag and how would that affect the information I get from an exception?
If there is a noticeable performance benefit I would want to use the optimizations but if not I'd rather have accurate debugging info. What is typically done for a production app?
|
When you want to see source filenames and line numbers in your stacktraces, generate PDBs using the pdb-only option. Optimization is separate from PDB generation, i.e. you can optimize *and* generate PDBs without a performance hit.
From [the C# Language Reference](http://msdn.microsoft.com/en-us/library/8cw0bt21(VS.80).aspx)
> If you use /debug:full, be aware that there is some impact on the speed and size of JIT optimized code and a small impact on code quality with /debug:full. We recommend /debug:pdbonly or no PDB for generating release code.
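Concretely, the two switches are independent, so a typical optimized release compile with symbols looks like this (file names are placeholders):

```
csc /optimize+ /debug:pdbonly /out:MyApp.exe Program.cs
```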
|
To answer your first question, you only need to include PDBs for a production release if you need line numbers for your exception reports.
To answer your second question, using the "Optimise" flag with PDBs means that any [stack "collapse" will be reflected in the stack trace](https://www.hanselman.com/blog/release-is-not-debug-64bit-optimizations-and-c-method-inlining-in-release-build-call-stacks). I'm not sure whether the actual line number reported can be wrong - this needs more investigation.
To answer your third question, you can have the best of both worlds with a rather neat trick. The major differences between the default debug build and default release build are that when doing a default release build, optimization is turned on and debug symbols are not emitted. So, in four steps:
1. Change your release config to emit debug symbols. This has virtually no effect on the performance of your app, and is very useful if (when?) you need to debug a release build of your app.
2. Compile using your new release build config, i.e. *with* debug symbols and *with* optimization. Note that 99% of code optimization is done by the JIT compiler, not the language compiler.
3. Create a text file in your app's folder called xxxx.exe.ini (or dll or whatever), where xxxx is the name of your executable. This text file should initially look like:
```
[.NET Framework Debugging Control]
GenerateTrackingInfo=0
AllowOptimize=1
```
4. With these settings, your app runs at full speed. When you want to debug your app by turning on debug tracking and possibly turning off (CIL) code optimization, just use the following settings:
```
[.NET Framework Debugging Control]
GenerateTrackingInfo=1
AllowOptimize=0
```
**EDIT** According to cateye's comment, [this can also work in a hosted environment](http://martin.bz/blog/asp-net-mvc-source-debugging-the-easy-way) such as ASP.NET.
|
PDB files for production app and the "Optimize code" flag
|
[
"",
"c#",
"visual-studio",
"build-process",
""
] |
I was trying to get my head around XAML and thought that I would try writing some code.
I am trying to add a grid with 6 rows and 6 columns, then add a text block into one of the grid cells. I don't seem to be able to reference the cell that I want. There is no method on the grid that I can add the text block to; there is only `Grid.Children.Add(object)`, no cell definition.
XAML:
```
<Page x:Class="WPF_Tester.Page1"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Title="Page1"
Loaded="Page_Loaded">
</Page>
```
C#:
```
private void Page_Loaded(object sender, RoutedEventArgs e)
{
//create the structure
Grid g = new Grid();
g.ShowGridLines = true;
g.Visibility = Visibility.Visible;
//add columns
for (int i = 0; i < 6; ++i)
{
ColumnDefinition cd = new ColumnDefinition();
cd.Name = "Column" + i.ToString();
g.ColumnDefinitions.Add(cd);
}
//add rows
for (int i = 0; i < 6; ++i)
{
RowDefinition rd = new RowDefinition();
rd.Name = "Row" + i.ToString();
g.RowDefinitions.Add(rd);
}
TextBlock tb = new TextBlock();
tb.Text = "Hello World";
g.Children.Add(tb);
}
```
**Update**
Here is the spooky bit:
* Using VS2008 Pro on XP
* WPFbrowser Project Template (3.5 verified)
I don't get the methods in autocomplete.
|
WPF makes use of a funky thing called [attached properties](http://msdn.microsoft.com/en-us/library/ms749011.aspx). So in your XAML you might write this:
```
<TextBlock Grid.Row="0" Grid.Column="0" />
```
And this will effectively move the TextBlock into cell (0,0) of your grid.
In code this looks a little strange. I believe it'd be something like:
```
g.Children.Add(tb);
Grid.SetRow(tb, 0);
Grid.SetColumn(tb, 0);
```
Have a look at that link above - attached properties make things really easy to do in XAML perhaps at the expense of intuitive-looking code.
|
The cell location is an attached property - the value belongs to the TextBlock rather than the Grid. However, since the property itself belongs to the Grid, you need to use either the property definition field or the provided static setter functions.
```
TextBlock tb = new TextBlock();
//
// Locate tb in the second row, third column.
// Row and column indices are zero-indexed, so this
// equates to row 1, column 2.
//
Grid.SetRow(tb, 1);
Grid.SetColumn(tb, 2);
```
|
How do I generate WPF controls through code
|
[
"",
"c#",
".net",
"wpf",
"xaml",
""
] |
I want to create a client side mail creator web page. I know the problems of using the mailto action in an html form (not standard, no default mail appication set on the client). But the web page isn't very important, and they don't care very much.
The mail created by the mailto action has the syntax:
> subject: undefined subject
> body:
>
> param1=value1
> param2=value2
> .
> .
> .
> paramn=valuen
Can I use JavaScript to format the mail like this?
> Subject:XXXXX
>
> Body:
> Value1;Value2;Value3...ValueN
|
What we used in a project is a popup window that opens a mailto: link; it is the only way we found to compose a mail within the default mail client that works with all mail clients (at least all the ones our clients used).
```
var addresses = ""; // the recipient goes between the quotation marks; separate addresses with a ;
var body = ""; // the message text goes between the quotation marks, or put a variable in their place
var subject = ""; // the subject of the message goes between the quotation marks
var href = "mailto:" + addresses + "?"
+ "subject=" + subject + "&"
+ "body=" + body;
var wndMail;
wndMail = window.open(href, "_blank", "scrollbars=yes,resizable=yes,width=10,height=10");
if(wndMail)
{
wndMail.close();
}
```
|
You more or less only have two alternatives when sending mail via the browser..
1. make a page that takes user input, and allows them to send the mail via your web-server. You need some kind of server-side scripting for this.
2. use a mailto: link to trigger opening of the users registered mail client. This has the obvious pitfalls you mentioned, and is less flexible. It needs less work though.
|
Can I use JavaScript to create a client side email?
|
[
"",
"javascript",
"email",
""
] |
First of all, I know how to build a Java application. But I have always been puzzled about where to put my classes. There are proponents for organizing the packages in a strictly domain oriented fashion, others separate by tier.
I myself have always had problems with
* naming,
* placing
So,
1. Where do you put your domain specific constants (and what is the best name for such a class)?
2. Where do you put classes for stuff which is both infrastructural and domain specific (for instance I have a FileStorageStrategy class, which stores the files either in the database or, alternatively, on the file system)?
3. Where to put Exceptions?
4. Are there any standards to which I can refer?
|
I've really come to like Maven's [Standard Directory Layout](http://maven.apache.org/guides/introduction/introduction-to-the-standard-directory-layout.html "S").
One of the key ideas for me is to have two source roots - one for production code and one for test code like so:
```
MyProject/src/main/java/com/acme/Widget.java
MyProject/src/test/java/com/acme/WidgetTest.java
```
(here, both src/main/java and src/test/java are source roots).
Advantages:
* Your tests have package (or "default") level access to your classes under test.
* You can easily package only your production sources into a JAR by dropping src/test/java as a source root.
One rule of thumb about class placement and packages:
Generally speaking, well structured projects will be free of [circular dependencies](http://en.wikipedia.org/wiki/Circular_dependency). Learn when they are bad (and when they are [not](http://beust.com/weblog/archives/000208.html)), and consider a tool like [JDepend](http://www.google.ca/search?q=JDepend&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a) or [SonarJ](http://www.hello2morrow.com/products/sonargraph) that will help you eliminate them.
|
I'm a huge fan of organized sources, so I always create the following directory structure:
```
/src - for your packages & classes
/test - for unit tests
/docs - for documentation, generated and manually edited
/lib - 3rd party libraries
/etc - unrelated stuff
/bin (or /classes) - compiled classes, output of your compile
/dist - for distribution packages, hopefully auto generated by a build system
```
In /src I'm using the default Java patterns: Package names starting with your domain (org.yourdomain.yourprojectname) and class names reflecting the OOP aspect you're creating with the class (see the other commenters). Common package names like *util*, *model*, *view*, *events* are useful, too.
I tend to put constants for a specific topic in an own class, like *SessionConstants* or *ServiceConstants* in the same package of the domain classes.
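A minimal sketch of such a per-topic constants holder (class and constant names here are illustrative only, not prescribed):

```java
// A non-instantiable holder for session-related constants, kept in the
// same package as the domain classes it describes.
public final class SessionConstants {

    public static final int MAX_IDLE_MINUTES = 30;
    public static final String SESSION_COOKIE = "JSESSIONID";

    private SessionConstants() {
        // prevent instantiation
    }

    public static void main(String[] args) {
        System.out.println(SESSION_COOKIE + " expires after "
                + MAX_IDLE_MINUTES + " minutes");
    }
}
```

The private constructor makes the class a pure namespace: callers can only read the constants, never create an instance.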
|
How should I structure a Java application, where do I put my classes?
|
[
"",
"java",
"architecture",
""
] |
Is there any free or commercial component written in .NET (no COM interop) that will work with most twain scanners?
|
In my company we use [Pegasus](http://www.pegasusimaging.com). It's great.
|
> ### [TwainDotNet](https://github.com/tmyroadctfig/twaindotnet)
I've just wrapped up the code from Thomas Scheidegger's article ([CodeProject: .NET TWAIN image scanning](http://www.codeproject.com/KB/dotnet/twaindotnet.aspx)) into a Google code project: [http://code.google.com/p/twaindotnet/](https://github.com/tmyroadctfig/twaindotnet)
I've cleaned up the API a bit and added WPF support, so check it out. :)
|
.NET Scanning API
|
[
"",
"c#",
".net",
"twain",
"scanning",
""
] |
We have a requirement in the project to store all revisions (change history) for the entities in the database. Currently we have 2 design proposals for this:
e.g. for "Employee" Entity
**Design 1:**
```
-- Holds Employee Entity
"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"
-- Holds the Employee Revisions in Xml. The RevisionXML will contain
-- all data of that particular EmployeeId
"EmployeeHistories (EmployeeId, DateModified, RevisionXML)"
```
**Design 2:**
```
-- Holds Employee Entity
"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"
-- In this approach we have basically duplicated all the fields on Employees
-- in the EmployeeHistories and storing the revision data.
"EmployeeHistories (EmployeeId, RevisionId, DateModified, FirstName,
LastName, DepartmentId, .., ..)"
```
Is there any other way of doing this thing?
The problem with "Design 1" is that we have to parse the XML each time we need to access the data. This will slow the process and also adds some limitations, e.g. we cannot join on the revision data fields.
And the problem with the "Design 2" is that we have to duplicate each and every field on all entities (We have around 70-80 entities for which we want to maintain revisions).
|
1. Do **not** put it all in one table with an IsCurrent discriminator attribute. This just causes problems down the line, requires surrogate keys and all sorts of other problems.
2. Design 2 does have problems with schema changes. If you change the Employees table you have to change the EmployeeHistories table and all the related sprocs that go with it. Potentially doubles your schema change effort.
3. Design 1 works well and if done properly does not cost much in terms of a performance hit. You could use an XML schema and even XML indexes to get over possible performance problems. Your comment about parsing the XML is valid, but you could easily create a view using XQuery, which you can include in queries and join to. Something like this...
```
CREATE VIEW EmployeeHistory
AS
SELECT EmployeeId,
       RevisionXML.value('(/employee/FirstName)[1]', 'varchar(50)') AS FirstName,
       RevisionXML.value('(/employee/LastName)[1]', 'varchar(100)') AS LastName,
       RevisionXML.value('(/employee/DepartmentId)[1]', 'integer') AS DepartmentId
FROM EmployeeHistories
```
|
I think the key question to ask here is 'Who / What is going to be using the history'?
If it's going to be mostly for reporting / human readable history, we've implemented this scheme in the past...
Create a table called 'AuditTrail' or something that has the following fields...
```
[ID] [int] IDENTITY(1,1) NOT NULL,
[UserID] [int] NULL,
[EventDate] [datetime] NOT NULL,
[TableName] [varchar](50) NOT NULL,
[RecordID] [varchar](20) NOT NULL,
[FieldName] [varchar](50) NULL,
[OldValue] [varchar](5000) NULL,
[NewValue] [varchar](5000) NULL
```
You can then add a 'LastUpdatedByUserID' column to all of your tables which should be set every time you do an update / insert on the table.
You can then add a trigger to every table to catch any insert / update that happens and creates an entry in this table for each field that's changed. Because the table is also being supplied with the 'LastUpdateByUserID' for each update / insert, you can access this value in the trigger and use it when adding to the audit table.
We use the RecordID field to store the value of the key field of the table being updated. If it's a combined key, we just do a string concatenation with a '~' between the fields.
I'm sure this system may have drawbacks - for heavily updated databases the performance may be hit, but for my web-app, we get many more reads than writes and it seems to be performing pretty well. We even wrote a little VB.NET utility to automatically write the triggers based on the table definitions.
Just a thought!
|
Database Design for Revisions?
|
[
"",
"sql",
"database",
"database-design",
"versioning",
""
] |
While the C# spec does include a pre-processor and basic directives (#define, #if, etc), the language does not have the same flexible pre-processor found in languages such as C/C++. I believe the lack of such a flexible pre-processor was a design decision made by Anders Hejlsberg (although, unfortunately, I can't find reference to this now). From experience, this is certainly a good decision, as there were some really terrible un-maintainable macros created back when I was doing a lot of C/C++.
That said, there are a number of scenarios where I could find a slightly more flexible pre-processor to be useful. Code such as the following could be improved by some simple pre-processor directives:
```
public string MyProperty
{
get { return _myProperty; }
set
{
if (value != _myProperty)
{
_myProperty = value;
NotifyPropertyChanged("MyProperty");
// This line above could be improved by replacing the literal string with
// a pre-processor directive like "#Property", which could be translated
// to the string value "MyProperty" This new notify call would be as follows:
// NotifyPropertyChanged(#Property);
}
}
}
```
Would it be a good idea to write a pre-processor to handle extremely simple cases like this? Steve McConnell wrote in [Code Complete](https://rads.stackoverflow.com/amzn/click/com/0735619670) (p208):
> *Write your own preprocessor* If a language doesn't include a preprocessor, it's fairly easy to write one...
I am torn. It was a design decision to leave such a flexible pre-processor out of C#. However, an author I highly respect mentions it may be ok in some circumstances.
**Should I build a C# pre-processor? Is there one available that does the simple things I want to do?**
|
Consider taking a look at an aspect-oriented solution like [PostSharp](http://www.postsharp.org/), which injects code after the fact based on custom attributes. It's the opposite of a precompiler but can give you the sort of functionality you're looking for (PropertyChanged notifications etc).
|
> Should I build a C# pre-processor? Is there one available that does the simple things I want to do?
You can always use the C pre-processor -- C# is close enough, syntax-wise. M4 is also an option.
|
C# Preprocessor
|
[
"",
"c#",
"c-preprocessor",
""
] |
I'm having issues getting the C sockets API to work properly in C++ on z/OS.
Although I am including `sys/socket.h`, I still get compile time errors telling me that `AF_INET` is not defined.
Am I missing something obvious, or is this related to the fact that being on z/OS makes my problems much more complicated?
I discovered that there is a `#ifdef` that I'm hitting. Apparently, z/OS isn't happy unless I define which "type" of sockets I'm using with:
```
#define _OE_SOCKETS
```
Now, I personally have no idea what this `_OE_SOCKETS` is actually for, so if any z/OS sockets programmers are out there (all 3 of you), perhaps you could give me a rundown of how this all works?
Test App:
```
#include <sys/socket.h>
int main()
{
return AF_INET;
}
```
Compile/Link Output:
```
cxx -Wc,xplink -Wl,xplink -o inet_test inet.C
"./inet.C", line 5.16: CCN5274 (S) The name lookup for "AF_INET" did not find a declaration.
CCN0797(I) Compilation failed for file ./inet.C. Object file not created.
```
A check of sys/sockets.h does include the definition I need, and as far as I can tell, it is not being blocked by any `#ifdef` statements.
I have however noticed it contains the following:
```
#ifdef __cplusplus
extern "C" {
#endif
```
which encapsulates basically the whole file? Not sure if it matters.
|
Keep a copy of the IBM manuals handy:
* [z/OS V1R11.0 XL C/C++ Programming Guide](http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/CBCPG1A0/CCONTENTS?DT=20090606065305)
* [z/OS V1R11.0 XL C/C++ Run-Time Library Reference](http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/EDCLB1A0/CCONTENTS?DT=20090607203246)
The IBM publications are generally very good, but you need to get used to their format, as well as knowing where to look for an answer. You'll find quite often that a feature you want to use is guarded by a "feature test macro".
You should ask your friendly system programmer to install the [XL C/C++ Run-Time Library Reference: Man Pages](http://www-03.ibm.com/servers/eserver/zseries/zos/le/manpgs.html) on your system. Then you can do things like "man connect" to pull up the man page for the socket connect() API. When I do that, this is what I see:
FORMAT
X/Open
```
#define _XOPEN_SOURCE_EXTENDED 1
#include <sys/socket.h>
int connect(int socket, const struct sockaddr *address, socklen_t address_len);
```
Berkeley Sockets
```
#define _OE_SOCKETS
#include <sys/types.h>
#include <sys/socket.h>
int connect(int socket, struct sockaddr *address, int address_len);
```
|
I've had no trouble using the BSD sockets API in C++, in GNU/Linux. Here's the sample program I used:
```
#include <sys/socket.h>
int
main()
{
return AF_INET;
}
```
So my take on this is that z/OS is probably the complicating factor here, however, because I've never used z/OS before, much less programmed in it, I can't say this definitively. :-P
|
How to use the C socket API in C++ on z/OS
|
[
"",
"c++",
"c",
"sockets",
"mainframe",
"zos",
""
] |
I am looking for a (preferably) command-line tool that can reformat the C# source code on a directory tree. Ideally, I should be able to customize the formatting. Bonus points if the tool can be run on [Mono](https://en.wikipedia.org/wiki/Mono_%28software%29) (or Linux).
|
You could also try [NArrange](http://www.narrange.net) to reformat your code. The formatting options it supports are still pretty limited, but it can process an entire directory and is a command-line tool. Also, NArrange runs under Mono.
|
You could give [Artistic Style](http://astyle.sourceforge.net/) a try. It requires [Perl](https://en.wikipedia.org/wiki/Perl) to be installed though.
It's got a decent list of formatting options, and supports C and Java as well.
|
Is there a tool for reformatting C# code?
|
[
"",
"c#",
"code-formatting",
""
] |
I'm about to start a fairly Ajax heavy feature in my company's application. What I need to do is make an Ajax callback every few minutes a user has been on the page.
* I don't need to do any DOM updates before, after, or during the callbacks.
* I don't need any information from the page, just from a site cookie which should always be sent with requests anyway, and an ID value.
What I'm curious to find out, is if there is any clean and simple way to make a JavaScript Ajax callback to an ASP.NET page without posting back the rest of the information on the page. I'd like to not have to do this if it is possible.
I really just want to be able to call a single method on the page, nothing else.
Also, I'm restricted to ASP.NET 2.0 so I can't use any of the new 3.5 framework ASP AJAX features, although I can use the ASP AJAX extensions for the 2.0 framework.
**UPDATE**
I've decided to accept [DanP](https://stackoverflow.com/questions/27258/aspnet-javascript-callbacks-without-full-postbacks#27270)'s answer as it seems to be exactly what I'm looking for. Our site already uses jQuery for some things so I'll probably use jQuery for making requests since in my experience it seems to perform much better than ASP's AJAX framework does.
What do you think would be the best method of transferring data to the IHttpHandler? Should I add variables to the query string or POST the data I need to send?
The only thing I think I have to send is a single ID, but I can't decide what the best method is to send the ID and have the IHttpHandler handle it. I'd like to come up with a solution that would prevent a person with basic computer skills from accidentally or intentionally accessing the page directly or repeating requests. Is this possible?
|
If you don't want to create a blank page, you could call a IHttpHandler (ashx) file:
```
public class RSSHandler : IHttpHandler
{
public void ProcessRequest (HttpContext context)
{
context.Response.ContentType = "text/xml";
string sXml = BuildXMLString(); //not showing this function,
//but it creates the XML string
context.Response.Write( sXml );
}
public bool IsReusable
{
get { return true; }
}
}
```
|
You should use ASP.Net Callbacks which were introduced in Asp.Net 2.0. Here is an article that should get you set to go:
[Implementing Client Callbacks Programmatically Without Postbacks in ASP.NET Web Pages](http://msdn.microsoft.com/en-us/library/ms178208.aspx)
Edit: Also look at this:
[ICallback & JSON Based JavaScript Serialization](http://aspalliance.com/1537)
|
ASP.NET JavaScript Callbacks Without Full PostBacks?
|
[
"",
"asp.net",
"javascript",
"ajax",
""
] |
I've heard rumors that PHP is planning on introducing a "goto" command. What is it supposed to be doing?
I've tried searching a bit, but haven't found anything awfully descriptive. I understand that it won't be a "`GOTO 10`"-like command...
|
They are not adding a real GOTO, but extending the BREAK keyword to use static labels. Basically, it will be enhancing the ability to break out of ~~switch~~ nested if statements. Here's the concept example I found:
```
<?php
for ($i = 0; $i < 9; $i++) {
if (true) {
break blah;
}
echo "not shown";
blah:
echo "iteration $i\n";
}
?>
```
Of course, once the GOTO "rumor" was out, there was nothing to stop some evil guys from propagating an additional **COMEFROM** joke. Be on your toes.
See also:
<http://www.php.net/~derick/meeting-notes.html#adding-goto>
|
I'm always astonished at how incredibly dumb the PHP designers are.
If the purpose of using GOTOs is to make breaking out of multiply nested
loops more efficient there's a better way: labelled code blocks
and break statements that can reference labels:
```
a: for (...) {
b: for (...) {
c: for (...) {
...
break a;
}
}
}
```
Now it is clear which loop/block to exit, and the exit is structured;
you can't get spaghetti code with this like you can with real gotos.
This is an old, old, old idea. Designing good control flow management
structures has been solved since the 70s, and the literature on all this
is long since written up. The Bohm-Jacopini theorem showed that
you could code anything with function call, if-then-else, and while loops.
In practice, to break out of deeply nested blocks, Bohm-Jacopini-style
coding required extra boolean flags ("set this flag to get out of the loop"),
which was clumsy coding-wise and inefficient (you don't want such flags
in your inner loop). With if-then-else, the various loops (while, for)
and break-to-labelled-block, you can code any algorithm with no
loss in efficiency. Why don't people read the literature instead
of copying what C did? Grrr.
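For comparison, Java has had exactly this labelled-break construct since 1.0:

```java
public class LabelledBreakDemo {
    public static void main(String[] args) {
        int found = -1;
        search:
        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++) {
                if (i * 3 + j == 4) {
                    found = i * 3 + j;
                    break search; // exits both loops at once, no flags needed
                }
            }
        }
        System.out.println("found = " + found);
    }
}
```

The label names the outer loop, and `break search` unwinds both loops in one structured step.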
|
GOTO command in PHP?
|
[
"",
"php",
"language-features",
"goto",
""
] |
How can I monitor an SQL Server database for changes to a table without using triggers or modifying the structure of the database in any way? My preferred programming environment is [.NET](http://en.wikipedia.org/wiki/.NET_Framework) and C#.
I'd like to be able to support any [SQL Server 2000](http://en.wikipedia.org/wiki/Microsoft_SQL_Server#Genesis) SP4 or newer. My application is a bolt-on data visualization for another company's product. Our customer base is in the thousands, so I don't want to have to put in requirements that we modify the third-party vendor's table at every installation.
By *"changes to a table"* I mean changes to table data, not changes to table structure.
Ultimately, I would like the change to trigger an event in my application, instead of having to check for changes at an interval.
---
The best course of action given my requirements (no triggers or schema modification, SQL Server 2000 and 2005) seems to be to use the `BINARY_CHECKSUM` function in [T-SQL](http://en.wikipedia.org/wiki/Transact-SQL). The way I plan to implement is this:
Every X seconds run the following query:
```
SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*))
FROM sample_table
WITH (NOLOCK);
```
And compare that against the stored value. If the value has changed, go through the table row by row using the query:
```
SELECT row_id, BINARY_CHECKSUM(*)
FROM sample_table
WITH (NOLOCK);
```
And compare the returned checksums against stored values.
|
Take a look at the CHECKSUM command:
```
SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM sample_table WITH (NOLOCK);
```
That will return the same number each time it's run as long as the table contents haven't changed. See my post on this for more information:
[CHECKSUM](http://msdn.microsoft.com/en-us/library/aa258245(SQL.80).aspx)
Here's how I used it to rebuild cache dependencies when tables changed:
[ASP.NET 1.1 database cache dependency (without triggers)](http://weblogs.asp.net/jgalloway/archive/2005/05/07/406056.aspx)
|
**Unfortunately CHECKSUM does not always work properly to detect changes**.
It is only a primitive checksum, not a cyclic redundancy check (CRC) calculation.
Therefore you can't use it to detect all changes; e.g. symmetrical changes result in the same CHECKSUM!
E.g. the solution with `CHECKSUM_AGG(BINARY_CHECKSUM(*))` will always deliver 0 for all 3 of these tables with different content:
```
SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM
(
  SELECT 1 as numA, 1 as numB
  UNION ALL
  SELECT 1 as numA, 1 as numB
) q
-- delivers 0!

SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM
(
  SELECT 1 as numA, 2 as numB
  UNION ALL
  SELECT 1 as numA, 2 as numB
) q
-- delivers 0!

SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM
(
  SELECT 0 as numA, 0 as numB
  UNION ALL
  SELECT 0 as numA, 0 as numB
) q
-- delivers 0!
```
|
Check for changes to an SQL Server table?
|
[
"",
"sql",
"sql-server",
"datatable",
"rdbms",
""
] |
I am aware that in [.NET](http://en.wikipedia.org/wiki/.NET_Framework) there are three timer types (see *[Comparing the Timer Classes in the .NET Framework Class Library](http://msdn.microsoft.com/en-us/magazine/cc164015.aspx)*). I have chosen a threaded timer as the other types can drift if the main thread is busy, and I need this to be reliable.
The way this timer works is that the timer callback is run on a separate (thread-pool) thread, so it can always tick along even while work is being completed on the main thread.
The issue with this timer in a console application is that, while the timer is ticking along on another thread, the main thread is not doing anything, so the application closes.
I tried adding a `while (true)` loop, but then the main thread is too busy when the timer does go off.
|
You can use something like `Console.ReadLine()` to block the main thread, so other background threads (like timer threads) will still work. You may also use an [AutoResetEvent](https://learn.microsoft.com/en-us/dotnet/api/system.threading.autoresetevent) to block execution; then, when you need to, you can call the Set() method on that AutoResetEvent object to release the main thread. Also ensure that your reference to the Timer object doesn't go out of scope and get garbage collected.
|
Consider using a [ManualResetEvent](https://learn.microsoft.com/en-us/dotnet/api/system.threading.manualresetevent) to block the main thread at the end of its processing, and call `Set()` on it once the timer's processing has finished. If this is something that needs to run continuously, consider moving this into a service process instead of a console app.
|
Reliable timer in a console application
|
[
"",
"c#",
".net",
"vb.net",
"timer",
""
] |
I'm trying to wrap my head around reflection, so I decided to add plugin capability to a program that I'm writing. The only way to understand a concept is to get your fingers dirty and write the code, so I went the route of creating a simple interface library consisting of the IPlugin and IHost interfaces, a plugin implementation library of classes that implement IPlugin, and a simple console project that instantiates the IHost implementation class that does simple work with the plugin objects.
Using reflection, I wanted to iterate through the types contained inside my plugin implementation DLL and create instances of those types. I was able to successfully instantiate classes with this code, but I could not cast the created object to the interface.
I tried this code but I couldn't cast object o as I expected. I stepped through the process with the debugger and the proper constructor was called. Quickwatching object o showed me that it had the fields and properties that I expected to see in the implementation class.
```
foreach (Assembly assembly in assemblies)
{
    foreach (Type type in assembly.GetTypes())
    {
        // Filter out unwanted types
        if (!type.IsClass || type.IsNotPublic || type.IsAbstract)
            continue;

        // This successfully created the right object
        object o = Activator.CreateInstance(type);

        // This threw an InvalidCastException, or returned null for an "as"
        // cast, even though the object implemented IPlugin
        IPlugin i = (IPlugin) o;
    }
}
```
I made the code work with this.
```
using System.Runtime.Remoting;
ObjectHandle oh = Activator.CreateInstance(assembly.FullName, type.FullName);
// This worked as I intended
IPlugin i = (IPlugin) oh.Unwrap();
i.DoStuff();
```
Here are my questions:
1. Activator.CreateInstance(Type t) returns an object, but I couldn't cast the object to an interface that the object implemented. Why?
2. Should I have been using a different overload of CreateInstance()?
3. What are the reflection related tips and tricks?
4. Is there some crucial part of reflection that I'm just not getting?
|
I'm just guessing here because, from your code, it's not obvious where your IPlugin interface is defined. But if you can't cast in your host application, then you probably have the IPlugin interface defined in your host assembly and, at the same time, defined again in your plugin assembly. Two identically named interfaces in different assemblies are distinct types to the CLR, so the cast fails. This won't work.
The easiest way to make this work is to have the IPlugin interface marked as public in your host assembly and then have your plugin assembly **reference the host application assembly**, so both assemblies have access to **the very same interface**.
|
Hmmm... if you are using Assembly.LoadFrom to load your assembly, try changing it to Assembly.LoadFile instead.
Worked for me
From here: <http://www.eggheadcafe.com/community/aspnet/2/10036776/solution-found.aspx>
|
How to properly cast objects created through reflection
|
[
"",
"c#",
".net",
"reflection",
""
] |
I'm trying to create a bookmarklet for posting del.icio.us bookmarks to a separate account.
I tested it from the command line like:
```
wget -O - --no-check-certificate \
"https://seconduser:thepassword@api.del.icio.us/v1/posts/add?url=http://seet.dk&description=test"
```
This works great.
I then wanted to create a bookmarklet in my firefox. I googled and found bits and pieces and ended up with:
```
javascript:void(
open('https://seconduser:password@api.del.icio.us/v1/posts/add?url='
+encodeURIComponent(location.href)
+'&description='+encodeURIComponent(document.title),
'delicious','toolbar=no,width=500,height=250'
)
);
```
But all that happens is that I get this from del.icio.us:
```
<?xml version="1.0" standalone="yes"?>
<result code="access denied" />
<!-- fe04.api.del.ac4.yahoo.net uncompressed/chunked Thu Aug 7 02:02:54 PDT 2008 -->
```
If I then go to the address bar and press enter, it changes to:
```
<?xml version='1.0' standalone='yes'?>
<result code="done" />
<!-- fe02.api.del.ac4.yahoo.net uncompressed/chunked Thu Aug 7 02:07:45 PDT 2008 -->
```
Any ideas how to get it to work directly from the bookmarks?
|
Can you sniff the traffic to find what's actually being sent? Is it sending any auth data at all and it's incorrect or being presented in a form the server doesn't like, or is it never being sent by firefox at all?
|
@travis Looks very nice! I will surely take a look at it; I can think of several places I could use that.
I never got around to sniffing the traffic, but I found that a PHP page on my own server with HTTP auth worked fine, so I figured it was something with del.icio.us. I then created a PHP page that does a wget of the del.icio.us API, and everything works fine :)
|
Http Auth in a Firefox 3 bookmarklet
|
[
"",
"javascript",
"firefox",
"delicious-api",
""
] |
I am starting a new web application in PHP and this time around I want to create something that people can extend by using a plugin interface.
How does one go about writing 'hooks' into their code so that plugins can attach to specific events?
|
You could use an Observer pattern. A simple functional way to accomplish this:
```
<?php
/** Plugin system **/
$listeners = array();
/* Create an entry point for plugins */
function hook() {
global $listeners;
$num_args = func_num_args();
$args = func_get_args();
if($num_args < 2)
trigger_error("Insufficient arguments", E_USER_ERROR);
// Hook name should always be first argument
$hook_name = array_shift($args);
if(!isset($listeners[$hook_name]))
return; // No plugins have registered this hook
foreach($listeners[$hook_name] as $func) {
$args = $func($args);
}
return $args;
}
/* Attach a function to a hook */
function add_listener($hook, $function_name) {
global $listeners;
$listeners[$hook][] = $function_name;
}
/////////////////////////
/** Sample Plugin **/
add_listener('a_b', 'my_plugin_func1');
add_listener('str', 'my_plugin_func2');
function my_plugin_func1($args) {
return array(4, 5);
}
function my_plugin_func2($args) {
return str_replace('sample', 'CRAZY', $args[0]);
}
/////////////////////////
/** Sample Application **/
$a = 1;
$b = 2;
list($a, $b) = hook('a_b', $a, $b);
$str = "This is my sample application\n";
$str .= "$a + $b = ".($a+$b)."\n";
$str .= "$a * $b = ".($a*$b)."\n";
$str = hook('str', $str);
echo $str;
?>
```
**Output:**
```
This is my CRAZY application
4 + 5 = 9
4 * 5 = 20
```
**Notes:**
For this example source code, you must declare all your plugins before the actual source code that you want to be extendable. I've included an example of how to handle single or multiple values being passed to the plugin. The hardest part of this is writing the actual documentation which lists what arguments get passed to each hook.
This is just one method of accomplishing a plugin system in PHP. There are better alternatives, I suggest you check out the WordPress Documentation for more information.
|
So let's say you don't want the Observer pattern because it requires that you change your class methods to handle the task of listening, and want something generic. And let's say you don't want to use `extends` inheritance because you may already be inheriting in your class from some other class. Wouldn't it be great to have a generic way to make *any class pluggable without much effort*? Here's how:
```
<?php
////////////////////
// PART 1
////////////////////
class Plugin {
private $_RefObject;
private $_Class = '';
public function __construct(&$RefObject) {
$this->_Class = get_class($RefObject);
$this->_RefObject = $RefObject;
}
public function __set($sProperty,$mixed) {
$sPlugin = $this->_Class . '_' . $sProperty . '_setEvent';
if (is_callable($sPlugin)) {
$mixed = call_user_func_array($sPlugin, $mixed);
}
$this->_RefObject->$sProperty = $mixed;
}
public function __get($sProperty) {
$asItems = (array) $this->_RefObject;
$mixed = $asItems[$sProperty];
$sPlugin = $this->_Class . '_' . $sProperty . '_getEvent';
if (is_callable($sPlugin)) {
$mixed = call_user_func_array($sPlugin, $mixed);
}
return $mixed;
}
public function __call($sMethod,$mixed) {
$sPlugin = $this->_Class . '_' . $sMethod . '_beforeEvent';
if (is_callable($sPlugin)) {
$mixed = call_user_func_array($sPlugin, $mixed);
}
if ($mixed != 'BLOCK_EVENT') {
call_user_func_array(array(&$this->_RefObject, $sMethod), $mixed);
$sPlugin = $this->_Class . '_' . $sMethod . '_afterEvent';
if (is_callable($sPlugin)) {
call_user_func_array($sPlugin, $mixed);
}
}
}
} //end class Plugin
class Pluggable extends Plugin {
} //end class Pluggable
////////////////////
// PART 2
////////////////////
class Dog {
public $Name = '';
public function bark($sHow) {
echo "$sHow<br />\n";
}
public function sayName() {
echo "<br />\nMy Name is: " . $this->Name . "<br />\n";
}
} //end class Dog
$Dog = new Dog();
////////////////////
// PART 3
////////////////////
$PDog = new Pluggable($Dog);
function Dog_bark_beforeEvent($mixed) {
$mixed = 'Woof'; // Override saying 'meow' with 'Woof'
//$mixed = 'BLOCK_EVENT'; // if you want to block the event
return $mixed;
}
function Dog_bark_afterEvent($mixed) {
echo $mixed; // show the override
}
function Dog_Name_setEvent($mixed) {
$mixed = 'Coco'; // override 'Fido' with 'Coco'
return $mixed;
}
function Dog_Name_getEvent($mixed) {
$mixed = 'Different'; // override 'Coco' with 'Different'
return $mixed;
}
////////////////////
// PART 4
////////////////////
$PDog->Name = 'Fido';
$PDog->Bark('meow');
$PDog->SayName();
echo 'My New Name is: ' . $PDog->Name;
```
In Part 1, that's what you might include with a `require_once()` call at the top of your PHP script. It loads the classes to make something pluggable.
In Part 2, that's where we load a class. Note I didn't have to do anything special to the class, which is significantly different from the Observer pattern.
In Part 3, that's where we switch our class around into being "pluggable" (that is, supports plugins that let us override class methods and properties). So, for instance, if you have a web app, you might have a plugin registry, and you could activate plugins here. Notice also the `Dog_bark_beforeEvent()` function. If I set `$mixed = 'BLOCK_EVENT'` before the return statement, it will block the dog from barking and would also block the Dog\_bark\_afterEvent because there wouldn't be any event.
In Part 4, that's the normal operation code, but notice that what you might think would run does not run like that at all. For instance, the dog does not announce its name as 'Fido', but 'Coco'. The dog does not say 'meow', but 'Woof'. And when you want to look at the dog's name afterwards, you find it is 'Different' instead of 'Coco'. All those overrides were provided in Part 3.
So how does this work? Well, let's rule out `eval()` (which everyone says is "evil"), and note that this isn't the Observer pattern either. The trick is the sneaky empty class called Pluggable, which does not contain the methods and properties of the Dog class. Because those members are missing, the magic methods engage for us. That's why in parts 3 and 4 we mess with the object derived from the Pluggable class, not the Dog class itself. Instead, we let the Plugin class do the "touching" on the Dog object for us. (If that's some kind of design pattern I don't know about -- please let me know.)
|
Best way to allow plugins for a PHP application
|
[
"",
"php",
"plugins",
"architecture",
"hook",
""
] |
I'm writing an app that will need to make use of `Timer`s, but potentially very many of them. How scalable is the `System.Threading.Timer` class? The documentation merely say it's "lightweight", but doesn't explain further. Do these timers get sucked into a single thread (or very small threadpool) that processes all the callbacks on behalf of a `Timer`, or does each `Timer` have its own thread?
I guess another way to rephrase the question is: How is `System.Threading.Timer` implemented?
|
I say this in response to a lot of questions: Don't forget that the (managed) source code to the framework is available. You can use this tool to get it all: <http://www.codeplex.com/NetMassDownloader>
Unfortunately, in this specific case, a lot of the implementation is in native code, so you don't get to look at it...
They definitely use pool threads rather than a thread-per-timer, though.
The standard way to implement a big collection of timers (which is how the kernel does it internally, and I would suspect is indirectly how your big collection of Timers ends up) is to maintain the list sorted by time-until-expiry - so the system only ever has to worry about checking the next timer which is going to expire, not the whole list.
Roughly, this gives O(log n) for starting a timer and O(1) for processing running timers.
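That bookkeeping can be sketched in a few lines. The following is purely illustrative (it is not the framework's actual implementation): pending timers sit in a set sorted by due time, so only the earliest entry ever needs checking.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of a timer queue sorted by due time -- NOT the
// actual System.Threading.Timer implementation. Scheduling is O(log n);
// firing only ever inspects the earliest (Min) entry.
class TimerQueue
{
    // (dueTime, id) pairs; the id breaks ties so duplicate due times are allowed.
    private readonly SortedSet<(DateTime Due, int Id)> _pending =
        new SortedSet<(DateTime Due, int Id)>();
    private readonly Dictionary<int, Action> _callbacks = new Dictionary<int, Action>();
    private int _nextId;

    public int Schedule(DateTime due, Action callback)
    {
        int id = _nextId++;
        _pending.Add((due, id));
        _callbacks[id] = callback;
        return id;
    }

    // Fire every timer whose due time has passed; returns how many fired.
    public int FireDue(DateTime now)
    {
        int fired = 0;
        while (_pending.Count > 0 && _pending.Min.Due <= now)
        {
            var next = _pending.Min;
            _pending.Remove(next);
            var callback = _callbacks[next.Id];
            _callbacks.Remove(next.Id);
            callback(); // a real implementation would queue this to the thread pool
            fired++;
        }
        return fired;
    }
}
```

In the real framework the firing thread would hand each callback to the thread pool rather than invoking it inline, which is what keeps the timer thread responsive.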
Edit: Just been looking in Jeff Richter's book. He says (of Threading.Timer) that it uses a single thread for all Timer objects; this thread knows when the next timer (i.e. as above) is due and calls ThreadPool.QueueUserWorkItem for the callbacks as appropriate. This has the effect that if you don't finish servicing one callback on a timer before the next is due, your callback will re-enter on another pool thread. So in summary I doubt you'll see a big problem with having lots of timers, but you might suffer thread pool exhaustion if large numbers of them are firing at the same time and/or their callbacks are slow-running.
|
I think you might want to rethink your design (that is, if you have control over the design yourself). If you're using so many timers that this is actually a concern for you, there's clearly some potential for consolidation there.
Here's a good article from MSDN Magazine from a few years ago that compares the three available timer classes, and gives some insight into their implementations:
<http://msdn.microsoft.com/en-us/magazine/cc164015.aspx>
|
How scalable is System.Threading.Timer?
|
[
"",
"c#",
".net",
"multithreading",
"timer",
""
] |
For classes that have a long list of setters that are used frequently, I found this way very useful (although I have recently read about the [Builder pattern](http://en.wikipedia.org/wiki/Builder_pattern) in *Effective Java*, which is somewhat similar).
Basically, all setter methods return the object itself so then you can use code like this:
```
myClass
.setInt(1)
.setString("test")
.setBoolean(true);
```
Setters simply return `this` at the end:
```
public MyClass setInt(int anInt) {
// [snip]
return this;
}
```
What is your opinion? What are the pros and cons? Does this have any impact on performance?
Also referred to as the [named parameter idiom](http://www.cs.technion.ac.il/users/yechiel/c++-faq/named-parameter-idiom.html) in C++.
|
@pek
Chained invocation is one of proposals for Java 7. It says that if a method return type is void, it should implicitly return **this**. If you're interested in this topic, there is a bunch of links and a simple example on [Alex Miller's Java 7 page](http://tech.puredanger.com/java7#chained).
|
This is called a [Fluent Interface](http://en.wikipedia.org/wiki/Fluent_interface), for reference.
Personally, I think it's a pretty neat idea, but a matter of taste really. I think [jQuery](http://en.wikipedia.org/wiki/Jquery) works this way.
|
Design: Java and returning self-reference in setter methods
|
[
"",
"java",
""
] |
I am writing a few extensions to mimic the map and reduce functions in Lisp.
```
public delegate R ReduceFunction<T,R>(T t, R previous);
public delegate void TransformFunction<T>(T t, params object[] args);
public static R Reduce<T,R>(this List<T> list, ReduceFunction<T,R> r, R initial)
{
var aggregate = initial;
foreach(var t in list)
aggregate = r(t,aggregate);
return aggregate;
}
public static void Transform<T>(this List<T> list, TransformFunction<T> f, params object [] args)
{
foreach(var t in list)
f(t,args);
}
```
The transform function will cut down on cruft like:
```
foreach(var t in list)
if(conditions && moreconditions)
//do work etc
```
Does this make sense? Could it be better?
|
These look very similar to extensions in Linq already:
```
//takes a function that matches the Func<T,R> delegate
listInstance.Aggregate(
startingValue,
(x, y) => /* aggregate two subsequent values */ );
//takes a function that matches the Action<T> delegate
listInstance.ForEach(
x => /* do something with x */);
```
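To make the `Aggregate` equivalence concrete, here is a small self-contained sketch (not code from either snippet above):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class AggregateDemo
{
    // Equivalent to the hand-rolled Reduce(list, (t, prev) => prev + t, 0).
    public static int Sum(List<int> list)
    {
        return list.Aggregate(0, (acc, x) => acc + x);
    }

    // The accumulator may be a different type than the elements.
    public static string Join(List<int> list)
    {
        return list.Aggregate("", (acc, x) => acc.Length == 0 ? x.ToString() : acc + "," + x);
    }
}
```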
Why is the 2nd example called Transform? Do you intend to change the values in the list somehow? If that's the case you may be better off using `ConvertAll<T>` or `Select<T>`.
|
According to this link [Functional Programming in C# 3.0: How Map/Reduce/Filter can Rock your World](http://www.25hoursaday.com/weblog/2008/06/16/FunctionalProgrammingInC30HowMapReduceFilterCanRockYourWorld.aspx) the following are the equivalent in C# under the System.Linq namespace:
* map --> [Enumerable.Select](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.select.aspx)
* reduce --> [Enumerable.Aggregate](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.aggregate.aspx)
* filter --> [Enumerable.Where](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.where.aspx)
|
Generic Map/Reduce List Extensions in C#
|
[
"",
"c#",
"functional-programming",
"extension-methods",
""
] |
What would be the easiest way to separate the directory name from the file name when dealing with `SaveFileDialog.FileName` in C#?
|
Use:
```
System.IO.Path.GetDirectoryName(saveDialog.FileName)
```
(and the corresponding `System.IO.Path.GetFileName`). The Path class is really rather useful.
|
You could construct a FileInfo object. It has a Name, FullName, and DirectoryName property.
```
var file = new FileInfo(saveFileDialog.FileName);
Console.WriteLine("File is: " + file.Name);
Console.WriteLine("Directory is: " + file.DirectoryName);
```
|
How to get only directory name from SaveFileDialog.FileName
|
[
"",
"c#",
"string",
"parsing",
""
] |
I've been using PHP & MySQL for ages and am about to start using PostgreSQL instead.
What's the preferred method?
Is it via the PDO objects or is there something better?
|
PDO objects are the new hotness. I'd recommend that as long as you can ensure that your target platform will always be running PHP 5.2+.
There are many other database abstraction layers that support PostgreSQL that are compatible with older versions of PHP; I'd recommend [ADODB](http://adodb.sourceforge.net/).
You should really be using PDO or a different abstraction layer even for your MySQL work; that way you won't have this problem again!
|
Using Zend Db:
```
require_once 'Zend/Db.php';
$DB_ADAPTER = 'Pdo_Pgsql';
$DB_CONFIG = array(
'username' => 'app_db_user',
'password' => 'xxxxxxxxx',
'host' => 'localhost',
'port' => 5432,
'dbname' => 'mydb'
);
$db = Zend_Db::factory($DB_ADAPTER, $DB_CONFIG);
```
|
What's the preferred way to connect to a postgresql database from PHP?
|
[
"",
"php",
"postgresql",
""
] |
I have a regex that is going to end up being a bit long and it'd make it much easier to read to have it across multiple lines.
I tried this but it just barfs.
```
preg_match(
'^J[0-9]{7}:\s+
(.*?) #Extract the Transaction Start Date msg
\s+J[0-9]{7}:\s+Project\sname:\s+
(.*?) #Extract the Project Name
\s+J[0-9]{7}:\s+Job\sname:\s+
(.*?) #Extract the Job Name
\s+J[0-9]{7}:\s+',
$this->getResultVar('FullMessage'),
$atmp
);
```
Is there are way to pass a regex in the above form to preg\_match?
|
You can use the extended syntax:
```
preg_match("/
test
/x", $foo, $bar);
```
|
Yes, you can add the `/x` [Pattern Modifier](http://us.php.net/manual/en/reference.pcre.pattern.modifiers.php).
> This modifier turns on additional
> functionality of PCRE that is
> incompatible with Perl. Any backslash
> in a pattern that is followed by a
> letter that has no special meaning
> causes an error, thus reserving these
> combinations for future expansion. By
> default, as in Perl, a backslash
> followed by a letter with no special
> meaning is treated as a literal. There
> are at present no other features
> controlled by this modifier.
For your example try this:
```
preg_match('/
^J[0-9]{7}:\s+
(.*?) #Extract the Transaction Start Date msg
\s+J[0-9]{7}:\s+Project\sname:\s+
(.*?) #Extract the Project Name
\s+J[0-9]{7}:\s+Job\sname:\s+
(.*?) #Extract the Job Name
\s+J[0-9]{7}:\s+
/x', $this->getResultVar('FullMessage'), $atmp);
```
|
Passing a commented, multi-line (freespace) regex to preg_match
|
[
"",
"php",
"regex",
""
] |
Here is the issue I am having: I have a large query that needs to compare datetimes in the where clause to see if two dates are on the same day. My current solution, which sucks, is to send the datetimes into a UDF to convert them to midnight of the same day, and then check those dates for equality. When it comes to the query plan, this is a disaster, as are almost all UDFs in joins or where clauses. This is one of the only places in my application that I haven't been able to root out the functions and give the query optimizer something it can actually use to locate the best index.
In this case, merging the function code back into the query seems impractical.
I think I am missing something simple here.
Here's the function for reference.
```
if not exists (select * from dbo.sysobjects
where id = object_id(N'dbo.f_MakeDate') and
type in (N'FN', N'IF', N'TF', N'FS', N'FT'))
exec('create function dbo.f_MakeDate() returns int as
begin declare @retval int return @retval end')
go
alter function dbo.f_MakeDate
(
@Day datetime,
@Hour int,
@Minute int
)
returns datetime
as
/*
Creates a datetime using the year-month-day portion of @Day, and the
@Hour and @Minute provided
*/
begin
declare @retval datetime
set @retval = cast(
cast(datepart(m, @Day) as varchar(2)) +
'/' +
cast(datepart(d, @Day) as varchar(2)) +
'/' +
cast(datepart(yyyy, @Day) as varchar(4)) +
' ' +
cast(@Hour as varchar(2)) +
':' +
cast(@Minute as varchar(2)) as datetime)
return @retval
end
go
```
To complicate matters, I am joining on time zone tables to check the date against the local time, which could be different for every row:
```
where
dbo.f_MakeDate(dateadd(hh, tz.Offset +
case when ds.LocalTimeZone is not null
then 1 else 0 end, t.TheDateINeedToCheck), 0, 0) = @activityDateMidnight
```
[Edit]
I'm incorporating @Todd's suggestion:
```
where datediff(day, dateadd(hh, tz.Offset +
case when ds.LocalTimeZone is not null
then 1 else 0 end, t.TheDateINeedToCheck), @ActivityDate) = 0
```
My misconception about how datediff works (the same day of year in consecutive years yields 366, not 0 as I expected) caused me to waste a lot of effort.
But the query plan didn't change. I think I need to go back to the drawing board with the whole thing.
|
This is much more concise:
```
where
datediff(day, date1, date2) = 0
```
|
You pretty much have to keep the left side of your where clause clean. So, normally, you'd do something like:
```
WHERE MyDateTime >= @activityDateMidnight
AND MyDateTime < (@activityDateMidnight + 1)
```
(Some folks prefer DATEADD(d, 1, @activityDateMidnight) instead - but it's the same thing).
The TimeZone table complicates matters a bit though. It's a little unclear from your snippet, but it looks like t.TheDateINeedToCheck is in GMT with a Time Zone identifier, and that you're then adding the offset to compare against @activityDateMidnight - which is in local time. I'm not sure what ds.LocalTimeZone is, though.
If that's the case, then you need to get @activityDateMidnight into GMT instead.
|
What's a good way to check if two datetimes are on the same calendar day in TSQL?
|
[
"",
"sql",
"sql-server",
"t-sql",
"datetime",
"user-defined-functions",
""
] |
I have a `DataTable` with a `Name` column. I want to generate a collection of the unique names ordered alphabetically. The following query ignores the *order by* clause.
```
var names =
(from DataRow dr in dataTable.Rows
orderby (string)dr["Name"]
select (string)dr["Name"]).Distinct();
```
Why does the `orderby` not get enforced?
|
To make it more readable and maintainable, you can also split it up into multiple LINQ statements.
1. First, select your data into a new list, let's call it `x1`, do a projection if desired
2. Next, create a distinct list, from `x1` into `x2`, using whatever distinction you require
3. Finally, create an ordered list, from `x2` into `x3`, sorting by whatever you desire
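In code, those three steps might look like this (using a plain string collection instead of the question's `DataTable`, to keep the sketch self-contained):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class DistinctOrderedNames
{
    public static List<string> Get(IEnumerable<string> rows)
    {
        var x1 = rows.Select(name => name);  // 1. projection (trivial here)
        var x2 = x1.Distinct();              // 2. remove duplicates
        var x3 = x2.OrderBy(name => name);   // 3. sort alphabetically
        return x3.ToList();
    }
}
```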
|
**The problem is that the Distinct operator does not guarantee that it will maintain the original order of values.**
So your query will need to work like this
```
var names = (from DataRow dr in dataTable.Rows
select (string)dr["Name"]).Distinct().OrderBy( name => name );
```
|
How do I get a distinct, ordered list of names from a DataTable using LINQ?
|
[
"",
"c#",
"linq",
".net-3.5",
""
] |
What's the cleanest, most effective way to validate decimal numbers in JavaScript?
Bonus points for:
1. Clarity. Solution should be clean and simple.
2. Cross-platform.
Test cases:
```
01. IsNumeric('-1') => true
02. IsNumeric('-1.5') => true
03. IsNumeric('0') => true
04. IsNumeric('0.42') => true
05. IsNumeric('.42') => true
06. IsNumeric('99,999') => false
07. IsNumeric('0x89f') => false
08. IsNumeric('#abcdef') => false
09. IsNumeric('1.2.3') => false
10. IsNumeric('') => false
11. IsNumeric('blah') => false
```
|
[@Joel's answer](https://stackoverflow.com/questions/18082/validate-numbers-in-javascript-isnumeric/174921#174921) is pretty close, but it will fail in the following cases:
```
// Whitespace strings:
IsNumeric(' ') == true;
IsNumeric('\t\t') == true;
IsNumeric('\n\r') == true;
// Number literals:
IsNumeric(-1) == false;
IsNumeric(0) == false;
IsNumeric(1.1) == false;
IsNumeric(8e5) == false;
```
Some time ago I had to implement an `IsNumeric` function, to find out if a variable contained a numeric value, **regardless of its type**, it could be a `String` containing a numeric value (I had to consider also exponential notation, etc.), a `Number` object, virtually anything could be passed to that function, I couldn't make any type assumptions, taking care of type coercion (eg. `+true == 1;` but `true` shouldn't be considered as `"numeric"`).
I think it's worth sharing this set of [**+30 unit tests**](http://run.plnkr.co/plunks/93FPpacuIcXqqKMecLdk/) run against numerous function implementations, and also share the one that passes all my tests:
```
function isNumeric(n) {
return !isNaN(parseFloat(n)) && isFinite(n);
}
```
**P.S.** [isNaN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/isNaN) & [isFinite](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/isFinite) have a confusing behavior due to forced conversion to number. In ES6, [Number.isNaN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/isNaN) & [Number.isFinite](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/isFinite) would fix these issues. Keep that in mind when using them.
---
**Update** :
[Here's how jQuery does it now (2.2-stable)](https://github.com/jquery/jquery/blob/2.2-stable/src/core.js#L215):
```
isNumeric: function(obj) {
var realStringObj = obj && obj.toString();
return !jQuery.isArray(obj) && (realStringObj - parseFloat(realStringObj) + 1) >= 0;
}
```
**Update** :
[Angular 4.3](https://github.com/angular/angular/blob/4.3.x/packages/common/src/pipes/number_pipe.ts#L172):
```
export function isNumeric(value: any): boolean {
return !isNaN(value - parseFloat(value));
}
```
|
Arrrgh! Don't listen to the regular expression answers. RegEx is icky for this, and I'm not talking just performance. It's so easy to make subtle, impossible to spot mistakes with your regular expression.
If you can't use `isNaN()` — and remember: I said, "IF" — this should work much better:
```
function IsNumeric(input)
{
return (input - 0) == input && (''+input).trim().length > 0;
}
```
Here's how it works:
The `(input - 0)` expression forces JavaScript to do type coercion on your input value; it must first be interpreted as a number for the subtraction operation. If that conversion to a number fails, the expression will result in `NaN` (Not a Number). This *numeric* result is then compared to the original value you passed in. Since the left hand side is now numeric, type coercion is again used. Now that the input from both sides was coerced to the same type from the same original value, you would think they should always be the same (always true). However, there's a special rule that says `NaN` is never equal to `NaN`, and so a value that can't be converted to a number (and only values that cannot be converted to numbers) will result in false.
The check on the length is for a special case involving empty strings. Also note that it falls down on your 0x89f test, but that's because for javascript *that's a valid way to define a number literal.* If you want to catch that specific scenario you could add an additional check. Even better, if that's your reason for not using `isNaN()` then just wrap your own function around `isNaN()` that can also do the additional check.
In summary, ***if you want to know if a value can be converted to a number, actually try to convert it to a number.***
---
I went back and did some research for *why* a whitespace string did not have the expected output, and I think I get it now: an empty string is coerced to `0` rather than `NaN`. Simply trimming the string before the length check will handle this case.
Running the unit tests against the new code, it only fails on the infinity and boolean literals, and the only time that should be a problem is if you're generating code (really, who would type in a literal and check if it's numeric? You should *know*), and that would be some strange code to generate.
But, again, **the only reason ever to use this is if for some reason you have to avoid isNaN().**
|
Validate decimal numbers in JavaScript - IsNumeric()
|
[
"",
"javascript",
"validation",
"numbers",
""
] |
In C++, there isn't a de-facto standard logging tool. In my experience, shops roll their own. This creates a bit of a problem, however, when trying to create reusable software components. If everything in your system depends on the logging component, this makes the software less reusable, basically forcing any downstream projects to take your logging framework along with the components they really want.
IOC (dependency injection) doesn't really help with the problem since your components would need to depend on a logging abstraction. Logging components themselves can add dependencies on file I/O, triggering mechanisms, and other possibly unwanted dependencies.
Does adding a dependency to your proprietary logging framework sacrifice the reusability of the component?
|
Yes. But dependency injection will help in this case.
You can create an abstract logging base class and create implementations for the logging frameworks you want to use. Your components depend only on the abstract base class, and you inject the implementations along with all their dependencies as needed.
|
Yes, Mendelt is right. We do exactly this in our products. Everything depends on the ILogger abstract interface, but it does not depend on anything else. Typically an executable or a high-level DLL will be the one to construct an actual implemented Logger interface and inject it.
|
Do C++ logging frameworks sacrifice reusability?
|
[
"",
"c++",
"logging",
"code-reuse",
""
] |
I have to do some JavaScript in the future, so it is time to update my toolbox. Right now I use Firefox with some addons:
* JavaScript Shell from <https://www.squarefree.com/bookmarklets/webdevel.html>
* Firefox Dom Inspector
* Firebug
* Greasemonkey
* Stylish
I plan to use [Venkman Javascript debugger](http://www.hacksrus.com/~ginda/venkman/) as well as [jsunit](https://github.com/pivotal/jsunit) and [js-lint](http://jslint.com/).
For programming I stick with vim.
So what other tools do you use when developing JavaScript?
|
I use both Firefox and IE for Web Development and a few add-ons in each:
**Firefox:**
* [Firebug](https://addons.mozilla.org/en-US/firefox/addon/1843)
* [Web Developer Toolbar](https://addons.mozilla.org/en-US/firefox/addon/60)
**Internet Explorer:**
* [IE Developer Toolbar](http://www.microsoft.com/en-us/download/details.aspx?id=18359)
* [Fiddler](http://www.fiddlertool.com/)
* Visual Studio for JS Debugging
|
I sometimes use Emacs with Steve Yegge's [js2-mode](http://code.google.com/p/js2-mode/), evaluating code with [Rhino](http://www.mozilla.org/rhino/) & John Resig's [env.js](http://ejohn.org/blog/bringing-the-browser-to-the-server/) to load jQuery or Prototype in my standalone scripts.
This allows me to explore javascript, jQuery, and Prototype outside of a browser.
Example:
```
var window;
load("Library/env.js");
window.location = 'index.html'; // Load the page 'index.html'
print($('aForm').id); // Play with the Dom in a standalone script!
```
|
What is in your JavaScript development toolbox?
|
[
"",
"javascript",
"debugging",
""
] |
I have a small JS function that does Ajax for me and another like it that adds in POST data to the request. With Ajax being such a big topic with so many libraries about it, what am I missing from my function? Is it insecure or something else worrying?
```
function loadPage(pagePath, displayElement)
{
var xmlHttp;
try
{
// Firefox, Opera 8.0+, Safari
xmlHttp=new XMLHttpRequest();
}
catch (e)
{
// Internet Explorer
try
{
xmlHttp=new ActiveXObject("Msxml2.XMLHTTP");
}
catch (e)
{
try
{
xmlHttp=new ActiveXObject("Microsoft.XMLHTTP");
}
catch (e)
{
alert("Your browser does not support AJAX!");
return false;
}
}
}
xmlHttp.onreadystatechange=function()
{
if(xmlHttp.readyState==4)
{
document.getElementById(displayElement).innerHTML = xmlHttp.responseText;
}
}
xmlHttp.open("GET", pagePath, true);
xmlHttp.send(null);
}
```
|
I strongly recommend you not roll your own Ajax code. Instead, use a framework such as Prototype, Dojo, or any of the others. They've taken care of handling all the readyStates you're not handling (2 means it's been sent, 3 means it's in process, etc.), and they should escape the response you're getting so you don't insert potentially insecure JavaScript or something into your page.
Another thing a more robust framework will give you is the ability to do more than just use innerHTML to replace items in the DOM. Your function here can only be used to replace one element with the response from the ajax call. There's a lot more you can do with Ajax.
|
I would remove this line.
```
alert("Your browser does not support AJAX!")
```
Shouting at the user in a language he probably doesn't understand is worse than failure. :-)
|
What more is needed for Ajax than this function
|
[
"",
"javascript",
"ajax",
""
] |
I have a large tree of Java Objects in my Desktop Application and am trying to decide on the best way of persisting them as a file to the file system.
Some thoughts I've had were:
* **Roll my own serializer using DataOutputStream**: This would give me the greatest control of what was in the file, but at the cost of micromanaging it.
* **Straight old Serialization using ObjectOutputStream and its various related classes**: I'm not sold on it though since I find the data brittle. Changing any object's structure breaks the serialized instances of it. So I'm locked in to what seems to be a horrible versioning nightmare.
* **XML Serialization**: It's not as brittle, but it's significantly slower than straight-out serialization. It can be transformed outside of my program.
* **JavaDB**: I'd considered this since I'm comfortable writing JDBC applications. The difference here is that the database instance would only persist while the file was being opened or saved. It's not pretty but... it does lend itself to migrating to a central server architecture if the need arises later and it introduces the possibility of querying the datamodel in a simpler way.
I'm curious to see what other people think. And I'm hoping that I've missed some obvious, and simpler approach than the ones above.
---
Here are some more options culled from the answers below:
* **An Object Database** - Has significantly less infrastructure than ORM approaches and performs faster than an XML approach. Thanks, aku.
|
[db4objects](http://www.db4o.com) might be the best choice
|
I would go for the your final option JavaDB (Sun's distribution of [Derby](http://db.apache.org/derby)) and use an object relational layer like [Hibernate](http://hibernate.org) or [iBatis](http://ibatis.apache.org). Using the first three aproaches means you are going to spend more time building a database engine than developing application features.
|
What's the best way to persist data in a Java Desktop Application?
|
[
"",
"java",
"persistence",
"desktop",
"oop",
""
] |
In one of my applications, I am querying Active Directory to get a list of all users below a given user (using the "Direct Reports" thing). So basically, given the name of the person, it is looked up in AD, then the Direct Reports are read. But then for every direct report, the tool needs to check the direct reports of the direct reports. Or, more abstractly: the tool will use a person as the root of the tree and then walk down the complete tree to get the names of all the leaves (there can be several hundred).
Now, my concern is obviously performance, as this needs to be done quite a few times. My idea is to manually cache that (essentially just put all the names in a long string and store that somewhere and update it once a day).
But I just wonder if there is a more elegant way to first get the information and then cache it, possibly using something in the System.DirectoryServices Namespace?
|
In order to take control over the properties that you want to be cached you can call 'RefreshCache()' passing the properties that you want to hang around:
```
System.DirectoryServices.DirectoryEntry entry = new System.DirectoryServices.DirectoryEntry();
// Push the property values from AD back to cache.
entry.RefreshCache(new string[] {"cn", "www" });
```
|
Depends on how up to date you want the information to be. If you **must have** the very latest data in your report then querying directly from AD is reasonable. And I agree that AD is quite robust, a typical dedicated AD server is actually very lightly utilised in normal day to day operations *but best to check with your IT department / support person.*
An alternative is to have a daily script to dump the AD data into a CSV file and/or import it into a SQL database. (Oracle has a SELECT CONNECT BY feature that can automatically create multi-level hierarchies within a result set. MSSQL can do a similar thing with a bit of recursion IIRC).
|
Caching Active Directory Data
|
[
"",
"c#",
"asp.net",
"active-directory",
""
] |
I have a complete XML document in a string and would like a `Document` object. Google turns up all sorts of garbage. What is the simplest solution? (In Java 1.5)
**Solution** Thanks to [Matt McMinn](https://stackoverflow.com/users/1322/matt-mcminn), I have settled on this implementation. It has the right level of input flexibility and exception granularity for me. (It's good to know if the error came from malformed XML - `SAXException` - or just bad IO - `IOException`.)
```
public static org.w3c.dom.Document loadXMLFrom(String xml)
throws org.xml.sax.SAXException, java.io.IOException {
return loadXMLFrom(new java.io.ByteArrayInputStream(xml.getBytes()));
}
public static org.w3c.dom.Document loadXMLFrom(java.io.InputStream is)
throws org.xml.sax.SAXException, java.io.IOException {
javax.xml.parsers.DocumentBuilderFactory factory =
javax.xml.parsers.DocumentBuilderFactory.newInstance();
factory.setNamespaceAware(true);
javax.xml.parsers.DocumentBuilder builder = null;
try {
builder = factory.newDocumentBuilder();
}
catch (javax.xml.parsers.ParserConfigurationException ex) {
// Can't happen with a default-configured factory; fail loudly rather than silently.
throw new IllegalStateException(ex);
}
org.w3c.dom.Document doc = builder.parse(is);
is.close();
return doc;
}
```
|
This works for me in Java 1.5 - I stripped out specific exceptions for readability.
```
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.DocumentBuilder;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
public Document loadXMLFromString(String xml) throws Exception
{
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
factory.setNamespaceAware(true);
DocumentBuilder builder = factory.newDocumentBuilder();
return builder.parse(new ByteArrayInputStream(xml.getBytes()));
}
```
|
Whoa there!
There's a potentially serious problem with this code, because it ignores the character encoding declared in the XML. When you call `String.getBytes()` the platform default encoding is used to encode Unicode characters to bytes. So, the parser may think it's getting UTF-8 data when in fact it's getting EBCDIC or something… not pretty!
Instead, use the parse method that takes an InputSource, which can be constructed with a Reader, like this:
```
import java.io.StringReader;
import org.xml.sax.InputSource;
…
return builder.parse(new InputSource(new StringReader(xml)));
```
It may not seem like a big deal, but ignorance of character encoding issues leads to insidious code rot akin to y2k.
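Putting the pieces together, a complete helper with the `InputSource` fix applied might look like this (a sketch — the class name is mine, not from the thread):

```java
import java.io.StringReader;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class XmlFromString {
    public static Document loadXMLFromString(String xml) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        DocumentBuilder builder = factory.newDocumentBuilder();
        // A Reader hands the parser characters directly, so the String's
        // contents are never re-encoded with the platform default charset.
        return builder.parse(new InputSource(new StringReader(xml)));
    }
}
```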
|
How do I load an org.w3c.dom.Document from XML in a string?
|
[
"",
"java",
"xml",
"document",
"w3c",
""
] |
There are two weird operators in C#:
* the [true operator](http://msdn.microsoft.com/en-us/library/6x6y6z4d.aspx)
* the [false operator](http://msdn.microsoft.com/en-us/library/6292hy1k.aspx)
If I understand this right, these operators can be used in types which I want to use instead of a boolean expression and where I don't want to provide an implicit conversion to bool.
Let's say I have a following class:
```
public class MyType
{
    public readonly int Value;

    public MyType(int value)
    {
        Value = value;
    }

    public static bool operator true (MyType mt)
    {
        return mt.Value > 0;
    }

    public static bool operator false (MyType mt)
    {
        return mt.Value < 0;
    }
}
```
So I can write the following code:
```
MyType mTrue = new MyType(100);
MyType mFalse = new MyType(-100);
MyType mDontKnow = new MyType(0);

if (mTrue)
{
    // Do something.
}

while (mFalse)
{
    // Do something else.
}

do
{
    // Another code comes here.
} while (mDontKnow);
```
However, for all the examples above, only the true operator is executed. So what's the false operator in C# good for?
*Note: More examples can be found [here](http://www.java2s.com/Tutorial/CSharp/0160__Operator-Overload/truefalseoperatorforComplex.htm), [here](http://www.java2s.com/Tutorial/CSharp/0160__Operator-Overload/OverloadtrueandfalseforTwoDimension.htm) and [here](http://www.java2s.com/Tutorial/CSharp/0160__Operator-Overload/trueandfalseoperator.htm).*
|
You can use them to overload the `&&` and `||` operators.
The `&&` and `||` operators can't be overloaded directly, but if you overload `|`, `&`, `true` and `false` in exactly the right way the compiler will call `|` and `&` when you write `||` and `&&`.
For example, look at this code (from <http://ayende.com/blog/1574/nhibernate-criteria-api-operator-overloading> - where I found out about this trick; [archived version](http://web.archive.org/web/20080613013350/http://www.ayende.com/Blog/archive/2006/08/04/7381.aspx) by @BiggsTRC):
```
public static AbstractCriterion operator &(AbstractCriterion lhs, AbstractCriterion rhs)
{
    return new AndExpression(lhs, rhs);
}

public static AbstractCriterion operator |(AbstractCriterion lhs, AbstractCriterion rhs)
{
    return new OrExpression(lhs, rhs);
}

public static bool operator false(AbstractCriterion criteria)
{
    return false;
}

public static bool operator true(AbstractCriterion criteria)
{
    return false;
}
This is obviously a side effect and not the way it's intended to be used, but it is useful.
|
Shog9 and Nir:
thanks for your answers. Those answers pointed me to [Steve Eichert article](http://steve.emxsoftware.com/NET/Overloading+the++and++operators) and it pointed me to [msdn](http://msdn.microsoft.com/en-us/library/aa691312.aspx):
> The operation x && y is evaluated as T.false(x) ? x : T.&(x, y), where T.false(x) is an invocation of the operator false declared in T, and T.&(x, y) is an invocation of the selected operator &. In other words, x is first evaluated and operator false is invoked on the result to determine if x is definitely false. Then, if x is definitely false, the result of the operation is the value previously computed for x. Otherwise, y is evaluated, and the selected operator & is invoked on the value previously computed for x and the value computed for y to produce the result of the operation.
|
What's the false operator in C# good for?
|
[
"",
"c#",
".net",
"syntax",
""
] |
Is it true that the only way to handle default function arguments is through function overloading?
For example, in PHP I can do this:
```
function foo($x, $y=0)
{
}
```
Would the best way to handle it in C# be this?
```
void foo(int x)
{
    foo(x, 0);
}

void foo(int x, int y)
{
}
```
***[Example lifted from here](http://bytes.com/forum/thread224970.html)***
Edit
***Made the C# example into actual C# (Thanks Blair Conrad)***
|
Yes, that'd be best~~, except you'd omit the `$`s on the parameter names~~, as others have pointed out. For those interested in the rationale behind the lack of default parameter values, see @Giovanni Galbo's explanation.
|
Just to satisfy some curiosity:
From [Why doesn't C# support default parameters?](http://blogs.msdn.com/csharpfaq/archive/2004/03/07/85556.aspx):
> In languages such as C++, a default value can be included as part of the method declaration:
>
> void Process(Employee employee, bool bonus = false)
>
> This method can be called either with:
>
> a.Process(employee, true);
>
> or
>
> a.Process(employee);
>
> in the second case, the parameter bonus is set to false.
>
> C# doesn't have this feature.
>
> One reason we don't have this feature is related to a specific implementation of the feature. In the C++ world, when the user writes:
>
> a.Process(employee);
>
> the compiler generates
>
> a.process(employee, false);
>
> In other words, the compiler takes the default value that is specified in the method prototype and puts it into the method call - it's just as if the user wrote 'false' as the second parameter. There's no way to change that default value without forcing the user of the class to recompile, which is unfortunate.
>
> The overloading model works better in this respect. The framework author just defines two separate methods, and the single-parameter one calls the two-parameter method. This keeps the default value in the framework, where it can be modified if necessary.
>
> It would be possible for a compiler to take something like the C++ definition and produce the overloads, but there are a few issues with that approach.
>
> The first one is that the correlation between the code that the user writes and the code the compiler generates is less obvious. We generally try to limit magic when possible, as it makes it harder for programmers. The second issue has to do with things like XML doc comments and intellisense. The compiler would have to have special rules for how it generates doc comments for the overloaded methods, and intellisense would need to have smarts to collapse the overloaded methods into a single method.
>
> Writing overloads yourself is a bit less convenient, but we think it's an acceptable solution.
|
Is overloading the only way to have default function arguments in C#?
|
[
"",
"c#",
"overloading",
""
] |
I have a custom validation function in JavaScript in a user control on a .Net 2.0 web site which checks to see that the fee paid is not in excess of the fee amount due.
I've placed the validator code in the `ascx` file, and I have also tried using `Page.ClientScript.RegisterClientScriptBlock()` and in both cases the validation fires, but cannot find the JavaScript function.
The output in Firefox's error console is `"feeAmountCheck is not defined"`. Here is the function (this was taken directly from firefox->view source)
```
<script type="text/javascript">
function feeAmountCheck(source, arguments)
{
    var amountDue  = document.getElementById('ctl00_footerContentHolder_Fees1_FeeDue');
    var amountPaid = document.getElementById('ctl00_footerContentHolder_Fees1_FeePaid');

    if (amountDue.value > 0 && amountDue >= amountPaid)
    {
        arguments.IsValid = true;
    }
    else
    {
        arguments.IsValid = false;
    }

    return arguments;
}
</script>
```
Any ideas as to why the function isn't being found? How can I remedy this without having to add the function to my master page or consuming page?
|
Try changing the argument names to `sender` and `args`. And, after you have it working, switch the call over to `ScriptManager.RegisterClientScriptBlock`, regardless of AJAX use.
|
When you're using .NET 2.0 and Ajax, you should use:
```
ScriptManager.RegisterClientScriptBlock
```
It will work better in Ajax environments than the old Page.ClientScript version.
|
ASP.Net Custom Client-Side Validation
|
[
"",
"asp.net",
"javascript",
"validation",
""
] |
Is it possible to configure [xampp](http://www.apachefriends.org/en/xampp.html) to serve up a file outside of the `htdocs` directory?
For instance, say I have a file located as follows:
`C:\projects\transitCalculator\trunk\TransitCalculator.php`
and my [xampp](http://www.apachefriends.org/en/xampp.html) files are normally served out from:
`C:\xampp\htdocs\`
(because that's the default configuration) Is there some way to make Apache recognize and serve up my `TransitCalculator.php` file without moving it under `htdocs`? Preferably I'd like Apache to serve up/have access to the entire contents of the projects directory, and I don't want to move the projects directory under `htdocs`.
edit: edited to add Apache to the question title to make Q/A more "searchable"
|
Ok, per [pix0r](https://stackoverflow.com/questions/1408/#2471)'s, [Sparks](https://stackoverflow.com/questions/1408/#1413)' and [Dave](https://stackoverflow.com/questions/1408/#1414)'s answers it looks like there are three ways to do this:
---
## [Virtual Hosts](https://stackoverflow.com/questions/1408/#2471)
1. Open C:\xampp\apache\conf\extra\httpd-vhosts.conf.
2. Un-comment ~line 19 (`NameVirtualHost *:80`).
3. Add your virtual host (~line 36):
```
<VirtualHost *:80>
    DocumentRoot C:\Projects\transitCalculator\trunk
    ServerName transitcalculator.localhost
    <Directory C:\Projects\transitCalculator\trunk>
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
```
4. Open your hosts file (C:\Windows\System32\drivers\etc\hosts).
5. Add
```
127.0.0.1 transitcalculator.localhost #transitCalculator
```
to the end of the file (before the Spybot - Search & Destroy stuff if you have that installed).
6. Save (You might have to save it to the desktop, change the permissions on the old hosts file (right click > properties), and copy the new one into the directory over the old one (or rename the old one) if you are using Vista and have trouble).
7. Restart Apache.
Now you can access that directory by browsing to <http://transitcalculator.localhost/>.
---
## [Make an Alias](https://stackoverflow.com/questions/1408/#1413)
1. Starting ~line 200 of your `httpd.conf` file, copy everything between `<Directory "C:/xampp/htdocs">` and `</Directory>` (~line 232) and paste it immediately below with `C:/xampp/htdocs` replaced with your desired directory (in this case `C:/Projects`) to give your server the correct permissions for the new directory.
2. Find the `<IfModule alias_module></IfModule>` section (~line 300) and add
```
Alias /transitCalculator "C:/Projects/transitCalculator/trunk"
```
(or whatever is relevant to your desires) below the `Alias` comment block, inside the module tags.
---
## [Change your document root](https://stackoverflow.com/questions/1408/#1414)
1. Edit ~line 176 in C:\xampp\apache\conf\httpd.conf; change `DocumentRoot "C:/xampp/htdocs"` to `DocumentRoot "C:/Projects"` (or whatever you want).
2. Edit ~line 203 to match your new location (in this case `C:/Projects`).
---
**Notes:**
* You have to use forward slashes "/" instead of back slashes "\".
* Don't include the trailing "/" at the end.
* *restart your server*.
|
You can relocate it by editing the **DocumentRoot** setting in XAMPP\apache\conf\httpd.conf.
It should currently be:
> C:/xampp/htdocs
Change it to:
> C:/projects/transitCalculator/trunk
|
Make XAMPP / Apache serve file outside of htdocs folder
|
[
"",
"php",
"apache",
"xampp",
"server-configuration",
""
] |
What techniques can be applied effectively to improve the performance of SQL queries? Are there any general rules that apply?
|
* Use primary keys
* Avoid select \*
* Be as specific as you can when building your conditional statements
* De-normalisation can often be more efficient
* Table variables and temporary tables (where available) will often be better than using a large source table
* Partitioned views
* Employ indices and constraints
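To make the indexing bullet concrete, here is a small sketch of my own (using SQLite purely for illustration — the thread is about SQL generally): `EXPLAIN QUERY PLAN` shows the same query going from a full table scan to an index search once a suitable index exists. The schema is hypothetical.

```shell
# Hypothetical demo schema; SQLite stands in for a real server here.
sqlite3 :memory: "
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer_id = 42; -- plan: full table scan
CREATE INDEX idx_orders_customer ON orders(customer_id);
EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer_id = 42; -- plan: index search
"
```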
|
Learn what's really going on under the hood - you should be able to understand the following concepts in detail:
* Indexes (not just what they are but actually how they work).
* Clustered indexes vs heap allocated tables.
* Text and binary lookups and when they can be in-lined.
* [Fill factor](https://stackoverflow.com/questions/10919/how-do-you-measure-sql-fill-factor-value).
* How records are ghosted for update/delete.
* When page splits happen and why.
* Statistics, and how they affect various query speeds.
* The query planner, and how it works for your specific database (for instance on some systems "select \*" is slow, on modern MS-Sql DBs the planner can handle it).
|
What generic techniques can be applied to optimize SQL queries?
|
[
"",
"sql",
"performance",
""
] |
What is the SQL query to select all of the MSSQL Server's logins?
Thank you. More than one of you had the answer I was looking for:
```
SELECT * FROM syslogins
```
|
Is this what you're after?
```
select * from master.syslogins
```
|
On SQL Azure as of 2012:
logins:
```
--connect to master
--logins
SELECT * from sys.sql_logins
--users
SELECT * from sys.sysusers
```
and users on a specific database:
```
--connect to database
SELECT * from sys.sysusers
```
Also note that 'users' on Azure SQL now (2022-11-17) have more 'login' type properties and creating a user on a Azure SQL database with a password is now possible, so it is less likely to require creating logins in 'master'.
|
SQL Query for Logins
|
[
"",
"sql",
"sql-server",
""
] |
I have a JavaScript method that I need to run on one of my pages, in particular, the `onresize` event.
However, I don't see how I can set that event from my content page. I wish I could just put it on my master page, but I don't have the need for the method to be called on all pages that use that master page.
Any help would be appreciated.
|
Place the following in your content page:
```
<script type="text/javascript">
// here is a cross-browser compatible way of connecting
// handlers to events, in case you don't have one
function attachEventHandler(element, eventToHandle, eventHandler) {
    if (element.attachEvent) {
        element.attachEvent(eventToHandle, eventHandler);
    } else if (element.addEventListener) {
        element.addEventListener(eventToHandle.replace("on", ""), eventHandler, false);
    } else {
        element[eventToHandle] = eventHandler;
    }
}

attachEventHandler(window, "onresize", function() {
    // the code you want to run when the browser is resized
});
</script>
```
---
That code should give you the basic idea of what you need to do. Hopefully you are using a library that already has code to help you write up event handlers and such.
|
How about use code like the following in your Content Page (C#)?
```
Page.ClientScript.RegisterStartupScript(this.GetType(), "resizeMyPage", "window.onresize=function(){ resizeMyPage();}", true);
```
Thus, you could have a `resizeMyPage` function defined somewhere in the Javascript and it would be run whenever the browser is resized!
|
Call onresize from ASP.NET content page
|
[
"",
"asp.net",
"javascript",
"master-pages",
"onresize",
""
] |
I'm currently trying out db4o (the Java version) and I pretty much like what I see. But I can't help wondering how it performs in a real live (web) environment. Does anyone have any experiences (good or bad) to share about running db4o?
|
We run the DB4O .NET version in a large client/server project.
Our experiences is that you can potentially get much better performance than typical relational databases.
However, you really have to tweak your objects to get this kind of performance. For example, if you've got a list containing a lot of objects, DB4O activation of these lists is slow. There are a number of ways to get around this problem, for example, by inverting the relationship.
Another pain is activation. When you retrieve or delete an object from DB4O, by default it will activate the whole object tree. For example, loading a Foo will load Foo.Bar.Baz.Bat, etc until there's nothing left to load. While this is nice from a programming standpoint, performance will slow down the more nesting in your objects. To improve performance, you can tell DB4O how many levels deep to activate. This is time-consuming to do if you've got a lot of objects.
Another area of pain was text searching. DB4O's text searching is far, far slower than SQL full text indexing. (They'll tell you this outright on their site.) The good news is, it's easy to setup a text searching engine on top of DB4O. On our project, we've hooked up Lucene.NET to index the text fields we want.
Some APIs don't seem to work, such as the GetField APIs, which are useful in applying database upgrades. (For example, if you've renamed a property and you want to upgrade your existing objects in the database, you need to use these "reflection" APIs to find objects in the database.) Other APIs, such as the [Index] attribute, don't work in the stable 6.4 version, and you must instead specify indexes using `Configure().Index("someField")`, which is not strongly typed.
We've witnessed performance degrade the larger your database. We have a 1GB database right now and things are still fast, but not nearly as fast as when we started with a tiny database.
We've found another issue where Db4O.GetByID will close the database if the ID doesn't exist anymore in the database.
We've found the Native Query syntax (the most natural, language-integrated syntax for queries) is far, far slower than the less-friendly SODA queries. So instead of typing:
```
// C# syntax for "Find all MyFoos with Bar == 23".
// (Note the Java syntax is more verbose using the Predicate class.)
IList<MyFoo> results = db4o.Query<MyFoo>(input => input.Bar == 23);
```
Instead of that nice query code, you have to write an ugly SODA query, which is string-based and not strongly typed.
For .NET folks, they've recently introduced a LINQ-to-DB4O provider, which provides for the best syntax yet. However, it's yet to be seen whether performance will be up-to-par with the ugly SODA queries.
DB4O support has been decent: we've talked to them on the phone a number of times and have received helpful info. Their user forums are next to worthless, however; almost all questions go unanswered. Their JIRA bug tracker receives a lot of attention, so if you've got a nagging bug, file it on JIRA and it often will get fixed. (We've had 2 bugs that have been fixed, and another one that got patched in a half-assed way.)
If all this hasn't scared you off, let me say that we're very happy with DB4O, despite the problems we've encountered. The performance we've got has blown away some O/RM frameworks we tried. I recommend it.
**update July 2015** Keep in mind, this answer was written back in 2008. While I appreciate the upvotes, the world has changed since then, and this information may not be as reliable as it was when it was written.
|
Most native queries can and are efficiently converted into SODA queries behind the scenes so that should not make a difference. Using NQ is of course preferred as you remain in the realms of strong typed language. If you have problems getting NQ to use indexes please feel free to post your problem to the [db4o forums](http://developer.db4o.com/forums/) and we'll try to help you out.
Goran
|
db4o experiences?
|
[
"",
"java",
"db4o",
""
] |
I have a (potentially dumb) question about the C++ STL. When I make a container (vector, set, map, etc.), is it allocated on the stack or on the heap? If I make a set and put 5 million strings in it, will I have to worry about a stack overflow?
|
STL classes by default allocate their internal buffers from the heap, although these classes also allow custom allocators that allow a user to specify an alternate location to allocate from - e.g. a shared memory pool.
|
The default allocator for STL containers uses operator new and delete, so it's whatever those route to for the type being contained. (In general, it comes from the heap unless you do something to override that.)
You will not get a stack overflow from allocating 5 million strings. Even if you made a stack based allocator, it would probably overflow before you even inserted one string.
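To make the distinction concrete, here is a small sketch of my own (not from the answers): the `std::vector` object itself is a tiny stack local, while every element it holds lives in heap storage obtained from the default allocator.

```cpp
#include <vector>
#include <string>
#include <cstddef>

// The vector object is a small stack local (roughly three pointers);
// push_back allocates the element storage from the heap via the default
// allocator, so a large element count doesn't grow the stack.
std::size_t fill_and_count(std::size_t n) {
    std::vector<std::string> v;
    v.reserve(n);                       // one heap allocation up front
    for (std::size_t i = 0; i < n; ++i)
        v.push_back("example string");
    return v.size();
}
```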
|
C++ STL question: allocators
|
[
"",
"c++",
"stl",
""
] |