What are the ways that you use to model and retrieve hierarchical info in a database?
The definitive pieces on this subject have been written by Joe Celko, and he has worked a number of them into a book called *Joe Celko's Trees and Hierarchies in SQL for Smarties*. He favours a technique called nested sets. An introduction to his work on this subject can be found [here](http://web.archive.org/web/20081102031702/http://www.intelligententerprise.com/001020/celko.jhtml).
I like the Modified Preorder Tree Traversal algorithm. This technique makes it very easy to query the tree.

Here is a list of links about the topic which I copied from the Zend Framework (PHP) contributors webpage (posted there by Laurent Melmoux on Jun 05, 2007 15:52). Many of the links are language-agnostic.

There are two main representations and algorithms to represent hierarchical structures with databases:

* nested set, also known as the modified preorder tree traversal algorithm
* adjacency list model

It's well explained here:

* <http://www.sitepoint.com/article/hierarchical-data-database>
* [Managing Hierarchical Data in MySQL](http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/)
* [http://www.evolt.org/article/Four\_ways\_to\_work\_with\_hierarchical\_data/17/4047/index.html](http://evolt.org/node/4047/)

Here are some more links that I've collected:

* <http://en.wikipedia.org/wiki/Tree_%28data_structure%29>
* <http://en.wikipedia.org/wiki/Category:Trees_%28structure%29>

Adjacency list model:

* <http://www.sqlteam.com/item.asp?ItemID=8866>

Nested set:

* <http://www.sqlsummit.com/AdjacencyList.htm>
* <http://www.edutech.ch/contribution/nstrees/index.php>
* <http://www.phpriot.com/d/articles/php/application-design/nested-trees-1/>
* <http://www.dbmsmag.com/9604d06.html>
* <http://en.wikipedia.org/wiki/Tree_traversal>
* <http://www.cosc.canterbury.ac.nz/mukundan/dsal/BTree.html> (Java applet demonstrating how it works)

Graphs:

* <http://www.artfulsoftware.com/mysqlbook/sampler/mysqled1ch20.html>

Classes:

Nested Sets DB Tree ADOdb:

* <http://www.phpclasses.org/browse/package/2547.html>

Visitation Model ADOdb:

* <http://www.phpclasses.org/browse/package/2919.html>

PEAR::DB\_NestedSet:

* <http://pear.php.net/package/DB_NestedSet>
* usage: <https://www.entwickler.com/itr/kolumnen/psecom,id,26,nodeid,207.html>

PEAR::Tree:

* <http://pear.php.net/package/Tree/download/0.3.0/>
* <http://www.phpkitchen.com/index.php?/archives/337-PEARTree-Tutorial.html>

nstrees:

* <http://www.edutech.ch/contribution/nstrees/index.php>
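As a quick, hedged illustration of the adjacency list model mentioned above (the table name, column names and sample rows are invented for the example), here is how a database that supports recursive CTEs can walk such a tree, shown with Python's built-in `sqlite3`:

```python
import sqlite3

# Illustrative sketch: an adjacency list stores each node's parent id,
# and a recursive CTE walks the subtree. Schema and data are made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);
    INSERT INTO node VALUES
        (1, NULL, 'root'),
        (2, 1, 'child-a'),
        (3, 1, 'child-b'),
        (4, 2, 'grandchild');
""")

# Walk the subtree rooted at node 1, tracking depth as we descend.
rows = conn.execute("""
    WITH RECURSIVE subtree(id, name, depth) AS (
        SELECT id, name, 0 FROM node WHERE id = 1
        UNION ALL
        SELECT n.id, n.name, s.depth + 1
        FROM node n JOIN subtree s ON n.parent_id = s.id
    )
    SELECT name, depth FROM subtree ORDER BY depth, name
""").fetchall()

for name, depth in rows:
    print("  " * depth + name)
```

This trades the nested set model's fast subtree reads for much simpler inserts and moves; note that recursive CTEs need a reasonably modern database (SQLite 3.8.3+, MySQL 8+, or any recent PostgreSQL / SQL Server / Oracle).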
SQL - How to store and navigate hierarchies?
[ "", "sql", "sql-server", "oracle", "database-design", "hierarchy", "" ]
What is the best full-text search alternative to Microsoft SQL Server's built-in full-text search (one which works with MS SQL)? I'm looking for something similar to [Lucene](http://lucene.apache.org/java/docs/index.html) and [Lucene.NET](http://incubator.apache.org/lucene.net/) but without the .NET and Java requirements. I would also like to find a solution that is usable in commercial applications.
Take a look at [CLucene](http://clucene.wiki.sourceforge.net/) - it's a well-maintained C++ port of Java Lucene. It's currently licensed under the LGPL and we use it in our commercial application. Performance is incredible, though you do have to get your head around some of the strange API conventions.
[Sphinx](http://www.sphinxsearch.com) is one of the best solutions. It's written in C++ and has amazing performance.
Best full text search alternative to MS SQL, C++ solution
[ "", "c++", "sql-server", "full-text-search", "lucene", "lucene.net", "" ]
My website was recently attacked by what seemed to me to be an innocent piece of code:

```
<?php
if (isset($_GET['page'])) {
    include($_GET['page'] . ".php");
} else {
    include("home.php");
}
?>
```

There were no SQL calls, so I wasn't afraid of SQL injection. But, apparently, SQL isn't the only kind of injection. This website has an explanation and a few examples of avoiding code injection: <http://www.theserverpages.com/articles/webmasters/php/security/Code_Injection_Vulnerabilities_Explained.html>

How would you protect this code from code injection?
Use a whitelist and make sure the page is in the whitelist:

```
$whitelist = array('home', 'page');

if (in_array($_GET['page'], $whitelist)) {
    include($_GET['page'] . '.php');
} else {
    include('home.php');
}
```
Another way to sanitize the input is to make sure that only allowed characters (no "/", ".", ":", ...) are in it. However, don't use a blacklist for *bad* characters, but a whitelist for allowed characters:

```
$page = preg_replace('/[^a-zA-Z0-9]/', '', $page);
```

... followed by a `file_exists` check. That way you can make sure that only scripts you want to be executed are executed (for example, this would rule out a "blabla.inc.php", because "." is not allowed).

Note: This is kind of a "hack", because then the user could execute "h.o.m.e" and it would give the "home" page, because all it does is remove all prohibited characters. It's not intended to stop "smartasses" who want to do cute stuff with your page, but it will stop people doing *really bad* things.

BTW: Another thing you could do in your **.htaccess** file is to prevent obvious attack attempts:

```
RewriteEngine on
RewriteCond %{QUERY_STRING} http[:%] [NC]
RewriteRule .* /–http– [F,NC]
RewriteRule http: /–http– [F,NC]
```

That way all page accesses with an "http:" URL (and query string) result in a "Forbidden" error message, without even reaching the PHP script. That results in less server load. However, keep in mind that no "http" is then allowed in the query string. Your website MIGHT require it in some cases (maybe when filling out a form).

BTW: If you can read German: I also have a [blog post](http://blogs.interdose.com/dominik/2008/03/20/sicherer-php-code-php-code-injection-verhindern/) on that topic.
Best way to avoid code injection in PHP
[ "", "php", "security", "code-injection", "" ]
I'm getting this problem:

```
PHP Warning: mail() [function.mail]: SMTP server response: 550 5.7.1
Unable to relay for chris.mahan@gmail.com in c:\inetpub\wwwroot\mailtest.php on line 12
```

from this script:

```
<?php
$to = "chris.mahan@gmail.com";
$subject = "test";
$body = "this is a test";
if (mail($to, $subject, $body)) {
    echo "mail sent";
} else {
    echo "problem";
}
?>
```

section from php.ini on the server:

```
[mail function]
; For Win32 only.
SMTP = server.domain.com; for Win32 only
smtp_port = 25
; For Win32 only.
sendmail_from = support@domain.com
; For Unix only. You may supply arguments as well (default: "sendmail -t -i").
;sendmail_path =
```

(note that "server" and "domain" refer accurately to the actual server and domain name)

In IIS, SMTP is running. Under the "Access" tab, "Relay" button, "Select which computers may relay through this virtual server" is set to the checkbox "only the list below", and on the list is "127.0.0.1(xxx.xxx.xxx.xxx)" (x's representing the actual server IP address).

The server is running Windows Server 2003 Service Pack 2, fully patched as of 5 PM Sept 1st 2008. I assume it is running IIS 6, the version that ships with Windows Server 2003 (how do I check?).

Any ideas?

In response to [Espo](https://stackoverflow.com/users/2257/espo): This machine is hosted at a datacenter. We do not want to use a Gmail account (we're doing that now and want to move away from it). Windows Server 2003 comes with its own SMTP server.

Update: Per Yaakov Ellis' advice, I dropped all relay restrictions and added the server IP to the allowed list (using the reverse DNS button provided) and the thing started working. Thanks to both Espo and Yaakov for helping me out.
Try removing the IP restrictions for Relaying in the SMTP server, and opening it up to all relays. If it works when this is set, then you know that the problem has to do with the original restrictions. In this case, it may be a DNS issue, or perhaps you had the wrong IP address listed.
You are using the wrong SMTP server. If you are only going to send emails to your Gmail account, have a look at my answer [here](https://stackoverflow.com/questions/29988/how-to-send-email-from-a-program-without-using-a-preexisting-account#30001). If you also need to send email to other accounts, ask your ISP for your SMTP details.

EDIT: I think it is always better to use the ISP's SMTP server, as they (should) have people monitoring the mail queues, checking for exploits and updating the mail software. If your business is developing web applications, it is almost always best to stick with what you do, and let other people do their stuff (e.g. running mail servers).

If you still for some reason want to use your local SMTP server, the first thing would be to rule out the PHP part. Try following [KB153119](http://support.microsoft.com/kb/153119) and then check your SMTP server's IIS log for errors.

EDIT2: That KB article says it is for Exchange, but the same commands are used for other SMTP servers (including IIS) as well, so please try and see if you can send mails using the examples from the article.
php mail() not working windows 2003, IIS SMTP
[ "", "php", "iis", "smtp", "" ]
I find that getting Unicode support right in my cross-platform apps is a real pain in the butt. I need strings that can go from C code, to a database, to a Java application and into a Perl module. Each of these uses a different Unicode encoding (UTF-8, UTF-16) or some other code page. The biggest thing that I need is a cross-platform way of doing conversions. What kind of tools, libraries or techniques do people use to make handling these things easier?
Have a look at this: [http://www.icu-project.org/](http://www.icu-project.org/ "International Components for Unicode")
Perl has [Encode](http://search.cpan.org/~dankogai/Encode/Encode.pm) as a standard library. It can be used to read/write any encoding you want, so that's not going to be a problem.
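Whatever library does the conversion (ICU, Perl's Encode, or a language's built-ins), the usual cross-platform strategy is the same: decode bytes into one internal Unicode representation at every boundary, and encode back to whatever the other side expects on the way out. A hedged sketch using Python's built-in codecs (the sample text is invented):

```python
# Decode/encode at the boundaries; keep one internal Unicode string inside.
text = "naïve café 日本語"

utf8_bytes = text.encode("utf-8")       # e.g. for a C library or a database
utf16_bytes = text.encode("utf-16-le")  # e.g. for a Windows/Java-style API

# Round-trip both encodings back to the same internal string.
assert utf8_bytes.decode("utf-8") == text
assert utf16_bytes.decode("utf-16-le") == text

# Conversion to a legacy code page is lossy and must be made explicit.
latin1 = text.encode("latin-1", errors="replace")
print(latin1.decode("latin-1"))  # prints "naïve café ???"
```

The key design point is that "UTF-8 vs UTF-16" stops mattering inside your own code; it only matters at the edges where bytes cross into C, Java, Perl or the database.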
cross platform unicode support
[ "", "java", "c", "perl", "unicode", "cross-platform", "" ]
I would like to test a string containing a path to a file for existence of that file (something like the `-e` test in Perl or the `os.path.exists()` in Python) in C#.
Use: ``` File.Exists(path) ``` MSDN: <http://msdn.microsoft.com/en-us/library/system.io.file.exists.aspx> Edit: In System.IO
[System.IO.File](http://msdn.microsoft.com/en-us/library/system.io.file.exists.aspx): ``` using System.IO; if (File.Exists(path)) { Console.WriteLine("file exists"); } ```
How to find out if a file exists in C# / .NET?
[ "", "c#", ".net", "io", "" ]
I want to merge two dictionaries into a new dictionary. ``` x = {'a': 1, 'b': 2} y = {'b': 3, 'c': 4} z = merge(x, y) >>> z {'a': 1, 'b': 3, 'c': 4} ``` Whenever a key `k` is present in both dictionaries, only the value `y[k]` should be kept.
## How can I merge two Python dictionaries in a single expression?

For dictionaries `x` and `y`, their shallowly-merged dictionary `z` takes values from `y`, replacing those from `x`.

* In Python 3.9.0 or greater (released 17 October 2020, [`PEP-584`](https://www.python.org/dev/peps/pep-0584/), [discussed here](https://bugs.python.org/issue36144)):

```
z = x | y
```

* In Python 3.5 or greater:

```
z = {**x, **y}
```

* In Python 2 (or 3.4 or lower) write a function:

```
def merge_two_dicts(x, y):
    z = x.copy()   # start with keys and values of x
    z.update(y)    # modifies z with keys and values of y
    return z
```

and now:

```
z = merge_two_dicts(x, y)
```

### Explanation

Say you have two dictionaries and you want to merge them into a new dictionary without altering the original dictionaries:

```
x = {'a': 1, 'b': 2}
y = {'b': 3, 'c': 4}
```

The desired result is to get a new dictionary (`z`) with the values merged, and the second dictionary's values overwriting those from the first.

```
>>> z
{'a': 1, 'b': 3, 'c': 4}
```

A new syntax for this, proposed in [PEP 448](https://www.python.org/dev/peps/pep-0448) and [available as of Python 3.5](https://mail.python.org/pipermail/python-dev/2015-February/138564.html), is

```
z = {**x, **y}
```

And it is indeed a single expression. Note that we can merge in with literal notation as well:

```
z = {**x, 'foo': 1, 'bar': 2, **y}
```

and now:

```
>>> z
{'a': 1, 'b': 3, 'foo': 1, 'bar': 2, 'c': 4}
```

It is now showing as implemented in the [release schedule for 3.5, PEP 478](https://www.python.org/dev/peps/pep-0478/#features-for-3-5), and it has now made its way into the [What's New in Python 3.5](https://docs.python.org/dev/whatsnew/3.5.html#pep-448-additional-unpacking-generalizations) document.

However, since many organizations are still on Python 2, you may wish to do this in a backward-compatible way.
The classically Pythonic way, available in Python 2 and Python 3.0-3.4, is to do this as a two-step process:

```
z = x.copy()
z.update(y)  # which returns None since it mutates z
```

In both approaches, `y` will come second and its values will replace `x`'s values, thus `b` will point to `3` in our final result.

## Not yet on Python 3.5, but want a *single expression*

If you are not yet on Python 3.5 or need to write backward-compatible code, and you want this in a *single expression*, the most performant correct approach is to put it in a function:

```
def merge_two_dicts(x, y):
    """Given two dictionaries, merge them into a new dict as a shallow copy."""
    z = x.copy()
    z.update(y)
    return z
```

and then you have a single expression:

```
z = merge_two_dicts(x, y)
```

You can also make a function to merge an arbitrary number of dictionaries, from zero to a very large number:

```
def merge_dicts(*dict_args):
    """
    Given any number of dictionaries, shallow copy and merge into a new dict,
    precedence goes to key-value pairs in latter dictionaries.
    """
    result = {}
    for dictionary in dict_args:
        result.update(dictionary)
    return result
```

This function will work in Python 2 and 3 for all dictionaries. e.g. given dictionaries `a` to `g`:

```
z = merge_dicts(a, b, c, d, e, f, g)
```

and key-value pairs in `g` will take precedence over dictionaries `a` to `f`, and so on.

## Critiques of Other Answers

Don't use what you see in the formerly accepted answer:

```
z = dict(x.items() + y.items())
```

In Python 2, you create two lists in memory for each dict, create a third list in memory with length equal to the length of the first two put together, and then discard all three lists to create the dict.
**In Python 3, this will fail** because you're adding two `dict_items` objects together, not two lists:

```
>>> c = dict(a.items() + b.items())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'dict_items' and 'dict_items'
```

and you would have to explicitly create them as lists, e.g. `z = dict(list(x.items()) + list(y.items()))`. This is a waste of resources and computation power.

Similarly, taking the union of `items()` in Python 3 (`viewitems()` in Python 2.7) will also fail when values are unhashable objects (like lists, for example). Even if your values are hashable, **since sets are semantically unordered, the behavior is undefined in regards to precedence. So don't do this:**

```
>>> c = dict(a.items() | b.items())
```

This example demonstrates what happens when values are unhashable:

```
>>> x = {'a': []}
>>> y = {'b': []}
>>> dict(x.items() | y.items())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'
```

Here's an example where `y` should have precedence, but instead the value from `x` is retained due to the arbitrary order of sets:

```
>>> x = {'a': 2}
>>> y = {'a': 1}
>>> dict(x.items() | y.items())
{'a': 2}
```

Another hack you should not use:

```
z = dict(x, **y)
```

This uses the `dict` constructor and is very fast and memory-efficient (even slightly more so than our two-step process) but unless you know precisely what is happening here (that is, the second dict is being passed as keyword arguments to the dict constructor), it's difficult to read, it's not the intended usage, and so it is not Pythonic. Here's an example of the usage being [remediated in django](https://code.djangoproject.com/attachment/ticket/13357/django-pypy.2.diff).

Dictionaries are intended to take hashable keys (e.g. `frozenset`s or tuples), but **this method fails in Python 3 when keys are not strings.**

```
>>> c = dict(a, **b)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: keyword arguments must be strings
```

From the [mailing list](https://mail.python.org/pipermail/python-dev/2010-April/099459.html), Guido van Rossum, the creator of the language, wrote:

> I am fine with declaring dict({}, \*\*{1:3}) illegal, since after all it is abuse of the \*\* mechanism.

and

> Apparently dict(x, \*\*y) is going around as "cool hack" for "call x.update(y) and return x". Personally, I find it more despicable than cool.

It is my understanding (as well as the understanding of the [creator of the language](https://mail.python.org/pipermail/python-dev/2010-April/099485.html)) that the intended usage for `dict(**y)` is for creating dictionaries for readability purposes, e.g.:

```
dict(a=1, b=10, c=11)
```

instead of

```
{'a': 1, 'b': 10, 'c': 11}
```

## Response to comments

> Despite what Guido says, `dict(x, **y)` is in line with the dict specification, which btw. works for both Python 2 and 3. The fact that this only works for string keys is a direct consequence of how keyword parameters work and not a short-coming of dict. Nor is using the \*\* operator in this place an abuse of the mechanism, in fact, \*\* was designed precisely to pass dictionaries as keywords.

Again, it doesn't work for 3 when keys are not strings. The implicit calling contract is that namespaces take ordinary dictionaries, while users must only pass keyword arguments that are strings. All other callables enforced it. `dict` broke this consistency in Python 2:

```
>>> foo(**{('a', 'b'): None})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: foo() keywords must be strings
>>> dict(**{('a', 'b'): None})
{('a', 'b'): None}
```

This inconsistency was bad given other implementations of Python (PyPy, Jython, IronPython).
Thus it was fixed in Python 3, as this usage could be a breaking change.

I submit to you that it is malicious incompetence to intentionally write code that only works in one version of a language or that only works given certain arbitrary constraints.

More comments:

> `dict(x.items() + y.items())` is still the most readable solution for Python 2. Readability counts.

My response: `merge_two_dicts(x, y)` actually seems much clearer to me, if we're actually concerned about readability. And it is not forward compatible, as Python 2 is increasingly deprecated.

> `{**x, **y}` does not seem to handle nested dictionaries. the contents of nested keys are simply overwritten, not merged [...] I ended up being burnt by these answers that do not merge recursively and I was surprised no one mentioned it. In my interpretation of the word "merging" these answers describe "updating one dict with another", and not merging.

Yes. I must refer you back to the question, which is asking for a *shallow* merge of ***two*** dictionaries, with the first's values being overwritten by the second's, in a single expression.

Assuming two dictionaries of dictionaries, one might recursively merge them in a single function, but you should be careful not to modify the dictionaries from either source, and the surest way to avoid that is to make a copy when assigning values.
As keys must be hashable and are usually therefore immutable, it is pointless to copy them:

```
from copy import deepcopy

def dict_of_dicts_merge(x, y):
    z = {}
    overlapping_keys = x.keys() & y.keys()
    for key in overlapping_keys:
        z[key] = dict_of_dicts_merge(x[key], y[key])
    for key in x.keys() - overlapping_keys:
        z[key] = deepcopy(x[key])
    for key in y.keys() - overlapping_keys:
        z[key] = deepcopy(y[key])
    return z
```

Usage:

```
>>> x = {'a':{1:{}}, 'b': {2:{}}}
>>> y = {'b':{10:{}}, 'c': {11:{}}}
>>> dict_of_dicts_merge(x, y)
{'b': {2: {}, 10: {}}, 'a': {1: {}}, 'c': {11: {}}}
```

Coming up with contingencies for other value types is far beyond the scope of this question, so I will point you at [my answer to the canonical question on a "Dictionaries of dictionaries merge"](https://stackoverflow.com/a/24088493/541136).

## Less Performant But Correct Ad-hocs

These approaches are less performant, but they will provide correct behavior. They will be *much less* performant than `copy` and `update` or the new unpacking because they iterate through each key-value pair at a higher level of abstraction, but they *do* respect the order of precedence (latter dictionaries have precedence).

You can also chain the dictionaries manually inside a [dict comprehension](https://www.python.org/dev/peps/pep-0274/):

```
{k: v for d in dicts for k, v in d.items()}  # iteritems in Python 2.7
```

or in Python 2.6 (and perhaps as early as 2.4 when generator expressions were introduced):

```
dict((k, v) for d in dicts for k, v in d.items())  # iteritems in Python 2
```

`itertools.chain` will chain the iterators over the key-value pairs in the correct order:

```
from itertools import chain
z = dict(chain(x.items(), y.items()))  # iteritems in Python 2
```

## Performance Analysis

I'm only going to do the performance analysis of the usages known to behave correctly. (Self-contained so you can copy and paste yourself.)
```
from timeit import repeat
from itertools import chain

x = dict.fromkeys('abcdefg')
y = dict.fromkeys('efghijk')

def merge_two_dicts(x, y):
    z = x.copy()
    z.update(y)
    return z

min(repeat(lambda: {**x, **y}))
min(repeat(lambda: merge_two_dicts(x, y)))
min(repeat(lambda: {k: v for d in (x, y) for k, v in d.items()}))
min(repeat(lambda: dict(chain(x.items(), y.items()))))
min(repeat(lambda: dict(item for d in (x, y) for item in d.items())))
```

In Python 3.8.1, NixOS:

```
>>> min(repeat(lambda: {**x, **y}))
1.0804965235292912
>>> min(repeat(lambda: merge_two_dicts(x, y)))
1.636518670246005
>>> min(repeat(lambda: {k: v for d in (x, y) for k, v in d.items()}))
3.1779992282390594
>>> min(repeat(lambda: dict(chain(x.items(), y.items()))))
2.740647904574871
>>> min(repeat(lambda: dict(item for d in (x, y) for item in d.items())))
4.266070580109954
```

```
$ uname -a
Linux nixos 4.19.113 #1-NixOS SMP Wed Mar 25 07:06:15 UTC 2020 x86_64 GNU/Linux
```

## Resources on Dictionaries

* [My explanation of Python's **dictionary implementation**, updated for 3.6.](https://stackoverflow.com/questions/327311/how-are-pythons-built-in-dictionaries-implemented/44509302#44509302)
* [Answer on how to add new keys to a dictionary](https://stackoverflow.com/questions/1024847/add-new-keys-to-a-dictionary/27208535#27208535)
* [Mapping two lists into a dictionary](https://stackoverflow.com/questions/209840/map-two-lists-into-a-dictionary-in-python/33737067#33737067)
* [The official Python docs on dictionaries](https://docs.python.org/3/tutorial/datastructures.html#dictionaries)
* [The Dictionary Even Mightier](https://www.youtube.com/watch?v=66P5FMkWoVU) - talk by Brandon Rhodes at Pycon 2017
* [Modern Python Dictionaries, A Confluence of Great Ideas](https://www.youtube.com/watch?v=npw4s1QTmPg) - talk by Raymond Hettinger at Pycon 2017
In your case, you can do:

```
z = dict(list(x.items()) + list(y.items()))
```

This will, as you want it, put the final dict in `z`, and make the value for key `b` be properly overridden by the second (`y`) dict's value:

```
>>> x = {'a': 1, 'b': 2}
>>> y = {'b': 10, 'c': 11}
>>> z = dict(list(x.items()) + list(y.items()))
>>> z
{'a': 1, 'c': 11, 'b': 10}
```

If you use Python 2, you can even remove the `list()` calls. To create z:

```
>>> z = dict(x.items() + y.items())
>>> z
{'a': 1, 'c': 11, 'b': 10}
```

If you use Python version 3.9.0a4 or greater, you can directly use:

```
>>> x = {'a': 1, 'b': 2}
>>> y = {'b': 10, 'c': 11}
>>> z = x | y
>>> z
{'a': 1, 'c': 11, 'b': 10}
```
How do I merge two dictionaries in a single expression in Python?
[ "", "python", "dictionary", "merge", "" ]
If I have interface `IFoo` and several classes that implement it, what is the best/most elegant/cleverest way to test all those classes against the interface? I'd like to reduce test code duplication, but still 'stay true' to the principles of unit testing. What would you consider best practice? I'm using NUnit, but I suppose examples from any unit testing framework would be valid.
If you have classes that implement any one interface, then they all need to implement the methods in that interface. In order to test these classes you would need to create a unit test class for each of them. Let's go with a smarter route instead; if your goal is to **avoid code and test code duplication**, you might want to create an abstract class that handles the **recurring** code. E.g. you have the following interface:

```
public interface IFoo
{
    void CommonCode();
    void SpecificCode();
}
```

You might want to create an abstract class:

```
public abstract class AbstractFoo : IFoo
{
    public void CommonCode()
    {
        SpecificCode();
    }

    public abstract void SpecificCode();
}
```

Testing that is easy; implement the abstract class in the test class, either as an inner class:

```
[TestFixture]
public class TestClass
{
    private class TestFoo : AbstractFoo
    {
        public bool hasCalledSpecificCode = false;

        public override void SpecificCode()
        {
            hasCalledSpecificCode = true;
        }
    }

    [Test]
    public void TestCommonCallsSpecificCode()
    {
        TestFoo fooFighter = new TestFoo();
        fooFighter.CommonCode();
        Assert.That(fooFighter.hasCalledSpecificCode, Is.True);
    }
}
```

...or let the test class extend the abstract class itself if that fits your fancy:

```
[TestFixture]
public class TestClass : AbstractFoo
{
    private bool hasCalledSpecificCode;

    public override void SpecificCode()
    {
        hasCalledSpecificCode = true;
    }

    [Test]
    public void TestCommonCallsSpecificCode()
    {
        AbstractFoo fooFighter = this;
        hasCalledSpecificCode = false;
        fooFighter.CommonCode();
        Assert.That(hasCalledSpecificCode, Is.True);
    }
}
```

Having an abstract class take care of common code that an interface implies gives a much cleaner code design. I hope this makes sense to you.

---

As a side note, this is a common design pattern called the **[Template Method pattern](http://en.wikipedia.org/wiki/Template_method_pattern)**. In the above example, the template method is the `CommonCode` method and `SpecificCode` is called a stub or a hook.
The idea is that anyone can extend behavior without the need to know the behind the scenes stuff. A lot of frameworks rely on this behavioral pattern, e.g. [ASP.NET](http://msdn.microsoft.com/en-us/library/ms178472.aspx) where you have to implement the hooks in a page or a user controls such as the generated `Page_Load` method which is called by the `Load` event, the template method calls the hooks behind the scenes. There are a lot more examples of this. Basically anything that you have to implement that is using the words "load", "init", or "render" is called by a template method.
I disagree with [Jon Limjap](https://stackoverflow.com/questions/39003/nunit-how-to-test-all-classes-that-implement-a-particular-interface#39036) when he says, > It is not a contract on either a.) how the method should be implemented and b.) what that method should be doing exactly (it only guarantees the return type), the two reasons that I glean would be your motive in wanting this kind of test. There could be many parts of the contract not specified in the return type. A language-agnostic example: ``` public interface List { // adds o and returns the list public List add(Object o); // removed the first occurrence of o and returns the list public List remove(Object o); } ``` Your unit tests on LinkedList, ArrayList, CircularlyLinkedList, and all the others should test not only that the lists themselves are returned, but also that they have been properly modified. There was an [earlier question](https://stackoverflow.com/questions/26455/does-design-by-contract-work-for-you#34811) on design-by-contract, which can help point you in the right direction on one way of DRYing up these tests. If you don't want the overhead of contracts, I recommend test rigs, along the lines of what [Spoike](https://stackoverflow.com/questions/39003/nunit-how-to-test-all-classes-that-implement-a-particular-interface#39034) recommended: ``` abstract class BaseListTest { abstract public List newListInstance(); public void testAddToList() { // do some adding tests } public void testRemoveFromList() { // do some removing tests } } class ArrayListTest < BaseListTest { List newListInstance() { new ArrayList(); } public void arrayListSpecificTest1() { // test something about ArrayLists beyond the List requirements } } ```
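The "test rig" idea above translates directly to most xUnit frameworks. As a hedged, language-shifted sketch (class and method names are invented), here is the same shape in Python's `unittest`: a non-TestCase base class holds the contract tests, and each implementation gets a tiny subclass that only says how to build an instance:

```python
import unittest

class BaseListContract:
    """Contract tests shared by every list implementation under test."""

    def new_list(self):
        raise NotImplementedError  # each implementation subclass overrides this

    def test_add_appends_item(self):
        lst = self.new_list()
        lst.append("x")
        self.assertIn("x", lst)

    def test_remove_deletes_first_occurrence(self):
        lst = self.new_list()
        lst.append("x")
        lst.append("x")
        lst.remove("x")
        self.assertEqual(len(lst), 1)

# The contract class deliberately does NOT inherit from TestCase, so the
# runner never tries to execute it without a concrete implementation.
class PythonListTest(BaseListContract, unittest.TestCase):
    def new_list(self):
        return list()

# run with: python -m unittest this_module
```

Each additional implementation (a linked list, a ring buffer, ...) is one more three-line subclass, and all inherit the full contract suite; the same split works in NUnit with an abstract fixture class and concrete `[TestFixture]` subclasses.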
NUnit - How to test all classes that implement a particular interface
[ "", "c#", ".net", "unit-testing", "nunit", "" ]
I am working on localization for an ASP.NET application that consists of several projects. For this, there are some strings that are used in several of these projects. Naturally, I would prefer to have only one copy of the resource file, rather than one in each project. Since the resource files don't have a namespace (at least as far as I can tell), they can't be accessed like regular classes. Is there any way to reference resx files in another project, within the same solution?
You can just create a class library project, add a resource file there, and then refer to that assembly for common resources.
I have used this solution before to share an AssemblyInfo.cs file across all projects in a solution, and I would presume the same would work for a resource file. Create a linked file in each individual project/class library. There will be only one copy, and every project will have a reference to the code via a linked file at compile time. It's a very elegant solution for sharing non-public resources without duplicating code.

```
<Compile Include="path to shared file, usually relative">
  <Link>filename for Visual Studio to display.resx</Link>
</Compile>
```

Add that code to the `Compile` item group of a csproj file, then replace the paths with your actual paths to the resx files, and you should be able to open them. Once you have done this for one project file, you should be able to copy & paste the linked file to other projects without having to hack the csproj.
Referencing resource files from multiple projects in a solution
[ "", "c#", "localization", "resx", "" ]
I want to loop over the contents of a text file, do a search and replace on some lines, and write the result back to the file. I could first load the whole file in memory and then write it back, but that probably is not the best way to do it.

What is the best way to do this, within the following code?

```
f = open(file)
for line in f:
    if 'foo' in line:
        newline = line.replace('foo', 'bar')
        # how to write this newline back to the file
```
I guess something like this should do it. It basically writes the content to a new file and replaces the old file with the new file:

```
from tempfile import mkstemp
from shutil import move, copymode
from os import fdopen, remove

def replace(file_path, pattern, subst):
    # Create temp file
    fh, abs_path = mkstemp()
    with fdopen(fh, 'w') as new_file:
        with open(file_path) as old_file:
            for line in old_file:
                new_file.write(line.replace(pattern, subst))
    # Copy the file permissions from the old file to the new file
    copymode(file_path, abs_path)
    # Remove original file
    remove(file_path)
    # Move new file
    move(abs_path, file_path)
```
The shortest way would probably be to use the [fileinput module](https://docs.python.org/3/library/fileinput.html#fileinput.input). For example, the following adds line numbers to a file, in-place:

```
import fileinput

for line in fileinput.input("test.txt", inplace=True):
    print('{} {}'.format(fileinput.filelineno(), line), end='')  # for Python 3
    # print "%d: %s" % (fileinput.filelineno(), line),           # for Python 2
```

What happens here is:

1. The original file is moved to a backup file
2. The standard output is redirected to the original file within the loop
3. Thus any `print` statements write back into the original file

`fileinput` has more bells and whistles. For example, it can be used to automatically operate on all files in `sys.argv[1:]`, without your having to iterate over them explicitly. Starting with Python 3.2 it also provides a convenient context manager for use in a `with` statement.

---

While `fileinput` is great for throwaway scripts, I would be wary of using it in real code because admittedly it's not very readable or familiar. In real (production) code it's worthwhile to spend just a few more lines of code to make the process explicit and thus make the code readable.

There are two options:

1. The file is not overly large, and you can just read it wholly into memory. Then close the file, reopen it in writing mode and write the modified contents back.
2. The file is too large to be stored in memory; you can move it over to a temporary file and open that, reading it line by line, writing back into the original file. Note that this requires twice the storage.
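Option 1 above, the explicit read-modify-write, can be sketched in a few lines (the file name and the search/replace strings are invented for the example):

```python
# A minimal sketch of the read-whole-file option: read everything into
# memory, apply the replacement, then rewrite the file from scratch.
from pathlib import Path

path = Path("test.txt")
path.write_text("foo one\nplain line\nfoo two\n")  # set up a sample file

text = path.read_text()                       # read wholly into memory
path.write_text(text.replace("foo", "bar"))   # reopen in write mode, write back

print(path.read_text())
```

This is the readable choice whenever the file comfortably fits in memory; for very large files, prefer the temp-file approach from the other answer.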
Search and replace a line in a file in Python
[ "python", "file" ]
I know in certain circumstances, such as long running processes, it is important to lock ASP.NET cache in order to avoid subsequent requests by another user for that resource from executing the long process again instead of hitting the cache. What is the best way in c# to implement cache locking in ASP.NET?
Here's the basic pattern:

* Check the cache for the value, return it if it's available
* If the value is not in the cache, then implement a lock
* Inside the lock, check the cache again, you might have been blocked
* Perform the value look up and cache it
* Release the lock

In code, it looks like this:

```
private static object ThisLock = new object();

public string GetFoo()
{
    // try to pull from cache here

    lock (ThisLock)
    {
        // cache was empty before we got the lock, check again inside the lock

        // cache is still empty, so retrieve the value here

        // store the value in the cache here
    }

    // return the cached value here
}
```
For completeness a full example would look something like this.

```
private static object ThisLock = new object();

...

object dataObject = Cache["globalData"];
if( dataObject == null )
{
    lock( ThisLock )
    {
        dataObject = Cache["globalData"];

        if( dataObject == null )
        {
            //Get Data from db
            dataObject = GlobalObj.GetData();
            Cache["globalData"] = dataObject;
        }
    }
}
return dataObject;
```
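The same double-checked locking pattern, sketched in Python for illustration (the cache dictionary, key, and `loader` callback are hypothetical stand-ins for the ASP.NET `Cache` and database call):

```python
import threading

_lock = threading.Lock()
_cache = {}

def get_global_data(loader):
    # First check without the lock -- the common, cheap path.
    value = _cache.get("globalData")
    if value is None:
        with _lock:
            # Check again inside the lock: another thread may have
            # populated the cache while we were waiting on the lock.
            value = _cache.get("globalData")
            if value is None:
                value = loader()  # the expensive lookup (e.g. the database)
                _cache["globalData"] = value
    return value
```

The second check inside the lock is the crucial part: without it, every thread that was queued on the lock would redo the expensive lookup.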
What is the best way to lock cache in asp.net?
[ "c#", ".net", "asp.net", "caching" ]
I'm working on a module for a CMS. This module is distributed as a class library DLL. I have several utility libraries I'd like to use in this module. Is there anyway I can link these libraries statically so I won't have to distribute several DLL's (thereby distributing my utility libraries separately)? I would like to have only one DLL.
You can merge your many DLLs with ILMERGE: <http://research.microsoft.com/~mbarnett/ILMerge.aspx> Haven't tried it myself. Hope it helps. --- Download here: <http://www.microsoft.com/downloads/details.aspx?familyid=22914587-B4AD-4EAE-87CF-B14AE6A939B0&displaylang=en> **Brief Description** *(from download-page)* ILMerge is a utility for merging multiple .NET assemblies into a single .NET assembly. It works on executables and DLLs alike and comes with several options for controlling the processing and format of the output. See the accompanying documentation for details.
If you don't want to use ILMerge, see this page: <http://blogs.msdn.com/b/microsoft_press/archive/2010/02/03/jeffrey-richter-excerpt-2-from-clr-via-c-third-edition.aspx> *editor's note*: Jeffrey Richter advises putting your dlls into the exe file as resources (*For each DLL file you add, display its properties and change its “Build Action” to “Embedded Resource.”*). Then a custom class loader is needed to make the executable work (*At runtime, the CLR won’t be able to find the dependent DLL assemblies, which is a problem. To fix this, when your application initializes, register a callback method with the AppDomain’s ResolveAssembly event*). Be sure to change the `resourceName` string to point to your actual resources. (e.g. change `AssemblyLoadingAndReflection` to your project name.)
Linking statically in C#
[ "c#", "visual-studio-2008", "linker" ]
I want to create a Java application bundle for Mac without using Mac. According to [Java Deployment Options for Mac OS X](http://developer.apple.com/documentation/Java/Conceptual/Java14Development/03-JavaDeployment/JavaDeployment.html#//apple_ref/doc/uid/TP40001885-208447-TPXREF120), I can do this by using Xcode, Jar Bundler, or from the command line. Once the files and folders are set up, all I need for the command line method is to call /Developer/Tools/SetFile. Is there a SetFile clone on Linux or Windows? If not, do I have to get a Mac?
A Java application bundle on OS X is nothing more than a directory containing your .jars and a number of configuration files. The SetFile tool sets a custom HFS filesystem property on the directory to tell finder that it is an app, but giving it a ".app" extension serves the same purpose. I don't think there's anything stopping you from building one on, say, Windows, though of course you have no way of testing that it works, but if you are able to test it at least once on a real Mac, you could then conceivably update the .jars within it on Windows to reflect code changes without too much difficulty. Have a look at the [Bundle Programming Guide](http://developer.apple.com/documentation/CoreFoundation/Conceptual/CFBundles/CFBundles.html) for more info.
One way is to generate a zip file with the App using for example Ant. In ant you can specify that the file in Contents/MacOS should have execute-permissions using something like filemode="755".
Do I need a Mac to make a Java application bundle?
[ "java", "macos", "deployment" ]
I want to assign the decimal variable "trans" to the double variable "this.Opacity".

```
decimal trans = trackBar1.Value / 5000;
this.Opacity = trans;
```

When I build the app it gives the following error:

> Cannot implicitly convert type decimal to double
An explicit cast to `double` like this isn't necessary:

```
double trans = (double) trackBar1.Value / 5000.0;
```

Identifying the constant as `5000.0` (or as `5000d`) is sufficient:

```
double trans = trackBar1.Value / 5000.0;
double trans = trackBar1.Value / 5000d;
```
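There is a second trap hiding in the question's snippet: `trackBar1.Value / 5000` divides two integers, so the fractional part is truncated before any decimal/double conversion happens. A Python sketch of the same truncation (the trackbar value is a hypothetical example):

```python
value = 2500                 # a hypothetical trackbar position
int_result = value // 5000   # integer division, as in the original C# line
true_result = value / 5000   # what dividing by 5000.0 achieves in C#

assert int_result == 0       # the opacity would always come out 0 (or 1)
assert true_result == 0.5    # true division keeps the fraction
```

This is why writing the constant as `5000.0` fixes more than just the compile error.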
**A more generic answer for the generic question "Decimal vs Double?":** **Decimal** is for monetary calculations to preserve precision. **Double** is for scientific calculations that do not get affected by small differences. Since Double is a type that is native to the CPU (internal representation is stored in *base 2*), calculations made with Double perform better than Decimal (which is represented in *base 10* internally).
How to convert Decimal to Double in C#?
[ "c#", "floating-point", "type-conversion", "double", "decimal" ]
Given a `DateTime` representing a person's birthday, how do I calculate their age in years?
An easy to understand and simple solution.

```
// Save today's date.
var today = DateTime.Today;

// Calculate the age.
var age = today.Year - birthdate.Year;

// Go back to the year in which the person was born in case of a leap year
if (birthdate.Date > today.AddYears(-age))
    age--;
```

However, this assumes you are looking for the *western* idea of the age and not using [*East Asian reckoning*](https://en.wikipedia.org/wiki/East_Asian_age_reckoning).
This is a strange way to do it, but if you format the date to `yyyymmdd` and subtract the date of birth from the current date then drop the last 4 digits you've got the age :)

I don't know C#, but I believe this will work in any language.

```
20080814 - 19800703 = 280111
```

Drop the last 4 digits = `28`.

C# Code:

```
int now = int.Parse(DateTime.Now.ToString("yyyyMMdd"));
int dob = int.Parse(dateOfBirth.ToString("yyyyMMdd"));
int age = (now - dob) / 10000;
```

Or alternatively without all the type conversion in the form of an extension method. Error checking omitted:

```
public static Int32 GetAge(this DateTime dateOfBirth)
{
    var today = DateTime.Today;

    var a = (today.Year * 100 + today.Month) * 100 + today.Day;
    var b = (dateOfBirth.Year * 100 + dateOfBirth.Month) * 100 + dateOfBirth.Day;

    return (a - b) / 10000;
}
```
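The `yyyymmdd` trick above is language-agnostic, so here is a small sketch of it in Python (the function name is illustrative); packing each date into a `yyyymmdd` integer and flooring the difference by 10000 handles the "birthday not yet reached this year" case automatically:

```python
from datetime import date

def age_yyyymmdd(born, today):
    # Pack each date into a yyyymmdd integer, exactly as the answer describes.
    a = today.year * 10000 + today.month * 100 + today.day
    b = born.year * 10000 + born.month * 100 + born.day
    # Dropping the last four digits of the difference leaves whole years.
    return (a - b) // 10000

# The worked example from the answer: 20080814 - 19800703 = 280111 -> 28
assert age_yyyymmdd(date(1980, 7, 3), date(2008, 8, 14)) == 28
```

It works because the month/day digits can never borrow more than one year's worth of value from the year digits.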
How do I calculate someone's age based on a DateTime type birthday?
[ "c#", ".net", "datetime" ]
Given a specific `DateTime` value, how do I display relative time, like: * `2 hours ago` * `3 days ago` * `a month ago`
Jeff, [your code](https://stackoverflow.com/questions/11/how-do-i-calculate-relative-time/12#12) is nice but could be clearer with constants (as suggested in Code Complete).

```
const int SECOND = 1;
const int MINUTE = 60 * SECOND;
const int HOUR = 60 * MINUTE;
const int DAY = 24 * HOUR;
const int MONTH = 30 * DAY;

var ts = new TimeSpan(DateTime.UtcNow.Ticks - yourDate.Ticks);
double delta = Math.Abs(ts.TotalSeconds);

if (delta < 1 * MINUTE)
    return ts.Seconds == 1 ? "one second ago" : ts.Seconds + " seconds ago";

if (delta < 2 * MINUTE)
    return "a minute ago";

if (delta < 45 * MINUTE)
    return ts.Minutes + " minutes ago";

if (delta < 90 * MINUTE)
    return "an hour ago";

if (delta < 24 * HOUR)
    return ts.Hours + " hours ago";

if (delta < 48 * HOUR)
    return "yesterday";

if (delta < 30 * DAY)
    return ts.Days + " days ago";

if (delta < 12 * MONTH)
{
    int months = Convert.ToInt32(Math.Floor((double)ts.Days / 30));
    return months <= 1 ? "one month ago" : months + " months ago";
}
else
{
    int years = Convert.ToInt32(Math.Floor((double)ts.Days / 365));
    return years <= 1 ? "one year ago" : years + " years ago";
}
```
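For readers in other languages, the same threshold ladder can be sketched in Python; this is a straight transliteration of the constants and cutoffs above, taking a delta in seconds:

```python
SECOND = 1
MINUTE = 60 * SECOND
HOUR = 60 * MINUTE
DAY = 24 * HOUR
MONTH = 30 * DAY

def relative_time(delta_seconds):
    delta = abs(delta_seconds)
    if delta < 1 * MINUTE:
        s = int(delta)
        return "one second ago" if s == 1 else "%d seconds ago" % s
    if delta < 2 * MINUTE:
        return "a minute ago"
    if delta < 45 * MINUTE:
        return "%d minutes ago" % (delta // MINUTE)
    if delta < 90 * MINUTE:
        return "an hour ago"
    if delta < 24 * HOUR:
        return "%d hours ago" % (delta // HOUR)
    if delta < 48 * HOUR:
        return "yesterday"
    if delta < 30 * DAY:
        return "%d days ago" % (delta // DAY)
    if delta < 12 * MONTH:
        months = int(delta // DAY) // 30
        return "one month ago" if months <= 1 else "%d months ago" % months
    years = int(delta // DAY) // 365
    return "one year ago" if years <= 1 else "%d years ago" % years
```

The ordered `if` cascade means each threshold only needs a lower bound, which keeps the function easy to extend.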
## [jquery.timeago plugin](https://timeago.yarp.com/)

Jeff, because Stack Overflow uses jQuery extensively, I recommend the [jquery.timeago plugin](https://timeago.yarp.com/).

Benefits:

* Avoid timestamps dated "1 minute ago" even though the page was opened 10 minutes ago; timeago refreshes automatically.
* You can take full advantage of page and/or fragment caching in your web applications, because the timestamps aren't calculated on the server.
* You get to use microformats like the cool kids.

Just attach it to your timestamps on DOM ready:

```
jQuery(document).ready(function() {
    jQuery('abbr.timeago').timeago();
});
```

This will turn all `abbr` elements with a class of timeago and an [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) timestamp in the title:

```
<abbr class="timeago" title="2008-07-17T09:24:17Z">July 17, 2008</abbr>
```

into something like this:

```
<abbr class="timeago" title="July 17, 2008">4 months ago</abbr>
```

which yields: 4 months ago. As time passes, the timestamps will automatically update.

*Disclaimer: I wrote this plugin, so I'm biased.*
Calculate relative time in C#
[ "c#", "datetime", "time", "datediff", "relative-time-span" ]
Code below does not run correctly and throws `InvalidOperationException`.

```
public void Foo()
{
    DataContext context = new DataContext();
    LinqEntity item = new LinqEntity() { Id = 1, Name = "John", Surname = "Doe" };
    context.LinqEntities.Attach(item, true);
}
```
By default, the entities will use all fields for checking concurrency when making edits. That's what's throwing the `InvalidOperationException`.

This can be fixed by setting the Update Check property for all fields to Never. This must be done on all fields to attach the entity as modified. If this is done, an additional call to `context.SubmitChanges()` will save the data.

Alternatively, if you know the original values, you can attach and then make the updates, but all values that are being checked must match the original values.

```
LinqEntity item = new LinqEntity() { Id = 1, Name = "OldName", Surname = "OldSurname" };
context.LinqEntities.Attach(item);
item.Name = "John";
item.Surname = "Doe";
context.SubmitChanges();
```
I'm not sure what you mean by disconnected from the database. It appears that you are trying to insert a new row into the LinqEntities table -- is that correct?

If that is the case you'll want to do

```
context.LinqEntities.InsertOnSubmit(item);
context.Submit();
```
How can I update in Linq an entity that is disconnected from database?
[ "c#", "linq" ]
Given a Python object of any kind, is there an easy way to get the list of all methods that this object has? Or if this is not possible, is there at least an easy way to check if it has a particular method, other than checking if an error occurs when the method is called?
**For many objects**, you can use this code, replacing 'object' with the object you're interested in:

```
object_methods = [method_name for method_name in dir(object)
                  if callable(getattr(object, method_name))]
```

I discovered it at [diveintopython.net](https://web.archive.org/web/20180901124519/http://www.diveintopython.net/power_of_introspection/index.html) (now archived), that should provide some further details!

**If you get an `AttributeError`, you can use this instead**:

`getattr()` is intolerant of pandas style Python 3.6 abstract virtual sub-classes. This code does the same as above and ignores exceptions.

```
import pandas as pd

df = pd.DataFrame([[10, 20, 30], [100, 200, 300]],
                  columns=['foo', 'bar', 'baz'])

def get_methods(object, spacing=20):
    methodList = []
    for method_name in dir(object):
        try:
            if callable(getattr(object, method_name)):
                methodList.append(str(method_name))
        except Exception:
            methodList.append(str(method_name))
    processFunc = (lambda s: ' '.join(s.split())) or (lambda s: s)
    for method in methodList:
        try:
            print(str(method.ljust(spacing)) + ' ' +
                  processFunc(str(getattr(object, method).__doc__)[0:90]))
        except Exception:
            print(method.ljust(spacing) + ' ' + ' getattr() failed')

get_methods(df['foo'])
```
You can use the built in `dir()` function to get a list of all the attributes a module has. Try this at the command line to see how it works.

```
>>> import moduleName
>>> dir(moduleName)
```

Also, you can use the `hasattr(module_name, "attr_name")` function to find out if a module has a specific attribute.

See the [Python introspection](https://zetcode.com/python/introspection/) for more information.
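Putting the two ideas together, here is a minimal self-contained sketch (the `Greeter` class is made up for illustration): guard a call with `hasattr`/`callable` instead of catching the error, and filter `dir()` down to the object's own methods:

```python
class Greeter:
    def greet(self):
        return "hi"

g = Greeter()

# Check for a method before calling it, instead of waiting for an error:
if hasattr(g, "greet") and callable(getattr(g, "greet")):
    result = g.greet()

# The object's methods, filtered out of dir() (dunders excluded for clarity):
methods = [name for name in dir(g)
           if callable(getattr(g, name)) and not name.startswith("__")]
assert methods == ["greet"]
```

`hasattr` is itself implemented by attempting the attribute lookup, so this is the "look before you leap" spelling of the same check.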
Finding what methods a Python object has
[ "python", "introspection" ]
So I have a Sybase stored proc that takes 1 parameter that's a comma separated list of strings and runs a query with it in an IN() clause:

```
CREATE PROCEDURE getSomething @keyList varchar(4096)
AS
SELECT * FROM mytbl WHERE name IN (@keyList)
```

How do I call my stored proc with more than 1 value in the list? So far I've tried

```
exec getSomething 'John'              -- works but only 1 value
exec getSomething 'John','Tom'        -- doesn't work - expects two variables
exec getSomething "'John','Tom'"      -- doesn't work - doesn't find anything
exec getSomething '"John","Tom"'      -- doesn't work - doesn't find anything
exec getSomething '\'John\',\'Tom\''  -- doesn't work - syntax error
```

**EDIT:** I actually found this [page](http://vyaskn.tripod.com/passing_arrays_to_stored_procedures.htm) that has a great reference of the various ways to pass an array to a sproc
If you're using Sybase 12.5 or earlier then you can't use functions. A workaround might be to populate a temporary table with the values and read them from there.
This is a little late, but I had this exact issue a while ago and I found a solution.

The trick is double quoting and then wrapping the whole string in quotes.

```
exec getSomething """John"",""Tom"",""Bob"",""Harry"""
```

Modify your proc to match the table entry to the string.

```
CREATE PROCEDURE getSomething @keyList varchar(4096)
AS
SELECT * FROM mytbl
WHERE @keyList LIKE '%' + name + '%'
```

I've had this in production since ASE 12.5; we're now on 15.0.3.
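The `LIKE '%' + name + '%'` rewrite is really just a substring test on the key list, which is worth understanding before relying on it. A Python sketch of the matching logic (list contents hypothetical) shows both why the double quoting helps and where it can still over-match:

```python
key_list = '"John","Tom"'  # the double-quoted list passed to the proc

def row_matches(name):
    # Mirrors WHERE @keyList LIKE '%' + name + '%': a plain substring test.
    return name in key_list

assert row_matches("John")          # intended match
assert not row_matches("Johnny")    # quoting blocks longer names...
assert row_matches("Jo")            # ...but a shorter name still over-matches
```

So a table row named `Jo` would be returned even though it is not in the list; an exact-match scheme (splitting into a temp table, as the other answer suggests) avoids this.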
How to pass a comma separated list to a stored procedure?
[ "sql", "sap-ase" ]
Given a select with multiple options in jQuery.

```
$select = $("<select></select>");
$select.append("<option>Jason</option>") //Key = 1
       .append("<option>John</option>") //Key = 32
       .append("<option>Paul</option>") //Key = 423
```

How should the key be stored and retrieved?

The ID may be an OK place but would not be guaranteed unique if I had multiple selects sharing values (and other scenarios).

Thanks and in the spirit of TMTOWTDI.

```
$option = $("<option></option>");
$select = $("<select></select>");

$select.addOption = function(value, text) {
    $(this).append($("<option/>").val(value).text(text));
};

$select.append($option.val(1).text("Jason").clone())
       .append("<option value=32>John</option>")
       .append($("<option/>").val(423).text("Paul"))
       .addOption("321", "Lenny");
```
Like lucas said the value attribute is what you need. Using your code it would look something like this (I added an id attribute to the select to make it fit):

```
$select = $('<select id="mySelect"></select>');
$select.append('<option value="1">Jason</option>') //Key = 1
       .append('<option value="32">John</option>') //Key = 32
       .append('<option value="423">Paul</option>') //Key = 423
```

jQuery lets you get the value using the val() method. Using it on the select tag you get the current selected option's value.

```
$( '#mySelect' ).val(); //Gets the value for the current selected option

$( '#mySelect > option' ).each( function( index, option ) {
    option.val(); //The value for each individual option
} );
```

Just in case, the .each method loops through every element the query matched.
The HTML `<option>` tag has an attribute called "value", where you can store your key. e.g.: ``` <option value=1>Jason</option> ``` I don't know how this will play with jQuery (I don't use it), but I hope this is helpful nonetheless.
Keeping key value pairs together in HTML <select/> with jQuery?
[ "javascript", "jquery", "html" ]
I love list comprehensions in Python, because they concisely represent a transformation of a list.

However, in other languages, I frequently find myself writing something along the lines of:

```
foreach (int x in intArray)
  if (x > 3) //generic condition on x
    x++
    //do other processing
```

This example is in C#, where I'm under the impression LINQ can help with this, but is there some common programming construct which can replace this slightly less-than-elegant solution? Perhaps a data structure I'm not considering?
The increment in the original `foreach` loop will not affect the contents of the array, the only way to do this remains a `for` loop:

```
for(int i = 0; i < intArray.Length; ++i)
{
    if(intArray[i] > 3)
        ++intArray[i];
}
```

Linq is not intended to modify existing collections or sequences. It creates new sequences based on existing ones. It is possible to achieve the above code using Linq, though it is slightly against its purposes:

```
var newArray1 = from i in intArray select ((i > 3) ? (i + 1) : (i));
var newArray2 = intArray.Select(i => (i > 3) ? (i + 1) : (i));
```

Using `where` (or equivalent), as shown in some of the other answers, will exclude any values less than or equal to 3 from the resulting sequence.

```
var intArray = new int[] { 10, 1, 20, 2 };
var newArray = from i in intArray where i > 3 select i + 1;
// newArray == { 11, 21 }
```

There is a static `Array.ForEach` method that will allow you to use a lambda function instead of a `foreach` block, though for anything more than a method call I would stick with `foreach`.

```
Array.ForEach(intArray, i => DoSomething(i));
```
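Since the question starts from Python list comprehensions, it may help to see the two Linq shapes above written back as comprehensions (same sample array as the answer):

```python
int_array = [10, 1, 20, 2]

# Conditional transform keeping every element (like Select with a ternary):
new_array = [i + 1 if i > 3 else i for i in int_array]
assert new_array == [11, 1, 21, 2]

# Filter then transform, dropping elements (like the where-clause version):
filtered = [i + 1 for i in int_array if i > 3]
assert filtered == [11, 21]
```

The placement of the `if` is the whole distinction: before the `for` it chooses between values, after the `for` it chooses which elements survive, which is exactly the Select-vs-Where split in Linq.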
In C# you can apply selective processing on anything that lives inside an IEnumerable like this:

```
intArray.Where(i => i > 3).ConvertAll();
DoStuff(intArray.Where(i => i > 3));
```

Etc..
Replacement for for... if array iteration
[ ".net", "python", "arrays", "loops", "iteration" ]
Help! I have an Axis web service that is being consumed by a C# application. Everything works great, except that arrays of long values always come across as [0,0,0,0] - the right length, but the values aren't deserialized. I have tried with other primitives (ints, doubles) and the same thing happens. What do I do? I don't want to change the semantics of my service.
Here's what I ended up with. I have never found another solution out there for this, so if you have something better, by all means, contribute.

First, the long array definition in the wsdl:types area:

```
<xsd:complexType name="ArrayOf_xsd_long">
  <xsd:complexContent mixed="false">
    <xsd:restriction base="soapenc:Array">
      <xsd:attribute wsdl:arrayType="soapenc:long[]" ref="soapenc:arrayType" />
    </xsd:restriction>
  </xsd:complexContent>
</xsd:complexType>
```

Next, we create a SoapExtensionAttribute that will perform the fix. It seems that the problem was that .NET wasn't following the multiref id to the element containing the double value. So, we process the array item, go find the value, and then insert the value into the element:

```
[AttributeUsage(AttributeTargets.Method)]
public class LongArrayHelperAttribute : SoapExtensionAttribute
{
    private int priority = 0;

    public override Type ExtensionType
    {
        get { return typeof (LongArrayHelper); }
    }

    public override int Priority
    {
        get { return priority; }
        set { priority = value; }
    }
}

public class LongArrayHelper : SoapExtension
{
    private static ILog log = LogManager.GetLogger(typeof (LongArrayHelper));

    public override object GetInitializer(LogicalMethodInfo methodInfo, SoapExtensionAttribute attribute)
    {
        return null;
    }

    public override object GetInitializer(Type serviceType)
    {
        return null;
    }

    public override void Initialize(object initializer)
    {
    }

    private Stream originalStream;
    private Stream newStream;

    public override void ProcessMessage(SoapMessage m)
    {
        switch (m.Stage)
        {
            case SoapMessageStage.AfterSerialize:
                newStream.Position = 0; //need to reset stream
                CopyStream(newStream, originalStream);
                break;
            case SoapMessageStage.BeforeDeserialize:
                XmlWriterSettings settings = new XmlWriterSettings();
                settings.Indent = false;
                settings.NewLineOnAttributes = false;
                settings.NewLineHandling = NewLineHandling.None;
                settings.NewLineChars = "";
                XmlWriter writer = XmlWriter.Create(newStream, settings);

                XmlDocument xmlDocument = new XmlDocument();
                xmlDocument.Load(originalStream);

                List<XmlElement> longArrayItems = new List<XmlElement>();
                Dictionary<string, XmlElement> multiRefs = new Dictionary<string, XmlElement>();
                FindImportantNodes(xmlDocument.DocumentElement, longArrayItems, multiRefs);
                FixLongArrays(longArrayItems, multiRefs);

                xmlDocument.Save(writer);
                newStream.Position = 0;
                break;
        }
    }

    private static void FindImportantNodes(XmlElement element, List<XmlElement> longArrayItems,
                                           Dictionary<string, XmlElement> multiRefs)
    {
        string val = element.GetAttribute("soapenc:arrayType");
        if (val != null && val.Contains(":long["))
        {
            longArrayItems.Add(element);
        }
        if (element.Name == "multiRef")
        {
            multiRefs[element.GetAttribute("id")] = element;
        }
        foreach (XmlNode node in element.ChildNodes)
        {
            XmlElement child = node as XmlElement;
            if (child != null)
            {
                FindImportantNodes(child, longArrayItems, multiRefs);
            }
        }
    }

    private static void FixLongArrays(List<XmlElement> longArrayItems, Dictionary<string, XmlElement> multiRefs)
    {
        foreach (XmlElement element in longArrayItems)
        {
            foreach (XmlNode node in element.ChildNodes)
            {
                XmlElement child = node as XmlElement;
                if (child != null)
                {
                    string href = child.GetAttribute("href");
                    if (href == null || href.Length == 0)
                    {
                        continue;
                    }
                    if (href.StartsWith("#"))
                    {
                        href = href.Remove(0, 1);
                    }
                    XmlElement multiRef = multiRefs[href];
                    if (multiRef == null)
                    {
                        continue;
                    }
                    child.RemoveAttribute("href");
                    child.InnerXml = multiRef.InnerXml;
                    if (log.IsDebugEnabled)
                    {
                        log.Debug("Replaced multiRef id '" + href + "' with value: " + multiRef.InnerXml);
                    }
                }
            }
        }
    }

    public override Stream ChainStream(Stream s)
    {
        originalStream = s;
        newStream = new MemoryStream();
        return newStream;
    }

    private static void CopyStream(Stream from, Stream to)
    {
        TextReader reader = new StreamReader(from);
        TextWriter writer = new StreamWriter(to);
        writer.WriteLine(reader.ReadToEnd());
        writer.Flush();
    }
}
```

Finally, we tag all methods in the Reference.cs file that will be deserializing a long array with our attribute:

```
[SoapRpcMethod("", RequestNamespace="http://some.service.provider",
               ResponseNamespace="http://some.service.provider")]
[return : SoapElement("getFooReturn")]
[LongArrayHelper]
public Foo getFoo()
{
    object[] results = Invoke("getFoo", new object[0]);
    return ((Foo) (results[0]));
}
```

This fix is long-specific, but it could probably be generalized to handle any primitive type having this problem.
Here's a more or less copy-pasted version of a [blog post](http://www.tomergabel.com/GettingWCFAndApacheAxisToBeFriendly.aspx) I wrote on the subject.

Executive summary: You can either change the way .NET deserializes the result set (see Chris's solution above), or you can reconfigure Axis to serialize its results in a way that's compatible with the .NET SOAP implementation.

If you go the latter route, here's how:

> ... the generated classes look and appear to function normally, but if you'll look at the deserialized array on the client (.NET/WCF) side you'll find that the array has been deserialized incorrectly, and all values in the array are 0. You'll have to manually look at the SOAP response returned by Axis to figure out what's wrong; here's a sample response (again, edited for clarity):

```
<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <doSomethingResponse>
      <doSomethingReturn>
        <doSomethingReturn href="#id0"/>
        <doSomethingReturn href="#id1"/>
        <doSomethingReturn href="#id2"/>
        <doSomethingReturn href="#id3"/>
        <doSomethingReturn href="#id4"/>
      </doSomethingReturn>
    </doSomethingResponse>
    <multiRef id="id4">5</multiRef>
    <multiRef id="id3">4</multiRef>
    <multiRef id="id2">3</multiRef>
    <multiRef id="id1">2</multiRef>
    <multiRef id="id0">1</multiRef>
  </soapenv:Body>
</soapenv:Envelope>
```

> You'll notice that Axis does not generate values directly in the returned element, but instead references external elements for values. This might make sense when there are many references to relatively few discrete values, but whatever the case this is not properly handled by the WCF basicHttpBinding provider (and reportedly by gSOAP and classic .NET web references as well).
>
> It took me a while to find a solution: edit your Axis deployment's server-config.wsdd file and find the following parameter:

```
<parameter name="sendMultiRefs" value="true"/>
```

> Change it to false, then redeploy via the command line, which looks (under Windows) something like this:

```
java -cp %AXISCLASSPATH% org.apache.axis.client.AdminClient server-config.wsdl
```

> The web service's response should now be deserializable by your .NET client.
Why won't .NET deserialize my primitive array from a web service?
[ "java", ".net", "service", "primitive" ]
I would like to have a reference for the pros and cons of using include **files vs objects(classes)** when developing PHP applications. I know I would benefit from having one place to go for this answer...I have a few opinions of my own but I look forward to hearing others. **A Simple Example:** Certain pages on my site are only accessible to logged in users. I have two options for implementation (there are others but let's limit it to these two) 1. *Create an authenticate.php file and include it on every page. It holds the logic for authentication.* 2. *Create a user object, which has an authenticate function, reference the object for authentication on every page.* **Edit** I'd like to see some way weigh the benefits of one over the other. My current (and weak reasons) follow: Includes - Sometimes a function is just easier/shorter/faster to call Objects - Grouping of functionality and properties leads for longer term maintenance. **Includes** - Less code to write (no constructor, no class syntax) call me lazy but this is true. **Objects** - Force formality and a single approach to functions and creation. **Includes** - Easier for a novice to deal with Objects - Harder for novices, but frowned upon by professionals. I look at these factors at the start of a project to decide if I want to do includes or objects. Those are a few pros and cons off the top of my head.
These are not really opposite choices. You will have to include the checking code anyway. I read your question as procedural programming vs. OO programming. Writing a few lines of code, or a function, and including it in your page header was how things were done in PHP3 or PHP4. It's simple, it works (that's how we did it in [osCommerce](http://www.oscommerce.com/), for example, an eCommerce PHP application). But it's not easy to maintain and modify, as many developers can confirm. In PHP5 you'd write a user object which will carry its own data and methods for authentication. Your code will be clearer and easier to maintain as everything having to do with users and authentication will be concentrated in a single place.
While the question touches on a couple of very debatable issues (OOP, User authentication) I'll skip by those and second Konrad's comment about \_\_autoload. Anyone who knows C/C++ knows how much of a pain including files can be. With autoload, a PHP5 addition, if you choose to use OOP (which I do almost exclusively) you only need use some standard file naming convention and (I would recommend) restricting a single class per file and PHP will do the rest for you. Cleans up the code and you no longer have to worry about remembering to remove includes that are no longer necessary (one of the many problems with includes).
PHP includes vs OOP
[ "php", "coding-style" ]
I have a php server that is running my domain name. For testing purposes I am running an asp.net on a dotted quad IP. I am hoping to link them together via either PHP or some kind of DNS/`.htaccess` voodoo. So if I go to `www.mydomain.com/test` it redirects (but keeps the url of (`www.mydomain.com/test`) in the browser's address bar and the pages are served by the dotted quad IP asp.net box.
Instead of pointing `www.yourdomain.com/test` at your test server, why not use `test.yourdomain.com`? Assuming you have access to the DNS records for `yourdomain.com`, you should just need to create an A record mapping `test.yourdomain.com` to your test server's IP address.
It is quite possible, if I understand what you're getting at. You have a PHP server with your domain pointing to it. You also have a separate ASP.NET server that only has an IP address associated with it, no domain. Is there any drawback to simply pointing your domain name to your ASP.NET box?
Redirect from domain name to a dotted quad hosted box
[ "php", "hosting", "dns" ]
Details: * Only disable after user clicks the submit button, but before the posting back to the server * ASP.NET Webforms (.NET 1.1) * Prefer jQuery (if any library at all) * Must be enabled if form reloads (i.e. credit card failed) This isn't a necessity that I do this, but if there is a simple way to do it without having to change too much, I'll do it. (i.e. if there isn't a simple solution, I probably won't do it, so don't worry about digging too deep)
For all submit buttons, via JQuery, it'd be:

```
$('input[type=submit]').click(function() {
    this.disabled = true;
});
```

Or it might be more useful to do so on form submission:

```
$('form').submit(function() {
    $('input[type=submit]', this).attr("disabled", "disabled");
});
```

But I think we could give a better answer to your question if we knew a bit more about the context.

If this is an ajax request, then you'll need to make sure you enable submit buttons again on either success or failure.

If this is a standard HTTP form submission (aside from disabling the button with javascript) and you're doing this to safe guard from multiple submissions of the same form, then you ought to have some sort of control in the code that deals with the submitted data, because disabling a button with javascript might not prevent multiple submissions.
You could do something like this:

```
$('form').submit(function() {
    $(this)
        .find(":submit,:image")          // get all the submit buttons
        .attr({ disabled: 'disabled' })  // disable them
        .end()                           // go back to this form
        .submit(function() {             // change the onsubmit to always reject.
            return false;
        });
});
```

Benefits of this:

* It will work with all your forms, with all methods of submission:
  + clicking a submit element
  + pressing enter, or
  + calling `form.submit()` from some other code
* It will disable all submit elements:
  + `<input type="submit"/>`
  + `<button type="submit"></button>`
  + `<input type="image" />`
* it's really short.
What is the best approach for (client-side) disabling of a submit button?
[ "asp.net", "javascript", "jquery", "webforms", ".net-1.1" ]
I'm using ant to generate javadocs, but get this exception over and over - why?

I'm using JDK version **1.6.0\_06**.

```
[javadoc] java.lang.ClassCastException: com.sun.tools.javadoc.ClassDocImpl cannot be cast to com.sun.javadoc.AnnotationTypeDoc
[javadoc]   at com.sun.tools.javadoc.AnnotationDescImpl.annotationType(AnnotationDescImpl.java:46)
[javadoc]   at com.sun.tools.doclets.formats.html.HtmlDocletWriter.getAnnotations(HtmlDocletWriter.java:1739)
[javadoc]   at com.sun.tools.doclets.formats.html.HtmlDocletWriter.writeAnnotationInfo(HtmlDocletWriter.java:1713)
[javadoc]   at com.sun.tools.doclets.formats.html.HtmlDocletWriter.writeAnnotationInfo(HtmlDocletWriter.java:1702)
[javadoc]   at com.sun.tools.doclets.formats.html.HtmlDocletWriter.writeAnnotationInfo(HtmlDocletWriter.java:1681)
[javadoc]   at com.sun.tools.doclets.formats.html.FieldWriterImpl.writeSignature(FieldWriterImpl.java:130)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.builders.FieldBuilder.buildSignature(FieldBuilder.java:184)
[javadoc]   at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
[javadoc]   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[javadoc]   at java.lang.reflect.Method.invoke(Method.java:597)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.builders.FieldBuilder.invokeMethod(FieldBuilder.java:114)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.builders.AbstractBuilder.build(AbstractBuilder.java:90)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.builders.AbstractMemberBuilder.build(AbstractMemberBuilder.java:56)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.builders.FieldBuilder.buildFieldDoc(FieldBuilder.java:158)
[javadoc]   at sun.reflect.GeneratedMethodAccessor51.invoke(Unknown Source)
[javadoc]   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[javadoc]   at java.lang.reflect.Method.invoke(Method.java:597)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.builders.FieldBuilder.invokeMethod(FieldBuilder.java:114)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.builders.AbstractBuilder.build(AbstractBuilder.java:90)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.builders.AbstractMemberBuilder.build(AbstractMemberBuilder.java:56)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.buildFieldDetails(ClassBuilder.java:301)
[javadoc]   at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
[javadoc]   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[javadoc]   at java.lang.reflect.Method.invoke(Method.java:597)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.invokeMethod(ClassBuilder.java:101)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.builders.AbstractBuilder.build(AbstractBuilder.java:90)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.buildClassDoc(ClassBuilder.java:124)
[javadoc]   at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
[javadoc]   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[javadoc]   at java.lang.reflect.Method.invoke(Method.java:597)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.invokeMethod(ClassBuilder.java:101)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.builders.AbstractBuilder.build(AbstractBuilder.java:90)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.builders.ClassBuilder.build(ClassBuilder.java:108)
[javadoc]   at com.sun.tools.doclets.formats.html.HtmlDoclet.generateClassFiles(HtmlDoclet.java:155)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.AbstractDoclet.generateClassFiles(AbstractDoclet.java:164)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.AbstractDoclet.startGeneration(AbstractDoclet.java:106)
[javadoc]   at com.sun.tools.doclets.internal.toolkit.AbstractDoclet.start(AbstractDoclet.java:64)
[javadoc]   at com.sun.tools.doclets.formats.html.HtmlDoclet.start(HtmlDoclet.java:42)
[javadoc]   at com.sun.tools.doclets.standard.Standard.start(Standard.java:23)
[javadoc]   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[javadoc]   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[javadoc]   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[javadoc]   at java.lang.reflect.Method.invoke(Method.java:597)
[javadoc]   at com.sun.tools.javadoc.DocletInvoker.invoke(DocletInvoker.java:215)
[javadoc]   at com.sun.tools.javadoc.DocletInvoker.start(DocletInvoker.java:91)
[javadoc]   at com.sun.tools.javadoc.Start.parseAndExecute(Start.java:340)
[javadoc]   at com.sun.tools.javadoc.Start.begin(Start.java:128)
[javadoc]   at com.sun.tools.javadoc.Main.execute(Main.java:41)
[javadoc]   at com.sun.tools.javadoc.Main.main(Main.java:31)
```
It looks like this has been reported as a [Java bug](https://bugs.java.com/bugdatabase/view_bug?bug_id=6442982). It appears to be caused by using annotations from a 3rd party library (like JUnit) and not including the jar with that annotation in the javadoc invocation. If that is the case, just use the -classpath option on javadoc and include the extra jar files.
I have an idea about this problem, although it is not an exact solution: if you comment out the annotation with `//` and run javadoc again, the problem goes away. E.g. in the `sample.java` file, change ``` @ChannelPipeline ``` to ``` //@ChannelPipeline ``` and run the javadoc command once again. The `ClassCastException` will no longer occur.
Why am I getting a ClassCastException when generating javadocs?
[ "", "java", "ant", "javadoc", "classcastexception", "" ]
I am looking for good methods of manipulating HTML in PHP. For example, the problem I currently have is dealing with malformed HTML. I am getting input that looks something like this: ``` <div>This is some <b>text ``` As you noticed, the HTML is missing closing tags. I could use regex or an XML Parser to solve this problem. However, it is likely that I will have to do other DOM manipulation in the future. I wonder if there are any good PHP libraries that handle DOM manipulation similar to how Javascript deals with DOM manipulation.
PHP has [a PECL extension that gives you access to the features of HTML Tidy](http://php.net/tidy). Tidy is a pretty powerful library that should be able to take code like that and close tags in an intelligent manner. I use it to clean up malformed XML and HTML sent to me by a classified ad system prior to import.
I've found PHP Simple HTML DOM to be the most useful and straightforward library yet. Better than PECL, I would say. I've written an article on [how to use it to scrape myspace artist tour dates](http://www.crainbandy.com/programming/using-php-and-simple-html-dom-parser-to-scrape-artist-tour-dates-off-myspace) (just an example). Here's a link to the [PHP Simple HTML DOM parser.](http://simplehtmldom.sourceforge.net/)
DOM manipulation in PHP
[ "", "php", "html", "dom", "" ]
How do you expose a LINQ query as an ASMX web service? Usually, from the business tier, I can return a typed `DataSet` or a `DataTable` which can be serialized for transport over ASMX. How can I do the same for a LINQ query? Is there a way to populate a typed `DataSet` or a `DataTable` via a LINQ query? ``` public static MyDataTable CallMySproc() { string conn = "..."; MyDatabaseDataContext db = new MyDatabaseDataContext(conn); MyDataTable dt = new MyDataTable(); // execute a sproc via LINQ var query = from dr in db.MySproc().AsEnumerable select dr; // copy LINQ query resultset into a DataTable -this does not work ! dt = query.CopyToDataTable(); return dt; } ``` How could I put the result set of a LINQ query into a `DataSet` or a `DataTable`? Alternatively, can the LINQ query be serializable so that I can expose it as an ASMX web service?
As mentioned in the question, `IEnumerable` has a `CopyToDataTable` method: ``` IEnumerable<DataRow> query = from order in orders.AsEnumerable() where order.Field<DateTime>("OrderDate") > new DateTime(2001, 8, 1) select order; // Create a table from the query. DataTable boundTable = query.CopyToDataTable<DataRow>(); ``` Why won't that work for you?
To perform this query against a `DataContext` class, you'll need to do the following: ``` MyDataContext db = new MyDataContext(); IEnumerable<DataRow> query = (from order in db.Orders.AsEnumerable() select new { order.Property, order.Property2 }) as IEnumerable<DataRow>; return query.CopyToDataTable<DataRow>(); ``` Without the `as IEnumerable<DataRow>;` you will see the following compilation error: > Cannot implicitly convert type 'System.Collections.Generic.IEnumerable' to 'System.Collections.Generic.IEnumerable'. An explicit conversion exists (are you missing a cast?)
Filling a DataSet or a DataTable from a LINQ query result set
[ "", "c#", "linq", "web-services", ".net-3.5", "" ]
I have lots of articles stored in an MS SQL Server 2005 database, in a table called Articles: ``` "Articles (ArticleID, ArticleTitle, ArticleContent)" ``` Now I want an SP or SQL query that returns similar articles for any user's input (much like "Similar Posts" in blogs, or "Related Questions" on Stack Overflow). The matching should work on both ArticleTitle and ArticleContent, and the query should be intelligent enough to sort the results by their relevancy. Is it possible to do this in MS SQL Server 2005?
Something like this might work, a kind of ranking system. You would probably have to split the string in your application to build a SQL string, but I have used similar to build an effective site search. ``` Select Top 10 ArticleID, ArticleTitle, ArticleContent From Articles Order By (Case When ArticleTitle = 'Article Title' Then 1 Else 0 End) Desc, (Case When ArticleTitle = 'Article' Then 1 Else 0 End) Desc, (Case When ArticleTitle = 'Title' Then 1 Else 0 End) Desc, (Case When Soundex('Article Title') = Soundex(ArticleTitle) Then 1 Else 0 End) Desc, (Case When Soundex('Article') = Soundex(ArticleTitle) Then 1 Else 0 End) Desc, (Case When Soundex('Title') = Soundex(ArticleTitle) Then 1 Else 0 End) Desc, (Case When PatIndex('%Article%Title%', ArticleTitle) > 0 Then 1 Else 0 End) Desc, (Case When PatIndex('%Article%', ArticleTitle) > 0 Then 1 Else 0 End) Desc, (Case When PatIndex('%Title%', ArticleTitle) > 0 Then 1 Else 0 End) Desc, (Case When PatIndex('%Article%Title%', ArticleContent) > 0 Then 1 Else 0 End) Desc, (Case When PatIndex('%Article%', ArticleContent) > 0 Then 1 Else 0 End) Desc, (Case When PatIndex('%Title%', ArticleContent) > 0 Then 1 Else 0 End) Desc ``` You can then add/remove case statements from the order by clause to improve the list based on your data.
First of all you need to define what article similarity means. For example, you can associate some meta information with articles, like tags. To be able to find similar articles you need to extract some features from them; for example, you can build a full-text index. You can take advantage of the full-text search capability of MS SQL 2005: ``` -- Assuming @Title contains the title of the current article, you can find related articles by running this query SELECT * FROM Articles WHERE CONTAINS(ArticleTitle, @Title) ```
"Similar Posts" like functionality using MS SQL Server?
[ "", "sql", "sql-server", "database", "" ]
Having been a PHP developer on LAMP servers for quite a while: is there anything I will need to take into consideration when preparing an application for *IIS* on Windows?
Make sure you get the FastCGI extension for IIS 6.0 or IIS 7.0. It is the single most important thing you can have when running PHP under IIS. Also this article should get you setup: <http://learn.iis.net/page.aspx/247/using-fastcgi-to-host-php-applications-on-iis-60/> Everything beyond this is simple, MySQL and what not.
We just rolled out PHP 5.2.6 + FastCGI on our shared hosting platform without any problems. As long as you follow the steps outlined in [the article Nick linked](https://stackoverflow.com/questions/10515/php-on-iis#10519) to then you should be just fine. My only additional piece of advice would be to forget about using the `fcgiconfig.js` script to modify the fcgiext.ini file, it's more of a hindrance than a help. Just edit it by hand, you also learn more about how it works. If you're installing PHP onto IIS 7 then this link should be worth a read though: > [Using FastCGI to Host PHP Applications on IIS 7](http://learn.iis.net/page.aspx/246/using-fastcgi-to-host-php-applications-on-iis7/)
What do I need to run PHP applications on IIS?
[ "", "php", "windows", "iis", "portability", "lamp", "" ]
Is there any way to check whether a file is locked without using a try/catch block? Right now, the only way I know of is to just open the file and catch any `System.IO.IOException`.
No, unfortunately, and if you think about it, that information would be worthless anyway since the file could become locked the very next second (read: short timespan). Why specifically do you need to know if the file is locked anyway? Knowing that might give us some other way of giving you good advice. If your code would look like this: ``` if not locked then open and update file ``` Then between the two lines, another process could easily lock the file, giving you the same problem you were trying to avoid to begin with: exceptions.
When I faced a similar problem, I ended up with the following code: ``` public class FileManager { private string _fileName; private int _numberOfTries; private int _timeIntervalBetweenTries; private FileStream GetStream(FileAccess fileAccess) { var tries = 0; while (true) { try { return File.Open(_fileName, FileMode.Open, fileAccess, FileShare.None); } catch (IOException e) { if (!IsFileLocked(e)) throw; if (++tries > _numberOfTries) throw new MyCustomException("The file is locked too long: " + e.Message, e); Thread.Sleep(_timeIntervalBetweenTries); } } } private static bool IsFileLocked(IOException exception) { int errorCode = Marshal.GetHRForException(exception) & ((1 << 16) - 1); return errorCode == 32 || errorCode == 33; } // other code } ```
How to check for file lock?
[ "", "c#", ".net", "io", "filelock", "" ]
I wanted some of those spiffy rounded corners for a web project that I'm currently working on. I thought I'd try to accomplish it using JavaScript and not CSS in an effort to keep the requests for image files to a minimum (yes, I know that it's possible to combine all required rounded corner shapes into one image) and I also wanted to be able to change the background color pretty much on the fly. I already utilize jQuery so I looked at the excellent [rounded corners plugin](http://plugins.jquery.com/project/corners) and it worked like a charm in every browser I tried. Being a developer, however, I noticed the opportunity to make it a bit more efficient. The script already includes code for detecting if the current browser supports webkit rounded corners (Safari-based browsers). If so it uses raw CSS instead of creating layers of divs. I thought that it would be awesome if the same kind of check could be performed to see if the browser supports the Gecko-specific `-moz-border-radius-*` properties and if so utilize them. The check for webkit support looks like this: ``` var webkitAvailable = false; try { webkitAvailable = (document.defaultView.getComputedStyle(this[0], null)['-webkit-border-radius'] != undefined); } catch(err) {} ``` That, however, did not work for `-moz-border-radius` so I started checking for alternatives. My fallback solution is, of course, to use browser detection, but that's far from recommended practice. My best solution yet is as follows. ``` var mozborderAvailable = false; try { var o = jQuery('<div>').css('-moz-border-radius', '1px'); mozborderAvailable = $(o).css('-moz-border-radius-topleft') == '1px'; o = null; } catch(err) {} ``` It's based on the theory that Gecko "expands" the composite -moz-border-radius to the four sub-properties * `-moz-border-radius-topleft` * `-moz-border-radius-topright` * `-moz-border-radius-bottomleft` * `-moz-border-radius-bottomright` Is there any JavaScript/CSS guru out there who has a better solution?
(The feature request for this page is at <http://plugins.jquery.com/node/3619>)
How about this? ``` var mozborderAvailable = false; try { if (typeof(document.body.style.MozBorderRadius) !== "undefined") { mozborderAvailable = true; } } catch(err) {} ``` I tested it in Firefox 3 (where it returned true), and it returned false in Safari, IE7, and Opera. (Edit: better undefined test)
Why not use `-moz-border-radius` and `-webkit-border-radius` in the stylesheet? It's valid CSS and throwing an otherwise unused attribute would hurt less than having javascript do the legwork of figuring out if it should apply it or not. Then, in the javascript you'd just check if the browser is IE (or Opera?) - if it is, it'll ignore the proprietary tags, and your javascript could do it's thing. Maybe I'm missing something here...
The best way of checking for -moz-border-radius support
[ "", "javascript", "css", "" ]
In Java 5 and above you have the foreach loop, which works magically on anything that implements `Iterable`: ``` for (Object o : list) { doStuff(o); } ``` However, `Enumeration` still does not implement `Iterable`, meaning that to iterate over an `Enumeration` you must do the following: ``` for(; e.hasMoreElements() ;) { doStuff(e.nextElement()); } ``` Does anyone know if there is a reason why `Enumeration` still does not implement `Iterable`? **Edit:** As a clarification, I'm not talking about the language concept of an [enum](http://en.wikipedia.org/wiki/Enumerated_type), I'm talking about a Java-specific class in the Java API called '[Enumeration](http://java.sun.com/j2se/1.5.0/docs/api/java/util/Enumeration.html)'.
Enumeration hasn't been modified to support Iterable because it's an interface, not a concrete class (like Vector, which was modified to support the Collections interface). If Enumeration were changed to support Iterable, it would break a bunch of people's code.
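Since retrofitting the interface onto `Enumeration` would break existing implementations, a common workaround is to wrap the `Enumeration` in your own `Iterable`. A minimal sketch (the class name here is illustrative, not from the JDK):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Enumeration;
import java.util.Iterator;
import java.util.List;
import java.util.Vector;

// Hypothetical adapter: wraps a legacy Enumeration so it can be used in an
// enhanced for loop. Note the wrapper is one-shot -- iterating it consumes
// the underlying Enumeration.
public class IterableEnumeration<T> implements Iterable<T> {
    private final Enumeration<T> e;

    public IterableEnumeration(Enumeration<T> e) {
        this.e = e;
    }

    public Iterator<T> iterator() {
        return new Iterator<T>() {
            public boolean hasNext() { return e.hasMoreElements(); }
            public T next()          { return e.nextElement(); }
            public void remove()     { throw new UnsupportedOperationException(); }
        };
    }

    public static void main(String[] args) {
        Vector<String> v = new Vector<>(Arrays.asList("a", "b"));
        List<String> out = new ArrayList<>();
        for (String s : new IterableEnumeration<>(v.elements())) {
            out.add(s);
        }
        System.out.println(out); // prints [a, b]
    }
}
```

Because the adapter lazily delegates to the `Enumeration`, it adds no copying overhead, but it can only be iterated once.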
As an easy and **clean** way of using an Enumeration with the enhanced for loop, convert it to an ArrayList with java.util.Collections.list: ``` for (TableColumn col : Collections.list(columnModel.getColumns())) { ``` (javax.swing.table.TableColumnModel.getColumns returns an Enumeration.) Note, this may be very slightly less efficient.
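A minimal, self-contained version of the same trick (a `Vector` stands in for the Swing `TableColumnModel` so the snippet runs on its own; the class and method names are illustrative):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;
import java.util.Vector;

public class EnumerationDemo {
    // Copies the remaining elements of the Enumeration into an ArrayList,
    // which implements Iterable and therefore works with the enhanced for loop.
    static List<String> drain(Enumeration<String> e) {
        return Collections.list(e);
    }

    public static void main(String[] args) {
        Vector<String> v = new Vector<>(Arrays.asList("a", "b", "c"));
        StringBuilder sb = new StringBuilder();
        for (String s : drain(v.elements())) { // enhanced for now works
            sb.append(s);
        }
        System.out.println(sb); // prints abc
    }
}
```

The "slightly less efficient" caveat is the copy: `Collections.list` materializes every element up front rather than iterating lazily.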
Why aren't Enumerations Iterable?
[ "", "java", "enumeration", "iterable", "" ]
I've had a lot of good experiences learning about web development on [w3schools.com](http://www.w3schools.com/). It's hit or miss, I know, but the PHP and CSS sections specifically have proven very useful for reference. Anyway, I was wondering if there was a similar site for [jQuery](http://en.wikipedia.org/wiki/JQuery). I'm interested in learning, but I need it to be online/searchable, so I can refer back to it easily when I need the information in the future. Also, as a brief aside, is jQuery worth learning? Or should I look at different JavaScript libraries? I know Jeff uses jQuery on Stack Overflow and it seems to be working well. Thanks! **Edit**: jQuery's website has a [pretty big list of tutorials](http://docs.jquery.com/Tutorials), and a seemingly comprehensive [documentation page](http://docs.jquery.com/Main_Page). I haven't had time to go through it all yet; has anyone else had experience with it? **Edit 2**: It seems Google is now hosting the jQuery libraries. That should give jQuery a pretty big advantage in terms of publicity. Also, if everyone uses a single unified jQuery library hosted at the same place, it should get cached for most Internet users early on and therefore not impact the download footprint of your site should you decide to use it. ## 2 Months Later... **Edit 3**: I started using jQuery on a project at work recently and it is great to work with! Just wanted to let everyone know that I have concluded it is ***ABSOLUTELY*** worth it to learn and use jQuery. Also, I learned almost entirely from the official jQuery [documentation](http://docs.jquery.com/Main_Page) and [tutorials](http://docs.jquery.com/Tutorials). It's very straightforward. ## 10 Months Later... jQuery is a part of just about every web app I've made since I initially wrote this post. It makes progressive enhancement a breeze, and helps make the code maintainable. Also, all the jQuery plug-ins are an invaluable resource! ## 3 Years Later... 
Still using jQuery just about every day. I now author jQuery plug-ins and consult full time. I'm primarily a Djangonaut but I've done several javascript only contracts with jQuery. It's a life saver. From one jQuery user to another... You should look at [templating with jQuery](http://api.jquery.com/category/plugins/templates/) (or underscore -- see below). Other things I've found valuable in addition to jQuery (with estimated portion of projects I use it on): * [jQuery Form Plugin](http://jquery.malsup.com/form/) (95%) * [jQuery Form Example Plugin](http://mudge.github.com/jquery_example/) (75%) * [jQuery UI](http://jqueryui.com/) (70%) * [Underscore.js](http://documentcloud.github.com/underscore/) (80%) * [CoffeeScript](http://jashkenas.github.com/coffee-script/) (30%) * [Backbone.js](http://documentcloud.github.com/backbone/) (10%)
Rick Strahl and Matt Berseth's blogs both tipped me into jQuery and man am I glad they did. jQuery completely changes a) your client programming perspective, b) the grief it causes you, and c) how much fun it can be! <http://www.west-wind.com/weblog/> <http://mattberseth.com/> I used the book jQuery in Action [http://www.amazon.com/jQuery-Action-Bear-Bibeault/dp/1933988355/ref=sr\_1\_1?ie=UTF8&s=books&qid=1219716122&sr=1-1](https://rads.stackoverflow.com/amzn/click/com/1933988355) (I bought it used at Amazon for about $22). It has been a big help in bootstrapping me into jQuery. The documentation at jquery.com is also very helpful. A place where jQuery falls a little flat is with its UI components. Those don't seem to be quite ready for primetime just yet. It could be that [Prototype](http://en.wikipedia.org/wiki/Prototype_JavaScript_Framework) or [MooTools](http://en.wikipedia.org/wiki/MooTools) or [ExtJS](http://en.wikipedia.org/wiki/Ext_%28JavaScript_library%29) are as good as jQuery. But for me, jQuery seems to have a little more momentum behind it right now and that counts for something for me. Check jQuery out. It is very cool!
A great resource for learning jQuery is: [Learning jQuery](http://www.learningjquery.com). The author, Karl Swedberg, also co-wrote the book titled... ready? Yup, *[Learning jQuery](https://rads.stackoverflow.com/amzn/click/com/1847192505)*. Remy Sharp also has great info geared towards the visual aspects of jQuery on [his blog](http://www.jqueryfordesigners.com/). --SEAN O
Where can I learn jQuery? Is it worth it?
[ "", "javascript", "jquery", "review", "" ]
I have tried both of: ``` ini_set('include_path', '.:/usr/share/php5:/usr/share/php5/PEAR:lib:app/classes'); ``` and also: ``` php_value include_path ".:/usr/share/php5:/usr/share/php5/PEAR:lib:app/classes" ``` in the .htaccess file. Both methods actually **do work**, but only intermittently. That is, they will work fine for about 37 page requests and then fail for about 42 page requests, resulting in a require() call causing a fatal error, effectively crashing the site. I'm not even sure where to begin trying to find out what is going on! --- @[cnote](https://stackoverflow.com/questions/41836/setting-include-path-in-php-intermittently-fails-why#41877) > Looks like you duplicated the current directory in your include path. Try removing one of the '.:' from your string. The in-script version was originally ``` ini_set('include_path', ini_get('include_path') . PATH_SEPARATOR . 'lib' . PATH_SEPARATOR . 'app' . DIRECTORY_SEPARATOR . 'classes'); ``` and thus the .:.: was coming from the existing path: ``` ini_get('include_path') ``` I tried removing it anyway and the problem persists.
It turned out the issue was related to a PHP bug in 5.2.5 Setting an "admin\_flag" for include\_path caused the include path to be empty in some requests, and Plesk sets an admin\_flag in the default config for something or other. An update of PHP solved the issue. <http://bugs.php.net/bug.php?id=43677>
Have you tried [set\_include\_path()](http://www.php.net/manual/en/function.set-include-path.php)?. As a benefit this returns false on failure, allowing you to at least catch the occurence and generate some meaningful debug data. Additionally, you should be using the constant `PATH_SEPARATOR` as it differs between windows / \*nix. As a specific example: ``` set_include_path('.' . PATH_SEPARATOR . './app/lib' . PATH_SEPARATOR . get_include_path()); ``` (the get\_include\_path() on the end means whatever your ini / htaccess path is set to will remain)
Setting include path in PHP intermittently fails
[ "", "php", "include-path", "" ]
When should I include PDB files for a production release? Should I use the `Optimize code` flag and how would that affect the information I get from an exception? If there is a noticeable performance benefit I would want to use the optimizations but if not I'd rather have accurate debugging info. What is typically done for a production app?
When you want to see source filenames and line numbers in your stacktraces, generate PDBs using the pdb-only option. Optimization is separate from PDB generation, i.e. you can optimize *and* generate PDBs without a performance hit. From [the C# Language Reference](http://msdn.microsoft.com/en-us/library/8cw0bt21(VS.80).aspx) > If you use /debug:full, be aware that there is some impact on the speed and size of JIT optimized code and a small impact on code quality with /debug:full. We recommend /debug:pdbonly or no PDB for generating release code.
To answer your first question, you only need to include PDBs for a production release if you need line numbers for your exception reports. To answer your second question, using the "Optimise" flag with PDBs means that any [stack "collapse" will be reflected in the stack trace](https://www.hanselman.com/blog/release-is-not-debug-64bit-optimizations-and-c-method-inlining-in-release-build-call-stacks). I'm not sure whether the actual line number reported can be wrong - this needs more investigation. To answer your third question, you can have the best of both worlds with a rather neat trick. The major differences between the default debug build and default release build are that when doing a default release build, optimization is turned on and debug symbols are not emitted. So, in four steps: 1. Change your release config to emit debug symbols. This has virtually no effect on the performance of your app, and is very useful if (when?) you need to debug a release build of your app. 2. Compile using your new release build config, i.e. *with* debug symbols and *with* optimization. Note that 99% of code optimization is done by the JIT compiler, not the language compiler. 3. Create a text file in your app's folder called xxxx.exe.ini (or dll or whatever), where xxxx is the name of your executable. This text file should initially look like: ``` [.NET Framework Debugging Control] GenerateTrackingInfo=0 AllowOptimize=1 ``` 4. With these settings, your app runs at full speed. When you want to debug your app by turning on debug tracking and possibly turning off (CIL) code optimization, just use the following settings: ``` [.NET Framework Debugging Control] GenerateTrackingInfo=1 AllowOptimize=0 ``` **EDIT** According to cateye's comment, [this can also work in a hosted environment](http://martin.bz/blog/asp-net-mvc-source-debugging-the-easy-way) such as ASP.NET.
PDB files for production app and the "Optimize code" flag
[ "", "c#", "visual-studio", "build-process", "" ]
I was trying to get my head around XAML and thought that I would try writing some code. I'm trying to add a grid with six row and six column definitions, then add a text block into one of the grid cells. I don't seem to be able to reference the cell that I want. There is no method on the grid that I can add the text block to. There is only grid.children.add(object), no Cell definition. XAML: ``` <Page x:Class="WPF_Tester.Page1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Page1" Loaded="Page_Loaded"> </Page> ``` C#: ``` private void Page_Loaded(object sender, RoutedEventArgs e) { //create the structure Grid g = new Grid(); g.ShowGridLines = true; g.Visibility = Visibility.Visible; //add columns for (int i = 0; i < 6; ++i) { ColumnDefinition cd = new ColumnDefinition(); cd.Name = "Column" + i.ToString(); g.ColumnDefinitions.Add(cd); } //add rows for (int i = 0; i < 6; ++i) { RowDefinition rd = new RowDefinition(); rd.Name = "Row" + i.ToString(); g.RowDefinitions.Add(rd); } TextBlock tb = new TextBlock(); tb.Text = "Hello World"; g.Children.Add(tb); } ``` **Update** Here is the spooky bit: * Using VS2008 Pro on XP * WPFbrowser Project Template (3.5 verified) I don't get the methods in autocomplete.
WPF makes use of a funky thing called [attached properties](http://msdn.microsoft.com/en-us/library/ms749011.aspx). So in your XAML you might write this: ``` <TextBlock Grid.Row="0" Grid.Column="0" /> ``` And this will effectively move the TextBlock into cell (0,0) of your grid. In code this looks a little strange. I believe it'd be something like: ``` g.Children.Add(tb); Grid.SetRow(tb, 0); Grid.SetColumn(tb, 0); ``` Have a look at that link above - attached properties make things really easy to do in XAML perhaps at the expense of intuitive-looking code.
The cell location is an attached property - the value belongs to the TextBlock rather than Grid. However, since the property itself belongs to Grid, you need to use either the property definition field or the provided static functions. ``` TextBlock tb = new TextBlock(); // // Locate tb in the second row, third column. // Row and column indices are zero-indexed, so this // equates to row 1, column 2. // Grid.SetRow(tb, 1); Grid.SetColumn(tb, 2); ```
How do I generate WPF controls through code
[ "", "c#", ".net", "wpf", "xaml", "" ]
I want to create a client side mail creator web page. I know the problems of using the mailto action in an html form (not standard, no default mail appication set on the client). But the web page isn't very important, and they don't care very much. The mail created by the mailto action has the syntax: > subject: undefined subject > body: > > param1=value1 > param2=value2 > . > . > . > paramn=valuen Can I use JavaScript to format the mail like this? > Subject:XXXXX > > Body: > Value1;Value2;Value3...ValueN
What we used in a project is a popup window that opens a mailto: link; it is the only way we found to compose a mail in the default mail client that worked with all mail clients (at least all our clients used). ``` var addresses = ""; // the recipient goes between the quotation marks; separate multiple addresses with a ; var body = ""; // write the message text between the quotation marks, or put a variable in their place var subject = ""; // the subject of the message goes between the quotation marks var href = "mailto:" + addresses + "?" + "subject=" + subject + "&" + "body=" + body; var wndMail; wndMail = window.open(href, "_blank", "scrollbars=yes,resizable=yes,width=10,height=10"); if(wndMail) { wndMail.close(); } ```
You more or less have only two alternatives when sending mail via the browser: 1. make a page that takes user input, and allows them to send the mail via your web server. You need some kind of server-side scripting for this. 2. use a mailto: link to trigger opening of the user's registered mail client. This has the obvious pitfalls you mentioned, and is less flexible. It needs less work, though.
Can I use JavaScript to create a client side email?
[ "", "javascript", "email", "" ]
First of all, I know how to build a Java application. But I have always been puzzled about where to put my classes. There are proponents for organizing the packages in a strictly domain-oriented fashion; others separate by tier. I myself have always had problems with * naming, * placing So, 1. Where do you put your domain-specific constants (and what is the best name for such a class)? 2. Where do you put classes for stuff which is both infrastructural and domain-specific (for instance, I have a FileStorageStrategy class, which stores the files either in the file system, or alternatively in the database)? 3. Where to put Exceptions? 4. Are there any standards to which I can refer?
I've really come to like Maven's [Standard Directory Layout](http://maven.apache.org/guides/introduction/introduction-to-the-standard-directory-layout.html "S"). One of the key ideas for me is to have two source roots - one for production code and one for test code like so: ``` MyProject/src/main/java/com/acme/Widget.java MyProject/src/test/java/com/acme/WidgetTest.java ``` (here, both src/main/java and src/test/java are source roots). Advantages: * Your tests have package (or "default") level access to your classes under test. * You can easily package only your production sources into a JAR by dropping src/test/java as a source root. One rule of thumb about class placement and packages: Generally speaking, well structured projects will be free of [circular dependencies](http://en.wikipedia.org/wiki/Circular_dependency). Learn when they are bad (and when they are [not](http://beust.com/weblog/archives/000208.html)), and consider a tool like [JDepend](http://www.google.ca/search?q=JDepend&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a) or [SonarJ](http://www.hello2morrow.com/products/sonargraph) that will help you eliminate them.
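The "package-level access" point can be sketched in code. The names below are illustrative (not from the answer): imagine `Widget.java` under src/main/java and `WidgetTest.java` under src/test/java, both declaring the same package. They are shown here in one file, in the default package, so the snippet compiles standalone.

```java
// A production class with a package-private member -- no access modifier.
class Widget {
    String internalLabel() { return "widget-1"; } // package-private
}

public class WidgetTest {
    static String probe() {
        // Legal only because WidgetTest shares Widget's package; code in any
        // other package could not call internalLabel() at all.
        return new Widget().internalLabel();
    }

    public static void main(String[] args) {
        System.out.println(probe()); // prints widget-1
    }
}
```

Because both source roots compile into the same package namespace, tests can exercise internals without forcing those members to be public.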
I'm a huge fan of organized sources, so I always create the following directory structure: ``` /src - for your packages & classes /test - for unit tests /docs - for documentation, generated and manually edited /lib - 3rd party libraries /etc - unrelated stuff /bin (or /classes) - compiled classes, output of your compile /dist - for distribution packages, hopefully auto generated by a build system ``` In /src I'm using the default Java patterns: Package names starting with your domain (org.yourdomain.yourprojectname) and class names reflecting the OOP aspect you're creating with the class (see the other commenters). Common package names like *util*, *model*, *view*, *events* are useful, too. I tend to put constants for a specific topic in an own class, like *SessionConstants* or *ServiceConstants* in the same package of the domain classes.
How should I structure a Java application, where do I put my classes?
[ "", "java", "architecture", "" ]
Is there any free or commercial component written in .NET (no COM interop) that will work with most twain scanners?
In my company we use [Pegasus](http://www.pegasusimaging.com). It's great.
> ### [TwainDotNet](https://github.com/tmyroadctfig/twaindotnet) I've just wrapped up the code from Thomas Scheidegger's article ([CodeProject: .NET TWAIN image scanning](http://www.codeproject.com/KB/dotnet/twaindotnet.aspx)) into a Google code project: [http://code.google.com/p/twaindotnet/](https://github.com/tmyroadctfig/twaindotnet) I've cleaned up the API a bit and added WPF support, so check it out. :)
.NET Scanning API
[ "", "c#", ".net", "twain", "scanning", "" ]
We have a requirement in project to store all the revisions(Change History) for the entities in the database. Currently we have 2 designed proposals for this: e.g. for "Employee" Entity **Design 1:** ``` -- Holds Employee Entity "Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)" -- Holds the Employee Revisions in Xml. The RevisionXML will contain -- all data of that particular EmployeeId "EmployeeHistories (EmployeeId, DateModified, RevisionXML)" ``` **Design 2:** ``` -- Holds Employee Entity "Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)" -- In this approach we have basically duplicated all the fields on Employees -- in the EmployeeHistories and storing the revision data. "EmployeeHistories (EmployeeId, RevisionId, DateModified, FirstName, LastName, DepartmentId, .., ..)" ``` Is there any other way of doing this thing? The problem with the "Design 1" is that we have to parse XML each time when you need to access data. This will slow the process and also add some limitations like we cannot add joins on the revisions data fields. And the problem with the "Design 2" is that we have to duplicate each and every field on all entities (We have around 70-80 entities for which we want to maintain revisions).
1. Do **not** put it all in one table with an IsCurrent discriminator attribute. This just causes problems down the line, requires surrogate keys and all sorts of other problems.
2. Design 2 does have problems with schema changes. If you change the Employees table you have to change the EmployeeHistories table and all the related sprocs that go with it. Potentially doubles your schema change effort.
3. Design 1 works well and if done properly does not cost much in terms of a performance hit. You could use an xml schema and even indexes to get over possible performance problems. Your comment about parsing the xml is valid but you could easily create a view using xquery - which you can include in queries and join to. Something like this...

```
CREATE VIEW EmployeeHistory
AS
SELECT EmployeeId,
       RevisionXML.value('(/employee/FirstName)[1]', 'varchar(50)') AS FirstName,
       RevisionXML.value('(/employee/LastName)[1]', 'varchar(100)') AS LastName,
       RevisionXML.value('(/employee/DepartmentId)[1]', 'integer') AS DepartmentId
FROM EmployeeHistories
```
I think the key question to ask here is 'Who / What is going to be using the history'? If it's going to be mostly for reporting / human readable history, we've implemented this scheme in the past... Create a table called 'AuditTrail' or something that has the following fields... ``` [ID] [int] IDENTITY(1,1) NOT NULL, [UserID] [int] NULL, [EventDate] [datetime] NOT NULL, [TableName] [varchar](50) NOT NULL, [RecordID] [varchar](20) NOT NULL, [FieldName] [varchar](50) NULL, [OldValue] [varchar](5000) NULL, [NewValue] [varchar](5000) NULL ``` You can then add a 'LastUpdatedByUserID' column to all of your tables which should be set every time you do an update / insert on the table. You can then add a trigger to every table to catch any insert / update that happens and creates an entry in this table for each field that's changed. Because the table is also being supplied with the 'LastUpdateByUserID' for each update / insert, you can access this value in the trigger and use it when adding to the audit table. We use the RecordID field to store the value of the key field of the table being updated. If it's a combined key, we just do a string concatenation with a '~' between the fields. I'm sure this system may have drawbacks - for heavily updated databases the performance may be hit, but for my web-app, we get many more reads than writes and it seems to be performing pretty well. We even wrote a little VB.NET utility to automatically write the triggers based on the table definitions. Just a thought!
Database Design for Revisions?
[ "", "sql", "database", "database-design", "versioning", "" ]
While the C# spec does include a pre-processor and basic directives (#define, #if, etc), the language does not have the same flexible pre-processor found in languages such as C/C++. I believe the lack of such a flexible pre-processor was a design decision made by Anders Hejlsberg (although, unfortunately, I can't find reference to this now). From experience, this is certainly a good decision, as there were some really terrible un-maintainable macros created back when I was doing a lot of C/C++. That said, there are a number of scenarios where I could find a slightly more flexible pre-processor to be useful. Code such as the following could be improved by some simple pre-processor directives: ``` public string MyProperty { get { return _myProperty; } set { if (value != _myProperty) { _myProperty = value; NotifyPropertyChanged("MyProperty"); // This line above could be improved by replacing the literal string with // a pre-processor directive like "#Property", which could be translated // to the string value "MyProperty" This new notify call would be as follows: // NotifyPropertyChanged(#Property); } } } ``` Would it be a good idea to write a pre-processor to handle extremely simple cases like this? Steve McConnell wrote in [Code Complete](https://rads.stackoverflow.com/amzn/click/com/0735619670) (p208): > *Write your own preprocessor* If a language doesn't include a preprocessor, it's fairly easy to write one... I am torn. It was a design decision to leave such a flexible pre-processor out of C#. However, an author I highly respect mentions it may be ok in some circumstances. **Should I build a C# pre-processor? Is there one available that does the simple things I want to do?**
Consider taking a look at an aspect-oriented solution like [PostSharp](http://www.postsharp.org/), which injects code after the fact based on custom attributes. It's the opposite of a precompiler but can give you the sort of functionality you're looking for (PropertyChanged notifications etc).
> Should I build a C# pre-processor? Is there one available that does the simple things I want to do? You can always use the C pre-processor -- C# is close enough, syntax-wise. M4 is also an option.
C# Preprocessor
[ "", "c#", "c-preprocessor", "" ]
I'm having issues getting the C sockets API to work properly in C++ on z/OS. Although I am including `sys/socket.h`, I still get compile-time errors telling me that `AF_INET` is not defined. Am I missing something obvious, or is this related to the fact that being on z/OS makes my problems much more complicated? I discovered that there is an `#ifdef` that I'm hitting. Apparently, z/OS isn't happy unless I define which "type" of sockets I'm using with:

```
#define _OE_SOCKETS
```

Now, I personally have no idea what this `_OE_SOCKETS` is actually for, so if any z/OS sockets programmers are out there (all 3 of you), perhaps you could give me a rundown of how this all works?

Test App:

```
#include <sys/socket.h>

int main()
{
    return AF_INET;
}
```

Compile/Link Output:

```
cxx -Wc,xplink -Wl,xplink -o inet_test inet.C

"./inet.C", line 5.16: CCN5274 (S) The name lookup for "AF_INET" did not find a declaration.
CCN0797(I) Compilation failed for file ./inet.C. Object file not created.
```

A check of `sys/socket.h` does include the definition I need, and as far as I can tell, it is not being blocked by any `#ifdef` statements. I have however noticed it contains the following:

```
#ifdef __cplusplus
  extern "C" {
#endif
```

which encapsulates basically the whole file? Not sure if it matters.
Keep a copy of the IBM manuals handy:

* [z/OS V1R11.0 XL C/C++ Programming Guide](http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/CBCPG1A0/CCONTENTS?DT=20090606065305)
* [z/OS V1R11.0 XL C/C++ Run-Time Library Reference](http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/EDCLB1A0/CCONTENTS?DT=20090607203246)

The IBM publications are generally very good, but you need to get used to their format, as well as knowing where to look for an answer. You'll find quite often that a feature that you want to use is guarded by a "feature test macro".

You should ask your friendly system programmer to install the [XL C/C++ Run-Time Library Reference: Man Pages](http://www-03.ibm.com/servers/eserver/zseries/zos/le/manpgs.html) on your system. Then you can do things like "man connect" to pull up the man page for the socket connect() API. When I do that, this is what I see:

FORMAT

X/Open

```
#define _XOPEN_SOURCE_EXTENDED 1
#include <sys/socket.h>

int connect(int socket, const struct sockaddr *address, socklen_t address_len);
```

Berkeley Sockets

```
#define _OE_SOCKETS
#include <sys/types.h>
#include <sys/socket.h>

int connect(int socket, struct sockaddr *address, int address_len);
```
I've had no trouble using the BSD sockets API in C++, in GNU/Linux. Here's the sample program I used: ``` #include <sys/socket.h> int main() { return AF_INET; } ``` So my take on this is that z/OS is probably the complicating factor here, however, because I've never used z/OS before, much less programmed in it, I can't say this definitively. :-P
How to use the C socket API in C++ on z/OS
[ "", "c++", "c", "sockets", "mainframe", "zos", "" ]
I am looking for a (preferably) command-line tool that can reformat the C# source code on a directory tree. Ideally, I should be able to customize the formatting. Bonus points if the tool can be run on [Mono](https://en.wikipedia.org/wiki/Mono_%28software%29) (or Linux).
You could also try [NArrange](http://www.narrange.net) to reformat your code. The formatting options it supports are still pretty limited, but it can process an entire directory and is a command-line tool. Also, NArrange runs under Mono.
You could give [Artistic Style](http://astyle.sourceforge.net/) a try. It requires [Perl](https://en.wikipedia.org/wiki/Perl) to be installed though. It's got a decent list of formatting options, and supports C and Java as well.
Is there a tool for reformatting C# code?
[ "", "c#", "code-formatting", "" ]
I'm about to start a fairly Ajax heavy feature in my company's application. What I need to do is make an Ajax callback every few minutes a user has been on the page. * I don't need to do any DOM updates before, after, or during the callbacks. * I don't need any information from the page, just from a site cookie which should always be sent with requests anyway, and an ID value. What I'm curious to find out, is if there is any clean and simple way to make a JavaScript Ajax callback to an ASP.NET page without posting back the rest of the information on the page. I'd like to not have to do this if it is possible. I really just want to be able to call a single method on the page, nothing else. Also, I'm restricted to ASP.NET 2.0 so I can't use any of the new 3.5 framework ASP AJAX features, although I can use the ASP AJAX extensions for the 2.0 framework. **UPDATE** I've decided to accept [DanP](https://stackoverflow.com/questions/27258/aspnet-javascript-callbacks-without-full-postbacks#27270)'s answer as it seems to be exactly what I'm looking for. Our site already uses jQuery for some things so I'll probably use jQuery for making requests since in my experience it seems to perform much better than ASP's AJAX framework does. What do you think would be the best method of transferring data to the IHttpHandler? Should I add variables to the query string or POST the data I need to send? The only thing I think I have to send is a single ID, but I can't decide what the best method is to send the ID and have the IHttpHandler handle it. I'd like to come up with a solution that would prevent a person with basic computer skills from accidentally or intentionally accessing the page directly or repeating requests. Is this possible?
If you don't want to create a blank page, you could call a IHttpHandler (ashx) file: ``` public class RSSHandler : IHttpHandler { public void ProcessRequest (HttpContext context) { context.Response.ContentType = "text/xml"; string sXml = BuildXMLString(); //not showing this function, //but it creates the XML string context.Response.Write( sXml ); } public bool IsReusable { get { return true; } } } ```
You should use ASP.Net Callbacks which were introduced in Asp.Net 2.0. Here is an article that should get you set to go: [Implementing Client Callbacks Programmatically Without Postbacks in ASP.NET Web Pages](http://msdn.microsoft.com/en-us/library/ms178208.aspx) Edit: Also look at this: [ICallback & JSON Based JavaScript Serialization](http://aspalliance.com/1537)
ASP.NET JavaScript Callbacks Without Full PostBacks?
[ "", "asp.net", "javascript", "ajax", "" ]
I've heard rumors that PHP is planning on introducing a "goto" command. What is it supposed to be doing? I've tried searching a bit, but haven't found anything awfully descriptive. I understand that it won't be a "`GOTO 10`"-like command...
They are not adding a real GOTO, but extending the BREAK keyword to use static labels. Basically, it will be enhancing the ability to break out of ~~switch~~ nested if statements. Here's the concept example I found:

```
<?php
for ($i = 0; $i < 9; $i++) {
    if (true) {
        break blah;
    }
    echo "not shown";
    blah:
    echo "iteration $i\n";
}
?>
```

Of course, once the GOTO "rumor" was out, there was nothing to stop some evil guys from propagating an additional **COMEFROM** joke. Be on your toes. See also: <http://www.php.net/~derick/meeting-notes.html#adding-goto>
I'm always astonished at how incredibly dumb the PHP designers are. If the purpose of using GOTOs is to make breaking out of multiply nested loops more efficient, there's a better way: labelled code blocks and break statements that can reference labels:

```
a:
for (...) {
    b:
    for (...) {
        c:
        for (...) {
            ...
            break a;
        }
    }
}
```

Now it is clear which loop/block to exit, and the exit is structured; you can't get spaghetti code with this like you can with real gotos.

This is an old, old, old idea. Designing good control flow management structures has been solved since the 70s, and the literature on all this is long since written up. The Bohm-Jacopini theorem showed that you could code anything with function call, if-then-else, and while loops. In practice, to break out of deeply nested blocks, Bohm-Jacopini style coding required extra boolean flags ("set this flag to get out of the loop"), which was clumsy coding-wise and inefficient (you don't want such flags in your inner loop). With if-then-else, various loops (while, for) and break-to-labelled block, you can code any algorithm with no loss in efficiency. Why don't people read the literature, instead of copying what C did? Grrr.
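For reference, the labelled break described above exists verbatim in Java, so the idea is easy to try out. A minimal runnable sketch (class and method names are invented for illustration):

```java
class LabelledBreak {
    // Search a matrix for a target; the labelled break exits BOTH loops at once,
    // with no boolean flag threaded through the inner loop.
    static int[] find(int[][] m, int target) {
        int[] result = {-1, -1};
        outer:
        for (int i = 0; i < m.length; i++) {
            for (int j = 0; j < m[i].length; j++) {
                if (m[i][j] == target) {
                    result = new int[] {i, j};
                    break outer; // structured multi-level exit
                }
            }
        }
        return result;
    }
}
```

The label names the loop to leave, so control flow stays structured: you can only jump out of an enclosing block, never into one.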
GOTO command in PHP?
[ "", "php", "language-features", "goto", "" ]
How can I monitor an SQL Server database for changes to a table without using triggers or modifying the structure of the database in any way? My preferred programming environment is [.NET](http://en.wikipedia.org/wiki/.NET_Framework) and C#. I'd like to be able to support any [SQL Server 2000](http://en.wikipedia.org/wiki/Microsoft_SQL_Server#Genesis) SP4 or newer. My application is a bolt-on data visualization for another company's product. Our customer base is in the thousands, so I don't want to have to put in requirements that we modify the third-party vendor's table at every installation. By *"changes to a table"* I mean changes to table data, not changes to table structure. Ultimately, I would like the change to trigger an event in my application, instead of having to check for changes at an interval. --- The best course of action given my requirements (no triggers or schema modification, SQL Server 2000 and 2005) seems to be to use the `BINARY_CHECKSUM` function in [T-SQL](http://en.wikipedia.org/wiki/Transact-SQL). The way I plan to implement is this: Every X seconds run the following query: ``` SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM sample_table WITH (NOLOCK); ``` And compare that against the stored value. If the value has changed, go through the table row by row using the query: ``` SELECT row_id, BINARY_CHECKSUM(*) FROM sample_table WITH (NOLOCK); ``` And compare the returned checksums against stored values.
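The compare-checksum-against-stored-value loop planned above is easy to factor out and test on its own. A hypothetical sketch (written in Java here, with the checksum query hidden behind a supplier so it is not tied to any particular database driver; all names are invented for illustration):

```java
import java.util.function.LongSupplier;

class ChecksumPoller {
    private long lastChecksum;
    private final LongSupplier checksumQuery; // e.g. runs SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) ...
    private final Runnable onChange;          // the "event" fired when the table data changed

    ChecksumPoller(LongSupplier checksumQuery, Runnable onChange) {
        this.checksumQuery = checksumQuery;
        this.onChange = onChange;
        this.lastChecksum = checksumQuery.getAsLong(); // baseline value
    }

    // Call this every X seconds from a timer; fires the event only when the value moved.
    boolean poll() {
        long current = checksumQuery.getAsLong();
        if (current != lastChecksum) {
            lastChecksum = current;
            onChange.run();
            return true;
        }
        return false;
    }
}
```

A real implementation would back the supplier with the `CHECKSUM_AGG(BINARY_CHECKSUM(*))` query and, on change, fall back to the row-by-row checksum comparison to find which rows moved.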
Take a look at the CHECKSUM command: ``` SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM sample_table WITH (NOLOCK); ``` That will return the same number each time it's run as long as the table contents haven't changed. See my post on this for more information: [CHECKSUM](http://msdn.microsoft.com/en-us/library/aa258245(SQL.80).aspx) Here's how I used it to rebuild cache dependencies when tables changed: [ASP.NET 1.1 database cache dependency (without triggers)](http://weblogs.asp.net/jgalloway/archive/2005/05/07/406056.aspx)
**Unfortunately CHECKSUM does not always work properly to detect changes**. It is only a primitive checksum and no cyclic redundancy check (CRC) calculation. Therefore you can't use it to detect all changes, e. g. symmetrical changes result in the same CHECKSUM! E. g. the solution with `CHECKSUM_AGG(BINARY_CHECKSUM(*))` will always deliver 0 for all 3 tables with different content:

```
SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM
(
  SELECT 1 as numA, 1 as numB
  UNION ALL
  SELECT 1 as numA, 1 as numB
) q -- delivers 0!

SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM
(
  SELECT 1 as numA, 2 as numB
  UNION ALL
  SELECT 1 as numA, 2 as numB
) q -- delivers 0!

SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM
(
  SELECT 0 as numA, 0 as numB
  UNION ALL
  SELECT 0 as numA, 0 as numB
) q -- delivers 0!
```
Check for changes to an SQL Server table?
[ "", "sql", "sql-server", "datatable", "rdbms", "" ]
I am aware that in [.NET](http://en.wikipedia.org/wiki/.NET_Framework) there are three timer types (see *[Comparing the Timer Classes in the .NET Framework Class Library](http://msdn.microsoft.com/en-us/magazine/cc164015.aspx)*). I have chosen a threaded timer as the other types can drift if the main thread is busy, and I need this to be reliable. The way this timer works is that control of the timer is put on another thread, so it can always tick along while work is being completed on the parent thread when it is not busy. The issue with this timer in a console application is that while the timer is ticking along on another thread, the main thread is not doing anything, so the application closes. I tried adding a `while true` loop, but then the main thread is too busy when the timer does go off.
You can use something like `Console.ReadLine()` to block the main thread, so other background threads (like timer threads) will still work. You may also use an [AutoResetEvent](https://learn.microsoft.com/en-us/dotnet/api/system.threading.autoresetevent) to block the execution; then (when you need to) you can call the Set() method on that AutoResetEvent object to release the main thread. Also ensure that your reference to the Timer object doesn't go out of scope and get garbage collected.
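As a cross-language illustration of the same idea (a hypothetical Java analog, not the .NET API): a latch plays the role given to `AutoResetEvent` above, parking the main thread while a background scheduler keeps ticking, then releasing it once the work is done:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class BlockUntilDone {
    // Blocks the calling thread until the background timer has ticked n times.
    static int runTicks(int n) {
        CountDownLatch done = new CountDownLatch(n);
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(done::countDown, 0, 10, TimeUnit.MILLISECONDS);
        try {
            done.await(5, TimeUnit.SECONDS); // main thread parks here instead of exiting
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        timer.shutdownNow();
        return n - (int) done.getCount(); // ticks actually observed
    }
}
```

The main thread does no busy-waiting, so the timer thread is never starved, which is the whole point of blocking instead of spinning in a `while true` loop.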
Consider using a [ManualResetEvent](https://learn.microsoft.com/en-us/dotnet/api/system.threading.manualresetevent) to block the main thread at the end of its processing, and call `Set()` on it once the timer's processing has finished. If this is something that needs to run continuously, consider moving this into a service process instead of a console app.
Reliable timer in a console application
[ "", "c#", ".net", "vb.net", "timer", "" ]
I'm trying to wrap my head around reflection, so I decided to add plugin capability to a program that I'm writing. The only way to understand a concept is to get your fingers dirty and write the code, so I went the route of creating a simple interface library consisting of the IPlugin and IHost interfaces, a plugin implementation library of classes that implement IPlugin, and a simple console project that instantiates the IHost implementation class that does simple work with the plugin objects. Using reflection, I wanted to iterate through the types contained inside my plugin implementation dll and create instances of types. I was able to successfully instantiate classes with this code, but I could not cast the created object to the interface. I tried this code but I couldn't cast object o as I expected. I stepped through the process with the debugger and the proper constructor was called. Quickwatching object o showed me that it had the fields and properties that I expected to see in the implementation class.

```
loop through assemblies
loop through types in assembly
// Filter out unwanted types
if (!type.IsClass || type.IsNotPublic || type.IsAbstract )
    continue;
// This successfully created the right object
object o = Activator.CreateInstance(type);
// This threw an Invalid Cast Exception or returned null for an "as" cast
// even though the object implemented IPlugin 
IPlugin i = (IPlugin) o;
```

I made the code work with this.

```
using System.Runtime.Remoting;
ObjectHandle oh = Activator.CreateInstance(assembly.FullName, type.FullName);
// This worked as I intended
IPlugin i = (IPlugin) oh.Unwrap();
i.DoStuff();
```

Here are my questions:

1. Activator.CreateInstance(Type t) returns an object, but I couldn't cast the object to an interface that the object implemented. Why?
2. Should I have been using a different overload of CreateInstance()?
3. What are the reflection related tips and tricks?
4. Is there some crucial part of reflection that I'm just not getting?
I'm just guessing here, because from your code it's not obvious where you have the definition of the IPlugin interface, but if you can't cast in your host application then you probably have the IPlugin interface in your host assembly and, at the same time, in your plugin assembly. This won't work. The easiest way to make this work is to have the IPlugin interface marked as public in your host assembly and then have your plugin assembly **reference the host application assembly**, so both assemblies have access to **the very same interface**.
hmmm... If you are using Assembly.LoadFrom to load your assembly, try changing it to Assembly.LoadFile instead. Worked for me. From here: <http://www.eggheadcafe.com/community/aspnet/2/10036776/solution-found.aspx>
How to properly cast objects created through reflection
[ "", "c#", ".net", "reflection", "" ]
I'm trying to create a bookmarklet for posting del.icio.us bookmarks to a separate account. I tested it from the command line like: ``` wget -O - --no-check-certificate \ "https://seconduser:thepassword@api.del.icio.us/v1/posts/add?url=http://seet.dk&description=test" ``` This works great. I then wanted to create a bookmarklet in my firefox. I googled and found bits and pieces and ended up with: ``` javascript:void( open('https://seconduser:password@api.del.icio.us/v1/posts/add?url=' +encodeURIComponent(location.href) +'&description='+encodeURIComponent(document.title), 'delicious','toolbar=no,width=500,height=250' ) ); ``` But all that happens is that I get this from del.icio.us: ``` <?xml version="1.0" standalone="yes"?> <result code="access denied" /> <!-- fe04.api.del.ac4.yahoo.net uncompressed/chunked Thu Aug 7 02:02:54 PDT 2008 --> ``` If I then go to the address bar and press enter, it changes to: ``` <?xml version='1.0' standalone='yes'?> <result code="done" /> <!-- fe02.api.del.ac4.yahoo.net uncompressed/chunked Thu Aug 7 02:07:45 PDT 2008 --> ``` Any ideas how to get it to work directly from the bookmarks?
Can you sniff the traffic to find what's actually being sent? Is it sending any auth data at all and it's incorrect or being presented in a form the server doesn't like, or is it never being sent by firefox at all?
@travis Looks very nice! I will surely take a look into it. I can think of several places I can use that. I never got round to sniffing the traffic, but found out that a PHP site on my own server with HTTP auth worked fine, so I figured it was something with delicious. I then created a PHP page that does a wget of the delicious API and everything works fine :)
Http Auth in a Firefox 3 bookmarklet
[ "", "javascript", "firefox", "delicious-api", "" ]
I am starting a new web application in PHP and this time around I want to create something that people can extend by using a plugin interface. How does one go about writing 'hooks' into their code so that plugins can attach to specific events?
You could use an Observer pattern. A simple functional way to accomplish this:

```
<?php

/** Plugin system **/

$listeners = array();

/* Create an entry point for plugins */
function hook() {
    global $listeners;

    $num_args = func_num_args();
    $args = func_get_args();

    if($num_args < 2)
        trigger_error("Insufficient arguments", E_USER_ERROR);

    // Hook name should always be first argument
    $hook_name = array_shift($args);

    if(!isset($listeners[$hook_name]))
        return $args; // No plugins have registered this hook; pass args through

    foreach($listeners[$hook_name] as $func) {
        $args = $func($args);
    }
    return $args;
}

/* Attach a function to a hook */
function add_listener($hook, $function_name) {
    global $listeners;
    $listeners[$hook][] = $function_name;
}

/////////////////////////

/** Sample Plugin **/
add_listener('a_b', 'my_plugin_func1');
add_listener('str', 'my_plugin_func2');

function my_plugin_func1($args) {
    return array(4, 5);
}

function my_plugin_func2($args) {
    return str_replace('sample', 'CRAZY', $args[0]);
}

/////////////////////////

/** Sample Application **/

$a = 1;
$b = 2;

list($a, $b) = hook('a_b', $a, $b);

$str  = "This is my sample application\n";
$str .= "$a + $b = ".($a+$b)."\n";
$str .= "$a * $b = ".($a*$b)."\n";

$str = hook('str', $str);

echo $str;
?>
```

**Output:**

```
This is my CRAZY application
4 + 5 = 9
4 * 5 = 20
```

**Notes:**

For this example source code, you must declare all your plugins before the actual source code that you want to be extendable. I've included an example of how to handle single or multiple values being passed to the plugin. The hardest part of this is writing the actual documentation which lists what arguments get passed to each hook.

This is just one method of accomplishing a plugin system in PHP. There are better alternatives; I suggest you check out the WordPress Documentation for more information.
So let's say you don't want the Observer pattern because it requires that you change your class methods to handle the task of listening, and want something generic. And let's say you don't want to use `extends` inheritance because you may already be inheriting in your class from some other class. Wouldn't it be great to have a generic way to make *any class pluggable without much effort*? Here's how: ``` <?php //////////////////// // PART 1 //////////////////// class Plugin { private $_RefObject; private $_Class = ''; public function __construct(&$RefObject) { $this->_Class = get_class(&$RefObject); $this->_RefObject = $RefObject; } public function __set($sProperty,$mixed) { $sPlugin = $this->_Class . '_' . $sProperty . '_setEvent'; if (is_callable($sPlugin)) { $mixed = call_user_func_array($sPlugin, $mixed); } $this->_RefObject->$sProperty = $mixed; } public function __get($sProperty) { $asItems = (array) $this->_RefObject; $mixed = $asItems[$sProperty]; $sPlugin = $this->_Class . '_' . $sProperty . '_getEvent'; if (is_callable($sPlugin)) { $mixed = call_user_func_array($sPlugin, $mixed); } return $mixed; } public function __call($sMethod,$mixed) { $sPlugin = $this->_Class . '_' . $sMethod . '_beforeEvent'; if (is_callable($sPlugin)) { $mixed = call_user_func_array($sPlugin, $mixed); } if ($mixed != 'BLOCK_EVENT') { call_user_func_array(array(&$this->_RefObject, $sMethod), $mixed); $sPlugin = $this->_Class . '_' . $sMethod . '_afterEvent'; if (is_callable($sPlugin)) { call_user_func_array($sPlugin, $mixed); } } } } //end class Plugin class Pluggable extends Plugin { } //end class Pluggable //////////////////// // PART 2 //////////////////// class Dog { public $Name = ''; public function bark(&$sHow) { echo "$sHow<br />\n"; } public function sayName() { echo "<br />\nMy Name is: " . $this->Name . 
"<br />\n"; } } //end class Dog $Dog = new Dog(); //////////////////// // PART 3 //////////////////// $PDog = new Pluggable($Dog); function Dog_bark_beforeEvent(&$mixed) { $mixed = 'Woof'; // Override saying 'meow' with 'Woof' //$mixed = 'BLOCK_EVENT'; // if you want to block the event return $mixed; } function Dog_bark_afterEvent(&$mixed) { echo $mixed; // show the override } function Dog_Name_setEvent(&$mixed) { $mixed = 'Coco'; // override 'Fido' with 'Coco' return $mixed; } function Dog_Name_getEvent(&$mixed) { $mixed = 'Different'; // override 'Coco' with 'Different' return $mixed; } //////////////////// // PART 4 //////////////////// $PDog->Name = 'Fido'; $PDog->Bark('meow'); $PDog->SayName(); echo 'My New Name is: ' . $PDog->Name; ``` In Part 1, that's what you might include with a `require_once()` call at the top of your PHP script. It loads the classes to make something pluggable. In Part 2, that's where we load a class. Note I didn't have to do anything special to the class, which is significantly different than the Observer pattern. In Part 3, that's where we switch our class around into being "pluggable" (that is, supports plugins that let us override class methods and properties). So, for instance, if you have a web app, you might have a plugin registry, and you could activate plugins here. Notice also the `Dog_bark_beforeEvent()` function. If I set `$mixed = 'BLOCK_EVENT'` before the return statement, it will block the dog from barking and would also block the Dog\_bark\_afterEvent because there wouldn't be any event. In Part 4, that's the normal operation code, but notice that what you might think would run does not run like that at all. For instance, the dog does not announce it's name as 'Fido', but 'Coco'. The dog does not say 'meow', but 'Woof'. And when you want to look at the dog's name afterwards, you find it is 'Different' instead of 'Coco'. All those overrides were provided in Part 3. So how does this work? 
Well, let's rule out `eval()` (which everyone says is "evil") and rule out that it's not an Observer pattern. So, the way it works is the sneaky empty class called Pluggable, which does not contain the methods and properties used by the Dog class. Thus, since that occurs, the magic methods will engage for us. That's why in parts 3 and 4 we mess with the object derived from the Pluggable class, not the Dog class itself. Instead, we let the Plugin class do the "touching" on the Dog object for us. (If that's some kind of design pattern I don't know about -- please let me know.)
Best way to allow plugins for a PHP application
[ "", "php", "plugins", "architecture", "hook", "" ]
I'm writing an app that will need to make use of `Timer`s, but potentially very many of them. How scalable is the `System.Threading.Timer` class? The documentation merely say it's "lightweight", but doesn't explain further. Do these timers get sucked into a single thread (or very small threadpool) that processes all the callbacks on behalf of a `Timer`, or does each `Timer` have its own thread? I guess another way to rephrase the question is: How is `System.Threading.Timer` implemented?
I say this in response to a lot of questions: Don't forget that the (managed) source code to the framework is available. You can use this tool to get it all: <http://www.codeplex.com/NetMassDownloader>

Unfortunately, in this specific case, a lot of the implementation is in native code, so you don't get to look at it...

They definitely use pool threads rather than a thread-per-timer, though.

The standard way to implement a big collection of timers (which is how the kernel does it internally, and I would suspect is indirectly how your big collection of Timers ends up) is to maintain the list sorted by time-until-expiry - so the system only ever has to worry about checking the next timer which is going to expire, not the whole list. Roughly, this gives O(log n) for starting a timer and O(1) for processing running timers.

Edit: Just been looking in Jeff Richter's book. He says (of Threading.Timer) that it uses a single thread for all Timer objects, this thread knows when the next timer (i.e. as above) is due and calls ThreadPool.QueueUserWorkItem for the callbacks as appropriate. This has the effect that if you don't finish servicing one callback on a timer before the next is due, that your callback will reenter on another pool thread. So in summary I doubt you'll see a big problem with having lots of timers, but you might suffer thread pool exhaustion if large numbers of them are firing at the same time and/or their callbacks are slow-running.
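The "list sorted by time-until-expiry" described above is typically a min-heap in practice. A hypothetical sketch of the dispatcher's data structure (written in Java here, since the .NET internals are native code; names are invented for illustration): starting a timer is O(log n), and checking whether anything is due only ever looks at the head of the heap:

```java
import java.util.PriorityQueue;

class TimerQueue {
    static class Entry implements Comparable<Entry> {
        final long dueAt;           // absolute expiry time
        final Runnable callback;
        Entry(long dueAt, Runnable callback) { this.dueAt = dueAt; this.callback = callback; }
        @Override
        public int compareTo(Entry o) { return Long.compare(dueAt, o.dueAt); }
    }

    private final PriorityQueue<Entry> heap = new PriorityQueue<>();

    // O(log n): the heap keeps entries ordered by expiry time.
    void schedule(long dueAt, Runnable callback) { heap.add(new Entry(dueAt, callback)); }

    // The dispatcher only ever inspects the head of the heap (O(1) peek).
    int runDue(long now) {
        int fired = 0;
        while (!heap.isEmpty() && heap.peek().dueAt <= now) {
            heap.poll().callback.run(); // a real dispatcher hands this to a pool thread
            fired++;
        }
        return fired;
    }
}
```

A real dispatcher thread would sleep until the head entry's deadline and queue expired callbacks to a thread pool, as Richter describes.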
I think you might want to rethink your design (that is, if you have control over the design yourself). If you're using so many timers that this is actually a concern for you, there's clearly some potential for consolidation there. Here's a good article from MSDN Magazine from a few years ago that compares the three available timer classes, and gives some insight into their implementations: <http://msdn.microsoft.com/en-us/magazine/cc164015.aspx>
How scalable is System.Threading.Timer?
[ "", "c#", ".net", "multithreading", "timer", "" ]
For classes that have a long list of setters that are used frequently, I found this way very useful (although I have recently read about the [Builder pattern](http://en.wikipedia.org/wiki/Builder_pattern) in *Effective Java* that is kinda the same). Basically, all setter methods return the object itself so then you can use code like this: ``` myClass .setInt(1) .setString("test") .setBoolean(true); ``` Setters simply return this in the end: ``` public MyClass setInt(int anInt) { // [snip] return this; } ``` What is your opinion? What are the pros and cons? Does this have any impact on performance? Also referred to as the [named parameter idiom](http://www.cs.technion.ac.il/users/yechiel/c++-faq/named-parameter-idiom.html) in c++.
@pek Chained invocation is one of proposals for Java 7. It says that if a method return type is void, it should implicitly return **this**. If you're interested in this topic, there is a bunch of links and a simple example on [Alex Miller's Java 7 page](http://tech.puredanger.com/java7#chained).
This is called a [Fluent Interface](http://en.wikipedia.org/wiki/Fluent_interface), for reference. Personally, I think it's a pretty neat idea, but a matter of taste really. I think [jQuery](http://en.wikipedia.org/wiki/Jquery) works this way.
Design: Java and returning self-reference in setter methods
[ "", "java", "" ]
I am writing a few extensions to mimic the map and reduce functions in Lisp. ``` public delegate R ReduceFunction<T,R>(T t, R previous); public delegate void TransformFunction<T>(T t, params object[] args); public static R Reduce<T,R>(this List<T> list, ReduceFunction<T,R> r, R initial) { var aggregate = initial; foreach(var t in list) aggregate = r(t,aggregate); return aggregate; } public static void Transform<T>(this List<T> list, TransformFunction<T> f, params object [] args) { foreach(var t in list) f(t,args); } ``` The transform function will cut down on cruft like: ``` foreach(var t in list) if(conditions && moreconditions) //do work etc ``` Does this make sense? Could it be better?
These look very similar to extensions in Linq already: ``` //takes a function that matches the Func<T,R> delegate listInstance.Aggregate( startingValue, (x, y) => /* aggregate two subsequent values */ ); //takes a function that matches the Action<T> delegate listInstance.ForEach( x => /* do something with x */); ``` Why is the 2nd example called Transform? Do you intend to change the values in the list somehow? If that's the case you may be better off using `ConvertAll<T>` or `Select<T>`.
According to this link [Functional Programming in C# 3.0: How Map/Reduce/Filter can Rock your World](http://www.25hoursaday.com/weblog/2008/06/16/FunctionalProgrammingInC30HowMapReduceFilterCanRockYourWorld.aspx) the following are the equivalent in C# under the System.Linq namespace: * map --> [Enumerable.Select](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.select.aspx) * reduce --> [Enumerable.Aggregate](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.aggregate.aspx) * filter --> [Enumerable.Where](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.where.aspx)
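The `Reduce` extension in the question is a classic left fold. Sketched in Java rather than C# (purely to keep the added examples in one language), the same shape, with the same `(element, previous)` argument order as the question's `ReduceFunction` delegate, is:

```java
import java.util.List;
import java.util.function.BiFunction;

class Folds {
    // Left fold: thread an accumulator through the list, mirroring the
    // ReduceFunction<T,R> extension in the question.
    static <T, R> R reduce(List<T> list, BiFunction<T, R, R> f, R initial) {
        R acc = initial;
        for (T t : list) {
            acc = f.apply(t, acc); // same (element, previous) order as the question
        }
        return acc;
    }
}
```

`Folds.reduce(List.of(1, 2, 3), (t, prev) -> t + prev, 0)` folds the list to a single value, just as `Aggregate` does in LINQ.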
Generic Map/Reduce List Extensions in C#
[ "", "c#", "functional-programming", "extension-methods", "" ]
What would be the easiest way to separate the directory name from the file name when dealing with `SaveFileDialog.FileName` in C#?
Use: ``` System.IO.Path.GetDirectoryName(saveDialog.FileName) ``` (and the corresponding `System.IO.Path.GetFileName`). The Path class is really rather useful.
You could construct a FileInfo object. It has a Name, FullName, and DirectoryName property. ``` var file = new FileInfo(saveFileDialog.FileName); Console.WriteLine("File is: " + file.Name); Console.WriteLine("Directory is: " + file.DirectoryName); ```
How to get only directory name from SaveFileDialog.FileName
[ "", "c#", "string", "parsing", "" ]
I've been using PHP & MySQL for ages and am about to start using PostgreSQL instead. What's the preferred method? Is it via the PDO objects or is there something better?
PDO objects are the new hotness. I'd recommend that as long as you can ensure that your target platform will always be running PHP 5.2+. There are many other database abstraction layers that support PostgreSQL that are compatible with older versions of PHP; I'd recommend [ADODB](http://adodb.sourceforge.net/). You should really be using PDO or a different abstraction layer even for your MySQL work; that way you won't have this problem again!
Using Zend Db: ``` require_once 'Zend/Db.php'; $DB_ADAPTER = 'Pdo_Pgsql'; $DB_CONFIG = array( 'username' => 'app_db_user', 'password' => 'xxxxxxxxx', 'host' => 'localhost', 'port' => 5432, 'dbname' => 'mydb' ); $db = Zend_Db::factory($DB_ADAPTER, $DB_CONFIG); ```
What's the preferred way to connect to a postgresql database from PHP?
[ "", "php", "postgresql", "" ]
I have a regex that is going to end up being a bit long and it'd make it much easier to read to have it across multiple lines. I tried this but it just barfs. ``` preg_match( '^J[0-9]{7}:\s+ (.*?) #Extract the Transaction Start Date msg \s+J[0-9]{7}:\s+Project\sname:\s+ (.*?) #Extract the Project Name \s+J[0-9]{7}:\s+Job\sname:\s+ (.*?) #Extract the Job Name \s+J[0-9]{7}:\s+', $this->getResultVar('FullMessage'), $atmp ); ``` Is there a way to pass a regex in the above form to preg\_match?
You can use the extended syntax: ``` preg_match("/ test /x", $foo, $bar); ```
Yes, you can add the `/x` [Pattern Modifier](http://us.php.net/manual/en/reference.pcre.pattern.modifiers.php). > This modifier turns on additional > functionality of PCRE that is > incompatible with Perl. Any backslash > in a pattern that is followed by a > letter that has no special meaning > causes an error, thus reserving these > combinations for future expansion. By > default, as in Perl, a backslash > followed by a letter with no special > meaning is treated as a literal. There > are at present no other features > controlled by this modifier. For your example try this: ``` preg_match('/ ^J[0-9]{7}:\s+ (.*?) #Extract the Transaction Start Date msg \s+J[0-9]{7}:\s+Project\sname:\s+ (.*?) #Extract the Project Name \s+J[0-9]{7}:\s+Job\sname:\s+ (.*?) #Extract the Job Name \s+J[0-9]{7}:\s+ /x', $this->getResultVar('FullMessage'), $atmp); ```
Passing a commented, multi-line (freespace) regex to preg_match
[ "", "php", "regex", "" ]
Here is the issue I am having: I have a large query that needs to compare datetimes in the where clause to see if two dates are on the same day. My current solution, which sucks, is to send the datetimes into a UDF to convert them to midnight of the same day, and then check those dates for equality. When it comes to the query plan, this is a disaster, as are almost all UDFs in joins or where clauses. This is one of the only places in my application that I haven't been able to root out the functions and give the query optimizer something it can actually use to locate the best index. In this case, merging the function code back into the query seems impractical. I think I am missing something simple here. Here's the function for reference. ``` if not exists (select * from dbo.sysobjects where id = object_id(N'dbo.f_MakeDate') and type in (N'FN', N'IF', N'TF', N'FS', N'FT')) exec('create function dbo.f_MakeDate() returns int as begin declare @retval int return @retval end') go alter function dbo.f_MakeDate ( @Day datetime, @Hour int, @Minute int ) returns datetime as /* Creates a datetime using the year-month-day portion of @Day, and the @Hour and @Minute provided */ begin declare @retval datetime set @retval = cast( cast(datepart(m, @Day) as varchar(2)) + '/' + cast(datepart(d, @Day) as varchar(2)) + '/' + cast(datepart(yyyy, @Day) as varchar(4)) + ' ' + cast(@Hour as varchar(2)) + ':' + cast(@Minute as varchar(2)) as datetime) return @retval end go ``` To complicate matters, I am joining on time zone tables to check the date against the local time, which could be different for every row: ``` where dbo.f_MakeDate(dateadd(hh, tz.Offset + case when ds.LocalTimeZone is not null then 1 else 0 end, t.TheDateINeedToCheck), 0, 0) = @activityDateMidnight ``` [Edit] I'm incorporating @Todd's suggestion: ``` where datediff(day, dateadd(hh, tz.Offset + case when ds.LocalTimeZone is not null then 1 else 0 end, t.TheDateINeedToCheck), @ActivityDate) = 0 ``` My misconception about 
how datediff works (the same day of year in consecutive years yields 366, not 0 as I expected) caused me to waste a lot of effort. But the query plan didn't change. I think I need to go back to the drawing board with the whole thing.
This is much more concise: ``` where datediff(day, date1, date2) = 0 ```
You pretty much have to keep the left side of your where clause clean. So, normally, you'd do something like: ``` WHERE MyDateTime >= @activityDateMidnight AND MyDateTime < (@activityDateMidnight + 1) ``` (Some folks prefer DATEADD(d, 1, @activityDateMidnight) instead - but it's the same thing). The TimeZone table complicates matters a bit though. It's a little unclear from your snippet, but it looks like t.TheDateINeedToCheck is in GMT with a Time Zone identifier, and that you're then adding the offset to compare against @activityDateMidnight - which is in local time. I'm not sure what ds.LocalTimeZone is, though. If that's the case, then you need to get @activityDateMidnight into GMT instead.
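The half-open range above is the key idea: compare the raw column against [midnight, next midnight) instead of wrapping it in a date-truncating function. A small Java illustration of the same predicate (hypothetical names; Java only to keep the added examples in one language):

```java
import java.time.LocalDate;
import java.time.LocalDateTime;

class SameDayCheck {
    // True when 'when' falls inside [midnight, nextMidnight).
    // This is the sargable equivalent of truncating the column to
    // midnight: the value being tested is left untouched, so an index
    // on it can still be used.
    static boolean onDay(LocalDateTime when, LocalDate day) {
        LocalDateTime midnight = day.atStartOfDay();
        LocalDateTime nextMidnight = midnight.plusDays(1);
        return !when.isBefore(midnight) && when.isBefore(nextMidnight);
    }
}
```

Using `>=` on the lower bound and `<` on the upper bound also sidesteps any questions about the precision of the datetime type's last representable instant of the day.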
What's a good way to check if two datetimes are on the same calendar day in TSQL?
[ "", "sql", "sql-server", "t-sql", "datetime", "user-defined-functions", "" ]
I have a `DataTable` with a `Name` column. I want to generate a collection of the unique names ordered alphabetically. The following query ignores the *order by* clause. ``` var names = (from DataRow dr in dataTable.Rows orderby (string)dr["Name"] select (string)dr["Name"]).Distinct(); ``` Why does the `orderby` not get enforced?
To make it more readable and maintainable, you can also split it up into multiple LINQ statements. 1. First, select your data into a new list, let's call it `x1`, do a projection if desired 2. Next, create a distinct list, from `x1` into `x2`, using whatever distinction you require 3. Finally, create an ordered list, from `x2` into `x3`, sorting by whatever you desire
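As an illustration of that split (in Java streams rather than LINQ, purely to keep the added examples in one language), with the distinct step deliberately placed before the sort so the ordering survives:

```java
import java.util.List;
import java.util.stream.Collectors;

class DistinctOrdered {
    // Step 1: the input list plays the role of the projected names;
    // step 2: de-duplicate; step 3: sort LAST, so the de-duplication
    // cannot disturb the final ordering.
    static List<String> uniqueSorted(List<String> names) {
        return names.stream()
                    .distinct()   // step 2
                    .sorted()     // step 3
                    .collect(Collectors.toList());
    }
}
```

Keeping the stages separate makes each intermediate result easy to inspect while debugging, which is the readability benefit described above.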
**The problem is that the Distinct operator does not grant that it will maintain the original order of values.** So your query will need to work like this ``` var names = (from DataRow dr in dataTable.Rows select (string)dr["Name"]).Distinct().OrderBy( name => name ); ```
How do I get a distinct, ordered list of names from a DataTable using LINQ?
[ "", "c#", "linq", ".net-3.5", "" ]
What's the cleanest, most effective way to validate decimal numbers in JavaScript? Bonus points for: 1. Clarity. Solution should be clean and simple. 2. Cross-platform. Test cases: ``` 01. IsNumeric('-1') => true 02. IsNumeric('-1.5') => true 03. IsNumeric('0') => true 04. IsNumeric('0.42') => true 05. IsNumeric('.42') => true 06. IsNumeric('99,999') => false 07. IsNumeric('0x89f') => false 08. IsNumeric('#abcdef') => false 09. IsNumeric('1.2.3') => false 10. IsNumeric('') => false 11. IsNumeric('blah') => false ```
[@Joel's answer](https://stackoverflow.com/questions/18082/validate-numbers-in-javascript-isnumeric/174921#174921) is pretty close, but it will fail in the following cases: ``` // Whitespace strings: IsNumeric(' ') == true; IsNumeric('\t\t') == true; IsNumeric('\n\r') == true; // Number literals: IsNumeric(-1) == false; IsNumeric(0) == false; IsNumeric(1.1) == false; IsNumeric(8e5) == false; ``` Some time ago I had to implement an `IsNumeric` function, to find out if a variable contained a numeric value, **regardless of its type**: it could be a `String` containing a numeric value (I had to consider exponential notation too), a `Number` object, or virtually anything else; I couldn't make any type assumptions, and I had to take care of type coercion (e.g. `+true == 1;` but `true` shouldn't be considered `"numeric"`). I think it is worth sharing this set of [**+30 unit tests**](http://run.plnkr.co/plunks/93FPpacuIcXqqKMecLdk/) made against numerous function implementations, and also sharing the one that passes all my tests: ``` function isNumeric(n) { return !isNaN(parseFloat(n)) && isFinite(n); } ``` **P.S.** [isNaN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/isNaN) & [isFinite](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/isFinite) have confusing behavior due to forced conversion to number. In ES6, [Number.isNaN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/isNaN) & [Number.isFinite](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/isFinite) would fix these issues. Keep that in mind when using them.
--- **Update** : [Here's how jQuery does it now (2.2-stable)](https://github.com/jquery/jquery/blob/2.2-stable/src/core.js#L215): ``` isNumeric: function(obj) { var realStringObj = obj && obj.toString(); return !jQuery.isArray(obj) && (realStringObj - parseFloat(realStringObj) + 1) >= 0; } ``` **Update** : [Angular 4.3](https://github.com/angular/angular/blob/4.3.x/packages/common/src/pipes/number_pipe.ts#L172): ``` export function isNumeric(value: any): boolean { return !isNaN(value - parseFloat(value)); } ```
Arrrgh! Don't listen to the regular expression answers. RegEx is icky for this, and I'm not talking just performance. It's so easy to make subtle, impossible to spot mistakes with your regular expression. If you can't use `isNaN()` — and remember: I said, "IF" — this should work much better: ``` function IsNumeric(input) { return (input - 0) == input && (''+input).trim().length > 0; } ``` Here's how it works: The `(input - 0)` expression forces JavaScript to do type coercion on your input value; it must first be interpreted as a number for the subtraction operation. If that conversion to a number fails, the expression will result in `NaN` (Not a Number). This *numeric* result is then compared to the original value you passed in. Since the left hand side is now numeric, type coercion is again used. Now that the input from both sides was coerced to the same type from the same original value, you would think they should always be the same (always true). However, there's a special rule that says `NaN` is never equal to `NaN`, and so a value that can't be converted to a number (and only values that cannot be converted to numbers) will result in false. The check on the length is for a special case involving empty strings. Also note that it falls down on your 0x89f test, but that's because for javascript *that's a valid way to define a number literal.* If you want to catch that specific scenario you could add an additional check. Even better, if that's your reason for not using `isNaN()` then just wrap your own function around `isNaN()` that can also do the additional check. In summary, ***if you want to know if a value can be converted to a number, actually try to convert it to a number.*** --- I went back and did some research for *why* a whitespace string did not have the expected output, and I think I get it now: an empty string is coerced to `0` rather than `NaN`. Simply trimming the string before the length check will handle this case. 
Running the unit tests against the new code and it only fails on the infinity and boolean literals, and the only time that should be a problem is if you're generating code (really, who would type in a literal and check if it's numeric? You should *know*), and that would be some strange code to generate. But, again, **the only reason ever to use this is if for some reason you have to avoid isNaN().**
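The "actually try to convert it" advice carries over to other languages too. A rough Java analogue of the question's test cases (a sketch only, and in Java purely to keep the added examples in one language; note `Double.parseDouble` is slightly more liberal than the question wants, e.g. it accepts float suffixes like `1.5f`):

```java
class NumericCheck {
    // Try the conversion, then check the result is a finite number.
    // Matches the question's cases: '-1.5' and '.42' pass; '99,999',
    // '0x89f', '1.2.3', '', and 'blah' fail (hex literals need a binary
    // exponent in Java's grammar, so '0x89f' throws).
    static boolean isNumeric(String s) {
        if (s == null || s.trim().isEmpty()) return false;
        try {
            double d = Double.parseDouble(s.trim());
            return !Double.isNaN(d) && !Double.isInfinite(d);
        } catch (NumberFormatException e) {
            return false;
        }
    }
}
```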
Validate decimal numbers in JavaScript - IsNumeric()
[ "", "javascript", "validation", "numbers", "" ]
In C++, there isn't a de-facto standard logging tool. In my experience, shops roll their own. This creates a bit of a problem, however, when trying to create reusable software components. If everything in your system depends on the logging component, this makes the software less reusable, basically forcing any downstream projects to take your logging framework along with the components they really want. IOC (dependency injection) doesn't really help with the problem since your components would need to depend on a logging abstraction. Logging components themselves can add dependencies on file I/O, triggering mechanisms, and other possibly unwanted dependencies. Does adding a dependency to your proprietary logging framework sacrifice the reusability of the component?
Yes. But dependency injection will help in this case. You can create an abstract logging base class and create implementations for the logging frameworks you want to use. Your components depend only on the abstract base class, and you inject the implementations along with all their dependencies as needed.
Yes, Mendelt is right. We do exactly this in our products. Everything depends on the ILogger abstract interface, but it does not depend on anything else. Typically an executable or a high-level DLL will be the one to construct an actual Logger implementation and inject it.
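A minimal sketch of the arrangement both answers describe, written in Java rather than C++ purely to keep the added examples in one language (`Logger`, `TransferService`, and `InMemoryLogger` are hypothetical names):

```java
import java.util.ArrayList;
import java.util.List;

// The only thing reusable components ever see:
interface Logger {
    void log(String message);
}

// A reusable component depends on the abstraction, never on a concrete
// logging framework; the implementation is injected from the top level.
class TransferService {
    private final Logger logger;

    TransferService(Logger logger) { this.logger = logger; }

    int transfer(int amount) {
        logger.log("transferring " + amount);
        return amount;
    }
}

// One possible implementation, chosen by the executable, not the component.
class InMemoryLogger implements Logger {
    final List<String> lines = new ArrayList<>();
    public void log(String message) { lines.add(message); }
}
```

Because `TransferService` knows nothing about file I/O or triggering mechanisms, those dependencies travel with the injected implementation, not with the reusable component.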
Do C++ logging frameworks sacrifice reusability?
[ "", "c++", "logging", "code-reuse", "" ]
I have to do some JavaScript in the future, so it is time to update my toolbox. Right now I use Firefox with some addons: * JavaScript Shell from <https://www.squarefree.com/bookmarklets/webdevel.html> * Firefox Dom Inspector * Firebug * Greasemonkey * Stylish I plan to use [Venkman Javascript debugger](http://www.hacksrus.com/~ginda/venkman/) as well as [jsunit](https://github.com/pivotal/jsunit) and [js-lint](http://jslint.com/). For programming I stick with vim. So what other tools do you use when developing JavaScript?
I use both Firefox and IE for Web Development and a few add-ons in each: **Firefox:** * [Firebug](https://addons.mozilla.org/en-US/firefox/addon/1843) * [Web Developer Toolbar](https://addons.mozilla.org/en-US/firefox/addon/60) **Internet Explorer:** * [IE Developer Toolbar](http://www.microsoft.com/en-us/download/details.aspx?id=18359) * [Fiddler](http://www.fiddlertool.com/) * Visual Studio for JS Debugging
I sometimes use Emacs with Steve Yegge's [js2-mode](http://code.google.com/p/js2-mode/), evaluating code with [Rhino](http://www.mozilla.org/rhino/) & John Resig's [env.js](http://ejohn.org/blog/bringing-the-browser-to-the-server/) to load jQuery or Prototype in my standalone scripts. This allows me to explore javascript, jQuery, and Prototype outside of a browser. Example: ``` var window; load("Library/env.js"); window.location = 'index.html'; // Load the page 'index.html' print($('aForm').id); // Play with the Dom in a standalone script! ```
What is in your JavaScript development toolbox?
[ "", "javascript", "debugging", "" ]
I have a small JS function that does Ajax for me and another like it that adds in POST data to the request. With Ajax being such a big topic with so many libraries about it, what am I missing from my function, is it insecure or something else worrying? ``` function loadPage(pagePath, displayElement) { var xmlHttp; try { // Firefox, Opera 8.0+, Safari xmlHttp=new XMLHttpRequest(); } catch (e) { // Internet Explorer try { xmlHttp=new ActiveXObject("Msxml2.XMLHTTP"); } catch (e) { try { xmlHttp=new ActiveXObject("Microsoft.XMLHTTP"); } catch (e) { alert("Your browser does not support AJAX!"); return false; } } } xmlHttp.onreadystatechange=function() { if(xmlHttp.readyState==4) { document.getElementById(displayElement).innerHTML = xmlHttp.responseText; } } xmlHttp.open("GET", pagePath, true); xmlHttp.send(null); } ```
I strongly recommend you not roll your own Ajax code. Instead, use a framework such as Prototype, Dojo, or any of the others. They've taken care of handling all the ReadyStates you're not handling (2 means it's been sent, 3 means it's in process, etc.), and they should escape the response you're getting so you don't insert potentially insecure javascript or something into your page. Another thing a more robust framework will give you is the ability to do more than just use innerHTML to replace items in the DOM. Your function here can only be used to replace one element with the response from the ajax call. There's a lot more you can do with Ajax.
I would remove this line. ``` alert("Your browser does not support AJAX!") ``` Shouting at the user in a language he probably doesn't understand is worse than failure. :-)
What more is needed for Ajax than this function
[ "", "javascript", "ajax", "" ]
I have a large tree of Java Objects in my Desktop Application and am trying to decide on the best way of persisting them as a file to the file system. Some thoughts I've had were: * **Roll my own serializer using DataOutputStream**: This would give me the greatest control of what was in the file, but at the cost of micromanaging it. * **Straight old Serialization using ObjectOutputStream and its various related classes**: I'm not sold on it though since I find the data brittle. Changing any object's structure breaks the serialized instances of it. So I'm locked in to what seems to be a horrible versioning nightmare. * **XML Serialization**: It's not as brittle, but it's significantly slower than straight-out serialization. It can be transformed outside of my program. * **JavaDB**: I'd considered this since I'm comfortable writing JDBC applications. The difference here is that the database instance would only persist while the file was being opened or saved. It's not pretty but... it does lend itself to migrating to a central server architecture if the need arises later and it introduces the possibility of querying the data model in a simpler way. I'm curious to see what other people think. And I'm hoping that I've missed some obvious, simpler approach than the ones above. --- Here are some more options culled from the answers below: * **An Object Database** - Has significantly less infrastructure than ORM approaches and performs faster than an XML approach. thanks aku
[db4objects](http://www.db4o.com) might be the best choice
I would go for your final option, JavaDB (Sun's distribution of [Derby](http://db.apache.org/derby)), and use an object-relational layer like [Hibernate](http://hibernate.org) or [iBatis](http://ibatis.apache.org). Using the first three approaches means you are going to spend more time building a database engine than developing application features.
What's the best way to persist data in a Java Desktop Application?
[ "", "java", "persistence", "desktop", "oop", "" ]
In one of my applications, I am querying active directory to get a list of all users below a given user (using the "Direct Reports" thing). So basically, given the name of the person, it is looked up in AD, then the Direct Reports are read. But then for every direct report, the tool needs to check the direct reports of the direct reports. Or, more abstract: The Tool will use a person as the root of the tree and then walk down the complete tree to get the names of all the leaves (can be several hundred) Now, my concern is obviously performance, as this needs to be done quite a few times. My idea is to manually cache that (essentially just put all the names in a long string and store that somewhere and update it once a day). But I just wonder if there is a more elegant way to first get the information and then cache it, possibly using something in the System.DirectoryServices Namespace?
In order to take control over the properties that you want to be cached you can call 'RefreshCache()' passing the properties that you want to hang around: ``` System.DirectoryServices.DirectoryEntry entry = new System.DirectoryServices.DirectoryEntry(); // Push the property values from AD back to cache. entry.RefreshCache(new string[] {"cn", "www" }); ```
Depends on how up to date you want the information to be. If you **must have** the very latest data in your report then querying directly from AD is reasonable. And I agree that AD is quite robust, a typical dedicated AD server is actually very lightly utilised in normal day to day operations *but best to check with your IT department / support person.* An alternative is to have a daily script to dump the AD data into a CSV file and/or import it into a SQL database. (Oracle has a SELECT CONNECT BY feature that can automatically create multi-level hierarchies within a result set. MSSQL can do a similar thing with a bit of recursion IIRC).
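Whichever way the data is cached, the walk over direct reports is a plain recursive traversal. A sketch with hypothetical names (in Java only to keep the added examples in one language), memoizing each person's subtree so repeated reports-of-reports lookups don't re-walk it:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class ReportsWalker {
    // person -> direct reports, e.g. loaded once from AD or a daily dump
    private final Map<String, List<String>> directReports;
    // person -> everyone anywhere below them (filled in lazily)
    private final Map<String, List<String>> cache = new HashMap<>();

    ReportsWalker(Map<String, List<String>> directReports) {
        this.directReports = directReports;
    }

    // All people anywhere below 'root' in the org tree.
    List<String> allBelow(String root) {
        List<String> cached = cache.get(root);
        if (cached != null) return cached;
        List<String> result = new ArrayList<>();
        for (String r : directReports.getOrDefault(root, List.of())) {
            result.add(r);
            result.addAll(allBelow(r)); // recurse, reusing cached subtrees
        }
        cache.put(root, result);
        return result;
    }
}
```

Refreshing `directReports` once a day (as suggested above) and keeping the memo cache per refresh means the several-hundred-leaf walk only ever hits the directory once per person per day.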
Caching Active Directory Data
[ "", "c#", "asp.net", "active-directory", "" ]
I have a complete XML document in a string and would like a `Document` object. Google turns up all sorts of garbage. What is the simplest solution? (In Java 1.5) **Solution** Thanks to [Matt McMinn](https://stackoverflow.com/users/1322/matt-mcminn), I have settled on this implementation. It has the right level of input flexibility and exception granularity for me. (It's good to know if the error came from malformed XML - `SAXException` - or just bad IO - `IOException`.) ``` public static org.w3c.dom.Document loadXMLFrom(String xml) throws org.xml.sax.SAXException, java.io.IOException { return loadXMLFrom(new java.io.ByteArrayInputStream(xml.getBytes())); } public static org.w3c.dom.Document loadXMLFrom(java.io.InputStream is) throws org.xml.sax.SAXException, java.io.IOException { javax.xml.parsers.DocumentBuilderFactory factory = javax.xml.parsers.DocumentBuilderFactory.newInstance(); factory.setNamespaceAware(true); javax.xml.parsers.DocumentBuilder builder = null; try { builder = factory.newDocumentBuilder(); } catch (javax.xml.parsers.ParserConfigurationException ex) { } org.w3c.dom.Document doc = builder.parse(is); is.close(); return doc; } ```
This works for me in Java 1.5 - I stripped out specific exceptions for readability. ``` import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.parsers.DocumentBuilder; import org.w3c.dom.Document; import java.io.ByteArrayInputStream; public Document loadXMLFromString(String xml) throws Exception { DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); factory.setNamespaceAware(true); DocumentBuilder builder = factory.newDocumentBuilder(); return builder.parse(new ByteArrayInputStream(xml.getBytes())); } ```
Whoa there! There's a potentially serious problem with this code, because it ignores the character encoding specified in the `String` (which is UTF-8 by default). When you call `String.getBytes()` the platform default encoding is used to encode Unicode characters to bytes. So, the parser may think it's getting UTF-8 data when in fact it's getting EBCDIC or something… not pretty! Instead, use the parse method that takes an InputSource, which can be constructed with a Reader, like this: ``` import java.io.StringReader; import org.xml.sax.InputSource; … return builder.parse(new InputSource(new StringReader(xml))); ``` It may not seem like a big deal, but ignorance of character encoding issues leads to insidious code rot akin to y2k.
How do I load an org.w3c.dom.Document from XML in a string?
[ "", "java", "xml", "document", "w3c", "" ]
There are two weird operators in C#: * the [true operator](http://msdn.microsoft.com/en-us/library/6x6y6z4d.aspx) * the [false operator](http://msdn.microsoft.com/en-us/library/6292hy1k.aspx) If I understand this right these operators can be used in types which I want to use instead of a boolean expression and where I don't want to provide an implicit conversion to bool. Let's say I have a following class: ``` public class MyType { public readonly int Value; public MyType(int value) { Value = value; } public static bool operator true (MyType mt) { return mt.Value > 0; } public static bool operator false (MyType mt) { return mt.Value < 0; } } ``` So I can write the following code: ``` MyType mTrue = new MyType(100); MyType mFalse = new MyType(-100); MyType mDontKnow = new MyType(0); if (mTrue) { // Do something. } while (mFalse) { // Do something else. } do { // Another code comes here. } while (mDontKnow) ``` However for all the examples above only the true operator is executed. So what's the false operator in C# good for? *Note: More examples can be found [here](http://www.java2s.com/Tutorial/CSharp/0160__Operator-Overload/truefalseoperatorforComplex.htm), [here](http://www.java2s.com/Tutorial/CSharp/0160__Operator-Overload/OverloadtrueandfalseforTwoDimension.htm) and [here](http://www.java2s.com/Tutorial/CSharp/0160__Operator-Overload/trueandfalseoperator.htm).*
You can use it to override the `&&` and `||` operators. The `&&` and `||` operators can't be overridden, but if you override `|`, `&`, `true` and `false` in exactly the right way the compiler will call `|` and `&` when you write `||` and `&&`. For example, look at this code (from <http://ayende.com/blog/1574/nhibernate-criteria-api-operator-overloading> - where I found out about this trick; [archived version](http://web.archive.org/web/20080613013350/http://www.ayende.com/Blog/archive/2006/08/04/7381.aspx) by @BiggsTRC): ``` public static AbstractCriterion operator &(AbstractCriterion lhs, AbstractCriterion rhs) { return new AndExpression(lhs, rhs); } public static AbstractCriterion operator |(AbstractCriterion lhs, AbstractCriterion rhs) { return new OrExpression(lhs, rhs); } public static bool operator false(AbstractCriterion criteria) { return false; } public static bool operator true(AbstractCriterion criteria) { return false; } ``` This is obviously a side effect and not the way it's intended to be used, but it is useful.
Shog9 and Nir: thanks for your answers. Those answers pointed me to [Steve Eichert's article](http://steve.emxsoftware.com/NET/Overloading+the++and++operators) and it pointed me to [msdn](http://msdn.microsoft.com/en-us/library/aa691312.aspx): > The operation x && y is evaluated as T.false(x) ? x : T.&(x, y), where T.false(x) is an invocation of the operator false declared in T, and T.&(x, y) is an invocation of the selected operator &. In other words, x is first evaluated and operator false is invoked on the result to determine if x is definitely false. Then, if x is definitely false, the result of the operation is the value previously computed for x. Otherwise, y is evaluated, and the selected operator & is invoked on the value previously computed for x and the value computed for y to produce the result of the operation.
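The quoted desugaring can be mimicked by hand. A sketch in Java (which has no operator overloading, so `opFalse` stands in for `operator false` and `and` for `&`; Java is used only to keep the added examples in one language), with a `Supplier` modeling the deferred evaluation of `y`:

```java
enum Tri {
    TRUE, FALSE, UNKNOWN;

    // Plays the role of 'operator false': is this value definitely false?
    boolean opFalse() { return this == FALSE; }

    // Plays the role of 'operator &': full, non-short-circuit conjunction.
    Tri and(Tri other) {
        if (this == FALSE || other == FALSE) return FALSE;
        if (this == UNKNOWN || other == UNKNOWN) return UNKNOWN;
        return TRUE;
    }

    // What the quoted rule says the compiler does for x && y:
    //   T.false(x) ? x : T.&(x, y)
    // y is never evaluated when x is already definitely false.
    static Tri andAlso(Tri x, java.util.function.Supplier<Tri> y) {
        return x.opFalse() ? x : x.and(y.get());
    }
}
```

This is exactly why `operator false` exists: without it there would be no way to short-circuit `&&` for a user-defined type.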
What's the false operator in C# good for?
[ "", "c#", ".net", "syntax", "" ]
Is it true that the only way to handle default function arguments is through function overloading? For example, in PHP I can do this: ``` function foo($x, $y=0) { } ``` Would the best way to handle it in C# be this? ``` void foo(int x) { foo(x, 0); } void foo(int x, int y) { } ``` ***[Example lifted from here](http://bytes.com/forum/thread224970.html)*** Edit ***Made the C# example into actual C# (Thanks Blair Conrad)***
Yes, that'd be best~~, except you'd omit the `$`s on the parameter names~~, as others have pointed out. For those interested in the rationale behind the lack of default parameter values, see @Giovanni Galbo's explanation.
Just to satisfy some curiosity: From [Why doesn't C# support default parameters?](http://blogs.msdn.com/csharpfaq/archive/2004/03/07/85556.aspx): > In languages such as C++, a default value can be included as part of the method declaration: > > void Process(Employee employee, bool bonus = false) > > This method can be called either with: > > a.Process(employee, true); > > or > > a.Process(employee); > > in the second case, the parameter bonus is set to false. > > C# doesn't have this feature. > > One reason we don't have this feature is related to a specific implementation of the feature. In the C++ world, when the user writes: > > a.Process(employee); > > the compiler generates > > a.process(employee, false); > > In other words, the compiler takes the default value that is specified in the method prototype and puts it into the method call - it's just as if the user wrote 'false' as the second parameter. There's no way to change that default value without forcing the user of the class to recompile, which is unfortunate. > > The overloading model works better in this respect. The framework author just defines two separate methods, and the single-parameter one calls the two-parameter method. This keeps the default value in the framework, where it can be modified if necessary. > > It would be possible for a compiler to take something like the C++ definition and produce the overloads, but there are a few issues with that approach. > > The first one is that the correlation between the code that the user writes and the code the compiler generates is less obvious. We generally try to limit magic when possible, as it makes it harder for programmers. The second issue has to do with things like XML doc comments and intellisense. The compiler would have to have special rules for how it generates doc comments for the overloaded methods, and intellisense would need to have smarts to collapse the overloaded methods into a single method. 
> > Writing overloads yourself is a bit less convenient, but we think it's an acceptable solution.
Is overloading the only way to have default function arguments in C#?
[ "", "c#", "overloading", "" ]
I have a custom validation function in JavaScript in a user control on a .Net 2.0 web site which checks to see that the fee paid is not in excess of the fee amount due. I've placed the validator code in the `ascx` file, and I have also tried using `Page.ClientScript.RegisterClientScriptBlock()` and in both cases the validation fires, but cannot find the JavaScript function. The output in Firefox's error console is `"feeAmountCheck is not defined"`. Here is the function (this was taken directly from firefox->view source) ``` <script type="text/javascript"> function feeAmountCheck(source, arguments) { var amountDue = document.getElementById('ctl00_footerContentHolder_Fees1_FeeDue'); var amountPaid = document.getElementById('ctl00_footerContentHolder_Fees1_FeePaid'); if (amountDue.value > 0 && amountDue >= amountPaid) { arguments.IsValid = true; } else { arguments.IsValid = false; } return arguments; } </script> ``` Any ideas as to why the function isn't being found? How can I remedy this without having to add the function to my master page or consuming page?
Try changing the argument names to `sender` and `args`. And, after you have it working, switch the call over to `ScriptManager.RegisterClientScriptBlock`, regardless of AJAX use.
When you're using .Net 2.0 and Ajax - you should use: ``` ScriptManager.RegisterClientScriptBlock ``` It will work better in Ajax environments than the old Page.ClientScript version
ASP.Net Custom Client-Side Validation
[ "", "asp.net", "javascript", "validation", "" ]
Is it possible to configure [xampp](http://www.apachefriends.org/en/xampp.html) to serve up a file outside of the `htdocs` directory? For instance, say I have a file located as follows: `C:\projects\transitCalculator\trunk\TransitCalculator.php` and my [xampp](http://www.apachefriends.org/en/xampp.html) files are normally served out from: `C:\xampp\htdocs\` (because that's the default configuration) Is there some way to make Apache recognize and serve up my `TransitCalculator.php` file without moving it under `htdocs`? Preferably I'd like Apache to serve up/have access to the entire contents of the projects directory, and I don't want to move the projects directory under `htdocs`. edit: edited to add Apache to the question title to make Q/A more "searchable"
Ok, per [pix0r](https://stackoverflow.com/questions/1408/#2471)'s, [Sparks](https://stackoverflow.com/questions/1408/#1413)' and [Dave](https://stackoverflow.com/questions/1408/#1414)'s answers it looks like there are three ways to do this: --- ## [Virtual Hosts](https://stackoverflow.com/questions/1408/#2471) 1. Open C:\xampp\apache\conf\extra\httpd-vhosts.conf. 2. Un-comment ~line 19 (`NameVirtualHost *:80`). 3. Add your virtual host (~line 36): ``` <VirtualHost *:80> DocumentRoot C:\Projects\transitCalculator\trunk ServerName transitcalculator.localhost <Directory C:\Projects\transitCalculator\trunk> Order allow,deny Allow from all </Directory> </VirtualHost> ``` 4. Open your hosts file (C:\Windows\System32\drivers\etc\hosts). 5. Add ``` 127.0.0.1 transitcalculator.localhost #transitCalculator ``` to the end of the file (before the Spybot - Search & Destroy stuff if you have that installed). 6. Save (You might have to save it to the desktop, change the permissions on the old hosts file (right click > properties), and copy the new one into the directory over the old one (or rename the old one) if you are using Vista and have trouble). 7. Restart Apache. Now you can access that directory by browsing to <http://transitcalculator.localhost/>. --- ## [Make an Alias](https://stackoverflow.com/questions/1408/#1413) 1. Starting ~line 200 of your `http.conf` file, copy everything between `<Directory "C:/xampp/htdocs">` and `</Directory>` (~line 232) and paste it immediately below with `C:/xampp/htdocs` replaced with your desired directory (in this case `C:/Projects`) to give your server the correct permissions for the new directory. 2. Find the `<IfModule alias_module></IfModule>` section (~line 300) and add ``` Alias /transitCalculator "C:/Projects/transitCalculator/trunk" ``` (or whatever is relevant to your desires) below the `Alias` comment block, inside the module tags. --- ## [Change your document root](https://stackoverflow.com/questions/1408/#1414) 1. 
Edit ~line 176 in C:\xampp\apache\conf\httpd.conf; change `DocumentRoot "C:/xampp/htdocs"` to `DocumentRoot "C:/Projects"` (or whatever you want). 2. Edit ~line 203 to match your new location (in this case `C:/Projects`). --- **Notes:** * You have to use forward slashes "/" instead of back slashes "\". * Don't include the trailing "/" at the end. * *Restart your server*.
You can relocate it by editing the **DocumentRoot** setting in XAMPP\apache\conf\httpd.conf. It should currently be: > C:/xampp/htdocs Change it to: > C:/projects/transitCalculator/trunk
Make XAMPP / Apache serve file outside of htdocs folder
[ "", "php", "apache", "xampp", "server-configuration", "" ]
What techniques can be applied effectively to improve the performance of SQL queries? Are there any general rules that apply?
* Use primary keys * Avoid select \* * Be as specific as you can when building your conditional statements * De-normalisation can often be more efficient * Table variables and temporary tables (where available) will often be better than using a large source table * Partitioned views * Employ indices and constraints
Learn what's really going on under the hood - you should be able to understand the following concepts in detail: * Indexes (not just what they are but actually how they work). * Clustered indexes vs heap allocated tables. * Text and binary lookups and when they can be in-lined. * [Fill factor](https://stackoverflow.com/questions/10919/how-do-you-measure-sql-fill-factor-value). * How records are ghosted for update/delete. * When page splits happen and why. * Statistics, and how they affect various query speeds. * The query planner, and how it works for your specific database (for instance on some systems "select \*" is slow, on modern MS-Sql DBs the planner can handle it).
What generic techniques can be applied to optimize SQL queries?
[ "", "sql", "performance", "" ]
What is the SQL query to select all of the MSSQL Server's logins? Thank you. More than one of you had the answer I was looking for: ``` SELECT * FROM syslogins ```
Is this what you're after? ``` select * from master.syslogins ```
On SQL Azure as of 2012; logins: ``` --connect to master --logins SELECT * from sys.sql_logins --users SELECT * from sys.sysusers ``` and users on a specific database: ``` --connect to database SELECT * from sys.sysusers ``` Also note that 'users' on Azure SQL now (2022-11-17) have more 'login' type properties and creating a user on an Azure SQL database with a password is now possible, so it is less likely to require creating logins in 'master'.
SQL Query for Logins
[ "", "sql", "sql-server", "" ]
I have a JavaScript method that I need to run on one of my pages, in particular, the `onresize` event. However, I don't see how I can set that event from my content page. I wish I could just put it on my master page, but I don't have the need for the method to be called on all pages that use that master page. Any help would be appreciated.
Place the following in your content page: ``` <script type="text/javascript"> // here is a cross-browser compatible way of connecting // handlers to events, in case you don't have one function attachEventHandler(element, eventToHandle, eventHandler) { if(element.attachEvent) { element.attachEvent(eventToHandle, eventHandler); } else if(element.addEventListener) { element.addEventListener(eventToHandle.replace("on", ""), eventHandler, false); } else { element[eventToHandle] = eventHandler; } } attachEventHandler(window, "onresize", function() { // the code you want to run when the browser is resized }); </script> ``` --- That code should give you the basic idea of what you need to do. Hopefully you are using a library that already has code to help you write up event handlers and such.
How about use code like the following in your Content Page (C#)? ``` Page.ClientScript.RegisterStartupScript(this.GetType(), "resizeMyPage", "window.onresize=function(){ resizeMyPage();}", true); ``` Thus, you could have a `resizeMyPage` function defined somewhere in the Javascript and it would be run whenever the browser is resized!
Call onresize from ASP.NET content page
[ "", "asp.net", "javascript", "master-pages", "onresize", "" ]
I'm currently trying out db4o (the java version) and I pretty much like what I see. But I cannot help wondering how it performs in a real-life (web) environment. Does anyone have any experiences (good or bad) to share about running db4o?
We run the DB4O .NET version in a large client/server project. Our experience is that you can potentially get much better performance than typical relational databases. However, you really have to tweak your objects to get this kind of performance. For example, if you've got a list containing a lot of objects, DB4O activation of these lists is slow. There are a number of ways to get around this problem, for example, by inverting the relationship. Another pain is activation. When you retrieve or delete an object from DB4O, by default it will activate the whole object tree. For example, loading a Foo will load Foo.Bar.Baz.Bat, etc. until there's nothing left to load. While this is nice from a programming standpoint, performance will slow down the more nesting you have in your objects. To improve performance, you can tell DB4O how many levels deep to activate. This is time-consuming to do if you've got a lot of objects. Another area of pain was text searching. DB4O's text searching is far, far slower than SQL full text indexing. (They'll tell you this outright on their site.) The good news is, it's easy to set up a text searching engine on top of DB4O. On our project, we've hooked up Lucene.NET to index the text fields we want. Some APIs don't seem to work, such as the GetField APIs useful in applying database upgrades. (For example, if you've renamed a property and you want to upgrade your existing objects in the database, you need to use these "reflection" APIs to find objects in the database.) Other APIs, such as the [Index] attribute, don't work in the stable 6.4 version, and you must instead specify indexes using Configure().Index("someField"), which is not strongly typed. We've witnessed performance degrade the larger your database gets. We have a 1GB database right now and things are still fast, but not nearly as fast as when we started with a tiny database. We've found another issue where Db4O.GetByID will close the database if the ID doesn't exist anymore in the database.
We've found the Native Query syntax (the most natural, language-integrated syntax for queries) is far, far slower than the less-friendly SODA queries. So instead of typing: ``` // C# syntax for "Find all MyFoos with Bar == 23". // (Note the Java syntax is more verbose using the Predicate class.) IList<MyFoo> results = db4o.Query<MyFoo>(input => input.Bar == 23); ``` you have to write an ugly SODA query which is string-based and not strongly-typed. For .NET folks, they've recently introduced a LINQ-to-DB4O provider, which provides for the best syntax yet. However, it's yet to be seen whether performance will be up-to-par with the ugly SODA queries. DB4O support has been decent: we've talked to them on the phone a number of times and have received helpful info. Their user forums are next to worthless, however; almost all questions go unanswered. Their JIRA bug tracker receives a lot of attention, so if you've got a nagging bug, file it on JIRA and it often will get fixed. (We've had 2 bugs that have been fixed, and another one that got patched in a half-assed way.) If all this hasn't scared you off, let me say that we're very happy with DB4O, despite the problems we've encountered. The performance we've got has blown away some O/RM frameworks we tried. I recommend it. **update July 2015** Keep in mind, this answer was written back in 2008. While I appreciate the upvotes, the world has changed since then, and this information may not be as reliable as it was when it was written.
Most native queries can and are efficiently converted into SODA queries behind the scenes so that should not make a difference. Using NQ is of course preferred as you remain in the realms of strong typed language. If you have problems getting NQ to use indexes please feel free to post your problem to the [db4o forums](http://developer.db4o.com/forums/) and we'll try to help you out. Goran
db4o experiences?
[ "", "java", "db4o", "" ]
I have a (potentially dumb) question about the C++ STL. When I make a container (vector, set, map, etc), is it allocated on the stack or on the heap? If I make a set and put 5 million strings, will I have to worry about a stack overflow?
STL classes by default allocate their internal buffers from the heap, although these classes also allow custom allocators that allow a user to specify an alternate location to allocate from - e.g. a shared memory pool.
The default allocator for STL containers uses operator new and delete, so it's whatever those route to for the type being contained. (In general, it comes from the heap unless you do something to override that.) You will not get a stack overflow from allocating 5 million strings. Even if you made a stack based allocator, it would probably overflow before you even inserted one string.
C++ STL question: allocators
[ "", "c++", "stl", "" ]
I'm working on an internal project for my company, and part of the project is to be able to parse various "Tasks" from an XML file into a collection of tasks to be run later. Because each type of Task has a multitude of different associated fields, I decided it would be best to represent each type of Task with a separate class. To do this, I constructed an abstract base class: ``` public abstract class Task { public enum TaskType { // Types of Tasks } public abstract TaskType Type { get; } public abstract void LoadFromXml(XmlElement task); public abstract XmlElement CreateXml(XmlDocument currentDoc); } ``` Each task inherited from this base class, and included the code necessary to create itself from the passed in XmlElement, as well as serialize itself back out to an XmlElement. A basic example: ``` public class MergeTask : Task { public override TaskType Type { get { return TaskType.Merge; } } // Lots of Properties / Methods for this Task public MergeTask (XmlElement elem) { this.LoadFromXml(elem); } public override void LoadFromXml(XmlElement task) { // Populates this Task from the Xml. } public override XmlElement CreateXml(XmlDocument currentDoc) { // Serializes this class back to xml. } } ``` The parser would then use code similar to this to create a task collection: ``` XmlNode taskNode = parent.SelectSingleNode("tasks"); TaskFactory tf = new TaskFactory(); foreach (XmlNode task in taskNode.ChildNodes) { // Since XmlComments etc will show up if (task is XmlElement) { tasks.Add(tf.CreateTask(task as XmlElement)); } } ``` All of this works wonderfully, and allows me to pass tasks around using the base class, while retaining the structure of having individual classes for each task. However, I am not happy with my code for TaskFactory.CreateTask.
This method accepts an XmlElement, and then returns an instance of the appropriate Task class: ``` public Task CreateTask(XmlElement elem) { if (elem != null) { switch(elem.Name) { case "merge": return new MergeTask(elem); default: throw new ArgumentException("Invalid Task"); } } } ``` Because I have to parse the XMLElement, I'm using a huge (10-15 cases in the real code) switch to pick which child class to instantiate. I'm hoping there is some sort of polymorphic trick I can do here to clean up this method. Any advice?
I use reflection to do this. You can make a factory that basically expands without you having to add any extra code. Make sure you have "using System.Reflection", then place the following code in your instantiation method. ``` public Task CreateTask(XmlElement elem) { if (elem != null) { try { Assembly a = typeof(Task).Assembly; string type = string.Format("{0}.{1}Task", typeof(Task).Namespace, elem.Name); // this is only here so that if that type doesn't exist, // this method throws an exception Type t = a.GetType(type, true, true); return a.CreateInstance(type, true) as Task; } catch (System.Exception) { throw new ArgumentException("Invalid Task"); } } return null; } ``` Another observation is that you can make this method static and hang it off of the Task class, so that you don't have to new up the TaskFactory, and you also save yourself a moving piece to maintain.
Create a "Prototype" instance of each class and put them in a hashtable inside the factory, with the string you expect in the XML as the key. CreateTask then just finds the right Prototype object by get()ing it from the hashtable, and calls LoadFromXml on it. You have to pre-load the classes into the hashtable. If you want it more automatic, you can make the classes "self-registering" by calling a static register method on the factory: put calls to register (with constructors) in the static blocks on the Task subclasses. Then all you need to do is "mention" the classes to get the static blocks run. A static array of Task subclasses would suffice to "mention" them. Or use reflection to mention the classes.
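If the idea of self-registering classes is hard to picture, here is a minimal sketch of it in Python (not the asker's C#; the tag strings and subclass names here are made up for illustration). Each subclass registers itself under an XML element name as it is defined, so the factory needs no switch at all:

```python
# A minimal sketch of a self-registering task factory.
# Tag names ("merge", "copy") and subclasses are hypothetical examples.

class Task:
    registry = {}  # maps XML element name -> Task subclass

    # Called automatically whenever a subclass is defined (Python 3.6+),
    # so every subclass registers itself without extra factory code.
    def __init_subclass__(cls, tag=None, **kwargs):
        super().__init_subclass__(**kwargs)
        if tag is not None:
            Task.registry[tag] = cls

    @classmethod
    def create(cls, tag):
        try:
            return cls.registry[tag]()
        except KeyError:
            raise ValueError("Invalid Task: %s" % tag)

class MergeTask(Task, tag="merge"):
    pass

class CopyTask(Task, tag="copy"):
    pass

task = Task.create("merge")
print(type(task).__name__)  # MergeTask
```

Adding a new task type is then just defining a new subclass with its tag; the factory never changes.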
Abstract Factory Design Pattern
[ "", "c#", "design-patterns", "factory", "" ]
Searching for some sample code for converting a point in WGS84 coordinate system to a map position in Google Maps (pixel position), also supporting zoom levels. If the codes is well commented, then it can also be in some other language. You can also point me to a open source Java project :) Some resources found: [OpenLayer](http://trac.openlayers.org/browser/trunk/openlayers/lib/OpenLayers/Layer) implementation. [JOSM](http://josm.openstreetmap.de/browser/trunk/src/org/openstreetmap/josm/data/projection) project Excellent [Java Map Projection Library](https://github.com/OSUCartography/JMapProjLib) from JH LABS. This is a pure java PROJ.4 port. Does projection from WGS84 to meters. From there it's quite straightforward to convert meters to tile pixels.
[Tile utility code in Java](http://web.archive.org/web/20110809084551/http://mapki.com/wiki/Tile_utility_code_in_Java) on mapki.com (great resource for google map developers)
Here are the functions in JavaScript, as extracted from OpenLayers: ``` function toMercator (lon, lat) { var x = lon * 20037508.34 / 180; var y = Math.log(Math.tan((90 + lat) * Math.PI / 360)) / (Math.PI / 180); y = y * 20037508.34 / 180; return [x, y]; } function inverseMercator (x, y) { var lon = (x / 20037508.34) * 180; var lat = (y / 20037508.34) * 180; lat = 180/Math.PI * (2 * Math.atan(Math.exp(lat * Math.PI / 180)) - Math.PI / 2); return [lon, lat]; } ``` Fairly straightforward to convert to Java
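In the same spirit, here is an unofficial Python port of those two functions, with a round-trip sanity check (the test coordinates are arbitrary; 20037508.34 is half the Earth's circumference in Web Mercator metres):

```python
import math

# Python port of the OpenLayers-style WGS84 <-> spherical Mercator conversions.
HALF_CIRCUMFERENCE = 20037508.34

def to_mercator(lon, lat):
    x = lon * HALF_CIRCUMFERENCE / 180
    y = math.log(math.tan((90 + lat) * math.pi / 360)) / (math.pi / 180)
    y = y * HALF_CIRCUMFERENCE / 180
    return x, y

def inverse_mercator(x, y):
    lon = (x / HALF_CIRCUMFERENCE) * 180
    lat = (y / HALF_CIRCUMFERENCE) * 180
    lat = 180 / math.pi * (2 * math.atan(math.exp(lat * math.pi / 180)) - math.pi / 2)
    return lon, lat

# Round trip: converting there and back should recover the input.
x, y = to_mercator(13.4, 52.5)   # roughly Berlin
lon, lat = inverse_mercator(x, y)
print(round(lon, 6), round(lat, 6))  # 13.4 52.5
```

From the Mercator metres it is then a matter of scaling by the tile size and zoom level to get pixel positions.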
Java code for WGS84 to Google map position and back
[ "", "java", "google-maps", "mapping", "wgs84", "" ]
I am wrapping existing C++ code from a **BSD** project in our own custom wrapper and I want to integrate it to our code with as few changes as possible. This code uses `fprintf` to print to **stderr** in order to log / report errors. I want to redirect this to an alternative place within the same process. On **Unix** I have done this with a `socketpair` and a `thread`: one end of the socket is where I send **stderr** (via a call to `dup2`) and the other end is monitored in a thread, where I can then process the output. This does not work on **Windows** though because a socket is not the same as a file handle. All documents I have found on the web show how to redirect output from a child process, which is not what I want. How can I redirect **stderr** within the same process getting a callback of some sort when output is written? (and before you say so, I've tried `SetStdHandle` but cannot find any way to make this work)...
You can use a similar technique on Windows, you just need to use different words for the same concepts. :) This article: <http://msdn.microsoft.com/en-us/library/ms682499.aspx> uses a win32 pipe to handle I/O from another process, you just have to do the same thing with threads within the same process. Of course, in your case all output to stderr from anywhere in the process will be redirected to your consumer. Actually, other pieces of the puzzle you may need are [\_fdopen](http://msdn.microsoft.com/en-us/library/dye30d82(VS.80).aspx) and [\_open\_osfhandle](http://msdn.microsoft.com/en-us/library/bdts1c9x(VS.71).aspx). In fact, here's a related example from some [code](http://hewgill.com/pilot/copilot/) I released years ago: ``` DWORD CALLBACK DoDebugThread(void *) { AllocConsole(); SetConsoleTitle("Copilot Debugger"); // The following is a really disgusting hack to make stdin and stdout attach // to the newly created console using the MSVC++ libraries. I hope other // operating systems don't need this kind of kludge.. :) stdout->_file = _open_osfhandle((long)GetStdHandle(STD_OUTPUT_HANDLE), _O_TEXT); stdin->_file = _open_osfhandle((long)GetStdHandle(STD_INPUT_HANDLE), _O_TEXT); debug(); stdout->_file = -1; stdin->_file = -1; FreeConsole(); CPU_run(); return 0; } ``` In this case, the main process was a GUI process which doesn't start with stdio handles at all. It opens a console, then shoves the right handles into stdout and stdin so the debug() function (which was designed as a stdio interactive function) can interact with the newly created console. You should be able to open some pipes and do the same sort of thing to redirect stderr.
You have to remember that what MSVCRT calls "OS handles" are not Win32 handles, but another layer of handles added just to confuse you. MSVCRT tries to emulate the Unix handle numbers where `stdin` = 0, `stdout` = 1, `stderr` = 2 and so on. Win32 handles are numbered differently and their values always happen to be a multiple of 4. Opening the pipe and getting all the handles configured properly will require getting your hands messy. Using the MSVCRT source code and a debugger is probably a requirement.
Windows C++: How can I redirect stderr for calls to fprintf?
[ "", "c++", "windows", "redirect", "" ]
I often encounter the following scenario where I need to offer many different types of permissions. I primarily use ASP.NET / VB.NET with SQL Server 2000. **Scenario** I want to offer a dynamic permission system that can work on different parameters. Let's say that I want to give either a department or just a specific person access to an application. And pretend that we have a number of applications that keeps growing. In the past, I have chosen one of the following two ways that I know to do this. 1. Use a single permission table with special columns that are used for determining a how to apply the parameters. The special columns in this example are `TypeID` and `TypeAuxID`. The SQL would look something like this. ``` SELECT COUNT(PermissionID) FROM application_permissions WHERE (TypeID = 1 AND TypeAuxID = @UserID) OR (TypeID = 2 AND TypeAuxID = @DepartmentID) AND ApplicationID = 1 ``` 2. Use a mapping table for each type of permission, then joining them all together. ``` SELECT COUNT(perm.PermissionID) FROM application_permissions perm LEFT JOIN application_UserPermissions emp ON perm.ApplicationID = emp.ApplicationID LEFT JOIN application_DepartmentPermissions dept ON perm.ApplicationID = dept.ApplicationID WHERE q.SectionID=@SectionID AND (emp.UserID=@UserID OR dept.DeptID=@DeptID OR (emp.UserID IS NULL AND dept.DeptID IS NULL)) AND ApplicationID = 1 ORDER BY q.QID ASC ``` **My Thoughts** I hope that the examples make sense. I cobbled them together. The first example requires less work, but neither of them feel like the best answer. Is there a better way to handle this?
I agree with John Downey. Personally, I sometimes use a flagged enumeration of permissions. This way you can use AND, OR, NOT and XOR bitwise operations on the enumeration's items. ``` [Flags] public enum Permission { VIEWUSERS = 1, // 2^0 // 0000 0001 EDITUSERS = 2, // 2^1 // 0000 0010 VIEWPRODUCTS = 4, // 2^2 // 0000 0100 EDITPRODUCTS = 8, // 2^3 // 0000 1000 VIEWCLIENTS = 16, // 2^4 // 0001 0000 EDITCLIENTS = 32, // 2^5 // 0010 0000 DELETECLIENTS = 64, // 2^6 // 0100 0000 } ``` Then, you can combine several permissions using the OR bitwise operator. For example, if a user can view & edit users, the binary result of the operation is 0000 0011, which converted to decimal is 3. You can then store the permission of one user in a single column of your database (in our case it would be 3). Inside your application, you just need another bitwise operation (AND) to verify whether a user has a particular permission or not.
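The same bit arithmetic works in any language. As a language-neutral sketch, here it is with Python's `enum.IntFlag` (the member names mirror the C# enum; the point is that permissions combine with bitwise OR and are tested with bitwise AND):

```python
from enum import IntFlag

class Permission(IntFlag):
    VIEWUSERS    = 1   # 0000 0001
    EDITUSERS    = 2   # 0000 0010
    VIEWPRODUCTS = 4   # 0000 0100
    EDITPRODUCTS = 8   # 0000 1000

# Combine permissions with bitwise OR; the result fits in one integer column.
granted = Permission.VIEWUSERS | Permission.EDITUSERS
print(int(granted))  # 3

# Test for a specific permission with bitwise AND.
print(bool(granted & Permission.EDITUSERS))    # True
print(bool(granted & Permission.VIEWPRODUCTS)) # False
```

Storing that single integer per user is what makes this scheme cheap on the database side.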
The way I typically go about coding permission systems is having 6 tables. * Users - this is pretty straightforward; it is your typical users table * Groups - this would be synonymous to your departments * Roles - this is a table with all permissions, generally also including a human readable name and a description * Users\_have\_Groups - this is a many-to-many table of what groups a user belongs to * Users\_have\_Roles - another many-to-many table of what roles are assigned to an individual user * Groups\_have\_Roles - the final many-to-many table of what roles each group has At the beginning of a user's session you would run some logic that pulls out every role they have assigned, either directly or through a group. Then you code against those roles as your security permissions. Like I said, this is what I typically do, but your mileage may vary.
What is the best way to handle multiple permission types?
[ "", "sql", "permissions", "" ]
I am trying to set a `javascript` `date` so that it can be submitted via `JSON` to a `.NET` type, but when attempting to do this, `jQuery` sets the `date` to a full `string`, what format does it have to be in to be converted to a `.NET` type? ``` var regDate = student.RegistrationDate.getMonth() + "/" + student.RegistrationDate.getDate() + "/" + student.RegistrationDate.getFullYear(); j("#student_registrationdate").val(regDate); // value to serialize ``` I am using `MonoRail` on the server to perform the binding to a `.NET` type, that aside I need to know what to set the form hidden field value to, to get properly sent to `.NET` code.
This [MSDN article](http://msdn.microsoft.com/en-us/library/ey1cdcx8.aspx) has some example Date strings that are parse-able is that what you're looking for? ``` string dateString = "5/1/2008 8:30:52 AM"; DateTime date1 = DateTime.Parse(dateString, CultureInfo.InvariantCulture); ```
As travis suggests, you could simply change the parameter or class property (depending on what you are passing back) to a string, the parse it as his example. You may also want to take a look at [this article](http://www.nikhilk.net/DateSyntaxForJSON2.aspx). It suggests that direct conversion for DateTime JSON serialization uses something more like the ticks property.
How can I format a javascript date to be serialized by jQuery
[ "", "c#", "json", "serialization", "date", "castle-monorail", "" ]
I'm trying to do this (which produces an unexpected T\_VARIABLE error): ``` public function createShipment($startZip, $endZip, $weight = $this->getDefaultWeight()){} ``` I don't want to put a magic number in there for weight since the object I am using has a `"defaultWeight"` parameter that all new shipments get if you don't specify a weight. I can't put the `defaultWeight` in the shipment itself, because it changes from shipment group to shipment group. Is there a better way to do it than the following? ``` public function createShipment($startZip, $endZip, $weight = 0){ if($weight <= 0){ $weight = $this->getDefaultWeight(); } } ```
This isn't much better: ``` public function createShipment($startZip, $endZip, $weight=null){ $weight = !$weight ? $this->getDefaultWeight() : $weight; } // or... public function createShipment($startZip, $endZip, $weight=null){ if ( !$weight ) $weight = $this->getDefaultWeight(); } ```
Neat trick with Boolean OR operator: ``` public function createShipment($startZip, $endZip, $weight = 0){ $weight or $weight = $this->getDefaultWeight(); ... } ```
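For comparison, the same fallback pattern sketched in Python (the default weight constant here is a made-up stand-in for the per-group lookup). One caveat that applies to any truthiness-based trick: if `0` is ever a legal weight, test explicitly for the "missing" sentinel instead of relying on falsiness:

```python
DEFAULT_WEIGHT = 5.0  # hypothetical; the real value would come from the shipment group

def create_shipment(start_zip, end_zip, weight=None):
    # Explicit None test: an explicit 0 stays 0,
    # only a genuinely missing argument falls back to the default.
    if weight is None:
        weight = DEFAULT_WEIGHT
    return (start_zip, end_zip, weight)

print(create_shipment("02134", "90210"))     # ('02134', '90210', 5.0)
print(create_shipment("02134", "90210", 0))  # ('02134', '90210', 0)
```

With a `weight or default` style check, the second call would silently get the default as well.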
Using object property as default for method property
[ "", "php", "parameters", "error-handling", "" ]
How do you express an integer as a binary number with Python literals? I was easily able to find the answer for hex: ``` >>> 0x12AF 4783 >>> 0x100 256 ``` and octal: ``` >>> 01267 695 >>> 0100 64 ``` **How do you use literals to express binary in Python?** --- ***Summary of Answers*** * Python 2.5 and earlier: can express binary using `int('01010101111',2)` but not with a literal. * Python 2.5 and earlier: there is *no way* to express binary literals. * Python 2.6 beta: You can do like so: `0b1100111` or `0B1100111`. * Python 2.6 beta: will also allow `0o27` or `0O27` (second character is the letter O) to represent an octal. * Python 3.0 beta: Same as 2.6, but will no longer allow the older `027` syntax for octals.
For reference—*future* Python possibilities: Starting with Python 2.6 you can express binary literals using the prefix **0b** or **0B**: ``` >>> 0b101111 47 ``` You can also use the new **bin** function to get the binary representation of a number: ``` >>> bin(173) '0b10101101' ``` Development version of the documentation: [What's New in Python 2.6](http://docs.python.org/dev/whatsnew/2.6.html#pep-3127-integer-literal-support-and-syntax)
``` >>> print int('01010101111',2) 687 >>> print int('11111111',2) 255 ``` Another way.
How do you express binary literals in Python?
[ "", "python", "syntax", "binary", "integer", "literals", "" ]
In C#, if I have an inherited class with a default constructor, do I have to explicitly call the base class' constructor or will it be implicitly called? ``` class BaseClass { public BaseClass() { // ... some code } } class MyClass : BaseClass { public MyClass() // Do I need to put ": base()" here or is it implied? { // ... some code } } ```
You do not need to explicitly call the base constructor, it will be implicitly called. Extend your example a little and create a Console Application and you can verify this behaviour for yourself: ``` using System; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { MyClass foo = new MyClass(); Console.ReadLine(); } } class BaseClass { public BaseClass() { Console.WriteLine("BaseClass constructor called."); } } class MyClass : BaseClass { public MyClass() { Console.WriteLine("MyClass constructor called."); } } } ```
It is implied, provided it is parameterless. This is because you **need to implement constructors that take values**, see the code below for an example: ``` public class SuperClassEmptyCtor { public SuperClassEmptyCtor() { // Default Ctor } } public class SubClassA : SuperClassEmptyCtor { // No Ctor's this is fine since we have // a default (empty ctor in the base) } public class SuperClassCtor { public SuperClassCtor(string value) { // Default Ctor } } public class SubClassB : SuperClassCtor { // This fails because we need to satisfy // the ctor for the base class. } public class SubClassC : SuperClassCtor { public SubClassC(string value) : base(value) { // make it easy and pipe the params // straight to the base! } } ```
In C#, do you need to call the base constructor?
[ "", "c#", "inheritance", "constructor", "" ]
I would like to sort an array in ascending order using `C/C++`. The outcome is an array containing element indexes. Each index is corespondent to the element location in the sorted array. **Example** ``` Input: 1, 3, 4, 9, 6 Output: 1, 2, 3, 5, 4 ``` **Edit:** I am using shell sort procedure. The duplicate value indexes are arbitrarily chosen based on which duplicate values are first in the original array. ### Update: Despite my best efforts, I haven't been able to implement a sorting algorithm for an array of pointers. The current example won't compile. Could someone please tell me what's wrong? I'd very much appreciate some help! ``` void SortArray(int ** pArray, int ArrayLength) { int i, j, flag = 1; // set flag to 1 to begin initial pass int * temp; // holding variable orig with no * for (i = 1; (i <= ArrayLength) && flag; i++) { flag = 0; for (j = 0; j < (ArrayLength - 1); j++) { if (*pArray[j + 1] > *pArray[j]) // ascending order simply changes to < { &temp = &pArray[j]; // swap elements &pArray[j] = &pArray[j + 1]; //the problem lies somewhere in here &pArray[j + 1] = &temp; flag = 1; // indicates that a swap occurred. } } } }; ```
Since you're using C++, I would do it something like this. The `SortIntPointers` function can be any sort algorithm, the important part is that it sorts the array of pointers based on the `int` that they are pointing to. Once that is done, you can go through the array of pointers and assign their sorted index which will end up in the original position in the original array. ``` int* intArray; // set somewhere else int arrayLen; // set somewhere else int** pintArray = new int*[arrayLen]; for(int i = 0; i < arrayLen; ++i) { pintArray[i] = &intArray[i]; } // This function sorts the pointers according to the values they // point to. In effect, it sorts intArray without losing the positional // information. SortIntPointers(pintArray, arrayLen); // Dereference the pointers and assign their sorted position. for(int i = 0; i < arrayLen; ++i) { *pintArray[i] = i; } ``` Hopefully that's clear enough.
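The `SortIntPointers` function referenced above is never defined in the answer; a minimal sketch of what it might look like (the name and signature are taken from the answer, the functor name is made up) is:

```cpp
#include <algorithm>

// Hypothetical implementation of the SortIntPointers helper referenced
// above: sort the pointer array by the int values the pointers refer to,
// leaving the underlying intArray untouched.
struct PointeeLess {
    bool operator()(const int* lhs, const int* rhs) const {
        return *lhs < *rhs;
    }
};

void SortIntPointers(int** pArray, int arrayLen) {
    std::sort(pArray, pArray + arrayLen, PointeeLess());
}
```

After this call, `pArray[0]` points at the smallest element of the original array, so the follow-up loop in the answer (`*pintArray[i] = i;`) writes each element's sorted position back into its original slot.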
Ok, here is my attempt in C++ ``` #include <iostream> #include <algorithm> struct mycomparison { bool operator() (int* lhs, int* rhs) {return (*lhs) < (*rhs);} }; int main(int argc, char* argv[]) { int myarray[] = {1, 3, 6, 2, 4, 9, 5, 12, 10}; const size_t size = sizeof(myarray) / sizeof(myarray[0]); int *arrayofpointers[size]; for(int i = 0; i < size; ++i) { arrayofpointers[i] = myarray + i; } std::sort(arrayofpointers, arrayofpointers + size, mycomparison()); for(int i = 0; i < size; ++i) { *arrayofpointers[i] = i + 1; } for(int i = 0; i < size; ++i) { std::cout << myarray[i] << " "; } std::cout << std::endl; return 0; } ```
How does one rank an array (sort) by value? *With a twist*
[ "", "c++", "c", "arrays", "sorting", "" ]
Is there a way to find the name of the program that is running in Java? The class of the main method would be good enough.
Try this: ``` StackTraceElement[] stack = Thread.currentThread ().getStackTrace (); StackTraceElement main = stack[stack.length - 1]; String mainClass = main.getClassName (); ``` Of course, this only works if you're running from the main thread. Unfortunately I don't think there's a system property you can query to find this out. **Edit:** Pulling in @John Meagher's comment, which is a great idea: > To expand on @jodonnell you can also > get all stack traces in the system > using Thread.getAllStackTraces(). From > this you can search all the stack > traces for the "main" Thread to > determine what the main class is. This > will work even if your class is not > running in the main thread.
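The `Thread.getAllStackTraces()` idea quoted above can be sketched as follows (the class and method names here are made up for illustration; the lookup returns `null` if the main thread has already exited or was renamed):

```java
import java.util.Map;

// Sketch of the approach from the quoted comment: scan every thread's
// stack trace and take the bottom frame of the thread named "main".
class MainClassFinder {
    static String findMainClass() {
        for (Map.Entry<Thread, StackTraceElement[]> entry
                : Thread.getAllStackTraces().entrySet()) {
            if ("main".equals(entry.getKey().getName())) {
                StackTraceElement[] stack = entry.getValue();
                if (stack.length > 0) {
                    return stack[stack.length - 1].getClassName();
                }
            }
        }
        return null; // main thread already finished, or renamed
    }
}
```

Unlike the snippet above, this works from any thread, as long as the main thread is still running.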
``` System.getProperty("sun.java.command") ```
$0 (Program Name) in Java? Discover main class?
[ "", "java", "" ]
I'm trying to find a way to list the (static) dependency requirements of a jar file, in terms of which symbols are required at run time. I can see that the methods exported by classes can be listed using "javap", but there doesn't seem to be an opposite facility to list the 'imports'. Is it possible to do this? This would be similar to the dumpbin utility in Windows development which can be used to list the exports and imports of a DLL. EDIT : Thanks for the responses; I checked out all of the suggestions; accepted DependencyFinder as it most closely meets what I was looking for.
You could use the [Outbound dependencies](http://depfind.sourceforge.net/Manual.html#ShowingOnlyOutboundDependencies) feature of [DependencyFinder](http://depfind.sourceforge.net/). You can do that entirely in the GUI, or in command line exporting XML.
I think you can get that information using [JDepend](http://www.clarkware.com/software/JDepend.html)
Java import/export dependencies
[ "", "java", "import", "export", "" ]
I'm looking for some way to effectively hide inherited members. I have a library of classes which inherit from common base classes. Some of the more recent descendant classes inherit dependency properties which have become vestigial and can be a little confusing when using *IntelliSense* or using the classes in a visual designer. These classes are all controls that are written to be compiled for either WPF or Silverlight 2.0. I know about `ICustomTypeDescriptor` and `ICustomPropertyProvider`, but I'm pretty certain those can't be used in Silverlight. It's not as much a functional issue as a usability issue. What should I do? **Update** Some of the properties that I would really like to hide come from ancestors that are not my own and because of a specific tool I'm designing for, I can't do member hiding with the `new` operator. (I know, it's ridiculous)
Override them like Michael suggests [above](https://stackoverflow.com/questions/1528/#1533) and, to prevent folks from calling the overridden methods, mark them as obsolete: ``` [Obsolete("These are not supported in this class.", true)] public override void dontcallmeanymore() { } ``` If the second param is set to true, a compiler error will be generated if anyone tries to call that method, and the string in the first param is the message. If the second param is false, only a compiler warning will be generated.
While you cannot prevent usage of those inherited members to my knowledge, you should be able to hide them from IntelliSense using the [EditorBrowsableAttribute](http://msdn.microsoft.com/en-us/library/system.componentmodel.editorbrowsableattribute.aspx): ``` using System.ComponentModel; [EditorBrowsable(EditorBrowsableState.Never)] private string MyHiddenString = "Muahahahahahahahaha"; ``` *Edit:* Just saw this in the documentation comments, which makes it kinda useless for this purpose: > There is a prominent note that states that this attribute "does not suppress members from a class in the same assembly". That is true but not complete. Actually, the attribute does not suppress members from a class in the same solution.
Hiding inherited members
[ "", "c#", "wpf", "silverlight", "polymorphism", "dependency-properties", "" ]
I am attempting to POST against a vendor's server using PHP 5.2 with cURL. I'm reading in an XML document to post against their server and then reading in a response: ``` $request = trim(file_get_contents('test.xml')); $curlHandle = curl_init($servletURL); curl_setopt($curlHandle, CURLOPT_POST, TRUE); curl_setopt($curlHandle, CURLOPT_POSTFIELDS, array('XML'=>$request)); curl_setopt($curlHandle, CURLOPT_RETURNTRANSFER, TRUE); curl_setopt($curlHandle, CURLOPT_HEADER, FALSE); # Have also tried leaving this out $response = curl_exec($curlHandle); ``` That code, in and of itself, works OK, but the other server returns a response from its XML parser stating: > Content not allowed in prolog I looked that error up and this is normally caused by whitespace before the XML, but I made sure that the XML file itself has no whitespace and the trim() should clear that up anyway. I did a TCPDump on the connection while I ran the code and this is what is sent out: ``` POST {serverURL} HTTP/1.1 Host: {ip of server}:8080 Accept: */* Content-Length: 921 Expect: 100-continue Content-Type: multipart/form-data; boundry:---------------------------01e7cda3896f ---------------------------01e7cda3896f Content-Disposition: form-data; name="XML" [SNIP - the XML was displayed] ---------------------------01e7cda3896f-- ``` Before and after the [SNIP] line there is visible whitespace when I replay the session in Ethereal. Is this what is causing the problem and, if so, how can I remove it, or am I looking too far and this may be an issue with the server I'm posting against?
It turns out it's an encoding issue. The app apparently needs the XML in www-form-urlencoded instead of form-data so I had to change: ``` # This sets the encoding to multipart/form-data curl_setopt($curlHandle, CURLOPT_POSTFIELDS, array('XML'=>$request)); ``` to ``` # This sets it to application/x-www-form-urlencoded curl_setopt($curlHandle, CURLOPT_POSTFIELDS, 'XML=' . urlencode($request)); ```
Not an answer, but I find the whole fopen/fread/fclose thing very dull to peruse when looking at code. You can replace: ``` $file = 'test.xml'; $fileHandle = fopen($file, 'r'); $request = fread($fileHandle, filesize($file)); fclose($fileHandle); $request = trim($request); ``` With: ``` $request = trim(file_get_contents('test.xml')); ``` But anyway - to your question; if those are the headers that are being sent, then it shouldn't be a problem with the remote server. Try changing the contents of your xml file and using var\_dump() to check the exact output (including the string length, so you can look for missing things) Hope that helps
cURL adding whitespace to post content?
[ "", "php", "xml", "curl", "" ]
Is there any way to capture the MouseDown event from the .NET 2.0 TextBox control? I know the inherited Control class has the event, but it's not exposed in TextBox. Is there a way to override the event handler? I also tried the OpenNETCF TextBox2 control which does have the MouseDown event exposed, but no matter what I do, it doesn't fire the handler. Any suggestions? --- > What kind of crazy mobile device do > you have that has a mouse? :) Yes, Windows Mobile does not have an actual mouse, but you are mistaken that Windows Mobile .NET does not support mouse events. A click or move on the screen is still considered a "Mouse" event. It was done this way so that code could port over from full Windows easily. And this is not a Windows Mobile specific issue. The TextBox control on Windows does not have native mouse events either. I just happened to be using Windows Mobile in this case. Edit: And on a side note...as Windows Mobile is built on the Windows CE core, which is often used for embedded desktop systems and Slim Terminal Services clients or "WinTerms", it has supported a hardware mouse for a long time. Most devices just don't have the ports to plug one in. --- > According to the .Net Framework, the > MouseDown Event Handler on a TextBox > is supported. What happens when you > try to run the code? Actually, that's only there because it inherits it from "Control", as does *every* other Form control. It is, however, overridden (and changed to private I believe) in the TextBox class. So it will not show up in IntelliSense in Visual Studio. However, you actually can write the code: ``` textBox1.MouseDown += new System.Windows.Forms.MouseEventHandler(this.textBox1_MouseDown); ``` and it will compile and run just fine, the only problem is that textBox1\_MouseDown() will not be fired when you tap the TextBox control. I assume this is because of the Event being overridden internally. 
I don't even want to change what's happening on the event internally, I just want to add my own event handler to that event so I can fire some custom code as you could with any other event.
Looks like you're right. Bummer. No MouseOver event. One of the fallbacks that always works with .NET, though, is P/Invoke. Someone already took the time to do this for the .NET CF TextBox. I found this on CodeProject: <http://www.codeproject.com/KB/cs/TextBox_subclassing.aspx> Hope this helps
I know this answer is way late, but hopefully it ends up being useful for someone who finds this. Also, I didn't entirely come up with it myself. I believe I originally found most of the info on the OpenNETCF boards, but what is typed below is extracted from one of my applications. You can get a mousedown event by implementing the OpenNETCF.Windows.Forms.IMessageFilter interface and attaching it to your application's message filter. ``` static class Program { public static MouseUpDownFilter mudFilter = new MouseUpDownFilter(); public static void Main() { Application2.AddMessageFilter(mudFilter); Application2.Run(new MainForm()); } } ``` This is how you could implement the MouseUpDownFilter: ``` public class MouseUpDownFilter : IMessageFilter { List<Control> ControlList = new List<Control>(); public void WatchControl(Control buttonToWatch) { ControlList.Add(buttonToWatch); } public event MouseEventHandler MouseUp; public event MouseEventHandler MouseDown; public bool PreFilterMessage(ref Microsoft.WindowsCE.Forms.Message m) { const int WM_LBUTTONDOWN = 0x0201; const int WM_LBUTTONUP = 0x0202; // If the message code isn't one of the ones we're interested in // then we can stop here if (m.Msg != WM_LBUTTONDOWN && m.Msg != WM_LBUTTONUP) { return false; } // see if the control is a watched button foreach (Control c in ControlList) { if (m.HWnd == c.Handle) { int i = (int)m.LParam; int x = i & 0xFFFF; int y = (i >> 16) & 0xFFFF; MouseEventArgs args = new MouseEventArgs(MouseButtons.Left, 1, x, y, 0); if (m.Msg == WM_LBUTTONDOWN) MouseDown(c, args); else MouseUp(c, args); // returning true means we've processed this message return true; } } return false; } } ``` Now this MouseUpDownFilter will fire a MouseUp/MouseDown event when they occur on a watched control, for example your textbox. 
To use this filter you add some watched controls and assign to the events it might fire in your form's load event: ``` private void MainForm_Load(object sender, EventArgs e) { Program.mudFilter.WatchControl(this.textBox1); Program.mudFilter.MouseDown += new MouseEventHandler(mudFilter_MouseDown); Program.mudFilter.MouseUp += new MouseEventHandler(mudFilter_MouseUp); } void mudFilter_MouseDown(object sender, MouseEventArgs e) { if (sender == textBox1) { // do what you want to do in the textBox1 mouse down event :) } } ```
Capture MouseDown event for .NET TextBox
[ "", "c#", ".net", "events", "windows-mobile", "" ]
I am writing an application in Java for the desktop using the Eclipse SWT library for GUI rendering. I think SWT helps Java get over the biggest hurdle for acceptance on the desktop: namely providing a Java application with a consistent, responsive interface that looks like that belonging to any other app on your desktop. However, I feel that packaging an application is still an issue. OS X natively provides an easy mechanism for wrapping Java apps in native application bundles, but producing an app for Windows/Linux that doesn't require the user to run an ugly batch file or click on a .jar is still a hassle. Possibly that's not such an issue on Linux, where the user is likely to be a little more tech-savvy, but on Windows I'd like to have a regular .exe for him/her to run. Has anyone had any experience with any of the .exe generation tools for Java that are out there? I've tried JSmooth but had various issues with it. Is there a better solution before I crack out Visual Studio and roll my own? **Edit:** I should perhaps mention that I am unable to spend a lot of money on a commercial solution.
To follow up on pauxu's answer, I'm using launch4j and NSIS on a project of mine and thought it would be helpful to show just how I'm using them. Here's what I'm doing for Windows. BTW, I'm creating .app and .dmg for Mac, but haven't figured out what to do for Linux yet. ## Project Copies of launch4j and NSIS In my project I have a "vendor" directory and underneath it I have a directory for "launch4j" and "nsis". Within each is a copy of the install for each application. I find it easier to have a copy local to the project rather than forcing others to install both products and set up some kind of environment variable to point to each. ## Script Files I also have a "scripts" directory in my project that holds various configuration/script files for my project. First there is the launch4j.xml file: ``` <launch4jConfig> <dontWrapJar>true</dontWrapJar> <headerType>gui</headerType> <jar>rpgam.jar</jar> <outfile>rpgam.exe</outfile> <errTitle></errTitle> <cmdLine></cmdLine> <chdir>.</chdir> <priority>normal</priority> <downloadUrl>http://www.rpgaudiomixer.com/</downloadUrl> <supportUrl></supportUrl> <customProcName>false</customProcName> <stayAlive>false</stayAlive> <manifest></manifest> <icon></icon> <jre> <path></path> <minVersion>1.5.0</minVersion> <maxVersion></maxVersion> <jdkPreference>preferJre</jdkPreference> </jre> <splash> <file>..\images\splash.bmp</file> <waitForWindow>true</waitForWindow> <timeout>60</timeout> <timeoutErr>true</timeoutErr> </splash> </launch4jConfig> ``` And then there's the NSIS script rpgam-setup.nsis. It can take a VERSION argument to help name the file. 
``` ; The name of the installer Name "RPG Audio Mixer" !ifndef VERSION !define VERSION A.B.C !endif ; The file to write outfile "..\dist\installers\windows\rpgam-${VERSION}.exe" ; The default installation directory InstallDir "$PROGRAMFILES\RPG Audio Mixer" ; Registry key to check for directory (so if you install again, it will ; overwrite the old one automatically) InstallDirRegKey HKLM "Software\RPG_Audio_Mixer" "Install_Dir" # create a default section. section "RPG Audio Mixer" SectionIn RO ; Set output path to the installation directory. SetOutPath $INSTDIR File /r "..\dist\layout\windows\" ; Write the installation path into the registry WriteRegStr HKLM SOFTWARE\RPG_Audio_Mixer "Install_Dir" "$INSTDIR" ; Write the uninstall keys for Windows WriteRegStr HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\RPGAudioMixer" "DisplayName" "RPG Audio Mixer" WriteRegStr HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\RPGAudioMixer" "UninstallString" '"$INSTDIR\uninstall.exe"' WriteRegDWORD HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\RPGAudioMixer" "NoModify" 1 WriteRegDWORD HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\RPGAudioMixer" "NoRepair" 1 WriteUninstaller "uninstall.exe" ; read the value from the registry into the $0 register ;readRegStr $0 HKLM "SOFTWARE\JavaSoft\Java Runtime Environment" CurrentVersion ; print the results in a popup message box ;messageBox MB_OK "version: $0" sectionEnd Section "Start Menu Shortcuts" CreateDirectory "$SMPROGRAMS\RPG Audio Mixer" CreateShortCut "$SMPROGRAMS\RPG Audio Mixer\Uninstall.lnk" "$INSTDIR\uninstall.exe" "" "$INSTDIR\uninstall.exe" 0 CreateShortCut "$SMPROGRAMS\RPG AUdio Mixer\RPG Audio Mixer.lnk" "$INSTDIR\rpgam.exe" "" "$INSTDIR\rpgam.exe" 0 SectionEnd Section "Uninstall" ; Remove registry keys DeleteRegKey HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\RPGAudioMixer" DeleteRegKey HKLM SOFTWARE\RPG_Audio_Mixer ; Remove files and uninstaller Delete 
$INSTDIR\rpgam.exe Delete $INSTDIR\uninstall.exe ; Remove shortcuts, if any Delete "$SMPROGRAMS\RPG Audio Mixer\*.*" ; Remove directories used RMDir "$SMPROGRAMS\RPG Audio Mixer" RMDir "$INSTDIR" SectionEnd ``` ## Ant Integration I have some targets in my Ant buildfile (build.xml) to handle the above. First I tel Ant to import launch4j's Ant tasks: ``` <property name="launch4j.dir" location="vendor/launch4j" /> <taskdef name="launch4j" classname="net.sf.launch4j.ant.Launch4jTask" classpath="${launch4j.dir}/launch4j.jar:${launch4j.dir}/lib/xstream.jar" /> ``` I then have a simple target for creating the wrapper executable: ``` <target name="executable-windows" depends="jar" description="Create Windows executable (EXE)"> <launch4j configFile="scripts/launch4j.xml" outfile="${exeFile}" /> </target> ``` And another target for making the installer: ``` <target name="installer-windows" depends="executable-windows" description="Create the installer for Windows (EXE)"> <!-- Lay out files needed for building the installer --> <mkdir dir="${windowsLayoutDirectory}" /> <copy file="${jarFile}" todir="${windowsLayoutDirectory}" /> <copy todir="${windowsLayoutDirectory}/lib"> <fileset dir="${libraryDirectory}" /> <fileset dir="${windowsLibraryDirectory}" /> </copy> <copy todir="${windowsLayoutDirectory}/icons"> <fileset dir="${iconsDirectory}" /> </copy> <copy todir="${windowsLayoutDirectory}" file="${exeFile}" /> <mkdir dir="${windowsInstallerDirectory}" /> <!-- Build the installer using NSIS --> <exec executable="vendor/nsis/makensis.exe"> <arg value="/DVERSION=${version}" /> <arg value="scripts/rpgam-setup.nsi" /> </exec> </target> ``` The top portion of that just copies the necessary files for the installer to a temporary location and the second half executes the script that uses all of it to make the installer.
In my company we use [Launch4J](http://launch4j.sourceforge.net/) to create the exe file, and [NSIS](http://nsis.sourceforge.net/) to create the installer, with SWT applications. We have used it for years in several commercial applications and the pair works fine.
Packaging Java apps for the Windows/Linux desktop
[ "", "java", "windows", "swt", "executable", "software-distribution", "" ]
I've never been completely happy with the way exception handling works, there's a lot exceptions and try/catch brings to the table (stack unwinding, etc.), but it seems to break a lot of the OO model in the process. Anyway, here's the problem: Let's say you have some class which wraps or includes networked file IO operations (e.g. reading and writing to some file at some particular UNC path somewhere). For various reasons you don't want those IO operations to fail, so if you detect that they fail you retry them and you keep retrying them until they succeed or you reach a timeout. I already have a convenient RetryTimer class which I can instantiate and use to sleep the current thread between retries and determine when the timeout period has elapsed, etc. The problem is that you have a bunch of IO operations in several methods of this class, and you need to wrap each of them in try-catch / retry logic. Here's an example code snippet: ``` RetryTimer fileIORetryTimer = new RetryTimer(TimeSpan.FromHours(10)); bool success = false; while (!success) { try { // do some file IO which may succeed or fail success = true; } catch (IOException e) { if (fileIORetryTimer.HasExceededRetryTimeout) { throw e; } fileIORetryTimer.SleepUntilNextRetry(); } } ``` So, how do you avoid duplicating most of this code for every file IO operation throughout the class? My solution was to use anonymous delegate blocks and a single method in the class which executed the delegate block passed to it. This allowed me to do things like this in other methods: ``` this.RetryFileIO( delegate() { // some code block } ); ``` I like this somewhat, but it leaves a lot to be desired. I'd like to hear how other people would solve this sort of problem.
This looks like an excellent opportunity to have a look at Aspect Oriented Programming. Here is a good article on [AOP in .NET](https://learn.microsoft.com/en-us/archive/blogs/simonince/aspect-oriented-interception). The general idea is that you'd extract the cross-functional concern (i.e. Retry for x hours) into a separate class and then you'd annotate any methods that need to modify their behaviour in that way. Here's how it might look (with a nice extension method on Int32) ``` [RetryFor( 10.Hours() )] public void DeleteArchive() { //.. code to just delete the archive } ```
Just wondering, what do you feel your method leaves to be desired? You could replace the anonymous delegate with a.. named? delegate, something like ``` public delegate void IoOperation(params string[] parameters); public void FileDeleteOperation(params string[] fileName) { File.Delete(fileName[0]); } public void FileCopyOperation(params string[] fileNames) { File.Copy(fileNames[0], fileNames[1]); } public void RetryFileIO(IoOperation operation, params string[] parameters) { RetryTimer fileIORetryTimer = new RetryTimer(TimeSpan.FromHours(10)); bool success = false; while (!success) { try { operation(parameters); success = true; } catch (IOException e) { if (fileIORetryTimer.HasExceededRetryTimeout) { throw; } fileIORetryTimer.SleepUntilNextRetry(); } } } public void Foo() { this.RetryFileIO(FileDeleteOperation, "L:\file.to.delete" ); this.RetryFileIO(FileCopyOperation, "L:\file.to.copy.source", "L:\file.to.copy.destination" ); } ```
Reducing duplicate error handling code in C#?
[ "", "c#", "exception", "error-handling", "" ]
I have been working with Visual Studio (WinForm and ASP.NET applications using mostly C#) for several months now. For the most part my IDE is set up fairly standard but I have been wondering what are some suggestions in terms of plugins/settings that you find to be the most useful? **Update**: Thanks for all the great suggestions. It looks like a general consensus that I should look into 'Resharper' along with some eye-candy with themes and custom fonts. --- **Themes** * [Consolas Font Pack for Visual Studio 2005/2008](http://www.microsoft.com/downloads/details.aspx?familyid=22e69ae4-7e40-4807-8a86-b3d36fab68d3&displaylang=en) * [Scott Hanselman's Visual Studio Themes Gallery](http://www.hanselman.com/blog/VisualStudioProgrammerThemesGallery.aspx) * [Visual Studio Theme Generator](http://frickinsweet.com/tools/Theme.mvc.aspx) **Free Tools** * [PowerCommands for Visual Studio 2008](http://visualstudiogallery.msdn.microsoft.com/en-us/df3f0c30-3d37-4e06-9ef8-3bff3508be31) * [GhostDoc](http://submain.com/products/ghostdoc.aspx) * [HyperAddin](http://www.codeplex.com/hyperAddin) * [RockScroll](http://www.hanselman.com/blog/IntroducingRockScroll.aspx) * [CodeRush XPress](http://www.devexpress.com/Products/Visual_Studio_Add-in/CodeRushX/) * [.NET Reflector](http://www.red-gate.com/products/reflector/) - (Not a plugin but still useful) **Paid Tools** * [Resharper](http://www.jetbrains.com/resharper/) - Free (Open Source), $49 (Academic), $199 (Personal), $349 (Commercial) * [CodeRush with Refactor!™ Pro](http://www.devexpress.com/Products/Visual_Studio_Add-in/Coding_Assistance/) - $249
**[Resharper](http://www.jetbrains.com/resharper/)** is definitely a great tool. It has a moderate learning curve but is easy to pick up for some simple things and add mastery later. It is a good price for students and kinda expensive for the rest of us. Resharper is similar to CodeRush, but seems to have a larger following. **[PowerCommands](http://www.visualstudiogallery.com/ExtensionDetails.aspx?ExtensionID=df3f0c30-3d37-4e06-9ef8-3bff3508be31)** is a great set of add-ons that comes free from Microsoft. Things like "Open in Windows Explorer", "Command Prompt Here", and Copy/Paste references. A discussion regarding **add-ins** is [floating around here somewhere](https://stackoverflow.com/questions/2767/do-you-have-any-recommended-add-onsplugins-for-microsoft-visual-studio). For straight-up customization, **changing colors** is fun, easy, and gives you a big bang for your buck. I prefer a slightly personalized version of [Rob Conery's TextMate theme for Visual Studio.](http://blog.wekeroad.com/2007/10/17/textmate-theme-for-visual-studio-take-2/) Once you get colors you like, you can just [export the settings](http://msdn.microsoft.com/en-us/library/zbhkx167(VS.80).aspx) and carry them with you wherever you go. Related to colors and themes, the [**Consolas** font pack](http://www.microsoft.com/downloads/details.aspx?familyid=22e69ae4-7e40-4807-8a86-b3d36fab68d3&displaylang=en) is pretty nifty and easy on the eye. And like John recommends, a [mastery of **keyboard shortcuts**](https://stackoverflow.com/questions/20507/give-me-awesome-visual-studio-keyboard-short-cuts) will pay big dividends.
Make sure you install a custom color theme. These URLs are a good place to start looking for one: <http://www.codinghorror.com/blog/archives/000682.html> <http://www.hanselman.com/blog/VisualStudioProgrammerThemesGallery.aspx> I myself love Oren Ellenbogen's Dark Scheme. Really pleasant to the eyes. Also, make sure to replace the default font with Consolas or Inconsolata (one is Microsoft's, the other is free). They are both awesome.
Customizing Visual Studio
[ "", "c#", "asp.net", "winforms", "visual-studio", "" ]
I'm considering developing a website similar to stackoverflow, but the answers may also consist of drawings (schematics, in this case). I want to have an area in the answer form where they can make this schematic without requiring special plugins, etc. 1. Are we to the point where SVG has or should have critical mass soon (1-2 years) such that designing a website where script-run SVG as a primary feature is reasonable (ie, requiring Firefox or another SVG/AJAX compliant browser)? 2. What are some good resources for learning cross platform SVG scripting (likely in javascript)?
Unfortunately, I don't have an answer, but I do have three pointers to projects that you could look at. The first is the [Lively Kernel](http://Research.Sun.Com/projects/lively/ "Lively Kernel") by Dan Ingalls (yes, *the* Dan Ingalls) at Sun Labs. It is an implementation of a Smalltalk Virtual World in JavaScript on top of SVG. More precisely, it is an implementation of the Morphic GUI framework from Squeak Smalltalk in JavaScript using SVG and a port of (parts of) Squeak Smalltalk in JavaScript. Or, if you're not a Smalltalker and the above doesn't make sense to you: it's an Operating System, written in JavaScript with the JavaScript interpreter as the CPU, SVG as the graphics card and the browser as the computer. This is about as extreme as it gets, when it comes to JavaScript and SVG. And it only *fully* works in Safari 3 and partly in Firefox 3, although there is an experimental port to Internet Explorer as well. The second project is John Resig's [Processing.js](http://EJohn.Org/blog/processingjs/ "Processing.js") port of the Processing visualization language to JavaScript. It uses the `<canvas>` element instead of SVG precisely because of the problems that you mentioned. This one however, only works in Firefox 3. The third one is [Real-Time 3D in JavaScript](http://UselessPickles.Com/triangles/demo.html "Triangles in JavaScript") by Useless Pickles. It uses *only* JavaScript, DOM and CSS and *no* SVG or `<canvas>` or Flash or whatever. *And* it is portable to almost any browser, including Internet Explorer 7 and up. Doing 2D should be even easier than this. Between those three projects you should be able to find some inspiration and also to find some people who tried to push the envelope with JavaScript and SVG or JavaScript and Graphics and can tell you what works and what doesn't. 
Conclusion: doing cross-browser SVG or cross-browser `<canvas>` is nigh impossible, but with a little bit of craziness, cross-browser graphics *without* SVG or `<canvas>` is possible.
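For the schematic-answers use case in the original question, one lowest-common-denominator option is to keep the schematic as data and generate the SVG as a markup string, so the same output can be embedded inline, served as `image/svg+xml`, or handed to a fallback renderer for browsers without SVG support. A minimal sketch (the helper names here are made up for illustration):

```javascript
// Hypothetical helpers: serialize a list of shape descriptions into an
// SVG document string, rather than building DOM nodes directly.
function svgElement(tag, attrs) {
  var parts = ['<' + tag];
  for (var name in attrs) {
    parts.push(' ' + name + '="' + attrs[name] + '"');
  }
  parts.push('/>');
  return parts.join('');
}

function schematic(width, height, shapes) {
  var body = [];
  for (var i = 0; i < shapes.length; i++) {
    body.push(svgElement(shapes[i].tag, shapes[i].attrs));
  }
  return '<svg xmlns="http://www.w3.org/2000/svg" width="' + width +
         '" height="' + height + '">' + body.join('') + '</svg>';
}
```

For example, `schematic(100, 100, [{tag: 'circle', attrs: {cx: 50, cy: 50, r: 40}}])` yields a standalone SVG document with one circle; the shape list itself is plain data that can be stored server-side with the answer.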
[Raphael](http://raphaeljs.com/) looks like an interesting take on the problem of cross-browser vector graphics.
About scripting SVG
[ "", "javascript", "ajax", "svg", "" ]
I'm developing a site in Visual Web Dev Express, and when I run/debug, I'd like to be able to set my application's document root so that I can use safer paths, like "/css/style.css' instead of "css/style.css". How would I accomplish this?
* Click on the web site node in the solution explorer. * Press F4 to see the properties window. * Change the virtual path from **/projectname** to **/** Bear in mind that this has an impact on how you expect the application/web site to be deployed. If it is ever used outside the root of a web server, the URL paths will be incorrect.
I have been looking for the virtual path field in the properties window but haven't been able to find it... it only shows me the following options * aperture directory URL * complete access route * policy file * port * use dynamic port Do you know of some place else where I could find the Virtual Path field?
Visual Web Developer (Express): Setting Document Root for Dev Environment
[ "", "c#", "asp.net", "visual-web-developer", "server-configuration", "" ]
After reading Jeff's blog post on [Protecting Your Cookies: HttpOnly](http://www.codinghorror.com/blog/archives/001167.html). I'd like to implement HttpOnly cookies in my web application. How do you tell tomcat to use http only cookies for sessions?
httpOnly is supported as of Tomcat 6.0.19 and Tomcat 5.5.28. See the [changelog](http://tomcat.apache.org/tomcat-6.0-doc/changelog.html) entry for bug 44382. The last comment for bug [44382](https://issues.apache.org/bugzilla/show_bug.cgi?id=44382) states, "this has been applied to 5.5.x and will be included in 5.5.28 onwards." However, it does not appear that 5.5.28 has been released. The httpOnly functionality can be enabled for all webapps in **conf/context.xml**: ``` <Context useHttpOnly="true"> ... </Context> ``` My interpretation is that it also works for an individual context by setting it on the desired ***Context*** entry in **conf/server.xml** (in the same manner as above).
> **Update: The JSESSIONID stuff here is > only for older containers. Please use > jt's currently accepted answer unless > you are using < Tomcat 6.0.19 or < Tomcat > 5.5.28 or another container that does not support HttpOnly JSESSIONID cookies as a config option.** When setting cookies in your app, use ``` response.setHeader( "Set-Cookie", "name=value; HttpOnly"); ``` However, in many webapps, the most important cookie is the session identifier, which is automatically set by the container as the JSESSIONID cookie. If you only use this cookie, you can write a ServletFilter to re-set the cookies on the way out, forcing JSESSIONID to HttpOnly. The pages at <http://keepitlocked.net/archive/2007/11/05/java-and-httponly.aspx> and <http://alexsmolen.com/blog/?p=16> suggest adding the following in a filter. ``` if (response.containsHeader( "SET-COOKIE" )) { String sessionid = request.getSession().getId(); response.setHeader( "SET-COOKIE", "JSESSIONID=" + sessionid + ";Path=/<whatever>; Secure; HttpOnly" ); } ``` But note that this will overwrite all cookies and set only what you state in this filter. If you use cookies in addition to the JSESSIONID cookie, you'll need to extend this code to set all of them in the filter. This is not a great solution for multiple cookies, but is perhaps an acceptable quick fix for a JSESSIONID-only setup. Please note that as your code evolves over time, there's a nasty hidden bug waiting for you when you forget about this filter and try to set another cookie somewhere else in your code. Of course, it won't get set. This really is a hack, though. If you do use Tomcat and can compile it, then take a look at Shabaz's excellent suggestion to patch HttpOnly support into Tomcat.
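For readers who want to see the header shape without the servlet API, here is a small Python sketch using the standard-library `http.cookies` module; it builds the same kind of `Set-Cookie` value the filter above assembles by string concatenation. The cookie value and path are placeholders, not values from the thread:

```python
from http.cookies import SimpleCookie

# Build a session cookie carrying the Secure and HttpOnly flags,
# mirroring the "JSESSIONID=...; Path=...; Secure; HttpOnly" header
# the servlet filter above constructs by hand.
cookie = SimpleCookie()
cookie["JSESSIONID"] = "ABC123"          # placeholder session id
cookie["JSESSIONID"]["path"] = "/myapp"  # placeholder context path
cookie["JSESSIONID"]["secure"] = True
cookie["JSESSIONID"]["httponly"] = True

header_value = cookie["JSESSIONID"].OutputString()
print(header_value)
```

The point is simply that `HttpOnly` is just an attribute appended to the cookie string; any mechanism that lets you control the raw `Set-Cookie` header can set it.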
How do you configure HttpOnly cookies in tomcat / java webapps?
[ "", "java", "security", "cookies", "xss", "httponly", "" ]
So I'm getting a new job working with databases (Microsoft SQL Server, to be precise). I know nothing about SQL, much less SQL Server. They said they'd train me, but I want to take some initiative and learn about it on my own to get ahead. Where's the best place to start (tutorials, books, etc.)? I want to learn more about the SQL language itself than any of the fancy point-and-click stuff.
If you're planning on coding against a SQL database using .NET, skip ADO and go directly to LINQ. You will NOT miss anything. Oh, also, Joe Celko. If you see his name on an article or a book about SQL, read it.
This can be broad, but here are some responsibilities that could get thrown at you, in brain-dump format. On the DBA end: * Backups * Indexes * Triggers * Security (per table, per database, creating users, etc.) * ODBC in your Windows control panel * Know your normal forms * The difference between a data warehouse (for reporting) and a transactional database for most everything else (especially reporting, in most environments) On the programming end: * Reporting (run for the hills) * Stored procedures * Star and snowflake schemas * ADO, ODBC * CRUD apps (Create, Read, Update, Delete)
Best way to learn SQL Server
[ "", "sql", "sql-server", "database", "" ]
I'm not quite sure if this is possible, or falls into the category of pivot tables, but I figured I'd go to the pros to see. I have three basic tables: Card, Property, and CardProperty. Since cards do not all have the same properties, and often have multiple values for the same property, I decided to use the junction-table approach to store data instead of having a really big column structure in my Card table. The Property table is a basic keyword/value type table. So you have the keyword ATK and the value assigned to it. There is another property called SpecialType for which a card can have multiple values, such as "Synchro" and "DARK". What I'd like to do is create a view or stored procedure that gives me the Card ID, Card Name, and all the property keywords assigned to the card as columns, with their values in the ResultSet for a specified card. So ideally I'd have a result set like: ``` ID NAME SPECIALTYPE 1 Red Dragon Archfiend Synchro 1 Red Dragon Archfiend DARK 1 Red Dragon Archfiend Effect ``` and I could tally my results that way. I guess even slicker would be to simply concatenate the properties together based on their keyword, so I could generate a ResultSet like: ``` 1 Red Dragon Archfiend Synchro/DARK/Effect ``` ..but I don't know if that's feasible. Help me stackoverflow Kenobi! You're my only hope.
Is this for SQL Server? If yes, then: [Concatenate Values From Multiple Rows Into One Column (2000)](http://wiki.lessthandot.com/index.php/Concatenate_Values_From_Multiple_Rows_Into_One_Column) [Concatenate Values From Multiple Rows Into One Column Ordered (2005+)](http://wiki.lessthandot.com/index.php/Concatenate_Values_From_Multiple_Rows_Into_One_Column_Ordered)
Related, but there the values are kept in separate columns and you have to know your "special types" ahead of time: [SQL query to compare product sales by month](https://stackoverflow.com/questions/17194/sql-query-to-compare-product-sales-by-month#17290) Otherwise I would do this with a cursor in a stored procedure, or perform the transformation in the business or presentation layer. A stab at the SQL, if you know all the cases: ``` Select ID, NAME, Synchro + DARK + Effect -- add some substring logic to trim any trailing /'s from (select ID, NAME --may need to replace max() with min(). ,MAX(CASE SPECIALTYPE WHEN 'Synchro' THEN SPECIALTYPE + '/' ELSE '' END) Synchro ,MAX(CASE SPECIALTYPE WHEN 'DARK' THEN SPECIALTYPE + '/' ELSE '' END) DARK ,MAX(CASE SPECIALTYPE WHEN 'Effect' THEN SPECIALTYPE ELSE '' END) Effect from table group by ID, NAME) sub1 ```
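As a sanity check of the concatenation idea, here is a sketch using SQLite's `GROUP_CONCAT` via Python's `sqlite3` module. This is a stand-in, not the asker's environment: SQL Server would need a different technique (such as `FOR XML PATH`), and the table layout below is only my reading of the question's schema:

```python
import sqlite3

# Three-table layout from the question: Card, Property, and a
# CardProperty junction table linking them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Card (ID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Property (ID INTEGER PRIMARY KEY, Keyword TEXT, Value TEXT);
    CREATE TABLE CardProperty (CardID INTEGER, PropertyID INTEGER);

    INSERT INTO Card VALUES (1, 'Red Dragon Archfiend');
    INSERT INTO Property VALUES (10, 'SpecialType', 'Synchro'),
                                (11, 'SpecialType', 'DARK'),
                                (12, 'SpecialType', 'Effect');
    INSERT INTO CardProperty VALUES (1, 10), (1, 11), (1, 12);
""")

# Collapse the multi-valued SpecialType property into one
# slash-separated column per card.
row = conn.execute("""
    SELECT c.ID, c.Name, GROUP_CONCAT(p.Value, '/') AS SpecialTypes
    FROM Card c
    JOIN CardProperty cp ON cp.CardID = c.ID
    JOIN Property p ON p.ID = cp.PropertyID AND p.Keyword = 'SpecialType'
    GROUP BY c.ID, c.Name
""").fetchone()
print(row)  # e.g. (1, 'Red Dragon Archfiend', 'Synchro/DARK/Effect');
            # GROUP_CONCAT does not guarantee the order of the parts
```

So the "Synchro/DARK/Effect" result the asker wants is feasible with one grouped query, given an aggregate string-concatenation function.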
Query a union table with fields as columns
[ "", "sql", "database-design", "stored-procedures", "" ]
How is it possible to make prototype methods in C#.Net? In JavaScript, I can do the following to create a trim method for the string object: ``` String.prototype.trim = function() { return this.replace(/^\s+|\s+$/g,""); } ``` How can I go about doing this in C#.Net?
You can't dynamically add methods to existing objects or classes in .NET, except by changing the source for that class. You can, however, in C# 3.0, use extension methods, which *look* like new methods, but are compile-time magic. To do this for your code: ``` public static class StringExtensions { public static String trim(this String s) { return s.Trim(); } } ``` To use it: ``` String s = " Test "; s = s.trim(); ``` This looks like a new method, but will compile the exact same way as this code: ``` String s = " Test "; s = StringExtensions.trim(s); ``` What exactly are you trying to accomplish? Perhaps there are better ways of doing what you want?
It sounds like you're talking about C#'s Extension Methods. You add functionality to existing classes by inserting the "this" keyword before the first parameter. The method has to be a static method in a static class. Strings in .NET already have a "Trim" method, so I'll use another example. ``` public static class MyStringExtensions { public static bool ContainsMabster(this string s) { return s.Contains("Mabster"); } } ``` So now every string has a tremendously useful ContainsMabster method, which I can use like this: ``` if ("Why hello there, Mabster!".ContainsMabster()) { /* ... */ } ``` Note that you can also add extension methods to interfaces (e.g. IList), which means that any class implementing that interface will also pick up that new method. Any extra parameters you declare in the extension method (after the first "this" parameter) are treated as normal parameters.
How can I create Prototype Methods (like JavaScript) in C#.Net?
[ "", "c#", ".net", "" ]
How can I request a random row (or as close to truly random as possible) in pure SQL?
See this post: [SQL to Select a random row from a database table](http://www.petefreitag.com/item/466.cfm). It goes through methods for doing this in MySQL, PostgreSQL, Microsoft SQL Server, IBM DB2 and Oracle (the following is copied from that link): Select a random row with MySQL: ``` SELECT column FROM table ORDER BY RAND() LIMIT 1 ``` Select a random row with PostgreSQL: ``` SELECT column FROM table ORDER BY RANDOM() LIMIT 1 ``` Select a random row with Microsoft SQL Server: ``` SELECT TOP 1 column FROM table ORDER BY NEWID() ``` Select a random row with IBM DB2 ``` SELECT column, RAND() as IDX FROM table ORDER BY IDX FETCH FIRST 1 ROWS ONLY ``` Select a random record with Oracle: ``` SELECT column FROM ( SELECT column FROM table ORDER BY dbms_random.value ) WHERE rownum = 1 ```
Solutions like Jeremie's: ``` SELECT * FROM table ORDER BY RAND() LIMIT 1 ``` work, but they need a sequential scan of the whole table (because the random value associated with each row needs to be calculated so that the smallest one can be determined), which can be quite slow even for medium-sized tables. My recommendation would be to use some kind of indexed numeric column (many tables have these as their primary keys), and then write something like: ``` SELECT * FROM table WHERE num_value >= RAND() * ( SELECT MAX (num_value ) FROM table ) ORDER BY num_value LIMIT 1 ``` This works in logarithmic time, regardless of the table size, if `num_value` is indexed. One caveat: this assumes that `num_value` is equally distributed in the range `0..MAX(num_value)`. If your dataset strongly deviates from this assumption, you will get skewed results (some rows will appear more often than others).
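The `ORDER BY RANDOM() LIMIT 1` pattern can be tried end to end with SQLite, whose syntax matches the PostgreSQL form above; the table and data here are made up for illustration:

```python
import sqlite3

# Build a tiny throwaway table, then pull one row at random.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO t (name) VALUES (?)",
                 [("alpha",), ("beta",), ("gamma",), ("delta",)])

# Every row gets a random sort key; LIMIT 1 keeps the smallest,
# i.e. a uniformly random row -- at the cost of a full scan.
row = conn.execute(
    "SELECT id, name FROM t ORDER BY RANDOM() LIMIT 1").fetchone()
print(row)  # one of the four rows, chosen at random
```

Running it repeatedly shows different rows coming back, which also makes the full-scan cost easy to reason about: the random key must be computed for every row before the sort.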
How to request a random row in SQL?
[ "", "sql", "random", "" ]
I am writing a query in which I have to get the data for only the last year. What is the best way to do this? ``` SELECT ... FROM ... WHERE date > '8/27/2007 12:00:00 AM' ```
The following adds -1 years to the current date: ``` SELECT ... From ... WHERE date > DATEADD(year,-1,GETDATE()) ```
I found this page while looking for a solution that would help me select results from a prior calendar year. Most of the results shown above seem to return items from the past 365 days, which didn't work for me. At the same time, it did give me enough direction to solve my needs in the following code, which I'm posting here for any others who have the same need as mine and who may come across this page while searching for a solution. ``` SELECT .... FROM .... WHERE year(*your date column*) = year(DATEADD(year,-1,getdate())) ``` Thanks to those above whose solutions helped me arrive at what I needed.
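The rolling-one-year filter can be sketched with SQLite, whose `date('now', '-1 year')` plays the role of SQL Server's `DATEADD(year, -1, GETDATE())`; the table and rows here are made up for illustration:

```python
import sqlite3
import datetime

# A throwaway table of dated rows: two within the last year, one older.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, order_date TEXT)")

today = datetime.date.today()
conn.executemany(
    "INSERT INTO orders (order_date) VALUES (?)",
    [(str(today - datetime.timedelta(days=30)),),   # inside the window
     (str(today - datetime.timedelta(days=200)),),  # inside the window
     (str(today - datetime.timedelta(days=400)),)]  # older than one year
)

# Keep only rows newer than one year ago; ISO-8601 date strings
# compare correctly as text.
recent = conn.execute(
    "SELECT order_date FROM orders WHERE order_date > date('now', '-1 year')"
).fetchall()
print(len(recent))  # 2
```

Note this is the rolling-365-day variant; the calendar-year variant above instead compares `year(...)` on both sides, matching every row from last year regardless of day.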
SQL Server: Get data for only the past year
[ "", "sql", "sql-server", "database", "t-sql", "" ]