> I think you need to drop the "~/" and replace it with just "/", I believe / is the root **STOP RIGHT THERE!** :-) unless you want to hardcode your web app so that it can only be installed at the root of a web site. "~/" ***is*** the correct thing to use, but the reason your original code didn't work as expected is that `ResolveUrl` (which is used internally by `Redirect`) first tries to work out whether the path you are passing it is an absolute URL (e.g. "**http://server/**foo/bar.htm" as opposed to "foo/bar.htm") - but unfortunately it does this by simply looking for a colon character ':' in the URL you give it. In this case it finds a colon in the `ReturnPath` query string value, which fools it - therefore your '~/' doesn't get resolved. The fix is to URL-encode the `ReturnPath` value, which escapes the problematic ':' along with any other special characters: Response.Redirect("~/Login.aspx?ReturnPath=" + Server.UrlEncode(Request.Url.ToString())); Additionally, I recommend that you (or anyone) never use `Uri.ToString` - it gives a human-readable, more "friendly" version of the URL, not a necessarily correct one (it unescapes things). Instead use `Uri.AbsoluteUri` - like so: Response.Redirect("~/Login.aspx?ReturnPath=" + Server.UrlEncode(Request.Url.AbsoluteUri));
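To make the colon problem concrete, here is a minimal sketch of what percent-encoding does to a return URL. This is a hypothetical helper, not the ASP.NET implementation: reserved characters such as ':' and '/' become %XX escapes, so a resolver that scans for a raw colon no longer mistakes the value for an absolute URL.

```cpp
#include <cctype>
#include <cstdio>
#include <string>

// Illustrative percent-encoder (assumption: RFC 3986 unreserved set).
// Everything outside A-Z/a-z/0-9/-/_/./~ is escaped as %XX.
std::string UrlEncode(const std::string& value) {
    std::string out;
    for (unsigned char c : value) {
        bool unreserved = std::isalnum(c) || c == '-' || c == '_' ||
                          c == '.' || c == '~';
        if (unreserved) {
            out += static_cast<char>(c);
        } else {
            char buf[4];
            std::snprintf(buf, sizeof(buf), "%%%02X", c);  // e.g. ':' -> "%3A"
            out += buf;
        }
    }
    return out;
}
```

After encoding, `http://server/foo` becomes `http%3A%2F%2Fserver%2Ffoo` - no bare colon left for `ResolveUrl` to trip over.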
Personally I prefer the join syntax, as it makes it clearer that the tables are joined and how they are joined. Try comparing larger SQL queries where you're selecting from 8 different tables and you have lots of filtering in the where clause. By using join syntax you separate the parts where the tables are joined from the part where you are filtering the rows.
In our model (which is fairly specific to business process applications) the design artefacts include: - a domain data model, with comments on each entity and attribute - a properties file listing all the modify and create triggers on each entity, calculated attributes, validators and other business logic - a set of screen definitions (view model) However do these really count as design artefacts? Our framework is such that these definitions are used to generate the actual code of the system, so maybe they go beyond design. But the fact that they serve double duty is powerful because they are, by definition, up to date and synchronised with the code at all times.
I found a pair of [blog][1] [postings][2] that appear to cover this topic. [1]: http://blogs.msdn.com/astebner/archive/2004/11/10/255346.aspx [2]: http://blogs.msdn.com/astebner/archive/2007/06/12/3260076.aspx
My current solution is to create an exact searcher and a prefix searcher, both sorted by reverse population, and then copy out all my hits starting from the exact hits, moving to the prefix hits. It makes paging my results slightly more annoying than I think it should be. Also I used a hash to eliminate duplicates, but later changed the prefix searcher into a boolean query of a prefix search (MUST) with an exact search (MUST NOT), to have Lucene remove the duplicates - though this seemed even more wasteful. [Yuval F](http://stackoverflow.com/users/1702/yuval-f), thank you for your blog post explaining how to implement a sort comparator that conveniently does not require defining two classes. However, because the sort comparator can only work on two documents *without knowing the search term*, it cannot rank the results as I've described them in my question. How would the sort comparator know that the name field "london" exactly matches the search term "london" if it cannot access the search term?
Well, for one I'd say a main advantage is that F# compiles against the .NET platform which makes it easy to deploy on windows. I've seen examples which explained using F# combined with ASP.NET to build web applications ;-) On the other hand, Haskell has been around for waaaaay longer, so I think the group of people who are real experts on that language is a lot bigger. For F# I've only seen one real implementation so far, which is the Singularity proof of concept OS. I've seen more real world implementations of Haskell.
> Bigger C++ projects I've seen hardly used more than one namespace (e.g. boost library). Actually boost uses tons of namespaces, typically every part of boost has its own namespace for the inner workings and then may put only the public interface in the top-level namespace boost. Personally I think that the larger a code-base becomes, the more important namespaces become, even within a single application (or library). At work we put each module of our application in its own namespace. Another use (no pun intended) of namespaces that I use a lot is the anonymous namespace: namespace { const int CONSTANT = 42; } This is basically the same as: static const int CONSTANT = 42; Using an anonymous namespace (instead of static) is however the recommended way for code and data to be visible only within the current compilation unit in C++.
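A compilable sketch of the two forms side by side (names are illustrative):

```cpp
// Anonymous namespace: everything inside has internal linkage, i.e. it is
// visible only within this translation unit -- the idiomatic C++ way to do
// what file-level 'static' does in C.
namespace {
    const int kAnswer = 42;

    int helper(int x) { return x + kAnswer; }
}

// Equivalent older style, still legal but discouraged for this purpose:
static const int kLegacyAnswer = 42;

int publicEntryPoint(int x) {
    // Only this function is meant to be visible to other translation units;
    // kAnswer and helper() cannot be referenced from outside this file.
    return helper(x) + kLegacyAnswer;
}
```

Attempting to declare `helper` in another source file and call it there fails at link time, which is exactly the isolation the answer describes.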
How do you set, clear and toggle a single bit in C?
|c++|c|bit-manipulation|
How to set, clear and toggle a bit in C?
It's a little bit difficult to see the value of duck typing until you've used it for a little while. Once you get used to it, you'll realize how much of a load off your mind it is to not have to deal with interfaces or having to worry about exactly what type something is.
You can try the following tool to make things more sane: http://www.bdsoft.com/tools/stlfilt.html
Some compilers give better messages than others. What compiler are you using? Having said that, they are all pretty bad. C++0X will fix most of this problem (see [concepts][1]), but the standard won't be released until 2009, and broad support will probably be even later than that :-( [1]: http://en.wikipedia.org/wiki/C%2B%2B0x#Concepts
**Setting a bit** Use the bitwise OR operator (|) to set a bit. number |= 1 << x; That will set bit x. **Clearing a bit** Use the bitwise AND operator (&) to clear a bit. number &= ~(1 << x); That will clear bit x. You must invert the bit string with the bitwise NOT operator (~), then AND it. **Toggling a bit** The XOR operator (^) can be used to toggle a bit. number ^= 1 << x; That will toggle bit x. **Checking a bit** You didn't ask for this but I might as well add it. To check a bit, AND it with the bit you want to check: bit = number & (1 << x); That will put the value of bit x into the variable bit.
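The four operations above, collected into one compilable sketch (the function names are mine, not from the answer; `number` and `x` follow the answer's naming):

```cpp
// Set bit x: OR with a mask that has only bit x on.
unsigned set_bit(unsigned number, unsigned x)    { return number |  (1u << x); }

// Clear bit x: AND with the inverted mask, so only bit x is forced off.
unsigned clear_bit(unsigned number, unsigned x)  { return number & ~(1u << x); }

// Toggle bit x: XOR flips exactly the bits that are on in the mask.
unsigned toggle_bit(unsigned number, unsigned x) { return number ^  (1u << x); }

// Check bit x: shift it down to position 0 and mask off the rest.
unsigned check_bit(unsigned number, unsigned x)  { return (number >> x) & 1u; }
```

Using `1u` rather than `1` avoids undefined behavior when shifting into the sign bit of a plain int.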
Often I need to add a control to a dialog after the dialog has been generated via a dialog template and CreateDialogIndirect. In these cases the tab order is set by the dialog template, and there is no obvious way to change the tab order to include a newly created control.
Something like [`hg transplant`][1]? That's what we use on our dev and prod branches. [1]: http://www.selenic.com/mercurial/wiki/index.cgi/TransplantExtension
Something else you can do is EagerLoading. In Linq2SQL you can use LoadOptions : [More on LoadOptions][1] [1]: http://oakleafblog.blogspot.com/2007/08/linq-to-sql-query-execution-with.html One VERY weird thing about L2S is that you can set LoadOptions only before the first query is sent to the Database.
I recently discovered that you can use SetWindowPos to accomplish this. Determine which control after which you want to insert the new control in the tab order then use SetWindowPos like this: SetWindowPos(hNewControl, hOldControl, 0, 0, 0, 0, SWP_NOMOVE|SWP_NOSIZE); This changes the z-order of controls which, in turn, establishes the tab order.
With `ldd` you can get the libraries that tools use. To rank the usage of libraries for a set of tools you can use something like the following command. ldd /bin/* /usr/bin/* ... | sed -e '/^[^\t]/ d; s/^\t\(.* => \)\?\([^ ]*\) (.*/\2/g' | sort | uniq -c (Here `sed` strips all lines that do not start with a tab and then filters out only the actual libraries. With `sort | uniq -c` you get each library with a count indicating the number of times it occurred.) You might want to add `sort -g` at the end to get the libraries in order of usage. Note that you will probably get two non-library lines with the above command: one from static executables ("not a dynamic executable") and one for `linux-gate.so.1`, which is not a library in your file system but one "supplied" by the kernel.
Programmatic Caching is one area which immediately comes to my mind. It is a great feature which needs to be used carefully. Since it is shared across requests, you have to put locks around it before updating it. Another place I would check is any code accessing filesystem like writing to log files. If one request has a read-write lock on a file, other concurrent requests will error out if not handled properly.
[JetBrains dottrace profiler][1] is the best. I wouldn't work without it. It is hard to find a tool that is free and performs well in this arena. Dottrace is hands down the best profiler I have used for .Net. [1]: http://www.jetbrains.com/profiler/
Dynamic LINQ: Creating an extension method that produces a JSON result
|c#|linq|json|
I'm stuck trying to create a dynamic linq extension method that returns a string in JSON format - I'm using System.Linq.Dynamic and Newtonsoft.Json and I can't get the Linq.Dynamic to parse the "cell=new object[]" part. Perhaps too complex? Any ideas? : static void Main(string[] args) { NorthwindDataContext db = new NorthwindDataContext(); var query = db.Customers; string json = JSonify<Customer> .GetJsonTable( query, 2, 10, "CustomerID" , new string[] { "CustomerID", "CompanyName", "City", "Country", "Orders.Count" }); Console.WriteLine(json); } } public static class JSonify<T> { public static string GetJsonTable( this IQueryable<T> query, int pageNumber, int pageSize, string IDColumnName, string[] columnNames) { string selectItems = String.Format(@" new { {{0}} as ID, cell = new object[]{{{1}}} }", IDColumnName, String.Join(",", columnNames)); var items = new { page = pageNumber, total = query.Count(), rows = query .Select(selectItems) .Skip(pageNumber * pageSize) .Take(pageSize) }; return JavaScriptConvert.SerializeObject(items); // Should produce this result: // { // "page":2, // "total":91, // "rows": // [ // {"ID":"FAMIA","cell":["FAMIA","Familia Arquibaldo","Sao Paulo","Brazil",7]}, // {"ID":"FISSA","cell":["FISSA","FISSA Fabrica Inter. Salchichas S.A.","Madrid","Spain",0]}, // {"ID":"FOLIG","cell":["FOLIG","Folies gourmandes","Lille","France",5]}, // {"ID":"FOLKO","cell":["FOLKO","Folk och fä HB","Bräcke","Sweden",19]}, // {"ID":"FRANK","cell":["FRANK","Frankenversand","München","Germany",15]}, // {"ID":"FRANR","cell":["FRANR","France restauration","Nantes","France",3]}, // {"ID":"FRANS","cell":["FRANS","Franchi S.p.A.","Torino","Italy",6]}, // {"ID":"FURIB","cell":["FURIB","Furia Bacalhau e Frutos do Mar","Lisboa","Portugal",8]}, // {"ID":"GALED","cell":["GALED","Galería del gastrónomo","Barcelona","Spain",5]}, // {"ID":"GODOS","cell":["GODOS","Godos Cocina Típica","Sevilla","Spain",10]} // ] // } } }
Short answer: no. Such a function was considered and explicitly rejected for version 2 of the XPath spec (see the non-normative [Illustrative User-written Functions section](http://www.w3.org/TR/xpath-functions/#examples)).
cx_Oracle: how do I get the ORA-xxxxx error number?
|python|oracle-database|cx-oracle|
In a try/except block, how do I extract the Oracle error number?
According to the [URL specification][1], the square brackets are not valid URL characters. Here's the relevant snippets: > The "national" and "punctuation" characters do not appear in any productions and therefore may not appear in URLs. national { | } | vline | [ | ] | \ | ^ | ~ punctuation < | > [1]: http://www.w3.org/Addressing/URL/url-spec.txt
try: cursor.execute("select 1 / 0 from dual") except cx_Oracle.DatabaseError, exc: error, = exc print "Code:", error.code print "Message:", error.message This results in the following output: Code: 1476 Message: ORA-01476: divisor is equal to zero
Any browser or web-enabled software that accepts URLs and is not throwing an exception when special characters are introduced is almost guaranteed to be encoding the special characters behind the scenes. Curly brackets, square brackets, spaces, etc all have special encoded ways of representing them so as not to produce conflicts. As per the previous answers, the safest way to deal with these is to URL-encode them before handing them off to something that will try to resolve the URL.
A deadlock is a state of a system in which no single process/thread is capable of executing an action. As mentioned by others, a deadlock is typically the result of a situation where each process/thread wishes to acquire a lock on a resource that is already locked by another (or even the same) process/thread. There are various methods to find and avoid them. One is thinking very hard and/or trying lots of things. However, dealing with parallelism is notoriously difficult and most (if not all) people will not be able to completely avoid problems. Some more formal methods can be useful if you are serious about dealing with these kinds of issues. The most practical method that I'm aware of is the process theoretic approach. Here you model your system in some process language (e.g. CCS, CSP, ACP, mCRL2, LOTOS) and use the available tools to (model-)check for deadlocks (and perhaps some other properties as well). Examples of toolsets to use are FDR, mCRL2, CADP and Uppaal. Some brave souls might even prove their systems deadlock-free by using purely symbolic methods (theorem proving; look for Owicki-Gries). However, these formal methods typically do require some effort (e.g. learning the basics of process theory). But I guess that's simply a consequence of the fact that these problems are hard.
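Besides model checking, the classic informal avoidance rule is to always acquire locks in a fixed global order, so the circular wait a deadlock needs can never form. A small sketch of that idea (names are illustrative):

```cpp
#include <mutex>
#include <thread>

std::mutex a, b;
int shared_total = 0;

// Both workers take the locks in the same order, so neither can end up
// holding one lock while waiting forever for the other. std::scoped_lock
// with multiple mutexes additionally uses a deadlock-avoidance algorithm.
void worker() {
    for (int i = 0; i < 1000; ++i) {
        std::scoped_lock lock(a, b);
        ++shared_total;
    }
}

int run_workers() {
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
    return shared_total;  // 2000 if no increments were lost or blocked
}
```

If one thread instead locked `b` then `a` with separate `lock_guard`s, the two threads could each hold one mutex and wait on the other - the textbook deadlock.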
Can I run rubygems in ironruby?
|ironruby|
Is it currently possible with pre release of ironruby to run rubygems? It seems as if I have to set an environment variable to find them?
If you set ENV['GEM_PATH'] correctly before using rubygems, then gems will work *sometimes* under IronRuby. I'm actually looking at fixing this right now. Also, this kind of question is probably better suited for the [IronRuby mailing list][1] than it is for StackOverflow. [1]: http://rubyforge.org/mail/?group_id=4359
> Performance wise, they are exactly the same (at least in SQL Server) but be aware that they are deprecating this join syntax and it's not supported by SQL Server 2005 out of the box. I think you are thinking of the deprecated \*= and =\* operators vs. "outer join". I have just now tested the two formats given, and they work properly on a SQL Server 2008 database. In my case they yielded identical execution plans, but I couldn't confidently say that this would always be true.
A monad is a construct used to encapsulate computations that involve changing state. It is most often encountered in languages that otherwise do not allow you to have modifiable state (e.g., Haskell). An example would be file IO: you would use a monad for file IO to isolate the state-changing nature to just the code that uses the monad. The code inside the monad can effectively ignore the changing state of the world outside the monad - this makes it a lot easier to reason about the overall effect of your program.
If you're dynamically generating your page though, this probably won't make sense. If you're in PHP, you can pick the inode and file size of the main script, but the modify time won't tell you if your data has changed. Unless you have a good caching process or just generate static pages, etags aren't helpful. If you do have a good caching process, the inode and file size are probably irrelevant. Edit: For people who don't know what [etag][1]s are - they're just supposed to be a value that changes when the content has changed, for caching purposes. The browser gets the etag from the web server, compares it to the etag for its cached copy and then fetches the whole page if the etag has changed. [1]: http://en.wikipedia.org/wiki/HTTP_ETag
On some databases (notably Oracle) the order of the joins can make a huge difference to query performance (if there are more than two tables). On one application, we had literally two orders of magnitude difference in some cases. Using the inner join syntax gives you control over this - if you use the right hints syntax. You didn't specify which database you're using, but probability suggests SQL Server or MySQL, where it makes no real difference.
It's a shame that some people perceive Stripes as a framework for which "there really just isn't much support or information for it." In reality, the Stripes community is **very** supportive - have a look at the [mailing list](http://news.gmane.org/gmane.comp.java.stripes.user) and you'll see how friendly and responsive people are. In fact, some have said on the #stripes IRC channel that they have had better response for Hibernate-related questions than on #hibernate itself! Give Stripes a good, serious look instead of dismissing it because of misconceptions.
I have virtually no knowledge of C#, but I suspect that either switch was simply taken as it occurs in other languages without thinking about making it more general or the developer decided that extending it was not worth it. Strictly speaking you are absolutely right that there is no reason to put these restrictions on it. One might suspect that the reason is that for the allowed cases the implementation is very efficient (as suggested by Brian Ensink ([44921](http://stackoverflow.com/questions/44905/c-switch-statement-limitations-why#44921))), but I doubt the implementation is very efficient (w.r.t. if-statements) if I use integers and some random cases (e.g. 345, -4574 and 1234203). And in any case, what is the harm in allowing it for everything (or at least more) and saying that it is only efficient for specific cases (such as (almost) consecutive numbers). I can, however, imagine that one might want to exclude types because of reasons such as the one given by lomaxx ([44918](http://stackoverflow.com/questions/44905/c-switch-statement-limitations-why#44918)). Edit: @Henk ([44970](http://stackoverflow.com/questions/44905/c-switch-statement-limitations-why#44970)): If Strings are maximally shared, strings with equal content will be pointers to the same memory location as well. Then, if you can make sure that the strings used in the cases are stored consecutively in memory, you can very efficiently implement the switch (i.e. with execution in the order of 2 compares, an addition and two jumps). I would also like to note that I'm not very happy to see Brian's answer get this many (relatively) up-votes. It's nothing personal, but I just think the answer is really wrong.
Mr. Bunny's Big Cup O' Java
Maybe you're looking for something like [Kigg - Building a Digg Clone with ASP.NET MVC][1]. Or maybe Jeff and the team are willing to provide you the souce code for SO... :P [1]: http://dotnetslackers.com/articles/aspnet/KiggBuildingADiggCloneWithASPNETMVC1.aspx
Detach an entity from JPA/EJB3 persistence context
|java|jpa|ejb-3.0|
What would be the easiest way to detach a specific JPA Entity Bean that was acquired through an EntityManager? Alternatively, could I have a query return detached objects in the first place, so they would essentially act as 'read only'? The reason I want to do this is because I want to modify the data within the bean - within my application only, but not ever have it persisted to the database. In my program, I eventually have to call flush() on the EntityManager, which would persist all changes from attached entities to the underlying database, but I want to exclude specific objects.
Can you provide a code sample? More likely than not you'll just need to escape your HTML entities using [encodeuri](http://www.w3schools.com/jsref/jsref_encodeURI.asp) or something like that.
What about the [MVC Membership application][1] that Troy Goode wrote. I'm not sure what Preview he wrote it in but it could be worth a look. Also, if you want to go right from the beginning, scott gu has a [great series of posts on MVC][2] Finally, I haven't personally taken a look at this code, but I noticed the [MVCSample app][3] on codeplex and bookmarked it for future reference, but I'm not sure of the quality. Edit: he has an [updated version][4] as well [1]: http://www.squaredroot.com/post/2008/04/MVC-Membership-Starter-Kit.aspx [2]: http://weblogs.asp.net/scottgu/archive/2007/11/13/asp-net-mvc-framework-part-1.aspx [3]: http://www.codeplex.com/mvcsamples [4]: http://www.squaredroot.com/post/2008/04/MVC-Membership-Starter-Kit-1-2.aspx
[Rob Conery's MVC Storefront project][1] is a good from-the-ground-up step-by-step series on how to put up an ASP.NET MVC site. [1]: http://blog.wekeroad.com/mvc-storefront/
[CodeCampServer][1] - Built with ASP.NET MVC, pretty light and small project. No cruft at all. @lomaxx - Just FYI, most of what Troy Goode wrote is now part of [ASP.NET MVC as of Preview 4][2]. [1]: http://code.google.com/p/codecampserver/ [2]: http://weblogs.asp.net/scottgu/archive/2008/07/14/asp-net-mvc-preview-4-release-part-1.aspx
Modeling Geographic Locations in a Relational Database
|sql|database-design|
I am designing a contact management system and have come across an interesting issue regarding modeling geographic locations in a consistent way. I would like to be able to record locations associated with a particular person (mailing address(es) for work, school, home, etc.) My thought is to create a table of locales such as the following: **Locales (ID, LocationName, ParentID)** where autonomous locations (such as countries, e.g. USA) are parents of themselves. This way I can have an arbitrarily deep nesting of 'political units' (COUNTRY > STATE > CITY or COUNTRY > STATE > CITY > UNIVERSITY). Some queries will necessarily involve recursion. I would appreciate any other recommendations or perhaps advice regarding predictable issues that I am likely to encounter with such a scheme.
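The recursion such a self-referencing table implies can be sketched outside SQL. The code below is an in-memory stand-in for rows of Locales(ID, LocationName, ParentID), assuming each row's ID equals its index; all names are illustrative, not part of the question's schema:

```cpp
#include <string>
#include <vector>

// Stand-in for one row of Locales(ID, LocationName, ParentID).
struct Locale {
    int id;
    std::string name;
    int parent_id;  // equal to id for autonomous (top-level) locales,
                    // per the convention described in the question
};

// Recursively builds a "COUNTRY > STATE > CITY"-style path, stopping when
// a locale is its own parent. A SQL version of this walk would use a
// recursive query (e.g. a recursive CTE or CONNECT BY).
std::string fullPath(const std::vector<Locale>& rows, int id) {
    const Locale& loc = rows[id];  // assumes rows[i].id == i
    if (loc.parent_id == loc.id) return loc.name;
    return fullPath(rows, loc.parent_id) + " > " + loc.name;
}
```

One predictable issue this makes visible: nothing in the schema prevents a cycle (A parents B, B parents A), which would make the recursion loop forever, so a depth limit or cycle check is worth adding.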
Does Java need closures?
|java|closures|
I've been reading a lot lately about the next release of Java possibly supporting closures. I feel like I have a pretty firm grasp on what closures are, but I can't think of a solid example of how they would make an Object-Oriented language "better". Can anyone give me a specific use-case where a closure would be needed (or even preferred)?
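For comparison, the use-case usually cited is a function that captures local state to build a customized callback. The sketch below shows it with C++ lambdas rather than any proposed Java syntax; at the time, Java could only express this with a one-off anonymous inner class over (effectively) final variables:

```cpp
#include <functional>

// A closure "closes over" variables from its defining scope. makeAdder
// returns a function object that remembers n long after makeAdder has
// returned -- state bundled with behavior, without declaring a class.
std::function<int(int)> makeAdder(int n) {
    return [n](int x) { return x + n; };
}
```

For example, `makeAdder(5)` produces a reusable add-five function that can be passed to any code expecting an `int -> int` callback.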
I think the best solution would be to get everyone to use the version control system directly. If you are on a Windows platform, TortoiseSVN would be my recommendation. If using TortoiseSVN directly is too difficult, I have had good experiences with setting up a Samba file share where all project documents are stored and automatically synchronizing this with Subversion. You lose the benefits of people writing comments on their commits, but in many cases automatic version history is better than no version history. This way the people involved don't even have to be aware of the version tracking, as long as they save their documents in the right place. How often you need to synchronize depends on how often documents are changed, but in my case a synchronization every 24 hours was adequate. Note: To implement this I had to write a custom script that checked out the latest version from the repository, compared it with the local copy and issued `svn` (or `cvs`) commands to add, remove and update any changed files. I'm not sure if there exists a general (open source) solution to do this, but I don't think it should be too hard to implement yourself anyway (I wrote a simple script to do it in a few hours).
Sounds like a good approach to me. The one thing that I'm not clear on when reading your post is what "parents of themselves" means - if this is to indicate that the locale does not have a parent, you're better off using null than the ID of itself.
In Oracle: select round((sysdate - cu.last_updated) * 24) || ' hours ago' from customers cu
In Oracle: select CC.MOD_DATETIME, 'Last modified ' || case when (sysdate - cc.mod_datetime) < 1 then round((sysdate - CC.MOD_DATETIME)*24) || ' hours ago' when (sysdate - CC.MOD_DATETIME) between 1 and 7 then round(sysdate-CC.MOD_DATETIME) || ' days ago' when (sysdate - CC.MOD_DATETIME) between 8 and 365 then round((sysdate - CC.MOD_DATETIME) / 7) || ' weeks ago' when (sysdate - CC.MOD_DATETIME) > 365 then round((sysdate - CC.MOD_DATETIME) / 365) || ' years ago' end from customer_catalog CC
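The same bucketing logic, sketched outside SQL so the thresholds are easy to see (a loose translation, not Oracle's semantics; note the original CASE leaves a small gap between "7 days" and "8 days" that this sketch papers over with plain comparisons):

```cpp
#include <cmath>
#include <string>

// 'days' plays the role of (sysdate - mod_datetime), a fractional day count.
// Buckets: under 1 day -> hours, up to 7 -> days, up to 365 -> weeks,
// beyond that -> years, matching the CASE arms above.
std::string lastModified(double days) {
    if (days < 1)
        return "Last modified " + std::to_string(std::lround(days * 24)) + " hours ago";
    if (days <= 7)
        return "Last modified " + std::to_string(std::lround(days)) + " days ago";
    if (days <= 365)
        return "Last modified " + std::to_string(std::lround(days / 7)) + " weeks ago";
    return "Last modified " + std::to_string(std::lround(days / 365)) + " years ago";
}
```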
@lomaxx: Just to clarify, I'm pretty certain that both of the above syntaxes are supported by SQL Server 2005. The syntax below is NOT supported, however: select a.*, b.* from table a, table b where a.id *= b.id; Specifically, the old-style outer join (*=) is not supported.
[Expert .NET 2.0 IL Assembler by Serge Lidin][1] There was a 1.1 version of the same book, but I haven't seen anything for the latest .NET release. It's an excellent book. I used it to write an OCR component in MSIL, as a learning project. [Edit] @Curt is right, 3.0 and 3.5 are just extensions to 2.0, I hadn't plugged that in to my head yet. Now I've thought of a fun geek project... compare the disassembly of standard 2.0 code to the new LINQ/Lambda way of performing common tasks like filtering lists. For some reason I assumed that the magic was happening in new IL features, not just the compiler. [1]: http://www.amazon.com/Expert-NET-2-0-IL-Assembler/dp/1590596463
Sorry for the down vote, but I wanted to try to make sure you came back to this question, because the array you created by _Dim s(0) As String_ **IS NOT EMPTY** In VB.Net, the subscript you use in the array is index of the last element. That means you have an array that already has one element. You should try using System.Collections.Specialized.StringCollection or System.Collections.Generic.List(Of String) They amount to pretty much the same thing as an array of string, except they're loads better for adding and removing items. And let's be honest: you'll rarely create an _empty_ string array without wanting to add _at least_ one element to it. If you really want an empty string array, declare it like this: Dim s As String() or Dim t() As String
We had an interesting issue come up at a company I previously worked at, where we used friend to decent effect. I worked in our framework department; we created a basic engine level system over our custom OS. Internally we had a class structure:

           Game
          /    \
    TwoPlayer  SinglePlayer

All of these classes were part of the framework and maintained by our team. The games produced by the company were built on top of this framework, deriving from one of Game's children. The issue was that Game had interfaces to various things that SinglePlayer and TwoPlayer needed access to, but that we did not want to expose outside of the framework classes. The solution was to make those interfaces private and allow TwoPlayer and SinglePlayer access to them via friendship.

Truthfully, this whole issue could have been resolved by a better implementation of our system, but we were locked into what we had.
The .h files should be used to define the prototypes for your functions. This is necessary so you can include the prototypes that you need in your C-file without declaring every function that you need all in one file. For instance, when you `#include <stdio.h>`, this provides the prototypes for printf and other IO functions. The symbols for these functions are normally loaded by the compiler by default. You can look at the system's .h files under /usr/include if you're interested in the normal idioms involved with these files. If you're only writing trivial applications with not many functions, it's not really necessary to modularize everything out into logical groupings of procedures. However, if you have the need to develop a large system, then you'll need to pay some consideration as to where to define each of your functions.
It's been ages since I've played the reading-Google's-tea-leaves game, but there are a few reasons your SEO expert might be saying this:

1. Three or four years back there was a bit of conventional wisdom floating around that the search engine algorithms would give more weight to search terms that appeared sooner in the page. If all other things were equal on Pages A and B, and Page A mentions widgets earlier in the HTML file than Page B, Page A "wins". It's not that Google's engineers and PhD employees couldn't skip over the <script /> blocks, it's that they found a valuable metric in their presence. Taking that into account, it's easy to see how, unless something "needs" (see #2 below) to be in the head of a document, an SEO-obsessed person would want it out.

2. The SEO people who aren't offering a quick fix tend to be proponents of well-crafted, validating/conforming HTML/XHTML structure. Inline Javascript, particularly the kind web-ignorant software engineers tend to favor, makes these people (I'm one) seethe. The bias against script tags themselves could also stem from some of the work Yahoo and others have done in optimizing Ajax applications (don't make the browser parse Javascript until it has to). Not necessarily directly related to SEO, but a best practice a white-hat SEO type will have picked up.

3. It's also possible you're misunderstanding each other. **Content** that's generated by Javascript is considered controversial in the SEO world. It's not that Google can't "see" this content, it's that people are unsure how its presence will affect the page's ranking, as a lot of black-hat SEO games revolve around hiding and showing content with Javascript.

SEO is at best Kremlinology and at worst a field that the black hats won long ago. My free unsolicited advice is to stay out of the SEO game, present your managers with estimates as to how long it will take to implement their SEO-related changes, and leave it at that.
Are you certain that `currentPage` is a number? Try something like:

    var currentPage = 5;
    jQuery('li').eq(currentPage);

as a simple sanity check. If that works, `currentPage` is probably arriving as a string, and you should convert it with `parseInt(currentPage, 10)`.
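If the sanity check passes, the value has probably arrived as a string (e.g. read from a query string or a form field); a small sketch of the conversion:

```javascript
// Simulate a value that arrived as a string rather than a number
var currentPage = "5";

// parseInt with an explicit radix avoids surprises on older engines
var index = parseInt(currentPage, 10);

console.log(typeof index, index);  // → number 5
// jQuery('li').eq(index) would then select the element at that index
```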
How to Stop NTFS volume auto-mounting on OS X
|macos|hardware|
I'm a bit newbieish when it comes to the deeper parts of OSX configuration, and am having to put up with a fairly irritating niggle which, while I can live with it, I know I could have sorted in minutes under Windows.

Basically, I have an external disk with two volumes: one is an HFS+ volume which I use for Time Machine backups; the other is an NTFS volume that I use for general file copying etc. on Mac and Windows boxes.

Whenever I plug the disk into my Mac's USB, OSX goes off and mounts both volumes and shows an icon on the desktop for each. The thing is that to remove the disk you have to eject the volume, and in this case do it for both volumes, which causes an annoying warning dialog to be shown every time.

What I'd prefer is some way to prevent the NTFS volume from auto-mounting altogether. I've done some hefty googling and here's a list of things I've tried so far:

- I've tried going through options in Disk Utility
- I've tried setting AutoMount to No in /etc/hostconfig, but that is a bit too global for my liking.
- I've also tried the suggested approach of putting settings in fstab, but it appears that OSX (10.5) is ignoring these settings.

Any other suggestions would be welcomed. Just a little disappointed that I can't just tick a box somewhere (or untick).

EDIT: Thanks heaps to hop for the answer; it worked a treat. For the record, it turns out that it wasn't OSX failing to pick up the settings: I actually had "msdos" instead of "ntfs" in the fs type column. Cheers!
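Based on the fix described in the EDIT, a working `/etc/fstab` entry looks something like the line below (the volume label "Shared" is just an illustrative placeholder for your own NTFS volume's label; the crucial detail is that the fs type column must say `ntfs`, not `msdos`):

```
# /etc/fstab on OS X 10.5 -- suppress auto-mounting of the NTFS volume
LABEL=Shared none ntfs noauto
```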
What kind of problems are state machines good for?
When creating dynamic controls ... I only populate them on the initial load. Afterwards I recreate the controls on postback in the page load event, and the viewstate seems to handle repopulating the values with no problems.
I'll cite some passages from [Implementation Patterns][1] by Kent Beck:

##Simple Superclass Name

> "[...] The names should be short and punchy. However, to make the names precise sometimes seems to require several words. A way out of this dilemma is picking a strong metaphor for the computation. With a metaphor in mind, even single words bring with them a rich web of associations, connections, and implications. For example, in the HotDraw drawing framework, my first name for an object in a drawing was **DrawingObject**. Ward Cunningham came along with the typography metaphor: a drawing is like a printed, laid-out page. Graphical items on a page are figures, so the class became **Figure**. In the context of the metaphor, **Figure** is simultaneously shorter, richer, and more precise than **DrawingObject**."

##Qualified Subclass Name

> "The names of subclasses have two jobs. They need to communicate what class they are like and how they are different. [...] Unlike the names at the roots of hierarchies, subclass names aren’t used nearly as often in conversation, so they can be expressive at the cost of being concise. [...]
>
> Give subclasses that serve as the roots of hierarchies their own simple names. For example, *HotDraw* has a class **Handle** which presents figure-editing operations when a figure is selected. It is called, simply, **Handle** in spite of extending **Figure**. There is a whole family of handles and they most appropriately have names like **StretchyHandle** and **TransparencyHandle**. Because **Handle** is the root of its own hierarchy, it deserves a simple superclass name more than a qualified subclass name.
>
> Another wrinkle in subclass naming is multiple-level hierarchies. [...] Rather than blindly prepend the modifiers to the immediate superclass, think about the name from the reader’s perspective. What class does he need to know this class is like?
> Use that superclass as the basis for the subclass name."

##Interface

> Two styles of naming interfaces depend on how you are thinking of the interfaces. Interfaces as classes without implementations should be named as if they were classes (*Simple Superclass Name*, *Qualified Subclass Name*). One problem with this style of naming is that the good names are used up before you get to naming classes. An interface called **File** needs an implementation class called something like **ActualFile**, **ConcreteFile**, or (yuck!) **FileImpl** (both a suffix and an abbreviation). In general, communicating whether one is dealing with a concrete or abstract object is important; whether the abstract object is implemented as an interface or a superclass is less important. Deferring the distinction between interfaces and superclasses is well supported by this style of naming, leaving you free to change your mind later if that becomes necessary.
>
> Sometimes, naming concrete classes simply is more important to communication than hiding the use of interfaces. In this case, prefix interface names with "I". If the interface is called **IFile**, the class can be simply called **File**.

For more detailed discussion, buy the book! It's worth it! :)

[1]: http://www.amazon.com/Implementation-Patterns-Addison-Wesley-Signature-Kent/dp/0321413091
If he is interested then I wouldn't worry about focusing on games or whatnot. I'd just grab that beginner's 'teach yourself x' book you were about to throw, give it to him, and let him struggle through it. Maybe talk about it after, and then do another and another. After that I'd pair program with him so he could learn how shallow and lame those books he read were. Then I'd start having him code something for himself. A website to track softball stats or whatever would engage him. For me it was a database for wine back in the day. After that I would start in on the real books, domain design, etc.
The most obvious thing would be a pseudo-replacement for all those classes that just have a single method called run() or actionPerformed() or something like that. So instead of creating a Thread with a Runnable embedded, you'd use a closure instead. Not more powerful than what we've got now, but much more convenient and concise. So do we *need* closures? No. Would they be nice to have? Sure, as long as they don't feel bolted on, as I fear they would be.
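A sketch of the contrast the answer describes (note this uses the lambda syntax Java only gained later, in Java 8; at the time of writing, only the anonymous-class form existed):

```java
public class ClosureSketch {
    public static void main(String[] args) throws InterruptedException {
        // The classic idiom: a whole anonymous class just to wrap one method
        Runnable verbose = new Runnable() {
            public void run() {
                System.out.println("anonymous class");
            }
        };

        // The closure-style equivalent: same behavior, far less ceremony
        Runnable concise = () -> System.out.println("closure");

        // Both plug into Thread in exactly the same way
        Thread t = new Thread(concise);
        t.start();
        t.join();
        verbose.run();
    }
}
```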
IPC in .Net can be achieved using:

# WCF using named pipes

**Requires .Net 3.0** and above.

##Code example

The WCF class **NetNamedPipeBinding** can be used for interprocess communication on the same machine. The MSDN documentation for this class includes a code sample covering this scenario: <http://msdn.microsoft.com/en-us/library/system.servicemodel.netnamedpipebinding.aspx>

----------

# Remoting

The original IPC framework, released with .Net 1.0. I believe remoting is no longer being actively developed, and you are encouraged to use WCF instead.

##Code example

[Inter-process communication via Remoting](http://www.codeproject.com/KB/threads/process_communication.aspx/process_communication.aspx) - uses a tcp channel

##Resources

- GenuineChannels sell a remoting toolkit that includes a Shared Memory Channel. <http://www.genuinechannels.com/Index.aspx>
- [Ingo Rammer](http://www.thinktecture.com/resourcearchive/tools-and-software/dotnetremotingprojects) wrote the definitive .Net remoting book, [Advanced .NET Remoting, Second Edition](http://www.amazon.com/Advanced-NET-Remoting-Second-Rammer/dp/1590594177)

----------

# Sockets

Using a custom protocol (harder).
There is nothing wrong with static typing if you are using Haskell, which has an incredible static type system. However, if you are using languages like Java and C++ that have terribly crippling type systems, duck typing is definitely an improvement. Imagine trying to use something so simple as "map" in Java (and no, I don't mean [this kind of map][1]). Even generics are rather poorly supported. [1]: http://java.sun.com/j2se/1.4.2/docs/api/java/util/Map.html
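To make the contrast concrete, here is a small illustrative sketch (class names invented) of what duck typing buys you; the equivalent in pre-generics Java needs casts everywhere, and even with generics it needs a declared shared interface:

```python
# Duck typing: no interface declarations needed; anything with a
# .speak() method works
class Duck:
    def speak(self):
        return "quack"

class Robot:
    def speak(self):
        return "beep"

def announce(things):
    # A "map"-style operation over heterogeneous objects, with no
    # shared base class or interface in sight
    return [thing.speak() for thing in things]

print(announce([Duck(), Robot()]))  # → ['quack', 'beep']
```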
Don't listen to people telling you that namespaces are just name-spaces. They are important because the compiler uses them to apply the interface principle. Basically, it can be explained by an example:

    namespace ns {
        class A {};

        void print(A a) {}
    }

If you wanted to print an A object, the code would be this one:

    ns::A a;
    print(a);

Note that we didn't explicitly mention the namespace when calling the function. This is the interface principle: C++ considers a function taking a type as an argument as being part of the interface for that type, so there is no need to specify the namespace because the parameter already implied the namespace.

Now why is this principle important? Imagine that the class A author did not provide a print() function for this class. You would have to provide one yourself. As a good programmer, you would define this function in your own namespace, or maybe in the global namespace:

    namespace ns {
        class A {};
    }

    void print(ns::A a) {}

And your code can start calling the print(a) function wherever you want. Now imagine that years later, the author decides to provide a print() function, better than yours because he knows the internals of his class and can make a better version than yours.

The C++ authors decided that his version of the print() function should then be used instead of the one provided in another namespace, to respect the interface principle. And that this "upgrade" of the print() function should be as easy as possible, which means that you won't have to change every call to the print() function. That's why "interface functions" (functions in the same namespace as a class) can be called without specifying the namespace in C++. And that's why you should consider a C++ namespace as an "interface" when you use one, and keep the interface principle in mind.
If you want better explanation of this behavior, you can refer to the book [Exceptional C++ from Herb Sutter][1] [1]:http://books.google.fr/books?id=mT7E5gDuW_4C&dq=exceptional+C%2B%2B&pg=PP1&ots=AXUPz0dWnW&sig=DAib0u-zXuY3lGCCUFPtzI33pCQ&hl=fr&sa=X&oi=book_result&resnum=1&ct=result#PPA119,M1
I've read in a few places that Google's spiders only index the first 100KB of a page. 20KB of JS at the top of your page would mean 20KB of content later on that Google wouldn't see, etc. Mind you, I have no idea if this is still true, but when combined with the rest of the superstition/rumors/outright quackery you find in the dark underbelly of SEO forums, it starts to make a strange sort of sense.

This is in addition to the fact that inline JS is a Bad Thing with respect to the separation of presentation, content, and behavior, as mentioned in other answers.
I'm on that mailing list - to save you the digging, someone asked this a few weeks ago, and [this was the answer](http://rubyforge.org/pipermail/ironruby-core/2008-August/002688.html) The answer (at this point) is no, you can't, but it doesn't seem like it'll be too far away. PS: listen to curt. He's on the core team for ironruby. <3
The solution is to use the OpenDesktop API call. Basically it just attempts to switch to the default desktop, which will fail if the workstation is locked.

Note: the following function loads the relevant dlls and functions dynamically so that apps will still run on Windows 9.x :)

    BOOL Misc::IsWorkStationLocked()
    {
        // note: we can't call OpenInputDesktop directly because it's not
        // available on win 9x
        typedef HDESK (WINAPI *PFNOPENDESKTOP)(LPSTR lpszDesktop, DWORD dwFlags, BOOL fInherit, ACCESS_MASK dwDesiredAccess);
        typedef BOOL (WINAPI *PFNCLOSEDESKTOP)(HDESK hDesk);
        typedef BOOL (WINAPI *PFNSWITCHDESKTOP)(HDESK hDesk);

        // load user32.dll once only
        static HMODULE hUser32 = LoadLibrary("user32.dll");
        if (hUser32)
        {
            static PFNOPENDESKTOP fnOpenDesktop = (PFNOPENDESKTOP)GetProcAddress(hUser32, "OpenDesktopA");
            static PFNCLOSEDESKTOP fnCloseDesktop = (PFNCLOSEDESKTOP)GetProcAddress(hUser32, "CloseDesktop");
            static PFNSWITCHDESKTOP fnSwitchDesktop = (PFNSWITCHDESKTOP)GetProcAddress(hUser32, "SwitchDesktop");

            if (fnOpenDesktop && fnCloseDesktop && fnSwitchDesktop)
            {
                HDESK hDesk = fnOpenDesktop("Default", 0, FALSE, DESKTOP_SWITCHDESKTOP);
                if (hDesk)
                {
                    BOOL bLocked = !fnSwitchDesktop(hDesk);

                    // cleanup
                    fnCloseDesktop(hDesk);

                    return bLocked;
                }
            }
        }

        // must be win9x
        return FALSE;
    }
How do you index into a var in LINQ?