Remove the `[L]` from the previous rules:

    RewriteBase /appRoot
    RewriteRule ^$ app/webroot/
    RewriteRule (.*) app/webroot/$1 [L]

`[L]` means "Stop the rewriting process here and don't apply any more rewriting rules."
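With the flag removed, the rules would look something like this (same `/appRoot` base as above):

    RewriteBase /appRoot
    RewriteRule ^$ app/webroot/
    RewriteRule (.*) app/webroot/$1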
It really depends on your brother's learning style. Many people learn faster by getting their hands dirty & just getting into it, crystallising the concepts and the big picture as they progress and build their knowledge. Me, I prefer to start with the big picture and drill down into the nitty-gritty. The first thing I wanted to know was how it all fits together, then all that object-oriented gobbledygook, then about classes & instances and so on. I like to know the underlying concepts and a bit of theory before I learn the syntax. I had a bit of an advantage because I wrote some games in BASIC 20 years ago but nothing much since. Perhaps it is useful to shadow a production process by starting with an overall mission statement, then a plan and/or flowchart, then elaborate into some pseudo code (leaning towards the syntax you will ultimately use) before actually writing the code. The golden rule here is to suss out your student's learning style.
When I was working for myself doing my consulting business I signed up for a hosted account, and honestly I couldn't have done without it. What I liked most about it was that it took 30 seconds to sign up for an account, and I was then able to integrate source control using SourceGear Vault (which is an excellent source control product and free for single developers), set up projects, clients, releases and versions, and monitor my progress constantly. One thing that totally blew me away was that I ended up completely abandoning Outlook for all work-related correspondence. I could manage all my client interactions from within FogBugz and it all just worked amazingly well. In terms of overhead, one of the nice things you could do was turn anything into a case. Anything that came up in your mind while you were coding, you simply created a new email, sent it to FogBugz, and it was instantly added as an item for review later. I would strongly recommend you get yourself one of the hosted accounts and give it a whirl.
One of the things about SubVersion that irks me is that it puts its own folder in each directory of a project, whereas git only puts one in the root directory. It's not *that* big of a deal, but little things like that add up. Of course, SubVersion has Tortoise, which is [usually] very nice.
I would use the dayofyear function of datepart: <pre><code> Select * from mytable where datepart(dy,date1) = datepart(dy,date2) and year(date1) = year(date2) --assuming you want the same year too </code></pre> See the datepart reference [here](http://www.tizag.com/sqlTutorial/sqldatepart.php).
This will do it, in one line (note the '*' wildcard at the end):

    @files = </path/to/directory/*>;

    # To demonstrate:
    print join(", ", @files);
Don't move to UpdatePanels. After coming from jQuery, [the drop in performance][1] would be untenable. Especially on a page as complex as yours sounds. If you have 2,000 lines of JavaScript code, the solution is to refactor that code. If you put 2,000 lines of C# code in one file, it would be difficult to maintain too. That would be difficult to manage effectively with **any** language or tool. If you're using 3.5 SP1, you can use the ScriptManager's new script combining to separate your JavaScript into multiple files with no penalty. That way, you can logically partition your code just as you would with server side code. [1]: http://encosia.com/2007/07/11/why-aspnet-ajax-updatepanels-are-dangerous/
It is <a href="http://blogs.vmware.com/vmtn/2008/01/virtual-leopa-1.html">legal</a> to run Mac OS X **Server** in a virtual machine *on Apple hardware*. All other forms of Mac OS X virtualization are currently forbidden.
I usually use `keys` and I can't think of the last time I used or read a use of `each`. Don't forget about `map`, depending on what you're doing in the loop! map { print "$_ => $hash{$_}\n" } keys %hash;
You should look at the .NET Framework Configuration Tool. It's in the .NET SDK, and you can find instructions on running it here... <http://msdn.microsoft.com/en-us/library/2bc0cxhc.aspx> In the _Runtime Security Policy_ section you'll find 3 policy levels: Enterprise, Machine and User. If you drill into Machine or User you'll find definitions of _Code Groups_ and _Permission Sets_ . When you say that you want to test some .NET code in partial trust environments, I guess you'll want to test against one of the standard permission sets already defined, such as _Internet_ . You need to define a _Code Group_ that matches your app (or specific assemblies) and assign your chosen permission set to that _Code Group_ . You can define your own custom _Permission Sets_ too, but let's keep it simple for now. Choose whether you want your new code group to exist at machine-wide scope, or just for your user account, and drill into the Machine or User policy level accordingly. You'll see a code group called _All_Code_ . Create a child code group inside that one, by right-clicking and selecting _New..._ Give it a name, say _PartialTrustGroup_ , then click _Next_ . You have to specify a membership condition for this group, and there are various types. I like to create a specific folder called _PartialTrust_ on my machine, and then create a URL membership condition that matches. So, my URL looks like this... file://c:/users/martin/documents/partialtrust/* The * is a wildcard to catch any assembly beneath that path. Click _Next_ . Now you can pick a permission set for your new code group. For now, pick _Internet_ . It's quite a restrictive set, similar to a Java applet sandbox. Click _Next_ and _Finish_ . Now right-click on your new code-group and select Properties. In the General tab, ensure the topmost checkbox is selected, then click OK. Now, any .NET assemblies that are loaded from a location beneath the URL you specified will have the _Internet_ permission set applied to them. Expect to get some SecurityExceptions if you haven't written your code to carefully observe the reduced permission set. Sorry this is a long description. It really is a lot more simple than it sounds.
From the simple test I have tried, aku is right: Dispose is called as soon as the foreach block exits. @David: However, the call stack is kept between calls, so the connection would not be closed, because on the next call we would return to the next instruction after the yield, which is the while block. My understanding is that when the iterator is disposed, the connection would also be disposed with it. I also think that the Connection.Close would not be needed because it would be taken care of when the object is disposed because of the using clause. Here is a simple program I tried to test the behavior...

    class Program
    {
        static void Main(string[] args)
        {
            foreach (int v in getValues())
            {
                Console.WriteLine(v);
            }
            Console.ReadKey();

            foreach (int v in getValues())
            {
                Console.WriteLine(v);
                break;
            }
            Console.ReadKey();
        }

        public static IEnumerable<int> getValues()
        {
            using (TestDisposable t = new TestDisposable())
            {
                for (int i = 0; i < 10; i++)
                    yield return t.GetValue();
            }
        }
    }

    public class TestDisposable : IDisposable
    {
        private int value;

        public void Dispose()
        {
            Console.WriteLine("Disposed");
        }

        public int GetValue()
        {
            value += 1;
            return value;
        }
    }
Static libraries make life more difficult in this regard. In the case of dynamic libraries you could just have used ldd on the resulting executable and been done with it. The best bet would be some kind of configuration file. Alternatively you could try to look for -l arguments to gcc/ld, which are used to specify libraries. You could write a script to extract them from the output, though I suspect that you will have to do it manually, because by the time you know what the script should look for you probably already know the answer.
It is probably possible to do something useful using e.g. Perl, but you would have to provide more details. On the other hand, it could be easier to simply analyze the script...
MSDN describes [how to enable streaming over WCF][1] rather well. Note, if the link between client and server needs to be encrypted, then you'll need to "roll your own" encryption mechanism. The default `net.tcp` encryption requires X.509 certificates, which won't work with streams as this kind of encryption needs to work on an entire message in one go rather than a stream of bytes. This, in turn, means that you won't be able to authenticate the client using the default WCF security mechanisms as authentication requires encryption. The only work-around for this that I know of is to implement your own custom behaviour extensions on client and server to handle authentication. A really good reference on how to add custom behaviour extensions is [here][2]: this documents how to provide custom configuration, too (something that I don't think is discussed anywhere in the MSDN documents at this time). [1]: http://msdn.microsoft.com/en-us/library/ms789010.aspx [2]: http://www.winterdom.com/weblog/2006/10/02/CustomWCFBehaviorsThroughAppConfig.aspx
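For the streaming part specifically, a minimal sketch of the binding configuration (the binding name and message size below are placeholders, not taken from the MSDN article) might look like this:

    <system.serviceModel>
      <bindings>
        <netTcpBinding>
          <!-- transferMode="Streamed" is what enables streaming on this binding -->
          <binding name="streamedBinding"
                   transferMode="Streamed"
                   maxReceivedMessageSize="67108864" />
        </netTcpBinding>
      </bindings>
    </system.serviceModel>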
Install-base of Java JRE?
|java|deployment|
Is there an online resource somewhere that maintains statistics on the install-base of Java including JRE version information? If not, is there any recent report that has some numbers? I'm particularly interested in Windows users, but all other OSs are welcome too.
You can use [ResourceManager][1] class. See "ResourceManager and ASP.NET" article at http://msdn.microsoft.com/en-us/library/aa309419(VS.71).aspx [1]: http://msdn.microsoft.com/en-us/library/system.resources.resourcemanager.aspx
I was trying to set the property from markup on an outside user control. When I moved setting the property to OnLoad, it worked.
1. Ditto on the SaveAs.
2. Whenever I have to do Interop I create a separate VB.NET class library and write the logic in VB. It is just not worth the hassle doing it in C#.
Why do I receive a q[num] error when aborting a jQuery queue pipeline?
|jquery|ajax|
When creating and executing an ajax request queue with $.manageAjax, I call ajaxManager.abort(); to abort the entire queue due to an error, at which time I get an error stating q[num] has no properties (jquery.ajaxmanager.js line 75). Here is the calling code:

    var ajaxManager = $.manageAjax({manageType:'sync', maxReq:0});
    // setup code calling ajaxManager.add(...)

    // in success callback of first request
    ajaxManager.abort(); // <-- causes error in jquery.ajaxManager.js

There are 4 requests in the queue, and this is being called in the success of the first request: if certain criteria are met, the queue needs to be aborted. Any ideas?
If you don't mind working with older libraries there are quite a few. For example, there's a no-frills GUI kit for Ansi-C called [IUP][1]. Also, check out [this list][2] -- Search on that page for 'C API'. I think the most modern and well-known is the above-mentioned GTK+. [1]: http://www.tecgraf.puc-rio.br/iup/ [2]: http://www.atai.org/guitool/#free_c
That's a great question. ( There is a high chance this is going to end up a normalised versus denormalised database debate..which I am not going to start... okay now for some input.) some off the top of my head things I have done (will add more when I have some more time or need a break) client design - this is where the VB method of inline sql (even with prepared statements) gets you into trouble. You can spend AGES just finding those statements. If you use something like Hibernate and put as much SQL into named queries you have a single place for most of the sql (nothing worse than trying to test sql that is inside of some IF statement and you just don't hit the "trigger" criteria in your testing for that IF statement). Prior to using hibernate (or other orms') when I would do SQL directly in JDBC or ODBC I would put all the sql statements as either public fields of an object (with a naming convention) or in a property file (also with a naming convention for the values say PREP_STMT_xxxx. And use either reflection or iterate over the values at startup in a) test cases b) startup of the application (some rdbms allow you to pre-compile with prepared statements before execution, so on startup post login I would pre-compile the prep-stmts at startup to make the application self testing. Even for 100's of statements on a good rdbms thats only a few seconds. and only once. And it has saved my butt a lot. On one project the DBA's wouldn't communicate (a different team, in a different country) and the schema seemed to change NIGHTLY, for no reason. And each morning we got a list of exactly where it broke the application, on startup. If you need adhoc functionality , put it in a well named class (ie. again a naming convention helps with auto mated testing) that acts as some sort of factory for you query (ie. it builds the query). You are going to have to write the equivalent code anyway right, just put in a place you can test it. You can even write some basic test methods on the same object or in a separate class. If you can , also try to use stored procedures. They are a bit harder to test as above. Some db's also don't pre-validate the sql in stored procs against the schema at compile time only at run time. It usually involves say taking a copy of the schema structure (no data) and then creating all stored procs against this copy (in case the db team making the changes DIDn't validate correctly). Thus the structure can be checked. but as a point of change management stored procs are great. On change all get it. Especially when the db changes are a result of business process changes. And all languages (java, vb, etc get the change ) I usually also setup a table I use called system_setting etc. In this table we keep a VERSION identifier. This is so that client libraries can connection and validate if they are valid for this version of the schema. Depending on the changes to your schema, you don't want to allow clients to connect if they can corrupt your schema (ie. you don't have a lot of referential rules in the db, but on the client). It depends if you are also going to have multiple client versions (which does happen in NON - web apps, ie. they are running the wrong binary). You could also have batch tools etc. Another approach which I have also done is define a set of schema to operation versions in some sort of property file or again in a system_info table. 
This table is loaded on login, and then used by each "manager" (I usually have some sort of client side api to do most db stuff) to validate for that operation if it is the right version. Thus most operations can succeed, but you can also fail (throw some exception) on out of date methods and tells you WHY. managing the change to schema -> do you update the table or add 1-1 relationships to new tables ? I have seen a lot of shops which always access data via a view for this reason. This allows table names to change , columns etc. I have played with the idea of actually treating views like interfaces in COM. ie. you add a new VIEW for new functionality / versions. Often, what gets you here is that you can have a lot of reports (especially end user custom reports) that assume table formats. The views allow you to deploy a new table format but support existing client apps (remember all those pesky adhoc reports). Also, need to write update and rollback scripts. and again TEST, TEST, TEST... ------------ OKAY - THIS IS A BIT RANDOM DISCUSSION TIME -------------- Actually had a large commercial project (ie. software shop) where we had the same problem. The architecture was a 2 tier and they were using a product a bit like PHP but pre-php. Same thing. different name. anyway i came in in version 2.... It was costing A LOT OF MONEY to do upgrades. A lot. ie. give away weeks of free consulting time on site. And it was getting to the point of wanting to either add new features or optimize the code. Some of the existing code used stored procedures , so we had common points where we could manage code. but other areas were this embedded sql markup in html. Which was great for getting to market quickly but with each interaction of new features the cost at least doubled to test and maintain. So when we were looking at pulling out the php type code out, putting in data layers (this was 2001-2002, pre any ORM's etc) and adding a lot of new features (customer feedback) looked at this issue of how to engineer UPGRADES into the system. Which is a big deal, as upgrades cost a lot of money to do correctly. Now, most patterns and all the other stuff people discuss with a degree of energy deals with OO code that is running, but what about the fact that your data has to a) integrate to this logic, b) the meaning and also the structure of the data can change over time, and often due to the way data works you end up with a lot of sub process / applications in your clients organisation that needs that data -> ad hoc reporting or any complex custom reporting, as well as batch jobs that have been done for custom data feeds etc. With this in mind i started playing with something a bit left of field. It also has a few assumptions. a) data is heavily read more than write. b) updates do happen, but not at bank levels ie. one or 2 a second say. The idea was to apply a COM / Interface view to how data was accessed by clients over a set of CONCRETE tables (which varied with schema changes). You could create a seperate view for each type operation - update, delete, insert and read. This is important. The views would either map directly to a table , or allow you to trigger of a dummy table that does the real updates or inserts etc. What i actually wanted was some sort of trappable level indirection that could still be used by crystal reports etc. NOTE - For inserts , update and deletes you could also use stored procs. And you had a version for each version of the product. 
That way your version 1.0 had its version of the schema, and if the tables changed, you would still have the version 1.0 VIEWS but with NEW backend logic to map to the new tables as needed, but you also had version 2.0 views that would support new fields etc. This was really just to support ad hoc reporting, which if your a BUSINESS person and not a coder is probably the whole point of why you have the product. (your product can be crap but if you have the best reporting in the world you can still win, the reverse is true - your product can be the best feature wise, but if its the worse on reporting you can very easily loose). okay, hope some of those ideas help.
For what it's worth, there is also [Google Docs][1]. I guess it's not a perfect fit, but its versioning is **very** convenient.

[1]: http://docs.google.com/
Hope this helps a little, but at my college our web applications course just got revamped. So now we are going the jsp, servlet, hibernate route with the second part of the course on mostly JBoss Seam. So who knows, it probably just needs time to grow in the community.
This can be quite tricky. When attempting to figure out how to map route data into a route, the system currently searches top-down until it finds something where all the required information is provided, and then stuffs everything else into query parameters. Since the required information for the route "Content/{*pathInfo}" is entirely satisfied always (no required data at all in this route), and it's near the top of the route list, then all your attempts to map to unnamed routes will match this pattern, and all your URLs will be based on this ("Content?action=foo&controller=bar") Unfortunately, there's no way around this with action routes. If you use named routes (f.e., choosing Html.RouteLink instead of Html.ActionLink), then you can specify the name of the route to match. It's less convenient, but more precise. IMO, complex routes make the action-routing system basically fall over. In applications where I have something other than the default routes, I almost always end up reverting to named-route based URL generation to ensure I'm always getting the right route.
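As an example, a named-route link might look roughly like this (the route name and route values here are invented purely for illustration):

    <%= Html.RouteLink("Browse content", "ContentRoute",
            new { controller = "Content", action = "Index" }) %>

The extra argument is the name of the route you want, which is what removes the ambiguity that the top-down matching introduces with action routes.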
Is there a (preferably free) Java analogue of .NET's XML serialization?
Most of these kinds of things can be determined programmatically in Python, using modules like <tt>sys</tt>, <tt>os</tt>, and the special [<tt>__file__</tt>][1] identifier, which tells you where you are in the filesystem path. It's important to keep in mind that when a module is first imported it will execute everything in the file-scope, which is important for developing system-dependent behaviors. For example, the <tt>os</tt> module basically determines what operating system you're using on import and then adjusts its implementation accordingly (by importing another module corresponding to Linux, OSX, Windows, etc.). There's a lot of power in this feature and something along these lines is probably what you're looking for. :)

[1]: http://pyref.infogami.com/__file__

[Edit] I've also used socket.gethostname() in some rare, hackish instances. ;)
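For a quick illustration, these are all standard-library calls (nothing here is specific to any one project):

    import os
    import sys
    import socket

    print(os.path.abspath(__file__))   # where this module lives on disk
    print(sys.platform)                # e.g. 'linux2', 'darwin', 'win32'
    print(os.name)                     # 'posix' or 'nt'
    print(socket.gethostname())        # the machine's host name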
Or:

    find . -name '*.as' -or -name '*.mxml' | xargs wc -l

Or, if you use zsh:

    wc -l **/*.{as,mxml}

It won't give you what fraction of those lines are comments, or blank lines, but if you're only interested in how one project differs from another and you've written them both, it's a useful metric.
No, there is not any facility to enumerate named events. You could enumerate all objects in the respective object manager directory using ZwOpenDirectoryObject and then filter for events. But this routine is undocumented and therefore should not be used without good reason. Why not use a separate mechanism to share the event names? You could list them in a configuration file, a registry key or maybe even in shared memory.
Yea FogBugz is great for process-light, quick and easy task management. It seems especially well suited for soloing, where you don't need or want a lot of complexity in that area. By the way, if you want to keep track of what you're doing at the computer all day, check out TimeSprite, which integrates with FogBugz. It's a Windows app that logs your active window and then categorizes your activity based on the window title / activity type mappings you define as you go. (You can also just tell it what you're working on.) And if you're a FogBugz user, you can associate your work with a FogBugz case, and it will upload your time intervals for that case. This makes accurate recording of elapsed time pretty painless and about as accurate as you can get, which in turn improves FogBugz predictive powers in its evidence-based scheduling. Also, when soloing, I find that such specific logging of my time keeps me on task, in the way a meandering manager otherwise might. (I'm not affiliated with TimeSprite in any way.)
I see nothing wrong with using the async version. I can agree that the second version is shorter, but I'm not sure that I think it's easier to follow. The test does a lot of things that you wouldn't normally do, whereas the first example is more true to how you would use the component outside the test environment. Also, in the second form you have to make sure that you do exactly what the framework would do; miss one step and your test isn't relevant, and each test must repeat this code. Seems to me it's better to test it in a situation that is as close to the real thing as possible. You could have a look at [dpUint][1]'s [sequences][2], they made component testing a little more declarative:

    public function testLogin():void {
        var passThroughData:Object = new Object();
        passThroughData.username = "myuser1";
        passThroughData.password = "somepsswd";

        var sequence:SequenceRunner = new SequenceRunner(this);

        sequence.addStep(new SequenceSetter(form.usernameTI, {text:passThroughData.username}));
        sequence.addStep(new SequenceWaiter(form.usernameTI, FlexEvent.VALUE_COMMIT, 100));

        sequence.addStep(new SequenceSetter(form.passwordTI, {text:passThroughData.password}));
        sequence.addStep(new SequenceWaiter(form.passwordTI, FlexEvent.VALUE_COMMIT, 100));

        sequence.addStep(new SequenceEventDispatcher(form.loginBtn, new MouseEvent("click", true, false)));
        sequence.addStep(new SequenceWaiter(form, "loginRequested", 100));

        sequence.addAssertHandler(handleLoginEvent, passThroughData);

        sequence.run();
    }

(example from the [dpUint][1] wiki, [see here for more info][2]).

[1]: http://code.google.com/p/dpuint/
[2]: http://code.google.com/p/dpuint/wiki/Sequences
For warnings to be "flagged" to PHP natively would require changes to the mysql/mysqli driver, which is obviously beyond the scope of this question. Instead you're going to have to basically check every query you make on the database for warnings:

    $warningCountResult = mysql_query("SELECT @@warning_count");
    if ($warningCountResult) {
        $warningCount = mysql_fetch_row($warningCountResult);
        if ($warningCount[0] > 0) {
            // Have warnings
            $warningDetailResult = mysql_query("SHOW WARNINGS");
            if ($warningDetailResult) {
                while ($warning = mysql_fetch_assoc($warningDetailResult)) {
                    // Process it
                }
            }
        } // Else no warnings
    }

Obviously this is going to be hideously expensive to apply en masse, so you might need to carefully think about when and how warnings may arise (which may lead you to refactor to eliminate them). For reference, [MySQL SHOW WARNINGS][1].

*edit* *Of course, you could dispense with the initial query for the SELECT @@warning_count, which would save you a query per execution, but I included it for pedantic completeness.*

[1]: http://dev.mysql.com/doc/refman/5.0/en/show-warnings.html
This looks like a job for [clearfix][1] to me ... [1]: http://www.positioniseverything.net/easyclearing.html
BigTable, which is the database back end for App Engine, will scale to millions of records. Because of this, App Engine will not allow you to do any query that would result in a table scan, as performance would be dreadful for a well-populated table. In other words, every query must use an index. This is why you can only do `=`, `>` and `<` queries. (In fact you can also do `!=`, but the API does this using a combination of `>` and `<` queries.) This is also why the development environment monitors all the queries you do and automatically adds any missing indexes to your `index.yaml` file. There is no way to index for a `LIKE` query, so it's simply not available. Have a watch of [this Google IO session][1] for a much better and more detailed explanation of this.

[1]: http://sites.google.com/site/io/under-the-covers-of-the-google-app-engine-datastore
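To make that concrete: the usual substitute for a prefix-style LIKE is a pair of inequality filters on the same property. A rough sketch using a made-up model (the `Page` class and `title` property are just for illustration, using the Python `db` API):

    from google.appengine.ext import db

    class Page(db.Model):
        title = db.StringProperty()

    prefix = u'intro'
    # Matches titles starting with the prefix; both filters use the same index.
    query = Page.all().filter('title >=', prefix) \
                      .filter('title <', prefix + u'\ufffd')
    results = query.fetch(20)

Anything fancier than a prefix match (e.g. `%term%`) has no index-friendly equivalent and simply can't be expressed.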
This may sound like insanity, but if you are using NAnt (or Ant or some other automated build system), you can use NAnt properties as CSS variables in a hacky way. Start with a CSS template file (maybe styles.css.template or something) containing something like this: a { color: ${colors.blue}; } a:hover { color: ${colors.blue.light}; } p { padding: ${padding.normal}; } And then add a step to your build that assigns all the property values (I use external buildfiles and &lt;include&gt; them) and uses the &lt;expandproperties&gt; filter to generate the actual CSS: <property name="colors.blue" value="#0066FF" /> <property name="colors.blue.light" value="#0099FF" /> <property name="padding.normal" value="0.5em" /> <copy file="styles.css.template" tofile="styles.css" overwrite="true"> <filterchain> <expandproperties/> </filterchain> </copy> The downside, of course, is that you have to run the css generation target before you can check what it looks like in the browser. And it probably would restrict you to generating all your css by hand. However, you can write NAnt functions to do all sorts of cool things beyond just property expansion (like generating gradient image files dynamically), so for me it's been worth the headaches.
You may also use: try { // Dangerous code } finally { // clean up, or do nothing } And any exceptions thrown will bubble up to the next level that handles them.
For the really gritty problems that would be too time consuming to figure out with print_r/echo, I use my IDE's (PhpEd) debugging feature. Unlike other IDEs I've used, PhpEd requires pretty much no setup. The only reason I don't use it for every problem I encounter is that it's *painfully* slow. I'm not sure whether that slowness is specific to PhpEd or common to any php debugger. PhpEd is not free, but I believe it uses one of the open-source debuggers (like XDebug, previously mentioned) anyway. The benefit with PhpEd, again, is that it requires no setup, which I have found really pretty tedious in the past.
Please don't put your self in that world of pain. Instead use [UFRAME][1] which is a lot faster and is implemented in jQuery. Now, to manage those 2000 lines of Javascript code I recommend splitting the code in different files and set up your build process to join them using JSMin or Yahoo Compressor into chunks. Cheers. [1]: http://msmvps.com/blogs/omar/archive/2008/05/24/uframe-goodness-of-updatepanel-and-iframe-combined.aspx
The way I see it is this. You have a Windows Service, which is playing the role of a scheduler and in it there are some classes which simply call the webservices and put the data in your databases. So, you can use these classes directly from the WebUI as well and import the data based on the WebUI trigger. I don't like the idea of storing a user generated action as a flag (trigger) in the database where some service will poll it (at an interval which is not under the user's control) to execute that action. You could even convert the whole code into an exe which you can then schedule using the Windows Scheduler. And call the same exe whenever the user triggers the action from the Web UI.
Familiarity with the algorithms/data structures I use, and/or quick-glance analysis of iteration nesting. The difficulty is when you call a library function, possibly multiple times - you can often be unsure whether you are calling the function unnecessarily at times, or what implementation it is using. Maybe library functions should have a complexity/efficiency measure, whether that be Big O or some other metric, that is available in documentation or even IntelliSense.
@Vaibhav Unfortunately, the physical architecture of the solution will not allow any direct communication between the components, other than Web UI to Database, and database to service (which can then call out to the web services). I do, however, agree that re-use of the communication classes would be the ideal here - I just can't do it within the confines of our business* *Isn't it always the way that a technically "better" solution is stymied by external factors?
You could apply some [standard deviation][1] to your logic and take notice of peaks over x%.

[1]: http://en.wikipedia.org/wiki/Standard_deviations
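A minimal sketch of what that might look like (the threshold factor is arbitrary and would need tuning to your data):

    def find_peaks(samples, factor=2.0):
        """Return indices of samples more than `factor` standard deviations above the mean."""
        n = len(samples)
        mean = sum(samples) / float(n)
        variance = sum((x - mean) ** 2 for x in samples) / float(n)
        std_dev = variance ** 0.5
        threshold = mean + factor * std_dev
        return [i for i, x in enumerate(samples) if x > threshold]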
@KP After your update to the original question, the only real option available to you is to do some jiggery-pokery in Javascript on the client. The only issue with that would be providing graceful degradation for non-javascript-enabled clients. e.g. You could add some AJAX-y goodness that reads a hidden form field value, requests a verification key from the server, and sends that back along with the response, but that will never be populated if javascript is blocked/disabled. You could always implement a more traditional captcha-type interface which could be disabled by javascript, and ignored by the server if the scripted field is filled in... Depends how far you want to go with it, though. Good luck
Static methods cannot be inherited or overridden, and that is why they can't be abstract. Since static methods are defined on the type, not the instance, of a class, they must be called explicitly on that type. So when you want to call a method on a child class, you need to use its name to call it. This makes inheritance irrelevant. Assume you could, for a moment, inherit static methods. Imagine this scenario:

    public static class Base {
        public static virtual int GetNumber() { return 5; }
    }

    public static class Child1 : Base {
        public static override int GetNumber() { return 1; }
    }

    public static class Child2 : Base {
        public static override int GetNumber() { return 2; }
    }

If you call Base.GetNumber(), which method would be called? Which value returned? It's pretty easy to see that without creating instances of objects, inheritance is rather hard. Abstract methods without inheritance are just methods that don't have a body, so they can't be called.
If you're talking about automatic XML serialization of objects, check out [Castor][1]: > Castor is an Open Source data binding framework for Java[tm]. It's the shortest path between Java objects, XML documents and relational tables. Castor provides Java-to-XML binding, Java-to-SQL persistence, and more. [1]: http://www.castor.org/
Python is considered (among Python programmers :) to be a great language for rapid prototyping. There's not a lot of extraneous syntax getting in the way of your thought processes, so most of the work you do tends to go into the code. (There's far less idioms required to be involved in writing good Python code than in writing good C++.) Given this, most Python (CPython) programmers ascribe to the "premature optimization is the root of all evil" philosophy. By writing high-level (and significantly slower) Python code, one can optimize the bottlenecks out using C/C++ bindings when your application is nearing completion. At this point it becomes more clear what your processor-intensive algorithms are through proper profiling. This way, you write most of the code in a very readable and maintainable manner while allowing for speedups down the road. You'll see several Python library modules written in C for this very reason. Most graphics libraries in Python (i.e. wxPython) are just Python wrappers around C++ libraries anyway, so you're pretty much writing to a C++ backend. To address your IDE question, [SPE][1] (Stani's Python Editor) is a good IDE that I've used and [Eclipse][2] with [PyDev][3] gets the job done as well. Both are <acronym title="Open Source Software">OSS</acronym>, so they're free to try! [Edit] @Marcin: Have you had experience writing > 30k LOC in Python? It's also funny that you should mention Google's scalability concerns, since they're Python's biggest supporters! Also a small organization called NASA also uses Python frequently ;) see ["One coder and 17,000 Lines of Code Later"][4]. [1]: http://pythonide.blogspot.com/ [2]: http://www.eclipse.org/ [3]: http://pydev.sourceforge.net/ [4]: http://www.python.org/about/success/usa/
Here's an example of recursing through a directory structure and copying files, from a backup script I wrote (it relies on File::Copy for copy() and File::stat for the object-returning stat()):

    use File::Copy;
    use File::stat;

    sub copy_directory {
        my ($source, $dest) = @_;
        my $start = time;

        # get the contents of the directory.
        opendir(D, $source);
        my @f = readdir(D);
        closedir(D);

        # recurse through the directory structure and copy files.
        foreach my $file (@f) {
            # Setup the full path to the source and dest files.
            my $filename = $source . "\\" . $file;
            my $destfile = $dest . "\\" . $file;

            # get the file info for the 2 files.
            my $sourceInfo = stat( $filename );
            my $destInfo = stat( $destfile );

            # make sure the destination directory exists.
            mkdir( $dest, 0777 );

            if ($file eq '.' || $file eq '..') {
            }
            elsif (-d $filename) {
                # if it's a directory then recurse into it.
                #print "entering $filename\n";
                copy_directory($filename, $destfile);
            }
            else {
                # Only backup the file if it has been created/modified since the last backup
                if( (not -e $destfile) || ($sourceInfo->mtime > $destInfo->mtime ) ) {
                    #print $filename . " -> " . $destfile . "\n";
                    copy( $filename, $destfile ) or print "Error copying $filename: $!\n";
                }
            }
        }

        print "$source copied in " . (time - $start) . " seconds.\n";
    }
I couldn't agree with you more, HollyStyles. I also used to be a TSQL guy, and find some of Oracle's idiosyncrasies more than a little perplexing. Unfortunately, temp tables aren't as convenient in Oracle, and in this case, other existing SQL logic is expecting to directly query a table, so I give it this view instead. There's really no application logic that exists outside of the database in this system. Oracle developers do seem to use cursors much more eagerly than I would have thought. Given the bondage & discipline nature of PL/SQL, that's all the more surprising.
Manual debugging is generally quicker for me - `var_dump()` and `debug_print_backtrace()` are all the tools you need to arm your logic with.
Do not ***ever*** assume your user has access to certain key areas of the disc (i.e. program files, windows directory etc). Instead the default user account will only be able to write to a small section of their application data. Also, they won't be able to write to global areas of the registry - only the current user section. You can of course elevate their privileges, but that in itself is a task. Generally programming for Vista is the same as XP, it's just the new account restrictions you have to be wary of. Have a look at this page with regards to making your application "UAC aware" http://www.codeproject.com/KB/vista-security/MakingAppsUACAware.aspx
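One concrete part of being "UAC aware" is declaring up front what privilege level your executable needs, via a requestedExecutionLevel entry in the application manifest. A fragment might look roughly like this (namespaces omitted for brevity; the level value is just an example):

    <trustInfo>
      <security>
        <requestedPrivileges>
          <!-- "asInvoker" runs with the user's normal token;
               use "requireAdministrator" only if the app genuinely needs elevation -->
          <requestedExecutionLevel level="asInvoker" uiAccess="false" />
        </requestedPrivileges>
      </security>
    </trustInfo>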
Is there a benefit to not having foreign keys? Unless you are using a crappy database, FKs aren't that hard to set up. So why would you have a policy of avoiding them? It's one thing to have a naming convention that says a column references another, it's another to know the database is actually verifying that relationship for you.
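For comparison, declaring the relationship so the database actually enforces it is usually a one-liner (table and column names invented for the example):

    ALTER TABLE orders
      ADD CONSTRAINT fk_orders_customer
      FOREIGN KEY (customer_id) REFERENCES customers (id);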
Alternative SSH Application to Plink
|cvs|ssh|tortoisecvs|plink|
The Iterator that the compiler synthesises implements IDisposable, which foreach calls when the foreach loop is exited. The Iterator's Dispose() method will clean up the using statements on early exit. As long as you use the iterator in a foreach loop, using() block, or call the Dispose() method in some other way, the cleanup of the Iterator will happen.
Judging from [this technical explanation][1], your code will not work as expected, but abort on the second item, because the connection was already closed when returning the first item. @Joel Gauvreau : Yes, I should have read on. [Part 3][2] of this series explains that the compiler adds special handling for finally blocks to trigger only at the _real_ end. [1]: http://blogs.msdn.com/oldnewthing/archive/2008/08/12/8849519.aspx [2]: http://blogs.msdn.com/oldnewthing/archive/2008/08/14/8862242.aspx
Having a very descriptive name helps you instantly see what is not working correctly, so that you don't actually need to look at the unit test code. Also, a list of all the unit tests describes the intended behavior of the unit, and can be used (more or less) as documentation of the behavior of the unit under test. Note, this only works when unit tests are very specific and do not validate too much within one unit test. So for example:

    [Test]
    void TestThatExceptionIsRaisedWhenStringLengthLargerThan100()

    [Test]
    void TestThatStringLengthOf99IsAccepted()
John's answer is good (that list comprehensions are better when you want to iterate over something multiple times). However, it's also worth noting that you should use a list if you want to use any of the list methods. For example, the following code won't work:

    def gen():
        return (something for something in get_some_stuff())

    print gen()[:2]     # generators don't support indexing or slicing
    print [5,6] + gen() # generators can't be added to lists

Basically, use a generator expression if all you're doing is iterating once. If you want to store and use the generated results, then you're probably better off with a list comprehension. Since performance is the most common reason to choose one over the other, my advice is to not worry about it and just pick one; if you find that your program is running too slowly, then and only then should you go back and worry about tuning your code.
HTML Compression and SEO?
|seo|html-compression|
At work, we have a dedicated SEO Analyst whose job is to pore over lots of data (KeyNote/Compete etc) and generate fancy reports for the executives so they can see how we are doing against our competitors in organic search ranking. He also leads initiatives to improve the SEO rankings on our sites by optimizing things as best we can. We also have a longstanding mission to decrease our page load time, which right now is pretty shoddy on some pages. The SEO guy mentioned that semantic, valid HTML gets more points from crawlers than jumbled messy HTML. I've been working on a real-time HTML compressor that will decrease our page sizes by a pretty good chunk. Will compressing the HTML hurt us in site rankings?
[XStream][1] is pretty good at serializing object to XML without much configuration and money! (it's under BSD license). We used it in one of our project to replace the plain old java-serialization and it worked almost out of the box. [1]: http://xstream.codehaus.org/
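The "almost out of the box" part looks roughly like this (the `Person` class and the alias name are only for illustration):

    import com.thoughtworks.xstream.XStream;

    public class XStreamDemo {
        static class Person {           // any plain object will do
            String name = "Joe";
            int age = 42;
        }

        public static void main(String[] args) {
            XStream xstream = new XStream();
            xstream.alias("person", Person.class);        // optional: nicer element name

            String xml = xstream.toXML(new Person());     // serialize to XML
            Person back = (Person) xstream.fromXML(xml);  // and back again
        }
    }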
Techniques to detect Polymorphic and Metamorphic viruses?
|antivirus|virus|
What techniques can be applied to detect [Polymorphic][1] and [Metamorphic][2] viruses? How difficult is it to implement these techniques? Are these techniques being applied in modern-day anti-virus software?

[1]: http://en.wikipedia.org/wiki/Computer_virus#Polymorphic_code
[2]: http://en.wikipedia.org/wiki/Computer_virus#Metamorphic_code
[ProcessExplorer][1] is able to enumerate all the named events held by some specific process. You could go over the entire process list and do something similar, although I have no clue as to what API is used to get the list...

[1]: http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx
[Tracker][1] ships with Ubuntu 8.04 -- it was a significant switch from Beagle which users believed was too resource (CPU) intensive and didn't yield good enough results. It indexes both pdf and chm and according to [this bug report][2] it also indexes djvu. [1]: https://wiki.ubuntu.com/Tracker [2]: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=460260
Oracle - What TNS Names file am I using?
|oracle-database|connection|tns|tnsnames|
Sometimes I get Oracle connection problems because I can't figure out which tnsnames.ora file my database client is using. What's the best way to figure this out? ++happy for various platform solutions.
It seems like the answer is in the question - the method you suggested seems to be the right direction to go, except that if you have a big number of those shared members you might want to gather them into a struct or class and pass that as the argument to the constructor of the base class. If you insist on having the "shared" members implemented as static members of the derived class, you might be able to auto-generate the code of the derived classes. XSLT is a great tool for auto-generating simple classes. In general, the example doesn't show a need for "virtual static" members, because for purposes like these you don't actually need inheritance - instead you should use the base class and have it accept the appropriate values in the constructor - maybe creating a single instance of the arguments for each "sub-type" and passing a pointer to it to avoid duplication of the shared data. Another similar approach is to use templates and pass as the template argument a class that provides all the relevant values (this is commonly referred to as the "Policy" pattern). To conclude - for the purpose of the original example, there is no need for such "virtual static" members. If you still think they are needed for the code you are writing, please try to elaborate and add more context. Example of what I described above:

    class BaseClass {
    public:
        BaseClass(const Descriptor& desc) : _desc(desc) {}
        string GetName() const { return _desc.name; }
        int GetId() const { return _desc.Id; }
        X GetX() const { return _desc.X; }
        virtual void UseClass() = 0;
    private:
        const Descriptor _desc;
    };

    class DerivedClass : public BaseClass {
    public:
        DerivedClass() : BaseClass(Descriptor("abc", 1, ...)) {}
        virtual void UseClass() { /* do something */ }
    };

    class DerDerClass : public BaseClass {
    public:
        DerDerClass() : BaseClass(Descriptor("Wowzer", 843, ...)) {}
        virtual void UseClass() { /* do something */ }
    };
To add to the previous explanations, static method calls are bound to a specific method at *compile-time*, which rather rules out polymorphic behavior.
Static methods are not *instantiated* as such, they're just available without an object reference. A call to a static method is done through the class name, not through an object reference, and the IL code to call it will call the abstract method through the name of the class that defined it, not necessarily the name of the class you used. Let me show an example. With the following code:

    public class A
    {
        public static void Test()
        {
        }
    }

    public class B : A
    {
    }

If you call B.Test, like this:

    class Program
    {
        static void Main(string[] args)
        {
            B.Test();
        }
    }

Then the actual code inside the Main method is as follows:

    .entrypoint
    .maxstack 8
    L0000: nop
    L0001: call void ConsoleApplication1.A::Test()
    L0006: nop
    L0007: ret

As you can see, the call is made to A.Test, because it was the A class that defined it, and not to B.Test, even though you can write the code that way. If you had *class types*, like in Delphi, where you can make a variable referring to a type and not an object, you would have more use for virtual and thus abstract static methods (and also constructors), but they aren't available and thus static calls are non-virtual in .NET. I realize that the IL designers could allow the code to be compiled to call B.Test, and resolve the call at runtime, but it still wouldn't be virtual, as you would still have to write some kind of class name there. Virtual methods, and thus abstract ones, are only useful when you're using a variable which, at runtime, can contain many different types of objects, and you thus want to call the right method for the current object you have in the variable. With static methods you need to go through a class name anyway, so the exact method to call is known at compile time because it can't and won't change. Thus, virtual/abstract static methods are not available in .NET.
Break down the algorithm into pieces you know the big O notation for, and combine through big O operators. That's the only way I know of. For more information, check the [Wikipedia page](http://en.wikipedia.org/wiki/Big_O_notation) on the subject.
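For example (a made-up illustration, not part of the original answer): if a routine sorts its input and then makes a single pass over the result, you know the big O of each piece and keep the dominant term when you combine them.

<pre><code>using System;

static class BigOExample
{
    // Sequential pieces add, and the largest term dominates:
    // O(n) copy + O(n log n) sort + O(n) scan = O(n log n) overall.
    static int LargestGap(int[] values)
    {
        var sorted = (int[])values.Clone();          // O(n)
        Array.Sort(sorted);                          // O(n log n)

        int gap = 0;
        for (int i = 1; i < sorted.Length; i++)      // O(n)
        {
            gap = Math.Max(gap, sorted[i] - sorted[i - 1]);
        }
        return gap;
    }
}</code></pre>

Nested pieces multiply instead: an O(n) body inside an O(n) loop gives O(n²).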
I think you want to [cross-correlate][1] your signal with an expected, exemplar signal. But, it has been such a long time since I studied signal processing and even then I didn't take much notice. [1]: http://en.wikipedia.org/wiki/Cross-correlation
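To make the idea concrete, here's a rough sketch of discrete cross-correlation (the array names and the brute-force loop are my own simplification, not from the answer): slide the exemplar along the signal and keep the lag where the overlap scores highest.

<pre><code>// Minimal sketch: score every lag by the dot product of the exemplar
// with the corresponding window of the signal, and return the best lag.
static int BestMatchLag(double[] signal, double[] exemplar)
{
    int bestLag = 0;
    double bestScore = double.MinValue;

    for (int lag = 0; lag <= signal.Length - exemplar.Length; lag++)
    {
        double score = 0;
        for (int i = 0; i < exemplar.Length; i++)
        {
            score += signal[lag + i] * exemplar[i];
        }
        if (score > bestScore)
        {
            bestScore = score;
            bestLag = lag;
        }
    }
    return bestLag;
}</code></pre>

A real implementation would normalise the windows and probably use an FFT for speed, but the principle is the same.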
I don't know very much about instrumentation, so this might be totally impractical, but then again it might be a helpful different direction. If you know how the readings can fail, and there is a certain interval between peaks given such failures, why not do gradient descent at each interval. If the descent brings you back to an area you've searched before, you can abandon it. Depending upon the shape of the sampled surface, this also might help you find peaks faster than search.
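To sketch what that might look like over sampled readings (the data layout here is an assumption, not part of the answer): from each starting interval, walk uphill one sample at a time and give up as soon as you step onto ground you've already covered.

<pre><code>// Rough hill-climbing sketch: climb from a starting index towards higher
// readings, abandoning the climb if it enters an area already searched.
static int? ClimbToPeak(double[] readings, int start, bool[] visited)
{
    int i = start;
    while (true)
    {
        if (visited[i]) return null;          // already searched; abandon
        visited[i] = true;

        int next = i;
        if (i > 0 && readings[i - 1] > readings[next]) next = i - 1;
        if (i + 1 < readings.Length && readings[i + 1] > readings[next]) next = i + 1;

        if (next == i) return i;              // local peak
        i = next;
    }
}</code></pre>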
Thanks guys! For the record, this is line 176 in C:\xampp\apache\conf\httpd.conf. Note that you also have to edit line 203. Also note that you have to use forward slashes "/" instead of backslashes "\", and restart your server.
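For example, both of those lines should end up pointing at your new document root with forward slashes, something like this (the path here is only an illustration):

<pre><code>DocumentRoot "C:/xampp/htdocs/mysite"
<Directory "C:/xampp/htdocs/mysite"></code></pre>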
|unix|search|
If the option exists to **not** do an XML signature and instead just to treat the XML as a byte stream and to sign that, do it. It will be easier to implement, easier to understand, more stable (no canonicalization, transform, policy, ...) and faster. If you absolutely must have XML DSIG (sadly, some of us must), it is certainly possible these days but there are many, many caveats. You need good library support, with Java this is out of the box in JDK 1.6, I am not familiar with other platforms. You must test interoperability with the receiving end of your signed XML, especially if they are potentially on a different platform. Be sure to read [Why XML Security Is Broken][1], it basically covers all the ground regarding the horror that is XML Canonicalization and gives some pointers to some alternatives. [1]: http://www.cs.auckland.ac.nz/~pgut001/pubs/xmlsec.txt
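If you do go the "sign the bytes" route, a minimal sketch in C# on recent .NET looks something like this (the key handling is left out for illustration; in practice you'd load a real key or certificate):

<pre><code>using System.Security.Cryptography;
using System.Text;

// Treat the XML purely as a byte stream and sign those bytes -
// no canonicalization, transforms or signature policy involved.
static byte[] SignXmlBytes(string xml, RSA privateKey)
{
    byte[] data = Encoding.UTF8.GetBytes(xml);
    return privateKey.SignData(data, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
}

static bool VerifyXmlBytes(string xml, byte[] signature, RSA publicKey)
{
    byte[] data = Encoding.UTF8.GetBytes(xml);
    return publicKey.VerifyData(data, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
}</code></pre>

The receiver then verifies exactly the bytes you sent, so there is nothing to canonicalize and nothing to disagree about.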
This blog entry references something similar to what you need [here](http://blog.corunet.com/english/automatic-css-the-stylizator). It contains a link to a Perl script called '[stylizator.pl](http://blog.corunet.com/uploads/stylizator/stylizator.zip)'. This script parses the HTML to look for possible CSS elements and outputs them to a file.
Just a word of warning with:

<pre><code>DBCC CHECKIDENT (MyTable, RESEED, 0)</code></pre>

If you did not truncate the table, and the identity column is the PK, you will get an error when reaching pre-existing identities.

For example, you have identities (3,4,5) in the table already. You then reset the identity column to 1. After the identity 2 is inserted, the next insert will try to use the identity 3, which will fail.
Aptana more or less is RadRails, or it's based on it. I've used it, and it's really good, but it does have some problems. For instance, it breaks the basic search dialog on my system (showing a raw Java exception to the end user), and it clutters the interface with ad-like notices, upgrade bars, news feeds and...

But all in all it's pretty good; especially its editors (ERB, HTML/XML, ...) are top notch.
What is the best way to partition terabyte drive in a linux development machine?
|linux|storage|partition|
I have a new 1 TB drive coming in tomorrow. What is the best way to divide this space for a development workstation?

The biggest problem I think I'm going to have is that some partitions (probably /usr) will become too small after a bit of use. Other partitions are probably too large. The swap partition, for example, is currently 2 GB (2x 1 GB RAM), but it is almost never used (only once that I know of).
This is an implementation in C# of how to parse and convert a DateTime to and from its RFC-3339 representation. The only restriction it has is that the DateTime is in Coordinated Universal Time (UTC). using System; using System.Globalization; namespace DateTimeConsoleApplication { /// <summary> /// Provides methods for converting <see cref="DateTime"/> structures to and from the equivalent RFC 3339 string representation. /// </summary> public static class Rfc3339DateTime { //============================================================ // Private members //============================================================ #region Private Members /// <summary> /// Private member to hold array of formats that RFC 3339 date-time representations conform to. /// </summary> private static string[] formats = new string[0]; /// <summary> /// Private member to hold the DateTime format string for representing a DateTime in the RFC 3339 format. /// </summary> private const string format = "yyyy-MM-dd'T'HH:mm:ss.fffK"; #endregion //============================================================ // Public Properties //============================================================ #region Rfc3339DateTimeFormat /// <summary> /// Gets the custom format specifier that may be used to represent a <see cref="DateTime"/> in the RFC 3339 format. /// </summary> /// <value>A <i>DateTime format string</i> that may be used to represent a <see cref="DateTime"/> in the RFC 3339 format.</value> /// <remarks> /// <para> /// This method returns a string representation of a <see cref="DateTime"/> that /// is precise to the three most significant digits of the seconds fraction; that is, it represents /// the milliseconds in a date and time value. The <see cref="Rfc3339DateTimeFormat"/> is a valid /// date-time format string for use in the <see cref="DateTime.ToString(String, IFormatProvider)"/> method. /// </para> /// </remarks> public static string Rfc3339DateTimeFormat { get { return format; } } #endregion #region Rfc3339DateTimePatterns /// <summary> /// Gets an array of the expected formats for RFC 3339 date-time string representations. /// </summary> /// <value> /// An array of the expected formats for RFC 3339 date-time string representations /// that may used in the <see cref="DateTime.TryParseExact(String, string[], IFormatProvider, DateTimeStyles, out DateTime)"/> method. 
/// </value> public static string[] Rfc3339DateTimePatterns { get { if (formats.Length > 0) { return formats; } else { formats = new string[11]; // Rfc3339DateTimePatterns formats[0] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffffffK"; formats[1] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'ffffffK"; formats[2] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffffK"; formats[3] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'ffffK"; formats[4] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffK"; formats[5] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'ffK"; formats[6] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fK"; formats[7] = "yyyy'-'MM'-'dd'T'HH':'mm':'ssK"; // Fall back patterns formats[8] = "yyyy'-'MM'-'dd'T'HH':'mm':'ss'.'fffffffK"; // RoundtripDateTimePattern formats[9] = DateTimeFormatInfo.InvariantInfo.UniversalSortableDateTimePattern; formats[10] = DateTimeFormatInfo.InvariantInfo.SortableDateTimePattern; return formats; } } } #endregion //============================================================ // Public Methods //============================================================ #region Parse(string s) /// <summary> /// Converts the specified string representation of a date and time to its <see cref="DateTime"/> equivalent. /// </summary> /// <param name="s">A string containing a date and time to convert.</param> /// <returns>A <see cref="DateTime"/> equivalent to the date and time contained in <paramref name="s"/>.</returns> /// <remarks> /// The string <paramref name="s"/> is parsed using formatting information in the <see cref="DateTimeFormatInfo.InvariantInfo"/> object. /// </remarks> /// <exception cref="ArgumentNullException"><paramref name="s"/> is a <b>null</b> reference (Nothing in Visual Basic).</exception> /// <exception cref="FormatException"><paramref name="s"/> does not contain a valid RFC 3339 string representation of a date and time.</exception> public static DateTime Parse(string s) { //------------------------------------------------------------ // Validate parameter //------------------------------------------------------------ if(s == null) { throw new ArgumentNullException("s"); } DateTime result; if (Rfc3339DateTime.TryParse(s, out result)) { return result; } else { throw new FormatException(String.Format(null, "{0} is not a valid RFC 3339 string representation of a date and time.", s)); } } #endregion #region ToString(DateTime utcDateTime) /// <summary> /// Converts the value of the specified <see cref="DateTime"/> object to its equivalent string representation. /// </summary> /// <param name="utcDateTime">The Coordinated Universal Time (UTC) <see cref="DateTime"/> to convert.</param> /// <returns>A RFC 3339 string representation of the value of the <paramref name="utcDateTime"/>.</returns> /// <remarks> /// <para> /// This method returns a string representation of the <paramref name="utcDateTime"/> that /// is precise to the three most significant digits of the seconds fraction; that is, it represents /// the milliseconds in a date and time value. /// </para> /// <para> /// While it is possible to display higher precision fractions of a second component of a time value, /// that value may not be meaningful. The precision of date and time values depends on the resolution /// of the system clock. On Windows NT 3.5 and later, and Windows Vista operating systems, the clock's /// resolution is approximately 10-15 milliseconds. 
/// </para> /// </remarks> /// <exception cref="ArgumentException">The specified <paramref name="utcDateTime"/> object does not represent a <see cref="DateTimeKind.Utc">Coordinated Universal Time (UTC)</see> value.</exception> public static string ToString(DateTime utcDateTime) { if (utcDateTime.Kind != DateTimeKind.Utc) { throw new ArgumentException("utcDateTime"); } return utcDateTime.ToString(Rfc3339DateTime.Rfc3339DateTimeFormat, DateTimeFormatInfo.InvariantInfo); } #endregion #region TryParse(string s, out DateTime result) /// <summary> /// Converts the specified string representation of a date and time to its <see cref="DateTime"/> equivalent. /// </summary> /// <param name="s">A string containing a date and time to convert.</param> /// <param name="result"> /// When this method returns, contains the <see cref="DateTime"/> value equivalent to the date and time /// contained in <paramref name="s"/>, if the conversion succeeded, /// or <see cref="DateTime.MinValue">MinValue</see> if the conversion failed. /// The conversion fails if the s parameter is a <b>null</b> reference (Nothing in Visual Basic), /// or does not contain a valid string representation of a date and time. /// This parameter is passed uninitialized. /// </param> /// <returns><b>true</b> if the <paramref name="s"/> parameter was converted successfully; otherwise, <b>false</b>.</returns> /// <remarks> /// The string <paramref name="s"/> is parsed using formatting information in the <see cref="DateTimeFormatInfo.InvariantInfo"/> object. /// </remarks> public static bool TryParse(string s, out DateTime result) { //------------------------------------------------------------ // Attempt to convert string representation //------------------------------------------------------------ bool wasConverted = false; result = DateTime.MinValue; if (!String.IsNullOrEmpty(s)) { DateTime parseResult; if (DateTime.TryParseExact(s, Rfc3339DateTime.Rfc3339DateTimePatterns, DateTimeFormatInfo.InvariantInfo, DateTimeStyles.AdjustToUniversal, out parseResult)) { result = DateTime.SpecifyKind(parseResult, DateTimeKind.Utc); wasConverted = true; } } return wasConverted; } #endregion } }
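Usage is then a one-liner in each direction - a quick sketch (the literal timestamp is just an example value):

<pre><code>// Round-trip a UTC DateTime through its RFC 3339 representation.
DateTime utcNow = DateTime.UtcNow;
string rfc3339 = Rfc3339DateTime.ToString(utcNow);   // e.g. "2008-09-16T01:23:45.678Z"

DateTime parsed = Rfc3339DateTime.Parse(rfc3339);    // back to a UTC DateTime

DateTime fromText;
bool ok = Rfc3339DateTime.TryParse("2002-10-02T10:00:00-05:00", out fromText);
// ok == true; fromText has been adjusted to UTC (15:00:00Z).</code></pre>

Remember that ToString throws an ArgumentException if the DateTime you pass in isn't of DateTimeKind.Utc.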
The latest Netbeans IDE (6.1) has a pretty solid Ruby support. You can check it out [here](http://www.netbeans.org/features/ruby/index.html).