If you're using SQL Server, there's the [HashBytes function][1].

[1]: http://msdn.microsoft.com/en-us/library/ms174415.aspx
Register file extensions / mime types in Linux
|mime|file-type|freedesktop|linux|installation|
I'm developing a Linux application that has its own file format. I want my app to open when you double-click on those files. How can I register a file extension and associate it with my application on Linux? I'm looking for a way that is standard (works with GNOME- and KDE-based systems) and can be done automatically when my program is installed or run for the first time.
|linux|installation|mime|file-type|
This is the web service to call: http://developer.yahoo.com/search/local/V2/localSearch.html

This site has OK web services, but not exactly what you're asking for here: http://www.usps.com/webtools/
I second the vote for MD5 or SHA with a salt. Any of the major web development languages have functions built-in for computing the hash (in PHP, for example, the mcrypt package contains the necessary functions).
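The salted-hash idea is the same in any of those languages; a minimal sketch in Python's standard hashlib (using SHA-256, with function names of my own invention):

```python
import hashlib
import os

def hash_password(password, salt=None):
    # A per-user random salt defeats precomputed lookup tables.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return salt, digest            # store both alongside the user record

def verify_password(password, salt, expected_digest):
    # Recompute with the stored salt and compare.
    return hashlib.sha256(salt + password.encode("utf-8")).hexdigest() == expected_digest

salt, digest = hash_password("s3cret")
assert verify_password("s3cret", salt, digest)
assert not verify_password("wrong", salt, digest)
```

For real password storage a deliberately slow hash is preferable, but this shows the salting mechanics.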
Regex to validate URIs
|regex|.net|
|.net|regex|
How do you produce a regex that matches only valid URIs? The description for URIs can be found here: http://en.wikipedia.org/wiki/URI_scheme. It doesn't need to extract any parts, just test if a URI is valid. (Preferred format is a .Net RegularExpression; .Net Version 1.1.) It doesn't need to check for a known protocol, just a valid one.

Current solution:

    ^([a-zA-Z0-9+.-]+):(//([a-zA-Z0-9-._~!$&'()*+,;=:]*)@)?([a-zA-Z0-9-._~!$&'()*+,;=]+)(:(\\d*))?(/?[a-zA-Z0-9-._~!$&'()*+,;=:/]+)?(\\?[a-zA-Z0-9-._~!$&'()*+,;=:/?@]+)?(#[a-zA-Z0-9-._~!$&'()*+,;=:/?@]+)?$(:(\\d*))?(/?[a-zA-Z0-9-._~!$&'()*+,;=:/]+)?(\?[a-zA-Z0-9-._~!$&'()*+,;=:/?@]+)?(\#[a-zA-Z0-9-._~!$&'()*+,;=:/?@]+)?$
Is it really that bad to catch a general exception?
|exception|
Whilst analysing some legacy code with FxCop, it occurred to me: is it really that bad to catch a general exception within a try block, or should you be looking for a specific exception? Thoughts on a postcard please.
Well, I don't see any difference between catching a general exception or a specific one, except that when having multiple catch blocks, you can react differently depending on what the exception is. You will catch both IOException and NullPointerException with a generic Exception, but the way your program should react is probably different.
I think a good guideline is to catch only specific exceptions from within a framework (so that the host application can deal with edge cases like the disk filling up etc), but I don't see why we shouldn't be able to catch all exceptions from our application code. Quite simply there are times where you don't want the app to crash, no matter what might go wrong.
Most of the time catching a general exception is not needed. Of course there are situations where you don't have a choice, but in this case I think it's better to check why you need to catch it. Maybe there's something wrong in your design.
Unless you are doing some logging and clean up code in the front end of your application, then I think it is bad to catch all exceptions. My basic rule of thumb is to catch all the exceptions you expect and anything else is a bug. If you catch everything and continue on, it's a bit like putting a sticking plaster over the warning light on your car dashboard. You can't see it anymore, but it doesn't mean everything is ok.
The point is twofold, I think.

Firstly, if you don't know what exception has occurred, how can you hope to recover from it? If you expect that a user might type a filename in wrong, then you can expect a FileNotFoundException and tell the user to try again. If that same code generated a NullReferenceException and you simply told the user to try again, they wouldn't know what had happened.

Secondly, the FxCop guidelines do focus on Library/Framework code - not all their rules are designed to be applicable to EXEs or ASP.Net web sites. So having a global exception handler that will log all exceptions and exit the application nicely is a good thing to have.
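The catch-only-what-you-expect idea translates to any language; a hedged sketch in Python (the path and function name are made up):

```python
def read_config(path):
    """Recover only from the failure we actually expect: a missing file."""
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        # Expected and recoverable: the caller can ask the user to retry.
        return None
    # Anything else (say, a TypeError from a bug) propagates up to a
    # top-level handler that can log it and exit cleanly.

assert read_config("/no/such/file/hopefully.cfg") is None
```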
Obviously this is one of those questions where the only real answer is "it depends."

The main thing it depends on is where you are catching the exception. In general, libraries should be more conservative with catching exceptions, whereas at the top level of your program (e.g. in your main method or in the top of the action method in a controller, etc) you can be more liberal with what you catch.

The reason for this is that e.g. you don't want to catch all exceptions in a library, because you may mask problems that have nothing to do with your library, like "OutOfMemoryException", which you really would prefer bubbles up so that the user can be notified, etc. On the other hand, if you are talking about catching exceptions inside your main() method, which catches the exception, displays it and then exits... well, it's probably safe to catch just about any exception here.

The most important rule about catching all exceptions is that you should never just swallow all exceptions silently... e.g. something like this in Java:

    try {
        something();
    } catch (Exception ex) {}

or this in Python:

    try:
        something()
    except:
        pass

Because these can be some of the hardest issues to track down.

A good rule of thumb is that you should only catch exceptions that you can properly deal with yourself. If you cannot handle the exception completely, then you should let it bubble up to someone who can.
The *args and \**kwargs is a common idiom to allow an arbitrary number of arguments to functions, as described in the section [more on defining functions][1] in the Python documentation.

The *args will give you all positional function parameters as a tuple:

    In [1]: def foo(*args):
       ...:     for a in args:
       ...:         print a
       ...:

    In [2]: foo(1)
    1

    In [4]: foo(1,2,3)
    1
    2
    3

The \**kwargs will give you all **keyword arguments** except for those corresponding to a formal parameter, as a dictionary:

    In [5]: def bar(**kwargs):
       ...:     for a in kwargs:
       ...:         print a, kwargs[a]
       ...:

    In [6]: bar(name="one", age=27)
    age 27
    name one

Both idioms can be mixed with normal arguments to allow a set of fixed and some variable arguments:

    def foo(kind, *args, **kwargs):
        pass

Another usage of the *l idiom is to **unpack argument lists** when calling a function:

    In [9]: def foo(bar, lee):
       ...:     print bar, lee
       ...:

    In [10]: l = [1,2]

    In [11]: foo(*l)
    1 2

In the upcoming Python 3.0 it will be possible to use *l on the left side of an assignment ([Extended Iterable Unpacking][2]):

    first, *rest = [1,2,3,4]
    first, *l, last = [1,2,3,4]

[1]: http://docs.python.org/dev/tutorial/controlflow.html#more-on-defining-functions
[2]: http://www.python.org/dev/peps/pep-3132/
One thing I would suggest: ensure your build script (and installer project, if relevant in your case) is in source control. I tend to have a very simple script that just checks out/gets latest the "main" build script, then launches it.

I say this b/c I see teams just running the latest version of the build script on the server, but either never putting it in source control, or when they do, only checking it in on a random basis. If you make the build process "get" from source control, it will force you to keep the latest and greatest build script in there.
In most languages with decent optimization the former will work just fine.
Calculating Distance Between 2 Cities
|algorithm|
How do you calculate the distance between 2 cities?
|algorithm|mathematics|geography|
|algorithm|mathematics|geography|trigonometry|
|algorithm|math|trigonometry|geography|
Easiest way for me to illustrate is using in-memory objects so it's clear what's happening. LINQ to SQL should be able to take that same LINQ query and translate it into appropriate SQL.

    public class Site
    {
        static void Main()
        {
            List<Site> sites = new List<Site>()
            {
                new Site() { SiteID = 1, VisitorType = 1, Last30 = 10, Total = 100, },
                new Site() { SiteID = 1, VisitorType = 2, Last30 = 40, Total = 140, },
                new Site() { SiteID = 2, VisitorType = 1, Last30 = 20, Total = 180, },
            };

            var totals =
                from s in sites
                group s by s.SiteID into grouped
                select new
                {
                    SiteID = grouped.Key,
                    Last30Sum = (from value in grouped select value.Last30).Sum(),
                };

            foreach (var total in totals)
            {
                Console.WriteLine("Site: {0}, Last30Sum: {1}", total.SiteID, total.Last30Sum);
            }
        }

        public int SiteID { get; set; }
        public int VisitorType { get; set; }
        public int Last30 { get; set; }
        public int Total { get; set; }
    }
To determine what tags apply to a particular file use:

    cvs log <filename>

This will output all the versions of the file and what tags have been applied to the version.

To determine what files are included in a single tag, the only thing I can think of is to check out using the tag and see what files come back. The command for that is any of:

    cvs update -r <tagname>
    cvs co <modulename> -r <tagname>
    cvs export <modulename> -r <tagname>
BODMAS
|maths|
What is BODMAS and why is it useful in programming?
JavaScript and Threads
|javascript|
Is there some way to do multi-threading in JavaScript?
|javascript|multithreading|
@Mike: this kind of thinking makes me nervous. I've heard too many times "this should be infinitely portable", but when the question is asked - do you actually foresee that there will be any porting? - the answer is: no.

Sticking to the lowest common denominator can really hurt performance, as can the introduction of abstraction layers (ORMs, PHP PDO, etc). My opinion is:

- Evaluate realistically if there is a need to support multiple RDBMSs. For example, if you are writing an open source web application, chances are that you need to support MySQL and PostgreSQL at least (if not MSSQL and Oracle)
- After the evaluation, make the most of the platform you decided upon

And BTW: you are mixing relational with non-relational databases (CouchDB is *not* an RDBMS comparable with Oracle, for example), further exemplifying the point that the perceived need for portability is many times greatly overestimated.
Can the same Adobe AIR app run more than once?
|javascript|air|adobe|
As the title says, is there a way to run the same Adobe AIR app more than once? I have a little widget I wrote that shows thumbnails from a couple of photo streams, and I'd like to fix it so I can look at more than one stream at a time. Thanks!
I think Jeff complained about this recently. One common technique is to drag all the objects into the designer again... I hope someone else chimes in with a better approach!
How to find a normal vector pointing directly from virtual world to screen in Java3D?
|java|graphics|java-3d|
I think it can be done by applying the transformation matrix of the scenegraph to the z-normal (0, 0, 1), but it doesn't work. My code goes like this:

    Vector3f toScreenVector = new Vector3f(0, 0, 1);
    Transform3D t3d = new Transform3D();
    tg.getTransform(t3d); // tg is the TransformGroup of all objects in the scene
    t3d.transform(toScreenVector);

Then I tried something like this too:

    Point3d eyePos = new Point3d();
    Point3d mousePos = new Point3d();
    canvas.getCenterEyeInImagePlate(eyePos);
    canvas.getPixelLocationInImagePlate(new Point2d(Main.WIDTH/2, Main.HEIGHT/2), mousePos); // Main is the class for the main window

    Transform3D motion = new Transform3D();
    canvas.getImagePlateToVworld(motion);
    motion.transform(eyePos);
    motion.transform(mousePos);

    Vector3d toScreenVector = new Vector3d(eyePos);
    toScreenVector.sub(mousePos);
    toScreenVector.normalize();

But still this doesn't work correctly. I think there must be an easy way to create such a vector. Do you know what's wrong with my code, or a better way to do so?
Theoretically, no, they are not unique. It's possible to generate an identical GUID over and over. However, the chances of it happening are so low that you can assume they are unique.

I've read before that the chances are so low that you really should stress about something else - like your server spontaneously combusting or other bugs in your code. That is, assume it's unique and don't build in any code to "catch" duplicates - spend your time on something more likely to happen (i.e. *anything* else).

I [made an attempt][1] to describe the usefulness of GUIDs to my blog audience (non-technical family members). From there (via Wikipedia), the odds of generating a duplicate GUID:

- 1 in 2^128
- 1 in 340 undecillion (don't worry, undecillion is not on the quiz)
- 1 in 3.4 × 10^38
- 1 in 340,000,000,000,000,000,000,000,000,000,000,000,000

[1]: http://www.wassupy.com/20080105/technology/pick-a-number-any-number
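Those odds can be sanity-checked with a couple of lines of Python (the trillion-GUID figure below is just an illustration, not from the answer):

```python
n = 2 ** 128                      # possible GUID values
assert len(str(n)) == 39          # ~3.4 x 10**38, i.e. 340 undecillion

# Birthday-paradox estimate: the odds of *any* collision among k
# random GUIDs is roughly k**2 / (2*n) for small probabilities.
k = 10 ** 12                      # generate a trillion GUIDs...
collision_odds = k * k / (2 * n)
assert collision_odds < 1e-14     # ...and a collision is still absurdly unlikely
```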
I have a few lines of PowerShell code that I would like to use as an automated script. The way I would like it to be able to work is to be able to call it using one of the following options:

1. One command line that opens PowerShell, executes the script and closes PowerShell (this would be used for a global build-routine)
2. A file that I can double-click to run the above (I would use this method when manually testing components of my build process)

I have been going through PowerShell documentation online, and although I can find lots of scripts, I have been unable to find instructions on how to do what I need. Thanks for the help.
Set up PowerShell Script for Automatic Execution
|command-line|scripting|powershell|
Save your script as a .ps1 file and launch it using powershell.exe, like this:

    powershell.exe .\foo.ps1

Make sure you specify the full path to the script, and make sure you have set your execution policy level to at least "RemoteSigned" so that unsigned local scripts can be run.
Database - since you mentioned multiple threads. Synchronization as well as filtered retrieval are my reasons for my answer. See if you have a performance problem before deciding to switch to files. *"Knuth: Premature optimization is the root of all evil." I didn't get any further in that book... :)*
There are ways you can work around the limitations of file logging. You can always start each log entry with a thread id of some kind, and grep out the individual thread ids. Or a different log file for each thread. I've logged to database in the past, in a separate thread at a lower priority. I must say, queryability is very valuable when you're trying to figure out what went wrong.
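The thread-id-prefix idea can be sketched with Python's standard logging module, which can stamp the originating thread's name on every line for free (the logger name and message are made up):

```python
import io
import logging

# Format every record with the thread name first, so a single
# thread's entries can be grepped out of the file later.
buf = io.StringIO()               # stands in for a log file here
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(threadName)s %(levelname)s %(message)s"))

log = logging.getLogger("demo")
log.addHandler(handler)
log.warning("disk nearly full")

assert buf.getvalue().startswith("MainThread")
```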
Most of the memory overhead will come from the opcode cache size. Each opcode cacher has its own default (e.g. 30MB for APC), which you can change through the config file. Other than the cache size, the actual memory overhead of the cacher itself is negligible.
I second the other answers here: it **depends on what you are doing with the data**. We have two scenarios here:

1. The majority of the logging is to a DB, since admin users for the products we build need to be able to view them in their nice little app with all the bells and whistles.
2. We log all of our diagnostics and debug info to file. We have no need for really "prettifying" it and, TBH, we don't even often need it, so we just log and archive for the most part.

I would say if the user is doing anything with it, then log to DB; if it's for you, then a file will probably suffice.
How about logging to database-file, say a SQLite database? I think it can handle multi-threaded writes - although that may also have its own performance overheads.
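A minimal sketch of that idea with Python's built-in sqlite3 module (the table name and columns are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # a file path in real use
conn.execute("CREATE TABLE log (ts TEXT, thread TEXT, msg TEXT)")

def write_log(thread, msg):
    # SQLite serialises writers at the database level, so concurrent
    # writes don't interleave half-written entries.
    with conn:                       # one transaction per entry
        conn.execute(
            "INSERT INTO log VALUES (datetime('now'), ?, ?)", (thread, msg)
        )

write_log("worker-1", "started")
write_log("worker-2", "started")

# Filtered retrieval is now just SQL:
rows = conn.execute("SELECT msg FROM log WHERE thread = 'worker-1'").fetchall()
assert rows == [("started",)]
```

Note that a connection is not shareable across threads by default; in a real multi-threaded logger each thread would open its own connection (or writes would be funnelled through one thread).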
WinForms databinding and foreign key relationships
If you can use some inline assembler and do the following (pseudo-assembler):

    PUSH A
    A=B
    POP B

You will save a lot of parameter passing and stack fix-up code, etc.
To help facilitate understanding of closures, it might be useful to examine how they might be implemented in a procedural language. This explanation will follow a simplistic implementation of closures in Scheme.

To start, I must introduce the concept of a namespace. When you enter a command into a Scheme interpreter, it must evaluate the various symbols in the expression and obtain their value. Example:

    (define x 3)
    (define y 4)
    (+ x y) returns 7

The define expressions store the value 3 in the spot for x and the value 4 in the spot for y. Then when we call (+ x y), the interpreter looks up the values in the namespace and is able to perform the operation and return 7.

However, in Scheme there are expressions that allow you to temporarily override the value of a symbol. Here's an example:

    (define x 3)
    (define y 4)
    (let ((x 5))
      (+ x y)) returns 9
    x returns 3

What the let keyword does is introduce a new namespace with x as the value 5. You will notice that it's still able to see that y is 4, making the sum returned 9. You can also see that once the expression has ended, x is back to being 3. In this sense, x has been temporarily masked by the local value.

Procedural and object-oriented languages have a similar concept. Whenever you declare a variable in a function that has the same name as a global variable, you get the same effect.

How would we implement this? A simple way is with a linked list - the head contains the new value and the tail contains the old namespace. When you need to look up a symbol, you start at the head and work your way down the tail.

Now let's skip to the implementation of first-class functions for the moment. More or less, a function is a set of instructions to execute when the function is called, culminating in the return value. When we read in a function, we can store these instructions behind the scenes and run them when the function is called.

    (define x 3)
    (define (plus-x y)
      (+ x y))
    (let ((x 5))
      (plus-x 4)) returns ?
We define x to be 3 and plus-x to be its parameter, y, plus the value of x. Finally we call plus-x in an environment where x has been masked by a new x, this one valued 5. If we merely store the operation, (+ x y), for the function plus-x, since we're in the context of x being 5, the result returned would be 9. This is what's called dynamic scoping.

However, Scheme, Common Lisp, and many other languages have what's called lexical scoping - in addition to storing the operation (+ x y), we also store the namespace at that particular point. That way, when we're looking up the values, we can see that x, in this context, is really 3. This is a closure.

    (define x 3)
    (define (plus-x y)
      (+ x y))
    (let ((x 5))
      (plus-x 4)) returns 7

In summary, we can use a linked list to store the state of the namespace at the time of function definition, allowing us to access variables that otherwise would no longer be in scope.
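The same lexical-scoping behaviour can be shown in Python for contrast (function names are my own):

```python
def make_plus_x(x):
    # plus_x closes over the namespace in which it was defined,
    # so it remembers *this* x.
    def plus_x(y):
        return x + y
    return plus_x

plus_x = make_plus_x(3)
x = 5                  # a different binding of x; under dynamic
                       # scoping plus_x(4) would now return 9
assert plus_x(4) == 7  # lexical scoping: still uses x = 3
```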
Yes! (except at the "top" of your application)

By catching an exception and allowing the code execution to continue, you are stating that you know how to deal with, circumvent, or fix a particular problem. You are stating that this is **a recoverable situation**. Catching Exception or SystemException means that you will catch problems like IO errors, network errors, out-of-memory errors, missing-code errors, null-pointer dereferencing and the like. It is a lie to say that you can deal with these.

In a well-organised application, these unrecoverable problems should be handled high up the stack.

In addition, as code evolves, you don't want your function to catch a new exception that is added *in the future* to a called method.
How do I compare two arrays of DataRow objects in PowerShell?
|arrays|powershell|comparison|
I have two arrays of System.Data.DataRow objects which I want to compare. The rows have two columns A and B. Column A is a key and I want to find out which rows have had their B column changed and which rows have been added or deleted. How do I do this in PowerShell?
Just a quick update on this, I have not forgotten! Just doing some more research; looks like I am on to a winner, just need to get the code sorted. So far, I have the following:

* The _XmlSerializer_ is basically a class that does some nifty reflection on the classes it is serializing. It determines the properties that are serialized based on the **Type**.
* The reason the problem occurs is because a type mismatch is occurring: it is expecting the _BaseType_ but in fact receives the _DerivedType_. While you may think that it would treat it polymorphically, it doesn't, since that would involve a whole extra load of reflection and type-checking, which it is not designed to do.

This behaviour appears to be able to be overridden (code pending) by creating a proxy class to act as the go-between for the serializer. This will basically determine the type of the derived class and then serialize that as normal. This proxy class will then feed that XML back up the line to the main serializer.

Watch this space! ^_^
In my opinion you should catch all exceptions you **expect**, but this rule applies to anything but your interface logic. All the way down the call stack you should probably create a way to catch all exceptions, do some logging/give user feedback and, if needed and possible, shut down gracefully.

Nothing is worse than an application crashing with some user-unfriendly stack trace dumped to the screen. Not only does it give (perhaps unwanted) insight into your code, but it also confuses your end users, and sometimes even scares them away to a competing application.
Why does this python date/time conversion seem wrong?
|python|datetime|
    >>> import time
    >>> time.strptime("01-31-2009", "%m-%d-%Y")
    (2009, 1, 31, 0, 0, 0, 5, 31, -1)
    >>> time.mktime((2009, 1, 31, 0, 0, 0, 5, 31, -1))
    1233378000.0
    >>> 60*60*24 # seconds in a day
    86400
    >>> 1233378000.0 / 86400
    14275.208333333334

time.mktime should return the number of seconds since the epoch. Since I'm giving it a time at midnight and the epoch is at midnight, shouldn't the result be evenly divisible by the number of seconds in a day?

Thanks!
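For what it's worth, the arithmetic can be checked directly: mktime interprets the tuple as *local* time, while calendar.timegm interprets it as UTC, so the leftover 18000 seconds in the transcript is a five-hour UTC offset. A sketch:

```python
import calendar
import time

tm = time.strptime("01-31-2009", "%m-%d-%Y")

utc_secs = calendar.timegm(tm)    # tuple treated as UTC midnight
local_secs = time.mktime(tm)      # tuple treated as *local* midnight

# UTC midnight is evenly divisible by a day; local midnight generally
# is not, the gap being the machine's offset from UTC.
assert utc_secs % 86400 == 0
```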
Keep to the ideal of inbox zero in the actual inbox, then employ a decent search engine (Google Desktop or Xobni for example). I have a handful of project- or filter-specific folders (e.g. for system generated status messages that go to a mailing list), but generally all archived email is dumped in one folder. In Outlook 2007 categories (which can approach the usefulness of tags) do add a potentially useful dimension.
My book recommendations:

- **Essential C++** (Lippman)
- **C++ Common Knowledge: Essential Intermediate Programming** (Dewhurst)

...and I second the **Effective C++** suggestion above.

A very handy alternative to buying books in meatspace is to subscribe to a service like [Safari Books Online][1]. For a not unreasonable monthly fee you'll get access to all of the above books plus a bajillion others. If you desire fast random access to more than a couple of books, it pretty much pays for itself. It's an easy case to make if you want to convince your employer to pay for it.

Beyond that, sit yourself in front of an IDE that has a C++ code completion feature (I use Eclipse/CDT most of the time).

[1]: http://techbus.safaribooksonline.com/home
I strongly suggest using the [VideoInput lib][1]; it supports any DirectShow device (even multiple devices at the same time) and is more configurable. You'll spend five minutes making it play with OpenCV.

[1]: http://muonics.net/school/spring05/videoInput/
I don't think it's possible to limit the number of words returned; however, to limit the number of chars returned you could do something like:

    SELECT SUBSTRING(field_name, LOCATE('keyword', field_name) - chars_before, total_chars)
    FROM table_name
    WHERE field_name LIKE "%keyword%"

- chars_before - the number of chars you wish to select before the keyword(s)
- total_chars - the total number of chars you wish to select

i.e. the following example would return 30 chars of data starting from 15 chars before the keyword:

    SUBSTRING(field_name, LOCATE('keyword', field_name) - 15, 30)

Note: as aryeh pointed out, any negative values in SUBSTRING() buggers things up considerably - for example, if the keyword is found within the first [chars_before] chars of the field, then the last [chars_before] chars of data in the field are returned.
Change the message: just provide the primary key and the current date, not the key/value pairs. Your MDB fetches the entity by primary key and calls index(). After indexing, you set an "updated" value in your index to the message date. You update your index only if the message date is after the "updated" field of the index. This way you can't get behind, because you always fetch the current key/value pairs first.

As an alternative, have a look at [http://www.compass-project.org][1].

[1]: http://www.compass-project.org
A good starting place is "Thinking in C++" by Bruce Eckel, I've rarely had anyone complain about the book. Well written and also has a version available online.
Determine if a ruby script is already running
|ruby|
Is there an easy way to tell if a ruby script is already running and then handle it appropriately? For example: I have a script called really_long_script.rb. I have it cronned to run every 5 minutes. When it runs, I want to see if the previous run is still running and then stop the execution of the second script. Any ideas?
You can also [Uuencode][1] your original binary data. This format is a bit older, but it does the same thing as base64 encoding.

[1]: http://en.wikipedia.org/wiki/Uuencode
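Python's standard binascii module still speaks uuencoding, so the round trip is easy to check:

```python
import binascii

data = b"hello world"

# b2a_uu encodes up to 45 bytes per call (one uuencoded line).
line = binascii.b2a_uu(data)
assert binascii.a2b_uu(line) == data

# Like base64, the output is plain printable ASCII (plus a newline):
assert all(32 <= b < 127 or b == 10 for b in line)
```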
The recommendations in the accepted answer are a little out of date now.

- The Windows Forms implementation is pretty good now. (See [Paint-Mono][1] for a port of Paint.net, which is a pretty involved Windows Forms application. All that was required was an emulation layer for some of the P/Invoke and unsupported system calls.)
- Use Path.Combine as well as Path.DirectorySeparatorChar to join paths and filenames.
- The Windows registry is OK, as long as you are only using it for storing and retrieving data from your applications (i.e. you can't get any information about Windows from it, since it is basically a registry for Mono applications).

[1]: http://code.google.com/p/paint-mono/
Unless the data is truely random with a symmetric 1/0 distribution, this simply becomes a lossless data compression problem and is very analogous to CCITT Group 3 compression used for black and white (ie: Binary) FAX images. CCITT Group 3 uses a Huffman Coding scheme. In the case of FAX they are using a fixed set of Huffman codes, but for a given data set, you can generate a specific set of codes for each data set to improve the compression ratio achived. As long as you only need to access the bits sequencially, as you implied, this will be a pretty efficient approach. Random access would create some additional challenges, but you could probably generate a binary tree index to various points in the array that would allow you to get close to the desired location and then walk in from there. If it is truely random with an even distribution, then, well, according to *Mr. Claude Shannon*, you not going to be able to compress any significant amount using any scheme.
Unless the data is truly random ***and*** has a symmetric 1/0 distribution, this simply becomes a lossless data compression problem and is very analogous to CCITT Group 3 compression, used for black-and-white (i.e. binary) FAX images. CCITT Group 3 uses a Huffman coding scheme. In the case of FAX, a fixed set of Huffman codes is used, but for a given data set you can generate a specific set of codes for that data set to improve the compression ratio achieved. As long as you only need to access the bits sequentially, as you implied, this will be a pretty efficient approach. Random access would create some additional challenges, but you could probably generate a binary search tree index to various offset points in the array that would allow you to get close to the desired location and then walk in from there.

***Note***: The Huffman scheme still works well even if the data is random, as long as the 1/0 distribution is not perfectly even. That is, the less even the distribution, the better the compression ratio.

Finally, if the bits are truly random with an even distribution, then, well, according to *Mr. Claude Shannon*, you are not going to be able to compress it any significant amount using any scheme.
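As a rough Python sketch of the "generate codes per data set" idea (this is not CCITT Group 3 itself; the 4-bit chunking and the 90%-zeros source below are assumptions purely for illustration):

```python
import heapq
import random
from collections import Counter
from itertools import count

def huffman_codes(freqs):
    """Build a prefix code from symbol frequencies via a min-heap."""
    tick = count()  # tie-breaker so the heap never compares dicts
    heap = [(f, next(tick), {sym: ""}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in a.items()}
        merged.update({s: "1" + c for s, c in b.items()})
        heapq.heappush(heap, (fa + fb, next(tick), merged))
    return heap[0][2]

# A biased bit array (90% zeros) grouped into 4-bit symbols.
random.seed(1)
bits = "".join("0" if random.random() < 0.9 else "1" for _ in range(4000))
symbols = [bits[i:i + 4] for i in range(0, len(bits), 4)]

codes = huffman_codes(Counter(symbols))
encoded = "".join(codes[s] for s in symbols)
print(len(encoded), "<", len(bits))  # compressed length beats the raw bits
```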
|math|
Since reddit's ranking algorithm rocks, it makes a lot of sense to take a look at it, if not copy it: ![alt text][1] [1]: http://redflavor.com/reddit.cf.algorithm.png
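For reference, the formula in the image is usually transcribed along these lines (a sketch of the commonly cited version; the epoch constant and the 45000-second divisor come from that transcription, not verified against reddit's source):

```python
from datetime import datetime, timezone
from math import log10

# Commonly cited epoch used by reddit's "hot" ranking (an assumption here).
EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def hot(ups: int, downs: int, date: datetime) -> float:
    s = ups - downs                    # net score
    order = log10(max(abs(s), 1))      # the first 10 votes weigh like the next 100
    sign = 1 if s > 0 else -1 if s < 0 else 0
    seconds = (date - EPOCH).total_seconds()
    return round(sign * order + seconds / 45000, 7)

print(hot(10, 0, EPOCH))   # ten net upvotes right at the epoch → 1.0
```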
If all prisoners are killed when someone fails to find their number, then you either save all 100 or none. There is no way to save exactly 30 people.
I've seen a lot of these types of questions lately--optimization to the nth degree. I think it makes sense in certain circumstances:

1. Computing condition 2 is not a constant-time operation.
2. You are asking strictly for educational purposes--you want to know how the language works, not to save 3 µs.

In other cases, worrying about the "fastest" way to iterate or check a conditional is silly. Instead of writing tests which require millions of trials to see any measurable (but insignificant) difference, focus on clarity.

When someone else (could be you!) picks up this code in a month or a year, what's going to be most important is clarity. In this case, your first example is shorter, clearer and doesn't require you to repeat yourself.
According to [this article][1], PHP does short-circuit evaluation, which means that if the first operand already determines the result, the second is not evaluated at all. It's quite easy to test, too (from the article):

    <?php
    /* ch06ex07 – shows no output because of short circuit evaluation */

    if (true || $intVal = 5) // short circuits after true
    {
        echo $intVal; // will be empty because the assignment never took place
    }
    ?>

  [1]: http://www.kekh.com/chapter-6/short-circuit-evaluation/
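For comparison, the same short-circuit behavior is easy to observe in other languages too; a minimal Python sketch:

```python
evaluated = []

def track(value):
    """Record that this operand was actually evaluated."""
    evaluated.append(value)
    return value

result = True or track(5)   # `or` short-circuits: track() never runs
assert result is True
assert evaluated == []      # the right-hand side was skipped

result = False or track(5)  # now the right-hand side must be evaluated
assert result == 5
assert evaluated == [5]
```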
You could iterate over the entire workbook using VBA (I've included the code from @PabloG and @euro-micelli):

    Sub iterateOverWorkbook()
        Dim i As Worksheet, j As Range, rRng As Range
        For Each i In ThisWorkbook.Worksheets
            Set rRng = i.UsedRange
            For Each j In rRng
                If Not IsEmpty(j) Then
                    If j.HasFormula Then
                        If InStr(j.Formula, "INDIRECT") Then
                            j.Formula = Replace(j.Formula, "INDIRECT(D4)", "INDIRECT(C4)")
                        End If
                    End If
                End If
            Next j
        Next i
    End Sub

This example substitutes every occurrence of "INDIRECT(D4)" with "INDIRECT(C4)". You can easily swap the Replace call for something more sophisticated if you have more complicated INDIRECT formulas. Performance is not that bad, even for bigger workbooks.
My suggestion would be an ASCII captcha: it does not use an image, and it's programmer/geeky. Here is a PHP implementation, http://thephppro.com/products/captcha/ (this one is paid). There is also a free PHP implementation, though I could not find a live example -> http://www.phpclasses.org/browse/package/4544.html I know these are in PHP, but I'm sure you smart guys building SO can 'port' it to your favorite language.
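A hand-rolled sketch of the idea in Python (the tiny 3-row "font" below is invented for illustration and covers only a few digits; it is not taken from either linked package):

```python
import random

# Minimal hypothetical 3-row ASCII "font", one 3-column glyph per digit.
FONT = {
    "0": [" _ ", "| |", "|_|"],
    "1": ["   ", "  |", "  |"],
    "2": [" _ ", " _|", "|_ "],
    "3": [" _ ", " _|", " _|"],
    "4": ["   ", "|_|", "  |"],
}

def render_captcha(code):
    """Render a digit string as one block of ASCII art, row by row."""
    rows = ["".join(FONT[ch][r] for ch in code) for r in range(3)]
    return "\n".join(rows)

def new_captcha(length=4):
    """Pick a random code and return (answer, ASCII art to display)."""
    code = "".join(random.choice(list(FONT)) for _ in range(length))
    return code, render_captcha(code)

code, art = new_captcha()
print(art)   # show the art to the user; compare their input against `code`
```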
There are drawbacks to using reference counting. One of the most often mentioned is circular references: suppose A references B, B references C, and C references B. If A were to drop its reference to B, both B and C would still have a reference count of 1 and would never be deleted with traditional reference counting. CPython (reference counting is not part of Python itself, but part of the C implementation thereof) catches circular references with a separate garbage collection routine that it runs periodically...

Another drawback: reference counting can make execution slower. Each time an object is referenced and dereferenced, the interpreter/VM must check whether the count has gone down to 0 (and then deallocate if it did). Garbage collection does not need to do this. Also, garbage collection can be done in a separate thread (though it can be a bit tricky). On machines with lots of RAM, and for processes that use memory only slowly, you might not want to be doing GC at all! Reference counting would be a big drawback there in terms of performance...
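The circular-reference case is easy to reproduce in CPython with the stdlib `gc` module (a minimal sketch):

```python
import gc

class Node:
    def __init__(self):
        self.other = None

gc.collect()                 # start from a clean slate

# Build a cycle: b and c reference each other.
b, c = Node(), Node()
b.other, c.other = c, b

del b, c                     # refcounts stay at 1 -- pure refcounting leaks here
found = gc.collect()         # CPython's cycle detector reclaims the pair
assert found >= 2            # at least the two Node objects were unreachable
```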
I finally got multisampling working with my wxWidgets OpenGL program. It's a bit messy right now, but here's how:

**wxWidgets** doesn't have **multisampling** support in their **stable releases** right now (the latest version at this time is **2.8.8**). But it's available as a patch and also through their daily snapshot. (The latter is heartening, since it means the patch has been accepted and should appear in later stable releases if there are no issues.)

So, there are 2 options:

1. Download and build from their **[daily snapshot][1]**.
2. Get the **[patch][2]** for your working wxWidgets installation.

I found the 2nd option to be less cumbersome, since I don't want to disturb my working installation any more than necessary. If you don't know how to apply a patch on Windows, see [this][3].

At the very least, for Windows, the patch will modify the following files:

    $(WX_WIDGETS_ROOT)/include/wx/glcanvas.h
    $(WX_WIDGETS_ROOT)/include/wx/msw/glcanvas.h
    $(WX_WIDGETS_ROOT)/src/msw/glcanvas.cpp

After patching, **recompile** the wxWidgets libraries.

To enable multisampling in your wxWidgets OpenGL program, minor changes to the code are required. An attribute list needs to be passed to the **wxGLCanvas** constructor:

    int attribList[] = {WX_GL_RGBA,
                        WX_GL_DOUBLEBUFFER,
                        WX_GL_SAMPLE_BUFFERS, GL_TRUE, // Multi-sampling
                        WX_GL_DEPTH_SIZE, 16,
                        0, 0};

If you were already using an attribute list, add the line with `WX_GL_SAMPLE_BUFFERS, GL_TRUE` to it. Otherwise, add this attribute list definition to your code. Then modify your wxGLCanvas constructor to take this attribute list as a parameter:

    myGLFrame::myGLFrame // Derived from wxGLCanvas
    (
        wxWindow *parent,
        wxWindowID id,
        const wxPoint& pos,
        const wxSize& size,
        long style,
        const wxString& name
    )
    : wxGLCanvas(parent, (wxGLCanvas*) NULL, id, pos, size, style, name, attribList)
    {
        // ...
    }

After the wxGLCanvas element is created, multisampling is turned on by default.
To disable or enable it at will, use the related OpenGL calls:

    glEnable(GL_MULTISAMPLE);
    glDisable(GL_MULTISAMPLE);

Multisampling should now work with the wxWidgets OpenGL program. Hopefully, it will be supported in a stable release of wxWidgets soon, making this information irrelevant :-)

  [1]: http://biolpc22.york.ac.uk/pub/Daily_HEAD/
  [2]: https://sourceforge.net/tracker/?func=detail&atid=309863&aid=1915804&group_id=9863
  [3]: http://stackoverflow.com/questions/19611/