I'm running the following query in Hypersonic DB (HSQLDB): ``` SELECT (CASE foo WHEN 'a' THEN 'bar' WHEN 'b' THEN 'biz' .... ELSE 'fin' END ) FROM MyTable LIMIT 1 ``` When the number of "WHEN" clauses exceeds about 1000, I get a Java `StackOverflowError` thrown by the JDBC driver in `org.hsqldb.jdbc.Util.sqlException()`. Here's the really weird part: I tried breaking up my `CASE` statement into pieces with e.g. 100 WHEN clauses followed by `ELSE ( CASE foo WHEN ... ) END`. But even with this rewrite I get *exactly* the same behavior! I don't see any reference to a limit of 1000 or anything else in the HSQLDB manual. Help!
You should never get anywhere near 1000 terms in a `CASE` statement. Long before that, you should put the other values into a separate table and pick them by joining. ``` INSERT INTO MappingTable (foo, string) VALUES ('a', 'bar'), ('b', 'biz'), ... SELECT COALESCE(m.string, 'fin') FROM MyTable t LEFT OUTER JOIN MappingTable m USING (foo) LIMIT 1; ``` --- Java API says about StackOverflowError: > Thrown when a stack overflow occurs because an application recurses too deeply. So I would guess that when HSQLDB parses a `CASE` expression, each `WHEN` term adds another layer to the runtime stack (actually probably several layers per `WHEN`). You'd probably get a similar StackOverflowError if you had an arithmetic expression with 1,000 levels of nested parentheses. The limit of 1,000 is probably variable, depending on the implementation of the Java VM, the version of Java, the platform you're running on, the amount of memory available, etc. They may not document it in the HSQLDB documentation because it's a platform-specific limit, not something built into HSQLDB.
Eliminate the CASE statement entirely. Make a table using those 1000 values, then just do an inner join to that table.
How to get more than 1000 items in HSQLDB in CASE WHEN statement?
[ "", "sql", "hsqldb", "stack-overflow", "" ]
I have been using com.sun.org.apache.xpath.internal.XPathAPI for some time and it seems to work OK. Recently I tried to use the TPTP profiler in Eclipse, but it could not find the XPathAPI class. I haven't figured this problem out yet, but it did make me wonder whether I should be using a class in an 'internal' package at all. Should I be using the Xalan library? Or what?? (I'm currently using JRE 1.5.0\_06 for this project)
All classes under the com.sun package are internal implementation details. You should never reference them directly. The base for XPath in the JDK is [javax.xml.xpath](http://java.sun.com/j2se/1.5.0/docs/api/javax/xml/xpath/package-frame.html).
Use the `XPathFactory.newInstance()` method in the `javax.xml.xpath` package. I think this was introduced in Java 1.5. If you have to revert to Java 1.4 or earlier, I think you have to use the `com.sun` packages, which is never really a good idea (but sometimes unavoidable).
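For what it's worth, a minimal sketch of the standard `javax.xml.xpath` API in action — the XML snippet and class name here are made up for illustration, but no `com.sun` classes are involved:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class XPathDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<catalog><book id=\"1\"><title>Effective Java</title></book></catalog>";

        // Parse the XML into a DOM Document.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));

        // Create an XPath instance from the standard factory.
        XPath xpath = XPathFactory.newInstance().newXPath();

        // evaluate() with no explicit result type returns the string value of the match.
        String title = xpath.evaluate("/catalog/book[@id='1']/title", doc);
        System.out.println(title); // prints "Effective Java"
    }
}
```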
Which XPathAPI should I use in Java 1.5?
[ "", "java", "xpath", "xpath-api", "" ]
I have multiple setTimeout functions like this: ``` function bigtomedium(visiblespan) { visiblespan.removeClass('big').addClass('medium'); setTimeout(function(){ mediumtosmall(visiblespan);},150); }; function mediumtosmall(visiblespan) { visiblespan.removeClass('medium').addClass('small'); setTimeout(function() { smalltomedium(visiblespan); },150); }; function smalltomedium(visiblespan) { visiblespan.removeClass('small').addClass('medium'); setTimeout(function() { mediumtobig(visiblespan); },150); }; function mediumtobig(visiblespan) { visiblespan.removeClass('medium').addClass('big'); setTimeout(function() { bigtomedium(visiblespan); },150); }; ``` Which is activated in jquery onclick: ``` $('div.click').click( function(event) { var visiblespan = $('span:visible'); mediumtosmall(visiblespan); } ); ``` What I need to do, is to get the click to hide invisible span as well. ``` $('div.click').click( function(event) { var visiblespan = $('span:visible'); var invisiblespan = $('span:not(:visible)'); mediumtosmall(visiblespan); clearTimeout(invisiblespan); } ); ``` What I'm not sure how to do is to write the clearTimeout function that will stop the loop. Any help is greatly appreciated. Thanks.
Not sure if you are already aware of this, but `clearTimeout` accepts a timeout ID that was previously returned from a call to `setTimeout`. Therefore you need to assign this timeout ID to a variable that remains in scope for when you need to cancel it, then pass it to the `clearTimeout` call when you need to stop the loop. As it is just an integer ID, another option is to store it in a custom attribute on a DOM element using something like `domElement.setAttribute('timeoutID', id)` (or `attr` in jQuery) and then retrieve it with `getAttribute` when required. Considering you have multiple timers, using custom attributes on the DOM elements may help keep things tidier. E.g.:

```
function mediumtosmall(visiblespan) {
    visiblespan.removeClass('medium').addClass('small');
    // Store the timeoutID for this timer
    var storedTimeoutID = setTimeout(function() { smalltomedium(visiblespan); }, 150);
    visiblespan.attr('timeoutID', storedTimeoutID);
}
```

and then:

```
$('div.click').click(function(event) {
    var visiblespan = $('span:visible');
    var invisiblespan = $('span:not(:visible)');
    mediumtosmall(visiblespan);
    var storedTimeoutID = invisiblespan.attr('timeoutID');
    // Pass the ID to clearTimeout
    clearTimeout(storedTimeoutID);
});
```
Probably the best way to handle this is to use setInterval() instead of setTimeout(). Like setTimeout, setInterval returns an integer, which can be passed to clearInterval() to cancel the processing. An example would be (warning, I've not tested this at all):

```
function animateSizes( jQueryElement ) {
    if( jQueryElement.hasClass("big") )
        jQueryElement.removeClass("big").addClass("medium");
    else if( jQueryElement.hasClass("medium") )
        jQueryElement.removeClass("medium").addClass("small");
    else if( jQueryElement.hasClass("small") )
        jQueryElement.removeClass("small").addClass("smaller");
    else
        jQueryElement.removeClass("smaller").addClass("big");
}

function startAnimation( elem ) {
    var sizeAnimation = window.setInterval( function() { animateSizes( elem ); }, 150 );
    elem.attr( "sizeAnimation", sizeAnimation );
}

function stopAnimation( elem ) {
    var sizeAnimation = elem.attr("sizeAnimation");
    window.clearInterval( sizeAnimation );
}
```
clearTimeout on multiple setTimeout
[ "", "javascript", "jquery", "" ]
How can I "hide" parts of a class so that whoever is using the libary does not have to include headers for all the types used in my class. Ie take the MainWindow class below, ho can I have it so when compiled in a static/dynamic libary, whoever is useing the libary does NOT have to include windows.h, ie HWND, CRITICAL\_SECTION, LRESULT, etc do not have to be defined. I know I could split it into two classes, an abstract class with just the public interface, and an implementation class which is hidden that contains the members that require windows.h. The problem here is that the visible class can no longer be created itsself, and an additional create function (eg CreateMainWindow) is required. That is fine in this case since it is most likly that just a single instance created on the heap is wanted but for other classes this is not. ``` class MainWindow { HWND hwnd; int width, height; std::string caption; bool started,exited; bool closeRequest; unsigned loopThread; CRITICAL_SECTION inputLock; Input *input; public: static void init_type(); Py::Object getattr(const char *name); MainWindow(int width, int height, std::string caption); ~MainWindow(); bool CloseRequest(const Py::Tuple &args); bool CloseRequestReset(const Py::Tuple &args); HWND GetHwnd(); int GetWidth(); int GetHeight(); Input* GetInput(); protected: unsigned static __stdcall loopThreadWrap(void *arg); unsigned LoopThreadMain(); LRESULT WndProc(UINT msg, WPARAM wParam, LPARAM lParam); LRESULT static CALLBACK WndProcWrapper(HWND hwnd, UINT message, WPARAM wParam, LPARAM lParam); }; ```
You can hide parts of a class using the so-called "Cheshire cat", "letter/envelope", or "pimpl" technique (which are all different names for the same idiom):

```
class MainWindow {
private:
    // opaque data
    class ImplementationDetails;
    ImplementationDetails* m_data;
public:
    ... declare your public methods here ...
};
```

The best way is probably the abstract class mentioned in your 2nd paragraph (however, I failed to understand your last sentence, in which you tried to explain your counter-argument to that).
As you mentioned in your question, using an abstract interface is your best option. Your DLL should have factory methods for creating/destroying instances of your concrete class. I didn't quite get your point about the downside of this.
C++ DLL: Not exposing the entire class
[ "", "c++", "" ]
Very direct question: I need to know if it's possible, and maybe where to start reading about it. I'm a programmer with zero experience in Windows servers, and a client is asking if we can deliver on their Windows boxes (and no, they won't let us use any kind of virtualization on them). Thanks guys, I know it's not 100% programming related but... hope you can help =) EDIT: Wow, fast reply! Thanks guys!!!
Yes, it's very easy. Just run the ISAPI installer from the PHP website.
Yes you can. I don't think there is anything different about the setup.
Is there any way to run both PHP and ASP in the same server running IIS?
[ "", "php", "iis", "" ]
**EDIT:** Oops - as rightly pointed out, there'd be no way to know whether the constructor for the class in question is sensitive to when or how many times it is called, or whether the object's state is changed during the method, so it would have to be created from scratch each time. Ignore the Dictionary and just consider delegates created in-line during the course of a method :-) --- Say I have the following method with a Dictionary of Type to Action as a local variable.

```
void TakeAction(Type type)
{
    // Random types chosen for example.
    var actions = new Dictionary<Type, Action>()
    {
        { typeof(StringBuilder), () => {
            // ..
        } },
        { typeof(DateTime), () => {
            // ..
        } }
    };

    actions[type].Invoke();
}
```

The Dictionary will always be the same when the method is called. Can the C# compiler notice this, only create it once and cache it somewhere for use in future calls to the method? Or will it simply be created from scratch each time? I know it could be a field of the containing class, but it seems neater to me for a thing like this to be contained in the method that uses it.
How should the C# compiler know that it's "the same" dictionary every time? You explicitly create a new dictionary every time. C# does not support static local variables, so you have to use a field. There's nothing wrong with that, even if no other method uses the field. It would be bad if the C# compiler did things like that. What if the constructor of the variable uses random input? :)
Short answer: no. Slightly longer answer: I believe it will cache the result of creating a delegate from a lambda expression which doesn't capture anything (including "this"), but that's a pretty special case. Correct way to change your code: declare a private static readonly variable for the dictionary.

```
private static readonly Dictionary<Type, Action> Actions = new Dictionary<Type, Action>()
{
    { typeof(StringBuilder), () => ... },
    { typeof(DateTime), () => ... },
};

void TakeAction(Type type)
{
    Actions[type].Invoke();
}
```
C# compiler and caching of local variables
[ "", "c#", "optimization", "compiler-construction", "" ]
I am writing this Java program to find all the prime numbers up to num using the Sieve of Eratosthenes, but when I try to compile, it says I can't use a long var as an array index, and it expects an int var in its place. But I'll be working with large numbers, so I can't use int. What can I do?

```
import java.util.*;
import java.lang.*;

public class t3 {
    public static void main(String[] args) {
        long num = 100;

        // declaring list and filling it with numbers
        ArrayList<Long> numlist = new ArrayList<Long>();
        for (long x = 2; x < num; x++) {
            numlist.add(new Long(x));
        }

        // sieve of eratosthenes
        for (long x = 0; x < Math.sqrt(num); x++) {
            for (long y = x + 1; y < numlist.size(); y++) {
                if (numlist[y] % numlist[x] == 0) {
                    numlist.remove(y);
                }
            }
        }

        // print list
        for (Object item : numlist) {
            System.out.println((Long) item);
        }
    }
}
```
I'm not sure why your code would compile to begin with. You're not supposed to use [] on an ArrayList to access members. An ArrayList is merely a list that is internally stored in an array; you have to use the list's get operation (which is still O(1)). Writing numlist[index] means that you have an array of objects in numlist. You cannot overload the [] operator as in C++. In addition, an int is 32 bits in Java. Having an array of length greater than 2^31 (so that you would need long indices) is unlikely, and I'm not even sure the specification allows it.
Realize that with a 32-bit signed int index to a long[] you're addressing 16GB of RAM. If you're really serious about getting to big primes with the sieve, you're not going to get away with several things in your current impl: * ArrayList of boxed longs * Using [] like Uri mentions * Not systematically paging to disk
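For illustration, a sketch of the sieve using a plain `boolean[]` indexed by `int` — which sidesteps both the ArrayList indexing problem and the boxing overhead (the limit of 30 is arbitrary):

```java
// A sketch of the sieve with a boolean[] and plain int indices -- for any limit
// that fits in memory this is far faster than removing from an ArrayList<Long>.
public class Sieve {
    public static void main(String[] args) {
        int num = 30; // upper bound (exclusive)
        boolean[] composite = new boolean[num];

        for (int p = 2; (long) p * p < num; p++) {
            if (!composite[p]) {
                // Mark every multiple of p starting from p*p as composite.
                for (int multiple = p * p; multiple < num; multiple += p) {
                    composite[multiple] = true;
                }
            }
        }

        StringBuilder primes = new StringBuilder();
        for (int i = 2; i < num; i++) {
            if (!composite[i]) primes.append(i).append(' ');
        }
        System.out.println(primes.toString().trim()); // 2 3 5 7 11 13 17 19 23 29
    }
}
```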
Using a long as ArrayList index in java
[ "", "java", "arraylist", "" ]
Is that .NET related? It appears to be a pointer of some sort, what is the difference? Edit: I actually know it is the XOR operator, but look at this example from [this page](https://learn.microsoft.com/en-us/previous-versions/ms379600(v=vs.80)). ``` void objectCollection() { using namespace System::Collections; ArrayList ^as = gcnew ArrayList; //... } ``` What is this? Thanks.
I'm assuming that you're looking at constructs of the form: ``` Foo ^bar = gcnew Foo(); ``` You're right, in .NET it is a pointer-"like" type and is part of C++/CLI, not standard ISO C++. It's a reference to a garbage-collected, managed .NET object, as opposed to a regular, unmanaged C++ object. As the other poster suggests, outside the .NET world, or in a non-object-creation context, it is the XOR operator.
In C++, that is the [XOR](http://en.wikipedia.org/wiki/Xor) operator.
I see many examples of C++ with the use of "Foo ^ bar" - what is "^"?
[ "", "c++", "c++-cli", "" ]
I'm creating an API for a module, and after I created several methods inside my classes, I asked myself this question. Right now, as an example, I'm doing this: ``` public Company GetMonitoredCompany( String companyName ) { ... } public List<Company> GetMonitoredCompanies( ) { ... } ``` But I've noticed several times that other APIs / services I use have **Collection** in the name instead, like: ``` public List<Company> GetMonitoredCompanyCollection( ) { ... } ``` Is there a rule for this? A pattern? Or is either way OK?
Use the shortest, simplest name that clearly shows the methods purpose. And be consistent within your project. In this example, if there are no other considerations, I would use the name: > GetMonitoredCompanies Because it's shorter and clearer. (I would also return a read-only ICollection or IEnumerable unless you've got some specific reason not to.)
It's up to you, but I recommend you keep the same convention for the whole project... I prefer the first way (GetMonitoredCompanies).
When to use Plural vs Collection word on Methods
[ "", "c#", "naming-conventions", "design-patterns", "" ]
I have a quick little application and wanted to give a try to developing using TDD. I've never used TDD and actually didn't even know what it was until I found ASP.NET-MVC. (My first MVC app had unit tests, but they were brittle, tightly coupled, took too much upkeep, and were abandoned -- I've come to learn unit tests != TDD). Background on the app: I have a text dump of a purchase order read in as a string. I need to parse the text and return the new purchase order number, new line item number, old purchase order number, and old purchase order line number. Pretty simple. Right now I'm only working on the new purchase order details (number/line) and have a model like this:

```
public class PurchaseOrder
{
    public string NewNumber { get; private set; }
    public string NewLine { get; private set; }

    public PurchaseOrder(string purchaseOrderText)
    {
        NewNumber = GetNewNumber(purchaseOrderText);
        NewLine = GetNewLine(purchaseOrderText);
    }

    // ... definition of GetNewNumber / GetNewLine ...
    // both return null if they can't parse the text
}
```

Now I want to add a method "IsValid" that should only be true if "NewNumber" and "NewLine" are both non-null. So I want to test it like:

```
public void Purchase_Order_Is_Valid_When_New_Purchase_Order_Number_And_Line_Number_Are_Not_Null()
{
    PurchaseOrder order = new PurchaseOrder()
    {
        NewNumber = "123456",
        NewLine = "001"
    };

    Assert.IsTrue(order.IsValid);
}
```

This is easy enough, but it seems like a bad compromise to allow public setters and a parameterless constructor. So the alternative is to feed in a 'purchaseOrderText' value in the constructor, but then I'm testing the code for 'GetNewNumber' and 'GetNewLine' as well. I'm kind of stumped on how to write this as a testable class while trying to keep it locked up in terms of what makes sense for the model. This seems like it would be a common problem, so I'm thinking I'm just missing an obvious concept.
One solution is to not have the constructor do the work:

```
public class PurchaseOrder
{
    public PurchaseOrder(string newNumber, string newLine)
    {
        NewNumber = newNumber;
        NewLine = newLine;
    }

    // ...
}
```

Then testing is easy and isolated - you're not testing `GetNewNumber` and `GetNewLine` at the same time. To help with using `PurchaseOrder` you can create a factory method that puts it together:

```
public static PurchaseOrder CreatePurchaseOrder(string purchaseOrderText)
{
    return new PurchaseOrder(
        GetNewNumber(purchaseOrderText),
        GetNewLine(purchaseOrderText));
}
```
Instead of making the setters public, make them internal and then make your test assembly InternalsVisibleTo in your main project. That way, your tests can see your internal members, but no-one else can. In your main project, put something like this: ``` [assembly: InternalsVisibleTo( "UnitTests" )] ``` Where UnitTests is the name of your test assembly.
TDD: Help with writing Testable Class
[ "", "c#", "asp.net-mvc", "unit-testing", "tdd", "" ]
```
typedef void (FunctionSet::* Function)();

class MyFunctionSet : public FunctionSet {
protected:
    void addFunctions() {
        addFunction(Function(&MyFunctionSet::function1));
    }

    void function1() {
        // Do something.
    }
};
```

The addFunction method adds the function to a list in the base class, which can then be enumerated to call all functions. Is there any way to simplify (less typing work) the adding of functions?
Looks like you assign a pointer to a member function of the derived class to a pointer to a member function of the base class. Well, that's forbidden, because it opens up a hole in the type system. That comes as a surprise (at least it did for me, the first time I heard it). Read [this answer](https://stackoverflow.com/questions/420726/how-to-cast-member-variable-pointer-to-generic-type-in-c#420804) for why. To answer your actual question - I would make `addFunction` a template:

```
void addFunctions() {
    addFunction(&MyFunctionSet::function1);
}
```

Change `addFunction` in the base class to this:

```
template<typename Derived>
void addFunction(void (Derived::*f)()) {
    myFunctions.push_back(static_cast<Function>(f));
}
```

Better to use `static_cast`, because it will tell you if `Derived` isn't actually derived from `FunctionSet`.
Can you explain what you're trying to achieve by doing this? This seems like a fairly bad design. Can't you have an abstract class (a "C++ interface") such as "Computable" that has a pure virtual function1, subclass Computable for each implementation, and then have MyFunctionSet maintain a set of Computables? Is there a specific reason you're using function pointers?
Can I simplify this?
[ "", "c++", "reference", "polymorphism", "function-pointers", "" ]
I was looking at C# collection initializers and found the implementation to be very pragmatic but also very unlike anything else in C#. I am able to create code like this: ``` using System; using System.Collections; class Program { static void Main() { Test test = new Test { 1, 2, 3 }; } } class Test : IEnumerable { public IEnumerator GetEnumerator() { throw new NotImplementedException(); } public void Add(int i) { } } ``` Since I have satisfied the minimum requirements for the compiler (implemented `IEnumerable` and a `public void Add`) this works, but obviously has no value. I was wondering what prevented the C# team from creating a more strict set of requirements? In other words, why, in order for this syntax to compile, does the compiler not require that the type implement `ICollection`? That seems more in the spirit of other C# features.
Your observation is spot on - in fact, it mirrors one made by Mads Torgersen, a Microsoft C# Language PM. Mads made a post in October 2006 on this subject titled *[What Is a Collection?](http://blogs.msdn.com/madst/archive/2006/10/10/What-is-a-collection_3F00_.aspx)* in which he wrote: > Admitted, we blew it in the first > version of the framework with > System.Collections.ICollection, which > is next to useless. But we fixed it up > pretty well when generics came along > in .NET framework 2.0: > System.Collections.Generic.ICollection<T> > lets you Add and Remove elements, > enumerate them, Count them and check > for membership. > > Obviously from then on, everyone would > implement ICollection<T> every time > they make a collection, right? Not so. > Here is how we used LINQ to learn > about what collections really are, and > how that made us change our language > design in C# 3.0. It turns out that there are only 14 implementations of `ICollection<T>` in the framework, but 189 classes that implement `IEnumerable` and have a public `Add()` method. There's a hidden benefit to this approach - if they had based it on the `ICollection<T>` interface, there would have been exactly one supported `Add()` method. In contrast, the approach they did take means that the initializers for the collection just form sets of arguments for the `Add()` methods. To illustrate, let's extend your code slightly: ``` class Test : IEnumerable { public IEnumerator GetEnumerator() { throw new NotImplementedException(); } public void Add(int i) { } public void Add(int i, string s) { } } ``` You can now write this: ``` class Program { static void Main() { Test test = new Test { 1, { 2, "two" }, 3 }; } } ```
I thought about this too, and the answer which satisfies me the most is that ICollection has many methods other than Add, such as: Clear, Contains, CopyTo, and Remove. Removing elements or clearing has nothing to do with being able to support the object initializer syntax, all you need is an Add(). If the framework was designed granularly enough, and there was an ICollectionAdd interface, then it would've had a "perfect" design. But I honestly don't think that would have added much value, having one method per interface. IEnumerable + Add seems like a hackish approach, but when you think about it, it's a better alternative. EDIT: This is not the only time C# has approached a problem with this type of solution. Since .NET 1.1, foreach uses duck typing to enumerate a collection, all your class needs to implement is GetEnumerator, MoveNext and Current. Kirill Osenkov has a [post](http://kirillosenkov.blogspot.com/2007/09/c-30-collection-initializers-duck.html) which asks your question as well.
Why do C# collection initializers work this way?
[ "", "c#", "collections", "" ]
First check out this code. It seems like it should work for me, but it doesn't! (surprise!) Anyway, this is what I tried first:

```
SELECT Status as status,
       Address as ip,
       PCName as pc_name,
       (Numbers.Phone = 'CPU/' + PCName) as cpu_contact,
       (Numbers.Phone = 'PC/' + PCName) as pc_contact,
       (Numbers.Phone = 'LOGIN/' + PCName) as login_contact,
FROM IPAddress
WHERE $where --Generated In code
JOIN Numbers ON ('CPU/' + PCName = Numbers.Phone)
             OR ('PC/' + PCName = Numbers.Phone)
             OR ('LOGIN/' + PCName = Numbers.Phone)
```

So what I want is some boolean calculated fields and to join on similar conditions. I would also like the result to collapse down to single rows. For example, I think the current setup would do something like this:

```
status  ip   cpu_contact  pc_contact  login_contact
---------------------------------------------------
foo     bar  true         false       false
foo     bar  false        true        false
foo     bar  false        false       true
```

And obviously I'd rather

```
status  ip   cpu_contact  pc_contact  login_contact
---------------------------------------------------
foo     bar  true         true        true
```

Any ideas? Database redesign isn't an option. If it were, I'd do that :-)
You could use a `GROUP BY` and `SUM` to collapse the rows:

```
SELECT Status as status,
       Address as ip,
       PCName as pc_name,
       cast(sum(case when (Numbers.Phone = 'CPU/' + PCName) then 1 else 0 end) as bit) as cpu_contact,
       cast(sum(case when (Numbers.Phone = 'PC/' + PCName) then 1 else 0 end) as bit) as pc_contact,
       cast(sum(case when (Numbers.Phone = 'LOGIN/' + PCName) then 1 else 0 end) as bit) as login_contact
FROM IPAddress
JOIN Numbers ON ('CPU/' + PCName = Numbers.Phone)
             OR ('PC/' + PCName = Numbers.Phone)
             OR ('LOGIN/' + PCName = Numbers.Phone)
WHERE $where --Generated In code
GROUP BY Status, Address, PCName
```

Since you are doing a logical OR between the rows, a sum of zero is false, while any value greater than 0 will be true.
You need to use Case/When for the comparisons. In this case, I am hardcoding a 1 or 0, but T-SQL will convert the hard coded numbers to int. If you want boolean (bit), you'll need to convert that manually, like this... ``` Convert(Bit, Case When Numbers.Phone = 'CPU/' + PCName Then 1 Else 0 End) as cpu_contact, Convert(Bit, Case When Numbers.Phone = 'PC/' + PCName Then 1 Else 0 End) as pc_contact, Convert(Bit, Case When Numbers.Phone = 'LOGIN/' + PCName Then 1 Else 0 End) as login_contact, ```
How do I make a boolean calculated field in TSQL and join on that calculated field?
[ "", "sql", "sql-server", "t-sql", "" ]
So I am trying to serve large files via a PHP script; they are not in a web-accessible directory, so this is the best way I can figure to provide access to them. The only way I could think of off the bat to serve this file is by loading it into memory (fopen, fread, etc.), setting the header data to the proper MIME type, and then just echoing the entire contents of the file. The problem with this is that I have to load these ~700MB files into memory all at once, and keep the entire thing there until the download is finished. It would be nice if I could stream in the parts that I need as they are downloading. Any ideas?
You don't need to read the whole thing - just enter a loop reading it in, say, 32Kb chunks and sending it as output. Better yet, use [fpassthru](http://php.net/fpassthru) which does much the same thing for you...

```
$name = 'mybigfile.zip';
$fp = fopen($name, 'rb');

// send the right headers
header("Content-Type: application/zip");
header("Content-Length: " . filesize($name));

// dump the file and stop the script
fpassthru($fp);
exit;
```

*even fewer lines* if you use [readfile](http://php.net/readfile), which doesn't need the fopen call...

```
$name = 'mybigfile.zip';

// send the right headers
header("Content-Type: application/zip");
header("Content-Length: " . filesize($name));

// dump the file and stop the script
readfile($name);
exit;
```

If you want to get even cuter, you can support the [Content-Range](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.16) header, which lets clients request a particular byte range of your file. This is particularly useful for serving PDF files to Adobe Acrobat, which just requests the chunks of the file it needs to render the current page. It's a bit involved, but [see this for an example](http://www.coneural.org/florian/papers/04_byteserving.php).
The best way to send big files with PHP is the `X-Sendfile` header. It allows the webserver to serve files much faster through zero-copy mechanisms like `sendfile(2)`. It is supported by lighttpd and by Apache with a [plugin](http://tn123.ath.cx/mod_xsendfile/). Example:

```
$file = "/absolute/path/to/file"; // can be protected by .htaccess
header('X-Sendfile: ' . $file);
header('Content-type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
// other headers ...
exit;
```

The server reads the `X-Sendfile` header and sends out the file.
Serving large files with PHP
[ "", "php", "apache", "" ]
I need to prevent the user from selecting text (select all, or select a portion of text) in Mozilla Firefox, using JavaScript. I have done this in Internet Explorer, but it doesn't seem to work in Mozilla. Any hints? URL? Sample? TIA. EDIT: Actually, this ridiculous requirement comes from our client. And yes, we have explained to them that there are a thousand other ways to get the text. But they answer that they know about that; they just want to prevent an amateur user from doing it. I have done some googling and found a similar problem with a solution [here](http://forums.mozillazine.org/viewtopic.php?f=8&t=168887).
There is no way to fully protect content you publish, short of DRM schemes which are not widespread enough to be useful for a website. But to prevent simple copy-and-paste there are several approaches, each of which is very annoying to your users. A simple way would be to cover the text with another element, such as a `DIV`, using CSS positioning. This would mean that when the user tries to click on the `DIV` to select the text there would be no text to select. Should work in any browser that supports CSS and in browsers that don't it will probably be completely invisible. Clicking on the page and hitting `CTRL`+`A` (or some other shortcut key) may select the text anyway, and it may be impossible to block all key and mouse events that can get at the text. But this `DIV` "lid" approach is at least unobtrusive and easy to generalize. However, this is trivially defeated by looking at the HTML source. It is less trivially defeated by turning off CSS (easy to do in Firefox, and many Firefox users are sophisticated enough to do it). A more robust approach would be to render the text as a graphic, either using a regular image file or something like a PDF. However graphics can be OCR'ed. There are free tools for this. Finally you could put your text in a Flash or Java applet, which would download the text from the server. Someone could steal your applet but would have difficulty making it talk to your webserver due to the same-origin security policy (note: this can also be worked around). This approach is not that much better than the PDF approach except that it makes it harder to grab entire documents, because the applet will only display a portion of the document at a time. To defeat this using OCR the attacker has to take screen-captures. Or they could reverse-engineer your applet and make a new applet (or even a regular program) which downloads all the content from your server. 
All those approaches are only of the most basic use and I suspect you'll find that they don't help at all. You'd probably get more return on investment building useful features rather than this.
Render the text to an image if you really want to prevent people from copy-pasting it. Javascript tricks can always be disabled and/or worked around. Of course the best way to prevent people from copying text is to not show it at all - they might read it and retype! ;-)
Preventing Selection / Copy to Clipboard in Firefox
[ "", "javascript", "firefox", "" ]
I'm writing a log collection / analysis application in Python and I need to write a "rules engine" to match and act on log messages. It needs to feature:

* Regular expression matching for the message itself
* Arithmetic comparisons for message severity/priority
* Boolean operators

An example rule would probably be something like:

```
(message ~ "program\\[\d+\\]: message" and severity >= high) or (severity >= critical)
```

I'm thinking about using [PyParsing](http://pyparsing.wikispaces.com/) or similar to actually parse the rules and construct the parse tree. The current (not yet implemented) design I have in mind is to have classes for each rule type, and construct and chain them together according to the parse tree. Then each rule would have a "matches" method that could take a message object and return whether or not it matches the rule. Very quickly, something like:

```
class RegexRule(Rule):
    def __init__(self, regex):
        self.regex = regex

    def match(self, message):
        return self.regex.match(message.contents)

class SeverityRule(Rule):
    def __init__(self, operator, severity):
        self.operator = operator
        self.severity = severity

    def match(self, message):
        if self.operator == ">=":
            return message.severity >= self.severity
        # more conditions here...

class BooleanAndRule(Rule):
    def __init__(self, rule1, rule2):
        self.rule1 = rule1
        self.rule2 = rule2

    def match(self, message):
        return self.rule1.match(message) and self.rule2.match(message)
```

These rule classes would then be chained together according to the parse tree of the message, and the match() method called on the top rule, which would cascade down until all the rules were evaluated. I'm just wondering if this is a reasonable approach, or if my design and ideas are totally out of whack? Unfortunately, I never had the chance to take a compiler design course or anything like that at university, so I'm pretty much coming up with this stuff of my own accord.
Could someone with some experience in these kinds of things please chime in and evaluate the idea? **EDIT:** Some good answers so far, here's a bit of clarification. The aim of the program is to collect log messages from servers on the network and store them in the database. Apart from the collection of log messages, the collector will define a set of rules that will either match or ignore messages depending on the conditions and flag an alert if necessary. I can't see the rules being of more than a moderate complexity, and they will be applied in a chain (list) until either a matching alert or ignore rule is hit. However, this part isn't quite as relevant to the question. As far the syntax being close to Python syntax, yes that is true, however I think it would be difficult to filter the Python down to the point where the user couldn't inadvertently do some crazy stuff with the rules that was not intended.
Do not invent yet another rules language. Either use Python or use some other existing, already debugged and working language like BPEL.

Just write your rules in Python, import them and execute them. Life is simpler, far easier to debug, and you've actually solved the actual log-reading problem without creating another problem.

Imagine this scenario. Your program breaks. It's now either the rule parsing, the rule execution, or the rule itself. You must debug all three. If you wrote the rule in Python, it would be the rule, and that would be that.

"I think it would be difficult to filter the Python down to the point where the user couldn't inadvertently do some crazy stuff with the rules that was not intended."

This is largely the "I want to write a compiler" argument.

1) You're the primary user. You'll write, debug and maintain the rules. Are there really armies of crazy programmers who will be doing crazy things? Really? If there is any potential crazy user, *talk to them*. Teach them. Don't fight against them by inventing a new language (which you will then have to maintain and debug forever.)

2) It's just log processing. There's no real cost to the *craziness*. No one is going to subvert the world economic system with faulty log handling.

Don't turn a small task of a few dozen lines of Python into a 1,000-line interpreter that interprets a few dozen lines of some rule language. Just write the few dozen lines of Python.

Just write it in Python as quickly and clearly as you can and move on to the next project.
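To make the "just write the rules in Python" advice concrete, here is a minimal sketch. All names (`Message`, `alert_rule`, the severity constants) are illustrative, not from the question; the point is that a rule can be a plain callable and the "engine" a loop:

```python
import re

# Severity levels as comparable integers (an assumption for this sketch).
LOW, HIGH, CRITICAL = 1, 2, 3

class Message:
    """A log message with the two attributes the question's rules inspect."""
    def __init__(self, contents, severity):
        self.contents = contents
        self.severity = severity

# Each rule is just a function Message -> bool; no parser needed.
def alert_rule(msg):
    return (re.search(r"program\[\d+\]: message", msg.contents)
            and msg.severity >= HIGH) or msg.severity >= CRITICAL

def first_match(rules, msg):
    # Apply rules in order and return the first one that fires, mirroring
    # the "chain of rules until one hits" behaviour described in the edit.
    for rule in rules:
        if rule(msg):
            return rule
    return None
```

Rules written this way can live in an ordinary module that the collector imports, which is exactly the debugging win the answer describes.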
You might also want to look at [PyKE](http://pyke.sourceforge.net/logic_programming/index.html).
Implementing a "rules engine" in Python
[ "", "python", "parsing", "rules", "" ]
I have a function (for ease, I'll just use count()) that I want to apply to maybe 4-5 different variables. Right now, I am doing this:

```
$a = count($a);
$b = count($b);
$c = count($c);
$d = count($d);
```

Is there a better way? I know arrays can use the array\_map function, but I want the values to remain as separate values, instead of values inside of an array. Thanks.
I know you said you don't want the values to be in an array, but how about just creating an array specifically for looping through the values? i.e.:

```
$arr = Array($a, $b, $c, $d);
foreach ($arr as &$var) {
    $var = count($var);
}
```

I'm not sure if that really is much tidier than the original way, though.
If you have a bunch of repeating variables to collect data, your code is poorly designed and should just be using an array to store the values, instead of dozens of variables. So perhaps you want something like:

```
$totals = array("Visa"=>0, "Mastercard"=>0, "Discover"=>0, "AmericanExpress"=>0);
```

Then you simply add to your array element (say from a while loop over your SQL results or whatever you are doing):

```
$totals['Visa'] += $row['total'];
```

But if you really want to go down this route, you could use the tools given to you. If you want to do this with a large batch, then an array is a good choice. Then foreach the array and use variable variables, like so:

```
$variables = array('a','b','c'...);
foreach ( $variables as $var ) {
    ${$var} = count(${$var});
}
```
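The underlying idea in both answers — keep related values in one container and map a function over it, rather than juggling separate variables — is language-independent. A sketch of the same thing in Python (the names and sample data are made up for illustration):

```python
# Instead of four separate variables $a..$d, keep the collections in a
# dict keyed by name and apply the same function to every value.
data = {
    "a": [1, 2, 3],
    "b": [4, 5],
    "c": [],
    "d": [6],
}

# One comprehension replaces the four repeated assignments.
counts = {name: len(values) for name, values in data.items()}
```

The names remain individually addressable (`counts["a"]`) without any variable-variable tricks.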
PHP: Apply a function to multiple variables without using an array
[ "", "php", "arrays", "list", "array-map", "" ]
I would like to determine what the *alphabet* for a given locale is, preferably based on the browser Accept-Language header values. Does anyone know how to do this, using a library if necessary?
Take a look at [LocaleData.getExemplarSet][1] in ICU4J. For example, for English this returns abcdefghijklmnopqrstuvwxyz.

[1]: <http://icu-project.org/apiref/icu4j/com/ibm/icu/util/LocaleData.html#getExemplarSet(com.ibm.icu.util.ULocale,%20int)>
This is an English answer written in Århus. Yesterday, I heard some Germans say 'Blödheit, à propos, ist dumm'. However, one of them wore a shirt that said 'I know the difference between 文字 and الْعَرَبيّة'. What's the answer to your question for this text? Is it allowed? Isn't this an English text?
How can I determine what the alphabet for a locale is in java?
[ "", "java", "locale", "character-encoding", "" ]
I was 'forced' to add a `myLocalVar = null;` statement into a finally clause just before leaving the method. The reason given is to help the GC. I was told I will get SMS's during the night when the server crashes next time, so I had better do it :-).

I think this is pointless, as myLocalVar is scoped to the method, and will be 'lost' as soon as the method exits. The extra nulling just pollutes the code, but is harmless otherwise.

My question is, where does this myth about helping the GC come from? (I was referred to "Java memory books".) Do you know any article from 'authorities' which explains it in more depth? Is there a possibility this is not a myth, but really helps somehow? If so, how? Can nulling local variables cause any harm?

To clarify, the method looks like this:

```
void method() {
    MyClass myLocalVar = null;
    try {
        myLocalVar = get reference to object;
        ... do more here ...
    } finally {
        if (myLocalVar != null) {
            myLocalVar.close(); // it is a resource which we should close
        }
        myLocalVar = null; // THIS IS THE LINE I AM TALKING ABOUT
    }
}
```
There was an old piece of Sun documentation, *[Java Platform Performance](https://web.archive.org/web/20120626144027/http://java.sun.com:80/docs/books/performance/1st_edition/html/JPAppGC.fm.html)* (link sadly now broken, and I haven't been able to find a new one), which described a situation where nulling a local variable which dropped out of scope actually had an effect on the GC. However, the paper referred to a very old version of java. As mentioned in [this question](https://stackoverflow.com/questions/271613/are-invisible-references-still-a-problem-in-recent-jvms) (which also contains a précis of the problem described in the paper), this no longer affects current JVM implementations.
The Java GC is supposed to be "sound" but is not necessarily immediately "complete". In other words, it is designed so that it will never eliminate objects that are still accessible by at least one path (and thus cause a dangling reference). It is not necessarily immediately complete, since it might take time until it removes everything that can be removed.

I think that most GC myths come from a misunderstanding of that concept. Many people keep too many **instance variables** alive, and that causes problems, but that is of course not the issue here. Other people store a local variable in an instance variable (e.g., by passing it to a function), and then think that nulling the local variable somehow eliminates the object, which is of course untrue. Finally, there are people who over-rely on the GC and think it will do functional shutdown for them (e.g., close connections when a variable is removed), which is of course not the case.

I think the source of this line is the "I'm really, really done with it but I'm not sure how to ensure that" mindset.

**So yeah, you're correct that it's unnecessary.**
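The claim that locals become collectible as soon as the frame is gone can be demonstrated directly. This sketch uses Python rather than Java (Python exposes its collector via `weakref`/`gc`), but the principle being illustrated is the same: no explicit nulling is needed for a local that goes out of scope when the method returns.

```python
import gc
import weakref

class Resource:
    """Stand-in for the question's MyClass."""
    pass

def use_resource():
    local = Resource()
    ref = weakref.ref(local)
    # No `local = None` here: the frame's locals vanish on return.
    return ref

ref = use_resource()
gc.collect()  # belt-and-braces; CPython frees on refcount hitting zero anyway
# The weak reference is dead: the object was collected without any nulling.
collectible = ref() is None
```

Inside a very long-running method that keeps the frame alive, nulling *can* matter; at method exit, as in the question, it cannot.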
Does it help GC to null local variables in Java
[ "", "java", "variables", "garbage-collection", "null", "local", "" ]
I have an ASP.NET MVC project containing an AdminController class, giving me URLs like these:

> <http://example.com/admin/AddCustomer>
>
> <http://example.com/Admin/ListCustomers>

I want to configure the server/app so that URIs containing **/Admin** are only accessible from the 192.168.0.0/24 network (i.e. our LAN); in other words, I'd like to restrict this controller to only be accessible from certain IP addresses.

Under WebForms, /admin/ was a physical folder that I could restrict in IIS... but with MVC, of course, there's no physical folder. Is this achievable using web.config or attributes, or do I need to intercept the HTTP request to achieve this?
I know this is an old question, but I needed this functionality today, so I implemented it and thought about posting it here. It uses the IPList class from here (<http://www.codeproject.com/KB/IP/ipnumbers.aspx>).

**The filter attribute, FilterIPAttribute.cs:**

```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Http;
using System.Security.Principal;
using System.Configuration;

namespace Miscellaneous.Attributes.Controller
{
    /// <summary>
    /// Filter by IP address
    /// </summary>
    public class FilterIPAttribute : AuthorizeAttribute
    {
        #region Allowed

        /// <summary>
        /// Comma separated string of allowable IPs. Example "10.2.5.41,192.168.0.22"
        /// </summary>
        public string AllowedSingleIPs { get; set; }

        /// <summary>
        /// Comma separated string of allowable IPs with masks.
        /// Example "10.2.0.0;255.255.0.0,10.3.0.0;255.255.0.0"
        /// </summary>
        public string AllowedMaskedIPs { get; set; }

        /// <summary>
        /// Gets or sets the configuration key for allowed single IPs
        /// </summary>
        public string ConfigurationKeyAllowedSingleIPs { get; set; }

        /// <summary>
        /// Gets or sets the configuration key for allowed masked IPs
        /// </summary>
        public string ConfigurationKeyAllowedMaskedIPs { get; set; }

        /// <summary>
        /// List of allowed IPs
        /// </summary>
        IPList allowedIPListToCheck = new IPList();

        #endregion

        #region Denied

        /// <summary>
        /// Comma separated string of denied IPs. Example "10.2.5.41,192.168.0.22"
        /// </summary>
        public string DeniedSingleIPs { get; set; }

        /// <summary>
        /// Comma separated string of denied IPs with masks.
        /// Example "10.2.0.0;255.255.0.0,10.3.0.0;255.255.0.0"
        /// </summary>
        public string DeniedMaskedIPs { get; set; }

        /// <summary>
        /// Gets or sets the configuration key for denied single IPs
        /// </summary>
        public string ConfigurationKeyDeniedSingleIPs { get; set; }

        /// <summary>
        /// Gets or sets the configuration key for denied masked IPs
        /// </summary>
        public string ConfigurationKeyDeniedMaskedIPs { get; set; }

        /// <summary>
        /// List of denied IPs
        /// </summary>
        IPList deniedIPListToCheck = new IPList();

        #endregion

        /// <summary>
        /// Determines whether access to the core framework is authorized.
        /// </summary>
        protected override bool IsAuthorized(HttpActionContext actionContext)
        {
            if (actionContext == null)
                throw new ArgumentNullException("actionContext");

            string userIpAddress = ((HttpContextWrapper)actionContext.Request.Properties["MS_HttpContext"]).Request.UserHostName;

            try
            {
                // Check that the IP is allowed to access
                bool ipAllowed = CheckAllowedIPs(userIpAddress);

                // Check that the IP is not denied to access
                bool ipDenied = CheckDeniedIPs(userIpAddress);

                // Only allowed if allowed and not denied
                bool finallyAllowed = ipAllowed && !ipDenied;

                return finallyAllowed;
            }
            catch (Exception e)
            {
                // Log the exception, probably something wrong with the configuration
            }

            return true; // if there was an exception, then we return true
        }

        /// <summary>
        /// Checks the allowed IPs.
        /// </summary>
        private bool CheckAllowedIPs(string userIpAddress)
        {
            // Populate the IPList with the single IPs
            if (!string.IsNullOrEmpty(AllowedSingleIPs))
            {
                SplitAndAddSingleIPs(AllowedSingleIPs, allowedIPListToCheck);
            }

            // Populate the IPList with the masked IPs
            if (!string.IsNullOrEmpty(AllowedMaskedIPs))
            {
                SplitAndAddMaskedIPs(AllowedMaskedIPs, allowedIPListToCheck);
            }

            // Check if there are more settings from the configuration (Web.config)
            if (!string.IsNullOrEmpty(ConfigurationKeyAllowedSingleIPs))
            {
                string configurationAllowedAdminSingleIPs = ConfigurationManager.AppSettings[ConfigurationKeyAllowedSingleIPs];
                if (!string.IsNullOrEmpty(configurationAllowedAdminSingleIPs))
                {
                    SplitAndAddSingleIPs(configurationAllowedAdminSingleIPs, allowedIPListToCheck);
                }
            }

            if (!string.IsNullOrEmpty(ConfigurationKeyAllowedMaskedIPs))
            {
                string configurationAllowedAdminMaskedIPs = ConfigurationManager.AppSettings[ConfigurationKeyAllowedMaskedIPs];
                if (!string.IsNullOrEmpty(configurationAllowedAdminMaskedIPs))
                {
                    SplitAndAddMaskedIPs(configurationAllowedAdminMaskedIPs, allowedIPListToCheck);
                }
            }

            return allowedIPListToCheck.CheckNumber(userIpAddress);
        }

        /// <summary>
        /// Checks the denied IPs.
        /// </summary>
        private bool CheckDeniedIPs(string userIpAddress)
        {
            // Populate the IPList with the single IPs
            if (!string.IsNullOrEmpty(DeniedSingleIPs))
            {
                SplitAndAddSingleIPs(DeniedSingleIPs, deniedIPListToCheck);
            }

            // Populate the IPList with the masked IPs
            if (!string.IsNullOrEmpty(DeniedMaskedIPs))
            {
                SplitAndAddMaskedIPs(DeniedMaskedIPs, deniedIPListToCheck);
            }

            // Check if there are more settings from the configuration (Web.config)
            if (!string.IsNullOrEmpty(ConfigurationKeyDeniedSingleIPs))
            {
                string configurationDeniedAdminSingleIPs = ConfigurationManager.AppSettings[ConfigurationKeyDeniedSingleIPs];
                if (!string.IsNullOrEmpty(configurationDeniedAdminSingleIPs))
                {
                    SplitAndAddSingleIPs(configurationDeniedAdminSingleIPs, deniedIPListToCheck);
                }
            }

            if (!string.IsNullOrEmpty(ConfigurationKeyDeniedMaskedIPs))
            {
                string configurationDeniedAdminMaskedIPs = ConfigurationManager.AppSettings[ConfigurationKeyDeniedMaskedIPs];
                if (!string.IsNullOrEmpty(configurationDeniedAdminMaskedIPs))
                {
                    SplitAndAddMaskedIPs(configurationDeniedAdminMaskedIPs, deniedIPListToCheck);
                }
            }

            return deniedIPListToCheck.CheckNumber(userIpAddress);
        }

        /// <summary>
        /// Splits the incoming ip string of the format "IP,IP", example "10.2.0.0,10.3.0.0",
        /// and adds the result to the IPList
        /// </summary>
        private void SplitAndAddSingleIPs(string ips, IPList list)
        {
            var splitSingleIPs = ips.Split(',');

            foreach (string ip in splitSingleIPs)
                list.Add(ip);
        }

        /// <summary>
        /// Splits the incoming ip string of the format "IP;MASK,IP;MASK",
        /// example "10.2.0.0;255.255.0.0,10.3.0.0;255.255.0.0", and adds the result to the IPList
        /// </summary>
        private void SplitAndAddMaskedIPs(string ips, IPList list)
        {
            var splitMaskedIPs = ips.Split(',');

            foreach (string maskedIp in splitMaskedIPs)
            {
                var ipAndMask = maskedIp.Split(';');
                list.Add(ipAndMask[0], ipAndMask[1]); // IP;MASK
            }
        }

        public override void OnAuthorization(AuthorizationContext filterContext)
        {
            base.OnAuthorization(filterContext);
        }
    }
}
```

**Example usage:**

**1. Directly specifying the IPs in the code:**

```
[FilterIP(
    AllowedSingleIPs = "10.2.5.55,192.168.2.2",
    AllowedMaskedIPs = "10.2.0.0;255.255.0.0,192.168.2.0;255.255.255.0"
)]
public class HomeController
{
    // Some code here
}
```

**2. Or, loading the configuration from the Web.config:**

```
[FilterIP(
    ConfigurationKeyAllowedSingleIPs = "AllowedAdminSingleIPs",
    ConfigurationKeyAllowedMaskedIPs = "AllowedAdminMaskedIPs",
    ConfigurationKeyDeniedSingleIPs = "DeniedAdminSingleIPs",
    ConfigurationKeyDeniedMaskedIPs = "DeniedAdminMaskedIPs"
)]
public class HomeController
{
    // Some code here
}
```

```
<configuration>
  <appSettings>
    <add key="AllowedAdminSingleIPs" value="localhost,127.0.0.1"/> <!-- Example "10.2.80.21,192.168.2.2" -->
    <add key="AllowedAdminMaskedIPs" value="10.2.0.0;255.255.0.0"/> <!-- Example "10.2.0.0;255.255.0.0,192.168.2.0;255.255.255.0" -->
    <add key="DeniedAdminSingleIPs" value=""/> <!-- Example "10.2.80.21,192.168.2.2" -->
    <add key="DeniedAdminMaskedIPs" value=""/> <!-- Example "10.2.0.0;255.255.0.0,192.168.2.0;255.255.255.0" -->
  </appSettings>
</configuration>
```
You should have access to the `UserHostAddress` in the Request object in your controller to do the restriction on. I'd suggest that you may want to extend the `AuthorizeAttribute` and add your `IP` address restrictions on it so that you can simply decorate any methods or controllers that need this protection.
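Whatever the framework plumbing, the core check the question needs — is this client address inside 192.168.0.0/24 — is only a few lines. A sketch using Python's standard `ipaddress` module (framework-neutral; the function name is made up for illustration):

```python
import ipaddress

# The LAN subnet from the question.
ALLOWED_NETWORK = ipaddress.ip_network("192.168.0.0/24")

def is_allowed(client_ip):
    """Return True only for addresses on the allowed subnet."""
    try:
        return ipaddress.ip_address(client_ip) in ALLOWED_NETWORK
    except ValueError:
        return False  # malformed address: deny by default
```

A filter or middleware would call this with the request's remote address and return 403 on `False`.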
Restrict access to a specific controller by IP address in ASP.NET MVC Beta
[ "", "c#", "asp.net-mvc", "security", "web-config", "authorization", "" ]
Java is supposed to be "write once, run anywhere" and it really can be, but in some cases it turns into "write once, debug everywhere". What are the most common reasons for problems when moving a Java application from one platform to another? What are un-common but interesting reasons?
* Don't make assumptions about the case (in)sensitivity of the file system
* Don't make assumptions about the path or directory separator
* Don't make assumptions about the line terminator
* Don't use the default platform encoding unless you're really, really sure you mean to
* Don't start "cmd.exe" etc. (I know, it sounds obvious - but I've seen it cause problems)
A few from the UI area:

* Incorrect ordering of buttons like OK/Cancel
* Using absolute layouts
* Different accelerator keys
* Different sizes/rendering of fonts
* Expecting certain keys to be present (Windows key, Meta key)

(These are not Java-specific, though.)
What Issues prevent Java applications from working on multiple platforms?
[ "", "java", "cross-platform", "" ]
I admit, I don't know too much about javascript, mostly I just "steal and modify" from Javascript.com and Dynamic Drive. I've run across a few scripts that call two .js files:

```
<script type="text/javascript" src="lib/prototype.js"> </script>
<script type="text/javascript" src="src/aptabs.js"> </script>
```

and was wondering why. Can I safely merge them both with my external javascript, or is there a sort of incompatibility that prevents all the code from sharing the same file?
It's actually better performance-wise to have them both in the same file, depending on how your site is architected. The principle is to reduce the number of HTTP requests, as each one carries some overhead.

That said, that's something best left to the very end of production. During development it's easier to have everything separate. If you're going to join them, it's best to have an automated build script do the operation.
It's often good to separate code with different concerns. Those two files might come from different places. Say Prototype is upgraded and you want the new goodness. Then you can just replace the prototype.js file on your server rather than editing your huge file and doing surgery on it.

EDIT: It's also "nicer" for the browser to be able to cache the files individually. If your question comes from a concern of duplicating that block of code in several HTML files, I suggest you make one snippet of it on the server side and include it in your HTML files through whatever means you have at hand / feel comfy with.
The same script using two js files?
[ "", "javascript", "external", "" ]
In PHP you have the create\_function() function, which creates a uniquely named lambda function like this:

```
$myFunction = create_function('$foo', 'return $foo;');
$myFunction('bar'); //Returns bar
```

Is this actually any better (apart from being easier) than just doing:

```
do {
    $myFunction = 'createdFunction_'.rand();
} while (function_exists($myFunction));

eval("function $myFunction(\$foo) { return \$foo; }");
$myFunction('bar'); //Returns bar
```

Is create\_function really better, apart from the fact that it is easier?
On my understanding of the relevant docs,[1] they both do the same thing, create\_function() just comes up with a unique function name for you. To address some other comments on this question: > create\_function can be assigned to a variable making the function accessible to other parts of your code, whereas eval is only useful for the given scope. It may well be that eval() runs in the current scope, but function definitions get dumped into the global namespace anyway.[2] So whenever you define a function, it will be accessible everywhere else in your program. > Using eval() will clutter the global function list, create\_function() will not create\_function() only returns a string with the name of the new function,[3] not some special callback type. So, both techniques will pollute your global namespace. So no, apart from create\_function() being easier, it does not appear to be any better than eval(). Footnotes: [1] <https://www.php.net/manual/en/functions.user-defined.php> ; <http://au.php.net/create_function> ; <http://au.php.net/eval> [2] <https://www.php.net/manual/en/functions.user-defined.php> [3] <http://au.php.net/create_function>
Using eval() will clutter the global function list, create\_function() will not, apart from that there's no big difference. *However*, both methods require writing the function body inside a PHP string which is error-prone and if you were working on my project I would order you to just declare a helper function using the normal syntax. Anonymous functions in PHP are so poorly implemented that your code is actually better off **not** using them. (Thankfully this will be fixed in PHP 5.3).
PHP's create_function() versus just using eval()
[ "", "php", "eval", "create-function", "" ]
The following code demonstrates a weird problem I have in a Turbo C++ Explorer project. One of the three stack objects in D::D() is not destroyed after going out of scope. This only happens if compiled in release mode, the auto\_ptrs a\_ and b\_ are of different types, and the exception thrown doesn't inherit from std::exception. It appears to work just fine in VC++ 2005 and C++ Builder 2009. I did install the BDS2006 Update 2, the hotfix rollup and hotfix 12.

Is it my code or the compiler? Do you know of a fix? Not being able to reliably use auto\_ptr in a VCL project would be quite inconvenient.

```
#include <memory>
#include <stdexcept>
#include <iostream>

typedef std::exception my_error;
// will work fine if replaced with line below
//class my_error : public std::exception {};

class A {};
class B {};

class C
{
public:
    C(int id) : id_(id) { std::cout << "C::C() " << id_ << std::endl; };
    ~C() { std::cout << "C::~C() " << id_ << std::endl; };
private:
    int id_;
};

class D
{
public:
    D()
    {
        C c1(1);
        C c2(2);
        C c3(3);
        throw my_error();
    };
private:
    std::auto_ptr<A> a_;
    std::auto_ptr<B> b_;
    // will work fine if replaced with line below
    // std::auto_ptr<A> b_;
    // std::auto_ptr<C> c_; // see expected output
};

#pragma argsused
int main(int argc, char* argv[])
{
    try
    {
        D d;
    }
    catch (...)
    {
        std::cout << "caught exception" << std::endl;
    }
    return 0;
}
```

Expected:

```
C::C() 1
C::C() 2
C::C() 3
C::~C() 3
C::~C() 2
C::~C() 1
caught exception
```

Got:

```
C::C() 1
C::C() 2
C::C() 3
C::~C() 2
C::~C() 1
caught exception
```

Got (with line '`// std::auto_ptr<C> c_;`' uncommented):

```
C::C() 1
C::C() 2
C::C() 3
C::~C() 1
caught exception
```

**Edit:** Made suggested changes.

**Edit 2:** I just tested it with C++ Builder 2007 (11.0.2902.10471), which shows the same problem. The release configuration works as soon as I check the "Debug information" box in Project -> Options -> C++ Compiler -> Debugging. It surprises me that the executable gets smaller with "Debug information" enabled (down to 31.5 KB from 39.5 KB).

**Edit 3:** In Turbo C++ Explorer (C++ Builder 2006) (10.0.2288.42451) the release configuration works if I uncheck the "Inline function expansion (-vi)" box in Project -> Options -> C++ Compiler -> Debugging. Replacing the first line (`#include <memory>`) with the following code makes it work, too.

```
#pragma option push -vi-
#include <memory>
#pragma option pop
```
This appears to be a compiler bug. I just ran the same sample in VS2008SP1 and got the expected output.
For whatever it's worth, GCC 3.4.6 does the expected thing:

```
$ g++ main.cpp
$ a.out
C::C()
C::C()
C::~C()
C::~C()
caught exception
```
Why is the destructor ignored in this code?
[ "", "c++", "destructor", "c++builder", "" ]
Ok, I've read the Stack Overflow question ['What does it mean to "program to an interface"'](https://stackoverflow.com/questions/383947/what-does-it-mean-to-program-to-an-interface) and understand it. I understand the benefit of using interfaces (at least I think I do). However, I'm having a little bit of a problem applying it to what I'm currently working on, which would be my first interface implementation.

I'm creating a class to authenticate users to my site. I've written it using static methods so I can do User.Authenticate(username, password). To me, that's nice and easy. Looking forward, there will be 3 other types of members that can authenticate. Their authentication methods will be pointing to different tables, etc. However, they will still need to authenticate, change their password, verify their security question when changing passwords, etc.

It seems like a perfect use for an interface, but I'm having issues wrapping my head around how exactly to do it. The types of users are users, doctors, brokers, and employers. To me, each class should implement a security interface that defines the methods mentioned above. Can anybody shed some light on how to do this AND whether my thinking is right?
You might want to take a look at [ASP.NET/Microsoft Membership](http://msdn.microsoft.com/en-us/library/tw292whz.aspx). From your description it sounds like you have users with different Roles (doctor, broker, etc).
"Programming to an interface" means designing a set of methods and properties which are the public "signature" of the functionality or service the class will provide. You declare it as an interface and then implement it in a class. Then, to replace the class, any other class that implements the interface can be used. The interface becomes a contract.

However, this is not for static methods. I'm not sure what purpose your `User.Authenticate(username, password)` serves - does it return a value to indicate the user is authenticated? What is an unauthenticated user object?

You might have some kind of central authentication controller which authenticates the users and then returns a different class of object depending on the user. Each class would implement (or probably inherit from a user class, instead) the user interface for the common functionality and then extend it with their specific functionality.

I'm not really sure this example is a good case for either inheritance or interface implementation; it would really depend on more design details we don't have.
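As a language-neutral sketch of the contract idea, here is the shape such a security interface could take, written with Python's `abc` module (a C# interface expresses the same thing; all class and method names here are illustrative, not from the question):

```python
from abc import ABC, abstractmethod

class Authenticator(ABC):
    """The contract every member type (user, doctor, broker, employer)
    would implement, whatever table it reads from."""

    @abstractmethod
    def authenticate(self, username, password):
        ...

    @abstractmethod
    def change_password(self, username, old, new):
        ...

class UserAuthenticator(Authenticator):
    """A hypothetical in-memory implementation; a real one would query
    the users table."""

    def __init__(self, accounts):
        self.accounts = accounts  # {username: password}

    def authenticate(self, username, password):
        return self.accounts.get(username) == password

    def change_password(self, username, old, new):
        if not self.authenticate(username, old):
            return False
        self.accounts[username] = new
        return True
```

Calling code holds an `Authenticator` and never cares which concrete member type it got, which is the whole point of the contract.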
Programming to an interface - Use them for a security class?
[ "", "c#", "asp.net", "interface", "" ]
I recently upgraded my server running CentOS 5.0 to a quad-core CPU from a dual-core CPU. Do I need a recompile to make use of the added cores? PostgreSQL was installed by compiling from source. EDIT: The upgrade was from an Intel Xeon 5130 to an Intel Xeon 5345.
No, you will not need to recompile for PostgreSQL to take advantage of the additional cores. What will happen is that the Linux scheduler will now be able to select two or more (up to four) PostgreSQL processes to run at the same time; they work in parallel rather than having to wait on each other for a slice of CPU time. This means you are able to process data faster, since now four different queries can be processed at the same time rather than just the two you had previously.

PostgreSQL requires no further tuning to take advantage of multiple cores/physical CPUs; it is entirely up to the OS. You basically improved your performance for the cost of a new CPU. If you are looking for information on tuning PostgreSQL, take a look at [this post on tuning PostgreSQL on a dedicated server](http://www.anchor.com.au/hosting/dedicated/Tuning_PostgreSQL_on_your_Dedicated_Server).

Since you now have more processes able to run at the same time, you may also want to consider upgrading the amount of RAM you have, depending on what is currently installed: the more of the database that can be held in memory, the faster all of the transactions and queries will be!
If it's the same architecture, I don't think a recompile should be needed. If it's a different architecture (x86 vs x86\_64 vs amd64, etc.), then you will have to recompile.
PostgreSQL recompile needed after upgrading to a quad-core CPU?
[ "", "sql", "linux", "database", "postgresql", "64-bit", "" ]
I need to do user validation of a date field: it should be in the format *yyyyMMdd* and should not be more than one year in the future. How would I go about doing this? Currently I only have a crude regexp which is insufficient.

```
function VerifyDate(source, args) {
    var regexp = /^([1-2]{1}[0-9]{1})\d{2}([0][1-9]|[1][0-2])([0][1-9]|[1-2][0-9]|[3][0-1])$/
    var result = args.Value.match(regexp);

    if (result) {
        args.IsValid = true;
    } else {
        args.IsValid = false;
    }
}
```
Take the regex to check the format only. You can stay simple:

```
^(\d{4})(\d{2})(\d{2})$
```

Then parse the date and check the range (note that JavaScript's `Date` constructor takes a 0-based month, so the parsed month needs to be shifted by one):

```
function VerifyDate(source, args) {
    args.IsValid = false;
    var regexp = /^(\d{4})(\d{2})(\d{2})$/;
    var daysInMonth = function (y, m) { return 32 - new Date(y, m, 32).getDate(); };
    var ma = regexp.exec(args.Value);

    if (ma) {
        var year = +ma[1], month = +ma[2] - 1, day = +ma[3]; // month is 0-based for Date
        if (month >= 0 && month <= 11 && day >= 1 && day <= daysInMonth(year, month)) {
            var diff = new Date(year, month, day) - new Date();
            args.IsValid = diff < 31536000000; // one year = 1000ms*60*60*24*365
        }
    }
}
```
new Date() doesn't throw an exception if the month or day is out of range. It uses the internal MakeDay operation to calculate a date (see the [ECMAScript Language Specification](http://www.ecma-international.org/publications/standards/Ecma-262.htm), sections 15.9.3.1 and 15.9.1.13). To make sure that the date is valid, the function below converts the input to integers, converts the integers to a date, and then compares the parts of the date back to the integers. Since Date uses MakeDay, the calculation of *maxDate* works even if *now* is the leap day (xxxx0229 becomes yyyy0301, where yyyy = xxxx + 1).

```
function verifyDate(args) {
    var result = false,
        match = args.Value.match(/^(\d{4})(\d{2})(\d{2})$/);

    if (match && match.length === 4) {
        var year = parseInt(match[1], 10),
            month = parseInt(match[2], 10) - 1, // 0 = January
            day = parseInt(match[3], 10),
            testDate = new Date(year, month, day),
            now = new Date(),
            maxDate = new Date(now.getFullYear() + 1, now.getMonth(), now.getDate()),
            minDate = new Date(1800, 0, 1);

        result = (testDate.getFullYear() === year &&
                  testDate.getMonth() === month &&
                  testDate.getDate() === day &&
                  testDate >= minDate &&
                  testDate <= maxDate);
    }

    args.IsValid = result;
    return result;
}
```
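For comparison, the same three checks — format, real calendar date, not more than one year in the future — are compact when a date library does the parsing. A sketch in Python (`datetime.date` rejects impossible dates for us; "one year" is approximated here as 365 days, an assumption of this sketch):

```python
import re
from datetime import date, timedelta

def verify_date(value, today=None):
    """Validate a yyyyMMdd string that must not be > one year in the future."""
    # Format gate: exactly eight digits.
    if not re.fullmatch(r"\d{8}", value or ""):
        return False
    try:
        d = date(int(value[:4]), int(value[4:6]), int(value[6:]))
    except ValueError:
        return False  # e.g. "20090230" is not a real date
    today = today or date.today()
    return d <= today + timedelta(days=365)
```

Passing `today` explicitly keeps the function testable; production code would let it default to the current date.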
Verify a date in JavaScript
[ "", "asp.net", "javascript", "validation", "date", "" ]
I have a pretty good understanding of Javascript, except that I can't figure out a nice way to set the "this" variable. Consider:

```
var myFunction = function(){
    alert(this.foo_variable);
}

var someObj = document.body;   //using body as example object
someObj.foo_variable = "hi";   //set foo_variable so it alerts

var old_fn = someObj.fn;       //store old value
someObj.fn = myFunction;       //bind to someObj so "this" keyword works
someObj.fn();
someObj.fn = old_fn;           //restore old value
```

Is there a way to do this without the last 4 lines? It's rather annoying...

I've tried binding an anonymous function, which I thought was beautiful and clever, but to no avail:

```
var myFunction = function(){
    alert(this.foo_variable);
}

var someObj = document.body;   //using body as example object
someObj.foo_variable = "hi";   //set foo_variable so it alerts

someObj.(function(){ fn(); })();   //fail.
```

Obviously, passing the variable into myFunction is an option... but that's not the point of this question. Thanks.
There are two methods defined for all functions in JavaScript, [`call()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/call), and [`apply()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/apply). The function syntax looks like: ``` call( /* object */, /* arguments... */ ); apply(/* object */, /* arguments[] */); ``` What these functions do is call the function they were invoked on, assigning the value of the *object* parameter to **this**. ``` var myFunction = function(){ alert(this.foo_variable); } myFunction.call( document.body ); ```
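As a cross-language aside (my own addition, not from the answer): Python exposes the same mechanism directly, since a plain function takes its receiver as an explicit first argument, and `types.MethodType` plays roughly the role of `bind`:

```python
import types

class Example:
    """Stand-in object; any object with a foo_variable attribute works."""
    def __init__(self):
        self.foo_variable = "hi"

def my_function(self):
    # 'self' is just the explicit receiver, like 'this' in JavaScript.
    return self.foo_variable

obj = Example()

# Rough analogue of myFunction.call(obj): pass the receiver explicitly.
explicit = my_function(obj)

# Rough analogue of Function.prototype.bind: a permanently bound method.
bound = types.MethodType(my_function, obj)
```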
I think you're looking for [`call`](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Function/call): ``` myFunction.call(obj, arg1, arg2, ...); ``` This calls `myFunction` with `this` set to `obj`. There is also the slightly different method [`apply`](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Function/apply), which takes the function parameters as an array: ``` myFunction.apply(obj, [arg1, arg2, ...]); ```
Set "this" variable easily?
[ "", "javascript", "variables", "scope", "this", "" ]
I have two strings: the first's value is "catdog" and the second's is "got". I'm trying to find a regex that tells me if the letters for "got" are in "catdog". I'm particularly looking to avoid the case where there are duplicate letters. For example, I know "got" is a match, however "gott" is not a match because there are not two "t" in "catdog". EDIT: Based on Adam's response below this is the C# code I got to work in my solution. Thanks to all those that responded. Note: I had to convert the char to int and subtract 97 to get the appropriate index for the array. In my case the letters are *always* lower case. ``` private bool CompareParts(string a, string b) { int[] count1 = new int[26]; int[] count2 = new int[26]; foreach (var item in a.ToCharArray()) count1[(int)item - 97]++; foreach (var item in b.ToCharArray()) count2[(int)item - 97]++; for (int i = 0; i < count1.Length; i++) if(count2[i] > count1[i]) return false; return true; } ```
You're using the wrong tool for the job. This is not something regular expressions are capable of handling easily. Fortunately, it's relatively easy to do this without regular expressions. You just count up the number of occurrences of each letter within both strings, and compare the counts between the two strings - if for each letter of the alphabet, the count in the first string is at least as large as the count in the second string, then your criteria are satisfied. Since you didn't specify a language, here's an answer in pseudocode that should be easily translatable into your language: ``` bool containsParts(string1, string2) { count1 = array of 26 0's count2 = array of 26 0's // Note: be sure to check for and ignore non-alphabetic characters, // and do case conversion if you want to do it case-insensitively for each character c in string1: count1[c]++ for each character c in string2: count2[c]++ for each character c in 'a'...'z': if count1[c] < count2[c]: return false return true } ```
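The counting pseudocode above is a multiset comparison; in Python, for instance, it collapses into a `collections.Counter` one-liner (a sketch of the same algorithm, not the asker's C#):

```python
from collections import Counter

def contains_parts(haystack, needle):
    """True if needle's letters (with multiplicity) all occur in haystack."""
    # Counter subtraction drops non-positive counts, so the result is
    # empty exactly when haystack covers every letter of needle.
    return not (Counter(needle) - Counter(haystack))
```

So `contains_parts("catdog", "got")` holds, while `"gott"` fails because the second `t` has no match.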
Others have already suggested that regex isn't the best way to do this, and I agree; however, your accepted answer is a little verbose considering what you're trying to achieve, which is to test whether one set of letters is a subset of another. Consider the following code, which achieves this in a single line: ``` MatchString.ToList().ForEach(Item => InputChars.Remove(Item)); ``` Which can be used as follows: ``` public bool IsSubSetOf(string InputString, string MatchString) { var InputChars = InputString.ToList(); MatchString.ToList().ForEach(Item => InputChars.Remove(Item)); return InputChars.Count == 0; } ``` You can then just call this method to verify if it's a subset or not. What is interesting here is that "got" will return a list with no items because each item in the match string only appears once, but "gott" will return a list with a single item because there would only be a single call to remove the "t" from the list. Consequently you would have an item left in the list. That is, "gott" is not a subset of "catdog" but "got" is. You could take it one step further and put the method into a static class: ``` using System; using System.Linq; using System.Runtime.CompilerServices; static class extensions { public static bool IsSubSetOf(this string InputString, string MatchString) { var InputChars = InputString.ToList(); MatchString.ToList().ForEach(Item => InputChars.Remove(Item)); return InputChars.Count == 0; } } ``` which makes your method into an extension of the string object, which actually makes things a lot easier in the long run, because you can now make your calls like so: ``` Console.WriteLine("gott".IsSubSetOf("catdog")); ```
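The same remove-one-occurrence trick reads almost identically in, say, Python, where `list.remove` deletes a single matching element; this is an illustrative sketch mirroring the C# above, with names of my own choosing:

```python
def is_subset_of(input_string, match_string):
    """True if every character of input_string can be matched one-for-one
    against a character of match_string."""
    input_chars = list(input_string)
    for ch in match_string:
        # Guard needed because Python's list.remove raises on a miss,
        # whereas C#'s List<T>.Remove silently returns false.
        if ch in input_chars:
            input_chars.remove(ch)     # removes a single occurrence
    return not input_chars
```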
regular expression for finding parts of a string within another
[ "", "c#", "regex", "" ]
I was asking a related question but messed the title up and no-one would understand it. Since I am able now to ask the question more precisely, I decided to reformulate it in a new question and close the old one. Sorry for that. So what I want to do is passing data (my custom user's nickname as stored in the db) to the LoginUserControl. This login gets rendered from the master page via Html.RenderPartial(), so what I really need to do is making sure that, say ViewData["UserNickname"] is present on every call. But I don't want to populate ViewData["UserNickname"] in each and every action of every controller, so I decided to use [this approach](http://www.asp.net/Learn/mvc/tutorial-13-cs.aspx) and create an abstract base controller which will do the work for me, like so: ``` public abstract class ApplicationController : Controller { private IUserRepository _repUser; public ApplicationController() { _repUser = RepositoryFactory.getUserRepository(); var loggedInUser = _repUser.FindById(User.Identity.Name); //Problem! ViewData["LoggedInUser"] = loggedInUser; } } ``` This way, whatever my deriving Controller does, the user information will already be present. So far, so good. Now for the problem: I can't call User.Identity.Name because `User` is already null. This is not the case in all of my deriving controllers, so this is specific for the abstract base controller. I am setting the User.Identity.Name via FormsAuthentication at another place in the code, but I think this can't be the problem - afaik User.Identity.Name can be null, but not User itself. It looks to me like the HttpContext is not available (since also null ;-) and that I am missing a simple yet important point here. Can anyone give me some hints? I would really appreciate it.
My guess would be that the Controller's base constructor is not filling in the User, but that it is only known later when the ControllerContext is set for the Controller. You should check this in the documentation about the lifecycle of an MVC application, (the one [here](http://www.techtribe.com/viewBlogEntry.html?blogEntryId=68c62b34-47bd-102b-9fc2-000f1f68a9bf) will probably do, although it might be a bit out of date since it's for the preview version), or just check the source code of MVC. From the code that I have of MVC (also a preview version, but that should be fine): (In Controller) ``` public IPrincipal User { get { return HttpContext == null ? null : HttpContext.User; } } ``` ... ``` public HttpContextBase HttpContext { get { return ControllerContext == null ? null : ControllerContext.HttpContext; } } ``` I don't see an implementation of a default constructor in the code. That would prove that the ControllerContext is null at the time of construction. So you should execute your code somewhere else.
The answer to this problem is actually quite simple. I can't execute the code from within the constructor for reasons pointed out by Raimond, but I can do it outside the constructor. So what I did was override OnActionExecuting() in the base controller class (I created a custom Attribute for it, but just overriding the method should also work) and then do my user lookup from there. Now it works as expected and I have no repeated code.
Why is User (as in User.Identity.Name) null in my abstract base controller?
[ "", "c#", "asp.net-mvc", "controller", "user-controls", "httpcontext", "" ]
I need to extract all the images from a PDF file on my server. I don't want the PDF pages, only the images at their original size and resolution. How could I do this with Perl, PHP or any other UNIX based app (which I would invoke with the exec function from PHP)?
[pdfimages](http://linuxcommand.org/man_pages/pdfimages1.html) does just that. It's part of the poppler-utils and xpdf-utils packages. From the manpage: > Pdfimages saves images from a Portable Document Format (PDF) file as Portable Pixmap (PPM), Portable Bitmap (PBM), or JPEG files. > > Pdfimages reads the PDF file, PDF-file, scans one or more pages, and writes one PPM, PBM, or JPEG file for each image, image-root-nnn.xxx, where nnn is the image number and xxx is the image type (.ppm, .pbm, .jpg). > > NB: pdfimages extracts the raw image data from the PDF file, without performing any additional transforms. Any rotation, clipping, color inversion, etc. done by the PDF content stream is ignored.
With regards to Perl, have you checked [CPAN](http://search.cpan.org)? * [PDF::GetImages](http://search.cpan.org/search?query=PDF-GetImages&mode=module) - get images from pdf document * [PDF::OCR](http://search.cpan.org/search?query=PDF-OCR&mode=module) - get ocr and images out of a pdf file * [PDF::OCR2](http://search.cpan.org/search?query=PDF-OCR2&mode=module) - extract all text and all image ocr from pdf
How can I extract images from a PDF file?
[ "", "php", "perl", "pdf", "" ]
I am trying to add enhancements to a 4 year old VC++ 6.0 program. The debug build runs from the command line but not in the debugger: it crashes with an access violation inside printf(). If I skip the printf, then it crashes in malloc() (called from within fopen()) and I can't skip over that. This means I cannot run in the debugger and have to rely on the old printf statements to see what's going on. This obviously makes it a lot harder. Any idea why printf() and malloc() would fail when running under the VC++ debugger? I am no good at this low level stuff! Here is the call stack after the access violation: ``` _heap_alloc_dbg(unsigned int 24, int 2, const char * 0x0046b3d8 `string', int 225) line 394 + 8 bytes _nh_malloc_dbg(unsigned int 24, int 0, int 2, const char * 0x0046b3d8 `string', int 225) line 242 + 21 bytes _malloc_dbg(unsigned int 24, int 2, const char * 0x0046b3d8 `string', int 225) line 163 + 27 bytes _lock(int 2) line 225 + 19 bytes _getstream() line 55 + 7 bytes _fsopen(const char * 0x00468000 `string', const char * 0x00466280 `string', int 64) line 61 + 5 bytes fopen(const char * 0x00468000 `string', const char * 0x00466280 `string') line 104 + 15 bytes open_new_log(const char * 0x00468000 `string') line 66 + 14 bytes log_open(const char * 0x00468000 `string', int 0) line 106 + 9 bytes Xlog_open(const char * 0x00468000 `string', int 0) line 51 + 13 bytes service_start(unsigned long 1, char * * 0x009a0e50) line 3152 + 12 bytes service_init2(char * 0x00471fcc char * NTPROGRAM, char * 0x004723c4 char * NTSERVICE, char * 0x00466540 `string', unsigned long 1, char * * 0x009a0e50) line 508 + 13 bytes service_init(char * 0x00471fcc char * NTPROGRAM, char * 0x004723c4 char * NTSERVICE, unsigned long 2, char * * 0x009a0e50) line 548 main(unsigned long 2, char * * 0x009a0e50) line 3131 mainCRTStartup() line 206 + 25 bytes KERNEL32! 
7c817067() ``` Here is the debug disassembly up to the operation that fails: ``` 0041EA7E jmp _heap_alloc_dbg+2B3h (0041eb23) 0041EA83 mov edx,dword ptr [_lTotalAlloc (004b4294)] 0041EA89 add edx,dword ptr [nSize] 0041EA8C mov dword ptr [_lTotalAlloc (004b4294)],edx 0041EA92 mov eax,[_lCurAlloc (004b429c)] 0041EA97 add eax,dword ptr [nSize] 0041EA9A mov [_lCurAlloc (004b429c)],eax 0041EA9F mov ecx,dword ptr [_lCurAlloc (004b429c)] 0041EAA5 cmp ecx,dword ptr [_lMaxAlloc (004b42a0)] 0041EAAB jbe _heap_alloc_dbg+249h (0041eab9) 0041EAAD mov edx,dword ptr [_lCurAlloc (004b429c)] 0041EAB3 mov dword ptr [_lMaxAlloc (004b42a0)],edx 0041EAB9 cmp dword ptr [_pFirstBlock (004b4298)],0 0041EAC0 je _heap_alloc_dbg+25Fh (0041eacf) 0041EAC2 mov eax,[_pFirstBlock (004b4298)] 0041EAC7 mov ecx,dword ptr [pHead] 0041EACA mov dword ptr [eax+4],ecx ``` Here is our source that calls fopen() and fails in malloc() ``` FILE *open_new_log( const char *logfile ) { FILE *fp; int retry = 0; while( ( fp = fopen( logfile, "w" ) ) == NULL && ++retry < 300 ) Sleep( 1000 ); return( fp ); } ``` The error I get is ``` Unhandled exception in PISCOOP.exe: 0xC0000005: Access Violation ``` Regards, --- Alistair.
You can use [`_CrtSetDbgFlag()`](http://msdn.microsoft.com/en-us/library/5at7yxcs(VS.71).aspx) to enable a bunch of useful heap debugging techniques. There's a host of other [CRT debugging functions](http://msdn.microsoft.com/en-us/library/1666sb98(VS.71).aspx) available that should help you track down where your problem is.
When run from the debugger, a different heap is used; this is referred to as the *debug heap*. This has different behaviour from the heap used outside the debugger, and is there to help you catch problems like this one. Note that the Win32 "debug heap" is distinct from the VC++ "debug heap"; both are intended to do more or less the same thing, however. See [this article](http://msdn.microsoft.com/en-us/library/cc266414.aspx) which describes the difference in behaviour when you run the app under the debugger. In this case, you have probably corrupted the heap before calling this function, either by writing off the end or off the start of a heap block.
VC++ 6.0 access violation when run in debugger
[ "", "c++", "debugging", "visual-c++-6", "access-violation", "" ]
I am using the `String.Split()` method in C#. How can I put the resulting `string[]` into an `ArrayList` or `Stack`?
You can initialize a `List<T>` with an array (or any other object that implements `IEnumerable`). You should prefer the strongly typed `List<T>` over `ArrayList`. ``` var myList = new List<string>(myString.Split(',')); ```
If you want a re-usable method, you could write an extension method. (Note that extension methods must be declared in a static class.) ``` public static ArrayList ToArrayList(this IEnumerable enumerable) { var list = new ArrayList(); foreach (var cur in enumerable) { list.Add(cur); } return list; } public static Stack ToStack(this IEnumerable enumerable) { return new Stack(enumerable.ToArrayList()); } var list = "hello world".Split(' ').ToArrayList(); ```
Put result of String.Split() into ArrayList or Stack
[ "", "c#", "arrays", "collections", "split", "arraylist", "" ]
I find myself attached to a project to integrate an interpreter into an existing application. The language to be interpreted is a derivative of Lisp, with application-specific builtins. Individual 'programs' will be run batch-style in the application. I'm surprised that over the years I've written a couple of compilers, and several data-language translators/parsers, but I've never actually written an interpreter before. The prototype is pretty far along, implemented as a syntax tree walker, in C++. I can probably influence the architecture beyond the prototype, but not the implementation language (C++). So, constraints: * implementation will be in C++ * parsing will probably be handled with a yacc/bison grammar (it is now) * suggestions of full VM/Interpreter ecologies like NekoVM and LLVM are probably not practical for this project. Self-contained is better, even if this sounds like NIH. What I'm really looking for is reading material on the fundamentals of implementing interpreters. I did some browsing of SO, and another site known as [Lambda the Ultimate](http://www.lambda-the-ultimate.org), though they are more oriented toward programming language theory. Some of the tidbits I've gathered so far: * [Lisp in Small Pieces](https://rads.stackoverflow.com/amzn/click/com/0521545668), by Christian Queinnec. The person recommending it said it "goes from the trivial interpreter to more advanced techniques and finishes presenting bytecode and 'Scheme to C' compilers." * [NekoVM](http://nekovm.org/). As I've mentioned above, I doubt that we'd be allowed to incorporate an entire VM framework to support this project. * [Structure and Interpretation of Computer Programs](http://mitpress.mit.edu/sicp/full-text/book/book.html). Originally I suggested that this might be overkill, but having worked through a healthy chunk, I agree with @JBF. Very informative, and mind-expanding. * [On Lisp](http://paulgraham.com/rootsoflisp.html) by Paul Graham.
I've read this, and while it is an informative introduction to Lisp principles, is not enough to jump-start constructing an interpreter. * [Parrot Implementation](http://www.sidhe.org/~dan/presentations/Parrot_Implementation.pdf). This seems like a fun read. Not sure it will provide me with the fundamentals. * [Scheme from Scratch](http://peter.michaux.ca/articles/scheme-from-scratch-introduction). Peter Michaux is attacking various implementations of Scheme, from a quick-and-dirty Scheme interpreter written in C (for use as a bootstrap in later projects) to compiled Scheme code. Very interesting so far. * [Language Implementation Patterns: Create Your Own Domain-Specific and General Programming Languages](http://pragprog.com/book/tpdsl/language-implementation-patterns), recommended in the comment thread for [Books On Creating Interpreted Languages](https://stackoverflow.com/questions/2853412/books-on-creating-interpreted-languages). The book contains two chapters devoted to the practice of building interpreters, so I'm adding it to my reading queue. * *New* (and yet *Old*, i.e. 1979): [Writing Interactive Compilers and Interpreters](http://books.google.com/books?id=d6cpAQAAIAAJ&q=writing+interactive+compilers+and+interpreters&dq=writing+interactive+compilers+and+interpreters&hl=en&ei=FzorTsDzFIj0swOb_4XMCg&sa=X&oi=book_result&ct=result&resnum=1&ved=0CCkQ6AEwAA) by P. J. Brown. This is long out of print, but is interesting in providing an outline of the various tasks associated with the implementation of a Basic interpreter. I've seen mixed reviews for this one but as it is cheap (I have it on order used for around $3.50) I'll give it a spin. So how about it? Is there a good book that takes the neophyte by the hand and shows how to build an interpreter in C/C++ for a Lisp-like language? Do you have a preference for syntax-tree walkers or bytecode interpreters? 
To answer @JBF: * the current prototype is an interpreter, and it makes sense to me as we're accepting a path to an arbitrary code file and executing it in our application environment. The builtins are used to affect our in-memory data representation. * it should not be hideously slow. The current tree walker seems acceptable. * The language is *based* on Lisp, but is not Lisp, so no standards compliance required. * As mentioned above, it's unlikely that we'll be allowed to add a full external VM/interpreter project to solve this problem. To the other posters, I'll be checking out your citations as well. Thanks, all!
Short answer: The fundamental reading list for a lisp interpreter is SICP. I would not at all call it overkill; if you feel you are overqualified for the first parts of the book, jump to chapter 4 and start interpreting away (although I feel this would be a loss since chapters 1-3 really are that good!). Add LISP in Small Pieces (LISP from now on), chapters 1-3. Especially chapter 3 if you need to implement any non-trivial control forms. See this post by Jens Axel Søgaard on a minimal self-hosting Scheme: <http://www.scheme.dk/blog/2006/12/self-evaluating-evaluator.html> . A slightly longer answer: It is hard to give advice without knowing what you require from your interpreter. * does it really really need to be an interpreter, or do you actually need to be able to execute lisp code? * does it need to be fast? * does it need standards compliance? Common Lisp? R5RS? R6RS? Any SRFIs you need? If you need anything more fancy than a simple syntax tree walker I would strongly recommend embedding a fast scheme subsystem. Gambit scheme comes to mind: [http://dynamo.iro.umontreal.ca/~gambit/wiki/index.php/Main\_Page](http://dynamo.iro.umontreal.ca/%7Egambit/wiki/index.php/Main_Page) . If that is not an option, chapter 5 in SICP and chapters 5-- in LISP target compilation for faster execution. For faster interpretation I would take a look at the most recent JavaScript interpreters/compilers. There seems to be a lot of thought going into fast JavaScript execution, and you can probably learn from them. V8 cites two important papers: <http://code.google.com/apis/v8/design.html> and squirrelfish cites a couple: <http://webkit.org/blog/189/announcing-squirrelfish/> . There are also the canonical Scheme papers: <http://library.readscheme.org/page1.html> for the RABBIT compiler. If I engage in a bit of premature speculation, memory management might be the tough nut to crack.
Nils M Holm has published a book "Scheme 9 from empty space" <http://www.t3x.org/s9fes/> which includes a simple stop-the-world mark and sweep garbage collector. Source included. John Rose (of newer JVM fame) has written a paper on integrating Scheme to C: <http://library.readscheme.org/servlets/cite.ss?pattern=AcmDL-Ros-92> .
Yes on SICP. I've done this task several times and here's what I'd do if I were you: Design your memory model first. You'll want a GC system of some kind. It's WAAAAY easier to do this first than to bolt it on later. Design your data structures. In my implementations, I've had a basic cons box with a number of base types: atom, string, number, list, bool, primitive-function. Design your VM and be sure to keep the API clean. My last implementation had this as a top-level API (forgive the formatting - SO is pooching my preview) ``` ConsBoxFactory &GetConsBoxFactory() { return mConsFactory; } AtomFactory &GetAtomFactory() { return mAtomFactory; } Environment &GetEnvironment() { return mEnvironment; } t_ConsBox *Read(iostream &stm); t_ConsBox *Eval(t_ConsBox *box); void Print(basic_ostream<char> &stm, t_ConsBox *box); void RunProgram(char *program); void RunProgram(iostream &stm); ``` RunProgram isn't needed - it's implemented in terms of Read, Eval, and Print. REPL is a common pattern for interpreters, especially LISP. A ConsBoxFactory is available to make new cons boxes and to operate on them. An AtomFactory is used so that equivalent symbolic atoms map to exactly one object. An Environment is used to maintain the binding of symbols to cons boxes. Most of your work should go into these three steps. Then you will find that your client code and support code starts to look very much like LISP too: ``` t_ConsBox *ConsBoxFactory::Cadr(t_ConsBox *list) { return Car(Cdr(list)); } ``` You can write the parser in yacc/lex, but why bother? Lisp is an incredibly simple grammar and scanner/recursive-descent parser pair for it is about two hours of work. The worst part is writing predicates to identify the tokens (ie, IsString, IsNumber, IsQuotedExpr, etc) and then writing routines to convert the tokens into cons boxes. Make it easy to write glue into and out of C code and make it easy to debug issues when things go wrong.
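To make the Read/Eval/Print shape concrete, here is a deliberately tiny s-expression evaluator sketch in Python (the names and the toy builtin set are my own, not drawn from any of the cited books); it shows a reader building a nested-list syntax tree and an evaluator walking it against an environment:

```python
def tokenize(src):
    # Pad parentheses with spaces so split() yields a flat token stream.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    """Build a nested-list syntax tree from a token stream (destructive)."""
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(read(tokens))
        tokens.pop(0)                      # drop the closing ")"
        return expr
    try:
        return int(tok)                    # numeric atom
    except ValueError:
        return tok                         # symbol

def evaluate(expr, env):
    if isinstance(expr, str):              # symbol lookup
        return env[expr]
    if not isinstance(expr, list):         # self-evaluating atom
        return expr
    op, *args = expr
    if op == "if":                         # special form: lazy branches
        cond, then, alt = args
        return evaluate(then if evaluate(cond, env) else alt, env)
    fn = evaluate(op, env)                 # ordinary application
    return fn(*[evaluate(a, env) for a in args])

def run(src, env):
    return evaluate(read(tokenize(src)), env)

toy_env = {"+": lambda *xs: sum(xs),
           "*": lambda a, b: a * b,
           "<": lambda a, b: a < b}
```

A real implementation would add lambda, definitions, proper error handling, and garbage collection, but the tree-walker skeleton stays this simple.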
References Needed for Implementing an Interpreter in C/C++
[ "", "c++", "lisp", "interpreter", "" ]
Could you please explain to me the difference between the API functions `AllocConsole` and `AttachConsole(-1)`? I mean the case where `AttachConsole` gets `ATTACH_PARENT_PROCESS`, i.e. `(DWORD)-1`.
Well, the fundamental difference is: * `AllocConsole()` will create a new console (and attach to it) * `AttachConsole( ATTACH_PARENT_PROCESS /* -1 */)` will not create a new console, it will attach to the existing console of the parent process. In the first case you get a whole new console window, in the second case, you use an existing console window. Of course, if you're already attached to a console (ie., you're a console mode program launched from cmd.exe) there's not much difference - you'll get an error with either API. Also note that just because you detach from a console doesn't mean the detached console will be useful - for example, if you're a console process launched from a cmd window, that window essentially blocks until your process ends. Some code to play with: ``` int main( int argc, char* argv[]) { int ch; BOOL bResult; printf( "default console\n"); ch = getchar(); bResult = FreeConsole(); bResult = AllocConsole(); printf( "AllocConsole()\n"); ch = getchar(); bResult = FreeConsole(); bResult = AttachConsole( ATTACH_PARENT_PROCESS); printf( "AttachConsole( ATTACH_PARENT_PROCESS)\n"); ch = getchar(); return 0; } ```
I don't think there's a function called `CreateConsole`, but there's [`AllocConsole`](http://msdn.microsoft.com/en-us/library/ms681944%28VS.85%29.aspx). Assuming that's what you meant, I think the difference is that `AttachConsole(ATTACH_PARENT_PROCESS)` can [return `ERROR_INVALID_HANDLE`](http://msdn.microsoft.com/en-us/library/ms681952.aspx) if the parent process doesn't have a console. Try running this code from both a command prompt and Start -> Run: ``` #include <windows.h> #pragma comment ( lib, "user32.lib" ) int main() { BOOL b; char msg[1024]; b = FreeConsole(); sprintf(msg, "%d", b); MessageBox(NULL, msg, "FreeConsole", 0); b = AttachConsole(ATTACH_PARENT_PROCESS); sprintf(msg, "%d", b); MessageBox(NULL, msg, "AttachConsole", 0); return 0; } ``` When run from a command prompt, two message boxes containing a `1` are displayed, meaning both calls succeeded. When run from Start -> Run, the first box contains `1` and the second contains `0`, meaning that only the first call succeeded. The second one fails because explorer.exe (which is the parent of a process launched from Start -> Run) doesn't have a console.
What is the difference between API functions AllocConsole and AttachConsole(-1)?
[ "", "c#", ".net", "winapi", "console", "" ]
How do you get the [logical xor](http://en.wikipedia.org/wiki/Exclusive_or) of two variables in Python? For example, I have two variables that I expect to be strings. I want to test that only one of them contains a True value (is not None or an empty string): ``` str1 = raw_input("Enter string one:") str2 = raw_input("Enter string two:") if logical_xor(str1, str2): print "ok" else: print "bad" ``` The `^` operator is bitwise, and not defined on all objects: ``` >>> 1 ^ 1 0 >>> 2 ^ 1 3 >>> "abc" ^ "" Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: unsupported operand type(s) for ^: 'str' and 'str' ```
If you're already normalizing the inputs to booleans, then != is xor. ``` bool(a) != bool(b) ```
You can always use the definition of xor to compute it from other logical operations: ``` (a and not b) or (not a and b) ``` But this is a little too verbose for me, and isn't particularly clear at first glance. Another way to do it is: ``` bool(a) ^ bool(b) ``` The xor operator on two booleans is logical xor (unlike on ints, where it's bitwise). Which makes sense, since [`bool` is just a subclass of `int`](http://docs.python.org/library/functions.html?highlight=bool#bool), but is implemented to only have the values `0` and `1`. And logical xor is equivalent to bitwise xor when the domain is restricted to `0` and `1`. So the `logical_xor` function would be implemented like: ``` def logical_xor(str1, str2): return bool(str1) ^ bool(str2) ``` Credit to [Nick Coghlan on the Python-3000 mailing list](http://mail.python.org/pipermail/python-3000/2007-December/011560.html).
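A quick sanity check (sketch) that the `bool(...) ^ bool(...)` form and the `!=` form from the other answer agree on all truthy/falsy combinations, and that normalizing first matters:

```python
def logical_xor(a, b):
    # Normalizing to bool first makes ^ act as logical, not bitwise, xor.
    return bool(a) ^ bool(b)

def logical_xor_ne(a, b):
    # Equivalent: two normalized booleans differ exactly when one is true.
    return bool(a) != bool(b)

cases = [("", ""), ("x", ""), ("", "y"), ("x", "y")]
results = [logical_xor(a, b) for a, b in cases]
```

Note the contrast with raw integers: `2 ^ 1` is `3` (bitwise), while `logical_xor(2, 1)` is `False` because both operands are truthy.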
How do you get the logical xor of two variables in Python?
[ "", "python", "logical-operators", "" ]
I'm using boost for several C++ projects. I recently made a upgrade (1.33.1 to 1.36, soon to 1.37), since then I cannot run any debug-builds anymore. To be sure that no other project issues remain, I've created a minimum test-project, which only includes boost.thread, and uses it to start one method. The release build can be started, the debug build cannot, although the *Dependency Walker* shows that all required libraries are found (this also means that the required MS Debug CRT is found in the SxS directory). On startup I only get: > Die Anwendung konnte nicht richtig initialisiert werden (0xc0150002). > Klicken Sie auf "OK", um die Anwendung zu beenden. Which means nothing more than "failed to initialize app". An internet research primarily lead to [an MS Office installation problem](http://support.microsoft.com/?scid=kb%3Ben-us%3B822520&x=16&y=11), which recommends to perform a repair of WinXP. So, beside the repair setup (which I think will not help as I'm talking about debug-dll issues), any ideas? Ah, before I forget: Absolutely the same source-code leads to no errors on the build-machine (i.e., DLLs can be registered, means executed). So it's obviously an installation problem, but as the DLLs *are* there, and dependency-walker finds it, what else have I forgotten? (**edit**) Well, I have not yet resolved my problem, but thanks to deemok I'm a step further. For the sake of reducing misunderstandings I give some clarifications below: * The program fails to run on the *developer*-machine * I am working with an *installed* VS2005 (it's a VC++8 project) * I used the boost-setup from [BoostPro](http://www.boostpro.com/products/free), compiled all possible build-versions, and I double-checked that they are there (otherwise I'd already get linker-errors during build). 
* and I double-checked any corner of include/lib/bin configuration I can think of -- as boost uses auto-linking on windows, with a special naming convention, the build or start-up would have failed, with a much more comprehensible error-message. * I *cannot* use *static linking*, as boost.thread requires dynamic linking for DLL projects (I maybe could mess around here, but as this problem seems to happen only on my machine, I do not want to mess with this, as I'm sure the boost-guys had a reason to place that check in there in the first place) * As I wrote, I checked with Dependency Walker, and it says everything is just fine. Currently it seems to be an error in the boost-dll (maybe incorrect Manifest), will check that.
So you are using the *pre-built* libraries from BoostPro? If so, your environment might somehow be slightly different to the one they were built in (TR1 feature pack or not, etc). Perhaps best to try [building Boost yourself](http://www.boost.org/doc/libs/1_37_0/more/getting_started/windows.html#or-build-binaries-from-source) in your specific environment.
It's a [Side-by-Side](http://blogs.msdn.com/rchiodo/archive/2007/04/09/the-next-level-of-dll-hell-sxs.aspx) (SxS) issue – simply copying the DLLs is not enough anymore. Regarding your specific problem concerning the Debug build, see: [Running vc2008 debug builds on non-dev machines](https://stackoverflow.com/questions/264385/running-vc2008-debug-builds-on-non-dev-machines) Short answer: > You can't, because there's no installer redist for the debug runtime (and in fact the software license forbids distributing it, so you'd be breaking the EULA even if you did get something put together). So, you'll need to install Visual Studio there. However, if you still want to try without taking that path, you could read [puetzk's answer](https://stackoverflow.com/questions/264385/running-vc2008-debug-builds-on-non-dev-machines#264406) in its entirety. Or, you could try to [link everything statically](https://stackoverflow.com/questions/264385/running-vc2008-debug-builds-on-non-dev-machines#264461).
Cannot execute program if using boost (C++) libraries in debug-version on WinXP
[ "", "c++", "windows", "dll", "boost", "side-by-side", "" ]
Other than the Java language itself, you have to learn the Java framework, similar to how you have to learn the .NET framework in addition to the language (C#/VB). How important is it to know Unix? Or rather, what Unix areas should one focus on? Seeing as many people run Java-based applications (desktop/web) on Unix boxes, **what sort of Unix skills do you need**? Are we just talking basic directory traversing, creating files, etc., or is there much more to it?
The answer as read from Sun marketing material is that Java is cross platform, so you don't. The practical answer is that you need to know enough to get your own application up and running on the platform where you plan to use it. Getting to know Apache or Tomcat configuration helps if you're working with web development, and so does knowing how to use the basic network analysis tools - the `ifconfig`, `netstat` and `traceroute` commands are all useful. File permission tools are also a must for getting a system working - look into `chmod` and `chown` and how those commands work. Desktop systems have it easier, since most windowing systems are very good at working cross platform, but you still need to know a little bit about how the file system and permissions are structured.
Really, you don't need unix skills directly for writing java-based applications. However, if you want to develop java-based applications on unix boxes and deploy there, you want a good working understanding of how to operate and administer a unix box. But for the areas you mention (directory traversing, creating files), you'll be using Java APIs that only occasionally touch on Unix-specific ("\n" vs "\r\n", directories rooted at "/", etc.) information. When they do touch, it's not something you need to know in a programming sort of way, it's something you need to know in a user/administrator sort of way.
When learning Java, how important is it to know Unix well?
[ "", "java", "unix", "" ]
I have a JavaScript function (class) that takes a function reference as one parameter. ``` function MyClass ( callBack ) { if (typeof callBack !== 'function') throw "You didn't pass me a function!" } ``` For reasons I won't go into here, I need to append something to the function by enclosing it in an anonymous function, but the only way I've been able to figure out how to do it is by adding a public function to **MyClass** that takes the callBack function as a parameter and returns the modified version. ``` function MyClass () { this.modifyCallBack = function ( callBack ) { var oldCallBack = callBack; callBack = function () { oldCallBack(); // call the original functionality /* new code goes here */ } return callBack; } } /* elsewhere on the page, after the class is instantiated and the callback function defined */ myCallBackFunction = MyClassInstance.modifyCallBack( myCallBackFunction ); ``` Is it possible to make this work when passing the callBack function as a parameter to the class? Attempting to modify the function in this manner when passing it as a parameter seems to only affect the instance within the class, but that doesn't seem like it's a valid assumption since functions are Objects in JavaScript, and are hence passed by reference. **Update:** as crescentfresh pointed out (and I failed to explain well), I want to modify the callBack function in-place. I'd rather not call a second function if it's possible to do all of this when the class is instantiated.
Function objects don't provide methods to modify them. Therefore, what you want to do is impossible the way you want to do it. It's the same thing Jon Skeet likes to point out about Java: Objects are not really passed by reference, but instead a pointer to them is passed by value. That means that changing the value of an argument variable to a new one won't affect the original one at all. There are only two ways to do what you want in call-by-value languages like Java and JavaScript: The first one would be to use the (function) object's methods to modify it. As I already stated, function objects don't have those. The other one is to pass the object of which the function object is a property as a second argument and set the appropriate property to a new function which wraps the old one. Example: ``` var foo = {}; foo.func = function() {}; function wrapFunc(obj) { var oldFunc = obj.func; obj.func = function() { // do some stuff oldFunc.call(obj, _some_argument__); }; } wrapFunc(foo); ``` This works for global functions as well: they are properties of the `window` object.
As Javascript uses lexical scoping on variables the following is possible: ``` var modifiableCallback=function() { alert('A'); }; function ModifyCallbackClass(callback) { modifiableCallback=function() { callback(); alert('B'); }; } function body_onload() { var myClass=new ModifyCallbackClass(modifiableCallback); modifiableCallback(); } ``` This does what you want, however the function "modifiableCallback" must be referred to with the same name inside ModifyCallbackClass, otherwise the closure will not be applied. So this may limit the usefulness of this approach for you a little. Using eval (performance may suffer a bit) it is also possible to make this approach more flexible: ``` var modfiableCallback1=function() { alert('A'); }; var modfiableCallback2=function() { alert('B'); }; var modfiableCallback3=function() { alert('C'); }; function ModifyCallbackClass(callbackName) { var temp=eval(callbackName); var temp2=eval(callbackName); temp= function() { temp2(); alert('Modified'); }; eval(callbackName + " = temp;"); } function body_onload() { var myClass=new ModifyCallbackClass("modfiableCallback1"); modfiableCallback1(); myClass=new ModifyCallbackClass("modfiableCallback2"); modfiableCallback2(); myClass=new ModifyCallbackClass("modfiableCallback3"); modfiableCallback3(); } ```
Extend Javascript Function passed as parameter
[ "", "javascript", "" ]
I'm calling a WebService exposed by Oracle that accepts an input of an ItemID and returns to me the corresponding Item Number. I want to grab the Item Number that has been returned out of the XML contained in the response. The XML looks like this: ``` <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ns0="http://dev1/MyWebService1.wsdl"> <env:Header> <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" env:mustUnderstand="1"/> </env:Header> <env:Body> <ns0:getItemNbrByItemIdResponseElement> <ns0:result>1010603</ns0:result> </ns0:getItemNbrByItemIdResponseElement> </env:Body> </env:Envelope> ``` I'm interested in grabbing only the `<ns0:result>1010603</ns0:result>` particularly only the 1010603. I haven't done a lot of work parsing XML using C# and I'm playing around with a few different methods so far. What is the recommended way to do this? I'm on VS2008 (so XPath is available etc.)
I'd personally use LINQ to XML, because I find that easier to deal with than XPath, particularly when namespaces are involved. You'd do something like: ``` XNamespace ns0 = "http://dev1/MyWebService1.wsdl"; String result = doc.Descendants(ns0 + "result").First().Value; ``` Note that `doc` here is expected to be an [`XDocument`](http://msdn.microsoft.com/en-us/library/bb345449.aspx), *not* an [`XmlDocument`](http://msdn.microsoft.com/en-us/library/system.xml.xmldocument.aspx). (My guess is that this is why it wasn't showing up for you.)
fwiw you can cheat the namespace issue with an xpath like this: `//*[local-name()='result']`
What is a good way to find a specific value in an XML document using C#?
[ "", "c#", "xml", "xpath", "searching-xml", "" ]
I am working on some schema changes to an existing database. I backed up the database to get a dev copy, and have made my changes. I will be creating a single rollout script to migrate the changes on the production machine in a single transaction. Is there a best practice for creating a rollback script in case a deployment issue arises? Previously I've written them by hand using the following pattern: * Drop new constraints and indexes * Alter tables to remove new columns * Drop added tables * Commit transaction Is there a better approach?
That's basically it; I don't think there's much to add beyond your approach. This is how we do it in our company: we developers are responsible for creating the script and the rollback script, and we are responsible for leaving the DB in the same state it was in before the initial changes were applied. Then the DBAs run it in production; if there's ever a problem they'll run the rollback script and everything is back to normal. Just keep in mind the dependencies and the order in which you alter your objects, and then create the rollback script backwards.
You are missing the fifth step * Drop new constraints and indexes * Alter tables to remove new columns * Drop added tables * Commit transaction * **Test the hell out of the script before running it in production** A more efficient approach is to register the changes as they happen like [RoR](http://wiki.rubyonrails.org/rails/pages/understandingmigrations) [migrations](http://wiki.rubyonrails.org/rails/pages/UsingMigrations) [do](http://api.rubyonrails.org/classes/ActiveRecord/Migration.html). For each DB change you create a script that will both apply the change and roll it back (at your choice, of course). You can then have those scripts under version control just like your code. Additionally, if you keep a version number in the database you can automatize it a bit more, by identifying each script with a version number and having it increase or decrease the version number in the database according to the performed action.
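The "register the changes as they happen" idea can be sketched as a toy migration runner (illustrative only; `MIGRATIONS`, `migrate_to`, and the table names here are invented for this sketch, and real migration tools are far more robust):

```python
import sqlite3

# Each entry pairs a forward script with its rollback; the rollback is
# written "backwards": drop whatever the forward step created.
MIGRATIONS = [
    (1, "CREATE TABLE widgets (id INTEGER PRIMARY KEY)",
        "DROP TABLE widgets"),
    (2, "CREATE INDEX idx_widgets ON widgets (id)",
        "DROP INDEX idx_widgets"),
]

def current_version(conn):
    # SQLite keeps a free integer slot per database file for exactly this.
    return conn.execute("PRAGMA user_version").fetchone()[0]

def migrate_to(conn, target):
    version = current_version(conn)
    if target > version:            # roll forward, oldest change first
        for v, up, _ in MIGRATIONS:
            if version < v <= target:
                conn.execute(up)
    else:                           # roll back, newest change first
        for v, _, down in reversed(MIGRATIONS):
            if target < v <= version:
                conn.execute(down)
    conn.execute("PRAGMA user_version = %d" % target)

conn = sqlite3.connect(":memory:")
migrate_to(conn, 2)   # deploy
migrate_to(conn, 0)   # full rollback
```

Keeping each change and its inverse together (and under version control) is what makes the rollback script trustworthy: you write and test it at the same time as the change, not during the outage.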
Best way to create a SQL Server rollback script?
[ "", "sql", "sql-server-2005", "t-sql", "rollback", "" ]
Here is my issue. I have an array containing the names of the cities that I need to look up the weather for. So I'm looping through each city and performing an AJAX request to retrieve the weather. ``` var LOCATION = 'http://www.google.com/ig/api?weather='; $( document ).ready( function() { for( var cityIdx = 0; cityIdx < cities.length; cityIdx++ ) { $.ajax({ type: 'GET', url: LOCATION + cities[ cityIdx ], dataType: 'xml', success: function( xml ) { if( $( xml ).find( 'problem_cause' ) != 0 ) { // Do what I want with the data returned var weather = $( xml ).find( 'temp_c' ).attr( 'data' ); } } }); } }); ``` The issue I'm encountering is that in the success function, I can't access the city name (via cities[cityIdx]). I inserted an alert() in the for loop and the success function and it seems like the loop gets executed *cities.length* times, then I get the success function alerts. My goal is to simply loop through each city getting the weather and showing it on my page along with the associated city name. Also, what would you suggest I should do to separate content from presentation? Thank you. :)
I suspect that your problem is similar to the example at <http://ejohn.org/apps/learn/>. The index variable cityIdx is updated in the closure you create as the for loop is processed, so by the time your function on success is run cityIdx will point to the last element in the array. The solution is to use an evaluated anonymous function to create an independent context, where the index value doesn't get updated. ``` //... success: (function(cities, idx) { return function( xml ) { if( $( xml ).find( 'problem_cause' ) != 0 ) { // Do what I want with the data returned // use cities[idx] var weather = $( xml ).find( 'temp_c' ).attr( 'data' ); } }; })(cities, cityIdx) //... ```
Since Javascript uses functions for closure, I found the simplest way for me was to just wrap the contents of the for loop in an inline function that copies the current city name to a variable it will always have access to. ``` $(document).ready(function() { for (var cityIdx = 0; cityIdx < cities.length; cityIdx++) { new function() { var currentCity = cities[cityIdx]; $.ajax({ type: 'GET', url: LOCATION + currentCity, dataType: 'xml', success: function(xml) { alert(currentCity); if ($(xml).find('problem_cause') != 0) { // Do what I want with the data returned var weather = $(xml).find('temp_c').attr('data'); } } }); }(); // the "();" calls the function we just created inline } }); ```
Iterating through an array while performing a request for each entry
[ "", "javascript", "jquery", "ajax", "" ]
This isn't quite as straightforward as one may think. I'm using a plugin called [jQuery MultiSelect](http://abeautifulsite.net/notebook.php?article=62 "jQuery MultiSelect") and generating multiple <select> elements using XSLT as follows: ``` <xsl:for-each select="RootField"> <select id="{RootField}" multiple="multiple" size="3"> <option value=""></option> <xsl:for-each select="ChildField"> <option value="{ChildField}"><xsl:value-of select="ChildField"/></option> </xsl:for-each> </select> </xsl:for-each> ``` The accompanying JavaScript is as follows: ``` var selects = document.getElementsByTagName("select"); $.each(selects, function() { $(this).multiSelect(); }); ``` This allows me to apply the multiSelect() function to every single <select> on the page. The behaviour is quite strange: every other <select> is being changed into the dropdown list (all the even ones anyway). I can't see anything wrong in my JavaScript to cause this issue, as it would iterate over every single one. To make it more clear, the only lists that have that JavaScript applied are the ones in positions 2, 4, 6 and 8 (out of the 9 which are on the page). Any ideas?
I'd not heard the 'Halloween problem' tag before, but Robert may be correct. The nodelist returned from getElementsByTagName is dynamic, i.e. adding or removing elements (in this case, the selects the plugin replaces) will change the nodelist after it has been created. Try ``` //hoping for magic here $('select').multiSelect(); ``` or ``` $('select').each( function() { $(this).multiSelect(); }); ```
Sounds like a Halloween problem (<http://blogs.msdn.com/mikechampion/archive/2006/07/20/672208.aspx>) in multiSelect, but since I don't know multiSelect I can't say for sure.
Iterating Over <select> Using jQuery + Multi Select
[ "", "javascript", "jquery", "multi-select", "" ]
How do you do "inline functions" in C#? I don't think I understand the concept. Are they like anonymous methods? Like lambda functions? **Note**: The answers almost entirely deal with the ability to [inline functions](http://en.wikipedia.org/wiki/Inline_expansion), i.e. "a manual or compiler optimization that replaces a function call site with the body of the callee." If you are interested in [anonymous (a.k.a. lambda) functions](http://en.wikipedia.org/wiki/Anonymous_function), see [@jalf's answer](https://stackoverflow.com/a/473813/116891) or [What is this 'Lambda' everyone keeps speaking of?](https://stackoverflow.com/questions/1085875/what-is-this-lambda-everyone-keeps-speaking-of/1086347#1086347).
Finally in .NET 4.5, the CLR allows one to hint/suggest1 method inlining using [`MethodImplOptions.AggressiveInlining`](http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.methodimploptions%28v=VS.110%29.aspx) value. It is also available in the Mono's trunk (committed today). ``` // The full attribute usage is in mscorlib.dll, // so should not need to include extra references using System.Runtime.CompilerServices; ... [MethodImpl(MethodImplOptions.AggressiveInlining)] void MyMethod(...) ``` **1**. Previously "force" was used here. I'll try to clarify the term. As in the comments and the documentation, `The method should be inlined if possible.` Especially considering Mono (which is open), there are some mono-specific technical limitations considering inlining or more general one (like virtual functions). Overall, yes, this is a hint to compiler, but I guess that is what was asked for.
Inline methods are simply a compiler optimization where the code of a function is rolled into the caller. There's no mechanism by which to do this in C#, and they're to be used sparingly in languages where they are supported -- if you don't know why they should be used somewhere, they shouldn't be. Edit: To clarify, there are two major reasons they need to be used sparingly: 1. It's easy to make massive binaries by using inline in cases where it's not necessary 2. The compiler tends to know better than you do when something should, from a performance standpoint, be inlined It's best to leave things alone and let the compiler do its work, then profile and figure out if inline is the best solution for you. Of course, some things just make sense to be inlined (mathematical operators particularly), but letting the compiler handle it is typically the best practice.
Inline functions in C#?
[ "", "c#", "optimization", "inline", "" ]
I understand the best way to count the number of rows in an SQL table is count(\*) (or equivalently count(PrimaryKey)). 1. Is this O(1)? 2. If not, why not? Why not just implement a counter and return it for this specific query? Is it because this query is not a common use case? If the answers vary according to SQL engine, I'd like to hear the differences - but in any case, I'm interested in the actual implementation in production SQL engines.
In *some* RDBMSs this is O(1) (most notably MySQL's MyISAM engine), but AFAIK it is generally frowned upon and considered an "ugly performance hack". The reason is that if you have transactions (which every real RDBMS should have), the total number of rows in the table might or might not be equal to the total number *you can see from the current transaction*. This is why the server needs to check which rows are actually visible to your transaction, making it more O(n) than O(1). If you want to optimize the process of getting the number of rows and are satisfied with an approximate result, most RDBMSs have special "info" tables which hold information about the tables, including the approximate number of rows (again, it is not the *exact* number of rows because of the transactions).
No, this is not a common use case. Most row counts I have seen have some `WHERE` clause involved. The main reason this is not implemented, though, is that the row counter would be a source of contention in a multi-user environment. Every time a row was inserted or deleted the counter would need updating, effectively locking the whole table for each insert/delete.
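The contention point can be sketched with triggers (an illustrative SQLite-in-Python toy; the table and trigger names are invented here): every insert and delete must also write the single counter row, which is exactly where concurrent writers would serialize.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY);
CREATE TABLE row_counter (n INTEGER);
INSERT INTO row_counter VALUES (0);
-- Every write to items now also updates the one counter row,
-- so concurrent writers would all block on it.
CREATE TRIGGER items_ins AFTER INSERT ON items
  BEGIN UPDATE row_counter SET n = n + 1; END;
CREATE TRIGGER items_del AFTER DELETE ON items
  BEGIN UPDATE row_counter SET n = n - 1; END;
""")
conn.executemany("INSERT INTO items VALUES (?)", [(i,) for i in range(5)])
conn.execute("DELETE FROM items WHERE id < 2")
count = conn.execute("SELECT n FROM row_counter").fetchone()[0]  # O(1) read
```

The read is O(1), but only because every writer paid for it on the way in.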
Count rows in an SQL table in O(1)
[ "", "sql", "performance", "count", "" ]
I have a partially filled array of objects, and when I iterate through them I try to check whether the selected object is `null` before I do other stuff with it. However, even the act of checking whether it is `null` seems to throw a `NullPointerException`. `array.length` will include all `null` elements as well. How do you go about checking for `null` elements in an array? For example, the following code throws an NPE for me. ``` Object[][] someArray = new Object[5][]; for (int i=0; i<=someArray.length-1; i++) { if (someArray[i]!=null) { //do something } } ```
You have more going on than you said. I ran the following expanded test from your example: ``` public class test { public static void main(String[] args) { Object[][] someArray = new Object[5][]; someArray[0] = new Object[10]; someArray[1] = null; someArray[2] = new Object[1]; someArray[3] = null; someArray[4] = new Object[5]; for (int i=0; i<=someArray.length-1; i++) { if (someArray[i] != null) { System.out.println("not null"); } else { System.out.println("null"); } } } } ``` and got the expected output: ``` $ /cygdrive/c/Program\ Files/Java/jdk1.6.0_03/bin/java -cp . test not null null not null null not null ``` Are you possibly trying to check the lengths of someArray[index]?
It does not. See below. The program you posted runs as expected. ``` C:\oreyes\samples\java\arrays>type ArrayNullTest.java public class ArrayNullTest { public static void main( String [] args ) { Object[][] someArray = new Object[5][]; for (int i=0; i<=someArray.length-1; i++) { if (someArray[i]!=null ) { System.out.println("It wasn't null"); } else { System.out.printf("Element at %d was null \n", i ); } } } } C:\oreyes\samples\java\arrays>javac ArrayNullTest.java C:\oreyes\samples\java\arrays>java ArrayNullTest Element at 0 was null Element at 1 was null Element at 2 was null Element at 3 was null Element at 4 was null C:\oreyes\samples\java\arrays> ```
How to check if array element is null to avoid NullPointerException in Java
[ "", "java", "arrays", "exception", "nullpointerexception", "" ]
I'm rolling my own logger class, and want to represent the hierarchy of logs as the app moves through different phases: ``` log start loading loaded 400 values processing couldn't process var "x" ``` etc. In C++ (yes I know), I'd use RAII classlets that pushed themselves on the log stack when created, and popped off when they left the scope. You could then leave functions at any point and still have consistent logging. Obviously in C# any object has to be new'd, so it wouldn't be deleted until the next garbage collection cycle, and if you immediately create a new classlet, you could have an out-of-sync logger. How would people try to solve this problem in C#? I want the logger syntax to be as unobtrusive to the current function as possible, and still support functions with multiple exit points. The only solution I can think of off the top of my head involves closeHeirarchy() calls at every return statement - and you know I'm going to miss one somewhere. --- Edit: I should make it clear I'm interested mainly in how you would replicate the RAII behaviour in C#. Is there a mechanism that provides identical behaviour?
You can get the behavior you are asking about if you use the [using](http://msdn.microsoft.com/en-us/library/yh598w02(VS.80).aspx) statement with an IDisposable-class. Do something like this: ``` public class LogContext: IDisposable { private readonly Logger _logger; private readonly string _context; public LogContext(Logger logger, string context){ _logger = logger; _context = context; _logger.EnterContext(_context); } public void Dispose(){ _logger.LeaveContext(_context); } } //... public void Load(){ using(new LogContext(logger, "Loading")){ // perform load } } ```
OffTopic: If you want "the logger syntax to be as unobtrusive " to the client code as possible, you might want to have a look at Aspect Oriented Programming once your logger is finished. But maybe you should take a short less painful route and use any of the great loggers available (log4Net, NLog, System.Trace or even Spring.Commonlogging).
Replicating C++'s RAII in C#
[ "", "c#", "logging", "raii", "" ]
In Python, properties are used instead of the Java-style getters and setters. So one rarely sees `get...` or `set...` methods in the public interfaces of classes. But, in cases where a property is not appropriate, one might still end up with methods that behave like getters or setters. Now for my questions: Should these method names start with `get_` / `set_`? Or, is this unpythonic verbosity since it is often obvious of what is meant (and one can still use the docstring to clarify non-obvious situations)? This might be a matter of personal taste, but I would be interested in what the majority thinks about this. What would you prefer as an API user? For example, say we have an object representing multiple cities. One might have a method `get_city_by_postalcode(postalcode)` or one could use the shorter method `city_by_postalcode`. I tend towards the latter.
I think shorter is better, so I tend to prefer the latter. But what's important is to be consistent within your project: don't mix the two methods. If you jump into someone else's project, keep what the other developers chose initially.
You won't ever lose the chance to make your property behave like a getter/setter later by using [descriptors](https://web.archive.org/web/20081216014119/http://users.rcn.com/python/download/Descriptor.htm). If you want to change a property to be read-only, you can also replace it with a getter method with the same name as the property and decorate it with `@property`. So, my advice is to avoid getters/setters unless the project you are working on already uses them because you can always change your mind later and make properties read-only, write-only, or whatever without modifying the interface to your class.
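To illustrate the "change your mind later" point, here is a minimal sketch (the class and attribute names are invented for the example): a plain-looking attribute that is actually a computed, read-only property, with no `get_` prefix in the interface.

```python
class Cities:
    def __init__(self):
        self._by_postalcode = {"10115": "Berlin", "75001": "Paris"}

    @property
    def count(self):
        # Looks like plain attribute access to callers (c.count),
        # but is computed on demand and is read-only.
        return len(self._by_postalcode)

    def city_by_postalcode(self, postalcode):
        return self._by_postalcode[postalcode]

c = Cities()
print(c.count)                        # attribute syntax, no get_count()
print(c.city_by_postalcode("75001"))
```

Assigning to `c.count` raises `AttributeError`, so the attribute became read-only without any rename in the public interface.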
Should I use get_/set_ prefixes in Python method names?
[ "", "python", "coding-style", "" ]
I've been extensively using smart pointers (boost::shared\_ptr to be exact) in my projects for the last two years. I understand and appreciate their benefits and I generally like them a lot. But the more I use them, the more I miss the deterministic behavior of C++ with regard to memory management and RAII that I seem to like in a programming language. Smart pointers simplify the process of memory management and provide automatic garbage collection among other things, but the problem is that using automatic garbage collection in general and smart pointers specifically introduces some degree of indeterminism in the order of (de)initializations. This indeterminism takes control away from the programmer and, as I've come to realize lately, makes the job of designing and developing APIs, the usage of which is not completely known in advance at the time of development, annoyingly time-consuming because all usage patterns and corner cases must be well thought of. To elaborate more, I'm currently developing an API. Parts of this API require certain objects to be initialized before or destroyed after other objects. Put another way, the order of (de)initialization is important at times. To give you a simple example, let's say we have a class called System. A System provides some basic functionality (logging in our example) and holds a number of Subsystems via smart pointers. ``` class System { public: boost::shared_ptr< Subsystem > GetSubsystem( unsigned int index ) { assert( index < mSubsystems.size() ); return mSubsystems[ index ]; } void LogMessage( const std::string& message ) { std::cout << message << std::endl; } private: typedef std::vector< boost::shared_ptr< Subsystem > > SubsystemList; SubsystemList mSubsystems; }; class Subsystem { public: Subsystem( System* pSystem ) : pParentSystem( pSystem ) { } ~Subsystem() { pParentSystem->LogMessage( "Destroying..." ); // Destroy this subsystem: deallocate memory, release resource, etc.
} /* Other stuff here */ private: System * pParentSystem; // raw pointer to avoid cycles - can also use weak_ptrs }; ``` As you can already tell, a Subsystem is only meaningful in the context of a System. But a Subsystem in such a design can easily outlive its parent System. ``` int main() { { boost::shared_ptr< Subsystem > pSomeSubsystem; { boost::shared_ptr< System > pSystem( new System ); pSomeSubsystem = pSystem->GetSubsystem( /* some index */ ); } // Our System would go out of scope and be destroyed here, but the Subsystem that pSomeSubsystem points to will not be destroyed. } // pSomeSubsystem would go out of scope here but wait a second, how are we going to log messages in Subsystem's destructor?! Its parent System is destroyed after all. BOOM! return 0; } ``` If we had used raw pointers to hold subsystems, we would have destroyed the subsystems when our system went down; of course, then pSomeSubsystem would be a dangling pointer. Although it's not the job of an API designer to protect the client programmers from themselves, it's a good idea to make the API easy to use correctly and hard to use incorrectly. So I'm asking you guys. What do you think? How should I alleviate this problem? How would you design such a system? Thanks in advance, Josh
## Problem Summary There are two competing concerns in this question. 1. Life-cycle management of `Subsystem`s, allowing their removal at the right time. 2. Clients of `Subsystem`s need to know that the `Subsystem` they are using is valid. ## Handling #1 `System` owns the `Subsystem`s and should manage their life-cycle with its own scope. Using `shared_ptr`s for this is particularly useful as it simplifies destruction, but you should not be handing them out because then you lose the determinism you are seeking with regard to their deallocation. ## Handling #2 This is the more interesting concern to address. Describing the problem in more detail, you need clients to receive an object which behaves like a `Subsystem` while that `Subsystem` (and its parent `System`) exists, but behaves appropriately after a `Subsystem` is destroyed. This is easily solved by a combination of the [Proxy Pattern](http://en.wikipedia.org/wiki/Proxy_pattern "Wikipedia: Proxy Pattern"), the [State Pattern](http://en.wikipedia.org/wiki/State_pattern "Wikipedia: State Pattern") and the [Null Object Pattern](http://en.wikipedia.org/wiki/Null_Object_pattern "Wikipedia: Null Object Pattern"). While this may seem to be a bit complex of a solution, '*There is a simplicity only to be had on the other side of complexity.*' As library/API developers, we must go the extra mile to make our systems robust. Further, we want our systems to behave intuitively as a user expects, and to decay gracefully when they attempt to misuse them. There are many solutions to this problem; however, this one should get you to that all-important point where, as you and [Scott Meyers](https://rads.stackoverflow.com/amzn/click/com/0321334876 "Amazon: Effective C++ by Scott Meyers") say, it is '*easy to use correctly and hard to use incorrectly.*' Now, I am assuming that in reality, `System` deals in some base class of `Subsystem`s, from which you derive various different `Subsystem`s.
I've introduced it below as `SubsystemBase`. You need to introduce a ***Proxy*** object, `SubsystemProxy` below, which implements the interface of `SubsystemBase` by forwarding requests to the object it is proxying. (In this sense, it is very much like a special purpose application of the [Decorator Pattern](http://en.wikipedia.org/wiki/Decorator_pattern "Wikipedia: Decorator Pattern").) Each `Subsystem` creates one of these objects, which it holds via a `shared_ptr`, and returns when requested via `GetProxy()`, which is called by the parent `System` object when `GetSubsystem()` is called. When a `System` goes out of scope, each of its `Subsystem` objects gets destructed. In their destructor, they call `mProxy->Nullify()`, which causes their ***Proxy*** objects to change their ***State***. They do this by changing to point to a ***Null Object***, which implements the `SubsystemBase` interface, but does so by doing nothing. Using the ***State Pattern*** here has allowed the client application to be completely oblivious to whether or not a particular `Subsystem` exists. Moreover, it does not need to check pointers or keep around instances that should have been destroyed. The ***Proxy Pattern*** allows the client to be dependent on a lightweight object that completely wraps up the details of the API's inner workings, and maintains a constant, uniform interface. The ***Null Object Pattern*** allows the ***Proxy*** to function after the original `Subsystem` has been removed. ## Sample Code I had placed a rough pseudo-code quality example here, but I wasn't satisfied with it. I've rewritten it to be a precise, compiling (I used g++) example of what I have described above. To get it to work, I had to introduce a few other classes, but their uses should be clear from their names.
I employed the [Singleton Pattern](http://en.wikipedia.org/wiki/Singleton_pattern "Wikipedia: Singleton Pattern") for the `NullSubsystem` class, as it makes sense that you wouldn't need more than one. `ProxyableSubsystemBase` completely abstracts the Proxying behavior away from the `Subsystem`, allowing it to be ignorant of this behavior. Here is the UML Diagram of the classes: [![UML Diagram of Subsystem and System Hierarchy](https://i.stack.imgur.com/5RVNY.png)](https://i.stack.imgur.com/5RVNY.png) ### Example Code: ``` #include <iostream> #include <string> #include <vector> #include <boost/shared_ptr.hpp> // Forward Declarations to allow friending class System; class ProxyableSubsystemBase; // Base defining the interface for Subsystems class SubsystemBase { public: // pure virtual functions virtual void DoSomething(void) = 0; virtual int GetSize(void) = 0; virtual ~SubsystemBase() {} // virtual destructor for base class }; // Null Object Pattern: an object which implements the interface to do nothing. class NullSubsystem : public SubsystemBase { public: // implements pure virtual functions from SubsystemBase to do nothing.
void DoSomething(void) { } int GetSize(void) { return -1; } // Singleton Pattern: We only ever need one NullSubsystem, so we'll enforce that static NullSubsystem *instance() { static NullSubsystem singletonInstance; return &singletonInstance; } private: NullSubsystem() {} // private constructor to enforce Singleton Pattern }; // Proxy Pattern: An object that takes the place of another to provide better // control over the uses of that object class SubsystemProxy : public SubsystemBase { friend class ProxyableSubsystemBase; public: SubsystemProxy(SubsystemBase *ProxiedSubsystem) : mProxied(ProxiedSubsystem) { } // implements pure virtual functions from SubsystemBase to forward to mProxied void DoSomething(void) { mProxied->DoSomething(); } int GetSize(void) { return mProxied->GetSize(); } protected: // State Pattern: the initial state of the SubsystemProxy is to point to a // valid SubsystemBase, which is passed into the constructor. Calling Nullify() // causes a change in the internal state to point to a NullSubsystem, which allows // the proxy to still perform correctly, despite the Subsystem going out of scope. void Nullify() { mProxied=NullSubsystem::instance(); } private: SubsystemBase *mProxied; }; // A Base for real Subsystems to add the Proxying behavior class ProxyableSubsystemBase : public SubsystemBase { friend class System; // Allow system to call our GetProxy() method.
public: ProxyableSubsystemBase() : mProxy(new SubsystemProxy(this)) // create our proxy object { } ~ProxyableSubsystemBase() { mProxy->Nullify(); // inform our proxy object we are going away } protected: boost::shared_ptr<SubsystemProxy> GetProxy() { return mProxy; } private: boost::shared_ptr<SubsystemProxy> mProxy; }; // the managing system class System { public: typedef boost::shared_ptr< SubsystemProxy > SubsystemHandle; typedef boost::shared_ptr< ProxyableSubsystemBase > SubsystemPtr; SubsystemHandle GetSubsystem( unsigned int index ) { assert( index < mSubsystems.size() ); return mSubsystems[ index ]->GetProxy(); } void LogMessage( const std::string& message ) { std::cout << " <System>: " << message << std::endl; } int AddSubsystem( ProxyableSubsystemBase *pSubsystem ) { LogMessage("Adding Subsystem:"); mSubsystems.push_back(SubsystemPtr(pSubsystem)); return mSubsystems.size()-1; } System() { LogMessage("System is constructing."); } ~System() { LogMessage("System is going out of scope."); } private: // have to hold base pointers typedef std::vector< boost::shared_ptr<ProxyableSubsystemBase> > SubsystemList; SubsystemList mSubsystems; }; // the actual Subsystem class Subsystem : public ProxyableSubsystemBase { public: Subsystem( System* pParentSystem, const std::string ID ) : mParentSystem( pParentSystem ) , mID(ID) { mParentSystem->LogMessage( "Creating... "+mID ); } ~Subsystem() { mParentSystem->LogMessage( "Destroying... "+mID ); } // implements pure virtual functions from SubsystemBase void DoSomething(void) { mParentSystem->LogMessage( mID + " is DoingSomething (tm)."); } int GetSize(void) { return sizeof(Subsystem); } private: System * mParentSystem; // raw pointer to avoid cycles - can also use weak_ptrs std::string mID; }; ////////////////////////////////////////////////////////////////// // Actual Use Example int main(int argc, char* argv[]) { std::cout << "main(): Creating Handles H1 and H2 for Subsystems. 
" << std::endl; System::SubsystemHandle H1; System::SubsystemHandle H2; std::cout << "-------------------------------------------" << std::endl; { std::cout << " main(): Begin scope for System." << std::endl; System mySystem; int FrankIndex = mySystem.AddSubsystem(new Subsystem(&mySystem, "Frank")); int ErnestIndex = mySystem.AddSubsystem(new Subsystem(&mySystem, "Ernest")); std::cout << " main(): Assigning Subsystems to H1 and H2." << std::endl; H1=mySystem.GetSubsystem(FrankIndex); H2=mySystem.GetSubsystem(ErnestIndex); std::cout << " main(): Doing something on H1 and H2." << std::endl; H1->DoSomething(); H2->DoSomething(); std::cout << " main(): Leaving scope for System." << std::endl; } std::cout << "-------------------------------------------" << std::endl; std::cout << "main(): Doing something on H1 and H2. (outside System Scope.) " << std::endl; H1->DoSomething(); H2->DoSomething(); std::cout << "main(): No errors from using handles to out of scope Subsystems because of Proxy to Null Object." << std::endl; return 0; } ``` ## Output from the code: ``` main(): Creating Handles H1 and H2 for Subsystems. ------------------------------------------- main(): Begin scope for System. <System>: System is constructing. <System>: Creating... Frank <System>: Adding Subsystem: <System>: Creating... Ernest <System>: Adding Subsystem: main(): Assigning Subsystems to H1 and H2. main(): Doing something on H1 and H2. <System>: Frank is DoingSomething (tm). <System>: Ernest is DoingSomething (tm). main(): Leaving scope for System. <System>: System is going out of scope. <System>: Destroying... Frank <System>: Destroying... Ernest ------------------------------------------- main(): Doing something on H1 and H2. (outside System Scope.) main(): No errors from using handles to out of scope Subsystems because of Proxy to Null Object. 
``` ## Other Thoughts: * An interesting article I read in one of the Game Programming Gems books talks about using Null Objects for debugging and development. They were specifically talking about using Null Graphics Models and Textures, such as a checkerboard texture to make missing models really stand out. The same could be applied here by changing out the `NullSubsystem` for a `ReportingSubsystem` which would log the call and possibly the callstack whenever it is accessed. This would allow you or your library's clients to track down where they are depending on something that has gone out of scope, but without the need to cause a crash. * I mentioned in a comment @Arkadiy that the circular dependency he brought up between `System` and `Subsystem` is a bit unpleasant. It can easily be remedied by having `System` derive from an interface on which `Subsystem` depends, an application of Robert C Martin's [Dependency Inversion Principle](http://en.wikipedia.org/wiki/Dependency_inversion_principle "Wikipedia: Dependency Inversion Principle"). Better still would be to isolate the functionality that `Subsystem`s need from their parent, write an interface for that, then hold onto an implementor of that interface in `System` and pass it to the `Subsystem`s, which would hold it via a `shared_ptr`. For example, you might have `LoggerInterface`, which your `Subsystem` uses to write to the log, then you could derive `CoutLogger` or `FileLogger` from it, and keep an instance of such in `System`. [![Eliminating the Circular Dependency](https://i.stack.imgur.com/Kq0HX.png)](https://i.stack.imgur.com/Kq0HX.png)
This is do-able with proper use of the `weak_ptr` class. In fact, you are already quite close to having a good solution. You are right that you cannot be expected to "out-think" your client programmers, nor should you expect that they will always follow the "rules" of your API (as I'm sure you are already aware). So, the best you can really do is damage control. I recommend having your call to `GetSubsystem` return a `weak_ptr` rather than a `shared_ptr` simply so that the client developer can test the validity of the pointer without always claiming a reference to it. Similarly, have `pParentSystem` be a `boost::weak_ptr<System>` so that it can internally detect whether its parent `System` still exists via a call to `lock` on `pParentSystem` along with a check for `NULL` (a raw pointer won't tell you this). Assuming you change your `Subsystem` class to always check whether or not its corresponding `System` object exists, you can ensure that if the client programmer attempts to use the `Subsystem` object outside of the intended scope that an error will result (that you control), rather than an inexplicable exception (that you must trust the client programmer to catch/properly handle). So, in your example with `main()`, things won't go BOOM! The most graceful way to handle this in the `Subsystem`'s dtor would be to have it look something like this: ``` class Subsystem { ... ~Subsystem() { boost::shared_ptr<System> my_system(pParentSystem.lock()); if (NULL != my_system.get()) { // only works if pParentSystem refers to a valid System object // now you are guaranteed this will work, since a reference is held to the System object my_system->LogMessage( "Destroying..." ); } // Destroy this subsystem: deallocate memory, release resource, etc. // when my_system goes out of scope, this may cause the associated System object to be destroyed as well (if it holds the last reference) } ... }; ``` I hope this helps!
A Question On Smart Pointers and Their Inevitable Indeterminism
[ "", "c++", "shared-ptr", "raii", "object-lifetime", "" ]
[This question](https://stackoverflow.com/questions/417265/what-does-sign-mean-in-django) originally asked (wrongly) what "|" means in Python, when the actual question was about Django. That question had a wonderful answer by Triptych that I want to preserve.
In Python, the `'|'` operator is defined by default on integer types and set types. If the two operands are integers, then it will perform a [bitwise or](http://en.wikipedia.org/wiki/Bitwise_operation#OR), which is a mathematical operation. If the two operands are `set` types, the `'|'` operator will return the union of two sets. ``` a = set([1,2,3]) b = set([2,3,4]) c = a|b # = set([1,2,3,4]) ``` Additionally, authors may define operator behavior for custom types, so if `something.property` is a user-defined object, you should check that class definition for an `__or__()` method, which will then define the behavior in your code sample. So, it's impossible to give you a precise answer without knowing the data types for the two operands, but *usually* it will be a bitwise or.
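To make the last point concrete, here is a minimal sketch of a class that defines `__or__` (the `Flags` class and its contents are invented for illustration; they are not from the original question):

```python
class Flags:
    """Hypothetical class that customizes the '|' operator via __or__."""
    def __init__(self, values):
        self.values = set(values)

    def __or__(self, other):
        # 'a | b' on two Flags objects returns their combined flag set
        return Flags(self.values | other.values)

combined = Flags(['read']) | Flags(['write'])   # calls Flags.__or__
print(sorted(combined.values))                  # prints ['read', 'write']
```

When Python evaluates `a | b` on non-built-in operands, it looks for `a.__or__(b)` (falling back to `b.__ror__(a)`), so the meaning of `|` is entirely up to the class author.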
Bitwise OR
What does the “|” sign mean in Python?
[ "", "python", "syntax-rules", "" ]
Looking for an implementation for C++ of a function like .NET's String.Format. Obviously there is printf and its varieties, but I'm looking for something that is positional as in: > String.Format("Hi there {0}. You are > {1} years old. How does it feel to be > {1}?", name, age); This is needed because we're going to try and make it easier to localize our app, and giving the translators {0} and {1} to position anywhere in the sentence is much easier than giving them a %s, %d, %d which must be positioned in that order in their translation. I suppose search and replace with variable inputs (va\_start, va\_end, etc) is what I'll end up building, but if there is already a solid solution, that would be preferable. Thanks :)
Lots of good recommendations above that would work in most situations. In my case, I ultimately wanted to load strings from a resource, AND keep the string resources as close to .NET String.Format as I could, so I rolled my own. After looking at some of the implementations above for ideas, the resulting implementation was quite short and easy. There is a class String, which in my case derives from Microsoft's CString, but it could derive from about any string class. There is also a class StringArg -- its job is to take any parameter type and turn it into a string (i.e. it mimics ToString in .NET). If a new object needs to be ToString'd, you just add another constructor. The constructor allows a printf-style format specifier for non-default formatting. The String class then accepts a string table ID for the source string, a number of StringArg parameters, and finally an optional HINSTANCE (I use lots of DLLs, any of which could host the string table, so this allowed me to pass it in, or use a DLL-specific HINSTANCE by default). Usage examples: ``` dlg.m_prompt = String(1417); //"Welcome to Stackoverflow!"
MessageBox(String(1532, m_username)); //"Hi {0}" ``` As it is, it only takes a string ID for an input, but it would be trivial to add an input string instead of a string ID: ``` CString s = String.Format("Hi {0}, you are {1} years old in Hexadecimal", m_userName, StringArg(m_age, "%0X")); ``` Now for the StringArg class which does the equivalent of ToString on variables: ``` class StringArg { StringArg(); //not implemented StringArg(const StringArg&); //not implemented StringArg& operator=(const StringArg&); //not implemented public: StringArg(LPCWSTR val); StringArg(const CString& val); StringArg(int val, LPCWSTR formatSpec = NULL); StringArg(size_t val, LPCWSTR formatSpec = NULL); StringArg(WORD val, LPCWSTR formatSpec = NULL); StringArg(DWORD val, LPCWSTR formatSpec = NULL); StringArg(__int64 val, LPCWSTR formatSpec = NULL); StringArg(double val, LPCWSTR formatSpec = NULL); CString ToString() const; private: CString m_strVal; }; extern HINSTANCE GetModuleHInst(); //every DLL implements this for getting its own HINSTANCE -- scenarios with a single resource DLL wouldn't need this
``` class String : public CString { public: String() { } String(WORD stringTableID, HINSTANCE hInst = GetModuleHInst()) { Format(stringTableID, hInst); } String(WORD stringTableID, const StringArg& arg1, HINSTANCE hInst = GetModuleHInst()) { Format(stringTableID, arg1, hInst); } String(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, HINSTANCE hInst = GetModuleHInst()) { Format(stringTableID, arg1, arg2, hInst); } String(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, HINSTANCE hInst = GetModuleHInst()) { Format(stringTableID, arg1, arg2, arg3, hInst); } String(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, HINSTANCE hInst = GetModuleHInst()) { Format(stringTableID, arg1, arg2, arg3, arg4, hInst); } String(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, const StringArg& arg5, HINSTANCE hInst = GetModuleHInst()) { Format(stringTableID, arg1, arg2, arg3, arg4, arg5, hInst); } String(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, const StringArg& arg5, const StringArg& arg6, HINSTANCE hInst = GetModuleHInst()) { Format(stringTableID, arg1, arg2, arg3, arg4, arg5, arg6, hInst); } String(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, const StringArg& arg5, const StringArg& arg6, const StringArg& arg7, HINSTANCE hInst = GetModuleHInst()) { Format(stringTableID, arg1, arg2, arg3, arg4, arg5, arg6, arg7, hInst); } String(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, const StringArg& arg5, const StringArg& arg6, const StringArg& arg7, const StringArg& arg8, HINSTANCE hInst = GetModuleHInst()) { Format(stringTableID, arg1, arg2, arg3, arg4, arg5, arg6, arg7, arg8, hInst); } String(WORD 
stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, const StringArg& arg5, const StringArg& arg6, const StringArg& arg7, const StringArg& arg8, const StringArg& arg9, HINSTANCE hInst = GetModuleHInst()) { Format(stringTableID, arg1, arg2, arg3, arg4, arg5, arg6, arg7, arg8, arg9, hInst); } CString& Format(WORD stringTableID, HINSTANCE hInst = GetModuleHInst()); CString& Format(WORD stringTableID, const StringArg& arg1, HINSTANCE hInst = GetModuleHInst()); CString& Format(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, HINSTANCE hInst = GetModuleHInst()); CString& Format(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, HINSTANCE hInst = GetModuleHInst()); CString& Format(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, HINSTANCE hInst = GetModuleHInst()); CString& Format(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, const StringArg& arg5, HINSTANCE hInst = GetModuleHInst()); CString& Format(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, const StringArg& arg5, const StringArg& arg6, HINSTANCE hInst = GetModuleHInst()); CString& Format(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, const StringArg& arg5, const StringArg& arg6, const StringArg& arg7, HINSTANCE hInst = GetModuleHInst()); CString& Format(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, const StringArg& arg5, const StringArg& arg6, const StringArg& arg7, const StringArg& arg8, HINSTANCE hInst = GetModuleHInst()); CString& Format(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, const StringArg& arg5, const StringArg& 
arg6, const StringArg& arg7, const StringArg& arg8, const StringArg& arg9, HINSTANCE hInst = GetModuleHInst()); private: void CentralFormat(WORD stringTableID, std::vector<const StringArg*>& args, HINSTANCE hInst); }; ``` Finally, the implementation (hopefully it's OK to post this much on StackOverflow, although the bulk of it is very simple): ``` StringArg::StringArg(LPCWSTR val) { m_strVal = val; } StringArg::StringArg(const CString& val) { m_strVal = (LPCWSTR)val; } StringArg::StringArg(int val, LPCWSTR formatSpec) { if(NULL == formatSpec) formatSpec = L"%d"; //GLOK m_strVal.Format(formatSpec, val); } StringArg::StringArg(size_t val, LPCWSTR formatSpec) { if(NULL == formatSpec) formatSpec = L"%u"; //GLOK m_strVal.Format(formatSpec, val); } StringArg::StringArg(WORD val, LPCWSTR formatSpec) { if(NULL == formatSpec) formatSpec = L"%u"; //GLOK m_strVal.Format(formatSpec, val); } StringArg::StringArg(DWORD val, LPCWSTR formatSpec) { if(NULL == formatSpec) formatSpec = L"%u"; //GLOK m_strVal.Format(formatSpec, val); } StringArg::StringArg(__int64 val, LPCWSTR formatSpec) { if(NULL == formatSpec) formatSpec = L"%I64d"; //GLOK m_strVal.Format(formatSpec, val); } StringArg::StringArg(double val, LPCWSTR formatSpec) { if(NULL == formatSpec) formatSpec = L"%f"; //GLOK m_strVal.Format(formatSpec, val); } CString StringArg::ToString() const { return m_strVal; } void String::CentralFormat(WORD stringTableID, std::vector<const StringArg*>& args, HINSTANCE hInst) { size_t argsCount = args.size(); _ASSERT(argsCount < 10); //code below assumes a single character position indicator CString tmp; HINSTANCE hOld = AfxGetResourceHandle(); AfxSetResourceHandle(hInst); BOOL b = tmp.LoadString(stringTableID); AfxSetResourceHandle(hOld); if(FALSE == b) { #ifdef _DEBUG //missing string resource, or more likely a bad stringID was used -- tell someone!! CString s; s.Format(L"StringID %d could not be found! %s", stringTableID, hInst == ghCommonHInst ? 
L"CommonHInst was passed in" : L"CommonHInst was NOT passed in"); //GLOK ::MessageBeep(MB_ICONHAND); ::MessageBeep(MB_ICONEXCLAMATION); ::MessageBeep(MB_ICONHAND); _ASSERT(0); ::MessageBox(NULL, s, L"DEBUG Error - Inform Development", MB_ICONSTOP | MB_OK | MB_SERVICE_NOTIFICATION); //GLOK } #endif //_DEBUG CString::Format(L"(???+%d)", stringTableID); //GLOK return; } //check for the degenerate case if(0 == argsCount) { CString::operator=(tmp); return; } GetBuffer(tmp.GetLength() * 3); //pre-allocate space ReleaseBuffer(0); LPCWSTR pStr = tmp; while(L'\0' != *pStr) { bool bSkip = false; if(L'{' == *pStr) { //is this an incoming string position? //we only support 10 args, so the next char must be a number if(wcschr(L"0123456789", *(pStr + 1))) //GLOK { if(L'}' == *(pStr + 2)) //and closing brace? { bSkip = true; //this is a replacement size_t index = *(pStr + 1) - L'0'; _ASSERT(index < argsCount); _ASSERT(index >= 0); if((index >= 0) && (index < argsCount)) CString::operator+=(args[index]->ToString()); else { //bad positional index CString msg; msg.Format(L"(??-%d)", index); //GLOK CString::operator+=(msg); } pStr += 2; //get past the two extra characters that we skipped ahead and peeked at } } } if(false == bSkip) CString::operator+=(*pStr); pStr++; } } CString& String::Format(WORD stringTableID, HINSTANCE hInst) { std::vector<const StringArg*> args; CentralFormat(stringTableID, args, hInst); return *this; } CString& String::Format(WORD stringTableID, const StringArg& arg1, HINSTANCE hInst) { std::vector<const StringArg*> args; args.push_back(&arg1); CentralFormat(stringTableID, args, hInst); return *this; } CString& String::Format(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, HINSTANCE hInst) { std::vector<const StringArg*> args; args.push_back(&arg1); args.push_back(&arg2); CentralFormat(stringTableID, args, hInst); return *this; } CString& String::Format(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, 
HINSTANCE hInst) { std::vector<const StringArg*> args; args.push_back(&arg1); args.push_back(&arg2); args.push_back(&arg3); CentralFormat(stringTableID, args, hInst); return *this; } CString& String::Format(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, HINSTANCE hInst) { std::vector<const StringArg*> args; args.push_back(&arg1); args.push_back(&arg2); args.push_back(&arg3); args.push_back(&arg4); CentralFormat(stringTableID, args, hInst); return *this; } CString& String::Format(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, const StringArg& arg5, HINSTANCE hInst) { std::vector<const StringArg*> args; args.push_back(&arg1); args.push_back(&arg2); args.push_back(&arg3); args.push_back(&arg4); args.push_back(&arg5); CentralFormat(stringTableID, args, hInst); return *this; } CString& String::Format(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, const StringArg& arg5, const StringArg& arg6, HINSTANCE hInst) { std::vector<const StringArg*> args; args.push_back(&arg1); args.push_back(&arg2); args.push_back(&arg3); args.push_back(&arg4); args.push_back(&arg5); args.push_back(&arg6); CentralFormat(stringTableID, args, hInst); return *this; } CString& String::Format(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, const StringArg& arg5, const StringArg& arg6, const StringArg& arg7, HINSTANCE hInst) { std::vector<const StringArg*> args; args.push_back(&arg1); args.push_back(&arg2); args.push_back(&arg3); args.push_back(&arg4); args.push_back(&arg5); args.push_back(&arg6); args.push_back(&arg7); CentralFormat(stringTableID, args, hInst); return *this; } CString& String::Format(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, const StringArg& arg5, const 
StringArg& arg6, const StringArg& arg7, const StringArg& arg8, HINSTANCE hInst) { std::vector<const StringArg*> args; args.push_back(&arg1); args.push_back(&arg2); args.push_back(&arg3); args.push_back(&arg4); args.push_back(&arg5); args.push_back(&arg6); args.push_back(&arg7); args.push_back(&arg8); CentralFormat(stringTableID, args, hInst); return *this; } CString& String::Format(WORD stringTableID, const StringArg& arg1, const StringArg& arg2, const StringArg& arg3, const StringArg& arg4, const StringArg& arg5, const StringArg& arg6, const StringArg& arg7, const StringArg& arg8, const StringArg& arg9, HINSTANCE hInst) { std::vector<const StringArg*> args; args.push_back(&arg1); args.push_back(&arg2); args.push_back(&arg3); args.push_back(&arg4); args.push_back(&arg5); args.push_back(&arg6); args.push_back(&arg7); args.push_back(&arg8); args.push_back(&arg9); CentralFormat(stringTableID, args, hInst); return *this; } ```
Look at the [boost format library](http://www.boost.org/doc/libs/1_37_0/libs/format/index.html).
String.Format for C++
[ "", "c++", "localization", "string.format", "printf", "" ]
I have the following code (adapted from an example given in [Dive Into Python](http://diveintopython.net/file_handling/file_objects.html#d0e14928)) that reads the entire contents of a file into a buffer. ``` buffer = "" try: file = open(postFileName, 'rU') try: # Read the entire POST log file into a buffer buffer += file.read() finally: file.close() except IOError: buffer += "The POST file could not be opened." ``` What's bothering me about this code is the inner try/finally block without an except block. Do I need an except block in there? Can the call to `read()` fail after the call to `open()` was successful? I know the try-except-finally is now unified, so adding one, at least syntactically, is not a problem. If I add an except block, under what conditions will it be executed, and how do I write a test to make sure it runs under those conditions? Additionally, if I *don't* need an except block, then why do I need the inner try/finally block at all?
I find that finally blocks are often overused. The file close (and a few other similar patterns) is so important that Python 3.0 will have a **with** statement just to cover this base in a slightly less obscure way. * Do I need an except with a finally? That hits on the confusing nature of this specific example, and why they added the **with** statement. The **finally** does "no matter what" cleanup. Exception or no exception, the **finally** is always executed. * Can the call to read() fail after the call to open() was successful? All OS calls, all I/O calls (almost everything) can raise an exception. All kinds of bad things can happen after open and before read. * If I add an **except** block, under what conditions will it be executed? Read up on files. There are lots of goofy I/O errors that can occur between open and read. Also, read up on the built-in exceptions. <https://docs.python.org/2/library/exceptions.html> * How do I write a test to make sure it runs under those conditions? You'll need a mock file object. This object will respond to `open` but raise an `IOError` or `OSError` on every `read`. * If I don't need an except block, then why do I need the inner try/finally block at all? Cleanup. The **finally** will be executed no matter what exception is raised. Try this. See what it does. ``` try: raise OSError("hi mom") finally: print "Hmmm" ```
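As a concrete sketch of that mock-file test, assuming we wrap the question's logic in a function so the `open` call can be injected (`FailingFile` and `read_post_file` are invented names for this example, not library APIs):

```python
class FailingFile:
    """Hypothetical file stand-in: opens fine, but every read() fails."""
    def read(self):
        raise IOError("simulated disk error")

    def close(self):
        pass

def read_post_file(open_func, name):
    # Same shape as the question's code, with open() injectable for testing.
    buffer = ""
    try:
        f = open_func(name)
        try:
            buffer += f.read()
        finally:
            f.close()          # runs even though read() raised
    except IOError:
        buffer += "The POST file could not be opened."
    return buffer

print(read_post_file(lambda name: FailingFile(), "post.log"))
# prints: The POST file could not be opened.
```

The test exercises exactly the path in question: `open` succeeds, `read` raises, the `finally` still closes the file, and the outer `except IOError` catches the error.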
I disagree with the other answers mentioning unifying the try / except / finally blocks. That would change the behaviour, as you wouldn't want the finally block to try to close the file if the open failed. The split blocks are correct here (though it may be better using the new "`with open(filename,'rU') as f`" syntax instead). There are reasons the read() could fail. For instance the data could be too big to fit into memory, or the user may have signalled an interrupt with control-C. Those cases won't be caught by the IOError, but are left to be handled (or not) by the caller who may want to do different things depending on the nature of the application. However, the code does still have an obligation to clean up the file, even where it doesn't deal with the error, hence the finally without the except.
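For reference, here is a minimal sketch of that `with` form (using mode `'r'` rather than the question's Python 2 era `'rU'`); the file contents and temp-file plumbing are just scaffolding for the example:

```python
import os
import tempfile

# Create a small file to read back.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("some POST data")

# 'with' closes the file when the block exits, whether or not an
# exception occurred, replacing the inner try/finally around close().
buffer = ""
try:
    with open(path, "r") as f:
        buffer += f.read()
except IOError:
    buffer += "The POST file could not be opened."

os.remove(path)
print(buffer)  # prints: some POST data
```

The outer try/except is still there to handle a failed `open` (or a failed `read`), but the explicit close bookkeeping is gone.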
How do you test a file.read() error in Python?
[ "", "python", "file-io", "error-handling", "" ]
What I want to do is trigger a function from an extension/GM Script once a page in FireFox reloads/refreshes... Picture this: 1. I go to a webpage that asks for a username and password. 2. My code populates these values and submits the credentials 3. The webpage reloads and brings up a page asking me to enter a pre-decided number 4. I enter the number 5. The webpage reloads and brings up a page asking me to select an option from a dropdown 6. I select ... you get the idea. I figured I wanted to write some JavaScript to do all this.. and **since persistence would be required and I don't have the capability to change the source site, I thought of writing a FireFox extension or GreaseMonkey - basically anything on the client side**. Something like how the event DOMContentReloaded would have acted (had it existed): addEventListener("DOMContentReloaded", pageReloaded, false); Typical test cases for such code would be to: 1. Find out how much time it took between page refreshes 2. Wait for the second instance of page refresh to happen and then redirect to another page, etc. **All this would be done from a FireFox extension (or GreaseMonkey in case a solution in GM would be easier/better/recommended) - given this, things should be easy?**
*I've updated my answer to reflect your updated question below.* As [rjk](https://stackoverflow.com/questions/432925/actions-on-page-reload-refresh-in-javascript#432941) mentioned, you can use the `onbeforeunload` event to perform an action when the page refreshes. Here is a solution that should work with some potential issues I'll explain below: ``` // Just some cookie utils from: http://www.quirksmode.org/js/cookies.html var Cookie = { create: function(name, value, days) { if (days) { var date = new Date(); date.setTime(date.getTime() + (days * 24 * 60 * 60 * 1000)); var expires = "; expires=" + date.toGMTString(); } else var expires = ""; document.cookie = name + "=" + value + expires + "; path=/"; }, read: function(name) { var nameEQ = name + "="; var ca = document.cookie.split(';'); for (var i = 0; i < ca.length; i++) { var c = ca[i]; while (c.charAt(0) == ' ') { c = c.substring(1, c.length); } if (c.indexOf(nameEQ) == 0) { return c.substring(nameEQ.length, c.length); } } return null; }, erase: function(name) { createCookie(name, "", -1); } } window.addEventListener('beforeunload', pageClosed, false); window.addEventListener('load', pageOpened, false); function pageOpened() { var timestampString = Cookie.read('closeTester'); if (timestampString) { var timestamp = new Date(timestampString); var temp = new Date(); temp.setMinutes(temp.getMinutes() - 1, temp.getSeconds() - 30); // If this is true, the load is a re-load/refresh if (timestamp > temp) { var counter = Cookie.read('counter'); if (counter) { counter++; Cookie.create('counter', counter); } else { Cookie.create('counter', 1); } } Cookie.erase('closeTester'); } } function pageClosed() { Cookie.create('closeTester', new Date()); } ``` What this does is create a temporary cookie when the page unloads. The cookie stores a timestamp of the current time. When the page is loaded, that cookie is read, and the timestamp checked to see how old it is. If it is within 30 seconds, it will increment the counter. 
I chose 30 seconds because I don't know how data intensive your site is or how long it takes to load. If your site is speedy, I'd change it to 5-10 seconds so that the script is more accurate. To do that, change the number of seconds to 50-55 seconds and you will get a 10 second or a 5 second window respectively. This will only keep track of the reloads while the browser is kept open. Once it is closed, the count is lost. You can change that by adding an expiration to the '`count`' cookie. Because the timestamp cookie is only maintained while the browser is open, this script is fairly trustworthy since it won't count closing and reopening the browser. The only case where you could have problems is if the user has a tab open, and then closes the tab and re-opens it within the window of time you specify. All of this is done without a firefox extension and will work on any browser except IE (unless you correct the event hander to work with it). I don't know how to do this using a firefox extension, although it's possible there may be a better way using an extension. ## UPDATE Now that I understand what you're trying to do a little better, here are a few things that may be helpful: The script I've included above (obviously) is specifically for tracking refreshes. However, it can also be used to just track navigation also. In the condition that checks '`if (timestamp > temp)`' you can call another function that will perform some action (the action will only be performed when the page is refreshed, etc). If you want persistent data, you just need to store it in a cookie, like I do above. If you don't need to keep a count of the pages you can store some other info in that cookie. I've never created a greasemonkey script before, but I'm assuming that since it can reference elements in the DOM, it can also reference the document's cookies. This would allow you to store persistent data for the site using greasemonkey by just using the code I've included above. 
If you can't access the DOM cookie, you can use [Greasemonkey's `GM_setValue()` and `GM_getValue()` functions](http://diveintogreasemonkey.org/advanced/gm_getvalue.html) to store persistent data. This data will be stored across browser sessions, though, as far as I know. You'll have to put some sentinel values in to ensure that it only works across page loads (something like the timestamp example I used above).

As for jQuery, it is a general-purpose JavaScript library. I don't know how useful it is for Greasemonkey scripts, although I'd assume it would work if you used it in a script. If you want to get started with jQuery, check out [their documentation](http://docs.jquery.com/Main_Page). It's really well done. The only parts of my example that can really use jQuery effectively are the event-handling parts. Here is how you'd do the event handling in jQuery:

```
$().ready(pageOpened);
$().unload(pageClosed);
```

Replace the '`window.addEventListener()`' calls with those 2 lines above and you've got a cross-browser implementation. Since you're using a Greasemonkey script, however, the jQuery API becomes unnecessary unless you want to do DOM manipulation, which jQuery is very good at.
I do not think there is any way of preserving your JavaScript state through a page refresh in the browser; the page simply loads it all again. Maybe you can start with `window.onbeforeunload` to override the default behavior of the reload button and use AJAX to reload some main div.
Actions on page reload/refresh in JavaScript
[ "", "javascript", "" ]
Is there a shortcut to add an event method for a control? If I have a button, I want to add the Click method without having to type it out or switch to design view. EDIT: Seriously! When I did this in VB I had a drop-down list of all the controls and their events. Is that so hard to do for C#?
Winforms? Webforms? What? One option is to (after initialization) hook the event yourself - IntelliSense supplies the event name, and [Tab][Tab] creates the method stub - i.e.

```
public MyForm() {
    InitializeComponent();
    someButton.Click += (press [tab][tab] now)
}
```

and it does the rest... likewise in web-forms at the appropriate place. This gives you:

```
public MyForm() {
    InitializeComponent();
    someButton.Click += new EventHandler(someButton_Click);
}

void someButton_Click(object sender, EventArgs e)
{
    throw new NotImplementedException(); // your code here ;-p
}
```
Have you looked into creating a snippet? Here is a snippet I use to create anonymous methods that hook up to an event. ``` <?xml version="1.0" encoding="utf-8" ?> <CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet"> <CodeSnippet Format="1.0.0"> <Header> <Title>anonymous</Title> <Shortcut>__anonymous</Shortcut> <Description>Code snippet for an anonymous method</Description> <Author>Andrew</Author> <SnippetTypes> <SnippetType>Expansion</SnippetType> </SnippetTypes> </Header> <Snippet> <Declarations> <Literal> <ID>event</ID> <Default>base.Init</Default> <ToolTip>Event to attach</ToolTip> </Literal> <Literal> <ID>args</ID> <Default>EventArgs</Default> <ToolTip>Event argument type</ToolTip> </Literal> <Literal> <ID>name</ID> <Default>args</Default> <ToolTip>Event arg instance name</ToolTip> </Literal> </Declarations> <Code Language="csharp"><![CDATA[$event$ += delegate(Object sender, $args$ $name$) { $end$ };]]> </Code> </Snippet> </CodeSnippet> </CodeSnippets> ``` Also [here is an article](http://msdn.microsoft.com/en-us/library/ms165392(VS.80).aspx) explaining how to create them and how they work.
Create event method for control shortcut - Visual Studio
[ "", "c#", "visual-studio-2008", "" ]
I just want to send SMS from my web application in PHP. Can anyone tell me how to do this? What do I need to do for this?
I don't know if this applies to you, but what I have done many times to save myself the money is ask the user in his profile what his carrier is, then tried matching it with [`this list`](http://en.wikipedia.org/wiki/List_of_carriers_providing_SMS_transit). Essentially, many/most carriers have an email address connected to a phone number that will easily let you send texts to the number. For example, if you have AT&T and your phone number is 786-262-8344, an email to 7862628344@txt.att.net will send you a text message with the subject/body of the email, free of charge. This technique will pretty much cover all of your US users for free. Obviously, depending on the needs of your application this may not be possible/adequate/desired, but it is an option to be aware of.
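The carrier-gateway idea boils down to a simple address lookup. Here is a minimal sketch in Python (the gateway domains below are commonly published examples and do change over time, so verify them; the function name is made up):

```python
# Email-to-SMS gateway domains for a few US carriers. These are commonly
# published examples; carriers change them, so verify before relying on them.
GATEWAYS = {
    "att": "txt.att.net",
    "verizon": "vtext.com",
    "tmobile": "tmomail.net",
}

def sms_address(number, carrier):
    """Build the email address that the carrier relays to the phone as a text."""
    digits = "".join(ch for ch in number if ch.isdigit())
    return "%s@%s" % (digits, GATEWAYS[carrier])

print(sms_address("786-262-8344", "att"))  # 7862628344@txt.att.net
```

Sending is then just an ordinary email to that address, e.g. via PHP's mail() or any SMTP client.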
Your main option for sending SMS messages is using an existing SMS provider. In my experience (which is extensive with SMS messaging web applications), you will often find that negotiating with different providers is the best way to get the best deal for your application. Different providers often offer different services, and different features. My favourite provider, and indeed, the one that has happily negotiated with me for lower rates in the past, is TM4B (<http://www.tm4b.com>). These guys have excellent rates, cover a huge proportion of the globe, and have excellent customer service. Below is some code extracted (and some parts obfuscated) from one of my live web applications, for sending a simple message via their API: ``` require_once("tm4b.lib.php"); $smsEngine = new tm4b(); // Prepare the array for sending $smsRequest["username"] = "YOURUNAME"; $smsRequest["password"] = "YOURPWORD"; $smsRequest["to"] = "+441234554443"; $smsRequest["from"] = "ME!"; $smsRequest["msg"] = "Hello, test message!"; // Do the actual sending $smsResult = $smsEngine->ClientAPI($smsRequest); // Check the result if( $smsResult['status'] == "ok" ) { print "Message sent!"; } else { print "Message not sent."; } ``` Many other providers that I've used in the past, have very similar interfaces, and all are really competitive when it comes to pricing. You simply have to look around for a provider that suits your needs. In regard to cost, you're looking at prices ranging from a few pence/cents for most Western countries (prices are a little bit higher for most third-world countries, though, so beware). Most providers you will have to pay in bulk, if you want decent rates from them, but they'll often negotiate with you for 'smaller-than-usual' batches. Most providers do offer a post-pay option, but only when you've successfully completed a few transactions with them... others offer it from the start, but the prices are extortionate.
SMS from web application
[ "", "php", "sms", "" ]
What is the correct way to distinguish between `F1` and, for example, `CTRL`+`F1` or `SHIFT`+`CTRL`+`F1` within a KeyListener registered behind, say, a JButton?

```
public void keyPressed(KeyEvent event) {
    int key = event.getKeyCode();
    logger.debug("KeyBoard pressed char(" + event.getKeyChar() + ") code (" + key + ")");
}
```

.. always gives me 112 for `F1`, 113 for `F2` and so on. I understand that I can handle it by tracking keyPressed()/keyReleased() for `CTRL` / `SHIFT` / `ALT` / etc. on my own, but I hope that there is a better way. Many many thanks!!!
The solution lies in the parent of KeyEvent (InputEvent):

1. Use the isAltDown(), isControlDown(), and isShiftDown() methods, or
2. Use the getModifiers() method.
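Checking getModifiers() is plain bit-testing. A quick sketch of the idea in Python (the mask values 1/2/4/8 mirror Java's classic InputEvent SHIFT/CTRL/META/ALT constants; the helper function is hypothetical):

```python
# Bitmask values mirroring Java's classic InputEvent modifier constants.
SHIFT_MASK, CTRL_MASK, META_MASK, ALT_MASK = 1, 2, 4, 8

def describe_keystroke(key_name, modifiers):
    """Turn a key name plus a modifier bitmask into a label like CTRL+F1."""
    parts = [name for name, mask in (("SHIFT", SHIFT_MASK),
                                     ("CTRL", CTRL_MASK),
                                     ("ALT", ALT_MASK))
             if modifiers & mask]
    parts.append(key_name)
    return "+".join(parts)

print(describe_keystroke("F1", 0))                       # F1
print(describe_keystroke("F1", CTRL_MASK))               # CTRL+F1
print(describe_keystroke("F1", SHIFT_MASK | CTRL_MASK))  # SHIFT+CTRL+F1
```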
`KeyEvent`s are probably a bit low-level when dealing with a Swing widget. Instead go through `InputMap` and `ActionMap`.
Java: handling combined keyboard input
[ "", "java", "event-handling", "keyboard", "" ]
How can I set a parameter of a sub-report? I have successfully hooked myself up to the SubreportProcessing event, I can find the correct sub-report through e.ReportPath, and I can add datasources through e.DataSources.Add. But I find no way of adding report parameters?? I have found people suggesting to add them to the master report, but I don't really want to do that, since the master report shouldn't have to be connected to the sub-report at all, other than that it is wrapping the sub-report. I am using one report as a master template, printing name of the report, page numbers etc. And the subreport is going to be the report itself. And if I could only find a way to set those report parameters of the sub-report I would be good to go... ***Clarification:*** Creating/Defining the parameters is not the problem. The problem is to set their values. I thought the natural thing to do was to do it in the SubreportProcessing event. And the SubreportProcessingEventArgs do in fact have a Parameters property. But it is read only! So how do you use that? How can I set their value?
After looking and looking, I have come to the conclusion that setting the parameters of a sub-report in code is not possible, unless you do something fancy like editing the XML of the report definition before you load it. (But if someone else should know that I am wrong, please do answer, because I am still very curious to know!)
It does work but it sure is persnickety. First thing I recommend is to develop your reports as .rdl. Much easier to test the reports this way. You can also get the subreport parameters set up and tested as rdl, making sure each parameter of the subreport is also defined as a parameter of the parent report. Once you get the reports - including the subreports - working that way, you can rename the .rdl to .rdlc and add the .rdlc files to your ReportViewer project. No further changes required. Use the names of the rdl datasources as the data source names in your code to provide data to the report in the SubreportProcessing event handler. You don't assign values to the passed parameters. The subreport will use them as is. (Sounds like the step you are missing is adding the parameters to the parent report as well as the subreport, as mentioned above.) You can evaluate the parameters and use them as query parameters to get the datasource you will add. You have to think about the datasource like it's on an undiscovered dimension for a subreport. You will have to poke around while debugging in the event handler to see what I mean. Some of the values in your application will be readily available; others that you use easily elsewhere will throw object-not-found exceptions. For example, I create a dataset in an instance of a class created on my application's main form. I use the dataset throughout my application. In the SubreportProcessing event handler I cannot use the common dataset, so I must create a new instance of the table I need for the report and populate it. In the main report I would be able to access the common dataset. There are other limitations like this. Just have to wade your way through. Here is the SubreportProcessing event handler from a working VB.NET ReportViewer application. Shows a few different ways to get the datasource for a subreport. 
subreport1 builds a one-row DataTable from application business objects, subreport2 provides data the report requires without a parameter, and subreport3 is like subreport2 but evaluates one of the parameters passed to the subreport for use in a date value required by the query that creates the ReportDataSource.

```
Public Sub SubreportProcessingEventHandler(ByVal sender As Object, _
        ByVal e As SubreportProcessingEventArgs)

    Select Case e.ReportPath
        Case "subreport1"
            Dim tbl As DataTable = New DataTable("TableName")
            Dim Status As DataColumn = New DataColumn
            Status.DataType = System.Type.GetType("System.String")
            Status.ColumnName = "Status"
            tbl.Columns.Add(Status)
            Dim Account As DataColumn = New DataColumn
            Account.DataType = System.Type.GetType("System.String")
            Account.ColumnName = "Account"
            tbl.Columns.Add(Account)
            Dim rw As DataRow = tbl.NewRow()
            rw("Status") = core.GetStatus
            rw("Account") = core.Account
            tbl.Rows.Add(rw)
            e.DataSources.Add(New ReportDataSource("ReportDatasourceName", tbl))
        Case "subreport2"
            core.DAL.cnStr = My.Settings.cnStr
            core.DAL.LoadSchedule()
            e.DataSources.Add(New ReportDataSource("ScheduledTasks", _
                              My.Forms.Mother.DAL.dsSQLCfg.tSchedule))
        Case "subreport3"
            core.DAL.cnStr = My.Settings.cnStr
            Dim dt As DataTable = core.DAL.GetNodesForDateRange(DateAdd("d", _
                                  -1 * CInt(e.Parameters("NumberOfDays").Values(0)), _
                                  Today), _
                                  Now)
            e.DataSources.Add(New ReportDataSource("Summary", dt))
    End Select
End Sub
```
Microsoft Reporting: Setting subreport parameters in code
[ "", "c#", "reporting-services", "parameters", "reporting", "subreport", "" ]
Is there any way I can create a NOT IN clause like I would have in SQL Server in *Linq to Entities*?
If you are using an in-memory collection as your filter, it's probably best to use the negation of Contains(). Note that this can fail if the list is too long, in which case you will need to choose another strategy (see below for using a strategy for a fully DB-oriented query). ``` var exceptionList = new List<string> { "exception1", "exception2" }; var query = myEntities.MyEntity .Select(e => e.Name) .Where(e => !exceptionList.Contains(e.Name)); ``` If you're excluding based on another database query using `Except` might be a better choice. (Here is a [link](http://msdn.microsoft.com/en-us/library/bb738638.aspx) to the supported Set extensions in LINQ to Entities) ``` var exceptionList = myEntities.MyOtherEntity .Select(e => e.Name); var query = myEntities.MyEntity .Select(e => e.Name) .Except(exceptionList); ``` This assumes a complex entity in which you are excluding certain ones depending some property of another table and want the names of the entities that are not excluded. If you wanted the entire entity, then you'd need to construct the exceptions as instances of the entity class such that they would satisfy the default equality operator (see [docs](http://msdn.microsoft.com/en-us/library/bb348938.aspx)).
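Both strategies map onto ordinary membership and set-difference operations. A quick Python sketch of the same idea (sample data invented):

```python
names = ["alpha", "exception1", "beta", "exception2", "gamma"]
exception_list = ["exception1", "exception2"]

# Strategy 1: negate a membership test, like !exceptionList.Contains(e.Name).
kept = [n for n in names if n not in exception_list]
print(kept)  # ['alpha', 'beta', 'gamma']

# Strategy 2: set difference, like the Except() extension method.
# As with Except, this drops duplicates and ordering.
kept_set = set(names) - set(exception_list)
print(sorted(kept_set))  # ['alpha', 'beta', 'gamma']
```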
Try: ``` from p in db.Products where !theBadCategories.Contains(p.Category) select p; ``` What's the SQL query you want to translate into a Linq query?
"NOT IN" clause in LINQ to Entities
[ "", "c#", "linq-to-entities", "" ]
OK, this may be a dumb question but here goes. I noticed something the other day when I was playing around with different HTML to PDF converters in PHP. One I tried (dompdf) took forever to run on my HTML. Eventually it ran out of memory and ended, but while it was still running, none of my other PHP scripts were responsive at all. It was almost as if that one request was blocking the entire web server. Now I'm assuming either that can't be right or I should be setting something somewhere to control that behaviour. Can someone please clue me in?
Did you have open sessions for each of the scripts? :) They might reuse the same session, and that blocks until the session is freed by the last request... so they basically wait for each other to complete (in your case, for the long-running PDF generator). This only applies if you use the same browser. Tip: not sure why you want HTML to PDF, but you may take a look at FOP <http://xmlgraphics.apache.org/fop/> to generate PDFs. I'm using it and it works great... and fast :) It does have its quirks though.
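The reason the requests queue up is that PHP's default file-based session handler holds an exclusive lock on the session file for the length of each request. Here is a toy model of that effect in Python (this simulates the lock with a thread mutex; it is not PHP):

```python
import threading
import time

session_lock = threading.Lock()  # stands in for the per-session file lock
finish_order = []

def handle_request(name, work_seconds):
    # Every request for the same session must take the lock first, so one
    # long-running script serializes all the others behind it.
    with session_lock:
        time.sleep(work_seconds)
        finish_order.append(name)

slow = threading.Thread(target=handle_request, args=("slow-pdf-export", 0.5))
fast = threading.Thread(target=handle_request, args=("quick-page", 0.0))
slow.start()
time.sleep(0.1)   # the quick request arrives while the slow one holds the lock
fast.start()
slow.join()
fast.join()
print(finish_order)  # ['slow-pdf-export', 'quick-page']
```

In real PHP, calling session_write_close() as soon as a script is done with the session releases the lock and lets queued requests proceed.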
It could be that all the scripts you tried are running in the same application pool. (At least, that's what it's called in IIS.) However, another explanation is that some browsers will queue requests over a single connection. This has caused me some confusion in the past. If your web browser is waiting for a response from yourdomain.com/script1.php and you open another window or tab to yourdomain.com/script2.php, that request won't be sent until the first request receives a reply, making it *seem* like your entire web server is hanging. An easy way to test if this is what's going on is to try two requests in two separate browsers.
PHP/Apache blocking on each request?
[ "", "php", "apache", "" ]
So say I have a `products` table with a list of products, and one of the fields is `price`. This table also has an `id` field. When someone makes a purchase I insert a row into another table called `purchases`. This table also has a field called `price` and a `productid` field that maps to the products table `id` field. I want to know if I can get the price from my products table and insert it into the purchases table all in one SQL query. Is this possible in MySQL? Thanks!
Sure. In my normal practice I'd just be sending the price as part of the insert, but if you truly need to extract it from the product table at that point, you can do it with a subselect, like: ``` INSERT INTO `purchase` SET `product_id` = $whatever, `price` = ( SELECT `price` FROM `product` WHERE `id` = $whatever ) ```
``` INSERT INTO purchases ( price ,productid ,add your other columns here ) SELECT price ,id ,add your other columns here FROM products WHERE add your selection criteria here ```
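The same single-statement INSERT ... SELECT pattern can be tried out quickly with Python's sqlite3 module (table layout borrowed from the question, prices invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (id INTEGER PRIMARY KEY, price REAL);
    CREATE TABLE purchases (productid INTEGER, price REAL);
    INSERT INTO products (id, price) VALUES (1, 9.99), (2, 24.50);
""")

# One statement: read the price out of products while inserting the purchase row.
conn.execute(
    "INSERT INTO purchases (productid, price) "
    "SELECT id, price FROM products WHERE id = ?", (2,))

print(conn.execute("SELECT productid, price FROM purchases").fetchall())
# [(2, 24.5)]
```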
Is it possible to take a value from a row in one table in my MySQL database and insert into another table in the same database?
[ "", "sql", "mysql", "" ]
I was wondering if there are any times when it's advantageous to use an IEnumerator over a foreach loop for iterating through a collection? For example, is there any time when it would be better to use either of the following code samples over the other?

```
IEnumerator<MyClass> classesEnum = myClasses.GetEnumerator();
while(classesEnum.MoveNext())
    Console.WriteLine(classesEnum.Current);
```

instead of

```
foreach (var item in myClasses)
    Console.WriteLine(item);
```
First, note that one big difference in your example (between `foreach` and `GetEnumerator`) is that `foreach` guarantees to call `Dispose()` on the iterator if the iterator is `IDisposable`. This is important for many iterators (which might be consuming an external data feed, for example). Actually, there are cases where `foreach` isn't as helpful as we'd like. First, there is the "first item" case discussed [here (foreach / detecting first iteration)](https://stackoverflow.com/questions/420774/foreach-with-generic-list-detecting-first-iteration-when-using-value-type#420786). But more; if you try writing the missing `Zip` method for stitching two enumerable sequences together (or the `SequenceEqual` method), you find that you can't use `foreach` on both sequences, since that would perform a cross-join. You need to use the iterator directly for one of them:

```
static IEnumerable<T> Zip<T>(this IEnumerable<T> left, IEnumerable<T> right)
{
    using (var iter = right.GetEnumerator())
    {
        // consume everything in the first sequence
        foreach (var item in left)
        {
            yield return item;
            // and add an item from the second sequence each time (if we can)
            if (iter.MoveNext())
            {
                yield return iter.Current;
            }
        }
        // any remaining items in the second sequence
        while (iter.MoveNext())
        {
            yield return iter.Current;
        }
    }
}

static bool SequenceEqual<T>(this IEnumerable<T> left, IEnumerable<T> right)
{
    var comparer = EqualityComparer<T>.Default;
    using (var iter = right.GetEnumerator())
    {
        foreach (var item in left)
        {
            if (!iter.MoveNext()) return false; // first is longer
            if (!comparer.Equals(item, iter.Current)) return false; // item different
        }
        if (iter.MoveNext()) return false; // second is longer
    }
    return true; // same length, all equal
}
```
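The shape of that Zip method (drive one sequence with foreach while pulling the other through its iterator by hand) translates directly to other languages; here is the same stitching logic sketched as a Python generator:

```python
def zip_stitch(left, right):
    """Interleave two iterables: one item from left, then one from right,
    then any leftovers from whichever sequence is longer."""
    it = iter(right)
    for item in left:           # consume everything in the first sequence
        yield item
        try:
            yield next(it)      # and add an item from the second each time (if we can)
        except StopIteration:
            pass
    for item in it:             # any remaining items in the second sequence
        yield item

print(list(zip_stitch([1, 3], [2, 4, 5, 6])))  # [1, 2, 3, 4, 5, 6]
print(list(zip_stitch("abc", "")))             # ['a', 'b', 'c']
```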
According to the [C# language spec](http://msdn.microsoft.com/en-us/vcsharp/aa336809.aspx):

> A foreach statement of the form

```
foreach (V v in x) embedded-statement
```

> is then expanded to:

```
{
    E e = ((C)(x)).GetEnumerator();
    try {
        V v;
        while (e.MoveNext()) {
            v = (V)(T)e.Current;
            embedded-statement
        }
    }
    finally {
        ... // Dispose e
    }
}
```

The essence of your two examples is the same, but there are some important differences, most notably the `try`/`finally`.
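Python's for statement has an analogous expansion, which makes the comparison concrete; a rough sketch (ignoring details like the loop's else clause):

```python
def manual_for(iterable, body):
    """Roughly what `for v in iterable: body(v)` expands to."""
    it = iter(iterable)          # like GetEnumerator()
    while True:
        try:
            v = next(it)         # like MoveNext() + Current combined
        except StopIteration:
            break
        body(v)

collected = []
manual_for([1, 2, 3], collected.append)
print(collected)  # [1, 2, 3]
```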
When should I use IEnumerator for looping in c#?
[ "", "c#", "iterator", "ienumerable", "loops", "" ]
What is the most efficient Javascript/AJAX toolkit?
Choose the library that makes the most sense to you idiomatically. The differences in efficiency are going to become less and less important as two things happen. 1. [Browsers are getting much better at interpreting Javascript.](http://ejohn.org/blog/javascript-performance-rundown/) 2. [Most major Javascript libraries are planning to adopt a single selector engine, Sizzle](http://ajaxian.com/archives/a-great-example-of-sharing-sizzle-engine-in-dojo-foundation)
jQuery seems pretty popular at the moment, and is lightweight. * <http://jquery.com/> Their API is well constructed and designed, and the resulting code tends to be very concise. Some may find it TOO concise - matter of taste. On larger projects I sometimes end up using YUI - it's a lot more heavyweight, but for a large codebase I find it easier to read something a little more explicit. * <http://developer.yahoo.com/yui/> Really, it's a bit of a subjective question; most efficient will depend on what makes the most sense to your coding style, what you're trying to do, and what you're interacting with. Best of luck!
Most efficient javascript/AJAX toolkit?
[ "", "javascript", "ajax", "" ]
I am writing a simple web app using Linq to Sql as my data layer, as I like Linq2Sql very much. I have been reading a bit about DDD and TDD lately and wanted to give them a shot. First and foremost, it strikes me that Linq2Sql and DDD don't go along too great. My other problem is finding tests; I actually find it very hard to define good tests, so I wanted to ask: what are your best techniques for discovering good test cases?
Test Case discovery is more of an art than a science. However simple guidelines include: * Code that you know to be frail / weak / likely to break * Follow the user scenario (what your user will be doing) and see how it will touch your code (often this means Debugging it, other times profiling, and other times it simply means thinking about the scenario) - whatever points in your code get touched by the user, those are the highest priority to write tests against. * During your own development the tests you ran that resulted in bugs you found - write tests to avoid the code regressing again with the same behavior. There are several books on how to write test cases out there, but unless you are working in a large organization that requires documented test cases, your best bet is to think of all the parts in your code that you don't like (that aren't "pure") and make sure you can test those modules thoroughly.
Well, going by the standard interpretation of TDD is that the tests *drive* your development. So, in essence you start with the test. It will fail, and you will write code until that test passes. So it's kind of driven by your requirements, however you go about gathering those. You decide what your app/feature needs to do, write the test, then code until it passes. Of course, there are many other techniques but this is just a brief statement about what is typically thought in the TDD world.
TDD, What are your techniques for finding good tests?
[ "", "c#", "linq-to-sql", "tdd", "domain-driven-design", "" ]
I have a JAR file for authorization. I need it for each of my WAR files. All the WAR files are packaged in an EAR file. Do I have to repeat this common JAR in every WAR, or is there a structure for common libraries? So my example looks something like this... ``` big.ear - META-INF - MANIFEST.MF - application.xml - appl1.war - META-INF - MANIFEST.MF - WEB-INF - web.xml - lib - unique1.jar - unique2.jar - unique3.jar - common1.jar - jsps/html/etc - appl2.war - META-INF - MANIFEST.MF - WEB-INF - web.xml - lib - unique3.jar - unique4.jar - common1.jar - jsps/html/etc - appl3.war - META-INF - MANIFEST.MF - WEB-INF - web.xml - lib - unique5.jar - common1.jar - jsps/html/etc ``` Each of my WAR applications can see common1.jar, but it is in the EAR three times. Where in the EAR structure could I put common1.jar so that appl1, appl2, and appl3 could see it without repeating it three times?
The standard way is to put the JARs at the root of your EAR and reference them in the `Class-Path` attribute of the WARs' `META-INF/MANIFEST.MF`. See [this article](http://java.sun.com/j2ee/verified/packaging.html). Check your container's documentation to make sure it is supported.
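With the layout from the question, that means keeping a single copy of common1.jar at the root of big.ear and giving each WAR a manifest entry along these lines (sketch; the path is relative to the WAR's location in the EAR):

```
Class-Path: common1.jar
```

Each of the three WARs carries the same one-line entry, so the EAR holds one copy of the JAR instead of three.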
It’s in the JEE5 spec, chapter EE 8.2.1: > A .ear file may contain a directory that contains libraries packaged > in JAR files. The library-directory element of the .ear file’s > deployment descriptor contains the name of this directory. If a > library-directory element isn’t specified, or if the .ear file does > not contain a deployment descriptor, the directory named **lib** is > used.
Do common JARs have to be repeated across WARs in an EAR?
[ "", "java", "jakarta-ee", "enterprise-library", "" ]
I've been working with Web Start for a couple of years now and have experience with signing the jars and whatnot. I am taking my first attempt at deploying an RCP app with Web Start, and though I have in fact signed all of the jars with the same certificate I keep getting this error: 'jar resources in jnlp are not signed by the same certificate' Has anyone else come across this? If so, any ideas on how to fix it?
When I had similar problems after checking the jars it turned out that some 3rd party jar was signed by someone else. You should create a separate jnlp file for the jars signed by the other certificate and read this jnlp from your jnlp file: ``` <resources> ... <extension name="other" href="other.jnlp"/> </resources> ``` [Here](http://mindprod.com/jgloss/jnlp.html#MULTIPLE) or [here](http://www.java.net/blogs/kirillcool/archive/deployment/index.html) you can find an example.
The following script lists the serial number of the RSA certificate in each jar in the /some/lib directory and helps to find jars that are signed by the wrong certificate:

```
for f in $( find /some/lib -type f -name '*.jar' )
do
    serial=$( unzip -p $f 'META-INF/*.RSA' | openssl pkcs7 -inform der -print -noout | grep --max-count=1 serialNumber | cut -d: -f2- | tr -d ' ' )
    printf "%40s: %s\n" "$serial" "$f"
done
```
jar resources in jnlp are not signed by the same certificate
[ "", "java", "rcp", "java-web-start", "jnlp", "" ]
I would like to be able to match a string literal with the option of escaped quotations. For instance, I'd like to be able to search "this is a 'test with escaped\' values' ok" and have it properly recognize the backslash as an escape character. I've tried solutions like the following: ``` import re regexc = re.compile(r"\'(.*?)(?<!\\)\'") match = regexc.search(r""" Example: 'Foo \' Bar' End. """) print match.groups() # I want ("Foo \' Bar") to be printed above ``` After looking at this, there is a simple problem that the escape character being used, "`\`", can't be escaped itself. I can't figure out how to do that. I wanted a solution like the following, but negative lookbehind assertions need to be fixed length: ``` # ... re.compile(r"\'(.*?)(?<!\\(\\\\)*)\'") # ... ``` Any regex gurus able to tackle this problem? Thanks.
I think this will work: ``` import re regexc = re.compile(r"(?:^|[^\\])'(([^\\']|\\'|\\\\)*)'") def check(test, base, target): match = regexc.search(base) assert match is not None, test+": regex didn't match for "+base assert match.group(1) == target, test+": "+target+" not found in "+base print "test %s passed"%test check("Empty","''","") check("single escape1", r""" Example: 'Foo \' Bar' End. """,r"Foo \' Bar") check("single escape2", r"""'\''""",r"\'") check("double escape",r""" Example2: 'Foo \\' End. """,r"Foo \\") check("First quote escaped",r"not matched\''a'","a") check("First quote escaped beginning",r"\''a'","a") ``` The regular expression `r"(?:^|[^\\])'(([^\\']|\\'|\\\\)*)'"` is forward matching only the things that we want inside the string: 1. Chars that aren't backslash or quote. 2. Escaped quote 3. Escaped backslash EDIT: Add extra regex at front to check for first quote escaped.
## re\_single\_quote = r"`'[^'\\]*(?:\\.[^'\\]*)*'"` First note that MizardX's answer is 100% accurate. I'd just like to add some additional recommendations regarding efficiency. Secondly, I'd like to note that this problem was solved and optimized long ago - See: [Mastering Regular Expressions (3rd Edition)](https://rads.stackoverflow.com/amzn/click/com/0596528124 "By Jeffrey Friedl. Best book on Regex - ever!"), (which covers this specific problem in great detail - *highly* recommended). First let's look at the sub-expression to match a single quoted string which may contain escaped single quotes. If you are going to allow escaped single quotes, you had better at least allow escaped-escapes as well (which is what Douglas Leeder's answer does). But as long as you're at it, its just as easy to allow escaped-anything-else. With these requirements. MizardX is the only one who got the expression right. Here it is in both short and long format (and I've taken the liberty to write this in `VERBOSE` mode, with lots of descriptive comments - which you should *always* do for non-trivial regexes): ``` # MizardX's correct regex to match single quoted string: re_sq_short = r"'((?:\\.|[^\\'])*)'" re_sq_long = r""" ' # Literal opening quote ( # Capture group $1: Contents. (?: # Group for contents alternatives \\. # Either escaped anything | [^\\'] # or one non-quote, non-escape. )* # Zero or more contents alternatives. ) # End $1: Contents. ' """ ``` This works and correctly matches all the following string test cases: ``` text01 = r"out1 'escaped-escape: \\ ' out2" test02 = r"out1 'escaped-quote: \' ' out2" test03 = r"out1 'escaped-anything: \X ' out2" test04 = r"out1 'two escaped escapes: \\\\ ' out2" test05 = r"out1 'escaped-quote at end: \'' out2" test06 = r"out1 'escaped-escape at end: \\' out2" ``` Ok, now lets begin to improve on this. First, the order of the alternatives makes a difference and one should always put the most likely alternative first. 
In this case, non escaped characters are more likely than escaped ones, so reversing the order will improve the regex's efficiency slightly like so: ``` # Better regex to match single quoted string: re_sq_short = r"'((?:[^\\']|\\.)*)'" re_sq_long = r""" ' # Literal opening quote ( # $1: Contents. (?: # Group for contents alternatives [^\\'] # Either a non-quote, non-escape, | \\. # or an escaped anything. )* # Zero or more contents alternatives. ) # End $1: Contents. ' """ ``` ## "Unrolling-the-Loop": This is a little better, but can be further improved (significantly) by applying Jeffrey Friedl's *"unrolling-the-loop"* efficiency technique (from [MRE3](https://rads.stackoverflow.com/amzn/click/com/0596528124 "Mastering Regular Expressions (3rd Edition)")). The above regex is not optimal because it must painstakingly apply the star quantifier to the non-capture group of two alternatives, each of which consume only one or two characters at a time. This alternation can be eliminated entirely by recognizing that a similar pattern is repeated over and over, and that an equivalent expression can be crafted to do the same thing without alternation. Here is an optimized expression to match a single quoted string and capture its contents into group `$1`: ``` # Better regex to match single quoted string: re_sq_short = r"'([^'\\]*(?:\\.[^'\\]*)*)'" re_sq_long = r""" ' # Literal opening quote ( # $1: Contents. [^'\\]* # {normal*} Zero or more non-', non-escapes. (?: # Group for {(special normal*)*} construct. \\. # {special} Escaped anything. [^'\\]* # More {normal*}. )* # Finish up {(special normal*)*} construct. ) # End $1: Contents. ' """ ``` This expression gobbles up all non-quote, non-backslashes (the vast majority of most strings), in one "gulp", which drastically reduces the amount of work that the regex engine must perform. How much better you ask? 
Well, I entered each of the regexes presented from this question into [RegexBuddy](http://www.regexbuddy.com/ "An excellent tool for crafting and debugging regular expressions") and measured how many steps it took the regex engine to complete a match on the following string (which all solutions correctly match): `'This is an example string which contains one \'internally quoted\' string.'` Here are the benchmark results on the above test string: ``` r""" AUTHOR SINGLE-QUOTE REGEX STEPS TO: MATCH NON-MATCH Evan Fosmark '(.*?)(?<!\\)' 374 376 Douglas Leeder '(([^\\']|\\'|\\\\)*)' 154 444 cletus/PEZ '((?:\\'|[^'])*)(?<!\\)' 223 527 MizardX '((?:\\.|[^\\'])*)' 221 369 MizardX(improved) '((?:[^\\']|\\.)*)' 153 369 Jeffrey Friedl '([^\\']*(?:\\.[^\\']*)*)' 13 19 """ ``` These steps are the number of steps required to match the test string using the RegexBuddy debugger function. The "NON-MATCH" column is the number of steps required to declare match failure when the closing quote is removed from the test string. As you can see, the difference is significant for both the matching and non-matching cases. Note also that these efficiency improvements are only applicable to a NFA engine which uses backtracking (i.e. Perl, PHP, Java, Python, Javascript, .NET, Ruby and most others.) A DFA engine will not see any performance boost by this technique (See: [Regular Expression Matching Can Be Simple And Fast](http://swtch.com/~rsc/regexp/regexp1.html)). ## On to the complete solution: The goal of the original question (my interpretation), is to pick out single quoted sub-strings (which may contain escaped quotes) from a larger string. If it is known that the text outside the quoted sub-strings will never contain escaped-single-quotes, the regex above will do the job. 
However, to correctly match single-quoted sub-strings within a sea of text swimming with escaped-quotes and escaped-escapes and escaped-anything-elses (which is my interpretation of what the author is after), you do not actually need to parse from the beginning of the string (this is what I originally thought); it can be achieved using MizardX's very clever `(?<!\\)(?:\\\\)*` expression. Here are some test strings to exercise the various solutions: ``` test01 = r"out1 'escaped-escape: \\ ' out2" test02 = r"out1 'escaped-quote: \' ' out2" test03 = r"out1 'escaped-anything: \X ' out2" test04 = r"out1 'two escaped escapes: \\\\ ' out2" test05 = r"out1 'escaped-quote at end: \'' out2" test06 = r"out1 'escaped-escape at end: \\' out2" test07 = r"out1 'str1' out2 'str2' out2" test08 = r"out1 \' 'str1' out2 'str2' out2" test09 = r"out1 \\\' 'str1' out2 'str2' out2" test10 = r"out1 \\ 'str1' out2 'str2' out2" test11 = r"out1 \\\\ 'str1' out2 'str2' out2" test12 = r"out1 \\'str1' out2 'str2' out2" test13 = r"out1 \\\\'str1' out2 'str2' out2" test14 = r"out1 'str1''str2''str3' out2" ``` Given this test data, let's see how the various solutions fare ('p'==pass, 'XX'==fail): ``` r""" AUTHOR/REGEX 01 02 03 04 05 06 07 08 09 10 11 12 13 14 Douglas Leeder p p XX p p p p p p p p XX XX XX r"(?:^|[^\\])'(([^\\']|\\'|\\\\)*)'" cletus/PEZ p p p p p XX p p p p p XX XX XX r"(?<!\\)'((?:\\'|[^'])*)(?<!\\)'" MizardX p p p p p p p p p p p p p p r"(?<!\\)(?:\\\\)*'((?:\\.|[^\\'])*)'" ridgerunner p p p p p p p p p p p p p p r"(?<!\\)(?:\\\\)*'([^'\\]*(?:\\.[^'\\]*)*)'" """ ``` ## A working test script: ``` import re data_list = [ r"out1 'escaped-escape: \\ ' out2", r"out1 'escaped-quote: \' ' out2", r"out1 'escaped-anything: \X ' out2", r"out1 'two escaped escapes: \\\\ ' out2", r"out1 'escaped-quote at end: \'' out2", r"out1 'escaped-escape at end: \\' out2", r"out1 'str1' out2 'str2' out2", r"out1 \' 'str1' out2 'str2' out2", r"out1 \\\' 'str1' out2 'str2' out2", r"out1 \\ 'str1' out2 'str2' out2", r"out1 \\\\ 'str1' out2 'str2' out2", r"out1 \\'str1' out2 'str2' out2", r"out1 \\\\'str1' out2 'str2' out2", r"out1 'str1''str2''str3' out2", ] regex = re.compile( r"""(?<!\\)(?:\\\\)*'([^'\\]*(?:\\.[^'\\]*)*)'""", re.DOTALL) data_cnt = 0 for data in data_list: data_cnt += 1 print ("\nData string %d" % (data_cnt)) m_cnt = 0 for match in regex.finditer(data): m_cnt += 1 if (match.group(1)): print(" quoted sub-string%3d = \"%s\"" % (m_cnt, match.group(1))) ``` # Phew! p.s. Thanks to MizardX for the very cool `(?<!\\)(?:\\\\)*` expression. Learn something new every day!
Regex for managing escaped characters for items like string literals
[ "", "python", "regex", "" ]
I use the following code to create countdowns in Javascript. n is the number of times to repeat, freq is the number of milliseconds to wait before executing, funN is a function to call on each iteration (typically a function that updates part of the DOM) and funDone is the function to call when the countdown is complete. ``` function timer(n, freq, funN, funDone) { if(n == 0){ funDone(); }else{ setTimeout(function(){funN(n-1); timer(n-1, freq, funN, funDone);}, freq); } } ``` It can be called like so: ``` timer(10, 1000, /* 1 second */ function(n){console.log("(A) Counting: "+n);}, function() {console.log("(A) Done!");} ); timer(10, 500, function(n){console.log("(B) Counting: "+n);}, function() {console.log("(B) Done!");} ); ``` The advantage of this is that I can call timer() as many times as I want without worrying about global variables etc. Is there a better way to do this? Is there a clean way to make setInterval stop after a certain number of calls (without using global variables)? This code also creates a new lambda function with each call to setTimeout which seems like it could be problematic for large countdowns (I'm not sure how javascript's garbage collector handles this). Is there a better way to do this? Thanks.
I'd create an object that receives a counter and a function pointer to execute, something akin to the following pseudo code: ``` TimedIteration = function(interval, iterations, methodToRun, completedMethod){ var counter = iterations; var timerElapsed = methodToRun; //Link to timedMethod() method var completed = completedMethod; var onTimerElapsed = function(){ if (timerElapsed != null) timerElapsed(); timedMethod(); } var onComplete = function(){ if (completed != null) completed(); } var timedMethod = function(){ if ((counter != null)&&(counter > 0)) { setTimeout(onTimerElapsed, interval); counter--; } else onComplete(); } if (counter != null){ //Trip the initial iteration... timedMethod(); } } ``` obviously this is pseudo code, I've not tested it in an IDE and syntactically I'm not sure if it'll work as is [I'd be astonished if it does], but basically what you're doing is creating a wrapper object that receives a time interval, a number of iterations and a method to run each time the timer elapses. You'd then call this on your method to run like so: ``` function myMethod(){ doSomething(); } function doWhenComplete(){ doSomethingElse(); } new TimedIteration(1000, 10, myMethod, doWhenComplete); ```
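A closure-based variant (my own sketch, not the poster's code) keeps the countdown state out of globals and lets a single `setInterval` timer drive it, which also answers the original question about stopping `setInterval` after n calls:

```javascript
// Sketch: the countdown state lives in a closure, so no globals are
// needed and any number of countdowns can run side by side.
function makeCountdown(n, funN, funDone) {
  var remaining = n;
  return function tick() {
    if (remaining === 0) return false;   // already finished
    remaining--;
    funN(remaining);
    if (remaining === 0) {
      funDone();
      return false;                      // signal the driver to stop
    }
    return true;
  };
}

// Driving it with one repeating timer instead of a new lambda per step:
function runCountdown(n, freq, funN, funDone) {
  var tick = makeCountdown(n, funN, funDone);
  var id = setInterval(function () {
    if (!tick()) clearInterval(id);
  }, freq);
}
```

Because each call to `makeCountdown` gets its own `remaining` variable, you can run as many countdowns concurrently as you like, and only one timer callback exists per countdown.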
This is basically the same idea as @balabaster, but it is tested, uses prototype, and has a little more flexible interface. ``` var CountDownTimer = function(callback,n,interval) { this.initialize(callback,n,interval); } CountDownTimer.prototype = { _times : 0, _interval: 1000, _callback: null, constructor: CountDownTimer, initialize: function(callback,n,interval) { this._callback = callback; this.setTimes(n); this.setInterval(interval); }, setTimes: function(n) { if (n) this._times = n else this._times = 0; }, setInterval: function(interval) { if (interval) this._interval = interval else this._interval = 1000; }, start: function() { this._handleExpiration(this,this._times); }, _handleExpiration: function(timer,counter) { if (counter > 0) { if (timer._callback) timer._callback(counter); setTimeout( function() { timer._handleExpiration(timer,counter-1); }, timer._interval ); } } }; var timer = new CountDownTimer(function(i) { alert(i); },10); ... <input type='button' value='Start Timer' onclick='timer.start();' /> ```
Best way to have event occur n times?
[ "", "javascript", "" ]
Could you suggest an efficient way to identify a unique user with JavaScript (e.g. calculate a hash to send to the server-side)? EDIT: The point is that I can't "intrude" into the browser (e.g. send cookies). And IPs are also not the option. And it has to be a client-side solution (therefore JavaScript).
A common solution to this problem is to calculate a unique ID on the server side, then push a cookie to the browser containing that ID (checking, first, to see whether that cookie has already been defined for the current browser). Advertising networks use that technique fairly heavily to gather demographic information on web users. **EDIT** By the way, the "unique ID" can simply be the first session ID associated with the user. Many frameworks can supply that for you.
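If you do end up minting an identifier on the client, here is a minimal sketch of generating one (the function name and format are my own invention; how you persist and transmit the ID is the hard part and is left out):

```javascript
// Sketch: combine the current time with random bits so that two
// visitors (or two calls) are very unlikely to collide. This is an
// identifier, not a fingerprint: it cannot re-identify a returning
// user unless it is stored somewhere.
function generateClientId() {
  var time = new Date().getTime().toString(36);           // ms timestamp, base 36
  var rand = Math.floor(Math.random() * 0x7fffffff).toString(36);
  return time + '-' + rand;
}
```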
I upvoted Brian's answer but I'd like to add that the problem is identifying a unique user. Insisting that it be done in Javascript - which is stateless beyond the page level unless there is participation at the server level - just isn't a fruitful approach.
Identifying unique hits with JavaScript
[ "", "javascript", "unique", "hit", "" ]
[Problem 2b](http://sqlzoo.net/2b.htm) goes as follows: > 2b. For each subject show the first year that the prize was awarded. > > > nobel(yr, subject, winner) My solution was this: `SELECT DISTINCT subject, yr FROM nobel ORDER BY yr ASC;` Why isn't this working?
Your answer gets a row for every distinct combination of subject and year. The correct answer GROUPS BY the subject, and gets the MIN year per subject. Enough of a clue?
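For anyone who wants to check the idea concretely, here is a sketch using Python's `sqlite3` as a stand-in database (the table contents are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nobel (yr INTEGER, subject TEXT, winner TEXT)")
conn.executemany(
    "INSERT INTO nobel VALUES (?, ?, ?)",
    [
        (1901, "Physics", "Wilhelm Roentgen"),
        (1902, "Physics", "Hendrik Lorentz"),
        (1903, "Physics", "Marie Curie"),
        (1901, "Peace", "Henry Dunant"),
        (1906, "Peace", "Theodore Roosevelt"),
    ],
)

# DISTINCT subject, yr returns every (subject, yr) pair;
# GROUP BY with MIN collapses each subject to its first year.
rows = conn.execute(
    "SELECT subject, MIN(yr) FROM nobel GROUP BY subject ORDER BY subject"
).fetchall()
print(rows)   # [('Peace', 1901), ('Physics', 1901)]
```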
You could do it a different way without using GROUP BY or MIN: ``` select distinct subject, yr from nobel x where yr <= all (select yr from nobel y where y.subject = x.subject) ``` but it's definitely more work.
Why can't I get the right answer in this SQLzoo tutorial?
[ "", "sql", "mysql-5.0", "" ]
I have a very large app, 1.5 million lines of C++, which is currently MFC based using the Document/View architecture. The application includes a lot of 3d vector graphics, spreadsheets, and very many dialogs and windows. Within the constraints of the DVA it is fairly well written, in that there is no significant program logic in the user interface, and anything that can be done using the user interface can also be carried out programmatically using a COM/OLE Automation interface. At the request of a number of users, I have been toying with the idea of creating a browser interface to the program, where the program itself runs on a server. Thoughts so far are to convert all the COM interfaces to DCOM and rewrite/port the UI to Java. Initial experimentation shows that this will be a huge amount of work. Anyone else out there got any ideas for an easier implementation? Anyone come across any refactoring or similar tools specifically to help this kind of a port?
The short answer is that it is feasible, that you shouldn't use Java, and that it will be a considerable amount of work. A good few years ago (around the time of IE5) I was asked by a client to answer a similar question to this one. The application in question was a well structured three tier desktop application. The upshot of the study was that it is possible. The options considered were Java, and CGI, using either CORBA or COM/DCOM. Building a Java applet was considered, but ruled out because it wouldn't have been too different to the C++ desktop version. The approach taken was to take the backend tier and turn that into a server application sitting behind a cgi layer. The UI was largely re-written using what we now know as Ajax, i.e. Javascript and HTML. The UI was split between browser elements and server elements. I did consider writing a tool to convert documents, views and dialogs and wrapping these into a compatible format, however after much analysis it became obvious that it wasn't really feasible to do this because the MFC is actually quite closely coupled to the Win32 API, and itself. Depending on the number of dialogs it may be worth writing something to convert these. I did find that, even in a well structured system, a fair bit of code that should have been in the backend had leaked into the front end. If I were to do the same thing now, there are some excellent javascript libraries that would help, but I'd still take the same approach of doing as much as possible in the browser using Ajax, with possibly something like [qooxdoo](http://qooxdoo.org/) or [YUI](http://developer.yahoo.com/yui/). I'd also probably look at using [XUL](http://en.wikipedia.org/wiki/Xul) and moving at least most of the dialogs into the backend. The only thing that concerns me from your list of requirements is the 3d vector graphics, although there may be some mileage in [this JS toy](http://ctho.ath.cx/toys/3d.html) or [JS3D](http://www.wxs.ca/js3d/). 
This is only a brief summary, as I'm trying to avoid writing a dissertation.
Before considering to convert the MFC application to a web application, I suggest you to read "[Avoiding The Uncanny Valley of User Interface](https://blog.codinghorror.com/avoiding-the-uncanny-valley-of-user-interface/)" from Jeff Atwood. > If you're considering or actively building Ajax/RIA applications, you should consider the Uncanny Valley of user interface design. When you build a "desktop in the web browser"-style application, you're violating users' unwritten expectations of how a web application should look and behave. This choice may have significant negative impact on learnability, pleasantness of use, and adoption. I don't know what your application looks like and how well "web portable" it is, but maybe doing an exact copy of the application for the web isn't the best choice.
Is it feasible to convert a desktop based MFC C++ application to a web app
[ "", "c++", "mfc", "refactoring", "" ]
Why do we need to create custom exceptions in `.NET?`
Specific customs exceptions allow you to segregate different error types for your catch statements. The common construct for exception handling is this: ``` try {} catch (Exception ex) {} ``` This catches *all* exceptions regardless of type. However, if you have custom exceptions, you can have separate handlers for each type: ``` try {} catch (CustomException1 ex1) { //handle CustomException1 type errors here } catch (CustomException2 ex2) { //handle CustomException2 type errors here } catch (Exception ex) { //handle all other types of exceptions here } ``` Ergo, specific exceptions allow you a finer level of control over your exception handling. This benefit is shared not only by custom exceptions, but all other exception types in the .NET system libraries as well.
I did a lengthy blog post on this subject recently: <https://learn.microsoft.com/en-us/archive/blogs/jaredpar/custom-exceptions-when-should-you-create-them> The crux of it comes down to: Only create a custom exception if one of the following are true 1. You actually expect someone to handle it. 2. You want to log information about a particular error
Why Create Custom Exceptions?
[ "", "c#", ".net", "exception", "" ]
I need to display something such as below. ## Type A 1. Type A Item 1 2. Type A Item 2 3. Type A Item 3 ## Type B 1. Type B Item 1 2. Type B Item 2 3. Type B Item 3 ## Type C 1. Type C Item 1 2. Type C Item 2 3. Type C Item 3 All of the data comes from a dataset with columns 'Type' and 'ItemName'. Now an easy solution would be to create a repeater for each 'Type' and filter the data set by each 'Type'. However, I would like to just use one repeater and the problem with that is I need some way to recognize when each new section has a header. ``` <div>[Header][Item]</div> [Header] = <b>Type A</b><br /> [Item] = Type A Item 1 ``` So only the first item of each Type gets a header item and all others are set to an empty string. Does anyone know any tricks that could help?
Instead of creating a repeater for each type, how about a nested repeater? <http://www.codeproject.com/KB/aspnet/AspNetNestedRepeaters.aspx> Try that out. You still will need more than one repeater, but in this case it'd only be two that you need, and you wouldn't have to make one for each type (as you were fearing) but rather one for types, and one nested within it for items. The link above should give you a good starting point to investigate further.
Ok, I just came up with a solution. Though it is pretty dirty. I created a global variable to track the headers. The variable is a list of strings. OnItemDatabind I check if the header item is in the global list. If the item doesn't exist, I add it to the list and display the header. Otherwise, the header item is empty. So far this works, but I don't know how I feel about it. Edit: I am wondering if another control may work better at this...
Repeater for display sets
[ "", "c#", "asp.net", "repeater", "" ]
How to detect IIS version using C#? Update: I meant from a winapp (actually the scenario is developing a custom installer that wants to check the version of the installed IIS to call the appropriate api's)
You can get this information from the `SERVER_SOFTWARE` variable. It will return the following: `Microsoft-IIS/5.0 (Windows 2000)` `Microsoft-IIS/5.1 (Windows XP)` `Microsoft-IIS/6.0 (Windows 2003 Server)` etc. If you're using ASP.NET, you can get this string via ``` Request.ServerVariables["SERVER_SOFTWARE"]; ``` EDIT: It seems that you will have to query the registry to get this information. Take a look at [this page](http://www.codeproject.com/KB/cs/iisdetection.aspx) to see how.
Found the answer here: [link text](http://forums.iis.net/p/1162404/1923867.aspx#1923867) The fileVersion method doesn't work on Windows 2008; the inetserv exe is somewhere else, I guess. ``` public Version GetIisVersion() { using (RegistryKey componentsKey = Registry.LocalMachine.OpenSubKey(@"Software\Microsoft\InetStp", false)) { if (componentsKey != null) { int majorVersion = (int)componentsKey.GetValue("MajorVersion", -1); int minorVersion = (int)componentsKey.GetValue("MinorVersion", -1); if (majorVersion != -1 && minorVersion != -1) { return new Version(majorVersion, minorVersion); } } return new Version(0, 0); } } ``` I tested it, and it works perfectly on Windows XP, 7 and 2008
How to detect IIS version using C#?
[ "", "c#", "iis", "" ]
I realise this might be relatively niche, but maybe that's why this is good to ask anyway. I'm looking at a hardware multiple input recording console (such as the Alesis IO 26) to take in an ADAT Lightpipe 8 channel input to do signal processing. As I have yet to acquire the device and need to work out whether this is feasible (budgetary concerns), I'd like to ask if anyone has any experience tapping all 8 of these inputs for data in Java? I've seen tons of examples of recording sound using the javax.sound.sampled libraries, but I couldn't find any information on multichannel input beyond 2 channels. What I'm interested in is sampling the 8 channels individually as mono sources to perform some simple DSP on them. Would the hardware device be treated as one mixer? And the 8 channels be represented as 8 lines? I hope I got my terminology right. I would appreciate if someone can point me to any relevant information. Thanks!
Multi-channel audio is supposed to be possible in Java depending on the version of Java you are running, the platform you are running on, and the type of soundcard you are running. See the excellent [Java Audio Faq](http://www.jsresources.org/faq_audio.html) For more details. I've never been able to use this reliably across all platforms. If you really care about doing robust multichannel audio, I would recommend using a C API, such as [PortAudio](http://www.portaudio.com/), [Juce](http://www.rawmaterialsoftware.com/juce/), or [PulseAudio](http://www.pulseaudio.org/). I've had excellent luck with PortAudio and Juce both. To use these in Java you would need to create a [JNI](http://en.wikipedia.org/wiki/Java_Native_Interface) interface to the C APIs. This obviously is more work than just using javax.sound, but it gives you a lot more control over the audio hardware you are using. In my experience dealing with audio devices, being too far removed from the hardware (i.e. 4 layers of abstraction instead of 2) often removes your ability to do serious multi-channel, high bit depth, high sample rate audio.
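To find out how a given interface shows up on your machine, you can enumerate what the Java Sound API reports (a sketch; the class and method names here are my own). A multichannel interface may surface either as one mixer with many target lines or as several stereo/mono mixers, which is exactly what you need to inspect before committing to javax.sound:

```java
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Mixer;
import java.util.ArrayList;
import java.util.List;

// Sketch: list every mixer the Java Sound API exposes on this machine.
public class MixerSurvey {
    public static List<String> mixerNames() {
        List<String> names = new ArrayList<>();
        for (Mixer.Info info : AudioSystem.getMixerInfo()) {
            names.add(info.getName());   // list may be empty on headless machines
        }
        return names;
    }

    public static void main(String[] args) {
        for (String name : mixerNames()) {
            System.out.println(name);
        }
    }
}
```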
I have implemented something similar with Terratec cards cascaded to 32 inputs; the only library at the time was JSyn. It's a really good library once you get to know it: http://www.softsynth.com/jsyn I am not sure if there is anything else that can help.
How to sample multi-channel sound input in Java
[ "", "java", "audio-recording", "javasound", "" ]
I got the code of a former employee. There are many calls to methods like: ``` foo(val,...); ``` where ``` void foo(String s,...) { ... } ``` and val is an int. Of course, I get an error. As a workaround I pass ""+val to foo. I wonder if there is a better way.
Depending on the different types that are supposed to be passed as parameters, you could either accept an object and call .toString() ``` void foo(Object o){ String s = o.toString(); ... } ``` or overload foo for specific types ``` void foo(String s) { ... } void foo(int i){ foo(Integer.toString(i)); } ```
String.valueOf(val) is much faster.
Automatic .toString() when calling a method?
[ "", "java", "" ]
In a similar vein to the C# extension topic on SO, it would be great if we could pull together a whole host of useful jQuery function extensions. **Please note: the idea is to have short snippets of code, not full blown well known plugins or UI widgets**
``` // Allows chainable logging // USAGE: $('#someDiv').hide().log('div hidden').addClass('someClass'); // Demo : http://jsbin.com/odeke jQuery.log = jQuery.fn.log = function (msg) { if ( window.console && window.console.log ) { console.log("%s: %o", msg, this); } return this; }; ```
You can use this to see if a selector exists: ``` $.exists = function(selector) { return $(selector).length > 0; } if($.exists('#mydiv')) { ... } ```
Post your short extension functions goodies for jQuery
[ "", "javascript", "jquery", "jquery-plugins", "" ]
Does anyone know a good way to do remote procedure calls in a Windows (non-.NET) environment? I can't find much information on how to do it, and MSDN only has the .NET version. Edit: Thanks for the answers so far. What I need it for is to communicate with a service on the same computer which will send progress reports back to the "client". The reason I'm interested in RPC is because of Vista's UAC and how services can't talk to normal apps unless they use RPC or pipes. Looking into pipes, they seem to be entirely text based, and I was under the impression that RPC can pass strongly typed values across. I will look into DCOM as well.
If you are only interested in talking between processes on the same machine, [boost::interprocess](http://www.boost.org/doc/libs/1_37_0/doc/html/interprocess.html) is a cool way of getting a channel for them to talk through. More windows specific solutions is a [shared memory mapped file](http://msdn.microsoft.com/en-us/library/aa365574(VS.85).aspx#base.using_a_file_mapping_for_ipc) and system global mutexes/signals or [named pipes](http://msdn.microsoft.com/en-us/library/aa365590(VS.85).aspx). [boost::serialize](http://www.boost.org/doc/libs/1_37_0/libs/serialization/doc/index.html) and google [protocol buffers](http://code.google.com/intl/sv-SE/apis/protocolbuffers/) are ways of converting the data you send between the processes to binary strings that are less dependent on structure packing and other things that may differ between different executables. boost::interprocess, boost::serialize and protocol buffers should be platform independent so technically it could work on Linux/Mac as well!
[DCOM](http://en.wikipedia.org/wiki/Distributed_Component_Object_Model) has a remote procedure call mechanism based on DCE RPC. If you build your system as a COM component or put a COM wrapper over the API you want to expose, you could use this. Beyond that, you might want to expand on your question with some more insight into the specifics of the problem. I don't really have a handle on whether the problem has any aspects that might preclude the use of DCOM. An alternative approach would be to place a web service wrapper around the application. Web services (certainly those based on SOAP or XML-RPC) are really just an RPC mechanism using HTTP as a transport protocol.
remote procedure calls
[ "", "c++", "windows", "rpc", "" ]
I'm trying to effectively build a functional test suite for an applet, and I'm trying to find a good framework for it. In the past, when I wanted to design a test suite that would serve as both functional and load testing on an application, it has always been a web-based application, or at least some kind of service-based application, and I've used something like grinder to build up test scripts and use them to simulate users. With a Java applet, it's not clear to me what, if any, mechanism may exist for me to consume and run usage scripts against the GUI and thus simulate a user clicking on form controls. Does anyone have any experience with this?
Have a look at [FEST Swing](http://docs.codehaus.org/display/FEST/Swing+Module). It makes it easy to drive and test Swing GUIs. It supports applets.
We have had a big success testing applets using QuickTest Professional ([wikipedia link](http://en.wikipedia.org/wiki/HP_QuickTest_Professional)). We tested the applet both in its natural environment (browser) and using a specially built "cradle" which takes over the browser part and embed the applet in a JFrame (so we can test JavaScript input/output, start/stop, close the frame and look for leaks and activate generally hidden / forbidden interfaces). Disclosure: I'm a developer in HP which develops QuickTest Professional.
What is the best mechanism for testing applets?
[ "", "java", "testing", "applet", "automated-tests", "integration-testing", "" ]
If I have three classes, A, B, C. A and B are friends (bidirectionally). Also, B and C are friends (bidirectionally). A has a pointer to B and B has a pointer to C. Why can't A access C's private data through the pointer? Just to clarify: This is a pure theoretical C++ language question, not a design advice question.
### Friendship in C++ is not transitive: John is a friend of mine and he can use my wireless connection any time (I trust him). John's friend Tim though is a waster, and though John is my friend I do not include Tim as a friend, and thus I don't let him use my wireless connection. ### Friendship is NOT inherited Also, John's children are a bunch of hooligans, so I don't trust them either; they are definitely not my friends, nor are my own children, who I trust as far as I could throw them. Though our children cannot directly access the wireless, they can get access to it if they go through us. So John's children can access my wireless if they access it via John (i.e. they are supervised and **protected** by John). ### Also, friendship is not symmetric. John has a government job, so he unfortunately is not allowed to trust anyone, especially when it comes to wireless. ### You are always your own best friend. This allows things like copy constructors, where you can access the private members of another object of the same class even though no explicit friendship is declared. So I am also automatically friends with all my clones :-) as they are just other instances of myself.
Friendship in C++ is not transitive: ``` (A is friend of B) and (B is friend of C) does not mean (A is friend of C) ``` Also, friendship is not symmetric. ``` (A is friend of B) does not mean (B is friend of A) ``` You have to explicitly state that A is a friend of C to be able to access C's private stuff from within A. If adding a setter and getter to a class would expose information not meant to be exposed, you should consider using friend, provided the design itself isn't at fault (using friend is valid; it's not in itself a sign of bad design). If you can add a setter and getter without that being destructive to the interface, then you should avoid making other classes friends. Note that a nested class is always a friend of the nesting class. So a nested class can see the privates of the nesting class.
Friend scope in C++
[ "", "c++", "friend", "" ]
Is it possible to have Python save the `.pyc` files to a separate folder location that is in `sys.path`? ``` /code foo.py foo.pyc bar.py bar.pyc ``` To: ``` /code foo.py bar.py /code_compiled foo.pyc bar.pyc ``` I would like this because I feel it'd be more organized. Thanks for any help you can give me.
**Update:** In Python 3.8 the `-X pycache_prefix=PATH` command-line option enables writing `.pyc` files to a parallel tree rooted at the given directory instead of to the code tree. See [`$PYTHONPYCACHEPREFIX` envvar](https://docs.python.org/3/using/cmdline.html#envvar-PYTHONPYCACHEPREFIX). Credits: [@RobertT's answer](https://stackoverflow.com/a/53367609/4279). The location of the cache is reported in [`sys.pycache_prefix`](https://docs.python.org/3/library/sys.html#sys.pycache_prefix) (`None` indicates the default location in `__pycache__` [since Python 3.2] subdirectories). To turn off caching the compiled Python bytecode, `-B` may be set; then Python won’t try to write `.pyc` files on the import of source modules. See [`$PYTHONDONTWRITEBYTECODE` envvar](https://docs.python.org/3/using/cmdline.html#envvar-PYTHONDONTWRITEBYTECODE). Credits: [@Maleev's answer](https://stackoverflow.com/a/2985939/4279). --- Old [Python 2] answer: There is [PEP 304: Controlling Generation of Bytecode Files](http://www.python.org/dev/peps/pep-0304/). Its status is `Withdrawn` and the corresponding [patch](http://bugs.python.org/issue677103) was rejected. Therefore there might be no direct way to do it. If you don't need source code then you may just delete the `*.py` files; `*.pyc` files can be used as is or packed in an egg.
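A small self-contained check of the 3.8 behaviour (the paths here are temporary directories created just for the demo):

```python
import os
import subprocess
import sys
import tempfile

# Sketch: redirect compiled bytecode into a separate tree.
# Requires Python 3.8+ (sys.pycache_prefix / -X pycache_prefix).
code_dir = tempfile.mkdtemp(prefix="code")
cache_dir = tempfile.mkdtemp(prefix="code_compiled")

with open(os.path.join(code_dir, "foo.py"), "w") as f:
    f.write("VALUE = 42\n")

# Import foo in a child interpreter with the cache prefix set.
subprocess.run(
    [sys.executable, "-X", "pycache_prefix=" + cache_dir, "-c", "import foo"],
    cwd=code_dir,
    check=True,
)

# No __pycache__ next to the source; the .pyc landed under cache_dir.
print(os.path.exists(os.path.join(code_dir, "__pycache__")))   # False
pycs = [f for _, _, fs in os.walk(cache_dir) for f in fs]
print(pycs)   # e.g. ['foo.cpython-312.pyc']
```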
*In the dark and ancient days of 2003, PEP 304 came forth to challenge this problem. Its patch was found wanting. Environment variable platform dependencies and version skews ripped it to shreds and left its bits scattered across the wastelands.* *After years of suffering, a new challenger rose in the last days of 2009. Barry Warsaw summoned PEP 3147 and sent it to do battle, wielding a simple weapon with skill. The PEP crushed the cluttering PYC files, silenced the warring Unladen Swallow and CPython interpreter each trying to argue its PYC file should be triumphant, and allowed Python to rest easy with its dead ghosts occasionally running in the dead of night. PEP 3147 was found worthy by the dictator and was knighted into the official roles in the days of 3.2.* As of 3.2, Python stores a module's PYC files in `__pycache__` under the module's directory. Each PYC file contains the name and version of the interpreter, e.g., `__pycache__/foo.cpython-33.pyc`. You might also have a `__pycache__/foo.cpython-32.pyc` compiled by an earlier version of Python. The right magic happens: the correct one is used and recompiled if out of sync with the source code. At runtime, look at the module's `mymodule.__cached__` for the pyc filename and parse it with `imp.get_tag()`. See [the What's New section](http://docs.python.org/3/whatsnew/3.2.html#pep-3147-pyc-repository-directories) for more information. TL;DR - Just works in Python 3.2 and above. Poor hacks substitute for versions before that.
Way to have compiled python files in a separate folder?
[ "", "python", "file", "compiled", "" ]
I need to invoke Tesseract OCR (it's an open source library in C++ that does Optical Character Recognition) from a Java application server. Right now it's easy enough to run the executable using Runtime.exec(). The basic logic would be 1. Save the image that is currently held in memory to a file (a .tif) 2. Pass the image file name to the tesseract command line program. 3. Read in the output text file from Java using FileReader. How much improvement in terms of performance am I likely to get by writing a JNI wrapper for Tesseract? Unfortunately there is not an open source JNI wrapper that works in Linux. I would have to do it myself and am wondering whether the benefit is worth the development cost.
It's hard to say whether it would be worth it. If you assume that if done in-process via JNI, the OCR code can directly access the image data without having to write it to a file, then it would certainly eliminate any disk I/O constraints there. I'd recommend going with the simpler approach and only undertaking the JNI option if performance is not acceptable. At least then you'll be able to do some benchmarking and estimate the performance gains you might be able to realize.
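For the simpler approach, the exec-and-read-back flow can be sketched like this (the class name is mine, and `echo` stands in for the tesseract binary so the sketch runs on most Unix-like systems; in the real application you would write the TIFF, run tesseract on it, then read the .txt file it produces):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// Sketch of the save/exec/read-back loop from the question.
public class ExternalOcr {
    public static String run(String... command) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.redirectErrorStream(true);            // fold stderr into stdout
        Process p = pb.start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        p.waitFor();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.print(run("echo", "recognized text"));
    }
}
```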
If you do pursue your own wrapper, I recommend you check out [JNA](https://github.com/twall/jna/). It will allow you to call most "native" libraries writing only Java code, and will give you more help than does raw JNI to do it safely. JNA is available for most platforms.
Invoking via command line versus JNI
[ "", "java", "java-native-interface", "ocr", "tesseract", "" ]
Here are two questions related to modifying the data source for a strongly typed dataset connection string. When my app is deployed, a lightweight database (in the form of Microsoft Access) is deployed to the Application Data folder. I have a strongly typed dataset that is wrapped around that. So the question is how to change the following app.config code: ``` <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> </configSections> <connectionStrings> <add name="XTrace.Properties.Settings.Timer_DBConnectionString" connectionString="Provider=Microsoft.Jet.OLEDB.4.0;Data Source=&quot;|DataDirectory|\Timer DB.mdb&quot;;Persist Security Info=True" providerName="System.Data.OleDb" /> </connectionStrings> </configuration> ``` To make it read from the Application Data folder copy? I [read from SO](https://stackoverflow.com/questions/31449/dataset-connection-strings-in-vs-2008) and elsewhere that it's best to use app.config or web.config to manage the connection string. But since the app.config is compiled into the Windows Forms app, how is it supposed to provide the flexibility that allows one to change the connection string at deployment time? I am not talking about web apps though, because I understand that a web app distributes its web.config, and you can modify your connection string in that file.
When you deploy a Windows Forms application, you still have an app.config file: yourproject.exe is deployed alongside a yourproject.exe.config file, which *is* the app.config file. You can edit that file after deployment to change the connection string without recompiling.
Instead of modifying the .config file, you might be better off adding a new connection string property at runtime and using the TableAdapterManager. Please read more at: <http://rajmsdn.wordpress.com/2009/12/09/strongly-typed-dataset-connection-string/>
Modifying the Data Source for the Strongly Typed Dataset connection string
[ "", "c#", "configuration", "strongly-typed-dataset", "" ]
I have a statistics page which has a meta refresh tag on it. ``` <meta http-equiv="refresh" content="10" /> ``` How do I stop it forcing a refresh of the images in the page too? I've tried setting Cache control but the browser (IE7) still refreshes the 50+ images in the page. ``` Response.Cache.AppendCacheExtension("post-check=900,pre-check=3600"); ```
I solved this issue by using javascript to manually refresh the page rather than the meta tag. This stopped the browser refreshing all the images on every reload, but still forced the browser to refresh the ASPX page itself. ``` <script> setTimeout('document.location=document.location',10000) </script> ```
Maybe your IE is set to not cache images? How do you generate the images? Do they contain some kind of no-cache header?
How to force caching of image with meta refresh?
[ "", "c#", "asp.net", "html", "caching", "" ]
I was just reading over [this thread](https://stackoverflow.com/questions/186338/why-is-requireonce-so-bad-to-use) where the pros and cons of using `include_once` and `require_once` were being debated. From that discussion (particularly [Ambush Commander's answer](https://stackoverflow.com/questions/186338/why-is-requireonce-so-bad-to-use#194979)), I've taken away the fact(?) that any sort of include in PHP is inherently expensive, since it requires the processor to parse a new file into OP codes and so on. This got me to thinking. I have written a small script which will "roll" a number of Javascript files into one (appending all their contents into one file), such that it can be [packed](http://dean.edwards.name/packer/) to reduce HTTP requests and overall bandwidth usage. Typically for my PHP applications, I have one "includes.php" file which is included on each page, and that then includes all the classes and other libraries which I need. (I know this probably isn't the best practice, but it works; the `__autoload` feature of PHP5 is making this better in any case.) **Should I apply the same "rolling" technique to my PHP files?** I know the saying about premature optimisation being evil, but let's take this question as theoretical, ok?
That would depend somewhat on whether it is more work to parse several small files or to parse one big one. If you require files on an as-needed basis (not saying you necessarily *should* do things that way), then presumably for some execution paths there would be considerably less compilation required than if all your code was rolled into one big PHP file that the parser had to encode the entirety of, whether it was needed or not. In keeping with the question, this is thinking aloud more than expertise on the internals of the PHP runtime; in any case, it doesn't sound as though there is any real-world benefit to getting too involved with this at all. If you run into a serious slowdown in your PHP, I would be very surprised if the use of require\_once turned out to be the bottleneck.
There is a problem with Apache/PHP on Windows which causes an application to be extremely slow when loading or even touching too many files (a page which loads approx. 50-100 files may spend a few seconds on file operations alone). This problem appears both with including/requiring and with general file access (fopen, file\_get\_contents, etc.). So if you (or, more likely, anybody else, given the age of this post) ever run your app on Apache/Windows, reducing the number of loaded files is absolutely necessary. Combine more PHP classes into one file (an automated script for this would be useful; I haven't found one yet) or be careful not to touch any unneeded file in your app.
Rolling and packing PHP scripts
[ "", "php", "optimization", "" ]
In the following code I want to replace every occurrence of `"U.S.A"` with `"united states of America"` and every occurrence of `"uk"` with `"united kingdom"` in a string, but it does not seem to work. How do I fix it? ``` class Program { static void Main(string[] args) { string s = "the U.S.A love UK"; Console.WriteLine(replace(s)); } public static string replace(string s) { s = Regex.Replace(s, @"^U.S.A", " United state Of America"); s = Regex.Replace(s, @"^Uk", "United kingdom"); return s; } } ```
For simple replacements, you don't need regular expressions: ``` string s = "the U.S.A love UK"; s = s.Replace("U.S.A", "United States of America").Replace("UK", "United Kingdom"); ```
Well, look at the search pattern in your regex. The `^` has a specific meaning: it anchors the pattern to the start of the string, so `^U.S.A` can never match inside "the U.S.A love UK". (The unescaped `.` characters also aren't doing what you think; `.` matches any character, not a literal period, though that happens not to make the match fail in this case.)
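The question's code is C#, but both problems with the pattern are engine-independent. Here is a quick sketch of the effect of the anchor and of escaping the dots, using Python's `re` module for brevity:

```python
import re

s = "the U.S.A love UK"

# `^` anchors the pattern to the START of the string, and this string
# starts with "the", so nothing is replaced:
print(re.sub(r"^U\.S\.A", "United States of America", s))
# -> the U.S.A love UK

# Dropping the anchor and escaping the dots (an unescaped `.` matches
# ANY character, not a literal period) makes the replacement work:
print(re.sub(r"U\.S\.A", "United States of America", s))
# -> the United States of America love UK
```

The same two fixes (drop the `^`, escape the `.`) apply directly to the C# `Regex.Replace` calls in the question.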
Why is the regex replace method not working?
[ "", "c#", "regex", "" ]
Short of inserting a try/catch block in each worker thread method, is there a way to deal with unhandled exceptions on non-UI threads in Windows Forms? `Thread.GetDomain().UnhandledException` works great for catching the error, but by then it's too late to do anything about it (besides log it). After control passes out of your `UnhandledException` handler, the application will terminate. The best you can hope for is a generic Windows error that looks like this: ![http://i40.tinypic.com/2be98i.jpg](https://i.stack.imgur.com/8kxgl.jpg) All my research indicates that you *must* insert a try/catch block in the worker thread method, but I wanted to put this out there in case anyone had a different take. Thanks.
If you want to do something about the error before it hits UnhandledException, then you need a try/catch in the thread method. You should at least handle exceptions like FileNotFoundException here, where you can do something intelligent about it. If all else fails, you can use UnhandledException to cleanly handle anything you didn't expect (which, hopefully, is nothing).
Thread.GetDomain().UnhandledException devolves to [AppDomain.UnhandledException](http://msdn.microsoft.com/en-us/library/system.appdomain.unhandledexception.aspx), which would ordinarily be the same domain for all threads in your application; in other words, you only have to hook this event once, not once per thread. Unhandled exceptions in secondary threads will kill the thread. See [SafeThread](http://www.codeproject.com/KB/threads/SafeThread.aspx) for an alternative. Caveat: I am the author of the SafeThread article.
Unhandled, non-ui thread, exceptions in Windows Forms
[ "", "c#", ".net", "winforms", "multithreading", "" ]
Assume I have a form ``` class SampleClass(forms.Form): name = forms.CharField(max_length=30) age = forms.IntegerField() django_hacker = forms.BooleanField(required=False) ``` Is there a way for me to define css classes on each field such that I can then use jQuery based on class in my rendered page? I was hoping not to have to manually build the form.
Answered my own question. *Sigh* <http://docs.djangoproject.com/en/dev/ref/forms/widgets/#django.forms.Widget.attrs> I didn't realize it was passed into the widget constructor.
Yet another solution that doesn't require changes in python code and so is better for designers and one-off presentational changes: [django-widget-tweaks](https://github.com/kmike/django-widget-tweaks). Hope somebody will find it useful.
Define css class in django Forms
[ "", "python", "django", "django-forms", "" ]
Should I make my own framework by wrapping up the STL classes and/or Boost libraries, so that if I need to change the implementation of strings, vectors, lists, etc., or I need to interoperate with MFC, other libraries, or even other platforms that use their own formats, I can easily convert to meet the criteria? This is what I'm thinking:

```
// In my framework:
namespace MyFx {
  typedef std::string String;
}

// Port specific (MFC in this case)
CString ToCString(const MyFx::String &str) { /* magic */ }

// Port specific (.NET specific)
System::String^ ToManagedString(const MyFx::String &str) { /* magic */ }
```

Am I reinventing the wheel too much? I would use *MyFx::String* in interfaces between the UI and the other layers.
It seems to me like there won't be a lot of benefit to this; in my experience, the *point* of using these frameworks is so that you don't go reinventing the wheel. If you find that you need to write a new string class or a new vector class, you should think really hard about it, and make sure you're not just doing something else wrong. I'm not saying there's never a reason to write your own string class, I'm just saying it's rare. Given that, I would suggest just using the desired frameworks directly. Regarding the conversion functions, I believe the compiler won't see your ToCString function any differently than it would see this: ``` CString ToCString( const std::string & ) {...} ``` This is because a C++ typedef does not create a new type, just an alias to an existing type. **Further Thoughts** I think the concern you voice here is a very natural one, and I know it has come up in my team several times. However, I think the answer is still as stated above. While the STL classes are probably not perfect, they were designed by very smart people, who put quite a lot of thought into the task. Thus, the odds of you needing to write a full replacement string class are very small. Furthermore, and without intending any slight, it would take you (or me) a very long time to implement a robust general-purpose string class that could suitably replace std::string. Another possible way to think about it would be this: would you consider "replacing" the String class in Java or C#? I think the answer there is clearly "no", although there may be occasional limited areas where you use something other than a String to represent a sequence of characters. Same thing goes here: std::string is as close as C++ gets to a built-in string class, and you almost assuredly don't need to replace it.
"Am I reinventing the wheel too much?" - yes. Don't do it.
Should I make my own framework?
[ "", "c++", "frameworks", "interface", "portability", "" ]
How do you convert a number to a string showing dollars and cents? ``` eg: 123.45 => '$123.45' 123.456 => '$123.46' 123 => '$123.00' .13 => '$0.13' .1 => '$0.10' 0 => '$0.00' ```
PHP also has [money\_format()](http://php.net/money_format). Here's an example: ``` echo money_format('$%i', 3.4); // echos '$3.40' ``` This function actually has tons of options, go to the documentation I linked to to see them. Note: money\_format is undefined in Windows. --- UPDATE: Via the PHP manual: <https://www.php.net/manual/en/function.money-format.php> *WARNING: This function [**money\_format**] has been DEPRECATED as of PHP 7.4.0. Relying on this function is highly discouraged.* Instead, look into [NumberFormatter::formatCurrency](https://www.php.net/manual/en/numberformatter.formatcurrency.php). ``` $number = "123.45"; $formatter = new NumberFormatter('en_US', NumberFormatter::CURRENCY); return $formatter->formatCurrency($number, 'USD'); ```
If you just want something simple: ``` '$' . number_format($money, 2); ``` [number\_format()](http://php.net/number_format)
How do I format a number to a dollar amount in PHP
[ "", "php", "formatting", "currency", "" ]
I have a generic class

```
public class MyClass<TContext, T> where TContext : DataContext
```

that effectively acts on an instance of another:

```
public class MyOtherClass<T> : IEnumerable<T>
```

I want to enforce that the `TContext` has a `Table<T>`. Is there a clean way to enforce this?
Are you wanting to verify that TContext has a member which is a Table<T>? If so, the only way to do that is to define an interface for this contract and alter your generic constraints:

```
interface IMyTable<T>
{
  Table<T> Table { get; }
}

public class MyClass<TContext, T>
  where TContext : DataContext, IMyTable<T>
```

**EDIT** Jason posted a clarifying comment to my answer. The name Table is not static; it instead depends on the type of T. If that's the case, then there is no way to statically enforce this through generic constraints. The best you could do is create an adapter class which implements IMyTable<> and provides a DataContext and a Table instance.

```
interface IMyTable2<T>
{
  DataContext DataContext { get; }
  Table<T> Table { get; }
}

class MyAdapter<T> : IMyTable2<T>
{
  private MyOtherClass<T> _other;
  public DataContext DataContext
  {
    get { return _other.DataContext; }
  }
  public Table<T> Table
  {
    get { return _other.TableWithDifferentName; }
  }
}
```
I think JaredPar had the right idea the first time around. ``` interface IMyTable<T> { Table<T> TheTable {get;} } public class MyClass<TContext,T> where TContext : DataContext,IMyTable<T> { //silly implementation provided to support the later example: public TContext Source {get;set;} public List<T> GetThem() { IMyTable<T> x = Source as IMyTable<T>; return x.TheTable.ToList(); } } ``` I want to extend his thought by adding an explicit interface implementation. This addresses Jason's remark about accessing the table property through IMyTable. The type must be involved somehow, and if you have an `IMyTable<T>`, the type is involved. ``` public partial class MyDataContext:IMyTable<Customer>, IMyTable<Order> { Table<Customer> IMyTable<Customer>.TheTable { get{ return this.GetTable<Customer>(); } } Table<Order> IMyTable<Order>.TheTable { get{ return this.GetTable<Order>(); } } } ``` Now it is possible to do this: ``` var z = new MyClass<MyDataContext, Customer>(); z.Source = new MyDataContext(); List<Customer> result = z.GetThem(); ```
Generic enforcement
[ "", "c#", ".net", "linq-to-sql", "generics", "datacontext", "" ]
What are some of the ways you have implemented models in the Zend Framework? I have seen the basic `class User extends Zend_Db_Table_Abstract` and then putting calls to that in your controllers: `$foo = new User;` `$foo->fetchAll()` but what about more sophisticated uses? The Quickstart section of the documentation offers such an example but I still feel like I'm not getting a "best use" example for models in Zend Framework. Any interesting implementations out there? --- **EDIT:** I should clarify (in response to CMS's comment)... I know about doing more complicated selects. I was interested in overall approaches to the Model concept and concrete examples of how others have implemented them (basically, the stuff the manual leaves out and the stuff that basic how-to's gloss over)
I personally subclass both `Zend_Db_Table_Abstract` and `Zend_Db_Table_Row_Abstract`. The main difference between my code and yours is that I explicitly treat the subclass of `Zend_Db_Table_Abstract` as a "table" and `Zend_Db_Table_Row_Abstract` as a "row". Very rarely do I see direct calls to select objects, SQL, or the built-in ZF database methods in my controllers. I try to hide the logic of requesting specific records behind calls to `Zend_Db_Table_Abstract` subclasses, like so:

```
class Users extends Zend_Db_Table_Abstract
{
  protected $_name = 'users';
  protected $_rowClass = 'User'; // <== THIS IS REALLY HELPFUL

  public function getById($id)
  {
    // RETURNS ONE INSTANCE OF 'User'
  }

  public function getActiveUsers()
  {
    // RETURNS MULTIPLE 'User' OBJECTS
  }
}

class User extends Zend_Db_Table_Row_Abstract
{
  public function setPassword()
  {
    // SET THE PASSWORD FOR A SINGLE ROW
  }
}

/* CONTROLLER */
public function setPasswordAction()
{
  /* GET YOUR PARAMS */
  $users = new Users();
  $user = $users->getById($id);
  $user->setPassword($password);
  $user->save();
}
```

There are numerous ways to approach this. Don't think this is the only one, but I try to follow the intent of the ZF's design. (Here are more of my [thoughts and links on the subject](https://stackoverflow.com/questions/57773/zend-php-framework#58837).) This approach does get a little class-heavy, but I feel it keeps the controllers focused on handling input and coordinating with the view, leaving the model to do the application-specific work.
I worked for Zend and did quite a bit of work on the Zend\_Db\_Table component. Zend Framework doesn't give a lot of guidance on the concept of a "Model" with respect to the Domain Model pattern. There's no base class for a Model because the Model encapsulates some part of business logic specific to your application. I wrote a [blog](http://karwin.blogspot.com/2008/05/activerecord-does-not-suck.html) about this subject in more detail. Persistence to a database should be an internal implementation detail of a Model. The Model typically *uses* one or more Table. It's a common but improper object-oriented design to consider a Model as an extension of a Table. In other words, we should say Model HAS-A Table -- not Model IS-A Table. This is an example of IS-A: ``` class MyModel extends Zend_Db_Table_Abstract { } ``` This is an example of HAS-A: ``` class MyModel // extends nothing { protected $some_table; } ``` In a real domain model, you would *use* $some\_table in the methods of MyModel. You can also read Martin Fowler's take on the [Domain Model](http://martinfowler.com/eaaCatalog/domainModel.html) design pattern, and his description of the [Anemic Domain Model](http://www.martinfowler.com/bliki/AnemicDomainModel.html) antipattern, which is how many developers unfortunately approach OO programming.
Models in the Zend Framework
[ "", "php", "model-view-controller", "zend-framework", "model", "" ]
I've been working on optimizing a query and have run into a situation that's making me question how I've always used SQL's OR operator. (This is on SQL Server 2000.) I have a query where the conditional (WHERE) clause looks something like this:

```
WHERE (Column1 = @Param1 or Column1 LIKE @Param1 + '%')
AND (@Param2 = '' OR Column2 = @Param2 OR Column2 LIKE @Param2 + '%')
```

Now, I've always understood that OR in SQL evaluated both expressions. So all records that evaluated true for the left expression would be returned along with all records that evaluated true on the right expression. For example:

```
SELECT * FROM TABLE1 WHERE COL1 = 'Test' OR Col2 = 'Data'
```

This would return all records where COL1 is 'Test' as well as any record where Col2 is 'Data'.

In the example above, I modified the Column2 conditional to the following:

```
AND(Column2 LIKE ISNULL(@Param2, '') + '%')
```

All of a sudden, I get 0 rows returned. Have I been mistaken in thinking that OR only evaluates expressions until it finds a TRUE result, or is there a condition that would cause the two different clauses to return different results?
> "OR only evaluates expressions until it finds a TRUE result"

It is only *required* to evaluate that far, but that's not your problem (actually, this short-circuiting is what was saving you in your original case). Your two queries are not really equivalent. I'm thinking you have `NULL`s in Column2, which will never cause `(Column2 LIKE ISNULL(@Param2, '') + '%')` to be true; in your original version, the `@Param2 = ''` test was masking this case, since it IS true (sometimes).

Perhaps:

```
(ISNULL(Column2, '') LIKE ISNULL(@Param2, '') + '%')
```

Remember the three-valued logic for NULLs:

```
TRUE and UNKNOWN: UNKNOWN
TRUE or UNKNOWN: TRUE
FALSE and UNKNOWN: FALSE
FALSE or UNKNOWN: UNKNOWN
```

But I'm not sure your optimization is really helping.
OR is not all-encompassing, especially as it's in parentheses. What you have in a larger sense is: `WHERE X AND Y`. The fact that X and Y are themselves boolean expressions that make use of an OR is not important: they are evaluated separately and then the results are fed to the AND operator. [edit]: Reading again, I may have misunderstood your question. With that in mind, I'll have to go with the other answer, because `NULL LIKE '%'` returns NULL, which is the same as false in this case. You might try this instead: ``` COALESCE(Column2,'') LIKE COALESCE(@param2,'') + '%' ```
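The NULL behavior discussed in these answers is easy to see in action. The sketch below uses SQLite through Python's `sqlite3` module, which, like SQL Server, follows the standard three-valued logic here: `NULL LIKE '%'` evaluates to UNKNOWN, so the row is filtered out of the result.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (Column2 TEXT)")
con.executemany("INSERT INTO t VALUES (?)", [(None,), ("Data",)])

# NULL LIKE '%' evaluates to UNKNOWN, so the NULL row never matches:
matched = con.execute("SELECT COUNT(*) FROM t WHERE Column2 LIKE '%'").fetchone()[0]
print(matched)  # 1

# Wrapping the column in COALESCE turns NULL into '', which DOES match:
matched = con.execute(
    "SELECT COUNT(*) FROM t WHERE COALESCE(Column2, '') LIKE '%'"
).fetchone()[0]
print(matched)  # 2
```

This is exactly why the `COALESCE`/`ISNULL` wrapping suggested above changes the row count.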
SQL 'Or' operator. How does it work in the following scenario?
[ "", "sql", "sql-server", "null", "three-valued-logic", "" ]
> **Possible Duplicate:** > [Converting XML to JSON using Python?](https://stackoverflow.com/questions/191536/converting-xml-to-json-using-python) I'm doing some work on App Engine and I need to convert an XML document being retrieved from a remote server into an equivalent JSON object. I'm using `xml.dom.minidom` to parse the XML data being returned by `urlfetch`. I'm also trying to use `django.utils.simplejson` to convert the parsed XML document into JSON. I'm completely at a loss as to how to hook the two together. Below is the code I'm tinkering with: ``` from xml.dom import minidom from django.utils import simplejson as json #pseudo code that returns actual xml data as a string from remote server. result = urlfetch.fetch(url,'','get'); dom = minidom.parseString(result.content) json = simplejson.load(dom) self.response.out.write(json) ```
Soviut's advice for lxml objectify is good. With a specially subclassed simplejson, you can turn an lxml objectify result into json. ``` import simplejson as json import lxml class objectJSONEncoder(json.JSONEncoder): """A specialized JSON encoder that can handle simple lxml objectify types >>> from lxml import objectify >>> obj = objectify.fromstring("<Book><price>1.50</price><author>W. Shakespeare</author></Book>") >>> objectJSONEncoder().encode(obj) '{"price": 1.5, "author": "W. Shakespeare"}' """ def default(self,o): if isinstance(o, lxml.objectify.IntElement): return int(o) if isinstance(o, lxml.objectify.NumberElement) or isinstance(o, lxml.objectify.FloatElement): return float(o) if isinstance(o, lxml.objectify.ObjectifiedDataElement): return str(o) if hasattr(o, '__dict__'): #For objects with a __dict__, return the encoding of the __dict__ return o.__dict__ return json.JSONEncoder.default(self, o) ``` See the docstring for example of usage, essentially you pass the result of lxml `objectify` to the encode method of an instance of `objectJSONEncoder` Note that Koen's point is very valid here, the solution above only works for simply nested xml and doesn't include the name of root elements. This could be fixed. I've included this class in a gist here: <http://gist.github.com/345559>
[xmltodict](https://github.com/martinblech/xmltodict) (full disclosure: I wrote it) can help you convert your XML to a dict+list+string structure, following this ["standard"](http://www.xml.com/pub/a/2006/05/31/converting-between-xml-and-json.html). It is [Expat](http://docs.python.org/library/pyexpat.html)-based, so it's very fast and doesn't need to load the whole XML tree in memory. Once you have that data structure, you can serialize it to JSON: ``` import xmltodict, json o = xmltodict.parse('<e> <a>text</a> <a>text</a> </e>') json.dumps(o) # '{"e": {"a": ["text", "text"]}}' ```
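If pulling in a third-party dependency isn't an option, the conversion can also be sketched with just the standard library. This uses `xml.etree.ElementTree` rather than `minidom`; the `etree_to_dict` helper is illustrative only (it ignores attributes and mixed content):

```python
import json
import xml.etree.ElementTree as ET

def etree_to_dict(elem):
    """Recursively convert an Element to nested dicts/lists/strings.
    Illustrative sketch: attributes and mixed content are ignored."""
    children = list(elem)
    if not children:
        return elem.text
    d = {}
    for child in children:
        value = etree_to_dict(child)
        if child.tag in d:  # repeated tag -> collect values in a list
            if not isinstance(d[child.tag], list):
                d[child.tag] = [d[child.tag]]
            d[child.tag].append(value)
        else:
            d[child.tag] = value
    return d

root = ET.fromstring("<e> <a>text</a> <a>text</a> </e>")
print(json.dumps({root.tag: etree_to_dict(root)}))
# {"e": {"a": ["text", "text"]}}
```

The same recursion works equally well over a `minidom` tree; only the child-iteration calls differ.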
How to convert XML to JSON in Python?
[ "", "python", "xml", "json", "" ]
Let's say that I have a header user control in a master page, and I want to change a property of the user control depending on what content page is loaded inside the master page. How might I go about this? Thanks!
You can use two methods. The first is by using `Page.Master.FindControl('controlID')`. Then you can cast it to the type of your user control. The second method is by adding a `<%@ MasterType VirtualPath="">` OR `<%@ MasterType TypeName=""%>` tag to your aspx page. In the `VirtualPath` add the virtual path to the master page, or the class in the `TypeName`. You can then access everything with intellisense.
First find the user control in the master page as below, then find the controls whose properties you need to access.

```
UserControl ucHeader1 = Page.Master.FindControl("ucHeader1") as UserControl;
PlaceHolder phProxylist = ucHeader1.FindControl("phProxy") as PlaceHolder;
DropDownList ddlProxyList1 = ucHeader1.FindControl("ddlProxyList") as DropDownList;
phProxylist.Visible = false;
```

Hope this helps.
How to access a user control in a masterpage from a content page?
[ "", "c#", "asp.net", "user-controls", "master-pages", "" ]
I am in a situation where I would like to accept a LOT of log events controlled by me (notably from the logging agent I am preparing for slf4j) and then analyze them interactively. I am not as such interested in a facility that presents formatted log files, but one that can accept log events as objects and allow me to sort and display on e.g. threads and timelines etc. Chainsaw could maybe be an option but is currently not compatible with logback, which I use for technical reasons. Is there any project, with stand-alone viewers or embedded in an IDE, which would be suitable for this kind of log handling? I am aware that I am approaching what might be suitable for a profiler, so if there is a profiler project suitable for this kind of data acquisition and display where I can feed the event pipe, I would like to hear about it.

---

Update 2009-03-19: I have found that there is not a log viewer which allows me to see what I would like (a visual display of events with coordinates determined by day and time, etc.), so I have decided to create a very terse XML format derived from the log4j XMLLayout, adapted to be as readable as possible while still being valid XML snippets, and then use the Microsoft LogParser to extract the information I need for post-processing in other tools.
You might implement an adapter for logback that sends log4j events to a log4j receiver. This would enable you to use Chainsaw. Or build an adapter which receives logback network events and exposes them to log4j.
Take a look at [splunk](http://www.splunk.com/), it doesn't do the specific things that you are looking for, but maybe it can help you achieve the end goal.
Recommendations of a high volume log event viewer in a Java environment
[ "", "java", "logging", "" ]
If you are developing a memory-intensive application in C++ on Windows, do you opt to write your own custom memory manager to allocate memory from the virtual address space, or do you let the CRT take control and do the memory management for you? I am especially concerned about fragmentation caused by the allocation and deallocation of small objects on the heap. Because of this, I think the process could run out of memory even though there is enough memory, because it is fragmented.
I think your best bet is to not implement one until profiles **prove** that the CRT is fragmenting memory in a way that damages the performance of your application. CRT, core OS, and STL guys spend a lot of time thinking about memory management. There's a good chance that your code will perform quite well under the existing allocators with no changes needed. There's certainly a better chance of that than of you getting a memory allocator right the first time. I've written memory allocators before for similar circumstances, and it's a monstrous task to take on. Not so surprisingly, the version I inherited was rife with fragmentation problems. The other advantage of waiting until a profile shows it's a problem is that you will also know if you've actually fixed anything. That's the most important part of a performance fix. As long as you're using standard collection classes and algorithms (such as STL/Boost), it shouldn't be very hard to plug in a new allocator later in the cycle to fix the portions of your code base that need it. It's very unlikely that you will need a hand-coded allocator for your entire program.
Although most of you indicate that you shouldn't write your own memory manager, it could still be useful if:

* you have a specific requirement or a situation in which you are sure you can write a faster version
* you want to write your own memory-overwrite logic (to help in debugging)
* you want to keep track of the places where memory is leaked

If you want to write your own memory manager, it's important to split it into the following 4 parts:

1. a part that 'intercepts' the calls to malloc/free (C) and new/delete (C++). This is quite easy for new/delete (just global new and delete operators), but it is also possible for malloc/free ('overwrite' the functions of the CRT, redefine calls to malloc/free, ...)
2. a part that represents the entry point of your memory manager, and which is called by the 'interceptor' part
3. a part that implements the actual memory manager. Possibly you will have multiple implementations of this (depending on the situation)
4. a part that 'decorates' the allocated memory with information about the call stack, overwrite zones (aka red zones), ...

If these 4 parts are clearly separated, it also becomes easy to replace one part with another, or to add a new part, e.g.:

* add the memory manager implementation of the Intel Threading Building Blocks library (to part 3)
* modify part 1 to support a new version of the compiler, a new platform, or a totally new compiler

Having written a memory manager myself, I can only say that it can be really handy to have an easy way to extend your own memory manager. E.g. what I regularly have to do is find memory leaks in long-running server applications. With my own memory manager I do it like this:

* start the application and let it 'warm up' for a while
* ask your own memory manager to dump an overview of the used memory, including the call stacks at the moment of the call
* continue running the application
* make a second dump
* sort the two dumps alphabetically on call stack
* look up the differences

Although you can do similar things with out-of-the-box components, they tend to have some disadvantages:

* often they seriously slow down the application
* often they can only report leaks at the end of the application, not while the application is running

But, also try to be realistic: if you don't have a problem with memory fragmentation, performance, memory leaks or memory overwrites, there's no real reason to write your own memory manager.
Memory management in memory intensive application
[ "", "c++", "windows", "optimization", "memory-management", "" ]
What libraries, extensions etc. would be required to render a portion of a PDF document to an image file? Most PHP PDF libraries that I have found center around creating PDF documents, but is there a simple way to render a document to an image format suitable for web use? Our environment is a LAMP stack.
You need [`ImageMagick`](https://www.php.net/imagick) and [`GhostScript`](https://www.ghostscript.com/download.html) ``` <?php $im = new imagick('file.pdf[0]'); $im->setImageFormat('jpg'); header('Content-Type: image/jpeg'); echo $im; ?> ``` The `[0]` means `page 1`.
For those who don't have ImageMagick for whatever reason, GD functions will also work, in conjunction with GhostScript. Run the ghostscript command with `exec()` to convert a PDF to JPG, and manipulate the resulting file with `imagecreatefromjpeg()`. Run the ghostscript command: ``` exec('gs -dSAFER -dBATCH -sDEVICE=jpeg -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r300 -sOutputFile=whatever.jpg input.pdf') ``` To manipulate, create a new placeholder image, `$newimage = imagecreatetruecolor(...)`, and bring in the current image. `$image = imagecreatefromjpeg('whatever.jpg')`, and then you can use `imagecopyresampled()` to change the size, or any number of other built-in, non-`imagemagick` commands
How do I convert a PDF document to a preview image in PHP?
[ "", "php", "image", "pdf", "lamp", "" ]
Does jQuery have any plugin that prevents entering input into a textbox that doesn't match a regexp pattern? For example, I have a textbox for entering a payment amount; I want users to be able to enter only numbers and "." in the textbox, and all other input should have no effect on the textbox. Thanks.
[jquery-keyfilter plugin](http://code.google.com/p/jquery-keyfilter/) - does what is needed.
[Masked Input Plugin](http://digitalbush.com/projects/masked-input-plugin/) ``` jQuery(function($){ $("#paymentAmount").mask("9999.99"); }); ```
jquery plugin for preventing entering any input not matching a regexp
[ "", "javascript", "jquery", "jquery-plugins", "" ]
I have a bunch of Python modules in a directory, each containing a derived class. I need a "runner" script that, for each module, instantiates the class inside it (the actual class name can be built from the module file name) and then calls the "go" method on each instance. I don't know in advance how many modules there are, but I can list all of them by globbing the directory for something like `bot_*.py`. I think this is a "metaprogramming" problem; what would be the best (most elegant) way to do it?
```
def run_all(path):
    import glob, os
    print "Exploring %s" % path
    for filename in glob.glob(path + "/*.py"):
        # modulename = "bot_paperino"
        modulename = os.path.splitext(os.path.split(filename)[-1])[0]
        # classname = "Paperino"
        classname = modulename.split("bot_")[-1].capitalize()
        # package = "path.bot_paperino"
        package = filename.replace("\\", "/").replace("/", ".")[:-3]
        mod = __import__(package)
        if classname in mod.__dict__[modulename].__dict__.keys():
            obj = mod.__dict__[modulename].__dict__[classname]()
            if hasattr(obj, "go"):
                obj.go()

if __name__ == "__main__":
    import sys
    # Run on each directory passed on the command line
    for path in sys.argv[1:]:
        run_all(path)
```

You need an `__init__.py` in each path you want to "run". Change `"bot_"` as you see fit. Runs on Windows and Linux.
You could use [`__import__()`](http://docs.python.org/library/functions.html#__import__) to load each module, use [`dir()`](http://docs.python.org/library/functions.html#dir) to find all objects in each module, find the objects which are classes, instantiate them, and run the `go()` method:

```
import types

for module_name in list_of_modules_to_load:
    module = __import__(module_name)
    for name in dir(module):
        obj = module.__dict__[name]
        # types.ClassType only matches old-style classes; include `type`
        # so new-style classes (those deriving from object) are found too
        if isinstance(obj, (type, types.ClassType)):
            instance = obj()
            if hasattr(instance, "go"):
                instance.go()
```
Best way to create a "runner" script in Python?
[ "", "python", "metaprogramming", "" ]
I need to create an out-of-process COM server (.exe) in C# that will be accessed by multiple other processes on the same box. The component has to be a single process because it will cache, in memory, the information it provides to its consumers. Note: the processes that will access my COM server are mostly Matlab processes, hence the **necessity** for a COM interface. I have seen threads about creating in-process COM components in .NET on Stack Overflow ([Create COM ...](https://stackoverflow.com/questions/186640/create-com-component-and-activex-controls-in-net-c-and-net-framework-3-5)) and on the web, but am having a hard time finding a way to create out-of-process components with .NET. How is this achievable? Any suggested references? Thanks.
One option is [serviced components](http://msdn.microsoft.com/en-us/library/3x7357ez(VS.80).aspx) - i.e. host it in COM+ as the shell exe. See also the [howto here](http://msdn.microsoft.com/en-us/library/ty17dz7h(VS.80).aspx).
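The linked articles boil down to deriving from `ServicedComponent` and marking the assembly for server activation, so COM+ hosts the component in its own process (a `dllhost.exe` instance). A minimal, illustrative skeleton (the class, application name, and GUID here are made up; the assembly must be strong-named and registered with `regsvcs.exe`):

```
using System.Collections.Generic;
using System.EnterpriseServices;
using System.Runtime.InteropServices;

// ActivationOption.Server is the key part: COM+ hosts the component
// out-of-process, so all COM clients talk to the same hosting process.
[assembly: ApplicationName("MyCacheServer")]
[assembly: ApplicationActivation(ActivationOption.Server)]

namespace MyCacheServer
{
    [ComVisible(true)]
    [Guid("11111111-2222-3333-4444-555555555555")]  // use your own fixed GUID
    public class CacheServer : ServicedComponent
    {
        // static, so the cache survives across component activations
        // within the single hosting process
        private static readonly Dictionary<string, string> cache =
            new Dictionary<string, string>();

        public void Put(string key, string value) { cache[key] = value; }

        public string Get(string key)
        {
            string value;
            return cache.TryGetValue(key, out value) ? value : null;
        }
    }
}
```

Once registered, Matlab clients should be able to create the component via its ProgID with `actxserver`, and they will all hit the same in-memory cache.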
We too had some issues many years ago with regasm and running the COM class as a local EXE server. This is a bit of a hack and I'd welcome any suggestions to make it more elegant. It was implemented for a project back in the .NET 1.0 days and has not been touched since! Basically it performs a regasm-style registration each time the application starts (it needs to run once to make the registry entries before the COM object is instantiated in the COM container application). I've copied the important bits from our implementation below and renamed a few classes to illustrate the example. The following method is called from the form's Load event to register the COM class (renamed to `MyCOMClass` for this example):

```
private void InitialiseCOM()
{
    System.Runtime.InteropServices.RegistrationServices services = new System.Runtime.InteropServices.RegistrationServices();
    try
    {
        System.Reflection.Assembly ass = Assembly.GetExecutingAssembly();
        services.RegisterAssembly(ass, System.Runtime.InteropServices.AssemblyRegistrationFlags.SetCodeBase);

        Type t = typeof(MyCOMClass);
        try
        {
            Registry.ClassesRoot.DeleteSubKeyTree("CLSID\\{" + t.GUID.ToString() + "}\\InprocServer32");
        }
        catch(Exception E)
        {
            Log.WriteLine(E.Message);
        }

        System.Guid GUID = t.GUID;
        services.RegisterTypeForComClients(t, ref GUID);
    }
    catch (Exception e)
    {
        throw new Exception("Failed to initialise COM Server", e);
    }
}
```

The type in question, `MyCOMClass`, needs some special attributes to be COM compatible. One important attribute specifies a fixed GUID; otherwise, each time you compile, the registry fills up with orphaned COM GUIDs. You can use the Tools menu in Visual Studio to generate a unique GUID.
``` [GuidAttribute("D26278EA-A7D0-4580-A48F-353D1E455E50"), ProgIdAttribute("My PROGID"), ComVisible(true), Serializable] public class MyCOMClass : IAlreadyRegisteredCOMInterface { public void MyMethod() { } [ComRegisterFunction] public static void RegisterFunction(Type t) { AttributeCollection attributes = TypeDescriptor.GetAttributes(t); ProgIdAttribute ProgIdAttr = attributes[typeof(ProgIdAttribute)] as ProgIdAttribute; string ProgId = ProgIdAttr != null ? ProgIdAttr.Value : t.FullName; GuidAttribute GUIDAttr = attributes[typeof(GuidAttribute)] as GuidAttribute; string GUID = "{" + GUIDAttr.Value + "}"; RegistryKey localServer32 = Registry.ClassesRoot.CreateSubKey(String.Format("CLSID\\{0}\\LocalServer32", GUID)); localServer32.SetValue(null, t.Module.FullyQualifiedName); RegistryKey CLSIDProgID = Registry.ClassesRoot.CreateSubKey(String.Format("CLSID\\{0}\\ProgId", GUID)); CLSIDProgID.SetValue(null, ProgId); RegistryKey ProgIDCLSID = Registry.ClassesRoot.CreateSubKey(String.Format("CLSID\\{0}", ProgId)); ProgIDCLSID.SetValue(null, GUID); //Registry.ClassesRoot.CreateSubKey(String.Format("CLSID\\{0}\\Implemented Categories\\{{63D5F432-CFE4-11D1-B2C8-0060083BA1FB}}", GUID)); //Registry.ClassesRoot.CreateSubKey(String.Format("CLSID\\{0}\\Implemented Categories\\{{63D5F430-CFE4-11d1-B2C8-0060083BA1FB}}", GUID)); //Registry.ClassesRoot.CreateSubKey(String.Format("CLSID\\{0}\\Implemented Categories\\{{62C8FE65-4EBB-45e7-B440-6E39B2CDBF29}}", GUID)); } [ComUnregisterFunction] public static void UnregisterFunction(Type t) { AttributeCollection attributes = TypeDescriptor.GetAttributes(t); ProgIdAttribute ProgIdAttr = attributes[typeof(ProgIdAttribute)] as ProgIdAttribute; string ProgId = ProgIdAttr != null ? ProgIdAttr.Value : t.FullName; Registry.ClassesRoot.DeleteSubKeyTree("CLSID\\{" + t.GUID + "}"); Registry.ClassesRoot.DeleteSubKeyTree("CLSID\\" + ProgId); } } ``` The `InitialiseCOM` method in the main form uses `RegistrationServices` to register the type. 
The framework then uses reflection to find the method marked with the `ComRegisterFunction` attribute and calls that function with the type being registered. The `ComRegisterFunction`-marked method hand-creates the registry settings for a local EXE server COM object; you can compare these with regasm's output if you use `REGEDIT` to find the keys in question. I've commented out the three `Registry.ClassesRoot.CreateSubKey` calls for the implemented categories; these were another reason we needed to register the type ourselves: this was an OPC server, and third-party OPC clients use those implemented categories to scan for compatible OPC servers. regasm would not add them for us unless we did the work ourselves. You can easily see how this works if you put breakpoints on the functions as the application starts. Our implementation used an interface that was already registered with COM. For your application you will need to either:

1. Extend the registration methods listed above to register the interface with COM, or
2. Create a separate DLL with the interface definition, export that interface definition to a type library, and register that, as discussed in the Stack Overflow link you added to the question.
Create Out-Of-Process COM in C#/.Net?
[ "", "c#", "com", "" ]