Is there a better way of getting this result? This function fails if num has more digits than digits, and I feel like it should be in the library somewhere (like Integer.toString(x,"%3d") or something) ``` static String intToString(int num, int digits) { StringBuffer s = new StringBuffer(digits); int zeroes = digits - (int) (Math.log(num) / Math.log(10)) - 1; for (int i = 0; i < zeroes; i++) { s.append(0); } return s.append(num).toString(); } ```
Use `String.format` (see the [Formatter syntax](https://docs.oracle.com/javase/1.5.0/docs/api/java/util/Formatter.html#syntax)). In your case it will be: ``` String formatted = String.format("%03d", num); ``` * 0 - to pad with zeros * 3 - to set the width to 3
Since Java 1.5 you can use the `String.format` method. For example, to do the same thing as your example: ``` String format = String.format("%%0%dd", digits); String result = String.format(format, num); return result; ``` In this case, you're creating the format string using the width specified in digits, then applying it directly to the number. The format for this example is converted as follows: ``` %% --> % 0 --> 0 %d --> <value of digits> d --> d ``` So if digits is equal to 5, the format string becomes `%05d`, which specifies an integer padded with leading zeroes to a width of 5. See the [java docs](http://docs.oracle.com/javase/7/docs/api/java/lang/String.html#format(java.util.Locale,%20java.lang.String,%20java.lang.Object...)) for `String.format` for more information on the conversion specifiers.
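For comparison only (not part of the original Java answer): the same build-the-format-string-dynamically technique can be sketched in Python; the function name here is made up for illustration.

```python
def int_to_string(num, digits):
    """Zero-pad num to a field of `digits` characters by first
    building a printf-style format string, then applying it --
    the same two-step trick as the String.format answer above."""
    fmt = "%%0%dd" % digits  # e.g. digits=5 builds "%05d"
    return fmt % num
```

So `int_to_string(42, 5)` yields `"00042"`, matching the `%05d` behavior described above.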
Add leading zeroes to number in Java?
[ "", "java", "" ]
I'd like to strip out occurrences of a specific tag, leaving the inner XML intact. I'd like to do this with one pass (rather than searching, replacing, and starting from scratch again). For instance, from the source: ``` <element> <RemovalTarget Attribute="Something"> Content Here </RemovalTarget> </element> <element> More Here </element> ``` I'd like the result to be: ``` <element> Content Here </element> <element> More Here </element> ``` I've tried something like this (forgive me, I'm new to Linq): ``` var elements = from element in doc.Descendants() where element.Name.LocalName == "RemovalTarget" select element; foreach (var element in elements) { element.AddAfterSelf(element.Value); element.Remove(); } ``` but on the second time through the loop I get a null reference, presumably because the collection is invalidated by changing it. What is an efficient way to remove these tags on a potentially large document?
You'll have to skip the deferred execution with a call to ToList, which probably won't hurt your performance in large documents as you're just going to be iterating and replacing at a much lower big-O than the original search. As @jacob\_c pointed out, I should be using element.Nodes() to replace it properly, and as @Panos pointed out, I should reverse the list in order to handle nested replacements accurately. Also, use [XElement.ReplaceWith](http://msdn.microsoft.com/en-us/library/system.xml.linq.xelement.replacewith.aspx), much faster than your current approach in large documents: ``` var elements = doc.Descendants("RemovalTarget").ToList(); elements.Reverse(); /* List<T>.Reverse() reverses in place and returns void, so it can't be * chained onto ToList(); reversing the snapshot handles nested targets * correctly, and the in-place reverse can't be slower than the * IEnumerable<T> Reverse */ foreach (var element in elements) { element.ReplaceWith(element.Nodes()); } ``` One last point, in reviewing what this MAY be used for, I tend to agree with @Trull that XSLT may be what you're actually looking for, if say you're removing all `<b>` tags from a document. Otherwise, enjoy this fairly decent and fairly well performing LINQ to XML implementation.
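The snapshot-then-splice pattern isn't LINQ-specific. As a rough illustration of the same idea (Python's ElementTree, with a hypothetical helper name; not part of the original answer): take a snapshot of the matches, then replace each matched element with its own text and children.

```python
import xml.etree.ElementTree as ET

def strip_tag(parent, tag):
    """Remove every <tag> descendant of parent, splicing its inner
    text and children into place (a rough analogue of the
    ReplaceWith(element.Nodes()) approach above)."""
    for child in list(parent):       # snapshot: we mutate the tree below
        strip_tag(child, tag)        # handle nested targets first
        if child.tag != tag:
            continue
        idx = list(parent).index(child)
        grandkids = list(child)
        parent.remove(child)
        # the stripped element's inner text moves just before its children
        if child.text:
            if idx > 0:
                prev = list(parent)[idx - 1]
                prev.tail = (prev.tail or "") + child.text
            else:
                parent.text = (parent.text or "") + child.text
        for off, g in enumerate(grandkids):
            parent.insert(idx + off, g)
        # text that followed the closing tag moves onto the last spliced node
        if child.tail:
            if grandkids:
                grandkids[-1].tail = (grandkids[-1].tail or "") + child.tail
            elif idx > 0:
                prev = list(parent)[idx - 1]
                prev.tail = (prev.tail or "") + child.tail
            else:
                parent.text = (parent.text or "") + child.tail
```

The `list(parent)` snapshot plays the same role as `.ToList()`: it avoids mutating the collection you're iterating, which is exactly the null-reference trap the question hit.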
Have you considered using XSLT? Seems like the perfect solution, as you are doing exactly what XSLT is meant for, transforming one XML doc into another. The templating system will delve into nested nastiness for you without problems. [Here](http://www.w3schools.com/xsl/xsl_transformation.asp) is a basic example.
strip out tag occurrences from XML
[ "", "c#", ".net", "xml", "linq", "" ]
I am working on a program which manipulates images of different sizes. Many of these manipulations read pixel data from an input and write to a separate output (e.g. blur). This is done on a per-pixel basis. Such image manipulations are very stressful on the CPU. I would like to use multithreading to speed things up. How would I do this? I was thinking of creating one thread per row of pixels. I have several requirements: * Executable size must be minimized. In other words, I can't use massive libraries. What's the most light-weight, portable threading library for C/C++? * I was thinking of having a function forEachRow(fp\* ) which runs a thread for each row, or even a forEachPixel(fp\* ) where fp operates on a single pixel in its own thread. Which is best? + Should I use normal functions or functors or functionoids or some lambda functions or ... something else? + Some operations use optimizations which require information from the previous pixel processed. This makes forEachRow favorable. Would using forEachPixel be better even considering this? * Would I need to lock my read-only and write-only arrays? + The input is only read from, but many operations require input from more than one pixel in the array. + The output is only written once per pixel. * Speed is also important (of course), but optimizing executable size takes precedence. Thanks. **More information on this topic for the curious: [C++ Parallelization Libraries: OpenMP vs. Thread Building Blocks](https://stackoverflow.com/questions/615264/c-parallelization-libraries-openmp-vs-thread-building-blocks)**
If your compiler supports [OpenMP](http://openmp.org/wp/) (I know [VC++ 8.0 and 9.0](http://msdn.microsoft.com/en-us/library/tt15eb9t(VS.80).aspx) do, as does gcc), it can make things like this much easier to do. You don't just want to make a lot of threads - there's a point of diminishing returns where adding new threads slows things down as you start getting more and more context switches. At some point, using too many threads can actually make the parallel version slower than just using a linear algorithm. The optimal number of threads is a function of the number of cpus/cores available, and the percentage of time each thread spends blocked on things like I/O. Take a look at [this article](http://www.ddj.com/cpp/211800538) by Herb Sutter for some discussion on parallel performance gains. OpenMP lets you easily adapt the number of threads created to the number of CPUs available. Using it (especially in data-processing cases) often involves simply putting in a few `#pragma omp`s in existing code, and letting the compiler handle creating threads and synchronization. In general - as long as data isn't changing, you won't have to lock read-only data. If you can be sure that each pixel slot will only be written once and you can guarantee that all the writing has been completed before you start reading from the result, you won't have to lock that either. For OpenMP, there's no need to do anything special as far as functors / function objects. Write it whichever way makes the most sense to you. Here's an image-processing example from [Intel](http://software.intel.com/en-us/articles/getting-started-with-openmp) (converts rgb to grayscale): ``` #pragma omp parallel for for (i=0; i < numPixels; i++) { pGrayScaleBitmap[i] = (unsigned BYTE) (pRGBBitmap[i].red * 0.299 + pRGBBitmap[i].green * 0.587 + pRGBBitmap[i].blue * 0.114); } ``` This automatically splits up into as many threads as you have CPUs, and assigns a section of the array to each thread.
**Don't embark on threading lightly!** The race conditions can be a major pain in the arse to figure out. Especially if you don't have a lot of experience with threads! **(You've been warned: Here be dragons! Big hairy non-deterministic impossible-to-reliably-reproduce dragons!)** [Do you know what deadlock is? How about Livelock?](http://en.wikipedia.org/wiki/Deadlock) That said... --- As ckarmann and others have already suggested: **Use a work-queue model. One thread per CPU core.** Break the work up into N chunks. Make the chunks reasonably large, like many rows. As each thread becomes free, it snags the next work chunk off the queue. In the simplest *IDEAL* version, you have N cores, N threads, and N subparts of the problem with each thread knowing from the start exactly what it's going to do. But that doesn't usually happen in practice due to the overhead of starting/stopping threads. You really want the threads to already be spawned and waiting for action. (E.g. Through a semaphore.) The work-queue model itself is quite powerful. It lets you parallelize things like quick-sort, which normally doesn't parallelize across N threads/cores gracefully. --- More threads than cores? You're just wasting overhead. Each thread has overhead. Even at #threads=#cores, you will never achieve a perfect Nx speedup factor. One thread per row would be very inefficient! One thread per pixel? I don't even want to think about it. (That per-pixel approach makes a lot more sense when playing with vectorized processor units like they had on the old Crays. But not with threads!) --- Libraries? What's your platform? Under Unix/Linux/g++ I'd suggest pthreads & semaphores. (Pthreads is also available under windows with a microsoft compatibility layer. But, uhgg. I don't really trust it! Cygwin might be a better choice there.) Under Unix/Linux, *man*: ``` * pthread_create, pthread_detach. 
* pthread_mutexattr_init, pthread_mutexattr_settype, pthread_mutex_init, * pthread_mutexattr_destroy, pthread_mutex_destroy, pthread_mutex_lock, * pthread_mutex_trylock, pthread_mutex_unlock, pthread_mutex_timedlock. * sem_init, sem_destroy, sem_post, sem_wait, sem_trywait, sem_timedwait. ``` Some folks like pthreads' condition variables. But I always preferred POSIX 1003.1b semaphores. They handle the situation where you want to signal another thread *BEFORE* it starts waiting somewhat better. Or where another thread is signaled multiple times. Oh, and do yourself a favor: Wrap your thread/mutex/semaphore pthread calls into a couple of C++ classes. That will simplify matters a lot! --- ***Would I need to lock my read-only and write-only arrays?*** It depends on your precise hardware & software. Usually read-only arrays can be freely shared between threads. But there are cases where that is not so. Writing is much the same. Usually, as long as only one thread is writing to each particular memory spot, you are ok. But there are cases where that is not so! Writing is more troublesome than reading as you can get into these weird fencepost situations. Memory is often written as words not bytes. When one thread writes part of the word, and another writes a different part, depending on the exact timing of which thread does what when (e.g. nondeterministic), you can get some very unpredictable results! **I'd play it safe: Give each thread its own copy of the read and write areas. After they are done, copy the data back. All under mutex, of course.** Unless you are talking about gigabytes of data, memory blits are very fast. That couple of microseconds of performance time just isn't worth the debugging nightmare. If you were to share one common data area between threads using mutexes, the collision/waiting mutex inefficiencies would pile up and devastate your efficiency! --- Look, clean data boundaries are the essence of good multi-threaded code. 
When your boundaries aren't clear, that's when you get into trouble. Similarly, it's essential to keep everything on the boundary mutexed! And to keep the mutexed areas short! Try to avoid locking more than one mutex at the same time. If you do lock more than one mutex, always lock them in the same order! Where possible use ERROR-CHECKING or RECURSIVE mutexes. FAST mutexes are just asking for trouble, with very little actual (measured) speed gain. If you get into a deadlock situation, run it in gdb, hit ctrl-c, visit each thread and backtrace. You can find the problem quite quickly that way. (Livelock is much harder!) --- One final suggestion: Build it single-threaded, then start optimizing. On a single-core system, you may find yourself gaining more speed from things like foo[i++]=bar ==> \*(foo++)=bar than from threading. --- ***Addendum:*** What I said about ***keeping mutexed areas short*** up above? Consider two threads: (Given a global shared mutex object of a Mutex class.) ``` /*ThreadA:*/ while(1){ mutex.lock(); printf("a\n"); usleep(100000); mutex.unlock(); } /*ThreadB:*/ while(1){ mutex.lock(); printf("b\n"); usleep(100000); mutex.unlock(); } ``` What will happen? Under my version of Linux, one thread will run continuously and the other will starve. Very very rarely they will change places when a context swap occurs between mutex.unlock() and mutex.lock(). --- ***Addendum:*** In your case, this is unlikely to be an issue. But with other problems one may not know in advance how long a particular work-chunk will take to complete. Breaking a problem down into 100 parts (instead of 4 parts) and using a work-queue to split it up across 4 cores smooths out such discrepancies. If one work-chunk takes 5 times longer to complete than another, well, it all evens out in the end. Though with too many chunks, the overhead of acquiring new work-chunks creates noticeable delays. It's a problem-specific balancing act.
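The work-queue model described in the answer above is language-agnostic. As a minimal sketch (in Python rather than the pthreads the answer assumes, with made-up names and a stand-in per-pixel operation), N workers pull row chunks off a shared queue, and each output slot is written by exactly one thread:

```python
import threading
import queue

def process_rows(image_rows, num_workers=4, chunk=16):
    """Work-queue sketch: break the rows into chunks, let each worker
    snag the next chunk off the queue as it becomes free."""
    work = queue.Queue()
    for start in range(0, len(image_rows), chunk):
        work.put(range(start, min(start + chunk, len(image_rows))))

    results = [None] * len(image_rows)

    def worker():
        while True:
            try:
                rows = work.get_nowait()
            except queue.Empty:
                return  # queue drained: this worker is done
            for r in rows:
                # stand-in per-pixel operation; each output slot is
                # written by exactly one thread, so no lock is needed
                results[r] = [px * 2 for px in image_rows[r]]

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Note how this captures both points from the answer: workers outnumber chunks rather than rows (avoiding thread-per-row overhead), and uneven chunk durations even out because a free worker immediately grabs the next chunk.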
Multithreaded image processing in C++
[ "", "c++", "multithreading", "optimization", "image-processing", "parallel-processing", "" ]
In writing the code that throws the exception I asked about [here](https://stackoverflow.com/questions/259800/is-there-a-built-in-net-exception-that-indicates-an-illegal-object-state), I came to the end of my message, and paused at the punctuation. I realized that nearly every exception message I've ever thrown probably has a ! somewhere. ``` throw new InvalidOperationException("I'm not configured correctly!"); throw new ArgumentNullException("You passed a null!"); throw new StupidUserException("You can't divide by 0! What the hell were you THINKING??? DUMMY!!!!!"); ``` What tone do you take when writing exception messages? When going through logs, do you find any certain style of message actually helps more than another?
A conversational tone in system messages makes the software look unprofessional and sloppy. Exclamation points, insults, and slang don't really have a place in polished exception messages. Also, I tend to use different styles in Java for runtime exceptions and checked exceptions, since runtime exceptions are addressed to the programmer who made the mistake. Since even those messages might end up displayed to end users, I still "keep it clean," but they can be a little more terse and cryptic. Checked exception messages should be more helpful, since it may be that the user can fix the problem if you describe it (e.g., file not found, disk full, no route to host, etc.). One thing that is helpful, in the absence of a specific field on the exception for the information, is the offending data: ``` throw new IndexOutOfBoundsException("offset < 0: " + off); ```
I try to mirror the tone, grammar and punctuation style of the framework against which I'm coding. You never know when one of these messages might actually make it out in front of a client or user, so I keep everything professional, non-judgmental and specific enough for troubleshooting -- without being so specific as to give away any security issues in the code. I avoid exclamation marks in all strings (UI and exception) like the plague, except (occasionally) in my unit tests.
What style do you use for exception messages?
[ "", "c#", ".net", "exception", "logging", "throw", "" ]
How do you draw the following dynamic **3D** array with OpenGL **glDrawPixels()**? You can find the documentation here: <http://opengl.org/documentation/specs/man_pages/hardcopy/GL/html/gl/drawpixels.html> ``` float ***array3d; void InitScreenArray() { int i, j; int screenX = scene.camera.vres; int screenY = scene.camera.hres; array3d = (float ***)malloc(sizeof(float **) * screenX); for (i = 0 ; i < screenX; i++) { array3d[i] = (float **)malloc(sizeof(float *) * screenY); for (j = 0; j < screenY; j++) array3d[i][j] = (float *)malloc(sizeof(float) * /*Z_SIZE*/ 3); } } ``` I can use only the following header files: ``` #include <math.h> #include <stdlib.h> #include <windows.h> #include <GL/gl.h> #include <GL/glu.h> #include <GL/glut.h> ```
Uh ... Since you're allocating *each single pixel* with a separate `malloc()`, you will have to draw each pixel with a separate call to `glDrawPixels()`, too. This is (obviously) insane; the idea of bitmapped graphics is that the pixels are stored in an adjacent, compact, format, so that it is quick and fast (*O(1)*) to move from one pixel to another. This looks very confused to me. A more sensible approach would be to allocate the "3D array" (which is often referred to as a 2D array of pixels, where each pixel happens to consist of a red, green and blue component) with a single call to `malloc()`, like so (in C): ``` float *array3d; array3d = malloc(scene.camera.hres * scene.camera.vres * 3 * sizeof *array3d); ```
Thanks unwind. I got the same advice on [gamedev.net](http://www.gamedev.net/community/forums/topic.asp?topic_id=513674) so I have implemented the following algorithm: ``` typedef struct { GLfloat R, G, B; } color_t; color_t *array1d; void InitScreenArray() { long screenX = scene.camera.vres; long screenY = scene.camera.hres; array1d = (color_t *)malloc(screenX * screenY * sizeof(color_t)); } void SetScreenColor(int x, int y, float red, float green, float blue) { int screenX = scene.camera.vres; int screenY = scene.camera.hres; array1d[x + y*screenY].R = red; array1d[x + y*screenY].G = green; array1d[x + y*screenY].B = blue; } void onDisplay( ) { glClearColor(0.1f, 0.2f, 0.3f, 1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glRasterPos2i(0,0); glDrawPixels(scene.camera.hres, scene.camera.vres, GL_RGB, GL_FLOAT, array1d); glFinish(); glutSwapBuffers(); } ``` My application doesn't work yet (nothing appears on screen), but I think it's my fault and this code will work.
OpenGL glDrawPixels on dynamic 3D arrays
[ "", "c++", "arrays", "opengl", "graphics", "" ]
I have a web application where the text in a label does not update on the first button click; I need to click the button twice. I debugged the code and found that the label does not receive the data until after the second click. Here is my code: ``` System.Data.SqlClient.SqlCommand command = new System.Data.SqlClient.SqlCommand(); System.Data.SqlClient.SqlConnection connection; string CommandText; string game; string modtype; bool filter; protected void Page_Load(object sender, EventArgs e) { labDownloadList.Text = null; //Session variables: if (Session["Game"] != null) { game = Convert.ToString(Session["Game"]); } if (Session["ModType"] != null) { modtype = Convert.ToString(Session["ModType"]); } if (Session["FilterBool"] != null) { filter = Convert.ToBoolean(Session["FilterBool"]); } string ConnectionString = "Data Source=.\\SQLEXPRESS;AttachDbFilename=C:\\inetpub\\wwwroot\\stian\\App_Data\\Database.mdf;Integrated Security=True;User Instance=True"; connection = new System.Data.SqlClient.SqlConnection(ConnectionString); System.Data.SqlClient.SqlDataReader reader; command = connection.CreateCommand(); connection.Open(); CommandText = "SELECT * FROM Command"; if (filter) { CommandText = "SELECT * FROM Command WHERE Game='" + game + "' AND Type='" + modtype + "'"; } command.CommandText = CommandText; reader = command.ExecuteReader(); labDownloadList.Text = ""; while (reader.Read()) { string game = reader.GetString(1); string author = reader.GetString(2); string downloadlink = reader.GetString(3); string size = reader.GetString(4); string description = reader.GetString(5); string version = reader.GetString(6); string screenshotlink = reader.GetString(7); Int64 AmountDownloaded = reader.GetInt64(8); labDownloadList.Text += "Game: " + game + "<br>"; labDownloadList.Text += "Author: " + author + "<br>"; labDownloadList.Text += "Size: " + size + "<br>"; labDownloadList.Text += "Description: " + description + "<br>"; labDownloadList.Text += "Version: " + 
version + "<br>"; labDownloadList.Text += "<img src='" + screenshotlink + " /><br>"; labDownloadList.Text += "Downloaded: " + AmountDownloaded + " times<br><hr>"; labDownloadList.Text += "<a href='" + downloadlink + "'>Download</a><br>"; } } protected void Page_UnLoad(object sender, EventArgs e) { Session["Game"] = game; Session["ModType"] = modtype; Session["FilterBool"] = filter; connection.Close(); } protected void btnFilter_Click(object sender, EventArgs e) { game = lstGames.SelectedValue; modtype = lstTypeMod.SelectedValue; filter = true; } ```
To be very clear. The button click event happens after the Page\_Load event meaning that the filtering does not get applied on the first postback. It has been updated on the second postback and you see the filtering. The simplest change to get your code to work is to move all the code in your Page\_Load event into OnPreRender so the reload happens after the button click event. A cleaner solution however is probably to move it into a LoadData function and call that on PageLoad when it is not a postback and also call it on the button click event after updating your filters. That will prevent a call to the database on any postback page cycles that do not need to reload the data: ``` protected void Page_Load(object sender, EventArgs e) { if (!Page.IsPostBack) { LoadData(); } } private void LoadData() { labDownloadList.Text = null; //Session variables: if (Session["Game"] != null) ... } protected void btnFilter_Click(object sender, EventArgs e) { game = lstGames.SelectedValue; modtype = lstTypeMod.SelectedValue; filter = true; LoadData(); } ``` A last piece of quick advice for a budding ASP.Net developer is to thoroughly learn the page lifecycle. Knowing the sequence of events on a page is essential. Good Luck.
Microsoft's overview of the [Page Life Cycle](http://msdn.microsoft.com/en-us/library/ms178472.aspx) may be helpful in understanding the flow (and resolving your issue).
ASP.NET C#, need to press a button twice to make something happen
[ "", "c#", "asp.net", "" ]
I have this code :- ``` using (System.Security.Cryptography.SHA256 sha2 = new System.Security.Cryptography.SHA256Managed()) { .. } ``` Do I need to put this line of code, just BEFORE I leave that dispose scope .. or does the dispose 'call' that already. ``` sha2.Clear(); ```
Since AFAIK the Clear() method just calls Dispose, the using block should be enough to ensure that the resources used are released.
IMHO if calling Dispose() is not enough to dispose off an object then either there is a serious bug in the code or a serious flaw in the design. So don't worry about taking any additional steps in your own code!
Is disposing this object enough, or do I need to do more?
[ "", "c#", ".net", "dispose", "sha2", "" ]
When I run a wxPython application, it prints the string “Redirecting output to win32trace remote collector”and I must open PythonWin's trace collector tool to view that trace output. Since I'm not interested in collecting this output, how should I disable this feature?
You can even pass that when you instantiate your wx.App(): ``` if __name__ == "__main__": app = wx.App(redirect=False) #or 0 app.MainLoop() ``` [wxPython wx.App docs](http://wxpython.org/docs/api/wx.App-class.html#__init__)
This message deceived me into thinking win32trace was preventing me from seeing uncaught exceptions in the regular console (of my IDE). The real issue was that wxPython by default redirects stdout/stderr to a popup window that quickly disappeared after an uncaught exception. To solve *that* problem, I simply had to pass ``` redirect=0 ``` to the superclass constructor of my application. ``` class MyApp(wx.App): def __init__(self): # Prevent wxPython from redirecting stdout/stderr: super(MyApp, self).__init__(redirect=0) ``` That fix notwithstanding, I am still curious about how to control win32trace.
How do I disable PythonWin's “Redirecting output to win32trace remote collector” feature without uninstalling PythonWin?
[ "", "python", "windows", "wxpython", "" ]
Where can I find a free, lightweight YUI-like compressor for PHP? I am sure it will decrease the file size but will compressing PHP code boost its performance? Is this the same thing as an obfuscator?
There is a product called PHP Encoder by ionCube (<http://www.ioncube.com/sa_encoder.php>), an enterprise-grade compressor and obfuscator. PHP Encoder is a PHP extension to create and run compiled bytecodes for accelerated runtime performance and maximum security. It will shrink the file size, and speed up runtime because the code is already partially compiled.
Compressing JavaScript has benefits because the script has to be sent over the Net to the client before it can be interpreted -- the smaller the file size, the faster it reaches the end user. PHP is interpreted directly on the server, so compressing the code won't affect how fast it runs. If it's speed gains you want, you should look into a [PHP accelerator](http://en.wikipedia.org/wiki/PHP_accelerator) that will cache the compiled PHP bytecode of your app.
Where can I find a free, lightweight YUI-like compressor for PHP?
[ "", "php", "yui", "" ]
I've been trying to deal with some delimited text files that have non-standard delimiters (not comma/quote or tab delimited). The delimiters are random ASCII characters that don't show up often between the delimiters. After searching around, I seem to have found no solutions in .NET that suit my needs, and the custom libraries that people have written for this seem to have some flaws when it comes to gigantic input (a 4GB file with some field values easily having several million characters). While this seems a bit extreme, it is actually a standard in the Electronic Document Discovery (EDD) industry for some review software to have field values that contain the full contents of a document. For reference, I've previously done this in Python using the csv module with no problems. Here's an example input: ``` Field delimiter = quote character = þ þFieldName1þþFieldName2þþFieldName3þþFieldName4þ þValue1þþValue2þþValue3þþSomeVery,Very,Very,Large value(5MB or so)þ ...etc... ``` Edit: So I went ahead and created a delimited file parser from scratch. I'm kind of wary of using this solution as it may be prone to bugs. It also doesn't feel "elegant" or correct to have to write my own parser for a task like this. I also have a feeling that I probably didn't have to write a parser from scratch for this anyway.
Use the [File Helpers API](http://filehelpers.sourceforge.net/). It's .NET and open source. It's extremely high performance using compiled IL code to set fields on strongly typed objects, and supports streaming. It supports all sorts of file types and custom delimiters; I've used it to read files larger than 4GB. If for some reason that doesn't do it for you, try just reading line by line with a string.split: ``` public IEnumerable<string[]> CreateEnumerable(StreamReader input) { string line; while ((line = input.ReadLine()) != null) { yield return line.Split('þ'); } } ``` That'll give you simple string arrays representing the lines in a streamy fashion that you can even Linq into ;) Remember however that the IEnumerable is lazy loaded, so don't close or alter the StreamReader until you've iterated (or caused a full load operation like ToList/ToArray or such - given your filesize however, I assume you won't do that!). Here's a good sample use of it: ``` using (StreamReader sr = new StreamReader("c:\\test.file")) { var qry = from l in CreateEnumerable(sr).Skip(1) where l[3].Contains("something") select new { Field1 = l[0], Field2 = l[1] }; foreach (var item in qry) { Console.WriteLine(item.Field1 + " , " + item.Field2); } } Console.ReadLine(); ``` This will skip the header line, then print out the first two field from the file where the 4th field contains the string "something". It will do this without loading the entire file into memory.
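For reference, the streaming approach the asker says worked in Python can be sketched without the csv module at all, given the sample layout (one record per line, every field wrapped in þ and butted against the next). This is a hypothetical helper, and real EDD exports with embedded newlines inside multi-megabyte fields would need a smarter reader:

```python
def read_records(lines):
    """Yield one list of field values per input line, assuming the
    sample's layout: each field wrapped in 'þ' with no separator
    between a closing and the next opening quote (þV1þþV2þ...)."""
    for line in lines:
        line = line.rstrip("\r\n")
        if not line:
            continue
        # drop the outer quotes, then split on the quote-quote seam
        yield line[1:-1].split("þþ")
```

Because it's a generator over an iterable of lines, it can wrap an open file handle and stream records lazily, just like the `CreateEnumerable` example above.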
Windows and high-performance I/O means: use [IO Completion](http://blogs.msdn.com/kavitak/archive/2003/12/15/Async-I_2F00_O-and-I_2F00_O-completion-ports.aspx) ports. You may have to do some extra plumbing to get it working in your case. This is with the understanding that you want to use C#/.NET, and according to [Joe Duffy](http://www.bluebytesoftware.com/blog/PermaLink,guid,f8404ab3-e3e6-4933-a5bc-b69348deedba.aspx) > 18) Don’t use Windows Asynchronous Procedure Calls (APCs) in managed > code. I had to learn that one the hard way ;), but ruling out APC use, IOCP is the only sane option. It also supports many other types of I/O, frequently used in socket servers. As far as parsing the actual text, check out [Eric White's](http://blogs.msdn.com/ericwhite/archive/2008/09/30/linq-to-text-and-linq-to-csv.aspx) blog for some streamlined stream use.
What is the fastest way to parse text with custom delimiters and some very, very large field values in C#?
[ "", "c#", "parsing", "bulk", "csv", "" ]
In Python, how do I jump to a file in the Windows Explorer? I found a solution for jumping to folders: ``` import subprocess subprocess.Popen('explorer "C:\path\of\folder"') ``` but I have no solution for files.
From [Geoff Chappell's *The Windows Explorer Command Line*](http://www.geoffchappell.com/studies/windows/shell/explorer/cmdline.htm) ``` import subprocess subprocess.Popen(r'explorer /select,"C:\path\of\folder\file"') ```
A nicer and safer solution (only on Windows, unfortunately) is [os.startfile()](https://docs.python.org/3.6/library/os.html#os.startfile). When it's given a folder instead of a file, it will open Explorer. I'm aware that I don't completely answer the question, since it's not selecting a file, but using `subprocess` is always kind of a bad idea (for security reasons) and this solution may help other people.
Open explorer on a file
[ "", "python", "windows", "explorer", "" ]
Did the recent purchase of MySQL by Sun and the subsequent buggy releases kill the MySQL brand? I wholeheartedly embraced MySQL when it first came out, as I was a poor developer then and all the RDBMSs were too expensive. I have fond feelings for MySQL and their being able to compete with Oracle and SQL Server. I credit the original MySQL team for the existence of SQL Server Express. I now use SQL Server Express instead of MySQL for just about everything. First, I do not like Sun and second, SQL Server Express is significantly more robust and 'Enterprise' than MySQL. The only significant limitations on SQL Server Express are the 4GB db size and the lack of Agent. I find that the size limit is not a concern because by the time the db gets close to that size, the application should either be profitable (and you buy the license), or you should kill the product. The Agent issue is a nice-to-have, but not critical as you can work around it. It seems that for db simpletons like me, SQL Server Express is easier to set up and use and is faster and more stable. And for gurus, they will use PostgreSQL... Resolved: So basically, we have a bunch of SQL Server fans (albeit open-minded) on stackoverflow. It looks like I'm in the right place. It's sad to see MySQL transform from something you cheer to something you pity. I hope the founders of the original company try a new endeavor. I absolutely do not judge them for selling to SUNW; work is naturally profit driven. I hope we get some people with FU money who don't care about profits who create some products like MySQL used to be. Otherwise, MSFT might take away EXPRESS.
Having worked with both, I have to say that the limitations and/or bugs in MySQL were a big turn off for me... I don't like PHP, and while I respect the open source community for their advances with these two technologies I just can't see the elegance in the way either of them have been put together. But don't let my personal taste sway your judgement against MySQL. I do all of my prototyping in SQL Server Express and most of my clients use full blown SQL Server 2005. The ease of transition from one to the other makes it a no brainer for me - I can take anything I wrote for SQL Server Express and put it straight in without worrying that the syntax might be different. The two limitations don't even really get in the way in a dev environment - it's only when you come to production that they would tend to be a problem. For me, SQL Server wins the argument hands down.
So far no-one knows what Oracle is going to do to MySQL, not even Oracle. I've done extensive testing of MySQL and would say that in terms of performance it is about at SQL Server 7.0 level. That is fine if all you need is the performance of SQL Server 7.0. At the enterprise level it simply doesn't compete. If you look at the fanfare surrounding MySQL 5.4 it says "now supports more than 4 processors". Where MySQL scores highly is that it is so cheap that it makes a scale-out design feasible, in which case the raw power of an individual box simply doesn't matter. There are quite a few omissions in MySQL that will trap a SQL Server developer. No CHECK constraints, no indexed views, no separation of clustered indexes from primary keys. That said, it has a large number of features that are very useful to web developers. The Sun/MySQL guys are remarkably honest about their product. They say exactly what the strengths and weaknesses are. If you are used to some of the big vendors' sales tactics this comes as a massive shock. Ultimately this does inspire confidence in using the product because you know exactly what to expect. I would much sooner deal with a product that says "don't do this because this is beyond our limits" than one that says "our product is the cure for cancer" and it turns out it couldn't cure a simple hangover!
MySQL versus SQL Server Express
[ "sql", "mysql", "sql-server" ]
I'm hearing that some people believe storing info on the server in a session is a bad idea, that it's not secure. As a result, in a multi-page business process function, the application is writing data to a db, then retrieving the info when it's needed. Is there something necessarily unsafe about storing private info in a session?
There's not a security risk in storing attributes in a Session, as long as the session itself is safe from [hijacking](http://www.owasp.org/index.php/Session_hijacking_attack). There are some serious issues involving concurrency and sessions. Since it's extremely common for multiple threads to be making requests concurrently for a single session, you have to make sure that the objects you store in a Session are thread safe. Either make them immutable, or make them thread safe with memory barriers like synchronization. I highly recommend an [article on the subject by Brian Goetz](http://www.ibm.com/developerworks/library/j-jtp09238.html).
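To make the two options concrete, here is a minimal Java sketch (the class names are my own illustration, not from the original answer) of an immutable value object, which is safe to store in a session as-is, and a mutable counter guarded by synchronization:

```java
// Option 1: an immutable object. Safe to publish as a session attribute,
// because its state can never change after construction.
final class UserProfile {
    private final String name;
    UserProfile(String name) { this.name = name; }
    String getName() { return name; }
}

// Option 2: a mutable object made thread-safe with intrinsic locking,
// so concurrent requests in the same session cannot corrupt it.
final class VisitCounter {
    private int visits;
    synchronized void increment() { visits++; }
    synchronized int get() { return visits; }
}

class SessionAttributeDemo {
    public static void main(String[] args) throws InterruptedException {
        VisitCounter counter = new VisitCounter();
        // Two request threads hitting the same session attribute concurrently.
        Thread a = new Thread(() -> { for (int i = 0; i < 1000; i++) counter.increment(); });
        Thread b = new Thread(() -> { for (int i = 0; i < 1000; i++) counter.increment(); });
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter.get()); // 2000
    }
}
```

In a servlet you would then store either object with `session.setAttribute(...)`; the immutable object needs no locking at all, while the counter serializes all access to its state.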
HTTP sessions themselves aren't inherently unsafe. However, depending on your application server / container, the mechanism in which session cookies are passed back to the browser (and lack of transport layer security - SSL) can allow malicious parties to perform a variety of attacks (cross-site scripting, session hijacking, etc.). I would spend some time researching these things along with SQL injection to understand the full ramifications of using HTTP sessions. If your application runs within a firewall, there are often much bigger security risks than this one, such as social engineering.
Java session variables
[ "java", "session", "servlets", "variables", "jakarta-ee" ]
When reading lines from a text file using Python, the end-line character often needs to be truncated before processing the text, as in the following example:

```
f = open("myFile.txt", "r")
for line in f:
    line = line[:-1]
    # do something with line
```

Is there an elegant way or idiom for retrieving text lines without the end-line character?
The *idiomatic* way to do this in Python is to use **rstrip('\n')**:

```
for line in open('myfile.txt'):  # opened in text-mode; all EOLs are converted to '\n'
    line = line.rstrip('\n')
    process(line)
```

Each of the other alternatives has a gotcha:

* **file('...').read().splitlines()** has to load the whole file in memory at once.
* **line = line[:-1]** will fail if the last line has no EOL.
Simple. Use **splitlines()**

```
L = open("myFile.txt", "r").read().splitlines()
for line in L:
    process(line)  # this 'line' will not have '\n' character at the end
```
End-line characters from lines read from text file, using Python
[ "python" ]
Presently I'm starting to introduce the concept of Mock objects into my Unit Tests. In particular I'm using the Moq framework. However, one of the things I've noticed is that suddenly the classes I'm testing using this framework are showing code coverage of 0%. Now I understand that since I'm just mocking the class, it's not running the actual class itself... but how do I write these tests and have Code Coverage return accurate results? Do I have to write one set of tests that use Mocks and one set to instantiate the class directly? Perhaps I am doing something wrong without realizing it? Here is an example of me trying to Unit Test a class called "MyClass":

```
using Moq;
using NUnit.Framework;

namespace MyNameSpace
{
    [TestFixture]
    public class MyClassTests
    {
        [Test]
        public void TestGetSomeString()
        {
            const string EXPECTED_STRING = "Some String!";

            Mock<MyClass> myMock = new Mock<MyClass>();
            myMock.Expect(m => m.GetSomeString()).Returns(EXPECTED_STRING);

            string someString = myMock.Object.GetSomeString();

            Assert.AreEqual(EXPECTED_STRING, someString);
            myMock.VerifyAll();
        }
    }

    public class MyClass
    {
        public virtual string GetSomeString()
        {
            return "Hello World!";
        }
    }
}
```

Does anyone know what I should be doing differently?
You are not using your mock objects correctly. When you are using mock objects, you are meant to be testing how your code interacts with other objects without actually using the real objects. See the code below:

```
using Moq;
using NUnit.Framework;

namespace MyNameSpace
{
    [TestFixture]
    public class MyClassTests
    {
        [Test]
        public void TestGetSomeString()
        {
            const string EXPECTED_STRING = "Some String!";

            Mock<IDependance> myMock = new Mock<IDependance>();
            myMock.Expect(m => m.GiveMeAString()).Returns(EXPECTED_STRING);

            MyClass myobject = new MyClass();
            string someString = myobject.GetSomeString(myMock.Object);

            Assert.AreEqual(EXPECTED_STRING, someString);
            myMock.VerifyAll();
        }
    }

    public class MyClass
    {
        public virtual string GetSomeString(IDependance objectThatITalkTo)
        {
            return objectThatITalkTo.GiveMeAString();
        }
    }

    public interface IDependance
    {
        string GiveMeAString();
    }
}
```

It doesn't look like it is doing anything useful when your code is just returning a string without any logic behind it. The real power comes if your `GetSomeString()` method did some logic that may change the result of the output string depending on the return from the `IDependance.GiveMeAString()` method; then you can see how your method handles bad data being sent from the `IDependance` interface. Something like:

```
public virtual string GetSomeString(IDependance objectThatITalkTo)
{
    if (objectThatITalkTo.GiveMeAString() == "Hello World")
        return "Hi";

    return null;
}
```

Now if you have this line in your test:

```
myMock.Expect(m => m.GiveMeAString()).Returns(null);
```

What will happen to your `GetSomeString()` method?
A big mistake is mocking the [System Under Test](https://en.wikipedia.org/wiki/System_under_test "System Under Test") (SUT) itself; then you are testing something else entirely. You should mock only the SUT's dependencies.
How can I use Mock Objects in my unit tests and still use Code Coverage?
[ "c#", ".net", "unit-testing", "moq", "code-coverage" ]
I'm working on some upgrades to an internal web analytics system we provide for our clients (in the absence of a preferred vendor or Google Analytics), and I'm working on the following query:

```
select path as EntryPage, count(Path) as [Count]
from
(
    /* Sub-query 1 */
    select pv2.path
    from pageviews pv2
    inner join
    (
        /* Sub-query 2 */
        select pv1.sessionid, min(pv1.created) as created
        from pageviews pv1
        inner join Sessions s1 on pv1.SessionID = s1.SessionID
        inner join Visitors v1 on s1.VisitorID = v1.VisitorID
        where pv1.Domain = isnull(@Domain, pv1.Domain)
            and v1.Campaign = @Campaign
        group by pv1.sessionid
    ) t1 on pv2.sessionid = t1.sessionid and pv2.created = t1.created
) t2
group by Path;
```

I've tested this query with 2 million rows in the PageViews table and it takes about 20 seconds to run. I'm noticing a clustered index scan twice in the execution plan, both times it hits the PageViews table. There is a clustered index on the Created column in that table. The problem is that in both cases it appears to iterate over all 2 million rows, which I believe is the performance bottleneck. Is there anything I can do to prevent this, or am I pretty much maxed out as far as optimization goes? For reference, the purpose of the query is to find the first page view for each session. **EDIT:** After much frustration, despite the help received here, I could not make this query work. Therefore, I decided to simply store a reference to the entry page (and now exit page) in the sessions table, which allows me to do the following:

```
select pv.Path, count(*)
from PageViews pv
inner join Sessions s on pv.SessionID = s.SessionID
    and pv.PageViewID = s.ExitPage
inner join Visitors v on s.VisitorID = v.VisitorID
where ( @Domain is null or pv.Domain = @Domain )
    and v.Campaign = @Campaign
group by pv.Path;
```

This query runs in 3 seconds or less.
Now I either have to update the entry/exit page in real time as the page views are recorded (the optimal solution) or run a batch update at some interval. Either way, it solves the problem, but not like I'd intended. Edit Edit: Adding a missing index (after cleaning up from last night) reduced the query to mere milliseconds. Woo hoo!
For starters, ``` where pv1.Domain = isnull(@Domain, pv1.Domain) ``` won't SARG. You can't optimize a match on a function, as I remember.
To continue from doofledorf. Try this:

```
where (@Domain is null or pv1.Domain = @Domain)
    and v1.Campaign = @Campaign
```

Ok, I have a couple of suggestions

1. Create this covered index:

```
create index idx2 on [PageViews]([SessionID], Domain, Created, Path)
```

2. If you can amend the Sessions table so that it stores the entry page, eg. EntryPageViewID, you will be able to heavily optimise this.
T-SQL Query Optimization
[ "sql", "sql-server" ]
I wonder what options there are for .NET (or C# specifically) code coverage, especially in the lower priced segment? I am not looking for recommendations, but for a comparison of products based on facts. I know the following:

* [NCover](http://www.ncover.com/)
  + Seems to be very popular and looks quite good
  + Supports statement coverage and branch coverage
  + [$480 for "NCover 3 Complete"](https://www.ncover.com/quote)
  + Older beta versions are available [for free](https://stackoverflow.com/questions/276829/code-coverage-for-cnet#276840)
* [Visual Studio (2008 Pro) | (2005 Team System (Development, Test or Team Suite Editions))](https://web.archive.org/web/20090118182937/http://msdn.microsoft.com/en-us/vstudio/products/cc149003.aspx)
  + Well, it's Microsoft so I'd expect it to work properly
  + Fully integrated into Visual Studio
  + At least $5,469
* [PartCover](https://github.com/sawilde/partcover.net4) - no further development (moved to OpenCover)
  + Open source
  + Supports statement coverage
* [OpenCover](https://github.com/OpenCover/opencover) - successor to PartCover
  + Open source
  + Supports branch and statement coverage
  + 32 and 64 bit support
  + Silverlight support
  + [Background](http://scubamunki.blogspot.com/2011/06/opencover-first-beta-release.html)
  + [Tutorial on The Code Project by the primary developer](https://www.codeproject.com/Articles/677691/Getting-code-coverage-from-your-NET-testing-using)
  + No [.NET Core support yet](https://github.com/OpenCover/opencover/issues/595)
* [SD Test Coverage](http://www.semanticdesigns.com/Products/TestCoverage/CSharpTestCoverage.html)
  + Works with 32 and 64 bits, full C# 4.0
  + Handles both small and very large code bases
  + $250 for single user license
* [JetBrains dotCover](https://www.jetbrains.com/dotcover/)
  + $100 for Personal License. Free for user groups, open source projects, students and teachers.
  + Supports statement coverage
  + Silverlight support
* [NCrunch](https://www.ncrunch.net/)
  + $159 for personal license
  + $289 for commercial seat license
  + ~~Free during beta, [to become commercial, pricing unknown](https://blog.ncrunch.net/post/The-Future-Of-NCrunch-Part-2.aspx) [future unknown](https://blog.ncrunch.net/post/The-Future-of-NCrunch.aspx)~~
  + Code coverage indicators in Visual Studio
  + Continuous (near real time) testing
  + Visual per-test code coverage
  + Performance metrics, parallel multi-core test execution
* [NDepend](https://www.ndepend.com/Coverage.aspx)
  + [$410](https://www.ndepend.com/Purchase.aspx) for developer license
  + NDepend can import coverage data from NCover, DotCover, Visual Studio 2017, 2015, 2013, 2012, 2010 and 2008 Code Coverage files.
  + Dependency graph
  + Dependency structure matrix
  + Visualizing code metrics
  + Validating code rules
I use the version of NCover that comes with [TestDriven.NET](http://www.testdriven.net). It will allow you to easily right-click on your unit test class library, and hit *Test With→Coverage*, and it will pull up the report.
An alternative to NCover can be [PartCover](https://sourceforge.net/projects/partcover/), an open source code coverage tool for .NET very similar to NCover; it includes a console application, a GUI coverage browser, and XSL transforms for use in [CruiseControl.NET](http://en.wikipedia.org/wiki/CruiseControl). It is a very interesting product. **[OpenCover](https://github.com/OpenCover/opencover)** has replaced PartCover.
What can I use for good quality code coverage for C#/.NET?
[ "c#", ".net", "code-coverage" ]
I'm building a UserControl and I'm not sure how to handle exceptions. The Control itself is not very complicated: the user chooses an image from disk so they can authorize it. I don't know exactly how the control will be used, so if I use a MessageBox I might block the application, and if I just re-throw it I might crash it. Thanks in advance. Juan Zamudio
This is a common problem facing developers who build libraries. Try to weed out bugs and decide for the remaining error cases if it's an expected error (your control should not throw an exception but rather gracefully handle the error) or an unexpected exceptional condition (your control must throw an exception as soon as possible). You might also have a look at [Design By Contract](http://www.google.com/search?q=Design+By+Contract), a methodology to declare required preconditions and guaranteed postconditions. This may sound academic, but it leads to more robust code. UPDATE: A good introduction is <http://se.ethz.ch/~meyer/publications/computer/contract.pdf> Regards, tamberg
Unhandled exceptions should definitely be thrown so that the people using your control can see what's wrong.
Correct way to Handle Exceptions in UserControl
[ "c#", "winforms", ".net-3.5" ]
I've heard that there are some free applications that will check the vulnerability of a PHP website, but I don't know what to use. I'd like a free program (preferably with a GUI) for Windows that will analyze my site an give me a report. Anyone know of a solution?
There are only certain security holes you can check for with any program. You can check your PHP configuration, Apache configuration, passwords, common bugs, etc., but you can't really check programmatically for logic errors which might cause security holes. Your best bet would be to do a thorough code review of the website. Or, better yet, have several other people do a thorough code review of the website, looking for security holes.
[Top 10 Web Vulnerability Scanners](http://sectools.org/web-scanners.html) from Insecure.org (listing from 2006). Their number one, Nikto2, can be found [here](http://www.cirt.net/nikto2).
How can I check website security for free?
[ "php", "security" ]
With PHP, when does an included file get included? Is it during a preprocessing stage or is it during script evaluation? Right now I have several scripts that share the same header and footer code, which do input validation and exception handling. Like this:

```
/* validate input */
...

/* process/do task */
...

/* handle exceptions */
...
```

So I'd like to do something like this

```
#include "verification.php"

/* process/do task */
...

#include "exception_handling.php"
```

So if include happens as a preprocessing step, I can do the #include "exception\_handling.php", but if not, then any exception will kill the script before it has a chance to evaluate the include. Thanks
[PHP.net: include](http://fi.php.net/include/) gives a basic example:

```
vars.php
<?php
$color = 'green';
$fruit = 'apple';
?>

test.php
<?php
echo "A $color $fruit"; // A
include 'vars.php';
echo "A $color $fruit"; // A green apple
?>
```

so include happens when it's executed in code. Edit: fixed url.
PHP doesn't have a preprocessor. Starting a line with an '#' makes the line a comment. You have to do this to include a file:

```
include ("exception_handling.php");
include 'exception_handling.php'; // or this, the parentheses are optional
```

Read this for more information: <http://php.net/include>
In Php when is Include/Require evaluated?
[ "php", "include" ]
I'm trying to convert a character code to a character with chr(), but VBScript isn't giving me the value I expect. According to VBScript, character code 199 is: ``` � ``` However, when using something like Javascript's String.fromCharCode, 199 is: ``` Ç ``` The second result is what I need to get out of VBScript's chr() function. Any idea what the problem is?
**Edited to reflect comments**

`Chr(199)` returns a 2-byte character, which is being interpreted as 2 separate characters.

* use `ChrW(199)` to return a `Unicode` string.
* use `ChrB(199)` to return it as a single-byte character
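For what it's worth, the JavaScript side of the comparison can be sanity-checked directly (this snippet is mine, not part of the original answer); `ChrW(199)` should produce the same character, because both operate on Unicode code units rather than a legacy codepage:

```javascript
// Code point 199 is "Ç" (LATIN CAPITAL LETTER C WITH CEDILLA).
const ch = String.fromCharCode(199);
console.log(ch);               // "Ç"
console.log(ch.charCodeAt(0)); // 199
```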
Encoding is the problem. Javascript may be interpreting as latin-1; VBScript may be using a different encoding and getting confused.
VBScript chr() appears to return wrong value
[ "javascript", "vbscript", "fromcharcode", "chr" ]
How do you organize your stored procedures so you can easily find them and keep track of their dependencies?
I tend to name them according to a convention. Typically {TableName}\_{operation}{extra} where the extra part is optional. For example: Product\_Get, Product\_Add, Product\_Delete, Product\_Update, Product\_GetByName
I'm strongly considering creating database projects so that I can version the stored procedures and avoid the confusion when deploying from development to production. Once they start getting out of synch with a large project, things can get difficult fast.
How do you manage and organize your stored procedures?
[ "sql", "stored-procedures" ]
Say I have some code like

```
namespace Portal
{
    public class Author
    {
        public Author()
        {
        }

        private void SomeMethod()
        {
            string myMethodName = "";
            // myMethodName = "Portal.Author.SomeMethod()";
        }
    }
}
```

Can I find out the name of the method I am using? In my example I'd like to programmatically set `myMethodName` to the name of the current method (ie in this case `"Portal.Author.SomeMethod"`). Thanks
``` MethodInfo.GetCurrentMethod().Name ```
```
System.Reflection.MethodBase.GetCurrentMethod().Name
```
Can I find out the name of the method I am using?
[ "c#", "oop", "reflection" ]
I'm currently using the Yahoo YUI javascript library in a couple of my projects. However, I'm a little concerned about three things. First, they laid off 10% of their employees. Second, their stock price keeps falling: especially after ignoring the MS takeover earlier this year. Third, what if someone does buy them? The only reason I bring this up is that I tend to build applications that are going to be around for 8 to 10 years. What would you do?
Yahoo is a major company that won't end in the next couple of years. The Yahoo! library is open source, so you will have other people to continue to improve it if Yahoo were to go bankrupt. No technology is 100% safe from a 10-year perspective; I think you aren't in danger with it. In 10 years JavaScript will be completely different and most frameworks will not be the same, so I think whatever you choose you will need to change a lot of things in 10 years ;) Just be sure to keep a version of the code in your repository to always have the latest version that works for your system and you will be fine.
As a member of the YUI team, I would add the following to this conversation: Almost everyone who has ever worked on the team is still with Yahoo and still working on YUI -- a remarkable consistency for a project that is now almost four years old. No one can predict the future of Yahoo at this point (or of any other company), but you can bank on the code you're using today. It's free, open under BSD, and no one can prevent you from using it regardless of what may happen in the future. We continue to be excited about YUI and we think its next four years will be better than the last four. Regards, Eric
To YUI or not to YUI?
[ "javascript", "yui", "yahoo" ]
Per the Java documentation, the [hash code](http://java.sun.com/javase/6/docs/api/java/lang/String.html#hashCode()) for a `String` object is computed as:

> ```
> s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]
> ```
>
> using `int` arithmetic, where `s[i]` is the *i*th character of the string, `n` is the length of the string, and `^` indicates exponentiation.

Why is 31 used as a multiplier? I understand that the multiplier should be a relatively large prime number. So why not 29, or 37, or even 97?
According to Joshua Bloch's [Effective Java, Second Edition](https://rads.stackoverflow.com/amzn/click/com/0321356683) (a book that can't be recommended enough, and which I bought thanks to continual mentions on Stack Overflow):

> The value 31 was chosen because it is an odd prime. If it were even and the multiplication overflowed, information would be lost, as multiplication by 2 is equivalent to shifting. The advantage of using a prime is less clear, but it is traditional. A nice property of 31 is that the multiplication can be replaced by a shift and a subtraction for better performance: `31 * i == (i << 5) - i`. Modern VMs do this sort of optimization automatically.

*(from Chapter 3, Item 9: Always override `hashCode` when you override `equals`, page 48)*
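Both claims are easy to check. The sketch below (my own, not from the book) verifies the shift-and-subtract identity and reimplements the documented polynomial with Horner's rule to show it agrees with `String.hashCode()`:

```java
class HashDemo {
    // The documented formula: s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1],
    // evaluated with Horner's rule using plain int arithmetic.
    static int polyHash(String s) {
        int h = 0;
        for (int i = 0; i < s.length(); i++) {
            h = 31 * h + s.charAt(i);
        }
        return h;
    }

    public static void main(String[] args) {
        // 31 * i can indeed be computed as a shift and a subtraction.
        int i = 12345;
        System.out.println(31 * i == (i << 5) - i); // true

        // The hand-rolled polynomial agrees with the JDK implementation.
        System.out.println(polyHash("hello") == "hello".hashCode()); // true
    }
}
```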
Goodrich and Tamassia computed from over 50,000 English words (formed as the union of the word lists provided in two variants of Unix) that using the constants 31, 33, 37, 39, and 41 will produce fewer than 7 collisions in each case. This may be the reason that so many Java implementations choose such constants. See section 9.2 Hash Tables (page 522) of [Data Structures and Algorithms in Java](https://enos.itcollege.ee/%7Ejpoial/algorithms/GT/Data%20Structures%20and%20Algorithms%20in%20Java%20Fourth%20Edition.pdf).
Why does Java's hashCode() in String use 31 as a multiplier?
[ "java", "string", "algorithm", "hash" ]
What I'd like to do is produce an HTML/CSS/JS version of the following. The gridlines and other aspects are not important. It's more of a question how to do the background databars. [![alt text](https://i.stack.imgur.com/tPLAD.png)](https://i.stack.imgur.com/tPLAD.png) (source: [tech-recipes.com](http://blogs.tech-recipes.com/shamanstears/files/2008/04/excel_databars2.png))
Make the bars as background images and position them to show values. eg. with a fixed column width of 100px:

```
<div style="background: url(bg.gif) -50px 0 no-repeat;">5</div>
<div style="background: url(bg.gif) -20px 0 no-repeat;">8</div>
```

If your columns have to be flexible size (not fixed, and not known at the time the page is produced), it's a bit trickier:

```
<style type="text/css">
    .cell { position: relative; }
    .cell .back { position: absolute; z-index: 1; background: url(bg.gif); }
    .cell .value { position: relative; z-index: 2; }
</style>

<div class="cell">
    <div class="back" style="width: 50%;">&nbsp;</div>
    <div class="value">5</div>
</div>
<div class="cell">
    <div class="back" style="width: 80%;">&nbsp;</div>
    <div class="value">8</div>
</div>
```
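If the widths are computed from the data at runtime, a small helper can produce the percentage for each cell's `.back` element. This is a sketch of my own, not part of the original answer, and the scaling rule (largest value fills 100%) is an assumption:

```javascript
// Compute the databar width for a value, as a CSS percentage string,
// scaling so the largest value in the column fills the whole cell.
function barWidth(value, max) {
    if (max <= 0) return "0%";
    const pct = Math.round((value / max) * 100);
    return pct + "%";
}

// Example: values 5 and 8, with 8 as the column maximum.
console.log(barWidth(5, 8)); // "63%"
console.log(barWidth(8, 8)); // "100%"
```

You would then set `back.style.width = barWidth(value, max)` for each cell when building the table.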
A javascript-based solution like this [cross-browser gradient](http://slayeroffice.com/code/gradient/) might be a good start. With some DHTML, you can make a [bar with a given length](http://slayeroffice.com/code/gradientProgressBar/).
How could you implement something like Excel 2007's databars in HTML/CSS/JS?
[ "javascript", "excel", "user-interface" ]
I'm trying to use some extension methods that I use to apply consistent formatting to DateTime and Int32 - which works absolutely fine in code behind, but I'm having issues with databinding. I get:

```
'System.DateTime' does not contain a definition for 'ToCustomShortDate'
```

for

```
<%# ((ProductionDetails)Container.DataItem).StartDate.ToCustomShortDate() %>
```

(inside a templatefield of a gridview contained on a usercontrol) Even when I'm including the namespace that the extension method is defined in at the top of the usercontrol:

```
<%@ import namespace="MyAssembly.Formatting" %>
```

Has anyone else come across this and is there any way to resolve it? **EDIT:** My mistake, above should be:

```
<%@ import namespace="MyNamespace.Formatting" %>
```

ie. I'm not incorrectly referencing the namespace (works verbatim in the code behind)
Found the issue! I was including the namespace correctly as I thought, but the real issue was that the app was only including the .NET 3.5 assemblies and not being compiled using the 3.5 compiler; it was missing some entries from web.config, which I realised when I created an empty project and tried it (successfully) in there! Error messages can be so misleading sometimes...
Regarding the extension method I implemented above, should my namespace look like this?

```
namespace MyNamespace.Formatting
```
Using extension methods within inline databinding context
[ "c#", "asp.net", "data-binding", "gridview", "extension-methods" ]
I am testing against the following test document:

```
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>hi there</title>
</head>
<body>
    <img class="foo" src="bar.png"/>
</body>
</html>
```

If I parse the document using lxml.html, I can get the IMG with an xpath just fine:

```
>>> root = lxml.html.fromstring(doc)
>>> root.xpath("//img")
[<Element img at 1879e30>]
```

However, if I parse the document as XML and try to get the IMG tag, I get an empty result:

```
>>> tree = etree.parse(StringIO(doc))
>>> tree.getroot().xpath("//img")
[]
```

I can navigate to the element directly:

```
>>> tree.getroot().getchildren()[1].getchildren()[0]
<Element {http://www.w3.org/1999/xhtml}img at f56810>
```

But of course that doesn't help me process arbitrary documents. I would also expect to be able to query etree to get an xpath expression that will directly identify this element, which, technically I can do:

```
>>> tree.getpath(tree.getroot().getchildren()[1].getchildren()[0])
'/*/*[2]/*'
>>> tree.getroot().xpath('/*/*[2]/*')
[<Element {http://www.w3.org/1999/xhtml}img at fa1750>]
```

But that xpath is, again, obviously not useful for parsing arbitrary documents. Obviously I am missing some key issue here, but I don't know what it is. My best guess is that it has something to do with namespaces, but the only namespace defined is the default and I don't know what else I might need to consider in regards to namespaces. So, what am I missing?
The problem is the namespaces. When parsed as XML, the img tag is in the <http://www.w3.org/1999/xhtml> namespace since that is the default namespace for the element. You are asking for the img tag in no namespace. Try this:

```
>>> tree.getroot().xpath(
...     "//xhtml:img",
...     namespaces={'xhtml': 'http://www.w3.org/1999/xhtml'}
... )
[<Element {http://www.w3.org/1999/xhtml}img at 11a29e0>]
```
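The same prefix-mapping idea works with the standard library's `xml.etree.ElementTree`, whose more limited XPath support also accepts a prefix-to-URI mapping. This is an illustrative sketch of my own, not part of the original answer:

```python
import xml.etree.ElementTree as ET

doc = """<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>hi there</title></head>
<body><img class="foo" src="bar.png"/></body>
</html>"""

root = ET.fromstring(doc)

# Unprefixed names are looked up in no namespace, so this finds nothing.
print(root.findall(".//img"))  # []

# Bind a prefix to the default namespace URI and use it in the query.
ns = {"xhtml": "http://www.w3.org/1999/xhtml"}
print([e.get("src") for e in root.findall(".//xhtml:img", ns)])  # ['bar.png']
```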
[XPath considers all unprefixed names to be in "no namespace"](http://www.w3.org/TR/xpath#node-tests). In particular the spec says: "A QName in the node test is expanded into an expanded-name using the namespace declarations from the expression context. This is the same way expansion is done for element type names in start and end-tags except that the default namespace declared with xmlns is not used: if the QName does not have a prefix, then the namespace URI is null (this is the same way attribute names are expanded). " See those two detailed explanations of the problem and its solution: [**here**](http://www.topxml.com/people/bosley/defaultns.asp) and [**here**](http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1455253&SiteID=1). The solution is to associate a prefix (with the API that's being used) and to use it to prefix any unprefixed name in the XPath expression. Hope this helped. Cheers, Dimitre Novatchev
Why doesn't xpath work when processing an XHTML document with lxml (in python)?
[ "python", "xml", "xhtml", "xpath", "lxml" ]
I have a WCF Service that should not enter the faulted state. If there's an exception, it should be logged and the service should continue uninterrupted. The service has a one-way operation contract and is reading messages from an MSMQ. My problems are twofold: 1. The service appears to be swallowing an exception/fault so I am unable to debug it. How do I get the service to expose the exception so that I can log or handle it? 2. The service is entering into a faulted state after this exception is swallowed. How do I prevent the service from entering into a faulted state?
Most, if not all exceptions can be seen in the WCF Trace ([Configuring Tracing](http://msdn.microsoft.com/en-us/library/ms733025.aspx)) and the trace is best viewed with the [Service Trace Viewer](http://msdn.microsoft.com/en-us/library/ms733025.aspx). Obviously, this is not something you should have running all day in a production environment, but it helps in troubleshooting anyway. Apart from that, note that oneways may not run as a true "fire and forget" depending on the SessionMode you use. If you have your service configured for SessionMode.Allowed or even SessionMode.Required, the oneway operation will run as if it was not oneway at all (this can be observed when using oneways over the netTcpBinding). To be frank, however, I don't know if that changes the type of exceptions you can get, or when you get them. However, in any case, you should get an exception if the request could not be send at all. AFAIK, the oneway "ends" when it is successfully enqued on the server side. So there is some place for (WCF framework related) exceptions until then (serialization/deserialization comes to mind). Then, such framework related exceptions are best seen (even an IErrorHandler doesn't get them all due to the fact when it is called in the request/response-flow) using the above mentioned trace / traceviewer.
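As a concrete starting point, tracing is usually enabled with a `system.diagnostics` section in the service's configuration file. This is a sketch of the standard listener setup; the file path and switch level are placeholders you would adjust:

```xml
<system.diagnostics>
  <sources>
    <!-- Capture WCF framework events, including exceptions swallowed
         before they ever reach user code. -->
    <source name="System.ServiceModel"
            switchValue="Warning, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <add name="xmlTraceListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="c:\logs\WcfTraces.svclog" />
      </listeners>
    </source>
  </sources>
  <trace autoflush="true" />
</system.diagnostics>
```

The resulting `.svclog` file can then be opened in the Service Trace Viewer mentioned above.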
The official documentation on how to handle Faults is here: * [**Handling Exceptions and Faults**](http://msdn.microsoft.com/en-us/library/ms789039.aspx) * [**Understanding State Changes**](http://msdn.microsoft.com/en-us/library/ms789041.aspx) with the main page being at [Channel Model Overview](http://msdn.microsoft.com/en-us/library/ms729840.aspx) There's a nice state diagram showing how things happen: ![enter image description here](https://i.stack.imgur.com/KXLng.gif)
How do I prevent a WCF service from enter a faulted state?
[ "c#", ".net", "wcf", "msmq" ]
I have a problem when assigning functions to the click event of a button in IE 7 with jQuery. Something like the following works fine in Opera but produces an infinite loop in IE:

```
function updateIndputFields(index, id) {
    $("#reloadBtn").click(function(){ updateIndputFields(index, id) });
}
```

As I understand it, an infinite loop would not be the expected behavior in this situation. But I'm new to jQuery so maybe I've missed something. Anyways, what should I do to make the click event of the reloadBtn button be set to 'updateIndputFields(index, id)' in IE?
I think the key to your answer is in unbinding the event that you have already bound to the click event. I used this on IE and, if I understand what you're trying to do, it seems to do what you need:

```
<script type="text/javascript">
function updateIndputFields(index, id) {
    $('#output').append('<p>' + index + ' : ' + id + '</p>');
    $('#reloadBtn').unbind('click');
    $("#reloadBtn").click(function(){ updateIndputFields(index, id) });
}
</script>

<p><a href="#" id="reloadBtn">reload</a></p>
<p><a href="#" onclick="updateIndputFields(1,2);return false;">start</a></p>
<div id="output"></div>
```

Each click should output the passed parameters exactly once into the output div. If you don't unbind the originally assigned click event, then it stays present and you attach a duplicate click event handler to the event each time it's clicked. Even in Firefox, not unbinding the event creates an interesting recursive situation.
or just use [.one](http://docs.jquery.com/Events/one#typedatafn) to bind the event
jQuery callbacks firing too early
[ "", "javascript", "jquery", "internet-explorer", "callback", "" ]
I am currently looking at the "unload" event of a window to try to determine how the "unload" event was triggered, but am having little success. Is there a way to determine how the javascript event was triggered? * Page Refresh * Back Button (or navigate away from the page) * Closing the Browser Essentially I need to execute some code only when the browser window is being closed, not refreshed or navigated away from. **Purpose:** When a customer does an update of our software, the update will redirect their first Internet request to an offer page. There is a button for a "Do Not Bother" option, but some users will simply close their browser. Upon closing the browser, I need to duplicate the "Do Not Bother" functionality so the user no longer gets redirected to the offer page. Simply attaching to the "unload" event will not work due to the different ways of leaving a page.
No, and if there was it would be browser dependent. What kind of code are you trying to run when the user closes the page? Is it to logout the user? Then the user would not be logged out if the browser crashes or the network connection breaks (and probably not if the computer goes to sleep/hibernation mode). If it is for logout-purposes you should probably use a timestamp variable at the server that gets updated with every request (or use a ajax-ping), and logout the user if it hasn't been seen for a specified time. Update: Found [this answer](https://stackoverflow.com/questions/181189/to-detect-if-the-user-is-closing-the-ie-browser-apart-from-onunload-event-as-it) here at stackoverflow.
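A minimal sketch of the server-side timestamp idea described above, written as plain logic (the class and method names are invented; in practice this would live in your server code and be driven by real requests or an ajax-ping):

```javascript
// Every request "touches" the session; a periodic sweep then logs out
// anyone who hasn't been seen within the timeout window.
function SessionTracker(timeoutMs) {
  this.timeoutMs = timeoutMs;
  this.lastSeen = {};
}

// Call on every request (or on every ajax-ping from the page).
SessionTracker.prototype.touch = function (user, now) {
  this.lastSeen[user] = now;
};

// Returns the users whose sessions should be expired at time `now`.
SessionTracker.prototype.expired = function (now) {
  var stale = [];
  for (var user in this.lastSeen) {
    if (now - this.lastSeen[user] > this.timeoutMs) stale.push(user);
  }
  return stale;
};

var tracker = new SessionTracker(30000); // 30-second timeout
tracker.touch('alice', 0);
tracker.touch('bob', 0);
tracker.touch('alice', 25000); // alice pings again, bob goes quiet
```

This sidesteps the unload problem entirely: it doesn't matter whether the browser was closed, crashed, or lost its network connection, because the decision is made from what the server last saw.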
Yes, there is a solution! I've designed a solution based on onBeforeUnload+onLoad events, HTML5 local storage and client/server communication. See the details on <https://stackoverflow.com/a/13916847/698168>.
Is there a way in javascript to detect if the unload event is caused via a refresh, the back button, or closing the browser?
[ "", "javascript", "events", "" ]
I've been using this function but I'd like to know what's the most efficient and accurate way to get it. ``` function daysInMonth(iMonth, iYear) { return 32 - new Date(iYear, iMonth, 32).getDate(); } ```
``` function daysInMonth (month, year) { // Use 1 for January, 2 for February, etc. return new Date(year, month, 0).getDate(); } console.log(daysInMonth(2, 1999)); // February in a non-leap year. console.log(daysInMonth(2, 2000)); // February in a leap year. ``` Day 0 is the last day in the previous month. Because the month constructor is 0-based, this works nicely. A bit of a hack, but that's basically what you're doing by subtracting 32. See more : [Number of days in the current month](https://stackoverflow.com/a/69267613/7942242)
Some answers (also on other questions) had leap-year problems or used the Date-object. Although javascript's `Date object` covers approximately 285616 years (100,000,000 days) on either side of January 1 1970, I was fed up with all kinds of unexpected date [inconsistencies](http://people.cs.nctu.edu.tw/~tsaiwn/sisc/runtime_error_200_div_by_0/www.merlyn.demon.co.uk/js-datex.htm) across different browsers (most notably year 0 to 99). I was also curious how to calculate it. So I wrote a simple and above all, *small* algorithm to calculate the *correct* ([Proleptic](http://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar) [Gregorian](http://en.wikipedia.org/wiki/Gregorian_calendar) / Astronomical / ISO 8601:2004 (clause 4.3.2.1), so [year `0`](http://en.wikipedia.org/wiki/0_%28year%29) exists and is a leap year and *negative years are supported*) number of days for a given month and year. It uses the [short-circuit bitmask-modulo leapYear algorithm](https://stackoverflow.com/a/11595914/588079) (slightly modified for js) and common mod-8 month algorithm. Note that in `AD/BC` notation, year 0 AD/BC does not exist: instead year `1 BC` is the leap-year! If you need to account for BC notation then simply subtract one year from the (otherwise positive) year-value first!! (Or subtract the year from `1` for further year-calculations.) ``` function daysInMonth(m, y){ return m===2?y&3||!(y%25)&&y&15?28:29:30+(m+(m>>3)&1); } ``` ``` <!-- example for the snippet --> <input type="text" value="enter year" onblur=" for( var r='', i=0, y=+this.value ; 12>i++ ; r+= 'Month: ' + i + ' has ' + daysInMonth(i, y) + ' days<br>' ); this.nextSibling.innerHTML=r; " /><div></div> ``` Note, months must be 1-based!
Note, this is a different algorithm than the magic number lookup I used in my [Javascript calculate the day of the year (1 - 366)](https://stackoverflow.com/a/27790471/588079) answer, because here the extra branch for the leap-year is only needed for February.
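For anyone decoding the one-liner, here is the same logic written out with comments; it should behave identically (months 1-based, as noted):

```javascript
// Expanded version of the bitmask one-liner; m is 1-based (1 = January).
function daysInMonth(m, y) {
  if (m === 2) {
    // Proleptic Gregorian leap-year test:
    // leap if divisible by 4, unless a century not divisible by 400.
    var notDivBy4  = y & 3;      // truthy when y % 4 !== 0
    var divBy25    = !(y % 25);  // with divisibility by 4, implies y % 100 === 0
    var notDivBy16 = y & 15;     // with divBy25, rules out y % 400 === 0
    return (notDivBy4 || (divBy25 && notDivBy16)) ? 28 : 29;
  }
  // Months alternate 31/30 starting at January; (m >> 3) flips the
  // parity from August onward so July and August are both 31 days.
  return 30 + ((m + (m >> 3)) & 1);
}
```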
What is the best way to determine the number of days in a month with JavaScript?
[ "", "javascript", "date", "" ]
I want to create a compiled JavaScript file for my website. For development I would prefer to keep the JavaScript in separate files and just as part of my automated scripts concatenate the files into one and run the compressor over it. My problem is that if I use the old DOS copy command it also puts in the EOF markers which the compressor complains about: copy /A \*.js compiled.js /Y What are other people doing?
I recommend using Apache Ant and YUI Compressor. <http://ant.apache.org/> <http://yui.github.com/yuicompressor/> Put something like this in the Ant build xml. It will create two files, application.js and application-min.js. ``` <target name="concatenate" description="Concatenate all js files"> <concat destfile="build/application.js"> <fileset dir="src/js" includes="*.js" /> </concat> </target> <target name="compress" depends="concatenate" description="Compress application.js to application-min.js"> <apply executable="java" parallel="false"> <filelist dir="build" files="application.js" /> <arg line="-jar" /> <arg path="path/to/yuicompressor-2.4.2.jar" /> <srcfile /> <arg line="-o" /> <mapper type="glob" from="*.js" to="build/*-min.js" /> <targetfile /> </apply> </target> ```
To copy without EOF use binary mode:

```
copy /B *.js compiled.js /Y
```

If the resulting file still has EOFs, they might have come from one of the original files; it can be fixed by this variant:

```
copy /A *.js compiled.js /B /Y
```

/A removes trailing EOFs from the original files if any, and /B prevents appending EOF to the resulting file. If an EOF is not at the end, the source file will be truncated at it. The order of switches is important. If you write

```
copy /A *.js /B compiled.js /Y
```

EOFs in source files won't be removed, but the resulting EOF still won't be appended.

Try it yourself, that's where I got it. DOS commands are weird.
How do I concatenate JavaScript files into one file?
[ "", "javascript", "batch-file", "compression", "" ]
I'm building a project along with a Dll. The Dll must support native code so I declared it as a /clr. My project was initially also a /clr project and everything was fine. However I'd like to include some NUnit testing so I had to switch my main project from /clr to /clr:pure. Everything still compiles but any Dll call generates a runtime error. When I revert back to /clr everything is OK.

In my Dll, exported functions are declared as follows:

```
#define DllExport __declspec( dllexport )

DllExport bool DisplayScan(bool bShow, bool bAllPasses)
{
}
```

I also made a .def file containing the real names of all the exported functions

```
LIBRARY "Controller"
EXPORTS
   DisplayScan
```

From my main project my imports are declared as follows:

```
#define _DllImport [DllImport("Controller.dll", CallingConvention = CallingConvention::Cdecl)]

static _DllImport bool DisplayScan(bool bShow, bool bAllPasses)
```

Anyone ever encountered such a problem?
OK, everything is working now. In fact, it has been working from the beginning.

Moral: don't try to cast a `char*` into a `std::string`.

Weird thing: it's OK in /clr until you return from the function. It crashes right away in /clr:pure.
Basically you are doing something that's not supported; /clr:pure and native DLL exports. As quoted from [this MSDN article](http://msdn.microsoft.com/en-us/library/85344whh.aspx) "pure assemblies cannot export functions that are callable from native functions because entry points in a pure assembly use the \_\_clrcall calling convention." I'm not sure of the best workaround. However, with a little experimenting, you could probably take advantage of the \_\_clrcall calling convention with the /clr option. [Here's a link](http://social.msdn.microsoft.com/Forums/en-US/vclanguage/thread/e15d6303-876f-4a40-b9c5-f6945517cd23/) that may be useful. From what I can gather you should be able to export those managed classes and consume them from within a managed assembly such as your managed NUnit test project, but keep your unmanaged exports there with different method signatures. Keep in mind that as soon as you expose any .net class via an export, it will need to use the \_\_clrcall calling convention.
Using mixed DLLs from /clr:pure projects
[ "", "c++", "dll", "visual-studio-2005", "clr", "mixed-mode", "" ]
I have a JEdit (BeanShell) macro which opens a specific file then immediately saves the file to my c:\temp folder (so that I don't accidentally update the real file). Here is the BeanShell code:

```
logFilePath = "c:\\temp\\aj.txt";
jEdit.openFile( view , logFilePath );
_buffer = jEdit.getBuffer(logFilePath);
_buffer.save(view,"c:\\temp\\backup.txt",true);
```

This gives me the following error:

```
I/O Error
Each buffer can only execute one input/output operation at a time. Please wait until the current operation finishes (or abort it in the I/O progress monitor) before starting another one.
```

I have tried adding a while loop to wait until `buffer.isLoaded()` is true, but that just goes into an infinite loop. What does seem to work is popping up a message box ( `Macros.message` ). However, I really don't want to have this unnecessary dialog. I don't know much Java, so please tell me if I'm making a rookie mistake.

## Update:

Added my own answer to show the code pointed to from [Serhii's answer](https://stackoverflow.com/users/34009/serhii).
You can try [this solution](http://community.jedit.org/?q=node/view/4026), calling `VFSManager.waitForRequests();`.
### This Works This is the code pointed to by [Serhii's answer](https://stackoverflow.com/questions/295956/jedit-macro-open-and-save-file#296361), above. Add `VFSManager.waitForRequests();` after the `jEdit.openFile()` command. ### Full Code ``` logFilePath = "c:\\temp\\aj.txt"; jEdit.openFile( view , logFilePath ); VFSManager.waitForRequests(); /* VFSManager.waitForRequests(); jEdit waits then for the file to be completely loaded before continuing ... It's designed for waiting on all 'pending I/O requests'". */ _buffer = jEdit.getBuffer(logFilePath); _buffer.save(view,"c:\\temp\\backup.txt",true); ```
JEdit Macro - Open and Save File
[ "", "java", "macros", "jedit", "beanshell", "" ]
I have an object of type `ICollection<string>`. What is the best way to convert it to `string[]`? How can this be done in .NET 2.0? How can this be done more cleanly in a later version of C#, perhaps using LINQ in C# 3?
You could use the following snippet to convert it to an ordinary array: ``` string[] array = new string[collection.Count]; collection.CopyTo(array, 0); ``` That should do the job :)
If you're using C# 3.0 and .Net framework 3.5, you should be able to do: ``` ICollection<string> col = new List<string>() { "a","b"}; string[] colArr = col.ToArray(); ``` of course, you must have "using System.Linq;" at the top of the file
ICollection<string> to string[]
[ "", "c#", "generics", "collections", ".net-2.0", "" ]
After moving to .NET 2.0+, is there ever a reason to still use the System.Collections namespace (besides maintaining legacy code)? Should the System.Collections.Generic namespace always be used instead?
For the most part, the generic collections will perform faster than the non-generic counterpart and give you the benefit of having a strongly-typed collection. Comparing the collections available in System.Collections and System.Collections.Generic, you get the following "migration": ``` Non-Generic Generic Equivalent ------------------------------------------------------------ ArrayList List<T> BitArray N/A CaseInsensitiveComparer N/A CollectionBase Collection<T> Comparer Comparer<T> DictionaryBase Dictionary<TKey,TValue> Hashtable Dictionary<TKey,TValue> Queue Queue<T> ReadOnlyCollectionBase ReadOnlyCollection<T> SortedList SortedList<TKey,TValue> Stack Stack<T> DictionaryEntry KeyValuePair<TKey,TValue> ICollection N/A (use IEnumerable<T> or anything that extends it) IComparer IComparer<T> IDictionary IDictionary<TKey,TValue> IEnumerable IEnumerable<T> IEnumerator IEnumerator<T> IEqualityComparer IEqualityComparer<T> IList IList<T> ``` ICollection is immutable (no members to change the contents of the collection) while ICollection<T> is mutable. This makes the interfaces similar in name only while ICollection and IEnumerable<T> differ by very little. From this list, the only non-generic classes that don't have a generic counterpart are BitArray and CaseInsensitiveComparer.
In some circumstances the generic containers perform better than the old ones. They should at least perform as well as the old ones in all circumstances. And they help to catch programming errors. It's a rare combination of a more helpful abstraction and better performance, so there isn't much of a reason to avoid them. The only time to do so is when you are forced to by a crummy library you have to work with that was written before generics.
Generics and System.Collections
[ "", "c#", ".net", "generics", "collections", "" ]
I have a class that inherits a generic dictionary and an interface:

```
public class MyDictionary: Dictionary<string, IFoo>, IMyDictionary
{

}
```

The issue is that consumers of this class are looking for the `.Keys` and `.Values` properties of the interface, so I added:

```
/// <summary>
///
/// </summary>
ICollection<string> Keys { get; }

/// <summary>
///
/// </summary>
IEnumerable<IFoo> Values { get; }
```

to the interface. Now, the implementation needs to have these as well, but when I implement them, I get this error: "The keyword new is required because it hides property Keys..."

So what do I need to do? Should I be adding a `new` in front of these get properties?
Another option would be to change the types on the interface to be: ``` public interface IMyDictionary { /// <summary> /// /// </summary> Dictionary<string, IFoo>.KeyCollection Keys { get; } /// <summary> /// /// </summary> Dictionary<string, IFoo>.ValueCollection Values { get; } } ``` That way the interface is already implemented by the dictionary saving you the trouble of implementing the properties again, and it doesn't hide or cover the original implementation.
Dictionary<string, IFoo> implements the IDictionary<TKey,TValue> interface which already provides the Keys and Values properties. There shouldn't be a need to create your own properties, but the way to get around the compiler warning is to add the `new` keyword at the beginning of your property declarations in your class.
Class inherits generic dictionary<string, IFoo> and Interface
[ "", "c#", "generics", "collections", "interface", "" ]
I would like to check my JavaScript files without going to [JSLint](http://www.jslint.com/) web site. Is there a desktop version of this tool for Windows?
From <http://www.jslint.com/lint.html>: > The analysis is done by a script > running on your machine. Your script > is not sent over the network. > > It is also available as a [Konfabulator > widget](http://www.widgetgallery.com/?search=jslint). You can check a file by > dragging it and dropping it on the > widget. You can recheck the file by > double-clicking the widget. > > It is also available in a [WSH Command > Line](http://web.archive.org/web/20091217105428/http://www.jslint.com/wsh/index.html) version. > > It is also available in a [Rhino > Command Line](http://www.jslint.com/rhino/index.html) version. Or since JSLint is a JavaScript program running in your browser - you could grab the script and run it locally.
You can also use JavaScript Lint on your machine; get it from [JavaScript Lint](http://www.javascriptlint.com). There are instructions on how to integrate it into many editors/IDEs on the above site. I use it in UltraEdit and it works great.

From the above site:

> You can run JavaScript Lint several ways:
>
> You can integrate it into your IDE, such as Visual Studio, SciTE, or any other IDE supporting external tools. When JavaScript Lint finds an error, your IDE takes you directly to the line containing the error.
>
> You can run it through Windows Explorer, which Windows programmers may prefer.
>
> You can use the command line to integrate into your build system, or maybe you're a Linux programmer and simply prefer the command line!
Is there an offline version of JSLint for Windows?
[ "", "javascript", "jslint", "" ]
I'm building a program that has a class used locally, but I want the same class to be used the same way over the network. This means I need to be able to make synchronous calls to any of its public methods. The class reads and writes files, so I think XML-RPC is too much overhead.

I created a basic RPC client/server using the examples from Twisted, but I'm having trouble with the client.

```
c = ClientCreator(reactor, Greeter)
c.connectTCP(self.host, self.port).addCallback(request)
reactor.run()
```

This works for a single call; when the data is received I'm calling reactor.stop(), but if I make any more calls the reactor won't restart. Is there something else I should be using for this? Maybe a different Twisted module or another framework?

(I'm not including the details of how the protocol works, because the main point is that I only get one call out of this.)

Addendum & Clarification:

I shared a google doc with notes on what I'm doing. <http://docs.google.com/Doc?id=ddv9rsfd_37ftshgpgz>

I have a version written that uses fuse and can combine multiple local folders into the fuse mount point. The file access is already handled within a class, so I want to have servers that give me network access to the same class. After continuing to search, I suspect pyro (<http://pyro.sourceforge.net/>) might be what I'm really looking for (simply based on reading their home page right now) but I'm open to any suggestions.

I could achieve similar results by using an nfs mount and combining it with my local folder, but I want all of the peers to have access to the same combined filesystem, so that would require every computer to be an nfs server with a number of nfs mounts equal to the number of computers in the network.

**Conclusion:** I have decided to use rpyc as it gave me exactly what I was looking for: a server that keeps an instance of a class that I can manipulate as if it was local.
If anyone is interested I put my project up on Launchpad (<http://launchpad.net/dstorage>).
If you're even considering Pyro, check out [RPyC](http://rpyc.wikidot.com/) first, and re-consider XML-RPC. Regarding Twisted: try leaving the reactor up instead of stopping it, and just `ClientCreator(...).connectTCP(...)` each time. If you `self.transport.loseConnection()` in your Protocol you won't be leaving open connections.
For a synchronous client, Twisted probably isn't the right option. Instead, you might want to use the socket module directly. ``` import socket s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((self.host, self.port)) s.send(output) data = s.recv(size) s.close() ``` The `recv()` call might need to be repeated until you get an empty string, but this shows the basics. Alternatively, you can rearrange your entire program to support asynchronous calls...
How can I do synchronous rpc calls
[ "", "python", "twisted", "" ]
I've seen some nice screenshot popups around the web, do you know of any library to do that? I could write my own, but... if there's something free I can save time. Thanks!
Sounds like you want a lightbox. jQuery lightbox gets my vote. <http://leandrovieira.com/projects/jquery/lightbox/> Dead simple; no point in me writing examples, it's all there on the site.
Here is a collection of some Javascript libraries from [Smashing Magazine - 30 Scripts For Galleries, Slideshows and Lightboxes](http://www.smashingmagazine.com/2007/05/18/30-best-solutions-for-image-galleries-slideshows-lightboxes/) (I excluded CSS-based, see website for full list): * [JonDesign’s SmoothGallery](http://smoothgallery.jondesign.net/showcase/gallery/) * [Pyxy-gallery](http://fennecfoxen.org/pyxy/gallery) * [zenphoto](http://www.zenphoto.org/) * [Couloir.org: Resizing, Fading Slideshow Demo - AJAX Slideshow](http://www.couloir.org/js_slideshow/) * [Grey Box](http://orangoo.com/labs/GreyBox/) * [Lightbox2](http://www.huddletogether.com/projects/lightbox2/#example) * [Litebox](http://www.doknowevil.net/litebox/) * [Multifaceted Lightbox](http://www.gregphoto.net/lightbox/) * [Slightly ThickerBox](http://www.jasons-toolbox.com/SlightlyThickerBox/) * [TripTracker](http://slideshow.triptracker.net/) * [Slimbox](http://www.digitalia.be/software/slimbox) * [Suckerfish HoverLightbox](http://mondaybynoon.com/2006/03/27/suckerfish-hoverlightbox/) * [Suckerfish HoverLightbox Redux](http://mondaybynoon.com/2007/02/19/suckerfish-hoverlightbox-redux/) * [ThickBox](http://jquery.com/demo/thickbox/)
What javascript library do you recommend to display/popup screenshots on a webpage?
[ "", "javascript", "screenshot", "" ]
I need to round decimal numbers to six places using JavaScript, but I need to consider legacy browsers so I [can't rely on Number.toFixed](http://www.hunlock.com/blogs/The_Complete_Javascript_Number_Reference)

> The big catch with toExponential, toFixed, and toPrecision is that they are fairly modern constructs not supported in Mozilla until Firefox version 1.5 (although IE supported the methods since version 5.5). While it's mostly safe to use these methods, older browsers WILL break so if you are writing a public program it's recommended you provide your own prototypes to provide functionality for these methods for older browser.

I'm considering using something like

```
Math.round(N*1000000)/1000000
```

What is the best way to provide a prototype for Number.toFixed to older browsers?
Try this:

```
if (!Number.prototype.toFixed) Number.prototype.toFixed = function (precision) {
    precision = precision || 0;
    var power = Math.pow(10, precision);
    var s = String(Math.round(this * power) / power);
    if (precision === 0) return s;
    if (s.indexOf('.') === -1) s += '.';
    while (s.length - s.indexOf('.') - 1 < precision) s += '0';
    return s;
};
```

Note the padding loop at the end: without it, `(1.5).toFixed(2)` would return `"1.5"` instead of `"1.50"` as the native method does.
I think Firefox 1.5 and IE 5 are pretty much no longer used, or only by a very small number of people. It is a bit like coding to support Netscape Navigator... :-)

Unless some other major browser (Opera? Safari? unlikely...) doesn't support this, or your Web logs show a lot of legacy browser use, you can probably just use these methods. Sometimes, we have to move on. ^\_^

[EDIT] Works fine on Opera 9.50 and Safari 3.1

```
javascript: var num = 3.1415926535897932384; alert(num.toFixed(7));
```

The article you reference is a year and a half old, an eternity in the IT industry... I think that, unlike IE users, Firefox users often go to the latest version.
How to write a prototype for Number.toFixed in JavaScript?
[ "", "javascript", "rounding", "precision", "" ]
Seems like it might be useful to have the assert display a message when an assertion fails. Currently an `AssertionError` gets thrown; can you specify a custom message for it? Can you show an example mechanism for doing this (other than creating your own exception type and throwing it)?
You certainly can: ``` assert x > 0 : "x must be greater than zero, but x = " + x; ``` See [Programming with Assertions](https://docs.oracle.com/javase/8/docs/technotes/guides/language/assert.html) for more information.
``` assert (condition) : "some message"; ``` I'd recommend putting the conditional in brackets ``` assert (y > x): "y is too small. y = " + y; ``` Imagine if you came across code like this... ``` assert isTrue() ? true : false : "some message"; ``` Don't forget this has nothing to do with asserts you'd write in JUnit.
Can java's assert statement allow you to specify a message?
[ "", "java", "assert", "" ]
Which one of the two is more widespread? I want to read out the version number from <http://freshmeat.net/projects-xml/mysql/mysql.xml?branch_id=46519> but I want to use the one which more people have. If you know another way to get the latest stable version number of MySQL, please tell me ;)
For this kind of task, reading the document into a [`DomDocument`](https://www.php.net/domdocument) and using [`DomXPath`](http://docs.php.net/manual/it/domxpath.query.php) is probably more suitable. To answer your question, both libraries (as well as `DomDocument` + `DomXPath`) ship with PHP 5 by default, so they would be equally fine choices.
It has to be SimpleXML. It [is enabled by default](https://www.php.net/manual/en/simplexml.installation.php), is quicker to load XML documents than the Dom methods, has a smaller memory foot-print than the Dom methods, and has much simpler [xpath methods](https://www.php.net/manual/en/function.simplexml-element-xpath.php) than the Dom methods: ``` $xml = simplexml_load_file( 'http://freshmeat.net/projects-xml/mysql/mysql.xml?branch_id=46519' ); $result = $xml->xpath('//latest_release/latest_release_version'); // or '//latest_release/*' if you'd rather loop through all release information. while(list( , $node) = each($result)) echo $node, "\n"; ```
SimpleXML or XMLReader?
[ "", "php", "xml", "" ]
I've rewritten my family web site using JavaScript (JQuery) making Ajax calls to PHP on the back end. It's your standard "bunch of image thumbnails and one main image, and when you click on a thumbnail image the main image changes" kind of thing. Everything is working as expected when using Firefox, but on IE, when I click on a thumbnail, the main image changes to the one I clicked and then immediately changes back to the first one. I have tried MS Script Debugger to no avail; I set a breakpoint in the JavaScript code that starts the Ajax call, and when I click the thumbnail the breakpoint fires. Then I hit F5 and it continues but does not fire again. If I use Wireshark to watch the actual TCP packets over the network, I can see that we are definitely sending more than one request to the server. I cannot figure out where the second query (the one to revert back to the original image) comes from. Any suggestions? One example of what I'm talking about is [here](http://perrow.ca/gallery.php?tag=nicholas-1).
Debugging through your site, here's what looks to be happening:

After the first image is processed, the resize event is thrown, so this code gets called:

```
$(window).bind("resize", function(){ ResizeWindow( 'nicholas-1' ) });
```

which as you know reloads your gallery. Now I can't tell you why this is occurring, but this is where the trouble starts.

For future reference, to debug this I used VS2008 to attach to IE. I then put a break in $ajax() at:

```
// Send the data
try {
    xhr.send(s.data);
} catch(e) {
    jQuery.handleError(s, xhr, null, e);
}
```

Then I simply hit F5, which ran through the first two ajax calls. When I found the rogue ajax call, I opened up the call stack window and walked the call stack back up to the function I posted earlier.

Good luck.
You could use [Fiddler](http://www.fiddlertool.com/), a free debugging proxy for Internet Explorer. It was a great help for me many times when I had to debug specific, server-related problems on IE.

Here is an [Introduction to Fiddler on MSDN](http://i.msdn.microsoft.com/Bb250446.ie_introfiddler_fig04(en-us,VS.85).gif).

![alt text](http://i.msdn.microsoft.com/Bb250446.ie_introfiddler_fig04(en-us,VS.85).gif)
How can I find the source of rogue Ajax requests on IE?
[ "", "javascript", "jquery", "ajax", "internet-explorer", "" ]
What is the value of using IDictionary here?
The value of using an interface is always the same: you don't have to change client code when switching to another backend implementation. Consider that profiling your code later shows that a hash table implementation (used in the `Dictionary` class) isn't suited for your task and that a binary search tree would perform better. If you've coded to an interface then switching the implementation is straightforward. If, however, you've used a concrete class, you've got to change a lot more code in a lot more places. => This costs time and money.
IDictionary enables looser coupling. Say you have a method like this: ``` void DoSomething(IDictionary<string, string> d) { //... } ``` You can use it like this: ``` Dictionary<string, string> a = new Dictionary<string, string>(); SortedDictionary<string, string> b = new SortedDictionary<string, string>(); DoSomething(a); DoSomething(b); ```
IDictionary<string, string> versus Dictionary<string, string>
[ "", "c#", "generics", "collections", "" ]
I am currently in the process of rewriting an application whereby teachers can plan curriculum online. The application guides teachers through a process of creating a unit of work for their students. The tool is currently used in three states but we have plans to get much bigger than that. One of the major draw cards of the application is that all of the student outcomes are preloaded into the system. This allows teachers to search or browse through and select which outcomes are going to be met in each unit of work. When I originally designed the system I made the assumption that all student outcomes followed a similar hierarchy. That is, there are named nested containers and then outcomes. The original set of outcomes that I entered was three tiered. As such my database has the following structure:

=========================

*Tables in bold*

**h1**
id, Name

**h2**
id, parent\_id (h1\_id), Name

**h3**
id, parent\_id (h2\_id), Name

**outcome**
id, parent\_id (h3\_id), Name

=========================

Other than the obvious inability to add n levels of hierarchy, this method also made it difficult to display a list of all of the standards without recursively querying the database.

Once the student outcomes (and their parent categories) have been added there is very little reason for them to be modified in any way. The primary requirement is that they are easy and efficient to read.

So far all of the student outcomes from different schools / states / countries have roughly followed my assumption. This may not always be the case. All existing data must of course be transferred across from the current database. Given the above, what is the best way for me to store all the different sets of student outcomes? Some of the ideas I have had are listed below.

* Continue using 4 tables in the database; when selecting, either use recursion or lots of joins
* Use nested sets
* XML (either a global XML file for all of the different sets or an XML file for each)
I don't know that you actually need 4 tables for this. If you have a single table that tracks the parent\_id and a level you can have infinite levels. > **outcome** > > id, parent\_id, level, name You can use recursion to track through the tree for any particular element (you don't actually need level, but it can be easier to query with it). The alternative is nested sets. In this case you would still merge to a single table, but use the set stuff to track levels. Which one to use depends on your application. Read-intensive: nested sets Write-intensive: parent tree thingy This is because with nested sets you can retrieve the entire tree with a single query but at the cost of reordering the entire tree every time you insert a new node. When you just track the parent\_id, you can move or delete nodes individually. PS: I vote no to XML. You have the same recursive issues, plus the overhead of parsing the data as well as either storing it in the db or on the filesystem (which will cause concurrency issues).
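To make the single-table idea concrete, here is one way the flat `parent_id` rows can be turned back into a tree, sketched in JavaScript for brevity (the sample rows and outcome names are invented; a PHP version would be analogous): once every row carries a `parent_id`, you can pull the whole table in one query and rebuild the hierarchy in memory, instead of issuing one query per level.

```javascript
// Hypothetical flat rows, as they might come back from the single
// `outcome` table described above.
var rows = [
  { id: 1, parent_id: null, name: 'Mathematics' },
  { id: 2, parent_id: 1,    name: 'Number' },
  { id: 3, parent_id: 2,    name: 'Recognise odd and even numbers' },
  { id: 4, parent_id: 1,    name: 'Space' }
];

// Build a nested tree from the flat parent_id list in two passes:
// first index every row by id, then hang each node off its parent.
function buildTree(rows) {
  var byId = {}, roots = [];
  rows.forEach(function (r) {
    byId[r.id] = { name: r.name, children: [] };
  });
  rows.forEach(function (r) {
    (r.parent_id === null ? roots : byId[r.parent_id].children)
      .push(byId[r.id]);
  });
  return roots;
}
```

Because the nesting is recovered in memory, the table itself never dictates a fixed depth, which addresses the "n levels of hierarchy" problem in the question.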
I agree with the other poster - nested sets is the way to go I think. See here: <http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/> It explains the theory and compares it to what you are already using - which is a twist on adjacency really. It shows +/- of them all, and should help you reach a decision based on all of the subtleties of your project. Another thing I've seen (in CakePHP's tree behaviour) is actually to use both at once. Sure its not great performance wise, but under this model, you insert/remove things just as you would with adjacency, and then there is a method to run to rebuild the left/right edge values to allow you to do the selects in a nested sets fashion. Result is you can insert/delete much more easily. <http://book.cakephp.org/view/91/Tree>
Best way to represent an n-depth tree for use in PHP (MySQL / XML / ?)
[ "", "php", "mysql", "xml", "search", "tree", "" ]
With a class in Python, how do I define a function to print every single instance of the class in a format defined in the function?
I see two options in this case: ## Garbage collector ``` import gc for obj in gc.get_objects(): if isinstance(obj, some_class): dome_something(obj) ``` This has the disadvantage of being very slow when you have a lot of objects, but works with types over which you have no control. ## Use a mixin and weakrefs ``` from collections import defaultdict import weakref class KeepRefs(object): __refs__ = defaultdict(list) def __init__(self): self.__refs__[self.__class__].append(weakref.ref(self)) @classmethod def get_instances(cls): for inst_ref in cls.__refs__[cls]: inst = inst_ref() if inst is not None: yield inst class X(KeepRefs): def __init__(self, name): super(X, self).__init__() self.name = name x = X("x") y = X("y") for r in X.get_instances(): print r.name del y for r in X.get_instances(): print r.name ``` In this case, all the references get stored as a weak reference in a list. If you create and delete a lot of instances frequently, you should clean up the list of weakrefs after iteration, otherwise there's going to be a lot of cruft. Another problem in this case is that you have to make sure to call the base class constructor. You could also override `__new__`, but only the `__new__` method of the first base class is used on instantiation. This also works only on types that are under your control. **Edit**: The method for printing all instances according to a specific format is left as an exercise, but it's basically just a variation on the `for`-loops.
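The printing "exercise" the answer leaves open could look like this in Python 3 (a self-contained, slightly modernized variation of the weakref approach above; the class names and format string are illustrative, and the helper returns the formatted lines so the caller decides where to print them):

```python
import weakref
from collections import defaultdict

class KeepRefs:
    __refs__ = defaultdict(list)

    def __init__(self):
        self.__refs__[self.__class__].append(weakref.ref(self))

    @classmethod
    def get_instances(cls):
        # Yield only instances that are still alive.
        for inst_ref in cls.__refs__[cls]:
            inst = inst_ref()
            if inst is not None:
                yield inst

    @classmethod
    def format_instances(cls, fmt="{0.name}"):
        # Apply the format string to every live instance, in creation order.
        return [fmt.format(inst) for inst in cls.get_instances()]

class X(KeepRefs):
    def __init__(self, name):
        super().__init__()
        self.name = name

x = X("x")
y = X("y")
lines = X.format_instances("instance: {0.name}")
```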
You'll want to create a static list on your class, and add a `weakref` to each instance so the garbage collector can clean up your instances when they're no longer needed. ``` import weakref class A: instances = [] def __init__(self, name=None): self.__class__.instances.append(weakref.proxy(self)) self.name = name a1 = A('a1') a2 = A('a2') a3 = A('a3') a4 = A('a4') for instance in A.instances: print(instance.name) ```
Printing all instances of a class
[ "", "python", "class", "" ]
Suppose I have a class 'Application'. In order to be initialised it takes certain settings in the constructor. Let's also assume that the number of settings is so many that it's compelling to place them in a class of their own.

Compare the following two implementations of this scenario.

Implementation 1:

```
class Application
{
    Application(ApplicationSettings settings)
    {
        //Do initialisation here
    }
}

class ApplicationSettings
{
    //Settings related methods and properties here
}
```

Implementation 2:

```
class Application
{
    Application(Application.Settings settings)
    {
        //Do initialisation here
    }

    class Settings
    {
        //Settings related methods and properties here
    }
}
```

To me, the second approach is very much preferable. It is more readable because it strongly emphasises the relation between the two classes. When I write code to instantiate the Application class anywhere, the second approach is going to look prettier.

Now just imagine the Settings class itself in turn had some similarly "related" class and that class in turn did so too. Go only three such levels and the class naming gets out of hand in the 'non-nested' case. If you nest, however, things still stay elegant.

Despite the above, I've read people saying on StackOverflow that nested classes are justified only if they're not visible to the outside world; that is, if they are used only for the internal implementation of the containing class. The commonly cited objection is bloating the size of the containing class's source file, but partial classes are the perfect solution for that problem.

My question is, why are we wary of the "publicly exposed" use of nested classes? Are there any other arguments against such use?
I think it's fine. This is basically the builder pattern, and using nested classes works pretty well. It also lets the builder access private members of the outer class, which can be very useful. For instance, you can have a Build method on the builder which calls a private constructor on the outer class which takes an instance of the builder: ``` public class Outer { private Outer(Builder builder) { // Copy stuff } public class Builder { public Outer Build() { return new Outer(this); } } } ``` That ensures that the *only* way of building an instance of the outer class is via the builder. I use a pattern very much like this in my C# port of Protocol Buffers.
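For comparison only, the same "only the nested builder constructs the outer type" shape can be sketched in Python (Python has no enforced private constructors, so this is convention rather than a guarantee; all names here are made up):

```python
class Outer:
    def __init__(self, builder):
        # By convention, only Outer.Builder.build() calls this.
        self.host = builder.host
        self.port = builder.port

    class Builder:
        def __init__(self):
            # Defaults the builder starts from.
            self.host = "localhost"
            self.port = 80

        def with_host(self, host):
            self.host = host
            return self  # returning self allows chaining

        def with_port(self, port):
            self.port = port
            return self

        def build(self):
            return Outer(self)

outer = Outer.Builder().with_host("example.com").with_port(8080).build()
```

The nesting keeps the builder's name scoped under the type it builds, which is the readability point the answer is making.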
You can use namespaces to relate things that are... related. For example: ``` namespace Diner { public class Sandwich { public Sandwich(Filling filling) { } } public class Filling { } } ``` The advantage of this over using classes as if they were namespaces is that you can optionally use `using` on the calling side to abbreviate things: ``` using Diner; ... var sandwich = new Sandwich(new Filling()); ``` If you use the `Sandwich` class as if it were a namespace for `Filling`, you have to use the full name `Sandwich.Filling` to refer to `Filling`. And how are you going to sleep at night knowing that?
"Public" nested classes or not
[ "", "c#", "class-design", "nested-class", "" ]
I have often heard this term being used, but I have never really understood it. What does it mean, and can anyone give some examples/point me to some links? EDIT: Thanks to everyone for the replies. Can you also tell me how the canonical representation is useful in equals() performance, as stated in Effective Java?
Wikipedia points to the term [Canonicalization](http://en.wikipedia.org/wiki/Canonicalization). > A process for converting data that has more than one possible representation into a "standard" canonical representation. This can be done to compare different representations for equivalence, to count the number of distinct data structures, to improve the efficiency of various algorithms by eliminating repeated calculations, or to make it possible to impose a meaningful sorting order. The **Unicode** example made the most sense to me: > Variable-length encodings in the Unicode standard, in particular UTF-8, have more than one possible encoding for most common characters. This makes string validation more complicated, since every possible encoding of each string character must be considered. A software implementation which does not consider all character encodings runs the risk of accepting strings considered invalid in the application design, which could cause bugs or allow attacks. The solution is to allow a single encoding for each character. Canonicalization is then the process of translating every string character to its single allowed encoding. An alternative is for software to determine whether a string is canonicalized, and then reject it if it is not. In this case, in a client/server context, the canonicalization would be the responsibility of the client. In summary, a standard form of representation for data. From this form you can then convert to any representation you may need.
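The Unicode example can be demonstrated concretely. A small Python sketch (my illustration, not part of the quoted article):

```python
import unicodedata

# Two different Unicode representations of the same visible string:
composed = "\u00e9"     # single code point: LATIN SMALL LETTER E WITH ACUTE
decomposed = "e\u0301"  # 'e' followed by COMBINING ACUTE ACCENT

# They are not equal as raw code-point sequences...
naive_equal = composed == decomposed

# ...but normalizing both to one canonical form (NFC here) makes them
# directly comparable, which is exactly the canonicalization step.
canonical_equal = (
    unicodedata.normalize("NFC", composed)
    == unicodedata.normalize("NFC", decomposed)
)
```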
I believe there are two related uses of canonical: forms and instances. A **canonical form** means that values of a particular type of resource can be described or represented in multiple ways, and one of those ways is chosen as the favored canonical form. (That form is *canonized*, like books that made it into the bible, and the other forms are not.) A classic example of a canonical form is paths in a hierarchical file system, where a single file can be referenced in a number of ways: ``` myFile.txt # in current working dir ../conf/myFile.txt # relative to the CWD /apps/tomcat/conf/myFile.txt # absolute path using symbolic links /u1/local/apps/tomcat-5.5.1/conf/myFile.txt # absolute path with no symlinks ``` The classic definition of the canonical representation of that file would be the last path. With local or relative paths you cannot globally identify the resource without contextual information. With absolute paths you can identify the resource, but cannot tell if two paths refer to the same entity. With two or more paths converted to their canonical forms, you can do all the above, plus determine if two resources are the same or not, if that is important to your application (solve *the aliasing problem*). Note that the canonical form of a resource is not a quality of that particular form itself; there can be multiple possible canonical forms for a given type like file paths (say, lexicographically first of all possible absolute paths). One form is just selected as the canonical form for a particular application reason, or maybe arbitrarily so that everyone speaks the same language. Forcing objects into their **canonical instances** is the same basic idea, but instead of determining one "best" representation of a resource, it arbitrarily chooses one instance of a class of instances with the same "content" as the canonical reference, then converts all references to equivalent objects to use the one canonical instance. 
This can be used as a technique for optimizing both time and space. If there are multiple instances of equivalent objects in an application, then by forcing them all to be resolved as the single canonical instance of a particular value, you can eliminate all but one of each value, saving space and possibly time since you can now compare those values with reference identity (==) as opposed to object equivalence (`equals()` method). A classic example of optimizing performance with canonical instances is collapsing strings with the same content. Calling `String.intern()` on two strings with the same character sequence is guaranteed to return the same canonical String object for that text. If you pass all your strings through that canonicalizer, you know equivalent strings are actually identical object references, i.e., aliases The enum types in Java 5.0+ force all instances of a particular enum value to use the same canonical instance within a VM, even if the value is serialized and deserialized. That is why you can use `if (day == Days.SUNDAY)` with impunity in java if `Days` is an enum type. Doing this for your own classes is certainly possible, but takes care. Read *Effective Java* by Josh Bloch for details and advice.
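The `String.intern()` idea has a direct analogue in Python's `sys.intern`, which makes the canonical-instance effect easy to see (an illustrative sketch; the object-identity behavior shown is CPython-specific):

```python
import sys

# Build two equal strings at runtime so the interpreter doesn't
# fold them into one constant for us.
a = "".join(["can", "onical"])
b = "".join(["canon", "ical"])

# Equal by content, but (in CPython) two distinct objects.
equal_but_distinct = (a == b) and (a is not b)

# After interning, equal strings share one canonical instance,
# so cheap identity comparison ("is") now suffices.
ca = sys.intern(a)
cb = sys.intern(b)
same_instance = ca is cb
```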
What does the term "canonical form" or "canonical representation" in Java mean?
[ "", "java", "" ]
I want a query that returns a list of all the (user) stored procedures in a database by name, with the number of lines of code for each one. i.e. ``` sp_name lines_of_code -------- ------------- DoStuff1 120 DoStuff2 50 DoStuff3 30 ``` Any ideas how to do this?
``` select t.sp_name, sum(t.lines_of_code) - 1 as lines_ofcode, t.type_desc from ( select o.name as sp_name, (len(c.text) - len(replace(c.text, char(10), ''))) as lines_of_code, case when o.xtype = 'P' then 'Stored Procedure' when o.xtype in ('FN', 'IF', 'TF') then 'Function' end as type_desc from sysobjects o inner join syscomments c on c.id = o.id where o.xtype in ('P', 'FN', 'IF', 'TF') and o.category = 0 and o.name not in ('fn_diagramobjects', 'sp_alterdiagram', 'sp_creatediagram', 'sp_dropdiagram', 'sp_helpdiagramdefinition', 'sp_helpdiagrams', 'sp_renamediagram', 'sp_upgraddiagrams', 'sysdiagrams') ) t group by t.sp_name, t.type_desc order by 1 ``` Edited so it should also now work in SQL Server 2000- 2008 and to exclude Database Diagram-related sprocs and funcs (which appear like user created objects).
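The line-counting expression in the query above (total length minus the length with the CHAR(10)s removed) can be sanity-checked outside SQL. A quick Python equivalent, illustrative only:

```python
def count_lines(text):
    # Mirrors len(text) - len(replace(text, char(10), '')) from the SQL:
    # removing every newline and diffing the lengths counts the CHAR(10)s.
    return len(text) - len(text.replace("\n", ""))

# Hypothetical procedure body with five newline characters.
proc_body = "create procedure DoStuff1\nas\nbegin\n  select 1\nend\n"
newlines = count_lines(proc_body)
```

Note this counts newline characters, not lines; that is why the SQL answers adjust the sum by one.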
FWIW, here's another one: ``` SELECT o.type_desc AS ROUTINE_TYPE ,QUOTENAME(s.[name]) + '.' + QUOTENAME(o.[name]) AS [OBJECT_NAME] ,(LEN(m.definition) - LEN(REPLACE(m.definition, CHAR(10), ''))) AS LINES_OF_CODE FROM sys.sql_modules AS m INNER JOIN sys.objects AS o ON m.[object_id] = o.[OBJECT_ID] INNER JOIN sys.schemas AS s ON s.[schema_id] = o.[schema_id] ```
Query to list SQL Server stored procedures along with lines of code for each procedure
[ "", "sql", "sql-server", "t-sql", "stored-procedures", "lines-of-code", "" ]
I want a pure virtual parent class to call a child implementation of a function like so:

```
class parent
{
public:
    void Read()
    {
        //read stuff
    }

    virtual void Process() = 0;

    parent()
    {
        Read();
        Process();
    }
};

class child : public parent
{
public:
    virtual void Process()
    {
        //process stuff
    }

    child() : parent()
    {
    }
};

int main()
{
    child c;
}
```

This should work, but I get a linker error :/ This is using VC++ 2k3. Or shouldn't it work, am I wrong?
Title of the following article says it all: [Never Call Virtual Functions during Construction or Destruction](http://www.artima.com/cppsource/nevercall.html).
Alternatively, make a factory method for creating the objects and make the constructors private; the factory method can then initialize the object after construction.
C++ Parent class calling a child virtual function
[ "", "c++", "inheritance", "virtual", "" ]
I have a SQL database that stores some documents. A user can sign into the application, and view a list of their documents. When clicking a linkbutton download in a gridview of their docs, I get the file from the database, write it to the file system, and then execute this code.

```
System.IO.FileInfo file = new System.IO.FileInfo(System.Configuration.ConfigurationManager.AppSettings["UploadPath"] + DocumentName);
Response.Clear();
Response.ClearContent();
Response.ClearHeaders();
Response.Cookies.Clear();
Response.Cache.SetCacheability(HttpCacheability.Private);
Response.CacheControl = "private";
Response.Charset = System.Text.UTF8Encoding.UTF8.WebName;
Response.ContentEncoding = System.Text.UTF8Encoding.UTF8;
Response.AppendHeader("Content-Length", file.Length.ToString());
Response.AppendHeader("Pragma", "cache");
Response.AppendHeader("Expires", "60");
Response.ContentType = GetContentType(file.Extension);
Response.AppendHeader("Content-Disposition", "inline; " +
    "filename=\"" + file.Name + "\"; " +
    "size=" + file.Length.ToString() + "; " +
    "creation-date=" + DateTime.Now.ToString("R") + "; " +
    "modification-date=" + DateTime.Now.ToString("R") + "; " +
    "read-date=" + DateTime.Now.ToString("R"));
```

My GetContentType() method just returns the appropriate content type for the files I'm allowing: "application/pdf", "application/msword", etc.

My problem is that when the file gets saved, it's the webpage itself, not the file from the file system. And in Google Chrome, it's putting a .htm extension on the end of the filename, I guess because it knows it's a web page? Anyhow, a great first step would be to get the actual file, and not a copy of the web page in HTML they are sitting on! Thanks.
How are you sending the actual content of the file? I usually use the [Response.TransmitFile](http://msdn.microsoft.com/en-us/library/12s31dhy(VS.80).aspx) method; it basically opens the file and sends its content to the Response.OutputStream.
Instead of `Response.AppendHeader("Content-Disposition", "inline; filename=...")`, use `Response.AddHeader("Content-Disposition", "attachment; filename=...")`. The `attachment` disposition tells the browser to download the file rather than try to display it inline.
C# ASP.NET 3.5 content-disposition file download problems
[ "", "c#", "asp.net", "content-disposition", "" ]
Using strictly SQL (no PHP or anything else), is it possible to create a table **and insert default data into that table** only if that table doesn't exist?
Use the CREATE TABLE ... SELECT format: ``` create table if not exists tablename as select * from defaultdata; ```
Here is one way of doing it: ``` CREATE TABLE IF NOT EXISTS T ( ID int(10) unsigned NOT NULL primary key, NAME varchar(255) NOT NULL ); REPLACE INTO T SELECT 1, 'John Doe'; REPLACE INTO T SELECT 2, 'Jane Doe'; ``` REPLACE is a MySQL extension to the SQL standard that either inserts, or deletes and inserts.
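As a rough, runnable cross-check of the create-if-missing-then-seed idea (this uses SQLite from Python rather than MySQL, so `INSERT OR REPLACE` stands in for MySQL's `REPLACE`; the table and rows are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

ddl = "CREATE TABLE IF NOT EXISTS T (id INTEGER PRIMARY KEY, name TEXT NOT NULL)"
seed = "INSERT OR REPLACE INTO T (id, name) VALUES (?, ?)"

# Running the whole script twice is harmless: the CREATE is skipped the
# second time, and the seeding replaces rather than duplicates rows.
for _ in range(2):
    conn.execute(ddl)
    conn.executemany(seed, [(1, "John Doe"), (2, "Jane Doe")])

rows = conn.execute("SELECT id, name FROM T ORDER BY id").fetchall()
```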
MySQL inserting data only if table doesn't exist
[ "", "mysql", "sql", "database", "" ]
I'm looking for ideas on how to implement audit trails for my objects in C#. For the current project, basically I need to:

1. Store the old values and new values of a given object.
2. Record creation of new objects.
3. Deletion of old objects.

Is there any generic way of doing this, like using C# generics, so that I don't have to write code for events of the base object like on creation, on deletion, etc. (ORM objects)? The thing is that I'd like a way to inject the audit trail if one is using an ORM. Anybody have any experiences or any methods they follow? Any way to do this in an Aspect-oriented (AOP) manner?

Please share your ideas etc.
The question is pretty similar to [How do you implement audit trail for your objects (Programming)?](https://stackoverflow.com/questions/148291/how-do-you-implement-audit-trail-for-your-objects-programming)

We've implemented a similar solution using AOP (an AspectJ implementation). Using this, particular points can be captured and specific operations can be performed. It can be plugged in and out as we like. However, our implementation was in J2EE. If you really want to do it in the app layer, I would suggest this.

Hope it helps.
You could implement something similar to INotifyPropertyChanged with a slight difference. I've extracted most of INotifyPropertyChanged and changed it to be generic and store new and old values. You can then have some sort of a management class that can listen to this, and on saving and on deleting you can deal with the changes.

```
public interface INotifyPropertyChanged<T>
{
    event PropertyChangedEventHandler<T> PropertyChanged;
}

public delegate void PropertyChangedEventHandler<T>(object sender, PropertyChangedEventArgs<T> e);

public class PropertyChangedEventArgs<T> : EventArgs
{
    private readonly string propertyName;

    public PropertyChangedEventArgs(string propertyName)
    {
        this.propertyName = propertyName;
    }

    public virtual string PropertyName
    {
        get { return propertyName; }
    }

    public T OldValue { get; set; }
    public T NewValue { get; set; }
}
```
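The same old-value/new-value capture can be sketched generically in any language with attribute interception. A Python illustration (not C#; the class names and trail format are made up):

```python
class Audited:
    """Record (attribute, old_value, new_value) triples on every assignment."""

    def __init__(self):
        # Bypass our own __setattr__ while bootstrapping the trail itself.
        object.__setattr__(self, "_trail", [])

    def __setattr__(self, name, value):
        old = getattr(self, name, None)   # None the first time an attr is set
        object.__setattr__(self, name, value)
        self._trail.append((name, old, value))

class Account(Audited):
    def __init__(self, owner):
        super().__init__()
        self.owner = owner  # recorded as a creation-time assignment

acct = Account("alice")
acct.owner = "bob"          # recorded with old value "alice"
```

The point is that the auditing lives once, in the base class, and every subclass gets it for free; the same shape is what AOP or a generic event-args class buys you in C#.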
Implementing Audit Trail for Objects in C#?
[ "", "c#", "object", "audit-trail", "" ]
I am looking for the best way to make my desktop java program run in the background (**daemon/service**?) across most platforms (Windows, Mac OS, Linux [Ubuntu in particular]). By "best way" I am hoping to find a way that will: 1. require a **minimum** amount of platform-specific code. 2. not require the user to do anything a general computer user couldn't/wouldn't do 3. not be a resource hog. I understand that my requirements may be unrealistic but I am hoping there is some sort of "best practice" for this type of situation. How to go forward?
You can use the [SystemTray](http://java.sun.com/javase/6/docs/api/java/awt/SystemTray.html) classes and install your app as any other in the default platform. For Windows it could be a scheduled task that runs at startup. For Linux and OS X I don't know (besides crontab, which is somewhat too technical), but I'm pretty sure they both have a way to do the same thing easily. Unfortunately (as of today) Apple hasn't finished the 1.6 port.

It won't be a real daemon, but an app like Google Desktop. I've heard Quartz is a good option. But I've never used it.
You can run a Java application as a service (Windows) or daemon (Linux) using the [Apache Commons daemon code](http://commons.apache.org/daemon/index.html).

## Structure

Daemon is made of 2 parts. One written in C that makes the interface to the operating system and the other in Java that provides the Daemon API.

## Platforms

Both Win32 and UNIX-like platforms are supported. For Win32 platforms use procrun. For UNIX-like platforms use jsvc.

## Java code

You have to write a class (MyClass) that implements the following methods:

* `void load(String[] arguments)`: Here open the configuration files, create the trace file, create the ServerSockets, the Threads
* `void start()`: Start the Thread, accept incoming connections
* `void stop()`: Inform the Thread to leave the run(), close the ServerSockets
* `void destroy()`: Destroy any object created in init()
java background/daemon/service cross platform best practices
[ "", "java", "cross-platform", "desktop-application", "daemon", "" ]
Given the [iPhone's 25k limit for caching of files](http://www.niallkennedy.com/blog/2008/02/iphone-cache-performance.html), I'm wondering if there's interest in an iPhone-optimized JavaScript library that makes caching a top-level goal. Since it'd be iPhone-only it could get rid of most cross-browser cruft and rely on Safari-specific capabilities, hopefully cutting down some of the girth and staying within 25k.

[John Resig discusses this briefly](http://ejohn.org/blog/jquery-plugins-size-and-storage/), although mostly to dismiss it, it seems. He does mention:

> if you're particularly excited about
> breaking jQuery down into little
> chunks you can grab the individual
> pieces from SVN and build a custom
> copy.

Anyone tried that? Dojo implements a 6k version that seems to rely on deferred loading. I'm mostly a jQuery user so I haven't given it a try, but it looks interesting.

Overall: what do you think about a Safari/iPhone-specific JavaScript library that implements, say, the top 90% most used APIs in jQuery (or your other favorite library)?
Newer update: looks like [Zepto](http://zeptojs.com/) is the way to go these days. Found [XUI](http://github.com/brianleroux/xui/tree/master), looks like what I was looking for, although I haven't given it a try yet.
You should check out QuickConnectiPhone. It may do what you want. It can be found at <https://sourceforge.net/projects/quickconnect/>. It also lets you write your app in JavaScript, CSS, and HTML and yet install it on a device. There is an API that will allow you to make calls down to the Objective-C layer as well for phone vibration, GPS locations, accelerometer information, and some more. You can even extend this to other native phone behaviors as well. The development blog for the framework is found at <http://tetontech.wordpress.com>
Minimalist cacheable jQuery/javascript library for iPhone?
[ "", "javascript", "jquery", "iphone", "dojo", "" ]
Is it safe to use MS SQL's WITH (NOLOCK) option for select statements and insert statements if you never modify a row, but only insert or delete rows? I.e. you never do an UPDATE to any of the rows.
If you're asking whether or not you'll get data that may no longer be accurate, then it depends on your queries. For example, if you do something like: ``` SELECT my_id, my_date FROM My_Table WHERE my_date >= '2008-01-01' ``` at the same time that a row is being inserted with a date on or after 2008-01-01 then you may not get that new row. This can also affect queries which generate aggregates. If you are just mimicking updates through a delete/insert then you also may get an "old" version of the data.
Not in general (i.e. UPDATE is not the only locking issue).

If you are inserting (or deleting) records and a select could potentially specify records which would be in that set, then yes, NOLOCK will give you a dirty read which may or may not include those records.

If the inserts or deletes would never potentially be selected (for instance the data read is always yesterday's data, whereas today's data coming in or being manipulated is never read), then yes, it is "safe".
Is it safe to use MS SQL's WITH (NOLOCK) option for select statements and insert statements if
[ "", "sql", "sql-server-2005", "" ]
I'm looking for a really generic way to "fill out" a form based on a parameter string using JavaScript. For example, if I have this form:

```
<form id="someform">
    <select name="option1">
        <option value="1">1</option>
        <option value="2">2</option>
    </select>
    <select name="option2">
        <option value="1">1</option>
        <option value="2">2</option>
    </select>
</form>
```

I'd like to be able to take a param string like this: `option1=2&option2=1`

And then have the correct things selected based on the options in the query string. I have a sort of ugly solution where I go through children of the form and check if they match the various keys, then set the values, but I really don't like it. I'd like a cleaner, **generic** way of doing it that would work for any form (assuming the param string had all the right keys). I'm using the Prototype javascript library, so I'd welcome suggestions that take advantage of it.

EDIT: this is what I've come up with so far (using Prototype for `Form.getElements(formId)`)

```
function setFormOptions(formId, params) {
    params = params.split('&');
    params.each(function(pair) {
        pair = pair.split('=');
        var key = pair[0];
        var val = pair[1];
        Form.getElements(formId).each(function(element) {
            if(element.name == key) {
                element.value = val;
            }
        });
    });
}
```

I feel that it could still be faster/cleaner however.
If you're using Prototype, this is easy. First, you can use the [toQueryParams](http://prototypejs.org/api/string/toQueryParams) method on the String object to get a Javascript object with name/value pairs for each parameter. Second, you can use the Form.Elements.setValue method (doesn't seem to be documented) to translate each query string value to an actual form input state (e.g. check a checkbox when the query string value is "on"). Using the name.value=value technique only works for text and select (one, not many) inputs. Using the Prototype method adds support for checkbox and select (multiple) inputs. As for a simple function to populate what you have, this works well and it isn't complicated. ``` function populateForm(queryString) { var params = queryString.toQueryParams(); Object.keys(params).each(function(key) { Form.Element.setValue($("someform")[key], params[key]); }); } ``` This uses the [Object.keys](http://prototypejs.org/api/object/keys) and the [each](http://prototypejs.org/api/array/each) methods to iterate over each query string parameter and set the matching form input (using the input name attribute) to the matching value in the query params object. **Edit:** Note that you do not need to have id attributes on your form elements for this solution to work.
Try this:

```
Event.observe(window, 'load', function() {
    var loc = window.location.search.substr(1).split('&');
    loc.each(function(param) {
        param = param.split('=');
        key = param[0];
        val = param[1];
        $(key).value = val;
    });
});
```

The above code assumes that you assign id values as well as names to each form element. It takes parameters in the form:

```
?key=value&key=value ```

or

```
?option1=1&option2=2 ```

---

If you want to keep it at just names for the elements, then try this instead of the above:

```
Event.observe(window, 'load', function() {
    var loc = window.location.search.substr(1).split('&');
    loc.each(function(param) {
        param = param.split('=');
        key = param[0].split('_');
        type = key[0];
        key = key[1];
        val = param[1];
        $$(type + '[name="' + key + '"]').each(function(ele) {
            ele.value = val;
        });
    });
});
```

This code takes parameters in the form of:

```
?type_key=value&type_key=value ```

or

```
?select_option1=1&select_option2=2 ```
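Both answers boil down to parsing the query string into a name-to-value map and then assigning values by name. That first half is exactly what a standard-library parser does; a sketch in Python purely to illustrate the shape (the `form` dict stands in for the DOM):

```python
from urllib.parse import parse_qs

# What Prototype's toQueryParams and the manual split('&') both produce:
# a name -> value mapping. parse_qs returns lists, so take the first value.
params = {k: v[0] for k, v in parse_qs("option1=2&option2=1").items()}

def fill_form(form, params):
    # Assign each parameter to the matching "element" by name,
    # ignoring parameters with no matching element.
    for name, value in params.items():
        if name in form:
            form[name] = value
    return form

form = fill_form({"option1": "1", "option2": "1"}, params)
```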
Generic way to fill out a form in javascript
[ "", "javascript", "html", "forms", "prototypejs", "" ]
I am making a C# app for a class project. I want to ensure a string has one of three values. Normally, in a web app, I would do validation with javascript on the client side. However, this is currently a console app. I know that I should do the validation early, but what are some good rules of thumb for validation?
Each module should do its own validation and never trust what the calling code gives it. This typically means that validation should happen at each layer of your application. You especially do not want to trust any validation to occurs on the client side, because that can lead to security holes. Code that runs on the client has been known to change from time to time.
If you're doing MVC, chances are you're working from the ground up using TDD. I'm not sure if this is the best way, but the way I do things is..

* Make my business objects.
* Define some kind of validation framework, so business objects can return a list of errors on their current state, and test those using unit testing.
* If you're using LINQ to SQL, implement the partial method OnValidate() and make it so it calls your mybusinessobject.geterrors(). OnValidate is called when you do db.submitchanges(), so you can stop invalid data getting saved.
* Now, in your controllers, when someone makes a new business object, or edits one, make the object with whatever data you get from the user - then call your geterrors() method and do whatever.
* Then client-side validation if you can be arsed.

That is a framework which Scott Guthrie described here: <http://weblogs.asp.net/scottgu/archive/2008/09/02/asp-net-mvc-preview-5-and-form-posting-scenarios.aspx>

I like it and it means you can define your business rules once and re-use them on different layers, which means it's less likely you'll miss them out at a particular area when you are updating things.
Where is the best place in an app to do validation? Rules of thumb?
[ "", "c#", "validation", "model-view-controller", "" ]
How do I check in C++ if a file is a regular file (and is not a directory, a pipe, etc.)? I need a function isFile().

```
DIR *dp;
struct dirent *dirp;
while ((dirp = readdir(dp)) != NULL) {
    if (isFile(dirp)) {
        cout << "IS A FILE!" << endl;
        i++;
    }
}
```

I've tried comparing dirp->d_type with (unsigned char)0x8, but it doesn't seem portable across different systems.
You need to call stat(2) on the file, and then use the S\_ISREG macro on st\_mode. Something like (adapted from [this answer](https://stackoverflow.com/a/3828537/6451573)): ``` #include <sys/stat.h> struct stat sb; if (stat(pathname, &sb) == 0 && S_ISREG(sb.st_mode)) { // file exists and it's a regular file } ```
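For comparison, the same `stat` + `S_ISREG` check is exposed almost verbatim in Python's standard library. A runnable sketch (the temporary file and its directory are created just for the demonstration):

```python
import os
import stat
import tempfile

def is_regular_file(path):
    """stat() the path and test its mode with the S_ISREG macro equivalent."""
    try:
        st = os.stat(path)
    except OSError:
        return False  # nonexistent or unreadable
    return bool(stat.S_ISREG(st.st_mode))

# A freshly created temp file is a regular file; its directory is not.
with tempfile.NamedTemporaryFile() as tmp:
    file_result = is_regular_file(tmp.name)
    dir_result = is_regular_file(os.path.dirname(tmp.name))
```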
You can use the portable [`boost::filesystem`](http://www.boost.org/doc/libs/1_31_0/libs/filesystem/doc/index.htm) (the standard C++ library could not do this until the recent introduction of [std::filesystem](http://en.cppreference.com/w/cpp/filesystem/is_regular_file) in C++17):

```
#include <boost/filesystem/path.hpp>
#include <boost/filesystem/operations.hpp>
#include <iostream>

int main() {
    using namespace boost::filesystem;
    path p("/bin/bash");
    if(is_regular_file(p)) {
        std::cout << "exists and is regular file" << std::endl;
    }
}
```
How do i check if a file is a regular file?
[ "", "c++", "c", "filesystems", "dirent.h", "" ]
Thanks for a [solution in C](https://stackoverflow.com/questions/327893/how-to-write-a-compare-function-for-qsort-from-stdlib), now I would like to achieve this in C++ using std::sort and vector: ``` typedef struct { double x; double y; double alfa; } pkt; ``` `vector< pkt > wektor;` filled up using push\_back(); compare function: ``` int porownaj(const void *p_a, const void *p_b) { pkt *pkt_a = (pkt *) p_a; pkt *pkt_b = (pkt *) p_b; if (pkt_a->alfa > pkt_b->alfa) return 1; if (pkt_a->alfa < pkt_b->alfa) return -1; if (pkt_a->x > pkt_b->x) return 1; if (pkt_a->x < pkt_b->x) return -1; return 0; } sort(wektor.begin(), wektor.end(), porownaj); // this makes loads of errors on compile time ``` What is to correct? How to use properly std::sort in that case?
`std::sort` takes a different compare function from that used in `qsort`. Instead of returning –1, 0 or 1, this function is expected to return a `bool` value indicating whether the first element is less than the second. You have two possibilites: implement `operator <` for your objects; in that case, the default `sort` invocation without a third argument will work; or you can rewrite your above function to accomplish the same thing. Notice that you have to use strong typing in the arguments. Additionally, it's good not to use a function here at all. Instead, use a function object. These benefit from inlining. ``` struct pkt_less { bool operator ()(pkt const& a, pkt const& b) const { if (a.alfa < b.alfa) return true; if (a.alfa > b.alfa) return false; if (a.x < b.x) return true; if (a.x > b.x) return false; return false; } }; // Usage: sort(wektor.begin(), wektor.end(), pkt_less()); ```
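The cascading logic here (compare `alfa`, then fall back to `x`) is the same thing a sort key tuple expresses in other languages. A Python sketch for illustration (field values are made up):

```python
from collections import namedtuple

Pkt = namedtuple("Pkt", ["x", "y", "alfa"])

wektor = [
    Pkt(x=2.0, y=0.0, alfa=1.5),
    Pkt(x=1.0, y=0.0, alfa=1.5),  # same alfa -> ordering falls back to x
    Pkt(x=9.0, y=0.0, alfa=0.5),
]

# Sorting by a (alfa, x) tuple performs exactly the cascading
# less-than comparison that the C++ functor spells out by hand.
wektor.sort(key=lambda p: (p.alfa, p.x))
```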
In C++, you can use functors like `boost::bind` which do this job nicely: ``` #include <vector> #include <algorithm> #include <boost/bind.hpp> struct pkt { double x; double y; double alfa; pkt(double x, double y, double alfa) :x(x), y(y), alfa(alfa) { } }; int main() { std::vector<pkt> p; p.push_back(pkt(10., 0., 20.)); p.push_back(pkt(10, 0., 30.)); p.push_back(pkt(5., 0., 40.)); std::sort(p.begin(), p.end(), boost::bind(&pkt::alfa, _1) < boost::bind(&pkt::alfa, _2) || boost::bind(&pkt::alfa, _1) == boost::bind(&pkt::alfa, _2) && boost::bind(&pkt::x, _1) < boost::bind(&pkt::x, _2)); } ``` If you need to do this many times, you can also solve the problem by making a function object which accepts member pointers and does the sort. You can reuse it for any kind of object and members. First how you use it: ``` int main() { /* sorting a vector of pkt */ std::vector<pkt> p; p.push_back(pkt(10., 0., 20.)); p.push_back(pkt(5., 0., 40.)); std::sort(p.begin(), p.end(), make_cmp(&pkt::x, &pkt::y)); } ``` Here is the code for make\_cmp.
Feel free to rip it (using [`boost::preprocessor`](http://www.boostpro.com/tmpbook/preprocessor.html)): ``` #include <boost/preprocessor/repetition.hpp> #include <boost/preprocessor/facilities/empty.hpp> // tweak this to increase the maximal field count #define CMP_MAX 10 #define TYPEDEF_print(z, n, unused) typedef M##n T::* m##n##_type; #define MEMBER_print(z, n, unused) m##n##_type m##n; #define CTORPARAMS_print(z, n, unused) m##n##_type m##n #define CTORINIT_print(z, n, unused) m##n(m##n) #define CMPIF_print(z, n, unused) \ if ((t0.*m##n) < (t1.*m##n)) return true; \ if ((t0.*m##n) > (t1.*m##n)) return false; \ #define PARAM_print(z, n, unused) M##n T::* m##n #define CMP_functor(z, n, unused) \ template <typename T \ BOOST_PP_ENUM_TRAILING_PARAMS(n, typename M)> \ struct cmp##n { \ BOOST_PP_REPEAT(n, TYPEDEF_print, ~) \ BOOST_PP_REPEAT(n, MEMBER_print, ~) \ cmp##n(BOOST_PP_ENUM(n, CTORPARAMS_print, ~)) \ BOOST_PP_IF(n, :, BOOST_PP_EMPTY()) \ BOOST_PP_ENUM(n, CTORINIT_print, ~) { } \ \ bool operator()(T const& t0, T const& t1) const { \ BOOST_PP_REPEAT(n, CMPIF_print, ~) \ return false; \ } \ }; \ \ template<typename T \ BOOST_PP_ENUM_TRAILING_PARAMS(n, typename M)> \ cmp##n<T BOOST_PP_ENUM_TRAILING_PARAMS(n, M)> \ make_cmp(BOOST_PP_ENUM(n, PARAM_print, ~)) \ { \ return cmp##n<T BOOST_PP_ENUM_TRAILING_PARAMS(n, M)>( \ BOOST_PP_ENUM_PARAMS(n, m)); \ } BOOST_PP_REPEAT(CMP_MAX, CMP_functor, ~) #undef TYPEDEF_print #undef MEMBER_print #undef CTORPARAMS_print #undef CTORINIT_print #undef CMPIF_print #undef PARAM_print #undef CMP_functor ```
How to use std::sort with a vector of structures and compare function?
[ "", "c++", "sorting", "stl", "vector", "" ]
Given a word, I have to replace some specific letters with specific digits, such as 1 for a, 5 for b, etc. I'm using regex for this. I understand that StringBuilder is the best way to deal with this problem since I'm doing a lot of string manipulations. Here is what I'm doing: ``` String word = "foobooandfoo"; String converted = ""; converted = word.replaceAll("[ao]", "1"); converted = converted.replaceAll("[df]", "2"); converted = converted.replaceAll("[n]", "3"); ``` My problem is how to rewrite this program using StringBuilder. I tried everything but I can't succeed. Or is using String just fine for this?
I think this is a case where clarity and performance happily coincide. I would use a lookup table to do the "translation". ``` public static void translate(StringBuilder str, char[] table) { for (int idx = 0; idx < str.length(); ++idx) { char ch = str.charAt(idx); if (ch < table.length) { ch = table[ch]; str.setCharAt(idx, ch); } } } ``` If you have a large alphabet for the `str` input, or your mappings are sparse, you could use a real map, like this: ``` public static void translate(StringBuilder str, Map<Character, Character> table) { for (int idx = 0; idx < str.length(); ++idx) { char ch = str.charAt(idx); Character conversion = table.get(ch); if (conversion != null) str.setCharAt(idx, conversion); } } ``` While these implementations work in-place, you could create a new `StringBuilder` instance (or append to one that's passed in).
I'd actually say that the code is pretty OK in most applications although it's theoretically inferior to other methods. If you don't want to use the `Matcher`, try it like this: ``` StringBuilder result = new StringBuilder(word.length()); for (char c : word.toCharArray()) { switch (c) { case 'a': case 'o': result.append('1'); break; case 'd': case 'f': result.append('2'); break; case 'n': result.append('3'); break; default: result.append(c); break; } } ```
How to rewrite this block of code using a StringBuilder in Java?
[ "", "java", "string", "stringbuilder", "" ]
I have a collection of HTML documents for which I need to parse the contents of the <meta> tags in the <head> section. These are the only HTML tags whose values I'm interested in, i.e. I don't need to parse anything in the <body> section. I've attempted to parse these values using the XPath support provided by JDOM. However, this isn't working out too well because a lot of the HTML in the <body> section is not valid XML. Does anyone have any suggestions for how I might go about parsing these tag values in a manner that can deal with malformed HTML? Cheers, Don
You can likely use the [Jericho HTML Parser](http://jerichohtml.sourceforge.net/doc/index.html). In particular, have a look at [this](http://jerichohtml.sourceforge.net/samples/console/src/FindSpecificTags.java) to see how you can go about finding specific tags.
If it suits your application you can use [Tidy](http://tidy.sourceforge.net/) to convert HTML to valid XML, and then use as much XPath as you like!
parse meta tags in Java
[ "", "java", "html", "xml", "parsing", "" ]
Is there a built in Javascript function to turn the text string of a month into the numerical equivalent? Ex. I have the name of the month "December" and I want a function to return "12".
You can append some dummy day and year to the month name and then use the [Date](http://www.cev.washington.edu/lc/CLWEBCLB/jst/js_datetime.html) constructor: ``` var month = (new Date("December 1, 1970").getMonth() + 1); ```
Out of the box, this is not something supported in native JS. As mentioned, there are locale considerations, and different date conventions which you need to cater to. Do you have any easing assumptions you can use?
Is there a built in Javascript function to turn the text string of a month into the numerical equivalent?
[ "", "javascript", "" ]
I'm trying to load JavaScript code with a user web control into a page via the Page.LoadControl method during an asynchronous postback of an update panel. I've tried the methods of the ScriptManager specially designed for that scenario, but the JavaScript just doesn't get returned to the user. To explain my scenario a bit better: the master page has the ScriptManager, and one page loads the user control via the Page.LoadControl method during an async postback. The custom control injects the JavaScript in the pre-render event handler. Is it a matter of timing to inject the JS, or is it just not possible to do so?
For that you can do ``` string scr = "<script src='/scripts/myscript.js'></script>"; Page.ClientScript.RegisterStartupScript(typeof(Page), "key", scr, false); ``` HTH
**If you don't want to hard-code your JavaScript**, but instead include it from a file, call [ScriptManager.RegisterClientScriptInclude](http://msdn.microsoft.com/en-us/library/system.web.ui.scriptmanager.registerclientscriptinclude.aspx) and then call your initialization function in [ScriptManager.RegisterStartupScript](http://msdn.microsoft.com/en-us/library/system.web.ui.scriptmanager.registerstartupscript.aspx). ``` protected void Page_Load(object sender, EventArgs e) { ScriptManager.RegisterClientScriptInclude( this, GetType(), "formatterScript", ResolveUrl("~/js/formatter.js")); ScriptManager.RegisterStartupScript( this, GetType(), "formatTableFunction", "formatTable()", true); } ```
ASP.NET inject javascript in user control nested in update panel
[ "", "asp.net", "javascript", "updatepanel", "custom-server-controls", "" ]
I'm currently building a Gridview that has expandable rows. Each row contains a dynamically created Panel of form elements. Right now, I have a javascript function that expands (or in my case, makes visible) the panel when an Image is clicked on the Gridview row. My question is... is there a more efficient way of doing this? Instead of pulling all my data to begin with and building each new row as I databind, is there a way to simply create the row with the Panel full of textboxes and dropdownlists on the fly when the user clicks the Expand button? I'd like to limit the server calls by doing it that way instead of how I'm currently doing it: looping through every row and creating a new panel with form elements and inserting that into a row that is hidden.
Actually, I worked this recently into an AJAX handler returning the form structure. It's on demand, and works well. Simply call `$.ajax` via jQuery, return an HTML structure, and inject it into a DIV. It's a bit limiting on actual functionality, so be careful.
Actually, it isn't performing badly since my original SQL query can populate every single row and I have enabled paging on the Gridview. I'm just wondering if they can be built on the fly using PageMethods or some sort of JSON/AJAX solution. I haven't seen anything, but... worth a try in searching for it.
ASP.net 2.0 Gridview with Expanding Panel Rows -- How to build Panel "on the fly"
[ "", "asp.net", "javascript", "gridview", "panel", "expandable", "" ]
I'm new to C# and .Net in general so this may be a naive thing to ask. But anyway, consider this C# code: ``` class A { public int Data {get; set;} } class B { public A Aval {get; set;} } ``` The B.Aval property above is returning a reference to its internal A object. As a former C++ programmer, I find this dangerous because by exposing reference to an member object class B violates the principle of data hiding. Nonetheless, this seems to be the normal practice in the C# world. My question is, if it is at all, why is such a design the usual approach as opposed to returning copies of internal members, which will be much safer in so many ways (including the case of thread safety)? I understand that even in C++ sometimes good design demands that you do expose class members directly (a **Car** class with **Engine** and **Stereo** objects inside it comes to mind), but it is not the norm as seems to be in C#.
You're absolutely right - you should only return objects from properties where either the object is immutable, or you're happy for the caller to modify it to whatever extent they can. A classic example of this is returning collections - often it's much better to return a read-only wrapper round a collection than to return the "real" collection directly. On the other hand, pragmatism sometimes calls for just documenting this as "please don't change the returned object" - particularly when it's an API which is only used within a company. Hopefully there'll be more support for immutability in future versions of C# and .NET, which will make this easier to cope with - but it's likely to remain a knotty problem in many cases.
This isn't encapsulation - it's an act of abstraction through object composition or aggregation depending on how the internal object lifetimes are created/managed. In composition patterns it is perfectly acceptable to access composite state e.g. the instance of A in the instance of B. > <http://en.wikipedia.org/wiki/Object_composition> As you point out the semantics of encapsulation are *very different* - to completely hide the internal implementation of A e.g. by inheriting B from A.
C# and Data Hiding
[ "", "c#", ".net", "" ]
I am trying to test a class that manages data access in the database (you know, CRUD, essentially). The DB library we're using happens to have an API wherein you first get the table object by a static call: ``` function getFoo($id) { $MyTableRepresentation = DB_DataObject::factory("mytable"); $MyTableRepresentation->get($id); ... do some stuff return $somedata } ``` ...you get the idea. We're trying to test this method, but mocking the DataObject stuff so that (a) we don't need an actual db connection for the test, and (b) we don't even need to include the DB\_DataObject lib for the test. However, in PHPUnit I can't seem to get $this->getMock() to appropriately set up a static call. I have... ``` $DB_DataObject = $this->getMock('DB_DataObject', array('factory')); ``` ...but the test still says unknown method "factory". I know it's creating the object, because before it said it couldn't find DB\_DataObject. Now it can. But, no method? What I really want to do is to have two mock objects, one for the table object returned as well. So, not only do I need to specify that factory is a static call, but also that it returns some specified other mock object that I've already set up. I should mention as a caveat that I did this in SimpleTest a while ago (can't find the code) and it worked fine. What gives? [UPDATE] I am starting to grasp that it has something to do with expects()
I agree with both of you that it would be better not to use a static call. However, I guess I forgot to mention that DB\_DataObject is a third party library, and the static call is *their* best practice for their code usage, not ours. There are other ways to use their objects that involve constructing the returned object directly. It just leaves those darned include/require statements in whatever class file is using that DB\_DO class. That sucks because the tests will break (or just not be isolated) if you're meanwhile trying to mock a class of the same name in your test--at least I think.
When you cannot alter the library, alter your access of it. Refactor all calls to DB\_DataObject::factory() to an instance method in your code: ``` function getFoo($id) { $MyTableRepresentation = $this->getTable("mytable"); $MyTableRepresentation->get($id); ... do some stuff return $somedata } function getTable($table) { return DB_DataObject::factory($table); } ``` Now you can use a partial mock of the class you're testing and have getTable() return a mock table object. ``` function testMyTable() { $dao = $this->getMock('MyTableDao', array('getMock')); $table = $this->getMock('DB_DataObject', ...); $dao->expects($this->any()) ->method('getTable') ->with('mytable') ->will($this->returnValue($table)); $table->expects... ...test... } ```
Mock Objects in PHPUnit to emulate Static Method Calls?
[ "", "php", "static", "mocking", "phpunit", "" ]
On a Unix system, where does gcc look for header files? I spent a little time this morning looking for some system header files, so I thought this would be good information to have here.
``` `gcc -print-prog-name=cc1plus` -v ``` This command asks gcc which **C++** preprocessor it is using, and then asks that preprocessor where it looks for includes. You will get a reliable answer for your specific setup. Likewise, for the **C** preprocessor: ``` `gcc -print-prog-name=cpp` -v ```
In addition, gcc will look in the directories specified after the `-I` option.
Where does gcc look for C and C++ header files?
[ "", "c++", "c", "gcc", "header", "" ]
The title basically spells it out. What interfaces have you written that makes you proud and you use a lot. I guess the guys that wrote `IEnumerable<T>` and not least `IQueryable<T>` had a good feeling after creating those.
I'm pleased with the design of the interface at the heart of [Push LINQ](http://msmvps.com/blogs/jon_skeet/archive/2008/01/04/quot-push-quot-linq-revisited-next-attempt-at-an-explanation.aspx). It's a very simple interface, but with it you can do all kinds of interesting things. Here's the definition (from memory, but it'll be pretty close at least): ``` public interface IDataProducer<T> { event Action<T> DataProduced; event Action EndOfData; } ``` Basically it allows observers to "listen" on a stream of data instead of "pulling" from it in the way that IEnumerable works. Three points of interest: * The name sucks. I want to rename it, and I've had some good feedback, but *the* name isn't there yet. * The event handlers don't follow the standard .NET conventions of having a sender/event args. In this case it really doesn't make much sense to do so. * The multicast nature of events makes this perfect for calculating multiple aggregates etc. All that comes for free.
Strictly an interface? Or just an API? I'm kinda pleased with how the [generic operator](http://www.pobox.com/~skeet/csharp/miscutil/usage/genericoperators.html) stuff worked out (available [here](http://www.yoda.arachsys.com/csharp/miscutil/)) - I *regularly* see people asking about using operators with generics, so I'm glad it is a handy answer for a lot of people. It might be [slightly easier in C# 4.0](http://marcgravell.blogspot.com/2008/11/dynamic-operators.html), but I very much doubt it will be as fast - the DLR-tree/dynamic stuff has an overhead. I'm also quite happy that it was helpful in the Push LINQ that Jon has already mentioned ;-p
Whats the best (most useful) interface you have written?
[ "", "c#", "interface", "" ]
I would like to write a program that will identify a machine (for licensing purposes). I thought about getting the following information and compiling an XML file with this data: 1. MAC address. 2. CPU data (serial, manufacturer, etc.) 3. Motherboard identification (serial, manufacturer, etc.) Can someone refer me to a lib that provides such information? I'm writing my program in C++ and am having trouble finding such a lib. Should I dig in the registry for this information? Many thanks, ofer
Using [WMI](http://msdn.microsoft.com/en-us/library/aa394582.aspx) and getting the motherboard's serial number should be enough (the other options are less secure, since an old computer may not have a network adapter and/or the CPU can be changed more likely than the motherboard).
GetAdaptersInfo() will give you the MAC address. Here's an example of how to use it for this purpose. ``` /** ************************************* return string containing first MAC address on computer NOTE: requires adding Iphlpapi.lib to project */ string GetMac() { char data[4096]; ZeroMemory( data, 4096 ); unsigned long len = 4000; PIP_ADAPTER_INFO pinfo = ( PIP_ADAPTER_INFO ) data; char sbuf[20]; string sret; DWORD ret = GetAdaptersInfo( pinfo, &len ); if( ret != ERROR_SUCCESS ) return string("**ERROR**"); for(int k = 0; k < 5; k++ ) { sprintf(sbuf,"%02X-",pinfo->Address[k]); sret += sbuf; } sprintf(sbuf,"%02X",pinfo->Address[5]); sret += sbuf; return( sret ); } ``` For an example of a complete system to do this sort of thing, check out Manuele Sicuteri's [article](http://www.codeproject.com/KB/cpp/softwarekey.aspx) on CodeProject.
Get machine properties
[ "", "c++", "winapi", "" ]
I have a textbox and a link button. When I write some text, select some of it, and then click the link button, the selected text from the textbox must be shown in a message box. How can I do it? --- When I click the submit button for the textbox below, the message box must show *Lorem ipsum*, because "Lorem ipsum" is selected in the area. --- If I select any text from the page and click the submit button, it works; but if I type text into the textbox and do the same, it doesn't, because when I click anywhere else the textbox selection is canceled. The problem is that when I select text from the textbox and then click any other control or space, the selected text must remain selected. How can this be done?
OK, here is the code I have: ``` function ShowSelection() { var textComponent = document.getElementById('Editor'); var selectedText; if (textComponent.selectionStart !== undefined) { // Standards-compliant version var startPos = textComponent.selectionStart; var endPos = textComponent.selectionEnd; selectedText = textComponent.value.substring(startPos, endPos); } else if (document.selection !== undefined) { // Internet Explorer version textComponent.focus(); var sel = document.selection.createRange(); selectedText = sel.text; } alert("You selected: " + selectedText); } ``` The problem is, although the code I give for Internet Explorer is given on a lot of sites, I cannot make it work on my copy of [Internet Explorer 6](https://en.wikipedia.org/wiki/Internet_Explorer_6) on my current system. Perhaps it will work for you, and that's why I give it. The trick you look for is probably the .focus() call to give the focus back to the textarea, so the selection is reactivated. I got the right result (the selection content) with the *onKeyDown* event: ``` document.onkeydown = function (e) { ShowSelection(); } ``` So the code is correct. Again, the issue is to get the selection on click on a button... I continue to search. I didn't have any success with a button drawn with a `li` tag, because when we click on it, Internet Explorer deselects the previous selection. The above code works with a simple `input` button, though...
Here's a much simpler solution, based on the fact that text selection occurs on mouseup, so we add an event listener for that: ``` document.querySelector('textarea').addEventListener('mouseup', function () { window.mySelection = this.value.substring(this.selectionStart, this.selectionEnd) // window.getSelection().toString(); }); ``` ``` <textarea> Select some text </textarea> <a href="#" onclick=alert(mySelection);>Click here to display the selected text</a> ``` This works in all browsers. If you also want to handle selection via the keyboard, add another event listener for `keyup`, with the same code. If it weren't for this [Firefox bug filed back in 2001](https://stackoverflow.com/questions/20419515/window-getselection-of-textarea-not-working-in-firefox#comment52700249_20419515) (yes, 14 years ago), we could replace the value assigned to `window.mySelection` with `window.getSelection().toString()`, which works in IE9+ and all modern browsers, and also gets the selection made in non-textarea parts of the DOM.
How to get selected text from a textbox control with JavaScript
[ "", "javascript", "html", "textarea", "textselection", "selectedtext", "" ]
Why do you think Microsoft wants us to go through IEnumerable to get to IEnumerator? Isn't the presence or absence of a valid cast to IEnumerator for a given type enough to determine if the type is enumerable or not? For example whats wrong with the following? ``` class MyClass : IEnumerator { ... } MyClass myObj = new MyClass(); if(myObj as IEnumerator != null) { Console.WriteLine("myObj is enumerable"); } else { Console.WriteLine("myObj doesn't support enumeration"); } ```
You might have two threads enumerating - each needs its own enumerator. IEnumerable.GetEnumerator returns an enumerator that is initially positioned before the first element of the collection. If you only had IEnumerator, you'd have to remember to reset yourself before using it, even in a single-threaded scenario.
An IEnumerator holds state information necessary to perform enumeration, such as array index etc. The enumeration state information is NOT part of the enumerable object, therefore casting would not be powerful enough. Also, the enumerator/enumerable separation allows several enumerations to be performed simultaneously on the same enumerable collection.
What's the rationale behind IEnumerable?
[ "", "c#", ".net", "" ]
This is a second-hand question from an OS development site, but it made me curious since I couldn't find a decent explanation anywhere. When compiling and linking a free-standing C++ program using gcc, sometimes a linker error like this occurs: ``` out/kernel.o:(.eh_frame+0x11): undefined reference to `__gxx_personality_v0' ``` This is apparently because this symbol is defined in libstdc++, which is missing in a free-standing environment. Fixing the problem simply requires defining this symbol somewhere: ``` void *__gxx_personality_v0; ``` Which is nice, but I don't like things that just magically work... So the question is, what is the purpose of this symbol?
It is used in the stack unwinding tables, which you can see for instance in the assembly output of [my answer to another question](https://stackoverflow.com/questions/307610/how-do-exceptions-work-behind-the-scenes-in-c#307716). As mentioned in that answer, its use is defined by the [Itanium C++ ABI](https://itanium-cxx-abi.github.io/cxx-abi), where it is called the [Personality Routine](https://itanium-cxx-abi.github.io/cxx-abi/abi-eh.html#base-personality). The reason it "works" by defining it as a global NULL void pointer is probably because nothing is throwing an exception. When something tries to throw an exception, then you will see it misbehave. Of course, if nothing is using exceptions, you can disable them with `-fno-exceptions` (and if nothing is using RTTI, you can also add `-fno-rtti`). If you are using them, you have to (as other answers already noted) link with `g++` instead of `gcc`, which will add `-lstdc++` for you.
It's part of the exception handling. The gcc EH mechanism allows to mix various EH models, and a personality routine is invoked to determine if an exception match, what finalization to invoke, etc. This specific personality routine is for C++ exception handling (as opposed to, say, gcj/Java exception handling).
What is __gxx_personality_v0 for?
[ "", "c++", "gcc", "linker", "kernel", "" ]
What are the tensions between multithreading and exception-safety in C++? Are there good guidelines to follow? Does a thread terminate because of an uncaught exception?
I believe the C++ standard does not make any mention of multithreading - multithreading is a platform-specific feature. I'm not exactly sure what the C++ standard says about uncaught exceptions in general, but according to [this page](http://www.devx.com/tips/Tip/14240), what happens is platform-defined, and you should find out in your compiler's documentation. In a quick-and-dirty test I did with g++ 4.0.1 (i686-apple-darwin8-g++-4.0.1 to be specific), the result is that `terminate()` is called, which kills the entire program. The code I used follows: ``` #include <stdio.h> #include <pthread.h> void *threadproc(void *x) { throw 0; return NULL; } int main(int argc, char **argv) { pthread_t t; pthread_create(&t, NULL, threadproc, NULL); void *ret; pthread_join(t, &ret); printf("ret = 0x%08x\n", ret); return 0; } ``` Compiled with `g++ threadtest.cc -lpthread -o threadtest`. Output was: ``` terminate called after throwing an instance of 'int' ```
C++0x will have [Language Support for Transporting Exceptions between Threads](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2179.html) so that when a worker thread throws an exception the spawning thread can catch or rethrow it. From the proposal: ``` namespace std { typedef unspecified exception_ptr; exception_ptr current_exception(); void rethrow_exception( exception_ptr p ); template< class E > exception_ptr copy_exception( E e ); } ```
Writing Multithreaded Exception-Safe Code
[ "", "c++", "multithreading", "exception", "c++11", "" ]
I'm looking for a checkpointing library for C#. Any ideas? See <http://en.wikipedia.org/wiki/Application_checkpointing>
This should be possible to implement using transactions (commit/rollback) or undo. If you design your classes and operations correctly it will work, but it will require some hard work and discipline in using the classes, of course. You'll also need to be careful about exceptions. The [System.Transactions namespace](http://msdn.microsoft.com/en-us/library/system.transactions.aspx) (as Rune suggested) seems to be a good candidate, or at least a good starting point.
If I understand you correctly (and the question is pretty vague so I'm not sure of this at all) then Windows Workflow Foundation certainly has this capability. However, it's almost certainly overkill for what you're asking. --- Okay, you added a link that better explains what you mean by checkpointing. With that in mind, .Net doesn't have anything built in to support this directly. However, anything that uses a state machine where the state machine can be persisted, can be inspected/validated, and is transactional has this sort of thing naturally built-in. WWF is one example of that.
checkpointing library for C#
[ "", "c#", "" ]
When you assign a date to a named SQL parameter, Hibernate automatically converts it to GMT time. How do you make it use the current server timezone for all dates? Let's say you have a query: ``` Query q = session.createQuery("from Table where date_field < :now"); q.setDate("now", new java.util.Date()); ``` "now" will be set to GMT time, while "new Date()" gets your current server time. Thanks.
As it turned out Hibernate doesn't convert dates to GMT automatically, it just cuts off time if you use `query.setDate()`, so if you pass "2009-01-16 12:13:14" it becomes "2009-01-16 00:00:00". To take time into consideration you need to use `query.setTimestamp("date", dateObj)` instead.
Hibernate is ignorant of timezones. Any timezone conversion should be done prior to executing the query. E.g., if your database server is set to CST, but the user is on EST, you'll need to add 1 hour to any timestamps which are the input to a query.
How to assign Date parameters to Hibernate query for current timezone?
[ "", "java", "hibernate", "timezone", "" ]
I had the following piece of code (simplified for this question): ``` struct StyleInfo { int width; int height; }; typedef int (StyleInfo::*StyleInfoMember); void AddStyleInfoMembers(std::vector<StyleInfoMember>& members) { members.push_back(&StyleInfo::width); members.push_back(&StyleInfo::height); } ``` Now, we had to restructure this a bit, and we did something like this: ``` struct Rectangle { int width; int height; }; struct StyleInfo { Rectangle size; }; typedef int (StyleInfo::*StyleInfoMember); void AddStyleInfoMembers(std::vector<StyleInfoMember>& members) { members.push_back(&StyleInfo::size::width); members.push_back(&StyleInfo::size::height); } ``` If this all looks like a stupid thing to do, or if you feel there's a good opportunity to apply BOOST here for some reason, I must warn you that I really simplified it all down to the problem at hand: > error C3083: 'size': the symbol to the left of a '::' must be a type The point I'm trying to make is that I don't know what the correct syntax is to use here. It might be that "StyleInfo" is not the correct type of take the address from to begin with, but in my project I can fix that sort of thing (there's a whole framework there). I simply don't know how to point to this member-within-a-member.
Remember a pointer to a member is just used like a member. ``` Obj x; int y = x.*ptrMem; ``` But like normal members you can not access members of subclasses using the member access mechanism. So what you need to do is access it like you would access a member of the object (in your case via the size member). ``` #include <vector> #include <iostream> #include <utility> struct Rectangle { int width; int height; }; struct StyleInfo { Rectangle size; }; typedef Rectangle (StyleInfo::*StyleInfoMember); typedef int (Rectangle::*RectangleMember); typedef std::pair<StyleInfoMember,RectangleMember> Access; void AddStyleInfoMembers(std::vector<Access>& members) { members.push_back(std::make_pair(&StyleInfo::size,&Rectangle::width)); members.push_back(std::make_pair(&StyleInfo::size,&Rectangle::height)); } int main() { std::vector<Access> data; AddStyleInfoMembers(data); StyleInfo obj; obj.size.width = 10; std::cout << obj.*(data[0].first).*(data[0].second) << std::endl; } ``` This is not something I would recommend doing! An alternative (that I recommend even less) is to find the byte offset from the beginning of the class and then just add this to the object's address. Obviously this will involve a lot of casting backwards and forwards so this looks even worse than the above.
Is it definitely possible? I honestly don't know, never having played much with pointer-to-member. Suppose you were using non-POD types (I know you aren't, but the syntax would have to support it). Then pointer-to-member might have to encapsulate more than just an offset from the base pointer. There might be indirection as well, depending how multiple inheritance is implemented. With multiple levels of member indirection, this could get arbitrarily complicated, which is a lot to ask for a type that has to have fixed size. Perhaps you need a vector of pairs, of types defined by: ``` typedef Rectangle (StyleInfo::*StyleInfoMember); typedef int (Rectangle::*RectangleMember); ``` Apply each in turn to get where you want to be. Of course this still doesn't let you build a vector of mappings from a StyleInfo to arbitrary members-of-members-of StyleInfo, since they wouldn't all go through Rectangle. For that you may need to open a can of functors...
Pointer-to-data-member-of-data-member
[ "", "c++", "" ]
I am doing some simple sanity validation on various types. The current test I'm working on is checking to make sure their properties are populated. In this case, populated is defined as not null, having a length greater than zero (if a string), or not equal to 0 (if an integer). The "tricky" part of this test is that some properties are immune to this check. Right now I use a giant if statement that weeds out properties that don't need to be checked. ``` //Gets all the properties of the current feature. System.Reflection.PropertyInfo[] pi = t.GetProperties(); for(int i = 0; i < pi.Length; i++) { if(!pi[i].Name.Equals("PropertyOne") && !pi[i].Name.Equals("PropertyTwo") && !pi[i].Name.Equals("PropertyThree") //... repeat a bunch more times && pi[i].Name.IndexOf("ValueOne") == -1 && pi[i].Name.IndexOf("ValueTwo") == -1 //... repeat a bunch more times ) { //Perform the validation check. } } ``` When profiling, I noticed the if statement is actually performing worse than the reflection (not that the reflection is blazing fast). Is there a more efficient way to filter the properties of several different types? I've thought about a massive regular expression but I'm unsure on how to format it, plus it would probably be unreadable given its size. I've also considered storing the values in a List and then using Linq but I'm not sure how to handle the cases that use String.IndexOf() to find if the property contains a certain value. Thanks in advance.
Make a HashSet "exactNames" with PropertyOne, PropertyTwo etc, and then a List "partialNames" with ValueOne, ValueTwo etc. Then: ``` var matchingProperties = pi.Where(exactNames.Contains(pi.Name) || partialNames.Any(name => pi.Name.Contains(name)); foreach (PropertyInfo property in matchingProperties) { // Stuff } ``` (Odd indenting just to avoid wrapping.) Note that you could cache the set of properties to validate on a per-type basis, so you only need to go through this check once per type.
Your idea helped speed up my program, thank you. However, you had some syntax issues, plus you were matching items found in the lists and I needed items not in the list. Here is the code I ended up using. ``` List<System.Reflection.PropertyInfo> pi = type.GetProperties().ToList(); var matchingProperties = pi.Where( prop => !PropertyExclusionSet.Contains( prop.Name ) && !PropertiesPartialSet.Any( name => prop.Name.Contains( name ) ) ); ```
Filtering an Objects Properties by Name
[ "", "c#", "string", "properties", "" ]
I'm working on a project where a program running on the mobile phone needs to communicate with a program running on the PC it's connected to. Ideally, I'd like to use USB, WiFi, whatever to communicate. The two programs should be able to communicate things like battery life, text messages, etc... But I can work on that later, I just need to get them to talk. What's the best way to do this?
"Best" is really subjective and highly dependent on a lot of factors like devices, topology, firewall presence, need for security, etc, etc. Where do you need the comms to originate and will you have an ActiveSync connection? If the PC initiates the comms and you have ActiveSync, then RAPI is the transport you'd use as it's got all of the infrastructure done and ready. For anything else you're going to need some form of proprietary protocol and transport mechanism. Typically I write a simple socket protocol with a defined message structure (typically a message ID, CRC, message length and data payload). I then have some base message class that handles the comms and a set of derived messages for each specific command I want. For 2-way stuff that requires a response, I typically create a base Response class and then derive specific response formats from it.
Assuming you have a wifi connection, one way for your Windows Mobile program to communicate with your PC would be to use WCF on the .NET Compact Framework 3.5. You'd create a new WCF application to run on your PC, exposing an interface with the functions you want to call from your Windows Mobile device. WCF on Windows Mobile requires Compact Framework 3.5 to be installed on your device. You also need the "Windows Mobile power toys" to be able to generate compatible proxies to call from Windows Mobile. [Power Toys for .NET Compact Framework 3.5](http://www.microsoft.com/downloads/details.aspx?FamilyID=c8174c14-a27d-4148-bf01-86c2e0953eab&DisplayLang=en) Calling the WCF service from your WM device also requires you to manually set up the binding and endpoint to pass into your web service proxy (with desktop WCF this is done automatically by loading them from a config file). WCF on Windows Mobile currently only supports the basic http binding (which can be encrypted if you want), but this may be enough for your needs.
Windows Mobile (C#) - Communicating between phone and PC
[ "", "c#", "windows-mobile", "" ]
I need to activate a JButton ActionListener within a JDialog so I can do some unit testing using JUnit. Basically I have this: ``` public class MyDialog extends JDialog { public static int APPLY_OPTION= 1; protected int buttonpressed; protected JButton okButton; public MyDialog(Frame f) { super(f); okButton = new JButton("ok"); okButton.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { buttonpressed= APPLY_OPTION; } } ); } public int getButtonPressed() { return buttonpressed; } } ``` then I have my JUnit file: ``` public class testMyDialog { @Test public void testGetButtonPressed() { MyDialog fc= new MyDialog(null); fc.okButton.???????? //how do I activate the ActionListener? assertEquals(MyDialog.APPLY_OPTION, fc.getButtonPressed()); } } ``` This may sound redundant to do in a unit test, but the actual class is a lot more complicated than that...
`AbstractButton.doClick` Your tests might run faster if you use the form that takes an argument and give it a shorter delay. The call blocks for the delay.
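Applied to the example above, `doClick()` fires the registered `ActionListener` synchronously, so an assertion can run right after it. A minimal sketch (the zero-delay overload avoids the blocking wait mentioned above; the class and field names are illustrative, not from the question):

```java
import javax.swing.JButton;
import java.util.concurrent.atomic.AtomicBoolean;

class DoClickSketch {
    /** Demonstrates that doClick() invokes the button's ActionListener. */
    static boolean clickFires() {
        AtomicBoolean fired = new AtomicBoolean(false);
        JButton okButton = new JButton("OK");
        okButton.addActionListener(e -> fired.set(true));
        okButton.doClick(0); // zero-millisecond press: listener runs before this returns
        return fired.get();
    }
}
```

In the test from the question this would simply be `fc.okButton.doClick();` before the `assertEquals`.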
If you have non-trivial code directly in your event handler that needs unit testing, you might want to consider adopting the [MVC pattern](http://en.wikipedia.org/wiki/Model-view-controller) and moving the code to the controller. Then you can unit test the code using a mock View, and you never need to programmatically press the button at all.
How do I activate JButton ActionListener inside code (unit testing purposes)?
[ "", "java", "unit-testing", "swing", "junit", "" ]
I have a base class that has a private static member: ``` class Base { private static Base m_instance = new Base(); public static Base Instance { get { return m_instance; } } } ``` And I want to derive multiple classes from this: ``` class DerivedA : Base {} class DerivedB : Base {} class DerivedC : Base {} ``` However, at this point calling DerivedA::Instance will return the same exact object as will DerivedB::Instance and DerivedC::Instance. I can solve this by declaring the instance in the derived class, but then every single derived class will need to do that and that just seems like it should be unneccessary. So is there any way to put all this in the base class? Could a design pattern be applied?
There's one really icky way of doing this: ``` class Base { // Put common stuff in here... } class Base<T> : Base where T : Base<T>, new() { private static T m_instance = new T(); public static T Instance { get { return m_instance; } } } class DerivedA : Base<DerivedA> {} class DerivedB : Base<DerivedB> {} class DerivedC : Base<DerivedC> {} ``` This works because there's one static variable *per constructed type* - e.g. `List<string>` is a different type to `List<int>` and so would have separate static variables. I've taken the opportunity of making it an instance of the derived class as well - I don't know whether that's what you want or not, but I thought I'd at least make it available for you :) In general though, this is a nasty thing to do. Static variables aren't really designed for this kind of use - I've just abused a feature of generics to get "sort of" the behaviour you asked for. Also note that `Base<DerivedA>.Instance` will return the same result as `DerivedA.Instance` - the property/variable don't "know" that you're using `DerivedA.Instance`. I don't know whether or not that's important to you. With the extra non-generic class, you can write: ``` Base t = DerivedA.Instance; t = DerivedB.Instance; ``` If you don't need that, take it out :)
Static methods do not support polymorphism; therefore, such a thing is not possible. Fundamentally, the Instance property has no idea how you're using it. And a single implementation of it will exist, as it's static. If you really wanted to do this, this "not recommended" solution is available (I got the idea from Jon's solution): ``` private static Dictionary<Type, Base> instances = new Dictionary<Type, Base>(); public static T GetInstance<T>() where T : Base, new() { Type ty = typeof(T); Base x; if (instances.TryGetValue(ty, out x)) return (T)x; T t = new T(); instances[ty] = t; return t; } ```
Deriving static members
[ "", "c#", "design-patterns", "oop", "" ]
I've got about 200k images in a bucket. They all have expires headers of 2050 but I've read you shouldn't send an expires header older than a year. I want to schedule a script to run every month and set the headers to 6 months away. Anything out there? Obviously I'd like to avoid iterating 200k objects.
*Disclaimer:* I am the developer of this tool, but I think it may answer your question. [CloudBerry Explorer](http://cloudberrylab.com) freeware will be able to do it in the next release.
S3 doesn't appear to support bulk updates, as its APIs contain no such operations. Some third-party tools claim bulk update capability; however, I wager that they merely automate iteration.
Can I set the expires header on all objects in an Amazon S3 bucket all at once?
[ "", "c#", ".net", "amazon-s3", "amazon-web-services", "cloud", "" ]
``` void addNewNode (struct node *head, int n) { struct node* temp = (struct node*) malloc(sizeof(struct node)); temp -> data = n; temp -> link = head; head = temp; } ``` The code given above is the popularly wrong version of a function for adding a new node at the head of a linked list. Generally the correct versions are like, ``` void addNewNode (struct node **head, int n); void addNewNode (struct node * &head, int n); ``` I worked out another, simpler function for the purpose, which worked fine. ``` struct node* addNewNode (struct node *head, int n) { struct node* temp = (struct node*) malloc(sizeof(struct node)); temp -> data = n; temp -> link = head; return temp; } ``` But I haven't seen this being used or discussed in code and tutorials and thus I am curious to know if this approach has some flaw.
The flaw is that you're relying on the caller to perform the last step of updating the head pointer to the list. If the caller neglects to do this, the compiler will not complain, and for all intents and purposes the list will appear to not have changed (and you'll have leaked the memory for a node).
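The usage pattern for the value-returning version makes the hazard concrete — the caller has to capture the result, and nothing forces them to. A small self-contained sketch (the helper function is illustrative, not from the question):

```c
#include <stdlib.h>

struct node {
    int data;
    struct node *link;
};

/* Value-returning version from the question. */
struct node *addNewNode(struct node *head, int n) {
    struct node *temp = malloc(sizeof *temp);
    temp->data = n;
    temp->link = head;
    return temp;
}

/* Builds the list 2 -> 1 -> NULL. Reassigning head is the step a
   careless caller can omit without any complaint from the compiler. */
struct node *buildExample(void) {
    struct node *head = NULL;
    head = addNewNode(head, 1);
    head = addNewNode(head, 2);
    /* addNewNode(head, 3); would compile, change nothing visible,
       and leak the new node. */
    return head;
}
```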
This is how linked lists work in most functional languages. For example, in ML you might do something like this: ``` val ls = [1, 2, 3, 4] val newList = 0 :: ls ``` The `::` syntax is actually a function which takes two parameters (`0` and `ls`) and returns a new list which has `0` as the first element. Lists in ML are actually defined as list nodes, so `::` is actually written very similarly to the `addNewNode` function you proposed. In other words: congratulations, you have created an immutable linked list implementation in C! Understanding this is actually a fairly important first step to functional languages, so it's really a good thing to know.
New approach for adding a new Node to a Linked List
[ "", "c++", "c", "data-structures", "pointers", "" ]
jQuery selectors are wonderful, but sometimes I find myself typing them over and over, and it gets a little annoying. ``` $('#mybutton').click(function() { $('#message-box').doSomething(); $('#message-box').doSomethingElse(); $('#message-box').attr('something', 'something'); }); ``` So often I like to cache my objects in variables: ``` $('#mybutton').click(function() { var msg = $('#message-box'); msg.doSomething(); msg.doSomethingElse(); // you get the idea }); ``` Are there any pros or cons between these two patterns? Sometimes it feels like creating the variables is extra work, but sometimes it saves my fingers a lot of typing. Are there any memory concerns to be aware of? Do selectors clean up nicely after being used, whereas my bad coding habits tend to keep the vars in memory longer? This doesn't keep me up at night, but I am curious. Thanks. EDIT: Please see [this question](https://stackoverflow.com/questions/291841/does-jquery-do-any-kind-of-caching-of-selectors). It essentially asks the same thing, but I like the answer better.
You should chain them: ``` $('#mybutton').click(function() { $('#message-box').doSomething().doSomethingElse().attr('something', 'something'); }); ``` If you need to do something over and over again and the functions don't return the jQuery object, saving it in a var is faster.
I usually chain them as per Pim's answer; however, sometimes when there's a lot of operations to be done at once, chaining can cause readability issues. In those cases I cache the selected jQuery objects in a variable.
Always use jquery selectors or cache them in variables?
[ "", "javascript", "jquery", "performance", "" ]
I've got some binary data which I want to save as an image. When I try to save the image, it throws an exception if the memory stream used to create the image was closed before the save. The reason I do this is because I'm dynamically creating images and as such... I need to use a memory stream. This is the code: ``` [TestMethod] public void TestMethod1() { // Grab the binary data. byte[] data = File.ReadAllBytes("Chick.jpg"); // Read in the data but do not close, before using the stream. Stream originalBinaryDataStream = new MemoryStream(data); Bitmap image = new Bitmap(originalBinaryDataStream); image.Save(@"c:\test.jpg"); originalBinaryDataStream.Dispose(); // Now lets use a nice dispose, etc... Bitmap image2; using (Stream originalBinaryDataStream2 = new MemoryStream(data)) { image2 = new Bitmap(originalBinaryDataStream2); } image2.Save(@"C:\temp\pewpew.jpg"); // This throws the GDI+ exception. } ``` Does anyone have any suggestions as to how I could save an image with the stream closed? I cannot rely on the developers to remember to close the stream after the image is saved. In fact, the developer would have NO IDEA that the image was generated using a memory stream (because it happens in some other code, elsewhere). I'm really confused :(
As it's a MemoryStream, you really don't *need* to close the stream - nothing bad will happen if you don't, although obviously it's good practice to dispose anything that's disposable anyway. (See [this question](https://stackoverflow.com/questions/234059) for more on this.) However, you *should* be disposing the Bitmap - and that will close the stream for you. Basically once you give the Bitmap constructor a stream, it "owns" the stream and you shouldn't close it. As [the docs for that constructor](http://msdn.microsoft.com/en-us/library/z7ha67kw.aspx) say: > You must keep the stream open for the > lifetime of the Bitmap. I can't find any docs promising to close the stream when you dispose the bitmap, but you should be able to verify that fairly easily.
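A sketch of the pattern described above — keep the stream open while the `Bitmap` is alive, and let disposing the `Bitmap` be what ends the stream's useful life (the file paths are just the question's examples):

```csharp
using System.Drawing;
using System.IO;

byte[] data = File.ReadAllBytes("Chick.jpg");

// Do NOT wrap the stream in a using block that ends before the Bitmap is
// finished with it; the Bitmap owns the stream once it is constructed.
var stream = new MemoryStream(data);
using (var image = new Bitmap(stream))
{
    image.Save(@"C:\temp\pewpew.jpg"); // the stream is still open here, so this works
} // disposing the Bitmap releases the stream for us
```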
**A generic error occurred in GDI+** may also result from an **incorrect save path**! It took me half a day to notice that. So make sure that you have double-checked the path where the image is saved as well.
Image.Save(..) throws a GDI+ exception because the memory stream is closed
[ "", "c#", "image", "exception", "gdi+", "" ]
I get the following error while building OpenCV on OS X 10.5 (intel): ``` ld: warning in .libs/_cv_la-_cv.o, file is not of required architecture ld: warning in .libs/_cv_la-error.o, file is not of required architecture ld: warning in .libs/_cv_la-pyhelpers.o, file is not of required architecture ld: warning in .libs/_cv_la-cvshadow.o, file is not of required architecture ld: warning in ../../../cv/src/.libs/libcv.dylib, file is not of required architecture ld: warning in /Developer/SDKs/MacOSX10.4u.sdk/usr/local/lib/libcxcore.dylib, file is not of required architecture Undefined symbols for architecture i386: "_fputs$UNIX2003", referenced from: _PySwigObject_print in _cv_la-_cv.o _PySwigPacked_print in _cv_la-_cv.o _PySwigPacked_print in _cv_la-_cv.o ld: symbol(s) not found for architecture i386 collect2: ld returned 1 exit status lipo: can't open input file: /var/folders/Sr/Srq9N4R8Hr82xeFvW3o-uk+++TI/-Tmp-//cchT0WVX.out (No such file or directory) make[4]: *** [_cv.la] Error 1 make[3]: *** [all-recursive] Error 1 make[2]: *** [all-recursive] Error 1 make[1]: *** [all-recursive] Error 1 make: *** [all] Error 2 ``` While running ./configure --without-python everything is ok. Another strange thing is that when I used Python 2.4.5 or 2.5.1 everything built OK; the problem occurred after switching to Python Framework 2.5.2
Ok, I kind of worked it out. It needs to be compiled with the Python from MacPorts or whatever. Then I need to run `/System/Library/Frameworks/Python.framework/Versions/2.5/bin/python2.5` (this is my previous Python version) and there OpenCV just works.
It seems a little weird that it is warning about different architectures when looking for /Developer/SDKs/MacOSX10.4u.sdk while linking - can you give us some more detail about your build environment (version of Xcode, GCC, Python, $PATH etc.)? Alternatively, won't any of the available OpenCV binaries work for you?
OpenCV's Python - OS X
[ "", "python", "macos", "opencv", "" ]
I want to convert from char representing a hexadecimal value (in upper or lower case) to byte, like ``` '0'->0, '1' -> 1, 'A' -> 10, 'a' -> 10, 'f' -> 15 etc... ``` I will be calling this method extremely often, so performance is important. Is there a faster way than to use a pre-initialized `HashMap<Character,Byte>` to get the value from? **Answer** It seems like ~~it's a tossup between using a switch-case and Jon Skeet's direct computing solution - the switch-case solution seems to edge out ever so slightly, though.~~ Greg's array method wins out. Here are the performance results (in ms) for 200,000,000 runs of the various methods: ``` Character.getNumericValue: 8360 Character.digit: 8453 HashMap<Character,Byte>: 15109 Greg's Array Method: 6656 JonSkeet's Direct Method: 7344 Switch: 7281 ``` Thanks guys! **Benchmark method code** Here ya go, JonSkeet, you old competitor. ;-) ``` public class ScratchPad { private static final int NUMBER_OF_RUNS = 200000000; static byte res; static HashMap<Character, Byte> map = new HashMap<Character, Byte>() {{ put( Character.valueOf( '0' ), Byte.valueOf( (byte )0 )); put( Character.valueOf( '1' ), Byte.valueOf( (byte )1 )); put( Character.valueOf( '2' ), Byte.valueOf( (byte )2 )); put( Character.valueOf( '3' ), Byte.valueOf( (byte )3 )); put( Character.valueOf( '4' ), Byte.valueOf( (byte )4 )); put( Character.valueOf( '5' ), Byte.valueOf( (byte )5 )); put( Character.valueOf( '6' ), Byte.valueOf( (byte )6 )); put( Character.valueOf( '7' ), Byte.valueOf( (byte )7 )); put( Character.valueOf( '8' ), Byte.valueOf( (byte )8 )); put( Character.valueOf( '9' ), Byte.valueOf( (byte )9 )); put( Character.valueOf( 'a' ), Byte.valueOf( (byte )10 )); put( Character.valueOf( 'b' ), Byte.valueOf( (byte )11 )); put( Character.valueOf( 'c' ), Byte.valueOf( (byte )12 )); put( Character.valueOf( 'd' ), Byte.valueOf( (byte )13 )); put( Character.valueOf( 'e' ), Byte.valueOf( (byte )14 )); put( Character.valueOf( 'f' ), Byte.valueOf( (byte )15 )); 
put( Character.valueOf( 'A' ), Byte.valueOf( (byte )10 )); put( Character.valueOf( 'B' ), Byte.valueOf( (byte )11 )); put( Character.valueOf( 'C' ), Byte.valueOf( (byte )12 )); put( Character.valueOf( 'D' ), Byte.valueOf( (byte )13 )); put( Character.valueOf( 'E' ), Byte.valueOf( (byte )14 )); put( Character.valueOf( 'F' ), Byte.valueOf( (byte )15 )); }}; static int[] charValues = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -1, -1, -1, -1, -1, -1, -1, 10, 11, 12, 13,14,15,-1,-1,-1,-1,-1,-1,-1,-1,-1, -1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,10, 11, 12, 13,14,15}; static char[] cs = new char[]{'0','1','2','3','4','5','6','7','8','9','a','b','c','d','e','f','A','B','C','D','E','F'}; public static void main(String args[]) throws Exception { long time = System.currentTimeMillis(); for( int i = 0; i < NUMBER_OF_RUNS; i++ ) { res = getNumericValue( i ); } System.out.println( "Character.getNumericValue:" ); System.out.println( System.currentTimeMillis()-time ); time = System.currentTimeMillis(); for( int i = 0; i < NUMBER_OF_RUNS; i++ ) { res = getDigit( i ); } System.out.println( "Character.digit:" ); System.out.println( System.currentTimeMillis()-time ); time = System.currentTimeMillis(); for( int i = 0; i < NUMBER_OF_RUNS; i++ ) { try { res = getValueFromArray( i ); } catch (IllegalArgumentException e) { } } System.out.println( "Array:" ); System.out.println( System.currentTimeMillis()-time ); time = System.currentTimeMillis(); for( int i = 0; i < NUMBER_OF_RUNS; i++ ) { res = getValueFromHashMap( i ); } System.out.println( "HashMap<Character,Byte>:" ); System.out.println( System.currentTimeMillis()-time ); time = System.currentTimeMillis(); for( int i = 0; i < NUMBER_OF_RUNS; i++ ) { char c = cs[i%cs.length]; res = getValueFromComputeMethod( c ); } System.out.println( "JonSkeet's Direct Method:" ); System.out.println( System.currentTimeMillis()-time ); time = System.currentTimeMillis(); for( int i = 0; i < NUMBER_OF_RUNS; i++ ) { res = getValueFromSwitch( i ); } 
System.out.println( "Switch:" ); System.out.println( System.currentTimeMillis()-time ); } private static byte getValueFromSwitch( int i ) { byte res; char ch = cs[i%cs.length]; switch( ch ) { case '0': res = 0; break; case '1': res = 1; break; case '2': res = 2; break; case '3': res = 3; break; case '4': res = 4; break; case '5': res = 5; break; case '6': res = 6; break; case '7': res = 7; break; case '8': res = 8; break; case '9': res = 9; break; case 'a': case 'A': res = 10; break; case 'b': case 'B': res = 11; break; case 'c': case 'C': res = 12; break; case 'd': case 'D': res = 13; break; case 'e': case 'E': res = 14; break; case 'f': case 'F': res = 15; break; default: throw new RuntimeException("unknown hex character: " + ch ); } return res; } private static byte getValueFromComputeMethod( char c ) { byte result = 0; if (c >= '0' && c <= '9') { result = (byte)(c - '0'); } if (c >= 'a' && c <= 'f') { result = (byte)(c - 'a' + 10); } if (c >= 'A' && c <= 'F') { result = (byte)(c - 'A' + 10); } return result; } private static byte getValueFromHashMap( int i ) { return map.get( Character.valueOf( cs[i%cs.length] ) ).byteValue(); } private static byte getValueFromArray( int i ) { char c = cs[i%cs.length]; if (c < '0' || c > 'f') { throw new IllegalArgumentException(); } byte result = (byte)charValues[c-'0']; if (result < 0) { throw new IllegalArgumentException(); } return result; } private static byte getDigit( int i ) { return (byte)Character.digit( cs[i%cs.length], 16 ); } private static byte getNumericValue( int i ) { return (byte)Character.getNumericValue( cs[i%cs.length] ); } } ```
A preinitialised array would be faster than a HashMap. Something like this (the elided entries are -1 for the non-hex-digit characters between '9' and 'a'): ``` int[] charValues = { /* 'f'-'0'+1 == 55 entries */ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -1, -1, ... -1, 10, 11, 12, ...}; if (c < '0' || c > 'f') { throw new IllegalArgumentException(); } int n = charValues[c-'0']; if (n < 0) { throw new IllegalArgumentException(); } // n contains the digit value ``` You should benchmark this method against other methods (such as [Jon Skeet's](https://stackoverflow.com/questions/221001/performance-question-fastest-way-to-convert-hexadecimal-char-to-its-number-valu#221012) direct method) to determine which will be the fastest for your application.
A hash table would be relatively slow. This is pretty quick: ``` if (c >= '0' && c <= '9') { return c - '0'; } if (c >= 'a' && c <= 'f') { return c - 'a' + 10; } if (c >= 'A' && c <= 'F') { return c - 'A' + 10; } throw new IllegalArgumentException(); ``` Another option would be to try a switch/case statement. An array might be okay if it's in cache, but a miss could be expensive.
Performance question: Fastest way to convert hexadecimal char to its number value in Java?
[ "", "java", "performance", "algorithm", "" ]
I'm implementing an audit log on a database, so everything has a CreatedAt and a RemovedAt column. Now I want to be able to list all revisions of an object graph but the best way I can think of for this is to use unions. I need to get every unique CreatedAt and RemovedAt id. If I'm getting a list of countries with provinces the union looks like this: ``` SELECT c.CreatedAt AS RevisionId from Countries as c where localId=@Country UNION SELECT p.CreatedAt AS RevisionId from Provinces as p INNER JOIN Countries as c ON p.CountryId=c.LocalId AND c.LocalId = @Country UNION SELECT c.RemovedAt AS RevisionId from Countries as c where localId=@Country UNION SELECT p.RemovedAt AS RevisionId from Provinces as p INNER JOIN Countries as c ON p.CountryId=c.LocalId AND c.LocalId = @Country ``` For more complicated queries this could get quite complicated and possibly perform very poorly so I wanted to see if anyone could think of a better approach. This is in MSSQL Server. I need them all in a single list because this is being used in a from clause and the real data comes from joining on this.
You have most likely already implemented your solution, but to address a few issues; I would suggest considering Aleris's solution, or some derivative thereof. - In your tables, you have a "removed at" field -- well, if that field were active (populated), technically the data shouldn't be there -- or perhaps your implementation has it flagged for deletion, which will break the logging once it *is* removed. - What happens when you have multiple updates during a reporting period -- the previous log entries would be overwritten. - Having a separate log allows for archival of the log information and allows you to set a different log analysis cycle from your update/edit cycles. - Add whatever "linking" fields required to enable you to get back to your original source data *OR* make the descriptions sufficiently verbose. The fields contained in your log are up to you but Aleris's solution is direct. I may create an action table and change the field type from varchar to int, as a link into the action table -- forcing the developers to some standardized actions. Hope it helps.
An alternative would be to create an audit log that might look like this: ``` AuditLog table EntityName varchar(2000), Action varchar(255), EntityId int, OccuranceDate datetime ``` where EntityName is the name of the table (eg: Countries, Provinces), the Action is the audit action (eg: Created, Removed etc) and the EntityId is the primary key of the modified row in the original table. The table would need to be kept synchronized on each action performed on the tables. There are a couple of ways to do this: 1) Make triggers on each table that will add rows to AuditLog 2) From your application, add rows in AuditLog each time a change is made to the respective tables Using this solution, it is very simple to get a list of logs in audit. If you need to get columns from the original table, that is also possible using joins like this: ``` select * from Countries C join AuditLog L on C.Id = L.EntityId and EntityName = 'Countries' ```
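For the trigger option, one trigger per table keeps the log synchronized without touching application code. A hedged T-SQL sketch for the Countries table — the trigger name and `Action` strings are examples, and the schema follows the AuditLog definition above:

```sql
CREATE TRIGGER trg_Countries_Audit
ON Countries
AFTER INSERT, DELETE
AS
BEGIN
    -- inserted/deleted are the pseudo-tables holding the affected rows
    INSERT INTO AuditLog (EntityName, Action, EntityId, OccuranceDate)
    SELECT 'Countries', 'Created', Id, GETDATE() FROM inserted;

    INSERT INTO AuditLog (EntityName, Action, EntityId, OccuranceDate)
    SELECT 'Countries', 'Removed', Id, GETDATE() FROM deleted;
END
```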
Query to get all revisions of an object graph
[ "", "sql", "sql-server", "orm", "union", "" ]
Windows Mobile pops up a "busy wheel" - a rotating colour disk - when things are happening. I can't find in the documentation how this is done - can someone point me in the right direction? We have a situation where we need to prompt the user to say we're doing stuff for a while, but we don't know how long it will take. So we can't do a progress bar, hence the proposal to use this busy wheel.
Use [SetCursor](http://msdn.microsoft.com/en-us/library/ms940016.aspx)/[LoadCursor](http://msdn.microsoft.com/en-us/library/aa453410.aspx)/[ShowCursor](http://msdn.microsoft.com/en-us/library/aa453730.aspx) APIs, like this: ``` SetCursor(LoadCursor(NULL, IDC_WAIT)); // my code ShowCursor(FALSE); ```
Using the Compact Framework. Spinning wheel: `System.Windows.Forms.Cursor.Current = System.Windows.Forms.Cursors.WaitCursor;` Return to normal: `System.Windows.Forms.Cursor.Current = System.Windows.Forms.Cursors.Default;`
How do I get a "busy wheel" on Windows Mobile 6?
[ "", "c++", "windows-mobile", "progress-bar", "" ]
Since I'm sure many people have different standards, I've made this post a community wiki. My question is, what's a good naming scheme for table aliases? I've been using the first letter of every word from the table name, but it's been getting quite unreadable. Here's a quick example. ``` FROM incidents i FROM cause_attack ca FROM obscure_table ot ``` Thank you.
The whole point of an alias is to shorten the name so you don't need verbosity. It only needs to be unique within a given query, so there's no need for a scheme for naming them. Edit: Also, the aliases you'd use depend highly on the table naming scheme. If all your tables have a 5-part name where the first 4 are common across a query, it's silly to keep those parts in the aliases.
The table names themselves should already be readable. Therefore, if you want a readable name just don't alias. This means the purpose of an alias is as much to save your poor fingers from re-typing long names as anything else. In that case, short terse names work well, especially as they must be declared right next to the full name. The only exceptions here are if you're joining a table more than once, in which case you'll need something to identify which instance of the table you need, or if you're aliasing a subquery.
Readable SQL aliases
[ "", "sql", "oracle", "alias", "" ]
I was wondering how to unit test abstract classes, and classes that extend abstract classes. Should I test the abstract class by extending it, stubbing out the abstract methods, and then test all the concrete methods? Then only test the methods I override, and test the abstract methods in the unit tests for objects that extend my abstract class? Should I have an abstract test case that can be used to test the methods of the abstract class, and extend this class in my test case for objects that extend the abstract class? Note that my abstract class has some concrete methods.
There are two ways in which abstract base classes are used. 1. You are specializing your abstract object, but all clients will use the derived class through its base interface. 2. You are using an abstract base class to factor out duplication within objects in your design, and clients use the concrete implementations through their own interfaces. --- **Solution For 1 - Strategy Pattern** ![Option1](https://i.stack.imgur.com/Bu4Vy.png) If you have the first situation, then you actually have an interface defined by the virtual methods in the abstract class that your derived classes are implementing. You should consider making this a real interface, changing your abstract class to be concrete, and having it take an instance of this interface in its constructor. Your derived classes then become implementations of this new interface. ![IMotor](https://i.stack.imgur.com/wPPqA.png) This means you can now test your previously abstract class using a mock instance of the new interface, and each new implementation through the now public interface. Everything is simple and testable. --- **Solution For 2** If you have the second situation, then your abstract class is working as a helper class. ![AbstractHelper](https://i.stack.imgur.com/ABrGO.png) Take a look at the functionality it contains. See if any of it can be pushed onto the objects that are being manipulated to minimize this duplication. If you still have anything left, look at making it a helper class that your concrete implementations take in their constructors, and remove their base class. ![Motor Helper](https://i.stack.imgur.com/eySjQ.png) This again leads to concrete classes that are simple and easily testable. --- **As a Rule** Favor a complex network of simple objects over a simple network of complex objects. The key to extensible testable code is small building blocks and independent wiring. --- **Updated : How to handle mixtures of both?** It is possible to have a base class performing both of these roles... 
i.e. it has a public interface, and has protected helper methods. If this is the case, then you can factor out the helper methods into one class (scenario 2) and convert the inheritance tree into a strategy pattern. If you find you have some methods your base class implements directly and others are virtual, then you can still convert the inheritance tree into a strategy pattern, but I would also take it as a good indicator that the responsibilities are not correctly aligned, and may need refactoring. --- **Update 2 : Abstract Classes as a stepping stone (2014/06/12)** I had a situation the other day where I used abstract, so I'd like to explore why. We have a standard format for our configuration files. This particular tool has 3 configuration files all in that format. I wanted a strongly typed class for each setting file so, through dependency injection, a class could ask for the settings it cared about. I implemented this by having an abstract base class that knows how to parse the settings file format and derived classes that exposed those same methods, but encapsulated the location of the settings file. I could have written a "SettingsFileParser" that the 3 classes wrapped, and then delegated through to the base class to expose the data access methods. I chose not to do this *yet* as it would lead to 3 derived classes with more *delegation* code in them than anything else. However... as this code evolves, the consumers of each of these settings classes will become clearer. Each settings user will ask for some settings and transform them in some way (as settings are text, they may wrap them in objects or convert them to numbers etc.). As this happens I will start to extract this logic into data manipulation methods and push them back onto the strongly typed settings classes. This will lead to a higher level interface for each set of settings, that is eventually no longer aware it's dealing with 'settings'. 
At this point the strongly typed settings classes will no longer need the "getter" methods that expose the underlying 'settings' implementation. At that point I would no longer want their public interface to include the settings accessor methods; so I will change this class to encapsulate a settings parser class instead of deriving from it. The abstract class is therefore a way for me to avoid delegation code at the moment, and a marker in the code to remind me to change the design later. I may never get to it, so it may live a good while... only the code can tell. I find this to be true with any rule... like "no static methods" or "no private methods". They indicate a smell in the code... and that's good. It keeps you looking for the abstraction that you have missed... and lets you carry on providing value to your customer in the meantime. I imagine rules like this one defining a landscape, where maintainable code lives in the valleys. As you add new behaviour, it's like rain landing on your code. Initially you put it wherever it lands... then you refactor to allow the forces of good design to push the behaviour around until it all ends up in the valleys.
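The strategy-pattern refactor described under *Solution For 1* can be sketched in a few lines. This is a hedged illustration in Python rather than Java (the shape is the same either way), and every class name here is invented for the example:

```python
class Motor:
    """The interface extracted from the former abstract methods."""
    def spin_up(self):
        raise NotImplementedError

class Vehicle:
    """Previously an abstract base class; now concrete and testable alone."""
    def __init__(self, motor):
        self._motor = motor

    def start(self):
        # The formerly concrete template logic lives here unchanged.
        return "started: " + self._motor.spin_up()

class FakeMotor(Motor):
    """Hand-rolled test double standing in for a mocking framework."""
    def spin_up(self):
        return "fake-motor"

result = Vehicle(FakeMotor()).start()
```

In a real test you would hand `Vehicle` a mock from your mocking framework instead of the hand-rolled `FakeMotor`; the point is that nothing abstract is left to subclass just to get the logic under test.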
Write a Mock object and use it just for testing. Mocks are usually very minimal (they inherit from the abstract class) and nothing more. Then, in your unit test you can call the abstract method you want to test. You should test an abstract class that contains some logic, just like every other class you have.
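The "extend with stubs" approach the question asks about can be sketched like this, with Python's `abc` module standing in for a Java abstract class (the names are invented for illustration):

```python
import abc

class Report(abc.ABC):
    """Abstract class with a mix of abstract and concrete methods."""
    @abc.abstractmethod
    def body(self):
        ...

    def render(self):
        # Concrete logic that the test actually exercises.
        return "== " + self.body() + " =="

class StubReport(Report):
    """Test-only subclass: stubs the abstract method and nothing more."""
    def body(self):
        return "stub"

rendered = StubReport().render()
```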
How to unit test abstract classes: extend with stubs?
[ "", "java", "unit-testing", "testing", "abstract-class", "" ]
I need to extract some bitmaps from an .msstyles file (the Windows XP visual style files) and I'm not sure where to start. I can't seem to find any documentation on how to do it, and the file format seems to be binary and not easily parsed. I have been able to extract the bitmap by itself using: ``` IntPtr p = LoadLibrary(UxTheme.ThemeName); Bitmap bmp = Bitmap.FromResource(p, "BITMAP_NAME"); FreeLibrary(p); ``` However, I also need the information related to the bitmap, like the margin sizes, spacing and number of "images" per bitmap. Does anyone have any experience with this or any links to documentation that I can use?
[This](http://filext.com/file-extension/MSSTYLES "File Extensions") site claims the file format is documented though not by Microsoft. Also found this in the [Wine Crossreference](http://source.winehq.org/source/dlls/uxtheme/msstyles.c "msstyles.c"). Hope that helps!
If you want to get files out of a dll directly (remember, msstyles are dlls with another extension), you could have a look at the [Anolis Project](http://www.codeplex.com/anolis "Anolis Project"). As for actually parsing that stuff you should look at the various tutorials on creating msstyles for information on how the various text resources in that file work. [This](http://www.codeproject.com/KB/miscctrl/XPTaskBar.aspx "CodeProject Article") codeproject article seems to have exactly what you want, with a little interop involved. A [managed wrapper](http://www.codeproject.com/KB/mcpp/ManagedUxTheme.aspx "Managed UX Theme") exists and it seems rather good. The .Net WindowsForms also has the functionality built in, you might want to look at the System.Windows.Forms.VisualStyles namespace if you want simplified read only access.
How to parse an .msstyles file?
[ "", "c#", "uxtheme", "msstyles", "" ]
By means of a regular expression and Greasemonkey I have an array of results that looks like: `choice1, choice2, choice3, choice3, choice1, etc..` My question is: how do I tally up the choices so I know how many times choice1 is in the array, choice2 is in the array, etc., if I do not know exactly how many choices there are or what they are? The ultimate goal is to create a Greasemonkey script that stores the number of votes each choice gets over multiple pages (probably using `GM_setValue`, although I'm open to other ideas). Thanks!
One technique would be to iterate over the choices and increment a counter associated with each unique choice in an object property. Example: ``` var choiceCounts = {}; for (var iLoop=0; iLoop < aChoices.length; iLoop++) { var keyChoice = aChoices[iLoop]; if (!choiceCounts[keyChoice]) { choiceCounts[keyChoice] = 1; } else { choiceCounts[keyChoice]++; } //if } //for ``` You then have an object whose property values equal the number of times each choice appeared in the array.
Not 100% sure what you're looking for, but I think this is it. ``` // Original data var choices = [ "choice 1", "choice 1", "choice 2", "choice 3", "choice 3", "choice 3", "choice 3", "choice 3", "choice 3", "choice 4", "choice 4", "choice 4"]; //Create the results array var result = new Object(); for (var choice in choices) { if (result[choices[choice]] === undefined) result[choices[choice]] = 1; else result[choices[choice]]++; } //Print result var str = ""; for (var item in result) str += item + ": " + result[item] + "<br />"; document.getElementById("resultDiv").innerHTML = str; ``` **Output:** ``` choice 1: 2 choice 2: 1 choice 3: 6 choice 4: 3 ```
Counting Results in an Array
[ "", "javascript", "arrays", "greasemonkey", "" ]
Is there a tool which tells you (or gives you a hint) why a particular select statement does not return any rows given the current data in your database? E.g. if you had the following 4-table join ``` select * from a, b, c, d where a.b_id = b.id and b.c_id = c.id and c.d_id = d.id ``` If there were rows which satisfied the condition `a.b_id = b.id`, and also rows which satisfied `b.c_id = c.id`, but no rows which satisfied the condition `c.d_id = d.id`, it would highlight `c.d_id = d.id` as the problem. I.e. it would break up the where clause and find out which of the sub-conditions returned true, and highlight those which did not. It would not work well for complex queries, but many select statements are simple joins over lots of tables. This would be useful when creating test data to exercise a piece of application code, or when debugging a problem with a live system. Graphical explain tools (that show the plan of the actual execution path) come close, but they show too much info and do not highlight the missing link in the select statement. I am using Postgres, SQLite and MySQL, but would be interested in how tools for other databases/platforms work. I'm also interested in any manual techniques. Does anybody else have this problem? Would anybody be interested if I wrote such a tool?
This will get you started: ``` select count(1) from a, b where a.b_id = b.id; select count(1) from b, c where b.c_id = c.id; select count(1) from c, d where c.d_id = d.id; ``` Note that you are using AND, so the overlap of the above queries may not be what you expect. Alternatively, using MS-SQL Server Management Studio: display the "Execution Plan" and mouse over the nodes for "Actual Number of Rows".
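The pairwise counts above are easy to script. A hedged sketch using Python's built-in `sqlite3` with a throwaway in-memory schema (the tables and data are invented so that the last join condition is the one that fails):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE d (id INTEGER);
    CREATE TABLE c (id INTEGER, d_id INTEGER);
    CREATE TABLE b (id INTEGER, c_id INTEGER);
    CREATE TABLE a (b_id INTEGER);
    INSERT INTO d VALUES (99);       -- no c row points at id 99
    INSERT INTO c VALUES (10, 42);
    INSERT INTO b VALUES (5, 10);
    INSERT INTO a VALUES (5);
""")

pair_checks = [
    ("a.b_id = b.id", "SELECT COUNT(1) FROM a, b WHERE a.b_id = b.id"),
    ("b.c_id = c.id", "SELECT COUNT(1) FROM b, c WHERE b.c_id = c.id"),
    ("c.d_id = d.id", "SELECT COUNT(1) FROM c, d WHERE c.d_id = d.id"),
]
# Conditions whose pairwise join matches no rows are the "missing links".
failing = [cond for cond, sql in pair_checks
           if cur.execute(sql).fetchone()[0] == 0]
```

Running the checks in join order points straight at the first condition that eliminates all rows.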
I never have this problem, but I also use explicit joins so it's usually as simple as running parts of the query until I find out which one is restricting my results incorrectly. In your case ``` SELECT * FROM a -- First run just to here, are there records? INNER JOIN b ON a.b_id = b.id -- Then run just to here, is it OK? INNER JOIN c ON b.c_id = c.id -- Then run just to here, is it OK? INNER JOIN d ON c.d_id = d.id -- Then run just to here, is it OK? ```
Tool to find out why an SQL select does not return any data
[ "", "sql", "postgresql", "" ]
In Python 2.6, a new "timeout" parameter was added to the httplib.HTTPConnection class: <http://docs.python.org/library/httplib.html#httplib.HTTPConnection> However, this is only a timeout for the connection to a server. I am looking to set a timeout value for the **request**, not the connection. This doesn't seem to be supported by httplib. Is there any way to emulate this behavior?
You can set a global socket timeout (\*): ``` import socket timeout = 10 socket.setdefaulttimeout(timeout) ``` --- (\*) **EDIT**: As people in the comments correctly point out: This is technically true, but it only has predictable results for tasks that involve a single socket operation. An HTTP request consists of multiple socket operations (e.g. DNS requests or other things that might be abstracted away from an HTTP client). The timeout of the overall operation becomes unpredictable because of that.
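To make the scope of that global setting concrete: it applies to sockets created *after* the call, and each blocking socket operation that exceeds it raises `socket.timeout`. A minimal sketch (no network traffic is needed just to demonstrate the setting):

```python
import socket

# setdefaulttimeout() affects sockets created *after* the call; it does
# not change sockets that already exist. The value is in seconds.
socket.setdefaulttimeout(10)
current = socket.getdefaulttimeout()

# Newly created sockets pick the default up automatically:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
picked_up = s.gettimeout()
s.close()

# Restore the library default so other code is unaffected.
socket.setdefaulttimeout(None)
```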
No, there isn't. It's because the [HTTP spec](http://www.greenbytes.de/tech/webdav/rfc2616.html) does not provide anything for the client to specify time-to-live information with an HTTP request. You can do this only on the TCP level, as you mentioned. On the other hand, the server may inform the client about timeout situations with the HTTP status codes [408 Request Timeout](http://www.greenbytes.de/tech/webdav/rfc2616.html#status.408) or [504 Gateway Timeout](http://www.greenbytes.de/tech/webdav/rfc2616.html#status.504).
HTTP Request Timeout
[ "", "python", "http", "" ]
I have a string with a length that is a multiple of 8 that contains only 0's and 1's. I want to convert the string into a byte array suitable for writing to a file. For instance, if I have the string "0010011010011101", I want to get the byte array [0x26, 0x9d], which, when written to file, will give 0x269d as the binary (raw) contents. How can I do this in Python?
``` py> data = "0010011010011101" py> data = [data[8*i:8*(i+1)] for i in range(len(data)/8)] py> data ['00100110', '10011101'] py> data = [int(i, 2) for i in data] py> data [38, 157] py> data = ''.join(chr(i) for i in data) py> data '&\x9d' ```
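For what it's worth, on Python 3.2+ (newer than this answer) the same conversion is a one-liner via `int.to_bytes` — an alternative sketch, not what the answer above used:

```python
# Parse the whole bit string as one integer, then emit it big-endian,
# one byte per 8 bits. Leading zero bits are preserved, because the
# output length comes from the string length rather than the value.
data = "0010011010011101"
raw = int(data, 2).to_bytes(len(data) // 8, "big")
```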
You could do something like this: ``` >>> s = "0010011010011101" >>> [int(s[x:x+8], 2) for x in range(0, len(s), 8)] [38, 157] ```
Converting a string of 1's and 0's to a byte array
[ "", "python", "" ]
Suppose you create a generic Object variable and assign it to a specific instance. If you do GetType(), will it get type Object or the type of the original class?
Yes. You can also do: ``` object c = new FooBar(); if(c is FooBar) Console.WriteLine("FOOBAR!!!"); ```
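For comparison, dynamically typed languages behave the same way; in Python, `type()` always reports the concrete class no matter how the reference was obtained (the class names here are invented for illustration):

```python
class Animal:
    pass

class Dog(Animal):
    pass

pet = Dog()                     # roughly: object c = new FooBar();
concrete = type(pet)            # roughly: c.GetType()
is_dog = isinstance(pet, Dog)   # roughly: c is FooBar
```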
**Short answer: GetType() will return the Type of the specific object.** I made a quick app to test this: ``` Foo f = new Foo(); Type t = f.GetType(); Object o = (object)f; Type t2 = o.GetType(); bool areSame = t.Equals(t2); ``` And yep, they are the same.
Do C# objects know the type of the more specific class?
[ "", "c#", "object", "types", "" ]
I'm writing a global error handling "module" for one of my applications. One of the features I want to have is to be able to easily wrap a function with a `try{} catch{}` block, so that all calls to that function will automatically have the error handling code that'll call my global logging method. (To avoid polluting the code everywhere with try/catch blocks). This is, however, slightly beyond my understanding of the low-level functioning of JavaScript, the `.call` and `.apply` methods, and the `this` keyword. I wrote this code, based on Prototype's `Function.wrap` method: ``` Object.extend(Function.prototype, { TryCatchWrap: function() { var __method = this; return function() { try { __method.apply(this, arguments) } catch(ex) { ErrorHandler.Exception(ex); } } } }); ``` Which is used like this: ``` function DoSomething(a, b, c, d) { document.write(a + b + c) alert(1/e); } var fn2 = DoSomething.TryCatchWrap(); fn2(1, 2, 3, 4); ``` That code works perfectly. It prints out 6, and then calls my global error handler. My question is: will this break something when the function I'm wrapping is within an object, and it uses the "this" operator? I'm slightly worried since I'm calling .apply, passing something there, I'm afraid this may break something.
Personally, instead of polluting built-in objects, I would go with a decorator technique: ``` var makeSafe = function(fn){ return function(){ try{ return fn.apply(this, arguments); }catch(ex){ ErrorHandler.Exception(ex); } }; }; ``` You can use it like this: ``` function fnOriginal(a){ console.log(1/a); }; var fn2 = makeSafe(fnOriginal); fn2(1); fn2(0); fn2("abracadabra!"); var obj = { method1: function(x){ /* do something */ }, method2: function(x){ /* do something */ } }; obj.safeMethod1 = makeSafe(obj.method1); obj.method1(42); // the original method obj.safeMethod1(42); // the "safe" method // let's override a method completely obj.method2 = makeSafe(obj.method2); ``` But if you do feel like modifying prototypes, you can write it like this: ``` Function.prototype.TryCatchWrap = function(){ var fn = this; // because we call it on the function itself // let's copy the rest from makeSafe() return function(){ try{ return fn.apply(this, arguments); }catch(ex){ ErrorHandler.Exception(ex); } }; }; ``` An obvious improvement would be to parameterize makeSafe() so you can specify what function to call in the catch block.
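The closing suggestion — parameterizing `makeSafe()` with the handler to call — looks like this as a Python decorator factory (a sketch of the same idea, not the original JavaScript):

```python
def make_safe(handler):
    """Decorator factory: route any exception from fn to the given handler."""
    def decorate(fn):
        def wrapped(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                return handler(exc)
        return wrapped
    return decorate

caught = []

@make_safe(handler=caught.append)
def divide(a, b):
    return a / b

ok = divide(6, 3)    # normal call passes straight through
bad = divide(1, 0)   # ZeroDivisionError is routed to the handler
```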
**2017 answer**: just use ES6. Given the following demo function: ``` function doThing(){ console.log(...arguments) } ``` You can make your own wrapper function without needing external libraries: ``` function wrap(someFunction){ function wrappedFunction(){ var newArguments = [...arguments] newArguments.push('SECRET EXTRA ARG ADDED BY WRAPPER!') console.log(`You're about to run a function with these arguments: \n ${newArguments}`) return someFunction(...newArguments) } return wrappedFunction } ``` In use: ``` doThing('one', 'two', 'three') ``` Works as normal. But using the new wrapped function: ``` const wrappedDoThing = wrap(doThing) wrappedDoThing('one', 'two', 'three') ``` Returns: ``` one two three SECRET EXTRA ARG ADDED BY WRAPPER! ``` **2016 answer**: use the `wrap` module: In the example below I'm wrapping `process.exit()`, but this works happily with any other function (including browser JS too). ``` var wrap = require('lodash.wrap'); var log = console.log.bind(console) var RESTART_FLUSH_DELAY = 3 * 1000 process.exit = wrap(process.exit, function(originalFunction) { log('Waiting', RESTART_FLUSH_DELAY, 'for buffers to flush before restarting') setTimeout(originalFunction, RESTART_FLUSH_DELAY) }); process.exit(1); ```
How do I wrap a function in Javascript?
[ "", "javascript", "function", "try-catch", "apply", "wrapper", "" ]