I want my controller to return the right HTTP response code when the user lacks permission to view a particular page.
**Quickie**

If you are using plain JSP views (as is most common), then simply add

```
<% response.setStatus( 403 ); %>
```

somewhere in your view file. At the top is a nice place.

**Detail**

In MVC, I would always set this in the view, and in most cases with Spring-MVC, use the `SimpleMappingExceptionResolver` to present the correct view in response to a thrown runtime exception. For example: create and throw a `PermissionDeniedException` in your controller or service layer and have the exception resolver point to a view file `permissionDenied.jsp`. This view file sets the 403 status and shows the user an appropriate message.

In your Spring bean XML file:

```
<bean id="exceptionResolver"
      class="org.springframework.web.servlet.handler.SimpleMappingExceptionResolver">
    <property name="exceptionMappings">
        <props>
            <prop key="PermissionDeniedException">rescues/permissionDenied</prop>
            <!-- set other exception/view mappings as <prop>s here -->
        </props>
    </property>
    <property name="defaultErrorView" value="rescues/general" />
</bean>

<bean id="viewResolver"
      class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="viewClass" value="org.springframework.web.servlet.view.JstlView" />
    <property name="prefix" value="/WEB-INF/views/" />
    <property name="suffix" value=".jsp" />
</bean>
```

If you need to implement a user login mechanism, take a look at [Spring Security](http://static.springframework.org/spring-security/site/index.html) (formerly Acegi Security).
You can also just throw the exception directly:

```
throw new org.springframework.security.access.AccessDeniedException("403 returned");
```

This returns a 403 in the response header.
How do I return a 403 Forbidden in Spring MVC?
[ "", "java", "spring", "http-status-code-403", "httpforbiddenhandler", "" ]
I'm trying to alpha blend sprites and backgrounds with [devkitPro](http://www.devkitpro.org/) (including libnds, libarm, etc). Does anyone know how to do this?
As a generic reference, I once wrote [a small blog entry](http://sylvainhb.blogspot.com/2006/09/alpha-blending.html) about that issue. Basically, you first have to define which layer is alpha-blended against which other layer(s). As far as I know:

* The source layer(s) must be over the destination layer(s) for any blending to be displayed; that means the priority of source layers should be numerically lower than the priority of destination layers.
* The source layer is what is going to be translucent; the destination(s) is what is going to be seen through (and yes, I find this rather confusing).

For the [sprites, specifically](http://sylvainhb.blogspot.com/2011/09/oam-alpha.html), you then have 3 ways to achieve alpha-blending depending on what you need and what you're "ready to pay" for it:

* You can make all the sprites have some alpha-blending by turning on `BLEND_SRC_SPRITE` in `REG_BLDCNT[_SUB]` ... not that useful.
* You can selectively turn on blending of some sprites by using `ATTR0_TYPE_BLENDED`. The blending level will be the same for all sprites (and layers).
* Bitmap-type sprites use direct colors (bypassing the palettes), so the `ATTR2_PALETTE()` field of GBA sprites is useless and has been recycled into `ATTR2_ALPHA`.
Sprites on the DS can be alpha blended using the [blend control registers](http://www.coranac.com/tonc/text/gfx.htm#sec-blend). TONC gives the necessary information for getting blending working on the main screen because the register locations are the same. Alpha blending on the subscreen uses the same process with different registers at a 1000h offset. The registers you'll be looking at are `REG_BLDMOD`, `REG_COLV`, and `REG_COLY` for the main screen and `REG_BLDMOD_SUB`, `REG_COLV_SUB`, and `REG_COLY_SUB` for the sub screen. Also remember that you'll have to [change the sprite's graphic mode](http://www.coranac.com/tonc/text/regobj.htm#sec-oam-entry) to enable blending per sprite.
Alpha blending sprites in Nintendo DS Homebrew
[ "", "c++", "c", "nintendo-ds", "" ]
Is there a way to tell MS SQL that a query is not too important and that it can (and should) take its time? Likewise is there a way to tell MS SQL that it should give higher priority to a query?
Not in versions below SQL Server 2008. In SQL Server 2008 there's the Resource Governor. Using that you can assign logins to groups based on properties of the login (login name, application name, etc.). The groups can then be assigned to resource pools, and limitations or restrictions in terms of resources can be applied to those resource pools.
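For a flavour of what that Resource Governor setup looks like, here is a hedged T-SQL sketch for SQL Server 2008+ (pool, group, login and limit values are all hypothetical; run in `master`):

```sql
-- Hypothetical names throughout.
CREATE RESOURCE POOL LowPriorityPool WITH (MAX_CPU_PERCENT = 20);
CREATE WORKLOAD GROUP LowPriorityGroup WITH (IMPORTANCE = LOW)
    USING LowPriorityPool;
GO

-- The classifier runs for every new session and returns the name of
-- the workload group it belongs to, here chosen by login name.
CREATE FUNCTION dbo.fnClassifier() RETURNS sysname WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'batch_login'      -- hypothetical low-priority login
        RETURN N'LowPriorityGroup';
    RETURN N'default';
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

Sessions from `batch_login` would then be capped at roughly 20% CPU under contention, while everything else stays in the default pool.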
SQL Server does not have any form of resource governor yet. There is a SET option called `QUERY_GOVERNOR_COST_LIMIT`, but it's not quite what you're looking for: it prevents queries from executing based on their estimated cost rather than controlling resources.
Priority of a query in MS SQL
[ "", "sql", "sql-server", "database", "" ]
Can anyone recommend a good binary XML format? It's for a JavaME application, so it needs to be a) Easy to implement on the server, and b) Easy to write a low-footprint parser for on a low-end JavaME client device. And it goes without saying that it needs to be smaller than XML, and faster to parse. --- The data would be something akin to SVG.
You might want to take a look at [WBXML](http://en.wikipedia.org/wiki/WBXML) (Wireless Binary XML). It is optimized for size and often used on mobile phones, but it is not optimized for parsing speed.
[Hessian](http://en.wikipedia.org/wiki/Hessian_(protocol)) might be an alternative worth looking at. It is a small protocol, well-suited for Java ME applications. "Hessian is a binary web service protocol that makes web services usable without requiring a large framework, and without learning a new set of protocols. Because it is a binary protocol, it is well-suited to sending binary data without any need to extend the protocol with attachments." More links: [Here](http://hessian.caucho.com/) [Here too](http://web.archive.org/web/20100329161201/http://www.nubius.be:80/info-software/hessian4j.html)
Best binary XML format for JavaME
[ "", "java", "xml", "java-me", "mobile", "" ]
I am developing a web page which fetches content dynamically from the server and then places this content into container nodes using something like

```
container.innerHTML = content;
```

Sometimes I have to overwrite some previous content in this node. This works fine, until it happens that the previous content occupied more vertical space than the new one would occupy AND the user has scrolled the page down, scrolled further than the new content's height would allow. In this case the page redraws incorrectly: some artifacts of the old content remain. It is even possible to get rid of the artifacts by minimizing and restoring the browser (or forcing the window to be redrawn in another way); however, this does not seem very convenient. I am testing this only under Safari (this is an iPhone-optimized website). Does anybody have an idea how to deal with this?
The easiest solution that I have found would be to place an anchor tag `<a>` at the top of the `div` you are editing: ``` <a name="ajax-div"></a> ``` Then when you change the content of the `div`, you can do this to have the browser jump to your anchor tag: ``` location.hash = 'ajax-div'; ``` Use this to make sure the user isn't scrolled down too far when you update the content and you shouldn't get the issue in the first place. (tested in the latest FF beta and latest safari)
It sounds like the WebKit rendering engine of Safari is not at first recognizing the content change, at least not fully enough to remove the previous HTML content. Minimizing and then restoring the window initiates a redraw event in the browser's rendering engine. I think I would explore two avenues. First, could I use an IFrame instead of the current 'content' node? Browsers expect IFrames to change; however, as you're seeing, they're not always so good at changing the content of DIV or other elements. Secondly, perhaps try modifying the scroll position as suggested earlier. You could simply move the scroll back to 0 as suggested, or if that is too obtrusive you could try to restore the scroll after the content change: subtract the height of the old content node from the current scroll position (resetting the browser's scroll to the content node's 0), change the node content, then add the new node's height to the scroll position. Palehorse is right though (I can't vote his answer up at the moment - no points): an abstraction library like jQuery, Dojo, or even Prototype can often help with these matters. Especially if you see your page / site moving beyond simple DOM manipulation, you'll find the tools and enhancements provided by libraries to be a huge help.
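The scroll-restoration arithmetic in the second avenue can be sketched as a small helper (the function name and the clamping at 0 are my own additions):

```javascript
// Compute where the scrollbar should land after swapping content:
// reset to the content node's 0 by subtracting the old height, then
// add the new height back; clamp at 0 to avoid a negative offset.
function computeAdjustedScroll(scrollY, oldHeight, newHeight) {
  return Math.max(0, scrollY - oldHeight + newHeight);
}
```

In the page itself you would measure `container.offsetHeight` before and after assigning `innerHTML`, then call `window.scrollTo(0, computeAdjustedScroll(window.pageYOffset, oldHeight, newHeight))`.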
innerHTML manipulation in JavaScript
[ "", "javascript", "html", "dom", "" ]
I want to implement in Java a class for handling graph data structures. I have a Node class and an Edge class. The Graph class maintains two lists: a list of nodes and a list of edges. Each node must have a unique name. How do I guard against a situation like this:

```
Graph g = new Graph();

Node n1 = new Node("#1");
Node n2 = new Node("#2");
Edge e1 = new Edge("e#1", "#1", "#2");

// Each node is added like a reference
g.addNode(n1);
g.addNode(n2);
g.addEdge(e1);

// This will break the internal integrity of the graph
n1.setName("#3");
g.getNode("#2").setName("#4");
```

I believe I should clone the nodes and the edges when adding them to the graph and return a NodeEnvelope class that will maintain the graph structural integrity. Is this the right way of doing this, or is the design broken from the beginning?
I work with graph structures in Java a lot, and my advice would be to make any data member of the Node and Edge class that the Graph depends on for maintaining its structure final, with no setters. In fact, if you can, I would make Node and Edge completely immutable, which has [many benefits](http://www.javapractices.com/topic/TopicAction.do?Id=29). So, for example:

```
public final class Node {
    private final String name;

    public Node(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    // note: no setter for name
}
```

You would then do your uniqueness check in the Graph object:

```
public class Graph {
    Set<Node> nodes = new HashSet<Node>();

    public void addNode(Node n) {
        // note: this assumes you've properly overridden
        // equals and hashCode in Node to make Nodes with the
        // same name .equals() and hash to the same value.
        if (nodes.contains(n)) {
            throw new IllegalArgumentException("Already in graph: " + n);
        }
        nodes.add(n);
    }
}
```

If you need to modify the name of a node, remove the old node and add a new one. This might sound like extra work, but it saves a lot of effort keeping everything straight. Really, though, creating your own Graph structure from the ground up is probably unnecessary; this issue is only the first of many you are likely to run into if you build your own. I would recommend finding a good open source Java graph library, and using that instead. Depending on what you are doing, there are a few options out there. I have used [JUNG](http://jung.sourceforge.net/) in the past, and would recommend it as a good starting point.
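The "properly overridden equals and hashCode" that the comment in addNode assumes might look like this (a minimal sketch of the immutable Node above, keyed on its name):

```java
public final class Node {
    private final String name;

    public Node(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    // Two Nodes are equal iff their names are equal, so a
    // HashSet<Node> detects duplicates by name.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Node)) return false;
        return name.equals(((Node) o).name);
    }

    @Override
    public int hashCode() {
        return name.hashCode();
    }
}
```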
It isn't clear to me why you are adding the additional indirection of the String names for the nodes. Wouldn't it make more sense for your Edge constructor's signature to be something like `public Edge(String, Node, Node)` instead of `public Edge (String, String, String)`? I don't know where clone would help you here. ETA: If the danger comes from having the node name changed after the node is created, throw an `IllegalOperationException` if the client tries to call setName() on a node with an existing name.
Should I use clone when adding a new element? When should clone be used?
[ "", "java", "memory", "class", "" ]
Certainly there's the difference in general syntax, but what other critical distinctions exist? There are *some* differences, right?
The linked comparisons are very thorough, but as far as the main differences go I would note the following:

* C# has anonymous methods (VB has these now, too)
* C# has the `yield` keyword (iterator blocks); VB 11 added this
* VB supports [implicit late binding](http://smartypeeps.blogspot.com/2006/06/late-binding-in-c-and-vbnet.html) (C# has explicit late binding now via the `dynamic` keyword)
* VB supports XML literals
* VB is case insensitive
* More out-of-the-box code snippets for VB
* More out-of-the-box refactoring tools for C# (Visual Studio 2015 now provides the same refactoring tools for both VB and C#)

In general the things MS focuses on for each vary, because the two languages are targeted at very different audiences. [This blog post](http://blogs.msdn.com/ericlippert/archive/2004/03/02/cargo-cultists-part-three-is-mort-a-cargo-cultist.aspx) has a good summary of the target audiences. It is probably a good idea to determine which audience you are in, because it will determine what kind of tools you'll get from Microsoft.
This topic has had a lot of face time since .Net 2.0 was released. See this [Wikipedia](http://en.wikipedia.org/wiki/Comparison_of_C_sharp_and_Visual_Basic_.NET) article for a readable summary.
What are the most important functional differences between C# and VB.NET?
[ "", "c#", "vb.net", "comparison", "" ]
What is the best way of doing case-insensitive string comparison in C++ without transforming a string to all uppercase or all lowercase? Please indicate whether the methods are Unicode-friendly and how portable they are.
Boost includes a handy algorithm for this:

```
#include <boost/algorithm/string.hpp>
// Or, for fewer header dependencies:
//#include <boost/algorithm/string/predicate.hpp>

std::string str1 = "hello, world!";
std::string str2 = "HELLO, WORLD!";

if (boost::iequals(str1, str2)) {
    // Strings are identical
}
```
The trouble with boost is that you have to link with and depend on boost. Not easy in some cases (e.g. android). And using `char_traits` means *all* your comparisons are case insensitive, which isn't usually what you want.

This should suffice. It should be reasonably efficient. Doesn't handle unicode or anything though.

```
#include <cctype>    // std::tolower
#include <algorithm> // std::equal
#include <string>    // std::string

bool ichar_equals(char a, char b)
{
    return std::tolower(static_cast<unsigned char>(a)) ==
           std::tolower(static_cast<unsigned char>(b));
}

bool iequals(const std::string& a, const std::string& b)
{
    return a.size() == b.size() &&
           std::equal(a.begin(), a.end(), b.begin(), ichar_equals);
}
```

##### C++14 version

```
#include <cctype>    // std::tolower
#include <algorithm> // std::equal
#include <string>    // std::string

bool iequals(const std::string& a, const std::string& b)
{
    return std::equal(a.begin(), a.end(), b.begin(), b.end(), ichar_equals);
}
```

##### C++20 version using `std::ranges`

```
#include <cctype>      // std::tolower
#include <algorithm>   // std::ranges::equal
#include <string_view> // std::string_view

bool iequals(std::string_view lhs, std::string_view rhs)
{
    return std::ranges::equal(lhs, rhs, ichar_equals);
}
```
Case-insensitive string comparison in C++
[ "", "c++", "string", "" ]
We've been trying to alter a lot of columns from nullable to not nullable, which involves dropping all the associated objects, making the change, and recreating the associated objects. We've been using SQL Compare to generate the scripts, but I noticed that SQL Compare doesn't script statistics objects. Does this mean it's OK to drop them and the database will work as well as it did before without them, or has Red Gate missed a trick?
It is considered best practice to auto create and auto update statistics. SQL Server will create them if it needs them. You will often see the tuning wizard generate lots of these, and you will also see people advise that you update statistics as part of your maintenance plan, but this is not necessary and might actually make things worse, just so long as auto create and auto update are enabled.
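To check or turn on the two options this answer relies on, a small T-SQL fragment (the database name is hypothetical):

```sql
-- Inspect the current settings (1 = on, 0 = off).
SELECT DATABASEPROPERTYEX('MyDatabase', 'IsAutoCreateStatistics') AS AutoCreate,
       DATABASEPROPERTYEX('MyDatabase', 'IsAutoUpdateStatistics') AS AutoUpdate;

-- Enable them if needed.
ALTER DATABASE MyDatabase SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS ON;
```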
If you have auto update stats and auto create stats on, then it should work as before. You can also run `sp_updatestats` or `UPDATE STATISTICS <table> WITH FULLSCAN` after you make the changes.
Is it OK to drop sql statistics?
[ "", "sql", "sql-server", "scripting", "statistics", "" ]
For example, <http://developer.apple.com/cocoa/pyobjc.html> is still for OS X 10.4 Tiger, not 10.5 Leopard.. And that's the official Apple documentation for it.. The official PyObjC page is equally bad, <http://pyobjc.sourceforge.net/> It's so bad it's baffling.. I'm considering learning Ruby primarily because the RubyCocoa stuff is so much better documented, and there's lots of decent tutorials (<http://www.rubycocoa.com/> for example), and because of the Shoes GUI toolkit.. Even [this badly-auto-translated Japanese tutorial](http://66.163.168.225/babelfish/translate_url_content?lp=ja_en&url=http%3A%2F%2Fblog.monospace.jp%2F2007%2F11%2F05%2Fxcode3_cocoa_python%2F&fr=avbbf-us&.intl=us) is more useful than the rest of the documentation I could find.. All I want to do is create fairly simple Python applications with Cocoa GUI's.. Can anyone shed light on the horrible documentation, or point me at some tutorials that don't just give you huge blocks of code and assume you know what `NSThread.detachNewThreadSelector_toTarget_withObject_("queryController", self, None)` does..?
I agree that that tutorial is flawed, throwing random, unexplained code right in front of your eyes. It introduces concepts such as the autorelease pool and user defaults without explaining why you would want them ("Autorelease pool for memory management" is hardly an explanation). That said… > basically all I want to do is write Cocoa applications without having to learn ObjC. I'm afraid that for the time being, you *will* need a basic grasp of ObjC in order to benefit from any language that uses Cocoa. PyObjC, RubyCocoa, Nu and others are niches at best, and all of them were developed by people intimately familiar with the ins and outs of ObjC *and* Cocoa. For now, you will benefit the most if you realistically see those bridges as useful where scripting languages truly shine, rather than trying to build a whole application with them. While this *has* been done (with LimeChat, I'm using a RubyCocoa-written app right now), it is rare and likely will be for a while.
The main reason for the lack of documentation for PyObjC is that there is one developer (me), and like most developers I don't particularly like writing documentation. Because PyObjC is a side project for me, I tend to focus on working on features and bugfixes, because that's more interesting for me. The best way to improve the documentation is to volunteer to help on the pyobjc-dev mailing list. As an aside: the pythonmac-sig mailing list (see Google) is an excellent resource for getting help on Python on Mac OS X (not just PyObjC).
Why is the PyObjC documentation so bad?
[ "", "python", "macos", "cocoa", "pyobjc", "" ]
I am starting to work on a hobby project with a **Python** codebase and I would like to set up some form of continuous integration (i.e. running a battery of test-cases each time a check-in is made and sending nag e-mails to responsible persons when the tests fail) similar to **CruiseControl** or **TeamCity**. I realize I could do this with hooks in most **VCSes**, but that requires that the tests run on the same machine as the version control server, which isn't as elegant as I would like. Does anyone have any suggestions for a small, user-friendly, open-source continuous integration system suitable for a **Python** codebase?
We run [Buildbot - Trac](http://buildbot.net/trac) at work. I haven't used it too much since my codebase isn't part of the release cycle yet. But we run the tests on different environments (OSX/Linux/Win) and it sends emails — and it's written in Python.
One possibility is Hudson. It's written in Java, but there's integration with Python projects: > [Hudson embraces Python](http://redsolo.blogspot.com/2007/11/hudson-embraces-python.html) I've never tried it myself, however. (**Update**, Sept. 2011: After a trademark dispute Hudson has been renamed to [Jenkins](http://jenkins-ci.org/).)
Continuous Integration System for a Python Codebase
[ "", "python", "continuous-integration", "extreme-programming", "" ]
I have a directory with PDF files that I need to create an index for. It is a PHP page with a list of links:

```
<A HREF="path to file">filename</A>
```

The filenames can be complicated:

```
LVD 2-1133 - Ändring av dumpningslina (1984-11-20).pdf
```

What is the correct way to link to this file on a Linux/Apache server? Is there a PHP function to do this conversion?
You can use [rawurlencode()](http://php.net/manual/en/function.rawurlencode.php) to percent-encode a string so it is safe to use in a URL. This function replaces unsafe characters with their percent-encoded form. The difference with [urlencode()](http://php.net/manual/en/function.urlencode.php) is that urlencode() encodes spaces as plus signs, which is meant for query strings; for a link to a file you'll want rawurlencode(). This technique is called percent (or URL) encoding. See [Wikipedia](http://en.wikipedia.org/wiki/Url_encoding) for more details.
The urlencode() function will convert spaces into plus signs (+), so it won't work. The rawurlencode() function does the trick, thanks. Be sure to convert each part of the path separately, otherwise `path/file` will be converted into `path%2Ffile` (which was what I missed).
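The per-segment point is worth a concrete sketch. Shown here in Python just for illustration: `urllib.parse.quote` with `safe=""` behaves roughly like PHP's rawurlencode().

```python
from urllib.parse import quote

def encode_path(path):
    # Encode each segment separately so the "/" separators survive;
    # safe="" makes quote() encode a "/" inside a segment as %2F.
    return "/".join(quote(seg, safe="") for seg in path.split("/"))
```

`encode_path("docs/a b.pdf")` keeps the directory separator intact, while encoding the whole string in one go would turn it into `%2F` and break the link.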
How to convert complex filename into HTML link?
[ "", "php", "html", "" ]
After the suggestion to use a library for [my ajax needs](https://stackoverflow.com/questions/34486/what-more-is-needed-for-ajax-than-this-function) I am going to use one, the problem is that there are so many and I've no idea how to even begin telling them apart. Thus, can anybody A) Give a rundown of the differences or B) Point me (and others like me) somewhere that has such a list. Failing that plan C is to go with whichever gets mentioned the most here.
To answer B: [**Comparison of JavaScript frameworks**](http://en.wikipedia.org/wiki/Comparison_of_JavaScript_frameworks) --- **EDIT:** Although everyone and their mom is apparently riding the [**jQuery**](http://jquery.com/) bandwagon (I use [**MochiKit**](http://www.mochikit.com)), there are many libraries which provide the same functionality - the problem set which most libraries solve (async client-server communication, DOM manipulation, etc.) is the same, and there are few that don't have what you will need to get the job done. The important thing to determine for yourself is whether or not a library will fit your particular style and sensibilities. Wide-spread ignorance about how [**JavaScript**](http://www.ecma-international.org/publications/standards/Ecma-262.htm), the language, actually works, coupled with the negative press resulting thereby, coupled with the now-immense popularity of jQuery leads most people down that road. Thankfully, it isn't a bad road to be on as there are a lot of travellers to keep you company when the abstractions leak and you need help. You probably can't go wrong choosing jQuery.
jQuery, easy to learn, easy to use, small footprint, active plugin developer community. Can't go wrong with jQuery.
Comparison of Javascript libraries
[ "", "javascript", "comparison", "" ]
I have a Java application that launches another Java application. The launcher has a watchdog timer and receives periodic notifications from the second VM. However, if no notifications are received then the second virtual machine should be killed and the launcher will perform some additional clean-up activities. The question is: is there any way to do this using only Java? So far I have had to use some native methods to perform this operation, and it is somewhat ugly. Thanks!
I may be missing something but can't you call the [`destroy()`](http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Process.html#destroy()) method on the [`Process`](http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Process.html) object returned by [`Runtime.exec()`](http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Runtime.html#exec(java.lang.String))?
You can use [java.lang.Process](http://java.sun.com/javase/6/docs/api/java/lang/Process.html) to do what you want. Once you have created the nested process and have a reference to the Process instance, you can get references to its standard out and err streams. You can periodically monitor those, and call .destroy() if you want to close the process. The whole thing might look something like this:

```
Process nestedProcess = new ProcessBuilder("java", "mysubprocess").start();
InputStream nestedStdOut = nestedProcess.getInputStream(); // kinda backwards, I know
InputStream nestedStdErr = nestedProcess.getErrorStream();

while (true) {
    /* TODO: read from the std out or std err (or get notifications some other way)
       Then put the real "kill-me" logic here instead of if (false) */
    if (false) {
        nestedProcess.destroy();
        // perform post-destruction cleanup here
        return;
    }
    Thread.sleep(1000L); // wait for a bit
}
```

Hope this helps,

Sean
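On Java 8 and later (well after this answer was written), the polling loop can be replaced by `waitFor` with a timeout. A hedged sketch; `java -version` stands in for the real child process:

```java
import java.util.concurrent.TimeUnit;

public class Watchdog {
    // Launch a child JVM and kill it if it runs past the timeout.
    public static int runWithTimeout(long timeoutSeconds) throws Exception {
        String javaBin = System.getProperty("java.home") + "/bin/java";
        Process child = new ProcessBuilder(javaBin, "-version").start();
        if (!child.waitFor(timeoutSeconds, TimeUnit.SECONDS)) {
            child.destroy();                        // ask politely first
            if (!child.waitFor(2, TimeUnit.SECONDS)) {
                child.destroyForcibly();            // then force-kill
            }
            return -1;                              // signal a timeout
        }
        return child.exitValue();
    }
}
```

In the launcher you would reset or extend the timeout each time a notification arrives, and fall into the destroy path when the watchdog expires.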
Is it possible to kill a Java Virtual Machine from another Virtual Machine?
[ "", "java", "process-management", "" ]
Let's say I had a program in C# that did something computationally expensive, like encoding a list of WAV files into MP3s. Ordinarily I would encode the files one at a time, but let's say I wanted the program to figure out how many CPU cores I had and spin up an encoding thread on each core. So, when I run the program on a quad core CPU, the program figures out it's a quad core CPU, figures out there are four cores to work with, then spawns four threads for the encoding, each of which is running on its own separate CPU. How would I do this? And would this be any different if the cores were spread out across multiple physical CPUs? As in, if I had a machine with two quad core CPUs on it, are there any special considerations or are the eight cores across the two dies considered equal in Windows?
Don't bother doing that. Instead use the [Thread Pool](http://msdn.microsoft.com/en-us/library/system.threading.threadpool.aspx). The thread pool is a mechanism (actually a class) of the framework that you can query for a new thread. When you ask for a new thread it will either give you a new one or enqueue the work until a thread gets freed. In that way the framework is in charge of deciding whether it should create more threads or not depending on the number of present CPUs.

Edit: In addition, as has already been mentioned, the OS is in charge of distributing the threads among the different CPUs.
It is not necessarily as simple as using the thread pool. By default, the thread pool allocates multiple threads for each CPU. Since every thread which gets involved in the work you are doing has a cost (task switching overhead, use of the CPU's very limited L1, L2 and maybe L3 cache, etc...), the optimal number of threads to use is <= the number of available CPUs - unless each thread is requesting services from other machines - such as a highly scalable web service. In some cases, particularly those which involve more hard disk reading and writing than CPU activity, you can actually be better off with 1 thread than multiple threads.

For most applications, and certainly for WAV and MP3 encoding, you should limit the number of worker threads to the number of available CPUs. Here is some C# code to find the number of CPUs:

```
int processors = 1;
string processorsStr = System.Environment.GetEnvironmentVariable("NUMBER_OF_PROCESSORS");
if (processorsStr != null)
    processors = int.Parse(processorsStr);
```

(On .NET 2.0 and later you can read `System.Environment.ProcessorCount` directly instead.)

Unfortunately, it is not as simple as limiting yourself to the number of CPUs. You also have to take into account the performance of the hard disk controller(s) and disk(s). The only way you can really find the optimal number of threads is trial and error. This is particularly true when you are using hard disks, web services and such. With hard disks, you might be better off not using all four processors on your quad-processor CPU. On the other hand, with some web services, you might be better off making 10 or even 100 requests per CPU.
How do I spawn threads on different CPU cores?
[ "", "c#", ".net", "windows", "multithreading", "" ]
I'm just wondering if a better solution than this exists.

```
BitConverter.ToInt32(sample_guid.ToByteArray(), 0)
```
I don't think there's a better solution than this.
I don't know if it's better, but it is easier to read:

```
Int32.Parse(sample_guid.ToString().Substring(0, 1));
```

I'm a junior developer, admittedly, but the above reads easier to me than a byte conversion, and on a modern computer it would run indistinguishably quickly.
What is the best method of getting Int32 from first four bytes of GUID?
[ "", "c#", ".net", "guid", "" ]
I'm trying to use Groovy to create an interactive scripting / macro mode for my application. The application is OSGi and much of the information the scripts may need is not known up front. I figured I could use GroovyShell and call eval() multiple times, continually appending to the namespace as OSGi bundles are loaded. GroovyShell maintains variable state over multiple eval calls, but not class definitions or methods.

Goal: create a base class during startup. As OSGi bundles load, create derived classes as needed.
Ended up injecting code before each script compilation. The end goal is that the user-written script has a domain-specific language available for use.
I am not sure what you mean about declared classes not existing between evals; the following two scripts work as expected when evaled one after another:

```
class C {{println 'hi'}}
new C()
```

...

```
new C()
```

However, methods become bound to the class that declared them, and GroovyShell creates a new class for each instance. If you do not need the return value of any of the scripts and they are truly scripts (not classes with main methods), you can attach the following to the end of every evaluated script.

```
Class klass = this.getClass()
this.getMetaClass().getMethods().each {
    if (it.declaringClass.cachedClass == klass) {
        binding[it.name] = this.&"$it.name"
    }
}
```

If you depend on the return value you can hand-manage the evaluation and run the script as part of your parsing (warning, untested code follows, for illustrative uses only)...

```
String scriptText = ...
Script script = shell.parse(scriptText)
def returnValue = script.run()

Class klass = script.getClass()
script.getMetaClass().getMethods().each {
    if (it.declaringClass.cachedClass == klass) {
        shell.context[it.name] = script.&"$it.name"
    }
}

// do whatever with returnValue...
```

There is one last caveat I am sure you are aware of. Statically typed variables are not kept between evals, as they are not stored in the binding. So in the previous script the variable 'klass' will not be kept between script invocations and will disappear. To rectify that, simply remove the type declarations on the first use of all variables; that means they will be read from and written to the binding.
How can I convince GroovyShell to maintain state over eval() calls?
[ "", "java", "groovy", "scripting", "groovyshell", "" ]
How do you create a database from an Entity Data Model? I created a model using the EDM Designer in Visual Studio 2008, and now I want to generate the SQL Server schema to create the storage in SQL Server.
From what I understand, you are not supposed to use the EDM as just a "pretty" database designer; in fact the EDM does not depend on a specific storage layer. It tries to abstract that part for the developer. There are design schemas (CSDL) and storage schemas (SSDL). Anyway, don't mean to lecture you. ;)

There is the [EDM Generator](http://msdn.microsoft.com/en-us/library/bb387165.aspx), which you use to create models and classes, etc. For a DDL kind of export, I've never done that, but what I did was map my EDM to an existing database, which was easier for me to get started.

There is a great tutorial on MSDN which gives step-by-step instructions on how to go about using an existing database, but also touches on the start-from-scratch approach: <http://msdn.microsoft.com/en-us/magazine/cc163286.aspx>
The feature "Generate Database Schema from Model" is scheduled for a future release of Entity Framework. V1 doesn't support schema generation based on EF models.
How do you create a database from an EDM?
[ "", "sql", "entity-framework", "ado.net", "" ]
Sometimes I have to work on code that moves the computer clock forward. In this case some .cpp or .h files get their latest modification date set to the future time. Later on, when my clock is fixed and I compile my sources, the system rebuilds most of the project because some of the latest modification dates are in the future. Each subsequent recompile has the same problem. Solutions that I know of are: a) Find the files that have the future time and re-save them. This method is not ideal because the project is very big and it takes time even for Windows advanced search to find the files that are changed. b) Delete the whole project and re-check it out from SVN. Does anyone know how I can get around this problem? Is there perhaps a setting in Visual Studio that will allow me to tell the compiler to use the archive bit instead of the last modification date to detect source file changes? Or perhaps there is a recursive modification date reset tool that can be used in this situation?
If this was my problem, I'd look for ways to avoid mucking with the system time. Isolating the code under unit tests, or a virtual machine, or something. However, because I love [PowerShell](https://stackoverflow.com/questions/52487/the-most-amazing-pieces-of-software-in-the-world#53414): ``` Get-ChildItem -r . | ? { $_.LastWriteTime -gt ([DateTime]::Now) } | Set-ItemProperty -Name "LastWriteTime" -Value ([DateTime]::Now) ```
I would recommend using a virtual machine where you can mess with the clock to your heart's content and it won't affect your development machine. Two free ones are [Virtual PC](http://www.microsoft.com/Windows/products/winfamily/virtualpc/default.mspx) from Microsoft and [VirtualBox](http://www.virtualbox.org/) from Sun.
Resetting detection of source file changes
[ "", "c++", "visual-studio", "svn", "time", "timezone", "" ]
I want to move various parts of my app into simple scripts, to allow people that do not have a strong knowledge of C++ to be able to edit and implement various features. Because it's a real-time app, I need to have some kind of multitasking for these scripts. Ideally I want it so that the C++ app calls a script function which then continues running (under the C++ thread) until either a pause point (Wait(x)), or it returns. In the case of it waiting, the state needs to be saved ready for the script to be restarted the next time the app loops after the duration has expired. The scripts also need to be able to call C++ class methods, ideally using the C++ classes rather than plain wrapper functions around C++ classes. I don't want to spend a massive amount of time implementing this, so using an existing scripting language is preferred to writing my own. I heard that Python and Lua can be integrated into a C++ app, but I do not know how to do this to achieve my goals. * The scripts must be able to call C++ functions * The scripts must be able to "pause" when certain functions are called (e.g. Wait), and be restarted again by the C++ thread * Needs to be fast -- this is for a real-time app and there could potentially be a lot of scripts running. I can probably roll the multitasking code fairly easily, provided the scripts can be saved and restarted (possibly by a different thread to the original).
You can use either Lua or Python. Lua is more "lightweight" than python. It's got a smaller memory footprint than python does and in our experience was easier to integrate (people's mileage on this point might vary). It can support a bunch of scripts running simultaneously. Lua, at least, supports stopping/starting threads in the manner you desire. Boost.python is nice, but in my (limited) experience, it was difficult for us to get compiling for our different environments and was pretty heavyweight. It has (in my opinion) the disadvantage of requiring Boost. For some, that might not be a problem, but if you don't need Boost (or are not using it), you are introducing a ton of code to get Boost.python working. YMMV. We have built Lua into apps on multiple platforms (win32, Xbox360 and PS3). I believe that it will work on x64. The suggestion to use Luabind is good. We wound up writing our own interface between the two and while not too complicated, having that glue code will save you a lot of time and perhaps aggravation. With either solution though, debugging can be a pain. We currently have no good solution for debugging Lua scripts that are embedded into our app. Since we haven't used python in our apps I can't speak to what tools might be available there, but a couple of years ago the landscape was roughly the same -- poor debugging. Having scripting to extend functionality is nice, but bugs in the scripts can cause problems and might be difficult to locate. The Lua code itself is kind of messy to work with if you need to make changes there. We have seen bugs in the Lua codebase itself that were hard to track down. I suspect that Boost::Python might have similar problems. And with any scripting language, it's not necessarily a solution for "non-programmers" to extend functionality. It might seem like it, but you will likely wind up spending a fair amount of time either debugging scripts or even perhaps Lua. 
That all said, we've been very happy with Lua and have shipped it in two games. We currently have no plans to move away from the language. All in all, we've found it better than other alternatives that were available a couple of years ago. Python (and IronPython) are other choices, but based on experience, they seem more heavy handed than Lua. I'd love to hear about other experiences there though.
I can highly recommend that you take a look at [Luabind](http://sourceforge.net/projects/luabind/). It makes it very simple to integrate Lua in your C++ code and vice versa. It is also possible to expose whole C++ classes to be used in Lua.
Implementing scripts in c++ app
[ "", "c++", "scripting", "" ]
My understanding of Hibernate is that as objects are loaded from the DB they are added to the Session. At various points, depending on your configuration, the session is flushed. At this point, modified objects are written to the database. How does Hibernate decide which objects are 'dirty' and need to be written? Do the proxies generated by Hibernate intercept assignments to fields, and add the object to a dirty list in the Session? Or does Hibernate look at each object in the Session and compare it with the objects original state? Or something completely different?
Hibernate does/can use bytecode generation (CGLIB) so that it knows a field is dirty as soon as you call the setter (or even assign to the field afaict). This immediately marks that field/object as dirty, but doesn't reduce the number of objects that need to be dirty-checked during flush. All it does is impact the implementation of `org.hibernate.engine.EntityEntry.requiresDirtyCheck()`. It *still* does a field-by-field comparison to check for dirtiness. I say the above based on a recent trawl through the source code (3.2.6GA), with whatever credibility that adds. Points of interest are: * `SessionImpl.flush()` triggers an `onFlush()` event. * `SessionImpl.list()` calls `autoFlushIfRequired()` which triggers an `onAutoFlush()` event. (on the tables-of-interest). That is, queries can invoke a flush. Interestingly, no flush occurs if there is no transaction. * Both those events eventually end up in `AbstractFlushingEventListener.flushEverythingToExecutions()`, which ends up (amongst other interesting locations) at `flushEntities()`. * That loops over every entity in the session (`source.getPersistenceContext().getEntityEntries()`) calling `DefaultFlushEntityEventListener.onFlushEntity()`. * You eventually end up at `dirtyCheck()`. That method does make some optimizations wrt to CGLIB dirty flags, but we've still ended up looping over every entity.
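To make the flush-time comparison concrete, here is a minimal, hypothetical sketch in plain Java (no Hibernate APIs; all names are invented for illustration) of how a snapshot-based, field-by-field dirty check works:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

// Hypothetical sketch of snapshot-based dirty checking: entity state is
// modelled as a field-name -> value map, snapshotted at "load" time and
// compared field by field at "flush" time.
class DirtyCheckSketch {

    // Copy the entity's state as loaded; this is the clean snapshot.
    static Map<String, Object> snapshot(Map<String, Object> entityState) {
        return new HashMap<>(entityState);
    }

    // Names of fields whose current value differs from the snapshot.
    static List<String> dirtyFields(Map<String, Object> current,
                                    Map<String, Object> snapshot) {
        List<String> dirty = new ArrayList<>();
        for (Map.Entry<String, Object> e : current.entrySet()) {
            if (!Objects.equals(e.getValue(), snapshot.get(e.getKey()))) {
                dirty.add(e.getKey());
            }
        }
        return dirty;
    }
}
```

A real implementation compares mapped property values through the entity's persister rather than a field map, but the shape of the check (snapshot at load, compare at flush) is the same.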
Hibernate takes a snapshot of the state of each object that gets loaded into the Session. On flush, each object in the Session is compared with its corresponding snapshot to determine which ones are dirty. SQL statements are issued as required, and the snapshots are updated to reflect the state of the (now clean) Session objects.
When Hibernate flushes a Session, how does it decide which objects in the session are dirty?
[ "", "java", "hibernate", "session", "orm", "flush", "" ]
Why is the following C# code not allowed: ``` public abstract class BaseClass { public abstract int Bar { get;} } public class ConcreteClass : BaseClass { public override int Bar { get { return 0; } set {} } } ``` > CS0546 'ConcreteClass.Bar.set': cannot override because 'BaseClass.Bar' does not have an overridable set accessor
Because the writer of BaseClass has explicitly declared that Bar has to be a read-only property. It doesn't make sense for derivations to break this contract and make it read-write. I'm with Microsoft on this one. Let's say I'm a new programmer who has been told to code against the BaseClass derivation. I write something that assumes that Bar cannot be written to (since BaseClass explicitly states that it is a get-only property). Now with your derivation, my code may break. For example: ``` public class BarProvider { BaseClass _source; int _currentBar; public void setSource(BaseClass b) { _source = b; _currentBar = b.Bar; } public int getBar() { return _currentBar; } } ``` Since Bar cannot be set as per the BaseClass interface, BarProvider assumes that caching is a safe thing to do, since Bar cannot be modified. But if set was possible in a derivation, this class could be serving stale values if someone modified the \_source object's Bar property externally. The point being '*Be Open, avoid doing sneaky things and surprising people*' **Update**: *Ilya Ryzhenkov asks 'Why don't interfaces play by the same rules then?'* Hmm.. this gets muddier as I think about it. An interface is a contract that says 'expect an implementation to have a read property named Bar.' **Personally** I'm much less likely to make that assumption of read-only if I saw an interface. When I see a get-only property on an interface, I read it as 'Any implementation would expose this attribute Bar'... on a base-class it clicks as 'Bar is a read-only property'. Of course technically you're not breaking the contract.. you're doing more. So you're right in a sense.. I'd close by saying 'make it as hard as possible for misunderstandings to crop up'.
I think the main reason is simply that the syntax is too explicit for this to work any other way. This code: ``` public override int MyProperty { get { ... } set { ... } } ``` is quite explicit that both the `get` and the `set` are overrides. There is no `set` in the base class, so the compiler complains. Just like you can't override a method that's not defined in the base class, you can't override a setter either. You might say that the compiler should guess your intention and only apply the override to the method that can be overridden (i.e. the getter in this case), but this goes against one of the C# design principles - that the compiler must not guess your intentions, because it may guess wrong without you knowing. I think the following syntax might do nicely, but as Eric Lippert keeps saying, implementing even a minor feature like this is still a major amount of effort... ``` public int MyProperty { override get { ... } // not valid C# set { ... } } ``` or, for autoimplemented properties, ``` public int MyProperty { override get; set; } // not valid C# ```
Why is it impossible to override a getter-only property and add a setter?
[ "", "c#", ".net", "properties", "getter-setter", "" ]
Inspired by the MVC storefront, the latest project I'm working on is using extension methods on IQueryable to filter results. I have this interface: ``` public interface IPrimaryKey { int ID { get; } } ``` and I have this extension method ``` public static IPrimaryKey GetByID(this IQueryable<IPrimaryKey> source, int id) { return source.FirstOrDefault(obj => obj.ID == id); } ``` Let's say I have a class, SimpleObj, which implements IPrimaryKey. When I have an IQueryable of SimpleObj the GetByID method doesn't exist, unless I explicitly cast it as an IQueryable of IPrimaryKey, which is less than ideal. Am I missing something here?
It works, when done right. cfeduke's solution works. However, you don't have to make the `IPrimaryKey` interface generic; in fact, you don't have to change your original definition at all: ``` public static IPrimaryKey GetByID<T>(this IQueryable<T> source, int id) where T : IPrimaryKey { return source.FirstOrDefault(obj => obj.ID == id); } ```
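For readers coming from Java, the same idea is expressed there with a bounded method-level type parameter: the bound constrains the element type without forcing callers to cast to the interface. This is a hypothetical sketch with invented names, not code from the question:

```java
import java.util.List;

// Hypothetical Java analogue of the constrained extension method: the
// method-level type parameter T is bounded by the interface, so callers
// keep their concrete element type without casting to the interface.
interface HasPrimaryKey {
    int getId();
}

class Finder {
    // Works for any List<T> where T implements HasPrimaryKey.
    static <T extends HasPrimaryKey> T getById(List<T> source, int id) {
        for (T item : source) {
            if (item.getId() == id) {
                return item;
            }
        }
        return null; // not found
    }
}

class SimpleObj implements HasPrimaryKey {
    private final int id;

    SimpleObj(int id) {
        this.id = id;
    }

    public int getId() {
        return id;
    }
}
```

Calling `Finder.getById(listOfSimpleObj, 2)` returns a `SimpleObj`, not just a `HasPrimaryKey`, which is exactly what the `where T : IPrimaryKey` constraint buys you in C#.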
Edit: [Konrad](https://stackoverflow.com/questions/82442/extension-methods-not-working-for-an-interface#85503)'s solution is better because it's far simpler. The below solution works but is only required in situations similar to ObjectDataSource where a method of a class is retrieved through reflection without walking up the inheritance hierarchy. Obviously that's not happening here. This is possible, I've had to implement a similar pattern when I designed a custom entity framework solution for working with ObjectDataSource: ``` public interface IPrimaryKey<T> where T : IPrimaryKey<T> { int Id { get; } } public static class IPrimaryKeyTExtension { public static IPrimaryKey<T> GetById<T>(this IQueryable<T> source, int id) where T : IPrimaryKey<T> { return source.Where(pk => pk.Id == id).SingleOrDefault(); } } public class Person : IPrimaryKey<Person> { public int Id { get; set; } } ``` Snippet of use: ``` var people = new List<Person> { new Person { Id = 1 }, new Person { Id = 2 }, new Person { Id = 3 } }; var personOne = people.AsQueryable().GetById(1); ```
Extension Methods not working for an interface
[ "", "c#", ".net", "extension-methods", "" ]
I am writing an immutable DOM tree in Java, to simplify access from multiple threads.\* However, it does need to support inserts and updates as fast as possible. And since it is immutable, if I make a change to a node on the N'th level of the tree, I need to allocate at least N new nodes in order to return the new tree. My question is, would it be dramatically faster to pre-allocate nodes rather than create new ones every time the tree is modified? It would be fairly easy to do - keep a pool of several hundred unused nodes, and pull one out of the pool rather than create one whenever it was required for a modify operation. I can replenish the node pool when there's nothing else going on. (in case it isn't obvious, execution time is going to be much more at a premium in this application than heap space is) Is it worthwhile to do this? Any other tips on speeding it up? Alternatively, does anyone know if an immutable DOM library already exists? I searched, but couldn't find anything. \*Note: For those of you who aren't familiar with the concept of immutability, it basically means that on any operation to an object that changes it, the method returns a copy of the object with the changes in place, rather than the changed object. Thus, if another thread is still reading the object it will continue to happily operate on the "old" version, unaware that changes have been made, rather than crashing horribly. See <http://www.javapractices.com/topic/TopicAction.do?Id=29>
These days, object creation is pretty dang fast, and the concept of object pooling is kind of obsolete (at least in general; connection pooling is of course still valid). Avoid premature optimization. Create your nodes when you need them when doing your copies, and then see if that becomes prohibitively slow. If so, then look into some techniques to speed it up. But unless you already know that what you've got isn't fast enough, I wouldn't go introducing all the complexity you're going to need to get pooling going.
I hate to give a non-answer, but I think the only definitive way to answer a performance question like this might be for you to code both approaches, benchmark the two, and compare the results.
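As a rough starting point for such a comparison, a throwaway harness like the one below can be used. All names are invented for illustration, and for trustworthy numbers you would want a proper benchmarking framework such as JMH, since naive timings are distorted by warm-up, JIT, and GC effects:

```java
import java.util.ArrayDeque;

// Hypothetical micro-benchmark sketch comparing plain allocation with a
// trivial free-list pool. Naive timings like these are unreliable (warm-up,
// JIT, GC); use a framework such as JMH before drawing conclusions.
class PoolNode {
    Object payload;
}

class NodePool {
    private final ArrayDeque<PoolNode> free = new ArrayDeque<>();

    PoolNode acquire() {
        PoolNode n = free.poll();
        return (n != null) ? n : new PoolNode();
    }

    void release(PoolNode n) {
        n.payload = null; // clear state so the node can be reused safely
        free.push(n);
    }
}

class AllocationBench {
    // Nanoseconds spent allocating fresh nodes.
    static long timePlainAllocation(int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            PoolNode n = new PoolNode();
            n.payload = "x";
        }
        return System.nanoTime() - start;
    }

    // Nanoseconds spent cycling nodes through the pool.
    static long timePooledAllocation(int iterations, NodePool pool) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            PoolNode n = pool.acquire();
            n.payload = "x";
            pool.release(n);
        }
        return System.nanoTime() - start;
    }
}
```

On a modern JVM, don't be surprised if the plain-allocation loop wins: young-generation allocation is close to a pointer bump, which is the point the answer above is making.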
Java object allocation overhead
[ "", "java", "xml", "dom", "concurrency", "" ]
The [demos](http://dev.jquery.com/view/trunk/ui/demos/functional/#ui.dialog) for the jquery ui dialog all use the "flora" theme. I wanted a customized theme, so I used the themeroller to generate a css file. When I used it, everything seemed to be working fine, but later I found that I can't control any input element contained in the dialog (i.e, can't type into a text field, can't check checkboxes). Further investigation revealed that this happens if I set the dialog attribute "modal" to true. This doesn't happen when I use the flora theme. Here is the js file: ``` topMenu = { init: function(){ $("#my_button").bind("click", function(){ $("#SERVICE03_DLG").dialog("open"); $("#something").focus(); }); $("#SERVICE03_DLG").dialog({ autoOpen: false, modal: true, resizable: false, title: "my title", overlay: { opacity: 0.5, background: "black" }, buttons: { "OK": function() { alert("hi!"); }, "cancel": function() { $(this).dialog("close"); } }, close: function(){ $("#something").val(""); } }); } } $(document).ready(topMenu.init); ``` Here is the html that uses the flora theme: ``` <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=Shift_JIS"> <title>sample</title> <script src="jquery-1.2.6.min.js" language="JavaScript"></script> <link rel="stylesheet" href="flora/flora.all.css" type="text/css"> <script src="jquery-ui-personalized-1.5.2.min.js" language="JavaScript"></script> <script src="TopMenu.js" language="JavaScript"></script> </head> <body> <input type="button" value="click me!" 
id="my_button"> <div id="SERVICE03_DLG" class="flora">please enter something<br><br> <label for="something">somthing:</label>&nbsp;<input name="something" id="something" type="text" maxlength="20" size="24"> </div> </body> </html> ``` Here is the html that uses the downloaded themeroller theme: ``` <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=Shift_JIS"> <title>sample</title> <script src="jquery-1.2.6.min.js" language="JavaScript"></script> <link rel="stylesheet" href="jquery-ui-themeroller.css" type="text/css"> <script src="jquery-ui-personalized-1.5.2.min.js" language="JavaScript"></script> <script src="TopMenu.js" language="JavaScript"></script> </head> <body> <input type="button" value="click me!" id="my_button"> <div id="SERVICE03_DLG" class="ui-dialog">please enter something<br><br> <label for="something">somthing:</label>&nbsp;<input name="something" id="something" type="text" maxlength="20" size="24"> </div> </body> </html> ``` As you can see, only the referenced css file and class names are different. Anybody have a clue as to what could be wrong? @David: I tried it, and it doesn't seem to work (neither on FF nor IE). I tried inline css: ``` style="z-index:5000" ``` and I've also tried it referencing an external css file: ``` #SERVICE03_DLG{z-index:5000;} ``` But neither of these works. Am I missing something in what you suggested? **Edit:** Solved by brostbeef! Since I was originally using flora, I had mistakenly assumed that I have to specify a class attribute. Turns out, this is only true when you actually use the flora theme (as in the samples). If you use the customized theme, specifying a class attribute causes that strange behaviour.
I think it is because you have the classes different. `<div id="SERVICE03_DLG" class="flora">` (flora) `<div id="SERVICE03_DLG" class="ui-dialog">` (custom) Even with the flora theme, you would still use the ui-dialog class to define it as a dialog. I've done modals before and I've never even defined a class in the tag. jQueryUI should take care of that for you. Try getting rid of the class attribute or using the "ui-dialog" class.
After playing with this in Firebug, if you add a z-index attribute greater than 1004 to your default div, id of "SERVICE03\_DLG", then it will work. I'd give it something extremely high, like 5000, just to be sure. I'm not sure what it is in the themeroller CSS that causes this. They've probably changed or neglected the position attribute of the target div that it turns into a dialog.
A issue with the jquery dialog when using the themeroller css
[ "", "javascript", "jquery", "user-interface", "dialog", "" ]
C# doesn't require you to specify a generic type parameter if the compiler can infer it, for instance: ``` List<int> myInts = new List<int> {0,1,1, 2,3,5,8,13,21,34,55,89,144,233,377, 610,987,1597,2584,4181,6765}; //this statement is clunky List<string> myStrings = myInts. Select<int,string>( i => i.ToString() ). ToList<string>(); //the type is inferred from the lambda expression //the compiler knows that it's taking an int and //returning a string List<string> myStrings = myInts. Select( i => i.ToString() ). ToList(); ``` This is needed for anonymous types where you don't know what the type parameter would be (in intellisense it shows up as `'a`) because it's added by the compiler. Class-level type parameters don't let you do this: ``` //sample generic class public class GenericDemo<T> { public GenericDemo ( T value ) { GenericTypedProperty = value; } public T GenericTypedProperty {get; set;} } //why can't I do: int anIntValue = 4181; var item = new GenericDemo( anIntValue ); //type inference fails //however I can create a wrapper like this: public static GenericDemo<T> Create<T> ( T value ) { return new GenericDemo<T> ( value ); } //then this works - type inference on the method compiles var item = Create( anIntValue ); ``` Why doesn't C# support this class level generic type inference?
Actually, your question isn't bad. I've been toying with a generic programming language for the last few years and although I've never gotten around to actually developing it (and probably never will), I've thought a lot about generic type inference and one of my top priorities has always been to allow the construction of classes without having to specify the generic type. C# simply lacks the set of rules to make this possible. I think the developers never saw the necessity to include this. Actually, the following code would be very near to your proposition and solve the problem. All C# needs is added syntax support. ``` class Foo<T> { public Foo(T x) { … } } // Notice: non-generic class overload. Possible in C#! class Foo { public static Foo<T> ctor<T>(T x) { return new Foo<T>(x); } } var x = Foo.ctor(42); ``` Since this code actually works, we've shown that the problem is not one of semantics but simply one of lacking support. I guess I have to take back my previous posting. ;-)
> Why doesn't C# support this class level generic type inference? Because they're generally ambiguous. By contrast, type inference is trivial for function calls (if all types appear in arguments). But in the case of constructor calls (glorified functions, for the sake of discussion), the compiler has to resolve multiple levels at the same time. One level is the class level and the other is the constructor arguments level. I believe solving this is algorithmically non-trivial. Intuitively, I'd say it's even NP-complete. To illustrate an extreme case where resolution is impossible, imagine the following class and tell me what the compiler should do: ``` class Foo<T> { public Foo<U>(U x) { } } var x = new Foo(1); ```
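For what it's worth, Java has the same limitation and the same workaround: a constructor cannot infer a class-level type parameter from its arguments, but a static factory method can. A hypothetical sketch with invented names:

```java
// Hypothetical Java sketch showing the same limitation and workaround:
// the constructor cannot infer T, but a static factory method can,
// which is why Java APIs expose factories like of(...) next to constructors.
class Box<T> {
    private final T value;

    Box(T value) {
        this.value = value;
    }

    T get() {
        return value;
    }

    // Method-level type parameter U is inferred from the argument,
    // so callers can write Box.of(42) with no explicit <Integer>.
    static <U> Box<U> of(U value) {
        return new Box<U>(value);
    }
}
```

This mirrors the `Create<T>` wrapper from the question: inference on method-level type parameters is straightforward, while constructor calls would force the compiler to resolve the class-level and argument-level types at once.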
Why doesn't C# support implied generic types on class constructors?
[ "", "c#", ".net", "generics", "" ]
I'm writing a CMS application in PHP and one of the requirements is that it must be able to interface with the customer's Exchange server. I've written up this functionality a few times before and have always used [WebDAV](http://en.wikipedia.org/wiki/WebDAV) to do it, but now I'm leaning away from that. I will be running the site on IIS OR Apache (no preference) on Windows Server 2008. A few things I would need to do include adding contacts to a given user's address book, sending emails as a given user and running reports on contacts for a user. All of this is pretty easy to do with WebDAV, but I'd prefer a better way that doesn't rely on functionality likely to be deprecated any time soon. Any ideas? ### Update: Justin, I love the idea of using COM objects, I just worry about maintaining a 3rd product to make everything work... John, I can write a web service in C# to handle these functions and access it with my PHP app, but it's also a little bit out of the way. So far, I'm not 100% convinced that either of these is better than WebDAV... Can anyone show me where I'm being silly?
**Update as of 2020:** Over a decade since this question and things have moved on. Microsoft now has a [Rest API](https://learn.microsoft.com/en-us/exchange/client-developer/exchange-web-services/office-365-rest-apis-for-mail-calendars-and-contacts) that will allow you to easily access this data. --- **Original Answer** I have not used PHP to do this but have experience in using C# to achieve the same thing. The Outlook API is a way of automating Outlook rather than connecting to Exchange directly. I have previously taken this approach in a C# application and it does work, although it can be buggy. If you wish to connect directly to the Exchange server you will need to research extended MAPI. In the past I used this wrapper [MAPIEx: Extended MAPI Wrapper](http://www.codeproject.com/KB/IP/CMapiEx.aspx). It is a C# project but I believe you can use some .NET code on a PHP5 Windows server. Alternatively it has a C++ core DLL that you may be able to use. I have found it to be very good and there are some good example applications. --- Sorry for the delay; there is no current way to keep track of posts yet. I do agree that adding more layers to your application and relying on 3rd party code can be scary (and rightfully so). Today I read another [interesting post](https://stackoverflow.com/questions/4508/mapi-and-managed-code-experiences) tagged up as MAPI that is on a different subject. The key thing here though is that it has linked to [this important MS article](http://blogs.msdn.com/mstehle/archive/2007/10/03/fyi-why-are-mapi-and-cdo-1-21-not-supported-in-managed-net-code.aspx). I had been unaware until now of the issues with using managed code to interface to MAPI, although the C++ code in the component should be unaffected by this error as it is unmanaged. This blog entry also suggests other ways to connect to MAPI/Exchange server. In this case, due to these new facts, <https://www.php.net/imap> may be the answer, as suggested by the other user.
Is your customer using Exchange 2007? If so, I'd have a look at [Exchange Web Services](http://msdn.microsoft.com/en-us/library/bb204119(EXCHG.80).aspx). If not, as hairy as it can be, I think WebDAV is your best bet. Personally I don't like using the Outlook.Application COM object route, as its security prompts ("An application is attempting to access your contacts. Allow this?", etc.) can cause problems on a server. I also think it would be difficult to accomplish your impersonation-like tasks using Outlook, such as sending mail as a given user.
Best way to access Exchange using PHP?
[ "", "php", "windows", "exchange-server", "webdav", "mapi", "" ]
> **Possible Duplicate:** > [.NET - What’s the best way to implement a “catch all exceptions handler”](https://stackoverflow.com/questions/219594/net-whats-the-best-way-to-implement-a-catch-all-exceptions-handler) I have a .NET console app app that is crashing and displaying a message to the user. All of my code is in a `try{<code>} catch(Exception e){<stuff>}` block, but still errors are occasionally displayed. In a Win32 app, you can capture all possible exceptions/crashes by installing various exception handlers: ``` /* C++ exc handlers */ _set_se_translator SetUnhandledExceptionFilter _set_purecall_handler set_terminate set_unexpected _set_invalid_parameter_handler ``` What is the equivalent in the .NET world so I can handle/log/quiet all possible error cases?
Contrary to what some others have posted, there's nothing wrong with catching all exceptions. The important thing is to handle them all appropriately. If you have a stack overflow or out-of-memory condition, the app should shut down for them. Also, keep in mind that OOM conditions can prevent your exception handler from running correctly. For example, if your exception handler displays a dialog with the exception message, if you're out of memory, there may not be enough left for the dialog display. Best to log it and shut down immediately. As others mentioned, there are the UnhandledException and ThreadException events that you can handle to collect exceptions that might otherwise get missed. Then simply wrap an exception handler around your main loop (assuming a WinForms app). Also, you should be aware that OutOfMemoryExceptions aren't always thrown for out-of-memory conditions. An OOM condition can trigger all sorts of exceptions, in your code, or in the framework, that don't necessarily have anything to do with the fact that the real underlying condition is out of memory. I've frequently seen InvalidOperationException or ArgumentException when the underlying cause is actually out of memory.
You can add an event handler to AppDomain.UnhandledException event, and it'll be called when a exception is thrown and not caught.
How to catch ALL exceptions/crashes in a .NET app
[ "", "c#", ".net", "exception", "" ]
I'm needing to access Excel workbooks from .Net. I know all about the different ways of doing it (I've written them up in a [blog post](http://blog.functionalfun.net/2008/06/reading-and-writing-excel-files-with.html "Reading and Writing Excel files in .Net")), and I know that using a native .Net component is going to be the fastest. But the question is, which of the components wins? Has anybody benchmarked them? I've been using Syncfusion XlsIO, but that's very slow for some key operations (like deleting rows in a workbook containing thousands of Named ranges).
I haven't done any proper benchmarks, but I tried out several other components, and found that [SpreadsheetGear](http://spreadsheetgear.com/) was considerably faster than XlsIO, which I was using before. I've written up some of my findings in this [post](http://blog.functionalfun.net/2008/08/which-net-excel-io-component-should-i.html)
Can't help you with your original question, but are you aware that you can access Excel files using an OleDbConnection, and therefore treat it as a database? You can then read worksheets into a DataTable, perform all the changes you need to the data in your application, and then save it all back to the file using an OleDbConnection.
Does anyone have .Net Excel IO component benchmarks?
[ "", "c#", "excel", "components", "" ]
What could be the problem with reversing the array of DOM objects as in the following code: ``` var imagesArr = new Array(); imagesArr = document.getElementById("myDivHolderId").getElementsByTagName("img"); imagesArr.reverse(); ``` In Firefox 3, when I call the `reverse()` method the script stops executing and shows the following error in the console of the Web Developer Toolbar: ``` imagesArr.reverse is not a function ``` The `imagesArr` variable can be iterated through with a for loop and elements like `imagesArr[i]` can be accessed, so why is it not seen as an array when calling the `reverse()` method?
Because getElementsByTagName actually returns a NodeList structure. It has similar array-like indexing properties for syntactic convenience, but it is *not* an array. For example, the set of entries is actually constantly being dynamically updated - if you add a new img tag under myDivHolderId, it will automatically appear in imagesArr. See <http://www.w3.org/TR/DOM-Level-2-Core/core.html#ID-536297177> for more.
`getElementsByTagName()` returns a NodeList instead of an Array. You can convert a NodeList to an Array, but note that the array will be another object, so reversing it will not affect the DOM nodes' position.

```
var listNodes = document.getElementById("myDivHolderId").getElementsByTagName("img");
var arrayNodes = Array.prototype.slice.call(listNodes, 0);
arrayNodes.reverse();
```

In order to change the position, you will have to remove the DOM nodes and add them all again at the right position.

`Array.prototype.slice.call(arrayLike, 0)` is a great way to convert an array-like to an array, but if you are using a JavaScript library, it may actually provide an even better/faster way to do it. For example, jQuery has `$.makeArray(arrayLike)`.

You can also use the Array methods directly on the NodeList:

```
Array.prototype.reverse.call(listNodes);
```
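To make this concrete, here is a small runnable sketch; a plain object with numeric keys and a `length` stands in for the NodeList, so no browser DOM is needed:

```javascript
// An array-like object: indexed entries plus a length, but none of the
// Array.prototype methods. A NodeList behaves the same way in this respect.
const arrayLike = { 0: "a", 1: "b", 2: "c", length: 3 };

console.log(typeof arrayLike.reverse); // "undefined" -- hence "reverse is not a function"

// Borrow Array.prototype.slice to copy the entries into a real Array.
const arr = Array.prototype.slice.call(arrayLike, 0);
arr.reverse();
console.log(arr); // [ 'c', 'b', 'a' ]
```

Reversing `arr` only reorders the copy, which mirrors the point above: the DOM nodes themselves stay where they were.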
Javascript collection of DOM objects - why can't I reverse with Array.reverse()?
[ "", "javascript", "arrays", "" ]
In Java, what would the best way be to have a constantly listening port open, and still send upon receipt of a packet? I am not particularly savvy with network programming at the moment, so the tutorials I have found on the net aren't particularly helpful. Would it make sense to have the listening socket as a ServerSocket, and run it in a separate thread to the socket I'm using to send data to the server? In a loosely related question: does anyone know if, when programming simply for Java in NetBeans and then exporting it for use on a BlackBerry (using a plugin), the sockets would still work?
If you can afford the threading, try this (keep in mind I've left out some details like exception handling and playing nice with threads). You may want to look into `SocketChannels` and/or NIO async sockets / selectors. This should get you started. ``` boolean finished = false; int port = 10000; ServerSocket server = new ServerSocket(port); while (!finished) { // This will block until a connection is made Socket s = server.accept(); // Spawn off some thread (or use a thread pool) to handle this socket // Server will continue to listen } ```
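To sketch the threading suggestion a bit further, here is a minimal, self-contained thread-per-connection echo server (the class and method names are my own invention, and real code would add proper exception handling, a thread pool, and a shutdown path):

```java
// Sketch only: "listen on one thread, handle each client on another".
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadedEchoServer {

    // Starts the accept loop on a daemon thread and returns the chosen port.
    public static int start() throws IOException {
        ServerSocket server = new ServerSocket(0); // port 0 = pick any free port
        Thread acceptor = new Thread(() -> {
            while (true) {
                try {
                    Socket client = server.accept(); // blocks until a connection arrives
                    new Thread(() -> handle(client)).start(); // one thread per client
                } catch (IOException e) {
                    return; // socket closed: stop accepting
                }
            }
        });
        acceptor.setDaemon(true);
        acceptor.start();
        return server.getLocalPort();
    }

    // Echoes a single line back to the client, then closes the connection.
    private static void handle(Socket client) {
        try (Socket s = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            out.println("echo: " + in.readLine());
        } catch (IOException ignored) {
        }
    }

    public static void main(String[] args) throws IOException {
        int port = start();
        // Client side of the demo: connect back over loopback.
        try (Socket s = new Socket("127.0.0.1", port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println("hello");
            System.out.println(in.readLine()); // prints "echo: hello"
        }
    }
}
```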
As for connecting to a Blackberry, this is problematic since in most cases the Blackberry won't have a public IP address and will instead be behind a WAP gateway or wireless provider access point server. RIM provides the Mobile Data Server (MDS) to get around this and provide "Push" data which uses ServerSocket semantics on the Blackberry. The MDS is available with the Blackberry Enterprise Server (BES) and the Unite Server. Once set up data can be sent to a particular unit via the MDS using the HTTP protocol. There is an excellent description of the Push protocol [here](http://blog.fupps.com/documents/blackberry/real-world-blackberry-mds-push-applications-an-extensible-framework/) with LAMP source code. The parameter PORT=7874 in [pushout.pl](http://fupps.com/code/bberry/push/pushout.pl) connects to the Blackberry Browser Push server socket. By changing that parameter the payload can be sent to an arbitrary port where your own ServerSocket is accepting connections.
Sockets and Processes in Java
[ "", "java", "networking", "sockets", "blackberry", "" ]
[Interfaces](http://php.net/Interfaces) allow you to create code that defines the methods of classes that implement it. You cannot however add any code to those methods. [Abstract classes](http://php.net/Abstract) allow you to do the same thing, along with adding code to the method. Now if you can achieve the same goal with abstract classes, why do we even need the concept of interfaces? I've been told that it has to do with OO theory from C++ to Java, which is what PHP's OO stuff is based on. Is the concept useful in Java but not in PHP? Is it just a way to keep from having placeholders littered in the abstract class? Am I missing something?
The entire point of interfaces is to give you the flexibility to have your class be forced to implement multiple interfaces, but still not allow multiple inheritance. The issues with inheriting from multiple classes are many and varied and the [wikipedia](http://en.wikipedia.org/wiki/Multiple_inheritance) page on it sums them up pretty well. Interfaces are a compromise. Most of the problems with multiple inheritance don't apply to abstract base classes, so most modern languages these days disable multiple inheritance yet call abstract base classes interfaces and allow a class to "implement" as many of those as it wants.
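Since the question mentions that PHP's object model borrows from Java, the idea is easy to illustrate in Java (all class and interface names below are invented for the example): a class may *extend* only one base class, but can promise to honor any number of interface contracts.

```java
// One base class (single inheritance), several interface contracts.
interface Renderable { String render(); }
interface Storable   { String serialize(); }

abstract class Widget {
    // Shared implementation lives in the single base class...
    public String id() { return "widget"; }
}

class Button extends Widget implements Renderable, Storable {
    // ...while each interface merely obliges us to supply these methods.
    public String render()    { return "<button/>"; }
    public String serialize() { return "{\"type\":\"button\"}"; }
}

public class InterfaceDemo {
    public static void main(String[] args) {
        Button b = new Button();
        // The same object can be handled through either contract:
        Renderable r = b;
        Storable   s = b;
        System.out.println(r.render() + " " + s.serialize());
    }
}
```

PHP works the same way: `class Button extends Widget implements Renderable, Storable` is legal, while extending two classes is not.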
The concept is useful all around in object oriented programming. To me, I think of an interface as a contract. So long as my class and your class agree on this method signature contract, we can "interface". As for abstract classes, I see those more as base classes that stub out some methods, where I need to fill in the details.
What is the point of interfaces in PHP?
[ "", "php", "oop", "interface", "theory", "" ]
I'm developing an Excel 2007 add-in using Visual Studio Tools for Office (2008). I have one sheet with several ListObjects on it, which are being bound to datatables on startup. When they are bound, they autosize correctly. The problem comes when they are re-bound. I have a custom button on the ribbon bar which goes back out to the database and retrieves different information based on some criteria that the user inputs. This new data comes back and is re-bound to the ListObjects - however, this time they are not resized and I get an exception: > ListObject cannot be bound because it > cannot be resized to fit the data. The > ListObject failed to add new rows. > This can be caused because of > inability to move objects below of the > list object. > > > Inner exception: "Insert method of Range class failed" > > Reason: Microsoft.Office.Tools.Excel.FailureReason.CouldNotResizeListObject I was not able to find anything very meaningful on this error on Google or MSDN. I have been trying to figure this out for a while, but to no avail. Basic code structure: ``` //at startup DataTable tbl = //get from database listObj1.SetDataBinding(tbl); DataTable tbl2 = //get from database listObj2.SetDataBinding(tbl2); //in buttonClick event handler DataTable tbl = //get different info from database //have tried with and without unbinding old source listObj1.SetDataBinding(tbl); <-- exception here DataTable tbl2 = //get different info from database listObj2.SetDataBinding(tbl2); ``` Note that this exception occurs even when the ListObject is shrinking, and not only when it grows.
If anyone else is having this problem, I have found the cause of this exception. ListObjects will automatically re-size on binding, as long as they do not affect any other objects on the sheet. Keep in mind that ListObjects can only affect the Ranges which they wrap around. In my case, the list object which was above the other one had fewer columns than the one below it. Let's say the top ListObject had 2 columns, and the bottom ListObject had 3 columns. When the top ListObject changed its number of rows, it had no ability to make any changes to the third column since it wasn't in its underlying Range. This means that it couldn't shift any cells in the third column, and so the second ListObject couldn't be properly moved, resulting in my exception above. Changing the positions of the ListObjects to place the wider one above the smaller one works fine. Following the logic above, this now means that the wider ListObject can shift all of the columns of the second ListObject, and since there is nothing below the smaller one it can also shift any cells necessary. The reason I wasn't having any trouble on the initial binding is that both ListObjects were a single cell. Since this is not optimal in my case, I will probably use empty columns or try to play around with invisible columns if that's possible, but at least the cause is now clear.
I've got a similar issue with refreshing multiple ListObjects. We are setting each listObject.DataSource = null, then rebinding, starting at the bottom ListObject and working our way up instead of top down.
.NET - Excel ListObject autosizing on databind
[ "", "c#", ".net", "excel", "data-binding", "vsto", "" ]
Are there any automatic methods for trimming a path string in .NET? For example: ``` C:\Documents and Settings\nick\My Documents\Tests\demo data\demo data.emx ``` becomes ``` C:\Documents...\demo data.emx ``` It would be particularly cool if this were built into the Label class, and I seem to recall it is--can't find it though!
Use **TextRenderer.DrawText** with **TextFormatFlags.PathEllipsis** flag ``` void label_Paint(object sender, PaintEventArgs e) { Label label = (Label)sender; TextRenderer.DrawText(e.Graphics, label.Text, label.Font, label.ClientRectangle, label.ForeColor, TextFormatFlags.PathEllipsis); } ``` > Your code is 95% there. The only > problem is that the trimmed text is > drawn on top of the text which is > already on the label. Yes thanks, I was aware of that. My intention was only to demonstrate use of `DrawText` method. I didn't know whether you want to manually create event for each label or just override `OnPaint()` method in inherited label. Thanks for sharing your final solution though.
@ [lubos hasko](https://stackoverflow.com/questions/20467?sort=votes#20495) Your code is 95% there. The only problem is that the trimmed text is drawn on top of the text which is already on the label. This is easily solved: ``` Label label = (Label)sender; using (SolidBrush b = new SolidBrush(label.BackColor)) e.Graphics.FillRectangle(b, label.ClientRectangle); TextRenderer.DrawText( e.Graphics, label.Text, label.Font, label.ClientRectangle, label.ForeColor, TextFormatFlags.PathEllipsis); ```
Path Display in Label
[ "", "c#", ".net", "winforms", "path", "" ]
I'm working with a SQL Server 2000 database that likely has a few dozen tables that are no longer accessed. I'd like to clear out the data that we no longer need to be maintaining, but I'm not sure how to identify which tables to remove. The database is shared by several different applications, so I can't be 100% confident that reviewing these will give me a complete list of the objects that are used. What I'd like to do, if it's possible, is to get a list of tables that haven't been accessed at all for some period of time. No reads, no writes. How should I approach this?
MSSQL2000 won't give you that kind of information. But a way you can identify what tables ARE used (and then deduce which ones are not) is to use the SQL Profiler, to save all the queries that go to a certain database. Configure the profiler to record the results to a new table, and then check the queries saved there to find all the tables (and views, sps, etc) that are used by your applications. Another way I think you might check if there's any "writes" is to add a new timestamp column to every table, and a trigger that updates that column every time there's an update or an insert. But keep in mind that if your apps do queries of the type ``` select * from ... ``` then they will receive a new column and that might cause you some problems.
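The timestamp-and-trigger idea above can be prototyped quickly. The sketch below uses SQLite from Python purely for illustration (SQL Server 2000's `CREATE TRIGGER` syntax differs, and the table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Orders (Id INTEGER PRIMARY KEY, Amount INTEGER, LastWrite TEXT);

-- Stamp every insert/update, so "no recent LastWrite" ~= "no recent writes".
CREATE TRIGGER Orders_ins AFTER INSERT ON Orders
BEGIN
    UPDATE Orders SET LastWrite = datetime('now') WHERE Id = NEW.Id;
END;

CREATE TRIGGER Orders_upd AFTER UPDATE OF Amount ON Orders
BEGIN
    UPDATE Orders SET LastWrite = datetime('now') WHERE Id = NEW.Id;
END;
""")

conn.execute("INSERT INTO Orders (Id, Amount) VALUES (1, 100)")
stamp = conn.execute("SELECT LastWrite FROM Orders WHERE Id = 1").fetchone()[0]
print(stamp)  # e.g. "2024-01-01 12:00:00" -- set by the trigger, not the INSERT
```

As noted above, this only tracks writes; reads still need something like the Profiler trace, and `SELECT *` consumers will see the new column.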
Another suggestion for tracking tables that have been written to is to use [Red Gate SQL Log Rescue](http://www.red-gate.com/products/SQL_Log_Rescue/index.htm) (free). This tool dives into the log of the database and will show you all inserts, updates and deletes. The list is fully searchable, too. It doesn't meet your criteria for researching reads into the database, but I think the SQL Profiler technique will get you a fair idea as far as that goes.
Strategy for identifying unused tables in SQL Server 2000?
[ "", "sql", "database", "sql-server-2000", "" ]
Can/Should I use a LIKE criteria as part of an INNER JOIN when building a stored procedure/query? I'm not sure I'm asking the right thing, so let me explain. I'm creating a procedure that is going to take a list of keywords to be searched for in a column that contains text. If I was sitting at the console, I'd execute it as such:

```
SELECT Id, Name, Description
FROM dbo.Card
WHERE Description LIKE '%warrior%'
   OR Description LIKE '%fiend%'
   OR Description LIKE '%damage%'
```

But a trick I picked up a little while ago to do "strongly typed" list parsing in a stored procedure is to parse the list into a table variable/temporary table, converting it to the proper type and then doing an INNER JOIN against that table in my final result set. This works great when sending, say, a list of integer IDs to the procedure. I wind up having a final query that looks like this:

```
SELECT Id, Name, Description
FROM dbo.Card
INNER JOIN @tblExclusiveCard ON dbo.Card.Id = @tblExclusiveCard.CardId
```

I want to use this trick with a list of strings. But since I'm looking for a particular keyword, I am going to use the LIKE clause. So ideally I'm thinking I'd have my final query look like this:

```
SELECT Id, Name, Description
FROM dbo.Card
INNER JOIN @tblKeyword ON dbo.Card.Description LIKE '%' + @tblKeyword.Value + '%'
```

Is this possible/recommended? Is there a better way to do something like this?

---

The reason I'm putting wildcards on both ends of the clause is because there are "archfiend", "beast-warrior", "direct-damage" and "battle-damage" terms that are used in the card texts. I'm getting the impression that depending on the performance, I can either use the query I specified or use a full-text keyword search to accomplish the same task? Other than having the server do a text index on the fields I want to text search, is there anything else I need to do?
Your first query will work but will require a full table scan because any index on that column will be ignored. You will also have to do some dynamic SQL to generate all your LIKE clauses. Try a full text search if you're using SQL Server, or check out one of the [Lucene](http://lucene.apache.org/java/docs/index.html) implementations. Joel talked about his success with it recently.
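For what it's worth, the join-on-LIKE shape itself is legal SQL. Here is a quick sketch against SQLite from Python (invented toy data; SQLite concatenates strings with `||` rather than `+`), which also shows why `DISTINCT` is worth adding once one card can match several keywords:

```python
import sqlite3

# Toy schema standing in for dbo.Card; names and data are invented for the demo.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Card (Id INTEGER, Name TEXT, Description TEXT)")
conn.executemany(
    "INSERT INTO Card VALUES (?, ?, ?)",
    [
        (1, "Axe Raider", "A battle-warrior that deals battle-damage."),
        (2, "Fireball", "Inflicts direct-damage to the opponent."),
        (3, "Pot of Greed", "Draw two cards."),
    ],
)

# The keyword list, parsed into its own table as described in the question.
conn.execute("CREATE TABLE Keyword (Value TEXT)")
conn.executemany("INSERT INTO Keyword VALUES (?)", [("warrior",), ("damage",)])

# INNER JOIN with a LIKE predicate; DISTINCT stops a card that matches several
# keywords from appearing once per match.
rows = conn.execute(
    """
    SELECT DISTINCT Card.Id, Card.Name
    FROM Card
    INNER JOIN Keyword ON Card.Description LIKE '%' || Keyword.Value || '%'
    ORDER BY Card.Id
    """
).fetchall()
print(rows)  # [(1, 'Axe Raider'), (2, 'Fireball')]
```

The leading wildcard is exactly what defeats any index here, which is the table-scan point above.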
Try this ``` select * from Table_1 a left join Table_2 b on b.type LIKE '%' + a.type + '%' ``` This practice is not ideal. Use with caution.
Use a LIKE clause in part of an INNER JOIN
[ "", "sql", "sql-server", "design-patterns", "" ]
I have a class that I want to use to store "properties" for another class. These properties simply have a name and a value. Ideally, what I would like is to be able to add *typed* properties, so that the "value" returned is always of the type that I want it to be. The type should always be a primitive. This class subclasses an abstract class which basically stores the name and value as string. The idea being that this subclass will add some type-safety to the base class (as well as saving me on some conversion). So, I have created a class which is (roughly) this: ``` public class TypedProperty<DataType> : Property { public DataType TypedValue { get { // Having problems here! } set { base.Value = value.ToString();} } } ``` So the question is: **Is there a "generic" way to convert from string back to a primitive?** I can't seem to find any generic interface that links the conversion across the board (something like *ITryParsable* would have been ideal!).
I am not sure whether I understood your intentions correctly, but let's see if this one helps. ``` public class TypedProperty<T> : Property where T : IConvertible { public T TypedValue { get { return (T)Convert.ChangeType(base.Value, typeof(T)); } set { base.Value = value.ToString();} } } ```
lubos hasko's method fails for nullables. The method below will work for nullables. I didn't come up with it, though. I found it via Google: <http://web.archive.org/web/20101214042641/http://dogaoztuzun.com/post/C-Generic-Type-Conversion.aspx> Credit to "Tuna Toksoz" Usage first: ``` TConverter.ChangeType<T>(StringValue); ``` The class is below. ``` public static class TConverter { public static T ChangeType<T>(object value) { return (T)ChangeType(typeof(T), value); } public static object ChangeType(Type t, object value) { TypeConverter tc = TypeDescriptor.GetConverter(t); return tc.ConvertFrom(value); } public static void RegisterTypeConverter<T, TC>() where TC : TypeConverter { TypeDescriptor.AddAttributes(typeof(T), new TypeConverterAttribute(typeof(TC))); } } ```
Generic type conversion FROM string
[ "", "c#", "generics", "primitive", "type-safety", "" ]
I have a flex application that needs the ability to generate and execute JavaScript. When I say this, I mean I need to execute raw JavaScript that I create in my Flex application (not just an existing JavaScript method). I am currently doing this by exposing the following JavaScript method:

```
function doScript(js){ eval(js);}
```

I can then do something like this in Flex (note: I am doing something more substantial than an alert box in the real Flex app):

```
ExternalInterface.call("doScript","alert('foo')");
```

My question is does this impose any security risk? I am assuming it doesn't, since the Flex and JavaScript all run client side...

Is there a better way to do this?
There's no need for the JavaScript function, the first argument to `ExternalInterface` can be any JavaScript code, it doesn't have to be a function name (the documentation says so, but it is wrong). Try this: ``` ExternalInterface.call("alert('hello')"); ```
This isn't inherently dangerous, but the moment you pass any user-provided data into the function, it's ripe for a code injection exploit. That's worrisome, and something I'd avoid. I think a better approach would be to only expose the functionality you *need*, and nothing more.
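A contrived, runnable sketch of that injection risk (the "user input" here is obviously invented): a value spliced into the generated script can break out of its string literal and run arbitrary code.

```javascript
// The bridge function the question exposes:
function doScript(js) { eval(js); }

// Looks harmless: splice a user-supplied name into generated JavaScript.
function greet(userName) {
  doScript("console.log('Hello, " + userName + "')");
}

greet("Alice"); // prints: Hello, Alice

// A malicious "name" terminates the string literal and injects a statement:
globalThis.pwned = false;
greet("x'); globalThis.pwned = true; ('");
console.log(globalThis.pwned); // true -- the injected code ran
```

This is why exposing only the specific functionality you need, rather than a generic `eval` bridge, is the safer design.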
Executing JavaScript from Flex: Is this javascript function dangerous?
[ "", "javascript", "apache-flex", "exploit", "" ]
I have a structure which I need to populate and write to disk (several actually). An example is: ``` byte-6 bit0 - original_or_copy bit1 - copyright bit2 - data_alignment_indicator bit3 - PES_priority bit4-bit5 - PES_scrambling control. bit6-bit7 - reserved ``` In C I might do something like the following: ``` struct PESHeader { unsigned reserved:2; unsigned scrambling_control:2; unsigned priority:1; unsigned data_alignment_indicator:1; unsigned copyright:1; unsigned original_or_copy:1; }; ``` Is there any way to do this in C# that would enable me to access the bits using the struct dereferencing dot operator? For a couple of structures, I can just do bit shifting wrapped in an accessor function. I have loads of structures to handle in this way, so I'm looking for something that's easier to read and quicker to write.
I'd probably knock together something using attributes, then a conversion class to convert suitably attributed structures to the bitfield primitives. Something like... ``` using System; namespace BitfieldTest { [global::System.AttributeUsage(AttributeTargets.Field, AllowMultiple = false)] sealed class BitfieldLengthAttribute : Attribute { uint length; public BitfieldLengthAttribute(uint length) { this.length = length; } public uint Length { get { return length; } } } static class PrimitiveConversion { public static long ToLong<T>(T t) where T : struct { long r = 0; int offset = 0; // For every field suitably attributed with a BitfieldLength foreach (System.Reflection.FieldInfo f in t.GetType().GetFields()) { object[] attrs = f.GetCustomAttributes(typeof(BitfieldLengthAttribute), false); if (attrs.Length == 1) { uint fieldLength = ((BitfieldLengthAttribute)attrs[0]).Length; // Calculate a bitmask of the desired length long mask = 0; for (int i = 0; i < fieldLength; i++) mask |= 1 << i; r |= ((UInt32)f.GetValue(t) & mask) << offset; offset += (int)fieldLength; } } return r; } } struct PESHeader { [BitfieldLength(2)] public uint reserved; [BitfieldLength(2)] public uint scrambling_control; [BitfieldLength(1)] public uint priority; [BitfieldLength(1)] public uint data_alignment_indicator; [BitfieldLength(1)] public uint copyright; [BitfieldLength(1)] public uint original_or_copy; }; public class MainClass { public static void Main(string[] args) { PESHeader p = new PESHeader(); p.reserved = 3; p.scrambling_control = 2; p.data_alignment_indicator = 1; long l = PrimitiveConversion.ToLong(p); for (int i = 63; i >= 0; i--) { Console.Write( ((l & (1l << i)) > 0) ? "1" : "0"); } Console.WriteLine(); return; } } } ``` Which produces the expected ...000101011. Of course, it needs more error checking and a slightly saner typing, but the concept is (I think) sound, reusable, and lets you knock out easily maintained structures by the dozen. adamw
By using an enum you can do this, but will look awkward. ``` [Flags] public enum PESHeaderFlags { IsCopy = 1, // implied that if not present, then it is an original IsCopyrighted = 2, IsDataAligned = 4, Priority = 8, ScramblingControlType1 = 0, ScramblingControlType2 = 16, ScramblingControlType3 = 32, ScramblingControlType4 = 16+32, ScramblingControlFlags = ScramblingControlType1 | ScramblingControlType2 | ... ype4 etc. } ```
Bit fields in C#
[ "", "c#", "bit-fields", "" ]
I needed some simple string encryption, so I wrote the following code (with a great deal of "inspiration" from [here](http://www.codeproject.com/KB/security/DotNetCrypto.aspx)): ``` // create and initialize a crypto algorithm private static SymmetricAlgorithm getAlgorithm(string password) { SymmetricAlgorithm algorithm = Rijndael.Create(); Rfc2898DeriveBytes rdb = new Rfc2898DeriveBytes( password, new byte[] { 0x53,0x6f,0x64,0x69,0x75,0x6d,0x20, // salty goodness 0x43,0x68,0x6c,0x6f,0x72,0x69,0x64,0x65 } ); algorithm.Padding = PaddingMode.ISO10126; algorithm.Key = rdb.GetBytes(32); algorithm.IV = rdb.GetBytes(16); return algorithm; } /* * encryptString * provides simple encryption of a string, with a given password */ public static string encryptString(string clearText, string password) { SymmetricAlgorithm algorithm = getAlgorithm(password); byte[] clearBytes = System.Text.Encoding.Unicode.GetBytes(clearText); MemoryStream ms = new MemoryStream(); CryptoStream cs = new CryptoStream(ms, algorithm.CreateEncryptor(), CryptoStreamMode.Write); cs.Write(clearBytes, 0, clearBytes.Length); cs.Close(); return Convert.ToBase64String(ms.ToArray()); } /* * decryptString * provides simple decryption of a string, with a given password */ public static string decryptString(string cipherText, string password) { SymmetricAlgorithm algorithm = getAlgorithm(password); byte[] cipherBytes = Convert.FromBase64String(cipherText); MemoryStream ms = new MemoryStream(); CryptoStream cs = new CryptoStream(ms, algorithm.CreateDecryptor(), CryptoStreamMode.Write); cs.Write(cipherBytes, 0, cipherBytes.Length); cs.Close(); return System.Text.Encoding.Unicode.GetString(ms.ToArray()); } ``` The code appears to work fine, except that when decrypting data with an incorrect key, I get a CryptographicException - "Padding is invalid and cannot be removed" - on the cs.Close() line in decryptString. 
example code: ``` string password1 = "password"; string password2 = "letmein"; string startClearText = "The quick brown fox jumps over the lazy dog"; string cipherText = encryptString(startClearText, password1); string endClearText = decryptString(cipherText, password2); // exception thrown ``` My question is, is this to be expected? I would have thought that decrypting with the wrong password would just result in nonsense output, rather than an exception.
Although this has already been answered, I think it would be a good idea to explain **why** it is to be expected.

A padding scheme is usually applied because most cryptographic filters are not semantically secure and to prevent some forms of cryptographic attacks. For example, usually in RSA the [OAEP](http://en.wikipedia.org/wiki/Optimal_Asymmetric_Encryption_Padding) padding scheme is used, which prevents some sorts of attacks (such as a chosen plaintext attack or [blinding](http://en.wikipedia.org/wiki/Blinding_(cryptography))).

A padding scheme appends some (usually) random garbage to the message m before the message is sent. In the OAEP method, for example, two Oracles are used (this is a simplistic explanation):

1. Given the size of the modulus, you pad k1 bits with 0 and k0 bits with a random number.
2. Then by applying some transformation to the message you obtain the padded message which is encrypted and sent.

That provides you with a randomization for the messages and with a way to test if the message is garbage or not.

As the padding scheme is reversible, when you decrypt the message, while you can't say anything about the integrity of the message itself, you can, in fact, make some assertion about the padding, and thus you can know if the message has been correctly decrypted or you're doing something wrong (i.e. someone has tampered with the message or you're using the wrong key).
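To see the mechanics concretely, here is a pure-Python sketch of PKCS#7-style padding. (The question's code uses `PaddingMode.ISO10126`, which randomizes the filler bytes but still ends in a length byte that is validated on decryption, so the failure mode is analogous.)

```python
import os

BLOCK = 16  # AES/Rijndael block size in bytes

def pkcs7_pad(data: bytes) -> bytes:
    # Append n copies of the byte n, where n = bytes needed to fill the block.
    n = BLOCK - (len(data) % BLOCK)
    return data + bytes([n]) * n

def pkcs7_unpad(padded: bytes) -> bytes:
    n = padded[-1]
    # The decryptor verifies this structure; if it does not hold, the data was
    # tampered with -- or, as in the question, decrypted with the wrong key.
    if not 1 <= n <= BLOCK or padded[-n:] != bytes([n]) * n:
        raise ValueError("Padding is invalid and cannot be removed.")
    return padded[:-n]

# Round trip with the real plaintext works:
assert pkcs7_unpad(pkcs7_pad(b"attack at dawn")) == b"attack at dawn"

# A wrong key turns the final block into effectively random bytes, which almost
# never form valid padding -- hence the exception instead of quiet garbage:
garbage = os.urandom(32)  # stand-in for a wrong-key "plaintext"
try:
    pkcs7_unpad(garbage)
    print("garbage happened to look padded (possible, but rare)")
except ValueError as e:
    print(e)
```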
I experienced a similar "Padding is invalid and cannot be removed." exception, but in my case the key, IV and padding were correct. It turned out that flushing the crypto stream is all that was missing. Like this:

```
MemoryStream msr3 = new MemoryStream();
CryptoStream encStream = new CryptoStream(msr3, RijndaelAlg.CreateEncryptor(), CryptoStreamMode.Write);
encStream.Write(bar2, 0, bar2.Length);
// unless we flush the stream we would get "Padding is invalid and cannot be removed." exception when decoding
encStream.FlushFinalBlock();
byte[] bar3 = msr3.ToArray();
```
Why does a bad password cause "Padding is invalid and cannot be removed"?
[ "", "c#", ".net", "exception", "encryption", "" ]
How can I present a control to the user that allows him/her to select a directory? There don't seem to be any native .NET controls which do this?
The [FolderBrowserDialog class](http://msdn.microsoft.com/en-us/library/system.windows.forms.folderbrowserdialog.aspx) is the best option.
```
string folderPath = "";
FolderBrowserDialog folderBrowserDialog1 = new FolderBrowserDialog();

if (folderBrowserDialog1.ShowDialog() == DialogResult.OK)
{
    folderPath = folderBrowserDialog1.SelectedPath;
}
```
Browse for a directory in C#
[ "", "c#", ".net", "directory", "" ]
I was checking out Intel's "whatif" site and their Transactional Memory compiler (each thread has to make atomic commits or rollback the system's memory, like a Database would). It seems like a promising way to replace locks and mutexes but I can't find many testimonials. Does anyone here have any input?
I have not used Intel's compiler, however, Herb Sutter had some interesting comments on it... From [Sutter Speaks: The Future of Concurrency](http://www.devx.com/go-parallel/Article/37839) **Do you see a lot of interest in and usage of transactional memory, or is the concept too difficult for most developers to grasp?** It's not yet possible to answer who's using it because it hasn't been brought to market yet. Intel has a software transactional memory compiler prototype. But if the question is "Is it too hard for developers to use?" the answer is that I certainly hope not. The whole point is it's way easier than locks. It is the only major thing on the research horizon that holds out hope of greatly reducing our use of locks. It will never replace locks completely, but it's our only big hope to replacing them partially. There are some limitations. In particular, some I/O is inherently not transactional—you can't take an atomic block that prompts the user for his name and read the name from the console, and just automatically abort and retry the block if it conflicts with another transaction; the user can tell the difference if you prompt him twice. Transactional memory is great for stuff that is only touching memory, though. Every major hardware and software vendor I know of has multiple transactional memory tools in R&D. There are conferences and academic papers on theoretical answers to basic questions. We're not at the Model T stage yet where we can ship it out. You'll probably see early, limited prototypes where you can't do unbounded transactional memory—where you can only read and write, say, 100 memory locations. That's still very useful for enabling more lock-free algorithms, though.
Dr. Dobb's had an article on the concept last year: Transactional Programming by Calum Grant -- <http://www.ddj.com/cpp/202802978> It includes some examples, comparisons, and conclusions using his example library.
Has anyone tried transactional memory for C++?
[ "", "c++", "multithreading", "locking", "intel", "transactional-memory", "" ]
Let's say I want a web page that contains a Flash applet and I'd like to drag and drop some objects from or to the rest of the web page, is this at all possible? Bonus if you know a website somewhere that does that!
This one intrigued me. I know jessegavin posted some code while I went to figure this out, but this one is tested. I have a super-simple working example that lets you drag to and from flash. It's pretty messy as I threw it together during my lunch break. Here's the [demo](http://enobrev.github.io/DragSWF/) And the [source](https://github.com/enobrev/DragSWF) The base class is taken directly from the [External Interface LiveDocs](http://livedocs.adobe.com/flash/9.0/ActionScriptLangRefV3/flash/external/ExternalInterface.html). I added MyButton so the button could have some text. The majority of the javascript comes from the same LiveDocs example. I compiled this using mxmlc.
**DISCLAIMER** I haven't tested this code at all, but the idea should work. Also, this only handles the dragging ***to*** a flash movie.

Here's some Actionscript 3.0 code which makes use of the [ExternalInterface](http://livedocs.adobe.com/flash/9.0/ActionScriptLangRefV3/flash/external/ExternalInterface.html) class.

```
import flash.display.Loader;
import flash.display.Sprite;
import flash.external.ExternalInterface;
import flash.net.URLRequest;

if (ExternalInterface.available) {
    ExternalInterface.addCallback("handleDroppedImage", myDroppedImageHandler);
}

private function myDroppedImageHandler(url:String, x:Number, y:Number):void
{
    var container:Sprite = new Sprite();
    container.x = x;
    container.y = y;
    addChild(container);

    var loader:Loader = new Loader();
    var request:URLRequest = new URLRequest(url);
    loader.load(request);
    container.addChild(loader);
}
```

Here's the HTML/jQuery code

```
<html>
<head>
    <title>XHTML 1.0 Transitional Template</title>
    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js"></script>
    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.5.2/jquery-ui.min.js"></script>
    <script type="text/javascript">
        $(function() {
            $("#dragIcon").draggable();
            $("#flash").droppable({
                tolerance : "intersect",
                drop: function(e,ui) {
                    // Get the X,Y coords of the dropped item relative to the flash movie
                    var x = ui.draggable.offset().left - $(this).offset().left;
                    var y = ui.draggable.offset().top - $(this).offset().top;

                    // Get the url of the dragged image
                    var url = ui.draggable.attr("src");

                    // Get access to the swf
                    var swf = ($.browser.msie) ? document["MyFlashMovie"] : window["MyFlashMovie"];

                    // Call the ExternalInterface function
                    swf.handleDroppedImage(url, x, y);

                    // Remove the dragged image from the HTML DOM
                    ui.draggable.remove();
                }
            });
        });
    </script>
</head>
<body>

<img id="dragIcon" width="16" height="16" alt="drag me" />

<div id="flash">
    <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000"
            id="MyFlashMovie" width="500" height="375"
            codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab">
        <param name="movie" value="MyFlashMovie.swf" />
        <param name="quality" value="high" />
        <param name="bgcolor" value="#869ca7" />
        <param name="allowScriptAccess" value="sameDomain" />
        <embed src="MyFlashMovie.swf" quality="high" bgcolor="#869ca7"
               width="500" height="375" name="MyFlashMovie" align="middle"
               play="true" loop="false" allowScriptAccess="sameDomain"
               type="application/x-shockwave-flash"
               pluginspage="http://www.macromedia.com/go/getflashplayer">
        </embed>
    </object>
</div>

</body>
</html>
```
Is it possible to drag and drop from/to outside a Flash applet with JavaScript?
[ "", "javascript", "flash", "drag-and-drop", "" ]
I have some classes layed out like this ``` class A { public virtual void Render() { } } class B : A { public override void Render() { // Prepare the object for rendering SpecialRender(); // Do some cleanup } protected virtual void SpecialRender() { } } class C : B { protected override void SpecialRender() { // Do some cool stuff } } ``` Is it possible to prevent the C class from overriding the Render method, without breaking the following code? ``` A obj = new C(); obj.Render(); // calls B.Render -> c.SpecialRender ```
You can seal individual methods to prevent them from being overridable: ``` public sealed override void Render() { // Prepare the object for rendering SpecialRender(); // Do some cleanup } ```
Yes, you can use the sealed keyword in the B class's implementation of Render: ``` class B : A { public sealed override void Render() { // Prepare the object for rendering SpecialRender(); // Do some cleanup } protected virtual void SpecialRender() { } } ```
Can I prevent an inherited virtual method from being overridden in subclasses?
[ "", "c#", "polymorphism", "" ]
I believe the application has some parts that target .NET, and some that don't. I'm particularly interested in looking at the resource files, if there are any.
To add to aku's excellent answer, for English speakers, IDA Pro is available at <http://www.hex-rays.com/>.
If you want to disassemble native x86/64 app use [IDA](http://www.idapro.ru), .NET exe/dll can be disassembled using Reflector. There are [tons](http://www.google.ru/search?complete=1&hl=en&newwindow=1&client=firefox-a&rls=org.mozilla%3Aen-US%3Aofficial&hs=v1M&q=extract+resource+from+exe+file&btnG=%D0%9F%D0%BE%D0%B8%D1%81%D0%BA&lr=&aq=f&oq=) of utilities to extract resources. Can you elaborate your question a bit?
How do I disassemble a VC++ application?
[ "", "c++", "resources", "reverse-engineering", "" ]
I'd like to bind a configuration file to my executable. I'd like to do this by storing an MD5 hash of the file inside the executable. This should keep anyone but the executable from modifying the file. Essentially if someone modifies this file outside of the program the program should fail to load it again. EDIT: The program processes credit card information so being able to change the configuration in any way could be a potential security risk. This software will be distributed to a large number of clients. Ideally client should have a configuration that is tied directly to the executable. This will hopefully keep a hacker from being able to get a fake configuration into place. The configuration still needs to be editable though so compiling an individual copy for each customer is not an option. --- It's important that this be dynamic. So that I can tie the hash to the configuration file as the configuration changes.
A better solution is to store the MD5 in the configuration file. But instead of the MD5 being just of the configuration file, also include some secret "key" value, like a fixed guid, in the MD5. ``` write(MD5(SecretKey + ConfigFileText)); ``` Then you simply remove that MD5 and rehash the file (including your secret key). If the MD5's are the same, then no-one modified it. This prevents someone from modifying it and re-applying the MD5 since they don't know your secret key. Keep in mind this is a fairly weak solution (as is the one you are suggesting) as they could easily track into your program to find the key or where the MD5 is stored. A better solution would be to use a public key system and sign the configuration file. Again that is weak since that would require the private key to be stored on their local machine. Pretty much anything that is contained on their local PC can be bypassed with enough effort. If you REALLY want to store the information in your executable (which I would discourage) then you can just try appending it at the end of the EXE. That is usually safe. Modifying executable programs is *virus like* behavior and most operating system security will try to stop you too. If your program is in the Program Files directory, and your configuration file is in the Application Data directory, and the user is logged in as a non-administrator (in XP or Vista), then you will be unable to update the EXE. **Update:** I don't care if you are using Asymmetric encryption, RSA or Quantum cryptography, if you are storing your keys on the user's computer (which you *must* do unless you route it all through a web service) then the user can find your keys, even if it means inspecting the registers on the CPU at run time! You are only buying yourself a moderate level of security, so stick with something that is simple. To prevent modification the solution I suggested is the best. 
To prevent reading then encrypt it, and if you are storing your key locally then use AES Rijndael. **Update:** The FixedGUID / SecretKey could alternatively be generated at install time and stored somewhere "secret" in the registry. Or you could generate it every time you use it from hardware configuration. Then you are getting more complicated. How you want to do this to allow for moderate levels of hardware changes would be to take 6 different signatures, and hash your configuration file 6 times - once with each. Combine each one with a 2nd secret value, like the GUID mentioned above (either global or generated at install). Then when you check you verify each hash separately. As long as they have 3 out of 6 (or whatever your tolerance is) then you accept it. Next time you write it you hash it with the new hardware configuration. This allows them to slowly swap out hardware over time and get a whole new system. . . Maybe that is a weakness. It all comes down to your tolerance. There are variations based on tighter tolerances. **UPDATE:** For a Credit Card system you might want to consider some real security. You should retain the services of a *security and cryptography consultant*. More information needs to be exchanged. They need to analyze your specific needs and risks. Also, if you want security with .NET you need to first start with a really good .NET obfuscator ([just Google it](http://www.google.com/search?hl=en&q=.NET%20obfuscator&aq=f&oq=)). A .NET assembly is way to easy to disassemble and get at the source code and read all your secrets. Not to sound a like a broken record, but anything that depends on the security of your user's system is fundamentally flawed from the beginning.
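To see the keyed-hash idea from the answer above concretely, here is a small Python sketch (Python stands in for the poster's .NET environment; the key and config contents are invented). It uses HMAC rather than a literal `MD5(SecretKey + ConfigFileText)` concatenation — same intent, but HMAC also avoids length-extension attacks on the plain concatenation:

```python
import hmac
import hashlib

# Hypothetical secret; in the scheme described above it would be hidden
# inside the executable, so this is a tamper deterrent, not real security.
SECRET_KEY = b"example-secret-key"

def sign_config(config_text):
    # Keyed digest binding the file contents to the secret key.
    return hmac.new(SECRET_KEY, config_text, hashlib.md5).hexdigest()

def verify_config(config_text, stored_digest):
    # Constant-time comparison, so the check itself leaks no timing info.
    return hmac.compare_digest(sign_config(config_text), stored_digest)

config = b"merchant_timeout=30\n"
tag = sign_config(config)
untouched = verify_config(config, tag)        # untampered file verifies
tampered = verify_config(config + b"x", tag)  # any modification fails
```

As the answer stresses, anyone who extracts the key from the binary can re-sign a modified file, so this only raises the bar.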
Out of pure curiosity, what's your reasoning for never wanting to load the file if it's been changed? Why not just keep all of the configuration information compiled in the executable? Why bother with an external file at all? ***Edit*** I just read your edit about this being a credit card info program. That poses a very interesting challenge. I would think, for that level of security, some sort of pretty major encryption would be necessary but I don't know anything about handling that sort of thing in such a way that the cryptographic secrets can't just be extracted from the executable. Is authenticating against some sort of online source a possibility?
How do I store information in my executable in .Net
[ "", "c#", ".net", "" ]
Does anyone use [Phing](http://phing.info/trac/) to deploy PHP applications, and if so how do you use it? We currently have a hand-written "setup" script that we run whenever we deploy a new instance of our project. We just check out from SVN and run it. It sets some basic configuration variables, installs or reloads the database, and generates a v-host for the site instance. I have often thought that maybe we should be using [Phing](http://phing.info/trac/). I haven't used ant much, so I don't have a real sense of what [Phing](http://phing.info/trac/) is supposed to do other than script the copying of files from one place to another much as our setup script does. What are some more advanced uses that you can give examples of to help me understand why we would or would not want to integrate [Phing](http://phing.info/trac/) into our process?
From Federico Cargnelutti's [blog post](http://phpimpact.wordpress.com/2007/06/30/phing-php-project-build-system-based-on-apache-ant/):

> Features include file transformations (e.g. token replacement, XSLT transformation, Smarty template transformations), file system operations, interactive build support, SQL execution, CVS operations, tools for creating PEAR packages, and much more.

Of course you could write custom scripts for all of the above. However, using a specialized build tool like Phing gives you a number of benefits. You'll be using a proven framework, so instead of having to worry about setting up "infrastructure" you can focus on the code you need to write. Using Phing will also make it easier when new members join your team: they'll be able to understand what is going on if they've used Phing (or Ant, which is what Phing is based on) before.
The compelling answer for me is that phing understands PHP classpaths. Ant doesn't. I don't want to have an ant build.xml full of exec commands. I happen to be primarily a java programmer and still use phing. It's the best tool for the job.
Do you use Phing?
[ "", "php", "deployment", "build-process", "build-automation", "phing", "" ]
What I want to do is something like this: I have enums with combined flagged values. ``` public static class EnumExtension { public static bool IsSet<T>( this T input, T matchTo ) where T:enum //the constraint I want that doesn't exist in C#3 { return (input & matchTo) != 0; } } ``` So then I could do: ``` MyEnum tester = MyEnum.FlagA | MyEnum.FlagB if( tester.IsSet( MyEnum.FlagA ) ) //act on flag a ``` Unfortunately, C#'s generic where constraints have no enum restriction, only class and struct. C# doesn't see enums as structs (even though they are value types) so I can't add extension types like this. Does anyone know a workaround?
**EDIT: This is now live in version 0.0.0.2 of UnconstrainedMelody.** (As requested on my [blog post about enum constraints](http://codeblog.jonskeet.uk/2009/09/10/generic-constraints-for-enums-and-delegates/). I've included the basic facts below for the sake of a standalone answer.) The best solution is to wait for me to include it in [UnconstrainedMelody](https://github.com/jskeet/unconstrained-melody)1. This is a library which takes C# code with "fake" constraints such as ``` where T : struct, IEnumConstraint ``` and turns it into ``` where T : struct, System.Enum ``` via a postbuild step. It shouldn't be too hard to write `IsSet`... although catering for both `Int64`-based and `UInt64`-based flags could be the tricky part. (I smell some helper methods coming on, basically allowing me to treat any flags enum as if it had a base type of `UInt64`.) What would you want the behaviour to be if you called ``` tester.IsSet(MyFlags.A | MyFlags.C) ``` ? Should it check that *all* the specified flags are set? That would be my expectation. I'll try to do this on the way home tonight... I'm hoping to have a quick blitz on useful enum methods to get the library up to a usable standard quickly, then relax a bit. EDIT: I'm not sure about `IsSet` as a name, by the way. Options: * Includes * Contains * HasFlag (or HasFlags) * IsSet (it's certainly an option) Thoughts welcome. I'm sure it'll be a while before anything's set in stone anyway... --- 1 or submit it as a patch, of course...
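The any-versus-all question raised above — should `tester.IsSet(MyFlags.A | MyFlags.C)` require *every* flag in the mask, or just *some* overlap — can be illustrated with Python's `enum.Flag`, used here purely as a neutral analog of the C# flags enum (type and helper names are invented):

```python
from enum import Flag, auto

class MyFlags(Flag):
    A = auto()
    B = auto()
    C = auto()

def any_set(value, mask):
    # Mirrors the question's (input & matchTo) != 0: true if ANY flag overlaps.
    return bool(value & mask)

def all_set(value, mask):
    # The semantics discussed in this answer: true only if EVERY
    # flag in the mask is present in the value.
    return (value & mask) == mask

tester = MyFlags.A | MyFlags.B
```

For `tester = A | B`, `any_set(tester, A | C)` is true (A overlaps) while `all_set(tester, A | C)` is false (C is missing) — exactly the ambiguity the answer asks about.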
As of C# 7.3, there is now a built-in way to add enum constraints: ``` public class UsingEnum<T> where T : System.Enum { } ``` source: <https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/where-generic-type-constraint>
Anyone know a good workaround for the lack of an enum generic constraint?
[ "", "c#", ".net", "enums", "flags", "" ]
I am developing a J2ME application that has a large amount of data to store on the device (in the region of 1MB but variable). I can't rely on the file system so I'm stuck the Record Management System (RMS), which allows multiple record stores but each have a limited size. My initial target platform, Blackberry, limits each to 64KB. I'm wondering if anyone else has had to tackle the problem of storing a large amount of data in the RMS and how they managed it? I'm thinking of having to calculate record sizes and split one data set accross multiple stores if its too large, but that adds a lot of complexity to keep it intact. There is lots of different types of data being stored but only one set in particular will exceed the 64KB limit.
For anything past a few kilobytes you need to use either JSR 75 or a remote server. RMS records are extremely limited in size and speed, even in some higher end handsets. If you need to juggle 1MB of data in J2ME the only reliable, portable way is to store it on the network. The HttpConnection class and the GET and POST methods are always supported. On the handsets that support JSR 75 FileConnection it may be a valid alternative but without code signing it is a user experience nightmare. Almost every single API call will invoke a security prompt with no blanket permission choice. Companies that deploy apps with JSR 75 usually need half a dozen binaries for every port just to cover a small part of the possible certificates. And this is just for the manufacturer certificates; some handsets only have carrier-locked certificates.
RMS performance and implementation varies wildly between devices, so if platform portability is a problem, you may find that your code works well on some devices and not others. RMS is designed to store small amounts of data (High score tables, or whatever) not large amounts. You might find that some platforms are faster with files stored in multiple record stores. Some are faster with multiple records within one store. Many are ok for storage, but become unusably slow when deleting large amounts of data from the store. Your best bet is to use JSR-75 instead where available, and create your own file store interface that falls back to RMS if nothing better is supported. Unfortunately when it comes to JavaME, you are often drawn into writing device-specific variants of your code.
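If splitting across record stores is attempted despite the caveats above, the mechanics of the asker's "calculate record sizes and split one data set across multiple stores" idea are simple. A Python sketch of the chunking arithmetic (the J2ME version would write each chunk to its own `RecordStore`; the 64KB cap comes from the asker's BlackBerry target):

```python
CHUNK_LIMIT = 64 * 1024  # per-store limit on the asker's BlackBerry target

def split_into_stores(data, limit=CHUNK_LIMIT):
    # One list entry per record store; each chunk stays within the cap.
    return [data[i:i + limit] for i in range(0, len(data), limit)]

def reassemble(chunks):
    # Reading the stores back in order restores the original data set.
    return b"".join(chunks)

blob = bytes(200_000)  # stand-in for the ~1MB data set from the question
chunks = split_into_stores(blob)
restored = reassemble(chunks)
```

The complexity the asker worries about is mostly bookkeeping: recording the chunk count and ordering so the stores can be reassembled intact.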
Best practice for storing large amounts of data with J2ME
[ "", "java", "java-me", "rms", "" ]
I have a database table and one of the fields (not the primary key) has a unique index on it. Now I want to swap values under this column for two rows. How could this be done? Two hacks I know are: 1. Delete both rows and re-insert them. 2. Update rows with some other value and swap and then update to actual value. But I don't want to go for these as they do not seem to be the appropriate solution to the problem. Could anyone help me out?
I think you should go for solution 2. There is no 'swap' function in any SQL variant I know of. If you need to do this regularly, I suggest solution 1, depending on how other parts of the software are using this data. You can have locking issues if you're not careful. But in short: there is no other solution than the ones you provided.
The magic word is **DEFERRABLE** here: ``` DROP TABLE ztable CASCADE; CREATE TABLE ztable ( id integer NOT NULL PRIMARY KEY , payload varchar ); INSERT INTO ztable(id,payload) VALUES (1,'one' ), (2,'two' ), (3,'three' ); SELECT * FROM ztable; -- This works, because there is no constraint UPDATE ztable t1 SET payload=t2.payload FROM ztable t2 WHERE t1.id IN (2,3) AND t2.id IN (2,3) AND t1.id <> t2.id ; SELECT * FROM ztable; ALTER TABLE ztable ADD CONSTRAINT OMG_WTF UNIQUE (payload) DEFERRABLE INITIALLY DEFERRED ; -- This should also work, because the constraint -- is deferred until "commit time" UPDATE ztable t1 SET payload=t2.payload FROM ztable t2 WHERE t1.id IN (2,3) AND t2.id IN (2,3) AND t1.id <> t2.id ; SELECT * FROM ztable; ``` RESULT: ``` DROP TABLE NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "ztable_pkey" for table "ztable" CREATE TABLE INSERT 0 3 id | payload ----+--------- 1 | one 2 | two 3 | three (3 rows) UPDATE 2 id | payload ----+--------- 1 | one 2 | three 3 | two (3 rows) NOTICE: ALTER TABLE / ADD UNIQUE will create implicit index "omg_wtf" for table "ztable" ALTER TABLE UPDATE 2 id | payload ----+--------- 1 | one 2 | two 3 | three (3 rows) ```
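The `DEFERRABLE` route above needs an engine that supports it (PostgreSQL, Oracle); SQLite, SQL Server, and MySQL enforce `UNIQUE` immediately, so there the asker's second "hack" — update through a placeholder value inside one transaction — is what remains. A runnable sketch against SQLite (table, values, and placeholder are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, code TEXT UNIQUE)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "alpha"), (2, "beta")])

def swap_unique(conn, id_a, id_b):
    # Three updates inside one transaction: park one row on a placeholder,
    # then move each real value into place. The placeholder must be a value
    # that can never legitimately occur in the column.
    with conn:  # commits on success, rolls back on error
        a = conn.execute("SELECT code FROM t WHERE id = ?", (id_a,)).fetchone()[0]
        b = conn.execute("SELECT code FROM t WHERE id = ?", (id_b,)).fetchone()[0]
        conn.execute("UPDATE t SET code = ? WHERE id = ?", ("__swap_tmp__", id_a))
        conn.execute("UPDATE t SET code = ? WHERE id = ?", (a, id_b))
        conn.execute("UPDATE t SET code = ? WHERE id = ?", (b, id_a))

swap_unique(conn, 1, 2)
rows = dict(conn.execute("SELECT id, code FROM t ORDER BY id"))
```

Because the intermediate placeholder state lives only inside the transaction, other sessions never observe it.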
Swap unique indexed column values in database
[ "", "sql", "database", "" ]
I would like the version property of my application to be incremented for each build but I'm not sure on how to enable this functionality in Visual Studio (2005/2008). I have tried to specify the AssemblyVersion as 1.0.\* but it doesn't get me exactly what I want. I'm also using a settings file and in earlier attempts when the assembly version changed my settings got reset to the default since the application looked for the settings file in another directory. I would like to be able to display a version number in the form of 1.1.38 so when a user finds a problem I can log the version they are using as well as tell them to upgrade if they have an old release. A short explanation of how the versioning works would also be appreciated. When does the build and revision number get incremented?
With the "Built in" stuff, you can't, as using 1.0.\* or 1.0.0.\* will replace the revision and build numbers with a coded date/timestamp, which is usually also a good way. For more info, see the [Assembly Linker](http://msdn2.microsoft.com/en-us/library/c405shex(vs.80).aspx) Documentation in the /v tag. As for automatically incrementing numbers, use the AssemblyInfo Task: [AssemblyInfo Task](http://code.msdn.microsoft.com/AssemblyInfoTaskvers) This can be configured to automatically increment the build number. There are 2 Gotchas: 1. Each of the 4 numbers in the Version string is limited to 65535. This is a Windows Limitation and unlikely to get fixed. * [Why are build numbers limited to 65535?](http://blogs.msdn.com/msbuild/archive/2007/01/03/why-are-build-numbers-limited-to-65535.aspx) 2. Using it with Subversion requires a small change: * [Using MSBuild to generate assembly version info at build time (including SubVersion fix)](http://www.andrewconnell.com/blog/archive/2006/08/29/4078.aspx) Retrieving the Version number is then quite easy: ``` Version v = Assembly.GetExecutingAssembly().GetName().Version; string About = string.Format(CultureInfo.InvariantCulture, @"YourApp Version {0}.{1}.{2} (r{3})", v.Major, v.Minor, v.Build, v.Revision); ``` --- And, to clarify: In .net or at least in C#, the build is actually the THIRD number, not the fourth one as some people (for example Delphi Developers who are used to Major.Minor.Release.Build) might expect. In .net, it's Major.Minor.Build.Revision.
VS.NET defaults the Assembly version to 1.0.\* and uses the following logic when auto-incrementing: it sets the build part to the number of days since January 1st, 2000, and sets the revision part to the number of seconds since midnight, local time, divided by two. See this [MSDN article](http://msdn.microsoft.com/en-us/library/system.reflection.assemblyversionattribute.assemblyversionattribute.aspx). Assembly version is located in an assemblyinfo.vb or assemblyinfo.cs file. From the file: ``` ' Version information for an assembly consists of the following four values: ' ' Major Version ' Minor Version ' Build Number ' Revision ' ' You can specify all the values or you can default the Build and Revision Numbers ' by using the '*' as shown below: ' <Assembly: AssemblyVersion("1.0.*")> <Assembly: AssemblyVersion("1.0.0.0")> <Assembly: AssemblyFileVersion("1.0.0.0")> ```
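The date arithmetic described above is easy to reproduce outside the compiler. This Python sketch is only an illustration of the documented scheme (build = days since 2000-01-01, revision = seconds since local midnight divided by two), not any Microsoft tooling:

```python
from datetime import date, datetime

def auto_version_parts(now):
    # VS.NET's 1.0.* auto-numbering, as described in the MSDN article above:
    # build is the day count since January 1st, 2000...
    build = (now.date() - date(2000, 1, 1)).days
    # ...and revision is seconds since midnight (local time), halved.
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    revision = int((now - midnight).total_seconds()) // 2
    return build, revision

# Example: a build made at noon on 16 September 2008.
build, revision = auto_version_parts(datetime(2008, 9, 16, 12, 0, 0))
```

Halving the seconds keeps the revision under the 65535-per-part ceiling mentioned in the other answer (86400 seconds per day would otherwise overflow it).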
Automatically update version number
[ "", "c#", "visual-studio", "versioning", "" ]
What did I do wrong? Here is an excerpt from my code: ``` public void createPartControl(Composite parent) { parent.setLayout(new FillLayout()); ScrolledComposite scrollBox = new ScrolledComposite(parent, SWT.V_SCROLL); scrollBox.setExpandHorizontal(true); mParent = new Composite(scrollBox, SWT.NONE); scrollBox.setContent(mParent); FormLayout layout = new FormLayout(); mParent.setLayout(layout); // Adds a bunch of controls here mParent.layout(); mParent.setSize(mParent.computeSize(SWT.DEFAULT, SWT.DEFAULT, true)); } ``` ...but it clips the last button: ![alt text](https://i.stack.imgur.com/1ubzc.png "Screenshot") bigbrother82: That didn't work. SCdF: I tried your suggestion, and now the scrollbars are gone. I need to work some more on that.
This is a common hurdle when using `ScrolledComposite`. When it gets so small that the scroll bar must be shown, the client control has to shrink horizontally to make room for the scroll bar. This has the side effect of making some labels wrap lines, which moves the following controls farther down, which increases the minimum height needed by the content composite. You need to listen for width changes on the content composite (`mParent`), compute the minimum height again given the new content width, and call `setMinHeight()` on the scrolled composite with the new height. ``` public void createPartControl(Composite parent) { parent.setLayout(new FillLayout()); final ScrolledComposite scrollBox = new ScrolledComposite(parent, SWT.V_SCROLL); scrollBox.setExpandHorizontal(true); scrollBox.setExpandVertical(true); // Using 0 here ensures the horizontal scroll bar will never appear. If // you want the horizontal bar to appear at some threshold (say 100 // pixels) then send that value instead. scrollBox.setMinWidth(0); mParent = new Composite(scrollBox, SWT.NONE); FormLayout layout = new FormLayout(); mParent.setLayout(layout); // Adds a bunch of controls here mParent.addListener(SWT.Resize, new Listener() { int width = -1; public void handleEvent(Event e) { int newWidth = mParent.getSize().x; if (newWidth != width) { scrollBox.setMinHeight(mParent.computeSize(newWidth, SWT.DEFAULT).y); width = newWidth; } } }); // Wait until here to set content pane. This way the resize listener will // fire when the scrolled composite first resizes mParent, which in turn // computes the minimum height and calls setMinHeight() scrollBox.setContent(mParent); } ``` In listening for size changes, note that we ignore any resize events where the width stays the same. This is because changes in the height of the content do not affect the *minimum* height of the content, as long as the width is the same.
If I am not mistaken you need to swap the ``` mParent.layout(); ``` and ``` mParent.setSize(mParent.computeSize(SWT.DEFAULT, SWT.DEFAULT, true)); ``` so that you have: ``` public void createPartControl(Composite parent) { parent.setLayout(new FillLayout()); ScrolledComposite scrollBox = new ScrolledComposite(parent, SWT.V_SCROLL); scrollBox.setExpandHorizontal(true); mParent = new Composite(scrollBox, SWT.NONE); scrollBox.setContent(mParent); FormLayout layout = new FormLayout(); mParent.setLayout(layout); // Adds a bunch of controls here mParent.setSize(mParent.computeSize(SWT.DEFAULT, SWT.DEFAULT, true)); mParent.layout(); } ```
Prevent SWT ScrolledComposite from eating part of its children
[ "", "java", "eclipse", "swt", "rcp", "" ]
I have a List of Foo. Foo has a string property named Bar. I'd like to use **LINQ** to get a string[] of **distinct** values for Foo.Bar in List of Foo. How can I do this?
I'd go lambdas... wayyy nicer ``` var bars = Foos.Select(f => f.Bar).Distinct().ToArray(); ``` works the same as what @lassevk posted. I'd also add that you might want to keep from converting to an array until the last minute. LINQ does some optimizations behind the scenes, queries stay in their query form until explicitly needed. So you might want to build everything you need into the query first so any possible optimization is applied altogether. By evaluation I mean asking for something that explicitly requires evaluation like "Count()" or "ToArray()" etc.
This should work if you want to use the fluent pattern: ``` string[] arrayStrings = fooList.Select(a => a.Bar).Distinct().ToArray(); ```
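For readers coming from other languages, the Select/Distinct/ToArray pipeline shown above has a direct analog; this Python sketch (not a translation of the .NET API, and with invented sample data) de-duplicates while preserving first-seen order:

```python
class Foo:
    def __init__(self, bar):
        self.bar = bar

foos = [Foo("x"), Foo("y"), Foo("x"), Foo("z"), Foo("y")]

# Analog of fooList.Select(a => a.Bar).Distinct().ToArray():
# dict.fromkeys keeps only the first occurrence of each key, in order.
bars = list(dict.fromkeys(f.bar for f in foos))
```

`dict.fromkeys` guarantees first-seen ordering in Python; LINQ-to-Objects' `Distinct()` happens to behave the same way in practice, though its ordering is formally unspecified.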
How to get an array of distinct property values from in memory lists?
[ "", "c#", ".net", "performance", "linq", "filtering", "" ]
What is the fastest, yet secure way to encrypt passwords (in PHP preferably), and for whichever method you choose, is it portable? In other words, if I later migrate my website to a different server, will my passwords continue to work? The method I am using now, as I was told, is dependent on the exact versions of the libraries installed on the server.
If you are choosing an encryption method for your login system then speed is not your friend. Jeff had a to-and-fro with Thomas Ptacek about passwords and the [conclusion](http://chargen.matasano.com/chargen/2007/9/7/enough-with-the-rainbow-tables-what-you-need-to-know-about-s.html) was that you should use the slowest, most secure encryption method you can afford to. From Thomas Ptacek's blog:

> Speed is exactly what you don’t want in a password hash function.
>
> Modern password schemes are attacked with incremental password crackers.
>
> Incremental crackers don’t precalculate all possible cracked passwords. They consider each password hash individually, and they feed their dictionary through the password hash function the same way your PHP login page would. Rainbow table crackers like Ophcrack use space to attack passwords; incremental crackers like John the Ripper, Crack, and LC5 work with time: statistics and compute.
>
> The password attack game is scored in time taken to crack password X. With rainbow tables, that time depends on how big your table needs to be and how fast you can search it. With incremental crackers, the time depends on how fast you can make the password hash function run.
>
> The better you can optimize your password hash function, the faster your password hash function gets, the weaker your scheme is. MD5 and SHA1, even conventional block ciphers like DES, are designed to be fast. MD5, SHA1, and DES are weak password hashes. On modern CPUs, raw crypto building blocks like DES and MD5 can be bitsliced, vectorized, and parallelized to make password searches lightning fast. Game-over FPGA implementations cost only hundreds of dollars.
I'm with Peter. Developers don't seem to understand passwords. We all pick (and I'm guilty of this too) MD5 or SHA1 because they are fast. Thinking about it ('cuz someone recently pointed it out to me) that doesn't make any sense. We should be picking a hashing algorithm that's stupid slow. I mean, on the scale of things, a busy site will hash passwords what? every 1/2 minute? Who cares if it takes 0.8 seconds vs 0.03 seconds server wise? But that extra slowness is huge to prevent all types of common brute-forcish attacks. From my reading, bcrypt is specifically designed for secure password hashing. It's based on blowfish, and there are many implementations. For PHP, check out [PHP Pass](http://www.openwall.com/phpass/) For anyone doing .NET, check out [BCrypt.NET](http://derekslager.com/blog/posts/2007/10/bcrypt-dotnet-strong-password-hashing-for-dotnet-and-mono.ashx)
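The "slow on purpose" property both answers argue for is exactly what password KDFs provide. A minimal Python sketch using only the standard library (bcrypt itself needs a third-party package, so PBKDF2 stands in here; the iteration count is illustrative and should be tuned upward):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    # Deliberately slow, salted key derivation: the attacker pays the same
    # iteration cost for every single guess, which is the whole point.
    if salt is None:
        salt = os.urandom(16)  # per-password random salt defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def check_password(password, salt, iterations, expected):
    _, _, digest = hash_password(password, salt, iterations)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, rounds, stored = hash_password("correct horse battery")
```

Storing the salt and iteration count alongside the digest also answers the portability concern in the question: nothing here depends on server-specific library versions.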
Encrypting Passwords
[ "", "php", "encryption", "passwords", "" ]
What is the best method to parse multiple, discrete, custom XML documents with Java?
I would use [Stax](http://jcp.org/en/jsr/detail?id=173) to parse XML, it's fast and easy to use. I've been using it on my last project to parse XML files up to 24MB. There's a nice introduction on [java.net](http://today.java.net/pub/a/today/2006/07/20/introduction-to-stax.html), which tells you everything you need to know to get started.
Basically, you have two main XML parsing methods in Java : * [SAX](http://en.wikipedia.org/wiki/Simple_API_for_XML), where you use an [handler](http://download.oracle.com/javase/6/docs/api/org/xml/sax/helpers/DefaultHandler.html) to only grab what you want in your XML and ditch the rest * [DOM](http://en.wikipedia.org/wiki/Document_Object_Model), which parses your file all along, and allows you to grab all elements in a more tree-like fashion. Another very useful XML parsing method, albeit a little more recent than these ones, and included in the JRE only since Java6, is [StAX](http://en.wikipedia.org/wiki/StAX). StAX was conceived as a medial method between the tree-based of DOM and event-based approach of SAX. It is quite similar to SAX in the fact that parsing very large documents is easy, but in this case the application "pulls" info from the parser, instead of the parsing "pushing" events to the application. You can find more explanation on this subject [here](http://www.xml.com/pub/a/2003/09/17/stax.html). So, depending on what you want to achieve, you can use one of these approaches.
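The tree-versus-stream split described above is not Java-specific. For comparison, both styles in Python's standard library (document contents invented; `iterparse` plays the role of a pull parser like StAX):

```python
import xml.etree.ElementTree as ET
from io import BytesIO

DOC = b"""<orders>
  <order id="1"><total>10.50</total></order>
  <order id="2"><total>4.25</total></order>
</orders>"""

# Tree style (DOM-like): load the whole document, then navigate it freely.
root = ET.parse(BytesIO(DOC)).getroot()
ids = [o.get("id") for o in root.iter("order")]

# Streaming style (the SAX/StAX idea): pull events as they arrive, without
# ever holding the full tree -- what makes very large files tractable.
totals = []
for event, elem in ET.iterparse(BytesIO(DOC), events=("end",)):
    if elem.tag == "total":
        totals.append(float(elem.text))
        elem.clear()  # release parsed content as we go
```

The trade-off matches the Java description: the tree form is convenient for random access, the streaming form keeps memory flat for large inputs.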
Best method to parse various custom XML documents in Java
[ "", "java", "xml", "" ]
I have a template class that I serialize (call it C), for which I want to specify a version for boost serialization. As BOOST\_CLASS\_VERSION does not work for template classes, I tried this: ``` namespace boost { namespace serialization { template< typename T, typename U > struct version< C<T,U> > { typedef mpl::int_<1> type; typedef mpl::integral_c_tag tag; BOOST_STATIC_CONSTANT(unsigned int, value = version::type::value); }; } } ``` but it does not compile. Under VC8, a subsequent call to BOOST\_CLASS\_VERSION gives this error: `error C2913: explicit specialization; 'boost::serialization::version' is not a specialization of a class template` What is the correct way to do it?
``` #include <boost/serialization/version.hpp> ``` :-)
I was able to properly use the macro BOOST\_CLASS\_VERSION until I encapsulated it inside a namespace. Compilation errors returned were: ``` error C2988: unrecognizable template declaration/definition error C2143: syntax error: missing ';' before '<' error C2913: explicit specialization; 'Romer::RDS::Settings::boost::serialization::version' is not a specialization of a class template error C2059: syntax error: '<' error C2143: syntax error: missing ';' before '{' error C2447: '{': missing function header (old-style formal list?) ``` As suggested in a previous edit, moving BOOST\_CLASS\_VERSION to global scope solved the issue. I would prefer keeping the macro close to the referenced structure.
Boost serialization: specifying a template class version
[ "", "c++", "boost-serialization", "" ]
I'm part of a team that develops a pretty big Swing Java Applet. Most of our code is legacy and there are tons of singleton references. We've bunched all of them into a single "Application Context" singleton. What we now need is to create some way to separate the shared context (shared across all applets currently showing) and non-shared context (specific to each applet currently showing). However, we don't have an ID at each of the locations that call to the singleton, nor do we want to propagate the ID to all locations. What's the easiest way to identify in which applet context we're running? (I've tried messing with classloaders, thread groups, thread ids... so far I could find nothing that will enable me to ID the origin of the call).
Singletons are evil, what do you expect? ;) Perhaps the most comprehensive approach would be to load the bulk of the applet in a different class loader (use java.net.URLClassLoader.newInstance). Then use a WeakHashMap to associate class loader with an applet. If you could split most of the code into a common class loader (as a parent of each per-applet class loader) and into the normal applet codebase, that would be faster but more work. Other hacks: If you have access to any component, you can use Component.getParent repeatedly or SwingUtilities.getRoot. If you are in a per-applet instance thread, then you can set up a ThreadLocal. From the EDT, you can read the current event from the queue (java.awt.EventQueue.getCurrentEvent()), and possibly find a component from that. Alternatively push an EventQueue with a overridden dispatchEvent method.
If I understand you correctly, the idea is to get a different "singleton" object for each caller object or "context". One thing you can do is to create a thread-local global variable where you write the ID of the current context. (This can be done with AOP.) Then in the singleton getter, the context ID is fetched from the thread-local to use as a key to the correct "singleton" instance for the calling context. Regarding AOP there should be no problem using it in applets since, depending on your point-cuts, the advices are woven at compile time and a JAR is added to the runtime dependencies. Hence, no special evidence of AOP should remain at run time.
How can I identify in which Java Applet context running without passing an ID?
[ "", "java", "swing", "applet", "" ]
I am looking for a very fast way to filter down a collection in C#. I am currently using generic `List<object>` collections, but am open to using other structures if they perform better. Currently, I am just creating a new `List<object>` and looping through the original list. If the filtering criteria match, I put a copy into the new list. Is there a better way to do this? Is there a way to filter in place so there is no temporary list required?
If you're using C# 3.0 you can use linq, which is way better and way more elegant: ``` List<int> myList = GetListOfIntsFromSomewhere(); // This will filter ints that are not > 7 out of the list; Where returns an // IEnumerable<T>, so call ToList to convert back to a List<T>. List<int> filteredList = myList.Where(x => x > 7).ToList(); ``` If you can't find the `.Where`, that means you need to import `using System.Linq;` at the top of your file.
Here is a code block / example of some list filtering using three different methods that I put together to show Lambdas and LINQ based list filtering. ``` #region List Filtering static void Main(string[] args) { ListFiltering(); Console.ReadLine(); } private static void ListFiltering() { var PersonList = new List<Person>(); PersonList.Add(new Person() { Age = 23, Name = "Jon", Gender = "M" }); //Non-Constructor Object Property Initialization PersonList.Add(new Person() { Age = 24, Name = "Jack", Gender = "M" }); PersonList.Add(new Person() { Age = 29, Name = "Billy", Gender = "M" }); PersonList.Add(new Person() { Age = 33, Name = "Bob", Gender = "M" }); PersonList.Add(new Person() { Age = 45, Name = "Frank", Gender = "M" }); PersonList.Add(new Person() { Age = 24, Name = "Anna", Gender = "F" }); PersonList.Add(new Person() { Age = 29, Name = "Sue", Gender = "F" }); PersonList.Add(new Person() { Age = 35, Name = "Sally", Gender = "F" }); PersonList.Add(new Person() { Age = 36, Name = "Jane", Gender = "F" }); PersonList.Add(new Person() { Age = 42, Name = "Jill", Gender = "F" }); //Logic: Show me all males that are less than 30 years old. Console.WriteLine(""); //Iterative Method Console.WriteLine("List Filter Normal Way:"); foreach (var p in PersonList) if (p.Gender == "M" && p.Age < 30) Console.WriteLine(p.Name + " is " + p.Age); Console.WriteLine(""); //Lambda Filter Method Console.WriteLine("List Filter Lambda Way"); foreach (var p in PersonList.Where(p => (p.Gender == "M" && p.Age < 30))) //.Where is an extension method Console.WriteLine(p.Name + " is " + p.Age); Console.WriteLine(""); //LINQ Query Method Console.WriteLine("List Filter LINQ Way:"); foreach (var v in from p in PersonList where p.Gender == "M" && p.Age < 30 select new { p.Name, p.Age }) Console.WriteLine(v.Name + " is " + v.Age); } private class Person { public Person() { } public int Age { get; set; } public string Name { get; set; } public string Gender { get; set; } } #endregion ```
Filtering collections in C#
[ "", "c#", "collections", "filtering", "" ]
I'm fairly new to ASP.NET and trying to learn how things are done. I come from a C# background so the code-behind portion is easy, but thinking like a web developer is unfamiliar. I have an aspx page that contains a grid of checkboxes. I have a button that is coded via a Button\_Click event to collect a list of which rows are checked and create a session variable out of that list. The same button is referenced (via TargetControlID) by my ascx page's ModalPopupExtender which controls the panel on the ascx page. When the button is clicked, the modal popup opens but the Button\_Click event is never fired, so the modal doesn't get its session data. Since the two pages are separate, I can't call the ModalPopupExtender from the aspx.cs code, I can't reach the list of checkboxes from the ascx.cs code, and I don't see a way to populate my session variable and then programmatically activate some other hidden button or control which will then open my modal popup. Any thoughts?
A user control (.ascx) file is just a set of controls that you have grouped together to provide some reusable functionality. The controls defined in it are still added to the page's (.aspx) control collection during the page lifecycle. The ModalPopupExtender uses JavaScript and DHTML to show and hide the controls in the user control client-side. What you are seeing is that the click event is being handled client-side by the ModalPopupExtender, and it is canceling the post-back to the server. This is the default behavior by design. You certainly can access the page's control collection from the code-behind of your user control, though, because it is all part of the same control tree. Just use the FindControl(xxx) method of any control to search for the child of it you need.
After some research following DancesWithBamboo's answer, I figured out how to make it work. An example reference to my ascx page within my aspx page: ``` <uc1:ChildPage ID="MyModalPage" runat="server" /> ``` The aspx code-behind to grab and open the ModalPopupExtender (named modalPopup) would look like this: ``` AjaxControlToolkit.ModalPopupExtender mpe = (AjaxControlToolkit.ModalPopupExtender) MyModalPage.FindControl("modalPopup"); mpe.Show(); ```
How can I pass data from an aspx page to an ascx modal popup?
[ "", "c#", "asp.net", "asp.net-ajax", "" ]
I've been trying to use SQLite with the PDO wrapper in PHP with mixed success. I can read from the database fine, but none of my updates are being committed to the database when I view the page in the browser. Curiously, running the script from my shell does update the database. I suspected file permissions as the culprit, but even with the database providing full access (chmod 777) the problem persists. Should I try changing the file owner? If so, what to? By the way, my machine is the standard Mac OS X Leopard install with PHP activated. @Tom Martin Thank you for your reply. I just ran your code and it looks like PHP runs as user \_www. I then tried chowning the database to be owned by \_www, but that didn't work either. I should also note that PDO's errorInfo function doesn't indicate an error took place. Could this be a setting with PDO somehow opening the database for read-only? I've heard that SQLite performs write locks on the entire file. Is it possible that the database is locked by something else preventing the write? I've decided to include the code in question. This is going to be more or less a port of [Grant's script](https://stackoverflow.com/questions/6936/using-what-ive-learned-from-stackoverflow-html-scraper) to PHP. 
So far it's just the Questions section: ``` <?php $db = new PDO('sqlite:test.db'); $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, "https://stackoverflow.com/users/658/kyle"); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_COOKIE, "shhsecret=1293706652"); $page = curl_exec($ch); preg_match('/summarycount">.*?([,\d]+)<\/div>.*?Reputation/s', $page, $rep); $rep = preg_replace("/,/", "", $rep[1]); preg_match('/iv class="summarycount".{10,60} (\d+)<\/d.{10,140}Badges/s', $page, $badge); $badge = $badge[1]; $qreg = '/question-summary narrow.*?vote-count-post"><strong.*?>(-?\d*).*?\/questions\/(\d*).*?>(.*?)<\/a>/s'; preg_match_all($qreg, $page, $questions, PREG_SET_ORDER); $areg = '/(answer-summary"><a href="\/questions\/(\d*).*?votes.*?>(-?\d+).*?href.*?>(.*?)<.a)/s'; preg_match_all($areg, $page, $answers, PREG_SET_ORDER); echo "<h3>Questions:</h3>\n"; echo "<table cellpadding=\"3\">\n"; foreach ($questions as $q) { $query = 'SELECT count(id), votes FROM Questions WHERE id = '.$q[2].' AND type=0;'; $dbitem = $db->query($query)->fetch(PDO::FETCH_ASSOC); if ($dbitem['count(id)'] > 0) { $lastQ = $q[1] - $dbitem['votes']; if ($lastQ == 0) { $lastQ = ""; } $query = "UPDATE Questions SET votes = '$q[1]' WHERE id = '$q[2]'"; $db->exec($query); } else { $query = "INSERT INTO Questions VALUES('$q[3]', '$q[1]', 0, '$q[2]')"; echo "$query\n"; $db->exec($query); $lastQ = "(NEW)"; } echo "<tr><td>$lastQ</td><td align=\"right\">$q[1]</td><td>$q[3]</td></tr>\n"; } echo "</table>"; ?> ```
Kyle, in order for PDO/SQLite to work you need write permission to the directory where your database resides. Also, I see you perform multiple selects in a loop. This may be OK if you are building something small and not heavily loaded. Otherwise I'd suggest building a single query that returns multiple rows and processing them in a separate loop.
I found the answer on the [PHP manual](http://www.php.net/manual/en/ref.pdo-sqlite.php#57356) "The folder that houses the database file must be writeable."
SQLite/PHP read-only?
[ "", "php", "sqlite", "permissions", "pdo", "" ]
I am working with both [amq.js](http://activemq.apache.org/ajax.html) (ActiveMQ) and [Google Maps](http://code.google.com/apis/maps/documentation/reference.html). I load my scripts in this order ``` <head> <meta http-equiv="content-type" content="text/html;charset=UTF-8" /> <title>AMQ & Maps Demo</title> <!-- Stylesheet --> <link rel="stylesheet" type="text/css" href="style.css"></link> <!-- Google APIs --> <script type="text/javascript" src="http://www.google.com/jsapi?key=abcdefg"></script> <!-- Active MQ --> <script type="text/javascript" src="amq/amq.js"></script> <script type="text/javascript">amq.uri='amq';</script> <!-- Application --> <script type="text/javascript" src="application.js"></script> </head> ``` However in my application.js it loads Maps fine but I get an error when trying to subscribe to a Topic with AMQ. AMQ depends on prototype which the error console in Firefox says object is not defined. I think I have a problem with using the amq object before the script is finished loading. **Is there a way to make sure both scripts load before I use them in my application.js?** Google has this nice function call `google.setOnLoadCallback(initialize);` which works great. I'm not sure amq.js has something like this.
> **Is there a way to make sure both scripts load before I use them in my application.js?** JavaScript files should load sequentially *and block* so unless the scripts you are depending on are doing something unusual all you should need to do is load application.js after the other files. [Non-blocking JavaScript Downloads](http://yuiblog.com/blog/2008/07/22/non-blocking-scripts/) has some information about how scripts load (and discusses some techniques to subvert the blocking).
Cross-domain scripts are loaded after the site's own scripts; this is why you get errors. Interestingly, nobody here seems to know this.
JavaScript Load Order
[ "", "javascript", "google-maps", "" ]
I need advice on how to have my C# console application display text to the user through the standard output while still being able access it later on. The actual feature I would like to implement is to dump the entire output buffer to a text file at the end of program execution. The workaround I use while I don't find a cleaner approach is to subclass `TextWriter` overriding the writing methods so they would both write to a file and call the original stdout writer. Something like this: ``` public class DirtyWorkaround { private class DirtyWriter : TextWriter { private TextWriter stdoutWriter; private StreamWriter fileWriter; public DirtyWriter(string path, TextWriter stdoutWriter) { this.stdoutWriter = stdoutWriter; this.fileWriter = new StreamWriter(path); } override public void Write(string s) { stdoutWriter.Write(s); fileWriter.Write(s); fileWriter.Flush(); } // Same as above for WriteLine() and WriteLine(string), // plus whatever methods I need to override to inherit // from TextWriter (Encoding.Get I guess). } public static void Main(string[] args) { using (DirtyWriter dw = new DirtyWriter("path", Console.Out)) { Console.SetOut(dw); // Teh codez } } } ``` See that it writes to and flushes the file all the time. I'd love to do it only at the end of the execution, but I couldn't find any way to access to the output buffer. Also, excuse inaccuracies with the above code (had to write it *ad hoc*, sorry ;).
The perfect solution for this is to use [log4net](http://logging.apache.org/log4net/) with a console appender and a file appender. There are many other appenders available as well. It also allows you to turn the different appenders off and on at runtime.
I don't think there's anything wrong with your approach. If you wanted reusable code, consider implementing a class called `MultiWriter` or somesuch that takes as input two (or N?) `TextWriter` streams and distributes all writes, flushes, etc. to those streams. Then you can do this file/console thing, but just as easily you can split any output stream. Useful!
How to save the output of a console application
[ "", "c#", ".net", "console", "stdout", "" ]
I'm working on a web service at the moment and there is the potential that the returned results could be quite large ( > 5mb). It's perfectly valid for this set of data to be this large and the web service can be called either sync or async, but I'm wondering what people's thoughts are on the following: 1. If the connection is lost, the entire resultset will have to be regenerated and sent again. Is there any way I can do any sort of "resume" if the connection is lost or reset? 2. Is sending a result set this large even appropriate? Would it be better to implement some sort of "paging" where the resultset is generated and stored on the server and the client can then download chunks of the resultset in smaller amounts and re-assemble the set at their end?
I have seen all three approaches, **paged**, **store and retrieve**, and **massive push**. I think the solution to your problem depends to some extent on why your result set is so large and how it is generated. Do your results grow over time, are they calculated all at once and then pushed, do you want to stream them back as soon as you have them? ## **Paging Approach** In my experience, using a paging approach is appropriate when the client needs quick access to reasonably sized chunks of the result set similar to pages in search results. Considerations here are overall chattiness of your protocol, caching of the entire result set between client page requests, and/or the processing time it takes to generate a page of results. ## **Store and retrieve** Store and retrieve is useful when the results are not random access and the result set grows in size as the query is processed. Issues to consider here are complexity for clients and if you can provide the user with partial results or if you need to calculate all results before returning anything to the client (think sorting of results from distributed search engines). ## **Massive Push** The massive push approach is almost certainly flawed. Even if the client needs all of the information and it needs to be pushed in a monolithic result set, I would recommend taking the approach of `WS-ReliableMessaging` (either directly or through your own simplified version) and chunking your results. By doing this you 1. ensure that the pieces reach the client 2. can discard the chunk as soon as you get a receipt from the client 3. can reduce the possible issues with memory consumption from having to retain 5MB of XML, DOM, or whatever in memory (assuming that you aren't processing the results in a streaming manner) on the server and client sides. Like others have said though, don't do anything until you know your result set size, how it is generated, and overall performance to be actual issues.
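The chunking idea behind the "massive push" mitigation is language-neutral: split the large result set into fixed-size pieces and let the server discard each piece once the client acknowledges it. A minimal sketch (not tied to any particular web-service stack):

```python
def chunk_results(results, chunk_size):
    """Yield the result set in fixed-size chunks so the server can
    discard each chunk once the client acknowledges receipt."""
    for start in range(0, len(results), chunk_size):
        yield results[start:start + chunk_size]

rows = list(range(10))          # stand-in for a large result set
chunks = list(chunk_results(rows, 4))
# chunks == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

The client reassembles the chunks in order; a production version would add sequence numbers and receipts, as WS-ReliableMessaging does.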
There's no hard law against 5 MB as a result set size. Over 400 MB can be [hard to send](http://msdn.microsoft.com/en-us/library/aa528822.aspx).

You'll automatically get async handlers (since you're using .NET)

> implement some sort of "paging" where
> the resultset is generated and stored
> on the server and the client can then
> download chunks of the resultset in
> smaller amounts and re-assemble the
> set at their end

That's already happening for you -- it's called tcp/ip ;-)

Re-implementing that could be overkill.

Similarly --

> entire resultset will have to be
> regenerated and sent again

If it's MS-SQL, for example, that is generating most of the resultset -- then re-generating it will take advantage of some implicit caching in SQL Server and the subsequent generations will be quicker.

To some extent you can get away with not worrying about these problems, until they surface as 'real' problems -- because the platform(s) you're using take care of a lot of the performance bottlenecks for you.
Returning Large Results Via a Webservice
[ "", "c#", ".net", "web-services", "" ]
I've been raised to believe that if multiple threads can access a variable, then all reads from and writes to that variable must be protected by synchronization code, such as a "lock" statement, because the processor might switch to another thread halfway through a write. However, I was looking through System.Web.Security.Membership using Reflector and found code like this: ``` public static class Membership { private static bool s_Initialized = false; private static object s_lock = new object(); private static MembershipProvider s_Provider; public static MembershipProvider Provider { get { Initialize(); return s_Provider; } } private static void Initialize() { if (s_Initialized) return; lock(s_lock) { if (s_Initialized) return; // Perform initialization... s_Initialized = true; } } } ``` Why is the s\_Initialized field read outside of the lock? Couldn't another thread be trying to write to it at the same time? **Are reads and writes of variables atomic?**
For the definitive answer go to the spec. :) Partition I, Section 12.6.6 of the CLI spec states: "A conforming CLI shall guarantee that read and write access to properly aligned memory locations no larger than the native word size is atomic when all the write accesses to a location are the same size." So that confirms that s\_Initialized will never be unstable, and that read and writes to primitve types smaller than 32 bits are atomic. In particular, `double` and `long` (`Int64` and `UInt64`) are **not** guaranteed to be atomic on a 32-bit platform. You can use the methods on the `Interlocked` class to protect these. Additionally, while reads and writes are atomic, there is a race condition with addition, subtraction, and incrementing and decrementing primitive types, since they must be read, operated on, and rewritten. The interlocked class allows you to protect these using the `CompareExchange` and `Increment` methods. Interlocking creates a memory barrier to prevent the processor from reordering reads and writes. The lock creates the only required barrier in this example.
This is a (bad) form of the double-checked locking pattern which is **not** thread safe in C#! There is one big problem in this code: s\_Initialized is not volatile. That means that writes in the initialization code can move after s\_Initialized is set to true and other threads can see uninitialized code even if s\_Initialized is true for them. This doesn't apply to Microsoft's implementation of the Framework because every write is a volatile write. But also in Microsoft's implementation, reads of the uninitialized data can be reordered (i.e. prefetched by the CPU), so if s\_Initialized is true, reading the data that should be initialized can result in reading old, uninitialized data because of cache hits (i.e. the reads are reordered). For example: ``` Thread 1 reads s_Provider (which is null) Thread 2 initializes the data Thread 2 sets s_Initialized to true Thread 1 reads s_Initialized (which is true now) Thread 1 uses the previously read Provider and gets a NullReferenceException ``` Moving the read of s\_Provider before the read of s\_Initialized is perfectly legal because there is no volatile read anywhere. If s\_Initialized were volatile, the read of s\_Provider would not be allowed to move before the read of s\_Initialized, and also the initialization of the Provider would not be allowed to move after s\_Initialized is set to true, and everything would be OK now. Joe Duffy also wrote an article about this problem: [Broken variants on double-checked locking](http://joeduffyblog.com/2006/01/26/broken-variants-on-doublechecked-locking/)
Is accessing a variable in C# an atomic operation?
[ "", "c#", "multithreading", "" ]
I have been working with a `string[]` array in C# that gets returned from a function call. I could possibly cast to a `Generic` collection, but I was wondering if there was a better way to do it, possibly by using a temp array. **What is the best way to remove duplicates from a C# array?**
You could possibly use a LINQ query to do this: ``` int[] s = { 1, 2, 3, 3, 4}; int[] q = s.Distinct().ToArray(); ```
Here is the [HashSet<string>](http://msdn.microsoft.com/en-us/library/bb359438.aspx) approach: ``` public static string[] RemoveDuplicates(string[] s) { HashSet<string> set = new HashSet<string>(s); string[] result = new string[set.Count]; set.CopyTo(result); return result; } ``` Unfortunately this solution also requires .NET framework 3.5 or later as HashSet was not added until that version. You could also use [array.Distinct()](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.distinct.aspx), which is a feature of LINQ.
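The set-based approach above generalizes to any language; here is a minimal order-preserving sketch in Python (illustrative only — a cross-language analogue, not the C# API):

```python
def dedupe(items):
    """Remove duplicates while preserving first-seen order."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:        # O(1) membership test, like HashSet
            seen.add(item)
            result.append(item)
    return result

# dedupe(["a", "b", "a", "c", "b"]) == ["a", "b", "c"]
```

Tracking membership in a hash set keeps the whole pass O(n), versus O(n²) for repeated linear scans of the output list.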
How do I remove duplicates from a C# array?
[ "", "c#", "arrays", "duplicates", "" ]
In C#, (and feel free to answer for other languages), what order does the runtime evaluate a logic statement? Example: ``` DataTable myDt = new DataTable(); if (myDt != null && myDt.Rows.Count > 0) { //do some stuff with myDt } ``` Which statement does the runtime evaluate first - ``` myDt != null ``` or: ``` myDt.Rows.Count > 0 ``` ? Is there a time when the compiler would ever evaluate the statement backwards? Perhaps when an "OR" operator is involved? --- > & is known as a logical bitwise operator and will always evaluate all the sub-expressions What is a good example of when to use the bitwise operator instead of the "short-circuited boolean"?
C# : Left to right, and processing stops if a non-match (evaluates to false) is found.
"C# : Left to right, and processing stops if a match (evaluates to true) is found." Zombie sheep is wrong. The question is about the && operator, not the || operator. In the case of && evaluation will stop if a FALSE is found. In the case of || evaluation stops if a TRUE is found.
C# logic order and compiler behavior
[ "", "c#", "language-agnostic", "compiler-construction", "logic", "" ]
I'm having trouble reading a "chunked" response when using a StreamReader to read the stream returned by GetResponseStream() of a HttpWebResponse: ``` // response is an HttpWebResponse StreamReader reader = new StreamReader(response.GetResponseStream()); string output = reader.ReadToEnd(); // throws exception... ``` When the `reader.ReadToEnd()` method is called I'm getting the following System.IO.IOException: **Unable to read data from the transport connection: The connection was closed.** The above code works just fine when server returns a "non-chunked" response. The only way I've been able to get it to work is to use HTTP/1.0 for the initial request (instead of HTTP/1.1, the default) but this seems like a lame work-around. Any ideas? --- @Chuck Your solution works pretty good. It still throws the same IOExeception on the last Read(). But after inspecting the contents of the StringBuilder it looks like all the data has been received. So perhaps I just need to wrap the Read() in a try-catch and swallow the "error".
Haven't tried it this with a "chunked" response but would something like this work? ``` StringBuilder sb = new StringBuilder(); Byte[] buf = new byte[8192]; Stream resStream = response.GetResponseStream(); string tmpString = null; int count = 0; do { count = resStream.Read(buf, 0, buf.Length); if(count != 0) { tmpString = Encoding.ASCII.GetString(buf, 0, count); sb.Append(tmpString); } }while (count > 0); ```
I am working on a similar problem. The .NET HttpWebRequest and HttpWebResponse handle cookies and redirects automatically, but they do not handle chunked content on the response body automatically. This is perhaps because chunked content may contain more than simple data (i.e. chunk names, trailing headers). Simply reading the stream and ignoring the EOF exception will not work as the stream contains more than the desired content. The stream will contain chunks and each chunk begins by declaring its size. If the stream is simply read from beginning to end the final data will contain the chunk meta-data (and in the case where it is gzipped content it will fail the CRC check when decompressing). To solve the problem it is necessary to manually parse the stream, removing the chunk size from each chunk (as well as the CR LF delimiters), detecting the final chunk and keeping only the chunk data. There likely is a library out there somewhere that does this; I have not found it yet. Useful resources: <http://en.wikipedia.org/wiki/Chunked_transfer_encoding> <https://www.rfc-editor.org/rfc/rfc2616#section-3.6.1>
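To make the chunk format concrete, here is a minimal parser for a chunked body as described in RFC 2616 §3.6.1 — a sketch only, assuming the whole body is in memory and ignoring chunk extensions and trailing headers:

```python
def parse_chunked(body: bytes) -> bytes:
    """Strip chunked transfer-encoding framing, returning the payload.

    Each chunk is '<hex size>\\r\\n<data>\\r\\n'; a zero-size chunk
    ends the stream.
    """
    payload = bytearray()
    pos = 0
    while True:
        # The chunk begins by declaring its size in hex.
        eol = body.index(b"\r\n", pos)
        size = int(body[pos:eol], 16)
        if size == 0:               # final chunk
            break
        start = eol + 2
        payload += body[start:start + size]
        pos = start + size + 2      # skip data and its trailing CRLF
    return bytes(payload)

raw = b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"
# parse_chunked(raw) == b"Wikipedia"
```

A streaming version would read size lines incrementally instead of indexing into a complete buffer, but the framing being removed is the same.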
Reading "chunked" response with HttpWebResponse
[ "", "c#", "http", ".net-3.5", ".net-2.0", "" ]
Can someone show me how to implement a recursive lambda expression to traverse a tree structure in C#.
Ok, I found some free time finally. Here we go: ``` class TreeNode { public string Value { get; set;} public List<TreeNode> Nodes { get; set;} public TreeNode() { Nodes = new List<TreeNode>(); } } Action<TreeNode> traverse = null; traverse = (n) => { Console.WriteLine(n.Value); n.Nodes.ForEach(traverse);}; var root = new TreeNode { Value = "Root" }; root.Nodes.Add(new TreeNode { Value = "ChildA"} ); root.Nodes[0].Nodes.Add(new TreeNode { Value = "ChildA1" }); root.Nodes[0].Nodes.Add(new TreeNode { Value = "ChildA2" }); root.Nodes.Add(new TreeNode { Value = "ChildB"} ); root.Nodes[1].Nodes.Add(new TreeNode { Value = "ChildB1" }); root.Nodes[1].Nodes.Add(new TreeNode { Value = "ChildB2" }); traverse(root); ```
A proper solution, and indeed the idiomatic solution in many functional programming languages, would be the use of a [fixed-point combinator](https://en.wikipedia.org/wiki/Fixed-point_combinator). In a nutshell: a fixed-point combinator answers the question “how do I define an anonymous function to be recursive?”. But the solution is so nontrivial that whole articles are written to explain them. A simple, pragmatic alternative is to “go back in time” to the antics of C: declaration before definition. Try the following (the “factorial” function): ``` Func<int, int> fact = null; fact = x => (x == 0) ? 1 : x * fact(x - 1); ``` Works like a charm. Or, for a pre-order tree traversal on an object of class `TreeNode` which implements `IEnumerable<TreeNode>` appropriately to go over its children: ``` Action<TreeNode, Action<TreeNode>> preorderTraverse = null; preorderTraverse = (node, action) => { action(node); foreach (var child in node) preorderTraverse(child, action); }; ```
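Both techniques in this answer — declare-before-define and a fixed-point combinator — carry over to other languages. A sketch in Python:

```python
# Trick 1: bind the name first, then close over it (the C-style
# "declaration before definition" approach described above).
fact = None
fact = lambda x: 1 if x == 0 else x * fact(x - 1)

# Trick 2: a simple fixed-point combinator -- the lambda receives
# itself as an extra argument, so no pre-declaration is needed.
def fix(f):
    return lambda *args: f(fix(f), *args)

fact2 = fix(lambda self, x: 1 if x == 0 else x * self(x - 1))
# fact(5) == fact2(5) == 120
```

The first trick relies on late binding of the captured name; the second makes the recursion explicit by threading the function to itself.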
Recursive lambda expression to traverse a tree in C#
[ "", "c#", "recursion", "lambda", "" ]
Does anyone know of a good method for editing PDFs in PHP? Preferably open-source/zero-license cost methods. :) I am thinking along the lines of opening a PDF file, replacing text in the PDF and then writing out the modified version of the PDF? On the front-end
If you are taking a 'fill in the blank' approach, you can precisely position text anywhere you want on the page. So it's relatively easy (if not a bit tedious) to add the missing text to the document. For example with Zend Framework: ``` <?php require_once 'Zend/Pdf.php'; $pdf = Zend_Pdf::load('blank.pdf'); $page = $pdf->pages[0]; $font = Zend_Pdf_Font::fontWithName(Zend_Pdf_Font::FONT_HELVETICA); $page->setFont($font, 12); $page->drawText('Hello world!', 72, 720); $pdf->save('zend.pdf'); ``` If you're trying to replace inline content, such as a "[placeholder string]," it gets much more complicated. While it's technically possible to do, you're likely to mess up the layout of the page. A PDF document is comprised of a set of primitive drawing operations: line here, image here, text chunk there, etc. It does not contain any information about the layout intent of those primitives.
There is a free and easy-to-use PDF class to create PDF documents. It's called [FPDF](http://www.fpdf.org/). In combination with FPDI (<http://www.setasign.de/products/pdf-php-solutions/fpdi>) it is even possible to edit PDF documents. The following code shows how to use FPDF and FPDI to fill an existing gift coupon with the user data. ``` require_once('fpdf.php'); require_once('fpdi.php'); $pdf = new FPDI(); $pdf->AddPage(); $pdf->setSourceFile('gift_coupon.pdf'); // import page 1 $tplIdx = $pdf->importPage(1); // use the imported page and place it at point 0,0; calculate width and height // automatically and adjust the page size to the size of the imported page $pdf->useTemplate($tplIdx, 0, 0, 0, 0, true); // now write some text above the imported page $pdf->SetFont('Arial', '', '13'); $pdf->SetTextColor(0,0,0); // set position in pdf document $pdf->SetXY(20, 20); // first parameter defines the line height $pdf->Write(0, 'gift code'); // force the browser to download the output $pdf->Output('gift_coupon_generated.pdf', 'D'); ```
Edit PDF in PHP?
[ "", "php", "pdf", "" ]
[This answer says](https://stackoverflow.com/questions/71955/when-choosing-an-orm-is-linq-to-sql-or-linq-to-entities-better-than-nhibernate#71974) that Linq is targeted at a slightly different group of developers than NHibernate, Castle, etc. Being rather new to C#, nevermind all the DB stuff surrounding it: * Are there other major, for lack of a better term, SQL wrappers than NHibernate, Castle, Linq? * What are the differences between them? * What kind of developers or development are they aimed at? -Adam
When you say Castle, I assume you mean Castle ActiveRecord? The difference is NHibernate is an OR/M and is aimed at developers who want to focus on the domain rather than the database. With LINQ to SQL, your database is pre-existing, and your relationships and some of your programming will be driven by how your database is defined. Now between NHibernate and Castle ActiveRecord -- they are similar in that you're driving your application design from the domain, but with NHibernate you provide mapping XML files (or mapping classes with Fluent NHibernate), whereas in ActiveRecord you are using convention over configuration (using attributes to define any columns and settings that don't fit naturally). Castle ActiveRecord is still using NHibernate in the background. One OR/M is not necessarily the 'one true way' to go. It depends on your environment, the application you're developing, and your team. You may also want to check out [SubSonic](http://www.subsonicproject.com). It's great for active record, but it is not for projects where you want to focus mainly on your domain. Depending on the project, I usually use either NHibernate (with Castle ActiveRecord) or SubSonic
LINQ is just a set of new C# features: extension methods, lambda expressions, object initializers, anonymous types, etc. "LINQ to SQL" on the other hand is something you can compare other SQL wrappers.
Differences between NHibernate, Castle, Linq - Who are they aimed at?
[ "", "c#", "database", "linq", "nhibernate", "" ]
Does Python have a unit testing framework compatible with the standard xUnit style of test framework? If so, what is it, where is it, and is it any good?
Python has several testing frameworks, including `unittest`, `doctest`, and `nose`. The most xUnit-like is `unittest`, which is documented on Python.org. * [`unittest` documentation](https://docs.python.org/3/library/unittest.html) * [`doctest` documentation](https://docs.python.org/3/library/doctest.html)
I recommend [nose](https://nose.readthedocs.org/en/latest/). It is the most Pythonic of the unit test frameworks. The test runner runs both doctests and unittests, so you are free to use whatever style of test you like.
Unit tests in Python
[ "", "python", "unit-testing", "" ]
I'm parsing text from a file and storing it in a string. The problem is that some of the text in the original files contains `ASCII art` and whatnot that I would like to preserve. When I print out the string on the `HTML page`, even if it does have the same formatting and everything since it is in `HTML`, the spacing and line breaks are not preserved. What is the best way to print out the text in `HTML` exactly as it was in the original text file? I would like to give an example, but unfortunately, I was not able to get it to display correctly in this markdown editor :P Basically, I would like suggestions on how to display `ASCII art in HTML`.
use the <pre> tag (pre formatted), that will use a mono spaced font (for your art) and keep all the white space ``` <pre> text goes here and here and here and here Some out here ▄ ▄█▄ █▄ ▄ ▄█▀█▓ ▄▓▀▀█▀ ▀▀▀█▓▀▀ ▀▀ ▄█▀█▓▀▀▀▀▀▓▄▀██▀▀ ██ ██ ▀██▄▄ ▄█ ▀ ░▒ ░▒ ██ ██ ▄█▄ █▀ ██ █▓▄▀██ ▄ ▀█▌▓█ ▒▓ ▒▓ █▓▄▀██ ▓█ ▀▄ █▓ █▒ █▓ ██▄▓▀ ▀█▄▄█▄▓█ ▓█ █▒ █▓ ▒█ ▓█▄ ▒ ▀▒ ▀ ▀ █▀ ▀▒ ▀ █▀ ░ </pre> ``` You might have to convert any <'s to &lt; 's
The `<pre>` and `</pre>` tags might not be ideal in a textarea, etc. When you want to preserve new lines (`\n` and `\r\n`), use [nl2br](http://php.net/manual/en/function.nl2br.php) as mentioned by UnkwnTech and Brad Mace. When you want to preserve spaces, use [str\_replace](http://www.php.net/str_replace): `str_replace(' ', '&nbsp;', $stringVariable);` When you want both, use this: ``` $result = str_replace(' ', '&nbsp;', $stringVariable); $result = nl2br($result); ```
How do I keep whitespace formatting using PHP/HTML?
[ "", "php", "html", "ascii", "" ]
By default tomcat will create a session cookie for the current domain. If you are on www.example.com, your cookie will be created for www.example.com (will only work on www.example.com). Whereas for example.com it will be created for .example.com (desired behaviour, will work on any subdomain of example.com as well as example.com itself). I've seen a few Tomcat valves which seem to intercept the creation of session cookies and create a replacement cookie with the correct .example.com domain, however none of them seem to work flawlessly and they all appear to leave the existing cookie and just create a new one. This means that two JSESSIONID cookies are being sent with each request. I was wondering if anybody has a definitive solution to this problem.
This is apparently supported via a configuration setting in 6.0.27 and onwards: > Configuration is done by editing > META-INF/context.xml > > <Context > sessionCookiePath="/something" > sessionCookieDomain=".domain.tld" /> <https://issues.apache.org/bugzilla/show_bug.cgi?id=48379>
I have just gone through all of this looking for a simple solution. I started looking at it from the Tomcat perspective first. Tomcat does not give direct access to configuring the domain cookie for the session, and I definitely did not want to custom patch Tomcat to fix that problem as shown in some other posts. Valves in Tomcat also seem to be a problematic solution due to the limitations on accessing headers & cookies built into the Servlet specification. They also fail completely if the HTTP response is committed before it gets passed to your valve. Since we proxy our requests through Apache, I then moved on to how to use Apache to fix the problem instead. I first tried the mod\_proxy directive ProxyPassReverseCookieDomain, but it does not work for JSESSIONID cookies because Tomcat does not set the domain attribute and ProxyPassReverseCookieDomain cannot work without some sort of domain being part of the cookie. I also came across a hack using ProxyPassReverseCookiePath where they were rewriting the path to add a domain attribute to the cookie, but that felt way too messy for a production site. I finally got it to work by rewriting the response headers using the mod\_headers module in Apache as mentioned by Dave above. I have added the following line inside the virtual host definition: ``` Header edit Set-Cookie "(JSESSIONID\s?=[^;,]+?)((?:;\s?(?:(?i)Comment|Max-Age|Path|Version|Secure)[^;,]*?)*)(;\s?(?:(?i)Domain\s?=)[^;,]+?)?((?:;\s?(?:(?i)Comment|Max-Age|Path|Version|Secure)[^;,]*?)*)(,|$)" "$1$2; Domain=.example.com$4$5" ``` The above should all be a single line in the config. It will replace any JSESSIONID cookie's domain attribute with ".example.com". If a JSESSIONID cookie does not contain a domain attribute, then the pattern will add one with a value of ".example.com". As a bonus, this solution does not suffer from the double JSESSIONID cookies problem of the valves.
The pattern should work with multiple cookies in the Set-Cookie header without affecting the other cookies in the header. It should also be modifiable to work with other cookies by changing JSESSIONID in the first part of the pattern to whatever cookie name you desire. I am not a regex power user, so I am sure there are a couple of optimisations that could be made to the pattern, but it seems to be working for us so far. I will update this post if I find any bugs with the pattern. Hopefully this will stop a few of you from having to go through the last couple of days' worth of frustration as I did.
Best way for allowing subdomain session cookies using Tomcat
[ "", "java", "tomcat", "session", "cookies", "subdomain", "" ]
Is there any list of blog engines, written in Django?
EDIT: Original link went dead so here's an updated link with extracts of the list sorted with the most recently updated source at the top. [Eleven Django blog engines you should know](http://blog.montylounge.com/2010/02/10/eleven-django-blog-engines-you-should-know/) by Monty Lounge Industries > * [Biblion](http://github.com/eldarion/biblion) > * [Django-article](http://bitbucket.org/codekoala/django-articles/) > * [Flother](http://github.com/flother/flother) > * [Basic-Blog](http://github.com/nathanborror/django-basic-apps) > * [Hello-Newman](http://github.com/gregnewman/hello-newman) > * [Banjo](http://getbanjo.com/download/) > * [djangotechblog](http://code.google.com/p/djangotechblog/) > * [Django-YABA](http://github.com/f4nt/django-yaba/) > * [Shifting Bits](http://github.com/paltman/shiftingbits/) (this is now just a [biblion](http://github.com/eldarion/biblion) blog) > * [Mighty Lemon](http://github.com/mightylemon/mightylemon) > * [Coltrane](http://code.google.com/p/coltrane-blog/)
James Bennett has an [interesting take](http://www.b-list.org/weblog/2007/nov/29/django-blog) on this question: > “where can I find a good Django-powered blogging application” is probably at the top of the frequently-asked questions list both on django-users and in the IRC; part of this is simply that, right now, there is no “definitive” Django blogging application; there are a bunch of them available if you go looking, but you’re not likely to get anyone to recommend one of them as “the” Django blogging app (unless the person doing the recommending happens to be the author of one of them). The [blog entry](http://www.b-list.org/weblog/2007/nov/29/django-blog) also has a list.
Is there any list of blog engines, written in Django?
[ "", "python", "django", "" ]
I have an application that reads a CSV file with piles of data rows. I give the user a summary of the number of rows based on types of data, but I want to make sure that I don't read in too many rows of data and cause `OutOfMemoryError`s. Each row translates into an object. Is there a way to find out the size of that object programmatically? Is there a reference that defines how large primitive types and object references are for a `VM`? Right now, I have code that says read up to **32,000 rows**, but I'd also like to have code that says read as many rows as possible until I've used **32MB** of memory.
You can use the [`java.lang.instrument` package](http://docs.oracle.com/javase/8/docs/api/java/lang/instrument/Instrumentation.html). Compile and put this class in a JAR: ``` import java.lang.instrument.Instrumentation; public class ObjectSizeFetcher { private static Instrumentation instrumentation; public static void premain(String args, Instrumentation inst) { instrumentation = inst; } public static long getObjectSize(Object o) { return instrumentation.getObjectSize(o); } } ``` Add the following to your `MANIFEST.MF`: ``` Premain-Class: ObjectSizeFetcher ``` Use the `getObjectSize()` method: ``` public class C { private int x; private int y; public static void main(String [] args) { System.out.println(ObjectSizeFetcher.getObjectSize(new C())); } } ``` Invoke with: ``` java -javaagent:ObjectSizeFetcherAgent.jar C ```
You should use [jol](http://openjdk.java.net/projects/code-tools/jol/), a tool developed as part of the OpenJDK project. > JOL (Java Object Layout) is the tiny toolbox to analyze object layout schemes in JVMs. These tools are using Unsafe, JVMTI, and Serviceability Agent (SA) heavily to decode the actual object layout, footprint, and references. This makes JOL much more accurate than other tools relying on heap dumps, specification assumptions, etc. To get the sizes of primitives, references and array elements, use `VMSupport.vmDetails()`. On Oracle JDK 1.8.0\_40 running on 64-bit Windows (used for all following examples), this method returns ``` Running 64-bit HotSpot VM. Using compressed oop with 0-bit shift. Using compressed klass with 3-bit shift. Objects are 8 bytes aligned. Field sizes by type: 4, 1, 1, 2, 2, 4, 4, 8, 8 [bytes] Array element sizes: 4, 1, 1, 2, 2, 4, 4, 8, 8 [bytes] ``` You can get the shallow size of an object instance using `ClassLayout.parseClass(Foo.class).toPrintable()` (optionally passing an instance to `toPrintable`). This is only the space consumed by a single instance of that class; it does not include any other objects referenced by that class. It *does* include VM overhead for the object header, field alignment and padding.
For `java.util.regex.Pattern`: ``` java.util.regex.Pattern object internals: OFFSET SIZE TYPE DESCRIPTION VALUE 0 4 (object header) 01 00 00 00 (0000 0001 0000 0000 0000 0000 0000 0000) 4 4 (object header) 00 00 00 00 (0000 0000 0000 0000 0000 0000 0000 0000) 8 4 (object header) cb cf 00 20 (1100 1011 1100 1111 0000 0000 0010 0000) 12 4 int Pattern.flags 0 16 4 int Pattern.capturingGroupCount 1 20 4 int Pattern.localCount 0 24 4 int Pattern.cursor 48 28 4 int Pattern.patternLength 0 32 1 boolean Pattern.compiled true 33 1 boolean Pattern.hasSupplementary false 34 2 (alignment/padding gap) N/A 36 4 String Pattern.pattern (object) 40 4 String Pattern.normalizedPattern (object) 44 4 Node Pattern.root (object) 48 4 Node Pattern.matchRoot (object) 52 4 int[] Pattern.buffer null 56 4 Map Pattern.namedGroups null 60 4 GroupHead[] Pattern.groupNodes null 64 4 int[] Pattern.temp null 68 4 (loss due to the next object alignment) Instance size: 72 bytes (reported by Instrumentation API) Space losses: 2 bytes internal + 4 bytes external = 6 bytes total ``` You can get a summary view of the deep size of an object instance using `GraphLayout.parseInstance(obj).toFootprint()`. Of course, some objects in the footprint might be shared (also referenced from other objects), so it is an overapproximation of the space that could be reclaimed when that object is garbage collected. For the result of `Pattern.compile("^[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\\.[a-zA-Z0-9-.]+$")` (taken from [this answer](https://stackoverflow.com/a/719543/3614835)), jol reports a total footprint of 1840 bytes, of which only 72 are the Pattern instance itself. 
``` java.util.regex.Pattern instance footprint: COUNT AVG SUM DESCRIPTION 1 112 112 [C 3 272 816 [Z 1 24 24 java.lang.String 1 72 72 java.util.regex.Pattern 9 24 216 java.util.regex.Pattern$1 13 24 312 java.util.regex.Pattern$5 1 16 16 java.util.regex.Pattern$Begin 3 24 72 java.util.regex.Pattern$BitClass 3 32 96 java.util.regex.Pattern$Curly 1 24 24 java.util.regex.Pattern$Dollar 1 16 16 java.util.regex.Pattern$LastNode 1 16 16 java.util.regex.Pattern$Node 2 24 48 java.util.regex.Pattern$Single 40 1840 (total) ``` If you instead use `GraphLayout.parseInstance(obj).toPrintable()`, jol will tell you the address, size, type, value and path of field dereferences to each referenced object, though that's usually too much detail to be useful. For the ongoing pattern example, you might get the following. (Addresses will likely change between runs.) ``` java.util.regex.Pattern object externals: ADDRESS SIZE TYPE PATH VALUE d5e5f290 16 java.util.regex.Pattern$Node .root.next.atom.next (object) d5e5f2a0 120 (something else) (somewhere else) (something else) d5e5f318 16 java.util.regex.Pattern$LastNode .root.next.next.next.next.next.next.next (object) d5e5f328 21664 (something else) (somewhere else) (something else) d5e647c8 24 java.lang.String .pattern (object) d5e647e0 112 [C .pattern.value [^, [, a, -, z, A, -, Z, 0, -, 9, _, ., +, -, ], +, @, [, a, -, z, A, -, Z, 0, -, 9, -, ], +, \, ., [, a, -, z, A, -, Z, 0, -, 9, -, ., ], +, $] d5e64850 448 (something else) (somewhere else) (something else) d5e64a10 72 java.util.regex.Pattern (object) d5e64a58 416 (something else) (somewhere else) (something else) d5e64bf8 16 java.util.regex.Pattern$Begin .root (object) d5e64c08 24 java.util.regex.Pattern$BitClass .root.next.atom.val$rhs (object) d5e64c20 272 [Z .root.next.atom.val$rhs.bits [false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, 
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, true, false, true, true, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, true, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false] d5e64d30 24 java.util.regex.Pattern$1 .root.next.atom.val$lhs.val$lhs.val$lhs.val$lhs.val$lhs.val$lhs (object) d5e64d48 24 java.util.regex.Pattern$1 .root.next.atom.val$lhs.val$lhs.val$lhs.val$lhs.val$lhs.val$rhs (object) d5e64d60 24 java.util.regex.Pattern$5 .root.next.atom.val$lhs.val$lhs.val$lhs.val$lhs.val$lhs (object) d5e64d78 24 java.util.regex.Pattern$1 
.root.next.atom.val$lhs.val$lhs.val$lhs.val$lhs.val$rhs (object) d5e64d90 24 java.util.regex.Pattern$5 .root.next.atom.val$lhs.val$lhs.val$lhs.val$lhs (object) d5e64da8 24 java.util.regex.Pattern$5 .root.next.atom.val$lhs.val$lhs.val$lhs (object) d5e64dc0 24 java.util.regex.Pattern$5 .root.next.atom.val$lhs.val$lhs (object) d5e64dd8 24 java.util.regex.Pattern$5 .root.next.atom.val$lhs (object) d5e64df0 24 java.util.regex.Pattern$5 .root.next.atom (object) d5e64e08 32 java.util.regex.Pattern$Curly .root.next (object) d5e64e28 24 java.util.regex.Pattern$Single .root.next.next (object) d5e64e40 24 java.util.regex.Pattern$BitClass .root.next.next.next.atom.val$rhs (object) d5e64e58 272 [Z .root.next.next.next.atom.val$rhs.bits [false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, true, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, 
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false] d5e64f68 24 java.util.regex.Pattern$1 .root.next.next.next.atom.val$lhs.val$lhs.val$lhs (object) d5e64f80 24 java.util.regex.Pattern$1 .root.next.next.next.atom.val$lhs.val$lhs.val$rhs (object) d5e64f98 24 java.util.regex.Pattern$5 .root.next.next.next.atom.val$lhs.val$lhs (object) d5e64fb0 24 java.util.regex.Pattern$1 .root.next.next.next.atom.val$lhs.val$rhs (object) d5e64fc8 24 java.util.regex.Pattern$5 .root.next.next.next.atom.val$lhs (object) d5e64fe0 24 java.util.regex.Pattern$5 .root.next.next.next.atom (object) d5e64ff8 32 java.util.regex.Pattern$Curly .root.next.next.next (object) d5e65018 24 java.util.regex.Pattern$Single .root.next.next.next.next (object) d5e65030 24 java.util.regex.Pattern$BitClass .root.next.next.next.next.next.atom.val$rhs (object) d5e65048 272 [Z .root.next.next.next.next.next.atom.val$rhs.bits [false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, true, true, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, 
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false] d5e65158 24 java.util.regex.Pattern$1 .root.next.next.next.next.next.atom.val$lhs.val$lhs.val$lhs.val$lhs (object) d5e65170 24 java.util.regex.Pattern$1 .root.next.next.next.next.next.atom.val$lhs.val$lhs.val$lhs.val$rhs (object) d5e65188 24 java.util.regex.Pattern$5 .root.next.next.next.next.next.atom.val$lhs.val$lhs.val$lhs (object) d5e651a0 24 java.util.regex.Pattern$1 .root.next.next.next.next.next.atom.val$lhs.val$lhs.val$rhs (object) d5e651b8 24 java.util.regex.Pattern$5 .root.next.next.next.next.next.atom.val$lhs.val$lhs (object) d5e651d0 24 java.util.regex.Pattern$5 .root.next.next.next.next.next.atom.val$lhs (object) d5e651e8 24 java.util.regex.Pattern$5 .root.next.next.next.next.next.atom (object) d5e65200 32 java.util.regex.Pattern$Curly .root.next.next.next.next.next (object) d5e65220 120 (something else) (somewhere 
else) (something else) d5e65298 24 java.util.regex.Pattern$Dollar .root.next.next.next.next.next.next (object) ``` The "(something else)" entries [describe other objects in the heap that are not part of this object graph](https://stackoverflow.com/questions/30021092/what-is-something-else-in-jol-graphlayout-output). The best jol documentation is the [jol samples](http://hg.openjdk.java.net/code-tools/jol/file/tip/jol-samples/src/main/java/org/openjdk/jol/samples/) in the jol repository. The samples demonstrate common jol operations and show how you can use jol to analyze VM and garbage collector internals.
How to determine the size of an object in Java
[ "", "java", "memory", "" ]
I've been wondering, what is the point of clog? As near as I can tell, clog is the same as cerr but with buffering so it is more efficient. Usually stderr is the same as stdout, so clog is the same as cout. This seems pretty lame to me, so I figure I must be misunderstanding it. If I have log messages going out to the same place I have error messages going out to (perhaps something in /var/log/messages), then I probably am not writing too much out (so there isn't much lost by using non-buffered cerr). In my experience, I want my log messages up to date (not buffered) so I can help find a crash (so I don't want to be using the buffered clog). Apparently I should always be using cerr. I'd like to be able to redirect clog inside my program. It would be useful to redirect cerr so that when I call a library routine I can control where cerr and clog go to. Can some compilers support this? I just checked DJGPP and stdout is defined as the address of a FILE struct, so it is illegal to do something like "stdout = freopen(...)". * Is it possible to redirect clog, cerr, cout, stdin, stdout, and/or stderr? * Is the only difference between clog and cerr the buffering? * How should I implement (or find) a more robust logging facility (links please)?
> Is it possible to redirect clog, cerr, cout, stdin, stdout, and/or stderr? Yes. You want the [`rdbuf`](http://www.cplusplus.com/reference/iostream/ios/rdbuf.html) function. ``` ofstream ofs("logfile"); cout.rdbuf(ofs.rdbuf()); cout << "Goes to file." << endl; ``` > Is the only difference between clog and cerr the buffering? As far as I know, yes.
If you're in a posix shell environment (I'm really thinking of bash), you can redirect any file descriptor to any other file descriptor, so to redirect, you can just: ``` $ myprogram 2>&5 ``` to redirect stderr to the file represented by fd=5. Edit: on second thought, I like @Konrad Rudolph's answer about redirection better. rdbuf() is a more coherent and portable way to do it. As for logging, well...I start with the Boost library for all things C++ that isn't in the std library. Behold: [Boost Logging v2](http://www.torjo.com/log2/index.html) **Edit**: Boost Logging is *not* part of the Boost Libraries; it has been reviewed, but not accepted. **Edit**: 2 years later, back in May 2010, Boost did accept a logging library, now called [Boost.Log](http://boost-log.sourceforge.net/libs/log/doc/html/index.html). Of course, there are alternatives: * [Log4Cpp](http://log4cpp.sourceforge.net/) (a log4j-style API for C++) * [Log4Cxx](http://logging.apache.org/log4cxx/index.html) (Apache-sponsored log4j-style API) * [Pantheios](http://pantheios.sourceforge.net/) (defunct? last time I tried I couldn't get it to build on a recent compiler) * [Google's GLog](https://github.com/google/glog) (hat-tip @SuperElectric) There's also the Windows Event logger. And a couple of articles that may be of use: * [Logging in C++ (Dr. Dobbs)](http://www.ddj.com/cpp/201804215) * [Logging and Tracing Simplified (Sun)](http://developers.sun.com/solaris/articles/logging.html)
What is the point of clog?
[ "", "c++", "logging", "log4cpp", "" ]
My motto for Java is "just because Java has static blocks, it doesn't mean that you should be using them." Jokes aside, there are a lot of tricks in Java that make testing a nightmare. Two of the most I hate are Anonymous Classes and Static Blocks. We have a lot of legacy code that make use of Static Blocks and these are one of the annoying points in our push in writing unit tests. Our goal is to be able to write unit tests for classes that depend on this static initialization with minimal code changes. So far my suggestion to my colleagues is to move the body of the static block into a private static method and call it `staticInit`. This method can then be called from within the static block. For unit testing another class that depends on this class could easily mock `staticInit` with [JMockit](https://jmockit.github.io/) to not do anything. Let's see this in example. ``` public class ClassWithStaticInit { static { System.out.println("static initializer."); } } ``` Will be changed to ``` public class ClassWithStaticInit { static { staticInit(); } private static void staticInit() { System.out.println("static initialized."); } } ``` So that we can do the following in a [JUnit](https://junit.org/junit5/). ``` public class DependentClassTest { public static class MockClassWithStaticInit { public static void staticInit() { } } @BeforeClass public static void setUpBeforeClass() { Mockit.redefineMethods(ClassWithStaticInit.class, MockClassWithStaticInit.class); } } ``` However this solution also comes with its own problems. You can't run `DependentClassTest` and `ClassWithStaticInitTest` on the same JVM since you actually want the static block to run for `ClassWithStaticInitTest`. What would be your way of accomplishing this task? Or any better, non-JMockit based solutions that you think would work cleaner?
When I run into this problem, I usually do the same thing you describe, except I make the static method protected so I can invoke it manually. On top of this, I make sure that the method can be invoked multiple times without problems (otherwise it is no better than the static initializer as far as the tests go). This works reasonably well, and I can actually test that the static initializer method does what I expect/want it to do. Sometimes it is just easiest to have some static initialization code, and it just isn't worth it to build an overly complex system to replace it. When I use this mechanism, I make sure to document that the protected method is only exposed for testing purposes, with the hopes that it won't be used by other developers. This of course may not be a viable solution, for example if the class' interface is externally visible (either as a sub-component of some kind for other teams, or as a public framework). It is a simple solution to the problem though, and doesn't require a third party library to set up (which I like).
[PowerMock](http://powermock.org) is another mock framework that extends EasyMock and Mockito. With PowerMock you can easily [remove unwanted behavior](http://code.google.com/p/powermock/wiki/SuppressUnwantedBehavior) from a class, for example a static initializer. In your example you simply add the following annotations to your JUnit test case: ``` @RunWith(PowerMockRunner.class) @SuppressStaticInitializationFor("some.package.ClassWithStaticInit") ``` PowerMock does not use a Java agent and therefore does not require modification of the JVM startup parameters. You simple add the jar file and the above annotations.
Mocking Static Blocks in Java
[ "", "java", "unit-testing", "mocking", "jmockit", "static-block", "" ]
I'm trying to run powershell commands through a web interface (ASP.NET/C#) in order to create mailboxes/etc on Exchange 2007. When I run the page using Visual Studio (Cassini), the page loads up correctly. However, when I run it on IIS (v5.1), I get the error "unknown user name or bad password". The biggest problem that I noticed was that Powershell was logged in as ASPNET instead of my Active Directory Account. How do I force my Powershell session to be authenticated with another Active Directory Account? Basically, the script that I have so far looks something like this: ``` RunspaceConfiguration rc = RunspaceConfiguration.Create(); PSSnapInException snapEx = null; rc.AddPSSnapIn("Microsoft.Exchange.Management.PowerShell.Admin", out snapEx); Runspace runspace = RunspaceFactory.CreateRunspace(rc); runspace.Open(); Pipeline pipeline = runspace.CreatePipeline(); using (pipeline) { pipeline.Commands.AddScript("Get-Mailbox -identity 'user.name'"); pipeline.Commands.Add("Out-String"); Collection<PSObject> results = pipeline.Invoke(); if (pipeline.Error != null && pipeline.Error.Count > 0) { foreach (object item in pipeline.Error.ReadToEnd()) resultString += "Error: " + item.ToString() + "\n"; } runspace.Close(); foreach (PSObject obj in results) resultString += obj.ToString(); } return resultString; ```
Exchange 2007 doesn't allow you to impersonate a user for security reasons. This means that it is impossible (at the moment) to create mailboxes by impersonating a user. In order to get around this problem, I created a web service which runs under an AD user that has permissions to create email accounts, etc. You can then access this web service to get access to PowerShell. Please remember to add the necessary security, because this could potentially be a huge security hole.
Here is a class that I use to impersonate a user. ``` using System; using System.Data; using System.Configuration; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; namespace orr.Tools { #region Using directives. using System.Security.Principal; using System.Runtime.InteropServices; using System.ComponentModel; #endregion /// <summary> /// Impersonation of a user. Allows to execute code under another /// user context. /// Please note that the account that instantiates the Impersonator class /// needs to have the 'Act as part of operating system' privilege set. /// </summary> /// <remarks> /// This class is based on the information in the Microsoft knowledge base /// article http://support.microsoft.com/default.aspx?scid=kb;en-us;Q306158 /// /// Encapsulate an instance into a using-directive like e.g.: /// /// ... /// using ( new Impersonator( "myUsername", "myDomainname", "myPassword" ) ) /// { /// ... /// [code that executes under the new context] /// ... /// } /// ... /// /// Please contact the author Uwe Keim (mailto:uwe.keim@zeta-software.de) /// for questions regarding this class. /// </remarks> public class Impersonator : IDisposable { #region Public methods. /// <summary> /// Constructor. Starts the impersonation with the given credentials. /// Please note that the account that instantiates the Impersonator class /// needs to have the 'Act as part of operating system' privilege set. 
/// </summary> /// <param name="userName">The name of the user to act as.</param> /// <param name="domainName">The domain name of the user to act as.</param> /// <param name="password">The password of the user to act as.</param> public Impersonator( string userName, string domainName, string password) { ImpersonateValidUser(userName, domainName, password); } // ------------------------------------------------------------------ #endregion #region IDisposable member. public void Dispose() { UndoImpersonation(); } // ------------------------------------------------------------------ #endregion #region P/Invoke. [DllImport("advapi32.dll", SetLastError = true)] private static extern int LogonUser( string lpszUserName, string lpszDomain, string lpszPassword, int dwLogonType, int dwLogonProvider, ref IntPtr phToken); [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)] private static extern int DuplicateToken( IntPtr hToken, int impersonationLevel, ref IntPtr hNewToken); [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)] private static extern bool RevertToSelf(); [DllImport("kernel32.dll", CharSet = CharSet.Auto)] private static extern bool CloseHandle( IntPtr handle); private const int LOGON32_LOGON_INTERACTIVE = 2; private const int LOGON32_PROVIDER_DEFAULT = 0; // ------------------------------------------------------------------ #endregion #region Private member. // ------------------------------------------------------------------ /// <summary> /// Does the actual impersonation. 
/// </summary> /// <param name="userName">The name of the user to act as.</param> /// <param name="domainName">The domain name of the user to act as.</param> /// <param name="password">The password of the user to act as.</param> private void ImpersonateValidUser( string userName, string domain, string password) { WindowsIdentity tempWindowsIdentity = null; IntPtr token = IntPtr.Zero; IntPtr tokenDuplicate = IntPtr.Zero; try { if (RevertToSelf()) { if (LogonUser( userName, domain, password, LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, ref token) != 0) { if (DuplicateToken(token, 2, ref tokenDuplicate) != 0) { tempWindowsIdentity = new WindowsIdentity(tokenDuplicate); impersonationContext = tempWindowsIdentity.Impersonate(); } else { throw new Win32Exception(Marshal.GetLastWin32Error()); } } else { throw new Win32Exception(Marshal.GetLastWin32Error()); } } else { throw new Win32Exception(Marshal.GetLastWin32Error()); } } finally { if (token != IntPtr.Zero) { CloseHandle(token); } if (tokenDuplicate != IntPtr.Zero) { CloseHandle(tokenDuplicate); } } } /// <summary> /// Reverts the impersonation. /// </summary> private void UndoImpersonation() { if (impersonationContext != null) { impersonationContext.Undo(); } } private WindowsImpersonationContext impersonationContext = null; // ------------------------------------------------------------------ #endregion } } ```
How do you impersonate an Active Directory user in Powershell?
[ "", "c#", "asp.net", "powershell", "active-directory", "" ]
I'm trying to reduce the form spam on our website. (It's actually pretty recent). I seem to remember reading somewhere that the spammers aren't executing the Javascript on the site. Is that true? And if so, then could you simply check for javascript being disabled and then figure it's *likely* that it's spam?
There are still a large number of people that run with Javascript turned off. Alternatively, I have had decent success with stopping form spam using CSS. Basically, include an input field and label that is hidden using CSS (`display: none;`) and once submitted, check if anything has been entered in the field. I generally label the field as a spam filter with an instruction to *not* put anything in the field, but all newer browsers will properly hide the block. * More: [Fighting Spam with CSS](http://www.modernblue.com/web-design-blog/fighting-spam-with-css/) [reCAPTCHA](http://recaptcha.net/) is also surprisingly easy to implement.
Check <http://kahi.cz/wordpress/ravens-antispam-plugin/> for a nice answer. It puts in ``` <noscript><p><label for="websiteurl99f">Please type "e73053": </label><input type="text" name="websiteurl99f" id="websiteurl99f" /></p></noscript> <script type="text/javascript">/* <![CDATA[ */ document.write('<div><input type="hidden" name="websiteurl99f" value="e' + '73053" \/><\/div>'); /* ]]> */</script> ``` so JavaScript users see nothing and non-JS users just type in a word. If a spammer targets you specifically it won't take them long to code round it, but for drive-by spammers it should be good.
Simple & basic form spam reduction: checking for Javascript?
[ "", "javascript", "user-input", "" ]
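The hidden-field technique from the chosen answer reduces to a one-line server-side check. A minimal sketch in Python (the field name and the form-dict shape are invented for illustration, not taken from any particular framework):

```python
def is_probable_spam(form_data, honeypot_field="website_url"):
    """Return True when the CSS-hidden honeypot field was filled in.

    Real visitors never see the field (it is hidden with display: none),
    so any non-empty value almost certainly came from a bot that filled
    in every input it found. The field name here is a made-up example.
    """
    return bool(form_data.get(honeypot_field, "").strip())
```

Pair this with a `display: none;` label and input in the form itself; legitimate submissions leave the field empty.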
So, in Java, the first line of your constructor HAS to be a call to super... be it implicitly calling super(), or explicitly calling another constructor. What I want to know is, why can't I put a try block around that? My specific case is that I have a mock class for a test. There is no default constructor, but I want one to make the tests simpler to read. I also want to wrap the exceptions thrown from the constructor into a RuntimeException. So, what I want to do is effectively this: ``` public class MyClassMock extends MyClass { public MyClassMock() { try { super(0); } catch (Exception e) { throw new RuntimeException(e); } } // Mocked methods } ``` But Java complains that super isn't the first statement. My workaround: ``` public class MyClassMock extends MyClass { public static MyClassMock construct() { try { return new MyClassMock(); } catch (Exception e) { throw new RuntimeException(e); } } public MyClassMock() throws Exception { super(0); } // Mocked methods } ``` Is this the best workaround? Why doesn't Java let me do the former? --- My best guess as to the "why" is that Java doesn't want to let me have a constructed object in a potentially inconsistent state... however, in doing a mock, I don't care about that. It seems I should be able to do the above... or at least I know that the above is safe for my case... or seems as though it should be anyways. I am overriding any methods I use from the tested class, so there is no risk that I am using uninitialized variables.
Unfortunately, compilers can't work on theoretical principles, and even though you may know that it is safe in your case, if they allowed it, it would have to be safe for all cases. In other words, the compiler isn't stopping just you, it's stopping everyone, including all those that don't know that it is unsafe and needs special handling. There are probably other reasons for this as well, as all languages usually have ways to do *unsafe* things if one knows how to deal with them. In C# .NET there are similar provisions, and the only way to declare a constructor that calls a base constructor is this: ``` public ClassName(...) : base(...) ``` in doing so, the base constructor will be called before the body of the constructor, and you cannot change this order.
It's done to prevent someone from creating a new `SecurityManager` object from untrusted code. ``` public class Evil extends SecurityManager { Evil() { try { super(); } catch (Throwable t) { } } } ```
Why can't I use a try block around my super() call?
[ "", "java", "exception", "mocking", "try-catch", "" ]
How do you specify that a foreign key constraint should be a 1:1 relationship in Transact-SQL? Is declaring the column UNIQUE enough? Below is my existing code: ``` CREATE TABLE [dbo].MyTable( [MyTablekey] INT IDENTITY(1,1) NOT FOR REPLICATION NOT NULL, [OtherTableKey] INT NOT NULL UNIQUE CONSTRAINT [FK_MyTable_OtherTable] FOREIGN KEY REFERENCES [dbo].[OtherTable]([OtherTableKey]), ... CONSTRAINT [PK_MyTable] PRIMARY KEY CLUSTERED ( [MyTableKey] ASC ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO ```
A foreign key column with the UNIQUE and NOT NULL constraints that references a UNIQUE, NOT NULL column in another table creates a 1:(0|1) relationship, which is probably what you want. If there was a true 1:1 relationship, every record in the first table would have a corresponding record in the second table and vice-versa. In that case, you would probably just want to make one table (unless you needed some strange storage optimization).
You could declare the column to be both the primary key and a foreign key. This is a good strategy for "extension" tables that are used to avoid putting nullable columns into the main table.
1:1 Foreign Key Constraints
[ "", "sql", "sql-server", "" ]
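The chosen answer's point, that a UNIQUE NOT NULL foreign key gives a 1:(0|1) relationship, can be checked with an in-memory SQLite sketch (table and column names are invented; note SQLite only enforces foreign keys when the pragma is switched on):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only on request
conn.execute("CREATE TABLE other_table (other_key INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE my_table (
        my_key    INTEGER PRIMARY KEY,
        other_key INTEGER NOT NULL UNIQUE REFERENCES other_table(other_key)
    )
""")
conn.execute("INSERT INTO other_table VALUES (1)")
conn.execute("INSERT INTO my_table VALUES (10, 1)")

def second_reference_fails():
    """A second row pointing at the same parent violates UNIQUE."""
    try:
        conn.execute("INSERT INTO my_table VALUES (11, 1)")
        return False
    except sqlite3.IntegrityError:
        return True  # at most one child per parent, hence 1:(0|1)
```

The same shape carries over to the T-SQL in the question: the UNIQUE constraint on the FK column is what caps the relationship at one row.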
I don't remember whether I was dreaming or not but I seem to recall there being a function which allowed something like, ``` foo in iter_attr(array of python objects, attribute name) ``` I've looked over the docs but this kind of thing doesn't fall under any obvious listed headers
Using a list comprehension would build a temporary list, which could eat all your memory if the sequence being searched is large. Even if the sequence is not large, building the list means iterating over the whole of the sequence before `in` could start its search. The temporary list can be avoided by using a generator expression: ``` foo = 12 foo in (obj.id for obj in bar) ``` Now, as long as `obj.id == 12` near the start of `bar`, the search will be fast, even if `bar` is infinitely long. As @Matt suggested, it's a good idea to use `hasattr` if any of the objects in `bar` can be missing an `id` attribute: ``` foo = 12 foo in (obj.id for obj in bar if hasattr(obj, 'id')) ```
Are you looking to get a list of objects that have a certain attribute? If so, a [list comprehension](http://docs.python.org/tut/node7.html#SECTION007140000000000000000) is the right way to do this. ``` result = [obj for obj in listOfObjs if hasattr(obj, 'attributeName')] ```
Using 'in' to match an attribute of Python objects in an array
[ "", "python", "arrays", "iteration", "" ]
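The chosen answer's generator-expression idiom runs as-is; here it is made self-contained (the `Obj` class is invented scaffolding for the demo):

```python
class Obj:
    def __init__(self, obj_id=None):
        if obj_id is not None:
            self.id = obj_id

bar = [Obj(3), Obj(12), Obj()]  # the last object has no .id at all

# The generator never builds a full temporary list the way
# [obj.id for obj in bar] would, and `in` stops at the first match.
found = 12 in (obj.id for obj in bar if hasattr(obj, "id"))
missing = 99 in (obj.id for obj in bar if hasattr(obj, "id"))
```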
I have a java back-end that needs to expose services to clients running in the following environments : * J2ME * Windows Mobile * iPhone I am looking for the best tool for each platform. I do not search a technology that works everywhere. I need something "light" adapted to low speed internet access. Right now I am using SOAP. It is verbose and not easy to parse on the mobile. The problem is that I have not seen any real alternative. Is there a format that works "out of the box" with one of these platforms ? I would rather not use a bloated library that will increase tremendously the download time of the application. Everybody seems to agree on JSON. Does anyone has implemented a solution based on JSON running with Objective-C, J2ME, Windows Mobile ? Note : so far the best solution seems to be Hessian. It works well on Windows Mobile and Objective-C/iPhone . The big problem is J2ME. The J2ME implementation of Hessian has serious limitations. It does not support complex objects. I had written another question about it. If you have any ideas, there are very welcome.
Hessian. <http://hessian.caucho.com>. Implementations in multiple languages (including ObjC), super light weight, and doesn't require reliance on dom/xml parsers for translation from wire to object models. Once we found Hessian, we forgot we ever knew XML.
[JSON](http://www.json.org/) is fairly compact, and supported by most frameworks. You can transfer data over HTTP using standard [REST](http://en.wikipedia.org/wiki/Representational_State_Transfer) techniques. There are JSON libraries for [Java](http://www.json.org/java/), [Objective C](http://code.brautaset.org/JSON/), and [many other languages](http://www.json.org/) (scroll down). You should have no problem finding framework support on the server side, because JSON is used for web applications. Older alternatives include plain XML and XML-RPC (like SOAP, but much simpler, and with libraries for most languages).
What are the best remoting technologies for mobile applications?
[ "", "java", "iphone", "windows-mobile", "java-me", "mobile", "" ]
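On the JSON option discussed in the question and the second answer, the compactness claim is easy to see side by side. A sketch with an invented payload (the XML string is just a hand-written equivalent for comparison):

```python
import json

payload = {"id": 7, "name": "Ann", "department": "R&D"}

# separators=(",", ":") drops the cosmetic whitespace, which matters
# on the slow links the question is worried about.
json_wire = json.dumps(payload, separators=(",", ":"))
xml_wire = ("<user><id>7</id><name>Ann</name>"
            "<department>R&amp;D</department></user>")

round_tripped = json.loads(json_wire)  # lossless on the way back
```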
I've just started skimming 'Debugging MS .Net 2.0 Applications' by John Robbins, and have become confused by his evangelism for Debug.Assert(...). He points out that well-implemented Asserts store the state, somewhat, of an error condition, e.g.: ``` Debug.Assert(i > 3, "i > 3", "This means I got a bad parameter"); ``` Now, personally, it seems crazy to me that he so loves restating his test without an actual sensible 'business logic' comment, perhaps "i <= 3 must never happen because of the flobittyjam widgitification process". So, I think I get Asserts as a kind-of low-level "Let's protect my assumptions" kind of thing... assuming that one feels this is a test one only needs to do in debug - i.e. you are protecting yourself against colleague and future programmers, and hoping that they actually test things. But what I don't get is, he then goes on to say that you should use assertions in addition to normal error handling; now what I envisage is something like this: ``` Debug.Assert(i > 3, "i must be greater than 3 because of the flibbity widgit status"); if (i <= 3) { throw new ArgumentOutOfRangeException("i", "i must be > 3 because... i=" + i.ToString()); } ``` What have I gained by the Debug.Assert repetition of the error condition test? I think I'd get it if we were talking about debug-only double-checking of a very important calculation... ``` double interestAmount = loan.GetInterest(); Debug.Assert(debugInterestDoubleCheck(loan) == interestAmount, "Mismatch on interest calc"); ``` ...but I don't get it for parameter tests which are surely worth checking (in both DEBUG and Release builds)... or not. What am I missing?
Assertions are not for parameter checking. Parameter checking should always be done (and precisely according to what pre-conditions are specified in your documentation and/or specification), and the `ArgumentOutOfRangeException` thrown as necessary. Assertions are for testing for "impossible" situations, i.e., things that you (in your program logic) *assume* are true. The assertions are there to tell you if these assumptions are broken for any reason. Hope this helps!
There is a communication aspect to asserts vs exception throwing. Let's say we have a User class with a Name property and a ToString method. If ToString is implemented like this: ``` public string ToString() { Debug.Assert(Name != null); return Name; } ``` It says that Name should never be null and there is a bug in the User class if it is. If ToString is implemented like this: ``` public string ToString() { if ( Name == null ) { throw new InvalidOperationException("Name is null"); } return Name; } ``` It says that the caller is using ToString incorrectly if Name is null and should check that before calling. The implementation with both ``` public string ToString() { Debug.Assert(Name != null); if ( Name == null ) { throw new InvalidOperationException("Name is null"); } return Name; } ``` says that if Name is null there is a bug in the User class, but we want to handle it anyway. (The user doesn't need to check Name before calling.) I think this is the kind of safety Robbins was recommending.
Debug.Assert vs. Specific Thrown Exceptions
[ "", "c#", "exception", "assert", "" ]
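The division the answers draw, exceptions for caller errors that must survive release builds versus assertions for "impossible" internal states, looks the same in Python, which likewise strips `assert` under `-O`. The names below are invented for illustration:

```python
def set_quota(user, quota):
    # Caller error: validated explicitly, so the check survives
    # optimized builds (python -O removes asserts, like a Release build).
    if quota <= 3:
        raise ValueError(f"quota must be > 3, got {quota}")
    user["quota"] = quota

def describe(user):
    # Internal invariant: set_quota is the only writer, so a bad value
    # here means a bug in this module, not a caller mistake.
    assert "quota" in user and user["quota"] > 3, "invariant broken"
    return f"quota={user['quota']}"

def rejects_bad_caller_input():
    try:
        set_quota({}, 2)
        return False
    except ValueError:
        return True
```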
I've got an upcoming project wherein I will need to connect our website (`PHP5/Apache 1.3/OpenBSD 4.1`) to our back-end system running on an iSeries with OS400 V5R3 so that I can access some tables stored there. I've done some checking around but am running into some roadblocks. From what I've seen the DB2 extensions and DB2 software from IBM only run under Linux. I've tried compiling the extensions with all the software from IBM and even tried their precompiled ibm\_db2 extension with no luck. IBM only supports Linux so I turned on the Linux emulation in the kernel but that didn't seem to help anything. If anyone has run across getting everything to run natively under OpenBSD that would be great, but what I think I may have to do is setting up a second server running CentOS with DB2 installed (most likely via ZendCore for IBM since it seems to do all this for me) and the driver so that I can set up a small transaction server that I can post against and get a JSON representation of the DB2 data that I need. Does the second option seem overkill or does anyone else have any better ideas?
Have you looked at connecting to the server using [unixODBC](http://www.unixodbc.org/)? If I remember correctly it has support for IBM DB2 and compiles on OpenBSD. Check out <http://www.php.net/odbc> for more information regarding the PHP side. If you can't get that to work, the option to setup a web service on a Linux server may be all you can do.
Rather than setup a 2nd box, why don't you look into the PHP Connector for iSeries? My mainframe guys said it was very easy to setup on our iSeries here. We wrote a simple server in PHP that loads data models from DB2 data, serializes them, and returns them to the caller. This approach means that only another PHP app can consume the service but it's just so much quicker on both ends to just serialize the object and send it down the pipe. Here is a PDF from IBM on the subject: <http://i-seriesusergroup.org/wp-content/uploads/2006/09/PHP%20for%20i5OS%20NESDND.pdf>
Connect PHP to IBM i (AS/400)
[ "", "php", "database", "odbc", "db2", "ibm-midrange", "" ]
How do you rewrite this in Linq? ``` SELECT Id, Name FROM TableA WHERE TableA.Id IN (SELECT xx from TableB INNER JOIN Table C....) ``` So in plain english, I want to select Id and Name from TableA where TableA's Id is in a result set from a second query.
``` from a in TableA where (from b in TableB join c in TableC on b.id equals c.id where .. select b.id) .Contains(a.Id) select new { a.Id, a.Name } ```
LINQ supports IN in the form of contains. Think "collection.Contains(id)" instead of "id IN (collection)". ``` from a in TableA where ( from b in TableB join c in TableC on b.id equals c.id select b.id ).Contains(TableA.Id) select new { a.Id, a.Name } ``` See also this blog [post](http://blog.wekeroad.com/2008/02/27/creating-in-queries-with-linq-to-sql/).
How do I write SELECT FROM myTable WHERE id IN (SELECT...) in Linq?
[ "", "sql", "linq", "linq-to-sql", "" ]
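The SQL shape in the question (`WHERE ... IN (SELECT ...)`), which the accepted LINQ expresses via `Contains`, can be exercised directly. An in-memory SQLite sketch with made-up rows:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE table_a (id INTEGER, name TEXT);
    CREATE TABLE table_b (id INTEGER, c_id INTEGER);
    CREATE TABLE table_c (id INTEGER);
    INSERT INTO table_a VALUES (1, 'one'), (2, 'two'), (3, 'three');
    INSERT INTO table_b VALUES (1, 10), (3, 30);
    INSERT INTO table_c VALUES (10), (99);
""")

# Only rows of table_a whose id appears in the joined subquery survive,
# which is exactly the semantics LINQ's .Contains() translates to.
rows = db.execute("""
    SELECT a.id, a.name
    FROM table_a AS a
    WHERE a.id IN (SELECT b.id
                   FROM table_b AS b
                   JOIN table_c AS c ON b.c_id = c.id)
""").fetchall()
```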
The subject says it all - normally easy and cross platform way is to poll, intelligently. But every OS has some means to notify without polling. Is it possible in a reasonably cross platform way? (I only really care about Windows and Linux, but I use mac, so I thought posix may help?)
Linux users can use [inotify](http://en.wikipedia.org/wiki/Inotify) > inotify is a Linux kernel subsystem > that provides file system event > notification. Some goodies for Windows fellows: * [File Change Notification](http://msdn.microsoft.com/en-us/library/bb158663.aspx) on MSDN * "[When Folders Change](http://www.relisoft.com/win32/watcher.html)" article * [File System Notification on Change](http://techsupt.winbatch.com/TS/T000001048F39.html)
The Qt library has a [QFileSystemWatcher](http://doc.qt.io/qt-5/qfilesystemwatcher.html) class which provides cross platform notifications when a file changes. Even if you are not using Qt, because the source is available you could have a look at it as a sample for your own implementation. Qt has separate implementations for Windows, Linux and Mac.
How to be notified of file/directory change in C/C++, ideally using POSIX
[ "", "c++", "c", "posix", "" ]
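Until a native notification API (inotify, `ReadDirectoryChangesW`, kqueue) is wired in, the portable fallback the question mentions, polling intelligently, needs only `stat`. A minimal sketch, written in Python rather than C to keep it short; the same mtime-comparison loop translates line for line to `stat(2)`:

```python
import os

class MTimeWatcher:
    """Detect file changes by comparing st_mtime between polls.

    Portable (Windows/Linux/macOS) but coarse: it misses changes that
    do not touch the modification time, and its latency is whatever
    poll interval the caller chooses.
    """
    def __init__(self, path):
        self.path = path
        self.last = os.stat(path).st_mtime

    def changed(self):
        mtime = os.stat(self.path).st_mtime
        if mtime != self.last:
            self.last = mtime
            return True
        return False
```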
I know that the following is true ``` int i = 17; //binary 10001 int j = i << 1; //decimal 34, binary 100010 ``` But, if you shift too far, the bits fall off the end. Where this happens is a matter of the size of integer you are working with. Is there a way to perform a shift so that the bits rotate around to the other side? I'm looking for a single operation, not a for loop.
If you know the size of the type, you could do something like: ``` uint i = 17; uint j = i << 1 | i >> 31; ``` ... which would perform a circular shift of a 32 bit value. As a generalization to circular shift left n bits, on a b bit variable: ``` /*some unsigned numeric type*/ input = 17; var result = input << n | input >> (b - n); ``` --- Regarding the comment: it appears that C# does treat the high bit of signed values differently. I found some info on this [here](http://www.blackwasp.co.uk/CSharpShiftOperators.aspx). I also changed the example to use a uint.
A year ago I had to implement MD4 for my undergraduate thesis. Here is my implementation of a circular bit shift using a UInt32. ``` private UInt32 RotateLeft(UInt32 x, Byte n) { return (UInt32)((x << n) | (x >> (32 - n))); } ```
Is there a way to perform a circular bit shift in C#?
[ "", "c#", "bit-manipulation", "" ]
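The accepted formula is language-neutral; here is the same rotate-left in Python, where ints are unbounded, so the result has to be masked back to the word size by hand (in the C# version the fixed-width `uint` does that implicitly):

```python
def rotl(x, n, bits=32):
    """Rotate-left x by n within a fixed word of `bits` bits.

    Python ints never drop high bits on their own, so the mask plays
    the role of the fixed-width uint in the C# answer.
    """
    mask = (1 << bits) - 1
    n %= bits  # rotating by the full word width is a no-op
    return ((x << n) | (x >> (bits - n))) & mask
```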
I'm looking to introduce a unit testing framework into the mix at my job. We're using Visual Studio 2005 (though we may be moving to 2008 within the next six months) and work primarily in C#. If the framework has some kind of IDE integration that would be best, but I'm open to frameworks that don't have integration but are still relatively simple to get set up. I'm going to get resistance to it one way or another, so if I can make sure what I'm pushing isn't a pain in the neck, that would help my case. The obvious choice from the research I've done so far points to NUnit, but I'd like to get the impressions of someone who's actually used it before recommending it to my team. Has anyone out there used [NUnit](https://en.wikipedia.org/wiki/NUnit)? If so, are there any pitfalls or limitations of which I should be aware? Are there other good options out there? If so, and you've used them alongside NUnit, I'd greatly appreciate an idea of their relative strengths and weaknesses.
I think [NUnit](https://en.wikipedia.org/wiki/NUnit) **is** your best bet. With [TestDriven.NET](https://testdriven.net/), you get great integration within Visual Studio. (ReSharper also has a unit test runner if you're using it). NUnit is simple to use and follows an established paradigm. You'll also find plenty of projects, tutorials, and guides using it which always helps. Your other main choice is probably [MbUnit](https://en.wikipedia.org/wiki/List_of_unit_testing_frameworks#.NET_programming_languages), which is more and more positioning itself as the [BDD](https://en.wikipedia.org/wiki/Behavior-driven_development) framework of choice (in conjunction with [Gallio](http://www.gallio.org)).
Scott Hanselman had a good *podcast* about this, entitled: > "The Past, Present and Future of .NET Unit Testing Frameworks" : [Hanselminutes #112](http://www.hanselminutes.com/default.aspx?showID=130)
.NET testing framework advice
[ "", "c#", ".net", "visual-studio", "unit-testing", "nunit", "" ]
What method do you use when you want to get performance data about specific code paths?
This method has several limitations, but I still find it very useful. I'll list the limitations (I know of) up front and let whoever wants to use it do so at their own risk. 1. The original version I posted over-reported time spent in recursive calls (as pointed out in the comments to the answer). 2. It's not thread safe, it wasn't thread safe before I added the code to ignore recursion and it's even less thread safe now. 3. Although it's very efficient if it's called many times (millions), it will have a measurable effect on the outcome so that scopes you measure will take longer than those you don't. --- I use this class when the problem at hand doesn't justify profiling all my code or I get some data from a profiler that I want to verify. Basically it sums up the time you spent in a specific block and at the end of the program outputs it to the debug stream (viewable with [DbgView](http://technet.microsoft.com/en-us/sysinternals/bb896647.aspx)), including how many times the code was executed (and the average time spent of course)). ``` #pragma once #include <tchar.h> #include <windows.h> #include <sstream> #include <boost/noncopyable.hpp> namespace scope_timer { class time_collector : boost::noncopyable { __int64 total; LARGE_INTEGER start; size_t times; const TCHAR* name; double cpu_frequency() { // cache the CPU frequency, which doesn't change. 
static double ret = 0; // store as double so division later on is floating point and not truncating if (ret == 0) { LARGE_INTEGER freq; QueryPerformanceFrequency(&freq); ret = static_cast<double>(freq.QuadPart); } return ret; } bool in_use; public: time_collector(const TCHAR* n) : times(0) , name(n) , total(0) , start(LARGE_INTEGER()) , in_use(false) { } ~time_collector() { std::basic_ostringstream<TCHAR> msg; msg << _T("scope_timer> ") << name << _T(" called: "); double seconds = total / cpu_frequency(); double average = seconds / times; msg << times << _T(" times total time: ") << seconds << _T(" seconds ") << _T(" (avg ") << average <<_T(")\n"); OutputDebugString(msg.str().c_str()); } void add_time(__int64 ticks) { total += ticks; ++times; in_use = false; } bool aquire() { if (in_use) return false; in_use = true; return true; } }; class one_time : boost::noncopyable { LARGE_INTEGER start; time_collector* collector; public: one_time(time_collector& tc) { if (tc.aquire()) { collector = &tc; QueryPerformanceCounter(&start); } else collector = 0; } ~one_time() { if (collector) { LARGE_INTEGER end; QueryPerformanceCounter(&end); collector->add_time(end.QuadPart - start.QuadPart); } } }; } // Usage TIME_THIS_SCOPE(XX); where XX is a C variable name (can begin with a number) #define TIME_THIS_SCOPE(name) \ static scope_timer::time_collector st_time_collector_##name(_T(#name)); \ scope_timer::one_time st_one_time_##name(st_time_collector_##name) ```
I do my profiles by creating two classes: `cProfile` and `cProfileManager`. `cProfileManager` will hold all the data that resulted from `cProfile`. `cProfile` with have the following requirements: * `cProfile` has a constructor which initializes the current time. * `cProfile` has a deconstructor which sends the total time the class was alive to `cProfileManager` To use these profile classes, I first make an instance of `cProfileManager`. Then, I put the code block, which I want to profile, inside curly braces. Inside the curly braces, I create a `cProfile` instance. When the code block ends, `cProfile` will send the time it took for the block of code to finish to `cProfileManager`. **Example Code** Here's an example of the code (simplified): ``` class cProfile { cProfile() { TimeStart = GetTime(); }; ~cProfile() { ProfileManager->AddProfile (GetTime() - TimeStart); } float TimeStart; } ``` To use `cProfile`, I would do something like this: ``` int main() { printf("Start test"); { cProfile Profile; Calculate(); } ProfileManager->OutputData(); } ``` or this: ``` void foobar() { cProfile ProfileFoobar; foo(); { cProfile ProfileBarCheck; while (bar()) { cProfile ProfileSpam; spam(); } } } ``` **Technical Note** This code is actually an abuse of the way scoping, constructors and deconstructors work in [C++](http://en.wikipedia.org/wiki/C_%28programming_language%29). `cProfile` exists only inside the block scope (the code block we want to test). Once the program leaves the block scope, `cProfile` records the result. **Additional Enhancements** * You can add a string parameter to the constructor so you can do something like this: cProfile Profile("Profile for complicated calculation"); * You can use a macro to make the code look cleaner (be careful not to abuse this. Unlike our other abuses on the language, macros can be dangerous when used). 
Example: #define START\_PROFILE cProfile Profile; { #define END\_PROFILE } * `cProfileManager` can check how many times a block of code is called. But you would need an identifier for the block of code. The first enhancement can help identify the block. This can be useful in cases where the code you want to profile is inside a loop (like the second example above). You can also add the average, fastest and longest execution time the code block took. * Don't forget to add a check to skip profiling if you are in debug mode.
Quick and dirty way to profile your code
[ "", "c++", "performance", "profiling", "code-snippets", "" ]
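Both answers lean on the same RAII trick: the constructor records the start time and the destructor books the elapsed time. In Python the equivalent scope hook is a context manager; a minimal sketch (wall-clock only, and no more thread safe than the originals):

```python
import time
from collections import defaultdict

class ScopeTimer:
    """Accumulate wall-clock time and call counts per named scope."""
    totals = defaultdict(lambda: [0.0, 0])  # name -> [seconds, calls]

    def __init__(self, name):
        self.name = name

    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, *exc):
        elapsed = time.perf_counter() - self.start
        bucket = ScopeTimer.totals[self.name]
        bucket[0] += elapsed
        bucket[1] += 1
        return False  # never swallow exceptions from the timed block

def report():
    return {name: (secs, calls) for name, (secs, calls) in ScopeTimer.totals.items()}
```

As with `TIME_THIS_SCOPE`, you just wrap the suspect block: `with ScopeTimer("calc"): ...` and read off the totals at the end of the run.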
I'm getting a `NoSuchMethodError` error when running my Java program. What's wrong and how do I fix it?
Without any more information it is difficult to pinpoint the problem, but the root cause is that you most likely compiled your code against one version of a class and are running against a different version that is missing the method. Look at the stack trace ... If the exception appears when calling a method on an object in a library, you are most likely using separate versions of the library when compiling and running. Make sure you have the right version in both places. If the exception appears when calling a method on objects instantiated by classes *you* made, then your build process seems to be faulty. Make sure the class files that you are actually running are updated when you compile.
I was having your problem, and this is how I fixed it. The following steps are a working way to add a library. I had done the first two steps right, but I hadn't done the last one by dragging the ".jar" file direct from the file system into the "lib" folder on my eclipse project. Additionally, I had to remove the previous version of the library from both the build path and the "lib" folder. ## Step 1 - Add .jar to build path ![enter image description here](https://i.stack.imgur.com/Os3Be.gif) ## Step 2 - Associate sources and javadocs (optional) ![enter image description here](https://i.stack.imgur.com/vaOVy.gif) ## Step 3 - Actually drag .jar file into "lib" folder (not optional) ![enter image description here](https://i.stack.imgur.com/2OkvN.gif)
How do I fix a NoSuchMethodError?
[ "", "java", "nosuchmethoderror", "" ]
I have an issue that is driving me a bit nuts: Using a UserProfileManager as an non-authorized user. The problem: The user does not have "Manage User Profiles" rights, but I still want to use the UserProfileManager. The idea of using SPSecurity.RunWithElevatedPrivileges does not seem to work, as the UserProfileManager authorizes against the SSP as it seems. ``` SPSecurity.RunWithElevatedPrivileges(delegate() { using (SPSite site = new SPSite(inputWeb.Site.ID)) { ServerContext ctx = ServerContext.GetContext(site); UserProfileManager upm = new UserProfileManager(ctx,true); UserProfile u = upm.GetUserProfile(userLogin); DepartmentName = u["Department"].Value as string; } }); ``` This still fails on the "new UserProfileManager" line, with the "You must have manage user profiles administrator rights to use administrator mode" exception. As far as I userstood, RunWithElevatedPrivileges reverts to the AppPool Identity. WindowsIdentity.GetCurrent().Name returns "NT AUTHORITY\network service", and I have given that account Manage User Profiles rights - no luck. site.RootWeb.CurrentUser.LoginName returns SHAREPOINT\system for the site created within RunWithElevatedPrivileges, which is not a valid Windows Account ofc. Is there even a way to do that? I do not want to give all users "Manage User Profiles" rights, but I just want to get some data from the user profiles (Department, Country, Direct Reports). Any ideas?
The permission that needs to be set is actually found in the Shared Service Provider. 1. Navigate to Central Admin 2. Navigate to the Shared Service Provider 3. Under **User Profiles and My Sites** navigate to **Personalization services permissions**. 4. If the account doesn't already exist, add the account under which your site's App Domain is running. 5. Grant that user **Manage user profiles** permission. I notice that you're running the application pool under the Network Service account. I implemented an identical feature on my site; however, the application pool was hosted under a Windows account. I'm not sure why this would make a difference, however.
Thanks for the Answers. One Caveat: if you run the Application Pool as "Network Service" instead of a Domain Account, you're screwed. But then again, it's recommended to use a domain account anyway (On a test server I used network service, but after changing it to a domain account it worked).
Sharepoint UserProfileManager without Manage User Profiles right
[ "", "c#", "sharepoint", "" ]
This is something that I think would be very useful. Basically, I'd like there to be a way to edit Python source programmatically without requiring human intervention. There are a couple of things I would like to do with this: 1. Edit the configuration of Python apps that use source modules for configuration. 2. Set up a "template" so that I can customize a Python source file on the fly. This way, I can set up a "project" system on an open source app I'm working on and allow certain files to be customized. I could probably write something that can do this myself, but I can see that opening up a lot of "devil's in the details" type issues. Are there any ways to do this currently, or am I just going to have to bite the bullet and implement it myself?
Python's standard library provides pretty good facilities for working with Python source; note the [tokenize](https://docs.python.org/2/library/tokenize.html) and [parser](https://docs.python.org/2/library/parser.html) modules.
I had the same issue: I simply opened the file, did some string replacements, and then reloaded the file in the Python interpreter. This works fine and is easy to do. Otherwise, AFAIK, you have to use some configuration objects.
Programmatically editing Python source
[ "", "python", "file-io", "" ]
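The `tokenize` route the chosen answer points at already covers simple mechanical edits. A sketch that rewrites one integer constant while leaving the rest of the file alone (the source snippet and constant names are invented; a same-length literal keeps the recorded token positions valid for `untokenize`):

```python
import io
import tokenize

def set_constant(source, name, new_literal):
    """Rewrite `name = <number>` assignments at the token level."""
    result = []
    tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
    for i, tok in enumerate(tokens):
        # Match the NUMBER token that directly follows `name =`.
        if (tok.type == tokenize.NUMBER
                and i >= 2
                and tokens[i - 1].string == "="
                and tokens[i - 2].string == name):
            tok = tok._replace(string=new_literal)
        result.append(tok)
    return tokenize.untokenize(result)

src = "DEBUG = 0\nRETRIES = 3\n"
patched = set_constant(src, "RETRIES", "5")
```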
The following code is causing an intermittent crash on a Vista machine. ``` using (SoundPlayer myPlayer = new SoundPlayer(Properties.Resources.BEEPPURE)) myPlayer.Play(); ``` I highly suspect it is this code because the program crashes mid-beep or just before the beep is played every time. I have top-level traps for all `ThreadExceptions`, `UnhandledExceptions` in my app domain, and a `try-catch` around `Application.Run`, none of which trap this crash. Any ideas? --- EDIT: The Event Viewer has the following information: > Faulting application [xyz].exe, version 4.0.0.0, time stamp > 0x48ce5a74, faulting module msvcrt.dll, version 7.0.6001.18000, time > stamp 0x4791a727, exception code 0xc0000005, fault offset 0x00009b30, > process id 0x%9, application start time 0x%10. Interestingly, the `HRESULT 0xc0000005` has the message: > "Reading or writing to an inaccessible memory location." > (STATUS\_ACCESS\_VIOLATION)
Actually, the above code (that is, `new SoundPlayer(BEEPPURE).Play();`) was crashing for me. This article explains why, and provides an alternative to SoundPlayer that works flawlessly: <http://www.codeproject.com/KB/audio-video/soundplayerbug.aspx?msg=2862832#xx2862832xx>
You can use WinDBG and trap all first-chance exceptions. I'm sure you'll see something interesting. If so, you can use SOS to clean up the stack and post it here to help us along. Or you can use Visual Studio by enabling the trap of all exceptions. Go to "Debug" and then "Exceptions" and make sure you trap everything. Do this along with switching the debugger to mixed-mode (managed and unmanaged). Once you have the stack trace, we can determine the answer. A process doesn't exit on Windows without an exception. It's in there. Also, you might want to check the machine's Event Log to see if anything has shown up.
SoundPlayer crash on Vista
[ "", "c#", "winforms", "windows-vista", "audio", "" ]
In a project our team is using object lists to perform mass operations on sets of data that should all be processed in a similar way. In particular, different objects would ideally act the same, which would be very easily achieved with polymorphism. The problem I have with it is that inheritance implies the *is a* relationship, rather than the *has a* relationship. For example, several objects *have a* damage counter, but to make this easy to use in an object list, polymorphism could be used - except that would imply an *is a* relationship which wouldn't be true. (A person *is not a* damage counter.) The only solution I can think of is to have a member of the class return the proper object type when implicitly casted instead of relying on inheritance. Would it be better to forgo the *is a* / *has a* ideal in exchange for ease of programming? Edit: To be more specific, I am using C++, so using polymorphism would allow the different objects to "act the same" in the sense that the derived classes could reside within a single list and be operated upon by a virtual function of the base class. The use of an interface (or imitating them via inheritance) seems like a solution I would be willing to use.
This can be accomplished using multiple inheritance. In your specific case (C++), you can use pure virtual classes as interfaces. This allows you to have multiple inheritance without creating scope/ambiguity problems. Example:

```
class Damage {
    virtual void addDamage(int d) = 0;
    virtual int getDamage() = 0;
};

class Person : public virtual Damage {
    void addDamage(int d) {
        // ...
        damage += d * 2;
    }

    int getDamage() {
        return damage;
    }
};

class Car : public virtual Damage {
    void addDamage(int d) {
        // ...
        damage += d;
    }

    int getDamage() {
        return damage;
    }
};
```

Now both Person and Car 'is-a' Damage, meaning, they implement the Damage interface. The use of pure virtual classes (so that they are like interfaces) is key and should be used frequently. It insulates future changes from altering the entire system. Read up on the Open-Closed Principle for more information.
I think you should be implementing interfaces to be able to enforce your *has a* relationships (am doing this in C#):

```
public interface IDamageable
{
    void AddDamage(int i);
    int DamageCount { get; }
}
```

You could implement this in your objects:

```
public class Person : IDamageable
public class House : IDamageable
```

And you'd be sure that each has the DamageCount property and a method to allow you to add damage, without implying that a person and a house are related to each other in some sort of hierarchy.
Inheritance and Polymorphism - Ease of use vs Purity
[ "", "c++", "inheritance", "oop", "polymorphism", "" ]
What are the differences between a [`HashMap`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/HashMap.html) and a [`Hashtable`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/Hashtable.html) in Java? Which is more efficient for non-threaded applications?
There are several differences between [`HashMap`](https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/HashMap.html) and [`Hashtable`](https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/Hashtable.html) in Java: 1. `Hashtable` is [synchronized](https://stackoverflow.com/questions/1085709/what-does-synchronized-mean), whereas `HashMap` is not. This makes `HashMap` better for non-threaded applications, as unsynchronized Objects typically perform better than synchronized ones. 2. `Hashtable` does not allow `null` keys or values. `HashMap` allows one `null` key and any number of `null` values. 3. One of HashMap's subclasses is [`LinkedHashMap`](https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/LinkedHashMap.html), so in the event that you'd want predictable iteration order (which is insertion order by default), you could easily swap out the `HashMap` for a `LinkedHashMap`. This wouldn't be as easy if you were using `Hashtable`. Since synchronization is not an issue for you, I'd recommend `HashMap`. If synchronization becomes an issue, you may also look at [`ConcurrentHashMap`](https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/concurrent/ConcurrentHashMap.html).
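A quick demonstration of points 2 and 3 above; the `MapDemo` wrapper class and `fill` helper are illustrative, not part of any API:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class MapDemo {
    // Populates any Map the same way, so implementations can be compared directly.
    static Map<String, String> fill(Map<String, String> m) {
        m.put("b", "2");
        m.put("a", "1");
        m.put(null, "null key is fine here");  // would throw NullPointerException on a Hashtable
        m.put("c", null);                      // null values are fine too
        return m;
    }

    public static void main(String[] args) {
        Map<String, String> hash = fill(new HashMap<>());
        Map<String, String> linked = fill(new LinkedHashMap<>());
        System.out.println(hash.containsKey(null));  // true
        System.out.println(linked.keySet());         // insertion order: [b, a, null, c]
    }
}
```

Running `fill` on a `Hashtable` instead would fail at the first `null`, which is exactly the behavioral difference described in point 2.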
Note that a lot of the answers state that Hashtable is synchronized. **In practice this buys you very little.** The synchronization on the accessor/mutator methods will stop two threads adding or removing from the map concurrently, but in the real world you will often need additional synchronization.

A very common idiom is to "check then put" — i.e. look for an entry in the `Map`, and add it if it does not already exist. This is not in any way an atomic operation whether you use `Hashtable` or `HashMap`.

An equivalently synchronised `HashMap` can be obtained by:

```
Collections.synchronizedMap(myMap);
```

But to correctly implement this logic you need **additional synchronisation** of the form:

```
synchronized(myMap) {
    if (!myMap.containsKey("tomato"))
        myMap.put("tomato", "red");
}
```

Even iterating over a `Hashtable`'s entries (or a `HashMap` obtained by `Collections.synchronizedMap`) is not thread-safe unless you also guard the `Map` against being modified through additional synchronization.

Implementations of the [`ConcurrentMap`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/ConcurrentMap.html) interface (for example [`ConcurrentHashMap`](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/ConcurrentHashMap.html)) solve some of this by including **thread-safe check-then-act semantics** such as:

```
ConcurrentMap.putIfAbsent(key, value);
```
What are the differences between a HashMap and a Hashtable in Java?
[ "", "java", "collections", "hashmap", "hashtable", "" ]
A [Django](http://www.djangoproject.com/) view points to a function, which can be a problem if you want to change only a bit of functionality. Yes, I could have a million keyword arguments and even more if statements in the function, but I was thinking more of an object-oriented approach. For example, I have a page that displays a user. This page is very similar to the page that displays a group, but it's still not so similar as to just use another data model. A group also has members, etc... One way would be to point views to class methods and then extend that class. Has anyone tried this approach or has any other idea?
I've created and used my own generic view classes, defining **`__call__`** so an instance of the class is callable. I really like it; while Django's generic views allow some customization through keyword arguments, OO generic views (if their behavior is split into a number of separate methods) can have much more fine-grained customization via subclassing, which lets me repeat myself a lot less. (I get tired of rewriting the same create/update view logic anytime I need to tweak something Django's generic views don't quite allow). I've posted some code at [djangosnippets.org](http://www.djangosnippets.org/snippets/1009/). The only real downside I see is the proliferation of internal method calls, which may impact performance somewhat. I don't think this is much of a concern; it's rare that Python code execution would be your performance bottleneck in a web app. **UPDATE**: Django's own [generic views](http://docs.djangoproject.com/en/dev/topics/class-based-views/) are now class-based. **UPDATE**: FWIW, I've changed my opinion on class-based views since this answer was written. After having used them extensively on a couple of projects, I feel they tend to lead to code that is satisfyingly DRY to write, but very hard to read and maintain later, because functionality is spread across so many different places, and subclasses are so dependent on every implementation detail of the superclasses and mixins. I now feel that [TemplateResponse](https://docs.djangoproject.com/en/dev/ref/template-response/) and view decorators is a better answer for decomposing view code.
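A framework-agnostic sketch of the `__call__` idea described above (no Django imports; `Request` is a stand-in for Django's `HttpRequest`, and all names are illustrative):

```python
class Request:
    """Minimal stand-in for a framework request object."""
    def __init__(self, method):
        self.method = method


class View:
    # An instance is callable, so it can be used wherever a view function is expected.
    def __call__(self, request, *args, **kwargs):
        handler = getattr(self, request.method.lower(), None)
        if handler is None:
            return 405, "Method Not Allowed"
        return handler(request, *args, **kwargs)


class HelloView(View):
    greeting = "Hello"  # behavior split into attributes/methods for easy subclassing

    def get(self, request):
        return 200, self.greeting


class LoudHelloView(HelloView):
    greeting = "HELLO"  # fine-grained customization without touching dispatch logic


view = LoudHelloView()
print(view(Request("GET")))  # (200, 'HELLO')
```

The dispatch and the behavior live in separate methods, which is what makes subclassing pay off compared to a function view stuffed with keyword arguments.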
I needed to use class based views, but I wanted to be able to use the full name of the class in my URLconf without always having to instantiate the view class before using it. What helped me was a surprisingly simple metaclass:

```
class CallableViewClass(type):
    def __call__(cls, *args, **kwargs):
        if args and isinstance(args[0], HttpRequest):
            instance = super(CallableViewClass, cls).__call__()
            return instance.__call__(*args, **kwargs)
        else:
            instance = super(CallableViewClass, cls).__call__(*args, **kwargs)
            return instance


class View(object):
    __metaclass__ = CallableViewClass

    def __call__(self, request, *args, **kwargs):
        if hasattr(self, request.method):
            handler = getattr(self, request.method)
            if hasattr(handler, '__call__'):
                return handler(request, *args, **kwargs)
        return HttpResponseBadRequest('Method Not Allowed', status=405)
```

I can now both instantiate view classes and use the instances as view functions, OR I can simply point my URLconf to my class and have the metaclass instantiate (and call) the view class for me. This works by checking the first argument to `__call__` – if it's an `HttpRequest`, it must be an actual HTTP request because it would be nonsense to attempt to instantiate a view class with an `HttpRequest` instance.

```
class MyView(View):
    def __init__(self, arg=None):
        self.arg = arg

    def GET(self, request):
        return HttpResponse(self.arg or 'no args provided')


@login_required
class MyOtherView(View):
    def POST(self, request):
        pass


# And all the following work as expected.
urlpatterns = patterns('',
    url(r'^myview1$', 'myapp.views.MyView', name='myview1'),
    url(r'^myview2$', myapp.views.MyView, name='myview2'),
    url(r'^myview3$', myapp.views.MyView('foobar'), name='myview3'),
    url(r'^myotherview$', 'myapp.views.MyOtherView', name='otherview'),
)
```

(I posted a snippet for this at <http://djangosnippets.org/snippets/2041/>)
Class views in Django
[ "", "python", "django", "view", "oop", "" ]
I'm interested in doing comparisons between the date string and the MySQL timestamp. However, I'm not seeing an easy conversion. Am I overlooking something obvious?
Converting from a timestamp to a formatted date:

```
date('Y-m-d', $timestamp);
```

Converting from a formatted date to a timestamp:

```
mktime(0, 0, 0, $month, $day, $year, $is_dst);
```

See [date](http://www.php.net/manual/en/function.date.php) and [mktime](http://www.php.net/manual/en/function.mktime.php) for further documentation.

When it comes to storing, it's up to you whether to use the MySQL DATE format for storing a formatted date, an integer for storing a UNIX timestamp, or MySQL's TIMESTAMP format, which converts a numeric timestamp into a readable format. [Check the MySQL Doc](http://dev.mysql.com/doc/refman/5.0/en/datetime.html) for TIMESTAMP info.
You can avoid having to use `strtotime()` or `getdate()` in **PHP** by using MySQL's `UNIX_TIMESTAMP()` function.

```
SELECT UNIX_TIMESTAMP(timestamp) FROM sometable
```

The resulting data will be a standard integer Unix timestamp, so you can do a direct comparison to `time()`.
If I have a PHP string in the format YYYY-DD-MM and a timestamp in MySQL, is there a good way to convert between them?
[ "", "php", "mysql", "time", "timestamp", "date", "" ]
I would like to retrieve the ethernet address of the network interface that is used to access a particular website. How can this be done in Java?

**Solution** Note that the accepted solution of `getHardwareAddress` is only available in Java 6. There does not seem to be a solution for Java 5 aside from executing i(f|p)config.
[java.net.NetworkInterface.getHardwareAddress](http://java.sun.com/javase/6/docs/api/java/net/NetworkInterface.html#getHardwareAddress%28%29) (method added in Java 6) It has to be called on the machine you are interested in - the MAC is not transferred across network boundaries (i.e. LAN and WAN). If you want to make use of it on a website server to interrogate the clients, you'd have to run an applet that would report the result back to you. For Java 5 and older I found code [parsing output of command line tools on various systems](http://forums.sun.com/thread.jspa?messageID=3424868#4204392).
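For Java 6+, a small self-contained sketch of the `getHardwareAddress` call; the `MacAddressDemo` class name and `format` helper are mine, not part of the API:

```java
import java.net.NetworkInterface;
import java.util.Collections;

public class MacAddressDemo {
    // Formats a hardware address as aa:bb:cc:...; returns null when the
    // interface has no MAC (e.g. the loopback interface).
    static String format(byte[] mac) {
        if (mac == null) return null;
        StringBuilder sb = new StringBuilder();
        for (byte b : mac) {
            if (sb.length() > 0) sb.append(':');
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Must run on the machine whose MAC you want, per the answer above.
        for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            System.out.println(nif.getName() + " -> " + format(nif.getHardwareAddress()));
        }
    }
}
```

On a typical machine this prints one line per interface, with `null` for interfaces (such as loopback) that have no hardware address.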
You can get the address that connects to your ServerSocket using <http://java.sun.com/javase/6/docs/api/java/net/NetworkInterface.html#getInetAddresses()>

However, if your client is connecting via a NAT, then you will get the address of the router and NOT the Ethernet address. If it is on your local network (via a hub/switch, no router with NAT), then it will work as intended.
How do you get the ethernet address using Java?
[ "", "java", "networking", "ethernet", "" ]
One may not always know the `Type` of an object at compile-time, but may need to create an instance of the `Type`. How do you get a new object instance from a `Type`?
The `Activator` class within the root `System` namespace is pretty powerful. There are a lot of overloads for passing parameters to the constructor and such. Check out the documentation at:

> <http://msdn.microsoft.com/en-us/library/system.activator.createinstance.aspx>

or (new path)

> <https://learn.microsoft.com/en-us/dotnet/api/system.activator.createinstance>

Here are some simple examples:

```
ObjectType instance = (ObjectType)Activator.CreateInstance(objectType);

// The string-based overload returns an ObjectHandle, which must be unwrapped:
ObjectType instance2 = (ObjectType)Activator.CreateInstance("MyAssembly", "MyNamespace.ObjectType").Unwrap();
```
The answer was already given:

> ```
> ObjectType instance = (ObjectType)Activator.CreateInstance(objectType);
> ```

However, the `Activator` class has a [generic variant for the parameterless constructor](https://learn.microsoft.com/en-us/dotnet/api/system.activator.createinstance?view=net-7.0#system-activator-createinstance-1) that makes this slightly more readable by making the cast unnecessary and not needing to pass the runtime type of the object:

```
ObjectType instance = Activator.CreateInstance<ObjectType>();
```
How to create a new object instance from a Type
[ "", "c#", ".net", "performance", "reflection", "types", "" ]