I used Visual Studio's Application Wizard to create a skeleton MFC program with a multi-document interface. When I start this program, it automatically creates a child frame, which I don't want it to do - I need the main frame's client area to be empty until the user chooses to open a file. The debugger tells me that a CChildFrame object is created when the application class's InitInstance() function calls ProcessShellCommand(), but what is a good entry point for me to override this behaviour?
This worked for me -- change ``` if (!ProcessShellCommand(cmdInfo)) ``` to ``` if (cmdInfo.m_nShellCommand != CCommandLineInfo::FileNew && !ProcessShellCommand(cmdInfo)) ``` in your app's InitInstance() function.
This works; it maintains printing/opening from the shell, etc. ``` // Parse command line for standard shell commands, DDE, file open CCommandLineInfo cmdInfo; ParseCommandLine(cmdInfo); if ( cmdInfo.m_nShellCommand == CCommandLineInfo::FileNew ) { cmdInfo.m_nShellCommand = CCommandLineInfo::FileNothing; } // Dispatch commands specified on the command line if (!ProcessShellCommand(cmdInfo)) return FALSE; ```
How can I stop my MFC application from calling OnFileNew() when it starts?
[ "", "c++", "mfc", "" ]
Evil or not evil? ``` public static void Raise(this EventHandler handler, object sender, EventArgs args) { if (handler != null) { handler(sender, args); } } // Usage: MyButtonClicked.Raise(this, EventArgs.Empty); // This works too! Evil? EventHandler handler = null; handler.Raise(this, EventArgs.Empty); ``` Note that due to the nature of extension methods, MyButtonClicked.Raise will not throw a NullReferenceException if MyButtonClicked is null (e.g. there are no listeners to the MyButtonClicked event). Evil or not?
Not evil. I wish events worked this way by default. Can someone explain why an event with no subscribers is null?
You can always declare your events like this (not that I recommend it): ``` public event EventHandler<EventArgs> OnClicked = delegate { }; ``` That way they have something assigned to them when you call them, so they don't throw a NullReferenceException. You can probably get rid of the delegate keyword in C# 3.0...
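The pattern above (give the event a no-op default so invoking it is always safe) is language-agnostic. Here is a minimal sketch of the same idea in Python, with illustrative names; an event backed by an empty list simply does nothing when raised with zero subscribers:

```python
class Event:
    def __init__(self):
        self._handlers = []          # start with an empty list, never None

    def subscribe(self, handler):
        self._handlers.append(handler)

    def raise_event(self, sender, args=None):
        # Safe with zero subscribers: the loop simply does nothing
        for handler in list(self._handlers):
            handler(sender, args)

clicked = Event()
clicked.raise_event("button")        # no subscribers: no error, no null check

seen = []
clicked.subscribe(lambda sender, args: seen.append(sender))
clicked.raise_event("button")
```

The "null check" disappears because the empty case is represented by an empty collection rather than by null.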
Is calling an extension method on a "null" reference (i.e. event with no subscribers) evil?
[ "", "c#", "events", "extension-methods", "" ]
I've got two collections (generic Lists), let's call them ListA and ListB. In ListA I've got a few items of type A. In ListB I've got some items of type B that have the SAME ID (but not the same type) as the items in ListA, plus many more. I want to remove all the items from ListB that have the same ID as the ones in ListA. What's the best way of doing this? Is LINQ to Objects a nice fit? What algorithm would you use? Example ListA: ItemWithID1, ItemWithID2 ListB: ItemWithID1, ItemWithID2, ItemWithID3, ItemWithID4 EDIT: I forgot to mention in my original question that ListA and ListB don't contain the same types. So the only way to compare them is through the .Id property. Which invalidates the answers I've gotten so far.
I discovered that lambda expressions were a perfect match. Instead of a long LINQ to Objects query, I could do it in just a few lines with a lambda: ``` foreach(TypeA objectA in listA){ listB.RemoveAll(objectB => objectB.Id == objectA.Id); } ```
Here are two options. Not sure which one is faster. ``` listB.RemoveAll(listA.Contains); foreach (string str in listA.Intersect(listB)) listB.Remove(str); ```
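Since the two lists hold different types and only the `Id` values line up, building a set of IDs first turns the nested scan into two single passes. The idea, sketched in Python with made-up classes (in C# this would be a `HashSet<int>` of IDs plus `RemoveAll`):

```python
class TypeA:
    def __init__(self, id_):
        self.id = id_

class TypeB:
    def __init__(self, id_):
        self.id = id_

list_a = [TypeA(1), TypeA(2)]
list_b = [TypeB(1), TypeB(2), TypeB(3), TypeB(4)]

ids_to_remove = {a.id for a in list_a}                     # one pass over list_a
list_b = [b for b in list_b if b.id not in ids_to_remove]  # one pass over list_b
```

Each membership test against the set is O(1) on average, so the whole removal is O(n + m) instead of O(n * m).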
Deleting items from one collection in another collection
[ "", "c#", ".net", "collections", "linq-to-objects", "" ]
What is the most efficient way to read the last row with SQL Server? The table is indexed on a unique key -- the "bottom" key values represent the last row.
If you're using MS SQL, you can try: ``` SELECT TOP 1 * FROM table_Name ORDER BY unique_column DESC ```
``` select whatever,columns,you,want from mytable where mykey=(select max(mykey) from mytable); ```
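Both answers are easy to sanity-check against any SQL engine. Here is a quick check with Python's built-in sqlite3 (table and column names are invented; SQLite spells `TOP 1` as `LIMIT 1`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (mykey INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)",
                 [(1, "first"), (2, "middle"), (3, "last")])

# Equivalent of SELECT TOP 1 ... ORDER BY key DESC (SQLite uses LIMIT)
row = conn.execute(
    "SELECT val FROM mytable ORDER BY mykey DESC LIMIT 1").fetchone()

# Subquery variant: portable across engines, including SQL Server
row2 = conn.execute(
    "SELECT val FROM mytable WHERE mykey = (SELECT MAX(mykey) FROM mytable)").fetchone()
```

With an index on the key column, both forms let the engine seek straight to the last entry rather than scan the table.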
How to read the last row with SQL Server
[ "", "sql", "sql-server", "" ]
Imagine I have these python lists: ``` keys = ['name', 'age'] values = ['Monty', 42, 'Matt', 28, 'Frank', 33] ``` Is there a direct or at least a simple way to produce the following list of dictionaries ? ``` [ {'name': 'Monty', 'age': 42}, {'name': 'Matt', 'age': 28}, {'name': 'Frank', 'age': 33} ] ```
Here is the zip way: ``` def mapper(keys, values): n = len(keys) return [dict(zip(keys, values[i:i + n])) for i in range(0, len(values), n)] ```
It's not pretty but here's a one-liner using a list comprehension, zip and stepping: ``` [dict(zip(keys, a)) for a in zip(values[::2], values[1::2])] ```
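For the general case with any number of keys, the same zip idea can be written with the grouper idiom, which chunks the flat list by pulling repeatedly from a single shared iterator (a sketch; `to_dicts` is an illustrative name):

```python
def to_dicts(keys, values):
    # zip(*[iter(values)] * len(keys)) yields consecutive tuples of
    # len(keys) items, because all positions share the same iterator
    return [dict(zip(keys, group))
            for group in zip(*[iter(values)] * len(keys))]

keys = ['name', 'age']
values = ['Monty', 42, 'Matt', 28, 'Frank', 33]
result = to_dicts(keys, values)
```

Unlike the `[::2]`/`[1::2]` slicing, this works unchanged for three or more keys.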
Map two lists into one single list of dictionaries
[ "", "python", "dictionary", "list", "" ]
The documentation available on the [boost website](http://www.boost.org/doc/libs/1_36_0/doc/html/boost_asio.html) is... limited. From what I've been able to read, the general consensus is that it is simply difficult to find good documentation on the boost::asio library. Is this really the case? If so, why? Notes: * I have already found the (non-boost) [Asio website](http://tenermerx.com/Asio/WebHome) - and the documentation looks to be identical to that on the boost website. * I know that Boost::asio is new! I'm looking for solutions not excuses. Edit: * There is a [proposal to add a networking library to standard library for TR2](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2054.pdf) written by the author of Boost:asio (Christopher Kohlhoff). While it isn't documentation for boost:asio, it does use it as a base for the TR2 proposal. Since the author put more effort into this document, I have found it to be somewhat helpful, if not as a reference, then at least as an overview.
First, I've been using Boost.Asio for quite a while already -- and I share your concern. To address your question: * There really is very scarce documentation about Boost.Asio aside from the introduction and tutorial. I am not the author, but this is mostly because there are just too many things to document for something as low-level as an asynchronous IO library. * The examples give more away than the tutorials do. If you don't mind spending a little time looking at the different examples, they should suffice to get you started. If you want to run away with it, then the reference documentation should help you a lot. * Ask around on the Boost Users and Boost Developers mailing lists if you're really stuck or looking for specific guidance. I'm pretty sure a lot of people will be willing to address your concerns there. There are efforts (not part of Boost.Asio) to expose a lot of the functionality and possible alternative use cases. These are at best scattered around the web in blogs and other forms of non-packaged documentation. One area that remains unclear, and which will really need close coordination with the author and developers of the Boost.Asio library, is extending and customizing it for a specific platform or adding specific new functionality. This should improve, though; the good news is that it looks like Asio will become the reference implementation for a standard library technical report (for an asynchronous IO library in the STL) in the future.
Some nice documentation on boost including a chapter on asio can be found in a (free) boost book at <http://en.highscore.de/cpp/boost/index.html>. The chapter on asio provides a general overview and then goes as far as how to develop your own custom asio extensions. Really fantastic effort by Boris Schäling!
Best documentation for Boost:asio?
[ "", "c++", "boost", "boost-asio", "" ]
We are introducing static analysis tools into the build system for our Java product. We are using Maven2 so [Checkstyle](http://checkstyle.sourceforge.net/) and [PMD](http://pmd.sourceforge.net/) integration come for free. However it looks like there is a large overlap in functionality between these two tools, in terms of enforcing basic style rules. Is there a benefit from utilizing both of these? I don't want to maintain 2 tools if one will work. If we choose one, which one should we use and why? We are also planning on using FindBugs. Are there other static analysis tools we should look at? **Update:** Consensus seems to be that PMD is preferred over CheckStyle. I don't see a solid reason to use both, and I don't want to maintain 2 sets of rule files, so we will probably aim for PMD exclusively. We'll also be bringing in FindBugs, and perhaps, eventually, Macker to enforce architectural rules.
You should definitely use [FindBugs](http://findbugs.sourceforge.net/). In my experience, the false-positive rate is very low, and even the least-critical warnings it reports are worth addressing to some extent. As for Checkstyle vs. PMD, I would not use Checkstyle since it is pretty much only concerned with style. In my experience, Checkstyle will report on a ton of things that are completely irrelevant. PMD on the other hand is also able to point out questionable coding practices and its output is generally more relevant and useful.
Both tools are useful. Checkstyle will help you during your programming by checking your *coding style*, i.e. braces, naming, etc. Simple things, but very numerous! PMD will help you by checking more complicated rules, such as the design of your classes, or more specific problems like implementing the clone function correctly. Simply put, PMD checks your *programming style*. However, both tools suffer from overlapping rules that are sometimes poorly explained. With a bad configuration, you may end up checking things twice, or checking two opposite things, e.g. "Remove useless constructors" and "Always one constructor".
Checkstyle vs. PMD
[ "", "java", "static-analysis", "findbugs", "checkstyle", "pmd", "" ]
I'd like something like ``` int minIndex = list.FindMin(delegate (MyClass a, MyClass b) {returns a.CompareTo(b);}); ``` Is there a builtin way to do this in .NET?
Try looking at these: [Min](http://msdn.microsoft.com/en-us/library/bb909313.aspx) [Max](http://msdn.microsoft.com/en-us/library/bb909073.aspx) As long as your class implements IComparable, all you have to do is: ``` List<MyClass> list = new List<MyClass>(); //add whatever you need to add MyClass min = list.Min(); MyClass max = list.Max(); ```
Well, if you can't use .NET 3.5, you could always sort the list and then return list[0]. It might not be the fastest way, but it's probably the shortest code, especially if your class already implements IComparable. ``` List<SomeClass> list = new List<SomeClass>(); // populate the list // assume that SomeClass implements IComparable list.Sort(); return list[0]; // min, or return list[list.Count - 1]; // max ``` This also assumes, of course, that it doesn't matter which item you return if you have multiple items that are the minimum or maximum. If your class doesn't implement IComparable, you can pass in an anonymous delegate, something like this: ``` list.Sort(delegate(SomeClass x, SomeClass y) { return string.Compare(x.Name, y.Name); }); ```
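For comparison, a key-based selector avoids sorting entirely: one O(n) scan instead of an O(n log n) sort. The same idea sketched in Python (in LINQ you would order by the key and take the first element, or supply a comparer):

```python
class SomeClass:
    def __init__(self, name):
        self.name = name

items = [SomeClass("Carol"), SomeClass("Alice"), SomeClass("Bob")]

# min()/max() with a key make a single O(n) pass over the list
smallest = min(items, key=lambda x: x.name)
largest = max(items, key=lambda x: x.name)
```

This also sidesteps the need for the class to define its own ordering: the comparison logic lives at the call site.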
Shortest code to calculate list min/max in .NET
[ "", "c#", ".net", "algorithm", "" ]
I am working in Java on a fairly large project. My question is about how to best structure the set of Properties for my application. Approach 1: Have some static Properties object that's accessible by every class. (Disadvantages: then, some classes lose their generality should they be taken out of the context of the application; they also require explicit calls to some static object that is located in a different class and may in the future disappear; it just doesn't *feel* right, am I wrong?) Approach 2: Have the Properties be instantiated by the main class and handed down to the other application classes. (Disadvantages: you end up passing a pointer to the Properties object to almost every class and it seems to become very redundant and cumbersome; I don't *like* it.) Any suggestions?
I like using Spring dependency injection for many of the properties. You can treat your application like building blocks and inject the properties directly into the component that needs them. This preserves (encourages) encapsulation. Then, you assemble your components together and create the "main class". A nice side effect of the dependency injection is that your code should be more easily testable.
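Framework aside, the core of the idea is just constructor injection. A plain-Python sketch with invented names; the component receives only the properties it needs, and a test can inject a fake Config:

```python
class Config:
    def __init__(self, settings):
        self._settings = dict(settings)

    def get(self, key, default=None):
        return self._settings.get(key, default)

class ReportService:
    def __init__(self, config):
        # The component depends only on what it is given, not on a global
        self.output_dir = config.get("report.output_dir", "/tmp")

# The "main class" assembles the building blocks
service = ReportService(Config({"report.output_dir": "/var/reports"}))
```

Encapsulation is preserved because ReportService never reaches out to a static properties object; it can be reused anywhere a Config can be supplied.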
Actually, approach 2 works really well. I tried using a Singleton properties object on a recent project. Then, when it came time to add features, I needed to revise the Singleton, and regretted having to locate every place where I used `MySingleton.getInstance()`. Approach 2 of passing a global information object through your various constructors is easier to control. Using an explicit setter helps, too. ``` class MyConfig extends Properties {...} class SomeClass { MyConfig theConfig; public void setConfig( MyConfig c ) { theConfig = c; } ... } ``` It works well, and you'll be happy that you tightly controlled precisely which classes actually need configuration information.
Correct approach to Properties
[ "", "java", "oop", "properties", "structure", "" ]
I am trying to run some unit tests in a C# Windows Forms application (Visual Studio 2005), and I get the following error: > System.IO.FileLoadException: Could not load file or assembly 'Utility, Version=1.2.0.200, Culture=neutral, PublicKeyToken=764d581291d764f7' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)\*\* > > at x.Foo.FooGO() > > at x.Foo.Foo2(String groupName\_) in Foo.cs:line 123 > > at x.Foo.UnitTests.FooTests.TestFoo() in FooTests.cs:line 98\*\* > > System.IO.FileLoadException: Could not load file or assembly 'Utility, Version=1.2.0.203, Culture=neutral, PublicKeyToken=764d581291d764f7' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) I look in my references, and I only have a reference to `Utility version 1.2.0.203` (the other one is old). Any suggestions on how I figure out what is trying to reference this old version of this DLL file? Besides, I don't think I even have this old assembly on my hard drive. Is there any tool to search for this old versioned assembly?
The .NET assembly loader: * is unable to find 1.2.0.203 * but did find a 1.2.0.200 This assembly does not match what was requested and therefore you get this error. In simple words, it can't find the assembly that was referenced. Make sure it can find the right assembly by putting it in the GAC or in the application path. Run the command below to add the assembly DLL file to the GAC: ``` gacutil /i "path/to/my.dll" ``` Also see <https://learn.microsoft.com/archive/blogs/junfeng/the-located-assemblys-manifest-definition-with-name-xxx-dll-does-not-match-the-assembly-reference>.
You can do a couple of things to troubleshoot this issue. First, use Windows file search to search your hard drive for your assembly (.dll). Once you have a list of results, do View->Choose Details... and then check "File Version". This will display the version number in the list of results, so you can see where the old version might be coming from. Also, like Lars said, check your GAC to see what version is listed there. [This Microsoft article](http://msdn.microsoft.com/en-us/library/ez524kew(VS.80).aspx) states that assemblies found in the GAC are not copied locally during a build, so you might need to remove the old version before doing a rebuild all. (See my answer to [this question](https://stackoverflow.com/questions/163748/com-registration-and-the-gac#164707) for notes on creating a batch file to do this for you) If you still can't figure out where the old version is coming from, you can use the fuslogvw.exe application that ships with Visual Studio to get more information about the binding failures. Microsoft has information about this tool [here](http://msdn.microsoft.com/en-us/library/e74a18c4(VS.80).aspx). Note that you'll have to enable logging by setting the `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion\EnableLog` registry key to 1.
The located assembly's manifest definition does not match the assembly reference
[ "", "c#", "reference", "compiler-errors", "dependencies", "version", "" ]
I don't use Eclipse as an IDE, and have no interest in doing so. However, I do like its source-level debugging. Is there any way I can use it to debug a C++ Linux app without going through the ritual of creating a project? (In effect, can I just use it like a frontend to gdb?) If not, what are the steps I need to follow to create a project that I can use to just debug an existing C++ program that is built using Makefiles or other tools (SCons, CMake, etc.). I don't want to be able to "develop" in Eclipse; all I need to do is debug.
Take a look at [this question](https://stackoverflow.com/questions/204345/how-to-setup-a-linux-c-project-in-eclipse#204353). Create a C/C++-project, use your project's source directory as project directory, select to use the external builder, and change "make" to whatever tool you want. The tricky part is to get the indexer to work correctly and find all your header files. EDIT: CMake 2.6.x has support for generating CDT project files, which might be a more straightforward solution.
I don't know if this has changed in the 4+ years since the question was posted, but there's a much easier way to do this. I'm on Eclipse Luna (4.4.2). ``` > eclipse& ``` then `File` > `Import` > `C/C++` > `C/C++ Executable` > `Next` > browse to executable > `Next` > choose a project name > `Finish` No other project setup required, no source paths (which should be in the object code). Just like running gdb/insight/etc. Almost makes it worth installing Java.
How Do I Use Eclipse to Debug a C++ Program on Linux?
[ "", "c++", "eclipse", "debugging", "eclipse-cdt", "" ]
What does this mean exactly? I'm doing something like this: ``` File.Copy(@"\\foo\bar\baz.txt", @"c:\test\baz.txt"); ``` MSDN doesn't describe this exception except in general terms, and googling around just yields tables of error codes. I've confirmed the source file exists, and I'm 99% sure that I have the permissions to copy the file to the destination location.
Check this article for some information about using symlinks in .NET: "[Manipulating NTFS Junction Points in .NET](http://www.codeproject.com/KB/files/JunctionPointsNet.aspx)". According to this article: > "In particular the .NET libraries does > not include any functionality for > creating or querying properties of > Junction Points" But there is a method to actually get the target of the symlink, and then you'll be able to use File.Copy on it.
By default local evaluation of remote symbolic links is disabled. You could use fsutil to change that setting or you could delve into unmanaged code and resolve the link yourself.
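Resolving the link yourself before copying is straightforward with most standard libraries. A Python sketch of the idea (paths are invented; requires a platform where creating symlinks is permitted):

```python
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "baz.txt")
link = os.path.join(workdir, "link.txt")

with open(target, "w") as f:
    f.write("hello")
os.symlink(target, link)           # a symbolic link pointing at baz.txt

real = os.path.realpath(link)      # resolve the link to its real target
dest = os.path.join(workdir, "copy.txt")
shutil.copy(real, dest)            # copy the resolved file, not the link
```

The equivalent in .NET would be to resolve the link target first (via the article above or P/Invoke) and pass the resolved path to File.Copy.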
System.IO.IOException: The symbolic link cannot be followed because its type is disabled
[ "", "c#", "" ]
I am working on a legacy ASP application. I am attempting to insert a value (40.33) into a field in SQL Server 2000 that happens to be a float type. Every place I can see (via some logging) in the application is sending 40.33 to the Stored Procedure. When I run SQL Profiler against the database while the call is happening, the value that I see in the trace is 4.033000183105469e+001 Where is all the extra garbage coming from (the 183105469)? Why is it that when I pass in 40, or 40.25 there is nothing extra? Is this just one of the weird side effects of using float? When I am writing something I normally use money or decimal or something else, so not that familiar with the float datatype.
Yes, this is a weird, although [well-known](http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems), side effect of using FLOAT. In Microsoft SQL Server, you should use exact numeric datatypes such as [NUMERIC, DECIMAL](http://msdn.microsoft.com/en-us/library/ms187746(SQL.90).aspx), [MONEY or SMALLMONEY](http://msdn.microsoft.com/en-us/library/ms179882(SQL.90).aspx) if you need exact numerics with scale. Do not use FLOAT.
I think this is probably just a precision issue - the 0.33 part of the number can't be represented exactly in binary - this is probably the closest that you can get to.
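You can inspect the exact stored value directly. Python's decimal module prints the full decimal expansion of the nearest IEEE 754 double, which is the same arithmetic at work here:

```python
from decimal import Decimal

# 40.33 has no exact binary representation; the nearest double is
# slightly off, which is where the "183105469" digits come from
stored = Decimal(40.33)

# 40.25 = 161/4 is an exact sum of powers of two, so nothing extra appears
stored_quarter = Decimal(40.25)
```

Values whose fractional part is a sum of powers of two (.0, .25, .5, ...) round-trip exactly; most others, like .33, do not.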
Inserting a value into an SQL float column generates a weird result
[ "", "sql", "sql-server", "floating-point", "" ]
Developing a heavily XML-based Java-application, I recently encountered an interesting problem on Ubuntu Linux. My application, using the [Java Plugin Framework](http://jpf.sourceforge.net/), appears unable to convert a [dom4j](http://www.dom4j.org/)-created XML document to [Batik's](http://xmlgraphics.apache.org/batik/) implementation of the SVG specification. On the console, I learn that an error occurs: ``` Exception in thread "AWT-EventQueue-0" java.lang.LinkageError: loader constraint violation in interface itable initialization: when resolving method "org.apache.batik.dom.svg.SVGOMDocument.createAttribute(Ljava/lang/String;)Lorg/w3c/dom/Attr;" the class loader (instance of org/java/plugin/standard/StandardPluginClassLoader) of the current class, org/apache/batik/dom/svg/SVGOMDocument, and the class loader (instance of <bootloader>) for interface org/w3c/dom/Document have different Class objects for the type org/w3c/dom/Attr used in the signature at org.apache.batik.dom.svg.SVGDOMImplementation.createDocument(SVGDOMImplementation.java:149) at org.dom4j.io.DOMWriter.createDomDocument(DOMWriter.java:361) at org.dom4j.io.DOMWriter.write(DOMWriter.java:138) ``` I figure that the problem is caused by a conflict between the original classloader from the JVM and the classloader deployed by the plugin framework. To my knowledge, it's not possible to specify a classloader for the framework to use. It might be possible to hack it, but I would prefer a less aggressive approach to solving this problem, since (for whatever reason) it only occurs on Linux systems. Has one of you encountered such a problem and has any idea how to fix it or at least get to the core of the issue?
LinkageError is what you'll get in a classic case where you have a class C loaded by more than one classloader and those classes are being used together in the same code (compared, cast, etc.). It doesn't matter if it is the same Class name or even if it's loaded from the identical jar - a Class from one classloader is always treated as a different Class if loaded from another classloader. The message (which has improved a lot over the years) says: ``` Exception in thread "AWT-EventQueue-0" java.lang.LinkageError: loader constraint violation in interface itable initialization: when resolving method "org.apache.batik.dom.svg.SVGOMDocument.createAttribute(Ljava/lang/String;)Lorg/w3c/dom/Attr;" the class loader (instance of org/java/plugin/standard/StandardPluginClassLoader) of the current class, org/apache/batik/dom/svg/SVGOMDocument, and the class loader (instance of <bootloader>) for interface org/w3c/dom/Document have different Class objects for the type org/w3c/dom/Attr used in the signature ``` So, here the problem is in resolving the SVGOMDocument.createAttribute() method, which uses org.w3c.dom.Attr (part of the standard DOM library). But the version of Attr loaded with Batik was loaded from a different classloader than the instance of Attr you're passing to the method. You'll see that Batik's version seems to be loaded from the Java plugin. And yours is being loaded from "<bootloader>", which is most likely one of the built-in JVM loaders (boot classpath, ESOM, or classpath). The three prominent classloader models are: * delegation (the default in the JDK - ask parent, then me) * post-delegation (common in plugins, servlets, and places where you want isolation - ask me, then parent) * sibling (common in dependency models like OSGi, Eclipse, etc.) I don't know what delegation strategy the JPF classloader uses, but the key is that you want one version of the dom library to be loaded and everyone to source that class from the same location. That may mean removing it from the classpath and loading it as a plugin, or preventing Batik from loading it, or something else.
Sounds like a classloader hierarchy problem. I can't tell what type of environment your application is deployed in, but sometimes this problem can occur in a web environment - where the application server creates a hierarchy of classloaders, resembling something like: javahome/lib - as root appserver/lib - as child of root webapp/WEB-INF/lib - as child of child of root etc. Usually classloaders delegate loading to their parent classloader (this is known as "`parent-first`"), and if that classloader cannot find the class, then the child classloader attempts to. For example, if a class deployed as a JAR in webapp/WEB-INF/lib tries to load a class, first it asks the classloader corresponding to appserver/lib to load the class (which in turn asks the classloader corresponding to javahome/lib to load the class), and if this lookup fails, then WEB-INF/lib is searched for a match to this class. In a web environment, you can run into problems with this hierarchy. For example, one mistake/problem I've run into before was when a class in WEB-INF/lib depended on a class deployed in appserver/lib, which in turn depended on a class deployed in WEB-INF/lib. This caused failures because while classloaders are able to delegate to the parent classloader, they cannot delegate back down the tree. So, the WEB-INF/lib classloader would ask the appserver/lib classloader for a class, the appserver/lib classloader would load that class and try to load the dependent class, and fail since it could not find that class in appserver/lib or javahome/lib. So, while you may not be deploying your app in a web/app server environment, my too-long explanation might apply to you if your environment has a hierarchy of classloaders set up. Does it? Is JPF doing some sort of classloader magic to be able to implement its plugin features?
How to deal with LinkageErrors in Java?
[ "", "java", "linux", "classloader", "linkageerror", "" ]
I have a certain POJO which needs to be persisted on a database, current design specifies its field as a single string column, and adding additional fields to the table is not an option. Meaning, the objects need to be serialized in some way. So just for the basic implementation I went and designed my own serialized form of the object which meant concatenating all it's fields into one nice string, separated by a delimiter I chose. But this is rather ugly, and can cause problems, say if one of the fields contains my delimiter. So I tried basic Java serialization, but from a basic test I conducted, this somehow becomes a very costly operation (building a ByteArrayOutputStream, an ObjectOutputStream, and so on, same for the deserialization). So what are my options? What is the preferred way for serializing objects to go on a database? **Edit:** this is going to be a very common operation in my project, so overhead must be kept to a minimum, and performance is crucial. Also, third-party solutions are nice, but irrelevant (and usually generate overhead which I am trying to avoid)
Elliotte Rusty Harold wrote up a [nice argument](http://xom.nu/designprinciples.xhtml#d0e323) against using Java Object serialization for the objects in his XOM library. The same principles apply to you. The built-in Java serialization is Java-specific, fragile, and slow, and so is best avoided. You have roughly the right idea in using a String-based format. The problem, as you state, is that you're running into formatting/syntax problems with delimiters. The solution is to use a format that is already built to handle this. If this is a standardized format, then you can also potentially use other libraries/languages to manipulate it. Also, a string-based format means that you have a hope of understanding it just by eyeballing the data; binary formats remove that option. XML and JSON are two great options here; they're standardized, text-based, flexible, readable, and have lots of library support. They'll also perform surprisingly well (sometimes even *faster* than Java serialization).
You might try [Protocol Buffers](http://code.google.com/p/protobuf/), an open-source project from Google that is said to be fast (it generates a shorter serialized form than XML and works faster). It also handles the addition of new fields gracefully (missing fields get default values).
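To illustrate the text-based route, here is a JSON round trip with just the standard library (field names invented). The delimiter problem disappears because the serializer handles escaping, and the result is a single string that fits in one column:

```python
import json

# A stand-in for the POJO's fields, including a value containing a
# comma that would break a homemade delimiter-based format
pojo = {"name": "example, with delimiter", "count": 3, "tags": ["a", "b"]}

serialized = json.dumps(pojo)      # one string, safe to store in one column
restored = json.loads(serialized)  # exact round trip, no escaping bugs
```

The same round trip works in Java with any of the common JSON libraries; the point is that the format, not your code, owns the quoting rules.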
Homemade vs. Java Serialization
[ "", "java", "database", "serialization", "" ]
I am developing a little app that retrieves an XML file, located on a remote server (`http://example.com/myfile.xml`). This file is relatively big, and it contains a big list of geolocations with other information that I need to use for my app. So I read this file remotely once and insert it into a little SqlCE file (`database.sdf`). If I need to access geolocation #1, I'll just run a SELECT statement against this database instead of loading the whole XML file every time. But I would like to know if it's possible to do this without using .sdf files. What is the most efficient (fastest) way? Saving the big XML file locally once and loading it into a DataSet every time I start my app? This would make the app slow to load every time. Saving the big XML file locally once and reading the nodes one by one to look for geolocation #1? Or is it possible to retrieve geolocation #1 from the remote XML directly (`http://example.com/myfile.xml`) without reading the whole file?
Re protobuf-net, there isn't a separate download for the CF version at the moment, but there is a csproj in the source for both CF 2.0 and CF 3.5. To clarify your question: protobuf-net doesn't even use a `.proto` file (at the moment); a `.proto` file just describes what the data is - protobuf-net simply looks at your classes and infers the schema from that (similar to how XmlSerializer / DataContractSerializer etc. work). So there *is* no `.proto` file - just classes that look like your data. However, before you embark on creating classes that look like your data, I wonder if you couldn't simply use GZIP or [PK]ZIP to compress the data and transfer it "as is". XML generally compresses *very* well. Of course, finding a GZIP (etc.) implementation for CF then becomes the issue. Of course, if you *want* to use protobuf-net here, I'll happily advise etc. if you get issues... The other option is for your CF app to call into a web service that has the data locally...
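The compression suggestion is easy to sanity-check. With Python's standard library and some invented, repetitive sample XML (a list of geolocations, like the question describes):

```python
import gzip

# Repetitive XML, like a long list of geolocation elements,
# compresses very well because the tag structure repeats
xml = ("<locations>"
       + "".join('<loc id="%d" lat="51.5" lon="-0.1"/>' % i for i in range(1000))
       + "</locations>").encode("utf-8")

compressed = gzip.compress(xml)
restored = gzip.decompress(compressed)
```

On data like this, gzip typically shrinks the payload by well over 4x, which is the bandwidth the HEAD-of-the-wire transfer saves.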
Load the big XML file, convert it into an appropriate different data structure, save it to a file in an efficient format. (XML really isn't terribly efficient.) I believe [Marc Gravell's Protocol Buffers](http://code.google.com/p/protobuf-net/) implementation works on the Compact Framework... (None of the protobuf implementations are deemed production-ready yet, but a couple are close. We need testers!)
How to read remote XML file the most efficiently way?
[ "", "c#", "xml", "compact-framework", "sdf", "sql-server-mobile", "" ]
I have one std::list<> container and these threads: * One writer thread which adds elements indefinitely. * One reader/writer thread which reads and removes elements while available. * Several reader threads which access the SIZE of the container (by using the size() method) There is a normal mutex which protects the access to the list from the first two threads. My question is: do the size-reader threads need to acquire this mutex too? Should I use a read/write mutex? I'm in a Windows environment using Visual C++ 6. **Update**: It looks like the answer is not clear yet. To sum up the main doubts: Do I still need to protect the SIZE reader threads even if they only call size() (which returns a simple variable), taking into account that I don't need the exact value (i.e. I can assume a +/- 1 variation)? How could a race condition make my size() call return an invalid value (i.e. one totally unrelated to the good one)? **Answer**: In general, the reader threads must be protected to avoid race conditions. Nevertheless, in my opinion, some of the questions stated above in the update haven't been answered yet. Thanks in advance! Thank you all for your answers!
Yes, the read threads will need some sort of mutex control, otherwise the writes will change things from under them. A reader/writer mutex should be enough. But strictly speaking this is an implementation-specific issue. It's possible that an implementation may have mutable members even in const objects that are read-only in your code.
Check out the concurrent containers provided by Intel's Open Source [Threading Building Blocks](http://www.threadingbuildingblocks.org/) library. Look under "Container Snippets" on the [Code Samples page](http://www.threadingbuildingblocks.org/codesamples.php) for some examples. They have concurrent / thread-safe containers for vectors, hash maps and queues.
Do I need to protect read access to an STL container in a multithreading environment?
[ "", "c++", "multithreading", "visual-c++", "stl", "concurrency", "" ]
Given a list of urls, I would like to check that each url: * Returns a 200 OK status code * Returns a response within X amount of time The end goal is a system that is capable of flagging urls as potentially broken so that an administrator can review them. The script will be written in PHP and will most likely run on a daily basis via cron. The script will be processing approximately 1000 urls at a go. Question has two parts: * Are there any bigtime gotchas with an operation like this, what issues have you run into? * What is the best method for checking the status of a url in PHP considering both accuracy and performance?
Use the PHP cURL extension. Unlike fopen() it can also make HTTP HEAD requests which are sufficient to check the availability of a URL and save you a ton of bandwidth as you don't have to download the entire body of the page to check. As a starting point you could use some function like this:

```
function is_available($url, $timeout = 30) {
   $ch = curl_init(); // get cURL handle

   // set cURL options
   $opts = array(CURLOPT_RETURNTRANSFER => true, // do not output to browser
                 CURLOPT_URL => $url,            // set URL
                 CURLOPT_NOBODY => true,         // do a HEAD request only
                 CURLOPT_TIMEOUT => $timeout);   // set timeout
   curl_setopt_array($ch, $opts);

   curl_exec($ch); // do it!

   $retval = curl_getinfo($ch, CURLINFO_HTTP_CODE) == 200; // check if HTTP OK

   curl_close($ch); // close handle

   return $retval;
}
```

However, there's a ton of possible optimizations: You might want to re-use the cURL instance and, if checking more than one URL per host, even re-use the connection. Oh, and this code does check strictly for HTTP response code 200. It does not follow redirects (302) -- but there also is a cURL option for that.
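For comparison outside PHP, the same HEAD-request idea sketched with Python's standard library. The tiny local server exists only so the example is runnable without touching the real network; everything here is illustrative rather than production code:

```python
# HEAD-check a URL; a throwaway local HTTP server stands in for the sites
# being monitored so the sketch needs no real network access.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlsplit

def is_available(url, timeout=30):
    """HEAD-request the URL and report whether it answered 200 OK."""
    parts = urlsplit(url)
    conn = http.client.HTTPConnection(parts.hostname, parts.port or 80,
                                      timeout=timeout)
    try:
        conn.request("HEAD", parts.path or "/")
        return conn.getresponse().status == 200
    except OSError:
        return False
    finally:
        conn.close()

class _DemoHandler(BaseHTTPRequestHandler):
    def do_HEAD(self):
        self.send_response(200 if self.path == "/ok" else 404)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), _DemoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:%d" % server.server_address[1]
```

The same caveats as the PHP version apply: this only accepts a strict 200, so you'd need extra handling if you want to treat redirects as "alive".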
Look into cURL. There's a library for PHP. There's also an executable version of cURL so you could even write the script in bash.
How do I check for valid (not dead) links programmatically using PHP?
[ "", "php", "url", "cron", "" ]
I'm planning to make a very simple program using php and mySQL. The main page will take information and make a new row in the database with that information. However, I need a number to put in for the primary key. Unfortunately, I have no idea about the normal way to determine what number to use. Preferably, if I delete a row, that row's key won't ever be reused. A preliminary search has turned up the AUTOINCREMENT keyword in mySQL. However, I'd still like to know if that will work for what I want and what the common solution to this issue is.
In MySQL that's the [standard solution.](http://dev.mysql.com/doc/refman/5.0/en/example-auto-increment.html) ``` CREATE TABLE animals ( id MEDIUMINT NOT NULL AUTO_INCREMENT, name CHAR(30) NOT NULL, PRIMARY KEY (id) ); ```
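To see the whole round trip - database assigns the key, application reads it back after the INSERT - here is a sketch using sqlite3 (bundled with Python) purely as a stand-in for MySQL; with sqlite's AUTOINCREMENT a deleted row's key is also never reused, which matches what you asked for:

```python
# Let the database generate the primary key, then read it back.
# sqlite3 is used only for illustration; MySQL exposes the same idea
# through AUTO_INCREMENT and LAST_INSERT_ID()/mysqli_insert_id().
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE animals (
                    id INTEGER PRIMARY KEY AUTOINCREMENT,
                    name TEXT NOT NULL)""")

cur = conn.execute("INSERT INTO animals (name) VALUES (?)", ("dog",))
new_id = cur.lastrowid  # the key the database just generated
```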
Unless you have an overriding reason to generate your own PK, using the autoincrement would be good enough. That way the database manages the keys. When you are inserting a row you have to leave out the primary key column. Say you have a table table = (a, b, c) where a is the primary key; then the insert statement would be insert into table (b, c) values ('bbb', 'ccc') and the primary key will be auto inserted by the database.
What's the standard way to determine the number for a primary key?
[ "", "php", "mysql", "database", "" ]
I was playing around with my own Sudoku solver and was looking for some pointers to good and fast design when I came across this: ``` def r(a):i=a.find('0');~i or exit(a);[m in[(i-j)%9*(i/9^j/9)*(i/27^j/27|i%9/3^j%9/3)or a[j]for j in range(81)]or r(a[:i]+m+a[i+1:])for m in'%d'%5**18] from sys import*;r(argv[1]) ``` My own implementation solves Sudokus the same way I solve them in my head but how does this cryptic algorithm work? <http://scottkirkwood.blogspot.com/2006/07/shortest-sudoku-solver-in-python.html>
Well, you can make things a little easier by fixing up the syntax: ``` def r(a): i = a.find('0') ~i or exit(a) [m in[(i-j)%9*(i/9^j/9)*(i/27^j/27|i%9/3^j%9/3)or a[j]for j in range(81)] or r(a[:i]+m+a[i+1:])for m in'%d'%5**18] from sys import * r(argv[1]) ``` Cleaning up a little: ``` from sys import exit, argv def r(a): i = a.find('0') if i == -1: exit(a) for m in '%d' % 5**18: m in[(i-j)%9*(i/9^j/9)*(i/27^j/27|i%9/3^j%9/3) or a[j] for j in range(81)] or r(a[:i]+m+a[i+1:]) r(argv[1]) ``` Okay, so this script expects a command-line argument, and calls the function r on it. If there are no zeros in that string, r exits and prints out its argument. > (If another type of object is passed, > None is equivalent to passing zero, > and any other object is printed to > sys.stderr and results in an exit > code of 1. In particular, > sys.exit("some error message") is a > quick way to exit a program when an > error occurs. See > <http://www.python.org/doc/2.5.2/lib/module-sys.html>) I guess this means that zeros correspond to open spaces, and a puzzle with no zeros is solved. Then there's that nasty recursive expression. The loop is interesting: `for m in'%d'%5**18` Why 5\*\*18? It turns out that `'%d'%5**18` evaluates to `'3814697265625'`. This is a string that has each digit 1-9 at least once, so maybe it's trying to place each of them. In fact, it looks like this is what `r(a[:i]+m+a[i+1:])` is doing: recursively calling r, with the first blank filled in by a digit from that string. But this only happens if the earlier expression is false. Let's look at that: `m in [(i-j)%9*(i/9^j/9)*(i/27^j/27|i%9/3^j%9/3) or a[j] for j in range(81)]` So the placement is done only if m is not in that monster list. Each element is either a number (if the first expression is nonzero) or a character (if the first expression is zero). m is ruled out as a possible substitution if it appears as a character, which can only happen if the first expression is zero. When is the expression zero? 
It has three parts that are multiplied:

* `(i-j)%9` which is zero if i and j are a multiple of 9 apart, i.e. the same column.
* `(i/9^j/9)` which is zero if i/9 == j/9, i.e. the same row.
* `(i/27^j/27|i%9/3^j%9/3)` which is zero if both of these are zero:
  + `i/27^j/27` which is zero if i/27 == j/27, i.e. the same block of three rows
  + `i%9/3^j%9/3` which is zero if i%9/3 == j%9/3, i.e. the same block of three columns

If any of these three parts is zero, the entire expression is zero. In other words, if i and j share a row, column, or 3x3 block, then the value of j can't be used as a candidate for the blank at i. Aha!

```
from sys import exit, argv

def r(a):
    i = a.find('0')
    if i == -1:
        exit(a)
    for m in '3814697265625':
        okay = True
        for j in range(81):
            if (i-j)%9 == 0 or (i/9 == j/9) or (i/27 == j/27 and i%9/3 == j%9/3):
                if a[j] == m:
                    okay = False
                    break
        if okay:
            # At this point, m is not excluded by any row, column, or block, so let's place it and recurse
            r(a[:i]+m+a[i+1:])

r(argv[1])
```

Note that if none of the placements work out, r will return and back up to the point where something else can be chosen, so it's a basic depth first algorithm. Not using any heuristics, it's not particularly efficient.
I took this puzzle from Wikipedia (<http://en.wikipedia.org/wiki/Sudoku>): ``` $ time python sudoku.py 530070000600195000098000060800060003400803001700020006060000280000419005000080079 534678912672195348198342567859761423426853791713924856961537284287419635345286179 real 0m47.881s user 0m47.223s sys 0m0.137s ``` Addendum: How I would rewrite it as a maintenance programmer (this version has about a 93x speedup :) ``` import sys def same_row(i,j): return (i/9 == j/9) def same_col(i,j): return (i-j) % 9 == 0 def same_block(i,j): return (i/27 == j/27 and i%9/3 == j%9/3) def r(a): i = a.find('0') if i == -1: sys.exit(a) excluded_numbers = set() for j in range(81): if same_row(i,j) or same_col(i,j) or same_block(i,j): excluded_numbers.add(a[j]) for m in '123456789': if m not in excluded_numbers: # At this point, m is not excluded by any row, column, or block, so let's place it and recurse r(a[:i]+m+a[i+1:]) if __name__ == '__main__': if len(sys.argv) == 2 and len(sys.argv[1]) == 81: r(sys.argv[1]) else: print 'Usage: python sudoku.py puzzle' print ' where puzzle is an 81 character string representing the puzzle read left-to-right, top-to-bottom, and 0 is a blank' ```
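If you want to convince yourself that the golfed arithmetic really encodes the row/column/block rule, a brute-force check over all 81x81 index pairs settles it (Python 3 here, so `//` stands in for the original's Python 2 integer `/`):

```python
# Verify: the cryptic expression is zero exactly when two cells conflict.
def same_unit(i, j):
    """Readable version: do cells i and j share a row, column, or block?"""
    return (i // 9 == j // 9                     # same row
            or (i - j) % 9 == 0                  # same column
            or (i // 27 == j // 27               # same band of three rows...
                and i % 9 // 3 == j % 9 // 3))   # ...and stack of three columns

def cryptic_is_zero(i, j):
    """The golfed expression, translated to Python 3 integer division."""
    return (i - j) % 9 * (i // 9 ^ j // 9) * (
        i // 27 ^ j // 27 | i % 9 // 3 ^ j % 9 // 3) == 0

mismatches = [(i, j) for i in range(81) for j in range(81)
              if same_unit(i, j) != cryptic_is_zero(i, j)]
```

`mismatches` comes out empty: the two formulations agree on every pair of cells.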
unobfuscating it:

```
def r(a):
    i = a.find('0')  # returns -1 on fail, index otherwise
    ~i or exit(a)    # ~(-1) == 0, anything else is not 0
                     # thus: if i == -1: exit(a)
    inner_lexp = [(i-j)%9*(i/9 ^ j/9)*(i/27 ^ j/27 | i%9/3 ^ j%9/3) or a[j]
                  for j in range(81)]  # a appears to be a string of 81
                                       # characters with 0 for empty and 1-9
                                       # otherwise
    [m in inner_lexp or r(a[:i]+m+a[i+1:]) for m in'%d'%5**18]  # recurse,
                                       # trying all possible digits for that
                                       # empty field if m is not in the
                                       # inner lexp

from sys import *
r(argv[1])  # thus, a is some string
```

So, we just need to work out the inner list expression. I know it collects the digits set in the line -- otherwise, the code around it makes no sense. However, I have no real clue how it does that (and I'm too tired to work out that binary fanciness right now, sorry)
Shortest Sudoku Solver in Python - How does it work?
[ "", "python", "algorithm", "" ]
I need to do a few very simple URL manipulations in Java. Like get the value for a parameter in the query, or update it, ... I was expecting to find a simple utility class doing that in the commons-lang package, but no. I know it is a simple problem, but if there is something already written, why do it again ? Do you know of any ? I would like to have at least the following capabilities : ``` String myUrl = "http://www.example.com/test.html?toto=1&titi=2"; // get the value of a parameter String parameterValue = UrlUtils.getParameterValue(myUrl, "toto"); Assert.equals(parameterValue, "1"); // update a parameter String newUrl = UrlUtils.updateParameter(myUrl, "toto", 3); parameterValue = UrlUtils.getParameterValue(myUrl, "toto"); Assert.equals(parameterValue, "3"); ``` Ideally, it would take care of all encoding related issues, and work with java.net.Url as well as with Strings. Thanks for your help !
I think what you want is called a query string parser instead of a URL manipulator, and here's one: <http://ostermiller.org/utils/CGIParser.java.html>
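For comparison, the exact API sketched in the question is only a few lines over a standard parse/modify/rebuild cycle - shown here with Python's `urllib.parse` purely to illustrate the approach a Java query-string parser takes too (the function names just mirror the asker's hypothetical UrlUtils; they aren't from any real library):

```python
# Parse the query string into a dict, read or replace a value, rebuild.
# parse_qs/urlencode take care of the percent-encoding concerns.
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit

def get_parameter_value(url, name):
    return parse_qs(urlsplit(url).query)[name][0]

def update_parameter(url, name, value):
    parts = urlsplit(url)
    query = parse_qs(parts.query)
    query[name] = [str(value)]
    return urlunsplit(parts._replace(query=urlencode(query, doseq=True)))

my_url = "http://www.example.com/test.html?toto=1&titi=2"
```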
Apache's httpcomponents library has a URL decoder: <http://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/client/utils/URLEncodedUtils.html> Httpcomponents is the successor to commons http client.
A good library to do URL Query String manipulation in Java
[ "", "java", "url", "" ]
In Visual Studio, is there any way to make the debugger break whenever a certain file (or class) is entered? Please don't answer "just set a breakpoint at the beginning of every method" :) I am using C#.
Macros can be your friend. Here is a macro that will add a breakpoint to every method in the current class (put the cursor somewhere in the class before running it). ``` Public Module ClassBreak Public Sub BreakOnAnyMember() Dim debugger As EnvDTE.Debugger = DTE.Debugger Dim sel As EnvDTE.TextSelection = DTE.ActiveDocument.Selection Dim editPoint As EnvDTE.EditPoint = sel.ActivePoint.CreateEditPoint() Dim classElem As EnvDTE.CodeElement = editPoint.CodeElement(vsCMElement.vsCMElementClass) If Not classElem Is Nothing Then For Each member As EnvDTE.CodeElement In classElem.Children If member.Kind = vsCMElement.vsCMElementFunction Then debugger.Breakpoints.Add(member.FullName) End If Next End If End Sub End Module ``` **Edit:** Updated to add breakpoint by function name, rather than file/line number. It 'feels' better and will be easier to recognise in the breakpoints window.
You could start by introducing some sort of Aspect-Oriented Programming - see for instance [this explanation](http://blogs.msdn.com/saveenr/archive/2008/11/20/c-aop-elegant-tracing-with-postsharp-and-aspect-oriented-programming.aspx) - and then put a breakpoint in the single OnEnter method. Depending on which AOP framework you choose, it'd require a little decoration in your code and introduce a little overhead (that you can remove later) but at least you won't need to set breakpoints everywhere. In some frameworks you might even be able to introduce it with no code change at all, just an XML file on the side?
break whenever a file (or class) is entered
[ "", "c#", "visual-studio", "debugging", "" ]
Given that a function `a_method` has been defined like ``` def a_method(arg1, arg2): pass ``` Starting from `a_method` itself, how can I get the argument names - for example, as a tuple of strings, like `("arg1", "arg2")`?
Take a look at the [`inspect`](http://docs.python.org/library/inspect.html) module - this will do the inspection of the various code object properties for you.

```
>>> inspect.getfullargspec(a_method)
FullArgSpec(args=['arg1', 'arg2'], varargs=None, varkw=None, defaults=None, kwonlyargs=[], kwonlydefaults=None, annotations={})
```

The other results are the name of the \*args and \*\*kwargs variables, and the defaults provided. i.e.

```
>>> def foo(a, b, c=4, *arglist, **keywords): pass
>>> inspect.getfullargspec(foo)
FullArgSpec(args=['a', 'b', 'c'], varargs='arglist', varkw='keywords', defaults=(4,), kwonlyargs=[], kwonlydefaults=None, annotations={})
```

Note that some callables may not be introspectable in certain implementations of Python. For example, in CPython, some built-in functions defined in C provide no metadata about their arguments. As a result, you will get a `ValueError` if you use `inspect.getfullargspec()` on a built-in function. Since Python 3.3, you can use [`inspect.signature()`](https://docs.python.org/library/inspect.html#introspecting-callables-with-the-signature-object) to see the call signature of a callable object:

```
>>> inspect.signature(foo)
<Signature (a, b, c=4, *arglist, **keywords)>
```
In CPython, the number of arguments is ``` a_method.func_code.co_argcount ``` and their names are in the beginning of ``` a_method.func_code.co_varnames ``` These are implementation details of CPython, so this probably does not work in other implementations of Python, such as IronPython and Jython. One portable way to admit "pass-through" arguments is to define your function with the signature `func(*args, **kwargs)`. This is used a lot in e.g. [matplotlib](https://matplotlib.org/), where the outer API layer passes lots of keyword arguments to the lower-level API.
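In Python 3 the same attributes are reachable via `__code__` (the `func_code` spelling is Python 2 only). Still CPython-flavoured territory rather than a stable public API, but it works for ordinary Python functions:

```python
# Python 3 spelling of the code-object introspection described above.
def a_method(arg1, arg2):
    pass

code = a_method.__code__
names = code.co_varnames[:code.co_argcount]  # ('arg1', 'arg2')
```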
How to get method parameter names?
[ "", "python", "decorator", "introspection", "python-datamodel", "" ]
I have a number of child tables that have a foreign key to a parent table. How do I add an entry in the parent table and get the primary key of that entry, so that I can then enter rows in the child tables that point to the entry in the parent table? I'm doing this in a MS Access Database from a C# application.
Microsoft Access 2000 or later does support the @@IDENTITY property to retrieve the value of an Autonumber field after an INSERT. ([msdn](http://msdn.microsoft.com/en-us/library/ks9f57t0(VS.71).aspx)) Edit: [This](http://msdn.microsoft.com/en-us/library/ks9f57t0.aspx) is the link to a similar article for .NET 3.5
[Getting the identity of the most recently added record](http://www.mikesdotnetting.com/Article.aspx?ArticleID=54)
Get the Primary Key of a new Entry
[ "", "c#", ".net", "database", "ms-access", "primary-key", "" ]
I need to search a string and replace all occurrences of `%FirstName%` and `%PolicyAmount%` with a value pulled from a database. The problem is the capitalization of FirstName varies. That prevents me from using the `String.Replace()` method. I've seen web pages on the subject that suggest ``` Regex.Replace(strInput, strToken, strReplaceWith, RegexOptions.IgnoreCase); ``` However for some reason when I try and replace `%PolicyAmount%` with `$0`, the replacement never takes place. I assume that it has something to do with the dollar sign being a reserved character in regex. Is there another method I can use that doesn't involve sanitizing the input to deal with regex special characters?
[From MSDN](http://msdn.microsoft.com/en-us/library/ewy2t5e0.aspx) $0 - "Substitutes the last substring matched by group number number (decimal)." In .NET Regular expressions group 0 is always the entire match. For a literal $ you need to ``` string value = Regex.Replace("%PolicyAmount%", "%PolicyAmount%", @"$$0", RegexOptions.IgnoreCase); ```
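The same trap exists outside .NET - in Python's `re.sub` it is backslashes rather than dollars that are special in the replacement string - and one way to dodge replacement-escape processing entirely is to pass a function as the replacement (a Python aside for comparison, not .NET code):

```python
# A callable replacement is used verbatim, so "$0" needs no escaping.
import re

text = "You will receive %PolicyAmount%."
result = re.sub(re.escape("%policyamount%"), lambda m: "$0",
                text, flags=re.IGNORECASE)
```

`re.escape` on the pattern plays the same role as sanitizing the token; the lambda makes the dollar sign in the replacement inert.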
Seems like `string.Replace` *should* have an overload that takes a `StringComparison` argument. Since it doesn't, you could try something like this: ``` public static string ReplaceString(string str, string oldValue, string newValue, StringComparison comparison) { StringBuilder sb = new StringBuilder(); int previousIndex = 0; int index = str.IndexOf(oldValue, comparison); while (index != -1) { sb.Append(str.Substring(previousIndex, index - previousIndex)); sb.Append(newValue); index += oldValue.Length; previousIndex = index; index = str.IndexOf(oldValue, index, comparison); } sb.Append(str.Substring(previousIndex)); return sb.ToString(); } ```
Is there an alternative to string.Replace that is case-insensitive?
[ "", "c#", ".net", "string", ".net-2.0", "replace", "" ]
I need to know the default port settings for the following services 1. SQL Server 2. SQL Browser 3. SQL Reporting services 4. SQL Analysis services I need to know the port settings for these services for different versions of SQL Server (2000,2005,2008) Also let me know whether the default port setting will change based on sql server versions.
1. The default SQL Server port is 1433 but only if it's a default install. Named instances get a random port number. 2. The browser service runs on port UDP 1434. 3. Reporting services is a web service - so it's port 80, or 443 if it's SSL enabled. 4. Analysis services is 2382 but only if it's a default install. Named instances get a random port number.
If you have access to the server then you can use ``` select local_tcp_port from sys.dm_exec_connections where local_tcp_port is not null ``` For full details see [port number of SQL Server](https://stackoverflow.com/questions/1281150/port-number-of-sql-server/5035779#5035779)
Default port for SQL Server
[ "", "asp.net", "sql", "sql-server", "" ]
I allow users to enter a regular expression to match IP addresses, for doing an IP filtration in a related system. I would like to validate if the entered regular expressions are valid as a lot of users will mess up, with good intentions though. I can of course do a Regex.IsMatch() inside a try/catch and see if it blows up that way, but are there any smarter ways of doing it? Speed is not an issue as such, I just prefer to avoid throwing exceptions for no reason.
As long as you catch very specific exceptions, just do the try/catch. Exceptions are not evil if used correctly.
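For what it's worth, the same compile-and-catch approach sketched outside C# (Python here, catching only the specific compilation failure rather than a blanket Exception):

```python
# Validity check = attempt compilation; re.error is the one failure mode.
import re

def is_valid_regex(pattern):
    if not isinstance(pattern, str):
        return False
    try:
        re.compile(pattern)
        return True
    except re.error:
        return False
```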
I think exceptions are OK in this case. Just make sure to shortcircuit and eliminate the exceptions you can: ``` private static bool IsValidRegex(string pattern) { if (string.IsNullOrWhiteSpace(pattern)) return false; try { Regex.Match("", pattern); } catch (ArgumentException) { return false; } return true; } ```
Can I test if a regex is valid in C# without throwing an exception
[ "", "c#", "regex", "" ]
I'm interested in how garbage collection works. I've read up on how some work such as mark-and-sweep, stop-and-copy, generational GC, etc... I'd like to experiment with implementing some of these and comparing their behaviors. What's a good way to get started experimenting with my own? Ideally something in C, Java or Python (although the last two are themselves garbage-collected so it seems it'd be hard to use them...)
Never played with it myself, but the one that always gets mentioned for use with C/C++ is [Hans Boehm's](http://www.hpl.hp.com/personal/Hans_Boehm/gc/).
The .NET runtime and the Java runtime are now open source, so you can experiment with the runtime itself if you want to work with a currently supported programming language. However, if you wanted to do this yourself, you would probably have to create your own runtime with its own language.
How can I experiment with garbage collection?
[ "", "c++", "c", "garbage-collection", "" ]
I have a block of product images we received from a customer. Each product image is a picture of something and it was taken with a white background. I would like to crop all the surrounding parts of the image but leave only the product in the middle. Is this possible? As an example: <http://www.5dnet.de/media/catalog/product/d/r/dress_shoes_5.jpg> I don't want all white pixels removed, however I do want the image cropped so that the top-most row of pixels contains at least one non-white pixel, the left-most column of pixels contains at least one non-white pixel, the bottom-most row of pixels contains at least one non-white pixel, etc. Code in C# or VB.net would be appreciated.
I've written code to do this myself - it's not too difficult to get the basics going. Essentially, you need to scan pixel rows/columns to check for non-white pixels and isolate the bounds of the product image, then create a new bitmap with just that region. Note that while the `Bitmap.GetPixel()` method works, it's relatively slow. If processing time is important, you'll need to use `Bitmap.LockBits()` to lock the bitmap in memory, and then some simple pointer use inside an `unsafe { }` block to access the pixels directly. [This article](http://www.codeproject.com/KB/GDI-plus/csharpgraphicfilters11.aspx?fid=3488&df=90&mpp=25&noise=3&sort=Position&view=Quick&fr=76&select=1123224) on CodeProject gives some more details that you'll probably find useful.
I found I had to adjust Dmitri's answer to ensure it works with images that don't actually need cropping (either horizontally, vertically or both)... ``` public static Bitmap Crop(Bitmap bmp) { int w = bmp.Width; int h = bmp.Height; Func<int, bool> allWhiteRow = row => { for (int i = 0; i < w; ++i) if (bmp.GetPixel(i, row).R != 255) return false; return true; }; Func<int, bool> allWhiteColumn = col => { for (int i = 0; i < h; ++i) if (bmp.GetPixel(col, i).R != 255) return false; return true; }; int topmost = 0; for (int row = 0; row < h; ++row) { if (allWhiteRow(row)) topmost = row; else break; } int bottommost = 0; for (int row = h - 1; row >= 0; --row) { if (allWhiteRow(row)) bottommost = row; else break; } int leftmost = 0, rightmost = 0; for (int col = 0; col < w; ++col) { if (allWhiteColumn(col)) leftmost = col; else break; } for (int col = w - 1; col >= 0; --col) { if (allWhiteColumn(col)) rightmost = col; else break; } if (rightmost == 0) rightmost = w; // As reached left if (bottommost == 0) bottommost = h; // As reached top. int croppedWidth = rightmost - leftmost; int croppedHeight = bottommost - topmost; if (croppedWidth == 0) // No border on left or right { leftmost = 0; croppedWidth = w; } if (croppedHeight == 0) // No border on top or bottom { topmost = 0; croppedHeight = h; } try { var target = new Bitmap(croppedWidth, croppedHeight); using (Graphics g = Graphics.FromImage(target)) { g.DrawImage(bmp, new RectangleF(0, 0, croppedWidth, croppedHeight), new RectangleF(leftmost, topmost, croppedWidth, croppedHeight), GraphicsUnit.Pixel); } return target; } catch (Exception ex) { throw new Exception( string.Format("Values are topmost={0} btm={1} left={2} right={3} croppedWidth={4} croppedHeight={5}", topmost, bottommost, leftmost, rightmost, croppedWidth, croppedHeight), ex); } } ```
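The scanning idea itself is language-neutral. Here it is reduced to a short Python sketch over a plain 2D grid, with 0 standing in for a white pixel - purely illustrative of the algorithm, not image-library code:

```python
# Find the bounding box of all non-"white" cells, then slice it out.
def crop_bounds(grid):
    """Return (top, bottom, left, right) of the smallest box holding every
    non-zero cell; bottom and right are exclusive."""
    rows = [i for i, row in enumerate(grid) if any(row)]
    cols = [j for j in range(len(grid[0])) if any(row[j] for row in grid)]
    if not rows:                      # grid is entirely "white"
        return (0, 0, 0, 0)
    return (rows[0], rows[-1] + 1, cols[0], cols[-1] + 1)

image = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
top, bottom, left, right = crop_bounds(image)
cropped = [row[left:right] for row in image[top:bottom]]
```

Working from the bounding box also sidesteps the off-by-one traps the C# versions above have to guard against, since the all-white and no-border cases fall out naturally.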
Remove surrounding whitespace from an image
[ "", "c#", "vb.net", "graphics", "" ]
I'm using the `org.w3c` `XML API` to open an existing `XML` file. I'm removing some nodes, and I'm adding others instead. The problem is that the new nodes that are added are written one after the other, with no newline and no indentation whatsoever. While it's true that the `XML` file is valid, it is very hard for a human to examine it. Is there any way to add indentation, or at least a newline after each node?
I'm assuming that you're using a [`Transformer`](http://java.sun.com/javase/6/docs/api/javax/xml/transform/Transformer.html) to do the actual writing (to a [`StreamResult`](http://java.sun.com/javase/6/docs/api/javax/xml/transform/stream/StreamResult.html)). In which case, do this before you call `transform`: ``` transformer.setOutputProperty(OutputKeys.INDENT, "yes"); transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "2"); ```
``` transformer.setOutputProperty(OutputKeys.INDENT, "yes"); transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "2"); ``` source [How to pretty print XML from Java?](https://stackoverflow.com/questions/139076/how-to-pretty-print-xml-from-java)
Java: Writing a DOM to an XML file (formatting issues)
[ "", "java", "xml", "formatting", "pretty-print", "" ]
`Connection.close()` may throw `SQLException`, but I have always assumed that it is safe to ignore any such exceptions (and I have never seen code that does not ignore them). Normally I would write:

```
try{
   connection.close();
}catch(Exception e) {}
```

Or

```
try{
   connection.close();
}catch(Exception e) {
   logger.log(e.getMessage(), e);
}
```

The question is:

1. Is it bad practice (and has anyone had problems when ignoring such exceptions)?
2. When does `Connection.close()` actually throw an exception?
3. If it is bad, how should I handle the exception?

Comment: I know that discarding exceptions is evil, but I'm referring only to exceptions thrown when closing a connection (and as I've seen this is fairly common in this case). Does anyone know when `Connection.close()` may throw anything?
Actually, what you're doing is (almost) best practice :-) here's what I saw in Spring's JdbcUtils.java. So, you might want to add another Catch block. ``` /** * Close the given ResultSet and ignore any thrown exception. * This is useful for typical finally blocks in manual code. * @param resultSet the ResultSet to close * @see javax.resource.cci.ResultSet#close() */ private void closeResultSet(ResultSet resultSet) { if (resultSet != null) { try { resultSet.close(); } catch (SQLException ex) { logger.debug("Could not close ResultSet", ex); } catch (Throwable ex) { // We don't trust the driver: It might throw RuntimeException or Error. logger.debug("Unexpected exception on closing ResultSet", ex); } } } ```
In general, I've had days wasted by people throwing away exceptions like that. I recommend following a few basic rules with exceptions:

If you are ABSOLUTELY SURE you will NEVER cause a problem with a checked exception, catch JUST that exception and comment exactly why you don't need to handle it. (Sleep throws an InterruptedException that can always be ignored unless you actually are interested in it, but honestly this is the only case I usually ignore--even at that, if you never get it, what's the cost of logging it?)

If you are not sure, but you may get it occasionally, catch and log a stack trace just so that if it is causing a problem, it can be found. Again, catch only the exception you need to.

If you don't see any way the checked exception can be thrown, catch it and re-throw it as an unchecked exception.

If you know exactly what is causing the exception, catch it and log exactly why; you don't really need a stack trace in this case if you are very clear as to what's causing it (and you might mention the class that's logging it if you're not already using log4j or something).

It sounds like your problem would fall into the last category, and for this kind of a catch, never do what you wrote (Exception e), always do the specific exception just in case some unchecked exception is thrown (bad parameters, null pointer, ...)

**Update:** The main problem here is that Checked Exceptions are ungood. The only highly used language they exist in is Java. They are neat in theory, but in action they cause this behavior of catch and hide that you don't get with unchecked exceptions. A lot of people have commented on the fact that I said that hiding them is okay sometimes. To be specific, the one case I can think of is:

```
try {
    Thread.sleep(1000);
} catch (InterruptedException e) {
    // I really don't care if this sleep is interrupted!
} ``` I suppose the main reason I feel this use is okay is because this use of InterruptedException is an abuse of the checked exception pattern in the first place, it's communicating the result of a sleep more than indicating an exception condition. It would have made much more sense to have: ``` boolean interrupted=Thread.sleep(1000); ``` But they were very proud of their new checked exception pattern when they first created Java (understandably so, it's really neat in concept--only fails in practice) I can't imagine another case where this is acceptable, so perhaps I should have listed this as **the** single case where it might be valid to ignore an exception.
Should I catch exceptions thrown when closing java.sql.Connection
[ "", "java", "jdbc", "coding-style", "" ]
This is baffling me, maybe somebody can shine the light of education on my ignorance. This is in a C# windows app. I am accessing the contents of a listbox from a thread. When I try to access it like this ``` prgAll.Maximum = lbFolders.SelectedItems.Count; ``` I get the error. However, here is the part I don't get. If I comment out that line, the very next line ``` foreach (string dir in lbFolders.SelectedItems) ``` executes just fine. Edit: As usual, my communication skills are lacking. Let me clarify. I know that accessing GUI items from threads other than the ones they were created on causes problems. I know the right way to access them is via delegate. My question was mainly this: Why can I access and iterate through the SelectedItems object just fine, but when I try to get (not set) the Count property of it, it blows up.
```
prgAll.Maximum = lbFolders.SelectedItems.Count;
```

On that line you perform an assignment (**set/add**), which by default is not thread-safe. On the second line it's just a **get** operation, where thread safety simply doesn't matter. EDIT: I don't mean access to the prgAll element. Accessing the Count property **changes the internal state** of the ListBox inner collection, which is why it throws the exception.
You can't access GUI elements from a separate thread. Use a delegate to make the change. eg. ``` lblStatus.Invoke((Action)(() => lblStatus.Text = counter.ToString())); ``` or older skool: ``` lblTest.Invoke((MethodInvoker)(delegate() { lblTest.Text = i.ToString(); })); ``` I've got a blog post on how to do this in all the .Net releases [here](http://rumandcode.wordpress.com/2008/07/17/doing-invokes-on-winform-controls-examples-from-delegates-to-lambdas/).
Why am I getting this error: "Cross-thread operation not valid: Control lbFolders accessed from a thread other than the thread it was created on."?
[ "", "c#", ".net", "multithreading", "" ]
How can I write an insert statement which includes the & character? For example, if I wanted to insert "J&J Construction" into a column in the database. I'm not sure if it makes a difference, but I'm using Oracle 9i.
I keep on forgetting this and coming back to it again! I think the best answer is a combination of the responses provided so far. Firstly, & is the variable prefix in sqlplus/sqldeveloper, hence the problem - when it appears, it is expected to be part of a variable name. SET DEFINE OFF will stop sqlplus interpreting & this way. But what if you need to use sqlplus variables *and* literal & characters? * You need SET DEFINE ON to make variables work * And SET ESCAPE ON to escape uses of &. e.g. ``` set define on set escape on define myvar=/forth select 'back\\ \& &myvar' as swing from dual; ``` Produces: ``` old 1: select 'back\\ \& &myvar' from dual new 1: select 'back\ & /forth' from dual SWING -------------- back\ & /forth ``` If you want to use a different escape character: ``` set define on set escape '#' define myvar=/forth select 'back\ #& &myvar' as swing from dual; ``` When you set a specific escape character, you may see 'SP2-0272: escape character cannot be alphanumeric or whitespace'. This probably means you already have the escape character defined, and things get horribly self-referential. The clean way of avoiding this problem is to set escape off first: ``` set escape off set escape '#' ```
If you are doing it from SQLPLUS, use ``` SET DEFINE OFF ``` to stop it treating & as a special case
How to insert a string which contains an "&"
[ "", "sql", "oracle", "escaping", "sqlplus", "" ]
I have YAML data that looks sort of like this, but ~150k of it: ``` --- all: foo: 1025 bar: baz: 37628 quux: a: 179 b: 7 ``` ...or the same thing in JSON: ``` {"all":{"bar":{"baz":"37628","quux":{"a":"179","b":"7"}},"foo":"1025"}} ``` I want to present this content in an expandable JavaScripty HTML tree view (examples: [1](http://developer.yahoo.com/yui/examples/treeview/default_tree.html), [2](http://www.mattkruse.com/javascript/mktree/)) to make it easier to explore. How do I do this? I guess what I really want to figure out is how to take this YAML/JSON data, and automatically display it as a tree (with hash keys sorted alphabetically). So far, I've been tussling with [YUI's tree view](http://developer.yahoo.com/yui/treeview/), but it doesn't accept straight JSON, and my feeble attempts to massage the data into something useful don't seem to be working. Thanks for any help.
I finally came up with a super-elegant way to do this in about 5 lines of code, based on the fact that the simple [YAML](http://en.wikipedia.org/wiki/YAML) looks a lot like [Markdown](http://en.wikipedia.org/wiki/Markdown). We're starting off with this: ``` --- all: foo: 1025 bar: baz: 37628 quux: a: 179 b: 7 ``` Use regexps (in this case, in Perl) to remove the starting `---`, and put hyphens before the key on each line: ``` $data =~ s/^---\n//s; $data =~ s/^(\s*)(\S.*)$/$1- $2/gm; ``` Voila, Markdown: ``` - all: - foo: 1025 - bar: - baz: 37628 - quux: - a: 179 - b: 7 ``` Now, just run it through a Markdown processor: ``` use Text::Markdown qw( markdown ); print markdown($data); ``` And you get an HTML list -- clean, semantic, backwards-compatible: ``` <ul> <li>all: <ul> <li>foo: 1025</li> <li>bar:</li> <li>baz: 37628</li> <li>quux: <ul> <li>a: 179</li> <li>b: 7</li> </ul> </li> </ul> </li> </ul> ``` [YUI Treeview](http://developer.yahoo.com/yui/treeview/) can enhance existing lists, so we wrap it all up: ``` <html> <head> <!-- CSS + JS served via YUI hosting: developer.yahoo.com/yui/articles/hosting/ --> <link rel="stylesheet" type="text/css" href="http://yui.yahooapis.com/combo?2.6.0/build/treeview/assets/skins/sam/treeview.css"> <script type="text/javascript" src="http://yui.yahooapis.com/combo?2.6.0/build/yahoo-dom-event/yahoo-dom-event.js&2.6.0/build/treeview/treeview-min.js"></script> </head> <body> <div id="markup" class="yui-skin-sam"> <!-- start Markdown-generated list --> <ul> <li>all: <ul> <li>foo: 1025</li> <li>bar:</li> <li>baz: 37628</li> <li>quux: <ul> <li>a: 179</li> <li>b: 7</li> </ul> </li> </ul> </li> </ul> <!-- end Markdown-generated list --> </div> <script type="text/javascript"> var treeInit = function() { var tree = new YAHOO.widget.TreeView("markup"); tree.render(); }; YAHOO.util.Event.onDOMReady(treeInit); </script> </body> </html> ``` So this all works out to about 5 lines of code (turn YAML into Markdown, turn Markdown into an HTML list, 
and place that HTML list inside a template HTML file. The generated HTML's progressively-enhanced / degradable, since it's fully viewable on non-JavaScript browsers as a plain old list.
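If Perl isn't handy, the same two substitutions can be sketched in Python (an illustrative port of the regexes above, not the original code):

```python
import re

yaml_text = """---
all:
  foo: 1025
  bar:
  baz: 37628
  quux:
    a: 179
    b: 7
"""

# drop the leading YAML document marker, then prefix every line
# with a hyphen while preserving its indentation
md = re.sub(r"^---\n", "", yaml_text)
md = re.sub(r"^(\s*)(\S.*)$", r"\1- \2", md, flags=re.M)
print(md)
```

Feeding the result to any Markdown processor yields the same nested `<ul>` list as above.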
You can convert your JSON data to nicely nested DIVs with this. I haven't tested it with a wide number of datasets, but it seems to work. ``` function renderJSON(obj) { 'use strict'; var keys = [], retValue = ""; for (var key in obj) { if (typeof obj[key] === 'object') { retValue += "<div class='tree'>" + key; retValue += renderJSON(obj[key]); retValue += "</div>"; } else { retValue += "<div class='tree'>" + key + " = " + obj[key] + "</div>"; } keys.push(key); } return retValue; } ```
Display JSON/YAML hierarchy as a tree in HTML?
[ "", "javascript", "html", "json", "treeview", "yui", "" ]
I'm looking for the equivalent in Qt to `GetTickCount()` Something that will allow me to measure the time it takes for a segment of code to run as in: ``` uint start = GetTickCount(); // do something.. uint timeItTook = GetTickCount() - start; ``` any suggestions?
How about [`QTime`](http://doc.qt.io/qt-5/qtime.html)? Depending on your platform it should have 1 millisecond accuracy. Code would look something like this: ``` QTime myTimer; myTimer.start(); // do something.. int nMilliseconds = myTimer.elapsed(); ```
I think it's probably better to use [`QElapsedTimer`](http://doc.qt.io/qt-5/qelapsedtimer.html) since that is why the class exists in the first place. It was introduced with Qt 4.7. Note that it is also immune to system clock time changes. Example usage: ``` #include <QDebug> #include <QElapsedTimer> ... ... QElapsedTimer timer; timer.start(); slowOperation(); // we want to measure the time of this slowOperation() qDebug() << timer.elapsed(); ```
Get elapsed time in Qt
[ "", "c++", "qt", "" ]
I am using the Maven (2) Cobertura plug-in to create reports on code coverage, and I have the following stub I am using in a method: ``` try { System.exit(0); } catch (final SecurityException exception) { exception.printStackTrace(); } System.err.println("The program never exited!"); ``` I know that I need to log the exception, etc, but that's not the point right now...Cobertura is refusing to acknowledge that the line after the stack trace is printed is covered. That is, the line with the '}' before the `System.err.println` statement is not being shown as covered. Before, the ending curly brace of the method was not being shown as covered, hence the `System.err` statement. Any idea how I can convince cobertura's maven plugin that, since the `System.err.println` statement is covered, that ending brace has to have been covered? Oh yeah, and I use a mock security manager to throw the security exception, since that's the easiest way I have found of making the test continue executing after the `System.exit` call.
I haven't used Cobertura in a while (2005?), and saw this behavior back then. A similar problem exists with NCover for C# and curly braces following catch/finally blocks. My suggestion would be to add to [this Cobertura bug report detailing a similar issue](http://sourceforge.net/tracker/index.php?func=detail&aid=1474067&group_id=130558&atid=720015). Also, follow @tvanfosson's advice and realize not having coverage on a curly brace, which doesn't actually become anything in the JVM, is something you can ignore as 'noise'.
I would look at the coverage report. Double check my tests. Notice that the code really is getting covered and not worry about hitting 100%. Code coverage is best used to find areas that you may have neglected to hit with your tests, but just focusing on getting 100% coverage as a goal is a bad habit that can lead to you skipping tests that need to be written just because your tool shows 100%. Use the tool for what it can do, but don't fall into the trap of letting the tool define what you do.
Cobertura refuses to acknowledge code was covered
[ "", "java", "maven-2", "code-coverage", "cobertura", "" ]
I have a validation control that has the following expression: ``` (?=(.*\\d.*){2,})(?=(.*\\w.*){2,})(?=(.*\\W.*){1,}).{8,} ``` That's a password with at least **2 digits**, **2 alpha characters**, **1 non-alphanumeric** and **8 character minimum**. Unfortunately this doesn't seem to be cross-browser compliant. This validation works perfectly in Firefox, but it does not in Internet Explorer. ***A combination of each of your answers results in:*** ``` var format = "^(?=.{" + minLength + ",})" + (minAlpha > 0 ? "(?=(.*[A-Za-z].*){" + minAlpha + ",})" : "") + (minNum > 0 ? "(?=(.*[0-9].*){" + minNum + ",})" : "") + (minNonAlpha > 0 ? "(?=(.*\\W.*){" + minNonAlpha + ",})" : "") + ".*$"; EX: "^(?=.{x,})(?=(.*[A-Za-z].*){y,})(?=(.*[0-9].*){z,})(?=(.*\W.*){a,}).*$" ``` The important piece is having the (?.{x,}) for the length **first**.
`(?=(.*\W.*){0,})` is not 0 non-alphanumeric characters. It is *at least 0* non-alphanumeric characters. If you wanted the password to not contain any non-alphanumeric characters you could do either `(?!.*\W)` or `(?=\w*$)`. A simpler solution would be to skip the `\W` look-ahead, and use `\w{8,}` instead of `.{8,}`. Also, `\w` includes `\d`. If you wanted just the alpha you could do either `[^\W\d]` or `[A-Za-z]`. ``` /^(?=(?:.*?\d){2})(?=(?:.*?[A-Za-z]){2})\w{8,}$/ ``` This would validate the password to contain at least **two digits**, **two alphas**, be **at least 8 characters long**, and contain **only alpha-numeric characters** (including underscore). * `\w` = `[A-Za-z0-9_]` * `\d` = `[0-9]` * `\s` = `[ \t\n\r\f\v]` **Edit:** To use this in all browsers you probably need to do something like this: ``` var re = new RegExp("^(?=(?:.*?\\d){2})(?=(?:.*?[A-Za-z]){2})\\w{8,}$"); if (re.test(password)) { /* ok */ } ``` **Edit2:** The recent update in the question almost invalidates my whole answer. `^^;;` You should still be able to use the JavaScript code in the end, if you replace the pattern with what you had originally. **Edit3:** OK. Now I see what you mean. ``` /^(?=.*[a-z].*[a-z])(?=.*[0-9].*[0-9]).{3,}/.test("password123") // matches /^(?=.*[a-z].*[a-z])(?=.*[0-9].*[0-9]).{4,}/.test("password123") // does not match /^(?=.*[a-z].*[a-z]).{4,}/.test("password123") // matches ``` It seems `(?= )` isn't really zero-width in Internet Explorer. <http://development.thatoneplace.net/2008/05/bug-discovered-in-internet-explorer-7.html> **Edit4:** More reading: <http://blog.stevenlevithan.com/archives/regex-lookahead-bug> I think this can solve your problem: ``` /^(?=.{8,}$)(?=(?:.*?\d){2})(?=(?:.*?[A-Za-z]){2})(?=(?:.*?\W){1})/ new RegExp("^(?=.{8,}$)(?=(?:.*?\\d){2})(?=(?:.*?[A-Za-z]){2})(?=(?:.*?\\W){1})") ``` The `(?=.{8,}$)` needs to come first.
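Outside the browser, the final pattern is easy to sanity-check, e.g. with Python's `re` module (the test strings here are made up for illustration):

```python
import re

# length lookahead first (the IE workaround above), then at least
# 2 digits, 2 alpha characters, and 1 non-word character
PATTERN = re.compile(
    r"^(?=.{8,}$)(?=(?:.*?\d){2})(?=(?:.*?[A-Za-z]){2})(?=(?:.*?\W){1})"
)

def is_strong(password):
    return PATTERN.search(password) is not None
```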
This will get you 2 min digits, 2 min characters, and min 8 character length... I refuse to show you how to not allow users to have non-alphanumeric characters in their passwords, why do sites want to enforce less secure passwords? ``` ^(?=.*\d{2})(?=.*[a-zA-Z]{2}).{8,}$ ```
ASP.NET Regular Expression Validator (Password Strength)
[ "", "javascript", "asp.net", "regex", "validation", "" ]
**UPDATE** - A comprehensive comparison, updated as of February 2015, can be found here: # [Alternatives to Ext JS](https://stackoverflow.com/questions/200284/what-are-alternatives-to-extjs/2144878#2144878) --- *2008 question*: There are a number of great and not so-great Javascript GUI frameworks out there. I've looked at some (only superficially). And I can't make up my mind about any of them. **Scroll to the end of this question to see what others say** * [Ext.js](http://www.sencha.com/) The obvious choice by many since it's one of the most known frameworks. *Advantages:* Looks [awesome](http://extjs.com/products/extjs/), large community, lots of extensions/plugins, GPL'ed *Disadvantages:* Inability to use third-party extensions with commercial license (and some of those extensions have killer features) * [Backbase](http://backbase.com/) Relatively less known. A curious mix of XML and Javascript that is reminiscent of XUL. However, it's already cross-browser *Advantages:* Looks [good](http://demo.backbase.com/explorer/index.html#%7Cexamples/welcome.xml), very extensible, allows easy incorporation of [some really neat stuff](http://bdn.backbase.com/blog/rus/advanced-3d-animations-and-transitions) *Disadvantages:* Pricing is steep and CPU-bound (though free to use on up to 2 CPUs), forums are slow to respond (though commercial support is supposedly fast) * [qooxdoo](http://qooxdoo.org/) Also very popular. 
*Advantages:* *Please, fill in* *Disadvantages:* Code is slightly messy (based on hearsay) * [YUI](http://developer.yahoo.com/yui) *Fill in description* *Advantages:* Well organized code *Disadvantages:* *Many widgets still in beta* * [Dojo](http://dojotoolkit.org/) *Fill in description* *Advantages:* Incremental loading of classes *Disadvantages:* Might feel bloated * [jQuery UI](http://ui.jquery.com/) *Advantages:* Widgets not dependent on each other *Disadvantages:* In an early stage of development, very few widgets *Possible tendency towards wider acceptance:* jQuery to be shipped with ASP.NET MVC --- What say you? What do you use and why? What would you rather use and why? In any kind of project --- To be updated with your input... > See this [excellent comment](https://stackoverflow.com/questions/218699/your-choice-of-cross-browser-javascript-gui#218764) from Sergey Ilinsky which explains very nicely which framework you should choose when you want to just pimp up your page, build an application with a rich frontend (with several choices, no less) > > An interesting comment in another thread compares jQuery, Dojo, Prototype, Mootools, [Sproutcore](http://www.sproutcore.com/) and [Cappuccino](http://cappuccino.org/) *(the question was removed)*.
When considering a JavaScript library/framework for usage you should first decide on your goals. I used to separate all JavaScript libraries/frameworks into three categories by their purpose and architecture: 1. I want to **pimp up my page** with some really "cool" features. Go for *JavaScript library*. * jQuery * ZenoUI * old: Prototype, Mootools 2. I want to **build an application** with a rich front-end. I like defining UI with JavaScript and I do not mind much using custom APIs of these libraries for coding my application logic. Go for JavaScript *post-library/pre-framework*. * [extjs](/questions/tagged/extjs "show questions tagged 'extjs'") * [kendo](/questions/tagged/kendo "show questions tagged 'kendo'") * DHTMLX * Dojo * YUI * Qooxdoo * jQuery UI * [Bindows](http://www.bindows.net/) - generates the exact look of Windows * Spry (raw, suffers severely from memory leaks) 3. I want to **build an application** with a rich front-end. I like defining UI in XML and I would like to code my application logic against standard APIs and make use of other developer-facing standard-based technologies. Go for *JavaScript framework*. * [Sproutcore](http://sproutcore.com/about/) * Backbase * Ample SDK ### For a more detailed comparison, see the [alternatives to ExtJS](https://stackoverflow.com/a/2144878/1269037).
On the contrary I find qooxdoo syntax and code organization very clear. It is much like swing or gtk. I have tried Dojo as well, without much success. My no.1 choice for RIAs is qooxdoo. It is rich, coder-friendly, free and well documented. My choice for any other (browser based) JavaScript work is jQuery. Although other libraries (prototype, mochikit etc) are all useful, jQuery is the hype now. Having an active (and big) community counts.
Your choice of cross-browser javascript GUI
[ "", "javascript", "user-interface", "frameworks", "cross-browser", "widget", "" ]
I am working on an embedded systems project and have run into an issue of the compiler being programatically embedded in the Paradigm C++ IDE. I would like to be able to automate building. The processor is the AMD186ES. I am not working with the OS - just baremetal stuff. I need to generate real-mode 16-bit 8086 machine code from C++. My googling indicates that G++ can build such code. My questions are: Can g++ be configured to build this machine code? Are there other C++ compilers that can do it as well?
I am currently using gnu `as` (part of binutils and the assembler used for gcc) and I have successfully been assembling 16bit assembly code with the following: ``` as <file> ld --oformat binary -Ttext 0x0 -e start <file> ``` with my assembly files starting out with: ``` .code16 .globl start .text start: ``` since it's a plain binary, omitting the lines ``` .globl start start: ``` will simply yield a warning, even though flat binaries need no entry point. --- Something I learned the hard way: ``` -Ttext 0x0 ``` is critical, otherwise the `.text` segment is pushed outside of 16bit addressing range (don't ask me why) I am personally still learning assembly, so this is just my way, not necessarily the best way. --- **EDIT:** If you are writing boot code, you should change ``` -Ttext 0x0 ``` to ``` -Ttext 0x7c00 ``` this will offset your memory addresses by `0x7c00` since boot code is usually loaded at `0x7c00` by the BIOS.
Your best bet is probably [OpenWatcom](http://www.openwatcom.org/index.php/Main_Page), which includes a C++ compiler. Back in the early-to-mid 90s, I believe this was the best C/C++ compiler around. It was open-sourced a few years ago.
Looking for 16-bit x86 compiler
[ "", "c++", "compiler-construction", "embedded", "x86-16", "" ]
My goal is to recognize simple gestures from accelerometers mounted on a sun spot. A gesture could be as simple as rotating the device or moving the device in several different motions. The device currently only has accelerometers but we are considering adding gyroscopes if it would make it easier/more accurate. Does anyone have recommendations for how to do this? Any available libraries in Java? Sample projects you recommend I check out? Papers you recommend? The sun spot is a Java platform to help you make quick prototypes of systems. It is programmed using Java and can relay commands back to a base station attached to a computer. If I need to explain how the hardware works more leave a comment.
The accelerometers will be registering a constant acceleration due to gravity, plus any acceleration the device is subjected to by the user, plus noise. You will need to low pass filter the samples to get rid of as much irrelevant noise as you can. The worst of the noise will generally be higher frequency than any possible human-induced acceleration. Realise that when the device is not being accelerated by the user, the only force is due to gravity, and therefore you can [deduce its attitude](http://tom.pycke.be/mav/69/accelerometer-to-attitude) in space. Moreover, when the total acceleration varies greatly from 1g, it must be due to the user accelerating the device; by subtracting last known estimate of gravity, you can roughly estimate in what direction and by how much the user is accelerating the device, and so obtain data you can begin to match against a list of known gestures. With a single three-axis accelerometer you can detect the current pitch and roll, and also acceleration of the device in a straight line. Integrating acceleration minus gravity will give you an estimate of current velocity, but the estimate will rapidly drift away from reality due to noise; you will have to make assumptions about the user's behaviour before / between / during gestures, and guide them through your UI, to provide points where the device is not being accelerated and you can reset your estimates and reliably estimate the direction of gravity. Integrating again to find position is unlikely to provide usable results over any useful length of time at all. If you have two three-axis accelerometers some distance apart, or one and some gyros, you can also detect rotation of the device (by comparing the acceleration vectors, or from the gyros directly); integrating angular momentum over a couple of seconds will give you an estimate of current yaw relative to that when you started integrating, but again this will drift out of true rapidly.
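A minimal sketch of the filtering and gravity-subtraction steps described above (illustrative only; the smoothing factor and any rest-detection threshold would need tuning against the real sensor):

```python
def lowpass(prev, sample, alpha=0.1):
    # exponential moving average; smaller alpha means heavier smoothing
    return prev + alpha * (sample - prev)

def user_accel(sample, gravity):
    # subtract the current gravity estimate to isolate user-induced acceleration
    return tuple(s - g for s, g in zip(sample, gravity))

# track gravity per axis while the device is (assumed to be) at rest
gravity = (0.0, 0.0, 1.0)  # start by assuming the device lies flat: 1 g on z
for raw in [(0.02, -0.01, 1.01), (0.00, 0.01, 0.99)]:
    gravity = tuple(lowpass(g, s) for g, s in zip(gravity, raw))
```

When the magnitude of `user_accel` stays near zero for a while, that is a good moment to reset the gravity estimate, as the answer suggests.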
Since no one seems to have mentioned existing libraries, as requested by OP, here goes: <http://www.wiigee.org/> Meant for use with the Wiimote, **wiigee** is an open-source Java based implementation for pattern matching based on accelerometer readings. It accomplishes this using **Hidden Markov Models**[1]. It was apparently used to great effect by a company, Thorn Technologies, and they've mentioned their experience here : <http://www.thorntech.com/2013/07/mobile-device-3d-accelerometer-based-gesture-recognition/> Alternatively, you could consider **FastDTW** (<https://code.google.com/p/fastdtw/>). It's less accurate than regular **DTW**[2], but also computationally less expensive, which is a big deal when it comes to embedded systems or mobile devices. [1] <https://en.wikipedia.org/wiki/Hidden_Markov_model> [2] <https://en.wikipedia.org/wiki/Dynamic_time_warping> EDIT: The OP has mentioned in one of the comments that he completed his project, with 90% accuracy in the field and a sub-millisecond compute time, using a variant of [$1 Recognizer](https://depts.washington.edu/aimgroup/proj/dollar/). He also mentions that rotation was not a criteria in his project.
How to do Gesture Recognition using Accelerometers
[ "", "java", "embedded", "accelerometer", "gesture-recognition", "" ]
I'm working on a service that needs to detect user states for all user(s) logged on to a single machine. Specifically, I want to check to see whether or not the screen saver is active and whether or not their session is locked. This code will run under a system-level service, and has no visible UI, so that may rule out several options (trapping WM messages, etc). Aside from normal workstations, I'd like for this to work on terminal servers that have multiple users logged in to it. Due to these requirements I'm wondering if several Win32 APIs will need to be involved. Any ideas on where to begin?
The most straightforward way would be to have a small app running in each user's session. Each instance of this app could communicate with the main instance of the service. Windows tries pretty hard to keep logon sessions separate -- both between services and the interactive desktop, and between individual Terminal Services sessions -- so it gets very tricky to access this sort of information about a user's session unless your app is running in that session to begin with.
As a service you can use the OnSessionChange event to catch all the relevant moments. ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.ServiceProcess; using System.Diagnostics; namespace MyCode { class MyService : ServiceBase { public MyService() { this.CanHandleSessionChangeEvent = true; } protected override void OnSessionChange(SessionChangeDescription changeDescription) { switch (changeDescription.Reason) { case SessionChangeReason.SessionLogon: Debug.WriteLine(changeDescription.SessionId + " logon"); break; case SessionChangeReason.SessionLogoff: Debug.WriteLine(changeDescription.SessionId + " logoff"); break; case SessionChangeReason.SessionLock: Debug.WriteLine(changeDescription.SessionId + " lock"); break; case SessionChangeReason.SessionUnlock: Debug.WriteLine(changeDescription.SessionId + " unlock"); break; } base.OnSessionChange(changeDescription); } } } ``` I'm sure it is possible to identify the user based on changeDescription.SessionId. But at the moment I don't know how... EDIT: This should be a possibility ``` public static WindowsIdentity GetUserName(int sessionId) { foreach (Process p in Process.GetProcesses()) { if(p.SessionId == sessionId) { return new WindowsIdentity(p.Handle); } } return null; } ``` MSDN Links * [system.serviceprocess.servicebase.onsessionchange](http://msdn.microsoft.com/de-de/library/system.serviceprocess.servicebase.onsessionchange(VS.80).aspx) * [system.serviceprocess.sessionchangedescription](http://msdn.microsoft.com/de-de/library/system.serviceprocess.sessionchangedescription(VS.80).aspx) * [system.serviceprocess.sessionchangereason](http://msdn.microsoft.com/de-de/library/system.serviceprocess.sessionchangereason(VS.80).aspx)
Service needs to detect if workstation is locked, and screen saver is active
[ "", "c#", ".net", "" ]
I have a table with two fields of interest for this particular exercise: a CHAR(3) ID and a DATETIME. The ID identifies the submitter of the data - several thousand rows. The DATETIME is not necessarily unique, either. (The primary keys are other fields of the table.) Data for this table is submitted every six months. In December, we receive July-December data from each submitter, and in June we receive July-June data. My task is to write a script that identifies people who have only submitted half their data, or only submitted January-June data in June. ...Does anyone have a solution?
I later realised that I was supposed to check to make sure that there was data for *both* July to December and January to June. So this is what I wound up with in v2: ``` SELECT @avgmonths = AVG(x.[count]) FROM ( SELECT CAST(COUNT(DISTINCT DATEPART(month, DATEADD(month, DATEDIFF(month, 0, dscdate), 0))) AS FLOAT) AS [count] FROM HospDscDate GROUP BY hosp ) x IF @avgmonths > 7 SET @months = 12 ELSE SET @months = 6 SELECT 'Submitter missing data for some months' AS [WarningType], t.id FROM TheTable t WHERE EXISTS ( SELECT 1 FROM TheTable t1 WHERE t.id = t1.id HAVING COUNT(DISTINCT DATEPART(month, DATEADD(month, DATEDIFF(month, 0, t1.Date), 0))) < @months ) GROUP BY t.id ```
For interest, this is what I wound up using. It was based off Stephen's answer, but with a few adaptations. It's part of a larger script that's run every six months, but we're only checking this every twelve months - hence the "If FullYear = 1". I'm sure there's a more stylish way to identify the boundary dates, but this seems to work. ``` IF @FullYear = 1 BEGIN DECLARE @FirstDate AS DATETIME DECLARE @LastDayFirstYear AS DATETIME DECLARE @SecondYear AS INT DECLARE @NewYearsDay AS DATETIME DECLARE @LastDate AS DATETIME SELECT @FirstDate = MIN(dscdate), @LastDate = MAX(dscdate) FROM TheTable SELECT @SecondYear = DATEPART(yyyy, @FirstDate) + 1 SELECT @NewYearsDay = CAST(CAST(@SecondYear AS VARCHAR) + '-01-01' AS DATETIME) INSERT INTO @AuditResults SELECT DISTINCT 'Submitter missing Jan-Jun data', t.id FROM TheTable t WHERE EXISTS ( SELECT 1 FROM TheTable t1 WHERE t.id = t1.id AND t1.date >= @FirstDate AND t1.date < @NewYearsDay ) AND NOT EXISTS ( SELECT 1 FROM TheTable t2 WHERE t2.date >= @NewYearsDay AND t2.date <= @LastDate AND t2.id = t.id GROUP BY t2.id ) GROUP BY t.id END ```
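Both queries reduce to "count distinct months per submitter"; the same check can be sketched outside SQL (the data shapes here are hypothetical, not the production script):

```python
from datetime import date

def submitters_missing_months(rows, expected_months=12):
    # rows: iterable of (submitter_id, date) pairs
    seen = {}
    for sid, d in rows:
        seen.setdefault(sid, set()).add(d.month)
    return sorted(sid for sid, months in seen.items()
                  if len(months) < expected_months)

rows = ([("AAA", date(2008, m, 15)) for m in range(1, 13)]
        + [("BBB", date(2008, m, 15)) for m in range(1, 7)])  # BBB stops after June
```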
Identify full vs half yearly datasets in SQL
[ "", "sql", "sql-server-2005", "t-sql", "" ]
Hey all I'm hoping someone has enough experience with Cake PHP to make this work. I'm working on something that at the moment could affectionately be called a twitter clone. Essentially I have a set up like this. Users have many friends. This is a many to many relationship to the user table. It is stored in a link table called friends\_users with columns user\_id, friend\_id. Users is a table with column user\_id. Then I have a table called tips which associates to a user. A user can have many tips. I want to figure out a way to do a find on the Tip model that returns all tips owned by the userid I pass in as well as any tips owned by any friends of that user. This SQL query works perfectly - ``` SELECT * FROM `tips` JOIN users ON users.id = tips.user_id JOIN friends_users ON tips.user_id = friends_users.friend_id WHERE (friends_users.user_id =2 or tips.user_id=2) LIMIT 0 , 30 ``` That returns user#2s Tips as well as the tips of anyone who is a friend of User 2. Now how can I do the same thing using `$this->Tip->findxxxxx(user_id)` I know I can use `Tip->query` if need be but I'm trying to learn the hard way.
If all you need in the results of the query is a list of tips, I'd be tempted to do this in 2 queries. The first to find a list of user ids of this user and their friends, the second to find the tips that belong to any one of these ids. So, in your Tip model: ``` function findTipsByUserAndFriends($userId) { //FriendsUser is automagically created by CakePHP "with" association $conditions = array('FriendsUser.user_id'=>$userId); $fields = 'FriendsUser.friend_id'; //get a list friend ids for the given user $friendIds = $this->Tip->User->FriendsUser->find('list', compact('conditions', 'fields')); //get list of all userIds for whom you want the tips $userIds = array($userId) + $friendIds; $conditions = array('Tip.user_id'=>$userIds); $tips = $this->Tip->find('all', compact('conditions')); return $tips; } ``` Note that you're calling the first find on the automagically created "FriendsUser" model that CakePHP uses to model your HABTM friends\_users table, so you don't need to create it. This is untested, so you might need to debug some of it, but you get the idea.
I would recommend that you use Containable; it works great. ``` $aCond = array( 'fields' => array( 'Discount.*' ), 'contain' => array( 'Event' => array( 'conditions' => array('Event.id' => $iEventId) ) ) ); $aBulkDiscounts = $this->Discount->find('first', $aCond); ``` hope this helps.
How can I write a complex find query in CakePHP without using `$this->Model->query()`?
[ "", "php", "mysql", "cakephp", "" ]
I'm debating the best way to propagate fairly complex permissions from the server to an AJAX application, and I'm not sure the best approach to take. Essentially, I want my permissions to be defined so I can request a whole set of permissions in one shot, and adjust the UI as appropriate (the UI changes can be as low level as disabling certain context menu items). Of course, I still need to enforce the permissions server side. So, I was wondering if anyone has any suggestions for the best way to 1. maintain the permissions and use them in server code 2. have easy access to the permissions in javascript 3. not have to make a round-trip request to the server for each individual permission Thoughts?
If you have a clear set of permissions, like a "user level" or "user type", you could just pass the value down in a hidden field and access the value through the DOM. You could still do this if your permissions were more granular, but you would either have a lot of hidden fields or you would have to encode the information into XML or JSON or some other format. You might set them as bit flags so that you could AND a single numeric value with a mask to see if the user had the permission for a specific activity. That would be very flexible and as long as you don't have more than 32 or so specific "rights", that would allow for any permutation of those rights in a very small package (basically an unsigned int). For example: ``` 0x00000001 //edit permission 0x00000002 //create new thing permission 0x00000004 //delete things permission 0x00000008 //view hidden things permission . . . 0x80000000 //total control of the server and everyone logged in ``` Then a user with a permission of `0x000007` could edit, create, and delete, but nothing else. In either case, I think you're on the right track - make the request once per page invocation, store the permissions in a global JavaScript data structure, and go from there. AJAX is nice, but you don't want to query the server for every specific permission all over your page. You would do it once on the page load, set up the presentation of your page and save the value in a global variable, then reference the permission(s) locally for event functions.
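Testing a flag is then a single bitwise AND against the mask; a quick sketch (the flag names are illustrative):

```python
EDIT        = 0x00000001
CREATE      = 0x00000002
DELETE      = 0x00000004
VIEW_HIDDEN = 0x00000008

def has_permission(perms, flag):
    # true only when every bit of `flag` is set in `perms`
    return (perms & flag) == flag

user_perms = EDIT | CREATE | DELETE  # 0x07: may edit, create and delete
```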
If you transmit the permission structure to the client as a JSON object (or XML, if you prefer), you can manipulate that object with the client-side code, and send it back to the server, which can do whatever it needs to validate the data and persist it.
Propagate Permissions to Javascript
[ "", "javascript", "permissions", "" ]
I have been running [StyleCop](http://en.wikipedia.org/wiki/StyleCop) over some C# code, and it keeps reporting that my `using` directives should be inside the namespace. Is there a technical reason for putting the `using` directives inside instead of outside the namespace?
There is actually a (subtle) difference between the two. Imagine you have the following code in File1.cs: ``` // File1.cs using System; namespace Outer.Inner { class Foo { static void Bar() { double d = Math.PI; } } } ``` Now imagine that someone adds another file (File2.cs) to the project that looks like this: ``` // File2.cs namespace Outer { class Math { } } ``` The compiler searches `Outer` before looking at those `using` directives outside the namespace, so it finds `Outer.Math` instead of `System.Math`. Unfortunately (or perhaps fortunately?), `Outer.Math` has no `PI` member, so File1 is now broken. This changes if you put the `using` inside your namespace declaration, as follows: ``` // File1b.cs namespace Outer.Inner { using System; class Foo { static void Bar() { double d = Math.PI; } } } ``` Now the compiler searches `System` before searching `Outer`, finds `System.Math`, and all is well. Some would argue that `Math` might be a bad name for a user-defined class, since there's already one in `System`; the point here is just that there *is* a difference, and it affects the maintainability of your code. It's also interesting to note what happens if `Foo` is in namespace `Outer`, rather than `Outer.Inner`. In that case, adding `Outer.Math` in File2 breaks File1 regardless of where the `using` goes. This implies that the compiler searches the innermost enclosing namespace before it looks at any `using` directive.
This thread already has some great answers, but I feel I can bring a little more detail with this additional answer. First, remember that a namespace declaration with periods, like: ``` namespace MyCorp.TheProduct.SomeModule.Utilities { ... } ``` is entirely equivalent to: ``` namespace MyCorp { namespace TheProduct { namespace SomeModule { namespace Utilities { ... } } } } ``` If you wanted to, you could put `using` directives on all of these levels. (Of course, we want to have `using`s in only one place, but it would be legal according to the language.) The rule for resolving which type is implied, can be loosely stated like this: **First search the inner-most "scope" for a match, if nothing is found there go out one level to the next scope and search there, and so on**, until a match is found. If at some level more than one match is found, if one of the types are from the current assembly, pick that one and issue a compiler warning. Otherwise, give up (compile-time error). Now, let's be explicit about what this means in a concrete example with the two major conventions. **(1) With usings outside:** ``` using System; using System.Collections.Generic; using System.Linq; //using MyCorp.TheProduct; <-- uncommenting this would change nothing using MyCorp.TheProduct.OtherModule; using MyCorp.TheProduct.OtherModule.Integration; using ThirdParty; namespace MyCorp.TheProduct.SomeModule.Utilities { class C { Ambiguous a; } } ``` In the above case, to find out what type `Ambiguous` is, the search goes in this order: 1. Nested types inside `C` (including inherited nested types) 2. Types in the current namespace `MyCorp.TheProduct.SomeModule.Utilities` 3. Types in namespace `MyCorp.TheProduct.SomeModule` 4. Types in `MyCorp.TheProduct` 5. Types in `MyCorp` 6. Types in the *null* namespace (the global namespace) 7. 
Types in `System`, `System.Collections.Generic`, `System.Linq`, `MyCorp.TheProduct.OtherModule`, `MyCorp.TheProduct.OtherModule.Integration`, and `ThirdParty` The other convention: **(2) With usings inside:** ``` namespace MyCorp.TheProduct.SomeModule.Utilities { using System; using System.Collections.Generic; using System.Linq; using MyCorp.TheProduct; // MyCorp can be left out; this using is NOT redundant using MyCorp.TheProduct.OtherModule; // MyCorp.TheProduct can be left out using MyCorp.TheProduct.OtherModule.Integration; // MyCorp.TheProduct can be left out using ThirdParty; class C { Ambiguous a; } } ``` Now, search for the type `Ambiguous` goes in this order: 1. Nested types inside `C` (including inherited nested types) 2. Types in the current namespace `MyCorp.TheProduct.SomeModule.Utilities` 3. Types in `System`, `System.Collections.Generic`, `System.Linq`, `MyCorp.TheProduct`, `MyCorp.TheProduct.OtherModule`, `MyCorp.TheProduct.OtherModule.Integration`, and `ThirdParty` 4. Types in namespace `MyCorp.TheProduct.SomeModule` 5. Types in `MyCorp` 6. Types in the *null* namespace (the global namespace) (Note that `MyCorp.TheProduct` was a part of "3." and was therefore not needed between "4." and "5.".) **Concluding remarks** No matter if you put the usings inside or outside the namespace declaration, there's always the possibility that someone later adds a new type with identical name to one of the namespaces which have higher priority. Also, if a nested namespace has the same name as a type, it can cause problems. It is always dangerous to move the usings from one location to another because the search hierarchy changes, and another type may be found. Therefore, choose one convention and stick to it, so that you won't have to ever move usings. Visual Studio's templates, by default, put the usings *outside* of the namespace (for example if you make VS generate a new class in a new file). 
One (tiny) advantage of having usings *outside* is that you can then utilize the using directives for a global attribute, for example `[assembly: ComVisible(false)]` instead of `[assembly: System.Runtime.InteropServices.ComVisible(false)]`. --- Addition inspired by [other thread](https://stackoverflow.com/questions/59180703/): Suppose in the above example that the namespace `MyCorp.TheProduct.System` happened to exist, even though we have no use for it in our file. Then with usings outside, it would not change anything. But with usings *inside*, you would have to use the `global` alias, like this: ``` namespace MyCorp.TheProduct.SomeModule.Utilities { using global::System; // avoids 'MyCorp.TheProduct.System' which also exists using global::System.Collections.Generic; // etc. } ``` --- **Update about file-scoped namespace declarations** Since C# 10.0 (from 2021), you can avoid indentation and use either (convention 1, usings outside): ``` using System; using System.Collections.Generic; using System.Linq; using MyCorp.TheProduct.OtherModule; using MyCorp.TheProduct.OtherModule.Integration; using ThirdParty; namespace MyCorp.TheProduct.SomeModule.Utilities; class C { Ambiguous a; } ``` or (convention 2, usings inside): ``` namespace MyCorp.TheProduct.SomeModule.Utilities; using System; using System.Collections.Generic; using System.Linq; using MyCorp.TheProduct; using MyCorp.TheProduct.OtherModule; using MyCorp.TheProduct.OtherModule.Integration; using ThirdParty; class C { Ambiguous a; } ``` But the same considerations as before apply.
Should 'using' directives be inside or outside the namespace in C#?
[ "", "c#", ".net", "namespaces", "stylecop", "code-organization", "" ]
Do any queries exist that require RIGHT JOIN, or can they always be re-written with LEFT JOIN? And more specifically, how do you re-write this one without the right join (and I guess implicitly without any subqueries or other fanciness): ``` SELECT * FROM t1 LEFT JOIN t2 ON t1.k2 = t2.k2 RIGHT JOIN t3 ON t3.k3 = t2.k3 ```
You can always use only left joins... ``` SELECT * FROM t1 LEFT JOIN t2 ON t1.k2 = t2.k2 RIGHT JOIN t3 ON t3.k3 = t2.k3 ``` is equivalent to: ``` Select * From t3 Left Join (t1 Left Join t2 On t2.k2 = t1.k2) On T2.k3 = T3.K3 ``` In general I always try to use only left joins, as the table on the left in a left join is the one whose rows are ALL included in the output, and I like to think of it (the left side) as the "base" set I am performing the cartesian product (join) against... so I like to have it first in the SQL.
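To sanity-check the rewrite, here is a small hypothetical sqlite3 session in Python (table contents invented; a named subquery stands in for the parenthesized join, which is equivalent):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (k2 INTEGER);
    CREATE TABLE t2 (k2 INTEGER, k3 INTEGER);
    CREATE TABLE t3 (k3 INTEGER);
    INSERT INTO t1 VALUES (1);
    INSERT INTO t2 VALUES (1, 10);
    INSERT INTO t3 VALUES (10), (20);
""")

# The RIGHT JOIN rewritten with LEFT JOINs only: t3 drives the join,
# so every t3 row survives, with NULLs where t1/t2 have no match.
rows = conn.execute("""
    SELECT t3.k3, j.k2
    FROM t3
    LEFT JOIN (SELECT t1.k2 AS k2, t2.k3 AS k3
               FROM t1 LEFT JOIN t2 ON t2.k2 = t1.k2) AS j
        ON j.k3 = t3.k3
    ORDER BY t3.k3
""").fetchall()
print(rows)  # [(10, 1), (20, None)]
```

Both t3 rows come back — k3=20 with a NULL partner — which is exactly the RIGHT JOIN semantics of the original query.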
You can always re-write them to get the same result set. However, sometimes the execution plan may be different in significant ways (performance) and sometimes a right join lets you express the query in a way that makes more sense. Let me illustrate the performance difference. Programmers tend to think in terms of an SQL statement happening all at once. However, it's useful to keep a mental model that complicated queries happen in a series of steps where tables are typically joined in the order listed. So you may have a query like this: ``` SELECT * /* example: don't care what's returned */ FROM LargeTable L LEFT JOIN MediumTable M ON M.L_ID=L.ID LEFT JOIN SmallTable S ON S.M_ID=M.ID WHERE ... ``` The server will normally start by applying anything it can from the WHERE clause to the first table listed (LargeTable, in this case), to reduce what it needs to load into memory. Then it will join the next table (MediumTable), and then the one after that (SmallTable), and so on. What we want to do is use a strategy that accounts for the expected impact of each joined table on the results. In general you want to keep the result set as small as possible for as long as possible. Apply that principle to the example query above, and we see it's obviously much slower than it needs to be. It starts with the larger sets (tables) and works down. We want to begin with the smaller sets and work up. That means using SmallTable first, and the way to do that is via a RIGHT JOIN. Another key here is that the server usually can't know which rows from SmallTable will be needed until the join is completed. Therefore it only matters if SmallTable is so much smaller than LargeTable that loading the entire SmallTable into memory is cheaper than whatever you would start with from LargeTable (which, being a large table, is probably well-indexed and probably filters on a field or three in the where clause). 
It's important to also point out that in the vast majority of cases the optimizer will look at this and handle things in the most efficient way possible, and most of the time the optimizer is going to do a better job at this than you could. But the optimizer isn't perfect. Sometimes you need to help it along: especially if one or more of your "tables" is a view (perhaps into a linked server!) or a nested select statement, for example. A nested sub-query is also a good case of where you might want to use a right join for expressive reasons: it lets you move the nested portion of the query around so you can group things better.
Is RIGHT JOIN Ever Required?
[ "", "sql", "" ]
So I'm teaching myself Python, and I'm having an issue with lists. I want to pass my function a list and pop items off it while retaining the original list. How do I make python "instance" the passed list rather than passing a pointer to the original one? Example: ``` def burninate(b): c = [] for i in range(3): c.append(b.pop()) return c a = range(6) d = burninate(a) print a, d ``` Output: [0, 1, 2] [5, 4, 3] Desired output: [0, 1, 2, 3, 4, 5] [5, 4, 3] Thanks!
As other answers have suggested, you can provide your function with a copy of the list. As an alternative, your function could take a copy of the argument: ``` def burninate(b): c = [] b = list(b) for i in range(3): c.append(b.pop()) return c ``` Basically, you need to be clear in your mind (and in your documentation) whether your function will change its arguments. In my opinion, functions that return computed values should not change their arguments, and functions that change their arguments should not return anything. See python's [].sort(), [].extend(), {}.update(), etc. for examples. Obviously there are exceptions (like .pop()). Also, depending on your particular case, you could rewrite the function to avoid using pop() or other functions that modify the argument. e.g. ``` def burninate(b): return b[:-4:-1] # return the last three elements in reverse order ```
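The copy-inside-the-function approach, runnable as-is in Python 3 (where `range()` returns a lazy object rather than a list, so the test data is built with `list()`):

```python
def burninate(b):
    c = []
    b = list(b)          # rebind b to a shallow copy; the caller's list is untouched
    for _ in range(3):
        c.append(b.pop())
    return c

a = list(range(6))
d = burninate(a)
print(a, d)  # [0, 1, 2, 3, 4, 5] [5, 4, 3]
```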
You can call `burninate()` with a copy of the list like this: `d = burninate(a[:])` or, `d = burninate(list(a))` The other alternative is to make a copy of the list in your method: ``` def burninate(b): c=[] b=b[:] for i in range(3): c.append(b.pop()) return c >>> a = range(6) >>> b = burninate(a) >>> print a, b >>> [0, 1, 2, 3, 4, 5] [5, 4, 3] ```
Passing a list while retaining the original
[ "", "python", "list", "" ]
I have a helper class that is just a bunch of static methods and would like to subclass the helper class. Some behavior is unique depending on the subclass so I would like to call a virtual method from the base class, but since all the methods are static I can't create a plain virtual method (need object reference in order to access virtual method). Is there any way around this? I guess I could use a singleton.. HelperClass.Instance.HelperMethod() isn't so much worse than HelperClass.HelperMethod(). Brownie points for anyone that can point out some languages that support virtual static methods. **Edit:** OK yeah I'm crazy. Google search results had me thinking I wasn't for a bit there.
Virtual static methods don't make sense. If I call `HelperClass.HelperMethod();`, why would I expect some random subclass' method to be called? The solution really breaks down when you have 2 subclasses of `HelperClass` - which one would you use? If you want to have overrideable static-type methods you should probably go with: * A singleton, if you want the same subclass to be used globally. * A tradition class hierarchy, with a factory or dependency injection, if you want different behavior in different parts of your application. Choose whichever solution makes more sense in your situation.
I don't think you are crazy. You just want to use what is impossible currently in .NET. Your request for virtual static methods would make so much sense if we are talking about generics. For example my future request for CLR designers is to allow me to write an interface like this: ``` public interface ISumable<T> { static T Add(T left, T right); } ``` and use it like this: ``` public T Aggregate<T>(T left, T right) where T : ISumable<T> { return T.Add(left, right); } ``` But it's impossible right now, so I'm doing it like this: ``` public static class Static<T> where T : new() { public static T Value = new T(); } public interface ISumable<T> { T Add(T left, T right); } public T Aggregate<T>(T left, T right) where T : ISumable<T>, new() { return Static<T>.Value.Add(left, right); } ```
Why can't I declare C# methods virtual and static?
[ "", "c#", "oop", "" ]
Given a date range, I need to know how many Mondays (or Tuesdays, Wednesdays, etc) are in that range. I am currently working in C#.
Try this: ``` static int CountDays(DayOfWeek day, DateTime start, DateTime end) { TimeSpan ts = end - start; // Total duration int count = (int)Math.Floor(ts.TotalDays / 7); // Number of whole weeks int remainder = (int)(ts.TotalDays % 7); // Number of remaining days int sinceLastDay = (int)(end.DayOfWeek - day); // Number of days since last [day] if (sinceLastDay < 0) sinceLastDay += 7; // Adjust for negative days since last [day] // If the days in excess of an even week are greater than or equal to the number days since the last [day], then count this one, too. if (remainder >= sinceLastDay) count++; return count; } ```
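For comparison, here is a sketch of the same arithmetic ported to Python (using Python's Monday=0 weekday convention; the function name is made up):

```python
from datetime import date

def count_weekday(day, start, end):
    """Count occurrences of `day` (0=Mon .. 6=Sun) in [start, end], inclusive."""
    total = (end - start).days
    count, remainder = divmod(total, 7)      # whole weeks + leftover days
    since_last = (end.weekday() - day) % 7   # days since the last `day` at or before end
    if remainder >= since_last:              # the leftover span covers one more hit
        count += 1
    return count

print(count_weekday(0, date(2008, 1, 1), date(2008, 1, 31)))  # 4 (Mondays: Jan 7, 14, 21, 28)
```

The `% 7` does the same job as the C# version's negative-value adjustment.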
Since you're using C#, if you're using C#3.0, you can use LINQ. Assuming you have an Array/List/IQueryable etc that contains your dates as DateTime types: ``` DateTime[] dates = { new DateTime(2008,10,6), new DateTime(2008,10,7)}; //etc.... var mondays = dates.Where(d => d.DayOfWeek == DayOfWeek.Monday); // = {10/6/2008} ``` Added: Not sure if you meant grouping them and counting them, but here's how to do that in LINQ as well: ``` var datesgrouped = from d in dates group d by d.DayOfWeek into grouped select new { WeekDay = grouped.Key, Days = grouped }; foreach (var g in datesgrouped) { Console.Write (String.Format("{0} : {1}", g.WeekDay,g.Days.Count()); } ```
Count number of Mondays in a given date range
[ "", "c#", "datetime", "date", "datediff", "" ]
I'm looking for a way to add a close button to a .NET ToolTip object similar to the one the NotifyIcon has. I'm using the tooltip as a message balloon called programmatically with the Show() method. That works fine but there is no onclick event or easy way to close the tooltip. You have to call the Hide() method somewhere else in your code and I would rather have the tooltip be able to close itself. I know there are several balloon tooltips around the net that use managed and unmanaged code to perform this with the Windows API, but I would rather stay in my comfy .NET world. I have a third-party application that calls my .NET application, and it crashes when trying to display unmanaged tooltips.
You could try an implement your own tool tip window by overriding the existing one and customizing the onDraw function. I never tried adding a button, but have done other customizations with the tooltip before. ``` 1 class MyToolTip : ToolTip 2 { 3 public MyToolTip() 4 { 5 this.OwnerDraw = true; 6 this.Draw += new DrawToolTipEventHandler(OnDraw); 7 8 } 9 10 public MyToolTip(System.ComponentModel.IContainer Cont) 11 { 12 this.OwnerDraw = true; 13 this.Draw += new DrawToolTipEventHandler(OnDraw); 14 } 15 16 private void OnDraw(object sender, DrawToolTipEventArgs e) 17 { ...Code Stuff... 24 } 25 } ```
You can subclass the ToolTip class with your own CreateParams that sets the TTS\_CLOSE style: ``` private const int TTS_BALLOON = 0x80; private const int TTS_CLOSE = 0x40; protected override CreateParams CreateParams { get { var cp = base.CreateParams; cp.Style = TTS_BALLOON | TTS_CLOSE; return cp; } } ``` The TTS\_CLOSE style also [requires](http://msdn.microsoft.com/en-us/library/windows/desktop/bb760248(v=vs.85).aspx) the TTS\_BALLOON style and you must also set the ToolTipTitle property on the tooltip. To get this style to work, you need to enable the Common Controls v6 styles [using an application manifest](http://blog.kalmbachnet.de/?postid=103). Add a new "Application Manifest File" and add the following under the <assembly> element: ``` <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls" version="6.0.0.0" processorArchitecture="*" publicKeyToken="6595b64144ccf1df" language="*" /> </dependentAssembly> </dependency> ``` In Visual Studio 2012, at least, this stuff is included in the default template but commented out - you can just uncomment it. ![Tooltip with close button](https://i.stack.imgur.com/4ShRa.png)
Add close button (red x) to a .NET ToolTip
[ "", "c#", ".net", "winforms", "tooltip", "" ]
My basic question is, in .NET, how do I clone WebControls? I would like to build a custom tag, which can produce multiple copies of its children. Ultimately I intend to build a tag similar to in JSP/Struts. But the first hurdle I have is the ability to duplicate/clone the contents of a control. Consider this rather contrived example; ``` <custom:duplicate count="2"> <div> <p>Some html</p> <asp:TextBox id="tb1" runat="server" /> </div> </custom:duplicate> ``` The HTML markup which is output would be something like, ``` <div> <p>Some html</p> <input type="text" id="tb1" /> </div> <div> <p>Some html</p> <input type="text" id="tb1" /> </div> ``` *Note: I know i have the id duplicated, I can come up with a solution to that later!* So what we would have is my custom control with 3 children (I think) - a literal control, a TextBox control, and another literal control. In this example I have said 'count=2' so what the control should do is output/render its children twice. What I would hope to do is write some "OnInit" code which does something like: ``` List<WebControl> clones; for(int i=1; i<count; i++) { foreach(WebControl c in Controls) { WebControl clone = c.Clone(); clones.Add(clone); } } Controls.AddRange(clones); ``` However, as far as I can tell, WebControls do not implement ICloneable, so its not possible to clone them in this way. Any ideas how I can clone WebControls?
What's wrong with using a Repeater and binding a dud data source. It'll duplicate the templated controls and handle the ID creation and all.
Just as a reference for others which really want to clone a **custom server control**. ``` public class MyCustomServerCtrl { ... public MyCustomServerCtrl Clone() { return MemberwiseClone() as MyCustomServerCtrl; } } ``` But note: this is needed very rarely and if so, most probably just when you're having some really specific logic. It should be avoided when possible. Generally it should be enough to use existing controls like Repeater, ListView etc..
How do you 'clone' WebControls in C# .NET?
[ "", "c#", ".net", "asp.net", "" ]
I am getting the following error when I get to the line that invokes a REALLY BASIC web service I have running on Tomcat/Axis. ``` Element or attribute do not match QName production: QName::=(NCName':')?NCName ``` Have I got something wrong with QName? - I can't even find any useful information about it. My client code is below: ``` import javax.xml.namespace.QName; import org.apache.axis.client.Call; import org.apache.axis.client.Service; public class TestClient { public static void main(String [] args) { try{ String endpoint = "http://localhost:8080/TestWebService/services/DoesMagic"; Service service = new Service(); Call call = (Call) service.createCall(); call.setTargetEndpointAddress( new java.net.URL(endpoint) ); call.setOperationName( new QName("http://testPackage.fc.com/, doBasicStuff") ); String ret = (String) call.invoke( new Object[] {"some kind of message"} ); System.out.println(ret); }catch(Exception e){ System.err.println(e.toString()); } } } ``` My web service code is really basic - just a simple class that returns your input string with a bit of concatenated text: ``` public String doBasicStuff(String message) { return "This is your message: " + message; } ```
As the exception says, you call the QName constructor incorrectly: ``` new QName("http://testPackage.fc.com/, doBasicStuff") ``` is incorrect. I think you have to pass two strings, one containing the namespace, one the localname. The documentation will typically contain a description on how to use that class.
Could it be a typo in your QName?: ``` new QName("http://testPackage.fc.com/", "doBasicStuff") ``` instead of: ``` new QName("http://testPackage.fc.com/, doBasicStuff") ```
Simple java client code for Web Service - something to do with QName?
[ "", "java", "web-services", "client", "rpc", "" ]
I'm aware that I can grab the CPU identifier and the volume serial number for a physical drive by querying WMI, but WMI usually takes its sweet time. What other speedier options, if any, are available to retrieve this information? Are there Win32 APIs that would accomplish this? **Edit:** Allow me to clarify. By CPU identifier, I'm referring to the same value one gets by querying the following WMI instance properties: * Win32\_Processor::ProcessorId * Win32\_LogicalDisk::VolumeSerialNumber
Just keep in mind that ID of the CPU is not always available. By the way, what are you trying to accomplish? If you want to generate a unique key for a computer instance, check the [Generating Unique Key (Finger Print) for a Computer for Licensing Purposes](http://www.codeproject.com/KB/cs/GenerateUniqueKey.aspx) post by Sowkot Osman at Codeproject; it can give you some hints (also read comments).
You can query the windows registry for the drive information, not sure about the CPU though. It seems that your question is addressed in this SO q/a (demonstrates a number of methods to get this info, but for speed, maybe getting it from registry is your best bet): [How to list physical disks?](https://stackoverflow.com/questions/327718/how-to-list-physical-disks)
APIs in C# for grabbing CPU IDs and drive/volume serial
[ "", "c#", ".net", "winapi", "wmi", "" ]
So what I have right now is something like this: ``` PropertyInfo[] info = obj.GetType().GetProperties(BindingFlags.Public); ``` where `obj` is some object. The problem is some of the properties I want aren't in `obj.GetType()`; they're in one of the base classes further up. If I stop the debugger and look at obj, then I have to dig through a few "base" entries to see the properties I want to get at. Is there some binding flag I can set to have it return those or do I have to recursively dig through the `Type.BaseType` hierarchy and do `GetProperties` on all of them?
Use this: ``` PropertyInfo[] info = obj.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance); ``` EDIT: Of course the correct answer is that of [Jay](https://stackoverflow.com/questions/245055/how-do-you-get-the-all-properties-of-a-class-and-its-base-classes-up-the-hierar#245131). `GetProperties()` without parameters is equivalent to `GetProperties(BindingFlags.Public | BindingFlags.Instance | BindingFlags.Static )`. The `BindingFlags.FlattenHierarchy` plays no role here.
I don't think it's that complicated. If you remove the `BindingFlags` parameter to GetProperties, I think you get the results you're looking for: ``` class B { public int MyProperty { get; set; } } class C : B { public string MyProperty2 { get; set; } } static void Main(string[] args) { PropertyInfo[] info = new C().GetType().GetProperties(); foreach (var pi in info) { Console.WriteLine(pi.Name); } } ``` produces ``` MyProperty2 MyProperty ```
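As a side note for comparison, the Python analog of this needs no flags at all, because `inspect.getmembers` walks the whole class hierarchy (MRO); the class and property names below are invented to mirror the C# snippet:

```python
import inspect

class B:
    @property
    def my_property(self):
        return 1

class C(B):
    @property
    def my_property2(self):
        return "two"

# getmembers searches base classes too, so inherited properties are included
props = [name for name, value in inspect.getmembers(C)
         if isinstance(value, property)]
print(props)  # ['my_property', 'my_property2']
```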
How do you get the all properties of a class and its base classes (up the hierarchy) with Reflection? (C#)
[ "", "c#", "reflection", "getproperties", "" ]
Lately I've been using XPathDocument and XPathNavigator to parse an XML file for a given XPath and attribute. It's been working very well, when I know in advance what the XPath is. Sometimes though, the XPath will be one of several possible XPath values, and I'd like to be able to test whether or not a given XPath exists. In case I'm getting the nomenclature wrong, here's what I'm calling an XPath - given this XML blob: ``` <foo> <bar baz="This is the value of the attribute named baz" /> </foo> ``` I might be looking for what I'm calling an XPath of "//foo/bar" and then reading the attribute "baz" to get the value. Example of the code that I use to do this: ``` XPathDocument document = new XPathDocument(filename); XPathNavigator navigator = document.CreateNavigator(); XPathNavigator node = navigator.SelectSingleNode("//foo/bar"); if(node.HasAttributes) { Console.WriteLine(node.GetAttribute("baz", string.Empty)); } ``` Now, if the call to navigator.SelectSingleNode fails, it will return a NullReferenceException or an XPathException. I can catch both of those and refactor the above into a test to see whether or not a given XPath returns an exception, but I was wondering whether there was a better way? I didn't see anything obvious in the Intellisense. XPathNavigator has .HasAttributes and .HasChildren but short of iterating through the path one node at a time, I don't see anything nicer to use.
If you've given valid XPath but it doesn't match anything, `SelectSingleNode` won't *throw* a `NullReferenceException` - it will just return null. If you pass `SelectSingleNode` some syntactically invalid XPath, that's when it will throw an `XPathException`. So normally, you'd just need to test whether the returned value was null or not.
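The same match-or-None pattern exists in Python's ElementTree, which supports a limited XPath subset (document contents adapted from the question):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring('<foo><bar baz="This is the value"/></foo>')

node = doc.find('bar')     # valid path with a match -> Element
missing = doc.find('qux')  # valid path, no match -> None, no exception

# Note: test with "is not None" -- a childless Element is falsy in a bool context
if node is not None:
    print(node.get('baz'))  # This is the value
print(missing)              # None
```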
``` var baz = navigator.SelectSingleNode("//foo/bar/@baz"); if (baz != null) Console.WriteLine(baz); ```
Best way test for XPath existence in an XML file?
[ "", "c#", "xml", "" ]
I'm planning to write gateway web application, which would need "terminal window" with VT100/ANSI escape code support. Are there any AJAX based alternatives for such a task? I'm thinking something like this: <http://tryruby.hobix.com/> My preferred backend for the system is Python/Twisted/Pylons, but since I'm just planning, I will explore every option.
Try [AnyTerm](http://wiki.kartbuilding.net/index.php/Anyterm) [AjaxTerm](http://wiki.kartbuilding.net/index.php/Ajaxterm) [WebShell](http://www-personal.umich.edu/~mressl/webshell/)
There's also [Shell In A Box](http://shellinabox.com).
AJAX console window with ANSI/VT100 support?
[ "", "python", "ajax", "vt100", "" ]
I want to setup a CRON that runs a PHP script that in turn moves XML file (holding non-sensitive information) from one server to another. I have been given the proper username/password, and want to use SFTP protocol. The jobs will run daily. There is the potential that one server is Linux and the other is Windows. Both are on different networks. What is the best way to move that file?
If both servers would be on Linux you could use [rsync](http://samba.anu.edu.au/rsync/) for any kind of files (php, xml, html, binary, etc). Even if one of them will be Windows there are rsync ports to Windows.
Why not try using PHP's [FTP functions](http://www.php.net/manual/en/ref.ftp.php)? Then you could do something like: ``` // open some file for reading $file = 'somefile.txt'; $fp = fopen($file, 'r'); // set up basic connection $conn_id = ftp_connect($ftp_server); // login with username and password $login_result = ftp_login($conn_id, $ftp_user_name, $ftp_user_pass); // try to upload $file if (ftp_fput($conn_id, $file, $fp, FTP_ASCII)) { echo "Successfully uploaded $file\n"; } else { echo "There was a problem while uploading $file\n"; } // close the connection and the file handler ftp_close($conn_id); fclose($fp); ```
What is the best way to move files from one server to another with PHP?
[ "", "php", "file", "data-transfer", "" ]
I'm looking for an open-source pastebin web-application written in either Python or Perl. I need it in order to implement a web-based specialized editor for my own needs, and I want to borrow code / ideas from the pastebin since I don't have much experience in web programming. Can you point to one (or a few) ? Thanks in advance
[Lodgeit](http://dev.pocoo.org/projects/lodgeit/) is written in Python and is a nice pastebin
I like [pastebot](http://sourceforge.net/projects/pastebot/), which powers <http://paste.pocoo.org/> (for example). It's Perl and uses POE.
What is a good open source pastebin in Python or Perl?
[ "", "python", "perl", "open-source", "pastebin", "" ]
The picture below explains all: ![alt text](http://img133.imageshack.us/img133/4206/accentar9.png) The variable textInput comes from `File.ReadAllText(path);` and characters like : ' é è ... do not display. When I run my UnitTest, all is fine! I see them... Why?
I do not know why it works with NUnit, but I opened the file with Notepad++ and saw ANSI as the encoding. Now I have converted it to UTF-8 and it works. I am still wondering why it was working with NUnit and not in the console, but at least it works now. **Update** I do not get why I got downvoted on the question and on this answer, because the question is still good: why can't I read an ANSI file in a console, but in NUnit I can?
The .NET classes (`System.IO.StreamReader` and the likes) take UTF-8 as the default encoding. If you want to read a different encoding you have to pass this explicitly to the appropriate constructor overload. Also note that there's not one single encoding called “ANSI”. You're probably referring to the Windows codepage 1252 aka “Western European”. Notice that this is different from the Windows default encoding in other countries. This is relevant when you try to use `System.Text.Encoding.Default` because this actually differs from system to system. /EDIT: It seems you misunderstood both my answer and my comment: 1. The problem in your code is that you need to tell .NET what encoding you're using. 2. The other remark, saying that “ANSI” may refer to *different* encodings, didn't have anything to do with your problem. It was just a “by the way” remark to prevent misunderstandings (well, that one backfired). So, finally: The solution to your problem should be the following code: ``` string text = System.IO.File.ReadAllText("path", Encoding.GetEncoding(1252)); ``` The important part here is the usage of an appropriate `System.Text.Encoding` instance. However, this assumes that your encoding is indeed Windows-1252 (but I believe that's what Notepad++ means by “ANSI”). I have no idea why your text gets displayed correctly when read by NUnit. I suppose that NUnit either has some kind of autodiscovery for text encodings or that NUnit uses some weird defaults (i.e. not UTF-8). Oh, and by the way: “ANSI” really refers to the “American National Standards Institute”. There are a lot of completely different standards that have “ANSI” as part of their names. For example, C++ is (among others) also an ANSI standard. Only in some contexts it's (imprecisely) used to refer to the Windows encodings. But even there, as I've tried to explain, it usually doesn't refer to a *specific* encoding but rather to a class of encodings that Windows uses as defaults for different countries. 
One of these is Windows-1252.
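A quick Python illustration of why naming the right codec matters (cp1252 standing in for Notepad++'s "ANSI"):

```python
raw = 'é è'.encode('cp1252')  # bytes as an "ANSI" (Windows-1252) file stores them

print(raw.decode('cp1252'))                   # é è  -- correct codec named explicitly
print(raw.decode('utf-8', errors='replace'))  # mojibake: the lone 0xE9/0xE8 bytes
                                              # are not valid UTF-8 sequences
```

This is the same mismatch as in the question: the file's bytes are fine, but decoding them with the wrong assumed encoding garbles the accented characters.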
C# : Characters do not display well when in Console, why?
[ "", "c#", ".net", "encoding", ".net-2.0", "io", "" ]
Is there a typedef equivalent in C#, or some way to get some sort of similar behaviour? I've done some googling, but everywhere I look seems to be negative. Currently I have a situation similar to the following: ``` class GenericClass<T> { public event EventHandler<EventData> MyEvent; public class EventData : EventArgs { /* snip */ } // ... snip } ``` Now, it doesn't take a rocket scientist to figure out that this can very quickly lead to a lot of typing (apologies for the horrible pun) when trying to implement a handler for that event. It'd end up being something like this: ``` GenericClass<int> gcInt = new GenericClass<int>; gcInt.MyEvent += new EventHandler<GenericClass<int>.EventData>(gcInt_MyEvent); // ... private void gcInt_MyEvent(object sender, GenericClass<int>.EventData e) { throw new NotImplementedException(); } ``` Except, in my case, I was already using a complex type, not just an int. It'd be nice if it were possible to simplify this a little... Edit: ie. perhaps typedefing the EventHandler instead of needing to redefine it to get similar behaviour.
No, there's no true equivalent of typedef. You can use 'using' directives within one file, e.g. ``` using CustomerList = System.Collections.Generic.List<Customer>; ``` but that will only impact that source file. In C and C++, my experience is that `typedef` is usually used within .h files which are included widely - so a single `typedef` can be used over a whole project. That ability does not exist in C#, because there's no `#include` functionality in C# that would allow you to include the `using` directives from one file in another. Fortunately, the example you give *does* have a fix - implicit method group conversion. You can change your event subscription line to just: ``` gcInt.MyEvent += gcInt_MyEvent; ``` :)
Jon really gave a nice solution, I didn't know you could do that! At times what I resorted to was inheriting from the class and creating its constructors. E.g. ``` public class FooList : List<Foo> { ... } ``` Not the best solution (unless your assembly gets used by other people), but it works.
Equivalent of typedef in C#
[ "", "c#", "typedef", "" ]
How should I configure the class to bind three dropdowns (day, month, year) to a single Date property so that it works the way it does for the 'single request parameter per property' scenario? I guess I should add some custom PropertyEditors by overriding the initBinder method. What else?
Aleksey Kudryavtsev: you can override the onBind method in your controller, in which you can do something special with the command object, like ``` dateField = new SimpleDateFormat("yyyy-MM-dd").parse(this.year + "-" + this.month + "-" + this.day); ``` or: ``` Calendar c = Calendar.getInstance(); c.set(year, month, day); dateField = c.getTime(); ``` but I'd rather do validation in JavaScript and use some available date picker component; there are plenty of them...
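Picking up the question's own initBinder idea: once the three values end up in a single request parameter, a plain java.beans property editor can do the string-to-Date conversion. A minimal sketch — the class name is made up, and wiring it in via `binder.registerCustomEditor(Date.class, new IsoDateEditor())` inside `initBinder` is left out:

```java
import java.beans.PropertyEditorSupport;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

// Hypothetical editor converting "yyyy-MM-dd" strings to java.util.Date.
class IsoDateEditor extends PropertyEditorSupport {
    private static final String PATTERN = "yyyy-MM-dd";

    @Override
    public void setAsText(String text) {
        SimpleDateFormat fmt = new SimpleDateFormat(PATTERN);
        fmt.setLenient(false); // reject impossible dates like 2008-02-31
        try {
            setValue(fmt.parse(text));
        } catch (ParseException e) {
            throw new IllegalArgumentException("Cannot parse date: " + text, e);
        }
    }

    @Override
    public String getAsText() {
        Date value = (Date) getValue();
        return value == null ? "" : new SimpleDateFormat(PATTERN).format(value);
    }
}
```

Spring calls setAsText during binding, so a bad date surfaces as an ordinary type-mismatch binding error.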
You could create a hidden input in your form and populate it using JavaScript when user selects date, then bind to this input in your command object. Probably you will be using javascript anyway for things like checking correctness of the date, so why not format the ready to use date in one parameter. Then you need to register a property editor that would convert from string "2008-05-20" to Date object.
Spring MVC : Binding 3 dropdowns to a date property in SimpleFormController
[ "", "java", "spring-mvc", "propertyeditor", "" ]
I'm doing some PHP work recently, and in all the code I've seen, people tend to use few methods. (They also tend to use few variables, but that's another issue.) I was wondering why this is, and I found this note "A function call with one parameter and an empty function body takes about the same time as doing 7-8 $localvar++ operations. A similar method call is of course about 15 $localvar++ operations" [here](http://reinholdweber.com/?p=3). Is this true, even when the PHP page has been compiled and cached? Should I avoid using methods as much as possible for efficiency? I like to write well-organized, human-readable code with methods wherever a code block would be repeated. If it is necessary to write flat code without methods, are there any programs that will "inline" method bodies? That way I could write nice code and then ugly it up before deployment. By the way, the code I've been looking at is from the Joomla 1.5 core and several WordPress plugins, so I assume they are people who know what they're doing. **Note:** I'm pleased that everyone has jumped on this question to talk about optimization **in general**, but in fact we're talking about optimization in interpreted languages. At least some hint of the fact that we're talking about PHP would be nice.
I think Joomla and Wordpress are not the greatest examples of *good* PHP code, with no offense. I have nothing personal against the people working on it and it's great how they enable people to have a website/blog and I know that a lot of people spend all their free time on either of those projects but the code quality is rather poor (with no offense). Review security announcements over the past year if you don't believe me; also assuming you are looking for performance from either of the two, their code does not excel there either. So it's by no means good code, but Wordpress and Joomla both excel on the frontend - pretty easy to use, people get a website and can do *stuff*. And that's why they are so successful, people don't select them based on code quality but on what they enabled them to do. To answer your performance question, yes, it's true that all the good stuff (functions, classes, etc.) slow your application down. So I guess if your application/script is all in one file, so be it. Feel free to write *bad PHP code* then. As soon as you expand and start to duplicate code, you should consider the trade off (in speed) which writing maintainable code brings along. :-) IMHO this trade off is rather small because of two things: 1. CPU **is** cheap. 2. Developers **are not** cheap. When you need to go back into your code in six months from now, think if those nano seconds saved running it, still add up when you need to fix a nasty bug (three or four times, because of duplicated code). You can do all sorts of things to make PHP run faster. Generally people recommend a cache, such as [APC](http://php.net/apc). APC is really awesome. It runs all sorts of optimizations in the background for you, e.g. caching the bytecode of a PHP file and also provides you with functions in userland to save data. So for example if you parse a configuration file each time you run that script disk i/o is really critical. 
With a simple [apc\_store()](http://php.net/apc_store) and [apc\_fetch()](http://php.net/apc_fetch) you can store the parsed configuration file either in a file-based or a memory-based (RAM) cache and retrieve it from there until the cache expires or is deleted. APC is not the only cache, of course.
How much "efficiency" do you need? Have you even measured? Premature optimization is the root of all evil, and optimization without measurement is ALWAYS premature. Remember also the [rules of Optimization Club](http://xoa.petdance.com/Rules_of_Optimization_Club). 1. The first rule of Optimization Club is, you do not Optimize. 2. The second rule of Optimization Club is, you do not Optimize without measuring. 3. If your app is running faster than the underlying transport protocol, the optimization is over. 4. One factor at a time. 5. No marketroids, no marketroid schedules. 6. Testing will go on as long as it has to. 7. If this is your first night at Optimization Club, you have to write a test case.
Seriously, should I write bad PHP code?
[ "", "php", "optimization", "code-generation", "translation", "" ]
I have some HTML that looks like this: ``` <ul class="faq"> <li class="open"> <a class="question" href="">This is my question?</a> <p>Of course you can, it will be awesome. </p> </li> </ul> ``` Using CSS I'm setting the `p` tag to `display:none;`. I want to use jQuery to display or hide the `p` tag when the `anchor` is clicked, but I'm having some troubles with the sibling selector. Just trying to get the selector working, I tried: ``` $("a.question").click(function () { $(this + " ~ p").css("background-color", "red"); }); ``` to test it out. Seemingly, the sibling selector can't really be used like that, and as I'm completely new to jQuery I don't know the appropriate means to make that happen.
Try using: ``` $(this).siblings('p').css("background-color", "red") ```
``` $(this).next("p").css("...") ``` the "p" above is optional, if you just want the next non-whitespace node in the DOM.
Using "this" with jQuery Selectors
[ "", "javascript", "jquery", "css-selectors", "" ]
I'm working on an application for work that is going to query our employee database. The end users want the ability to search based on the standard name/department criteria, but they also want the flexibility to query for all people with the first name of "James" who work in the Health Department. The one thing I want to avoid is to simply have the stored procedure take a list of parameters and generate a SQL statement to execute, since that would open doors to SQL injection at an internal level. Can this be done?
While the `COALESCE` trick is neat, my preferred method is: ``` CREATE PROCEDURE ps_Customers_SELECT_NameCityCountry @Cus_Name varchar(30) = NULL ,@Cus_City varchar(30) = NULL ,@Cus_Country varchar(30) = NULL ,@Dept_ID int = NULL ,@Dept_ID_partial varchar(10) = NULL AS SELECT Cus_Name ,Cus_City ,Cus_Country ,Dept_ID FROM Customers WHERE (@Cus_Name IS NULL OR Cus_Name LIKE '%' + @Cus_Name + '%') AND (@Cus_City IS NULL OR Cus_City LIKE '%' + @Cus_City + '%') AND (@Cus_Country IS NULL OR Cus_Country LIKE '%' + @Cus_Country + '%') AND (@Dept_ID IS NULL OR Dept_ID = @DeptID) AND (@Dept_ID_partial IS NULL OR CONVERT(varchar, Dept_ID) LIKE '%' + @Dept_ID_partial + '%') ``` These kind of SPs can easily be code generated (and re-generated for table-changes). You have a few options for handling numbers - depending if you want exact semantics or search semantics.
The most efficient way to implement this type of search is with a stored procedure. The statement shown here creates a procedure that accepts the required parameters. When a parameter value is not supplied it is set to NULL. ``` CREATE PROCEDURE ps_Customers_SELECT_NameCityCountry @Cus_Name varchar(30) = NULL, @Cus_City varchar(30) = NULL, @Cus_Country varchar(30) =NULL AS SELECT Cus_Name, Cus_City, Cus_Country FROM Customers WHERE Cus_Name = COALESCE(@Cus_Name,Cus_Name) AND Cus_City = COALESCE(@Cus_City,Cus_City) AND Cus_Country = COALESCE(@Cus_Country,Cus_Country) ``` Taken from this page: <http://www.sqlteam.com/article/implementing-a-dynamic-where-clause> I've done it before. It works well.
How do I create a stored procedure that will optionally search columns?
[ "", "sql", "sql-server", "search", "stored-procedures", "parameters", "" ]
I'm looking for the simplest, most straightforward way to implement the following: * The main program instantiates worker threads to do a task. * Only `n` tasks can be running at once. * When `n` is reached, no more workers are started until the count of running threads drops back below `n`.
I think that [Executors.newFixedThreadPool](http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Executors.html#newFixedThreadPool(int)) fits your requirements. There are a number of different ways to use the resulting ExecutorService, depending on whether you want a result returned to the main thread, or whether the task is totally self-contained, and whether you have a collection of tasks to perform up front, or whether tasks are queued in response to some event. ``` Collection<YourTask> tasks = new ArrayList<YourTask>(); YourTask yt1 = new YourTask(); ... tasks.add(yt1); ... ExecutorService exec = Executors.newFixedThreadPool(5); List<Future<YourResultType>> results = exec.invokeAll(tasks); ``` Alternatively, if you have a new asynchronous task to perform in response to some event, you probably just want to use the ExecutorService's simple `execute(Runnable)` method.
``` /* Get an executor service that will run a maximum of 5 threads at a time: */ ExecutorService exec = Executors.newFixedThreadPool(5); /* For all the 100 tasks to be done altogether... */ for (int i = 0; i < 100; i++) { /* ...execute the task to run concurrently as a runnable: */ exec.execute(new Runnable() { public void run() { /* do the work to be done in its own thread */ System.out.println("Running in: " + Thread.currentThread()); } }); } /* Tell the executor that after these 100 steps above, we will be done: */ exec.shutdown(); try { /* The tasks are now running concurrently. We wait until all work is done, * with a timeout of 50 seconds: */ boolean b = exec.awaitTermination(50, TimeUnit.SECONDS); /* If the execution timed out, false is returned: */ System.out.println("All done: " + b); } catch (InterruptedException e) { e.printStackTrace(); } ```
How to implement simple threading with a fixed number of worker threads
[ "", "java", "multithreading", "concurrency", "" ]
Is there a way to compile an Eclipse-based Java project from the command line? I'm trying to automate my build (using FinalBuilder not ant), and I'm neither a Java nor Eclipse expert. I can probably figure out how to do this with straight java command line options, but then the Eclipse project feels like a lot of wasted effort. In the event that there is no way to compile an Eclipse project via the command line, is there a way to generate the required java command line from within Eclipse? Or are there some files I can poke around to find the compile steps it is doing behind the scenes? --- Guys, I'm looking for an answer that does *NOT* include ant. Let me re-iterate the original question ....... Is there a way to build an Eclipse project from the command line? I don't think this is an unreasonable question given that I can do something like this for visual studio: ``` devenv.exe /build "Debug|Any CPU" "C:\Projects\MyProject\source\MyProject.sln" ```
You can build an eclipse project via a workspace from the command line: ``` eclipsec.exe -noSplash -data "D:\Source\MyProject\workspace" -application org.eclipse.jdt.apt.core.aptBuild ``` It uses the `jdt apt` plugin to build your workspace automatically. This is also known as a 'Headless Build'. Damn hard to figure out. If you're not using a win32 exe, you can try this: ``` java -cp startup.jar -noSplash -data "D:\Source\MyProject\workspace" -application org.eclipse.jdt.apt.core.aptBuild ``` **Update** Several years ago eclipse replaced `startup.jar` with the "equinox launcher" <https://wiki.eclipse.org/Equinox_Launcher> On Eclipse Mars (MacOX): ``` java -jar /Applications/Eclipse.app/Contents/Eclipse/plugins/org.eclipse.equinox.launcher_1.3.100.v20150511-1540.jar -noSplash -data "workspace" -application org.eclipse.jdt.apt.core.aptBuild ``` The `-data` parameter specifies the location of your workspace. The version number for the equinox launcher will depend on what version of eclipse you have.
To complete André's answer, an ant solution could be like the one described in [Emacs, JDEE, Ant, and the Eclipse Java Compiler](http://www.emacsblog.org/2007/02/13/emacs-jdee-ant-and-the-eclipse-java-compiler/), as in: ``` <javac srcdir="${src}" destdir="${build.dir}/classes"> <compilerarg compiler="org.eclipse.jdt.core.JDTCompilerAdapter" line="-warn:+unused -Xemacs"/> <classpath refid="compile.classpath" /> </javac> ``` The compilerarg element also allows you to pass in additional command line args to the eclipse compiler. You can find a [full ant script example here](http://dev.eclipse.org/newslists/news.eclipse.platform.rcp/msg31872.html) which would be ***invoked in a command line*** with: ``` java -cp C:/eclipse-SDK-3.4-win32/eclipse/plugins/org.eclipse.equinox.launcher_1.0.100.v20080509-1800.jar org.eclipse.core.launcher.Main -data "C:\Documents and Settings\Administrator\workspace" -application org.eclipse.ant.core.antRunner -buildfile build.xml -verbose ``` --- BUT all that involves ant, which is not what Keith is after. For a batch compilation, please refer to **[Compiling Java code](http://help.eclipse.org/neon/index.jsp?topic=%2Forg.eclipse.jdt.doc.isv%2Fguide%2Fjdt_api_compile.htm)**, especially the section "***Using the batch compiler***" > The batch compiler class is located in the JDT Core plug-in. The name of the class is org.eclipse.jdt.compiler.batch.BatchCompiler. It is packaged into plugins/org.eclipse.jdt.core\_3.4.0..jar. Since 3.2, it is also available as a separate download. The name of the file is ecj.jar. > Since 3.3, this jar also contains the support for jsr199 (Compiler API) and the support for jsr269 (Annotation processing). In order to use the annotations processing support, a 1.6 VM is required. 
Running the batch compiler from the command line would give ``` java -jar org.eclipse.jdt.core_3.4.0<qualifier>.jar -classpath rt.jar A.java ``` or: ``` java -jar ecj.jar -classpath rt.jar A.java ``` All java compilation options are detailed in that section as well. The difference with the Visual Studio command line compilation feature is that ***Eclipse does not seem to directly read its .project and .classpath in a command-line argument***. You have to report all information contained in the .project and .classpath in various command-line options in order to achieve the very same compilation result. So, the short answer is: "yes, Eclipse kind of does." ;)
Build Eclipse Java Project from Command Line
[ "", "java", "eclipse", "command-line", "" ]
Is there some way to get a value from the last inserted row? I am inserting a row where the PK will automatically increase, and I would like to get this PK. Only the PK is guaranteed to be unique in the table. I am using Java with a JDBC and PostgreSQL.
With PostgreSQL you can do it via the RETURNING keyword: [PostgreSQL - RETURNING](http://www.postgresql.org/docs/8.3/interactive/sql-insert.html) ``` INSERT INTO mytable( field_1, field_2,... ) VALUES ( value_1, value_2 ) RETURNING anyfield ``` It will return the value of "anyfield". "anyfield" may be a sequence or not. To use it with JDBC, do: ``` ResultSet rs = statement.executeQuery("INSERT ... RETURNING ID"); rs.next(); rs.getInt(1); ```
See the API docs for [java.sql.Statement](http://java.sun.com/javase/6/docs/api/java/sql/Statement.html). Basically, when you call `executeUpdate()` (or `execute()`), pass the `Statement.RETURN_GENERATED_KEYS` constant. You can then call `getGeneratedKeys` to get the auto-generated keys of all rows created by that execution. (Assuming your JDBC driver provides it.) It goes something along the lines of this: ``` Statement stmt = conn.createStatement(); stmt.execute(sql, Statement.RETURN_GENERATED_KEYS); ResultSet keyset = stmt.getGeneratedKeys(); ```
How to get a value from the last inserted row?
[ "", "java", "database", "postgresql", "jdbc", "" ]
Anyone have any thoughts on how/if it is possible to integrate Google Code commits to cause a Google AppEngine deployment of the most recent code? I have a simple Google AppEngine project's source hosted on Google Code and would love if everytime I committed to Subversion, that AppEngine would reflect the latest commit. I don't mind if things are broken on the live site since the project is for personal use mainly and for learning. Anyone have any thoughts on how to tie into the subversion commit for the Code repository and/or how to kickoff the deployment to AppEngine? Ideally the solution would not require anything manual from me nor any type of server/listener software on my machine.
Google Code Project Hosting now supports [Post-Commit Web Hooks](http://code.google.com/p/support/wiki/PostCommitWebHooks), which ping a project-owner-specified URL after every commit. This would eliminate the need to regularly poll your Google Code repository.
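A sketch of what the receiving end of such a hook could look like — the HMAC-MD5 authentication digest and the JSON payload shape are my recollection of the PostCommitWebHooks wiki page (treat them as assumptions), and the svn/appcfg paths are placeholders for your setup:

```python
import hashlib
import hmac
import json
import subprocess

def hook_is_authentic(secret_key, body, digest_header):
    """Check the HMAC-MD5 digest that (as I recall) Google Code sends
    alongside each ping; secret_key and body are raw bytes."""
    expected = hmac.new(secret_key, body, hashlib.md5).hexdigest()
    return hmac.compare_digest(expected, digest_header)

def handle_ping(secret_key, body, digest_header):
    """On an authentic ping, update the checkout and redeploy.
    Paths and the appcfg.py invocation are placeholders."""
    if not hook_is_authentic(secret_key, body, digest_header):
        return False
    json.loads(body)  # raises ValueError if the payload is malformed
    subprocess.check_call(["svn", "update", "/path/to/checkout"])
    subprocess.check_call(["appcfg.py", "update", "/path/to/checkout"])
    return True
```

Hang this behind whatever URL you register as the hook target; rejecting pings with a bad digest keeps random visitors from triggering deployments.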
[Made By Sofa](http://www.madebysofa.com) had a [blog post](http://www.madebysofa.com/#blog/appengine_hosting) about their workflow with Google App Engine. In the second last paragraph they have [attached a subversion hook](http://www.madebysofa.com/media/downloads/appengine_deploy.sh) so that when someone commits code it will automatically deploy to Google App Engine. It would take a little bit of tweaking (because it works on the server side, not the client) but you could do the same.
Possible to integrate Google AppEngine and Google Code for continuous integration?
[ "", "python", "svn", "google-app-engine", "continuous-integration", "google-code", "" ]
I'm creating a series of builders to clean up the syntax which creates domain classes for my mocks as part of improving our overall unit tests. My builders essentially populate a domain class (such as a `Schedule`) with some values determined by invoking the appropriate `WithXXX` and chaining them together. I've encountered some commonality amongst my builders and I want to abstract that away into a base class to increase code reuse. Unfortunately what I end up with looks like: ``` public abstract class BaseBuilder<T,BLDR> where BLDR : BaseBuilder<T,BLDR> where T : new() { public abstract T Build(); protected int Id { get; private set; } protected abstract BLDR This { get; } public BLDR WithId(int id) { Id = id; return This; } } ``` Take special note of the `protected abstract BLDR This { get; }`. A sample implementation of a domain class builder is: ``` public class ScheduleIntervalBuilder : BaseBuilder<ScheduleInterval,ScheduleIntervalBuilder> { private int _scheduleId; // ... // UG! here's the problem: protected override ScheduleIntervalBuilder This { get { return this; } } public override ScheduleInterval Build() { return new ScheduleInterval { Id = base.Id, ScheduleId = _scheduleId // ... }; } public ScheduleIntervalBuilder WithScheduleId(int scheduleId) { _scheduleId = scheduleId; return this; } // ... } ``` Because BLDR is not of type BaseBuilder I cannot use `return this` in the `WithId(int)` method of `BaseBuilder`. Is exposing the child type with the property `abstract BLDR This { get; }` my only option here, or am I missing some syntax trick? Update (since I can show why I'm doing this a bit more clearly): The end result is to have builders that build profiled domain classes that one would expect to retrieve from the database in a [programmer] readable format. There's nothing wrong with... ``` mock.Expect(m => m.Select(It.IsAny<int>())).Returns( new Schedule { ScheduleId = 1 // ... } ); ``` as that's pretty readable already. 
The alternative builder syntax is: ``` mock.Expect(m => m.Select(It.IsAny<int>())).Returns( new ScheduleBuilder() .WithId(1) // ... .Build() ); ``` the advantage I'm looking for out of using builders (and implementing all these `WithXXX` methods) is to abstract away complex property creation (automatically expand our database lookup values with the correct `Lookup.KnownValues` without hitting the database obviously) and having the builder provide commonly reusable test profiles for domain classes... ``` mock.Expect(m => m.Select(It.IsAny<int>())).Returns( new ScheduleBuilder() .AsOneDay() .Build() ); ```
All I can say is that if there *is* a way of doing it, I want to know about it too - I use *exactly* this pattern in my [Protocol Buffers port](http://github.com/jskeet/dotnet-protobufs/tree/master). In fact, I'm glad to see that someone else has resorted to it - it means we're at least somewhat likely to be right!
I know this is an old question, but I think you can use a simple cast to avoid the `abstract BLDR This { get; }`. The resulting code would then be: ``` public abstract class BaseBuilder<T, BLDR> where BLDR : BaseBuilder<T, BLDR> where T : new() { public abstract T Build(); protected int Id { get; private set; } public BLDR WithId(int id) { Id = id; return (BLDR)this; } } public class ScheduleIntervalBuilder : BaseBuilder<ScheduleInterval,ScheduleIntervalBuilder> { private int _scheduleId; // ... public override ScheduleInterval Build() { return new ScheduleInterval { Id = base.Id, ScheduleId = _scheduleId // ... }; } public ScheduleIntervalBuilder WithScheduleId(int scheduleId) { _scheduleId = scheduleId; return this; } // ... } ``` Of course you could encapsulate the builder with ``` protected BLDR This { get { return (BLDR)this; } } ```
Builder design pattern with inheritance: is there a better way?
[ "", "c#", "generics", "builder", "" ]
I was recently tasked with debugging a strange problem within an e-commerce application. After an application upgrade the site started to hang from time to time and I was sent in to debug. After checking the event log I found that the SQL-server wrote ~200 000 events in a couple of minutes with the message saying that a constraint had failed. After much debugging and some tracing I found the culprit. I've removed some unnecessary code and cleaned it up a bit but essentially this is it: ``` WHILE EXISTS (SELECT * FROM ShoppingCartItem WHERE ShoppingCartItem.PurchID = @PurchID) BEGIN SELECT TOP 1 @TmpGFSID = ShoppingCartItem.GFSID, @TmpQuantity = ShoppingCartItem.Quantity, @TmpShoppingCartItemID = ShoppingCartItem.ShoppingCartItemID FROM ShoppingCartItem INNER JOIN GoodsForSale on ShoppingCartItem.GFSID = GoodsForSale.GFSID WHERE ShoppingCartItem.PurchID = @PurchID EXEC @ErrorCode = spGoodsForSale_ReverseReservations @TmpGFSID, @TmpQuantity IF @ErrorCode <> 0 BEGIN Goto Cleanup END DELETE FROM ShoppingCartItem WHERE ShoppingCartItem.ShoppingCartItemID = @TmpShoppingCartItemID -- @@ROWCOUNT is 1 after this END ``` Facts: 1. There's only one or two records matching the first select-clause 2. RowCount from the DELETE statement indicates that it has been removed 3. The WHILE-clause will loop forever The procedure has been rewritten to select the rows that should be deleted into a temporary in-memory table instead so the immediate problem is solved but this really sparked my curiosity. Why does it loop forever? **Clarification**: The delete doesn't fail (@@rowcount is 1 after the delete stmt when debugged) **Clarification 2**: It shouldn't matter whether or not the SELECT TOP ... clause is ordered by any specific field since the record with the returned id will be deleted so in the next loop it should get another record. **Update**: After checking the subversion logs I found the culprit commit that made this stored procedure go haywire. 
The only real difference that I can find is that there previously was no join in the SELECT TOP 1 statement, i.e. without that join it worked without any transaction statements surrounding the delete. It appears to be the introduction of the join that made SQL Server more picky. **Update clarification**: [brien](https://stackoverflow.com/questions/150250/while-clause-in-t-sql-that-loops-forever#150400) pointed out that there's no need for the join, but we actually do use some fields from the GoodsForSale table; I've removed them to keep the code simple so that we can concentrate on the problem at hand
Are you operating in explicit or implicit [transaction mode](http://doc.ddart.net/mssql/sql70/ta-tz_8.htm)? Since you're in explicit mode, I think you need to surround the DELETE operation with BEGIN TRANSACTION and COMMIT TRANSACTION statements. ``` WHILE EXISTS (SELECT * FROM ShoppingCartItem WHERE ShoppingCartItem.PurchID = @PurchID) BEGIN SELECT TOP 1 @TmpGFSID = ShoppingCartItem.GFSID, @TmpQuantity = ShoppingCartItem.Quantity, @TmpShoppingCartItemID = ShoppingCartItem.ShoppingCartItemID FROM ShoppingCartItem INNER JOIN GoodsForSale on ShoppingCartItem.GFSID = GoodsForSale.GFSID WHERE ShoppingCartItem.PurchID = @PurchID EXEC @ErrorCode = spGoodsForSale_ReverseReservations @TmpGFSID, @TmpQuantity IF @ErrorCode <> 0 BEGIN Goto Cleanup END BEGIN TRANSACTION delete DELETE FROM ShoppingCartItem WHERE ShoppingCartItem.ShoppingCartItemID = @TmpShoppingCartItemID -- @@ROWCOUNT is 1 after this COMMIT TRANSACTION delete END ``` **Clarification:** The reason you'd need to use transactions is that the delete doesn't actually happen in the database until you do a COMMIT operation. This is generally used when you have multiple write operations in an atomic transaction. Basically, you only want the changes to happen to the DB if all of the operations are successful. In your case, there's only 1 operation, but since you're in explicit transaction mode, you need to tell SQL Server to **really** make the changes.
``` FROM ShoppingCartItem INNER JOIN GoodsForSale on ShoppingCartItem.GFSID = GoodsForSale.GFSID ``` Oops, your join brings the result set down to zero rows. ``` SELECT TOP 1 @TmpGFSID = ShoppingCartItem.GFSID, @TmpQuantity = ShoppingCartItem.Quantity, @TmpShoppingCartItemID = ShoppingCartItem.ShoppingCartItemID ``` Oops, you used multi-assignment against a set with no rows. This causes the variables to remain unchanged (they will have the same value that they had last time through the loop). The variables do NOT get assigned to null in this case. If you put this code at the start of the loop, it will (correctly) fail faster: ``` SELECT @TmpGFSID = null, @TmpQuantity = null, @TmpShoppingCartItemID = null ``` If you change your code to fetch a key (without joining) and then fetching the related data by key in a second query, you'll win.
While-clause in T-SQL that loops forever
[ "", "sql", "sql-server", "t-sql", "" ]
I'm busy with an assignment where I have to make a graphical interface for a simple program. But I'm struggling with the layout. This is the idea: ![Layout Example](https://i.stack.imgur.com/a3Xk7.png "Layout Example") What is the easiest way to accomplish such a layout? And what method do you use to make layouts in Java? Just code it, or use an IDE like NetBeans?
Well considering how simple the layout is I would suggest you use a BorderLayout with NORTH set to the top section in a container and the JTable in the CENTER of the BorderLayout. For the top it appears to be a simple BorderLayout again with NORTH as the instruction; SOUTH as the black box (possibly in a container with a FlowLayout). The center of the top pane appears to be 2 containers of GridLayouts with 2 rows and 2 columns, so put those in another container with a GridLayout. So in pseudo: ``` Container(BorderLayout) { @NORTH Container(BorderLayout) { @NORTH Label(Instruction); @CENTER Container(GridLayout(2,1)) { Container(GridLayout(2,2)) { Label() TextField() Label() TextField() } Container(GridLayout(2,2)) { Label() TextField() Label() TextField() } } @SOUTH Container(FlowLayout()) { JButton() //shaded thing? } } @CENTER { JTable } } ```
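For what it's worth, that pseudo-structure translates almost mechanically to Swing. A sketch — label texts, field widths and the empty JTable are placeholders:

```java
import java.awt.BorderLayout;
import java.awt.FlowLayout;
import java.awt.GridLayout;
import javax.swing.JButton;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JScrollPane;
import javax.swing.JTable;
import javax.swing.JTextField;

class FaqLayoutSketch {
    static JPanel buildRoot() {
        // two 2x2 label/field grids stacked on top of each other
        JPanel fields = new JPanel(new GridLayout(2, 1));
        for (int block = 0; block < 2; block++) {
            JPanel grid = new JPanel(new GridLayout(2, 2));
            grid.add(new JLabel("Label"));
            grid.add(new JTextField(10));
            grid.add(new JLabel("Label"));
            grid.add(new JTextField(10));
            fields.add(grid);
        }

        JPanel buttons = new JPanel(new FlowLayout());
        buttons.add(new JButton("Go")); // the "shaded thing"

        JPanel top = new JPanel(new BorderLayout());
        top.add(new JLabel("Instruction:"), BorderLayout.NORTH);
        top.add(fields, BorderLayout.CENTER);
        top.add(buttons, BorderLayout.SOUTH);

        JPanel root = new JPanel(new BorderLayout());
        root.add(top, BorderLayout.NORTH);
        root.add(new JScrollPane(new JTable()), BorderLayout.CENTER);
        return root;
    }
}
```

Set the root panel as your JFrame's content pane; in real code you would keep references to the individual text fields instead of building them in a loop.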
> And what method do you use to make layouts in java. Just code it, or use an IDE like netbeans? NetBeans for GUI developers is like a calculator for grade schoolers: you really shouldn't use it until you know how to do things without it, but then it will save you a lot of time. (I'd love to answer your primary question, but the firewall I'm behind is blocking the picture.)
Java GUI LayoutManagers
[ "", "java", "swing", "user-interface", "" ]
I am trying to build a dropdown list for a winform interop, and I am creating the dropdown in code. However, I have a problem getting the data to bind based on the DataTemplate I specify. What am I missing? ``` drpCreditCardNumberWpf = new ComboBox(); DataTemplate cardLayout = new DataTemplate {DataType = typeof (CreditCardPayment)}; StackPanel sp = new StackPanel { Orientation = System.Windows.Controls.Orientation.Vertical }; TextBlock cardHolder = new TextBlock {ToolTip = "Card Holder Name"}; cardHolder.SetBinding(TextBlock.TextProperty, "BillToName"); sp.Children.Add(cardHolder); TextBlock cardNumber = new TextBlock {ToolTip = "Credit Card Number"}; cardNumber.SetBinding(TextBlock.TextProperty, "SafeNumber"); sp.Children.Add(cardNumber); TextBlock notes = new TextBlock {ToolTip = "Notes"}; notes.SetBinding(TextBlock.TextProperty, "Notes"); sp.Children.Add(notes); cardLayout.Resources.Add(sp, null); drpCreditCardNumberWpf.ItemTemplate = cardLayout; ```
Assuming that you've already set up the `ItemsSource` etc for `drpCreditCardNumberWpf`... ``` //create the data template DataTemplate cardLayout = new DataTemplate(); cardLayout.DataType = typeof(CreditCardPayment); //set up the stack panel FrameworkElementFactory spFactory = new FrameworkElementFactory(typeof(StackPanel)); spFactory.Name = "myComboFactory"; spFactory.SetValue(StackPanel.OrientationProperty, Orientation.Horizontal); //set up the card holder textblock FrameworkElementFactory cardHolder = new FrameworkElementFactory(typeof(TextBlock)); cardHolder.SetBinding(TextBlock.TextProperty, new Binding("BillToName")); cardHolder.SetValue(TextBlock.ToolTipProperty, "Card Holder Name"); spFactory.AppendChild(cardHolder); //set up the card number textblock FrameworkElementFactory cardNumber = new FrameworkElementFactory(typeof(TextBlock)); cardNumber.SetBinding(TextBlock.TextProperty, new Binding("SafeNumber")); cardNumber.SetValue(TextBlock.ToolTipProperty, "Credit Card Number"); spFactory.AppendChild(cardNumber); //set up the notes textblock FrameworkElementFactory notes = new FrameworkElementFactory(typeof(TextBlock)); notes.SetBinding(TextBlock.TextProperty, new Binding("Notes")); notes.SetValue(TextBlock.ToolTipProperty, "Notes"); spFactory.AppendChild(notes); //set the visual tree of the data template cardLayout.VisualTree = spFactory; //set the item template to be our shiny new data template drpCreditCardNumberWpf.ItemTemplate = cardLayout; ``` You can use the same way I have set the `ToolTip` on the `TextBlock`s to set other properties such as margins.
The full version ``` var ms = new MemoryStream(Encoding.UTF8.GetBytes(@"<DataTemplate xmlns=""http://schemas.microsoft.com/winfx/2006/xaml/presentation"" xmlns:x=""http://schemas.microsoft.com/winfx/2006/xaml"" xmlns:c=""clr-namespace:MyApp.Converters;assembly=MyApp""> <DataTemplate.Resources> <c:MyConverter x:Key=""MyConverter""/> </DataTemplate.Resources> <TextBlock Text=""{Binding ., Converter={StaticResource MyConverter}}""/> </DataTemplate>")); var template = (DataTemplate)XamlReader.Load(ms); var cb = new ComboBox { }; //Set the data template cb.ItemTemplate = template; ```
How do I build a DataTemplate in c# code?
[ "", "c#", "wpf", "datatemplate", "" ]
YAML seems like a great format for configuration files & data binding persistent objects in human-readable form... Is there a C++ library that handles YAML? Does `Boost::Serialization` have plans for a YAML option? **EDIT: I would prefer an OO library.**
A quick search gave me this: [yaml-cpp](https://github.com/jbeder/yaml-cpp)
Try the YAML component from the [BOOST vault serialization library](https://github.com/boost-vault/serialization). **EDIT 2014**: A recent development, <https://groups.google.com/d/msg/boost-devel-archive/mhT7qIh1nsI/uXAuXFAWrxQJ> **EDIT 2019**: Didn't try it yet, but this seems to be a serious take: <https://github.com/rwols/yaml-archive> **EDIT 2024**: I developed my own YAML serialization archive for Boost Serialization. <https://gitlab.com/correaa/boost-archive-yml>
YAML serialization library for C++?
[ "", "c++", "serialization", "yaml", "persistence", "boost-serialization", "" ]
For my web service component, I need to generate a relatively large XML (~500 lines) according to many factors. I am facing a few different choices here: 1. StringBuilder 2. XmlWriter class 3. C# object with serialization??? Which one should I use? Are there any other ways that I am not aware of?
If you populate the XML with data from a database, you can generate the whole XML by using a SQL query and create a class with a property that holds the XML blob. The property type can be XElement. This is the easiest approach I can think of.
I generate an RSS feed very simply using LINQ to XML. It's the nicest XML API I know of, to be honest. I have a couple of extension methods which I use to make it even easier - they convert from an anonymous type to either elements or attributes: ``` public static IEnumerable<XElement> AsXElements(this object source) { foreach (PropertyInfo prop in source.GetType().GetProperties()) { object value = prop.GetValue(source, null); yield return new XElement(prop.Name.Replace("_", "-"), value); } } public static IEnumerable<XAttribute> AsXAttributes(this object source) { foreach (PropertyInfo prop in source.GetType().GetProperties()) { object value = prop.GetValue(source, null); yield return new XAttribute(prop.Name.Replace("_", "-"), value ?? ""); } } ``` That may not be at all appropriate for you, but I find it really handy. Of course, this assumes you're using .NET 3.5...
What's the Best way to Generate a Dynamic XML for Web Service?
[ "", "c#", ".net", "xml", "" ]
I've recently created these two (unrelated) methods to replace lots of boiler-plate code in my winforms application. As far as I can tell, they work ok, but I need some reassurance/advice on whether there are some problems I might be missing. (from memory) ``` static class SafeInvoker { //Utility to avoid boiler-plate InvokeRequired code //Usage: SafeInvoker.Invoke(myCtrl, () => myCtrl.Enabled = false); public static void Invoke(Control ctrl, Action cmd) { if (ctrl.InvokeRequired) ctrl.BeginInvoke(new MethodInvoker(cmd)); else cmd(); } //Replaces OnMyEventRaised boiler-plate code //Usage: SafeInvoker.RaiseEvent(this, MyEventRaised) public static void RaiseEvent(object sender, EventHandler evnt) { var handler = evnt; if (handler != null) handler(sender, EventArgs.Empty); } } ``` EDIT: See related question [here](https://stackoverflow.com/questions/258409/how-to-get-information-about-an-exception-raised-by-the-target-of-controlinvoke) **UPDATE** Following on from deadlock problems (related in [this question](https://stackoverflow.com/questions/2055960/control-invoke-getting-stuck-in-hidden-showdialog)), I have switched from Invoke to BeginInvoke (see an explanation [here](https://stackoverflow.com/questions/229554/whats-the-difference-between-invoke-and-begininvoke/229558#229558)). **Another Update** Regarding the second snippet, I am increasingly inclined to use the 'empty delegate' pattern, which fixes this problem 'at source' by declaring the event directly with an empty handler, like so: ``` event EventHandler MyEventRaised = delegate {}; ```
This is good stuff. Make them extension methods though to clean up your code a little more. For example: ``` //Replaces OnMyEventRaised boiler-plate code //Usage: SafeInvoker.RaiseEvent(this, MyEventRaised) public static void Raise(this EventHandler eventToRaise, object sender) { EventHandler eventHandler = eventToRaise; if (eventHandler != null) eventHandler(sender, EventArgs.Empty); } ``` Now on your events you can call: myEvent.Raise(this);
Since Benjol didn't know why he wrapped the Action in a MethodInvoker, and broccliman meant to use these as extension methods, here is the cleaned-up code: ``` static class SafeInvoker { //Utility to avoid boiler-plate InvokeRequired code //Usage: myCtrl.SafeInvoke(() => myCtrl.Enabled = false); public static void SafeInvoke(this Control ctrl, Action cmd) { if (ctrl.InvokeRequired) ctrl.BeginInvoke(cmd); else cmd(); } //Replaces OnMyEventRaised boiler-plate code //Usage: this.RaiseEvent(myEventRaised); public static void RaiseEvent(this object sender, EventHandler evnt) { if (evnt != null) evnt(sender, EventArgs.Empty); } } ``` Just a last note: `MethodInvoker` and `Action` are both delegates with the exact same structure, so they are interchangeable. The root of this naming clash is legacy: at the beginning (.NET 2.0) there were just `MethodInvoker` and `Action(T)`. Everyone who used `Action(T)` wished for a plain `Action` and found it very unnatural to use `MethodInvoker`. So in .NET 3.5 `Action`, `Action(T1, T2, T3, T4)` and all the `Func` delegates were added too, but `MethodInvoker` could not be removed anymore without making a breaking change. ### Additional: If you are able to use .NET 3.5 the above code is fine, but if you're pinned to .NET 2.0 you can use these as normal functions as before and replace `Action` with `MethodInvoker`.
Boiler plate code replacement - is there anything bad about this code?
[ "", "c#", "coding-style", "" ]
I'm not sure how to look for this online... I think they might be called something different in C++. I want to have a simple event system, something like ``` event myCustomEvent; myCustomEvent.subscribe( void myHandler(string) ); myCustomEvent.fire("a custom argument"); // myHandler prints out the string passed in the first argument event myNewCustomEvent; myNewCustomEvent.subscribe( void myNewHandler(int) ); myNewCustomEvent.fire(10); // myNewHandler prints 10 ``` I can do this pretty easily with a simple class -- but when I want to have an event that passes a different type or number of arguments to the subscriber, I have to write and define an entirely new event class. I figure there has to be some library, or maybe even something native in Visual C++ 2008, that will work something similar to this. It's basically just an implementation of the Observer pattern, so it can't be too impossible to do in C++. This really makes me appreciate how nice it is in JavaScript not to have to worry about the arguments you are passing. Tell me if this is a stupid question.
I use [sigslot](http://sigslot.sourceforge.net/) for exactly this purpose.
Take a look at the boost [signal](http://www.boost.org/doc/libs/1_36_0/doc/html/signals.html) library. Combined with the [function](http://www.boost.org/doc/libs/1_36_0/doc/html/function.html) and [bind](http://www.boost.org/doc/libs/1_36_0/libs/bind/bind.html) libraries, you can do exactly what you are looking for.
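Both answers point at libraries. If pulling in a dependency isn't an option, a minimal hand-rolled version of what the question sketches can be written with `std::function`. This is an illustrative sketch, not the sigslot or Boost.Signals API: it relies on C++11 (which postdates the Visual C++ 2008 mentioned in the question), and the `Event`/`subscribe`/`fire` names simply mirror the question's pseudocode.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Minimal hand-rolled observer: an Event<Arg> stores std::function
// handlers in a vector and calls each one when fired.
template <typename Arg>
class Event {
public:
    using Handler = std::function<void(const Arg&)>;

    void subscribe(Handler h) { handlers_.push_back(std::move(h)); }

    void fire(const Arg& value) const {
        for (const auto& h : handlers_) h(value);  // notify every subscriber
    }

private:
    std::vector<Handler> handlers_;
};
```

Each argument type gets its own `Event<T>` instantiation, which is exactly the per-signature boilerplate the template removes.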
Events in C++
[ "", "c++", "" ]
I need to solve a few mathematical equations in my application. Here's a typical example of such an equation: ``` a + b * c - d / e = a ``` Additional rules: * b % 10 = 0 * b >= 0 * b <= 100 * Each number must be integer * ... I would like to get the possible solution sets for a, b, c, d and e. Are there **any libraries out there**, either open source or commercial, **which I can use to solve such an equation**? If yes, what kind of result do they provide?
[Linear systems](http://en.wikipedia.org/wiki/Linear_systems) can *generally* be solved using linear programming. I'd recommend taking a look at [Boost uBLAS](http://www.boost.org/doc/libs/1_36_0/libs/numeric/ublas/doc/index.htm) for starters - it has a simple triangular solver. Then you might check out libraries targeting more domain specific approaches, perhaps [QSopt](http://www2.isye.gatech.edu/~wcook/qsopt/).
You're venturing into the world of numerical analysis, and here be dragons. Seemingly small differences in specification can make a huge difference in what is the right approach. I hesitate to make specific suggestions without a fairly precise description of the problem domain. It sounds superficially like you are solving constrained linear problems that are simple enough that there are a lot of ways to do it, but "..." could be a problem. A good resource for general solvers etc. would be [GAMS](http://gams.nist.gov/). Much of the software there may be a bit heavyweight for what you are asking.
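For the toy equation in the question specifically, no solver library is needed: `a + b*c - d/e = a` reduces to `b*c == d/e`, leaving `a` free. A plain brute-force search over the constrained integer ranges is enough. This is an illustrative sketch; the search ranges for `c`, `d` and `e` are assumed for the demo, not taken from the question, and "exact division" for `d/e` is one possible reading of the integer rule.

```cpp
#include <cassert>
#include <vector>

struct Solution { int a, b, c, d, e; };

// Brute-force sketch for a + b*c - d/e = a with the stated rules
// (b % 10 == 0, 0 <= b <= 100, all values integer). The equation
// reduces to b*c == d/e, so 'a' is free and left as 0 here.
std::vector<Solution> solve(int maxC, int maxD, int maxE) {
    std::vector<Solution> out;
    for (int b = 0; b <= 100; b += 10)
        for (int c = 0; c <= maxC; ++c)
            for (int d = 0; d <= maxD; ++d)
                for (int e = 1; e <= maxE; ++e)          // e != 0
                    if (d % e == 0 && b * c == d / e)    // exact division only
                        out.push_back({0, b, c, d, e});  // a is arbitrary
    return out;
}
```

For ranges this small the search is instant; for larger systems this blows up combinatorially, which is where the solver libraries above come in.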
Equation Solvers for linear mathematical equations
[ "", "c++", "equation", "algebra", "solver", "" ]
I have topics(id\*) and tags(id\*,name) and a linking table topic\_tags(topicFk\*,tagFk\*). Now I want to select every single topic that has all of the good tags (a,b,c) but none of the bad tags (d,e,f). How do I do that?
My own solution using Paul's and Bill's ideas. The idea is to inner join topics with good tags (to throw out topics with no good tags) and then count the unique tags for each topic (to verify that all the good tags are present). At the same time, an outer join with bad tags should not produce a single match (all fields are NULL). ``` SELECT topics.id FROM topics INNER JOIN topic_tags topic_ptags ON topics.id = topic_ptags.topicFk INNER JOIN tags ptags ON topic_ptags.tagFk = ptags.id AND ptags.name IN ('a','b','c') LEFT JOIN topic_tags topic_ntags ON topics.id = topic_ntags.topicFk LEFT JOIN tags ntags ON topic_ntags.tagFk = ntags.id AND ntags.name IN ('d','e','f') GROUP BY topics.id HAVING count(DISTINCT ptags.id) = 3 AND count(ntags.id) = 0 ```
Assuming your Topic\_Tags table is unique, this answers your *exact* question - but may not be generalizable to your actual problem: ``` SELECT TopicId FROM Topic_Tags JOIN Tags ON Topic_Tags.TagId = Tags.TagId WHERE Tags.Name IN ('A', 'B', 'C', 'D', 'E', 'F') GROUP BY TopicId HAVING COUNT(*) = 3 AND MAX(Tags.Name) = 'C' ``` A more general solution would be: ``` SELECT * FROM ( SELECT TopicId FROM Topic_Tags JOIN Tags ON Topic_Tags.TagId = Tags.TagId WHERE Tags.Name IN ('A', 'B', 'C') GROUP BY TopicId HAVING COUNT(*) = 3 ) as GoodTags LEFT JOIN ( SELECT TopicId FROM Topic_Tags JOIN Tags ON Topic_Tags.TagId = Tags.TagId WHERE Tags.Name = 'D' OR Tags.Name = 'E' OR Tags.Name = 'F' ) as BadTags ON GoodTags.TopicId = BadTags.TopicId WHERE BadTags.TopicId IS NULL ```
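The accepted join/`HAVING` approach can be exercised end-to-end. The sketch below uses SQLite through Python's `sqlite3` as a stand-in for MySQL (the join and `HAVING` logic is the same); the schema names follow the question and the sample rows are invented for illustration.

```python
import sqlite3

def topics_with_good_but_no_bad_tags(conn, good, bad):
    """Topics carrying every tag in `good` and no tag in `bad`."""
    g = ",".join("?" * len(good))
    b = ",".join("?" * len(bad))
    sql = f"""
        SELECT topics.id
        FROM topics
        JOIN topic_tags tp ON topics.id = tp.topicFk
        JOIN tags ptags ON tp.tagFk = ptags.id AND ptags.name IN ({g})
        LEFT JOIN topic_tags tn ON topics.id = tn.topicFk
        LEFT JOIN tags ntags ON tn.tagFk = ntags.id AND ntags.name IN ({b})
        GROUP BY topics.id
        HAVING COUNT(DISTINCT ptags.id) = ? AND COUNT(ntags.id) = 0
        ORDER BY topics.id
    """
    params = list(good) + list(bad) + [len(good)]
    return [row[0] for row in conn.execute(sql, params)]

def make_demo_db():
    """In-memory database with invented sample data."""
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE topics (id INTEGER PRIMARY KEY);
        CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE topic_tags (topicFk INTEGER, tagFk INTEGER);
    """)
    for t in (1, 2, 3):
        conn.execute("INSERT INTO topics VALUES (?)", (t,))
    for i, name in enumerate("abcdef", start=1):
        conn.execute("INSERT INTO tags VALUES (?, ?)", (i, name))
    links = [(1, 1), (1, 2), (1, 3),          # topic 1: a, b, c    -> match
             (2, 1), (2, 2), (2, 3), (2, 4),  # topic 2: a, b, c, d -> bad tag
             (3, 1), (3, 2)]                  # topic 3: a, b       -> missing c
    conn.executemany("INSERT INTO topic_tags VALUES (?, ?)", links)
    return conn
```

Only topic 1 satisfies both conditions in the sample data, so the query returns just that id.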
MINUS in MySQL?
[ "", "sql", "mysql", "" ]
I'm stuck with a .Net 1.1 application (i.e. I can not use the generics goodies from 2.0 for now), and I was trying to optimize some parts of the code. As it deals a lot with runtime callable wrappers, which need to be released, I ended up creating a utility method which loops until all references are released. The signature of the method is: ``` void ReleaseObject(object comObject) ``` After releasing all comObjects, I call GC.Collect and GC.WaitForPendingFinalizers (don't ask - anybody dealing with Office interop knows). And ... as usual, I hit a corner case - if I do not assign the corresponding managed reference to null before the GC.Collect call, it does not clean up properly. So, my code looks like: ``` ReleaseObject(myComObject); myComObject = null; GC.Collect() ... ``` As there are a bunch of xxx=null, I decided to put this in the util method, but as there is a difference between passing by reference, and passing a reference parameter, obviously I had to change the method to: ``` void ReleaseObject(out object comObject) { //do release comObject = null; } ``` and edit the caller to: ``` MyComClass myComObject = xxxx; ReleaseObject(out myComObject); ``` This fails with a message: "Cannot convert from 'out MyComClass' to 'out object'" While I can think of why it can be a problem (i.e. the reverse cast from object to MyComClass is not implicit, and there is no guarantee what the method will do), I was wondering if there is a workaround, or I need to stay with my hundreds of assignments of null. Note: I have a bunch of different COM object types, that's why I need an "object" parameter, and not a type safe one.
Why is it better to call a method than to just set the variable to null? They're both single line calls, and the latter is a lot simpler. It does sound very odd that you need to set them to null in the first place though. Are these static variables, or instance variables whose values need to be released earlier than their containing object? If the variable is just a local variable which will go out of scope anyway, setting it to null shouldn't make any difference (in release). Do the RCWs not implement IDisposable? If so, calling Dispose (preferably via a using statement) would be the best bet. (After discussions in comments.) These are local variables, which aren't referenced later in the method. That means the garbage collector will realise that they don't need to be treated as "root" references - so setting them to null shouldn't make any difference. To answer the original question directly, however: no, you can't pass a variable by reference unless the method parameter is of exactly the same type, so you're out of luck here. (With generics it would be possible, but you've said you're limited to .NET 1.1.)
Sunny, ref and out are marshalling hints plus a contract to the compiler. Ref and out are a carryover to COM days - the marshalling hints for objects when sent over the wire / between processes. The `out` contract ``` void foo( out MyClass x) ``` 1. foo() will set `x` to something before it returns. 2. `x` has no value when foo() is entered, and you get a compiler error if you attempt to use `x` before setting it. (use of unassigned out parameter x) The `ref` contract ``` void foo( ref MyClass x) ``` 1. ref allows changing the caller's reference. 2. x has to be assignable * you cannot cast something to an intermediate variable foo( ref (object) something) * x cannot be a property The reality of the last two points is likely to stop you doing what you're trying to do, because in effect they make no sense when you understand what references really are. If you want to know that, ask Jon Skeet (he wrote a book). When marshalling ref, it says that in addition to the return value, bring back ref values as well. When marshalling out, it says don't bother sending the out value when the method is called, but remember to bring back the out value in addition to the return value. --- DISCLAIMER DISCLAIMER DISCLAIMER As others point out, something fishy is going on. It appears the brute-force code you are maintaining has some subtle bugs and suffers from coding by coincidence. The best solution is probably to add another layer of indirection. i.e. A wrapper to the wrapper class that ensures deterministic cleanup, where you can write the messy code once and only once instead of peppering it throughout your codebase. --- That said .. Alternative 1 Ref won't do the trick unless you provide overloads for every type of (com) object you will call it with. ``` // need a remove method for each type. void Remove( ref Com1 x ) { ...; x = null; } void Remove( ref Com2 x ) { ...; x = null; } void Remove( ref Com3 x ) { ...; x = null; } // a generics version using ref. void RemoveComRef<ComT>(ref ComT t) where ComT : class { System.Runtime.InteropServices.Marshal.ReleaseComObject(t); t = null; } Com1 c1 = new Com1(); Com2 c2 = new Com2(); Remove( ref c1 ); RemoveComRef(ref c2); // the generics version again. ``` Alternative 2 If you don't want to do that, return null from the Remove() method and cast back to the type of object. ``` class Remover { // .net 1.1 must cast if assigning public static object Remove(object x) { System.Runtime.InteropServices.Marshal.ReleaseComObject(x); return null; } // uses generics. public static ComT RemoveCom<ComT>(ComT t) where ComT : class { System.Runtime.InteropServices.Marshal.ReleaseComObject(t); return null; } } Com1 c1 = new Com1(); Com2 c2 = new Com2(); c1 = (Com1)Remover.Remove(c1); // no reliance on generics c2 = Remover.RemoveCom(c2); // relies on generics ``` \* I added generic versions for comparison. The above code has the effect that when looking at code you become suspicious when you see a call to Remove(x) without the assignment (making wrong code look wrong). You could even Grep through the codebase looking for calls to Remove where assignment doesn't take place. --- DISCLAIMER - all the above is predicated on your needing to set the reference to null manually, which (normally) isn't necessary.
Using "ref" and/or "out" for Object type
[ "", "c#", ".net", "casting", "ref", "out", "" ]
Is there a way to generate a hash of a string so that the hash itself would be of a specific length? I've got a function that generates 41-byte hashes (SHA-1), but I need it to be 33 bytes max (because of certain hardware limitations). If I truncate the 41-byte hash to 33, I'd probably (certainly!) lose the uniqueness. Or actually I suppose an MD5 algorithm would fit nicely, if I could find some C code for one with your help. EDIT: Thank you all for the quick and knowledgeable responses. I've chosen to go with an MD5 hash and it fits fine for my purpose. The uniqueness is an important issue, but I don't expect the number of those hashes to be very large at any given time - these hashes represent software servers on a home LAN, so at max there would be 5, maybe 10 running.
Given the way hashes are calculated, that's unfortunately not possible. To limit the hash length to 33 bytes, you will have to cut it. You could XOR the first and last 33 bytes, as that might keep more of the information. But even with 33 bytes you don't have that big a chance of a collision. md5: <http://www.md5hashing.com/c++/> By the way, md5 is 16 bytes, sha1 is 20 bytes and sha256 is 32 bytes; however, as hex strings they all double in size. If you can store bytes, you can even use sha256.
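The XOR suggestion can be sketched concretely. The hypothetical helper below (the name and the hex-string representation are assumptions for the demo) folds the tail of a hex digest back into its head instead of discarding it, so every input character still influences the shortened result. Note that, like plain truncation, this cannot restore uniqueness - it only avoids throwing bytes away outright.

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Fold a hex digest down to maxLen characters by XOR-ing the tail
// back into the head, nibble by nibble. Purely illustrative.
std::string foldHexDigest(const std::string& digest, std::size_t maxLen) {
    if (digest.size() <= maxLen || maxLen == 0) return digest;

    auto nibble = [](char c) -> int {          // hex char -> 0..15
        return std::isdigit(static_cast<unsigned char>(c))
                   ? c - '0'
                   : std::tolower(static_cast<unsigned char>(c)) - 'a' + 10;
    };

    std::string out = digest.substr(0, maxLen);
    for (std::size_t i = maxLen; i < digest.size(); ++i) {
        int folded = nibble(out[i % maxLen]) ^ nibble(digest[i]);
        out[i % maxLen] = "0123456789abcdef"[folded];
    }
    return out;
}
```

With a 40-hex-character SHA-1 digest and `maxLen = 33`, the last 7 characters get XOR-ed back into the first 7 rather than dropped.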
> If I truncate the 41-byte hash to 33, I'd probably (certainly!) lost the uniqueness. What makes you think you've got uniqueness now? Yes, there's clearly a higher chance of collision when you're only playing with 33 bytes instead of 41, but you need to be fully aware that collisions are only ever unlikely, not impossible, for any situation where it makes sense to use a hash in the first place. If you're hashing more than 41 bytes of data, there are clearly more possible combinations than there are hashes available. Now, whether you'd be better off truncating the SHA-1 hash or using a shorter hash such as MD5, I don't know. I think I'd be more generally confident when keeping the whole of a hash, but MD5 has [known vulnerabilities](http://en.wikipedia.org/wiki/MD5#Vulnerability) which may or may not be a problem for your particular application.
Hash of a string to be of specific length
[ "", "c++", "algorithm", "hash", "" ]
I've introduced visitors as one of the core architecture ideas in one of my apps. I have several visitors that operate on the same stuff. Now, how should I test it? Some tests I'm thinking of are a bit larger than a unit test should be (integration test? whatever) but I still wanna do it. How would you test code like the C++ sample from the wiki article on [Visitor Pattern](http://en.wikipedia.org/wiki/Visitor_pattern)?
Make a test visitor object and make it visit things... then test that it visited the right things.
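Concretely, that "recording" test visitor might look like the following sketch. The class names are invented for illustration (they are not the wiki sample's names); the test double just records the dispatch order instead of doing real work, which is enough to verify that double dispatch routed each element to the right overload.

```cpp
#include <cassert>
#include <string>
#include <vector>

struct Wheel;
struct Engine;

struct Visitor {
    virtual ~Visitor() = default;
    virtual void visit(Wheel&) = 0;
    virtual void visit(Engine&) = 0;
};

struct Element {
    virtual ~Element() = default;
    virtual void accept(Visitor& v) = 0;
};

struct Wheel : Element {
    void accept(Visitor& v) override { v.visit(*this); }  // double dispatch
};
struct Engine : Element {
    void accept(Visitor& v) override { v.visit(*this); }
};

// The test double: records what was visited, in order.
struct RecordingVisitor : Visitor {
    std::vector<std::string> visited;
    void visit(Wheel&) override  { visited.push_back("Wheel"); }
    void visit(Engine&) override { visited.push_back("Engine"); }
};
```

A unit test then walks a known structure with a `RecordingVisitor` and asserts on `visited` - no mocking framework required.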
Unit testing isn't really about testing patterns, it is about testing the correct implementation of methods and functions. The visitor pattern is a specific class structure, and for each of the classes (ConcreteVisitor and ConcreteElement, specifically) involved you'll want unit tests. When you've developed confidence that your class methods are behaving OK, you could use your unit test framework to develop integration tests as well. Do not start integration testing right away: you'll find that you develop a lot of integration tests that are actually testing the behavior of a specific class, i.e. unit tests. Whether you need mock objects or can use 'real' objects is a different matter. This depends a lot on whether the objects behave nicely enough for unit test purposes (i.e. they do not pull in a lot of additional dependencies etc.), and whether the objects themselves are unit tested (i.e. you need to be able to trust these objects 100%). The mock vs. real objects issue has been addressed on Stack Overflow before, so search the unit-testing tags.
Unit testing Visitor pattern architecture
[ "", "c++", "unit-testing", "design-patterns", "visitor-pattern", "" ]
I have a text file in the root of my web app *<http://localhost/foo.txt>* and I'd like to load it into a variable in JavaScript. In Groovy I would do this: ``` def fileContents = 'http://localhost/foo.txt'.toURL().text; println fileContents; ``` How can I get a similar result in JavaScript?
XMLHttpRequest, i.e. AJAX, without the XML. The precise manner you do this is dependent on what JavaScript framework you're using, but if we disregard interoperability issues, your code will look something like: ``` var client = new XMLHttpRequest(); client.open('GET', '/foo.txt'); client.onreadystatechange = function() { alert(client.responseText); } client.send(); ``` Normally speaking, though, XMLHttpRequest isn't available on all platforms, so some fudgery is done. Once again, your best bet is to use an AJAX framework like jQuery. One extra consideration: this will only work as long as foo.txt is on the same domain. If it's on a different domain, same-origin security policies will prevent you from reading the result.
## Update 2019: Using Fetch: ``` fetch('http://localhost/foo.txt') .then(response => response.text()) .then((data) => { console.log(data) }) ``` <https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API>
How do I load the contents of a text file into a javascript variable?
[ "", "javascript", "" ]
I've got a save dialog box which pops up when I press a button. However, I don't want to save a file at that point; I want to take the name and place it in the text box next to the button, for the name to be used later. Can anybody tell me how to obtain the file path from the save dialog box to use it later?
Here is some sample code I wrote quickly... instead of Console.Write you can simply store the path in a variable and use it later. ``` SaveFileDialog saveFileDialog1 = new SaveFileDialog(); saveFileDialog1.InitialDirectory = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments); saveFileDialog1.Filter = "Your extension here (*.EXT)|*.ext|All Files (*.*)|*.*"; saveFileDialog1.FilterIndex = 1; if(saveFileDialog1.ShowDialog() == DialogResult.OK) { Console.WriteLine(saveFileDialog1.FileName);//Do what you want here } ```
Addressing the textbox... ``` if (saveFileDialog.ShowDialog() == DialogResult.OK) { this.textBox1.Text = saveFileDialog.FileName; } ```
Obtain file path of C# save dialog box
[ "", "c#", "dialog", "" ]
I have a range of random numbers. The range is actually determined by the user but it will be up to 1000 integers. They are placed in this: ``` vector<int> n ``` and the values are inserted like this: ``` srand(1); for (i = 0; i < n; i++) v[i] = rand() % n; ``` I'm creating a separate function to find all the non-prime values. Here is what I have now, but I know it's completely wrong as I get both prime and composite in the series. ``` void sieve(vector<int> v, int n) { int i,j; for(i = 2; i <= n; i++) { cout << i << " % "; for(j = 0; j <= n; j++) { if(i % v[j] == 0) cout << v[j] << endl; } } } ``` This method typically worked when I just had a series of numbers from 0-1000, but it doesn't seem to be working now when I have numbers out of order and duplicates. Is there a better method to find non-prime numbers in a vector? I'm tempted to just create another vector, fill it with n numbers and just find the non-primes that way, but would that be inefficient? Okay, since the range is from 0-1000 I am wondering if it's easier to just create vector with 0-n sorted, and then using a sieve to find the primes, is this getting any closer? ``` void sieve(vector<int> v, BST<int> t, int n) { vector<int> v_nonPrime(n); int i,j; for(i = 2; i < n; i++) v_nonPrime[i] = i; for(i = 2; i < n; i++) { for(j = i + 1; j < n; j++) { if(v_nonPrime[i] % j == 0) cout << v_nonPrime[i] << endl; } } } ```
In this code: ``` if(i % v[j] == 0) cout << v[j] << endl; ``` You are testing your index to see if it is divisible by v[j]. I think you meant to do it the other way around, i.e.: ``` if(v[j] % i == 0) ``` Right now, you are printing random divisors of i. You are not printing out random numbers which are known not to be prime. Also, you will have duplicates in your output, perhaps that is ok.
First off, I think Knuth said it first: premature optimization is the cause of many bugs. Make the slow version first, and then figure out how to make it faster. Second, for your outer loop, you really only need to go to sqrt(n) rather than n.
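Putting both answers together - test `v[j] % i` rather than `i % v[j]`, and don't iterate further than you need to - a sieve-based sketch for the original problem might look like this: mark the composites up to the range limit once, then filter the (possibly unsorted, duplicated) vector against that table. Function names are invented for illustration; 0 and 1 are treated as non-prime here.

```cpp
#include <cassert>
#include <vector>

// Sieve of Eratosthenes: composite[x] is true iff x is not prime.
std::vector<bool> compositeTable(int limit) {
    std::vector<bool> composite(limit + 1, false);
    if (limit >= 0) composite[0] = true;   // 0 is not prime
    if (limit >= 1) composite[1] = true;   // 1 is not prime
    for (int i = 2; i * i <= limit; ++i)   // sqrt(limit) is enough
        if (!composite[i])
            for (int j = i * i; j <= limit; j += i)
                composite[j] = true;
    return composite;
}

// Filter the user's vector: keep every non-prime, duplicates and all.
std::vector<int> nonPrimes(const std::vector<int>& v, int limit) {
    std::vector<bool> composite = compositeTable(limit);
    std::vector<int> out;
    for (int x : v)
        if (x >= 0 && x <= limit && composite[x])
            out.push_back(x);              // order and duplicates preserved
    return out;
}
```

Building the table is O(n log log n) once; after that, each lookup is O(1), so duplicates and unsorted input cost nothing extra.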
Finding composite numbers
[ "", "c++", "primes", "sieve", "" ]
I have some thread-related questions, assuming the following code. Please ignore the possible inefficiency of the code, I'm only interested in the thread part. ``` //code without thread use public static int getNextPrime(int from) { int nextPrime = from+1; boolean superPrime = false; while(!superPrime) { boolean prime = true; for(int i = 2;i < nextPrime;i++) { if(nextPrime % i == 0) { prime = false; } } if(prime) { superPrime = true; } else { nextPrime++; } } return nextPrime; } public static void main(String[] args) { int primeStart = 5; ArrayList list = new ArrayList(); for(int i = 0;i < 10000;i++) { list.add(primeStart); primeStart = getNextPrime(primeStart); } } ``` If I'm running the code like this and it takes about 56 seconds. If, however, I have the following code (as an alternative): ``` public class PrimeRunnable implements Runnable { private int from; private int lastPrime; public PrimeRunnable(int from) { this.from = from; } public boolean isPrime(int number) { for(int i = 2;i < from;i++) { if((number % i) == 0) { return false; } } lastPrime = number; return true; } public int getLastPrime() { return lastPrime; } public void run() { while(!isPrime(++from)) ; } } public static void main(String[] args) { int primeStart = 5; ArrayList list = new ArrayList(); for(int i = 0;i < 10000;i++) { PrimeRunnable pr = new PrimeRunnable(primeStart); Thread t = new Thread(pr); t.start(); t.join(); primeStart = pr.getLastPrime(); list.add(primeStart); } } ``` The whole operation takes about 7 seconds. I am almost certain that even though I only create one thread at a time, a thread doesn't always finish when another is created. Is that right? I am also curious: why is the operation ending so fast? When I'm joining a thread, do other threads keep running in the background, or is the joined thread the only one that's running?
By putting the join() in the loop, you're starting a thread, then waiting for that thread to stop before running the next one. I think you probably want something more like this: ``` public static void main(String[] args) throws InterruptedException { int primeStart = 5; // Make thread-safe list for adding results to List list = Collections.synchronizedList(new ArrayList()); // Pull thread pool count out into a value so you can easily change it int threadCount = 10000; Thread[] threads = new Thread[threadCount]; // Start all threads for(int i = 0;i < threadCount;i++) { // Pass list to each Runnable here // Also, I added +i here as I think the intention is // to test 10000 possible numbers>5 for primeness - // was testing 5 in all loops PrimeRunnable pr = new PrimeRunnable(primeStart+i, list); threads[i] = new Thread(pr); threads[i].start(); // thread is now running in parallel } // All threads now running in parallel // Then wait for all threads to complete for(int i=0; i<threadCount; i++) { threads[i].join(); } } ``` By the way pr.getLastPrime() will return 0 in the case of no prime, so you might want to filter that out before adding it to your list. The PrimeRunnable has to absorb the work of adding to the final results list. Also, I think PrimeRunnable was actually broken by still having incrementing code in it. I think this is fixed, but I'm not actually compiling this. ``` public class PrimeRunnable implements Runnable { private int from; private List results; // shared but thread-safe public PrimeRunnable(int from, List results) { this.from = from; this.results = results; } public void isPrime(int number) { for(int i = 2;i < from;i++) { if((number % i) == 0) { return; } } // found prime, add to shared results this.results.add(number); } public void run() { isPrime(from); // don't increment, just check one number } } ``` Running 10000 threads in parallel is not a good idea. It's a much better idea to create a reasonably sized fixed thread pool and have them pull work from a shared queue. Basically every worker pulls tasks from the same queue, works on them and saves the results somewhere. The closest port of this with Java 5+ is to use an ExecutorService backed by a thread pool. You could also use a CompletionService which combines an ExecutorService with a result queue. An ExecutorService version would look like: ``` public static void main(String[] args) throws InterruptedException { int primeStart = 5; // Make thread-safe list for adding results to List list = Collections.synchronizedList(new ArrayList()); int threadCount = 16; // Experiment with this to find best on your machine ExecutorService exec = Executors.newFixedThreadPool(threadCount); int workCount = 10000; // See how # of work is now separate from # of threads? for(int i = 0;i < workCount;i++) { // submit work to the svc for execution across the thread pool exec.execute(new PrimeRunnable(primeStart+i, list)); } // Stop accepting new tasks, then wait for all tasks to be done or timeout exec.shutdown(); exec.awaitTermination(1, TimeUnit.DAYS); } ``` Hope that gave you some ideas. And I hope the last example seemed a lot better than the first.
You can test this better by making the exact code in your first example run with threads. Sub your main method with this: ``` private static int currentPrime; public static void main(String[] args) throws InterruptedException { for (currentPrime = 0; currentPrime < 10000; currentPrime++) { Thread t = new Thread(new Runnable() { public void run() { getNextPrime(currentPrime); }}); t.start(); t.join(); } } ``` This will run in the same time as the original. To answer your "join" question: yes, other threads can be running in the background when you use "join", but in this particular case you will only have one active thread at a time, because you are blocking the creation of new threads until the last thread is done executing.
comparison of code performance, threaded versus non-threaded
[ "", "java", "multithreading", "concurrency", "" ]
I have a simple Google App Engine app, that I wrote using ordinary strings. I realize I want to make it handle unicode. Are there any gotchas with this? I'm thinking of all the strings that I currently already have in the live database. (From real users who I don't want to upset.)
[Alexander Kojevnikov](https://stackoverflow.com/questions/245094/upgrading-google-application-engine-program-to-use-unicode#245159) said: "The datastore internally keeps all strings in unicode." In other words, your application is already using unicode everywhere. Thank the google folks for a sensible API. No further work required.
The datastore internally keeps all strings [in unicode](http://code.google.com/appengine/docs/datastore/typesandpropertyclasses.html#str).
Upgrading Google Application Engine program to use unicode
[ "", "python", "google-app-engine", "unicode", "" ]
Can dynamic variables in C# 4.0 be members on a class or passed into or returned from methods? var from C# 3.0 couldn't but I haven't seen any mention anywhere of whether it is possible or not with dynamic.
All of the above. I tried them out in the VPC and was able to do all of these. See the ['New Features in C#' document here](http://code.msdn.microsoft.com/csharpfuture)
Yes. There's a big difference between `var` and `dynamic`. `var` just means "let the compiler infer the real type of the variable". `dynamic` *is* the type of the variable - so anywhere you can specify a type, you can specify `dynamic` instead, as I understand it. (I'm sure there are some exceptions to this, but that's the basic idea.) EDIT: Chris Burrow's [first blog entry on `dynamic`](http://blogs.msdn.com/cburrows/archive/2008/10/27/c-dynamic.aspx) (there's a [second one](http://blogs.msdn.com/cburrows/archive/2008/10/28/c-dynamic-part-ii.aspx) already; expect more soon) gives an example class which uses `dynamic` all over the place.
New dynamic variable in C# 4.0
[ "", "c#", "c#-4.0", "" ]
I have a generic list...

```
public List<ApprovalEventDto> ApprovalEvents
```

The ApprovalEventDto has ``` public class ApprovalEventDto { public string Event { get; set; } public DateTime EventDate { get; set; } } ``` How do I sort the list by the event date?
You can use List.Sort() as follows: ``` ApprovalEvents.Sort((lhs, rhs) => (lhs.EventDate.CompareTo(rhs.EventDate))); ```
``` using System.Linq; List<ApprovalEventDto> Sort(List<ApprovalEventDto> list) { return list.OrderBy(x => x.EventDate).ToList(); } ```
How do I sort a generic list?
[ "", "c#", "generics", "" ]
I was just writing some quick code and noticed this compiler error > Using the iteration variable in a lambda expression may have unexpected results. > Instead, create a local variable within the loop and assign it the value of the iteration variable. I know what it means and I can easily fix it, not a big deal. But I was wondering why it is a bad idea to use an iteration variable in a lambda? What problems can I cause later on?
Consider this code: ``` List<Action> actions = new List<Action>(); for (int i = 0; i < 10; i++) { actions.Add(() => Console.WriteLine(i)); } foreach (Action action in actions) { action(); } ``` What would you expect this to print? The obvious answer is 0...9 - but actually it prints 10, ten times. It's because there's just one variable which is captured by all the delegates. It's this kind of behaviour which is unexpected. EDIT: I've just seen that you're talking about VB.NET rather than C#. I believe VB.NET has even more complicated rules, due to the way variables maintain their values across iterations. [This post by Jared Parsons](https://devblogs.microsoft.com/vbteam/closures-in-vb-part-5-looping/) gives some information about the kind of difficulties involved - although it's back from 2007, so the actual behaviour may have changed since then.
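The capture behaviour is not unique to C#; the same "one shared variable" effect, and the fix the warning recommends (copy the value into a fresh per-iteration local), can be sketched in a couple of lines of Python:

```python
# Pitfall: every closure sees the same loop variable, so all report its final value
broken = [lambda: i for i in range(10)]

# Fix in the spirit of the compiler warning: bind the current value to a
# per-iteration local (done here via a default argument)
fixed = [lambda i=i: i for i in range(10)]

print([f() for f in broken])  # [9, 9, 9, 9, 9, 9, 9, 9, 9, 9]
print([f() for f in fixed])   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Same idea, different syntax: one captured variable means one value for every delegate.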
Assuming you mean C# here. It's because of the way the compiler implements closures. Using an iteration variable *can* cause a problem with accessing a modified closure (note that I said 'can' not 'will' cause a problem because sometimes it doesn't happen depending on what else is in the method, and sometimes you actually want to access the modified closure). More info: <http://blogs.msdn.com/abhinaba/archive/2005/10/18/482180.aspx> Even more info: <http://blogs.msdn.com/oldnewthing/archive/2006/08/02/686456.aspx> <http://blogs.msdn.com/oldnewthing/archive/2006/08/03/687529.aspx> <http://blogs.msdn.com/oldnewthing/archive/2006/08/04/688527.aspx>
Why is it bad to use an iteration variable in a lambda expression
[ "", "c#", "vb.net", "lambda", "iteration", "warnings", "" ]
I have discovered through trial and error that the MATLAB engine function is not completely thread safe. Does anyone know the rules? Discovered through trial and error: On Windows, the connection to MATLAB is via COM, so the COM Apartment threading rules apply. All calls must occur in the same thread, but multiple connections can occur in multiple threads as long as each connection is isolated. From the answers below, it seems that this is not the case on UNIX, where calls can be made from multiple threads as long as the calls are made serially.
When I first started using the engine, I didn't run across any documentation on thread safety, so I assumed that it was not thread-safe. I use a C++ class to synchronize access to an engine instance. For more parallel processing designs, I instantiate multiple instances of the engine class. (edit) I'm using MATLAB R14 on Solaris. I open the engine using the 'engOpen' call, and close it using 'engClose'. My platform does not crash when the Close is called by a different thread than the one that called Open.
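The "C++ class to synchronize access" mentioned above can be as small as a mutex-guarded wrapper. A sketch (note: `Engine` and `engine_call` below are stand-ins so the example is self-contained; the real class would hold the handle returned by `engOpen` and forward to engine functions such as `engEvalString`, and this version uses C++11's `std::mutex` where the original era would have used a platform lock):

```cpp
#include <mutex>
#include <string>

// Stand-in for an opaque MATLAB engine handle and one of its calls.
struct Engine { int calls = 0; };
static int engine_call(Engine* ep, const std::string& /*cmd*/) { return ++ep->calls; }

class SynchronizedEngine {
public:
    explicit SynchronizedEngine(Engine* ep) : ep_(ep) {}

    // Every engine call goes through one mutex, so only one thread
    // talks to this engine instance at a time.
    int eval(const std::string& cmd) {
        std::lock_guard<std::mutex> lock(mutex_);
        return engine_call(ep_, cmd);
    }

private:
    Engine* ep_;
    std::mutex mutex_;
};
```

For parallel designs, instantiate several of these, each wrapping its own engine instance.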
From the [documentation](http://www.mathworks.com/access/helpdesk/help/techdoc/matlab_external/f29148.html), > MATLAB libraries are not thread-safe. > If you create multithreaded > applications, make sure only one > thread accesses the engine > application.
Thread safety of Matlab engine API
[ "", "c++", "matlab", "thread-safety", "matlab-engine", "" ]
I'm looking for best practices for performing strict (whitelist) validation/filtering of user-submitted HTML. Main purpose is to filter out XSS and similar nasties that may be entered via web forms. Secondary purpose is to limit breakage of HTML content entered by non-technical users e.g. via WYSIWYG editor that has an HTML view. I'm considering using [HTML Purifier](http://htmlpurifier.org), or rolling my own by using an HTML DOM parser to go through a process like HTML(dirty)->DOM(dirty)->filter->DOM(clean)->HTML(clean). Can you describe successes with these or any easier strategies that are also effective? Any pitfalls to watch out for?
I've tested all exploits I know on HTML Purifier and it did very well. It filters not only HTML, but also CSS and URLs. Once you narrow elements and attributes to innocent ones, the pitfalls are in attribute content – `javascript:` pseudo-URLs (IE allows tab characters in protocol name - `java&#09;script:` still works) and CSS properties that trigger JS. Parsing of URLs may be tricky, e.g. these are valid: `http://spoof.com:xxx@evil.com` or `//evil.com`. Internationalized domains (IDN) can be written in two ways – Unicode and punycode. Go with [HTML Purifier](http://htmlpurifier.org) – it has most of these worked out. If you just want to fix broken HTML, then use HTML Tidy (it's available as PHP extension).
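Those URL examples are easy to check for yourself; a quick sketch (in Python, purely to illustrate the parsing pitfall; the filtering itself would of course stay in PHP):

```python
from urllib.parse import urlparse

# Looks like a link to spoof.com, but the real host is everything after the '@'
tricky = "http://spoof.com:xxx@evil.com"
print(urlparse(tricky).hostname)    # evil.com

# Scheme-relative URLs carry a host too, inheriting the page's scheme
relative = "//evil.com/payload.js"
print(urlparse(relative).hostname)  # evil.com
```

Any whitelist that compares the text before the first `/` or `:` to a list of allowed hosts will be fooled by both forms.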
User-submitted HTML isn't always valid, or indeed complete. Browsers will interpret a wide range of invalid HTML and you should make sure you can catch it. Also be aware of the valid-looking: ``` <img src="http://www.mysite.com/logout" /> ``` and ``` <a href="javascript:alert('xss hole');">click</a> ```
Strict HTML Validation and Filtering in PHP
[ "", "php", "html", "security", "validation", "xss", "" ]
How to create a table with a timestamp column that defaults to `DATETIME('now')`? Like this: ``` CREATE TABLE test ( id INTEGER PRIMARY KEY AUTOINCREMENT, t TIMESTAMP DEFAULT DATETIME('now') ); ``` This gives an error.
As of [version 3.1.0](https://www.sqlite.org/releaselog/3_1_0.html) you can use `CURRENT_TIMESTAMP` with the [DEFAULT](https://www.sqlite.org/lang_createtable.html#the_default_clause) clause: > If the default value of a column is CURRENT\_TIME, CURRENT\_DATE or CURRENT\_TIMESTAMP, then the value used in the new row is a text representation of the current UTC date and/or time. For CURRENT\_TIME, the format of the value is "HH:MM:SS". For CURRENT\_DATE, "YYYY-MM-DD". The format for CURRENT\_TIMESTAMP is "YYYY-MM-DD HH:MM:SS". ``` CREATE TABLE test ( id INTEGER PRIMARY KEY AUTOINCREMENT, t TIMESTAMP DEFAULT CURRENT_TIMESTAMP ); ```
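A quick way to see the default in action from any SQLite binding, here with Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE test (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        t  TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")
# Insert a row without mentioning t at all; the default fills it in
conn.execute("INSERT INTO test DEFAULT VALUES")
(ts,) = conn.execute("SELECT t FROM test").fetchone()
print(ts)  # e.g. 2008-10-28 14:30:00  (UTC, "YYYY-MM-DD HH:MM:SS")
conn.close()
```

Note the value is UTC; for local time you would need the `datetime('now','localtime')` expression-default instead.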
according to dr. hipp in a recent list post: ``` CREATE TABLE whatever( .... timestamp DATE DEFAULT (datetime('now','localtime')), ... ); ```
How to create timestamp column with default value 'now'?
[ "", "sql", "sqlite", "" ]
I've been reading many a tutorial/article on unmanaged DLLs in C++. For the life of me, however, I cannot seem to grasp the concept. I'm easily confused by the seeming disagreement about whether it needs a header file, how to export it, whether I need a .lib file and what have you. So, let's assume I have just a function like so: ``` public int calculateSquare(int num) { return num*num; } ``` Ignoring the actual code, what do I require to make this simple function, by itself, into a DLL which I can then call? Do I just add \_\_dllexport or whatever it is to the first line or do I require a header? I am perplexed by all of this.
I cannot stress this enough: the C++ compiler does not see header files. After the preprocessor is done, there's just one big source file ( also called the compilation unit ). So strictly you don't need a header to export this function from a dll. What you do need is some form of conditional compilation to export the function in the dll that you are compiling and to import it in the client code. Typically this is done with a combination of macros and header files. You create a macro called MYIMPORTEXPORT and through the use of macro conditional statements you make it work like \_\_declspec ( dllexport ) in the dll, and \_\_declspec( dllimport ) in the client code. in file MyImportExport.h ``` #ifdef SOME_CONDITION #define MYIMPORTEXPORT __declspec( dllexport ) #else #define MYIMPORTEXPORT __declspec( dllimport ) #endif ``` in file MyHeader.h (declaration only; the definition lives in the dll's .cpp file) ``` #include <MyImportExport.h> MYIMPORTEXPORT int calculateSquare(int num); ``` in dll .cpp file ``` #define SOME_CONDITION #include <MyHeader.h> int calculateSquare(int num) { return num*num; } ``` in client code .cpp file ``` #include <MyHeader.h> ``` Of course you also need to signal to the linker that you are building a dll with the [/DLL option](http://msdn.microsoft.com/en-us/library/y0zzbyt4(VS.80).aspx). The build process will also make a .lib file, this is a static lib - called the stub in this case - which the client code needs to link to as if it were linking to a real static lib. Automagically, the dll will be loaded when the client code is run. Of course the dll needs to be found by the OS through its lookup mechanism, which means you cannot put the dll just anywhere, but in a specific location. [Here](http://msdn.microsoft.com/en-us/library/ms682586.aspx) is more on that. A very handy tool to see whether you exported the correct function from the dll, and whether the client code is correctly importing is [dumpbin](http://support.microsoft.com/kb/177429). Run it with /EXPORTS and /IMPORTS respectively.
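For what it's worth, the same macro dance ports to non-Microsoft toolchains; a hedged sketch (GCC/Clang use visibility attributes rather than `__declspec`, and the `BUILDING_MYLIB` name is just an example of "SOME_CONDITION" - adjust both to your build system). The three "files" are shown in one listing for brevity:

```cpp
// MyImportExport.h -- one macro, three cases (sketch; exact flags vary by toolchain)
#if defined(_WIN32)
  #if defined(BUILDING_MYLIB)          // defined only while compiling the DLL itself
    #define MYIMPORTEXPORT __declspec(dllexport)
  #else
    #define MYIMPORTEXPORT __declspec(dllimport)
  #endif
#else
  // ELF shared objects export everything by default; this makes it explicit,
  // which matters if you build with -fvisibility=hidden
  #define MYIMPORTEXPORT __attribute__((visibility("default")))
#endif

// MyHeader.h -- declaration only; the body lives in the library's .cpp file
MYIMPORTEXPORT int calculateSquare(int num);

// library .cpp -- the one place the function is defined
int calculateSquare(int num) { return num * num; }
```

The shape is identical either way: macro in a shared header, declaration exported/imported through it, definition compiled once into the library.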
QBziZ' answer is right enough. See [Unmanaged DLLs in C++](https://stackoverflow.com/questions/236035/unmanaged-dlls-in-c#236052) To complete it: *In C++, if you need to use a symbol, you must tell the compiler it exists, and often, its prototype*. In other languages, the compiler will just explore the library on its own, and find the symbol, *et voilà*. In C++, you must tell the compiler. ## See a C/C++ header as a book table of contents The best way is to put in some common place the needed code. The "interface", if you want. This is usually done in an header file, called header because this is usually not an independent source file. The header is only a file whose aim is to be included (i.e. copy/pasted by the preprocessor) into true source files. In substance, it seems you have to declare twice a symbol (function, class, whatever). Which is almost an heresy when compared to other languages. You should see it as a book, with a summary table, or an index. In the table, you have all the chapters. In the text, you have the chapters and their content. And sometimes, you're just happy you have the chapter list. In C++, this is the header. ## What about the DLL? So, back to the DLL problem: The aim of a DLL is to export symbols that your code will use. So, in a C++ way, you must both export the code at compilation (i.e., in Windows, use the \_\_declspec, for example) and "publish" a table of what is exported (i.e. have "public" headers containing the exported declarations).
Unmanaged DLLs in C++
[ "", "c++", "dll", "unmanaged", "" ]
I am currently porting a lot of code from an MFC-based application to a DLL for client branding purposes. I've come across an unusual problem. This bit of code is the same in both systems: ``` // ... CCommsProperties props; pController->GetProperties( props ); if (props.handshake != HANDSHAKE_RTS_CTS) { props.handshake = HANDSHAKE_RTS_CTS; pController->RefreshCommProperties( props ); } // ... in another file: void CControllerSI::RefreshCommProperties ( const CCommsProperties& props ) { // ... code ... } ``` CommProperties is a wrapper for the comm settings, their serialization, etc., and pController is of type ControllerSI which itself is a layer between the actual Comms and the Application. On the original MFC version the setting of handshake to RTS-CTS sticks, but when running as the DLL version it resets itself to 0 as soon as the function is entered. The code is contained entirely in the DLL section of the code, so there are no boundaries. The main differences between the original and the new modules are that the variables that call the various dialogs have been removed, along with the #includes that went with them. I've lost an afternoon to this and I don't really want to lose any more...
After Saratv's posting, I decided to ditch what I had done and restart it from working source again. This time, however, it works... I guess I will never know why passing a structure caused it to change.
It is difficult to see what is wrong from the given code alone. Some general pointers: 1. The object is initialized and processed in different binary modules with incompatible linking (such as different C runtimes) 2. If the class/structure is shared, it is not exported/imported correctly. 3. The class(es) are defined in more than one place, and you are not including the correct definitions. The above three are the most likely causes, especially if all fields are reset to their default initialized values. If this is only happening with one or two fields, the structure may be poorly aligned and you may need to rearrange the fields to correct this (check that in release too). In general, I am tempted to hypothesize that the object you have initialized is not the one `RefreshCommProperties()` sees; the reason may be one of the three above.
Referenced structure not 'sticking'
[ "", "c++", "mfc", "" ]
In C# I could easily write the following: ``` string stringValue = string.IsNullOrEmpty( otherString ) ? defaultString : otherString; ``` Is there a quick way of doing the same thing in Python or am I stuck with an 'if' statement?
In Python 2.5, there is ``` A if C else B ``` which behaves a lot like ?: in C. However, it's frowned upon for two reasons: readability, and the fact that there's usually a simpler way to approach the problem. For instance, in your case: ``` stringValue = otherString or defaultString ```
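One caveat with the `or` shortcut: it treats *every* falsy value (0, [], {}, etc.) as "missing", which is broader than C#'s IsNullOrEmpty check. A quick sketch of both forms (names here are illustrative):

```python
def pick(other, fallback="fallback"):
    # Conditional expression: the closest literal translation of the C# ternary
    return fallback if not other else other

print(pick("hello"))  # hello
print(pick(""))       # fallback
print(pick(None))     # fallback

# The idiomatic short form -- note it also replaces 0, [], {} and so on
value = 0 or "fallback"
print(value)          # fallback (0 is falsy; that may or may not be what you want)
```

For strings specifically the two behave the same, which is why `otherString or defaultString` is usually fine here.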
@Dan

> ```
> if otherString:
>     stringValue = otherString
> else:
>     stringValue = defaultString
> ```
>
> This type of code is longer and more expressive, but also more readable

Well yes, it's longer. Not so sure about “more expressive” and “more readable”. At the very least, your claim is disputable. I would even go as far as saying it's downright wrong, for two reasons.

First, your code emphasizes the decision-making (rather extremely). On the other hand, the conditional operator emphasizes something else, namely the value (resp. the assignment of said value). And this is *exactly* what the writer of this code wants. The decision-making is really rather a by-product of the code. The important part here is the assignment operation. Your code hides this assignment in a lot of syntactic noise: the branching. Your code is less expressive because it shifts the emphasis from the important part.

Even then your code would probably trump some obscure ASCII art like `?:`. An inline-`if` would be preferable. Personally, I don't like the variant introduced with Python 2.5 because it's backwards. I would prefer something that reads in the same flow (direction) as the C ternary operator but uses words instead of ASCII characters:

```
C = if cond then A else B
```

This wins hands down. C and C# unfortunately don't have such an expressive statement. But (and this is the second argument), the ternary conditional operator of C languages is so long established that it has become an idiom in itself. The ternary operator is as much part of the language as the “conventional” `if` statement. Because it's an idiom, anybody who knows the language immediately reads this code right. Furthermore, it's an extremely short, concise way of expressing these semantics. In fact, it's the shortest imaginable way. It's extremely expressive because it doesn't obscure the essence with needless noise.
Finally, Jeff Atwood has written the perfect conclusion to this: [**The best code is no code at all**](http://www.codinghorror.com/blog/archives/000878.html).
How can I closely achieve ?: from C++/C# in Python?
[ "", "python", "syntax", "ternary-operator", "syntax-rules", "" ]
I'd like to open the IntelliSense window without typing a character and then backspacing it. I can't seem to remember the shortcut for this. What is it?
`Ctrl` + `Space`? Also, go to [Tools -> Options -> Environment -> Keyboard](https://msdn.microsoft.com/en-us/library/5zwses53.aspx) or [Default Keyboard Shortcuts in Visual Studio](https://msdn.microsoft.com/en-us/library/da5kh0wa.aspx), you can then search for commands and see what is assigned to that (and remap).
`Ctrl` + `Space` for normal Intellisense, and `Ctrl` + `Shift` + `Space` for parameter Intellisense (e.g. to see what overloads are available in a method call which you've actually already filled in). I find the latter very handy :)
What's the default intellisense shortcut in vs2008?
[ "", "c#", "visual-studio", "visual-studio-2008", "keyboard-shortcuts", "" ]
I am working on a windows service that polls for a connection to a network enabled devices every 15 seconds. If the service is not able to connect to a device, it throws an exception and tries again in 15 seconds. All of this works great. But, lets say one of the devices is down for a day or more. I am filling up my exception log with the same exception every 15 seconds. Is there a standard way to prevent an exception from being written to the event log if the exception being thrown hasn't changed in the last x number of hours?
One good way to achieve what you need is to employ the Circuit Breaker design pattern. I first read about this in the book "Release It! Design and Deploy Production Ready Software" by Michael T. Nygard, from the Pragmatic Press, p104-107. The idea of the circuit breaker is that it sits in the path of the connection between systems, passing connections through, watching for the "break condition". For example, it might trigger only if five connections in a row have all failed. Once the circuit has broken, all calls through the circuit breaker fail immediately, without consulting the external service. This continues until a timeout occurs, when the breaker goes into a half-open state. The next call is attempted - a failure results in the timeout being reset, success in the breaker closing and the system resuming operation. A quick google found [a post by Tim Ross](http://timross.wordpress.com/2008/02/10/implementing-the-circuit-breaker-pattern-in-c/) that reads well and goes into more detail. In your case, you could use a circuit breaker with a timeout of 10 minutes, and a trigger of 5 failures. Your log files would then contain, in the case of an all day failure, five exceptions logged for the original problem, and then just six more an hour (compared with 240 at 15 second intervals), indicating that the problem persists. Depending on your requirements, you could include a manual "reset" of the circuit breaker, or you could just leave it to automatically reset when the 10 minute timeout reveals things are back to normal. This could be useful - generally the fewer things the sysadmins need to fuss with, the better they like it.
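The essentials of the pattern fit in a short class; a hedged sketch follows (language-agnostic idea, shown in Python for brevity; the names, thresholds, and the injectable clock are illustrative, not from the book or Tim Ross's post, and a C# version would look much the same):

```python
import time

class CircuitBreaker:
    """Fail fast after `threshold` consecutive failures; retry after `timeout` seconds."""

    def __init__(self, threshold=5, timeout=600, clock=time.monotonic):
        self.threshold = threshold
        self.timeout = timeout
        self.clock = clock        # injectable so tests can fake the passage of time
        self.failures = 0
        self.opened_at = None     # None => circuit closed (calls allowed through)

    def call(self, func):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one attempt through
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0          # any success closes the circuit again
        return result
```

In the device-polling scenario, the 15-second poll would go through `call(...)`; once the breaker trips, the log sees one fast-fail per retry window instead of a full connection timeout and stack trace every 15 seconds.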
Maybe have a workflow where, if the poll fails a certain number of times, the polling interval is increased. E.g. poll every 15 seconds for 3 tries; if it still fails, increase the polling interval to one minute; if it fails n times, increase the time to one hour. To be honest, the workflow above doesn't really solve your problem. If I were you, I would reverse the workflow. Instead of the server polling for devices, why not do it the other way round? When a device connects to a networked machine, your client-side service sends a message to the server, so that the server knows that the device is connected and alive. Hope this helps... RWendi
How do I conditionally suppress application exceptions written to the event log?
[ "", "c#", "exception", "windows-services", "event-log", "" ]
**Question** I have an application written in Java. It is designed to run on a Linux box standalone. I am trying to spawn a new *firefox* window. However, *firefox* never opens. It always has a shell exit code of 1. I can run this same code with *gnome-terminal* and it opens fine. **Background** So, here is its initialization process: 1. Start X "Xorg :1 -br -terminate -dpms -quiet vt7" 2. Start Window Manager "metacity --display=:1 --replace" 3. Configure resources "xrdb -merge /etc/X11/Xresources" 4. Become a daemon and disconnect from controlling terminal Once the program is up an running, there is a button the user can click that should spawn a firefox window. Here is my code to do that. Remember X is running on display :1. **Code** ``` public boolean openBrowser() { try { Process oProc = Runtime.getRuntime().exec( "/usr/bin/firefox --display=:1" ); int bExit = oProc.waitFor(); // This is always 1 for some reason return true; } catch ( Exception e ) { oLogger.log( Level.WARNING, "Open Browser", e ); return false; } } ```
after having read the various answers and various comments (from the questioner), here's what I would do 1) try this java approach <http://java.sun.com/j2se/1.5.0/docs/api/java/lang/ProcessBuilder.html> ``` ProcessBuilder pb = new ProcessBuilder("myCommand", "myArg1", "myArg2"); Map<String, String> env = pb.environment(); env.put("VAR1", "myValue"); env.remove("OTHERVAR"); env.put("VAR2", env.get("VAR1") + "suffix"); pb.directory(new File("myDir")); Process p = pb.start(); ``` see more about this class: <http://java.sun.com/developer/JDCTechTips/2005/tt0727.html#2> <http://www.javabeat.net/tips/8-using-the-new-process-builder-class.html> 2) try doing this (launching firefox) from C/C++/ruby/python and see if that is succeeding. 3) if all else fails, I would launch a shell program and that shell program would launch firefox!!
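A complete, runnable sketch of the ProcessBuilder approach (with `/bin/echo` standing in for the firefox command so it runs anywhere, and output captured so a non-zero exit is reported instead of silently swallowed):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.List;

public class LaunchSketch {
    static String run(List<String> command) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.environment().put("DISPLAY", ":1");  // same idea as --display=:1
        pb.redirectErrorStream(true);           // merge stderr so nothing is lost
        Process p = pb.start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) out.append(line);
        }
        int exit = p.waitFor();                 // 0 on success
        if (exit != 0) throw new IllegalStateException("exit code " + exit + ": " + out);
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in command; the real call would be
        // List.of("/usr/bin/firefox", "--display=:1")
        System.out.println(run(List.of("/bin/echo", "hello")));
    }
}
```

Draining stdout/stderr matters here: whatever firefox writes before exiting with code 1 is exactly the diagnostic the original code was throwing away.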
If you can narrow it down to Java 6, you can use the desktop API: <http://java.sun.com/developer/technicalArticles/J2SE/Desktop/javase6/desktop_api/> Should look something like: ``` if (Desktop.isDesktopSupported()) { Desktop desktop = Desktop.getDesktop(); if (desktop.isSupported(Desktop.Action.BROWSE)) { try { desktop.browse(new URI("http://localhost")); } catch(IOException ioe) { ioe.printStackTrace(); } catch(URISyntaxException use) { use.printStackTrace(); } } } ```
Open Browser window from Java program
[ "", "java", "linux", "browser", "x11", "daemon", "" ]
I have a Spring MVC web app which uses Spring Security. I want to know the username of the currently logged in user. I'm using the code snippet given below . Is this the accepted way? I don't like having a call to a static method inside this controller - that defeats the whole purpose of Spring, IMHO. Is there a way to configure the app to have the current SecurityContext, or current Authentication, injected instead? ``` @RequestMapping(method = RequestMethod.GET) public ModelAndView showResults(final HttpServletRequest request...) { final String currentUser = SecurityContextHolder.getContext().getAuthentication().getName(); ... } ```
If you are using [Spring 3](http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/mvc.html#mvc-ann-requestmapping-arguments), the easiest way is: ``` @RequestMapping(method = RequestMethod.GET) public ModelAndView showResults(final HttpServletRequest request, Principal principal) { final String currentUser = principal.getName(); } ```
A lot has changed in the Spring world since this question was answered. Spring has simplified getting the current user in a controller. For other beans, Spring has adopted the suggestions of the author and simplified the injection of 'SecurityContextHolder'. More details are in the comments. --- This is the solution I've ended up going with. Instead of using `SecurityContextHolder` in my controller, I want to inject something which uses `SecurityContextHolder` under the hood but abstracts away that singleton-like class from my code. I've found no way to do this other than rolling my own interface, like so: ``` public interface SecurityContextFacade { SecurityContext getContext(); void setContext(SecurityContext securityContext); } ``` Now, my controller (or whatever POJO) would look like this: ``` public class FooController { private final SecurityContextFacade securityContextFacade; public FooController(SecurityContextFacade securityContextFacade) { this.securityContextFacade = securityContextFacade; } public void doSomething(){ SecurityContext context = securityContextFacade.getContext(); // do something w/ context } } ``` And, because of the interface being a point of decoupling, unit testing is straightforward. 
In this example I use Mockito: ``` public class FooControllerTest { private FooController controller; private SecurityContextFacade mockSecurityContextFacade; private SecurityContext mockSecurityContext; @Before public void setUp() throws Exception { mockSecurityContextFacade = mock(SecurityContextFacade.class); mockSecurityContext = mock(SecurityContext.class); stub(mockSecurityContextFacade.getContext()).toReturn(mockSecurityContext); controller = new FooController(mockSecurityContextFacade); } @Test public void testDoSomething() { controller.doSomething(); verify(mockSecurityContextFacade).getContext(); } } ``` The default implementation of the interface looks like this: ``` public class SecurityContextHolderFacade implements SecurityContextFacade { public SecurityContext getContext() { return SecurityContextHolder.getContext(); } public void setContext(SecurityContext securityContext) { SecurityContextHolder.setContext(securityContext); } } ``` And, finally, the production Spring config looks like this: ``` <bean id="myController" class="com.foo.FooController"> ... <constructor-arg index="1"> <bean class="com.foo.SecurityContextHolderFacade"> </constructor-arg> </bean> ``` It seems more than a little silly that Spring, a dependency injection container of all things, has not supplied a way to inject something similar. I understand `SecurityContextHolder` was inherited from acegi, but still. The thing is, they're so close - if only `SecurityContextHolder` had a getter to get the underlying `SecurityContextHolderStrategy` instance (which is an interface), you could inject that. In fact, I even [opened a Jira issue](http://jira.springsource.org/browse/SEC-1188) to that effect. One last thing - I've just substantially changed the answer I had here before. Check the history if you're curious but, as a coworker pointed out to me, my previous answer would not work in a multi-threaded environment. 
The underlying `SecurityContextHolderStrategy` used by `SecurityContextHolder` is, by default, an instance of `ThreadLocalSecurityContextHolderStrategy`, which stores `SecurityContext`s in a `ThreadLocal`. Therefore, it is not necessarily a good idea to inject the `SecurityContext` directly into a bean at initialization time - it may need to be retrieved from the `ThreadLocal` each time, in a multi-threaded environment, so the correct one is retrieved.
When using Spring Security, what is the proper way to obtain current username (i.e. SecurityContext) information in a bean?
[ "", "java", "spring", "spring-mvc", "spring-security", "" ]
I often used Dictionary in C# 2.0 with a string key containing a unique identifier. I am learning C# 3.0+ and it seems that I can now simply use a List and do LINQ on that object to get the specific object (with `.Where()`). So, if I understand correctly, has the Dictionary class lost its purpose?
No, a dictionary is still more efficient for getting things back out given a key. With a list you still have to iterate through it to find what you want; a dictionary does a direct lookup.
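The difference is easy to measure. A quick sketch (in Python purely to illustrate; the asymptotics are the same for C#'s `List<T>` scan vs `Dictionary<K,V>` lookup):

```python
import timeit

n = 100_000
keys = ["id%d" % i for i in range(n)]
as_list = keys                  # like List<T>: membership test scans item by item, O(n)
as_dict = dict.fromkeys(keys)   # like Dictionary<K,V>: membership test hashes the key, O(1)

target = keys[-1]               # worst case for the list scan
list_time = timeit.timeit(lambda: target in as_list, number=20)
dict_time = timeit.timeit(lambda: target in as_dict, number=20)

print("list scan:   %.5fs" % list_time)
print("dict lookup: %.5fs" % dict_time)
# On a typical machine the dict lookup is orders of magnitude faster here.
```

A LINQ `.Where()` over a `List<T>` is the scan case, however concise the syntax looks.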
If you just have a List, then doing an LINQ select will scan through every item in the list comparing it against the one you are looking for. The Dictionary however computes a hash code of the string you are looking for (returned by the GetHashCode method). This value is then used to look up the string more efficiently. For more info on how this works see [Wikipedia](http://en.wikipedia.org/wiki/Hash_table). If you have more than a few strings, the initial (List) method will start to get painfully slow.
Dictionary<string, MyObject> or List<MyObject> with C# 3.5?
[ "", "c#", ".net", "collections", "c#-3.0", "" ]