I am using wxWidgets and Visual C++ to create functionality similar to using Unix "tail -f" with rich formatting (colors, fonts, images) in a GUI. I am targeting both wxMSW and wxMac. The obvious answer is to use wxTextCtrl with wxTE\_RICH, using calls to wxTextCtrl::SetDefaultStyle() and wxTextCtrl::WriteText(). However, on my 3 GHz workstation, compiled in release mode, I am unable to keep tailing a log that grows by roughly one line per millisecond, eventually falling behind. For each line, I am incurring: 1. Two calls to SetDefaultStyle() 2. Two calls to WriteText() 3. A call to Freeze() and Thaw() the widget When running this, my CPU goes to 100% on one core using wxMSW after filling up roughly 20,000 lines. The program is visibly slower once it reaches a certain threshold, falling further behind. I am open to using other controls (wxListCtrl, wxRichTextCtrl, etc.).
Derive from wxVListBox. From the docs: > wxVListBox is a listbox-like control with the following two main differences from a regular listbox: it can have an arbitrarily huge number of items because it doesn't store them itself but uses OnDrawItem() callback to draw them (so it is a Virtual listbox) and its items can have variable height as determined by OnMeasureItem() (so it is also a listbox with the lines of Variable height).
Have you considered limiting the number of lines in the view? When we had a similar issue, we just made sure that never more than 10,000 lines were in the view. If more lines come in at the bottom, we remove lines at the top. This was not using wxWidgets, it was using a native Cocoa UI on Mac, but the issue is the same. If a styled text view (with colors, formatting and pretty printing) grows too large, appending more data at the bottom becomes pretty slow.
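The trimming idea is easy to sketch; for illustration (in Python rather than the question's C++), a `collections.deque` with `maxlen` drops the oldest lines automatically as new ones are appended, which is exactly the behavior described above:

```python
from collections import deque

# Capped line buffer: once maxlen is reached, appending at the
# bottom evicts the oldest line at the top automatically.
view = deque(maxlen=5)  # use something like 10000 in a real viewer
for n in range(8):
    view.append(f"line {n}")

print(list(view))  # only the 5 most recent lines remain
```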
How do I append a large amount of rich content (images, formatting) quickly to a control without using tons of CPU?
[ "c++", "performance", "wxwidgets" ]
I'm working on some code that, in its business and data tiers, uses a pattern of signalling errors via events, e.g. ``` resource = AllocateLotsOfMemory(); if (SomeCondition()) { OnOddError(new OddErrorEventArgs(resource.StatusProperty)); resource.FreeLotsOfMemory(); return; } ``` This looked superficially rather odd, especially as the code that calls this needs to hook into the events (there are four or five different ones!). The developer tells me that this way they can refer to the properties of the allocated resource in the error handling code, and that responsibility for cleaning up after the error is kept by this tier. Which makes some kind of sense. The alternative might be something like ``` resource = AllocateLotsOfMemory(); if (SomeCondition()) { BigObject temporary = resource.StatusProperty; resource.FreeLotsOfMemory(); throw new OddException(temporary); } ``` My questions are: 1. As this "`BigObject`" is freed when the exception object is released, do we need this pattern? 2. Has anyone else experience of this pattern? If so, what pitfalls did you find? What advantages are there? Thanks!
Take a look at [this post](http://www.udidahan.com/2008/08/25/domain-events-take-2/) by Udi Dahan. It's an elegant approach for dispatching domain events. The previous poster is correct in saying that you should not be using an event mechanism to recover from fatal errors, but it is a very useful pattern for notification in loosely coupled systems: ``` public class DomainEventStorage<ActionType> { public List<ActionType> Actions { get { var k = string.Format("Domain.Event.DomainEvent.{0}.{1}", GetType().Name, GetType().GetGenericArguments()[0]); if (Local.Data[k] == null) Local.Data[k] = new List<ActionType>(); return (List<ActionType>) Local.Data[k]; } } public IDisposable Register(ActionType callback) { Actions.Add(callback); return new DomainEventRegistrationRemover(() => Actions.Remove(callback) ); } } public class DomainEvent<T1> : IDomainEvent where T1 : class { private readonly DomainEventStorage<Action<T1>> _impl = new DomainEventStorage<Action<T1>>(); internal List<Action<T1>> Actions { get { return _impl.Actions; } } public IDisposable Register(Action<T1> callback) { return _impl.Register(callback); } public void Raise(T1 args) { foreach (var action in Actions) { action.Invoke(args); } } } ``` And to consume: ``` var fail = false; using(var ev = DomainErrors.SomethingHappened.Register(c => fail = true)) { //Do something with your domain here } ```
It seems odd to me too. There are a few advantages - such as allowing multiple "handlers" but the semantics are significantly different to normal error handling. In particular, the fact that it doesn't automatically get propagated up the stack concerns me - unless the error handlers themselves throw an exception, the logic is going to keep going as if everything was still okay when it should probably be aborting the current operation. Another way of thinking about this: suppose the method is meant to return a value, but you've detected an error early. What value do you return? Exceptions communicate the fact that there *is* no appropriate value to return...
Using events rather than exceptions to implement error handling
[ "c#", "design-patterns", "exception", "events" ]
Other than pasting a WSDL into the tech spec, what are the recommended ways to document a WCF web service before you begin coding?
We recently published a WCF based SOAP interface for third parties to integrate against. For each method we provided the following: * Method name * Request example (wire format dump) * Request parameters explanation * Response example (wire format dump) * Response parameters explanation * Caveats (gotchas) * History We also provide the WSDL. You may want to use Microsoft's [Disco.exe](http://msdn.microsoft.com/en-us/library/cy2a3ybs.aspx) for doing this. See also a related question about [obtaining the WSDL](https://stackoverflow.com/questions/286657/what-is-the-best-way-to-download-all-of-the-wsdl-files-exposed-by-a-wcf-service).
Document the contract interface as you would document any other interface. Describe the operations, pre-conditions, post-conditions, reasons for throwing a fault, etc.
How to document a WCF web-service in a technical spec?
[ "c#", "wcf", "web-services" ]
I've seen reference in some C# posted questions to a "using" clause. Does java have the equivalent?
Yes. Java 1.7 introduced the [try-with-resources](http://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) construct allowing you to write: ``` try(InputStream is1 = new FileInputStream("/tmp/foo"); InputStream is2 = new FileInputStream("/tmp/bar")) { /* do stuff with is1 and is2 */ } ``` ... just like a `using` statement. Unfortunately, before Java 1.7, Java programmers were forced to use try{ ... } finally { ... }. In Java 1.6: ``` InputStream is1 = new FileInputStream("/tmp/foo"); try{ InputStream is2 = new FileInputStream("/tmp/bar"); try{ /* do stuff with is1 and is 2 */ } finally { is2.close(); } } finally { is1.close(); } ```
Yes, since Java 7 you can rewrite: ``` InputStream is1 = new FileInputStream("/tmp/foo"); try{ InputStream is2 = new FileInputStream("/tmp/bar"); try{ /* do stuff with is1 and is2 */ } finally { is2.close(); } } finally { is1.close(); } ``` As ``` try(InputStream is1 = new FileInputStream("/tmp/foo"); InputStream is2 = new FileInputStream("/tmp/bar")) { /* do stuff with is1 and is2 */ } ``` The objects passed as parameters to the try statement should implement `java.lang.AutoCloseable`. Have a look at the [official docs](http://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html). For older versions of Java, check out [this answer](https://stackoverflow.com/a/141267/1860591) and [this answer](https://stackoverflow.com/a/143690/1860591).
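For comparison with C#'s `using` and Java's try-with-resources, Python expresses the same deterministic-cleanup idea with context managers and the `with` statement. A minimal sketch with a toy stand-in for a closeable resource (the `Resource` class is invented here purely for illustration):

```python
class Resource:
    # Toy stand-in for something that needs cleanup (file, socket, ...)
    def __init__(self, name):
        self.name = name
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.closed = True   # runs even if the body raised
        return False         # do not swallow exceptions

# Two resources in one statement, like the Java try-with-resources example
with Resource("foo") as r1, Resource("bar") as r2:
    pass  # work with r1 and r2

print(r1.closed, r2.closed)  # both cleaned up deterministically
```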
Does java have an equivalent to the C# "using" clause
[ "java", "syntax" ]
I have a class that encapsulates tcp socket communications with a server. For each command message sent to the server, the server will send back a response message that invariably contains a response code (OK, Fail). Using my class, each command can be executed either sync or async. There are basically two types of exceptions that can take place: A "fault" that is caused by a disconnect or some other non-recoverable error and an unexpected exception like "send buffer is full". In the event of a fault, no command can continue or try again or anything until the connection is re-established. In the event of a fail response or even an exception, the command can be tried again... So, right now my sync command methods return an enum that can have the following values: OK, Fail, Fault. If an exception occurs, it is simply raised to the calling thread (in a sync command). For async commands, the Result property enum value can contain an extra value: OK, Fail, Fault or Exception and the callback can access the actual exception object via the command object's Exception property. What do you think about this strategy? I am tempted to not raise exceptions at all for sync commands and just log the exception internally and return the 4th enum value instead because that's all I'll really do with exceptions in any given case anyway... Or, should I not be using result codes at all and just raise exceptions in all cases, even faults? Thanks.
I think your strategy is basically sound. Keep in mind that the purpose of Exceptions is to deal with exceptional conditions. The closer to the source of the problem, the better. In your case, it appears that your strategy is something like "It didn't work right now. Let's retry". I don't see a reason to really raise exceptions. If dealing with a closed socket was something that required a totally different flow in your code, then maybe exceptions would make sense. From your description, that's not really the case. My philosophy on Exceptions is that they should be for exceptional conditions that you can't really deal with. A closed socket? Hmm...how many times does the internet go down at my house...
My preference is you throw an exception any time your method does not successfully complete its mission. So if I, the caller, call yourObject.UploadFile(), I will assume the file was uploaded successfully when the call returns. If it fails for any reason, I expect your object will throw an exception. If you want to distinguish between commands I can retry and commands I shouldn't retry, put that information in the exception and I can decide how to react accordingly. When calling yourObject.BeginAsyncUploadFile(), I'd expect the same behavior except that I'd need to wait to on the IAsyncResult or equivalent object to find out whether the file upload succeeded or not and then check an Exception/Error property if it didn't.
Exceptions vs Result Codes for a socket client class
[ "c#", ".net", "networking", "sockets", "tcp" ]
I've made some unit tests (in a test class). The tutorial I've read said that I should make a TestSuite for the unit tests. The odd thing is that when I run the unit test directly (selecting the test class - Run as JUnit test) everything works fine, although when I try the same thing with the test suite there's always an exception: java.lang.Exception: No runnable methods. Here is the code of the test suite: ``` import junit.framework.Test; import junit.framework.TestSuite; public class AllTests { public static Test suite() { TestSuite suite = new TestSuite("Test suite for com.xxx.yyyy.test"); //$JUnit-BEGIN$ suite.addTestSuite(TestCase.class); //$JUnit-END$ return suite; } } ``` Any ideas why this isn't working?
I'm not experienced in ant - so I'm not using it for testing right now. Searching the internet, it seems like I was mixing up the old JUnit 3.8 and the JUnit 4.0 behavior. I'm now trying a way to use the "new behavior". edited: now it works: AllTests changed to: ``` import org.junit.runner.RunWith; import org.junit.runners.Suite; import org.junit.runners.Suite.SuiteClasses; @RunWith(value=Suite.class) @SuiteClasses(value={TestCase.class}) public class AllTests { } ``` TestCase changed to: ``` import static org.junit.Assert.assertTrue; import org.junit.Test; public class TestCase { @Test public void test1() { assertTrue (tmp.getTermin().equals(soll)); } } ```
Be careful when using an IDE's code-completion to add the import for `@Test`. It has to be `import org.junit.Test` and **not** `import org.testng.annotations.Test`, for example. If you use the second one by mistake, you'll get the "no runnable methods" error. (I was using Intellij Idea 2017 which imported `org.junit.jupiter.api.Test` instead!)
JUnit and junit.framework.TestSuite - No runnable methods
[ "java", "unit-testing", "junit" ]
Assume my objects are in perfect working order (i.e. TDD makes me think they work). I have a list that I create like this (except indented properly): ``` var result = from v in vendors from p in v.Products orderby p.Name select p; ``` This works - I get all products from all vendors. Now I have a list of conditions, built up at runtime by the user. Let's apply them: ``` foreach (Attribute a in requiredAttributes) { result = result.Where(p => p.Attributes.Contains(a)); } ``` This may be primitive, but I thought it'd work. However, after this foreach loop is finished, when you enumerate "result" it will contain all products that have the LAST Attribute of the requiredAttributes collection in their Attributes property (also a collection). To me, this smells like "a" is overwritten somewhere with each trip through the loop, and only the last one applies. Short of somehow writing an extension method to IEnumerable called ContainsAll(IEnumerable) or something to that effect, how can I achieve what I want, which is basically a logical AND, giving me only those products that have ALL the required attributes?
(Edited for clarity.) The problem is the foreach loop, and the fact that the "a" variable is being captured and then changed each time. Here's a modification which will work, by effectively introducing a "new" variable for each iteration of the loop, and capturing that new variable. ``` foreach (Attribute a in requiredAttributes) { Attribute copy = a; result = result.Where(p => p.Attributes.Contains(copy)); } ``` Omer's solution is a cleaner one if you can use it, but this may help if your real code is actually more complicated :) EDIT: There's more about the issue in [this closures article](http://csharpindepth.com/Articles/Chapter5/Closures.aspx) - scroll down to "Comparing capture strategies: complexity vs power".
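The same capture-by-variable pitfall is easy to demonstrate outside C#. For illustration, Python closures likewise bind the loop variable itself rather than its value, and the fix is likewise a per-iteration copy (here via a default argument):

```python
# Every lambda closes over the same variable `a`, so after the
# loop all of them see its final value.
funcs = [lambda: a for a in range(3)]
print([f() for f in funcs])   # [2, 2, 2] -- the "last value wins" symptom

# Binding a default argument takes a snapshot per iteration,
# analogous to introducing the `copy` variable in the C# answer.
fixed = [lambda a=a: a for a in range(3)]
print([f() for f in fixed])   # [0, 1, 2]
```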
``` var result = from v in vendors from p in v.Products where requiredAttributes.All(a => p.Attributes.Contains(a)) orderby p.Name select p; ``` HTH.
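The "contains all required attributes" predicate above translates naturally to other languages; as an illustrative sketch (hypothetical product data, with Python's `all` playing the role of LINQ's `All`):

```python
required = {"red", "large"}  # hypothetical attribute set
products = [
    {"name": "A", "attrs": {"red", "large", "cheap"}},
    {"name": "B", "attrs": {"red"}},
    {"name": "C", "attrs": {"large", "red"}},
]

# Keep only products whose attribute set contains every required attribute.
result = [p["name"] for p in products
          if all(a in p["attrs"] for a in required)]
print(result)  # ['A', 'C']
```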
Building a LINQ query programmatically without local variables tricking me
[ "c#", "linq" ]
We are using SQL Server 2005, but this question applies to any [RDBMS](http://en.wikipedia.org/wiki/Relational_database_management_system). Which of the following is more efficient when selecting all columns from a view? ``` Select * from view ``` or ``` Select col1, col2, ..., colN from view ```
NEVER, EVER USE "SELECT \*"!!!! This is the cardinal rule of query design! There are multiple reasons for this. One of them is that if your table only has three fields on it and you use all three fields in the code that calls the query, there's a great possibility that you will be adding more fields to that table as the application grows, and if your select \* query was only meant to return those 3 fields for the calling code, then you're pulling much more data from the database than you need. Another reason is performance. In query design, don't think about reusability as much as this mantra: TAKE ALL YOU CAN EAT, BUT EAT ALL YOU TAKE.
It is best practice to select each column by name. In the future your DB schema might change to add columns that you would then not need for a particular query. I would recommend selecting each column by name.
SQL query - Select * from view or Select col1, col2, ... colN from view
[ "sql", "sql-server" ]
``` inline int factorial(int n) { if(!n) return 1; else return n*factorial(n-1); } ``` As I was reading [this](https://rads.stackoverflow.com/amzn/click/com/0201543303), I found that the above code would lead to "infinite compilation" if not handled by the compiler correctly. How does the compiler decide whether to inline a function or not?
First, the `inline` specification on a function is just a hint. The compiler can (and often does) completely ignore the presence or absence of an `inline` qualifier. With that said, a compiler *can* inline a recursive function, much as it can unroll an infinite loop. It simply has to place a limit on the level to which it will "unroll" the function. An optimizing compiler might turn this code: ``` inline int factorial(int n) { if (n <= 1) { return 1; } else { return n * factorial(n - 1); } } int f(int x) { return factorial(x); } ``` into this code: ``` int factorial(int n) { if (n <= 1) { return 1; } else { return n * factorial(n - 1); } } int f(int x) { if (x <= 1) { return 1; } else { int x2 = x - 1; if (x2 <= 1) { return x * 1; } else { int x3 = x2 - 1; if (x3 <= 1) { return x * x2 * 1; } else { return x * x2 * x3 * factorial(x3 - 1); } } } } ``` In this case, we've basically inlined the function 3 times. Some compilers *do* perform this optimization. I recall MSVC++ having a setting to tune the level of inlining that would be performed on recursive functions (up to 20, I believe).
Indeed, if your compiler does not act intelligently, it may try inserting copies of your `inline`d function recursively, creating infinitely-large code. Most modern compilers will recognize this, however. They can either: 1. Not inline the function at all 2. Inline it up to a certain depth, and if it hasn't terminated by then, call the separate instance of your function using the standard function calling convention. This can take care of many common cases in a high-performance manner, while leaving a fallback for the rare case with a large call depth. This also means that you keep both inlined and separate versions of that function's code around. For case 2, many compilers have `#pragma`s you can set to specify the maximum depth to which this should be done. In *gcc*, you can also pass this in from the command-line with `--max-inline-insns-recursive` (see more info [here](http://gcc.gnu.org/onlinedocs/gcc-4.0.4/gcc/Optimize-Options.html)).
Can a recursive function be inline?
[ "c++", "c", "compiler-construction" ]
I've always handled optional parameters in JavaScript like this: ``` function myFunc(requiredArg, optionalArg){ optionalArg = optionalArg || 'defaultValue'; // Do stuff } ``` Is there a better way to do it? Are there any cases where using `||` like that is going to fail?
Your logic fails if optionalArg is passed, but evaluates as false - try this as an alternative ``` if (typeof optionalArg === 'undefined') { optionalArg = 'default'; } ``` Or an alternative idiom: ``` optionalArg = (typeof optionalArg === 'undefined') ? 'default' : optionalArg; ``` Use whichever idiom communicates the intent best to you!
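The same falsy-value pitfall exists outside JavaScript; as an illustration, the Python equivalents of the `||` idiom and the explicit check behave the same way (`0`, `""`, and `False` all collapse to the default under `or`):

```python
def bad(x, opt=None):
    opt = opt or "default"      # 0, "", False all trigger the default
    return opt

def good(x, opt=None):
    if opt is None:             # only a genuinely missing argument does
        opt = "default"
    return opt

print(bad(1, 0))   # 'default'  -- the surprising case
print(good(1, 0))  # 0
```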
In **ECMAScript 2015** (aka "**ES6**") you can declare default argument values in the function declaration: ``` function myFunc(requiredArg, optionalArg = 'defaultValue') { // do stuff } ``` More about them in [this article on MDN](https://developer.mozilla.org/en-US/docs/JavaScript/Reference/default_parameters). This was initially [only supported by Firefox](https://kangax.github.io/compat-table/es6/#test-default_function_parameters), but as the standard has been completed, expect support to improve rapidly. --- **EDIT (2019-06-12):** Default parameters are now widely supported by modern browsers. No version of Internet **Explorer** supports this feature, but **Chrome**, **Firefox**, and **Edge** all do.
Is there a better way to do optional function parameters in JavaScript?
[ "javascript", "function", "arguments" ]
So, bitfields. Specifically, large bitfields. I understand how to manipulate individual values in a bitfield, but how would I go about doing this on a large set, such as say: ``` uint[] bitfield = new uint[4] { 0x0080000, 0x00FA3020, 0x00C8000, 0x0FF00D0 }; ``` The specific problem I'm having is doing left and right shifts that carry through across the whole array. So for instance, if I did a `>> 4` on the above array, I'd end up with: ``` uint[4] { 0x0008000, 0x000FA302, 0x000C800, 0x00FF00D }; ``` Now, an (overly) simplistic algorithm here might look something like (this is me writing code on the fly): ``` int shift = 4; for (int i = 0; i < shift; i++) { for (int j = bitfield.GetUpperBound(0); j > 0; j--) { bitfield[j] = bitfield[j] >> 1; bitfield[j] = bitfield[j] + ((bitfield[j-1] & 1) << (sizeof(uint)*8 - 1)); } bitfield[0] = bitfield[0] >> 1; } ``` Is there anything built in that might ease working with this sort of data?
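One way to sanity-check the expected carry-through behavior (independent of C#) is with Python's arbitrary-precision integers: pack the words into one big integer, shift, and unpack — the carry across word boundaries then happens automatically:

```python
words = [0x0080000, 0x00FA3020, 0x00C8000, 0x0FF00D0]

# Pack: words[0] is treated as the most significant 32-bit word,
# matching the direction of the >> 4 example in the question.
packed = 0
for w in words:
    packed = (packed << 32) | w

packed >>= 4  # carry across word boundaries is automatic here

# Unpack back into 32-bit words, most significant first.
shifted = [(packed >> (32 * i)) & 0xFFFFFFFF
           for i in reversed(range(len(words)))]
print([hex(w) for w in shifted])
```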
What makes you think that BitArray uses bools internally? It uses Boolean values to represent the bits in terms of the API, but under the hood I believe it uses an int[].
I'm not sure if it's the best way to do it, but this could work (constraining shifts to be in the range 0-31; a shift of 0 is handled as a no-op up front, since the carry expression's `32 - shift` would otherwise be a 32-bit shift, which C# masks to a shift of 0 and which would corrupt the result): ``` public static void ShiftLeft(uint[] bitfield, int shift) { if(shift < 0 || shift > 31) { // handle error here return; } if(shift == 0) { return; } int len = bitfield.Length; int i = len - 1; uint prev = 0; while(i >= 0) { uint tmp = bitfield[i]; bitfield[i] = bitfield[i] << shift; if(i < len - 1) { bitfield[i] |= prev >> (32 - shift); // carry the top bits of the next word into the low bits } prev = tmp; i--; } } public static void ShiftRight(uint[] bitfield, int shift) { if(shift < 0 || shift > 31) { // handle error here return; } if(shift == 0) { return; } int len = bitfield.Length; int i = 0; uint prev = 0; while(i < len) { uint tmp = bitfield[i]; bitfield[i] = bitfield[i] >> shift; if(i > 0) { bitfield[i] |= prev << (32 - shift); // carry the low bits of the previous word into the top bits } prev = tmp; i++; } } ``` PS: With this change, you should be able to handle shifts greater than 31 bits. Could be refactored to make it look a little less ugly, but in my tests it works and it doesn't seem too bad performance-wise (unless there's actually something built in to handle large bitsets, which could be the case).
``` public static void ShiftLeft(uint[] bitfield, int shift) { if(shift < 0) { // error return; } if(shift == 0) { return; } int intsShift = shift >> 5; if(intsShift > 0) { if(intsShift > bitfield.Length) { // error return; } for(int j = 0; j < bitfield.Length; j++) { if(j + intsShift < bitfield.Length) { bitfield[j] = bitfield[j + intsShift]; } else { bitfield[j] = 0; } } BitSetUtils.ShiftLeft(bitfield, shift - intsShift * 32); return; } int len = bitfield.Length; int i = len - 1; uint prev = 0; while(i >= 0) { uint tmp = bitfield[i]; bitfield[i] = bitfield[i] << shift; if(i < len - 1) { bitfield[i] |= prev >> (32 - shift); } prev = tmp; i--; } } public static void ShiftRight(uint[] bitfield, int shift) { if(shift < 0) { // error return; } if(shift == 0) { return; } int intsShift = shift >> 5; if(intsShift > 0) { if(intsShift > bitfield.Length) { // error return; } for(int j = bitfield.Length - 1; j >= 0; j--) { if(j >= intsShift) { bitfield[j] = bitfield[j - intsShift]; } else { bitfield[j] = 0; } } BitSetUtils.ShiftRight(bitfield, shift - intsShift * 32); return; } int len = bitfield.Length; int i = 0; uint prev = 0; while(i < len) { uint tmp = bitfield[i]; bitfield[i] = bitfield[i] >> shift; if(i > 0) { bitfield[i] |= prev << (32 - shift); } prev = tmp; i++; } } ```
Bitfields in C#
[ "c#", "data-structures", "bit-manipulation", "bit-fields" ]
I'm writing some cross-platform code between Windows and Mac. If `list::end()` "returns an iterator that addresses the location succeeding the last element in a list" and can be checked when traversing a list forward, what is the best way to traverse the list backwards? This code works on the Mac but not on Windows (can't decrement beyond first element): ``` list<DVFGfxObj*>::iterator iter = m_Objs.end(); for (iter--; iter!=m_Objs.end(); iter--)// By accident discovered that the iterator is circular ? {} ``` this works on Windows: ``` list<DVFGfxObj*>::iterator iter = m_Objs.end(); do{ iter--; } while (*iter != *m_Objs.begin()); ``` Is there another way to traverse backward that could be implemented in a for loop?
Use `reverse_iterator` instead of `iterator`. Use `rbegin()` & `rend()` instead of `begin()` & `end()`. Another possibility, if you like using the [`BOOST_FOREACH`](http://www.boost.org/doc/libs/1_36_0/doc/html/foreach.html) macro is to use the `BOOST_REVERSE_FOREACH` macro introduced in Boost 1.36.0.
The best/easiest way to reverse iterate a list is (as already stated) to use reverse iterators rbegin/rend. However, I did want to mention that reverse iterators are implemented storing the "current" iterator position off-by-one (at least on the GNU implementation of the standard library). This is done to simplify the implementation, so that the range in reverse has the same half-open semantics as a forward range: [begin, end) and [rbegin, rend). What this means is that dereferencing an iterator involves creating a new temporary, and then decrementing it, *each and every time*: ``` reference operator*() const { _Iterator __tmp = current; return *--__tmp; } ``` Thus, *dereferencing a reverse\_iterator is slower than a normal iterator.* However, you can instead use the regular bidirectional iterators to simulate reverse iteration yourself, avoiding this overhead: ``` for ( iterator current = end() ; current != begin() ; /* Do nothing */ ) { --current; // Unfortunately, you now need this here /* Do work */ cout << *current << endl; } ``` Testing showed this solution to be ~5 times faster *for each dereference* used in the body of the loop. Note: Testing was not done with the code above, as the std::cout would have been the bottleneck. Also note: the 'wall clock time' difference was ~5 seconds with a std::list size of 10 million elements. So, realistically, unless the size of your data is that large, just stick to rbegin()/rend()!
How do you iterate backwards through an STL list?
[ "c++", "list", "stl", "iterator", "traversal" ]
I am after documentation on using wildcard or regular expressions (not sure on the exact terminology) with a jQuery selector. I have looked for this myself but have been unable to find information on the syntax and how to use it. Does anyone know where the documentation for the syntax is? EDIT: The attribute filters allow you to select based on patterns of an attribute value.
James Padolsey created a [wonderful filter](http://james.padolsey.com/javascript/regex-selector-for-jquery/) that allows regex to be used for selection. Say you have the following `div`: ``` <div class="asdf"> ``` Padolsey's `:regex` filter can select it like so: ``` $("div:regex(class, .*sd.*)") ``` Also, check the [official documentation on selectors](http://docs.jquery.com/Selectors). ## UPDATE: `:` syntax Deprecation JQuery 3.0 Since `jQuery.expr[':']` used in Padolsey's implementation is already deprecated and will render a syntax error in the latest version of jQuery, here is his code adapted to jQuery 3+ syntax: ``` jQuery.expr.pseudos.regex = jQuery.expr.createPseudo(function (expression) { return function (elem) { var matchParams = expression.split(','), validLabels = /^(data|css):/, attr = { method: matchParams[0].match(validLabels) ? matchParams[0].split(':')[0] : 'attr', property: matchParams.shift().replace(validLabels, '') }, regexFlags = 'ig', regex = new RegExp(matchParams.join('').replace(/^\s+|\s+$/g, ''), regexFlags); return regex.test(jQuery(elem)[attr.method](attr.property)); } }); ```
You can use the [`filter`](http://docs.jquery.com/Traversing/filter#expr) function to apply more complicated regex matching. Here's an example which would just match the first three divs: ``` $('div') .filter(function() { return this.id.match(/abc+d/); }) .html("Matched!"); ``` ``` <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div id="abcd">Not matched</div> <div id="abccd">Not matched</div> <div id="abcccd">Not matched</div> <div id="abd">Not matched</div> ```
jQuery selector regular expressions
[ "javascript", "jquery", "regex", "jquery-selectors" ]
I find it odd that in Visual C# 2008 Express edition, when you use the database explorer, your options are: 1. Microsoft Access 2. SQL Server Compact 3.5, and 3. SQL Server Database File. BUT if you use Visual Web Developer 2008 Express, you can connect to a regular SQL Server, Oracle, ODBC, etc. For people developing command-line or other C# apps that need to talk to a SQL Server database, do you really need to build your LINQ/Data Access code with one IDE (Visual Web Developer) and your program in another (Visual C#)? It's not a hard workaround, but it seems weird. If Microsoft wanted to force you to upgrade to Visual Studio to connect to SQL Server, why would they include that feature in one of their free IDEs but not the other? I feel like I might be missing something (like how to do it all in Visual C#).
You should be able to choose the SQL Server Database file option to get the right kind of database (the `system.data.SqlClient` provider), and then manually correct the connection string to point to your db. I think the reasoning behind those db choices probably goes something like this: * If you're using the Express Edition, and you're *not* using Visual Web Developer, you're probably building a desktop program. * If you're building a desktop program, and you're using the express edition, you're probably a hobbyist or uISV-er working at home rather than doing development for a corporation. * If you're not developing for a corporation, your app is probably destined for the end-user and your data store is probably going on their local machine. * You really shouldn't be deploying *server-class* databases to end-user desktops. An *in-process* db like Sql Server Compact or MS Access is much more appropriate. However, this logic doesn't quite hold. Even if each of those 4 points is true 90% of the time, by the time you apply all four of them it only applies to ~65% of your audience, which means up to 35% of the express market might legitimately want to talk to a server-class db, and that's a significant group. And so, the simplified (greedy) version: * A real db server (and the hardware to run it) costs real money. If you have access to that, you ought to be able to afford at least the standard edition of visual studio.
Workaround: 1. Open your solution in Visual Web Developer Express. It will not load some of the projects in the solution but it is ok. 2. Make a new connection in Database Explorer to the required database from SQL Server. 3. Add a new class library project. 4. Add a LINQ to SQL Classes item and link it to your database. 5. Close the solution. 6. Open the solution in Visual C# Express. Now you have a LINQ to SQL classes library that is linked to your SQL Server database in Visual C# Express. **Update** The solution is for Visual Studio Express 2010.
Connecting to SQL Server with Visual Studio Express Editions
[ "c#", "visual-studio-2008", "visual-web-developer" ]
I have a method running in a separate thread. The thread is created and started from a form in a Windows application. If an exception is thrown from inside the thread, what is the best way to pass it back to the main application? Right now, I'm passing a reference to the main form into the thread, then invoking the method from the thread, and causing the method to be called by the main application thread. Is there a best-practice way to do this? I'm not comfortable with how I'm doing it now. Example of my form: ``` public class frmMyForm : System.Windows.Forms.Form { /// <summary> /// Create a thread /// </summary> /// <param name="sender"></param> /// <param name="e"></param> private void btnTest_Click(object sender, EventArgs e) { try { //Create and start the thread ThreadExample pThreadExample = new ThreadExample(this); pThreadExample.Start(); } catch (Exception ex) { MessageBox.Show(ex.Message, Application.ProductName); } } /// <summary> /// Called from inside the thread /// </summary> /// <param name="ex"></param> public void HandleError(Exception ex) { //Invoke a method in the GUI's main thread this.Invoke(new ThreadExample.delThreadSafeHandleException(__HandleError), new Object[] { ex }); } private void __HandleError(Exception ex) { MessageBox.Show(ex.Message); } } ``` Example of my thread class: ``` public class ThreadExample { public delegate void delThreadSafeHandleException(System.Exception ex); private Thread thExample_m; frmMyForm pForm_m; private frmMyForm Form { get { return pForm_m; } } public ThreadExample(frmMyForm pForm) { pForm_m = pForm; thExample_m = new Thread(new ThreadStart(Main)); thExample_m.Name = "Example Thread"; } public void Start() { thExample_m.Start(); } private void Main() { try { throw new Exception("Test"); } catch (Exception ex) { Form.HandleError(ex); } } } ```
So you're using Invoke to marshal back to the UI thread, by the looks of it - which is exactly what you need to do. I'd personally use an Action<Exception> for simplicity's sake, and possibly BeginInvoke instead of Invoke, but basically you're doing the right thing.
Use the BackgroundWorker class in the .NET framework instead. It is the best practice for performing UI work on a different thread.
catching exceptions from another thread
[ "c#", ".net", "multithreading", "exception" ]
If something goes wrong in a WCF REST call, such as the requested resource is not found, how can I play with the HTTP response code (setting it to something like HTTP 404, for example) in my OperationContract method?
There is a [`WebOperationContext`](http://msdn.microsoft.com/en-us/library/system.servicemodel.web.weboperationcontext.aspx) that you can access and it has a [`OutgoingResponse`](http://msdn.microsoft.com/en-us/library/system.servicemodel.web.weboperationcontext.outgoingresponse.aspx) property of type [`OutgoingWebResponseContext`](http://msdn.microsoft.com/en-us/library/system.servicemodel.web.outgoingwebresponsecontext.aspx) which has a [`StatusCode`](http://msdn.microsoft.com/en-us/library/system.servicemodel.web.outgoingwebresponsecontext.statuscode.aspx) property that can be set. ``` WebOperationContext ctx = WebOperationContext.Current; ctx.OutgoingResponse.StatusCode = System.Net.HttpStatusCode.OK; ```
If you need to return a reason body then have a look at [WebFaultException](http://msdn.microsoft.com/en-us/library/dd989924.aspx) For example ``` throw new WebFaultException<string>("Bar wasn't Foo'd", HttpStatusCode.BadRequest ); ```
How can I return a custom HTTP status code from a WCF REST method?
[ "c#", ".net", "wcf", "rest" ]
In PHP I can name my array indices so that I may have something like: ``` $shows = Array(0 => Array('id' => 1, 'name' => 'Sesame Street'), 1 => Array('id' => 2, 'name' => 'Dora The Explorer')); ``` Is this possible in Python?
It sounds like the PHP array with named indices is very similar to a Python dict: ``` shows = [ {"id": 1, "name": "Sesame Street"}, {"id": 2, "name": "Dora The Explorer"}, ] ``` See <http://docs.python.org/tutorial/datastructures.html#dictionaries> for more on this.
PHP arrays are actually maps, which are equivalent to dicts in Python. Thus, this is the Python equivalent: `showlist = [{'id':1, 'name':'Sesame Street'}, {'id':2, 'name':'Dora the Explorer'}]` Sorting example (note `itemgetter`, since dict entries are looked up by key, not attribute): ``` from operator import itemgetter showlist.sort(key=itemgetter('id')) ``` BUT! With the example you provided, a simpler datastructure would be better: ``` shows = {1: 'Sesame Street', 2: 'Dora the Explorer'} ```
Python: can I have a list with named indices?
[ "python", "arrays" ]
I've got a (poorly written) base class that I want to wrap in a proxy object. The base class resembles the following: ``` public class BaseClass : SomeOtherBase { public BaseClass() {} public BaseClass(int someValue) {} //...more code, not important here } ``` and, my proxy resembles: ``` public class BaseClassProxy : BaseClass { public BaseClassProxy(bool fakeOut){} } ``` Without the "fakeOut" constructor, the base constructor is expected to be called. However, with it, I expected it to not be called. Either way, I either need a way to not call any base class constructors, or some other way to effectively proxy this (evil) class.
If you do not explicitly call any constructor in the base class, the parameterless constructor will be called implicitly. There's no way around it, you cannot instantiate a class without a constructor being called.
There is a way to create an object without calling *any* instance constructors. Before you proceed, be very sure you want to do it this way. 99% of the time this is the wrong solution. This is how you do it: ``` FormatterServices.GetUninitializedObject(typeof(MyClass)); ``` Call it in place of the object's constructor. It will create and return you an instance without calling any constructors or field initializers. When you deserialize an object in WCF, it uses this method to create the object. When this happens, constructors and even field initializers are not run.
How can I prevent a base constructor from being called by an inheritor in C#?
[ "c#", "oop", "inheritance", "proxy", "constructor" ]
It looks like some work has been done to make [pthread-win32](http://sourceware.org/pthreads-win32/) work with x64, but there are no build instructions. I have tried simply building with the Visual Studio x64 Cross Tools Command Prompt, but when I try to link to the lib from an x64 application, it can't see any of the function exports. It seems like it is still compiling the lib as x86 or something. I've even tried adding /MACHINE to the makefile in the appropriate places, but it doesn't help. Has anyone gotten this to work?
Until it's officially released, it looks like you have to check out the CVS head to get version 2.9 of the library. Version 2.9 has all the x64 patches, but you will still have problems if you try to compile the static library from the command line. The only workaround I know of is to use the DLLs instead of statically linking the LIB.
You can use **vcpkg** [here](https://learn.microsoft.com/en-us/cpp/build/vcpkg?view=vs-2019), which is the Windows package manager for C++. It supports building pthreads as well as other open-source libraries. I wanted to use a static pthread library. When I downloaded pthreads I got the DLL (pthread.dll) and an import lib (pthread.lib), i.e. I could not use pthread.lib on its own; I had to ship pthread.dll as well. So using vcpkg I built the static lib, which I can use without any DLL dependencies. Using vcpkg you can build both static and dynamic libraries. **You can use the steps below.** I have added the steps for all DLL (x86|x64) and LIB (x86|x64) cases, so you can build whichever you need. Clone vcpkg from its git repository: [vcpkg git repo](https://github.com/Microsoft/vcpkg). From the directory where you have cloned vcpkg, run the command below, which will install vcpkg: ``` bootstrap-vcpkg.bat ``` Check for the library's availability by running: ``` vcpkg search pthread ``` which will show you this result: ``` mbedtls[pthreads] Multi-threading support pthread 3.0.0 empty package, linking to other port pthreads 3.0.0-6 pthreads for windows ``` As you can see, it supports pthreads for Windows. **1. Building a dynamic library with an import lib (DLL)** **Building the x86 DLL** ``` vcpkg install pthreads:x86-windows ``` This builds the DLL and import library in **.\vcpkg\installed\x86-windows**; from here copy the **lib** and **include** folders and you can use them. **Building the x64 DLL** ``` vcpkg install pthreads:x64-windows ``` This builds the DLL and import library in **.\vcpkg\installed\x64-windows**; from here copy the **lib** and **include** folders. **2. Building a static library (LIB)** **Building the x86 LIB** ``` vcpkg install pthreads:x86-windows-static ``` This builds the static library in **.\vcpkg\installed\x86-windows-static**; from here copy the **lib** and **include** folders and you can use them. **Building the x64 LIB** ``` vcpkg install pthreads:x64-windows-static ``` This builds the static library in **.\vcpkg\installed\x64-windows-static**; from here copy the **lib** and **include** folders. **NOTE:** You may need to run these commands with admin privileges.
How do you compile static pthread-win32 lib for x64?
[ "c++", "c", "windows", "64-bit", "pthreads" ]
We are trying to look at optimizing our localization testing. Our QA group had a suggestion of a special mode to force all strings from the resources to be composed entirely of X characters. We already API-hijack LoadString, and the MFC implementation of it, so doing this should not be a major hurdle. My question is how would you solve the formatting issues? ``` Examples - CString str ; str . LoadString ( IDS_MYSTRING ) ; where IDS_MYSTRING is "Hello World", should return "XXXXX XXXXX" where IDS_MYSTRING is "Hello\nWorld", should return "XXXXX\nXXXXX" where IDS_MYSTRING is "Hello%dWorld", should return "XXXXX%dXXXXX" where IDS_MYSTRING is "Hello%.2fWorld", should return "XXXXX%.2fXXXXX" where IDS_MYSTRING is "Hello%%World", should return "XXXXX%%XXXXX" ``` So in summary the string should work if used in a printf or Format statement, and it should honor escape characters. So this is a pure code question, C++/MFC: ``` CString ConvertStringToXXXX ( const CString& aSource ) { CString lResult = aSource ; // Insert your code here return lResult ; } ``` I know this could be done using tools on the .RC files, but we want to build English, then run like so - application -L10NTEST
If the point of this approach is to highlight formatted strings (or format sequences) in the application (i.e. all text appearing other than XXXX), you could locate the escape sequences (using a regex, perhaps) and insert brackets around the formatted (substituted) values, e.g. Some\ntext -> Some[\n]text You keep readability (an application showing nothing but XXXX strings might be hard to use) and also get to detect non-resource (hardcoded) strings. Having said that, if you're looking to detect non-resource-loaded (hardcoded) strings, instead of substituting Xs, why not just prefix the string? You'll easily be able to tell resource-loaded strings from hardcoded strings, e.g. Some\ntext -> [EN]Some\ntext Hope it helps?
The pseudo-localisation feature of [appTranslator](http://www.apptranslator.com) can help you there: It modifies untranslated strings to use diacritics, text widening or shortening and such. So far, you're not interested. Where it becomes interesting is that it optionally encloses such strings in brackets. The idea was to make it more obvious that a string is pseudo-localized. You could use this to detect that the string actually comes from the string table rather than code. And of course, since the pseudo-localized program must run properly, appTranslator preserves all formatters (including printf-like and FormatMessage-like formatters) and special chars such as % or \n. Which is what you're looking for. You wouldn't even have to modify your code: Simply create a 'dummy' translation. By 'dummy', I mean a language into which you don't plan to translate your app. Set the language preference of your app to that language. Wait, it's even better: The guys at QA can do it entirely on their own. They don't even have to bother you! :-) Disclaimer: I'm the author of appTranslator. Edit: answer to your comment: Glad to read you already use appTranslator. To avoid problems due to dialogs or strings not in the L10N DLL, you can simply re-build the DLLs (e.g. using a post-link step in your VS project). The process automatically re-scans the source exe and merges new and modified texts in the built resource dlls (doesn't affect the appTranslator project file, as opposed to 'Update Source'). This helps make sure your resource DLLs are always in sync with your exe.
Localization testing, formatting all strings with XXXXX
[ "c++", "mfc", "localization" ]
Does anyone know how I can get a format string to use [bankers rounding](http://en.wikipedia.org/wiki/Rounding#Round-to-even_method)? I have been using "{0:c}" but that doesn't round the same way that bankers rounding does. The [`Math.Round()`](http://msdn.microsoft.com/en-us/library/system.math.round.aspx) method does bankers rounding. I just need to be able to duplicate how it rounds using a format string. --- **Note:** the original question was rather misleading, and answers mentioning regex derive from that.
Can't you simply call Math.Round() on the numeric value before formatting it? Instead of: ``` string s = string.Format("{0:c}", 12345.6789); ``` Do: ``` string s = string.Format("{0:c}", Math.Round(12345.6789, 2)); ```
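For reference, banker's rounding resolves exact .5 ties toward the even neighbor, which is also what .NET's `Math.Round` does by default (`MidpointRounding.ToEven`). A minimal sketch of the tie-breaking behavior, written here in Java with `BigDecimal` since the semantics are language-neutral (class and helper names are illustrative):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class BankersRoundingDemo {
    // Round to the given number of decimal places, breaking .5 ties toward even
    static String roundHalfEven(String value, int places) {
        return new BigDecimal(value).setScale(places, RoundingMode.HALF_EVEN).toPlainString();
    }

    public static void main(String[] args) {
        System.out.println(roundHalfEven("2.5", 0));       // ties go to the even neighbor: 2
        System.out.println(roundHalfEven("3.5", 0));       // ties go to the even neighbor: 4
        System.out.println(roundHalfEven("12345.675", 2)); // 12345.68 (7 is odd, so it bumps to 8)
    }
}
```

The point of the answer above carries over directly: round first with a half-even rule, then hand the already-rounded value to the currency formatter.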
Regexp is a pattern matching language. You can't do arithmetic operations in Regexp. Do some experiements with IFormatProvider and ICustomFormatter. Here is a link might point you in the right direction. <http://codebetter.com/blogs/david.hayden/archive/2006/03/12/140732.aspx>
.NET currency formatter: can I specify the use of banker's rounding?
[ "c#", ".net", "formatting", "bankers-rounding" ]
So why exactly is it that it's always recommended to use `const` as often as possible? It seems to me that using `const` can be more of a pain than a help in C++. But then again, I'm coming at this from the python perspective: if you don't want something to be changed, don't change it. So with that said, here are a few questions: 1. It seems like every time I mark something as `const`, I get an error and have to change some other function somewhere to be `const` too. Then this causes me to have to change *another* function somewhere else. Is this something that just gets easier with experience? 2. Are the benefits of using `const` *really* enough to compensate for the trouble? If you don't intend to change an object, why not just not write code that changes it? I should note that at this point in time, I'm most focused on the benefits of using `const` for correctness and maintainability purposes, although it is also nice to have an idea of the performance implications.
This is the definitive article on "const correctness": <https://isocpp.org/wiki/faq/const-correctness>. In a nutshell, using const is good practice because... 1. It protects you from accidentally changing variables that aren't intended to be changed, 2. It protects you from making accidental variable assignments. For instance, you are protected from ``` if( x = y ) // whoops, meant if( x == y ). ``` 3. The compiler can optimize it: because it knows exactly what the state of the variable/function will be at all times, it can generate more efficient code. If you are writing tight C++ code, this is good. You are correct in that it can be difficult to use const-correctness consistently, but the end code is more concise and safer to program with. When you do a lot of C++ development, the benefits of this quickly manifest.
Here's a piece of code with a common error that const correctness can protect you against: ``` void foo(const int DEFCON) { if (DEFCON = 1) //< FLAGGED AS COMPILER ERROR! WORLD SAVED! { fire_missiles(); } } ```
Sell me const-correctness
[ "c++", "const-correctness" ]
I'm currently working with PHPUnit to try and develop tests alongside what I'm writing, however, I'm currently working on writing the Session Manager, and am having issues doing so... The constructor for the Session handling class is ``` private function __construct() { if (!headers_sent()) { session_start(); self::$session_id = session_id(); } } ``` However, as PHPUnit sends out text before it starts the testing, any testing on this Object returns a failed test, as the HTTP "Headers" have been sent...
Well, your session manager is basically broken by design. To be able to test something, it must be possible to isolate it from side effects. Unfortunately, PHP is designed in such a way that it encourages liberal use of global state (`echo`, `header`, `exit`, `session_start` etc.). The best thing you can do is to isolate the side effects in a component that can be swapped at runtime. That way, your tests can use mocked objects, while the live code uses adapters that have real side effects. You'll find that this doesn't play well with singletons, which I presume you're using. So you'll have to use some other mechanism for getting shared objects distributed to your code. You can start with a static registry, but there are even better solutions if you don't mind a bit of learning. If you can't do that, you always have the option of writing integration tests. E.g. use PHPUnit's equivalent of [`WebTestCase`](http://www.simpletest.org/en/web_tester_documentation.html).
Create a bootstrap file for phpunit, which calls: ``` session_start(); ``` Then start phpunit like this: ``` phpunit --bootstrap pathToBootstrap.php --anotherSwitch /your/test/path/ ``` The bootstrap file gets called before everything else, so the header hasn't been sent and everything should work fine.
Unit Testing with items that need to send headers
[ "php", "unit-testing", "phpunit" ]
As part of the base class for some extensive unit testing, I am writing a helper function which recursively compares the nodes of one XmlDocument object to another in C# (.NET). Some requirements of this: * The first document is the **source**, e.g. what I want the XML document to look like. Thus the second is the one I want to find differences in and it must not contain *extra* nodes not in the first document. * Must throw an exception when too many significant differences are found, and it should be easily understood by a human glancing at the description. * Child element order is important, attributes can be in any order. * Some attributes are ignorable; specifically `xsi:schemaLocation` and `xmlns:xsi`, though I would like to be able to pass in which ones are. * Prefixes for namespaces must match in both attributes and elements. * Whitespace between elements is irrelevant. * Elements will *either* have child elements *or* `InnerText`, but not both. While I'm scraping something together: **has anyone written such code and would it be possible to share it here?** As an aside, what would you call the first and second documents? I've been referring to them as "source" and "target", but it feels wrong since the **source** is what I want the **target** to look like, else I throw an exception.
Microsoft has an [XML diff API](http://msdn.microsoft.com/en-us/library/aa302294.aspx) that you can use. Unofficial NuGet: <https://www.nuget.org/packages/XMLDiffPatch>.
I googled up a more complete list of solutions of this problem today, I am going to try one of them soon: * <http://xmlunit.sourceforge.net/> * <http://msdn.microsoft.com/en-us/library/aa302294.aspx> * <http://jolt.codeplex.com/wikipage?title=Jolt.Testing.Assertions.XML.Adaptors> * <http://www.codethinked.com/checking-xml-for-semantic-equivalence-in-c> * <https://vkreynin.wordpress.com/tag/xml/> * <http://gandrusz.blogspot.com/2008/07/recently-i-have-run-into-usual-problem.html> * <http://xmlspecificationcompare.codeplex.com/> * <https://github.com/netbike/netbike.xmlunit>
How would you compare two XML Documents?
[ "c#", ".net", "xml", "compare", "xmldocument" ]
I am able to serialize an object into a file and then restore it again, as is shown in the next code snippet. I would like to serialize the object into a string and store it into a database instead. Can anyone help me? ``` LinkedList<Diff_match_patch.Patch> patches = // whatever... FileOutputStream fileStream = new FileOutputStream("foo.ser"); ObjectOutputStream os = new ObjectOutputStream(fileStream); os.writeObject(patches); os.close(); FileInputStream fileInputStream = new FileInputStream("foo.ser"); ObjectInputStream oInputStream = new ObjectInputStream(fileInputStream); Object one = oInputStream.readObject(); LinkedList<Diff_match_patch.Patch> patches3 = (LinkedList<Diff_match_patch.Patch>) one; oInputStream.close(); ```
Sergio: You should use [BLOB](http://docs.oracle.com/javase/6/docs/api/java/sql/Blob.html). It is pretty straightforward with JDBC. The problem with the second code you posted is the encoding. You should additionally encode the bytes to make sure none of them fails. If you still want to write it down into a String you can encode the bytes using [java.util.Base64](https://docs.oracle.com/javase/8/docs/api/java/util/Base64.html). Still you should use CLOB as data type because you don't know how long the serialized data is going to be. Here is a sample of how to use it. ``` import java.util.*; import java.io.*; /** * Usage sample serializing SomeClass instance */ public class ToStringSample { public static void main( String [] args ) throws IOException, ClassNotFoundException { String string = toString( new SomeClass() ); System.out.println(" Encoded serialized version " ); System.out.println( string ); SomeClass some = ( SomeClass ) fromString( string ); System.out.println( "\n\nReconstituted object"); System.out.println( some ); } /** Read the object from Base64 string. */ private static Object fromString( String s ) throws IOException , ClassNotFoundException { byte [] data = Base64.getDecoder().decode( s ); ObjectInputStream ois = new ObjectInputStream( new ByteArrayInputStream( data ) ); Object o = ois.readObject(); ois.close(); return o; } /** Write the object to a Base64 string. */ private static String toString( Serializable o ) throws IOException { ByteArrayOutputStream baos = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream( baos ); oos.writeObject( o ); oos.close(); return Base64.getEncoder().encodeToString(baos.toByteArray()); } } /** Test subject. A very simple class. 
*/ class SomeClass implements Serializable { private final static long serialVersionUID = 1; // See Nick's comment below int i = Integer.MAX_VALUE; String s = "ABCDEFGHIJKLMNOP"; Double d = new Double( -1.0 ); public String toString(){ return "SomeClass instance says: Don't worry, " + "I'm healthy. Look, my data is i = " + i + ", s = " + s + ", d = " + d; } } ``` Output: ``` C:\samples>javac *.java C:\samples>java ToStringSample Encoded serialized version rO0ABXNyAAlTb21lQ2xhc3MAAAAAAAAAAQIAA0kAAWlMAAFkdAASTGphdmEvbGFuZy9Eb3VibGU7T AABc3QAEkxqYXZhL2xhbmcvU3RyaW5nO3hwf////3NyABBqYXZhLmxhbmcuRG91YmxlgLPCSilr+w QCAAFEAAV2YWx1ZXhyABBqYXZhLmxhbmcuTnVtYmVyhqyVHQuU4IsCAAB4cL/wAAAAAAAAdAAQQUJ DREVGR0hJSktMTU5PUA== Reconstituted object SomeClass instance says: Don't worry, I'm healthy. Look, my data is i = 2147483647, s = ABCDEFGHIJKLMNOP, d = -1.0 ``` **NOTE**: for Java 7 and earlier you can see the original [answer here](https://stackoverflow.com/revisions/134918/9)
How about writing the data to a ByteArrayOutputStream instead of a FileOutputStream? Otherwise, you could serialize the object using XMLEncoder, persist the XML, then deserialize via XMLDecoder.
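The `XMLEncoder` route suggested above can be sketched as an in-memory round-trip; the class and helper names here are illustrative, and an `ArrayList` of strings stands in for the patch list (`XMLEncoder` has built-in support for the standard collections):

```java
import java.beans.XMLDecoder;
import java.beans.XMLEncoder;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

public class XmlRoundTrip {
    // Encode a bean-friendly object to an XML string held entirely in memory
    static String toXml(Object obj) {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (XMLEncoder enc = new XMLEncoder(baos)) {
            enc.writeObject(obj);
        }
        return baos.toString();
    }

    // Decode the XML string back into an object
    static Object fromXml(String xml) {
        try (XMLDecoder dec = new XMLDecoder(new ByteArrayInputStream(xml.getBytes()))) {
            return dec.readObject();
        }
    }

    public static void main(String[] args) {
        List<String> patches = new ArrayList<>(List.of("a", "b"));
        String xml = toXml(patches);   // human-readable XML, storable in a CLOB column
        Object restored = fromXml(xml);
        System.out.println(restored);
    }
}
```

Unlike binary serialization, the stored string stays readable in the database, at the cost of only supporting JavaBean-style classes.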
How to serialize an object into a string
[ "java", "serialization" ]
Is there a simple way to write a common function for each of the `CRUD (create, retrieve, update, delete)` operations in `PHP` WITHOUT using any framework? For example I wish to have a single create function that takes the table name and field names as parameters and inserts data into a `mySQL database`. Another requirement is that the function should be able to support joins, i.e. it should be able to insert data into multiple tables if required. I know that these tasks could be done by using a framework but because of various reasons - too lengthy to explain here - I cannot use them.
I wrote this very thing, it's kind of a polished scaffold. It's basically a class whose constructor takes the table to be used, an array containing field names and types, and an action. Based on this action the object calls a method on itself. For example: This is the array I pass: ``` $data = array(array('name' => 'id', 'type' => 'hidden') , array('name' => 'student', 'type' => 'text', 'title' => 'Student')); ``` Then I call the constructor: ``` new MyScaffold($table, 'edit', $data, $_GET['id']); ``` In the above case the constructor calls the 'edit' method, which presents a form displaying data from $table, but only the fields I set up in my array. The record it uses is determined by the $_GET value. In this example the 'student' field is presented as a text-box (hence the 'text' type). The 'title' is simply the label used. Being 'hidden', the ID field is not shown for editing but is available to the program for use. If I had passed 'delete' instead of 'edit' it would delete the record from the GET variable. If I passed only a table name it would default to a list of records with buttons for edit, delete, and new. It's just one class that contains all the CRUD with lots of customisability. You can make it as complicated or as simple as you wish. By making it a generic class I can drop it into any project and just pass instructions, table information and configuration information. For one table I might not want to permit new records from being added through the scaffold; in this case I might set "newbutton" to false in my parameters array. It's not a framework in the conventional sense. Just a standalone class that handles everything internally. There are some drawbacks to this. The main one is that all my tables must have a primary key called 'id'; you could get away without this but it would complicate matters. Another is that a large array detailing information about each table to be managed must be prepared, but you need only do this once. 
For a tutorial on this idea see [here](http://www.shadow-fox.net/site/tutorial/39-Creating-A-Scaffold-like-Class-in-PHP-or-An-Automatic-CMS-For-a-Table)
If you try to write such a function, you'll soon discover that you've just created yet another framework.
Common CRUD functions in PHP
[ "php", "crud" ]
Coming from C++ to Java, the obvious unanswered question is why didn't Java include operator overloading? Isn't `Complex a, b, c; a = b + c;` much simpler than `Complex a, b, c; a = b.add(c);`? Is there a known reason for this, valid arguments for *not* allowing operator overloading? Is the reason arbitrary, or lost to time?
Assuming you wanted to overwrite the previous value of the object referred to by `a`, then a member function would have to be invoked. ``` Complex a, b, c; // ... a = b.add(c); ``` In C++, this expression tells the compiler to create three (3) objects on the stack, perform addition, and *copy* the resultant value from the temporary object into the existing object `a`. However, in Java, `operator=` doesn't perform value copy for reference types, and users can only create new reference types, not value types. So for a user-defined type named `Complex`, assignment means to copy a reference to an existing value. Consider instead: ``` b.set(1, 0); // initialize to real number '1' a = b; b.set(2, 0); assert( !a.equals(b) ); // this assertion will fail ``` In C++, this copies the value, so the comparison will result not-equal. In Java, `operator=` performs reference copy, so `a` and `b` are now referring to the same value. As a result, the comparison will produce 'equal', since the object will compare equal to itself. The difference between copies and references only adds to the confusion of operator overloading. As @Sebastian mentioned, Java and C# both have to deal with value and reference equality separately -- `operator+` would likely deal with values and objects, but `operator=` is already implemented to deal with references. In C++, you should only be dealing with one kind of comparison at a time, so it can be less confusing. For example, on `Complex`, `operator=` and `operator==` are both working on values -- copying values and comparing values respectively.
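The reference-copy behavior described above is easy to verify in a runnable sketch; a tiny mutable class (illustrative, standing in for `Complex`) is enough:

```java
// Minimal stand-in for the Complex class discussed above
class Box {
    int value;
    Box(int value) { this.value = value; }
}

public class ReferenceCopyDemo {
    public static void main(String[] args) {
        Box b = new Box(1);
        Box a = b;                    // '=' copies the reference, not the value
        b.value = 2;                  // mutating through b...
        System.out.println(a.value);  // ...is visible through a as well
        System.out.println(a == b);   // one object, two references
    }
}
```

In C++, `a = b` on such a class would copy the value instead, so a later change to `b` would leave `a` untouched, which is exactly the distinction the answer draws.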
There are a lot of posts complaining about operator overloading. I felt I had to clarify the "operator overloading" concepts, offering an alternative viewpoint on this concept. # Code obfuscating? This argument is a fallacy. ## Obfuscating is possible in all languages... It is as easy to obfuscate code in C or Java through functions/methods as it is in C++ through operator overloads: ``` // C++ T operator + (const T & a, const T & b) // add ? { T c ; c.value = a.value - b.value ; // subtract !!! return c ; } // Java static T add (T a, T b) // add ? { T c = new T() ; c.value = a.value - b.value ; // subtract !!! return c ; } /* C */ T add (T a, T b) /* add ? */ { T c ; c.value = a.value - b.value ; /* subtract !!! */ return c ; } ``` ## ...Even in Java's standard interfaces For another example, let's see the [`Cloneable` interface](http://download.oracle.com/javase/7/docs/api/java/lang/Cloneable.html) in Java: You are supposed to clone the object implementing this interface. But you could lie. And create a different object. In fact, this interface is so weak you could return another type of object altogether, just for the fun of it: ``` class MySincereHandShake implements Cloneable { public Object clone() { return new MyVengefulKickInYourHead() ; } } ``` As the `Cloneable` interface can be abused/obfuscated, should it be banned on the same grounds C++ operator overloading is supposed to be? We could overload the `toString()` method of a `MyComplexNumber` class to have it return the stringified hour of the day. Should the `toString()` overloading be banned, too? We could sabotage `MyComplexNumber.equals` to have it return a random value, modify the operands... etc. etc. etc.. **In Java, as in C++, or whatever language, the programmer must respect a minimum of semantics when writing code. This means implementing an `add` function that adds, and `Cloneable` implementation method that clones, and a `++` operator that increments.** # What's obfuscating anyway? 
Now that we know that code can be sabotaged even through the pristine Java methods, we can ask ourselves: what is the real use of operator overloading in C++? ## Clear and natural notation: methods vs. operator overloading? We'll compare below, for different cases, the "same" code in Java and C++, to have an idea of which kind of coding style is clearer. ### Natural comparisons: ``` // C++ comparison for built-ins and user-defined types bool isEqual = A == B ; bool isNotEqual = A != B ; bool isLesser = A < B ; bool isLesserOrEqual = A <= B ; // Java comparison for user-defined types boolean isEqual = A.equals(B) ; boolean isNotEqual = ! A.equals(B) ; boolean isLesser = A.compareTo(B) < 0 ; boolean isLesserOrEqual = A.compareTo(B) <= 0 ; ``` Please note that A and B could be of any type in C++, as long as the operator overloads are provided. In Java, when A and B are not primitives, the code can become very confusing, even for primitive-like objects (BigInteger, etc.)... ### Natural array/container accessors and subscripting: ``` // C++ container accessors, more natural value = myArray[25] ; // subscript operator value = myVector[25] ; // subscript operator value = myString[25] ; // subscript operator value = myMap["25"] ; // subscript operator myArray[25] = value ; // subscript operator myVector[25] = value ; // subscript operator myString[25] = value ; // subscript operator myMap["25"] = value ; // subscript operator // Java container accessors, each one has its special notation value = myArray[25] ; // subscript operator value = myVector.get(25) ; // method get value = myString.charAt(25) ; // method charAt value = myMap.get("25") ; // method get myArray[25] = value ; // subscript operator myVector.set(25, value) ; // method set myMap.put("25", value) ; // method put ``` In Java, we see that for each container to do the same thing (access its content through an index or identifier), we have a different way to do it, which is confusing. 
In C++, each container uses the same way to access its content, thanks to operator overloading. ### Natural advanced types manipulation The examples below use a `Matrix` object, found using the first links found on Google for "[Java Matrix object](https://encrypted.google.com/search?q=Java+Matrix+object)" and "[C++ Matrix object](https://encrypted.google.com/search?q=c%2B%2B+Matrix+object)": ``` // C++ YMatrix matrix implementation on CodeProject // http://www.codeproject.com/KB/architecture/ymatrix.aspx // A, B, C, D, E, F are Matrix objects; E = A * (B / 2) ; E += (A - B) * (C + D) ; F = E ; // deep copy of the matrix // Java JAMA matrix implementation (seriously...) // http://math.nist.gov/javanumerics/jama/doc/ // A, B, C, D, E, F are Matrix objects; E = A.times(B.times(0.5)) ; E.plusEquals(A.minus(B).times(C.plus(D))) ; F = E.copy() ; // deep copy of the matrix ``` And this is not limited to matrices. The `BigInteger` and `BigDecimal` classes of Java suffer from the same confusing verbosity, whereas their equivalents in C++ are as clear as built-in types. ### Natural iterators: ``` // C++ Random Access iterators ++it ; // move to the next item --it ; // move to the previous item it += 5 ; // move to the next 5th item (random access) value = *it ; // gets the value of the current item *it = 3.1415 ; // sets the value 3.1415 to the current item (*it).foo() ; // call method foo() of the current item // Java ListIterator<E> "bi-directional" iterators value = it.next() ; // move to the next item & return the value value = it.previous() ; // move to the previous item & return the value it.set(3.1415) ; // sets the value 3.1415 to the current item ``` ### Natural functors: ``` // C++ Functors myFunctorObject("Hello World", 42) ; // Java Functors ??? 
myFunctorObject.execute("Hello World", 42) ; ``` ### Text concatenation: ``` // C++ stream handling (with the << operator) stringStream << "Hello " << 25 << " World" ; fileStream << "Hello " << 25 << " World" ; outputStream << "Hello " << 25 << " World" ; networkStream << "Hello " << 25 << " World" ; anythingThatOverloadsShiftOperator << "Hello " << 25 << " World" ; // Java concatenation myStringBuffer.append("Hello ").append(25).append(" World") ; ``` Ok, in Java you can use `MyString = "Hello " + 25 + " World" ;` too... But, wait a second: This is operator overloading, isn't it? Isn't it cheating??? :-D ## Generic code? The same generic code modifying operands should be usable both for built-ins/primitives (which have no interfaces in Java), standard objects (which could not have the right interface), and user-defined objects. For example, calculating the average value of two values of arbitrary types: ``` // C++ primitive/advanced types template<typename T> T getAverage(const T & p_lhs, const T & p_rhs) { return (p_lhs + p_rhs) / 2 ; } int intValue = getAverage(25, 42) ; double doubleValue = getAverage(25.25, 42.42) ; complex complexValue = getAverage(cA, cB) ; // cA, cB are complex Matrix matrixValue = getAverage(mA, mB) ; // mA, mB are Matrix // Java primitive/advanced types // It won't really work in Java, even with generics. Sorry. ``` # Discussing operator overloading Now that we have seen fair comparisons between C++ code using operator overloading, and the same code in Java, we can now discuss "operator overloading" as a concept. ## Operator overloading existed since before computers **Even outside of computer science, there is operator overloading: For example, in mathematics, operators like `+`, `-`, `*`, etc. are overloaded.** Indeed, the signification of `+`, `-`, `*`, etc. changes depending on the types of the operands (numerics, vectors, quantum wave functions, matrices, etc.). 
Most of us, as part of our science courses, learned multiple significations for operators, depending on the types of the operands. Did we find them confusing, then? ## Operator overloading depends on its operands This is the most important part of operator overloading: Like in mathematics, or in physics, the operation depends on its operands' types. So, know the type of the operand, and you will know the effect of the operation. ## Even C and Java have (hard-coded) operator overloading In C, the real behavior of an operator will change according to its operands. For example, adding two integers is different than adding two doubles, or even one integer and one double. There is even the whole pointer arithmetic domain (without casting, you can add to a pointer an integer, but you cannot add two pointers...). In Java, there is no pointer arithmetic, but someone still found string concatenation without the `+` operator would be ridiculous enough to justify an exception in the "operator overloading is evil" creed. It's just that you, as a C (for historical reasons) or Java (for *personal reasons*, see below) coder, you can't provide your own. ## In C++, operator overloading is not optional... In C++, operator overloading for built-in types is not possible (and this is a good thing), but *user-defined* types can have *user-defined* operator overloads. As already said earlier, in C++, and to the contrary to Java, user-types are not considered second-class citizens of the language, when compared to built-in types. So, if built-in types have operators, user types should be able to have them, too. The truth is that, like the `toString()`, `clone()`, `equals()` methods are for Java (*i.e. quasi-standard-like*), C++ operator overloading is so much part of C++ that it becomes as natural as the original C operators, or the before mentioned Java methods. Combined with template programming, operator overloading becomes a well known design pattern. 
In fact, you cannot go very far in STL without using overloaded operators, and overloading operators for your own class. ## ...but it should not be abused Operator overloading should strive to respect the semantics of the operator. Do not subtract in a `+` operator (as in "do not subtract in an `add` function", or "return crap in a `clone` method"). Cast overloads can be very dangerous because they can lead to ambiguities. So they should really be reserved for well-defined cases. As for `&&` and `||`, do not ever overload them unless you really know what you're doing, as you'll lose the short circuit evaluation that the native operators `&&` and `||` enjoy. # So... Ok... Then why is it not possible in Java? Because James Gosling said so: > I left out operator overloading as a **fairly personal choice** because I had seen too many people abuse it in C++. > > *James Gosling. Source: <http://www.gotw.ca/publications/c_family_interview.htm>* Please compare Gosling's text above with Stroustrup's below: > Many C++ design decisions have their roots in my dislike for forcing people to do things in some particular way [...] Often, I was tempted to outlaw a feature I personally disliked, I refrained from doing so because **I did not think I had the right to force my views on others**. > > *Bjarne Stroustrup. Source: The Design and Evolution of C++ (1.3 General Background)* ## Would operator overloading benefit Java? Some objects would greatly benefit from operator overloading (concrete or numerical types, like BigDecimal, complex numbers, matrices, containers, iterators, comparators, parsers etc.). In C++, you can profit from this benefit because of Stroustrup's humility. In Java, you're simply screwed because of Gosling's *personal choice*. ## Could it be added to Java?
The reasons for not adding operator overloading now in Java could be a mix of internal politics, allergy to the feature, distrust of developers (you know, the saboteur ones that seem to haunt Java teams...), compatibility with the previous JVMs, time to write a correct specification, etc. So don't hold your breath waiting for this feature... ## But they do it in C#!!! Yeah... While this is far from being the only difference between the two languages, this one never fails to amuse me. Apparently, the C# folks, with their *"every primitive is a `struct`, and a `struct` derives from Object"*, got it right at first try. ## And they do it in [other languages](https://en.wikipedia.org/wiki/Operator_overloading)!!! Despite all the FUD against user-defined operator overloading, the following languages support it: [Kotlin](https://kotlinlang.org/docs/reference/operator-overloading.html), [Scala](https://stackoverflow.com/q/1991240), [Dart](https://www.dartlang.org/articles/idiomatic-dart/#operators), [Python](https://docs.python.org/3/reference/datamodel.html#special-method-names), [F#](https://msdn.microsoft.com/en-us/library/dd233204.aspx), [C#](https://msdn.microsoft.com/en-us/library/aa288467.aspx), [D](http://dlang.org/operatoroverloading.html), [Algol 68](http://www.cap-lore.com/Languages/A68Ops.html), [Smalltalk](http://logos.cs.uic.edu/476/resources/SmallTalk/cs476_Smalltalk/Smalltalk.htm), [Groovy](http://www.groovy-lang.org/operators.html#Operator-Overloading), [Raku (formerly Perl 6)](https://design.raku.org/S06.html#Operator_overloading), C++, [Ruby](https://stackoverflow.com/a/3331974), [Haskell](https://stackoverflow.com/questions/16241556), [MATLAB](https://fr.mathworks.com/help/matlab/matlab_oop/implementing-operators-for-your-class.html), [Eiffel](http://se.ethz.ch/~meyer/publications/online/eiffel/basic.html), [Lua](http://lua-users.org/wiki/MetamethodsTutorial), [Clojure](https://stackoverflow.com/a/1535235), [Fortran 90](http://research.physics.illinois.edu/ElectronicStructure/498-s97/comp_info/overload.html), [Swift](https://developer.apple.com/library/ios/documentation/Swift/Conceptual/Swift_Programming_Language/AdvancedOperators.html#//apple_ref/doc/uid/TP40014097-CH27-ID42), [Ada](http://archive.adaic.com/standards/83lrm/html/lrm-06-07.html), [Delphi 2005](http://edn.embarcadero.com/article/34324)... So many languages, with so many different (and sometimes opposing) philosophies, and yet they all agree on that point. Food for thought...
Why doesn't Java offer operator overloading?
[ "", "java", "operator-overloading", "" ]
Linux supports sending an arbitrary Posix-Signal such as `SIGINT` or `SIGTERM` to a process using the `kill`-Command. While `SIGINT` and `SIGTERM` are just boring old ways to end a process in a friendly or not-so-friendly kind of way, `SIGQUIT` is meant to trigger a core dump. This can be used to trigger a running Java VM to print out a thread dump, including the stacktraces of all running threads -- neat! After printing the debugging info, the Java VM will continue doing whatever it was doing before; in fact the thread dump just happens in another spawned thread of maximum priority. (You can try this out yourself by using `kill -3 <VM-PID>`.) Note that you can also register your own signal handlers using the (unsupported!) `Signal` and `SignalHandler` classes in the `sun.misc`-package, so you can have all kinds of fun with it. *However, I have yet to find a way to send a signal to a Windows process.* Signals are created by certain user inputs: `Ctrl-C` triggers a `SIGINT` on both platforms, for instance. But there does not seem to be any utility to manually send a signal to a running, but non-interactive process on Windows. The obvious solution is to use the Cygwin `kill` executable, but while it can end Windows processes using the appropriate Windows API, I could not send a `SIGBREAK` (the Windows equivalent to `SIGQUIT`) with it; in fact I think the only signal it is able to send to Windows processes is `SIGTERM`. So, to make a long story short and to repeat the headline: How do I send an arbitrary signal to a process in Windows?
If what you want is to explicitly/programmatically kill another program/process of any kind, within the SysInternals' pstools there is a small tool named "pskill" that behaves just like Unixen "kill" would do. If you want something else, keep reading (though I may be wrong on some of the specifics below - it's been eons since I last developed a Windows program in C using only the WinAPI and Charles Petzold's excellent books "Programming for Windows" as a guide). On Windows you don't properly have "signals", what functions WinMain and WinProc receive from the Operating System are simple *messages*. For instance, when you click on the "X" button of a window, Windows sends that windows' handler the message WM\_CLOSE. When the window's deleted but program's still running, it sends WM\_DESTROY. When it's about to get out of the main message processing loop, WinMain (not WinProc) receives WM\_QUIT. Your program should respond to all these as expected - you can actually develop an "unclosable" application by not doing what it should upon receiving a WM\_CLOSE. When user selects the task from Windows Task Manager and clicks "End Task", the OS will send WM\_CLOSE (and another one I don't remember). If you use "End Process", though, the process is killed directly, no messages sent ever (source: [The Old New Thing](https://devblogs.microsoft.com/oldnewthing/20040722-00/?p=38373)) I remember there was a way to get the HWND of another process' window, once you get that another process could send that window a message thru functions PostMessage and DispatchMessage.
Windows is not POSIX. It does not have signals. The only 'signals' that console programs get is if they call `SetConsoleCtrlHandler`, in which case it can be notified that the user has pressed Ctrl+C, Ctrl+Break, closed the console window, logged off, or shut the system down. Everything else is done with IPC, typically with window messages or RPC. Check Sun's documentation to see if there's a way to do what you're asking on the Windows JRE.
Sending an arbitrary Signal in Windows?
[ "", "java", "windows", "utilities", "" ]
What are my options? I tried MonoDevelop over a year ago but it was extremely buggy. Is the latest version a stable development environment?
MonoDevelop 2.0 has been released, it now has a decent GUI Debugger, code completion, Intellisense C# 3.0 support (including linq), and a decent GTK# Visual Designer. In short, since the 2.0 release I have started using Mono Develop again and am very happy with it so far. Check out the [MonoDevelop website](http://monodevelop.com/) for more info.
Microsoft has released [Visual Studio Code](https://www.visualstudio.com/en-us/products/code-vs.aspx "Visual Studio Code") for Linux, which has good C# support, naturally.
IDE's for C# development on Linux?
[ "", "c#", "linux", "ide", "mono", "" ]
Is there any way to set the same icon to all my forms without having to change one by one? Something like when you setup `GlobalAssemblyInfo` for all your projects inside your solution.
One option would be to inherit from a common base-Form that sets the Icon in the constructor (presumably from a resx). Another option might be [PostSharp](http://www.postsharp.org/) - it seems like it should be possible to do this (set .Icon) via AOP; not trivial, though. Finally, you could use a simple utility method (perhaps an extension method) to do the same. Best of all, with the first option, you could probably risk a `Ctrl`+`H` (replace all) from `: Form` or `: System.Windows.Forms.Form` to `: MyCustomForm`.
1. In the project properties > Application > Icon and Manifest > browse for a \*.ico file and add it there. 2. In the constructor or `_Load` event of a Form, simply add: ``` this.Icon = Icon.ExtractAssociatedIcon(Application.ExecutablePath); ```
Set same icon for all my Forms
[ "", "c#", ".net", "winforms", "icons", "" ]
In your opinion, is it ever valid to use the @ operator to suppress an error/warning in PHP whereas you may be handling the error? If so, in what circumstances would you use this? Code examples are welcome. Edit: Note to repliers. I'm not looking to turn error reporting off, but, for example, common practice is to use ``` @fopen($file); ``` and then check afterwards... but you can get rid of the @ by doing ``` if (file_exists($file)) { fopen($file); } else { die('File not found'); } ``` or similar. I guess the question is - is there anywhere that @ HAS to be used to supress an error, that CANNOT be handled in any other manner?
I would suppress the error **and handle it**. Otherwise you may have a **TOCTOU** issue (Time-of-check, time-of-use. For example a file may get deleted after file\_exists returns true, but before fopen). But I wouldn't just suppress errors to make them go away. These better be visible.
Note: Firstly, I realise 99% of PHP developers use the error suppression operator (I used to be one of them), so I'm expecting any PHP dev who sees this to disagree. > In your opinion, is it ever valid to use the @ operator to suppress an error/warning in PHP whereas you may be handling the error? **Short answer:** No! **Longer more correct answer:** I don't know as I don't know everything, but so far I haven't come across a situation where it was a good solution. **Why it's bad:** In what I think is about 7 years using PHP now I've seen endless debugging agony caused by the error suppression operator and have never come across a situation where it was unavoidable. The problem is that the piece of code you are suppressing errors for, may currently only cause the error you are seeing; however when you change the code which the suppressed line relies on, or the environment in which it runs, then there is every chance that the line will attempt to output a completely different error from the one you were trying to ignore. Then how do you track down an error that isn't outputting? Welcome to debugging hell! It took me many years to realise how much time I was wasting every couple of months because of suppressed errors. Most often (but not exclusively) this was after installing a third party script/app/library which was error free in the developers environment, but not mine because of a php or server configuration difference or missing dependency which would have normally output an error immediately alerting to what the issue was, but not when the dev adds the magic @. **The alternatives (depending on situation and desired result):** Handle the actual error that you are aware of, so that if a piece of code is going to cause a certain error then it isn't run in that particular situation. But I think you get this part and you were just worried about end users seeing errors, which is what I will now address. 
For regular errors you can set up an error handler so that they are output in the way you wish when it's you viewing the page, but hidden from end users and logged so that you know what errors your users are triggering. For fatal errors set `display_errors` to off (your error handler still gets triggered) in your php.ini and enable error logging. If you have a development server as well as a live server (which I recommend) then this step isn't necessary on your development server, so you can still debug these fatal errors without having to resort to looking at the error log file. There's even a [trick using the shutdown function](http://www.php.net/manual/en/function.set-error-handler.php#88401) to send a great deal of fatal errors to your error handler. **In summary:** Please avoid it. There may be a good reason for it, but I'm yet to see one, so until that day it's my opinion that the (@) Error suppression operator is evil. You can read [my comment on the Error Control Operators page](http://php.net/operators.errorcontrol#90987) in the PHP manual if you want more info.
Suppress error with @ operator in PHP
[ "", "php", "operators", "error-suppression", "" ]
Typically you will find STL code like this: ``` for (SomeClass::SomeContainer::iterator Iter = m_SomeMemberContainerVar.begin(); Iter != m_SomeMemberContainerVar.end(); ++Iter) { } ``` But we actually have the recommendation to write it like this: ``` SomeClass::SomeContainer::iterator Iter = m_SomeMemberContainerVar.begin(); SomeClass::SomeContainer::iterator IterEnd = m_SomeMemberContainerVar.end(); for (; Iter != IterEnd; ++Iter) { } ``` If you're worried about scoping, add enclosing braces: ``` { SomeClass::SomeContainer::iterator Iter = m_SomeMemberContainerVar.begin(); SomeClass::SomeContainer::iterator IterEnd = m_SomeMemberContainerVar.end(); for (; Iter != IterEnd; ++Iter) { } } ``` This is supposed to give a speed and efficiency gain, especially if you are programming consoles, because the .end() function is not called on each iteration of the loop. I just take the performance improvement for granted, it sounds reasonable but i don't know how much and it certainly depends on the type of container and actual STL implementation in use. But having used this style for a couple months now i actually prefer it over the first anyway. The reason being readability: the for line is neat and tidy. With qualifiers and member variables in real production code it is quite easy to have **really** long for lines if you use the style in the first example. That's why i intentionally made it to have a horizontal scrollbar in this example, just so you see what i'm talking about. ;) On the other hand, you suddenly introduce the Iter variables to the outer scope of the for loop. But then, at least in the environment i work in, the Iter would have been accessible in the outer scope even in the first example. What is your take on this? Are there any pro's to the first style other than possibly limiting the scope of Iter?
If you wrap your code into lines properly, the inline form would be equally readable. Besides, you should always do the `iterEnd = container.end()` as an optimization: ``` for (SomeClass::SomeContainer::iterator Iter = m_SomeMemberContainerVar.begin(), IterEnd = m_SomeMemberContainerVar.end(); Iter != IterEnd; ++Iter) { } ``` *Update: fixed the code per paercebal's advice.*
Another alternative is to use a foreach macro, for example [boost foreach](http://www.boost.org/doc/libs/1_36_0/doc/html/foreach.html): ``` BOOST_FOREACH( ContainedType item, m_SomeMemberContainerVar ) { mangle( item ); } ``` I know macros are discouraged in modern c++, but until the auto keyword is widely available this is the best way I've found to get something that is concise and readable, and still completely typesafe and fast. You can implement your macro using whichever initialization style gets you better performance. There's also a note on the linked page about redefining BOOST\_FOREACH as foreach to avoid the annoying all caps.
Is Iterator initialization inside for loop considered bad style, and why?
[ "", "c++", "stl", "coding-style", "iteration", "iterator", "" ]
I have months stored in SQL Server as 1,2,3,4,...12. I would like to display them as January,February etc. Is there a function in SQL Server like MonthName(1) = January? I am trying to avoid a CASE statement, if possible.
A little hacky but should work: ``` SELECT DATENAME(month, DATEADD(month, @mydate-1, CAST('2008-01-01' AS datetime))) ```
I think this is the best way to get the **month name** when you have the **month number** ``` Select DateName( month , DateAdd( month , @MonthNumber , 0 ) - 1 ) ``` Or ``` Select DateName( month , DateAdd( month , @MonthNumber , -1 ) ) ```
Convert Month Number to Month Name Function in SQL
[ "", "sql", "sql-server", "t-sql", "sql-server-2005", "" ]
Is there any *easy* way to explicitly make an already threaded application utilise a multi-core environment (if available)? The first problem would be how do I determine if my runtime environment supports multi-core applications? If I then determine that I can use multi-core functionality, can I explicitly assign threads to run on different cores? If so, is the thread communication handled by the OS/hardware or do I have to get my hands dirty?
If your application is already threaded properly, you don't have to do anything. Thread context execution and handling is done for you. Now when it comes to threading, is your app threaded in a way to utilize the cores? Do you have processor-intensive tasks split into separate threads? If not, then you will need to make changes to split up your processing tasks into more threads. You can use Environment.ProcessorCount to get the core count and make as many threads as appropriate at runtime for your tasks. As of now, there is not really any way to make an improperly threaded app (single thread) into a threaded app taking advantage of all the processors/cores, unless you look to future technologies, such as [plinq](http://msdn.microsoft.com/en-us/magazine/cc163329.aspx) and [Parallel FX](http://en.wikipedia.org/wiki/Parallel_FX_Library)
There are two extensions that I know about: [Parallel C# Homepage](http://www.parallelcsharp.com/) [Check out Somasegar's MSDN blog on the MS June CTP for the parallel 3.5 extensions](http://blogs.msdn.com/somasegar/archive/2008/06/02/june-2008-ctp-parallel-extensions-to-the-net-fx.aspx) From MSDN, the library is made up of the following components: * The Task Parallel Library (TPL), which provides support for imperative data and task parallelism. * Parallel LINQ (PLINQ), which provides support for declarative data parallelism. * Coordination Data Structures (CDS), which provide support for work coordination and managing shared state.
Explicit multi-core C# applications
[ "", "c#", "multicore", "runtime-environment", "" ]
I'm looking for a profiler to use with native C++. It certainly does not have to be free, however cost does factor into the purchase decision. This is for commercial work so I can't use personal or academic licensed copies. The key features I'm looking for are: * Process level metrics * Component level metrics * Line-level metrics * Supports Multi-threaded code * Usability * Cost * Visual Studio 2005 Professional support required (VS 2008 Professional support highly desirable) I've used Intel's VTune and Compuware's Devpartner Performance Analysis Community Edition. VTune seemed very powerful but it has a steep learning curve. It also is very "modular" so you have to figure out what parts are you need to buy. DevPartner PACE was pretty easy to use and provides all of the key features however it's only a 45-day trial. The licensed version (DevPartner for Visual C++ BoundsChecker Suite) is about $1400 a seat, which is doable but a bit high imo. What are some good profilers for native C++ and **WHY**? --- ### See also: [What's Your Favorite Profiling Tool For C++](https://stackoverflow.com/questions/26663/whats-your-favorite-profiling-tool-for-c)
On Windows, [GlowCode](http://www.glowcode.com) is affordable, fairly easy to use, and offers a free trial so you can see if it works for you.
Many people are not aware, but Microsoft is making great progress putting the best possible tools for improving performance in the hands of developers for free :-). They are exposing to all of us the internals of Windows tracing: ETW. [perftools](http://www.microsoft.com/whdc/system/sysperf/perftools.mspx) It is part of the new Windows SDK for Server 2008 and Vista. Simply impressive, and a must-download if performance analysis and profiling under Windows is your goal (regardless of language). Check the documentation here before you decide to download it: [msdn doc](http://msdn.microsoft.com/en-us/library/ff191077.aspx)
What are some good profilers for native C++ on Windows?
[ "", "c++", "windows", "performance", "profiling", "" ]
An interesting issue came up recently. We came across some code that is using `hashCode()` as a salt source for MD5 encryption but this raises the question: will `hashCode()` return the same value for the same object on different VMs, different JDK versions and operating systems? Even if its not guaranteed, has it changed at any point up til now? EDIT: I really mean `String.hashCode()` rather than the more general `Object.hashCode()`, which of course can be overridden.
No. From <http://tecfa.unige.ch/guides/java/langspec-1.0/javalang.doc1.html>: > The general contract of hashCode is as > follows: > > * Whenever it is invoked on the same object more than once during an > execution of a Java application, > hashCode must consistently return the > same integer. The integer may be > positive, negative, or zero. This > integer does not, however, have to > remain consistent from one Java > application to another, or from one > execution of an application to another > execution of the same application. > [...]
It depends on the type: * If you've got a type which hasn't overridden hashCode() then it will probably return a different hashCode() each time you run the program. * If you've got a type which overrides hashCode() but doesn't document how it's calculated, it's perfectly legitimate for an object with the same data to return a different hash on each run, so long as it returns the same hash for repeated calls within the same run. * If you've got a type which overrides hashCode() in a documented manner, i.e. the algorithm is part of the documented behaviour, then you're probably safe. (java.lang.String documents this, for example.) However, I'd still steer clear of *relying* on this on general principle, personally. Just a cautionary tale from the .NET world: I've seen at least a few people in a world of pain through using the result of string.GetHashCode() as their password hash in a database. The algorithm changed between .NET 1.1 and 2.0, and suddenly all the hashes are "wrong". (Jeffrey Richter documents an almost identical case in CLR via C#.) When a hash *does* need to be stored, I'd prefer it to be calculated in a way which is *always* guaranteed to be stable - e.g. MD5 or a custom interface implemented by your types with a guarantee of stability.
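To illustrate the String case: the algorithm is part of the documented contract (`s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]`), so a hash recomputed from that formula must match `String.hashCode()` on any conforming JVM. A quick sketch (the class and method names here are made up for illustration):

```java
// Recomputes String.hashCode() from its documented formula:
//   s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1]
class StringHashCheck {

    static int documentedHash(String s) {
        int h = 0;
        for (int i = 0; i < s.length(); i++) {
            h = 31 * h + s.charAt(i); // Horner's rule form of the formula
        }
        return h;
    }

    public static void main(String[] args) {
        String[] samples = { "", "a", "hello", "password hash" };
        for (String s : samples) {
            if (documentedHash(s) != s.hashCode()) {
                throw new AssertionError("mismatch for: " + s);
            }
        }
        System.out.println("documented formula matches String.hashCode()");
    }
}
```

The same stability does not hold for types that leave their hash algorithm unspecified, which is why storing the result of a default `hashCode()` remains a bad idea.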
Is String.hashCode() portable across VMs, JDKs and OSs?
[ "", "java", "hashcode", "" ]
Since I have started using this site, I keep hearing about the Boost library. I am wondering what are some of the major benefits of the Boost library (hence why should I use it) and how portable is the Boost library?
Boost is organized by several members of the standard committee. So it is a breeding ground for libraries that will be in the next standard. 1. It is an extension to the STL (it fills in the bits left out) 2. It is well documented. 3. It is well peer-reviewed. 4. It has high activity so bugs are found and fixed quickly. 5. It is platform neutral and works everywhere. 6. It is free to use. With tr1 coming up soon it is nice to know that boost already has a lot of the ground covered. A lot of the libraries in tr1 are basically adapted directly from boost originals and thus have been tried and tested. The difference is that they have been moved into the std::tr1 namespace (rather than boost). All that you need to do is add the following to your compiler's default [include search path](http://www.boost.org/doc/libs/1_37_0/doc/html/boost_tr1/usage.html): ``` <boost-install-path>/boost/tr1/tr1 ``` Then when you include the standard headers boost will automatically import all the required stuff into the namespace std::tr1 ### For Example: To use std::tr1::shared_ptr you just need to include <memory>. This will give you all the smart pointers with one file.
You can simply read the [Boost Background Information](http://www.boost.org/users/) page to get a quick overview of why you should use Boost and what you can use it for. Worth the few minutes it takes.
Benefits and portability of Boost Library
[ "", "c++", "boost", "" ]
I am trying to store more than 1 data item at a single index in my linked-list. All of the examples in my textbook seem to illustrate adding only 1 piece of data per index. I'm assuming it is possible to add more? For example, using the Collections API to store an integer I would do the following: ``` LinkedList <Integer>linky = new LinkedList<Integer>(); int num1 = 2, num2 = 22, num3 = 25, num4 = 1337; linky.add(num1); ``` How would I go about adding num2, num3, and num4 to the same first index in the list? Thanks guys.
There seems to be a little confusion about how linked lists work. Essentially, a linked list is composed of nodes, each of which contains one datum (an object, which itself can contain several member variables, to be precise), and a link to the next node in the list (or a null pointer if there is no such next node). You can also have a doubly-linked list, where each node also has a pointer to the previous node in the list, to speed up certain kinds of access patterns. To add multiple "pieces of data" to a single node sounds like adding several links off of one node, which turns your linked list into an N-ary *tree*. To add multiple pieces of data onto the end of the list, in the manner most commonly associated with a linked list, just do: ``` LinkedList <Integer>linky = new LinkedList<Integer>(); int num1 = 2, num2 = 22, num3 = 25, num4 = 1337; linky.add(num1); linky.add(num2); linky.add(num3); linky.add(num4); ``` ## Alternately, if you want each node of the linked list to have several pieces of data These data should be packaged up into an **object** (by defining a `class` that has them all as member variables). For example: ``` class GroupOfFourInts { int myInt1; int myInt2; int myInt3; int myInt4; public GroupOfFourInts(int a, int b, int c, int d) { myInt1 = a; myInt2 = b; myInt3 = c; myInt4 = d; } } class someOtherClass { public static void main(String[] args) { LinkedList<GroupOfFourInts> linky = new LinkedList<GroupOfFourInts>(); GroupOfFourInts group1 = new GroupOfFourInts(1,2,3,4); GroupOfFourInts group2 = new GroupOfFourInts(1337,7331,2345,6789); linky.add(group1); linky.add(group2); } } ``` Now, `linky` will have 2 nodes, each of which will contain 4 `int`s, *myInt1*, *myInt2*, *myInt3*, and *myInt4*. ## Note None of the above is specific to linked lists. This pattern should be used whenever you want to store a bunch of data together as a unit. 
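To make the node description above concrete, a hand-rolled singly linked node could look like this (an illustrative sketch only — `java.util.LinkedList` manages its own nodes internally):

```java
// Minimal hand-rolled singly linked list node, as described above:
// one datum plus a link to the next node (null at the end of the list).
class IntNode {
    int value;
    IntNode next;

    IntNode(int value) {
        this.value = value;
    }

    // Walk to the tail, append a new node there, and return it.
    IntNode append(int v) {
        IntNode tail = this;
        while (tail.next != null) {
            tail = tail.next;
        }
        tail.next = new IntNode(v);
        return tail.next;
    }

    // Count the nodes reachable from this one (O(n), as expected).
    int length() {
        int n = 0;
        for (IntNode cur = this; cur != null; cur = cur.next) {
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        IntNode head = new IntNode(2);
        head.append(22);
        head.append(25);
        head.append(1337);
        if (head.length() != 4) throw new AssertionError();
        if (head.next.next.value != 25) throw new AssertionError();
        System.out.println("list length: " + head.length());
    }
}
```

Each node still holds a single datum; holding "several pieces of data" per node means making that datum an object with several member variables, exactly as in the `GroupOfFourInts` example.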
You create a class that has member variables for every piece of data you want to be stored together, then create any Java Collections type (ArrayList, LinkedList, TreeList, ...) of that type. Be sure that you want to use a linked list (as there's no penalty in terms of programming difficulty in choosing an ArrayList or TreeList). This will depend on your data access pattern. Linked lists provide O(1) addition and deletion, but O(n) lookup, whereas ArrayLists provide O(1) lookup, but O(n) arbitrary add and delete. TreeLists provide O(log n) insertion, deletion, and lookup. The tradeoffs between these depend on the amount of data you have and how you're going to be modifying and accessing the data structure. Of course, none of this matters if you'll only have, say, <100 elements in your list ;-) Hope this helps!
Use a structure. For example:

```
private struct Node
{
    int Num1;
    int Num2;
    int Num3;
}
```

...

```
LinkedList<Node> list = new LinkedList<Node>();

Node n = new Node();
n.Num1 = 10;
n.Num2 = 100;
n.Num3 = 1000;

list.AddLast(n);
```

Note: I assume this is in C#; correct me if I'm wrong and I will fix the code ;) If you have not gone over OOP yet in your book - then I would recommend giving it a try; it will help you solve problems like this.
Storing more than 1 data item at a single index in a linked-list?
[ "", "java", "collections", "linked-list", "" ]
How do I manipulate images at the pixel level in C#? I need to be able to read/modify each bitmap pixel RGB values separately. A code sample would be appreciated.
If you want speed, then [LockBits](http://msdn.microsoft.com/en-us/library/system.drawing.bitmap.lockbits.aspx). See here for a good walkthrough by Bob Powell. If you just want to edit a few, then [GetPixel](http://msdn.microsoft.com/en-us/library/system.drawing.bitmap.getpixel.aspx)/[SetPixel](http://msdn.microsoft.com/en-us/library/system.drawing.bitmap.setpixel.aspx) should do what you want.
A sample code routine (I use this for simple merge and compare functionality. It takes two images and produces a third greyscale image showing the differences between the two images as a greyscale tone level. The darker it is, the more the diff.): ``` public static Bitmap Diff(Bitmap src1, Bitmap src2, int x1, int y1, int x2, int y2, int width, int height) { Bitmap diffBM = new Bitmap(width, height, PixelFormat.Format24bppRgb); for (int y = 0; y < height; y++) { for (int x = 0; x < width; x++) { //Get Both Colours at the pixel point Color col1 = src1.GetPixel(x1 + x, y1 + y); Color col2 = src2.GetPixel(x2 + x, y2 + y); // Get the difference RGB int r = 0, g = 0, b = 0; r = Math.Abs(col1.R - col2.R); g = Math.Abs(col1.G - col2.G); b = Math.Abs(col1.B - col2.B); // Invert the difference average int dif = 255 - ((r+g+b) / 3); // Create new grayscale RGB colour Color newcol = Color.FromArgb(dif, dif, dif); diffBM.SetPixel(x, y, newcol); } } return diffBM; } ``` [Marc's post](https://stackoverflow.com/questions/190385/how-to-manipulate-images-at-pixel-level-in-c/190395#190395) notes LockBits and using that to modify the image directly in memory. I would suggest looking at that rather than what I have posted if performance is a concern. Thanks Marc!
How to manipulate images at the pixel level in C#
[ "", "c#", "image-processing", "image-manipulation", "" ]
Today at work we came across the following code (some of you might recognize it): ``` #define GET_VAL( val, type ) \ { \ ASSERT( ( pIP + sizeof(type) ) <= pMethodEnd ); \ val = ( *((type *&)(pIP))++ ); \ } ``` Basically we have a byte array and a pointer. The macro returns a reference to a variable of type and advance the pointer to the end of that variable. It reminded me of the several times that I needed to "think like a parser" in order to understand C++ code. Do you know of other code examples that caused you to stop and read it several times till you managed to grasp what it was suppose to do?
The inverse square root implementation in Quake 3: ``` float InvSqrt (float x){ float xhalf = 0.5f*x; int i = *(int*)&x; i = 0x5f3759df - (i>>1); x = *(float*)&i; x = x*(1.5f - xhalf*x*x); return x; } ``` **Update:** [How this works](http://www.lomont.org/Math/Papers/2003/InvSqrt.pdf) (thanks ryan\_s)
This was on reddit recently <http://www.eelis.net/C++/analogliterals.xhtml> ``` assert((o-----o | ! ! ! ! ! ! ! o-----o ).area == ( o---------o | ! ! ! o---------o ).area ); ```
What is the most hard to understand piece of C++ code you know?
[ "", "c++", "" ]
I'm using a vendor API to obtain a JDBC connection to the application's database. The API works when running in the application server or when running in a stand-alone mode. I want to run a series of SQL statements in a single transaction. I'm fine with them occurring in the context of the JTA transaction if it exists. However, if it doesn't then I need to use the JDBC transaction demarcation methods. (Calling these methods on a JDBC connection that is participating in a JTA transaction causes a SQLException.) So I need to be able to determine whether the Connection came from the JTA enabled DataSource or if it's just a straight JDBC connection. Is there a straight forward way to make this determination? Thanks!
Even if it's straight JDBC, you can have a JTA transaction enabled. Checking the autoCommit flag will NOT help in this regard. You can be in a transaction, distributed or otherwise, with autoCommit set to false. autoCommit set to true would tell you you're not in a distributed transaction, but a value of false just means you won't auto-commit... it could be in any kind of transaction. I think you're going to have to call `UserTransaction.getStatus()` and verify that it is not equal to `Status.STATUS_NO_TRANSACTION`. This would tell you if you're in a JTA transaction.
What Thilo says does make sense. Otherwise, I'm not sure of a straight way, but I can give you a "hack": write a bad SQL statement which you know will cause a DB exception. That will result in a stack trace, and from the stack trace you can find out whether it is a JTA-derived connection or not.
How do you determine if a JDBC Connection was retrieved from a JTA enabled DataSource or straight JDBC?
[ "", "java", "jdbc", "transactions", "jta", "" ]
I'm having some minor problems with some animations I'm trying to set up. I have a couple divs stacked on top of each other kind of like this. ``` <div id="div1"> Stuff... </div> <div id="div2"> More Stuff... </div> ``` Each of these divs has a drop shadow applied to it via jQuery plugin (jquery.dropshadow.js). The problem occurs when I expand one of the divs using some kind of animation. The shadow does not update with the size of the div. I can redraw the shadow in the callback of the animation but still looks pretty joggy. Is there a way that I can update the status of my shadows periodically throughout the course of the animation or can anyone recommend a better drop shadow library that would fix the problem? It doesn't have to be jQuery plugin.
I would suggest using **CSS** for your drop shadows, and **not JS**. I have dealt with this exact problem in the past and I have completely stopped using JS for drop shadows. I have never seen animations with JS shadows look as smooth as pure CSS. Also, using too much JS to alter the page elements can cause performance issues.
I think the only way to do this (at least with that particular drop shadow plugin) would be targeting both the element you want **and** all the drop-shadow "phantom" elements, in your animation. So, for example: ``` <style type="text/css"> #div1 { width: 50px; } </style> <div id="div1"> <p>Here is a lot of stuff. Stuff stuff stuff.</p> </div> <script type="text/javascript"> $(document).ready(function() { $("#div1").dropShadow(); $("#div1").click(function() { $("#div1, #div1 + .dropShadow .dropShadow").animate({ width: "400px" }, 1500); }); }); </script> ``` This is based on the structure of the rendered code that the drop-shadow plugin produces... all the fuzzy copies of your original element get a class of *.dropShadow* and get grouped into a container element which also has a class of *.dropShadow* and gets stuck into the document right after the original element (thus the *+* selector). As long as you apply whatever animation you're doing to all of these shadow elements, they all get animated (however, it *is* a bit jerky from all that processing... uffda).
jQuery animation
[ "", "javascript", "jquery", "animation", "user-interface", "" ]
If I'm making a simple grid based game, for example, I might have a few 2d lists. One might be for terrain, another might be for objects, etc. Unfortunately, when I need to iterate over the lists and have the contents of a square in one list affect part of another list, I have to do something like this. ``` for i in range(len(alist)): for j in range(len(alist[i])): if alist[i][j].isWhatever: blist[i][j].doSomething() ``` Is there a nicer way to do something like this?
I'd start by writing a generator method: ``` def grid_objects(alist, blist): for i in range(len(alist)): for j in range(len(alist[i])): yield(alist[i][j], blist[i][j]) ``` Then whenever you need to iterate over the lists your code looks like this: ``` for (a, b) in grid_objects(alist, blist): if a.is_whatever(): b.do_something() ```
If anyone is interested in performance of the above solutions, here they are for 4000x4000 grids, from fastest to slowest: * [Brian](https://stackoverflow.com/questions/189087/how-can-i-in-python-iterate-over-multiple-2d-lists-at-once-cleanly#189111): 1.08s (modified, with `izip` instead of `zip`) * [John](https://stackoverflow.com/questions/189087/how-can-i-in-python-iterate-over-multiple-2d-lists-at-once-cleanly#189270): 2.33s * [DzinX](https://stackoverflow.com/questions/189087/how-can-i-in-python-iterate-over-multiple-2d-lists-at-once-cleanly#189234): 2.36s * [ΤΖΩΤΖΙΟΥ](https://stackoverflow.com/questions/189087/how-can-i-in-python-iterate-over-multiple-2d-lists-at-once-cleanly#189348): 2.41s (but object initialization took 62s) * [Eugene](https://stackoverflow.com/questions/189087/how-can-i-in-python-iterate-over-multiple-2d-lists-at-once-cleanly): 3.17s * [Robert](https://stackoverflow.com/questions/189087/how-can-i-in-python-iterate-over-multiple-2d-lists-at-once-cleanly#189165): 4.56s * [Brian](https://stackoverflow.com/questions/189087/how-can-i-in-python-iterate-over-multiple-2d-lists-at-once-cleanly#189111): 27.24s (original, with `zip`) **EDIT**: Added Brian's scores with `izip` modification and it won by a large amount! John's solution is also very fast, although it uses indices (I was really surprised to see this!), whereas Robert's and Brian's (with `zip`) are slower than the question creator's initial solution. So let's present [Brian](https://stackoverflow.com/questions/189087/how-can-i-in-python-iterate-over-multiple-2d-lists-at-once-cleanly#189111)'s winning function, as it is not shown in proper form anywhere in this thread: ``` from itertools import izip for a_row,b_row in izip(alist, blist): for a_item, b_item in izip(a_row,b_row): if a_item.isWhatever: b_item.doSomething() ```
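As a side note for readers on Python 3: the built-in `zip` is lazy there, so Brian's winning pattern no longer needs `itertools.izip`. A small self-contained sketch (the `Cell` class is made up for illustration):

```python
class Cell:
    """Toy stand-in for the grid objects in the question."""
    def __init__(self, is_whatever=False):
        self.isWhatever = is_whatever
        self.done = False

    def doSomething(self):
        self.done = True

alist = [[Cell(True), Cell(False)], [Cell(False), Cell(True)]]
blist = [[Cell(), Cell()], [Cell(), Cell()]]

# Same shape as Brian's solution, with the Python 3 lazy zip:
for a_row, b_row in zip(alist, blist):
    for a_item, b_item in zip(a_row, b_row):
        if a_item.isWhatever:
            b_item.doSomething()
```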
How can I, in python, iterate over multiple 2d lists at once, cleanly?
[ "", "python", "" ]
How can I list all the files within a folder recursively in Java?
Not sure how you want to represent the tree? Anyway here's an example which scans the entire subtree using recursion. Files and directories are treated alike. Note that [File.listFiles()](http://docs.oracle.com/javase/6/docs/api/java/io/File.html#listFiles%28%29) returns null for non-directories. ``` public static void main(String[] args) { Collection<File> all = new ArrayList<File>(); addTree(new File("."), all); System.out.println(all); } static void addTree(File file, Collection<File> all) { File[] children = file.listFiles(); if (children != null) { for (File child : children) { all.add(child); addTree(child, all); } } } ``` Java 7 offers a couple of improvements. For example, [DirectoryStream](http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#newDirectoryStream%28java.nio.file.Path%29) provides one result at a time - the caller no longer has to wait for all I/O operations to complete before acting. This allows incremental GUI updates, early cancellation, etc. ``` static void addTree(Path directory, Collection<Path> all) throws IOException { try (DirectoryStream<Path> ds = Files.newDirectoryStream(directory)) { for (Path child : ds) { all.add(child); if (Files.isDirectory(child)) { addTree(child, all); } } } } ``` Note that the dreaded null return value has been replaced by IOException. Java 7 also offers a [tree walker](http://docs.oracle.com/javase/tutorial/essential/io/walk.html): ``` static void addTree(Path directory, final Collection<Path> all) throws IOException { Files.walkFileTree(directory, new SimpleFileVisitor<Path>() { @Override public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException { all.add(file); return FileVisitResult.CONTINUE; } }); } ```
```
import java.io.File;

public class Test {
    public static void main( String [] args ) {
        File actual = new File(".");
        for( File f : actual.listFiles()){
            System.out.println( f.getName() );
        }
    }
}
```

It displays files and folders without distinction. See the methods in the `File` class to sort them, avoid printing directories, etc.

<http://java.sun.com/javase/6/docs/api/java/io/File.html>
How to scan a folder in Java?
[ "", "java", "file", "directory", "" ]
I currently have a Java SAX parser that is extracting some info from a 30GB XML file. Presently it is:

* reading each XML node
* storing it into a string object,
* running some regexes on the string
* storing the results to the database

For several million elements. I'm running this on a computer with 16GB of memory, but the memory is not being fully utilized. Is there a simple way to dynamically 'buffer' about 10GB worth of data from the input file? I suspect I could manually take a 'producer' 'consumer' multithreaded version of this (loading the objects on one side, using them and discarding on the other), but damnit, XML is ancient now, are there no efficient libraries to crunch em?
First, try to find out what's slowing you down. * How much faster is the parser when you parse from memory? * Does using a `BufferedInputStream` with a large size help? Is it easy to split up the XML file? In general, shuffling through 30 GiB of any kind of data will take some time, since you have to load it from the hard drive first, so you are always limited by the speed of this. Can you distribute the load to several machines, maybe by using something like [Hadoop](http://hadoop.apache.org/core/)?
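The "measure first, buffer the stream" advice above can be prototyped cheaply. This sketch uses Python's `xml.sax` (the question is Java, but the push-parser API is the same shape) to count elements from an in-memory stream; in Java the analogous move is wrapping the `FileInputStream` in a `BufferedInputStream` before handing it to the parser:

```python
import io
import xml.sax

class CountHandler(xml.sax.ContentHandler):
    """Counts start tags, the SAX analogue of 'read each XML node'."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def startElement(self, name, attrs):
        self.count += 1

xml_bytes = b"<root>" + b"<item/>" * 1000 + b"</root>"
handler = CountHandler()
xml.sax.parse(io.BytesIO(xml_bytes), handler)
# For a real multi-GB file you would pass a buffered binary file instead,
# e.g. open(path, "rb", buffering=1 << 20), mirroring BufferedInputStream.
```

Timing this kind of minimal handler against your real handler tells you whether the time goes to parsing or to your regex/database work.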
1. Just to cover the bases, is Java able to use your 16GB? You (obviously) need to be on a 64-bit OS, and you need to run Java with -d64 -Xmx10g (or however much memory you want to allocate to it).
2. It is highly unlikely memory is a limiting factor for what you're doing, so you really shouldn't see it fully utilized. You should be either IO or CPU bound. Most likely, it'll be IO. If it is IO, make sure you're buffering your streams, and then you're pretty much done; the only thing you can do is buy a faster hard drive.
3. If you really are CPU-bound, it's possible that you're bottlenecking at regex rather than XML parsing. See [this](http://mailinator.blogspot.com/2008/01/how-to-search-for-word-pen1s-in-185.html) (which references [this](http://www.javaworld.com/javaworld/jw-09-2007/jw-09-optimizingregex.html?page=2))
4. If your bottleneck is at SAX, you can try other implementations. Off the top of my head, I can think of the following alternatives:
	* StAX (there are multiple implementations; [Woodstox](http://woodstox.codehaus.org/) is one of the fastest)
	* [Javolution](http://javolution.org/api/javolution/xml/stream/XMLStreamReader.html)
	* Roll your own using [JFlex](http://jflex.de/)
	* Roll your own ad hoc, e.g. using regex

	For the last two, the more constrained your XML subset is, the more efficient you can make it.
5. It's very hard to say, but as others mentioned, an XML-native database might be a good alternative for you. I have limited experience with those, but I know that at least [Berkeley DB XML](http://www.oracle.com/database/berkeley-db/xml/index.html) supports XPath-based indices.
Searching for regex patterns on a 30GB XML dataset, making use of 16GB of memory
[ "", "java", "xml", "" ]
I moved a [WordPress](http://en.wikipedia.org/wiki/WordPress) installation to a new folder on a Windows/[IIS](http://en.wikipedia.org/wiki/Internet_Information_Services) server. I'm setting up 301 redirects in PHP, but it doesn't seem to be working. My post URLs have the following format:

```
http://www.example.com/OLD_FOLDER/index.php/post-title/
```

I can't figure out how to grab the `/post-title/` part of the URL.

`$_SERVER["REQUEST_URI"]` - which everyone seems to recommend - is returning an empty string. `$_SERVER["PHP_SELF"]` is just returning `index.php`. Why is this, and how can I fix it?
Maybe, because you are under IIS, ``` $_SERVER['PATH_INFO'] ``` is what you want, based on the URLs you used to explain. For Apache, you'd use `$_SERVER['REQUEST_URI']`.
``` $pageURL = (@$_SERVER["HTTPS"] == "on") ? "https://" : "http://"; if ($_SERVER["SERVER_PORT"] != "80") { $pageURL .= $_SERVER["SERVER_NAME"].":".$_SERVER["SERVER_PORT"].$_SERVER["REQUEST_URI"]; } else { $pageURL .= $_SERVER["SERVER_NAME"].$_SERVER["REQUEST_URI"]; } return $pageURL; ```
How can I get the current page's full URL on a Windows/IIS server?
[ "", "php", "iis", "" ]
Using MVP, what is the normal order of construction and dependency injection?

Normally you create a presenter for each view and pass the view into the presenter's constructor. But what if you have:

1. A service that multiple views need to listen to events on.
2. Multiple views all pointing to the same data model cache.

Can someone show a normal flow of info from a user click to data coming back to a service from a server?
Here is what I do: First, I define these interfaces:

```
public interface IView<TPresenter>
{
    TPresenter Presenter { get; set; }
}

public interface IPresenter<TView, TPresenter>
    where TView : IView<TPresenter>
    where TPresenter : IPresenter<TView, TPresenter>
{
    TView View { get; set; }
}
```

Then this abstract presenter class:

```
public abstract class AbstractPresenter<TView, TPresenter> : IPresenter<TView, TPresenter>
    where TView : IView<TPresenter>
    where TPresenter : class, IPresenter<TView, TPresenter>
{
    protected TView view;

    public TView View
    {
        get { return this.view; }
        set
        {
            this.view = value;
            this.view.Presenter = this as TPresenter;
        }
    }
}
```

The view is injected via a property, instead of the constructor, to allow the bi-directional assignment in the setter. Notice that a safe cast is needed... Then, my concrete presenter is something like:

```
public class MyPresenter : AbstractPresenter<IMyView, MyPresenter>
{
    //...
}
```

Where `IMyView` implements `IView`. A concrete view type must exist (e.g. `MyView`), but it's the container that resolves it:

1. I register the `MyPresenter` type as itself in the container, with a transient behavior.
2. I register `MyView` as an `IMyView` in the container with a transient behavior.
3. I then ask the container for a `MyPresenter`.
4. The container instantiates a `MyView`.
5. It instantiates a `MyPresenter`.
6. It injects the view into the presenter through the `AbstractPresenter.View` property.
7. The setter code completes the bi-directional association.
8. The container returns the presenter/view pair.

**It allows you to inject other dependencies (services, repos) into both your view and your presenter.** But in the scenario you described, I recommend injecting services and caches into the **presenter**, instead of the view.
In WinForms, I prefer a simple approach. Usually you're dealing with a few UserControls on a design surface -- make these your view classes. .NET creates the control hierarchy for you (via InitializeComponent). If you use the [Passive View](http://martinfowler.com/eaaDev/PassiveScreen.html) pattern, each view then instantiates its presenter. (You can do this either directly or by asking an IOC container.) Use constructor injection to pass a reference to the view's interface to the presenter's constructor. The presenter can then wire itself up to view events. Repeat the process for the model: the presenter instantiates a model and wires up to its events. (In this case you don't need the constructor injection since Passive View says the presenter keeps a reference to the model, not vice versa.) The only nit I've found with this approach is properly managing lifetimes of the model and presenter. You want to keep the view as simple as possible, so you probably don't want it maintaining a reference to the presenter. However, that means you've got this presenter object hanging around with event handlers tied to your view. This setup prevents your view from being garbage collected. One solution is to have your view publish an event that indicates it's closing. The presenter would receive the event and remove both its model and view subscriptions. The objects in your web are now properly dereferenced and the garbage collector can go about its work. You wind up with something like the following:

```
public interface IView
{
    ...
    event Action SomeEvent;
    event EventHandler Disposed;
    ...
}

// Note that the IView.Disposed event is implemented by the
// UserControl.Disposed event.
public class View : UserControl, IView
{
    public event Action SomeEvent;

    public View()
    {
        var presenter = new Presenter(this);
    }
}

public interface IModel
{
    ...
    event Action ModelChanged;
    ...
}

public class Model : IModel
{
    ...
    public event Action ModelChanged;
    ...
}

public class Presenter
{
    private IView MyView;
    private IModel MyModel;

    public Presenter(View view)
    {
        MyView = view;
        MyView.SomeEvent += RespondToSomeEvent;
        MyView.Disposed += ViewDisposed;

        MyModel = new Model();
        MyModel.ModelChanged += RespondToModelChanged;
    }

    // You could take this a step further by implementing IDisposable on the
    // presenter and having View.Dispose() trigger Presenter.Dispose().
    private void ViewDisposed(object sender, EventArgs e)
    {
        MyView.SomeEvent -= RespondToSomeEvent;
        MyView.Disposed -= ViewDisposed;
        MyView = null;

        MyModel.ModelChanged -= RespondToModelChanged;
        MyModel = null;
    }
}
```

You can decouple this example a step further by using IOC and asking your IOC container for implementations of IModel (in the Presenter class) and IPresenter (in the View class).
MVP dependency injection
[ "", "c#", "winforms", "mvp", "" ]
For posting forms with many parameters via AJAX, I am using a solution of creating an `iframe`, posting the form to it by POST, and then accessing the `iframe`'s content. Specifically, I am accessing the content like this:

```
$("#some_iframe_id").get(0).contentWindow.document
```

I tested it and it worked. On some of the pages, I started getting an "Access is denied" error. As far as I know, this shouldn't happen if the iframe is served from the same domain. I'm pretty sure it was working before. Anybody have a clue?

If I'm not being clear enough: I'm posting to the *same domain*. So this is not a cross-domain request. I am testing on IE only.

P.S. I can't use simple ajax POST queries (don't ask...)
Solved it by myself! The problem was, that even though the correct response was being sent (verified with Fiddler), it was being sent with an HTTP 500 error code (instead of 200). So it turns out, that if a response is sent with an error code, IE replaces the content of the `iframe` with an error message loaded from the disk (`res://ieframe.dll/http_500.htm`), and that causes the cross-domain access denied error.
Beware of security limitations associated with **iFrames**, like *cross-domain* restrictions (aka CORS). Below are 3 common errors related to CORS:

1. Loading an iFrame with a different domain. (Ex: opening "*www.foo.com*" while the top frame is "*www.ooof.com*")
2. Loading an iFrame with a different port: the iFrame URL's *port* differs from the one of the top frame.
3. Different protocols: loading an iFrame resource via HTTPS while the parent frame uses HTTP.
"Access is denied" error on accessing iframe document object
[ "", "javascript", "ajax", "iframe", "" ]
I understand the mantra of "don't roll your own" when it comes to site security frameworks. For most cases anyway. I'm going to be collaborating on a site that integrates text-messaging into the system. I'd like to use an existing, well-tested security framework to protect the users data, but I need it to also protect a users phone number as well. I wouldn't want to be the one responsible for a list of users cell phone numbers getting jacked and spammed. What suggestions can the community offer?
Note that techniques applied to passwords aren't applicable here. You can store a password salted and hashed (although the value of doing so can be disputed), but that doesn't work for phone numbers. If someone jacks your server, they can do anything the server can. This must include recovering the phone number, but doesn't include recovering the password if it's hashed well. So the phone number is just a particular case of protecting confidential data. If phone nos truly are the only sensitive data in the app, then you could look at walling off the part of the app that sends the texts, and asymmetrically encrypting the phone nos. In a different process (or on a different machine) run an app that has the key to decrypt phone nos. This app's interface would have maybe one function taking an encrypted no and the message to send. Keep this app simple, and test and audit the snot out of it. Either hide it from the outside world, or use authentication to prove the request really came from your main app, or both. Neither the db nor the main part of the app is capable of decrypting phone nos (so for example you can't search on them), but they can encrypt them for addition to the db. The general technique is called "Privilege separation", the above is just one example. Note that phone nos would generally need to be padded with random data before encryption (like salting a hashed password). Otherwise it's possible to answer the question "is the encrypted phone number X?", without knowing the private key. That may not be a problem from the POV of spammers stealing your distribution list, but it is a problem from the POV of claiming that your phone numbers are securely stored, since it means a brute force attack becomes feasible: there are only a few billion phone nos, and it may be possible to narrow that down massively for a given user. 
Sorry this doesn't directly answer your question: I don't know whether there's a PHP framework which will help implement privilege separation. [Edit to add: in fact, it occurs to me that under the heading of 'keep the privileged app simple', you might not want to use a framework at all. It sort of depends whether you think you're more or less likely leave bugs in the small amount of code you really need, than the framework authors are to have left bugs in the much larger (but more widely used) amount of code they've written. But that's a huge over-simplification.]
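The feasibility of the brute-force attack mentioned above is easy to demonstrate. A toy Python sketch (the 4-digit "phone number" space and the SHA-256 stand-in for an unpadded deterministic scheme are both made up for illustration; a real attack enumerates a few billion numbers the same way):

```python
import hashlib

def deterministic_encrypt(phone):
    # Stand-in for ANY deterministic, unpadded transform of a phone number.
    return hashlib.sha256(phone.encode()).hexdigest()

stolen = deterministic_encrypt("555-0142")  # attacker pulled this from a DB dump

# The attacker simply enumerates the (small) number space.
recovered = None
for n in range(10_000):
    guess = "555-%04d" % n
    if deterministic_encrypt(guess) == stolen:
        recovered = guess
        break
```

Random padding before encryption defeats exactly this: two encryptions of the same number no longer match, so the attacker can't test guesses against the stolen value.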
Since you need to be able to retrieve the phone numbers, the only thing you can really do to protect them (beyond the normal things you would do to protect your db) is encrypt them. This means that you need to:

* Make sure the key doesn't leak when you inadvertently leak a database dump.
* Make sure your system doesn't helpfully decrypt the phone numbers when someone manages to SQL inject your system.

Of course the recommendation of not rolling your own still applies: use AES or some other well-respected cipher with a reasonable key length.
Is there a PHP security framework that protects phone numbers as well as passwords?
[ "", "php", "security", "sms", "spam-prevention", "" ]
I want to consume a web service over HTTPS from a Java client. What steps will I need to take in order to do this?
Really, it shouldn't be much different from consuming a web service over HTTP. The big thing is that the process calling the web service will have to trust the server's SSL certificate. If the certificate was purchased from a well-known certificate-issuing authority, this usually isn't a problem. Otherwise, the client will want to either trust the root certificate, or the certificate associated with the server's fully qualified host name.
You may need to use the [keytool](https://web.archive.org/web/20100606064854/http://java.sun.com:80/j2se/1.4.2/docs/tooldocs/windows/keytool.html) command to trust the server's SSL certificate. I've generally found that it is necessary to run something like this: ``` keytool -importcert -v -trustcacerts -alias ServerName -file server_cert_file.crt -keystore client_keystore_file ```
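Once a certificate is trusted (e.g. via the keytool step above), the client's TLS stack just verifies against that store. The programmatic equivalent in other stacks looks similar; this Python sketch shows the same "verify the server, optionally trust a specific CA file" setup (`server_cert_file.pem` is a hypothetical path):

```python
import ssl

# Default context verifies server certificates and hostnames,
# like a JVM pointed at its default truststore.
ctx = ssl.create_default_context()

# To trust a specific (e.g. self-signed) server certificate,
# you would load its PEM file, analogous to keytool -importcert:
# ctx.load_verify_locations(cafile="server_cert_file.pem")
```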
Consume Webservice using https protocol
[ "", "java", "web-services", "https", "" ]
So I have a small C# app that needs to periodically check the contents of directories on multiple machines on the network. I thought I could just read `\\hostname\C$` as a directory path, but with the normal Directory class there doesn't seem to be a way to authenticate against the other servers so you can access the hidden share. I'm sure there's an easy way to do this that I've overlooked, but at the moment I'm a bit stumped.
From <http://bytes.com/forum/thread689145.html>:

> All processes run in the context of a logged-in user account. If you want to open a file on another computer, your application must be running in the context of a user that has permissions to open files on that machine. You can do this with Impersonation.

The easiest way seems to be to give the current user appropriate rights on the other machines.
To authenticate with a share to which the user running the process does not have permission (which is often the case for administrative shares), try running the net use command:

```
net use \\SERVERNAME\IPC$ /user:USERNAME PASSWORD
```

Try running this in a separate process before the code which actually tries to access the share, e.g.:

```
ProcessStartInfo psi = new ProcessStartInfo(
    "net", @"use \\" + SERVERNAME + @"\IPC$ /user:" + USERNAME + " " + PASSWORD);

Process p = new Process();
p.StartInfo = psi;
p.Start();
p.WaitForExit();
p.Close();

// The code to access the share follows...
```

This is useful if it is not appropriate to give permission to the share for the user account running the process, e.g. for a security model where an end-user application needs to access data on a share to which the user herself should not have direct access.
Reading hidden share in C#
[ "", "c#", "impersonation", "fileshare", "" ]
What is an elegant way to sort objects in PHP? I would love to accomplish something similar to this.

```
$sortedObjectArray = sort($unsortedObjectArray, $Object->weight);
```

Basically specify the array I want to sort as well as the field I want to sort on. I looked into multidimensional array sorting and there might be something useful there, but I don't see anything elegant or obvious.
Almost verbatim from the manual: ``` function compare_weights($a, $b) { if($a->weight == $b->weight) { return 0; } return ($a->weight < $b->weight) ? -1 : 1; } usort($unsortedObjectArray, 'compare_weights'); ``` If you want objects to be able to sort themselves, see example 3 here: <http://php.net/usort>
For php >= 5.3 ``` function osort(&$array, $prop) { usort($array, function($a, $b) use ($prop) { return $a->$prop > $b->$prop ? 1 : -1; }); } ``` Note that this uses Anonymous functions / closures. Might find reviewing the php docs on that useful.
Sort Object in PHP
[ "", "php", "arrays", "sorting", "" ]
What is the simplest SOAP example using Javascript? To be as useful as possible, the answer should: * Be functional (in other words actually work) * Send at least one parameter that can be set elsewhere in the code * Process at least one result value that can be read elsewhere in the code * Work with most modern browser versions * Be as clear and as short as possible, without using an external library
This is the simplest JavaScript SOAP Client I can create. ``` <html> <head> <title>SOAP JavaScript Client Test</title> <script type="text/javascript"> function soap() { var xmlhttp = new XMLHttpRequest(); xmlhttp.open('POST', 'https://somesoapurl.com/', true); // build SOAP request var sr = '<?xml version="1.0" encoding="utf-8"?>' + '<soapenv:Envelope ' + 'xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ' + 'xmlns:api="http://127.0.0.1/Integrics/Enswitch/API" ' + 'xmlns:xsd="http://www.w3.org/2001/XMLSchema" ' + 'xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">' + '<soapenv:Body>' + '<api:some_api_call soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">' + '<username xsi:type="xsd:string">login_username</username>' + '<password xsi:type="xsd:string">password</password>' + '</api:some_api_call>' + '</soapenv:Body>' + '</soapenv:Envelope>'; xmlhttp.onreadystatechange = function () { if (xmlhttp.readyState == 4) { if (xmlhttp.status == 200) { alert(xmlhttp.responseText); // alert('done. use firebug/console to see network response'); } } } // Send the POST request xmlhttp.setRequestHeader('Content-Type', 'text/xml'); xmlhttp.send(sr); // send request // ... } </script> </head> <body> <form name="Demo" action="" method="post"> <div> <input type="button" value="Soap" onclick="soap();" /> </div> </form> </body> </html> <!-- typo --> ```
There are many quirks in the way browsers handle XMLHttpRequest, this JS code will work across all browsers: <https://github.com/ilinsky/xmlhttprequest> This JS code converts XML into easy to use JavaScript objects: <http://www.terracoder.com/index.php/xml-objectifier> The JS code above can be included in the page to meet your no external library requirement. ``` var symbol = "MSFT"; var xmlhttp = new XMLHttpRequest(); xmlhttp.open("POST", "http://www.webservicex.net/stockquote.asmx?op=GetQuote",true); xmlhttp.onreadystatechange=function() { if (xmlhttp.readyState == 4) { alert(xmlhttp.responseText); // http://www.terracoder.com convert XML to JSON var json = XMLObjectifier.xmlToJSON(xmlhttp.responseXML); var result = json.Body[0].GetQuoteResponse[0].GetQuoteResult[0].Text; // Result text is escaped XML string, convert string to XML object then convert to JSON object json = XMLObjectifier.xmlToJSON(XMLObjectifier.textToXML(result)); alert(symbol + ' Stock Quote: $' + json.Stock[0].Last[0].Text); } } xmlhttp.setRequestHeader("SOAPAction", "http://www.webserviceX.NET/GetQuote"); xmlhttp.setRequestHeader("Content-Type", "text/xml"); var xml = '<?xml version="1.0" encoding="utf-8"?>' + '<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ' + 'xmlns:xsd="http://www.w3.org/2001/XMLSchema" ' + 'xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' + '<soap:Body> ' + '<GetQuote xmlns="http://www.webserviceX.NET/"> ' + '<symbol>' + symbol + '</symbol> ' + '</GetQuote> ' + '</soap:Body> ' + '</soap:Envelope>'; xmlhttp.send(xml); // ...Include Google and Terracoder JS code here... ``` Two other options: * JavaScript SOAP client: <http://www.guru4.net/articoli/javascript-soap-client/en/> * Generate JavaScript from a WSDL: <https://cwiki.apache.org/confluence/display/CXF20DOC/WSDL+to+Javascript>
Simplest SOAP example
[ "", "javascript", "soap", "" ]
I have a SQL Server 2005 database and I have 4 GB of text files that I need to import into it. The question is, if these 4 GB of text files are 1.2 GB when they are zipped, how big would the database be if they are imported? Does SQL Server shrink (compress) data by default, or how would I set this up (think: creating the database as a detached item, to be attached to another DB later)?
SQL Server will not shrink data by default. In fact, there will be a small overhead per column and row, so it would require more space. If you really must store those files in the DB (it is recommended that you don't), take a look at this [article](http://mitch-wheat.blogspot.com/2007/05/save-and-restore-filesimages-to-sql.html) I blogged a while back. In SQL Server 2008, they have introduced the FILESTREAM type.
There are two primary factors: * Indexing will increase the space requirement. * Normalization will decrease the space requirement.
SQL Server Storage of Data: 4 gb Raw, how much in SQL Server?
[ "", "sql", "sql-server", "" ]
On my team, we usually do string concatenation like this: ``` var url = // some dynamically generated URL var sb = new StringBuffer(); sb.append("<a href='").append(url).append("'>click here</a>"); ``` Obviously the following is much more readable: ``` var url = // some dynamically generated URL var sb = "<a href='" + url + "'>click here</a>"; ``` But the JS experts claim that the `+` operator is less performant than `StringBuffer.append()`. Is this really true?
Internet Explorer is the only browser which really suffers from this in today's world. (Versions 5, 6, and 7 were dog slow. 8 does not show the same degradation.) What's more, IE gets slower and slower the longer your string is. If you have long strings to concatenate then definitely use an array.join technique. (Or some StringBuffer wrapper around this, for readability.) But if your strings are short don't bother.
Your example is not a good one in that it is very unlikely that the performance will be significantly different. In your example, readability should trump performance because the performance gain of one vs the other is negligible. The benefits of an array (StringBuffer) are only apparent when you are doing many concatenations. Even then your mileage can vary depending on your browser. Here is a detailed performance analysis that shows performance using all the different JavaScript concatenation methods across many different browsers: [String Performance an Analysis](http://www.sitepen.com/blog/2008/05/09/string-performance-an-analysis/) ![join() once, concat() once, join() for, += for, concat() for](https://i.stack.imgur.com/FcI38.jpg) More: [Ajaxian >> String Performance in IE: Array.join vs += continued](http://ajaxian.com/archives/string-performance-in-ie-arrayjoin-vs-continued)
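For what it's worth, the buffer-then-join technique that the analysis above benchmarks can be sketched in a few lines; here in Python rather than JavaScript (illustrative only, the fragments are made up):

```python
# Collect fragments in a list and join once at the end, instead of
# concatenating inside the loop - the same pattern as JS Array.join().
fragments = []
for i in range(5):
    fragments.append("<li>item %d</li>" % i)
html = "".join(fragments)  # single allocation for the final string
```

The point is the shape of the code, not the language: one join at the end avoids building a new intermediate string on every iteration.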
is the + operator less performant than StringBuffer.append()
[ "", "javascript", "string", "concatenation", "" ]
I have those maps in my repository. ``` public IQueryable<AwType> GetAwTypes() { return from awt in _db.AwTypes select new AwType { Id = awt.Id, Header = awt.Header, Description = awt.Description }; } public IQueryable<Aw> GetAws() { return from aw in _db.Aws select new Aw { Id = aw.Id, Bw = (from bw in GetBws() where bw.Id == aw.Bw select bw ).SingleOrDefault(), AwType = (from awt in GetAwTypes() where awt.Id == aw.AwType select awt ).SingleOrDefault(), AwAttribute = aw.AwAttribute }; } ``` In the service I want to get the count of Bws grouped by AwType as `List<KeyValuePair<AwType, int>>`. When I call this LINQ query: ``` var awGroups = from aw in _repository.GetAws() group aw by aw.AwType into newGroup select newGroup; List<KeyValuePair<AwType, int>> RetGroups = new List<KeyValuePair<AwType, int>>(); foreach (var group in awGroups) { RetGroups.Add(new KeyValuePair<AwType, int>(group.Key, group.Count())); } return RetGroups; ``` I get an error saying I can't group by an object; I have to group by a scalar value like aw.AwType.Id. Is there a way to get "AwType, int" pairs in one call?
AwType is a reference type. It would be a bad idea to group on that reference type... Each AwType in that query is a unique reference, so n elements would yield n groups. Try this: ``` var awGroups = from aw in _repository.GetAws() group aw by aw.AwType.ID into newGroup //changed to group on ID select newGroup; List<KeyValuePair<AwType, int>> RetGroups = new List<KeyValuePair<AwType, int>>(); foreach (var group in awGroups) { //changed to get the first element of the group and examine its AwType RetGroups.Add(new KeyValuePair<AwType, int>(group.First().AwType, group.Count())); } return RetGroups; ```
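The same trick (group on a scalar key, then recover the full object from each group) can be sketched outside LINQ as well; here in Python with made-up dictionaries standing in for the entities:

```python
# Group records by a scalar id, then take the full object back out of
# each group - mirroring "group by ID, then group.First().AwType" above.
from collections import defaultdict

aws = [
    {"id": 1, "aw_type": {"id": 10, "header": "A"}},
    {"id": 2, "aw_type": {"id": 10, "header": "A"}},
    {"id": 3, "aw_type": {"id": 20, "header": "B"}},
]

groups = defaultdict(list)
for aw in aws:
    groups[aw["aw_type"]["id"]].append(aw)  # group on the scalar id only

# Build (AwType, count) pairs by recovering the object from the first member
pairs = [(members[0]["aw_type"], len(members)) for members in groups.values()]
```

Grouping on the scalar id guarantees one group per logical type, regardless of object identity.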
You can group by an anonymous type, e.g. `new { Foo, Bar }`.
Is there a way to write a group by query in LinqToSql grouping not on a scalar value?
[ "", "c#", "linq", "linq-to-sql", "" ]
How do I toggle the visibility of an element using `.hide()`, `.show()`, or `.toggle()`? How do I test if an element is `visible` or `hidden`?
Since the question refers to a single element, this code might be more suitable: ``` // Checks CSS content for display:[none|block], ignores visibility:[true|false] $(element).is(":visible"); // The same works with hidden $(element).is(":hidden"); ``` It is the same as [twernt's suggestion](https://stackoverflow.com/questions/178325/how-do-you-test-if-something-is-hidden-in-jquery/178386#178386), but applied to a single element; and it [matches the algorithm recommended in the jQuery FAQ](https://stackoverflow.com/questions/178325/how-do-i-check-if-an-element-is-hidden-in-jquery/4685330#4685330). We use jQuery's [is()](https://api.jquery.com/is/) to check the selected element with another element, selector or any jQuery object. This method traverses along the DOM elements to find a match, which satisfies the passed parameter. It will return true if there is a match, otherwise return false.
You can use the [`hidden`](http://docs.jquery.com/Selectors/hidden) selector: ``` // Matches all elements that are hidden $('element:hidden') ``` And the [`visible`](http://docs.jquery.com/Selectors/visible) selector: ``` // Matches all elements that are visible $('element:visible') ```
How do I check if an element is hidden in jQuery?
[ "", "javascript", "jquery", "dom", "visibility", "display", "" ]
I have an MFC app which I have been working on for a few weeks now, I want to manually set the dimensions of the main frame when it is loaded, can someone give me a hand with this, specifically where to put the code as well? Thanks!
You can also set the size (with `SetWindowPos()`) from within `CMainFrame::OnCreate()`, or in the `CWinApp`-derived class' `InitInstance`. Look for the line that says `pMainFrame->ShowWindow()`, and call `pMainFrame->SetWindowPos()` before that line. That's where I always do it.
Find your screen size with .. ``` CRect rect; SystemParametersInfo(SPI_GETWORKAREA,0,&rect,0); screen_x_size=rect.Width(); screen_y_size=rect.Height(); ``` use these values to calculate the X and Y size of your window then .. ``` ::SetWindowPos(m_hWnd,HWND_TOPMOST,0,0,main_x_size,main_y_size,SWP_NOZORDER); ``` Where `main_x_size` and `main_y_size` are your sizes.
Sizing an MFC Window
[ "", "c++", "mfc", "" ]
In PHP, function parameters can be passed by reference by prepending an ampersand to the parameter in the function declaration, like so: ``` function foo(&$bar) { // ... } ``` Now, I am aware that this is *not* designed to improve performance, but to allow functions to change variables that are normally out of their scope. Instead, PHP seems to use Copy On Write to avoid copying objects (and maybe also arrays) until they are changed. So, for functions that do not change their parameters, the effect should be the same as if you had passed them by reference. However, I was wondering if the Copy On Write logic is maybe short-circuited on pass-by-reference and whether that has any performance impact. *ETA: To be sure, I assume that it's not faster, and I am well aware that this is not what references are for. So I think my own guesses are quite good, I'm just looking for an answer from someone who really knows what's definitely happening under the hood. In five years of PHP development, I've always found it hard to get quality information on PHP internals short of reading the source.*
The Zend Engine uses copy-on-write, and when you use a reference yourself, it incurs a little extra overhead. I can only find [this mention](http://web.archive.org/web/20130406174901/http://www.thedeveloperday.com/php-lazy-copy/) at the time of writing, though, and comments in [the manual](http://www.php.net/references) contain other links. (EDIT) The manual page on [Objects and references](http://www.php.net/manual/en/language.oop5.references.php) contains a little more info on how object variables differ from references.
In a test with 100 000 iterations of calling a function with a string of 20 kB, the results are: ### Function that just reads / uses the parameter ``` pass by value: 0.12065005 seconds pass by reference: 1.52171397 seconds ``` ### Function that writes / changes the parameter ``` pass by value: 1.52223396 seconds pass by reference: 1.52388787 seconds ``` --- ## Conclusions 1. Passing the parameter by value is always faster 2. If the function changes the value of the variable passed, passing by value is, for practical purposes, just as fast as passing by reference
In PHP (>= 5.0), is passing by reference faster?
[ "", "php", "performance", "pass-by-reference", "" ]
I've been playing with the .NET built-in localization features and they seem to all rely on putting data in resx files. But most systems can't rely on this because they are database driven. So how do you solve this issue? Is there a built-in .NET way, or do you create a translations table in SQL and do it all manually? And if you have to do this on the majority of your sites, is there any reason to even use the resx way of localization? An example of this is I have an FAQ list on my site. I keep this list in the database so I can easily add/remove more, but by putting it in the database, I have no good way of translating this information into multiple languages.
In my opinion, localizing dynamic content (e.g., your FAQ) should be done by you in your database. Depending on how your questions are stored, I would probably create a "locale" column and use that when selecting the FAQ questions from the database. I'm not sure if this would scale very well when you started localizing lots of tables. For static content (e.g, form field labels, static text, icons, etc) you should probably be just fine using file-based resources. If you really wanted to, however, it looks like it wouldn't be super hard to create a custom resource provider implementation that could handle this. Here's some related links: * <http://channel9.msdn.com/forums/Coffeehouse/250892-Localizing-with-a-database-or-resx-files/> * <http://weblogs.asp.net/scottgu/archive/2006/05/30/ASP.NET-2.0-Localization-_2800_Video_2C00_-Whitepaper_2C00_-and-Database-Provider-Support_2900_.aspx> * <http://www.arcencus.nl/Blogs/tabid/105/EntryID/20/Default.aspx> * <http://msdn.microsoft.com/en-us/library/aa905797.aspx> * <http://www.codeproject.com/KB/aspnet/customsqlserverprovider.aspx>
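A minimal sketch of the locale-column idea with an in-memory SQLite database (the table and column names are made up for illustration; the real schema would depend on how the FAQ is stored):

```python
import sqlite3

# FAQ rows share an id but carry a locale column; selection filters on it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE faq (id INTEGER, locale TEXT, question TEXT)")
conn.executemany(
    "INSERT INTO faq VALUES (?, ?, ?)",
    [(1, "en-US", "How do I log in?"),
     (1, "de-DE", "Wie melde ich mich an?")],
)
# Pick the user's locale when selecting the FAQ questions
rows = conn.execute(
    "SELECT question FROM faq WHERE locale = ?", ("de-DE",)
).fetchall()
```

Each translated row reuses the same logical id, so adding a language is just adding rows, not columns.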
For a given item in your data model, split out the description part into a localized table with a locale Id column (LCID). So the table for Product would not in fact contain the product's description or name, but only its hard and fast values (ProductId, EAN, NumberInStock, NextStockData, IsActive, IsPublished), etc. ProductDescription then contains ProductId, Name, Description, LCID.
How do you localize a database driven website
[ "", "c#", ".net", "asp.net", "localization", "" ]
Since multiple inheritance is bad (it makes the source more complicated), C# does not provide such a pattern directly. But sometimes it would be helpful to have this ability. For instance, I'm able to implement the missing multiple inheritance pattern using interfaces and three classes like this: ``` public interface IFirst { void FirstMethod(); } public interface ISecond { void SecondMethod(); } public class First:IFirst { public void FirstMethod() { Console.WriteLine("First"); } } public class Second:ISecond { public void SecondMethod() { Console.WriteLine("Second"); } } public class FirstAndSecond: IFirst, ISecond { First first = new First(); Second second = new Second(); public void FirstMethod() { first.FirstMethod(); } public void SecondMethod() { second.SecondMethod(); } } ``` Every time I add a method to one of the interfaces I need to change the class *FirstAndSecond* as well. Is there a way to inject multiple existing classes into one new class like it is possible in C++? Maybe there is a solution using some kind of code generation? Or it may look like this (imaginary C# syntax): ``` public class FirstAndSecond: IFirst from First, ISecond from Second { } ``` So that there won't be a need to update the class FirstAndSecond when I modify one of the interfaces. --- ### EDIT Maybe it would be better to consider a practical example: You have an existing class (e.g. a text-based TCP client based on ITextTcpClient) which you already use at different locations inside your project. Now you feel the need to create a component of your class to be easily accessible for Windows Forms developers. As far as I know you currently have two ways to do this: 1. Write a new class that inherits from Component and implements the interface of the TextTcpClient class using an instance of the class itself, as shown with FirstAndSecond. 2. Write a new class that inherits from TextTcpClient and somehow implements IComponent (haven't actually tried this yet).
In both cases you need to do work per method and not per class. Since you know that we will need all the methods of TextTcpClient and Component it would be the easiest solution to just combine those two into one class. To avoid conflicts this may be done by code generation where the result could be altered afterwards but typing this by hand is a pure pain in the ass.
> Since multiple inheritance is bad (it makes the source more complicated) C# does not provide such a pattern directly. But sometimes it would be helpful to have this ability. C# and the .NET CLR have not implemented MI because they have not concluded how it would inter-operate between C#, VB.NET and the other languages yet, not because "it would make source more complex". MI is a useful concept; the un-answered questions are ones like: "What do you do when you have multiple common base classes in the different superclasses?" Perl is the only language I've ever worked with where MI works and works well. .NET may well introduce it one day, but not yet; the CLR does already support MI but, as I've said, there are no language constructs for it beyond that yet. Until then you are stuck with Proxy objects and multiple Interfaces instead :(
Consider just using *[composition](http://en.wikipedia.org/wiki/Object_composition)* instead of trying to simulate Multiple Inheritance. You can use Interfaces to define what classes make up the composition, eg: `ISteerable` implies a property of type `SteeringWheel`, `IBrakable` implies a property of type `BrakePedal`, etc. Once you've done that, you could use the [Extension Methods](http://msdn.microsoft.com/en-us/library/bb383977.aspx) feature added to C# 3.0 to further simplify calling methods on those implied properties, eg: ``` public interface ISteerable { SteeringWheel wheel { get; set; } } public interface IBrakable { BrakePedal brake { get; set; } } public class Vehicle : ISteerable, IBrakable { public SteeringWheel wheel { get; set; } public BrakePedal brake { get; set; } public Vehicle() { wheel = new SteeringWheel(); brake = new BrakePedal(); } } public static class SteeringExtensions { public static void SteerLeft(this ISteerable vehicle) { vehicle.wheel.SteerLeft(); } } public static class BrakeExtensions { public static void Stop(this IBrakable vehicle) { vehicle.brake.ApplyUntilStop(); } } public class Main { Vehicle myCar = new Vehicle(); public void main() { myCar.SteerLeft(); myCar.Stop(); } } ```
Multiple Inheritance in C#
[ "", "c#", "interface", "multiple-inheritance", "" ]
I'm told that the template system in C++ is Turing-complete at compile time. This is mentioned in [this post](https://stackoverflow.com/questions/75538/hidden-features-of-c#75627) and also on [wikipedia](http://en.wikipedia.org/wiki/C%2B%2B). Can you provide a nontrivial example of a computation that exploits this property? Is this fact useful in practice?
Example ``` #include <iostream> template <int N> struct Factorial { enum { val = Factorial<N-1>::val * N }; }; template<> struct Factorial<0> { enum { val = 1 }; }; int main() { // Note this value is generated at compile time. // Also note that most compilers have a limit on the depth of the recursion available. std::cout << Factorial<4>::val << "\n"; } ``` That was a little fun, but not very practical. To answer the second part of the question: **Is this fact useful in practice?** Short Answer: Sort of. Long Answer: Yes, but only if you are a template daemon. Turning out template meta-programming that is really useful for others to use (i.e. a library) is really, really tough (though doable). To help, Boost even has [MPL](http://www.boost.org/doc/libs/1_36_0/libs/mpl/doc/index.html) (aka the Meta Programming Library). But try debugging a compiler error in your template code and you will be in for a long hard ride. But a good practical example of it being used for something useful: Scott Meyers has been working on extensions to the C++ language (I use the term loosely) using the templating facilities. You can read about his work here: '[Enforcing Code Features](http://www.artima.com/cppsource/codefeaturesP.html)'
I've done a turing machine in C++11. Features that C++11 adds are not significant for the turing machine indeed. It just provides for arbitrary length rule lists using variadic templates, instead of using perverse macro metaprogramming :). The names for the conditions are used to output a diagram on stdout. i've removed that code to keep the sample short. ``` #include <iostream> template<bool C, typename A, typename B> struct Conditional { typedef A type; }; template<typename A, typename B> struct Conditional<false, A, B> { typedef B type; }; template<typename...> struct ParameterPack; template<bool C, typename = void> struct EnableIf { }; template<typename Type> struct EnableIf<true, Type> { typedef Type type; }; template<typename T> struct Identity { typedef T type; }; // define a type list template<typename...> struct TypeList; template<typename T, typename... TT> struct TypeList<T, TT...> { typedef T type; typedef TypeList<TT...> tail; }; template<> struct TypeList<> { }; template<typename List> struct GetSize; template<typename... Items> struct GetSize<TypeList<Items...>> { enum { value = sizeof...(Items) }; }; template<typename... T> struct ConcatList; template<typename... First, typename... Second, typename... 
Tail> struct ConcatList<TypeList<First...>, TypeList<Second...>, Tail...> { typedef typename ConcatList<TypeList<First..., Second...>, Tail...>::type type; }; template<typename T> struct ConcatList<T> { typedef T type; }; template<typename NewItem, typename List> struct AppendItem; template<typename NewItem, typename...Items> struct AppendItem<NewItem, TypeList<Items...>> { typedef TypeList<Items..., NewItem> type; }; template<typename NewItem, typename List> struct PrependItem; template<typename NewItem, typename...Items> struct PrependItem<NewItem, TypeList<Items...>> { typedef TypeList<NewItem, Items...> type; }; template<typename List, int N, typename = void> struct GetItem { static_assert(N > 0, "index cannot be negative"); static_assert(GetSize<List>::value > 0, "index too high"); typedef typename GetItem<typename List::tail, N-1>::type type; }; template<typename List> struct GetItem<List, 0> { static_assert(GetSize<List>::value > 0, "index too high"); typedef typename List::type type; }; template<typename List, template<typename, typename...> class Matcher, typename... Keys> struct FindItem { static_assert(GetSize<List>::value > 0, "Could not match any item."); typedef typename List::type current_type; typedef typename Conditional<Matcher<current_type, Keys...>::value, Identity<current_type>, // found! FindItem<typename List::tail, Matcher, Keys...>> ::type::type type; }; template<typename List, int I, typename NewItem> struct ReplaceItem { static_assert(I > 0, "index cannot be negative"); static_assert(GetSize<List>::value > 0, "index too high"); typedef typename PrependItem<typename List::type, typename ReplaceItem<typename List::tail, I-1, NewItem>::type> ::type type; }; template<typename NewItem, typename Type, typename... 
T> struct ReplaceItem<TypeList<Type, T...>, 0, NewItem> { typedef TypeList<NewItem, T...> type; }; enum Direction { Left = -1, Right = 1 }; template<typename OldState, typename Input, typename NewState, typename Output, Direction Move> struct Rule { typedef OldState old_state; typedef Input input; typedef NewState new_state; typedef Output output; static Direction const direction = Move; }; template<typename A, typename B> struct IsSame { enum { value = false }; }; template<typename A> struct IsSame<A, A> { enum { value = true }; }; template<typename Input, typename State, int Position> struct Configuration { typedef Input input; typedef State state; enum { position = Position }; }; template<int A, int B> struct Max { enum { value = A > B ? A : B }; }; template<int n> struct State { enum { value = n }; static char const * name; }; template<int n> char const* State<n>::name = "unnamed"; struct QAccept { enum { value = -1 }; static char const* name; }; struct QReject { enum { value = -2 }; static char const* name; }; #define DEF_STATE(ID, NAME) \ typedef State<ID> NAME ; \ NAME :: name = #NAME ; template<int n> struct Input { enum { value = n }; static char const * name; template<int... 
I> struct Generate { typedef TypeList<Input<I>...> type; }; }; template<int n> char const* Input<n>::name = "unnamed"; typedef Input<-1> InputBlank; #define DEF_INPUT(ID, NAME) \ typedef Input<ID> NAME ; \ NAME :: name = #NAME ; template<typename Config, typename Transitions, typename = void> struct Controller { typedef Config config; enum { position = config::position }; typedef typename Conditional< static_cast<int>(GetSize<typename config::input>::value) <= static_cast<int>(position), AppendItem<InputBlank, typename config::input>, Identity<typename config::input>>::type::type input; typedef typename config::state state; typedef typename GetItem<input, position>::type cell; template<typename Item, typename State, typename Cell> struct Matcher { typedef typename Item::old_state checking_state; typedef typename Item::input checking_input; enum { value = IsSame<State, checking_state>::value && IsSame<Cell, checking_input>::value }; }; typedef typename FindItem<Transitions, Matcher, state, cell>::type rule; typedef typename ReplaceItem<input, position, typename rule::output>::type new_input; typedef typename rule::new_state new_state; typedef Configuration<new_input, new_state, Max<position + rule::direction, 0>::value> new_config; typedef Controller<new_config, Transitions> next_step; typedef typename next_step::end_config end_config; typedef typename next_step::end_input end_input; typedef typename next_step::end_state end_state; enum { end_position = next_step::position }; }; template<typename Input, typename State, int Position, typename Transitions> struct Controller<Configuration<Input, State, Position>, Transitions, typename EnableIf<IsSame<State, QAccept>::value || IsSame<State, QReject>::value>::type> { typedef Configuration<Input, State, Position> config; enum { position = config::position }; typedef typename Conditional< static_cast<int>(GetSize<typename config::input>::value) <= static_cast<int>(position), AppendItem<InputBlank, typename config::input>, 
Identity<typename config::input>>::type::type input; typedef typename config::state state; typedef config end_config; typedef input end_input; typedef state end_state; enum { end_position = position }; }; template<typename Input, typename Transitions, typename StartState> struct TuringMachine { typedef Input input; typedef Transitions transitions; typedef StartState start_state; typedef Controller<Configuration<Input, StartState, 0>, Transitions> controller; typedef typename controller::end_config end_config; typedef typename controller::end_input end_input; typedef typename controller::end_state end_state; enum { end_position = controller::end_position }; }; #include <ostream> template<> char const* Input<-1>::name = "_"; char const* QAccept::name = "qaccept"; char const* QReject::name = "qreject"; int main() { DEF_INPUT(1, x); DEF_INPUT(2, x_mark); DEF_INPUT(3, split); DEF_STATE(0, start); DEF_STATE(1, find_blank); DEF_STATE(2, go_back); /* syntax: State, Input, NewState, Output, Move */ typedef TypeList< Rule<start, x, find_blank, x_mark, Right>, Rule<find_blank, x, find_blank, x, Right>, Rule<find_blank, split, find_blank, split, Right>, Rule<find_blank, InputBlank, go_back, x, Left>, Rule<go_back, x, go_back, x, Left>, Rule<go_back, split, go_back, split, Left>, Rule<go_back, x_mark, start, x, Right>, Rule<start, split, QAccept, split, Left>> rules; /* syntax: initial input, rules, start state */ typedef TuringMachine<TypeList<x, x, x, x, split>, rules, start> double_it; static_assert(IsSame<double_it::end_input, TypeList<x, x, x, x, split, x, x, x, x>>::value, "Hmm... This is borky!"); } ```
C++ templates Turing-complete?
[ "", "c++", "templates", "template-meta-programming", "turing-complete", "" ]
I have created some extra functionality on my Linq-to-SQL classes to make things easier as I develop my applications. For example, I have defined a property that retrieves active contracts from a list of contracts. However, if I try to use this property in a lambda expression or in general in a query, it either throws an exception that there is no SQL statement matching that property, or it generates one query per item (= a lot of roundtrips to the server). The queries themselves are not overly complex, e.g.: ``` var activeContracts = customer.Contracts.Where(w => w.ContractEndDate == null); ``` Whereas I would like it to read as: ``` var activeContracts = customer.ActiveContracts; ``` The main reason for me doing this is that it will minimize logical errors on my part, and if I in the future want to change what defines an active contract I don't have to redo a lot of code. Is there a way to specify on a property what SQL it should generate? Or is there a way to make sure it is usable in a query like below? ``` var singleContractCustomers = db.Customers.Where(w => w.ActiveContracts.Count() == 1); ```
When accessed individually, I suspect that having a query that returns IQueryable would work - however, I expect that when this is part of a larger Expression, the expression interpreter will complain (which seems like what you are describing). However, I suspect that you might be able to break it down a bit. Try adding (to customer): ``` public static Expression<Func<Customer, bool>> HasActiveContract { get { return cust => cust.Contracts.Count() == 1; } } ``` Then you should be able to use: ``` var filtered = db.Customers.Where(Customer.HasActiveContract); ``` Obviously it is hard to run it (from here) to see what TSQL it comes up with, but I'd be surprised if that does roundtrips; I would expect that to do the COUNT() in the TSQL. As a topmost query, you should also be able to wrap this: ``` public IQueryable<Customer> CustomersWithActiveContract { get { return Customers.Where(Customer.HasActiveContract); } } ``` Does any of that work?
That worked like a charm. The SQL statment generated by **CustomersWithActiveContracts** looked fine to me. ``` {SELECT [t0].[CustomerID], [t0].[cFirstName], [t0].[cLastName] FROM [dbo].[Customers] AS [t0] WHERE (( SELECT COUNT(*) FROM [dbo].[Contracts] AS [t1] WHERE (([t1].[ContractEndDate] > @p0) OR ([t1].[ContractEndDate] IS NULL)) AND ([t1].[cId] = [t0].[cId]) )) > @p1 } ``` It should also mean that it's possible to build on this query without it generating more trips to the database.
Mapping computed properties in Linq-to-SQL to actual SQL statements
[ "", "c#", ".net", "linq", "linq-to-sql", "" ]
Is it possible, in Java, to enforce that a class have a specific set of subclasses and no others? For example: ``` public abstract class A {} public final class B extends A {} public final class C extends A {} public final class D extends A {} ``` Can I somehow enforce that no other subclasses of A can ever be created?
Church encoding to the rescue: ``` public abstract class A { public abstract <R> R fold(R b, R c, R d); } ``` There are only three implementations possible: ``` public final class B extends A { public <R> R fold(R b, R c, R d) { return b; } } public final class C extends A { public <R> R fold(R b, R c, R d) { return c; } } public final class D extends A { public <R> R fold(R b, R c, R d) { return d; } } ```
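For illustration only, the same Church-encoding idea sketched in Python. Python cannot enforce finality or a closed set of subclasses the way Java's `final` and package-private constructors can, so this shows only the fold-based dispatch, not the enforcement:

```python
# Each variant's fold() selects exactly one of the three arguments, so
# "pattern matching" over the closed set needs no isinstance checks.
class A:
    def fold(self, b, c, d):
        raise NotImplementedError

class B(A):
    def fold(self, b, c, d):
        return b

class C(A):
    def fold(self, b, c, d):
        return c

class D(A):
    def fold(self, b, c, d):
        return d

def describe(a):
    # One call site handles every possible variant
    return a.fold("it's a B", "it's a C", "it's a D")
```

The point of the encoding is that a consumer never branches on the concrete type; it just supplies one result per case.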
Give class **`A`** a constructor with package-level accessibility (and no other constructors). Thanks, Dave L., for the bit about no other constructors.
Is there a way to implement algebraic types in Java?
[ "", "java", "class", "abstract-class", "final", "adts", "" ]
I refactored a slow section of an application we inherited from another company to use an inner join instead of a subquery like: ``` WHERE id IN (SELECT id FROM ...) ``` **The refactored query runs about 100x faster.** (~50 seconds to ~0.3) I expected an improvement, but can anyone explain why it was so drastic? The columns used in the where clause were all indexed. Does SQL execute the query in the where clause once per row or something? **Update** - Explain results: The difference is in the second part of the "where id in ()" query - ``` 2 DEPENDENT SUBQUERY submission_tags ref st_tag_id st_tag_id 4 const 2966 Using where ``` vs 1 indexed row with the join: ``` SIMPLE s eq_ref PRIMARY PRIMARY 4 newsladder_production.st.submission_id 1 Using index ```
A "correlated subquery" (i.e., one in which the where condition depends on values obtained from the rows of the containing query) will execute once for each row. A non-correlated subquery (one in which the where condition is independent of the containing query) will execute once at the beginning. The SQL engine makes this distinction automatically. But, yeah, explain-plan will give you the dirty details.
You are running the subquery **once for every row** whereas the join happens on indexes.
Subqueries vs joins
[ "", "sql", "mysql", "performance", "database-design", "join", "" ]
I'm using Java to access an Alfresco content server via its web service API to import some content into it. The content should have some NamedValue properties set to a UTF-8 (Cyrillic) string. I keep getting this SAX parser exception: ``` org.xml.sax.SAXParseException: An invalid XML character (Unicode: 0x1b) was found in the element content of the document. ``` The code looks something like this: ``` NamedValue[] namedValueProperties = new NamedValue[2]; namedValueProperties[0] = Utils.createNamedValue(Constants.PROP_NAME, name ); namedValueProperties[1] = Utils.createNamedValue("{my.custom.model}myProperty", cyrillicString); CMLCreate create = new CMLCreate("1", parentReference, null, null, null, documentType, namedValueProperties); CML cml = new CML(); cml.setCreate(new CMLCreate[]{create}); UpdateResult[] results = null; try { results = WebServiceFactory.getRepositoryService().update(cml); } catch (...) Here comes the "org.xml.sax.SAXParseException" } ``` Does anyone know how to solve this problem?
The problem was that the `alfresco-web-service-client.jar` library I used was from the 2.9B distribution (I am hitting a 2.9B community content server), and the dependency libs `bcprov-jdk15-136.jar` and `xmlsec-1.4.0.jar` were not adequate (they remained from the old 2.1 version). I should have used `bcprov-jdk15-137.jar` and `xmlsec-1.4.1.jar`, which are deployed along with the 2.9B distribution.
The easiest way to get around it is I think to escape cyrillicString, for instance with [escapeXml](http://commons.apache.org/lang/api-release/org/apache/commons/lang/StringEscapeUtils.html#escapeXml(java.lang.String)) from Jakarta Commons, or by converting the whole string to XML entities if it's not enough. But in the long run, Alfresco should be fixed, so opening a ticket may be a good choice too.
Java, Alfresco Web Service API and Unicode NamedValues
[ "", "java", "web-services", "unicode", "alfresco", "" ]
In the following code, both `amp_swap()` and `star_swap()` seems to be doing the same thing. So why will someone prefer to use one over the other? Which one is the preferred notation and why? Or is it just a matter of taste? ``` #include <iostream> using namespace std; void amp_swap(int &x, int &y) { int temp = x; x = y; y = temp; } void star_swap(int *x, int *y) { int temp = *x; *x = *y; *y = temp; } int main() { int a = 10, b = 20; cout << "Using amp_swap(): " << endl; amp_swap(a, b); cout << "a = " << a << ", b = " << b << endl; cout << "Using star_swap(): " << endl; star_swap(&a, &b); cout << "a = " << a << ", b = " << b << endl; return 0; } ``` Thanks for your time! --- See Also ## [**Difference between pointer variable and reference variable in C++**](https://stackoverflow.com/questions/57483)
One is using a reference, one is using a pointer. I would use the one with references, because you can't pass a NULL reference (whereas you can pass a NULL pointer). So if you do: ``` star_swap(NULL, NULL); ``` Your application will crash. Whereas if you try: ``` amp_swap(NULL, NULL); // This won't compile ``` Always go with references unless you've got a good reason to use a pointer. See this link: <http://www.google.co.uk/search?q=references+vs+pointers>
An argument can also be made for preferring the pointer version over the reference version. In fact, from a documentation point of view it's superior because the caller is made aware of the fact that the input arguments are going to be modified; compare these two calls: ``` swap(a, b); swap(&a, &b); // This cries “will modify arguments” loud and clear. ``` Of course, in the case of the `swap` function this point is moot; everybody knows its arguments will be modified. There are other cases where this is less obvious. C# has the `ref` keyword for exactly this reason. In C#, the above would have to look like this: ``` swap(ref a, ref b); ``` Of course, there are other ways to document this behaviour. Using pointers is one valid technique of doing so.
What's the Difference Between func(int &param) and func(int *param)?
[ "", "c++", "pointers", "reference", "" ]
For a long time ago, I have thought that, in java, reversing the domain you own for package naming is silly and awkward. Which do you use for package naming in your projects?
Once you understand why the convention exists, it shouldn't feel silly or awkward in the least. This scheme does two important things: * All of your code is contained in packages that no one else will collide with. You own your domain name, so it's isolated. If we didn't have this convention, many companies would have a "utilities" package, containing classes like "StringUtil", "MessageUtil" etc. These would quickly collide if you tried to use anyone else's code. * The "reverse" nature of it makes class-directory layout very narrow at the top level. If you expand a jar, you'll see "com", "org", "net", etc dirs, then under each of those the organization/company name. (added in 2021) This is even more important nowadays when this type of package naming is used for third-party libraries which are pulled in transitively during builds and could easily conflict if the names were not unique. If everyone adheres to the same convention, there will be no accidental collisions. (added in 2021) The same naming convention can be used for application ids on an app store to ensure uniqueness as well. We usually don't expand jars, but in early java development, this was important because people used expanded dir structures for applets. However, this is nice now as source code dir structures have a very "top-down" feel. You go from the most general (com, org, net...) to less general (company name) to more specific (project/product/lib name).
I actually think the reverse domain name package naming is one of the more brilliant conventions in Java.
Do you really use your reverse domain for package naming in java?
[ "", "java", "namespaces", "" ]
*When using threads I sometimes visualise them as weaving together 3 or more dimensional interconnections between Objects in a Spatial context. This isn't a general use case scenario, but for what I do it is a useful way to think about it.* ### Are there any APIs which you use which aid threading? ### Have you used threads in a manner which doesn't conceptualise as thread being a process?
**Are there any APIs which you use which aid threading?** You mean apart from `java.util.concurrent`? [FunctionalJava](http://www.functionaljava.org/) has some constructs which aid in concurrent programming, as described in a multipart tutorial that starts [here](http://apocalisp.wordpress.com/2008/06/18/parallel-strategies-and-the-callable-monad/). **Have you used threads in a manner which doesn't conceptualise as thread being a process?** Yes, to the extent that threads don't conceptualise at all. Take an asynchronous task-runner for instance. It uses threads under the covers but I don't see them and I don't care about them. They are fully managed by the task-runner. Under the covers, it is all just threads, but when we stop caring about the individual thread and just think about them as a number of slots where you can somehow put code in and have it run for a period of time, then that is when we start to reach for a higher level of abstraction. Agents/Actors is a common way to do this. An Actor is like a thread that has a lump of state, and then you can send it some code and say "do this to your state when you have time" or something along those lines.
## First of all The usual disclaimer: concurrent programming, in any language, using any abstraction level, is **hard** and **complicated** and has many risks. Take into consideration: * Concurrent programming complicates any application by magnitude * Unit-testing critical sections is hard, and sometimes impossible * Reproducing bugs originating in concurrent code is **very hard** and much dependent on architecture, OS flavor, version, etc... ## Java Concurrent APIs Java has gone a long way in making concurrent programming as easy as possible for developers. For most cases, you will see that `java.util.concurrent` has most of the abstractions you will need: * **`Runnable`** interface and **`Thread`** object you can extend. Just throw in your code and you have a thread ready to run * A nice set of **`Executors`**: constant pool, dynamic pool, scheduled, or whatever. Just throw a `Runnable` at it and it does the rest. * **`Semaphore`s** and **locks** of all sorts relieve you of needing to implement common locking techniques. * A built-in `wait()` and `notify()` API for all objects. ## Uses The only thing left for you, as the software engineer, is to ensure you are writing **correct** code. Meaning you should be aware of the dangerous situations you might be exposing yourself to: * **Deadlock** - a situation in which two or more threads are waiting on unordered resources, rendering an infinite waiting loop. * **Livelock** - two or more threads which politely try to give way to the other on a shared resource but end up not taking it (consider two people in a corridor walking up to each other and constantly moving together from side to side) * **Starvation** - a single thread taking up most or all of a single shared resource, thus depriving other threads from access to it. 
# Main Point (or, when to use) **Use threads only when concurrency will directly improve your applications behavior.** If you are waiting on an IO/network/hardware-bound resource, **DO** spawn a thread on it so you can continue doing other stuff. If you are just trying to elegantly split CPU-bound computations, **DO NOT** use threads. You just might end up worsening your performance. If you do use threads, make sure you have thoroughly contemplated the risks, and triple-checked you did not miss any exceptional situations. ## Useful (Online) Resources Fastest way to get into things is to do the [Sun concurrency tutorial](http://java.sun.com/docs/books/tutorial/essential/concurrency/). Other than that, get a good book. Good luck :)
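To illustrate the "spawn a thread for IO/network-bound waiting" advice above, here is a hedged sketch using `java.util.concurrent` (the `slowFetch` method and its return value are made-up stand-ins for a blocking call):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OffloadDemo {
    // Hypothetical stand-in for a blocking IO/network call.
    static String slowFetch() {
        try {
            Thread.sleep(100); // simulate waiting on a slow resource
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "payload";
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Offload the blocking call; this thread stays free to do other work.
        Future<String> result = pool.submit(OffloadDemo::slowFetch);

        System.out.println("main thread keeps working");
        System.out.println(result.get()); // blocks only when the value is needed
        pool.shutdown();
    }
}
```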
Java: Threading Techniques & Concepts
[ "", "java", "multithreading", "visualization", "" ]
I am trying to animate a change in backgroundColor using jQuery on mouseover. I have checked some examples and I seem to have it right; it works with other properties like fontSize, but with backgroundColor I get an "Invalid Property" JS error. The element I am working with is a div. ``` $(".usercontent").mouseover(function() { $(this).animate({ backgroundColor: "olive" }, "slow"); }); ``` Any ideas?
The color plugin is only 4kb so much cheaper than the UI library. Of course you'll want to use a [decent version](http://github.com/jquery/jquery-color) of the plugin and not [some buggy old thing](http://plugins.jquery.com/project/color) which doesn't handle Safari and crashes when the transitions are too fast. Since a minified version isn't supplied you might like test various compressors and [make your own](http://compressorrater.thruhere.net/) min version. YUI gets the best compression in this case needing only 2317 bytes and since it is so small - here it is: ``` (function (d) { d.each(["backgroundColor", "borderBottomColor", "borderLeftColor", "borderRightColor", "borderTopColor", "color", "outlineColor"], function (f, e) { d.fx.step[e] = function (g) { if (!g.colorInit) { g.start = c(g.elem, e); g.end = b(g.end); g.colorInit = true } g.elem.style[e] = "rgb(" + [Math.max(Math.min(parseInt((g.pos * (g.end[0] - g.start[0])) + g.start[0]), 255), 0), Math.max(Math.min(parseInt((g.pos * (g.end[1] - g.start[1])) + g.start[1]), 255), 0), Math.max(Math.min(parseInt((g.pos * (g.end[2] - g.start[2])) + g.start[2]), 255), 0)].join(",") + ")" } }); function b(f) { var e; if (f && f.constructor == Array && f.length == 3) { return f } if (e = /rgb\(\s*([0-9]{1,3})\s*,\s*([0-9]{1,3})\s*,\s*([0-9]{1,3})\s*\)/.exec(f)) { return [parseInt(e[1]), parseInt(e[2]), parseInt(e[3])] } if (e = /rgb\(\s*([0-9]+(?:\.[0-9]+)?)\%\s*,\s*([0-9]+(?:\.[0-9]+)?)\%\s*,\s*([0-9]+(?:\.[0-9]+)?)\%\s*\)/.exec(f)) { return [parseFloat(e[1]) * 2.55, parseFloat(e[2]) * 2.55, parseFloat(e[3]) * 2.55] } if (e = /#([a-fA-F0-9]{2})([a-fA-F0-9]{2})([a-fA-F0-9]{2})/.exec(f)) { return [parseInt(e[1], 16), parseInt(e[2], 16), parseInt(e[3], 16)] } if (e = /#([a-fA-F0-9])([a-fA-F0-9])([a-fA-F0-9])/.exec(f)) { return [parseInt(e[1] + e[1], 16), parseInt(e[2] + e[2], 16), parseInt(e[3] + e[3], 16)] } if (e = /rgba\(0, 0, 0, 0\)/.exec(f)) { return a.transparent } return a[d.trim(f).toLowerCase()] } function c(g, 
e) { var f; do { f = d.css(g, e); if (f != "" && f != "transparent" || d.nodeName(g, "body")) { break } e = "backgroundColor" } while (g = g.parentNode); return b(f) } var a = { aqua: [0, 255, 255], azure: [240, 255, 255], beige: [245, 245, 220], black: [0, 0, 0], blue: [0, 0, 255], brown: [165, 42, 42], cyan: [0, 255, 255], darkblue: [0, 0, 139], darkcyan: [0, 139, 139], darkgrey: [169, 169, 169], darkgreen: [0, 100, 0], darkkhaki: [189, 183, 107], darkmagenta: [139, 0, 139], darkolivegreen: [85, 107, 47], darkorange: [255, 140, 0], darkorchid: [153, 50, 204], darkred: [139, 0, 0], darksalmon: [233, 150, 122], darkviolet: [148, 0, 211], fuchsia: [255, 0, 255], gold: [255, 215, 0], green: [0, 128, 0], indigo: [75, 0, 130], khaki: [240, 230, 140], lightblue: [173, 216, 230], lightcyan: [224, 255, 255], lightgreen: [144, 238, 144], lightgrey: [211, 211, 211], lightpink: [255, 182, 193], lightyellow: [255, 255, 224], lime: [0, 255, 0], magenta: [255, 0, 255], maroon: [128, 0, 0], navy: [0, 0, 128], olive: [128, 128, 0], orange: [255, 165, 0], pink: [255, 192, 203], purple: [128, 0, 128], violet: [128, 0, 128], red: [255, 0, 0], silver: [192, 192, 192], white: [255, 255, 255], yellow: [255, 255, 0], transparent: [255, 255, 255] } })(jQuery); ```
I had the same problem and fixed it by including jQuery UI. Here is the complete script : ``` <!-- include Google's AJAX API loader --> <script src="http://www.google.com/jsapi"></script> <!-- load JQuery and UI from Google (need to use UI to animate colors) --> <script type="text/javascript"> google.load("jqueryui", "1.5.2"); </script> <script type="text/javascript"> $(document).ready(function() { $('#menu ul li.item').hover( function() { $(this).stop().animate({backgroundColor:'#4E1402'}, 300); }, function () { $(this).stop().animate({backgroundColor:'#943D20'}, 100); }); }); </script> ```
jQuery animate backgroundColor
[ "", "javascript", "jquery", "colors", "jquery-animate", "" ]
I am looking for a way to convert a long string (from a dump), that represents hex values into a byte array. I couldn't have phrased it better than the person that posted [the same question here](http://www.experts-exchange.com/Programming/Programming_Languages/Java/Q_21062554.html). But to keep it original, I'll phrase it my own way: suppose I have a string `"00A0BF"` that I would like interpreted as the ``` byte[] {0x00,0xA0,0xBf} ``` what should I do? I am a Java novice and ended up using `BigInteger` and watching out for leading hex zeros. But I think it is ugly and I am sure I am missing something simple.
Update (2021) - **Java 17** now includes [`java.util.HexFormat`](https://docs.oracle.com/en/java/javase/17/docs/api/java.base/java/util/HexFormat.html) (only took 25 years): `HexFormat.of().parseHex(s)` --- For older versions of Java: Here's a solution that I think is better than any posted so far: ``` /* s must be an even-length string. */ public static byte[] hexStringToByteArray(String s) { int len = s.length(); byte[] data = new byte[len / 2]; for (int i = 0; i < len; i += 2) { data[i / 2] = (byte) ((Character.digit(s.charAt(i), 16) << 4) + Character.digit(s.charAt(i+1), 16)); } return data; } ``` Reasons why it is an improvement: * Safe with leading zeros (unlike BigInteger) and with negative byte values (unlike Byte.parseByte) * Doesn't convert the String into a `char[]`, or create StringBuilder and String objects for every single byte. * No library dependencies that may not be available Feel free to add argument checking via `assert` or exceptions if the argument is not known to be safe.
**One-liners:** ``` import javax.xml.bind.DatatypeConverter; public static String toHexString(byte[] array) { return DatatypeConverter.printHexBinary(array); } public static byte[] toByteArray(String s) { return DatatypeConverter.parseHexBinary(s); } ``` **Warnings**: * in Java 9 Jigsaw this is no longer part of the (default) java.se root set so it will result in a ClassNotFoundException unless you specify --add-modules java.se.ee (thanks to @`eckes`) * Not available on Android (thanks to `Fabian` for noting that), but you can just [take the source code](https://stackoverflow.com/a/11139098/173677) if your system lacks `javax.xml` for some reason. Thanks to @`Bert Regelink` for extracting the source.
Convert a string representation of a hex dump to a byte array using Java?
[ "", "java", "byte", "hex", "dump", "" ]
So I build an array of various dates. Birthdays, anniversaries, and holidays. I'd like to order the array by which one is happening next, essentially sort October to September (wrapping to next year) so if my array is ``` $a = ([0]=>"1980-04-14", [1]=>"2007-06-08", [2]=>"2008-12-25", [3]=>"1978-11-03") ``` I'd like to sort it so it is arranged ``` $a = ([0]=>"1978-11-03", [1]=>"2008-12-25", [2]=>"1980-04-14", [3]=>"2007-06-08") ``` because the november 'event' is the one that will happen next (based on it being october right now). I'm trying usort where my cmp function is ``` function cmp($a, $b) { $a_tmp = split("-", $a); $b_tmp = split("-", $b); return strcmp($a_tmp[1], $b_tmp[1]); } ``` I am not sure how to modify this to get my desired effect.
``` function relative_year_day($date) { $value = date('z', strtotime($date)) - date('z'); if ($value < 0) $value += 365; return $value; } function cmp($a, $b) { $aValue = relative_year_day($a); $bValue = relative_year_day($b); if ($aValue == $bValue) return 0; return ($aValue < $bValue) ? -1 : 1; } $a = array("1980-04-14", "2007-06-08", "2008-12-25", "1978-11-03"); usort($a, "cmp"); ```
I would be tempted to establish the original year of the event, and then add enough whole years to it to ensure that the value is greater than your reference date (normally today's date). Or, possibly, greater than or equal to the reference date. You can then sort in simple date order. **Edited to add**: I'm not fluent enough in PHP to give an answer in that, but here's a Perl solution. ``` #!/bin/perl -w # Sort sequence of dates by next occurrence of anniversary. # Today's "birthdays" count as low (will appear first in sequence) use strict; my $refdate = "2008-10-05"; my @list = ( "1980-04-14", "2007-06-08", "2008-12-25", "1978-11-03", "2008-10-04", "2008-10-05", "2008-10-06", "2008-02-29" ); sub date_on_or_after { my($actdate, $refdate) = @_; my($answer) = $actdate; if ($actdate lt $refdate) # String compare OK with ISO8601 format { my($act_yy, $act_mm, $act_dd) = split /-/, $actdate; my($ref_yy, $ref_mm, $ref_dd) = split /-/, $refdate; $ref_yy++ if ($act_mm < $ref_mm || ($act_mm == $ref_mm && $act_dd < $ref_dd)); $answer = "$ref_yy-$act_mm-$act_dd"; } return $answer; } sub anniversary_compare { my $r1 = date_on_or_after($a, $refdate); my $r2 = date_on_or_after($b, $refdate); return $r1 cmp $r2; } my @result = sort anniversary_compare @list; print "Before:\n"; print "* $_\n" foreach (@list); print "Reference date: $refdate\n"; print "After:\n"; print "* $_\n" foreach (@result); ``` Clearly, this is not dreadfully efficient - to make it efficient, you'd calculate the date\_on\_or\_after() value once, and then sort on those values. Perl's comparison is slightly peculiar - the variables $a and $b are magic, and appear as if out of nowhere. 
When run, the script produces: ``` Before: * 1980-04-14 * 2007-06-08 * 2008-12-25 * 1978-11-03 * 2008-10-04 * 2008-10-05 * 2008-10-06 * 2008-02-29 Reference date: 2008-10-05 After: * 2008-10-05 * 2008-10-06 * 1978-11-03 * 2008-12-25 * 2008-02-29 * 1980-04-14 * 2007-06-08 * 2008-10-04 ``` Note that it largely ducks the issue of what happens with the 29th of February, because it 'works' to do so. Basically, it will generate the 'date' 2009-02-29, which compares correctly in sequence. The anniversary for 2000-02-28 would be listed before the anniversary for 2008-02-29 (if 2000-02-28 were included in the data).
Order dates by upcoming
[ "", "php", "arrays", "sorting", "" ]
Using the following query and results, I'm looking for the most recent entry where the ChargeId and ChargeType are unique. ``` select chargeId, chargeType, serviceMonth from invoice CHARGEID CHARGETYPE SERVICEMONTH 1 101 R 8/1/2008 2 161 N 2/1/2008 3 101 R 2/1/2008 4 101 R 3/1/2008 5 101 R 4/1/2008 6 101 R 5/1/2008 7 101 R 6/1/2008 8 101 R 7/1/2008 ``` Desired: ``` CHARGEID CHARGETYPE SERVICEMONTH 1 101 R 8/1/2008 2 161 N 2/1/2008 ```
You can use a **GROUP BY** to group items by type and id. Then you can use the **MAX()** Aggregate function to get the most recent service month. The below returns a result set with ChargeId, ChargeType, and MostRecentServiceMonth ``` SELECT CHARGEID, CHARGETYPE, MAX(SERVICEMONTH) AS "MostRecentServiceMonth" FROM INVOICE GROUP BY CHARGEID, CHARGETYPE ```
So this isn't what the requester was asking for but it is the answer to "SQL selecting rows by most recent date". Modified from <http://wiki.lessthandot.com/index.php/Returning_The_Maximum_Value_For_A_Row> ``` SELECT t.chargeId, t.chargeType, t.serviceMonth FROM( SELECT chargeId,MAX(serviceMonth) AS serviceMonth FROM invoice GROUP BY chargeId) x JOIN invoice t ON x.chargeId =t.chargeId AND x.serviceMonth = t.serviceMonth ```
SQL selecting rows by most recent date with two unique columns
[ "", "sql", "oracle", "" ]
I would like to remove the domain/computer information from a login id in C#. So, I would like to make either "Domain\me" or "Domain\\me" just "me". I could always check for the existence of either, and use that as the index to start the substring...but I am looking for something more elegant and compact. Worst-case scenario: ``` int startIndex = 0; int indexOfSlashesSingle = ResourceLoginName.IndexOf("\\"); int indexOfSlashesDouble = ResourceLoginName.IndexOf("\\\\"); if (indexOfSlashesSingle != -1) startIndex = indexOfSlashesSingle; else startIndex = indexOfSlashesDouble; string shortName = ResourceLoginName.Substring(startIndex, ResourceLoginName.Length-1); ```
when all you have is a hammer, everything looks like a nail..... use a razor blade ---- ``` using System; using System.Text.RegularExpressions; public class MyClass { public static void Main() { string domainUser = Regex.Replace("domain\\user",".*\\\\(.*)", "$1",RegexOptions.None); Console.WriteLine(domainUser); } } ```
You could abuse the Path class, thusly: ``` string shortName = System.IO.Path.GetFileNameWithoutExtension(ResourceLoginName); ```
Remove domain information from login id in C#
[ "", "c#", "string", "indexof", "" ]
When writing async method implementations using the BeginInvoke/EndInvoke pattern the code might look something like the following (and to save you guessing this is an async wrapper around a cache): ``` IAsyncResult BeginPut(string key, object value) { Action<string, object> put = this.cache.Put; return put.BeginInvoke(key, value, null, null); } void EndPut(IAsyncResult asyncResult) { var put = (Action<string, object>)((AsyncResult)asyncResult).AsyncDelegate; put.EndInvoke(asyncResult); } ``` This works perfectly well because it's known what the type of delegate is, so it can be cast. However it starts to get messy when you have two `Put` methods, because although the method returns void you seemingly have to cast it to a strongly typed delegate to end the invocation, e.g. ``` IAsyncResult BeginPut(string key, object value) { Action<string, object> put = this.cache.Put; return put.BeginInvoke(key, value, null, null); } IAsyncResult BeginPut(string region, string key, object value) { Action<string, string, object> put = this.cache.Put; return put.BeginInvoke(region, key, value, null, null); } void EndPut(IAsyncResult asyncResult) { var put = ((AsyncResult)asyncResult).AsyncDelegate; var put1 = put as Action<string, object>; if (put1 != null) { put1.EndInvoke(asyncResult); return; } var put2 = put as Action<string, string, object>; if (put2 != null) { put2.EndInvoke(asyncResult); return; } throw new ArgumentException("Invalid async result", "asyncResult"); } ``` I'm hoping there is a cleaner way to do this, because the only thing I care about the delegate is the return type (in this case void) and not the arguments that were supplied to it. But I've racked my brains and asked others in the office, and nobody can think of the answer. 
I know one solution is to write a custom `IAsyncResult`, but that's such a difficult task with the potential threading issues around things like lazy instantiation of the `WaitHandle` that I'd rather have this slightly hacky looking code than go down that route. Any ideas on how to end the invocation without a cascading set of `is` checks?
I was wrong, there is a cleaner way. You create `Action( IAsyncResult )` delegates for the specific `EndInvoke()` method in the same context where you already know the specific type of the delegate, passing it as the AsyncState. I'm passing `EndPut()` as the callback for convenience. ``` IAsyncResult BeginPut( string key, object value ) { Action<string, object> put = this.Put; return put.BeginInvoke( key, value, EndPut, new Action<IAsyncResult>( put.EndInvoke ) ); } IAsyncResult BeginPut( string region, string key, object value ) { Action<string, string, object> put = this.Put; return put.BeginInvoke( region, key, value, EndPut, new Action<IAsyncResult>( put.EndInvoke ) ); } ``` And then you finish it off. ``` void EndPut( IAsyncResult asyncResult ) { var del = asyncResult.AsyncState as Action<IAsyncResult>; del( asyncResult ); } ```
Why not avoid the problem by just going back to the more general overload: ``` IAsyncResult BeginPut(string key, object value) { return this.BeginPut(null, key, value); } IAsyncResult BeginPut(string region, string key, object value) { Action<string, string, object> put = this.Put; return put.BeginInvoke(region, key, value, null, null); } void EndPut(IAsyncResult asyncResult) { var put = (Action<string, string, object>)((AsyncResult)asyncResult).AsyncDelegate; put.EndInvoke(asyncResult); } ```
Ending asynchronous delegate invocation with partial type information
[ "", "c#", "asynchronous", "delegates", "" ]
What's the best and most efficient way to count keywords in JavaScript? Basically, I'd like to take a string and get the top N words or phrases that occur in the string, mainly for the use of suggesting tags. I'm looking more for conceptual hints or links to real-life examples than actual code, but I certainly wouldn't mind if you'd like to share code as well. If there are particular functions that would help, I'd also appreciate that. Right now I'm thinking of using the split() function to separate the string by spaces and then cleaning punctuation out with a regular expression. I'd also want it to be case-insensitive.
Cut, paste + execute demo: ``` var text = "Text to be examined to determine which n words are used the most"; // Find 'em! var wordRegExp = /\w+(?:'\w{1,2})?/g; var words = {}; var matches; while ((matches = wordRegExp.exec(text)) != null) { var word = matches[0].toLowerCase(); if (typeof words[word] == "undefined") { words[word] = 1; } else { words[word]++; } } // Sort 'em! var wordList = []; for (var word in words) { if (words.hasOwnProperty(word)) { wordList.push([word, words[word]]); } } wordList.sort(function(a, b) { return b[1] - a[1]; }); // Come back any time, straaanger! var n = 10; var message = ["The top " + n + " words are:"]; for (var i = 0; i < n; i++) { message.push(wordList[i][0] + " - " + wordList[i][1] + " occurance" + (wordList[i][1] == 1 ? "" : "s")); } alert(message.join("\n")); ``` Reusable function: ``` function getTopNWords(text, n) { var wordRegExp = /\w+(?:'\w{1,2})?/g; var words = {}; var matches; while ((matches = wordRegExp.exec(text)) != null) { var word = matches[0].toLowerCase(); if (typeof words[word] == "undefined") { words[word] = 1; } else { words[word]++; } } var wordList = []; for (var word in words) { if (words.hasOwnProperty(word)) { wordList.push([word, words[word]]); } } wordList.sort(function(a, b) { return b[1] - a[1]; }); var topWords = []; for (var i = 0; i < n; i++) { topWords.push(wordList[i][0]); } return topWords; } ```
Once you have that array of words cleaned up, and let's say you call it `wordArray`: ``` var keywordRegistry = {}; for(var i = 0; i < wordArray.length; i++) { if(keywordRegistry.hasOwnProperty(wordArray[i]) == false) { keywordRegistry[wordArray[i]] = 0; } keywordRegistry[wordArray[i]] = keywordRegistry[wordArray[i]] + 1; } // now keywordRegistry will have, as properties, all of the // words in your word array with their respective counts // this will alert (choose something better than alert) all words and their counts for(var keyword in keywordRegistry) { alert("The keyword '" + keyword + "' occurred " + keywordRegistry[keyword] + " times"); } ``` That should give you the basics of doing this part of the work.
What's the best way to count keywords in JavaScript?
[ "", "javascript", "regex", "arrays", "string", "" ]
What is the difference between using the `Runnable` and `Callable` interfaces when designing a concurrent thread in Java, why would you choose one over the other?
See explanation [here](http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Callable.html). > The Callable interface is similar to > Runnable, in that both are designed > for classes whose instances are > potentially executed by another > thread. **A Runnable, however, does not > return a result and cannot throw a > checked exception.**
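A minimal sketch of that difference in practice, using an `ExecutorService` (which accepts both interfaces); this is modern (Java 8+) syntax rather than what the original answer would have used:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RunnableVsCallable {
    // Extracted so the Callable behaviour is easy to exercise.
    static Integer askExecutor(ExecutorService pool) throws Exception {
        // Callable: returns a value and may throw a checked exception.
        Callable<Integer> answer = () -> 6 * 7;
        Future<Integer> future = pool.submit(answer);
        return future.get();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Runnable: no result, cannot throw a checked exception.
        Runnable sideEffect = () -> System.out.println("side effect only");
        pool.execute(sideEffect);

        System.out.println(askExecutor(pool)); // prints 42

        pool.shutdown();
    }
}
```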
> What are the differences in the applications of `Runnable` and `Callable`. Is the difference only with the return parameter present in `Callable`? Basically, yes. See the answers to [this question](https://stackoverflow.com/questions/141284/the-difference-between-the-runnable-and-callable-interfaces-in-java). And the [javadoc for `Callable`](http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Callable.html). > What is the need of having both if `Callable` can do all that `Runnable` does? Because the `Runnable` interface **cannot** do everything that `Callable` does! `Runnable` has been around since Java 1.0, but `Callable` was only introduced in Java 1.5 ... to handle use-cases that `Runnable` does not support. In theory, the Java team could have changed the signature of the `Runnable.run()` method, but this would have broken binary compatibility with pre-1.5 code, requiring recoding when migrating old Java code to newer JVMs. That is a BIG NO-NO. Java strives to be backwards compatible ... and that's been one of Java's biggest selling points for business computing. And, obviously, there are use-cases where a task doesn't *need* to return a result or throw a checked exception. For those use-cases, using `Runnable` is more concise than using `Callable<Void>` and returning a dummy (`null`) value from the `call()` method.
The difference between the Runnable and Callable interfaces in Java
[ "", "java", "multithreading", "interface", "runnable", "callable", "" ]
I'm creating some text boxes on my form programmatically which I need to reference later using FindControl. I've put the FindControl instruction in the page load method after the code which creates them but get an error: > **Object reference not set to an instance of an object.** I assume this is because the textbox controls are not created until later in the lifecycle and therefore cannot be referenced from within Page\_Load. Can someone advise where in my code-behind I would need to place the FindControl instruction so that it can find these programmatically created text boxes?
Did you put your textbox control inside of another control such as a panel or grid? If so you need to recursively search all the controls on the page. Here is an example of a recursive FindControl implementation: [Recursive Page.FindControl](http://www.codinghorror.com/blog/2005/06/recursive-pagefindcontrol.html). You can find many other examples by googling "recursive findcontrol".
If you create textboxes programmatically, you can keep the reference and use it directly to manipulate them. No need for FindControl (which would also be slower) ``` TextBox txt = new TextBox(); ... txt.Text = "Text"; ``` If you need access in different methods you can just make txt a private variable of the class. If you really need to use FindControl - are the textboxes added to the page (added to the Controls list of the page) when you call the function?
Page Lifecycle - Using FindControl to reference a control created programmatically during page load
[ "", "c#", "asp.net", ".net", "controls", "" ]
Say I have class A with ``` class A { final String foo() { // .. computing result, contacting database, whatever .. return "some computed value"; } // ... and a bazillion other methods, some of them final. } ``` Now I have class B with ``` class B { String methodIWantToTest(A a) { String output = a.foo(); // ... whatever this method does, e.g.: output += "_suffix"; return output; } } ``` How would I go about unit testing this method? The reason `foo()` is final is because we don't want our classes which extend A to change its functionality. But at the same time to truly unit test the method, I don't want it to reach out and run the actual `A.foo()` method. Is there a way to, say, remove the final keyword and add an annotation along the lines of `@finalUnlessTest`? What would you recommend? Refactoring A to an interface would be very, very difficult, seeing as how it's one of our central classes and is unfortunately ~~pretty~~ extremely coupled. **Edit #1** Sorry, forgot to mention, we're talking Java. We are not using a mocking framework as of yet. **Answer** OK, so: wow. [JMockit](http://jmockit.org) is just incredible and is in my eyes the killer app for testing legacy code. Unbelievably useful especially in my case. Thanks so much! You basically would do something like the following for my pseudo-example: ``` class AMock { final String foo() { return "myTestValue"; } } class Test extends TestCase { A mockedA; B b; protected void setUp() { Mockit.redefineMethods( A.class, AMock.class ); // this "pipes" all mocked methods from A to AMock mockedA = new A(); // NOT new AMock()!!! b = new B(); } public void testB() { assertEquals("myTestValue",mockedA.foo()); assertEquals("myTestValue_suffix",b.methodIWantToTest(mockedA)); } } ``` Is this frickin' cool or what?
You can try the [JMockit](http://jmockit.org) mocking library.
I'd remove the "final" and just put in a comment "Don't override this method!!". If you can't trust coworkers not to follow simple instructions, it's hopeless anyway.
How to go about mocking a class with final methods?
[ "", "java", "unit-testing", "mocking", "legacy-code", "" ]
I have a data warehouse containing typical star schemas, and a whole bunch of code which does stuff like this (obviously a lot bigger, but this is illustrative): ``` SELECT cdim.x ,SUM(fact.y) AS y ,dim.z FROM fact INNER JOIN conformed_dim AS cdim ON cdim.cdim_dim_id = fact.cdim_dim_id INNER JOIN nonconformed_dim AS dim ON dim.ncdim_dim_id = fact.ncdim_dim_id INNER JOIN date_dim AS ddim ON ddim.date_id = fact.date_id WHERE fact.date_id = @date_id GROUP BY cdim.x ,dim.z ``` I'm thinking of replacing it with a view (`MODEL_SYSTEM_1`, say), so that it becomes: ``` SELECT m.x ,SUM(m.y) AS y ,m.z FROM MODEL_SYSTEM_1 AS m WHERE m.date_id = @date_id GROUP BY m.x ,m.z ``` But the view `MODEL_SYSTEM_1` would have to contain unique column names, and I'm also concerned about performance with the optimizer if I go ahead and do this, because I'm concerned that all the items in the WHERE clause across different facts and dimensions get optimized, since the view would be across a whole star, and views cannot be parametrized (boy, wouldn't that be cool!) So my questions are - 1. Is this approach OK, or is it just going to be an abstraction which hurts performance and doesn't give my anything but a lot nicer syntax? 2. What's the best way to code-gen these views, eliminating duplicate column names (even if the view later needs to be tweaked by hand), given that all the appropriate PK and FKs are in place? Should I just write some SQL to pull it out of the `INFORMATION_SCHEMA` or is there a good example already available. **Edit:** I have tested it, and the performance seems the same, even on the bigger processes - even joining multiple stars which each use these views. The automation is mainly because there are a number of these stars in the data warehouse, and the FK/PK has been done properly by the designers, but I don't want to have to pick through all the tables or the documentation. 
I wrote a script to generate the view (it also generates abbreviations for the tables), and it works well to generate the skeleton automagically from `INFORMATION_SCHEMA`, and then it can be tweaked before committing the creation of the view. If anyone wants the code, I could probably publish it here.
1. I’ve used this technique on several data warehouses I look after. I have not noticed any performance degradation when running reports based off of the views versus a table direct approach but have never performed a detailed analysis. 2. I created the views using the designer in SQL Server management studio and did not use any automated approach. I can’t imagine the schema changing often enough that automating it would be worthwhile anyhow. You might spend as long tweaking the results as it would have taken to drag all the tables onto the view in the first place! To remove ambiguity a good approach is to preface the column names with the name of the dimension it belongs to. This is helpful to the report writers and to anyone running ad hoc queries.
Make the view or views into one or more summary fact tables and materialize them. These only need to be refreshed when the main fact table is refreshed. The materialized views will be faster to query, and this can be a win if you have a lot of queries that can be satisfied by the summary. You can use the data dictionary or information schema views to generate SQL to create the tables if you have a large number of these summaries or wish to change them about frequently. However, I would guess that it's not likely that you would change these very often, so auto-generating the view definitions might not be worth the trouble.
Typical Kimball Star-schema Data Warehouse - Model Views Feasible? and How to Code Gen
[ "", "sql", "sql-server", "t-sql", "code-generation", "data-warehouse", "" ]
Is there a way to respond to the back button being hit (or backspace being pressed) in javascript when only the location hash changes? That is to say when the browser is not communicating with the server or reloading the page.
Use the [`hashchange`](https://developer.mozilla.org/en-US/docs/Web/API/WindowEventHandlers.onhashchange) event: ``` window.addEventListener("hashchange", function(e) { // ... }) ``` If you need to support older browsers, check out the [`hashChange` Event section](https://github.com/Modernizr/Modernizr/wiki/HTML5-Cross-Browser-Polyfills#hashchange-event) in Modernizr's HTML5 Cross Browser Polyfills wiki page.
I did a fun hack to solve this issue to my satisfaction. I've got an AJAX site that loads content dynamically, then modifies the window.location.hash, and I had code to run upon $(document).ready() to parse the hash and load the appropriate section. The thing is that I was perfectly happy with my section loading code for navigation, but wanted to add a way to intercept the browser back and forward buttons, which change the window location, but not interfere with my current page loading routines where I manipulate the window.location, and polling the window.location at constant intervals was out of the question. What I ended up doing was creating an object as such: ``` var pageload = { ignorehashchange: false, loadUrl: function(){ if (pageload.ignorehashchange == false){ //code to parse window.location.hash and load content }; } }; ``` Then, I added a line to my site script to run the `pageload.loadUrl` function upon the [hashchange](https://developer.mozilla.org/en/DOM/window.onhashchange "hashchange") event, as such: ``` window.addEventListener("hashchange", pageload.loadUrl, false); ``` Then, any time I want to modify the `window.location.hash` without triggering this page loading routine, I simply add the following line before each `window.location.hash =` line: ``` pageload.ignorehashchange = true; ``` and then the following line after each hash modification line: ``` setTimeout(function(){pageload.ignorehashchange = false;}, 100); ``` So now my section loading routines are usually running, but if the user hits the 'back' or 'forward' buttons, the new location is parsed and the appropriate section loaded.
Is there a way to catch the back button event in javascript?
[ "", "javascript", "" ]
Should I try to actually upgrade my existing app, or just rewrite it mostly from scratch, saving what pieces (templates, etc) I can?
Although this depends on what you're doing, most applications should be able to just upgrade and then fix everything that breaks. In my experience, the main things that I've had to fix after an upgrade are 1. Changes to some of the funky stuff with models, such as the syntax for following foreign keys. 2. A small set of template changes, most notably auto-escaping. 3. Anything that depends on the specific structure of Django's internals. This shouldn't be an issue unless you're doing stuff like dynamically modifying Django internals to change their behavior in a way that's necessary/convenient for your project. To summarize, unless you're doing a lot of really weird and/or complex stuff, a simple upgrade should be relatively painless and only require a few changes.
Upgrade. For me it was very simple: change `__str__()` to `__unicode__()`, write basic `admin.py`, and done. Just start running your app on 1.0, test it, and when you encounter an error use the documentation on [backwards-incompatible changes](http://code.djangoproject.com/wiki/BackwardsIncompatibleChanges) to see how to fix the issue.
What's the best way to upgrade from Django 0.96 to 1.0?
[ "", "python", "django", "" ]
In Javascript, I have an object: ``` obj = { one: "foo", two: "bar" }; ``` Now, I want to do this ``` var a = 'two'; if(confirm('Do you want One')) { a = 'one'; } alert(obj.a); ``` But of course it doesn't work. What would be the correct way of referencing this object dynamically?
Short answer: `obj[a]` Long answer: `obj.field` is just shorthand for `obj["field"]`, for the special case where the key is a constant string without spaces, dots, or other nasty things. In your question, the key wasn't a constant, so simply use the full syntax.
Like this: ``` obj[a] ```
How do I reference an object dynamically?
[ "", "javascript", "json", "" ]
Say I have the following: ``` <ul> <li>First item</li> <li>Second item</li> <li>Third item</li> </ul> ``` How would I select all the child elements after the first one using jQuery? So I can achieve something like: ``` <ul> <li>First item</li> <li class="something">Second item</li> <li class="something">Third item</li> </ul> ```
You should be able to use the "not" and "first child" selectors. ``` $("li:not(:first-child)").addClass("something"); ``` <http://docs.jquery.com/Selectors/not> <http://docs.jquery.com/Selectors/firstChild>
Based on my totally unscientific analysis of the four methods here, it looks like there's not a lot of speed difference among them. I ran each on a page containing a series of unordered lists of varying length and timed them using the Firebug profiler. ``` $("li").slice(1).addClass("something"); ``` Average Time: 5.322ms ``` $("li:gt(0)").addClass("something"); ``` Average Time: 5.590ms ``` $("li:not(:first-child)").addClass("something"); ``` Average Time: 6.084ms ``` $("ul li+li").addClass("something"); ``` Average Time: 7.831ms
Select all child elements except the first
[ "", "javascript", "jquery", "" ]
I am writing a drop-in replacement for a legacy application in Java. One of the requirements is that the ini files that the older application used have to be read as-is into the new Java application. The format of these ini files is the common Windows style, with header sections and key=value pairs, using # as the character for commenting. I tried using the Properties class from Java, but of course that won't work if there are name clashes between different headers. So the question is, what would be the easiest way to read in this INI file and access the keys?
The library I've used is [ini4j](http://ini4j.sourceforge.net/). It is lightweight and parses the ini files with ease. It also has no esoteric dependencies on 10,000 other jar files, as one of the design goals was to use only the standard Java API. This is an example of how the library is used: ``` Ini ini = new Ini(new File(filename)); java.util.prefs.Preferences prefs = new IniPreferences(ini); System.out.println("grumpy/homePage: " + prefs.node("grumpy").get("homePage", null)); ```
As [mentioned](https://stackoverflow.com/a/190633), [ini4j](http://ini4j.sourceforge.net/index.html) can be used to achieve this. Let me show one other example. If we have an INI file like this: ``` [header] key = value ``` The following should display `value` to STDOUT: ``` Ini ini = new Ini(new File("/path/to/file")); System.out.println(ini.get("header", "key")); ``` Check [the tutorials](http://ini4j.sourceforge.net/tutorial/) for more examples.
What is the easiest way to parse an INI file in Java?
[ "", "java", "parsing", "ini", "" ]
I'm looking for a tool that will be able to build a parser (in C#) if I give it a BNF grammar (eg. <http://savage.net.au/SQL/sql-2003-2.bnf>) Does such a generator exist?
Normally BNF grammars are too ambiguous. ANTLR will probably be good for what you are looking for.
You will have to tweak the BNF a bit, but [TinyPG](http://www.codeproject.com/KB/recipes/TinyPG.aspx "TinyPG") is a great tool.
Parser-generator that outputs C# given a BNF grammar?
[ "", "c#", "parsing", "bnf", "" ]
Take the following C# class: ``` class c1 { event EventHandler someEvent; } ``` If there are a lot of subscriptions to `c1`'s `someEvent` event and I want to clear them all, what is the best way to achieve this? **Also consider that subscriptions to this event could be/are lambdas/anonymous delegates.** Currently my solution is to add a `ResetSubscriptions()` method to `c1` that sets `someEvent` to null. I don't know if this has any unseen consequences.
From within the class, you can set the (hidden) variable to null. A null reference is the canonical way of representing an empty invocation list, effectively. From outside the class, you can't do this - events basically expose "subscribe" and "unsubscribe" and that's it. It's worth being aware of what field-like events are actually doing - they're creating a variable *and* an event at the same time. Within the class, you end up referencing the variable. From outside, you reference the event. See my [article on events and delegates](http://csharpindepth.com/Articles/Chapter2/Events.aspx) for more information.
Add a method to c1 that will set 'someEvent' to null. ``` public class c1 { event EventHandler someEvent; public void ResetSubscriptions() => someEvent = null; } ```
How can I clear event subscriptions in C#?
[ "", "c#", ".net", "events", "delegates", "" ]
[PEP 8](http://www.python.org/dev/peps/pep-0008/) states: > Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants. However if the class/method/function that I am importing is only used in rare cases, surely it is more efficient to do the import when it is needed? Isn't this: ``` class SomeClass(object): def not_often_called(self): from datetime import datetime self.datetime = datetime.now() ``` more efficient than this? ``` from datetime import datetime class SomeClass(object): def not_often_called(self): self.datetime = datetime.now() ```
Module importing is quite fast, but not instant. This means that: * Putting the imports at the top of the module is fine, because it's a trivial cost that's only paid once. * Putting the imports within a function will cause calls to that function to take longer. So if you care about efficiency, put the imports at the top. Only move them into a function if your profiling shows that would help (you **did** profile to see where best to improve performance, right??) --- The best reasons I've seen to perform lazy imports are: * Optional library support. If your code has multiple paths that use different libraries, don't break if an optional library is not installed. * In the `__init__.py` of a plugin, which might be imported but not actually used. Examples are Bazaar plugins, which use `bzrlib`'s lazy-loading framework.
Putting the import statement inside of a function can prevent circular dependencies. For example, if you have 2 modules, X.py and Y.py, and they both need to import each other, importing one of the modules will trigger the import of the other before the first has finished initializing, which typically fails with an ImportError or AttributeError. If you move the import statement inside a function in one of the modules, then it won't try to import the other module until the function is called, and by then that module will already be fully imported, so the cycle is broken. Read here for more - [effbot.org/zone/import-confusion.htm](https://web.archive.org/web/20200917011425/http://effbot.org/zone/import-confusion.htm)
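The per-call cost discussed in the answers above is easy to observe directly. A rough sketch (numbers vary by machine; `json` is just a stand-in for any module):

```python
import sys
import timeit
import json  # top-level import: cost paid once, when this module loads

def with_top_level_import():
    return json.dumps([1, 2])

def with_function_level_import():
    import json  # cheap after the first time, but still a per-call lookup in sys.modules
    return json.dumps([1, 2])

# Both behave identically once the module is cached in sys.modules.
assert with_top_level_import() == with_function_level_import()

t_top = timeit.timeit(with_top_level_import, number=100000)
t_inner = timeit.timeit(with_function_level_import, number=100000)
print("top-level: %.3fs, function-level: %.3fs" % (t_top, t_inner))
```

On a typical machine the function-level version is measurably (though not dramatically) slower per call, which is why the usual advice is to hoist imports to the top unless profiling, optional dependencies, or a circular import says otherwise.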
Should import statements always be at the top of a module?
[ "", "python", "optimization", "pep8", "" ]
How do I iterate over a timespan after days, hours, weeks or months? Something like: ``` for date in foo(from_date, to_date, delta=HOURS): print date ``` Where foo is a function, returning an iterator. I've been looking at the calendar module, but that only works for one specific year or month, not between dates.
Use [dateutil](http://labix.org/python-dateutil) and its rrule implementation, like so: ``` from dateutil import rrule from datetime import datetime, timedelta now = datetime.now() hundredDaysLater = now + timedelta(days=100) for dt in rrule.rrule(rrule.MONTHLY, dtstart=now, until=hundredDaysLater): print dt ``` Output is ``` 2008-09-30 23:29:54 2008-10-30 23:29:54 2008-11-30 23:29:54 2008-12-30 23:29:54 ``` Replace MONTHLY with any of YEARLY, MONTHLY, WEEKLY, DAILY, HOURLY, MINUTELY, or SECONDLY. Replace dtstart and until with whatever datetime object you want. This recipe has the advantage of working in all cases, including MONTHLY. The only caveat I could find is that if you pass a day number that doesn't exist for all months, it skips those months.
I don't think there is a method in the Python standard library, but you can easily create one yourself using the [datetime](http://docs.python.org/lib/module-datetime.html) module: ``` from datetime import date, datetime, timedelta def datespan(startDate, endDate, delta=timedelta(days=1)): currentDate = startDate while currentDate < endDate: yield currentDate currentDate += delta ``` Then you could use it like this: ``` >>> for day in datespan(date(2007, 3, 30), date(2007, 4, 3), >>> delta=timedelta(days=1)): >>> print day 2007-03-30 2007-03-31 2007-04-01 2007-04-02 ``` Or, if you wish to make your delta smaller: ``` >>> for timestamp in datespan(datetime(2007, 3, 30, 15, 30), >>> datetime(2007, 3, 30, 18, 35), >>> delta=timedelta(hours=1)): >>> print timestamp 2007-03-30 15:30:00 2007-03-30 16:30:00 2007-03-30 17:30:00 2007-03-30 18:30:00 ```
How to iterate over a timespan after days, hours, weeks and months?
[ "", "python", "datetime", "" ]
In MS Transact SQL, let's say I have a table (Orders) like this: ``` Order Date Order Total Customer # 09/30/2008 8.00 1 09/15/2008 6.00 1 09/01/2008 9.50 1 09/01/2008 1.45 2 09/16/2008 4.50 2 09/17/2008 8.75 3 09/18/2008 2.50 3 ``` What I need out of this is: for each customer the average order amount for the most recent two orders. So for Customer #1, I should get 7.00 (and not 7.83). I've been staring at this for an hour now (inside a larger problem, which I've solved) and I think my brain has frozen. Help for a simple problem?
This should do it ``` select avg(total), customer from orders o1 where orderdate in ( select top 2 orderdate from orders o2 where o2.customer = o1.customer order by orderdate desc ) group by customer ```
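Not SQL, but the expected output is easy to pin down from the sample table in the question. A Python sketch of the same "average of the two most recent orders per customer" logic, using the question's data:

```python
from collections import defaultdict

# Sample rows from the question: (order date, order total, customer #)
orders = [
    ("2008-09-30", 8.00, 1),
    ("2008-09-15", 6.00, 1),
    ("2008-09-01", 9.50, 1),
    ("2008-09-01", 1.45, 2),
    ("2008-09-16", 4.50, 2),
    ("2008-09-17", 8.75, 3),
    ("2008-09-18", 2.50, 3),
]

by_customer = defaultdict(list)
for order_date, total, customer in orders:
    by_customer[customer].append((order_date, total))

averages = {}
for customer, rows in by_customer.items():
    # Two most recent orders; ISO dates sort correctly as plain strings
    recent = sorted(rows, reverse=True)[:2]
    averages[customer] = sum(t for _, t in recent) / len(recent)

print(averages[1])  # → 7.0, matching the expected result in the question
```

Any SQL you write (correlated subquery, RANK, etc.) should reproduce these per-customer numbers.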
In SQL Server 2005 you have the RANK function, used with partition: ``` USE AdventureWorks; GO SELECT i.ProductID, p.Name, i.LocationID, i.Quantity ,RANK() OVER (PARTITION BY i.LocationID ORDER BY i.Quantity DESC) AS 'RANK' FROM Production.ProductInventory i INNER JOIN Production.Product p ON i.ProductID = p.ProductID ORDER BY p.Name; GO ``` [Link](http://msdn.microsoft.com/en-us/library/ms176102.aspx)
SQL sub-query problem with grouping, average
[ "", "sql", "sql-server-2005", "" ]
What's the difference between using a define statement and an enum statement in C/C++ (and is there any difference when using them with either C or C++)? For example, when should one use ``` enum {BUFFER = 1234}; ``` over ``` #define BUFFER 1234 ```
`enum` defines a syntactical element. `#define` is a pre-preprocessor directive, executed *before* the compiler sees the code, and therefore is not a language element of C itself. Generally enums are preferred as they are type-safe and more easily discoverable. Defines are harder to locate and can have complex behavior, for example one piece of code can redefine a `#define` made by another. This can be hard to track down.
`#define` statements are handled by the pre-processor before the compiler gets to see the code so it's basically a text substitution (it's actually a little more intelligent with the use of parameters and such). Enumerations are part of the C language itself and have the following advantages. 1/ They may have type and the compiler can type-check them. 2/ Since they are available to the compiler, symbol information on them can be passed through to the debugger, making debugging easier.
Difference between Enum and Define Statements
[ "", "c++", "c", "enums", "c-preprocessor", "" ]
I'm not sure if the term's actually "Array Addition". I'm trying to understand what does the following line do: ``` int var[2 + 1] = {2, 1}; ``` How is that different from `int var[3]`? I've been using Java for several years, so I'd appreciate if explained using Java-friendly words. Edit: Thousands of thanks to everyone who helped me out, Occam's Razor applies here.
It's not different. C++ allows expressions (even non-constant expressions) in the subscripts of array declarations (with some limitations; anything other than the initial subscript on a multi-dimensional array must be constant). ``` int var[]; // illegal int var[] = {2,1}; // automatically sized to 2 int var[3] = {2,1}; // equivalent to {2,1,0}: anything not specified is zero int var[3]; // however, with no initializer, nothing is initialized to zero ``` Perhaps the code you are reading writes `2 + 1` instead of `3` as a reminder that a trailing `0` is intentional.
> How is that different from `int var[3]`? In no way that I can see.
C++ compile-time expression as an array size
[ "", "c++", "arrays", "" ]
I'm writing an import utility that is using phone numbers as a unique key within the import. I need to check that the phone number does not already exist in my DB. The problem is that phone numbers in the DB could have things like dashes and parenthesis and possibly other things. I wrote a function to remove these things, the problem is that it is **slow** and with thousands of records in my DB and thousands of records to import at once, this process can be unacceptably slow. I've already made the phone number column an index. I tried using the script from this post: [T-SQL trim &nbsp (and other non-alphanumeric characters)](https://stackoverflow.com/questions/52315/t-sql-trim-nbsp-and-other-non-alphanumeric-characters) But that didn't speed it up any. Is there a faster way to remove non-numeric characters? Something that can perform well when 10,000 to 100,000 records have to be compared. Whatever is done needs to perform **fast**. **Update** Given what people responded with, I think I'm going to have to clean the fields before I run the import utility. To answer the question of what I'm writing the import utility in, it is a C# app. I'm comparing BIGINT to BIGINT now, with no need to alter DB data and I'm still taking a performance hit with a very small set of data (about 2000 records). Could comparing BIGINT to BIGINT be slowing things down? I've optimized the code side of my app as much as I can (removed regexes, removed unneccessary DB calls). Although I can't isolate SQL as the source of the problem anymore, I still feel like it is.
I may misunderstand, but you've got two sets of data to remove the strings from: one for the current data in the database, and then a new set whenever you import. For updating the existing records, I would just use SQL; that only has to happen once. However, SQL isn't optimized for this sort of operation. Since you said you are writing an import utility, I would do those updates in the context of the import utility itself, not in SQL. This would be much better performance-wise. What are you writing the utility in? Also, I may be completely misunderstanding the process, so I apologize if I'm off-base. **Edit:** For the initial update, if you are using SQL Server 2005, you could try a CLR function. Here's a quick one using regex. Not sure how the performance would compare; I've never used this myself except for a quick test right now. ``` using System; using System.Data; using System.Text.RegularExpressions; using System.Data.SqlClient; using System.Data.SqlTypes; using Microsoft.SqlServer.Server; public partial class UserDefinedFunctions { [Microsoft.SqlServer.Server.SqlFunction] public static SqlString StripNonNumeric(SqlString input) { Regex regEx = new Regex(@"\D"); return regEx.Replace(input.Value, ""); } }; ``` After this is deployed, to update you could just use: ``` UPDATE table SET phoneNumber = dbo.StripNonNumeric(phoneNumber) ```
I saw this solution with T-SQL code and PATINDEX. I like it :-) ``` CREATE Function [fnRemoveNonNumericCharacters](@strText VARCHAR(1000)) RETURNS VARCHAR(1000) AS BEGIN WHILE PATINDEX('%[^0-9]%', @strText) > 0 BEGIN SET @strText = STUFF(@strText, PATINDEX('%[^0-9]%', @strText), 1, '') END RETURN @strText END ```
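For the clean-before-import route the asker settled on, the same keep-only-digits logic is a one-liner on the application side. A Python sketch for comparison (the C# equivalent is the `Regex.Replace(@"\D", "")` call shown above):

```python
def strip_non_numeric(text):
    """Keep only the decimal digit characters, dropping everything else.

    Note: str.isdigit also accepts non-ASCII digit characters, which is
    fine for cleaning up formatted phone numbers.
    """
    return "".join(ch for ch in text if ch.isdigit())

print(strip_non_numeric("(555) 123-4567"))  # → 5551234567
```

Normalizing both sides to plain digit strings (or BIGINTs) before comparing avoids doing this work row-by-row inside the database.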
Fastest way to remove non-numeric characters from a VARCHAR in SQL Server
[ "", "sql", "sql-server", "performance", "optimization", "" ]
I'm using .NET to make an application with a drawing surface, similar to Visio. The UI connects two objects on the screen with Graphics.DrawLine. This simple implementation works fine, but as the surface gets more complex, I need a more robust way to represent the objects. One of these robust requirements is determining the intersection point for two lines so I can indicate separation via some kind of graphic. So my question is, can anyone suggest a way to do this? Perhaps with a different technique (maybe GraphViz) or an algorithm?
The representation of lines by y = mx + c is problematic for computer graphics, because vertical lines require m to be infinite. Furthermore, lines in computer graphics have a start and end point, unlike mathematical lines which are infinite in extent. One is usually only interested in a crossing of lines if the crossing point lies on both the line segments in question. If you have two line segments, one from vectors x1 to x1+v1, and one from vectors x2 to x2+v2, then define: ``` a = (v2.v2 v1.(x2-x1) - v1.v2 v2.(x2-x1)) / ((v1.v1)(v2.v2) - (v1.v2)^2) b = (v1.v2 v1.(x2-x1) - v1.v1 v2.(x2-x1)) / ((v1.v1)(v2.v2) - (v1.v2)^2) ``` where for the vectors p=(px,py), q=(qx,qy), p.q is the dot product (px \* qx + py \* qy). First check if (v1.v1)(v2.v2) = (v1.v2)^2 - if so, the lines are parallel and do not cross. If they are not parallel, then if 0<=a<=1 and 0<=b<=1, the intersection point lies on both of the line segments, and is given by the point ``` x1 + a * v1 ``` **Edit** The derivation of the equations for a and b is as follows. The intersection point satisfies the vector equation ``` x1 + a*v1 = x2 + b*v2 ``` By taking the dot product of this equation with `v1`, and with `v2`, we get two equations: ``` v1.v1*a - v2.v1*b = v1.(x2-x1) v1.v2*a - v2.v2*b = v2.(x2-x1) ``` which form two linear equations for a and b. Solving this system (by multiplying the first equation by v2.v2 and the second by v1.v1 and subtracting, or otherwise) gives the equations for a and b.
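The derivation above translates almost line-for-line into code. A sketch in Python rather than the asker's C#/GDI+, using plain `(x, y)` tuples for the vectors and the same definitions of `a` and `b`:

```python
def dot(p, q):
    """Dot product of 2D vectors p and q."""
    return p[0] * q[0] + p[1] * q[1]

def segment_intersection(x1, v1, x2, v2):
    """Segments x1..x1+v1 and x2..x2+v2; returns the crossing point or None."""
    denom = dot(v1, v1) * dot(v2, v2) - dot(v1, v2) ** 2
    if denom == 0:
        return None  # parallel (or degenerate) segments never cross
    d = (x2[0] - x1[0], x2[1] - x1[1])  # the vector x2 - x1
    a = (dot(v2, v2) * dot(v1, d) - dot(v1, v2) * dot(v2, d)) / denom
    b = (dot(v1, v2) * dot(v1, d) - dot(v1, v1) * dot(v2, d)) / denom
    if 0 <= a <= 1 and 0 <= b <= 1:
        return (x1[0] + a * v1[0], x1[1] + a * v1[1])
    return None  # the infinite lines cross, but outside the segments

# The two diagonals of a square cross at its centre:
print(segment_intersection((0, 0), (2, 2), (0, 2), (2, -2)))  # → (1.0, 1.0)
```

The same structure carries straight over to a C# helper taking `PointF` arguments for use alongside `Graphics.DrawLine`.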
You can ask Dr. Math, see [this link](https://web.archive.org/web/20180211083203/http://mathforum.org/library/drmath/view/53254.html).
How do I determine the intersection point of two lines in GDI+?
[ "", "c#", ".net", "graphics", "geometry", "gdi+", "" ]
What are my options for programmatically accessing a Microsoft Project file? What are the pros and cons of each approach? I will basically need to import all data from the file into another data structure. Using the Office Interop assembies is low on the preference scale.
Here are the options: 1. Interop (messy and horribly slow for large projects) 2. Save project in XML and work with it (messy) 3. Save project in the database (that's not publishing and it is available for project 2003 only - see ODBC option while saving). I've seen it being used a lot in the integration scenarios 4. [Projette](http://www.highlydeveloped.net/) (commercial, $10 per license) 5. [ILog Project Viewer](http://www.ilog.com/products/ganttnet/projectviewer/) (also commercial)
The MPXJ ([mpxj.sf.net](http://mpxj.sf.net)) library comes in both Java and .Net flavours and will allow you to read and write multiple Microsoft Project file formats using a single consistent API. I am aware of commercial products which use both the Java and the .Net versions of MPXJ without any issues. Disclaimer: I'm the maintainer of MPXJ.
Programmatically access a Microsoft Project (MPP) file from C#
[ "", "c#", "ms-project", "" ]
What would be the most efficient way of recording to a log (.txt) from a console program on C# and .NET 2.2? My program loops multiple times, always outputting different data based on what the user wants, so I'm searching for the most efficient way to achieve this. I know I can always reopen a stream and then close it, but every time I do that it would be writing just one line, then next time around (seconds later) the program reloops and needs to write again. In my opinion, that doesn't seem very resource friendly. I'm using multiple threads that all have output data that I want to log (opening/closing the same file or accessing the same file on different threads might be bad). The "holds a reference to a stream writer that auto-flushes" sounds like a good idea, however I don't know how to do that.
You could hook into the tracing framework that forms part of the CLR. Using a simple class like: <http://www.chaosink.co.uk/files/tracing.zip> you can selectively log diagnostic information. To use it, add the class to your application. Create an instance of the tracer in your class like: ``` private Tracing trace = new Tracing("My.Namespace.Class"); ``` and call it using: ``` MyClass() { trace.Verbose("Entered MyClass"); int x = 12; trace.Information("X is: {0}", x); trace.Verbose("Leaving MyClass"); } ``` There are 4 levels of information in the inbuilt tracing framework: Verbose - To log program flow Information - To log specific information of interest to monitors Warning - To log an invalid state or recoverable exception Error - To log an unrecoverable exception or state To access the information from your application, add the following to the app.config (or web.config): ``` <system.diagnostics> <trace autoflush="false" indentsize="4"> <listeners> <add name="myListener" type="System.Diagnostics.TextWriterTraceListener" initializeData="c:\mylogfile.log" /> </listeners> </trace> <switches> <add name="My.Namespace.Class" value="4"/> </switches> </system.diagnostics> ``` You can also attach listeners for publishing to the eventlog or anywhere else that interests you. More information on the tracing framework can be found at: <http://msdn.microsoft.com/en-us/library/ms733025.aspx>
Consider using [log4net](http://logging.apache.org/log4net/index.html):

> a tool to help the programmer output log statements to a variety of output targets... We have kept the framework similar in spirit to the original log4j while taking advantage of new features in the .NET runtime.

For more information on log4net, see the [features](http://logging.apache.org/log4net/release/features.html) document.
Efficient way of logging to txt file from a console program
[ "c#", ".net", "performance", "logging" ]
I want to be able to list only the directories inside some folder. This means I don't want filenames listed, nor do I want additional sub-folders. Let's see if an example helps. In the current directory we have:

```
>>> os.listdir(os.getcwd())
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'LICENSE.txt',
 'mod_python-wininst.log', 'NEWS.txt', 'pymssql-wininst.log', 'python.exe',
 'pythonw.exe', 'README.txt', 'Removemod_python.exe', 'Removepymssql.exe',
 'Scripts', 'tcl', 'Tools', 'w9xpopen.exe']
```

However, I don't want filenames listed. Nor do I want sub-folders such as \Lib\curses. Essentially what I want works with the following:

```
>>> for root, dirnames, filenames in os.walk('.'):
...     print dirnames
...     break
...
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'Scripts', 'tcl', 'Tools']
```

However, I'm wondering if there's a simpler way of achieving the same results. I get the impression that using os.walk only to return the top level is inefficient/too much.
Filter the result using `os.path.isdir()` (and use `os.path.join()` to get the real path):

```
>>> [name for name in os.listdir(thedir)
...  if os.path.isdir(os.path.join(thedir, name))]
['ctypes', 'distutils', 'encodings', 'lib-tk', 'config', 'idlelib', 'xml',
 'bsddb', 'hotshot', 'logging', 'doc', 'test', 'compiler', 'curses',
 'site-packages', 'email', 'sqlite3', 'lib-dynload', 'wsgiref',
 'plat-linux2', 'plat-mac']
```
## os.walk

Use [`os.walk`](https://docs.python.org/3/library/os.html#os.walk) with the [`next`](https://docs.python.org/3/library/functions.html#next) built-in:

```
next(os.walk('.'))[1]
```

For *Python <=2.5* use:

```
os.walk('.').next()[1]
```

### How this works

`os.walk` is a generator, and calling `next` gets the first result in the form of a 3-tuple (dirpath, dirnames, filenames). The `[1]` index therefore returns only the `dirnames` from that tuple.
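Both answers can be checked side by side with a small, self-contained script (the directory and file names below are made up for the demonstration):

```python
import os
import tempfile

# Build a throwaway tree: two top-level directories (one containing a
# sub-directory, which must NOT appear in the result) and one plain file.
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "Lib", "curses"))
os.mkdir(os.path.join(base, "Doc"))
with open(os.path.join(base, "README.txt"), "w") as f:
    f.write("not a directory")

# Approach 1: filter os.listdir() with os.path.isdir().
by_listdir = sorted(
    name for name in os.listdir(base)
    if os.path.isdir(os.path.join(base, name))
)

# Approach 2: take only the first (top-level) tuple that os.walk() yields.
by_walk = sorted(next(os.walk(base))[1])

print(by_listdir)   # ['Doc', 'Lib'] - no files, no nested sub-folders
assert by_listdir == by_walk
```

Both approaches return the same top-level directory names and skip `README.txt` and the nested `curses` folder.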
How to list only top level directories in Python?
[ "python", "filesystems" ]
I'm trying to do something like the following:

```
enum E;

void Foo(E e);

enum E { A, B, C };
```

which the compiler rejects. I've had a quick look on Google and the consensus seems to be "you can't do it". Why is that?

Clarification 2: I'm doing this as I have private methods in a class that take said enum, and I do not want the enum's values exposed. For example, I do not want anyone to know that E is defined as

```
enum E {
    FUNCTIONALITY_NORMAL,
    FUNCTIONALITY_RESTRICTED,
    FUNCTIONALITY_FOR_PROJECT_X
};
```

as project X is not something I want my users to know about. So, I wanted to forward-declare the enum, so I could put the private methods in the header file, declare the enum internally in the cpp file, and distribute the built library file and header to people.

As for the compiler, it's GCC.
The reason the enum can't be forward declared is that, without knowing the values, the compiler can't know the storage required for the enum variable. C++ compilers are allowed to choose the actual storage based on the size necessary to contain all the values specified. If all that is visible is the forward declaration, the translation unit can't know what storage size has been chosen - it could be a `char`, or an `int`, or something else.

---

From Section 7.2.5 of the ISO C++ Standard:

> The *underlying type* of an enumeration is an integral type that can represent all the enumerator values defined in the enumeration. It is implementation-defined which integral type is used as the underlying type for an enumeration except that the underlying type shall not be larger than `int` unless the value of an enumerator cannot fit in an `int` or `unsigned int`. If the *enumerator-list* is empty, the underlying type is as if the enumeration had a single enumerator with value 0. The value of `sizeof()` applied to an enumeration type, an object of enumeration type, or an enumerator, is the value of `sizeof()` applied to the underlying type.

Since the *caller* of the function must know the sizes of the parameters to correctly set up the call stack, the number of enumerators in an enumeration list must be known before the function prototype.

Update: In C++0x, a syntax for forward declaring enum types has been proposed and accepted. You can see the proposal at *[Forward declaration of enumerations (rev.3)](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2764.pdf)*.
Forward declaration of enums is possible since C++11. Previously, the reason enum types couldn't be forward declared was that the size of the enumeration depended on its contents. As long as the size of the enumeration is specified by the application, it can be forward declared:

```
enum Enum1;                      // Illegal in C++03 and C++11; no size is explicitly specified.
enum Enum2 : unsigned int;       // Legal in C++11.
enum class Enum3;                // Legal in C++11; enum class declarations have a default underlying type of int.
enum class Enum4 : unsigned int; // Legal in C++11.
enum Enum2 : unsigned short;     // Illegal in C++11, because Enum2 was previously declared with a different underlying type.
```
Forward declaring an enum in C++
[ "c++", "enums", "forward-declaration" ]
I'm working on a loosely coupled cluster for some data processing. The network code and processing code are in place, but we are evaluating different methodologies in our approach. Right now, as we should be, we are I/O bound on performance issues, and we're trying to decrease that bottleneck. Obviously, faster switches like InfiniBand would be awesome, but we can't afford the luxury of throwing out what we have and getting new equipment.

My question is this: all traditional and serious HPC applications done on clusters are typically implemented with message passing rather than sending over sockets directly. What are the performance benefits of this? Should we see a speedup if we switched from sockets?
MPI *might* use sockets. But there are also MPI implementations for use with SANs (system area networks) that use direct distributed shared memory - only if you have the hardware for it, of course. So MPI allows you to use such resources in the future. In that case you can gain massive performance improvements (in my experience with clusters back at university, you can reach gains of a few orders of magnitude). So if you are writing code that can be ported to higher-end clusters, using MPI is a very good idea. Even discarding performance issues, using MPI can save you a lot of time, which you can use to improve performance of other parts of your system, or simply save your sanity.
I would recommend using MPI instead of rolling your own, unless you are very good at that sort of thing. Having written some distributed-computing-esque applications using my own protocols, I always find myself reproducing (and poorly reproducing) features found within MPI.

Performance-wise, I would not expect MPI to give you any tangible network speedups - it uses sockets just like you do. MPI will, however, provide you with much of the functionality you would need for managing many nodes, e.g. synchronisation between nodes.
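For reference, the point-to-point pattern under discussion looks roughly like this in MPI. This is an untested sketch: it assumes an MPI implementation (e.g. Open MPI) is installed, is compiled with `mpicc`, and is launched with something like `mpirun -np 2 ./a.out`:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                      /* master sends one work item */
        int work = 42;
        MPI_Send(&work, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {               /* worker receives it */
        int work;
        MPI_Recv(&work, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 got %d\n", work);
    }

    MPI_Barrier(MPI_COMM_WORLD);          /* the node synchronisation MPI gives you */
    MPI_Finalize();
    return 0;
}
```

On plain Ethernet this will go over sockets just like hand-rolled code, but the same source runs unchanged on shared-memory or SAN-backed transports.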
MPI or Sockets?
[ "c++", "c", "cluster-computing", "hpc" ]
Is there an elegant way to have performant, natural sorting in a MySQL database? For example, if I have this data set:

* Final Fantasy
* Final Fantasy 4
* Final Fantasy 10
* Final Fantasy 12
* Final Fantasy 12: Chains of Promathia
* Final Fantasy Adventure
* Final Fantasy Origins
* Final Fantasy Tactics

Is there any **elegant** solution other than splitting up the games' names into their components

* *Title*: "Final Fantasy"
* *Number*: "12"
* *Subtitle*: "Chains of Promathia"

to make sure that they come out in the right order? (10 after 4, not before 2.)

Doing so is a pain in the a\*\* because every now and then there's another game that breaks that mechanism of parsing the game title (e.g. "Warhammer 40,000", "James Bond 007").
I think this is why a lot of things are sorted by release date. A solution could be to create another column in your table for the "SortKey". This could be a sanitized version of the title which conforms to a pattern you create for easy sorting or a counter.
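The "SortKey" column could be populated by something like the following sketch: zero-pad every run of digits so that plain string ordering of the keys matches natural ordering of the titles. (The function name and the padding width are arbitrary choices for the example; the same transformation could run in whatever language feeds the database.)

```python
import re

def natural_sort_key(title, width=10):
    """Zero-pad each digit run so lexicographic order == natural order.
    A sketch of the 'SortKey column' idea; width just needs to exceed
    the longest digit run you expect."""
    return re.sub(r"\d+", lambda m: m.group(0).zfill(width), title)

titles = [
    "Final Fantasy",
    "Final Fantasy 4",
    "Final Fantasy 10",
    "Final Fantasy 12",
    "Final Fantasy 12: Chains of Promathia",
    "Final Fantasy Adventure",
]

# Sorting by the padded key reproduces the desired natural order,
# with 10 and 12 after 4 instead of before it.
print(sorted(titles, key=natural_sort_key))
```

This also copes with the awkward cases from the question: "James Bond 007" keeps its leading zeros, and "Warhammer 40,000" simply gets each digit group padded independently.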
Here is a quick solution:

```
SELECT alphanumeric, integer
FROM sorting_test
ORDER BY LENGTH(alphanumeric), alphanumeric
```
Natural Sort in MySQL
[ "sql", "mysql", "sorting", "natural-sort" ]