Columns: Prompt, Chosen, Rejected, Title, Tags
Is it like... ``` var obj = new Object(); obj.function1 = function(){ //code } ``` or something like that?
``` var newObj = { met1 : function () { alert('hello'); } }; ``` Then the method can be called like so: ``` newObj.met1(); ``` Btw, when declaring a new object, use the object literal (`{}`), not the `new Object()` constructor.
You can see from the answers you already have that there is more than one way. ``` #1 var o = new Object(); o.method = function(){} #2 var o = new Object(); o.prototype.method = function(){} #3 function myObject() { this.method = function(){} } var o = new myObject(); #4 function myObject() {} myObject.prototype.method = function(){} var o = new myObject(); #5 var o = { method: function(){} } ``` #3 and #4 use a constructor function. This means you can use them to create a number of objects of the same 'class' (classes don't really exist in JavaScript). #4 is different from #3 because all objects constructed with #4 will share an identical 'method' method, since it is a property of their prototype. This saves memory (but only a very tiny amount), and if you change the method on the prototype, all #4 objects will immediately be updated, even if they've already been instantiated. #1, #2 and #5 are all pretty much equivalent in intent (note, though, that #2 throws as written: a plain object instance has no `prototype` property; only functions do). This is because there will probably only ever be one of them at a time, so the fact that #2 adds the method to a prototype doesn't really matter (not taking cloning into account). There are still more ways of adding methods to objects, using factories with closures, or adding 'static' properties/methods to functions, or private nested functions... :)
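The claim above about #4, that replacing a prototype method updates instances that already exist, can be verified directly (a minimal sketch, runnable in Node or a browser console):

```javascript
// #4 style: the method lives on the prototype, shared by all instances.
function MyObject() {}
MyObject.prototype.method = function () { return "original"; };

var a = new MyObject();
var b = new MyObject();

// Both instances resolve `method` to the very same function object.
console.log(a.method === b.method); // true

// Replacing the prototype's method affects already-created instances too.
MyObject.prototype.method = function () { return "updated"; };
console.log(a.method()); // "updated"
```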
How do you create a method for a custom object in JavaScript?
[ "javascript", "object", "methods" ]
I'm trying to change one key of my application's settings file at runtime, but it does not work. I do it this way: ``` ConfigurationSettings.AppSettings["XPTO"] = "HELLO"; ``` It seems that it only changes in memory, not in the file. Does anyone know how to do this? Thanks.
Take a look at my [overview of .NET settings files](https://stackoverflow.com/questions/481025/where-property-setting-value-stored-in-c-windows-application/481064#481064)...In short, I think you want a user-scoped setting. It will behave more like you expect. > **Edit:** If you are using the [settings designer in Visual Studio](http://msdn.microsoft.com/en-us/library/aa730869(VS.80).aspx), then simply change the "Scope" to "User". If not, you should be able to do the equivalent programmatically.
Assuming your app has write permissions on the file... ``` Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); // the config that applies to all users AppSettingsSection appSettings = config.AppSettings; if (appSettings.IsReadOnly() == false) { appSettings.Settings["Key"].Value = "new value"; config.Save(); } ``` I'm ignoring all the possible exceptions that can be thrown...
How to change application settings at runtime
[ "c#", ".net", "settings.settings" ]
How does Python's *slice notation* work? That is: when I write code like `a[x:y:z]`, `a[:]`, `a[::2]` etc., how can I understand which elements end up in the slice? --- See [Why are slice and range upper-bound exclusive?](https://stackoverflow.com/questions/11364533) to learn why `xs[0:2] == [xs[0], xs[1]]`, *not* `[..., xs[2]]`. See [Make a new list containing every Nth item in the original list](https://stackoverflow.com/questions/1403674/) for `xs[::N]`. See [How does assignment work with list slices?](https://stackoverflow.com/questions/10623302) to learn what `xs[0:2] = ["a", "b"]` does.
The syntax is: ``` a[start:stop] # items start through stop-1 a[start:] # items start through the rest of the array a[:stop] # items from the beginning through stop-1 a[:] # a copy of the whole array ``` There is also the `step` value, which can be used with any of the above: ``` a[start:stop:step] # start through not past stop, by step ``` The key point to remember is that the `:stop` value represents the first value that is *not* in the selected slice. So, the difference between `stop` and `start` is the number of elements selected (if `step` is 1, the default). The other feature is that `start` or `stop` may be a *negative* number, which means it counts from the end of the array instead of the beginning. So: ``` a[-1] # last item in the array a[-2:] # last two items in the array a[:-2] # everything except the last two items ``` Similarly, `step` may be a negative number: ``` a[::-1] # all items in the array, reversed a[1::-1] # the first two items, reversed a[:-3:-1] # the last two items, reversed a[-3::-1] # everything except the last two items, reversed ``` Python is kind to the programmer if there are fewer items than you ask for. For example, if you ask for `a[:-2]` and `a` only contains one element, you get an empty list instead of an error. Sometimes you would prefer the error, so you have to be aware that this may happen. ### Relationship with the `slice` object A [`slice` object](https://www.w3schools.com/python/ref_func_slice.asp) can represent a slicing operation, i.e.: ``` a[start:stop:step] ``` is equivalent to: ``` a[slice(start, stop, step)] ``` Slice objects also behave slightly differently depending on the number of arguments, similar to `range()`, i.e. both `slice(stop)` and `slice(start, stop[, step])` are supported. To skip specifying a given argument, one might use `None`, so that e.g. `a[start:]` is equivalent to `a[slice(start, None)]` or `a[::-1]` is equivalent to `a[slice(None, None, -1)]`. 
While the `:`-based notation is very helpful for simple slicing, the explicit use of `slice()` objects simplifies the programmatic generation of slicing.
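Both forms can be sanity-checked quickly in an interpreter (a minimal sketch of the rules above):

```python
a = [0, 1, 2, 3, 4, 5]

# stop is exclusive: stop - start elements are selected when step is 1
print(a[1:4])         # [1, 2, 3]

# negative indices count from the end
print(a[-2:])         # [4, 5]
print(a[:-2])         # [0, 1, 2, 3]

# a negative step walks backwards
print(a[::-1])        # [5, 4, 3, 2, 1, 0]

# slices past the end degrade gracefully instead of raising
print([0][:-2])       # []

# slice() objects are interchangeable with the : notation
print(a[slice(1, 4)] == a[1:4])             # True
print(a[slice(None, None, -1)] == a[::-1])  # True
```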
The [Python tutorial](https://docs.python.org/3/tutorial/introduction.html#text) talks about it (scroll down a bit until you get to the part about slicing). The ASCII art diagram is helpful too for remembering how slices work: ``` +---+---+---+---+---+---+ | P | y | t | h | o | n | +---+---+---+---+---+---+ 0 1 2 3 4 5 -6 -5 -4 -3 -2 -1 ``` > One way to remember how slices work is to think of the indices as pointing *between* characters, with the left edge of the first character numbered 0. Then the right edge of the last character of a string of *n* characters has index *n*.
How slicing in Python works
[ "python", "slice", "sequence" ]
We distribute our web-application to our customers as a `.war` file. That way, the user can just deploy the war to their container and they're good to go. The problem is that some of our customers would like authentication, and use the username as a parameter to certain operations within the application. I know how to configure this using web.xml, but that would mean we either have to tell our customers to hack around in the war file, or distribute 2 separate wars; one with authentication (and predefined roles), one without. I also don't want to force authentication on our customers, because that would require more knowledge about Java containers and web servers in general, and make it harder to just take our application for a test drive. Is there a way to do the authentication configuration in the container, rather than in the web-app itself?
In web.xml, define security constraints to bind web resource collections to J2EE roles, plus a login configuration (both for the customers that want access control on some of the resources of your app). Then let the customers bind the J2EE roles defined in your web app to specific users and groups defined on their app servers. Customers that do not want any access control may bind all roles to the special group that represents all users (the name of that group is app-server specific; e.g. WebSphere calls it 'Everyone'). Customers that want to restrict access to a resource(s) in your webapp to a limited set of users or a user group may do so by binding the roles to users/groups per their needs. If authentication is required to verify a user's membership in a role, then the authentication method specified in the login config in your web.xml will be used.
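For reference, the kind of web.xml fragment being described looks roughly like this (a sketch; the names `restricted` and `customer-admin` are placeholders, not part of the original answer):

```xml
<security-constraint>
  <web-resource-collection>
    <web-resource-name>restricted</web-resource-name>
    <url-pattern>/admin/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <!-- A logical role; each customer maps it to their own users/groups -->
    <role-name>customer-admin</role-name>
  </auth-constraint>
</security-constraint>

<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>MyAppRealm</realm-name>
</login-config>

<security-role>
  <role-name>customer-admin</role-name>
</security-role>
```

The role-to-principal mapping itself lives outside this file, in the container's own deployment descriptor or admin console, which is what makes the per-customer binding possible.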
It is common practice to have user information stored in e.g. a datasource (database). Have your application use that same datasource for authentication. Databases are relatively easy to maintain. You can even implement some admin pages to maintain user information from within the application.
Specify authentication in container rather than web.xml
[ "java", "authentication", "containers", "war" ]
I created a script that requires selecting a beginning year then only displays years from that Beginning year -> 2009. It's just a startYear to endYear Range selector. The script only works in firefox. I'm LEARNING javascript, so I'm hoping someone can point me into the right direction. Live script can be found at <http://motolistr.com> ``` <script type="text/javascript"> function display_year2(start_year) { //alert(start_year); if (start_year != "Any") { var i; for(i=document.form.year2.options.length-1;i>=0;i--) { document.form.year2.remove(i); } var x = 2009; while (x >= start_year) { var optn = document.createElement("OPTION"); optn.text = x; optn.value = x; document.form.year2.options.add(optn); //alert(x); x--; } } else { var i; for(i=document.form.year2.options.length-1;i>=0;i--) { document.form.year2.remove(i); } var optn = document.createElement("OPTION"); optn.text = "Any"; optn.value = "Any"; document.form.year2.options.add(optn); } // end else } // end function </script> ``` Any ideas? Thanks, Nick
You are trying to set an onclick event on each option. IE does not support this. Try using the onchange event on the select element instead. Instead of this: ``` <select name="year1" id="year1"> <option value="Any" onclick="javascript:display_year2('Any');" >Any</option> <option value="2009" onclick="javascript:display_year2(2009);" >2009</option> <option value="2008" onclick="javascript:display_year2(2008);" >2008</option> </select> ``` Try this: ``` <select name="year1" id="year1" onchange="javascript:display_year2(this.options[this.selectedIndex].value)"> <option value="Any">Any</option> <option value="2009">2009</option> <option value="2008">2008</option> </select> ```
The reason it doesn't work isn't in your code snippet: ``` <OPTION onclick=javascript:display_year2(2009); value=2009> ``` Option.onclick is not an event that is generally expected to fire, but does in Firefox. The usual way to detect changes to Select values is via Select.onchange. Also, you do not need to include "javascript:" in event handlers; they are not URLs (also, never ever use javascript: URLs). Also, quote your attribute values; it's always a good idea and it's required when you start including punctuation in the values. Also, stick to strings for your values - you are calling display\_year2 with a Number, but option.value is always a String; trying to work with mixed datatypes is a recipe for confusion. In summary: ``` <select onchange="display_year2(this.options[this.selectedIndex].value)"> <option value="2009">2009</option> ... </select> ``` Other things: ``` var i; for(i=document.form.year2.options.length-1;i>=0;i--) { document.form.year2.remove(i); } ``` You can do away with the loop by writing to options.length. Also it's best to avoid referring to element names directly off documents - use the document.forms[] collection or getElementById: ``` var year2= document.getElementById('year2'); year2.options.length= 0; ``` Also: ``` var optn = document.createElement("OPTION"); optn.text = x; optn.value = x; document.form.year2.options.add(optn); ``` HTMLCollection.add is not a standard DOM method. The traditional old-school way of doing it is: ``` year2.options[year2.options.length]= new Option(x, x); ```
Simple javascript only working in FireFox
[ "javascript", "cross-browser" ]
It seems to me there is no way to detect whether a drag operation was successful or not, but there must be some way. Suppose that I want to perform a "move" from the source to the destination. If the user releases the mouse over some app or control that cannot accept the drop, how can I tell? For that matter, how can I tell when the drag is completed at all? I saw [this question](https://stackoverflow.com/questions/480156/how-do-i-tell-if-a-drag-drop-has-ended-in-winforms), but his solution does not work for me, and `e.Action` is *always* `Continue`.
I'm not sure if that can help you, but DoDragDrop method returns final DragDropEffects value. ``` var ret = DoDragDrop( ... ); if(ret == DragDropEffects.None) //not successfull else // etc. ```
Ah, I think I've got it. Turns out the call to DoDragDrop is actually *synchronous* (how lame), and returns a value of `DragDropEffects`, which is set to `None` if the op fails. So basically this means the app (or at least the UI thread) will be frozen for as long as the user is in the middle of a drag. That does not seem a very elegant solution to me. Ok cz\_dl, I see you just posted that very thing, so I'll give you the answer. This I don't understand though: how can the destination determine whether the op should be a move or a copy? Shouldn't that be up to the source app?
How can I tell if a drag and drop operation failed?
[ "c#", "wpf", "drag-and-drop" ]
curl\_unescape doesn't seem to be in pycurl; what do I use instead?
Have you tried `urllib.quote`? ``` import urllib print urllib.quote("some url") some%20url ``` [here's](http://docs.python.org/library/urllib.html) the documentation
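Worth noting: `quote` *escapes*; the direct counterpart of curl's unescape is `urllib.unquote` (on Python 3, `urllib.parse.unquote`). A quick sketch of both directions:

```python
from urllib.parse import quote, unquote  # Python 2: urllib.quote / urllib.unquote

escaped = quote("some url")
print(escaped)           # some%20url
print(unquote(escaped))  # some url
```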
[curl\_ unescape](http://curl.haxx.se/libcurl/c/curl_unescape.html) is an obsolete function. Use [curl\_ easy\_unescape](http://curl.haxx.se/libcurl/c/curl_easy_unescape.html) instead.
pycurl and unescape
[ "python", "pycurl" ]
It is not documented on the web site and people seem to be having problems setting up the framework. Can someone please show a step-by-step introduction for a sample project setup?
What Arlaharen said was basically right, except he left out the part which explains your linker errors. First of all, you need to build your application *without* the CRT as a runtime library. You should always do this anyways, as it really simplifies distribution of your application. If you don't do this, then all of your users need the Visual C++ Runtime Library installed, and those who do not will complain about mysterious DLL's missing on their system... for the extra few hundred kilobytes that it costs to link in the CRT statically, you save yourself a lot of headache later in support (trust me on this one -- I've learned it the hard way!). Anyways, to do this, you go to the target's properties -> C/C++ -> Code Generation -> Runtime Library, and it needs to be set as "Multi-Threaded" for your Release build and "Multi-Threaded Debug" for your Debug build. Since the gtest library is built in the same way, you need to make sure you are linking against the correct version of *it*, or else the linker will pull in another copy of the runtime library, which is the error you saw (btw, this shouldn't make a difference if you are using MFC or not). You need to build gtest as **both a Debug and Release** mode and keep both copies. You then link against gtest.lib/gtest\_main.lib in your Release build and gtestd.lib/gtest\_maind.lib in your Debug build. Also, you need to make sure that your application points to the directory where the gtest header files are stored (in properties -> C/C++ -> General -> Additional Include Directories), but if you got to the linker error, I assume that you already managed to get this part correct, or else you'd have a lot more compiler errors to deal with first.
(These instructions get the testing framework working for the Debug configuration. It should be pretty trivial to apply the same process to the Release configuration.) **Get Google C++ Testing Framework** 1. Download the latest [gtest framework](http://code.google.com/p/googletest/downloads/list) 2. Unzip to `C:\gtest` **Build the Framework Libraries** 1. Open `C:\gtest\msvc\gtest.sln` in Visual Studio 2. Set Configuration to "Debug" 3. Build Solution **Create and Configure Your Test Project** 1. Create a new solution and choose the template Visual C++ > Win32 > Win32 Console Application 2. Right click the newly created project and choose Properties 3. Change Configuration to Debug. 4. Configuration Properties > C/C++ > General > Additional Include Directories: Add `C:\gtest\include` 5. Configuration Properties > C/C++ > Code Generation > Runtime Library: If your code links to a runtime DLL, choose Multi-threaded Debug DLL (/MDd). If not, choose Multi-threaded Debug (/MTd). 6. Configuration Properties > Linker > General > Additional Library Directories: Add `C:\gtest\msvc\gtest\Debug` or `C:\gtest\msvc\gtest-md\Debug`, depending on the location of gtestd.lib 7. Configuration Properties > Linker > Input > Additional Dependencies: Add `gtestd.lib` **Verifying Everything Works** 1. Open the cpp in your Test Project containing the `main()` function. 2. Paste the following code: ``` #include "stdafx.h" #include <iostream> #include "gtest/gtest.h" TEST(sample_test_case, sample_test) { EXPECT_EQ(1, 1); } int main(int argc, char** argv) { testing::InitGoogleTest(&argc, argv); RUN_ALL_TESTS(); std::getchar(); // keep console window open until Return keystroke } ``` 3. Debug > Start Debugging If everything worked, you should see the console window appear and show you the unit test results.
How to set up Google C++ Testing Framework (gtest) with Visual Studio 2005
[ "c++", "visual-studio", "unit-testing", "visual-studio-2005", "googletest" ]
When I first learned C++ 6-7 years ago, what I learned was basically "C with Classes". `std::vector` was definitely an advanced topic, something you could learn about if you *really* wanted to. And there was certainly no one telling me that destructors could be harnessed to help manage memory. Today, everywhere I look I see RAII and [SFINAE](http://en.wikipedia.org/wiki/Substitution_failure_is_not_an_error) and STL and Boost and, well, Modern C++. Even people who are just getting started with the language seem to be taught these concepts almost from day 1. My question is, is this simply because I'm only seeing the "best", that is, the questions here on SO, and on other programming sites that tend to attract beginners (gamedev.net), or is this actually representative of the C++ community as a whole? Is modern C++ really becoming the default? Rather than being some fancy thing the experts write about, is it becoming "the way C++ just is"? Or am I just unable to see the thousands of people who still learn "C with classes" and write their own dynamic arrays instead of using `std::vector`, and do memory management by manually calling new/delete from their top-level code? As much as I want to believe it, it seems incredible if the C++ community as a whole has evolved so much in basically a few years. What are your experiences and impressions? (disclaimer: Someone not familiar with C++ might misinterpret the title as asking whether C++ is gaining popularity versus other languages. That's not my question. "Modern C++" is a common name for a dialect or programming style within C++, named after the book "[Modern C++ Design: Generic Programming and Design Patterns Applied](https://rads.stackoverflow.com/amzn/click/com/0201704315)", and I'm solely interested in this versus "old C++". So no need to tell me that C++'s time is past, and we should all use Python ;))
Here's how I think things have evolved. The first generation of C++ programmers were C programmers, who were in fact using C++ as C with classes. Plus, the STL wasn't in place yet, so that's what C++ essentially was. When the STL came out, that advanced things, but most of the people writing books, putting together curricula, and teaching classes had learned C first, then that extra C++ stuff, so the second generation learned from that perspective. As another answer noted, if you're comfortable writing regular for loops, changing to use `std::for_each` doesn't buy you much except the warm fuzzy feeling that you're doing things the "modern" way. Now, we have instructors and book writers who have been using the whole of C++, and getting their instructions from that perspective, such as Koenig & Moo's Accelerated C++ and Stroustrup's new textbook. So we don't learn `char*` then `std::strings`. It's an interesting lesson in how long it takes for "legacy" methods to be replaced, especially when they have a track record of effectiveness.
Absolutely yes. To me if you're not programming C++ in this "Modern C++" style as you term, then there's no point using C++! You might as well just use C. "Modern C++" should be the only way C++ is ever programmed in my opinion, and I would expect that everyone who uses C++ and has programmed in this "Modern" fashion would agree with me. In fact, I am always completely shocked when I hear of a C++ programmer who is unaware of things such as an auto\_ptr or a ptr\_vector. As far as I'm concerned, those ideas are basic and fundamental to C++, and so I couldn't imagine it any other way.
Is modern C++ becoming more prevalent?
[ "c++" ]
Are there any major issues to be aware of running a PHP 5 / Zend MVC production application on Windows? The particular application is Magento, an ecommerce system, and the client is really not interested in having a Linux box in their datacenter. Has anyone had luck getting PHP 5 and Zend MVC working correctly on IIS?
Yes, it works. Microsoft and Zend are working together to get PHP running on Windows as well as it runs on Linux. Zend even has a certified version of their core package (includes PHP, MySQL and a control panel) for Windows and IIS. The Zend Framework itself is also supposed to be truly platform independent. Another option is to use Apache on Windows instead, but IIS is faster for static page views and also has some other interesting options. .htaccess files are not supported, so for rewriting you need to rely on other IIS components.
Well, I got IIS, Zend and PHP all working nicely. Installed the rewrite module, followed [this article](http://blogs.iis.net/bills/archive/2006/09/19/How-to-install-PHP-on-IIS7-_2800_RC1_2900_.aspx) Got the Zend rewrite rule [from here](http://www.zendframework.com/manual/en/zend.controller.router.html) and ensured `short\_open\_tag = On` in my **php.ini** So far so good
PHP 5 and Zend MVC on Windows and IIS
[ "php", "windows", "zend-framework", "magento", "iis" ]
I have this enum: ``` [Flags] public enum ExportFormat { None = 0, Csv = 1, Tsv = 2, Excel = 4, All = Excel | Csv | Tsv } ``` I am trying to make a wrapper on this (or any, really) enum which notifies on change. Currently it looks like this: ``` public class NotifyingEnum<T> : INotifyPropertyChanged where T : struct { private T value; public event PropertyChangedEventHandler PropertyChanged; public NotifyingEnum() { if (!typeof (T).IsEnum) throw new ArgumentException("Type T must be an Enum"); } public T Value { get { return value; } set { if (!Enum.IsDefined(typeof (T), value)) throw new ArgumentOutOfRangeException("value", value, "Value not defined in enum, " + typeof (T).Name); if (!this.value.Equals(value)) { this.value = value; PropertyChangedEventHandler handler = PropertyChanged; if (handler != null) handler(this, new PropertyChangedEventArgs("Value")); } } } } ``` Since an enum can be assigned with any value really, I want to check if the given Value is defined. But I found a problem. If I here give it an enum consisting of for example `Csv | Excel`, then `Enum.IsDefined` will return `false`. Apparently because I haven't defined any enum consisting of those two. I guess that on some level is logical, but how should I then check if the given value is valid? In other words, to make it work, what do I need to swap this following line with? ``` if (!Enum.IsDefined(typeof (T), value)) ```
With flag-based enums, it's about whether a bit is set or not. So for 'ExportFormat', if bit 1 is set, it's CSV format, even though there might be more bits set. Is having bits 1 and 2 both set an invalid value? This is subjective: from the point of view of the values as a group, it is invalid (there's no bit pattern defined for bits 1 and 2 set together); however, as each value is a bit, looking at them individually, a value with bits 1 and 2 set can be valid. If one passes in the value 0011111011, is that a valid value? Well, it depends on what you're looking for: if you are looking at the whole value, then it's an invalid value, but if you're looking at individual bits, it's an OK value: it has bits set which aren't defined, but that's OK, as flag-based enums are checked 'per bit': you're not comparing them to a value, you're checking whether a bit is set or not. So, as your logic will check which bits are set to select which formats to pick, it's really not necessary to check whether the enum value is defined: you have 3 formats; if the bit of the corresponding format is set, the format is selected. That's the logic you should write.
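If you do still want a validity check, the per-bit view described above boils down to masking against the union of all defined flags: any bit outside that mask makes the value invalid. The arithmetic is the same in any language; here it is sketched in Python using the values from the question (the C# equivalent would test `(value & ~ExportFormat.All) == 0`):

```python
# Mirrors the ExportFormat enum from the question.
NONE, CSV, TSV, EXCEL = 0, 1, 2, 4
ALL = CSV | TSV | EXCEL  # 7

def is_valid_flags(value):
    # Valid iff no bit is set outside the defined flags.
    return value & ~ALL == 0

print(is_valid_flags(CSV | EXCEL))  # True: a combination of defined bits
print(is_valid_flags(8))            # False: bit 8 is not a defined flag
```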
We know that an enum value converted to a string will never start with a digit, but one that has an invalid value always will. Here's the simplest solution: ``` public static bool IsDefinedEx(this Enum yourEnum) { char firstDigit = yourEnum.ToString()[0]; if (Char.IsDigit(firstDigit) || firstDigit == '-') // Account for signed enums too.. return false; return true; } ``` Use that extension method instead of the stock IsDefined and that should solve your issue.
C#: Enum.IsDefined on combined flags
[ "c#", "enums", "constraints" ]
Help me settle an argument here. Is this: ``` SqlCommand cmd = new SqlCommand( "sql cmd", conn); ``` treated exactly the same as this: ``` const string s = "sql cmd"; SqlCommand cmd = new SqlCommand( s, conn); ``` Ie. does it make a difference if I state specifically that the string s is a const. And, if it is not treated in the same way, why not?
In the latter snippet, it's not that the *string* is const - it's that the *variable* is const. This is not quite the same as const in C++. (Strings are *always* immutable in .NET.) And yes, the two snippets do the same thing. The only difference is that in the first form you'll have a metadata entry for `s` as well, and if the variable is declared at the type level (instead of being a local variable) then other methods could use it too. Of course, due to string interning if you use "sql cmd" elsewhere you'll still only have a single string *object* in memory... but if you look at the type with reflection you'll find the const as a field in the metadata with the second snippet if it's declared as a constant field, and if it's just a local variable it'll be in the PDB file if you build one.
The value of a `const` always gets burned directly into the caller, so yes they are identical. Additionally, the compiler interns strings found in source code - a `const` is helpful if you are using the same string multiple times (purely from a maintenance angle - the result is the same either way).
C# - Is this declared string treated as a const?
[ "c#", "clr", "constants", "readonly" ]
I came across a javascript puzzle asking: Write a one-line piece of JavaScript code that concatenates all strings passed into a function: ``` function concatenate(/*any number of strings*/) { var string = /*your one line here*/ return string; } ``` @ [meebo](http://www.meebo.com/jobs/openings/javascript/ "Meebo") Seeing that the function arguments are represented as an indexed object MAYBE an array, i thought can be done in a recursive way. However my recursive implementation is throwing an error. --"conc.arguments.shift is not a function" -- ``` function conc(){ if (conc.arguments.length === 0) return ""; else return conc.arguments.shift() + conc(conc.arguments); } ``` it seems as though conc.arguments is not an array, but can be accessed by a number index and has a length property??? confusing -- please share opinions and other recursive implementations. Thanks
`arguments` [is said to be](http://developer.mozilla.org/En/Core_JavaScript_1.5_Reference/Functions_and_function_scope/arguments) an Array-like object. As you already saw you may access its elements by index, but you don't have all the Array methods at your disposal. Other examples of Array-like objects are HTML collections returned by getElementsByTagName() or getElementsByClassName(). jQuery, if you've ever used it, is also an Array-like object. After querying some DOM objects, inspect the resulting jQuery object with Firebug in the DOM tab and you'll see what I mean. Here's my solution for the Meebo problem: ``` function conc(){ if (arguments.length === 0) return ""; else return Array.prototype.slice.call(arguments).join(" "); } alert(conc("a", "b", "c")); ``` `Array.prototype.slice.call(arguments)` is a nice trick to transform our `arguments` into a veritable Array object. In Firefox `Array.slice.call(arguments)` would suffice, but it won't work in IE6 (at least), so the former version is what is usually used. Also, this trick doesn't work for collection returned by DOM API methods in IE6 (at least); it will throw an Error. By the way, instead of `call` one could use `apply`. A little explanation about Array-like objects. In JavaScript you may use pretty much anything to name the members of an object, and numbers are not an exception. So you may construct an object that looks like this, which is perfectly valid JavaScript: ``` var Foo = { bar : function() { alert('I am bar'); }, 0 : function() { alert('I am 1'); }, length : 1 } ``` The above object is an Array-like object for two reasons: 1. It has members which names are numbers, so they're like Array indexes 2. It has a `length` property, without which you cannot transform the object into a veritable Array with the construct: `Array.prototype.slice.call(Foo);` The arguments object of a Function object is pretty much like the Foo object, only that it has its special purpose.
[Mozilla on the subject](https://developer.mozilla.org/En/Core_JavaScript_1.5_Reference/Functions_and_function_scope/arguments): > The arguments object is not an array. It > is similar to an array, but does not have any array > properties except length. For example, > it does not have the pop method. > However it can be converted to a real > array: > > ``` > var args = Array.prototype.slice.call(arguments); > ``` Therefore the solution to your problem is fairly simple: ``` var string = Array.prototype.slice.call(arguments).join(""); ``` BTW: It further states: > The arguments object is a local > variable available within all > functions; arguments as a property of > Function can no longer be used. You should only use `arguments` instead of `func.arguments`
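For completeness, modern JavaScript (ES2015+) sidesteps the Array-like problem entirely: rest parameters give you a genuine Array, so no `slice.call` conversion is needed (a sketch, not part of the original answers):

```javascript
function conc(...strings) {
  // `strings` is a real Array, so join() is available directly.
  return strings.join("");
}

console.log(conc("a", "b", "c")); // "abc"
console.log(conc());              // ""
```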
recursively concatenating a javascript function's arguments
[ "javascript", "recursion", "variadic-functions" ]
I have an ABC with several derived classes. To create these derived classes I use the factory pattern: .h file: ``` class derivedFactory { public: base* createInstance(); }; ``` .cpp file: ``` base* derivedFactory::createInstance() { return new derived(); } ``` Is there any advantage to this over just having a free function: .h file: ``` base* derivedFactoryFunction(); ``` .cpp file: ``` base* derivedFactoryFunction() { return new derived(); } ``` Also: I use the abstract factory pattern for dependency injection. I might use an inheritance hierarchy based on the ABC: ``` class objectCreator { public: base* create() = 0; }; ``` Is there any advantage to using this over a function pointer: ``` boost::function<base* ()> factory_ptr; ``` Using boost::bind/lambda this seems to make my code more composable, and if I wish I can wrap a real factory object in it. I can see that there may be a slight performance decrease but this is much to worry about as it is only called during startup.
It depends on how flexible your factory needs to be. If the factory needs external information (like from a configuration file, program options, etc.) to determine how to construct objects, then an object makes sense. If all you will ever need is in the arguments to the factory, then a function is probably fine. The only advantage I can see to having a pointer is for testing, where you can substitute a different factory function.
Do you ever want more than one factory for a type? If so, you need factory objects.
Factory object vs factory function
[ "", "c++", "design-patterns", "factory", "" ]
I know that I can do this in Rails: ``` <%="hello" %> ``` but is there any way to do this ``` <% echo "hello" %> ``` and get it to show up in the response?
Have you tried `concat`? I have seen it while wandering through the Rails documentation. Not sure at all, since I am very new to Rails.
What you have to write is ``` <% concat "bank" %> ``` Now you can do something like ``` <% 10.times do concat "cat" end %> ``` for ten cats.
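Outside Rails, the same output-vs-no-output distinction can be seen with the stdlib ERB engine (Rails' `concat` helper is what lets you write into the output from inside a `<% %>` block):

```ruby
require 'erb'

# <%= %> writes its result into the rendered output; <% %> only executes.
template = ERB.new("<%= 'hello' %>-<% 'ignored' %>-<% 3.times do %>x<% end %>")
puts template.result  # => "hello--xxx"
```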
Writing To The Response in Rails? (Like "echo" in PHP)
[ "", "php", "ruby-on-rails", "translation", "" ]
Say you were mainly a C-syntax-style programmer and Linux systems administrator, and you were tasked with creating some simple automation tasks on Windows (monitoring of back-up files, process monitoring, ...). Which language would you prefer to write your scripts in? There's a large collection of VBS scripts out there (using VB syntax), but I'd prefer anything more C-related. What are your best experiences in using scripts for Windows? Any obvious downside or upside to a certain language?
I would use [Powershell](http://www.microsoft.com/windowsserver2003/technologies/management/powershell/default.mspx). * It has a vaguely C-like syntax. * It has an integrated shell. * The newest version (currently in CTP) includes a builtin IDE (although it is limited compared to other 3rd party ones). * It has easy access to something like 90% of the functionality in the .Net framework. * Going forward, MS products will explicitly provide Powershell integration. * It supports pipes.
Pretty much every script in VBS can be converted to an equivalent in JScript. There are a few gotchas to watch out for. Read up on the enumerator, and remember that VBS is case insensitive, so when translating a script certain methods may have the wrong casing.
Windows Scripting: VBScript, DOS, JS, Python,
[ "", "javascript", "windows", "vbscript", "scripting", "" ]
**Background**: I have a small Python application that makes life for developers releasing software in our company a bit easier. I build an executable for Windows using py2exe. The application as well as the binary are checked into Subversion. Distribution happens by people just checking out the directory from SVN. The program has about 6 different Python library dependencies (e.g. ElementTree, Mako) **The situation**: Developers want to hack on the source of this tool and then run it without having to build the binary. Currently this means that they need a python 2.6 interpreter (which is fine) and also have the 6 libraries installed locally using easy\_install. **The Problem** * This is not a public, classical open source environment: I'm inside a corporate network, the tool will never leave the "walled garden" and we have seriously inconvenient barriers to getting to the outside internet (NTLM authenticating proxies and/or machines without direct internet access). * I want the hurdles to starting to hack on this tool to be minimal: nobody should have to hunt for the right dependency in the right version, they should have to execute as little setup as possible. Optimally the prerequisites would be having a Python installation and just checking out the program from Subversion. **Anecdote**: The more self-contained the process is the easier it is to repeat it. I had my machine swapped out for a new one and went through the unpleasant process of having to reverse engineer the dependencies, reinstall distutils, hunting down the libraries online and getting them to install (see corporate internet restrictions above).
I sometimes use the approach I describe below, for the exact same reason that @Boris states: I would prefer that the use of some code is as easy as a) svn checkout/update - b) go. But for the record: * I use virtualenv/easy\_install most of the time. * I agree to a certain extent to the criticisms by @Ali A and @S.Lott. Anyway, the approach I use depends on modifying sys.path, and works like this: * Require python and setuptools (to enable loading code from eggs) on all computers that will use your software. * Organize your directory structure like this: ``` project/ *.py scriptcustomize.py file.pth thirdparty/ eggs/ mako-vNNN.egg ... .egg code/ elementtree/ *.py ... ``` * In your top-level script(s) include the following code at the top: ``` from scriptcustomize import apply_pth_files apply_pth_files(__file__) ``` * Add scriptcustomize.py to your project folder: ``` import os from glob import glob import fileinput import sys def apply_pth_files(scriptfilename, at_beginning=False): """At the top of your script: from scriptcustomize import apply_pth_files apply_pth_files(__file__) """ directory = os.path.dirname(scriptfilename) files = glob(os.path.join(directory, '*.pth')) if not files: return for line in fileinput.input(files): line = line.strip() if line and line[0] != '#': path = os.path.join(directory, line) if at_beginning: sys.path.insert(0, path) else: sys.path.append(path) ``` * Add one or more \*.pth file(s) to your project folder. On each line, put a reference to a directory with packages. For instance: ``` # contents of *.pth file thirdparty/code thirdparty/eggs/mako-vNNN.egg ``` * I "kind-of" like this approach. What I like: it is similar to how \*.pth files work, but for individual programs instead of your entire site-packages. What I do not like: having to add the two lines at the beginning of the top-level scripts. * Again: I use virtualenv most of the time. But I tend to use virtualenv for projects where I have tight control of the deployment scenario. 
In cases where I do not have tight control, I tend to use the approach I describe above. It makes it really easy to package a project as a zip and have the end user "install" it (by unzipping).
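The essence of the `*.pth` trick above — turning project-relative lines into `sys.path` entries — can be exercised in isolation (the directory and module names below are invented for the demo):

```python
import os
import sys
import tempfile

# Build a throwaway project layout containing one "third-party" module.
project = tempfile.mkdtemp()
code_dir = os.path.join(project, "thirdparty", "code")
os.makedirs(code_dir)
with open(os.path.join(code_dir, "demo_mod.py"), "w") as f:
    f.write("VALUE = 42\n")

# This is what apply_pth_files() does for each non-comment line
# of a *.pth file: resolve it relative to the project and append it.
sys.path.append(os.path.join(project, "thirdparty", "code"))

import demo_mod
print(demo_mod.VALUE)  # 42
```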
Just use [virtualenv](http://pypi.python.org/pypi/virtualenv) - it is a tool to create isolated Python environments. You can create a set-up script and distribute the whole bunch if you want.
How to deploy a Python application with libraries as source with no further dependencies?
[ "", "python", "deployment", "layout", "bootstrapping", "" ]
I'm writing a multilingual website. I have several files on the server, like: ``` /index.php /files.php /funny.php ``` and would like to add language support by placing a language code into the URL, like this: ``` http://mywebsite/en/index.php ``` would redirect to: ``` http://mywebsite/index.php?lang=en ``` And ``` http://mywebsite/en/files.php ``` would redirect to: ``` http://mywebsite/files.php?lang=en ``` I would like to support more languages, for example: ``` http://mywebsite/ch-ZH/index.php ``` And I would like this to work only for files with the php and php5 extensions. The rest of the files should stay the same as they are. So, for example, when I go to the address ``` http://mywebsite/ch-ZH/index.php ``` I would like my PHP to recognize that the current path is ``` http://mywebsite ``` and NOT ``` http://mywebsite/ch-ZH ``` This is necessary for me because in my PHP code I rely on the current path, and I would like my scripts to keep working as they do now. Could you please show how to prepare the .htaccess file on Apache to meet these criteria?
Try this: ``` RewriteEngine on RewriteRule ^([a-z]{2}(-[A-Z]{2})?)/(.*) $3?lang=$1 [L,QSA] ``` And for the *current path* problem, you have to know how relative URIs are resolved: Relative URIs are resolved *by the client* from a base URI that is the URI (*not filesystem path!*) of the current resource if not declared otherwise. So if a document has the URI `/en/foo/bar` and the relative path `./baz` in it, the *client* resolves this to `/en/foo/baz` (as obviously the client doesn’t know about the actual filesystem path). For having `./baz` resolved to `/baz`, you have to change the base URI which can be done with the [HTML element `BASE`](http://www.w3.org/TR/html4/struct/links.html#edef-BASE).
Something like this should do the trick: ``` RewriteEngine on RewriteRule ^([^/]+)/(.*)\.php(5?)$ $2.php$3?lang=$1 [L] ``` This will only match .php and .php5 files, so the rest of your files will be unaffected.
How to translate /en/file.php to file.php?lang=en in htaccess Apache
[ "", "php", ".htaccess", "mod-rewrite", "internationalization", "multilingual", "" ]
Is there a way to detect the language of the OS from within a C# class?
Unfortunately, the previous answers are not 100% correct. `CurrentCulture` is the culture info of the running thread, and it is used for culture-sensitive operations such as formatting a `DateTime` or numbers for display. `CurrentUICulture` determines which localized resources are used, i.e. the language of the UI. Because the current thread's `Culture` or `UICulture` might have been changed, if you want to know what the OS `CultureInfo` actually is, use `CultureInfo.InstalledUICulture`. Also, there is another question about this subject (more recent than this one) with a detailed answer: [Get operating system language in c#](https://stackoverflow.com/questions/5710127/get-operating-system-language-in-c).
With the `System.Globalization.CultureInfo` class you can determine what you want. With `CultureInfo.CurrentCulture` you get the system set culture, with `CultureInfo.CurrentUICulture` you get the user set culture.
detect os language from c#
[ "", "c#", "operating-system", "" ]
I am trying to assign absence dates to an academic year, the academic year being 1st August to the 31st July. So what I would want would be: 31/07/2007 = 2006/2007 02/10/2007 = 2007/2008 08/01/2008 = 2007/2008 Is there an easy way to do this in SQL Server 2000?
A variant with less string handling ``` SELECT AbsenceDate, CASE WHEN MONTH(AbsenceDate) <= 7 THEN CONVERT(VARCHAR(4), YEAR(AbsenceDate) - 1) + '/' + CONVERT(VARCHAR(4), YEAR(AbsenceDate)) ELSE CONVERT(VARCHAR(4), YEAR(AbsenceDate)) + '/' + CONVERT(VARCHAR(4), YEAR(AbsenceDate) + 1) END AcademicYear FROM AbsenceTable ``` Result: ``` 2007-07-31 => '2006/2007' 2007-10-02 => '2007/2008' 2008-01-08 => '2007/2008' ```
Should work this way: ``` select case when month(AbsenceDate) <= 7 then ltrim(str(year(AbsenceDate) - 1)) + '/' + ltrim(str(year(AbsenceDate))) else ltrim(str(year(AbsenceDate))) + '/' + ltrim(str(year(AbsenceDate) + 1)) end ``` Example: ``` set dateformat ymd declare @AbsenceDate datetime set @AbsenceDate = '2008-03-01' select case when month(@AbsenceDate) <= 7 then ltrim(str(year(@AbsenceDate) - 1)) + '/' + ltrim(str(year(@AbsenceDate))) else ltrim(str(year(@AbsenceDate))) + '/' + ltrim(str(year(@AbsenceDate) + 1)) end ```
t-sql assign dates to academic year
[ "", "sql", "" ]
I need to execute a PowerShell script from within C#. The script needs commandline arguments. This is what I have done so far: ``` RunspaceConfiguration runspaceConfiguration = RunspaceConfiguration.Create(); Runspace runspace = RunspaceFactory.CreateRunspace(runspaceConfiguration); runspace.Open(); RunspaceInvoke scriptInvoker = new RunspaceInvoke(runspace); Pipeline pipeline = runspace.CreatePipeline(); pipeline.Commands.Add(scriptFile); // Execute PowerShell script results = pipeline.Invoke(); ``` scriptFile contains something like "C:\Program Files\MyProgram\Whatever.ps1". The script uses a commandline argument such as "-key Value" whereas Value can be something like a path that also might contain spaces. I don't get this to work. Does anyone know how to pass commandline arguments to a PowerShell script from within C# and make sure that spaces are no problem?
Try creating scriptfile as a separate command: ``` Command myCommand = new Command(scriptfile); ``` then you can add parameters with ``` CommandParameter testParam = new CommandParameter("key","value"); myCommand.Parameters.Add(testParam); ``` and finally ``` pipeline.Commands.Add(myCommand); ``` --- ***Here is the complete, edited code:*** ``` RunspaceConfiguration runspaceConfiguration = RunspaceConfiguration.Create(); Runspace runspace = RunspaceFactory.CreateRunspace(runspaceConfiguration); runspace.Open(); Pipeline pipeline = runspace.CreatePipeline(); //Here's how you add a new script with arguments Command myCommand = new Command(scriptfile); CommandParameter testParam = new CommandParameter("key","value"); myCommand.Parameters.Add(testParam); pipeline.Commands.Add(myCommand); // Execute PowerShell script results = pipeline.Invoke(); ```
I have another solution. I just want to test if executing a PowerShell script succeeds, because perhaps somebody might change the policy. As the argument, I just specify the path of the script to be executed. ``` ProcessStartInfo startInfo = new ProcessStartInfo(); startInfo.FileName = @"powershell.exe"; startInfo.Arguments = @"& 'c:\Scripts\test.ps1'"; startInfo.RedirectStandardOutput = true; startInfo.RedirectStandardError = true; startInfo.UseShellExecute = false; startInfo.CreateNoWindow = true; Process process = new Process(); process.StartInfo = startInfo; process.Start(); string output = process.StandardOutput.ReadToEnd(); Assert.IsTrue(output.Contains("StringToBeVerifiedInAUnitTest")); string errors = process.StandardError.ReadToEnd(); Assert.IsTrue(string.IsNullOrEmpty(errors)); ``` With the contents of the script being: ``` $someVariable = "StringToBeVerifiedInAUnitTest" $someVariable ```
Execute PowerShell Script from C# with Commandline Arguments
[ "", "c#", "command-line", "powershell", "scripting", "arguments", "" ]
When I use an Iterator over Objects I use a *while loop* (as written in every book teaching Java, such as **Thinking in Java** by Bruce Eckel): ``` Iterator it=... while(it.hasNext()){ //... } ``` but sometimes I have seen somebody use a *for loop* instead: ``` Iterator it=... for (Iterator it=...; it.hasNext();){ //... } ``` I don't understand this choice: * I use the *for loop* when I have a collection with an ordinal sequence (such as an array) or with a special rule for the step (generally declared as a simple increment `counter++`). * I use the *while loop* when I don't have these constraints, only a logical condition for exiting. Is this purely a question of coding style, or is there some other reason (performance, for example) that I don't know about? Thanks for any feedback
The correct syntax for the for loop is: ``` for (Iterator it = ...; it.hasNext(); ){ //... } ``` (The preceding declaration in your code is superfluous, as well as the extra semicolon in the for loop heading.) Whether you use this syntax or the `while` loop is a matter of taste, both translate to exactly the same. The generic syntax of the for loop is: ``` for (<init stmt>; <loop cond>; <iterate stmt>) { <body>; } ``` which is equivalent to: ``` <init stmt>; while (<loop cond>) { <body>; <iterate stmt>; } ``` **Edit:** Actually, the above two forms are not entirely equivalent, *if* (as in the question) the variable is declared with the init statement. In this case, there will be a difference in the scope of the iterator variable. With the for loop, the scope is limited to the loop itself, in the case of the while loop, however, the scope extends to the end of the enclosing block (no big surprise, since the declaration is outside the loop). Also, as others have pointed out, in newer versions of Java, there is a shorthand notation for the for loop: ``` for (Iterator<Foo> it = myIterable.iterator(); it.hasNext(); ) { Foo foo = it.next(); //... } ``` can be written as: ``` for (Foo foo : myIterable) { //... } ``` With this form, you of course lose the direct reference to the iterator, which is necessary, for example, if you want to delete items from the collection while iterating.
The purpose of declaring the `Iterator` within the for loop is to *minimize the scope of your variables*, which is a good practice. When you declare the `Iterator` outside of the loop, then the reference is still valid / alive after the loop completes. 99.99% of the time, you don't need to continue to use the `Iterator` once the loop completes, so such a style can lead to bugs like this: ``` //iterate over first collection Iterator it1 = collection1.iterator(); while(it1.hasNext()) { //blah blah } //iterate over second collection Iterator it2 = collection2.iterator(); while(it1.hasNext()) { //oops copy and paste error! it1 has no more elements at this point } ```
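One concrete case where the explicit, loop-scoped iterator is required (the enhanced for loop cannot do this) is removing elements during iteration; a minimal sketch:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class IteratorRemoveDemo {
    public static void main(String[] args) {
        List<Integer> numbers = new ArrayList<Integer>();
        for (int i = 1; i <= 5; i++) {
            numbers.add(i);
        }
        // The iterator's scope is limited to the loop, and it.remove()
        // is the safe way to delete while iterating; calling
        // numbers.remove() inside an enhanced for loop would throw
        // ConcurrentModificationException.
        for (Iterator<Integer> it = numbers.iterator(); it.hasNext(); ) {
            if (it.next() % 2 == 0) {
                it.remove();
            }
        }
        System.out.println(numbers); // [1, 3, 5]
    }
}
```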
Difference between moving an Iterator forward with a for statement and a while statement
[ "", "java", "iterator", "for-loop", "while-loop", "" ]
Are there any products that will decrease C++ build times and that can be used with MSVC?
If it has to be a product, look at [Xoreax IncrediBuild](http://www.xoreax.com/), which distributes the build to machines on the network. Other than that: * Solid build machines: as much RAM as fits, and fast separate disks. * Splitting into separate projects (DLLs, libraries). They can build in parallel, too (use a dual or quad core machine; builds are easily bottlenecked by disk). * Intelligent use of headers, including precompiled headers. That's not easy, and often there are other stakeholders. [PIMPL](http://en.wikipedia.org/wiki/Opaque_pointer) helps, too.
Usage of [precompiled headers](http://msdn.microsoft.com/en-us/library/9d87zb00(VS.71).aspx) might decrease your compile time.
product to decrease c++ compile time?
[ "", "c++", "compile-time", "" ]
I have a more complicated issue (than question 'Java map with values limited by key's type parameter' question) for mapping key and value type in a Map. Here it is: ``` interface AnnotatedFieldValidator<A extends Annotation> { void validate(Field f, A annotation, Object target); Class<A> getSupportedAnnotationClass(); } ``` Now, I want to store validators in a map, so that I can write the following method: ``` validate(Object o) { Field[] fields = getAllFields(o.getClass()); for (Field field: fields) { for (Annotation a: field.getAnnotations()) { AnnotatedFieldValidator validator = validators.get(a); if (validator != null) { validator.validate(field, a, target); } } } } ``` (type parameters are omitted here, since I do not have the solution). I also want to be able to register my validators: ``` public void addValidator(AnnotatedFieldValidator<? extends Annotation> v) { validators.put(v.getSupportedAnnotatedClass(), v); } ``` With this (only) public modifier method, I can ensure the map contains entries for which the key (annotation class) matches the validator's supported annotation class. Here is a try: I declare the validators Map like this: ``` private Map<Class<? extends Annotation>, AnnotatedFieldValidator<? extends Annotation>> validators; ``` I'm aware I cannot properly link the key and value (link is assumed OK due to only access through `addValidator()`), so I tried a cast: ``` for (Annotation a: field.getAnnotations()) { AnnotatedFieldValidator<? extends Annotation> validator = validators.get(a); if (validator != null) { validator.validate(field, validator.getSupportedAnnotationClass().cast(a), target); } } ``` But this does not work: `The method validate(Field, capture#8-of ?, Object) in the type AnnotatedFieldValidator<capture#8-of ?> is not applicable for the arguments (Field, capture#9-of ?, Object)`. 
I can't figure out why this does not work: the `AnnotatedFieldValidator` has a single type parameter (A), which is used both as the return type of `getSupportedAnnotationClass()` and as a parameter of `validate()`; thus, when casting the annotation to supportedAnnotationClass, I should be able to pass it as the parameter to `validate()`. Why is the result of `getSupportedAnnotationClass()` considered a different type than the parameter of `validate()`? I can solve the `validate()` method by removing wildcards in the validators declaration and `validate()` method, but then, of course, `addValidator()` doesn't compile.
Thank you all for your answers, it really helped me come to the following solution. The answer from flicken showed me the way: I have to extract some code into a parameterized method, but instead of extracting `validators.get()` into a method, I can extract the whole validation process. Doing so, I can use a programmatic cast (which I assume is OK, since I control the coherence of the key-to-value mapping): ``` public void validate(Object o) { Field[] fields = getFields(o.getClass()); for (Field field : fields) { Annotation[] annotations = field.getAnnotations(); for (Annotation annotation : annotations) { AnnotatedFieldValidator<? extends Annotation> validator = validators.get(annotation.annotationType()); if (validator != null) { doValidate(field, validator, annotation, o); } } } } ``` And then, the doValidate() method is as follows: ``` private <A extends Annotation> void doValidate(Field field, AnnotatedFieldValidator<A> validator, Annotation a, Object o) { // I assume this cast is safe, since the validators Map // is only filled through addValidator() A annotation = validator.getSupportedAnnotationClass().cast(a); validator.validate(field, annotation, o); } ``` No cast (OK, except Class.cast()...), no unchecked warnings, no raw type, I am happy.
You can extract a method to get the validator. All access to the `validators` Map goes through a type-checked method, and is thus type-safe. ``` protected <A extends Annotation> AnnotatedFieldValidator<A> getValidator(A a) { // unchecked cast, but isolated in this method return (AnnotatedFieldValidator<A>) validators.get(a.annotationType()); } public void validate(Object o) { Field[] fields = getAllFields(o.getClass()); for (Field field : fields) { for (Annotation a : field.getAnnotations()) { AnnotatedFieldValidator<Annotation> validator = getValidator(a); if (validator != null) { validator.validate(field, a, o); } } } } // Generic map private Map<Class<? extends Annotation>, AnnotatedFieldValidator<? extends Annotation>> validators; ``` (Removed second suggestion as duplicate.)
limit map key and value types - more complicated
[ "", "java", "generics", "" ]
I'm looking for the best approach to dealing with duplicate code in a legacy PHP project with about 150k lines of code. Is this something best approached manually or are there standalone duplicate code detectors that will ease the pain?
As the other answers already mention, this should be approached manually, because you may want to change other things as you go along to make the code base cleaner. Maybe the actual invocation is already superfluous, or similar fragments can be combined. Also, in practice people usually slightly change the copied code, so there will often not be direct duplicates, but close variants. I fear automatic c&p detection will mostly fail you there. There are however refactoring tools that can help you with actually performing the changes (and sometimes also with finding likely candidates). Google for "php refactoring", there are quite a few tools available, both standalone and as part of IDEs.
The [CloneDR](http://www.semanticdesigns.com/Products/Clone) finds duplicate code, both exact copies and near-misses, across large source systems, parameterized by language syntax. For each detected set of clones, it will even propose a sketch of the abstraction code that could be used to replace the clones. It is available for many languages, including PHP. A sample PHP clone detection report for Joomla (a PHP framework) can be found at the link.
How to deal with duplicate code under Linux?
[ "", "php", "linux", "copy-paste", "code-duplication", "" ]
When should the keyword 'this' be used within C# class definitions? Is it standard to use the form "this.Method()" from within a class? Or to just use "Method()"? I have seen both, and usually go with the second choice, but I would like to learn more about this subject.
Most of the time it is redundant and can be omitted; a few exceptions: * to call a chained constructor: `Foo() : this("bar") {}` * to disambiguate between a local argument/variable and a field: `this.foo = foo;` etc * to call an **extension** method on the current instance: `this.SomeMethod();` (where defined as `public static SomeMethod(this Foo foo) {...}`) * to pass a reference to the current instance to an external method: `Helper.DoSomething(this);`
*this* is mainly used to explicitly use a class member when the name alone would be ambiguous, as in this example: ``` public class FooBar { private string Foo; private string Bar; public void DoWhatever(string Foo, string Bar) { // use *this* to indicate your class members this.Foo = Foo; this.Bar = Bar; } public void DoSomethingElse() { // No ambiguity, no need to use *this* to indicate class members Debug.WriteLine(Foo + Bar); } } ``` Aside from that, some people prefer to prefix internal method calls (`this.Method()`) because it makes it more obvious that you are not calling any external method, but I don't find it important. It definitely has no effect on the resulting program being more or less efficient.
What is the proper use of keyword 'this' in private class members?
[ "", "c#", ".net", "" ]
Strings are considered reference types yet can act like values. When shallow copying something either manually or with the MemberwiseClone(), how are strings handled? Are they considred separate and isolated from the copy and master?
Strings ARE reference types. However, they are immutable (they cannot be changed), so it wouldn't really matter whether they were copied by value or by reference. If they are shallow-copied, the reference will be copied... but you can't change them, so you can't affect two objects at once.
Consider this: ``` public class Person { string name; // Other stuff } ``` If you call MemberwiseClone, you'll end up with two separate instances of Person, but their `name` variables, while distinct, will have the same value - they'll refer to the same string instance. This is because it's a shallow clone. If you change the name in one of those instances, that won't affect the other, because the two variables themselves are separate - you're just changing the value of one of them to refer to a different string.
How do strings work when shallow copying something in C#?
[ "", "c#", ".net", "shallow-copy", "" ]
Is there a way to set the global Windows PATH environment variable programmatically (C++)? As far as I can see, putenv sets it only for the current application. Changing it directly in the registry `(HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment)` is also an option, though I would prefer an API method if there is one.
MSDN [Says](http://msdn.microsoft.com/en-us/library/ms682653(VS.85).aspx): > Calling SetEnvironmentVariable has no > effect on the system environment > variables. **To programmatically add or > modify system environment variables, > add them to the > HKEY\_LOCAL\_MACHINE\System\CurrentControlSet\Control\Session > Manager\Environment registry key, then > broadcast a WM\_SETTINGCHANGE message > with lParam set to the string > "Environment".** This allows > applications, such as the shell, to > pick up your updates. Note that the > values of the environment variables > listed in this key are limited to 1024 > characters.
As was pointed out earlier, to change the PATH at the *machine level* just change this registry entry: ``` HLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment ``` But you can also set the PATH at the *user level* by changing this registry entry: ``` HKEY_CURRENT_USER\Environment\Path ``` And you can also set the PATH at the *application level* by adding the application\Path details to this registry entry: ``` HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\ ```
Is there a way to set the environment path programmatically in C++ on Windows?
[ "", "c++", "winapi", "path", "environment-variables", "" ]
In Javascript, I have a certain string, and I would like to somehow measure how much space (in pixels) it will take within a certain element. Basically what I have is an element that will float above everything else (like a tooltip), and I need to set its width manually through Javascript, so it will adjust to the text inside. I can't have it "auto-grow" naturally like an inline element would grow horizontally to contain its children. In Windows there are APIs that do this. Is there a way to do the same thing in Javascript? If there is no decent way, what approach do you believe is feasible? (Like, trying out different widths and checking the height to make sure it didn't go over a certain threshold). The less "pixel values" I can hardcode in my JS the better, obviously.
try this <http://blog.mastykarz.nl/measuring-the-length-of-a-string-in-pixels-using-javascript/>
Given this HTML `<span>text here</span>` you have to read the offsetWidth attribute of the span, which is only assigned when the element itself is added to the DOM *without* a style that makes it invisible. Technically what this means is that the browser has to be able to visually load the element in the DOM to be able to construct and assign the offsetWidth attribute. Something like this would work: ``` var span = document.createElement("span"); span.appendChild(document.createTextNode("text here")); span.style.cssText = ""; // to make sure the element doesn't have "display: none" or such document.body.appendChild(span); // adding it to the DOM var textWidth = span.offsetWidth; // be sure to hide or remove the span if you don't need it anymore ```
How can you measure the space that a text will take in Javascript?
[ "", "javascript", "css", "text", "layout", "" ]
## Static variable gotcha in PHP I am from a Java background and have switched to PHP for one project recently. I have found one unexpected behaviour in PHP. > **A value set to a static variable does** > **not stay persistent across** > **requests.** I am not sure if this is the expected behaviour. In Java, you can always persist very commonly used variables, or say constants like dbname, hostname, username, password, across requests so that you don't have to read them from local property files every time. Is this behaviour normal? And if it is normal, is there any alternative by which I can persist values assigned to variables across requests? Can someone suggest a better way of doing this in PHP?
No; a static variable will only stay set for the current request, so you'll need to add it to a session to persist its value across requests. Example: ``` session_start(); class Car { public static $make; public function __construct($make) { self::$make = $make; } } $c = new Car('Bugatti'); echo '<p>' . Car::$make . '</p>'; unset($c); if (!isset($_SESSION['make'])) { echo '<p>' . Car::$make . '</p>'; $c = new Car('Ferrari'); echo '<p>' . Car::$make . '</p>'; } $_SESSION['make'] = Car::$make; echo '<p>' . $_SESSION['make'] . '</p>'; ```
Static variables are only applicable to one single request. If you want data to persist between requests for a specific user only, use session variables. A good starter tutorial for them is located here: <http://www.tizag.com/phpT/phpsessions.php>
Does static variables in php persist across the requests?
[ "", "php", "static", "persistence", "" ]
I have a table and I am highlighting alternate columns in the table using jquery ``` $("table.Table22 tr td:nth-child(even)").css("background","blue"); ``` However I have another `<table>` inside a `<tr>` as the last row. How can I avoid highlighting columns of tables that are inside `<tr>` ?
Qualify it with the `>` child selector so only direct children match: ``` $("table.Table22 > tbody > tr > td:nth-child(even)").css("background","blue"); ``` You need the `tbody` qualifier too, as browsers automatically insert a `tbody` **whether you have it in your markup or not**. *Edit*: woops. Thanks Annan. *Edit 2*: stressed tbody.
Untested but perhaps: <http://docs.jquery.com/Traversing/not#expr> ``` $("table.Table22 tr td:nth-child(even)").not("table.Table22 tr td table").css("background","blue"); ```
Highlighting columns in a table with jQuery
[ "", "javascript", "jquery", "" ]
I have a thread push-backing to STL list and another thread pop-fronting from the list. Do I need to lock the list with mutex in such case?
From [SGI's STL on Thread Safety](http://www.sgi.com/tech/stl/thread_safety.html): > If multiple threads access a single container, and at least one thread may potentially write, then the user is responsible for ensuring mutual exclusion between the threads during the container accesses. Since both your threads modify the list, I guess you have to lock it.
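To make the "user is responsible for ensuring mutual exclusion" part concrete, here is a minimal sketch (mine, not from the quoted SGI page, and using C++11's `std::mutex` and `std::thread`, which postdate this answer): wrap the shared list so that every `push_back` and `pop_front` takes the same lock.

```cpp
#include <list>
#include <mutex>
#include <thread>

// Hypothetical wrapper: both threads must go through the same mutex,
// since both of them modify the shared std::list.
template <typename T>
class LockedList {
public:
    void push_back(const T& value) {
        std::lock_guard<std::mutex> lock(mutex_);
        list_.push_back(value);
    }

    // Non-blocking pop: returns false when the list is currently empty.
    bool try_pop_front(T& out) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (list_.empty()) return false;
        out = list_.front();
        list_.pop_front();
        return true;
    }

private:
    std::list<T> list_;
    std::mutex mutex_;
};

// Demo: one producer pushing `items` values, one consumer popping them all.
inline int locked_list_demo(int items) {
    LockedList<int> shared;
    std::thread producer([&] {
        for (int i = 0; i < items; ++i) shared.push_back(i);
    });
    int popped = 0;
    std::thread consumer([&] {
        int value;
        while (popped < items) {
            if (shared.try_pop_front(value)) ++popped;
        }
    });
    producer.join();
    consumer.join();
    return popped; // equals `items` once both threads finish
}
```

Whether you spin, block, or use a condition variable when the list is empty is a separate design choice; the only point here is that both threads take the same mutex.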
Most STL implementations are thread safe in the sense that you can access *several instances* of a list type from *several threads* without locking. But you MUST lock when you are accessing the same instance of your list. Have a look at this for more information: [thread safety in the SGI STL](http://www.sgi.com/tech/stl/thread_safety.html)
Do I need to lock STL list with mutex in push_back pop_front scenario?
[ "", "c++", "multithreading", "stl", "" ]
During our work as web developers for a meteorological company, we are faced with the same task over and over again: get some files from somewhere (FTP/web/directory/mail) and import the contained data into a database. Of course the file format is never the same, the databases are always designed differently, countless special cases have to be handled, etc, etc. So now I'm planning an importing framework for exactly this kind of work. Since we're all experienced PHP developers and the current scripts are either PHP or Perl, we'll stick with PHP as the scripting language. * A data getter will fetch the file from the source, open it and store the content in a string variable. (Don't worry, PHP will get enough memory from us.) * The data handler will do the complicated work of converting the string into some kind of array. * The array will be saved to the database, written to a new file, or whatever else we're supposed to do with it. Along with this functionality there will be some common error handling, log writing and email reporting. The idea is to use a collection of classes (some getter classes, a lot of specialised handlers, some writer classes). **My question:** How do I practically organize these classes in a working script? Do I invent some kind of meta language which gets interpreted, and then the classes are called accordingly? Or do I just provide some simple interfaces these classes have to implement, and my users (like I said: experienced PHP developers) will write small PHP scripts loading these classes? The second version almost certainly offers the greatest flexibility and extensibility. Do you have any other ideas concerning such an undertaking?
Working in a similar environment with dozens of different external data formats that need to be imported and exported, I can recommend that you at least *try* to get the parties involved to unify the data formats. We had some success by developing tools that help others outside our company transform their data into our format. We also gave them the source code, for free. Some of them are now transforming their data for us using our tools, and if they change their format, it is *they* who change the transformation tool. One less headache for us. In one case it even led to another company switching to the file format our systems use internally. Granted, it is only one case, but I consider it a first step on a long road ;-)
I suggest borrowing concepts from [Data Transformation Services](http://en.wikipedia.org/wiki/Data_Transformation_Services) (DTS). You could have data sources and data sinks, import tasks, transformation tasks and so on.
What's the best practice for developing a PHP data import framework?
[ "", "php", "frameworks", "import", "etl", "" ]
I can't understand the motivation of the PHP authors for adding type hinting. I lived happily before it appeared. Then, when it was added in PHP 5, I started specifying types everywhere. Now I think it's a bad idea, since duck typing ensures minimal coupling between classes and encourages code modularization and reuse. It feels like type hints split the language into two dialects: some people write code in a static-language style, with the hints, and others stick to the good old dynamic language model. Or is it not an "all or nothing" situation? Should I somehow mix those two styles, where appropriate?
It's not about static vs dynamic typing; PHP is still dynamic. It's about contracts for interfaces. If you know a function requires an array as one of its parameters, enforce that right there in the function definition. I prefer to fail fast, rather than raising an error later, deep inside the function. (Also note that you can't specify type hints for bool, int, string or float, which makes sense in a dynamic context.)
You should use type hinting whenever the code in your function definitely relies on the type of the passed parameter. The code would generate an error anyway, but the type hint will give you a better error message.
(When) should I use type hinting in PHP?
[ "", "php", "coding-style", "" ]
I have a simple `List<string>` and I'd like it to be displayed in a `DataGridView` column. If the list contained more complex objects, I would simply set the list as the value of its `DataSource` property. But when doing this: ``` myDataGridView.DataSource = myStringList; ``` I get a column called `Length`, and the strings' lengths are displayed. How can I display the actual string values from the list in a column?
That's because DataGridView looks for properties of the contained objects. For string there is just one public property: Length. So you need a wrapper for a string, like this: ``` public class StringValue { public StringValue(string s) { _value = s; } public string Value { get { return _value; } set { _value = value; } } string _value; } ``` Then bind a `List<StringValue>` to your grid. It works.
Try this: ``` IList<String> list_string= new List<String>(); DataGridView.DataSource = list_string.Select(x => new { Value = x }).ToList(); dgvSelectedNode.Show(); ```
How to bind a List<string> to a DataGridView control?
[ "", "c#", "binding", "datagridview", "" ]
Is there an API call to determine the size and position of window caption buttons? I'm trying to draw vista-style caption buttons onto an owner drawn window. I'm dealing with c/c++/mfc. Edit: Does anyone have a code example to draw the close button?
I've found the function required to get the position of the buttons in vista: [WM\_GETTITLEBARINFOEX](https://learn.microsoft.com/en-us/windows/desktop/menurc/wm-gettitlebarinfoex) This link also shows the system metrics required to get all the spacing correct (shame it's not a full dialog picture though). This works perfectly in Vista, and mostly in XP (in XP there is slightly too much of a gap between the buttons). [![From http://shellrevealed.com/photos/blog_images/images/4538/original.aspx](https://i.stack.imgur.com/TR01p.png)](https://i.stack.imgur.com/TR01p.png)
[GetSystemMetrics](http://msdn.microsoft.com/en-us/library/ms724385.aspx) gives all this information. To draw within the window decoration, use [GetWindowDC](http://msdn.microsoft.com/en-us/library/ms534830(VS.85).aspx).
How to get size and position of window caption buttons (minimise, restore, close)
[ "", "c++", "winapi", "mfc", "windows-vista", "uxtheme", "" ]
I have a legacy DLL written in C that I'd like to call from a C# .NET application. The problem is that the DLL interface for the C DLL is fairly complicated. It's something like this: ``` __declspec(dllexport) void __stdcall ProcessChunk( void *p_prochdl, const BIG_INPUT_STRC *p_inparams, BIG_OUTPUT_STRC *p_outparams ); ``` The BIG\_INPUT\_STRC/BIG\_OUTPUT\_STRC contain all kinds of things... pointers to buffer arrays, enumerated parameters, integer parameters, etc. In short, they're complicated. First question: Is there an easy way to get all of the structure information that is contained in the DLL header file into the C# class, or do you need to literally copy and paste everything to C# and re-define it? This seems redundant. Related to that, what is the correct way to pass structs into the unmanaged DLL from C#? Finally, is there an example of how to correctly pass buffer arrays from C# into the unmanaged DLL? Alternatively, how can I pass a two-dimensional array into the DLL? Thanks, -Greg
It's quite straightforward to do this sort of thing in C# using P/Invoke. I believe you are going to have to define the data structures in C# manually. I would recommend you take a look at this [MSDN article](http://msdn.microsoft.com/en-us/library/aa288468.aspx) on the subject.
You'll need to make extensive use of [.net marshalling](http://msdn.microsoft.com/en-us/library/aa446536.aspx). First you need to re-define the C structs in your C# code, then make sure everything gets marshalled properly using the [MarshalAs](http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.marshalasattribute.aspx) attribute. If you need to pass a pointer-to-structure in C# back to the C function you can use the [Marshal.StructToPtr](http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.marshal.structuretoptr.aspx) function. Buffer arrays, assuming they're defined as byte[], you can marshal using the following technique (note it requires an unsafe context): ``` byte[] buffer = ...; fixed(byte *pBuffer = buffer) { // do something with the pBuffer } ``` The fixed statement makes sure that the buffer doesn't get moved in memory by the garbage collector, making the pointer within the statement 'fixed' or 'pinned'. As for the multi-dimensional arrays, it depends on the underlying C implementation; you might for example work with a pointer to the array and adjust the position based on the number of dimensions and the number of elements in each dimension, like: ``` someValue = buffer[(elementsPerDimensions * x) + y]; ``` From your description it already seems pretty complicated. Have you considered making your library COM friendly?
Interface from a C DLL to .NET
[ "", "c#", ".net", "dll", "unmanaged", "" ]
I have a MySQL table holding lots of records that I want to give the user access to. I don't want to dump the entire table to the page, so I need to break it up into 25 records at a time, which means I need a page index. You have probably seen these on other pages; they look something like this at the base of the page: < 1 2 3 4 **5** 6 7 8 9 > For example, when the user clicks on the '4' link, the page refreshes and the offset is moved on (4th page x 25 records). Here is what I already have: ``` function CreatePageIndex($ItemsPerPage, $TotalNumberOfItems, $CurrentOffset, $URL, $URLArguments = array()) { $FirstIndexDone = false; foreach($URLArguments as $Key => $Value) { if($FirstIndexDone == false) { $URL .= sprintf("?%s=%s", $Key, $Value); $FirstIndexDone = true; } else { $URL .= sprintf("&%s=%s", $Key, $Value); } } Print("<div id=\"ResultsNavigation\">"); Print("Page: "); Print("<span class=\"Links\">"); $NumberOfPages = ceil($TotalNumberOfItems / $ItemsPerPage); for($x = 0; $x < $NumberOfPages; $x++) { if($x == $CurrentOffset / $ItemsPerPage) { Print("<span class=\"Selected\">".($x + 1)." </span>"); } else { if(empty($URLArguments)) { Print("<a href=\"".$URL."?Offset=".$x * $ItemsPerPage."\">".($x + 1)."</a> "); } else { Print("<a href=\"".$URL."&Offset=".$x * $ItemsPerPage."\">".($x + 1)."</a> "); } } } Print("</span>"); Print(" (".$TotalNumberOfItems." results)"); Print("</div>"); } ``` Obviously this piece of code does not create a dynamic index; it just dumps the whole index at the bottom of the page for every page available. What I need is a dynamic solution that only shows the previous 5 pages and next 5 pages (if they exist), along with a >> or something to move ahead 5 or so pages. Has anybody seen an elegant and reusable way of implementing this? I feel like I'm re-inventing the wheel. Any help is appreciated.
It's called Pagination: a few examples: 1. [A nice one without SQL](https://stackoverflow.com/questions/207223/php-dynamic-pagination-without-sql) 2. [A long tutorial](http://www.sitepoint.com/article/perfect-php-pagination/) 3. [Another tutorial](http://www.tonymarston.net/php-mysql/pagination.html) 4. [And Another](http://php.about.com/od/phpwithmysql/ss/php_pagination.htm) 5. And of course.. [google](http://www.google.com.ar/search?q=php+pagination)
[Zend Framework](http://framework.zend.com/) is becoming a useful collection and includes a [Zend\_Paginator](http://framework.zend.com/manual/en/zend.paginator.html) class, which might be worth a look. There's a bit of a learning curve, and it might only be worth it if you want to invest the time in using other classes from the framework. It's not too hard to roll your own, though. Get a total count of records with a COUNT(\*) query, then obtain a page of results with a LIMIT clause. For example, if you want 20 items per page, page 1 would use LIMIT 0,20 while page 2 would use LIMIT 20,20: ``` $count=getTotalItemCount(); $pagesize=20; $totalpages=ceil($count/$pagesize); $currentpage=isset($_GET['pg'])?intval($_GET['pg']):1; $currentpage=min(max($currentpage, 1),$totalpages); $offset=($currentpage-1)*$pagesize; $limit="LIMIT $offset,$pagesize"; ```
Is there a piece of public code available to create a page index using PHP?
[ "", "php", "indexing", "pagination", "" ]
I would like to display some extra UI elements when the process is being run as Administrator as opposed to when it isn't, similar to how Visual Studio 2008 displays 'Administrator' in its title bar when running as admin. How can I tell?
Technically, if you want to see if the member is the local administrator *account*, then you can get the [security identifier (SID)](http://en.wikipedia.org/wiki/Security_Identifier) of the current user through the [`User` property](http://msdn.microsoft.com/en-us/library/system.security.principal.windowsidentity.user.aspx) on the [`WindowsIdentity` class](http://msdn.microsoft.com/en-us/library/e599ywa6.aspx), like so (the static [`GetCurrent` method](http://msdn.microsoft.com/en-us/library/sfs49sw0.aspx) gets the current Windows user): ``` WindowsIdentity windowsIdentity = WindowsIdentity.GetCurrent(); string sid = windowsIdentity.User.ToString(); ``` The `User` property returns the SID of the user which [has a number of predefined values for various groups and users](http://support.microsoft.com/kb/243330). Then you would check to see if [the SID has the following pattern, indicating it is the local administrator account (which is a well-known SID)](http://blogs.technet.com/b/heyscriptingguy/archive/2005/07/22/how-can-i-determine-if-the-local-administrator-account-has-been-renamed-on-a-computer.aspx): > **S-1-5-**{other SID parts}**-500** Or, if you don't want to parse strings, you can use the [`SecurityIdentifier`](http://msdn.microsoft.com/en-us/library/system.security.principal.securityidentifier.aspx) class: ``` // Get the built-in administrator account. var sid = new SecurityIdentifier(WellKnownSidType.BuiltinAdministratorsSid, null); // Compare to the current user. bool isBuiltInAdmin = (windowsIdentity.User == sid); ``` However, I suspect that what you *really* want to know is if the current user is a member of the administrators *group* for the local machine. You can get this SID using the [`WellKnownSidType`](http://msdn.microsoft.com/en-us/library/system.security.principal.wellknownsidtype.aspx) of `BuiltinAdministratorsSid`: ``` // Get the SID of the admin group on the local machine. 
var localAdminGroupSid = new SecurityIdentifier( WellKnownSidType.BuiltinAdministratorsSid, null); ``` Then you can check the [`Groups` property](http://msdn.microsoft.com/en-us/library/system.security.principal.windowsidentity.groups.aspx) on the `WindowsIdentity` of the user to see if that user is a member of the local admin group, like so: ``` bool isLocalAdmin = windowsIdentity.Groups. Select(g => (SecurityIdentifier) g.Translate(typeof(SecurityIdentifier))). Any(s => s == localAdminGroupSid); ```
I think this is a good simple mechanism. ``` using System.Security.Principal; WindowsIdentity identity = WindowsIdentity.GetCurrent(); WindowsPrincipal principal = new WindowsPrincipal(identity); bool isAdmin = principal.IsInRole(WindowsBuiltInRole.Administrator); ```
How can I tell if my process is running as Administrator?
[ "", "c#", ".net", "process", "privileges", "" ]
I'm working on a class that is storing a 2D array of class `MyType` and would like it to use dynamic data types. i.e. not `MyType[,]` The problem with `MyType[,]` is that the class doesn't know the size of the array ahead of time, and I don't want to go to the trouble of managing array re-sizing if it's been done elsewhere in the .NET Frameworks. The class will not know maximum array size at any given moment, but the array will be dense. I know I can use static arrays, and re-allocate memory as needed, but I'd prefer to use a built-in implementation if possible. Is there anything better than `List<List<MyType>>` for this purpose? Edit 1: specified that array is dense; Edit 2 and 3: specified problem with `MyType[,]`
Create your own `List<List<T>>` encapsulation, like: ``` public class Matrix<T> { List<List<T>> matrix = new List<List<T>>(); public void Add(IEnumerable<T> row) { List<T> newRow = new List<T>(row); matrix.Add(newRow); } public T this[int x, int y] { get { return matrix[y][x]; } } .... } ``` Define your own set of operations on it! Be free! **By encapsulating it, you can decide to go for a more optimised implementation later if it's not sufficient.** ``` ICollection<long> rowOne = new List<long>(); rowOne.Add(1); rowOne.Add(2); rowOne.Add(3); ICollection<long> rowTwo = new List<long>(); rowTwo.Add(4); rowTwo.Add(5); rowTwo.Add(6); ```
It depends on how sparse your structure will be. For instance, if your entries will resemble something like `myTypes[0, 1]` and `myTypes[134, 544]`, you'd be much better off using [sparse matrix](http://en.wikipedia.org/wiki/Sparse_matrix). Otherwise, `List<List<MyType>>` will do.
Dynamic 2D structure for C#
[ "", "c#", "dynamic-data", "" ]
How can I add colors themselves (not color names) as items in a combo box in C#?
You'll have to use an owner drawn combobox. [This article](http://www.codeproject.com/KB/combobox/ownerdrawncombobox.aspx) on [CodeProject](http://www.codeproject.com/) is a good reference.
If you want the color picker that is provided when you place an object with a color-typed property in a PropertyGrid, then [this person has done just that](http://www.vbinfozine.com/a_colorpicker.shtml); however, they did it as a button with a drop-down. If you really wish it to be a ComboBox, you shouldn't find that too much trouble, I should think.
Add Colors in Combo box in .NET Windows application
[ "", "c#", "" ]
I’m maintaining a site in ASP, one of the tasks is to set the focus on textbox on a page. Here is what I have tried: ``` <script type="text/javascript"> <!-- document.psForm['password'].focus(); //AND document.getElementById("password").focus(); --> </script> ``` I didn't think this would work... and it doesn't: ``` <form id="psForm" action="logonpw.asp" method="post" defaultfocus="password"> ``` This doesn't work: ``` <body onload="javascript:docuument.psForm.password.focus();"> ``` Here is the form: ``` <form id="psForm" action="logonpw.asp" method="post"> <table border="0" cellpadding="5"> <tr> <td> Password: </td> <td> <input type="password" name="password" value="<%= password %>" size="50"> </td> </tr> </table> </form> ```
Try this: Add: ``` id="password" ``` to the input tag, then use: ``` document.getElementById("password").focus(); ```
a) move ``` <script type="text/javascript"> <!-- document.psForm['password'].focus(); //AND document.getElementById("password").focus(); --> </script> ``` to bottom of page source. b) fire code on load ``` <script type="text/javascript"> <!-- function handleOnLoad(){ document.psForm['password'].focus(); //AND document.getElementById("password").focus(); } --> </script> ... <body onload="handleOnLoad();"> ``` and by the way, only the second onfocus would do any good.
Setting focus in ASP
[ "", "javascript", "" ]
I'm *kinda* new to using Rails, and an app I am working on is progressing well - however, looking through the generated HTML, I noticed things like... ``` <script type="text/javascript"> <!-- document.psForm['password'].focus(); //AND document.getElementById("password").focus(); --> </script> ``` Hold on, wrong snippet; what I actually see is: ``` <script type="text/javascript"> //<![CDATA[ Droppables.add(...); //]]> </script> ``` sprinkled around the HTML, which of course matches up with places where I use: ``` <%= drop_receiving_element ... %> ``` What I'm wondering is... is there a better way to do this, or to make it cleaner? Some of these script tags are coming from partials, so putting them at the 'bottom of the page' doesn't really help in this situation. Another option may be pushing all these 'tag blocks' into an array and then writing them out in the **application.rhtml** file, but it's still a bit messy...
Well, if you really want to use best practices: do not use inline JavaScript. Keep your HTML, CSS and JavaScript clean and separated from each other. Ideally the HTML file should be usable without CSS and JavaScript. The cleanest way, imo, is to just build your app using plain HTML/CSS, and enhance it with unobtrusive JavaScript to provide a better user experience. A pattern for this is to include all your JS files at the bottom of your page, and start running your code with functionality like onDomReady. Based on the comments I would like to add some possible options to get you started: * jQuery * YUI * Prototype
Best practice is to remove all inline javascript. However, there are 3 cases that I've run into where you absolutely need inline javascript: ### 1. To resolve image errors If an image tag is referencing an image path that doesn't exist, the only way to prevent the image error from appearing (even in the web inspector) is to do this: ``` <img src="/i/dont/exist.png" onerror="$(this).attr('src', '/i/do/exist.png')" /> ``` The following doesn't work because the `onerror` event is executed before you even have time to grab the image with jQuery: ``` $("img").error(function() { $(this).attr("src", "/i/do/exist.png") }); ``` The code below doesn't work either because the `onerror` event doesn't bubble: ``` $("img").live("error", function() { $(this).attr("src", "/i/do/exist.png"); }); ``` So you have to use inline javascript for that. ### 2. To render javascript templates as the page loads If you wait until `$(document).ready` to draw your feed of content into one of the empty container elements in your dom, the user will see the blank spot for a split second. This is why when your [Twitter](http://twitter.com/) page first loads, the feed is blank for an instant. Even if you just put an external script file at the bottom of the dom, and in there you append the dynamically generated html elements to your page without even using `$(document).ready`, it's still too late. You have to append the dynamic nodes immediately after the container element is added: ``` <script>App.bootstrap({some: "json"});</script> <nav id='navigation'></nav> <header id='header'></header> <section id='content'> // dynamic content here </section> <script>App.renderContent();</script> <aside id='sidebar'> // dynamically toggle which one is selected with javascript <nav> <h3>Search By Category</h3> <ol> <li> <a href="/category-a">Category A</a> </li> <li> <a href="/category-b">Category B</a> </li> </ol> </nav> </aside> <script>App.renderSidebar();</script> <footer id='footer'></footer> ``` ### 3. 
You're bootstrapping your app with JSON If you're using JSON + javascript templates, it's a good idea to bootstrap the first set of data into the response body by including it inline in the page (in the above dom example too). That makes it so you don't need an additional ajax request to render content. ### Everything else should be done with Unobtrusive Javascript Rails has a lot of javascript helpers, and thankfully in Rails 3 most (all?) of it is unobtrusive; they're now using the `data-` attribute and an external `rails.js` file. However, many of the gems out there that are part-ruby part-javascript tend to still provide helper methods that add complex javascript inline: * [https://github.com/rubaidh/google\_analytics](https://github.com/rubaidh/google_analytics/blob/master/lib/rubaidh/view_helpers.rb) * [https://github.com/scrubber/jquery\_grid\_for\_rails](https://github.com/scrubber/jquery_grid_for_rails/blob/master/lib/hexagile/jquery_grid_for_rails.rb) * [https://github.com/sandipransing/rails\_tiny\_mce](https://github.com/sandipransing/rails_tiny_mce/blob/master/lib/tiny_mce_helper.rb) * [https://github.com/phronos/rails\_datatables](https://github.com/phronos/rails_datatables/blob/master/lib/rails_datatables.rb) That's helpful, but I think just having a clear README describing how to add the javascript to your `application.js` is even more useful. External javascript makes it a lot easier to customize/extend the functionality down the road, it gives you a lot more control over the system, and it minimizes duplication across your html pages (so the response body is smaller, and the browser can cache external javascript files). Unless you need to handle missing images, instantaneous rendering, or bootstrap some json, you can put everything else in external javascript files and never have to use Rails javascript/ajax helpers.
Rails inline Javascript and Best Practices
[ "", "javascript", "ruby-on-rails", "coding-style", "" ]
Say I am iterating over a Map in Java... I am unclear about what I can do to that Map while in the process of iterating over it. I guess I am mostly confused by this warning in the Javadoc for the Iterator interface's remove method: > [...] The behavior of an iterator is unspecified if the underlying collection is modified while the iteration is in progress in any way other than by calling this method. I know for sure that I can invoke the remove method without any issues. But while iterating over the Map collection, can I: 1. Change the value associated with a key with the Map class put method (put with an existing key)? 2. Add a new entry with the Map class put method (put with a new key)? 3. Remove an entry with the Map class remove method? My guess is that I can probably safely do #1 (put to an existing key) but not safely do #2 or #3. Thanks in advance for any clarification on this.
You can use `Iterator.remove()`, and if using an entrySet iterator (of Map.Entry's) you can use `Map.Entry.setValue()`. Anything else and all bets are off - you should not change the map directly, and some maps will not permit either or both of the aforementioned methods. Specifically, your (1), (2) and (3) are **not** permitted. You *might* get away with setting an existing key's value through the `Map` object, but the `Set.iterator()` documentation specifically precludes that and it will be implementation specific: > If the map is modified while an iteration over the set is in progress (except through the iterator's own remove operation, or through the setValue operation on a map entry returned by the iterator) the **results of the iteration are undefined**. (emphasis added)
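As a concrete illustration (my sketch, not code from the question), the two operations that *are* defined during iteration are the iterator's own `remove()` and `Map.Entry.setValue()` on an entrySet iterator:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Sketch: the only two modifications that are defined while iterating are
// the iterator's own remove() and Map.Entry.setValue().
class SafeMapEdits {
    static Map<String, Integer> demo() {
        Map<String, Integer> map = new HashMap<String, Integer>();
        map.put("keep", 1);
        map.put("drop", 2);
        Iterator<Map.Entry<String, Integer>> it = map.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Integer> entry = it.next();
            if (entry.getKey().equals("drop")) {
                it.remove();                           // safe: iterator's own remove
            } else {
                entry.setValue(entry.getValue() + 10); // safe: setValue via the entry
            }
        }
        return map;
    }
}
```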
If you take a look at the HashMap class, you'll see a field called 'modCount'. This is how the map knows when it's been modified during iteration. Any method that increments modCount when you're iterating will cause it to throw a ConcurrentModificationException. That said, you CAN put a value into a map if the key already exists, effectively updating the entry with the new value: ``` Map<String, Object> test = new HashMap<String, Object>(); test.put("test", 1); for(String key : test.keySet()) { test.put(key, 2); // this works! } System.out.println(test); // will print "test->2" ``` When you ask if you can perform these operations 'safely,' you don't have to worry too much because HashMap is designed to throw that ConcurrentModificationException as soon as it runs into a problem like this. These operations will fail fast; they won't leave the map in a bad state.
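To see that fail-fast behaviour in action, here is a small sketch (mine, not from the answer): overwriting an existing key during iteration completes normally, while putting a *new* key bumps modCount and makes the next call to `next()` throw:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

// Sketch of HashMap's fail-fast check: a structural modification (new key)
// during iteration raises ConcurrentModificationException on the next next().
class FailFastDemo {
    static boolean newKeyDuringIterationThrows() {
        Map<String, Integer> map = new HashMap<String, Integer>();
        map.put("a", 1);
        map.put("b", 2);
        try {
            for (String key : map.keySet()) {
                map.put("c", 3); // new key -> modCount changes
            }
            return false; // not reached: the loop's next next() throws
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    static int overwriteDuringIteration() {
        Map<String, Integer> map = new HashMap<String, Integer>();
        map.put("test", 1);
        for (String key : map.keySet()) {
            map.put(key, 2); // existing key -> no structural change
        }
        return map.get("test");
    }
}
```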
What basic operations on a Map are permitted while iterating over it?
[ "", "java", "iterator", "" ]
I'm trying to use SendMessage to Notepad, so that I can insert written text without making Notepad the active window. I have done something like this in the past using `SendText`, but that required giving Notepad focus. Now, first I'm retrieving the Windows handle: ``` Process[] processes = Process.GetProcessesByName("notepad"); Console.WriteLine(processes[0].MainWindowHandle.ToString()); ``` I've confirmed it's the right handle for Notepad, the same shown within `Windows Task Manager`. ``` [DllImport("User32.dll", EntryPoint = "SendMessage")] public static extern int SendMessage(int hWnd, int Msg, int wParam, int lParam); ``` From here, I haven't been able to get SendMessage to work in all my experimentation. Am I going in the wrong direction?
``` [DllImport("user32.dll", EntryPoint = "FindWindowEx")] public static extern IntPtr FindWindowEx(IntPtr hwndParent, IntPtr hwndChildAfter, string lpszClass, string lpszWindow); [DllImport("User32.dll")] public static extern int SendMessage(IntPtr hWnd, int uMsg, int wParam, string lParam); private void button1_Click(object sender, EventArgs e) { Process [] notepads=Process.GetProcessesByName("notepad"); if(notepads.Length==0)return; if (notepads[0] != null) { IntPtr child= FindWindowEx(notepads[0].MainWindowHandle, new IntPtr(0), "Edit", null); SendMessage(child, 0x000C, 0, textBox1.Text); } } ``` WM\_SETTEXT=0x000c
You first have to find the child window where the text is entered. You can do this by finding the child window with the window class "Edit". Once you have that window handle, use WM\_GETTEXT to get the text which is already entered, then modify that text (e.g., add your own), then use WM\_SETTEXT to send the modified text back.
How to send text to Notepad in C#/Win32?
[ "", "c#", "winapi", "" ]
I know this is a darn simple question, but I'm very used to using Borland and wrappers, so this is a bit of a new approach for me. Can someone simply tell me how I can open an OpenDialog that only gets .obj files from a Visual Studio C++ console app? It's very much appreciated!
There isn't really any difference between a console application and a GUI application, except for the entry point (WinMain in a 'GUI' app), and a console app will have a console window opened during startup if not started from a console. All of the Win32 API is available, so you need to use the [GetOpenFileName](http://msdn.microsoft.com/en-us/library/ms646927.aspx) call, as follows: ``` #define DEFAULT_EXTENSION L"obj" // note: lpstrDefExt must not contain the period OPENFILENAME ofn; wchar_t *FilterSpec = L"Object Files(*.obj)\0*.obj\0Text Files(*.txt)\0*.txt\0All Files(*.*)\0*.*\0"; wchar_t *Title = L"Open...."; wchar_t szFileName[MAX_PATH]; wchar_t szFileTitle[MAX_PATH]; wchar_t filePath[MAX_PATH]; // Selected file and path *szFileName = 0; *szFileTitle = 0; /* zero the struct so the fields we don't set are NULL, then fill in the non-variant fields of the OPENFILENAME struct. */ ZeroMemory(&ofn, sizeof(ofn)); ofn.lStructSize = sizeof(OPENFILENAME); ofn.hwndOwner = GetFocus(); ofn.lpstrFilter = FilterSpec; ofn.lpstrCustomFilter = NULL; ofn.nMaxCustFilter = 0; ofn.nFilterIndex = 0; ofn.lpstrFile = szFileName; ofn.nMaxFile = MAX_PATH; ofn.lpstrInitialDir = L"."; // Initial directory. ofn.lpstrFileTitle = szFileTitle; ofn.nMaxFileTitle = MAX_PATH; ofn.lpstrTitle = Title; ofn.lpstrDefExt = DEFAULT_EXTENSION; ofn.Flags = OFN_FILEMUSTEXIST | OFN_HIDEREADONLY; if (!GetOpenFileName((LPOPENFILENAME)&ofn)) { return; // Failed or cancelled } else { wcscpy_s(filePath, ofn.lpstrFile); } ```
Yes, it is possible to open an OpenDialog from a VC++ console app. Steps: Create a new project -> select Win32 Console Application. In the next dialog, select "An Application that supports MFC". You will be provided with the following code: ``` #include "stdafx.h" #include "test.h" #ifdef _DEBUG #define new DEBUG_NEW #undef THIS_FILE static char THIS_FILE[] = __FILE__; #endif ///////////////////////////////////////////////////////////////////////////// // The one and only application object CWinApp theApp; using namespace std; int _tmain(int argc, TCHAR* argv[], TCHAR* envp[]) { int nRetCode = 0; // initialize MFC and print an error on failure if (!AfxWinInit(::GetModuleHandle(NULL), NULL, ::GetCommandLine(), 0)) { // TODO: change error code to suit your needs cerr << _T("Fatal Error: MFC initialization failed") << endl; nRetCode = 1; } else { // TODO: code your application's behavior here. CString strHello; strHello.LoadString(IDS_HELLO); cout << (LPCTSTR)strHello << endl; } return nRetCode; } ``` Add the following code at the beginning of the "else" part: ``` CFileDialog dlgOpen(TRUE, NULL, NULL, OFN_OVERWRITEPROMPT, "Text Files (*.txt)|*.txt||"); dlgOpen.DoModal(); ``` Run the application. An open dialog will be opened automatically. Google "CFileDialog" for further help.
Opening an OpenDialog from Vis C++ console app?
[ "", "c++", "visual-studio-2008", "visual-c++", "" ]
Thanks for reading this. I imagine this is really a javascript question, and my title probably does not get at the heart of what I am trying to do, but I want to store the result of my ajax request in a global variable. This would allow me to test the var before making the ajax call...and avoid repeated ajax calls for the same data. I need to be able to pass the variable name from the click event down through the populateSelect function into the ajaxCall function. It seems like I could pass a function as a parameter, but I have not been able to make that work. I like to include working examples in my questions, but in this case the latency in the call to the server is part of the problem. Thanks

```
$('#getSelectOptions').bind("click", function() {
    populateSelect(this);
});

function populateSelect(whatWasClicked) {
    var thisSelect = $(whatWasClicked).parents("div").find("select") ;
    var before = function() { $(loading).show() ; } ;
    var complete = function() { $(loading).hide() ; } ;
    var data = {'_service' : 'myService', '_program' : 'myProgram' } ;
    var error = function(){alert("Error"); } ;
    var success = function(request) { $(thisSelect).html(request) ; };
    var waitTime = 20000 ;
    ajaxCall(thisSelect, waitTime, before, complete, data, success, error ) ;
}

function ajaxCall(elementToPopulate, waitTime, whatToDoBeforeAjaxSend,
                  whatToDoAfterAjaxSend, dataToSendToTheServer,
                  whatToDoAfterSuccess, whatToDoAfterError) {
    $.ajax({
        type: "post",
        url: "http://myURL/cgi-bin/broker",
        dataType: "text",
        data: dataToSendToTheServer,
        timeout: waitTime,
        beforeSend: whatToDoBeforeAjaxSend,
        error: whatToDoAfterError,
        success: whatToDoAfterSuccess
    });
}
```

**EDIT** Further education in how to write a good question... I should have mentioned that I call populateSelect to populate multiple selects... so I need a way to reference the results for each select
jQuery has a $.data method which you can use to store/retrieve items related to any element on the page. ``` //e.g. create some object var inst = {}; inst.name = 'My Name' var target = $('#textbox1'); //save the data $.data(target, 'PROP_NAME', inst); //retrieve the instance var inst = $.data(target, 'PROP_NAME'); ```
It looks like in the example you gave, you only have one type of AJAX request, `POST`ed to the same URL with the same data every time. If that's the case, you should just need something like : ``` var brokerResponse = null; // <-- Global variable function populateSelect(whatWasClicked) { var thisSelect = $(whatWasClicked).parents("div").find("select") ; if (!brokerResponse) { // <-- Does an old response exist? If not, get one... var before = function() { $(loading).show() ; } ; var complete = function() { $(loading).hide() ; } ; var data = {'_service' : 'myService', '_program' : 'myProgram' } ; var error = function(){alert("Error"); } ; var success = function(request) { // <-- Store the response before use brokerResponse = request; $(thisSelect).html(brokerResponse); }; var waitTime = 20000 ; ajaxCall(thisSelect, waitTime, before, complete, data, success, error ) ; } else { // <-- If it already existed, we get here. $(thisSelect).html(brokerResponse); // <-- Use the old response } } ``` If you have multiple possible items for `whatWasClicked` which each need a different AJAX response cached, then you need to have some string with which to identify `whatWasClicked`, and use that to store multiple values in your global variable. 
For example, if you have a unique `id` on `whatWasClicked`, this would work: ``` var brokerResponse = {}; // Global variable is a simple object function populateSelect(whatWasClicked) { var whatWasClickedId = $(whatWasClicked).attr('id'); // Get the unique ID var thisSelect = $(whatWasClicked).parents("div").find("select") ; if (!brokerResponse[whatWasClickedId]) { // Check that ID for a response var before = function() { $(loading).show() ; } ; var complete = function() { $(loading).hide() ; } ; var data = {'_service' : 'myService', '_program' : 'myProgram' } ; var error = function(){alert("Error"); } ; var success = function(request) { brokerResponse[whatWasClickedId] = request; // Using ID $(thisSelect).html(request); }; var waitTime = 20000 ; ajaxCall(thisSelect, waitTime, before, complete, data, success, error ) ; } else { $(thisSelect).html(brokerResponse[whatWasClickedId]); // Etc... } } ```
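Stripped of the jQuery plumbing, both variants above are just a memo cache keyed by something unique to the clicked element. A minimal sketch of that pattern in Python (names are illustrative, not part of the original code):

```python
cache = {}

def populate(widget_id, fetch):
    """Return the cached response for widget_id, calling fetch() only
    on the first request -- later clicks reuse the stored result."""
    if widget_id not in cache:
        cache[widget_id] = fetch()
    return cache[widget_id]
```

Any per-widget response caching scheme boils down to this: one dictionary, keyed by a unique id, consulted before the expensive call is made.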
Using jQuery, how can I store the result of a call to the $.ajax function, to be re-used?
[ "", "javascript", "jquery", "ajax", "" ]
I'm just getting into ASP.NET. Been avoiding it for years as I'm a desktop application advocate. Anyway, I was wondering if there's a way to stop the generated HTML being so cluttered with JavaScript. Is there a way to make the generated script go into a referenced .js file rather than inline with the page?
Look into [ASP.NET MVC](http://www.asp.net/mvc/) - it is an alternative to WebForms and tends to have less clutter.
Are you sure you're using code-behind? When you add a new page, make sure you check "Place code in a separate file", otherwise all your server-side code will be on the page (even though it won't show up for the end-user). If you're using code-behind but it still has a lot of javascript code in the page (maybe you're using ajax?) I'd suggest you don't use the .NET ajax controls, just do all the ajax by hand, using jQuery or Prototype. It's fast and it'll be as lightweight as it can be.
De-cluttering ASP.NET javascript
[ "", "asp.net", "javascript", "" ]
I have a Pocket PC app and I'm having a serious problem with it, described here: <https://stackoverflow.com/questions/472598> . While looking for a solution I tried some actually quite stupid code in Main():

```
[MTAThread]
static void Main()
{
    Application.Run(new Tasks());
    Application.Exit();
}
```

and set a breakpoint on exit. If I just run the application and then close the window, the breakpoint is reached. If I run the application and then open another window:

```
private void questButton_Click(object sender, EventArgs e)
{
    QuestionnairesWindow questWindow = new QuestionnairesWindow();
    questWindow.Show();
    this.Hide();
}
```

and then get back from it to the initial window:

```
private void backButton_Click(object sender, EventArgs e)
{
    Tasks tasksWindow = new Tasks();
    tasksWindow.Show();
    this.Close();
}
```

and close the initial one the same way as the first time, the Application.Exit() code is never reached and I have the impression that the application isn't really closed (I can't open it again). Sorry if the description is complicated. edit: the question is - any ideas why it is behaving differently?
`new Tasks()` in `Main()` is not the same object as the `Tasks tasksWindow = new Tasks();` created later. You have two `Tasks` objects, so when you close the second one, the first is still present and never dies. You need to pass the reference of the current `Tasks` to `QuestionnairesWindow`. You can do that with an additional `QuestionnairesWindow` constructor:

```
private Tasks tasks;

public QuestionnairesWindow(Tasks t)
{
    this.tasks = t;
}
```

using:

```
new QuestionnairesWindow(this).Show(); // where this = current `Tasks` created in `Main`
```
This line seems to be the problem: ``` Tasks tasksWindow = new Tasks(); ``` If I understand your code correctly, in your backButton\_Click you're creating a **new** instance of your Tasks form, instead of showing the one you originally hid. This means that the application never terminates, because there is still one window open, but hidden. What you probably want to do is pass the Tasks form reference to the QuestionnairesWindow form.
Code in Main() reachable or unreachable depending on opening another form
[ "", "c#", "pocketpc", "" ]
In my HTML page, I need to check if Adobe Flash player is installed. If not, I want to automatically jump to another HTML page to tell the user that Flash player is required. I'm using JavaScript to check if the Flash player is available, using the '[JavaScript Flash detection library](http://www.featureblend.com/javascript-flash-detection-library.html)'. The body of my HTML page looks like this: ``` <body> <script type="text/javascript"> if(!FlashDetect.installed) { alert("Flash 9.0.115 is required to enjoy this site."); } </script> ... ... ``` The detection is working: I can see the alert, but I didn't find a way to jump to another HTML page. Any hint? **Edit:** There is something I didn't mention which seems to make a difference: the HTML pages are local pages (running from a CD-ROM), and I'd like to jump to an HTML page which is located in the current directory.
``` window.location.href = "http://stackoverflow.com"; ``` For local files this should work if you know the relative path: (In your case this works.) ``` window.location.href = "someOtherFile.html"; ``` Maybe you could also do it absolute using this: (Not tested.) ``` window.location.pathname = "/path/to/another/file.html/"; ``` The problem are the security measures of the browser vendors. Google has some [good information](http://blog.chromium.org/2008/12/security-in-depth-local-web-pages.html) about it.
Be very wary of instant JavaScript redirects. Flash detection scripts can be wrong(\*) so it's best to allow the user to decide Flash-or-not themselves with some kind of manual override, or simply using fallback content. Writing to location.href works but can "break the back button" - if the user presses Back and your page insta-redirects them forward a page again they're unlikely to be happy. location.replace('...') avoids this problem. (\* - there are two approaches to Flash detection, neither of them reliable. Creating a Flash instance and sniffing for it breaks with software like FlashBlock or just slow loading, and sniffing for plugins directly is not standardised and likely to break on more obscure platforms. Adobe's own code at <http://www.adobe.com/devnet/flashplayer/articles/future_detection_print.html> ends up resorting to sniffing the UA string, ugh.)
Jumping to a new HTML page with JavaScript
[ "", "javascript", "redirect", "" ]
My most used mini pattern is:

```
VideoLookup = new ArrayList {
    new ArrayList { buttonVideo1, "Video01.flv" },
    new ArrayList { buttonVideo2, "Video02.flv" },
    new ArrayList { buttonVideo3, "Video03.flv" },
    new ArrayList { buttonVideo4, "Video04.flv" },
    new ArrayList { buttonVideo4, "Video04.flv" }
};
```

This means that rather than a switch statement with a case for each button I can instead just compare the button that was clicked with each item in the ArrayList. Then when I've found a match I launch the correct file (although the action - the 2nd part of the "lookup" - could be a delegate or anything else). The main benefit is that I don't have the problem of remembering to add all the correct code for each switch statement case, I just add a new item to the lookup ArrayList. *(Yes I know using an ArrayList isn't the best way to go, but it's old code. And I know that looping through an array each time isn't as efficient as using a switch statement, but this code isn't in a tight loop)* Does anyone else have any mini-patterns they use that save time/effort or make code more readable? *They don't have to just be GUI related*. **Update:** Don't copy this code, I knew it was bad, but I didn't realise how bad. Use something like this instead.

```
Hashtable PlayerLookup = new Hashtable();
PlayerLookup.Add(buttonVideo1, "Video01.flv");
PlayerLookup.Add(buttonVideo2, "Video02.flv");
PlayerLookup.Add(buttonVideo3, "Video03.flv");
PlayerLookup.Add(buttonVideo4, "Video04.flv");
string fileName = PlayerLookup[currentButton].ToString();
```
You could just create a struct or object that has a button reference and a string representing the file name and then a List of these things. Or, you could just use a Dictionary and make it even easier on yourself. Lots of ways to improve. :)
please please please omg use this version.

```
VideoLookup = new Dictionary<Button, string> {
    { buttonVideo1, "Video01.flv" },
    { buttonVideo2, "Video02.flv" },
    { buttonVideo3, "Video03.flv" },
    { buttonVideo4, "Video04.flv" }
};
```
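The same lookup-table-instead-of-switch idea carries over to any language with a hash map. A Python sketch, with made-up button ids and a hypothetical play helper standing in for launching the file:

```python
def play(video):
    """Stand-in for actually launching the player -- hypothetical helper."""
    return "playing " + video

# Adding a new case is one dict entry, not a new switch branch.
video_lookup = {
    "button1": "Video01.flv",
    "button2": "Video02.flv",
}

def on_click(button_id):
    return play(video_lookup[button_id])
```

The win is the same as in the C# version: the mapping lives in one data structure instead of being scattered across case bodies.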
Useful mini patterns (not design patterns)
[ "", "c#", ".net", "design-patterns", "idioms", "" ]
We have a quite large (280 binaries) software project under Linux and currently it has a very dispersed code structure - that means one can't work out what code from the source tree is valid (builds to deployable binaries) and what is deprecated. But the Makefiles are good. We need to calculate C/C++ SLOC for the entire project. Here's a question - can I find out the SLOC GCC has compiled? Or maybe I can gain this information from the binary (debug info probably)? Or maybe I can find out what source files the binary was compiled from and use this info to calculate SLOC? Thanks Bogdan
The first thing you want is an accurate list of what you actually compiled. You can achieve this by using a wrapper script instead of gcc. The second list you want is the list of files that were used for this. For this, consult the dependency list (as you said that was correct). (It seems you'd need `make --print-data-base`.) Then, sort and deduplicate the list of files, and throw out system headers. For each remaining file, determine the SLOC count using your preferred tool.
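Once you have that file list, the counting step itself is trivial. A rough sketch in Python (the `/usr/include` filter is an assumption about where your system headers live, and "SLOC" here is simply non-blank lines):

```python
import os

def sloc(paths, system_prefix="/usr/include"):
    """Count non-blank lines across a list of files, de-duplicating
    repeated entries and skipping anything under the system include dir."""
    seen = set()
    total = 0
    for p in paths:
        real = os.path.realpath(p)  # collapse relative paths/symlinks
        if real in seen or real.startswith(system_prefix):
            continue
        seen.add(real)
        with open(real) as f:
            total += sum(1 for line in f if line.strip())
    return total
```

If you need non-comment SLOC rather than non-blank lines, swap the per-line test for a real tokenizer or feed the de-duplicated list to a dedicated counter.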
It depends on what you mean by SLOC that GCC has compiled. If you mean, track the source files from your project that GCC used, then you'd probably use the dependency tracking options which lists source files and headers. That's `-M` and various related options. Beware of including system-provided headers. A technique I sometimes use is to replace the standard C compiler with an appropriate variation - for example, to ensure a 64-bit compilation, I use '`CC="gcc -m64"`' to guarantee the when the C compiler is used, it will compile in 64-bit mode. Obviously, with a list of files, you can use `wc` to calculate the number of lines. You use '`sort -u`' to eliminate duplicated headers. One obvious gotcha is if you find that everything is included with relative path names - then you have to work out more carefully where each file is. If you have some other definition of SLOC, then you will need to specify what you have in mind. Sometimes, people are looking for non-blank, non-comment SLOC, for example - but you still need the list of source files, which I think the `-M` options will help you determine.
Calculate SLOC GCC C/C++ Linux
[ "", "c++", "c", "linux", "gcc", "" ]
I'm having trouble with debugging a PHP project through NetBeans using XDebug, and was hoping someone out there might have had this problem before. Debugging works fine for the requested php file - so if I go to index.php on the remote server, I can put a breakpoint anywhere in index.php in NetBeans and the code stops there and I can step through. The trouble is, all the other files appear on the call stack like this: `"file:///home/user/site_html/library/class.requestprocessor.php"` and because that's a path to a file on the remote server, NetBeans is unable to resolve the name, and so I can't step through the code for it. It makes debugging practically useless! The php file that was requested, for example index.php, appears just as "index.php", it's just all the other ones which are included that NetBeans can't resolve properly. Is there some setting in NetBeans or XDebug that I can use to solve this problem? I have tried my best looking through all the project settings and options in NetBeans, and also looked through all the configuration options of the XDebug extention, but to no avail. Surely someone out there must have had this problem before? How did you solve it? I'll be very grateful if you could let me know. --- If there's no other way, do users of Eclipse PDT and XDebug have this problem when debugging on a remote server? If not I'll make the switch.
This issue appears to be fixed in NetBeans 6.7.
Okay it seems no one is going to answer the question, so I'll post the results of my research. As I was unable to find a way to make NetBeans successfully debug as I wanted, I tried using Eclipse PDT and Visual Studio 2008 with the VS.php extension in order to solve my problem. Eclipse PDT was hell to setup, and I wasn't happy with the way you couldn't specify a port to use for your FTP server in its Remote Server Explorer. I found VS.php did everything I wanted, it automatically uploads the code, as well as having perfect debugging, and it was EASY to setup. The only thing is, it looks ugly and I can't find a way to customise its colours. However, I'm going to choose function over fashion, and so I've settled on using VS.php as my final solution. Which to be honest is a shame, NetBeans was really great, I just can't live without a working debugger.
NetBeans PHP and XDebug - jumping to methods on call stack?
[ "", "php", "debugging", "netbeans", "xdebug", "" ]
I have a form with thousands of checkboxes, and when one is checked, I want to check all the boxes below it. This works: ``` <html> <body> <form name="myform"> <input type="checkbox" name="box1" onClick="redrawboxes(this);">1<br> <input type="checkbox" name="box2" onClick="redrawboxes(this);">2<br> ... </form> </body> </html> <script> function redrawboxes(obj){ //check all boxes below var foundit=false; for (e=0; e<document.myform.elements.length; e++){ if (foundit==false){ //search for checked obj if(obj == document.myform.elements[e]){ foundit=true; } }else{ //continuing below checked box if(obj.checked){ //are we checking or unchecking document.myform.elements[e].checked = true; }else{ document.myform.elements[e].checked = false; } } } } </script> ``` but for more than a few thousand boxes, IE is unacceptably slow. (Firefox works fine.) Is there a better way to find the original box besides iterating through the whole list?
Both of the jQuery suggestions are pretty good. For DOM wrangling like this, you're really better off using a good library. And the comment about the dubious wisdom of putting thousands of checkboxes on a form is pretty good as well... But, on the off-chance that you do have a good reason for doing this, and you can't use jQuery or similar, here's a fast, straight JS method: ``` function redrawboxes(obj) { //check all boxes below var next = obj; while ( (next = next.nextSibling) ) { if ( next.nodeName.toLowerCase() == "input" && next.type.toLowerCase() == "checkbox" ) next.checked = obj.checked; } } ``` *tested in FF3, FF3.1, IE6, Chrome 1, Chromium 2*
I might get downvoted for this, but try using jQuery. It has selectors optimized for that.
finding object in Javascript
[ "", "javascript", "search", "object", "" ]
I was playing around with the `Datetime.ParseExact` method, and it wants an `IFormatProvider`... It works giving `null` as input, but what exactly does it do?
In addition to [Ian Boyd's answer](https://stackoverflow.com/a/18195374/3283073): `CultureInfo` also implements this interface and can be used in your case. For example, to parse a French date string you could use ``` var ci = new CultureInfo("fr-FR"); DateTime dt = DateTime.ParseExact(yourDateInputString, yourFormatString, ci); ```
The `IFormatProvider` interface is normally implemented for you by a `CultureInfo` class, e.g.: * `CultureInfo.CurrentCulture` * `CultureInfo.CurrentUICulture` * `CultureInfo.InvariantCulture` * `CultureInfo.CreateSpecificCulture("de-CA") //German (Canada)` The interface is a gateway for a function to get a set of culture-specific data from a culture. The two commonly available culture objects that an `IFormatProvider` can be queried for are: * [`DateTimeFormatInfo`](https://learn.microsoft.com/en-us/dotnet/api/system.globalization.datetimeformatinfo): `IFormatProvider.GetFormat(typeof(DateTimeFormatInfo));` * [`NumberFormatInfo`](https://learn.microsoft.com/en-us/dotnet/api/system.globalization.numberformatinfo): `IFormatProvider.GetFormat(typeof(NumberFormatInfo));` The way it would normally work is you ask the `IFormatProvider` to give you a `DateTimeFormatInfo` object: ``` DateTimeFormatInfo? format; format = (DateTimeFormatInfo)provider.GetFormat(typeof(DateTimeFormatInfo)); if (format != null) DoStuffWithDatesOrTimes(format); ``` There's also inside knowledge that any `IFormatProvider` interface is likely being implemented by a class that descends from `CultureInfo`, or descends from `DateTimeFormatInfo`, so you could cast the interface directly: ``` CultureInfo? info = provider as CultureInfo; if (info != null) format = info.DateTimeInfo; else { DateTimeFormatInfo? 
dtfi = provider as DateTimeFormatInfo; if (dtfi != null) format = dtfi; else format = (DateTimeFormatInfo)provider.GetFormat(typeof(DateTimeFormatInfo)); } if (format != null) DoStuffWithDatesOrTimes(format); ``` # But don't do that All that hard work has already been written for you: **To get a `DateTimeFormatInfo` from an `IFormatProvider`**: ``` DateTimeFormatInfo format = DateTimeFormatInfo.GetInstance(provider); ``` **To get a `NumberFormatInfo` from an `IFormatProvider`**: ``` NumberFormatInfo format = NumberFormatInfo.GetInstance(provider); ``` The virtue of `IFormatProvider` is that you create your own culture objects. As long as they implement `IFormatProvider`, and return objects they're asked for, you can bypass the built-in cultures. You can also use `IFormatProvider` as a way of passing arbitrary culture objects - through the `IFormatProvider`. E.g. the name of god in different cultures * god * God * Jehova * Yahwe * יהוה * אהיה אשר אהיה This lets your custom `LordsNameFormatInfo` class ride along inside an `IFormatProvider`, and you can preserve the idiom. In reality you will never need to call the `GetFormat` method of `IFormatProvider` yourself. Whenever you need an `IFormatProvider` you can pass a `CultureInfo` object: ``` DateTime.Now.ToString(CultureInfo.CurrentCulture); endTime.ToString(CultureInfo.InvariantCulture); transactionID.ToString(CultureInfo.CreateSpecificCulture("qps-ploc")); ``` > **Note**: Any code is released into the public domain. No attribution required.
What does IFormatProvider do?
[ "", "c#", "asp.net", "iformatprovider", "" ]
I have an application that reads lines from a file and runs its magic on each line as it is read. Once the line is read and properly processed, I would like to delete the line from the file. A backup of the removed line is already being kept. I would like to do something like ``` file = open('myfile.txt', 'rw+') for line in file: processLine(line) file.truncate(line) ``` This seems like a simple problem, but I would like to do it right rather than a whole lot of complicated seek() and tell() calls. Maybe all I really want to do is remove a particular line from a file. After spending far too long on this problem I decided that everyone was probably right and this is just not a good way to do things. It just seemed such an elegant solution. What I was looking for was something akin to a FIFO that would just let me pop lines out of a file.
### Remove all lines after you've done with them: ``` with open('myfile.txt', 'r+') as file: for line in file: processLine(line) file.truncate(0) ``` ### Remove each line independently: ``` lines = open('myfile.txt').readlines() for line in lines[::-1]: # process lines in reverse order processLine(line) del lines[-1] # remove the [last] line open('myfile.txt', 'w').writelines(lines) ``` ### You can leave only those lines that cause exceptions: ``` import fileinput, sys for line in fileinput.input(['myfile.txt'], inplace=1): try: processLine(line) except Exception: sys.stdout.write(line) # it prints to 'myfile.txt' ``` In general, as other people already said it is a bad idea what you are trying to do.
**You can't**. It is just not possible with actual text file implementations on current filesystems. Text files are sequential, because the lines in a text file can be of any length. Deleting a particular line would mean rewriting the entire file from that point on. Suppose you have a file with the following four lines: ``` 'line1\nline2reallybig\nline3\nlast line' ``` To delete the second line you'd have to move the third and fourth lines' positions in the disk. The only way would be to store the third and fourth lines somewhere, truncate the file on the second line, and rewrite the missing lines. If you know the size of every line in the text file, you can truncate the file in any position using `.truncate(line_size * line_number)` but even then you'd have to rewrite everything after the line.
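To make that concrete, here is a crude sketch of what "deleting the first line" actually costs - reading it, then copying everything after it forward:

```python
def pop_first_line(path):
    """Remove and return the first line of a text file by rewriting the
    remainder -- the only option a sequential text file gives you."""
    with open(path, "r+") as f:
        first = f.readline()   # the line being "popped"
        rest = f.read()        # everything after it
        f.seek(0)
        f.write(rest)          # shift the remainder to the front
        f.truncate()           # drop the now-stale tail
    return first
```

Doing this once per processed line is quadratic over the whole file, which is exactly why rewriting the file once, at the end, is the usual approach.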
Python truncate lines as they are read
[ "", "python", "file-io", "" ]
### Original Question What I'd like is not a standard C pre-processor, but a variation on it which would accept from somewhere - probably the command line via -DNAME1 and -UNAME2 options - a specification of which macros are defined, and would then eliminate dead code. It may be easier to understand what I'm after with some examples: ``` #ifdef NAME1 #define ALBUQUERQUE "ambidextrous" #else #define PHANTASMAGORIA "ghostly" #endif ``` If the command were run with '-DNAME1', the output would be: ``` #define ALBUQUERQUE "ambidextrous" ``` If the command were run with '-UNAME1', the output would be: ``` #define PHANTASMAGORIA "ghostly" ``` If the command were run with neither option, the output would be the same as the input. This is a simple case - I'd be hoping that the code could handle more complex cases too. To illustrate with a real-world but still simple example: ``` #ifdef USE_VOID #ifdef PLATFORM1 #define VOID void #else #undef VOID typedef void VOID; #endif /* PLATFORM1 */ typedef void * VOIDPTR; #else typedef mint VOID; typedef char * VOIDPTR; #endif /* USE_VOID */ ``` I'd like to run the command with `-DUSE_VOID -UPLATFORM1` and get the output: ``` #undef VOID typedef void VOID; typedef void * VOIDPTR; ``` Another example: ``` #ifndef DOUBLEPAD #if (defined NT) || (defined OLDUNIX) #define DOUBLEPAD 8 #else #define DOUBLEPAD 0 #endif /* NT */ #endif /* !DOUBLEPAD */ ``` Ideally, I'd like to run with `-UOLDUNIX` and get the output: ``` #ifndef DOUBLEPAD #if (defined NT) #define DOUBLEPAD 8 #else #define DOUBLEPAD 0 #endif /* NT */ #endif /* !DOUBLEPAD */ ``` This may be pushing my luck! Motivation: large, ancient code base with lots of conditional code. Many of the conditions no longer apply - the OLDUNIX platform, for example, is no longer made and no longer supported, so there is no need to have references to it in the code. Other conditions are always true. 
For example, features are added with conditional compilation so that a single version of the code can be used for both older versions of the software where the feature is not available and newer versions where it is available (more or less). Eventually, the old versions without the feature are no longer supported - everything uses the feature - so the condition on whether the feature is present or not should be removed, and the 'when feature is absent' code should be removed too. I'd like to have a tool to do the job automatically because it will be faster and more reliable than doing it manually (which is rather critical when the code base includes 21,500 source files). (A really clever version of the tool might read `#include`'d files to determine whether the control macros - those specified by -D or -U on the command line - are defined in those files. I'm not sure whether that's truly helpful except as a backup diagnostic. Whatever else it does, though, the pseudo-pre-processor must not expand macros or include files verbatim. The output must be source similar to, but usually simpler than, the input code.) ### Status Report (one year later) After a year of use, I am very happy with '[sunifdef](https://sourceforge.net/projects/sunifdef/)' recommended by the selected answer. It hasn't made a mistake yet, and I don't expect it to. The only quibble I have with it is stylistic. Given an input such as: ``` #if (defined(A) && defined(B)) || defined(C) || (defined(D) && defined(E)) ``` and run with '-UC' (C is never defined), the output is: ``` #if defined(A) && defined(B) || defined(D) && defined(E) ``` This is technically correct because '&&' binds tighter than '||', but it is an open invitation to confusion. 
I would much prefer it to include parentheses around the sets of '&&' conditions, as in the original: ``` #if (defined(A) && defined(B)) || (defined(D) && defined(E)) ``` However, given the obscurity of some of the code I have to work with, for that to be the biggest nit-pick is a strong compliment; it is a valuable tool to me. --- ### The New Kid on the Block Having checked the URL for inclusion in the information above, I see that (as predicted) there is a new program called [Coan](http://coan2.sourceforge.net/) that is the successor to 'sunifdef'. It is available on SourceForge and has been since January 2010. I'll be checking it out...further reports later this year, or maybe next year, or sometime, or never.
I know absolutely nothing about C, but it sounds like you are looking for something like [`unifdef`](https://github.com/fanf2/unifdef). Note that it hasn't been updated since 2000, but there is a successor called ["Son of unifdef" (sunifdef)](https://sourceforge.net/projects/sunifdef/).
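The core transformation these tools perform can be sketched in a few lines. Here is a deliberately naive Python version that handles only flat (non-nested) `#ifdef`/`#else`/`#endif` blocks, resolves symbols you declare defined or undefined, and passes unknown symbols through untouched - a toy, not a substitute for unifdef:

```python
def eliminate(lines, defined, undefined):
    """Partially evaluate single-level #ifdef blocks.

    defined/undefined are sets of macro names known to be on/off;
    blocks guarded by any other name are emitted unchanged."""
    out = []
    in_known = False  # inside a block whose guard we could resolve?
    keep = True       # are we currently emitting lines?
    for line in lines:
        s = line.strip()
        if s.startswith("#ifdef ") and not in_known:
            name = s.split()[1]
            if name in defined:
                in_known, keep = True, True
            elif name in undefined:
                in_known, keep = True, False
            else:
                out.append(line)  # unknown symbol: pass through untouched
        elif s == "#else" and in_known:
            keep = not keep
        elif s == "#endif" and in_known:
            in_known, keep = False, True
        elif keep:
            out.append(line)
    return out
```

Real tools (unifdef, sunifdef, coan) additionally evaluate `#if` expressions, handle nesting and `#ifndef`, and simplify surviving conditions - none of which this sketch attempts.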
Also you can try this tool: <http://coan2.sourceforge.net/>

Something like this will remove ifdef blocks:

```
coan source -UYOUR_FLAG --filter c,h --recurse YourSourceTree
```
Is there a C pre-processor which eliminates #ifdef blocks based on values defined/undefined?
[ "", "c++", "c", "preprocessor", "" ]
I seem to recall something about avoiding the Immediate If operator ([?:](http://msdn.microsoft.com/en-us/library/ty67wk28(VS.80).aspx)) in C#, but I don't know where I read it and what it was. I think it had to do with the fact that both the true and the false part are executed before deciding on the outcome of the condition. Is this correct? Or is this so in VB.Net?
It's actually called the conditional operator and is referred to as "?:" in the MSDN. It is basically a shorthand notation for `if-else`, except that it is actually an expression, not a statement. Since it's equivalent to `if`, there are no caveats to this operator. What you've read is possibly about the `Iif` function in VB.NET. Being a function, it evaluates all its arguments before being invoked, so ``` Dim s As String = Iif(person Is Nothing, String.Empty, person.FirstName) ``` will result in `NullReferenceException` being thrown whenever `person` is `Nothing`.
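That eager-evaluation pitfall is easy to demonstrate in any language by faking the conditional with a plain function. A Python sketch (Python's own `x if c else y` is lazy, like C#'s `?:`):

```python
def iif(cond, if_true, if_false):
    """Function-style Iif: both branch ARGUMENTS are evaluated before
    the call, unlike a real conditional expression."""
    return if_true if cond else if_false

evaluated = []

def branch(tag):
    evaluated.append(tag)  # record that this arm actually ran
    return tag

result = iif(True, branch("true-arm"), branch("false-arm"))
# both arms ran, even though only one value was used
```

With a real conditional expression, `branch("false-arm")` would never have executed; with the function it always does, which is exactly how the VB.NET `Iif` call above blows up.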
Only use it for simple things like ``` Console.WriteLine(MyBool ? "It's true!" : "Nope"); ``` If you try to add logic to the inside, then the code looks really bad.
Possible downside to immediate if operator (?:) in C#?
[ "", "c#", ".net", "" ]
How can I tell if my window is the current active window? My current guess is to do GetForegroundWindow and compare the HWND with that of my window. Is there a better method than that? I'm using Win32 API / MFC.
Yes, that's the only way that I'm aware of. But you have to handle the fact that GetForegroundWindow can return NULL. Typically, this happens when another desktop (e.g. the screen saver desktop) is active. Note that use of a screen saver password can affect whether a different desktop is used (this is Windows version-dependent and I can't remember the details of how different versions work). Also this code won't work properly in debug mode under Visual Studio, because you will get VS's window handle. Other than that everything's peachy :-)
You can try to use the WM_ACTIVATEAPP message. First define a bool variable `bool wActive = false;`. Then, in the WndProc procedure, add the following piece of code: ``` case WM_ACTIVATEAPP: wActive = (bool)wParam; return 0; ``` You can go to [MSDN](https://learn.microsoft.com/en-us/windows/win32/winmsg/wm-activateapp?redirectedfrom=MSDN) to find more information about `WM_ACTIVATEAPP`
How to determine if the current window is the active window?
[ "", "c++", "windows", "winapi", "mfc", "" ]
I have the following situation: my application's authorization mechanism is implemented using Spring security. The central class implements **AccessDecisionManager** and uses voters (each of which implements **AccessDecisionVoter**) to decide whether to grant access to some method or not. The algorithm that tallies the votes is custom: ``` public class PermissionManagerImpl extends AbstractAccessDecisionManager { public void decide( Authentication authentication, Object object, ConfigAttributeDefinition config) throws AccessDeniedException { Iterator<?> iter = getDecisionVoters().iterator(); boolean wasDenied = false; while (iter.hasNext()) { AccessDecisionVoter voter = (AccessDecisionVoter) iter.next(); int result = voter.vote(authentication, object, config); switch (result) { // Some tallying calculations } } if (wasDenied) { throw new AccessDeniedException("Access is denied"); } } } ``` Upon denying an access to some method, the client of the application is interested in obtaining an informative exception that specifies exactly **why** the access is denied. This implies passing some information from voters to the decision manager. Unfortunately, the only information the standard **AccessDecisionVoter** passes back to the decision manager is one of the possible return values (**ACCESS\_GRANTED**, **ACCESS\_ABSTAIN** or **ACCESS\_DENIED**). What is the best way to do it? Thanks.
Well, the `AccessDecisionVoter` interface actually returns an `int` in this situation. Granted, the built-in voter implementations always only return one of the three constants you mentioned (and these are what the standard access decision managers check for), but then they don't really have anything extra to return - the `RoleVoter` for instance will deny access if and only if the principal doesn't have the required role.

Since you're using your own implementations both of the voters and the access decision manager, you have several options available as I see it:

1. Return other values of integers as some form of error code; treat `ACCESS_GRANTED`, `ACCESS_ABSTAIN` and `ACCESS_DENIED` as their typical values, but treat any other integer as "access denied" with an error code. Ideally have a lookup table of error codes available - essentially a poor man's enum.
2. Within your voter, return `ACCESS_DENIED` as usual, and set some publicly accessible property (either on the voter object itself or perhaps some statically-accessible field) with the error reason. In your manager, if you get access denied from your custom voter, check the property to get the details.
3. As above, set an error property within the voter; but ensure that the instance of `Authentication` being passed in is one of your own custom subclasses that provides a good location to set/retrieve this information.
4. Throw an `AccessDeniedException` (or suitable subclass) from within your voter itself. This is not ideal as it presupposes the logic in the access decision manager; but you could either let this bubble straight up, or if needed catch it within the manager (a custom subclass would definitely be good for this) and rethrow if access really is denied (something similar to what the `ProviderManager` class does with its `lastException` variable).
None of these sticks out as the obviously correct and elegant answer, but you should be able to get something workable from whichever one is most appropriate. Since there is no *explicit* support within the voter framework for communicating reasons (it's a straight boolean response fundamentally) I don't think you can do much better.
Thanks for people who answered. I think I have found a quite elegant way to do what I wanted and still use the standard voters API. The 2nd parameter to the *vote* method of *AccessDecisionVoter* is the secured object. I can create a contract between the decision manager and the voters, that this object is of a specific class/interface that is a **wrapper**, through which the original secured object can be fetched and also additional information can be added by the voters that deny the access. I saw a pattern like this in other frameworks as well. This solution has the following advantages over other possible solutions: * The voters can remain stateless, so they can be singletons * The standard interface of the AccessDecisionVoter is used and no new return values are added * The additional information is saved in an object that is discarded automatically because no one uses it after the *AbstactDecisionManager*'s *decide* method, so no cleanup code is required Cheers.
How to throw an informative exception from AccessDecisionManager that uses voters
[ "", "java", "authorization", "spring-security", "" ]
I waste a lot of time between Vim and Python. I find it too slow to manually copy-paste from Python to Vim and vice versa. A good broken example is:

> %!python for i in xrange(25); print 6\*i \n

How can you do such tweaks directly in Vim? **[Solved]**

**[Clarification]** I need things to Vim, like printing sequences, arithmetics... - things I cannot do in Vim.

**[?]** Can someone elaborate this point: "your script can read from stdin to operate directly on the lines given *(., %, ...).*"

**[Further Clarification]** If I want to print 'Hello' to lines 4, 5, 6 and 7, what is wrong:

> :4-7!python -c "print 'hello'"

The dot . modifies the current line. Can I print on multiple lines 7, 32 and 99:

> :7,32,99!python -c "print 'hello'"

Clearly not working. How?
In any of your vim windows, type something like this:

```
for x in range(1,10):
    print '-> %d' % x
```

Visually select both of those lines (V to start visual mode), and type the following:

```
:!python
```

Because you pressed ':' in visual mode, that will end up looking like:

```
:'<,'>!python
```

Hit enter and the selection is replaced by the output of the `print` statements. You could easily turn it into a mapping:

```
:vnoremap <f5> :!python<CR>
```
> Can someone elaborate this point: "your script can read from stdin to operate directly on the lines given (., %, ...)."

One common use is to sort lines of text using the 'sort' command available in your shell. For example, you can sort the whole file using this command:

```
:%!sort
```

Or, you could sort just a few lines by selecting them in visual mode and then typing:

```
:!sort
```

You could sort lines 5-10 using this command:

```
:5,10!sort
```

You could write your own command-line script (presuming you know how to do that) which reverses lines of text. It works like this:

```
bash$ myreverse 'hello world!'
!dlrow olleh
```

You could apply it to one of your open files in vim in exactly the same way you used `sort`:

```
:%!myreverse      <- all lines in your file are reversed
```
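Since `myreverse` above is just a hypothetical script, here is one way such a stdin filter might be sketched in Python — Vim pipes the addressed lines to the script's stdin and replaces them with whatever it prints:

```python
import sys

def reverse_lines(lines):
    # Reverse the characters of each line; strip the newline first so it
    # doesn't end up at the front of the reversed text.
    return [line.rstrip("\n")[::-1] for line in lines]

def main():
    # Vim's :%!myreverse sends the buffer lines to stdin and
    # substitutes our stdout back into the buffer.
    for out in reverse_lines(sys.stdin):
        print(out)

# A real filter script would simply end with: main()
```

Saved as an executable `myreverse` on your PATH (with a `#!/usr/bin/env python` line and the `main()` call), `:%!myreverse` then behaves just like the `:%!sort` examples above.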
How can you use Python in Vim?
[ "", "python", "vim", "" ]
I want to disable Javascript once a page has already loaded. Why? Because I want to test how the behavior of something like the following 'degrades' when Javascript isn't available, but I don't want the hassle of going to the browser's top-level Javascript enable/disable feature. In addition, I specifically want to disable it **after the page has loaded** because I want to isolate my testing to how that one form would perform (I have jQuery running for the rest of the page and I don't want to lose that). Allowing me to disable JS for this code allows me to test the form postback as well as the AJAX postback.

```
<form action="/pseudovirtualdirectoryfortesting/company/Contact"
      id="fooForm" method="post"
      onsubmit="Sys.Mvc.AsyncForm.handleSubmit(this, new Sys.UI.DomEvent(event), {
          insertionMode: Sys.Mvc.InsertionMode.replace,
          loadingElementId: 'submitting',
          onBegin: Function.createDelegate(this, submitComments_begin),
          onComplete: Function.createDelegate(this, submitComments_complete),
          onFailure: Function.createDelegate(this, submitComments_failure),
          onSuccess: Function.createDelegate(this, submitComments_success) });">
```

What plug-ins or tactics could I use? I want to be able to test in different browsers, and some projects I work on are designed only for one browser (not my fault), so I need as many possible solutions as there are.
Get the [Web Developer Toolbar](https://addons.mozilla.org/en-US/firefox/addon/60) for Firefox. With it you can disable Javascript at any time, disable cookies, css, whatever you want.
Test in FireFox with the NoScript addon.
How can I temporarily disable javascript AFTER a page has loaded?
[ "", "javascript", "ajax", "" ]
I've been using Javascript's Date for a project, but noticed today that my code that previously worked is no longer working correctly. Instead of producing Feb as expected, the code below produces March. My code looks something like this:

```
current = new Date();
current.setMonth(current.getMonth()+1); //If today is Jan, expect it to be Feb now
```

This code worked every day up until today. Is this a Javascript bug or am I going about this the wrong way?
You'll probably find you're setting the date to Feb 31, 2009 (if today is Jan 31) and Javascript automagically rolls that into the early part of March. Check the day of the month; I'd expect it to be 1, 2 or 3. If it's not the same as before you added a month, roll back by one day until the month changes again. That way, the day "last day of Jan" becomes "last day of Feb".

EDIT: Ronald, based on your comments to other answers, you might want to steer clear of edge-case behavior such as "what happens when I try to make Feb 30" or "what happens when I try to make 2009/13/07 (yyyy/mm/dd)" (that last one might still be a problem even for my solution, so you should test it).

Instead, I would explicitly code for the possibilities. Since you don't care about the day of the month (you just want the year and month to be correct for next month), something like this should suffice:

```
var now = new Date();
if (now.getMonth() == 11) {
    var current = new Date(now.getFullYear() + 1, 0, 1);
} else {
    var current = new Date(now.getFullYear(), now.getMonth() + 1, 1);
}
```

That gives you Jan 1 the following year for any day in December and the first day of the following month for any other day. More code, I know, but I've long since grown tired of coding tricks for efficiency, preferring readability unless there's a clear requirement to do otherwise.
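The same "pin the day to 1 and special-case December" logic can be sketched outside JavaScript too; here it is in Python, purely as an illustration of the edge-case handling (not part of the original answer):

```python
from datetime import date

def first_of_next_month(today):
    # December rolls over to January of the following year; every other
    # month just increments. Pinning the day to 1 sidesteps the
    # "Jan 31 + 1 month = Mar 3" rollover problem entirely.
    if today.month == 12:
        return date(today.year + 1, 1, 1)
    return date(today.year, today.month + 1, 1)
```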
Instead, try:

```
var now = new Date();
current = new Date(now.getFullYear(), now.getMonth()+1, 1);
```
Javascript Date: next month
[ "", "javascript", "date", "" ]
I'm trying to make a method that returns a string of words in opposite order. For example, "The rain in Spain falls mostly on the" would return: "the on mostly falls Spain in rain The". For this I am not supposed to use any built-in Java classes, just basic Java. So far I have:

```
lastSpace = stringIn.length();

for (int i = stringIn.length() - 1; i >= 0; i--){
    chIn = stringIn.charAt(i);
    if (chIn == ' '){
        word = stringIn.substring(i + 1, lastSpace);
        stringOut.concat(word);
        lastS = i;
    }
}

word = stringIn.substring(0,lastSpace);
stringOut.concat(word);
return stringOut;
```

My problem is when `stringOut` is returned to its caller, it is always a blank string. Am I doing something wrong? Maybe my use of `string.concat()`?
In Java, Strings are immutable, i.e. they can't be changed. concat() returns a new string with the concatenation. So you want something like this: ``` stringOut = stringOut.concat(word); ``` or ``` stringOut += word ``` as Ray notes, there are more succinct ways to do this though.
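As a side note, the same trap exists in other languages with immutable strings; here is the analogous behaviour illustrated in Python (a cross-language aside only, not part of the Java answer):

```python
def concat_demo():
    s = "string"
    s.upper()        # returns a NEW string; s itself is untouched
    before = s
    s = s + "Out"    # "appending" means rebinding the name to a new string
    return before, s
```

The first element returned shows the original string was never modified in place; only the rebound name holds the concatenation.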
```
public String reverseWords(String words) {
    if(words == null || words.isEmpty() || !words.contains(" "))
        return words;

    String reversed = "";
    for(String word : words.split(" "))
        reversed = word + " " + reversed;

    return reversed.trim();
}
```

Only API used is String (which should be allowed when manipulating Strings...)
Returning a string entered as reverse text in Java
[ "", "java", "string", "" ]
I'm uploading a file to the server. The file upload HTML form has 2 fields:

1. File name - An HTML text box where the user can give a name in any language.
2. File upload - An HTML 'file' input where the user can specify a file from disk to upload.

When the form is submitted, the file contents are received properly. However, when the file name (point 1 above) is read, it is garbled. ASCII characters are displayed properly. When the name is given in some other language (German, French etc.), there are problems.

In the servlet method, the request's character encoding is set to UTF-8. I even tried doing a filter as mentioned in [How can I make this code to submit a UTF-8 form textarea with jQuery/Ajax work?](https://stackoverflow.com/questions/29751/problems-while-submitting-a-utf-8-form-textarea-with-jquery-ajax) - but it doesn't seem to work. Only the filename seems to be garbled.

The MySQL table where the file name goes supports UTF-8. I gave random non-English characters & they are stored/displayed properly. Using Fiddler, I monitored the request & all the POST data is passed correctly. I'm trying to identify how/where the data could get garbled. Any help will be greatly appreciated.
I had the same problem using Apache commons-fileupload. I did not find out what causes the problems, especially because I have the UTF-8 encoding in the following places:

1. HTML meta tag
2. Form accept-charset attribute
3. Tomcat filter on every request that sets the "UTF-8" encoding

My solution was to explicitly convert Strings from ISO-8859-1 (or whatever is the default encoding of your platform) to UTF-8:

```
new String (s.getBytes ("iso-8859-1"), "UTF-8");
```

hope that helps

Edit: starting with Java 7 you can also use the following:

```
new String (s.getBytes (StandardCharsets.ISO_8859_1), StandardCharsets.UTF_8);
```
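The byte-level round trip in this answer can be demonstrated in a few lines of Python (a sketch of the same idea, not the original Java): bytes that were really UTF-8 but got decoded as ISO-8859-1 are recovered by re-encoding and decoding:

```python
def fix_mojibake(garbled):
    # Undo a wrong ISO-8859-1 decode of bytes that were actually UTF-8 --
    # the equivalent of: new String(s.getBytes("iso-8859-1"), "UTF-8")
    return garbled.encode("iso-8859-1").decode("utf-8")

# Simulate the multipart bug: UTF-8 bytes read back with the wrong charset.
garbled = "Grüße".encode("utf-8").decode("iso-8859-1")
repaired = fix_mojibake(garbled)
```

This only works because ISO-8859-1 maps every byte to a character, so the wrong decode is losslessly reversible.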
Just use Apache commons upload library. Add `URIEncoding="UTF-8"` to Tomcat's connector, and use FileItem.getString("UTF-8") instead of FileItem.getString() without charset specified. Hope this help.
UTF-8 text is garbled when form is posted as multipart/form-data
[ "", "java", "jakarta-ee", "" ]
Say you have a CSS 2.1 counter like

```
ol {
    counter-reset: section;
    list-style-type: none;
}
li:before {
    counter-increment: section;
    content: counters(section, ".") " ";
}
```

```
<ol>
  <li>itemA</li>              <!-- 1 -->
  <li>itemB                   <!-- 2 -->
    <ol>
      <li>itemC</li>          <!-- 2.1 -->
      <li id="foo">itemD</li> <!-- 2.2 -->
```

(see <https://developer.mozilla.org/en/CSS_Counters> "nesting counters")

Is there a way to read/get the `:before.content` ("2.2" in this case) for `<li id="foo">` in JavaScript?

Edit: In my case a Mozilla-only solution would suffice. But there really seems to be no way to access this information. At least I didn't find any at <https://developer.mozilla.org/en/CSS_Counters> ff.
None that I can think of, no. :before pseudo-elements are not part of the DOM so there is no way to address their content. You could make a function that scanned the stylesheet's DOM for the :before rules and worked out which rules the browser had applied where, but it would be incredibly messy.
I thought about a workaround trying to get the .content value, but even that doesn't work because it's not been set. That's really quite shocking. I don't think there actually is any easy way to get this value! You could calculate it with some disgusting Javascript, but that would blow the whole point of this automatic CSS styling out of the water.
How can I read the applied CSS-counter value?
[ "", "javascript", "css", "" ]
I have an asp:Menu and it contains a top level menu item that points to <http://www.example.com/one.aspx>. When you hover over the top level menu item, it shows a dropdown and one of the selections is One which points to <http://www.example.com/one.aspx>. Apparently, I can't do this, so I have been putting a ? at the end of the second url to get around this. I was curious as to if it is possible to have two of urls pointing to the same location in an asp:Menu?
I had the same problem and used a slight variation of your solution: I added anchors to the URL, e.g: * <http://www.example.com/one.aspx#1> * <http://www.example.com/one.aspx#2> BTW: I was using a sitemap file, and when you add to nodes with the same URL to that file, then the error/exception is pretty clear: > Multiple nodes with the same URL > '/WebSite1/MyPage.aspx' were > found. XmlSiteMapProvider requires > that sitemap nodes have unique URLs. So I guess it really is not possible to have to identical URLs.
I don't believe so. The Menu control uses each item's NavigateUrl property as the identifier for highlighting and expanding the menu correctly - if you have two nodes with the same URL, how will the control know which one to highlight?
Can you have 2 url's that point to the same page in an asp:Menu control?
[ "", "c#", "asp.net", "aspmenu", "" ]
After I get the response from an HttpWebRequest, I'd like to save the cookies obtained so I can use them in another HttpWebRequest. However, I'd need to insert the CookieCollection into a CookieContainer. How do I do that? I tried:

```
request.CookieContainer.Add(response.Cookies);
```

but this keeps throwing the error: Object reference not set to an instance of an object.
```
request.CookieContainer = new CookieContainer();
request.CookieContainer.Add(response.Cookies);
```

According to Microsoft:

> CookieContainer is a null reference (Nothing in Visual Basic) by default. You must assign a CookieContainer object to the property to have cookies returned in the Cookies property of the HttpWebResponse returned by the GetResponse method.
``` request.CookieContainer.Add(response.Cookies); ```
How to insert CookieCollection to CookieContainer?
[ "", "c#", "cookies", "httpwebrequest", "" ]
Currently I'm using the auto-tools to build/install and package a project of mine, but I would really like to move to something that feels more "pythonic". My project consists of two scripts, one module, two glade GUI descriptions, and two .desktop files. It's currently a pure Python project, though that's likely to change soon-ish. Looking at setuptools I can easily see how to deal with everything except the .desktop files; they have to end up in a specific directory so that Gnome can find them. Is using distutils/setuptools a good idea to begin with?
**I managed to get this to work, but it kinda feels to me more like a *workaround*.**

Don't know what's the preferred way to handle this... I used the following `setup.py` file (full version is [here](https://gist.github.com/brutus/e4bdb1d2c1705558b2ff)):

```
from setuptools import setup

setup(
    # ...
    data_files=[
        ('share/icons/hicolor/scalable/apps', ['data/mypackage.svg']),
        ('share/applications', ['data/mypackage.desktop'])
    ],
    entry_points={
        'console_scripts': ['startit=mypackage.cli:run']
    }
)
```

The starter script through `entry_points` works. But the `data_files` were put in an egg file and not in the folders specified, so they can't be accessed by the desktop shell.

**To work around this, I used the following `setup.cfg` file:**

```
[install]
single-version-externally-managed=1
record=install.txt
```

This works. Both data files are created in the right place and the `.desktop` file is recognized by Gnome.
In general, yes - everything is better than autotools when building Python projects. I have good experiences with setuptools so far. However, installing files into fixed locations is not a strength of setuptools - after all, it's not something to build installers for Python apps, but to distribute Python libraries.

For the installation of files which are not application data files (like images, UI files etc) but provide integration into the operating system, you are better off using a real packaging format (like RPM or deb).

That said, nothing stops you from having the build process based on setuptools and a small make file for installing everything into its rightful place.
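As a rough illustration of why the install location matters: with the `data_files` layout discussed in this thread, an application might look for its own `.desktop` file somewhere under `sys.prefix`. This is a sketch under that assumption only — distro packages and non-standard installs are free to relocate these paths:

```python
import os
import sys

def desktop_file_path(package_name):
    # Assumes data_files landed under <prefix>/share/applications, as in
    # the setup.py shown in this thread; treat this as a best guess, not
    # a guarantee.
    return os.path.join(sys.prefix, "share", "applications",
                        package_name + ".desktop")
```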
How to distribute `.desktop` files and icons for a Python package in Gnome (with distutils or setuptools)?
[ "", "python", "packaging", "setuptools", "gnome", "distutils2", "" ]
Will calling close on my WCF service kill all resources or set them up for GC or should I set it to null also?
Firstly, WCF proxies are `IDisposable`, so you can *kind of* use `using`:

```
using(var proxy = new MyProxy())
{
    // see below - not quite enough
    // use proxy
}
```

**Unfortunately**, WCF also has a buggy `Dispose()` implementation that regularly throws exceptions. However, [here's a really cool trick](http://www.iserviceoriented.com/blog/post/Indisposable+-+WCF+Gotcha+1.aspx) to get it to work correctly. I also blogged about this [myself](http://marcgravell.blogspot.com/2008/11/dontdontuse-using.html), but I think the first link is a lot better. So: use `IDisposable` and `using`, but use it with caution (in this case).

Setting a field *usually* makes no difference. There are a few edge-cases (such as variables captured by multiple delegates, static fields, long-life objects, etc), but in general leave it alone. In particular, do *not* do this, as this can theoretically *extend* the life:

```
if(field != null)
    field = null; // BAD
```
This is not so much a WCF question as a .NET question; see also [Setting Objects to Null/Nothing after use in .NET](https://stackoverflow.com/questions/2785/setting-objects-to-null-nothing-after-use-in-dot-net) [Is disposing this object, enough? or do i need to do more?](https://stackoverflow.com/questions/328743/is-disposing-this-object-enough-or-do-i-need-to-do-more) [In the Dispose(bool) method implementation, Shouldn't one set members to null?](https://stackoverflow.com/questions/456551/in-the-disposebool-method-implementation-shouldnt-one-set-members-to-null)
Will calling close() on my WCF service release all resources?
[ "", "c#", "wcf", "garbage-collection", "" ]
I have a table which uses three columns as a composite key. One of these column values is used as a sequence tracker for ordered related records. When I insert a new record I have to increment the sequence numbers for the related records that come after the new record. I can do this directly in SQL Server Management Studio, but when I attempt this in LINQ I get the following error:

```
Value of member 'Sequence' of an object of type 'TableName' changed.
A member defining the identity of the object cannot be changed.
Consider adding a new object with new identity and deleting the existing one instead.
```

Can anyone suggest a way around this limitation? (Adding a new record (as suggested by the error message) isn't really an option as the table with the composite key has a relationship with another table.)
The fix we implemented was as follows:

* Deleted the relationship that used the composite key
* Added an autoincrement ID field and set that as the primary key
* Added a unique constraint to the three fields that we were previously using as our primary key
* Re-created the relationship using the three fields that were previously our primary key
Changing primary keys is a "code smell" in my book.
Edit composite key value using LINQ
[ "", "c#", ".net", "sql-server", "linq", "" ]
I am trying to write a view that will generate a report which displays all Items within my Inventory system, and provide summaries at a certain point. This report is purely just an HTML template, by the way.

In my case, each Item is part of an Order. An Order can have several Items, and I want to be able to display SUM-based summaries after the end of each Order. So the report kind of looks like this:

```
Order #25         <Qty>  <Qty Sold>  <Cost>  <Cost Value>
Some Item           2        1       29.99      29.99
Another Item        4        0       10.00      40.00
<Subtotal Line>     6        1       39.99      69.99

Order #26         <Qty>  <Qty Sold>  <Cost>  <Cost Value>
...

Etc, you get the point
```

Now, I'm perfectly capable of displaying all the values and already have a report showing all the Items, but I have no idea how I can place subtotals within the report like that without doing **a lot** of queries. The Quantity, Qty Sold, and Cost fields are just part of the Item model, and Cost Value is just a simple model function.

Any help would be appreciated. Thanks in advance :-)
Subtotals are `SELECT SUM(qty) GROUP BY order_number` things. They are entirely separate from a query to get details. The results of the two queries need to be interleaved.

A good way to do this is to create each order as a tuple `( list_of_details, appropriate summary )`. Then the display is easy:

```
{% for order in orderList %}
    {% for line in order.0 %}
        {{ line }}
    {% endfor %}
    {{ order.1 }}
{% endfor %}
```

The hard part is interleaving the two queries.

```
details = Line.objects.all()
ddict = defaultdict( list )
for d in details:
    ddict[d.order_number].append(d)

interleaved = []
subtotals = ... Django query to get subtotals ...
for s in subtotals:
    interleaved.append( ( ddict[s.order], s.totals ) )
```

This `interleaved` object can be given to your template for rendering.
You could compute the subtotals in Python in the Django view. The sub-totals could be stored in instances of the Model object with an attribute indicating that it's a sub-total. To keep the report template simple you could insert the sub-total objects in the right places in the result list and use the sub-total attribute to render the sub-total lines differently.
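Stripped of the ORM, the grouping-plus-subtotal step described in both answers can be sketched with `itertools.groupby`. The field names here (and costs kept in integer cents so the sums stay exact) are illustrative assumptions, not the asker's actual models:

```python
from collections import namedtuple
from itertools import groupby
from operator import attrgetter

# A stand-in for an Item/detail row; costs are in cents to keep sums exact.
Line = namedtuple("Line", "order_number name qty cost")

def report_with_subtotals(lines):
    """Group detail lines by order and attach a (qty, cost) subtotal per group."""
    key = attrgetter("order_number")
    report = []
    for order, group in groupby(sorted(lines, key=key), key=key):
        details = list(group)
        subtotal = (sum(d.qty for d in details),
                    sum(d.cost for d in details))
        report.append((order, details, subtotal))
    return report
```

Each `(order, details, subtotal)` tuple maps directly onto the template loop shown above: the details render as rows, the subtotal as the summary line.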
Generating lists/reports with in-line summaries in Django
[ "", "python", "django", "list", "report", "" ]
```
#include <iostream>
#include <vector>

using namespace std;

int main()
{
    vector< vector<int> > dp(50000, vector<int>(4, -1));
    cout << dp.size();
}
```

This tiny program takes a split second to execute when simply run from the command line. But when run in a debugger, it takes over 8 seconds. Pausing the debugger reveals that it is in the middle of destroying all those vectors. WTF?

Note - Visual Studio 2008 SP1, Core 2 Duo 6700 CPU with 2GB of RAM.

**Added:** To clarify, no, I'm not confusing Debug and Release builds. These results are on one and the same .exe, without even any recompiling in between. In fact, switching between Debug and Release builds changes nothing.
Running in the debugger changes the memory allocation library used to one that does a lot more checking. A program that does nothing but memory allocation and de-allocation is going to suffer much more than a "normal" program.

**Edit** Having just tried running your program under VS, I get a call stack that looks like

```
ntdll.dll!_RtlpValidateHeapEntry@12() + 0x117 bytes
ntdll.dll!_RtlDebugFreeHeap@12() + 0x97 bytes
ntdll.dll!_RtlFreeHeapSlowly@12() + 0x228bf bytes
ntdll.dll!_RtlFreeHeap@12() + 0x17646 bytes
msvcr90d.dll!_free_base(void * pBlock=0x0061f6e8) Line 109 + 0x13 bytes
msvcr90d.dll!_free_dbg_nolock(void * pUserData=0x0061f708, int nBlockUse=1)
msvcr90d.dll!_free_dbg(void * pUserData=0x0061f708, int nBlockUse=1)
msvcr90d.dll!operator delete(void * pUserData=0x0061f708)
desc.exe!std::allocator<int>::deallocate(int * _Ptr=0x0061f708, unsigned int __formal=4)
desc.exe!std::vector<int,std::allocator<int> >::_Tidy() Line 1134 C++
```

which shows the debug functions in ntdll.dll and the C runtime being used.
The debug heap automatically gets enabled when you start your program in the debugger, as opposed to attaching to an already-running program with the debugger. The book *[Advanced Windows Debugging](http://www.advancedwindowsdebugging.com/)* by Mario Hewardt and Daniel Pravat has some decent information about the Windows heap, and it turns out that the chapter on heaps is [up on the web site as a sample chapter](http://www.advancedwindowsdebugging.com/ch06.pdf). Page 281 has a sidebar about "Attaching Versus Starting the Process Under the Debugger": > When starting the process under the > debugger, the heap manager modifies > all requests to create new heaps and > change the heap creation flags to > enable debug-friendly heaps (unless > the \_NO\_DEBUG\_HEAP environment > variable is set to 1). In comparison, > attaching to an already-running > process, the heaps in the process have > already been created using default > heap creation flags and will not have > the debug-friendly flags set (unless > explicitly set by the application). (Also: [a semi-related question](https://stackoverflow.com/questions/267540/really-strange-problem-about-access-violation/267584), where I posted part of this answer before.)
Weird behaviour of C++ destructors
[ "", "c++", "visual-studio-2008", "debugging", "destructor", "" ]
In my web app, I use the **onkeydown** event to capture key strokes. For example, I capture the 'j' key to animate a scroll down the page (and do some other stuff meanwhile). My problem is the user might keep the 'j' key down to scroll further down the page (this is equivalent to fast multiple key strokes). In my app, this result in a series of animations that doesn't look that good. How can I know when the key has been released, and know the amount of key stokes I should have captured? This way I could run one long animation instead of multiple short ones.
Building on [@JMD](https://stackoverflow.com/users/56793/jmd):

```
var animate = false;

function startanimation() {
    animate = true;
    runanimation();
}

function stopanimation() {
    animate = false;
}

function runanimation() {
    if ( animation_over ) {
        if ( !animate ) {
            return;
        }
        return startanimation();
    }

    // animation code

    var timeout = 25;
    setTimeout(function(){runanimation();}, timeout);
}

document.onkeydown = startanimation;
document.onkeyup = stopanimation;
```

You'll need to add some checks for starting/ending animations, however.

**Edit:** added a `return` to the JS; would've recursed endlessly.
Rather than trying to stack up the animations, you could start an animation on keyDown, and if at the end of the animation you haven't yet received keyUp then start another animation. As soon as you reach the end of an animation and you *do* have keyUp then you're done.
capture multiple "onkeydown" and wait until "onkeyup" to execute
[ "", "javascript", "events", "keyboard", "" ]
I have a bunch of input elements that have a particular substring in their IDs. Using javascript, is there a way to get these elements as an array? I wouldn't know the full ID - only the substring. Is this any simpler if I use JQuery?
Quite easy with jQuery. Example:

```
$("li[id^='comment']")
```

Select all "li" where id starts with "comment".

**EDIT** To get those into an array:

```
var myArray = new Array;
$("li[id^='comment']").each(function() {
    var thisId = $(this).attr("id");
    myArray.push(thisId);
});
```
How about a non-jQuery answer...hmmm!?

```
function getAndFilter(elems, filter) {
    var length = elems.length,
        ret = [];

    while (length--) {
        if(filter(elems[length])) {
            ret[ret.length] = elems[length];
        }
    }
    return ret;
}

getAndFilter(document.getElementsByTagName('input'), function(input) {
    // Your custom logic/rule goes here:
    return input.id.substr(0,5) === 'HELLO';
});
```
Getting elements using Javascript
[ "", "javascript", "jquery", "" ]
So let's say I have this interface:

```
public interface IBox {
    public void setSize(int size);
    public int getSize();
    public int getArea();
    //...and so on
}
```

And I have a class that implements it:

```
public class Rectangle implements IBox {
    private int size;
    //Methods here
}
```

If I wanted to use the interface IBox, I can't actually create an instance of it, in the way:

```
public static void main(String args[]) {
    IBox myBox = new IBox();
}
```

right? So I'd actually have to do this:

```
public static void main(String args[]) {
    Rectangle myBox = new Rectangle();
}
```

If that's true, then is the only purpose of interfaces to make sure that the class which implements an interface has got the correct methods in it, as described by the interface? Or is there any other use of interfaces?
Interfaces are a way to make your code more flexible. What you do is this:

```
IBox myBox = new Rectangle();
```

Then, later, if you decide you want to use a different kind of box (maybe there's another library, with a better kind of box), you switch your code to:

```
IBox myBox = new OtherKindOfBox();
```

Once you get used to it, you'll find it's a great (actually essential) way to work.

Another reason is, for example, if you want to create a list of boxes and perform some operation on each one, but you want the list to contain different kinds of boxes. On each box you could do:

```
myBox.close()
```

(assuming IBox has a close() method) even though the actual class of myBox changes depending on which box you're at in the iteration.
What makes interfaces useful is **not** the fact that "you can change your mind and use a different implementation later and only have to change the one place where the object is created". That's a non-issue. The real point is already in the name: they define an *interface* that anyone at all can implement to use all code that operates on that interface. The best example is `java.util.Collections` which provides all kinds of useful methods that operate exclusively on interfaces, such as `sort()` or `reverse()` for `List`. The point here is that this code can now be used to sort or reverse **any** class that implements the `List` interfaces - not just `ArrayList` and `LinkedList`, but also classes that you write yourself, which may be implemented in a way the people who wrote `java.util.Collections` never imagined. In the same way, you can write code that operates on well-known interfaces, or interfaces you define, and other people can use your code without having to ask you to support their classes. Another common use of interfaces is for Callbacks. For example, [java.swing.table.TableCellRenderer](http://java.sun.com/javase/6/docs/api/javax/swing/table/TableCellRenderer.html), which allows you to influence how a Swing table displays the data in a certain column. You implement that interface, pass an instance to the `JTable`, and at some point during the rendering of the table, your code will get called to do its stuff.
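The same "code against the contract" idea exists in other languages too; as a cross-language aside (not part of the Java discussion), Python expresses it with abstract base classes:

```python
from abc import ABC, abstractmethod

class Box(ABC):
    @abstractmethod
    def get_area(self):
        """Every concrete Box must say how big it is."""

class Rectangle(Box):
    def __init__(self, width, height):
        self.width, self.height = width, height

    def get_area(self):
        return self.width * self.height

def total_area(boxes):
    # Works for ANY Box implementation, including ones written long after
    # this function -- the same point made above about sort()/Collections.
    return sum(box.get_area() for box in boxes)
```

Code written against `Box` never needs to know which concrete classes exist, which is exactly what makes the interface, rather than the implementation, the reusable part.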
Is there more to an interface than having the correct methods
[ "", "java", "oop", "language-features", "interface", "" ]
I'm refactoring PHP code on Zend, and the code is full of `$_GET["this"]` and `$_POST["that"]`. I have always used the more PHP-ish `$this->_request->getPost('this')` and `$this->_request->getQuery('that')` (the latter being somewhat less logical, with getQuery instead of getGet). So I was wondering if my method is safer/better/easier to maintain. I read in the Zend Framework documentation that you must validate your own input, since the request object won't do it. That leaves me with 2 questions:

* Which of these two is best? (or is there another, better way?)
* What is the best practice for validating PHP input with these methods?

Thanks!
I usually use $this->\_request->getParams(); to retrieve either the post or the URL parameters. Then I use [Zend\_Filter\_Input](http://framework.zend.com/manual/en/zend.filter.input.html) to do validation and filtering. getParams() does not do validation. Using Zend\_Filter\_Input you can do application-level validation, using the Zend validators (or you can write your own too). For example, you can make sure the 'month' field is a number:

```
$data = $this->_request->getParams();
$filters = array(); // no filters in this example, only validators
$validators = array(
    'month' => 'Digits',
);
$input = new Zend_Filter_Input($filters, $validators, $data);
```
Extending Brian's answer. As you noted, you can also check out `$this->_request->getPost()` and `$this->_request->getQuery()`. If you generalize on `getParams()`, it's sort of like using the `$_REQUEST` superglobal, and I don't think that's acceptable in terms of security. In addition to Zend\_Filter, you may also use simple PHP to cast values to the required type. E.g.:

```
$id = (int) $this->_request->getQuery('id');
```

For other values it gets more complicated, so make sure to e.g. quote in your DB queries ([Zend\_Db](http://framework.zend.com/manual/en/zend.db.html), see quoting identifiers, `$db->quoteIdentifier()`) and in views use `$this->escape($var);` to escape content.
Handling input with the Zend Framework (Post,get,etc)
[ "", "php", "zend-framework", "post", "input", "get", "" ]
My application (written in WPF/C#) will monitor a live video source and will need to store small video segments and image stills. I'm hesitant to use a user-defined "temp" directory, since this would potentially allow a user to modify or delete these files, causing major issues with my application. Alternatively, I have attempted to store these items in memory (a BitmapSource object for the images) but cannot find an equivalent object for video. Should I bite the bullet and use a temp directory and simply use the image's URI, or is there a better approach? Thank you in advance!
Couldn't you use the appdata store rather than a user-selected store?

```
System.Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData)
```
Maybe you want to look at [Isolated Storage](http://msdn.microsoft.com/en-us/library/3ak841sy.aspx) provided by .NET. That should do the job, I think.
What is the best practice for storing and processing images and videos?
[ "", "c#", "wpf", "video-capture", "" ]
I want to rewrite this query so that if @UserName is passed a null value, it leaves Client\_User out of the search criteria. I only want it to search by name if the user enters a name in the username textbox on the webform. I'm not sure how to do this.

```
select *
from weblogs.dbo.vwlogs
where Log_time between @BeginDate and @EndDate
and client_user = @UserName
```
```
select *
from weblogs.dbo.vwlogs
where Log_time between @BeginDate and @EndDate
and (@UserName is null or client_user = @UserName)  -- this line handles a null @UserName
```
```
select *
from weblogs.dbo.vwlogs
where Log_time between @BeginDate and @EndDate
and (client_user = @UserName or @UserName is null)
```
Search by parameter only if user enters a value
[ "", "asp.net", "sql", "sql-server", "" ]
I am looking for a way to do daily deployments and keep the database scripts in line with releases. Currently, we have a fairly decent way of deploying our source; we have unit code coverage, continuous integration and rollback procedures. The problem is keeping the database scripts in line with a release. Everyone seems to try the script out on the test database and then run it on live; when the ORM mappings are updated (that is, the change goes live), it picks up the new column. The first problem is that none of the scripts HAVE to be written down anywhere. Generally everyone "attempts" to put them into a Subversion folder, but some of the lazier people just run the script on live, and most of the time no one knows who has done what to the database. The second issue is that we have 4 test databases and they are ALWAYS out of line, and the only way to truly line them back up is to do a restore from the live database. I am a big believer that a process like this needs to be simple, straightforward and easy to use in order to help a developer, not hinder them. What I am looking for are techniques/ideas that make it EASY for the developer to want to record their database scripts so they can be run as part of a release procedure. **A process that the developer would want to follow**. Any stories, use cases or even a link would be helpful.
For this very problem I chose to use a migration tool: [Migratordotnet](http://code.google.com/p/migratordotnet/). With migrations (in any tool) you have a simple class used to perform your changes and undo them. Here's an example: ``` [Migration(62)] public class _62_add_date_created_column : Migration { public void Up() { //add it nullable Database.AddColumn("Customers", new Column("DateCreated", DateTime) ); //seed it with data Database.Execute("update Customers set DateCreated = getdate()"); //add not-null constraint Database.AddNotNullConstraint("Customers", "DateCreated"); } public void Down() { Database.RemoveColumn("Customers", "DateCreated"); } } ``` This example shows how you can handle volatile updates, like adding a new not-null column to a table that has existing data. This can be automated easily, and you can easily go up and down between versions. This has been a really valuable addition to our build, and has streamlined the process **immensely**. I posted a comparison of the various migration frameworks in .NET here: <http://benscheirman.com/2008/06/net-database-migration-tool-roundup>
Read [K.Scott Allen's series of posts on database versioning](http://odetocode.com/Blogs/scott/archive/2008/01/31/11710.aspx). We built a tool for applying database scripts in a controlled manner based on the techniques he describes, and it works well. This could then be used as part of the continuous integration process, with each test database having changes deployed to it when a commit is made to the URL you keep the database upgrade scripts in. I'd suggest having a baseline script and upgrade scripts so that you can always run a sequence of scripts to get a database from its current version to the new state that is needed. This does still require some process and discipline from the developers though (all changes need to be rolled into a new version of the base install script and a patch script).
Database Deployment Strategies (SQL Server)
[ "", "c#", "asp.net", "sql-server-2005", "svn", "visual-studio-2005", "" ]
I have two classes: `Action` and `MyAction`. The latter is declared as:

```
class MyAction extends Action {/* some methods here */}
```

All I need is a method in the `Action` class (only in it, because there will be a lot of inherited classes, and I don't want to implement this method in all of them) which will return the class name from a static call. Here is what I'm talking about:

```
class Action {
    function n(){/* something */}
}
```

And when I call it:

```
MyAction::n(); // it should return "MyAction"
```

But each declaration in the parent class has access only to the parent class's `__CLASS__` variable, which has the value "Action". Is there any possible way to do this?
`__CLASS__` always returns the name of the class in which it was used, so it's not much help with a static method. If the method weren't static, you could simply use [get\_class](http://php.net/get_class)($this). e.g.

```
class Action
{
    public function n(){ echo get_class($this); }
}

class MyAction extends Action
{
}

$foo=new MyAction;
$foo->n(); //displays 'MyAction'
```

## Late static bindings, available in PHP 5.3+

Now that PHP 5.3 is released, you can use [late static bindings](http://php.net/oop5.late-static-bindings), which let you resolve the target class for a static method call at runtime rather than when it is defined. While the feature does not introduce a new magic constant to tell you the classname you were called through, it does provide a new function, [get\_called\_class()](http://php.net/get_called_class), which can tell you the name of the class a static method was called in. Here's an example:

```
class Action
{
    public static function n()
    {
        return get_called_class();
    }
}

class MyAction extends Action
{
}

echo MyAction::n(); //displays MyAction
```
Since 5.5 you can [use `class` keyword for the class name resolution](http://php.net/manual/en/language.oop5.basic.php#language.oop5.basic.class.class), which would be a lot faster than making function calls. Also works with interfaces. ``` // C extends B extends A static::class // MyNamespace\ClassC when run in A self::class // MyNamespace\ClassA when run in A parent::class // MyNamespace\ClassB when run in C MyClass::class // MyNamespace\MyClass ```
How can I get the classname from a static call in an extended PHP class?
[ "", "php", "oop", "inheritance", "" ]
I'm trying to write a simple raytracer as a hobby project, and it's all working fine now, except I can't get soft shadows to work at all. My idea of soft shadows is that the lightsource is considered to have a location and a radius. To do a shadow test on this light, I take the point where the primary ray hit an object in the scene and cast n rays towards the lightsource, where each new ray has a random component on every axis, and the random component varies between -radius and radius. If such a ray hits an object in the scene, I increment a hit counter (if a ray hits multiple objects, it still only increments by one). If it makes it to the lightsource without collisions, I add the distance from the primary ray's intersect point to the lightsource's center to a variable. When n samples have been taken, I calculate the ratio of rays that have collided and multiply the color of the light by this ratio (so a light with color 1000,1000,1000 will become 500,500,500 with a ratio of 0.5, where half the rays have collided). Then I calculate the average distance to the lightsource by dividing the distance variable from earlier by the number of non-colliding rays. I return that variable and the function exits. The problem is: it doesn't work. Not quite, at least. What it looks like can be seen [here](http://img16.imageshack.us/img16/5376/719sv8.png). You can see it sort of resembles soft shadows, if you squint real hard. I don't get it: am I making some fundamental mistake here, or is it something tiny? I'm fairly sure the problem is in this method, because when I count the number of partially lit pixels produced directly by this method, there are only about 250, when there should be a lot more. And when you look closely at the picture, you can see there are some partially lit pixels, suggesting the rest of the code processes the partially lit pixels just fine.
Here's the actual light for soft-shadows class: ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace MyFirstRayTracer { public class AreaLight : ILight { private const int _radius = 5; private const int _samples = 16; public Color Color { get; set; } public Vector Location { get; set; } #region ILight Members public float GetLightingInformation(Vector point, ISceneObject[] scene, out Color color) { int intersectCount = 0; float distance = -1; for(int i = 0; i < _samples; i++) { bool intersects = false; float rand = 0; rand = _radius - (float)(new Random().NextDouble()*(2*_radius)); foreach (ISceneObject obj in scene) { Vector iPoint; Vector loc = new Vector(Location.X + rand, Location.Y + rand, Location.Z + rand); if (!obj.Intersect(new Ray(point, loc), out iPoint)) { distance += (Location - point).SqLength; } else { intersects = true; distance -= (Location - point).SqLength; } } if (intersects) intersectCount++; } float factor = 1-((float)intersectCount/_samples); color = new Color(factor*Color.R, factor*Color.G, factor*Color.B); return (float)Math.Sqrt(distance / (_samples - intersectCount)); } #endregion } } ```
Try generating a different "rand" for each component of "loc". As is, your jittered points all lie on a line.
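To sketch that fix (shown in Java here purely for illustration, since the idea is language-independent; the names are invented): draw an independent random offset for each axis, so the sample points fill a cube around the light instead of sliding along a single line.

```java
import java.util.Random;

public class JitterSketch {
    // One independent random offset per axis. Reusing a single offset for
    // x, y and z (the bug discussed above) keeps every sample point on the
    // line through the light centre in the direction (1, 1, 1).
    static double[] jitter(double x, double y, double z,
                           double radius, Random rng) {
        return new double[] {
            x + (rng.nextDouble() * 2.0 - 1.0) * radius,
            y + (rng.nextDouble() * 2.0 - 1.0) * radius,
            z + (rng.nextDouble() * 2.0 - 1.0) * radius
        };
    }

    public static void main(String[] args) {
        // One shared (here seeded, for repeatability) generator,
        // not a new Random() per sample.
        Random rng = new Random(42);
        for (int i = 0; i < 4; i++) {
            double[] p = jitter(10.0, 5.0, 0.0, 5.0, rng);
            System.out.printf("%.3f %.3f %.3f%n", p[0], p[1], p[2]);
        }
    }
}
```

With one shared offset, every sample satisfies (px - x) = (py - y) = (pz - z); with three independent draws they do not, which is what actually spreads the shadow samples over the light's area.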
A minor point, but is this the best use of the Random class?

```
for(int i = 0; i < _samples; i++)
{
    bool intersects = false;
    float rand = 0;
    rand = _radius - (float)(new Random().NextDouble()*(2*_radius));
```

Shouldn't this be:

```
var rnd = new Random();
for(int i = 0; i < _samples; i++)
{
    bool intersects = false;
    float rand = 0;
    rand = _radius - (float)(rnd.NextDouble()*(2*_radius));
```
What's wrong with my soft-shadow code?
[ "", "c#", "algorithm", "raytracing", "shadow", "" ]
I am trying to replace multiple rows in an Access database to follow a new set of data rules. For instance, the word Fort in Fort Myers is listed as Ft., Ft and Fort. I would like to make a global change to the group. I am familiar with the SQL replace command, but am wondering if anyone has done something similar with a stored procedure or has had experience with something like this.
You have to be **really, really careful** that you don't replace more than what you intend. **MAKE A BACKUP first in case things go horribly wrong.** Always start with a SELECT to filter the records first. Go over the results carefully. ``` SELECT * FROM Table WHERE City LIKE "%Ft. Myers%" ``` Then do the Replaces as Carlton said.
Harder than it sounds to the lay person ... There is no way around it but making a Replace for each thing you don't like, changing it into what you do like. BUT BE VERY CAREFUL ... unintended consequences and all. I recommend doing a SELECT before every UPDATE to see exactly what you will be updating. So in your instance of Fort Myers you have to do 3 replaces:

```
UPDATE Table SET City = REPLACE(City, 'Ft. Myers', 'Fort Myers');
UPDATE Table SET City = REPLACE(City, 'Ft Myers', 'Fort Myers');
UPDATE Table SET City = REPLACE(City, 'Fort. Myers', 'Fort Myers');
```

If you have a lot of data and many things to change, this could be a HUGE task. But there is no "automated" way to do it - SQL does not use fuzzy logic, you have to specify exactly everything you want it to do.
How to use the SQL replace function effectively?
[ "", "sql", "t-sql", "ms-access", "" ]
Here is the problem I'm running into. There's a huge legacy application that runs on Java 1.3 and uses an external API, say, MyAPI v1.0. The exact implementation of MyAPI 1.0 is located somewhere in the classpath used by the app. There's also a mechanism that allows that app to use external code (some kind of plugin mechanism). Now I have another Java library (MyLib.jar) that uses MyAPI v2.0 (which is NOT 100% backward compatible with v1.0), and I have to use it from the original app through that plugin mechanism. So I have to somehow let two (incompatible!) versions of the same API work together. Specifically, I want to use MyAPI v2.0 when API classes are invoked from MyLib.jar classes and use MyAPI 1.0 in all other cases. MyAPI 1.0 is in the classpath, so it will be used by default; that's ok. I can create my own version of a class loader to load classes from MyAPI 2.0 - no problem. But how do I fit it all together? Questions: 1. MyLib.jar objects instantiate a lot(!) of instances of classes from MyAPI 2.0. Does this mean that I will have to do ALL these instantiations via reflection (specifying my own classloader)? That's a hell of a lot of work! 2. If some MyAPI 2.0 object gets instantiated and it internally instantiates another object from MyAPI, which classloader will it use? Would it use my class loader or the default one? 3. Just generally, does my approach sound reasonable? Is there a better way?
Let's start by answering your 2nd question: when one class refers to another, the referenced class will be loaded by the same classloader that loaded the referring class (note that by default a classloader first delegates lookups to its parent before searching itself). Having said that, why not have your entire MyLib.jar loaded by a custom classloader? Then it can refer to the newer version of the API in the regular way. Otherwise you have a problem, because you will have to work with the Object type and reflection all the way through.
You need to be careful with class loaders. If you do what you were suggesting, you would almost always end up with MyAPI 1.0, even when using your class loader for MyAPI 2.0. The reason for this is how classes are loaded using the class loader. Classes are always loaded from the parent class loader first. "The ClassLoader class uses a delegation model to search for classes and resources. Each instance of ClassLoader has an associated parent class loader. When requested to find a class or resource, a ClassLoader instance will delegate the search for the class or resource to its parent class loader before attempting to find the class or resource itself. The virtual machine's built-in class loader, called the "bootstrap class loader", does not itself have a parent but may serve as the parent of a ClassLoader instance. " (<http://java.sun.com/javase/6/docs/api/java/lang/ClassLoader.html>) To provide isolation between the two APIs properly, you would need 2 class loaders (or 2 in addition to the main application one).

```
Parent - System classloader
|- Classloader1 - Used to load MyAPI 1.0
|- Classloader2 - Used to load MyAPI 2.0
```

Now to your questions. What you probably want to do is move most of the logic that uses the API into the classloaders. In addition to MyAPI 1.0/2.0, you should load the part of the application that uses those. Then the parent application just has to call a method that uses the API. This way you make a single reflection call to start the application, and everything inside that application just uses standard references.
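The parent-first delegation described above can be observed directly with a tiny, self-contained sketch (standard library only; the real two-API setup would additionally put each MyAPI jar on its own loader's URL list):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class DelegationDemo {
    // A URLClassLoader with no URLs of its own: every lookup falls through
    // to the parent, so the parent ends up defining the class.
    static Class<?> loadViaChild(String name) {
        try {
            URLClassLoader child =
                    new URLClassLoader(new URL[0], ClassLoader.getSystemClassLoader());
            return child.loadClass(name);
        } catch (ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Class<?> viaChild = loadViaChild("java.lang.String");
        // Parent-first delegation: the child never defines the class itself,
        // so both references are the exact same Class object.
        System.out.println(viaChild == String.class); // true
    }
}
```

Because the child delegates upward before looking at its own URLs, both loaders yield the identical `Class` object; isolating MyAPI 2.0 therefore only works if the 2.0 jar is *not* visible on the parent's classpath.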
My own classloader?
[ "", "java", "classloader", "" ]
Can an iPhone application read cookies previously stored by Safari Mobile?
To actually *answer* your question: No. Mobile Safari's cookies are not accessible from SDK apps. And each SDK app is given its own WebKit cache and cookie stores, so while cookies will persist within the same app, they aren't accessible between apps.
As of iOS 9 this is possible! Use an [SFSafariViewController](https://developer.apple.com/library/ios/documentation/SafariServices/Reference/SFSafariViewController_Ref/). You will need to set up:

* A [custom URL scheme](http://help.contrast.co/hc/en-us/articles/200865293-URL-Scheme-Intro-for-App-Developers) in your app to receive cookie data.
* The website you are getting cookies from will need to implement an API specific to your app's custom URL scheme, to redirect back to your app.

You can clone this [repo](https://github.com/mackuba/SafariAutoLoginTest), which has a fully working demo of this. Hope this helps, Liam
Can an iPhone xcode application read cookies previously stored by Safari Mobile?
[ "", "javascript", "cocoa-touch", "safari", "" ]
I currently have my PHP class variables set up like this: ``` class someThing { private $cat; private $dog; private $mouse; private $hamster; private $zebra; private $lion; //getters, setters and other methods } ``` But I've also seen people using a single array to store all the variables: ``` class someThing { private $data = array(); //getters, setters and other methods } ``` Which do you use, and why? What are the advantages and disadvantages of each?
Generally, the first is better for reasons other people have stated here already. However, if you need to store data on a class privately, but the footprint of data members is unknown, you'll often see your 2nd example combined with \_\_get() and \_\_set() hooks to hide that they're being stored privately.

```
class someThing {
    private $data = array();

    public function __get( $property )
    {
        if ( isset( $this->data[$property] ) )
        {
            return $this->data[$property];
        }
        return null;
    }

    public function __set( $property, $value )
    {
        $this->data[$property] = $value;
    }
}
```

Then, objects of this class can be used like an instance of stdClass, only none of the members you set are actually public:

```
$o = new someThing();
$o->cow = 'moo';
$o->dog = 'woof';
// etc
```

This technique has its uses, but be aware that \_\_get() and \_\_set() are on the order of 10-12 times slower than setting public properties directly.
If you're using `private $data;` you've just got an impenetrable blob of data there... Explicitly stating them will make your life much easier if you're figuring out how a class works. Another consideration is if you use an IDE with autocomplete - that's not going to work with the 2nd method.
What's the best way to store class variables in PHP?
[ "", "php", "variables", "" ]
I'm trying to access a web service secured by a certificate. The security is setup on IIS and the web service is behind it. I don't think WS-SECURITY will do this type of authentication. Is there any way to pass the client certificate when you call the web service? I'm just getting an IIS Error Page that says "The page requires a client certificate". I'm using CXF 2.1.4
Yes, this is possible using CXF. You will need to set up the client conduit. You can specify the keystore that contains the certificates that will allow you to access the web service in IIS. As long as the certificate you are using here is a known allowed client in IIS, you should be ok.

```
<http:conduit name="{http://apache.org/hello_world}HelloWorld.http-conduit">
  <http:tlsClientParameters>
    <sec:keyManagers keyPassword="password">
      <sec:keyStore type="JKS" password="password"
           file="src/test/java/org/apache/cxf/systest/http/resources/Morpit.jks"/>
    </sec:keyManagers>
    <sec:trustManagers>
      <sec:keyStore type="JKS" password="password"
           file="src/test/java/org/apache/cxf/systest/http/resources/Truststore.jks"/>
    </sec:trustManagers>
    ...
  </http:tlsClientParameters>
</http:conduit>
```

Sample from: [CXF Wiki](http://cxf.apache.org/docs/client-http-transport-including-ssl-support.html)
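For completeness, here is a hedged sketch of the equivalent key-material wiring in plain JSSE (the password is a placeholder, the empty in-memory keystore merely keeps the snippet self-contained and runnable, and in real code you would load the JKS files named in the config above and hand the resulting context to your HTTP layer):

```java
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class SslContextSketch {
    // Builds an SSLContext from a key store (the client certificate side)
    // and a trust store (trusted server certificates). keyStore.load(null,
    // null) creates an empty store so the sketch runs anywhere; replace it
    // with a FileInputStream of your client-certificate JKS in real code.
    static SSLContext build(char[] keyPassword) {
        try {
            KeyStore keyStore = KeyStore.getInstance("JKS");
            keyStore.load(null, null); // placeholder for the client-cert keystore

            KeyManagerFactory kmf = KeyManagerFactory.getInstance(
                    KeyManagerFactory.getDefaultAlgorithm());
            kmf.init(keyStore, keyPassword);

            TrustManagerFactory tmf = TrustManagerFactory.getInstance(
                    TrustManagerFactory.getDefaultAlgorithm());
            tmf.init((KeyStore) null); // null = JVM default trust store (cacerts)

            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
            return ctx;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(build("password".toCharArray()).getProtocol()); // TLS
    }
}
```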
The above answer is correct, but adding to that: your client bean should be as follows (for SSL to work correctly):

```
<jaxws:client id="helloClient" serviceClass="demo.spring.HelloWorld" address="http://localhost:9002/HelloWorld" />
```

If you define the client bean as follows, SSL will not work:

```
<bean id="proxyFactory" class="org.apache.cxf.jaxws.JaxWsProxyFactoryBean">
    <property name="serviceClass" value="demo.spring.HelloWorld"/>
    <property name="address" value="http://localhost:9002/HelloWorld"/>
</bean>
```
How do you use TLS/SSL Http Authentication with a CXF client to a web service?
[ "", "java", "web-services", "ssl", "cxf", "ws-security", "" ]
Is it beneficial to add an index to a column that is part of a foreign key relationship? I have two columns which will be queried frequently and already have foreign keys on them, but I wasn't sure whether I should index them as well, or whether the foreign key creates an index behind the scenes.
SQL Server does not create a behind the scenes index, so creating an index on all foreign key fields is advisable to improve look up performance. Details and additional benefits: <http://technet.microsoft.com/en-us/library/ms175464.aspx>
It is definitely advisable to add an index to your FK column if you query it often. In your situation, it is probably even better to create a composite index which spans your 2 columns. This (a composite index) is only advisable, however, if you often execute queries that filter or order on these 2 columns together. If you decide that a composite index is appropriate, then you should pay attention to the order of the columns within the index.
SQL Server and indices
[ "", "sql", "sql-server", "" ]
I'm trying to figure out a way to automatically cast something to an Action or Func, and the best I can come up with is something like this:

```
[TestFixture]
public class ExecutionTest
{
    public void BadMethod()
    {
        throw new Exception("Something bad happened");
    }

    [Test]
    public void TestBadMethod()
    {
        // Want this, but it won't work!!
        // BadMethod.Execute().IgnoreExceptions();

        // Ick
        ((Action)BadMethod).Exec().IgnoreExceptions();

        // Still ick
        ((Action)BadMethod).IgnoreExceptions();

        // Do not want
        ExtensionMethods.Exec(BadMethod).IgnoreExceptions();

        // Better but still meh
        this.Exec(BadMethod).IgnoreExceptions();
    }
}

public static class ExtensionMethods
{
    public static Action Exec(this Action action)
    {
        return action;
    }

    public static Action Exec(this object obj, Action action)
    {
        return action;
    }

    public static void IgnoreExceptions(this Action action)
    {
        try { action(); }
        catch {}
    }
}
```

There has to be a better/easier way to do this, any thoughts?
In C#, when you use the method name without parentheses, it's called a method group and it has no representation other than at compile time. A method group can represent more than one method (because of overloads and overrides), therefore to implicitly identify which method is needed, a target delegate type must be provided. In your case, you are wondering why the extension method parameter type won't trigger the resolution of the method. Simply put, extensions are evaluated after the type is known; that is, the `this` parameter can't be used as an implicit conversion target. Example of why it would break:

```
class Test
{
    void M()  // Fits the Action delegate
    {
    }

    int M(int i)  // Fits the Func<int,int> delegate
    {
        return 5;
    }

    void Run()
    {
        M.Exec(); // UHOH!!! Which Exec to resolve to ???
    }
}

public static class Extensions
{
    public static void Exec(this Action action) { }
    public static void Exec(this Func<int, int> func) { }
}
```

As you can see, there is a conflict, but as a matter of fact, the conflict never happens because C# won't even try to find a matching extension with a method group. Note how this won't work either:

```
class A
{
    public static implicit operator int (A a)
    {
        return 5;
    }

    void F()
    {
        A a = new A();
        a.Blah(); // Error! It won't implicitly try C.Blah()
    }
}

public static class C
{
    public static void Blah (this int i)
    {
    }
}
```

C# won't match `A` to `C.Blah(int)` because it would require an implicit conversion.
As Coincoin says, it's not gonna work well in C# because of the overzealous love for method overloading. The only workaround I've seen people use is to create Action and Func methods:

```
public Action Action(Action f) { return f; }
public Action<A> Action<A>(Action<A> f) { return f; }
...
public Func<A,B,C,D,E> Func<A,B,C,D,E>(Func<A,B,C,D,E> f) { return f; }
```

You could even call them all "F" to get some sort of short syntax:

```
F(BadMethod).NoExceptions();
```

You might decide to not define these methods in your class, and put them in a Funcs utility or something. Alias it with F and it doesn't end up too bad:

```
F.F(BadMethod).NoExceptions();
```

But overall it still sucks :(.
Why can't I implicitly cast a Delegate with Extension methods?
[ "", "c#", "functional-programming", "extension-methods", "currying", "" ]
This is probably my naivety showing through, but anyway... I have a generic interface which defines a set of standard methods (implemented differently) across implementations. I pass the interface into a method as a parameter, this method being responsible for persisting to a database. E.g. I have some implementations called Bug, Incident, etc., derived from the generic interface (called IEntry). These concrete implementations also make use of IEnumerable. Because a bug is different to an incident, there are different fields. When I pass the interface into a method as a parameter, is there any way to infer the type? So if I pass in the Bug object, I can use its fields, which are not the same fields as those of Incident. These fields are useful for the persistence to the database. I'm assuming no, because there is no way to know what the type to be passed in will be (obviously), but I know people here have more wisdom. In that case, is there a better way of doing things? Because of the similarity, I would like to stick to interfaces. EDIT: I guess the other way is to make use of some flow control to generate the SQL statement on the fly and then pass it in as a parameter. Thanks
The thing about passing objects and interfaces around is that you really shouldn't be concerned with the actual type, as long as it inherits from/implements the particular base class/interface you're interested in. So building logic into that method to figure out that it's a bug, and then accessing things that are only present for bugs: that's basically not the OOP way, although it might be the "best" way in your particular case. I would, however, advise against it, and instead try to build a proper OOP design with polymorphism to handle the differences, instead of building them into the method as special cases. You mention persistence; is this method responsible for storing the data somewhere? Perhaps you could separate the part that gathers the information to store from the part that stores the information. That way, you could ask the object itself to provide you with all the pertinent information, which could vary from one class to another.
Bad Design (as I think was described in the question):

```
public interface IEntry
{
    string Description { get; set; }
}

public class Bug : IEntry
{
    public int ID { get; set; }
    public string Description { get; set; }
    public string UserName { get; set; }
}

public class Incident : IEntry
{
    public Guid ID { get; set; }
    public string Description { get; set; }
}

public class Persister
{
    public void Save(IEnumerable<IEntry> values)
    {
        foreach (IEntry value in values)
        {
            Save(value);
        }
    }

    public void Save(IEntry value)
    {
        if (value is Bug)
        {
            /* Bug save logic */
        }
        else if (value is Incident)
        {
            /* Incident save logic */
        }
    }
}
```

Improved design (smart entity approach):

```
public interface IEntry
{
    string Description { get; set; }
    void Save(IPersister gateway);
}

public class Bug : IEntry
{
    public int ID { get; set; }
    public string Description { get; set; }
    public string UserName { get; set; }

    public void Save(IPersister gateway)
    {
        gateway.SaveBug(this);
    }
}

public class Incident : IEntry
{
    public Guid ID { get; set; }
    public string Description { get; set; }

    public void Save(IPersister gateway)
    {
        gateway.SaveIncident(this);
    }
}

public interface IPersister
{
    void SaveBug(Bug value);
    void SaveIncident(Incident value);
}

public class Persister : IPersister
{
    public void Save(IEnumerable<IEntry> values)
    {
        foreach (IEntry value in values)
        {
            Save(value);
        }
    }

    public void Save(IEntry value)
    {
        value.Save(this);
    }

    public void SaveBug(Bug value)
    {
        // Bug save logic
    }

    public void SaveIncident(Incident value)
    {
        // Incident save logic
    }
}
```

The improved design only addresses shifting the reason for change out of Persister.Save(IEntry); I just wanted to demonstrate a first step toward making the code less brittle. In reality, in production code you would want to have BugPersister and IncidentPersister classes in order to conform to the [Single Responsibility principle](http://www.objectmentor.com/resources/articles/srp.pdf). Hope this more code-centric example is a help.
Generic interface as a method parameter and seeing fields
[ "", "c#", "" ]
I am interacting with a web server using a desktop client program in C# and .NET 3.5. I am using Fiddler to see what traffic the web browser sends, and emulate that. Sadly, this server is old and is a bit confused about the notions of charsets and UTF-8. Mostly it uses Latin-1. When I enter data into the web browser containing "special" chars, like "Ξ© Ο€ β„΅ ∞ ♣ β™₯ β™ˆ ♉ β™Š β™‹ β™Œ ♍ β™Ž ♏ ♐ β™‘ β™’ β™“", Fiddler shows me that they are being transmitted as follows from browser to server: `"&#9800; &#9801; &#9802; &#9803; &#9804; &#9805; &#9806; &#9807; &#9808; &#9809; &#9810; &#9811; "` But for my client, HttpUtility.HtmlEncode does not convert these characters; it leaves them as is. What do I need to call to convert "β™ˆ" to &#9800; and so on?
It seems horribly inefficient, but the only way I can think of to do that is to loop through each character: ``` public static string MyHtmlEncode(string value) { // call the normal HtmlEncode first char[] chars = HttpUtility.HtmlEncode(value).ToCharArray(); StringBuilder encodedValue = new StringBuilder(); foreach(char c in chars) { if ((int)c > 127) // above normal ASCII encodedValue.Append("&#" + (int)c + ";"); else encodedValue.Append(c); } return encodedValue.ToString(); } ```
Rich Strahl just posted a blog post, [Html and Uri String Encoding without System.Web](http://west-wind.com/weblog/posts/617930.aspx), where he has some custom code that encodes the upper range of characters, too. ``` /// <summary> /// HTML-encodes a string and returns the encoded string. /// </summary> /// <param name="text">The text string to encode. </param> /// <returns>The HTML-encoded text.</returns> public static string HtmlEncode(string text) { if (text == null) return null; StringBuilder sb = new StringBuilder(text.Length); int len = text.Length; for (int i = 0; i < len; i++) { switch (text[i]) { case '<': sb.Append("&lt;"); break; case '>': sb.Append("&gt;"); break; case '"': sb.Append("&quot;"); break; case '&': sb.Append("&amp;"); break; default: if (text[i] > 159) { // decimal numeric entity sb.Append("&#"); sb.Append(((int)text[i]).ToString(CultureInfo.InvariantCulture)); sb.Append(";"); } else sb.Append(text[i]); break; } } return sb.ToString(); } ```
HttpUtility.HtmlEncode doesn't encode everything
[ "", "c#", "html", "encoding", "utf-8", "" ]
For a while now, I have been using UltraEdit on my Windows box. The ability to write scripts with a familiar language (JavaScript) has proved to be extremely useful. The only problem is that I cannot use it on my Linux box at work. Is there a comparable text editor that runs on Linux and has an integrated scripting engine? Not breaking the bank and being cross-platform would be great. EDIT: While recordable macros are great, I use the scripting engine much more.
It looks like [Komodo Edit](http://www.activestate.com/store/download.aspx?prdGUID=20f4ed15-6684-4118-a78b-d37ff4058c5f), [SciTE](http://scintilla.sourceforge.net/SciTEDoc.html), and [Eclipse Monkey](http://72.3.219.182/docs/index.php/About_Eclipse_Monkey) are the winners. Komodo Edit seems to be the most similar to UltraEdit. SciTE is something I've used before, and Lua is not that difficult; [SciTE's API](http://www.scintilla.org/SciTELua.html) though does not seem as extensive as [Komodo Edit's API](http://docs.activestate.com/komodo/5.0/komodo-js-api.html). Eclipse Monkey is something I am definitely going to use, but it requires Eclipse, which is definitely not a text editor. EDIT: UltraEdit is coming out for Mac and Linux soon.
All of the major open-source editors and most of the others have a scripting facility of some description - some (Emacs in particular) are famous for it. The only ones that don't tend to be very lightweight ones like [pico.](http://en.wikipedia.org/wiki/Pico_(text_editor)) [vim](http://www.vim.org) has a [native scripting language](http://vimdoc.sourceforge.net/htmldoc/usr_41.html) and can also be built with embedded [Python,](http://www.python.org) [Tcl](http://www.tcl.tk) or [Perl](http://www.cpan.org) interpreters that can operate on selections, buffers etc through the plugin mechanism. [Emacs](http://www.gnu.org/software/emacs/) is all about scripting - it has a LISP interpreter built right into the core of the system and most of the editor is written in LISP. There is a running joke about emacs describing it as a LISP interpreter that someone just happened to use to write a text editor. Vim's user interface is descended from vi, which is somewhat quirky but very powerful once you get used to it. It also does recorded keyboard macros particularly well and has a very nice regular expression search/replace facility. Emacs is regarded as a bit of a baroque monstrosity and is very large and complex. However, its scripting capability is second to none and there is an [enormous variety of macro packages](http://www.emacswiki.org/emacs/WikifiedEmacsLispList) that do many things. It has a very loyal following of people who swear by it; once you've gotten over the learning curve (there is an enormous body of resources on the web to help with this) it's a very powerful system indeed. You can customise emacs into a whole IDE and there are people around who claim to spend the majority of their tube time in it. Both of these editors can work in text mode or with a GUI and are highly portable, running on a wide variety of platforms. They are both open-source.
I've used both; I used to use [XEmacs](http://www.xemacs.org/) (a major code-fork of emacs that goes back a number of years) back in the 1990s but went to vim later on. I even use vim on Windows. If you find the user interface of Vim or Emacs a bit too much, there are a [variety of other text editors](http://en.wikipedia.org/wiki/Category:Linux_text_editors) available, many of which offer scripting. Examples of these are [SciTE](http://www.scintilla.org/SciTEDoc.html), which has a built in [Lua](http://www.lua.org/) interpreter, [NEdit,](http://www.nedit.org/) which has a homebrew macro language of its own or [GEdit](http://projects.gnome.org/gedit/), which is substantially written in [Python](http://www.python.org) (which can also be used for scripting it) and has a plugin API. **EDIT:** Outside of a few specific projects (e.g. [Mozilla](http://www.mozilla.org/)) Javascript never got much traction as a stand-alone or embedded scripting language in open-source circles. Historically there wasn't a popular open-source Javascript interpreter that got widespread acceptance in the way that Python or Tcl/Tk did. Javascript is more widely used in closed source systems such as UltraEdit or InDesign (to name a couple) whereas other languages were more popular on open-source projects. None of the open-source text editors that I am aware of feature javascript as an option for a scripting language (feel free to step in and comment or edit this if you know of one). You will probably have to move off Javascript to another language such as Python or LISP. However, now that QT comes with a Javascript interpreter (QTScript) you may find some of the KDE-based ones offering this as a scripting option, but I am not specifically aware of any off the top of my head.
Text Editor with Scripting...for Linux
[ "", "javascript", "linux", "scripting", "cross-platform", "text-editor", "" ]
I don't want to use jQuery, but I'd like to use Ajax to do file uploading. Is that possible? If so, where can I find information/tutorial on it?
No, it isn't possible to do this with javascript. In order to give the 'AJAX' feel, however, you can submit a form to a hidden iframe and output the script results to it, then process from there. Google [`ajax iframe upload`](http://www.google.com/search?q=ajax+iframe+upload) and get started from there. If you are using jQuery, there is also the [`Form plugin`](http://malsup.com/jquery/form/) which will automatically create this iframe for you if your form has any file fields in it. I haven't used it to do this, but I've heard good things. As pointed out in the comments, you can also use something like the very popular [`SWFUpload`](http://swfupload.org/) to accomplish the desired effect with Flash.
In case anyone finds this question much later: yes, this is possible with JavaScript now. HTML5 defined two new APIs that you use together to accomplish this: the Drag and Drop API and the File API. You can use jQuery to interact with the APIs, effectively letting people drag and drop files for upload. Here is [a tutorial](http://www.thebuzzmedia.com/html5-drag-and-drop-and-file-api-tutorial/) on how to do it. The code currently works in Chrome 6+, Firefox 3.6+, Safari 6 and IE 10. If you need Safari 5 support, the code stays almost exactly the same, but you use the FormData object instead for the uploaded file list (more information in the post). Opera supports the File API as of 11 but not the DnD API, so the drop operation does not initiate the upload, though you can still get access to the file with the API. I imagine in 12 they will finish off support for the DnD API. 01-20-14 Update: All the major browsers implement all the standard APIs now, so this tutorial works in all browsers.
Is it possible to use Ajax to do file upload?
[ "", "javascript", "jquery", "ajax", "upload", "" ]
I have implemented a linked list as a self-referencing database table: ``` CREATE TABLE LinkedList( Id bigint NOT NULL, ParentId bigint NULL, SomeData nvarchar(50) NOT NULL) ``` where Id is the primary key, and ParentId is the Id of the previous node on the list. The first node has ParentId = NULL. I now want to SELECT from the table, sorting the rows in the same order they should appear, as nodes on the list. Eg.: if the table contains the rows ``` Id ParentId SomeData 24971 NULL 0 38324 24971 1 60088 60089 3 60089 38324 2 61039 61497 5 61497 60088 4 109397 109831 7 109831 61039 6 ``` Then sorting it, using the criteria, should result in: ``` Id ParentId SomeData 24971 NULL 0 38324 24971 1 60089 38324 2 60088 60089 3 61497 60088 4 61039 61497 5 109831 61039 6 109397 109831 7 ``` You're supposed to use the *SomeData* colum as a control, so please don't cheat doing *ORDER by SomeData* :-)
In Oracle: ``` SELECT Id, ParentId, SomeData FROM ( SELECT ll.*, level AS lvl FROM LinkedList ll START WITH ParentID IS NULL CONNECT BY ParentId = PRIOR Id ) ORDER BY lvl ``` P. S. It's a bad practice to use `NULL` as `ParentID`, as it is not searchable by indices. Insert a surrogate root with id of `0` or `-1` instead, and use `START WITH ParentID = 0`.
I found a solution for SQL Server, but it looks big and much less elegant than Quassnoi's: ``` WITH SortedList (Id, ParentId, SomeData, Level) AS ( SELECT Id, ParentId, SomeData, 0 as Level FROM LinkedList WHERE ParentId IS NULL UNION ALL SELECT ll.Id, ll.ParentId, ll.SomeData, Level+1 as Level FROM LinkedList ll INNER JOIN SortedList as s ON ll.ParentId = s.Id ) SELECT Id, ParentId, SomeData FROM SortedList ORDER BY Level ```
How do I sort a linked list in sql?
[ "", "sql", "sql-server", "linked-list", "" ]
I suppose it depends on how it's implemented. I'd love it if someone would come back and tell me "yes, in virtually all browsers, order of items will be changed only when necessary to satisfy the conditions of the sort".
What you're looking for is whether or not the algorithm is "stable". It is known that Firefox's is not, while IE's is. The javascript standard doesn't require a stable sorting algorithm. Edit: Firefox 3+ has a stable sort. Please see <http://www.hedgerwow.com/360/dhtml/js_array_stable_sort.html>
Every browser has a different implementation, so don't count on it.
will Array.sort() preserve the order of the array, where possible?
[ "", "javascript", "sorting", "cross-browser", "" ]
In the most recent version (4.1, released October 2008) of The Microsoft Enterprise Library's Exception Handling Application Block, there are two HandleException() method signatures, and I am a bit lost on the intent for these, especially since neither the documentation, intellisense, nor the QuickStart apps intimate any meaningful difference. Here are the two signatures: ``` bool HandleException(Exception exceptionToHandle, string policyName); bool HandleException(Exception exceptionToHandle, string policyName, out Exception exceptionToThrow); ``` All of the examples I have found use the first, as in this example straight out of the XML documentation comments on the actual method: ``` try { Foo(); } catch (Exception e) { if (ExceptionPolicy.HandleException(e, name)) throw; } ``` And here, from the same source (the XML doc comments for the method), is an example of using the second: ``` try { Foo(); } catch (Exception e) { Exception exceptionToThrow; if (ExceptionPolicy.HandleException(e, name, out exceptionToThrow)) { if(exceptionToThrow == null) throw; else throw exceptionToThrow; } } ``` So, my question is, what does using the second one give you that the first does not? This should probably be obvious to me, but my head is a mess today and I don't really want to keep banging my head against the proverbial wall any longer. :) No speculations, please; I hope to hear from someone that actually knows what they are talking about from experience using this.
If you use a replace handler in the Enterprise Library config, your replacement exception is returned by means of the second signature (the one with the `out Exception exceptionToThrow` parameter).
The main difference between the two method overloads is that ``` bool HandleException(Exception exceptionToHandle, string policyName); ``` will throw an exception if the PostHandlingAction is set to ThrowNewException. Whereas the second method overload: ``` bool HandleException(Exception exceptionToHandle, string policyName, out Exception exceptionToThrow); ``` does not throw an exception in that case but instead returns the exception to be rethrown as an out parameter. Actually, both calls invoke the same core code but the second method overload catches the thrown Exception and returns it to the caller. So the second overload gives you a bit more control since you could perform some additional logic before throwing. It also standardizes the API in the sense that if you use the second HandleException method with the out parameter Enterprise Library will never intentionally handle your exception and throw it for you. i.e. The first method sometimes throws and sometimes relies on you to rethrow (in the case of NotifyRethrow) but the second method always returns and lets the caller throw/rethrow.
What's the essential difference between the two HandleException() methods of Exception Handling Application Block (Ent Lib 4.1)
[ "", "c#", ".net", "exception", "enterprise-library", "" ]
I've been trying to embed an icon (.ico) into my "compyled" .exe with py2exe. Py2Exe does have a way to embed an icon: ``` windows=[{ 'script':'MyScript.py', 'icon_resources':[(1,'MyIcon.ico')] }] ``` And that's what I am using. The icon shows up fine on Windows XP or lower, but doesn't show at all on Vista. I suppose this is because of the new Vista icon format, which can be in PNG format, up to 256x256 pixels. So, how can I get py2exe to embed them into my executable, without breaking the icons on Windows XP? I'm cool with doing it with an external utility rather than py2exe - I've tried [this command-line utility](http://www.rw-designer.com/compile-vista-icon) to embed it, but it always corrupts my exe and truncates its size for some reason.
Vista uses high-resolution icons with *256x256*-pixel images, stored using *PNG-based* compression. The problem is that if you simply make the icon and save it in the standard XP `ICO` format, the resulting file will be `400Kb` on disk. The solution is to compress the images. The compression scheme used is `PNG` (Portable Network Graphics) because it has a good lossless ratio and supports an alpha channel. And use ``` png2ico myicon.ico logo16x16.png logo32x32.png logo255x255.png ``` It creates an `ICO` file from one or more `PNG`s and handles multiple sizes etc. And I guess XP would have no problem with that.
It seems that the order of icon sizes is the key, as Helmut said. Inverting the order of the pages (larger ones first) solves the issue on Windows 7 for 'include_resources' (using py2exe 0.6.9).
Embedding icon in .exe with py2exe, visible in Vista?
[ "", "python", "windows-vista", "embed", "icons", "py2exe", "" ]
Here's the situation: We have some generic graphics code that we use for one of our projects. After doing some clean-up of the code, it seems like something isn't working anymore (The graphics output looks completely wrong). I ran a diff against the last version of the code that gave the correct output, and it looks like we changed one of our functions as follows: ``` static public Rectangle FitRectangleOld(Rectangle rect, Size targetSize) { if (rect.Width <= 0 || rect.Height <= 0) { rect.Width = targetSize.Width; rect.Height = targetSize.Height; } else if (targetSize.Width * rect.Height > rect.Width * targetSize.Height) { rect.Width = rect.Width * targetSize.Height / rect.Height; rect.Height = targetSize.Height; } else { rect.Height = rect.Height * targetSize.Width / rect.Width; rect.Width = targetSize.Width; } return rect; } ``` to ``` static public Rectangle FitRectangle(Rectangle rect, Size targetSize) { if (rect.Width <= 0 || rect.Height <= 0) { rect.Width = targetSize.Width; rect.Height = targetSize.Height; } else if (targetSize.Width * rect.Height > rect.Width * targetSize.Height) { rect.Width *= targetSize.Height / rect.Height; rect.Height = targetSize.Height; } else { rect.Height *= targetSize.Width / rect.Width; rect.Width = targetSize.Width; } return rect; } ``` All of our unit tests are all passing, and nothing in the code has changed except for some syntactic shortcuts. But like I said, the output is wrong. We'll probably just revert back to the old code, but I'm curious if anyone has any idea what's going on here. Thanks.
Sounds like you don't have sufficient unit tests :] Unfortunately, your statement "Nothing in the code has changed except for some syntactic shortcuts" is wrong, and I'm guessing that's where your problem is. (It's certainly one of your problems!) Yes, ``` a *= b; ``` is equivalent to ``` a = a * b; ``` but ``` a *= b / c; ``` is NOT the same as ``` a = a * b / c; ``` instead ``` a *= b / c; // equivalent to a = a * (b / c) a = a * b / c; // equivalent to a = (a * b) / c ``` (See [c# operator precedence](http://msdn.microsoft.com/en-us/library/aa691323(VS.71).aspx) on msdn) I'm guessing you're running into trouble when your target height is not an exact multiple of the original rectangle height (or the same for the width). Then you'd end up with the following sort of situation: Let's assume rect.Size = (8, 20), targetSize = (15, 25) Using your original method, you'd arrive at the following calculation: ``` rect.Width = rect.Width * targetSize.Height / rect.Height; // = 8 * 25 / 20 // = 200 / 20 (multiplication happens first) // = 10 // rect.Width = 10 ``` Using your new code, you'd have ``` rect.Width *= targetSize.Height / rect.Height; // *= 25 / 20 // *= 1 (it's integer division!) // rect.Width = rect.Width * 1 // = 8 // rect.Width = 8 ``` which isn't the same. (It gets worse if the target size is less than your original size; in this case the integer division will result in one of the dimensions being 0!) If "[your] unit tests are all passing" then you definitely need some additional tests, specifically ones that deal with non-integer multiples. Also note that your calculation ``` else if(targetSize.Width * rect.Height > rect.Width * targetSize.Height) ``` isn't reliable; for very large rectangles, it has the potential to overflow and give you incorrect results. You'd be better off casting to a larger type (i.e. a long) as part of the multiplication. (Again, there should be some unit tests to this effect) Hope that helps!
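Putting both points together, here is a hedged sketch (my own code, and untested against your suite -- the method name is mine): it keeps the multiply-before-divide order explicit and widens to `long` before multiplying so neither the comparison nor the scaling can overflow:

```csharp
static Rectangle FitRectangleSafe(Rectangle rect, Size targetSize)
{
    if (rect.Width <= 0 || rect.Height <= 0)
    {
        rect.Width = targetSize.Width;
        rect.Height = targetSize.Height;
    }
    // widen to long so very large rectangles can't overflow the comparison
    else if ((long)targetSize.Width * rect.Height > (long)rect.Width * targetSize.Height)
    {
        // multiply first, then divide -- the order the original code relied on
        rect.Width = (int)((long)rect.Width * targetSize.Height / rect.Height);
        rect.Height = targetSize.Height;
    }
    else
    {
        rect.Height = (int)((long)rect.Height * targetSize.Width / rect.Width);
        rect.Width = targetSize.Width;
    }
    return rect;
}
```

For the (8, 20) rectangle and (15, 25) target above, this takes the width branch and returns a 10 x 25 rectangle, matching the original method.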
If Rectangle.Width and Rectangle.Height are integers, the following two lines differ: ``` rect.Width = rect.Width * targetSize.Height / rect.Height; rect.Width *= targetSize.Height / rect.Height; ``` The first line performs a multiply, divide, cast-to-int, then assignment, in that order. The second performs a divide, cast-to-int, multiply, then assignment. The problem is, in your non-working code, **your divide is being cast to an integer before the multiply**. Keep the original code or force the division to be floating-point. Write better unit tests to check for this issue. (Try width/height combinations which do not have even multiples (e.g. prime numbers).)
Why isn't our c# graphics code working any more?
[ "", "c#", "operators", "" ]
Does `String.ToLower()` return the same reference (e.g. without allocating any new memory) if all the characters are already lower-case? Memory allocation is cheap, but running a quick check on zillions of short strings is even cheaper. Most of the time the input I'm working with is already lower-case, but I want to make it that way if it isn't. I'm working with C# / .NET in particular, but my curiosity extends to other languages so feel free to answer for your favorite one! NOTE: Strings *are* immutable but that does not mean a function always has to return a new one, rather it means nothing can change their character content.
I expect so, yes. A quick test agrees (but this is not evidence): ``` string a = "abc", b = a.ToLower(); bool areSame = ReferenceEquals(a, b); // false ``` In general, try to work with **comparers** that do what you want. For example, if you want a case-insensitive dictionary, use one: ``` var lookup = new Dictionary<string, int>( StringComparer.InvariantCultureIgnoreCase); ``` Likewise: ``` bool ciEqual = string.Equals("abc", "ABC", StringComparison.InvariantCultureIgnoreCase); ```
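If the goal is to skip the allocation when the input is already lower-case, one option is a small helper along these lines (the method name is mine, and note that `char.IsUpper` is only a rough pre-check -- some culture-specific mappings can change characters it doesn't flag):

```csharp
static string ToLowerIfNeeded(string value)
{
    for (int i = 0; i < value.Length; i++)
    {
        if (char.IsUpper(value[i]))
        {
            return value.ToLower(); // allocate only when something may change
        }
    }
    return value; // already lower-case: hand back the same reference
}
```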
String is immutable. String.ToLower() will always return a new instance, thereby generating another instance on every ToLower() call.
Does String.ToLower() always allocate memory?
[ "", "c#", ".net", "string", "lowercase", "" ]
I am creating an abstract class. I want each of my derived classes to be forced to implement a specific signature of constructor. As such, I did what I would have done had I wanted to force them to implement a method: I made an abstract one. ``` public abstract class A { abstract A(int a, int b); } ``` However I get a message saying the abstract modifier is invalid on this item. My goal was to force some code like this. ``` public class B : A { public B(int a, int b) : base(a, b) { //Some other awesome code. } } ``` This is all C# .NET code. Can anyone help me out? **Update 1** I wanted to add some things. What I ended up with was this. ``` private A() { } protected A(int a, int b) { //Code } ``` That does what some folks are saying: the default constructor is private, and the class needs to implement a constructor. However that doesn't FORCE a constructor with the signature A(int a, int b). ``` public abstract class A { protected abstract A(int a, int b) { } } ``` **Update 2** I should be clear: to work around this I made my default constructor private and my other constructor protected. I am not really looking for a way to make my code work. I took care of that. I am looking to understand why C# does not let you do this.
You cannot have an abstract constructor because abstract means you must override it in any non-abstract child class and you cannot override a constructor. If you think about it, this makes sense, since you always call the constructor of the child class (with the new operator) and never the base class. Generally speaking, the only way in C# to enforce a specific constructor signature is by using the **new()** generic constraint, which enforces the existence of a parameterless constructor for the type parameter.
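To illustrate the **new()** constraint mentioned above (this is a generic sketch of mine, not code from the question):

```csharp
public static T Create<T>() where T : new()
{
    // Compiles only if T has an accessible parameterless constructor;
    // there is no equivalent constraint for, say, an (int, int) constructor.
    return new T();
}
```

If you need the two-int "constructor contract", the usual workaround is to pass a factory delegate such as `Func<int, int, T>` instead.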
Change that constructor in class A to ``` protected A(int a, int b) { // Some initialisation code here } ``` Then your subclasses will have to use it, as there is no default constructor. They can, however, still change the actual signature of the constructor. There is no way of forcing a subclass to use a specific signature for its constructor as far as I know. I'm pretty sure constructors can't be abstract. What exactly do you need this for? We might be able to suggest a work around for this.
Why can't I create an abstract constructor on an abstract C# class?
[ "", "c#", "inheritance", "oop", "constructor", "abstract-class", "" ]