Is there a difference between using `exit()` or just `return` statements in `main()`? Personally I favor the `return` statements because I feel it's like reading any other function and the flow control when I'm reading the code is smooth (in my opinion). And even if I want to refactor the `main()` function, having `return` seems like a better choice than `exit()`. Does `exit()` do anything special that `return` doesn't?
Actually, there *is* a difference, but it's subtle. It matters more for C++ than for C, but the difference is important. When I call `return` in `main()`, destructors will be called for my locally scoped objects. If I call `exit()`, **no destructor will be called for my locally scoped objects!** Re-read that. `exit()` **does not return**. That means that once I call it, there are "no backsies." Any objects that you've created in that function will not be destroyed. Often this has no implications, but sometimes it does, like closing files (surely you want all your data flushed to disk?). Note that `static` objects will be cleaned up even if you call `exit()`. Finally, note that if you use `abort()`, no objects will be destroyed at all: no global objects, no static objects, and no local objects will have their destructors called. **Proceed with caution when favoring exit over return.** <http://groups.google.com/group/gnu.gcc.help/msg/8348c50030cfd15a>
Another difference: `exit` is a Standard Library function so you need to include headers and link with the standard library. To illustrate (in C++), this is a valid program: ``` int main() { return 0; } ``` but to use `exit` you'll need an include: ``` #include <stdlib.h> int main() { exit(EXIT_SUCCESS); } ``` Plus this adds an additional assumption: that calling `exit` from `main` has the same side effects as returning zero. As others have pointed out, this depends on what kind of executable you're building (i.e., who's calling `main`). Are you coding an app that uses the C-runtime? A Maya plugin? A Windows service? A driver? Each case will require research to see if `exit` is equivalent to `return`. IMHO using `exit` when you *really mean* `return` just makes the code more confusing. OTOH, if you *really do mean* `exit`, then by all means use it.
return statement vs exit() in main()
[ "c++", "c", "return", "exit" ]
As I usually don't do up-front design of my models in Django projects, I end up modifying the models a lot and thus deleting my test database every time (because "syncdb" won't ever alter the tables automatically for you). Below is my workflow, and I'd like to hear about yours. Any thoughts welcome. 1. Modify the model. 2. Delete the test database. (Always a simple SQLite database for me.) 3. Run "syncdb". 4. Generate some test data via code. 5. Go to 1. A secondary question regarding this: if your workflow is like the above, how do you execute step 4? Do you generate the test data manually, or is there a proper hook point in Django apps where you can inject the test-data-generating code at server startup? Thanks in advance.
Steps 2 & 3 can be done in one step: ``` manage.py reset appname ``` Step 4 is most easily managed, from my understanding, by using [fixtures](http://www.djangoproject.com/documentation/models/fixtures/).
This is a job for Django's fixtures. They are convenient because they are database independent and the test harness (and manage.py) have built-in support for them. To use them: 1. Set up your data in your app (call it "foo") using the admin tool 2. Create a fixtures directory in your "foo" app directory 3. Type: `python manage.py dumpdata --indent=4 foo > foo/fixtures/foo.json` Now, after your syncdb stage, you just type: ``` python manage.py loaddata foo.json ``` And your data will be re-created. If you want them in a test case: ``` class FooTests(TestCase): fixtures = ['foo.json'] ``` Note that you will have to recreate or manually update your fixtures if your schema changes drastically. You can read more about fixtures in the django docs for [Fixture Loading](http://docs.djangoproject.com/en/dev/topics/testing/#fixture-loading)
Django workflow when modifying models frequently?
[ "python", "django", "django-models", "workflow", "django-syncdb" ]
I'm pulling a list of items from one table, on the basis of their being included in another table, like this: ``` select fruit.id, fruit.name from fruit, fruit_rating where fruit_rating.fruit_id=fruit.id group by fruit.name; ``` That works fine -- it basically produces a list of all the fruit that has been rated by someone. But now, I want to exclude all fruit that has been rated by one specific user, so I tried this: ``` select fruit.id, fruit.name from fruit, fruit_rating where fruit_rating.fruit_id=fruit.id and fruit_rating.user_id != 10 group by fruit.name; ``` That's ok, but not quite right. It shows all the fruit that have been rated by people other than 10, but if users 1 and 10 have both rated the same fruit, it still shows that one. Can anyone tell me how to construct a query that shows only the fruit that have NOT been rated by user 10, regardless of who else has rated them?
``` ... WHERE fruit_rating.fruit_id=fruit.id and fruit.id not in (select fruit_rating.fruit_id from fruit_rating where fruit_rating.user_id = 10) ```
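One way to sanity-check the `NOT IN` approach is with an in-memory SQLite database. The schema and sample rows below are made up for the demo (only the table and column names come from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fruit (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fruit_rating (fruit_id INTEGER, user_id INTEGER);
    INSERT INTO fruit VALUES (1, 'apple'), (2, 'pear'), (3, 'plum');
    -- user 10 rated the apple; users 1 and 2 rated the pear; nobody rated the plum
    INSERT INTO fruit_rating VALUES (1, 10), (2, 1), (2, 2);
""")

rows = conn.execute("""
    SELECT fruit.id, fruit.name
    FROM fruit, fruit_rating
    WHERE fruit_rating.fruit_id = fruit.id
      AND fruit.id NOT IN (SELECT fruit_id FROM fruit_rating
                           WHERE user_id = 10)
    GROUP BY fruit.name
""").fetchall()

print(rows)  # [(2, 'pear')] -- apple excluded (user 10 rated it), plum never rated
```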
I read this differently from Cowan, and agree with Noah... Find all fruit where: - User 10 did not rate it - At least one other user did rate it However, in my experience using NOT IN can be quite slow. So, I generally prefer to filter using LEFT JOIN in the same way as Cowan. Here are a few different options, though I have not had time to test performance on large data sets... ``` SELECT [f].id, [f].name FROM fruit AS [f] INNER JOIN fruit_rating AS [fr] ON [fr].fruit_id = [f].id GROUP BY [f].id, [f].name HAVING SUM(CASE WHEN [fr].user_id = 10 THEN 1 ELSE 0 END) = 0 ``` ``` SELECT [f].id, [f].name FROM fruit AS [f] INNER JOIN fruit_rating AS [fr] ON [fr].fruit_id = [f].id LEFT JOIN fruit_rating AS [fr_exclude] ON [fr_exclude].fruit_id = [fr].fruit_id AND [fr_exclude].user_id = 10 GROUP BY [f].id, [f].name HAVING MAX([fr_exclude].user_id) IS NULL ``` As this only works for one user, I would also consider making a table of "users to exclude" and LEFT JOIN on that instead... ``` SELECT [f].id, [f].name FROM fruit AS [f] INNER JOIN fruit_rating AS [fr] ON [fr].fruit_id = [f].id LEFT JOIN excluded_users AS [ex] ON [ex].user_id = [fr].user_id GROUP BY [f].id, [f].name HAVING MAX([ex].user_id) IS NULL ``` Or something much more long-winded, but I suspect is the fastest on larger data sets with appropriate indexes... ``` SELECT [f].id, [f].name FROM fruit [f] INNER JOIN ( SELECT fruit_id FROM fruit_rating GROUP BY fruit_id ) AS [rated] ON [rated].fruit_id = [f].id LEFT JOIN ( SELECT [fr].fruit_id FROM fruit_rating AS [fr] INNER JOIN excluded_users AS [ex] ON [ex].user_id = [fr].user_id GROUP BY [fr].fruit_id ) AS [excluded] ON [rated].fruit_id = [excluded].fruit_id WHERE [excluded].fruit_id IS NULL GROUP BY [f].id, [f].name ```
SQL query to exclude items on the basis of one value
[ "sql", "mysql" ]
I am working on a site, and everything works in my local environment as well as on a test server I have here. *But* when I upload it to my host, nothing seems to work!? The ASP.NET JavaScript validators fire, but any and all of the jQuery is not working? I even put in a simple ``` $("#test").html("working"); ``` and ``` <div id="test"></div> ``` And nothing is happening? I have triple-checked that all the JS is uploaded (to my /js/ directory). The site is here: <http://whoowes.me> (it is in its infancy still... barely started) and the Login/Register/Contact buttons should all pull up a modal popup (not change pages; that should only happen if JS is disabled) and the word 'test' should show up under the menu. Can JS be disabled from the server? That wouldn't make sense, though, as the ASP.NET JavaScript validators are showing up? I am completely confused here; any ideas would be great!
My host did not have .js in their MIME types by default, so I had to add it in: ``` extension: .js application/x-javascript ```
You could always let Google do your work for you: ``` <head> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.1/jquery.min.js"></script> </head> ``` One pleasant side-effect is that you'll save a little bandwidth, and because many sites now use Google's AJAX API, many users will already have it cached in their browser.
jQuery not working on my host?
[ "asp.net", "javascript", "jquery", "hosting" ]
Has anyone had issues with the contents of an isolated storage directory go missing or not update properly after a click once automated deployment?
I believe you can find your answer on this post - [ClickOnce and Isolated Storage](https://stackoverflow.com/questions/202013/clickonce-and-isolatedstorage#227218).
If the version of your application changes (I am uncertain exactly which version number it is) then new deployment creates a new folder for storage. There are 2 ways in which isolated storage stores the data: 1. Isolation by user and assembly 2. Isolation by user, domain and assembly In both cases, when the assembly changes the directory changes. I do not know what the best way to keep the data between deployments is, but that is the reasoning for it.
Clickonce & Isolated Storage
[ "c#", ".net", "clickonce" ]
Is iterating through the vector with an iterator and copying each element into the list the best way to do this copy? Any recommendations?
Why would you iterate and not use the standard copy algorithm? ``` std::copy( vector.begin(), vector.end(), std::back_inserter( list ) ); ```
If you're making a new list, you can take advantage of a constructor that takes begin and end iterators: ``` std::list<SomeType> myList(v.begin(), v.end()); ``` Kasprzol's answer is perfect if you have an existing list you want to append to.
Best way to copy a vector to a list in STL?
[ "c++", "stl" ]
I have a C++ memory management question that's (obviously) related to references and pointers. Suppose I have a class `Class` with a method `my_method`: ``` OtherClass& Class::my_method( ... ) { OtherClass* other_object = new OtherClass( ... ); return *other_object; } ``` Meanwhile in a nearby piece of code: ``` { Class m( ... ); OtherClass n; n = m.my_method( ... ); } ``` So, I know that there's a general rule about pointers (~ "anything new-ed must be delete-d") to avoid memory leaks. But basically I'm taking a reference to my heap-allocated object, so when n goes out of scope, shouldn't the destructor of OtherClass be called, thus freeing the memory previously pointed to by other\_object? So in the end the real question is: will this lead to a memory leak? Thanks.
It's fairly obvious that you want to return a new object to the caller that you do not need to keep any reference to. For this purpose, the simplest thing to do is to return the object by value. ``` OtherClass Class::my_method( ... ) { return OtherClass( ... ); } ``` Then in the calling code you can construct the new object like this. ``` { Class m( ... ); OtherClass n( m.mymethod( ... ) ); } ``` This avoids any worries about returning references to temporaries or requiring the client to manage deletion of a returned pointer. Note that this does require your object to be copyable, but it is a legal and commonly implemented optimization for the copy to be avoided when returning by value. You would only need to consider a shared pointer or similar if you need shared ownership or for the object to have a lifetime outside the scope of the calling function. In this latter case you can leave this decision up to the client and still return by value. E.g. ``` { Class m( ... ); // Trust me I know what I'm doing, I'll delete this object later... OtherClass* n = new OtherClass( m.mymethod( ... ) ); } ```
Yes that will lead to a memory leak. What you'll do is, in the return statement, dereference the new object you created. The compiler will invoke the assignment operator as part of the returning and copy the CONTENTS of your new object to the object it's assigned to in the calling method. The new object will be left on the heap, and its pointer cleared from the stack, thus creating a memory leak. Why not return a pointer and manage it that way?
Will this lead to a memory leak in C++?
[ "c++", "pointers", "memory-leaks", "reference", "memory-management" ]
When I try to make a very large boolean array using Java, such as: ``` boolean[] isPrime1 = new boolean[600851475144]; ``` I get a possible loss of precision error? Is it too big?
To store 600 *billion* bits, you need an absolute minimum address space of 75 *gigabytes*! Good luck with that! Even worse, the Java spec doesn't specify that a `boolean` array will use a single bit of memory for each element - it could ([and in some cases does](http://eblog.chrononsystems.com/hidden-evils-of-javas-byte-array-byte)) use more. In any case, I recognise that number from [Project Euler #3](https://projecteuler.net/problem=3). If it needs that much memory, you're doing it wrong...
Consider using a [BitSet](http://java.sun.com/j2se/1.4.2/docs/api/java/util/BitSet.html).
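A short sketch of the idea: a `BitSet` stores one bit per flag, so hundreds of millions of entries are feasible in a modest heap. (Indices are still limited to `int`, so 600,851,475,144 won't fit directly; for Project Euler #3 you don't need a sieve that large anyway.)

```java
import java.util.BitSet;

public class SieveSketch {
    public static void main(String[] args) {
        // One bit per number instead of one (or more) bytes per boolean.
        BitSet isComposite = new BitSet(1_000_000);
        isComposite.set(4);                      // mark 4 as composite
        System.out.println(isComposite.get(4));  // true
        System.out.println(isComposite.get(5));  // false: 5 was never marked
    }
}
```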
How do you make REALLY large boolean arrays using Java?
[ "java", "boolean" ]
So for a while I've been using XslCompiledTransform because that's what Microsoft tells me I need to use as XslTransform is deprecated. Recently I had to use it with a transform that has nearly 100,000 lines (generated xsl - of course). When I used my application I was shocked to see an OOM pop up. No matter what I did - OOM is all I get... For giggles I went back to XslTransform... Same exact code changing XslCompiledTransform to XslTransform and it works fine... Can anyone tell me how to work around the OOM - is there some 'switch' on XslCompiledTransform? I don't know how you will be able to replicate the exact problem, but if anyone has any answers they're much appreciated. Thanks - code below: Works: ``` XslTransform myXslTransform = new XslTransform(); myXslTransform.Load(xslWith100ThousandLines); MemoryStream m = new MemoryStream(); myXslTransform.Transform(myXPathDocument, null, m); m.Flush(); m.Close(); ``` Fails with OOM ``` XslCompiledTransform cxslt = new XslCompiledTransform(); cxslt.Load(xslWith100ThousandLines); MemoryStream m = new MemoryStream(); cxslt.Transform(myXPathDocument, null, m); m.Flush(); m.Close(); ```
OK, so now I think maybe this is more 'Microsoft being Microsoft'... On a hunch I changed the target platform back to "Any CPU", closed ALL my open Visual Studio instances, and reopened my project... I then cleaned, recompiled, and reran, and I am again getting the OOM... I then set the target platform back to x86 and, lo and behold, no more OOM! I've had a lot of x64-related issues like this, and in the future I will be sure to include them in any postings... More often than not it's due to x64... Odd... but problem 'solved'... :S
Well... programming in the Windows world is always a mystery... I thank you both for your answers. However, today - while trying to replicate the problem (after being gone for around 14 hrs) - the issue has disappeared entirely... The only thing I did differently was to set the compiler to x86 instead of x64, and now the code works exactly as I'd have expected to begin with, and I am left more confused than ever... Just so everyone knows I'm not crazy, here's the task manager snap from two successive runs: ![alt text](https://i420.photobucket.com/albums/pp286/dovh/mem-spike.jpg) I'm not sure how or why... but there's that...
XslCompiledTransform vs. XslTransform and how about an OOM for good measure?
[ "c#", "xslt", "xslcompiledtransform" ]
How can I redirect the user from one page to another using jQuery or pure JavaScript?
## One does not simply redirect using jQuery jQuery is not necessary, and [**`window.location.replace(...)`**](https://developer.mozilla.org/en-US/docs/Web/API/Location/replace) will best simulate an HTTP redirect. `window.location.replace(...)` is better than using `window.location.href`, because `replace()` does not keep the originating page in the session history, meaning the user won't get stuck in a never-ending back-button fiasco. If you want to simulate someone clicking on a link, use **`location.href`** If you want to simulate an HTTP redirect, use **`location.replace`** **For example:** ``` // similar behavior as an HTTP redirect window.location.replace("http://stackoverflow.com"); // similar behavior as clicking on a link window.location.href = "http://stackoverflow.com"; ```
**WARNING:** This answer has merely been provided as a possible solution; it is obviously *not* the best solution, as it requires jQuery. Instead, prefer the pure JavaScript solution. ``` $(location).prop('href', 'http://stackoverflow.com') ```
How do I redirect to another webpage?
[ "javascript", "jquery", "redirect" ]
In Python, I can do: ``` list = ['a', 'b', 'c'] ', '.join(list) # 'a, b, c' ``` However, if I have a list of objects and try to do the same thing: ``` class Obj: def __str__(self): return 'name' list = [Obj(), Obj(), Obj()] ', '.join(list) ``` I get the following error: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: sequence item 0: expected string, instance found ``` Is there any easy way? Or do I have to resort to a for-loop?
You could use a list comprehension or a generator expression instead: ``` ', '.join([str(x) for x in list]) # list comprehension ', '.join(str(x) for x in list) # generator expression ```
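For example, with the class from the question:

```python
class Obj:
    def __str__(self):
        return 'name'

objects = [Obj(), Obj(), Obj()]

# str() is applied to each element before joining.
joined = ', '.join(str(x) for x in objects)
print(joined)  # name, name, name
```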
The built-in string constructor will automatically call `obj.__str__`: ``` ''.join(map(str,list)) ```
string.join(list) on object array rather than string array
[ "python", "string", "list" ]
What is the most efficient way to fill a ComboBox with all the registered file types in Windows? I want the full file type name, not just the extension. I'm using VB 9 (VS2008).
All the file types are stored in the registry under HKEY\_CLASSES\_ROOT, which you can read using the Framework's [Registry class](http://msdn.microsoft.com/en-us/library/microsoft.win32.registry.aspx). Here's C# code to perform the task: ``` using Microsoft.Win32; public class FileAssoc { public string Extension; public string Filetype; public FileAssoc(string fileext, string name) { Extension = fileext; Filetype = name; } } public static class EnumRegFiles { public static List<FileAssoc> GetFileAssociations() { List<FileAssoc> result = new List<FileAssoc>(); RegistryKey rk = Registry.ClassesRoot; String[] names = rk.GetSubKeyNames(); foreach (string file in names) { if (file.StartsWith(".")) { RegistryKey rkey = rk.OpenSubKey(file); object descKey = rkey.GetValue(""); if (descKey != null) { string desc = descKey.ToString(); if (!string.IsNullOrEmpty(desc)) { result.Add(new FileAssoc(file, desc)); } } } } return result; } } ```
I agree with Joel, that's going to be a lot of entries and trying to find something in a combobox list of hundreds of items is going to end up as a really poor user experience. Other than that, the only way to get this information is to go through the registry, as Mitch says but it won't be simple code. What are you trying to accomplish? **Edit:** @Mitch Wheat, I know this was addressed to @Mark Brackett, but I couldn't resist the challenge. Using LINQ, your code can be written as: ``` public static IList GetFileAssociations() { return Registry.ClassesRoot.GetSubKeyNames().Where(key => key.StartsWith(".")).Select(key => { string description = Registry.ClassesRoot.OpenSubKey(key).GetValue("") as string; if (!String.IsNullOrEmpty(description)) { return new { key, description }; } else { return null; } }).Where(a => a != null).ToList(); } ```
What is the most efficient way to get fill a ComboBox with all the registered file types (not just extensions)
[ "c#", "file", "combobox", "registry" ]
I have a string object "with multiple characters and even special characters" I am trying to use ``` UTF8Encoding utf8 = new UTF8Encoding(); ASCIIEncoding ascii = new ASCIIEncoding(); ``` objects in order to convert that string to ASCII. May I ask someone to shed some light on this simple task, which is haunting my afternoon. EDIT 1: What we are trying to accomplish is getting rid of special characters like some of the special Windows apostrophes. The code that I posted below as an answer will not take care of that. Basically > O'Brian will become O?Brian. where ' is one of the special apostrophes
This was in response to your other question, which looks like it's been deleted... the point still stands. Looks like a [classic Unicode to ASCII issue](http://www.joelonsoftware.com/articles/Unicode.html). The trick would be to find *where* it's happening. .NET works fine with Unicode, assuming [it's told it's Unicode](http://msdn.microsoft.com/en-us/library/system.text.encoding.aspx) to begin with (or left at the default). My *guess* is that your receiving app can't handle it. So, I'd probably use an [ASCII encoding](http://msdn.microsoft.com/en-us/library/system.text.asciiencoding.aspx) [with](http://msdn.microsoft.com/en-us/library/system.text.encoder.fallback.aspx) an [EncoderReplacementFallback](http://msdn.microsoft.com/en-us/library/system.text.encoderreplacementfallback.aspx) of String.Empty: ``` using System.Text; string inputString = GetInput(); // an ASCII encoding that silently drops any character it can't map Encoding ascii = Encoding.GetEncoding("us-ascii", new EncoderReplacementFallback(string.Empty), new DecoderReplacementFallback(string.Empty)); byte[] bAsciiString = ascii.GetBytes(inputString); // Do something with bytes... // can write to a file as is File.WriteAllBytes(FILE_NAME, bAsciiString); // or turn back into a "clean" string string cleanString = ascii.GetString(bAsciiString); // since the offending bytes have been removed, can use default encoding as well Assert.AreEqual(cleanString, Encoding.Default.GetString(bAsciiString)); ``` Of course, in the old days, we'd just loop through and remove any chars [greater than 127](http://asciitable.com/)... well, those of us in the US at least. ;)
I was able to figure it out. In case someone wants to know, below is the code that worked for me: ``` ASCIIEncoding ascii = new ASCIIEncoding(); byte[] byteArray = Encoding.UTF8.GetBytes(sOriginal); byte[] asciiArray = Encoding.Convert(Encoding.UTF8, Encoding.ASCII, byteArray); string finalString = ascii.GetString(asciiArray); ``` Let me know if there is a simpler way of doing it.
How to convert (transliterate) a string from utf8 to ASCII (single byte) in c#?
[ "c#", "encoding", "utf-8", "ascii", "transliteration" ]
The following C# class is used in a multithreaded environment. I removed very much of the actual code. The problem occurs when calling MethodA and MethodB almost simultaneously. The order of the locks in the IsDepleted property doesn't solve the problem. Removing lock(WaitingQueue) from the IsDepleted property solves the deadlock, but this solution causes a problem when another thread adds/removes an item from the WaitingQueue between the WaitingQueue.Count == 0 and Processing.Count == 0 statements. ``` using System.Collections.Generic; class Example { bool IsDepleted { get { lock (Processing) { lock (WaitingQueue) { return WaitingQueue.Count == 0 && Processing.Count == 0; } } } } private readonly List<object> Processing = new List<object>(); private readonly Queue<object> WaitingQueue = new Queue<object>(); public void MethodA(object item) { lock (WaitingQueue) { if (WaitingQueue.Count > 0) { if (StartItem(WaitingQueue.Peek())) { WaitingQueue.Dequeue(); } } } } public void MethodB(object identifier) { lock (Processing) { Processing.Remove(identifier); if (!IsDepleted) { return; } } //Do something... } bool StartItem(object item) { //Do something and return a value } } ```
It depends if you want a quick fix or a rigorous fix. A quick fix would be just to use one lock object in all cases. e.g. `private readonly object _lock = new object();` And then just lock on that. However, depending on your situation, that may impact performance more than you can accept. I.e. your code would become this: ``` using System.Collections.Generic; class Example { private readonly object _lock = new object(); bool IsDepleted { get { lock (_lock) { return WaitingQueue.Count == 0 && Processing.Count == 0; } } } private readonly List<object> Processing = new List<object>(); private readonly Queue<object> WaitingQueue = new Queue<object>(); public void MethodA(object item) { lock (_lock) { if (WaitingQueue.Count > 0) { if (StartItem(WaitingQueue.Peek())) { WaitingQueue.Dequeue(); } } } } public void MethodB(object identifier) { lock (_lock) { Processing.Remove(identifier); if (!IsDepleted) { return; } } //Do something... } bool StartItem(object item) { //Do something and return a value } } ```
Take the Processing lock in method A and the WaitingQueue lock in method B (in other words, make it look like the first block of code). That way, you always take the locks in the same order and you'll never deadlock.
How to prevent deadlocks in the following C# code?
[ "c#", ".net", "multithreading", "deadlock" ]
Is there any way to make “private” variables (those defined in the constructor) available to prototype-defined methods? ``` TestClass = function(){ var privateField = "hello"; this.nonProtoHello = function(){alert(privateField)}; }; TestClass.prototype.prototypeHello = function(){alert(privateField)}; ``` This works: ``` var t = new TestClass(); t.nonProtoHello() ``` But this doesn’t: ``` t.prototypeHello() ``` I’m used to defining my methods inside the constructor, but am moving away from that for a couple of reasons.
No, there's no way to do it. That would essentially be scoping in reverse. Methods defined inside the constructor have access to private variables because all functions have access to the scope in which they were defined. Methods defined on a prototype are not defined within the scope of the constructor, and will not have access to the constructor's local variables. You can still have private variables, but if you want methods defined on the prototype to have access to them, you should define getters and setters on the `this` object, which the prototype methods (along with everything else) *will* have access to. For example: ``` function Person(name, secret) { // public this.name = name; // private var secret = secret; // public methods have access to private members this.setSecret = function(s) { secret = s; } this.getSecret = function() { return secret; } } // Must use getters/setters Person.prototype.spillSecret = function() { alert(this.getSecret()); }; ```
# Update: With ES6, there is a better way: Long story short, you can use the new `Symbol` to create private fields. Here's a great description: <https://curiosity-driven.org/private-properties-in-javascript> Example: ``` var Person = (function() { // Only Person can access nameSymbol var nameSymbol = Symbol('name'); function Person(name) { this[nameSymbol] = name; } Person.prototype.getName = function() { return this[nameSymbol]; }; return Person; }()); ``` # For all modern browsers with ES5: ### You can use just Closures The simplest way to construct objects is to avoid prototypal inheritance altogether. Just define the private variables and public functions within the closure, and all public methods will have private access to the variables. ### Or you can use just Prototypes In JavaScript, prototypal inheritance is primarily *an optimization*. It allows multiple instances to share prototype methods, rather than each instance having its own methods. The drawback is that `this` is the **only** thing that's different each time a prototypal function is called. Therefore, any private fields must be accessible through `this`, which means they're going to be public. So we just stick to naming conventions for `_private` fields. ### Don't bother mixing Closures with Prototypes I think you **shouldn't** mix closure variables with prototype methods. You should use one or the other. When you use a closure to access a private variable, prototype methods cannot access the variable. So, you have to expose the closure onto `this`, which means that you're exposing it publicly one way or another. There's very little to gain with this approach. ### Which do I choose? For really simple objects, just use a plain object with closures. If you need prototypal inheritance -- for inheritance, performance, etc. -- then stick with the "\_private" naming convention, and don't bother with closures. I don't understand why JS developers try SO hard to make fields truly private.
Accessing private member variables from prototype-defined functions
[ "javascript", "private-members" ]
The keyword `protected` grants access to classes in the same package and subclasses (<http://java.sun.com/docs/books/tutorial/java/javaOO/accesscontrol.html>). Now, every class has `java.lang.Object` as superclass (<http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Object.html>). Hence I conclude that every class may access `java.lang.Object`'s methods even if they are `protected`. Take a look at the following example: ``` public class Testclass { public Object getOne() throws CloneNotSupportedException { return this.clone(); } public Object getTwo() throws CloneNotSupportedException { return ((Object) this).clone(); } } ``` While `getOne()` compiles fine, `getTwo()` gives ``` Testclass.java:6: clone() has protected access in java.lang.Object return ((Object) this).clone(); ``` I neither understand why `getTwo()` doesn't compile nor what's the difference (regarding the access of `java.lang.Object`s members) with `getOne()`.
You can only access protected members of a type in a different package if the compile-time type of the expression you're referencing it through is either your own class or a subclass. (Where "your" class is the class containing the code.) Your own class has to be a subclass of the type which originally declares the method, too. Here's an example; assume that `Base` is in a different package to all the other classes: ``` package first; public class Base { protected void Foo() {} } // Yes, each class is really in its own file normally - but treat // all the classes below as being in package "second" package second; public class Child extends Base { public void OtherMethod(Object x) { ((Base) x).Foo(); // Invalid: Base is not Child or subclass ((Child) x).Foo(); // Valid: Child is Child ((GrandChild) x).Foo(); // Valid: GrandChild is subclass of Child ((OtherChild) x).Foo(); // Invalid: OtherChild is not Child or subclass } } public class GrandChild extends Child {} public class OtherChild extends Base {} ``` In other words, it's letting you have access to the protected members of "objects which are a like you". Details are in [section 6.6.2 of the Java Language Specification](http://java.sun.com/docs/books/jls/third_edition/html/names.html#6.6.2): > A `protected` member or constructor of > an object may be accessed from outside > the package in which it is declared > only by code that is responsible for > the implementation of that object. > > **6.6.2.1 Access to a protected Member** > > Let *C* be the class in which a > protected member *m* is declared. Access > is permitted only within the body of a > subclass *S* of *C*. In addition, if Id > denotes an instance field or instance > method, then: If the access is by a > qualified name *Q.Id*, where *Q* is an > *ExpressionName*, then the access is > permitted if and only if the type of > the expression *Q* is *S* or a subclass of > *S*. 
> If the access is by a field access expression *E.Id*, where *E* is a *Primary* expression, or by a method invocation expression *E.Id(. . .)*, where *E* is a *Primary* expression, then the access is permitted if and only if the type of *E* is *S* or a subclass of *S*.
When you said "`((Object) this).clone()`", you accessed your own object via its superclass, Object. You performed a widening conversion to an Object. The code then attempts to call clone on Object. But, as you've noted, clone is a protected method, meaning that only if your object were in the same package as java.lang would it be able to access Object's clone method that way. When you say this.clone(), your class extends Object and thus has access to override or use clone directly, through the protected modifier, because of inheritance. But that doesn't change the Object implementation. By saying ((Object) yourObject), you get something that is only accessible through the Object class. Only public methods of the Object class are accessible outside of the package java.lang, so you get the compile-time error, because the compiler knows this. By saying this.clone(), you are invoking the clone method your object inherited from Object, which can now be invoked because it has become a part of your own subclass.
How are java.lang.Object's protected methods protected from subclasses?
[ "", "java", "object", "protected", "access-modifiers", "" ]
I'm learning JavaScript and while browsing through the jQuery library I see `:` (colon) being used a lot. What is this used for in JavaScript? ``` // Return an array of filtered elements (r) // and the modified expression string (t) return { r: r, t: t }; ```
``` var o = { r: 'some value', t: 'some other value' }; ``` is functionally equivalent to ``` var o = new Object(); o.r = 'some value'; o.t = 'some other value'; ```
A colon can also be used to label a statement, for example: ``` var i = 100, j = 100; outerloop: while (i > 0) { while (j > 0) { j++; if (j > 50) { break outerloop; } } i++; } ```
What does ':' (colon) do in JavaScript?
[ "", "javascript", "" ]
The following throws an `InvalidCastException`. ``` IEnumerable<int> list = new List<int>() { 1 }; IEnumerable<long> castedList = list.Cast<long>(); Console.WriteLine(castedList.First()); ``` Why? I'm using Visual Studio 2008 SP1.
That's very odd! There's a blog post [here](http://blogs.msdn.com/ed_maurer/archive/2008/02/16/breaking-change-in-linq-queries-using-explicitly-typed-range-variables.aspx) that describes how the behaviour of `Cast<T>()` was changed between .NET 3.5 and .NET 3.5 SP1, but it still doesn't explain the InvalidCastException, which you even get if you rewrite your code thus: ``` var list = new[] { 1 }; var castedList = from long l in list select l; Console.WriteLine(castedList.First()); ``` Obviously you can work around it by doing the cast yourself ``` var castedList = list.Select(i => (long)i); ``` This works, but it doesn't explain the error in the first place. I tried casting the list to short and float and those threw the same exception. **Edit** That blog post does explain why it doesn't work! `Cast<T>()` is an extension method on `IEnumerable` rather than `IEnumerable<T>`. That means that by the time each value gets to the point where it's being cast, it has already been boxed back into a System.Object. In essence it's trying to do this: ``` int i = 1; object o = i; long l = (long)o; ``` This code throws the InvalidCastException you're getting. If you try to cast an int directly to a long you're fine, but casting a boxed int back to a long doesn't work. Certainly an oddity!
The Enumerable.Cast method is defined as follows: ``` public static IEnumerable<TResult> Cast<TResult>( this IEnumerable source ) ``` There is no information about the initial type of the IEnumerable's items, so each of your ints is first converted to System.Object via boxing, and the cast then tries to unbox it into a long variable, which is invalid. Similar code to reproduce this: ``` int i = 1; object o = i; // boxing long l = (long)o; // unboxing, incorrect // long l = (int)o; // this will work ``` So the solution for your problem will be: ``` ints.Select(i => (long)i) ```
Puzzling Enumerable.Cast InvalidCastException
[ "", "c#", ".net", "exception", "" ]
I have used the following JavaScript code to put focus on a particular element (branch1 is an element): ``` document.location.href="#branch1"; ``` But as I am also using jQuery in my web app, I want to do the above in jQuery instead. I have tried the following, but I don't know why it's not working: ``` $("#branch1").focus(); ``` The above jQuery (focus()) code is not working for a div, whereas if I try the same code with a textbox it works. How can I put focus on a div element using jQuery? Thanks!
Check [jQuery.ScrollTo](https://github.com/flesler/jquery.scrollTo), I think that's the behavior that you want, check the [demo](http://demos.flesler.com/jquery/scrollTo/).
For my problem this code worked; I had to navigate to an anchor tag on page load: ``` $(window).scrollTop($('a#captchaAnchor').position().top); ``` For that matter, you can use this on any element, not just an anchor tag.
How to scroll to an element in jQuery?
[ "", "javascript", "jquery", "html", "focus", "" ]
I am currently working on a [Flot](http://code.google.com/p/flot/) graph, the [API](http://flot.googlecode.com/svn/trunk/API.txt) which seems pretty powerful overall, although examples of advanced use are not widely documented. The API suggests there are ways to set hoverable on the graph, not that I am sure what exactly that means I can do with it. I would like to know if anyone could contribute some examples that they have come across, or code for that matter, which demonstrate examples of any of the following: * Dynamic tooltips triggered by hover over events on Flot chart elements * Tick tooltips (hovering over the xaxis shows detail) * Any kind of hover over / dynamic event binding which has been used w Flot The effect I am looking for is similar to [this](http://teethgrinder.co.uk/open-flash-chart-2/tooltip.php) Open Flash Chart example
Have a look at [this flot example](http://www.flotcharts.org/flot/examples/interacting/) which demonstrates tooltips for plot points on the chart. (Make sure you select the **Enable tooltip** checkbox.)
There is also a simple tooltip plugin for it; you can find it [here](https://github.com/krzysu/flot.tooltip). I also added some features to the plugin, which you can find on GitHub: <https://github.com/skeleton9/flot.tooltip>
Any examples of Flot with floating tooltips?
[ "", "javascript", "jquery", "jquery-events", "flot", "graphing", "" ]
I'm working on an ASP.NET MVC application. The LINQ data context is being passed into my service objects by StructureMap. I've got it set to have a scope of hybrid. This all works just fine. ``` protected override void configure() { ForRequestedType<AetherDataContext>() .TheDefaultIs(() => new AetherDataContext()) .CacheBy(InstanceScope.Hybrid); } ``` The problem is that I keep running out of memory, and I'm wondering if the IDisposable interface is ever actually being called. Anyone got any ideas? Failing that, anyone got any other ideas for things that might be causing my memory exceptions? Update: Some additional information: I just stuffed a couple of methods into my data context and put breakpoints in there. ``` protected override void Dispose(bool disposing) { Debug.WriteLine("Disposing: " + DateTime.Now); base.Dispose(disposing); } public new void Dispose() { Debug.WriteLine("Disposing: " + DateTime.Now); base.Dispose(); } ``` I'm not quite sure that I'm doing this the correct way; I'm guessing that the new method will be called? Anyway, neither of the breakpoints was hit. The constructor for the same class was called on every request, though. Not ideal, I'm thinking.
This is almost an exact copy of the question I asked 2 days ago: [Session containing items implementing IDisposable](https://stackoverflow.com/questions/498095/session-containing-items-implementing-idisposable) InstanceScope.Hybrid stores the object inside HttpContext.Current.Items if it exists, or thread-local storage otherwise; InstanceScope.HttpSession works the same way except that it uses the HttpSession and thread-local storage. The Items collection lives per request, so if you implement the pattern pointed out in my question you should see Dispose firing at the end of the current request.
OK, so the latest version of StructureMap [(2.3.5)](http://codebetter.com/blogs/jeremy.miller/archive/2009/02/01/structuremap-2-5-3-is-released-and-the-future-of-structuremap.aspx) has a useful little method called ``` HttpContextBuildPolicy.DisposeAndClearAll(); ``` > Cleanup convenience methods on HttpContext and ThreadLocal. HttpContextBuildPolicy.DisposeAndClearAll(), ThreadLocalStoragePolicy.DisposeAndClearAll(). Calling either method will eject all cached instances and call IDispose if the object is IDisposable. Previously the dispose methods weren't being called; I added that call to Application\_EndRequest and they are now. I'm hoping that this will solve some of my memory problems. We shall see.
StructureMap InstanceScope.Hybrid and IDisposable
[ "", "c#", "asp.net-mvc", "linq", "inversion-of-control", "structuremap", "" ]
I have some logic, which defines and uses some user-defined types, like these: ``` class Word { System.Drawing.Font font; //a System type string text; } class Canvass { System.Drawing.Graphics graphics; //another, related System type ... and other data members ... //a method whose implementation combines the two System types internal void draw(Word word, Point point) { //make the System API call graphics.DrawString(word.text, word.font, Brushes.Black, point); } } ``` The logic, after doing calculations with the types (e.g. to locate each `Word` instance), indirectly uses some `System` APIs, for example by invoking the `Canvass.draw` method. I'd like to make this logic independent of the `System.Drawing` namespace: mostly, in order to help with unit testing (I think unit tests' output would be easier to verify if the `draw` method were drawing to something other than a real `System.Drawing.Graphics` instance). To eliminate the logic's dependency on the `System.Drawing` namespace, I thought I'd declare some new interfaces to act as placeholders for the `System.Drawing` types, for example: ``` interface IMyFont { } interface IMyGraphics { void drawString(string text, IMyFont font, Point point); } class Word { IMyFont font; //no longer depends on System.Drawing.Font string text; } class Canvass { IMyGraphics graphics; //no longer depends on System.Drawing.Graphics ... and other data ... internal void draw(Word word, Point point) { //use interface method instead of making a direct System API call graphics.drawString(word.text, word.font, point); } } ``` If I did this, then different assemblies could have different implementations of the `IMyFont` and `IMyGraphics` interfaces, for example ... ``` class MyFont : IMyFont { System.Drawing.Font theFont; } class MyGraphics : IMyGraphics { System.Drawing.Graphics theGraphics; public void drawString(string text, IMyFont font, Point point) { //!!! downcast !!! 
System.Drawing.Font theFont = ((MyFont)font).theFont; //make the System API call theGraphics.DrawString(text, theFont, Brushes.Black, point); } } ``` ... however the implementation would need a downcast as illustrated above. My question is, **is there a way to do this without needing a downcast in the implementation?** By "this", I mean "defining UDTs like `Word` and `Canvass` which don't depend on specific concrete `System` types". An alternative would be abstract UDTs ... ``` class Word { //System.Drawing.Font font; //declared in a subclass of Word string text; } abstract class Canvass { //System.Drawing.Graphics graphics; //declared in a subclass of Canvass //concrete draw method is defined in a subclass of Canvass internal abstract void draw(Word word, Point point); } ``` ... but this too would need a downcast in the implementation of the subclass. I also thought of using the double dispatch idiom, but it depends on naming the various subclasses in the APIs. Or, if not with interfaces or subclasses, is there some way using delegates? --- **--Edit:--** There have been two possible answers. One answer is to use generics, precisely as suggested by 'Sir Lantis' answer below, and as suggested by the blog post to which Jon Skeet linked. I suspect this would work fine in most scenarios. The down-side from my point of view is that it means introducing `TFont` as a template parameter: it isn't only a class like `Word` (which contains a `Font` instance) which needs to become a generic class (like `WordT<TFont>`) ... it's also that any class which contains a `WordT<TFont>` (e.g. `Paragraph`) now also needs to become generic with a `TFont` parameter (e.g. `ParagraphT<TFont>`). Eventually, almost every class in the assembly has become a generic class. This **does** preserve type-safety and avoid the need to downcast ... **but** it's kind of ugly, and disturbs the illusion of encapsulation (the illusion that 'Font' is an opaque implementation detail). 
Another answer is to use a map or dictionary in the user class. Instead of `Font` in the reusable library, and instead of an abstract interface, define a 'handle' class like: ``` public struct FontHandle { public readonly int handleValue; public FontHandle(int handleValue) { this.handleValue = handleValue; } } ``` Then, instead of downcasting from `FontHandle`, keep a `Dictionary<int, Font>` instance which maps `FontHandle` values to `Font` instances.
First, I wonder if the entire scenario isn't a little artificial; are you really going to **need** this level of abstraction? Perhaps subscribe to [YAGNI](http://en.wikipedia.org/wiki/You_Ain't_Gonna_Need_It)? Why does your `MyGraphics` only work with a `MyFont`? Can it work with an `IFont`? That would be a better use of interfaces, and would avoid this entire issue... One option might be a bit of a re-design, so that the `IFont` just describes the metadata for the font (size, font-face, etc), and you have things on the concrete `MyGraphics` like: ``` [public|internal] MyFont GetFont(IFont font) {...} // or just Font ``` and it becomes the job of the graphics to do the translation - which is then used something like: ``` public void drawString(string text, IMyFont font, Point point) { using(System.Drawing.Font theFont = GetFont(font)) { theGraphics.DrawString(text, theFont, Brushes.Black, point); } // etc } ``` Of course, `Point` might need translation too ;-p
You're effectively saying "I know better than the compiler - I know that it's bound to be an instance of `MyFont`." At that point you've got `MyFont` and `MyGraphics` being tightly coupled again, which reduces the point of the interface a bit. Should `MyGraphics` work with any `IFont`, or only a `MyFont`? If you can make it work with any `IFont` you'll be fine. Otherwise, you may need to look at complicated generics to make it all compile-time type safe. You may find [my post on generics in Protocol Buffers](http://msmvps.com/blogs/jon_skeet/archive/2008/08/29/lessons-learned-from-protocol-buffers-part-3-generic-type-relationships.aspx) useful as a similar situation. (Side suggestion - your code will be more idiomatically .NET-like if you follow the naming conventions, which includes Pascal case for methods.)
Is it possible to avoid a downcast?
[ "", "c#", "polymorphism", "downcast", "" ]
Using LINQ to SQL ``` db.Products.Where(c => c.ID == 1).Skip(1).Take(1).ToList(); ``` executes ``` SELECT [t1].[ID], [t1].[CategoryID], [t1].[Name], [t1].[Price], [t1].[Descripti n], [t1].[IsFeatured], [t1].[IsActive] FROM ( SELECT ROW_NUMBER() OVER (ORDER BY [t0].[ID], [t0].[CategoryID], [t0].[Name , [t0].[Price], [t0].[Description], [t0].[IsFeatured], [t0].[IsActive]) AS [ROW NUMBER], [t0].[ID], [t0].[CategoryID], [t0].[Name], [t0].[Price], [t0].[Descrip ion], [t0].[IsFeatured], [t0].[IsActive] FROM [dbo].[Products] AS [t0] WHERE [t0].[ID] = @p0 ) AS [t1] WHERE [t1].[ROW_NUMBER] BETWEEN @p1 + 1 AND @p1 + @p2 ORDER BY [t1].[ROW_NUMBER] -- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [1] -- @p1: Input Int (Size = 0; Prec = 0; Scale = 0) [1] -- @p2: Input Int (Size = 0; Prec = 0; Scale = 0) [1] ``` It's using ROW\_NUMBER for pagination... good. Now, I'm trying to use relationships generated by LINQ to SQL to paginate data. Using the query... ``` var cat = db.Categories.Where(c => c.ID == 1).SingleOrDefault(); cat.Products.Where(c => c.ID == 1).Skip(1).Take(1).ToList(); SELECT [t0].[ID], [t0].[Name] FROM [dbo].[Categories] AS [t0] WHERE [t0].[ID] = @p0 -- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [1] -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 3.5.30729.1 SELECT [t0].[ID], [t0].[CategoryID], [t0].[Name], [t0].[Price], [t0].[Descriptio n], [t0].[IsFeatured], [t0].[IsActive] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0 -- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [1] -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 3.5.30729.1 ``` Now the use of ROW\_NUMBER and pagination is gone... it's getting all Products where CategoryID = 1... why is it getting ALL rows?
I think it's because the category is in memory. You are asking it, implicitly, to get the products of the category. This implicit request for data is fulfilled, and then the query is executed in memory (where the category is at this point). I'm thinking it's equivalent to: ``` var cat = db.Categories.Where(c => c.ID == 1).SingleOrDefault(); var prods = db.Products.Where(p => p.CategoryID == cat.ID).ToList(); var r = prods.Where(p => p.ID == 1).Skip(1).Take(1); ``` Note the significance: what if cat changes in memory? The size of the collection could vary. NOTE: Thanks for the headache :)
have you tried: ``` var cat = db.Categories.Where(c => c.ID == 1); var prod = cat.Products.Where(c => c.ID == 1).Skip(1).Take(1).ToList(); ```
Querying against LINQ to SQL relationships
[ "", "asp.net", "sql", "linq", "linq-to-sql", "" ]
I'd like to parse status.dat file for nagios3 and output as xml with a python script. The xml part is the easy one but how do I go about parsing the file? Use multi line regex? It's possible the file will be large as many hosts and services are monitored, will loading the whole file in memory be wise? I only need to extract services that have critical state and host they belong to. Any help and pointing in the right direction will be highly appreciated. **LE** Here's how the file looks: ``` ######################################## # NAGIOS STATUS FILE # # THIS FILE IS AUTOMATICALLY GENERATED # BY NAGIOS. DO NOT MODIFY THIS FILE! ######################################## info { created=1233491098 version=2.11 } program { modified_host_attributes=0 modified_service_attributes=0 nagios_pid=15015 daemon_mode=1 program_start=1233490393 last_command_check=0 last_log_rotation=0 enable_notifications=1 active_service_checks_enabled=1 passive_service_checks_enabled=1 active_host_checks_enabled=1 passive_host_checks_enabled=1 enable_event_handlers=1 obsess_over_services=0 obsess_over_hosts=0 check_service_freshness=1 check_host_freshness=0 enable_flap_detection=0 enable_failure_prediction=1 process_performance_data=0 global_host_event_handler= global_service_event_handler= total_external_command_buffer_slots=4096 used_external_command_buffer_slots=0 high_external_command_buffer_slots=0 total_check_result_buffer_slots=4096 used_check_result_buffer_slots=0 high_check_result_buffer_slots=2 } host { host_name=localhost modified_attributes=0 check_command=check-host-alive event_handler= has_been_checked=1 should_be_scheduled=0 check_execution_time=0.019 check_latency=0.000 check_type=0 current_state=0 last_hard_state=0 plugin_output=PING OK - Packet loss = 0%, RTA = 3.57 ms performance_data= last_check=1233490883 next_check=0 current_attempt=1 max_attempts=10 state_type=1 last_state_change=1233489475 last_hard_state_change=1233489475 last_time_up=1233490883 last_time_down=0 
last_time_unreachable=0 last_notification=0 next_notification=0 no_more_notifications=0 current_notification_number=0 notifications_enabled=1 problem_has_been_acknowledged=0 acknowledgement_type=0 active_checks_enabled=1 passive_checks_enabled=1 event_handler_enabled=1 flap_detection_enabled=1 failure_prediction_enabled=1 process_performance_data=1 obsess_over_host=1 last_update=1233491098 is_flapping=0 percent_state_change=0.00 scheduled_downtime_depth=0 } service { host_name=gateway service_description=PING modified_attributes=0 check_command=check_ping!100.0,20%!500.0,60% event_handler= has_been_checked=1 should_be_scheduled=1 check_execution_time=4.017 check_latency=0.210 check_type=0 current_state=0 last_hard_state=0 current_attempt=1 max_attempts=4 state_type=1 last_state_change=1233489432 last_hard_state_change=1233489432 last_time_ok=1233491078 last_time_warning=0 last_time_unknown=0 last_time_critical=0 plugin_output=PING OK - Packet loss = 0%, RTA = 2.98 ms performance_data= last_check=1233491078 next_check=1233491378 current_notification_number=0 last_notification=0 next_notification=0 no_more_notifications=0 notifications_enabled=1 active_checks_enabled=1 passive_checks_enabled=1 event_handler_enabled=1 problem_has_been_acknowledged=0 acknowledgement_type=0 flap_detection_enabled=1 failure_prediction_enabled=1 process_performance_data=1 obsess_over_service=1 last_update=1233491098 is_flapping=0 percent_state_change=0.00 scheduled_downtime_depth=0 } ``` It can have any number of hosts and a host can have any number of services.
Nagiosity does exactly what you want: <http://code.google.com/p/nagiosity/>
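If you'd rather hand-roll it than depend on an external tool, here is a minimal parser sketch for the block format shown in the question (the function names are my own). It works line by line rather than with a multi-line regex, so even a large status.dat never has to sit in memory all at once:

```python
def parse_blocks(lines):
    """Yield (block_type, dict) pairs from nagios status.dat lines."""
    block_type, block = None, None
    for raw in lines:
        line = raw.strip()
        if line.endswith('{'):            # start of a block, e.g. "service {"
            block_type, block = line.split()[0], {}
        elif line == '}' and block is not None:
            yield block_type, block
            block_type, block = None, None
        elif block is not None and '=' in line:
            key, _, value = line.partition('=')  # split on the first '=' only
            block[key] = value

def critical_services(lines):
    """Return (host_name, service_description) for services in CRITICAL state (2)."""
    return [(b['host_name'], b['service_description'])
            for t, b in parse_blocks(lines)
            if t == 'service' and b.get('current_state') == '2']
```

You can pass it an open file directly, e.g. `critical_services(open('status.dat'))`; emitting XML from the resulting tuples is then the easy part, as the question says.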
Pfft, get yerself mk\_livestatus. <http://mathias-kettner.de/checkmk_livestatus.html>
How to parse nagios status.dat file?
[ "", "python", "parsing", "nagios", "" ]
We started a Web Project in Eclipse 3.2 a ways back and we've since upgraded to Eclipse 3.4 but now the Project has the error: "This project needs to migrate WTP metadata" We've tried right-clicking and doing the "quick-fix" which is in fact to Migrate WTP Metadata. Unfortunately nothing happens and the error remains. We can delete that error from the Problems and everything works as it should, however, every time we re-import the project from source control, the error re-appears. Any ideas on how to permanently get rid of this error or how to ACTUALLY migrate WTP metadata? **UPDATE:** Everyone, please vote on the answer that works for you rather than adding your own answer that references a previous person's answer.
The above solution works fine, but the problem creeps up again and again. An easier solution is to right-click on the affected project in Eclipse and choose Validate.
For me, none of these worked. The solution for me was deleting the following file while Eclipse was stopped: ``` /workspace/.metadata/.plugins/org.eclipse.core.resources/.projects/myprojectname/.markers ```
Eclipse error: This project needs to migrate WTP metadata
[ "", "java", "eclipse", "eclipse-wtp", "" ]
How do I get a random [`decimal.Decimal`](https://docs.python.org/3.6/library/decimal.html#decimal-objects) instance? It appears that the random module only returns floats which are a pita to convert to Decimals.
What's "a random decimal"? Decimals have arbitrary precision, so generating a number with as much randomness as you can hold in a Decimal would take the entire memory of your machine to store. You have to know how many decimal digits of precision you want in your random number, at which point it's easy to just grab a random integer and divide it. For example if you want two digits above the point and two digits in the fraction (see [randrange here](https://docs.python.org/2/library/random.html#random.randrange)): ``` decimal.Decimal(random.randrange(10000))/100 ```
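That one-liner generalizes into a small helper (the function name and signature here are my own invention, not anything from the standard library):

```python
import decimal
import random

def random_decimal(int_digits, frac_digits, rng=random):
    """Random non-negative Decimal with at most int_digits before the
    decimal point and at most frac_digits after it."""
    upper = 10 ** (int_digits + frac_digits)
    # Decimal / int division is exact here, so no float ever enters the picture.
    return decimal.Decimal(rng.randrange(upper)) / (10 ** frac_digits)
```

For a fixed number of places in the result you can quantize it afterwards, e.g. `random_decimal(2, 2).quantize(decimal.Decimal('0.01'))`.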
From the [standard library reference](http://docs.python.org/dev/3.0/library/decimal.html) : To create a Decimal from a float, first convert it to a string. This serves as an explicit reminder of the details of the conversion (including representation error). ``` >>> import random, decimal >>> decimal.Decimal(str(random.random())) Decimal('0.467474014342') ``` Is this what you mean? It doesn't seem like a pita to me. You can scale it into whatever range and precision you want.
random Decimal in python
[ "", "python", "random", "decimal", "" ]
What's the disadvantage of choosing a large value for max when creating a varchar or varbinary column? I'm using MS SQL but I assume this would be relevant to other dbs as well. Thanks
That depends on whether it is ever reasonable to store a large amount of data in the particular column. If you declare a column that would never properly store much data (e.g. an employee first name as a VARCHAR(1000)), you end up with a variety of problems 1. Many if not most client APIs (e.g. ODBC drivers, JDBC drivers, etc.) allocate memory buffers on the client that are large enough to store the maximum size of a particular column. So even though the database only has to store the actual data, you may substantially increase the amount of memory the client application uses. 2. You lose the ability to drive data validation rules (or impart information about the data) from the table definition. If the database allows 1000-character first names, every application that interacts with the database will probably end up having its own rules for how large an employee name can be. If this is not mitigated by putting a stored procedure layer between all applications and the tables, this generally leads to various applications having various rules. 3. Murphy's Law states that if you allow 1000 characters, someone will eventually store 1000 characters in the column, or at least a value large enough to cause errors in one or more applications (e.g. no one checked to see whether every application's employee name field could display 1000 characters).
Depends on the RDBMS. IIRC, MySql allocates a 2 byte overhead for varchars > 255 characters (to track the varchar length). MSSQL <= 2000 would allow you to allocate a row size > 8060 bytes, but would fail if you tried to INSERT or UPDATE a row that actually exceeded 8060 bytes. SQL 2005[1] allows the insert, but will allocate a new page for the overflow and leave a pointer behind. This, obviously, impacts performance. [1] varchar(max) is somewhat of a special case, but will also allocate an overflow page if the length of the field is > 8000 or the row > 8060. This is with MSSQL defaults, and behavior can change with the large types in data row option.
Disadvantage of choosing large MAX value for varchar or varbinary
[ "", "sql", "sql-server", "types", "varchar", "" ]
I'm writing a simplistic game to get some more C++ experience, and I have an idea where I feel polymorphism *almost* works, but doesn't. In this game, the `Party` moves fairly linearly through a `Map`, but can occasionally encounter a `Fork` in the road. A fork is (basically) an `std::vector<location*>`. Originally I was going to code something like the following into a `Party` member function: ``` if(!CurrLocation->fork_.empty()) // Loop through forks and show options to the player, go where s/he wants else CurrLocation++; ``` But I was wondering if some variant of the following might be better: ``` CurrLocation = CurrLocation->getNext(); ``` With `Fork` actually being derived from `Location`, and overriding some new function `getNext()`. But in the latter case, the `location` (a low-level structure) would have to be the one to present the message to the user instead of "passing this back up", which I don't feel is elegant as it couples `location` to `UserInterface::*`. Your opinions?
All problems can be solved by adding a level of indirection. I would use your suggested variant, and decouple Location from Party by allowing getNext to accept an object that resolves directional choices. Here is an example (untested): ``` class Location; class IDirectionChooser { public: virtual bool ShouldIGoThisWay(Location & way) = 0; }; class Location { public: virtual Location * GetNext(IDirectionChooser & chooser) { return nextLocation; } virtual void Describe(); private: Location * nextLocation; }; class Fork : public Location { public: virtual Location * GetNext(IDirectionChooser & chooser) { for (size_t i = 0; i < locations.size(); i++) if (chooser.ShouldIGoThisWay(*locations[i])) return locations[i]; return 0; // nothing chosen; caller decides what that means } virtual void Describe(); private: vector<Location *> locations; }; class Party : public IDirectionChooser { public: void Move() { currentLocation = currentLocation->GetNext(GetDirectionChooser()); } virtual IDirectionChooser & GetDirectionChooser() { return *this; } virtual bool ShouldIGoThisWay(Location & way) { way.Describe(); cout << "Do you want to go that way? y/n" << endl; char ans; cin >> ans; return ans == 'y'; } private: Location * currentLocation; }; ```
You should use polymorphism as long as it makes sense and simplifies your design. You shouldn't use it just because it exists and has a fancy name. If it does make your design simpler, then it's worth the coupling. Correctness and simplicity should be the ultimate goal of every design decision.
Is Polymorphism worth an increase in coupling?
[ "", "c++", "polymorphism", "coupling", "" ]
Today I had to fix some older VB.NET 1.0 code which is using threads. The problem was with updating UI elements from the worker thread instead of the UI-thread. It took me some time to find out that I can use assertions with InvokeRequired to find the problem. Besides the above mentioned concurrent modification problem, there are deadlocks, race conditions, etc. one could run into. **As debugging/fixing threading problems is a pain, I'm wondering how I could reduce coding errors/faults in this area and how I could easier find any of them.** So, what I'm asking for, is: * **Are there any good patterns to follow when writing multi-threading code? What are the Dos and Don'ts?** * **What techniques do you use to debug threading problems?** Please provide some example code if applicable and possible. The answers should be related to the .NET framework (any version).
This could be a *massive* list - read Joe Duffy's excellent "[Concurrent Programming On Windows](https://rads.stackoverflow.com/amzn/click/com/032143482X)" for much more detail. This is pretty much a brain dump... * Try to avoid calling into significant chunks of code while you own a lock * Avoid locking on references which code outside the class might also lock on * If you ever need to acquire more than one lock at a time, always acquire those locks in the same order * Where reasonable, use immutable types - they can be shared freely between threads * Other than immutable types, try to avoid the need to share data between threads * Avoid trying to make your types threadsafe; most types don't need to be, and usually the code which needs to share data will need to control the locking itself * In a WinForms app: + Don't perform any long-running or blocking operations on the UI thread + Don't touch the UI from any thread other than the UI thread. (Use BackgroundWorker, Control.Invoke/BeginInvoke) * Avoid thread-local variables (aka thread-statics) where possible - they can lead to unexpected behaviour, particularly on ASP.NET where a request may be served by different threads (search for "thread agility" and ASP.NET) * Don't try to be clever. Lock-free concurrent code is *hugely* difficult to get right. * Document the threading model (and thread safety) of your types * Monitor.Wait should almost always be used in conjunction with some sort of check, in a while loop (i.e. while (I can't proceed) Monitor.Wait(monitor)) * Consider the difference between Monitor.Pulse and Monitor.PulseAll carefully every time you use one of them. * Inserting Thread.Sleep to make a problem go away is never a real fix. * Have a look at "Parallel Extensions" and the "Coordination and Concurrency Runtime" as ways of making concurrency simpler. Parallel Extensions is going to be part of .NET 4.0. In terms of debugging, I don't have very much advice. 
Using Thread.Sleep to boost the chances of seeing race conditions and deadlocks can work, but you've got to have quite a reasonable understanding of what's wrong before you know where to put it. Logging is very handy, but don't forget that the code goes into a sort of quantum state - observing it via logging is almost bound to change its behaviour!
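One bullet from the list above, "always acquire those locks in the same order", is worth a tiny illustration. The sketch below is Python rather than C#, but the idea is language-agnostic: pick any fixed global ordering (here, object identity) and take both locks in that order, so two threads locking the same pair can never deadlock each other:

```python
import threading

def with_both(lock1, lock2, action):
    """Acquire two locks in a fixed global order (by id) before running action."""
    first, second = sorted((lock1, lock2), key=id)
    with first, second:
        action()
```

Without the `sorted`, one thread taking (A, B) while another takes (B, A) can deadlock; with it, both threads always contend on the same lock first.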
I'm not sure how well this will help for the particular application you're working with, but here are two approaches borrowed from functional programming for writing multithreaded code: **Immutable objects** If you need to share state between threads, the state should be immutable. If one thread needs to make a change to the object, it creates a brand new version of the object with the change instead of mutating the object's state. Immutability does not inherently limit the kind of code you can write, nor is it inefficient. There are lots of implementations of immutable stacks, a variety of immutable trees that form the basis of maps and sets, and other kinds of immutable data structures, and many (if not all) immutable data structures are just as efficient as their mutable counterparts. Since objects are immutable, it's not possible for one thread to mutate shared state under your nose. This means you don't need to acquire locks to write multithreaded code. This approach eliminates a whole class of errors related to deadlocking, livelocking, and race conditions. **Erlang-style message passing** You don't need to learn the language, but have a look at Erlang to see how it approaches concurrency. Erlang apps can scale pretty much indefinitely because each process is completely separate from all the others (note: these are not exactly processes, but not exactly threads either). Processes fire up and simply spin a loop waiting for messages: messages are received in the form of tuples, which the process can then pattern match against to see if the message is meaningful. Processes can send other messages, but they are indifferent to whoever receives the message. Advantages of this style are the elimination of locks, and that when one process fails it doesn't bring down your entire app. Here's a nice summary of Erlang-style concurrency: <http://www.defmacro.org/ramblings/concurrency.html>
How to write safe/correct multi-threaded code in .NET?
[ "", "c#", ".net", "vb.net", "multithreading", "" ]
Basically I want to do this: ``` public interface A { void a(); } public interface B { void b(); } public class SomeClass { public SomeClass(<A&B> e) { // Note the type here e.a(); e.b(); } } ``` What I did on the commented line is obviously illegal. I know I can just require the passed object to implement interface A, or interface B, but is there a way to do both? I guess there are workarounds (like requiring the parameter to be of type A and then check if it is also an instanceof B), but that way I don't get help from the compiler. Do you know of any way to do this? Or maybe a smarter workaround...
You can do it with generics (Java 5 and later). For example, to accept an instance of some class that implements both CharSequence and Appendable: ``` public <T extends CharSequence & Appendable> void someMethod(T param) { ... } ```
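To make the accepted approach concrete, here is a self-contained sketch mirroring the question's A/B interfaces (all names are illustrative, and the same bound works on a generic constructor as well as a generic method):

```java
public class Demo {
    interface A { void a(); }
    interface B { void b(); }

    static class Impl implements A, B {
        final StringBuilder log = new StringBuilder();
        public void a() { log.append("a"); }
        public void b() { log.append("b"); }
    }

    // The type parameter requires BOTH interfaces at compile time.
    static <T extends A & B> void useBoth(T e) {
        e.a();
        e.b();
    }
}
```

Calling `useBoth` with an object that implements only one of the two interfaces is a compile-time error, which is exactly the compiler help the question asks for.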
Depending on the design, you can do one of the following: 1. Make A extend B, or B extend A. 2. Make an interface C that extends both A and B. Either way you'll want to have a contract that includes both a() and b().
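Option 2 sketched in Java syntax (names are made up): the umbrella interface carries both contracts, so the method or constructor can accept a single parameter type.

```java
public class Umbrella {
    interface A { void a(); }
    interface B { void b(); }

    // C is purely a combining contract; it adds no methods of its own.
    interface C extends A, B { }

    static class Impl implements C {
        final StringBuilder log = new StringBuilder();
        public void a() { log.append("a"); }
        public void b() { log.append("b"); }
    }

    static void someMethod(C e) {
        e.a();
        e.b();
    }
}
```

The trade-off is that existing classes implementing A and B separately will not satisfy C; they have to opt in to the combined interface.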
Can you require multiple types at once?
[ "", "java", "inheritance", "types", "interface", "" ]
I'm learning Java (and OOP) and although it might be irrelevant for where I'm at right now, I was wondering if SO could share some common pitfalls or good design practices.
One important thing to remember is that static methods cannot be overridden by a subclass. References to a static method in your code essentially tie it to that implementation. When using instance methods, behavior can be varied based on the type of the instance. You can take advantage of polymorphism. Static methods are more suited to utilitarian types of operations where the behavior is set in stone. Things like base 64 encoding or calculating a checksum for instance.
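The "static methods cannot be overridden" point can be demonstrated directly: a subclass can only *hide* a static method, while instance methods dispatch on the runtime type (class names here are made up):

```java
public class Dispatch {
    static class Base {
        static String whoStatic() { return "Base"; }
        String whoInstance() { return "Base"; }
    }

    static class Derived extends Base {
        // This HIDES Base.whoStatic rather than overriding it.
        static String whoStatic() { return "Derived"; }
        @Override String whoInstance() { return "Derived"; }
    }

    // Resolved at runtime against the actual object:
    static String viaInstance(Base b) { return b.whoInstance(); }

    // Resolved at compile time against the declared type:
    static String viaStatic() { return Base.whoStatic(); }
}
```

Passing a `Derived` where a `Base` is expected still reaches the subclass's instance method, which is the polymorphism the answer refers to.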
I don't think any of the answers get to the heart of the OO reason of when to choose one or the other. Sure, use an instance method when you need to deal with instance members, but you could make all of your members public and then code a static method that takes in an instance of the class as an argument. Hello C. You need to think about the **messages** the object you are designing responds to. Those will always be your instance methods. If you think about your objects this way, [you'll almost never have static methods](http://misko.hevery.com/2008/12/15/static-methods-are-death-to-testability/). Static members are ok in certain circumstances. Notable exceptions that come to mind are the Factory Method and Singleton (use sparingly) patterns. Exercise caution when you are tempted to write a "helper" class, for from there, it is a slippery slope into procedural programming.
Is there a rule of thumb for when to code a static method vs an instance method?
[ "", "java", "oop", "class-design", "" ]
I found a couple of ways to handle recursion in Smarty, mostly based on including templates into themselves, which seems like ridiculous waste of resources. I found one solution, by Messju over at Smarty that seemed to be just right - but it is not supported and fails in the latest version of smarty :( For people asking: What I want smarty to print out is a discussion thread that is defined by an array of entries. If an entry has one or more answers, those are listed as children to said entry in an array, and so on. ``` array( array( 'id'=>0, 'headline'=>"My parent headline", 'body' =>"My parent body", 'children'=>array( array( 'id'=>1, 'headline'=>"My firstChild headline", 'body' =>"My firstChild body", 'children'=>array() ), array( 'id'=>2, 'headline'=>"My secondChild headline", 'body' =>"My secondChild body", 'children'=>array() ) ) ), ); ``` The nested array has an arbitrary depth, and each entry will have an arbitrary number of children. To me this is something I want to do with within the scope of the template, as I consider it pure display logic. I do not want to have to handle HTML or some type of HTML placeholders outside of the template. I want smarty to print this as nested lists: ``` <ul> <li> <h1>My parent headline</h1> <p>My parent body</p> <ul> <li> <h1>My firstChild headline</h1> <p>My firstChild body</p> </li> <li> <h1>My secondChild headline</h1> <p>My secondChild body</p> </li> </ul> </li> </ul> ``` I'm starting to realize this might be a very case-by-case problem, so I figure I'll just write a smarty plugin to handle this specifically, although I'd rather have an all-around solution. Is there a way?
"In order to understand recursion, you must first understand recursion..." Just kidding. This should do what you want: ``` <?php /* * Smarty plugin * ------------------------------------------------------------- * File: function.recurse_array.php * Type: function * Name: recurse_array * Purpose: prints out elements of an array recursively * ------------------------------------------------------------- */ function smarty_function_recurse_array($params, &$smarty) { if (is_array($params['array']) && count($params['array']) > 0) { $markup = ''; $markup .= '<ul>'; foreach ($params['array'] as $element) { $markup .= '<li>'; $markup .= '<h1>' . $element['headline'] . '</h1>'; $markup .= '<p>' . $element['body'] . '</p>'; if (isset($element['children'])) { $markup .= smarty_function_recurse_array(array('array' => $element['children']), $smarty); } $markup .= '</li>'; } $markup .= '</ul>'; return $markup; } else { return 'not array'; } } ``` Place the file into your smarty/plugins folder. Assign your array to Smarty then call it in your template like so: ``` {recurse_array array=$data} ``` Here's a nice tutorial for making custom Smarty functions: [Creating Custom Smarty Functions](http://www.velvetblues.com/web-development-blog/smarty-templates-create-custom-functions/) Be aware of the dependency that this example has on your underlying data structure. Also, keep in mind that an unusually long or deeply nested set of data could be really slow. Manage your complexity, keep things well documented, and you should be fine. Good luck!
With Smarty 3, this can be done using {function}. The following code will produce the required output. ``` {function name=printList} <ul> {foreach $items as $item} <li> <h1>{$item['headline']}</h1> <p>{$item['body']}</p> {if $item['children']} {call name=printList items=$item['children']} {/if} </li> {/foreach} </ul> {/function} {call name=printList items=$comments} ``` More information can be found at the [docs](http://www.smarty.net/docs/en/language.function.function.tpl). Side note: Just because something is complex or recursive it doesn't mean that it can't be inside a template. For God's sake, the HTML ul-li structure is naturally recursive, and by hiding it away or moving it somewhere else (just because it is too complex for a template) you are introducing extra complexity into the application.
What is the best way to handle recursion in smarty?
[ "", "php", "templates", "recursion", "smarty", "" ]
I am trying to generate a key using the .net 2.0 PasswordDeriveBytes class. ``` PasswordDeriveBytes pdb = new PasswordDeriveBytes("blahblahblah",null); byte[] iv = new byte[] { 0, 0, 0, 0, 0, 0, 0, 0 }; byte[] key = pdb.CryptDeriveKey("TripleDES", "SHA1", 128, iv); ``` The code throws "CryptographicException: Invalid flags specified." when trying to execute the CryptDeriveKey method above. I'm trying to encrypt a database entry in asp.net and this is my first shot a crypto. I'd appreciate any help.
According to MSDN: *"If the keySize parameter is set to 0 bits, the default key size for the specified algorithm is used."* [MSDN PasswordDeriveBytes.CryptDeriveKey Method](http://msdn.microsoft.com/en-us/library/system.security.cryptography.passwordderivebytes.cryptderivekey.aspx) ``` PasswordDeriveBytes pdb = new PasswordDeriveBytes("blahblahblah",null); byte[] iv = new byte[] { 0, 0, 0, 0, 0, 0, 0, 0 }; byte[] key = pdb.CryptDeriveKey("TripleDES", "SHA1", 0, iv); ``` Unless you have a good reason for specifying the size I'd suggest just leaving it 0.
TripleDES has a key length of 192 bits; try: ``` byte[] key = pdb.CryptDeriveKey("TripleDES", "SHA1", 192, iv); ``` Try [this link](http://msdn.microsoft.com/en-us/library/system.security.cryptography.passwordderivebytes.aspx) for example code.
Deriving a crypto key from a password
[ "", "c#", "asp.net", "" ]
I am quite a heavy user of wxWidgets, partly for licensing reasons. * How do you see the future of wxWidgets in light of the [recent announcement](http://www.qtsoftware.com/about/licensing) of Qt now being released under LGPL? * Do you think wxWidgets is still a good technical choice for new projects? Or would you recommend adopting Qt, because it is going to be a de-facto standard? * I am also interested in the possible implications this will have on their bindings with the most common scripting languages (e.g. PyQt, wxPython, wxRuby). Why is PyQt so under-used when it has a professional-grade designer and wxPython does not? ### Related: > <https://stackoverflow.com/questions/443546/qt-goes-lgpl-on-windows-is-it-good-enough-to-use-instead-of-mfc>
For those of us who are drawn to wxWidgets because it is the cross-platform library that uses native controls for proper look and feel, the licensing change of Qt has little to no consequences. **Edit:** Regarding "Qt not having native controls but native drawing functions", let me quote the [wxWidgets wiki page comparing toolkits](http://wiki.wxwidgets.org/WxWidgets_Compared_To_Other_Toolkits): > Qt doesn't have true native ports like wxWidgets does. What we mean by this is that even though Qt draws them quite realistically, Qt draws its own widgets on each platform. It's worth mentioning though that Qt comes with special styles for Mac OS X and Windows XP and Vista that use native APIs (Appearance Manager on Mac OS X, UxTheme on Windows XP) for drawing standard widget primitives (e.g. scrollbars or buttons) exactly like any native application. *Event handling, the resulting visual feedback and widget layout are always implemented by Qt.*
I'm currently using PyQt at work and I find myself totally satisfied. You have better documentation (IMHO), better event managing (the signal-slot pattern is somehow more powerful than the old simple-callback style), and importing your custom widgets into a graphical designer like Qt Designer is far easier. As far as I can tell Qt Designer is more powerful than any wxPython counterpart, like Boa Constructor and pyGlade. You also have great support for translating a program's strings into different languages (better support than wxLocale at least, and you can use a tool like Qt Linguist which is fully integrated in the Qt system). I'm using wxPython in some hobby projects, but I'm still a noob there. I think its greatest advantage over PyQt is having a native look & feel on different platforms. This is a huge point if you are developing Windows/Linux applications, for example. Actually you could use "skins" to obtain a native look & feel with Qt applications on Windows, but I have no idea how to achieve that (sorry, I've never used Qt on Windows :D).
Qt being now released under LGPL, would you recommend it over wxWidgets?
[ "", "python", "qt", "wxpython", "wxwidgets", "" ]
I talked to the Team Lead at Snap-On Tools once, and she told me they used an "implementation of JavaScript" for their server-side coding. It's been a while, but I was thinking, WTF is she talking about? Are there interpreters for JavaScript besides those implemented in browsers? **How can you create a program or code, especially server-side, using JavaScript that doesn't execute in the context of a browser? What the hell is server-side about JavaScript if it's not generating content after the browser has loaded it? Can "server-side" JavaScript generate content before the HTTP response is delivered, and if so, how does that work / how is it set up?** I have many issues with JavaScript, but first-class functions are so sexy. And JavaScript Object Notation is so pure; I couldn't imagine an easier way to define data structures. Plus, you can hack out some code pretty quickly with dynamic typing if you're not writing something that's mission critical. As a side question, given the last paragraph, do you have any suggestions for a good language to learn (comments will suffice)?
JavaScript does not have to be run in a browser if you use an ECMAScript engine. Actually, both [SpiderMonkey](http://www.mozilla.org/js/spidermonkey/) and [Rhino](http://www.mozilla.org/rhino/) are [ECMAScript engines](http://en.wikipedia.org/wiki/List_of_ECMAScript_engines). Flash's ActionScript is another ECMAScript derived language that doesn't have to run in a browser. Edit - Wow, a lot has changed in three years. For your server-side needs, I now recommend node.js.
List of JS interpreters that I know of, that can run standalone or embedded with other code: * [Google's V8](http://code.google.com/p/v8/) (C++) * [Jint](http://jint.codeplex.com/) (.NET) * [Microsoft's JScript](http://msdn.microsoft.com/en-us/library/hbxc2t98.aspx) (.NET), old * [JavaScript ScriptEngine in Java 6](http://java.sun.com/developer/technicalArticles/J2SE/Desktop/scripting/) (Java), formerly Mozilla's Rhino * [Mozilla's SpiderMonkey](http://www.mozilla.org/js/spidermonkey/) (C)
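As an illustration of "generate content before the response is delivered": with any of the standalone engines above (node.js being the modern example), the page is just a string assembled by ordinary JavaScript before the HTTP layer sends it. A hedged sketch; the user object and field names are made up:

```javascript
// Ordinary JavaScript, no browser involved: build the HTML on the
// server, then hand the finished string over as the response body.
function renderGreeting(user) {
  var items = user.roles.map(function (r) { return '<li>' + r + '</li>'; });
  return '<html><body>' +
         '<h1>Hello, ' + user.name + '</h1>' +
         '<ul>' + items.join('') + '</ul>' +
         '</body></html>';
}

// In a Node.js HTTP server this would look roughly like
// (lookupUser is hypothetical):
//   http.createServer(function (req, res) {
//     res.writeHead(200, {'Content-Type': 'text/html'});
//     res.end(renderGreeting(lookupUser(req)));
//   }).listen(8080);
```

The browser only ever sees the finished markup; no client-side execution is involved.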
Browserless JavaScript
[ "", "javascript", "" ]
What do the following mean in SQL syntax: 1. (+) after a condition, eg: "WHERE table1.col1 = table2.col2 (+)" 2. What does /\* \*/ after select signify? I vaguely remember it being a suggestion to the optimizer, but where can I find more reference on this? eg: select /\* \*/ ...
1. The `(+)` is Oracle's outer-join syntax in SQL. 2. The `/* */` after the SELECT is a comment. In Oracle, if the comment starts with `/*+` (a plus right after the comment opener), it is a hint, for instance `/*+ORDERED*/`.
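The legacy `(+)` notation and a hint comment, side by side with the ANSI equivalent (table and column names here are made up):

```sql
-- Legacy Oracle outer-join notation: the (+) side may be NULL-extended
SELECT e.name, d.dept_name
FROM   employees e, departments d
WHERE  e.dept_id = d.dept_id (+);

-- Equivalent ANSI syntax
SELECT e.name, d.dept_name
FROM   employees e
LEFT JOIN departments d ON e.dept_id = d.dept_id;

-- A hint comment: /*+ ... */ placed immediately after the SELECT keyword
SELECT /*+ ORDERED */ e.name
FROM   employees e, departments d
WHERE  e.dept_id = d.dept_id;
```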
If we are talking Oracle then the `(+)` is an outer join and `/* some stuff */` is a comment. If the comment is of the form `/*+ hint */` then the comment is probably a hint. [google on oracle hints](http://www.google.co.uk/search?source=ig&hl=en&rlz=1G1GGLQ_ENUK243&q=oracle+hint&meta=lr%3D)
SQL syntax question
[ "", "sql", "oracle", "" ]
In my code I would like boost::shared\_ptr not to call delete but to call ptr->deleteMe() instead. Also I have a few C-style functions that return a ptr. Can I make it call lib\_freeXYZ(ptr); instead of trying to delete?
Or how about using the STL to provide the wrapper functor - Doug T.'s description but without writing the custom deleter yourself: ``` boost::shared_ptr<T> ptr( new T, std::mem_fun(&T::deleteMe) ); boost::shared_ptr<S> ptr( new S, std::ptr_fun(lib_freeXYZ) ); ``` (Note `std::mem_fun` rather than `std::mem_fun_ref`: the deleter is invoked with the stored pointer, not a reference.)
You can give the shared\_ptr template a custom deleter function which has the signature ``` void Deleter( T* ptr ); ``` for a boost::shared\_ptr. So for Deleter you would do ``` boost::shared_ptr<T> ptrToT( new T, Deleter ); ``` then in the body of Deleter: ``` void Deleter( T* ptr ) { ptr->deleteMe(); // And make sure YOU ACTUALLY DELETE (or do whatever else you need to // do to release the resource) delete ptr; } ``` For your specific case when you need something simple (like ptr->deleteMe) see Greg's solution, it's very nice.
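The same mechanics, shown runnable with std::shared_ptr (the deleter-taking constructor works identically to boost::shared_ptr; `T`, `deleteMe` and `lib_freeXYZ` are stand-ins for the question's names, with counters added so the deleters can be observed):

```cpp
#include <cassert>
#include <memory>

// Stand-in for the question's class with a deleteMe() member.
struct T {
    static int freed;                      // counts deleteMe calls
    void deleteMe() { freed++; delete this; }
};
int T::freed = 0;

// Stand-in for a C-style type plus its release function.
struct XYZ { };
static int xyz_freed = 0;
void lib_freeXYZ(XYZ* p) { xyz_freed++; delete p; }

// Plain function usable as a deleter: invoked with the stored pointer.
void call_deleteMe(T* t) { t->deleteMe(); }

void run_demo() {
    {
        std::shared_ptr<T> p(new T, call_deleteMe);
    }   // scope exit: call_deleteMe(p.get()) runs instead of plain delete

    {
        std::shared_ptr<XYZ> q(new XYZ, lib_freeXYZ);
    }   // scope exit: lib_freeXYZ(q.get()) runs
}
```

The deleter here ultimately frees the object itself, so no separate `delete` follows it; if your `deleteMe()` only releases resources, you still need the `delete` as the answer above stresses.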
make shared_ptr not use delete
[ "", "c++", "boost", "" ]
How can I work with Novell eDirectory services in J2SE? Will JNDI work with eDirectory? What are some resources I can use to learn about whatever library or libraries you suggest? I just want to play around with retrieving information via LDAP for right now, and if I get things working the way I want, I will probably need to be able to modify objects later on. Thanks!
JNDI should work with eDirectory.....
Try: <http://developer.novell.com/wiki/index.php/Jldap> and <http://developer.novell.com/wiki/index.php/Novell_LDAP_Extended_Library> Used it successfully with OpenLDAP and it should suffice for eDirectory as well.
How can I work with Novell eDirectory services in J2SE?
[ "", "ldap", "java", "edirectory", "" ]
Is there a parser/library which is able to read an HTML document into a DOM tree using Java? I'd like to use the standard `DOM/Xpath` API that Java provides. Most libraries seem have custom API's to solve this task. Furthermore the conversion HTML to XML-DOM seems unsupported by the most of the available parsers. Any ideas or experience with a good HTML DOM parser?
[JTidy](http://jtidy.sourceforge.net/), either by processing the stream to XHTML then using your favourite DOM implementation to re-parse, or using parseDOM if the limited DOM implementation it gives you is enough. Alternatively [Neko](http://nekohtml.sourceforge.net/).
Since HTML files are generally problematic, you'll need to first clean them up using a parser/scanner. I've used JTidy but never happily. NekoHTML works okay, but any of these tools are always just making a best guess of what is intended. You're effectively asking to let a program alter a document's markup until it conforms to a schema. That will likely cause structural (markup), style or content loss. It's unavoidable, and you won't really know what's missing unless you manually scan via a browser (and then you have to trust the browser too). It really depends on your purpose — if you have thousands of ugly documents with tons of extraneous (non-HTML) markup, then a manual process is probably unreasonable. If your goal is accuracy on a few important documents, then manually fixing them is a reasonable proposition. One approach is the manual process of repeatedly passing the source through a well-formed and/or validating parser, in an edit cycle using the error messages to eventually fix the broken markup. This does require some understanding of XML, but that's not a bad education to undertake. With Java 5 the necessary XML features — called the JAXP API — are now built into Java itself; you don't need any external libraries. You first obtain an instance of a DocumentBuilderFactory, set its features, create a DocumentBuilder (parser), then call its parse() method with an InputSource. InputSource has a number of possible constructors, with a StringReader used in the following example: ``` import javax.xml.parsers.*; // ... DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); dbf.setValidating(false); dbf.setNamespaceAware(true); dbf.setIgnoringComments(false); dbf.setIgnoringElementContentWhitespace(false); dbf.setExpandEntityReferences(false); DocumentBuilder db = dbf.newDocumentBuilder(); return db.parse(new InputSource(new StringReader(source))); ``` This returns a DOM Document. 
If you don't mind using external libraries there's also the JDOM and XOM APIs, and while these have some advantages over the SAX and DOM APIs in JAXP, they do require non-Java libraries to be added. The DOM can be somewhat cumbersome, but after so many years of using it I don't really mind any longer.
Reading HTML file to DOM tree using Java
[ "", "java", "html", "dom", "parsing", "" ]
Are there any standalone type conversion libraries? I have a data storage system that only understands bytes/strings, but I can tag metadata such as the type to be converted to. I could hack up some naive system of type converters, as every other application has done before me, or I could hopefully use a standalone library, except I can't find one. Odd for such a common activity. Just to clarify, I will have something like: ('123', 'integer') and I want to get out 123
Flatland does this well. <http://discorporate.us/projects/flatland/>
You've got two options, either use the [struct](http://docs.python.org/library/struct.html#module-struct) or [pickle](http://docs.python.org/library/pickle.html#module-pickle) modules. With struct you specify a format and it compacts your data to a byte array. This is useful for working with C structures or writing to networked apps that require a binary protocol. pickle can automatically serialise and deserialise complex Python structures to a string. There are some caveats so it's best to read the [documentation](http://docs.python.org/library/pickle.html#module-pickle). I think this is most likely the library you want. ``` >>> import pickle >>> v = pickle.dumps(123) >>> v 'I123\n.' >>> pickle.loads(v) 123 >>> v = pickle.dumps({"abc": 123}) >>> v "(dp0\nS'abc'\np1\nI123\ns." >>> pickle.loads(v) {'abc': 123} ```
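For the literal `('123', 'integer')` case in the question, a small tag-dispatch table is often all that's needed; a hand-rolled sketch (the tag names are illustrative, use whatever your storage layer emits):

```python
# Minimal tagged-value conversion: map the metadata tag to a converter.
_CONVERTERS = {
    'integer': int,
    'float': float,
    'string': str,
    'boolean': lambda s: s.lower() in ('1', 'true', 'yes'),
}

def from_tagged(value, tag):
    """Convert a stored string back to a Python value using its type tag."""
    try:
        return _CONVERTERS[tag](value)
    except KeyError:
        raise ValueError('unknown type tag: %r' % tag)

def to_tagged(obj):
    """Store everything as a string plus a tag describing how to read it back."""
    tag = {int: 'integer', float: 'float', bool: 'boolean', str: 'string'}[type(obj)]
    return str(obj), tag
```

This trades generality for transparency: unlike pickle, the stored form stays human-readable, but only the listed tags are supported.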
Is there a standalone Python type conversion library?
[ "", "python", "type-conversion", "" ]
We are migrating to a new server running Windows 2003 and IIS 6. When my PHP code runs, it has a warning on a particular line (which I'm expecting at the moment but will fix shortly). However, when it hits the warning, it immediately halts processing and returns a 500 error in the HTTP header. Normally, I would expect PHP to output the warning, but continue processing the script. Is there something in the configuration for IIS, FastCGI, or PHP that would be returning 500 errors when PHP hits a warning? **To clarify:** I don't want to suppress the warnings; I want them to display. I do not want the script to stop processing on warnings.
Figured out the issue. `log_errors` in php.ini was set to `On`, but `error_log` was unset. This was causing PHP to stop everything. After setting `display_errors` to `on`, the warnings now display so I can see where things are breaking in the output. This thread was helpful: <http://forums.iis.net/p/1146102/1856222.aspx#1856222>
I don't know about IIS or FastCGI, but afaik php has no such option. You can however set `error_reporting` (in your `php.ini`) to ``` E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR ``` to make warnings go away.
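For completeness, the php.ini settings involved in the fix described in the accepted answer (the log path is an example):

```ini
; Show warnings in the output instead of aborting the request
display_errors = On
error_reporting = E_ALL

; If log_errors is On, error_log must point somewhere writable;
; leaving it unset is what caused PHP under IIS/FastCGI to fail the request
log_errors = On
error_log = "C:\inetpub\logs\php-errors.log"
```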
PHP warnings cause script to halt on IIS running FastCGI
[ "", "php", "iis-6", "windows-server-2003", "fastcgi", "" ]
I have a class that has a vector of another class's objects as a member. In many functions of this class I have to do the same operation on all the objects in the vector: ``` class Small { public: void foo(); void bar(int x); // and many more functions }; class Big { public: void foo() { for (size_t i = 0; i < VectorOfSmalls.size(); i++) VectorOfSmalls[i]->foo(); } void bar(int x) { for (size_t i = 0; i < VectorOfSmalls.size(); i++) VectorOfSmalls[i]->bar(x); } // and many more functions private: vector<Small*> VectorOfSmalls; }; ``` I want to simplify the code, and find a way not to duplicate going over the vector in every function. I've considered creating a function that receives a pointer to a function, and calls the pointed-to function on every member of the vector. But I am not sure that using pointers to functions in C++ is a good idea. I have also been thinking about functors and [functionoids](http://www.parashift.com/c++-faq-lite/pointers-to-members.html#faq-33.10), but it will force me to create a class for each function and it sounds like overkill. Another possible solution is creating a function that receives a string, and calls the command according to the string: ``` void Big::call_command(const string & command) { for (size_t i = 0; i < VectorOfSmalls.size(); i++) { if (command == "foo") VectorOfSmalls[i]->foo(); else if (command == "bar") VectorOfSmalls[i]->bar(); } } void Big::foo() { call_command("foo"); } ``` But it might be slow (unneeded creation of a string instead of just a function call), and it also creates a problem if functions have different signatures. So what would you recommend? Should I leave everything the same as it is now? EDIT: I can use only STL and not boost (old compilers).
Well you can rewrite the for loops to use iterators and more of the STL like this: ``` void foo() { std::for_each(VectorOfSmalls.begin(), VectorOfSmalls.end(), std::mem_fun(&Small::foo)); } void bar() { std::for_each(VectorOfSmalls.begin(), VectorOfSmalls.end(), std::mem_fun(&Small::bar)); } ``` beyond that, you could use some macros to avoid retyping that a lot, but I'm not a huge fan of that. Personally, I like the multiple functions over the single one which takes a command string. As it gives you more versatility over how the decision is made. If you do go with a single function taking a param to decide which to do, I would use an enum and a switch like this, it would be more efficient than strings and a cascading if. Also, in your example you have the if to decide which to do inside the loop. It is more efficient to check outside the loop and have redundant copies of the loop since "which command" only needs to be decided once per call. (NOTE: you can make the command a template parameter if it is known at compile time, which it sounds like it is). ``` class Big { public: enum Command { DO_FOO, DO_BAR }; void doit(Command cmd) { switch(cmd) { case DO_FOO: std::for_each(VectorOfSmalls.begin(), VectorOfSmalls.end(), std::mem_fun(&Small::foo)); break; case DO_BAR: std::for_each(VectorOfSmalls.begin(), VectorOfSmalls.end(), std::mem_fun(&Small::bar)); break; } }; ``` Also, as you mentioned, it is fairly trivial to replace the &Small::whatever, what a member function pointer and just pass that as a parameter. You can even make it a template too. ``` class Big { public: template<void (Small::*fn)()> void doit() { std::for_each(VectorOfSmalls.begin(), VectorOfSmalls.end(), std::mem_fun(fn)); } }; ``` Then you can do: ``` Big b; b.doit<&Small::foo>(); b.doit<&Small::bar>(); ``` The nice thing about both this and the regular parameter methods is that Big doesn't need to be altered if you change small to have more routines! I think this is the preferred method. 
If you want to be able to handle a single parameter, you'll need to add a bind2nd too, here's a complete example: ``` #include <algorithm> #include <functional> #include <iostream> #include <vector> class Small { public: void foo() { std::cout << "foo" << std::endl; } void bar(int x) { std::cout << "bar" << std::endl; } }; class Big { public: template<void (Small::*fn)()> void doit() { std::for_each(VectorOfSmalls.begin(), VectorOfSmalls.end(), std::mem_fun(fn)); } template<class T, void (Small::*fn)(T)> void doit(T x) { std::for_each(VectorOfSmalls.begin(), VectorOfSmalls.end(), std::bind2nd(std::mem_fun(fn), x)); } public: std::vector<Small *> VectorOfSmalls; }; int main() { Big b; b.VectorOfSmalls.push_back(new Small); b.VectorOfSmalls.push_back(new Small); b.doit<&Small::foo>(); b.doit<int, &Small::bar>(5); } ```
If you're using the std library, you should take a look at [for\_each](http://www.sgi.com/tech/stl/for_each.html). You mention that using function pointers in C++ might not be a good idea, but -- allowing your worry is speed -- you have to see if this is even a performance bottleneck area you're in, before worrying.
Iterating over vector and calling functions
[ "", "c++", "stl", "foreach", "iteration", "" ]
How can I use BOOST\_FOREACH efficiently (number-of-characters/readability-wise) with a boost::ptr\_map? Kristo demonstrated in his [answer](https://stackoverflow.com/questions/461507/how-to-use-boostforeach-with-a-boostptrmap#461908) that it is possible to use BOOST\_FOREACH with a ptr\_map, but it does not really save me any typing (or make my code more readable) compared to iterating over the ptr\_map with an iterator: ``` typedef boost::ptr_container_detail::ref_pair<int, int* const> IntPair; BOOST_FOREACH(IntPair p, mymap) { int i = p.first; } // vs. boost::ptr_map<int, T>::iterator it; for (it = mymap.begin(); it != mymap.end(); ++it) { // doSomething() } ``` The following code is somewhere along the lines of what I wish for. It follows the standard way to use BOOST\_FOREACH with a std::map. Unfortunately this does not compile: ``` boost::ptr_map<int, T> mymap; // insert something into mymap // ... typedef pair<int, T> IntTpair; BOOST_FOREACH (IntTpair &p, mymap) { int i = p.first; } ```
As STL style containers, the pointer containers have a `value_type` typedef that you can use: ``` #include <boost/ptr_container/ptr_map.hpp> #include <boost/foreach.hpp> int main() { typedef boost::ptr_map<int, int> int_map; int_map mymap; BOOST_FOREACH(int_map::value_type p, mymap) { } } ``` I find that using a typedef for the container makes the code a lot easier to write. Also, you should try to avoid using the contents of `detail` namespaces in boost, it's a boost convention that they contain implementation details.
I just ran into the same problem today. Unfortunately, Daniel's suggestion will not work with a constant reference to a map. In my case, the ptr\_map was a member of a class, and I wanted to loop through it in a const member function. Borrowing Daniel's example, this is what I had to do in my case: ``` #include "boost/ptr_container/ptr_map.hpp" #include "boost/foreach.hpp" int main() { typedef boost::ptr_map<int, int> int_map; int_map mymap; const int_map& mymap_const_ref(mymap); BOOST_FOREACH(int_map::const_iterator::value_type p, mymap_const_ref) { } } ``` It seems that `int_map::const_iterator::value_type` is equivalent to `boost::ptr_container_detail::ref_pair<int, const int* const>`.
How to use BOOST_FOREACH with a boost::ptr_map?
[ "", "c++", "boost", "" ]
Following the discussions here on SO I already read several times the remark that mutable structs are “evil” (like in the answer to this [question](https://stackoverflow.com/questions/292676/is-there-a-workaround-for-overloading-the-assignment-operator-in-c)). What's the actual problem with mutability and structs in C#?
Structs are value types which means they are copied when they are passed around. So if you change a copy you are changing only that copy, not the original and not any other copies which might be around. If your struct is immutable then all automatic copies resulting from being passed by value will be the same. If you want to change it you have to consciously do it by creating a new instance of the struct with the modified data. (not a copy)
Where to start ;-p [Eric Lippert's blog](https://learn.microsoft.com/en-us/archive/blogs/ericlippert/mutating-readonly-structs) is always good for a quote: > This is yet another reason why mutable > value types are evil. Try to always > make value types immutable. First, you tend to lose changes quite easily... for example, getting things out of a list: ``` Foo foo = list[0]; foo.Name = "abc"; ``` what did that change? Nothing useful... The same with properties: ``` myObj.SomeProperty.Size = 22; // the compiler spots this one ``` forcing you to do: ``` Bar bar = myObj.SomeProperty; bar.Size = 22; myObj.SomeProperty = bar; ``` less critically, there is a size issue; mutable objects **tend** to have multiple properties; yet if you have a struct with two `int`s, a `string`, a `DateTime` and a `bool`, you can very quickly burn through a lot of memory. With a class, multiple callers can share a reference to the same instance (references are small).
Why are mutable structs “evil”?
[ "", "c#", "struct", "immutability", "mutable", "" ]
I want to create a static class in PHP and have it behave like it does in C#, so

1. Constructor is automatically called on the first call to the class
2. No instantiation required

Something of this sort...

```
static class Hello {
    private static $greeting = 'Hello';

    private __construct() {
        $greeting .= ' There!';
    }

    public static greet() {
        echo $greeting;
    }
}

Hello::greet(); // Hello There!
```
You can have static classes in PHP but they don't call the constructor automatically (if you try and call `self::__construct()` you'll get an error).

Therefore you'd have to create an `initialize()` function and call it in each method:

```
<?php

class Hello
{
    private static $greeting = 'Hello';
    private static $initialized = false;

    private static function initialize()
    {
        if (self::$initialized)
            return;

        self::$greeting .= ' There!';
        self::$initialized = true;
    }

    public static function greet()
    {
        self::initialize();
        echo self::$greeting;
    }
}

Hello::greet(); // Hello There!
?>
```
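The guard-flag pattern above — run initialization exactly once, on first use — is language-independent; here is the same idea sketched in JavaScript for comparison (names are arbitrary, purely illustrative):

```javascript
// Same idea as the PHP guard flag: run initialization once, on first use.
const Hello = (() => {
  let greeting = "Hello";
  let initialized = false;

  function initialize() {
    if (initialized) return; // already done — bail out cheaply
    greeting += " There!";
    initialized = true;
  }

  return {
    greet() {
      initialize(); // every public entry point triggers the lazy init
      return greeting;
    },
  };
})();

console.log(Hello.greet()); // "Hello There!"
console.log(Hello.greet()); // still "Hello There!" — initialize ran only once
```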
In addition to Greg's answer, I would recommend to set the constructor private so that it is impossible to instantiate the class. So in my humble opinion this is a more complete example based on Greg's one: ``` <?php class Hello { /** * Construct won't be called inside this class and is uncallable from * the outside. This prevents instantiating this class. * This is by purpose, because we want a static class. */ private function __construct() {} private static $greeting = 'Hello'; private static $initialized = false; private static function initialize() { if (self::$initialized) return; self::$greeting .= ' There!'; self::$initialized = true; } public static function greet() { self::initialize(); echo self::$greeting; } } Hello::greet(); // Hello There! ?> ```
Is it possible to create static classes in PHP (like in C#)?
[ "", "php", "design-patterns", "oop", "static", "" ]
Hi there, I'm using LINQ to check whether two user fields correspond to a single user record in a SQL table, for example:

UID : userID
PIN : passID

so both fields have to come from a single user. I was trying this:

```
public bool AutentificacionUsuario(string userID, string passID)
{
    USER _userID = _db.USER.FirstOrDefault(uid => uid.siglas == userID);
    USER _passID = _db.USER.FirstOrDefault(pwd => pwd.codigousuario == passID);
```

but both return a USER table instance from the DataContext (two instead of one). I want to match them in a single lambda expression, as in "this userID matches this passID".

Thanks!
```
var usr = db.Users.Where(uid => uid.dsiglas == userID && uid.codigousuario == passID)
                  .FirstOrDefault();

if (usr != null)
    // Authenticate user here
```

or

```
var usr = (from u in db.Users
           where u.dsiglas == userID && u.codigousuario == passID
           select u).FirstOrDefault();

if (usr != null)
    // Authenticate user here
```
You need to use a Where clause to select the user. I'm unsure if LINQ-to-SQL is exactly what I've been using, but I would have done it like this:

```
USER u = (from u in _db.USER
          where u.uid == userID && u.pwd == password
          select u).FirstOrDefault();
```

I hope that code is correct, I don't have access to intellisense on a LINQ project right now to check.
Using LINQ2Sql for Validate Users
[ "", ".net", "sql", "linq", "linq-to-sql", "lambda", "" ]
Several colleagues and I are faced with an architectural decision that has serious performance implications: our product includes a UI-driven schema builder that lets non-programmers build their own data types for a web app. Currently, it builds properly normalized schemas behind the scenes and includes some complex logic to alter the schema and migrate legacy data automagically if the admins make changes to the data types.

The normalized schemas have hit performance bottlenecks in the past, and a major refactoring has been scheduled. One of the groups of developers wants to store every property of the data types in a separate table, so that changes to the data types will never require schema altering. (A single property could be turned into a 1:n relationship, for example, just by changing application logic.)

Because early benchmarking indicates this will exact a huge performance penalty, they have built a caching layer in application code that maintains denormalized versions of each data type. While it does speed up the queries, I'm skeptical about the complexity that the application layer will be taking on, but I'm hoping for feedback - am I being pessimistic? Have others deployed this type of solution successfully? Should I stick to my guns, or is moving the complexity from "Schema modification tools" to "Schema mirroring tools" a good thing?
> The normalized schemas have hit
> performance bottlenecks in the past,
> and a major refactoring has been
> scheduled. One of the groups of
> developers wants to store every
> property of the data types in a
> separate table, so that changes to the
> data types will never require schema
> altering. (A single property could be
> turned into a 1:n relationship, for
> example, just by changing application
> logic.)

This sounds like a bad idea to me.

1. It's going to mess up your database performance. If you store these things on one row they will be physically located together on the disk and treated as one thing for the purposes of locking, etc.
2. Queries that you write are going to require a mass of extra joins and will be very painful. You will end up writing views to turn it back into what it should have been in the first place.
3. The scenario described might never happen, so you slowed down and complicated your application for potentially no benefit.
4. If it does happen and you are going to have to re-code and test a bunch of application code, what's the little extra effort in making a database change at that time? You can make your child table, copy the data down into it with an update, and drop the column from the parent table.
5. If you are successful, in the future a different application may attach to your database. They will not be able to tell what the real schema is, because that information is held by your application. Model data in the database.
6. The cache on your application server can get tricky if (a) there is too much to fit in memory, (b) you scale to multiple application servers, or (c) you have a different application that connects to your database. You're working around a performance problem that is of your own making.
7. You are not going to be able to create an index on multiple columns if they are each living in a child table.
What you describe doesn't resemble what I call normalization. It's more like hyperabstraction - trying to find some abstraction level from which everything else can be derived. Like "Object" in javascript. If you take it to its logical conclusion, you could get by with two tables; one table for every Object, with a column for an ObjectTypeCode and ObjectId; and another table with Associations, having two ObjectId columns, a third for uniqueness, and a fourth for Value.

I suggest you need to revisit your domain model. The one you describe sounds scary (but unfortunately, eerily familiar) to me.

I had a guy who worked for me who invented a table called "Objects". There were two child tables, ObjectAttributes and ObjectProperties. It was difficult to get a clear explanation of the difference between the two. The table (and he, fortunately) didn't last long.
The dangers of hyper-normalization?
[ "", "sql", "database-design", "architecture", "" ]
I have the following line of code in a link on my webpage: ``` <a href="javascript:$('#comment_form').toggle('normal')" title="Comment on this post"> ``` This produces a link that should pop open a hidden form. It works on Safari, but in Firefox, I just get an almost-empty page with nothing but the following text: ``` [object Object] ``` I'm sure this has something to do with the value returned by the jQuery function, but I'm not sure how to fix the call to the JavaScript function so it works in Firefox, too.
For the love of...

```
<script type='text/javascript'>
    jQuery(function($){ // document ready; $ is jQuery in this scope
        $("a#comment").click(function(){ // bind a click event to the A link with id="comment"
            $('#comment_form').toggleClass('normal'); // toggle the class on the comment form
            return false; // stop the href from being followed
        });
    });
</script>

....

<a id="comment" href="/you_need_javascript_sorry.html" title="Comment on this post">
    [Comment]
</a>
```

Please, don't embed JavaScript like you just did in the HTML. If you embed JavaScript in HTML like that, you:

1. Make messy code.
2. Have weird problems like you just did, with hacks to get around it.
3. Send users trying to middle-click links nowhere.
4. Will end in failure maintaining the executable part of your code interspersed in page links.
5. Don't gracefully degrade when users don't have JavaScript.
6. Become problematic when you start needing to do stuff like store quotes more than 2 layers deep.
Try: ``` <a href="javascript:void($('#comment_form').toggle('normal'))" title="Comment on this post"> ``` Containing the script inside a `void()` will suppress the browser from displaying the result of the execution. --- **Update** I directly answered the original question, with the solution that would take the least amount of effort. As it's mentioned in some of the other answers here, I personally would keep my markup and JavaScript separate and dynamically add an `onclick` handler instead of embedding the script inside the `href` attribute.
Why does the call to this jQuery function fail in Firefox?
[ "", "javascript", "jquery", "firefox", "" ]
I'm trying to strip all punctuation out of a string using a simple regular expression and the PHP preg\_replace function, although I get the following error:

> Compilation failed: POSIX named classes are supported only within a class at offset 0

I guess this means I can't use POSIX named classes outside of a class at offset 0. My question is, what does it mean when it says "within a class at offset 0"?

```
$string = "I like: perl";

if (eregi('[[:punct:]]', $string))
    $new = preg_replace('[[:punct:]]', ' ', $string);

echo $new;
```
The [`preg_*` functions](http://docs.php.net/manual/en/ref.pcre.php) expect [Perl compatible regular expressions](http://en.wikipedia.org/wiki/Perl_Compatible_Regular_Expressions) with delimiters. So try this: ``` preg_replace('/[[:punct:]]/', ' ', $string) ```
**NOTE: The `g` modifier is not needed with PHP's `PCRE` implementation!**

In addition to [Gumbo's answer](https://stackoverflow.com/questions/475159/php-regex-what-is-class-at-offset-0#475170), use the `g` modifier to replace *all* occurrences of punctuation:

```
preg_replace('/[[:punct:]]/g', ' ', $string)
//                          ^
```

From [Johnathan Lonowski](https://stackoverflow.com/users/15031/jonathan-lonowski) (see comments):

> [The g modifier] means "Global" -- i.e., find all existing matches. Without it, regex functions will stop searching after the first match.
PHP regex: what is "class at offset 0"?
[ "", "php", "regex", "" ]
I have to perform 3 operations in a Windows app as a transaction. These are not SQL operations, otherwise I'd have used TransactionScope. The first operation is to log some value in a db, the 2nd is to move an email to a PST, and the 3rd is to move the email to another mailbox. If any of these operations fails, I want the other operations to roll back to their previous state. Is there any way this transaction could be achieved?
You could roll your own ResourceManager and use [System.Transactions](http://msdn.microsoft.com/en-us/library/system.transactions.aspx) to help you handle the transactions. <http://www.codeguru.com/csharp/.net/net_data/sortinganditerating/article.php/c10993__1/> Depending on the complexity though, and how often you'll need it, it might be an overly complex solution.
Not unless your e-mail backend supports DTC, in which case you can use `TransactionScope`; note that `TransactionScope` is not limited just to SQL providers; some middleware tools support it, and there is even a `TransactionScope`-based provider for .NET objects (not **any** object - only those written using the library's base-class / property-bag / etc).

As it stands, you'll probably have to roll back manually.
Transaction for .net operations
[ "", "c#", "transactions", "" ]
I have a problem which I think is related to forward declarations, but perhaps not. Here is the relevant code:

A.h

```
#ifndef A_H_
#define A_H_

#include "B.h"

class A
{
private:
    B b;
public:
    A() : b(*this) {}
    void bar() {}
};

#endif /*A_H_*/
```

B.h

```
#ifndef B_H_
#define B_H_

#include "A.h"

class A;

class B
{
private:
    A& a;
public:
    B(A& a) : a(a) {}
    void foo() { /*a.bar();*/ } //doesn't compile
};

#endif /*B_H_*/
```

main.cpp

```
#include "A.h"

int main()
{
    A a;
    return 0;
}
```

The problem seems to be with the invocation of A::bar(). The program successfully compiles until I attempt to call this method, at which point I get two errors:

> error: invalid use of incomplete type ‘struct A’
>
> error: forward declaration of ‘struct A’

I presume this is because A::bar() has yet to be defined or declared since both headers reference each other. However, I forward declared class A and am at a loss as to what else I need to do. I am new to C++, so please forgive me. I could not find the answer to this question anywhere else online. As always, thanks in advance!
You've got a circular reference, so you need to separate B.h. Try something like:

B.h:

```
#ifndef B_H_
#define B_H_

// don't include A.h here!

class A;

class B
{
private:
    A& a;
public:
    B(A& a) : a(a) {}
    void foo();
};

#endif /*B_H_*/
```

B.cpp:

```
#include "B.h"
#include "A.h"

void B::foo() { a.bar(); } // now you're ok
```

Edit: explanation for why you need to split it into two files:

The class `B` contains a reference to `A`, which can be a so-called *incomplete* type. You can't call any functions on it because the compiler doesn't yet know what the heck `A` is - it just knows that it's a class of some sort. Once you include A.h (in the .cpp file), then `A` is a complete type, and you can do whatever you like with it.

You can't keep the whole thing in one header file because you'll get a circular reference. You're preventing an infinite loop with your include guards, but you're getting something you don't want. Look at what the compiler ends up with when you compile main.cpp, as you had it before:

```
// #include "A.h" ==> #define A_H_
// #include "B.h" ==> #define B_H_
// #include "A.h" ==> nothing happens! (since A_H_ is already defined)

class A;

class B
{
private:
    A& a;
public:
    B(A& a) : a(a) {}
    void foo() { a.bar(); } // <-- what the heck is A here?
                            // it's not defined until below
};

class A
{
private:
    B b;
public:
    A() : b(*this) {}
    void bar() {}
};

int main()
{
    A a;
    return 0;
}
```
The line `#include<file.h>` will just replace the line with the content of `file.h`. So when the compiler tries to compile your `main.cpp`, it will pull everything together, which looks like the following. At the place you want to use A::bar(), it has not been defined.

```
// result from #include "A.h"
#ifndef A_H_
#define A_H_

// Now, #include "B.h" in A.h will get you the following
#ifndef B_H_
#define B_H_

// here you include "A.h" again, but now it has no effect
// since A_H_ is already defined

class A;

class B
{
private:
    A& a;
public:
    B(A& a) : a(a) {}
    // Oops, you want to use a.bar() but it is not defined yet
    void foo() { /*a.bar();*/ }
};

#endif /*B_H_*/

class A
{
private:
    B b;
public:
    A() : b(*this) {}
    void bar() {}
};

#endif /*A_H_*/

// now this is your main function
int main()
{
    A a;
    return 0;
}
```
C++ Forward Declaration Problem when calling Method
[ "", "c++", "compiler-construction", "declaration", "" ]
As you probably know, Derek Sivers is the guy who created CD Baby and eventually sold it for some big bucks. He wrote it in PHP originally and then down the road set about rewriting it in Rails. His troubles are the stuff of legend: [**7 reasons I switched back to PHP after 2 years on Rails**](http://www.oreillynet.com/ruby/blog/2007/09/7_reasons_i_switched_back_to_p_1.html) That article came out in 2007 but being newly infatuated with Rails, I'm wondering whether anything has changed to make Rails more of a wise bet in the meantime, or should I stick with my good old ugly PHP girlfriend? **Does anyone agree that Rails does not offer any significant advantages over PHP?**
Re-writing an existing site is almost always a bad idea. It's hard to put your heart into retreading an old wheel. I worked on a rewrite of a site from CGIs to a Java app server and saw several programmers quit because of it. For one, they preferred their old way of doing things and did not want to learn Java. Secondly, I believe they did not have the enthusiasm to re-write a ton of legacy code that they had been maintaining reluctantly to begin with. Far better to try Rails out on a new task and see how it fares. At least then you are putting it on an even footing with PHP in the psychological motivation sweepstakes.
Austin Ziegler wrote an interesting response to that article: [On Derek Siver’s Return to PHP…](http://www.halostatue.ca/2007/09/23/on-derek-sivers-return-to-php%E2%80%A6/)

The gist of it is:

> 1. Derek chose the technology for the wrong reasons. He chose it partially based on the hype of Rails, but he envisioned it as a silver bullet that would magically make his application better just because it’s in Rails.
> 2. Rails didn’t fit Derek’s application model for CD Baby, and Derek’s application model is more important than the technology to be used, since it represents a business he understands well.
> 3. He ignored his existing experts for the new technology. Neither he nor his employees knew Ruby aside, perhaps, from playing around with it. This wasn’t a technology that was deemed to be appropriate from experience; this was a technology deemed appropriate by management (sorry Derek, you might still be getting your hands dirty with code, but you’re still management).
> 4. Derek approached the project as a whole-environment ground-up rewrite with a One Big Day deployment, without considering ways to phase it in over time. It’s almost always possible to find interface points where you can replace one broken piece at a time. Ultimately, this is what the Rails folks would tell you anyway: replace one area at a time, each with a different codebase. Interface them as REST-ful services. Don’t make them depend on a single database schema.
Should I heed Derek Sivers' warnings about migrating from PHP to Rails?
[ "", "php", "ruby-on-rails", "comparison", "" ]
If I understand this correctly: current CPU companies like AMD and Intel have their own API codes (the assembly language) as what they see as the 2G language on top of the machine code (1G language).

Would it be possible or desirable (for performance or otherwise) to have a CPU that performed IL handling at its core instead of the current API calls?
A similar technology does exist for Java - ARM do a range of CPUs that can do this, they call it their "Jazelle" technology. However, the operations represented by .net IL opcodes are only well-defined in combination with the type information held on the stack, not on their own. This is a major difference from Java bytecode, and would make it much more difficult to create sensible hardware to execute IL. Moreover, IL is intended for compilation to a final target. Most back ends that spit out IL do very little optimisation, aiming instead to preserve semantic content for verification and optimisation in the final compilation step. Even if the hardware problems could be overcome, the result will almost certainly still be slower than a decent optimising JIT. So, to sum up: while it is not impossible, it would be disproportionately hard compared to other architectures, and would achieve little.
You seem a bit confused about how CPUs work. Assembly is not a separate language from machine code. It is simply a different (textual) representation of it.

Assembly code is simply a sequential listing of instructions to be executed. And machine code is exactly the same thing. Every instruction supported by the CPU has a certain bit pattern that causes it to be executed, and it also has a textual name you can use in assembly code. If I write `add $10, $9, $8` and run it through an assembler, I get the machine code for the add instruction, taking the values in registers 9 and 8, adding them and storing the result in register 10. There is a 1 to 1 mapping between assembler and machine code.

There also are no "API calls". The CPU simply reads from address X, and matches the subsequent bits against all the instructions it understands. Once it finds an instruction that matches this bit pattern, it executes the instruction, and moves on to read the next one.

What you're asking is in a sense impossible or a contradiction. IL stands for Intermediate Language, that is, a kind of pseudocode that is emitted by the compiler, but has not yet been translated into machine code. But if the CPU could execute that directly, then it would no longer be intermediate, it would *be* machine code.

So the question becomes "is your IL code a better, more efficient representation of a program, than the machine code the CPU supports now?"

And the answer is most likely no. MSIL (I assume that's what you mean by IL, which is a much more general term) is designed to be portable, simple and consistent. Every .NET language compiles to MSIL, and every MSIL program must be able to be translated into machine code for any CPU anywhere. That means MSIL must be general and abstract and not make assumptions about the CPU. For this reason, as far as I know, it is a purely stack-based architecture. Instead of keeping data in registers, each instruction processes the data on the top of the stack.
That's a nice clean and generic system, but it's not very efficient, and doesn't translate well to the rigid structure of a CPU. (In your wonderful little high-level world, you can pretend that the stack can grow freely. For the CPU to get fast access to it, it must be stored in some small, fast on-chip memory with finite size. So what happens if your program pushes too much data on the stack?)

Yes, you *could* make a CPU to execute MSIL directly, but what would you gain? You'd no longer need to JIT code before execution, so the first time you start a program, it would launch a bit faster. Apart from that, though? Once your MSIL program has been JIT'ed, it *has* been translated to machine code and runs as efficiently as if it had been written in machine code originally. MSIL bytecode no longer exists, just a series of instructions understood by the CPU.

In fact, you'd be back where you were before .NET. Non-managed languages are compiled straight to machine code, just like this would be in your suggestion. The only difference is that non-managed code targets machine code that is designed by CPU designers to be suitable for execution on a CPU, while in your case, it'd target machine code that's designed by software designers to be easy to translate to and from.
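The stack-based execution model described above can be sketched with a toy interpreter. This is a hypothetical illustration in JavaScript, not real MSIL; the `push`/`add` opcodes merely mimic how IL instructions operate on the top of an evaluation stack:

```javascript
// Toy stack machine: each instruction works on the top of a value stack,
// the way MSIL's evaluation stack does (an illustration, not real IL).
function run(program) {
  const stack = [];
  for (const [op, arg] of program) {
    if (op === "push") {       // push a constant (roughly like ldc.i4)
      stack.push(arg);
    } else if (op === "add") { // pop two values, push their sum
      const b = stack.pop();
      const a = stack.pop();
      stack.push(a + b);
    } else {
      throw new Error("unknown opcode: " + op);
    }
  }
  return stack.pop();          // the result is whatever is left on top
}

// "2 + 3" as stack code: push 2, push 3, add
console.log(run([["push", 2], ["push", 3], ["add"]])); // 5
```

Note there are no registers in sight — which is exactly why this model is easy to retarget in software but awkward to wire directly into silicon.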
Would there be any point in designing a CPU that could handle IL directly?
[ "", "c#", "performance", "compiler-construction", "cpu", "cpu-architecture", "" ]
I'm iterating through a HashMap (see [my earlier question](https://stackoverflow.com/questions/448122/how-do-i-use-a-foreach-loop-in-java-to-loop-through-the-values-in-a-hashmap) for more detail) and building a string consisting of the data contained in the Map. For each item, I will have a new line, but for the very last item, I don't want the new line. How can I achieve this? I was thinking I could do some kind of check to see if the entry is the last one or not, but I'm not sure how to actually do that. Thanks!
Change your thought process from "append a line break all but the last time" to "prepend a line break all but the first time":

```
boolean first = true;
StringBuilder builder = new StringBuilder();

for (Map.Entry<MyClass.Key, String> entry : data.entrySet()) {
    if (first) {
        first = false;
    } else {
        builder.append("\n"); // Or whatever break you want
    }
    builder.append(entry.getKey())
           .append(": ")
           .append(entry.getValue());
}
```
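The first/last bookkeeping disappears entirely if you collect the pieces and join them with the separator afterwards; a JavaScript sketch of the same idea (the `data` object is a stand-in for the map):

```javascript
// Collect the pieces first, then let join() put "\n" between items —
// no first/last bookkeeping needed.
const data = { a: 1, b: 2, c: 3 }; // stand-in for the HashMap

const lines = [];
for (const [key, value] of Object.entries(data)) {
  lines.push(key + ": " + value);
}
const result = lines.join("\n"); // separator goes between items only

console.log(result);
```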
one method (with apologies to Jon Skeet for borrowing part of his Java code):

```
StringBuilder result = new StringBuilder();
String newline = "";

for (Map.Entry<MyClass.Key, String> entry : data.entrySet()) {
    result.append(newline)
          .append(entry.getKey())
          .append(": ")
          .append(entry.getValue());
    newline = "\n";
}
```
How do I append a newline character for all lines except the last one?
[ "", "java", "foreach", "newline", "" ]
What is the scope of variables in javascript? Do they have the same scope inside as opposed to outside a function? Or does it even matter? Also, where are the variables stored if they are defined globally?
## TLDR JavaScript has lexical (also called static) scoping and closures. This means you can tell the scope of an identifier by looking at the source code. The four scopes are: 1. Global - visible by everything 2. Function - visible within a function (and its sub-functions and blocks) 3. Block - visible within a block (and its sub-blocks) 4. Module - visible within a module Outside of the special cases of global and module scope, variables are declared using `var` (function scope), `let` (block scope), and `const` (block scope). Most other forms of identifier declaration have block scope in strict mode. ## Overview Scope is the region of the codebase over which an identifier is valid. A lexical environment is a mapping between identifier names and the values associated with them. Scope is formed of a linked nesting of lexical environments, with each level in the nesting corresponding to a lexical environment of an ancestor execution context. These linked lexical environments form a scope "chain". Identifier resolution is the process of searching along this chain for a matching identifier. Identifier resolution only occurs in one direction: outwards. In this way, outer lexical environments cannot "see" into inner lexical environments. There are three pertinent factors in deciding the [scope](https://en.wikipedia.org/wiki/Scope_(computer_science)) of an [identifier](https://www.ecma-international.org/ecma-262/10.0/index.html#sec-names-and-keywords) in JavaScript: 1. How an identifier was declared 2. Where an identifier was declared 3. Whether you are in [strict mode](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Strict_mode) or [non-strict mode](https://developer.mozilla.org/en-US/docs/Glossary/Sloppy_mode) Some of the ways identifiers can be declared: 1. `var`, `let` and `const` 2. Function parameters 3. Catch block parameter 4. Function declarations 5. Named function expressions 6. 
Implicitly defined properties on the global object (i.e., missing out `var` in non-strict mode) 7. `import` statements 8. `eval` Some of the locations identifiers can be declared: 1. Global context 2. Function body 3. Ordinary block 4. The top of a control structure (e.g., loop, if, while, etc.) 5. Control structure body 6. Modules ## Declaration Styles ### var Identifiers declared using `var` **have function scope**, apart from when they are declared directly in the global context, in which case they are added as properties on the global object and have global scope. There are separate rules for their use in `eval` functions. ### let and const Identifiers declared using `let` and `const` **have block scope**, apart from when they are declared directly in the global context, in which case they have global scope. Note: `let`, `const` and `var` [are all hoisted](https://stackoverflow.com/a/31222689/38522). This means that their logical position of definition is the top of their enclosing scope (block or function). However, variables declared using `let` and `const` cannot be read or assigned to until control has passed the point of declaration in the source code. The interim period is known as the temporal dead zone. ``` function f() { function g() { console.log(x) } let x = 1 g() } f() // 1 because x is hoisted even though declared with `let`! ``` ### Function parameter names Function parameter names are scoped to the function body. Note that there is a slight complexity to this. Functions declared as default arguments close over the [parameter list](https://stackoverflow.com/questions/61208843/where-are-arguments-positioned-in-the-lexical-environment/), and not the body of the function. ### Function declarations Function declarations have block scope in strict mode and function scope in non-strict mode. Note: non-strict mode is a complicated set of emergent rules based on the quirky historical implementations of different browsers. 
### Named function expressions Named function expressions are scoped to themselves (e.g., for the purpose of recursion). ### Implicitly defined properties on the global object In non-strict mode, implicitly defined properties on the global object have global scope, because the global object sits at the top of the scope chain. In strict mode, these are not permitted. ### eval In `eval` strings, variables declared using `var` will be placed in the current scope, or, if `eval` is used indirectly, as properties on the global object. ## Examples The following will throw a ReferenceError because the names`x`, `y`, and `z` have no meaning outside of the function `f`. ``` function f() { var x = 1 let y = 1 const z = 1 } console.log(typeof x) // undefined (because var has function scope!) console.log(typeof y) // undefined (because the body of the function is a block) console.log(typeof z) // undefined (because the body of the function is a block) ``` The following will throw a ReferenceError for `y` and `z`, but not for `x`, because the visibility of `x` is not constrained by the block. Blocks that define the bodies of control structures like `if`, `for`, and `while`, behave similarly. ``` { var x = 1 let y = 1 const z = 1 } console.log(x) // 1 console.log(typeof y) // undefined because `y` has block scope console.log(typeof z) // undefined because `z` has block scope ``` In the following, `x` is visible outside of the loop because `var` has function scope: ``` for(var x = 0; x < 5; ++x) {} console.log(x) // 5 (note this is outside the loop!) ``` ...because of this behavior, you need to be careful about closing over variables declared using `var` in loops. There is only one instance of variable `x` declared here, and it sits logically outside of the loop. 
The following prints `5`, five times, and then prints `5` a sixth time for the `console.log` outside the loop: ``` for(var x = 0; x < 5; ++x) { setTimeout(() => console.log(x)) // closes over the `x` which is logically positioned at the top of the enclosing scope, above the loop } console.log(x) // note: visible outside the loop ``` The following prints `undefined` because `x` is block-scoped. The callbacks are run one by one asynchronously. New behavior for `let` variables means that each anonymous function closed over a different variable named `x` (unlike it would have done with `var`), and so integers `0` through `4` are printed.: ``` for(let x = 0; x < 5; ++x) { setTimeout(() => console.log(x)) // `let` declarations are re-declared on a per-iteration basis, so the closures capture different variables } console.log(typeof x) // undefined ``` The following will NOT throw a `ReferenceError` because the visibility of `x` is not constrained by the block; it will, however, print `undefined` because the variable has not been initialised (because of the `if` statement). 
``` if(false) { var x = 1 } console.log(x) // here, `x` has been declared, but not initialised ``` A variable declared at the top of a `for` loop using `let` is scoped to the body of the loop: ``` for(let x = 0; x < 10; ++x) {} console.log(typeof x) // undefined, because `x` is block-scoped ``` The following will throw a `ReferenceError` because the visibility of `x` is constrained by the block: ``` if(false) { let x = 1 } console.log(typeof x) // undefined, because `x` is block-scoped ``` Variables declared using `var`, `let` or `const` are all scoped to modules: ``` // module1.js var x = 0 export function f() {} //module2.js import f from 'module1.js' console.log(x) // throws ReferenceError ``` The following will declare a property on the global object because variables declared using `var` within the global context are added as properties to the global object: ``` var x = 1 console.log(window.hasOwnProperty('x')) // true ``` `let` and `const` in the global context do not add properties to the global object, but still have global scope: ``` let x = 1 console.log(window.hasOwnProperty('x')) // false ``` Function parameters can be considered to be declared in the function body: ``` function f(x) {} console.log(typeof x) // undefined, because `x` is scoped to the function ``` Catch block parameters are scoped to the catch-block body: ``` try {} catch(e) {} console.log(typeof e) // undefined, because `e` is scoped to the catch block ``` Named function expressions are scoped only to the expression itself: ``` (function foo() { console.log(foo) })() console.log(typeof foo) // undefined, because `foo` is scoped to its own expression ``` In non-strict mode, implicitly defined properties on the global object are globally scoped. In strict mode, you get an error. ``` x = 1 // implicitly defined property on the global object (no "var"!) console.log(x) // 1 console.log(window.hasOwnProperty('x')) // true ``` In non-strict mode, function declarations have function scope. 
In strict mode, they have block scope.

```
'use strict'

{
  function foo() {}
}

console.log(typeof foo) // undefined, because `foo` is block-scoped
```

## How it works under the hood

Scope is defined as the [lexical](https://stackoverflow.com/a/1047479/38522) region of code over which an identifier is valid.

In JavaScript, every function-object has a hidden `[[Environment]]` reference that is a reference to the [lexical environment](https://www.ecma-international.org/ecma-262/10.0/index.html#sec-lexical-environments) of the [execution context](https://www.ecma-international.org/ecma-262/10.0/index.html#sec-execution-contexts) (stack frame) within which it was created.

When you invoke a function, the hidden `[[Call]]` method is called. This method creates a new execution context and establishes a link between the new execution context and the lexical environment of the function-object. It does this by copying the `[[Environment]]` value on the function-object, into an [outer reference](https://www.ecma-international.org/ecma-262/10.0/index.html#sec-lexical-environments) field on the lexical environment of the new execution context.

Note that this link between the new execution context and the lexical environment of the function object is called a [closure](https://stackoverflow.com/a/111114/38522).

Thus, in JavaScript, scope is implemented via lexical environments linked together in a "chain" by outer references. This chain of lexical environments is called the scope chain, and identifier resolution occurs by [searching up the chain](https://www.ecma-international.org/ecma-262/10.0/index.html#sec-getidentifierreference) for a matching identifier.

Find out [more](https://github.com/getify/You-Dont-Know-JS/tree/2nd-ed/scope-closures).
JavaScript uses scope chains to establish the scope for a given function. There is typically one global scope, and each function defined has its own nested scope. Any function defined within another function has a local scope which is linked to the outer function. It's always the position in the source that defines the scope. An element in the scope chain is basically a Map with a pointer to its parent scope. When resolving a variable, JavaScript starts at the innermost scope and searches outwards.
What is the scope of variables in JavaScript?
[ "", "javascript", "function", "variables", "scope", "var", "" ]
If there is truly a 'best' way, what *is* the best way to ship a python app and ensure people can't (easily) reverse engineer your algorithms/security/work in general? If there isn't a 'best' way, what are the different options available? Background: I love coding in Python and would love to release more apps with it. One thing that I wonder about is the possibility of people circumventing any licensing code I put in, or being able to just rip off my entire source base. I've heard of Py2Exe and similar applications, but I'm curious if there are 'preferred' ways of doing it, or if this problem is just a fact of life.
Security through obscurity *never* works. If you must use a proprietary license, enforce it through the law, not half-baked obfuscation attempts. If you're worried about them learning your security (e.g. cryptography) algorithm, the same applies. Real, useful, security algorithms (like AES) are secure even though the algorithm is fully known.
Even if you use a compiled language like C# or Java, people can perform reverse engineering if they are motivated and technically competent. Obfuscation is not a reliable protection against this.

You can add a prohibition against reverse engineering to your end-user license agreement for your software. Most proprietary companies do this. But that doesn't prevent violation, it only gives you legal recourse.

The *best* solution is to offer products and services in which the user's access to read your code does not harm your ability to sell your product or service. Base your business on the service provided, or on subscription to periodic updates to data, rather than on the code itself.

Example: Slashdot actually makes the code for their website available. Does this harm their ability to run their website? No.

Another remedy is to set your price point such that the effort to pirate your code is more costly than simply buying legitimate licenses to use your product. Joel Spolsky has made a recommendation to this effect in his articles and podcasts.
Python Applications: Can You Secure Your Code Somehow?
[ "", "python", "security", "reverse-engineering", "" ]
I have a text that consists of information enclosed by a certain pattern. The only thing I know is the pattern: "${template.start}" and ${template.end}

To keep it simple I will substitute ${template.start} and ${template.end} with "a" in the example. So one entry in the text would be:

```
aINFORMATIONHEREa
```

I do not know how many of these entries are concatenated in the text. So the following is correct too:

```
aFOOOOOOaaASDADaaASDSDADa
```

I want to write a regular expression to extract the information enclosed by the "a"s. My first attempt was to do:

```
a(.*)a
```

which works as long as there is only one entry in the text. As soon as there is more than one entry it fails, because of the `.*` matching everything. So using `a(.*)a` on `aFOOOOOOaaASDADaaASDSDADa` results in only one capturing group containing everything between the first and the last character of the text which are "a":

```
FOOOOOOaaASDADaaASDSDAD
```

What I want to get is something like

```
captureGroup(0): aFOOOOOOaaASDADaaASDSDADa
captureGroup(1): FOOOOOO
captureGroup(2): ASDAD
captureGroup(3): ASDSDAD
```

It would be great to be able to extract each entry out of the text, and from each entry the information that is enclosed between the "a"s.

By the way, I am using the QRegExp class of Qt4. Any hints?

Thanks! Markus

---

Multiple variations of this question have been seen before.
Various related discussions:

* [Regex to replace all \n in a String, but no those inside [code] [/code] tag](https://stackoverflow.com/questions/328387/regex-to-replace-all-n-in-a-string-but-no-those-inside-code-code-tag)
* [Using regular expressions how do I find a pattern surrounded by two other patterns without including the surrounding strings?](https://stackoverflow.com/questions/191704/using-regular-expressions-how-do-i-find-a-pattern-surrounded-by-two-other-pattern)
* [Use RegExp to match a parenthetical number then increment it](https://stackoverflow.com/questions/423107/use-regexp-to-match-a-parenthetical-number-then-increment-it)
* [Regex for splitting a string using space when not surrounded by single or double quotes](https://stackoverflow.com/questions/366202/regex-for-splitting-a-string-using-space-when-not-surrounded-by-single-or-double)
* [What regex will match text excluding what lies within HTML tags?](https://stackoverflow.com/questions/179779/what-regex-will-match-text-excluding-what-lies-within-html-tags)

and probably others...
Simply use non-greedy expressions, namely:

```
a(.*?)a
```
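The difference between the greedy and non-greedy form is easy to check with a quick sketch — illustrated here with Python's `re` module purely because it is convenient to run; the pattern idea is engine-agnostic. (Note that Qt4's `QRegExp` does not accept lazy quantifiers like `*?` in the pattern itself; it provides `QRegExp::setMinimal(true)` for the same behaviour.)

```python
import re

text = "aFOOOOOOaaASDADaaASDSDADa"

# Greedy: .* runs to the last 'a', producing one huge group
print(re.findall(r"a(.*)a", text))   # ['FOOOOOOaaASDADaaASDSDAD']

# Non-greedy: .*? stops at the earliest closing 'a', one group per entry
print(re.findall(r"a(.*?)a", text))  # ['FOOOOOO', 'ASDAD', 'ASDSDAD']
```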
You need to match something like:

```
a[^a]*a
```
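The negated character class sidesteps greediness entirely: `[^a]*` can never cross a delimiter, so each group necessarily stops at the next "a". A quick check of the capturing version, sketched with Python's `re` module for convenience:

```python
import re

text = "aFOOOOOOaaASDADaaASDSDADa"

# [^a]* matches a run of non-delimiter characters, so the group
# cannot swallow the closing 'a' no matter how the engine backtracks
print(re.findall(r"a([^a]*)a", text))  # ['FOOOOOO', 'ASDAD', 'ASDSDAD']
```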
Using an asterisk in a RegExp to extract data that is enclosed by a certain pattern
[ "", "c++", "regex", "qt4", "" ]
In Python, I've got a list of dictionaries that looks like this:

```
matchings = [
    {'id': 'someid1', 'domain': 'somedomain1.com'},
    {'id': 'someid2', 'domain': 'somedomain2.com'},
    {'id': 'someid3', 'domain': 'somedomain3.com'}
]
```

and, I have a variable:

```
the_id = 'someid3'
```

What's the most efficient way to retrieve the domain value of the item?
You can use a [list comprehension](http://docs.python.org/tutorial/datastructures.html#list-comprehensions):

```
domains = [matching['domain'] for matching in matchings if matching['id'] == the_id]
```

Which follows the standard format of:

```
resulting_list = [item_to_return for item in items if condition]
```

And basically encapsulates all the following functionality:

```
domains = []
for matching in matchings:
    if matching['id'] == the_id:
        domains.append(matching['domain'])
```

All that functionality is represented in a single line using list comprehensions.
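If only a single match is expected, a related idiom is `next()` over a generator expression, which stops scanning as soon as the first match is found (the `None` here is just an arbitrary default for the no-match case):

```python
matchings = [
    {'id': 'someid1', 'domain': 'somedomain1.com'},
    {'id': 'someid2', 'domain': 'somedomain2.com'},
    {'id': 'someid3', 'domain': 'somedomain3.com'},
]
the_id = 'someid3'

# The generator expression is evaluated lazily; iteration stops at the first hit
domain = next((m['domain'] for m in matchings if m['id'] == the_id), None)
print(domain)  # somedomain3.com
```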
I'd restructure `matchings`.

```
from collections import defaultdict

matchings_ix = defaultdict(list)
for m in matchings:
    matchings_ix[m['id']].append(m)
```

Now the most efficient lookup is

```
matchings_ix[the_id]
```
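A sketch of the full flow — one linear pass to build the index, after which every lookup is a constant-time dictionary access rather than a scan of the whole list:

```python
from collections import defaultdict

matchings = [
    {'id': 'someid1', 'domain': 'somedomain1.com'},
    {'id': 'someid2', 'domain': 'somedomain2.com'},
    {'id': 'someid3', 'domain': 'somedomain3.com'},
]

matchings_ix = defaultdict(list)
for m in matchings:
    matchings_ix[m['id']].append(m)

# O(1) lookup; an unknown id yields an empty list instead of a KeyError
domains = [m['domain'] for m in matchings_ix['someid3']]
print(domains)  # ['somedomain3.com']
```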
What's the most efficient way to access sibling dictionary value in a Python dict?
[ "", "python", "loops", "dictionary", "list-comprehension", "" ]
Can I tell, using JavaScript, whether a user has clicked on the "X" icon on a browser dialog, or the "OK"/"Cancel" buttons? I have code I need to run when the window closes, but it will only run when OK or Cancel are clicked. I currently capture the onunload event of the window. How can I accomplish this?

```
window.onunload = function() {
    alert("unloading");
}
```
Why do you want to do this? We can probably help you come up with a different design that doesn't require this if you tell us what you're trying to do. However, to answer your question: it's not possible to catch that event in all cases. You cannot prevent the user from closing the browser or guarantee that your code will execute when they do. You can make it slightly annoying for them, but they can disable javascript, or kill the process, or reboot the computer. Using the unload function is the closest you can come to having some code that runs when the window closes (it will run in cases of normal shutdown or when the user navigates away).
If I understood correctly, your question is about a browser **dialog**, not the main browser **window**. To answer your question, you probably cannot distinguish between the Cancel button and the X button of a browser **dialog**. They'll both end up just returning a `false`. If you need this level of control, you should consider writing your own simulated dialog (lightbox) instead of a real JavaScript dialog. Or perhaps look at existing frameworks/plugins with modal dialogs that give you the amount of control you need.
Prevent a user from closing a browser window using "X"?
[ "", "asp.net", "javascript", "html", "" ]
Having the hours and minutes, is there any easier or better way to set it into a Calendar object than:

```
calendar.set(calendar.get(Calendar.YEAR),
             calendar.get(Calendar.MONTH),
             calendar.get(Calendar.DAY_OF_MONTH),
             hour,
             minute);
```
From [here](http://www.java2s.com/Code/JavaAPI/java.util/Calendarsetintfieldintvalue.htm):

```
calendar.set(Calendar.HOUR_OF_DAY, hour);
calendar.set(Calendar.MINUTE, minute);
```
Use [set(int field, int value)](https://docs.oracle.com/javase/9/docs/api/java/util/Calendar.html#set-int-int-). The first parameter is the field number for HOUR/MINUTE/SECOND.
How do you set the time and only the time in a calendar in Java?
[ "", "java", "date", "calendar", "hour", "minute", "" ]
What is the most efficient way of testing whether an input string contains a numeric value (or conversely Not A Number)? I guess I can use `Double.Parse` or a regex (see below) but I was wondering if there is some built-in way to do this, such as JavaScript's `NaN()` or `IsNumeric()` (was that VB, I can't remember?).

```
public static bool IsNumeric(this string value)
{
    return Regex.IsMatch(value, "^\\d+$");
}
```
This doesn't have the regex overhead

```
double myNum = 0;
String testVar = "Not A Number";

if (Double.TryParse(testVar, out myNum))
{
    // it is a number
}
else
{
    // it is not a number
}
```

Incidentally, all of the standard data types, with the glaring exception of GUIDs, support TryParse.

**update**

secretwep brought up that the value "2345," will pass the above test as a number. However, if you need to ensure that all of the characters within the string are digits, then another approach should be taken.

**example 1**:

```
public Boolean IsNumber(String s)
{
    Boolean value = true;
    foreach(Char c in s.ToCharArray())
    {
        value = value && Char.IsDigit(c);
    }

    return value;
}
```

or if you want to be a little more fancy

```
public Boolean IsNumber(String value)
{
    return value.All(Char.IsDigit);
}
```

**update 2** (from @stackonfire to deal with null or empty strings)

```
public Boolean IsNumber(String s)
{
    Boolean value = true;
    if (s == String.Empty || s == null)
    {
        value = false;
    }
    else
    {
        foreach(Char c in s.ToCharArray())
        {
            value = value && Char.IsDigit(c);
        }
    }
    return value;
}
```
I prefer something like this, it lets you decide what [`NumberStyle`](http://msdn.microsoft.com/en-us/library/system.globalization.numberstyles.aspx) to test for.

```
public static Boolean IsNumeric(String input, NumberStyles numberStyle)
{
    Double temp;
    Boolean result = Double.TryParse(input, numberStyle, CultureInfo.CurrentCulture, out temp);
    return result;
}
```
What is the C# equivalent of NaN or IsNumeric?
[ "", "c#", "parsing", "" ]
Today, I saw some legacy code. In the destructor there is a statement like `delete this`. I think this call will be recursive. Why does it work? I did a quick search on Y! and found that if there is a need to restrict the user from creating stack objects, we can make the destructor private and provide an interface to delete the instance. In the interface provided, we have to call delete on the this pointer. Are there any other situations for using such statements?
"delete this" is commonly used for ref counted objects. For a ref counted object the decision of when to delete is usually placed on the object itself. Here is an example of what a Release method would look like [1]. ``` int MyRefCountedObject::Release() { _refCount--; if ( 0 == _refCount ) { delete this; return 0; } return _refCount; } ``` ATL COM objects are a prime example of this pattern. [1] Yes I realize this is not thread safe.
### `delete this` is valid in C++11

It is valid to call `delete this` in a member function. The [C++ FAQ](https://isocpp.org/wiki/faq/freestore-mgmt#delete-this) has an entry about that. The restrictions of [C++11 [basic.life] p5](https://timsong-cpp.github.io/cppwp/n3337/basic.life#5) do not apply to objects under destruction, and [C++11 [class.cdtor]](https://timsong-cpp.github.io/cppwp/n3337/class.cdtor) does not restrict you from using `delete this`.

Despite being valid, it's rarely a good idea. The `wxWidgets` framework uses it for their thread class. It has a mode where, when the thread ends execution, it automatically frees system resources and itself (the wxThread object). I found it very annoying, because from outside, you can't know whether it's valid to refer it or not - you can't call a function like `IsValid` anymore, because the object doesn't exist. That smells like the main problem with `delete this`, apart from the problem that it can't be used for non-dynamic objects.

If you do it, make sure you don't touch any data-member, or call any member function anymore on the object you deleted that way. Best do it as the last statement in a non-virtual, protected or private function. Calling delete is valid in a virtual and/or public function too, but I would restrict the visibility of the method doing that.
---

*Note: `delete this` may not be used in a destructor as per [C++11 [class.dtor] p15](https://timsong-cpp.github.io/cppwp/n3337/class.dtor#15), and you have to avoid destroying an object twice.*

### `delete this` used to be undefined behavior in C++03

C++ Standard quote on my claim above ([C++03 [basic.life] p5](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1804.pdf)):

> Before the lifetime of an object has started but after the storage which the object will occupy has been allocated or, after the lifetime of an object has ended and before the storage which the object occupied is reused or released, any pointer that refers to the storage location where the object will be or was located may be used but only in limited ways. [...] *If the object will be or was of a class type with a non-trivial destructor, and the pointer is used as the operand of a delete-expression, the program has undefined behavior.*

Lifetime ends when the destructor of the object begins execution. Note there are exceptions to the rules coming after that paragraph for objects under construction and destruction (you are allowed to access non-static data members of POD types, for example), detailed at [C++03 [class.cdtor]](https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1804.pdf).
What is the use of "delete this"?
[ "", "c++", "memory-management", "destructor", "self-destruction", "" ]
stripslashes() ? That's lame and so 4.0. What's the 5.0 counterpart of mysqli::real\_escape\_string that strips all slashes added for SQL queries? Got some other questions:

1. Tried to update a record and added a single quote in a text field; turns out phpMyAdmin escapes the string with single quotes instead of slashes - e.g. a single quote is escaped as '' (2 single quotes) instead of \' - what function is phpMyAdmin using, or is it its own? So, MySQL supports 2 approaches for escaping strings, namely slash and single quote?
2. Do I always have to unslash the string selected from MySQL? 'Cause you know it's slashed at insertion. But I thought I don't have to.

Any ideas, thanks!
If you don't want to go with PDO, and you are using mysqli, you should be using prepared statements, so you don't have to worry about escaping quotes with things like mysql\_real\_escape\_string\_i\_mean\_it\_this\_time. More specifically, you can call [mysqli->prepare](http://ca.php.net/manual/en/mysqli.prepare.php) to prepare your query. Call [mysqli\_stmt->bind\_param](http://ca.php.net/manual/en/mysqli-stmt.bind-param.php) to set the parameter values. And, call [mysqli\_stmt->execute](http://ca.php.net/manual/en/mysqli-stmt.execute.php) to execute the query.
Use [PDO](http://php.net/pdo) instead of any of the `mysql[i]`/`pgsql`/... extensions. If you're just looking to reverse the damage done by magic quotes, though, `stripslashes()` is exactly what you're looking for.
The counterpart of mysqli::real_escape_string?
[ "", "php", "mysql", "mysqli", "" ]
I've been told not to use `for...in` with arrays in JavaScript. Why not?
The reason is that one construct:

```
var a = []; // Create a new empty array.
a[5] = 5;   // Perfectly legal JavaScript that resizes the array.

for (var i = 0; i < a.length; i++) {
    // Iterate over numeric indexes from 0 to 5, as everyone expects.
    console.log(a[i]);
}

/* Will display:
   undefined
   undefined
   undefined
   undefined
   undefined
   5
*/
```

can sometimes be totally different from the other:

```
var a = [];
a[5] = 5;
for (var x in a) {
    // Shows only the explicitly set index of "5", and ignores 0-4
    console.log(x);
}

/* Will display:
   5
*/
```

Also consider that [JavaScript](http://en.wikipedia.org/wiki/JavaScript) libraries might do things like this, which will affect any array you create:

```
// Somewhere deep in your JavaScript library...
Array.prototype.foo = 1;

// Now you have no idea what the below code will do.
var a = [1, 2, 3, 4, 5];
for (var x in a){
    // Now foo is a part of EVERY array and
    // will show up here as a value of 'x'.
    console.log(x);
}

/* Will display:
   0
   1
   2
   3
   4
   foo
*/
```
The `for-in` statement by itself is not a "bad practice", however it can be *mis-used*, for example, to *iterate* over arrays or array-like objects.

The purpose of the `for-in` statement is to *enumerate* over object properties. This statement will go up in the prototype chain, also enumerating over *inherited* properties, a thing that *sometimes* is not desired.

Also, the order of iteration is not guaranteed by the spec., meaning that if you want to "iterate" an array object, with this statement you cannot be sure that the properties (array indexes) will be visited in the numeric order.

For example, in JScript (IE <= 8), the order of enumeration even on Array objects is defined as the properties were created:

```
var array = [];
array[2] = 'c';
array[1] = 'b';
array[0] = 'a';

for (var p in array) {
  //... p will be "2", "1" and "0" on IE
}
```

Also, speaking about inherited properties, if you, for example, extend the `Array.prototype` object (like some libraries such as MooTools do), those properties will also be enumerated:

```
Array.prototype.last = function () {
  return this[this.length-1];
};

for (var p in []) { // an empty array
  // last will be enumerated
}
```

As I said before, to *iterate* over arrays or array-like objects, the best thing is to use a *sequential loop*, such as a plain-old `for`/`while` loop.

When you want to enumerate only the *own properties* of an object (the ones that aren't inherited), you can use the `hasOwnProperty` method:

```
for (var prop in obj) {
  if (obj.hasOwnProperty(prop)) {
    // prop is not inherited
  }
}
```

And some people even recommend calling the method directly from `Object.prototype` to avoid having problems if somebody adds a property named `hasOwnProperty` to our object:

```
for (var prop in obj) {
  if (Object.prototype.hasOwnProperty.call(obj, prop)) {
    // prop is not inherited
  }
}
```
Why is using "for...in" for array iteration a bad idea?
[ "", "javascript", "arrays", "loops", "for-loop", "iteration", "" ]
I've been doing a lot of research on how best to write "correct" network code in C#. I've seen a number of examples using the "using" statement of C#, and I think this is a good approach, however I've seen inconsistent use of it with various expressions. For instance, suppose I have some code like this:

```
TcpClient tcpClient = new TcpClient("url.com", 80);
NetworkStream tcpStream = tcpClient.GetStream();
StreamReader tcpReader = new StreamReader(tcpStream);
StreamWriter tcpWriter = new StreamWriter(tcpStream);
```

Obviously, this code is going to be very flaky. So, I've seen some code that puts the using on the tcpClient, which seems good. However, doesn't NetworkStream also have resources that need cleaning up? What about StreamReader/Writer? Do I need to wrap all 4 statements in nested using statements? And if so, what happens when the time has come to dispose? Won't StreamWriter close the stream and consequently the socket? Then what happens when StreamReader, then NetworkStream, then TcpClient each go through their disposals?

Which brings up another question. With both StreamReader and StreamWriter composed of the same stream, who owns it? Don't they both think they own it, and will thus both try to destroy it? Or does the framework know that the stream has already been destroyed and just silently ignore it?

It almost seems like the using statement is only necessary for the last statement in the chain, but then what happens if an exception is thrown in GetStream()? I don't think it would properly clean up the socket then, so it seems redundant usings are necessary to ensure this doesn't happen.

Does anyone know of any good, recent books on network programming with .NET, and preferably C#, that include chapters on exception handling and resource management? Or maybe any good articles online?
All the books I can find are from the .NET 1.1 era (Network Programming for the Microsoft .NET Framework, Network Programming in.NET, etc..), so this seems like a topic that needs some good resources. EDIT: Please, don't let Marc's very good comment stop anyone else from commenting on this :) I'd like to hear anyone elses book recommendations or opinions on resource management, especially in regard to asynchronous usage.
Generally, objects should internally handle multiple `Dispose()` calls, and only do the main code once; so a stream getting `Dispose()`d multiple times is not usually a problem. Personally, I would use lots of `using` there; note that you don't need to indent/nest, though (unless different levels have different life-times):

```
using(TcpClient tcpClient = new TcpClient("url.com", 80))
using(NetworkStream tcpStream = tcpClient.GetStream())
using(StreamReader tcpReader = new StreamReader(tcpStream))
using(StreamWriter tcpWriter = new StreamWriter(tcpStream))
{
    ...
}
```

As you say, this ensures that if an error happens during initialization, everything is still cleaned up correctly. This also ensures that every level gets a chance (in the right order) to deal correctly with any buffered data, etc.

Re ownership; `NetworkStream` is actually an oddity in the first place... most streams are **either** input **xor** output. `NetworkStream` bends a few rules and shims two directions into one API; so this is an exception... normally the ownership would be clearer. Additionally, many wrappers have a flag to determine whether they should close the wrapped stream. `StreamReader` doesn't, but some do (such as `GZipStream`, which has a `leaveOpen` ctor option). If you don't want to flow ownership, this is an option - or use a non-closing stream intermediary - one is [here](http://www.pobox.com/~skeet/csharp/miscutil/#) (`NonClosingStream` or similar).

Re books; I picked up a copy of "TCP/IP Sockets in C#: Practical Guide for Programmers" ([here](http://www.amazon.co.uk/exec/obidos/ASIN/0124660517)) - adequate, but not great.
If an object supports IDisposable, it's best to put it in a `using {}` block because the Dispose method gets called automatically for you. This also makes for less code on your part. It is important to note that using a 'using' doesn't handle any exceptions; you still have to do that if you want to handle any errors. Once the using block goes out of scope, so does your object.

Old style code:

```
SomeDisposableType obj = null; // SomeDisposableType stands in for any type implementing IDisposable
try
{
    obj = new SomeDisposableType();
    //Do something with the object
}
catch
{
    //Handle Exception
}
finally
{
    if (obj != null)
    {
        obj.Dispose();
    }
}
```

Newer style code:

```
try
{
    using (SomeDisposableType obj = new SomeDisposableType())
    {
        //Do something with the object
    }
}
catch
{
    //Handle Exception
}
```
C# network programming and resource usage
[ "", "c#", ".net", "network-programming", "using-statement", "" ]
I wrote some CSS in my HTML code to create rollover buttons. Then I tried to run it with IE 7 and surprise! It doesn't run. In fact it shows both the button and underlying rollover. How can I get around IE's inability to cache background images? Preferably using CSS, but JavaScript 'will' be tried.

Sample CSS:

```
#Menu {
    width: 100%;
    height: 32px;
    margin-top: 93px;
    padding-left: 13px;
}

#Menu a {
    height: 32px;
    line-height: 32px;
    width: 123px;
    background: url("img/menu.png") top left no-repeat;
    background-position: -123px 0;
    float: left;
    margin-left: 3px;
    text-decoration: none;
    color: #1e1e1d;
    font-size: 12px;
    text-align: center;
}

#Top #Menu a:hover, #Top #Menu a.active {
    background-position: 0px 0;
    text-decoration: underline;
}
```
Well firstly you are giving conflicting instructions ...

```
background: url("img/menu.png") top left no-repeat;
background-position: -123px 0;
```

... the background is already positioned using shorthand. I assume that your regular and hover states both share the same image, so why not do both with shorthand?

Remove...

> background-position: -123px 0;

... and for your hover and active states, use ...

```
background-position: bottom left;
```

Then have both your states in one image, one below the other (which I assume is what you've been trying anyway).
The image rollover issue arises mainly because the image is downloaded again every time you hover over a link or tab. The flicker is caused by the delay when the primary image is removed and the rollover image is loaded (even though they are technically the same image, Internet Explorer prefers to treat them separately).

A complete fix for the rollover issue is described here: <http://faqspoint.blogspot.com/2011/12/ie-rollover-problem.html>
CSS rollover problems with Internet Explorer 7
[ "", "javascript", "css", "internet-explorer", "" ]
This code compiles:

```
private static void Main(string[] args)
{
    bool? fred = true;

    if (fred == true)
        Console.WriteLine("fred is true");
    else if (fred == false)
        Console.WriteLine("fred is false");
    else
        Console.WriteLine("fred is null");
}
```

This code does *not* compile.

```
private static void Main(string[] args)
{
    bool? fred = true;

    if (fred)
        Console.WriteLine("fred is true");
    else if (!fred)
        Console.WriteLine("fred is false");
    else
        Console.WriteLine("fred is null");
}
```

I thought `if(booleanExpression == true)` was supposed to be a redundancy. Why isn't it in this case?
There's no implicit conversion from `Nullable<bool>` to `bool`. There *is* an implicit conversion from `bool` to `Nullable<bool>` and that's what happens (in language terms) to each of the bool constants in the first version. The `bool operator==(Nullable<bool>, Nullable<bool>)` operator is then applied. (This isn't quite the same as other lifted operators - the result is just `bool`, not `Nullable<bool>`.) In other words, the expression 'fred == false' is of type `bool`, whereas the expression 'fred' is of type `Nullable<bool>`, hence you can't use it as the "if" expression.

EDIT: To answer the comments, there's never an implicit conversion from `Nullable<T>` to `T` and for good reason - implicit conversions shouldn't throw exceptions, and unless you want `null` to be implicitly converted to `default(T)` there's not a lot else that could be done. Also, if there *were* implicit conversions both ways round, an expression like "nullable + nonNullable" would be very confusing (for types that support +, like `int`). Both +(T?, T?) and +(T, T) would be available, depending on which operand were converted - but the results could be very different! I'm 100% behind the decision to only have an explicit conversion from `Nullable<T>` to `T`.
Because fred is not a boolean; it is a struct, which has a boolean property called IsNull, or HasValue, or whatever... The object named fred is the complex composite object containing a boolean and a value, not a primitive boolean itself...

Below, for example, is how a nullable Int could be implemented. The generic Nullable is almost certainly implemented similarly (but generically). You can see here how the implicit and explicit conversions are implemented.

```
public struct DBInt
{
    // The Null member represents an unknown DBInt value.
    public static readonly DBInt Null = new DBInt();

    // When the defined field is true, this DBInt represents a known value
    // which is stored in the value field. When the defined field is false,
    // this DBInt represents an unknown value, and the value field is 0.
    int value;
    bool defined;

    // Private instance constructor. Creates a DBInt with a known value.
    DBInt(int value)
    {
        this.value = value;
        this.defined = true;
    }

    // The IsNull property is true if this DBInt represents an unknown value.
    public bool IsNull { get { return !defined; } }

    // The Value property is the known value of this DBInt, or 0 if this
    // DBInt represents an unknown value.
    public int Value { get { return value; } }

    // Implicit conversion from int to DBInt.
    public static implicit operator DBInt(int x) { return new DBInt(x); }

    // Explicit conversion from DBInt to int. Throws an exception if the
    // given DBInt represents an unknown value.
    public static explicit operator int(DBInt x)
    {
        if (!x.defined) throw new InvalidOperationException();
        return x.value;
    }

    public static DBInt operator +(DBInt x) { return x; }
    public static DBInt operator -(DBInt x) { return x.defined ? -x.value : Null; }
    public static DBInt operator +(DBInt x, DBInt y) { return x.defined && y.defined ? x.value + y.value : Null; }
    public static DBInt operator -(DBInt x, DBInt y) { return x.defined && y.defined ? x.value - y.value : Null; }
    public static DBInt operator *(DBInt x, DBInt y) { return x.defined && y.defined ? x.value * y.value : Null; }
    public static DBInt operator /(DBInt x, DBInt y) { return x.defined && y.defined ? x.value / y.value : Null; }
    public static DBInt operator %(DBInt x, DBInt y) { return x.defined && y.defined ? x.value % y.value : Null; }
    public static DBBool operator ==(DBInt x, DBInt y) { return x.defined && y.defined ? x.value == y.value : DBBool.Null; }
    public static DBBool operator !=(DBInt x, DBInt y) { return x.defined && y.defined ? x.value != y.value : DBBool.Null; }
    public static DBBool operator >(DBInt x, DBInt y) { return x.defined && y.defined ? x.value > y.value : DBBool.Null; }
    public static DBBool operator <(DBInt x, DBInt y) { return x.defined && y.defined ? x.value < y.value : DBBool.Null; }
    public static DBBool operator >=(DBInt x, DBInt y) { return x.defined && y.defined ? x.value >= y.value : DBBool.Null; }
    public static DBBool operator <=(DBInt x, DBInt y) { return x.defined && y.defined ? x.value <= y.value : DBBool.Null; }

    public override bool Equals(object o)
    {
        try { return (bool)(this == (DBInt)o); }
        catch { return false; }
    }

    public override int GetHashCode() { return defined ? value : 0; }

    public override string ToString() { return defined ? value.ToString() : "DBInt.Null"; }
}
```
Why do nullable bools not allow if(nullable) but do allow if(nullable == true)?
[ "", "c#", ".net", "nullable", "" ]
I have a generator and I would like to know if I can use it without having to worry about StopIteration, and I would like to use it without the **`for item in generator`** construct. I would like to use it with a `while` statement, for example (or other constructs). How could I do that?
Use this to wrap your generator:

```
class GeneratorWrap(object):

    def __init__(self, generator):
        self.generator = generator

    def __iter__(self):
        return self

    def next(self):
        for o in self.generator:
            return o
        # If you don't care about the iterator protocol, remove this
        # line and the __iter__ method.
        raise StopIteration
```

Use it like this:

```
def example_generator():
    for i in [1, 2, 3, 4, 5]:
        yield i

gen = GeneratorWrap(example_generator())
print gen.next()  # prints 1
print gen.next()  # prints 2
```

**Update:** Please use the answer below because it is much better than this one.
Use the built-in function:

> next(iterator[, default])
>
> Retrieve the next item from the iterator by calling its `__next__()` method. If default is given, it is returned if the iterator is exhausted, otherwise StopIteration is raised.

In Python 2.5 and older:

```
raiseStopIteration = object()

def next(iterator, default=raiseStopIteration):
    if not hasattr(iterator, 'next'):
        raise TypeError("not an iterator")
    try:
        return iterator.next()
    except StopIteration:
        if default is raiseStopIteration:
            raise
        else:
            return default
```
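Building on the `next(iterator[, default])` built-in described above, here is a sketch (Python 3 syntax; the sentinel name `_SENTINEL` and the sample generator are illustrative, not from the answer) of driving a generator with a `while` loop without ever seeing `StopIteration`:

```python
def example_generator():
    for i in [1, 2, 3]:
        yield i

_SENTINEL = object()  # unique marker that can never be a real item

gen = example_generator()
items = []
while True:
    item = next(gen, _SENTINEL)  # returns _SENTINEL instead of raising
    if item is _SENTINEL:
        break
    items.append(item)

print(items)  # [1, 2, 3]
```

Using a private `object()` as the default is safer than `None`, since `None` could be a legitimate value yielded by the generator.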
is there an alternative way of calling next on python generators?
[ "", "python", "language-features", "" ]
What is the difference between Object Literals and Array Literals in JavaScript? I know it has something to do with the `length` property, but I don't fully understand it.
[Mozilla.org](https://developer.mozilla.org/en/Core_JavaScript_1.5_Guide/Core_Language_Features#Literals) has very good explanation of the different literals with examples. > **Array Literals** > > An array literal is a list of zero or > more expressions, each of which > represents an array element, enclosed > in square brackets ([]). When you > create an array using an array > literal, it is initialized with the > specified values as its elements, and > its length is set to the number of > arguments specified. > > **Object Literals** > > An object literal is a list of zero or > more pairs of property names and > associated values of an object, > enclosed in curly braces ({}). You > should not use an object literal at > the beginning of a statement. This > will lead to an error or not behave as > you expect, because the { will be > interpreted as the beginning of a > block.
The difference is the way they're indexed. Objects have name/value pairs which are not ordered. In some browsers the order in which you added values will be the order you get when you traverse the object, but not in all. The name is usually a string. Arrays are numerically indexed, and the order is totally reliable.
JavaScript Object Literals & Array Literals
[ "", "javascript", "" ]
Say we've got:

```
struct IsEven {
    bool operator() (int i) {
        return i % 2 == 0;
    }
};
```

Then:

```
vector<int> V;
// fill with ints
vector<int>::iterator new_end = remove_if(V.begin(), V.end(), IsEven());
V.erase(new_end, V.end());
```

works fine (it leaves `V` with only the odd integers). But it seems that the elements from `new_end` to `V.end()` are *not* the even integers that we're deleting. For example, if `v` starts out as `1 4 2 8 5 7`, then I'm getting `8 5 7` for those elements (although after the `erase` call, the vector indeed has `1 5 7` left). Apparently, (according to <http://www.sgi.com/tech/stl/remove_if.html>)

```
The iterators in the range [new_last, last) are all still dereferenceable,
but the elements that they point to are unspecified.
```

First of all, WTF? And second, how do I get around this without essentially reimplementing `remove_if`?
It sounds like you want to use `partition()` to partition the vector into groups of odd values at the start and even values at the end. `partition()` will return an iterator to the first element of the second grouping. As for the WTF, I'm not sure why you would expect a remove operation to preserve the elements you want to remove by copying them (that's extra work) to the end of the container. Most people consider the WTF in `remove()` (and it's cousins) to be the fact that the size of the vector is not reduced and you have to call `erase()` to actually delete undesired elements after the remove operation.
I suppose the point is that the function is called `remove_if` for a reason. It removes elements. It doesn't move them or select them. After you've called remove\_if, you're no longer guaranteed that the elements you removed exist. All you're guaranteed is that the elements between `first` and `new_last` do not contain any of the removed elements. `std::partition` would be a better choice, wouldn't it? Or perhaps `remove_copy_if`, depending on exactly what you're trying to do.
How to use the "removed" elements after std::remove_if
[ "", "c++", "stl", "" ]
What is the difference between using [the `delete` operator](https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Operators/delete) on the array element as opposed to using [the `Array.splice` method](https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/Array/splice)?

For example:

```
myArray = ['a', 'b', 'c', 'd'];
delete myArray[1];
// or
myArray.splice(1, 1);
```

Why even have the splice method if I can delete array elements like I can with objects?
`delete` will delete the object property, but will not reindex the array or update its length. This makes it appear as if it is undefined:

```
> myArray = ['a', 'b', 'c', 'd']
  ["a", "b", "c", "d"]
> delete myArray[0]
  true
> myArray[0]
  undefined
```

Note that it is not in fact set to the value `undefined`; rather, the property is removed from the array, making it *appear* undefined. The Chrome dev tools make this distinction clear by printing `empty` when logging the array.

```
> myArray[0]
  undefined
> myArray
  [empty, "b", "c", "d"]
```

[`myArray.splice(start, deleteCount)`](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Array/splice) actually removes the element, reindexes the array, and changes its length.

```
> myArray = ['a', 'b', 'c', 'd']
  ["a", "b", "c", "d"]
> myArray.splice(0, 2)
  ["a", "b"]
> myArray
  ["c", "d"]
```
# Array.remove() Method

**John Resig**, creator of jQuery created a very handy `Array.remove` method that I always use it in my projects.

```
// Array Remove - By John Resig (MIT Licensed)
Array.prototype.remove = function(from, to) {
  var rest = this.slice((to || from) + 1 || this.length);
  this.length = from < 0 ? this.length + from : from;
  return this.push.apply(this, rest);
};
```

and here's some examples of how it could be used:

```
// Remove the second item from the array
array.remove(1);
// Remove the second-to-last item from the array
array.remove(-2);
// Remove the second and third items from the array
array.remove(1,2);
// Remove the last and second-to-last items from the array
array.remove(-2,-1);
```

[John's website](http://ejohn.org/blog/javascript-array-remove/)
Deleting array elements in JavaScript - delete vs splice
[ "", "javascript", "arrays", "element", "delete-operator", "array-splice", "" ]
Given `[1,2,3,4,5]`, how can I do something like

```
1/1, 1/2, 1/3, 1/4, 1/5,
....,
3/1, 3/2, 3/3, 3/4, 3/5,
....
5/1, 5/2, 5/3, 5/4, 5/5
```

I would like to store all the results, find the minimum, and return the two numbers used to find the minimum. So in the case I've described above I would like to return `(1,5)`.

So basically I would like to do something like:

```
for each element i in the list:
    map some function across all elements in the list, taking i and j as parameters
    store the result in a master list
```

then find the minimum value in the master list, and return the arguments `i`, `j` used to calculate this minimum value.

In my real problem I have a list of objects/coordinates, and the function I am using takes two coordinates and calculates the euclidean distance. I'm trying to find minimum euclidean distance between any two points but I don't need a fancy algorithm.
You can do this using [list comprehensions](http://docs.python.org/tutorial/datastructures.html#list-comprehensions) and [min()](http://docs.python.org/library/functions.html) (Python 3.0 code):

```
>>> nums = [1,2,3,4,5]
>>> [(x,y) for x in nums for y in nums]
[(1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (2, 1), (2, 2), (2, 3), (2, 4), (2, 5),
 (3, 1), (3, 2), (3, 3), (3, 4), (3, 5), (4, 1), (4, 2), (4, 3), (4, 4), (4, 5),
 (5, 1), (5, 2), (5, 3), (5, 4), (5, 5)]
>>> min(_, key=lambda pair: pair[0]/pair[1])
(1, 5)
```

Note that to run this on Python 2.5 you'll need to either make one of the arguments a float, or do `from __future__ import division` so that 1/5 correctly equals 0.2 instead of 0.
If I'm correct in thinking that you want to find the minimum value of a function for all possible pairs of 2 elements from a list...

```
l = [1,2,3,4,5]

def f(i,j):
    return i+j

# Prints min value of f(i,j) along with i and j
print min( (f(i,j),i,j) for i in l for j in l)
```
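For the real problem mentioned at the end of the question (minimum Euclidean distance between any two points), a brute-force sketch in the same spirit — the sample `points`, the `dist` helper, and the use of `itertools.combinations` are illustrative choices, not from either answer:

```python
from itertools import combinations
from math import hypot

points = [(0, 0), (3, 4), (1, 1)]  # sample coordinates

def dist(p, q):
    # Euclidean distance between two 2-D points
    return hypot(p[0] - q[0], p[1] - q[1])

# min() over all unordered pairs; the key compares by distance,
# and the winning pair of points itself is returned.
closest = min(combinations(points, 2), key=lambda pair: dist(*pair))
print(closest)  # ((0, 0), (1, 1))
```

`combinations` avoids comparing a point with itself (distance 0 would always win) and avoids checking each pair twice.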
Python: For each list element apply a function across the list
[ "", "python", "algorithm", "list", "list-comprehension", "" ]
Is there a standard way to associate version string with a Python package in such way that I could do the following?

```
import foo
print(foo.version)
```

I would imagine there's some way to retrieve that data without any extra hardcoding, since minor/major strings are specified in `setup.py` already. Alternative solution that I found was to have `import __version__` in my `foo/__init__.py` and then have `__version__.py` generated by `setup.py`.
Not directly an answer to your question, but you should consider naming it `__version__`, not `version`. This is almost a quasi-standard. Many modules in the standard library use `__version__`, and this is also used in [lots](http://www.google.com/codesearch?as_q=__version__&btnG=Search+Code&hl=en&as_lang=python&as_license_restrict=i&as_license=&as_package=&as_filename=&as_case=) of 3rd-party modules, so it's the quasi-standard. Usually, `__version__` is a string, but sometimes it's also a float or tuple. As mentioned by S.Lott (Thank you!), [PEP 8](https://www.python.org/dev/peps/pep-0008/#module-level-dunder-names) says it explicitly: > ## Module Level Dunder Names > > Module level "dunders" (i.e. names with two leading and two trailing > underscores) such as `__all__`, `__author__`, `__version__`, etc. > should be placed after the module docstring but before any import > statements except from `__future__` imports. You should also make sure that the version number conforms to the format described in [PEP 440](http://www.python.org/dev/peps/pep-0440/) ([PEP 386](http://www.python.org/dev/peps/pep-0386/) a previous version of this standard).
I use a single `_version.py` file as the "one canonical place" to store version information:

1. It provides a `__version__` attribute.
2. It provides the standard metadata version. Therefore it will be detected by `pkg_resources` or other tools that parse the package metadata (EGG-INFO and/or PKG-INFO, PEP 0345).
3. It doesn't import your package (or anything else) when building your package, which can cause problems in some situations. (See the comments below about what problems this can cause.)
4. There is only one place that the version number is written down, so there is only one place to change it when the version number changes, and there is less chance of inconsistent versions.

Here is how it works: the "one canonical place" to store the version number is a .py file, named "\_version.py" which is in your Python package, for example in `myniftyapp/_version.py`. This file is a Python module, but your setup.py doesn't import it! (That would defeat feature 3.) Instead your setup.py knows that the contents of this file is very simple, something like:

```
__version__ = "3.6.5"
```

And so your setup.py opens the file and parses it, with code like:

```
import re

VERSIONFILE = "myniftyapp/_version.py"
verstrline = open(VERSIONFILE, "rt").read()
VSRE = r"^__version__ = ['\"]([^'\"]*)['\"]"
mo = re.search(VSRE, verstrline, re.M)
if mo:
    verstr = mo.group(1)
else:
    raise RuntimeError("Unable to find version string in %s." % (VERSIONFILE,))
```

Then your setup.py passes that string as the value of the "version" argument to `setup()`, thus satisfying feature 2.

To satisfy feature 1, you can have your package (at run-time, not at setup time!) import the \_version file from `myniftyapp/__init__.py` like this:

```
from _version import __version__
```

Here is [an example of this technique](https://github.com/tahoe-lafs/zfec/blob/42dededf9d0ddbb165cdfccbd5acdf6e0600cb4b/setup.py#L91) that I've been using for years. 
The code in that example is a bit more complicated, but the simplified example that I wrote into this comment should be a complete implementation. Here is [example code of importing the version](https://tahoe-lafs.org/trac/zfec/browser/trunk/zfec/zfec/__init__.py?rev=363). If you see anything wrong with this approach, please let me know.
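As a quick sanity check of the parsing step described above, here is a runnable sketch — the file contents are inlined as a string (illustrative) instead of being read from `myniftyapp/_version.py`, but the regex is the one from the answer:

```python
import re

# Stand-in for the contents of myniftyapp/_version.py
verstrline = '__version__ = "3.6.5"\n'

VSRE = r"^__version__ = ['\"]([^'\"]*)['\"]"
mo = re.search(VSRE, verstrline, re.M)
if mo:
    verstr = mo.group(1)
else:
    raise RuntimeError("Unable to find version string.")

print(verstr)  # 3.6.5
```

The character class `['\"]` lets the version be written with either single or double quotes, and `re.M` anchors `^` to the start of any line in the file.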
Standard way to embed version into Python package?
[ "", "python", "setuptools", "setup.py", "python-packaging", "" ]
I have a website which uses PHP and HTML pages. I want to create a session which stores a username from the login page. But the login pages are PHP and the next pages are HTML. Is this a problem, or can I just add a small statement of PHP into the HTML page saying

```
<?PHP
session_start();
$_session['loginid']=$_post['username'];
?>
```

Or am I doing it wrong? This is the first time I've used sessions and they confuse me a little. Thanks for any help.
If you have access to your apache configuration, or a simple .htaccess file, you can tell Apache to handle php code inside of an .html file. You can do this by creating an **.htaccess** file (remember the . (dot) as the first character in that filename) on the document root of the site (probably public\_html/) and putting this into it:

```
# Add this to public_html/.htaccess file
AddHandler application/x-httpd-php .html
AddHandler application/x-httpd-php .htm
```

You should be able to reload the html page and your PHP code (from Michael Matthews answer) will run great.
You are trying to share a PHP session variable with a page that is of type text/html. As you suggested you must make the HTML page a PHP page for this to work and add a little snippet of PHP somewhere to display the user name.

Change your HTML page to PHP. At the top of the page add something like this:

```
<?php
session_start(); // must be before any output
$username = $_SESSION['username']; // or whatever you called it
// check that $username is valid here (safe to display)
?>
html here
Hello <?= $username ?>!
```
PHP sessions with HTML
[ "", "php", "html", "session", "" ]
I will participate in a modeling competition, which lasts three days. I need a language which is fast and designed for modeling, such as 2D/3D models. I have considered these languages:

1. Python
2. Sage

Which languages would you use?
You should use the language that you know best and that has good-enough tools for the task at hand. Depending on when the competition is you may have no time to learn a new language/environment.
Have a look at <http://www.processing.org/> -- it is a programming language (similar to Java) and IDE especially developed for simulation and data visualization. Given that it was developed in a teaching context, it will be easy to use and will give you great results in no time -- I have seen amazing applications (e.g. [webpages as graphs](http://www.aharef.info/static/htmlgraph/), [complexification](http://www.complexification.net/))
Most suitable language(s) for simulations in modeling?
[ "", "python", "sage", "" ]
I have an element in my document that has a background color and image set through a regular CSS rule. When a certain event happens, I want to animate that item, highlighting it (I'm using Scriptaculous, but this question applies to any framework that'll do the same).

```
new Effect.Highlight(elHighlight, {
    startcolor: '#ffff99',
    endcolor: '#ffffff',
    afterFinish: fnEndOfFadeOut
});
```

The problem I'm facing is that after the animation is done, the element is left with the following style (according to Firebug):

```
element.style {
    background-color: transparent;
    background-image: none;
}
```

Which overrides the CSS rule, since it's set at the element level, so I'm losing the background that the item used to have...

What I'm trying to do is, in the callback function I'm running after the animation is done, set the style properties to a value that'll make them "go away".

```
var fnEndOfFadeOut = function() {
    elHighlight.style.backgroundColor = "xxxxx";
    elHighlight.style.backgroundImage = "xxxxx";
}
```

What I'm trying to figure out is what to put in "xxxx" (or how to do the same thing in a different way). I tried 'auto', 'inherit', and '' (blank string), and neither worked (I didn't really expect them to work, but I'm clueless here). I also tried `elHighlight.style = "";` which, expectably, threw an exception.

What can I do to overcome this? I know I can put a span inside the element that I'm highlighting and highlight that span instead, but I'm hoping I'll be able to avoid the extra useless markup.
Chances are you're not setting the style on the correct element. It's probably being set somewhere up the line in a parent node.

```
elHighlight.style.backgroundColor = "";
elHighlight.style.backgroundImage = "";
```

You can also remove *all* the default styling by calling:

```
elHighlight.style.cssText = "";
```

In any case, you'll still have to do this on the specific element that is setting these properties, which means you may need to do a recursion on parentNode until you find it.
Try `elHighlight.style.removeProperty('background-color')` `elHighlight.style.removeProperty('background-image')`
How can I undo the setting of element.style properties?
[ "", "javascript", "css", "animation", "" ]
I'm attempting to capture the output of `fputcsv()` in order to use `gzwrite()` to actually write to a tab-delimited file. Basically, I'm querying a database and I want to put these rows into a gzipped CSV file and I'd rather use `fputcsv()` than actually append `"\t"` and `"\n"` everywhere. Can I somehow do this with output buffering or something similar? Here's the basic outline of what I have:

```
$results = get_data_from_db();
$fp = gzopen($file_name, 'w');
if($fp) {
    foreach ($results as $row) {
        ???//something with gzwrite() ?
    }
    gzclose($fp);
}
```

Thanks!

EDIT: My understanding was that `gzwrite()` needs to be used to actually write to the file in order for it to actually be gzipped - is this not correct?
Maybe I'm misunderstanding the question but [`fputcsv()`](http://www.php.net/fputcsv) operates on a file handle, which is what gzopen returns, so you can do this:

```
$results = get_data_from_db();
$fp = gzopen($file_name, 'w');
if($fp) {
    foreach ($results as $row) {
        fputcsv($fp, $row);
    }
    gzclose($fp);
}
```

I've tested this and it works fine.
As read on [php.net](http://www.php.net/manual/en/function.fputcsv.php#74118), you could try this little trick:

```
<?php
// output up to 5MB is kept in memory, if it becomes bigger
// it will automatically be written to a temporary file
$csv = fopen('php://temp/maxmemory:'. (5*1024*1024), 'r+');

fputcsv($csv, array('blah','blah'));

rewind($csv);

// put it all in a variable
$output = stream_get_contents($csv);
?>
```

This is actually what you tried to avoid (capturing the output), but capturing is a very powerful tool, so why not use it?
Capturing file output in PHP (fputcsv)
[ "", "php", "" ]
First of all, I'm not into web programming. I bumped into Django and read a bit about models. I was intrigued by the following code (from djangoproject.com):

```
class Person(models.Model):
    first_name = models.CharField(max_length=50)
    last_name = models.CharField(max_length=50)

    def __str__(self):
        # Note use of django.utils.encoding.smart_str() here because
        # first_name and last_name will be unicode strings.
        return smart_str('%s %s' % (self.first_name, self.last_name))
```

By my understanding of Python, first\_name and last\_name are class variables, right? How is that used in code (because I guess that setting Person.first\_name or Person.last\_name will affect all Person instances)? Why is it used that way?
Yes, first\_name and last\_name are class variables. They define fields that will be created in a database table. There is a Person table that has first\_name and last\_name columns, so it makes sense for them to be at Class level at this point. For more on models, see: <http://docs.djangoproject.com/en/dev/topics/db/models/> When it comes to accessing instances of a Person in code, you are typically doing this via Django's ORM, and at this point they essentially behave as instance variables. For more on model instances, see: <http://docs.djangoproject.com/en/dev/ref/models/instances/?from=olddocs>
The essence of your question is "how come these class variables (which I assign Field objects to) suddenly become instance variables (which I assign data to) in Django's ORM"? The answer to that is the magic of Python [metaclasses](http://docs.python.org/reference/datamodel.html). A metaclass allows you to hook into and modify the process of creating a Python class (not the creation of an instance of that class, the creation of the class itself). Django's Model object (and thus also your models, which are subclasses) has a [ModelBase metaclass](http://code.djangoproject.com/browser/django/trunk/django/db/models/base.py). It looks through all the class attributes of your model, and any that are instances of a Field subclass it moves into a fields list. That list is assigned as an attribute of the `_meta` object, which is a class attribute of the model. Thus you can always get to the actual Field objects via `MyModel._meta.fields`, or `MyModel._meta.get_field('field_name')`. The `Model.__init__` method is then able to use the `_meta.fields` list to determine what instance attributes should be initialized when a model instance is created. Don't be afraid to dive into the Django source code; it's a great source of education!
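A stripped-down sketch (Python 3 syntax) of the mechanism described above. The names `Field`, `ModelMeta`, `Model`, and `_fields` are illustrative only — Django's real implementation is `django.db.models.base.ModelBase`, and it stores the collected fields on `_meta`, not `_fields`:

```python
class Field(object):
    """Stand-in for django.db.models.Field."""
    def __init__(self, **options):
        self.options = options

class ModelMeta(type):
    def __new__(mcs, name, bases, attrs):
        # Pull every Field instance out of the class namespace...
        fields = {k: v for k, v in attrs.items() if isinstance(v, Field)}
        for k in fields:
            del attrs[k]
        cls = super(ModelMeta, mcs).__new__(mcs, name, bases, attrs)
        # ...and stash the collected Field objects on the class itself,
        # analogous to Django's MyModel._meta.fields.
        cls._fields = fields
        return cls

class Model(metaclass=ModelMeta):
    def __init__(self, **kwargs):
        # Initialize an instance attribute for every declared field,
        # which is why fields behave like instance variables at runtime.
        for name in self._fields:
            setattr(self, name, kwargs.get(name))

class Person(Model):
    first_name = Field(max_length=50)
    last_name = Field(max_length=50)

p = Person(first_name="Ada", last_name="Lovelace")
print(sorted(Person._fields))  # ['first_name', 'last_name']
print(p.first_name)            # Ada
```

The key point: by the time `Person` exists as a class, the `Field` objects have been moved out of the class namespace into metadata, so assigning `p.first_name` sets plain instance data.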
How do Django model fields work?
[ "", "python", "django", "django-models", "" ]
Just wanted opinions on a design question. If you have a C++ class that owns other objects, would you use smart pointers to achieve this?

```
class Example {
public:
    // ...
private:
    boost::scoped_ptr<Owned> data;
};
```

The 'Owned' object can't be stored by value because it may change through the lifetime of the object. My view of it is that on the one side, you make it clear that the object is owned and ensure its deletion, but on the flipside, you could easily just have a regular pointer and delete it in the destructor. Is this overkill?

**Follow up:** Just wanted to say thanks for all your answers. Thanks for the heads-up about auto\_ptr leaving the other object with a NULL pointer when the whole object is copied, I have used auto\_ptr extensively but had not thought of that yet. I make basically all my classes boost::noncopyable unless I have a good reason, so there's nothing to worry about there. And thanks also for the information on memory leaks in exceptions, that's good to know too. I try not to write things which could cause exceptions in the constructor anyway - there are better ways of doing that - so that shouldn't be a problem.

I just had another question though. What I wanted when I asked this question was to know whether anyone actually did this, and you all seem to mention that it's a good idea theoretically, but no one's said they actually do it. Which surprises me! Certainly one object owning a pointer to another is not a new idea, I would have expected you all would have done it before at some point. What's going on?
It's a good idea. It helps simplify your code, and ensure that when you do change the Owned object during the lifetime of the object, the previous one gets destroyed properly. You have to remember that scoped\_ptr is noncopyable, though, which makes your class noncopyable by default until/unless you add your own copy constructor, etc. (Of course, using the default copy constructor in the case of raw pointers would be a no-no too!)

If your class has more than one pointer field, then use of scoped\_ptr actually improves exception safety in one case:

```
class C {
    Owned * o1;
    Owned * o2;
public:
    C() : o1(new Owned), o2(new Owned) {}
    ~C() { delete o1; delete o2; }
};
```

Now, imagine that during construction of a C the second "new Owned" throws an exception (out-of-memory, for example). o1 will be leaked, because C::~C() (the destructor) won't get called, because the object has not been completely constructed yet. The destructor of any completely constructed member field *does* get called though. So, using a scoped\_ptr instead of a plain pointer will allow o1 to be properly destroyed.
scoped\_ptr is very good for this purpose. But one has to understand its semantics. You can group smart pointers using two major properties:

* Copyable: A smart pointer can be copied: The copy and the original share ownership.
* Movable: A smart pointer can be moved: The move-result will have ownership, the original won't own anymore.

That's rather common terminology. For smart pointers, there is a specific terminology which better marks those properties:

* Transfer of Ownership: A smart pointer is Movable
* Share of Ownership: A smart pointer is Copyable.

If a smart pointer is already copyable, it's easy to support transfer-of-ownership semantic: That then is just an atomic *copy & reset-of-original* operation, restricting that to smart pointers of certain kinds (e.g only temporary smart pointers).

Let's group the available smart pointers, using `(C)opyable`, `(M)ovable`, and `(N)either`:

1. `boost::scoped_ptr`: N
2. `std::auto_ptr`: M
3. `boost::shared_ptr`: C

`auto_ptr` has one big problem, in that it realizes the Movable concept using a copy constructor. That is because when auto\_ptr was accepted into C++, there wasn't yet a way to natively support move semantics using a move constructor, as opposed to the new C++ Standard. That is, you can do the following with auto\_ptr, and it works:

```
auto_ptr<int> a(new int), b;

// oops, after this, a is reset. But a copy was desired!
// it does the copy&reset-of-original, but it's not restricted to only temporary
// auto_ptrs (so, not to ones that are returned from functions, for example).
b = a;
```

Anyway, as we see, in your case you won't be able to transfer the ownership to another object: Your object will in effect be non-copyable. And in the next C++ Standard, it will be non-movable if you stay with scoped\_ptr. 
For implementing your class with scoped\_ptr, watch that you have one of these two points satisfied:

* Write a destructor (even if it's empty) in the .cpp file of your class, or
* Make `Owned` a completely defined class.

Otherwise, when you would create an object of Example, the compiler would implicitly define a destructor for you, which would call scoped\_ptr's destructor:

```
~Example() { ptr.~scoped_ptr<Owned>(); }
```

That would then make scoped\_ptr call `boost::checked_delete`, which would complain about `Owned` being incomplete, in case you haven't done any of the above two points. If you have defined your own dtor in the .cpp file, the implicit call to the destructor of scoped\_ptr would be made from the .cpp file, in which you could place the definition of your `Owned` class.

You have that same problem with auto\_ptr, but you have one more problem: Providing auto\_ptr with an incomplete type is undefined behavior currently (maybe it will be fixed for the next C++ version). So, when you use auto\_ptr, you *have* to make Owned a complete type within your header file.

shared\_ptr doesn't have that problem, because it uses a polymorphic deleter, which makes an indirect call to the delete. So the deleting function is not instantiated at the time the destructor is instantiated, but at the time the deleter is created in shared\_ptr's constructor.
C++ using scoped_ptr as a member variable
[ "", "c++", "oop", "smart-pointers", "" ]
Why would someone use numeric(12, 0) datatype for a simple integer ID column? If you have a reason why this is better than int or bigint I would like to hear it. We are not doing any math on this column, it is simply an ID used for foreign key linking. I am compiling a list of programming errors and performance issues about a product, and I want to be sure they didn't do this for some logical reason. If you follow this link: <http://msdn.microsoft.com/en-us/library/ms187746.aspx> ... you can see that the numeric(12, 0) uses 9 bytes of storage and being limited to 12 digits, there's a total of 2 trillion numbers if you include negatives. WHY would a person use this when they could use a bigint and get 10 million times as many numbers with one byte less storage. Furthermore, since this is being used as a product ID, the 4 billion numbers of a standard int would have been more than enough. So before I grab the torches and pitchforks - tell me what they are going to say in their defense? And no, I'm not making a huge deal out of nothing, there are hundreds of issues like this in the software, and it's all causing a huge performance problem and using too much space in the database. And we paid over a million bucks for this crap... so I take it kinda seriously.
Perhaps they're used to working with Oracle? All numeric types including ints are normalized to a standard single representation among all platforms.
There are many reasons to use numeric - for example - financial data and other stuff which needs to be accurate to certain decimal places. However, for the example you cited above, a simple int would have done. Perhaps sloppy programmers who didn't know how to design a database?
Is there any reason for numeric rather than int in T-SQL?
[ "", "sql", "database", "oracle", "t-sql", "types", "" ]
My friend is working on a project in which he needs to get some digital signals into a computer to display/manipulate them. So I advised him to feed those signals in through a USB port due to its popularity (because the device (which outputs the signals) and the program used for display and manipulation should both be designed for real world usage and might be sold in the market). The targeted platform is Windows primarily but it would be better if the software was cross-platform. And my friend has knowledge in C++, and wouldn't mind learning C as well. Where should he start, and what are the steps to get the signals into memory? Many Thanks
There is a great article here: [USB hardware/software integration](http://www.codeproject.com/KB/system/HIDAche.aspx "USB hardware/software integration") that describes the process in full.
Get a [Labjack](http://www.labjack.com/) or one of the [Phidgets](http://www.phidgets.com/). (I've used the former several times, not the latter) They're fairly inexpensive and both have Java driver support, among other things. (like LabView drivers)
How to load digital signals from a USB port into memory?
[ "", "c++", "usb", "driver", "" ]
I want to execute an SQL query like:

```
select 'tb1'.'f1','tb1'.'f2','tb2'.'f1' from 'tb1','tb2';
```

Now the problem is that I want to put it into an array in PHP like:

```
$result['tb1']['f1'], $result['tb1']['f2'], $result['tb2']['f1']...
```

Any idea how to achieve the above? Afaik there is no function which does the above. I was wondering the best simple way to do it. I do not want to use a query like "select .. as .." unless necessary. I do not know in advance what the fields will be, so I cannot assign them manually as suggested in the answer by benlumley.

Thank you, Alec
You'll need to select the data as you are already doing, and then loop over it getting it into the format required, and because the fields have the same names, it's easiest to use aliases or they'll just overwrite each other in the returned data (but you could use mysql_fetch_row instead, which returns a numerically indexed array).

For example (note the output array is named `$out` so it doesn't clobber the `$result` resource being fetched from):

```
$sql = "select tb1.f1 as tb1f1, tb1.f2 as tb1f2, tb2.f1 as tb2f1 from tb1, tb2";
$result = mysql_query($sql);
while ($row = mysql_fetch_assoc($result)) {
    $out['t1']['f1'] = $row['tb1f1'];
    $out['t1']['f2'] = $row['tb1f2'];
    $out['t2']['f1'] = $row['tb2f1'];
}
```

(The quoting was wrong in your sql as well)

That won't handle multiple rows either, but your question sort of implies that you are only ever expecting one row?

Without aliases:

```
$sql = "select tb1.f1, tb1.f2, tb2.f1 from tb1, tb2";
$result = mysql_query($sql);
while ($row = mysql_fetch_row($result)) {
    $out['t1']['f1'] = $row[0];
    $out['t1']['f2'] = $row[1];
    $out['t2']['f1'] = $row[2];
}
```

I prefer the first version unless you have a good reason to use the second, as it's less likely to result in errors if you ever change the sql or add fields etc.

EDIT: Taking the meta data idea from the response below ....

```
<?php
mysql_connect('localhost', 'username', 'password');
mysql_select_db('dbname');

$result = mysql_query('select tb1.f1, tb1.f2, tb2.f1 from tb1, tb2');

$meta = array();
for ($i = 0; $i < mysql_num_fields($result); ++$i) {
    $meta[$i] = mysql_fetch_field($result, $i);
}

while ($row = mysql_fetch_row($result)) {
    foreach($row as $key=>$value) {
        $out[$meta[$key]->table][$meta[$key]->name] = $value;
    }
}
```
Since you say you can't specify column aliases, and you can't know the fields of the query beforehand, I'd suggest a solution using [`mysql_fetch_field()`](http://php.net/manual/en/function.mysql-fetch-field.php) to get metadata information: ``` <?php mysql_connect('localhost', 'username', 'password'); mysql_select_db('dbname'); $result = mysql_query('select tb1.f1, tb1.f2, tb2.f1 from tb1, tb2'); for ($i = 0; $i < mysql_num_fields($result); ++$i) { $meta = mysql_fetch_field($result, $i); print_r($meta); } ``` You can extract from this metadata information the table name and column name, even when there are multiple columns of the same name in the query. PHP's ext/mysqli supports a similar function [`mysqli_stmt::result_metadata()`](http://php.net/manual/en/mysqli-stmt.result-metadata.php), but you said you can't know the number of fields in the query beforehand, which makes it awkward to use [`mysqli_stmt::bind_result()`](http://php.net/manual/en/mysqli-stmt.bind-result.php). PDO\_mysql doesn't seem to support result set metadata at this time. --- The output from the above script is below. ``` stdClass Object ( [name] => f1 [table] => tb1 [def] => [max_length] => 1 [not_null] => 0 [primary_key] => 0 [multiple_key] => 0 [unique_key] => 0 [numeric] => 1 [blob] => 0 [type] => int [unsigned] => 0 [zerofill] => 0 ) stdClass Object ( [name] => f2 [table] => tb1 [def] => [max_length] => 1 [not_null] => 0 [primary_key] => 0 [multiple_key] => 0 [unique_key] => 0 [numeric] => 1 [blob] => 0 [type] => int [unsigned] => 0 [zerofill] => 0 ) stdClass Object ( [name] => f1 [table] => tb2 [def] => [max_length] => 1 [not_null] => 0 [primary_key] => 0 [multiple_key] => 0 [unique_key] => 0 [numeric] => 1 [blob] => 0 [type] => int [unsigned] => 0 [zerofill] => 0 ) ```
Place multiple similar fields in multi-dimensional array - php mysql
[ "", "php", "mysql", "" ]
I am returning a List from my WCF method. In my client code, its return type shows as MyObject[]. I have to either use MyObject[], or IList, or IEnumerable... ``` WCFClient myClient = new WCFClient(); MyObject[] list = myClient.GetMyStuff(); or IList<MyObject> list = myClient.GetMyStuff(); or IEnumerable<MyObject> list = myClient.GetMyStuff(); ``` All I am doing is taking this collection and binding it to a grid. What is the best type to assign my returned collection to?
You can specify that you want to use a generic list instead of an array by clicking the advanced button when you add a reference, or you can right click on the service reference and choose configure to change it in place. The reason is that WCF serializes Generic lists as arrays to send across the wire. The configuration is just telling svcutil to create a proxy that converts them back to a generic list for your convenience.
When you use `svcutil.exe` to create your client code you need to tell it how to resolve certain references that are not available to it. This is how you would do it for `List<T>`: ``` svcutil /o:YourService.cs /ct:System.Collections.Generic.List`1 http://example.com/mex ```
Why does WCF return myObject[] instead of List<T> like I was expecting?
[ "", "c#", "wcf", "collections", "" ]
After a bit of speed tracing I found a piece of code (called very, very often) which converts values of one enum to values of another enum, like this: ``` public Enum2 ConvertToEnum2(Enum1 enum1) { switch (enum1) { case Enum1.One: return Enum2.One; case Enum1.Two: return Enum2.Two; default: throw new ArgumentOutOfRangeException("enum1"); } } ``` Would it be more performant if I saved those conversions in a Dictionary and just did something like this: ``` public Enum2 ConvertToEnum2(Enum1 enum1) { return m_ConversionTable[enum1]; } ``` Thanks for your comments!
A dictionary definitely would not be faster. If the values in Enum1 are sequential then an array of Enum2 would likely be faster (but that could be marginal). If Enum1 is close to sequential, so that the array isn't too sparse, it may still be a useful approach. For an enum with the [Flags] attribute, the switch is probably the best.
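If you want to experiment with the lookup-table idea before benchmarking the C#, the shape is easy to reproduce. A rough Python sketch, where a dict stands in for the Dictionary (or the array variant), might look like this; the enum names are taken from the question, everything else is illustrative:

```python
from enum import Enum

class Enum1(Enum):
    One = 1
    Two = 2

class Enum2(Enum):
    One = 1
    Two = 2

# Build the lookup table once, up front; per-call work is then a
# single hash lookup instead of a branch per enum member.
_CONVERSION = {Enum1.One: Enum2.One, Enum1.Two: Enum2.Two}

def convert_to_enum2(e1):
    return _CONVERSION[e1]
```

Timing the two variants (e.g. with `timeit`) on your own data is the only reliable way to settle which is faster on your runtime.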
Are you sure that it's a bottleneck? I found that many profilers report incorrect time percentages for small methods. On my computer I can execute 100 million conversions between enums with 10 elements in 1.7 seconds using a switch (Anthony's answer is 10 times faster).
Performance of enum conversion
[ "", "c#", "performance", "enums", "" ]
Can someone please explain what the "&" does in the following: ``` class TEST { } $abc =& new TEST(); ``` I know it is by reference. But can someone illustrate why and when I would need such a thing? Or point me to a url where this is explained well. I am unable to grasp the concept. Thank you very much.
As I understand it, you're not asking about PHP references in general, but about the `$foo =& new Bar();` construction idiom. This is only seen in PHP4 as the usual `$foo = new Bar()` stores a *copy* of the object. This generally goes unnoticed unless the class stored a reference to $this in the constructor. When calling a method on the returned object later on, there would be two distinct copies of the object in existence when the intention was probably to have just one. Consider this code, where the constructor stores a reference to $this in a global var: ``` class Bar { function Bar(){ $GLOBALS['copy']=&$this; $this->str="hello"; } } //store copy of constructed object $x=new Bar; $x->str="goodbye"; echo $copy->str."\n"; //hello echo $x->str."\n"; //goodbye //store reference to constructed object $x=&new Bar; $x->str="au revoir"; echo $copy->str."\n"; //au revoir echo $x->str."\n"; //au revoir ``` In the first example, $x and $copy refer to different instances of Bar, but in the second they are the same.
Firstly, you don't really need to use it if you are using PHP 5: in PHP 5, all objects are passed by reference by default. Secondly, when you assign an object to a variable name, either by creation, passing in a parameter, or setting a variable value, you are either doing so by reference or by value. Passing by reference means you pass the actual memory reference for the object, so if you passed an object as a parameter to a function, any changes that function makes to that variable will be reflected in the parent method as well; you are actually changing the state of that object in memory. The alternative, to pass by value, means you pass a copy of that object, not the memory reference, so any changes you make will not be reflected in the original.
Php By Reference
[ "", "php", "reference", "" ]
I have a page that is accessed via a URL like this: <http://power-coder.net/Test/something.php?id=3#Page1> I know how to access the id parameter using $\_GET, however is there a way for me to access the #Page1 part? I have looked at the $\_SERVER array and the REQUEST\_URI ends at ?id=3. I know that I could also change the #Page1 to be an additional parameter like &Page=1, however there is a fair bit of code using the old URL type that I would like to avoid re-writing if at all possible. Thanks for the help.
No, there isn't a way. The fragment part (the label after #) is not transmitted to the server. The browser retrieves the document <http://power-coder.net/Test/something.php?id=3> and then goes to the correct anchor (if there is one) in the document.
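You can see the split with any URL parser. For example, Python's standard library hands the fragment back as its own component, precisely because it is not part of what gets sent to the server:

```python
from urllib.parse import urlsplit

# The query string travels to the server; the fragment stays with the
# browser (or whatever client-side code built the URL).
parts = urlsplit("http://power-coder.net/Test/something.php?id=3#Page1")
```

Only the path and query from `parts` end up in the HTTP request line; `parts.fragment` exists solely on the client side, which is why PHP never sees it.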
The relevant text from [the specs](http://www.faqs.org/rfcs/rfc2396.html): > 4.1. Fragment Identifier > > When a URI reference is used to perform a retrieval action on the > identified resource, the optional fragment identifier, separated from > the URI by a crosshatch ("#") character, consists of additional > reference information to be interpreted by the user agent after the > retrieval action has been successfully completed. As such, it is not > part of a URI, but is often used in conjunction with a URI.
Is it possible to access anchors in a querystring via PHP?
[ "", "php", "url", "query-string", "anchor", "" ]
Consider these examples using `print` in Python: ``` >>> for i in range(4): print('.') . . . . >>> print('.', '.', '.', '.') . . . . ``` Either a newline or a space is added between each value. How can I avoid that, so that the output is `....` instead? In other words, how can I "append" strings to the standard output stream?
In Python 3, you can use the `sep=` and `end=` parameters of the [`print`](https://docs.python.org/library/functions.html#print) function: To not add a newline to the end of the string: ``` print('.', end='') ``` To not add a space between all the function arguments you want to print: ``` print('a', 'b', 'c', sep='') ``` You can pass any string to either parameter, and you can use both parameters at the same time. If you are having trouble with buffering, you can flush the output by adding `flush=True` keyword argument: ``` print('.', end='', flush=True) ``` ## Python 2.6 and 2.7 From Python 2.6 you can either import the `print` function from Python 3 using the [`__future__` module](https://docs.python.org/2/library/__future__.html): ``` from __future__ import print_function ``` which allows you to use the Python 3 solution above. However, note that the `flush` keyword is not available in the version of the `print` function imported from `__future__` in Python 2; it only works in Python 3, more specifically 3.3 and later. In earlier versions you'll still need to flush manually with a call to `sys.stdout.flush()`. You'll also have to rewrite all other print statements in the file where you do this import. Or you can use [`sys.stdout.write()`](https://docs.python.org/library/sys.html#sys.stdout) ``` import sys sys.stdout.write('.') ``` You may also need to call ``` sys.stdout.flush() ``` to ensure `stdout` is flushed immediately.
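Putting `end=''` to work on the loop from the question, the four dots land on a single line. The output is captured into a string here just to show exactly what gets printed:

```python
import io
from contextlib import redirect_stdout

buf = io.StringIO()
with redirect_stdout(buf):
    for _ in range(4):
        print('.', end='')  # no newline (and no space) after each dot
    print()                 # one trailing newline at the very end
dots = buf.getvalue()
```

Without the capture, the loop body alone is all you need: `print('.', end='')`.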
For Python 2 and earlier, it should be as simple as described in *[Re: How does one print without a CR?](http://legacy.python.org/search/hypermail/python-1992/0115.html)* by [Guido van Rossum](https://en.wikipedia.org/wiki/Guido_van_Rossum) (paraphrased): > Is it possible to print something, but not automatically have a > carriage return appended to it? Yes, append a comma after the last argument to print. For instance, this loop prints the numbers 0..9 on a line separated by spaces. Note the parameterless "print" that adds the final newline: ``` >>> for i in range(10): ... print i, ... else: ... print ... 0 1 2 3 4 5 6 7 8 9 >>> ```
How to print without a newline or space
[ "", "python", "trailing-newline", "" ]
I've got data in SQL Server 2005 that contains HTML tags and I'd like to strip all that out, leaving just the text between the tags. Ideally also replacing things like `&lt;` with `<`, etc. Is there an easy way to do this or has someone already got some sample T-SQL code? I don't have the ability to add extended stored procs and the like, so would prefer a pure T-SQL approach (preferably one backwards compatible with SQL 2000). I just want to retrieve the data with stripped out HTML, not update it, so ideally it would be written as a user-defined function, to make for easy reuse. So for example converting this: ``` <B>Some useful text</B>&nbsp; <A onclick="return openInfo(this)" href="http://there.com/3ce984e88d0531bac5349" target=globalhelp> <IMG title="Source Description" height=15 alt="Source Description" src="/ri/new_info.gif" width=15 align=top border=0> </A>&gt;&nbsp;<b>more text</b></TD></TR> ``` to this: ``` Some useful text > more text ```
There is a UDF that will do that described here: [User Defined Function to Strip HTML](http://blog.sqlauthority.com/2007/06/16/sql-server-udf-user-defined-function-to-strip-html-parse-html-no-regular-expression/) ``` CREATE FUNCTION [dbo].[udf_StripHTML] (@HTMLText VARCHAR(MAX)) RETURNS VARCHAR(MAX) AS BEGIN DECLARE @Start INT DECLARE @End INT DECLARE @Length INT SET @Start = CHARINDEX('<',@HTMLText) SET @End = CHARINDEX('>',@HTMLText,CHARINDEX('<',@HTMLText)) SET @Length = (@End - @Start) + 1 WHILE @Start > 0 AND @End > 0 AND @Length > 0 BEGIN SET @HTMLText = STUFF(@HTMLText,@Start,@Length,'') SET @Start = CHARINDEX('<',@HTMLText) SET @End = CHARINDEX('>',@HTMLText,CHARINDEX('<',@HTMLText)) SET @Length = (@End - @Start) + 1 END RETURN LTRIM(RTRIM(@HTMLText)) END GO ``` Edit: note this is for SQL Server 2005, but if you change the keyword MAX to something like 4000, it will work in SQL Server 2000 as well.
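If you want to sanity-check the algorithm outside the database first, the CHARINDEX/STUFF loop translates almost line for line. A rough Python sketch of the same logic follows; it shares the UDF's limitations, for instance it does not decode entities like `&lt;`:

```python
def strip_html(text):
    """Remove <...> tags the same way the T-SQL UDF does:
    repeatedly cut the first '<'...'>' span until none remain."""
    start = text.find('<')
    while start != -1:
        end = text.find('>', start)
        if end == -1:
            break  # a '<' with no closing '>': stop, as the UDF's loop would
        text = text[:start] + text[end + 1:]
        start = text.find('<')
    return text.strip()

sample = "<B>Some useful text</B> <b>more text</b>"
```

As with the UDF, entity decoding (e.g. turning `&gt;` into `>`) would be a separate replace pass on top of this.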
Derived from @Goner Doug answer, with a few things updated: - using REPLACE where possible - conversion of predefined entities like `&eacute;` (I chose the ones I needed :-) - some conversion of list tags `<ul> and <li>` ``` ALTER FUNCTION [dbo].[udf_StripHTML] --by Patrick Honorez --- www.idevlop.com --inspired by http://stackoverflow.com/questions/457701/best-way-to-strip-html-tags-from-a-string-in-sql-server/39253602#39253602 ( @HTMLText varchar(MAX) ) RETURNS varchar(MAX) AS BEGIN DECLARE @Start int DECLARE @End int DECLARE @Length int set @HTMLText = replace(@htmlText, '<br>',CHAR(13) + CHAR(10)) set @HTMLText = replace(@htmlText, '<br/>',CHAR(13) + CHAR(10)) set @HTMLText = replace(@htmlText, '<br />',CHAR(13) + CHAR(10)) set @HTMLText = replace(@htmlText, '<li>','- ') set @HTMLText = replace(@htmlText, '</li>',CHAR(13) + CHAR(10)) set @HTMLText = replace(@htmlText, '&rsquo;' collate Latin1_General_CS_AS, '''' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&quot;' collate Latin1_General_CS_AS, '"' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&amp;' collate Latin1_General_CS_AS, '&' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&euro;' collate Latin1_General_CS_AS, '€' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&lt;' collate Latin1_General_CS_AS, '<' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&gt;' collate Latin1_General_CS_AS, '>' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&oelig;' collate Latin1_General_CS_AS, 'oe' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&nbsp;' collate Latin1_General_CS_AS, ' ' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&copy;' collate Latin1_General_CS_AS, '©' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&laquo;' collate Latin1_General_CS_AS, '«' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&reg;' collate 
Latin1_General_CS_AS, '®' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&plusmn;' collate Latin1_General_CS_AS, '±' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&sup2;' collate Latin1_General_CS_AS, '²' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&sup3;' collate Latin1_General_CS_AS, '³' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&micro;' collate Latin1_General_CS_AS, 'µ' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&middot;' collate Latin1_General_CS_AS, '·' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&ordm;' collate Latin1_General_CS_AS, 'º' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&raquo;' collate Latin1_General_CS_AS, '»' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&frac14;' collate Latin1_General_CS_AS, '¼' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&frac12;' collate Latin1_General_CS_AS, '½' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&frac34;' collate Latin1_General_CS_AS, '¾' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&Aelig' collate Latin1_General_CS_AS, 'Æ' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&Ccedil;' collate Latin1_General_CS_AS, 'Ç' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&Egrave;' collate Latin1_General_CS_AS, 'È' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&Eacute;' collate Latin1_General_CS_AS, 'É' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&Ecirc;' collate Latin1_General_CS_AS, 'Ê' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&Ouml;' collate Latin1_General_CS_AS, 'Ö' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&agrave;' collate Latin1_General_CS_AS, 'à' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&acirc;' collate Latin1_General_CS_AS, 'â' collate 
Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&auml;' collate Latin1_General_CS_AS, 'ä' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&aelig;' collate Latin1_General_CS_AS, 'æ' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&ccedil;' collate Latin1_General_CS_AS, 'ç' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&egrave;' collate Latin1_General_CS_AS, 'è' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&eacute;' collate Latin1_General_CS_AS, 'é' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&ecirc;' collate Latin1_General_CS_AS, 'ê' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&euml;' collate Latin1_General_CS_AS, 'ë' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&icirc;' collate Latin1_General_CS_AS, 'î' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&ocirc;' collate Latin1_General_CS_AS, 'ô' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&ouml;' collate Latin1_General_CS_AS, 'ö' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&divide;' collate Latin1_General_CS_AS, '÷' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&oslash;' collate Latin1_General_CS_AS, 'ø' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&ugrave;' collate Latin1_General_CS_AS, 'ù' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&uacute;' collate Latin1_General_CS_AS, 'ú' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&ucirc;' collate Latin1_General_CS_AS, 'û' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&uuml;' collate Latin1_General_CS_AS, 'ü' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&quot;' collate Latin1_General_CS_AS, '"' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&amp;' collate Latin1_General_CS_AS, '&' collate Latin1_General_CS_AS) set @HTMLText = 
replace(@htmlText, '&lsaquo;' collate Latin1_General_CS_AS, '<' collate Latin1_General_CS_AS) set @HTMLText = replace(@htmlText, '&rsaquo;' collate Latin1_General_CS_AS, '>' collate Latin1_General_CS_AS) -- Remove anything between <STYLE> tags SET @Start = CHARINDEX('<STYLE', @HTMLText) SET @End = CHARINDEX('</STYLE>', @HTMLText, CHARINDEX('<', @HTMLText)) + 7 SET @Length = (@End - @Start) + 1 WHILE (@Start > 0 AND @End > 0 AND @Length > 0) BEGIN SET @HTMLText = STUFF(@HTMLText, @Start, @Length, '') SET @Start = CHARINDEX('<STYLE', @HTMLText) SET @End = CHARINDEX('</STYLE>', @HTMLText, CHARINDEX('</STYLE>', @HTMLText)) + 7 SET @Length = (@End - @Start) + 1 END -- Remove anything between <whatever> tags SET @Start = CHARINDEX('<', @HTMLText) SET @End = CHARINDEX('>', @HTMLText, CHARINDEX('<', @HTMLText)) SET @Length = (@End - @Start) + 1 WHILE (@Start > 0 AND @End > 0 AND @Length > 0) BEGIN SET @HTMLText = STUFF(@HTMLText, @Start, @Length, '') SET @Start = CHARINDEX('<', @HTMLText) SET @End = CHARINDEX('>', @HTMLText, CHARINDEX('<', @HTMLText)) SET @Length = (@End - @Start) + 1 END RETURN LTRIM(RTRIM(@HTMLText)) END ```
How to strip HTML tags from a string in SQL Server?
[ "", "html", "sql", "sql-server", "string", "sql-server-2005", "" ]
I am running into quite an annoying issue while trying to deserialise a specific XML document using the XmlSerializer.Deserialize() method. Basically, I have a strongly typed XSD with an element of type double. When trying to deserialise the element for a specific XML document, I get the usual "System.FormatException: Input string was not in a correct format." exception because in that specific document, the element does not have a value. Here is some code for you nerds out there. **Sample XML document:** ``` <TrackInfo> <Name>Barcelona</Name> <Length>4591</Length> <AverageSpeed /> </TrackInfo> ``` **XSD:** ``` <?xml version="1.0" encoding="utf-8"?> <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="TrackInfo"> <xs:complexType> <xs:sequence> <xs:element name="Name" type="xs:string" /> <xs:element name="Length" type="xs:double" default="0.0" /> <xs:element name="AverageSpeed" type="xs:double" default="0.0" /> </xs:sequence> </xs:complexType> </xs:element> </xs:schema> ``` **TrackInfo class:** ``` [Serializable] public class TrackInfo { private string name = string.Empty; private double length = 0.0; private double averageSpeed = 0.0; [XmlElement] public string Name { ... } [XmlElement] public double Length { ... } [XmlElement] public double AverageSpeed { ... } } ``` **Sample deserialisation code:** ``` XmlDocument xmlDocument = new XmlDocument(); xmlDocument.Load("TrackInfo.xml"); // Deserialise XML string into TrackInfo object byte[] buffer = ASCIIEncoding.UTF8.GetBytes(xmlDocument.InnerXml); MemoryStream stream = new MemoryStream(buffer); System.Xml.XmlReader reader = new System.Xml.XmlTextReader(stream); XmlSerializer xSerializer = new System.Xml.Serialization.XmlSerializer(typeof(TrackInfo)); TrackInfo trackInfo = (TrackInfo)xSerializer.Deserialize(reader); ``` I know that the deserialisation exception comes from the fact that an empty string cannot be converted to a double. 
I also know that the default value is not assigned to AverageSpeed because, effectively, an empty string is a perfectly acceptable value. Is there an easy way to default double values to 0.0 (or any other type) when deserialising if an empty string value is found in the XML document? Ideally, I would like to avoid implementing ISerializable because I don't really feel like spending the rest of the day into the burning pit of hell (i.e. implementing ISerializable for about a hundred classes). Cheers! Jean-Michel
You can specify the default value like ``` [XmlElement] [System.ComponentModel.DefaultValueAttribute(0.0)] public double AverageSpeed { ... } ``` /edit: ok, strange behaviour here. Whatever value I set in the attribute, the result is always the field's initial value: ``` private double averageSpeed = 2.0; ``` But no exception occurs.
Check out MSDN documentation on DefaultValueAttribute: <http://msdn.microsoft.com/en-us/library/system.componentmodel.defaultvalueattribute.aspx> > **Note:** A DefaultValueAttribute will not cause a member to be > automatically initialized with the > attribute's value. You must set the > initial value in your code. I think the field initialisation will always have precedence over the DefaultValueAttribute value. In any case, this seems to be exactly what I was looking for. Many thanks to you Sebastian!
Error deserialising XML document with strongly typed XSD
[ "", "c#", "xml", "xsd", "serialization", "" ]
I have a huge bunch of XML files with the following structure: ``` <Stuff1> <Content>someContent</Content> <type>someType</type> </Stuff1> <Stuff2> <Content>someContent</Content> <type>someType</type> </Stuff2> <Stuff3> <Content>someContent</Content> <type>someType</type> </Stuff3> ... ... ``` I need to change each of the "Content" node names to StuffxContent; basically prepend the parent node name to the content node's name. I planned to use the `XmlDocument` class and figure out a way, but thought I would ask if there were any better ways to do this.
The XML you have provided shows that someone completely misses the point of XML. Instead of having ``` <stuff1> <content/> </stuff1> ``` You should have: ``` <stuff id="1"> <content/> </stuff> ``` Now you would be able to traverse the document using XPath (i.e., //stuff[@id='1']/content). The names of nodes should not be used to establish identity; you use attributes for that. To do what you asked, load the XML into an XmlDocument, and simply iterate through the first level of child nodes renaming them. PseudoCode: ``` foreach (XmlNode n in YourDoc.ChildNodes) { n.ChildNodes[0].Name = n.Name + n.ChildNodes[0].Name; } YourDoc.Save(); ``` However, I'd strongly recommend you actually fix the XML so that it is useful, instead of wrecking it further.
(1.) The [XmlElement / XmlNode].Name property is read-only. (2.) The XML structure used in the question is crude and could be improved. (3.) Regardless, here is a code solution to the given question: ``` String sampleXml = "<doc>"+ "<Stuff1>"+ "<Content>someContent</Content>"+ "<type>someType</type>"+ "</Stuff1>"+ "<Stuff2>"+ "<Content>someContent</Content>"+ "<type>someType</type>"+ "</Stuff2>"+ "<Stuff3>"+ "<Content>someContent</Content>"+ "<type>someType</type>"+ "</Stuff3>"+ "</doc>"; XmlDocument xmlDoc = new XmlDocument(); xmlDoc.LoadXml(sampleXml); XmlNodeList stuffNodeList = xmlDoc.SelectNodes("//*[starts-with(name(), 'Stuff')]"); foreach (XmlNode stuffNode in stuffNodeList) { // get existing 'Content' node XmlNode contentNode = stuffNode.SelectSingleNode("Content"); // create new (renamed) Content node XmlNode newNode = xmlDoc.CreateElement(contentNode.Name + stuffNode.Name); // [if needed] copy existing Content children //newNode.InnerXml = stuffNode.InnerXml; // replace existing Content node with newly renamed Content node stuffNode.InsertBefore(newNode, contentNode); stuffNode.RemoveChild(contentNode); } //xmlDoc.Save ``` PS: I came here looking for a nicer way of renaming a node/element; I'm still looking.
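For comparison, not every XML API makes the node name read-only. Python's ElementTree, for instance, exposes the tag as a plain writable attribute, so the same rename is a one-liner per node; a small sketch, assuming the corrected (well-formed) document structure:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<doc>"
    "<Stuff1><Content>someContent</Content><type>someType</type></Stuff1>"
    "<Stuff2><Content>someContent</Content><type>someType</type></Stuff2>"
    "</doc>"
)

for stuff in doc:                         # each <StuffN> element
    content = stuff.find("Content")
    content.tag = content.tag + stuff.tag  # Content -> ContentStuff1, ...

result = ET.tostring(doc, encoding="unicode")
```

In the DOM APIs discussed above you have to rebuild and replace the element instead, because the name is fixed at creation time.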
Change the node names in an XML file using C#
[ "", "c#", "xml", "" ]
I have been working with PostgreSQL, playing around with Wikipedia's millions of hyperlinks and such, for 2 years now. I either do my thing directly by sending SQL commands, or I write a client-side script in Python to manage a million queries when this cannot be done productively (efficiently and effectively) manually. I would run my Python script on my 32-bit laptop and have it communicate with a $6000 64-bit server running PostgreSQL; I would hence have an extra 2.10 GHz, 3 GB of RAM, psyco and a multithreaded SQL query manager. I now realize that it is time for me to level up. I need to learn server-side scripting using a procedural language (PL); I really need to reduce network traffic and its inherent serializing overhead. Now, I really do not feel like researching all the PLs. Knowing that I already know Python, and that I am looking for a good balance between effort and language efficiency, what PL do you guys fancy I should install, learn and use, and why and how?
Since you already know Python, PL/Python should be something to look at. And it sounds like you write SQL for your database queries, so PL/pgSQL is a natural extension of that. PL/pgSQL feels like SQL, just with all the stuff you would expect on top of plain SQL, like variables for whole rows and the usual control structures of procedural languages. It fits the way you usually interact with the database, but it's not the most elegant language of all time. I can't say anything about PL/Python, since I have never used it, but since you know Python it should be easy to flip through [some examples](http://www.postgresql.org/docs/8.3/interactive/plpython-funcs.html) and see if you like it.
Why can't you run your Python on the database server? That has the fewest complexities -- you can run the program you already have.
PostgreSQL procedural languages: to choose?
[ "", "python", "postgresql", "" ]
I've discovered this idiom recently, and I am wondering if there is something I am missing. I've never seen it used. Nearly all Java code I've worked with in the wild favors slurping data into a string or buffer, rather than something like this example (using HttpClient and XML APIs for example): ``` final LSOutput output; // XML stuff initialized elsewhere final LSSerializer serializer; final Document doc; // ... PostMethod post; // HttpClient post request final PipedOutputStream source = new PipedOutputStream(); PipedInputStream sink = new PipedInputStream(source); // ... executor.execute(new Runnable() { public void run() { output.setByteStream(source); serializer.write(doc, output); try { source.close(); } catch (IOException e) { throw new RuntimeException(e); } }}); post.setRequestEntity(new InputStreamRequestEntity(sink)); int status = httpClient.executeMethod(post); ``` That code uses a Unix-piping style technique to prevent multiple copies of the XML data being kept in memory. It uses the HTTP Post output stream and the DOM Load/Save API to serialize an XML Document as the content of the HTTP request. As far as I can tell it minimizes the use of memory with very little extra code (just the few lines for `Runnable`, `PipedInputStream`, and `PipedOutputStream`). So, what's wrong with this idiom? If there's nothing wrong with this idiom, why haven't I seen it? EDIT: to clarify, `PipedInputStream` and `PipedOutputStream` replace the boilerplate buffer-by-buffer copy that shows up everywhere, and they also allow you to process incoming data concurrently with writing out the processed data. They don't use OS pipes.
From the [Javadocs](http://docs.oracle.com/javase/7/docs/api/java/io/PipedInputStream.html): > Typically, data is read from a PipedInputStream object by one thread and data is written to the corresponding PipedOutputStream by some other thread. Attempting to use both objects from a single thread is not recommended, as it may deadlock the thread. This may partially explain why it is not more commonly used. I'd assume another reason is that many developers do not understand its purpose / benefit.
In your example you're creating two threads to do the work that could be done by one. And introducing I/O delays into the mix. Do you have a better example? Or did I just answer your question. --- To pull some of the comments (at least my view of them) into the main response: * Concurrency introduces complexity into an application. Instead of dealing with a single linear flow of data, you now have to be concerned about sequencing of independent data flows. In some cases, the added complexity may be justified, particularly if you can leverage multiple cores/CPUs to do CPU-intensive work. * If you are in a situation where you can benefit from concurrent operations, there's usually a better way to coordinate the flow of data between threads. For example, passing objects between threads using a concurrent queue, rather than wrapping the piped streams in object streams. * Where a piped stream may be a good solution is when you have multiple threads performing text processing, a la a Unix pipeline (eg: grep | sort). --- In the specific example, the piped stream allows use of an existing RequestEntity implementation class provided by HttpClient. I believe that a better solution is to create a new implementation class, as below, because the example is ultimately a sequential operation that cannot benefit from the complexity and overhead of a concurrent implementation. While I show the RequestEntity as an anonymous class, reusability would indicate that it should be a first-class class. ``` post.setRequestEntity(new RequestEntity() { public long getContentLength() { return 0-1; } public String getContentType() { return "text/xml"; } public boolean isRepeatable() { return false; } public void writeRequest(OutputStream out) throws IOException { output.setByteStream(out); serializer.write(doc, output); } }); ```
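For what it's worth, the idiom itself (producer on one thread, consumer on another, with a deadlock risk if both ends live on the same thread) is not Java-specific. A minimal Python sketch of the same shape using `os.pipe`:

```python
import os
import threading

# One thread writes into the pipe; another drains it. As with
# PipedOutputStream/PipedInputStream, doing both on a single thread
# risks deadlock once the pipe's internal buffer fills up.
read_fd, write_fd = os.pipe()

def produce():
    with os.fdopen(write_fd, "wb") as w:   # closing signals EOF to the reader
        for _ in range(3):
            w.write(b"<chunk/>")           # stand-in for serializer.write(doc, output)

t = threading.Thread(target=produce)
t.start()

with os.fdopen(read_fd, "rb") as r:
    received = r.read()                    # stand-in for the HTTP client draining the body
t.join()
```

The trade-off discussed above applies equally here: the second thread only pays off when the producer and consumer can genuinely overlap; otherwise writing straight into the destination stream is simpler.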
Why doesn't more Java code use PipedInputStream / PipedOutputStream?
[ "", "java", "design-patterns", "concurrency", "pipe", "" ]
The documentation of `System.Threading.Timer` says that I should keep a live reference for it to avoid it being garbage collected. But where should I do that? My `main` is very simple that I don't know where to keep the reference: ``` class Program { static void Main() { new System.Threading.Thread(myThreadStart).Start(); new System.Threading.Timer(myTimerCallback, new MyStateObject(), 0, 5000); } } ``` I thought about keeping the reference in a `static` field in the `Program` class, assuming that `static` fields do not get collected until the end of the application. But I'm not sure this is the best way to do it, so I'd appreciate your advice.
If your Timer is an application-level object there's nothing wrong with making it a private static member of your Main class. That's what I would do, anyway.
EDIT: My original answer is rubbish. Really rubbish. I've kept it here to explain *why* it's rubbish though - it's in the comments, but they'd have been deleted with the answer. GC.KeepAlive only makes sure a reference is treated as a root until after the call. In the code at the bottom of this answer, the GC.KeepAlive method would be called immediately, and then the timer would still be eligible for garbage collection. Because the newly created thread is a foreground thread, the app will run as long as it's alive (whereas the timer uses a background thread, which doesn't prevent program exit). This means that the Main method exits, but the application needs to keep running. Arguably a *simpler* solution would be to run `myThreadStart` in the main thread, rather than creating a new one and then letting the main thread die. In other words, a simple solution would be: ``` using System.Threading; class Program { static void Main() { Timer timer = new Timer(myTimerCallback, new MyStateObject(), 0, 5000); myThreadStart(); GC.KeepAlive(timer); } } ``` I assume the *real* code is more complicated though - in which case using a private static variable as suggested in other answers is probably the way to go. It really will depend on the usage though. I personally prefer not to create a static field just for the sake of preventing something being collected if there's an alternative (like the above) but sometimes it's virtually the only way of doing it. **Original (bad) answer:** If you really want to allocate it in Main, then you can use [GC.KeepAlive](http://msdn.microsoft.com/en-us/library/system.gc.keepalive.aspx): ``` using System.Threading; class Program { static void Main() { new Thread(myThreadStart).Start(); Timer timer = new Timer(myTimerCallback, new MyStateObject(), 0, 5000); GC.KeepAlive(timer); } } ```
In C#, where should I keep my timer's reference?
[ "", "c#", ".net", "garbage-collection", "reference", "timer", "" ]
K... I'm doing something obviously wrong. I have a simple page with a file input control on it and a submit button. I'm trying out the new "File" ActionResult that was released with the Mvc RC... All I want to happen is that when the submit button is clicked the selected file is uploaded to the database. This all works fine... Then, after the page refreshes I want an image tag to display the resulting image that was uploaded. The issue is that the image is not rendering... I get the broken image... This is the portion that is getting the file and sending it back to the view... ``` var a = Helper.Service.GetAttachmentById(id, MembershipProvider.SecurityTicket); if (a == null) { return View(new ImagePlaceHolderViewData(new { Id = id })); } return View(new ImagePlaceHolderViewData(new { Id = a.Id, Image = a, FileContent = File(a.Data, a.ContentType) })); ``` Then in the view I have an image tag like so... ``` <img src="<%=Model.FileContent.FileContents %>" /> ``` I have also tried... ``` <img src="<%=Model.FileContent%>" /> ``` Thoughts..??
FileResult returns the ASCII or binary contents of the file. When you do the following: ``` <img src="<%=Model.FileContent.FileContents %>" /> ``` You are attempting to push the binary image data into the `src` attribute. That will never work, because the `src` must be a URL or a path to an image. There are several ways of doing what you want, and the most correct solution in this case would be to create a new controller that returns the binary data like you are attempting, and then set the `src` attribute to the path of the corresponding action on your new controller. E.g: ``` <img src="/image/result/12345" /> ``` This points to the following (really simple and incomplete) example controller: ``` public class ImageController : Controller { public ActionResult Result(int resultID) { // Do stuff here... return File(..); } } ``` Note that the name I chose for the action is most likely not any good, but it serves its purpose as an example. Hope this was helpful.
I think it's pretty simple: the src attribute of your img tag requires an URL. What you're doing here is just putting the FileStream object in there (which implicitly calls the ToString method on that object). Your resulting html is probably something like this: ``` <img src="FileStream#1" /> ``` Did you check your html source? What you probably should do is provide a method which returns the data, and pass the route to that to your view. Your resulting html 'should' then look something like this: ``` <img src="/Images/View/1" /> ``` So, steps you have to do are: * Create a method on your controller that returns a FileContentResult * Pass your image ID to your view * Use Url.RouteUrl to generate an url which you can put in your img tag, which points to the method returning your image data.
Mvc Release Candidate "File" ActionResult
[ "", "c#", "asp.net-mvc", "" ]
I am able to send emails using the typical C# SMTP code across Exchange 2007 as long as both the from and to addresses are within my domain. As soon as I try to send emails outside the domain I get: Exception Details: System.Net.Mail.SmtpFailedRecipientException: Mailbox unavailable. The server response was: 5.7.1 Unable to relay How can I get exchange to accept my email and send it out to the internet?
Try #2... How about using an [Exchange Pickup Folder](http://www.msexchange.org/articles_tutorials/exchange-server-2007/management-administration/exchange-pickup-folder.html) instead? They are a faster way to send emails through Exchange because it just creates the email and drops it in the folder, no waiting to connect to the server or waiting for a reply. Plus I think it skips the whole relay issue. Configure your SmtpClient like so: ``` SmtpClient srv = new SmtpClient("exchsrv2007", 25) { DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory, PickupDirectoryLocation = @"\\exchsrv2007\PickupFolder" }; ... ```
Authenticate to the exchange server. <http://msdn.microsoft.com/en-us/library/system.net.mail.smtpclient.credentials.aspx> --- > DefaultNetworkCredentials returns > empty strings for username etc. and > causes this exception... Here is an [example](http://aspalliance.com/867), and here is [another](http://www.systemnetmail.com/faq/4.2.aspx), of sending an authenticated message with System.Net.Mail.
How do I send emails outside my domain with Exchange 2007 and c#
[ "", "c#", "smtp", "exchange-server-2007", "" ]
BACKGROUND: I have the following XUL fragment ``` <tree id="treeToChange" flex="1"> <treecols> <treecol label = "First Column" id="c1" flex="1"/> <treecol label = "Second Column" id="c2" flex="1"/> </treecols> <treechildren> <treeitem> <treerow> <treecell label="Data for Column 1"/> <treecell label="Data for Column 2"/> </treerow> </treeitem> </treechildren> </tree> ``` and the following css fragment ``` tree { font-size: 120%; color: green;} ``` This causes my column data to be displayed in green text. I have many such tree objects on the XUL page. QUESTION: In Firefox, in response to a click event which calls a JavaScript routine, how do I set the data for the tree "treeToChange" in column 1 to red and the data in column 2 to blue?
It turns out that element.style.color only affects the column headings, and that within Firefox the cells in a tree structure can only be styled by coding the data view. Code snippets follow: ``` // DatabaseTreeView: Create a custom nsITreeView DatabaseTreeView: function(aTableData, aColumns) { this.getCellProperties = function(row, col, props) { var aserv = Components.classes["@mozilla.org/atom-service;1"].getService(Components.interfaces.nsIAtomService); props.AppendElement(aserv.getAtom("color_" + col.id)); props.AppendElement(aserv.getAtom("font_" + col.id)); }; ``` ... and modify the css as follows ``` treechildren::-moz-tree-cell-text(color_c1){ color:DarkGreen} treechildren::-moz-tree-cell-text(color_c2){ color:Navy} treechildren::-moz-tree-cell-text(font_c1){ font-size:120%} treechildren::-moz-tree-cell-text(font_c2){ font-size:150%} ``` I hope this helps someone else in the future.
The style property of a DOM element contains all the CSS declarations for that element. The naming scheme is slightly different (camelCaps instead of dashes), but otherwise exactly the same. ``` element.style.color = 'blue'; ``` You can read more on the style property in the [Mozilla javascript manual](http://developer.mozilla.org/en/DOM/element.style).
In firefox, how to change color of text in a treecell using javascript
[ "", "javascript", "css", "firefox", "xul", "" ]
I am developing a C# GUI and was wondering if there were any (preferably free) runtime diagnostic programs available. What I'm looking for is a way to monitor user interactions with my GUI and what functions are called when. Preferably I do not want to add stacks of debug code, as this has the potential to change the behaviour of the bug I am looking for. Currently I'm using Spy++, but am finding it a little heavy going as I'm kinda new to this and it's generating stacks of data.
[EQATEC profiler](http://www.eqatec.com/tools/profiler) is pretty simple to use. Free too. Started life targeting the Compact Framework but since 2.00 is now geared more towards desktop too.
I recommend [ANTS profiler](http://www.red-gate.com/Products/ants_profiler/index.htm). There is a trial available, first two weeks free.
C# runtime application Diagnostics
[ "", "c#", "runtime", "diagnostics", "" ]
I've heard of [**static\_cast operator**](http://msdn.microsoft.com/en-us/library/c36yw7x9(VS.80).aspx) Recently I've come across **static\_case**, for instance: ``` *ppv = static_case<IUnknown> ``` What does this mean?
It's a typo: **there is no static\_case**, [only static\_cast, dynamic\_cast, const\_cast and reinterpret\_cast](http://www.cplusplus.com/doc/tutorial/typecasting.html). You can see [on google](http://www.google.fr/search?rlz=1C1GGLS_frFR299FR303&sourceid=chrome&ie=UTF-8&q=c%2B%2B+static_case) that the docs where you find "static\_case" have typos and use static\_cast and static\_case as if they were the same word. To be sure, just try to use static\_case in any available compiler.
There is nothing called **static\_case** in C++. There is just **static\_cast**.
What is static_case operator in C++?
[ "", "c++", "" ]
How can I change values in a string from 0,00 to 0.00? Only the numeric values, not every "," to "." FROM ``` string myInputString = "<?xml version=\"1.0\"?>\n<List xmlns:Table=\"urn:www.navision.com/Formats/Table\"><Row><HostelMST>12,0000</HostelMST><PublicMST>0,0000</PublicMST><TaxiMST>0,0000</TaxiMST><ParkMST>0,0000</ParkMST><RoadMST>0,0000</RoadMST><FoodMST>0,0000</FoodMST><ErrorCode>0</ErrorCode><ErrorDescription></ErrorDescription></Row></List>\n"; ``` TO ``` string myInputString = "<?xml version=\"1.0\"?>\n<List xmlns:Table=\"urn:www.navision.com/Formats/Table\"><Row><HostelMST>12.0000</HostelMST><PublicMST>0.0000</PublicMST><TaxiMST>0.0000</TaxiMST><ParkMST>0.0000</ParkMST><RoadMST>0.0000</RoadMST><FoodMST>0.0000</FoodMST><ErrorCode>0</ErrorCode><ErrorDescription></ErrorDescription></Row></List>\n"; ``` --- Thanks for the answers, but I mean to change only the numeric values, not every "," to ".". I don't want to change the string from ``` string = "<Attrib>txt txt, txt</Attrib><Attrib1>12,1223</Attrib1>"; ``` to ``` string = "<Attrib>txt txt. txt</Attrib><Attrib1>12.1223</Attrib1>"; ``` but this one is ok ``` string = "<Attrib>txt txt, txt</Attrib><Attrib1>12.1223</Attrib1>"; ```
I strongly recommend joel.neely's regex approach or the one below: 1. Use XmlReader to read all nodes 2. Use double.TryParse with a NumberFormatInfo that uses a comma as the decimal separator to identify the numbers 3. Use XmlWriter to write a new XML 4. Use CultureInfo.InvariantCulture to write the numbers in that XML
Try this : ``` Regex.Replace("attrib1='12,34' attrib2='43,22'", "(\\d),(\\d)", "$1.$2") ``` **output :** attrib1='12.34' attrib2='43.22'
how to change values in string from 0,00 to 0.00
[ "", "c#", ".net", "" ]
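The `(\d),(\d)` replacement shown in the answers above rewrites a comma only when it sits between two digits, which matches the asker's requirement. Here is the same idea as a sketch in Java rather than C#; the class and method names are made up for illustration, and for production XML the XmlReader/XmlWriter approach recommended in the chosen answer is still the safer route:

```java
public class DecimalSeparatorFix {
    // Replaces a comma only when it appears between two digits,
    // leaving commas in ordinary text untouched.
    static String fixDecimals(String input) {
        return input.replaceAll("(\\d),(\\d)", "$1.$2");
    }

    public static void main(String[] args) {
        String xml = "<Attrib>txt txt, txt</Attrib><Attrib1>12,1223</Attrib1>";
        System.out.println(fixDecimals(xml));
        // -> <Attrib>txt txt, txt</Attrib><Attrib1>12.1223</Attrib1>
    }
}
```

Note that, as a pure text transformation, this will also rewrite a digit-comma-digit sequence that happens to appear inside element text, which is why an XML-aware rewrite is more robust.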
There's not much documentation on how to deploy a Django project with [Spawning](http://pypi.python.org/pypi/Spawning/), and yet people are recommending it over apache/mod\_wsgi. [In another similar question](https://stackoverflow.com/questions/487224/reducing-django-memory-usage-low-hanging-fruit/487261#487261), another SO user suggested I open a new question specific to Spawning, so hopefully others can share their experiences too.
I'd be interested in seeing who's seriously recommending Spawning over Apache and mod\_python or mod\_wsgi. Judging by the fact that this question is now the #4 result in Google for 'django spawning', I'd say it's very much early days. :) If you're putting anything serious into production, stick to Apache/mod\_wsgi for now.
cd to the directory containing your Django project's settings.py. Here is the command line to serve your Django application: ``` spawn --factory=spawning.django_factory.config_factory settings --port 80 ```
How to deploy Django with Spawning
[ "", "python", "django", "deployment", "spawning", "" ]