I saw this reply from Jon on [Initialize generic object with unknown type](https://stackoverflow.com/questions/386500/initialize-generic-object-with-unknown-type):

> If you want a single collection to contain multiple unrelated types of values, however, you will have to use `List<object>`

I'm not comparing `ArrayList` vs `List<>`, but `ArrayList` vs `List<object>`, as both expose elements of type `object`. What would be the benefit of using either one in this case?

EDIT: Type safety is no concern here, since both classes expose `object` as their item type. One still needs to cast from `object` to the desired type. I'm more interested in anything other than type safety.

EDIT: Thanks Marc Gravell and Sean for the answers. Sorry, I can only pick one as the answer, so I'll upvote both.
You'll be able to use the LINQ extension methods directly with `List<object>`, but not with `ArrayList`, unless you inject a `Cast<object>()` / `OfType<object>()` (thanks to `IEnumerable<object>` vs `IEnumerable`). That's worth quite a bit, even if you don't need type safety etc. The speed will be about the same; structs will still be boxed, etc - so there isn't much else to tell them apart. Except that I tend to see `ArrayList` as "oops, somebody is writing legacy code again..." ;-p
One big benefit to using `List<object>` is that these days most code is written to use the generic classes/interfaces. I suspect that these days most people would write a method that takes an `IList<object>` instead of an `IList`. Since `ArrayList` doesn't implement `IList<object>`, you wouldn't be able to use an `ArrayList` in these scenarios. I tend to think of the non-generic classes/interfaces as legacy code and avoid them whenever possible.
ArrayList vs List<object>
[ "c#", ".net", "list", "arraylist" ]
I am trying to write a JavaScript function that will return its first argument (a function) with all the rest of its arguments preset as parameters to that function. So:

```
function out(a, b) {
    document.write(a + " " + b);
}

function setter(...) {...}

setter(out, "hello")("world");
setter(out, "hello", "world")();
```

would output "hello world" twice, for some implementation of `setter`. I ran into an issue manipulating the arguments array on my first try, but it seems there should be a better way to do this.
First of all, you need a partial - [**there is a difference between a partial and a curry**](https://stackoverflow.com/questions/218025/what-is-the-difference-between-currying-and-partial-application) - and here is all you need, *without a framework*:

```
function partial(func /*, 0..n args */) {
    var args = Array.prototype.slice.call(arguments, 1);
    return function() {
        var allArguments = args.concat(Array.prototype.slice.call(arguments));
        return func.apply(this, allArguments);
    };
}
```

Now, using your example, you can do exactly what you are after:

```
partial(out, "hello")("world");
partial(out, "hello", "world")();

// and here is my own extended example
var sayHelloTo = partial(out, "Hello");
sayHelloTo("World");
sayHelloTo("Alex");
```

The `partial()` function could be used to implement currying, but is not itself currying. Here is a quote from [**a blog post on the difference**](http://www.uncarved.com/blog/not_currying.mrk):

> Where partial application takes a function and from it builds a function which takes fewer arguments, currying builds functions which take multiple arguments by composition of functions which each take a single argument.
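To make the quoted distinction concrete, here is a minimal sketch; the `curry2` helper name is my own invention (not from any library), shown only to contrast the call shape with `partial()`:

```javascript
// curry2 is a hypothetical helper: it turns a two-argument function
// into a chain of one-argument functions (currying by hand).
function curry2(func) {
  return function (a) {
    return function (b) {
      return func(a, b);
    };
  };
}

function join(a, b) { return a + " " + b; }

// Partial application fixes some arguments up front and accepts the
// rest in one later call; currying takes exactly one argument per call.
var greet = curry2(join)("hello");
console.log(greet("world")); // "hello world"
```

The observable results are the same here; the difference is that a curried function always returns another single-argument function until all arguments have been supplied.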
Is [curried javascript](http://www.crockford.com/javascript/www_svendtofte_com/code/curried_javascript/index.html) what you're looking for?
How can I pre-set arguments in JavaScript function call? (Partial Function Application)
[ "javascript", "functional-programming" ]
I have a dialog in MFC with a `CStatusBar`. From a separate thread, I want to change the pane text of the status bar. However, MFC complains with asserts. How is it done? Example code would be great.
You could post a private message to the main frame window and 'ask' it to update the status bar. The thread would need the main window handle (don't use the `CWnd` object as it won't be thread safe). Here is some sample code:

```
// In CMainFrame's class declaration:
// static UINT UpdateStatusBarProc(LPVOID pParam);

void CMainFrame::OnCreateTestThread()
{
    // Create the thread and pass the window handle
    AfxBeginThread(UpdateStatusBarProc, m_hWnd);
}

LRESULT CMainFrame::OnUser(WPARAM wParam, LPARAM)
{
    // Load string and update status bar
    CString str;
    VERIFY(str.LoadString(static_cast<UINT>(wParam)));
    m_wndStatusBar.SetPaneText(0, str);
    return 0;
}

// Thread proc
UINT CMainFrame::UpdateStatusBarProc(LPVOID pParam)
{
    const HWND hMainFrame = reinterpret_cast<HWND>(pParam);
    ASSERT(hMainFrame != NULL);
    ::PostMessage(hMainFrame, WM_USER, IDS_STATUS_STRING, 0);
    return 0;
}
```

The code is from memory as I don't have access to a compiler here at home, so apologies now for any errors. Instead of using `WM_USER` you could register your own Windows message:

```
UINT WM_MY_MESSAGE = ::RegisterWindowMessage(_T("WM_MY_MESSAGE"));
```

Make the above a static member of `CMainFrame`, for example. If using string resources is too basic, then have the thread allocate the string on the heap and make sure the `CMainFrame` update function deletes it, e.g.:

```
// Thread proc
UINT CMainFrame::UpdateStatusBarProc(LPVOID pParam)
{
    const HWND hMainFrame = reinterpret_cast<HWND>(pParam);
    ASSERT(hMainFrame != NULL);
    CString* pString = new CString;
    *pString = _T("Hello, world!");
    ::PostMessage(hMainFrame, WM_USER, 0, reinterpret_cast<LPARAM>(pString));
    return 0;
}

LRESULT CMainFrame::OnUser(WPARAM, LPARAM lParam)
{
    CString* pString = reinterpret_cast<CString*>(lParam);
    ASSERT(pString != NULL);
    m_wndStatusBar.SetPaneText(0, *pString);
    delete pString;
    return 0;
}
```

Not perfect, but it's a start.
Maybe this can help you: [How to access UI elements from a thread in MFC.](http://www.codeguru.com/forum/showthread.php?t=312454) I don't code C++/MFC myself but I had experienced the similar problem in C# which is known as Cross-thread GUI update.
How to change pane text of status bar from a thread in MFC?
[ "c++", "multithreading", "mfc", "thread-safety" ]
Is there any way to change the style of, or format, the text inside a JavaScript alert box, e.g. changing its colour, making it bold, etc.? Also, is there an alert available with 'Yes'/'No' buttons instead of 'OK'/'Cancel'?
**No**, I'm afraid that is not possible. With JavaScript, you are limited to only 3 built-in popup boxes: `alert`, `prompt` and `confirm`. If you want a more featured popup, you should try one of the many dialog widgets written in JavaScript and built on JavaScript libraries, such as [this](http://dev.iceburg.net/jquery/jqModal/).
The best you can do for formatting is the new line character `\n`.
Colour of Text in a Javascript Alert
[ "javascript", "css", "alerts" ]
I was going through some code and came across a scenario where my combobox has not been initialized yet. This is in .NET 2.0 and in the following code, `this.cbRegion.SelectedValue` is null.

```
int id = (int)this.cbRegion.SelectedValue;
```

This code threw a null reference exception instead of an invalid cast exception. I was wondering if anyone knew why it would throw a null reference exception instead of an invalid cast exception?
It has to do with [boxing](http://www.csharphelp.com/archives/archive100.html) and unboxing. It is trying to pull an `int` out of the box (unbox), but the object is null, so you get a null reference exception before it ever gets the chance to cast.
If you compile

```
object o = null;
int a = (int)o;
```

and look at the MSIL code, you'll see something like

```
ldnull
...
unbox.any int32
```

Now the behavior for `unbox.any` is specified as follows:

> InvalidCastException is thrown if obj is not a boxed type.
>
> NullReferenceException is thrown if obj is a null reference.

This is what you see in your code.
Why does casting a null to a primitive (i.e. int) in .NET 2.0 throw a null ref exception and not an invalid cast exception?
[ "c#", ".net", ".net-2.0", "casting" ]
I'm trying to make a pull-down menu post a form when the user selects (releases the mouse on) one of the options from the menu. This code works fine in FF but Safari, for some reason, doesn't submit the form. I rewrote the code using jQuery to see if jQuery's `.submit()` implementation handled the browser quirks better. Same result: works in FF, doesn't work in Safari. The following snippets are from the same page, which has some Django template language mixed in. Here's the vanilla JS attempt:

```
function formSubmit(lang) {
    if (lang != '{{ LANGUAGE_CODE }}') {
        document.getElementById("setlang_form").submit();
    }
}
```

Here's the jQuery attempt:

```
$(document).ready(function() {
    $('#lang_submit').hide();
    $('#setlang_form option').mouseup(function () {
        if ($(this).attr('value') != '{{ LANGUAGE_CODE }}') {
            $('#setlang_form').submit();
        }
    });
});
```

and here's the form:

```
<form id="setlang_form" method="post" action="{% url django.views.i18n.set_language %}">
  <fieldset>
    <select name="language">
      {% for lang in interface_languages %}
      <option value="{{ lang.code }}" onmouseup="formSubmit('{{ lang.name }}')" {% ifequal lang.code LANGUAGE_CODE %}selected="selected"{% endifequal %}>{{ lang.name }}</option>
      {% endfor %}
    </select>
  </fieldset>
</form>
```

My question is, how can I get this working in Safari?
The code would be:

```
<form id="setlang_form" method="post" action="{% url django.views.i18n.set_language %}">
  <fieldset>
    <select name="language" onchange="formSubmit(this)">
      {% for lang in interface_languages %}
      <option value="{{ lang.code }}" {% ifequal lang.code LANGUAGE_CODE %}selected="selected"{% endifequal %}>{{ lang.name }}</option>
      {% endfor %}
    </select>
  </fieldset>
</form>
```

To get the value:

```
function formSubmit(theForm) {
    ....
    theForm.options[theForm.selectedIndex].value
}
```

You can do it with jQuery too:

```
$(document).ready(function() {
    $('#lang_submit').hide();
    $('#setlang_form select').change(function () {
        .... $("select option:selected").text() ....
    });
});
```

Look here to learn about the change event with jQuery: <http://docs.jquery.com/Events/change>
You should probably use the `onchange` event of the `<select>` instead (or as well).
Firefox handles xxx.submit(), Safari doesn't ... what can be done?
[ "javascript", "django", "django-templates" ]
I am crash-learning PHP for a project, and I have a ton of stupid questions to ask. The first is the following. I have a structure like this:

* index.php
* header.php
* images/
* data
  + data.php

Now, the problem is that I want to include the header.php file in the data.php file. That is no problem. The problem I have is that header.php links to the images folder using a relative path. So, the images won't load. To make matters worse, this structure is under a specific alias, so I can't just prepend a / to the image link. What I need, I guess, is a way to get the path to the application in the script. That way I can reference the images without worrying where the include is made. How do you get this path in PHP?
Try including the file like so:

```
<?php include($_SERVER['DOCUMENT_ROOT'].'/data/data.php'); ?>
```

`DOCUMENT_ROOT` will return the path of the root folder of your website on the web server.
Many people use something like this. `dirname(__FILE__)` returns the directory of the current script; you then concatenate the relative path of the script you are including onto it:

```
require(realpath(dirname(__FILE__).'/lib/Database.php'));
```
Header page in PHP that takes the application directory into account
[ "php", "path" ]
I've got a Django application that works nicely. I'm adding REST services. I'm looking for some additional input on my REST strategy. Here are some examples of things I'm wringing my hands over. * Right now, I'm using the Django-REST API with a pile of patches. * I'm thinking of falling back to simply writing view functions in Django that return JSON results. * I can also see filtering the REST requests in Apache and routing them to a separate, non-Django server instance. Please nominate one approach per answer so we can vote them up or down.
> I'm thinking of falling back to simply > writing view functions in Django that > return JSON results. * Explicit * Portable to other frameworks * Doesn't require patching Django
Please note that REST does not just mean JSON results. REST essentially means exposing a resource-oriented API over native but full-fledged HTTP. I am not an expert on REST, but here are a few of the things Rails is doing.

* URLs should be good, simple names for resources
* Use the right HTTP methods
  + HEAD, GET, POST, PUT, and DELETE
  + Optionally with an override (form parameter '\_method' will override the HTTP request-method)
* Support content-type negotiation via the Accept request-header
  + Optionally with an override (a filename extension in the URL will override the MIME-type in the Accept request-header)
  + Available content types should include XML, XHTML, HTML, JSON, YAML, and many others as appropriate

For example, to get the native HTTP support going, the server should respond to

```
GET /account/profile HTTP/1.1
Host: example.com
Accept: application/json
```

as it would respond to

```
GET /account/profile.json HTTP/1.1
Host: example.com
```

And it should respond to

```
PUT /account/profile HTTP/1.1
Host: example.com

var=value
```

as it would respond to

```
POST /account/profile HTTP/1.1
Host: example.com

_method=PUT&var=value
```
Adding REST to Django
[ "python", "django", "apache", "rest" ]
I need to read a properties files that's buried in my package structure in `com.al.common.email.templates`. I've tried everything and I can't figure it out. In the end, my code will be running in a servlet container, but I don't want to depend on the container for anything. I write JUnit test cases and it needs to work in both.
When loading the Properties from a Class in the package `com.al.common.email.templates` you can use

```
Properties prop = new Properties();
InputStream in = getClass().getResourceAsStream("foo.properties");
prop.load(in);
in.close();
```

(Add all the necessary exception handling.)

If your class is not in that package, you need to acquire the InputStream slightly differently:

```
InputStream in = getClass().getResourceAsStream("/com/al/common/email/templates/foo.properties");
```

Relative paths (those without a leading '/') in `getResource()`/`getResourceAsStream()` mean that the resource will be searched for relative to the directory which represents the package the class is in. Using `java.lang.String.class.getResource("foo.txt")` would search for the (nonexistent) file `/java/lang/String/foo.txt` on the classpath. Using an absolute path (one that starts with '/') means that the current package is ignored.
To add to Joachim Sauer's answer, if you ever need to do this in a static context, you can do something like the following:

```
static {
    Properties prop = new Properties();
    InputStream in = CurrentClassName.class.getResourceAsStream("foo.properties");
    prop.load(in);
    in.close();
}
```

(Exception handling elided, as before.)
Loading a properties file from Java package
[ "java", "properties-file" ]
I have an interface method

```
public void Execute(ICommand command);
```

which needs to pass known subtypes of `ICommand` to an appropriate `Handle(SpecificCommand command)` method implementation and do some generic handling of unknown types. I am looking for a universal (i.e. not requiring a giant switch) method of doing so, something similar to

```
Handle(command as command.GetType()); // this obviously does not compile
```

I know I could register the handlers somehow, e.g. store them as delegates in a dictionary, but this still requires duplicating the handling logic (once in the specific `Handle(...)` method signature, once in the delegate registration). If I populate the dictionary by inspecting my class with reflection (looking for `Handle(XXX command)` methods), I'll get a performance hit. To summarize: how can I downcast an object (upcast by the call to `Execute(ICommand command)`) to invoke a method requiring a concrete type without knowing which type it is at compile time?
The cast is emitted at compile-time, so you need to know the type at compile-time. The overloading is also determined at compile-time - so by the time you actually know the concrete type to use, it's too late. I don't see that you'd actually be duplicating any *logic* by using delegates. Alternatively, if you do it with reflection, you can build delegates very easily using Delegate.CreateDelegate - you'll only get a performance hit once, and after that it'll be very fast. See my [blog entry about Delegate.CreateDelegate](http://msmvps.com/blogs/jon_skeet/archive/2008/08/09/making-reflection-fly-and-exploring-delegates.aspx) for more information. I think I'd decide to use a hand-built dictionary or one built with reflection based on how many methods I had and how often they change. You'll probably find [KeyedByTypeCollection](http://msdn.microsoft.com/en-us/library/ms404549.aspx) useful for the dictionary.
Well, the "correct" answer is that Handle() should be a method in ICommand, so that instead of `Handle(command)`, you'd be saying `command.Handle()`.
In C#, how can I downcast a previously upcasted object without knowing its type?
[ "c#", ".net", "inheritance", "casting" ]
What is the correct output (meaning correct by the ECMA standard) of the following program?

```
function nl(x) {
    document.write(x + "<br>");
}

nl(Function.prototype);
nl(Function.prototype.prototype);
nl(Function.prototype.prototype == Object.prototype);
nl(Function.prototype.prototype.prototype);
```

Chrome and IE6 agree in saying:

```
function Empty() {}
null for Chrome / undefined for IE6
false
```

and then crashing. Mozilla outputs:

```
function () {
}
[object Object]
false
undefined
```

Are either of these correct? It seems that the Mozilla one does better, but that the best output is

```
function () {
}
[object Object]
true
undefined
```
What you're doing here isn't really walking the prototype chain - [this question](https://stackoverflow.com/questions/383201/relation-between-prototype-and-prototype-in-javascript) might help you understand what is actually going on. I didn't bother to check the ECMA spec, but here is my take on the issue:

* **Function** is the constructor of function objects
* **Function.prototype** is the prototype from which all function objects inherit - it might contain properties like *call* and *apply* which are common to all *Function* instances; the implementations you checked were consistent in that it is implemented as a function object itself (as **some** pointed out, the ECMA specification requires this)
* **Function.prototype.prototype** doesn't really make much sense, but as **Function.prototype** is implemented as a function object (which could possibly be used as a constructor), it should at least exist; objects which are created using **Function.prototype** as a constructor would inherit its properties - but as there should be no reason to do something insane like this, setting it to *null*, *undefined* or an empty object is reasonable
* **Function.prototype.prototype.prototype** will in all likelihood be *undefined*: as we have seen before, **Function.prototype.prototype** should be something without properties (*null*, *undefined* or an empty object) and definitely not a function object; therefore, its *prototype* property should be *undefined* or might even throw an error when accessed

Hope this helps ;)
**Function.prototype**

From the [ECMAScript Language Specification](http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-262.pdf):

> **15.3.3.1 Function.prototype**
>
> The initial value of Function.prototype is the Function prototype object (section 15.3.4).
>
> **15.3.4 Properties of the Function Prototype Object**
>
> The Function prototype object is itself a Function object (its [[Class]] is "Function") that, when invoked, accepts any arguments and returns undefined. The value of the internal [[Prototype]] property of the Function prototype object is the Object prototype object (section 15.3.2.1).
>
> It is a function with an “empty body”; if it is invoked, it merely returns undefined. The Function prototype object does not have a valueOf property of its own; however, it inherits the valueOf property from the Object prototype Object.

I get this output:

* **Opera:** function () { [native code] }
* **Chrome:** function Empty() {}
* **IE7:** function prototype() { [native code]}
* **FF3:** function () { }

Chrome and IE7 have named their functions; Opera and IE7 tell you that they will not reveal the implementation. They all agree on this:

```
nl(typeof Function.prototype); // function
```

Compare this to:

```
nl(typeof Object.prototype); // object
nl(typeof Array.prototype); // object
nl(typeof String.prototype); // object
```

**Function.prototype.prototype**

I get **undefined** from Opera and IE7, **null** from Chrome and **[object Object]** from FF3. Who is right? Since *"The Function prototype object is itself a Function object"*, shouldn't it be a circular reference to itself? To avoid the circular reference they have chosen different ways. I don't know if there is a standard for that or if it is up to the implementation, but I think an Object is right.

Btw, here you see the difference between the internal [[prototype]] and the public prototype in action, like you asked in an earlier question!
**Function.prototype.prototype == Object.prototype**

This is false because it isn't the same object. See above.

**Function.prototype.prototype.prototype**

Only FF will give you an answer, because its implementation of Function.prototype.prototype returns an Object. I agree that your proposed output looks more logical.

They do agree on this:

```
nl(Object.prototype); // [object Object]
nl(Object.prototype.prototype); // undefined
```
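For what it's worth, later editions of the specification settled this: `Function.prototype` is itself a function, it has no own `prototype` property (so the chain simply ends at `undefined`), and its *internal* prototype is `Object.prototype`. A quick sketch you can run in any modern engine (results for the older browsers discussed above may differ):

```javascript
// Function.prototype is itself callable...
console.log(typeof Function.prototype); // "function"

// ...but, unlike ordinary functions, it carries no own `prototype` property,
// so the "chain" in the question stops here in modern engines:
console.log(Function.prototype.prototype); // undefined

// The internal prototype link, by contrast, does point at Object.prototype:
console.log(Object.getPrototypeOf(Function.prototype) === Object.prototype); // true
```

This matches the spec quote above: the Function prototype object inherits from the Object prototype object via its internal [[Prototype]], not via a public `prototype` property.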
Correct prototype chain for Function
[ "javascript", "constructor", "prototype" ]
Please feel free to correct me if I am wrong at any point... I am trying to read a [CSV](http://en.wikipedia.org/wiki/Comma-separated_values) (comma separated values) file using .NET file I/O classes. Now the problem is, this CSV file may contain some fields with soft carriage returns (i.e. solitary \r or \n markers rather than the standard \r\n used in text files to end a line) within some fields and the standard text mode I/O class StreamReader does not respect the standard convention and treats the soft carriage returns as hard carriage returns thus compromising the integrity of the CSV file. Now using the BinaryReader class seems to be the only option left but the BinaryReader does not have a ReadLine() function hence the need to implement a ReadLine() on my own. My current approach reads one character from the stream at a time and fills a StringBuilder until a \r\n is obtained (ignoring all other characters including solitary \r or \n) and then returns a string representation of the StringBuilder (using ToString()). But I wonder: is this is the most efficient way of implementing the ReadLine() function? Please enlighten me.
It probably is. In terms of order, it goes through each char once only, so it would be O(n) (where n is the length of the stream), so that's not a problem. To read a single character, a `BinaryReader` is your best bet. What I would do is make a class:

```
public class LineReader : IDisposable
{
    private readonly BinaryReader reader;

    public LineReader(Stream stream)
    {
        reader = new BinaryReader(stream);
    }

    public string ReadLine()
    {
        StringBuilder result = new StringBuilder();
        char lastChar = reader.ReadChar(); // an EndOfStreamException here would propagate to the caller
        try
        {
            while (true)
            {
                char newChar = reader.ReadChar();
                if (lastChar == '\r' && newChar == '\n')
                    return result.ToString();
                result.Append(lastChar);
                lastChar = newChar;
            }
        }
        catch (EndOfStreamException)
        {
            result.Append(lastChar);
            return result.ToString();
        }
    }

    public void Dispose()
    {
        reader.Close();
    }
}
```

Or something like that. (WARNING: the code has not been tested and is provided AS IS without warranty of any kind, expressed or implied. Should this program prove defective or destroy the planet, you assume the cost of all necessary servicing, repair or correction.)
You might want to look at using an ODBC/OleDB connection to do this. If you point the data source of an OleDB connection to a directory containing CSV files, you can then query it as if each CSV were a table. Check <http://www.connectionstrings.com/?carrier=textfile> for the correct connection string.
What's the most efficient way of implementing ReadLine() on a binary stream?
[ "c#", ".net", "vb.net" ]
With a vector defined as `std::vector<std::string>`, I'm wondering why the following is valid:

```
if ( vecMetaData[0] != "Some string" ) { ...
```

But not this:

```
switch ( vecMetaData[1] ) { ...
```

Visual Studio complains:

```
error C2450: switch expression of type 'std::basic_string<_Elem,_Traits,_Ax>' is illegal
1>        with
1>        [
1>            _Elem=char,
1>            _Traits=std::char_traits<char>,
1>            _Ax=std::allocator<char>
1>        ]
1>        No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called
```
`switch()` needs an integral type (like `int`, `char`, ...). `string` is not an integral type, nor does `string` have an implicit conversion to an integral type, so it can't be used in a switch statement.
It is valid because the first will call `std::string`'s `operator!=`, which accepts a `const char*` as an argument. That, however, doesn't mean that `std::string` also has an `operator some_integral_type()` that returns the integral expression `switch` needs. Using operators in C++ does not necessarily invoke the built-in meaning. Your code, for example, doesn't compare pointer values; it might invoke a user-defined (in this case, `std::string`'s) operator function.
STL Vector comparison issue
[ "c++", "stl" ]
I have a simple database with two tables: Users and Configurations. A user has a foreign key to link it to a particular configuration. I am having a strange problem where the following query always causes an inner join to the Configuration table regardless of the second parameter value. As far as I can tell, even though the "UserConfiguration =" part of the object initialisation is conditional, LINQ doesn't see that and determines that a relationship is followed in any case. If I actually remove that last initialisation, the whole thing works as expected: it doesn't inner join when loadConfiguration == false and it does join when loadConfiguration == true. Anyone got any ideas about this? Is this syntax just not going to work? The only thought I have now is to wrap the return in a basic if statement - I just wanted to avoid the duplicated lines.

```
public UserAccount GetByUsername(string username, bool loadConfiguration)
{
    using (Database database = new Database())
    {
        if (loadConfiguration)
        {
            DataLoadOptions loadOptions = new DataLoadOptions();
            loadOptions.LoadWith<User>(c => c.Configuration);
            database.LoadOptions = loadOptions;
        }

        return (from c in database.Users
                where c.Username == username
                select new UserAccount
                {
                    ID = c.ID,
                    ConfigurationID = c.ConfigurationID,
                    Username = c.Username,
                    Password = c.Password.ToArray(),
                    HashSalt = c.HashSalt,
                    FirstName = c.FirstName,
                    LastName = c.LastName,
                    EmailAddress = c.EmailAddress,
                    UserConfiguration = (loadConfiguration) ? new ApplicationConfiguration
                    {
                        ID = c.Configuration.ID,
                        MonthlyAccountPrice = c.Configuration.MonthlyAccountPrice,
                        TrialAccountDays = c.Configuration.TrialAccountDays,
                        VAT = c.Configuration.VAT,
                        DateCreated = c.Configuration.DateCreated
                    } : null
                }).Single();
    }
}
```

Thanks in advance, Martin.
I don't think it will work like that. I suggest splitting it into two distinct queries. There are probably better ways, but they would require more 'plumbing'.
Replace `.Single()` with `.SingleOrDefault()` and LINQ will switch to a left outer join. I don't know if it will do it in your case, but in some cases it does.

Edit: didn't see that the `Single()` was for the entire query and not for the configuration part; try this:

```
UserConfiguration = (loadConfiguration && c.Configuration != null) ? new ApplicationConfiguration
{
    ID = c.Configuration.ID,
    MonthlyAccountPrice = c.Configuration.MonthlyAccountPrice,
    TrialAccountDays = c.Configuration.TrialAccountDays,
    VAT = c.Configuration.VAT,
    DateCreated = c.Configuration.DateCreated
} : null
```
LINQ To SQL Weird Join Issue
[ "c#", "linq-to-sql", "inner-join" ]
When a function is attached to an object and called:

```
function f() { return this.x; }
var o = {x: 20};
o.func = f;
o.func(); // evaluates to 20
```

`this` refers to the object that the function was called as a method of. It's equivalent to doing `f.call(o)`. When the function is called not as part of an object, `this` refers to the global object. How do I check if a function is being called from a non-object context? Is there any standard keyword to access the global object? Is the only way to do it something like this?

```
globalobj = this;
function f() {
    if (this == globalobj)
        doSomething();
}
```

Note: I have no particular use case in mind here - I actually am asking about this exact mechanism.
The global object is actually `window` (in a browser), so you can do

```
if (this === window)
```
The below should work, since calling a function with `Function.call` and a `null` value will invoke it in the global scope:

```
this === ((function () { return this; }).call(null))
```

A simpler variant,

```
this === (function () { return this; })()
```

will also work, but I think the first makes the intent clearer.
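As a side note, modern engines expose a standard name for the global object, `globalThis` (added in ES2020, so not available in the browsers this question originally targeted), which avoids environment-specific checks like `window`. A small sketch of the same idea:

```javascript
function context() {
  // Compare `this` against the standard global object reference.
  return this === globalThis ? "global" : "method";
}

var o = { context: context };
console.log(o.context());              // "method" - called as a method of o
console.log(context.call(globalThis)); // "global" - explicitly bound to the global object
```

In non-strict code a plain `context()` call also binds `this` to the global object, which is exactly what the check detects; in strict mode, however, a plain call binds `this` to `undefined`, so the function would report "method" there.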
JavaScript - check if in global context
[ "javascript", "object", "global" ]
What is the most efficient way to display the last 10 lines of a very large text file (this particular file is over 10GB). I was thinking of just writing a simple C# app but I'm not sure how to do this effectively.
Read to the end of the file, then seek backwards until you find ten newlines, and then read forward to the end, taking into consideration various encodings. Be sure to handle cases where the number of lines in the file is less than ten. Below is an implementation (in C#, as you tagged this), generalized to find the last `numberOfTokens` in the file located at `path` encoded in `encoding`, where the token separator is represented by `tokenSeparator`; the result is returned as a `string` (this could be improved by returning an `IEnumerable<string>` that enumerates the tokens).

```
public static string ReadEndTokens(string path, Int64 numberOfTokens, Encoding encoding, string tokenSeparator)
{
    int sizeOfChar = encoding.GetByteCount("\n");
    byte[] buffer = encoding.GetBytes(tokenSeparator);

    using (FileStream fs = new FileStream(path, FileMode.Open))
    {
        Int64 tokenCount = 0;
        Int64 endPosition = fs.Length / sizeOfChar;

        for (Int64 position = sizeOfChar; position < endPosition; position += sizeOfChar)
        {
            fs.Seek(-position, SeekOrigin.End);
            fs.Read(buffer, 0, buffer.Length);

            if (encoding.GetString(buffer) == tokenSeparator)
            {
                tokenCount++;
                if (tokenCount == numberOfTokens)
                {
                    byte[] returnBuffer = new byte[fs.Length - fs.Position];
                    fs.Read(returnBuffer, 0, returnBuffer.Length);
                    return encoding.GetString(returnBuffer);
                }
            }
        }

        // handle case where number of tokens in file is less than numberOfTokens
        fs.Seek(0, SeekOrigin.Begin);
        buffer = new byte[fs.Length];
        fs.Read(buffer, 0, buffer.Length);
        return encoding.GetString(buffer);
    }
}
```
I'd likely just open it as a binary stream, seek to the end, then back up looking for line breaks. Back up 10 (or 11, depending on the last line) to find your 10 lines, then just read to the end and use `Encoding.GetString` on what you read to get it into string format. Split as desired.
Get last 10 lines of very large text file > 10GB
[ "", "c#", "text", "large-files", "" ]
We have an application that uses a dual monitor setup - User A will work with Monitor 1, and User B will work with Monitor 2 simultaneously. Monitor 2 is a touch screen device. Now, the problem is, when User A is typing on his screen and User B tries to do something, User A will end up losing focus from his window, which is disastrous. What might be a good solution to preserve the focus on the window in Monitor 1, even if User B does something with Monitor 2?
It is possible with some elbow grease. Paste this code in the form that you show on the touch screen:

```
protected override CreateParams CreateParams {
    get {
        const int WS_EX_NOACTIVATE = 0x08000000;
        CreateParams param = base.CreateParams;
        param.ExStyle |= WS_EX_NOACTIVATE;
        return param;
    }
}
```

That makes sure that the form won't steal the focus from the main form. Make it look like this:

```
public partial class Form1 : Form {
    public Form1() {
        InitializeComponent();
        Thread t = new Thread(SecondMonitor);
        t.IsBackground = true;
        t.SetApartmentState(ApartmentState.STA);
        t.Start();
    }

    private void SecondMonitor() {
        Form2 f2 = new Form2();
        f2.StartPosition = FormStartPosition.Manual;
        f2.Left = 800;   // Use Screen class here...
        f2.ShowDialog();
    }
}
```
To me, it sounds like you might want two PCs... or maybe host a VM on the PC, and give the VM access to the second monitor via a USB video "card" (not quite the right term). Most modern VMs allow USB pass-through. Most of the time, multi-head displays are either used to give a single user lots of screen real estate, or as a display-only facility (for example, showing live network/server stats over a large infrastructure setup).
How to solve this focus problem with a dual monitor application?
[ "", "c#", ".net", "windows", "winforms", "multiple-monitors", "" ]
I want to dive deeper into unit testing concepts. I'm looking for an open source project that illustrates best practices in unit testing.
I work on [CruiseControl](http://cruisecontrol.sourceforge.net/) (original Java version) and I think it is worth considering as a source of study for a few reasons:

1. Lots of unit tests. Certainly not the highest coverage but there are plenty to look at.
2. Diverse technology. There are pojos, tag libs, velocity macros, spring, etc.
3. Uneven quality of unit tests. Because we take contributions from outside, the quality and thoroughness of the unit tests vary. The core classes are very well tested. Lots of the contributions are untested or poorly tested. Consider the poorly tested ones as "an exercise left for the reader". ;)
4. Maybe the best reason: You can ask questions about the tests on the [developer mailing list](http://cruisecontrol.sourceforge.net/developers.html). In the best case you could even end up writing tests for some of the untested areas and get coaching on your efforts.
The source code for [NUnit](http://www.nunit.org) (.NET unit testing software) would be worth a look.
Which opensource java or .net project has best unit test coverage?
[ "", "java", ".net", "unit-testing", "" ]
How can I convert an array of bitmaps into a brand new image in TIFF format, adding all the bitmaps as frames in this new TIFF image, using .NET 2.0?
Start with the first bitmap by putting it into an Image object:

```
Bitmap bitmap = (Bitmap)Image.FromFile(file);
```

Save the bitmap to memory as TIFF:

```
MemoryStream byteStream = new MemoryStream();
bitmap.Save(byteStream, ImageFormat.Tiff);
```

Put the TIFF into another Image object:

```
Image tiff = Image.FromStream(byteStream);
```

Prepare encoders (the codec lookup is written as a loop so it works on .NET 2.0, which has no LINQ):

```
ImageCodecInfo encoderInfo = null;
foreach (ImageCodecInfo info in ImageCodecInfo.GetImageEncoders())
{
    if (info.MimeType == "image/tiff")
    {
        encoderInfo = info;
        break;
    }
}

EncoderParameters encoderParams = new EncoderParameters(2);
encoderParams.Param[0] = new EncoderParameter(Encoder.Compression, (long)EncoderValue.CompressionNone);
encoderParams.Param[1] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.MultiFrame);
```

Save to file:

```
tiff.Save(sOutFilePath, encoderInfo, encoderParams);
```

For subsequent pages, prepare encoders:

```
EncoderParameters EncoderParams = new EncoderParameters(2);
EncoderParameter SaveEncodeParam = new EncoderParameter(
    Encoder.SaveFlag, (long)EncoderValue.FrameDimensionPage);
EncoderParameter CompressionEncodeParam = new EncoderParameter(
    Encoder.Compression, (long)EncoderValue.CompressionNone);
EncoderParams.Param[0] = CompressionEncodeParam;
EncoderParams.Param[1] = SaveEncodeParam;

tiff.SaveAdd(/* next image as tiff - do the same as above with memory */, EncoderParams);
```

Finally flush the file:

```
EncoderParameter SaveEncodeParam = new EncoderParameter(
    Encoder.SaveFlag, (long)EncoderValue.Flush);
EncoderParams = new EncoderParameters(1);
EncoderParams.Param[0] = SaveEncodeParam;

tiff.SaveAdd(EncoderParams);
```

That should get you started.
Came across this post after a bit of searching on Google. I tried the code that was in the post by a'b'c'd'e'f'g'h', but that didn't work for me. Perhaps, I was not doing something correctly. In any case, I found another post that saved images to multi page tiffs. Here is the link to the post: [Adding frames to a Multi-Frame TIFF](https://web.archive.org/web/20150405144620/http://bobpowell.net/addframes.aspx). Also, here is the code that worked for me. It should be identical to that post. ``` Encoder encoder = Encoder.SaveFlag; ImageCodecInfo encoderInfo = ImageCodecInfo.GetImageEncoders().First(i => i.MimeType == "image/tiff"); EncoderParameters encoderParameters = new EncoderParameters(1); encoderParameters.Param[0] = new EncoderParameter(encoder, (long)EncoderValue.MultiFrame); // Save the first frame of the multi page tiff Bitmap firstImage = (Bitmap) _scannedPages[0].RawContent; firstImage.Save(fileName, encoderInfo, encoderParameters); encoderParameters.Param[0] = new EncoderParameter(encoder, (long)EncoderValue.FrameDimensionPage); // Add the remaining images to the tiff for (int i = 1; i < _scannedPages.Count; i++) { Bitmap img = (Bitmap) _scannedPages[i].RawContent; firstImage.SaveAdd(img, encoderParameters); } // Close out the file encoderParameters.Param[0] = new EncoderParameter(encoder, (long)EncoderValue.Flush); firstImage.SaveAdd(encoderParameters); ```
Convert bitmaps to one multipage TIFF image in .NET 2.0
[ "", "c#", ".net", ".net-2.0", "drawing", "tiff", "" ]
From <http://www.jibbering.com/faq/faq_notes/closures.html> : > Note: ECMAScript defines an internal [[prototype]] property of the internal Object type. This property is not directly accessible with scripts, but it is the chain of objects referred to with the internal [[prototype]] property that is used in property accessor resolution; the object's prototype chain. A public prototype property exists to allow the assignment, definition and manipulation of prototypes in association with the internal [[prototype]] property. **The details of the relationship between to two are described in ECMA 262 (3rd edition) and are beyond the scope of this discussion.** What are the details of the relationship between the two? I've browsed through ECMA 262 and all I've read there is stuff like: > The constructor’s associated prototype can be referenced by the program expression constructor.prototype, > > Native ECMAScript objects have an internal property called [[Prototype]]. The value of this property is either null or an object and is used for implementing inheritance. > > Every built-in function and every built-in constructor has the Function prototype object, which is the initial value of the expression Function.prototype > > Every built-in prototype object has the Object prototype object, which is the initial value of the expression > Object.prototype (15.3.2.1), as the value of its internal [[Prototype]] property, except the Object > prototype object itself. From this all I gather is that the [[Prototype]] property is equivalent to the `prototype` property for pretty much any object. Am I mistaken?
I believe you are right in most cases. Every object has a hidden `[[Prototype]]` property, which is used for inheritance. Functions additionally have a public `prototype` property, which is used only when the function is used as constructor: When an object is constructed using `new`, the `[[Prototype]]` property of the new object is set to the `prototype` property of the function that was used as constructor. E.g. ``` function C() {} C.prototype = P1; var obj = new C(); // obj.[[Prototype]] is now P1. ``` You can get the `[[Prototype]]` property using `Object.getPrototypeOf(<obj>)`. (This method is specified in ECMAScript 5. Older versions of JavaScript does not have any standard way of reading `[[Prototype]]`). You can *usually* get to the prototype through the constructor, e.g.: ``` obj.constructor.prototype == Object.getPrototypeOf(obj) ``` But this is not always the case, since the prototype property of the constructor function can be reassigned, but the `[[Prototype]]` of an object cannot be reassigned after the object is created. So if you do: ``` C.prototype = P2; ``` then ``` obj.constructor.prototype != Object.getPrototypeOf(obj) ``` Because the prototype of `C` is now `P2`, but `[[Prototype]]` of `obj` is still `P1`. Note that it is *only* functions that have a `prototype` property. Note also that the `prototype` property of a function is not the same as the `[[Prototype]]` property of the function!
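To make that concrete, here is a small runnable snippet (using `Object.getPrototypeOf` from ECMAScript 5; the names `C`, `obj`, `life` are illustrative):

```javascript
function C() {}
var original = C.prototype;
var obj = new C();                 // obj's [[Prototype]] is set to C.prototype *now*

console.log(Object.getPrototypeOf(obj) === original);     // true

C.prototype = { life: 42 };        // reassign the constructor's prototype property
var obj2 = new C();

console.log(Object.getPrototypeOf(obj) === original);     // true - [[Prototype]] is fixed at creation
console.log(Object.getPrototypeOf(obj2) === C.prototype); // true - obj2 got the new one
console.log(obj.life);                                    // undefined
console.log(obj2.life);                                   // 42
```

This shows the asymmetry described above: reassigning `C.prototype` affects objects created afterwards, but never the `[[Prototype]]` of objects that already exist.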
To answer your question directly: logically it is an object's private copy of the `prototype` property of its constructor.

Using metalanguage, this is how objects are created:

```
// not real JS
var Ctr = function(...){...};
Ctr.prototype = {...}; // some object with methods and properties

// the object creation sequence:
var x = new Ctr(a, b, c);
// ...which logically expands to:
var x = {};
x["[[prototype]]"] = Ctr.prototype;
var result = Ctr.call(x, a, b, c);
if(typeof result == "object"){
  x = result;
}
// our x is fully constructed and initialized at this point
```

At this point we can modify the prototype, and the change will be reflected by all objects of the class, because they refer to the prototype by reference:

```
Ctr.prototype.log = function(){ console.log("...logging..."); };
x.log(); // ...logging..
```

But if we change the prototype on the constructor, already created objects will continue referring to the old object:

```
Ctr.prototype = {life: 42}; // let's assume that the old prototype didn't define "life"
console.log(x.life); // undefined
x.log(); // ...logging...
```

In full accordance with the standard, `[[prototype]]` is not available, but Mozilla extends the standard with the `__proto__` property (read-only), which exposes the normally hidden `[[prototype]]`:

* [Mozilla's documentation](https://developer.mozilla.org/en/Core_JavaScript_1.5_Guide/Property_Inheritance_Revisited/Determining_Instance_Relationships)
* [overview of Mozilla's extensions: \_\_count\_\_, \_\_proto\_\_, \_\_parent\_\_](http://weblogs.asp.net/stephenwalther/archive/2008/02/26/javascript-magic-properties-using-count-proto-and-parent.aspx)

Again, `__proto__` may be legalized in [the next ES3.1 standard](http://wiki.ecmascript.org/doku.php?id=es3.1:es3.1_proposal_working_draft).
Relation between [[Prototype]] and prototype in JavaScript
[ "", "javascript", "ecma262", "" ]
In my application, there are 10-20 classes that are instantiated once[\*]. Here's an example: ``` class SomeOtherManager; class SomeManagerClass { public: SomeManagerClass(SomeOtherManager*); virtual void someMethod1(); virtual void someMethod2(); }; ``` Instances of the classes are contained in one object: ``` class TheManager { public: virtual SomeManagerClass* someManagerClass() const; virtual SomeOtherManager* someOtherManager() const; /** More objects... up to 10-20 */ }; ``` Currently TheManager uses the *new* operator in order to create objects. My intention is to be able to replace, using plugins, the SomeManagerClass (or any other class) implementation with another one. In order to replace the implementation, 2 steps are needed: 1. Define a class DerivedSomeManagerClass, which inherits SomeManagerClass [plugin] 2. Create the new class (DerivedSomeManagerClass) instead of the default (SomeManagerClass) [application] I guess I need some kind of object factory, but it should be fairly simple since there's always only one type to create (the default implementation or the user implementation). Any idea about how to design a simple factory like I just described? Consider the fact that there might be more classes in the future, so it should be easy to extend. [\*] I don't care if it happens more than once. **Edit:** Please note that there are more than two objects that are contained in TheManager.
I think there are two separate problems here. One problem is: how does TheManager *name* the class that it has to create? It must keep some kind of pointer to "a way to create the class". Possible solutions are: * keeping a separate pointer for each kind of class, with a way to set it, but you already said that you don't like this as it violates the DRY principle * keeping some sort of table where the key is an enum or a string; in this case the setter is a single function with parameters (of course if the key is an enum you can use a vector instead of a map) The other problem is: what is this "way to create a class"? Unfortunately we can't store pointers to constructors directly, but we can: * create, as others have pointed out, a factory for each class * just add a static "create" function for each class; if they keep a consistent signature, you can just use their pointers to functions Templates can help in avoiding unnecessary code duplication in both cases.
Assuming a class (plugin1) which inherits from SomeManagerClass, you need a class hierarchy to build your types:

```
class factory {
public:
  virtual SomeManagerClass* create() = 0;
};

class plugin1_factory : public factory {
public:
  SomeManagerClass* create() { return new plugin1(); }
};
```

Then you can assign those factories to a std::map, where they are bound to strings

```
std::map<string, factory*> factory_map;
...
factory_map["plugin1"] = new plugin1_factory();
```

Finally your TheManager just needs to know the name of the plugin (as string) and can return an object of type SomeManagerClass with just one line of code:

```
SomeManagerClass* obj = factory_map[plugin_name]->create();
```

**EDIT**: If you don't like to have one plugin factory class for each plugin, you could modify the previous pattern with this:

```
template <class plugin_type>
class plugin_factory : public factory {
public:
  SomeManagerClass* create() { return new plugin_type(); }
};

factory_map["plugin1"] = new plugin_factory<plugin1>();
```

I think this is a much better solution. Moreover, the 'plugin\_factory' class could add itself to the 'factory\_map' if you pass the string to its constructor.
How to design a simple C++ object factory?
[ "", "c++", "factory", "" ]
I have a Unicode string like "Tanım" which is encoded as "Tan%u0131m" somehow. How can I convert this encoded string back to the original Unicode? Apparently urllib.unquote does not support Unicode.
%uXXXX is a [non-standard encoding scheme](http://en.wikipedia.org/wiki/Percent-encoding#Non-standard_implementations) that has been rejected by the w3c, despite the fact that an implementation continues to live on in JavaScript land.

The more common technique seems to be to UTF-8 encode the string and then % escape the resulting bytes using %XX. This scheme is supported by urllib.unquote:

```
>>> urllib.unquote("%0a")
'\n'
```

Unfortunately, if you really **need** to support %uXXXX, you will probably have to roll your own decoder. Otherwise, it is likely to be far more preferable to simply UTF-8 encode your unicode and then % escape the resulting bytes.

A more complete example:

```
>>> u"Tanım"
u'Tan\u0131m'
>>> url = urllib.quote(u"Tanım".encode('utf8'))
>>> urllib.unquote(url).decode('utf8')
u'Tan\u0131m'
```
```
import re

def unquote(text):
    def unicode_unquoter(match):
        return unichr(int(match.group(1), 16))
    return re.sub(r'%u([0-9a-fA-F]{4})', unicode_unquoter, text)
```

(This is Python 2 — `unichr` became plain `chr` in Python 3.)
How to unquote a urlencoded unicode string in python?
[ "", "python", "unicode", "character-encoding", "urllib", "w3c", "" ]
Are there any 'good' resources for porting a VB.NET winforms application to C#? I'm sure there is software that just translates the code, but I'm looking to refactor the code at the same time. Keeping it in its current form is problematic, since it uses some of the 'bad design' practices that VB.NET allows, and would further complicate future maintenance.

Has anyone here gone through that process, and how did you go about doing it? Did you use a translate/refactor approach? Did you just use the end product to recreate functionality without looking at the current codebase for most of it? What would you (collectively) recommend?

**Update**: As I was telling Grauenwolf, keeping it in its current language presents the following issues:

* Not being able to readily add features. VB.NET isn't a language I'm rock solid in. I do appreciate the irony of learning the language to port it over -- but future maintenance will need to account for someone who doesn't know VB.NET.
* The rest of the application has been ported to C# (a long time ago, in fact); all features that we'd like to add depend on de-coupling the app (right now it's very tightly coupled).

My choices are to either refactor it in a language I'm not too familiar with, or to refactor it in a language I understand. To anyone who voted the question down, I'm not really sure *why* you did; the concern isn't whether I should leave it in VB.NET; the concern is what is the future cost of not porting it over now. If I'm going to go to great expense to fix it, why not go the extra step and make it maintainable for a future programmer?

**Author's Note**: I hadn't looked at this question in ages, there was a recent response, so I moved my 'answer' into the question and deleted the 'answer' (since it wasn't really an answer).
Based on my experience working with some large applications that mix VB and C# projects, I would recommend leaving it in VB.NET. If there are problems with the design, then fix them, but converting the whole thing to C# sounds like a messy, unnecessary distraction to me. The non-stylistic differences between the two languages are very minimal, so it is hard to see a functional need that would force a conversion. (There was an old bug in Visual Studio 2003 that ruled out certain chains of project references that mixed C# and VB projects in specific ways, but that is the only one I have ever run into as a practical obstacle.) Individual developers certainly tend to have a stylistic preference favoring one or the other, but a full conversion is a lot of work to do for something that amounts to a taste for a different flavor of syntactic sugar.
If you use something like Reflector or Anakrino, its output is based on the IL rather than the original source. Whether or not it produces code that's any better is open for debate... But you could try it out, anyway. :)
Porting VB.NET Winforms Application to C#
[ "", "c#", ".net", "vb.net", "winforms", "" ]
I'm trying to get a list of processes currently owned by the current user (`Environment.UserName`). Unfortunately, the `Process` class doesn't have any way of getting the UserName of the user owning a process. How do you get the UserName of the user which is the owner of a process using the `Process` class so I can compare it to `Environment.UserName`? If your solution requires a `pinvoke`, please provide a code example.
The CodeProject article [How To Get Process Owner ID and Current User SID](http://www.codeproject.com/KB/cs/processownersid.aspx) by [Warlib](http://www.codeproject.com/script/Membership/Profiles.aspx?mid=856529) describes how to do this using both WMI and using the Win32 API via PInvoke. The WMI code is much simpler but is slower to execute. Your question doesn't indicate which would be more appropriate for your scenario.
Thanks, your answers put me on the proper path. For those who need a code sample (requires a reference to System.Management.dll):

```
using System;
using System.IO;
using System.Management;

public class App
{
    public static void Main(string[] Args)
    {
        ManagementObjectSearcher Processes = new ManagementObjectSearcher("SELECT * FROM Win32_Process");

        foreach (ManagementObject Process in Processes.Get())
        {
            if (Process["ExecutablePath"] != null)
            {
                string ExecutablePath = Process["ExecutablePath"].ToString();

                string[] OwnerInfo = new string[2];
                Process.InvokeMethod("GetOwner", (object[]) OwnerInfo);

                Console.WriteLine(string.Format("{0}: {1}", Path.GetFileName(ExecutablePath), OwnerInfo[0]));
            }
        }

        Console.ReadLine();
    }
}
```
How do you get the UserName of the owner of a process?
[ "", "c#", ".net", "process", "" ]
I've got a big file on which I'm opening a FileInputStream. This file contains some files, each having an offset from the beginning and a size. Furthermore, I've got a parser that should evaluate such a contained file.

```
File file = ...; // the big file
long offset = 1734; // a contained file's offset
long size = 256; // a contained file's size

FileInputStream fis = new FileInputStream(file);
fis.skip(offset);
parse(fis, size);

public void parse(InputStream is, long size) {
   // parse stream data and ensure we don't read more than size bytes
   is.close();
}
```

I feel like this is not good practice. Is there a better way to do this, maybe using buffering? Furthermore, I feel like the skip() method slows the reading process down a lot.
It sounds like what you really want is a sort of "partial" input stream - one a bit like the ZipInputStream, where you've got a stream within a stream. You could write this yourself, proxying all InputStream methods to the original input stream making suitable adjustments for offset and checking for reading past the end of the subfile. Is that the sort of thing you're talking about?
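A deliberately minimal sketch of such a "partial" stream (only the two `read` overloads are bounded here; a complete version would also proxy `skip`, `available`, and `mark`/`reset`, and the class and variable names are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// A view of the underlying stream that reports EOF after `size` bytes.
class BoundedInputStream extends FilterInputStream {
    private long remaining;

    BoundedInputStream(InputStream in, long size) {
        super(in);
        this.remaining = size;
    }

    @Override
    public int read() throws IOException {
        if (remaining <= 0) return -1;      // pretend the sub-file has ended
        int b = super.read();
        if (b != -1) remaining--;
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        if (remaining <= 0) return -1;
        int n = super.read(buf, off, (int) Math.min(len, remaining));
        if (n != -1) remaining -= n;
        return n;
    }
}

public class Main {
    public static void main(String[] args) throws IOException {
        // stand-in for the big file: bytes "abcdefgh"
        InputStream big = new ByteArrayInputStream("abcdefgh".getBytes("US-ASCII"));
        big.skip(2);                         // the contained file's offset
        InputStream sub = new BoundedInputStream(big, 3);

        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = sub.read()) != -1) sb.append((char) b);

        String result = sb.toString();
        if (!result.equals("cde")) throw new AssertionError(result);
        System.out.println(result);          // prints "cde"
    }
}
```

The parser then only ever sees the `BoundedInputStream`, so it cannot accidentally read past the contained file's end.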
First, [FileInputStream.skip() has a bug](https://bugs.java.com/bugdatabase/view_bug?bug_id=6294974) which may make the file underneath skip beyond the EOF marker of the file, so be wary of that one.

I've personally found working with Input/OutputStreams to be a pain compared to using FileReader and FileWriter, and you're showing the main issue I have with them: the need to close the streams after using them. One of the issues is that you can never be sure if you've closed up all the resources properly unless you make the code a bit too cautious like this:

```
public void parse(File in, long size) {
    FileInputStream fis = new FileInputStream(in);
    try {
        // do file content handling here
    } finally {
        fis.close();
    }
    // do parsing here
}
```

This is of course bad in the sense that this would lead to creating new objects all the time which may end up eating a lot of resources. The good side of this is of course that the stream will get closed even if the file handling code throws an exception.
Good design: How to pass InputStreams as argument?
[ "", "java", "inputstream", "" ]
I would like to load a specific ConfigurationSection, but the way the CLR loads assemblies is giving me some trouble: my CustomConfigurationSection definition is in a specific assembly which cannot be found by the overall assembly-loading process, because I'm using an external tool which basically loads my assembly, discovers some information about it via reflection and then tries to "install" it. Very much like installutil when trying to install a Windows service. I'm going nuts because the ConfigurationManager tries to find the required assembly for my ConfigurationSection under the location of the original process. I know this for sure as I'm using SysInternals Process Monitor. Can someone provide a workaround or some direction? Thanks!
If you know the path to your assembly, then you should try ConfigurationManager.OpenExeConfiguration(exePath).
If your assembly is needed to deserialize your custom configuration section, but the CLR can't find the assembly, then I think you're out of luck (or am I misunderstanding the problem?). Is there any way you can get the CLR to find your assembly (providing a hint path maybe)? If not, maybe you'd be better off using a separate XML file for this data instead of using app.config/web.config.
How to: Use a ConfigurationSection without loading it via the GetSection call
[ "", "c#", "configurationsection", "" ]
I understand that .NET FileStream's Flush method only writes the current buffer to disk, but dependent on Windows' disk driver and the hard disk firmware this is no guarantee that the data is actually physically written to disk. Is there a .NET or Win32 method that can give me this guarantee? So if there is power loss one nanosecond after the call to this method comes back, I can still be sure that everything is OK?
Under Windows, look at [FlushFileBuffers](http://msdn.microsoft.com/en-us/library/aa364439(v=vs.85).aspx) (Win32 API).
Stefan S. said:

> I understand that .NET FileStream's Flush method only writes the current buffer to disk

No, .NET FileStream's Flush only writes the .NET buffers to the OS cache, it does not flush the OS cache to disk. Sadly the MSDN doc on this class doesn't say that. For .NET < 4.0, you'll have to call Flush + Win32's FlushFileBuffers:

```
using System.Runtime.InteropServices;
. . .
// start of class:
[DllImport("kernel32", SetLastError=true)]
private static extern bool FlushFileBuffers(IntPtr handle);
. . .
stream.Flush();  // Flush .NET buffers to OS file cache.
#pragma warning disable 618,612  // disable stream.Handle deprecation warning.
if (!FlushFileBuffers(stream.Handle))  // Flush OS file cache to disk.
#pragma warning restore 618,612
{
   Int32 err = Marshal.GetLastWin32Error();
   throw new Win32Exception(err, "Win32 FlushFileBuffers returned error for " + stream.Name);
}
```

For .NET 4.0, you can instead use the new Flush(true) method.

11/09/2012 update: MS bug report [here](http://connect.microsoft.com/VisualStudio/feedback/details/634385/filestream-flush-flushtodisk-true-call-does-not-flush-the-buffers-to-disk#details) says it's broken, then fixed, but doesn't say what version or service pack it was fixed in! Sounds like the bug was: if the internal .NET FileStream buffer is empty, Flush(true) did nothing??
How to ensure all data has been physically written to disk?
[ "", "c#", ".net", "filestream", "flush", "" ]
The customer wants us to "log" the "actions" that a user performs on our system: creation, deletion and update, mostly. I already have an aspect that logs the trace, but that works at a pretty low level logging every method call. So if a user clicked on the button "open medical file" the log would read: 1. closePreviousFiles("patient zero") 2. createMedicalFile("patient zero") --> file #001 3. changeStatus("#001") --> open while the desired result is: 1. opened medical file #001 for patient zero I'm thinking of instrumenting the Struts2 actions with log statements, but I'm wondering... is there another way to do that? I might use AspectJ again (or a filter) and keep the logic in just one place, so that I might configure the log easily, but then I'm afraid everything will become harder to understand (i.e. "the log for this action is wrong... where the heck should I look for the trouble?").
Sounds like your client wants an audit trail of the user's actions in the system. Consider, at each action's entry point (from the web request), starting an audit entry with an enum/constant for the action. Populate it with the information the user has provided if possible. At exit/finally, indicate in the audit whether it succeeded or failed. An example in pseudocode:

```
enum Actions {
   OPEN_MEDICAL_FILE
   ...
}

void handleRequest(...) {
    String patient = ...;
    Audit audit = new Audit(OPEN_MEDICAL_FILE);
    audit.addParameter("patient", patient);

    try {
        ... more things ..
        audit.addParameter("file", "#001");
        ... more things ...
        audit.setSuccess();
    } finally {
        audit.save();
    }
}
```

What is important here is that all user actions are saved, regardless of success or failure. Also, the client really, really needs to know all relevant information along with the action. Since we are logging action constants and data, the presentation of the audit to the client can be coded separately. You gain flexibility too, since a change of presentation string (e.g. "opened medical file #001 for patient zero" to "patient zero #001 medical file opened") is not determined at the time of the action, but later. You don't have to re-massage the audit data.
I recently post-processed logs to generate summary. You may want to consider that approach, especially if #2 and #3 in the above logs are generated at different places and the desired result would require to carry around state from one place to another.
Logging user actions
[ "", "java", "struts", "logging", "aspectj", "" ]
I'm building a CMS in PHP and one dread I have is that the users will have to fill the data in from existing Word (and Excel, but nevermind that) documents. Now, I've seen what happens when they carelessly copy and paste from Word to a textarea: the database got filled with crap markup. Now, I could certainly strip all markup myself, but I'd have to start learning about it first. So I ask you: have you tested some functionality - plugins of the usual suspects (tinyMCE, FCKeditor, etc) that helps here? Bonus for the least intrusive solution.
Sadly, most of the HTML editor controls I've used either:

1. Have a button to strip out various elements of markup (Word, HTML, script, etc.)
2. Strip out **all** markup on paste via JavaScript.

If you leave it to a button, then generally the non-technical users will forget to press it because they don't (some would say "shouldn't have to") care about it :(

With a bit of playing around with Regular Expressions (now you have another problem ;)) you could do something similar to option 2, but just for the Word XML.
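For the regular-expressions route, a client-side cleaner might start out something like this (the patterns below target the usual `mso-`/Office-namespace offenders and are nowhere near exhaustive):

```javascript
// Illustrative sketch: strip common Word-specific markup from pasted HTML.
function cleanWordHtml(html) {
    return html
        .replace(/<!--\[if [\s\S]*?<!\[endif\]-->/gi, "")         // conditional comments
        .replace(/<\/?(?:o:p|v:[a-z]+|w:[a-z]+|xml)[^>]*>/gi, "")  // Office namespaced tags
        .replace(/\s(?:class|style|lang)="mso[^"]*"/gi, "");       // mso-* classes and styles
}

console.log(cleanWordHtml('<p class="MsoNormal">Hello<o:p></o:p></p>'));
// -> <p>Hello</p>
```

You would hook something like this into the editor's paste event, so users never have to remember to press a cleanup button.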
I have found FCKEditor to handle text yanked and thrown at it from Word documents, much better than tinyMCE.
How do I remove Word markup crap when inserting to a form?
[ "", "php", "ms-word", "tinymce", "textarea", "fckeditor", "" ]
How do I make the computer's internal speaker beep in C# without external speakers?
In .Net 2.0, you can use [`Console.Beep`](https://learn.microsoft.com/en-us/dotnet/api/system.console.beep). ``` // Default beep Console.Beep(); ``` You can also specify the frequency and length of the beep in milliseconds. ``` // Beep at 5000 Hz for 1 second Console.Beep(5000, 1000); ```
Use [`System.Media.SystemSounds`](https://learn.microsoft.com/en-us/dotnet/api/system.media.systemsounds) to get the sounds for various events, then [Play](https://learn.microsoft.com/en-us/dotnet/api/system.media.systemsound.play) them: ``` System.Media.SystemSounds.Beep.Play(); System.Media.SystemSounds.Asterisk.Play(); System.Media.SystemSounds.Exclamation.Play(); System.Media.SystemSounds.Question.Play(); System.Media.SystemSounds.Hand.Play(); ```
How can I make the computer beep in C#?
[ "", "c#", ".net", "audio", "beep", "" ]
I have a function that, among other things, takes in an object and a Type, and converts the object into that Type. However, the input object is often a double, and the type some variation of int (uint, long, etc.). I want this to work if a round number is passed in as a double (like 4.0), but to throw an exception if a decimal is passed in (4.3). Is there any more elegant way to check if the Type is some sort of int? ``` if (inObject is double && (targetType == typeof (int) || targetType == typeof (uint) || targetType == typeof (long) || targetType == typeof (ulong) || targetType == typeof (short) || targetType == typeof (ushort))) { double input = (double) inObject; if (Math.Truncate(input) != input) throw new ArgumentException("Input was not an integer."); } ``` Thanks.
This seems to do what you ask. I have only tested it for doubles, floats and ints. ``` public int GetInt(IConvertible x) { int y = Convert.ToInt32(x); if (Convert.ToDouble(x) != Convert.ToDouble(y)) throw new ArgumentException("Input was not an integer"); return y; } ```
``` int intvalue; if (!Int32.TryParse(inObject.ToString(), out intvalue)) throw new ArgumentException("Not a whole number or not a valid int"); return intvalue; // this now contains your value as an integer! ```
Check for any int type in C#?
[ "", "c#", "reflection", "casting", "types", "" ]
I'm trying to implement paging using row-based limiting (for example: `setFirstResult(5)` and `setMaxResults(10)`) on a Hibernate Criteria query that has joins to other tables. Understandably, data is getting cut off randomly; and the reason for that is explained [here](https://developer.jboss.org/wiki/HibernateFAQ-AdvancedProblems#jive_content_id_Hibernate_does_not_return_distinct_results_for_a_query_with_outer_join_fetching_enabled_for_a_collection_even_if_I_use_the_distinct_keyword). As a solution, the page suggests using a "second sql select" instead of a join. How can I convert my existing criteria query (which has joins using `createAlias()`) to use a nested select instead?
You can achieve the desired result by requesting a list of distinct ids instead of a list of distinct hydrated objects. Simply add this to your criteria: ``` criteria.setProjection(Projections.distinct(Projections.property("id"))); ``` Now you'll get the correct number of results according to your row-based limiting. The reason this works is because the projection will perform the distinctness check **as part of** the sql query, instead of what a ResultTransformer does which is to filter the results for distinctness **after** the sql query has been performed. Worth noting is that instead of getting a list of objects, you will now get a list of ids, which you can use to hydrate objects from hibernate later.
I am using this one with my code. Simply add this to your criteria: > criteria.setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY); That code behaves like a `SELECT DISTINCT * FROM table` in native SQL.
How to get distinct results in hibernate with joins and row-based limiting (paging)?
[ "", "java", "hibernate", "paging", "criteria", "distinct", "" ]
I am trying to debug a problem where a user clicks on the button and the UI just dies. I know, good luck. The logs just end after the user clicks the button, so I'm thinking there may be some exception/error that we are not logging. Maybe an OutOfMemoryError. Any suggestions on how to proceed to get more information? Java command-line settings, etc. Thanks for any help - rich
Which version of Java and what machine? In any case, here's the scoop: the event queue thread runs somewhat separately from the main thread. In Java < 5 there was a bug that made it difficult to capture exceptions from that thread, so some exceptions just went away. In Java 5, there's a new method `Thread.setDefaultUncaughtExceptionHandler()` that will let you set up an exception handler for anything that might otherwise have gone uncaught. Add a handler there, and catch all `Throwable`s and log them. This is also a good hack for dealing with things you might otherwise call `System.exit()` for: have a `normalExit` `Throwable`, throw that anywhere you'd call exit in the GUI, and make sure everything gets cleaned up.
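A minimal, runnable sketch of installing such a handler (Java 5+); the field capturing the message is just for demonstration — in real code you would call your logger instead:

```java
public class ExceptionLogger {
    // Captured message, so the demo below can show the handler fired.
    static volatile String logged;

    public static void main(String[] args) {
        // Install a last-chance handler for exceptions that escape any
        // thread, including the Swing event dispatch thread.
        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            public void uncaughtException(Thread t, Throwable e) {
                logged = "Uncaught in " + t.getName() + ": " + e.getMessage();
                // In real code: write this to your log file.
                System.err.println(logged);
            }
        });

        // Demonstrate: an exception thrown on another thread is caught.
        Thread worker = new Thread(new Runnable() {
            public void run() { throw new RuntimeException("boom"); }
        });
        worker.start();
        try {
            worker.join(); // the handler runs before the thread dies
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Once this is in place, a button click that kills the UI should at least leave a stack trace behind.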
Try redirecting standard output - you'll probably see the exception stack trace there.
Java swing UI crash debugging
[ "", "java", "swing", "" ]
When we talk about the .NET world, the CLR is what everything we do depends on. What is the minimum knowledge of the CLR a .NET programmer must have to be a good programmer? Can you give me one/many subjects you think are the most important: GC? AppDomain? Threads? Processes? Assemblies/Fusion? I would very much appreciate it if you posted links to articles, blogs, books or other sources on the topic where more information can be found. Update: I noticed from some of the comments that my question was not clear to some. When I say CLR I don't mean the .NET Framework. It is NOT about memorizing .NET libraries; it is rather about understanding how the execution environment (in which those libraries live at runtime) works. My question was directly inspired by John Robbins, the author of the "Debugging Applications for Microsoft® .NET" book (which I recommend) and a colleague of the here-cited Jeffrey Richter at Wintellect. In one of the introductory chapters he says that "...any .NET programmer should know what is probing and how assemblies are loaded into runtime". Do you think there are other such things? Last Update: After having read the first 5 chapters of "CLR via C#" I must say to anyone reading this: if you haven't already, read this book!
Most of those are way deeper than the kind of thing many developers fall down on in my experience. Most misunderstood (and important) aspects in my experience: * Value types vs reference types * Variables vs objects * Pass by ref vs pass by value * Delegates and events * Distinguishing between language, runtime and framework * Boxing * Garbage collection On the "variables vs objects" front, here are three statements about the code ``` string x = "hello"; ``` * (Very bad) `x` is a string with 5 letters * (Slightly better) `x` is a reference to a string with 5 letters * (Correct) The value of `x` is a reference to a string with 5 letters Obviously the first two are okay in "casual" conversation, but only if everyone involved understands the real situation.
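A minimal sketch (with hypothetical types) of the value-type vs reference-type point from the list above:

```csharp
struct PointValue { public int X; }   // value type: assignment copies the data
class PointRef { public int X; }      // reference type: assignment copies the reference

class Demo
{
    static void Main()
    {
        var v1 = new PointValue { X = 1 };
        var v2 = v1;      // independent copy of the value
        v2.X = 99;
        System.Console.WriteLine(v1.X);  // 1 — v1 is untouched

        var r1 = new PointRef { X = 1 };
        var r2 = r1;      // same object, two references
        r2.X = 99;
        System.Console.WriteLine(r1.X);  // 99 — both point at one object
    }
}
```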
A *great* programmer cannot be measured by the quantity of things he knows about the CLR. Sure it's a nice beginning, but he must also know OOP/D/A and a lot of other things like Design Patterns, Best Practices, O/RM concepts etc. Fact is, I'd say a "great .Net programmer" doesn't necessarily need to know much about the CLR at all as long as he has great knowledge about general programming theory and concepts... I would rather hire a "great Java developer" with great general knowledge and experience in Java for a .Net job than a "master" in .Net who has little experience and thinks O/RM is a stock ticker and stored procedures are a great way to "abstract away the database"... I've seen *professional teachers* in .Net completely fail at doing really simple things without breaking their backs due to lack of "general knowledge" while they at the same time "know everything" there is to know about .Net and the CLR...
What is the minimum knowledge of CLR a .NET programmer must have to be a good programmer?
[ "", "c#", ".net", "clr", "" ]
Is there a way to have different application settings per each build configuration? I would like to be able to batch build a few configurations with different settings, instead of having to change the settings and build, rinse repeat. Thanks in advance!
I don't know much about the appsettings architecture (I've never really used it), but you can define different values for constants using a bit of MSBuild magic. Create two .cs files, Constants1.cs and Constants2.cs (or name them after your configurations). In each file, define a class called Constants (or whatever) -- but do it as if each file were the only definition (ie. use the same class name). Typically, this should just define `public static readonly` fields -- do **not** use `const`, as that can get you into trouble with partial builds. Now Unload your project file and Edit it. Find the entries that look like this: ``` <Compile Include="Constants1.cs" /> <Compile Include="Constants2.cs" /> ``` and change them like so: ``` <Compile Include="Constants1.cs" Condition="'$(Configuration)'=='Debug'" /> <Compile Include="Constants2.cs" Condition="'$(Configuration)'=='Release'" /> ``` Finally Save and Reload your project. Now only one of the files will actually be built at a time, depending on your build configuration.
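For completeness, the two constant files might look like this (the names and values are just placeholders):

```csharp
// Constants1.cs -- only compiled when Configuration == Debug.
// Constants2.cs defines the same class with the Release values.
namespace MyApp
{
    public static class Constants
    {
        // static readonly rather than const, so referencing assemblies
        // don't bake stale values in on partial builds
        public static readonly string ServiceUrl = "http://localhost/dev";
        public static readonly bool VerboseLogging = true;
    }
}
```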
Besides all these, MS promised to add this feature in VS 2010.
C# VS.NET 2008 Changing settings per configuration
[ "", "c#", ".net", "visual-studio-2008", "appsettings", "" ]
This is about a school assignment so I'm trying to do things by the book. I feel like I'm getting to grips with Java, but good programming practice, design patterns, etc. are all rather new to me. I've made my model and it works fine. It contains a student class which contains a number of fields with student information (obviously). Now I want this information to be displayed in a couple of JLabels. It is easiest to pass a whole student object to the GUI and call JLabel.setText(student.getName()) etc. a number of times, and no doubt this will work perfectly. But I feel that the student object is part of the model and by passing it to the GUI I'm not using an MVC pattern anymore. Am I right here? I did make a controller object for passing data to and from the model and the GUI but for passing only strings to the labels or setting the JLabel text through the controller, I need either a lot of setters in the GUI, or I would have to make all JLabels global fields which doesn't feel good either. Any advice on this?
The GUI should worry about all the interface stuff. I guess you have a class that is your GUI for doing 'stuff' to the student with your JLabels. Just pass your student instance to this class and let it do what it needs to do. When it is done it will call a controller method to do whatever needs to be done. OOD deals with passing around the objects that you want to manipulate. You don't need to break the objects apart for passing in MVC; really, you are supposed to pass them around in the general case. The model defines the data objects that you will be working with... or more specifically that the system will be working with (controller) and the users will be working with (GUI). These classes are built to be passed around. You will have to do a lot more work if you un-encapsulate all the information hehe :)
Note that the Swing components all use the MVC pattern internally, so they already have a model. This is more relevant with complex widgets like JTable, where you definitely want your model to implement the `TableModel` interface. The big question is how to reconcile your domain model with the internal models of the individual Swing components. One way to do this is to have `setModel()` and `getModel()` methods in your GUI class, which translate between them, i.e. `setModel()` takes your model and calls `setText()` on the individual JLabels, etc.
MVC in Java
[ "", "java", "model-view-controller", "" ]
As of SQL Server 2005, you used to be able to open a flat file in SQL Management Studio off the right click menu for any database table. Once opened, you could add or update records right there in the grid. Now that SQL Server 2008 is out, Microsoft has hidden that feature, at least from the right-click menu. Where did it go?
It's replaced by "Edit Top 200 Rows". You can change that command in the Tools > Options > SQL Server Object Explorer > Commands. But once you right-click a table, choose "Edit Top 200 Rows", hunt on the toolbar for a button called "Show SQL Pane". From here you can edit the query so the grid shows a subset of the data that you want. They did this because people were accidentally opening huge tables with the old Open Table command. This method seems to work pretty well, though in general I find that the 2008 version is pretty wonky when talking to 2005 databases, but that's another matter....
Change the value in options to 0 and you can open the entire table.
Where did Open Table go in SQL Server 2008?
[ "", "sql", "database", "sql-server-2008", "" ]
Basically what the title says... I need to have an image that, when clicked, calls script.php for instance, and in that PHP script file I get the image coordinates where the mouse was clicked. Is this possible? **EDIT:** After a couple of answers I realized I didn't describe my problem correctly... The thing is, I don't have total control over the HTML. The control I have for the image and the image link is the control BBCode provides me. Basically what I want to do is to have a forum signature with links to various sections on my website. You could argue I could use multiple images but most forums limit how much you can type for the signature, which is not enough for multiple images. So, I will only be able to do something like this: ``` [url=http://www.mydomain.com/script.php] [img]http://www.mydomain.com/signature.jpg[/img] [/url] ``` Which translates to something like this: ``` <a href="http://www.mydomain.com/script.php"> <img src="http://www.mydomain.com/signature.jpg" /> </a> ```
If you can't: 1. use JavaScript, or 2. use input type="image", or 3. add any attributes to your img tag (to do things like create an image map) then, no, you won't be able to do what you describe.
If you use an input type="image", which works like a button, it will send you x and y coordinates of the mouse click (submits form too). More info here: <http://www.htmlhelp.com/reference/html40/forms/input.html#image> Long time since I used it, but I did make it work for a "where's the ball?" competition on a site many years ago. Update: Sounds like the environment is just too limited to do what you want. If the forum lets you do an image map that would work, but I doubt they'd let you. Only other things I could think of would be flash or javascript, similarly, they prob won't allow them. You need something with a little more smarts than an image and an anchor to make this work.
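For reference, when `input type="image"` *is* available, reading the click point in PHP looks like this (the field name `sig` is hypothetical; browsers submit the coordinates as `sig.x`/`sig.y`, and PHP rewrites the dots to underscores in `$_GET`):

```php
<!-- the clickable image, submitted like a form button -->
<form action="script.php" method="get">
  <input type="image" name="sig" src="signature.jpg" alt="signature" />
</form>

<?php
// script.php -- read the click coordinates, if present
$x = isset($_GET['sig_x']) ? (int) $_GET['sig_x'] : null;
$y = isset($_GET['sig_y']) ? (int) $_GET['sig_y'] : null;
```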
Is it possible to get the image mouse click location with PHP?
[ "", "php", "image", "mouse", "click", "location", "" ]
I don't know if this is too specific a question, if that is possible, but I'm having to port an app that uses Castle Windsor to Unity so that there isn't a reliance on non-microsoft approved libraries. I know I know but what are you going to do. Anyway I've managed it but I'm not happy with what I've got. In Windsor I had this: ``` Register( AllTypes.Of(typeof(AbstractPresenter<>)).FromAssemblyNamed("Links.Mvp"), AllTypes.Of(typeof(IView)).FromAssemblyNamed("Links.WinForms").WithService.FromInterface()); ``` which I've converted to this in unity ``` RegisterType<IMainView, MainView>(); RegisterType<IConfigureLinkView, ConfigureLinkView>(); RegisterType<IConfigureSourceView, ConfigureSourceView>(); RegisterType<IConfigureSinkView, ConfigureSinkView>(); RegisterType<MainPresenter, MainPresenter>(); RegisterType<ConfigureLinkPresenter, ConfigureLinkPresenter>(); RegisterType<ConfigureSourcePresenter, ConfigureSourcePresenter>(); RegisterType<ConfigureSinkPresenter, ConfigureSinkPresenter>(); ``` As you can see I'm having to register every single thing rather than be able to use some sort of auto-configuration. So my question is: is there a better way of doing this in unity? Thanks, Adam.
Cool. This feature is not in Unity yet, but if you felt a bit ambitious you could set up your own convention-based registration. Found below is a snippet that works for the executing assembly and interfaces. Good luck. P.S. This feels like a big hack; I would probably continue just registering all types by hand. ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Reflection; namespace Forum { class Program { static void Main(string[] args) { // get your assemblies and types you can register Assembly a = Assembly.GetExecutingAssembly(); var types = a.GetTypes(); var bindTo = from t in types where t.IsAbstract || t.IsInterface select t; // apply your conventions to filter the types to be registered var interfacePairs = from t in bindTo.Where(x => x.IsInterface) let match = types.FirstOrDefault(x => x.Name == t.Name.Substring(1)) where match != null select new Pair { To = t, From = match }; var abstractPairs = new Pair[] {}; // set up the generic form of the method to register the types var thisType = typeof(Program); var bindings = BindingFlags.Static | BindingFlags.Public; MethodInfo genericMethod = thisType.GetMethod("RegisterType", bindings); // register all your types by executing the specialized generic form of the method foreach (var t in interfacePairs.Concat(abstractPairs)) { Type[] genericArguments = new Type[] { t.To, t.From }; MethodInfo method = genericMethod.MakeGenericMethod(genericArguments); method.Invoke(null, new object[] {}); } Console.ReadKey(); } public static void RegisterType<To, From>() { // literal braces must be doubled in a composite format string Console.WriteLine("Register {{ To: {0} From: {1} }}", typeof(To), typeof(From)); } // Test classes that should be picked up interface ITest { } class Test : ITest { } class Pair { public Type To { get; set; } public Type From { get; set; } } } } ```
Check [this](http://autoregistration.codeplex.com/) out: ``` var container = new UnityContainer(); container .ConfigureAutoRegistration() .LoadAssemblyFrom("Plugin.dll") .IncludeAllLoadedAssemblies() .ExcludeSystemAssemblies() .ExcludeAssemblies(a => a.GetName().FullName.Contains("Test")) .Include(If.Implements<ILogger>, Then.Register().UsingPerCallMode()) .Include(If.ImplementsITypeName, Then.Register().WithTypeName()) .Include(If.Implements<ICustomerRepository>, Then.Register().WithName("Sample")) .Include(If.Implements<IOrderRepository>, Then.Register().AsSingleInterfaceOfType().UsingPerCallMode()) .Include(If.DecoratedWith<LoggerAttribute>, Then.Register() .AsInterface<IDisposable>() .WithTypeName() .UsingLifetime<MyLifetimeManager>()) .Exclude(t => t.Name.Contains("Trace")) .ApplyAutoRegistration(); ```
Castle Windsor to Unity - can you auto-configure in Unity the same way you can in CW?
[ "", "c#", "dependency-injection", "castle-windsor", "unity-container", "" ]
I'm a C programmer trying to understand C++. Many tutorials demonstrate object instantiation using a snippet such as: ``` Dog* sparky = new Dog(); ``` which implies that later on you'll do: ``` delete sparky; ``` which makes sense. Now, in the case when dynamic memory allocation is unnecessary, is there any reason to use the above instead of ``` Dog sparky; ``` and let the destructor be called once sparky goes out of scope? Thanks!
On the contrary, you should always prefer stack allocations, to the extent that as a rule of thumb, you should never have new/delete in your user code. As you say, when the variable is declared on the stack, its destructor is automatically called when it goes out of scope, which is your main tool for tracking resource lifetime and avoiding leaks. So in general, every time you need to allocate a resource, whether it's memory (by calling new), file handles, sockets or anything else, wrap it in a class where the constructor acquires the resource, and the destructor releases it. Then you can create an object of that type on the stack, and you're guaranteed that your resource gets freed when it goes out of scope. That way you don't have to track your new/delete pairs everywhere to ensure you avoid memory leaks. The most common name for this idiom is [RAII](http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization) Also look into smart pointer classes which are used to wrap the resulting pointers on the rare cases when you do have to allocate something with new outside a dedicated RAII object. You instead pass the pointer to a smart pointer, which then tracks its lifetime, for example by reference counting, and calls the destructor when the last reference goes out of scope. The standard library has `std::unique_ptr` for simple scope-based management, and `std::shared_ptr` which does reference counting to implement shared ownership. > Many tutorials demonstrate object > instantiation using a snippet such as ... So what you've discovered is that most tutorials suck. ;) Most tutorials teach you lousy C++ practices, including calling new/delete to create variables when it's not necessary, and giving you a hard time tracking lifetime of your allocations.
Though having things on the stack might be an advantage in terms of allocation and automatic freeing, it has some disadvantages. 1. You might not want to allocate huge objects on the stack. 2. Dynamic dispatch! Consider this code: ``` #include <iostream> class A { public: virtual void f(); virtual ~A() {} }; class B : public A { public: virtual void f(); }; void A::f() { std::cout << "A"; } void B::f() { std::cout << "B"; } int main(void) { A *a = new B(); a->f(); delete a; return 0; } ``` This will print "B". Now let's see what happens when using the stack: ``` int main(void) { A a = B(); a.f(); return 0; } ``` This will print "A", which might not be intuitive to those who are familiar with Java or other object-oriented languages. The reason is that you don't have a pointer to an instance of `B` any longer. Instead, an instance of `B` is created and copied (sliced) into `a`, a variable of type `A`. Some things might happen unintuitively, especially when you are new to C++. In C you have your pointers and that's it. You know how to use them and they ALWAYS do the same thing. In C++ this is not the case. Just imagine what happens when you use `a` in this example as an argument for a method - things get more complicated and it DOES make a huge difference if `a` is of type `A` or `A*` or even `A&` (call-by-reference). Many combinations are possible and they all behave differently.
C++ Object Instantiation
[ "", "c++", "instantiation", "" ]
Let's say I've got an object `Customer` with a couple properties (`ID`, `FirstName`, `LastName`). I've got the default constructor `Customer()`, but then I've also got a `Customer(DataRow dr)`, since I load this object from a database and that's a simple way to do it. I frequently come to a point where I want to set up another constructor, `Customer(int ID)`, for times when I want to load a `Customer` but I haven't made the trip to the database yet. The simplest way to me seems to be like so: ``` Customer(int ID) { DataTable dt = DataAccess.GetCustomer(ID); if (dt.Rows.Count > 0) { // pass control to the DataRow constructor at this point? } else { // pass control to the default constructor at this point? } } ``` It makes sense to reuse the code that's already in the DataRow constructor, but I can't figure out a way to call that and return what it gives me. Through Googling, I've found information about constructor overloading with the `: this()` syntax, but all those examples seem backwards or incompatible with what I'm trying to do. So there's a gap in my understanding of constructors, but I can't seem to sort it out. What am I missing?
Simplest solution seems to be: construct another function that does the job you want to do and have both constructors call that function.
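A sketch of that approach for the Customer example above (the column names and the `DataAccess.GetCustomer` call are assumed from the question):

```csharp
using System.Data;

public class Customer
{
    public int ID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }

    public Customer() { }

    public Customer(DataRow dr)
    {
        Init(dr);
    }

    public Customer(int id)
    {
        DataTable dt = DataAccess.GetCustomer(id);
        if (dt.Rows.Count > 0)
            Init(dt.Rows[0]);
        // else: leave the defaults, just like the default constructor
    }

    // Shared initialization logic, called from both constructors.
    private void Init(DataRow dr)
    {
        ID = (int)dr["ID"];
        FirstName = (string)dr["FirstName"];
        LastName = (string)dr["LastName"];
    }
}
```

Constructor chaining with `: this(...)` only works when the argument is available up front, which is why a shared method is the usual answer when you first have to fetch the data.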
I'm concerned that what you do not get is not about constructors, but about Single Responsibility Principle and Loose Coupling. For instance, the code that you show above means: * your domain model contains data access code * you are not reusing any code from a base class which may be injected with the data-access logic in the first place * your domain object is aware of data structures other than itself or its members, like DataTable or DataRow, which ties it with those data structures and makes it cumbersome to use other data structures. Of course this assumes that you are not using the ActiveRecord Model, which appears to be the case here, but will still be implemented with tight coupling. My preference is that a domain object will only contain logic associated to holding and manipulating genuine Customer data, nothing else. As such my constructor for it will be: ``` class Customer { public Customer(int id, string firstName, string LastName) { Id = id; FirstName = firstName; LastName = lastName; } public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } } ``` **UPDATE:** That being said, this is a major reason why some people prefer ORMs which allow for POCOs, like NHibernate: there is no need to put data loading logic there. If this were done in NHibernate, for example, you would've needed a DomainObject base class: ``` public class Customer : DomainObject ``` Which in turn can be consumed by an implementation of NHibernate's IRepository: ``` public class Repository<T> : IRepository where T : DomainObject ``` This `Repository` object would contain all the code required for CRUD operations. 
If you wish to stick with ADO.NET, one possible solution is to create DAL manager objects for all the loading: ``` public class CustomerManager { public IEnumerable<Customer> LoadCustomers() { // load all customers into a DataTable dt here foreach (DataRow dr in dt.Rows) { yield return new Customer((int) dr["Id"], dr["FirstName"].ToString(), dr["LastName"].ToString()); } } public Customer LoadCustomerByID(int id) { // load one customer's row dr here return new Customer((int) dr["Id"], dr["FirstName"].ToString(), dr["LastName"].ToString()); } } ``` (Note the return type of `LoadCustomers` must be `IEnumerable<Customer>` for `yield return` to compile.) Of course there are a lot more opportunities to even further promote code reuse here.
overloading constructors and reusing code
[ "", "c#", "oop", "constructor", "" ]
The following test case fails in rhino mocks: ``` [TestFixture] public class EnumeratorTest { [Test] public void Should_be_able_to_use_enumerator_more_than_once() { var numbers = MockRepository.GenerateStub<INumbers>(); numbers.Stub(x => x.GetEnumerator()).Return(new List<int> { 1, 2, 3 }.GetEnumerator()); var sut = new ObjectThatUsesEnumerator(); var correctResult = sut.DoSomethingOverEnumerator2Times (numbers); Assert.IsTrue(correctResult); } } public class ObjectThatUsesEnumerator { public bool DoSomethingOverEnumerator2Times(INumbers numbers) { int sum1 = numbers.Sum(); // returns 6 int sum2 = numbers.Sum(); // returns 0 =[ return sum1 + sum2 == sum1 * 2; } } public interface INumbers : IEnumerable<int> { } ``` I think there is something very subtle about this test case, and I think it is from me not thinking through how Rhino Mocks stubbing actually works. Typically, when you enumerate over an IEnumerable, you are starting with a fresh IEnumerator. In the example above, it looks like I could be re-using the same enumerator the second time I am calling sum, and if the enumerator is already at the end of its sequence, that would explain why the second call to Sum() returns 0. If this is the case, how could I mock out the GetEnumerator() in such a way that it behaves in the way that I am wanting it to (e.g. new enumerator or same enumerator reset to position 0)? **How would you modify the above test case so that the second .Sum() call actually returns 6 instead of 0?**
The WhenCalled() api lets you dynamically resolve return values. Changing the test case to the following will allow it to pass: ``` numbers.Stub(x => x.GetEnumerator()) .Return(null) .WhenCalled(x => x.ReturnValue = new List<int> { 1, 2, 3 }.GetEnumerator() ); ``` So instead of returning the same enumerator, the stubbed behavior will always return a new enumerator.
The statement ``` numbers.Stub(x => x.GetEnumerator()).Return(new List<int> { 1, 2, 3 }.GetEnumerator()); ``` is identical to ``` var enumerator = new List<int> { 1, 2, 3 }.GetEnumerator(); numbers.Stub(x => x.GetEnumerator()).Return(enumerator); ``` In your test, you are telling Rhino Mocks to give the identical `IEnumerator<int>` instance `enumerator` twice. That's not what you intended. A single instance of `IEnumerator<int>` is good only for one enumeration, not two enumerations (`Reset()` is not supported, typically). You intended Rhino Mocks to give two different instances of `IEnumerator<int>`, so that they can be summed separately, just as any call to any other `GetEnumerator<int>()` function would do.
Mocking GetEnumerator() method of an IEnumerable<T> types
[ "", "c#", "mocking", "rhino-mocks", "" ]
I just wrote the following C++ function to programmatically determine how much RAM a system has installed. It works, but it seems to me that there should be a simpler way to do this. Am I missing something? ``` std::string getRAM() { FILE* stream = popen("head -n1 /proc/meminfo", "r"); std::ostringstream output; int bufsize = 128; while( !feof(stream) && !ferror(stream)) { char buf[bufsize]; int bytesRead = fread(buf, 1, bufsize, stream); output.write(buf, bytesRead); } pclose(stream); std::string result = output.str(); std::string label, ram; std::istringstream iss(result); iss >> label; iss >> ram; return ram; } ``` First, I'm using `popen("head -n1 /proc/meminfo")` to get the first line of the meminfo file from the system. The output of that command looks like > MemTotal: 775280 kB Once I've got that output in an `istringstream`, it's simple to tokenize it to get at the information I want. Is there a simpler way to read in the output of this command? Is there a standard C++ library call to read in the amount of system RAM?
On Linux, you can use the function `sysinfo` which sets values in the following struct: ``` #include <sys/sysinfo.h> int sysinfo(struct sysinfo *info); struct sysinfo { long uptime; /* Seconds since boot */ unsigned long loads[3]; /* 1, 5, and 15 minute load averages */ unsigned long totalram; /* Total usable main memory size */ unsigned long freeram; /* Available memory size */ unsigned long sharedram; /* Amount of shared memory */ unsigned long bufferram; /* Memory used by buffers */ unsigned long totalswap; /* Total swap space size */ unsigned long freeswap; /* swap space still available */ unsigned short procs; /* Number of current processes */ unsigned long totalhigh; /* Total high memory size */ unsigned long freehigh; /* Available high memory size */ unsigned int mem_unit; /* Memory unit size in bytes */ char _f[20-2*sizeof(long)-sizeof(int)]; /* Padding for libc5 */ }; ``` If you want to do it solely using functions of C++ (I would stick to `sysinfo`), I recommend taking a C++ approach using `std::ifstream` and `std::string`: ``` unsigned long get_mem_total() { std::string token; std::ifstream file("/proc/meminfo"); while(file >> token) { if(token == "MemTotal:") { unsigned long mem; if(file >> mem) { return mem; } else { return 0; } } // Ignore the rest of the line file.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); } return 0; // Nothing found } ```
There isn't any need to use `popen()`. You can just read the file yourself. Also, if their first line isn't what you're looking for, you'll fail, since `head -n1` only reads the first line and then exits. I'm not sure why you're mixing C and C++ I/O like that; it's perfectly OK, but you should probably opt to go all C or all C++. I'd probably do it something like this: ``` int GetRamInKB(void) { FILE *meminfo = fopen("/proc/meminfo", "r"); if(meminfo == NULL) ... // handle error char line[256]; while(fgets(line, sizeof(line), meminfo)) { int ram; if(sscanf(line, "MemTotal: %d kB", &ram) == 1) { fclose(meminfo); return ram; } } // If we got here, then we couldn't find the proper line in the meminfo file: // do something appropriate like return an error code, throw an exception, etc. fclose(meminfo); return -1; } ```
How do you determine the amount of Linux system RAM in C++?
[ "", "c++", "linux", "ram", "" ]
I have a logging table which has three columns. One column is a unique identifier, One Column is called "Name" and the other is "Status". Values in the Name column can repeat so that you might see Name "Joe" in multiple rows. Name "Joe" might have a row with a status "open", another row with a status "closed", another with "waiting" and maybe one for "hold". I would like to, using a defined precedence in this highest to lowest order:("Closed","Hold","Waiting" and "Open") pull the highest ranking row for each Name and ignore the others. Anyone know a simple way to do this? BTW, not every Name will have all status representations, so "Joe" might only have a row for "waiting" and "hold", or maybe just "waiting".
I would create a second table named something like "Status\_Precedence", with rows like: ``` Status | Order --------------- Closed | 1 Hold | 2 Waiting | 3 Open | 4 ``` In your query of the other table, do a join to this table (on `Status_Precedence.Status`) and then you can `ORDER BY Status_Precedence.Order`.
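A sketch of the full per-name query using that lookup table (table and column names as above; `Order` is bracketed because it is a reserved word in T-SQL):

```sql
-- Highest-ranking status row for each Name: rank 1 beats rank 2, etc.
SELECT DISTINCT l.Name, l.Status
FROM Logging l
JOIN Status_Precedence p ON p.Status = l.Status
JOIN (
    -- best (lowest) precedence value per name
    SELECT l2.Name, MIN(p2.[Order]) AS BestOrder
    FROM Logging l2
    JOIN Status_Precedence p2 ON p2.Status = l2.Status
    GROUP BY l2.Name
) best ON best.Name = l.Name AND best.BestOrder = p.[Order];
```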
If you don't want to create another table, you can assign numeric precedence using a SELECT CASE ``` Select Name, Status, Case Status When 'Closed' then 1 When 'Hold' then 2 When 'Waiting' then 3 When 'Open' Then 4 END as StatusID From Logging Order By StatusId -- Order based on Case ``` A lookup table is also a good solution though.
SQL Precedence Query
[ "", "sql", "operator-precedence", "" ]
I have the following scenario: a main form as MDI parent and an MDI child form; the child form shows a new form (I called it mydialog). I want to access a specific function in the MDI child form from mydialog, so I tried to set the Owner property on the mydialog object, but an exception occurred (circular reference exception). Also, the Parent and ParentForm properties of the mydialog form are null. I made a quick and dirty solution using the Tag property of mydialog. I am looking for a better solution, and to understand why there is an exception when I set the Owner property of mydialog.
> so I tried to set the Owner property in the mydialog object but an exception occurred (circular reference exception)

There should be no exception when setting the Owner property. Can you paste the exception? Also, could you paste the code that sets this value? Have you set other properties like MdiParent, etc.?

EDIT: Code Update

Try the following; it should work:

```
groupsettingsform mydialog = new groupsettingsform(); // create dialog
mydialog.Owner = this;                                // set owner to the MDI child form
mydialog.ShowDialog();                                // <== DO NOT PASS THE OWNER
```
Can you show us the code you were using to show the dialog? One of the ShowDialog overloads takes an owner window as a parameter, which may do what you are looking for.
Passing data between forms
[ "", "c#", "winforms", "" ]
I have need to select a number of 'master' rows from a table, also returning for each result a number of detail rows from another table. What is a good way of achieving this without multiple queries (one for the master rows and one per result to get the detail rows). For example, with a database structure like below: ``` MasterTable: - MasterId BIGINT - Name NVARCHAR(100) DetailTable: - DetailId BIGINT - MasterId BIGINT - Amount MONEY ``` How would I most efficiently populate the `data` object below? ``` IList<MasterDetail> data; public class Master { private readonly List<Detail> _details = new List<Detail>(); public long MasterId { get; set; } public string Name { get; set; } public IList<Detail> Details { get { return _details; } } } public class Detail { public long DetailId { get; set; } public decimal Amount { get; set; } } ```
Normally, I'd go for the two grids approach - however, you might also want to look at FOR XML - it is fairly easy (in SQL Server 2005 and above) to shape the parent/child data as xml, and load it from there. ``` SELECT parent.*, (SELECT * FROM child WHERE child.parentid = parent.id FOR XML PATH('child'), TYPE) FROM parent FOR XML PATH('parent') ``` Also - LINQ-to-SQL supports this type of model, but you need to tell it which data you want ahead of time. Via [DataLoadOptions.LoadWith](http://msdn.microsoft.com/en-us/library/system.data.linq.dataloadoptions.loadwith.aspx): ``` // sample from MSDN Northwnd db = new Northwnd(@"c:\northwnd.mdf"); DataLoadOptions dlo = new DataLoadOptions(); dlo.LoadWith<Customer>(c => c.Orders); db.LoadOptions = dlo; var londonCustomers = from cust in db.Customers where cust.City == "London" select cust; foreach (var custObj in londonCustomers) { Console.WriteLine(custObj.CustomerID); } ``` If you don't use `LoadWith`, you will get n+1 queries - one master, and one child list per master row.
It can be done with a single query like this:

```
select MasterTable.MasterId,
       MasterTable.Name,
       DetailTable.DetailId,
       DetailTable.Amount
from MasterTable
    inner join DetailTable on MasterTable.MasterId = DetailTable.MasterId
order by MasterTable.MasterId
```

Then in pseudocode:

```
foreach(row in result)
{
    if (row.MasterId != currentMaster.MasterId)
    {
        list.Add(currentMaster);
        currentMaster = new Master { MasterId = row.MasterId, Name = row.Name };
    }
    currentMaster.Details.Add(new Detail { DetailId = row.DetailId, Amount = row.Amount});
}
list.Add(currentMaster);
```

There are a few edge cases to knock off there, but it should give you the general idea.
Efficient Way To Query Nested Data
[ "", "c#", ".net", "sql-server", "sql-server-2008", "" ]
I would like to be able to submit a form in an **HTML source (string)**. In other words I need at least the ability to generate POST parameters **from a string containing HTML source of the form**. This is needed in unit tests for a Django project. I would like a solution that possibly; * Uses only standard Python library and Django. * Allows parameter generation from a specific form if there is more than one form present. * Allows me to change the values before submission. A solution that returns a (Django) form instance from a given form class is best. Because it would allow me to use validation. Ideally it would consume the source (which is a string), a form class, and optionally a form name and return the instance as it was before rendering. NOTE: I am aware this is not an easy task, and probably the gains would hardly justify the effort needed. But I am just curious about how this can be done, in a practical and reliable way. If possible.
You should re-read the [documentation about Django's testing framework](http://docs.djangoproject.com/en/dev/topics/testing/), specifically the part about testing views (and forms) with [the test client](http://docs.djangoproject.com/en/dev/topics/testing/#module-django.test.client). The test client acts as a simple web browser, and lets you make `GET` and `POST` requests to your Django views. You can read the response HTML or get the same `Context` object the template received. Your `Context` object should contain the actual `forms.Form` instance you're looking for. As an example, if your view at the URL `/form/` passes the context `{'myform': forms.Form()}` to the template, you could get to it this way: ``` from django.test.client import Client c = Client() # request the web page: response = c.get('/form/') # get the Form object: form = response.context['myform'] form_data = form.cleaned_data my_form_data = {} # put your filled-out data in here... form_data.update(my_form_data) # submit the form back to the web page: new_form = forms.Form(form_data) if new_form.is_valid(): c.post('/form/', new_form.cleaned_data) ``` Hopefully that accomplishes what you want, without having to mess with parsing HTML. **Edit**: After I re-read the Django docs about Forms, it turns out that forms are immutable. That's okay, though, just create a new `Form` instance and submit that; I've changed my code example to match this.
Since the Django test framework does this, I'm not sure what you're asking. Do you want to test a Django app that has a form? * In which case, you need to do an initial GET * followed by the resulting POST Do you want to write (and test) a Django app that submits a form to another site? Here's how we test Django apps with forms. ``` class Test_HTML_Change_User( django.test.TestCase ): fixtures = [ 'auth.json', 'someApp.json' ] def test_chg_user_1( self ): self.client.login( username='this', password='this' ) response= self.client.get( "/support/html/user/2/change/" ) self.assertEquals( 200, response.status_code ) self.assertTemplateUsed( response, "someApp/user.html") def test_chg_user( self ): self.client.login( username='this', password='this' ) # The truly fussy would redo the test_chg_user_1 test here response= self.client.post( "/support/html/user/2/change/", {'web_services': 'P', 'username':'olduser', 'first_name':'asdf', 'last_name':'asdf', 'email':'asdf@asdf.com', 'password1':'passw0rd', 'password2':'passw0rd',} ) self.assertRedirects(response, "/support/html/user/2/" ) response= self.client.get( "/support/html/user/2/" ) self.assertContains( response, "<h2>Users: Details for", status_code=200 ) self.assertContains( response, "olduser" ) self.assertTemplateUsed( response, "someApp/user_detail.html") ``` Note - we don't parse the HTML in detail. If it has the right template and has the right response string, it has to be right.
How do I submit a form given only the HTML source?
[ "", "python", "django", "testing", "parsing", "form-submit", "" ]
I've got an SQL query (using Firebird as the RDBMS) in which I need to order the results by a field, EDITION. I need to order by the contents of the field, however. i.e. "NE" goes first, "OE" goes second, "OP" goes third, and blanks go last. Unfortunately, I don't have a clue how this could be accomplished. All I've ever done is ORDER BY [FIELD] ASC/DESC and nothing else. Any suggestions? Edit: I really should clarify: I was just hoping to learn more here. I have it now so that I just have multiple select statements defining which to show first. The query is rather large and I was really hoping to learn a possibly more efficient way of doing this: example:

```
SELECT * FROM RETAIL WHERE MTITLE LIKE 'somethi%' AND EDITION='NE'
UNION
SELECT * FROM RETAIL WHERE MTITLE LIKE 'somethi%' AND EDITION='OE'
UNION
SELECT * FROM RETAIL WHERE MTITLE LIKE 'somethi%' AND EDITION='OP'
UNION
(etc...)
```
``` Order By Case Edition When 'NE' Then 1 When 'OE' Then 2 When 'OP' Then 3 Else 4 End ```
```
SELECT /*other fields*/
    CASE Edition
        WHEN 'NE' THEN 1
        WHEN 'OE' THEN 2
        WHEN 'OP' THEN 3
        ELSE 4
    END AS OrderBy
FROM /*Tables*/
WHERE /*conditions*/
ORDER BY OrderBy, /*other fields*/
```
Ordering SQL query by specific field values
[ "", "sql", "firebird", "" ]
I'm writing a basic crawler that simply caches pages with PHP. All it does is use `get_file_contents` to get contents of a webpage and regex to get all the links out `<a href="URL">DESCRIPTION</a>` - at the moment it returns: ``` Array { [url] => URL [desc] => DESCRIPTION } ``` The problem I'm having is figuring out the logic behind determining whether the page link is local or sussing out whether it may be in a completely different local directory. It could be any number of combinations: i.e. `href="../folder/folder2/blah/page.html"` or `href="google.com"` or `href="page.html"` - the possibilities are endless. What would be the correct algorithm to approach this? I don't want to lose any data that could be important.
First of all, regex and HTML don't mix. Use: ``` foreach(DOMDocument::loadHTML($source)->getElementsByTagName('a') as $a) { $a->getAttribute('href'); } ``` Links that may go outside your site start with protocol or `//`, i.e. ``` http://example.com //example.com/ ``` `href="google.com"` is link to a local file. But if you want to create static copy of a site, why not just use `wget`?
Let's first consider the properties of local links. These will either be: * *relative* with no *scheme* and no *host*, or * *absolute* with a scheme of 'http' or 'https' and a *host* that matches the machine from which the script is running That's all the logic you'd need to identify if a link is local. Use the **[parse\_url](http://php.net/parse_url)** function to separate out the different components of a URL to identify the *scheme* and *host*.
Web crawler links/page logic in PHP
[ "", "php", "hyperlink", "logic", "web-crawler", "" ]
Anyone got any insight as to select x number of non-consecutive days worth of data? Dates are standard sql datetime. So for example I'd like to select 5 most recent days worth of data, but there could be many days gap between records, so just selecting records from 5 days ago and more recent will not do.
Following the approach [Tony Andrews](https://stackoverflow.com/questions/308650/select-x-most-recent-non-consecutive-days-worth-of-data#308670) suggested, here is a way of doing it in T-SQL:

```
SELECT Value, ValueDate
FROM Data
WHERE ValueDate >=
(
    SELECT CONVERT(DATETIME, MIN(TruncatedDate))
    FROM
    (
        SELECT DISTINCT TOP 5 CONVERT(VARCHAR, ValueDate, 102) TruncatedDate
        FROM Data
        ORDER BY TruncatedDate DESC
    ) d
)
ORDER BY ValueDate DESC
```
This should do it and be reasonably good from a performance standpoint. You didn't mention how to handle ties, so you can add the WITH TIES clause if you need to do that. ``` SELECT TOP (@number_to_return) * -- Write out your columns here FROM dbo.MyTable ORDER BY MyDateColumn DESC ```
Select X Most Recent Non-Consecutive Days Worth of Data
[ "", "sql", "sql-server", "sql-server-2005", "datetime", "" ]
C# 3.0 introduced the `var` keyword. And when compiled, the compiler will insert the right types for you. This means that it will even work on a 2.0 runtime. So far so good. But the other day I found a case where, the `var` keyword would be replaced with just object and thus not specific enough. Say you have something like: ``` var data = AdoMD.GetData(...); // GetData returns a DataTable foreach (var row in data.Rows) { string blah = (string)row[0]; // it fails since it's of type object } ``` When I try to use row both [IntelliSense](http://en.wikipedia.org/wiki/IntelliSense) and the compiler tells me that it is of type object. `data.Rows` is of type `System.Data.DataRowCollection`. The following works: ``` var data = AdoMD.GetData(...); // GetData returns a DataTable foreach (DataRow row in data.Rows) { string blah = (string)row[0]; // works since it now has the right type } ``` This is not a question about the usage of var, there is a thread for that [here](https://stackoverflow.com/questions/41479/use-of-var-keyword-in-c). I'm using Visual Studio 2008 SP1 btw. Edit: correct code now attached.
I think I see the problem. DataRowCollection is non-generic, and thus the only thing the compiler knows is that it contains objects of type Object. If it had been a generic data structure, this would have worked.
This: ``` using System; namespace Test { public class X { public String Bleh; } public class Y : X { public String Blah; } public static class Program { public static void Main() { var y = SomeFunctionThatReturnsY(); // y. <-- this gives me both Blah and Bleh in the dropdown } public static Y SomeFunctionThatReturnsY() { return new Y(); } } } ``` works as expected, it gives me intellisense showing both Bleh and Blah, in Visual Studio 2008. Did you have something *like* that, or *exactly* that? There might be other things that trip up intellisense than the code you showed. To answer the actual question. The only difference between spelling out the type, and using `var`, is that in some cases you *must* use var, when you simply don't have the correct type for it. This, if I'm not mistaken, is only relevant when you're using anonymous types. There is no other difference, and the compiler will pick the same type on the left hand side as it has on the right hand side. **Edit**: Thanks to [@Telos](https://stackoverflow.com/users/17145/telos) for pointing out that one difference is that when using var, you must of course give the variable a value at declaration time, since the type of the expression on the right-hand side is used to dictate the type of the variable on the left-hand side. When spelling out the type, you can of course opt to not give it a value.
In C# when is the var keyword different from typing it out?
[ "", "c#", "var", "" ]
I have a class which is not thread safe: ``` class Foo { /* Abstract base class, code which is not thread safe */ }; ``` Moreover, if you have foo1 and foo2 objects, you cannot call foo1->someFunc() until foo2->anotherFunc() has returned (this can happen with two threads). This is the situation and it can't be changed (a Foo subclass is actually a wrapper for a python script). In order to prevent unwanted calls I've created the following - ``` class FooWrapper { public: FooWrapper(Foo* foo, FooWrappersMutex* mutex); /* Wrapped functions from Foo */ }; ``` Internally, FooWrapper wraps calls to the Foo functions with the shared mutex. I want to test FooWrapper for thread safety. My biggest problem is the fact that threads are managed by the operating system, which means I've got less control on their execution. What I would like to test is the following scenario: * Thread 1 calls fooWrapper1->someFunc() and blocks while inside the function * Thread 2 calls fooWrapper2->anotherFunc() and returns immediately (since someFunc() is still executing) * Thread 1 finishes the execution What is the simplest to test a scenario like this automatically? I'm using QT on Win32, although I would prefer a solution which is at least cross-platform as QT is.
Instead of just checking that a particular thread is finished or not, why not create a fake `Foo` to be invoked by your wrapper in which the functions record the time at which they were actually started/completed. Then your yield thread need only wait long enough to be able to distinguish the difference between the recorded times. In your test you can assert that `another_func`'s start time is after `some_func`'s start time and it's completed time is before `some_func`s completed time. Since your fake class is only recording the times, this should be sufficient to guarantee that the wrapper class is working properly. **EDIT**: You know, of course, that what your `Foo` object does could be an [anti-pattern](http://en.wikipedia.org/wiki/Anti-pattern), namely [Sequential Coupling](http://en.wikipedia.org/wiki/Sequential_coupling). Depending what it does, you may be able to handle it by simply having the second method do nothing if the first method has not yet been called. Using the example from the Sequential Coupling link, this would be similar to having the car do nothing when the accelerator pedal is pressed, if the car has not yet been started. If doing nothing is not appropriate, you could either wait and try again later, initiate the "start sequence" in the current thread, or handle it as an error. All of these things could be enforced by your wrapper as well and would probably be easier to test. You also may need to be careful to make sure that the same method doesn't get invoked twice in sequence if an intervening call to another method is required.
You might want to check out [*CHESS: A Systematic Testing Tool for Concurrent Software*](http://Research.Microsoft.Com/CHESS/) by Microsoft Research. It is a testing framework for multithreaded programs (both .NET and native code). If I understood that correctly, it replaces the operating system's threading libraries with its own, so that it can control thread switching. Then it analyzes the program to figure out every possible way that the execution streams of the threads can interleave and it re-runs the test suite for every possible interleaving.
How to write an automated test for thread safety
[ "", "c++", "multithreading", "unit-testing", "testing", "" ]
Are there C++ compilers already supporting [C++0x](http://en.wikipedia.org/wiki/C%2B%2B0x) [lambda](http://en.wikipedia.org/wiki/Lambda_calculus) expressions?
[Visual Studio 2010 CTP](https://connect.microsoft.com/VisualStudio/content/content.aspx?ContentID=9790) supports it already. **Update:** It is now [Visual Studio 2010 Beta 2](http://msdn.microsoft.com/en-us/vstudio/dd582936.aspx)
g++ has support since [4.5](http://gcc.gnu.org/gcc-4.5/changes.html). Status for C++11 support in gcc can be found [here](http://gcc.gnu.org/projects/cxx0x.html#lambdas).
What C++ compilers are supporting lambda already?
[ "", "c++", "lambda", "" ]
Recently I used a class that inherits from a collection instead of having the collection instantiated within the class, is this acceptable or does it create unseen problems further down the road? Examples below for the sake of clarity: ``` public class Cars : List<aCar> ``` instead of something like: ``` public class Cars { List<aCar> CarList = new List<aCar>(); } ``` Any thoughts?
The problem with this is that your Cars class will still have the interface it inherits from List, which may allow operations you don't want.
That depends on the final purpose of your class. If it is only going to work as your own implementation of a collection, use inheritance. If not, include a collection as a property. The second option is more versatile:

* As you can only inherit from one class, you might need to inherit from another class rather than the collection
* If you need to see this class as a collection, you can include an indexer property.
Classes with Collections as Properties vs. Classes Inheriting Collections
[ "", "c#", "class", "inheritance", "" ]
What is the best way to close an ASPX page from the code-behind? I have a button event handler that I want to close the page after the user has clicked an ASP.NET button on the page. I have tried to programmatically add a JavaScript method that contains a `window.close()` command to the `OnClientClick` event to close the page but it does not work. The button is also a `asp:AsyncPostBoskTrigger` for an AJAX Update Panel. The application uses .NET Framework 3.5.
**UPDATE:** I have taken all of your input and came up with the following solution: In code behind: ``` protected void Page_Load(object sender, EventArgs e) { Page.ClientScript.RegisterOnSubmitStatement(typeof(Page), "closePage", "window.onunload = CloseWindow();"); } ``` In aspx page: ``` function CloseWindow() { window.close(); } ```
You would typically do something like: ``` protected void btnClose_Click(object sender, EventArgs e) { ClientScript.RegisterStartupScript(typeof(Page), "closePage", "window.close();", true); } ``` However, keep in mind that different things will happen in different scenerios. Firefox won't let you close a window that wasn't opened by you (opened with `window.open()`). IE7 will prompt the user with a "This page is trying to close (Yes | No)" dialog. In any case, you should be prepared to deal with the window not always closing! One fix for the 2 above issues is to use: ``` protected void btnClose_Click(object sender, EventArgs e) { ClientScript.RegisterStartupScript(typeof(Page), "closePage", "window.open('close.html', '_self', null);", true); } ``` And create a close.html: ``` <html><head> <title></title> <script language="javascript" type="text/javascript"> var redirectTimerId = 0; function closeWindow() { window.opener = top; redirectTimerId = window.setTimeout('redirect()', 2000); window.close(); } function stopRedirect() { window.clearTimeout(redirectTimerId); } function redirect() { window.location = 'default.aspx'; } </script> </head> <body onload="closeWindow()" onunload="stopRedirect()" style=""> <center><h1>Please Wait...</h1></center> </body></html> ``` Note that close.html will redirect to default.aspx if the window does not close after 2 sec for some reason.
Programmatically close aspx page from code behind
[ "", "c#", "asp.net", "" ]
I am a 10+ year C++ Linux/Windows developer and I have been asked to estimate the effort to port the Windows application to OS X. I haven't developed on OS X before, so I don't know what to expect. It is a C++/Qt application, so I want to ask: what are the de facto tools, like editor, IDE, compiler, make tool, etc.? Which tools are commercial and need to be purchased? How much time would it take for me to get used to the environment and be productive? Thanks in advance, Paul
As jakber already [posted](https://stackoverflow.com/questions/366923/need-advice-on-windows-to-os-x-port-estimation-and-cost-of-dev-on-os-x#366983), XCode is the standard IDE for MacOSX, and is free (comes with the install DVD or can be downloaded from apple. The XCode IDE is quite different from that of Visual Studio, and it seems to me as if it were more familiar to Codewarrior. I don't know if there are any tools to convert VS projects to XCode, but there are tools as [CMake](http://www.cmake.org/) where you can describe your project and make it generate both Visual Studio solutions and XCode projects (well, and many more). It is quite hard to estimate the time it will take a particular person to be productive in an environment, and even more when you don't define how you are used to work. For example, if you used KDevelop, then it is directly available under MacOSX. Also, some people are more dependent than others on the tools the IDE provide (source navigation, for example). Finally, compiler errors are different in Microsoft compilers and g++ (used both in linux and MacOSX) and you will have to get used to it. Afterpost: As an answer to your comment, there are more than one compiler for MacOSX. The standard that comes with the OS is gcc (4.0.1) but you can buy the [Intel compiler](http://www.intel.com/cd/software/products/asmo-na/eng/227389.htm) or other frontends like [comeau](http://www.comeaucomputing.com/) to gcc.
You're in luck that your app is in Qt, TrollTech has a lot of documentation on how to do this; developing on OS X can be very similar to developing on Linux, make sure to check out the MacPorts project (<http://www.macports.org>), which is like apt-get/yum for Mac. Your toolchain is the usual suspects - gcc/make/autoconf, etc...
Need advice on Windows to OS X Port Estimation and cost of dev. on OS X
[ "", "c++", "macos", "qt", "" ]
Not every website exposes their data well, with XML feeds, APIs, etc How could I go about extracting information from a website? For example: ``` ... <div> <div> <span id="important-data">information here</span> </div> </div> ... ``` I come from a background of Java programming and coding with Apache XMLBeans. Is there anything similar to parse HTML, when I know the structure and the data is between a known tag? Thanks
There are several Open Source HTML Parsers out there for Java. I have used [JTidy](http://jtidy.sourceforge.net/) in the past, and have had good luck with it. It will give you a DOM of the html page, and you should be able to grab the tags you need from there.
[Here's an article](http://www.manageability.org/blog/stuff/screen-scraping-tools-written-in-java) that has a couple of screen scraping tools written in java. In general, it sounds like you want to take a look at [regular expressions](http://www.regular-expressions.info/), which do the pattern matching you're looking for. Hope that helps!
Extracting Information from websites
[ "", "java", "html", "html-content-extraction", "" ]
I have a database that holds a user's optional profile. In the profile I have strings, char (for M or F) and ints. I ran into an issue where I try to put the sex of the user into the property of my Profile object, and the application crashes because it doesn't know how to handle a returned null value. I've tried casting the data to the appropriate type

```
char sex = (char)dt.Rows[0]["Sex"];
```

which didn't fix my problem. I then tried changing the types to `Nullable<char>` and `Nullable<int>` and get conversion issues all the same. My current solution that I was able to find is the following:

```
object.sex = null;
if(dt.Rows[0]["Sex"] != DBNull.Value)
    object.sex = (char)dt.Rows[0]["Sex"];

object.WorkExt = null;
if(dt.Rows[0]["WorkExt"] != DBNull.Value)
    object.WorkExt = (int)dt.Rows[0]["WorkExt"];
```

Is there a simpler or better way to do this? Or am I pretty much on the right track?
rotard's answer (use `Is<ColumnName>Null()`) only works for typed data sets. For untyped data sets, you have to use one of the patterns in the following code. If this code isn't definitive, let me know and I'll edit it until it is. This is an extremely common question that there should really be only one right answer to.

```
using System;
using System.Data;

class Program
{
    static void Main(string[] args)
    {
        DataTable dt = new DataTable();
        dt.Columns.Add("test", typeof (char));
        dt.Columns["test"].AllowDBNull = true;
        DataRow dr = dt.Rows.Add();

        char? test;
        try
        {
            test = (char?)dr["test"];
        }
        catch (InvalidCastException)
        {
            Console.WriteLine("Simply casting to a nullable type doesn't work.");
        }

        test = dr.Field<char?>("test");
        if (test == null)
        {
            Console.WriteLine("The Field extension method in .NET 3.5 converts System.DBNull to null.");
        }

        test = (dr["test"] is DBNull) ? null : (char?) dr["test"];
        if (test == null)
        {
            Console.WriteLine("Before .NET 3.5, you have to check the type of the column's value.");
        }

        test = (dr["test"] == DBNull.Value) ? null : (char?) dr["test"];
        if (test == null)
        {
            Console.WriteLine("Comparing the field's value to DBNull.Value is very marginally faster, but takes a bit more code.");
        }

        // now let's put the data back
        try
        {
            dr["test"] = test;
        }
        catch (ArgumentException)
        {
            Console.WriteLine("You can't set nullable columns to null.");
        }

        dr.SetField("test", test);
        if (dr["test"] is DBNull)
        {
            Console.WriteLine("Again, in .NET 3.5 extension methods make this relatively easy.");
        }

        dr["test"] = (object)test ?? DBNull.Value;
        if (dr["test"] is DBNull)
        {
            Console.WriteLine("Before .NET 3.5, you can use the null coalescing operator, but note the awful cast required.");
        }

        Console.ReadLine();
    }
}
```
Nullable types were designed just for this purpose! Use `as char?` instead of `(char?)`:

```
class Foo
{
    public char? sex;
}

Foo foo = new Foo();
foo.sex = dt.Rows[0]["Sex"] as char?;
```
What is the best practice for handling null int/char from a SQL Database?
[ "", "c#", "sql-server", "dbnull", "" ]
I'm looking for some information on routing in MVC with C#. I'm currently very aware of the basics of routing in MVC, but what I'm looking for is somewhat difficult to find. Effectively, what I want to find is a way of defining a single route that takes a single parameter. The common examples I have found online are all based around the example

```
routes.MapRoute(
    "Default",
    "{controller}.mvc/{action}/{id}",
    new { controller = "Default", action="Index", id=""});
```

By mapping this route, you can map to any action in any controller, but if you want to pass anything into the action, the method parameter must be called "id". I want to find a way around this if it's possible, so that I don't have to constantly specify routes just to use a different parameter name in my actions. Has anyone any ideas, or found a way around this?
You can construct the routes as you like:

```
routes.MapRoute(
    "Default",
    "{controller}.mvc/{action}/{param1}/{param2}/{param3}",
    new { controller = "Default", action="Index", param1="", param2="", param3=""});
```

Also, [look at this post](http://chriscavanagh.wordpress.com/2008/03/11/aspnet-routing-goodbye-url-rewriting/), it contains all kinds of samples in the comments section
If you want to have a different parameter name *and* keep the same routing variable, use the FromUri attribute like so:

```
public ActionResult MyView([FromUri(Name = "id")] string parameterThatMapsToId)
{
    // do stuff
}
```

In your routes, all you need is:

```
routes.MapRoute(
    "Default",
    "{controller}.mvc/{action}/{id}",
    new { controller = "Default", action="Index", id=""});
```
MVC Routing - Parameter names question
[ "", "c#", "asp.net-mvc", "model-view-controller", "routes", "url-routing", "" ]
I have a VS solution, with the following projects. -GUI -DataAccess -BusinessLogic -BusinessObjects but where should the main model class reside? This is usually a cache of a set of objects which are the results from the data access layer and the GUI using virtual grids to view data inside the model. The question would be the same using MVC or MVP thoughts?
This is a subjective question, but to enforce that your model objects don't have direct dependencies on infrastructure, people often put them in a separate project. You also need to consider what other projects might use these model objects. Another option for splitting up functionality into separate deployable units (assemblies) is so that teams can function more independently. Separate projects based on frequency of deployment and team autonomy. Lastly, I've seen some projects where the model objects were invoked remotely (like with .NET remoting) and served up on an application server separate from the web server. I really don't recommend this approach, but it's an option. If you don't plan on reusing them, and you're cognizant of the fact that placing them in the same assembly *allows* you to create cross dependencies with anything else defined in that project, but you're smart enough not to do it, you can place them all in the same project. That said, 99% of the time I have these projects:

* UI
* Core
* Persistence
* Tests

But you still have to take your project needs into account.
I tend to have * `Justice.Project.Core` — the POCO domain model — i.e., business objects) * `Justice.Project.Data` — NHibernate mappings etc., where the persistence scheme resides * `Justice.Project.Services` — repositories, as well as business logic which cannot easily be fit into the business objects * `Justice.Project.(Web|UI)` The model *is* – or *should be* – the business objects.
where does the model go?
[ "", "c#", "winforms", "model-view-controller", "mvp", "" ]
Given the following class...

```
public class Category
{
    public string Name { get; set; }
    public Category ParentCategory { get; set; }
}
```

What is the most efficient way to output the following from a collection (`IList<Category>`) of Category objects?

```
+ Parent Category
++ Sub Category
++ Sub Category 2
+ Parent Category 2
++ Sub ...
++ Sub ..
++ Sub ....
```

EDIT: Perhaps the real question should be, how should I represent this model in the database and retrieve it using NHibernate?
You may wish to consider reversing your relationship. If a node can get to its parent but not vice versa, you have to have *all* the leaf nodes in order to print out the full tree. Compare this to the situation where you have each node know about its children - then you only need the root node.
A small recursive function can do it for you:

```
static void recurseCategories(ref List<Category> cl, Category start, int level)
{
    foreach (Category child in cl)
    {
        if (child.ParentCategory == start)
        {
            Console.WriteLine(new String(' ', level) + child.Name);
            recurseCategories(ref cl, child, level + 1);
        }
    }
}
```

My assumptions were:

* You've got a `List` of `Category`. (Of course *all* Category objects you want to print must be in that list. I thought that was self-evident, but seemingly it was not.)
* The root category has a parent of `null`. The initial function call should therefore be `recurseCategories(ref myCategoryList, null, 0)`.
* No orphaned elements exist in your list. Some error-handling code should be added.
* Output order will be coherent to whatever order the list is iterated, so apart from the hierarchy it's more or less coincidental.
How to iterate hierarchical data and output the hierarchy using C#
[ "", "c#", ".net", "" ]
I am writing a (very small) framework for checking pre- and postconditions of methods. Entry points are (they could easily be methods; that doesn't matter):

```
public static class Ensures
{
    public static Validation That { get { ... } }
}

public static class Requires
{
    public static Validation That { get { ... } }
}
```

Obviously, checking the postconditions may be expensive, and isn't actually necessary when the method isn't buggy. So I want a method which works like this:

```
public static class Ensures
{
    [ConditionalCallingCode("DEBUG")]
    public static Validation ThatDuringDebug { get { ... } }
}
```

where `ConditionalCallingCodeAttribute` means that this method should only run when the calling code is compiled with the DEBUG symbol defined. Is this possible? I want client code to look like this:

```
public class Foo
{
    public void Bar()
    {
        ... // do some work

        Ensures.That // do these checks always
            .IsNotNull(result)
            .IsInRange(result, 0, 100);

        Ensures.WhileDebuggingThat // only do these checks in debug mode
            .IsPositive(ExpensiveCalculation(result));

        return result;
    }
}
```

Of course, I can simply not provide WhileDebuggingThat. Then the client code would look like this:

```
public class Foo
{
    public void Bar()
    {
        ... // do some work

        Ensures.That // do these checks always
            .IsNotNull(result)
            .IsInRange(result, 0, 100);

#if DEBUG
        Ensures.That // only do these checks in debug mode
            .IsPositive(ExpensiveCalculation(result));
#endif

        return result;
    }
}
```

This is the fallback plan if nothing else works out, but it breaks DRY really badly. As I understand it, marking `WhileDebuggingThat` with `[Conditional("DEBUG")]` will emit (or not) this method depending on whether DEBUG is defined during the compilation of the library, *not* of the assemblies which reference this library. So I *could* do this and then write documentation telling the library users to link debug builds of their code with the debug build of the library, and release builds with release builds. This doesn't strike me as the best solution.

Finally, I could tell the library users to define this class inside their projects:

```
using ValidationLibrary;

public static class EnsuresWhileDebugging
{
    [Conditional("DEBUG")]
    public static Validation That()
    {
        return Ensures.That;
    }
}
```

This should work as well, as far as I see, but still requires breaking the DRY principle, if only slightly.
Any solution that is found here would be slower than the actual checks. Also, since it would not be built into the compiler like `ConditionalAttribute`, the parameters would still be calculated. The postconditions can be very complicated, such as:

```
Ensures.That.IsPositive(ExpensiveCalculation(result));
```

You might consider using icelava's suggestion to reflect on the calling assembly to find out if it is built in debug or release - but then you *must* use some sort of delegate to delay the calculation - to ensure that it is only done when needed. e.g.:

```
Ensures.WhileDebugging.That.
    IsPositive(() => ExpensiveCalculation(result));
```

The IsPositive function should run the lambda and check its result, only after reflecting to find out if it should be calculated.
Is this anything that the normal [ConditionalAttribute](http://msdn.microsoft.com/en-us/library/system.diagnostics.conditionalattribute.aspx) doesn't do for you, aside from working on a property instead of a method? You may well need to change the way things are called so that you've got methods instead of properties - and the fact that it returns a value may cause issues. It would help a lot if you'd show how your framework is used - currently we've not got a lot to work with. Another thing to consider would be supplying a variety of binary builds of your library - so that the caller can just supply a different version which doesn't actually do any checking. Again though, it's hard to tell with only the code you've provided.
Is it possible in .Net to call some code only from debug builds of client assemblies?
[ "", "c#", "build", "debugging", "release", "conditional-compilation", "" ]
I've added the following code to my masterpage (Page\_Load) so once a user logs out they will not be able to use the back button to see the page they were previously at.

```
Response.Buffer = true;
Response.ExpiresAbsolute = DateTime.Now.AddDays(-1);
Response.Expires = -1;
Response.CacheControl = "no-cache";
```

The problem is, my users want to be able to use the browser back button while they are logged into the application. When I comment out the code that disables caching, they can use the back button, but once they log out they can use the back button to see the previous page they were on, which causes a security risk. Does anyone have any suggestions so they can use the browser back button in the application, but once they are logged out they cannot go back into the application?
Is the objective to prevent an unauthenticated user from surreptitiously visiting a previously-used computer and seeing what the authenticated user was doing? If so, then you should redirect the user to a logout page that has a `window.close();` command along with strong language about this being a **requirement**. Now, this isn't ironclad: IE will *ask* the user if they are willing to let the app close the window, and other browsers ignore the request altogether. However, in the right kind of security environment, I think that it does provide a significant addition to your security policy - albeit of a primarily cultural variety (it helps the members of the culture abide by the rules). If you want "one time through and you are done" security, then I'm afraid that locking out the cache or adding `window.forward()` to every page (which prevents all use of the back button) is your only real option. One other thing: AJAX provides some tools that help as well. You could put sensitive information on an update panel and have the page-load JavaScript trigger an update panel refresh. Since this will always go back to the server, unauthenticated/expired users will be turned away. This is a pretty significant workload to take on, but I thought I'd throw it out there.
See: * [Disabling Back button on the browser](https://stackoverflow.com/questions/87422/disabling-back-button-on-the-browser) * [Is there a way to keep a page from rendering once a person has logged out but hit the "back" button?](https://stackoverflow.com/questions/64059/is-there-a-way-to-keep-a-page-from-rendering-once-a-person-has-logged-out-but-h)
Warning: Page Has Expired
[ "", "c#", "back-button", "" ]
Is there any simple but secure script available for **Flash > MySQL DB** integration? I need something for a login DB. Exchanging variables with **PHP** is nice and easy, but obviously insecure.

**Via Remoting?** I've got the [Flash 8 remoting components installed](http://blog.vixiom.com/2007/04/17/actionscript-20-flash-remoting-with-flash-cs3/), and some ideas: [idea-1](http://www.flash-db.com/Tutorials/helloAS2/), [idea-2](http://www.joristimmerman.be/wordpress/tag/as2-remoting/).

**Via NetConnection?** Got some leads: [lead-1](http://www.oscartrelles.com/archives/as3_flash_remoting_example), [lead-2](http://osflash.org/as3lrf).

**Cold Fusion?** Does anybody have any ideas?

---

Less likely solutions:

* **Via XML?** Does anybody have any idea how to use XML to connect to a DB? *(AS2 or AS3)*
* **AMF-PHP** is not possible for security reasons (script installed on server root)
* **Java Server** has to be specially installed on the server.

---

**Edit:** Encryption should make the PHP solution more viable, although offering only basic protection for a high-security login database. **See also:** SO: [1](https://stackoverflow.com/questions/190461/a-simple-medium-securelogin-protocol), [2](https://stackoverflow.com/questions/46415/), [3](https://stackoverflow.com/questions/73947/), Adobe: [4](http://www.adobe.com/devnet/flashplayer/articles/secure_swf_apps_12.html).
AFAIK it is impossible to talk to a MySQL server directly via ActionScript (unless someone has written a package that actually handles the net stuff, but I haven't seen one yet). May I also point out that your remark about "insecure because of PHP" is not really accurate? It is even worse when you actually do everything from the applet: it is peanuts these days to decompile an .SWF, and then they will even have the login data for your database. I think, as Ristonj suggested, that it is best that you use the URLRequest class. What I usually do is pass on the current PHP session ID to the applet so that I can include this and the user IP in the initial applet request. On the server I check if the IP/session are actually active in the session table and match. If so, the user gets a sort of command token that allows him to perform requests, which in turn can do your database updates. If you do all that over an SSL connection, you are pretty safe. And yes, you have to store PHP scripts on the server, but it is more difficult to get the source for these than just being able to decompile the applet and extract everything :) I like to keep all program logic that is potentially dangerous on the server only, NOT in the applet.
Whether you use Flash or PHP, you're still using HTML form technology / specification to do the GET/POST, thus using Flash is just as secure (or insecure) as using PHP, Perl, CGI, etc. If you want some level of security on your logins, you should consider getting an SSL certificate for the site.
Simple secure way for Flash to MySQL Database
[ "", "php", "mysql", "flash", "actionscript", "database-connection", "" ]
While learning different languages, I've often seen objects allocated on the fly, most often in Java and C#, like this: ``` functionCall(new className(initializers)); ``` I understand that this is perfectly legal in memory-managed languages, but can this technique be used in C++ without causing a memory leak?
Your code is valid (assuming functionCall() actually guarantees that the pointer gets deleted), but it's fragile and will make alarm bells go off in the heads of most C++ programmers. There are multiple problems with your code:

* First and foremost, who owns the pointer? Who is responsible for freeing it? The calling code can't do it, because you don't store the pointer. That means the called function must do it, but that's not clear to someone looking at that function. Similarly, if I call the code from somewhere else, I certainly don't expect the function to call delete on the pointer I passed to it!
* If we make your example slightly more complex, it can leak memory, *even if the called function calls delete*. Say it looks like this: `functionCall(new className(initializers), new className(initializers));` Imagine that the first one is allocated successfully, but the second one throws an exception (maybe it's out of memory, or maybe the class constructor threw an exception). functionCall never gets called then, and *can't* free the memory.

The simple (but still messy) solution is to allocate memory first, store the pointer, and then free it in the same scope as it was declared (so the calling function owns the memory):

```
className* p = new className(initializers);
functionCall(p);
delete p;
```

But this is still a mess. What if functionCall throws an exception? Then p won't be deleted. Unless we add a try/catch around the whole thing, but sheesh, that's messy. What if the function gets a bit more complex, and may return after functionCall but before delete? Whoops, memory leak. Impossible to maintain. Bad code.

So one of the nice solutions is to use a smart pointer:

```
boost::shared_ptr<className> p = boost::shared_ptr<className>(new className(initializers));
functionCall(p);
```

Now ownership of the memory is dealt with. The `shared_ptr` owns the memory, and guarantees that it'll get freed. We could use `std::auto_ptr` instead, of course, but `shared_ptr` implements the semantics you'd usually expect. Note that I still allocated the memory on a separate line, because the problem with making multiple allocations on the same line as you make the function call still exists. One of them may still throw, and then you've leaked memory.

Smart pointers are generally the absolute minimum you need to handle memory management. But often, the *nice* solution is to write your own RAII class. `className` should be allocated on the stack, and in its constructor, make what allocations with `new` are necessary. And in its destructor, it should free that memory. This way, you're guaranteed that no memory leaks will occur, and you can make the function call as simple as this:

```
functionCall(className(initializers));
```

The C++ standard library works like this. `std::vector` is one example. You'd never allocate a vector with `new`. You allocate it on the stack and let it deal with its memory allocations internally.
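This answer predates C++11; with a modern compiler the same "ownership is visible in the signature" idea is usually expressed with `std::unique_ptr` instead of `boost::shared_ptr`. A minimal sketch (the `Resource` class and function names are made up for illustration; assumes C++14 for `std::make_unique`):

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-in for className: counts live instances so that
// construction and destruction can be observed in code.
struct Resource {
    static int live;
    Resource()  { ++live; }
    ~Resource() { --live; }
};
int Resource::live = 0;

// Taking std::unique_ptr by value documents, in the signature itself,
// that this function assumes ownership and will free the object.
void consume(std::unique_ptr<Resource> p) {
    assert(Resource::live >= 1);  // the object is alive inside the call
}   // p goes out of scope here and deletes the Resource

int liveAfterCall() {
    // Safe even as a direct argument: the allocation is wrapped by the
    // smart pointer immediately, so nothing can leak before the call.
    consume(std::make_unique<Resource>());
    return Resource::live;
}
```

Unlike the raw `new` in the question, `std::make_unique` in the argument list leaves no window in which an unowned allocation can leak.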
Yes, as long as you deallocate the memory inside the function. But this is by no means a best practice for C++.
Passing newly allocated data directly to a function
[ "", "c++", "memory-leaks", "" ]
I'm reading all the files in a single directory and I want to filter on JPG, JPEG, GIF and PNG, both capital and small letters. Those are the only files to be accepted. I am currently using this:

```
$testPics = takeFiles($picsDir, "([^\s]+(?=\.(jpg|JPG|jpeg|JPEG|png|PNG|gif|GIF))\.\2)");
```

and the function takeFiles looks like this:

```
function takeFiles($dir, $rex="") {
    $dir .= "/";
    $files = array();
    $dp = opendir($dir);
    while ($file = readdir($dp)) {
        if ($file == '.') continue;
        if ($file == '..') continue;
        if (is_dir($file)) continue;
        if ($rex != "" && !preg_match($rex, $file)) continue;
        $files[] = $file;
    }
    closedir($dp);
    return $files;
}
```

And it always returns nothing. So something must be wrong with my regex code.
I think something is wrong with your regex. Try testing regexes here first: <https://www.regexpal.com/> I think this one might work for you: `/^.*\.(jpg|jpeg|png|gif)$/i` Note the /i at the end - this is the "case insensitive" flag, saves you having to type out all permutations :)
How about using [glob()](http://www.php.net/glob) instead? ``` $files = glob($dir . '*.{jpg,gif,png,jpeg}',GLOB_BRACE); ```
Checking for file-extensions in PHP with Regular expressions
[ "", "php", "regex", "" ]
How can I split by word boundary in a regex engine that doesn't support it? Python's re can match on \b but doesn't seem to support splitting on it. I seem to recall dealing with other regex engines that had the same limitation.

example input:

```
"hello, foo"
```

expected output:

```
['hello', ', ', 'foo']
```

actual python output:

```
>>> re.compile(r'\b').split('hello, foo')
['hello, foo']
```
(\W+) can give you the expected output: ``` >>> re.compile(r'(\W+)').split('hello, foo') ['hello', ', ', 'foo'] ```
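For what it's worth, the reason this works is that `re.split` returns the captured delimiters as extra list items whenever the pattern contains a capturing group. A small sketch generalizing the trick (the helper name is my own):

```python
import re

def split_keep_delims(text, delim_pattern=r"\W+"):
    """Split text into word/delimiter runs, keeping the delimiters.

    re.split() emits the captured delimiters as list items when the
    pattern contains a capturing group; empty edge strings appear when
    the text starts or ends with a delimiter, so filter them out.
    """
    parts = re.split("(" + delim_pattern + ")", text)
    return [p for p in parts if p != ""]

print(split_keep_delims("hello, foo"))   # ['hello', ', ', 'foo']
print(split_keep_delims(", leading"))    # [', ', 'leading']
```

Joining the pieces back together always reproduces the original string, which is the property a `\b`-style split is usually wanted for.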
One can also use re.findall() for this: ``` >>> re.findall(r'.+?\b', 'hello, foo') ['hello', ', ', 'foo'] ```
Split by \b when your regex engine doesn't support it
[ "", "python", "regex", "" ]
Without using plpgsql, I'm trying to urlencode a given text within a pgsql SELECT statement. The problem with this approach: ``` select regexp_replace('héllo there','([^A-Za-z0-9])','%' || encode(E'\\1','hex'),'g') ``` ...is that the encode function is not passed the regexp parameter, unless there's another way to call functions from within the replacement expression that actually works. So I'm wondering if there's a replacement expression that, by itself, can encode matches into hex values. There may be other combinations of functions. I thought there would be a clever regex (and that may still be the answer) out there, but I'm having trouble finding it.
``` select regexp_replace(encode('héllo there','hex'),'(..)',E'%\\1','g'); ``` This doesn't leave the alphanumeric characters human-readable, though.
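To see what that expression produces, here is the same transformation — lowercase hex of every UTF-8 byte, each pair prefixed with `%` — sketched outside the database in Python (not part of the answer's SQL):

```python
def hex_urlencode(text):
    """Percent-encode every byte of the UTF-8 encoding, mirroring
    regexp_replace(encode(text, 'hex'), '(..)', '%\\1', 'g') in PostgreSQL."""
    return "".join("%{:02x}".format(b) for b in text.encode("utf-8"))

print(hex_urlencode("héllo there"))  # %68%c3%a9%6c%6c%6f%20%74%68%65%72%65
```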
Here is a pretty short version, and it's even a "pure SQL" function, not plpgsql. Multibyte chars (including 3- and 4-byte emoji) are supported.

```
create or replace function urlencode(in_str text, OUT _result text) returns text as $$
select string_agg(
    case
        when ol > 1 or ch !~ '[0-9a-zA-Z:/@._?#-]+'
            then regexp_replace(upper(substring(ch::bytea::text, 3)), '(..)', E'%\\1', 'g')
        else ch
    end,
    ''
)
from (
    select ch, octet_length(ch) as ol
    from regexp_split_to_table($1, '') as ch
) as s;
$$ language sql immutable strict;
```
urlencode with only built-in functions
[ "", "sql", "postgresql", "urlencode", "" ]
What are the real differences between anonymous types (`var`) in C# 3.0 and the dynamic type (`dynamic`) that is coming in C# 4.0?
An anonymous type is a real, compiler-generated type that is created for you. The good thing about this is that the compiler can re-use this type later for other operations that require it as it is a POCO. My understanding of dynamic types is that they are late-bound, meaning that the CLR (or DLR) will evaluate the object at execution time and then use duck typing to allow or disallow member access to the object. So I guess the difference is that anonymous types are true POCOs that the compiler can see but you can only use and dynamic types are late-bound dynamic objects.
You seem to be mixing three completely different, orthogonal things:

* *static vs. dynamic* typing
* *manifest vs. implicit* typing
* *named vs. anonymous* types

Those three aspects are completely independent; they have nothing whatsoever to do with each other. *Static vs. dynamic* typing refers to *when* the type checking takes place: dynamic typing takes place at *runtime*, static typing takes place *before runtime*. *Manifest vs. implicit* typing refers to whether the types are *manifest in the source code* or not: manifest typing means that the *programmer* has to write the types into the source code; implicit typing means that the *type system* figures them out on its own. *Named vs. anonymous* types refers to, well, whether the types have names or not. The `dynamic` keyword in C# 4.0 means that this variable, parameter, method, field, property ... whatever is *dynamically typed*, i.e. that its type will be checked at runtime. Everything that is not typed as dynamic is statically typed. Whether a type is static or dynamic not only determines when type checking takes place, but in C# 4.0 it also determines when *method dispatch* takes place. In C#, method dispatch is done before runtime, based on the static type (with the exception of runtime subtype polymorphism of course), whereas on dynamically typed objects in C# 4.0, method dispatch is done at runtime, based on the runtime type. The `var` keyword in C# 3.0 means that this local variable will be *implicitly typed*, i.e. that instead of the programmer writing down the type explicitly, the type system will figure it out on its own. This has nothing to do with dynamic typing, at least in C# 3.0. The variable will be strongly statically typed, just as if you had written down the type yourself.
It is merely a convenience: for example, why would you have to write down all the type names *twice*, as in `HashMap<int, string> foo = new HashMap<int, string>();`, when the type system can *clearly* figure out that `foo` is a `HashMap<int, string>`? So instead you write `var foo = new HashMap<int, string>();`. Please note that there is nothing dynamic or anonymous about this. The type is static and it has a name: `HashMap<int, string>`. Of course, in C# 4.0, if the type system figures out that the right hand side of the assignment is dynamic, then the type of the variable on the left hand side will be dynamic. An *anonymous type* in C# 3.0 means that this type has no name. Well, actually, *real* anonymous types would have required a backwards-incompatible change to the Common Type System, so what *actually* happens behind the curtain is that the compiler will generate a very long, very random, unique and illegal name for the type and put that name in wherever the anonymous type appears. But from the programmer's point of view, the type has no name. Why is this useful? Well, sometimes you have intermediate results that you only need briefly and then throw away again. Giving such transient types a name of their own would elevate them to a level of importance that they simply don't deserve. But again, there is nothing dynamic about this. So, if the type has no name, how can the programmer refer to it? Well, she can't! At least not directly. What the programmer *can* do, is describe the type: it has two properties, one called "name" of type `string`, the other called "id" of type `int`. That's the type I want, but I don't care what it's called. Here is where the pieces start to come together. In C#, you have to declare the types of local variables by explicitly writing down the names of the types. But, how can you write down the name of a type that has no name? 
This is where `var` comes in: because since C# 3.0, this is actually no longer true: you no longer have to write down the names, you can also tell the compiler to figure it out. So, while what I wrote in the first paragraph above is true, that implicit typing and anonymous types don't have anything to do with each other, it is also true that anonymous types would be pretty useless without implicit typing. Note, however, that the opposite is not true: implicit typing is perfectly useful without anonymous types. `var foo = new HashMap<int, string>();` makes perfect sense and there's no anonymous type in sight.
Anonymous Type vs Dynamic Type
[ "", "c#", ".net", "dynamic", "anonymous-types", "" ]
I am attempting to get a DropDownList to AutoPostBack via an UpdatePanel when the selected item is changed. I'm going a little stir-crazy as to why this isn't working. Does anyone have any quick ideas? ASPX page:

```
<asp:UpdatePanel ID="UpdatePanel1" runat="server" UpdateMode="Always" ChildrenAsTriggers="true">
    <ContentTemplate>
        <asp:DropDownList ID="DropDownList1" runat="server" AutoPostBack="True"
            onselectedindexchanged="DropDownList1_SelectedIndexChanged">
            <asp:ListItem>item 1</asp:ListItem>
            <asp:ListItem>item 2</asp:ListItem>
        </asp:DropDownList>
    </ContentTemplate>
</asp:UpdatePanel>
```

Code-behind (I put a breakpoint on the string assignment to capture the postback):

```
protected void DropDownList1_SelectedIndexChanged(object sender, EventArgs e)
{
    string s = "";
}
```

**Edit:** **OK, I have it working now. Very weird. All it took was a restart of Visual Studio. This is the kind of thing that frightens me as a developer ;) I think I've seen similar before, where VS gets "out of sync" wrt the assembly it's running.** **FYI I am running VS 2008 Web Developer Express.** **Thanks to those that answered.**
I was able to get it to work with what you posted. This is the code I used... basically what you had, but I am throwing an exception:

```
<asp:ScriptManager ID="smMain" runat="server" />
<asp:UpdatePanel ID="UpdatePanel1" runat="server" UpdateMode="Always" ChildrenAsTriggers="true">
    <ContentTemplate>
        <asp:DropDownList ID="DropDownList1" runat="server" AutoPostBack="True"
            onselectedindexchanged="DropDownList1_SelectedIndexChanged">
            <asp:ListItem>item 1</asp:ListItem>
            <asp:ListItem>item 2</asp:ListItem>
        </asp:DropDownList>
    </ContentTemplate>
</asp:UpdatePanel>
```

```
protected void DropDownList1_SelectedIndexChanged(object sender, EventArgs e)
{
    throw new NotImplementedException();
}
```

I tried a lot of variations to see if there was something off, but the exception was thrown every time. You might want to try the exception route to see if the postback is happening and this isn't a debugger issue.

* One issue might be with Vista and not running Visual Studio as administrator. I know that has a tendency to not allow debugging.
* Maybe the assembly you are running doesn't match the code? This might happen if you "View in Browser" and then attach the debugger.
Setting EnableViewState="true" on the UpdatePanel will definitely resolve the problem.
ASP.NET DropDownList AutoPostback Not Working - What Am I Missing?
[ "", "c#", "asp.net", "updatepanel", "autopostback", "" ]
I've been doing some inheritance in JS in order to understand it better, and I found something that confuses me. I know that when you call a 'constructor function' with the new keyword, you get a new object with a reference to that function's prototype. I also know that in order to make prototypal inheritance work, you must replace the prototype of the constructor function with an instance of the object you want to be the 'superclass'. So I did this silly example to try these concepts:

```
function Animal(){}
function Dog(){}

Animal.prototype.run = function(){alert("running...")};
Dog.prototype = new Animal();
Dog.prototype.bark = function(){alert("arf!")};

var fido = new Dog();
fido.bark() //ok
fido.run() //ok

console.log(Dog.prototype) // its an 'Object'
console.log(fido.prototype) // UNDEFINED
console.log(fido.constructor.prototype == Dog.prototype) //this is true

function KillerDog(){};
KillerDog.prototype.deathBite = function(){alert("AAARFFF! *bite*")}

fido.prototype = new KillerDog();
console.log(fido.prototype) // no longer UNDEFINED
fido.deathBite(); // but this doesn't work!
```

(This was done in Firebug's console)

1) Why, if all new objects contain a reference to the creator function's prototype, is fido.prototype undefined?

2) Is the inheritance chain [obj] -> [constructor] -> [prototype] instead of [obj] -> [prototype]?

3) Is the 'prototype' property of our object (fido) ever checked? If so... why is 'deathBite' undefined (in the last part)?
> 1) Why if all new objects contain a reference to the creator function's prototype, fido.prototype is undefined?

All new objects do hold a reference to the prototype that was present on their constructor at the time of construction. However, the property name used to store this reference is not `prototype` as it is on the constructor function itself. Some Javascript implementations do allow access to this 'hidden' property via some property name like `__proto__`, where others do not (for example Microsoft's).

> 2) Is the inheritance chain [obj] -> [constructor] -> [prototype] instead of [obj] -> [prototype]?

No. Take a look at this:

```
function Base() {}
Base.prototype.doThis = function() { alert("First"); }

function Base2() {}
Base2.prototype.doThis = function() { alert("Second"); }

function Derived() {}
Derived.prototype = new Base()

var x = new Derived()

Derived.prototype = new Base2()

x.doThis();
```

This alerts "First", not "Second". If the inheritance chain went via the constructor, we would see "Second". When an object is constructed, the current reference held in the function's prototype property is transferred to the object's hidden reference to its prototype.

> 3) is the 'prototype' property of our object (fido) ever checked? if so... why is 'deathBite' undefined (in the last part)?

Assigning to an object (other than a Function) a property called `prototype` has no special meaning; as stated earlier, an object does not maintain a reference to its prototype via such a property name.
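The 'hidden reference' described above can be checked with the standard `Object.getPrototypeOf` accessor. A runnable sketch of the question's setup, reduced to what works outside a browser (no `alert`):

```javascript
function Animal() {}
function Dog() {}

Animal.prototype.run = function () { return "running..."; };
Dog.prototype = new Animal();

var fido = new Dog();

// The instance has no property literally named "prototype"...
console.log(fido.prototype);                                // undefined

// ...but it does hold a hidden reference to Dog.prototype,
// reachable via the standard accessor:
console.log(Object.getPrototypeOf(fido) === Dog.prototype); // true

// Replacing Dog.prototype afterwards does NOT retarget fido's
// hidden reference, which still points at the original Animal:
Dog.prototype = { bark: function () { return "arf!"; } };
console.log(Object.getPrototypeOf(fido) === Dog.prototype); // false
console.log(typeof fido.bark);                              // "undefined"
```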
You cannot change an object's prototype once it's been instantiated with `new`. In your example above, lines like

```
fido.prototype = new KillerDog();
```

simply create a new attribute named `prototype` on the object `fido`, and set that attribute to a new `KillerDog` object. It's no different than

```
fido.foo = new KillerDog();
```

As your code stands...

```
// Doesn't work, because objects can't be changed via their constructors
fido.deathBite();

// Does work, because objects can be changed dynamically,
// and Javascript won't complain when you use prototype
// as an object attribute name
fido.prototype.deathBite();
```

The special `prototype` behavior applies only to constructors in javascript, where constructors are `function`s that will be called with `new`.
Javascript Prototypal Inheritance?
[ "", "javascript", "inheritance", "prototype", "prototypal-inheritance", "" ]
What do you like to have in a C++ cheat sheet?
I found [this one](http://web.archive.org/web/20090823155413/http://www.sfu.ca/~vwchu/projects/CPPQuickRef.pdf) that seems to be detailed enough. It covers basics of templates, inheritance, operators, exceptions, etc. It has a lot of information in a very small space.
The O'Reilly book [C++ Pocket Reference](http://oreilly.com/catalog/9780596004965/) would be one such useful tool. > The C++ Pocket Reference is a memory > aid for C++ programmers, enabling them > to quickly look up usage and syntax > for unfamiliar and infrequently used > aspects of the language. The book's > small size makes it easy to carry > about, ensuring that it will always be > at-hand when needed. Programmers will > also appreciate the book's brevity; as > much information as possible has been > crammed into its small pages.
Is there any cheatsheet available for C++?
[ "", "c++", "" ]
I am using a third-party DLL. For some particular cases, a function in the DLL is throwing an exception. Is it possible to debug the DLL in the Visual Studio? After [the answer from Andrew Rollings](https://stackoverflow.com/questions/349918/debugging-a-third-party-dll-in-visual-studio/349925#349925), I am able to view the code, but is there any easy way to debug through the code in Visual Studio?
If the DLL is in a [.NET](http://en.wikipedia.org/wiki/.NET_Framework) language, you can decompile it using a tool like [.NET Reflector](http://en.wikipedia.org/wiki/.NET_Reflector) and then debug against the source code. Or you could ask the vendor if source code is available. That's probably the easiest way.
Building on Andrew's answer, you just treat the decompiled source code as a new library within your project and set breakpoints in the source. Remove all references to the 3rd party DLL so that it is the decompiled code that is executing. Other things:

* You may be breaking the law by decompiling the code, or breaching a licensing agreement with the 3rd party vendor. Make sure to review this with someone.
* You will want to make sure that you remove references to your decompiled version if you are shipping to other developers, or checking into a larger source tree. Easy to forget this!
Debugging a third-party DLL in Visual Studio?
[ "", "c#", ".net", "debugging", "dll", "" ]
Is it possible to create a new process on Windows under a different user account? I know there is a "Run as" context menu entry, but I want to do it from Java. I have the username and the password.
You need to write a DLL using the [Java Native Interface (JNI)](http://en.wikipedia.org/wiki/Java_Native_Interface) as you cannot do this with pure Java code. The DLL itself needs to call the **[CreateProcessAsUser](http://msdn.microsoft.com/en-us/library/ms682429(VS.85).aspx)** function to create a process in the context of another user. To successfully create that process you need to provide an access token to the function, which was itself created by calling the [LogonUser](http://msdn.microsoft.com/en-us/library/aa378184(VS.85).aspx) function (it takes the username and password to authentify that other user).
There is a program called "runas.exe." You could run that process and supply the appropriate arguments for your process and username/password. I think that's the simplest method.
How can I create a new process with another User Account on Windows?
[ "java", "windows", "process", "account" ]
I was looking at Stack Overflow question *[What is the difference between abstract function and virtual function?](https://stackoverflow.com/questions/391483)*, and I was wondering whether every abstract function should be considered to be a virtual function in C# or in general? I was a bit puzzled by the "you must override/you may override" responses to that question. Not being a C# programmer, I tend to think that abstract functions are a compile-time concept only, and that abstract functions are virtual functions by definition since you must provide at least one but can provide multiple implementations further down the hierarchy. Virtual functions have a compile-time dimension too, in that you cannot override a non-virtual function, but they are mostly a runtime concept since it is "just" the selection of the correct method implementation based on the actual receiver.
Yes. From section 10.6.6 of the [C# 3.0 spec](http://download.microsoft.com/download/3/8/8/388e7205-bc10-4226-b2a8-75351c669b09/CSharp%20Language%20Specification.doc):

> When an instance method declaration includes an abstract modifier, that method is said to be an abstract method. Although an abstract method is implicitly also a virtual method, it cannot have the modifier virtual.
It ***has*** to be virtual (and Jon Skeet has already whipped out the spec to prove that it *is*), because, given a reference to the abstract base class, the concrete derived class's implementation must be called. For example, given the classic Animal hierarchy:

```
abstract class Animal{
    public abstract void Speak();
}

class Cat : Animal{
    public override void Speak(){Console.WriteLine("meow");}
}

class Dog : Animal{
    public override void Speak(){Console.WriteLine("bark");}
}
```

A function that takes an `Animal` object, and calls its `Speak` method, wouldn't know which implementation to call if the function weren't virtual.

```
static void TalkToAnimal(Animal a){
    Console.WriteLine("Hello, animal.");
    a.Speak();
}
```

Note however, that interface implementations are **not** virtual by default. Because an interface works differently from a class, true polymorphism isn't necessary to find the implementation of an interface method.
Is every abstract function virtual in C#, in general?
[ "c#", "oop" ]
Using MySQL syntax and having a table with a row like: ``` mydate DATETIME NULL, ``` Is there a way to do something like: ``` ... WHERE mydate<='2008-11-25'; ``` I'm trying but not really getting it to work.
Never mind, I found an answer. Thank you all the same to anyone who was willing to reply.

```
WHERE DATEDIFF(mydate, '2008-11-20') >= 0;
```
Uh, `WHERE mydate<='2008-11-25'` *is* the way to do it. That should work. Do you get an error message? Are you using an ancient version of MySQL?

Edit: The following works fine for me on MySQL 5.x

```
create temporary table foo(d datetime);
insert into foo(d) VALUES ('2000-01-01');
insert into foo(d) VALUES ('2001-01-01');
select * from foo where d <= '2000-06-01';
```
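A quick way to sanity-check that the plain comparison works is with an in-memory SQLite table (an illustration only; SQLite is not MySQL, but both accept ISO-8601 `'YYYY-MM-DD'` literals, and that format compares correctly because it sorts lexicographically in chronological order):

```python
import sqlite3

# Throwaway in-memory table standing in for the MySQL table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (mydate TEXT)")
conn.executemany(
    "INSERT INTO foo (mydate) VALUES (?)",
    [("2008-11-20",), ("2008-11-25",), ("2008-12-01",)],
)

# The straightforward <= comparison keeps only dates on or before the cutoff.
rows = conn.execute(
    "SELECT mydate FROM foo WHERE mydate <= '2008-11-25' ORDER BY mydate"
).fetchall()
print(rows)  # [('2008-11-20',), ('2008-11-25',)]
```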
In SQL how to compare date values?
[ "sql", "mysql", "compare", "where-clause" ]
Is the LAMP (Linux, Apache, MySQL, PHP / Ruby / Python) stack appropriate for Enterprise use? To be clear, by "Enterprise", I mean a large or very large company, where security, robustness, availability of skill sets, Total Cost of Ownership (TCO), scalability, and availability of tools are key considerations. Said another way, a company that looks for external adoption of frameworks / architecture - something ubiquitous will be seen as more "valid" than something exotic / esoteric in this kind of environment. I've seen use cases where Oracle, IBM, and Sun have implemented systems on the LAMP stack for various Enterprises. I've also seen examples where websites like yellowpages.com (Ruby on Rails) and Facebook (PHP) are built on it. However, none of these examples are exactly what I'm looking for. I'm really trying to find examples where it is an Enterprise standard at a very large bank (e.g., Citigroup), telecom company (e.g., AT&T), or manufacturer (e.g., Procter and Gamble). Just to be clear, I'm not looking for an example where it's used in a limited sense (like at JPMorgan Chase), but where it's a core platform for systems like CRM, manufacturing systems, or HR management, as well as for internal and external websites. The perception I've seen so far is that applications built on the LAMP stack perform slower and are less flexible. Some of the arguments I've heard are:

* Linux is seen as not as well supported as Unix, Solaris, or Windows Servers.
* Apache is harder to configure and maintain than web servers like BEA WebLogic or IIS.
* MySQL is a "not ready for prime time" DB for hobbyists, and not a competitor for SQL Server or Oracle (although PostgreSQL seems to have a reputation for being more robust).
* PHP / Ruby on Rails are optimized for CRUD (Create, Read, Update and Delete operations). Although this is an advantage when building CRUD-intensive web applications, both perform slower than Java/Java EE or C# (which are both common Enterprise standards).
Furthermore, a lot of applications and systems (like manufacturing systems) have a lot of non-CRUD functionality that may be harder to build with PHP or Ruby, or even Python. Can anyone please provide arguments to support or refute the idea of the LAMP stack being appropriate for the Enterprise? Thanks! KA UPDATE: [Some times the LAMP Stack is Appropriate for Enterprise Use: Externally-Facing Blogs](http://kaiseradvisor.blogspot.com/2009/07/sometimes-lamp-stack-is-appropriate-for.html)
"but where it's a core platform for systems like CRM and HR, as well as for internal and external websites" First, find a LAMP CRM or HR application. Then find a customer for the LAMP CRM or HR application. Sadly, there aren't a lot of examples of item 1. Therefore, your case is proven. It can't be used for enterprise applications because -- currently -- there aren't any of the applications you call "enterprise". Your other points, however, are very interesting. 1. **Linux is seen as not as well supported as Unix, Solaris, or Windows Servers**. I think Red Hat would object strongly to this. Give them a call. I think they'll make a very persuasive sales pitch. Read their [success stories](http://customers.press.redhat.com/). 2. **Apache is harder to configure and maintain than web servers like BEA WebLogic or IIS**. By whom? Apache web site managers? Or IIS web site managers? This is entirely subjective. 3. **MySQL is a "not ready for prime time" DB**. Take it up with Sun Microsystems. I think they'd object strongly to this. Give them a call. I think they'll make a very persuasive sales pitch. Read their [success stories](http://www.sun.com/systems/solutions/mysql/perspectives.jsp). 4. **PHP / Ruby on rails are optimized for CRUD, and both are slowly performing**. Could be true. Java and Python might be faster. PHP and Ruby aren't the last word in LAMP.
> Something ubiquitous will be seen as more "valid" than something exotic / esoteric in this kind of environment. Although I personally wouldn't recommend PHP due to the many flaws in the language, it's most certainly ubiquitous. With the advent of phusion passenger, Rails support amongst shared-hosting companies is growing pretty quickly too. I give it another year or 2 at most before 90+% of shared-hosting accounts support rails out of the box. If that's not ubiquitous, what is? > Linux is seen as not as well supported as Unix, Solaris, or Windows Servers. If this bothers you, purchase support from RedHat, or install Solaris and purchase support from Sun. Both of those will give you just as good support as Microsoft is likely to > Apache is harder to configure and maintain than web servers like BEA WebLogic or IIS. I can't speak for BEA WebLogic, but having configured both Apache, IIS, and Tomcat, Apache is the easiest both to understand, and to find examples and documentation for *by a long way.* > MySQL is a "not ready for prime time" DB for hobbyists, and not a competitor for SQL Server or Oracle. [Oh really?](http://www.mysql.com/customers/). You should make it your mission to tell NASA, Google, CERN, Reuters etc that they're all using a hobbyist database that isn't ready for prime-time. > PHP / Ruby on rails are optimized for CRUD, and both perform slower than Java/Java EE or C# (which are both common Enterprise standards). There are 2 things here: Optimized for CRUD - This is totally irrelevant. Rails and some of the python/php frameworks are optimized for CRUD apps. Many of the C#/Java frameworks are also optimized for CRUD apps. However, if the app you're building is a CRUD app (and 99% of web applications are), isn't this a Good Thing? If you're not building a CRUD app, there are plenty of non-crud-optimized frameworks in ruby/python/php/java/C#. 
Net win: Nobody (hence it's irrelevant) Perform slower than Java/C# - This is undoubtedly true, but it also doesn't matter. For a low-traffic site the performance difference isn't going to amount to anything, and for a high-traffic site your bottleneck will be the database, whether it be MySQL, Oracle, or whatever. What you trade off for all of this is development time. Once you've used all this advice to convince your boss that you *won't* lose out on anything by using LAMP, if you crunch the numbers and show them that it is going to take 6 man-months to build the site in Java, and only 3 to build it in Ruby/Python, then that's really what it comes down to.
Is the LAMP stack appropriate for Enterprise use?
[ "php", "ruby", "lamp", "system-design" ]
I know the rule is to NEVER throw one during a destructor, and I understand why. I would not dare do it. But even the C++ Faq Lite says that this rule is good 99% of the time. What is the other 1% that they fail to delve into? [Link to the C++ Faq Lite bullet point on throwing from ~():](http://www.parashift.com/c++-faq-lite/exceptions.html#faq-17.3)
Just don't do it. If the stars and planets align in such a way that you find you need to... Still don't do it.
Wow, I was about to vote Juan up until I saw the part about never using exceptions. Okay, first of all, Juan has it correct. If, *for whatever reason*, you end up in that situation of two exceptions chasing one another up the stack, C++ will simply throw up its hands and its last meal and terminate abnormally. So, throwing an exception from a dtor is a guarantee that you have a possible code path that leads to an unplanned abnormal termination, which is in general a bad thing. If that's what you *want*, be straightforward about it, call abort or exit, and get it over with.

The part about avoiding it by not using exceptions, however, is bad advice. Exceptions are really an essential mechanism in C++ for systems that are going to be robust and run for a long time; they're really the only way to guarantee that you can handle error situations without leaking resources all over the floor.

It happens I used to work for Marshall Cline, the guy who wrote that FAQ, and taught C++ from the FAQ Book; because of that, I can tell you you're misinterpreting the answer a little. He's not saying "gee, there is this one case where it would be okay, but I'm not going to tell you about it," he's saying "I'm sure that if I say **absolutely and without exception don't throw from a dtor** someone, someday, will come up with one off-the-wall example that makes sense. But I don't know of one and don't really believe it. Don't try this at home, and consult an attorney, no warranty express or implied."
When is it OK to throw an exception from a destructor in C++?
[ "c++", "exception" ]
I have this C# extension method that will extend any dictionary where the *Value* type is an *IList*. When I write the equivalent code in VB.Net I get the following compile error:

> *"Extension method 'Add' has some type constraints that can never be satisfied".*

I find this really puzzling as the **same** type constraints **can** be satisfied in C#. So my question is this: Why does this not work in VB? Is there a way to make these same type constraints work in VB? Have I made a mistake converting the code? I hope somebody can shed some light on this as I have been scratching my head on this one for a while. :)

*(In case you are curious, the extension method is intended to make it simple to add multiple values into a dictionary under a **single** key (such as multiple orders under one customer). But this is unimportant; I am solely concerned about the puzzling behaviour I am observing in VB.)*

**Here is the C# version that works:**

```
/// <summary>
/// Adds the specified value to the multi value dictionary.
/// </summary>
/// <param name="key">The key of the element to add.</param>
/// <param name="value">The value of the element to add. The value can be null for reference types.</param>
public static void Add<KeyType, ListType, ValueType>(this Dictionary<KeyType, ListType> thisDictionary, KeyType key, ValueType value)
    where ListType : IList<ValueType>, new()
{
    //if the dictionary doesn't contain the key, make a new list under the key
    if (!thisDictionary.ContainsKey(key))
    {
        thisDictionary.Add(key, new ListType());
    }

    //add the value to the list at the key index
    thisDictionary[key].Add(value);
}
```

**Here is the VB version that doesn't compile:**

```
''' <summary>
''' Adds the specified value to the multi value dictionary.
''' </summary>
''' <param name="key">The key of the element to add.</param>
''' <param name="value">The value of the element to add. The value can be null for reference types.</param>
<System.Runtime.CompilerServices.Extension()> _
Public Sub Add(Of KeyType, ListType As {IList(Of ValueType), New}, ValueType) _
        (ByVal thisDictionary As Dictionary(Of KeyType, ListType), ByVal key As KeyType, ByVal value As ValueType)

    'if the dictionary doesn't contain the key, make a new list under the key
    If Not thisDictionary.ContainsKey(key) Then
        thisDictionary.Add(key, New ListType())
    End If

    'add the value to the list at the key index
    thisDictionary(key).Add(value)
End Sub
```
The problem only happens when `<System.Runtime.CompilerServices.Extension()>` is present. The VB compiler imposes a restriction that the constraints must be verifiable with the first argument alone. Since the first argument for the extension method (`Dictionary(Of KeyType, ListType)`) is dependent upon the third argument (`ValueType`) via the `IList(Of TValue)` constraint, this cannot be compiled in VB.
The reason is explained here: <http://msdn.microsoft.com/en-us/library/bb385206.aspx> The VB compiler might be a little pickier in this case because it has to support optional parameters. There are no optional parameters in C# (yet).
C# to VB.Net: Why does this fail to compile when converted to VB?
[ "c#", "vb.net", "visual-studio-2008", "generics", ".net-3.5" ]
I have a method which should be delayed from running for a specified amount of time. Should I use

```
Thread thread = new Thread(() => {
    Thread.Sleep(millisecond);
    action();
});
thread.IsBackground = true;
thread.Start();
```

Or

```
Timer timer = new Timer(o => action(), null, millisecond, -1);
```

I had read some [articles](http://msmvps.com/blogs/peterritchie/archive/2007/04/26/thread-sleep-is-a-sign-of-a-poorly-designed-program.aspx) about how using `Thread.Sleep` is bad design. But I don't really understand why. However, for using `Timer`, `Timer` has a dispose method. Since the execution is delayed, I don't know how to dispose the `Timer`. Do you have any suggestions? Or, if you have an alternative suggestion for delaying code execution, that would also be appreciated.
One difference is that `System.Threading.Timer` dispatches the callback on a thread pool thread, rather than creating a new thread every time. If you need this to happen more than once during the life of your application, this will save the overhead of creating and destroying a bunch of threads (a process which is very resource intensive, as the article you reference points out), since it will just reuse threads in the pool, and if you will have more than one timer going at once it means you will have fewer threads running at once (also saving considerable resources).

In other words, `Timer` is going to be much more efficient. It also may be more accurate, since `Thread.Sleep` is only guaranteed to wait at LEAST as long as the amount of time you specify (the OS may put it to sleep for much longer). Granted, `Timer` is still not going to be exactly accurate, but the intent is to fire the callback as close to the specified time as possible, whereas this is NOT necessarily the intent of `Thread.Sleep`.

As for destroying the `Timer`, the callback can accept a parameter, so you may be able to pass the `Timer` itself as the parameter and call Dispose in the callback (though I haven't tried this -- I guess it is possible that the Timer might be locked during the callback).

Edit: No, I guess you can't do this, since you have to specify the callback parameter in the `Timer` constructor itself. Maybe something like this? (Again, haven't actually tried it; note the callback parameter arrives typed as `object`, so it has to be cast back.)

```
class TimerState
{
    public Timer Timer;
}
```

...and to start the timer:

```
TimerState state = new TimerState();
lock (state)
{
    state.Timer = new Timer((callbackState) =>
    {
        action();
        lock (callbackState)
        {
            ((TimerState)callbackState).Timer.Dispose();
        }
    }, state, millisecond, -1);
}
```

The locking should prevent the timer callback from trying to free the timer prior to the `Timer` field having been set.
--- Addendum: As the commenter pointed out, if `action()` does something with the UI, then using a `System.Windows.Forms.Timer` is probably a better bet, since it will run the callback on the UI thread. However, if this is not the case, and it's down to `Thread.Sleep` vs. `Threading.Timer`, `Threading.Timer` is the way to go.
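For what it's worth, the same "schedule this to run later instead of parking a thread in a sleep" idea exists in other runtimes too. Here is a rough Python analogue (not the original C#, and `threading.Timer` is only loosely comparable to `System.Threading.Timer`: it is one-shot and cancellable, but runs on its own thread rather than a pool):

```python
import threading

fired = threading.Event()

def action():
    fired.set()

# Schedule action() to run about 50 ms from now; t.cancel() could
# revoke it before it fires, the rough equivalent of disposing the timer.
t = threading.Timer(0.05, action)
t.start()

fired.wait(timeout=2)  # block here only so the demo can observe the result
print(fired.is_set())  # True
```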
use [`ThreadPool.RegisterWaitForSingleObject`](http://msdn.microsoft.com/en-us/library/system.threading.threadpool.registerwaitforsingleobject(v=vs.110).aspx) instead of timer:

```
//Wait 5 seconds then print out to console.
//You can replace AutoResetEvent with a Semaphore or EventWaitHandle if you
//want to execute the command on those events and/or the timeout
System.Threading.ThreadPool.RegisterWaitForSingleObject(
    new AutoResetEvent(false),
    (state, bTimeout) => Console.WriteLine(state),
    "This is my state variable",
    TimeSpan.FromSeconds(5),
    true);
```
Compare using Thread.Sleep and Timer for delayed execution
[ "c#", "multithreading", "timer", "sleep", "delayed-execution" ]
What is a good way to implement keyboard handling? In any language, where I write a keyboard-interactive program (such as a Tetris game), I end up having some code that looks like this:

```
for event in pygame.event.get():
    if event.type == KEYDOWN:
        if False:
            pass  # make everything an elif
        elif rotating:
            pass
        elif event.key == K_q:
        elif event.key == K_e:
        elif event.key == K_LEFT:
            curpiece.shift(-1, 0)
            shadowpiece = curpiece.clone(); setupshadow(shadowpiece)
        elif event.key == K_RIGHT:
            curpiece.shift(1, 0)
            shadowpiece = curpiece.clone(); setupshadow(shadowpiece)
```

(shortened). I don't like this, as this has to go in my main loop, and it messes with all parts of the program. This also makes it impossible to have a user config screen where they can change which key maps to which action. Is there a good pattern to do this using some form of function callbacks?
You could create a dictionary where the keys are the input and the value is a function that handles the keypress:

```
from pygame.locals import KEYDOWN, K_q, K_e, K_LEFT, K_RIGHT

paused = False

def handle_quit():
    quit()

def handle_left():
    curpiece.shift(-1, 0)
    shadowpiece = curpiece.clone(); setupshadow(shadowpiece)

def handle_right():
    curpiece.shift(1, 0)
    shadowpiece = curpiece.clone(); setupshadow(shadowpiece)

def handle_pause():
    global paused  # module-level flag
    if not paused:
        paused = True

branch = {
    K_q: handle_quit,
    K_e: handle_pause,
    K_LEFT: handle_left,
    K_RIGHT: handle_right,
}

for event in pygame.event.get():
    if event.type == KEYDOWN:
        branch[event.key]()
```

Then changing the keys is a matter of modifying the keys of the dictionary.
In addition to [superjoe30's answer](https://stackoverflow.com/questions/312263/effective-keyboard-input-handling#312270), you can use two levels of mapping (two dictionaries):

* key => command string
* command string => function

I think this would make it easier to allow user-defined mappings, i.e. so users can map their keys to "commands" rather than "the name of a function".
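A minimal sketch of that two-level mapping (all names invented for illustration): the key-to-command table is the only thing a user config screen needs to edit, while the command-to-function table stays fixed in code.

```python
moves = []

def move_left():
    moves.append("left")

def move_right():
    moves.append("right")

# Fixed in code: command string -> handler function.
commands = {"left": move_left, "right": move_right}

# User-configurable: key -> command string. Remapping a key only
# touches this dictionary, never the handlers.
bindings = {"a": "left", "d": "right"}

def handle_key(key):
    command = bindings.get(key)
    if command is not None:
        commands[command]()

handle_key("a")
handle_key("d")
handle_key("x")  # unbound keys are simply ignored
print(moves)     # ['left', 'right']
```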
Effective Keyboard Input Handling
[ "python", "user-interface", "keyboard", "user-input", "interactive" ]
I have rules set to move some email messages into different folders. I would like this to still show the envelope in the notification area but there is no option in the rules wizard to do this. It looks like I would either have to have the rule "run a script" or "perform a custom action" allowing either vba or c/c++ respectively. Anyone else have a better solution?
You can also achieve it *not* by using a rule, *but* by doing the rule-like action in code. For example:

```
Private Sub Application_NewMailEx(ByVal EntryIDCollection As String)
    Dim mai As Object
    Dim strEntryId

    For Each strEntryId In Split(EntryIDCollection, ",")
        Set mai = Application.Session.GetItemFromID(strEntryId)
        If mai.Parent = "Inbox" Then
            If mai.SenderEmailAddress = "the-email-address-the-rule-applies-to" Then
                mai.Move Application.GetNamespace("MAPI").GetFolderFromID("the-entry-ID-of-the-folder-you-want-to-move-the-message-to")
            End If
        End If
        Set mai = Nothing
    Next
End Sub
```

How to get the **folder ID** (i.e., the entryID of the folder): This is just a manual way; you could make a recursive procedure, but for simple purposes this is fine. For instance, I had a structure like:

```
Mailbox - My_Name_Here
    Inbox
        The Subfolder I'm Looking For
    Sent Items
    ...
```

So in the Immediate window I typed:

```
? Application.GetNamespace("MAPI").Folders(1)
```

and increased the number until I got "Mailbox - My_Name_Here". Then I typed:

```
? Application.GetNamespace("MAPI").Folders(the_number_of_my_mailbox).Folders(1)
```

increasing the number until I got "Inbox". Then:

```
? Application.GetNamespace("MAPI").Folders(the_number_of_my_mailbox).Folders(the_number_of_my_Inbox).Folders(1)
```

increasing the number until I got "The Subfolder I'm Looking For". Then:

```
? Application.GetNamespace("MAPI").Folders(the_number_of_my_mailbox).Folders(the_number_of_my_Inbox).Folders(the_number_of_the_subfolder_i_was_looking_for).EntryID
```

And that was it: the entryID of the folder I wanted to move the message to. You get the point, I'm sure :)
Check out [MailAlert](http://www.bettasoft.net/), an Outlook plug-in that does exactly that. It still works in Outlook 2007 (although I've had some instabilities since I installed it again recently, which may or may not be related).
Display Outlook icon in notification area for messages, not in inbox
[ "c++", "vba", "scripting", "outlook" ]
In order to reference a member of a class in XML comments/documentation, you have to use the following tag: ``` <see cref="member"/> ``` It is better explained [here](http://msdn.microsoft.com/en-us/library/acd0tfbe.aspx). **How do you reference an *indexer*?** I mean, a member like this one: ``` internal object this[ int index ] { ... } ``` Thanks in advance.
``` <see cref="P:System.Collections.ArrayList.Item(System.Int32)" /> ```
``` <see cref="this[int]" /> ```
How to reference an indexer member of a class in C# comments
[ "c#", "documentation" ]
A combo box is an auto-complete text box that is also a dropdown (i.e., you can choose from a list if you prefer). "Best" means combination of: stable, look/feel, API / customizability, maturity.
I like the [Yahoo UI Autocomplete widget](http://developer.yahoo.com/yui/autocomplete/). It does not provide the dropdown natively, but a dropdown button can be added with [a few lines of code](http://tech.groups.yahoo.com/group/ydn-javascript/message/30178).
[ExtJS](http://extjs.com/) has a [very powerful combo box](http://www.extjs.com/deploy/dev/docs/?class=Ext.form.ComboBox) that provides [dropdown, auto-complete, and filtering of choices](http://extjs.com/deploy/dev/examples/form/combos.html). You can also enable [pagination](http://extjs.com/deploy/dev/examples/form/forum-search.html), There are a number of extensions that provide additional capabilities, such as a [Facebook style multi-select](http://extjs.com/forum/showthread.php?t=33794&highlight=combo+extension), or [multi-select checkboxes](http://lovcombo.extjs.eu/).
Best javascript combo box?
[ "javascript", "user-interface", "combobox", "autocomplete" ]
I just came across this idiom in some open-source Python, and I choked on my drink. Rather than:

```
if isUp:
    return "Up"
else:
    return "Down"
```

or even:

```
return "Up" if isUp else "Down"
```

the code read:

```
return isUp and "Up" or "Down"
```

I can see this is the same result, but is this a typical idiom in Python? If so, is it some performance hack that runs fast? Or is it just a once-off that needs a code review?
The "a and b or c" idiom was the canonical way to express the ternary arithmetic if in Python, before [PEP 308](http://www.python.org/dev/peps/pep-0308/) was written and implemented. This idiom fails when the "b" answer is itself false; to support the general case, you could write

```
return (a and [b] or [c])[0]
```

An alternative way of spelling it was

```
return (b,c)[not a]
```

which, with the introduction of the bool type, could be rewritten as

```
return (c,b)[bool(a)]
```

(in case it isn't clear: the conversion to bool, and the not operator, is necessary if a is not known to be bool already)

Today, the conditional expression syntax should be used if the thing must be an expression; else I recommend to use the if statement.
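To make the failure mode concrete, here is a short, self-contained demonstration (values invented for illustration):

```python
# The idiom works while the "true" branch is truthy:
assert (True and "Up" or "Down") == "Up"
assert (False and "Up" or "Down") == "Down"

# It fails silently when the "true" branch is itself falsy:
# "" is false, so the expression falls through to "Down".
assert (True and "" or "Down") == "Down"   # wanted "", got "Down"

# Wrapping both branches in one-element lists (always truthy) fixes it:
assert (True and [""] or ["Down"])[0] == ""

# The PEP 308 conditional expression has no such pitfall:
assert ("" if True else "Down") == ""
```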
You should read [Using the and-or trick](http://www.diveintopython.net/power_of_introspection/and_or.html) (section 4.6.1) of *Dive Into Python* by Mark Pilgrim. It turns out that the and-or trick has major pitfalls you should be aware of.
Is this idiom pythonic? (someBool and "True Result" or "False Result")
[ "coding-style", "python" ]
I have a factory that builds the objects with the longest lifetime in my application. These have types, let's say, `ClientA` and `ClientB`, which depend on `Provider` (an abstract class with many possible implementations), so both clients have a reference to Provider as a member. According to the command-line arguments, the factory chooses one implementation of `Provider`, constructs it (with "`new`"), and passes it to the constructors of both clients. The factory returns an object that represents my entire app. My main function is basically this:

```
int main(int argc, char** argv)
{
    AppFactory factory(argc, argv);
    App app = factory.buildApp();
    return app.run();
}
```

And the `buildApp` method is basically this:

```
App AppFactory::buildApp()
{
    Provider* provider = NULL;
    if (some condition) {
        provider = new ProviderX();
    } else {
        provider = new ProviderY();
    }

    ClientA clientA(*provider);
    ClientB clientB(*provider);
    App app(clientA, clientB);
    return app;
}
```

So, when execution ends, destructors of all objects are called, except for the provider object (because it was constructed with "`new`"). How can I improve this design to make sure that the destructor of the provider is called?

EDIT: To clarify, my intention is that both clients, the provider and the App object share the same lifetime. After all answers, I now think both clients and the provider should be allocated on the heap, and references to them passed to the App object, which will be responsible for deleting them when it dies. What do you say?
It's very simple with a shared ownership smart pointer:

```
App AppFactory::buildApp()
{
    boost::shared_ptr<Provider> provider;
    if (some condition) {
        provider.reset(new ProviderX());
    } else {
        provider.reset(new ProviderY());
    }

    ClientA clientA(provider);
    ClientB clientB(provider);
    App app(clientA, clientB);
    return app;
}
```

Assuming the app object owns the clients, and the clients all share the one provider. Make the clients take a `shared_ptr<Provider>` then, instead of a `Provider&`. As long as there is still a copy of a shared\_ptr owning the provider object, the object won't be freed.

The best would be to not copy clientA and clientB, and not copy app by returning it by value, but move the clients into the app, and move the app itself into the returned object. That will be possible with the upcoming C++ version. But currently, either you make them pointers (using shared\_ptr), or you keep copying them.

Another option would be to use auto\_ptr, which has a pseudo-transfer-of-ownership semantic. But that template has some inherent problems. So you should avoid using it.
Make `provider` an instance variable of `AppFactory`. Then make `provider` a smart pointer, or delete it in `AppFactory`'s dtor.
Trouble with Factory and dynamic allocation in C++
[ "c++", "design-patterns", "factory" ]
I want to make a table that simply has two integer columns to serve as a mapping table between two different data sets, and I wish to put the correct constraints on it. I initially set the two columns as a compound primary key, but then realized that represents a many-to-many relationship, which only keeps duplicate many-to-many mappings from occurring. How do I specify I want both columns to be unique integers in all rows? I'm using MS SQL, but I suppose this is a general database design question.
Create a Primary Key on one column and another separate unique constraint or unique index on the other.

```
CREATE TABLE [dbo].[test](
    [x] [int] NOT NULL,
    [y] [int] NOT NULL,
    CONSTRAINT [PK_test] PRIMARY KEY CLUSTERED ([x] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

CREATE UNIQUE NONCLUSTERED INDEX [IX_test] ON [dbo].[test] ([y] ASC)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
          IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
          ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
```
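The same two-constraint shape can be tried out quickly in SQLite (illustrative only; the original is T-SQL): a primary key on `x` plus a separate unique index on `y` makes each column individually unique, while still allowing any pairing of values.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (x INTEGER PRIMARY KEY, y INTEGER NOT NULL)")
conn.execute("CREATE UNIQUE INDEX ix_test_y ON test (y)")

conn.execute("INSERT INTO test (x, y) VALUES (1, 10)")
conn.execute("INSERT INTO test (x, y) VALUES (2, 20)")

# A duplicate in either column is rejected independently of the other:
for x, y in [(1, 30), (3, 10)]:
    try:
        conn.execute("INSERT INTO test (x, y) VALUES (?, ?)", (x, y))
    except sqlite3.IntegrityError as exc:
        print("rejected", (x, y), "->", exc)
```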
Creating another table is done to create a many to many relationship, so you probably shouldn't have created it (unless I am missing something). One to one mandatory is the same as having the two tables as a single table. The only acceptable one to one is optional, and you simply have to relate the two tables, no third table necessary.
A Proper One-to-One Mapping Table
[ "sql", "mapping", "one-to-one" ]
How do I get the sequence number of the row just inserted?
insert ... returning.

```
declare
    s2 number;
begin
    insert into seqtest2(x) values ('aaa') returning seq into s2;
    dbms_output.put_line(s2);
end;
```

"seq" here refers to the name of the column whose value you want to store into variable s2.

in python:

```
myseq = curs.var(cx_Oracle.NUMBER)
curs.prepare("insert into seqtest2(x) values('bbb') returning seq into :x")
curs.execute(None, x=myseq)
print int(myseq.getvalue())
```
**Edit:** as Mark Harrison pointed out, this assumes that you have control over how the id of your inserted record is created. If you have full control and responsibility for it, this *should* work...

---

Use a stored procedure to perform your insert and return the id. E.g., for a table of names with ids:

```
PROCEDURE insert_name(new_name IN names.name%TYPE, new_name_id OUT names.id%TYPE) IS
    new_id names.id%TYPE;
BEGIN
    SELECT names_sequence.nextVal INTO new_id FROM dual;
    INSERT INTO names(id, name) VALUES(new_id, new_name);
    new_name_id := new_id;
END;
```

Using stored procedures for CRUD operations is a good idea regardless if you're not using an ORM layer, as it makes your code more database-agnostic, helps against injection attacks and so on.
Oracle: How do I get the sequence number of the row just inserted?
[ "", "sql", "oracle", "" ]
I do have to say I'm fairly inexperienced when it comes to C++, so don't be too harsh on me. I recently stumbled upon the wonders of the Win32 API and have chosen to practice using it (I'd rather not use MFC/wxWidgets/etc. at this point, just for educational purposes). My real question is: how do you properly organize your Win32 GUI code into classes? I mean, sure, you can make a class that saves individual window handles, allows for easier modification of window properties, and has a more simplified version of CreateWindow(). But as I understand it, you will need a message pump and several callback functions for your created windows. How does that work when combining them with classes? Could anyone point me in the right direction here? I don't mind reading (a lot of) example code as long as it is commented. PS: I'm also having trouble finding good (read: easy) documentation on using 'resource files' for my window decorations. Bonus points for that! :)
The biggest problem I faced back when I used the Win32 API (have since moved on to Linux and cross-platform solutions) were the callbacks. Especially the winproc one, AKA the message pump. I found [this](http://www.winterdom.com/dev/cpp/class.html), which should be a good hint. I did what that page suggests when I rolled my own wrapper.
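The routing trick behind that link can be sketched portably: Win32 calls a free function (the window procedure), so the usual pattern keeps a registry mapping each window handle to its C++ object, and the free function forwards messages to a member function. Below is a compilable simulation of that pattern with a plain `int` standing in for `HWND`; all names are illustrative, not part of any real API.

```
#include <map>
#include <string>

// Stand-in for HWND so the sketch compiles without <windows.h>.
using Handle = int;

class Window;
static std::map<Handle, Window*> g_windows;  // handle -> object registry

class Window {
public:
    explicit Window(Handle h) : handle_(h) { g_windows[h] = this; }
    virtual ~Window() { g_windows.erase(handle_); }

    // Per-window message handling lives in the class, not the free function.
    virtual std::string handleMessage(int msg) {
        return "msg " + std::to_string(msg);
    }

private:
    Handle handle_;
};

// The callback the OS would invoke (the real WndProc). It only routes.
std::string windowProc(Handle h, int msg) {
    auto it = g_windows.find(h);
    return it != g_windows.end() ? it->second->handleMessage(msg) : "unhandled";
}
```

In real Win32 code, the registration step is often done by stashing `this` in the window's user data via `SetWindowLongPtr` during window creation, but the routing idea is the same.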
I program in C++ for a living. I like C++. That said, your life will be so much easier if you do your windows GUI in something .Net, e.g., C#. Win32 is very low-level and you will be building tons of stuff that you will get for free with the .Net libraries. Win32 is not a wonder, anymore. :-) If you want to learn C++, pick something other than a GUI to do with it.
C++ developing a GUI - classes?
[ "", "c++", "user-interface", "winapi", "class", "" ]
I just finished watching the Google clean code video on YouTube (see [link](http://googletesting.blogspot.com/), first article) about removing `if` statements from your code and using polymorphism instead. After watching the video I had a look at some code that I was writing before watching the video and noticed some places where I could use this method, mainly places where the same kind of logic was implemented many times. As an example, I have some code like this:

```
public int Number
{
    get
    {
        string returnValue;
        if (this.internalTableNumber == null)
            returnValue = this.RunTableInfoCommand(internalTableName,
                                                   TableInfoEnum.TAB_INFO_NUM);
        else
            returnValue = this.RunTableInfoCommand(internalTableNumber.Value,
                                                   TableInfoEnum.TAB_INFO_NUM);
        return Convert.ToInt32(returnValue);
    }
}
```

What RunTableInfoCommand does isn't really important, but the main thing is that I have many properties with exactly the same `if` statements; the only thing that changes is the TableInfoEnum. I was wondering if someone could help me refactor this so that it still does the same thing, but without any `if` statements?
Just a cautionary note here after seeing some of these (technically correct) responses: just getting rid of an `if` statement should not be your sole aim. The aim should be to make your code extensible, maintainable and simple. If that means getting rid of an `if` statement, great, but it shouldn't be a goal in and of itself. In the code sample you have given, and without knowing more about your app, and assuming you are not going to extend much past testing for a null value, I think an `if` (or perhaps even a ternary) is the more maintainable solution, to be perfectly frank.
You will actually be implementing something like the Strategy pattern. Start by defining a superclass; let's call it AbstractTableInfoCommand. This class may be abstract but must specify a method called runTableInfoCommand(). You can then define several subclasses that each implement the runTableInfoCommand() method. Your class, the one with the Number property, will then have a new property of type AbstractTableInfoCommand (let's call it tableInfoCommand) which will be instantiated to one of the concrete subclasses of AbstractTableInfoCommand. The code will then be:

```
public int Number
{
    get
    {
        return this.tableInfoCommand.runTableInfoCommand();
    }
}
```

So you can create a NullTableInfoCommand, a SomeOtherTableInfoCommand, etc. The advantage is that if you have some new condition for returning a tableinfocommand, then you add a new class rather than edit this code. Having said that, not every situation is necessarily right for this pattern. So it makes more extendable code, but if you are in a situation that does not require that extendability it might be overkill.
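The same shape can be sketched in compilable C++ (the idea is language-agnostic; class names and the returned values below are made up for illustration):

```
#include <memory>

// Abstract strategy: one virtual method, no conditionals at the call site.
class TableInfoCommand {
public:
    virtual ~TableInfoCommand() = default;
    virtual int runTableInfoCommand() const = 0;
};

// One concrete strategy per former if-branch.
class NullTableInfoCommand : public TableInfoCommand {
public:
    int runTableInfoCommand() const override { return 0; }
};

class NamedTableInfoCommand : public TableInfoCommand {
public:
    explicit NamedTableInfoCommand(int value) : value_(value) {}
    int runTableInfoCommand() const override { return value_; }
private:
    int value_;
};

// The class that used to contain the if statement now just delegates.
class Table {
public:
    explicit Table(std::unique_ptr<TableInfoCommand> cmd) : cmd_(std::move(cmd)) {}
    int number() const { return cmd_->runTableInfoCommand(); }
private:
    std::unique_ptr<TableInfoCommand> cmd_;
};
```

The branch still exists somewhere, but only once: at the point where the concrete command is chosen and constructed.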
How would you refactor this conditional to use polymorphism?
[ "", "c#", ".net", "polymorphism", "" ]
I need to do a number of network-related things in C++ that I would normally do with `ifconfig` in Linux, but I'd like to do it without parsing the output of a group of system calls. Which C or C++ libraries can I use to tell if a network adapter is up or down, read or change an adapter's IP address and netmask, and change the default DNS gateway?
Basically you need to make a bunch of ioctl calls using a socket handle (SIOCGIFADDR, SIOCADDRT). You can find sample programs that use it in the Linux kernel source under Documentation/networking. Some other links that might be helpful: * [Network Interface operations on AIX](http://www.ibm.com/developerworks/aix/library/au-ioctl-socket.html) * [XBMC's implementation](http://xbmc.org/trac/browser/branches/linuxport/XBMC/xbmc/linux/NetworkLinux.cpp) (check out CNetworkInterfaceLinux) EDIT: Let me also add that if your target is desktop linux, then you can consider using the DBUS API to query network parameters using the NetworkManager. * <http://people.redhat.com/dcbw/NetworkManager/NetworkManager> DBUS API.txt (sorry there's some issue inserting links in edits)
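As a concrete sketch of the ioctl approach (Linux-only; the error handling is simplified and the function name is my own):

```
#include <arpa/inet.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstring>
#include <string>

// Query the IPv4 address of a network interface via SIOCGIFADDR.
// Returns an empty string if the interface is missing or has no address.
std::string interface_address(const char* ifname) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   // any datagram socket will do
    if (fd < 0) return "";

    ifreq ifr{};
    std::strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);

    std::string result;
    if (ioctl(fd, SIOCGIFADDR, &ifr) == 0) {
        auto* addr = reinterpret_cast<sockaddr_in*>(&ifr.ifr_addr);
        result = inet_ntoa(addr->sin_addr);
    }
    close(fd);
    return result;
}
```

Setting an address works the same way with `SIOCSIFADDR` (root required), and bringing an interface up or down goes through the flags ioctls `SIOCGIFFLAGS`/`SIOCSIFFLAGS`.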
You can always look at ifconfig's source code to see how they did it in the first place: <http://archive.ubuntu.com/ubuntu/pool/main/n/net-tools/net-tools_1.60.orig.tar.gz>
How do you change an IP address in C++?
[ "", "c++", "linux", "networking", "" ]
Let's say I have some code like this ``` if(isset($_GET['foo'])) //do something if(isset($_GET['bar'])) //do something else ``` If a user is at example.com/?foo=abc and clicks on a link to set bar=xyz, I want to easily take them to example.com/?foo=abc&bar=xyz, rather than example.com/?bar=xyz. I can think of a few very messy ways to do this, but I'm sure there's something cleaner that I don't know about and haven't been able to track down via Google.
Here's one way....

```
//get passed params
//(you might do some sanitizing at this point)
$params = $_GET;

//morph the params with new values
$params['bar'] = 'xyz';

//build new query string
$query = '';
$sep = '?';
foreach ($params as $name => $value) {
    $query .= $sep . $name . '=' . urlencode($value);
    $sep = '&';
}
```
If you are updating the query string you need to make sure you don't do something like this:

```
$qs = "a=1&b=2";
$href = "$qs&b=4";
// $href now contains "a=1&b=2&b=4"
```

What you really want to do is overwrite the current key if you need to. You can use a function like this (disclaimer: off the top of my head, maybe slightly bugged):

```
function getUpdateQS($key, $value)
{
    $qs = '';
    foreach ($_GET as $k => $v) {
        if ($k != $key) {
            $qs .= "$k=" . urlencode($v) . "&";
        } else {
            $qs .= "$key=" . urlencode($value) . "&";
        }
    }
    return $qs;
}
```

```
<a href="reports.php?<?php echo getUpdateQS('name', 'byron'); ?>">View report</a>
```
Persistent HTTP GET variables in PHP
[ "", "php", "http", "get", "methods", "" ]
We have a stored procedure that runs nightly that in turn kicks off a number of other procedures. Some of those procedures could logically be run in parallel with some of the others. * How can I indicate to SQL Server whether a procedure should be run in parallel or serial — ie: kicked off of asynchronously or blocking? * What would be the implications of running them in parallel, keeping in mind that I've already determined that the processes won't be competing for table access or locks- just total disk io and memory. For the most part they don't even use the same tables. * Does it matter if some of those procedures are the *same* procedure, just with different parameters? * If I start a pair or procedures asynchronously, is there a good system in SQL Server to then wait for both of them to finish, or do I need to have each of them set a flag somewhere and check and poll the flag periodically using `WAITFOR DELAY`? At the moment we're still on SQL Server 2000. As a side note, this matters because the main procedure is kicked off in response to the completion of a data dump into the server from a mainframe system. The mainframe dump takes all but about 2 hours each night, and we have no control over it. As a result, we're constantly trying to find ways to reduce processing times.
I had to research this recently, so found this old question that was begging for a more complete answer. Just to be totally explicit: **TSQL does *not*** (by itself) **have the ability to launch other TSQL operations asynchronously**. That doesn't mean you don't still have a lot of options (some of them mentioned in other answers): * **Custom application**: Write a simple custom app in the language of your choice, using asynchronous methods. Call a SQL stored proc on each application thread. * **SQL Agent jobs**: Create multiple SQL jobs, and start them asynchronously from your proc using `sp_start_job`. You can check to see if they have finished yet using the undocumented function `xp_sqlagent_enum_jobs` as described in [this excellent article](http://www.databasejournal.com/features/mssql/article.php/10894_3491201_2/Detecting-The-State-of-a-SQL-Server-Agent-Job.htm) by Gregory A. Larsen. (Or have the jobs themselves update your own JOB\_PROGRESS table as Chris suggests.) You would literally have to create separate job for each parallel process you anticipate running, even if they are running the same stored proc with different parameters. * **OLE Automation**: Use `sp_oacreate` and `sp_oamethod` to launch a new process calling the other stored proc as described in [this article](http://www.databasejournal.com/features/mssql/article.php/3427581/Submitting-A-Stored-Procedure-Asynchronously.htm), also by Gregory A. Larsen. * **DTS Package**: Create a DTS or SSIS package with a simple branching task flow. DTS will launch tasks in individual spids. * **Service Broker**: If you are on SQL2005+, look into using [Service Broker](http://technet.microsoft.com/en-us/library/ms166104.aspx) * **CLR Parallel Execution**: Use the CLR commands `Parallel_AddSql` and `Parallel_Execute` as described in [this article](http://www.codeproject.com/KB/database/asynchronousTSQL.aspx) by Alan Kaplan (SQL2005+ only). 
* **Scheduled Windows Tasks**: Listed for completeness, but I'm not a fan of this option. I don't have much experience with Service Broker or CLR, so I can't comment on those options. If it were me, I'd probably use multiple Jobs in simpler scenarios, and a DTS/SSIS package in more complex scenarios. **One final comment**: SQL already attempts to parallelize individual operations whenever it can\*. This means that running 2 tasks at the same time instead of after each other is no guarantee that it will finish sooner. Test carefully to see whether it actually improves anything or not. We had a developer that created a DTS package to run 8 tasks at the same time. Unfortunately, it was only a 4-CPU server :) \*Assuming default settings. This can be modified by altering the server's Maximum Degree of Parallelism or Affinity Mask, or by using the MAXDOP query hint.
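The "custom application" option boils down to the launch-then-wait pattern below. In the real thing each task would open its own connection and execute one stored procedure; that work is stubbed out here with a trivial function so the fork/join shape is visible:

```
#include <future>

// Stand-in for "run one stored procedure and report a result".
int run_job(int input) { return input * 2; }

// Kick off two independent jobs, then block until both have finished --
// the same fork/join shape the nightly master process needs.
int run_both(int a, int b) {
    auto fa = std::async(std::launch::async, run_job, a);
    auto fb = std::async(std::launch::async, run_job, b);
    return fa.get() + fb.get();   // get() waits for completion
}
```

Because the waiting happens in the client application, no status table or `WAITFOR DELAY` polling is needed in this variant.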
Create a couple of SQL Server Agent jobs where each one runs a particular proc. Then from within your master proc kick off the jobs.

The only way of waiting that I can think of is to have a status table that each proc updates when it's finished. Then yet another job could poll that table for total completion and kick off a final proc. Alternatively, you could have a trigger on this table.

The memory implications are completely up to your environment.

**UPDATE:** If you have access to the task system, then you could take the same approach. Just have Windows execute multiple tasks, each responsible for one proc. Then use a trigger on the status table to kick off something when all of the tasks have completed.

**UPDATE2:** Also, if you're willing to create a new app, you could house all of the logic in a single exe...
Start stored procedures sequentially or in parallel
[ "", "sql", "sql-server", "t-sql", "sql-server-2000", "parallel-processing", "" ]
I am looking to refactor a C# method into a C function in an attempt to gain some speed, and then call the C DLL in C# to allow my program to use the functionality. Currently the C# method takes a list of integers and returns a list of lists of integers. The method calculates the power set of the integers, so an input of 3 ints would produce the following output (at this stage the values of the ints are not important, as each is used as an internal weighting value):

```
1
2
3
1,2
1,3
2,3
1,2,3
```

Where each line represents a list of integers. The output indicates the index (with an offset of 1) of the first list, not the value. So 1,2 indicates that the elements at index 0 and 1 are an element of the power set. I am unfamiliar with C, so what are my best options for data structures that will allow the C# code to access the returned data? Thanks in advance.

**Update** Thank you all for your comments so far. Here is a bit of background to the nature of the problem. The iterative method for calculating the power set of a set is fairly straightforward: two loops and a bit of bit manipulation is all there is to it, really. It just gets called... a lot (in fact billions of times if the size of the set is big enough). My thoughts around using C (C++, as people have pointed out) are that it gives more scope for performance tuning. A direct port may not offer any increase, but it opens the way for more involved methods to get a bit more speed out of it. Even a small increase per iteration would equate to a measurable increase. My idea was to port a direct version and then work to improve it, and then refactor it over time (with help from everyone here at SO).

**Update 2** Another fair point from jalf: I don't have to use a List or equivalent. If there is a better way then I am open to suggestions. The only reason for the list was that each set of results is not the same size.
**The code so far...**

```
public List<List<int>> powerset(List<int> currentGroupList)
{
    _currentGroupList = currentGroupList;
    int max;
    int count;

    //Count the objects in the group
    count = _currentGroupList.Count;
    max = (int)Math.Pow(2, count);

    //outer loop
    for (int i = 0; i < max; i++)
    {
        _currentSetList = new List<int>();

        //inner loop
        for (int j = 0; j < count; j++)
        {
            if ((i & (1 << j)) == 0)
            {
                _currentSetList.Add(_currentGroupList.ElementAt(j));
            }
        }
        outputList.Add(_currentSetList);
    }
    return outputList;
}
```

As you can see, not a lot to it. It just goes round and round a lot! I accept that the creating and building of lists may not be the most efficient way, but I need some way of providing the results back in a manageable way.

**Update 2** Thanks for all the input and implementation work. Just to clarify a couple of points raised: I don't need the output to be in 'natural order', and I am also not that interested in the empty set being returned. hughdbrown's implementation is interesting, but I think I will need to store the results (or at least a subset of them) at some point. It sounds like memory limitations will apply long before running time becomes a real issue. Partly because of this, I think I can get away with using bytes instead of integers, giving more potential storage. The question really is then: have we reached the maximum speed for this calculation in C#? Does the option of unmanaged code provide any more scope? I know in many respects the answer is futile, as even if we halved the running time, it would only allow one extra value in the original set.
This returns one set of a powerset at a time. It is based on python code [here](http://groups.google.com/group/comp.lang.python/browse_thread/thread/d9211cd6c65e1d3a/). It works for powersets of over 32 elements. If you need fewer than 32, you can change long to int. It is pretty fast -- faster than my previous algorithm and faster than (my modified to use yield return version of) P Daddy's code.

```
static class PowerSet4<T>
{
    static public IEnumerable<IList<T>> powerset(T[] currentGroupList)
    {
        int count = currentGroupList.Length;
        Dictionary<long, T> powerToIndex = new Dictionary<long, T>();
        long mask = 1L;
        for (int i = 0; i < count; i++)
        {
            powerToIndex[mask] = currentGroupList[i];
            mask <<= 1;
        }

        Dictionary<long, T> result = new Dictionary<long, T>();
        yield return result.Values.ToArray();

        long max = 1L << count;
        for (long i = 1L; i < max; i++)
        {
            long key = i & -i;
            if (result.ContainsKey(key))
                result.Remove(key);
            else
                result[key] = powerToIndex[key];
            yield return result.Values.ToArray();
        }
    }
}
```

You can download all the fastest versions I have tested [here](http://www.iwebthereforeiam.com/files/TestYieldReturn.zip). I really think that using yield return is the change that makes calculating large powersets possible. Allocating large amounts of memory upfront increases runtime dramatically and causes algorithms to fail for lack of memory very early on. Original Poster should figure out how many sets of a powerset he needs at once. Holding all of them is not really an option with >24 elements.
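Since the original goal was a C/C++ port of the bit-mask loop, here is how the same iteration looks in plain C++ (names are mine; for sets much past 20 elements you would want a streaming approach like the one above rather than materialising everything):

```
#include <cstddef>
#include <vector>

// Iterative power set: subset `mask` contains element j iff bit j of mask is set.
std::vector<std::vector<int>> powerset(const std::vector<int>& items) {
    const std::size_t n = items.size();
    std::vector<std::vector<int>> result;
    result.reserve(std::size_t(1) << n);

    for (std::size_t mask = 0; mask < (std::size_t(1) << n); ++mask) {
        std::vector<int> subset;
        for (std::size_t j = 0; j < n; ++j)
            if (mask & (std::size_t(1) << j))
                subset.push_back(items[j]);
        result.push_back(std::move(subset));
    }
    return result;
}
```

For interop, a nested `std::vector` cannot cross a P/Invoke boundary directly; a flat array plus an array of offsets is the usual marshalling-friendly layout.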
Also, be sure that moving to C/C++ is really what you need to do for speed to begin with. Instrument the original C# method (standalone, executed via unit tests), instrument the new C/C++ method (again, standalone via unit tests) and see what the real-world difference is.

The reason I bring this up is that I fear it may be a Pyrrhic victory -- using Smokey Bacon's advice, you get your list class, you're in "faster" C++, but there's still a cost to calling that DLL: bouncing out of the runtime with P/Invoke or COM interop carries a fairly substantial performance cost. Be sure you're getting your "money's worth" out of that jump before you do it.

**Update based on the OP's Update**

If you're calling this loop repeatedly, you need to absolutely make sure that the entire loop logic is encapsulated in a single interop call -- otherwise the overhead of marshalling (as others here have mentioned) will definitely kill you. I do think, given the description of the problem, that the issue isn't that C#/.NET is "slower" than C, but more likely that the code needs to be optimized. As another poster here mentioned, you can use pointers in C# to seriously boost performance in this kind of loop, without the need for marshalling. I'd look into that first, before jumping into a complex interop world, for this scenario.
C data structure to mimic C#'s List<List<int>>?
[ "", "c#", "c", "data-structures", "refactoring", "" ]
I have a macro that looks like this:

```
#define coutError if (VERBOSITY_SETTING >= VERBOSITY_ERROR) ods()
```

where ods() is a class that behaves similarly to cout, and VERBOSITY_SETTING is a global variable. There are a few of these for different verbosity settings, and it allows the code to look something like this:

```
if (someErrorCondition)
{
    // ... do things relating to the error condition ...
    coutError << "Error condition occurred";
}
```

And there is functionality in this framework to set the verbosity, etc. However, the obvious pattern breaks when not using braces in something like this:

```
void LightSwitch::TurnOn()
{
    if (!PowerToSwitch)
        coutError << "No power!";
    else
        SwitchOn = true;
}
```

which, because of the macro, will turn into this:

```
void LightSwitch::TurnOn()
{
    if (!PowerToSwitch)
        if (VERBOSITY_SETTING >= VERBOSITY_ERROR)
            ods() << "No power!";
        else
            SwitchOn = true;
}
```

This is not the intended functionality of the `if` statement. Now, I understand a way to fix this macro properly so it doesn't cause this problem, but I'd like to run an audit on the code and find any place that has this pattern of "if (...) coutError << ...; else" to find out if there are any other cases where this happens, to make sure that when fixing the macro, it will indeed be correct functionality. I can use any language/tool to find this; I just want to know the best way of doing that.
You could try - temporarily - modifying the macro to something like this, and see what doesn't compile...

```
#define coutError {} if (VERBOSITY_SETTING >= VERBOSITY_ERROR) ods()
```

The 'else' clauses should now give errors.
Don't bother trying to find all of the locations in your code where a logic error is occurring -- fix the problem at its source! Change the macro so that there's no possibility of error:

```
#define coutError if(VERBOSITY_SETTING < VERBOSITY_ERROR); else ods()
```

Note that what I've done here is inverted the test, added an empty statement for the `then` clause, and put the output object in the `else` clause. This still allows you to use `<< foo << bar` after the macro, and if you have a trailing `else` clause belonging to a different `if` statement, it will get matched up properly, since it expands like so:

```
if(foo)
    coutError << bar;
else
    baz();
```

becomes

```
if(foo)
    if(VERBOSITY_SETTING < VERBOSITY_ERROR)
        ;
    else
        ods() << bar;
else
    baz();
```
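A minimal, self-contained check that the inverted macro really pairs a trailing `else` with the user's `if` can be built without the real framework; the `Ods` type and globals below are test scaffolding only:

```
#include <string>

const int VERBOSITY_ERROR = 1;
int VERBOSITY_SETTING = 1;      // logging enabled for the demo

std::string log_output;         // captures what ods() would print

struct Ods {
    Ods& operator<<(const char* s) { log_output += s; return *this; }
};
Ods ods() { return Ods(); }

// The fixed macro: empty then-branch, stream in the else-branch.
#define coutError if (VERBOSITY_SETTING < VERBOSITY_ERROR); else ods()

bool switched_on = false;

// Brace-less if/else from the question: the trailing else must
// bind to `if (!power)`, not to the macro's hidden if.
void turnOn(bool power) {
    log_output.clear();
    if (!power)
        coutError << "No power!";
    else
        switched_on = true;
}
```

Because the macro's inner `if` already carries its own `else`, the user's trailing `else` can only attach to the outer `if`, which is exactly the behavior the original macro broke.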
Finding statement pattern in c++ file
[ "", "c++", "parsing", "macros", "" ]
OK - I have an interesting one here. I'm working on a tetris clone (basically to "level-up" my skills). I was trying to refactor my code to get it abstracted the way I wanted it. While it was working just fine before, now I get a segmentation fault before any images can be blitted. I've tried debugging it to no avail. I have posted my SVN working copy of the project [here](http://www.andrews.edu/~sajo/downloads/Tetris-clone.tar.gz). It's just a small project and someone with more knowledge than me and a good debugger will probably figure it out in a snap. The only dependency is [SDL](http://www.libsdl.org). Kudos to the person that can tell me what I'm doing wrong. Edit: As far as I can tell, what I have now and what I had before are logically the same, so I wouldn't think that what I have now would cause a segmentation fault. Just run an svn revert on the working copy, recompile and you can see that it was working...
Look at lines 15 to 18 of Surface.cpp:

```
surface = SDL_DisplayFormatAlpha( tempSurface );
surface = tempSurface;
}
SDL_FreeSurface( tempSurface );
```

I assume it segfaults because when you use this surface later, you are actually operating on tempSurface because of this line:

```
surface = tempSurface;
```

and not the surface returned by SDL_DisplayFormatAlpha(). Since you free tempSurface, surface is now pointing to invalid memory. To fix, simply remove the second line in the else block.
I don't have SDL installed on my machine, but after looking through the code I noticed this in the Output.cpp file:

```
display = new Surface();
```

This does nothing by itself: the constructor for Surface is empty, so its surface member is not initialized. Then in Output::initialize() you do:

```
display->surface = SDL_SetVideoMode( 800, 600, 32, SDL_HWSURFACE | SDL_DOUBLEBUF );
```

This looks like the issue: Surface::surface was never actually initialized. If you haven't found the solution, when I get home I'll dig into it.
Segmentation fault using SDL with C++, trying to Blit images
[ "", "c++", "sdl", "" ]
I'm refactoring 500 lines of C++ code in main() for solving a differential equation. I'd like to encapsulate the big ideas of our solver into smaller functions (i.e. "SolvePotential(...)" instead of 50 lines of numerics code). Should I code this sequentially with a bunch of functions taking **very long** parameter lists, such as:

```
int main(int argc, char **argv){
    interpolate(x, y, z, x_interp, y_interp, z_interp, potential, &newPotential);
    compute_flux(x, y, z, &flux);
    compute_energy(x, y, z, &eng);
    ...
    // 10 other high-level function calls with long parameter lists
    ...
    return 0;
}
```

Or should I create a "SolvePotential" class that would be called like so:

```
int main(int argc, char **argv){
    SolvePotential potential(nx, ny, nz, nOrder);
    potential.solve();
    return 0;
}
```

where I would define functions in SolvePotential that use member variables rather than long parameter lists, such as:

```
void SolvePotential::solve(){
    interpolate();
    compute_flux();
    compute_energy();
    // ...
    // 10 other high-level function calls with NO parameter lists (just use private member variables)
}
```

In either case, I doubt I'll re-use the code very much... really, I'm just refactoring to help with code clarity down the road. Maybe this is like arguing "Is it '12' or 'one dozen'?", but what do you think?
Neither. "Move all my code from one single function to one single class" is not OOP. One of the fundamental rules of OOP is that a class should have *one single area of responsibility*. This is not a single responsibility, it is around 15:

```
void SolvePotential::solve(){
    interpolate();
    compute_flux();
    compute_energy();
    // ...
    // 10 other high-level function calls with NO parameter lists (just use private member variables)
}
```

It also makes it pretty much impossible to maintain a class invariant, doesn't it? When is it valid to call compute_flux? solve? interpolate? What's to stop me from doing it in the wrong order? Will the class be in a valid state if I do? Will I get valid data out of it?

However, why is it an either-or? Why can't you make *multiple* classes *and* functions?

```
// This struct could be replaced with something like
// typedef boost::tuple<double, double, double> coord3d
struct coord3d {
    double x, y, z;
};

coord3d interpolate(const coord3d& coord, const coord3d& interpolated, double potential);

// Just return the potential, rather than using messy output parameters
double compute_flux(const coord3d& coord);   // Return the flux instead of output params
double compute_energy(const coord3d& coord); // And return the energy directly as well
```

Of course, these functions don't have to be functions. If necessary/convenient, each could be made a class, or probably better still, a functor, to maintain the necessary state, and perhaps to allow you to pass them as arguments to other functions efficiently. If optimal performance is important, you may have to be careful with directly returning larger structures, rather than using output parameters, but I'd definitely profile first, to see if it is a problem, and even if it is, you could probably avoid output params with expression templates.
If you have a conceptual object on which a number of independent operations can be performed, it's probably a hint that you need some OOP, and that it should be modelled as a class with a number of member functions, each of which of course maintains the class invariant, no matter how, when and why they're called.

If you need to compose a number of functions, gluing them together to form new, larger pieces of functionality, functional programming and functors are most likely what you need. One common reason (but definitely not the only one) to desire composable functions is if you need to perform the same operation on many different sets of data (perhaps even several different types, all implementing the same concept). Making a functor do the heavy lifting allows it to be used with std::transform or std::for_each. You may also want to use currying to gradually assemble your functions (perhaps some of the functions can be parametrized with a set of fixed parameters, which don't vary between calls). Again, create a functor which is initialized with these fixed parameters, and then supply the varying data in operator().

And finally, if you simply need to perform a *sequence* of operations on some mutable data, plain old procedural programming may be what best suits your needs. Finally, sprinkle with generic programming, templating the necessary classes and functions to allow them to work together without having to jump through hoops like pointer indirection or inheritance.

Don't get too hung up on OOP. Use the tools at your disposal. I don't know enough of the context of your question to say for sure, but it seems to me that what you really need isn't a class, it's just a hierarchy of functions. Your user code calls solve(). solve() internally calls, say (made up, for the sake of example), interpolate() and compute_energy(). compute_energy() internally calls compute_flux(), and so on.
Each function only makes a couple of calls to perform the logical steps that make up the responsibility of the function. So nowhere do you have a huge class with a dozen different responsibilities, or a big monolithic function which does everything sequentially. In any case, there is nothing wrong with "very long parameter lists". You can usually shorten them by grouping some of them together, but even if you can't, there is nothing "un-OOP" about passing a lot of parameters. On the contrary, it means the function is well encapsulated from everything else: all it needs is passed in the parameters, so it isn't really tied to the rest of the application.
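The hierarchy-of-functions suggestion above is small enough to show whole; the arithmetic bodies here are placeholders for the real numerics:

```
// Leaf steps: each does one thing and is testable in isolation.
double interpolate(double x)    { return x * 0.5; }
double compute_flux(double x)   { return x + 1.0; }
double compute_energy(double x) { return compute_flux(x) * 2.0; }

// solve() only sequences the logical steps; reading it gives the
// high-level story, and no function needs a huge parameter list.
double solve(double x) {
    double xi = interpolate(x);
    return compute_energy(xi);
}
```

Swapping the placeholder bodies for the real 50-line numerics routines keeps exactly this call structure.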
Write it sequentially and then refactor if there's something you think you can reuse or would make it clearer. Also, a SolvePotential class doesn't make a lot of sense since a class should be an Object with the method SolvePotential.
Object-oriented or sequential?
[ "", "c++", "oop", "" ]
I've created a new J2SE project in NetBeans, and I can run it from the IDE, but when I try to run it using Ant on the command line, I get the following problem: <snip> ``` run: [java] Exception in thread "main" java.lang.NoClassDefFoundError: IndexBuilder [java] Java Result: 1 ``` <snip> Based on the snippet from `project.properties` below, the class should be found. ``` run.classpath=\ ${javac.classpath}:\ ${build.classes.dir} ``` How do I go about fixing this?
When you are running it from the command line, you are actually invoking [Apache Ant](http://ant.apache.org). The reason you are getting the `NoClassDefFoundError` is because ${javac.classpath} and all the other properties are not being properly populated. That is why your code runs from within the Netbeans context: Netbeans is setting those properties for you.

To answer your original question of how to get it to run from the command line: you can set up a properties file that defines those parameters via a property declaration:

```
<property file="myproject.properties"/>
```

Another solution is to set the properties as environment variables via a sh script. Or you can use real paths in the build script instead of properties. See [here](http://ant.apache.org/manual/running.html) for more details on how to invoke Ant from the command line.
The error you're getting means that one of the following is true: * The class `IndexBuilder` cannot be found on the classpath * A necessary (for class loading) dependency of `IndexBuilder` cannot be found on the classpath That is, when loading the class, it's possible (even likely) that the class **can** be found but that some critical dependency of the class cannot be found. For example, if `IndexBuilder` extends another class and that base class cannot be found on the classpath, you'll get this error. Another example is if `IndexBuilder` uses a class in a static initializer and that class cannot be found. Check your classpath not just for `IndexBuilder` but also for anything that `IndexBuilder` depends on. See, for example, [this discussion](http://www.jroller.com/sjivan/entry/difference_between_classnotfoundexception_and_noclassdeffounderror) of `NoClassDefFoundError`.
NoClassDefFoundError on command-line with new NetBeans project
[ "", "java", "netbeans", "ant", "" ]
I'm trying to build my first generic list and have run into some problems. I understand the declaration looks like " `List<T>` ", and I have `using System.Collections.Generic;` at the top of my page. However, Visual Studio doesn't recognize the `T` variable. What am I missing?
`List<T>` means `List<WhateverTypeYouWantItToBeAListOf>`. So, for example, if I have an `Employee` class and I want a collection of employees, I could say:

```
List<Employee> employeeList = new List<Employee>();
```

I could then add `Employee` objects to that list, have it be type-safe, and let it grow to hold however many employee objects I put in it. Like so:

```
Employee emp1 = new Employee();
Employee emp2 = new Employee();

employeeList.Add(emp1);
employeeList.Add(emp2);
```

`employeeList` now holds `emp1` and `emp2` as objects.

There are several facets to generic collections, the most important being that they provide a type-safe way of having a... well... collection of objects. Type-safe means that any collection will consist of one type of object: you won't find an `Animal` instance inside a `List<Employee>` (unless, of course, `Employee` is a base class that `Animal` inherits from; at that point, however, you have bigger problems).

Programming with generics is its own topic, worthy of (at least) one chapter in a book. At a very high level, programming with generics provides another way to reuse code, independent of any class hierarchy or the implementation details of a specific class. More information [here](http://missingmanuals.com/pub/a/dotnet/2005/06/20/generics.html).
You need to replace `T` with an actual type, such as `int`, `string`, or a class of your own creation:

```
List<int> values = new List<int>();

values.Add(1);
values.Add(2);

int secondNumber = values[1]; // Yay! No casting or boxing required!
```
What are Generic Collections in C#?
[ "", "c#", ".net", "generics", "" ]
I want to build a nice API (C#) to make it easier for people to consume, I think I've seen this before and want to know how to do this: ``` MyNamespace.Cars car = null; if(someTestCondition) car = new Honda(); else car = new Toyota(); car.Drive(40); ``` Is this possible? If so, what needs to be done?
```
interface Car
{
    void Drive(int miles);
}

class Honda : Car
{
    ...
}

class Toyota : Car
{
    ...
}
```
You could do this a couple of different ways: declare an abstract base class, or have an interface that your objects implement. I believe the preferred C# approach would be an interface. Something like (note that interface members carry no access modifiers, and `Color` comes from `System.Drawing`):

```
public interface ICar
{
    Color Color { get; set; }
    void Drive(int speed);
    void Stop();
}

public class Honda : ICar
{
    #region ICar Members

    public Color Color { get; set; }

    public void Drive(int speed)
    {
        throw new NotImplementedException();
    }

    public void Stop()
    {
        throw new NotImplementedException();
    }

    #endregion
}

public class Toyota : ICar
{
    #region ICar Members

    public Color Color { get; set; }

    public void Drive(int speed)
    {
        throw new NotImplementedException();
    }

    public void Stop()
    {
        throw new NotImplementedException();
    }

    #endregion
}
```
How do I implement this type of OOP structure?
[ "", "c#", ".net", "oop", "" ]