I've worked with **Cruise Control** as the CI framework in my last project. Any recommendations on some other tools? (Not that I found CruiseControl lacking, I just wanted to know if someone has done some comparisons.)
We have had great success with [Hudson](http://hudson-ci.org/). It is easy to install and configure, has a great range of plugins and a good web user interface. The checkstyle and cobertura code coverage plugins are two that we use.
JetBrains' [TeamCity](http://www.jetbrains.com/teamcity/) is pretty cool.
Which Continuous Integration library to use?
[ "java", "comparison", "continuous-integration", "agile", "cruisecontrol" ]
I'm an experienced programmer in a legacy (yet object-oriented) development tool and am making the switch to C#/.NET. I'm writing a small single-user app using SQL Server CE 3.5. I've read the conceptual DataSet and related docs, and my code works. Now I want to make sure that I'm doing it "right" and get some feedback from experienced .NET/SQL Server coders, the kind you don't get from reading the docs. I've noticed that I have code like this in a few places: ``` var myTableDataTable = new MyDataSet.MyTableDataTable(); myTableTableAdapter.Fill(myTableDataTable); ... // other code ``` In a single-user app, would you typically just do this once when the app starts, instantiate a DataTable object for each table, and then store a reference to it so you only ever use that single object, which is already filled with data? This way you would only ever read the data from the db once instead of potentially multiple times. Or is the overhead of this so small that it just doesn't matter (plus it could be counterproductive with large tables)?
For CE, it's probably a non-issue. If you were pushing this app to thousands of users who were all hitting a centralized DB, you might want to spend some time on optimization. For a single-user instance DB like CE, unless you've got data that says you need to optimize, I wouldn't spend any time worrying about it. Premature optimization, etc.
It's easier to figure out the answer to this question when you think about datasets as being a "session" of data. You fill the datasets; you work with them; and then you put the data back or discard it when you're done. So you need to ask questions like this: 1. **How current does the data need to be?** Do you always need to have the very very latest, or will the database not change that frequently? 2. **What are you using the data for?** If you're just using it for reports, then you can easily fill a dataset, run your report, then throw the dataset away, and next time just make a new one. That'll give you more current data anyway. 3. **Just how much data are we talking about?** You've said you're working with a relatively small dataset, so there's not a major memory impact if you load it all in memory and hold it there forever. Since you say it's a single-user app without a lot of data, I think you're safe loading everything in at the beginning, using it in your datasets, and then updating on close. The main thing you need to be concerned with in this scenario is: What if the app exits abnormally, due to a crash, power outage, etc.? Will the user lose all his work? But as it happens, datasets are extremely easy to serialize, so you can fairly easily implement a "save every so often" procedure to serialize the dataset contents to disk so the user won't lose a lot of work.
How many DataTable objects should I use in my C# app?
[ "c#", ".net", "sql-server", "datatable", "dataset" ]
I need a way to easily export and then import data in a MySQL table from a remote server to my home server. I don't have direct access to the server, and no utilities such as phpMyAdmin are installed. I do, however, have the ability to put PHP scripts on the server. How do I get at the data? *I ask this question purely to record my way to do it*
You could use SQL for this: ``` $file = 'backups/mytable.sql'; $result = mysql_query("SELECT * INTO OUTFILE '$file' FROM `##table##`"); ``` Then just point a browser or FTP client at the directory/file (backups/mytable.sql). This is also a nice way to do incremental backups, giving the filename a timestamp, for example. To get it back into your database from that file you can use: ``` $file = 'backups/mytable.sql'; $result = mysql_query("LOAD DATA INFILE '$file' INTO TABLE `##table##`"); ``` The other option is to use PHP to invoke a system command on the server and run `mysqldump`: ``` $file = 'backups/mytable.sql'; system("mysqldump --opt -h ##databaseserver## -u ##username## --password=##password## ##database## | gzip > ".$file); ```
I did it by exporting to CSV, and then importing with whatever utility is available. I quite like the use of the php://output stream. ``` $result = $db_con->query('SELECT * FROM `some_table`'); $fp = fopen('php://output', 'w'); if ($fp && $result) { header('Content-Type: text/csv'); header('Content-Disposition: attachment; filename="export.csv"'); while ($row = $result->fetch_array(MYSQLI_NUM)) { fputcsv($fp, array_values($row)); } die; } ```
Easy way to export a SQL table without access to the server or phpMyAdmin
[ "php", "mysql", "export", "load-data-infile" ]
We are maintaining a web application that is built on Classic ASP using VBScript as the primary language. We are in agreement that our backend (framework, if you will) is outdated and doesn't provide us with the proper tools to move forward in a quick manner. We have pretty much embraced the current web MVC pattern that is all over the place, and cannot implement it, in a reasonable manner, with the current technology. The big missing features are proper dispatching and templating with inheritance, amongst others. Currently there are two paths being discussed: 1. Port the existing application to Classic ASP using JScript, which will hopefully allow us to go from there to .NET JScript without too much trouble, and eventually end up on the .NET platform (preferably the MVC stuff will be done by then; ASP.NET isn't much better than where we are now, in our opinions). This has been argued as the safer path with less risk than the next option, albeit it might take slightly longer. 2. Completely rewrite the application using some other technology; right now the leader of the pack is Python WSGI with a custom framework, ORM, and a good templating solution. There is wiggle room here for even Django and other pre-built solutions. This method would hopefully be the quickest solution, as we would probably run a beta beside the actual product, but it does have the potential for a big waste of time if we can't/don't get it right. This does not mean that our logic is gone, as what we have built over the years is fairly stable, just, as noted, difficult to deal with. It is built on SQL Server 2005 with heavy use of stored procedures and published on IIS 6, just for a little more background. Now, the question: has anyone taken either of the two paths above? If so, was it successful? How could it have been better, etc.? We aren't looking to deviate much from doing one of those two things, but some suggestions or other solutions would potentially be helpful.
Don't throw away your code! It's the single worst mistake you can make (on a large codebase). See [Things You Should Never Do, Part 1](http://www.joelonsoftware.com/articles/fog0000000069.html). You've invested a lot of effort into that old code and worked out many bugs. Throwing it away is a classic developer mistake (and one I've done many times). It makes you feel "better", like a spring cleaning. But you don't need to buy a new apartment and all new furniture to outfit your house. You can work on one room at a time... and maybe some things just need a new paintjob. Hence, this is where refactoring comes in. For new functionality in your app, [write it in C# and call it from your classic ASP](http://blog.danbartels.com/articles/322.aspx). You'll be forced to be modular when you rewrite this new code. When you have time, refactor parts of your old code into C# as well, and work out the bugs as you go. Eventually, you'll have replaced your app with all new code. You could also write your own compiler. We wrote one for our classic ASP app a long time ago to allow us to output PHP. It's called [Wasabi](http://www.joelonsoftware.com/items/2006/09/01b.html) and I think it's the reason Jeff Atwood thought Joel Spolsky went off his rocker. Actually, maybe we should just ship it, and then you could use that. It allowed us to switch our entire codebase to .NET for the next release while only rewriting a very small portion of our source. It also caused a bunch of people to call us crazy, but writing a compiler is not that complicated, and it gave us a lot of flexibility. Also, if this is an internal only app, just leave it. Don't rewrite it - you are the only customer and if the requirement is you need to run it as classic asp, you can meet that requirement.
Use this as an opportunity to remove unused features! Definitely go with the new language. Call it 2.0. It will be a lot less work to rebuild the 80% of it that you really need. Start by wiping your brain clean of the whole application. Sit down with a list of its overall goals, then decide which features are needed based on which ones are used. Then redesign it with those features in mind, and build. (I love to delete code.)
If it is decided that our system needs an overhaul, what is the best way to go about it?
[ "python", "asp-classic", "vbscript" ]
I'm still trying to decide whether my (home) project should use [UTF-8](http://en.wikipedia.org/wiki/UTF-8) strings (implemented in terms of std::string with additional UTF-8-specific functions when necessary) or some 16-bit string (implemented as std::wstring). The project is a programming language and environment (like VB, it's a combination of both). There are a few wishes/constraints: * It would be cool if it could run on limited hardware, such as computers with limited memory. * I want the code to run on Windows, Mac and (if resources allow) Linux. * I'll be using [wxWidgets](http://www.wxwidgets.org/) as my GUI layer, but I want the code that interacts with that toolkit confined in a corner of the codebase (I will have non-GUI executables). * I would like to avoid working with two different kinds of strings when working with user-visible text and with the application's data. Currently, I'm working with std::string, with the intent of using UTF-8 manipulation functions only when necessary. It requires less memory, and seems to be the direction many applications are going anyway. If you recommend a 16-bit encoding, which one: [UTF-16](http://en.wikipedia.org/wiki/UTF-16)? [UCS-2](http://en.wikipedia.org/wiki/Universal_Character_Set#Encoding_forms_of_the_Universal_Character_Set)? Another one?
I would recommend UTF-16 for any kind of data manipulation and UI. The Mac OS X and Win32 APIs use UTF-16, as do wxWidgets, Qt, ICU, Xerces, and others. UTF-8 might be better for data interchange and storage. See <http://unicode.org/notes/tn12/>. But whatever you choose, I would definitely recommend against std::string with UTF-8 "only when necessary". Go all the way with UTF-16 or UTF-8, but do not mix and match; that is asking for trouble.
UTF-16 is still a variable-length character encoding (there are more than 2^16 Unicode codepoints), so you can't do O(1) string indexing by codepoint. If you're doing lots of that sort of thing, you're not saving anything in speed over UTF-8. On the other hand, if your text includes a lot of codepoints in the U+0800 to U+FFFF range (three bytes in UTF-8, two in UTF-16), UTF-16 can be a substantial improvement in size. UCS-2 is a variation on UTF-16 that *is* fixed length, at the cost of prohibiting any codepoints above U+FFFF. Without knowing more about your requirements, I would personally go for UTF-8. It's the easiest to deal with for all the reasons others have already listed.
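The size and variable-length trade-offs above are easy to check empirically. A small Python sketch (Python is used here purely for illustration, since the question is about C++; the sample strings are arbitrary):

```python
# Compare encoded sizes, and show that UTF-16 is variable-length too.
samples = {
    "ascii": "hello",            # 1 byte per char in UTF-8
    "accented": "h\u00e9llo",    # U+00E9 takes 2 bytes in UTF-8
    "cjk": "\u4f60\u597d",       # 3 bytes each in UTF-8, 2 each in UTF-16
    "emoji": "\U0001F600",       # outside the BMP: a surrogate pair in UTF-16
}

for name, text in samples.items():
    print(name,
          "utf-8:", len(text.encode("utf-8")), "bytes,",
          "utf-16:", len(text.encode("utf-16-le")), "bytes")

# The emoji occupies two UTF-16 code units (4 bytes), so indexing by
# code unit is not O(1) indexing by codepoint in UTF-16 either.
```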
C++ strings: UTF-8 or 16-bit encoding?
[ "c++", "encoding", "utf-8", "stdstring", "ucs2" ]
I don’t think I’ve grokked currying yet. I understand what it does, and how to do it. I just can’t think of a situation I would use it in. Where are you using currying in JavaScript (or where are the main libraries using it)? DOM manipulation or general application development examples welcome. [One of the answers](https://stackoverflow.com/questions/113780/javascript-curry-what-are-the-practical-applications#113799) mentions animation. Functions like `slideUp` and `fadeIn` take an element as an argument and are normally curried functions returning a higher-order function with the default “animation function” built in. Why is that better than just applying the higher-up function with some defaults? Are there any drawbacks to using it? As requested, here are some good resources on JavaScript currying: * <http://www.dustindiaz.com/javascript-curry/> * Crockford, Douglas (2008) *JavaScript: The Good Parts* * [http://www.svendtofte.com/code/curried\_javascript/](http://www.crockford.com/javascript/www_svendtofte_com/code/curried_javascript/index.html) (Takes a detour into ML, so skip the whole section from “A crash course in ML” and start again at “How to write curried JavaScript”) * <http://web.archive.org/web/20111217011630/http://blog.morrisjohns.com:80/javascript_closures_for_dummies> * [How do JavaScript closures work?](https://stackoverflow.com/questions/111102/how-does-a-javascript-closure-work) * **[https://johnresig.com/blog/partial-functions-in-javascript](https://johnresig.com/blog/partial-functions-in-javascript/) (Mr. Resig on the money as per usual)** * <http://benalman.com/news/2010/09/partial-application-in-javascript/> I’ll add more as they crop up in the comments. --- So, according to the answers, currying and partial application in general are convenience techniques.
If you are frequently “refining” a high-level function by calling it with the same configuration, you can curry (or use Resig’s partial) the higher-level function to create simple, concise helper methods.
@Hank Gay In response to EmbiggensTheMind's comment: I can't think of an instance where [currying](http://en.wikipedia.org/wiki/Currying)—by itself—is useful in JavaScript; it is a technique for converting function calls with multiple arguments into chains of function calls with a single argument for each call, but JavaScript supports multiple arguments in a single function call. In JavaScript—and I assume most other actual languages (not lambda calculus)—it is commonly associated with partial application, though. John Resig [explains it better](https://johnresig.com/blog/partial-functions-in-javascript/#postcomment), but the gist is that you have some logic that will be applied to two or more arguments, and you only know the value(s) for some of those arguments. You can use partial application/currying to fix those known values and return a function that only accepts the unknowns, to be invoked later when you actually have the values you wish to pass. This provides a nifty way to avoid repeating yourself when you would have been calling the same JavaScript built-ins over and over with all the same values but one. To steal John's example: ``` String.prototype.csv = String.prototype.split.partial(/,\s*/); var results = "John, Resig, Boston".csv(); alert( (results[1] == "Resig") + " The text values were split properly" ); ```
Here's an [**interesting AND practical use of currying in JavaScript that uses closures**](http://javascriptweblog.wordpress.com/2010/10/25/understanding-javascript-closures/): > ``` > function converter(toUnit, factor, offset, input) { > offset = offset || 0; > return [((offset + input) * factor).toFixed(2), toUnit].join(" "); > } > > var milesToKm = converter.curry('km', 1.60936, undefined); > var poundsToKg = converter.curry('kg', 0.45460, undefined); > var farenheitToCelsius = converter.curry('degrees C', 0.5556, -32); > > milesToKm(10); // returns "16.09 km" > poundsToKg(2.5); // returns "1.14 kg" > farenheitToCelsius(98); // returns "36.67 degrees C" > ``` This relies on a `curry` extension of `Function`, although as you can see it only uses `apply` (nothing too fancy): > ``` > Function.prototype.curry = function() { > if (arguments.length < 1) { > return this; // nothing to curry with - return function > } > var __method = this; > var args = Array.prototype.slice.call(arguments); > return function() { > return __method.apply(this, args.concat(Array.prototype.slice.call(arguments))); > } > } > ```
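For comparison, the converter example above maps almost directly onto partial application in Python's standard library. A hedged sketch (names adapted from the JavaScript example, not taken from the quoted article):

```python
from functools import partial

def converter(to_unit, factor, offset, value):
    # Same scale-and-offset conversion as the JavaScript example.
    offset = offset or 0
    return "%.2f %s" % ((offset + value) * factor, to_unit)

# Fix the configuration arguments now; supply the input later.
miles_to_km = partial(converter, "km", 1.60936, None)
fahrenheit_to_celsius = partial(converter, "degrees C", 0.5556, -32)

print(miles_to_km(10))            # 16.09 km
print(fahrenheit_to_celsius(98))  # 36.67 degrees C
```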
JavaScript curry: what are the practical applications?
[ "javascript", "function", "currying", "partial-application" ]
I can't post the code (proprietary issues), but does anyone know what types of things would cause the following error in C#? It is being thrown by a VOIP client that I wrote (using the CounterPath API) when the call is ended by the other client. The error is: ``` System.AccessViolationException was unhandled Message="Attempted to read or write protected memory. This is often an indication that other memory is corrupt." Source="System.Windows.Forms" StackTrace: at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.Run(Form mainForm) at CollabAnalysisSF.Edge.GUI.Forms.Program.Main() in d:\data\beyerss\Desktop\client\GUI\ARGui\Program.cs:line 18 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException: ``` *UPDATE:* Turns out one of the libraries we were using was sending off an event that we didn't know about, and the problem was in there somewhere. Fixed now.
List of some possibilities: * An object is being used after it has been disposed. This can happen a lot if you are disposing of a managed object in a finalizer (you should not do that). * An unmanaged implementation of one of the objects you are using is buggy and has corrupted the process memory heap. Happens a lot with DirectX, GDI and others. * Marshaling on the managed-unmanaged boundary is flawed. Make sure you pin a managed pointer before you use it in an unmanaged part of the code. * You are using an unsafe block and doing funny stuff with it. --- In your case it could be a problem with Windows Forms. But the problem is not that it is happening, but rather that it is not being reported correctly; you may still have done something wrong. Are you able to determine which control is causing the error using the HWND? Is it always the same? Is this control doing something funny just before the application crashes? Is the unmanaged part of the control a custom window or a standard control?
This kind of problem can occur if you are calling unmanaged code, e.g. a DLL. It can occur when marshalling goes horribly wrong. Can you tell us if you are calling unmanaged code? If so, are you using default marshalling or something more specific? From the looks of the stack trace, are you using unsafe code, e.g. pointers and the like? This could be your problem.
C# Corrupt Memory Error
[ "c#", "voip" ]
Question in the title. And what happens when all 3 of `$_GET[foo]`, `$_POST[foo]` and `$_COOKIE[foo]` exist? Which one of them gets included in `$_REQUEST`?
I'd say never. If I wanted something to be set via the various methods, I'd code for each of them to remind myself that I'd done it that way - otherwise you might end up with things being overwritten without realising. Shouldn't it work like this? * `$_GET` - non-destructive actions (sorting, recording actions, queries) * `$_POST` - destructive actions (deleting, updating) * `$_COOKIE` - trivial settings (stylesheet preferences etc.) * `$_SESSION` - non-trivial settings (username, logged in?, access levels)
Sometimes you might want the same script to be called in several different ways. A form submit and an AJAX call come to mind. In most cases, however, it's better to be explicit. Also, see <http://docs.php.net/manual/en/ini.core.php#ini.request-order> for how the different sources of variables overwrite each other if there is a name collision.
When and why should $_REQUEST be used instead of $_GET / $_POST / $_COOKIE?
[ "php" ]
I'm having trouble with global variables in php. I have a `$screen` var set in one file, which requires another file that calls an `initSession()` defined in yet another file. The `initSession()` declares `global $screen` and then processes $screen further down using the value set in the very first script. How is this possible? To make things more confusing, if you try to set $screen again then call the `initSession()`, it uses the value first used once again. The following code will describe the process. Could someone have a go at explaining this? ``` $screen = "list1.inc"; // From model.php require "controller.php"; // From model.php initSession(); // From controller.php global $screen; // From Include.Session.inc echo $screen; // prints "list1.inc" // From anywhere $screen = "delete1.inc"; // From model2.php require "controller2.php" initSession(); global $screen; echo $screen; // prints "list1.inc" ``` Update: If I declare `$screen` global again just before requiring the second model, $screen is updated properly for the `initSession()` method. Strange.
`Global` DOES NOT make the variable global. I know it's tricky :-) `Global` says that a local variable will be used *as if it were a variable with a higher scope*. E.g.: ``` <?php $var = "test"; // this is accessible in all the rest of the code, even an included one function foo2() { global $var; echo $var; // this prints "test" $var = 'test2'; } global $var; // this is totally useless, unless this file is included inside a class or function function foo() { echo $var; // this prints nothing, you are using a local var $var = 'test3'; } foo(); foo2(); echo $var; // this will print 'test2' ?> ``` Note that global vars are rarely a good idea. You can code 99.99999% of the time without them, and your code is much easier to maintain if you don't have fuzzy scopes. Avoid `global` if you can.
`global $foo` doesn't mean "make this variable global, so that everyone can use it". `global $foo` means "*within the scope of this function*, use the global variable `$foo`". I am assuming from your example that each time, you are referring to $screen from within a function. If so you will need to use `global $screen` in each function.
global variables in php not working as expected
[ "php", "global" ]
Is there a way to hide the Google Toolbar in my browser programmatically?
You haven't said which browser you are using, so I'm going to assume Internet Explorer\* and answer no. If JavaScript on a web page could manipulate the browser, it would be a serious security hole and could create a lot of confusion for users. So no... for a good reason: security. \*. If you were using Firefox and were talking about JavaScript within an extension to manipulate and theme the window chrome, then this would be a different story.
I really think that it is impossible to do that with JavaScript. This is because JavaScript is designed to control the behaviour of the site, and the browser is not part of the site. Of course, maybe you are talking about some other Google toolbar than the browser plugin.
Hide Google Toolbar with JavaScript
[ "javascript", "browser", "google-toolbar" ]
How would you write a regular expression to convert Markdown into HTML? For example, you would type in the following: ``` This would be *italicized* text and this would be **bold** text ``` This would then need to be converted to: ``` This would be <em>italicized</em> text and this would be <strong>bold</strong> text ``` Very similar to the Markdown edit control used by Stack Overflow. **Clarification** For what it is worth, I am using C#. Also, these are the **only** real tags/Markdown that I want to allow. The amount of text being converted would be less than 300 characters or so.
The best way is to find a version of the Markdown library ported to whatever language you are using (you did not specify in your question). --- Now that you have clarified that you only want STRONG and EM to be processed, and that you are using C#, I recommend you take a look at [Markdown.NET](http://www.aspnetresources.com/blog/markdown_announced.aspx) to see how those tags are implemented. As you can see, it is in fact two expressions. Here is the code: ``` private string DoItalicsAndBold (string text) { // <strong> must go first: text = Regex.Replace (text, @"(\*\*|__) (?=\S) (.+?[*_]*) (?<=\S) \1", new MatchEvaluator (BoldEvaluator), RegexOptions.IgnorePatternWhitespace | RegexOptions.Singleline); // Then <em>: text = Regex.Replace (text, @"(\*|_) (?=\S) (.+?) (?<=\S) \1", new MatchEvaluator (ItalicsEvaluator), RegexOptions.IgnorePatternWhitespace | RegexOptions.Singleline); return text; } private string ItalicsEvaluator (Match match) { return string.Format ("<em>{0}</em>", match.Groups[2].Value); } private string BoldEvaluator (Match match) { return string.Format ("<strong>{0}</strong>", match.Groups[2].Value); } ```
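Those same two expressions translate nearly verbatim to other regex engines. As a rough illustration, here is a simplified two-pass sketch in Python (an adaptation, not the Markdown.NET code, and it deliberately ignores escaping, nesting, and underscore delimiters):

```python
import re

def bold_and_italics(text):
    # <strong> must go first, otherwise '**bold**' would be consumed
    # by the single-asterisk <em> pattern.
    text = re.sub(r"\*\*(?=\S)(.+?)(?<=\S)\*\*", r"<strong>\1</strong>", text)
    text = re.sub(r"\*(?=\S)(.+?)(?<=\S)\*", r"<em>\1</em>", text)
    return text

print(bold_and_italics(
    "This would be *italicized* text and this would be **bold** text"))
```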
A single regex won't do. Every text markup will have its own HTML translator. Better to look into how the existing converters are implemented to get an idea of how it works. <http://en.wikipedia.org/wiki/Markdown#See_also>
Regular expression to convert Markdown to HTML
[ "c#", "html", "regex", "markdown" ]
I am writing a web app using TurboGears, and in that app the users must be able to generate different reports. The data the reports need is stored in a database (MySQL). The reports must be returned either as an easily printable HTML document or a PDF file. I have previously used Jasper and iReport for creating the documents, but if I can avoid having to fire up Java to create the report, I would be happy. Ideally I would like to specify reports in an easily readable markup language and then just feed the template data with some kind of library that produces the report. I gladly accept any kind of hints on how I should go about generating these reports!
You can build some fancy PDFs from Python with the [ReportLab](http://www.reportlab.com/opensource/) toolkit.
[Pod](http://appyframework.org/pod.html) is my favorite solution to your problem.
Report generation
[ "python", "report" ]
PHP's explode function returns an array of strings split on some provided substring. It will return empty strings when there are leading, trailing, or consecutive delimiters, like this: ``` var_dump(explode('/', '1/2//3/')); array(5) { [0]=> string(1) "1" [1]=> string(1) "2" [2]=> string(0) "" [3]=> string(1) "3" [4]=> string(0) "" } ``` Is there some different function or option or anything that would return everything *except* the empty strings? ``` var_dump(different_explode('/', '1/2//3/')); array(3) { [0]=> string(1) "1" [1]=> string(1) "2" [2]=> string(1) "3" } ```
Try [preg_split](http://php.net/preg_split). `$exploded = preg_split('@/@', '1/2//3/', -1, PREG_SPLIT_NO_EMPTY);`
array_filter will remove the blank fields. Here is an example without the filter: ``` print_r(explode('/', '1/2//3/')) ``` prints: ``` Array ( [0] => 1 [1] => 2 [2] => [3] => 3 [4] => ) ``` With the filter: ``` php> print_r(array_filter(explode('/', '1/2//3/'))) ``` prints: ``` Array ( [0] => 1 [1] => 2 [3] => 3 ) ``` You'll get all values that resolve to "false" filtered out - note that this includes the string "0", so pass a callback such as `strlen` if that matters. Also note that the original array keys are preserved, not renumbered. See <http://uk.php.net/manual/en/function.array-filter.php>
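The same split-then-filter idea carries over to other languages. In Python, for instance, `str.split` with an explicit separator produces the same empty strings, and an explicit emptiness test avoids the falsy-value pitfall (a cross-language illustration only):

```python
parts = "1/2//3/".split("/")
print(parts)   # ['1', '2', '', '3', '']

# Testing explicitly for the empty string keeps values such as "0",
# which a bare truthiness filter (like PHP's array_filter with no
# callback) would also drop.
clean = [p for p in parts if p != ""]
print(clean)   # ['1', '2', '3']
```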
Explode string into array with no empty elements?
[ "php", "arrays", "string", "filtering", "explode" ]
Is anyone here using WASP (<http://wasp.sourceforge.net/content/>) in real-world applications? What impressions do you have? Good? Bad? If you can provide any input, how does it compare with Rails, for example? I'm really looking at MVC frameworks for PHP. > **Update:** [This comparison](http://www.phpit.net/article/ten-different-php-frameworks/) I found is good.
I downloaded it a while ago and tried it out, but as the documentation is pretty terrible at the moment (consisting of some auto-generated 'documentation' that was useless), I gave up pretty quickly. I think one of the most important things to have in a framework is clear, thorough documentation - if you have to spend time digging through the code of the framework to find out if a class you want exists, the point of using a framework is lost. WASP does not seem to be ready for production environments just yet, as even their website admits that it's not ready for enterprise applications. If you're looking for a PHP framework, I would recommend CodeIgniter, which has excellent documentation and a helpful community, or Zend, which is pretty mature.
[CakePHP](http://www.cakephp.org) is a great framework with great documentation. Symfony lost me with all the configuration; at the time I was new to both frameworks, and CakePHP stood out as being the best for me, and I was able to pick it up very quickly.
PHP with the WASP framework
[ "php", "model-view-controller", "frameworks" ]
I am reading a .NET book, and in one of the code examples there is a class definition with this field: ``` private DateTime? startdate; ``` What does `DateTime?` mean?
Since `DateTime` is a `struct`, not a `class`, you get a `DateTime` *object*, not a *reference*, when you declare a field or variable of that type. And, in the same way as an `int` cannot be `null`, so this `DateTime` object can never be `null`, because it's not a reference. Adding the question mark turns it into a [*nullable type*](http://msdn.microsoft.com/en-us/library/1t3y8s4s%28v=vs.80%29.aspx), which means that *either* it is a `DateTime` object, *or* it is `null`. `DateTime?` is syntactic sugar for `Nullable<DateTime>`, where [`Nullable`](http://msdn.microsoft.com/en-us/library/b3h38hb0%28v=vs.80%29.aspx) is itself a `struct`.
It's a nullable DateTime. `?` after a primitive type/structure indicates that it is the nullable version. DateTime is a structure that can never be null. From [MSDN](http://msdn.microsoft.com/en-us/library/ee432844.aspx): > The DateTime value type represents dates and times with values ranging from 12:00:00 midnight, January 1, 0001 Anno Domini, or A.D. (also known as Common Era, or C.E.) through 11:59:59 P.M., December 31, 9999 A.D. (C.E.) `DateTime?` can be null however.
What does "DateTime?" mean in C#?
[ "c#", ".net", "datetime", "syntax", "nullable" ]
Is output buffering enabled by default in Python's interpreter for `sys.stdout`? If the answer is positive, what are all the ways to disable it? Suggestions so far: 1. Use the `-u` command line switch 2. Wrap `sys.stdout` in an object that flushes after every write 3. Set `PYTHONUNBUFFERED` env var 4. `sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)` Is there any other way to set some global flag in `sys`/`sys.stdout` programmatically during execution? --- If you just want to flush after a specific write using `print`, see [How can I flush the output of the print function?](https://stackoverflow.com/questions/230751).
From [Magnus Lycka's answer on a mailing list](http://mail.python.org/pipermail/tutor/2003-November/026645.html): > You can skip buffering for a whole > python process using `python -u` > or by > setting the environment variable > PYTHONUNBUFFERED. > > You could also replace sys.stdout with > some other stream like wrapper which > does a flush after every call. > > ``` > class Unbuffered(object): > def __init__(self, stream): > self.stream = stream > def write(self, data): > self.stream.write(data) > self.stream.flush() > def writelines(self, datas): > self.stream.writelines(datas) > self.stream.flush() > def __getattr__(self, attr): > return getattr(self.stream, attr) > > import sys > sys.stdout = Unbuffered(sys.stdout) > print('Hello') > ```
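A related note, not from the mailing-list post: since Python 3.7, text streams also expose `reconfigure()`, so `sys.stdout.reconfigure(line_buffering=True)` makes stdout flush at every newline without replacing the object. The effect of line buffering can be seen against an in-memory stream:

```python
import io

raw = io.BytesIO()
out = io.TextIOWrapper(raw, encoding="utf-8", line_buffering=True)

out.write("no newline yet")   # held in the text layer's buffer
print(raw.getvalue())         # nothing reaches the underlying stream yet
out.write(" - done\n")        # the newline triggers a flush
print(raw.getvalue())         # now the full line is there
```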
I would rather put my answer in [How can I flush the output of the print function?](https://stackoverflow.com/questions/230751/how-to-flush-output-of-python-print) or in [Python's print function that flushes the buffer when it's called?](https://stackoverflow.com/questions/3895481/pythons-print-function-that-flushes-the-buffer-when-its-called), but since they were marked as duplicates of this one (I do not agree), I'll answer it here. Since Python 3.3, print() supports the keyword argument "flush" ([see documentation](http://docs.python.org/3/library/functions.html?highlight=print#print)): ``` print('Hello World!', flush=True) ```
Disable output buffering
[ "", "python", "stdout", "output-buffering", "" ]
I'm using the .NET TWAIN code from <http://www.codeproject.com/KB/dotnet/twaindotnet.aspx?msg=1007385#xx1007385xx> in my application. When I try to scan an image when the scanner is not plugged in, the application freezes. How can I check if the device is plugged in, using the TWAIN driver?
Maybe I'm taking the question too literally, but using the TWAIN API, it is not possible to check if a device is plugged in, i.e. connected and powered on. The TWAIN standard does define a capability for this purpose called CAP\_DEVICEONLINE, but this feature is so poorly conceived and so few drivers implement it correctly that it is useless in practice. The closest you can get is to open the device (MSG\_OPENDS):

* Almost all drivers will check for device-ready when they are opened, and will display an error dialog to the user. *There is no TWAIN mechanism for suppressing or detecting this dialog.*
* Some drivers will allow the user to correct the problem and continue, in which case you (your app) will never know there was a problem.
* Some drivers will allow the user to cancel, in which case the MSG\_OPENDS operation will fail, probably returning TWRC\_CANCEL but maybe TWRC\_FAILURE.
* A few TWAIN drivers will open without error even though the device is off-line. Such a driver *may* return FALSE to a query of CAP\_DEVICEONLINE. Such a driver will probably do the device-online check when you enable the device with MSG\_ENABLEDS, and then if the device is not on-line, you get the error dialog to the user, and so on as above.

Aside, and IMO: WIA is 'more modern' but also much less comprehensive for scanning than TWAIN, and in my experience unusable for multipage scanning from a document feeder. WIA's designers and maintainers seem not to understand or care about scanners other than low-end consumer flatbeds. It's good for cameras.
I started off with the same source code that you downloaded from CodeProject, but moved most of the code in MainFrame.cs that initiates the scanning to a Scanner class. In order to check for scan errors I call the following method instead of calling Twain.Acquire directly:

```
enum AcquireResult
{
    OK = 0,
    InitFailed = 1,
    DeviceIDFailed = 2,
    CapabilityFailed = 3,
    UserInterfaceError = 4
}

private void StartScan()
{
    if (!_msgFilter)
    {
        _parent.Enabled = false;
        _msgFilter = true;
        Application.AddMessageFilter(this);
    }
    AcquireResult ar = _twain.Acquire();
    if (ar != AcquireResult.OK)
    {
        EndingScan();
        switch (ar)
        {
            case AcquireResult.CapabilityFailed:
                throw new Exception("Scanner capability setup failed");
            case AcquireResult.DeviceIDFailed:
                throw new Exception("Unable to determine device identity");
            case AcquireResult.InitFailed:
                throw new Exception("Scanner initialisation failed");
            case AcquireResult.UserInterfaceError:
                throw new Exception("Error with the Twain user interface");
            default:
                throw new Exception("Document scanning failed");
        }
    }
}
```

I usually initiate the scan event on a separate thread in order for the app not to freeze while scanning is in progress.
How do I check if the scanner is plugged in (C#, .NET TWAIN)
[ "", "c#", "twain", "" ]
Imagine I have a function which goes through one million/billion strings and checks something in them, f.ex:

```
foreach (String item in ListOfStrings)
{
    result.add(CalculateSmth(item));
}
```

It consumes lots of time, because CalculateSmth is a very time-consuming function. I want to ask: how do I integrate multithreading into this kind of process? f.ex: I want to fire up 5 threads, each of them returns some results, and that goes on as long as the list has items. Maybe anyone can show some examples or articles..

Forgot to mention I need it in .NET 2.0
You could try the [Parallel extensions](http://msdn.microsoft.com/en-us/concurrency/default.aspx) (part of .NET 4.0) These allow you to write something like: ``` Parallel.Foreach (ListOfStrings, (item) => result.add(CalculateSmth(item)); ); ``` Of course result.add would need to be thread safe.
The Parallel extensions are cool, but this can also be done just by using the thread pool like this:

```
using System.Collections.Generic;
using System.Threading;

namespace noocyte.Threading
{
    class CalcState
    {
        public CalcState(ManualResetEvent reset, string input) {
            Reset = reset;
            Input = input;
        }

        public ManualResetEvent Reset { get; private set; }
        public string Input { get; set; }
    }

    class CalculateMT
    {
        List<string> result = new List<string>();
        List<ManualResetEvent> events = new List<ManualResetEvent>();

        private void Calc()
        {
            List<string> aList = new List<string>();
            aList.Add("test");

            foreach (var item in aList)
            {
                CalcState cs = new CalcState(new ManualResetEvent(false), item);
                events.Add(cs.Reset);
                ThreadPool.QueueUserWorkItem(new WaitCallback(Calculate), cs);
            }

            WaitHandle.WaitAll(events.ToArray());
        }

        private void Calculate(object s)
        {
            CalcState cs = s as CalcState;
            lock (result)
            {
                result.Add(cs.Input);
            }
            cs.Reset.Set(); // signal completion only after the result is recorded
        }
    }
}
```
Speed up loop using multithreading in C# (Question)
[ "", "c#", "multithreading", ".net-2.0", "" ]
I have an application with one form in it, and on the Load method I need to hide the form. The form will display itself when it has a need to (think along the lines of an Outlook 2003 style popup), but I can't figure out how to hide the form on load without something messy. Any suggestions?
I'm coming at this from C#, but should be very similar in vb.net. In your main program file, in the Main method, you will have something like: ``` Application.Run(new MainForm()); ``` This creates a new main form and limits the lifetime of the application to the lifetime of the main form. However, if you remove the parameter to Application.Run(), then the application will be started with no form shown and you will be free to show and hide forms as much as you like. Rather than hiding the form in the Load method, initialize the form before calling Application.Run(). I'm assuming the form will have a NotifyIcon on it to display an icon in the task bar - this can be displayed even if the form itself is not yet visible. Calling `Form.Show()` or `Form.Hide()` from handlers of NotifyIcon events will show and hide the form respectively.
Usually you would only be doing this when you are using a tray icon or some other method to display the form later, but it will work nicely even if you never display your main form.

Create a bool in your Form class that is defaulted to false:

```
private bool allowshowdisplay = false;
```

Then override the SetVisibleCore method

```
protected override void SetVisibleCore(bool value)
{
    base.SetVisibleCore(allowshowdisplay ? value : allowshowdisplay);
}
```

Because Application.Run() sets the form's .Visible = true after it loads the form, this will intercept that and set it to false. In the above case, it will always set it to false until you enable it by setting allowshowdisplay to true.

That will keep the form from displaying on startup; next you need to let SetVisibleCore function normally again by setting allowshowdisplay = true. You will want to do this in whatever user interface function displays the form. In my example it is the left-click event on my NotifyIcon object:

```
private void notifyIcon1_MouseClick(object sender, MouseEventArgs e)
{
    if (e.Button == System.Windows.Forms.MouseButtons.Left)
    {
        this.allowshowdisplay = true;
        this.Visible = !this.Visible;
    }
}
```
Single Form Hide on Startup
[ "", "c#", "vb.net", "winforms", "" ]
I have a Data Access Object TransactionDao. When you call TransactionDao.Save(transaction) I would like for it to set a transaction.IsSaved=true flag (this is a simplification; the actual thing I'm trying to do is not quite so banal). So when mocking my TransactionDao with RhinoMocks, how can I indicate that it should transform its input?

Ideally I would like to write something like this:

```
Expect.Call(delegate {dao.Save(transaction);}).Override(x => x.IsSaved=true);
```

Does anyone know how to do this?

---

Though I got a hint how to do it from the answer specified below, the actual type signature is off. Based on what Mark Ingram posted, it seems the best answer, though nobody's explicitly said it, is to do this:

```
public delegate void FakeSave(Transaction t);
...
Expect.Call(delegate {dao.Save(t); }).Do( new FakeSave(delegate(Transaction t2) { t.IsSaved = true; }));
```
Gorge,

The simplest solution I found that applies to your question is the following:

```
Expect.Call(() => dao.Save(transaction))
    .Do(new Action<Transaction>(x => x.IsSaved = true));
```

So you don't need to create a special delegate or anything else. Just use Action, which is in the standard .NET 3.5 libraries.

Hope this helps.

Frantisek
You can accomplish this using the Do callback: ``` Expect.Call(delegate {dao.Save(transaction);}) .Do(x => x.IsSaved = true); ```
Rhino Mocks: How can I mock out a method that transforms its input?
[ "", "c#", ".net", "rhino-mocks", "" ]
I am developing a C++ class library containing domain model classes, and I would like to add support for instantiating these classes from various persistence mechanisms, i.e. databases and file. The user of the class library should be given an interface(?) against which to program a class that can transfer the data from/to the persistence mechanism. I know of the Data Access Object pattern which seems to work for Java, but I'm not exactly sure how to apply it to C++. Are there other solutions?
C++ supports multiple inheritance so you can have a generic persistence API and inherit a persistence mechanism. This would still have to use introspection to get out the class metadata, but you would still have this issue with any persistence layer.

Alternatively you could do something similar but use the metadata to drive a code generator that fills in the 'Getters' and 'Setters' for the persistence layer.

Any persistence layer will typically use one or the other approach, so your problem is hooking the loading mechanism into the persistence layer. I think this makes your problem little different from a single persistence layer but tackling it from the other direction. Rather than building domain classes onto a persistence framework you are providing a set of domain classes with the hooks for a persistence framework that third parties can plug their data access mechanism into.

I think that once you provide access to class metadata and callbacks the persistence mechanism is relatively straightforward. Look at the metadata components of any convenient C++ O/R mapping framework and understand how they work. Encapsulate this with an API in one of the base classes of your domain classes and provide a generic getter/setter API for instantiation or persisting. The rest is up to the person implementing the persistence layer.

**Edit:** I can't think of a C++ library with the type of pluggable persistence mechanism you're describing, but I did something in Python that could have had this type of facility added. The particular implementation used facilities in Python with no direct C++ equivalent, although the basic principle could probably be adapted to work with C++.

In Python, you can intercept accesses to instance variables by overriding `__getattr__()` and `__setattr__()`. The persistence mechanism actually maintained its own data cache behind the scenes.
When the functionality was mixed into the class (done through multiple inheritance), it overrode the default system behaviour for member accessing and checked whether the attribute being queried matched anything in its dictionary. Where this happened, the call was redirected to get or set an item in the data cache. The cache had metadata of its own. It was aware of relationships between entities within its data model, and knew which attribute names to intercept to access data. The way this worked separated it from the database access layer and could (at least in theory) have allowed the persistence mechanism to be used with different drivers. There is no inherent reason that you couldn't have (for example) built a driver that serialised it out to an XML file. Making something like this work in C++ would be a bit more fiddly, and it may not be possible to make the object cache access as transparent as it was with this system. You would probably be best with an explicit protocol that loads and flushes the object's state to the cache. The code to this would be quite amenable to generation from the cache metadata, but this would have to be done at compile time. You may be able to do something with templates or by overriding the `->` operator to make the access protocol more transparent, but this is probably more trouble than it's worth.
[Boost Serialization](http://www.boost.org/doc/libs/1_36_0/libs/serialization/doc/index.html) provides some pretty useful stuff for working with serializing C++ types, but how well it will match the interface you desire I don't know. It supports both intrusive and non-intrusive designs, so is pretty flexible.
Class library with support for several persistence strategies
[ "", "c++", "design-patterns", "persistence", "" ]
Typing Ctrl+O twice in the editor when a Java type is selected pops up an outline context dialog that displays the members and inherited members. How can I have this in the main Outline view?
Looks like you can't do it. Maybe you should file it as an improvement request.
right click -> Open Type Hierarchy? It does not show it in the same pane but I think you can see what you're looking for.
In eclipse, how to display inherited members in Outline view?
[ "", "java", "eclipse", "ide", "" ]
Take this code: ``` <?php if (isset($_POST['action']) && !empty($_POST['action'])) { $action = $_POST['action']; } if ($action) { echo $action; } else { echo 'No variable'; } ?> ``` And then access the file with ?action=test Is there any way of preventing $action from automatically being declared by the GET? Other than of course adding ``` && !isset($_GET['action']) ``` Why would I want the variable to be declared for me?
Check your php.ini for the `register_globals` setting. It is probably on, you want it off. > Why would I want the variable to be declared for me? [You don't.](https://www.php.net/manual/en/security.globals.php) It's a horrible security risk. It makes the Environment, GET, POST, Cookie and Server variables global [(PHP manual)](https://www.php.net/manual/en/ini.core.php#ini.register-globals). These are a handful of [reserved variables](http://us.php.net/manual/en/reserved.variables.php) in PHP.
Looks like `register_globals` in your php.ini is the culprit. You should turn this off. It's also a huge security risk to have it on. If you're on shared hosting and can't modify php.ini, you can use [ini\_set()](http://php.net/ini_set) to turn register\_globals off.
PHP: GET-data automatically being declared as variables
[ "", "php", "url", "get", "" ]
I have a large source repository split across multiple projects. I would like to produce a report about the health of the source code, identifying problem areas that need to be addressed. Specifically, I'd like to call out routines with a high cyclomatic complexity, identify repetition, and perhaps run some lint-like static analysis to spot suspicious (and thus likely erroneous) constructs. How might I go about constructing such a report?
For measuring cyclomatic complexity, there's a nice tool available at [traceback.org](http://www.traceback.org/2008/03/31/measuring-cyclomatic-complexity-of-python-code/). The page also gives a good overview of how to interpret the results. +1 for [pylint](http://www.logilab.org/project/pylint). It is great at verifying adherence to coding standards (be it [PEP8](http://www.python.org/dev/peps/pep-0008/) or your own organization's variant), which can in the end help to reduce cyclomatic complexity.
For cyclomatic complexity you can use `radon`: <https://github.com/rubik/radon> (Use `pip` to install it: `pip install radon`) Additionally it also has these features: * raw metrics (these include SLOC, comment lines, blank lines, &c.) * Halstead metrics (all of them) * Maintainability Index (the one used in Visual Studio)
How can I analyze Python code to identify problematic areas?
[ "", "python", "static-analysis", "cyclomatic-complexity", "" ]
Is it possible to launch a new window in JavaScript using the window.Open function, then set the focus to that tab? Here's what I'm doing today: ``` var winRef = window.open(outUrl,wName,'left='+ wX +',top=' + wY + ',height=' + wH + ',width=' + wW + args); try { // Not all window types support the focus() property. winRef.focus(); } catch (exception) { } ``` The window opens, but the new tab doesn't receive focus.
Jay, You are seeing designed behavior. To limit opportunities for malicious behavior, scripts running in tabbed windows cannot affect other tabs. For more information, please see Tabbed Browsing for Developers at <http://msdn.microsoft.com/en-us/library/ms537636.aspx> : "The ability to open multiple documents within the same browser window has certain practical and security implications [...] Active tabs (tabs with focus) cannot be affected by scripts that run in inactive or background tabs." BR.
As a user, I never want applications (or tabs) to take focus unless I specifically requested it. I have gone to great lengths to prevent tabs in my browser (Firefox) from taking focus for this reason.
Possible to set tab focus in IE7 from JavaScript
[ "", "javascript", "internet-explorer-7", "focus", "" ]
Using Django's built in models, how would one create a triple-join between three models. For example: * Users, Roles, and Events are the models. * Users have many Roles, and Roles many Users. (ManyToMany) * Events have many Users, and Users many Events. (ManyToMany) * But for any given Event, any User may have only one Role. How can this be represented in the model?
**zacherates** writes:

> I'd model Role as an association class between Users and Roles (...)

I'd also recommend this solution, but you can also make use of some syntactical sugar provided by Django: [ManyToMany relation with extra fields](http://docs.djangoproject.com/en/dev/topics/db/models/#extra-fields-on-many-to-many-relationships).

Example:

```
class User(models.Model):
    name = models.CharField(max_length=128)

class Event(models.Model):
    name = models.CharField(max_length=128)
    members = models.ManyToManyField(User, through='Role')

    def __unicode__(self):
        return self.name

class Role(models.Model):
    person = models.ForeignKey(User)
    group = models.ForeignKey(Event)
    date_joined = models.DateField()
    invite_reason = models.CharField(max_length=64)
```
I'd recommend just creating an entirely separate model for this. ``` class Assignment(Model): user = ForeignKey(User) role = ForeignKey(Role) event = ForeignKey(Event) ``` This lets you do all the usual model stuff, such as ``` user.assignment_set.filter(role__name="Chaperon") role.assignment_set.filter(event__name="Silly Walkathon") ``` The only thing left is to enforce your one-role-per-user-per-event restriction. You can do this in the Assignment class by either overriding the save method (<http://docs.djangoproject.com/en/dev/topics/db/models/#overriding-predefined-model-methods>) or using signals (<http://docs.djangoproject.com/en/dev/topics/signals/>)
How to create a triple-join table with Django
[ "", "python", "django", "model-view-controller", "model", "" ]
Is there a runtime performance penalty when using interfaces (abstract base classes) in C++?
Short Answer: No.

Long Answer: It is not the base class or the number of ancestors a class has in its hierarchy that affects its speed. The only thing is the cost of a method call.

A non-virtual method call has a cost (but can be inlined). A virtual method call has a slightly higher cost as you need to look up the method to call before you call it (but this is a simple table lookup, **not** a search). Since all methods on an interface are virtual by definition, there is this cost.

Unless you are writing some hyper speed-sensitive application, this should not be a problem. The extra clarity that you will receive from using an interface usually makes up for any perceived speed decrease.
## Functions called using virtual dispatch are not inlined

There is one kind of penalty for virtual functions which is easy to forget about: virtual calls are not inlined in the (common) situation where the type of the object is not known at compile time. If your function is small and suitable for inlining, this penalty may be very significant, as you are not only adding call overhead, but the compiler is also limited in how it can optimize the calling function (it has to assume the virtual function may have changed some registers or memory locations; it cannot propagate constant values between the caller and the callee).

## Virtual call cost depends on platform

As for the call overhead penalty compared to a normal function call, the answer depends on your target platform. If you are targeting a PC with an x86/x64 CPU, the penalty for calling a virtual function is very small, as modern x86/x64 CPUs can perform branch prediction on indirect calls. However, if you are targeting a PowerPC or some other RISC platform, the virtual call penalty may be quite significant, because indirect calls are never predicted on some platforms (Cf. [PC/Xbox 360 Cross Platform Development Best Practices](http://download.microsoft.com/download/7/9/d/79d06ce7-d587-48cf-8a36-f98e924a0150/Cross_Platform_Development_Best_Practices.ppt)).
Performance penalty for working with interfaces in C++?
[ "", "c++", "performance", "abstract-class", "virtual-functions", "" ]
I'm working on a WinCE 6.0 system with a touchscreen that stores its calibration data (x-y location, offset, etc.) in the system registry (HKLM\HARDWARE\TOUCH). Right now, I'm placing the cal values into registry keys that get put into the OS image at build time. That works fine for the monitor that I get the original cal values from, but when I load this image into another system with a different monitor, the touchscreen pointer location is (understandably) off, because the two monitors do not have the same cal values. My problem is that I don't know how to properly store values into the registry so that they persist after a power cycle. See, I can recalibrate the screen on the second system, but the new values only exist in volatile memory. I suggested to my boss that we could just tell our customer to leave the power on the unit at all times -- that didn't go over well. I need advice on how to save the new constants into the registry, so that we can calibrate the monitors once before shipping them out to our customer, and not have to make separate OS images for each unit we build. A C# method that is known to work in CE6.0 would be helpful. Thanks. -Odbasta
I think what you're probably looking for is the Flush function of the RegistryKey class. This is normally not necessary (the registry is lazily-flushed by default), but if the power is turned off on the device before the system has a chance to do this, changes will be discarded: <http://msdn.microsoft.com/en-us/library/microsoft.win32.registrykey.flush.aspx> This function is available in .NET Compact Framework version 2.0 and better.
Follow-up on this question: Thanks DannySmurf, flushing the registry key was ultimately what needed to be done. However, there were a few steps that I was missing before reaching that stage. So, here's what came to light: * I was using a RAM-based registry, where by design the registry does not persist after a cold boot. I had to switch the registry to hive-based. * When switching to a hive-based registry structure, you need to make sure that the hive exists on a non-volatile medium. This is specified in the platform.reg file: ``` [HKEY_LOCAL_MACHINE\init\BootVars] "SystemHive"="\\Hard Disk\\system.hv" "ProfileDir"="\\Documents and Settings" "RegistryFlags"=dword:1 ; Flush hive on every RegCloseKey call "SystemHiveInitialSize"=dword:19000 ; Initial size for hive-registry file "Start DevMgr"=dword:1 ``` * Once the system.hv file is on the hard disk (CF card in my case), the values in the registry will persist after a cold boot. Note that the system.hv file contains all the HKLM keys. * It's also important to note that any drivers that need to be initialized on boot have to be specified as such in the .reg files of the solution. For example, I had to make sure that the hard disk drivers (PCMCIA) were loaded before trying to read the system hive file from them. The way to do this is to add a directive in the following format around each driver init key: ``` ;HIVE BOOT SECTION [HKEY_LOCAL_MACHINE\Drivers\PCCARD\PCMCIA\TEMPLATE\PCMCIA] "Dll"="pcmcia.dll" "NoConfig"=dword:1 "IClass"=multi_sz:"{6BEAB08A-8914-42fd-B33F-61968B9AAB32}=PCMCIA Card Services" "Flags"=dword:1000 ;END HIVE BOOT SECTION ``` That, plus a lot of luck, is about it.
Save registry values in WinCE using a C# app
[ "", "c#", "registry", "windows-ce", "" ]
I have a console app in which I want to give the user *x* seconds to respond to the prompt. If no input is made after a certain period of time, program logic should continue. We assume a timeout means empty response. What is the most straightforward way of approaching this?
I'm surprised to learn that after 5 years, all of the answers still suffer from one or more of the following problems:

* A function other than ReadLine is used, causing loss of functionality. (Delete/backspace/up-key for previous input).
* Function behaves badly when invoked multiple times (spawning multiple threads, many hanging ReadLine calls, or otherwise unexpected behavior).
* Function relies on a busy-wait. Which is a horrible waste since the wait is expected to run anywhere from a number of seconds up to the timeout, which might be multiple minutes. A busy-wait which runs for such an amount of time is a horrible suck of resources, which is especially bad in a multithreading scenario. If the busy-wait is modified with a sleep this has a negative effect on responsiveness, although I admit that this is probably not a huge problem.

I believe my solution will solve the original problem without suffering from any of the above problems:

```
class Reader {
  private static Thread inputThread;
  private static AutoResetEvent getInput, gotInput;
  private static string input;

  static Reader() {
    getInput = new AutoResetEvent(false);
    gotInput = new AutoResetEvent(false);
    inputThread = new Thread(reader);
    inputThread.IsBackground = true;
    inputThread.Start();
  }

  private static void reader() {
    while (true) {
      getInput.WaitOne();
      input = Console.ReadLine();
      gotInput.Set();
    }
  }

  // omit the parameter to read a line without a timeout
  public static string ReadLine(int timeOutMillisecs = Timeout.Infinite) {
    getInput.Set();
    bool success = gotInput.WaitOne(timeOutMillisecs);
    if (success)
      return input;
    else
      throw new TimeoutException("User did not provide input within the time limit.");
  }
}
```

Calling is, of course, very easy:

```
try {
  Console.WriteLine("Please enter your name within the next 5 seconds.");
  string name = Reader.ReadLine(5000);
  Console.WriteLine("Hello, {0}!", name);
} catch (TimeoutException) {
  Console.WriteLine("Sorry, you waited too long.");
}
```

Alternatively, you can use
the `TryXX(out)` convention, as shmueli suggested: ``` public static bool TryReadLine(out string line, int timeOutMillisecs = Timeout.Infinite) { getInput.Set(); bool success = gotInput.WaitOne(timeOutMillisecs); if (success) line = input; else line = null; return success; } ``` Which is called as follows: ``` Console.WriteLine("Please enter your name within the next 5 seconds."); string name; bool success = Reader.TryReadLine(out name, 5000); if (!success) Console.WriteLine("Sorry, you waited too long."); else Console.WriteLine("Hello, {0}!", name); ``` In both cases, you cannot mix calls to `Reader` with normal `Console.ReadLine` calls: if the `Reader` times out, there will be a hanging `ReadLine` call. Instead, if you want to have a normal (non-timed) `ReadLine` call, just use the `Reader` and omit the timeout, so that it defaults to an infinite timeout. So how about those problems of the other solutions I mentioned? * As you can see, ReadLine is used, avoiding the first problem. * The function behaves properly when invoked multiple times. Regardless of whether a timeout occurs or not, only one background thread will ever be running and only at most one call to ReadLine will ever be active. Calling the function will always result in the latest input, or in a timeout, and the user won't have to hit enter more than once to submit his input. * And, obviously, the function does not rely on a busy-wait. Instead it uses proper multithreading techniques to prevent wasting resources. The only problem that I foresee with this solution is that it is not thread-safe. However, multiple threads can't really ask the user for input at the same time, so synchronization should be happening before making a call to `Reader.ReadLine` anyway.
```
string ReadLine(int timeoutms)
{
    ReadLineDelegate d = Console.ReadLine;
    IAsyncResult result = d.BeginInvoke(null, null);
    result.AsyncWaitHandle.WaitOne(timeoutms); // timeout e.g. 15000 for 15 secs
    if (result.IsCompleted)
    {
        string resultstr = d.EndInvoke(result);
        Console.WriteLine("Read: " + resultstr);
        return resultstr;
    }
    else
    {
        Console.WriteLine("Timed out!");
        throw new TimeoutException("Timed Out!");
    }
}

delegate string ReadLineDelegate();
```
How to add a Timeout to Console.ReadLine()?
[ "", "c#", ".net", "console", "timeout", "io", "" ]
In Firefox you can enter the following into the awesome bar and hit enter: ``` javascript:self.resizeTo(1024,768); ``` How do you do the same thing in IE?
```
javascript:resizeTo(1024,768);
vbscript:resizeto(1024,768)
```

Will work in IE7, but consider using something like

```
javascript:moveTo(0,0);resizeTo(1024,768);
```

because IE7 doesn't allow the window to "resize" beyond the screen borders. If you work on a 1024x768 desktop, this is what happens...

* Firefox: 1024x768 window, going behind the taskbar. If you drop the moveTo part, the top left corner of the window won't change position. (You still get a 1024x768 window)
* IE7: As close as possible to the requested size without obscuring the taskbar or allowing any part of the window to lie beyond the screen borders.
* Safari: As close as possible to the requested size without obscuring the taskbar or allowing any part of the window to lie beyond the screen borders, but you can omit the moveTo part. Safari will move the top left corner of the window for you.
* Opera: Nothing happens.
* Chrome: Nothing happens.
Maybe not directly related if you were looking for only a JavaScript solution but you can use the free Windows utility [Sizer](http://www.brianapps.net/sizer.html) to automatically resize any (browser) window to a predefined size like 800x600, 1024,768, etc. ![enter image description here](https://i.stack.imgur.com/8v43d.png)
How do you resize an IE browser window to 1024 x 768
[ "", "javascript", "internet-explorer", "" ]
How do I get the key of the current element in a `foreach` loop in C#? For example: ## PHP ``` foreach ($array as $key => $value) { echo("$value is assigned to key: $key"); } ``` ### What I'm trying to do in C#: ``` int[] values = { 5, 14, 29, 49, 99, 150, 999 }; foreach (int val in values) { if(search <= val && !stop) { // Set key to a variable } } ```
[Grauenwolf's way](https://stackoverflow.com/questions/60032/getting-the-array-key-in-a-foreach-loop#60035) is the most straightforward and performant way of doing this with an array: > Either use a for loop or create a temp variable that you increment on each pass. Which would of course look like this: ``` int[] values = { 5, 14, 29, 49, 99, 150, 999 }; for (int key = 0; key < values.Length; ++key) if (search <= values[key] && !stop) { // set key to a variable } ``` With .NET 3.5 you can take a more functional approach as well, but it is a little more verbose at the site, and would likely rely on a couple [support functions](http://en.wikipedia.org/wiki/Apply) for [visiting](http://en.wikipedia.org/wiki/Visitor_pattern) the elements in an IEnumerable. Overkill if this is all you need it for, but handy if you tend to do a lot of collection processing.
If you want to get at the key (read: index) then you'd have to use a for loop. If you actually want to have a collection that holds keys/values then I'd consider using a HashTable or a Dictionary (if you want to use Generics). ``` Dictionary<int, string> items = new Dictionary<int, string>(); foreach (int key in items.Keys) { Console.WriteLine("Key: {0} has value: {1}", key, items[key]); } ``` Hope that helps, Tyler
Getting the array key in a 'foreach' loop
[ "", "c#", "arrays", "" ]
In a recent sharepoint project, I implemented an authentication webpart which should replace the NTLM authentication dialog box. It works fine as long as the user provides valid credentials. Whenever the user provides invalid credentials, the NTLM dialog box pops up in Internet Explorer. My Javascript code which does the authentication via XmlHttpRequest looks like this: ``` function Login() { var request = GetRequest(); // retrieves XmlHttpRequest request.onreadystatechange = function() { if (this.status == 401) { // unauthorized request -> invalid credentials // do something to suppress NTLM dialog box... // already tried location.reload(); and window.location = <url to authentication form>; } } request.open("GET", "http://myServer", false, "domain\\username", "password"); request.send(null); } ``` I don't want the NTLM dialog box to be displayed when the user provides invalid credentials. Instead the postback by the login button in the authentication form should be executed. In other words, the browser should not find out about my unauthorized request. Is there any way to do this via Javascript?
[Mark](https://stackoverflow.com/users/2199/mark-brackett)'s comment is correct; The NTLM auth prompt is triggered by a 401 response code and the presence of NTLM as the first mechanism offered in the WWW-Authenticate header (Ref: [The NTLM Authentication Protocol](http://curl.haxx.se/rfc/ntlm.html)). I'm not sure if I understand the question description correctly, but I think you are trying to wrap the NTLM authentication for SharePoint, which means you don't have control over the server-side authentication protocol, correct? If you're not able to manipulate the server side to avoid sending a 401 response on failed credentials, then you will not be able to avoid this problem, because it's part of the (client-side) spec: ### [The XMLHttpRequest Object](http://www.w3.org/TR/2006/WD-XMLHttpRequest-20060619/) > If the UA supports HTTP Authentication [RFC2617] it SHOULD consider requests > originating from this object to be part of the protection space that includes the > accessed URIs and send Authorization headers and handle 401 Unauthorised requests > appropriately. if authentication fails, UAs should prompt the users for credentials. So the spec actually calls for the browser to prompt the user accordingly if any 401 response is received in an XMLHttpRequest, just as if the user had accessed the URL directly. As far as I can tell the only way to really avoid this would be for you to have control over the server side and cause 401 Unauthorized responses to be avoided, as Mark mentioned. One last thought is that you may be able to get around this using a proxy, such a separate server side script on another webserver. That script then takes a user and pass parameter and checks the authentication, so that the user's browser isn't what's making the original HTTP request and therefore isn't receiving the 401 response that's causing the prompt. 
If you do it this way you can find out from your "proxy" script if it failed, and if so then prompt the user again until it succeeds. On a successful authentication event, you can simply fetch the HTTP request as you are now, since everything works if the credentials are correctly specified.
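To illustrate that last idea concretely, here is a rough, self-contained sketch of such a server-side credential check. Note the assumptions: it is written in Java rather than any SharePoint-specific stack, Basic auth stands in for NTLM purely to keep the sketch short, and the method name is invented. The point is only the shape of the technique: the server performs the authenticated request, swallows any 401, and hands the browser nothing but a boolean, so no `WWW-Authenticate` challenge ever reaches the user agent.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class AuthProbe {
    /**
     * Performs the protected request on the server side and reports only a
     * boolean, so the user's browser never receives the 401 challenge that
     * triggers the credential dialog. Basic auth is used here for brevity;
     * a real SharePoint deployment would be negotiating NTLM instead.
     */
    static boolean credentialsValid(String url, String user, String pass)
            throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        String token = Base64.getEncoder()
            .encodeToString((user + ":" + pass).getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + token);
        int status = conn.getResponseCode(); // the 401, if any, stops here
        conn.disconnect();
        return status != HttpURLConnection.HTTP_UNAUTHORIZED;
    }
}
```

The login form's postback would call something like this and re-prompt on `false`; only on `true` would it proceed to load the protected resource.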
IIRC, the browser pops the auth dialog when the following comes back in the request stream: * Http status of 401 * WWW-Authenticate header I would guess that you'd need to suppress one or both of those. The easy way to do that is to have a login method that'll take a Base64 username and password (you are using HTTPS, right?) and return 200 with a valid/invalid status. Once the password has been validated, you can use it with XHR.
Suppress NTLM dialog box after unauthorized request
[ "", "asp.net", "javascript", "sharepoint", "" ]
Is accessing a **bool** field atomic in C#? In particular, do I need to put a lock around: ``` class Foo { private bool _bar; //... in some function on any thread (or many threads) _bar = true; //... same for a read if (_bar) { ... } } ```
**Yes.** > Reads and writes of the following data types are atomic: bool, char, byte, sbyte, short, ushort, uint, int, float, and reference types. as found in [C# Language Spec](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/language-specification/variables#atomicity-of-variable-references). Edit: It's probably also worthwhile understanding the [**volatile**](https://msdn.microsoft.com/en-us/library/x13ttww7(v=vs.140).aspx) keyword.
As stated above, `bool` is atomic, but you still need to remember that it also depends on what you want to do with it. ``` if(b == false) { //do something } ``` is not an atomic operation, meaning that the value of `b` could change before the current thread executes the code after the `if` statement.
Is a bool read/write atomic in C#
[ "", "c#", ".net", "concurrency", "locking", "boolean", "" ]
I've got a generic<> function that takes a linq query ('items') and enumerates through it adding additional properties. How can I select all the properties of the original 'item' rather than the item itself (as the code below does)? So equivalent to the sql: select \*, 'bar' as Foo from items ``` foreach (var item in items) { var newItem = new { item, // I'd like just the properties here, not the 'item' object! Foo = "bar" }; newItems.Add(newItem); } ```
There's no easy way of doing what you're suggesting, as all types in C# are strong-typed, even the anonymous ones like you're using. However it's not impossible to pull it off. To do it you would have to utilize reflection and emit your own assembly in memory, adding a new module and type that contains the specific properties you want. It's possible to obtain a list of properties from your anonymous item using: ``` foreach(PropertyInfo info in item.GetType().GetProperties()) Console.WriteLine("{0} = {1}", info.Name, info.GetValue(item, null)); ```
Shoot, you wrote exactly what I was going to post. I was just getting some code ready :/ It's a little convoluted, but anyway: ``` ClientCollection coll = new ClientCollection(); var results = coll.Select(c => { Dictionary<string, object> objlist = new Dictionary<string, object>(); foreach (PropertyInfo pi in c.GetType().GetProperties()) { objlist.Add(pi.Name, pi.GetValue(c, null)); } return new { someproperty = 1, propertyValues = objlist }; }); ```
linq equivalent of 'select *' sql for generic function?
[ "", "c#", "linq", "generics", "anonymous-types", "" ]
just a quick question: I am a CS undergrad and have only had experience with the Eclipse, and Net Beans IDEs. I have recently acquired a Macbook and was wanting to recompile a recent school project in Xcode just to test it out. Right after the line where I declare a new instance of an ArrayList: ``` dictionary = new ArrayList<String>(); ``` I get the following error: **generics are not supported in -source 1.3**. I was just wondering if anybody could offer advice as to what the problem might be. The same project compiles in Eclipse on the same machine. I'm running OSX 10.5.4, with Java 1.5.0\_13. Thank you.
Java support in Xcode is obsolete and unmaintained; it's the only bit of Xcode that still uses the "old" build system inherited from Project Builder. Even Apple suggests using Eclipse instead. For Java, both Eclipse and NetBeans work quite well on the Mac; if you want to try native Mac programming, use Objective-C and Cocoa, for which Xcode is fine. That said, the problem is that javac is targeting Java 1.3, which doesn't have generics. You can modify the javac reference in the Ant buildfile (build.xml) as follows: ``` <target name="compile" depends="init" description="Compile code"> <mkdir dir="${bin}"/> <javac deprecation="on" srcdir="${src}" destdir="${bin}" source="1.3" target="1.2" ``` Change "source" and "target" to "1.5".
Generics were introduced in Java 5, so you can't use generics with the -source 1.3 option.
Xcode: 'Generics are not supported in -source 1.3' compiler error?
[ "", "java", "xcode", "compiler-construction", "" ]
I'm using MySQL in particular, but I'm hoping for a cross-vendor solution. I'm using the NOW() function to add a timestamp as a column for each record. ``` INSERT INTO messages (typeId, messageTime, stationId, message) VALUES (?, NOW(), ?, ?) ```
``` SELECT * FROM messages WHERE DATE_SUB(CURDATE(),INTERVAL 1 DAY) <= messageTime ```
The SQL Server query is: ``` Select * From Messages Where MessageTime > DateAdd(dd, -1, GetDate()) ``` As far as I can tell the (untested!) MySQL equivalent is ``` Select * From Messages Where MessageTime > ADDDATE(NOW(), INTERVAL -1 DAY) ```
How do I find records added to my database table in the past 24 hours?
[ "", "sql", "" ]
Does anyone know if it is possible to restart a J2EE application (from the application)? If so, how? I would like to be able to do it in an app-server-agnostic way, if it is possible. The application will be run on many different app servers-- basically whatever the client prefers. If it isn't possible to do this in an app-server-agnostic manner, then it probably isn't really worth doing for my purposes. I can always just display a message informing the user that they will need to restart the app manually.
I would suggest that you're unlikely to find an appserver-agnostic way. And while I don't pretend to know your requirements, I might question a design that requires the application to restart itself, other than an installer that is deploying a new version. Finally, I would suggest that for any nontrivial purpose "any" appserver will not work. You should have a list of supported app servers and versions, documented in your release notes, so you can test on all of those and don't have to worry about supporting clients on a non-conforming server/version. From experience, there are always subtle differences between, for example, Apache Tomcat and BEA WebLogic, and these differences are often undocumented and hard to determine until you run into them.
Most application servers provide a JMX interface, so you could invoke that.
Can you programmatically restart a j2ee application?
[ "", "java", "jakarta-ee", "" ]
I have a structure in C#: ``` public struct UserInfo { public string str1 { get; set; } public string str2 { get; set; } } ``` The only rule is that `UserInfo(str1="AA", str2="BB").Equals(UserInfo(str1="BB", str2="AA"))` How to override the GetHashCode function for this structure?
[MSDN](http://msdn.microsoft.com/en-us/library/system.object.gethashcode.aspx): A hash function must have the following properties: > * If two objects compare as equal, the `GetHashCode` method for each object must return the same value. However, if two objects do not compare as equal, the `GetHashCode` methods for the two objects do not have to return different values. > * The `GetHashCode` method for an object must consistently return the same hash code as long as there is no modification to the object state that determines the return value of the object's `Equals` method. Note that this is true only for the current execution of an application, and that a different hash code can be returned if the application is run again. > * For the best performance, a hash function must generate a random distribution for all input. Taking this into account, the correct way is: ``` return str1.GetHashCode() ^ str2.GetHashCode() ``` `^` can be substituted with another commutative operation
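The recipe itself is language-agnostic: pair an order-insensitive `equals` with a commutative combination of the two field hashes. As an illustration outside C#, here is a minimal Java sketch (the class deliberately mirrors the structure in the question; the Java code and names are my own, not from the original post):

```java
import java.util.Objects;

final class UserInfo {
    private final String str1;
    private final String str2;

    UserInfo(String str1, String str2) {
        this.str1 = str1;
        this.str2 = str2;
    }

    // Equal when the two strings match in either order, as the question requires
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof UserInfo)) {
            return false;
        }
        UserInfo other = (UserInfo) o;
        return (Objects.equals(str1, other.str1) && Objects.equals(str2, other.str2))
            || (Objects.equals(str1, other.str2) && Objects.equals(str2, other.str1));
    }

    // XOR is commutative, so swapping the two fields cannot change the hash;
    // this keeps hashCode consistent with the order-insensitive equals above
    @Override
    public int hashCode() {
        return Objects.hashCode(str1) ^ Objects.hashCode(str2);
    }

    public static void main(String[] args) {
        System.out.println(new UserInfo("AA", "BB").equals(new UserInfo("BB", "AA"))); // true
        System.out.println(new UserInfo("AA", "BB").hashCode()
            == new UserInfo("BB", "AA").hashCode());                                   // true
    }
}
```

Any other commutative combination (addition, for instance) satisfies the same contract.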
See [Jon Skeet's answer](https://stackoverflow.com/questions/263400/what-is-the-best-algorithm-for-an-overridden-system-object-gethashcode) - bitwise operations like `^` are not good; they will often generate colliding hashes!
How do you implement GetHashCode for structure with two string, when both strings are interchangeable
[ "", "c#", "hashtable", "" ]
I have to write an applet that brings up a password dialog. The problem is that the dialog is set to be always on top, but when the user clicks on the IE window, the dialog gets hidden behind it nevertheless. And since the dialog is modal and holds **all** IE threads, the IE pane does not refresh and the dialog window is still painted on top of IE (but not refreshed). This behaviour confuses users (they *see* the dialog on top of IE, but it looks like it has hung since it is not refreshed). So I need a way to keep that dialog on top of everything. But any other solution to this problem would be nice. Here's the code: ``` PassDialog dialog = new PassDialog(parent); /* do some non gui related initialization */ dialog.pack(); dialog.setLocationRelativeTo(null); dialog.setAlwaysOnTop(true); dialog.setVisible(true); ``` Resolution: As @shemnon noted, I should make a window the parent of the modal dialog instead of (null, Frame, Applet). So a good way to initialize the parent was: ``` parent = javax.swing.SwingUtilities.getWindowAncestor(theApplet); ```
What argument are you using for the parent? You may have better luck if you use the parent of the Applet. ``` javax.swing.SwingUtilities.getWindowAncestor(theApplet) ``` Using the getWindowAncestor will skip the applet parents (getRoot(component) will return applets). In at least some versions of Java there was a Frame that was equivalent to the IE window. YMMV.
Make a background Thread that calls toFront on the Dialog every 2 seconds. Code that we use (I hope I got everything): ``` class TestClass { protected void toFrontTimer(JFrame frame) { try { bringToFrontTimer = new java.util.Timer(); bringToFrontTask = new BringToFrontTask(frame); bringToFrontTimer.schedule( bringToFrontTask, 300, 300); } catch (Throwable t) { t.printStackTrace(); } } class BringToFrontTask extends TimerTask { private Frame frame; public BringToFrontTask(Frame frame) { this.frame = frame; } public void run() { if(count < 2) { frame.toFront(); } else { cancel(); } count ++; } private int count = 0; } public void cleanup() { if(bringToFrontTask != null) { bringToFrontTask.cancel(); bringToFrontTask = null; } if(bringToFrontTimer != null) { bringToFrontTimer = null; } } java.util.Timer bringToFrontTimer = null; java.util.TimerTask bringToFrontTask = null; } ```
Modal dialogs in IE gets hidden behind IE if user clicks on IE pane
[ "", "java", "internet-explorer", "swing", "applet", "modal-dialog", "" ]
I am looking for a way to concatenate the strings of a field within a group by query. So for example, I have a table: | ID | COMPANY\_ID | EMPLOYEE | | --- | --- | --- | | 1 | 1 | Anna | | 2 | 1 | Bill | | 3 | 2 | Carol | | 4 | 2 | Dave | and I wanted to group by company\_id to get something like: | COMPANY\_ID | EMPLOYEE | | --- | --- | | 1 | Anna, Bill | | 2 | Carol, Dave | There is a built-in function in mySQL to do this [group\_concat](http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_group-concat)
### PostgreSQL 9.0 or later: Modern Postgres (since 2010) has the [`string_agg(expression, delimiter)`](http://www.postgresql.org/docs/current/static/functions-aggregate.html#FUNCTIONS-AGGREGATE-TABLE) function which will do exactly what the asker was looking for: ``` SELECT company_id, string_agg(employee, ', ') FROM mytable GROUP BY company_id; ``` Postgres 9 also added the ability to specify an `ORDER BY` clause [in any aggregate expression](https://www.postgresql.org/docs/current/static/sql-expressions.html#SYNTAX-AGGREGATES); otherwise you have to order all your results or deal with an undefined order. So you can now write: ``` SELECT company_id, string_agg(employee, ', ' ORDER BY employee) FROM mytable GROUP BY company_id; ``` ### PostgreSQL 8.4.x: Please note that [support for Postgres 8.4 ended in 2014](https://www.postgresql.org/support/versioning/), so you should probably upgrade for more important reasons than string aggregation. PostgreSQL 8.4 (in 2009) introduced [the aggregate function `array_agg(expression)`](http://www.postgresql.org/docs/8.4/interactive/functions-aggregate.html "array_agg(expression)") which collects the values in an array. Then `array_to_string()` can be used to give the desired result: ``` SELECT company_id, array_to_string(array_agg(employee), ', ') FROM mytable GROUP BY company_id; ``` ### PostgreSQL 8.3.x and older: When this question was originally posed, there was no built-in aggregate function to concatenate strings. The simplest custom implementation ([suggested by Vajda Gabo in this mailing list post](http://archives.postgresql.org/pgsql-novice/2003-09/msg00177.php), among many others) is to use the built-in `textcat` function: ``` CREATE AGGREGATE textcat_all( basetype = text, sfunc = textcat, stype = text, initcond = '' ); ``` [Here is the `CREATE AGGREGATE` documentation.](http://www.postgresql.org/docs/8.3/static/sql-createaggregate.html) This simply glues all the strings together, with no separator. 
In order to get a ", " inserted in between them without having it at the end, you might want to make your own concatenation function and substitute it for the "textcat" above. Here is one I put together and tested on 8.3.12: ``` CREATE FUNCTION commacat(acc text, instr text) RETURNS text AS $$ BEGIN IF acc IS NULL OR acc = '' THEN RETURN instr; ELSE RETURN acc || ', ' || instr; END IF; END; $$ LANGUAGE plpgsql; ``` This version will output a comma even if the value in the row is null or empty, so you get output like this: ``` a, b, c, , e, , g ``` If you would prefer to remove extra commas to output this: ``` a, b, c, e, g ``` Then add an `ELSIF` check to the function like this: ``` CREATE FUNCTION commacat_ignore_nulls(acc text, instr text) RETURNS text AS $$ BEGIN IF acc IS NULL OR acc = '' THEN RETURN instr; ELSIF instr IS NULL OR instr = '' THEN RETURN acc; ELSE RETURN acc || ', ' || instr; END IF; END; $$ LANGUAGE plpgsql; ```
How about using Postgres built-in array functions? At least on 8.4 this works out of the box: ``` SELECT company_id, array_to_string(array_agg(employee), ',') FROM mytable GROUP BY company_id; ```
How to concatenate strings of a string field in a PostgreSQL 'group by' query?
[ "", "sql", "postgresql", "group-by", "string-aggregation", "" ]
I know that `JTable` can sort by a single column. But is it possible to allow for multiple column sort or do I need to write the code myself?
You can sort by multiple columns by specifying more than one sort key when calling `setSortKeys` in the [`RowSorter`](http://java.sun.com/javase/6/docs/api/javax/swing/RowSorter.html) you're using.
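To make that concrete, here is a small self-contained sketch (the two-column model, the column names, and the data are all invented for the example). The sorter is given a primary and a secondary sort key; the second key only breaks ties left by the first:

```java
import java.util.Arrays;
import javax.swing.RowSorter;
import javax.swing.SortOrder;
import javax.swing.table.DefaultTableModel;
import javax.swing.table.TableRowSorter;

class MultiColumnSortDemo {
    public static void main(String[] args) {
        // Invented two-column model: last name, then first name
        DefaultTableModel model = new DefaultTableModel(new Object[][] {
            {"Smith", "Zoe"},
            {"Jones", "Amy"},
            {"Smith", "Al"}
        }, new Object[] {"Last", "First"});

        TableRowSorter<DefaultTableModel> sorter =
            new TableRowSorter<DefaultTableModel>(model);

        // Primary sort on column 0; ties are broken by column 1
        sorter.setSortKeys(Arrays.asList(
            new RowSorter.SortKey(0, SortOrder.ASCENDING),
            new RowSorter.SortKey(1, SortOrder.ASCENDING)));

        for (int viewRow = 0; viewRow < model.getRowCount(); viewRow++) {
            int modelRow = sorter.convertRowIndexToModel(viewRow);
            System.out.println(model.getValueAt(modelRow, 0)
                + ", " + model.getValueAt(modelRow, 1));
        }
        // prints: Jones, Amy / Smith, Al / Smith, Zoe
    }
}
```

In a real UI you would attach the sorter with `table.setRowSorter(sorter)`; it is exercised standalone here only so the sketch runs headless.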
You should be able to set the TableRowSorter and the Comparator associated with it. Example: ``` TableModel myModel = createMyTableModel(); JTable table = new JTable(myModel); TableRowSorter t = new TableRowSorter(myModel); t.setComparator(columnIndex, comparator); /* the model column the comparator applies to */ table.setRowSorter(t); ```
Multiple column sort in JTable
[ "", "java", "swing", "jtable", "" ]
In my app I need to save changed values (old and new) when a model gets saved. Any examples or working code? I need this for premoderation of content. For example, if a user changes something in a model, then an administrator can see all the changes in a separate table and decide whether to apply them.
You haven't said very much about your specific use case or needs. In particular, it would be helpful to know what you need to do with the change information (how long do you need to store it?). If you only need to store it for transient purposes, @S.Lott's session solution may be best. If you want a full audit trail of all changes to your objects stored in the DB, try this [AuditTrail solution](http://code.djangoproject.com/wiki/AuditTrail). **UPDATE**: The AuditTrail code I linked to above is the closest I've seen to a full solution that would work for your case, though it has some limitations (doesn't work at all for ManyToMany fields). It will store all previous versions of your objects in the DB, so the admin could roll back to any previous version. You'd have to work with it a bit if you want the change to not take effect until approved. You could also build a custom solution based on something like @Armin Ronacher's DiffingMixin. You'd store the diff dictionary (maybe pickled?) in a table for the admin to review later and apply if desired (you'd need to write the code to take the diff dictionary and apply it to an instance).
I've found Armin's idea very useful. Here is my variation; ``` class DirtyFieldsMixin(object): def __init__(self, *args, **kwargs): super(DirtyFieldsMixin, self).__init__(*args, **kwargs) self._original_state = self._as_dict() def _as_dict(self): return dict([(f.name, getattr(self, f.name)) for f in self._meta.local_fields if not f.rel]) def get_dirty_fields(self): new_state = self._as_dict() return dict([(key, value) for key, value in self._original_state.iteritems() if value != new_state[key]]) ``` Edit: I've tested this BTW. Sorry about the long lines. The difference is (aside from the names) it only caches local non-relation fields. In other words it doesn't cache a parent model's fields if present. And there's one more thing; you need to reset `_original_state` dict after saving. But I didn't want to overwrite `save()` method since most of the times we discard model instances after saving. ``` def save(self, *args, **kwargs): super(Klass, self).save(*args, **kwargs) self._original_state = self._as_dict() ```
Dirty fields in django
[ "", "python", "django", "" ]
Below is the code I use to build an HTML table on the fly (using JSON data received from the server). I display an animated pleasewait (.gif) graphic while the data is loading. However, the graphic freezes while the JavaScript function is building the table. At first, I was just happy to make this happen (display the table), I guess now I need to work on efficiency. At the very least I need to stop the animated graphic from freezing. I can go to a static "Loading" display, but I would rather make this method work. Suggestions for my pleasewait display? And efficiency? Possibly a better way to build the table? Or maybe not a table, but some other "table" like display ``` var t = eval( "(" + request + ")" ) ; var myTable = '' ; myTable += '<table id="myTable" cellspacing=0 cellpadding=2 border=1>' ; myTable += "<thead>" ; myTable += "<tr>"; for (var i = 0; i < t.hdrs.length; i++) { myTable += "<th>" + header + "</th>"; } myTable += "</tr>" ; myTable += "</thead>" ; myTable += "<tbody>" ; for (var i = 0; i < t.data.length; i++) { myTable += '<tr>'; for (var j = 0; j < t.hdrs.length; j++) { myTable += '<td>'; if (t.data[i][t.hdrs[j]] == "") { myTable += "&nbsp;" ; } else { myTable += t.data[i][t.hdrs[j]] ; } myTable += "</td>"; } myTable += "</tr>"; } myTable += "</tbody>" ; myTable += "</table>" ; $("#result").append(myTable) ; $("#PleaseWaitGraphic").addClass("hide"); $(".rslt").removeClass("hide") ; ```
You basically want to set up your loops so they yield to other threads every so often. Here is some example code from [this article](http://www.julienlecomte.net/blog/2007/10/28/) on the topic of running CPU intensive operations without freezing your UI: ``` function doSomething (progressFn [, additional arguments]) { // Initialize a few things here... (function () { // Do a little bit of work here... if (continuation condition) { // Inform the application of the progress progressFn(value, total); // Process next chunk setTimeout(arguments.callee, 0); } })(); } ``` As far as simplifying the production of HTML in your script, if you're using jQuery, you might give my [Simple Templates](http://plugins.jquery.com/project/simple-templates) plug-in a try. It tidies up the process by cutting down drastically on the number of concatenations you have to do. It performs pretty well, too after I recently did some refactoring that resulted in a pretty big [speed increase](http://andrew.hedges.name/experiments/simple-templates-speed-test/). Here's an example (without doing *all* of the work for you!): ``` var t = eval('(' + request + ')') ; var templates = { tr : '<tr>#{row}</tr>', th : '<th>#{header}</th>', td : '<td>#{cell}</td>' }; var table = '<table><thead><tr>'; $.each(t.hdrs, function (key, val) { table += $.tmpl(templates.th, {header: val}); }); ... ```
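The chunking pattern itself is not JavaScript-specific. As a language-neutral illustration (sketched here in Java, with invented sizes and names), each slice of bounded work re-submits itself to the work queue, so anything else waiting on that queue can interleave between slices, just as `setTimeout(..., 0)` lets the browser repaint between chunks:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

class ChunkedWorkDemo {
    /**
     * Processes `total` items in slices of at most `chunk`, re-submitting
     * itself to the queue between slices so other queued tasks can
     * interleave; returns the number of slices it took.
     */
    static int processInChunks(final int total, final int chunk) {
        final ExecutorService queue = Executors.newSingleThreadExecutor();
        final AtomicInteger processed = new AtomicInteger();
        final AtomicInteger slices = new AtomicInteger();
        final CountDownLatch done = new CountDownLatch(1);

        Runnable step = new Runnable() {
            public void run() {
                slices.incrementAndGet();
                // "Do a little bit of work here..."
                for (int i = 0; i < chunk && processed.get() < total; i++) {
                    processed.incrementAndGet();
                }
                if (processed.get() < total) {
                    queue.submit(this); // yield, then continue with the next slice
                } else {
                    done.countDown();
                }
            }
        };
        queue.submit(step);
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        queue.shutdown();
        return slices.get();
    }

    public static void main(String[] args) {
        System.out.println(processInChunks(1000, 100) + " slices"); // 10 slices
    }
}
```

The browser version differs only in mechanism: the "queue" is the browser's event loop, and the re-submit is the `setTimeout(arguments.callee, 0)` call.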
I've been using [JTemplates](http://jtemplates.tpython.com/) to accomplish what you are describing. Dave Ward has an example on his blog [here](http://encosia.com/2008/06/26/use-jquery-and-aspnet-ajax-to-build-a-client-side-repeater/). The main benefit of JTemplates is that your html isn't woven into your javascript. You write a template and call two functions to have jTemplate build the html from your template and your json.
Building an HTML table on the fly using jQuery
[ "", "javascript", "jquery", "" ]
(Jeopardy-style question, I wish the answer had been online when I had this issue) Using Java 1.4, I have a method that I want to run as a thread some of the time, but not at others. So I declared it as a subclass of Thread, then either called start() or run() depending on what I needed. But I found that my program would leak memory over time. What am I doing wrong?
This is a known bug in Java 1.4: <https://bugs.java.com/bugdatabase/view_bug;jsessionid=5869e03fee226ffffffffc40d4fa881a86e3:WuuT?bug_id=4533087> It's fixed in Java 1.5 but Sun doesn't intend to fix it in 1.4. The issue is that, at construction time, a `Thread` is added to a list of references in an internal thread table. It won't get removed from that list until its start() method has completed. As long as that reference is there, it won't get garbage collected. So, never create a thread unless you're definitely going to call its `start()` method. A `Thread` object's `run()` method should not be called directly. A better way to code it is to implement the `Runnable` interface rather than subclass `Thread`. When you don't need a thread, call ``` myRunnable.run(); ``` When you do need a thread: ``` Thread myThread = new Thread(myRunnable); myThread.start(); ```
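For completeness, here is a tiny runnable illustration of that advice: the same `Runnable` used synchronously and then on a real thread (the counting task is purely for demonstration):

```java
class RunnableDemo {
    // Trivial task used only to show both invocation styles
    static class CountingTask implements Runnable {
        volatile int runs = 0;
        public void run() { runs++; }
    }

    static int demo() {
        CountingTask task = new CountingTask();

        // Synchronous case: no Thread object is ever constructed, so nothing
        // is registered in the VM's internal thread table.
        task.run();

        // Asynchronous case: the Thread is constructed *and* started, so it
        // is removed from the thread table once it has run.
        Thread t = new Thread(task);
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return task.runs;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 2
    }
}
```

Either path runs the task; only the second ever constructs a `Thread`, which is what matters for the 1.4 leak described above.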
I doubt that constructing an instance of a Thread or a subclass thereof leaks memory. Firstly, there's nothing of the sorts mentioned in the Javadocs or the Java Language Specification. Secondly, I ran a simple test and it also shows that no memory is leaked (at least not on Sun's JDK 1.5.0\_05 on 32-bit x86 Linux 2.6): ``` public final class Test { public static final void main(String[] params) throws Exception { final Runtime rt = Runtime.getRuntime(); long i = 0; while(true) { new MyThread().run(); i++; if ((i % 100) == 0) { System.out.println((i / 100) + ": " + (rt.freeMemory() / 1024 / 1024) + " " + (rt.totalMemory() / 1024 / 1024)); } } } static class MyThread extends Thread { private final byte[] tmp = new byte[10 * 1024 * 1024]; public void run() { System.out.print("."); } } } ``` EDIT: Just to summarize the idea of the test above. Every instance of the MyThread subclass of a Thread references its own 10 MB array. If instances of MyThread weren't garbage-collected, the JVM would run out of memory pretty quickly. However, running the test code shows that the JVM is using a small constant amount of memory regardless of the number of MyThreads constructed so far. I claim this is because instances of MyThread are garbage-collected.
Why is my Java program leaking memory when I call run() on a Thread object?
[ "", "java", "multithreading", "memory-leaks", "" ]
I don't want `PHP` errors to display /html, but I want them to display in `/html/beta/usercomponent`. Everything is set up so that errors do not display at all. How can I get errors to just show up in that one folder (and its subfolders)?
In `.htaccess`: ``` php_value error_reporting 2147483647 ``` This number, according to the documentation, should enable 'all' errors irrespective of version. If you want a more granular setting, manually OR the values together, or run ``` php -r 'echo E_ALL | E_STRICT;' ``` to let PHP compute the value for you. You need ``` AllowOverride All ``` in Apache's master configuration to enable .htaccess files. More reading on this can be found here: * [Php/Error Reporting Flag](http://php.net/manual/en/errorfunc.configuration.php#ini.error-reporting) * [Php/Error Reporting values](http://php.net/manual/en/errorfunc.constants.php) * [Php/Different Ways of Tuning Settings](http://php.net/manual/en/configuration.changes.php) --- **Notice** If you are using Php-CGI instead of mod\_php, this may not work as advertised, and all you will get is an internal server error; you will be left without much option other than enabling it either site-wide or on a per-script basis with ``` error_reporting(E_ALL | E_STRICT); ``` or similar constructs before the error occurs. My advice is to **disable** displaying errors to the user, and make heavy use of PHP's error\_log feature. ``` display_errors = 0 log_errors = On error_reporting = E_ALL | E_STRICT error_log = /var/log/php ``` If you have problems with this being too noisy, that is not a sign you need to turn error reporting off selectively; it is a sign somebody should fix the code. --- @Roger Yes, you can use it in a `<Directory>` construct in Apache's configuration too; however, the .htaccess in this case is equivalent and makes it more portable, especially if you have multiple working checkout copies of the same codebase and you want to distribute this change to all of them.
If you have multiple virtual hosts, you'll want the construct in the respective virtual host's definition; otherwise, yes: ``` <Directory /path/to/wherever/on/filesystem> <IfModule mod_php5.c> php_value error_reporting 2147483647 </IfModule> </Directory> ``` The additional "IfModule" directives are just a safety net so the above problem of Apache dying if you don't have mod\_php won't occur.
The easiest way would be to control the error reporting from a .htaccess file. But this is assuming you are using Apache and the scripts in /html/beta/usercomponent are called from that directory and not included from elsewhere. .htaccess ``` php_value error_reporting [int] ``` You will have to compose the integer value yourself from the list as described in the [error\_reporting](https://www.php.net/manual/en/function.error-reporting.php) documentation, since the constants like E\_ERROR aren't defined when Apache interprets the .htaccess. It's a simple bitwise flag, so a value of 14, for example, would be E\_WARNING + E\_PARSE + E\_NOTICE.
How can I turn on PHP errors display on just a subfolder
[ "", "php", "" ]
I've heard that the `static_cast` operator should be preferred to C-style or simple function-style casting. Is this true? Why?
The main reason is that classic C casts make no distinction between what we call `static_cast<>()`, `reinterpret_cast<>()`, `const_cast<>()`, and `dynamic_cast<>()`. These four things are completely different. A `static_cast<>()` is usually safe. There is a valid conversion in the language, or an appropriate constructor that makes it possible. The only time it's a bit risky is when you cast down to an inherited class; you must make sure that the object is actually the descendant that you claim it is, by means external to the language (like a flag in the object). A `dynamic_cast<>()` is safe as long as the result is checked (pointer) or a possible exception is taken into account (reference). A `reinterpret_cast<>()` (or a `const_cast<>()`) on the other hand is always dangerous. You tell the compiler: "trust me: I know this doesn't look like a `foo` (this looks as if it isn't mutable), but it is". The first problem is that it's almost impossible to tell which one will occur in a C-style cast without looking at large and disperse pieces of code and knowing all the rules. Let's assume these: ``` class CDerivedClass : public CMyBase {...}; class CMyOtherStuff {...} ; CMyBase *pSomething; // filled somewhere ``` Now, these two are compiled the same way: ``` CDerivedClass *pMyObject; pMyObject = static_cast<CDerivedClass*>(pSomething); // Safe; as long as we checked pMyObject = (CDerivedClass*)(pSomething); // Same as static_cast<> // Safe; as long as we checked // but harder to read ``` However, let's see this almost identical code: ``` CMyOtherStuff *pOther; pOther = static_cast<CMyOtherStuff*>(pSomething); // Compiler error: Can't convert pOther = (CMyOtherStuff*)(pSomething); // No compiler error. // Same as reinterpret_cast<> // and it's wrong!!! ``` As you can see, there is no easy way to distinguish between the two situations without knowing a lot about all the classes involved. The second problem is that the C-style casts are too hard to locate. 
In complex expressions it can be very hard to see C-style casts. It is virtually impossible to write an automated tool that locates C-style casts (for example a search tool) without a full-blown C++ compiler front-end. On the other hand, it's easy to search for "static\_cast<" or "reinterpret\_cast<". ``` pOther = reinterpret_cast<CMyOtherStuff*>(pSomething); // No compiler error. // but the presence of a reinterpret_cast<> is // like a Siren with Red Flashing Lights in your code. // The mere typing of it should cause you to feel VERY uncomfortable. ``` That means that, not only are C-style casts more dangerous, but it's a lot harder to find them all to make sure that they are correct.
One pragmatic tip: you can search easily for the `static_cast` keyword in your source code if you plan to tidy up the project.
Why use static_cast<T>(x) instead of (T)x?
[ "", "c++", "casting", "static-cast", "" ]
Using C#, VS2005, and .NET 2.0. (XP 32 bit) This is a Winforms app that gets called by a VBA addin (.xla) via Interop libraries. This app has been around for a while and works fine when the assembly is compiled and executed anywhere other than my dev machine. On dev it crashes hard (in debugger and just running the object) with "Unhandled exception at 0x... in EXCEL.EXE: 0x...violation reading location 0x...". But here's the weird part: The first method in my interface works fine. All the other methods crash as above. Here is an approximation of the code: ``` [Guid("123Fooetc...")] [InterfaceType(ComInterfaceType.InterfaceIsIDispatch)] public interface IBar { [DispId(1)] void ThisOneWorksFine(Excel.Workbook ActiveWorkBook); [DispId(2)] string Crash1(Excel.Workbook ActiveWorkBook); [DispId(3)] int Crash2(Excel.Workbook activeWorkBook, Excel.Range target, string someStr); } [Guid("345Fooetc..")] [ClassInterface(ClassInterfaceType.None)] [ProgId("MyNameSpace.MyClass")] public class MyClass : IBar { public void ThisOneWorksFine(Excel.Workbook ActiveWorkBook) {...} public string Crash1(Excel.Workbook ActiveWorkBook) {...} public int Crash2(Excel.Workbook activeWorkBook, Excel.Range target, string someStr) {...} } ``` It seems like some kind of environmental thing. Registry chundered? Could be code bugs, but it works fine elsewhere.
I've had problems in this scenario with Office 2003 in the past. Some things that have helped: * Installing Office 2003 Service Pack 2 stopped some crashes that happened when closing Excel. * Installing Office 2003 Service Pack 3 fixes a bug with using XP styles in a VSTO2005 application (not your case here) * Running the Excel VBA CodeCleaner <http://www.appspro.com/Utilities/CodeCleaner.htm> periodically helps prevent random crashes. * Accessing Excel objects from multiple threads would be dodgy, so I hope you aren't doing that. If you have the possibility you could also try opening a case with Microsoft PSS. They are pretty good if you are able to reproduce the problem. And in most cases, this kind of thing is a bug, so you won't be charged for it :)
Is your dev machine Win64? I've had problems with win64 builds of apps that go away if you set the build platform to x86.
Excel Addin Access Violation
[ "", "c#", "interop", ".net-2.0", "add-in", "" ]
I'm running some JUnit tests on my applications. Every test has a for loop calling the respective method 10000 times. The tested methods produce a lot of log output. These logs are also automatically collected by JUnit as test output. This situation leads to an OutOfMemoryError because the string buffer where JUnit keeps the output becomes too large. I don't need these logs during tests, so if there is a way to tell JUnit "don't keep program output" it would be enough. Any ideas?
What type of logging are you using? Is there some way you can override the default logging behavior to just disregard all log messages?
Some options: 1. Change your logging so that it dumps to a file instead of standard output. 2. Increase the maximum heap size with `-Xmx<size>`, like `-Xmx256M` (note there is no space between `-Xmx` and the value).
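If the methods log through `java.util.logging` (an assumption; Log4j has an equivalent `Level.OFF`), option 1-style silencing can go further: turn off the root logger so nothing ever reaches JUnit's captured output at all. The logger name below is made up:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class SilenceLogs {
    public static void main(String[] args) {
        // Turn off the root logger; every logger that inherits its level goes quiet.
        Logger root = Logger.getLogger("");
        root.setLevel(Level.OFF);

        Logger testLogger = Logger.getLogger("com.example.undertest"); // hypothetical name
        testLogger.info("this is discarded rather than buffered by the test runner");
        System.out.println(testLogger.isLoggable(Level.INFO)); // prints false
    }
}
```

In a JUnit test this would typically go in a setup method run once before the suite; with records discarded at the logger level, they never reach any handler, captured or not.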
Junit output and OutOfMemoryError
[ "", "java", "junit", "" ]
I have a large codebase without Javadoc, and I want to run a program to write a skeleton with the basic Javadoc information (e.g., for each method's parameter write @param...), so I just have to fill the gaps left. Anyone know a good solution for this? **Edit:** JAutodoc is what I was looking for. It has Ant tasks, an Eclipse plugin, and uses Velocity for the template definition.
The [JAutodoc](http://jautodoc.sourceforge.net/) plugin for Eclipse does exactly what you need, but at package granularity: right-click on a package, select "Add javadoc for members..." and the skeleton will be added. There are numerous interesting options: templates for javadoc, adding a TODO in the header of every file saying: "template javadoc, must be filled...", etc.
I think auto-generating empty Javadoc is an anti-pattern and should be discouraged; it gives code the appearance of being documented, but just adds noise to the codebase. I would recommend instead that you configure your code editor to assist on a per-method and per-class basis to use when you actually write the javadoc (one commenter pointed to Eclipse's feature that does this).
Javadoc template generator
[ "", "java", "documentation", "javadoc", "" ]
There are lots of widgets provided by sites that are effectively bits of JavaScript that generate HTML through *DOM* manipulation or `document.write()`. Rather than slow the browser down even more with additional requests and trust yet another provider to be fast, reliable and not change the widget output, I want to execute\* the JavaScript to generate the rendered HTML, and then save that HTML source. Things I've looked into that seem unworkable or way too difficult: 1. The Links Browser (*not lynx!*) 2. Headless use of Xvfb plus Firefox plus Greasemonkey (*yikes*) 3. The all-Java browser toolkit Cobra (*the best bet!*) Any ideas? \* Obviously you can't really execute the JavaScript completely, as it doesn't necessarily have an exit path, but you get the idea.
Wikipedia's ["Server-side JavaScript"](http://en.wikipedia.org/wiki/Server-side_JavaScript) article lists numerous implementations, many of which are based on Mozilla's **Rhino** JavaScript-to-Java converter, or its cousin **SpiderMonkey** (the same engine as found in Firefox and other Gecko-based browsers). In particular, something simple like [**mod\_js**](http://www.modjs.org/) for Apache may suit your needs.
If you're just using plain JS, [Rhino](http://www.mozilla.org/rhino/) should do the trick. But if the JS code is actually calling DOM methods and so on, you're going to need a full-blown browser. [Crowbar](http://simile.mit.edu/wiki/Crowbar) might help you. Is this really going to make things faster for users without causing compatibility issues?
Executing JavaScript to Render HTML for Server-Side Caching
[ "", "javascript", "html", "rendering", "greasemonkey", "" ]
If you need to open a SqlConnection before issuing queries, can you simply handle all non-Open ConnectionStates in the same way? For example: ``` if (connection.State != ConnectionState.Open) { connection.Open(); } ``` I read somewhere that for ConnectionState.Broken the connection needs to be closed before its re-opened. Does anyone have experience with this? Thanks-
<http://msdn.microsoft.com/en-us/library/system.data.connectionstate.aspx> A connection in the Broken state does need to be closed and reopened before it is eligible for continued use. Edit: Unfortunately, closing a closed connection will balk as well. You'll need to test the ConnectionState before acting on an unknown connection. Perhaps a short switch statement could do the trick.
This isn't directly answering your question, but the best practice is to open and close a connection for every access to the database. ADO.NET connection pooling ensures that this performs well. It's particularly important to do this in server apps (e.g. ASP.NET), but I would do it even in a WinForms app that accesses the database directly. Example: ``` using(SqlConnection connection = new SqlConnection(...)) { connection.Open(); // ... do your stuff here } // Connection is disposed and closed here, even if an exception is thrown ``` In this way you never need to check the connection state when opening a connection.
Handling different ConnectionStates before opening SqlConnection
[ "", "c#", ".net", "sql-server", "database", "" ]
I have been working on some legacy C++ code that uses variable length structures (TAPI), where the structure size will depend on variable length strings. The structures are allocated by casting array `new` thus: ``` STRUCT* pStruct = (STRUCT*)new BYTE[sizeof(STRUCT) + nPaddingSize]; ``` Later on however the memory is freed using a `delete` call: ``` delete pStruct; ``` Will this mix of array `new[]` and non-array `delete` cause a memory leak or would it depend on the compiler? Would I be better off changing this code to use `malloc` and `free` instead?
Technically I believe it could cause a problem with mismatched allocators, though in practice I don't know of any compiler that would not do the right thing with this example. More importantly, if `STRUCT` were to have (or ever be given) a destructor, then it would invoke the destructor without having invoked the corresponding constructor. Of course, if you know where pStruct came from, why not just cast it on delete to match the allocation: ``` delete [] (BYTE*) pStruct; ```
I personally think you'd be better off using `std::vector` to manage your memory, so you don't need the `delete`. ``` std::vector<BYTE> backing(sizeof(STRUCT) + nPaddingSize); STRUCT* pStruct = (STRUCT*)(&backing[0]); ``` Once backing leaves scope, your `pStruct` is no longer valid. Or, you can use: ``` boost::scoped_array<BYTE> backing(new BYTE[sizeof(STRUCT) + nPaddingSize]); STRUCT* pStruct = (STRUCT*)backing.get(); ``` Or `boost::shared_array` if you need to move ownership around.
Will this C++ code cause a memory leak (casting array new)
[ "", "c++", "memory-management", "memory-leaks", "" ]
What is the main difference between an inner class and a static nested class in Java? Does design / implementation play a role in choosing one of these?
From the [Java Tutorial](http://java.sun.com/docs/books/tutorial/java/javaOO/nested.html): > Nested classes are divided into two categories: static and non-static. Nested classes that are declared static are simply called static nested classes. Non-static nested classes are called inner classes. Static nested classes are accessed using the enclosing class name: ``` OuterClass.StaticNestedClass ``` For example, to create an object for the static nested class, use this syntax: ``` OuterClass.StaticNestedClass nestedObject = new OuterClass.StaticNestedClass(); ``` Objects that are instances of an inner class exist within an instance of the outer class. Consider the following classes: ``` class OuterClass { ... class InnerClass { ... } } ``` An instance of InnerClass can exist only within an instance of OuterClass and has direct access to the methods and fields of its enclosing instance. To instantiate an inner class, you must first instantiate the outer class. Then, create the inner object within the outer object with this syntax: ``` OuterClass outerObject = new OuterClass(); OuterClass.InnerClass innerObject = outerObject.new InnerClass(); ``` see: [Java Tutorial - Nested Classes](http://download.oracle.com/javase/tutorial/java/javaOO/nested.html) For completeness note that there is also such a thing as an [inner class *without* an enclosing instance](https://stackoverflow.com/questions/20468856/is-it-true-that-every-inner-class-requires-an-enclosing-instance): ``` class A { int t() { return 1; } static A a = new A() { int t() { return 2; } }; } ``` Here, `new A() { ... }` is an *inner class defined in a static context* and does not have an enclosing instance.
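A minimal runnable version of the two instantiation forms quoted above (class and member names are invented for illustration):

```java
public class OuterDemo {
    static int staticCounter = 0;
    int instanceValue = 7;

    // Static nested class: no enclosing instance, sees only static members.
    static class StaticNested {
        int read() { return staticCounter; }
    }

    // Inner class: tied to an OuterDemo instance, sees its instance members.
    class Inner {
        int read() { return instanceValue; }
    }

    public static void main(String[] args) {
        OuterDemo.StaticNested nested = new OuterDemo.StaticNested();
        OuterDemo outer = new OuterDemo();
        OuterDemo.Inner inner = outer.new Inner();
        System.out.println(nested.read() + "," + inner.read()); // prints 0,7
    }
}
```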
The [Java tutorial says](http://java.sun.com/docs/books/tutorial/java/javaOO/nested.html): > Terminology: Nested classes are > divided into two categories: static > and non-static. Nested classes that > are declared static are simply called > static nested classes. Non-static > nested classes are called inner > classes. In common parlance, the terms "nested" and "inner" are used interchangeably by most programmers, but I'll use the correct term "nested class" which covers both inner and static. Classes can be nested *ad infinitum*, e.g. class A can contain class B which contains class C which contains class D, etc. However, more than one level of class nesting is rare, as it is generally bad design. There are three reasons you might create a nested class: * organization: sometimes it seems most sensible to sort a class into the namespace of another class, especially when it won't be used in any other context * access: nested classes have special access to the variables/fields of their containing classes (precisely which variables/fields depends on the kind of nested class, whether inner or static). * convenience: having to create a new file for every new type is bothersome, again, especially when the type will only be used in one context There are **four kinds of nested class in Java**. In brief, they are: * **static class**: declared as a static member of another class * **inner class**: declared as an instance member of another class * **local inner class**: declared inside an instance method of another class * **anonymous inner class**: like a local inner class, but written as an expression which returns a one-off object Let me elaborate in more details. ## Static Classes Static classes are the easiest kind to understand because they have nothing to do with instances of the containing class. A static class is a class declared as a static member of another class. 
Just like other static members, such a class is really just a hanger on that uses the containing class as its namespace, *e.g.* the class *Goat* declared as a static member of class *Rhino* in the package *pizza* is known by the name *pizza.Rhino.Goat*. ``` package pizza; public class Rhino { ... public static class Goat { ... } } ``` Frankly, static classes are a pretty worthless feature because classes are already divided into namespaces by packages. The only real conceivable reason to create a static class is that such a class has access to its containing class's private static members, but I find this to be a pretty lame justification for the static class feature to exist. ## Inner Classes An inner class is a class declared as a non-static member of another class: ``` package pizza; public class Rhino { public class Goat { ... } private void jerry() { Goat g = new Goat(); } } ``` Like with a static class, the inner class is known as qualified by its containing class name, *pizza.Rhino.Goat*, but inside the containing class, it can be known by its simple name. However, every instance of an inner class is tied to a particular instance of its containing class: above, the *Goat* created in *jerry*, is implicitly tied to the *Rhino* instance *this* in *jerry*. Otherwise, we make the associated *Rhino* instance explicit when we instantiate *Goat*: ``` Rhino rhino = new Rhino(); Rhino.Goat goat = rhino.new Goat(); ``` (Notice you refer to the inner type as just *Goat* in the weird *new* syntax: Java infers the containing type from the *rhino* part. And, yes *new rhino.Goat()* would have made more sense to me too.) So what does this gain us? Well, the inner class instance has access to the instance members of the containing class instance. 
These enclosing instance members are referred to inside the inner class *via* just their simple names, not *via* *this* (*this* in the inner class refers to the inner class instance, not the associated containing class instance): ``` public class Rhino { private String barry; public class Goat { public void colin() { System.out.println(barry); } } } ``` In the inner class, you can refer to *this* of the containing class as *Rhino.this*, and you can use *this* to refer to its members, *e.g. Rhino.this.barry*. ## Local Inner Classes A local inner class is a class declared in the body of a method. Such a class is only known within its containing method, so it can only be instantiated and have its members accessed within its containing method. The gain is that a local inner class instance is tied to and can access the final local variables of its containing method. When the instance uses a final local of its containing method, the variable retains the value it held at the time of the instance's creation, even if the variable has gone out of scope (this is effectively Java's crude, limited version of closures). Because a local inner class is neither the member of a class or package, it is not declared with an access level. (Be clear, however, that its own members have access levels like in a normal class.) If a local inner class is declared in an instance method, an instantiation of the inner class is tied to the instance held by the containing method's *this* at the time of the instance's creation, and so the containing class's instance members are accessible like in an instance inner class. A local inner class is instantiated simply *via* its name, *e.g.* local inner class *Cat* is instantiated as *new Cat()*, not new this.Cat() as you might expect. ## Anonymous Inner Classes An anonymous inner class is a syntactically convenient way of writing a local inner class. 
Most commonly, a local inner class is instantiated at most just once each time its containing method is run. It would be nice, then, if we could combine the local inner class definition and its single instantiation into one convenient syntax form, and it would also be nice if we didn't have to think up a name for the class (the fewer unhelpful names your code contains, the better). An anonymous inner class allows both these things: ``` new *ParentClassName*(*constructorArgs*) {*members*} ``` This is an expression returning a new instance of an unnamed class which extends *ParentClassName*. You cannot supply your own constructor; rather, one is implicitly supplied which simply calls the super constructor, so the arguments supplied must fit the super constructor. (If the parent contains multiple constructors, the “simplest” one is called, “simplest” as determined by a rather complex set of rules not worth bothering to learn in detail--just pay attention to what NetBeans or Eclipse tell you.) Alternatively, you can specify an interface to implement: ``` new *InterfaceName*() {*members*} ``` Such a declaration creates a new instance of an unnamed class which extends Object and implements *InterfaceName*. Again, you cannot supply your own constructor; in this case, Java implicitly supplies a no-arg, do-nothing constructor (so there will never be constructor arguments in this case). Even though you can't give an anonymous inner class a constructor, you can still do any setup you want using an initializer block (a {} block placed outside any method). Be clear that an anonymous inner class is simply a less flexible way of creating a local inner class with one instance. If you want a local inner class which implements multiple interfaces or which implements interfaces while extending some class other than *Object* or which specifies its own constructor, you're stuck creating a regular named local inner class.
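A compilable sketch of the anonymous form described above (the interface and its member are invented for illustration):

```java
public class AnonDemo {
    interface Greeter { String greet(String name); }

    public static void main(String[] args) {
        // An unnamed class implementing Greeter; no constructor can be declared,
        // but an initializer block could do setup if it were needed.
        Greeter g = new Greeter() {
            public String greet(String name) { return "Hello, " + name; }
        };
        System.out.println(g.greet("world")); // prints Hello, world
    }
}
```

The moment this needed a second interface or its own constructor, it would have to become a named local inner class, as the last paragraph says.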
Java inner class and static nested class
[ "", "java", "inner-classes", "static-classes", "" ]
I have a start of a webapp that I wrote without using the Object Oriented features of PHP. I don't really know if it is worth it to go back and rewrite the parts I have finished. Is object oriented PHP worth rewriting all or part of a decent working app?
Given that you have an incomplete app, I would say that reworking it into an object-based app will probably be helpful. One thing to consider is the expected size of the end application. Below a certain complexity, object-based may be overkill except for the learning experience. I started out avoiding objects like the plague because my initial introduction to them in university classes was terrible. I somewhat recently had to work on a project which was implemented in PHP objects. Making the required changes was much easier than on other projects. I have since then worked in the object model frequently and find it very handy for quick creation and easier upkeep.
Just to disagree with the consensus... I would say no in most cases. Not as an academic exercise on commercial code anyway. If it's working, don't re-write it. If you have to go in to change / add bits, then refactor towards OO practices (there are lots of posts on SO about refactoring when you are changing code anyway, and not just for the sake of it). In practice, if you haven't done a lot of OOP, then you'll want to start small and get a feel for it. Once you get a handle on the basics, a good beginners' guide to Design Patterns (I like the Head First book) is very useful. Most PHP books would teach you OOP fairly poorly. They teach you about inheritance, but usually don't teach you about loose coupling and favouring composition over inheritance. A design patterns book will give you a better insight into this. PHP still has a reputation for not "doing" OO right. I don't think this is fair, but it is a reflection of the fact that it's so easy for people to get started without really grokking OOP. I would go out on a limb and say the majority (ever so slightly - call it 51%) of PHP programmers aren't comfortable with OOP. I think it's possible to do good OO in PHP, and if you're already comfortable with the language it's a great way to grow your skills. EDIT: Just to add a couple of disclaimers... 1. My comment about *most* PHP programmers not being comfortable with OOP wouldn't apply to the current SO audience! 2. Not suggesting you aren't comfortable with OOP, this applies *if* you're not
PHP Object Oriented or not?
[ "", "php", "oop", "web-applications", "" ]
Is there any way that I can programmatically create (and I guess access) hidden folders on a storage device from within C#?
``` using System.IO; string path = @"c:\folders\newfolder"; // or whatever if (!Directory.Exists(path)) { DirectoryInfo di = Directory.CreateDirectory(path); di.Attributes = FileAttributes.Directory | FileAttributes.Hidden; } ```
Yes you can. Create the directory as normal then just set the attributes on it. E.g. ``` DirectoryInfo di = new DirectoryInfo(@"C:\SomeDirectory"); //See if directory has hidden flag, if not, make hidden if ((di.Attributes & FileAttributes.Hidden) != FileAttributes.Hidden) { //Add Hidden flag di.Attributes |= FileAttributes.Hidden; } ```
Creating hidden folders
[ "", "c#", ".net", "filesystems", "" ]
Given this example: ``` <img class="a" /> <img /> <img class="a" /> <img class="a" id="active" /> <img class="a" /> <img class="a" /> <img /> <img class="a" /> ``` *(I've just used img tags as an example, that's not what it is in my code)* Using jQuery, how would you select the img tags with class "a" that are adjacent to #active (the middle four, in this example)? You could do it fairly easily by looping over all the following and preceding elements, stopping when the filter condition fails, but I was wondering if jQuery could it natively?
Here's what I came up with in the end. ``` // here's our active element. var $active = $('#active'); // here is the filter we'll be testing against. var filter = "img.a"; // $all will be the final jQuery object with all the consecutively matched elements. // start it out by populating it with the current object. var $all = $active; for ($curr = $active.prev(filter); $curr.length > 0; $curr = $curr.prev(filter)) { $all = $all.add($curr); } for ($curr = $active.next(filter); $curr.length > 0; $curr = $curr.next(filter)) { $all = $all.add($curr); } ``` For a follow-up question, I could see how this could easily be generalised by making it into a function which takes two arguments: an initial element, and a filter string - can anyone point me in the right direction to find out how to extend the jQuery object to add such a function? --- **Edit**: I've since found that the each() function would do this rather well for some purposes. In my own case it doesn't work as cleanly, since I want a single jQuery object for all those elements, but here's how you could use each for a different purpose (hiding consecutive ".a" elements, in this example): ``` $('#active') .nextAll() .each(hideConsecutive) .end() .prevAll() .each(hideConsecutive) ; function hideConsecutive(index, element) { var $e = $(element); if (!$e.is(".a")) { return false; // this stops the each function. } else { $e.hide('slow'); } } ``` -- Edit: I've put this together into a plugin now. Take a look at <http://plugins.jquery.com/project/Adjacent> if you're interested.
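For the follow-up question, the consecutive-walk itself doesn't depend on jQuery; here is a plain-JavaScript sketch of the same idea (the function name is made up), operating on an array of items and a predicate instead of a DOM:

```javascript
// Collect the run of consecutive items around `start` that satisfy `matches`.
function collectConsecutive(items, start, matches) {
  var result = [items[start]];
  // walk backwards while the filter holds
  for (var i = start - 1; i >= 0 && matches(items[i]); i--) {
    result.unshift(items[i]);
  }
  // walk forwards while the filter holds
  for (var j = start + 1; j < items.length && matches(items[j]); j++) {
    result.push(items[j]);
  }
  return result;
}

// Mirrors the <img> example: 'a' marks class="a", '-' marks no class;
// index 3 plays the role of #active.
var classes = ['a', '-', 'a', 'a', 'a', 'a', '-', 'a'];
var run = collectConsecutive(classes, 3, function (c) { return c === 'a'; });
console.log(run.join('')); // prints aaaa (the middle four)
```

Wrapping this in `$.fn` to get the plugin form is then mostly a matter of swapping the index walks for `.prev()`/`.next()` calls, as in the answer above.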
I believe looping is your best bet. You could start at #active and then move before and after it until the condition breaks, which, if the set is large enough, would be faster.
How to select consecutive elements that match a filter
[ "", "javascript", "jquery", "" ]
The question I want to ask is thus: Is casting down the inheritance tree (ie. towards a more specialiased class) from inside an abstract class excusable, or even a good thing, or is it always a poor choice with better options available? Now, the example of why I think it can be used for good. I recently implemented [Bencoding from the BitTorrent protocol](http://www.bittorrent.org/beps/bep_0003.html#the-connectivity-is-as-follows) in C#. A simple enough problem, how to represent the data. I chose to do it this way, We have an `abstract BItem` class, which provides some basic functionality, including the `static BItem Decode(string)` that is used to decode a Bencoded string into the necessary structure. There are also four derived classes, `BString`, `BInteger`, `BList` and `BDictionary`, representing the four different data types that be encoded. Now, here is the tricky part. `BList` and `BDictionary` have `this[int]` and `this[string]` accessors respectively to allow access to the array-like qualities of these data types. The potentially horrific part is coming now: ``` BDictionary torrent = (BDictionary) BItem.DecodeFile("my.torrent"); int filelength = (BInteger)((BDictionary)((BList)((BDictionary) torrent["info"])["files"])[0])["length"]; ``` Well, you get the picture... Ouch, that's hard on the eyes, not to mention the brain. So, I introduced something extra into the abstract class: ``` public BItem this[int index] { get { return ((BList)this)[index]; } } public BItem this[string index] { get { return ((BDictionary)this)[index]; } } ``` Now we could rewrite that old code as: ``` BDictionary torrent = (BDictionary)BItem.DecodeFile("my.torrent"); int filelength = (BInteger)torrent["info"]["files"][0]["length"]; ``` Wow, hey presto, MUCH more readable code. But did I just sell part of my soul for implying knowledge of subclasses into the abstract class? 
EDIT: In response to some of the answers coming in, you're completely off track for this particular question, since the structure is variable; for instance, my example of `torrent["info"]["files"][0]["length"]` is valid, but so is `torrent["announce-list"][0][0]`, and both would be in 90% of torrent files out there. Generics isn't the way to go, with this problem at least :(. Have a click through to the spec I linked; it's only 4 small dot-points large.
I think I would make the this[int] and this[string] accessors virtual and override them in BList/BDictionary. Classes where the accessors do not make sense should throw a NotSupportedException() (perhaps by having a default implementation in BItem). That makes your code work in the same way and gives you a more readable error in case you should write ``` (BInteger)torrent["info"][0]["files"]["length"]; ``` by mistake.
You really should not access any derived classes from the base class as it pretty much breaks the idea of OOP. Readability certainly goes a long way, but I wouldn't trade it for reusability. Consider the case when you'll need to add another subclass - you'll also need to update the base class accordingly.
'Looser' typing in C# by casting down the inheritance tree
[ "", "c#", "inheritance", "data-structures", "bittorrent", "" ]
The LinearGradientBrush in .net (or even in GDI+ as a whole?) seems to have a severe bug: Sometimes, it introduces artifacts. (See [here](http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.csharp/2007-01/msg01592.html) or [here](http://www.experts-exchange.com/Microsoft/Development/.NET/Visual_CSharp/Q_23329115.html) - essentially, the first line of a linear gradient is drawn in the endcolor, i.e. a gradient from White to Black will start with a Black line and then with the proper White to Black gradient) I wonder if anyone found a working workaround for this? This is a really annoying bug :-( Here is a picture of the Artifacts, note that there are 2 LinearGradientBrushes: ![gradient artifact](http://img142.imageshack.us/img142/7711/gradientartifactmm6.jpg)
I have noticed this as well when using gradient brushes. The only effective workaround I have is to always create the gradient brush rectangle 1 pixel bigger on all edges than the area that is going to be painted with it. That protects you against the issue on all four edges. The downside is that the colors used at the edges are a fraction off those you specify, but this is better than the drawing artifact problem!
You can use the nice `Inflate(int width, int height)` method on a rectangle to get the bigger version.
LinearGradientBrush Artifact Workaround?
[ "", "c#", "gdi+", "" ]
``` String s = ""; for(i=0;i<....){ s = some Assignment; } ``` or ``` for(i=0;i<..){ String s = some Assignment; } ``` I don't need to use 's' outside the loop ever again. The first option is perhaps better since a new String is not initialized each time. The second however would result in the scope of the variable being limited to the loop itself. EDIT: In response to Milhous's answer. It'd be pointless to assign the String to a constant within a loop wouldn't it? No, here 'some Assignment' means a changing value got from the list being iterated through. Also, the question isn't because I'm worried about memory management. Just want to know which is better.
### Limited Scope is Best Use your second option: ``` for ( ... ) { String s = ...; } ``` ### Scope Doesn't Affect Performance If you disassemble the code compiled from each (with the JDK's `javap` tool), you will see that the loop compiles to the exact same JVM instructions in both cases. Note also that [Brian R. Bondy's](https://stackoverflow.com/questions/110083/which-of-these-loops-is-better-code-in-terms-of-performance-garbage-collection#110095) "Option #3" is identical to Option #1. Nothing extra is added or removed from the stack when using the tighter scope, and the same data are used on the stack in both cases. ### Avoid Premature Initialization The only difference between the two cases is that, in the first example, the variable `s` is unnecessarily initialized. This is a separate issue from the location of the variable declaration. This adds two wasted instructions (to load a string constant and store it in a stack frame slot). A good static analysis tool will warn you that you are never reading the value you assign to `s`, and a good JIT compiler will probably elide it at runtime. You could fix this simply by using an empty declaration (i.e., `String s;`), but this is considered bad practice and has another side-effect discussed below. Often a bogus value like `null` is assigned to a variable simply to hush a compiler error that a variable is read without being initialized. This error can be taken as a hint that the variable scope is too large, and that it is being declared before it is needed to receive a valid value. Empty declarations force you to consider every code path; don't ignore this valuable warning by assigning a bogus value. ### Conserve Stack Slots As mentioned, while the JVM instructions are the same in both cases, there is a subtle side-effect that makes it best, at a JVM level, to use the most limited scope possible. This is visible in the "local variable table" for the method. 
Consider what happens if you have multiple loops, with the variables declared in unnecessarily large scope:

```
void x(String[] strings, Integer[] integers) {
    String s;
    for (int i = 0; i < strings.length; ++i) {
        s = strings[i];
        ...
    }

    Integer n;
    for (int i = 0; i < integers.length; ++i) {
        n = integers[i];
        ...
    }
}
```

The variables `s` and `n` could be declared inside their respective loops, but since they are not, the compiler uses two "slots" in the stack frame. If they were declared inside the loop, the compiler can reuse the same slot, making the stack frame smaller.

### What Really Matters

However, most of these issues are immaterial. A good JIT compiler will see that it is not possible to read the initial value you are wastefully assigning, and optimize the assignment away. Saving a slot here or there isn't going to make or break your application.

The important thing is to make your code readable and easy to maintain, and in that respect, using a limited scope is clearly better. The smaller scope a variable has, the easier it is to comprehend how it is used and what impact any changes to the code will have.
In *theory*, it's a waste of resources to declare the string inside the loop. In *practice*, however, both of the snippets you presented will compile down to the same code (declaration outside the loop). So, if your compiler does any amount of optimization, there's no difference.
Which loop has better performance? Why?
[ "", "java", "performance", "string", "garbage-collection", "" ]
I'm fully aware that set division can be accomplished through a series of other operations, so my question is: Is there a command for set division in SQL?
<http://vadimtropashko.files.wordpress.com/2007/02/ch3.pdf> From Page 32: > Relational Division is not a fundamental operator. It can be expressed in terms of projection, Cartesian product, and set difference. So, no. :)
[Here is a nice explanation using relational algebra syntax](http://en.wikipedia.org/wiki/Relational_algebra#Division_.28.C3.B7.29).

Given tables `sailors`, `boats` and `reserves` (examples from Ramakrishnan & Gehrke's "Database Management Systems") you can compute sailors who have reserved all boats with the following query:

```
SELECT name
FROM sailors
WHERE Sid NOT IN (
    -- A sailor is disqualified if by attaching a boat,
    -- we obtain a tuple <sailor, boat> that is not in reserves
    SELECT s.Sid
    FROM sailors s, boats b
    WHERE (s.Sid, b.Bid) NOT IN (
        SELECT Sid, Bid FROM reserves
    )
);

-- Alternatively:
SELECT name
FROM sailors s
WHERE NOT EXISTS (
    -- Not reserved boats
    (SELECT bid FROM boats)
    EXCEPT
    (SELECT r.bid FROM reserves r WHERE r.sid = s.sid)
);
```
Is there set division in SQL?
[ "", "sql", "database", "relational-algebra", "set-theory", "" ]
When using Linq to SQL and stored procedures, the class generated to describe the proc's result set uses char properties to represent char(1) columns in the SQL proc. I'd rather these be strings - is there any easy way to make this happen?
You could modify the {database}.designer.cs file. I don't have one handy to check, but I believe it's fairly straightforward --- you'll just have to plow through a lot of code, and remember to re-apply the change if you ever regenerate it.

Alternately, you could create your own class and handle it in the select. For example, given the LINQ generated class:

```
class MyTable
{
    int MyNum {get; set;}
    int YourNum {get; set;}
    char OneChar {get; set;}
}
```

you could easily create:

```
class MyFixedTable
{
    int MyNum {get; set;}
    int YourNum {get; set;}
    string OneChar {get; set;}

    public MyFixedTable(MyTable t)
    {
        this.MyNum = t.MyNum;
        this.YourNum = t.YourNum;
        this.OneChar = new string(t.OneChar, 1);
    }
}
```

Then instead of writing:

```
var q = from t in db.MyTable
        select t;
```

write

```
var q = from t in db.MyTable
        select new MyFixedTable(t);
```
```
var whatever = from x in something
               select new
               {
                   yourString = Char.ToString(x.theChar)
               };
```
When using Linq to SQL with stored procedures, must char(1) columns be returned as c# chars?
[ "", "c#", ".net", "visual-studio", "linq-to-sql", "" ]
I've never had much need for programming with databases. Since their use is so widespread it seems like a good thing for me to learn. SQL seems like the place to start, possibly [SQLite](http://www.sqlite.org/) and maybe the [Python bindings](http://pysqlite.org/). What would you recommend for someone new to this? Libraries, tools and project ideas are all welcome.
Structured Query Language (SQL) is the language used to talk to database management systems (DBMS). While it's a good thing to learn, it's probably best to do it with a project in mind that you'd like to do. It's funny you say you've never had a need, because I'm the opposite: almost every program I've ever written has used a database of some sort. The vast majority (mostly web-based) revolve around using a database.

* Learn about relations and database architecture. This means how to structure your tables and make foreign keys and relations. For example, you might have a movies database. In it, you store information about the Movies, Studios that released the movies, and the Actors in the movies. Each of these becomes a table. Each Movie is released by one Studio. Since you don't want to duplicate the studio information (address, etc.) in each Movie entry, you store a relation to it, so each Movie item contains a reference to a Studio item. This is called a one-to-many relationship (one studio has many movies). Likewise, you don't want to store Actor information for each Movie. But one Actor can be in many Movies, so this is stored as a many-to-many relationship.
* Learn SQL itself. [SQLCourse](http://sqlcourse.com) is a good place to get started, but there are many other books and resources. SQL is a standard, but each RDBMS has its own vendor-specific ways of doing certain things and other limitations (for example, some systems don't support sub-queries, and there are several different syntaxes for limiting the number of rows returned). It's important to learn the syntax for the one you're using (e.g., don't learn Oracle syntax and then try to use it in MySQL), but they are similar enough that the concepts are the same.
* Tools depend on the DBMS you use. MySQL is a pretty popular database; lots of tools are available, and lots of books. SQLite and PostgreSQL are also quite popular, and also free/open-source.
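The movies/studios/actors schema described above can be sketched concretely with SQLite through Python's built-in `sqlite3` module (both are mentioned in the question). Table and column names here are illustrative assumptions, not anything prescribed by the answer:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE studio (id INTEGER PRIMARY KEY, name TEXT);
    -- one-to-many: each movie references exactly one studio
    CREATE TABLE movie  (id INTEGER PRIMARY KEY, title TEXT,
                         studio_id INTEGER REFERENCES studio(id));
    CREATE TABLE actor  (id INTEGER PRIMARY KEY, name TEXT);
    -- many-to-many: a junction table links movies and actors
    CREATE TABLE movie_actor (
        movie_id INTEGER REFERENCES movie(id),
        actor_id INTEGER REFERENCES actor(id),
        PRIMARY KEY (movie_id, actor_id));
""")
con.execute("INSERT INTO studio VALUES (1, 'Example Studio')")
con.execute("INSERT INTO movie VALUES (1, 'Example Movie', 1)")

# Join the two tables to resolve the one-to-many relation
rows = con.execute("""
    SELECT m.title, s.name
    FROM movie m JOIN studio s ON s.id = m.studio_id
""").fetchall()
print(rows)
```

Running this prints the joined row, showing how the studio data lives in one place and is referenced rather than duplicated.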
If you can, you really want to find someone who knows how to use it, and pick their brains. That's because there are a lot of important principles (e.g. 3rd normal form) which are a lot easier to learn through discussion rather than from books. If you want to teach yourself, you should learn the syntax for doing basic selects, joins, updates, deletes, and group by queries. You should also learn the "Swiss army knife" of selects, the CASE statement. Too many people don't. Many of the tutorials recommended in this thread will do that. Then you need to try to solve SQL problems. I'm sure that [Joe Celko's SQL Puzzles and Answers](https://rads.stackoverflow.com/amzn/click/com/1558604537) is a good source of them, though it may be a little advanced. This will let you actually write SQL. But you still need to learn how to organize a database. Which for most purposes means that you really need to learn what 3rd normal form looks like. You don't have to be able to give a formal definition of it, just recognize it when you see it, and know how to adjust something to be in that format. Lots of references will explain it, but you won't know if you're reading them correctly. This is where it really, *really* helps to have access to someone who can look at a table layout and tell you, "That's right" vs "That's wrong, here's what needs to be changed." Failing all else, you could post a question here with a proposed layout. But a back and forth discussion with a live person would still be preferable IMO.
Tips for getting started with SQL?
[ "", "sql", "" ]
I am working on an application and one design approach involves extremely heavy use of the `instanceof` operator. While I know that OO design generally tries to avoid using `instanceof`, that is a different story and this question is purely related to performance. I was wondering if there is any performance impact? Is it just as fast as `==`?

For example, I have a base class with 10 subclasses. In a single function that takes the base class, I do checks for whether the class is an instance of the subclass and carry out some routine. One of the other ways I thought of solving it was to use a "type id" integer primitive instead, and use a bitmask to represent categories of the subclasses, and then just do a bit mask comparison of the subclasses' "type id" to a constant mask representing the category.

Is `instanceof` somehow optimized by the JVM to be faster than that? I want to stick to Java but the performance of the app is critical. It would be cool if someone that has been down this road before could offer some advice. Am I nitpicking too much or focusing on the wrong thing to optimize?
Modern JVM/JIT compilers have removed the performance hit of most of the traditionally "slow" operations, including instanceof, exception handling, reflection, etc. As Donald Knuth wrote, "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." The performance of instanceof probably won't be an issue, so don't waste your time coming up with exotic workarounds until you're sure that's the problem.
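To make the trade-off concrete, here is a minimal sketch with a hypothetical `Shape` hierarchy (not from the question's code), comparing the plain `instanceof` check against the hand-rolled "type id" dispatch the question proposes. Both are fast on modern JVMs; the point is that the simpler one is also the clearer one:

```java
public class InstanceofSketch {
    static abstract class Shape { abstract int typeId(); }
    static class Circle extends Shape { int typeId() { return 1; } }
    static class Square extends Shape { int typeId() { return 2; } }

    // Dispatch via instanceof checks
    static String viaInstanceof(Shape s) {
        if (s instanceof Circle) return "circle";
        if (s instanceof Square) return "square";
        return "unknown";
    }

    // Dispatch via a hand-maintained integer "type id"
    static String viaTypeId(Shape s) {
        switch (s.typeId()) {
            case 1:  return "circle";
            case 2:  return "square";
            default: return "unknown";
        }
    }

    public static void main(String[] args) {
        // Both dispatch styles must agree for every shape
        for (Shape s : new Shape[] { new Circle(), new Square() }) {
            if (!viaInstanceof(s).equals(viaTypeId(s))) throw new AssertionError();
        }
        System.out.println("both dispatch styles agree");
    }
}
```

If profiling ever shows the dispatch itself to be hot, this structure makes it easy to swap one style for the other and measure.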
# Approach

I wrote [a benchmark program](https://github.com/michaeldorner/instanceofBenchmark) to evaluate different implementations:

1. `instanceof` implementation (as reference)
2. object-oriented via an abstract class and an `@Override`-n test method
3. using an own type implementation
4. `getClass() == _.class` implementation

I used [jmh](http://openjdk.java.net/projects/code-tools/jmh/) to run the benchmark with 100 warmup calls, 1000 measured iterations, and with 10 forks. So each option was measured 10,000 times, which took 12:18:57 to run the whole benchmark on my MacBook Pro with macOS 10.12.4 and Java 1.8. The benchmark measures the average time of each option. For more details see [my implementation on GitHub](https://github.com/michaeldorner/instanceofBenchmark/blob/master/src/main/java/de/michaeldorner/MyBenchmark.java).

For the sake of completeness: There is a [previous version of this answer and my benchmark](https://stackoverflow.com/revisions/26514984/10).

# Results

```
| Operation  | Runtime in nanoseconds per operation | Relative to instanceof |
|------------|--------------------------------------|------------------------|
| INSTANCEOF | 39,598 ± 0,022 ns/op                 | 100,00 %               |
| GETCLASS   | 39,687 ± 0,021 ns/op                 | 100,22 %               |
| TYPE       | 46,295 ± 0,026 ns/op                 | 116,91 %               |
| OO         | 48,078 ± 0,026 ns/op                 | 121,42 %               |
```

# tl;dr

In Java 1.8 `instanceof` is the fastest approach, although `getClass()` is very close.
The performance impact of using instanceof in Java
[ "", "java", "performance", "instanceof", "" ]
How do I get the id of my Java process? I know there are several platform-dependent hacks, but I would prefer a more generic solution.
There exists no platform-independent way that can be guaranteed to work in all JVM implementations. `ManagementFactory.getRuntimeMXBean().getName()` looks like the best (closest) solution, and typically includes the PID. It's short, and *probably* works in every implementation in wide use. On Linux and Windows it returns a value like `"12345@hostname"` (`12345` being the process id). Beware though that [according to the docs](http://docs.oracle.com/javase/6/docs/api/java/lang/management/RuntimeMXBean.html#getName%28%29), there are no guarantees about this value:

> Returns the name representing the running Java virtual machine. The returned name string can be any arbitrary string and a Java virtual machine implementation can choose to embed platform-specific useful information in the returned name string. Each running virtual machine could have a different name.

**In Java 9** the new [process API](https://docs.oracle.com/javase/9/docs/api/java/lang/ProcessHandle.html) can be used:

```
long pid = ProcessHandle.current().pid();
```
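Since the `"pid@hostname"` shape is what common JVMs happen to return but the spec does not guarantee it, pre-Java-9 code should parse it defensively. A minimal sketch with a fallback value (the helper name is mine, not part of any API):

```java
import java.lang.management.ManagementFactory;

public class PidSketch {
    // Best-effort PID extraction for pre-Java-9 JVMs. The "pid@hostname"
    // format is common but not guaranteed, hence the fallback.
    static long pidOrDefault(String runtimeName, long fallback) {
        int at = runtimeName.indexOf('@');
        if (at < 1) return fallback;          // no "pid@" prefix present
        try {
            return Long.parseLong(runtimeName.substring(0, at));
        } catch (NumberFormatException e) {
            return fallback;                  // prefix was not numeric
        }
    }

    public static void main(String[] args) {
        String name = ManagementFactory.getRuntimeMXBean().getName();
        System.out.println("runtime name: " + name);
        System.out.println("pid guess: " + pidOrDefault(name, -1));
    }
}
```

On Java 9 and later, prefer `ProcessHandle.current().pid()` and keep this only as a fallback path.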
You could use [JNA](https://github.com/twall/jna). Unfortunately there is no common JNA API to get the current process ID yet, but each platform is pretty simple:

## Windows

Make sure you have `jna-platform.jar` then:

```
int pid = Kernel32.INSTANCE.GetCurrentProcessId();
```

## Unix

Declare:

```
private interface CLibrary extends Library {
    CLibrary INSTANCE = (CLibrary) Native.loadLibrary("c", CLibrary.class);
    int getpid();
}
```

Then:

```
int pid = CLibrary.INSTANCE.getpid();
```

---

# Java 9

Under Java 9 the new [process API](https://docs.oracle.com/javase/9/docs/api/java/lang/ProcessHandle.html) can be used to get the current process ID. First you grab a handle to the current process, then query the PID:

```
long pid = ProcessHandle.current().pid();
```
How can a Java program get its own process ID?
[ "", "java", "pid", "" ]
My question is related to the command pattern, where we have the following abstraction (C# code):

```
public interface ICommand
{
    void Execute();
}
```

Let's take a simple concrete command, which aims to delete an entity from our application. A `Person` instance, for example. I'll have a `DeletePersonCommand`, which implements `ICommand`. This command needs the `Person` to delete as a parameter, in order to delete it when the `Execute` method is called.

What is the best way to manage parametrized commands? How to pass parameters to commands, before executing them?
You'll need to associate the parameters with the command object, either by constructor or setter injection (or equivalent). Perhaps something like this:

```
public class DeletePersonCommand : ICommand
{
    private Person personToDelete;

    public DeletePersonCommand(Person personToDelete)
    {
        this.personToDelete = personToDelete;
    }

    public void Execute()
    {
        doSomethingWith(personToDelete);
    }
}
```
Passing the data in via a constructor or setter works, but requires the creator of the command to know the data the command needs...

The "context" idea is really good, and I was working on (an internal) framework that leveraged it a while back. If you set up your controller (UI components that interact with the user, CLI interpreting user commands, servlet interpreting incoming parameters and session data, etc) to provide named access to the available data, commands can directly ask for the data they want.

I really like the separation a setup like this allows. Think about layering as follows:

```
User Interface (GUI controls, CLI, etc)
    |
    [syncs with/gets data]
    V
Controller / Presentation Model
    |                  ^
[executes]             |
    V                  |
Commands --------> [gets data by name]
    |
[updates]
    V
Domain Model
```

If you do this "right", the same commands and presentation model can be used with any type of user interface.

Taking this a step further, the "controller" in the above is pretty generic. The UI controls only need to know the *name* of the command they'll invoke -- they (or the controller) don't need to have any knowledge of how to create that command or what data that command needs. That's the real advantage here.

For example, you could hold the name of the command to execute in a Map. Whenever the component is "triggered" (usually an actionPerformed), the controller looks up the command name, instantiates it, calls execute, and pushes it on the undo stack (if you use one).
Command Pattern : How to pass parameters to a command?
[ "", "c#", "design-patterns", "command-pattern", "" ]
This is one of the possible ways I came up with:

```
struct RetrieveKey
{
    template <typename T>
    typename T::first_type operator()(T keyValuePair) const
    {
        return keyValuePair.first;
    }
};

map<int, int> m;
vector<int> keys;

// Retrieve all keys
transform(m.begin(), m.end(), back_inserter(keys), RetrieveKey());

// Dump all keys
copy(keys.begin(), keys.end(), ostream_iterator<int>(cout, "\n"));
```

Of course, we can also retrieve all values from the map by defining another functor **RetrieveValues**.

Is there any other way to achieve this easily? (I'm always wondering why `std::map` does not include a member function for us to do so.)
While your solution should work, it can be difficult to read depending on the skill level of your fellow programmers. Additionally, it moves functionality away from the call site, which can make maintenance a little more difficult. I'm not sure if your goal is to get the keys into a vector or print them to cout so I'm doing both. You may try something like this:

```
std::map<int, int> m;
std::vector<int> key, value;

for (std::map<int, int>::iterator it = m.begin(); it != m.end(); ++it)
{
    key.push_back(it->first);
    value.push_back(it->second);
    std::cout << "Key: " << it->first << std::endl;
    std::cout << "Value: " << it->second << std::endl;
}
```

Or even simpler, if you are using the Boost library:

```
map<int, int> m;
pair<int, int> me; // what a map<int, int> is made of
vector<int> v;

BOOST_FOREACH(me, m)
{
    v.push_back(me.first);
    cout << me.first << "\n";
}
```

Personally, I like the BOOST_FOREACH version because there is less typing and it is very explicit about what it is doing.
```
// c++0x too
std::map<int, int> mapints;
std::vector<int> vints;

for (auto const& imap : mapints)
    vints.push_back(imap.first);
```
How to retrieve all keys (or values) from a std::map and put them into a vector?
[ "", "c++", "dictionary", "stl", "stdmap", "" ]
This is from an example accompanying the agsXMPP .Net assembly. I've read up on delegates, but am not sure how that fits in with this line of code (which waits for the logon to occur, and then sends a message). I guess what I'm looking for is an understanding of why `delegate(o)` accomplishes this, in the kind of simple terms I can understand.

```
xmpp.OnLogin += delegate(object o)
{
    xmpp.Send(new Message(new Jid(JID_RECEIVER),
                          MessageType.chat,
                          "Hello, how are you?"));
};
```
The `delegate(object o){..}` tells the compiler to package up whatever is inside the brackets as an object to be executed later, in this case when `OnLogin` is fired. Without the `delegate()` statement, the compiler would think you are trying to execute an action in the middle of an assignment statement and give you errors.
It's exactly the same as

```
xmpp.OnLogin += new EventHandler(MyMethod);
```

Where MyMethod is

```
public void MyMethod(object o)
{
    xmpp.Send(new Message(new Jid(JID_RECEIVER), MessageType.chat, "Hello, how are you?"));
}
```
What is this delegate call doing in this line of code (C#)?
[ "", "c#", "delegates", "" ]
I've always wondered this - why can't you declare variables after a case label in a switch statement? In C++ you can declare variables pretty much anywhere (and declaring them close to first use is obviously a good thing) but the following still won't work:

```
switch (val)
{
case VAL:
    // This won't work
    int newVal = 42;
    break;
case ANOTHER_VAL:
    ...
    break;
}
```

The above gives me the following error (MSC):

> initialization of 'newVal' is skipped by 'case' label

This seems to be a limitation in other languages too. Why is this such a problem?
`Case` statements are only **labels**. This means the compiler will interpret this as a jump directly to the label. In C++, the problem here is one of scope. Your curly brackets define the scope as everything inside the `switch` statement. This means that you are left with a scope where a jump will be performed further into the code, skipping the initialization.

The correct way to handle this is to define a scope specific to that `case` statement and define your variable within it:

```
switch (val)
{
case VAL:
{
    // This will work
    int newVal = 42;
    break;
}
case ANOTHER_VAL:
    ...
    break;
}
```
This question was originally tagged as [c](/questions/tagged/c "show questions tagged 'c'") and [c++](/questions/tagged/c%2b%2b "show questions tagged 'c++'") at the same time. The original code is indeed invalid in both C and C++, but for completely different unrelated reasons.

* In C++ this code is invalid because the `case ANOTHER_VAL:` label jumps into the scope of variable `newVal` bypassing its initialization. Jumps that bypass initialization of automatic objects are illegal in C++. This side of the issue is correctly addressed by most answers.
* However, in C language bypassing variable initialization is not an error. Jumping into the scope of a variable over its initialization is legal in C. It simply means that the variable is left uninitialized. The original code does not compile in C for a completely different reason. Label `case VAL:` in the original code is attached to the declaration of variable `newVal`. In C language declarations are not statements. They cannot be labeled. And this is what causes the error when this code is interpreted as C code.

```
switch (val)
{
case VAL:             /* <- C error is here */
    int newVal = 42;
    break;
case ANOTHER_VAL:     /* <- C++ error is here */
    ...
    break;
}
```

Adding an extra `{}` block fixes both C++ and C problems, even though these problems happen to be very different. On the C++ side it restricts the scope of `newVal`, making sure that `case ANOTHER_VAL:` no longer jumps into that scope, which eliminates the C++ issue. On the C side that extra `{}` introduces a compound statement, thus making the `case VAL:` label apply to a statement, which eliminates the C issue.

* In the C case the problem can be easily solved without the `{}`. Just add an empty statement after the `case VAL:` label and the code will become valid

```
switch (val)
{
case VAL:;            /* Now it works in C! */
    int newVal = 42;
    break;
case ANOTHER_VAL:
    ...
    break;
}
```

Note that even though it is now valid from C point of view, it remains invalid from C++ point of view.

* Symmetrically, in the C++ case the problem can be easily solved without the `{}`. Just remove the initializer from the variable declaration and the code will become valid

```
switch (val)
{
case VAL:
    int newVal;
    newVal = 42;
    break;
case ANOTHER_VAL:     /* Now it works in C++! */
    ...
    break;
}
```

Note that even though it is now valid from C++ point of view, it remains invalid from C point of view.

Starting from C23 all labels in C language will be interpreted as labelling implied null statements ([N2508](https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2508.pdf)), i.e. the issue with being unable to place labels in front of declarations in C will no longer exist and the above `;`-based fix will no longer be necessary.
Why can't variables be declared in a switch statement?
[ "", "c++", "switch-statement", "" ]
What is the correct way to import a C++ class from a DLL? We're using Visual C++. There's the dllexport/exports.def+LoadLibrary+GetProcAddress trifecta, but it doesn't work on C++ classes, only C functions. Is this due to C++ name-mangling? How do I make this work?
Found the solution at <http://www.codeproject.com/KB/DLL/XDllPt4.aspx> Thanks for your efforts guys & girls
You need to add the following:

```
extern "C"
{
    ...
}
```

to avoid function name mangling. You might also consider writing two simple C functions:

```
SomeClass* CreateObjectInstance()
{
    return new SomeClass();
}

void ReleaseObject(SomeClass* someClass)
{
    delete someClass;
}
```

By only using those functions you can afterwards add/change the functionality of your object creation/deletion. This is sometimes called a Factory.
Dynamically importing a C++ class from a DLL
[ "", "c++", "windows", "dll", "import", "" ]
When defining a method on a class in Python, it looks something like this: ``` class MyClass(object): def __init__(self, x, y): self.x = x self.y = y ``` But in some other languages, such as C#, you have a reference to the object that the method is bound to with the "this" keyword without declaring it as an argument in the method prototype. Was this an intentional language design decision in Python or are there some implementation details that require the passing of "self" as an argument?
I like to quote Peters' Zen of Python. "Explicit is better than implicit." In Java and C++, '`this.`' can be deduced, except when you have variable names that make it impossible to deduce. So you sometimes need it and sometimes don't. Python elects to make things like this explicit rather than based on a rule. Additionally, since nothing is implied or assumed, parts of the implementation are exposed. `self.__class__`, `self.__dict__` and other "internal" structures are available in an obvious way.
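A small sketch (with a hypothetical `Point` class, not from the question) makes both points concrete: `self` is just an ordinary first argument, and the internals it exposes, such as `self.__dict__`, are plainly visible:

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def moved(self, dx, dy):
        # "self" is just the first positional parameter; the explicit
        # reference makes every attribute access unambiguous.
        return Point(self.x + dx, self.y + dy)

p = Point(1, 2)
q1 = p.moved(3, 4)         # usual bound-method call
q2 = Point.moved(p, 3, 4)  # identical call, passing self explicitly
print((q1.x, q1.y), (q2.x, q2.y), p.__dict__)
```

The two call forms produce the same result, and `p.__dict__` shows the instance's attributes in the obvious, explicit way the answer describes.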
It's to minimize the difference between methods and functions. It allows you to easily generate methods in metaclasses, or add methods at runtime to pre-existing classes.

e.g.

```
>>> class C:
...     def foo(self):
...         print("Hi!")
...
>>>
>>> def bar(self):
...     print("Bork bork bork!")
...
>>>
>>> c = C()
>>> C.bar = bar
>>> c.bar()
Bork bork bork!
>>> c.foo()
Hi!
>>>
```

It also (as far as I know) makes the implementation of the python runtime easier.
Why do you need to explicitly have the "self" argument in a Python method?
[ "", "python", "oop", "methods", "self", "" ]
What's a quick and easy way to view and edit ID3 tags (artist, album, etc.) using C#?
Thirding [TagLib Sharp](https://github.com/mono/taglib-sharp).

```
TagLib.File f = TagLib.File.Create(path);
f.Tag.Album = "New Album Title";
f.Save();
```
[TagLib Sharp](http://www.novell.com/products/linuxpackages/opensuse11.1/taglib-sharp.html) is pretty popular.

As a side note, if you wanted to take a quick and dirty peek at doing it yourself, here is a C# snippet I found to read an mp3's tag info.

```
class MusicID3Tag
{
    public byte[] TAGID = new byte[3];    // 3
    public byte[] Title = new byte[30];   // 30
    public byte[] Artist = new byte[30];  // 30
    public byte[] Album = new byte[30];   // 30
    public byte[] Year = new byte[4];     // 4
    public byte[] Comment = new byte[30]; // 30
    public byte[] Genre = new byte[1];    // 1
}

string filePath = @"C:\Documents and Settings\All Users\Documents\My Music\Sample Music\041105.mp3";

using (FileStream fs = File.OpenRead(filePath))
{
    if (fs.Length >= 128)
    {
        MusicID3Tag tag = new MusicID3Tag();
        fs.Seek(-128, SeekOrigin.End);
        fs.Read(tag.TAGID, 0, tag.TAGID.Length);
        fs.Read(tag.Title, 0, tag.Title.Length);
        fs.Read(tag.Artist, 0, tag.Artist.Length);
        fs.Read(tag.Album, 0, tag.Album.Length);
        fs.Read(tag.Year, 0, tag.Year.Length);
        fs.Read(tag.Comment, 0, tag.Comment.Length);
        fs.Read(tag.Genre, 0, tag.Genre.Length);

        string theTAGID = Encoding.Default.GetString(tag.TAGID);
        if (theTAGID.Equals("TAG"))
        {
            string Title = Encoding.Default.GetString(tag.Title);
            string Artist = Encoding.Default.GetString(tag.Artist);
            string Album = Encoding.Default.GetString(tag.Album);
            string Year = Encoding.Default.GetString(tag.Year);
            string Comment = Encoding.Default.GetString(tag.Comment);
            string Genre = Encoding.Default.GetString(tag.Genre);

            Console.WriteLine(Title);
            Console.WriteLine(Artist);
            Console.WriteLine(Album);
            Console.WriteLine(Year);
            Console.WriteLine(Comment);
            Console.WriteLine(Genre);
            Console.WriteLine();
        }
    }
}
```
View/edit ID3 data for MP3 files
[ "", "c#", ".net", "mp3", "id3", "" ]
I'm looking at improving the performance of some SQL, currently CTEs are being used and referenced multiple times in the script. Would I get improvements using a table variable instead? (Can't use a temporary table as the code is within functions).
You'll really have to performance test - There is no Yes/No answer. As per Andy Living's post above links to, a CTE is just shorthand for a query or subquery. If you are calling it twice or more in the same function, you might get better performance if you fill a table variable and then join to/select from that. However, as table variables take up space somewhere, and don't have indexes/statistics (With the exception of any declared primary key on the table variable) there's no way of saying which will be faster. They both have costs and savings, and which is the best way depends on the data they pull in and what they do with it. I've been in your situation, and after testing for speed under various conditions - Some functions used CTEs, and others used table variables.
A **CTE** is not much more than **syntactic sugar**. It **enhances readability** and **allows you to avoid repetition**. Just think of it as a placeholder for the actual statement specified in the `WITH()`-clause. The engine will replace any occurrence of the CTE's name in your query with this statement (quite similar to a view). This is the meaning of *inline*.

**Compared to a previously filled table** (declared or created) you'll find **advantages**:

* usable in *ad-hoc* queries (functions, views)
* no unexpected side effects (most narrow scope)

...and **disadvantages**:

* You cannot use the CTE's result in different statements
* You cannot use indexes or statistics to optimize your CTE's set (although it will implicitly use existing indexes and statistics of the targeted objects - if appropriate).

In **terms of performance** a *persisted* set (declared or created table) can be (much!) better in some cases, but it forces you into procedural code. You will have to *race your horses* to find out which is better...

**Example: Various approaches to do the same**

The following simple (rather useless) example describes a set of user tables together with their columns.
I use various different approaches to tell SQL-Server what I want. Try this with "include actual execution plan":

```
USE master; --in my case the master database has just 5 "user tables", you can use any other DB of course
GO
--simple join, first the small set joining to the large set
SELECT o.name AS TableName
      ,c.name AS ColumnName
FROM sys.objects o
INNER JOIN sys.columns c ON c.object_id=o.object_id
WHERE o.type='U';
GO
--simple join "the other way round" with the filter as part of the ON-clause
SELECT o.name AS TableName
      ,c.name AS ColumnName
FROM sys.columns c
INNER JOIN sys.objects o ON c.object_id=o.object_id
                        AND o.type='U';
GO
--join from the large set with a sub-query to the small set
SELECT o.name AS TableName
      ,c.name AS ColumnName
FROM sys.columns c
INNER JOIN
(
    SELECT o.*
    FROM sys.objects o
    WHERE o.type='U' --user tables
) o ON c.object_id=o.object_id;
GO
--join for large to small with a row-wise APPLY
SELECT o.name AS TableName
      ,c.name AS ColumnName
FROM sys.columns c
CROSS APPLY
(
    SELECT o.*
    FROM sys.objects o
    WHERE o.type='U' --user tables
      AND o.object_id=c.object_id
) o;
GO
--use a CTE to "pre-filter" the small set
WITH cte AS
(
    SELECT o.*
    FROM sys.objects o
    WHERE o.type='U' --user tables
)
SELECT cte.name AS TableName
      ,c.name AS ColumnName
FROM sys.columns c
INNER JOIN cte ON c.object_id=cte.object_id;
GO
```

Now look at the result and at the execution plans:

* All queries return the same result.
* All queries produce the same execution plan

Important hint: This might differ on your machine!

**Why is this?**

T-SQL is a **declarative language**. Your statement is a description of **WHAT** you want to retrieve. It is not your job to tell the engine **HOW** this is done. SQL-Server's extremely smart engine will find the best way to get the set you asked for. In the case above all result descriptions point to the same goal. The engine can *deduce* this from various statements and finds the same plan for all of them.
**Well, is it just a matter of taste?**

In a way... There are some important things to keep in mind:

* There is no reason for the engine to compute the CTE's result *before* the rest (although the statement might *look* so). Therefore it is **wrong** to describe a CTE as *something like a temp table*...
* In other words: The **visible order** of your statement does **not predict the actual order of execution**!
* The smart engine will reach its limits with complexity and nesting level. Imagine various `VIEW`s, all using `CTE`s and calling each other...
* There are cases where the engine really f\*\*s up. I remember a case where a CTE did not much more than a `TRY_CAST`. The idea was to ensure valid values in the query below. But the engine thought "Oh, just a CAST, not expensive!" and included the actual CAST in the execution plan at a higher position. I remember another case where the engine performed an expensive operation against millions of rows (unnecessarily, the final result was filtered to a tiny set), just because the actual order of execution was not as expected.

**Okay... So when should I use a CTE?**

The following points are good reasons to use a CTE:

* A CTE can help you to avoid repeated sub-queries.
* A CTE can be used multiple times within your statement, e.g. within a `JOIN` with a dynamic behavior depending on the actual row count.
* You can use multiple CTEs within one statement and you can use the result of one CTE within a later CTE.
* There are recursive (or better *iterative*) CTEs.
* Sometimes I used *single-row* CTEs to define / pre-compute *variables* later used in the query. Things you would do with declared variables in procedural T-SQL. You can use a `CROSS JOIN` to get them into your query easily.
* And also very nice: the [updatable CTE](https://stackoverflow.com/a/11562724/5089204) allows for very easy-to-read statements, and the same applies [for `DELETE`](https://stackoverflow.com/a/22762991/5089204).
As above: Nothing one could not do without the CTE, but it is far better to read (I really like *speaking names*). **Final hints** Well, there are cases where ugly code performs better :-) It is always good to have clean and readable code. A CTE will help you with this. So give it a try. If the performance is bad, get into depth, look at the execution plans and try to find a reason where the engine might decide wrongly. In most cases it is a bad idea to try to outsmart the engine with hints such as `FORCE ORDER` (but it can help). **UPDATE** I was asked to point to advantages and disadvantages specifically: Uhm, *technically* there are **no real advantages or disadvantages**. Disregarding *recursive CTEs* there's nothing one couldn't solve without a CTE. *Advantages* The main *advantage* is **readability and maintainability**. Sometimes a CTE can save hundreds of lines of code. Instead of repeating a huge sub-query one can use just a name as a *variable*. Corrections to the sub-query can be made in just one place. The CTE can serve in ad-hoc queries and make your life easier. *Disadvantages* One *possible disadvantage* is that it's very easy, even for experienced developers, to mistake a CTE for a temp table, assume that the visible order of steps will be the same as the actual order of execution, and stumble into **unexpected results or even errors**. And - of course :-) - the strange `wrong syntax` error you'll see when you write a CTE after another statement without a separating `;`. That's why many people tend to use `;WITH`.
What are the advantages/disadvantages of using a CTE?
[ "", "sql", "sql-server", "common-table-expression", "" ]
Hey all, my Computational Science course this semester is entirely in Java. I was wondering if there was a good/preferred set of tools to use in ubuntu. Currently I use gedit with a terminal running in the bottom, but I'd like an API browser. I've considered Eclipse, but it seems to bloated and unfriendly for quick programs.
Java editing tends to go one of two ways; people either stick with a simple editor and use a terminal to compile/run their programs, or they use a big IDE with a zillion features. I usually go the simple route and just use a plain text editor and terminal, but there's still a lot to be said for IDEs. This is especially true when learning the language, since hitting "spam." brings up a dropdown with all of the fields and methods of the spam object. And this is not just useful to a beginner; it remains useful later on when using unfamiliar libraries and third party modules. IDEs also have useful tools such as GUI builders which become invaluable when doing professional Java work. So although I typically prefer a simple editor/terminal combo, I highly recommend trying out an IDE such as Eclipse or Netbeans to see how you like it and so that you'll know how to use one later on.
Eclipse may be bloated for learning needs, but will probably give you the best overall Java experience. Try working through some of the built-in tutorials if you find the interface confusing.
Good Java Practices in Ubuntu
[ "", "java", "linux", "ubuntu", "" ]
Looking to do a very small, quick 'n dirty side project. I like the fact that the Google App Engine is running on Python with Django built right in - gives me an excuse to try that platform... but my question is this: Has anyone made use of the app engine for anything other than a toy problem? I see some good example apps out there, so I would assume this is good enough for the real deal, but wanted to get some feedback. Any other success/failure notes would be great.
I have tried App Engine for my small quake watch application <http://quakewatch.appspot.com/> My purpose was to see the capabilities of App Engine, so here are the main points: 1. it doesn't come with Django by default; it has its own Pythonic web framework, which has a URL dispatcher like Django's, and it uses Django templates. So if you have Django experience you will find it easy to use * But you can use any pure Python framework, and Django can be easily added, see <http://code.google.com/appengine/articles/django.html> the google-app-engine-django project (<http://code.google.com/p/google-app-engine-django/>) is excellent and works almost like working on a Django project 2. You can not execute any long-running process on the server; what you do is reply to a request, and that should be quick, otherwise App Engine will kill it. So if your app needs lots of backend processing, App Engine is not the best way; otherwise you will have to do the processing on a server of your own 3. My quakewatch app has a subscription feature, meaning I had to email the latest quakes as they happened, but I can not run a background process in App Engine to monitor new quakes. The solution here is to use a third-party service like pingablity.com, which can connect to one of your pages and execute the subscription emailer; but here too you have to take care that you don't spend too much time, or break the task into several pieces 4. It provides Django-like modeling capabilities, but the backend is totally different; for a new project it should not matter, though. But overall I think it is excellent for creating apps which do not need lots of background processing. Edit: Now [task queues](http://code.google.com/appengine/docs/python/taskqueue/) can be used for running batch processing or scheduled tasks Edit: after working on/creating a real application on GAE for a year, my opinion now is that unless you are making an application which needs to scale to millions and millions of users, don't use GAE.
Maintaining and doing trivial tasks in GAE is a headache due to its distributed nature; avoiding deadline-exceeded errors, counting entities, or doing complex queries requires complex code, so a small complex application should stick to LAMP. Edit: Models should be specially designed considering all the transactions you wish to have in the future, because only entities in the same entity group can be used in a transaction, and that makes the process of updating two different groups a nightmare, e.g. transferring money from user1 to user2 in a transaction is impossible unless they are in the same entity group, but making them part of the same entity group may not be best for frequent update purposes.... read this <http://blog.notdot.net/2009/9/Distributed-Transactions-on-App-Engine>
I am using GAE to host several high-traffic applications. Like on the order of 50-100 req/sec. It is great, I can't recommend it enough. My previous experience with web development was with Ruby (Rails/Merb). Learning Python was easy. I didn't mess with Django or Pylons or any other framework, just started from the GAE examples and built what I needed out of the basic webapp libraries that are provided. If you're used to the flexibility of SQL the datastore can take some getting used to. Nothing too traumatic! The biggest adjustment is moving away from JOINs. You have to shed the idea that normalizing is crucial. Ben
Feedback on using Google App Engine?
[ "", "python", "django", "google-app-engine", "" ]
This is a 3 part question regarding embedded RegEx into SQL statements. 1. How do you embed a RegEx expression into an Oracle PL/SQL select statement that will parse out the “DELINQUENT” string in the text string shown below? 2. What is the performance impact if used within a mission critical business transaction? 3. Since embedding regex into SQL was introduced in Oracle 10g and SQL Server 2005, is it considered a recommended practice? --- Dear Larry : Thank you for using ABC's alert service. ABC has detected a change in the status of one of your products in the state of KS. Please review the information below to determine if this status change was intended. ENTITY NAME: Oracle Systems, LLC PREVIOUS STATUS: -- CURRENT STATUS: DELINQUENT As a reminder, you may contact your the ABC Team for assistance in correcting any delinquencies or, if needed, reinstating the service. Alternatively, if the system does not intend to continue to engage this state, please notify ABC so that we can discontinue our services. Kind regards, Service Team 1 ABC --PLEASE DO NOT REPLY TO THIS EMAIL. IT IS NOT A MONITORED EMAIL ACCOUNT.-- Notice: ABC Corporation cannot independently verify the timeliness, accuracy, or completeness of the public information maintained by the responsible government agency or other sources of data upon which these alerts are based.
Why would you need regular expressions here? INSTR and SUBSTR will do the job perfectly. But if you are convinced you need regexes, you can use: [REGEXP\_INSTR](http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions129.htm#i1239887) [REGEXP\_REPLACE](http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions130.htm#i1305521) [REGEXP\_SUBSTR](http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/functions131.htm#i1239858) (only available in Oracle 10g and up) ``` SELECT emp_id, text FROM employee_comment WHERE REGEXP_LIKE(text,'...-....'); ```
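For a quick sanity check of the pattern itself before embedding it in SQL, the same extraction can be tried in any regex engine. Here is a hedged Python sketch (the alert text is abbreviated, and the assumption is that the status token always follows the literal `CURRENT STATUS:` label) showing that a plain substring search, the INSTR/SUBSTR route, and a regex, the REGEXP_SUBSTR route, recover the same value:

```python
import re

# Abbreviated alert text from the question (assumption: the status always
# follows the literal "CURRENT STATUS:" label on the same line).
alert = "PREVIOUS STATUS: --\nCURRENT STATUS: DELINQUENT\n\nAs a reminder, ..."

# Plain substring approach -- the INSTR/SUBSTR equivalent.
label = "CURRENT STATUS:"
start = alert.index(label) + len(label)
by_substring = alert[start:].split("\n", 1)[0].strip()

# Regex approach -- the REGEXP_SUBSTR equivalent.
by_regex = re.search(r"CURRENT STATUS:\s*(\S+)", alert).group(1)

print(by_substring, by_regex)
```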
If I recall correctly, it is possible to write a UDF in c#/vb for SQL Server. Here's a link, though possibly not the best: <http://www.novicksoftware.com/coding-in-sql/Vol3/cis-v3-N13-dot-net-clr-in-sql-server.htm>
What's the best approach to embed RegEx in Oracle or SQL Server 2005 SQL?
[ "", "sql", "sql-server", "regex", "oracle", "" ]
I get asked this question a lot and I thought I'd solicit some input on how to best describe the difference.
They are actually two very different things. "Delegate" is actually the name for a variable that holds a reference to a method or a lambda, and a lambda is a method without a permanent name. Lambdas are very much like other methods, except for a couple subtle differences. 1. A normal method is defined in a ["statement"](http://en.wikipedia.org/wiki/Statement_(programming)) and tied to a permanent name, whereas a lambda is defined "on the fly" in an ["expression"](http://en.wikipedia.org/wiki/Expression_(programming)) and has no permanent name. 2. Some lambdas can be used with .NET expression trees, whereas methods cannot. A delegate is defined like this: ``` delegate Int32 BinaryIntOp(Int32 x, Int32 y); ``` A variable of type BinaryIntOp can have either a method or a lambda assigned to it, as long as the signature is the same: two Int32 arguments, and an Int32 return. A lambda might be defined like this: ``` BinaryIntOp sumOfSquares = (a, b) => a*a + b*b; ``` Another thing to note is that although the generic Func and Action types are often considered "lambda types", they are just like any other delegates. The nice thing about them is that they essentially define a name for any type of delegate you might need (up to 4 parameters, though you can certainly add more of your own). So if you are using a wide variety of delegate types, but none more than once, you can avoid cluttering your code with delegate declarations by using Func and Action. Here is an illustration of how Func and Action are "not just for lambdas": ``` Int32 DiffOfSquares(Int32 x, Int32 y) { return x*x - y*y; } Func<Int32, Int32, Int32> funcPtr = DiffOfSquares; ``` Another useful thing to know is that delegate types (not methods themselves) with the same signature but different names will not be implicitly casted to each other. This includes the Func and Action delegates. However if the signature is identical, you can explicitly cast between them. Going the extra mile.... 
In C# functions are flexible, with the use of lambdas and delegates. But C# does not have "first-class functions". You can use a function's name assigned to a delegate variable to essentially create an object representing that function. But it's really a compiler trick. If you start a statement by writing the function name followed by a dot (i.e. try to do member access on the function itself) you'll find there are no members there to reference. Not even the ones from Object. This prevents the programmer from doing useful (and potentially dangerous of course) things such as adding extension methods that can be called on any function. The best you can do is extend the Delegate class itself, which is surely also useful, but not quite as much. Update: Also see [Karg's answer](https://stackoverflow.com/questions/73227/what-is-the-difference-between-lambdas-and-delegates-in-the-net-framework#73448) illustrating the difference between anonymous delegates vs. methods & lambdas. Update 2: [James Hart](https://stackoverflow.com/questions/73227/what-is-the-difference-between-lambdas-and-delegates-in-the-net-framework#74414) makes an important, though very technical, note that lambdas and delegates are not .NET entities (i.e. the CLR has no concept of a delegate or lambda), but rather they are framework and language constructs.
The question is a little ambiguous, which explains the wide disparity in answers you're getting. You actually asked what the difference is between lambdas and delegates in the .NET framework; that might be one of a number of things. Are you asking: * What is the difference between lambda expressions and anonymous delegates in the C# (or VB.NET) language? * What is the difference between System.Linq.Expressions.LambdaExpression objects and System.Delegate objects in .NET 3.5? * Or something somewhere between or around those extremes? Some people seem to be trying to give you the answer to the question 'what is the difference between C# Lambda expressions and .NET System.Delegate?', which doesn't make a whole lot of sense. The .NET framework does not in itself understand the concepts of anonymous delegates, lambda expressions, or closures - those are all things defined by language specifications. Think about how the C# compiler translates the definition of an anonymous method into a method on a generated class with member variables to hold closure state; to .NET, there's nothing anonymous about the delegate; it's just anonymous to the C# programmer writing it. That's equally true of a lambda expression assigned to a delegate type. What .NET *DOES* understand is the idea of a delegate - a type that describes a method signature, instances of which represent either bound calls to specific methods on specific objects, or unbound calls to a particular method on a particular type that can be invoked against any object of that type, where said method adheres to the said signature. Such types all inherit from System.Delegate. .NET 3.5 also introduces the System.Linq.Expressions namespace, which contains classes for describing code expressions - and which can also therefore represent bound or unbound calls to methods on particular types or objects. 
LambdaExpression instances can then be compiled into actual delegates (whereby a dynamic method based on the structure of the expression is codegenned, and a delegate pointer to it is returned). In C# you can produce instances of System.Expressions.Expression types by assigning a lambda expression to a variable of said type, which will produce the appropriate code to construct the expression at runtime. Of course, if you *were* asking what the difference is between lambda expressions and anonymous methods in C#, after all, then all this is pretty much irrelevant, and in that case the primary difference is brevity, which leans towards anonymous delegates when you don't care about parameters and don't plan on returning a value, and towards lambdas when you want type-inferred parameters and return types. And lambda expressions support expression generation.
What is the difference between lambdas and delegates in the .NET Framework?
[ "", "c#", ".net", "lambda", "" ]
In my web application there is a process that queries data from all over the web, filters it, and saves it to the database. As you can imagine this process takes some time. My current solution is to increase the page timeout and give an AJAX progress bar to the user while it loads. This is a problem for two reasons - 1) it still takes too long and the user must wait 2) it sometimes still times out. I've dabbled in threading the process and have read I should async post it to a web service ("Fire and forget"). Some references I've read: - [MSDN](http://msdn.microsoft.com/en-us/library/ms978607.aspx#diforwc-ap02_plag_howtomultithread) - [Fire and Forget](http://aspalliance.com/329) So my question is - what is the best method? UPDATE: After the user inputs their data I would like to redirect them to the results page that incrementally updates as the process is running in the background.
To avoid excessive architecture astronomy, I often [use a hidden iframe to call the long running process and stream back progress information](http://encosia.com/2007/10/03/easy-incremental-status-updates-for-long-requests/). Coupled with something like [jsProgressBarHandler](http://www.bram.us/demo/projects/jsprogressbarhandler/), you can pretty easily create great out-of-band progress indication for longer tasks where a generic progress animation doesn't cut it. In your specific situation, you may want to use one LongRunningProcess.aspx call per task, to avoid those page timeouts. For example, call LongRunningProcess.aspx?taskID=1 to kick it off and then at the end of that task, emit a ``` document.location = "LongRunningProcess.aspx?taskID=2". ``` Ad nauseam.
We had a similar issue and solved it by starting the work via an asynchronous web service call (which meant that the user did not have to wait for the work to finish). The web service then started a SQL Job which performed the work and periodically updated a table with the status of the work. We provided a UI which allowed the user to query the table.
What's the best way to handle long running process in an ASP.Net application?
[ "", "c#", ".net", "asp.net", "asp.net-ajax", "" ]
What is the behind-the-scenes difference between `int?` and `int` data types? Is `int?` somehow a reference type?
? wraps the value type (T) in a Nullable<T> struct: <http://msdn.microsoft.com/en-us/library/b3h38hb0.aspx>
In addition to "int?" being a shortcut for "Nullable<int>", there was also infrastructure put into the CLR in order to implicitly and silently convert between "int?" and "int". This also means that any boxing operation will implicitly box the actual value (i.e., it's impossible to box Nullable<T> as Nullable<T>; it always results in either the boxed value of T or a null object). I ran into many of these issues when trying to create Nullable<T> when you don't know T at compile time (you only know it at runtime). <http://bradwilson.typepad.com/blog/2008/07/creating-nullab.html>
Is "int?" somehow a reference type?
[ "", "c#", "nullable", "" ]
This is written in PHP, but it's really language agnostic. ``` try { try { $issue = new DM_Issue($core->db->escape_string($_GET['issue'])); } catch(DM_Exception $e) { throw new Error_Page($tpl, ERR_NOT_FOUND, $e->getMessage()); } } catch(Error_Page $e) { die($e); } ``` Is nested try, catch blocks a good practice to follow? It seems a little bulky just for an error page - however my Issue Datamanager throws an Exception if an error occurs and I consider that to be a good way of error detecting. The Error\_Page exception is simply an error page compiler. I might just be pedantic, but do you think this is a good way to report errors and if so can you suggest a better way to write this? Thanks
You're using Exceptions for page logic, and I personally think that's not a good thing. Exceptions should be used to signal when bad or unexpected things happen, not to control the output of an error page. If you want to generate an error page based on Exceptions, consider using [set\_exception\_handler](http://www.php.net/set_exception_handler). Any uncaught exceptions are run through whatever callback method you specify. Keep in mind that this doesn't stop the "fatalness" of an Exception. After an exception is passed through your callback, execution will stop like normal after any uncaught exception.
I think you'd be better off not nesting. If you expect multiple exception types, have multiple catches. ``` try{ Something(); } catch( SpecificException se ) {blah();} catch( AnotherException ae ) {blah();} ```
Exceptions: Is this a good practice?
[ "", "php", "language-agnostic", "exception", "error-handling", "" ]
Many of the parameters for interacting with the Office Object model in VSTO require object parameters that are passed by reference, even when the notional type of the parameter is an int or string. 1. I suppose that this mechanism is used so that code can modify the parameter, although I can't figure out why these need to be passed as generic object instead of as their more appropriate types. Can anyone enlighten me? 2. The mechanism I've been using (cribbed from help and MSDN resources) essentially creates a generic object that contains the appropriate data and then passes that to the method, for example: ``` object nextBookmarkName = "NextContent"; object nextBookmark = this.Bookmarks.get_Item( ref nextBookmarkName ).Range; Microsoft.Office.Interop.Word.Range newRng = this.Range( ref nextBookmark, ref nextBookmark ); ``` This seems like a lot of extra code, but I can't see a better way to do it. I'm sure I'm missing something; what is it? Or is this really the best practice?
I think it was just poor design of the original Word object model. I know that passing strings by reference can be slightly faster in the COM world because it avoids the need to make a copy, so perhaps that was part of the justification. But the downside is that the callee can modify the value, and in most cases with Word they are input parameters. I think your technique is the best practice. For the millions of optional parameters that many of the Word object model methods require, you can create a single static field "missing", something like: ``` object missing = Type.Missing; // Example object fileName = ... document.SaveAs(ref fileName, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing, ref missing); ```
I agree with Joe. I even developed helper structs and classes like this one: ``` internal struct Argument { internal static object False = false; internal static object Missing = System.Type.Missing; internal static object True = true; } ``` And this one: ``` /// <summary> /// Defines the "special characters" /// in Microsoft Word that VSTO 1.x /// translates into C# strings. /// </summary> internal struct Characters { /// <summary> /// Word Table end-of-cell marker. /// </summary> /// <remarks> /// Word Table end-of-row markers are also /// equal to this value. /// </remarks> internal static string CellBreak = "\r\a"; /// <summary> /// Word line break (^l). /// </summary> internal static string LineBreak = "\v"; /// <summary> /// Word Paragraph break (^p). /// </summary> internal static string ParagraphBreak = "\r"; } ``` And a few more...
Best practice for passing parameters in Microsoft Visual Studio Tools for Office (VSTO) 3 (C#)
[ "", "c#", "visual-studio-2008", "vsto", "officedev", "" ]
I want to know the memory usage of my Python application and specifically want to know what code blocks/portions or objects are consuming most memory. Google search shows a commercial one is [Python Memory Validator](http://www.softwareverify.com/python/memory/index.html) (Windows only). And open source ones are [PySizer](http://pysizer.8325.org/) and [Heapy](http://guppy-pe.sourceforge.net/#Heapy). I haven't tried anyone, so I wanted to know which one is the best considering: 1. Gives most details. 2. I have to do least or no changes to my code.
[guppy3](https://github.com/zhuyifei1999/guppy3/) is quite simple to use. At some point in your code, you have to write the following: ``` from guppy import hpy h = hpy() print(h.heap()) ``` This gives you some output like this: ``` Partition of a set of 132527 objects. Total size = 8301532 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 35144 27 2140412 26 2140412 26 str 1 38397 29 1309020 16 3449432 42 tuple 2 530 0 739856 9 4189288 50 dict (no owner) ``` You can also find out from where objects are referenced and get statistics about that, but somehow the docs on that are a bit sparse. There is a graphical browser as well, written in Tk. For Python 2.x, use [Heapy](http://guppy-pe.sourceforge.net/).
My module [memory\_profiler](http://pypi.python.org/pypi/memory_profiler) is capable of printing a line-by-line report of memory usage and works on Unix and Windows (it needs psutil on the latter). The output is not very detailed, but the goal is to give you an overview of where the code is consuming more memory, not an exhaustive analysis of allocated objects. After decorating your function with `@profile` and running your code with the `-m memory_profiler` flag it will print a line-by-line report like this: ``` Line # Mem usage Increment Line Contents ============================================== 3 @profile 4 5.97 MB 0.00 MB def my_func(): 5 13.61 MB 7.64 MB a = [1] * (10 ** 6) 6 166.20 MB 152.59 MB b = [2] * (2 * 10 ** 7) 7 13.61 MB -152.59 MB del b 8 13.61 MB 0.00 MB return a ```
Which Python memory profiler is recommended?
[ "", "python", "performance", "memory-management", "profiling", "" ]
I'm trying to keep dependencies to a minimum for a program I contribute to, it's a small text editor. GTK Textview doesn't seem to come with a built-in undo function. Is there any reference implementation I've been missing on so far? Is everyone writing their own undo function for their TextView widgets? I'll be happy about any sample code - most happy about python sample code, as our project is in python.
As far as I know, GTK TextView doesn't include an undo function. So while I am not familiar with Python's GTK library, I would think it doesn't have one. The Ruby-GNOME2 project has a [sample text editor](http://ruby-gnome2.sourceforge.jp/hiki.cgi?Simple+Text+Editor) that has undo/redo functionality. Basically they are connecting to the insert\_text and delete\_range signals of the TextView widget and recording the events and associated data in a list.
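The record-and-replay approach described above can be sketched without any GTK dependency. The following is a minimal, hypothetical Python illustration (not GTK code): each edit is recorded as an (action, position, text) tuple, standing in for the data you would capture in the insert_text and delete_range handlers, and undo inverts the most recent entry:

```python
class UndoBuffer:
    """Minimal sketch of the record-and-replay undo idea (not a GTK widget)."""

    def __init__(self):
        self.text = ""
        self.undo_stack = []  # recorded (action, position, text) tuples

    def insert(self, pos, s):
        # In the GTK case, this recording would happen in the insert_text handler.
        self.text = self.text[:pos] + s + self.text[pos:]
        self.undo_stack.append(("insert", pos, s))

    def delete(self, start, end):
        # ...and this one in the delete_range handler.
        removed = self.text[start:end]
        self.text = self.text[:start] + self.text[end:]
        self.undo_stack.append(("delete", start, removed))

    def undo(self):
        if not self.undo_stack:
            return
        action, pos, s = self.undo_stack.pop()
        if action == "insert":  # undo an insert by deleting the inserted text
            self.text = self.text[:pos] + self.text[pos + len(s):]
        else:                   # undo a delete by re-inserting the removed text
            self.text = self.text[:pos] + s + self.text[pos:]


buf = UndoBuffer()
buf.insert(0, "hello world")
buf.delete(5, 11)      # text is now "hello"
buf.undo()             # back to "hello world"
print(buf.text)
```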
As a follow-up: I ported gtksourceview's undo mechanism to Python: [Florian Heinle's textbuffer\_with\_undo](https://github.com/wheeler-microfluidics/pygtk_textbuffer_with_undo/tree/ffd6afc9dc32cde800bc04e04cffb402f8a46800) serves as a drop-in replacement for gtksourceview's undo (OP here, but Launchpad OpenID doesn't work anymore) **2024 Edit Note:** Dead code link updated to GitHub with the original (Florian Heinle) commit. The repository forked and updated by other contributors is: [pygtk\_textbuffer\_with\_undo GitHub repository](https://github.com/wheeler-microfluidics/pygtk_textbuffer_with_undo)
Undo with GTK TextView
[ "", "python", "gtk", "text-editor", "undo", "" ]
I am building an open search add-on for Firefox/IE and the image needs to be Base64 Encoded so how can I base 64 encode the favicon I have? I am only familiar with PHP
As far as I remember there is an xml element for the image data. You can use this [website](http://www.motobit.com/util/base64-decoder-encoder.asp) to encode a file (use the upload field). Then just copy and paste the data to the XML element. You could also use PHP to do this like so: ``` <?php $im = file_get_contents('filename.gif'); $imdata = base64_encode($im); ?> ``` Use [Mozilla's guide](http://developer.mozilla.org/en/Creating_OpenSearch_plugins_for_Firefox) for help on creating OpenSearch plugins. For example, the icon element is used like this: ``` <img width="16" height="16">data:image/x-icon;base64,imageData</> ``` Where `imageData` is your base64 data.
``` $encoded_data = base64_encode(file_get_contents('path-to-your-image.jpg')); ```
Base64 Encoding Image
[ "", "php", "encoding", "base64", "opensearch", "" ]
I have an [ExtJS](http://www.sencha.com) grid on a web page and I'd like to save some of its state information back to the server when the users leaves the page. Can I do this with an Ajax request onUnload? If not, what's a better solution?
You can use an Ajax request, but be sure to make it a synchronous request rather than an asynchronous one. Alternatively, simply save state whenever the user makes a change; this also protects the data if the user's browser crashes.
There's an answer above that says to use a synchronous ajax call, and that is the best case scenario. The problem is that unload doesn't work everywhere. If you look [here](http://blog.rd2inc.com/archives/2005/05/04/unload-events-in-safari/) you'll find some tricks to help you get unload events in safari... You could also use Google Gears to save content user side for situations where the user will be coming back, but the only fully safe way to keep that information is to continuously send it as long as the user is on the page or making changes.
How can I save some JavaScript state information back to my server onUnload?
[ "", "javascript", "ajax", "extjs", "" ]
I have: ``` <?php $file=fopen(date("Y-m-d").".txt","r+") or exit("Unable to open file!"); if ($_POST["lastname"] <> "") { fwrite($file,$_POST["lastname"]."\n"); } fclose($file); ?> ``` but it overwrites the beginning of the file. How do I make it insert?
I'm not entirely sure of your question - do you want to write data and not have it over-write the beginning of an existing file, or write new data to the start of an existing file, keeping the existing content after it? **To insert text without over-writing the beginning of the file**, you'll have to open it for appending ([`a+` rather than `r+`](http://ie.php.net/fopen)) ``` $file=fopen(date("Y-m-d").".txt","a+") or exit("Unable to open file!"); if ($_POST["lastname"] <> "") { fwrite($file,$_POST["lastname"]."\n"); } fclose($file); ``` **If you're trying to write to the start of the file**, you'll have to read in the file contents (see [`file_get_contents`](http://www.google.ie/url?sa=t&source=web&ct=res&cd=1&url=http%3A%2F%2Fwww.php.net%2Ffile_get_contents&ei=39jTSKOWEoyk1wbQ-vC6Dg&usg=AFQjCNHwFDwvy4v0p90sNfmfSB_jg7gR7Q&sig2=KyOdq6fGxuj8701pDDdXqA)) first, then write your new string followed by the old contents to the output file. ``` $filename = date("Y-m-d").".txt"; $old_content = file_get_contents($filename); // takes a path, not a handle $file = fopen($filename, "w") or exit("Unable to open file!"); fwrite($file, $new_content."\n".$old_content); fclose($file); ``` The above approach will work with small files, but you may run into memory limits trying to read a large file in using `file_get_contents`. In this case, consider using [`rewind($file)`](http://ie.php.net/rewind), which sets the file position indicator for the handle to the beginning of the file stream. Note that when using `rewind()`, you should not open the file with the `a` (or `a+`) options, as: > If you have opened the file in append ("a" or "a+") mode, any data you write to the file will always be appended, regardless of the file position.
A working example for inserting in the middle of a file stream without overwriting, and without having to load the whole thing into a variable/memory: ``` function finsert($handle, $string, $bufferSize = 16384) { $insertionPoint = ftell($handle); // Create a temp file to stream into $tempPath = tempnam(sys_get_temp_dir(), "file-chainer"); $lastPartHandle = fopen($tempPath, "w+"); // Read in everything from the insertion point and forward while (!feof($handle)) { fwrite($lastPartHandle, fread($handle, $bufferSize), $bufferSize); } // Rewind to the insertion point fseek($handle, $insertionPoint); // Rewind the temporary stream rewind($lastPartHandle); // Write back everything starting with the string to insert fwrite($handle, $string); while (!feof($lastPartHandle)) { fwrite($handle, fread($lastPartHandle, $bufferSize), $bufferSize); } // Close the last part handle and delete it fclose($lastPartHandle); unlink($tempPath); // Re-set pointer fseek($handle, $insertionPoint + strlen($string)); } $handle = fopen("file.txt", "w+"); fwrite($handle, "foobar"); rewind($handle); finsert($handle, "baz"); // File stream is now: bazfoobar ``` [Composer lib for it can be found here](https://github.com/prewk/file-chainer/)
Using php, how to insert text without overwriting to the beginning of a text file
[ "", "php", "insert", "" ]
I'm using the following JavaScript code: ``` <script language="JavaScript1.2" type="text/javascript"> function CreateBookmarkLink(title, url) { if (window.sidebar) { window.sidebar.addPanel(title, url,""); } else if( window.external ) { window.external.AddFavorite( url, title); } else if(window.opera && window.print) { return true; } } </script> ``` This will create a bookmark for Firefox and IE. But the link for Firefox will show up in the sidepanel of the browser, instead of being displayed in the main screen. I personally find this very annoying and am looking for a better solution. It is of course possible to edit the bookmark manually to have it *not* show up in the side panel, but that requires extra steps. I just want to be able to have people bookmark a page (that has a lot of GET information in the URL which is used to build a certain scheme) the easy way. I'm afraid that it might not be possible to have Firefox present the page in the main screen at all (as Googling this subject resulted in practically nothing worth using), but I might have missed something. If anyone has an idea if this is possible, or if there's a workaround, I'd love to hear about it.
I think that's the only solution for Firefox... I have a better function for that action, it works even for Opera and shows a message for other "unsupported" browsers. ``` <script type="text/javascript"> function addBookmark(url,name){ if(window.sidebar && window.sidebar.addPanel) { window.sidebar.addPanel(name,url,''); //obsolete from FF 23. } else if(window.opera && window.print) { var e=document.createElement('a'); e.setAttribute('href',url); e.setAttribute('title',name); e.setAttribute('rel','sidebar'); e.click(); } else if(window.external) { try { window.external.AddFavorite(url,name); } catch(e){} } else alert("To add our website to your bookmarks use CTRL+D on Windows and Linux and Command+D on the Mac."); } </script> ```
For Firefox there is no need to use any JavaScript to bookmark a page; an anchor tag with **title** and **rel="sidebar"** is enough:

```
<a href="http://www.google.com" title="Google" rel="sidebar">Bookmark This Page</a>
```

I have tested it on FF9 and it's working fine. When you click the link, Firefox opens a **New Bookmark** dialog box, and if you don't want this bookmark loaded in the side bar, un-check **Load this bookmark in the sidebar** in the dialog box.
A Firefox javascript bookmarking problem
[ "", "javascript", "firefox", "bookmarks", "" ]
Is it possible under any set of circumstances to be able to accomplish this? My current circumstances are this: ``` public class CustomForm : Form { public class CustomGUIElement { ... public event MouseEventHandler Click; // etc, and so forth. ... } private List<CustomGUIElement> _elements; ... public void CustomForm_Click(object sender, MouseEventArgs e) { // we might want to call one of the _elements[n].Click in here // but we can't because we aren't in the same class. } } ``` My first thought was to have a function similar to: ``` internal enum GUIElementHandlers { Click, ... } internal void CustomGUIElement::CallHandler(GUIElementHandler h, object[] args) { switch (h) { case Click: this.Click(this, (EventArgs)args[0]); break; ... // etc and so forth } } ``` It's a horribly ugly kludge, but it should work... There must be a more elegant solution though? The .NET library does this all the time with message handlers and calling events in Control's. Does anyone else have any other/better ideas?
You just need to add a public method for invoking the event. Microsoft already does this for some events such as *PerformClick* for controls that expose a *Click* event. ``` public class CustomGUIElement { public void PerformClick() { OnClick(EventArgs.Empty); } protected virtual void OnClick(EventArgs e) { if (Click != null) Click(this, e); } } ``` You would then do the following inside your example event handler... ``` public void CustomForm_Click(object sender, MouseEventArgs e) { _elements[0].PerformClick(); } ```
The event keyword in c# modifies the declaration of the delegate. It prevents direct assignment to the delegate (you can only use += and -= on an event), and it prevents invocation of the delegate from outside the class. So you could alter your code to look like this: ``` public class CustomGUIElement { ... public MouseEventHandler Click; // etc, and so forth. ... } ``` Then you can invoke the event from outside the class like this. ``` myCustomGUIElement.Click(sender,args); ``` The drawback is that code using the class can overwrite any registered handlers very easily with code like this: ``` myCustomGUIElement.Click = null; ``` which is not allowed if the Click delegate is declared as an event.
Calling C# events from outside the owning class?
[ "", "c#", "events", "inheritance", "" ]
I need to run a stored procedure from a C# application. I use the following code to do so:

```
Process sqlcmdCall = new Process();

sqlcmdCall.StartInfo.FileName = "sqlcmd.exe";
sqlcmdCall.StartInfo.Arguments = "-S localhost\\SQLEXPRESS -d some_db -Q \":EXIT(sp_test)\"";

sqlcmdCall.Start();
sqlcmdCall.WaitForExit();
```

From the sqlcmdCall object after the call completes, I currently get an ExitCode of -100 for success and of 1 for failure (i.e. missing parameter, stored proc does not exist, etc...). How can I customize these return codes?

H.
If you are trying to call a stored procedure from C#, you would want to use ADO.NET instead of calling sqlcmd via the command line. Look at `SqlConnection` and `SqlCommand` in the `System.Data.SqlClient` namespace. Once you are calling the stored procedure via `SqlCommand` you will be able to catch an exception raised by the stored procedure, as well as read the return value of the procedure if you need to.
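To make that concrete, here is a minimal sketch of what the ADO.NET call might look like (the connection string, database and procedure names are assumptions carried over from the question, not tested against a real server):

```
using System;
using System.Data;
using System.Data.SqlClient;

class StoredProcCaller
{
    static void Main()
    {
        // Connection details assumed from the question's sqlcmd arguments.
        using (SqlConnection conn = new SqlConnection(
            @"Data Source=localhost\SQLEXPRESS;Initial Catalog=some_db;Integrated Security=True"))
        using (SqlCommand cmd = new SqlCommand("sp_test", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;

            // Capture the procedure's RETURN value instead of a process exit code.
            SqlParameter retVal = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
            retVal.Direction = ParameterDirection.ReturnValue;

            try
            {
                conn.Open();
                cmd.ExecuteNonQuery();
                Console.WriteLine("Return code: " + retVal.Value);
            }
            catch (SqlException ex)
            {
                // RAISERROR, a missing procedure, bad parameters etc. surface here.
                Console.WriteLine("Failed: " + ex.Message);
            }
        }
    }
}
```

This way the stored procedure can report any `RETURN n` value you define, rather than the fixed -100/1 codes sqlcmd gives you.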
I have a small VB.Net app that executes system commands like that. To capture error or success conditions I define regular expressions to match the error text output from the command and I capture the output like this:

```
myprocess.Start()
procReader = myprocess.StandardOutput
While (Not procReader.EndOfStream)
    procLine = procReader.ReadLine()
    If (MatchesRegEx(errRegEx, procLine)) Then
        writeDebug("Error reg ex: [" + errRegEx + "] has matched: [" + procLine + "] setting hasError to true.")
        Me.hasError = True
    End If
    writeLog(procLine)
End While
procReader.Close()
myprocess.WaitForExit(CInt(waitTime))
```

That way I can capture specific errors and also log all the output from the command in case I run across an unexpected error.
How to retrieve error when launching sqlcmd from C#?
[ "", "c#", "sql-server", "" ]
Why was the `arguments.callee.caller` property deprecated in JavaScript?

It was added and then deprecated in JavaScript, but it was omitted altogether by ECMAScript. Some browsers (Mozilla, IE) have always supported it and don't have any plans on the map to remove support. Others (Safari, Opera) have adopted support for it, but support on older browsers is unreliable.

Is there a good reason to put this valuable functionality in limbo? (Or alternatively, is there a better way to grab a handle on the calling function?)
Early versions of JavaScript did not allow named function expressions, and because of that we could not make a recursive function expression: ``` // This snippet will work: function factorial(n) { return (!(n>1))? 1 : factorial(n-1)*n; } [1,2,3,4,5].map(factorial); // But this snippet will not: [1,2,3,4,5].map(function(n) { return (!(n>1))? 1 : /* what goes here? */ (n-1)*n; }); ``` To get around this, `arguments.callee` was added so we could do: ``` [1,2,3,4,5].map(function(n) { return (!(n>1))? 1 : arguments.callee(n-1)*n; }); ``` However this was actually a really bad solution as this (in conjunction with other arguments, callee, and caller issues) make inlining and tail recursion impossible in the general case (you can achieve it in select cases through tracing etc, but even the best code is sub optimal due to checks that would not otherwise be necessary). The other major issue is that the recursive call will get a different `this` value, for example: ``` var global = this; var sillyFunction = function (recursed) { if (!recursed) return arguments.callee(true); if (this !== global) alert("This is: " + this); else alert("This is the global"); } sillyFunction(); ``` Anyhow, EcmaScript 3 resolved these issues by allowing named function expressions, e.g.: ``` [1,2,3,4,5].map(function factorial(n) { return (!(n>1))? 1 : factorial(n-1)*n; }); ``` This has numerous benefits: * The function can be called like any other from inside your code. * It does not pollute the namespace. * The value of `this` does not change. * It's more performant (accessing the [arguments object](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions_and_function_scope/arguments) is expensive). ### Whoops, Just realised that in addition to everything else the question was about `arguments.callee.caller`, or more specifically [`Function.caller`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/caller). 
At any point in time you can find the deepest caller of any function on the stack, and as I said above, looking at the call stack has one single major effect: It makes a large number of optimizations impossible, or much much more difficult. Eg. if we can't guarantee that a function `f` will not call an unknown function, then it is not possible to inline `f`. Basically it means that any call site that may have been trivially inlinable accumulates a large number of guards, take: ``` function f(a, b, c, d, e) { return a ? b * c : d * e; } ``` If the js interpreter cannot guarantee that all the provided arguments are numbers at the point that the call is made, it needs to either insert checks for all the arguments before the inlined code, or it cannot inline the function. Now in this particular case a smart interpreter should be able to rearrange the checks to be more optimal and not check any values that would not be used. However in many cases that's just not possible and therefore it becomes impossible to inline.
`arguments.callee.caller` is **not** deprecated, though it does make use of the [`Function.caller`](https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/Function/caller) property. ([`arguments.callee`](https://developer.mozilla.org/En/Core_JavaScript_1.5_Reference/Functions_and_function_scope/arguments/callee) will just give you a reference to the current function) * `Function.caller`, though non-standard according to ECMA3, is implemented across **all current major browsers**. * [`arguments.caller`](https://developer.mozilla.org/En/Core_JavaScript_1.5_Reference/Functions_and_function_scope/arguments/caller) **is** deprecated in favour of `Function.caller`, and isn't implemented in some current major browsers (e.g. Firefox 3). So the situation is less than ideal, but if you want to access the calling function in Javascript across all major browsers, you can use the `Function.caller` property, either accessed directly on a named function reference, or from within an anonymous function via the `arguments.callee` property.
Why was the arguments.callee.caller property deprecated in JavaScript?
[ "", "javascript", "ecma262", "" ]
Are System.IO.Compression.GZipStream or System.IO.Compression.Deflate compatible with zlib compression?
From [MSDN](http://msdn.microsoft.com/en-us/library/system.io.compression.gzipstream.aspx) about System.IO.Compression.GZipStream: > This class represents the gzip data format, which uses an industry standard algorithm for lossless file compression and decompression. From the [zlib FAQ](http://www.gzip.org/zlib/zlib_faq.html): > The gz\* functions in zlib on the other hand use the gzip format. So zlib and GZipStream should be interoperable, but only if you use the zlib functions for handling the gzip-format. System.IO.Compression.Deflate and zlib are reportedly not interoperable. If you need to handle zip files (you probably don't, but someone else might need this) you need to use [SharpZipLib](http://www.icsharpcode.net/OpenSource/SharpZipLib/Default.aspx) or another third-party library.
I ran into this issue with Git objects. In that particular case, they store the objects as deflated blobs with a Zlib header, which is documented in [RFC 1950](http://www.ietf.org/rfc/rfc1950.txt). You can make a compatible blob by making a file that contains: * Two header bytes (CMF and FLG from RFC 1950) with the values `0x78 0x01` + `CM` = 8 = deflate + `CINFO` = 7 = 32Kb window + `FCHECK` = 1 = checksum bits for this header * The output of the C# `DeflateStream` * An Adler32 checksum of the input data to the `DeflateStream`, big-endian format (MSB first) I made my own Adler implementation ``` public class Adler32Computer { private int a = 1; private int b = 0; public int Checksum { get { return ((b * 65536) + a); } } private static readonly int Modulus = 65521; public void Update(byte[] data, int offset, int length) { for (int counter = 0; counter < length; ++counter) { a = (a + (data[offset + counter])) % Modulus; b = (b + a) % Modulus; } } } ``` And that was pretty much it.
Zlib-compatible compression streams?
[ "", "c#", "compression", "zlib", "" ]
Does anybody have useful example of `this` assignment inside a C# method? I have been asked for it once during job interview, and I am still interested in answer myself.
The other answers are incorrect when they say you cannot assign to 'this'. True, you can't for a class type, but you *can* for a struct type: ``` public struct MyValueType { public int Id; public void Swap(ref MyValueType other) { MyValueType temp = this; this = other; other = temp; } } ``` At any point a struct can alter itself by assigning to 'this' like so.
Using the **this** keyword ensures that only variables and methods scoped in the current type are accessed. This can be used when you have a naming conflict between a field/property and a local variable or method parameter.

Typically used in constructors:

```
private readonly IProvider provider;

public MyClass(IProvider provider)
{
    this.provider = provider;
}
```

In this example we assign the parameter provider to the private field provider.
What is an example of "this" assignment in C#?
[ "", "c#", "language-features", "" ]
We have a java program that requires a large amount of heap space - we start it with (among other command line arguments) the argument -Xmx1500m, which specifies a maximum heap space of 1500 MB. When starting this program on a Windows XP box that has been freshly rebooted, it will start and run without issues. But if the program has run several times, the computer has been up for a while, etc., when it tries to start I get this error: ``` Error occurred during initialization of VM Could not reserve enough space for object heap Could not create the Java virtual machine. ``` I suspect that Windows itself is suffering from memory fragmentation, but I don't know how to confirm this suspicion. At the time that this happens, Task manager and sysinternals procexp report 2000MB free memory. I have looked at [this question related to internal fragmentation](https://stackoverflow.com/questions/60871/how-to-solve-memory-fragmentation) So the first question is, How do I confirm my suspicion? The second question is, if my suspicions are correct, does anyone know of any tools to solve this problem? I've looked around quite a bit, but I haven't found anything that helps, other than periodic reboots of the machine. ps - changing operating systems is also not currently a viable option.
Agree with Torlack, a lot of this is because other DLLs are getting loaded and go into certain spots, breaking up the amount of memory you can get for the VM in one big chunk.

You can do some work on WinXP if you have more than 3G of memory to get some of the Windows stuff moved around, look up PAE here: <http://www.microsoft.com/whdc/system/platform/server/PAE/PAEdrv.mspx>

Your best bet, if you really need more than 1.2G of memory for your Java app, is to look at 64 bit Windows or Linux or OSX. If you're using any kind of native libraries with your app you'll have to recompile them for 64 bit, but it's going to be a lot easier than trying to rebase DLLs and stuff to maximize the memory you can get on 32 bit Windows.

Another option would be to split your program up into multiple VMs and have them communicate with each other via RMI or messaging or something. That way each VM can have some subset of the memory you need. Without knowing what your app does, I'm not sure that this will help in any way, though...
Unless you are running out of page file space, this issue isn't that the computer is running out of memory. The whole point of virtual memory is to allow processes to use more virtual memory than is physically available.

Not knowing how the JVM handles the heap, it is a bit hard to say exactly what the problem is, but one of the common issues is that there isn't enough contiguous free address space available in your process to allow the heap to be extended. Why this would be a problem after the machine has been running a while is a bit confusing.

I've been working on a similar problem at work. I have found that running the program using WinDBG and using the "!address" and "!address -summary" commands have been invaluable in tracking down why a process's virtual address space has become fragmented. You can also try running the program after reboot and using the "!address" command to take a picture of the address space and then do the same when the program no longer runs. This might clue you in on the problem. Maybe something as simple as an extra DLL getting loaded might cause the problem.
Tools to view/solve Windows XP memory fragmentation
[ "", "java", "windows", "memory", "memory-management", "windows-xp", "" ]
So, I am using the Linq entity framework. I have 2 entities: `Content` and `Tag`. They are in a many-to-many relationship with one another. `Content` can have many `Tags` and `Tag` can have many `Contents`. So I am trying to write a query to select all contents where any tag's name is equal to `blah`.

The entities both have a collection of the other entity as a property (but no IDs). This is where I am struggling. I do have a custom expression for `Contains` (so, whoever may help me, you can assume that I can do a "contains" for a collection). I got this expression from: <http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2670710&SiteID=1>

## Edit 1

[I ended up finding my own answer.](https://stackoverflow.com/questions/110314/linq-to-entities-building-where-clauses-to-test-collections-within-a-many-to-ma#131551)
After reading about the [PredicateBuilder](http://www.albahari.com/nutshell/predicatebuilder.aspx), reading all of the wonderful posts that people sent to me, posting on other sites, and then reading more on [Combining Predicates](http://blogs.msdn.com/meek/archive/2008/05/02/linq-to-entities-combining-predicates.aspx) and [Canonical Function Mapping](http://msdn.microsoft.com/en-us/library/bb738681.aspx).. oh and I picked up a bit from [Calling functions in LINQ queries](http://tomasp.net/blog/linq-expand.aspx) (some of these classes were taken from these pages). I FINALLY have a solution!!! Though there is a piece that is a bit hacked... Let's get the hacked piece over with :( I had to use reflector and copy the ExpressionVisitor class that is marked as internal. I then had to make some minor changes to it, to get it to work. I had to create two exceptions (because it was newing internal exceptions. I also had to change the ReadOnlyCollection() method's return from: ``` return sequence.ToReadOnlyCollection<Expression>(); ``` To: ``` return sequence.AsReadOnly(); ``` I would post the class, but it is quite large and I don't want to clutter this post any more than it's already going to be. I hope that in the future that class can be removed from my library and that Microsoft will make it public. Moving on... I added a ParameterRebinder class: ``` public class ParameterRebinder : ExpressionVisitor { private readonly Dictionary<ParameterExpression, ParameterExpression> map; public ParameterRebinder(Dictionary<ParameterExpression, ParameterExpression> map) { this.map = map ?? 
new Dictionary<ParameterExpression, ParameterExpression>(); } public static Expression ReplaceParameters(Dictionary<ParameterExpression, ParameterExpression> map, Expression exp) { return new ParameterRebinder(map).Visit(exp); } internal override Expression VisitParameter(ParameterExpression p) { ParameterExpression replacement; if (map.TryGetValue(p, out replacement)) { p = replacement; } return base.VisitParameter(p); } } ``` Then I added a ExpressionExtensions class: ``` public static class ExpressionExtensions { public static Expression<T> Compose<T>(this Expression<T> first, Expression<T> second, Func<Expression, Expression, Expression> merge) { // build parameter map (from parameters of second to parameters of first) var map = first.Parameters.Select((f, i) => new { f, s = second.Parameters[i] }).ToDictionary(p => p.s, p => p.f); // replace parameters in the second lambda expression with parameters from the first var secondBody = ParameterRebinder.ReplaceParameters(map, second.Body); // apply composition of lambda expression bodies to parameters from the first expression return Expression.Lambda<T>(merge(first.Body, secondBody), first.Parameters); } public static Expression<Func<T, bool>> And<T>(this Expression<Func<T, bool>> first, Expression<Func<T, bool>> second) { return first.Compose(second, Expression.And); } public static Expression<Func<T, bool>> Or<T>(this Expression<Func<T, bool>> first, Expression<Func<T, bool>> second) { return first.Compose(second, Expression.Or); } } ``` And the last class I added was PredicateBuilder: ``` public static class PredicateBuilder { public static Expression<Func<T, bool>> True<T>() { return f => true; } public static Expression<Func<T, bool>> False<T>() { return f => false; } } ``` This is my result... I was able to execute this code and get back the resulting "content" entities that have matching "tag" entities from the tags that I was searching for! 
``` public static IList<Content> GetAllContentByTags(IList<Tag> tags) { IQueryable<Content> contentQuery = ... Expression<Func<Content, bool>> predicate = PredicateBuilder.False<Content>(); foreach (Tag individualTag in tags) { Tag tagParameter = individualTag; predicate = predicate.Or(p => p.Tags.Any(tag => tag.Name.Equals(tagParameter.Name))); } IQueryable<Content> resultExpressions = contentQuery.Where(predicate); return resultExpressions.ToList(); } ``` Please let me know if anyone needs help with this same thing, if you would like me to send you files for this, or just need more info.
Summing it up... ``` contentQuery.Where( content => content.Tags.Any(tag => tags.Any(t => t.Name == tag.Name)) ); ``` So is that what you're expecting? I'm a little confused.
LINQ to entities - Building where clauses to test collections within a many to many relationship
[ "", "c#", "linq", "entity-framework", "lambda", "many-to-many", "" ]
I'm doing simple string input parsing and I am in need of a string tokenizer. I am new to C# but have programmed Java, and it seems natural that C# should have a string tokenizer. Does it? Where is it? How do I use it?
You could use [String.Split method](http://msdn.microsoft.com/en-us/library/system.string.split.aspx "String.Split method msdn reference"). ``` class ExampleClass { public ExampleClass() { string exampleString = "there is a cat"; // Split string on spaces. This will separate all the words in a string string[] words = exampleString.Split(' '); foreach (string word in words) { Console.WriteLine(word); // there // is // a // cat } } } ``` For more information see [Sam Allen's article about splitting strings in c#](http://www.dotnetperls.com/split "C# Split String Examples by Sam Allen") (Performance, Regex)
I just want to highlight the power of C#'s Split method and give a more detailed comparison, particularly from someone who comes from a Java background. Whereas StringTokenizer in Java only allows a single delimiter, we can actually split on multiple delimiters making regular expressions less necessary (although if one needs regex, use regex by all means!) Take for example this: ``` str.Split(new char[] { ' ', '.', '?' }) ``` This splits on three different delimiters returning an array of tokens. We can also remove empty arrays with what would be a second parameter for the above example: ``` str.Split(new char[] { ' ', '.', '?' }, StringSplitOptions.RemoveEmptyEntries) ``` One thing Java's String tokenizer does have that I believe C# is lacking (at least Java 7 has this feature) is the ability to keep the delimiter(s) as tokens. C#'s Split will discard the tokens. This could be important in say some NLP applications, but for more general purpose applications this might not be a problem.
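For what it's worth, `Regex.Split` can get close to Java's keep-the-delimiters behaviour: when the pattern wraps the delimiters in a capturing group, the captured delimiters are included in the result array. A small sketch:

```
using System;
using System.Text.RegularExpressions;

class SplitKeepingDelimiters
{
    static void Main()
    {
        string str = "there is. a cat?";

        // The capturing group ( ... ) makes Regex.Split include each
        // matched delimiter in the returned array, interleaved with the tokens.
        string[] tokens = Regex.Split(str, @"([ .?])");

        foreach (string token in tokens)
        {
            if (token != "") // skip the empty entries between adjacent delimiters
                Console.WriteLine("[" + token + "]");
        }
    }
}
```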
Does C# have a String Tokenizer like Java's?
[ "", "c#", "string", "parsing", "" ]
I am searching for a tutorial (ideally one using Zend Framework) on how to use `PHPUnit`. I have found a couple via Google but have not quite understood them yet.
What you are looking for is the [Pocket Guide](http://www.phpunit.de/manual/current/en/). It explains how to work with PHPUnit from A to Z in several languages. You can read it online or offline, for free, and it's regularly updated.
For information about [PHPUnit](http://www.phpunit.de/), be sure to read the [documentation](http://www.phpunit.de/manual/3.3/en/). It does not look too bad IMO. There is a blog entry about [Automatic testing of MVC applications created with Zend Framework](http://replay.waybackmachine.org/20090426040436/http://www.alexatnet.com/node/12) which looks quite good, too. :)
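While you work through those docs, a first test can be as small as this (the class and file layout follow the PHPUnit 3.x manual the links above point to; adjust the include path to your installation):

```
<?php
// A minimal PHPUnit 3.x test case, in the style of the linked manual.
require_once 'PHPUnit/Framework.php';

class StackTest extends PHPUnit_Framework_TestCase
{
    public function testPushAndPop()
    {
        $stack = array();
        $this->assertEquals(0, count($stack));

        array_push($stack, 'foo');
        $this->assertEquals('foo', $stack[count($stack) - 1]);

        $this->assertEquals('foo', array_pop($stack));
        $this->assertEquals(0, count($stack));
    }
}
?>
```

Run it with `phpunit StackTest.php` from the command line.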
In need for a site that explains how to use PHPUnit
[ "", "php", "unit-testing", "zend-framework", "phpunit", "" ]
Is there an easy way to return data to web service clients in JSON using java? I'm fine with servlets, spring, etc.
To me, [the best Java <-> JSON parser is XStream](http://xstream.codehaus.org/json-tutorial.html) (yes, I'm really talking about JSON, not about XML). XStream already deals with circular dependencies and has a simple and powerful API where you can write your own drivers, converters and so on.

Kind Regards
It might be worth looking into [Jersey](http://jersey.java.net/). Jersey makes it easy to expose RESTful web services as XML and/or JSON.

An example... start with a simple class

```
@XmlType(name = "", propOrder = { "id", "text" })
@XmlRootElement(name = "blah")
public class Blah implements Serializable {
    private Integer id;
    private String text;

    public Blah(Integer id, String text) {
        this.id = id;
        this.text = text;
    }

    @XmlElement
    public Integer getId() {
        return id;
    }

    public void setId(Integer id) {
        this.id = id;
    }

    @XmlElement
    public String getText() {
        return text;
    }

    public void setText(String value) {
        this.text = value;
    }
}
```

Then create a Resource

```
@Path("/blah")
public class BlahResource {
    private Set<Blah> blahs = new HashSet<Blah>();

    @Context
    private UriInfo context;

    public BlahResource() {
        blahs.add(new Blah(1, "blah the first"));
        blahs.add(new Blah(2, "blah the second"));
    }

    @GET
    @Path("/{id}")
    @ProduceMime({"application/json", "application/xml"})
    public Blah getBlah(@PathParam("id") Integer id) {
        for (Blah blah : blahs) {
            if (blah.getId().equals(id)) {
                return blah;
            }
        }
        throw new NotFoundException("not found");
    }
}
```

and expose it. There are many ways to do this, such as by using Jersey's ServletContainer. (web.xml)

```
<servlet>
    <servlet-name>jersey</servlet-name>
    <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
    <servlet-name>jersey</servlet-name>
    <url-pattern>/*</url-pattern>
</servlet-mapping>
```

That's all you need to do... pop open your browser and browse to <http://localhost/blah/1>. By default you will see XML output. If you are using Firefox, install TamperData and change your `accept` header to `application/json` to see the JSON output.

Obviously there is much more to it, but [Jersey](http://jersey.java.net/) makes all that stuff quite easy. Good luck!
How do I expose data in a JSON format through a web service using Java?
[ "", "java", "web-services", "json", "" ]
How do you determine the collation of a database in SQL 2005, for instance if you need to perform a case-insensitive search/replace?
Use the following SQL to determine the collation of a database:

```
SELECT DATABASEPROPERTYEX('{database name}', 'Collation') SQLCollation;
```
Remember that individual columns can override the database collation:

```
SELECT TABLE_NAME, COLUMN_NAME, COLLATION_NAME
FROM INFORMATION_SCHEMA.COLUMNS
```
How do I determine the collation of a database in SQL 2005?
[ "", "sql", "collation", "" ]
Obviously I can use BCP but here is the issue. If one of the records in a batch has an invalid date I want to redirect it to a separate table/file/whatever, but keep the batch processing running. I don't think SSIS can be installed on the server, which would have helped.
Create a trigger that processes on INSERT. This trigger will do a validation check on your date field. If it fails the validation, then do an insert into your separate table, and you can also choose whether or not to let the original insert go through.

An **important** note: by default triggers do not fire on bulk inserts (BCP & SSIS included). To get this to work, you'll need to specify that you want the trigger to fire, using something like:

```
BULK INSERT your_database.your_schema.your_table FROM
your_file WITH (FIRE_TRIGGERS)
```
Yeah, if you are using DTS, you should just import into a staging table that uses varchar instead of dates and then massage the data into the proper tables afterwards.
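A rough sketch of that staging approach (all table and column names here are made up for illustration — adapt them to the real schema):

```
-- 1. Load everything as text so no row is rejected for a bad date.
CREATE TABLE StagingImport (
    RecordId varchar(20),
    SomeDate varchar(30),
    Amount   varchar(20)
);

BULK INSERT StagingImport
FROM 'C:\data\batch.txt'
WITH (FIELDTERMINATOR = ',');

-- 2. Rows with a valid date go to the real table...
INSERT INTO TargetTable (RecordId, SomeDate, Amount)
SELECT RecordId, CONVERT(datetime, SomeDate), Amount
FROM StagingImport
WHERE ISDATE(SomeDate) = 1;

-- 3. ...and rows with an invalid date are redirected for later review.
INSERT INTO BadDateRows (RecordId, SomeDate, Amount)
SELECT RecordId, SomeDate, Amount
FROM StagingImport
WHERE ISDATE(SomeDate) = 0;
```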
Get millions of records from fixed-width flat file to SQL 2000
[ "", "c#", "sql-server", "" ]
One mentor I respect suggests that a simple bean is a waste of time - that value objects 'MUST' contain some business logic to be useful. Another says such code is difficult to maintain and that all business logic must be externalized. I realize this question is subjective. Asking anyway - want to know answers from more perspectives.
The idea of putting data and business logic together is to promote encapsulation, and to expose as little internal state as possible to other objects. That way, clients can rely on an interface rather than on an implementation. See the ["Tell, Don't Ask"](http://www.pragmaticprogrammer.com/articles/tell-dont-ask) principle and the [Law of Demeter](http://en.wikipedia.org/wiki/Law_of_Demeter). Encapsulation makes it easier to understand the states data can be in, easier to read code, easier to decouple classes and generally easier to unit test. Externalising business logic (generally into "Service" or "Manager" classes) makes questions like "where is this data used?" and "What states can it be in?" a lot more difficult to answer. It's also a procedural way of thinking, wrapped up in an object. This can lead to an [anemic domain model](http://www.martinfowler.com/bliki/AnemicDomainModel.html). Externalising behaviour isn't always bad. For example, a [service layer](http://martinfowler.com/eaaCatalog/serviceLayer.html) might orchestrate domain objects, but without taking over their state-manipulating responsibilities. Or, when you are mostly doing reads/writes to a DB that map nicely to input forms, maybe you don't need a domain model - or the painful object/relational mapping overhead it entails - at all. Transfer Objects often serve to decouple architectural layers from each other (or from an external system) by providing the minimum state information the calling layer needs, without exposing any business logic. This can be useful, for example when preparing information for the view: just give the view the information it needs, and nothing else, so that it can concentrate on *how* to display the information, rather than *what* information to display. For example, the TO might be an aggregation of several sources of data. One advantage is that your views and your domain objects are decoupled. 
Using your domain objects in JSPs can make your domain harder to refactor and promotes the indiscriminate use of getters and setters (hence breaking encapsulation). However, there's also an overhead associated with having a lot of Transfer Objects, and often a lot of duplication too. Some projects I've been on end up with TOs that basically mirror other domain objects (which I consider an anti-pattern).
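To make the "Tell, Don't Ask" point concrete, here is a minimal sketch (the `Account` class and its method names are hypothetical, invented for illustration): the object is told what to do, and its internal state never leaks out for other code to manipulate.

```java
// A small "rich" domain object: state stays private, behaviour lives
// next to the data it operates on ("Tell, Don't Ask").
public class Account {
    private long balanceInCents;

    public Account(long openingBalanceInCents) {
        this.balanceInCents = openingBalanceInCents;
    }

    // Clients tell the object what to do...
    public void withdraw(long amountInCents) {
        if (amountInCents > balanceInCents) {
            throw new IllegalStateException("Insufficient funds");
        }
        balanceInCents -= amountInCents;
    }

    // ...rather than asking for the balance and doing the check outside.
    public boolean canAfford(long amountInCents) {
        return amountInCents <= balanceInCents;
    }
}
```

The anemic alternative would expose `getBalance()`/`setBalance()` and put the overdraft rule in some `AccountService`, at which point nothing stops a second caller from skipping the check.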
Better to call them [Transfer Objects](https://www.oracle.com/technetwork/java/transferobject-139757.html) or [Data Transfer Objects (DTOs)](http://en.wikipedia.org/wiki/Data_Transfer_Object). This J2EE pattern was originally named 'Value Object', but the name was changed because it clashed with the domain-driven-design meaning of the term: <http://dddcommunity.org/discussion/messageboardarchive/ValueObjects.html>

To answer your question: I would put only minimal logic in my DTOs - logic that is required for display reasons. Even better, if we are talking about a database-backed web application, I would go beyond the core J2EE patterns and use [Hibernate](http://www.hibernate.org/) or the [Java Persistence API](http://en.wikipedia.org/wiki/Java_Persistence_API) to create a domain model that supports lazy loading of relations, and use that in the view (see [Open Session in View](http://www.hibernate.org/43.html)). That way you don't have to write a set of DTOs, and you have all the business logic available to use in your views/controllers etc.
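As a sketch of the "minimal, display-only logic" rule (the class and field names here are made up for illustration, not from the J2EE pattern catalogue):

```java
// A Transfer Object carrying only what the view needs.
// The one bit of "logic" it contains is presentation formatting.
public class CustomerSummaryTO {
    private final String fullName;
    private final int openOrders;

    public CustomerSummaryTO(String firstName, String lastName, int openOrders) {
        this.fullName = firstName + " " + lastName;
        this.openOrders = openOrders;
    }

    public String getFullName() { return fullName; }

    // Display-only logic like this is acceptable in a TO;
    // business rules (pricing, validation, workflow) are not.
    public String getOrderBadge() {
        return openOrders > 0 ? openOrders + " open order(s)" : "none";
    }
}
```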
How much business logic should Value objects contain?
[ "", "java", "oop", "data-transfer-objects", "" ]
I have the following C# code: ``` byte rule = 0; ... rule = rule | 0x80; ``` which produces the error: > *Cannot implicitly convert type 'int' to 'byte'. An explicit conversion exists (are you missing a cast?)* [Update: first version of the question was wrong ... I misread the compiler output] Adding the cast **doesn't** fix the problem: ``` rule = rule | (byte) 0x80; ``` I need to write it as: ``` rule |= 0x80; ``` Which just seems weird. Why is the `|=` operator any different to the `|` operator? Is there any other way of telling the compiler to treat the constant as a byte? --- **@ Giovanni Galbo** : yes and no. The code is dealing with the programming of the flash memory in an external device, and logically represents a single byte of memory. I could cast it later, but this seemed more obvious. I guess my C heritage is showing through too much! **@ Jonathon Holland** : the 'as' syntax looks neater but unfortunately doesn't appear to work ... it produces: > *The as operator must be used with a reference type or nullable type ('byte' is a non-nullable value type)*
```
int rule = 0;
rule |= 0x80;
```

<http://msdn.microsoft.com/en-us/library/kxszd0kx.aspx>

The `|=` operator is an or-then-assign operator: `rule |= 0x80` is shorthand for `rule = rule | 0x80`, except that for compound assignment the compiler inserts an implicit cast back to the type of the left-hand side. That is exactly why the `|=` form compiles while the plain `|` form doesn't. The `|` operator isn't defined for `byte` at all; both operands are promoted to `int`, so even `rule | (byte)0x80` produces an `int`, which can't be implicitly narrowed back to `byte`. If you'd rather avoid the casting noise altogether, the simplest fix is to declare the variable as an `int` and narrow it to a byte only at the point where you actually write it to the device. (And for what it's worth, an `int` is always an `Int32`, even when running in x64 mode, so there's no 64-bit concern here.)
C# does not have a literal suffix for `byte`: `u` = uint, `l` = long, `ul` = ulong, `f` = float, `d` = double, `m` = decimal, but nothing for `byte` or `sbyte`. You have to cast, and note that the cast must wrap the *whole* expression, because even `byte | byte` yields an `int`: ``` rule = (byte)(rule | 0x80); ```
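For what it's worth, this isn't a C# quirk: Java applies the same numeric promotion, so the behaviour can be demonstrated there too. A runnable sketch (class and variable names are arbitrary; this illustrates the general rule, not C#'s exact spec wording):

```java
// Java shares the rule in question: '|' promotes byte operands to int,
// while the compound form '|=' implicitly narrows the result back to
// byte (JLS 15.26.2), much like C#'s compound assignment inserts the
// cast for you.
public class BytePromotionDemo {
    public static void main(String[] args) {
        byte rule = 0;
        // rule = rule | 0x80;            // does not compile: '|' yields int
        rule |= 0x80;                      // fine: implicit narrowing cast
        byte masked = (byte) (rule | 0x01); // cast must wrap the whole expression
        System.out.println(rule);   // -128 (0x80 as a signed byte)
        System.out.println(masked); // -127 (0x81 as a signed byte)
    }
}
```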
C# numeric constants
[ "", "c#", "casting", "" ]