I have used up my 30-day trial and want to buy Resharper now. But I'm poor, and $150 is a lot for me to handle right now. Is anything in VS 2010 going to make buying Resharper now a mistake? I heard that VS 2010 is like VS 2008 + Resharper. I find it hard to believe but wanted to ask SO before spending my money. Thanks! Result: I bought Resharper and have not regretted it since. I use its code navigation and refactoring tools many times throughout the day. I seriously cannot use VS2008 without it.
First of all, look at the prospects for R# 4.5 to know what will be included in it: <http://resharper.blogspot.com/2008/07/whats-next-life-after-release.html> and <http://abdullin.com/journal/2008/12/19/resharper-45.html> I'm sure that R# will be a must-have add-in for VS2010 too (JetBrains spent several years working on it, and it will be hard for MS to do a really good implementation of all these features). You can also check the R# 4.5 nightly builds and compare them with the betas of VS.
From the VS previews I have seen so far, there are some enhancements to 2010 that overlap with today's Resharper, but I'm sure the next version of Resharper will in turn add even more to fill in new gaps that VS leaves out. It's an ever-forward march. That doesn't really answer your question directly, but hopefully the extra info will help you decide.
Should I wait to buy Resharper until VS 2010 is released?
[ "", "c#", ".net", "visual-studio", "resharper", "" ]
I want to store strings and issue each with a unique ID number (an index would be fine). I only need one copy of each string, and I require quick lookup. I check whether a string exists in the table often enough that I notice a performance hit. What's the best container to use for this, and how do I look up whether a string exists?
I would suggest `tr1::unordered_map`. It is implemented as a hash map, so it has an expected complexity of O(1) for lookups and a worst case of O(n). There is also a Boost implementation if your compiler doesn't support TR1.

```
#include <string>
#include <iostream>
#include <tr1/unordered_map>

using namespace std;

int main()
{
    tr1::unordered_map<string, int> table;
    table["One"] = 1;
    table["Two"] = 2;

    cout << "find(\"One\") == " << boolalpha << (table.find("One") != table.end()) << endl;
    cout << "find(\"Three\") == " << boolalpha << (table.find("Three") != table.end()) << endl;

    return 0;
}
```
try this: [![alt text](https://i.stack.imgur.com/YFIqM.png)](https://i.stack.imgur.com/YFIqM.png) (source: [adrinael.net](http://adrinael.net/containerchoice.png))
container for quick name lookup
[ "", "c++", "containers", "std", "" ]
Is there a views plugin that I can use to generate an XML file? I would like something that would let me choose the fields I would like to be in the XML and how they should appear (as a tag or an attribute of the parent tag). For example: I have a content type Picture that has three fields: title, size and dimensions. I would like to create a view that could generate something like this:

```
<pictures>
    <picture size="1000" dimensions="10x10">
        <title> title </title>
    </picture>
    <picture size="1000" dimensions="10x10">
        <title> title </title>
    </picture>
    ...
</pictures>
```

If there isn't anything already implemented, what should I implement? I thought about implementing a display plugin, a style, a row plugin and a field handler. Am I wrong? I wouldn't like to do it with templates because I can't think of a way to make it reusable with templates.
A custom style plugin is definitely capable of doing this; I whipped one up to output Atom feeds instead of RSS. You might find a bit of luck starting with the [Views Bonus Pack](http://drupal.org/project/views_bonus) or [Views Datasource](http://drupal.org/project/views_datasource). Both attempt to provide XML and other output formats for Views data, though the latter was a Google Summer of Code project and hasn't been updated recently. Definitely a potential starting point, though.
You might want to look at implementing another theme for XML or using the [Services](https://www.drupal.org/project/services) module. Some details about it (from its project page): > A standardized solution for building API's so that external clients can communicate with Drupal. Out of the box it aims to support anything Drupal Core supports and provides a code level API for other modules to expose their features and functionality. It provide Drupal plugins that allow others to create their own authentication mechanisms, request formats, and response formats. Also see: <http://cmsproducer.com/generate-how-to-drupal-node-XML-XHTML>
Drupal Views: Generate xml file
[ "", "php", "drupal", "drupal-6", "drupal-views", "" ]
It seems like it should be a really easy topic; all the examples everywhere are just a couple of lines with no decent explanations, and thus I keep running into the same error without a solution. In short, this part of the application runs like so:

1. Pull images from the db
2. Create actual image files in a temp folder
3. Create a pdf with the images inside of it
4. Now delete the images that were created

Everything works up until the delete. I keep getting the error

> **InnerException:**
> System.ArgumentException: URI formats are not supported.
> at System.IO.Path.NormalizePathFast(String path, Boolean fullCheck)...

I have tried a couple of different ways to accomplish the delete, the latest being:

```
foreach (string item in TempFilesList)
{
    path = System.Web.HttpContext.Current.Application["baseWebDomainUrl"] + "/temp/" + item;
    fileDel = new FileInfo(path);
    fileDel.Delete();
}
```

and the try before that one was:

```
foreach (string item in TempFilesList)
{
    File.Delete(System.Web.HttpContext.Current.Application["baseWebDomainUrl"] + "/temp/" + item);
}
```

TempFilesList is an ArrayList containing the paths of the images to delete.
You should try calling `Server.MapPath(path)` to get the "real" path to the file. Pass that to `File.Delete`, and it should work (assuming file permissions etc. are correct). So for example:

```
foreach (string item in TempFilesList)
{
    path = System.Web.HttpContext.Current.Application["baseWebDomainUrl"] + "/temp/" + item;
    path = Server.MapPath(path);
    fileDel = new FileInfo(path);
    fileDel.Delete();
}
```
You need the actual file path of the file that you've created, not the URL of the path that you've created. Your code creates a path that looks something like "<http://www.mywebsite.com/location/temp/filename.jpg>". You need something that looks like "C:\MyWorkingFolder\filename.jpg". I would recommend against using Server.MapPath, however. Since you are creating the files yourself in your own code, you control the location where the file is being created. Use that instead. Store it as an AppSettings key in your web.config. For example:

```
string basePath = ConfigurationManager.AppSettings["PdfGenerationWorkingFolder"];

foreach (string item in TempFilesList)
{
    File.Delete(basePath + item);
}
```
Deleting a file in C#
[ "", "c#", "asp.net", "file", "" ]
How would one go about implementing a "who's online" feature using PHP? Of course, it would involve using timestamps, and, after looking at phpBB's session table, might involve storing latest visits in a database. Is this an efficient method, or are there better ways of implementing this idea? **Edit**: I made this community wiki accidentally, because I was still new to Stack Overflow at the time.
Using a database to keep track of everyone who's logged in is pretty much the only way to do this. What I would do is insert a row with the user info and a timestamp into the table when someone logs in, and update the timestamp every time there is activity from that user. Then I would assume that all users who have had activity in the past 5 minutes are currently online.
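To make the insert-or-refresh idea concrete, here is a rough sketch. It is written in Python with SQLite just to keep it self-contained; the table and column names are invented for illustration, and a real deployment would issue the equivalent queries from PHP against MySQL.

```python
import sqlite3
import time

# Hypothetical schema: one row per user, refreshed on every request.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE online_users (user_id INTEGER PRIMARY KEY, last_seen INTEGER)"
)

def touch(user_id):
    """Insert a row on first activity; update the timestamp on later activity."""
    now = int(time.time())
    conn.execute(
        "INSERT INTO online_users (user_id, last_seen) VALUES (?, ?) "
        "ON CONFLICT(user_id) DO UPDATE SET last_seen = excluded.last_seen",
        (user_id, now),
    )

def whos_online(window_seconds=300):
    """Anyone active within the window (5 minutes here) counts as online."""
    cutoff = int(time.time()) - window_seconds
    rows = conn.execute(
        "SELECT user_id FROM online_users WHERE last_seen >= ? ORDER BY user_id",
        (cutoff,),
    )
    return [row[0] for row in rows]
```

The same two statements (an upsert on activity, a range query on the timestamp) are all the feature needs, whatever the storage engine.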
Depending on the way you implement (and if you implement) sessions, you could use the same storage media to get the number of active users. For example if you use the file-based session model, simply scan the directory which contains the session files and return the number of session files. If you are using database to store session data, return the number of rows in the session table. Of course this is supposing that you are happy with the timeout value your session has (ie. if your session has a timeout of 30 minutes, you will get a list of active users in the last 30 minutes).
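For the file-based session model described above, the "count the session files" step is nearly a one-liner. A sketch in shell (the directory is created here just for the demo; in practice you'd point it at PHP's `session.save_path`, and the 30-minute window should match your session timeout):

```shell
# Demo directory standing in for PHP's session.save_path
SESSION_DIR=$(mktemp -d)
touch "$SESSION_DIR/sess_abc" "$SESSION_DIR/sess_def"

# Count session files modified within the last 30 minutes
ONLINE=$(find "$SESSION_DIR" -name 'sess_*' -mmin -30 | wc -l)
echo "online: $ONLINE"
```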
How to implement a "who's online" feature in PHP?
[ "", "php", "mysql", "timestamp", "connection", "membership", "" ]
I'm having a problem getting CodeIgniter to work on my shared hosting account. The URL is <http://test.tallgreentree.com>. It's not giving me a .php error, but it is displaying a 404 page for everything I type into the address bar. Here's the beginning of my config.php file.

```
<?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');
/*
|--------------------------------------------------------------------------
| Base Site URL
|--------------------------------------------------------------------------
|
| URL to your CodeIgniter root. Typically this will be your base URL,
| WITH a trailing slash:
|
| http://example.com/
|
*/
$config['base_url'] = "http://test.tallgreentree.com/";

/*
|--------------------------------------------------------------------------
| Index File
|--------------------------------------------------------------------------
|
| Typically this will be your index.php file, unless you've renamed it to
| something else. If you are using mod_rewrite to remove the page set this
| variable so that it is blank.
|
*/
$config['index_page'] = "index.php";

/*
|--------------------------------------------------------------------------
| URI PROTOCOL
|--------------------------------------------------------------------------
|
| This item determines which server global should be used to retrieve the
| URI string. The default setting of "AUTO" works for most servers.
| If your links do not seem to work, try one of the other delicious flavors:
|
| 'AUTO'            Default - auto detects
| 'PATH_INFO'       Uses the PATH_INFO
| 'QUERY_STRING'    Uses the QUERY_STRING
| 'REQUEST_URI'     Uses the REQUEST_URI
| 'ORIG_PATH_INFO'  Uses the ORIG_PATH_INFO
|
*/
$config['uri_protocol'] = "AUTO";
```

Are there known issues when using CodeIgniter with subdomains? What could be causing this? I've tried multiple configurations, but nothing seems to be working for me. What server settings should I check with my hosting provider? Thank you all for your time and assistance.
Change

```
$config['uri_protocol'] = "AUTO";
```

to

```
$config['uri_protocol'] = "REQUEST_URI";
```

and see if that fixes it. Also, is the controller class name the same as the file name? filename = test.php

```
class Test ...
```
I got this working with "REQUEST_URI", but then I can only hit my home page. All the other links (to the controller) always refresh to the home page. It's kind of weird that it shows the correct URL but takes you to the home page no matter what URL you click. Any ideas? I do have a .htaccess file:

```
RewriteEngine on
RewriteBase /testci
RewriteCond %{REQUEST_URI} ^system.*
RewriteRule ^(.*)$ /index.php/$1 [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php/$1 [L]
RewriteCond $1 !^(index.php|images|captcha|css|js|robots.txt)
AddType text/css .css
```
CodeIgniter Install Problem
[ "", "php", "apache", "codeigniter", "frameworks", "shared-hosting", "" ]
Let's say I have this table:

```
id  colorName
1   red
2   blue
3   red
4   blue
```

How can I select one representative of each color?
Not random representatives, but... ``` select color, min(id) from mytable group by color; ```
In `MS SQL Server` and `Oracle`:

```
SELECT id, colorName
FROM (
    SELECT id, colorName,
           ROW_NUMBER() OVER (PARTITION BY colorName ORDER BY id) AS rn
    FROM colors
) q
WHERE rn = 1
```
SQL query to select one of each kind
[ "", "sql", "" ]
I am implementing a shopping cart for my website, using a pseudo-AJAX Lightbox-esque effect. (It doesn't actually call the server between requests -- everything is just Prototype magic to update the displayed values.) There is also semi-graceful fallback behavior for users without Javascript: if they click add to cart they get taken to an (offsite, less-desirable-interaction) cart. However, a user with Javascript enabled who loads the page and then immediately hits add to cart gets whisked away from the page, too. I'd like to have the Javascript just delay them for a while, then execute the show cart behavior once it is ready. In the alternative, just totally ignoring clicks before the Javascript is ready is probably viable too. Any suggestions?
I now do this with jQuery because I vaguely recall browser differences which jQuery takes care of. Try:

```
$(document).ready(function() {
    // put all your jQuery goodness in here.
});
```
Is your code really that slow that this is an issue? I'd be willing to bet that no one is going to be buying your product that soon after loading the page. In any reasonable case, the user will wait for the page to load before interacting with it, especially for something like a purchase. But to answer your original question, you can disable the links in normal code, then reenable them using a `document.observe("dom:loaded", function() { ... })` call.
How to disable AJAX-y links before page Javascript ready to handle them?
[ "", "javascript", "prototype", "shopping-cart", "" ]
Is there an inbuilt PHP function to replace multiple values inside a string with an array that dictates exactly what is replaced with what? For example:

```
$searchreplace_array = array('blah' => 'bleh', 'blarh' => 'blerh');
$string = 'blah blarh bleh bleh blarh';
```

And the resulting string would be: 'bleh blerh bleh bleh blerh'.
You are looking for [`str_replace()`](http://www.php.net/str_replace).

```
$string = 'blah blarh bleh bleh blarh';
$result = str_replace(
    array('blah', 'blarh'),
    array('bleh', 'blerh'),
    $string
);
```

**// Additional tip:** And if you are stuck with an associative array like in your example, you can split it up like that:

```
$searchReplaceArray = array(
    'blah' => 'bleh',
    'blarh' => 'blerh'
);
$result = str_replace(
    array_keys($searchReplaceArray),
    array_values($searchReplaceArray),
    $string
);
```
```
$string = 'blah blarh bleh bleh blarh';
$trans = array("blah" => "bleh", "blarh" => "blerh");
$result = strtr($string, $trans);
```

You can check the [manual](http://php.net/manual/en/function.strtr.php) for a detailed explanation.
Search and replace multiple values with multiple/different values in PHP?
[ "", "php", "arrays", "string", "replace", "" ]
Here is the test URL: <http://edventures.com/temp/divtest.php>

Procedure:

1. Close all IE instances.
2. Open the URL in IE7.
3. Open the task manager and look for the memory consumed by IE.
4. Now click on the Create button.
5. Watch the memory: it will jump up by about 2K.
6. Now click on the Destroy button; the DIV will be destroyed but the memory remains the same.
7. You can try it repeatedly and the memory just adds up.

Is there any way to fix this? Any way to call the garbage collector forcefully without reloading the window? I am under the assumption that when I remove the DIV the memory will be freed, but it does not seem to work that way. Please let me know of any fix for this. Thanks for your help. Suhas
Here's how to create DOM elements and prevent memory leaks in IE.

```
function createDOMElement(tagName) {
    var el = document.createElement(tagName);
    try {
        return el;
    } finally {
        el = null;
    }
}
```

You can use variations of the try/finally trick to prevent the leaks when doing other DOM operations.
Yeah - IE has some awful memory leaks. Check out [IE Drip](http://www.outofhanwell.com/ieleak/index.php?title=Main_Page) - you basically have to design your pages so that they don't do what makes IE leak like this. This is part of the reason why IE is so loathed. To avoid IE leaking you have to be very careful with how you add HTML elements to the page, especially tables. Be especially careful with non-HTML 3.2 attributes - IE7 is still basically IE4 and attributes external to the old HTML specs is where it tends to go wrong.
IE and Memory accumulation in Javascript
[ "", "javascript", "internet-explorer", "memory", "" ]
I have around 10 buttons on my form and I want them to call the same Click event handler. But for that I need the event handler to be generalized, and we don't have the "this" keyword in VB.NET to refer to the control that caused the event. How do I implement the functionality of the "this" keyword in VB.NET? I want to be able to write an event handler that does the following:

```
Private Sub cmdButton1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmdButton1.Click
    currentTag = this.Tag
End Sub
```
> How do I implement the functionality of "this" keyword in VB.NET? `this` is called `Me` in VB. However, this has got nothing to do with your code and refers to the containing class, in your case most probably the current `Form`. You need to access the `sender` object parameter, after casting it to `Control`: ``` currentTag = DirectCast(sender, Control).Tag ```
In [VB.NET](http://en.wikipedia.org/wiki/Visual_Basic_.NET), `Me` is the equivalent to C#'s `this`.
Common Event Handlers in VB.NET
[ "", "c#", "vb.net", "events", "event-handling", "this", "" ]
I want to use the __doPostBack function in my link. When the user clicks it, it won't redirect to another page; the page will post back. I am using this code but it doesn't work. What am I missing?

```
<a id="Sample" href="javascript:__doPostBack('__PAGE','');">

function __doPostBack(eventTarget, eventArgument) {
    var theform = document.ctrl2
    theform.__EVENTTARGET.value = eventTarget
    theform.__EVENTARGUMENT.value = eventArgument
    theform.submit()
}
```
Try this:

```
System.Web.UI.HtmlControls.HtmlAnchor myAnchor = new System.Web.UI.HtmlControls.HtmlAnchor();
string postbackRef = Page.GetPostBackEventReference(myAnchor);
myAnchor.HRef = postbackRef;
```
`__doPostBack` is an auto-generated function that ensures that the page posts back to the server to maintain page state. It's not meant to be used for redirection... You could either use `window.location.href="yourpage.aspx"` in JavaScript, or `Response.Redirect("yourpage.aspx")` at server side on the page where you are doing the postback.
Using doPostBack Function in asp.net
[ "", "javascript", "asp.net", "" ]
I have a database of components. Each component is of a specific type. That means there is a many-to-one relationship between a component and a type. When I delete a type, I would like to delete all the components which has a foreign key of that type. But if I'm not mistaken, cascade delete will delete the type when the component is deleted. Is there any way to do what I described?
Here's what you'd include in your components table.

```
CREATE TABLE `components` (
    `id` int(10) unsigned NOT NULL auto_increment,
    `typeId` int(10) unsigned NOT NULL,
    `moreInfo` VARCHAR(32),
    -- etc
    PRIMARY KEY (`id`),
    KEY `type` (`typeId`),
    CONSTRAINT `myForeignKey` FOREIGN KEY (`typeId`) REFERENCES `types` (`id`)
        ON DELETE CASCADE
        ON UPDATE CASCADE
)
```

Just remember that you need to use the InnoDB storage engine: the default MyISAM storage engine doesn't support foreign keys.
Use this SQL:

```
DELETE T1, T2
FROM T1
INNER JOIN T2 ON T1.key = T2.key
WHERE condition
```
How do I use on delete cascade in mysql?
[ "", "mysql", "sql", "foreign-keys", "cascade", "" ]
I'm looking for a C# library, preferably open source, that will let me schedule tasks with a fair amount of flexibility. Specifically, I should be able to schedule things to run every N units of time as well as "Every weekday at XXXX time" or "Every Monday at XXXX time". More features than that would be nice, but not necessary. This is something I want to use in an Azure WorkerRole, which immediately rules out Windows Scheduled Tasks, "at", "Cron", and any third party app that requires installation and/or a GUI to operate. I'm looking for a library.
[http://quartznet.sourceforge.net/](http://quartznet.sourceforge.net) "Quartz.NET is a port of very propular(sic!) open source Java job scheduling framework, Quartz." PS: Word to the wise, don't try to just navigate to quartz.net when at work ;-)
I used Quartz back in my Java days and it worked great. I am now using it for some .Net work and it works even better (of course there are a number of years in there for it to have stabilized). So I certainly second the recommendations for it. Another interesting thing you should look at, which I have just begun to play with, is the new System.Threading.Tasks in .Net 4.0. I've just been using the tasks for parallelizing work, and it takes great advantage of multiple cores/processors. I noticed that there is a class in there named TaskScheduler; I haven't looked at it in detail, but it has methods like QueueTask, DequeueTask, etc. Might be worth some investigation at least.
Recommend a C# Task Scheduling Library
[ "", "c#", "azure", "scheduled-tasks", "" ]
I fear that some of my code is causing memory leaks, and I'm not sure about how to check it. Is there a tool or something for MacOS X? Thank you
Apple has a good description of how to use MallocDebug on OS X on their developer pages. * document on [finding leaks](http://developer.apple.com/documentation/Performance/Conceptual/ManagingMemory/Articles/FindingLeaks.html) in general * [enabling debug features of malloc](http://developer.apple.com/documentation/Performance/Conceptual/ManagingMemory/Articles/MallocDebug.html) in particular.
Yes - there's an application called **MallocDebug** which is installed as part of the Xcode package. You can find it in the `/Developer/Applications/Performance Tools` folder.
Is there a way to monitor heap usage in C++/MacOS?
[ "", "c++", "macos", "memory-leaks", "heap-memory", "" ]
I'm trying to write a simple chat client in JavaScript, but I need a way to update messages in real time. While I could use the XMLHttpRequest function, I believe it uses up a TCP/IP port on the server and, possibly more importantly, is not allowed on my current hosting package. It doesn't seem like an ideal solution anyway, as it seems a bit hacky to constantly have an open connection; it would be a lot easier if I could just listen on the port and take the data as it comes. I looked on the internet and found lots of references to Comet and continuous polling, which are unsatisfactory, and lots of people say that JavaScript isn't really suited to it, which I can agree with. Now that I've actually learned a bit more about how the internet works, however, it seems feasible. I don't need to worry about sending messages so far; I can deal with that. But is there any way to listen on a certain port in JavaScript?
Have you considered perhaps building your app in Flex ? You could make use of Flex's [XMLSocket](http://livedocs.adobe.com/flash/9.0/ActionScriptLangRefV3/flash/net/XMLSocket.html) class to implement a low-latency chat client - pretty much the sort of thing it was designed to do
Listening on a port is not possible in Javascript. But: XmlHTTPRequest is possible on your host, as it is a simple HTTP request for a special site like `chat.php?userid=12&action=poll&lasttime=31251` where the server prints all new messages since lasttime as the result.
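To make the polling idea above concrete, here is a rough sketch. The endpoint and parameter names mirror the hypothetical `chat.php` URL in the answer and are only illustrative; `fetchMessages` stands in for whatever XMLHttpRequest wrapper you use.

```javascript
// Build the poll URL from the newest message timestamp we have seen.
// Endpoint and parameter names are hypothetical.
function buildPollUrl(userId, lastTime) {
  return "chat.php?userid=" + encodeURIComponent(userId) +
         "&action=poll&lasttime=" + encodeURIComponent(lastTime);
}

// Simple polling loop: ask the server for messages newer than lastTime,
// hand them to the UI, then schedule the next poll.
function startPolling(userId, fetchMessages, onMessages, intervalMs) {
  var lastTime = 0;
  function poll() {
    fetchMessages(buildPollUrl(userId, lastTime), function (messages, newest) {
      if (messages.length > 0) {
        onMessages(messages);
        lastTime = newest; // only ask for newer messages next time
      }
      setTimeout(poll, intervalMs);
    });
  }
  poll();
}
```

The server side then just prints every message with a timestamp greater than `lasttime`, so each poll transfers only what's new.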
port listening in javascript
[ "", "javascript", "ajax", "comet", "ports", "reverse-ajax", "" ]
I come from a c# background where everything has its own namespace, but this practice appears to be uncommon in the c++ world. Should I wrap my code in it's own namespace, the unnamed namespace, or no namespace?
Many C++ developers do not use namespaces, sadly. When I started with C++, I didn't use them for a long time, until I came to the conclusion that I could do better using namespaces. Many libraries work around namespaces by putting prefixes before names. For example, wxWidgets puts the characters "wx" before everything. Qt puts "Q" before everything. There's nothing really wrong with that, but it requires you to type that prefix over and over again, even when it can be deduced from the context which declarations you mean. Namespaces have a hierarchic order. Names that are lexically closer to the point that references them are found earlier. So if you reference "Window" within your GUI framework, it will find "my::gui::Window" instead of "::Window". Namespaces enable some nice features that can't be used without them. For example, if you put your class into a namespace, you can define free functions within that namespace. You then call the function without putting the namespace in front, by importing all names, or selectively only some of them, into the current scope ("using declaration"). Nowadays, I don't do any project anymore without using them. They make it so easy not to type the same prefix over and over again, while still having good organization and avoiding name pollution of the global namespace.
It depends. If your code is library code, please wrap it in namespaces; that is the practice in C++. If your code is only a very simple application that doesn't interact with anything else, like a hello-world sort of app, there is no need for namespaces, because it's redundant. And since namespaces aren't required, the code snippets and examples on the web rarely use them, but most real projects do.
Should I wrap all my c++ code in its own namespace?
[ "", "c++", "namespaces", "" ]
Are there any libraries which exist for accessing audio (mp3 wmw) metadata using the .net compact framework?
Have you looked at [OpenNETCF](http://www.opennetcf.com): <http://www.opennetcf.com/library/sdf/html/f3fc3169-4143-54bc-1594-186da1fb01c2.htm> ?
or maybe <http://home.fuse.net/honnert/hundred/?UltraID3Lib>
.net compact framework accessing audio metadata
[ "", "c#", ".net", "compact-framework", "" ]
Are there any declaration keywords in Python, like `local`, `global`, `private`, `public` etc.? I know that variable types are not specified in Python; but how do you know if the code `x = 5` creates a new variable, or sets an existing one?
I really like the understanding that Van Gale is providing, but it doesn't really answer the question of "how do you know if the statement `x = 5` creates a new variable or sets an existing variable?"

If you want to know how to recognize it when looking at code, you simply look for a previous assignment. Avoid global variables, which is good practice anyway, and you'll be all set.

Programmatically, you could try to reference the variable and see if you get a `NameError` exception:

```
try:
    x
except NameError:
    pass  # x doesn't exist, do something
else:
    pass  # x exists, do something else
```

I've never needed to do this... and I doubt you will really need to either.

## soapbox alert !!!

Even though Python looks kinda loosey-goosey to someone who is used to having to type the class name (or type) over and over and over... it's actually exactly as strict as you want to make it. If you want strict types, you would do it explicitly:

```
assert isinstance(variable, type)
```

Decorators exist to do this in a very convenient way for function calls... Before long, you might just come to the conclusion that static type checking (at compile time) doesn't actually make your code that much better. There's only a small benefit for the cost of having redundant type information all over the place. I'm currently working in ActionScript, typing things like:

```
var win:ThingPicker = PopUpManager.createPopUp(fEmotionsButton, ThingPicker, false) as ThingPicker;
```

which in Python would look like:

```
win = createPopup(parent, ThingPicker)
```

And I can see, looking at the ActionScript code, that there's simply no benefit to the static type checking. The variable's lifetime is so short that I would have to be completely drunk to do the wrong thing with it... and have the compiler save me by pointing out a type error.
An important thing to understand about Python is there are no variables, only "names". In your example, you have an object `5` and you are creating a name `x` that references the object `5`. If later you do:

```
x = "Some string"
```

that is still perfectly valid. The name `x` is now pointing to the object `"Some string"`. It's not a conflict of types because the name itself doesn't have a type; only the object does. If you try `x = 5 + "Some string"` you will get a type error because you can't add two incompatible types. In other words, it's not type-free. Python objects are strongly typed. Here are some very good discussions about Python typing:

* [Strong Typing vs. Strong Testing](http://mindview.net/WebLog/log-0025)
* [Typing: Strong vs. Weak, Static vs. Dynamic](http://www.artima.com/weblogs/viewpost.jsp?thread=7590)

**Edit**: to finish tying this in with your question, a name can reference an existing object or a new one.

```
# Create a new int object
>>> x = 500

# Another name to same object
>>> y = x

# Create another new int object
>>> x = 600

# y still references original object
>>> print y
500

# This doesn't update x, it creates a new object and x becomes
# a reference to the new int object (which is int because that
# is the defined result of adding two int objects).
>>> x = x + y
>>> print x
1100

# Make original int object 500 go away
>>> del y
```

**Edit 2**: The most complete discussion of the difference between mutable objects (that can be changed) and immutable objects (that cannot be changed) is in the official documentation of the [Python Data Model](http://docs.python.org/reference/datamodel.html).
Are there any declaration keywords in Python?
[ "", "python", "variables", "" ]
I've been trying to optimize my code to make it a little more concise and readable, and was hoping I wasn't causing poorer performance by doing it. I think my changes might have slowed down my application, but it might just be in my head. Is there any performance difference between:

```
Command.Parameters["@EMAIL"].Value = email ?? String.Empty;
```

and

```
Command.Parameters["@EMAIL"].Value = (email == null) ? String.Empty : email;
```

and

```
if (email == null)
{
    Command.Parameters["@EMAIL"].Value = String.Empty;
}
else
{
    Command.Parameters["@EMAIL"].Value = email;
}
```

My preference for readability would be the null coalescing operator; I just didn't want it to affect performance.
IMHO, optimize for readability and understanding - any run-time performance gains will likely be minimal compared to the time it takes you in the real-world when you come back to this code in a couple months and try to understand what the heck you were doing in the first place.
You are trying to [micro-optimize](http://www.codinghorror.com/blog/2009/01/the-sad-tragedy-of-micro-optimization-theater.html) here, and that's generally a big no-no. Unless you have performance analytics which are showing you that this is an issue, it's not even worth changing. For general use, the correct answer is whatever is easier to maintain.

For the hell of it though, the IL for the null coalescing operator is:

```
L_0001: ldsfld string ConsoleApplication2.Program::myString
L_0006: dup
L_0007: brtrue.s L_000f
L_0009: pop
L_000a: ldsfld string [mscorlib]System.String::Empty
L_000f: stloc.0
```

And the IL for the switch is:

```
L_0001: ldsfld string ConsoleApplication2.Program::myString
L_0006: brfalse.s L_000f
L_0008: ldsfld string ConsoleApplication2.Program::myString
L_000d: br.s L_0014
L_000f: ldsfld string [mscorlib]System.String::Empty
L_0014: stloc.0
```

For the [null coalescing operator](http://msdn.microsoft.com/en-us/library/ms173224.aspx), if the value is `null`, then six of the statements are executed, whereas with the [`switch`](http://msdn.microsoft.com/en-us/library/06tc147t%28v=VS.100%29.aspx), four operations are performed. In the case of a not `null` value, the null coalescing operator performs four operations versus five operations.

Of course, this assumes that all IL operations take the same amount of time, which is not the case. Anyways, hopefully you can see how optimizing on this micro scale can start to diminish returns pretty quickly.

That being said, in the end, for most cases whatever is the easiest to read and maintain in this case is the right answer. If you find you are doing this on a scale where it proves to be inefficient (and those cases are few and far between), then you should measure to see which has a better performance and then make that specific optimization.
?: Operator Vs. If Statement Performance
[ "", "c#", ".net", "performance", "if-statement", "operators", "" ]
I have a page with a dynamically created JavaScript (the script is pretty static really, but the values of its variables are filled in based on user input). The result and the controls that take user input are inside an UpdatePanel which updates itself on certain user inputs. Some of these user inputs cause changes in the variables I spoke of earlier, so I need to register a new JavaScript. The problem, of course, is that only the UpdatePanel gets updated, and the scripts are registered outside the UpdatePanel, so no new scripts are added. What do you think would be best practice here? I could solve this by letting this script (and its variables) live *inside* the UpdatePanel, or I could make sure the page is fully reloaded when the need for registering a new JavaScript arises. The ScriptManager that I already have on the page might be able to help me with this... So I'm looking for someone who has either had similar problems and solved them in a nice way, or just someone with some bright ideas :)
Why not just put the variables inside the update panel, or have the JavaScript get the values through the DOM?
Have a look at [`ScriptManager.RegisterClientScriptBlock`](http://msdn.microsoft.com/en-us/library/bb350750.aspx).
Registering a dynamic javascript after an UpdatePanel's update
[ "", "asp.net", "javascript", "updatepanel", "" ]
OK, I need to design a way to keep track of how many of each item exists. There are approximately 26 items. I also need a way to find out if a certain combination of items exists. For example: this is an engine for a card game. Each card has a different type, and each card can have cards attached to it. There needs to be a certain combination of cards attached to a card for the player to do certain things in the game. To make this program streamlined, I would like to do something like ``` if (meetsCrit(2, water, 4, ground)) { do this() } else { displayerror() } ``` **EDIT:** SOLVED! I used a combination of techniques described in a few posts below. Special mention to: Jon Skeet, Rinat Abdullin, Frank. Anyway, here is what I did: I made a class called Pair which stores the type I'm looking for and the number of that type. Then I used a Predicate delegate to find all of that type and count how many there are. Then I compared it to the number I was searching for and returned true or false respectively. This is the code for it ``` public bool meetsCrit(params Pair[] specs) { foreach (Pair i in specs) { if (!(attached.FindAll(delegate(Card c) { return c.type == i.type; }).Count >= i.value)) { return false; } } return true; } ```
I would take a Predicate which can be evaluated against your data source and then evaluate that, returning the result from the Predicate.
Use [params](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/params): > The **params** keyword lets you specify a [method parameter](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/method-parameters) that takes an argument where the number of arguments is variable. No additional parameters are permitted after the **params** keyword in a method declaration, and only one **params** keyword is permitted in a method declaration...
How to pass an arbitrary number of parameters in C#
[ "", "c#", "arrays", "parameters", "" ]
We're trying to use an external file (txt or CSV) in order to create a file stream in C#. The data in the file is that of a quiz game, made of: 1 short question, 4 possible answers, 1 correct answer. The program should be able to tell the user whether he answered correctly or not. I'm looking for example code, an algorithm, or a tutorial on how to use the data in the external file to create a simple quiz in C#. Also, any suggestions on how to construct the txt file (how do I mark an answer as the correct one?)? Any suggestions or links? Thanks,
There's really no set way to do this, though I would agree that for a simple database of quiz questions, text files would probably be your best option (as opposed to XML or a proper database, though the former wouldn't be completely overkill). Here's a little example of a text-based format for a set of quiz questions, and a method to read the questions into code. **Edit:** I've tried to make it as easy as possible to follow now (using simple constructions), with plenty of comments! ## File Format Example file contents. ``` Question text for 1st question... Answer 1 Answer 2 !Answer 3 (correct answer) Answer 4 Question text for 2nd question... !Answer 1 (correct answer) Answer 2 Answer 3 Answer 4 ``` ## Code This is just a simple structure for storing each question in code: ``` struct Question { public string QuestionText; // Actual question text. public string[] Choices; // Array of answers from which user can choose. public int Answer; // Index of correct answer within Choices. } ``` You can then read the questions from the file using the following code. There's nothing special going on here other than the object initializer (basically this just allows you to set variables/properties of an object at the same time as you create it). ``` // Create new list to store all questions. var questions = new List<Question>(); // Open file containing quiz questions using StreamReader, which allows you to read text from files easily. using (var quizFileReader = new System.IO.StreamReader("questions.txt")) { string line; Question question; // Loop through the lines of the file until there are no more (the ReadLine function return null at this point). // Note that the ReadLine called here only reads question texts (first line of a question), while other calls to ReadLine read the choices. while ((line = quizFileReader.ReadLine()) != null) { // Skip this loop if the line is empty. if (line.Length == 0) continue; // Create a new question object. 
// The "object initializer" construct is used here by including { } after the constructor to set variables. question = new Question() { // Set the question text to the line just read. QuestionText = line, // Set the choices to an array containing the next 4 lines read from the file. Choices = new string[] { quizFileReader.ReadLine(), quizFileReader.ReadLine(), quizFileReader.ReadLine(), quizFileReader.ReadLine() } }; // Initially set the correct answer to -1, which means that no choice marked as correct has yet been found. question.Answer = -1; // Check each choice to see if it begins with the '!' char (marked as correct). for(int i = 0; i < 4; i++) { if (question.Choices[i].StartsWith("!")) { // Current choice is marked as correct. Therefore remove the '!' from the start of the text and store the index of this choice as the correct answer. question.Choices[i] = question.Choices[i].Substring(1); question.Answer = i; break; // Stop looking through the choices. } } // Check if none of the choices was marked as correct. If this is the case, we throw an exception and then stop processing. // Note: this is only basic error handling (not very robust) which you may want to later improve. if (question.Answer == -1) { throw new InvalidOperationException( "No correct answer was specified for the following question.\r\n\r\n" + question.QuestionText); } // Finally, add the question to the complete list of questions. questions.Add(question); } } ``` Of course, this code is rather quick and basic (certainly needs some better error handling), but it should at least illustrate a simple method you might want to use. I do think text files would be a nice way to implement a simple system such as this because of their human readability (XML would be a bit too verbose in this situation, IMO), and additionally they're about as easy to parse as XML files. Hope this gets you started anyway...
My recommendation would be to use an XML file if you must load your data from a file (as opposed to from a database). Using a text file would require you to pretty clearly define structure for individual elements of the question. Using a CSV could work, but you'd have to define a way to escape commas within the question or answer itself. It might complicate matters. So, to reiterate, IMHO, an XML is the best way to store such data. Here is a short sample demonstrating the possible structure you might use: ``` <?xml version="1.0" encoding="utf-8" ?> <Test> <Problem id="1"> <Question>Which language am I learning right now?</Question> <OptionA>VB 7.0</OptionA> <OptionB>J2EE</OptionB> <OptionC>French</OptionC> <OptionD>C#</OptionD> <Answer>OptionA</Answer> </Problem> <Problem id="2"> <Question>What does XML stand for?</Question> <OptionA>eXtremely Muddy Language</OptionA> <OptionB>Xylophone, thy Music Lovely</OptionB> <OptionC>eXtensible Markup Language</OptionC> <OptionD>eXtra Murky Lungs</OptionD> <Answer>OptionC</Answer> </Problem> </Test> ``` As far as loading an XML into memory is concerned, .NET provides many intrinsic ways to handle XML files and strings, many of which completely obfuscate having to interact with FileStreams directly. For instance, the `XmlDocument.Load(myFileName.xml)` method will do it for you internally in one line of code. Personally, though I prefer to use `XmlReader` and `XPathNavigator`. Take a look at the members of the [System.Xml namespace](http://msdn.microsoft.com/en-us/library/system.xml(VS.80).aspx) for more information.
C# file stream - build a quiz
[ "", "c#", "csv", "text-files", "" ]
What would be the cleanest way of doing this that would work in both IE and Firefox? My string looks like this `sometext-20202` Now the `sometext` and the integer after the dash can be of varying length. Should I just use `substring` and index of or are there other ways?
How I would do this: ``` // function you can use: function getSecondPart(str) { return str.split('-')[1]; } // use the function: alert(getSecondPart("sometext-20202")); ```
A solution I prefer would be: ``` const str = 'sometext-20202'; const slug = str.split('-').pop(); ``` Where `slug` would be your result
Get everything after the dash in a string in JavaScript
[ "", "javascript", "" ]
I'm currently trying out some questions just to practice my programming skills. ( Not taking it in school or anything yet, self taught ) I came across this problem which required me to read in a number from a given txt file. This number would be N. Now I'm suppose to find the Nth prime number for N <= 10 000. After I find it, I'm suppose to print it out to another txt file. Now for most parts of the question I'm able to understand and devise a method to get N. The problem is that I'm using an array to save previously found prime numbers so as to use them to check against future numbers. Even when my array was size 100, as long as the input integer was roughly < 15, the program crashes. ``` #include <cstdio> #include <iostream> #include <cstdlib> #include <fstream> using namespace std; int main() { ifstream trial; trial.open("C:\\Users\\User\\Documents\\trial.txt"); int prime; trial >> prime; ofstream write; write.open("C:\\Users\\User\\Documents\\answer.txt"); int num[100], b, c, e; bool check; b = 0; switch (prime) { case 1: { write << 2 << endl; break; } case 2: { write << 3 << endl; break; } case 3: { write << 5 << endl; break; } case 4: { write << 7 << endl; break; } default: { for (int a = 10; a <= 1000000; a++) { check = false; if (((a % 2) != 0) && ((a % 3) != 0) && ((a % 5) != 0) && ((a % 7) != 0)) // first filter { for (int d = 0; d <= b; d++) { c = num[d]; if ((a % c) == 0) { check = true; // second filter based on previous recorded primes in array break; } } if (!check) { e = a; if (b <= 100) { num[b] = a; } b = b + 1; } } if ((b) == (prime - 4)) { write << e << endl; break; } } } } trial.close(); write.close(); return 0; } ``` I did this entirely base on my dummies guide and myself so do forgive some code inefficiency and general newbie-ness of my algorithm. Also for up to 15 it displays the prime numbers correctly. Could anyone tell me how I should go about improving this current code? I'm thinking of using a txt file in place of the array. 
Is that possible? Any help is appreciated.
``` #include <fstream> using namespace std; int main() { ifstream trial("C:\\Users\\User\\Documents\\trial.txt"); int prime; trial >> prime; ofstream write("C:\\Users\\User\\Documents\\answer.txt"); int num[10000]; /* primes found so far */ int found = 0; for (int candidate = 2; found < prime && candidate <= 1000000; candidate++) { bool composite = false; for (int i = 0; i < found; i++) { if (candidate % num[i] == 0) { composite = true; /* divisible by an earlier prime */ break; } } if (!composite) { num[found] = candidate; found++; if (found == prime) { write << candidate << endl; } } } trial.close(); write.close(); return 0; } ``` This is the finalized version based on my original code. It works correctly, and if you want to handle larger prime indices, simply increase the array size. Thanks for the help =)
Since your question is about programming rather than math, I will try to keep my answer that way too. The first glance of your code makes me wonder what on earth you are doing here... If you read the answers, you will realize that some of them didn't bother to understand your code, and some just dump your code to a debugger and see what's going on. Is it that we are that impatient? Or is it simply that your code is too difficult to understand for a relatively easy problem? To improve your code, try ask yourself some questions: 1. What are `a`, `b`, `c`, etc? Wouldn't it better to give more meaningful names? 2. What exactly is your algorithm? Can you write down a clearly written paragraph in English about what you are doing (in an exact way)? Can you modify the paragraph into a series of steps that you can mentally carry out on any input and can be sure that it is correct? 3. Are all steps necessary? Can we combine or even eliminate some of them? 4. What are the steps that are easy to express in English but require, say, more than 10 lines in C/C++? 5. Does your list of steps have any structures? Loops? Big (probably repeated) chunks that can be put as a single step with sub-steps? After you have going through the questions, you will probably have a clearly laid out pseudo-code that solves the problem, which is easy to explain and understand. After that you can implement your pseudo-code in C/C++, or, in fact, any general purpose language.
Prime numbers program
[ "", "c++", "primes", "" ]
> **Possible Duplicate:** > [Can a JavaScript object have a prototype chain, but also be a function?](https://stackoverflow.com/questions/340383/can-a-javascript-object-have-a-prototype-chain-but-also-be-a-function) I'm looking to make a callable JavaScript object, with an arbitrary prototype chain, but without modifying Function.prototype. In other words, this has to work: ``` var o = { x: 5 }; var foo = bar(o); assert(foo() === "Hello World!"); delete foo.x; assert(foo.x === 5); ``` Without making any globally changes.
There's nothing to stop you from adding arbitrary properties to a function, e.g. ``` function bar(o) { var f = function() { return "Hello World!"; } o.__proto__ = f.__proto__; f.__proto__ = o; return f; } var o = { x: 5 }; var foo = bar(o); assert(foo() === "Hello World!"); delete foo.x; assert(foo.x === 5); ``` I believe that should do what you want. This works by injecting the object `o` into the prototype chain; however, there are a few things to note: * I don't know if IE supports `__proto__`, or even has an equivalent; from some comments, this looks to only work in Firefox- and Safari-based browsers (so Camino, Chrome, etc. work as well). * `o.__proto__ = f.__proto__;` is only really necessary for Function.prototype methods like function.toString, so you might want to just skip it, especially if you expect `o` to have a meaningful prototype.
> I'm looking to make a callable JavaScript object, with an arbitrary prototype chain, but without modifying Function.prototype. I don't think there's a portable way to do this: You must either set a function object's [[Prototype]] property or add a [[Call]] property to a regular object. The first one can be done via the non-standard `__proto__` property (see [olliej's answer](https://stackoverflow.com/questions/548487/how-do-i-make-a-callable-js-object-with-an-arbitrary-prototype/548589#548589)), the second one is impossible as far as I know. The [[Prototype]] can only portably be set during object creation via a constructor function's `prototype` property. Unfortunately, as far as I know there's no JavaScript implementation which would allow to temporarily reassign `Function.prototype`.
How do I make a callable JS object with an arbitrary prototype?
[ "", "javascript", "functional-programming", "" ]
I'm adding a new, "NOT NULL" column to my Postgresql database using the following query (sanitized for the Internet): ``` ALTER TABLE mytable ADD COLUMN mycolumn character varying(50) NOT NULL; ``` Each time I run this query, I receive the following error message: > ``` > ERROR: column "mycolumn" contains null values > ``` I'm stumped. Where am I going wrong? NOTE: I'm using pgAdmin III (1.8.4) primarily, but I received the same error when I ran the SQL from within Terminal.
You have to set a default value. ``` ALTER TABLE mytable ADD COLUMN mycolumn character varying(50) NOT NULL DEFAULT 'foo'; ... some work (set real values as you want)... ALTER TABLE mytable ALTER COLUMN mycolumn DROP DEFAULT; ```
As others have observed, you must either create a nullable column or provide a DEFAULT value. If that isn't flexible enough (e.g. if you need the new value to be computed for each row individually somehow), you can use the fact that in PostgreSQL, all DDL commands can be executed inside a transaction: ``` BEGIN; ALTER TABLE mytable ADD COLUMN mycolumn character varying(50); UPDATE mytable SET mycolumn = timeofday(); -- Just a silly example ALTER TABLE mytable ALTER COLUMN mycolumn SET NOT NULL; COMMIT; ```
How can I add a column that doesn't allow nulls in a Postgresql database?
[ "", "sql", "postgresql", "alter-table", "" ]
I am developing a console-based .NET application (using mono). I'm using asynchronous I/O (Begin/EndReceive). I'm in the middle of a callback chain several layers deep, and if an exception is thrown, it is not being trapped anywhere (having it bubble out to the console is what I would expect, as there is currently no exception handling). However, looking at the stack trace when I log it at the point where it occurs, the stack doesn't show it reaching back to the initial point-of-execution. I've tried the AppDomain.UnhandledException trick, but that doesn't work in this situation. ``` System.ArgumentOutOfRangeException: Argument is out of range. Parameter name: size at System.Net.Sockets.Socket.BeginReceive (System.Byte[] buffer, Int32 offset, Int32 size, SocketFlags socket_flags, System.AsyncCallback callback, System.Object state) [0x00000] at MyClass+State.BeginReceive () [0x00000] ```
I believe any error generated during an asynchronous call should be thrown upon calling the *EndAction* method (*EndReceive* in your case). At least, this is what I've experienced using the CLR (MSFT) implementation, and Mono should be doing the same thing, although it *may* perhaps be slightly buggy here (consider this unlikely, however). If you were in Visual Studio, I would recommend you turn on the option for catching all exceptions (in the Debug > Exceptions menu) - perhaps there is a similar option in whatever IDE you are using?
From the look of the stack, the exception is being thrown in the BeginReceive, so that particular I/O operation is not being initiated at all. The default behaviour (since CLR2.0) of an unhandled exception on a thread-pool thread is to terminate the process, so if you are not seeing this, then something is catching the exception.
Where do uncaught exceptions go with asynchronous I/O
[ "", "c#", "exception", "" ]
Is it possible to run a select on a table to quickly find out if **any** (one or more) of the fields contain a certain value? Or would you have to write out all of the column names in the where clause?
Dig this... It will search on all the tables in the db, but you can mod it down to just one table. ``` /*This script will find any text value in the database*/ /*Output will be directed to the Messages window. Don't forget to look there!!!*/ SET NOCOUNT ON DECLARE @valuetosearchfor varchar(128), @objectOwner varchar(64) SET @valuetosearchfor = '%staff%' --should be formatted as a like search SET @objectOwner = 'dbo' DECLARE @potentialcolumns TABLE (id int IDENTITY, sql varchar(4000)) INSERT INTO @potentialcolumns (sql) SELECT ('if exists (select 1 from [' + [tabs].[table_schema] + '].[' + [tabs].[table_name] + '] (NOLOCK) where [' + [cols].[column_name] + '] like ''' + @valuetosearchfor + ''' ) print ''SELECT * FROM [' + [tabs].[table_schema] + '].[' + [tabs].[table_name] + '] (NOLOCK) WHERE [' + [cols].[column_name] + '] LIKE ''''' + @valuetosearchfor + '''''' + '''') as 'sql' FROM information_schema.columns cols INNER JOIN information_schema.tables tabs ON cols.TABLE_CATALOG = tabs.TABLE_CATALOG AND cols.TABLE_SCHEMA = tabs.TABLE_SCHEMA AND cols.TABLE_NAME = tabs.TABLE_NAME WHERE cols.data_type IN ('char', 'varchar', 'nchar', 'nvarchar','text','ntext') AND tabs.table_schema = @objectOwner AND tabs.TABLE_TYPE = 'BASE TABLE' ORDER BY tabs.table_catalog, tabs.table_name, cols.ordinal_position DECLARE @count int SET @count = (SELECT MAX(id) FROM @potentialcolumns) PRINT 'Found ' + CAST(@count as varchar) + ' potential columns.' PRINT 'Beginning scan...' PRINT '' PRINT 'These columns contain the values being searched for...' PRINT '' DECLARE @iterator int, @sql varchar(4000) SET @iterator = 1 WHILE @iterator <= (SELECT Max(id) FROM @potentialcolumns) BEGIN SET @sql = (SELECT [sql] FROM @potentialcolumns where [id] = @iterator) IF (@sql IS NOT NULL) and (RTRIM(LTRIM(@sql)) <> '') BEGIN --SELECT @sql --use when checking sql output EXEC (@sql) END SET @iterator = @iterator + 1 END PRINT '' PRINT 'Scan completed' ```
As others have said, you're likely going to have to write all the columns into your WHERE clause, either by hand or programatically. SQL does not include functionality to do it directly. A better question might be "why do you need to do this?". Needing to use this type of query is possibly a good indicator that your database isn't properly [normalized](http://en.wikipedia.org/wiki/Database_normalization). If you tell us your schema, we may be able to help with that problem too (if it's an actual problem).
SQL query to return all rows where one or more of the fields contains a certain value
[ "", "sql", "" ]
In C#, how can you find out whether an object is an instance of a certain class exactly, and not just assignment-compatible with it? The "is" operator will return true even when the tested class is only a base class of the object's actual class.
``` typeof(SpecifiedClass) == obj.GetType() ```
You could compare the type of your object to the Type of the class that you are looking for: ``` class A { } class B : A { } A a = new A(); if(a.GetType() == typeof(A)) // returns true { } A b = new B(); if(b.GetType() == typeof(A)) // returns false { } ```
How to find if an object is from a class but not superclass?
[ "", "c#", "class", "object", "" ]
My Java source code: ``` String result = "B123".replaceAll("B*","e"); System.out.println(result); ``` The output is:`ee1e2e3e`. Why?
'\*' means zero or more matches of the previous character. So each empty string will be replaced with an "e". You probably want to use '+' instead: > `replaceAll("B+", "e")`
You want this for your pattern: ``` B+ ``` And your code would be: ``` String result = "B123".replaceAll("B+","e"); System.out.println(result); ``` The "\*" matches "zero or more" - and "zero" includes the nothing that's before the B, as well as between all the other characters.
What is the effect of "*" in regular expressions?
[ "", "java", "regex", "" ]
What's the accepted procedure and paths to configure jdk and global library source code for Intellij IDEA on OS X?
As of the latest releases: * Java for Mac OS X 10.6 Update 3 * Java for Mac OS X 10.5 Update 8 Apple has moved things around a bit. To quote the Apple Java guy on the java-dev mailing list: > 1. System JVMs live under /System/Library/... > > * These JVMs are only provided by Apple, and there is only 1 major > platform version at a time. > * The one version is always upgraded, and only by Apple Software Updates. > * It should always be GM version, that developers can revert back to, despite > any developer previews or 3rd party > JVMs they have installed. > * Like everything else in /System, it's owned by root r-x, so don't mess > with it! > 2. Developer JVMs live under /Library/Java/JavaVirtualMachines > > * Apple Java Developer Previews install under /Library. > * The Developer .jdk bundles contain everything a developer could need > (src.jar, docs.jar, etc), but are too > big to ship to the tens of millions of > Mac customers. > * 3rd party JVMs should install here. > 3. Developers working on the JVM itself can use > ~/Library/Java/JavaVirtualMachines > > * It's handy to symlink to your current build product from this > directory, and not impact other users > 4. Java IDEs should probably bias to using /Library or ~/Library detected > JVMs, but should be able to fallback > to using /System/Library JVMs if > that's the only one installed (but > don't expect src or JavaDoc). > > This allows Java developers the > maximum flexibility to install > multiple version of the JVM to regress > bugs and even develop a JVM on the Mac > themselves. It also ensures that all > Mac customers have one safe, slim, > secure version of the JVM, and that we > don't endlessly eat their disk space > every time we Software Update them a > JVM. So, instead of pointing Intellij at /System/Library/Frameworks/JavaVM.framework, you should point to a JDK in either /Library/Java/JavaVirtualMachines or /System/Library/Java/JavaVirtualMachines
In the 'Project Settings' window, go to the 'JDKs' section that you see under 'Platform Settings'. Click the little plus sign and choose 'JSDK'. A file chooser should open in the /System/Library/Frameworks/JavaVM.framework/Versions directory. If not, then just navigate to it. There you can choose the version you would like to add.
Intellij IDEA setup on OS X
[ "", "java", "macos", "grails", "intellij-idea", "" ]
What are assembly versions like major.minor.build.revision? What do they mean?
Assembly Versions are a way to allow for backwards or forwards compatibility in your applications. For instance: you could specify that your application requires a reference to a third-party library (NHibernate for instance) of a specific version or higher. You can do the same thing with the .NET Framework itself by requiring that a certain version of the .NET Framework be installed. Having Assembly versions also allows you to maintain one or more copies of an assembly in the GAC simultaneously, letting your program select which version of the assembly it wants. This can be quite useful when you're upgrading a third-party library reference, etc.
It's the indicator for the software version that the assembly represents. The leftmost number usually represents large changes that break compatibility with earlier versions, while the rightmost number represents the individual change number. .NET uses an auto numbering for revision, because one would have to be too diligent to change it. However, build systems can inject the source control revision number during the build process to make it more meaningful.
Assembly version
[ "", "c#", ".net", "" ]
At work we have a native C code responsible for reading and writing to a proprietary flat file database. I have a wrapper written in C# that encapsulates the P/Invoke calls into an OO model. The managed wrappers for the P/Invoke calls have grown in complexity considerably since the project was started. Anecdotally the current wrapper is doing fine, however, I'm thinking that I actually need to do more to ensure correct operation. A couple of notes brought up by the answers: 1. Probably don't need the KeepAlive 2. Probably don't need the GCHandle pinning 3. If you do use the GCHandle, try...finally that business (CER questions not addressed though) Here is an example of the revised code: ``` [DllImport(@"somedll", EntryPoint="ADD", CharSet=CharSet.Ansi, ThrowOnUnmappableChar=true, BestFitMapping=false, SetLastError=false)] [ReliabilityContract(Consistency.MayCorruptProcess, Cer.None)] internal static extern void ADD( [In] ref Int32 id, [In] [MarshalAs(UnmanagedType.LPStr)] string key, [In] byte[] data, // formerly IntPtr [In] [MarshalAs(UnmanagedType.LPArray, SizeConst=10)] Int32[] details, [In] [MarshalAs(UnmanagedType.LPArray, SizeConst=2)] Int32[] status); public void Add(FileId file, string key, TypedBuffer buffer) { // ...Arguments get checked int[] status = new int[2] { 0, 0 }; int[] details = new int[10]; // ...Make the details array lock (OPERATION_LOCK) { ADD(file.Id, key, buffer.GetBytes(), details, status); // the byte[], details, and status should be auto // pinned/keepalive'd if ((status[0] != 0) || (status[1] != 0)) throw new OurDatabaseException(file, key, status); // we no longer KeepAlive the data because it should be auto // pinned we DO however KeepAlive our 'file' object since // we're passing it the Id property which will not preserve // a reference to 'file' the exception getting thrown // kinda preserves it, but being explicit won't hurt us GC.KeepAlive(file); } } ``` My (revised) questions are: 1. 
Will data, details, and status be auto-pinned/KeepAlive'd? 2. Have I missed anything else required for this to operate correctly? EDIT: I recently found a diagram which is what sparked my curiosity. It basically states that once you call a P/Invoke method the [GC can preempt your native code](http://i.msdn.microsoft.com/ms993883.intmiglongch03-01(en-us,MSDN.10).gif). So while the native call may be made synchronously, the GC *could* choose to run and move/remove my memory. I guess now I'm wondering if automatic pinning is sufficient (or if it even runs).
1. I'm not sure what the point of your KeepAlive is, since you've already freed the GCHandle - it seems that the data is no longer needed at that point? 2. Similar to #1, why do you feel you need to call KeepAlive at all? Is there something outside of the code you've posted we're not seeing? 3. Probably not. If this is a synchronous P/Invoke then the marshaler will actually pin the incoming variables until it returns. In fact you probably don't need to pin data either (unless this is async, but your construct suggests it's not). 4. No, nothing missed. I think you've actually added more than you need. EDIT in response to original question edits and comments: The diagram simply shows that the GC *mode* changes; the mode has no effect on pinned objects. Types are either [pinned or copied during marshaling](http://msdn.microsoft.com/en-us/23acw07k.aspx), depending on the type. In this case you're using a byte array, which the [docs say is a blittable type](http://msdn.microsoft.com/en-us/library/75dwhxf7.aspx). You'll see that it also specifically states that "As an optimization, arrays of blittable types and classes that contain only blittable members are pinned instead of copied during marshaling." So that means that data is pinned for the duration of the call, and if the GC runs, it is not able to move or free the array. Same is true for status. The string passed is slightly different: the string data is copied and the pointer is passed on the stack. This behavior also makes it immune to collection and compaction. The GC can't touch the copy (it knows nothing about it), and the pointer is on the stack, which the GC doesn't affect. I still don't see the point of calling KeepAlive. The file, presumably, isn't available for collection because it got passed in to the method and has some other root (where it was declared) that would keep it alive.
Unless your unmanaged code is directly manipulating the memory, I don't think you need to pin the object. Pinning essentially informs the GC that it should not move that object around in memory during the compact phase of a collection cycle. This is only important for unmanaged memory access where the unmanaged code is expecting the data to always be in the same location it was when it was passed in. The "mode" the GC operates in (concurrent or preemptive) should have no impact on pinned objects as the behavioral rules of pinning apply in either mode. The marshalling infrastructure in .NET attempts to be smart about how it marshals the data between managed/unmanaged code. In this specific case, the two arrays you are creating will be pinned automatically during the marshalling process. The call to GC.KeepAlive probably isn't needed either, unless your unmanaged ADD method is asynchronous. GC.KeepAlive is only intended to prevent the GC from reclaiming an object that it thinks is dead during a long running operation. Since file is passed in as a parameter, it is presumably used elsewhere in the code after the call to the managed Add function, so there is no need for the GC.KeepAlive call. You edited your code sample and removed the calls to GCHandle.Alloc() and Free(), so does that imply the code no longer uses those? If you are still using it, the code inside your lock(OPERATION_LOCK) block should also be wrapped in a try/finally block. In your finally block, you probably want to do something like this: ``` if (dataHandle.IsAllocated) { dataHandle.Free(); } ``` Also, you may want to consider whether the call to GCHandle.Alloc() should be inside your lock. By having it outside the lock you will have multiple threads allocating memory. As far as automatic pinning, if the data is automatically pinned during the marshalling process, it is pinned and it won't be moved during a GC collection cycle if one were to occur while your unmanaged code is running.
I'm not sure I fully understand your code comment about the reasoning for continuing to call GC.KeepAlive. Does the unmanaged code actually set a value for the file.Id field?
P/Invoke, Pinning, and KeepAlive Best Practices
[ "", "c#", ".net", "interop", "pinvoke", "" ]
Thanks for the three excellent answers which all identified my problem of using "onclick = ..." instead of "observe( "click",..." But the award for Accepted Answer has to go to Paolo Bergantino for the mechanism of adding a class name to mark the dragged element, which saved me some more work! --- In my HTML I have a table with an image link on each row. ``` <table class="search_results"> <tr> <td class="thumbnail"><a href="..."><img src="..." /></a></td> ... ``` An included Javascript file contains the code to make the images draggable: ``` $$( ".thumbnail a img" ).each( function( img ) { new Draggable( img, {revert:true} ); } ); ``` and a simple handler to detect the end of the drag ``` Draggables.addObserver({ onEnd:function( eventName, draggable, event ) { // alert ( eventName ); // leaving this in stops IE from following the link event.preventDefault(); // Does Not Work !!! // event.stop(); // Does Not Work either !!! } }); ``` My idea is that when the image is clicked the link should be followed but when it is dragged something else should happen. In fact what happens is that when the image is dragged the handler is called but the link is still followed. I guess that I'm cancelling the wrong event. How can I prevent the link from being followed after the element is dragged? --- edit: added `event.stop` after trying greystate's suggestion --- I now have a basic solution that works for FireFox, Apache, etc. See my own answer below. But I am still looking for a solution for IE7 (and hopefully IE6). Another problem when dragging images in IE is that the image becomes detached from the mouse pointer when the tool tip appears, you have to release the mouse and click again on the image to re-acquire the drag. So I'm also looking for any ideas that might help resolve that problem.
``` <script> document.observe("dom:loaded", function() { $$( ".thumbnail a img" ).each(function(img) { new Draggable(img, { revert: true, onEnd: function(draggable, event) { $(draggable.element).up('a').addClassName('been_dragged'); } }); }); $$(".thumbnail a").each(function(a) { Event.observe(a, 'click', function(event) { var a = Event.findElement(event, 'a'); if(a.hasClassName('been_dragged')) { event.preventDefault(); // or do whatever else } }); }); }); </script> ``` Works for me on Firefox, IE. It kind of uses your 'marker' idea but I think marking an already dragged element with a class is more elegant than javascript variables.
``` // for keeping track of the dragged anchor var anchorID = null; // register a click handler on all anchors $$('.thumbnail a').invoke('observe', 'click', function(e) { var a = e.findElement('a'); // stop the event from propagating if this anchor was dragged if (a.id == anchorID) { e.stop(); anchorID = null; } }); $$('.thumbnail a img').each(function(img) { new Draggable(img, { revert:true }); }); Draggables.addObserver({ onStart: function(eventName, draggable, e) { // store the dragged anchor anchorID = e.findElement('a').id; } }); ```
scriptaculous draggables: need to cancel onClick action when element is dragged
[ "", "javascript", "dom-events", "draggable", "scriptaculous", "" ]
We recently started to develop a Java desktop app and management has requested that we make use of a Rich Client Platform. I know of four for Java, namely: 1. Eclipse RCP - [www link to eclipse rcp](http://wiki.eclipse.org/index.php/Rich_Client_Platform), 2. NetBeans RCP - [Netbeans RCP web site](http://platform.netbeans.org/), 3. Spring RCP - [spring rich client](http://spring-rich-c.sourceforge.net/1.0.0/index.html) 4. Valkyrie RCP - [Valkyrie rich client](https://www.gitorious.org/valkyrie-rcp/pages/Home) Has anyone got any experience with any of these, and if so, what are the strengths and weaknesses of each? thanks
I recommend that you take a look at JSR 296 - it's not complete yet by any stretch, but I think it hits the sweet spot for providing certain core functionality that you really, really need in every Java GUI app, without forcing you to live in an overly complicated framework. I have used JSR 296 successfully to create a mid-sized application. For window layout in this app, we use [MyDoggy](http://mydoggy.sourceforge.net/) (highly recommended). For layout management, we use MiGLayout (Beyond highly recommended). For data binding, we use a modified form of JSR 295 (we implemented something similar to [PresentationModel](http://martinfowler.com/eaaDev/PresentationModel.html) on top of JSR 295 that we use for our GUI binding). I'm in the process of incorporating Guice as a DI mechanism but haven't finished that effort (so far, I think it will 'play well' with JSR 296 with a tweak here and there). Let's see... persistence is the big missing link here - I am currently evaluating [Simple](http://simple.sourceforge.net/) for XML persistence, but am running into issues with getting it to work with DI containers like Guice. I have [Betwixt](http://commons.apache.org/betwixt/) working, but the dependencies on Betwixt are huge so we are looking for something more streamlined. Opinions on other RCP options for Java: NetBeans: I have some fundamental philosophical objections to the approach used by NetBeans (too many design anti-patterns for my taste). In the end, the framework forces you to make poor design decisions - and it's almost impossible to use if you don't use NetBeans as your IDE (I tried, but I just couldn't switch from Eclipse to NB). It's probably just me, but it seems that it should be possible to write code for an RCP framework without using big complicated wizards and reams of auto-generated code and XML files. 
I've spent so many hours troubleshooting old Visual C++ code generated by Visual Studio that I'm extremely leery of any framework that can't be coded up by hand. Spring RCP: The folks at Spring have a good solid design, but the documentation is really, really weak. It's pretty difficult to get up to speed on it (But once you do, you can get things done pretty quickly). Eclipse RCP: Haven't used Eclipse just because of the deployment overhead (depends on your target audience - for us, deploying an extra 50 MB of runtime just didn't work). Without question Equinox is a beautiful thing if your app needs significant plugin functionality (of course, you could run Equinox with JSR 296 as well, or use design patterns similar to the Whiteboard pattern promoted by OSGi).
INTRO - skip if you're only interested in the result ;) I was developing an editor for a custom programming language very similar to JSP. First I implemented the editor as my thesis using the **NetBeans platform**. After finishing school I got a job and they wanted me to implement the same thing in **Eclipse RCP**, so now I can compare these two platforms, at least on the things I faced during this project. RESULT - **If I had a choice between the NetBeans platform and Eclipse RCP, I would definitely pick the NetBeans platform**. Why? Great screencasts, good tutorials, a very active, friendly and helpful community; it's quite well documented and the source code is written nicely and with good code conventions. It also has some interesting gadgets (cookies, lookup). It simply suits me. And why does Eclipse RCP not suit me? The documentation is weaker, and the conventions and API are sometimes..ehm..too weird for me :-) It's quite usual to see methods like: ``` /** * Returns a description of the cursor position. * * @return a description of the cursor position * @since 2.0 */ protected String getCursorPosition() { .. } ``` Well I thought they must be kidding me :-D How am I supposed to use this method? Or like this: ``` /** * Returns the range of the current selection in coordinates of this viewer's document. * * @return a <code>Point</code> with x as the offset and y as the length of the current selection */ Point getSelectedRange(); ``` Although the number and type of attributes fit, I don't find the Point object an ideal data structure for storing a range ;-) There are a number of these surprises in Eclipse RCP
Which Rich Client Platform to use
[ "", "java", "osgi", "rcp", "" ]
I'd like to be notified when a file has been changed in the file system. I have found nothing but a thread that polls the lastModified File property and clearly this solution is not optimal.
Since JDK 1.7, the canonical way to have an application be notified of changes to a file is using the [WatchService](https://docs.oracle.com/javase/9/docs/api/java/nio/file/WatchService.html) API. The WatchService is event-driven. The [official tutorial](https://docs.oracle.com/javase/tutorial/essential/io/notification.html) provides an example: ``` /* * Copyright (c) 2008, 2010, Oracle and/or its affiliates. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * * - Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * - Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * - Neither the name of Oracle nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS * IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ import java.nio.file.*; import static java.nio.file.StandardWatchEventKinds.*; import static java.nio.file.LinkOption.*; import java.nio.file.attribute.*; import java.io.*; import java.util.*; /** * Example to watch a directory (or tree) for changes to files. */ public class WatchDir { private final WatchService watcher; private final Map<WatchKey,Path> keys; private final boolean recursive; private boolean trace = false; @SuppressWarnings("unchecked") static <T> WatchEvent<T> cast(WatchEvent<?> event) { return (WatchEvent<T>)event; } /** * Register the given directory with the WatchService */ private void register(Path dir) throws IOException { WatchKey key = dir.register(watcher, ENTRY_CREATE, ENTRY_DELETE, ENTRY_MODIFY); if (trace) { Path prev = keys.get(key); if (prev == null) { System.out.format("register: %s\n", dir); } else { if (!dir.equals(prev)) { System.out.format("update: %s -> %s\n", prev, dir); } } } keys.put(key, dir); } /** * Register the given directory, and all its sub-directories, with the * WatchService. 
*/ private void registerAll(final Path start) throws IOException { // register directory and sub-directories Files.walkFileTree(start, new SimpleFileVisitor<Path>() { @Override public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException { register(dir); return FileVisitResult.CONTINUE; } }); } /** * Creates a WatchService and registers the given directory */ WatchDir(Path dir, boolean recursive) throws IOException { this.watcher = FileSystems.getDefault().newWatchService(); this.keys = new HashMap<WatchKey,Path>(); this.recursive = recursive; if (recursive) { System.out.format("Scanning %s ...\n", dir); registerAll(dir); System.out.println("Done."); } else { register(dir); } // enable trace after initial registration this.trace = true; } /** * Process all events for keys queued to the watcher */ void processEvents() { for (;;) { // wait for key to be signalled WatchKey key; try { key = watcher.take(); } catch (InterruptedException x) { return; } Path dir = keys.get(key); if (dir == null) { System.err.println("WatchKey not recognized!!"); continue; } for (WatchEvent<?> event: key.pollEvents()) { WatchEvent.Kind kind = event.kind(); // TBD - provide example of how OVERFLOW event is handled if (kind == OVERFLOW) { continue; } // Context for directory entry event is the file name of entry WatchEvent<Path> ev = cast(event); Path name = ev.context(); Path child = dir.resolve(name); // print out event System.out.format("%s: %s\n", event.kind().name(), child); // if directory is created, and watching recursively, then // register it and its sub-directories if (recursive && (kind == ENTRY_CREATE)) { try { if (Files.isDirectory(child, NOFOLLOW_LINKS)) { registerAll(child); } } catch (IOException x) { // ignore to keep sample readable } } } // reset key and remove from set if directory no longer accessible boolean valid = key.reset(); if (!valid) { keys.remove(key); // all directories are inaccessible if (keys.isEmpty()) { break; } } } } 
static void usage() { System.err.println("usage: java WatchDir [-r] dir"); System.exit(-1); } public static void main(String[] args) throws IOException { // parse arguments if (args.length == 0 || args.length > 2) usage(); boolean recursive = false; int dirArg = 0; if (args[0].equals("-r")) { if (args.length < 2) usage(); recursive = true; dirArg++; } // register directory and process its events Path dir = Paths.get(args[dirArg]); new WatchDir(dir, recursive).processEvents(); } } ``` For individual files, various solutions exist, such as: * <https://dzone.com/articles/listening-to-fileevents-with-java-nio> Note that Apache VFS uses a polling algorithm, although it may offer greater functionality. Also note that the API does not offer a way to determine whether a file has been closed.
I've written a log file monitor before, and I found that the impact on system performance of polling the attributes of a single file, a few times a second, is actually very small. Java 7, as part of NIO.2 has added the [WatchService API](http://download.oracle.com/javase/tutorial/essential/io/notification.html) > The WatchService API is designed for applications that need to be notified about file change events.
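For completeness, the polling approach discussed here takes only a few lines. Below is a minimal sketch in Python (the half-second delay and poll interval are arbitrary illustrative choices; the same idea applies to polling `File.lastModified()` in Java):

```python
import os
import tempfile
import threading
import time

def wait_for_change(path, timeout=5.0, interval=0.1):
    """Poll the file's last-modified time until it changes or we time out."""
    last = os.path.getmtime(path)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        time.sleep(interval)
        current = os.path.getmtime(path)
        if current != last:
            return True
    return False

# Demo: create a temp file, then bump its mtime from another thread
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    path = f.name
    f.write("hello")

# Set a strictly newer mtime half a second from now, while we are polling
threading.Timer(0.5, lambda: os.utime(path, (time.time(), time.time() + 1))).start()
changed = wait_for_change(path, timeout=3.0)
print(changed)
os.remove(path)
```

As the answer above notes, checking one file's attributes a few times a second is cheap; the cost only becomes interesting when watching many files, which is where an event-driven API pays off.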
File changed listener in Java
[ "", "java", "file", "listener", "" ]
I'm creating some checkbox elements on the fly with jQuery and appending them to a node like so ``` var topics = ['All','Cat1','Cat2']; var topicContainer = $('ul#someElementId'); $.each( topics, function( iteration, item ) { topicContainer.append( $(document.createElement("li")) .append( $(document.createElement("input")).attr({ id: 'topicFilter-' + item ,name: item ,value: item ,type: 'checkbox' ,checked:true }) .click( function( event ) { var cbox = $(this)[0]; alert( cbox.value ); } ) ) .append( $(document.createElement('label')).attr({ 'for': 'topicFilter' + '-' + item }) .text( item ) ) ) } ); ``` The problems I'm encountering are two-fold (they occur in IE only) * Checkboxes are added to the page, but their default checked state is unchecked, even though I'm specifying 'true' for that value. (Testing with 'checked' for the value makes no difference) * When `alert( cbox.value );` executes, the output is 'on', every time. I think the core problem here is I need a better way to set the default checked state of the checkboxes, and to set their default "value" attribute. But I haven't yet found another way. *Note: all of this code works fine in Firefox and Chrome.* This is jQuery 1.3.1 testing with IE 7.0.5730.11
Internet Explorer doesn't like to let you change the checked value of an input that is not a part of the DOM. Try setting the checked value AFTER the item has been appended and see if that works.
I reused some of your code and had a similar problem as per [Why does order of defining attributes for a dynamically created checkbox in jquery affect its value?](https://stackoverflow.com/questions/3218898/why-does-order-of-defining-attributes-for-a-dynamically-created-checkbox-in-jquer) I found the resolution was to simply move the attribute declaration ``` type: 'checkbox', ``` to the beginning, i.e.: ``` $(document.createElement("input")).attr({ type: 'checkbox', ``` This problem occurred in all browsers for me, so I don't think it's an IE issue but rather a jQuery "thing". For me it didn't matter when I set the value (before or after append). The difference was in how soon/where I declared the type of the input.
Creating checkbox elements on the fly with jQuery - odd IE behavior
[ "", "javascript", "jquery", "internet-explorer", "" ]
Can Interfaces be a substitute for Multiple Inheritance in C#? EDIT: This is a duplicate of: <https://stackoverflow.com/questions/579090/why-c-doesnt-support-multiple-inheritence-can-interfaces-be-used-as-a-substi> Which was a duplicate of: [Should C# have multiple inheritance?](https://stackoverflow.com/questions/191691/should-c-include-multiple-inheritance)
Multiple inheritance implies that you inherit functionality from multiple base classes. Since interfaces define interfaces, not implementations, the answer is no.
To some extent, yes. The classic "is-a" relationship that inheritance provides can be expanded with the use of interfaces. Meaning, if a class inherits from another class, and implements 3 interfaces, an object of this class can be passed into a method that takes the base class, or any of the three interfaces as parameters. So, in a sense, this class "is-a" one of 4 different things. Makes sense?
Can Interfaces be a substitute for Multiple Inheritance in C#?
[ "", "c#", "" ]
How can one quickly turn off all **Log4J** output using a `log4j.properties` file?
Set level to OFF (instead of DEBUG, INFO, ....)
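In a `log4j.properties` file that would look something like the fragment below (a minimal sketch; keep whatever appender declarations your file already contains):

```properties
# Silence all Log4J output by setting the root logger's level to OFF
log4j.rootLogger=OFF
```

Individual loggers can be silenced the same way, e.g. `log4j.logger.com.example=OFF`, where `com.example` is a placeholder for your own package name.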
If you want to turn off logging programmatically then use ``` List<Logger> loggers = Collections.<Logger>list(LogManager.getCurrentLoggers()); loggers.add(LogManager.getRootLogger()); for ( Logger logger : loggers ) { logger.setLevel(Level.OFF); } ```
Disabling Log4J Output in Java
[ "", "java", "logging", "log4j", "" ]
I ultimately need a list of "import" records that include "album" records which only have one "song" each. This is what I'm using now: ``` select i.id, i.created_at from imports i where i.id in ( select a.import_id from albums a inner join songs s on a.id = s.album_id group by a.id having 1 = count(s.id) ); ``` The nested select (with the join) is blazing fast, but the external "in" clause is excruciatingly slow. I tried to make the entire query a single (no nesting) join but ran into problems with the group/having clauses. The best I could do was a list of "import" records with dupes, which is not acceptable. Is there a more elegant way to compose this query?
How's this? ``` SELECT i.id, i.created_at FROM imports i INNER JOIN (SELECT a.import_id FROM albums a INNER JOIN songs s ON a.id = s.album_id GROUP BY a.id HAVING Count(*) = 1) AS TEMP ON i.id = TEMP.import_id; ``` In most database systems, the JOIN works a lot faster than doing a WHERE ... IN.
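A quick way to sanity-check the shape of the derived-table JOIN is against an in-memory SQLite database (table and column names are taken from the question; the sample rows and the `single_song` alias are made up for illustration):

```python
import sqlite3

# In-memory stand-in for the schema implied by the question
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE imports (id INTEGER PRIMARY KEY, created_at TEXT);
    CREATE TABLE albums  (id INTEGER PRIMARY KEY, import_id INTEGER);
    CREATE TABLE songs   (id INTEGER PRIMARY KEY, album_id INTEGER);

    INSERT INTO imports VALUES (1, '2009-01-01'), (2, '2009-01-02');
    -- import 1 owns a one-song album; import 2's album has two songs
    INSERT INTO albums VALUES (10, 1), (20, 2);
    INSERT INTO songs  VALUES (100, 10), (200, 20), (201, 20);
""")

rows = conn.execute("""
    SELECT i.id, i.created_at
    FROM imports i
    INNER JOIN (SELECT a.import_id
                FROM albums a
                INNER JOIN songs s ON a.id = s.album_id
                GROUP BY a.id
                HAVING COUNT(*) = 1) AS single_song
        ON i.id = single_song.import_id
""").fetchall()

print(rows)  # only import 1 qualifies: [(1, '2009-01-01')]
```

Note that if one import had *several* one-song albums, this join would still return that import once per qualifying album, so a `DISTINCT` (or a `GROUP BY` on the outer query) may be needed to fully rule out duplicates.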
``` SELECT i.id, i.created_at, COUNT(s.album_id) FROM imports AS i INNER JOIN albums AS a ON i.id = a.import_id INNER JOIN songs AS s ON a.id = s.album_id GROUP BY i.id, i.created_at HAVING COUNT(s.album_id) = 1 ``` *(You might not need to include the `COUNT` in the `SELECT` list itself. SQL Server doesn't require it, but it's possible that a different RDBMS might.)*
Complex join with nested group-by/having clause?
[ "", "sql", "join", "group-by", "having", "" ]
I was trying to do a regex for someone else when I ran into this problem. The requirement was that the regex should return results from a set of strings that has, let's say, "apple" in it. For example, consider the following strings: `"I have an apple"` `"You have two Apples"` `"I give you one more orange"` The result set should have the first two strings. The regex(es) I tried are: `/[aA]pple/` and `/[^a-zA-Z0-9][aA]pple/` The problem with the first one is that words like "aapple", "bapple", etc (ok, so they are meaningless, but still...) test positive with it, and the problem with the second one is that when a string actually starts with the word "apple", "Apples and oranges", for example, it tests negative. Can someone explain why the second regex behaves this way and what the correct regex would be?
``` /(^.*?\bapples?\b.*$)/i ``` Edit: The above will match the entire string containing the word "apples", which I thought is what you were asking for. If you are just trying to see if the string contains the word, the following will work. ``` /\bapples?\b/i ``` > The regex(es) I tried are: > > `/[aA]pple/ and /[^a-zA-Z0-9][aA]pple/` The first one just checks for the existence of the following characters, in order: a-p-p-l-e, regardless of what context they are used in. The \b, or word-boundary character, matches any spot where a non-word character and a word character meet, ala `\W\w`. The second one is trying to match other characters before the occurrance of a-p-p-l-e, and is essentially the same as the first, except it *requires* other characters in front of it. The one I answered with works like following. From the beginning of the string, matches any characters (if they exist) non-greedily until it encounters a word boundary. If the string starts with apple, the beginning of a string is a word-boundary, so it still matches. It then matches the letters `a-p-p-l-e`, and `s` if it exists, followed by another word boundary. It then matches all characters to the end of the string. The /i at the end means it's case-insensitive, so 'Apple', 'APPLE', and 'apple' are all valid. If you have the time, I would highly recommend walking through the tutorial at <http://regular-expressions.info>. It really goes in-depth and talks about how the regular expression engines match different expressions, it helped me a ton.
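The word-boundary behaviour described above is easy to verify. Here is the `\bapples?\b` pattern exercised in Python for convenience (the question is about JavaScript, but the pattern and the `i` flag behave the same way there):

```python
import re

# Word boundaries (\b) keep 'bapple'/'aapple' from matching while still
# allowing 'apple' at the very start or end of the string.
pattern = re.compile(r"\bapples?\b", re.IGNORECASE)

strings = [
    "I have an apple",             # matches
    "You have two Apples",         # matches (case-insensitive, optional s)
    "Apples and oranges",          # matches even at the start of the string
    "I give you one more orange",  # no 'apple' at all
    "bapple and aapple",           # no word boundary before 'apple'
]

results = [bool(pattern.search(s)) for s in strings]
print(results)  # [True, True, True, False, False]
```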
To build on @tj111, the reason your second regex fails is that `[^a-zA-Z0-9]` requires that a character matches; that is, there is some character in that position, and its value is not contained in the set `[a-zA-Z0-9]`. Markers like `\b` are called "zero-width assertions". `\b`, in particular, matches against boundaries between characters or at the beginning or end of a string. Because it is not matching against any character, its "width" is zero. In sum, `[^a-zA-Z0-9]` requires a character that does not take a particular value be present, while `\b` requires only that a boundary be present. **Edit:** @tj111 has added most of this to his response. I'm in too late, again :)
Javascript regex
[ "", "javascript", "regex", "" ]
Is there a TSQL command to connect to another server? Or when you are in a Query Window, what are the Keyboard shortcuts to connect to another server and have a Query Window show up? I have seen Ctrl+N pop up the **Connect to Server** dialog in some screens but when I am in a Query Window already and hit Ctrl+N it just opens up another Query Window. The USE command lets you connect to other databases on the current server but is there a command that lets you connect to another server? I am using SQL Server 2005.
You can use [OpenDataSource](http://msdn.microsoft.com/en-us/library/ms179856.aspx) with a linked server ``` OpenDataSource(provider_name, init_string) ``` For example ``` SELECT FirstName, Gender FROM OpenDataSource ( 'SQLOLEDB', 'DataSource = NOLI\SQL2;UserID=myUserID;Password=myPassword' ).Organisation.dbo.Employees ``` From [MSDN](http://msdn.microsoft.com/en-us/library/ms179856.aspx)- > Like the OPENROWSET function, > OPENDATASOURCE should only reference > OLE DB data sources that are accessed > infrequently. Define a linked server > for any data sources accessed more > than several times. Neither > OPENDATASOURCE nor OPENROWSET provide > all the functionality of linked-server > definitions, such as security > management and the ability to query > catalog information. All connection > information, including passwords, must > be provided every time that > OPENDATASOURCE is called.
Either via the Menu... **Query > Connection > Change Connection** or via the mouse... **(Right Click Mouse Button) > Connection > Change Connection** Both will pop up the **Connect to Database Engine** dialog box If you're wanting to write some TSQL between servers then you'll need to create a Linked Server and then use OPENQUERY or OPENROWSET in your SQL. There are some good pointers in the previous posts on how to do this.
TSQL command to connect to another server (SQL Server 2005)
[ "", "sql", "sql-server", "t-sql", "" ]
My project that I am working on is almost finished. I am loading a .MDB file, displaying the contents on a DataGrid and attempting to get those changes on the DataGrid and save them back into the .MDB file. I am also going to create a function that allows me to take the tables from one .MDB file and save them to another .MDB file. Of course, I cannot do any of this if I cannot figure out how to save the changes back to the .MDB file. I have researched Google extensively and there are no answers to my question. I consider myself a beginner at this specific topic so please don't make the answers too complicated -- I need the simplest way to edit a .MDB file! Please provide programming examples. 1. Assume that I've already made a connection to a DataGrid. How do I get the changes made by the Datagrid? I'm sure this one is simple enough to answer. 2. I then need to know how to take this Datatable, insert it into the Dataset it came from, then take that Dataset and rewrite the .MDB file. (If there is a way of only inserting the tables that were changed I would prefer that.) Thank you in advance, let me know if you need more information. This is the last thing I am probably going to have to ask about this topic...thank god. **EDIT:** The .mdb I am working with is a **Microsoft Access Database.** (I didn't even know there were multiple .mdb files) I know I cannot write directly to the .MDB file via a streamwriter or anything, but is there a way I can possibly generate a .MDB file with the DataSet information already in it? OR is there just a way that I can add tables to a .MDB file that I've already loaded into the DataGrid? There HAS to be a way! Again, I need a way to do this ***PROGRAMMATICALLY*** in C#. **EDIT:** Okay, my project is fairly large but I use a separate class file to handle all Database connections. I know my design and source are really sloppy, but it gets the job done. I am only as good as the examples I find on the internet.
Remember, I am simply connecting to a DataGrid in another form. Let me know if you want my code from the Datagrid form (I don't know why you would need it, though). DatabaseHandling.cs handles 2 .MDB files. So you will see two datasets in there. I will use this eventually to take tables from one Dataset and put them into another Dataset. I just need to figure out how to save these values BACK into a .MDB file. Is there any way to do this? There has to be a way... **EDIT:** From what I've researched and read... I think the answer is right under my nose. Using the "Update()" command. Now while this is reassuring that there is in fact a simple way of doing this, I am still left with the problem that I have no-friggin-clue how to use this update command. Perhaps I can set it up like this (assuming `using System.Data.OleDb;`): ``` OleDbConnection cn = new OleDbConnection(); cn.ConnectionString = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Staff.mdb"; OleDbCommand cmd = new OleDbCommand(); cmd.Connection = cn; cmd.CommandText = "INSERT INTO Customers (FirstName, LastName) VALUES (@FirstName, @LastName)"; ``` I think that may do it, but I don't want to manually insert anything. I want to do both of these instead: * Take information that is changed on the Datagrid and update the Access Database File (.mdb) that I got it from * Create a function that allows me to take tables from another Access Database File (.mdb) and replace them in a secondary Access Database file (.mdb). Both files will use the exact same structure but will have different information in them. I hope someone comes up with an answer for this... my project is done; all that awaits is one simple answer. Thank you again in advance. **EDIT:** Okay... good news. I have figured out how to query the .mdb file itself (I think). Here is the code, which doesn't work because I get a runtime error due to the SQL command I'm attempting to use. Which will bring me to my next question.
**New function code added to DatabaseHandling.cs:** ``` static public void performSynchronization(string table, string tableTwoLocation) { OleDbCommand cmdCopyTables = new OleDbCommand("INSERT INTO" + table + "SELECT * FROM [MS Access;" + tableTwoLocation + ";].[" + table + "]"); // This query generates runtime error cmdCopyTables.Connection = dataconnectionA; dataconnectionA.Open(); cmdCopyTables.ExecuteNonQuery(); dataconnectionA.Close(); } ``` As you can see, I've actually managed to execute a query on the connection itself, which I believe to be the actual Access .MDB file. As I said though, the SQL query I've executed on the file doesn't work and generated a run-time error when used. The command I am attempting to execute is supposed to take a table from a .MDB file and overwrite a table of the same type in a different .MDB file. The SQL command I attempted above tried to directly take a table from a .mdb file, and directly put it in another -- this isn't what I want to do. I want to take all the information from the .MDB file -- put the tables into a Datatable and then add all the Datatables to a Dataset (which I've done). I want to do this for two .MDB files. Once I have two Datasets I want to take specific tables out of each Dataset and add them to each file like this: * DataSetA >>>>----- [Add Tables (Overwrite Them)] ----->>>> DataSetB * DataSetB >>>>----- [Add Tables (Overwrite Them)] ----->>>> DataSetA I want to take each of those Datasets and then put them BACK into each Access .MDB file they came from. Essentially keeping both databases synchronized. So my questions, revised, are: 1. How do I create a SQL query that will add a table to the .MDB file by overwriting the existing one of the same name? The query should be able to be created dynamically during runtime with an array that replaces a variable with the table name I want to add. 2. 
How do I get the changes that were made by the Datagrid to the DataTable and put them back into a DataTable (or DataSet) so I can send them to the .MDB file? I've tried to elaborate as much as possible...because I believe I am not explaining my issue very well. Now this question has grown wayyy too long. I just wish I could explain this better. :[ **EDIT:** Thanks to a user below I think I've almost found a fix -- the keyword *almost*. Here is my updated DatabaseHandling.cs code below. I get a runtime error "Datatype Mismatch." I don't know how that could be possible considering I am trying to copy these tables into another database with the exact same setup. ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Data.OleDb; using System.Data; using System.IO; namespace LCR_ShepherdStaffupdater_1._0 { public class DatabaseHandling { static DataTable datatableB = new DataTable(); static DataTable datatableA = new DataTable(); public static DataSet datasetA = new DataSet(); public static DataSet datasetB = new DataSet(); static OleDbDataAdapter adapterA = new OleDbDataAdapter(); static OleDbDataAdapter adapterB = new OleDbDataAdapter(); static string connectionstringA = "Provider=Microsoft.Jet.OLEDB.4.0;" + "Data Source=" + Settings.getfilelocationA(); static string connectionstringB = "Provider=Microsoft.Jet.OLEDB.4.0;" + "Data Source=" + Settings.getfilelocationB(); static OleDbConnection dataconnectionB = new OleDbConnection(connectionstringB); static OleDbConnection dataconnectionA = new OleDbConnection(connectionstringA); static DataTable tableListA; static DataTable tableListB; static public void addTableA(string table, bool addtoDataSet) { dataconnectionA.Open(); datatableA = new DataTable(table); try { OleDbCommand commandselectA = new OleDbCommand("SELECT * FROM [" + table + "]", dataconnectionA); adapterA.SelectCommand = commandselectA; adapterA.Fill(datatableA); } catch { Logging.updateLog("Error: Tried to get "
+ table + " from DataSetA. Table doesn't exist!"); } if (addtoDataSet == true) { datasetA.Tables.Add(datatableA); Logging.updateLog("Added DataTableA: " + datatableA.TableName.ToString() + " Successfully!"); } dataconnectionA.Close(); } static public void addTableB(string table, bool addtoDataSet) { dataconnectionB.Open(); datatableB = new DataTable(table); try { OleDbCommand commandselectB = new OleDbCommand("SELECT * FROM [" + table + "]", dataconnectionB); adapterB.SelectCommand = commandselectB; adapterB.Fill(datatableB); } catch { Logging.updateLog("Error: Tried to get " + table + " from DataSetB. Table doesn't exist!"); } if (addtoDataSet == true) { datasetB.Tables.Add(datatableB); Logging.updateLog("Added DataTableB: " + datatableB.TableName.ToString() + " Successfully!"); } dataconnectionB.Close(); } static public string[] getTablesA(string connectionString) { dataconnectionA.Open(); tableListA = dataconnectionA.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, new Object[] { null, null, null, "TABLE" }); string[] stringTableListA = new string[tableListA.Rows.Count]; for (int i = 0; i < tableListA.Rows.Count; i++) { stringTableListA[i] = tableListA.Rows[i].ItemArray[2].ToString(); } dataconnectionA.Close(); return stringTableListA; } static public string[] getTablesB(string connectionString) { dataconnectionB.Open(); tableListB = dataconnectionB.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, new Object[] { null, null, null, "TABLE" }); string[] stringTableListB = new string[tableListB.Rows.Count]; for (int i = 0; i < tableListB.Rows.Count; i++) { stringTableListB[i] = tableListB.Rows[i].ItemArray[2].ToString(); } dataconnectionB.Close(); return stringTableListB; } static public void createDataSet() { string[] tempA = getTablesA(connectionstringA); string[] tempB = getTablesB(connectionstringB); int percentage = 0; int maximum = (tempA.Length + tempB.Length); Logging.updateNotice("Loading Tables..."); for (int i = 0; i < tempA.Length ; i++) { if 
(!datasetA.Tables.Contains(tempA[i])) { addTableA(tempA[i], true); percentage++; Logging.loadStatus(percentage, maximum); } else { datasetA.Tables.Remove(tempA[i]); addTableA(tempA[i], true); percentage++; Logging.loadStatus(percentage, maximum); } } for (int i = 0; i < tempB.Length ; i++) { if (!datasetB.Tables.Contains(tempB[i])) { addTableB(tempB[i], true); percentage++; Logging.loadStatus(percentage, maximum); } else { datasetB.Tables.Remove(tempB[i]); addTableB(tempB[i], true); percentage++; Logging.loadStatus(percentage, maximum); } } } static public DataTable getDataTableA() { datatableA = datasetA.Tables[Settings.textA]; return datatableA; } static public DataTable getDataTableB() { datatableB = datasetB.Tables[Settings.textB]; return datatableB; } static public DataSet getDataSetA() { return datasetA; } static public DataSet getDataSetB() { return datasetB; } static public void InitiateCopyProcessA() { DataSet tablesA; tablesA = DatabaseHandling.getDataSetA(); foreach (DataTable table in tablesA.Tables) { CopyTable(table, connectionstringB); } } public static void CopyTable(DataTable table, string connectionStringB) { var connectionB = new OleDbConnection(connectionStringB); foreach (DataRow row in table.Rows) { InsertRow(row, table.Columns, table.TableName, connectionB); } } public static void InsertRow(DataRow row, DataColumnCollection columns, string table, OleDbConnection connection) { var columnNames = new List<string>(); var values = new List<string>(); for (int i = 0; i < columns.Count; i++) { columnNames.Add("[" + columns[i].ColumnName + "]"); values.Add("'" + row[i].ToString().Replace("'", "''") + "'"); } string sql = string.Format("INSERT INTO {0} ({1}) VALUES ({2})", table, string.Join(", ", columnNames.ToArray()), string.Join(", ", values.ToArray()) ); ExecuteNonQuery(sql, connection); } public static void ExecuteNonQuery(string sql, OleDbConnection conn) { if (conn == null) throw new ArgumentNullException("conn"); ConnectionState prevState = 
ConnectionState.Closed; var command = new OleDbCommand(sql, conn); try { prevState = conn.State; if (prevState != ConnectionState.Open) conn.Open(); command.ExecuteNonQuery(); // !!! Runtime-Error: Data type mismatch in criteria expression. !!! } finally { if (conn.State != ConnectionState.Closed && prevState != ConnectionState.Open) conn.Close(); } } } ``` Why am I getting this error? Both tables are exactly the same. What am I doing wrong? Worst case, how do I delete the table in the other Access .MDB file before inserting the exact same structure table with different values in it? Man I wish I could just figure this out... **EDIT:** Okay, I've come some distance. My question has morphed into a new one, and thus deserves being asked separately. I have had my question answered, as now I know how to execute queries directly on the connection that I have opened. Thank you all!
I'm not sure how far you've gotten, but if you're looking for a quick drag-and-drop operation you might want to look at creating a strongly-typed dataset that connects, and using the drag-drop features of the DataSources Tool Window in Visual Studio. There are definitely samples out there, but you will want to: 1. Create a new DataSet 2. Drag-n-Drop from your DataConnection Tree in Server Explorer 3. Create a new form 4. Drag the table from the DataSources Tool Window on to the form 5. Voila **Update:** First off, I'm not 100% sure that I understand your issue. If you can create some LinkTables between the Access files that would be best; then you can copy the data between files using a SQL statement like 'INSERT INTO Customers SELECT FirstName, LastName FROM File2.Customers'. If that's not an option I think you're going to have to loop the DataTables and insert the records manually using INSERT statements similar to your last edit. As for the datagrid, you will probably have to keep track of what's changed by monitoring the RowChanged event (not sure if that's the exact event) or even do the insert/update statements when the row changes. **Update:** To loop the datatable you would do something like this (not tested). I just updated this again to include the MakeValueDbReady function. This is not tested either and I'm not sure if I've handled all the cases, or even handled all the cases correctly. You'll really have to debug the SQL statement and make sure it's generating the right values. Each database handles its values differently. At least this way the value parsing is extracted away. 
I also realized that instead of hard-coding the TableName you should be able to get it from a property on the DataTable ``` void CopyTable(DataTable table, string connectionStringB) { var connectionB = new OleDbConnection(connectionStringB); foreach(DataRow row in table.Rows) { InsertRow(row, table.Columns, table.TableName, connectionB); } } public static void InsertRow(DataRow row, DataColumnCollection columns, string table, OleDbConnection connection) { var columnNames = new List<string>(); var values = new List<string>(); // generate the column and value names from the datacolumns for(int i = 0; i < columns.Count; i++) { columnNames.Add("[" + columns[i].ColumnName + "]"); // datatype mismatch should be fixed by this function values.Add(MakeValueDbReady(row[i], columns[i].DataType)); } // create the sql string sql = string.Format("INSERT INTO {0} ({1}) VALUES ({2})", table, string.Join(", ", columnNames.ToArray()), string.Join(", ", values.ToArray()) ); // debug the accuracy of the sql here and even copy into // a new Query in Access to test ExecuteNonQuery(sql, connection); } // as the name says we are going to check the datatype and format the value // in the sql string based on the type that the database is expecting public static string MakeValueDbReady(object value, Type dataType) { if (value == null || value == DBNull.Value) return "NULL"; // emit a SQL NULL literal, not a C# null if (dataType == typeof(string)) { return "'" + value.ToString().Replace("'", "''") + "'"; } else if (dataType == typeof(DateTime)) { return "#" + ((DateTime)value).ToString("MM/dd/yyyy HH:mm:ss") + "#"; // Access expects #-delimited dates } else if (dataType == typeof(bool)) { return ((bool)value) ? "1" : "0"; } return value.ToString(); } public static void ExecuteNonQuery(string sql, OleDbConnection conn) { if (conn == null) throw new ArgumentNullException("conn"); ConnectionState prevState = ConnectionState.Closed; var command = new OleDbCommand(sql, conn); try { // the reason we are checking the prev state is for performance reasons // later you might want to open the connection once for a batch // of say 500 rows or even wrap your connection in a transaction. // we don't want to open and close 500 connections prevState = conn.State; if (prevState != ConnectionState.Open) conn.Open(); command.ExecuteNonQuery(); } finally { if (conn.State != ConnectionState.Closed && prevState != ConnectionState.Open) conn.Close(); } } ```
To update the original MDB file with changes made to *the DataSet* (not the DataGrid, since that's just UI over the DataSet) just use the [DataAdapter.Update](http://msdn.microsoft.com/en-us/library/system.data.common.dataadapter.update.aspx) command. To move tables from one to the other is a bit trickier. If the table doesn't already exist in the destination, you'll need to create it using a [SQL CREATE statement](http://msdn.microsoft.com/en-us/library/bb177893.aspx). Then, [DataAdapter.Fill](http://msdn.microsoft.com/en-us/library/377a8x4t.aspx) a DataSet from the *source*. Loop through each row and set its state to Added by calling [DataRow.SetAdded](http://msdn.microsoft.com/en-us/library/system.data.datarow.setadded.aspx). Then, pass it back to a DataAdapter.Update from the *destination* database. EDIT: [Code is on the next question....](https://stackoverflow.com/questions/520506/how-do-i-structure-an-oledbcommand-query-so-that-i-can-take-tables-from-one-acces/520665#520665)
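A rough sketch of how those pieces fit together (the connection strings and the `Customers` table name here are invented, and the destination table is assumed to already exist with a matching schema):

```csharp
using System.Data;
using System.Data.OleDb;

// Hypothetical sketch: pull a table out of source.mdb, mark every row
// as "Added", and let the destination adapter INSERT them into dest.mdb.
var sourceAdapter = new OleDbDataAdapter(
    "SELECT * FROM Customers",
    @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\source.mdb");

var table = new DataTable("Customers");
sourceAdapter.Fill(table);          // rows arrive in the Unchanged state

foreach (DataRow row in table.Rows)
    row.SetAdded();                 // now Update() will generate INSERTs

var destAdapter = new OleDbDataAdapter(
    "SELECT * FROM Customers",
    @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\dest.mdb");

// The command builder derives the INSERT command from the SELECT.
var builder = new OleDbCommandBuilder(destAdapter);
destAdapter.Update(table);
```

Note that `SetAdded` may only be called on rows in the Unchanged state, which is exactly what `Fill` leaves them in.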
C# Issue: What is the simplest way for me to load a .MDB file, make changes to it, and save the changes back to the original file?
[ "c#", "datatable", "dataadapter", "ms-jet-ace", "oledbconnection" ]
Does VB.NET have more types than C#?
No, they are under the same .NET Framework; the only thing VB.NET has by default is a reference to the Microsoft.VisualBasic namespace.
Types in both languages refer to the same intrinsic CTS data types (CTS = [Common Type System](http://msdn.microsoft.com/en-us/library/2hf02550.aspx)).
C# and VB.NET types
[ "c#", "vb.net" ]
Typically we keep our config values in web.config/app.config, and environment-wide global config variables (not application specific) in the server's machine.config file. When deploying an assembly to the GAC, where is the best location to keep these config-type values? It seems that the best location would be a linked resource file. Does anyone have any experience/recommendation with this approach? (sample code?) thx
If you don't care about having a specific configuration for each application using your DLL, you can place the configuration in the **machine.config** file inside the framework's Config folder: %SystemRoot%\Microsoft.NET\Framework\[version]\Config\machine.config
The configuration values need to be in the application configuration of the executing assembly. It is the responsibility of the application to have the configuration values so that your assembly will have access to them when it is loaded into the AppDomain.
Config files for GAC objects
[ "c#", ".net", "configuration" ]
I created a pretty fancy winforms app for my company. We had a graphic designer create the GUI, which was a pain to implement, all graphical buttons, lots of layered backgrounds and logos, animations, etc. But now my company wants to resell it under different brands. But since I *mostly* coded it well, I told my higher-ups I could have a totally rebranded version done in under a week. Basically all I would do is change a bunch of settings in an XML settings file, swap out the graphics with a new set, and build. Problem is if they want 5 or 6 different brands, I'd have 5 different builds to support (I really should be supporting one build with different templates). The problem is it's not easy (as far as I know) to swap out the images in a winforms app. I have all the graphical resources in a single folder, but once each file is entered into its respective image list or container in Visual Studio, the only way to get it to update is to remove it and re-add it; changing the source folder doesn't cause the embedded image to refresh. This would be incredibly tedious for each build; there has got to be an easier way. **Add On:** So after some further investigation, I am leaning towards some sort of resx file editor. However the ones I have seen so far are more focused on translating strings to various languages, and are either very weak, or cannot edit binary resources like bitmaps/PNGs. Though if you open a resx file in an XML viewer (I use Notepad2 with .resx set to use XML syntax highlighting) MS is kind enough to tell you exactly how each type is compiled (mostly variations of Base64).
I think your goal should be having "brandable" resource files; you're essentially localizing your application, except you just have a few different versions of English. You can use ResGen.exe and ResourceManager to load an external resources file, so you could use 5 different "resources" files but keep your code base the same. This post may also help... <http://social.msdn.microsoft.com/Forums/en-US/csharpgeneral/thread/b388c700-0e07-452b-a19e-ce02775f78a6/> **Edit:** BTW, I will second the comment that if you're going through a great deal of effort on this, consider WPF... Most of those "graphical" elements could possibly be done natively, especially if it's gradients and stuff, not to mention the easy templating.
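As a rough sketch of that idea (the base name, resource keys, and folder below are invented, and each brand's `.resources` file is assumed to have been produced with ResGen.exe):

```csharp
using System.Drawing;
using System.Resources;

// Hypothetical: pick the brand's .resources file at startup; swapping
// the file on disk rebrands the app without a rebuild.
ResourceManager brand = ResourceManager.CreateFileBasedResourceManager(
    "BrandA",   // looks for BrandA.resources ...
    "Brands",   // ... in the Brands directory
    null);      // null = use the default ResourceSet type

string title = brand.GetString("WindowTitle");
Image logo = (Image)brand.GetObject("CompanyLogo");
```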
What I would do is just load all the graphics off the disk at startup from a folder and create any imagelists needed as appropriate, instead of doing this in the designer. If you are worried that someone would steal the graphics, then I would create a simple file format (possibly encrypted) for my graphics and a small simple app for you or the designer to use to convert into this format from regular files. Then it's just a question of swapping out this folder between different brands.
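A minimal sketch of that approach (the folder layout and the idea of keying images by file name are assumptions, not from the answer):

```csharp
using System.Drawing;
using System.IO;
using System.Windows.Forms;

// Hypothetical: each brand is a plain folder of PNGs; the file name
// (without extension) becomes the ImageList key used by the forms.
static ImageList LoadBrandImages(string brandFolder)
{
    var list = new ImageList { ColorDepth = ColorDepth.Depth32Bit };
    foreach (string path in Directory.GetFiles(brandFolder, "*.png"))
    {
        list.Images.Add(Path.GetFileNameWithoutExtension(path),
                        Image.FromFile(path));
    }
    return list;
}
```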
Graphically template a .NET winforms application
[ "c#", "visual-studio", "winforms", "user-interface", "desktop" ]
How do you choose between implementing a value object (the canonical example being an address) as an immutable object or a struct? Are there performance, semantic or any other benefits of choosing one over the other?
> How do you choose between implementing a value object (the canonical example being an address) as an immutable object or a struct? I think your options are wrong. Immutable object and struct are not opposites, nor are they the only options. Rather, you've got four options: * Class + mutable + immutable * Struct + mutable + immutable I argue that in .NET, the default choice should be a **mutable class** to represent **logic** and an **immutable class** to represent an **entity**. I actually tend to choose immutable classes even for logic implementations, if at all feasible. Structs should be reserved for small types that emulate value semantics, e.g. a custom `Date` type, a `Complex` number type, and similar entities. The emphasis here is on *small* since you don't want to copy large blobs of data, and indirection through references is actually cheap (so we don't gain much by using structs). I tend to make structs *always* immutable (I can't think of a single exception at the moment). Since this best fits the semantics of the intrinsic value types I find it a good rule to follow.
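To illustrate the last point, a small immutable struct might look like this (a toy example, not taken from the question):

```csharp
// Small, immutable value type: fields are readonly, set once in the
// constructor, and "modifying" operations return a new value instead.
public struct Complex
{
    private readonly double re;
    private readonly double im;

    public Complex(double re, double im)
    {
        this.re = re;
        this.im = im;
    }

    public double Re { get { return re; } }
    public double Im { get { return im; } }

    public Complex Add(Complex other)
    {
        return new Complex(re + other.re, im + other.im);
    }
}
```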
There are a few things to consider: A struct is allocated on the stack (usually). It is a value type, so passing the data around across methods can be costly if it is too large. A class is allocated on the heap. It is a reference type, so passing the object around through methods is not as costly. Generally, I use structs for immutable objects that are not very large. I only use them when there is a limited amount of data being held in them or I want immutability. An example is the `DateTime` struct. I like to think that if my object is not as lightweight as something like a `DateTime`, it is probably not worth being used as a struct. Also, if my object makes no sense being passed around as a value type (also like `DateTime`), then it may not be useful to use as a struct. Immutability is key here though. Also, I want to stress that structs are **not** immutable by default. You have to make them immutable by design. In 99% of situations I encounter, a class is the proper thing to use. I find myself not needing immutable classes very often. It's more natural for me to think of classes as mutable in most cases.
Choosing between immutable objects and structs for value objects
[ "c#", "oop", "domain-driven-design" ]
I recently started a new project and I'm trying to keep my instance variables always initialized to some value, so that none of them is at any time null. Small example below: ``` public class ItemManager { ItemMaster itemMaster; List<ItemComponentManager> components; ItemManager() { itemMaster = new ItemMaster(); components = new ArrayList<ItemComponentManager>(); } ... } ``` The point is mainly to avoid the tedious checking for *null* before using an instance variable somewhere in the code. So far, it's working well and you mostly don't need the *null* value, as you can also check for an empty string or empty list, etc. I'm not using this approach for method-scoped variables as their scope is very limited and so doesn't affect other parts of the code. This is all kind of experimental, so I'd like to know if this approach could work or if there are some pitfalls which I'm not seeing yet. Is it generally a good idea to keep instance variables initialized?
I usually treat an empty collection and a null collection as two separate things: An empty collection implies that I know there are zero items available. A null collection will tell me that I don't know the state of the collection, which is a different thing. So I really do not think it's an either/or. And I would declare the variables **final** if I initialize them in the constructor. If you declare it final it becomes very clear to the reader that this collection cannot be null.
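A small sketch of that convention (the class and method names here are made up):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class ItemManager {
    // final: the compiler guarantees the field is assigned exactly once,
    // so readers know the reference itself can never be null.
    private final List<String> components = new ArrayList<String>();

    // Empty list: "I know there are zero items."
    List<String> knownComponents() {
        return Collections.unmodifiableList(components);
    }

    // Null: "the state of this collection is unknown / not loaded yet."
    List<String> pendingComponents() {
        return null;
    }
}
```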
First and foremost, **all** non-final instance variables must be declared **private** if you want to retain control! Consider lazy instantiation as well -- this also avoids "bad state" but only initializes upon use: ``` class Foo { private List<X> stuff; public void add(X x) { if (stuff == null) stuff = new ArrayList<X>(); stuff.add(x); } public List<X> getStuff() { if (stuff == null) return Collections.emptyList(); return Collections.unmodifiableList(stuff); } } ``` (Note the use of Collections.unmodifiableList -- unless you really want a caller to be able to add/remove from your list, you should make it immutable) Think about how many instances of the object in question will be created. If there are many, and you always create the lists (and might end up with many empty lists), you could be creating many more objects than you need. Other than that, it's really a matter of taste and if you can have meaningful values when you construct. If you're working with a DI/IOC, you want the framework to do the work for you (though you could do it through constructor injection; I prefer setters) -- Scott
Should I keep instance variables in Java always initialized or not?
[ "java", "variables", "initialization" ]
Is there any way that you can combine predicates? Let's say I have something like this: ``` class MatchBeginning : public binary_function<CStdString, CStdString, bool> { public: bool operator()(const CStdString &inputOne, const CStdString &inputTwo) const { return inputOne.substr(0, inputTwo.length()).compare(inputTwo) == 0; } }; int main(int argc, char* argv[]) { CStdString myString("foo -b ar -t az"); vector<CStdString> tokens; // splits the string every time it encounters a "-" split(myString, tokens, "-", true, true); vector<CStdString>::iterator searchResult = find_if(tokens.begin(), tokens.end(), not1(bind2nd(MatchBeginning(), "-"))); return 0; } ``` This works, but now I'd like to do something like: ``` searchResult = find_if(tokens.begin(), tokens.end(), bind2nd(MatchBeginning(), "-b") || not1(bind2nd(MatchBeginning(), "-"))); ``` So I'd like to find the first string that starts with "-b" or the first string that does not start with "-". However, this gives me an error (binary '||' undefined). Is there any way to do this?
I can recommend boost.lambda for combining function-objects for such tasks, although it is a bit heavyweight for such a simple problem. (**edit**) See the community wiki answer started by xhantt for a good example using STL. (old, deprecated answer) You can write your own utility for this, similar to this: ``` // here we define the combiner... template<class Left, class Right> class lazy_or_impl { Left m_left; Right m_right; public: lazy_or_impl(Left const& left, Right const& right) : m_left(left), m_right(right) {} typename Left::result_type operator()(typename Left::argument_type const& a) const { return m_left(a) || m_right(a); } }; // and a helper function which deduces the template arguments // (thanks to xtofl for pointing this out) template<class Left, class Right> lazy_or_impl<Left, Right> lazy_or(Left const& left, Right const& right) { return lazy_or_impl<Left, Right>(left, right); } ``` and then use it: `... lazy_or(bind1st(...), bind1st(...)) ...`
Well you have [std::logical\_or](http://www.sgi.com/tech/stl/logical_or.html) and [std::compose2](http://www.sgi.com/tech/stl/binary_compose.html) that can do the job ``` find_if(tokens.begin(), tokens.end(), compose2(logical_or<bool>(), bind2nd(MatchBeginning(), "-b"), bind2nd(MatchBeginning(), "-") ) ); ``` but I think that boost::lambda and/or phoenix are more readable in the end, and are my recommended solution. Credits should go to SGI documentation.
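If your compiler supports C++11, a lambda sidesteps the binder/composer machinery entirely; here is a sketch using `std::string` rather than `CStdString`:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Returns an iterator to the first token that either starts with "-b"
// or does not start with "-" at all (end() if there is no such token).
std::vector<std::string>::iterator findTarget(std::vector<std::string>& tokens)
{
    return std::find_if(tokens.begin(), tokens.end(),
        [](const std::string& s) {
            return s.compare(0, 2, "-b") == 0 || s.compare(0, 1, "-") != 0;
        });
}
```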
Combining Predicates
[ "c++", "stl", "functional-programming", "predicate" ]
I'm using `IQueryable<T>` interfaces throughout my application and defer execution of SQL on the DB until methods like `.ToList()` are called. I will need to find the Count of certain lists sometimes -without- needing to use the data in the list being counted. I know from my SQL experience that a SQL COUNT() is far less work for the DB than the equivalent SELECT statement that returns all the rows. So my question is: will it be less work on the DB to return the count from the `IQueryable<T>`'s `Count()` method than rendering the `IQueryable<T>` to a list and invoking the list's `Count()` method? I suspect it will, given that the `ToList()` will fire the SELECT SQL and then count the rows in a separate step. I'm hoping the `Count()` on the `IQueryable<T>` simply renders out the SQL for a SQL COUNT() query instead, but I'm not certain. Do you know?
Calling `ToList()` will return a genuine `List<T>` with all the data, which means fetching all the data. Not good. Calling `Count()` should indeed render the SQL to do the count on the database side. Much better. The simplest way to check this, however, is to enable logging in your data context (or whatever the equivalent is for your particular provider) and see what queries are actually being sent.
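For instance, with LINQ to SQL you can watch the generated SQL directly; the `MyDataContext`/`Customers` names below are invented for illustration:

```csharp
using (var db = new MyDataContext())
{
    db.Log = Console.Out; // LINQ to SQL dumps the generated SQL here

    // Translates to roughly: SELECT COUNT(*) FROM Customers WHERE ...
    // Only a single scalar comes back over the wire.
    int active = db.Customers.Count(c => c.IsActive);

    // Fetches every matching row (all columns), builds a List<T>,
    // and only then counts it in memory. Much more expensive.
    int activeToo = db.Customers.Where(c => c.IsActive).ToList().Count;
}
```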
I'm not sure if it's a hard and fast rule, but LINQ methods you add to an IQueryable will be added into the LINQ expression tree - unless they are one of the methods that actually cause the tree to be evaluated (like ToList and Single etc). In the case of LinqToSql you'll know if it can't convert something into the SQL statement because you'll get a runtime exception stating that the method is not supported. e.g. ``` var something = dbContext.SomeTable .Select(c => c.col1 == "foo") .Distinct() .ToList() .Count(); ``` In the above, Select() and Distinct() are included in the SQL query passed to the server because they are added to an IQueryable. Count() is just acting on the list that was returned by the SQL query. So you don't want to do it that way :-) In your case, Count() will definitely be faster than Select() because the resulting SQL statement will indeed incorporate the count, so the server only needs to return a single number rather than a list of rows.
count VS select in LINQ - which is faster?
[ "sql", "linq", "performance", "iqueryable" ]
I have a PHP file at my server **root**.. index.php .. which [`include`](http://www.php.net/include/)'s .. DIR/main.php Now .. DIR/main.php .. [`include`](http://www.php.net/include/)'s many **nearby** PHP files using **relative** URLs using .. `include("./common1.php");` Any way I can change the relative-URL **base path** so when including "DIR/main.php" it can relatively access its nearby PHP files like "DIR/common1.php", instead of trying to find "common1.php" at the site root?
First, set the *"relative-URL **base path**"* to your directory ``` set_include_path( get_include_path() . PATH_SEPARATOR . '/DIR' ); ``` Second, include your file! ``` require( 'main.php' ); ``` That should work, though I've not tested it.
Take a look at [set\_include\_path](http://www.php.net/manual/en/function.set-include-path.php) Edit: When appending paths to [include\_path](http://www.php.net/manual/en/ini.core.php#ini.include-path) be sure to use the PATH\_SEPARATOR constant as it is intended to make your include path OS agnostic. ``` <?php set_include_path(implode(PATH_SEPARATOR, array( get_include_path(), '/DIR1', '/DIR2/DIR3', dirname(__FILE__), ))); ?> ```
Including a PHP file, treats relative URLs from that directory
[ "php", "url", "relative-path", "server-side-scripting" ]
G'day all, I have a console project where it is intended the user presses the keyboard direction keys (non-numeric keypad) to move an avatar. I am having difficulty coding to check for the press of these keys. In Pascal it was easy enough to use "readkey" and code, for example, for #80 for the down keypress. However, I am stumped how to implement the same functionality in Java, though I think I understand the use of System.in and BufferedInputStream. Could anyone help me out? Your thoughts or hints are much appreciated.
The Console support issue in Java is well known; I am not sure that this is doable. This was not initially possible with System.in since it used to work line-based. Sun eventually added a java.io.Console class. Here are its JavaDocs: <http://java.sun.com/javase/6/docs/api/java/io/Console.html> Once you get the console (I think from `System.console()`), you can get a reader and perhaps read characters from it, but I'm not sure if it delivers individual key presses. Generally, you're supposed to use Swing or AWT if you want access to the keyboard, which is silly. As of 2007, there was a feature request about it: [here](https://bugs.java.com/bugdatabase/view_bug?bug_id=6552816)
If `java.io.Console` doesn't work for you (I haven't tried it), try [JLine](https://jline.github.io/). I used it to solve a [vaguely similar problem](https://stackoverflow.com/questions/414237/).
Detecting and acting on keyboard direction keys in Java
[ "java", "keyboard", "key", "direction" ]
How can I get the Cartesian product (every possible combination of values) from a group of lists? For example, given ``` somelists = [ [1, 2, 3], ['a', 'b'], [4, 5] ] ``` How do I get this? ``` [(1, 'a', 4), (1, 'a', 5), (1, 'b', 4), (1, 'b', 5), (2, 'a', 4), (2, 'a', 5), ...] ``` --- One common application for this technique is to avoid deeply nested loops. See [Avoiding nested for loops](https://stackoverflow.com/questions/11174745) for a more specific duplicate. Similarly, this technique might be used to "explode" a dictionary with list values; see [Combine Python Dictionary Permutations into List of Dictionaries](https://stackoverflow.com/questions/15211568) . If you want a Cartesian product of *the same* list with itself multiple times, `itertools.product` can handle that elegantly. See [Operation on every pair of element in a list](https://stackoverflow.com/questions/942543) or [How can I get "permutations with repetitions" from a list (Cartesian product of a list with itself)?](https://stackoverflow.com/questions/3099987). Many people who already know about `itertools.product` struggle with the fact that it expects separate arguments for each input sequence, rather than e.g. a list of lists. The accepted answer shows how to handle this with `*`. However, the use of `*` here to unpack arguments is **fundamentally not different** from any other time it's used in a function call. Please see [Expanding tuples into arguments](https://stackoverflow.com/questions/1993727) for this topic (and use that instead to close duplicate questions, as appropriate).
Use [`itertools.product`](https://docs.python.org/3/library/itertools.html#itertools.product), which has been available since Python 2.6. ``` import itertools somelists = [ [1, 2, 3], ['a', 'b'], [4, 5] ] for element in itertools.product(*somelists): print(element) ``` This is the same as: ``` for element in itertools.product([1, 2, 3], ['a', 'b'], [4, 5]): print(element) ```
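If you're curious what `product` is doing conceptually, a simplified pure-Python equivalent (a teaching sketch only; the real implementation is more efficient and also supports a `repeat` argument) looks like this:

```python
def cartesian_product(lists):
    """Yield tuples that combine one element from each input list."""
    if not lists:
        yield ()  # the product of zero lists is a single empty tuple
        return
    for item in lists[0]:
        for rest in cartesian_product(lists[1:]):
            yield (item,) + rest

somelists = [[1, 2, 3], ['a', 'b'], [4, 5]]
result = list(cartesian_product(somelists))
```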
``` import itertools >>> for i in itertools.product([1,2,3],['a','b'],[4,5]): ... print i ... (1, 'a', 4) (1, 'a', 5) (1, 'b', 4) (1, 'b', 5) (2, 'a', 4) (2, 'a', 5) (2, 'b', 4) (2, 'b', 5) (3, 'a', 4) (3, 'a', 5) (3, 'b', 4) (3, 'b', 5) >>> ```
How to get the Cartesian product of multiple lists
[ "python", "list", "cartesian-product" ]
I've done a bit of reading related to the concurrency issues with sqlite, but I don't see how they'd apply to Django since it's inherently single threaded. I'm not using any multiprocess modules either. I have absolutely no experience with concurrent programming either, so if someone can identify WHY the following code is causing an OperationalError: 'database is locked' I'd be grateful. views.py ``` def screening(request, ovramt=None): errors = [] if request.method == "POST": form = ScreeningForm(request.POST) if form.is_valid(): print "Woo valid!!" return HttpResponse() else: # GET if ovramt is None: o = Ovramt.objects.select_related(depth=1).latest("date_completed") print "found?" print o.id else: try: o = Ovramt.objects.select_related(depth=1).get(id=ovramt) except: errors.append("OVRAMT NOT FOUND") if o.residents.count() <= 0: o.add_active_residents() residents = list(o.residents) ``` models.py ``` def add_active_residents(self): ssa_res = SSA_Resident.objects.select_related(depth=1).filter(ssa=self.ssa, active=True) for r in ssa_res: self.residents.add(r.resident) # Fails Here self.save() ``` The add\_active\_residents method works fine, until it is called from the views module. Is there an open connection to the database open in the view which prevents writing from the model? Does someone have an explanation why this code will error?
In the following method ``` def add_active_residents(self): ssa_res = SSA_Resident.objects.select_related(depth=1).filter(ssa=self.ssa, active=True) for r in ssa_res: self.residents.add(r.resident) # Fails Here self.save() ``` Why is there a select\_related? You only really need the FKs of the `ssa_res` items. Why do additional queries for related items?
Are you using Python 2.6? If so, this is (apparently) a known issue that can be mitigated by adding: ``` DATABASE_OPTIONS = {'timeout': 30} ``` to your settings.py See <http://code.djangoproject.com/ticket/9409>
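Django's `timeout` option is passed straight through to the standard-library `sqlite3` connection. A minimal standalone sketch of the same setting outside Django (in-memory database, hypothetical table name):

```python
import sqlite3

# timeout=30 makes a writer wait up to 30 seconds for the database
# lock instead of raising "database is locked" immediately.
conn = sqlite3.connect(":memory:", timeout=30)
conn.execute("CREATE TABLE resident (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO resident (name) VALUES (?)", ("Alice",))
conn.commit()
count = conn.execute("SELECT COUNT(*) FROM resident").fetchone()[0]
```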
Django and Sqlite Concurrency issue
[ "", "python", "django", "sqlite", "concurrency", "" ]
I think it's a good practice to always return empty lists or arrays instead of null when a method comes up with no results to avoid null checks in the code. Because Rhino Mocks returns the default value for an object, which is null for lists and arrays, a lot of times I have to either add the null checks back in or set up the mocks with expectations to return lists. Is there a way to configure or extend Rhino Mocks with this behaviour? ``` var repositoryMock = MockRepository.GenerateMock<ICustomerRepository>(); IList<Customer> customers = repositoryMock.getCustomers(); Assert.IsNotNull(customers); Assert.AreEqual(0, customers.Count ); ```
Turns out that this behaviour is possible with [Moq](http://code.google.com/p/moq/) as long as the returned object is IEnumerable. The following tests pass: ``` [Test] public void EmptylListTest() { var repositoryMock = new Mock<ICustomerRepository>(); IEnumerable<Customer> customers = repositoryMock.Object.GetCustomers(); Assert.IsNotNull(customers); Assert.AreEqual(0, customers.Count()); } [Test] public void EmptyArrayTest() { var repositoryMock = new Mock<ICustomerRepository>(); Customer[] customerArray = repositoryMock.Object.GetCustomerArray(); Assert.IsNotNull(customerArray); Assert.AreEqual(0, customerArray.Length); } public interface ICustomerRepository { IEnumerable<Customer> GetCustomers(); Customer[] GetCustomerArray(); } ```
There is nothing in Rhino Mocks to automatically solve your problem. The simplest solution is to simply setup an extention/utility method for each type that uses SetupResult (or repeat.any) to configure a default value. You could always be tricky and enumerate through members, checking for ILists / Arrays and setup the mocks dynamically - it depends on how many types you have vs how much type you could dedicate to this utility method. Good luck!
Returning empty lists as default with Rhino Mocks
[ "", "c#", "unit-testing", "tdd", "rhino-mocks", "moq", "" ]
Is it true that MS SQL restrict self-referencing constraints with ON DELETE CASCADE option? I have a table with parent-child relation, PARENT\_ID column is foreign key for ID. Creating it with ON DELETE CASCADE option causes error > "Introducing FOREIGN KEY constraint > may cause cycles or multiple cascade > paths. Specify ON DELETE NO ACTION or > ON UPDATE NO ACTION, or modify other > FOREIGN KEY constraints." I can't believe that I have to delete this hierarchy in recursive mode. Is there any issue except triggers?
It is the case that you cannot set up ON DELETE CASCADE on a table with self-referencing constraints. There is a potential of cyclical logic problems, hence it won't allow it. There's a good article [here](http://msdn.microsoft.com/en-us/library/aa902684(SQL.80).aspx) - though it's for version 8 rather than 9 of SQL - though the same rules apply.
I just answered [another question](https://stackoverflow.com/q/38977004/5089204) where *this* question was linked as a duplicate. I think it's worth placing my answer here too: This is not possible. You can solve this with an `INSTEAD OF TRIGGER` ``` create table locations ( id int identity(1, 1), name varchar(255) not null, parent_id int, constraint pk__locations primary key clustered (id) ) GO INSERT INTO locations(name,parent_id) VALUES ('world',null) ,('Europe',1) ,('Asia',1) ,('France',2) ,('Paris',4) ,('Lyon',4); GO ``` --This trigger uses a recursive CTE to collect the IDs of all descendants of the rows you are deleting. These IDs are then deleted. ``` CREATE TRIGGER dbo.DeleteCascadeLocations ON locations INSTEAD OF DELETE AS BEGIN WITH recCTE AS ( SELECT id,parent_id FROM deleted UNION ALL SELECT nxt.id,nxt.parent_id FROM recCTE AS prv INNER JOIN locations AS nxt ON nxt.parent_id=prv.id ) DELETE FROM locations WHERE id IN(SELECT id FROM recCTE); END GO ``` --Test it here, try with different IDs. You can try `WHERE id IN(4,3)` also... ``` SELECT * FROM locations; DELETE FROM locations WHERE id=4; SELECT * FROM locations GO ``` --Clean-Up (Careful with real data!) ``` if exists(select 1 from INFORMATION_SCHEMA.TABLES where TABLE_NAME='locations') ---DROP TABLE locations; ```
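The recursive-CTE idea can be tried out without a SQL Server instance. Here is a sketch of the same subtree delete against SQLite from Python — the table layout is copied from the answer above, and SQLite accepts a `WITH RECURSIVE` clause in front of `DELETE`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE locations (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    parent_id INTEGER REFERENCES locations(id)
);
INSERT INTO locations (id, name, parent_id) VALUES
    (1, 'world', NULL), (2, 'Europe', 1), (3, 'Asia', 1),
    (4, 'France', 2), (5, 'Paris', 4), (6, 'Lyon', 4);
""")

# Same recursive walk as the trigger: collect the subtree rooted at
# id = 4 (France), then delete every collected row in one statement.
conn.execute("""
    WITH RECURSIVE subtree(id) AS (
        SELECT id FROM locations WHERE id = 4
        UNION ALL
        SELECT l.id FROM locations AS l JOIN subtree AS s ON l.parent_id = s.id
    )
    DELETE FROM locations WHERE id IN (SELECT id FROM subtree)
""")
conn.commit()
remaining = [row[0] for row in conn.execute("SELECT name FROM locations ORDER BY id")]
```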
Self-referencing constraint in MS SQL
[ "", "sql", "sql-server", "sql-server-2005", "t-sql", "" ]
I fear this is a really stupid question, but here goes: Why does the clear method in Java's default LinkedList implementation bother to walk the list and unhook all the nodes? Why not just unhook the header and leave the rest of the list connected -- the GC will get it anyway, no? Here's the method: ``` /** * Removes all of the elements from this list. */ public void clear() { Entry<E> e = header.next; while (e != header) { Entry<E> next = e.next; e.next = e.previous = null; e.element = null; e = next; } header.next = header.previous = header; size = 0; modCount++; } ``` Why walk it? Why not just skip to `header.next = header.previous = header;`? Best I can figure is it helps the GC...? This link <http://java.sun.com/docs/books/performance/1st_edition/html/JPAppGC.fm.html#997442> sort of suggests that. TIA...
Their method ensures that even if other code still holds references to particular nodes, the other nodes will be GC'ed. Otherwise, even a single external reference to one of the nodes would prevent the entire chain from being collected. Also, other operations in the list might be going on simultaneously (e.g. views through `subList()` or `Collections.unmodifiableList()`, iterators), and this ensures that those things perceive the list as "empty" immediately.
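The same effect is easy to demonstrate in CPython, where reference counting makes the consequence immediate: if outside code holds one node of a still-linked chain, nothing in the chain can be collected. A toy sketch with our own `Node`/`clear` (not Java's classes):

```python
import gc
import weakref

class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

def clear(head):
    # Walk the chain and unhook every node, mirroring LinkedList.clear().
    node = head
    while node is not None:
        nxt = node.next
        node.prev = node.next = None
        node = nxt

# Build a three-node chain and watch the last node with a weak reference.
a, b, c = Node(1), Node(2), Node(3)
a.next, b.prev, b.next, c.prev = b, a, c, b
watcher = weakref.ref(c)

held = b          # someone outside the list still points at b
clear(a)
a = b = c = None  # drop our own strong references
gc.collect()

# c is collectable because clear() broke the b -> c link; without the
# unhooking, `held` alone would have kept the whole chain alive.
collected = watcher() is None
```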
IIRC, this was a change made in JDK6 to assist performance of certain (generational) GC algorithms. Often, the `List` itself and older nodes will be in an older generation than some of the other nodes. The younger generations will get collected more frequently, with the result that young nodes get copied about before it is discovered that all the nodes are garbage. So it's a minor performance optimisation. Memory performance optimisation is a little odd in that often it's not the code which is causing the problem that is taking the additional time to execute.
clear() impl in Java's LinkedList
[ "", "java", "linked-list", "" ]
It's a small thing, really: I have this function that converts dict objects to xml. Here's the function: ``` def dictToXml(d): from xml.sax.saxutils import escape def unicodify(o): if o is None: return u''; return unicode(o) lines = [] def addDict(node, offset): for name, value in node.iteritems(): if isinstance(value, dict): lines.append(offset + u"<%s>" % name) addDict(value, offset + u" " * 4) lines.append(offset + u"</%s>" % name) elif isinstance(value, list): for item in value: if isinstance(item, dict): lines.append(offset + u"<%s>" % name) addDict(item, offset + u" " * 4) lines.append(offset + u"</%s>" % name) else: lines.append(offset + u"<%s>%s</%s>" % (name, escape(unicodify(item)), name)) else: lines.append(offset + u"<%s>%s</%s>" % (name, escape(unicodify(value)), name)) addDict(d, u"") lines.append(u"") return u"\n".join(lines) ``` For example, it converts this dictionary ``` { 'site': { 'name': 'stackoverflow', 'blogger': [ 'Jeff', 'Joel' ] } } ``` to: ``` <site> <name>stackoverflow</name> <blogger>jeff</blogger> <blogger>joel</blogger> </site> ``` It works, but the `addDict` function looks a little too repetitive. I'm sure there's a way to refactor it into 3 co-recursive functions named `addDict`, `addList` and `addElse`, but my brain is stuck. Any help? Also, any way to get rid of the `offset +` thing in every line would be nice. **NOTE**: I chose these semantics because I'm trying to match the behavior of the [json-to-xml converter](http://www.json.org/javadoc/org/json/XML.html#toString(java.lang.Object)) in [org.json](http://www.json.org/java/index.html), which I use in a different part of my project. If you got to this page just looking for a dictionary to xml converter, there are some really good options in some of the answers. (Especially [pyfo](http://foss.cpcc.edu/pyfo/)).
I noticed you have commonality in adding items. Using this commonality I would refactor adding an item to a separate function. ``` def addItem(item, name, offset): if isinstance(item, dict): lines.append(offset + u"<%s>" % name) addDict(item, offset + u" " * 4) lines.append(offset + u"</%s>" % name) else: lines.append(offset + u"<%s>%s</%s>" % (name, escape(unicodify(item)), name)) def addList(value,name, offset): for item in value: addItem(item, name, offset) def addDict(node, offset): for name, value in node.iteritems(): if isinstance(value, list): addList(value, name, offset) else: addItem(value, name, offset) ``` Advisory warning: this code is not tested or written by anybody who actually uses Python.
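Pulled together as a self-contained Python 3 version of that refactoring — a sketch: `unicodify` is folded into `str`, names are snake_cased, and behavior otherwise follows the answer:

```python
from xml.sax.saxutils import escape

def dict_to_xml(d):
    lines = []

    def add_item(item, name, offset):
        # A dict becomes a wrapping element; anything else becomes text.
        if isinstance(item, dict):
            lines.append(offset + "<%s>" % name)
            add_dict(item, offset + " " * 4)
            lines.append(offset + "</%s>" % name)
        else:
            text = "" if item is None else str(item)
            lines.append(offset + "<%s>%s</%s>" % (name, escape(text), name))

    def add_list(value, name, offset):
        for item in value:
            add_item(item, name, offset)

    def add_dict(node, offset):
        for name, value in node.items():
            if isinstance(value, list):
                add_list(value, name, offset)
            else:
                add_item(value, name, offset)

    add_dict(d, "")
    lines.append("")
    return "\n".join(lines)

xml = dict_to_xml({'site': {'name': 'stackoverflow', 'blogger': ['Jeff', 'Joel']}})
```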
``` >>> from pyfo import pyfo >>> d = ('site', { 'name': 'stackoverflow', 'blogger': [ 'Jeff', 'Joel' ] } ) >>> result = pyfo(d, pretty=True, prolog=True, encoding='ascii') >>> print result.encode('ascii', 'xmlcharrefreplace') <?xml version="1.0" encoding="ascii"?> <site> <blogger> Jeff Joel </blogger> <name>stackoverflow</name> </site> ``` To install [pyfo](http://foss.cpcc.edu/pyfo/): ``` $ easy_install pyfo ```
refactor this dictionary-to-xml converter in python
[ "", "python", "xml", "dry", "" ]
I am writing an application where we will need to extend a basic entity into a number of different things (eg employee, vehicle etc). The design is as such that there is a Entity table and a second table with type specific values eg an employee will have an ID Number but a vehicle will have a registration number. I have inherited from the class entity generated in the Data Context but am having trouble with the casting in my repository. What is the correct way of doing this? ``` public class cAccountEmployee : cAccountEntity { public string id_number { get { try { return this.cAccountEntityValues.Single(e => e.type == 1).value; } catch (Exception) { return ""; } } set { try { this.cAccountEntityValues.Single(e => e.type == 1).value = value; } catch (Exception) { this.cAccountEntityValues.Add(new cAccountEntityValue() { accountentity_id = this.id, cAccountEntity = this, type = 1, value = value }); } } } ``` } Then in my repository (does not inherit anything) ``` public IEnumerable<cAccountEmployee> All(int accountholder_id) { return db.cAccountEntities.Where(e => e.accountholder_id == accountholder_id).OrderBy(a => a.name).Cast<cAccountEmployee>(); } public cAccountEmployee Single(int id) { return db.cAccountEntities.Single(a => a.id == id) as cAccountEmployee; } ``` The cast fails in the single method and hence I get back null. It is my understanding you cannot define explicit or implicit operators from or to a base class? How can I get the base class Linq result to cast up to the inherited Employee class, while still maintaining its db state so I can submit changes?
With LINQ-to-SQL, there are two ways inheritance can work: * discriminator over a single table (not suitable since your data is not homogeneous) * base-class / multi-table (note that this isn't supported in the dbml - only if you write the classes manually) LINQ-to-SQL does not support multi-table inheritance (i.e. a single object with data from multiple tables). Entity Framework does, but is more complex; you use `.Cast<T>` and `.OfType<T>` in EF to cast/filter based on sub-types. You might want to look at: * [querying based on an ID](https://stackoverflow.com/questions/156113/linqtosql-and-abstract-base-classes/156365#156365) (combine with `GetTable<T>()`) * [specifying a base-class in the dbml](https://stackoverflow.com/questions/411515/way-to-automatically-add-a-linqtosql-base-class-to-entities) What is the purpose of the base class here? If it adds behaviour, then you can edit the dbml to specify a common base-class for all your entities. If it has **data properties** then it gets trickier. Personally, I simply wouldn't do it this way... I would keep separate classes for the different types, and use the data-context correctly, using the separate tables per type: ``` public IEnumerable<Employee> All(int accountholder_id) { return db.Employees.Where(e => e.accountholder_id == accountholder_id) .OrderBy(a => a.name); } public Employee Single(int id) { return db.Employees.Single(a => a.id == id); } ``` So - can you clarify what the `cAccountEntity` does here?
> How can I get the base class Linq result to cast up to the inherited Employee class It's not an upcast, it's a downcast. I think you don't understand casting or possibly - instance type vs reference type. ``` public class Animal { } public class Zebra : Animal { } public class Zoo { public void ShowZebraCast() { Animal a = new Animal(); Zebra z = (Zebra)a; } } ``` System.InvalidCastException: Unable to cast object of type 'Animal' to type 'Zebra'. In the same way, you have an instance of Entity that you can't downcast to use an Employee reference against it. You could convert the types, but then you have to supply a conversion method. ``` public partial class Animal { } public class Zebra : Animal { } //in another file public partial class Animal{ public Zebra ToZebra(){ return new Zebra() { //set Zebra properties here. }; } } public class Zoo { public void ShowZebraConvert() { Animal a = new Animal(); Zebra z = a.ToZebra(); } } ```
Inheriting a Linq to SQL class and cast the result of a linq query
[ "", "c#", "linq", "inheritance", "casting", "radix", "" ]
I have the following part of an AJAX application, which gives no errors, but also nothing is displayed to the screen, so I am unsure of where the problem lies. Calling this page directly from a browser with ?cmd&id=1 should return, or even calling it without ?cmd should return the cmd error message. edit: added test cases: I do get the cmd error message, but when I pass &id=1 (1 is a valid id), no html gets returned whatsoever, view source is completely blank. Have I used echo incorrectly or something similar? edit2: added echo as first line: the first echo is not seen whatsoever edit3: After going back to an older version and making all the changes again, I now get test charset output when calling with valid cmd and id paramters. The code I am using is identical to what is pasted below. the code: ``` <?php echo "hello world"; error_reporting(E_ALL); if (isset($_GET["cmd"])) $cmd = $_GET["cmd"]; else die("You should have a 'cmd' parameter in your URL"); $id = $_GET["id"]; $con = mysqli_connect("localhost", "user", "password", "db"); echo "test con"; if(!$con) { die('Connection failed because of' .mysqli_connect_error()); echo "test error"; } //$con->query("SET NAMES 'utf8'"); $con->set_charset("utf8")); echo "test charset"; if($cmd=="GetSALEData") { echo "test cmdifloop"; if ($getRecords = $con->prepare("SELECT * FROM SALES WHERE PRODUCT_NO = ?")) { echo "test recordifloop"; $getHtml = $con->prepare("SELECT PRODUCT_DESC FROM SALES WHERE PRODUCT_NO = ?"); $getHtml->bind_param("s", $id); $getHtml->execute(); $getHtml->bind_result($PRODUCT_DESC); $getRecords->bind_param("s", $id); $getRecords->execute(); $getRecords->bind_result($PRODUCT_NO, $PRODUCT_NAME, $SUBTITLE, $CURRENT_PRICE, $START_PRICE, $PRICE_COUNT, $QUANT_TOTAL, $QUANT_SOLD, $ACCESSSTARTS, $ACCESSENDS, $ACCESSORIGIN_END, $USERNAME, $BEST_PRICEDER_ID, $FINISHED, $WATCH, $BUYITNOW_PRICE, $PIC_URL, $PRIVATE_SALE, $SALE_TYPE, $ACCESSINSERT_DATE, $ACCESSUPDATE_DATE, $CAT_DESC, $CAT_PATH, $COUNTRYCODE, 
$LOCATION, $CONDITIONS, $REVISED, $PAYPAL_ACCEPT, $PRE_TERMINATED, $SHIPPING_TO, $FEE_INSERTION, $FEE_FINAL, $FEE_LISTING, $PIC_XXL, $PIC_DIASHOW, $PIC_COUNT, $ITEM_SITE_ID ); while ($getRecords->fetch()) { $ccodes = array( "1" => "USA", "77" => "Germany", "16" => "Austria", "122" => "Luxemburg", "193" => "Switzerland", ); $conditions = array( "0" => "USA", "77" => "Germany", "16" => "Austria", ); $country = $ccodes[$COUNTRYCODE]; if ( $country == "" ) $country = "Not applicable"; $columns = array('FINISHED', 'WATCH', 'PRIVATE_SALE', 'REVISED', 'PAYPAL_ACCEPT', 'PRE_TERMINATED', 'PIC_XXL', 'PIC_DIASHOW'); foreach($columns as $column) { $$column = $row[$column] ? 'YES' : 'NO'; } imageResize($PIC_URL, 250, 300); file_put_contents($id, file_get_contents($PIC_URL)); $html = htmlentities(json_encode($PRODUCT_DESC)); $shortDate = strftime("%d %m %Y", strtotime($ACCESSSTARTS)); echo "<h1>".$PRODUCT_NAME."</h1> <div id='leftlayer' class='leftlayer'> <p><strong>Username: </strong>".$USERNAME." <p><strong>PRODUCT Number: </strong>".$PRODUCT_NO." <p><strong>Subtitle: </strong>".$SUBTITLE." <p><strong>SALE Start: </strong>".$ACCESSSTARTS." <p><strong>SALE End: </strong>".$ACCESSENDS." <p><strong>SALE Type: </strong>".$SALE_TYPE." <p><strong>Category: </strong>".$CAT_DESC." </div> <div class='leftlayer2'> <p><strong>Condition: </strong> ".$CURRENT_PRICE." <p><strong>Total Items: </strong> ".$QUANT_TOTAL." <p><strong>Total Sales: </strong> ".$QUANT_SOLD." <p><strong>Start Price: &#8364</strong> ".$START_PRICE." <p><strong>Buyitnow Price: &#8364</strong> ".$BUYITNOW_PRICE." <p><strong>PRICEs: </strong> ".$PRICE_COUNT." <p><strong>Revised: </strong> ".$REVISED." </div> <div class='leftlayer2'> <p><strong>Private: </strong> ".$PRIVATE_SALE." <p><strong>Finished: </strong> ".$FINISHED." <p><strong>Cancelled: </strong> ".$PRE_TERMINATED." <p><strong>Paypal: </strong> ".$PAYPAL_ACCEPT." <p><strong>Country: </strong> ". $country ." <p><strong>Location: </strong> ".$LOCATION." 
<p><strong>Shipping to: </strong> ". $country ." </div> <div id='rightlayer'> <img src='".$PIC_URL."' width='".$imageSize["width"]."' height='".$imageSize["height"]."'> <p><a href='#' onclick=\"makewindows(" . $html . "); return false;\">Click for full description </a></p> </div> </div> </div>"; } } function imageResize($imageURL, $maxWidth, $maxHeight) { $imageSize["width"] = 0; $imageSize["height"] = 0; $size = getimagesize($imageURL); if ($size) { $imageWidth = $size[0]; $imageHeight = $size[1]; $wRatio = $imageWidth / $maxWidth; $hRatio = $imageHeight / $maxHeight; $maxRatio = max($wRatio, $hRatio); if ($maxRatio > 1) { $imageSize["width"] = $imageWidth / $maxRatio; $imageSize["height"] = $imageHeight / $maxRatio; return $imageSize; } else { $imageSize["width"] = $imageWidth; $imageSize["height"] = $imageHeight; return $imageSize; } } else { die(print_r(error_get_last())); } } } ``` I apologize for the spacing, but it happens automatically when I press the code button.
You have an extra parenthesis on line 22 This ``` $con->set_charset("utf8")); ``` Needs to be this ``` $con->set_charset("utf8"); ``` Two debugging tips for the future. For the first, you'll need shell access to your web host, or you'll need to install PHP locally on your development machine. If you're running Unix, you may already have it. Open a terminal and type ``` php -v ``` If you have it, you can check the syntax of a PHP file by doing ``` //on *nix php yourfile.php //or on windows c:\path\to\php.exe yourfile.php ``` This will bail on gross syntax errors. Also, google/search this site for "setup php locally tutorial" or something similar to learn how to get a full copy of A Webserver, mysql database and php running on your own machine. The second suggestion, which will be easier if you install a local copy, is to check your error logs. Even when PHP isn't displaying error messages, errors will still be logged to a file somewhere. In your case, you'd have seen something like this in the log file ``` PHP Parse error: syntax error, unexpected ')' in foo.php on line 22 ```
This will help you see your errors: ``` ini_set('display_errors', '1'); ```
php giving a blank page, no errors
[ "", "php", "mysql", "ajax", "mysqli", "" ]
So here are the details (I am using C# BTW): I receive a 32bpp image (JPEG compressed) from a server. At some point, I would like to use the Palette property of a bitmap to color over-saturated pixels (brightness > 240) red. To do so, I need to get the image into an indexed format. I have tried converting the image to a GIF, but I get quality loss. I have tried creating a new bitmap in an index format by these methods: ``` // causes a "Parameter not valid" error Bitmap indexed = new Bitmap(orig.Width, orig.Height, PixelFormat.Indexed) // no error, but the resulting image is black due to information loss I assume Bitmap indexed = new Bitmap(orig.Width, orig.Height, PixelFormat.Format8bppIndexed) ``` I am at a loss now. The data in this image is changed constantly by the user, so I don't want to manually set pixels that have a brightness > 240 if I can avoid it. If I can set the palette once when the image is created, my work is done. If I am going about this the wrong way to begin with please let me know. EDIT: Thanks guys, here is some more detail on what I am attempting to accomplish. We are scanning a tissue slide at high resolution (pathology application). I write the interface to the actual scanner. We use a line-scan camera. To test the line rate of the camera, the user scans a very small portion and looks at the image. The image is displayed next to a track bar. When the user moves the track bar (adjusting line rate), I change the overall intensity of the image in an attempt to model what it would look like at the new line rate. I do this using an ImageAttributes and ColorMatrix object currently. When the user adjusts the track bar, I adjust the matrix. This does not give me per pixel information, but the performance is very nice. I *could* use LockBits and some unsafe code here, but I would rather not rewrite it if possible. When the new image is created, I would like for all pixels with a brightness value of > 240 to be colored red. 
I was thinking that defining a palette for the bitmap up front would be a clean way of doing this.
Thanks for the help everyone. It seems that this can be solved using the ImageAttributes class and simply setting a color remap table. ``` ColorMap[] maps = new ColorMap[someNum] // add mappings imageAttrs.SetRemapTable(maps); ``` Thanks for the help again, at least I learned something.
Going from 32bpp to 8bpp indexed will almost always result in quality loss, unless the original image has less than 256 colors total. Can you create another image that is a overlay with the affected pixels red, then show both of those? Since you are going for brightness > 240, you can convert the overlay to grayscale first, then to indexed to get the overbright pixels. You don't specify what you are doing with it once you have tagged the offenders, so I don't know if that will work.
How to convert a 32bpp image to an indexed format?
[ "", "c#", "image-processing", "color-palette", "" ]
I'm using the class designer in Visual Studio 2008 and when I add properties to my classes they are created with an empty setter and a getter that throws a NotImplementedException. In the properties window there's an option called "Auto-Implemented" which is always greyed out. Is there a way to change the settings for this so by default new properties will be auto-implemented, or some way to turn this option on so I don't have to keep editing the code each time I add in a new property?
AFAIK, you can't. I asked this at Barcelona a year or so ago - I don't think anything has changed. Basically, as I understand it, this is done directly (not via an editable snippet).
For Visual Studio 2008 you could install these "Power Toys", which give you another option in the class designer menu, "Create auto-implemented property": <http://modeling.codeplex.com/> I'll be waiting for this in Visual Studio 2010 too... otherwise it's so annoying
How do I create auto-implemented properties in the VS class designer
[ "", "c#", "visual-studio", "" ]
Specifically, Sql Server 2005/T-Sql. I have a field that is mostly a series of two characters, and they're all supposed to be upper case but there's some legacy data that predates the current DB/System, and I need to figure out which records are in violation of the upper casing covenant. I thought this would work: ``` select * from tbl where ascii(field1) <> ascii(upper(field1)) ``` And indeed it returned me a handful of records. They've since been corrected, and now that query returns no data. But I've got people telling me there is still mixed case data in the DB, and I just found an example: 'FS' and 'Fs' are both reporting the same ascii value. Why is this approach flawed? What is a better way to go about this, or how can I fix this approach to work correctly?
If all the data should have been in upper case, just do an update ``` update tbl set field1 = upper(field1) ``` but to answer your original question this query should give you the results that you expect: ``` select * from tbl where field1 COLLATE Latin1_General_CS_AS <> upper(field1) ``` Edit: just noticed that the suggestion to use COLLATE was also posted by [Ian](https://stackoverflow.com/questions/511381/differentiating-between-ab-and-ab-in-a-character-database-field/511409#511409)
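The flaw in the original query is that T-SQL's `ASCII()` only looks at the first character of its argument — the same mistake is easy to reproduce in any language with an `ord()`-style function. A quick Python illustration of why 'ab' is caught but 'Fs' slips through the first-character check:

```python
def first_char_differs(s):
    # Mimics `ascii(field1) <> ascii(upper(field1))`:
    # only the first character is ever compared.
    return ord(s[0]) != ord(s.upper()[0])

def any_char_differs(s):
    # Full-string comparison, like a case-sensitive COLLATE check.
    return s != s.upper()

samples = ["ab", "Ab", "AB", "Fs", "FS"]
flagged_by_ascii = [s for s in samples if first_char_differs(s)]
flagged_properly = [s for s in samples if any_char_differs(s)]
```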
ASCII is only comparing the first letter. You'd have to compare each letter, or change the database collation to be case sensitive. You can change collation on an entire database level, or just on one column for a specific query, so: ``` SELECT myColumn FROM myTable WHERE myColumn COLLATE Latin1_General_CS_AS <> upper(myColumn) ```
Differentiating between "AB" and "Ab" in a character Database Field
[ "", "sql", "sql-server", "database", "ascii", "" ]
So I was looking at some code that was checked in and I got all puzzled over: ``` // Amount of days before cancellation can't be done enum Cancellation { Limit = 2 }; ``` Asking the guy who checked it in, he argued that it's much better to use enums instead of static variables, better than this: ``` private static int CANCELLATION_LIMIT = 2; ``` So we started arguing. My argument was that he was using enum as a way to store values (it'll break if there were two enum symbols with the same value). He argued it was an antipattern to have static variables in a class. My question is what best practice should be used for either?
Enums are typed. That is, if you have to pass a certain 'state' to a method, you can only pass 'valid' arguments. For instance: ``` enum OrderState { pending = 1, shipped = 2 } public IList<Order> GetOrdersInState( OrderState ) { } ``` This is a good example -imho- of using enums. When OrderState is an int for which you create 2 const ints, you have no restriction and are able to pass invalid values. The compiler won't complain. However, in the case that you're bringing up, I think using an enum is not a valid solution. It's a misuse of an enum to store an int; a const int should be used. Enums are good, but they should be used where they must be used. They're not the preferred tool in every situation. Having a const or static var is in this case not an antipattern.
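The distinction isn't C#-specific. As a neutral illustration, the same "typed set of values vs. bare constant" split in Python's `enum` module — `OrderState`/`orders_in_state` are hypothetical names mirroring the answer's example:

```python
from enum import Enum

class OrderState(Enum):
    PENDING = 1
    SHIPPED = 2

def orders_in_state(state):
    # Accept only members of the typed set; a bare int is rejected.
    if not isinstance(state, OrderState):
        raise TypeError("expected an OrderState member")
    return "querying state %d" % state.value

ok = orders_in_state(OrderState.SHIPPED)

# A lone cancellation limit, by contrast, is just a constant,
# not a member of a logical set of values.
CANCELLATION_LIMIT = 2
```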
return "Is it logically a set of values" ? "Enum is appropriate" : "Static const is fine" *(I'm a big fan of the logically consistent)*
Should static variables be replaced with enums?
[ "", "c#", "enums", "static-members", "" ]
Say we have a simple ASP.NET repeater; in a row we have one checkbox, one label (the database id of the record, not visible, used for postback) and one text (in a table cell). Now I want to make it like in Windows: if you click on the text, the checkbox should be selected or deselected. Does somebody have a link or solution for this, maybe already with jQuery? Edit: as I said, it is an ASP.NET repeater, and the table is for layout, so using the Checkbox.Text property is not designable (e.g. line wrap). The ids of the checkbox and text are dynamically added/changed on rendering of the repeater, therefore the label solution also does not really work.
Assuming that you won't need jQuery and the table construct: ``` <asp:Repeater runat="server"> <ItemTemplate> <asp:CheckBox runat="server" Text="your text" /> </ItemTemplate> </asp:Repeater> ``` This basically renders the solution provided by Ricardo Vega: whatever you put in the Text property of the checkbox is clickable and checks/unchecks the checkbox ... therefore you should use <%# Eval("...") %>. You can skin (via CSS) the margin of the label. **Edit:** After thinking about this once again, there is another solution: ``` <asp:Repeater runat="server"> <HeaderTemplate> <table> </HeaderTemplate> <ItemTemplate> <tr> <td><asp:Checkbox runat="server" ID="checkbox" /></td> <td><asp:Label runat="server" AssociatedControlID="checkbox">Your text</asp:Label></td> </tr> </ItemTemplate> <FooterTemplate> </table> </FooterTemplate> </asp:Repeater> ``` Notes: You can use the Text attribute of the asp:Label element as well!
maybe I don't understand completely but why don't you use the html attribute "for" in the label tag? Like: ``` <label for="field_id">Checkbox 1</label> <input id="field_id" type="checkbox" /> ``` And that will make the checkbox act as clicked if the label is clicked. So you don't have to depende on JS to do this. Edit: If you really really want to use jQuery for this: ``` $('td').click(function(){ $(':checkbox',this).attr('checked',!$(':checkbox',this).attr('checked')); }); ``` Change 'td' as needed.
checkbox select/deselect in repeater when click on text
[ "", "javascript", "jquery", "asp.net", "repeater", "" ]
If not what is a good friendly java framework for newcomers? I want to build something like twitter.
You can go a very long way with just servlets and JDBC. Consider JSPs using JSTL as an added nicety. But I'd bet that if your web site consists of more than a page or two delivering database content to the browser, you'll quickly discover why web frameworks are so numerous. Hard-wired page navigation, control logic, blurred layers, etc. will cause headaches as your site grows. You'll find you have a lot of similar, repetitive, but slightly different code for each bit of new functionality. If you have to maintain a site and keep it going, eventually it's likely that you'll reach the conclusion that there are patterns ripe for capturing. Who knows? Maybe you'll decide as a result of your experience that you want to take a crack at solving the web framework problem, too. Whatever you do, I think having distinct layers is key. Don't have servlets do all the work - they're for handling HTTP requests. Embed the work in service classes that your servlets can simply call. That way you can reuse that logic. Keep persistence code in its own layer and don't let it leak out into others. You can have reusable components that will survive your first efforts. If you decide to switch to a web framework you'll just snap these layers into place and off you go. I wrote my first significant web site without any frameworks - just straight servlets, JSPs and JDBC. It gave me a better understanding of what was going on. I think it helps.
Check out [Head First Servlets and JSP](http://oreilly.com/catalog/9780596005405/) for the fundamentals of building Java Web applications *without* using complicated frameworks. It's good to understand what's going on behind the scenes when you use a framework, and this book is a great introduction. [![HFS&JSP](https://i.stack.imgur.com/68boQ.gif)](https://i.stack.imgur.com/68boQ.gif) (source: [oreilly.com](http://oreilly.com/catalog/covers/9780596005405_cat.gif))
Is it possible to build a Java web application without using a framework?
[ "", "java", "web-applications", "frameworks", "twitter", "" ]
What is an easy way in Python to format integers into strings representing thousands with K, and millions with M, and leaving just couple digits after comma? I'd like to show 7436313 as 7.44M, and 2345 as 2,34K. Is there some % string formatting operator available for that? Or that could be done only by actually dividing by 1000 in a loop and constructing result string step by step?
I don't think there's a built-in function that does that. You'll have to roll your own, e.g.: ``` def human_format(num): magnitude = 0 while abs(num) >= 1000: magnitude += 1 num /= 1000.0 # add more suffixes if you need them return '%.2f%s' % (num, ['', 'K', 'M', 'G', 'T', 'P'][magnitude]) print('the answer is %s' % human_format(7436313)) # prints 'the answer is 7.44M' ```
This version does not suffer from the bug in the previous answers where 999,999 gives you 1000.0K. It also only allows 3 significant figures and eliminates trailing 0's. ``` def human_format(num): num = float('{:.3g}'.format(num)) magnitude = 0 while abs(num) >= 1000: magnitude += 1 num /= 1000.0 return '{}{}'.format('{:f}'.format(num).rstrip('0').rstrip('.'), ['', 'K', 'M', 'B', 'T'][magnitude]) ``` The output looks like: ``` >>> human_format(999999) '1M' >>> human_format(999499) '999K' >>> human_format(9994) '9.99K' >>> human_format(9900) '9.9K' >>> human_format(6543165413) '6.54B' ```
Formatting long numbers as strings
[ "", "python", "formatting", "string", "integer", "" ]
I have this: ``` function foo($a='apple', $b='brown', $c='Capulet') { // do something } ``` Is something like this possible: ``` foo('aardvark', <use the default, please>, 'Montague'); ```
Found this, which is probably still correct: <http://www.webmasterworld.com/php/3758313.htm> Short answer: no. Long answer: yes, in various kludgey ways that are outlined in the above.
If it’s your function, you could use `null` as wildcard and set the default value later inside the function: ``` function foo($a=null, $b=null, $c=null) { if (is_null($a)) { $a = 'apple'; } if (is_null($b)) { $b = 'brown'; } if (is_null($c)) { $c = 'Capulet'; } echo "$a, $b, $c"; } ``` Then you can skip them by using `null`: ``` foo('aardvark', null, 'Montague'); // output: "aarkvark, brown, Montague" ```
Is it possible to skip parameters that have default values in a function call?
[ "", "php", "optional-parameters", "function-call", "named-parameters", "" ]
I am using SQL Express 2008 as a backend for a web application. The problem is that the web application is used during business hours, so sometimes during lunch or break time, when no users are logged in for a 20 minute period, SQL Express will kick into idle mode and free its cache. I am aware of this because it logs something like: > Server resumed execution after being idle 9709 seconds or > Starting up database 'xxxxxxx' > in the event log. I would like to avoid this idle behavior. Is there any way to configure SQL Express to stop idling, or at least widen the time window to longer than 20 minutes? Or is my only option to write a service that polls the db every 15 minutes to keep it spooled up? After reading articles like [this](http://blogs.msdn.com/sqlexpress/archive/2008/02/22/sql-express-behaviors-idle-time-resources-usage-auto-close-and-user-instances.aspx) it doesn't look too promising, but maybe there is a hack or registry setting someone knows about.
That behavior is not configurable. You do have to implement a method to poll the database every so often. Also, like the article you linked to said, set the AUTO CLOSE property to false.
Just a short SQL query like this every few minutes will prevent SQLserver from going idle: ``` SELECT TOP 0 NULL FROM [master].[dbo].[MSreplication_options] GO ```
Is there a way to stop SQL Express 2008 from Idling?
[ "", "sql", "database", "sql-server-express", "" ]
Is there an easy way to add an ID (Identity(1,1) & PK) column to a table that already has data? I have picked up a project that was freelanced out to a horrible developer that didn't put a PK, index or anything on the tables he made. Now that I am LINQ-ifying it, I have no PK to insert or update off of.
``` ALTER TABLE MyTable ADD id INT IDENTITY(1,1) PRIMARY KEY CLUSTERED ```
I'd be tempted to do it in three stages - 1. Create a new table with all the same columns, plus your primary key column (script out the table and then alter it to add a PK field) 2. Insert into the new table all of the values from the old table 3. Once you're happy with it, delete the old table and rename the new one (which now carries the primary key) to the old table's name.
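A sketch of those three stages in T-SQL — the table and column names below are placeholders for your actual schema:

```sql
-- 1. New table: same columns, plus an identity primary key
CREATE TABLE MyTableNew (
    id   INT IDENTITY(1,1) PRIMARY KEY,
    col1 INT,
    col2 VARCHAR(50)
);

-- 2. Copy the existing rows across (identity values are generated here)
INSERT INTO MyTableNew (col1, col2)
SELECT col1, col2 FROM MyTable;

-- 3. Once you're happy, drop the old table and take over its name
DROP TABLE MyTable;
EXEC sp_rename 'MyTableNew', 'MyTable';
```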
Easy way to add an ID column when there is data present
[ "", "c#", ".net", "sql-server", "linq-to-sql", "t-sql", "" ]
I have a couple of methods that return a bool depending on their success. Is there anything wrong with calling those methods inside of the IF() ? ``` //&& makes sure that Method2() will only get called if Method1() returned true, use & to call both methods if(Method1() && Method2()) { // do stuff if both methods returned TRUE } ``` Method2() doesn't need to fire if Method1() returns FALSE. Let me know if there's any problem with the code above. Thank you. **EDIT:** since there was nothing wrong with the code, I'll accept the most informative answer ... added the comment to solve the "newbie & &&" issue
I'll throw in that you can use the `& operator` (as opposed to `&&`) to guarantee that both methods are called even if the left-hand side is `false`, if for some reason in the future you wish to avoid short-circuiting. The inverse works for the `| operator`, where even if the left-hand condition evaluates to `true`, the right-hand condition will be evaluated as well.
No, there is nothing wrong with method calls in the if condition. Actually, that can be a great way to make your code more readable! For instance, it's a lot cleaner to write: ``` private bool AllActive() { return x.IsActive && y.IsActive && z.IsActive; } if(AllActive()) { //do stuff } ``` than: ``` if(x.IsActive && y.IsActive && z.IsActive) { //do stuff } ```
Calling methods inside if() - C#
[ "", "c#", "language-features", "short-circuiting", "" ]
I'm trying to echo a variable containing 1400000, so I have: `echo round(1400000);` This gives 1.4E+6 instead of the full number. Anybody have an idea on how to display it fully?
It seems that round was the problem. I changed it with `number_format()` and this does the job just fine. Thanks Aron and Paul for the answers.
Related to your question, I also came across [this comment](http://www.php.net/manual/en/language.types.float.php#83577) on the PHP website. > PHP switches from the standard decimal > notation to exponential notation for > certain "special" floats. You can see > a partial list of such "special" > values with this: ``` for( $tmp = 0, $i = 0; $i < 100; $i++ ) { $tmp += 100000; echo round($tmp),"\n"; } ``` > So, if you add two floats, end up with > a "special" value, e.g. 1.2E+6, then > put that value unmodified into an > update query to store the value in a > decimal column, say, you will likely > get a failed transaction, since the > database will see "1.2E+6" as varchar > data, not decimal. Likewise, you will > likely get an XSD validation error if > you put the value into xml. > > I have to be honest: this is one of > the strangest things I have seen in > any language in over 20 years of > coding, and it is a colossal pain to > work around. It seems there has not been a "real" fix yet, but judging from the comments in the [bug report Paul Dixon referred to](https://stackoverflow.com/questions/491154/round-in-php-shows-power-in-stead-of-full-number/491174#491174) earlier, his solution seems to work.
round in PHP shows scientific notation instead of full number
[ "", "php", "floating-point", "integer", "rounding", "" ]
A. What does this do? ``` require ("./file.php"); ``` B. In comparison to this? ``` require ("file.php"); ``` --- (It's **not** up-one-directory, which would be) ``` require ("../file.php"); ```
`./` is the *current* directory. It is largely the same as just `file.php`, but in many cases (this one included) it doesn't check any standard places PHP might look for a file, instead checking *only* the current directory. From the [PHP documentation](http://us.php.net/manual/en/function.include.php) (notice the last sentence): > Files for including are first looked for in each include\_path entry relative to the current working directory, and then in the directory of current script. E.g. if your include\_path is libraries, current working directory is /www/, you included include/a.php and there is include "b.php" in that file, b.php is first looked in /www/libraries/ and then in /www/include/. If filename begins with ./ or ../, it is looked only in the current working directory.
The first version forces the internal mechanism to include files relative to the... directly executed file. So for example you have **index.php** ``` // directly executed script (php -f index.php or from a browser) include 'second.php'; ``` **second.php** ``` // This is included relative to index.php // Actually, it is first searched relative to include_path, then relative // to index.php include './third.php'; ``` **third.php** ``` // This is included relative to second.php ONLY. It does not search // include_path return "foo"; ```
What does the dot-slash do to PHP include calls?
[ "", "php", "url", "include", "relative-path", "" ]
I have two applications: one is the front end application, the other is the backend one. Each application has its own copy of an identical user database. When a user logs into the front end application, I simultaneously log into the backend application using the web services provided by the backend application. The question now is: is it possible for me to destroy the backend session from the front end application? The backend uses the $_SESSION superglobal array to store all the session information. If yes, how?
Yes it is. I assume that you are using the same session ID name for both front and back. I think the default is PHPSESSID. If this is the case, then all the session details will be stored in the same location. I am also assuming that the domain of the front and back end is the same. If it is different, then it won't work, as web browsers will not share session information between different domains. If you use different session ID names for front and back end, then you will have to get tricky. (The session ID for both will be in the request data; you will just have to load up both sessions and destroy them both.) Use: ``` session_destroy(); ```
You'll need to delete the session record from the database or the file (depending on your system's configuration). The system likely defaults to use a file based session and you can determine the directory using the [session\_save\_path](https://www.php.net/session_save_path). It's almost certainly much easier to use sessions stored in a database instead of on the file system.
Destroy session variables in another application
[ "", "php", "session", "" ]
There is a Java library by the name of [Uncommon Maths](https://uncommons-maths.dev.java.net/) that claims to provide better random number generators than Sun and (potentially) even BouncyCastle. How can one determine whether their library can be trusted? I trust Sun and BouncyCastle because a lot of important companies use their stuff. It's not clear if Uncommon Maths falls into the same category. Any ideas?
Uncommon Maths claims to pass the [Diehard tests](http://en.wikipedia.org/wiki/Diehard_tests). That's as reliable as I know. You can always be a scientist and re-run those tests for yourself as an independent check.
Good question ;) All of the RNG algorithms are well-known algorithms invented by people smarter than myself. I am a programmer, not a mathematician. I've just ported the original C code. So you have to hope that I haven't introduced any bugs in the conversion. As with most open source software, there is NO WARRANTY. If you want to use it for simulations, I think it's a very good choice. If you want to use it for cryptography, something like [Fortuna](http://en.wikipedia.org/wiki/Fortuna_(PRNG)) would be better. Uncommons Maths is not as widely used as some libraries. It gets between 5 and 20 downloads a week. I don't know how many of those actually go on to use it in serious applications. I use it for [evolutionary computation](https://watchmaker.dev.java.net) and a few trivial poker-related programs that I've been playing with. I have run Diehard on each of the RNG implementations and it does not highlight any flaws. That said, Diehard's results are not the easiest to interpret: > Thus you should not be surprised with > occasional p-values near 0 or 1, such > as .0012 or .9983. When a bit stream > really FAILS BIG, you will get p's of > 0 or 1 to six or more places. By all > means, do not, as a > Statistician might, think that a p < > .025 or p > .975 means that the RNG has > "failed the test at the .05 level". > Such p's happen among the hundreds > that DIEHARD produces, even with good > RNGs. So keep in mind that "p > happens". The Uncommons Maths RNGs all satisfy this fuzzy definition of success. There are one or two p-values outside the 0.025 .. 0.975 range, but none that "fail big". This is comparable to the results obtained with Java's SecureRandom (and better than java.util.Random, which does "fail big"). If you want to test this for yourself, there is a class called [DiehardInputGenerator](https://uncommons-maths.dev.java.net/nonav/api/org/uncommons/maths/random/DiehardInputGenerator.html) in the distribution. This generates the 12MB file that you need to run Diehard.
Can the uncommon maths library be trusted?
[ "", "java", "random", "" ]
I have a jar file that runs this code: ``` public class InputOutput { /** * @param args * @throws IOException */ public static void main(String[] args) throws IOException { boolean cont = true; BufferedReader in = new BufferedReader(new InputStreamReader(System.in)); while (cont) { System.out.print("Input something: "); String temp = in.readLine(); if (temp.equals("end")) { cont = false; System.out.println("Terminated."); } else System.out.println(temp); } } } ``` I want to program another java class that executes this jar file and can get the input and send output to it. Is it possible? The current code I have is this but it is not working: ``` public class JarTest { /** * Test input and output of jar files * @author Jack */ public static void main(String[] args) { try { Process io = Runtime.getRuntime().exec("java -jar InputOutput.jar"); BufferedReader in = new BufferedReader(new InputStreamReader(io.getInputStream())); OutputStreamWriter out = new OutputStreamWriter(io.getOutputStream()); boolean cont = true; BufferedReader consolein = new BufferedReader(new InputStreamReader(System.in)); while (cont) { String temp = consolein.readLine(); out.write(temp); System.out.println(in.readLine()); } } catch (IOException e) { e.printStackTrace(); } } } ``` Thanks for your help
Using Runtime.exec is platform dependent. If you're using Windows, try prepending: ``` cmd /c ``` to ``` java -jar .... etc. ``` Something like: ``` ...getRuntime().exec("cmd /c java -jar InputOutput.... ``` See this also: [Make system call and return stdout output.](https://stackoverflow.com/questions/236737/language-showdown-how-do-you-make-a-system-call-that-returns-the-stdout-output-a/373223#373223)
Do you need to run the jar file in a different process? If not, you can write a Java program that invokes InputOutput.main(). Alternatively, if the name of the jar/class is only known at run-time, you can create a new class-loader, load the said class and invoke main() via reflection. As for the redirection of input/output streams you can use System.setOut, setIn, setErr.
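A minimal sketch of the reflection route mentioned above. The nested `InputOutput` class here is a hypothetical stand-in for the real class inside the jar; for a jar known only at runtime you would point a `URLClassLoader` at the jar's URL and call `loadClass` instead of `Class.forName`:

```java
import java.lang.reflect.Method;

public class ReflectiveRunner {

    // Hypothetical stand-in for the class packaged inside InputOutput.jar.
    public static class InputOutput {
        public static String lastArg;

        public static void main(String[] args) {
            lastArg = args.length > 0 ? args[0] : "";
        }
    }

    // Load the named class and invoke its public static main(String[]) method.
    public static void runMain(String className, String[] args) throws Exception {
        Class<?> cls = Class.forName(className);
        Method main = cls.getMethod("main", String[].class);
        main.invoke(null, (Object) args); // cast prevents varargs expansion
    }

    public static void main(String[] args) {
        try {
            runMain("ReflectiveRunner$InputOutput", new String[] { "hello" });
            System.out.println(InputOutput.lastArg); // prints "hello"
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Note the `(Object) args` cast when invoking: without it, reflection's varargs handling would unpack the array into separate parameters.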
Getting input and output from a jar file run from java class?
[ "", "java", "input", "executable-jar", "" ]
I need to import the data from a .csv file into a database table (MS SQL Server 2005). SQL BULK INSERT seems like a good option, but the problem is that my DB server is not on the same box as my WEB server. [This question](https://stackoverflow.com/questions/440289/ms-sql-server-bulk-insert-across-a-network) describes the same issue; however, I don't have any control over my DB server and can't share any folders on it. I need a way to import my .csv programmatically (C#), any ideas? **EDIT:** this is part of a website where users can populate the table with .csv contents, and this would happen on a weekly basis, if not more often
I ended up using [CSV Reader](http://www.codeproject.com/KB/database/CsvReader.aspx). I saw a reference to it in one of @Jon Skeet's answers, but I can't find it again to link to it.
You have several options: 1. [SSIS](http://www.accelebrate.com/sql_training/ssis_tutorial.htm) 2. [DTS](http://www.verio.com/support/documents/view_article.cfm?doc_id=3846) 3. custom application Any of these approaches ought to get the job done. If it is just scratch work it might be best to write a throwaway app in your favorite language just to get the data in. If it needs to be a longer-living solution you may want to look into SSIS or DTS as they are made for this type of situation.
Alternative to SQL BULK INSERT
[ "", "c#", "sql-server-2005", ".net-2.0", "" ]
Here is the dictionary I have ``` propertyList = { "id": "int", "name": "char(40)", "team": "int", "realOwner": "int", "x": "int", "y": "int", "description": "char(255)", "port": "bool", "secret": "bool", "dead": "bool", "nomadic": "bool", "population": "int", "slaves": "int", } ``` But when I print it out with "\n".join(propertyList) I get this ``` name nomadic dead port realOwner secret slaves team y x population id description ``` I know that a dictionary is unordered but it comes out the same every time and I've no idea why.
For older versions of Python, the real question should be “why not?” — An unordered dictionary is usually implemented as a [hash table](http://en.wikipedia.org/wiki/Hash_table) where the order of elements is well-defined but not immediately obvious ([the Python documentation used to state this](https://docs.python.org/2.7/library/stdtypes.html#dict.items)). Your observations match the rules of a hash table perfectly: apparent arbitrary, but constant order. Python has since changed its `dict` implementation to preserve the order of insertion, and [this is *guaranteed* as of Python 3.7](https://mail.python.org/pipermail/python-dev/2017-December/151283.html). The implementation therefore no longer constitutes a pure hash table (but a hash table is still *used* in its implementation).
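A quick demonstration of the modern behavior, using a dict shaped like the one in the question (assuming Python 3.7+, where insertion order is guaranteed):

```python
# In Python 3.7+ iteration follows insertion order; in older CPython versions
# the same loop reflected the underlying hash table layout instead.
propertyList = {}
for key in ["id", "name", "team", "realOwner", "x", "y"]:
    propertyList[key] = "int"

print("\n".join(propertyList))  # keys print in the order they were inserted
```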
The [specification](http://docs.python.org/tutorial/datastructures.html#dictionaries) for the built-in dictionary type disclaims any preservation of order, it is best to think of a dictionary as an unordered set of `key: value` pairs... You may want to check the [`OrderedDict` module](http://www.xs4all.nl/~anthon/Python/ordereddict/), which is an implementation of an ordered dictionary with Key Insertion Order.
Why is python ordering my dictionary like so?
[ "", "python", "dictionary", "" ]
I'm writing a small webpage that will enable students to answer questions and get feedback on their answers. Part of this detection checks for common errors to give them guidance. Specifically I want to check if their answer is a power of ten out from the actual answer. If the answer was 3.93E-6, this condition should activate if they type 3.93E2, 3.93E-9, 3.93 etc. The obvious way to me to test this is to do something like this: ``` var correct = 3.93E-6; var entry = 3.93E-2; //really comes from an input box. if (!(entry / correct)%10) { alert ("power of ten error"); } ``` However, this doesn't work, as entry / correct misbehaves for very large/small numbers. How can I fix this? Live code at: <http://bradshawenterprises.com/test.html>
``` var num = 12.4123; var numString = num.toExponential() // numString = "1.24123e+1" ``` This normalizes the number, but you have to parse it manually. (E.g. depending on how accurate the result has to be…)
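Building on that idea, here is a hedged sketch — the function name and the two-digit mantissa comparison are choices of mine, not anything built in:

```javascript
// Flags entries that have the right mantissa but the wrong power of ten.
// toExponential(2) keeps two fractional digits, so mantissas compare as
// short strings like "3.93".
function powerOfTenError(correct, entry) {
  function parse(n) {
    var parts = n.toExponential(2).split('e'); // "3.93e-6" -> ["3.93", "-6"]
    return { mantissa: parts[0], exponent: parseInt(parts[1], 10) };
  }
  var c = parse(correct);
  var e = parse(entry);
  return c.mantissa === e.mantissa && c.exponent !== e.exponent;
}

console.log(powerOfTenError(3.93e-6, 3.93e-2)); // true  (power-of-ten slip)
console.log(powerOfTenError(3.93e-6, 3.93e-6)); // false (exact answer)
```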
Here's one way to see if two numbers are off by approximately a power of ten: ``` var correct = 3.93E-6; var entry = 3.93E-2; var epsilon = .01; var log10_ratio = Math.log(correct/entry)/Math.log(10); if (Math.abs(Math.round(log10_ratio) - log10_ratio) < epsilon) { alert ("power of ten error"); } ```
Floating point number in JS
[ "", "javascript", "floating-point", "" ]
I'm trying to default a value in a field to the current date, but my code: ``` <?php echo '<input type="text" name="makeupassignment" size="50" value="'.date("m/d/y").'">'; ?> ``` is only printing this: '; ?>
Looks to me that the web page is not running through the PHP interpreter. Check out the source for the web page. Do you see the PHP source code when you click "show source" in your browser? I think what happens is that your browser parses this as follows: * open tag whose type is ?php * two fields named "echo" and "'<input" * then fields with values: type="text", name="makeupassignment", size="50", and value="'.date(", and then m/d/y").'" * then there is the closing tag > * and then the rest is output verbatim
I would guess that PHP is not enabled/installed on the webserver you're testing with. View source on that page and you'll see all your code, I bet
PHP - Defaulting form field value to the current date
[ "", "php", "datetime", "date", "" ]
``` function someFunction(link) { someOtherFunction('div' + link); } ``` By calling someFunction("Test"), the string "divTest" gets passed to someOtherFunction(). But I want the value of the variable "divTest" to be passed. How can that be done?
Make your variables members of an object. Then you can use `[]` to access the objects members using a string: ``` var byname = { divabc: ..., divxyz: ... }; function someFunction(link) { someOtherFunction(byname['div'+link]); } someFunction('abc'); // calls someOtherFunction(byname.divabc) ```
For this kind of dynamic construction/access of variable names you should use the alternative object notation where: ``` object.member === object["member"] ``` This way you could construct your variable name as a string and use it inside square brackets for accessing object members.
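For instance, a minimal sketch — the `registry` object and its keys are invented for illustration:

```javascript
// Bracket notation lets you build the member name at runtime.
var registry = {
  divTest: "value of divTest",
  divOther: "another value"
};

function lookup(link) {
  return registry["div" + link]; // same as registry.divTest when link === "Test"
}

console.log(lookup("Test")); // prints "value of divTest"
```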
constructing javascript variable names at runtime
[ "", "javascript", "" ]
I am trying to get a `java.net.URI` object from a `String`. The string has some characters which will need to be replaced by their percentage escape sequences. But when I use URLEncoder to encode the String with UTF-8 encoding, even the / are replaced with their escape sequences. How can I get a valid encoded URL from a String object? **<http://www.google.com?q=a> b** gives **http%3A%2F%2Fwww.google.com...** whereas I want the output to be **<http://www.google.com?q=a%20b>** Can someone please tell me how to achieve this. I am trying to do this in an Android app. So I have access to a limited number of libraries.
You might try: `org.apache.commons.httpclient.util.URIUtil.encodeQuery` in [Apache commons-httpclient](http://hc.apache.org/httpclient-3.x/) project Like this (see [URIUtil](http://svn.apache.org/viewvc/httpcomponents/oac.hc3x/trunk/src/java/org/apache/commons/httpclient/util/URIUtil.java?view=markup)): ``` URIUtil.encodeQuery("http://www.google.com?q=a b") ``` will become: ``` http://www.google.com?q=a%20b ``` You can of course do it yourself, but URI parsing can get pretty messy...
Android has always had the Uri class as part of the SDK: <http://developer.android.com/reference/android/net/Uri.html> You can simply do something like: ``` String requestURL = String.format("http://www.example.com/?a=%s&b=%s", Uri.encode("foo bar"), Uri.encode("100% fubar'd")); ```
Java - Convert String to valid URI object
[ "", "java", "android", "encoding", "utf-8", "" ]
I am trying to learn how to specify class constructors in Java. I am starting to understand that they specify the types of instance variables of objects made from that class. They also can be used to set the instance variable initial values. The following example is from the Java tutorial on Sun's website: ``` public Bicycle(int startCadence, int startSpeed, int startGear) { gear = startGear; cadence = startCadence; speed = startSpeed; } ``` Where in your class's source code should you put the constructor(s)? Are these arguments the names of the variables?: (int startCadence, int startSpeed, int startGear) or are gear, cadence and speed the names of the variables? What is the difference between (int startCadence, int startSpeed, int startGear) and gear, cadence and speed? In the unlikely event that my instructor or any administrator from Salt Lake Community College ever comes across this question, let me make my intentions clear. This question is posted in the greatest spirit of academic honesty. I ask this question to seek general advice and help in understanding the proper way to use the Java programming language. I in no way use the work of others and represent it as my own work. I use the answers provided here as a general aid in my understanding. I do all my own work and do not copy work provided by people answering my question.
The constructors can appear anywhere in the code for the class. However, by convention, most people put them before any other functions that aren't constructors. As for the variable names, all 6 are actually variable names, but the scope is different. The ones specified as parameters to the constructor (startCadence, startSpeed, startGear) are only available within the constructor. The other 3 (gear, cadence, speed) are probably class-wide variables, available to all methods. However the definition isn't shown in your code snippet. The full class would look more like: ``` class Bicycle { // class-level variables private int gear; private int cadence; private int speed; // constructor public Bicycle(int startCadence, int startSpeed, int startGear) { gear = startGear; cadence = startCadence; speed = startSpeed; } // another method (not a constructor) public void ShiftUp() { gear = gear + 1; // notice the 'gear' variable is available here too. } } ``` Hope that helps!
gear, cadence and speed are member variables of the class (declared elsewhere) and startCadence, startSpeed, and startGear are function parameters. ``` class Bicycle { private int gear, cadence, speed; public Bicycle(int startCadence, int startSpeed, int startGear) { // set the value of member variables from passed parameters gear = startGear; cadence = startCadence; speed = startSpeed; } } ```
Java Constructors
[ "", "java", "constructor", "" ]
We have a huge (old legacy java) code-base, where many files (around 5k) have System.out.println's. We are planning to remove them for cleanup/performance reasons. How can we write a script that will replace them without introducing any issues in the code? The script cannot blindly delete them as following case can be an issue: ``` if () some.code... else System.out.println(...); DB.close(); ``` I'm thinking of replacing them with ';'. That will take care of above case. Do you see any other issues? Any other suggestions?
Have you considered the silly case: ``` System.out.println(" Print " + object.changeState() ); ``` I don't think it happens often, but chances are the println executes a method that actually performs some action the system depends on, and removing it may introduce subtle bugs (believe it or not, I have witnessed this). Probably replacing with a logger and disabling the loggers would do. Or creating a null object using the NullObject pattern: ``` public final class DevNull { public final static PrintStream out = new PrintStream(new OutputStream() { public void close() {} public void flush() {} public void write(byte[] b) {} public void write(byte[] b, int off, int len) {} public void write(int b) {} } ); } ``` And replacing ``` System.out.println(); ``` With ``` DevNull.out.println(); ```
Log4E is an eclipse plugin that has a "Replace System.out.println()" feature. It will happily convert all those pesky println calls to log4j calls. It will even wrap them with a log level check.
How to remove System.out.println's from codebase
[ "", "java", "performance", "scripting", "code-cleanup", "" ]
In a web app I am splitting strings and assigning them to link names or to collections of strings. Is there a significant performance benefit to using StringBuilder for a web application? EDIT: 2 functions: splitting up a link into 5-10 strings, then repackaging into another string. Also, I append one string at a time to a link every time the link is clicked.
How many strings will you be concatenating? Do you know for sure how many there will be, or does it depend on how many records are in the database etc? See [my article on this subject](http://pobox.com/~skeet/csharp/stringbuilder.html) for more details and guidelines - but basically, being in a web app makes no difference to how expensive string concatenation is vs using a StringBuilder. EDIT: I'm afraid it's still not entirely clear from the question exactly what you're doing. If you've got a fixed set of strings to concatenate, and you can do it all in one go, then it's faster and probably more readable to do it using concatenation. For instance: ``` string faster = first + " " + second + " " + third + "; " + fourth; string slower = new StringBuilder().Append(first) .Append(" ") .Append(second) .Append(" ") .Append(third) .Append("; ") .Append(fourth) .ToString(); ``` Another alternative is to use a format string of course. This may well be the slowest, but most readable: ``` string readable = string.Format("{0} {1} {2}; {3}", first, second, third, fourth); ``` The part of your question mentioning "adding a link each time" suggests using a StringBuilder for that aspect though - anything which naturally leads to a loop is more efficient (for moderate to large numbers) using StringBuilder.
Yes, concatenating regular strings is expensive (really, appending one string onto the end of another). Each time a string is changed, .NET drops the old string and creates a new one with the new values. It is an immutable object. EDIT: StringBuilder should be used with caution, and evaluated like any other approach. Sometimes concatenating two strings together will be more efficient, and it should be evaluated on a case by case basis. [Atwood has an interesting article related to this.](http://www.codinghorror.com/blog/archives/001218.html)
Is it worth using StringBuilder in web apps?
[ "", "c#", "stringbuilder", "" ]
When I create a test using MS Visual Studio's builtin unit test wizard it creates code with lines like the below: ``` double number = 0F; ``` In C# "F" stands for float, case-independent, and "D" for double. Using "F" suffix instead of "D" leads to precision lost. If it's a bug, where can I report it to Microsoft?
I don't see how that could be anything but a bug. Completely harmless in this case, but still a bug. You can use [Connect](http://connect.microsoft.com/) to report this kind of thing. However, I've had much more important bugs swept aside than this so I wouldn't expect much action.
If the value is always 0F then no information will be lost at all. In general any float can be converted to a double with no loss of precision (IEEE 754 assumed). So long as the test value has not needed truncation to be represented as a float then there is no problem with regards to *correctness*. The confusion to end users and increased hassle when wanting to modify the values is still a problem.
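To illustrate the point that float-to-double widening is exact while the initial float literal is not — shown here in Java, whose `float`/`double` types follow the same IEEE 754 widening rule as C#'s:

```java
public class FloatWiden {
    public static void main(String[] args) {
        float f = 1.1f;    // 1.1 is already rounded to the nearest float here
        double d = f;      // widening float -> double is always exact
        System.out.println(d == f);   // true: no precision lost in the conversion
        System.out.println(d == 1.1); // false: the loss happened when 1.1 became a float
    }
}
```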
Why does the MSTest wizard use the F suffix for doubles instead of D?
[ "", "c#", ".net", "visual-studio-2008", "mstest", "" ]
I have just started to learn Java and am curious whether there are any good practices in Java for object decomposition. Let me describe the problem. In big software projects there are always big classes like 'core' or 'ui' that tend to have a lot of methods and are intended as mediators between smaller classes. For example, if a user clicks a button on some window, this window's class sends a message to the 'ui' class. The 'ui' class catches this message and acts accordingly, either by doing something with the application's user interface (via calling a method of one of its member objects) or by posting a message to the application 'core' if it's something like 'exit application' or 'start network connection'. Such objects are very hard to break apart since they are mere mediators between a lot of small application objects. But having classes in an application with hundreds and thousands of methods is not very handy, even if such methods are trivial task delegations from one object to another. C# solves this problem by allowing a class implementation to be broken into multiple source files: you can divide a god object any way you choose, and it will work. Are there any practices for dividing such objects in Java?
One way to begin breaking such a large object apart is to first find a good subset of fields or properties managed by the large object that are related to each other and that don't interact with other fields or properties of the object. Then, create a new, smaller object using only those fields. That is, move *all* logic from the large class to the new smaller class. In the original large class, create a delegation method that simply passes the request along. This is a good first step that only involves changing the big object. It doesn't reduce the number of methods, but it can greatly reduce the amount of logic needed in the large class.

After a few rounds of doing this, you can begin to remove some of the delegation by pointing other objects directly at the newer, smaller objects, rather than going through the previously-huge object that was in the middle of everything. See [Wikipedia's Delegation pattern](http://en.wikipedia.org/wiki/Delegation_pattern) discussion for example.

As a simple example, if you have a personnel object to represent staff at a company, then you could create a payroll object to keep track of payroll-related values, a ratings object to keep track of employee ratings, an awards object to keep track of awards that the person has won, and so on. To wit, if you started out with one big class containing the following methods, each containing business logic, among many other methods:

```
...
public boolean isManagement() { ... }
public boolean isExecutive() { ... }
public int getYearsOfService() { ... }
public Date getHireDate() { ... }
public int getDepartment() { ... }
public BigDecimal getBasePay() { ... }
public BigDecimal getStockShares() { ... }
public boolean hasStockSharePlan() { ... }
...
```

then this big object could, in its constructor, create a `StaffType` object, a `PayInformation` object, and a `StaffInformation` object, and initially these methods in the big object would look like:

```
// Newly added variables, initialized in the constructor (or as appropriate)
private final StaffType staffType;
private final StaffInformation staffInformation;
private final PayInformation payInformation;
...
public boolean isManagement() { return staffType.isManagement(); }
public boolean isExecutive() { return staffType.isExecutive(); }
public int getYearsOfService() { return staffInformation.getYearsOfService(); }
public Date getHireDate() { return staffInformation.getHireDate(); }
public int getDepartment() { return staffInformation.getDepartment(); }
public BigDecimal getBasePay() { return payInformation.getBasePay(); }
public BigDecimal getStockShares() { return payInformation.getStockShares(); }
public boolean hasStockSharePlan() { return payInformation.hasStockSharePlan(); }
...
```

where the full logic that used to be in the big object has been moved to these three new smaller objects. With this change, you can break the big object into smaller parts without having to touch anything that makes use of the big object. However, as you do this over time, you'll find that some clients of the big object may only need access to one of the divisible components. For these clients, instead of them using the big object and delegating to the specific object, they can make direct use of the small object. But even if this refactoring never occurs, you've improved things by separating the business logic of unrelated items into different classes.
The next logical step may be to change the BigClass into a java package. Next create new objects for each group of related functionality (noting in each class that the object is part of the new package). The benefits of doing this are dependency reduction and performance. 1. No need to import the entire package/BigClass just to get a few methods. 2. Code changes to related functionality don't require a recompile/redeploy of the entire package/BigClass. 3. Less memory used for allocating/deallocating objects, since you are using smaller classes.
Big class decomposition in Java
[ "", "java", "architecture", "oop", "refactoring", "" ]
I use C#, .NET, VS.NET 2008. Besides being able to address more memory, what are the advantages to compiling my application to 64-bit? Is it going to be faster or smaller? Why? Does it make it more compatible with a x64 system (when compared to a 32-bit application)?
For *native* applications, you get benefits like increased address space and whatnot. However, .NET applications run on the CLR which abstracts away any underlying architecture differences. Assuming you're just dealing with managed code, there isn't any benefit to targeting a specific platform; you're better off just compiling with the "anycpu" flag set (which is on by default). This will generate platform agnostic assemblies that will run equally well on any of the architectures the CLR runs on. Specifically targeting (say) x64 isn't going to give you any performance boost, and will prevent your assemblies from working on a 32-bit platform. [This article](http://blogs.msdn.com/gauravseth/archive/2006/03/07/545104.aspx) has a bit more information on the subject. **Update:** Scott Hanselman just posted a good [overview](http://www.hanselman.com/blog/BackToBasics32bitAnd64bitConfusionAroundX86AndX64AndTheNETFrameworkAndCLR.aspx) of this topic as well.
In theory, a program compiled for x64 will run faster than a program compiled for x86. The reason for this is that there are more general purpose registers in the x64 architecture. 32-bit x86 has only 8 general purpose registers (several of which are tied up as the stack and frame pointers). AMD added an additional 8 general purpose registers (R8–R15) in their x64 extensions. This allows for fewer memory loads and (slightly) faster performance. In reality, this doesn't make a huge difference in performance, but it should make a slight one. The size of the binary and the memory footprint will increase somewhat from using 64-bit instructions, but because x64 is still a CISC architecture, the binary size does not double as it would in a RISC architecture. Most instructions are still shorter than 64 bits in length.
How can compiling my application for 64-bit make it faster or better?
[ "", "c#", ".net", "64-bit", "" ]
I have a database with a large number of fields that are currently NTEXT. Having upgraded to SQL 2005 we have run some performance tests on converting these to NVARCHAR(MAX). If you read this article: [http://geekswithblogs.net/johnsPerfBlog/archive/2008/04/16/ntext-vs-nvarcharmax-in-sql-2005.aspx](https://web.archive.org/web/20210125040904/http://geekswithblogs.net/johnsperfblog/archive/2008/04/16/ntext-vs-nvarcharmax-in-sql-2005.aspx)

This explains that a simple ALTER COLUMN does not re-organise the data into rows. I experience this with my data. We actually have much worse performance in some areas if we just run the ALTER COLUMN. However, if I run an UPDATE Table SET Column = Column for all of these fields we then get an extremely huge performance increase.

The problem I have is that the database consists of hundreds of these columns with millions of records. In a simple test (on a low-performance virtual machine), a table with a single NTEXT column containing 7 million records took 5 hours to update.

Can anybody offer any suggestions as to how I can update the data in a more efficient way that minimises downtime and locks?

EDIT: My backup solution is to just update the data in blocks over time; however, with our data this results in worse performance until all the records have been updated, and the shorter this time is the better, so I'm still looking for a quicker way to update.
If you can't get scheduled downtime, create two new columns:

* the replacement nvarchar(max) column
* processedflag INT DEFAULT 0

Create a nonclustered index on processedflag.

You have UPDATE TOP available to you (you want to update TOP ordered by the primary key). Simply set processedflag to 1 during the update so that the next update will only update rows where the processed flag is still 0. You can use @@ROWCOUNT after the update to see if you can exit the loop.

I suggest using WAITFOR for a few seconds after each update query to give other queries a chance to acquire locks on the table and not to overload disk usage.
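The loop described above can be sketched like this (shown against SQLite purely for illustration; table name, batch size, and flag name are made up, and on SQL Server you would use `UPDATE TOP`, `@@ROWCOUNT`, and `WAITFOR DELAY` instead):

```python
import sqlite3

# 25 stand-in rows; process them in batches of 10 until nothing is left.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, body TEXT, processed INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO t (body) VALUES (?)", [("x",)] * 25)

batch, total = 10, 0
while True:
    cur = conn.execute(
        "UPDATE t SET body = body, processed = 1 "
        "WHERE id IN (SELECT id FROM t WHERE processed = 0 LIMIT ?)",
        (batch,),
    )
    conn.commit()
    if cur.rowcount == 0:  # plays the role of checking @@ROWCOUNT
        break
    total += cur.rowcount
    # on SQL Server: WAITFOR DELAY here so other queries can get locks

print(total)  # 25
```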
If you can get scheduled downtime:

1. Back up the database
2. Change recovery model to simple
3. Remove all indexes from the table you are updating
4. Add a column maintenanceflag (INT DEFAULT 0) with a nonclustered index
5. Run, multiple times as required (within a loop with a delay): `UPDATE TOP (1000) tablename SET nvarchar from ntext, maintenanceflag = 1 WHERE maintenanceflag = 0`

Once complete, do another backup, then change the recovery model back to what it was originally and add the old indexes.

Remember that every index or trigger on that table causes extra disk I/O and that the simple recovery mode minimises log file I/O.
SQL Server, Converting NTEXT to NVARCHAR(MAX)
[ "", "sql", "sql-server", "sql-server-2005", "" ]
OK, I've been trying to solve this for about 2 hours now... Please advise. Tables:

```
PROFILE [id (int), name (varchar), ...]
SKILL   [id (int), id_profile (int), id_app (int), lvl (int), ...]
APP     [id (int), ...]
```

The lvl can basically go from 0 to 3. I'm trying to get this particular stat: "What is the percentage of apps that is covered by at least two people having a skill of 2 or higher?"

Thanks
```
SELECT AVG(covered)
FROM (
    SELECT CASE WHEN COUNT(*) >= 2 THEN 1 ELSE 0 END AS covered
    FROM app a
    LEFT JOIN skill s ON (s.id_app = a.id AND s.lvl >= 2)
    GROUP BY a.id
) t
```

(The derived table needs an alias on most databases; multiply by 100 if you want an actual percentage rather than a fraction.)

More efficient way for `MySQL`:

```
SELECT AVG(
    IFNULL(
        (
        SELECT 1
        FROM skill s
        WHERE s.id_app = a.id
            AND s.lvl >= 2
        LIMIT 1, 1
        ), 0)
    )
FROM app a
```

This will stop counting as soon as it finds the second skilled `person` for each `app`.

Efficient if you have a few `app`'s but lots of `person`'s.
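To see the AVG-of-flags idea in action, here is that first query run against SQLite from Python, with made-up sample data: app 1 has two level-2+ skills, app 2 has one, app 3 has none, so one app in three is covered.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE app (id INTEGER PRIMARY KEY);
CREATE TABLE skill (id INTEGER PRIMARY KEY, id_profile INT, id_app INT, lvl INT);
INSERT INTO app (id) VALUES (1), (2), (3);
INSERT INTO skill (id_profile, id_app, lvl) VALUES
  (1, 1, 2), (2, 1, 3), (1, 2, 2), (2, 2, 1);
""")
# COUNT(s.id) instead of COUNT(*) so an app with no matching skills counts as 0.
(coverage,) = conn.execute("""
SELECT AVG(covered) FROM (
    SELECT CASE WHEN COUNT(s.id) >= 2 THEN 1 ELSE 0 END AS covered
    FROM app a
    LEFT JOIN skill s ON (s.id_app = a.id AND s.lvl >= 2)
    GROUP BY a.id
) t
""").fetchone()
print(round(coverage, 4))  # 0.3333
```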
Untested

```
select convert(float, count(*)) / (select count(*) from app) as percentage
from (
    select count(*) as number
    from skill
    where lvl >= 2
    group by id_app
) t
where t.number >= 2
```
coverage percentage using a complex sql query...?
[ "", "sql", "" ]
Is there a simple built-in way to take an ordered list of `IEnumerable`s and return a single `IEnumerable` which yields, in order, all the elements in the first, then the second, and so on. I could certainly write my own, but I wanted to know whether there was already a way to accomplish this seemingly useful task before I do it.
Try SelectMany.

```
static IEnumerable<T> Collapse<T>(IEnumerable<IEnumerable<T>> e) {
    return e.SelectMany(x => x);
}
```

The purpose of this function is to flatten an IEnumerable<IEnumerable<T>> into an IEnumerable<T>. The returned data will preserve the original order.
Further to JaredPar's (correct) answer - also query syntax:

```
var all = from inner in outer
          from item in inner
          select item;
```
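The operation both answers describe is plain ordered concatenation of sequences; the same thing in Python terms (for illustration only) is `itertools.chain`:

```python
from itertools import chain

# Flatten a sequence of sequences, preserving order; this is what
# SelectMany(x => x) does over an IEnumerable<IEnumerable<T>>.
seqs = [[1, 2], [3], [4, 5, 6]]
flat = list(chain.from_iterable(seqs))
print(flat)  # [1, 2, 3, 4, 5, 6]
```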
Chaining IEnumerables in C#?
[ "", "c#", "iterator", "ienumerable", "" ]
For some reason I can't catch the error. I'm following an example in O'Reillys PHP cookbook, so I'm not doing any cowboy coding (yet). The error is: ``` SoapFault exception: [SOAP-ENV:Server] SoapFault::SoapFault() [soapfault.soapfault]: Invalid parameters. Invalid fault code. in C:\xampp\htdocs\soap\client_database.php:13 Stack trace: #0 C:\xampp\htdocs\soap\client_database.php(13): SoapClient->__soapCall('select_query', Array) #1 {main} ```
I found the solution: you can only use certain fault codes when throwing a SoapFault. <http://www.w3.org/TR/soap12-part1/#faultcodes>
As per [the standards](http://www.w3.org/TR/soap12-part1/#faultcodes) there are only 5 possible options for a fault code: 1. VersionMismatch 2. MustUnderstand 3. DataEncodingUnknown 4. Sender 5. Receiver Choose the most appropriate to your situation and use that one as a value for $faultCode when throwing a SoapFault exception.
Error message trying to catch a soapfault using the PHP Soap class
[ "", "php", "soap", "" ]
I'm using the base class constructor as a factory and changing the class in this constructor/factory to select the appropriate class. Is this approach good Python practice, or are there more elegant ways? I've tried to read the help about metaclasses but without much success.

Here is an example of what I'm doing:

```
class Project(object):
    "Base class and factory."
    def __init__(self, url):
        if is_url_local(url):
            self.__class__ = ProjectLocal
        else:
            self.__class__ = ProjectRemote
        self.url = url

class ProjectLocal(Project):
    def do_something(self):
        # do the stuff locally in the dir pointed by self.url

class ProjectRemote(Project):
    def do_something(self):
        # do the stuff communicating with remote server pointed by self.url
```

Having this code I can create an instance of ProjectLocal/ProjectRemote via the base class Project:

```
project = Project('http://example.com')
project.do_something()
```

I know that an alternative way is to use a factory function that will return the class object based on url; then the code will look similar:

```
def project_factory(url):
    if is_url_local(url):
        return ProjectLocal(url)
    else:
        return ProjectRemote(url)

project = project_factory(url)
project.do_something()
```

Is my first approach just a matter of taste, or does it have some hidden pitfalls?
I would stick with the factory function approach. It's very standard python and easy to read and understand. You could make it more generic to handle more options in several ways such as by passing in the discriminator function and a map of results to classes. If the first example works it's more by luck than by design. What if you wanted to have an `__init__` defined in your subclass?
You shouldn't need metaclasses for this. Take a look at the [`__new__`](http://docs.python.org/reference/datamodel.html#object.__new__) method. This will allow you to take control of the creation of the object, rather than just the initialisation, and so return an object of your choosing.

```
class Project(object):
    "Base class and factory."
    def __new__(cls, url):
        if is_url_local(url):
            return super(Project, cls).__new__(ProjectLocal)
        else:
            return super(Project, cls).__new__(ProjectRemote)

    def __init__(self, url):
        self.url = url
```

(Note that `object.__new__` does not take the extra `url` argument; `__init__` will still be called with it automatically, because the returned object is an instance of `Project`.)
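For reference, a complete runnable version of the `__new__` approach might look like this (`is_url_local` is a stand-in check, and the return strings are made up; the class names follow the question):

```python
def is_url_local(url):
    # Stand-in for the question's is_url_local()
    return url.startswith("file://")

class Project:
    "Base class and factory."
    def __new__(cls, url):
        if cls is Project:  # dispatch only when constructed via the base
            cls = ProjectLocal if is_url_local(url) else ProjectRemote
        return super().__new__(cls)

    def __init__(self, url):
        self.url = url

class ProjectLocal(Project):
    def do_something(self):
        return "local: " + self.url

class ProjectRemote(Project):
    def do_something(self):
        return "remote: " + self.url

p = Project("http://example.com")
q = Project("file:///tmp/repo")
print(type(p).__name__, type(q).__name__)  # ProjectRemote ProjectLocal
```

`__init__` still runs automatically with the original `url`, because the object returned from `__new__` is an instance of `Project`.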
Using base class constructor as factory in Python?
[ "", "python", "factory", "" ]
I have two or more arrays -- one with IDs, one or more with string values. I want to merge these into a hash table so I can look up values by ID. The following function does the job, but a shorter and sweeter version (LINQ?) would be nice:

```
Dictionary<int, string[]> MergeArrays(
    IEnumerable<int> idCollection,
    params IEnumerable<string>[] valueCollections )
{
    var dict = new Dictionary<int, string[]>();
    var idL = idCollection.Count();
    while ( idL-- > 0 )
    {
        dict[idCollection.ElementAt( idL )] = new string[valueCollections.Length];
        var vL = valueCollections.Length;
        while ( vL-- > 0 )
            dict[idCollection.ElementAt( idL )][vL] = valueCollections[vL].ElementAt( idL );
    }
    return dict;
}
```

Any ideas?
That's very inefficient at the moment - all those calls to ElementAt could be going through the whole sequence (as far as they need to) each time. (It depends on the implementation of the sequence.) However, I'm not at all sure I even understand what this code is doing (using foreach loops would almost certainly make it clearer, as would iterating forwards instead of backwards). Could you give some sample input and expected outputs?

EDIT: Okay, I think I see what's going on here; you're effectively pivoting valueCollections. I suspect you'll want something like:

```
static Dictionary<int, string[]> MergeArrays(
    IEnumerable<int> idCollection,
    params IEnumerable<string>[] valueCollections)
{
    var valueCollectionArrays = valueCollections.Select(x => x.ToArray()).ToArray();
    var indexedIds = idCollection.Select((Id, Index) => new { Index, Id });
    return indexedIds.ToDictionary(
        x => x.Id,
        x => valueCollectionArrays.Select(array => array[x.Index]).ToArray());
}
```

It's pretty ugly though. If you can make idCollection an array to start with, it would frankly be easier.

EDIT: Okay, assuming we can use arrays instead:

```
static Dictionary<int, string[]> MergeArrays(
    int[] idCollection,
    params string[][] valueCollections)
{
    var ret = new Dictionary<int, string[]>();
    for (int i = 0; i < idCollection.Length; i++)
    {
        ret[idCollection[i]] = valueCollections.Select(array => array[i]).ToArray();
    }
    return ret;
}
```

I've corrected (hopefully) a bug in the first version - I was getting confused between which bit of the values was an array and which wasn't. The second version isn't as declarative, but I think it's clearer, personally.
How about:

```
public static Dictionary<int, string[]> MergeArrays2(IEnumerable<int> idCollection, params IEnumerable<string>[] valueCollections)
{
    var dict = new Dictionary<int, string[]>();
    var valEnums = (from v in valueCollections select v.GetEnumerator()).ToList();
    foreach (int id in idCollection)
    {
        var strings = new List<string>();
        foreach (var e in valEnums)
            if (e.MoveNext())
                strings.Add(e.Current);
        dict.Add(id, strings.ToArray());
    }
    return dict;
}
```

or slightly editing the skeet answer (it didn't work for me):

```
static Dictionary<int, string[]> MergeArrays_Skeet(IEnumerable<int> idCollection, params IEnumerable<string>[] valueCollections)
{
    var valueCollectionArrays = valueCollections.Select(x => x.ToArray()).ToArray();
    var indexedIds = idCollection.Select((Id, Index) => new { Index, Id });
    return indexedIds.ToDictionary(x => x.Id, x => valueCollectionArrays.Select(array => array[x.Index]).ToArray());
}
```
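The pivot all of these helpers perform (one row of values per ID) is perhaps easiest to see in Python terms; the data below is purely illustrative:

```python
# Each ID gets the i-th element of every value collection.
ids = [10, 20, 30]
names = ["a", "b", "c"]
colors = ["red", "green", "blue"]

merged = {i: vals for i, vals in zip(ids, zip(names, colors))}
print(merged[20])  # ('b', 'green')
```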
Joining/merging arrays in C#
[ "", "c#", ".net", "linq", "arrays", "c#-3.0", "" ]
I need to display a list of records from a database table ordered by some numeric column. The table looks like this:

```
CREATE TABLE items (
  position int NOT NULL,
  name varchar(100) NOT NULL
);

INSERT INTO items (position, name) VALUES
  (1, 'first'),
  (5, 'second'),
  (8, 'third'),
  (9, 'fourth'),
  (15, 'fifth'),
  (20, 'sixth');
```

Now, the order of the list should change according to a parameter provided by the user. This parameter specifies which record comes first, like this:

```
position = 0    order should be = 1, 5, 8, 9, 15, 20
position = 1    order should be = 20, 1, 5, 8, 9, 15
position = 2    order should be = 15, 20, 1, 5, 8, 9
```

In other words the last record becomes the first and so on. Can you think of a way to do this in SQL? I'm using MySQL but an example in any SQL database will do.

Thanks
See how this works for you. Uses generic SQL so it should be valid for MySQL (untested) as well.

```
DECLARE @user_sort INTEGER
SET @user_sort = 0

SELECT position, name
FROM (
    SELECT
        I1.position,
        I1.name,
        COUNT(*) AS rownumber,
        (SELECT COUNT(*) FROM items) AS maxrows
    FROM items I1, items I2
    WHERE I2.position <= I1.position
    GROUP BY I1.position, I1.name
) Q1
ORDER BY
    CASE WHEN maxrows - rownumber < (@user_sort % maxrows) THEN 1 ELSE 2 END,
    position
```

Note:

* If the user-provided sort index is greater than the row count, the value will wrap to within the valid range. To remove this functionality, remove the "% maxrows" from the ORDER BY.

***Results:***

**SET @user\_sort = 0**

```
position  name
1         first
5         second
8         third
9         fourth
15        fifth
20        sixth
```

**SET @user\_sort = 1**

```
position  name
20        sixth
1         first
5         second
8         third
9         fourth
15        fifth
```

**SET @user\_sort = 2**

```
position  name
15        fifth
20        sixth
1         first
5         second
8         third
9         fourth
```

**SET @user\_sort = 9**

```
position  name
9         fourth
15        fifth
20        sixth
1         first
5         second
8         third
```
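What the query computes is just a rotation of the ordered rows, so the expected outputs are easy to cross-check in plain Python against the question's sample positions:

```python
positions = [1, 5, 8, 9, 15, 20]

def rotated(rows, k):
    k = k % len(rows)  # wrap, like the query's "% maxrows"
    return rows[-k:] + rows[:-k] if k else rows[:]

print(rotated(positions, 1))  # [20, 1, 5, 8, 9, 15]
print(rotated(positions, 9))  # [9, 15, 20, 1, 5, 8]
```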
Are you sure you want to do this in SQL? To me, this sounds like you should load the results in a dataset of some sort, and then either re-order them as you want, or position the starting point at the correct position. Possibly using a linked list.
Special order by on SQL query
[ "", "sql", "mysql", "sql-order-by", "" ]
This is a problem that I come to on occasion and have yet to work out an answer that I'm happy with. I'm looking for a build system that works well for building a database - that is, running all of the SQL files in the correct database instance as the correct user and in the correct order, and handling dependencies and the like properly. I have a system that I hacked together using GNU Make and it works, but it's not especially flexible and frankly can be a bit of a pain to work with in some situations. I've considered looking at things like SCons and CMake too, but I don't know how much better they are likely to be, or if there's a better system out there that already exists...
Just a shell script that runs all the create statements and imports in the proper order. You may also find [migrations](http://garrettsnider.backpackit.com/pub/367902) (comes with Rails) interesting. It provides a `make`-like infrastructure that lets you maintain a database whose structure evolves over time. Say you add a new column to some table. In migrations you'd write a snippet of code which describes the requirements for adding the column and also how to roll back the change, so you can switch to different versions of your schema automatically. I'm not a big fan of the tight integration with Rails, though, but the principles behind it are very interesting.
For SQL Server, I just use a batch file with SQLCMD.EXE and a bunch of .SQL files. It's not perfect, but it seems to work.
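The "run the .sql files in order" core of both answers fits in a few lines of any scripting language. Here is an illustrative Python/SQLite stand-in (the file names and contents are made up, and numeric prefixes supply the ordering; swap in your real client for production use):

```python
import pathlib
import sqlite3
import tempfile

d = pathlib.Path(tempfile.mkdtemp())
(d / "001_schema.sql").write_text("CREATE TABLE t (id INT);")
(d / "002_data.sql").write_text("INSERT INTO t VALUES (42);")

conn = sqlite3.connect(":memory:")
for path in sorted(d.glob("*.sql")):  # lexical sort == build order
    conn.executescript(path.read_text())

row = conn.execute("SELECT id FROM t").fetchone()
print(row)  # (42,)
```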
What's the best build system for building a database?
[ "", "sql", "database", "build-process", "" ]
Hi, I don't like posting compile problems, but I really can't figure this one out. Using this code:

```
#include <map>
#include <boost/iterator/transform_iterator.hpp>
using namespace std;

template <typename K, typename V>
struct get_value {
    const V& operator ()(std::pair<K, V> const& p) { return p.second; }
};

class test {
    typedef map<int, float> TMap;
    TMap mymap;
public:
    typedef get_value<TMap::key_type, TMap::value_type> F;
    typedef boost::transform_iterator<F, TMap::iterator> transform_iterator;

    transform_iterator begin()
    {
        return make_transform_iterator(mymap.begin(), F());
    }
};
```

Getting this compile error:

```
transform_iterator.hpp(43) : error C2039: 'result_type' : is not a member of 'get_value<K,V>'
        with
        [
            K=int,
            V=std::pair<const int,float>
        ]
```

Can anyone explain why this isn't working? I'm using Visual Studio 7.0 with boost 1.36.0.

Thanks.
Since you also asked for an explanation: the `transform_iterator` needs to know the return type of the function called in order to instantiate itself. This is determined via [`result_of`](http://www.boost.org/doc/libs/1_37_0/libs/utility/utility.htm#result_of) (found in `<boost/utility/result_of.hpp>`).

If you use a function object, you need to define a member `result_type` to specify the result type of the object (since an object doesn't have a 'return type' as such).

If you had used a regular function, `result_of` would be able to figure it out on its own, e.g.:

```
template <typename K, typename V>
const V & get_value(std::pair<K, V> const & p)
{
    return p.second;
}

class test {
    typedef map<int, float> TMap;
    TMap mymap;
public:
    typedef boost::function< const TMap::mapped_type & (const TMap::value_type &) > F;
    typedef boost::transform_iterator<F, TMap::iterator> transform_iterator;

    transform_iterator begin()
    {
        return boost::make_transform_iterator(mymap.begin(), &get_value<int, float>);
    }
};
```
You'll have to inherit `get_value` from `unary_function<std::pair<K, V> const&, const V&>` (note the order: argument type first, result type second) to tell `transform_iterator` what the signature of `get_value` is.
transform_iterator compile problem
[ "", "c++", "dictionary", "boost-iterators", "" ]
As a fun side-project for myself to help in learning yet another PHP MVC framework, I've been writing Reversi / Othello as a PHP & Ajax application, mostly straightforward stuff. I decided against using a multidimensional array for a number of reasons and instead have a linear array (in this case 64 elements long) and a couple of methods to convert from the coordinates to integers. So I was curious: are there any other, possibly faster algorithms for converting an integer to a coordinate point?

```
function int2coord($i){
    $x = (int)($i/8);
    $y = $i - ($x*8);
    return array($x, $y);
}

// Not a surprise, but this is .003 ms slower on average
function int2coord_2($i){
    $b = base_convert($i, 10, 8);
    $x = (int) ($b != 0 ? $b/8 : 0); // could also be $b < 8 for condition
    $y = $b % 10;
    return array($x, $y);
}
```

And for posterity's sake, the method I wrote for coord2int:

```
function coord2int($x, $y){
    return ($x*8)+$y;
}
```

Update: So in the land of the weird, the results were not what I was expecting: using a pre-computed lookup table has predominantly shown to be the fastest. Guess trading memory for speed is always a winner?

* There was a table with times here but I cut it due to styling issues with SO.
I don't have the time to measure this myself right now, but I would suspect that a pre-computed lookup table would beat your solution in speed. The code would look something like this:

```
class Converter {
    private $_table;

    function __construct() {
        $this->_table = array();
        for ($i=0; $i<64; $i++) {
            $this->_table[$i] = array( (int)($i/8), (int)($i%8) );
        }
    }

    function int2coord( $i ) {
        return $this->_table[$i];
    }
}

$conv = new Converter();
$coord = $conv->int2coord( 42 );
```

Of course, this does add a lot of overhead, so in practice you would only bother to pre-compute all coordinates if your conversion code was called **very** often.
Oh yes! This is a perfect example of binary:

```
function int2coord($i){
    $x = $i >> 3;
    $y = $i & 0x07;
    return array($x, $y);
}
```

The reality is that a good compiler will find this optimization and use it, so it's not necessarily faster. Test and see if your compiler/interpreter does this. It works because any binary division by 8 is the same as a right shift by 3 bits. Modern processors have barrel shifters that can do up to a 32-bit shift in one instruction.

The reverse is as easy:

```
function coord2int($x, $y){
    return ($x << 3)+$y;
}
```

-Adam
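The equivalence is quick to verify exhaustively for the 64 board squares (checked here in Python; `>>` and `&` behave the same way in PHP for these non-negative values):

```python
# (i >> 3, i & 0x07) must equal (i // 8, i % 8) for every square index.
matches = all((i >> 3, i & 0x07) == (i // 8, i % 8) for i in range(64))
print(matches)  # True
```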
Alternative/faster methods of converting an integer to a cartesian coordinate?
[ "", "php", "optimization", "math", "" ]
Consider these types:

```
struct A {};
struct B : A { int i; };
```

`sizeof(A) > 0` as required by the standard. `sizeof(B)` should be 4 due to the empty base optimization. Yet on GCC 4.1.1 it's 5 (I'm using a pack of 1 in this area). And inconsistently - some of my files are getting it, some are not. Can't be sure what the differences are yet; we have a large project. On the other three compilers I'm using (by Microsoft and Freescale), I don't have this problem.

The empty base optimization is optional apparently, according to [this article](http://www.cantrip.org/emptyopt.html). Is there a compiler option or pragma to tune this in GCC 4.1.1? I can work around the issue but I would like to understand what's going on first. I Googled for a while and can't seem to find anything.
This always happens. I post immediately before I figure it out. Maybe the act of posting gets me thinking in a different way..

So in my question the sample was a little bit over-simplified. It's actually more like this:

```
struct Base {};
struct C1 : Base { int i; };
struct C2 : Base { C1 c; int i; };
```

sizeof(C1) is correctly 4 on all platforms, but sizeof(C2) is 9 instead of 8 on GCC. And... apparently GCC is the only thing that gets it right, according to the last bit of the article I linked to in the original question. I'll quote it (from Nathan Meyers) here:

> A whole family of related "empty subobject" optimizations are possible, subject to the ABI specifications a compiler must observe. (Jason Merrill pointed some of these out to me, years back.) For example, consider three struct members of (empty) types A, B, and C, and a fourth non-empty. They may, conformingly, all occupy the same address, as long as they don't have any bases in common with one another or with the containing class. *A common gotcha in practice is to have the first (or only) member of a class derived from the same empty base as the class. The compiler has to insert padding so that the two subobjects have different addresses.* This actually occurs in iterator adapters that have an iterator member, both derived from std::iterator. An incautiously-implemented standard std::reverse\_iterator might exhibit this problem.

So, the inconsistency I was seeing was only in cases where I had the above pattern. Every other place I was deriving from an empty struct was ok. Easy enough to work around. Thanks all for the comments and answers.
GCC C++ follows these rules with standard padding:

NOTE: `__attribute__((__packed__))` or changing the default packing will modify these rules.

* class EmptyBase {}; --> sizeof(EmptyBase) == 1
* Any number of empty-bases will map to 0 in the struct offset as long as all are unique types (including parenting).
* Non-empty-base parents are simply in the order declared with only padding for alignment.
* If the first member of a derived class that immediately follows empty-bases does not derive from any of those bases, it is allowed to start at the first properly aligned offset for that member that is greater-than-or-equal-to the empty-base address -- this may be the same address as the empty-bases.
* If the first member of a derived class that immediately follows empty-bases does derive from any of those bases, it will start at the first properly aligned offset for that member that is greater-than the empty-base address -- this is never the same address as the empty-bases.
* Members that are empty-classes take at least one byte of storage in the containing class.

---

MSVC++ follows these rules:

NOTE: `#pragma pack` or changing the default packing will modify these rules.

* class EmptyBase {}; --> sizeof(EmptyBase) == 1
* The only way an empty-base class (or class derived from an empty-base) will start at offset 0 (zero) is if it is the first base class.
* A non-empty-base class will start at the next valid alignment offset for the base class.
* All empty-base classes will appear to have zero effective storage in the derived class and do not affect the current offset unless followed by another empty-base class (or class derived from an empty-base), in which case you should see the following rule.
* An empty-base class (or class derived from an empty-base) that follows an empty-base class (or class derived from an empty-base) will add 1 to the current offset position before padding to the proper alignment for the class.
* There is no padding (other than for alignment) between the last base class and the first class member or vft-pointer(s). **NOTE:** this is an over-aggressive empty-base optimization that can break the C++ standard.
* Members that are empty-classes take at least one byte of storage in the containing class.
Is the "empty base optimization" in GCC configurable?
[ "", "c++", "gcc", "" ]