I'm starting a new C# project that uses a database, but I want it all to be version controlled. I'm thinking about a SQL Server database (if I make an .mdf file, will I be able to use it on a computer without SQL Server installed?) and LINQ to SQL. How do you usually start a project like this? Do you create a .sql file to create the database and then create the LINQ models? Is there a better way?
It really depends on what you prefer. You are going to get answers telling you it's better to design your object model first (programmers will tell you that), and others telling you to design your data model first (the database admins). In the end, the right way is the way that you feel most comfortable with. If you expect to do most of the manipulation of the data in code, then it might be good for you to start with the object model. If you expect to perform the majority of the operations in the data layer, then it would be good for you to start with the database.

In regards to source control, if you want to version everything, then you are going to have to generate a .sql file which contains what you need to create your database from scratch. Then, every time you change or update the database schema or data, you should place that script into source control as well. I also recommend a naming convention which incorporates the date and time into the file name, to make it easy to figure out what order to apply the files in when building from scratch, or from any point in time after that. And of course, you need to version your source code as well, but that should go without saying.
First question: yes, you always need SQL Server installed on your machine to access the database locally; otherwise you need to set up your server to accept external connections. Second: no, you should not store the database .mdf file under source control. As you said, store SQL scripts in source control instead. Third: I recommend looking at an Object Relational Mapping system such as FluentNHibernate and NHibernate, so that you don't need to manage any of your database in SQL other than the actual table create/alter statements and indexes.
Starting a new app with database support
[ "c#", ".net", "sql-server", "database", "version-control" ]
I have to develop a tool in C# that retrieves some data from an Oracle DB. Since our IT support is having some trouble with installing the Oracle client, I want to be able to connect to the database without the client installed. The idea is to just install (and maybe register) an additional library together with the app. Which alternatives to the Oracle client exist, and what are their pros and cons? This is for a small application (1 form, 2 or 3 queries, the result displayed in a DataGrid, probably no more than 10-20 data sets), but I am also interested in scalability issues, since we may also move away from the Oracle client with bigger future projects. Of course a freeware solution would be nice, but we are not limited to that.
There are a couple different flavors of the [Oracle ODP.Net driver](http://www.oracle.com/technetwork/database/windows/downloads/index-101290.html). Which version are you trying to use? It sounds like you want the Oracle 11g ODAC 11.1.0.6.21 with Xcopy Deployment, which allows you to deploy the Oracle Instant Client and the ODP.Net driver just by copying a few DLLs over and registering them. That doesn't require a full-blown Oracle client installation.
Your task can be achieved without the need for any third-party software:

1. Install [Oracle Data Access Components 11g](http://www.oracle.com/technology/software/tech/windows/odpnet/utilsoft.html) on your development box. ODAC 11g is backwards compatible with 9i and 10g.
2. Add a reference to the ODAC library in your .NET projects (Oracle.DataAccess.dll).
3. Allow your application to connect without the use of TNSNAMES.ORA. To do so, you have to include the connect descriptor in the connection string:

```
"user id=scott;password=tiger;data source=" +
"(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)" +
"(HOST=sales-server)(PORT=1521))(CONNECT_DATA=" +
"(SERVICE_NAME=sales.us.acme.com)))"
```

Since you are including the connect descriptor in the connection string, the thin client is not required on the target computers. Happy deployment.
Alternative Oracle drivers for .net
[ "c#", "oracle", "database-connection" ]
I have a string value that I want to write to the registry as a DWORD, to be read by another program. How can I create a new key/string using C# .NET?
You need to use Microsoft.Win32.Registry to add to the mobile device registry. All of the methods have mobile support: <http://msdn.microsoft.com/en-us/library/microsoft.win32.registry_members.aspx>
Using the Microsoft.Win32.Registry classes, [just like on the desktop](http://www.csharphelp.com/archives2/archive430.html). There is one caveat: if you're using CF 1.0 then the classes don't exist. In that case I'd recommend pulling the implementation from the [old SDF 1.4 source](http://www.opennetcf.com/FreeSoftware/SDF14/tabid/91/Default.aspx).
How do I write a value to the registry on a Windows Mobile 5 device?
[ "c#", "windows", "string", "windows-mobile", "ipc" ]
> **Possible Duplicate:** > [C++ HTML template framework, templatizing library, HTML generator library](https://stackoverflow.com/questions/355650/c-html-template-framework-templatizing-library-html-generator-library) Planning to write a website in C++. Would like to use a template system like Clearsilver, but maybe there's a better alternative?
[Wt](http://www.webtoolkit.eu/wt) (pronounced 'witty') is a C++ library and application server for developing and deploying web applications. It is not a 'framework', which enforces a way of programming, but a library.
To add to the previous question: [CTemplate](http://code.google.com/p/google-ctemplate/) [CTPP](http://ctpp.havoc.ru/en/) You could also interface with ClearSilver (written in C) with C++.
What's a good HTML template engine for C++?
[ "c++", "templates", "template-engine" ]
I have a WinForms application built with VS 2008 (C#) and SQL Server Express 2008 that is currently deployed on test users' PCs. I have modifications to make to the database structure (add fields, tables, etc.) and I wonder what's the best method to distribute the changes to the users' PCs (without losing the users' data). From what I've learned so far, I will need to write the changes in a script file. But what's the best method to distribute the script files without running them manually in SQL Server Management Studio? Is there any automatic distribution method you suggest? Do I have to write my own update application? I currently use ClickOnce to distribute the changes in the EXE file. Is ClickOnce still a viable solution when you have to make changes to the database with the new version? TIA
There are lots of ways you can do this - either build up your own homegrown version somehow (e.g. we are currently packaging database change scripts into a DLL as embedded resources, and have an engine that'll run those on the client machine at install and/or update time), or you can look at readily available .NET products like Red-Gate [SQL Packager](http://www.red-gate.com/products/SQL_Packager/index.htm) which can save you lots of time and effort.
You could write your application to read a script file from the application folder on startup and execute it. You could have the script update a value in the database once it's done, to prevent it from being run again. You can deploy the new script file as content, and the application will execute it on the next startup, upgrading your database.
SQL table change distribution
[ "c#", "winforms", "visual-studio-2008", "software-distribution", "sql-server-2008-express" ]
Let me illustrate this question with an example:

```
import numpy

# Using identity as a convenient way to create an array with the invariant
# that there will only be one True value per row; the solution should apply
# to any array with this invariant
matrix = numpy.identity(5, dtype=bool)

# This could be any 1-d array, provided its length is the same as the
# length of axis=1 of matrix from above
base = numpy.arange(5, 30, 5)

result = numpy.array([ base[line] for line in matrix ])
```

`result` now holds the desired result, but I'm sure there is a numpy-specific method for doing this that avoids the explicit iteration. What is it?
If I understand your question correctly, you can simply use matrix multiplication:

```
result = numpy.dot(matrix, base)
```

If the result must have the same shape as in your example, just add a reshape:

```
result = numpy.dot(matrix, base).reshape((5,1))
```

If the matrix is not symmetric, be careful about the argument order in dot.
My try: ``` numpy.sum(matrix * base, axis=1) ```
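Another numpy-idiomatic option (a sketch that leans on the one-True-per-row invariant stated in the question): recover the column index of the single True in each row with `argmax`, then fancy-index `base` directly, which avoids both the Python-level loop and the elementwise multiply:

```python
import numpy

matrix = numpy.identity(5, dtype=bool)  # one True per row, as in the question
base = numpy.arange(5, 30, 5)

# argmax along each row returns the column of its single True value;
# fancy indexing then picks the corresponding element of base per row
result = base[matrix.argmax(axis=1)]
print(result)  # for identity(5), row i selects base[i]
```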
How do I use a 2-d boolean array to select from a 1-d array on a per-row basis in numpy?
[ "python", "numpy" ]
I wanted to write a function that would take an object and convert it to an array that contains that object as a single element. It occurred to me that I could maybe do this with generics and variable arguments, to essentially do this automatically without the need to write a function for each object type I wished to use. Will this code work? Are there any subtleties I need to be aware of?

```
public static <X> X[] convert_to_array(X... in_objs) {
    return in_objs;
}
```
It works but it seems like:

```
Object o = new Object();
someMethod(new Object[] { o });
```

is a little more straightforward than:

```
Object o = new Object();
someMethod(convert_to_array(o));
```

In cases where sometimes I want to pass a single object, but other times I want to pass an array, usually I just use an overloaded method in the API:

```
public void doSomething(Object o) {
    doSomething(new Object[] { o });
}

public void doSomething(Object[] array) {
    // stuff goes here.
}
```

Varargs can be used, but only if the array is the last parameter, of course.
Why not simply:

```
Object o = new Object();
Object[] array = { o }; // no method call required!
```

What are you really trying to accomplish?
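One subtlety worth noting about the generic varargs approach from the question (a small sketch; the class name `ConvertDemo` is just for illustration): primitives are autoboxed to their wrapper types, and the runtime type of the returned array is whatever the compiler infers for `X`, so you never get an `int[]` out of it:

```java
public class ConvertDemo {
    // Same shape as the method in the question; @SafeVarargs suppresses
    // the "possible heap pollution" warning for generic varargs
    @SafeVarargs
    public static <X> X[] convert_to_array(X... in_objs) {
        return in_objs;
    }

    public static void main(String[] args) {
        // ints are autoboxed: the result is an Integer[], never an int[]
        Integer[] nums = convert_to_array(1, 2, 3);
        System.out.println(nums.getClass().getSimpleName()); // Integer[]

        // Mixing argument types widens the inferred X (here to Number)
        Number[] mixed = convert_to_array(1, 2.5);
        System.out.println(mixed.getClass().getSimpleName()); // Number[]
    }
}
```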
Function to convert single objects to an array?
[ "java" ]
I'm having some trouble getting jQuery to play nice with DokuWiki - has anyone already done this successfully? At the moment, including jQuery results in all sorts of JS functionality breaking, and I'm having trouble tracking down the source of the problem. What are some things to look for that tend to conflict with jQuery?
I'm not familiar with DokuWiki personally, but if something is breaking just when you include jQuery then it's probably a conflict with the '$' variable in jQuery. You can use jQuery's noConflict method to get around that; more information here: <http://docs.jquery.com/Using_jQuery_with_Other_Libraries> See also this Stack Overflow post: [jQuery & Prototype Conflict](https://stackoverflow.com/questions/134572/jquery-prototype-conflict)
You can usually avoid any jQuery conflicts by using the following right after you load `jquery.js`:

```
jQuery.noConflict();
```

Then, it won't overwrite the `$` variable, which is most often the source of trouble in these JS library conflicts. You'll need to call jQuery functions using `jQuery`, though. Examples:

```
jQuery(function() { ... }); // $(function ...
jQuery(".klass").hide();    // $(".klass" ...
```
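For intuition, here is a toy sketch of what `noConflict` does under the hood (illustrative pseudo-library code, not jQuery's actual source): the library saves whatever `$` pointed to before it loaded, and `noConflict()` hands it back while returning the library object so you can keep your own reference:

```javascript
// Stand-in for the global object (window in a browser)
const globalScope = {};

// Some earlier library (e.g. Prototype) already owns $
globalScope.$ = "the other library's $";

// When our library loads, it remembers the previous $ and takes over
const previous$ = globalScope.$;
const myLib = { name: "myLib" };
globalScope.$ = myLib;

// noConflict() restores the old $ and returns the library
myLib.noConflict = function () {
  globalScope.$ = previous$;
  return myLib;
};

const lib = myLib.noConflict();
console.log(globalScope.$); // the other library's $ is back
console.log(lib.name);      // we still have a handle on our library
```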
Can DokuWiki & jQuery play together?
[ "javascript", "jquery", "dokuwiki" ]
Is there a way to enable smooth scroll for ListBox class in Windows Forms framework? (I'm using C# and .NET framework 2.0).
It is a system setting. You could change it by P/Invoking SystemParametersInfo with the SPI\_SETLISTBOXSMOOTHSCROLLING argument. Doing so is not recommended.
No there isn't, not in WinForms anyway. WPF has a feature that can be enabled to do this, and there is a smooth scrolling animation that can be enabled in Vista that can give the appearance of smooth scrolling, though it's really just an animated transition. But the scrolling in a ListBox is, as far as I can tell, hardcoded to do one item at a time.
Smooth scroll in Windows Forms ListBox
[ "c#", ".net", "winforms", "listbox", "widget" ]
I'm looking to implement a very basic shopping cart. [Satchmo](http://www.satchmoproject.com/) seems to install a **lot** of applications and extra stuff that I don't need. I've heard others mention [django-cart](http://code.google.com/p/django-cart/). Has anyone tried this Django app (django-cart)? Anything to watch for or any other experiences?
Well if you want to use django-cart you should view it as a starting point for developing your own. The last commit (r4) for the project was November 2006. By comparison, the last commit (r1922) to Satchmo was a couple of hours ago. With Satchmo you get code that is under active development and actually used by real e-commerce sites. If you develop your own you're running the risk of alienating customers and losing money. If you use Satchmo you can spend more time developing/improving other areas of your site. I bet you can already guess my recommendation :) As for the apps and other stuff in Satchmo I can tell you, from personal experience, that you don't need to include them all in your INSTALLED\_APPS setting. I don't remember exactly what I pared it down to, but there were only around 6-7 Satchmo apps in my INSTALLED\_APPS and they were all ones I needed. I think they've done even more modularization since then.
I think the reason there aren't really many out-of-the-box solutions is because most people who use Django are power users. They tend to want to roll out their own solutions, especially considering how easy it is to do in Django. To answer your question, [DJwarehouse](http://demo.djwarehouse.org/) is the only other cart I'm aware of.
django-cart or Satchmo?
[ "python", "django", "e-commerce", "satchmo" ]
Session variables are apparently not working for me. I don't know what I'm doing wrong. This is the code that I'm using to check for a valid password:

```
if ($input_password_hash == $password_hash) {
    session_start();
    $_SESSION['is_user'] = 1;
    header("Location: ../new_look"); //or Location: index.php
} else {
    echo "Wrong password.";
}
```

In the index.php file (where it redirects to), I have this code:

```
if ($_SESSION['is_user'] == 1) {
    //show index page with navigation bar for registered user
} else {
    //do something else
}
```

but it's not working at all. The session does apparently start, and that can be verified by checking the cookie. What am I doing wrong?
You know that you've got to write `session_start()` before you use the `$_SESSION` variable in any request, right? It looks like you haven't put it in index.php anywhere.
`session_start()` has to be called before any output is sent to the browser, e.g. before any HTML tag in the PHP file.
PHP session seemingly not working
[ "php", "session", "cookies", "session-variables" ]
I have a box made of divs (price\_item) with several child divs (date, price, etc.). I'm using it as a carousel for showing prices. If one clicks on any child, the parent's background color must be changed to red. I wrote it like:

```
$(...selectquery...).click(function() {
    var $tgt = $(event.target);
    $tgt.parent().css("backgroundColor", "red");
});
```

In Chrome, event.target works as expected; in Firefox 3.0.1 it says "event is not defined". How can I work around this? I've tried the following:

```
$(this).parent().css()
```

it says `'object is not a function'`. Thanks!
The `event` object gets passed into your click handler by jQuery, so you need to specify it as a parameter. You also may be able to add the click handler to the parent object and use `$(this)` instead of event.target:

```
$(...select the parent...).click(function() {
    $(this).css("backgroundColor", "red");
});
```

If the user clicks the parent, or any of its children, it will call your click handler. If you need to only change the background if they actually click on one of the children (not between them, for example), this won't work.
If you want to do anything with events, make sure you pass the event as an argument to the handling function:

```
$('#foo').click(function(event) {
    // ...
});
```

But ditto what the earlier poster said: you should be able to just use `$(this)` to get what you want.
JQuery event.target error
[ "javascript", "jquery", "event-handling", "jquery-1.3" ]
There are multiple places in an application which call XmlWriter.Create on the same file, all accessed through the following function. When one calls while another is still writing, I get an IOException. What's the best way to lock or synchronize access? Here's the function that's being used:

```
public void SaveToDisk()
{
    try
    {
        XmlWriterSettings settings = new XmlWriterSettings();
        settings.Indent = true;
        using (XmlWriter writer = XmlWriter.Create(SaveFileAbsolutePath, settings))
        {
            XamlWriter.Save(this, writer);
            writer.Close();
        }
    }
    catch (Exception ex)
    {
        // Log the error
        System.Diagnostics.Debug.WriteLine(ex.Message);
        // Rethrow so we know about the error
        throw;
    }
}
```

UPDATE: It looks like the problem isn't just from calls to this function, but because another thread is reading the file while this function is writing to it. What's the best way to lock so we don't try to write to the file while it's being read?
Using a lock can solve your concurrency problem and thus avoid the IOException, but you must remember to use the same lock object in both SaveToDisk and ReadFromDisk (I assume this is the reading function); otherwise it's totally useless to lock only when you read.

```
private static readonly object syncLock = new object();

public void SaveToDisk()
{
    lock (syncLock)
    {
        ... write code ...
    }
}

public void ReadFromDisk()
{
    lock (syncLock)
    {
        ... read code ...
    }
}
```
A static lock should do the job quickly and simply:

```
private static readonly object syncLock = new object();
```

then...

```
public void SaveToDisk()
{
    lock (syncLock)
    {
        ...your code...
    }
}
```

You can also use `[MethodImpl(MethodImplOptions.Synchronized)]` (on a **static** method that accepts the instance as an argument - for example, an extension method), but an explicit lock is more versatile.
What's the best way to synchronize XmlWriter access to a file to prevent IOExceptions?
[ "c#", "file-io", "ioexception", "xmlwriter" ]
In AS3 you have a function on a string with this signature:

```
function replace(pattern:*, repl:Object):String
```

The repl:Object can also specify a function. If you specify a function, the string returned by the function is inserted in place of the matching content. Also, is it possible to get the original string in which I want to replace things? (In AS3 you can get the original string with

```
var input:String = arguments[2]; //in the callback function
```

) I don't see a property in the `Match` class containing the original string...
In order to do this in C#, use `System.Text.RegularExpressions.Regex.Replace()` which takes a callback.
```
static void Main()
{
    string s1 = Regex.Replace("abcdefghik", "e", match => "*I'm a callback*");
    string s2 = Regex.Replace("abcdefghik", "c", Callback);
}

static string Callback(Match match)
{
    return "*and so am i*";
}
```

Note you have access to the matched data via the argument (and `match.Value` in particular, unless you want access to the regex groups (`.Groups`) etc).
C# Replace with Callback Function like in AS3
[ "c#", "regex", "actionscript-3", "function", "replace" ]
I would like to randomly sort a result in a repeatable fashion, for purposes such as paging. For this NEWID() is too random, in that the same results cannot be re-obtained. Order by Rand(seed) would be ideal, as with the same seed the same random collection would result. Unfortunately, the Rand() state resets with every row; does anyone have a solution?

```
declare @seed as int;
set @seed = 1000;

create table temp (id int, date datetime)

insert into temp (id, date) values (1,'20090119')
insert into temp (id, date) values (2,'20090118')
insert into temp (id, date) values (3,'20090117')
insert into temp (id, date) values (4,'20090116')
insert into temp (id, date) values (5,'20090115')
insert into temp (id, date) values (6,'20090114')

-- re-seeds for every item
select *, RAND(), RAND(id+@seed) as r from temp order by r

--1 2009-01-19 00:00:00.000 0.277720118060575 0.732224964471124
--2 2009-01-18 00:00:00.000 0.277720118060575 0.732243597442382
--3 2009-01-17 00:00:00.000 0.277720118060575 0.73226223041364
--4 2009-01-16 00:00:00.000 0.277720118060575 0.732280863384898
--5 2009-01-15 00:00:00.000 0.277720118060575 0.732299496356156
--6 2009-01-14 00:00:00.000 0.277720118060575 0.732318129327415
-- Note how the last column is +=~0.00002

drop table temp

-- interestingly this works:
select RAND(@seed), RAND()
--0.732206331499865 0.306382810665955
```

Note, I tried Rand(ID) but that just turns out to be sorted. Apparently Rand(n) < Rand(n+1).
Building off of gkrogers' hash suggestion, this works great. Any thoughts on performance?

```
declare @seed as int;
set @seed = 10;

create table temp (id int, date datetime)

insert into temp (id, date) values (1,'20090119')
insert into temp (id, date) values (2,'20090118')
insert into temp (id, date) values (3,'20090117')
insert into temp (id, date) values (4,'20090116')
insert into temp (id, date) values (5,'20090115')
insert into temp (id, date) values (6,'20090114')

select *, HASHBYTES('md5', cast(id+@seed as varchar)) as r
from temp order by r

--1 2009-01-19 00:00:00.000 0x6512BD43D9CAA6E02C990B0A82652DCA
--5 2009-01-15 00:00:00.000 0x9BF31C7FF062936A96D3C8BD1F8F2FF3
--4 2009-01-16 00:00:00.000 0xAAB3238922BCC25A6F606EB525FFDC56
--2 2009-01-18 00:00:00.000 0xC20AD4D76FE97759AA27A0C99BFF6710
--3 2009-01-17 00:00:00.000 0xC51CE410C124A10E0DB5E4B97FC2AF39
--6 2009-01-14 00:00:00.000 0xC74D97B01EAE257E44AA9D5BADE97BAF

drop table temp
```

EDIT: Note, the declaration of @seed and its use in the query could be replaced with a parameter or with a constant int if dynamic SQL is used. (Declaring @seed in a T-SQL fashion is not necessary.)
Creating a hash can be much more time consuming than creating a seeded random number. To get more variation in the output of RAND([seed]), you need to make the [seed] vary significantly too. Possibly such as...

```
SELECT *, RAND(id * 9999) AS [r]
FROM temp
ORDER BY r
```

Using a constant ensures the replicability you asked for. But be careful of the result of (id \* 9999) causing an overflow if you expect your table to get big enough...
Pseudo Random Repeatable Sort in SQL Server (not NEWID() and not RAND())
[ "sql", "sql-server", "t-sql", "random", "paging" ]
Hey gang. I have just written a client and server in C++ using sys/socket. I need to handle a situation where the client is still active but the server is down. One suggested way to do this is to use a heartbeat to periodically assert connectivity, and if there is none, to try to reconnect every X seconds for Y period of time, and then to time out. Is this "heartbeat" the best way to check for connectivity? The socket I am using might have information on it; is there a way to check that there is a connection without messing with the buffer?
If you're using TCP sockets over an IP network, you can use the TCP protocol's keepalive feature, which will periodically check the socket to make sure the other end is still there. (This also has the advantage of keeping the forwarding record for your socket valid in any NAT routers between your client and your server.) Here's a [TCP keepalive overview](http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html) which outlines some of the reasons you might want to use TCP keepalive; [this Linux-specific HOWTO](http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/programming.html) describes how to configure your socket to use TCP keepalive at runtime. It looks like you can enable TCP keepalive in Windows sockets by setting `SIO_KEEPALIVE_VALS` using the [WSAIoctl()](http://msdn.microsoft.com/en-us/library/ms741621(VS.85).aspx) function. If you're using UDP sockets over IP you'll need to build your own heartbeat into your protocol.
Yes, this heartbeat is the best way. You'll have to build it into the protocol the server and client use to communicate. The simplest solution is to have the client send data periodically and the server close the connection if it hasn't received any data from the client in a particular period of time. This works perfectly for query/response protocols where the client sends queries and the server sends responses. For example, you can use the following scheme:

1. The server responds to every query. If the server does not receive a query for two minutes, it closes the connection.
2. The client sends queries and keeps the connection open after each one.
3. If the client has not sent a query for one minute, it sends an "are you there" query. The server responds with "yes I am". This resets the server's two minute timer and confirms to the client that the connection is still available.

It may be simpler to just have the client close the connection if it hasn't needed to send a query for the past minute. Since all operations are initiated by the client, it can always just open a new connection if it needs to perform a new operation. That reduces it to just this:

1. The server closes the connection if it hasn't received a query in two minutes.
2. The client closes the connection if it hasn't needed to send a query in one minute.

However, this doesn't assure the client that the server is present and ready to accept a query at all times. If you need this capability, you will have to implement an "are you there" / "yes I am" query/response into your protocol.
What is the best way to implement a heartbeat in C++ to check for socket connectivity?
[ "c++", "sockets", "heartbeat" ]
I have a boolean variable declared at the top of a class, and when a radio button is selected on a page, the variable gets set to true; but when the page is reloaded, the variable gets reset back to false. One way I have handled this was by using the static keyword, but I am not sure if this is the best way to handle this. Here is the class where I tried doing things in the Page\_Load event, but it still resets the variable to false.

```
public class SendEmail
{
    bool AllSelected;

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!Page.IsPostBack)
        {
            AllSelected = false;
        }
    }

    protected void rbAll_SelectedIndexChanged(object sender, EventArgs e)
    {
        if (rbAll.SelectedValue == "All")
            AllSelected = true;
    }

    public void Send()
    {
        if (AllSelected)
        {
            //Send email. Never runs because AllSelected is always false;
        }
    }
}
```
When the page gets reloaded, a new instance of your page class is created, so any values from the last server interaction are lost. Put the value into viewstate if you want it to persist across postbacks:

```
bool AllSelected
{
    get
    {
        object o = ViewState["AllSelected"];
        if (o == null)
            return false;
        return (bool)o;
    }
    set { ViewState["AllSelected"] = value; }
}
```

The ViewState collection is written into a hidden element in the form in the client's browser, and posted back and restored the next time they click a button or do any other "postback" type action.
Every time ASP.NET serves a page, it creates a new instance of the page class. This means that AllSelected will always be auto-initialized to false. My suggestion, unless there is something I don't see here, is to just call Send() from your SelectedIndexChanged method.
Why is the boolean variable always resetting to false;
[ "c#", "asp.net" ]
I am creating a class that determines which of a number of registered WCF client callbacks to call. When a client registers with the server, it supplies the token it is interested in. The class then stores a mapping of token to IClientCallback interface for that token, in a Dictionary. The method and its class under test look like the following:

```
public class Router
{
    private IDictionary<int, IClientCallBack> clients;

    public Router()
    {
        clients = new Dictionary<int, IClientCallBack>();
    }

    public Router(IDictionary<int, IClientCallBack> clients)
    {
        this.clients = clients;
    }

    public bool Register(int token, IClientCallBack client)
    {
        if (!clients.ContainsKey(token))
        {
            clients.Add(token, client);
            return true;
        }
        return false;
    }
}
```

How do I test that the clients are successfully registered with the Router? I figured I can either assume that if the function returned true, it is successful (but what is to stop the function body being only "return true;"?) or I could inject the clients Dictionary into the constructor of the class, and in my test I could check that clients.Count equals 1, as per the following.

```
[TestMethod]
public void RegisterTest()
{
    IDictionary<int, IClientCallBack> clients = new Dictionary<int, IClientCallBack>();
    var router = new Router(clients);
    var client = new Mock<IClientCallBack>().Object;

    var success = router.Register(4, client);

    Assert.IsTrue(success);
    Assert.AreEqual(1, clients.Count);
    Assert.AreEqual(clients[4], client);
}
```

While the test above seems good, it seems like overkill to use dependency injection to insert the collection in, just so I can test it. However, it does make testing a lot easier and more accurate (when testing other methods of the class too). Is this the recommended way to test this method, or is it overkill?
Well, why do you care what the contents of the dictionary is? If you've injected it then you may well care - but at that point it's part of the API. If you can do without the constructor taking a dictionary, then just test the actual *effects* of the dictionary's contents. So, if you call Register twice for the same token, it should return false, right? Make that a test. Presumably there will be other things to do with tokens - so test when the tokens have been registered and when they haven't. Put it this way: if someone wanted to reimplement the class using a linked list of token/client pairs, would that break the useful functionality of the class? I suspect not - so your tests probably shouldn't care. Now, before I start sounding too academic, I'm not one of those people who believe that unit tests should always, always touch just public methods and not care about implementation. They're not black box tests. Sometimes knowing the implementation can make it a lot easier to test just one complex bit of logic, even if it's only normally *exposed* via something which actually does more work around it. But in the case you've presented, I really would just stick to the "does it do what it should as far as the outside world can tell" approach.
Essentially, if you are looking at the inner data structure, your test is checking a specific implementation rather than asserting the *expected behaviour* of the class. If this were my test, I'd rather set some expectations on the mock object and then invoke some methods on the Router class that validate these expectations.
Unit Test a class that contains an IDictionary
[ "c#", "unit-testing" ]
Is it possible to enumerate all managed threads in C#? Visual Studio seems to be able to do this when you hit a break point while debugging. In the "Threads" window it shows a list of all running threads, including managed names. Does anyone know how it does this?
Debuggers are often able to do things "normal" code can't. You'd probably find that you can do this if you use the profiling/debugging API, but I don't believe you can do it from "normal" .NET code.
This sounds like a duplicate of "[How to enumerate threads in .NET using the Name property?](https://stackoverflow.com/questions/427485/how-to-enumerate-threads-in-net-using-the-name-property)" - If so, the short answer is "keep track of your own threads yourself" - i.e. in a `List<Thread>` or similar.
How can I enumerate all managed threads in C#?
[ "", "c#", "multithreading", "" ]
I'm trying something like this Output.py ``` print "Hello" ``` Input.py ``` greeting = raw_input("Give me the greeting. ") print "The greeting is:", greeting ``` At the cmd line ``` Output.py | Input.py ``` But it returns an *EOFError*. Can someone tell me what I am doing wrong? Thanks for your help. **EDIT** Patrick Harrington's [solution](https://stackoverflow.com/questions/466801/python-piping-on-windows-why-does-this-not-work#466851) works but I don't know why...
I tested this on my Windows machine and it works if you specify the Python exe: ``` C:\>C:\Python25\python.exe output.py | C:\Python25\python.exe input.py Give me the greeting. The greeting is: hello ``` But I get an EOFError also if running the commands directly as: ``` output.py | input.py ``` I'm not sure exactly why that is, I'm still looking into this one but at least this should provide you with a workaround for now. It may have something to do with the way the file handler is invoked for .py files. **UPDATE**: well, what do you know. Looks like this is actually a bug in Windows where stdin/stdout redirection may not work properly when started from a file association. So the workaround is as noted by myself and Patrick, you need to specify "python" will be running input.py, otherwise it will not redirect stdout from output.py to the stdin for input.py correctly. *Reference*: **<http://mail.python.org/pipermail/python-bugs-list/2004-August/024923.html>** **<http://support.microsoft.com/default.aspx?kbid=321788>** **UPDATE 2**: To change this behavior and make Windows pipes work as expected for stdin/stdout redirection, you can add this value to the registry (tested on my box and verified this works as desired). > 1. Start Registry Editor. > 2. Locate and then click the following key in the registry: > > **HKEY\_LOCAL\_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer** > 3. On the Edit menu, click Add Value, and then add the following > registry value: > > Value name: *InheritConsoleHandles* > Data type: *REG\_DWORD* > Radix: Decimal > Value data: 1 > 4. Quit Registry Editor.
Change it to: ``` Output.py | python Input.py ``` The output will be: > Give me the greeting. The greeting is: hello
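A quick way to see that the explicit-interpreter form pipes correctly (a sketch on a POSIX shell with `python3`; the original question uses Python 2 on Windows, where `print`/`raw_input` differ, but the piping principle is the same):

```shell
# Stand-ins for Output.py and Input.py, run with the interpreter named
# explicitly so stdout of the first becomes stdin of the second.
python3 -c 'print("Hello")' | \
python3 -c 'import sys; print("The greeting is:", sys.stdin.readline().strip())'
# prints: The greeting is: Hello
```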
Python piping on Windows: Why does this not work?
[ "", "python", "windows", "piping", "" ]
I have a `HashMap` with millions of entries. Need to retrieve all entries whose keys match a specific set of criteria (in this case, each key is an object with two integer properties; I need to retrieve all keys where each of these integers fall within a specified range). What is the fastest, most efficient way to iterate through all such keys? **UPDATE:** In this particular case, though I didn't specify it up front, the first integer in the key has a natural precedence over the second integer.
Here's a **solution** using [TreeMap](http://java.sun.com/javase/6/docs/api/java/util/TreeMap.html): ``` public static void main(String[] args) { Comparator<Foo> fooComparator = new Comparator<Foo>() { @Override public int compare(Foo o1, Foo o2) { return o1.compareTo(o2); } }; TreeMap<Foo, String> map = new TreeMap<Foo, String>(fooComparator); map.put(new Foo(1, 4), ""); map.put(new Foo(1, 3), ""); map.put(new Foo(2, 4), ""); map.put(new Foo(3, 4), ""); map.put(new Foo(8, 10), ""); map.put(new Foo(8, 17), ""); map.put(new Foo(10, 10), ""); int a = 2; int b = 5; for (Foo f : getKeysInRange(map, a, b)) { System.out.println(f); } } public static List<Foo> getKeysInRange(TreeMap<Foo, String> map, int low, int high) { Foo key1 = new Foo(low, low); Foo key2 = new Foo(high, high); Foo fromKey = map.ceilingKey(key1); Foo toKey = map.floorKey(key2); if (fromKey != null && toKey != null && fromKey.compareTo(toKey) < 0) return new ArrayList<Foo>(map.subMap(fromKey, true, toKey, true).keySet()); return new ArrayList<Foo>(); } public static class Foo implements Comparable<Foo> { private int i; private int j; private Foo(int i, int j) { super(); this.i = i; this.j = j; } public int min() { if (i < j) return i; else return j; } public int max() { if (i > j) return i; else return j; } @Override public String toString() { return "I=" + i + "J=" + j; } @Override public int compareTo(Foo o) { if (this.min() > o.min()) { return 1; } else if (this.min() < o.min()) return -1; else { if (this.max() > o.max()) return 1; else if (this.max() < o.max()) return -1; else return 0; } } } ```
A HashMap is not an efficient data structure for finding keys that lie within a certain range. Generally the only keys you can find efficiently in a hash map are keys with the same hash as what you have (i.e. equal keys). For finding keys that lie within a certain range, you are better off using a [SortedMap](http://java.sun.com/javase/6/docs/api/java/util/SortedMap.html) of some kind, such as a TreeMap, which can then be viewed with the SortedMap.subMap(low, high) view method. As for finding keys based on *two* integers, that is even more difficult. Your best bet is probably to iterate over the subMap of the range of the first integer, and then to check for each one if the second integer falls within the specified range. This at least limits the scan to the keys which have one of the integers within the range. Try to sort the map based on the integer that has a more natural distribution of values over the possible ranges you might have to search for.
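A minimal sketch of the `TreeMap.subMap` idea from this answer, reduced to single-integer keys for clarity (the class name and values are illustrative only):

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class SubMapDemo {
    public static void main(String[] args) {
        TreeMap<Integer, String> byFirstInt = new TreeMap<>();
        byFirstInt.put(2, "a");
        byFirstInt.put(5, "b");
        byFirstInt.put(9, "c");
        byFirstInt.put(12, "d");

        // Keys in [4, 10): the sorted structure finds the slice in O(log n),
        // instead of scanning every entry as a HashMap would force you to.
        SortedMap<Integer, String> slice = byFirstInt.subMap(4, 10);
        System.out.println(slice.keySet()); // [5, 9]
    }
}
```

With a composite key you would make the key class Comparable on the first integer (as the accepted answer does) and filter the second integer inside the loop over the sub-map.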
Efficiently iterate through all MATCHING keys in a hashmap?
[ "", "java", "search", "performance", "iterator", "hashmap", "" ]
``` #include "iostream" #include "vector" class ABC { }; class VecTest { std::vector<ABC> vec; public: std::vector<ABC> & getVec() const { //Here it errors out return vec; } }; ``` Removing the const fixes it , is it not the case that getVec is a constant method. So why is this not allowed?
What you should probably be doing is returning a const reference: `const std::vector<ABC>& getVec() const { return vec; }` It's not allowed because you've said getVec is a const method, meaning the method should not change the **this** object in any way. Returning a non-const reference would possibly allow its object to be changed, so the compiler doesn't allow it.
If a method is const then it is a guarantee to the compiler that the state of the object can not be changed by calling the method. If the method returns a reference to an internal member, then the user of the method can indirectly change the state of the object via the reference. So in effect a const method can not return a non-const reference (because it allows the state of the object to be changed indirectly). What you can do is return a const reference. Thus allowing the user access to internal members but maintaining the const contract. Example: ``` class X { int& getX(); // get a reference to X int const& getX() const; // get a reference to X BUT the interface guarantees // the object will not change state. private: int x; }; ``` Another way to look at it. If you have a const object, you are **only** allowed to call const methods. If by calling a const method you can retrieve a reference to an internal member of the object, you can alter its state. This would violate the const(ness) of the original object.
Returning a reference from a constant function
[ "", "c++", "methods", "constants", "" ]
I am using a Drop Down List for selecting some options. Everything works fine, all I need is to change the name in the URL. Is there a way to change the URL ONLY based on the option? Default Puma - ``` Default.aspx?Dept=Shoes&Type=Puma ``` if Nike is selected - ``` Default.aspx?Dept=Shoes&Type=Nike ```
To handle this on the server you could enable the AutoPostBack property of the control and handle the SelectedIndexChanged event, calling a method to identify the selected option and redirect based on this selection. ``` protected void DropDownList1_SelectedIndexChanged(object sender, EventArgs e) { Response.Redirect(string.Format("Default.aspx?Dept=Shoes&Type={0}", this.DropDownList1.SelectedValue), true); } ```
More specifically ``` <select onchange="window.location.href=this.options[this.selectedIndex].value;"> <option value="#">-- Select a Search Engine ---</option> <option value="http://www.live.com">Live Search</option> <option value="http://www.google.com">Google</option> </select> ```
How to change the url based on a Drop Down List?
[ "", "c#", ".net", "asp.net", "" ]
I have to write a C++ Application (using the Qt Framework for the GUI) that can edit data stored in xml files described by a xsd schema file. Is there a tool to convert the xsd schema into C++ Classes?
Sounds to me like [CodeSynthesis](http://www.codesynthesis.com/products/xsd/) is exactly what you are looking for. It's open source and c++.
See [XmlPlus-xsd2cpp](http://code.google.com/p/xplus-xsd2cpp/) at Google: > XmlPlus xsd2cpp provides "simple to use" C++ XML data-binding through > W3C XML-Schema. Usage of XmlPlus is covered by the [GNU Lesser General Public License](http://www.gnu.org/licenses/lgpl.html)
XML Schema to C++ Classes
[ "", "c++", "xml", "qt", "xsd", "code-generation", "" ]
I need a tool or method that allows the transfer of data and automatically updates the foreign keys at the destination table. The SET IDENTITY\_INSERT ON/OFF is not what I'm looking for. Example: ``` table master (id int identity, name char) table slave (id int identity, master_id int, name char) ``` I would like to create a script like this: ``` insert into master(name) values ('master a') insert into master(name) values ('master b') insert into slave(master_id,name) values ( ?, 'slave aa') insert into slave(master_id,name) values ( ?, 'slave bb') insert into slave(master_id,name) values ( ?, 'slave cc') insert into slave(master_id,name) values ( ?, 'slave dd') ``` When inserting the *slaves*, I would like to know what is the new value of the master\_id, in order to keep the same relation as in the original table.
@John Sansom: right on the money @pv2008: ``` create proc InsertAndUpdate @parentName varchar(255), @childName varchar(255) as declare @newParentId int insert into PARENT values (@parentName) select @newParentId = SCOPE_IDENTITY() insert into CHILD values (@newParentId, @childName) ``` call this sproc every time you insert
If you are using Identity columns within your tables, the function SCOPE\_IDENTITY() will return the ID of the last inserted record. You can then use this as the foreign key for the secondary table. <http://msdn.microsoft.com/en-us/library/ms190315.aspx> Make sense? Cheers, John
Copy data and keeping referencial integrity for new IDs
[ "", "sql", "sql-server", "insert", "foreign-keys", "" ]
Hope anyone can shed light on this so I can use pens with dash patterns? I am writing a scrollable chart (a `Panel` inside `ScrollViewer` that implements `IScrollInfo`) in WPF using `DrawingVisual`'s `DrawingContext.Draw`*X*. I have several thousand `DrawingVisual`s that get scrolled by using `TranslateTransform` on the `Panel` that hosts them. I implemented a grid by placing a `Panel` on top of it and drawing simple horizontal lines from one edge to the other using `DrawingContext.DrawLine(pen, new Point(0, y), new Point(widthOfPanel, y));` //(note: these lines are always static, they never move). The scroll performance is absolutely insane (i.e. DrawingVisual's are drawn instantly and scrolling is instant). But if I use a `Pen` that uses dash patterns (see below for example) to draw the grid lines, then scrolling is very jerky and the performance seems to have been decreased by a factor of 100 (an estimate). Can anyone explain why that happens and how I can work around this? Example of Pen with dash pattern: ``` <Pen x:Key="PenUsingDashPatterns" Brush="Black" Thickness="1"> <Pen.DashStyle > <DashStyle Dashes="3, 3" /> </Pen.DashStyle> </Pen> ```
Are the pens getting frozen? Freezing drawing objects helps performance a lot. You could set up a Loaded handler and debug to see if your pens are frozen. If not, call the Pen.Freeze() method on them manually. Note that freezing also makes the pens read-only... you will be unable to modify them after you freeze them.
Here's a possible workaround - if you're only drawing horizontal and/or vertical lines you could try creating your `Pen` with a checker pattern `DrawingBrush` such as: ``` <Pen x:Key="PenUsingDashPatterns" Thickness="1"> <Pen.Brush> <DrawingBrush TileMode="Tile" Viewport="0 0 6 6" ViewportUnits="Absolute"> <DrawingBrush.Drawing> <GeometryDrawing Brush="Black"> <GeometryDrawing.Geometry> <GeometryGroup> <RectangleGeometry Rect="0 0 3 3"/> <RectangleGeometry Rect="3 3 3 3"/> </GeometryGroup> </GeometryDrawing.Geometry> </GeometryDrawing> </DrawingBrush.Drawing> </DrawingBrush> </Pen.Brush> </Pen> ``` Alternatively, you could use different brushes for vertical and horizontal lines, or, possibly, an `ImageBrush` for better performance.
Why does use of pens with dash patterns cause huge (!) performance degredation in WPF custom 2D drawing?
[ "", "c#", ".net", "wpf", "2d", "pen", "" ]
I've been using C# for a while now but haven't really honed my UI design skills. At the time I design them, I find myself enjoying the design, but later on, I look back on it and see horrible work. An example comes to mind from my project at work where I had to use 127 buttons to represent 127 computers in our lab. :/ I was wondering if anyone could refer me to a site or book that would teach good UI design? Thanks!
I believe **observation** and **instincts** helps tremendously. Play with applications and dig deep into the functionality they implemented. Especially look at Apple, since they revolve around aesthetics. What worked? What didn't? Why did it work? Start asking yourself questions as you tinker with web sites and software. Then I would play around and design some concepts. Show these concepts to friends, both technical and non. Ask them questions like, "Add a new drink to the list" or "Find x for me." See how they react and differ from each other. Did someone with a technical background notice buttons faster than the non-techies? Should you combine elements together? This stuff should be fun. After all, this is like building a complete Lego set and showing it off to your friend. He/she get's to see, touch, and play with your creation.
Joel Spolsky wrote a book called User Interface Design for Programmers, which is a good read, and an abridged version [can be found online](http://www.joelonsoftware.com/uibook/chapters/fog0000000057.html).
Designing better GUIs?
[ "", "c#", "user-interface", "" ]
From [this question](https://stackoverflow.com/questions/505747/best-way-to-do-nested-case-statement-logic-in-sql-server), [a neat answer about using COALESCE](https://stackoverflow.com/questions/505747/best-way-to-do-nested-case-statement-logic-in-sql-server/505760#505760) to simplify complex logic trees. I considered the problem of short circuiting. For instance, in functions in most languages, arguments are fully evaluated and are then passed into the function. In C: ``` int f(float x, float y) { return x; } f(a, a / b) ; // This will result in an error if b == 0 ``` That does not appear to be a limitation of the `COALESCE` "function" in SQL Server: ``` CREATE TABLE Fractions ( Numerator float ,Denominator float ) INSERT INTO Fractions VALUES (1, 1) INSERT INTO Fractions VALUES (1, 2) INSERT INTO Fractions VALUES (1, 3) INSERT INTO Fractions VALUES (1, 0) INSERT INTO Fractions VALUES (2, 0) INSERT INTO Fractions VALUES (3, 0) SELECT Numerator ,Denominator ,COALESCE( CASE WHEN Denominator = 0 THEN 0 ELSE NULL END, CASE WHEN Numerator <> 0 THEN Numerator / Denominator ELSE NULL END, 0 ) AS TestCalc FROM Fractions DROP TABLE Fractions ``` If it were evaluating the second case when Denominator = 0, I would expect to see an error like: ``` Msg 8134, Level 16, State 1, Line 1 Divide by zero error encountered. ``` I found some [mentions](http://it.toolbox.com/blogs/oracle-guide/minitip-5-coalesce-11378) [related](http://forums.oracle.com/forums/thread.jspa?threadID=685976) to Oracle. And some tests with [SQL Server](http://www.emoreau.com/Entries/Articles/2008/04/Short-circuiting-mainly-in-VBNet-and-SQL-Server.aspx). Looks like the short-circuiting might break down when you include user-defined functions. So, is this behavior supposed to be guaranteed by the ANSI standard?
I just had a look at the linked article and can confirm short circuiting can fail for both COALESCE and ISNULL. It seems to fail if you have any sub-query involved, but it works fine for scalar functions and hard coded values. For example, ``` DECLARE @test INT SET @test = 1 PRINT 'test2' SET @test = COALESCE(@test, (SELECT COUNT(*) FROM sysobjects)) SELECT 'test2', @test -- OUCH, a scan through sysobjects ``` COALESCE is implemented according to the [ANSI standard](http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt). It is simply a shorthand for a CASE statement. ISNULL is not part of the ANSI standard. Section 6.9 does not seem to require short circuiting explicitly, but it does imply that the first true clause in the `when` statement should be returned. Here is some proof that it works for scalar based functions (I ran it on [SQL Server 2005](http://en.wikipedia.org/wiki/Microsoft_SQL_Server#SQL_Server_2005)): ``` CREATE FUNCTION dbo.evil ( ) RETURNS int AS BEGIN -- Create a huge delay declare @c int select @c = count(*) from sysobjects a join sysobjects b on 1=1 join sysobjects c on 1=1 join sysobjects d on 1=1 join sysobjects e on 1=1 join sysobjects f on 1=1 return @c / 0 END go select dbo.evil() -- takes forever select ISNULL(1, dbo.evil()) -- very fast select COALESCE(1, dbo.evil()) -- very fast ``` Here is some proof that the underlying implementation with CASE will execute sub queries. ``` DECLARE @test INT SET @test = 1 select case when @test is not null then @test when @test = 2 then (SELECT COUNT(*) FROM sysobjects) when 1=0 then (SELECT COUNT(*) FROM sysobjects) else (SELECT COUNT(*) FROM sysobjects) end -- OUCH, two table scans. If 1=0, it does not result in a table scan. ```
The *efficient* way to guarantee short circuit in MS SQL Server is to use CASE. Once a WHEN clause matches, no others are evaluated. [COALESCE can have issues](http://sqlblogcasts.com/blogs/tonyrogerson/archive/2006/05/17/444.aspx) In this instance, why have so many branches in the COALESCE/CASE constructs? ``` SELECT Numerator ,Denominator ,CASE WHEN Denominator = 0 THEN 0 ELSE Numerator / Denominator END AS TestCalc FROM Fractions ```
COALESCE - guaranteed to short-circuit?
[ "", "sql", "sql-server", "t-sql", "coalesce", "short-circuiting", "" ]
I've built a small web-service in PHP. I want to control the number of calls to a specific API method. I thought at first of using a text file, because it's just an integer. But after taking a good look at SQLite it seemed much more convenient. So the code is just: get the "counter" from SQLite and increment it when that method is called, then store it back in the SQLite db. Is this the right way of doing it? Would it be simpler and more scalable to just use a file or... maybe something else? Thanks in advance.
Put it in a real database. Both a text file and a SQLite DB will have locking issues: if the file is currently open and a new request to open it comes in, the second request will error out. Use a real DB like MySQL, MSSQL, or whatever is available.
a third possibility: [memcachedb](http://memcachedb.org/). it's compatible with [memcached](http://www.danga.com/memcached/), but stores its key-value store in a BDB file. it not only has read-write commands, but also atomic increment/decrement for numeric values. one more alternative would be to write a 'counter server'. an independent process that gets 'read' and 'increment' commands over a socket. the advantage is that it's really easy to do 'atomic' increments. it stores its counter(s) in a simple file, without concurrency problems since there's never more than one server. it should be easy to write in less than a hundred lines of C. just a tight loop processing one command at a time, flushing to disk every few seconds. since the processing is so simple, the latency is minimal.
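A sketch of the counter pattern under discussion, written with Python's `sqlite3` module for brevity (assumption: the PHP version would issue the same SQL through PDO). Doing the increment as a single `UPDATE` inside one transaction avoids the read-then-write race the answers above worry about:

```python
import sqlite3

def bump_counter(db_path):
    """Increment the API-call counter in one transaction and return it."""
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "CREATE TABLE IF NOT EXISTS counter (n INTEGER NOT NULL)")
            if conn.execute("SELECT COUNT(*) FROM counter").fetchone()[0] == 0:
                conn.execute("INSERT INTO counter (n) VALUES (0)")
            conn.execute("UPDATE counter SET n = n + 1")  # atomic on the row
            return conn.execute("SELECT n FROM counter").fetchone()[0]
    finally:
        conn.close()
```

Concurrent writers can still hit SQLite's busy/locked error under heavy load, which is the locking caveat raised above; a server database sidesteps that.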
What to choose to store just one integer? Sqlite? or Text file?
[ "", "php", "web-services", "sqlite", "text-files", "" ]
I just want a very handy way to extract the numbers out of a string in Javascript and I am thinking about using jQuery, but I prefer the method that proves to be the simplest. I have requested the "left" attribute of a css block using jQuery like this: ``` var stuff = $('#block').css("left") ``` The result of "stuff" is ``` 1008px ``` I just want to get rid of the "px" because I need to do a parseInt of it. What is the best method? If Javascript had a left() function, this would be very simple. Thanks
Just do a `parseInt("1008px", 10)`, it will ignore the 'px' for you.
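For illustration, the behaviour that makes this work: `parseInt` parses leading digits and stops at the first character that isn't part of a number, so a trailing unit like `px` is simply discarded.

```javascript
// Trailing non-digits are ignored...
console.log(parseInt("1008px", 10)); // 1008

// ...but leading non-digits are not: this yields NaN,
// so the suffix must come after the number.
console.log(parseInt("px1008", 10)); // NaN
```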
To answer your other question, you can add a `left()` function to JavaScript's built-in `String` `prototype` class so all other strings will inherit it: ``` String.prototype.left = function(n) { return this.substring(0, n); } ``` And once you include this you can say: ``` var num = "1008px".left(4); ``` I add helpers like `trim` and `capitalize` in a base JavaScript file for these kinds of things.
Left() function in Javascript or jQuery
[ "", "javascript", "jquery", "" ]
I want to know whether the below is an acceptable use of the visitor pattern. I feel a little uncomfortable returning from an Accept() or Visit() call - is this an appropriate usage of this pattern and if not, why not? **Note:** Apologies for the long code sample, seems necessary to get across what I'm doing as visitor always seems to be a little involved... ``` interface IAnimalElement<T> { T Accept(IAnimalVisitor<T> visitor); } interface IAnimalVisitor<T> { T Visit(Lion lion); T Visit(Peacock peacock); T VisitZoo(List<Animal> animals); } abstract class Animal { public int Age { get; protected set; } } class Lion : Animal, IAnimalElement<int> { public Lion(int age) { Age = age; } public int Accept(IAnimalVisitor<int> visitor) { return visitor.Visit(this); } } class Peacock : Animal, IAnimalElement<int> { public Peacock(int age) { Age = age; } public int Accept(IAnimalVisitor<int> visitor) { return visitor.Visit(this); } } class AnimalAgeVisitor : IAnimalVisitor<int> { public int TotalAge { get; private set; } int IAnimalVisitor<int>.Visit(Lion lion) { TotalAge += lion.Age; return lion.Age; } int IAnimalVisitor<int>.Visit(Peacock peacock) { TotalAge += peacock.Age + 10; return peacock.Age + 10; // peacocks ages are always -10y, correct. } public int VisitZoo(List<Animal> animals) { // Calculate average animal age. int sum = 0; int count = 0; foreach (IAnimalElement<int> animal in animals) { sum += animal.Accept(this); ++count; } return count == 0 ? 0 : sum / count; } } class Program { static void Main(string[] args) { List<Animal> animals = new List<Animal>() { new Lion(10), new Lion(15), new Peacock(3), new Lion(2), new Peacock(9) }; AnimalAgeVisitor visitor = new AnimalAgeVisitor(); Console.WriteLine("Average age = {0}, Total age = {1}", visitor.VisitZoo(animals), visitor.TotalAge); } } ```
Well to me this feels like the implementation is a little bit on the fence. Either have your Visit and Accept methods return void and track all the state in the Visitor object. Interrogate it at the end. or ... Have Visit and Accept return an in-progress state and accept an incoming in-progress state in a functional manner. If you go for the second option I'm not really sure that a visitor object or pattern is needed, you can use an iterator, function and some transient state instead.
**Short answer:** I don't see any problem with exposing an IVisitor returning a generic parameter. See [FxCop rules](https://msdn.microsoft.com/en-us/library/bb429379%28v=vs.80%29.aspx). It then permits using different **IVisitor** implementations, each returning a different value. However, **in your case**, Visitor is not useful, since every animal has the *Age* property, so everything can be done with **Animal** or a new **IAnimal** interface. An alternative is using **[multiple-dispatch](https://en.wikipedia.org/wiki/Multiple_dispatch)** at the cost of losing **Strong Typing**. Use a [**Visitor** pattern](https://en.wikipedia.org/wiki/Visitor_pattern) when you want to replace (or avoid writing) a **switch** like this one: ``` IAnimal animal = ...; switch (animal.GetType().Name) { case "Peacock": var peacock = animal as Peacock; // Do something using the specific methods/properties of Peacock break; case "Lion": var lion = animal as Lion; // Do something using the specific methods/properties of Lion break; etc... } ``` or the nested ***if-then-else*** equivalent. Its purpose is to route the instance to the routine relevant for its type by using polymorphism, and thus avoid ***ugly if-then-else/switch statements*** and ***manual casts***. Furthermore, it helps to decrease **coupling** between unrelated code. That's why it's often used to traverse an object tree (html nodes, lexer tokens, etc...). The **Visitor** pattern implies the following interfaces: * **IVisitor** ``` /// <summary> /// Interface to implement for classes visiting others. /// See Visitor design pattern for more details.
/// </summary> /// <typeparam name="TVisited">The type of the visited.</typeparam> /// <typeparam name="TResult">The type of the result.</typeparam> public interface IVisitor<TVisited, TResult> : IVisitor where TVisited : IVisitable { TResult Visit(TVisited visited); } /// <summary> /// Marking interface. /// </summary> public interface IVisitor{} ``` * **IVisitable** ``` /// <summary> /// Interface to implement for classes visitable by a visitor. /// See Visitor design pattern for more details. /// </summary> /// <typeparam name="TVisitor">The type of the visitor.</typeparam> /// <typeparam name="TResult">The type of the result.</typeparam> public interface IVisitable<TVisitor, TResult> : IVisitable where TVisitor : IVisitor { TResult Accept(TVisitor visitor); } /// <summary> /// Marking interface. /// </summary> public interface IVisitable {} ``` Implementation of **Accept** in each **IVisitable** should call *Visit(this)*.
Using the visitor pattern with generics in C#
[ "", "c#", "design-patterns", "visitor-pattern", "" ]
I want to protect my Java product by using some USB-based authentication and password management solution like you can buy here: [aladdin](http://www.aladdin.com) This means that you have to connect a USB stick with special software on it before you can start the application. I would like to hear from users who have used hardware like this. * Is this as safe as it sounds? * General: how much money would you spend to protect a piece of software that would sell 100 copies? I will obfuscate my Java code and save some user-specific OS settings in an encrypted file lying somewhere on the hard disk. I don't want to force the user to do an online registration, because the internet is not necessary for the application. Thanks **Comment:** The company I am working for has been using [Wibu](http://www.wibu.com/uk.html) for more than 5 years now.
Please just don't. Sell your software at a price point that represents its worth, with a basic key-scheme if you must to **keep honest people honest**, and leave it at that. The pirates will always steal it, and a hardware dongle will just cause grief for your honest customers. Besides, any scheme you build in will just be defeated by reverse engineering; if you make it a pain to use your software, you will motivate otherwise honest people to defeat it, or to search the internet for a crack. Simply make the protection less painful than searching for a crack.
Even though my view on the subject is to not use such piracy protection schemes, I can give you a few pointers since we have used such a solution in the past. In particular we used Aladdin tokens as well. This solution in terms of security is quite robust, since it is something that you either have on the system or you don't. It's not something that you can easily override, provided that your code is secure as well. On the down side, we came across a problem that made us drop the hardware token solution. Our application is an intranet web application (i.e. a web app running in the local intranet of the customer, not a hosted solution), and quite often the customers wanted to deploy our app on blade servers or even virtual servers, where they did not have USB ports! So before you choose such a solution, take such factors into consideration.
Piracy protection using USB based hardware solution
[ "", "java", "authentication", "hardware", "copy-protection", "piracy-protection", "" ]
I understand that one can raise an event in the class where the event declaration occurs, but I wish to raise the event at the base class level and have the derived class's event be raised: ``` public interface IFoo { event EventHandler<FooEventArgs> FooValueChanged; void RaiseFooValueChanged(IFooView sender, FooEventArgs e); } [TypeDescriptionProvider(typeof(FooBaseImplementor))] public abstract class FooBase : Control, IFoo { public virtual event EventHandler<FooEventArgs> FooValueChanged; public void RaiseFooValueChanged(IFooView sender, FooEventArgs e) { FooValueChanged(sender, e); } } ``` I cannot have the FooValueChanged event abstract, because then the base class cannot raise the event. Currently the code runs, but the call FooValueChanged(sender, e) throws a NullReferenceException because it doesn't call the derived class's event, only that of the base class. Where am I going wrong? I can have the event and the raiser both abstract, but then I need to remember to call FooValueChanged(sender, e) in every single derived class. I'm trying to avoid this while being able to use the Visual Studio designer for derived controls.
Note first, the event declaration that you are using is a short-hand notation in C#: ``` public event EventHandler Event; public void RaiseEvent() { this.Event(this, new EventArgs()); } ``` Is equivalent to: ``` private EventHandler backEndStorage; public event EventHandler Event { add { this.backEndStorage += value; } remove { this.backEndStorage -= value; } } public void RaiseEvent() { this.backEndStorage(this, new EventArgs()); } ``` Where backEndStorage is a multi-cast delegate. --- Now you can rewrite your code: ``` public interface IFoo { event EventHandler<FooEventArgs> FooValueChanged; void RaiseFooValueChanged(IFooView sender, FooEventArgs e); } [TypeDescriptionProvider(typeof(FooBaseImplementor))] public abstract class FooBase : Control, IFoo { protected event EventHandler<FooEventArgs> backEndStorage; public event EventHandler<FooEventArgs> FooValueChanged { add { this.backEndStorage += value; } remove { this.backEndStorage -= value; } } public void RaiseFooValueChanged(IFooView sender, FooEventArgs e) { if (this.backEndStorage != null) // guard against raising with no handlers attached this.backEndStorage(sender, e); } } public class FooDerived : FooBase { public event EventHandler<FooEventArgs> AnotherFooValueChanged { add { this.backEndStorage += value; } remove { this.backEndStorage -= value; } } } ``` So now when events are added on the derived class, they will actually be added to the backEndStorage of the base class, hence allowing the base class to call the delegates registered in the derived class.
The final result: ``` public interface IFoo { event EventHandler<FooEventArgs> FooValueChanged; void RaiseFooValueChanged(IFooView sender, FooEventArgs e); } [TypeDescriptionProvider(typeof(FooBaseImplementor))] public abstract class FooBase : Control, IFoo { protected event EventHandler<FooEventArgs> backEndStorage; public abstract event EventHandler<FooEventArgs> FooValueChanged; public void RaiseFooValueChanged(IFooView sender, FooEventArgs e) { if (backEndStorage != null) backEndStorage(sender, e); } } public class FooDerived : FooBase { public override event EventHandler<FooEventArgs> FooValueChanged { add { backEndStorage += value; } remove { backEndStorage -= value; } } } ```
Raising event from base class
[ "", "c#", ".net", "winforms", "events", "inheritance", "" ]
**Objective**: Make a progress bar where users can check how much of a file has been downloaded by my server.

**Scenario**: I have a PHP script that executes a Python script via popen. I have done this like so:

```
$handle = popen('python last', 'r');
$read = fread($handle, 4096);
pclose($handle);
```

This Python script outputs to the shell something like this:

```
[last] ZVZX-W3vo9I: Downloading video webpage
[last] ZVZX-W3vo9I: Extracting video information
[download] Destination: myvideo.flv
[download]  9.9% of 10.09M at 3.30M/s ETA 00:02
```

**Problem**: When I read in the output generated by the shell, I get all of it except the last line!? WHY?

Just to add, when I run the command via the shell, the shell cursor appears at the end of that line and waits till the script is done.

Thanks all
First thing that comes into my mind: maybe the program detects that it is not executed on a TTY and therefore does not show the last line, which probably involves ugly control characters because that line seems to update itself? What happens when you redirect the output to a file (in the shell), or pipe it through less? If you don't see the last line there, this is likely to be the case. I don't know of another solution than to fix the source.
Are you reading until EOF?

```
$handle = popen('python last', 'r');
$read = "";
while (!feof($handle)) {
    $read .= fread($handle, 4096);
}
pclose($handle);
```
AJAX Progress: Reading output from the Shell
[ "", "php", "linux", "shell", "command-line", "" ]
In .Net (C# or VB: don't care), given a file path string, FileInfo struct, or FileSystemInfo struct for a real existing file, how can I determine the icon(s) used by the shell (explorer) for that file? I'm not currently planning to use this for anything, but I became curious about how to do it when looking at [this question](https://stackoverflow.com/questions/462232/what-is-the-best-vb-net-control-standard-custom-for-displaying-list-of-files) and I thought it would be useful to have archived here on SO.
```
Imports System.Drawing

Module Module1

    Sub Main()
        Dim filePath As String = "C:\myfile.exe"
        Dim TheIcon As Icon = IconFromFilePath(filePath)

        If TheIcon IsNot Nothing Then
            ''# Save it to disk, or do whatever you want with it.
            Using stream As New System.IO.FileStream("c:\myfile.ico", IO.FileMode.CreateNew)
                TheIcon.Save(stream)
            End Using
        End If
    End Sub

    Public Function IconFromFilePath(filePath As String) As Icon
        Dim result As Icon = Nothing
        Try
            result = Icon.ExtractAssociatedIcon(filePath)
        Catch
            ''# swallow and return nothing. You could supply a default Icon here as well
        End Try
        Return result
    End Function

End Module
```
You should use SHGetFileInfo. Icon.ExtractAssociatedIcon works just as well as SHGetFileInfo in most cases, but SHGetFileInfo can work with UNC paths (e.g. a network path like "\\ComputerName\SharedFolder\") while Icon.ExtractAssociatedIcon cannot. If you need or might need to use UNC paths, it would be best to use SHGetFileInfo instead of Icon.ExtractAssociatedIcon. [This is good CodeProject article](http://www.codeproject.com/KB/files/fileicon.aspx) on how to use SHGetFileInfo.
Get File Icon used by Shell
[ "", "c#", ".net", "vb.net", "winforms", "shell-icons", "" ]
I have a function in an external library that I cannot change with the following signature: ``` void registerResizeCallback(void (*)(int, int)) ``` I want to pass in a member function as the callback, as my callback needs to modify instance variables. Obviously this isn't possible with a simple: ``` registerResizeCallback(&Window::Resize); ``` so I'm not really sure how to solve the problem.
Check ["[33.2] How do I pass a pointer-to-member-function to a signal handler, X event callback, system call that starts a thread/task, etc?"](http://www.parashift.com/c++-faq-lite/pointers-to-members.html#faq-33.2) at the C++ FAQ Lite: > Don't. > > Because a member function is meaningless without an object to invoke it on, you can't do this directly > > ... > > As a patch for existing software, use a top-level (non-member) function as a wrapper which takes an object obtained through some other technique.
As [Igor Oks indicates](https://stackoverflow.com/questions/499153/passing-a-qualified-non-static-member-function-as-a-function-pointer/499159#499159), you can't do this. The remainder of this answer is not so much a solution to your problem, but a discussion of how something like this should work with a properly designed callback API (it appears the one you're using isn't).

Most well-designed callback interfaces let you provide a "`void *`" or some other way to get a context in the callback. A common way to use this with C++ is to pass an object pointer in the `void *` context parameter, then the callback function can cast it back into an object pointer and call the member method to do the real work. It's too bad the callback API you're using doesn't provide for context data.

Strictly speaking, the callback must be `extern "C"`, but using static member methods for callbacks is common and I think in practice there's never a problem. (This is assuming that the callback API is a C interface, which is by far the most common).

An example:

```
// callback API declarations
extern "C" {
    typedef unsigned int callback_handle_t;
    typedef void (*callback_fcn_t)( void* context, int data1, int data2);

    callback_handle_t RegisterCallback( callback_fcn_t, void* context);
    void UnregisterCallback( callback_handle_t);
}

// ----------------------------------

// prototype for wrapper function that will receive the callback and
// transform it into a method call
extern "C" static void doWorkWrapper( void* context, int data1, int data2);

// the class that does the real work
class worker {
public:
    worker() {
        hCallback = RegisterCallback( doWorkWrapper, this);
    }

    ~worker() {
        UnregisterCallback( hCallback);
    }

    void doWork( int data1, int data2) {
        // ...
    }

private:
    callback_handle_t hCallback;
};

// the wrapper that transforms the callback into a method call
extern "C" static void doWorkWrapper( void* context, int data1, int data2)
{
    worker* pWorker = static_cast<worker*>( context);
    pWorker->doWork( data1, data2);
}
```
Passing a qualified non-static member function as a function pointer
[ "", "c++", "" ]
I have heard people state that code generators and T4 templates should not be used. The logic behind that is that if you are generating code with a generator, then there is a better, more efficient way to build the code through generics and templating.

While I slightly agree with this statement, I have not really found effective ways to build templates that can, say, instantiate themselves. In other words, I can never do:

```
return new T();
```

Additionally, if I want to generate code based on database values, I have found that using `Microsoft.SqlServer.Management.SMO` in conjunction with T4 templates has been wonderful for generating mass amounts of code without having to copy/paste or use ReSharper.

Another problem I have found with generics is that, to my shock, there are a lot of developers who do not understand them. When I do examine generics for a solution, there are times when it gets complicated because C# states that you cannot do something that may seem logical in my mind.

What are your thoughts? Do you prefer to build a generator, or do you prefer to use generics? Also, how far can generics go? I know a decent amount about generics, but there are traps and pitfalls that I always run into that cause me to resort to a T4 template.

What is the more proper way to handle scenarios where you need a large amount of flexibility? Oh, and as a bonus to this question, what are good resources on C# and generics?
You can do `new T();` if you add a `new()` constraint:

```
public class Meh<T>
    where T : new()
{
    public static T CreateOne()
    {
        return new T();
    }
}
```

As for code generators: I use one every day without any problems. I'm using one right now in fact :-)

Generics solve one problem, code generators solve another. For example, creating a business model using a UML editor and then generating your classes with persistence code, as I do all of the time using [this tool](http://capableobjects.com/ProductsServices_ECO.aspx), couldn't be achieved with generics, because each persistent class is completely different.

As for a good source on generics: the best has got to be [Jon Skeet's book](https://rads.stackoverflow.com/amzn/click/com/1933988363), of course! :-)
As the originator of T4, I've had to defend this question quite a few times as you can imagine :-) My belief is that at its best code generation is a step on the way to producing equivalent value using reusable libraries. As many others have said, the key concept to maintain DRY is never, ever changing generated code manually, but rather preserving your ability to regenerate when the source metadata changes or you find a bug in the code generator. At that point the generated code has many of the characteristics of object code and you don't run into copy/paste type problems. In general, it's much less effort to produce a parameterized code generator (especially with template-based systems) than it is to correctly engineer a high quality base library that gets the usage cost down to the same level, so it's a quick way to get value from consistency and remove repetition errors. However, I still believe that the finished system would most often be improved by having less total code. If nothing else, its memory footprint would almost always be significantly smaller (although folks tend to think of generics as cost free in this regard, which they most certainly are not). If you've realised some value using a code generator, then this often buys you some time or money or goodwill to invest in harvesting a library from the generated codebase. You can then incrementally reengineer the code generator to target the new library and hopefully generate much less code. Rinse and repeat. One interesting counterpoint that has been made to me and that comes up in this thread is that rich, complex, parametric libraries are not the easiest thing in terms of learning curve, especially for those not deeply immersed in the platform. Sticking with code generation onto simpler basic frameworks can produce verbose code, but it can often be quite simple and easy to read. 
Of course, where you have a lot of variance and extremely rich parameterization in your generator, you might just be trading off complexity in your product for complexity in your templates. This is an easy path to slide into and can make maintenance just as much of a headache - watch out for that.
Code Generators or T4 Templates, are they really evil?
[ "", "c#", "generics", "code-generation", "t4", "" ]
Can anyone suggest a suitable way of figuring out a piece's allowable moves on a grid similar to the one in the image below?

![grid layout](https://farm4.static.flickr.com/3534/3250436085_c91b07c7fd.jpg)

Assuming piece1 is at position a1 and piece2 is at position c3, how can I figure out which grid squares are allowable moves if piece1 can move (say) 3 squares and piece2 can move 2?

I've spent way too long developing text-based MUDs, it seems; I simply can't get my brain to take the next step into how to visualise potential movement, even in the most simple of situations.

If it matters, I'm trying to do this in JavaScript, but to be perfectly honest I think my failure here is a failure to conceptualise properly - not a failure in language comprehension.

**Update - I'm adding the first round of code written after the below responses were posted. I thought it might be useful to people in a similar situation to see the code.**

It's sloppy and it only works for one item placed on the board so far, but at least the `check_allowable_moves()` function works for this initial run. For those of you wondering why the hell I'm creating those weird alphanumeric objects rather than just using a numeric x axis and y axis - it's because an id in HTML can't start with a number. In fact, pretending I *could* use numbers to start ids helped a great deal in making sense of the functionality and concepts described by the fantastic answers I got.
```
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<meta http-equiv="Content-Type" content="application/xhtml+xml;utf-8"/>
<title>Test page</title>
<style>
#chessboard {
    clear: both;
    border: solid 1px black;
    height: 656px;
    width: 656px; /*width = 8*40 + 16 for border*/
}
#chessboard .row {
    overflow: auto;
    clear: both;
}
#chessboard .row span {
    display: block;
    height: 80px;
    width: 80px;
    float: left;
    border: solid 1px black;
}
.allowable {
    background: blue;
}
</style>
<script type="text/javascript" src="http://www.google.com/jsapi"></script>
<script type="text/javascript">
    google.load("jquery", "1.2.6");
    google.load("jqueryui", "1.5.3");
</script>
<script type="text/javascript">
$(document).ready(function() {
    (function() {
        var global = this;
        global.Map = function(container) {

            function render_board() {
                var max_rows = 8;
                var cols = new Array('a', 'b', 'c', 'd', 'e', 'f', 'g', 'h');
                var jqMap = $('<div />');
                jqMap.attr('id', 'chessboard');
                var x = 0;
                for (x; x < max_rows; x++) {
                    var jqRow = $('<span />');
                    jqRow.addClass('row');
                    var i = 0;
                    for (i; i < cols.length; i++) {
                        var jqCol = $('<span />');
                        jqCol.attr('id', cols[i] + (x + 1));
                        jqCol.addClass(cols[i]);
                        jqRow.append(jqCol);
                    }
                    jqMap.append(jqRow);
                }
                $('#' + container).append(jqMap);
            }

            function add_piece(where, id) {
                var jqPiece = $('<div>MY PIECE' + id + '</div>');
                var jqWhere = $('#' + where);
                jqPiece.attr('id', 'piece-' + id);
                jqPiece.addClass('army');
                jqPiece.draggable({
                    cursor: 'move',
                    grid: [82, 82],
                    containment: '#chessboard',
                    revert: 'invalid',
                    stop: function(ev, ui) {
                        //console.log(ev.target.id);
                    }
                });
                jqWhere.append(jqPiece);
                check_allowable_moves(where);
            }

            function check_allowable_moves(location) {
                var x_axis = { 'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5, 'f': 6, 'g': 7, 'h': 8 };
                var x_axis_alpha = { 1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e', 6: 'f', 7: 'g', 8: 'h' };

                $('.allowable').droppable("destroy");
                $('.allowable').removeClass('allowable');

                //get the x,y values of the piece just placed
                var x = parseInt(x_axis[location[0]], 10);
                var y = parseInt(location[1], 10);

                var x_min = x - 2;
                var y_min = y - 2;

                for (x_min; x_min <= x + 2; x_min++) {
                    for (y_min; y_min <= y + 2; y_min++) {
                        var jqCell = $('#' + x_axis_alpha[x_min] + y_min);
                        jqCell.addClass('allowable');
                        jqCell.droppable({
                            accept: '.army',
                            drop: function(ev, ui) {
                                //console.log(ev, ui, $(this));
                                //handle_drop(ev, ui, $(this));
                                check_allowable_moves($(this).attr('id'));
                            }
                        });
                    }
                    y_min = parseFloat(y) - 2;
                }
            }

            render_board();
            add_piece('d5', '2');
        }
    })();

    var map = new Map('content');
});
</script>
</head>
<body id="debug">
<div id="page">
    <div id="content">
    </div>
</div><!-- end page -->
</body>
</html>
```
Suppose piece p is at position x, y and can move n squares away to position x2, y2. This means that the sum of the absolute differences between (x - x2) and (y - y2) can be no greater than n.

If you're going to show which squares can be moved to (rather than taking inputs x2 and y2), I think it'd be best to loop over all positions in a square around the piece. That is...

```
for (x2 = x - n TO x + n):
    for (y2 = y - n TO y + n):
        if (abs(x - x2) + abs(y - y2) <= n):
            mark as okay.
```

This answer assumes pieces can only move to adjacent squares and not diagonally.

Edit: If you want diagonal movement, and moving along a diagonal costs just as much as moving horizontally or vertically, then the problem is actually much easier - the piece p can move between the ranges of (x - n, x + n) and (y - n, y + n). The answer becomes a lot more complex if moving diagonally doesn't cost as much as a horizontal + vertical movement (e.g., if diagonal costs 1.5, whereas h/v costs 1).
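For concreteness, here is that loop as runnable code (Python used as neutral pseudocode here; the 8x8 board size and zero-based numeric coordinates are assumptions, since the question's grid uses a1-style ids):

```python
def allowable_moves(x, y, n, width=8, height=8):
    """Return all (col, row) cells reachable within n orthogonal steps."""
    moves = set()
    # Only scan the bounding square around the piece, clipped to the board.
    for x2 in range(max(0, x - n), min(width, x + n + 1)):
        for y2 in range(max(0, y - n), min(height, y + n + 1)):
            if abs(x - x2) + abs(y - y2) <= n:  # Manhattan distance check
                moves.add((x2, y2))
    return moves

# A piece in the corner with 1 movement point can stay put or step right/up.
print(sorted(allowable_moves(0, 0, 1)))  # [(0, 0), (0, 1), (1, 0)]
```

The reachable set forms the diamond shape you would expect for orthogonal-only movement.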
In general, such problems involve a reasonably limited grid of places one can possibly reach.

Take a data structure of the size of the grid whose elements can hold the number of remaining movement points with sufficient precision. Initialize the grid to a not-visited value. This must not be in the range of zero to the maximum possible move speed. A negative value is ideal. Initialize the starting location to the number of moves remaining.

At this point there are three possible approaches:

1) Rescan the whole grid each step. Simple but slower. Termination is when no points yield a legal move.

2) Store points on a stack. Faster than #1 but still not the best. Termination is when the stack is empty.

3) Store points in a queue. This is the best. Termination is when the queue is empty.

```
Repeat
    ObtainPoint  {from queue, stack or brute force}
    For Each Neighbor do
        Remaining = Current - MovementCost
        If Remaining > CurrentValue[Neighbor] then
            CurrentValue[Neighbor] = Remaining
            Push or Queue Neighbor
Until Done
```

Note that with the stack-based approach you will always have some cases where you end up throwing out the old calculations and doing them again. A queue-based approach will have this happen only if there are cases where going around bad terrain is cheaper than going through it.

Check the termination condition only at the end of the loop, or else terminate when ObtainPoint attempts to use an empty queue or stack. An empty queue/stack after ObtainPoint does *NOT* mean you're done!

(Note that this is a considerable expansion on Ian's answer.)
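The queue-based variant above can be sketched as follows (Python for brevity; the cost map and grid shape are made-up examples, not part of the original answer):

```python
from collections import deque

def movement_range(start, points, cost):
    """Return {cell: movement points remaining} for every reachable cell.

    cost maps each on-map cell (x, y) to the cost of entering it.
    """
    NOT_VISITED = -1                      # outside 0..max-move-speed, as described
    best = {cell: NOT_VISITED for cell in cost}
    best[start] = points
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in cost:
                continue                  # off the map
            remaining = best[(x, y)] - cost[nxt]
            if remaining > best[nxt]:     # found a cheaper route into nxt
                best[nxt] = remaining
                queue.append(nxt)
    return {cell: r for cell, r in best.items() if r >= 0}

# 4x1 strip, each square costs 1 to enter; 2 movement points from the left end.
reach = movement_range((0, 0), 2, {(x, 0): 1 for x in range(4)})
print(sorted(reach))  # [(0, 0), (1, 0), (2, 0)]
```

With uniform costs this degenerates to plain breadth-first search; the re-queueing only kicks in when terrain costs differ.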
Determining allowable moves on a grid
[ "", "javascript", "grid", "" ]
Lets assume that I'm dealing with a service that involves sending large amounts of data. If I implement this with WCF, will WCF throttle the service based on how much memory each request takes to serve? Or will I be getting continuous out of memory exceptions each time I receive a large number of hits to my service? I'm quite curious as to dealing with this problem outside of WCF, I'm still a bit new to service development...
While using the binding attributes and readerQuotas as Andrew Hare suggests will allow for an essentially unlimited size for most practical uses, keep in mind that you will run into other issues, such as timeouts, if you accept a long-running command, no matter how that service is constructed (using WCF or not).

No matter what the size of your message is, the WCF service will need to be throttled for performance so that it is not flooded. If you are hosting it in IIS or WAS, you will have additional built-in features in those hosting environments that will make your service much more "highly available". However, you still need to pay attention to concurrency issues.

The following WCF config provides an example of setting some throttling values:

```
<system.serviceModel>
  ...
  <behaviors>
    <serviceBehaviors>
      <behavior name="GenericServiceBehavior">
        <serviceTimeouts transactionTimeout="00:09:10"/>
        <serviceThrottling
          maxConcurrentCalls="20"
          maxConcurrentSessions="20"
          maxConcurrentInstances="20" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```
WCF does have a default payload size limit that will reject messages over a certain number of bytes. This is configurable, of course, in the binding section of your configuration file. Here is a crude example with a [`basicHttpBinding`](http://msdn.microsoft.com/en-us/library/ms731361.aspx) showing you many of the attributes available to you:

```
<bindings>
  <basicHttpBinding>
    <binding name="testBinding" maxReceivedMessageSize="2147483647">
      <readerQuotas
        maxDepth="2147483647"
        maxStringContentLength="2147483647"
        maxArrayLength="2147483647"
        maxBytesPerRead="2147483647"
        maxNameTableCharCount="2147483647" />
    </binding>
  </basicHttpBinding>
</bindings>
```

The idea is that you can create many different bindings that you can use for different scenarios. This is nice, as you can fine-tune how your services are consumed and only increase the message size limit for the endpoints that need it.
WCF Service Throttling
[ "", "c#", ".net", "wcf", "soa", "throttling", "" ]
It's been several years since I have used regular expressions, and I was hoping I could get some help on something I'm working on.

You know how Google's search is quite powerful and will take stuff inside quotes as a literal phrase and things with a minus sign in front of them as not included. Example:

"this is literal" -donotfindme site:examplesite.com

This example would search for the phrase "this is literal" in pages that don't include the word donotfindme on the website examplesite.com.

Obviously I'm not looking for something as complex as Google; I just wanted to reference where my project is heading.

Anyway, I first wanted to start with the basics, which is the literal phrases inside quotes. (This is PHP.) With the help of another question on this site I was able to do the following:

```
$search = 'hello "this" is regular expressions';
$pattern = '/".*"/';
$regex = preg_match($pattern, $search, $matches);
print_r($matches);
```

But this outputs "this" instead of the desired `this`, and doesn't work at all for multiple phrases in quotes.

Could someone lead me in the right direction? I don't necessarily need code; even a real nice place with tutorials would probably do the job. Thanks!
Well, for this example at least, if you want to match only the text inside the quotes you'll need to use a capturing group. Write it like this:

```
$pattern = '/"(.*)"/';
```

and then `$matches` will be an array of length 2 that contains the text between the quotes in element 1. (It'll still contain the full text matched in element 0.)

In general, you can have more than one set of these parentheses; they're numbered from the left starting at 1, and there will be a corresponding element in `$matches` for the text that each group matched. Example:

```
$pattern = '/"([a-z]+) ([a-z]+) (.*)"/';
```

will select all quoted strings which have two lowercase words separated by a single space, followed by anything. Then `$matches[1]` will be the first word, `$matches[2]` the second word, and `$matches[3]` the "anything".

For finding multiple phrases, you'll need to pick out one at a time with `preg_match()`. There's an optional "offset" parameter you can pass, which indicates where in the string it should start searching, and to find multiple matches you should give the position right after the previous match as the offset. See the [documentation](https://www.php.net/manual/en/function.preg-match.php) for details.

You could also try searching Google for "regular expression tutorial" or something like that; there are plenty of good ones out there.
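One subtlety when extending this to multiple quoted phrases: `.*` is greedy, so on a line with two quoted phrases it will match from the first quote to the last. A non-greedy `.*?` (or `[^"]*`) avoids that. A quick sketch of the multiple-match loop, shown in Python for brevity (`re.finditer` plays the role of calling `preg_match()` repeatedly with an advancing offset):

```python
import re

search = 'hello "this" is "regular" expressions'

# Non-greedy .*? so each quoted phrase matches separately;
# the parentheses form capturing group 1 (the text between the quotes).
pattern = r'"(.*?)"'

phrases = [m.group(1) for m in re.finditer(pattern, search)]
print(phrases)  # ['this', 'regular']
```

In PHP the closest one-call equivalent is `preg_match_all()`, which collects every match in one go.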
Sorry, my PHP is a bit rusty, but this code will probably do what you request:

```
$search = 'hello "this" is regular expressions';
$pattern = '/"(.*)"/';
$regex = preg_match($pattern, $search, $matches);
print_r($matches[1]);
```

`$matches[1]` will contain the 1st captured subexpression; `$matches` or `$matches[0]` contains the full matched pattern. See [preg_match](https://www.php.net/preg_match) in the PHP documentation for specifics about subexpressions.

I'm not quite sure what you mean by "multiple phrases in quotes", but if you're trying to match balanced quotes, it's a bit more involved and tricky to understand. I'd pick up a reference manual. I highly recommend [Mastering Regular Expressions, by Jeffrey E. F. Friedl](http://oreilly.com/catalog/9780596528126/index.html). It is, by far, the best aid to understanding and using regular expressions. It's also an excellent reference.
Google Style Regular Expression Search
[ "", "php", "" ]
Is there any particular reason to use one over the other? I personally tend to use the latter, as it just seems to flow better to me.
They do the same thing; `<?=` is just called the short tag and is shorthand for `<?php echo`. You have to make sure short tags are enabled to use the `<?=` notation.
As far as I know, they are functionally equivalent except the second can be disabled in configurations so isn't as portable.
Difference between <?php echo $session_id ?> and <?= $session_id ?>
[ "", "php", "" ]
Is there any way to beat the 100-group limit for regular expressions in Python? Also, could someone explain why there is a limit.
I'm not sure what you're doing exactly, but try using a single group with a lot of OR clauses inside... so `(this)|(that)` becomes `(this|that)`. You can do clever things with the results by passing a function that does something with the particular word that is matched:

```
newContents, num = cregex.subn(lambda m: replacements[m.string[m.start():m.end()]], contents)
```

If you really need so many groups, you'll probably have to do it in stages... one pass for a dozen big groups, then another pass inside each of those groups for all the details you want.
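A runnable version of that sketch (the word list and replacement mapping here are made up for illustration; note that `m.group(0)` is equivalent to the `m.string[m.start():m.end()]` slice above):

```python
import re

replacements = {"this": "THIS", "that": "THAT"}

# One alternation inside a single group instead of one group per word:
cregex = re.compile(r"\b(this|that)\b")

contents = "replace this and that, but not those"
new_contents, num = cregex.subn(lambda m: replacements[m.group(0)], contents)
print(new_contents)  # replace THIS and THAT, but not those
print(num)           # 2
```

Since the whole alternation is one group, this pattern uses a single group number no matter how many words you add, sidestepping the limit entirely.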
There is a limit because it would take too much memory to store the complete state machine efficiently. I'd say that if you have more than 100 groups in your re, something is wrong either in the re itself or in the way you are using them. Maybe you need to split the input and work on smaller chunks or something.
Python regular expressions with more than 100 groups?
[ "", "python", "regex", "" ]
I have a C# application which works with an Oracle database and has already shipped. Now it's time to ship a new release. The C# object model has been revised, which had an impact on the table structure. If I ship the new release, I need to take care of existing data. Just dropping tables and recreating them wouldn't make any customers happy.

To counter this problem I have collected SQL scripts which alter the previously released database structure to the new database structure. In the course of this, the data are migrated too. The SQL scripts are committed to a repository like C# source code. The patching of the database is tested on a regular basis with the help of CruiseControl.NET. NUnit tests are run against the patched database to uncover mismatches between database tables and the C# object model.

The whole procedure does work, but I have the feeling that this could be done better. I regard database migration as very critical. A shipped application which doesn't work with a wrongly patched database has no value. Losing data is unacceptable. These horror scenarios might make me think not to change the database at all. So it's very important for me to have full confidence in the tools and practices I use.

Last week I stumbled over [LiquiBase](http://www.liquibase.org/) and I asked myself -- and now in SO: **What tools or practices can help to do database migration with lesser risks and more confidence? Are there any good books or internet resources out there?**

I am especially interested in specific solutions for C# and Oracle which might fit in the development procedure I have outlined above.
Database upgrade scripts must be part of the development process. Here is one way of keeping track of database schema upgrades:

* create a VERSION table in the database that contains one record with the version number
* each time you make a change to the database schema of your application you should:
  + create a SQL script for creating, altering or dropping database objects
  + create a SQL script for managing data changes that must be done with the new data schema (e.g. insert defaults in new fields, insert default records in new tables, create a script for splitting or merging tables, ...)
  + increment the database version number - for each change I usually create one script named DbVerXXXX.SQL that contains all necessary upgrades (XXXX is the version number). Also, I do changes in small steps - change the DB schema only for the next change you will make in your application. Don't create a database upgrade that will take weeks or months of work to upgrade your application.
* create a script that will upgrade your user's database to the new version:
  + the script should check the current version of the database and then execute the database upgrade scripts that will convert the schema to the required level
  + change the version number in the VERSION table

This process enables you to:

* put all database schema changes under source control, so you have a complete history of changes
* try and test your upgrade scripts on test databases before you ship to the customer
* automatically upgrade user databases with confidence
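As a sketch of what the upgrade runner in the last bullet might look like (Python with SQLite purely for illustration - the `VERSION` table layout and script naming are assumptions taken from the scheme above; your real target is Oracle driven from C#):

```python
import sqlite3

def upgrade(conn, scripts):
    """Apply numbered upgrade scripts newer than the stored schema version.

    scripts: dict mapping version number -> SQL text (e.g. loaded from
    the DbVerXXXX.SQL files described above).
    """
    conn.execute("CREATE TABLE IF NOT EXISTS VERSION (num INTEGER)")
    row = conn.execute("SELECT num FROM VERSION").fetchone()
    current = row[0] if row else 0
    if row is None:
        conn.execute("INSERT INTO VERSION VALUES (0)")
    # Apply only the scripts newer than the current version, in order.
    for version in sorted(v for v in scripts if v > current):
        conn.executescript(scripts[version])
        conn.execute("UPDATE VERSION SET num = ?", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
upgrade(conn, {1: "CREATE TABLE users (id INTEGER);",
               2: "ALTER TABLE users ADD COLUMN name TEXT;"})
print(conn.execute("SELECT num FROM VERSION").fetchone()[0])  # 2
```

Because the runner checks the stored version first, it is safe to run against a database at any older version - it simply applies the missing steps and is a no-op on an up-to-date database.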
You may want to look into some of the database redundancy technologies available out there, such as [Oracle Dataguard](http://www.oracle.com/technology/deploy/availability/htdocs/DataGuardOverview.html). I believe it in particular has some features that could help with this type of scenario. Regardless of your technology however, anytime you have a schema change in the database you will obviously run some risk. One simple practice that you can always perform is to create a copy of the database, put it on a different server, and run your upgrade procedure there first to work through your bugs. When working with customers in similar scenarios we've typically done that with them, just to alleviate their concerns and iron out any potential issues before performing the operation on the live environment.
Migrating an Oracle database with a C# application attached to it: How to manage database migration?
[ "", "c#", "database", "oracle", "migration", "" ]
How do you build a simple recommendation system? I have seen some algorithms, but they are so difficult to implement. Is there a practical description of how to implement even the most simple algorithm?

I have these three tables:

```
Users
userid  username
1       aaa
2       bbb
```

```
products
productid  productname
1          laptop
2          mobile phone
3          car
```

```
users_products
userid  productid
1       1
1       3
3       2
2       3
```

so I want to be able to recommend items for each of the users depending on the items they purchased and other users' items.

I know it should be something like calculating the similarities between users and then looking at their products, but how can this be done and stored in a database? It would seem to require a table something like this:

```
     1   2   3   4   5   6   << users' ids
1)   1  .4  .2  .3  .8  .4
2)  .3   1  .5  .7  .3  .9
3)  .4  .4   1  .8  .2  .3
4)  .6  .6  .6   1  .4  .2
5)  .8  .7  .4  .2   1  .3
6)   1  .4  .6  .7  .9   1
^
users' ids
```

So how can similarity between users be calculated? And how could this complex data be stored in a database (it requires a table with a column for every user)?

Thanks
How you want to actually store the recommendations is a question completely unrelated to how one would actually implement a recommendation engine. I leave that to your database architecture.

On to the recommending. You said "simple", so a Pearson correlation coefficient might be the thing you need to read up on. Calculating such a thing is dead simple. [Concept](http://en.wikipedia.org/wiki/Pearson_correlation), [example code](http://www.alglib.net/statistics/correlation.php).
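A minimal sketch of that calculation over the 1/0 purchase vectors implied by the question's `users_products` table (pure Python for illustration; in practice you would hand this to a statistics library or do it in SQL):

```python
from math import sqrt

def pearson(u, v):
    """Pearson correlation between two equal-length rating/purchase vectors."""
    n = len(u)
    mean_u = sum(u) / n
    mean_v = sum(v) / n
    cov = sum((a - mean_u) * (b - mean_v) for a, b in zip(u, v))
    var_u = sum((a - mean_u) ** 2 for a in u)
    var_v = sum((b - mean_v) ** 2 for b in v)
    if var_u == 0 or var_v == 0:
        return 0.0  # no variance -> no meaningful correlation
    return cov / sqrt(var_u * var_v)

# 1/0 vectors over the product catalogue (laptop, phone, car):
user1 = [1, 0, 1]   # bought laptop and car
user2 = [0, 0, 1]   # bought car
print(round(pearson(user1, user1), 3))  # 1.0  (identical purchase history)
print(round(pearson(user1, user2), 3))  # 0.5
```

To recommend, you would then rank the other users by this score and suggest products the most similar users bought that the target user hasn't.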
Maybe reading ["Programming Collective Intelligence"](https://rads.stackoverflow.com/amzn/click/com/0596529325) will help you.
How to build a simple recommendation system?
[ "", "c#", "asp.net", "database", "algorithm", "database-design", "" ]
Is there a way at runtime to switch out an application's app.config (current.config to new.config, file for file)? I have a backup/restore process which needs to replace its own application.exe.config file. I have seen this [post](https://stackoverflow.com/questions/242568/is-it-possible-to-switch-application-configuration-file-at-runtime-for-net-appli) but it does not answer how to do this at runtime.
Turns out I can swap the .config file for the new one and do a ConfigurationManager.RefreshSection(...) for each section. It will update from the new .config file.
Microsoft .NET's `app.config` is not designed for your scenario, as well as many others. I often encounter a similar need, so I have spent a lot of effort designing a solution.

1. Redesign to use `app.config` only as a configuration bootstrap: specify where to find the rest of the real configuration data. This information should almost never change, so there is no need to handle file watching or application restarts.
2. Pick an alternate location for the real configuration data: a file, a database, perhaps even a web service. I prefer a database most of the time, so I create a configuration table with a simple structure that allows me to store my data.
3. Implement a simple library to wrap your configuration access so that you have a simple API for the rest of your application (via dependency injection). Hide the usage of `app.config` as well as your real configuration storage location(s). Since .NET is strongly typed, make the configuration settings so too: convert each string retrieved into the most specific type available (URL, Int32, FileInfo, etc.).
4. Determine which configuration settings can be safely changed at runtime versus those that can't. Typically, some settings need to change along with others, or it simply makes no sense to allow them to change at all. If all your configuration data can safely change at runtime, then that makes things easy, but I HIGHLY doubt such a scenario. Hide the changeability and interdependencies of the configuration settings to the extent possible.
5. Design the response to the unavailability of your real configuration data. I prefer to treat the absence of any configuration setting as a fatal error that aborts the application, unless I can identify a usable default. Likewise, I abort in the absence of the configuration storage container (file, database table, etc.).

Enjoy, and best wishes.
Is switching app.config at runtime possible?
[ "", "c#", "runtime", "app-config", "" ]
How can I ensure a dll is not unloaded while any objects in it exist? The problem is, when I was using explicit memory management I could delete the dll objects before freeing the dll, however with smart pointers I have no control over the order they're destroyed in, meaning the dll may be freed first, causing a crash when trying to free one of the other objects. FlPtr is a simple reference counting class that calls AddRef and Release as needed:

```
// with explicit management:
ExampleDll *dll = LoadDll(L"bin\\example.dll");
IObject *obj = dll->CreateObject();
...
obj->Release();
delete dll; // fine because all objects already deleted
return 0;

// with smart pointers:
auto_ptr<ExampleDll> dll = LoadDll(L"bin\\example.dll");
FlPtr<IObject> obj = dll->CreateObject();
...
return 0; // crash if dll is destructed before obj, since Object::Release needs to call into the dll
```

I tried making the dll handle unloading itself, i.e. only unload after all objects have been deleted. This works by creating a new object, IExampleDll, which the dll implements. This is like the ExampleDll object from before, but it lives in the dll rather than the exe and is also reference counted. Each object in the dll increments this reference count on construction and decrements it on destruction. This means the reference count only reaches zero when the exe has Released its references AND all the dll's objects have been destroyed. It then deletes itself, calling FreeLibrary(GetModuleHandle()) in its destructor. This however crashes at the FreeLibrary; I'm assuming because the thread is still inside the dll's code that is being unloaded...

I'm at a loss now how to make sure the dll is only unloaded when there are no remaining objects, apart from going back to freeing the dll explicitly after everything else should have been deleted:

```
int main()
{
    ExampleDll *dll = LoadDll("bin\\example.dll");
    restOfProgram();
    delete dll;
}
```

This approach becomes difficult when dlls need to be loaded/unloaded mid-program safely, e.g. if the user changes from d3d to openGL in the options.
Assuming you do not want to terminate the thread when unloading the library (otherwise, see MSalters), you need to free the library from the caller that loaded it. COM solves that with an in-DLL instance counter (much like yours, if I understand you correctly), regularly checked by calling the global exported `DllCanUnloadNow` function. Another option is to have your object/interface smart pointers ALSO reference the DLL they came from. This would increase the client data size, but you wouldn't need to touch the DLL. You might even recycle the LoadLibrary/FreeLibrary reference counter, however that might hit performance. Also, none of these schemes help much if you get circular DLL dependencies (component DllA.X references DllB.Y, which references DllA.Z). I haven't yet found a good solution to that that doesn't require global knowledge.
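The "smart pointers also reference the DLL" idea can be sketched language-agnostically. Below is a toy Python model (the `Library` and `ObjectHandle` names are made up for illustration, and the `unloaded` flag stands in for an actual FreeLibrary call): every object handle pins the library, so the unload is delayed until the last object is released, no matter in which order things are destroyed.

```python
class Library:
    """Stands in for a loaded DLL; 'unload' only when the count hits zero."""
    def __init__(self):
        self.refs = 1          # the loader's own reference
        self.unloaded = False

    def add_ref(self):
        self.refs += 1

    def release(self):
        self.refs -= 1
        if self.refs == 0:
            self.unloaded = True   # FreeLibrary would happen here, in the caller

class ObjectHandle:
    """Smart pointer that pins the library for the object's lifetime."""
    def __init__(self, lib):
        lib.add_ref()
        self.lib = lib

    def release(self):
        self.lib.release()

lib = Library()
obj = ObjectHandle(lib)
lib.release()            # caller is done with the DLL handle...
assert not lib.unloaded  # ...but a live object still keeps it loaded
obj.release()
assert lib.unloaded      # last reference gone: now it is safe to unload
```

The key point is that `release()` runs in the caller's code, not inside the library being freed, which avoids the crash-in-FreeLibrary problem described in the question.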
For the case where the DLL is switched at run time, I'd avoid the smart pointer system for objects created by the DLL and use a system like this: ``` |-----------------------| |--------------------------| | Abstraction Interface | | Implementation Interface | |-----------------------| |--------------------------| ^ ^ | | |-------------|1 *|-------------------|* *|----------------| | Application |-------| Abstraction Layer |--------| Implementation | |-------------| |-------------------| |----------------| \------------- Main Program ------------------/ \-------- DLL --------/ ``` The application holds a list of all the allocated abstraction layer objects. The abstraction layer objects are the only objects that are allowed to own pointers to objects created by the implementation layer. When swapping DLLs, first iterate all abstraction layer objects and tell them to release the implementation specific data. Then unload the DLL and load the new DLL. Then iterate the abstraction layer objects again and tell them to create new implementation specific data.
C++: Dll unloading issue
[ "", "c++", "dll", "" ]
Let's say I have a fairly simple app that lets users store information on DVDs they own (title, actors, year, description, etc.) and I want to allow users to search their collection by any of these fields (e.g. "Keanu Reeves" or "The Matrix" would be valid search queries). What's the advantage of going with SQL full text search vs simply splitting the query up by spaces and doing a few "LIKE" clauses in the SQL statement? Does it simply perform better or will it actually return results that are more accurate?
Full text search is likely to be quicker since it will benefit from an index of words that it uses to look up the records, whereas using LIKE is going to need a full table scan. In some cases LIKE will be more accurate, since LIKE "%The%" AND LIKE "%Matrix" will pick out "The Matrix" but not "Matrix Reloaded", whereas full text search will ignore "The" and return both. That said, returning both would likely have been the better result.
Full-text indexes (which are indexes) are much faster than using LIKE (which essentially examines each row every time). However, if you know the database will be small, there may not be a performance need to use full-text indexes. The only way to determine this is with some intelligent averaging and some testing based on that information.

Accuracy is a different question. Full-text indexing allows you to do several things (weighting, automatically matching eat/eats/eating, etc.) that you couldn't possibly implement in any sort of reasonable time-frame using LIKE. The real question is whether you need those features. Without reading the full-text documentation's description of these features, you're really not going to know how you should proceed. So, read up!

Also, some basic tests (insert a bunch of rows in a table, maybe with some sort of public dictionary as a source of words) will go a long way to helping you decide.
SQL full text search vs "LIKE"
[ "", "sql", "database", "full-text-search", "" ]
What's going on?! Subtraction works fine until I get to 0.1 - 0.1. I'm in visual c# 2008 using the nonoba.com API. ``` Console.WriteLine("hit! " + Users[targetNum].character.health + " : " + player.character.profile.attackPower); Users[targetNum].character.health -= player.character.profile.attackPower; Console.WriteLine("health! " + Users[targetNum].character.health); ``` output: ``` hit! 0.1 : 0.1 health! 1.490116E-08 ``` Thanks all - I might use the decimal type, as I'm normally adding/subtracting nice "round" numbers. For now I'll just go with: ``` if (Users[targetNum].character.health <= 0.00001) ``` By the way I knew this wasn't really going to be a "bug" in c# - I thought it would either by a bug in my code or some lack of understanding, which it was. Having read all the recommended reading, I'm going to conclude that my folly was due to normally using the ActionScript Number type, which maybe has a decimal rather than binary floating point - anyway, it would never give this output.
That seems pretty normal for floating point math... you always have to check against a small delta to account for imperceptible rounding differences. Depending on the scenario, `decimal` might be what you want. Basically, unless you can be sure that it is **exactly** the same 0.1 in both cases (i.e. nothing has been done to them), you aren't likely to get zero; in general you'll get something very nearly zero. With `decimal` you'll usually get more what you expect intuitively. See also Jon Skeet's pages here: * [Binary floating point](http://csharpindepth.com/Articles/General/FloatingPoint.aspx) * [Decimal floating point](http://csharpindepth.com/Articles/General/Decimal.aspx)
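The same behaviour is easy to reproduce outside C#. Here is a quick Python sketch (Python's `float` is an IEEE 754 double, much like C#'s `double`) showing the residue and both workarounds mentioned above: comparing against a small delta, and using a decimal type.

```python
from decimal import Decimal

# 0.1 and 0.3 have no exact binary floating point representation,
# so repeated subtraction leaves a tiny residue instead of 0.0.
residue = 0.3 - 0.1 - 0.1 - 0.1
assert residue != 0.0             # the "obvious" zero never appears
assert abs(residue) < 1e-9        # workaround 1: compare against a small delta

# Workaround 2: a decimal type gives the intuitive answer.
exact = Decimal("0.3") - Decimal("0.1") - Decimal("0.1") - Decimal("0.1")
assert exact == Decimal("0")
```

The delta (`1e-9` here) should be chosen to suit the magnitude of the values in your own code.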
You obviously need to read ["What Every Computer Scientist Should Know About Floating Point Numbers"](http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html). Instead of thinking that I've found a bug in situations like this, I usually assume that one of my assumptions needs checking first.
C# float bug? 0.1 - 0.1 = 1.490116E-08
[ "", "c#", "floating-point", "" ]
I have the following javascript: ``` css = document.createElement('style'); css.setAttribute('type', 'text/css'); css_data = document.createTextNode(''); css.appendChild(css_data); document.getElementsByTagName("head")[0].appendChild(css); ``` for some reason, in IE only, it chokes on "css.appendChild(css\_data);" Giving the error: "Unexpected call to method or property access" What's going on?
Try instead: ``` var css = document.createElement('style'); css.setAttribute('type', 'text/css'); var cssText = ''; if(css.styleSheet) { // IE does it this way css.styleSheet.cssText = cssText } else { // everyone else does it this way css.appendChild(document.createTextNode(cssText)); } document.getElementsByTagName("head")[0].appendChild(css); ```
[@crescentfresh](https://stackoverflow.com/questions/436710/element-appendchild-chokes-in-ie#436750) I tried your suggestion, and the content of the style block simply never gets populated. Tried in IE6 and IE7... it just doesn't seem to do *anything*. Here's my modified code: ``` function load_content() { var d = new Date(); css = document.createElement('style'); css.setAttribute('type', 'text/css'); if(css.styleSheet) { css.styleSheet.cssText = 'testing'; } //Because IE is evil else { css_data = document.createTextNode(''); css.appendChild(css_data); } //And everyone else is cool document.getElementsByTagName("head")[0].appendChild(css); new Ajax.PeriodicalUpdater('content', '/%doc_path%?'+d.getTime(), { method: 'post', frequency: 5, onSuccess: function(transport) { new Ajax.Request('/%css_path%?'+d.getTime(), { method: 'get', onSuccess: function(transport) { if(css.styleSheet) { css.styleSheet.cssText = transport.responseText; } else { var new_css_data = document.createTextNode(transport.responseText); css.replaceChild(new_css_data, css_data); css_data = new_css_data; } } }); } }); } ``` Any ideas?
Element.appendChild() chokes in IE
[ "", "javascript", "internet-explorer", "" ]
I have a server that listens for a connection on a socket: ``` public class Server { private Socket _serverSocket; public Server() { _serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); _serverSocket.Bind(new IPEndPoint(IPAddress.Any, 1234)); _serverSocket.Listen(1); } public void Start() { _serverSocket.BeginAccept(HandleAsyncConnectionMethod, null); } public void Stop() { //????? MAGIC ????? } //... rest of code here } ``` What is the correct (clean) way to close down the socket? Is it sufficient to call: ``` _serverSocket.Disconnect(true); ``` in the Stop() method? or is there other work that needs to happen to close the connection cleanly?
TCP connection termination correctly involves a four-way handshake. You want both ends to inform the other that they're shutting down and then acknowledge each other's shutdown. Wikipedia explains the process: <http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Connection_termination> This post explains how to make it happen in C#: <http://vadmyst.blogspot.com/2008/04/proper-way-to-close-tcp-socket.html>
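The idea is the same in any language: signal shutdown to the peer before closing the handle. Below is a minimal Python sketch (loopback only, no error handling) in which the server's `shutdown()` plays the role of .NET's `Socket.Shutdown(SocketShutdown.Both)`, and the client observes the orderly close as a zero-byte read.

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)

received = []

def serve_once():
    conn, _ = server.accept()
    received.append(conn.recv(1024))
    conn.shutdown(socket.SHUT_RDWR)  # send FIN: start TCP's orderly termination
    conn.close()                     # then release the handle

t = threading.Thread(target=serve_once)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
client.sendall(b"ping")
eof = client.recv(1024)              # b"" once the server's FIN arrives
client.close()
t.join()
server.close()
```

Calling `close()` alone, without the preceding `shutdown()`, is the rough equivalent of dropping the connection rather than terminating it cleanly.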
Since you are listening for incoming TCP connections, you could use [System.Net.Sockets.TcpListener](http://msdn.microsoft.com/en-us/library/system.net.sockets.tcplistener.aspx) which does have a Stop() method. It does not have asynchronous operations though.
Proper way to stop listening on a Socket
[ "", "c#", "sockets", "" ]
I'm just coming up the learning curve for Java SE & have no problem with the usual Java convention for package names, e.g. `com.example.library_name_here.package_name_here` Except. I've been noticing a failure to abide by this in some fairly well-known packages. * [JLine](http://jline.sourceforge.net/): `jline.*` * [JACOB](https://sourceforge.net/projects/jacob-project/): `com.jacob.*` (there is no jacob.com) * [JNA](https://github.com/twall/jna/): `com.sun.jna.*` (disclaimer on the site says NOTE: Sun is not sponsoring this project, even though the package name (com.sun.jna) might imply otherwise.) So I'm wondering, are there instances where the usual reverse-domain-name convention breaks down, and there are good ways to get around it? The only cases I can think of revolve around domain-name ownership issues (e.g. you change the project hosting/domain name, or there's already a well-known package that has "squatter's rights" to your domain, or your ownership of the domain runs out & someone else snaps it up). edit: if I use my company's domain name, and we are bought out or have a spin-off, what should we do with package names? keep them the same or rename? (I suppose renaming is bad from the point of view that compiled classes referring to the package then lose)
Packages are used to avoid ambiguity and collisions between components built by various entities. As long as you follow the convention, and nobody illicitly uses your slice of the package namespace pie, you shouldn't need to worry about what others have used.
It's a naming convention. There's no real requirement or even expectation that the package name maps to a domain name.
java package name convention failure
[ "", "java", "naming-conventions", "" ]
I have a table of vehicles with registration numbers, and want to select a subset of them that are between some user-supplied 'from' and 'to' values. So lets say the table looks like this: ``` id reg_num 1 DD1111 2 DD1112 3 DE2245 4 EE5678 5 EF6547 ``` The SQL I have so far looks like this: ``` select * from vehicles where reg_num >= 'DD' -- this value is user supplied and reg_num <= 'DE' -- and so is this one ``` Which should (by my thinking) return: ``` 1 DD1111 2 DD1112 3 DE2245 ``` But instead, only returns: ``` 1 DD1111 2 DD1112 ``` I imagine that SQL server sees 'DE2245' as greater than 'DE', and so excludes the row. My question: How do I get SQL server to include all rows that start with 'DE'?
```
where reg_num >= @userValueFrom
  and left(reg_num, char_length(@userValueTo)) <= @userValueTo
```

but please note that this WHERE clause cannot make use of any index, because of the function applied to the column in the SARG.
You have to append 'Z's at the end, as many as necessary to match your column width definition.

```
select *
from vehicles
where reg_num >= 'DD'
and reg_num <= 'DE' + 'ZZZZZZZZZZZZ'
```
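Both answers boil down to plain string ordering. A quick Python check (using the sample registrations from the question; Python compares code points, which is close enough to a binary SQL collation for this illustration) shows why the naive range drops 'DE2245' and how the padding and truncation variants behave:

```python
regs = ["DD1111", "DD1112", "DE2245", "EE5678", "EF6547"]

# Naive range: 'DE2245' > 'DE' in string order, so it is lost.
naive = [r for r in regs if "DD" <= r <= "DE"]

# Pad the upper bound with high sentinel characters.
padded = [r for r in regs if "DD" <= r <= "DE" + "Z" * 10]

# Or truncate the column to the length of the user's value instead.
prefix = [r for r in regs if "DD" <= r[:2] <= "DE"]
```

The truncation variant corresponds to the `left(reg_num, ...)` answer, with the caveat noted there about losing index use.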
How do you select using a range of strings in SQL?
[ "", "sql", "sql-server", "sql-server-2005", "" ]
Let's say I have a two-column table like this: ``` userid | roleid --------|-------- 1 | 1 1 | 2 1 | 3 2 | 1 ``` I want to get all distinct userids that have `roleids` 1, 2 AND 3. Using the above example, the only result I want returned is `userid` 1. How do I do this?
``` SELECT userid FROM UserRole WHERE roleid IN (1, 2, 3) GROUP BY userid HAVING COUNT(DISTINCT roleid) = 3; ``` --- Just thinking out loud, another way to write the self-join [described by cletus](https://stackoverflow.com/questions/477006/select-values-that-meet-different-conditions-on-different-rows/477013#477013) is: ``` SELECT t1.userid FROM userrole t1 JOIN userrole t2 ON t1.userid = t2.userid JOIN userrole t3 ON t2.userid = t3.userid WHERE (t1.roleid, t2.roleid, t3.roleid) = (1, 2, 3); ``` This might be easier to read for you, and MySQL supports comparisons of tuples like that. MySQL also knows how to use covering indexes intelligently for this query. Just run it through `EXPLAIN` and see "Using index" in the notes for all three tables, which means it's reading the index and doesn't even have to touch the data rows. I ran this query over 2.1 million rows (the Stack Overflow July data dump for *PostTags*) using MySQL 5.1.48 on my MacBook, and it returned the result in 1.08 seconds. On a decent server with enough memory allocated to `innodb_buffer_pool_size`, it should be even faster. To anyone reading this: my answer is simple and straightforward, and got the 'accepted' status, but please do go read [the answer given by cletus](https://stackoverflow.com/questions/477006/sql-statement-join-vs-group-by-and-having/477013#477013). It has much better performance.
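The query is easy to verify end-to-end. Here is a sketch using Python's stdlib `sqlite3` module (SQLite standing in for MySQL; the data is the table from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE userrole (userid INTEGER, roleid INTEGER)")
con.executemany("INSERT INTO userrole VALUES (?, ?)",
                [(1, 1), (1, 2), (1, 3), (2, 1)])

# Users holding ALL of roles 1, 2 and 3: group per user and require
# three distinct matching roles.
rows = con.execute("""
    SELECT userid
    FROM userrole
    WHERE roleid IN (1, 2, 3)
    GROUP BY userid
    HAVING COUNT(DISTINCT roleid) = 3
""").fetchall()
# rows == [(1,)] -- only user 1 holds all three roles
```

The `DISTINCT` matters if (userid, roleid) is not constrained to be unique; with a composite primary key, `COUNT(*)` would do.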
Ok, I got downvoted on this, so I decided to test it: ``` CREATE TABLE userrole ( userid INT, roleid INT, PRIMARY KEY (userid, roleid) ); CREATE INDEX idx_userrole_roleid ON userrole (roleid); ``` Run this: ``` <?php ini_set('max_execution_time', 120); // takes over a minute to insert 500k+ records $start = microtime(true); echo "<pre>\n"; mysql_connect('localhost', 'scratch', 'scratch'); if (mysql_error()) { echo "Connect error: " . mysql_error() . "\n"; } mysql_select_db('scratch'); if (mysql_error()) { echo "Select DB error: " . mysql_error() . "\n"; } $users = 200000; $count = 0; for ($i=1; $i<=$users; $i++) { $roles = rand(1, 4); $available = range(1, 5); for ($j=0; $j<$roles; $j++) { $extract = array_splice($available, rand(0, sizeof($available)-1), 1); $id = $extract[0]; query("INSERT INTO userrole (userid, roleid) VALUES ($i, $id)"); $count++; } } $stop = microtime(true); $duration = $stop - $start; $insert = $duration / $count; echo "$count users added.\n"; echo "Program ran for $duration seconds.\n"; echo "Insert time $insert seconds.\n"; echo "</pre>\n"; function query($str) { mysql_query($str); if (mysql_error()) { echo "$str: " . mysql_error() . "\n"; } } ?> ``` Output: ``` 499872 users added. Program ran for 56.5513510704 seconds. Insert time 0.000113131663847 seconds. ``` That adds 500,000 random user-role combinations and there are approximately 25,000 that match the chosen criteria. First query: ``` SELECT userid FROM userrole WHERE roleid IN (1, 2, 3) GROUP BY userid HAVING COUNT(1) = 3 ``` Query time: 0.312s The join query: ``` SELECT t1.userid FROM userrole t1 JOIN userrole t2 ON t1.userid = t2.userid AND t2.roleid = 2 JOIN userrole t3 ON t2.userid = t3.userid AND t3.roleid = 3 AND t1.roleid = 1 ``` Query time: 0.016s That's right. The join version I proposed is **twenty times faster than the aggregate version.** Sorry but I do this for a living and work in the real world and in the real world we test SQL and the results speak for themselves.
The reason for this should be pretty clear. The aggregate query will scale in cost with the size of the table. Every row is processed, aggregated and filtered (or not) through the `HAVING` clause. The join version will (using an index) select a subset of the users based on a given role, then check that subset against the second role and finally that subset against the third role. Each [selection](http://en.wikipedia.org/wiki/Selection_(relational_algebra)) (in [relational algebra](http://en.wikipedia.org/wiki/Relational_algebra) terms) works on an increasingly small subset. From this you can conclude: **The performance of the join version gets even better with a lower incidence of matches.** If there were only 500 users (out of the 500k sample above) that had the three stated roles, the join version will get significantly faster. The aggregate version will not (and any performance improvement is a result of transporting 500 users instead of 25k, which the join version obviously gets too). I was also curious to see how a real database (ie Oracle) would deal with this. So I basically repeated the same exercise on Oracle XE (running on the same Windows XP desktop machine as the MySQL from the previous example) and the results are almost identical. Joins seem to be frowned upon but as I've demonstrated, aggregate queries can be an order of magnitude slower. **Update:** After some extensive testing, the picture is more complicated and the answer will depend on your data, your database and other factors. The moral of the story is test, test, test.
Select values that meet different conditions on different rows
[ "", "mysql", "sql", "relational-division", "sql-match-all", "" ]
Is it possible to create/have a non-modal .net OpenFileDialog I have a UI element in the main dialog which always need to be available for the user to press.
No, [OpenFileDialog](http://msdn.microsoft.com/en-us/library/system.windows.forms.openfiledialog.aspx) and [SaveFileDialog](http://msdn.microsoft.com/en-us/library/system.windows.forms.savefiledialog.aspx) are both derived from [FileDialog](http://msdn.microsoft.com/en-us/library/system.windows.forms.filedialog.aspx), which is inherently modal, so (as far as I know) there's no way of creating a non-modal version of either of them.
You can create a thread and have the thread host the OpenFileDialog. Example code is lacking any kind of synchronization but it works. ``` public partial class Form1 : Form { OFDThread ofdThread; public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { ofdThread = new OFDThread(); ofdThread.Show(); } } public class OFDThread { private Thread t; private DialogResult result; public OFDThread() { t = new Thread(new ParameterizedThreadStart(ShowOFD)); t.SetApartmentState(ApartmentState.STA); } public DialogResult DialogResult { get { return this.result; } } public void Show() { t.Start(this); } private void ShowOFD(object o) { OpenFileDialog ofd = new OpenFileDialog(); result = ofd.ShowDialog(); } } ``` With this code you could add something to fire an event in your UI thread (be careful with invoking!) to know when they're done. You can access the result of the dialog by ``` DialogResult a = ofdThread.DialogResult ``` from your UI thread.
C# OpenFileDialog Non-Modal possible
[ "", "c#", "modal-dialog", "openfiledialog", "" ]
I recently started working at a company with an enormous "enterprisey" application. At my last job, I designed the database, but here we have a whole Database Architecture department that I'm not part of. One of the stranger things in their database is that they have a bunch of views which, instead of having the user provide the date ranges they want to see, join with a (global temporary) table "TMP\_PARM\_RANG" with a start and end date. Every time the main app starts processing a request, the first thing it does it "`DELETE FROM TMP_PARM_RANG`;" then an insert into it. This seems like a bizarre way of doing things, and not very safe, but everybody else here seems ok with it. Is this normal, or is my uneasiness valid? **Update** I should mention that they use transactions and per-client locks, so it is guarded against most concurrency problems. Also, there are literally dozens if not hundreds of views that all depend on `TMP_PARM_RANG`.
Do I understand this correctly? There is a view like this: ``` SELECT * FROM some_table, tmp_parm_rang WHERE some_table.date_column BETWEEN tmp_parm_rang.start_date AND tmp_parm_rang.end_date; ``` Then in some frontend a user inputs a date range, and the application does the following: 1. Deletes all existing rows from TMP\_PARM\_RANG 2. Inserts a new row into TMP\_PARM\_RANG with the user's values 3. Selects all rows from the view I wonder if the changes to TMP\_PARM\_RANG are committed or rolled back, and if so when? Is it a temporary table or a normal table? Basically, depending on the answers to these questions, the process may not be safe for multiple users to execute in parallel. One hopes that if this were the case they would have already discovered that and addressed it, but who knows? Even if it is done in a thread-safe way, making changes to the database for simple query operations doesn't make a lot of sense. These DELETEs and INSERTs are generating redo/undo (or whatever the equivalent is in a non-Oracle database) which is completely unnecessary. A simple and more normal way of accomplishing the same goal would be to execute this query, binding the user's inputs to the query parameters: ``` SELECT * FROM some_table WHERE some_table.date_column BETWEEN ? AND ?; ```
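The parameter-binding suggestion at the end can be demonstrated with Python's stdlib `sqlite3` (the table name and dates are made up for the example): the user's range arrives as bind parameters and nothing is deleted from or inserted into the database just to run a query.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE some_table (id INTEGER, date_column TEXT)")
con.executemany("INSERT INTO some_table VALUES (?, ?)",
                [(1, "2009-01-05"), (2, "2009-02-10"), (3, "2009-03-15")])

# The user's inputs are bound directly to the query parameters:
# no DELETE/INSERT into a TMP_PARM_RANG-style table, no redo/undo generated.
start, end = "2009-01-01", "2009-02-28"
rows = con.execute(
    "SELECT id FROM some_table WHERE date_column BETWEEN ? AND ? ORDER BY id",
    (start, end)).fetchall()
# rows == [(1,), (2,)]
```

The same shape works with any database driver that supports placeholders; only the placeholder syntax (`?`, `:name`, `%s`) differs.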
If the database is oracle, it's possibly a global temporary table; every session sees its own version of the table and inserts/deletes won't affect other users.
Date ranges in views - is this normal?
[ "", "sql", "view", "enterprise", "" ]
I have a web application running on port :80, and I have an Axis web service that is part of that web application. As such, the service is running on port :80 as well. However, for security reasons our client has asked us to change the web service port to 8080 so that they can allow access only to that port for remote consumers of the web service. Therefore they won't have access to the regular web application, but have access to the service. Is this possible to do without refactoring the app and taking out the web service in a separate web app?
As I've said in my comment, our web application is hosted on Oracle AS 10g with an Oracle Web Cache server sitting in front of it. Oracle Web Cache is based on Apache httpd, so it has virtual host support and URL rewriting (although the features don't go by those names there). I've managed to solve the problem by: * configuring the web cache to listen on port 8080 (virtual host) * rewriting all requests of the form <http://host:8080/service/>\* to <http://host/service/>\*, and dropping all other URL patterns. It works like a charm. As for Axis itself, I didn't find a way to configure it to listen on another port by itself. I guess it was unreasonable to expect Axis to provide this functionality, as it is only a servlet hosted in a servlet container, and it's the container's job to provide the connector/transport layer. Anyway... thanks to all who offered their help, I appreciate it.
The short answer probably is: yes, it is possible. Axis web services are mostly decoupled enough from the main application that it should be easy to get them running in a separate Java web server instance listening only on port 8080, in case it's not possible to configure the web server you are running to also listen on port 8080 and serve only the web service on that port.
Changing port on which an Axis web service is listening to
[ "", "java", "http", "binding", "axis", "port", "" ]
We have a query that is taking around 5 sec on our production system, but on our mirror system (as identical as possible to production) and dev systems it takes under 1 second. We have checked out the query plans and we can see that they differ. Also from these plans we can see why one is taking longer than the other. The data, schema and servers are similar and the stored procedures identical. We know how to fix it by re-arranging the joins and adding hints. However, at the moment it would be easier if we didn't have to make any changes to the SProc (paperwork). We have also tried an sp\_recompile. What could cause the difference between the two query plans? System: SQL 2005 SP2 Enterprise on Win2k3 Enterprise Update: Thanks for your responses, it turns out that it was statistics. See summary below.
Your statistics are most likely out of date. If your data is the same, recompute the statistics on both servers and recompile. You should then see identical query plans. Also, double-check that your indexes are identical.
Is the data and data size between your mirror and production as close to the same as possible? Do you know why one query is taking longer than the other? Can you post some more details? Execution plans can differ in such cases because of the data in the tables and/or the statistics. Even in cases where auto update statistics is turned on, the statistics can get out of date (especially in very large tables). You may find that the optimizer has estimated a table is not that large and opted for a full table scan, or something like that.
Different Execution Plan for the same Stored Procedure
[ "", "sql", "sql-server", "sql-server-2005", "stored-procedures", "sql-execution-plan", "" ]
I found the [CollectionUtils](http://commons.apache.org/collections/apidocs/org/apache/commons/collections/CollectionUtils.html) class a year or so ago and a few of the methods like, collect, and transform seem really cool, however, I have yet to find a use where it would not by syntactically cleaner and\or easier to just write the logic with a simple loop. Has any found a unique\useful use for these methods (transform, predicatedCollection, collect, etc) e.g. the methods that take a transformer or predicate as an argument?
I think the key issue is designing/coding for flexibility. If you have a single use case (e.g. selecting the members of a collection that satisfy some specific condition), then coding a relatively simple loop by hand works. On the other hand... Suppose that the set of possible conditions grew large, or could even be composed on-the-fly at run-time (even dynamically, based on user input/interaction). Or suppose that there were a few very complex conditions which could be composed with operators (e.g. A and B, C and not D, etc.) into even more cases. Suppose that, having made the selection, there was some other processing that was to be done on the resulting collection. Now consider the structure of the code that might result from a brute-force, in-line approach to writing the above: an outer loop containing a complex decision process to determine which test(s) to perform, mixed together with code that does one or more things with the "surviving" members of the collection. Such code tends to be (and especially to become over time with maintenance) difficult to understand and difficult to modify without the risk of introducing defects. So the point is to pursue a strategy in which each aspect: * basic "select something" process, * predicates that express elementary criteria, * combining operators that compose predicates, and * transformers that operate on values, can be coded and tested independently, then snapped together as needed.
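The "coded and tested independently, then snapped together" point maps to a few lines in any language with first-class functions. Here is a Python sketch (the function names are invented here, loosely mirroring CollectionUtils' `select` and its And/Not predicates):

```python
def select(items, predicate):
    """Like CollectionUtils.select: keep the members satisfying a condition."""
    return [x for x in items if predicate(x)]

def both(p, q):
    """Like AndPredicate: compose two elementary criteria."""
    return lambda x: p(x) and q(x)

def negate(p):
    """Like NotPredicate."""
    return lambda x: not p(x)

# Elementary criteria, each testable on its own...
is_even = lambda x: x % 2 == 0
over_10 = lambda x: x > 10

# ...then snapped together, possibly at run-time based on user input.
chosen = select(range(20), both(is_even, negate(over_10)))
# chosen == [0, 2, 4, 6, 8, 10]
```

Each predicate and combinator is independently testable, which is exactly the flexibility argument made above: the selection loop never has to change as the set of conditions grows.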
collect() is useful when you have possible alternative representations of your objects. For example, recently I was dealing with a piece of code that needed to match lists of objects from two different sources. These objects were of different classes as they were used at different points in the code, but for my purposes had the same relevant concepts (i.e. they both had an ID, both had a property path, both had a "cascade" flag etc.). I found that it was much easier to define a simple intermediate representation of these properties (as an inner class), define transformers for both concrete object classes (again very simple as it's just using relevant accessor methods to get the properties out), and then use `collect()` to convert my incoming objects into the intermediate representation. Once they're there, I can use standard Collections methods to compare and manipulate the two as sets. So as a (semi-)concrete example, let's say I need a method to check that the set of objects in the presentation layer is a subset of the objects cached in the data layer. 
With the approach outlined above this would be done something like this: ``` public boolean isColumnSubset(PresSpec pres, CachedDataSpec dataSpec) { final List<IntermediateRepresentation> presObjects = CollectionUtils.collect(pres.getObjects(), PRES_TRANSFORMER); final List<IntermediateRepresentation> dataObjects = CollectionUtils.collect(dataSpec.getCached(), DATA_TRANSFORMER); return dataObjects.containsAll(presObjects); } ``` To me this is much more readable, with the last line conveying a real sense of what the method is *doing*, than the equivalent with loops: ``` public boolean isColumnSubset(PresSpec pres, CachedDataSpec dataSpec) { for (PresSpecificObject presObj : pres.getObjects()) { boolean matched = false; for (CachedDataObject dataObj : dataSpec.getCached()) { if (areObjectsEquivalent(presObj, dataObj)) // or do the tests inline but a method is cleaner { matched = true; break; } } if (matched == false) { return false; } } // Every column must have matched return true; } ``` The two are probably about as efficient, but in terms of readability I'd say that the first one is much easier to immediately understand. Even though it comes in at more lines of code overall (due to defining an inner class and two transformers), the separation of the traversal implementation from the actual "true or false" logic makes the latter much clearer. Plus if you have any KLOC metrics it can't be bad either. ;-)
Good uses for Apache CollectionUtils
[ "", "java", "apache", "oop", "collections", "" ]
I am comparing text files in junit using:

```
public static void assertReaders(BufferedReader expected, BufferedReader actual) throws IOException
{
    String line;
    while ((line = expected.readLine()) != null)
    {
        assertEquals(line, actual.readLine());
    }

    assertNull("Actual had more lines than the expected.", actual.readLine());
    assertNull("Expected had more lines than the actual.", expected.readLine());
}
```

Is this a good way to compare text files? What is preferred?
[junit-addons](http://sourceforge.net/projects/junit-addons) has nice support for it: [FileAssert](http://junit-addons.sourceforge.net/junitx/framework/FileAssert.html) It gives you exceptions like: ``` junitx.framework.ComparisonFailure: aa Line [3] expected: [b] but was:[a] ```
Here's one simple approach for checking if the files are *exactly* the same:

```
assertEquals("The files differ!",
    FileUtils.readFileToString(file1, "utf-8"),
    FileUtils.readFileToString(file2, "utf-8"));
```

Where `file1` and `file2` are `File` instances, and [`FileUtils`](http://commons.apache.org/io/api-1.4/org/apache/commons/io/FileUtils.html) is from [Apache Commons IO](http://commons.apache.org/io/). Not much own code for you to maintain, which is always a plus. :) And very easy if you already happen to use Apache Commons in your project. But no nice, detailed error messages like in [mark's solution](https://stackoverflow.com/questions/466841/comparing-text-files-w-junit#466872).

**Edit**: Heh, looking closer at the `FileUtils` API, there's an even [simpler way](http://commons.apache.org/io/api-1.4/org/apache/commons/io/FileUtils.html#contentEquals%28java.io.File,%20java.io.File%29):

```
assertTrue("The files differ!", FileUtils.contentEquals(file1, file2));
```

As a bonus, this version works for all files, not just text.
Comparing text files with Junit
[ "", "java", "junit", "compare", "text-files", "" ]
I have two `List<int>`s which I want to check for corresponding numbers. For example:

```
List<int> a = new List<int>(){1, 2, 3, 4, 5};
List<int> b = new List<int>() {0, 4, 8, 12};
```

Should give the result 4. Is there an easy way to do this without too much looping through the lists?

I'm on 3.0 for the project where I need this so no Linq.
You can use the .NET 3.5 `.Intersect()` extension method:

```
List<int> a = new List<int>() { 1, 2, 3, 4, 5 };
List<int> b = new List<int>() { 0, 4, 8, 12 };

List<int> common = a.Intersect(b).ToList();
```
Jeff Richter's excellent PowerCollections has Set with Intersections. Works all the way back to .NET 2.0. <http://www.codeplex.com/PowerCollections>

```
Set<int> set1 = new Set<int>(new[]{1,2,3,4,5});
Set<int> set2 = new Set<int>(new[]{0,4,8,12});

Set<int> set3 = set1.Intersection(set2);
```
Check two List<int>'s for the same numbers
[ "", "c#", ".net", "generics", "list", "" ]
Is there a common way to express the usage of arguments in C++? I want to implicitly tell the consumers of my class how the arguments they pass will be used by the class. Examples:

1. I own your argument (will clean it up)
2. I will hold a reference to your argument during my lifetime (so you should NOT delete it while I'm still alive)
3. I will use your argument only during construction and won't hold a reference

Is there a common way to express these things simply using the method declaration? I think in the first case a std::auto\_ptr would make sense. In the second case I usually take a pointer to avoid someone passing a value from the stack which would invalidate my reference quickly, or alternatively a shared\_ptr. In the third case I take a reference to allow values from the stack.

How do you deal with this? Also, is it necessary to rely on smart pointers here, or can one express such things simply by using naked references and pointers somehow?
Our team has similar coding conventions to the ones you suggest:

1. auto\_ptr argument means that the class will take control of memory management for the object. (We don't use this much.)
2. shared\_ptr means that the class will probably use the argument for an extended period of time, and in particular may store off its own shared\_ptr to the object.
3. Plain reference means that the argument will only be used for the duration of the call.

We treat this as a coding standard. It isn't something we document for each and every call.
I don't know if there is a common idiom, but I do know that the one sure-fire way that I provide this information is comments in the interface header. The users of the class won't always read them, won't always remember them, and they'll screw them up quicker than you can blink, but the information WILL be there. Now, that being said, I've also taken to being against keeping references to something that some other piece of the system owns. It's not always practical (or sensible) to restructure so that your class owns everything (or doesn't keep references after a method call returns), but it's the safest way to do things, and either one is easier for the caller to understand. The big problem with retaining references is that your callers will never remember that they aren't allowed to destroy these things, and you'll eventually end up with 'use of deleted object' type failures.
Express the usage of C++ arguments through method interfaces
[ "", "c++", "interface", "arguments", "" ]
I'm using the data mapper pattern in a PHP app I'm developing and have a question. At present, you request a Site object with a specific ID and the mapper will look up the row, create an object and return it. However, if you do this again for the same Site you end up with two different objects with identical data. e.g.:

```
$mapper = new Site_Mapper();
$a = $mapper->get(1);
$b = $mapper->get(1);

$a == $b  // true
$a === $b // false
```

So, my question is, should I:

1. Store instantiated Site objects in the mapper so I can then check if they already exist before creating a new one (could be a problem if there's multiple mappers of the same type)
2. Do the same as #1 but ensure there is only ever one instance of each mapper
3. Do the same as #1 but use a static property so multiple instances isn't a problem
4. Don't worry about it because it's *probably* not a problem
I'd go with caching somehow - static mapper classes would be my first choice, and is what I've seen most of. Otherwise, your option 2 (which is the singleton pattern) is probably the best option. Remember you need to clear this cache when an update is made to avoid returning stale data. Having said that, unless you are making something to get a lot of use or that does a lot of queries, it may not matter. (your 4) Also worth looking at for guidance (I'm sure there are many examples, I just know this one best), Propel (<http://propel.phpdb.org/>) has the caching feature - might be worth looking at how it does it? Or just use it maybe?
What you're looking for is the **Identity Map** pattern. Be careful with so called "reading inconsistencies", though. While you use an "old instance", the DB might have been changed already. And while you edit your object, another user might get an instance of it, change it faster and save it faster. Then the other object overrides all these changes again. On the web though maybe not such a big problem since a "page" quickly runs through and no object survives for longer than a few fractional seconds.
Data Mapper pattern and duplicate objects
[ "", "php", "datamapper", "" ]
How can I store a hash table with separate chaining in a file on disk? Generating the data stored in the hash table at runtime is expensive, it would be faster to just load the HT from disk...if only I can figure out how to do it. Edit: The lookups are done with the HT loaded in memory. I need to find a way to store the hashtable (in memory) to a file in some binary format. So that next time when the program runs it can just load the HT off disk into RAM. I am using C++.
What language are you using? The common method is to do some sort of binary serialization.

Ok, I see you have edited to add the language. For C++ there are a few options. I believe the Boost serialization mechanism is pretty good. In addition, the page for Boost's serialization library also describes alternatives. Here is the link: <http://www.boost.org/doc/libs/1_37_0/libs/serialization/doc/index.html>
# Ditch the pointers for indices.

This is a bit similar to constructing an on-disk [DAWG](https://en.wikipedia.org/wiki/Deterministic_acyclic_finite_state_automaton), which I did a while back. What made that so very sweet was that it could be loaded directly with mmap instead of reading the file. If the hash-space is manageable, say 2^16 or 2^24 entries, then I think I would do something like this:

* Keep a list of free indices. (if the table is empty, each chain-index would point at the next index.)
* When chaining is needed use the free space in the table.
* If you need to put something in an index that's occupied by a squatter (overflow from elsewhere):
  * record the index (let's call it N)
  * swap the new element and the squatter
  * put the squatter in a new free index, (F).
  * follow the chain on the squatter's hash index, to replace N with F.
* If you completely run out of free indices, you probably need a bigger table, but you can cope a little longer by using mremap to create extra room after the table.

This should allow you to mmap and use the table directly, without modification. (scary fast if in the OS cache!) but you have to work with indices instead of pointers. It's pretty spooky to have megabytes available in syscall-round-trip-time, and still have it take up less than that in physical memory, because of paging.
How to store a hash table in a file?
[ "", "c++", "algorithm", "serialization", "data-structures", "hashtable", "" ]
Any good libraries for *quaternion* calculations in C/C++ ? Side note: any good tutorials/examples? I've google it and been to the first few pages but maybe you have have some demos/labs from compsci or math courses you could/would share? Thanks
I'm a fan of the Irrlicht quaternion class. It is zlib licensed and is fairly easy to extract from Irrlicht: * [Irrlicht Quaternion Documentation](http://irrlicht.sourceforge.net/docu/classirr_1_1core_1_1quaternion.html) * [quaternion.h](https://irrlicht.sourceforge.io/docu/quaternion_8h_source.html)
You could try Boost - usually a good place to start. They have a [dedicated sublibrary](http://www.boost.org/doc/libs/1_37_0/libs/math/doc/quaternion/html/index.html) for that. As for examples, look at the documentation and the unit tests that come along with Boost.
quaternion libraries in C/C++
[ "", "c++", "math", "quaternions", "" ]
After trying to understand why client code is not rendered in a page (injected by a user control) I found this [link](http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=105002), it turns out you must have a form tag for it to work ([Page.RegisterClientScriptBlock](http://msdn.microsoft.com/en-us/library/system.web.ui.page.registerclientscriptblock.aspx) did declare this but [ClientScriptManager.RegisterClientScriptBlock](http://msdn.microsoft.com/en-us/library/bahh2fef(VS.80).aspx) which I use does not say anything regarding this). I am using Visual Studio 2005. Does anyone know if this has been solved?

**Edit**: To clarify, I want my control to add javascript code to the head section of the page without having to use the

```
<form runat="server"
```

I have tried adding it using:

```
HtmlGenericControl x = new HtmlGenericControl("script");
x.InnerText = "alert('123');";
Page.Header.Controls.Add(x);
```

But this did not work for me.
Got it! My mistake was not doing it in the **OnPreRender** method (I used the **Render** method). Now all that is needed is - like Mitchel Sellers wrote - to set the header to runat server and then add to its controls:

```
HtmlGenericControl x = new HtmlGenericControl("script");
x.InnerText = GetScriptSection();
Page.Header.Controls.Add(x);
```

Thanks for pointing me in the right direction!
As far as I know this functions the same in current versions, you can test it very simply though. **Update** per discussion in the comments, the only "workaround" that I could think of would be for your to manually insert the script into the "head" section of the page on your own, using a runat="server" declaration on the Head element.
RegisterClientScriptBlock without form tag
[ "", "asp.net", "javascript", "registerclientscriptblock", "" ]
I will be working on a project that tries to determine your position using the WiFi signal strength from a few access points. I was wondering if anyone knew of any similar projects or any articles on that topic.

In case anyone cares: it's a research project at my university. The app is used as a playing ground to develop new WiFi antennas that are better suited for this type of usage. I only work on the coding part of the project though. Oh, and it's written using C#, which is not optional.

---

Clarification: it's ONLY WiFi. No GPS goodness for us, because it's supposed to work indoors. The software is supposed to determine your location by using the known locations of the access points and their signal strengths to tell you where you are. It's currently at around 4-5 meters of accuracy.

Aside from that, I already have a working prototype and was just wondering if anything similar has been done before, or if anyone has any tips or ideas for/about the project.
Discarding the first 2 answers, where they need to use GPS and A-GPS in the first and a known WiFi network in the second, my answer is: it sounds easy, but you need to do some homework first, a survey.

* You will need to measure up and create an oval shape (on paper) with points and percentages of all WiFi routers on the campus.
* When, let's imagine, you have compiled the information for 2 routers, you are ready to go.
* Get the current WiFi points and signal strengths from the user's laptop/device and query the database using those values.
* Give the user their current location.

Example:

* In the campus bar you measured that, to be in that place, you need around 55% strength of the signal provided by WiFi Router 1 and 25% of WiFi Router 2.

To use all this in C#, you should start with [this Code Project article](http://www.codeproject.com/KB/gadgets/SignalStrenghth.aspx) to get the signal strengths. Then it is just a matter of using those returned values with the data you measured when doing the initial survey.

Hope it helps :) At least, that is what I would do in order to approach this problem.
We did this on a project already determining distance from access points, but without the signal triangulation (already covered in other answers here). I do have a recommendation from the "man, I wish I didn't have to go back and do this" department - it would be to spend extra time on 2 areas:

1. An easy and repeatable method of calibration using **multiple data points**. For example, the dropoff from being "very close" to "kinda close" will be a lot more than "really far away" to "really really far". It's not going to be a linear slope.
2. Data smoothing. As you move, the signal strength will vary unproportionally to your movement (due to obstacles in the path). It will make your results much more accurate if you take a rolling average of the last 5-10 samples of the signal strength rather than just taking the last sample.
determining your location using ONLY wifi signals?
[ "", "c#", ".net", "wireless", "wifi", "" ]
I have a small (500kb) Swing applet that displays a small HTML page with JEditorPane. This works nicely with most browsers and JREs, but with JRE 1.5 (IE6) it seems to display just blank. There are no error messages or exceptions on the Java console.

The applet is able to load TXT files with all JREs just fine; with the Java console tracing 5 option it displays the same diagnostic message for both text files and html files:

```
network: Connecting https://xxx.net/xxx/data/my.txt with proxy=DIRECT
network: Connecting https://xxx.net/xxx/data/my2.htm with proxy=DIRECT
```

Any ideas how to diagnose further what's going wrong, or how to fix it? I don't have console access to my client's server hosting the applet, but I have a test machine with IE6+JRE 1.5 that I can use to access their HTTPS url and reproduce the problem. The problem does not appear with other browsers / JRE 1.6. The applet is unsigned, since the HTML page and applet are located in the same folder on the same server.
Solution found: JEditorPane's async mode was causing this issue; switching to synchronous mode solved the bug with JRE 1.5.
Have you tried running something like [Wireshark](http://www.wireshark.org/) to see if the request is actually happening or if you're actually getting a response? If it's something weird with the network (maybe the 1.5 JRE is doing something weird with the request) then that might help you track it down.
Applet with JRE 1.5 (IE6) fails to open HTML page over HTTPS, works with JRE1.6
[ "", "java", "swing", "applet", "jeditorpane", "" ]
I was curious if anyone had any suggestions on a Java library that provides access to MSMQ? I've downloaded the trial of the J-Integra Java-COM library and have built and run their MSMQ example app, but I was curious if there were any good (free :)) alternatives. I've run across a few JNI implementations like jMSMQ and a few others, but I'd rather avoid JNI if possible. We've also investigated some .NET<->JMS interop solutions like JNBridge (with ActiveMQ). I think our company has decided to centralize our queueing in MSMQ however, so that would be our ideal solution. We are hoping to use WCF (netMsmq or msmqIntegration) on the .NET side. I'm a little concerned about how the java side will deal with the WCF messages (binary encoding), but there should be options there (customBinding with text encoding and MSMQ transport?), so I'm not too worried about that. Mostly worried about getting access to MSMQ in Java. Our other option is to just put a thin web-service layer at the input end of each queue, and achieve interop that way. Performance is a little bit of a concern there, however. Any suggestions? Thanks a lot for anyone's time.
As far as I understand, J-Integra is the only pure Java implementation of MSMQ, i.e. it is the only alternative if you are not running on Windows. (Please let me know if there are other alternatives!) Apache Camel and others use JNI to access the Windows API (DLL files), which obviously means they only work on Windows. I am currently trying out the code provided here <http://www.codeplex.com/MsmqJava> which seems simple to use, but I'm not really sure how transactions work.
There is an open-source library on CodePlex: [MsmqJava](http://MsmqJava.codeplex.com). Free.
Java and MSMQ
[ "", "java", "interop", "msmq", "" ]
I'm trying to use `System.Xml.Linq` to create XHTML documents. Thus, the vast majority of the nodes in my trees ought to use this namespace:

```
http://www.w3.org/1999/xhtml
```

I can create `XElement` nodes scoped to this namespace easily enough, using an `XNamespace`, like this:

```
XNamespace xhtml = "http://www.w3.org/1999/xhtml";

// ...

new XElement(xhtml + "html",
    // ...
```

However, I don't want to have to make an `XNamespace` available throughout all the code that creates HTML nodes, and have to prefix every single `XElement` (and `XAttribute`) name I create accordingly. The XML text format itself takes this requirement into account, and permits setting a default namespace in an ancestor which is inherited by descendants, using the reserved `xmlns` attribute. I'd like to do something similar using `System.Xml.Linq`. Is this possible?
I've decided to use a static class called `XHtml`, that looks like this:

```
public static class XHtml
{
    static XHtml()
    {
        Namespace = "http://www.w3.org/1999/xhtml";
    }

    public static XNamespace Namespace { get; private set; }

    public static XElement Element(string name)
    {
        return new XElement(Namespace + name);
    }

    public static XElement Element(string name, params object[] content)
    {
        return new XElement(Namespace + name, content);
    }

    public static XElement Element(string name, object content)
    {
        return new XElement(Namespace + name, content);
    }

    public static XAttribute Attribute(string name, object value)
    {
        return new XAttribute(/* Namespace + */ name, value);
    }

    public static XText Text(string text)
    {
        return new XText(text);
    }

    public static XElement A(string url, params object[] content)
    {
        XElement result = Element("a", content);
        result.Add(Attribute("href", url));
        return result;
    }
}
```

This seems to be the cleanest way of doing things, particularly as I can then add in convenience routines, such as the `XHtml.A` method (not all of my class is shown here).
I took the recursive rewriting path. You do not really have to 'reconstruct' the tree. You can just swap out the node names (`XName`).

```
private static void ApplyNamespace(XElement parent, XNamespace nameSpace)
{
    if (DetermineIfNameSpaceShouldBeApplied(parent, nameSpace))
    {
        parent.Name = nameSpace + parent.Name.LocalName;
    }

    foreach (XElement child in parent.Elements())
    {
        ApplyNamespace(child, nameSpace);
    }
}
```
How to create XElement with default namespace for children without using XNamespace in all child nodes
[ "", "c#", ".net", "xml", "namespaces", "linq-to-xml", "" ]
I'm a .NET web developer who has just been asked to produce a small demo website using NetBeans IDE 5.5. I have no experience with Java up to this point. I've followed a couple of quick tutorials, one which just uses a JSP file and another which uses a servlet. My concern at this early stage is that it looks difficult to keep my application code away from my markup. Using JSP files looks quite similar to the old days of classic ASP. On the other hand, servlets look useful but seem to involve a lot of writing out markup to the output stream, which I'm not happy with either. Is there an equivalent to the ASP .NET code-behind model, or any other strategies for separating out markup and code? Is it possible to keep markup in the JSP and then use the servlet from the JSP?
The thing about Java is that it doesn't really come 'bundled' with stuff in the same way .NET does... you generally go looking for the goods. This makes starting with web apps in Java daunting because there are so many options. Many of them are built off JSPs, which is a plus. If you are set on using vanilla JSPs you are not going to have a good time. Otherwise, I would suggest Wicket, Stripes or Spring MVC. Stripes is probably the simplest of the three (Spring is a little complicated and Wicket is conceptually different from the other two). Having said that, Spring MVC is probably the most used of the three.

All Stripes really is, is JSPs and ActionBeans. ActionBeans are Java classes that contain actions, which are methods that perform your actions. These ActionBean classes also contain the data for the current page. [This page](http://www.stripesframework.org/) has more information about Stripes.
You can't do something similar to ASP.NET code-behind using pure Java EE technologies. You need to use an MVC framework like Spring MVC or Struts. The idea is that you create your controller (a Java class) and a JSP page, and configure an action to tie the JSP page to the controller. It isn't as simple as ASP.NET, as it requires configuration and an external framework. Netbeans has the JSF framework embedded. Have a look at this [tutorial](http://www.netbeans.org/kb/articles/jAstrologer-intro.html).
How do I separate markup from application code when building a Java website?
[ "", "java", "netbeans", "" ]
We are working with large amounts of data, all tagged in UTC (in Java). Between reading this data, storing it in a database and getting it out again, it happened that some data was off by one hour during daylight saving time. As UTC has no concept of daylight saving time this was clearly a bug within the software. Once known, it's easy to fix. However, it'd be nice to have some unit/integration tests that work regardless of the current time difference - e.g. I'd like to change the local time zone and run some methods over and over again within these different time zones to make sure UTC is handled correctly. As the tests should run automatically and - preferably - within one test suite, I'm wondering how to best test for correct behaviour. It'd be easy to change local settings like the time zone upon restarting the JVM, but running this within a test suite is not that easy. Does anybody know of a test environment, library or pattern supporting this scenario? We're usually working with JUnit, but are open to adding another environment/technique if it helps get rid of problems like this. I suppose that it's rather an integration than a unit test.

**Edit**: There are already two very helpful answers, but I guess there must be more techniques out there. Does anybody have authoritative information about when/how often TimeZone.getDefault will be called (see the comments on Jon Skeet's answer)?

**Note**: Even though this question has an accepted answer, I was not completely sure which answer to accept. Even with this acceptance I'd like to see more ideas and techniques. Thanks for your input!
I would recommend you check out [JodaTime](http://joda-time.sourceforge.net/userguide.html), which provides some sugar to help manage Date/Time/TimeZone type issues more legibly in your code. We use it throughout test and production, since the way it improves on the native Java Date/Time API is unparalleled. Using it in tests works fine within JUnit.
Java allows you to set the default timezone (java.util.TimeZone.setDefault). I've written tests before to set the timezone to a variety of different options and check that everything still works. Be careful though - if you're parallelising most of your unit tests, you'll need to make these ones sequential. I suggest you test in some timezones with daylight saving time applies, and some without. Using an Australian timezone is good as well, as DST applies at the opposite time of the year to the northern hemisphere.
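A minimal sketch of this technique (the zone choices are just illustrative, and the formatting check stands in for whatever UTC-sensitive logic you are testing): set each candidate zone as the default, run the assertion, and always restore the original default so other tests are unaffected.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class TimeZoneLoop {
    // No DST, northern-hemisphere DST, and southern-hemisphere DST.
    private static final String[] ZONES = { "UTC", "Europe/Berlin", "Australia/Sydney" };

    public static void main(String[] args) {
        TimeZone original = TimeZone.getDefault();
        try {
            Date instant = new Date(0L); // fixed instant: 1970-01-01T00:00:00Z
            for (String id : ZONES) {
                TimeZone.setDefault(TimeZone.getTimeZone(id));
                // Code that formats in UTC must ignore the default zone entirely.
                SimpleDateFormat utc = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
                utc.setTimeZone(TimeZone.getTimeZone("UTC"));
                String formatted = utc.format(instant);
                if (!"1970-01-01 00:00:00".equals(formatted)) {
                    throw new AssertionError(id + " broke UTC handling: " + formatted);
                }
            }
        } finally {
            TimeZone.setDefault(original); // never leak a modified default into other tests
        }
        System.out.println("UTC handling held in all zones");
    }
}
```

In a real JUnit suite the loop body would call into the code under test instead of formatting directly, and the restore would live in a tearDown/@After method.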
Testing correct timezone handling
[ "", "java", "timezone", "utc", "" ]
I would like to convert the below "foreach" statement to a LINQ query that returns a substring of the file name into a list:

```
IList<string> fileNameSubstringValues = new List<string>();

//Find all assemblies with mapping files.
ICollection<FileInfo> files = codeToGetFileListGoesHere;

//Parse the file name to get the assembly name.
foreach (FileInfo file in files)
{
    string fileName = file.Name.Substring(0, file.Name.Length - (file.Name.Length - file.Name.IndexOf(".config.xml")));
    fileNameSubstringValues.Add(fileName);
}
```

The end result would be something similar to the following:

```
IList<string> fileNameSubstringValues = files.LINQ-QUERY-HERE;
```
Try something like this:

```
var fileList = files.Select(file => file.Name.Substring(0, file.Name.Length - (file.Name.Length - file.Name.IndexOf(".config.xml"))))
                    .ToList();
```
If you happen to know the type of the collection of `FileInfo`s, and it's a `List<FileInfo>`, I'd probably skip the Linq and write:

```
files.ConvertAll(
    file => file.Name.Substring(0, file.Name.Length - (file.Name.Length - file.Name.IndexOf(".config.xml")))
);
```

or if it's an array:

```
Array.ConvertAll(
    files,
    file => file.Name.Substring(0, file.Name.Length - (file.Name.Length - file.Name.IndexOf(".config.xml")))
);
```

Mainly because I like saying "Convert" instead of "Select" to express my intent to a programmer reading this code. However, Linq is part of C# now, so I think it's perfectly reasonable to insist that a reading programmer understand what `Select` does. And the Linq approach lets you easily migrate to PLinq in the future.
How to use LINQ to return substring of FileInfo.Name
[ "", "c#", ".net", "linq", "" ]
I want to generate a link on my page where a key/value pair is added to the URL dynamically so that: > `Default.aspx?key1=value1` Becomes: > `Default.aspx?key1=value1&key2=value2` So that the existing query retains any keys in the URL. Also, if there are no keys, then my key would be added, along with the '`?`' since it would be needed. I could easily write some logic that does this, but this seems like something that the framework should have a utility for. Is there any way to add keys to a query string without writing my own logic for it?
I know, it's strange this is not supported properly in the .NET Framework. A couple of [extensions to UriBuilder](http://petemontgomery.wordpress.com/2009/01/27/query-string-extension-methods-for-systemuribuilder/) will do what you need though.
You can use the UriBuilder class. See the example in the [Query property](http://msdn.microsoft.com/en-us/library/system.uribuilder.query.aspx) documentation. FWIW, ASP.NET MVC includes the UrlHelper class that does exactly this sort of thing for the MVC framework. You might want to think about adding an extension method to the HttpRequest class that takes a dictionary and returns a suitable Url based on the given request and the dictionary values. This way you'd only have to write it once.
Best way to add a key/value to an existing URL string?
[ "", "c#", ".net", "asp.net", "" ]
I am getting to the last stage of my rope (a more scalable version of `String`) implementation. Obviously, I want all operations to give the same result as the operations on `String`s whenever possible. Doing this for ordinal operations is pretty simple, but I am worried about implementing culture-sensitive operations correctly. Especially since I know only two languages and in both of them culture-sensitive operations behave precisely the same as ordinal operations do! So are there any specific things that I could test and get at least some confidence that I am doing things correctly? I know, for example, about ß being equal to SS when ignoring cases in German; about dotted and undotted i in Turkish.
Surrogate pairs, if you plan to support them - including invalid combinations (e.g. only one part of one). If you're doing encoding and decoding, make sure you retain enough state to cope with being given arbitrary blocks of binary data to decode, which may end half way through a character, with the remaining half coming in the next block.
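To make the surrogate-pair point concrete (illustrated in Java here for a runnable demo, but .NET strings use the same UTF-16 representation): a single non-BMP character occupies two UTF-16 code units, so naive length or substring logic can split it in half.

```java
public class SurrogateDemo {
    public static void main(String[] args) {
        // U+1D11E MUSICAL SYMBOL G CLEF lies outside the BMP,
        // so UTF-16 encodes it as the surrogate pair D834 DD1E.
        String clef = "\uD834\uDD1E";

        System.out.println(clef.length());                         // 2 code units...
        System.out.println(clef.codePointCount(0, clef.length())); // ...but 1 code point

        // Chopping between the surrogates leaves an invalid, lone high surrogate:
        String broken = clef.substring(0, 1);
        System.out.println(Character.isHighSurrogate(broken.charAt(0))); // true
    }
}
```

A rope implementation that splits its chunks at arbitrary character indices needs to guard against exactly this kind of mid-pair split.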
The Turkish test is the best I know :)
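For anyone unfamiliar with it, the "Turkish test" refers to the dotted/dotless i problem the question mentions; shown here in Java for a runnable illustration (the same pitfall exists with .NET's culture-sensitive string operations):

```java
import java.util.Locale;

public class TurkishITest {
    public static void main(String[] args) {
        Locale turkish = new Locale("tr", "TR");

        // In Turkish, upper-casing 'i' gives dotted capital I (U+0130),
        // and lower-casing 'I' gives dotless small i (U+0131).
        System.out.println("i".toUpperCase(turkish).equals("I")); // false!
        System.out.println("i".toUpperCase(turkish));             // İ (U+0130)
        System.out.println("I".toLowerCase(turkish));             // ı (U+0131)

        // Locale-insensitive (ordinal-style) casing is unaffected:
        System.out.println("i".toUpperCase(Locale.ROOT));         // I
    }
}
```

Any culture-sensitive case-insensitive comparison that passes under the Turkish locale is a good sign the ordinal and cultural code paths are correctly separated.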
Looking for String operations edge cases. What do I need to test?
[ "", "c#", "string", "globalization", "cultureinfo", "" ]
I am playing around with F# and C#, and would like to call F# code from C#. I managed to get it to work the other way around in Visual Studio by having two projects in the same solution, and adding a reference of the C# code to the F# project. After doing this, I could call C# code and even step through it while debugging. What I am trying to do now is call F# code FROM C# instead of C# code from F#. I added a reference to the F# project to the C# project, but it isn't working the way it did before. I would like to know if this is possible without doing it manually.
Below is a working example of calling F# from C#. As you encountered, I was not able to add a reference by selecting from the "Add Reference ... Projects" tab. Instead I did have to do it manually, by browsing to the F# assembly in the "Add Reference ... Browse" tab.

------ F# MODULE -----

```
// First implement a foldl function, with the signature (a->b->a) -> a -> [b] -> a
// Now use your foldl function to implement a map function, with the signature (a->b) -> [a] -> [b]
// Finally use your map function to convert an array of strings to upper case
//
// Test cases are in TestFoldMapUCase.cs
//
// Note: F# provides standard implementations of the fold and map operations, but the
// exercise here is to build them up from primitive elements...

module FoldMapUCase.Zumbro
#light

let AlwaysTwo = 2

let rec foldl fn seed vals =
    match vals with
    | head :: tail -> foldl fn (fn seed head) tail
    | _ -> seed

let map fn vals =
    let gn lst x = fn( x ) :: lst
    List.rev (foldl gn [] vals)

let ucase vals =
    map String.uppercase vals
```

----- C# UNIT TESTS FOR THE MODULE -----

```
// Test cases for FoldMapUCase.fs
//
// For this example, I have written my NUnit test cases in C#. This requires constructing some F#
// types in order to invoke the F# functions under test.

using System;
using Microsoft.FSharp.Core;
using Microsoft.FSharp.Collections;
using NUnit.Framework;

namespace FoldMapUCase
{
    [TestFixture]
    public class TestFoldMapUCase
    {
        public TestFoldMapUCase()
        {
        }

        [Test]
        public void CheckAlwaysTwo()
        {
            // simple example to show how to access F# function from C#
            int n = Zumbro.AlwaysTwo;
            Assert.AreEqual(2, n);
        }

        class Helper<T>
        {
            public static List<T> mkList(params T[] ar)
            {
                List<T> foo = List<T>.Nil;
                for (int n = ar.Length - 1; n >= 0; n--)
                    foo = List<T>.Cons(ar[n], foo);
                return foo;
            }
        }

        [Test]
        public void foldl1()
        {
            int seed = 64;
            List<int> values = Helper<int>.mkList(4, 2, 4);
            FastFunc<int, FastFunc<int, int>> fn =
                FuncConvert.ToFastFunc((Converter<int, int, int>)delegate(int a, int b) { return a / b; });
            int result = Zumbro.foldl<int, int>(fn, seed, values);
            Assert.AreEqual(2, result);
        }

        [Test]
        public void foldl0()
        {
            string seed = "hi mom";
            List<string> values = Helper<string>.mkList();
            FastFunc<string, FastFunc<string, string>> fn =
                FuncConvert.ToFastFunc((Converter<string, string, string>)delegate(string a, string b) { throw new Exception("should never be invoked"); });
            string result = Zumbro.foldl<string, string>(fn, seed, values);
            Assert.AreEqual(seed, result);
        }

        [Test]
        public void map()
        {
            FastFunc<int, int> fn = FuncConvert.ToFastFunc((Converter<int, int>)delegate(int a) { return a * a; });
            List<int> vals = Helper<int>.mkList(1, 2, 3);
            List<int> res = Zumbro.map<int, int>(fn, vals);
            Assert.AreEqual(res.Length, 3);
            Assert.AreEqual(1, res.Head);
            Assert.AreEqual(4, res.Tail.Head);
            Assert.AreEqual(9, res.Tail.Tail.Head);
        }

        [Test]
        public void ucase()
        {
            List<string> vals = Helper<string>.mkList("arnold", "BOB", "crAIg");
            List<string> exp = Helper<string>.mkList("ARNOLD", "BOB", "CRAIG");
            List<string> res = Zumbro.ucase(vals);
            Assert.AreEqual(exp.Length, res.Length);
            Assert.AreEqual(exp.Head, res.Head);
            Assert.AreEqual(exp.Tail.Head, res.Tail.Head);
            Assert.AreEqual(exp.Tail.Tail.Head, res.Tail.Tail.Head);
        }
    }
}
```
It should 'just work', though you might have to build the F# project before a project-to-project reference from C# works (I forget). A common source of issues is namespaces/modules. If your F# code does not start with a namespace declaration, it gets put in a module with the same name as the filename, so that e.g. from C# your type might appear as "Program.Foo" rather than just "Foo" (if Foo is an F# type defined in Program.fs).
Call F# code from C#
[ "", "c#", "interop", "f#", "" ]
I am writing somewhat of a proxy program in Java. Here's how it works:

1. The browser will be connected to the program.
2. Any requests by the browser will first be printed to standard out, then forwarded to the server.
3. The server then returns a response, which is also printed to standard out, then forwarded back to the browser.

My problem is, step 1 works, step 2 works, but step 3 fails. The program can get a response, and it's printed to standard out properly, but the browser can't seem to get it. I've modified the program to isolate the problem. All it's doing here is printing the response directly to the browser:

```
ServerSocket client = null;
try {
    client = new ServerSocket(snoopPort);
} catch (IOException e) {
    System.out.println("ERROR: Could not listen on port: " + snoopPort);
    System.exit(-1);
}

Socket clientSocket = null;
try {
    clientSocket = client.accept();
} catch (IOException e) {
    System.out.println("ERROR: Accept failed on port: " + snoopPort);
    System.exit(-1);
}

PrintWriter snoopOut = new PrintWriter(clientSocket.getOutputStream(), true);
snoopOut.print("HTTP/1.1 200 OK\r\n");
snoopOut.print("Date: Thu, 05 Feb 2009 06:37:28 GMT\r\n");
snoopOut.print("Server: Apache\r\n");
snoopOut.print("Set-Cookie: Apache=99.245.58.244.1233815848703045; path=/\r\n");
snoopOut.print("Accept-Ranges: bytes\r\n");
snoopOut.print("Transfer-Encoding: chunked\r\n");
snoopOut.print("Content-Type: text/html\r\n");
snoopOut.print("\r\n");
snoopOut.print("<html><head><title>test</head><body>hello world!</body></html>\r\n");
snoopOut.close();
clientSocket.close();
client.close();
```
Okay, I've semi-figured out the problem. The code? It's perfect. Everything works. It was the environment I was running under. I was testing it on a Vista machine, and despite disabling all anti-virus and firewalls, it wouldn't let me connect to the port. When I took it back to work, it was flawless. Why? I still have to figure that out. But thanks for the help everyone!
In your revised description, I'd guess you have a problem with the HTTP message. Is this correct: "Transfer-Encoding: chunked"? (It doesn't look chunked to me.)
Java HTTP Snooper(Proxy) Does Not Forward Server Response
[ "", "java", "sockets", "proxy", "monitoring", "" ]
I've got a query that returns the cost of wages for a given member of staff ``` SELECT totalhours * staffbaserate AS TotalCost FROM newrotaRaw WHERE staffref = @staffref ``` However I need to do an additional bit of maths if the returned value is > 105. The bit of maths I need to do is that if the value is < 105 the value is returned as is, however if the value is > 105 then I need to do value \* 1.128. For example: John Smith's Cost is 90 therefore the query should return 90 David Smith's Cost is 140 therefore the query should return 157.92 I'm sure there's some way to use replace to get it to do what I want but I've only ever used replace for exact matches, in this case the replace is conditional on the value. Any help would be much appreciated!
Try something like this.

```
SELECT TotalCost = CASE
                     WHEN (totalhours * staffbaserate) < 105 THEN (totalhours * staffbaserate)
                     ELSE (totalhours * staffbaserate) * 1.128
                   END
FROM newrotaRaw
WHERE staffref = @staffref
```
```
SELECT CASE
         WHEN totalhours * staffbaserate <= 105 THEN totalhours * staffbaserate
         ELSE totalhours * staffbaserate * 1.128
       END AS TotalCost
FROM newrotaRaw
WHERE staffref = @staffref
```
Conditionally replacing values in SELECT
[ "", "sql", "sql-server", "" ]
> **Possible Duplicate:** > [C#/.NET analysis tool to find race conditions/deadlocks](https://stackoverflow.com/questions/2379610/c-net-analysis-tool-to-find-race-conditions-deadlocks) I am debugging an application that I suspect is getting deadlocked and hanging. However, this only occurs every few days, and it never happens on my computer so I can't hook a debugger up to it. Are there any utilities or methods I can use to query the running application and find out what methods/locks/whatever it is deadlocked on? **Update:** Typically the application is running at a customer location and I don't have access to the machine, and I'm not entirely comfortable asking them to install tons of software.
Instead of using the regular `lock` & `Monitor.Enter` approach to lock some data, you can also use a 'TimedLock' structure. This TimedLock throws an exception if the lock couldn't be acquired in a timely fashion, and it can also give you a warning if you have some locks that you didn't release. [This](http://www.interact-sw.co.uk/iangblog/2004/04/26/yetmoretimedlocking) article by Ian Griffiths may help.
You can use [WinDbg](http://www.microsoft.com/whdc/devtools/debugging/installx86.Mspx) to inspect the threads in the application. Here's a brief plan of what you could do. * When the application hangs, copy the WinDbg files to the machine. * Either attach WinDbg to the process or use ADPlus to get a hang dump of the process. If you choose ADPlus, you then load the dump in WinDbg. * From WinDbg you load sos.dll, so you can inspect managed code. * The `!threads` command will show you all threads in the application and the `!clrstack` command, will show you what they are doing. Use `~e!clrstack` to dump the call stack of all threads. Look for calls to Wait methods as they indicate locking. * The `!syncblk` command will give you information of what threads are holding the different locks. * To find out what lock a given thread is trying to acquire, switch to the thread and inspect stack objects (`!dso`). From here you should be able to find the lock the thread is trying to acquire. Clarification: WinDbg doesn't require a regular install. Just copy the files. Also, if you take the hang dump, you can continue debugging on another machine if so desired. Addition: [Sosex](http://www.stevestechspot.com/) has the `!dlk` command that automatically identifies deadlocks in many situations. It doesn't work all the time, but when it does, it does all the work for you, so that should be your first choice.
Detecting deadlocks in a C# application
[ "", "c#", "multithreading", "deadlock", "" ]
Is it possible to truncate a Java string to the closest word boundary after a number of characters. Similar to the PHP wordwrap() function, shown in this [example](https://stackoverflow.com/questions/79960/how-to-truncate-a-string-in-php-to-the-word-closest-to-a-certain-number-of-charac).
Use a `java.text.BreakIterator`, something like this:

```
String s = ...;
int number_chars = ...;

BreakIterator bi = BreakIterator.getWordInstance();
bi.setText(s);
int first_after = bi.following(number_chars);

// to truncate:
s = s.substring(0, first_after);
```
You can use a regular expression:

```
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Note: matcher(str), not matches(str) — Pattern.matches returns a boolean, not a Matcher
Matcher m = Pattern.compile("^.{0,10}\\b").matcher(str);
if (m.find()) {
    String first10char = m.group(0);
}
```
Truncate String on closest word boundary
[ "", "java", "string", "" ]
I know from reading [Microsoft documentation](https://learn.microsoft.com/dotnet/api/system.idisposable) that the "primary" use of the `IDisposable` interface is to clean up unmanaged resources. To me, "unmanaged" means things like database connections, sockets, window handles, etc. But, I've seen code where the `Dispose()` method is implemented to free *managed* resources, which seems redundant to me, since the garbage collector should take care of that for you. For example:

```
public class MyCollection : IDisposable
{
    private List<String> _theList = new List<String>();
    private Dictionary<String, Point> _theDict = new Dictionary<String, Point>();

    // Die, clear it up! (free unmanaged resources)
    public void Dispose()
    {
        _theList.Clear();
        _theDict.Clear();
        _theList = null;
        _theDict = null;
    }
}
```

My question is, does this make the garbage collector free memory used by `MyCollection` any faster than it normally would?

---

**Edit**: So far people have posted some good examples of using `IDisposable` to clean up unmanaged resources such as database connections and bitmaps. But suppose that `_theList` in the above code contained a million strings, and you wanted to free that memory *now*, rather than waiting for the garbage collector. Would the above code accomplish that?
The point of Dispose **is** to free unmanaged resources. It needs to be done at some point, otherwise they will never be cleaned up. The garbage collector doesn't know **how** to call `DeleteHandle()` on a variable of type `IntPtr`, and it doesn't know **whether** or not it needs to call `DeleteHandle()`.

> **Note**: What is an *unmanaged resource*? If you found it in the Microsoft .NET Framework: it's managed. If you went poking around MSDN yourself, it's unmanaged. Anything you've used P/Invoke calls to get outside of the nice comfy world of everything available to you in the .NET Framework is unmanaged – and you're now responsible for cleaning it up.

The object that you've created needs to expose *some* method, that the outside world can call, in order to clean up unmanaged resources. The method can be named whatever you like:

```
public void Cleanup()
```

or

```
public void Shutdown()
```

But instead there is a standardized name for this method:

```
public void Dispose()
```

There was even an interface created, `IDisposable`, that has just that one method:

```
public interface IDisposable
{
    void Dispose();
}
```

So you make your object expose the `IDisposable` interface, and that way you promise that you've written that single method to clean up your unmanaged resources:

```
public void Dispose()
{
    Win32.DestroyHandle(this.CursorFileBitmapIconServiceHandle);
}
```

And you're done. **Except you can do better.**

---

What if your object has allocated a 250MB **[System.Drawing.Bitmap](http://msdn.microsoft.com/en-us/library/system.drawing.bitmap.aspx)** (i.e. the .NET managed Bitmap class) as some sort of frame buffer? Sure, this is a managed .NET object, and the garbage collector will free it. But do you really want to leave 250MB of memory just sitting there – waiting for the garbage collector to *eventually* come along and free it? What if there's an [open database connection](http://msdn.microsoft.com/en-us/library/system.data.common.dbconnection.aspx)?
Surely we don't want that connection sitting open, waiting for the GC to finalize the object. If the user has called `Dispose()` (meaning they no longer plan to use the object) why not get rid of those wasteful bitmaps and database connections?

So now we will:

* get rid of unmanaged resources (because we have to), and
* get rid of managed resources (because we want to be helpful)

So let's update our `Dispose()` method to get rid of those managed objects:

```
public void Dispose()
{
    //Free unmanaged resources
    Win32.DestroyHandle(this.CursorFileBitmapIconServiceHandle);

    //Free managed resources too
    if (this.databaseConnection != null)
    {
        this.databaseConnection.Dispose();
        this.databaseConnection = null;
    }
    if (this.frameBufferImage != null)
    {
        this.frameBufferImage.Dispose();
        this.frameBufferImage = null;
    }
}
```

And all is good, **except you can do better**!

---

What if the person **forgot** to call `Dispose()` on your object? Then they would leak some **unmanaged** resources!

> **Note:** They won't leak **managed** resources, because eventually the garbage collector is going to run, on a background thread, and free the memory associated with any unused objects. This will include your object, and any managed objects you use (e.g. the `Bitmap` and the `DbConnection`).

If the person forgot to call `Dispose()`, we can *still* save their bacon! We still have a way to call it *for* them: when the garbage collector finally gets around to freeing (i.e. finalizing) our object.

> **Note:** The garbage collector will eventually free all managed objects. When it does, it calls the **`Finalize`** method on the object. The GC doesn't know, or care, about *your* **Dispose** method. That was just a name we chose for a method we call when we want to get rid of unmanaged stuff.

The destruction of our object by the garbage collector is the *perfect* time to free those pesky unmanaged resources. We do this by overriding the `Finalize()` method.
> **Note:** In C#, you don't explicitly override the `Finalize()` method. You write a method that *looks like* a **C++ destructor**, and the compiler takes that to be your implementation of the `Finalize()` method:

```
~MyObject()
{
    //we're being finalized (i.e. destroyed), call Dispose in case the user forgot to
    Dispose(); //<--Warning: subtle bug! Keep reading!
}
```

But there's a bug in that code. You see, the garbage collector runs on a **background thread**; you don't know the order in which two objects are destroyed. It is entirely possible that in your `Dispose()` code, the **managed** object you're trying to get rid of (because you wanted to be helpful) is no longer there:

```
public void Dispose()
{
    //Free unmanaged resources
    Win32.DestroyHandle(this.gdiCursorBitmapStreamFileHandle);

    //Free managed resources too
    if (this.databaseConnection != null)
    {
        this.databaseConnection.Dispose(); //<-- crash, GC already destroyed it
        this.databaseConnection = null;
    }
    if (this.frameBufferImage != null)
    {
        this.frameBufferImage.Dispose(); //<-- crash, GC already destroyed it
        this.frameBufferImage = null;
    }
}
```

So what you need is a way for `Finalize()` to tell `Dispose()` that it should **not touch any managed** resources (because they *might not be there* anymore), while still freeing unmanaged resources. The standard pattern to do this is to have `Finalize()` and `Dispose()` both call a **third**(!) method; where you pass a Boolean saying if you're calling it from `Dispose()` (as opposed to `Finalize()`), meaning it's safe to free managed resources.
This *internal* method *could* be given some arbitrary name like "CoreDispose", or "MyInternalDispose", but it is tradition to call it `Dispose(Boolean)`:

```
protected void Dispose(Boolean disposing)
```

But a more helpful parameter name might be:

```
protected void Dispose(Boolean itIsSafeToAlsoFreeManagedObjects)
{
    //Free unmanaged resources
    Win32.DestroyHandle(this.CursorFileBitmapIconServiceHandle);

    //Free managed resources too, but only if I'm being called from Dispose
    //(If I'm being called from Finalize then the objects might not exist
    //anymore)
    if (itIsSafeToAlsoFreeManagedObjects)
    {
        if (this.databaseConnection != null)
        {
            this.databaseConnection.Dispose();
            this.databaseConnection = null;
        }
        if (this.frameBufferImage != null)
        {
            this.frameBufferImage.Dispose();
            this.frameBufferImage = null;
        }
    }
}
```

And you change your implementation of the `IDisposable.Dispose()` method to:

```
public void Dispose()
{
    Dispose(true); //I am calling you from Dispose, it's safe
}
```

and your finalizer to:

```
~MyObject()
{
    Dispose(false); //I am *not* calling you from Dispose, it's *not* safe
}
```

> **Note**: If your object descends from an object that implements `Dispose`, then don't forget to call their **base** Dispose method when you override Dispose:

```
public override void Dispose()
{
    try
    {
        Dispose(true); //true: safe to free managed resources
    }
    finally
    {
        base.Dispose();
    }
}
```

And all is good, **except you can do better**!

---

If the user calls `Dispose()` on your object, then everything has been cleaned up. Later on, when the garbage collector comes along and calls Finalize, it will then call `Dispose` again. Not only is this wasteful, but if your object has junk references to objects you already disposed of from the **last** call to `Dispose()`, you'll try to dispose them again! You'll notice in my code I was careful to remove references to objects that I've disposed, so I don't try to call `Dispose` on a junk object reference.
But that didn't stop a subtle bug from creeping in. When the user calls `Dispose()`: the handle **CursorFileBitmapIconServiceHandle** is destroyed. Later when the garbage collector runs, it will try to destroy the same handle again.

```
protected void Dispose(Boolean iAmBeingCalledFromDisposeAndNotFinalize)
{
    //Free unmanaged resources
    Win32.DestroyHandle(this.CursorFileBitmapIconServiceHandle); //<--double destroy
    ...
}
```

The way you fix this is to tell the garbage collector that it doesn't need to bother finalizing the object – its resources have already been cleaned up, and no more work is needed. You do this by calling `GC.SuppressFinalize()` in the `Dispose()` method:

```
public void Dispose()
{
    Dispose(true); //I am calling you from Dispose, it's safe
    GC.SuppressFinalize(this); //Hey, GC: don't bother calling finalize later
}
```

Now that the user has called `Dispose()`, we have:

* freed unmanaged resources
* freed managed resources

There's no point in the GC running the finalizer – everything's taken care of.

## Couldn't I use Finalize to clean up unmanaged resources?

The documentation for [`Object.Finalize`](https://msdn.microsoft.com/en-us/library/system.object.finalize.aspx) says:

> The Finalize method is used to perform cleanup operations on unmanaged resources held by the current object before the object is destroyed.

But the MSDN documentation also says, for [`IDisposable.Dispose`](https://msdn.microsoft.com/en-us/library/system.idisposable.dispose(v=vs.110).aspx):

> Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.

So which is it? Which one is the place for me to clean up unmanaged resources? The answer is:

> It's your choice! But choose `Dispose`.

You certainly could place your unmanaged cleanup in the finalizer:

```
~MyObject()
{
    //Free unmanaged resources
    Win32.DestroyHandle(this.CursorFileBitmapIconServiceHandle);

    //A C# destructor automatically calls the destructor of its base class.
}
```

The problem with that is you have no idea when the garbage collector will get around to finalizing your object. Your un-managed, un-needed, un-used native resources will stick around until the garbage collector *eventually* runs. Then it will call your finalizer method; cleaning up unmanaged resources. The documentation of **Object.Finalize** points this out:

> The exact time when the finalizer executes is undefined. To ensure deterministic release of resources for instances of your class, implement a **Close** method or provide a [`IDisposable.Dispose`](https://msdn.microsoft.com/en-us/library/system.idisposable.dispose(v=vs.110).aspx) implementation.

This is the virtue of using `Dispose` to clean up unmanaged resources; you get to know, and control, when unmanaged resources are cleaned up. Their destruction is *"deterministic"*.

---

To answer your original question: Why not release memory now, rather than when the GC decides to do it? I have facial recognition software that *needs* to get rid of 530 MB of internal images **now**, since they're no longer needed. When we don't: the machine grinds to a swapping halt.

## Bonus Reading

For anyone who likes the style of this answer (explaining the *why*, so the *how* becomes obvious), I suggest you read Chapter One of Don Box's Essential COM:

* Direct link: [Chapter 1 sample by Pearson Publishing](http://ptgmedia.pearsoncmg.com/images/0201634465/samplechapter/chap01.pdf)
* magnet: 84bf0b960936d677190a2be355858e80ef7542c0

In 35 pages he explains the problems of using binary objects, and invents COM before your eyes. Once you realize the *why* of COM, the remaining 300 pages are obvious, and just detail Microsoft's implementation. I think every programmer who has ever dealt with objects or COM should, at the very least, read the first chapter. It is the best explanation of anything ever.
## Extra Bonus Reading

[When everything you know is wrong](https://ericlippert.com/2015/05/18/when-everything-you-know-is-wrong-part-one/) ([archive](https://archive.today/JuXpm)) by Eric Lippert

> It is therefore very difficult indeed to write a correct finalizer, and **the best advice I can give you is to not try**.
`IDisposable` is often used to exploit the `using` statement and take advantage of an easy way to do deterministic cleanup of managed objects.

```
public class LoggingContext : IDisposable
{
    public LoggingContext(string name)
    {
        Log.Write("Entering Log Context {0}", name);
        Log.Indent();
    }

    public void Dispose()
    {
        Log.Outdent();
    }

    public static void Main()
    {
        Log.Write("Some initial stuff.");
        try
        {
            using (new LoggingContext("child"))
            {
                Log.Write("Some stuff inside the context.");
                throw new Exception();
            }
        }
        catch
        {
            Log.Write("Man, that was a heavy exception caught from inside a child logging context!");
        }
        finally
        {
            Log.Write("Some final stuff.");
        }
    }
}
```
Proper use of the IDisposable interface
[ "", "c#", ".net", "garbage-collection", "idisposable", "" ]
I am using standard windows forms controls under C#. I have two controls which are different ways of setting a single value. Thus when one is changed, the changed event handler computes a value for the other and sets it. So I need to unhook the change event on the second control temporarily to prevent looping (A changes B, B is changed so it changes A, A is changed...). There are also some other situations where I need to stop the change event. So there are several places where I need to unhook the event, and only one place where it needs to be hooked. My question is: do I need to keep track of whether the event has been hooked or can I just unhook it as many times as I like and only occasionally hook it again? (Is there any documentation that addresses this?)
Unhooking an event handler that was never registered using the `-=` operator won't give you any problems.
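The guarantee comes from how `-=` is compiled: it calls `Delegate.Remove`, which simply returns the invocation list unchanged (or `null`) when the handler isn't present. A minimal demonstration (class and handler names are just illustrative):

```csharp
using System;

class Publisher
{
    public event EventHandler Changed;

    // Raise helper so the event is actually exercised.
    public void RaiseChanged()
    {
        if (Changed != null) Changed(this, EventArgs.Empty);
    }
}

class Program
{
    static void Main()
    {
        var p = new Publisher();
        EventHandler handler = (s, e) => Console.WriteLine("changed");

        p.Changed -= handler; // never hooked: a no-op, no exception
        p.Changed -= handler; // unhook as many times as you like

        p.Changed += handler; // hook once, when needed
        p.RaiseChanged();     // prints "changed"

        p.Changed -= handler;
        p.Changed -= handler; // already removed: still a no-op
        p.RaiseChanged();     // prints nothing
    }
}
```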
Ya, you can unhook it as many times as you want, no exceptions will be thrown.
Is there any problem unhooking a delegate from an event which was never hooked?
[ "", "c#", ".net", "events", "" ]
We use an enterprise framework that we wrote to facilitate all sorts of company specific stuff that we do. Within the framework, we provide a LINQ to SQL ORM to use when appropriate. All of this is based on the Microsoft MVC framework. On the MVC side, we new up a datacontext in our base controller. This allows us a full datacontext lifecycle, which is extremely useful for transactions. One task that we're looking to accomplish is to provide Winforms support. However, I'm banging my head against the wall trying to figure out an approach that would work in a similar fashion for Winforms. Given that the MVC approach is pretty straight forward becase one page load represents a logical transaction, it's difficult to come up with a solution on the Winforms side. Has anyone done anything similar or have any recommendations?
I did something like that for some small applications we built last year. We created an application shell that loads the forms similarly to the request/response model. I built an `IRenderer` interface with a `RenderView()` method that I've implemented for web and for Windows Forms. It allows me to use the same controller and model for both. Search for Model-View-ViewModel (MVVM) on Google and you may find something about this approach. I think [this article](http://msdn.microsoft.com/en-us/library/cc707841.aspx) may help you to understand what I'm talking about.
If you are trying to choose between having a long-lived DataContext (for example, as a singleton in your app) or having short-lived DataContexts, I would choose the second. I would new() a DataContext for each "Unit Of Work" and make sure to keep it alive for as short a period as possible. Creating a new DataContext is not a big issue, since they cache metadata anyway. Having a long-lived DataContext gives you a bit of a nightmare when it starts tracking too many objects.
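To make the "one DataContext per unit of work" lifetime concrete, here is a runnable sketch; `FakeDataContext` stands in for your generated LINQ to SQL context so no database is needed (the names are illustrative, not framework APIs):

```csharp
using System;

// Stand-in for a LINQ to SQL DataContext so the lifetime pattern is runnable
// without a database; a real context derives from System.Data.Linq.DataContext.
class FakeDataContext : IDisposable
{
    public static int LiveInstances;
    public FakeDataContext() { LiveInstances++; }
    public void Dispose() { LiveInstances--; }
}

class Program
{
    // One short-lived context per unit of work: create, query/SubmitChanges, dispose.
    static void DoUnitOfWork()
    {
        using (var db = new FakeDataContext())
        {
            // ... run queries and call SubmitChanges() here ...
        }
    }

    static void Main()
    {
        DoUnitOfWork();
        DoUnitOfWork();
        Console.WriteLine(FakeDataContext.LiveInstances); // 0 — no context outlives its unit of work
    }
}
```

The `using` block is what keeps the context's change-tracking footprint bounded to a single logical transaction.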
C#, Winforms & LINQ to SQL.. Datacontext lifecycle?
[ "", "c#", "asp.net-mvc", "winforms", "linq", "transactions", "" ]
What's the best way to convert search terms entered by a user, into a query that can be used in a where clause for full-text searching to query a table and get back relevant results? For example, the following query entered by the user: ``` +"e-mail" +attachment -"word document" -"e-learning" ``` Should translate into something like: ``` SELECT * FROM MyTable WHERE (CONTAINS(*, '"e-mail"')) AND (CONTAINS(*, '"attachment"')) AND (NOT CONTAINS(*, '"word document"')) AND (NOT CONTAINS(*, '"e-learning"')) ``` I'm using a [query parser class](http://pastebin.com/gsLqR9n1) at the moment, which parses the query entered by users into tokens using a regular expression, and then constructs the where clause from the tokens. However, given that this is probably a common requirement by a lot of systems using full-text search, I'm curious as to how other developers have approached this problem, and whether there's a better way of doing things.
This may not be exactly what you are looking for but it may offer you some further ideas. <http://www.sqlservercentral.com/articles/Full-Text+Search+(2008)/64248/>
**How to implement the accepted answer using .Net / C# / Entity Framework...**

1. Install Irony using NuGet.
2. Add the sample class from: <http://irony.codeplex.com/SourceControl/latest#Irony.Samples/FullTextSearchQueryConverter/SearchGrammar.cs>
3. Write code like this to convert the user-entered string to a query.

   ```
   var grammar = new Irony.Samples.FullTextSearch.SearchGrammar();
   var parser = new Irony.Parsing.Parser(grammar);
   var parseTree = parser.Parse(userEnteredSearchString);
   string query = Irony.Samples.FullTextSearch.SearchGrammar.ConvertQuery(parseTree.Root);
   ```
4. Perhaps write a stored procedure like this:

   ```
   create procedure [dbo].[SearchLivingFish]
       @Query nvarchar(2000)
   as
   select *
   from Fish
       inner join containstable(Fish, *, @Query, 100) as ft on ft.[Key] = FishId
   where IsLiving = 1
   order by rank desc
   ```
5. Run the query.

   ```
   var fishes = db.SearchLivingFish(query);
   ```
Converting user-entered search query to where clause for use in SQL Server full-text search
[ "", "c#", "sql-server", "search", "full-text-search", "user-input", "" ]
What parts of functionality will I lose after removing the MicrosoftAjax.js and MicrosoftMvcAjax.js files? I don't want to use them for various reasons, and I wondered if there were any 3rd-party helper methods that would mimic the existing ones but are tied to another JavaScript framework, such as jQuery. Also, please tell me where use of the above JavaScript files is preferable/advised. Many thanks, Valentin Vasiliev.
You won't be able to use the AjaxHelper extension methods (in the System.Web.Mvc.Ajax namespace) that are exposed by the Ajax property on the ViewPage class if you don't refer to the MicrosoftAjax scripts. They're relatively easy to replace by using the appropriate jQuery AJAX methods ($.post, $.get, $(selector).load etc) or the jQuery form plugin. As for whether one or another is preferable or not it's down to how comfortable you are with writing your own implementation. The Ajax helper methods try to take care of a lot of things for you in terms of client script programming but that comes at a price of having to load in the pretty heavyweight MS AJAX library. Rolling your own javascript methods can lead to more lightweight code as well as providing you with a choice of javascript library to use (be it jQuery or one of the many others out there).
I agree. Just use jquery. Life is good with .net mvc 3.5 and jquery. cheers
ASP.NET MVC without MicrosoftAjax.js and MicrosoftMvcAjax.js
[ "", "javascript", "asp.net-mvc", "ajax", "" ]
I recall reviewing someone else's PHP code once and he had a function or class method that rolled all GET and POST variables into a single plain old object that could then be passed around. If the same name-value pair appeared in both GET and POST, POST would win. Is there a well-coded PHP add-on out there of any sort that does exactly this?
You could use [`$_REQUEST`](http://php.net/manual/reserved.variables.request.php), but be aware that it contains the contents of `$_GET`, `$_POST` and `$_COOKIE`, and that the presence (and order, for overwriting purposes) of variables in this superglobal is subject to configuration in the execution environment. If this is causing problems, it might be a good idea to roll up a simple version, the crux of which would be something like ``` $requestData = array_merge( $_GET, $_POST ); ``` You can then wrap this up in some fashion, perhaps providing additional useful functionality, e.g. automatically applying `stripslashes()` to values mangled using `magic_quotes`, providing default values for missing variables, etc.
Merging both variables (or using `$_REQUEST` or [Register Globals](http://docs.php.net/manual/en/security.globals.php) instead) can cause security flaws, as you cannot definitely identify the source of the values. When accessing `$_REQUEST['foobar']` you cannot tell whether the value was sent via the URL, the POST body or a cookie. This can make your script vulnerable to [Cross-Site Request Forgery](http://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)). So I recommend going for security over comfort and using those variables where you expect your values to come from: `$_GET` for arguments that are expected to be passed by URL, `$_POST` for those passed via POST and `$_COOKIE` for cookies.
php function for collating external variables into a single object regardless of GET or POST
[ "", "php", "http", "parameters", "add-on", "" ]
I am currently running an application with the following properties: * Java-based with Spring and Acegi * Running on Tomcat 5 I need the ability to support user sessions **without** cookies. Could someone please point me in the right direction. Thank you.
The complete answer to this question is a combination of all your responses, so I'm going to summarize:

1. There is no need to set cookies="false" in the context.xml file. The ideal behavior is for Tomcat to use its URL-based session identification, which will be used by default if cookies are not supported by the user.
2. When a user doesn't have cookies enabled, Tomcat will identify the session by the "JSESSIONID" parameter from the URL of the request. A couple of sample URLs are as follows:

   `http://www.myurl.com;jsessionid=123456AFGT3`

   `http://www.myurl.com;jsessionid=123456AFGT3?param1=value&param2=value2`

   Notice how the session id is not part of the URL query string (this is a J2EE standard).
3. In order to ensure the jsessionid parameter gets appended to all your request URLs, you can't have plain URL references. For example, in JSTL, you have to use `<c:url>`. The servlet engine will then automatically append the jsessionid to the URL if it is necessary. Here's an example:

   ```
   <%-- this is bad: --%>
   <a href="page.html">link</a>

   <%-- this is good: --%>
   <a href="<c:url value='page.html'/>">link</a>
   ```
See <http://tomcat.apache.org/tomcat-5.5-doc/config/context.html>. In a file META-INF/context.xml, ``` <?xml version='1.0' encoding='UTF-8'?> <Context path='/myApplicationContext' cookies='false'> <!-- other settings --> </Context> ```
Supporting Sessions Without Cookies in Tomcat
[ "", "java", "session", "tomcat", "cookies", "" ]
I have a WCF chat service that accepts duplex tcp connections. A single duplex tcp connection can be used to send and receive messages for more than one user (so I can have multiple chat servers that all connect to each other). Now I want to add Web users into the mix, to let them chat with the desktop users. This is for a live-support type thing. Basically I'm trying to find out the best way to do "out of band" communications from ASP.Net to this chat service. I was thinking that I could have a static/global duplex connection to one of the chat servers and I could use that for all requests to that ASP.Net server. Would this work? The duplex connection is ALL one-way calls, can I use this WCF channel without locking access to it? UPDATE: Thanks for your suggestions so far. I should have noted: My chat service is self-hosted, it's not running in IIS. So, I'm mainly concerned with how I can make IIS hold a connection open until the application unloads. The connection from web browser to IIS will be silverlight, flash, ajax, iframes, anything.
Your best bet is to implement a bi-directional message queue at the app level, indexing messages by a user and a session identifier. Then you could have the app level WCF service (aka peer) pop and push based on wait objects. Access to the queue will be need to be locked, but this is relatively low cost. WCF service will do the heavy lifting. At some point, though, I would expect the app to experience bottlenecks if only a single proxy is being used for sending messages. It seems to me that having a dedicated channel proxy per session might be more efficient, thereby keeping things less stateful. I would also allow for non-duplex connections, since all messages are one way operations.
This may not answer your question, but you might be able to have Silverlight do this and use code similar to what your desktop version uses.
Connecting ASP.Net to Wcf/Tcp chat service
[ "", "c#", "asp.net", "wcf", "soap", "service", "" ]
I have a very large XML file which I need to transform into another XML file, and I would like to do this with XSLT. I am more interested in optimisation for memory rather than optimisation for speed (though speed would be good too!). Which Java-based XSLT processor would you recommend for this task? Would you recommend any other way of doing it (non-XSLT? non-Java?), and if so, why? The XML files in question are very large, but not very deep - with millions of rows (elements), but only about 3 levels deep.
**At present there are only three [XSLT 2.0](http://www.w3.org/TR/xslt20/) processors known** and from them [**Saxon 9.x**](http://www.saxonica.com) is probably the most efficient (at least according to my experience) both in speed and in memory utilisation. **[Saxon-SA](http://www.saxonica.com)** (the schema-aware version of Saxon, not free as the B (basic) version) has special extensions for streamed processing. **From the various existing** **[XSLT 1.0](http://www.w3.org/TR/xslt)** processors, .NET **[XslCompiledTransform](http://msdn.microsoft.com/en-us/library/system.xml.xsl.xslcompiledtransform.aspx)** (C#-based, not Java!) seems to be the champion. **In the Java-based world of XSLT 1.0 processors** **[Saxon 6.x](http://www.saxonica.com)** again is pretty good. **UPDATE**: Now, since 2017 **we have XSLT 3.0 and Saxon10/11 is still my choice.**
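Whichever Java processor you pick, it plugs into the standard JAXP `javax.xml.transform` API, so the calling code stays the same; switching to Saxon is usually just a matter of putting its jar on the classpath (or naming its `TransformerFactory` implementation explicitly). A minimal sketch using the JDK's bundled XSLT 1.0 processor, with small in-memory strings standing in for the large files:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltDemo {
    // Runs an XSLT stylesheet over an input document. Both are plain strings
    // here, but StreamSource works just as well with File arguments, which is
    // what you would use for a multi-million-row input.
    static String transform(String xml, String xslt) throws Exception {
        // To use Saxon instead of the JDK's built-in processor, put the Saxon
        // jar on the classpath; the rest of this code is unchanged.
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer transformer =
            factory.newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        transformer.transform(new StreamSource(new StringReader(xml)),
                              new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<rows><row>a</row><row>b</row></rows>";
        String xslt =
            "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>" +
            "<xsl:output method='text'/>" +
            "<xsl:template match='row'><xsl:value-of select='.'/>,</xsl:template>" +
            "</xsl:stylesheet>";
        System.out.println(transform(xml, xslt)); // prints: a,b,
    }
}
```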
You could consider [STX](http://stx.sourceforge.net/), whose Java implementation is [Joost](http://joost.sourceforge.net/). Since it is similar to XSLT, but being a stream processor it is able to process enormous files using very little RAM. Joost is able to be used as a standard javax.xml.transform.TransformerFactory
What is the Most Efficient Java-Based streaming XSLT Processor?
[ "", "java", "xslt", "processor", "" ]
**UPDATE So totally pulled a tool moment. I really meant by reference versus Out/Ref. Anything that says 'ref' I really meant by reference as in** SomeMethod(Object someObject) **Versus** SomeMethod(out someObject) **Sorry. Just don't want to change the code so the answers already make sense.** As far as I understand, unlike out, ref "copies" the pointer and creates a new space on the stack to use that pointer, but won't change the pointer: ``` SomeMethod() { SomeThing outer = new SomeThing(); RefMethod(ref outer) } RefMethod(ref inner) //new space on stack created and uses same pointer as outer { inner.Hi = "There"; //updated the object being pointed to by outer inner = new SomeThing();//Given a new pointer, no longer shares pointer with outer //New object on the heap } ``` Out copies the pointer and can manipulate where it points to: ``` SomeMethod() { SomeThing outer = new SomeThing(); RefMethod(out outer) } RefMethod(out inner) //same pointer shared { inner = new SomeThing();//pointer now points to new place on heap //outer now points to new object //Old object is orphaned if nothing else points to it } ``` That's fine and dandy with objects, but what about value types, seeing as they have nothing to point to, being only on the stack?
Just because the variable lives on the stack (if it's a local variable) doesn't mean you can't create a pointer to it - and indeed that's the case with reference types as well. The pointer within RefMethod is to the "outer" variable - and the variable itself lives on the stack as it's an uncaptured local variable. As Leppie said, ref and out are identical except for the rules on definite assignment - in fact, the only difference in IL is an attribute applied to out parameters. See [my article on parameter passing](http://pobox.com/~skeet/csharp/parameters.html) for more details about ref/out in general.
ref and out are exactly the same, as far as I know, with the exception that an out parameter cannot be initialized. Hence both go on the stack.
C# Out parameter question: How does Out handle value types?
[ "", "c#", "stack", "heap-memory", "out-parameters", "" ]
I have an XML document object that I need to convert into a string. Is there a simple way to do this?
Here's some quick code I pulled out of a library I had nearby. Might wanna dress it up, but it works: ``` import java.io.*; import javax.xml.transform.*; import javax.xml.transform.dom.*; import javax.xml.transform.stream.*; import org.w3c.dom.Document; public String TransformDocumentToString(Document doc) throws TransformerException { DOMSource dom = new DOMSource(doc); StringWriter writer = new StringWriter(); StreamResult result = new StreamResult(writer); TransformerFactory factory = TransformerFactory.newInstance(); Transformer transformer = factory.newTransformer(); transformer.transform(dom, result); return writer.toString(); } ``` edit: as a commenter noticed earlier, I had a syntax error. I had to pull out some sensitive lines so I wouldn't get canned and put them back in the wrong order. thanks! ;-)
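To make the snippet above runnable end to end, here is a self-contained sketch of the same transformer-based approach; the `OMIT_XML_DECLARATION` output property is an optional extra, not part of the original answer (drop it if you want the `<?xml ...?>` header):

```java
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class DocToString {
    // Serialize a DOM Document to a String via an identity transform.
    static String toXmlString(Document doc) throws Exception {
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes"); // optional
        StringWriter writer = new StringWriter();
        t.transform(new DOMSource(doc), new StreamResult(writer));
        return writer.toString();
    }

    public static void main(String[] args) throws Exception {
        // Build a tiny Document from scratch, then round-trip it to a String
        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.newDocument();
        Element root = doc.createElement("greeting");
        root.setTextContent("hello");
        doc.appendChild(root);
        System.out.println(toXmlString(doc)); // <greeting>hello</greeting>
    }
}
```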
You can use Dom4J: ``` OutputFormat format = OutputFormat.createPrettyPrint(); XMLWriter writer = new XMLWriter( System.out, format ); writer.write( document ); ```
is there a simple way to convert my XML object back to String in java?
[ "", "java", "xml", "" ]
What are some recommended WordPress plugins that make building an online user manual more effective? I've been browsing the plugin directory, but was wondering if anyone has already been down this path and could make some recommendations. Edit: Doh! Using a wiki didn't even occur to me until these responses! I started building a site using WordPress and it seems I had a bit of "tunnel vision".
DokuWiki, etc.
MediaWiki is extremely easy to use, and I think it would make a much better platform for a user manual than WordPress. WordPress may excel as a front-end CMS, but it doesn't handle an extensive network of pages very well at all. **EDIT** - I run a couple of different MediaWiki installations, and they are just as easy, if not easier, to maintain than WordPress (which I use for my blog)
WordPress: Useful plugins for building a user manual?
[ "", "php", "wordpress", "documentation", "" ]
This is a compiler error (slightly changed for readability). This one always puzzled me. FxCop says that it is a bad thing to return `List<T>` and that classes derived from `Collection<T>` should be preferred as return types. Also, FxCop says that it is OK to use `List<T>` for internal data storage/manipulation. OK, I get it, but what I don't get is why the compiler complains about trying to implicitly convert `List<T>` to `Collection<T>`. Isn't `List<T>` more interface-charged and functional? Why prohibit implicit conversion? And another question that stems from the above: is the `new List<int>(some collection<int>)` constructor expensive?
`List<T>` doesn't derive from `Collection<T>` - it does, however, implement `ICollection<T>`. That would be a better choice of return type. As for the `new List<int>(some collection<int>)` question - it partly depends on what the collection is. If it implements `ICollection<T>` (at execution time) then the constructor can use its `Count` property to create the list with the right initial capacity before iterating through it and adding each item. If it doesn't implement `ICollection<T>` then it's just equivalent to: ``` List<int> list = new List<int>(); foreach (int x in otherCollection) { list.Add(x); } ``` Still nice to have in a convenient constructor, but not hugely efficient - it can't be, really. I don't believe the constructor does anything cunning for arrays, which it potentially could - using `Array.Copy` or whatever to just copy the lot in one go rather than iterating though. (Likewise if it were another `List<T>` it could get at the backing array and copy that directly.)
Why not just do the following: ``` Collection<string> collection = new Collection<string>(theList); ``` as Collection(IList input) takes a List as part of construction.
Cannot implicitly convert List<T> to Collection<T>
[ "", "c#", "generics", "collections", "" ]
I have an SQL Server 2005 table named 'EventTable' defined as such: EventID, EventTypeCode, EventStatusCode, EventDate Currently the table has a clustered index on the primary key 'EventID'; there are no other indexes. The EventTypeCode and EventStatusCode columns are CHAR(3) (examples are 'NEW', 'SEN', 'SAL') and are foreign keys. Common Selects will be... ``` select * from EventTable Where EventDate = @dateparam; select * from EventTable Where EventTypeCode = @eventtype; select * from EventTable Where EventStatusCode = @statustype; ``` What index strategy would you use to handle the Select statements above? Is it better to have a covering (compound) index on the 3 columns? If so, what order should the compound index be in? Or a separate index on each of the 3 columns? The table will grow at the rate of about 300 events per day. --- It will also be common to execute queries such as ``` where EventDate between '2008-12-01' and '2008-12-31' and EventTypeCode = 'todo' ``` * the table is more likely to grow at 500-800 records per day rather than 300 * the queries mentioned in the initial question will be run many times throughout the day, during normal use of the ASP.NET application * NHibernate 'HQL' is used to perform such queries * there is no initial load of data, the table only sits at about 10K records now because this is a new app * ...I'm more or less just trying to avoid the customer having to call us in a couple of years to complain about the app becoming 'slow' since this table will be hit so much
Strategy 1, provide indexes that can be used for filtering. Table lookups will fetch the remaining data. This almost doubles the use of space and quadruples write IO cost. ``` on EventTable(EventDate) on EventTable(EventTypeCode) on EventTable(EventStatusCode) ``` Strategy 2, provide covering indexes that can be used for filtering. There will be no lookups. This quadruples the use of space and write IO cost. ``` on EventTable(EventDate, EventId, EventTypeCode, EventStatusCode) on EventTable(EventTypeCode, EventId, EventDate, EventStatusCode) on EventTable(EventStatusCode, EventId, EventDate, EventTypeCode) ``` --- The reason that the column order matters in a covering index (in general) is that data is ordered by each column in turn. That is to say: column 2 tie-breaks column 1. Column 3 tie-breaks columns 1 and 2. Since you don't have any queries that filter on multiple columns, there is no significance (in your case) to the column order after the first column. If you had a query such as ``` where EventDate = @EventDate and EventTypeCode = @EventTypeCode ``` Then this covering index would be useful. EventDate is likely more selective than EventTypeCode, so it goes first. ``` on EventTable(EventDate, EventTypeCode, EventId, EventStatusCode) ``` --- Edit further: If you have a query such as ``` where EventDate between '2008-12-01' and '2008-12-31' and EventTypeCode = 'todo' ``` Then this index will work best: ``` on EventTable(EventTypeCode, EventDate, EventId, EventStatusCode) ``` This will put all the 'todo' events together, ordered by their EventDate as a tie-breaker. SQL Server just has to find the first element and read until it finds an element that doesn't meet the criteria, then stop. If EventDate were first in the index, then the data would be ordered by date, and then each date would have the 'todo' events clumped together. SQL Server would find the first todo on 12-01, read until it finds an element that doesn't meet the criteria... then find the first todo on 12-02, read until it's out of todos... then find... and so on for 31 days. You want to choose an index that places the items you want contiguous to each other. --- At 300 records per day, your table will get to 5 million records in 50 years. This isn't that big. Either strategy will work. Strategy 1 will probably be fast enough (err on the side of space).
How frequently do you run selects against the table? Are the selects generally part of normal processing or more towards reporting and/or maintenance and debugging? Is there an initial load of data? If not, the table size is pretty tiny, and likely to remain that way for years to come. Although you give some sample selects, do you know how frequent each type of select will be run? I'd probably just leave the table as it is and run profiler to see how the table is being accessed in production. If it's going to be a table that is accessed constantly and may become a bottleneck for different functions then I'd make a best guess as to which columns will be part of the WHERE clause most frequently and put a single index on that. For example, if there is a process that looks at all events for the past 24 hours that runs every 10 seconds, then an index on the date column might be in order and I would even cluster on that one rather than the primary key.
Indexing strategy on table
[ "", "sql", "sql-server", "sql-server-2005", "indexing", "" ]
A few days ago, I read a question asking how many developers hand code their HTML/XHTML rather than rely on the WYSIWYG tools - <https://stackoverflow.com/questions/406052/do-most-web-programmers-not-designers-use-wysiwyg-editors-or-hand-code-their> I tend to lean towards designing ASP.NET server controls rather than User Controls for use in my code. I do this so that I can reuse them by drag and drop into WYSIWYG and just set the few pertinent properties. This does cost a little extra in time designing the components, but simplifies things a lot when I come to using them in larger applications. After having read that most developers seem to hand code rather than use WYSIWYG, it leads me to wonder: Am I wasting my time developing components this way? **Edit:** To clarify - mostly, the original intent has been that these controls are for my own use. However, there have been a number of occasions when they could be useful for the rest of my team or potentially for public release. However, like most things, I tend to see the huge potential extra value provided by comparatively little extra work regardless of the probability that the extra value will be realized.
No, you are not wasting your time. Your potential user base will be larger if WYSIWYG users can easily use your components. If you are the only user of these components design them so they fit your development style. If you design visually then it makes sense to have WYSIWYG support.
You could create a UserControl and, using different techniques, get it compiled into a dll that can then be referenced by your web applications. Overview of some methods: [Link](https://web.archive.org/web/20200221095437/http://geekswithblogs.net:80/dotnetrodent/archive/2006/06/16/82136.aspx) Detailed method: <http://webproject.scottgu.com/CSharp/UserControls/UserControls.aspx> I never use the WYSIWYG tools because it never truly is WYSIWYG once you factor in JavaScript, CSS, and other things. (I know VS2008 got better, but it's not perfect.) And the designers are always sooo slow. I prefer to code using markup. If you're developing a commercial component that you intend to sell, you should spend the time on having the most complete feature set, IMHO, including WYSIWYG. If you're building components so that you or your team can use them, then you should evaluate the cost/benefit of the time it takes to get your components that extra step.
Am I wasting my time by designing my ASP.NET components for WYSIWYG tools
[ "", "c#", "asp.net", "vb.net", "wysiwyg", "" ]
Does anyone know of an open source PHP class (preferably BSD or MIT license) that will interface with the MS Exchange Server 2007 Web Services via SOAP? I am looking for a higher-level class that has functionality for sending messages via the web service.
I had this same problem, so I started building something, here: <https://github.com/rileydutton/Exchange-Web-Services-for-PHP> It doesn't do much yet (basically just lets you get a list of email messages from the server, and send email), but it would be good enough to use as a basic starting point for doing some more complicated things. I have abstracted out a good bit of the complexity that you would have to slog through using php-ews. If you are looking to do some raw, powerful commands with the server, I would use php-ews...this is for folks who just happen to be working with an Exchange server and want an easy way to do some basic tasks. Oh, and it is MIT licensed. Hope that someone finds it useful!
Here is a class that you need: php-ews (this library makes Microsoft Exchange 2007 Web Services easier to implement in PHP). You can find it at: <http://code.google.com/p/php-ews/> There is only one example, but that should give you the way to implement it. Below you can find an implementation that will: * connect to the server * get the calendar events Note: don't forget to fill in the blank variables. You would also need to include the php-ews class files (I used the \_\_autoload PHP function). ``` $host = ''; $username = ''; $password = ''; $mail = ''; $startDateEvent = ''; //ie: 2010-09-14T09:00:00 $endDateEvent = ''; //ie: 2010-09-20T17:00:00 $ews = new ExchangeWebServices($host, $username, $password); $request = new EWSType_FindItemType(); $request->Traversal = EWSType_FolderQueryTraversalType::SHALLOW; $request->CalendarView->StartDate = $startDateEvent; $request->CalendarView->EndDate = $endDateEvent; $request->CalendarView->MaxEntriesReturned = 100; $request->CalendarView->MaxEntriesReturnedSpecified = true; $request->ItemShape->BaseShape = EWSType_DefaultShapeNamesType::ALL_PROPERTIES; $request->ParentFolderIds->DistinguishedFolderId->Id = EWSType_DistinguishedFolderIdNameType::CALENDAR; $request->ParentFolderIds->DistinguishedFolderId->Mailbox->EmailAddress = $mail; $response = $ews->FindItem($request); echo '<pre>'.print_r($response, true).'</pre>'; ```
Exchange Server 2007 Web Services PHP Class
[ "", "php", "exchange-server", "exchange-server-2007", "exchangewebservices", "" ]
I'm writing an Applescript playlist generator. Part of the process is to read the iTunes Library XML file to get a list of all of the genres in a user's library. This is the python implementation, which works as I'd like: ``` #!/usr/bin/env python # script to get all of the genres from itunes import re,sys,sets,htmlentitydefs ## Boosted from the internet to handle HTML entities in Genre names def unescape(text): def fixup(m): text = m.group(0) if text[:2] == "&#": # character reference try: if text[:3] == "&#x": return unichr(int(text[3:-1], 16)) else: return unichr(int(text[2:-1])) except ValueError: pass else: # named entity try: text = unichr(htmlentitydefs.name2codepoint[text[1:-1]]) except KeyError: pass return text # leave as is return re.sub("&#?\w+;", fixup, text) # probably faster to use a regex than to try to walk # the entire xml document and aggregate the genres try: xml_path = "/Users/%s/Music/iTunes/iTunes Music Library.xml" % sys.argv[1] except: print '\tUsage: python '+sys.argv[0]+' <your OSX username>' raise SystemExit pattern = "<key>Genre</key><string>([^<]+)</string>" try: xml = file(xml_path,'r').read() except: print '\tUnable to load your iTunes Library XML file' raise SystemExit matches = re.findall(pattern,xml) uniques = map(unescape,list(sets.Set(matches))) ## need to write these out somewhere so the applescript can read them sys.stdout.write('|'.join(uniques)) raise SystemExit ``` The problem is, I'd like the Applescript to be self-contained and not require that this additional file be present (I plan on making this available to other people). And, as far as I can tell, Applescript doesn't offer any type of regular expression capabilities out of the box. I could loop over each track in the library to get all of the genres, but this is a prohibitively long process that I already do once when building the playlist. So, I'm looking for alternatives. Since Applescript allows me to run a shell script and capture the results, I imagine that I can accomplish the same behavior using some type of shell command, be it grep, perl, or something else. My *nix command line skills are extremely rusty and I'm looking for some guidance. So, in short, I'd like to find a way to translate the above python code into something I can call directly from the shell and get a similar result. Thanks!
Why are you using regex to parse XML? Why not use a proper XML library? Python has some great utilities like ElementTree that make walking the DOM a lot easier, and it yields nice, friendly objects rather than untyped strings. Here are some ways of parsing XML using Applescript: [Applescript XML Parser](http://lists.apple.com/archives/AppleScript-Studio/2005/Sep/msg00114.html) (Available since Tiger apparently) [XML Tools you can also use with Applescript](http://www.latenightsw.com/freeware/XMLTools2/) Remember, just like Applescript can hook into iTunes, it can hook into other installed utilities like these. Lastly, why not just write the whole thing in Python since it has way better development tools for debugging and runs a lot faster. If you're running Leopard, you have Python 2.5.1 pre-installed.
Is creating a standalone app the solution? Look at py2app: it works like py2exe but targets Mac OS. [See](https://stackoverflow.com/questions/2933/an-executable-python-app)
Translating Python Regexp to Shell
[ "", "python", "applescript", "" ]
Okay, this may be a dumb question, but I've not been able to find any information on it. Are String.Empty and string.Empty the same? I always find myself gravitating towards using the upper case version (String.Empty) because I prefer the color and look of it in my IDE than the lower case version (string.Empty)... Is there a "correct" way to use these that differ or is it entirely down to personal preference? It was my assumption that they're both the same, but to be honest, I never gave it any thought until for whatever reason today I wondered "If they both exist, they must both exist for a reason". *Is* there a reason that anyone knows of? If so, what is it? Can anyone enlighten me? P.S. The "exact duplicates" only answer half of the question - "which is right?", not the "why do they both exist?" --- **Exact Duplicate**: [What is the difference between String and string in C#?](https://stackoverflow.com/questions/7074/in-c-what-is-the-difference-between-string-and-string) **Exact Duplicate**: [String vs string in C#](https://stackoverflow.com/questions/215255/string-vs-string-in-c)
In C#, lower-case type names are aliases for the `System.xxx` type names, e.g. `string` equals `System.String` and `int` equals `System.Int32`. It's best practice to use these language aliases for the type names instead of their framework equivalent, for the sake of consistency. So you're doing it wrong. ;-) As for a reason why they both exist, the .NET types exist because they are defined in a language-independent standard for the .NET libraries called CTS (common type system). Why C# defines these aliases is beyond me (VB does something quite similar). I guess the two reasons are 1. Habit. Get all these C and Java programmers to use C# by providing the same type names for some fundamental types. 2. Laziness: You don't have to import the `System` namespace to use them. **EDIT** Since many people seem to prefer the other notation let me point out that this is by no means unreasonable. A good case can actually be made for the usage of the CTS type names rather than C#'s keywords and some superficially good arguments are offered in the other answers. From a purity/style point of view I would probably concur. However, consider if this is worth breaking a well-established convention that helps to unify code across projects.
It is conceptually similar to something like this: ``` using int=System.Int32 ```
Why do lowercase and uppercase versions of string exist and which should I use?
[ "", "c#", "" ]
I have an application that sends broadcast messages and listens for response packets. Below is the code snippet. ``` m_socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); m_socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.Broadcast, 1); m_socket.Bind(new IPEndPoint(IPAddress.Any, 2000)); m_socket.BeginSendTo( buffer, 0, buffer.Length, SocketFlags.None, new IPEndPoint(IPAddress.Broadcast, 2000), Callback), null ); ``` When I run the application the broadcast message was not being sent. On my machine I have three network adapters. One is my local network adapter and other two are VMWare network virtual adapters. When I run my application I can see (using wireshark network capture) that the broadcast message is being sent from one of the VMWare network adapters. I would like to modify the code so that the broadcast message will be sent from all network adapters on the pc. What is the best way to do that?
You can use the following to get all your IP Addresses (and a lot more). So you can iterate through the list and bind (like Jon B said) to the specific IP you want when you send out your multicast. ``` foreach (var i in System.Net.NetworkInformation.NetworkInterface.GetAllNetworkInterfaces()) foreach (var ua in i.GetIPProperties().UnicastAddresses) Console.WriteLine(ua.Address); ```
When you call Bind(), you are setting the local IP end point. Instead of using IPAddress.Any, use the IP address of the NIC that you want to send from. You'll need to do this separately for each NIC.
Send broadcast message from all network adapters
[ "", "c#", ".net", "sockets", "" ]
What is the best practice for creating a dynamic sidebar or other non-content layout areas with Zend Framework? At the moment I have created a controller which I called WidgetsController. In this controller I defined some actions with a 'sidebar' response segment for my sidebar, and in IndexController I call them with the $this->view->action() function, but I don't think that is the best practice for creating a dynamic sidebar. Thanks for your answers.
Your question doesn't provide many details. Generally, I'd say load the sidebar as a view template, via the render/partial methods of the view. So from inside a view: ``` //$data is dynamic data you want to pass to the sidebar echo $this -> partial('/path/to/sidebar.phtml',array('menuitems' => $data)); ``` And then the sidebar could process that dynamic data: ``` //sidebar.phtml <div id="sidebar"> <?php foreach($this -> menuitems as $item) : ?> <a href="<?php echo $item['url']; ?>"><?php echo $item['title']; ?></a> <?php endforeach; ?> </div> ``` If you need extra functionality, you could create a dedicated [view helper](http://framework.zend.com/manual/en/zend.view.helpers.html) to handle it.
This works for ZF 1.11: [A dynamic sidebar implementation in Zend Framework](http://hewmc.blogspot.com/2010/09/dynamic-sidebar-implementation-in-zend.html).
Best practice creating dynamic sidebar with zend framework
[ "", "php", "zend-framework", "" ]
When the application is run, the `DataGridView` is bound to a `DataTable`. Later I add more columns to the `DataTable` programmatically and it is reflected in the underlying data - i.e. the column Ordinals are as they should be. However this is not reflected in the `DataGridView`. Instead columns are appended onto the originally generated set. This example demonstrates, ``` public partial class Form1 : Form { public Form1() { InitializeComponent(); } public DataTable data = new DataTable(); private void button1_Click(object sender, EventArgs e) { this.dataGridView1.DataSource = data; for (int i = 0; i < 5; i++) { this.data.Columns.Add(i.ToString()); } } private void button2_Click(object sender, EventArgs e) { DataColumn foo = new DataColumn(); this.data.Columns.Add(foo); foo.SetOrdinal(0); } private void button3_Click(object sender, EventArgs e) { foreach (DataColumn tmpCol in this.data.Columns) { Console.WriteLine("{0} : {1}", tmpCol.ColumnName, tmpCol.Ordinal); } } } ``` Button 1 generates the columns, button 2 adds a column and sets the ordinal to 0 so it should be first in the grid, button 3 displays the ordinals of the columns and shows they are how they should be in the `DataTable`.
That is just how `DataGridView` works; with auto-generate columns enabled, extra (unmapped) columns are appended to the end. You can unbind and re-bind to fix it; set the DataSource to null and back to the table: ``` this.dataGridView1.DataSource = null; this.dataGridView1.Columns.Clear(); this.dataGridView1.DataSource = data; ```
I had a similar problem and solved it with the DataGridViewColumn.DisplayIndex property. ``` dgvData.Columns["COLUMN_NAME"].DisplayIndex = 0; // will move your custom column to first position ```
Changing column order in DataTable bound to DataGridView does not reflect in the view
[ "", "c#", ".net", "winforms", "datagridview", "datatable", "" ]