Columns: Prompt (string, 10–31k chars), Chosen (string, 3–29.4k chars), Rejected (string, 3–51.1k chars), Title (string, 9–150 chars), Tags (list, 3–7 items)
Are there free C/C++ and Java implementations of the point-to-point protocol (PPP) for use over a serial line? The C/C++ implementation will go into embedded hardware so portability is a concern. I'm not looking for a full TCP/IP stack, just something to provide a connection-oriented base to build on top of.
The one used in most Linux-based systems is in C and is named [Paul's PPP Package](http://ppp.samba.org/). FreeBSD has a completely different one, also written in C, but I have no experience with it.
On Linux I use `chat` and `pppd` for PPP over a serial line (e.g. adding a GPRS modem to get an internet connection).
Free C/C++ and Java implementations of PPP?
[ "", "java", "c", "ppp", "" ]
I am working on a project that has me constantly pinging a PHP script for new data, so if I understand this correctly, that means the PHP script being pinged gets run over and over indefinitely. It works, but I'm guessing it's a huge strain on the server, and is probably considered ugly and bad practice. Am I right about that? Is there any way I could keep the connection to the script alive and make use of PHP's built-in output buffering to flush the contents I need, keeping the script running indefinitely in some sort of loop so that when new data is available it can be output? Is this a bad idea as well? I'm just looking for input from developers out there with more experience. One last thing... are there any other ways to keep a constant flow of data going (excluding technologies such as Flash or Silverlight)?
If what you have currently works and continues to work when tested against the kind of load you might expect in this application, it is not really considered bad practice. It is not a crime to keep it simple if it works. Anything that does what you are describing is going to go against the grain of the original model of the web, so you're venturing into shaky territory. I do recommend you check out the [Comet](http://www.google.com/search?q=comet+programming) technique. It is mostly popular for the inverse of what you want - the server pushing information to a page continuously - but it can obviously work both ways. Although your mileage may vary, I've heard good things. As Wikipedia describes it: > In web development, Comet is a neologism to describe a web application model in which a long-held HTTP request allows a web server to push data to a browser, without the browser explicitly requesting it. Comet is an umbrella term for multiple techniques for achieving this interaction. All these methods rely on features included by default in browsers, such as Javascript, rather than on non-default plugins.
It almost seems like PHP wouldn't be the best choice of language for this. Possibly consider something like Scala or Erlang, which are set up to handle this type of long-lived messaging better.
Keeping a live connection with php?
[ "", "php", "performance", "keep-alive", "output-buffering", "" ]
I am trying to create a checkbox dynamically using following HTML/JavaScript. Any ideas why it doesn't work? ``` <div id="cb"></div> <script type="text/javascript"> var cbh = document.getElementById('cb'); var val = '1'; var cap = 'Jan'; var cb = document.createElement('input'); cb.type = 'checkbox'; cbh.appendChild(cb); cb.name = val; cb.value = cap; cb.appendChild(document.createTextNode(cap)); </script> ```
You're trying to put a text node inside an input element. Input elements are empty and can't have children. ``` ... var checkbox = document.createElement('input'); checkbox.type = "checkbox"; checkbox.name = "name"; checkbox.value = "value"; checkbox.id = "id"; var label = document.createElement('label') label.htmlFor = "id"; label.appendChild(document.createTextNode('text for label after checkbox')); container.appendChild(checkbox); container.appendChild(label); ```
The last line should read ``` cbh.appendChild(document.createTextNode(cap)); ``` This appends the text (label?) to the same container as the checkbox, not to the checkbox itself.
Creating the checkbox dynamically using JavaScript?
[ "", "javascript", "html", "checkbox", "" ]
I have the following piece of code in my DataModel.cs class: ``` public User ValidateUser(string Username, string Password) { DataContext db = new DataContext(); var query = from u in db.Users where (u.Username == Username && u.Password == Password) select u; if (query.Count() != 0) { return query.First(); } else { return new User { UserID = -1 }; } } ``` I have a login class calling this on my DataModel, checking to see if a user is valid. If the user is valid then I want to have a reference to that "logged-in" user in my login class (which is why ValidateUser returns a user object). Right now, if that user doesn't exist or the password is wrong, I simply return an empty User object with the id set to -1 to flag it as not being a real user. Is this bad practice? How should I handle this case? If ValidateUser just returned a bool it wouldn't be a problem, but because I also need a reference to that user I don't know how else to do it.
IMO, this is bad practice, because it is implicit. If nothing is found, the method should return null. This is self-explanatory. Or, if the case needs to be handled completely differently from the normal case, or needs additional information, you could consider throwing an exception.
I would probably just return null, but if you want a sentinel value, at least make it a value on the User object: ``` public class User { // ... User class goes here public static readonly User NullUser = new User(); } public User ValidateUser(string Username, string Password) { DataContext db = new DataContext(); var user = db.Users.FirstOrDefault(u => u.Username == Username && u.Password == Password); return user ?? User.NullUser; } ``` With the sentinel value at this level you don't have to look inside the result to check if it's invalid or not. This is a version of the [null object pattern](http://en.wikipedia.org/wiki/Null_Object_pattern).
Is it considered bad practice to use a sentinel value of "-1" in a returned object if a query returned no values?
[ "", "c#", "" ]
I have a three-dimensional array that I want to reset to zero. It seems that there should be an easy way to do this that doesn't involve three `for` loops: ``` for (int i = 0; i < n; i++) { for (int j = 0; j < n; j++) { for (int k = 0; k < n; k++) { cube[i][j][k] = 0; } } } ```
If you are using JDK 1.5 or higher: ``` for (int[][] square : cube) { for (int[] line : square) { Arrays.fill(line, 0); } } ```
An array is filled with zeros after initialization: ``` int[][][] cube = new int[10][20][30]; ``` You can also do this later to reset the array to zero; it is not limited to the declaration: ``` cube = new int[10][20][30]; ``` Simply create a new array, and it is initialized with zeros. This works if you have one place holding the reference to the array. Don't worry about the old array; it will be garbage collected. If you don't want to depend on this behavior of the language, or you can't replace all occurrences of references to the old array, then you should go with [Arrays.fill()](http://java.sun.com/javase/6/docs/api/java/util/Arrays.html#fill(int[],%20int)) as jjnguy mentioned: ``` for (int i = 0; i < cube.length; i++) { for (int j = 0; j < cube[i].length; j++) { Arrays.fill(cube[i][j], 0); } } ``` Arrays.fill uses a loop on the inside too, but it looks generally more elegant.
What's the best way to set all values of a three-dimensional array to zero in Java?
[ "", "java", "arrays", "multidimensional-array", "" ]
I have a Web service and a Web site (both C#) in the same solution (For now); I also have a class library in the solution. Both the web service and the web site reference this class library. The web service has a WebMethod that creates an object from the library and returns it. The website invokes this and attempts to put it into a Trainer object (once again, from the same library) ``` ProFitWebService.Service serviceConn = new ProFitWebService.Service(); ProFitLibrary.Trainer authenticatedTrainer = (ProFitLibrary.Trainer)serviceConn.GetAuthenticatedTrainer(_TrainerLogin.UserName); ``` however the following occurs: "Cannot convert type ProFitWebService.Trainer to ProFitLibrary.Trainer" Here is the WebMethod: ``` [WebMethod] public ProFitLibrary.Trainer GetAuthenticatedTrainer(string email) { ProFitLibrary.Trainer returnTrainer = new ProFitLibrary.Trainer(); SqlCommand cmd = new SqlCommand("SELECT * FROM Trainers WHERE EmailAddress = '" + email + "'", conn); conn.Open(); SqlDataReader reader; reader = cmd.ExecuteReader(); while (reader.Read()) { returnTrainer.TrainerId = reader.GetInt32(reader.GetOrdinal("TrainerId")); returnTrainer.FirstName = reader.GetString(reader.GetOrdinal("FirstName")); returnTrainer.LastName = reader.GetString(reader.GetOrdinal("LastName")); returnTrainer.PhoneNumber = reader.GetString(reader.GetOrdinal("PhoneNumber")); returnTrainer.Address = reader.GetString(reader.GetOrdinal("Address")); returnTrainer.City = reader.GetString(reader.GetOrdinal("City")); returnTrainer.PostalCode = reader.GetString(reader.GetOrdinal("PostalCode")); returnTrainer.EmailAddress = reader.GetString(reader.GetOrdinal("EmailAddress")); } return returnTrainer; } ``` **Update:** Changing the Trainer object to ProFitWebService.Trainer on the Web site fixed the issue: ``` ProFitWebService.Service serviceConn = new ProFitWebService.Service(); ProFitWebService.Trainer authenticatedTrainer = 
(ProFitWebService.Trainer)serviceConn.GetAuthenticatedTrainer(_TrainerLogin.UserName); ``` I think the answer to this is simply that library objects returned from a Web Service will always be typed based on/prefixed with the service - and I should not reference the class library from both the website and the service - I should just always create the WebService version of the object - ProFitWebService.Trainer etc. Could someone confirm this as standard practice when you're using libraries within a web service? Or tell me if I'm making this more difficult than it really is!
When creating the web reference to your web service you will get proxy classes generated for you. These proxy classes look like the classes from your library, but they are not the same types. You will need to have some method translating between the library version and the proxy versions of these types if you want to treat objects returned from the web service as types from your shared library.
This type of thing can happen when using reflection, when the actual assemblies are different - either different versions, compile-time or even sometimes when using a different copy of the assembly. I'm not sure if your code uses reflection or not though...
C# Web Service and Web Site sharing library, service returns different "type" of library object
[ "", "c#", "class-library", "web-services", "" ]
How could the following code sometimes evaluate to false? ``` (transport.responseText == '1' || transport.responseText == 'CARD_VALID') ``` My JavaScript code: ``` if (transport.responseText == '1' || transport.responseText == 'CARD_VALID') { // do something. } else if (transport.responseText == 'CARD_INVALID' || transport.responseText == 'INVALID_CHECKSUM') { // do something else.... } else { new Ajax.Request('/report_error.php?responseText='+transport.responseText); // report error to user } ``` What could cause JavaScript string compare `==` to return false when the strings are identical?
Double equals is an appropriate way to compare strings in JavaScript; if it is returning false, there may be whitespace to the left and/or right of one string. Put a `.trim()` on the end of the strings and the comparison should start working: ``` var panel = response.substr(0, response.indexOf("<")).trim(); if (panel == "combo") { // do something } ```
``` A1 = "speed" A2 = "speed" if (A1 == A2) // => fails! ``` Use this test in the console: ``` escape("speed") ``` > result: "speed" ``` escape(A1) ``` > result: "speed%0D" => the trailing %0D (a carriage return) is the problem! ``` escape(A2) ``` > result: "speed" => OK! In that case, stripping the stray character makes the comparison work: ``` if (A1.slice(0, -1) == A2) // OK ```
Javascript String Compare == sometimes fails
[ "", "javascript", "string", "compare", "" ]
Simple setup: Shopping Cart in the form of a data list. Header has two buttons: Update Quantities and Remove Selected Items Each DataList Item has (among other things) a textbox with the id="txtQty" that stores the quantity of the shopping cart item. The user should be able to change the values in these textboxes and then hit the Update Quantities button in the header to save all the changes at one time. I have the logic done for the actual saving, but now I am going back to add some extra validation. I was hoping to call a single JavaScript function from the OnClientClick event of the UpdateQuantities button, which would then iterate through all the DataListItems, find the txtQty textbox, and make sure it is valid numerical input. I am having a bit of difficulty figuring out the best way of going about this. So far my only real idea would be to loop through the form elements and find anything with an id that matches "txtQty" (since ASP.NET rewrites the id's automatically), and then validating that specific element. This doesn't sound like the best solution. Are there any better ideas floating around out there?
I ended up using a bit of LongHorn's method to get my desired results. Using a custom validator, I used its client-side validation ability to call a JS function which checked the individual textbox on the text-change event. It wasn't my ideal solution, but it was useful. I also have server-side validation occurring for the whole list before any processing occurs, just in case.
Why not put a validator on your DataGrid to check to see if the qty is a valid number? This would be the simplest by far. If that isn't a solution, I would have all those txtQty textboxes have the same css class. Then you can use JQuery to find all the elements with that class name and loop through them. Which is far better than looping through the entire form and checking if the id of the element contains 'txtQty' in its id. Another way, is to have a hidden field that would have all ids of the textboxes you want to check. You would add to this hidden field as the text boxes are added. Then just break out the hidden field in an id array, and find just those ids.
ASP.NET DataList - JavaScript Validation Of All Textboxes
[ "", "asp.net", "javascript", "validation", "datalist", "" ]
I've been having crash problems due to heap problems, so I guess a buffer overflow is happening somewhere. How do I detect it?
I use [Compuware BoundsChecker](http://www.compuware.com/products/devpartner/visualc.htm) as a tool for error detection in VC6. It can pick up buffer overruns. You can also try [PageHeap](http://support.microsoft.com/kb/286470), a tool for Windows.
I use [Rational Purify](http://www-01.ibm.com/software/awdtools/purify/) for this.
Are there any tools to detect buffer overflow on Visual C++ 6.0?
[ "", "c++", "visual-c++", "" ]
I have basically this on a page: ``` <script type="text/javascript"> function refresh_context() { $("#ajax-context").html("Searching..."); $.get("/ajax/ldap_search.php", {cn: $("#username").val()}, function(xml) { $("#ajax-context").html($("display", xml).text()); $("#context").val($("context", xml).text()); }, 'xml'); } $(document).ready(function() { $("#username").blur(refresh_context); }); </script> <input type="text" name="username" id="username" maxlength="255" value="" /> <input type="hidden" name="context" id="context" value=""/> <div id="ajax-context"></div> ``` What it should do (and does fine on Firefox) is when you type a username into the #username field, it will run /ajax/ldap\_search.php?cn=$username, which searches our company's LDAP for the username and returns its raw context and a formatted version of the context like this: ``` <result> <display>Staff -&gt; Accounting -&gt; John Smith</display> <context>cn=jsmith,ou=Accounting,ou=Staff,ou=Users,o=MyOrg</context> </result> ``` The formatted version (display) goes to the div #ajax-context and the raw version (context) goes to the hidden input #context. (Also, the -> are actually - "& g t ;" (without spaces)). However, on IE the div stays stuck on "Searching..." and the hidden input value stays blank. I've tried both .get and .post and neither works. I'm sure it's failing on the .get because if I try this, I don't even get the alert: ``` $.get("/ajax/ldap_search.php", {cn: $("#username").val()}, function() { alert("Check"); }); ``` Also, IE doesn't give me any script errors. Edit: Added "$(document).ready(function() {"; the .blur was already in my code, but I forgot to include that in my post. Edit 2: The request is being sent and apache2 is receiving it: ``` 10.135.128.96 - - [01/May/2009:10:04:27 -0500] "GET /ajax/ldap_search.php?cn=i_typed_this_in_IE HTTP/1.1" 200 69 ```
Problem was in the ldap\_search.php file. I had this (based on an example I read on someone's blog): ``` header("content-type:application/xml-xhtml;charset=utf-8"); ``` It actually needed to be this for IE to read it properly: ``` header("content-type:application/xml;charset=utf-8"); ``` God, I hate IE.
Try changing: ``` $("#username").blur(refresh_context); ``` To: ``` $(function(){ $("#username").blur(refresh_context); }); ``` This will hold off on assigning the `blur` event until the entire page is loaded. ### Edit: Could it be the use of `>` in the text of the XML?
jquery .get/.post not working on ie 7 or 8, works fine in ff
[ "", "javascript", "jquery", "ajax", "" ]
I have a form with one input for email and two submit buttons to subscribe and unsubscribe from a newsletter: ``` <form action="" method="post"> {{ form_newsletter }} <input type="submit" name="newsletter_sub" value="Subscribe" /> <input type="submit" name="newsletter_unsub" value="Unsubscribe" /> </form> ``` I also have a form class: ``` class NewsletterForm(forms.ModelForm): class Meta: model = Newsletter fields = ('email',) ``` I must write my own clean\_email method and I need to know by which button the form was submitted. But the values of the submit buttons aren't in the `self.cleaned_data` dictionary. Could I get the values of the buttons otherwise?
You can use `self.data` in the `clean_email` method to access the POST data before validation. It should contain a key called `newsletter_sub` or `newsletter_unsub` depending on which button was pressed. ``` # in the context of a django.forms form def clean(self): if 'newsletter_sub' in self.data: # do subscribe elif 'newsletter_unsub' in self.data: # do unsubscribe ```
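As a minimal sketch of that dispatch logic (using a plain dict to stand in for Django's `request.POST`/`form.data`, so it runs without Django; the `newsletter_action` helper is just for illustration):

```python
def newsletter_action(data):
    """Decide which action was requested based on which submit button's
    name is present in the POST data (a plain dict here for illustration).
    Browsers only send the name/value pair of the button that was clicked."""
    if 'newsletter_sub' in data:
        return 'subscribe'
    elif 'newsletter_unsub' in data:
        return 'unsubscribe'
    raise ValueError('no known submit button in POST data')

print(newsletter_action({'email': 'a@example.com', 'newsletter_sub': 'Subscribe'}))      # subscribe
print(newsletter_action({'email': 'a@example.com', 'newsletter_unsub': 'Unsubscribe'}))  # unsubscribe
```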
Eg: ``` if 'newsletter_sub' in request.POST: # do subscribe elif 'newsletter_unsub' in request.POST: # do unsubscribe ```
How can I build multiple submit buttons django form?
[ "", "python", "django", "button", "django-forms", "submit", "" ]
I recently moved my ASP.NET application from Windows 2003 / IIS 6 to IIS 7. No other changes, but now the file upload for FCKeditor doesn't work anymore. Does anyone know the obvious mistake I made here? :) Thanks
The most likely problem is that the permissions need to be updated on the target folder. Check to make sure IUSR has create / write permissions to the upload directory.
Maybe this will help. I couldn't get it to work either. I had all the permissions set. By debugging, I found out that frmupload.html did not have execute access on the isapi.dll. In IIS 7.0 I went to the web site on the left side and highlighted it. Then on the right pane, I clicked on handler mappings. I noticed that ISAPI and CGI were disabled at the top. I looked below and saw all the enabled handlers. I also noticed that there was not one for \*.html but there were ones for \*. Anyway, I right-clicked anywhere in the lower pane where the enabled handlers are and I got a shortcut menu. EDIT FEATURE PERMISSIONS is the option you want to click on. Then you will see checkboxes for read, script, and execute. I noticed execute was not checked, so I checked it. Now ISAPI and CGI became enabled in the list. I tried uploading with FCKeditor and it worked. Just make sure you're uploading the right file type to the right area or you might get an invalid file or invalid file type message. HOWEVER, I noticed I keep getting a new error: a SYS is undefined error message on my web pages. It's a JavaScript error that usually happens when it can't find something. There are tons of reasons why you might get this error message if you google for it. In this case it was because I used Vista IIS 7.0 to enable ISAPI with execute permissions. It went into my config file and made the correct setting change for enabling execute permission; however, it erased all my handler settings! I took a backup copy of my web.config and manually re-added the settings. I think maybe it's better to manually edit the web.config file instead of letting IIS 7 do it, because it will work but it might erase some of your settings. Here is part of my web.config file on the Vista IIS 7 web server, before and after: ## WebConfig BEFORE: enabling execute on cgi and isapi in VISTA IIS 7.0 web.config ``` <!-- The system.webServer section is required for running ASP.NET AJAX under Internet Information Services 7.0. 
It is not necessary for previous version of IIS. --> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules> <remove name="ScriptModule"/> <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </modules> <handlers> <remove name="WebServiceHandlerFactory-Integrated"/> <remove name="ScriptHandlerFactory"/> <remove name="ScriptHandlerFactoryAppServices"/> <remove name="ScriptResource"/> <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </handlers> </system.webServer> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/> </dependentAssembly> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/> </dependentAssembly> </assemblyBinding> </runtime> ``` AFTER: notice the ## is added and all my handlers disappear. I just readded them and it worked. 
``` <!-- The system.webServer section is required for running ASP.NET AJAX under Internet Information Services 7.0. It is not necessary for previous version of IIS. --> <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <modules> <remove name="ScriptModule" /> <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </modules> <handlers accessPolicy="Read, Execute, Script"> </handlers> </system.webServer> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35" /> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0" /> </dependentAssembly> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35" /> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0" /> </dependentAssembly> </assemblyBinding> </runtime> ``` --- I am on a development machine VISTA so I didn't consider any security issues for opening up Execute. Hope this helps.
Permission Denied error when uploading files in FCKEditor
[ "", "c#", "asp.net", "iis", "iis-7", "fckeditor", "" ]
Here is a complete example. I want to forbid using A::set from objects casted from B to A by allowing only casting B to const A. How to do it? (I can't use virtual functions) ``` #include <iostream> #include <cassert> using namespace std; class A { public: int get() const { return i_; } void set(int i) { i_ = i; } protected: int i_; }; class B : public A { public: int ok() const { return A::get() == copy_i_; } void set(int i) { A::set(i); copy_i_ = i; } protected: int copy_i_; }; void test2() { A a; a.set(3); // ok here cout << a.get() << endl; B b; b.set(5); A& aa = b; assert(b.ok()); aa.set(3); // not ok here assert(b.ok()); // fail-here } int main() { test2(); return 0; } ```
You could make the inheritance private and provide a member function in B to use instead of casting. ``` const A& B::convert_to_A() const { return *this; } ```
Why casting? Making void A::set(int i) protected will work in your case.
Is there a way to forbid casting to subclass that is non-const in C++?
[ "", "c++", "inheritance", "constants", "" ]
I would like to use Python's [JSON](http://docs.python.org/library/json.html) module. It was only introduced in Python 2.6 and I'm stuck with 2.5 for now. Is the particular JSON module provided with Python 2.6 available as a separate module that can be used with 2.5?
You can use [simplejson](http://pypi.python.org/pypi/simplejson). As shown by [the answer](https://stackoverflow.com/a/2119597) from [pkoch](https://stackoverflow.com/users/5128/pkoch) you can use the following import statement to get a json library depending on the installed Python version: ``` try: import json except ImportError: import simplejson as json ```
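Once aliased to `json`, both modules expose the same `dumps`/`loads` API, so code written against it runs unchanged on 2.5 (with simplejson installed) and 2.6+. A quick round-trip sketch:

```python
try:
    import json  # stdlib on Python 2.6+
except ImportError:
    import simplejson as json  # separate install on Python 2.5

data = {"name": "widget", "sizes": [1, 2, 3], "in_stock": True}
text = json.dumps(data)          # serialize to a JSON string
assert json.loads(text) == data  # parse it back to the same structure
```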
To Wells and others: > Way late here, but how can you write a script to import either json or simplejson depending on the installed python version? Here's how: ``` try: import json except ImportError: import simplejson as json ```
Which JSON module can I use in Python 2.5?
[ "", "python", "json", "" ]
Here's my proposed (very simplified to illustrate the problem space) design for a C# console application. The database connections implement IDisposable, and this solution doesn't allow for `using` the database connection objects. Can someone propose a more correct structure for a console application? This is a problem I need to solve often. ``` class Program { SQLiteConnection sourceConnection; SQLiteConnection destinationConnection; static void Main(string[] args) { Program shell = new Program(); // get connection strings from command line arguments string sourceConnectionString = shell.getConnectionString(args); string destinationConnectionString = shell.getConnectionString(args); // call non-static methods that use shell.setUpConnections(sourceConnectionString, destinationConnectionString); shell.doDatabaseWork(); } private void setUpConnections(string sourceConnectionString, string destinationConnectionString) { sourceConnection = new SQLiteConnection(sourceConnectionString); destinationConnection = new SQLiteConnection(destinationConnectionString); } private void doDatabaseWork() { // use the connections here } } ``` **Edit:** Some people can't figure out why I'd want them as member variables. Here's my use case (a little pseudocoded) of what would go in doDatabaseWork: ``` foreach (Row sourceRow in DBResultSet) { string sourceXml = sourceRow.Columns["MyColumnName"].Value; string destinationXML = transformUsingXSLT(sourceXml); writeToDestination(destinationXml); } ``` See how I'd want to keep these connections open for the life of this loop?
I think that the best solution is to extract main logic from Program class. The Program class is some kind of starter for primary work. And providing wrappers for SqlConnections is not a good idea indeed, because they are managed resources already, wrapping them is redundant. Thus my solution looks like this: ``` class ProgramCore : IDisposable { internal ProgramCore(string sourceConnectionString, string destinationConnectionString) { setUpConnections(sourceConnectionString, destinationConnectionString); } internal void Execute() { // do whatever you want doDatabaseWork(); // do whatever you want } public void Dispose() { if (_sourceConnection != null) _sourceConnection.Dispose(); if (_destinationConnection != null) _destinationConnection.Dispose(); } private void setUpConnections(string sourceConnectionString, string destinationConnectionString) { _sourceConnection = new SQLiteConnection(sourceConnectionString); _destinationConnection = new SQLiteConnection(destinationConnectionString); } private void doDatabaseWork() { // use the connections here } private SQLiteConnection _sourceConnection; private SQLiteConnection _destinationConnection; } class Program { static void Main(string[] args) { // get connection strings from command line arguments string sourceConnectionString = GetConnectionString(args); string destinationConnectionString = GetConnectionString(args); using (ProgramCore core = new ProgramCore(sourceConnectionString, destinationConnectionString)) { core.Execute(); } } static string GetConnectionString(string[] args) { // provide parsing here } } ```
How about writing a class that implements IDisposable. Inside your class constructor, you can instantiate your DB connections. Then inside your IDisposable.Dispose Method, you write your tear down code for closing your DB connections. Here is a code sample to demonstrate what I mean: ``` public class DBWrapper : IDisposable { public SqlConnection Connection1 { get; set; } public SqlConnection Connection2 { get; set; } public DBWrapper() { Connection1 = new SqlConnection(); Connection1.Open(); Connection2 = new SqlConnection(); Connection2.Open(); } public void DoWork() { // Make your DB Calls here } public void Dispose() { if (Connection1 != null) { Connection1.Dispose(); } if (Connection2 != null) { Connection2.Dispose(); } } } ``` And then, from within your main method of your Program class: ``` class Program { static void Main(string[] args) { using (DBWrapper wrapper = new DBWrapper()) { wrapper.DoWork(); } } } ```
How do I structure a C# console application to efficiently use IDisposable database resources?
[ "", "c#", "sqlite", "database-connection", "console-application", "" ]
I need a regex that will parse something like: ``` "2 * 240pin" ``` where the \* can be either the regular star, the unicode char \u00d7, or just an x. This is what I have but it's not working: ``` multiple= r'^(\d+)\s?x|*|\\u00d7\s?(\d+)(\w{2,4})$' multiplepat= re.compile(multiple, re.I) print multiplepat.search(u'1 X 240pin').groups() ``` returns ``` multiplepat= re.compile(multiple, re.I) File "C:\Python26\lib\re.py", line 188, in compile return _compile(pattern, flags) File "C:\Python26\lib\re.py", line 243, in _compile raise error, v # invalid expression error: nothing to repeat ```
You need to escape the `*` as it is a quantifier in the context you use it. But you could also use a character class. So try this: ``` ur'^(\d+)\s?[x*\u00d7]\s?(\d+)(\w{2,4})$' ```
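A quick check of the character-class version (written here for modern Python 3, where string literals are Unicode by default and the `ur` prefix is gone):

```python
import re

# x (case-insensitive via re.I), a literal * (no escaping needed inside a
# character class), or the Unicode multiplication sign U+00D7
multiplepat = re.compile(r'^(\d+)\s?[x*\u00d7]\s?(\d+)(\w{2,4})$', re.I)

print(multiplepat.search('1 X 240pin').groups())       # ('1', '240', 'pin')
print(multiplepat.search('2 * 240pin').groups())       # ('2', '240', 'pin')
print(multiplepat.search('2 \u00d7 240pin').groups())  # ('2', '240', 'pin')
```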
``` multiple= r'^(\d+)\s[xX\*\\u00d7]\s?(\d+)(\w{2,4})$' ```
python regular exp. with a unicode char
[ "", "python", "regex", "" ]
What is the best approach to building a small (but scalable) application that works with SQL Server or Oracle? I'm interested in building apps that support multiple databases, and in the process behind the feature.
Using an [ORM](http://en.wikipedia.org/wiki/Object-relational_mapping) that supports multiple databases is the first step here. You could look at either [NHibernate](http://nhforge.org/Default.aspx) or [Entity Framework](http://msdn.microsoft.com/en-us/library/aa697427(VS.80).aspx) for example - both have Oracle and SQL Server support. That way you should just have to swap out the database mappings to get the application to work on either DBMS. Edit - thanks to tvanfosson, added the 'new' link for NHibernate.
In addition to the ORM comments: sometimes life is not that simple. You must keep separate scripts for generating your tables, views, and stored procedures on both systems, as they will differ. You may need to do something tricky for performance reasons that is specific to one database platform, for example making a new partition in Oracle. You should try to handle that at this level by encapsulating it in a view or stored procedure. Your client code can then call the stored procedure with the same signature on any database. You can write a stored procedure that does nothing or lots depending on what that database requires.
Application that works in both SQL Server And Oracle Databases
[ "c#", "sql-server", "wpf", "database", "oracle" ]
Consider this code: ``` using Microsoft.Office.Interop.Word; ApplicationClass _application = new ApplicationClass(); ``` Can I get the PID from the Winword.exe process that was launched by the \_application? I need the PID because with corrupted files, I just can't quit the ApplicationClass, even using this code: ``` _application.Quit(ref saveFile, ref missing, ref missing); System.Runtime.InteropServices.Marshal.ReleaseComObject(_application); GC.Collect(); GC.WaitForPendingFinalizers(); ``` I can't search for the winword.exe process and kill it, because I will have several, and I don't know which one to kill. If I can get a PID for each ApplicationClass, I could just kill the correct winword.exe process that is giving me troubles to quit.
Here is how to do it. ``` // Set the AppId string AppId = "" + DateTime.Now.Ticks; // a unique caption // Create an identity for the app this.oWordApp = new Microsoft.Office.Interop.Word.ApplicationClass(); this.oWordApp.Application.Caption = AppId; this.oWordApp.Application.Visible = true; while (GetProcessIdByWindowTitle(AppId) == Int32.MaxValue) // loop till the window appears { Thread.Sleep(5); } /// Get the pid for the Word application this.WordPid = GetProcessIdByWindowTitle(AppId); /// You can hide the application afterwards this.oWordApp.Application.Visible = false; /// <summary> /// Returns the id of the process whose main window has the given title. /// </summary> /// <param name="AppId">Int32.MaxValue is returned if it can't be found.</param> /// <returns></returns> public static int GetProcessIdByWindowTitle(string AppId) { Process[] P_CESSES = Process.GetProcesses(); for (int p_count = 0; p_count < P_CESSES.Length; p_count++) { if (P_CESSES[p_count].MainWindowTitle.Equals(AppId)) { return P_CESSES[p_count].Id; } } return Int32.MaxValue; } ```
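Stripped of the Win32 specifics, the trick in this answer is generic: stamp the new process with a unique token (the window caption), then poll the process list until that token appears. A Python sketch of just that polling skeleton, with a fake process list standing in for `Process.GetProcesses()`:

```python
import itertools
import time

def find_pid_by_window_title(title, list_windows, timeout=5.0, interval=0.005):
    """Poll list_windows() -- a callable returning (pid, caption) pairs --
    until the unique caption we stamped on the application shows up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for pid, caption in list_windows():
            if caption == title:
                return pid
        time.sleep(interval)
    raise TimeoutError(f"no window titled {title!r}")

# Simulated process table: the window only appears on the third poll.
polls = itertools.count()
fake_windows = lambda: [(4242, "unique-caption")] if next(polls) >= 2 else []

pid = find_pid_by_window_title("unique-caption", fake_windows)
```

The C# answer is exactly this loop, with the caption set via `Application.Caption` and the lookup done over `MainWindowTitle`.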
There may be some error in the Word file. As a result, when you open the file with the method `Word.ApplicationClass.Documents.Open()`, there will be a dialog shown and the process will hang. Use `Word.ApplicationClass.Documents.OpenNoRepairDialog()` instead. I found it fixed the problem.
Get PID from MS-Word ApplicationClass?
[ "c#", "ms-word", "pid" ]
I have a library that contains a bunch of static `*.lib` files, and I wish to access them from JNA (a Java library that allows one to dynamically call DLLs from Java code), so is there a way to magically change a static lib into a DLL? The code was compiled using Visual Studio (hope that is relevant), and I also have the appropriate header files. I do not have access to the source code, and I would like to do it using only free (as in beer) tools.
I did as anon suggested, I did an automatic converter (someone suggested just putting \_ddlspec(export) before declaration and compiling dll with this header would work -- well it didn't -- maybe i did something wrong -- I'm Plain Old Java Programmer ;)): it basically parses header files and turns: ``` SADENTRY SadLoadedMidFiles( HMEM, USHORT usMaxMidFiles, VOID * ); ``` to: ``` __declspec(dllexport) SADENTRY DLL_WRAPPER_SadLoadedMidFiles(HMEM param0, USHORT usMaxMidFiles, VOID* param2){ return SadLoadedMidFiles(param0, usMaxMidFiles, param2); } ``` Here is the code (most probably its Regexp abuse, but it works), gui part depends on MigLayout: ``` package cx.ath.jbzdak.diesIrae.util.wrappergen; import net.miginfocom.swing.MigLayout; import javax.swing.*; import static java.awt.GraphicsEnvironment.getLocalGraphicsEnvironment; import java.awt.*; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.io.*; import java.nio.charset.Charset; import java.util.ArrayList; import java.util.Collections; import java.util.List; import java.util.regex.Matcher; import java.util.regex.Pattern; /** * Displays a window. In this window you have to specify two things: * <p/> * 1. Name of header file that you want to process. * <p/> * 2. Name of output files extension will be added automatically. We will override any existing files. * * <p/> * Dependencies: MigLayout * <p/> * Actual wrapper generation is done inside WrapperGen class. * <p/> * KNOWN ISSUES: * <p/> * 1. Ignores preprocessor so may extract finction names that are inside <code>#if false</code>. * <p/> * 2. Ignores comments * <p/> * 3. May fail to parse werid parameter syntax. . . * * Created by IntelliJ IDEA. 
* User: Jacek Bzdak */ public class WrapperGenerator { public static final Charset charset = Charset.forName("UTF-8"); WrapperGen generator = new WrapperGen(); // GUI CODE: File origHeader, targetHeader, targetCpp; JTextField newHeaderFileName; JFrame wrapperGeneratorFrame; { wrapperGeneratorFrame = new JFrame(); wrapperGeneratorFrame.setTitle("Zamknij mnie!"); //Wrapper generator wrapperGeneratorFrame.setLayout( new MigLayout("wrap 2, fillx", "[fill,min!]")); wrapperGeneratorFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); ActionListener buttonListener = new ActionListener() { JFileChooser fileChooser = new JFileChooser(); { fileChooser.setFileFilter(new javax.swing.filechooser.FileFilter() { @Override public boolean accept(File f) { return f.isDirectory() || f.getName().matches(".*\\.h(?:pp)?"); } @Override public String getDescription() { return "Header files"; } }); fileChooser.setCurrentDirectory(new File("C:\\Documents and Settings\\jb\\My Documents\\Visual Studio 2008\\Projects\\dll\\dll")); } public void actionPerformed(ActionEvent e) { if(JFileChooser.APPROVE_OPTION == fileChooser.showOpenDialog(wrapperGeneratorFrame)){ origHeader = fileChooser.getSelectedFile(); } } }; wrapperGeneratorFrame.add(new JLabel("Original header file")); JButton jButton = new JButton("Select header file"); jButton.addActionListener(buttonListener); wrapperGeneratorFrame.add(jButton); wrapperGeneratorFrame.add(new JLabel("Result files prefix")); newHeaderFileName = new JTextField("dll_wrapper"); wrapperGeneratorFrame.add(newHeaderFileName); ActionListener doListener = new ActionListener() { public void actionPerformed(ActionEvent e) { targetHeader = new File(origHeader.getParentFile(), newHeaderFileName.getText() + ".h"); targetCpp = new File(origHeader.getParentFile(), newHeaderFileName.getText() + ".cpp"); try { targetHeader.createNewFile(); targetCpp.createNewFile(); generator.reader = new InputStreamReader(new FileInputStream(origHeader),charset); generator.cppWriter = 
new OutputStreamWriter(new FileOutputStream(targetCpp), charset); generator.heaerWriter = new OutputStreamWriter(new FileOutputStream(targetHeader), charset); generator.parseReader(); } catch (IOException e1) { e1.printStackTrace(); JOptionPane.showMessageDialog(wrapperGeneratorFrame, "ERROR:" + e1.getMessage(), "Error", JOptionPane.ERROR_MESSAGE); return; } } }; JButton go = new JButton("go"); go.addActionListener(doListener); wrapperGeneratorFrame.add(go, "skip 1"); } public static void main(String []args){ SwingUtilities.invokeLater(new Runnable() { public void run() { WrapperGenerator wgen = new WrapperGenerator(); JFrame f = wgen.wrapperGeneratorFrame; wgen.wrapperGeneratorFrame.pack(); Point p = getLocalGraphicsEnvironment().getCenterPoint(); wgen.wrapperGeneratorFrame.setLocation(p.x-f.getWidth()/2, p.y-f.getHeight()/2); wgen.wrapperGeneratorFrame.setVisible(true); } }); } } /** * Does the code parsing and generation */ class WrapperGen{ /** * Method is basically syntax like this: <code>(anything apart from some special chars like #;) functionName(anything)</code>; * Method declarations may span many lines. */ private static final Pattern METHOD_PATTERN = //1 //2 //params Pattern.compile("([^#;{}]*\\s+\\w[\\w0-9_]+)\\(([^\\)]*)\\);", Pattern.MULTILINE); //1 - specifiers - including stuff like __dllspec(export)... 
//2 - function name //3 param list /** * Generated functions will have name prefixet with #RESULT_PREFIX */ private static final String RESULT_PREFIX = "DLL_WRAPPER_"; /** * Specifiers of result will be prefixed with #RESULT_SPECIFIER */ private static final String RESULT_SPECIFIER = "__declspec(dllexport) "; Reader reader; Writer heaerWriter; Writer cppWriter; public void parseReader() throws IOException { StringWriter writer = new StringWriter(); int read; while((read = reader.read())!=-1){ writer.write(read); } reader.close(); heaerWriter.append("#pragma once\n\n\n"); heaerWriter.append("#include \"stdafx.h\"\n\n\n"); //Standard Visual C++ import file. cppWriter.append("#include \"stdafx.h\"\n\n\n"); Matcher m = METHOD_PATTERN.matcher(writer.getBuffer()); while(m.find()){ System.out.println(m.group()); handleMatch(m); } cppWriter.close(); heaerWriter.close(); } public void handleMatch(Matcher m) throws IOException { Method meth = new Method(m); outputHeader(meth); outputCPP(meth); } private void outputDeclaration(Method m, Writer writer) throws IOException { //writer.append(RESULT_SPECIFIER); writer.append(m.specifiers); writer.append(" "); writer.append(RESULT_PREFIX); writer.append(m.name); writer.append("("); for (int ii = 0; ii < m.params.size(); ii++) { Parameter p = m.params.get(ii); writer.append(p.specifiers); writer.append(" "); writer.append(p.name); if(ii!=m.params.size()-1){ writer.append(", "); } } writer.append(")"); } public void outputHeader(Method m) throws IOException { outputDeclaration(m, heaerWriter); heaerWriter.append(";\n\n"); } public void outputCPP(Method m) throws IOException { cppWriter.append(RESULT_SPECIFIER); outputDeclaration(m, cppWriter); cppWriter.append("{\n\t"); if (!m.specifiers.contains("void") || m.specifiers.matches(".*void\\s*\\*.*")) { cppWriter.append("return "); } cppWriter.append(m.name); cppWriter.append("("); for (int ii = 0; ii < m.params.size(); ii++) { Parameter p = m.params.get(ii); cppWriter.append(p.name); 
if(ii!=m.params.size()-1){ cppWriter.append(", "); } } cppWriter.append(");\n"); cppWriter.append("}\n\n"); } } class Method{ private static final Pattern NAME_REGEXP = //1 //2 Pattern.compile("\\s*(.*)\\s+(\\w[\\w0-9]+)\\s*", Pattern.MULTILINE); //1 - all specifiers - including __declspec(dllexport) and such ;) //2 - function name public final List<Parameter> params; public final String name; public final String specifiers; public Method(Matcher m) { params = Collections.unmodifiableList(Parameter.parseParamList(m.group(2))); Matcher nameMather = NAME_REGEXP.matcher(m.group(1)); System.out.println("ALL: " + m.group()); System.out.println("G1: " + m.group(1)); if(!nameMather.matches()){ throw new IllegalArgumentException("for string " + m.group(1)); } // nameMather.find(); specifiers = nameMather.group(1); name = nameMather.group(2); } } class Parameter{ static final Pattern PARAMETER_PATTERN = //1 //2 Pattern.compile("\\s*(?:(.*)\\s+)?([\\w\\*&]+[\\w0-9]*[\\*&]?)\\s*"); //1 - Most probably parameter type and specifiers, but may also be empty - in which case name is empty, and specifiers are in 2 //2 - Most probably parameter type, sometimes prefixed with ** or &* ;), also // 'char *' will be parsed as grup(1) == char, group(2) = *. /** * Used to check if group that represenrs parameter name is in fact param specifier like '*'. 
*/ static final Pattern STAR_PATTERN = Pattern.compile("\\s*([\\*&]?)+\\s*"); /** * If */ static final Pattern NAME_PATTERN = Pattern.compile("\\s*([\\*&]+)?(\\w[\\w0-9]*)\\s*"); public final String name; public final String specifiers; public Parameter(String param, int idx) { System.out.println(param); Matcher m = PARAMETER_PATTERN.matcher(param); String name = null; String specifiers = null; if(!m.matches()){ throw new IllegalStateException(param); } name = m.group(2); specifiers = m.group(1); if(specifiers==null || specifiers.isEmpty()){ //Case that parameter has no name like 'int', or 'int**' specifiers = name; name = null; }else if(STAR_PATTERN.matcher(name).matches()){ //Case that parameter has no name like 'int *' specifiers += name; name = null; }else if(NAME_PATTERN.matcher(name).matches()){ //Checks if name contains part of type like '**ptrData', and extracts '**' Matcher m2 = NAME_PATTERN.matcher(name); m2.matches(); if(m2.group(1)!=null){ specifiers += m2.group(1); name = m2.group(2); } } if(name==null){ name = "param" + idx; } this.specifiers = specifiers; this.name = name; } public static List<Parameter> parseParamList(String paramList){ List<Parameter> result = new ArrayList<Parameter>(); String[] params = paramList.split(","); int idx = 0; for(String param : params){ Parameter p = new Parameter(param, idx++); result.add(p); } if(result.size()==1){ Parameter p = result.get(0); if(p.specifiers.matches("\\s*void\\s*")){ return Collections.emptyList(); } } return result; } } ```
I'm not aware of any tools that will do this automatically, but the process is to create a DLL project and add your libraries to the project. For each function in the header file: ``` int SomeLibFunc( int x, int y ); ``` you would need to create and export your own function in the DLL: ``` int MyFunc( int x, int y ) { return SomeLibFunc( x, y ); } ``` The process is quite mechanical, and you may be able to knock up a script using something like perl to create the DLL source files.
Convert static windows library to dll
[ "java", "c", "windows", "java-interop" ]
I have a SQL query that looks something like this: ``` SELECT * FROM( SELECT ..., row_number() OVER(ORDER BY ID) rn FROM ... ) WHERE rn between :start and :end ``` Essentially, it's the ORDER BY part that's slowing things down. If I were to remove it, the EXPLAIN cost goes down by an order of magnitude (over 1000x). I've tried this: ``` SELECT ... FROM ... WHERE rownum between :start and :end ``` But this doesn't give correct results. Is there any easy way to speed this up? Or will I have to spend some more time with the EXPLAIN tool?
`ROW_NUMBER` is quite inefficient in `Oracle`. See the article in my blog for performance details: * [**Oracle: ROW\_NUMBER vs ROWNUM**](http://explainextended.com/2009/05/06/oracle-row_number-vs-rownum/) For your specific query, I'd recommend replacing it with `ROWNUM` and making sure that the index is used: ``` SELECT * FROM ( SELECT /*+ INDEX_ASC(t index_on_column) NOPARALLEL_INDEX(t index_on_column) */ t.*, ROWNUM AS rn FROM table t ORDER BY column ) WHERE rn >= :start AND rownum <= :end - :start + 1 ``` This query will use `COUNT STOPKEY`. Also, either make sure your `column` is not nullable, or add a `WHERE column IS NOT NULL` condition. Otherwise the index cannot be used to retrieve all values. Note that you cannot use `ROWNUM BETWEEN :start and :end` without a subquery. `ROWNUM` is always assigned last and checked last, that's why `ROWNUM`'s always come in order without gaps. If you use `ROWNUM BETWEEN 10 and 20`, the first row that satisfies all other conditions will become a candidate for returning, be temporarily assigned `ROWNUM = 1`, and fail the test of `ROWNUM BETWEEN 10 AND 20`. Then the next row will be a candidate, assigned `ROWNUM = 1`, and fail, etc., so, finally, no rows will be returned at all. This should be worked around by putting the `ROWNUM` into the subquery.
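Oracle-specific hints aside, the subquery-pagination shape itself is easy to experiment with; here it is against SQLite (whose `ROW_NUMBER` window function needs SQLite 3.25+, bundled with recent Pythons) over an invented `items` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, label TEXT)")
conn.executemany("INSERT INTO items (label) VALUES (?)",
                 [("row%d" % i,) for i in range(1, 101)])

def page(start, end):
    # ROW_NUMBER must be assigned in the subquery, then filtered outside it.
    return conn.execute("""
        SELECT id, label FROM (
            SELECT id, label, ROW_NUMBER() OVER (ORDER BY id) AS rn
            FROM items
        ) WHERE rn BETWEEN ? AND ?""", (start, end)).fetchall()

print(page(11, 13))  # [(11, 'row11'), (12, 'row12'), (13, 'row13')]
```

This shows only the query shape; the performance discussion above (`ROWNUM` + `COUNT STOPKEY`) is Oracle-specific.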
Looks like a pagination query to me. From this ASKTOM article (about 90% down the page): [You need to order by something unique for these pagination queries, so that ROW\_NUMBER is assigned deterministically to the rows each and every time.](http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html "AskTom pagination") Also, your queries are nowhere near the same, so I'm not sure what the benefit of comparing the costs of one to the other is.
How can I speed up row_number in Oracle?
[ "sql", "oracle", "oracle10g", "row-number" ]
I'm running a query against a table in a postgresql database. The database is on a remote machine. The table has around 30 sub-tables using postgresql [partitioning capability](http://www.postgresql.org/docs/8.3/interactive/ddl-partitioning.html). The query will return a large result set, something around 1.8 million rows. In my code I use spring jdbc support, method [JdbcTemplate.query](http://static.springsource.org/spring/docs/2.5.x/api/org/springframework/jdbc/core/JdbcTemplate.html), but my [RowCallbackHandler](http://static.springsource.org/spring/docs/2.5.x/api/org/springframework/jdbc/core/RowCallbackHandler.html) is not being called. My best guess is that the postgresql jdbc driver (I use version 8.3-603.jdbc4) is accumulating the result in memory before calling my code. I thought the [fetchSize configuration](http://static.springsource.org/spring/docs/2.5.x/api/org/springframework/jdbc/core/JdbcTemplate.html#setFetchSize(int)) could control this, but I tried it and nothing changes. I did this as postgresql manual [recomended](http://jdbc.postgresql.org/documentation/83/query.html#fetchsize-example). This query worked fine when I used Oracle XE. But I'm trying to migrate to postgresql because of the partitioning feature, which is not available in Oracle XE. My environment: * Postgresql 8.3 * Windows Server 2008 Enterprise 64-bit * JRE 1.6 64-bit * Spring 2.5.6 * Postgresql JDBC Driver 8.3-603
In order to use a cursor to retrieve data you have to set the ResultSet type to ResultSet.TYPE\_FORWARD\_ONLY (the default) and autocommit to false, in addition to setting a fetch size. That is referenced in the doc you linked to, but you didn't explicitly mention that you did those steps. Be careful with PostgreSQL's partitioning scheme. It really does very horrible things with the optimizer and can cause massive performance issues where there should not be any (depending on specifics of your data). In any case, is your table only 1.8M rows? There is no reason that it would need to be partitioned based on size alone, given that it is appropriately indexed.
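The materialize-versus-stream distinction this driver behaviour hinges on can be shown with any DB-API driver; in this Python sketch (sqlite3 standing in for PostgreSQL), `fetchmany` keeps a bounded batch in memory, while `fetchall` is the analogue of the driver buffering the whole result set up front:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (n INTEGER)")
conn.executemany("INSERT INTO big VALUES (?)", ((i,) for i in range(10000)))

def stream_rows(conn, batch=500):
    """Yield rows a bounded batch at a time instead of materializing them all."""
    cur = conn.execute("SELECT n FROM big ORDER BY n")
    while True:
        rows = cur.fetchmany(batch)
        if not rows:
            return
        yield from rows

total = sum(n for (n,) in stream_rows(conn))
print(total)  # 49995000
```

With the PostgreSQL JDBC driver the equivalent streaming mode is exactly the cursor setup described above (forward-only ResultSet, autocommit off, fetch size set).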
I'm betting that there's not a single client of your app that needs 1.8M rows all at the same time. You should think of a sensible way to chunk the results into smaller pieces and give users the chance to iterate through them. That's what Google does. When you do a search there might be millions of hits, but they return 25 pages at a time with the idea that you'll find what you want in the first page. If it's not a client, and the results are being massaged in some way, I'd recommend letting the database crunch all those rows and simply return the result. It makes no sense to return 1.8M rows just to do a calculation on the middle tier. If neither of those apply, you've got a real problem. Time to rethink it. After reading the later responses it sounds to me like this is more of a reporting solution that ought to be crunched in batch or calculated in real time and stored in tables that are not part of your transactional system. There's no way that bringing 1.8M rows to the middle tier for calculating moving averages can scale. I'd recommend reorienting yourself - start thinking about it as a reporting solution.
Large ResultSet on postgresql query
[ "java", "spring", "postgresql", "jdbc", "spring-jdbc" ]
I have a C# .net web project that has a globalization tag set to: ``` <globalization requestEncoding="utf-8" responseEncoding="utf-8" culture="nb-no" uiCulture="no"/> ``` When this URL is requested from a Flash application (you get the same problem when you enter the URL manually in a browser): c\_product\_search.aspx?search=kjøkken (alternatively: c\_product\_search.aspx?search=kj%F8kken) Both return the following character codes: ``` k U+006b 107 j U+006a 106 � U+fffd 65533 k U+006b 107 k U+006b 107 e U+0065 101 n U+006e 110 ``` I don't know too much about character encoding, but it seems that the ø has been given a unicode replacement character, right? I tried to change the globalization tag to: ``` <globalization requestEncoding="iso-8859-1" responseEncoding="utf-8" culture="nb-no" uiCulture="no"/> ``` That made the request work. However, now other searches on my page stopped working. I also tried the following, with similar results: ``` NameValueCollection qs = HttpUtility.ParseQueryString(Request.QueryString.ToString(), Encoding.GetEncoding("iso-8859-1")); string search = (string)qs["search"]; ``` What should I do? Kind Regards, nitech
The problem comes from the combination Firefox/ASP.NET. When you manually enter a URL in Firefox's address bar, if the url contains French or Swedish characters, Firefox will encode the url with "ISO-8859-1" by default. But when ASP.NET receives such a url, it thinks that it's utf-8 encoded ... and the encoded characters become "U+fffd". I couldn't find a way in ASP.NET to detect that the url is "ISO-8859-1". Request.Encoding is set to utf-8 ... :( Several solutions exist: * put `<globalization requestEncoding="iso-8859-1" responseEncoding="iso-8859-1"/>` in your Web.config. But you may run into other problems, and your application won't be standard anymore (it will not work with languages like Japanese) ... And anyway, I prefer using UTF-8! * go to about:config in Firefox and set the value of `network.standard-url.encode-query-utf8` to true. It will now work for you (Firefox will encode all your urls with utf-8). But not for anybody else ... * The least worst solution I could come up with was to handle this in code. If the default decoding didn't work, we reparse the QueryString with iso-8859-1: ``` string query = Request.QueryString["search"]; if (query.Contains("\ufffd")) query = HttpUtility.ParseQueryString(Request.Url.Query, Encoding.GetEncoding("iso-8859-1"))["search"]; query = HttpUtility.UrlDecode(query); ``` It works with hyperlinks and manually-entered urls, in French, English, or Japanese. But I don't know how it will handle other encodings like ISO8859-5 (Russian) ... Does anyone have a better solution? This solves only the problem of manually-entered urls. In your hyperlinks, don't forget to encode url parameters with HttpUtility.UrlEncode on the server, or encodeURIComponent in the JavaScript code. And use HttpUtility.UrlDecode to decode it.
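The fallback-decode trick in the code above is language-independent: decode as UTF-8 first, and only if the replacement character U+FFFD shows up, re-decode the raw bytes as ISO-8859-1. A compact Python rendering of the same idea (with the same caveat that other single-byte charsets remain ambiguous):

```python
REPLACEMENT = '\ufffd'

def decode_query_value(raw: bytes) -> str:
    """Assume UTF-8; fall back to ISO-8859-1 when decoding clearly failed."""
    text = raw.decode('utf-8', errors='replace')
    if REPLACEMENT in text:
        text = raw.decode('iso-8859-1')
    return text

print(decode_query_value('kjøkken'.encode('iso-8859-1')))  # kjøkken
print(decode_query_value('kjøkken'.encode('utf-8')))       # kjøkken
```

Note this heuristic works because every byte sequence is valid ISO-8859-1, so the fallback can never itself fail; it just may pick the wrong single-byte charset.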
``` public string GetEncodedQueryString(string key) { string query = Request.QueryString[key]; if (query != null) if (query.Contains((char)0xfffd)) query = HttpUtility.ParseQueryString(Request.Url.Query, Encoding.GetEncoding("iso-8859-1"))[key]; return query; } ```
Getting U+fffd/65533 instead of special character from Query String
[ "c#", ".net", "flash", "encoding", "url-encoding" ]
What I found so far are some online resources like these: <http://www.torsten-horn.de/techdocs/java-hibernate.htm> (GER) <https://www.hibernate.org/5.html> (Hibernate Docs) <http://docs.jboss.org/hibernate/stable/core/reference/en/html/tutorial.html> <http://www.manning.com/bauer2/chapter2.pdf> (Sample chapter of Java Persistence with Hibernate) <http://www.wenzlaff.de/hibernate.html> (GER) So that's a good way to start, but I wonder if you can recommend any good books for learning Hibernate, or maybe you know some other very good online resources for learning it?
[Java Persistence with Hibernate](http://www.manning.com/bauer2/) is the second edition of "Hibernate in Action". They changed the name since they now provide JPA examples as well as Hibernate. [High-Performance Java Persistence](https://vladmihalcea.com/books/high-performance-java-persistence/) is also worth reading since it explains which Hibernate features are efficient and which you should stay out of.
**Hibernate in Action** by Christian Bauer and Gavin King. It's slightly out of date, but is an excellent introduction.
Good books or online resources for learning Hibernate
[ "java", "hibernate", "jpa" ]
For example, many methods in frameworks/JDK might throw ``` java.lang.SecurityException ``` but this is not indicated in the method signature (since that is a practice normally reserved for checked exceptions). I want to argue that declaring RuntimeExceptions in method sigs has many benefits (akin to static type checking for example). Am I drunk or otherwise?
I would not declare an unchecked exception in the signature, since it is misleading to the user of that API. It is no longer obvious whether the exception has to be explicitly handled. Declaring it in the javadoc is a better approach since it allows someone to handle it if they think it is necessary, but knowing they can ignore it if they want. This makes the separation between checked and unchecked clear.
From [the Oracle Java tutorial](http://docs.oracle.com/javase/tutorial/essential/exceptions/runtime.html): > "If it's so good to document a method's API, including the exceptions > it can throw, why not specify runtime exceptions too?" Runtime > exceptions represent problems that are the result of a programming > problem, and as such, the API client code cannot reasonably be > expected to recover from them or to handle them in any way. Such > problems include arithmetic exceptions, such as dividing by zero; > pointer exceptions, such as trying to access an object through a null > reference; and indexing exceptions, such as attempting to access an > array element through an index that is too large or too small. > > Runtime exceptions can occur anywhere in a program, and in a typical > one they can be very numerous. **Having to add runtime exceptions in > every method declaration would reduce a program's clarity.**
Should methods that throw RuntimeException indicate it in method signature?
[ "java", "oop", "exception" ]
I'm writing a unit test and can't remember the syntax for initializing a string with a large block of formatted text. ``` string _testData = "a couple screens worth of text data here and I need to preserve the formatting such as line breaks, etc."; ```
Add an `@` before the literal. ``` string _testData = @"a couple screens worth of text data here and I need to preserve the formatting such as line breaks, etc."; ```
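For comparison outside C#: the closest Python analogue to a verbatim string is a triple-quoted literal, which likewise preserves embedded line breaks, while raw `r''` strings cover the other half of `@`'s job by leaving backslash escapes unprocessed:

```python
test_data = """a couple screens worth of text
data here and I need to preserve the
formatting such as line breaks, etc."""

path = r"C:\temp\file.txt"  # raw string: backslashes kept as-is

print(test_data.count("\n"))  # 2
print(path)                   # C:\temp\file.txt
```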
Use the `@` prefix to create a verbatim string literal. ``` string _testData = @"a couple screens worth of text data here and I need to preserve the formatting such as line breaks, etc."; ``` From [MSDN](http://msdn.microsoft.com/en-us/library/362314fe.aspx): "Verbatim string literals start with @ and are also enclosed in double quotation marks. The advantage of verbatim strings is that escape sequences are not processed, which makes it easy to write, for example, a fully qualified file name. To include a double quotation mark in an @-quoted string, double it."
What is the syntax for initializing a string with a block of text in .NET?
[ "c#", ".net", "text" ]
I'm programming network headers and a lot of protocols use 4 bits fields. Is there a convenient type I can use to represent this information? The smallest type I've found is a BYTE. I must then use a lot of binary operations to reference only a few bits inside that variable.
Since the memory is byte-addressed, you can't address any unit smaller than a single byte. However, you can build the `struct` you want to send over the network and use [**bit fields**](http://msdn.microsoft.com/en-us/library/ewwyfdbe(VS.71).aspx) like this: ``` struct A { unsigned int nibble1 : 4; unsigned int nibble2 : 4; }; ```
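Since the layout of C bit-fields is implementation-defined, it is worth seeing the explicit shift-and-mask arithmetic once — the operations the bit-field syntax is sugar for. Illustrated in Python:

```python
def pack_nibbles(high: int, low: int) -> int:
    """Pack two 4-bit values into one byte (high nibble first)."""
    assert 0 <= high < 16 and 0 <= low < 16
    return (high << 4) | low

def unpack_nibbles(byte: int) -> tuple:
    """Split a byte back into its (high, low) nibbles."""
    return (byte >> 4) & 0x0F, byte & 0x0F

print(unpack_nibbles(pack_nibbles(0xA, 0x5)))  # (10, 5)
```

This is exactly what the compiler emits for the `struct A` bit-field accesses above; the bit-field version just reads more cleanly in protocol code.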
Expanding on Mehrdad's answer, you can also use a union with a byte in order to avoid some evil-looking casts: ``` union Nibbler { struct { unsigned int first:4; unsigned int second:4; } nibbles; unsigned char byte_value; }; ```
Using nibbles (4 bits variables) in windows C/C++
[ "c++", "c", "windows", "types" ]
What is the best way to copy or cut/paste a form from one project to another project within a Solution in Visual Studio? Whenever I try it, using the solution explorer (drag and drop or right clicking cut and paste), it only copies the underlying C# code and not the necessary 'bits and pieces' that help you visualise the form via the form designer.
1. Copy the three files, `.cs`, `.designer`, `resx` to the target solution folder. 2. In the target project, select `Add existing item` and add the designer file first. 3. Modify the Namespace attribute. The `.cs` file should come in as well. 4. Modify the namespace in the `.cs` file. 5. Add the `resx` file using `Add existing item`.
To copy a form from one VS2013 project to another, the easiest way (fewest mouse clicks) is: 1. In your target project, right mouse click on your parent folder (likely your project) and in the menu select "Add; Existing Item". 2. Then select the form's cs file in your source project. For example select Form.cs (not the Form.Designer.cs or Form.resx). 3. Open Form.cs in your target project and change the namespace (all instances). 4. Close and reopen Form.cs and you will see all the controls, etc.
Copying winforms between projects in Visual Studio
[ "c#", "visual-studio", "winforms" ]
I want to get the difference between two sets of ints in C#. Given s1 and s2 I want to return those ints which are in s1 and not in s2. I can do something such as: ``` List<int> s1 = new List<int>(); List<int> s2 = new List<int>(); foreach (int i in s1) { if (s2.Contains(i)) { // } else { // } } ``` But I was wondering if anyone can point out anything cleaner. I would like to do something such as ``` List<int> omitted = s1.Difference(s2); ``` Not sure if there is an existing method or a LINQ construct that anyone might be able to point out? Thank you.
I think you want [HashSet.Except](http://msdn.microsoft.com/en-us/library/bb908036.aspx). That is, rather than use Lists, use HashSets, and then the operation is available. This is a better type if what you are representing is really a 'set' anyway. (If you already have a list, you can just create a 'new HashSet' out of it.)
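For reference, `Except` is ordinary set difference; the same operation in Python makes the complexity point concrete — hashing gives roughly O(n+m) instead of the quadratic `List.Contains` loop:

```python
s1 = [1, 2, 3, 4, 5]
s2 = [4, 5, 6]

# Elements of s1 that are not in s2 (the Except / set-difference result).
omitted = set(s1) - set(s2)
print(sorted(omitted))  # [1, 2, 3]
```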
``` IEnumerable<T> a, b; var added = a.Except(b); var removed = b.Except(a); ```
Is there a way to get the difference between two sets of objects in c#
[ "c#", ".net", "linq", "hashset" ]
I have an array of ListViewItems ( `ListViewItem[]` ), where I store a `SalesOrderMaster` object in each ListViewItem.Tag for later reference. I have some code that right now, goes through each `ListViewItem` safely casts the .Tag property into a SalesOrderMaster object, then adds that object to a collection of SalesOrders, only after checking to make sure the order doesn't already exist in that collection. The process to **compare** sales orders is expensive, and I would like to convert this to a LINQ expression for clarity and performance. ( I also have the [Parallel Extensions to .NET Framework 3.5](http://www.microsoft.com/downloads/details.aspx?FamilyId=348F73FD-593D-4B3C-B055-694C50D2B0F3&displaylang=en) installed so I can use that to further improve LINQ performance) So without further ado: This is what I have, and then what I want. ( what I want won't compile, so I know I am doing something wrong, but I hope it illustrates the point ) What I have: ( Slow ) ``` foreach (ListViewItem item in e.Argument as ListViewItem[]) { SalesOrderMaster order = item.Tag as SalesOrderMaster; if ( order == null ) { return; } if (!All_SalesOrders.Contains(order)) { All_SalesOrders.Add(order); } } ``` What I want: ( Theory ) ``` List<SalesOrderMaster> orders = (from item in (e.Argument as ListViewItem[]).AsParallel() select new { ((SalesOrderMaster)item.Tag) }).Distinct(); ``` EDIT: I know the cast is cheap, I said the "Compare", which in this case translates to the .Contains(order) operation EDIT: Everyone's answer was awesome! I wish I could mark more than one answer, but in the end I have to pick one. EDIT : This is what I ended up with: ``` List<SalesOrderMaster> orders = (from item in (e.Argument as ListViewItem[]) select (SalesOrderMaster) item.Tag).GroupBy(item => item.Number).Select(x => x.First()).ToList(); ```
I see nobody has addressed your need to convert an anonymous type to a named type explicitly, so here goes... By using "`select new { }`" you are creating an anonymous type, but you don't need to. You can write your query like this:

```
List<SalesOrderMaster> orders =
    (from item in (e.Argument as ListViewItem[]).AsParallel()
     select (SalesOrderMaster)item.Tag)
    .Distinct()
    .ToList();
```

Notice that the query selects `(SalesOrderMaster)item.Tag` without `new { }`, so it doesn't create an anonymous type. Also note I added `ToList()` since you want a `List<SalesOrderMaster>`. This solves your anonymous type problem.

However, I agree with Mark and Guffa that using a parallel query here isn't your best option. To use `HashSet<SalesOrderMaster>` as Guffa suggested, you can do this:

```
IEnumerable<SalesOrderMaster> query =
    from item in (ListViewItem[])e.Argument
    select (SalesOrderMaster)item.Tag;

HashSet<SalesOrderMaster> orders = new HashSet<SalesOrderMaster>(query);
```

(I avoided using `var` so the returned types are clear in the examples.)
The part in that code that is expensive is calling the `Contains` method on the list. As it's an O(n) operation it gets slower the more objects you add to the list. Just use a `HashSet<SalesOrderMaster>` for the objects instead of a `List<SalesOrderMaster>`. The `Contains` method of the `HashSet` is an O(1) operation, so your loop will be an O(n) operation instead of an O(n\*n) operation.
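A sketch of the complexity point above (Python here purely for illustration): a hash-based set answers membership in O(1), so the dedupe loop stays O(n) instead of O(n*n):

```python
def dedupe(items):
    # Keep the first occurrence of each item, preserving order.
    seen = set()
    result = []
    for item in items:
        if item not in seen:  # O(1) set lookup; scanning a list here would be O(n)
            seen.add(item)
            result.append(item)
    return result
```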
How can I convert anonymous type to strong type in LINQ?
[ "", "c#", "linq", ".net-3.5", "" ]
I'm writing a JUnit test for some code that produces an Excel file (which is binary). I have another Excel file that contains my expected output. What's the easiest way to compare the actual file to the expected file? Sure I could write the code myself, but I was wondering if there's an existing method in a trusted third-party library (e.g. Spring or Apache Commons) that already does this.
Here's what I ended up doing (with the heavy lifting being done by [DBUnit](http://dbunit.sourceforge.net/)): ``` /** * Compares the data in the two Excel files represented by the given input * streams, closing them on completion * * @param expected can't be <code>null</code> * @param actual can't be <code>null</code> * @throws Exception */ private void compareExcelFiles(InputStream expected, InputStream actual) throws Exception { try { Assertion.assertEquals(new XlsDataSet(expected), new XlsDataSet(actual)); } finally { IOUtils.closeQuietly(expected); IOUtils.closeQuietly(actual); } } ``` This compares the data in the two files, with no risk of false negatives from any irrelevant metadata that might be different. Hope this helps someone.
You might consider using my project [simple-excel](https://github.com/tobyweston/simple-excel/) which provides a bunch of Hamcrest Matchers to do the job. When you do something like the following,

```
assertThat(actual, WorkbookMatcher.sameWorkbook(expected));
```

You'd see, for example,

```
java.lang.AssertionError: Expected: entire workbook to be equal
     but: cell at "C14" contained <"bananas"> expected <nothing>,
          cell at "C15" contained <"1,850,000 EUR"> expected <"1,850,000.00 EUR">,
          cell at "D16" contained <nothing> expected <"Tue Sep 04 06:30:00">
   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
```

That way, you can run it from your automated tests and get meaningful feedback whilst you're developing. You can read more about it at [this article on my site](http://baddotrobot.com/blog/2012/09/14/diff-excel-with-java-and-hamcrest/).
Easiest way to compare two Excel files in Java?
[ "", "java", "file", "binary", "compare", "" ]
I have the following regex that isn't working. I want to match the string 'www.example.com' but not the string '<http://www.example.com>' (or 'anythingwww.example.com' for that matter):

```
/\bwww\.\w.\w/ig
```

This is used in JavaScript like this:

```
text = text.replace(/\bwww\.\w.\w/ig, 'http://$&');
```

I know the second part of the regex doesn't work correctly either, but it is the http:// part that is confusing me. It will currently match '<http://www.example.com>', resulting in output of '<http://http://www.example.com>'.
Perhaps something like this?

```
text = text.replace(/(^|\s)(www(?:\.\w+){2,})/ig, "$1http://$2");
```

This will match the URLs in:

* "`www.example.com`" -> "`http://www.example.com`"
* "`Visit www.example.com`" -> "`Visit http://www.example.com`"
* "`Visit www.example.co.uk`" -> "`Visit http://www.example.co.uk`"

But not:

* "`http://www.example.com`"
* "`ftp.example.com`"
* "`www.com`"
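The same substitution sketched with Python's `re` module (equivalent pattern; the function name is made up for illustration):

```python
import re

def linkify(text):
    # Prefix bare www. hosts with http://, but only when preceded by
    # start-of-string or whitespace, so existing http:// URLs are untouched.
    return re.sub(r'(^|\s)(www(?:\.\w+){2,})', r'\1http://\2', text, flags=re.I)
```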
Are you searching for the occurrence of `www.example.com` in a larger string? Maybe you can be more specific about what you want to match exactly, but something like this may work for you: ``` text = text.replace(/(\s)(www\.\w+\.\w+)/ig, "$1http://$2"); ``` The problem with `\b` (which matches word boundaries) is that it also matches between `http://` and `www`, because `/` is not a word character.
Regex to match www.example.com only if http:// not present
[ "", "javascript", "regex", "" ]
I am trying to implement the MVP pattern for WINFORMS. It's a simple form with a button and a grid; on button click, the grid will load, and the user can fill in values into the grid. For my button click event, I have something like this: \_presenter.LoadGrid(); Which is simple and straightforward. My question is in regards to the grid... I am planning to have a row click event firing... for enabling/disabling the subsequent input fields for the specific columns/rows of the grid etc. I understand that the presenter should not contain any GUI elements and the View (form) shouldn't really contain Logic? So, to have that GridRowClick event firing, I need to manipulate the grid (GUI) based on business rules (Logic). I am lost between letting the presenter handle the logic of that click event or the form? If the presenter is to handle the click event, won't that include the GUI components? If the view is to handle the click event, the field names etc. are all business driven (logic), dynamically bound based on a datatable returned from the business layer. Any advice will be much appreciated. Cheers
There are (at least) two variants of MVP.

* Passive View Pattern
* Supervising Controller Pattern

Passive View, as its name suggests, treats the UI as a more or less passive interface between the user and the application. It moves as much testable code to the presenter as possible, leaving the view to handle only the most basic UI updates.

Supervising Controller gives the view a little more responsibility by letting it handle data synchronization. This is usually done through data binding.

In either case event handling is accomplished by delegating to a presenter method:

```
EventHandler()
{
    presenter.HandleEvent();
}
```

If handling the event requires making changes to the form, you expose what needs to be updated as a property:

```
public string PropertyThatNeedsToBeUpdated
{
    get { return Control.Property; }
    set { Control.Property = value; }
}
```

For Passive View, grids are a hurdle. Their complexity makes it cumbersome to capture all the possible events. With Supervising Controller, grids are much easier since you leave data synchronization up to the data-bound controls.

You have to make the judgment call as to which is more appropriate for your situation.
The key is getting all that business logic into the presenter where it's testable. The view should call the presenter to perform the business logic, passing the information needed (e.g. the data associated with the clicked row). The presenter then performs the business logic and updates the data. Depending on what type of changes you need to make, that might be all you need to do, since the existing data binding might be sufficient to update the view automatically. If it isn't sufficient, the presenter can make one or more calls to the view (via its interface of course) in order to make the required changes. As you say, you should aim to minimize the amount of non-trivial code in your view, and in particular, the view shouldn't have any business logic in it. **EDIT:** Some good general information on MVP and other presentation patterns here: <http://martinfowler.com/eaaDev/uiArchs.html>
Winforms MVP Grid Events Problem
[ "", "c#", ".net", "winforms", "model-view-controller", "mvp", "" ]
I'm writing a small web site that just shows some animation and some information as a homepage, plus a list of links. All of that is going to be generated dynamically on the client side, so everything is going to be JavaScript and XML. Recently I've been reading some questions on SO about JavaScript, and most of the situations involved the use and/or recommendation of a framework (jQuery and friends). When should a small web development effort start considering the use of such a framework? I've been doing my stuff until now with just plain JavaScript; since I'm not implementing a big site, is it worth learning a framework? Thanks
On SO you will find **a lot** of people (including me) who advocate the use of jQuery (in particular). To me, it's everything a framework should be: small, lightweight, extensible, with compact yet powerful and brief syntax, and it solves some pretty major problems. I would honestly have a hard time trying to envision a project where I wouldn't use it (or another framework). The reason to use it is to solve browser compatibility issues. Consider my answer to [javascript to get paragraph of selected text in web page](https://stackoverflow.com/questions/845390/javascript-to-get-paragraph-of-selected-text-in-web-page/845438#845438):

> ```
> function getSelectedParagraphText() {
>     var userSelection;
>     if (window.getSelection) {
>         selection = window.getSelection();
>     } else if (document.selection) {
>         selection = document.selection.createRange();
>     }
>     var parent = selection.anchorNode;
>     while (parent != null && parent.localName != "P") {
>         parent = parent.parentNode;
>     }
>     if (parent == null) {
>         return "";
>     } else {
>         return parent.innerText || parent.textContent;
>     }
> }
> ```

If you're familiar with JavaScript, a lot of this should be familiar to you: things like the check for innerText or textContent (Firefox 1.5) and so on. Pure JavaScript is littered with things like this. Now consider the jQuery solution:

```
function getSelectedParagraphText() {
    var userSelection;
    if (window.getSelection) {
        selection = window.getSelection();
    } else if (document.selection) {
        selection = document.selection.createRange();
    }
    var parent = selection.anchorNode;
    var paras = $(parent).parents("p")
    return paras.length == 0 ? "" : paras.text();
}
```

Where jQuery really shines, though, is with AJAX. There are JavaScript code snippets around to find the correct object to instantiate (XMLHttpRequest or equivalent) to do an AJAX request. jQuery takes care of all that for you. All of this for under 20k for the core jQuery JavaScript file. To me, it's a must-have.
I'd start right now. Libraries like jQuery and prototype not only insulate you from browser differences, but also provide you with a shorthand for communicating your ideas to other programmers.
When should I use a javascript framework library?
[ "", "javascript", "frameworks", "" ]
I'm lead dev for [Bitfighter](http://bitfighter.org), and we're working with a mix of Lua and C++, using Lunar (a variant of Luna, available [here](http://lua-users.org/wiki/CppBindingWithLunar)) to bind them together. I know this environment does not have good support for object orientation and inheritance, but I'd like to find some way to at least partially work around these limitations. Here's what I have: **C++ Class Structure** ``` GameItem |---- Rock |---- Stone |---- RockyStone Robot ``` Robot implements a method called *getFiringSolution(GameItem item)* that looks at the position and speed of *item*, and returns the angle at which the robot would need to fire to hit *item*. ``` -- This is in Lua angle = robot:getFiringSolution(rock) if(angle != nil) then robot:fire(angle) end ``` So my problem is that I want to pass *rocks*, *stones*, or *rockyStones* to the getFiringSolution method, and I'm not sure how to do it. This works for Rocks only: ``` // C++ code S32 Robot::getFiringSolution(lua_State *L) { Rock *target = Lunar<Rock>::check(L, 1); return returnFloat(L, getFireAngle(target)); // returnFloat() is my func } ``` Ideally, what I want to do is something like this: ``` // This is C++, doesn't work S32 Robot::getFiringSolution(lua_State *L) { GameItem *target = Lunar<GameItem>::check(L, 1); return returnFloat(L, getFireAngle(target)); } ``` This potential solution does not work because Lunar's check function wants the object on the stack to have a className that matches that defined for GameItem. (For each object type you register with Lunar, you provide a name in the form of a string which Lunar uses to ensure that objects are of the correct type.) 
I would settle for something like this, where I have to check every possible subclass: ``` // Also C++, also doesn't work S32 Robot::getFiringSolution(lua_State *L) { GameItem *target = Lunar<Rock>::check(L, 1); if(!target) target = Lunar<Stone>::check(L, 1); if(!target) target = Lunar<RockyStone>::check(L, 1); return returnFloat(L, getFireAngle(target)); } ``` The problem with this solution is that the check function generates an error if the item on the stack is not of the correct type, and, I believe, removes the object of interest from the stack so I only have one attempt to grab it. I'm thinking I need to get a pointer to the Rock/Stone/RockyStone object from the stack, figure out what type it is, then cast it to the correct thing before working with it. The key bit of Lunar which does the type checking is this: ``` // from Lunar.h // get userdata from Lua stack and return pointer to T object static T *check(lua_State *L, int narg) { userdataType *ud = static_cast<userdataType*>(luaL_checkudata(L, narg, T::className)); if(!ud) luaL_typerror(L, narg, T::className); return ud->pT; // pointer to T object } ``` If I call it thusly: ``` GameItem *target = Lunar<Rock>::check(L, 1); ``` then the luaL\_checkudata() checks to see if the item on the stack is a Rock. If so, everything is peachy, and it returns a pointer to my Rock object, which gets passed back to the getFiringSolution() method. If there is a non-Rock item on the stack, the cast returns null, and luaL\_typerror() gets called, which sends the app off into lala land (where the error handling prints a diagnostic and terminates the robot with extreme prejudice). Any ideas on how to move forward with this? Many thanks!! **Best solution I've come up with... 
ugly, but works** Based on the suggestions below, I came up with this: ``` template <class T> T *checkItem(lua_State *L) { luaL_getmetatable(L, T::className); if(lua_rawequal(L, -1, -2)) // Lua object on stack is of class <T> { lua_pop(L, 2); // Remove both metatables return Lunar<T>::check(L, 1); // Return our object } else // Object on stack is something else { lua_pop(L, 1); // Remove <T>'s metatable, leave the other in place // for further comparison return NULL; } } ``` Then, later... ``` S32 Robot::getFiringSolution(lua_State *L) { GameItem *target; lua_getmetatable(L, 1); // Get metatable for first item on the stack target = checkItem<Rock>(L); if(!target) target = checkItem<Stone>(L); if(!target) target = checkItem<RockyStone>(L); if(!target) // Ultimately failed to figure out what this object is. { lua_pop(L, 1); // Clean up luaL_typerror(L, 1, "GameItem"); // Raise an error return returnNil(L); // Return nil, but I don't think this // statement will ever get run } return returnFloat(L, getFireAngle(target)); } ``` There are probably further optimizations I can do with this... I'd really like to figure out how to collapse this into a loop because, in reality, I will have a lot more than three classes to deal with, and this process is a bit cumbersome. **Slight improvement on the above solution** C++: ``` GameItem *LuaObject::getItem(lua_State *L, S32 index, U32 type) { switch(type) { case RockType: return Lunar<Rock>::check(L, index); case StoneType: return Lunar<Stone>::check(L, index); case RockyStoneType: return Lunar<RockyStone>::check(L, index); default: displayError(); } } ``` Then, later... 
``` S32 Robot::getFiringSolution(lua_State *L) { S32 type = getInteger(L, 1); // My fn to pop int from stack GameItem *target = getItem(L, 2, type); return returnFloat(L, getFireAngle(target)); // My fn to push float to stack } ``` Lua helper function, included as a separate file to avoid user needing to add this manually to their code: ``` function getFiringSolution( item ) type = item:getClassID() -- Returns an integer id unique to each class if( type == nil ) then return nil end return bot:getFiringSolution( type, item ) end ``` User calls this way from Lua: ``` angle = getFiringSolution( item ) ```
You should tell us what exactly does not work in your code. I suppose that it is `Lunar<Rock>::check(L, 1)` that fails for all non-Rocks. Am I correct? Also it would be fine if you specified which version of Lunar you use (a link to it would be great). If it is [this one](http://lua-users.org/wiki/CppBindingWithLunar), then the class type is stored in the Lua object's metatable (one may say that this metatable *is* the type). It looks like the simplest way to check if an object is a Rock without patching Lunar is to call `luaL_getmetatable(L, Rock::className)` to get the class metatable and to compare it with lua\_getmetatable(L, 1) of your first argument (note lua**L** in the first function name). This is a bit hackish, but should work. If you're fine with patching Lunar, one possible way is to add a `__lunarClassName` field to the metatable and store `T::name` there. Then provide a `lunar_typename()` C++ function (outside of the Lunar template class -- as we do not need `T` there), and return from it the value of that `__lunarClassName` field of the argument's metatable. (Do not forget to check that the object has a metatable and that the metatable has such a field.) You may then check a Lua object's type by calling `lunar_typename()`. A bit of advice from personal experience: the more business logic you push to Lua, the better. Unless you're pressed by severe performance constraints, you probably should consider moving all that hierarchy to Lua -- your life would become much simpler. If I may help you further, please say so. **Update:** The solution you've updated your post with looks correct. To do the metatable-based dispatch in C, you may use, for example, a map from the integral [`lua_topointer()`](http://www.lua.org/manual/5.1/manual.html#lua_topointer) value of the `luaL_getmetatable()` for a type to a function object/pointer which knows how to deal with that type. But, again, I suggest moving this part to Lua instead.
For example: Export type-specific functions `getFiringSolutionForRock()`, `getFiringSolutionForStone()` and `getFiringSolutionForRockyStone()` from C++ to Lua. In Lua, store a table of methods keyed by metatable:

```
dispatch = {
  [Rock] = Robot.getFiringSolutionForRock;
  [Stone] = Robot.getFiringSolutionForStone;
  [RockyStone] = Robot.getFiringSolutionForRockyStone;
}
```

If I'm right, the next line should call the correct specialized method of the robot object.

```
dispatch[getmetatable(rock)](robot, rock)
```
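The dispatch-table idea above can be sketched in any language with first-class types; here is a Python sketch (types stand in for Lua metatables; all names are made up for illustration):

```python
class Rock: pass
class Stone: pass

def fire_at_rock(item):  return "rock solution"
def fire_at_stone(item): return "stone solution"

# Map each concrete type to its handler, mirroring the Lua table
# keyed by metatable above.
dispatch = {Rock: fire_at_rock, Stone: fire_at_stone}

def get_firing_solution(item):
    handler = dispatch.get(type(item))
    return handler(item) if handler else None
```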
I think you're trying to do the method dispatch in the wrong place. (This problem is symptomatic of a difficulty with *all* of these "automated" ways of making Lua interact with C or C++: with each of them, there's some magic going on behind the scenes, and it's not always obvious how to make it work. I don't understand why more people don't just use Lua's C API.) I had a look at the Lunar web pages, and it looks to me as if you need to create a `methods` table on type `T` and then call the `Luna<T>::Register` method. There's a [simple example on the web](http://lua-users.org/wiki/SimplerCppBinding). If I'm reading the code correctly, none of the glue code in your question is actually the recommended way of doing things with Lunar. (I'm also assuming that you can implement these methods entirely as C++ calls.) This is all pretty dodgy because the documentation on Lunar is thin. A sensible alternative would be to do all the work yourself, and just associate each C++ type with a Lua table containing its methods. Then you have the Lua `__index` metamethod consult that table, and Bob's your uncle. Lunar is doing something *close* to this, but it's sufficiently dressed up with C++ templates and other goo that I'm not sure how to make it work. The template stuff is very clever. You might want either to take the time to understand deeply how it works, or to reconsider if and how you want to use it. **Summary**: for each class, make an explicit methods table, and register each class using the Lunar `Register` method. Or roll your own.
Lua, C++, and poor man's subclassing
[ "", "c++", "oop", "lua", "" ]
```
public function init(){
    $this->view->user = Zend_Auth::getInstance()->getIdentity();
    $this->view->siteName = Zend_Registry::get('config')->site->name;
    $this->view->menu = $this->_helper->generateMenu(Zend_Auth::getInstance()->getIdentity());
    $this->view->slogan = Zend_Registry::get('config')->site->slogan;
}
```

This is the init method in all of my controllers across all modules. Is there a place I can put this code so it executes on every request, regardless of the module/controller being called?
You can extend Zend\_Controller\_Action:

```
class My_Controller_Action extends Zend_Controller_Action
{
    public function init()
    {
        $this->view->user = Zend_Auth::getInstance()->getIdentity();
        $this->view->siteName = Zend_Registry::get('config')->site->name;
        $this->view->menu = $this->_helper->generateMenu(Zend_Auth::getInstance()->getIdentity());
        $this->view->slogan = Zend_Registry::get('config')->site->slogan;
    }
}
```

Then you just change your controllers to extend My\_Controller\_Action rather than Zend\_Controller\_Action. Just keep in mind that if you need to add additional code to the init method of a controller, you'll have to invoke parent::init() as well:

```
class FooController extends My_Controller_Action
{
    public function init()
    {
        parent::init();
        // Do something.
    }

    public function indexAction()
    {
        // ...
    }
}
```
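The inheritance hook this answer relies on is the same in any OO language; here is a language-neutral sketch in Python (class names and view fields are made up for illustration). The key point is that a subclass overriding the shared initializer must remember to call the parent's version:

```python
class BaseController:
    def init(self):
        # Shared setup every controller needs (stands in for the
        # Zend_Auth / registry wiring in the PHP above).
        self.view = {"site_name": "MySite", "user": "guest"}

class FooController(BaseController):
    def init(self):
        super().init()             # equivalent of parent::init() in PHP
        self.view["extra"] = True  # controller-specific setup
```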
I'd rather advise you to write a plugin by extending Zend\_Controller\_Plugin\_Abstract; that is its purpose. This way, you have no need to do anything anywhere in your controllers. Then you can use the registry to access your data...

```
class My_Controller_Plugin_Acl extends Zend_Controller_Plugin_Abstract
{
    protected $_auth = null;
    protected $_acl = null;

    public function __construct (Zend_Auth $auth, Zend_Acl $acl)
    {
        $this->_auth = $auth;
        $this->_acl = $acl;
    }

    public function preDispatch(Zend_Controller_Request_Abstract $request)
    {
        //some code
    }
}
```

And then in your bootstrap.php:

```
$this->_front->registerPlugin(new My_Controller_Plugin_Acl($auth, $acl));
```

<http://framework.zend.com/manual/en/zend.controller.plugins.html>
How do I centralize code from my init functions in all controllers?
[ "", "php", "zend-framework", "" ]
Does anyone know of an algorithm that will merge treenodes in the following way? ``` treeA \ child a \node(abc) \ child b \node(xyz) + treeB \ child a \node(qrs) \ child b \node(xyz) \node(pdq) \ child c \node(pdq) = // do merge treeMerged \ child a \node(abc) \node(qrs) \ child b \node(xyz) \node(pdq) \ child c \node(pdq) ``` Any help would be greatly appreciated.
Well, once I actually took the time to think about it, the solution turns out to be far more simple than I anticipated. (I've posted the critical part of the code below) ``` private TreeNode DoMerge(TreeNode source, TreeNode target) { if (source == null || target == null) return null; foreach (TreeNode n in source.Nodes) { // see if there is a match in target var match = FindNode(n, target.Nodes); // match paths if (match == null) { // no match was found so add n to the target target.Nodes.Add(n); } else { // a match was found so add the children of match DoMerge(n, match); } } return target; } ``` Still interested to know if someone has a better solution?
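The same recursive merge can be sketched with nested dicts (Python here purely for illustration; keys play the role of node labels, mirroring the `FindNode` lookup in the C# above):

```python
def merge_trees(source, target):
    # Recursively merge `source` into `target`, matching children by label.
    for label, child in source.items():
        if label in target:
            merge_trees(child, target[label])  # match found: merge the children
        else:
            target[label] = child              # no match: adopt the whole subtree
    return target
```

Note this sketch adopts unmatched subtrees by reference; clone them first if the source tree must stay independent.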
Ok, I'll admit, when I first started messing with this, I didn't think it would be too hard, so I figured I'd try to do it using LINQ. It came out to be nuts, but it works. I'm SURE there are more elegant and efficient algorithms, but here it is! First, I have a ToEnumerable extension method on the TreeNodeCollection class:

```
public static class TreeNodeCollectionExtensions
{
    public static IEnumerable<TreeNode> ToEnumerable(this TreeNodeCollection nodes)
    {
        foreach (TreeNode node in nodes)
        {
            yield return node;
        }
    }
}
```

Then, I implement a custom comparer:

```
public class TreeNodeComparer : IEqualityComparer<TreeNode>
{
    public bool Equals(TreeNode x, TreeNode y)
    {
        return x.Text == y.Text;
    }

    public int GetHashCode(TreeNode obj)
    {
        return obj.Text.GetHashCode();
    }
}
```

And finally, the craziness:

```
private TreeView MergeTreeViews(TreeView tv1, TreeView tv2)
{
    var result = new TreeView();

    foreach (TreeNode node in tv2.Nodes)
    {
        result.Nodes.Add(node.Clone() as TreeNode);
    }

    foreach (TreeNode node in tv1.Nodes)
    {
        var nodeOnOtherSide = result.Nodes.ToEnumerable()
            .SingleOrDefault(tr => tr.Text == node.Text);

        if (nodeOnOtherSide == null)
        {
            TreeNode clone = node.Clone() as TreeNode;
            result.Nodes.Add(clone);
        }
        else
        {
            var n = node.Nodes.ToEnumerable()
                .Where(t => !(nodeOnOtherSide.Nodes.ToEnumerable()
                    .Contains(t, new TreeNodeComparer())));

            foreach (TreeNode subNode in n)
            {
                TreeNode clone = subNode.Clone() as TreeNode;
                nodeOnOtherSide.Nodes.Add(clone);
            }
        }
    }

    return result;
}
```

The way I coded it was that it returns a third "merged" TreeView. You can change the code so that it takes a third TreeView as a parameter, so that you can pass in a TreeView you may already have. Again, I'm SURE there are better ways to do this, but it SHOULD work. One more thing I'd like to point out: this will only work for a TreeView that is two layers deep.
Merging Treenodes
[ "", "c#", "algorithm", "" ]
I came across this JavaScript function and I don't quite understand what it's doing, especially the use of 0xF. What does the 0xF do, exactly? It looks like a null nibble to me.

```
function() {
    var g = "";
    for(var i = 0; i < 32; i++)
        g += Math.floor(Math.random() * 0xF).toString(0xF);
    return g;
}
```
`0xF` == 15. It's simply hexadecimal notation. However, that snippet is not actually creating a GUID, it's just stringing a bunch of random integers together. It's not possible to create a GUID in JavaScript, because generating one requires parameters that the VM can't access (network address, etc). --- See also my answer to this question: [How to create a GUID in Javascript?](https://stackoverflow.com/questions/105034/how-to-create-a-guid-in-javascript)
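What the snippet actually produces can be sketched in Python: 32 characters drawn from the base-15 digits, since `Math.floor(Math.random() * 0xF)` yields 0–14 and `.toString(0xF)` renders that number in base 15. Note the quirk: the character 'f' can never appear, because 14 is 'e' in base 15. (Function name is made up for illustration.)

```python
import random

def pseudo_guid():
    # 32 random base-15 digits, mirroring the JavaScript snippet above.
    return ''.join('0123456789abcde'[random.randrange(15)] for _ in range(32))
```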
0xF is hex notation (decimal 15) EDIT: It looks like it's picking a random base-15 digit (0-9, a-e) 32 times
How does this JavaScript function create a GUID?
[ "", "javascript", "guid", "" ]
We've recently upgraded one of our SSRS2005 servers to SSRS2008 and have found that all of our applications that utilized the Reporting Services web service for producing reports no longer work. The first issue is that the web service itself was no longer available at ReportService.asmx, and had been replaced by ReportService2005.asmx. We changed our web reference to the new location and we are now getting the message that the .Render() method is not a part of ReportService2005.asmx. What has the following code implementation been replaced with in SSRS2008?

```
report = rpt.Render(ReportPath + ReportName, this.Format.ToString(), null,
                    devInfo.ToString(), parameters, null, null,
                    out encoding, out mimetype, out parametersUsed,
                    out warnings, out streamids);
```

**EDIT** After doing some more research, it turns out that ReportService.asmx was part of SQL 2000 Reporting Services, which has now been deprecated out of SQL 2008 Reporting Services.
Since ReportService.asmx was removed, as you note, you should use ReportExecution2005.asmx and then change the report parameters as required in your code.
Here are a couple of articles on migrating from SSRS 2005 to SSRS 2008 * [Upgrading Reports](http://technet.microsoft.com/en-us/library/ms143674.aspx) * [Reporting Services Backward Compatibility](http://technet.microsoft.com/en-us/library/ms143251.aspx) * [Report Server Web Service Endpoints](http://technet.microsoft.com/en-us/library/ms155398.aspx)
What has .Render() on SSRS2000 WebService been replaced with on SSRS2008?
[ "", "c#", "reporting-services", "reportingservices-2005", "ssrs-2008", "" ]
For some reason, we have a script that creates batch files to XCOPY our compiled assemblies, config files, and various other files to a network share for our beta testers. We do have an installer, but some don't have the permissions required to run the installer, or they're running over Citrix. If you vomited all over your desk at the mentions of XCOPY and Citrix, use it as an excuse to go home early. You're welcome. The code currently has hundreds of lines like: ``` CreateScripts(basePath, "Client", outputDir, FileType.EXE | FileType.DLL | FileType.XML | FileType.CONFIG); ``` It used to be worse, with 20 int parameters (one per file type) representing whether or not to copy that file type to the output directory. These hundreds of lines create upload/download batch files with thousands of XCOPY lines. In our setup projects, we can reference things like "Primary output from Client" and "Content Files from Client". I'd love to be able to do that programmatically from a non-setup project, but I'm at a loss. Obviously MS does it, either using an API or by parsing the .csproj files. How would I go about doing this? I'm just looking for a way to get a list of files for any of the setup categories, i.e.: * Primary Output * Localized Resources * Content Files * Documentation Files **EDIT**: I have a setup project like Hath suggested, and it's halfway to what I'm looking for. The only problem keeping that from being a perfect solution is that multiple projects depend on the same assemblies being in their own folder, and the setup will only copy the file once. Example: Projects Admin, Client, and Server all rely on ExceptionHandler.dll, and Admin and Client both rely on Util.dll, while Server does not. 
This is what I'm looking for: * Admin + Admin.exe + Admin.exe.config + ExceptionHandler.dll + Util.dll * Client + Client.exe + Client.exe.config + ExceptionHandler.dll + Util.dll * Server + Server.exe + Server.exe.config + ExceptionHandler.dll Since the referenced assemblies are all the same, what I get is this: * Admin + Admin.exe + Admin.exe.config + ExceptionHandler.dll + Util.dll * Client + Client.exe + Client.exe.config * Server + Server.exe + Server.exe.config This causes a FileNotFoundException when either Client or Server can't find one of the two DLLs it's expecting. Is there a setup property I'm missing to make it always copy the output, even if it's duplicated elsewhere in another project's output? **EDIT AGAIN**: All referenced DLLs are set to "Copy Local", and always have been. I found a decent article on [using NAnt and XSLT to grab the list of files](http://pwigle.wordpress.com/tag/continuous-integration/), so that may be a possible solution as well, as neouser99 suggested. **ACCEPTED SOLUTION**: I'm pretty much back where I started. All .exe and .dll outputs are put into a "bin" directory in the setup project, loosely packed. The other per-application folders contain shortcuts to the executable in that directory. The difference now is, I'm going to add a custom action to the installer to use reflection, enumerate the dependencies for each executable output, and copy the .exe and .dll files to the separate directories. Bit of a pain, as I just assumed there was a way to programmatically detect what files would be included via some setup library.
why not use another setup project and just set the 'Package files' setting to As Loose uncompressed files (setup project->properties)? then share the folder.. or something. edit: I see, you have 3 folders for your outputs. but the setup project only detects the ExceptionHandler.dll and Util.dll once, so it will just pick the first folder and put it in there. You could do a setup project for each project - bit annoying maybe.. You could manually add in the dll's to the projects that are missing the assembly's either by adding in the File by 'add file' or 'add assembly' or 'add project output' if you have those projects in the same solution.. (I doubt that's the case though). or just dump all of them into one output directory...
Although it's designed as a build tool, you might find [NAnt](http://nant.sourceforge.net/) to be extremely useful in what you are talking about. The tasks (build, copy, move, delete, etc.) that you can define allow for very fine-grained file lookups, up to general, full folders. If you also incorporate NAnt into your build process, I think you could find that it helps out in more ways then one.
Getting Content Files and Primary Output programmatically
[ "", "c#", "installation", "setup-project", "" ]
Here's a Django model class I wrote. This class gets a `KeyError` when I call `get_object_or_404` from Django (I conceive that the `KeyError` is raised due to no kwargs being passed to `__init__` by the get function; arguments are all positional). Interestingly, it does not get an error when I call `get_object_or_404` from console. I wonder why, and if the below code is the correct way (i.e., using `__init__` to populate the link field) to construct this class.

```
class Link(models.Model):
    event_type = models.IntegerField(choices=EVENT_TYPES)
    user = models.ForeignKey(User)
    created_on = models.DateTimeField(auto_now_add = True)
    link = models.CharField(max_length=30)
    isActive = models.BooleanField(default=True)

    def _generate_link(self):
        prelink = str(self.user.id)+str(self.event_type)+str(self.created_on)
        m = md5.new()
        m.update(prelink)
        return m.hexdigest()

    def __init__(self, *args, **kwargs):
        self.user = kwargs['user'].pop()
        self.event_type = kwargs['event_type'].pop()
        self.link = self._generate_link()
        super(Link,self).__init__(*args,**kwargs)
```
There's no reason to write your own `__init__` for Django model classes. I think you'll be a lot happier without it. Almost anything you think you want to do in `__init__` can be better done in `save`.
``` self.user = kwargs['user'].pop() self.event_type = kwargs['event_type'].pop() ``` You're trying to retrieve an entry from the dictionary, and then call its pop method. If you want to remove and return an object from a dictionary, call `dict.pop()`: ``` self.user = kwargs.pop('user') ``` Of course, this will fail with a `KeyError` when `"user"` is not present in `kwargs`. You'll want to provide a default value to pop: ``` self.user = kwargs.pop('user', None) ``` This means "if `"user"` is in the dictionary, remove and return it. Otherwise, return `None`". Regarding the other two lines: ``` self.link = self._generate_link() super(Link,self).__init__(*args,**kwargs) ``` `super().__init__()` will set `link` to something, probably `None`. I would reverse the lines, to something like this: ``` super(Link,self).__init__(*args,**kwargs) self.link = self._generate_link() ``` You might want to add a test before setting the link, to see if it already exists (`if self.link is not None: ...`). That way, links you pass into the constructor won't be overwritten.
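A quick plain-Python sketch (no Django required) of the difference between the two call shapes discussed above:

```python
def init_link(**kwargs):
    # kwargs['user'].pop() first does a dict lookup (KeyError if 'user' is
    # absent), then calls .pop() on whatever value it finds.
    # kwargs.pop('user', None) removes and returns the value in one step,
    # falling back to None when the key is missing -- no KeyError.
    user = kwargs.pop('user', None)
    event_type = kwargs.pop('event_type', None)
    # whatever is left over would be forwarded to the parent __init__
    return user, event_type, kwargs

# with keyword arguments present
print(init_link(user='alice', event_type=2))
# with no keyword arguments at all (the positional get_object_or_404 case)
print(init_link())
```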
keyerror inside django model class __init__
[ "", "python", "django", "django-models", "" ]
This question is about the PHP parsing engine. When I include a file multiple times in a single runtime, does PHP tokenize it every time or does it keep a cache and just run the compiled code on subsequent inclusions? **EDIT:** More details: I am not using an external caching mechanism and I am dealing with the *same* file being included multiple times during *the same request*. **EDIT 2:** The file I'm trying to include contains procedural code. I want it to be executed every time I `include()` it, I am just curious if PHP internally keeps track of the tokenized version of the file for speed reasons.
By default the file is parsed every time it is (really) included, even within the same php instance. But there are opcode caches like e.g. [apc](http://de.php.net/apc)

```
<?php
$i = 'include_test.php';
file_put_contents($i, '<?php $x = 1;');
include $i;
echo $x, ' ';
file_put_contents($i, '<?php $x = 2;');
include $i;
echo $x, ' ';
```

```
1 2
```

(ok, weak proof. PHP could check whether the file's mtime has changed. And that's what apc does, I think. But without a cache PHP really doesn't)
You should use a PHP bytecode cache such as APC. That will accomplish what you want, to re-use a compiled version of a PHP page on subsequent requests. Otherwise, PHP reads the file, tokenizes and compiles it on every request.
Is the same file tokenized every time I include it?
[ "", "php", "parsing", "" ]
I'm using the DOM extension in PHP to build some HTML documents, and I want the output to be formatted nicely (with new lines and indentation) so that it's readable, however, from the many tests I've done: 1. "formatOutput = true" doesn't work at all with saveHTML(), only saveXML() 2. Even if I used saveXML(), it still only works on elements created via the DOM, not elements that are included with loadHTML(), even with "preserveWhiteSpace = false" *If anyone knows differently I'd really like to know how they got it to work.* So, I have a DOM document, and I'm using saveHTML() to output the HTML. As it's coming from the DOM I know it is valid, there's no need to "Tidy" or validate it in any way. I'm simply looking for a way to get nicely formatted output from the output I receive from the DOM extension. *NB. As you may have guessed, I don't want to use the Tidy extension as a) it does a lot more that I need it too (the markup is already valid) and b) it actually makes changes to the HTML content (such as the HTML 5 doctype and some elements).* **Follow Up:** OK, with the help of the answer below I've worked out why the DOM extension wasn't working. Although the given example works, it still wasn't working with my code. With the help of [this](https://www.php.net/manual/en/domdocument.savexml.php#76867) comment I found that if you have any text nodes where isWhitespaceInElementContent() is true no formatting will be applied beyond that point. This happens regardless of whether or not preserveWhiteSpace is false. The solution is to remove all of these nodes (although I'm not sure if this may have adverse effects on the actual content).
you're right, there seems to be no indentation for HTML ([others are also confused](http://bugs.php.net/bug.php?id=27783)). XML works, even with loaded code. ``` <?php function tidyHTML($buffer) { // load our document into a DOM object $dom = new DOMDocument(); // we want nice output $dom->preserveWhiteSpace = false; $dom->loadHTML($buffer); $dom->formatOutput = true; return($dom->saveHTML()); } // start output buffering, using our nice // callback function to format the output. ob_start("tidyHTML"); ?> <html> <head> <title>foo bar</title><meta name="bar" value="foo"><body><h1>bar foo</h1><p>It's like comparing apples to oranges.</p></body></html> <?php // this will be called implicitly, but we'll // call it manually to illustrate the point. ob_end_flush(); ?> ``` result: ``` <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"> <html> <head> <title>foo bar</title> <meta name="bar" value="foo"> </head> <body> <h1>bar foo</h1> <p>It's like comparing apples to oranges.</p> </body> </html> ``` the same with saveXML() ... ``` <?xml version="1.0" standalone="yes"?> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd"> <html> <head> <title>foo bar</title> <meta name="bar" value="foo"/> </head> <body> <h1>bar foo</h1> <p>It's like comparing apples to oranges.</p> </body> </html> ``` probably forgot to set preserveWhiteSpace=false before loadHTML? > disclaimer: i stole most of the demo code from [tyson clugg/php manual comments](http://www.php.net/manual/en/domdocument.savehtml.php#52139). lazy me. --- > **UPDATE:** i now remember some years ago i tried the same thing and ran into the same problem. i fixed this by applying a dirty workaround (wasn't performance critical): i just somehow converted around between SimpleXML and DOM until the problem vanished. i suppose the conversion got rid of those nodes. 
maybe load with dom, import with `simplexml_import_dom`, then output the string, parse this with DOM again and *then* printed it pretty. as far as i remember this worked (but it was *really* slow).
The result:

```
<!DOCTYPE html>
<html>
	<head>
		<title>My website</title>
	</head>
</html>
```

Please consider:

```
function indentContent($content, $tab="\t"){
    // add marker linefeeds to aid the pretty-tokeniser (adds a linefeed between all tag-end boundaries)
    $content = preg_replace('/(>)(<)(\/*)/', "$1\n$2$3", $content);

    $token = strtok($content, "\n");
    // now indent the tags
    $result = '';           // holds formatted version as it is built
    $pad = 0;               // initial indent
    $matches = array();     // returns from preg_matches()

    // scan each line and adjust indent based on opening/closing tags
    while ($token !== false && strlen($token) > 0){
        $padPrev = $padPrev ?: $pad;    // previous padding //Artis
        $token = trim($token);

        // test for the various tag states
        if (preg_match('/.+<\/\w[^>]*>$/', $token, $matches)){
            // 1. open and closing tags on same line - no change
            $indent = 0;
        } elseif (preg_match('/^<\/\w/', $token, $matches)){
            // 2. closing tag - outdent now
            $pad--;
            if ($indent > 0) $indent = 0;
        } elseif (preg_match('/^<\w[^>]*[^\/]>.*$/', $token, $matches)){
            // 3. opening tag - don't pad this one, only subsequent tags (only if it isn't a void tag)
            foreach ($matches as $m){
                // void elements according to http://www.htmlandcsswebdesign.com/articles/voidel.php
                if (preg_match('/^<(area|base|br|col|command|embed|hr|img|input|keygen|link|meta|param|source|track|wbr)/im', $m)){
                    $voidTag = true;
                    break;
                }
            }
            $indent = 1;
        } else {
            // 4. no indentation needed
            $indent = 0;
        }

        if ($token == "<textarea>") {
            $line = str_pad($token, strlen($token) + $pad, $tab, STR_PAD_LEFT); // pad the line with the required number of leading tabs
            $result .= $line;       // add to the cumulative result, without a linefeed (textarea content must stay untouched)
            $token = strtok("\n");  // get the next token
            $pad += $indent;        // update the pad size for subsequent lines
        } elseif ($token == "</textarea>") {
            $line = $token;
            $result .= $line . "\n";    // add to the cumulative result, with linefeed
            $token = strtok("\n");
            $pad += $indent;
        } else {
            $line = str_pad($token, strlen($token) + $pad, $tab, STR_PAD_LEFT);
            $result .= $line . "\n";
            $token = strtok("\n");
            $pad += $indent;
            if ($voidTag) {
                $voidTag = false;
                $pad--;
            }
        }
    }
    return $result;
}

//$htmldoc - DOMDocument object!
$niceHTMLwithTABS = indentContent($htmldoc->saveHTML(), $tab="\t");
echo $niceHTMLwithTABS;
```

Will result in HTML that has:

* Indentation based on "levels"
* Line breaks after block-level elements
* While inline and self-closing elements are not affected

The function (which is a method of a class I use) is largely based on: <https://stackoverflow.com/a/7840997/7646824>
PHP "pretty print" HTML (not Tidy)
[ "", "php", "html", "format", "tidy", "" ]
Right now I've got a mod\_wsgi script that's structured like this.. ``` def application(environ, start_response): status = '200 OK' output = 'Hello World!' response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(output)))] start_response(status, response_headers) return [output] ``` I was wondering if anyone knows of a way to change this to operate on a `yield` basis instead of `return`, that way I can send the page as it's being generated and not only once it's complete, so the page loading can go faster for the user. However, whenever I swap the output for a list and yield it in the application(), it throws an error: ``` TypeError: sequence of string values expected, value of type list found ```
```
def application(environ, start_response):
    status = '200 OK'
    output = 'Hello World!'

    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)

    yield output
```

> *"However, whenever I swap the output for a list and yield it in the application(), it throws an error:"*

Well, don't yield the list. Yield each element instead:

```
for part in mylist:
    yield part
```

or if the list is the entire content, just:

```
return mylist
```

Because the list is already an iterable and can be returned as-is.
Note that 'yield' should be avoided unless absolutely necessary. In particular 'yield' will be inefficient if yielding lots of small strings. This is because the WSGI specification requires that after each string yielded that the response must be flushed. For Apache/mod\_wsgi, flushing means each string being forced out through the Apache output bucket brigade and filter system and onto the socket. Ignoring the overhead of the Apache output filter system, writing lots of small strings onto a socket is simply just bad to begin with. This problem also exists where an array of strings is returned from an application as a flush also has to be performed between each string in the array. This is because the string is dealt with as an iterable and not a list. Thus for a preformed list of strings, it is much better to join the individual strings into one large string and return a list containing just that one string. Doing this also allows a WSGI implementation to automatically generate a Content-Length for the response if one wasn't explicitly provided. Just make sure that when joining all the strings in a list into one, that the result is returned in a list. If this isn't done and instead the string is returned, that string is treated as an iterable, where each element in the string is a single character string. This results in a flush being done after every character, which is going to be even worse than if the strings hadn't been joined.
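A minimal sketch of that advice (the fragment list is hypothetical, just for illustration): join the pre-formed pieces into one string and return it inside a one-element list, so the WSGI layer flushes once instead of once per fragment:

```python
def application(environ, start_response):
    # Hypothetical: imagine these were produced piecemeal by templates.
    parts = ['<html><body>', 'Hello', ' ', 'World!', '</body></html>']

    # Join first: one string, one flush, and Content-Length is trivial.
    body = ''.join(parts)
    start_response('200 OK', [('Content-type', 'text/html'),
                              ('Content-Length', str(len(body)))])
    return [body]   # NOT `return body` -- a bare string iterates char by char
```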
mod_wsgi yield output buffer instead of return
[ "", "python", "mod-wsgi", "yield", "" ]
We have a class whose semantic behaviour is like the following:

```
struct Sample
{
  ~Sample() throw() { throw 0; }
};

void f ()
{
  try {
    delete new Sample;
  }
  catch (...){
  }
}
```

I know that throwing exceptions in dtors is evil; but the relinquishment of a 3rd Party library resource is throwing an exception (but can be re-acquired immediately, something strange!). There is also a pool of this resource, say an array/container of class Sample.

So, there are two cases to consider: destruction of a dynamically allocated object and destruction of a dynamically allocated array of objects.

Currently the application crashes randomly at different execution-points only when the array version (pool) is used. We believe this is due to memory corruption, but then why does the unpooled version work? What happens to the allocated memory? Is it undefined behaviour? What happens in the case of an array? Do the dtors (at least, if not the memory deallocation) of all the elements of an array (say if the dtor of the first element throws) get called?

Thanks in advance,

EDIT-1: Well, we tracked it down to dtors of some array-elements not being called. But the allocated memory does not seem to have problems... Following is section 5.3.5.7 (of SC22-N-4411.pdf):

```
If the value of the operand of the delete-expression is not a null pointer value, the delete-expression will call a deallocation function (3.7.4.2). Otherwise, it is unspecified whether the deallocation function will be called. [ Note: The deallocation function is called regardless of whether the destructor for the object or some element of the array throws an exception. —end note ]
```

Looks like memory is always deallocated in such cases. Am I right in interpreting the standard?
There are two things that could happen in this situation: * terminate() is called * undefined behaviour In neither case can dynamically allocated memory be guaranteed to be released (except that application termination will of course return all resources to the OS).
C++ will terminate your application if a dtor throws an exception while the stack is being unwound because of another exception. As it's practically impossible to determine under what circumstances a dtor is called, the standard rule is to **never** throw exceptions from dtors.

If your 3rd Party Library is throwing an exception, catch it in your dtor, log it, or save its state to some static cache where you can pick it up "later", but don't allow it to escape out of your dtor. Do this, then see if your collection of objects works; it could be the escaping exception that is causing your crashes.

**UPDATE**

Unfortunately I'm not a spec lawyer, preferring the [Fisherman's Friend](http://en.wikipedia.org/wiki/Fisherman's_Friend) approach of "suck it and see". I'd write a small app with a class that allocates a meg off the heap. In a loop, make an array of the classes, have the class's dtor throw an exception, and throw and catch an exception at the end of the loop (causing the stack to unwind and call the dtors of the array of classes) and watch to see your VM usage go through the roof (which I'm pretty sure it will).

Sorry I can't give you chapter and verse, but that's my "belief"
Throwing Destructors, Memory Corruption?
[ "", "c++", "exception", "destructor", "" ]
``` public $form = array ( array( 'field' => 'email', 'params' => array( array( 'rule' => 'email', 'on' => 'create', 'required' => true, 'error' => 'The email is invalid!' ), array( 'rule' => 'email', 'on' => 'update', 'required' => false, 'error' => 'The email is invalid!' ) ) ) ); public function onlyNeeded($action) { $form = $this->form; $action = $this->action; foreach ($form as $formelement) { $field = $formelement['field']; $paramsgroup = $formelement['params']; if ($paramsgroup['on'] != $action) { form = removeparamsgroup($form, $action); } } return $form; } ``` How do I do the `removeparamsgroup()` function? There are [index]es, not only [name]s! Do you know what I mean? array(array( twice!
If you get the array key in the foreach loop, you can unset the correct array index using using that. You also need to loop over each param of each form element, which you weren't doing in your example. ``` public function onlyNeeded($action) { $form = $this->form; //get $formelement by reference so it can be modified foreach ($form as & $formelement) { //$key becomes the index of current $param in $formelement['params'] foreach ($formelement['params'] as $key => $param) { if ($param['on'] != $action) { unset($formelement['params'][$key]); } } } return $form; } ```
unset($form['params']) ? What do you mean by remove?
editing a multidimensional array with [index]es, not only [name]s
[ "", "php", "function", "multidimensional-array", "foreach", "" ]
I was wondering if it's possible to mock a Game object in order to test my DrawableGameComponent component. I know that mocking frameworks need an interface in order to function, but I need to mock the actual [Game](http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.game.aspx) object.

edit: Here is a [link](http://forums.xna.com/forums/t/30645.aspx) to the respective discussion on the XNA Community forums. Any help?
There are some good posts in that forum on the topic of unit testing. Here's my personal approach to unit testing in XNA: * Ignore the Draw() method * Isolate complicated behavior in your own class methods * Test the tricky stuff, don't sweat the rest Here's an example of a test to confirm that my Update method moves Entities the right distance between Update() calls. (I'm using [NUnit](http://en.wikipedia.org/wiki/NUnit).) I trimmed out a couple lines with different move vectors, but you get the idea: you shouldn't need a Game to drive your tests. ``` [TestFixture] public class EntityTest { [Test] public void testMovement() { float speed = 1.0f; // units per second float updateDuration = 1.0f; // seconds Vector2 moveVector = new Vector2(0f, 1f); Vector2 originalPosition = new Vector2(8f, 12f); Entity entity = new Entity("testGuy"); entity.NextStep = moveVector; entity.Position = originalPosition; entity.Speed = speed; /*** Look ma, no Game! ***/ entity.Update(updateDuration); Vector2 moveVectorDirection = moveVector; moveVectorDirection.Normalize(); Vector2 expected = originalPosition + (speed * updateDuration * moveVectorDirection); float epsilon = 0.0001f; // using == on floats: bad idea Assert.Less(Math.Abs(expected.X - entity.Position.X), epsilon); Assert.Less(Math.Abs(expected.Y - entity.Position.Y), epsilon); } } ``` Edit: Some other notes from the comments: **My Entity Class**: I chose to wrap all my game objects up in a centralized Entity class, that looks something like this: ``` public class Entity { public Vector2 Position { get; set; } public Drawable Drawable { get; set; } public void Update(double seconds) { // Entity Update logic... if (Drawable != null) { Drawable.Update(seconds); } } public void LoadContent(/* I forget the args */) { // Entity LoadContent logic... if (Drawable != null) { Drawable.LoadContent(seconds); } } } ``` This gives me a lot of flexibility to make subclasses of Entity (AIEntity, NonInteractiveEntity...) 
which probably override Update(). It also lets me subclass Drawable freely, without the hell of n^2 subclasses like `AnimatedSpriteAIEntity`, `ParticleEffectNonInteractiveEntity` and `AnimatedSpriteNoninteractiveEntity`. Instead, I can do this: ``` Entity torch = new NonInteractiveEntity(); torch.Drawable = new AnimatedSpriteDrawable("Animations\litTorch"); SomeGameScreen.AddEntity(torch); // let's say you can load an enemy AI script like this Entity enemy = new AIEntity("AIScritps\hostile"); enemy.Drawable = new AnimatedSpriteDrawable("Animations\ogre"); SomeGameScreen.AddEntity(enemy); ``` **My Drawable class**: I have an abstract class from which all my drawn objects are derived. I chose an abstract class because some of the behavior will be shared. It'd be perfectly acceptable to define this as an [interface](http://msdn.microsoft.com/en-us/library/ms173156.aspx) instead, if that's not true of your code. ``` public abstract class Drawable { // my game is 2d, so I use a Point to draw... public Point Coordinates { get; set; } // But I usually store my game state in a Vector2, // so I need a convenient way to convert. If this // were an interface, I'd have to write this code everywhere public void SetPosition(Vector2 value) { Coordinates = new Point((int)value.X, (int)value.Y); } // This is overridden by subclasses like AnimatedSprite and ParticleEffect public abstract void Draw(SpriteBatch spriteBatch, Rectangle visibleArea); } ``` The subclasses define their own Draw logic. In your tank example, you could do a few things: * Add a new entity for each bullet * Make a TankEntity class which defines a List, and overrides Draw() to iterate over the Bullets (which define a Draw method of their own) * Make a ListDrawable Here's an example implementation of ListDrawable, ignoring the question of how to manage the list itself. ``` public class ListDrawable : Drawable { private List<Drawable> Children; // ... 
public override void Draw(SpriteBatch spriteBatch, Rectangle visibleArea) { if (Children == null) { return; } foreach (Drawable child in children) { child.Draw(spriteBatch, visibleArea); } } } ```
frameworks like [MOQ](http://code.google.com/p/moq/) and [Rhino Mocks](http://ayende.com/projects/rhino-mocks.aspx) don't specifically need an interface. They can mock any non-sealed and/or abstract class as well. Game is an abstract class, so you shouldn't have any trouble mocking it :-) The one thing to note with at least those two frameworks is that to set any expectations on methods or properties, they must be virtual or abstract. The reason for this is that the mocked instance it generates needs to be able to override. The typemock mentioned by IAmCodeMonkey I believe has a way around this, but I don't think typemock is free, while the two I mentioned are. As an aside, you can also check out a project of mine that could help in creating unit tests for XNA games without the need to make mocks: <http://scurvytest.codeplex.com/>
XNA mock the Game object or decoupling your Game
[ "", "c#", "unit-testing", "mocking", "xna", "" ]
Is there anything I can do while coding in ASP.NET to make my website come out on top in search engines for general keywords? (For example: cars... assuming that my site is www.joshautos123.com)

Thanks
This has nothing to do with ASP.NET, Josh. You need to start investigating SEO in general (Search Engine Optimization). This is a pretty broad topic ([more info here](http://en.wikipedia.org/wiki/Search_engine_optimization)) covering everything from keywords, content, url formatting, and cross linking to lots of different sites/resources.

The best thing you can do if you're only developing it (and not responsible for marketing) is to put together a well-designed, clean, standards-compliant site.
You can follow the search engine optimization guidelines for your headers, images, etc.; that's the way you can achieve it.

Anyway, you can read [**this tutorial**](http://professionalaspnet.com/archive/2008/08/16/Search-Engine-Optimization-for-ASP.NET-Web-Sites.aspx); it will help you.

For more, you can create a good master page with good meta tags. It will help you...
asp.net program affects result in search engine
[ "", "c#", "asp.net", "search-engine", "" ]
I have a variable in code that can have file path or url as value. Examples: ``` http://someDomain/someFile.dat file://c:\files\someFile.dat c:\files\someFile.dat ``` So there are two ways to represent a file and I can't ignore any of them. What is the correct name for such a variable: path, url, location? I'm using a 3rd party api so I can't change semantics or separate to more variables.
If the values are not opaque to your application you may find it better to model them as a class. Otherwise, whenever you are going to act upon the values you may find yourself writing code like this: ``` if (variable.StartsWith("http://") || variable.StartsWith("file://")) { // Handle url } else { // Handle file path } ``` You may fold some of the functionality regarding treatment of the values into your class, but it is properly better to treat it as an immutable value type. Use a descriptive name for your class like FileLocation or whatever fits your nomenclature. It will then be very natural to declare FileLocation variables named fileLocation or inputFileLocation or even fl if you are sloppy.
The first two are URLs, the third is a file path. Of course, the file:/// protocol is only referring to a file also. When using the [Uri](http://msdn.microsoft.com/en-us/library/system.uri.aspx) class, you can use the [IsFile](http://msdn.microsoft.com/en-us/library/system.uri.isfile.aspx) and the [LocalPath](http://msdn.microsoft.com/en-us/library/system.uri.localpath.aspx) properties to handle file:/// Uris, and in that case you should also name it like that.
Is file path a url?
[ "", "c#", "url", "file", "naming-conventions", "filepath", "" ]
*See the four lines in the Go() method below:* ``` delegate void Action<T>(T arg); delegate void Action(); void DoSomething<T>(Action<T> action) { //... } void DoSomething(Action action) { //... } void MyAction<T>(T arg) { //... } void MyAction() { //... } void Go<T>() { DoSomething<T>(MyAction<T>); // throws compiler error - why? DoSomething(new Action<T>(MyAction<T>)); // no problems here DoSomething(MyAction); // what's the difference between this... DoSomething(new Action(MyAction)); // ... and this? } ``` Note that the compiler error generated by the first call is: *The type arguments for method 'Action(T)' cannot be inferred from the usage. Try specifying the type arguments explicitly.*
There's no difference between `MyAction` and `new Action(MyAction)` (when they're both valid) other than the former won't work in C# 1. This is an `implicit method group conversion`. There are times that this isn't applicable, most notable when the compiler can't work out what kind of delegate you want, e.g. ``` Delegate foo = new Action(MyAction); // Fine Delegate bar = MyAction; // Nope, can't tell target type ``` This comes into play in your question because both of the methods involved are overloaded. This leads to headaches, basically. As for the generics side - it's interesting. Method groups don't get much love from C# 3 type inference - I'm not sure whether that's going to be improved in C# 4 or not. If you call a generic method and specify the type argument, type inference works fairly well - but if you try to do it the other way round, it fails: ``` using System; class Test { static void Main() { // Valid - it infers Foo<int> DoSomething<int>(Foo); // Valid - both are specified DoSomething<int>(Foo<int>); // Invalid - type inference fails DoSomething(Foo<int>); // Invalid - mismatched types, basically DoSomething<int>(Foo<string>); } static void Foo<T>(T input) { } static void DoSomething<T>(Action<T> action) { Console.WriteLine(typeof(T)); } } ``` Type inference in C# 3 is very complicated, and works well in most cases (in particular it's great for LINQ) but fails in a few others. In an ideal world, it would become easier to understand *and* more powerful in future versions... we'll see!
The non-generic implicit delegate creation is just syntactic sugar, so the compiler generates exactly the same code for ``` DoSomething(MyAction); ``` and ``` DoSomething(new Action(MyAction)); ``` as it can infer the type of the delegate directly from the method arguments & context. With the generic delegate, you have to specify the delegate type due to covariance and contravariance (see <http://msdn.microsoft.com/en-us/library/ms173174(VS.80).aspx> for details) - the T in Action can be a supertype to the T in the method, and it will still be accepted as a delegate method. So, you need to specify the T in the delegate explicitly as the compiler can't figure it out itself.
The difference between implicit and explicit delegate creation (with and without generics)
[ "", "c#", ".net", "generics", ".net-2.0", "delegates", "" ]
I am loading a radiobutton list from an enumeration (vertically displayed). I need to show text that describes each radiobutton selection. I am loading it in the codebehind.
There's quite a few aspects of the Enum class that I've found more and more uses for recently, and one of them is the GetNames Method. This method returns a string array of all of the names in a specified enum. This code assumes you have a RadioButtonList on your page named **RadioButtonList1**. ``` public enum AutomotiveTypes { Car, Truck, Van, Train, Plane } public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { string[] automotiveTypeNames = Enum.GetNames(typeof(AutomotiveTypes)); RadioButtonList1.RepeatDirection = RepeatDirection.Vertical; RadioButtonList1.DataSource = automotiveTypeNames; RadioButtonList1.DataBind(); } } ``` Give that a spin, and see if it does the trick for ya. Cheers!
You should be able to use the .Text property on the control.

<http://www.w3schools.com/ASPNET/control_radiobutton.asp>

EDIT: Actually I think I misread the question. I believe this is what you are looking for:

```
For Each val As [Enum] In [Enum].GetValues(GetType(YourEnum))
    ' add your radio button logic here
Next
```
How can I display text beside each radiobutton when loading a radiobutton list from an enumeration?
[ "", "c#", "asp.net", "radiobuttonlist", "" ]
I have several serializable classes that were compiled without specifying a serialVersionUID. I now need to add some data members to these classes but wish to preserve some already serialized objects. Is there anyway to find out the serialVersionUID of these serialized objects so that I can specify the same ID in the source code?
If you haven't modified the code yet you could probably run the serialver tool (part of the JDK) and see what it generates. I believe that will get you what the calculated `seriVersionUID` is. [Here is a brief article on serialver.](http://betweengo.com/2005/07/30/serialver/)
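If running the command-line tool is inconvenient, the same computed value can be read programmatically through `java.io.ObjectStreamClass`. A small sketch (the `Legacy` class is a made-up stand-in for one of your unmodified classes):

```java
import java.io.ObjectStreamClass;
import java.io.Serializable;

public class FindSuid {
    // Stand-in: one of your classes, compiled WITHOUT a serialVersionUID,
    // exactly as it looked when the old objects were serialized.
    static class Legacy implements Serializable {
        private String name;
        private int count;
    }

    public static void main(String[] args) {
        // lookup() computes the same default UID the JVM used at serialization time.
        long suid = ObjectStreamClass.lookup(Legacy.class).getSerialVersionUID();
        System.out.println("private static final long serialVersionUID = "
                + suid + "L;");
    }
}
```

Run it against the original class files (i.e. before adding the new fields), then paste the printed constant into the modified source.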
What I usually do is, I compile the code, try to run it, and the raised exception mentions the offending SVUID, so I can cut and paste it in my own code. Crude, but it works.
How can I find out the serialVersionUID of a serialized Java object?
[ "", "java", "serialization", "" ]
Given two java classes, A and B, where A is usually instantiated via B, such as: ``` A myA = B.createA(); ``` Can I create a subclass of A (let's call it SubA) and somehow have it be instantiated by the B.createA() method? (Note that I cannot modify A and B....) I know that not all instances of A are instances of SubA, thus I cannot do this: ``` SubA mySubA = B.createA(); ``` Similarly, I cannot cast it like this either: ``` SubA mySubA = (SubA) (B.createA()); ``` for the same reason -- it will get a ClassCastException. Am I being dense and forgetting something fundamental, or is there no way to do this? (Late addition: I'm so sorry, I should have mentioned that A and B have roughly 50 methods each, and all I want to do is add a single property to SubA, along with a getter and a setter. I'd really rather not implement all 50 of A's methods to invoke the corresponding method in the superclass's object.)
It sounds like what you'd really like is to modify the behavior of both the original `A` and `B`. In that case, you could try extending both classes (where the extension of `B` is purely to specify a slightly different factory method for creating `SubA`s). ``` class SubA extends A { /** This is the one special aspect of SubA justifying a sub-class. Using double purely as an example. */ private double specialProperty; public double getSpecialProperty() { return specialProperty; } public void setSpecialProperty(double newSP) { specialProperty = newSP; } public SubA() { super(); // Important differences between SubAs and As go here.... // If there aren't any others, you don't need this constructor. } // NOTE: you don't have to do anything else with the other methods of // A. You just inherit those. } class SubB extends B { // Purely for the purposes of a slightly different factory method public A createA() { return new SubA(); } // Or if you need a static method // (this is usually instead of the first choice) public static A createA() { return new SubA(); } } ``` Note that at this point, you could create one of your `SubB` factory objects and make it look like the original `B` like so: ``` B myNewB = new SubB(); A myA = myNewB.createA(); ``` Or, if you're using the static factory instead, it isn't quite as close a match (but it's close). ``` A myA = SubB.createA(); ``` Now, if you really need to do something with the sub-property, you'll have access to it via the child interface. I.e., if you create the object like so: ``` SubA mySubA = (SubA) SubB.createA(); mySubA.setSpecialProperty(3.14); double special = mySubA.getSpecialProperty(); ``` *Edit to discuss "Late addition":* At this point, your SubA object should be exactly what you want. It will inherit the 50 methods from the parent (A) and you can add your additional property to the child, plus the getter and setter. I changed the code above to illustrate what I mean.
This is usually done via a proxy: ``` class SubA extends A { private A proxiedClass; public SubA(A a) { proxiedClass = a; } public int anyMethodInA() { return proxiedClass.anyMethodInA(); } } ``` ... ``` SubA mySubA = new SubA(B.createA()); ``` Doing this manually is rather verbose, so most people use some kind of a AOP library (like AspectJ) to only intercept method calls they are interested in.
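If A happens to be defined by an interface, the interception route mentioned above doesn't need an AOP library; a JDK dynamic proxy can forward every call in a few lines. Everything below (`Account`, `createAccount`) is hypothetical, standing in for A and `B.createA()`:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    // Hypothetical interface playing the role of A.
    interface Account {
        int balance();
    }

    // Hypothetical stand-in for B.createA().
    static Account createAccount() {
        return () -> 42;
    }

    public static void main(String[] args) {
        Account real = createAccount();

        // Forward every call to the factory-made object; extra behavior
        // (logging, a new property, etc.) can be added around the call.
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            System.out.println("intercepted: " + method.getName());
            return method.invoke(real, methodArgs);
        };

        Account wrapped = (Account) Proxy.newProxyInstance(
                Account.class.getClassLoader(),
                new Class<?>[] { Account.class },
                handler);

        System.out.println(wrapped.balance()); // forwards to the real object
    }
}
```

The trade-off versus the hand-written wrapper is reflection overhead on each call, but none of the 50 delegating methods need to be typed out.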
How do I extend a java class that is instantiated by another class?
[ "", "java", "inheritance", "" ]
Hi i want to run an if statement but keep getting syntax errors near all my AS aliases, what am i doing wrong? ``` SELECT IF @Origin = 'ALL' (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) <= 60) AS OneHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 61 AND 120) AS TwoHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 121 AND 180) AS ThreeHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND 
DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 181 AND 240) AS FourHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 241 AND 480) AS EightHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 481 AND 1440) AS TwentyFourHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 1441 AND 2880) AS FortyEightHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND DATEDIFF(mi, 
TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) >=2881) AS PlusFortyEightHour END ELSE (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND (TBL_REPAIR_ORDER.ORIGIN = @ORIGIN) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) <= 60) AS OneHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND (TBL_REPAIR_ORDER.ORIGIN = @ORIGIN) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 61 AND 120) AS TwoHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND (TBL_REPAIR_ORDER.ORIGIN = @ORIGIN) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 121 AND 180) AS ThreeHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND 
DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND (TBL_REPAIR_ORDER.ORIGIN = @ORIGIN) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 181 AND 240) AS FourHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND (TBL_REPAIR_ORDER.ORIGIN = @ORIGIN) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 241 AND 480) AS EightHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND (TBL_REPAIR_ORDER.ORIGIN = @ORIGIN) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 481 AND 1440) AS TwentyFourHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND (TBL_REPAIR_ORDER.ORIGIN = @ORIGIN) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 1441 AND 2880) AS FortyEightHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = 
TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND (TBL_REPAIR_ORDER.ORIGIN = @ORIGIN) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) >=2881) AS PlusFortyEightHour END ```
There are two problems. The lack of a BEGIN, and also, the SELECT needs to be within each of the IF statements. You cannot have an inline IF within a SELECT, if you need to do this you can use a CASE statement. ``` IF @Origin = 'ALL' BEGIN SELECT (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) <= 60) AS OneHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 61 AND 120) AS TwoHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 121 AND 180) AS ThreeHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) 
AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 181 AND 240) AS FourHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 241 AND 480) AS EightHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 481 AND 1440) AS TwentyFourHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 1441 AND 2880) AS FortyEightHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND 
DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) >=2881) AS PlusFortyEightHour END ELSE BEGIN SELECT (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND (TBL_REPAIR_ORDER.ORIGIN = @ORIGIN) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) <= 60) AS OneHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND (TBL_REPAIR_ORDER.ORIGIN = @ORIGIN) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 61 AND 120) AS TwoHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND (TBL_REPAIR_ORDER.ORIGIN = @ORIGIN) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 121 AND 180) AS ThreeHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT 
NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND (TBL_REPAIR_ORDER.ORIGIN = @ORIGIN) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 181 AND 240) AS FourHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND (TBL_REPAIR_ORDER.ORIGIN = @ORIGIN) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 241 AND 480) AS EightHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND (TBL_REPAIR_ORDER.ORIGIN = @ORIGIN) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 481 AND 1440) AS TwentyFourHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND (TBL_REPAIR_ORDER.ORIGIN = @ORIGIN) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) BETWEEN 1441 AND 2880) AS FortyEightHour, (SELECT COUNT(*) FROM TBL_PARTORDER INNER JOIN TBL_REPAIR_ORDER ON TBL_PARTORDER.ORDERID = 
TBL_REPAIR_ORDER.ORDERID INNER JOIN TBL_PROPERTY ON TBL_REPAIR_ORDER.PROPREF = TBL_PROPERTY.PROPREF WHERE (TBL_PARTORDER.RAISED IS NOT NULL) AND (TBL_PARTORDER.RAISED BETWEEN CONVERT(DATETIME,@STARTDATE, 103) AND DATEADD(hh,23,CONVERT(DATETIME,@ENDDATE, 103))) AND (TBL_PROPERTY.CONTRACT = @CONTRACT) AND (TBL_REPAIR_ORDER.ORIGIN = @ORIGIN) AND DATEDIFF(mi, TBL_PARTORDER.RAISED, TBL_PARTORDER.RECEIVED) >=2881) AS PlusFortyEightHour END ```
``` IF condition BEGIN -- ... END ELSE BEGIN -- ... END ```
If statement SQL
[ "", "sql", "" ]
I have a fairly large CRUD winform app that is set up to display forms embedded in tabcontrols. I want to have objects for Person, (has a) Enrollment, (has a) Plan that hold and track the information as they interact with the forms. How do I accomplish this? I found a suggestion to declare the Person object in my ***Program.cs*** like so --> ``` internal static class Program { public static CurrentPerson _CurrentPerson; /// <summary> /// The main entry point for the application. /// </summary> [STAThread] private static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); Application.Run(new FrmWWCShell()); } } ``` and then on the ***Search.cs*** --> ``` Program._CurrentPerson = new CurrentPerson { PersonID = Convert.ToInt32(pID), LastName = lName, FirstName = fName, SocialSn = sSN, MiddleName = mName, BirthDate = Convert.ToDateTime(bDate) }; ``` Is this the best way? There is still a bunch of data that needs to be *filled in* from the database once they have made this selection on the Search page. What about declaring the object on each form and passing it some way? The object is slowly "built" as they progress. First they **Search** for someone by name and select who they will work with. Then they can work with their Enrollments. After selecting an Enrollment they will be able to interact with their Plans. I would be grateful for any guidance here as the scope of this has left my inexperienced head spinning...
You will need to separate your data, logic and GUI. Look into the [Model-View-Controller pattern](http://en.wikipedia.org/wiki/Model-view-controller). If you think it's too complex in your case you might just create a central controller class to hold the central data. Pass this object in every form constructor and you're done.
Take a look at the [Mediator](http://en.wikipedia.org/wiki/Mediator_pattern) design pattern.
Passing Object from Form to Form
[ "", "c#", ".net", "winforms", "oop", "" ]
I'm using JSLint to verify most of my external Javascript files, but the largest amount of errors I'm getting is from functions being used before they're defined. **Is this really an issue I should worry about**? It seems Firefox, IE7 and Chrome don't care. Functions like the popular `init()` (which I use often) normally stick at the top, as that makes sense to me (I like to pretend it's analogous to `main()`), but will, according to JSLint, need to be pushed to the bottom of the file.
If you declare functions using the `function` keyword, you can use them before they're declared. However, if you declare a function via another method (such as using a function expression or the `Function` constructor), you have to declare the function before you use it. See [this page](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions#Function_constructor_vs._function_declaration_vs._function_expression) on the Mozilla Developer Network for more information. Assuming you declare all your functions with the `function` keyword, I think it becomes a programming-style question. Personally, I prefer to structure my functions in a way that seems logical and makes the code as readable as possible. For example, like you, I'd put an `init` function at the top, because it's where everything starts from.
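The distinction can be seen in a short sketch: the whole body of a `function` declaration is hoisted, while for a `var` function expression only the variable binding is hoisted, so it is still `undefined` at an earlier call site:

```javascript
// Function declaration: fully hoisted, callable before it appears in the file.
console.log(declared(2)); // 4

function declared(n) {
  return n * 2;
}

// Function expression: `expressed` exists here but still holds undefined,
// so calling it before the assignment throws a TypeError.
try {
  expressed(2);
} catch (e) {
  console.log(e instanceof TypeError); // true
}

var expressed = function (n) {
  return n * 2;
};

console.log(expressed(2)); // 4
```

This is also why the JSLint warning is not entirely cosmetic: relying on hoisting works for declarations but silently breaks the moment a function is converted to an expression.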
As this is the top rated Google hit and other people might not see it at first in the JSLint tool, there is an option called "Tolerate misordered definitions" that allows you to hide this type of error. ``` /*jslint latedef:false*/ ```
JSLint: Using a function before it's defined error
[ "", "javascript", "jslint", "" ]
I need to implement SSO between a Windows domain and a J2EE web application. I've been thinking what the impact of this would be on the web application's session timeout. We have a 2 hour time out. If we implement a seamless SSO, then I think it might become confusing for the user. The SSO will make it appear that the web application is immediately available. I'm worried that they will start filling in forms then go for lunch (or something) and come back after their session has timed out. However, it might not be immediately apparent that this has happened, since the SSO will just sign them on again (but now with a new session). Has anyone any experience with anything like this and how to handle it? Do we just need to implement some form of extra messaging to tell the user that their previous session has timed out and their work has been lost?
I think you definitely need to pop up an Alert box of some kind if the user's session is reset. Get them to click OK on the message and redirect them to the home page. Also, I think a 2 hour timeout sounds like a bad idea if you're doing it how I think you are. Do you mean the user has 2 hours from when they log in before their session times out? Wouldn't it make more sense to have something like a 10 minute timeout, with the timer getting reset whenever the user sends a new request within that session?
The timeout is not a static fixed time measured from login but rather a dynamic measure of inactivity. On sites we have built with this functionality, after 10 minutes or so the web page goes back to the login page (JS used) and the user can start again if they want. If they are busy with a long process where they are reviewing results or something, then check for mouse movement or some subtle key press like that indicating they are still busy.
Impact of SSO on session time out
[ "", "java", "windows", "security", "single-sign-on", "" ]
I created an App.config file in my WPF application: ``` <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="xmlDataDirectory" value="c:\testdata"/> </appSettings> </configuration> ``` Then I try to read the value out with this: ``` string xmlDataDirectory = ConfigurationSettings.AppSettings.Get("xmlDataDirectory"); ``` But it says this is obsolete and that I should use ConfigurationManager, which I can't find, even searching in the class view. Does anyone know how to use config files like this in WPF?
You have to reference the `System.Configuration` assembly, which is in the GAC. Use of `ConfigurationManager` is not WPF-specific: it is the preferred way to access configuration information for any type of application. Please see [Microsoft Docs - `ConfigurationManager` Class](https://learn.microsoft.com/en-us/dotnet/api/system.configuration.configurationmanager) for further info.
In my case, I followed the steps below. App.config ``` <configuration> <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" /> </startup> <appSettings> <add key="POCPublishSubscribeQueueName" value="FormatName:Direct=OS:localhost\Private$\POCPublishSubscribe"/> </appSettings> </configuration> ``` Added a reference to `System.Configuration` to my project. Added a `using System.Configuration;` statement at the top of the file. Then used this statement: ``` string queuePath = ConfigurationManager.AppSettings["POCPublishSubscribeQueueName"].ToString(); ```
How to use a App.config file in WPF applications?
[ "", "c#", ".net", "wpf", "configuration", "" ]
I've had a bit of a search around, but couldn't find anything similar to what I was looking for. I'm interested in knowing how to display differing contents on secondary/tertiary monitors or projectors using C#. Basically, what I'm wanting to achieve is to have some form of presenter view (à la PowerPoint) on one particular screen (the primary display), and the output on a secondary screen or projector. I've never really attempted to develop something with multiple display outputs, so any guidance will probably have to be at a fairly obvious level. If someone could point me in the right direction as to how to handle this sort of thing in C#, that would be greatly appreciated!
You can use the `System.Windows.Forms.Screen.AllScreens` property to access a list of all the monitors Windows knows about. If you're looking to utilize a display that hasn't been configured by the user, it gets more difficult - you'd probably need to initialize and access the display adapter using DirectX.
Just to expand on Keven's answer (I +1'd it): the `Screen.AllScreens` property gives you an array of Screen objects. The Screen object has a boolean property called `Primary`, which you can use to determine which is the primary screen and which is the secondary, and it also has a property called `WorkingArea`, which is a rectangle that gives you back the coordinates of the second screen. The cool thing about this is that even if, say, the secondary screen is configured to be on the left of the primary, `WorkingArea.X` will be a negative number, and you can place forms there or whatever.
Projector Control/Display C#
[ "", "c#", "projector", "" ]
I have a simple SQL query in PostgreSQL 8.3 that grabs a bunch of comments. I provide a *sorted* list of values to the `IN` construct in the `WHERE` clause: ``` SELECT * FROM comments WHERE (comments.id IN (1,3,2,4)); ``` This returns comments in an arbitrary order, which in my case happens to be ids like `1,2,3,4`. I want the resulting rows sorted like the list in the `IN` construct: `(1,3,2,4)`. How to achieve that?
You can do it quite easily with `VALUES (), ()` (introduced in PostgreSQL 8.2). The syntax will be like this: ``` select c.* from comments c join ( values (1,1), (3,2), (2,3), (4,4) ) as x (id, ordering) on c.id = x.id order by x.ordering ```
Use [**`WITH ORDINALITY`**](https://www.postgresql.org/docs/current/functions-srf.html) in Postgres **9.4** or later. ``` SELECT c.* FROM comments c JOIN unnest('{1,3,2,4}'::int[]) WITH ORDINALITY t(id, ord) USING (id) ORDER BY t.ord; ``` * No need for a subquery, we can use the set-returning function like a table directly - a.k.a. "table-function". * A string literal to pass the array instead of an [ARRAY constructor](https://www.postgresql.org/docs/current/sql-expressions.html#SQL-SYNTAX-ARRAY-CONSTRUCTORS) may be easier to implement with some clients. * For convenience (optionally), match the column name we are joining to ("id" in the example), so we can join with a short `USING` clause and only get a single instance of the join column in the result. * Works with **any** input type. If your key column is of type `text`, provide something like `'{foo,bar,baz}'::text[]`. Detailed explanation: * [PostgreSQL unnest() with element number](https://stackoverflow.com/questions/8760419/postgresql-unnest-with-element-number/8767450#8767450)
ORDER BY the IN value list
[ "", "sql", "postgresql", "sql-order-by", "sql-in", "" ]
For an unknown reason, I have one page where I can't access any of the components by ID. Here is some information. The page uses asp:Content because the website uses a MasterPage. Inside the asp:Content, this page has an asp:FormView with some data that I cannot access from the code-behind. Here is the Page declaration: Here is the code in the code-behind that doesn't compile: ``` protected void FormView1_PreRender(object sender, EventArgs e) { DateBirthdayValidator.MaximumValue = DateTime.Now.Date.ToString("dd-MM-yy"); } ``` Here is the error: > Error 2 The name 'DateBirthdayValidator' does not exist in the current context I have searched in Google and found some answers about using FindControl, but it doesn't work. Any idea? ## Edit1: I can access the FormView1 component but not the validator inside the EditItemTemplate. How can I access a control that is inside the EditItemTemplate? ## Edit2: If I try `FormView1.FindControl("DateBirthdayValidator")` it compiles but always returns null. So it still doesn't work, but at least I can access the FormView1...
``` protected void FormView1_PreRender(object sender, EventArgs e) { if(FormView1.CurrentMode == FormViewMode.Edit) ((RangeValidator)FormView1.FindControl("DateBirthdayValidator")).MaximumValue = DateTime.Now.Date.ToString("dd-MM-yy"); } ``` The FormView doesn't create the control until it's in the mode you want. Since the DateBirthdayValidator is in the EditItemTemplate, a check is needed in the code-behind so the control is only looked up when the FormView is in edit mode. I found the solution [here](http://www.codenewsgroups.net/group/microsoft.public.dotnet.framework.aspnet.webcontrols/topic9174.aspx); see the post from Steven Cheng [MSFT].
The problem is that the validator you have declared in the form view does not really exist at the page level. It's a template that you define for your `FormView`. The parent control can instantiate a template zero, one, or more times (think of each row in a `GridView`) and, as a consequence, create its controls. You should try accessing it like: ``` // Replace RangeValidator with the actual validator type, if different. var v = (RangeValidator)myFormView.Row.FindControl("DateBirthdayValidator"); v.MaximumValue = ...; ``` Note that to be able to do this, your form view should be in the mode you declared your validator in (you can check this with the `CurrentMode` property), and you should have already called `DataBind` to bind it to a data source (so that at least a single row exists, and thus an instance of the template is created).
ASP.Net codebehind can't access component from page?
[ "", "c#", "asp.net", "" ]
I'm having a problem with imports in one of my Java applications. I've taken a working JSP out of one Eclipse project, jarred up all the classes from that project, and put it into a new project. In the new project I've imported the jar file generated from the original, and pasted the JSP code into a new JSP document. I'm getting "The import XX could not be resolved" errors. Anyone have any idea how to diagnose this? Everything seems to be in order.
I have an issue like this once in a while. Try removing the JAR, rebuilding, and adding the JAR again. This should fix it. It's an Eclipse bug. Edit: The above answer is assuming the jar is actually on the build path.
When you say "I've imported the jar file" you mean you have copied it or put it in the classpath?
Problem with imports in java?
[ "", "java", "eclipse", "import", "" ]
I have a system that compares fiscal year up to the current date to the same date range of the year before. You can adjust the year and month to look at and it always compares the same date range to the year previous. I have it set so if the current day is leap day it compares to the 28th of last year, and if last year was leap year and today is Feb 28th it compares to last year up to the 29th. If you look at a month other than the current month it shows up to the last day of that month, otherwise up to the current date. OK, that works fine right now, but now my employers don't want it to go up to the current date; they want it to go up to yesterday's date. How can I do that? My main concerns are: what if today is the 1st of the month, or what if today is the first day of the fiscal year? Here is the code I have now: ``` function create_YTD_XML() { global $month; global $year; $last_year = $year - 1; if($year == date('Y') && $month == date('m')) { $this_day = date('d'); } else { $this_day = date('t', mktime(0, 0, 0, $month, 1, $year)); // LAST DAY OF MONTH } if(is_leap_year($year) && $this_day == 29) { $last_day = 28; } else if(is_leap_year($last_year) && $this_day == 28) { $last_day = 29; } else { $last_day = $this_day; } if($month >= 2) { $this_year_start = $year; $last_year_start = $last_year; } else { $this_year_start = $year - 1; $last_year_start = $last_year - 1; } $this_ytd_start = $this_year_start.'-02-01'; $last_ytd_start = $last_year_start.'-02-01'; $this_ytd_end = $year.'-'.str_pad($month, 2, "0", STR_PAD_LEFT).'-'.$this_day; $last_ytd_end = $last_year.'-'.str_pad($month, 2, "0", STR_PAD_LEFT).'-'.$last_day; } ``` What would be the best solution? Thanks!
[strtotime()](https://www.php.net/manual/en/function.strtotime.php) will do the trick. Convert your previous date to a Unix timestamp using [mktime()](https://www.php.net/manual/en/function.mktime.php), then use it like this: ``` $from_unix_time = mktime(0, 0, 0, $month, $day, $year); $day_before = strtotime("yesterday", $from_unix_time); $formatted = date('Y-m-d', $day_before); ```
You can also use [strtotime](http://php.net/manual/en/function.strtotime.php) function using *words* like this: ``` $date = '2012-11-08'; $day_before = date( 'Y-m-d', strtotime( $date . ' -1 day' ) ); ``` The output of `$day_before`: ``` 2012-11-07 ```
How can I find out the date of the day before a date?
[ "", "php", "php4", "" ]
I have a C++ application in which I need to compare two values and decide which is greater. The only complication is that one number is represented in log-space, the other is not. For example: ``` double log_num_1 = log(1.23); double num_2 = 1.24; ``` If I want to compare `num_1` and `num_2`, I have to use either `log()` or `exp()`, and I'm wondering if one is easier to compute than the other (i.e. runs in less time, in general). You can assume I'm using the standard `cmath` library. In other words, the following are semantically equivalent, so which is faster: ``` if(exp(log_num_1) > num_2)) cout << "num_1 is greater"; ``` or ``` if(log_num_1 > log(num_2)) cout << "num_1 is greater"; ```
AFAIK the complexity of the two algorithms is the same; the difference should be only a (hopefully negligible) constant. Due to this, I'd use `exp(a) > b`, simply because it doesn't break on invalid input.
Do you *really* need to know? Is this going to occupy a large fraction of your running time? How do you know? Worse, it may be platform dependent. Then what? So sure, test it if you care, but spending much time agonizing over micro-optimization is usually a bad idea.
C++ Exp vs. Log: Which is faster?
[ "", "c++", "math", "performance", "" ]
I have some JSON returned to the browser like this "product": ``` { "Title": "School Bag", "Image": "/images/school-bag.jpg" } ``` I want this data to be a "Product" object so I can use prototype methods like a `toHTMLImage()` that returns a HTML image representation of the product: ``` function Product() { } Product.prototype.toHTMLImage = function() { //Returns something like <img src="<Image>" alt="<Title>" /> } ``` How do I convert my JSON results into a `Product` object so that I can use `toHTMLImage`?
Simple, if I got it, ``` var json = { "Title": "School Bag", "Image": "/images/school-bag.jpg" } function Product(json) { this.img = document.createElement('img'); this.img.alt = json.Title; this.img.src = json.Image; this.toHTMLImage = function() { return this.img; } } var obj = new Product(json); // this is your object =D ```
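Another option, sketched under the question's own names: keep `Product` and its prototype method exactly as the question defines them, and copy the parsed JSON fields onto a fresh `Product` instance (the `toProduct` helper name is made up for illustration):

```javascript
function Product() {}
Product.prototype.toHTMLImage = function () {
  return '<img src="' + this.Image + '" alt="' + this.Title + '" />';
};

// Graft the plain JSON object's fields onto a real Product instance,
// so prototype methods like toHTMLImage() become available on it.
function toProduct(json) {
  var product = new Product();
  for (var key in json) {
    if (json.hasOwnProperty(key)) {
      product[key] = json[key];
    }
  }
  return product;
}

var product = toProduct({ Title: 'School Bag', Image: '/images/school-bag.jpg' });
console.log(product.toHTMLImage()); // <img src="/images/school-bag.jpg" alt="School Bag" />
```

This keeps the behavior on the prototype (shared across all products) instead of creating a new closure per object as the answer above does.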
Maybe this page will be useful: [http://www.json.org/js.html](https://web.archive.org/web/20161104150452/http://www.json.org:80/js.html)
Beginner JavaScript: Working with JSON and Objects in JavaScript
[ "", "javascript", "json", "" ]
I'm not talking about doing a "SET NOCOUNT OFF". But I have a stored procedure which I use to insert some data into some tables. This procedure creates an XML response string; well, let me give you an example: ``` CREATE PROCEDURE [dbo].[insertSomeData] (@myParam int) AS DECLARE @reply varchar(2048) ... Do a bunch of inserts/updates... SET @reply = '<xml><big /><outputs /></xml>' SELECT @reply GO ``` So I put together a script which uses this SP a bunch of times, and the XML "output" is getting to be too much (it's crashed my box once already). Is there a way to suppress or redirect the output generated from this stored procedure? I don't think that modifying this stored procedure is an option. Thanks. --- I guess I should clarify. This SP above is being called by a T-SQL update script that I wrote, to be run through Enterprise Manager, etc. And it's not the most elegant SQL I've ever written either (some pseudo-SQL): ``` WHILE unprocessedRecordsLeft BEGIN SELECT top 1 record from updateTable where Processed = 0 EXEC insertSomeData @param = record_From_UpdateTable END ``` So let's say the UpdateTable has some 50k records in it. That SP gets called 50k times, writing 50k XML strings to the output window. It didn't bring the SQL Server to a stop, just my client app (SQL Server Management Studio).
I think I found *a* solution. So what I can do now in my SQL script is something like this (SQL pseudo-code): ``` create table #tmp(xmlReply varchar(2048)) while not_done begin select top 1 record from updateTable where processed = 0 insert into #tmp exec insertSomeData @param=record end drop table #tmp ``` Now, if only there were an even more efficient way to do this. Does SQL Server have something similar to /dev/null? A null table or something?
The answer you're looking for is found in a [similar SO question](https://stackoverflow.com/a/571684/550975) by Josh Burke: ``` -- Assume this table matches the output of your procedure DECLARE @tmpNewValue TABLE ([Id] int, [Name] varchar(50)) INSERT INTO @tmpNewValue EXEC [ProcedureB] -- SELECT [Id], [Name] FROM @tmpNewValue ```
How to Suppress the SELECT Output of a Stored Procedure called from another Stored Procedure in SQL Server?
[ "", "sql", "sql-server", "t-sql", "" ]
I am creating a c++ program, but I want to be able to offer just a .exe file to the user. However, I am using libraries (curl among others) which have some dll's. Is it possible to compile these dll's into the .exe file? I use Code::Blocks and mingw.
In order to achieve that you will need [static linking](http://en.wikipedia.org/wiki/Static_linking). This requires that all your libraries (and the libraries they depend upon recursively) need to be available as static libraries. Be aware that the size of your executable will be large, as it will carry all the code from those static libraries. This is why shared libraries (DLLs) were invented in the first place, to be able to share common code among applications. However that does not always work [so well on windows](http://en.wikipedia.org/wiki/Dll_hell). I think what you may really want is an [installer](http://en.wikipedia.org/wiki/Windows_installer) that installs your executable and all it's dependent libraries.
There's an article in DDJ from 2002 that may have what you want: * [Packing DLLs in your EXE by Thiadmer Riemersma](http://www.ddj.com/windows/184416443) Basically it uses a combination of linking to the DLL using MSVC's 'delayed load' feature and packaging the DLL as an embedded resource in the EXE. The DLL is then automatically extracted at runtime when the first call to one of the exports is made. I haven't used this technique so I can't really comment on how well it works, but it sure seems like a slick idea.
C++ How to compile dll in a .exe
[ "", "c++", "dll", "compilation", "" ]
Why is the Visual C++ compiler calling the wrong overload here? I have a subclass of ostream that I use to define a buffer for formatting. Sometimes I want to create a temporary and immediately insert a string into it with the usual << operator like this: ``` M2Stream() << "the string"; ``` Unfortunately, the program calls the operator<<(ostream, void \*) member overload, instead of the operator<<(ostream, const char \*) nonmember one. I wrote the sample below as a test where I define my own M2Stream class that reproduces the problem. I think the problem is that the M2Stream() expression produces a temporary and this somehow causes the compiler to prefer the void \* overload. But why? This is borne out by the fact that if I make the first argument for the nonmember overload const M2Stream &, I get an ambiguity. Another strange thing is that it calls the desired const char \* overload if I first define a variable of type const char \* and then call it, instead of a literal char string, like this: ``` const char *s = "char string variable"; M2Stream() << s; ``` It's as if the literal string has a different type than the const char \* variable! Shouldn't they be the same? And why does the compiler cause a call to the void \* overload when I use the temporary and the literal char string?
``` #include "stdafx.h" #include <iostream> using namespace std; class M2Stream { public: M2Stream &operator<<(void *vp) { cout << "M2Stream bad operator<<(void *) called with " << (const char *) vp << endl; return *this; } }; /* If I make first arg const M2Stream &os, I get \tests\t_stream_insertion_op\t_stream_insertion_op.cpp(39) : error C2666: 'M2Stream::operator <<' : 2 overloads have similar conversions \tests\t_stream_insertion_op\t_stream_insertion_op.cpp(13): could be 'M2Stream &M2Stream::operator <<(void *)' \tests\t_stream_insertion_op\t_stream_insertion_op.cpp(20): or 'const M2Stream &operator <<(const M2Stream &,const char *)' while trying to match the argument list '(M2Stream, const char [45])' note: qualification adjustment (const/volatile) may be causing the ambiguity */ const M2Stream & operator<<(M2Stream &os, const char *val) { cout << "M2Stream good operator<<(const char *) called with " << val << endl; return os; } int main(int argc, char *argv[]) { // This line calls void * overload, outputs: M2Stream bad operator<<(void *) called with literal char string on constructed temporary M2Stream() << "literal char string on constructed temporary"; const char *s = "char string variable"; // This line calls the const char * overload, and outputs: M2Stream good operator<<(const char *) called with char string variable M2Stream() << s; // This line calls the const char * overload, and outputs: M2Stream good operator<<(const char *) called with literal char string on prebuilt object M2Stream m; m << "literal char string on prebuilt object"; return 0; } ``` Output: ``` M2Stream bad operator<<(void *) called with literal char string on constructed temporary M2Stream good operator<<(const char *) called with char string variable M2Stream good operator<<(const char *) called with literal char string on prebuilt object ```
The compiler is doing the right thing: `Stream() << "hello";` should use the `operator<<` defined as a member function. Because the temporary stream object cannot be bound to a non-const reference but only to a const reference, the non-member operator that handles `char const*` won't be selected. And it's designed that way, as you see when you change that operator. You get ambiguities, because the compiler can't decide which of the available operators to use. Because all of them were designed with rejection of the non-member `operator<<` in mind for temporaries. Then, yes, a string literal has a different type than a `char const*`. A string literal is an array of const characters. But that wouldn't matter in your case, I think. I don't know what overloads of `operator<<` MSVC++ adds. It's allowed to add further overloads, as long as they don't affect the behavior of valid programs. For why `M2Stream() << s;` works even when the first parameter is a non-const reference... Well, MSVC++ has an extension that allows non-const references to bind to temporaries. Put the warning level on level 4 to see a warning about that (something like "non-standard extension used..."). Now, because there is a member operator<< that takes a `void const*`, and a `char const*` can convert to that, that operator will be chosen and the address will be output, as that's what the `void const*` overload is for. I've seen in your code that you actually have a `void*` overload, not a `void const*` overload. Well, a string literal can convert to `char*`, even though the type of a string literal is `char const[N]` (with N being the number of characters you put). But that conversion is deprecated. It should not be standard that a string literal converts to `void*`. It looks to me like that is another extension by the MSVC++ compiler. But that would explain why the string literal is treated differently than the `char const*` pointer.
This is what the Standard says: > A string literal (2.13.4) that is not a wide string literal can be converted to an rvalue of type "pointer to char"; a wide string literal can be converted to an rvalue of type "pointer to wchar\_t". In either case, the result is a pointer to the first element of the array. This conversion is considered only when there is an explicit appropriate pointer target type, and not when there is a general need to convert from an lvalue to an rvalue. [Note: this conversion is deprecated. See Annex D. ]
The first problem is caused by weird and tricky C++ language rules: 1. A temporary created by a call to a constructor is an **rvalue**. 2. An rvalue may not be bound to a non-const reference. 3. However, an rvalue object can have non-const methods invoked on it. What is happening is that `ostream& operator<<(ostream&, const char*)`, a non-member function, attempts to bind the `M2Stream` temporary you create to a non-const reference, but that fails (rule #2); but `ostream& ostream::operator<<(void*)` is a member function and therefore can bind to it. In the absence of the `const char*` function, it is selected as the best overload. I'm not sure why the designers of the IOStreams library decided to make `operator<<()` for `void*` a method but not `operator<<()` for `const char*`, but that's how it is, so we have these weird inconsistencies to deal with. I'm not sure why the second problem is occurring. Do you get the same behaviour across different compilers? It's possible that it's a compiler or C++ Standard Library bug, but I'd leave that as the excuse of last resort -- at least see if you can replicate the behaviour with a regular `ostream` first.
Why is the Visual C++ compiler calling the wrong overload here?
[ "", "c++", "visual-c++", "operator-overloading", "temporary", "" ]
How do I get this array: ``` Array ( [0] => Array ( [max] => 5 [year] => 2007 ) [1] => Array ( [max] => 6.05 [year] => 2008 ) [2] => Array ( [max] => 7 [year] => 2009 ) ) ``` Into this format: ``` [year] => [max] ``` (ashamed at my ignorance...one of those days)
``` $result = array(); foreach($array as $v) { $result[$v['year']] = $v['max']; } ``` There you go.
You would need to iterate through your current array and put the data into a new array. ``` $result = array(); foreach($currentArray as $x) { $result[$x['year']] = $x['max']; } ```
Transform Array
[ "", "php", "arrays", "" ]
For some class C: ``` C* a = new C(); C* b(a); //what does it do? C* b = a; //is there a difference? ```
C\* b(a) and C\* b = a are equivalent. As with many languages, there's more than one way to do it...
Note that in ``` C* a = new C(); C* b(a); ``` b is a pointer to a C object assigned the same value as a. However, ``` #include "somefile.h" C* b(a); ``` we could just as easily be defining b as a function which takes an object of type a, and returns a pointer to C.
What does object* foo(bar) do?
[ "", "c++", "pointers", "" ]
I want to read each line from a text file and store them in an ArrayList (each line being one entry in the ArrayList). So far I understand that a BufferedInputStream writes to the buffer and only does another read once the buffer is empty which minimises or at least reduces the amount of operating system operations. Am I correct - do I make sense? If the above is the case in what situations would anyone want to use DataInputStream. And finally which of the two should I be using and why - or does it not matter.
Use a normal `InputStream` (e.g. `FileInputStream`) wrapped in an `InputStreamReader` and then wrapped in a `BufferedReader` - then call `readLine` on the `BufferedReader`. `DataInputStream` is good for reading primitives, length-prefixed strings etc.
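Tying this back to the original goal (each line of a text file as one ArrayList entry), here is a sketch along the lines the answer suggests. The class and method names are made up, and a `StringReader` stands in for the file so the example is self-contained; for a real file you would pass an `InputStreamReader` over a `FileInputStream`:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class LinesToList {
    // Wrap any Reader in a BufferedReader and collect one list entry per line.
    public static List<String> readLines(Reader source) throws IOException {
        List<String> lines = new ArrayList<String>();
        BufferedReader reader = new BufferedReader(source);
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                lines.add(line);
            }
        } finally {
            reader.close();
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        // For a real file: readLines(new InputStreamReader(new FileInputStream("some-file.txt")))
        List<String> lines = readLines(new StringReader("first\nsecond\nthird"));
        System.out.println(lines); // [first, second, third]
    }
}
```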
The two classes are not mutually exclusive - you can use both of them if your needs suit. As you picked up, BufferedInputStream is about reading in blocks of data rather than a single byte at a time. It also provides the convenience method of readLine(). However, it's also used for peeking at data further in the stream then rolling back to a previous part of the stream if required (see the mark() and reset() methods). DataInputStream/DataOutputStream provides convenience methods for reading/writing certain data types. For example, it has a method to write/read a UTF String. If you were to do this yourself, you'd have to decide on how to determine the end of the String (i.e. with a terminator byte or by specifying the length of the string). This is different from BufferedInputStream's readLine() which, as the method sounds like, only returns a single line. writeUTF()/readUTF() deal with Strings - that string can have as many lines it it as it wants. BufferedInputStream is suitable for most text processing purposes. If you're doing something special like trying to serialize the fields of a class to a file, you'd want to use DataInput/OutputStream as it offers greater control of the data at a binary level. Hope that helps.
Should I use DataInputStream or BufferedInputStream
[ "", "java", "io", "" ]
I want to make a query to list cats that took longer than the average cat to sell. I have five tables: Animal, Sale, AnimalOrderItem, AnimalOrder, and SaleAnimal > Animal table: AnimalID, Name, Category > (cat, dog, fish) > > SaleAnimal table: SaleID, AnimalID, > SalePrice > > Sale table: SaleID, date, employeeID, > CustomerID > > AnimalOrderItem table: OrderID, > AnimalID, cost > > AnimalOrder: OrderID, OrderDate, > ReceivingDate, > SupplierID, ShippingCost, EmployeeID There are other tables, but I don't think they have an effect on the query. I thought of the following: make a query to calculate days to sell for all, e.g.: ``` [SaleDate]-[ReceiveDate] AS DaysToSell Have the INNER JOIN built: Sale INNER JOIN ((AnimalOrder INNER JOIN (Animal INNER JOIN AnimalOrderItem ON Animal.AnimalID = AnimalOrderItem.AnimalID) ON AnimalOrder. OrderID = AnimalOrderItem.OrderID) INNER JOIN SaleAnimal ON Animal. AnimalID = SaleAnimal.AnimalID) ON Sale.SaleID = SaleAnimal.SaleID ``` Create another query based on the above query: ``` SELECT AnimalID, Name, Category, DaysToSell WHERE Category="Cat" AND DaysToSell> (SELECT Avg(DaysToSell) FROM the earlier query WHERE Category="Cat" ORDER BY DaysToSell DESC; ``` After running the query I got an error saying > ORA-00921: unexpected end of SQL > command Any suggestions, please?
Queries can be combined with a subquery. For example, ``` select * from ( select * from mytable ) subquery ``` Applying this pattern to your problem seems fairly straightforward.
I don't see the closed bracket that matches with the select avg
Combining multiple SQL Queries
[ "", "sql", "database", "" ]
I have two arrays: ``` Array ( [2005] => 0 [2006] => 0 [2007] => 0 [2008] => 0 [2009] => 0 ) Array ( [2007] => 5 [2008] => 6.05 [2009] => 7 ) ``` I want to merge these two arrays such that if a value exists in the 2nd array, it overwrites the first array's value. So the resulting array would be: ``` Array ( [2005] => 0 [2006] => 0 [2007] => 5 [2008] => 6.05 [2009] => 7 ) ``` Thanks for your help. UPDATE: This was my best attempt, but it's wildly unsuccessful: ``` $final = ''; foreach ($years as $k => $v){ if (in_array($k,$values)){ $final .= $values[$k] . '|'; }else{ $final .= $k[$v] . '|'; } } echo "final = $final"; ```
As I've just recently learned, PHP has an [array union operator](http://php.net/manual/en/language.operators.array.php) that does exactly this: ``` $result = $a + $b; ``` Where $a is the array with the values that you want to take precedence. (So in your example, that means that the second array is "`$a`".)
It's that simple: `$new_array = array_replace($array_1, $array_2);` [the PHP manual page](http://php.net/manual/en/function.array-replace.php)
Array Merge/Replace
[ "", "php", "arrays", "" ]
I'm using IE 8 on Vista, and every time I change a JavaScript file and then start debugging, I have to hit Ctrl+F5 to have it reload my JavaScript. Is there any way to make it automatically reload JavaScript when I start debugging, but not lose the performance gains when just browsing the net? Yeah, yeah, I know you probably don't like IE, but keep in mind the question isn't "What's the best browser?".
Add a string at the end of your URL to break the cache. I usually do (with PHP): ``` <script src="/my/js/file.js?<?=time()?>"></script> ``` So that it reloads every time while I'm working on it, and then take it off when it goes into production. In reality I abstract this out a little more but the idea remains the same. If you check out the source of this website, they append the revision number at the end of the URL in a similar fashion to force the changes upon us whenever they update the javascript files.
Paolo's general idea (i.e. effectively changing some part of the request uri) is your best bet. However, I'd suggest using a more static value such as a version number that you update when you have changed your script file so that you can still get the performance gains of caching. So either something like this: ``` <script src="/my/js/file.js?version=2.1.3" ></script> ``` or maybe ``` <script src="/my/js/file.2.1.3.js" ></script> ``` I prefer the first option because it means you can maintain the one file instead of having to constantly rename it (which for example maintains consistent version history in your source control). Of course either one (as I've described them) would involve updating your include statements each time, so you may want to come up with a dynamic way of doing it, such as replacing a fixed value with a dynamic one every time you deploy (using Ant or whatever).
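The same idea as a small helper function (the name `versionedUrl` is made up; in practice you would bake the version into the markup server-side as both answers show):

```javascript
// Append a version token to a URL so each release looks like a brand-new
// resource to the browser cache. During development you could pass Date.now()
// instead of a fixed version, which forces a reload on every request
// (at the cost of losing caching entirely).
function versionedUrl(url, version) {
  var sep = url.indexOf('?') === -1 ? '?' : '&';
  return url + sep + 'v=' + encodeURIComponent(version);
}

console.log(versionedUrl('/my/js/file.js', '2.1.3')); // /my/js/file.js?v=2.1.3
console.log(versionedUrl('/a.js?x=1', 'dev'));        // /a.js?x=1&v=dev
```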
How to force IE to reload javascript?
[ "", "javascript", "internet-explorer", "internet-explorer-8", "" ]
I have a PHP frontend and a C++ backend, and I need to be able to send groups of names to the frontend. What serialized format would be the most efficient/fastest for the PHP to read? Example data ``` group1: name1 3923 name2 9879 name3 8944 group2: name5 9823 group3: name9 9822 name1 4894 ``` What would be the fastest for PHP to read? * XML * JSON * YAML * Protocol Buffer * Comma/Space Delimited our own system * Anything else? other?
PHP's own serialized format will probably be the fastest. [`unserialize()`](http://php.net/manual/en/function.unserialize.php) is the function PHP uses to convert this data back to its own types. [This post](http://objectmix.com/php/362009-specification-serialize.html#post1335166) has various links to other languages' implementations of PHP's serialized format, I'm sure you could convert one of those easily.
I've used PHP's serialize() and unserialize() on large text files, and it performed miserably (that was a couple of years ago - maybe it's better now). Anyway, I devised a little trick to overcome this; it simply involves generating a PHP array declaration from the data you're exporting straight into a text file, e.g.: ``` <?php $groups = array('groups' => array( array('jeff' => 2343, 'tom' => 8477), array('baal' => 2873, 'scorpio' => 3210), array('jeff' => 2343, 'tom' => 8477) ) ); ?> ``` ...and then unserializing it by simply calling: ``` include 'groups.php';//makes $groups available ``` Worked nicely back then.
Fastest serialized data format for PHP reading
[ "", "php", "serialization", "" ]
I have a file containing text and a few numbers. I just want to extract the numbers from it. How do I go about it? I tried using all that split stuff but no luck so far. My file is like this: *AT+CMGL="ALL" +CMGL: 5566,"REC READ","Ufone" Dear customer, your DAY\_BUCKET subscription will expire on 02/05/09 +CMGL: 5565,"REC READ","+923466666666"* **Kindly tell me the way to extract numbers like +923466666666 from this file so I can put them into another file or a textbox.** Thanks
How large is the file? If the file is under a few megabytes in size I would recommend loading the file contents into a string and using a compiled regular expression to extract matches. Here's a quick example: ``` Regex NumberExtractor = new Regex("[0-9]{7,16}",RegexOptions.Compiled); /// <summary> /// Extracts numbers between seven and sixteen digits long from the target file. /// Example number to be extracted: +923466666666 /// </summary> /// <param name="TargetFilePath"></param> /// <returns>List of the matching numbers</returns> private IEnumerable<ulong> ExtractLongNumbersFromFile(string TargetFilePath) { if (String.IsNullOrEmpty(TargetFilePath)) throw new ArgumentException("TargetFilePath is null or empty.", "TargetFilePath"); if (File.Exists(TargetFilePath) == false) throw new Exception("Target file does not exist!"); FileStream TargetFileStream = null; StreamReader TargetFileStreamReader = null; string FileContents = ""; List<ulong> ReturnList = new List<ulong>(); try { TargetFileStream = new FileStream(TargetFilePath, FileMode.Open); TargetFileStreamReader = new StreamReader(TargetFileStream); FileContents = TargetFileStreamReader.ReadToEnd(); MatchCollection Matches = NumberExtractor.Matches(FileContents); foreach (Match CurrentMatch in Matches) { ReturnList.Add(System.Convert.ToUInt64(CurrentMatch.Value)); } } catch (Exception ex) { //Your logging, etc... } finally { if (TargetFileStream != null) { TargetFileStream.Close(); TargetFileStream.Dispose(); } if (TargetFileStreamReader != null) { TargetFileStreamReader.Dispose(); } } return (IEnumerable<ulong>)ReturnList; } ``` Sample Usage: ``` List<ulong> Numbers = (List<ulong>)ExtractLongNumbersFromFile(@"v:\TestExtract.txt"); ```
If the numbers are all at the end of the lines then you can use code like the following ``` foreach ( string line in File.ReadAllLines(@"c:\path\to\file.txt") ) { Match result = Regex.Match(line, @"\+(\d+)""$"); if ( result.Success ) { var number = result.Groups[1].Value; // do what you want with the number } } ```
Searching Specific Data From a File
[ "", "c#", "" ]
How do I only include certain dlls of the .Net framework which are used in my program along with the setup project instead of installing the whole framework? Thanks
I would recommend the .NET Client Profile, described [here](http://msdn.microsoft.com/en-us/library/cc656912.aspx). "The .NET Framework Client Profile is a subset of the full .NET Framework 3.5 SP1 that targets client applications. It provides a streamlined **subset** of Windows Presentation Foundation (WPF), Windows Forms, Windows Communication Foundation (WCF), and ClickOnce features. This enables rapid deployment scenarios for WPF, Windows Forms, WCF, and console applications that target the .NET Framework Client Profile."
You want to deploy a .NET application to a server that doesn't have .NET installed, and only have it install the specific System.\* DLLs required for your application? I'd be pretty sure that you can't do this. There's more to the Framework than just a bunch of DLLs in the GAC (e.g. the CLR, registry entries, etc.). You'll need to install the framework on the client machine.
Is it possible to include only required dlls and not the entire .Net framework?
[ "", "c#", ".net", "visual-studio", "" ]
How can I use an inherited class of a triangulation in the context of a triangulation in CGAL? Basically I have the following code: ``` typedef CGAL::Exact_predicates_inexact_constructions_kernel K; typedef CGAL::Triangulation_vertex_base_with_info_2<int,K> Vb; typedef CGAL::Triangulation_face_base_with_info_2<int,K> Fb; typedef CGAL::Triangulation_data_structure_2<Vb,Fb> Tds; typedef CGAL::Delaunay_triangulation_2<K,Tds> Delaunay; typedef CGAL::Triangulation_2<K,Tds> Triangulation; typedef Triangulation::Point Point; ... Triangulation *t = new Delaunay; ... // x and y are properly defined and instantiated t->insert(Point(x,y)); ``` Well, of course, Delaunay\_triangulation\_2 inherits from Triangulation\_2. So, when I execute this code, the call resolves to the Triangulation\_2 class; in other words, it does not execute a Delaunay triangulation, instead it executes a normal triangulation (executing the parent class methods instead of the child methods). I think this is because the insert method of Triangulation\_2 is not declared as virtual, so redefinitions won't work. Do you know a way around this? Maybe using Constrained\_triangulation\_2 and Constrained\_delaunay\_triangulation\_2? (Those classes define some virtual methods, but I've read the source code and I don't think they can be used without adding the explicit constraints.) Any ideas?
I checked your program, you would need to reformat it a bit, so that it fits the generic programming model. Let me recall what your code does (the one available at [github](http://github.com/gjorquera/triangle-mesh/blob/6eb7e304a486a1a3de47ac9264a322b5cf7bc4c1/main.cpp)): 1. reads the command line 2. depending on the options, instantiates either a Triangulation or a Delaunay\_triangulation on the heap 3. uses this object for some processing, assuming that the methods are virtual (but they are not) A solution for your problem would be to put step 3 in a separate method, with the triangulation type as template parameter. Something like (I use your types and names): ``` template < class Triangulation > void compute_mesh(int n_vertices, int max_x, int max_y) { Triangulation t; // DO WHATEVER YOU WANT WITH t } ``` Then, in your main function, you would trigger the use of Delaunay or non-Delaunay triangulations in the following way: ``` if (triang_type == 'D') compute_mesh<Delaunay>(n_vertices, max_x, max_y); else compute_mesh<Triangulation>(n_vertices, max_x, max_y); ```
Are you sure these functions are virtual? Without them being defined virtual the compiler will not call the derived class functions. From a cursory look at the CGAL headers it does not seem that these classes have any virtual functions at all.
CGAL inheritance
[ "c++", "cgal" ]
I have some old code written in C for 16-bit using Borland C++ that switches between multiple stacks, using longjmps. It creates a new stack by doing a malloc, and then setting the SS and SP registers to the segment and offset, resp., of the address of the malloc'd area, using inline Assembler. I would like to convert it to Win32, and it looks like the two instructions should be replaced by a single one setting the ESP. The two instructions were surrounded by a CLI/STI pair, but in Win32 these give "privileged instructions", so I have cut them out for now. I am a real innocent when it comes to Windows, so, I was rather surprised that my first test case worked! So, my rather vague question is to ask the experts here if what I am doing is a) too dangerous to continue with, or b) will work if I add some code, take certain precautions, etc.? If the latter, what should be added, and where can I find out about it? Do I have to worry about any other registers, like the SS, EBX, etc.? I am using **no** optimization... Thanks for any tips people can give me.
Removing CLI/STI still works due to the differences in the operating environment. On 16-bit DOS, an interrupt could occur and this interrupt would initially be running on the same stack. If you got interrupted in the middle of the operation, the interrupt could crash because you only updated ss and not sp.

On Windows, and any other modern environment, each user mode thread gets its own stack. If your thread is interrupted for whatever reason, its stack and context are safely preserved - you don't have to worry about something else running on your thread and your stack. cli/sti in this case would be protecting against something you're already protected against by the OS.

As Greg mentioned, the safe, supported way of swapping stacks like this on Windows is CreateFiber/SwitchToFiber. This does have the side-effect of changing your entire context, so it is not like just switching the stack.

This really raises the question of what you want to do. A lot of times, switching stacks is to get around limited stack space, which was 64k on 16-bit DOS. On Windows, you have a 1 MB stack and you can allocate even larger. Why are you trying to switch stacks?
By far the safest way to do this is to port the code to official Win32 multiprogramming structures, such as threads or fibers. [Fibers](http://msdn.microsoft.com/en-us/library/ms682661(VS.85).aspx) provide a very lightweight multi-stack paradigm that sounds like it might be suitable for your application. The [Why does Win32 even have fibers?](http://blogs.msdn.com/larryosterman/archive/2005/01/05/347314.aspx) article is an interesting read too.
Switching stacks in C++
[ "c++", "winapi", "multiplestacks" ]
I’m editing a `<textarea>` with JavaScript. The problem is that when I make line breaks in it, they won’t display. How can I fix this? I’m getting the value into a function, but it won’t give line breaks.
The problem comes from the fact that line breaks (`\r\n`, `\n`) are not the same as HTML `<br/>` tags:

```
var text = document.forms[0].txt.value;
text = text.replace(/\r?\n/g, '<br />');
```

Since many of the comments and my own experience have shown me that this `<br>` solution is not working as expected, here is an example of how to append a new line to a `textarea` using '\r\n':

```
function log(text) {
    var txtArea;
    txtArea = document.getElementById("txtDebug");
    txtArea.value += text + '\r\n';
}
```
If you use general JavaScript and you need to assign a string to a text area value, then document.getElementById("textareaid").value='texthere\\ntexttext'. You need to replace `\n` or `< br >` with `\\\n`. Otherwise, it gives *Uncaught SyntaxError: Unexpected token ILLEGAL* in all browsers.
How to add line breaks to an HTML textarea
[ "javascript", "html", "textarea", "line-breaks" ]
I am writing an application and using MySQL to return the difference between 2 dates. Should MySQL do this, or should I actually let PHP handle it? I also need the sum of all the results I am getting back: should I return them and add them up on the PHP side, or is there a way to add all the results together on the MySQL server side?
It depends somewhat on the application, but in general, I'd push it to PHP, because normally you're building a web site for multiple concurrent accesses; why put the calculation into the database and potentially have a bottleneck?
I think that you have two separate cases here. In the case where you are returning two values and performing a calculation on them, then doing that on the front end probably makes the most sense as long as it's not a complex calculation that requires significant business logic. If it does involve complex or specialized business logic then you should have a central place for that logic, whether it is in a business layer or in the database, so that it is done consistently. If you're just finding the difference between two dates or something, then just do it on the front end. In the second case, where you are summing values, that sounds like something that should probably be done in the database. Networks tend to be much more of a bottleneck than modern day databases on today's hardware. Save sending a bunch of rows over the network just to add them up if you can just do it in the database.
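To make the trade-off concrete, here is a small sketch in Python with SQLite standing in for MySQL (the `jobs` table and its columns are made up for the example; in MySQL you would write `SELECT SUM(DATEDIFF(finished, started)) FROM jobs`). It computes the date differences and their total once inside the database and once in application code:

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (started TEXT, finished TEXT)")
conn.executemany("INSERT INTO jobs VALUES (?, ?)", [
    ("2009-05-01", "2009-05-04"),
    ("2009-05-02", "2009-05-03"),
])

# Option 1: let the database compute both the difference and the sum
total_db = conn.execute(
    "SELECT SUM(julianday(finished) - julianday(started)) FROM jobs"
).fetchone()[0]

# Option 2: fetch the raw dates and do the arithmetic in application code
rows = conn.execute("SELECT started, finished FROM jobs").fetchall()
total_app = sum(
    (date.fromisoformat(f) - date.fromisoformat(s)).days for s, f in rows
)

print(int(total_db), total_app)  # -> 4 4
```

For a simple sum like this, doing it in the database saves shipping every row over the wire; for a one-off difference between two values, either side is fine.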
How much calculation should be done by MySQL?
[ "sql", "mysql" ]
How can I replace diacritics (ă,ş,ţ etc) with their "normal" form (a,s,t) in javascript?
If you want to do it entirely on the client side, I think your only option is with some kind of lookup table. Here's a starting point, adapted from one written by a chap called Olavi Ivask on his [blog](http://olaviivask.wordpress.com/2008/03/31/how-to-remove-diacritics-with-javascript/)...

```
function replaceDiacritics(s) {
    var diacritics = [
        /[\300-\306]/g, /[\340-\346]/g, // A, a
        /[\310-\313]/g, /[\350-\353]/g, // E, e
        /[\314-\317]/g, /[\354-\357]/g, // I, i
        /[\322-\330]/g, /[\362-\370]/g, // O, o
        /[\331-\334]/g, /[\371-\374]/g, // U, u
        /[\321]/g, /[\361]/g,           // N, n
        /[\307]/g, /[\347]/g            // C, c
    ];
    var chars = ['A','a','E','e','I','i','O','o','U','u','N','n','C','c'];
    for (var i = 0; i < diacritics.length; i++) {
        s = s.replace(diacritics[i], chars[i]);
    }
    return s;
}
```

You can see this is simply an array of regexes for known diacritic chars, mapping them back onto a "plain" character.
In modern browsers and node.js you can use [unicode normalization](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/normalize) to decompose those characters, followed by a filtering regex.

`str.normalize('NFKD').replace(/[^\w]/g, '')`

If you wanted to allow characters such as whitespace, dashes, etc., you should extend the regex to allow them.

`str.normalize('NFKD').replace(/[^\w\s.-_\/]/g, '')`

```
var str = 'áàâäãéèëêíìïîóòöôõúùüûñçăşţ';
var asciiStr = str.normalize('NFKD').replace(/[^\w]/g, '');
console.info(str, asciiStr);
```

**NOTES:** This method does not work with characters that do not have a Unicode decomposition, e.g. `ø` and `ł`.
Replacing diacritics in Javascript
[ "javascript", "diacritics" ]
Is it possible to leave a ContextMenuStrip open after a selection/check of certain items? I plan on using a simple ContextMenuStrip to set a filter (this way i could use the same filter either in a menu or as a right-click option). The menu lists a number of items, and i would like the user to be able to make a selection of the items using the basic Check functionality. Once the selection is done the user can click an Activate filter option or can click outside the menu to either activate or cancel the filter. On a selection/click event the menu normally closes. Is it possible to keep the menu open on a click event?
To prevent the context menu from closing when an item is clicked, do the following: in the MouseDown event of the menu items, set a flag to false, then set it back to true in the Closing event of the context menu. Example:

```
Private blnClose As Boolean = True

Private Sub MoveUpToolStripMenuItem_MouseDown(ByVal sender As Object, ByVal e As System.Windows.Forms.MouseEventArgs) Handles MoveUpToolStripMenuItem.MouseDown
    blnClose = False
End Sub

Private Sub ContextMenuStrip1_Closing(ByVal sender As Object, ByVal e As System.Windows.Forms.ToolStripDropDownClosingEventArgs) Handles ContextMenuStrip1.Closing
    e.Cancel = Not blnClose
    blnClose = True
End Sub
```
In case future programmers are wondering how to do this, this is what I figured out. This will not close the context menu if any item is clicked. Handle the ContextMenuStrip's Closing event and set up an if statement to cancel the close if the close reason is ItemClicked:

```
private void contextMenuStrip_Closing(object sender, ToolStripDropDownClosingEventArgs e)
{
    if (e.CloseReason == ToolStripDropDownCloseReason.ItemClicked)
        e.Cancel = true;
}
```
Do not close ContextMenuStrip on selection of certain items
[ "c#", ".net", "user-interface", "contextmenustrip" ]
I am currently developing a very simple database that tracks people and the company certifications they hold. This db will need to store letters of certification as PDF files. I was asked to develop this in MS Access, but I think it would be better to build it in SQL Server and use Windows Forms to build the interface. This app will need to be accessible from a public location like a share drive. My question is: would it be better to do this in SQL Server like I think, or am I full of it and my boss is right on the money? Or are we both wrong?
If the application is, as you said, a very simple database, that's precisely what Access is for: creating simple databases. You can write both the database and the application forms within the same environment, and users won't need to get anything installed.

Be careful though with concurrent access to your application. If you go for the Access solution, multiple users won't be able to use the application at the same time. If you want this to happen, you will need the database and the application to be separate. This doesn't mean that the DB needs to be SQL Server; you can still use Access as your database if you don't require the power of a more complex engine.

EDIT: Just read in a comment that you are planning to have 10 users and less than 1000 records. FORGET about SQL Server, you will be wasting your money. No matter if you decide to go for a simple all-Access solution or for a distributed web application or desktop app with remote storage, Access is hundreds of times more powerful than what you need. Even for the "toy-ish" engine that Access is, you would not be using 1% of it.
A good alternative to Access which I use *a lot* is [SQL Server Compact](http://www.microsoft.com/sqlserver/2008/en/us/compact.aspx) (SqlCe). This is a completely different product than SQL Server Express/Standard/etc. It is an in-process database like Access, it does not run as a separate process or service. * It is free * Full ACID support + Supports multiple connections + Full transactional support + Referential integrity (including cascading updates and deletes) + Locking * T-SQL syntax and SQL Server data types (same API as SQL Server) * Small footprint (~2 MB) * Easy deployment (supports ClickOnce, MSI, XCopy, etc) * Database is contained in a single file you can move around * Supports ADO.NET, LINQ to SQL, LINQ to Entities.
Access vs SqlServer for a Simple Database
[ "sql", "sql-server", "ms-access" ]
In a GWT solution (so this is Java code that is then compiled to JavaScript), there are of course some classes. Is it a good idea to make the setter check for null on a String field? Something like this:

```
public void setSomeField(String someField) {
    if (null != someField)
        this.someField = someField;
    else
        this.someField = "";
}
```

Is this a good or bad idea? On the one hand it will make coding easier, as I won't have to check for null; on the other hand it would probably make me forget that I have to do this for other strings. Thoughts? Thanks
I say if such logic is needed in your application, the setter is the place to put it. The main reason to have a get/set wrap around a private var is to be able to put logic around the access. To answer the question of to default or not to default: in my application it made sense to have a set of properties fall back to string.empty for display reasons. Although people could argue that the view should then cover these possibilities and check for nulls and display nothing, it was a lot of bloat all over my pages to do a check on every property. That's why I started implementing SafeXX properties. So say I had 'myObj.Name' that could possibly have a null value; there would also be a property 'myObj.SafeName' that caught the null in the getter and returned a string.empty instead. The little naming convention gives away that it is not the regular getter.
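A minimal Java sketch of the two placements described above (normalizing in the setter, plus a separate null-safe "Safe" getter). The `Person` and `getSafeName` names are made up for illustration; they just follow the naming convention the answer describes:

```java
public class Person {
    private String name;

    // Setter that normalizes null to the empty string
    public void setName(String name) {
        this.name = (name != null) ? name : "";
    }

    // Regular getter: returns exactly what was stored
    public String getName() {
        return name;
    }

    // "Safe" getter: never returns null, convenient for display code
    public String getSafeName() {
        return (name != null) ? name : "";
    }

    public static void main(String[] args) {
        Person p = new Person();
        System.out.println("[" + p.getSafeName() + "]"); // prints [] even though name was never set
        p.setName(null);
        System.out.println("[" + p.getName() + "]");     // prints [] because the setter normalized null
    }
}
```

With the setter normalizing, callers never see null from `getName` after a set; the safe getter additionally covers fields that were never set at all.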
Here's something to consider. Would you expect this unit test to pass or fail?: ``` yourClass.setSomeField(null); assertNull(yourClass.getSomeField()); ``` If you're changing the null value to an empty string and returning that in getSomeField, then the client now has to check two conditions when testing...a String and a null String. Not a big deal, but what happens if you've got twenty String properties in the class...you'd probably better try to be consistent amongst all of the setters, and if you're not then the reason should probably be more obvious than just the documentation saying so. There are certain conventions around getters and setters; certain expectations. If I call a setter on an object, then I usually expect the getter to return what I set. I don't expect it to return some representation of what I passed in that is more convenient for the class to work with internally. I don't care about the internals of the class, and don't want to.
Smarter setter? Good or Bad Idea?
[ "java", "gwt" ]
Are precompiled headers supported on gcc 3.3.3? If yes, what is the syntax to generate and use precompiled headers on Linux with gcc 3.3.3? We cross-compile our projects on Windows and Linux; on Windows we precompile stdafx.h, and I'm investigating how to set it up so that it is precompiled on Linux as well. I'm aware of [the gcc doc](http://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html); I'm searching for the actual solution. On gcc 3.3.3 I get this:

```
> gcc stdafx.h
gcc: compilation of header file requested
```

And last, if it worked, what would be the name of the generated file? **EDIT**: Precompiled headers do not seem to be supported on gcc 3.3.3, and on newer gcc the generated file is named with a .gch extension.
I don't know from what version gcc supports it, but for how to use them just read the [gcc documentation](http://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html). Anyway, gcc 3.3.3 is pretty old, too. Maybe there's a chance that you can upgrade to a more recent 4.X version? That should support precompiled headers. Maybe you could try the latest 3.X GCC (GCC 3.4.6). I assume the ABI break is from 3.X to 4.X, so 3.4 may be compatible. I think it may be worth checking. From <http://gcc.gnu.org/gcc-3.4/changes.html>:

> C/Objective-C/C++
> Precompiled headers are now supported.
I'm not entirely sure if GCC 3.3 supports it, but precompiling headers is actually no different from producing objects, at least with GCC 4.x: ``` gcc $CFLAGS header.h ``` It'll produce a new precompiled header next to the .h file and automatically use it when it's `#include`d.
Precompiled headers supported on gcc 3.3.3?
[ "c++", "linux", "gcc" ]
On SQL Server 2005, I have a complex multi-level allocation process which looks like this (pseudo-SQL): ``` FOR EACH @LVL_NUM < @MAX_LVL: INSERT INTO ALLOCS SELECT 'OUT', * FROM BALANCES(@LVL_NUM) INNER JOIN ALLOCN_SUMRY(@LVL_NUM) INSERT INTO ALLOCS SELECT 'IN', * FROM BALANCES(@LVL_NUM) INNER JOIN ALLOCNS(@LVL_NUM) INNER JOIN ALLOCN_SUMRY(@LVL_NUM) ``` Where `ALLOCS` is seeded with direct allocations and then `BALANCES(@LVL_NUM)` is based on `ALLOCS` at the `@LVL_NUM` (which might be some direct allocations plus some IN allocations from a previous level) and `ALOCNS(@LVL_NUM)` is based on `BALANCES(@LVL_NUM)` and `ALOCN_SUMRY(@LVL_NUM)` is simply based on `ALOCNS(@LVL_NUM)` - with a lot of configuration tables which indicate the drivers which drive the allocations out. This is simplified, but there are actually four or five pairs like this within the loop because there are a variety of logics which are not possible to handle together (and some cases which are possible to handle together.) The basic logic is to take the total amount in a particular cost center/product line/etc (i.e. the `BALANCES`) and then allocate it out to another cost center/product line/etc based on its share (i.e. the `ALLOCNS / ALLOCN_SUMRY` percentage share) of a particular metric. With so much logic repeated in the `OUT` recordkeeping and the `IN`, and of course the `SUMRY` based on the `ALLOCN` detail, I ended up implementing using inline table value functions, which seem to perform fairly well (and they match the existing system's behaviour in the regression tests, which is a plus!). (The existing system is a monster C/C++/MFC/ODBC program that reads all the data into massive arrays and other data structures and is pretty atrociously written.) 
The problem appears to be that when run in the loop I appear to be getting execution plan issues as I work my way up the levels as the `ALLOCS` table starts to change (and everything is changing, because the levels have different cost centers, so the configuration being used to drive the `ALLOCNS` is changing). I have up to 99 levels, I think, but the lowest levels start 2, 4, 6. It appears that running `@LVL_NUM = 6` by itself outside of the UDF performs fine, but that the UDF performs poorly - presumably because the UDF has a cached plan or that the overall plan is already bad because of the `ALLOCS` added from earlier steps at `@LVL_NUM IN (2, 4)`. Earlier in development, I managed to get 30 levels run in 30 minutes, but now I can't get it to complete the first 3 levels in 2 hours. I'm considering running the two inserts within another SP and calling it WITH RECOMPILE, but was curious if this RECOMPILE cascades properly into the TVF UDFs? Any other advice would also be appreciated. Real Code: ``` /****** Object: UserDefinedFunction [MISProcess].[udf_MR_BALANCES_STAT_UNI] Script Date: 05/14/2009 22:16:09 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE FUNCTION [MISProcess].[udf_MR_BALANCES_STAT_UNI] ( @DATA_DT_ID int ,@LVL_NUM int ) RETURNS TABLE -- WITH SCHEMABINDING AS RETURN ( SELECT AB.YYMM_ID ,AB.BUS_UNIT_ID ,AB.BUS_UNIT_PROD_LINE_CD -- ,AB.ALOCN_SRC_CD ,AB.ALOCN_SRC_PROD_LINE_CD ,CASE WHEN ORIG_ALSRC.ALOCN_TYPE_CD = 'C' AND ORIG_ALSRC.RETN_IND = 'Y' THEN AB.ORIG_ALOCN_SRC_CD ELSE AB.BUS_UNIT_ID END AS ORIG_ALOCN_SRC_CD ,CASE WHEN BUPALSRC.COLLAPSE_IND = 'Y' THEN BUPLNTM.ALOCN_LINE_ITEM_NUM ELSE AB.LINE_ITEM_NUM END AS ALOCN_LINE_ITEM_NUM ,SUM(BUPLNTM.ALOCN_SIGN_IND * AB.ANULZD_ACTL_BAL) AS ANULZD_ACTL_BAL FROM MISWork.vwMR_BALANCES AS AB INNER JOIN MISProcess.LKP_BUPLNTM AS BUPLNTM ON BUPLNTM.DATA_DT_ID = @DATA_DT_ID AND BUPLNTM.LINE_ITEM_NUM = AB.LINE_ITEM_NUM AND BUPLNTM.ALOCN_LINE_ITEM_NUM <> 0 INNER JOIN 
[MISProcess].[udf_MR_ALSRC](@DATA_DT_ID, @LVL_NUM) AS BUPALSRC ON BUPALSRC.ALOCN_SRC_CD = AB.BUS_UNIT_ID INNER JOIN [MISProcess].LKP_BUPALSRC AS ORIG_ALSRC ON ORIG_ALSRC.DATA_DT_ID = @DATA_DT_ID AND ORIG_ALSRC.ALOCN_SRC_CD = AB.ORIG_ALOCN_SRC_CD GROUP BY AB.YYMM_ID ,AB.BUS_UNIT_ID ,AB.BUS_UNIT_PROD_LINE_CD -- ,AB.ALOCN_SRC_CD ,AB.ALOCN_SRC_PROD_LINE_CD ,CASE WHEN ORIG_ALSRC.ALOCN_TYPE_CD = 'C' AND ORIG_ALSRC.RETN_IND = 'Y' THEN AB.ORIG_ALOCN_SRC_CD ELSE AB.BUS_UNIT_ID END ,CASE WHEN BUPALSRC.COLLAPSE_IND = 'Y' THEN BUPLNTM.ALOCN_LINE_ITEM_NUM ELSE AB.LINE_ITEM_NUM END ) /****** Object: UserDefinedFunction [MISProcess].[udf_MR_ALOCNS_STAT_UNI] Script Date: 05/14/2009 22:16:16 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE FUNCTION [MISProcess].[udf_MR_ALOCNS_STAT_UNI] ( @DATA_DT_ID int ,@LVL_NUM int ) RETURNS TABLE -- WITH SCHEMABINDING AS RETURN ( SELECT BALANCES.YYMM_ID ,BS.ALOCN_SRC_CD AS BUS_UNIT_ID ,BS.PROD_LINE_CD AS BUS_UNIT_PROD_LINE_CD ,BALANCES.BUS_UNIT_ID AS ALOCN_SRC_CD ,BALANCES.BUS_UNIT_PROD_LINE_CD AS ALOCN_SRC_PROD_LINE_CD ,BALANCES.ORIG_ALOCN_SRC_CD ,BALANCES.ALOCN_LINE_ITEM_NUM ,SUM(BS.ACCT_STATS_CNT) AS ACCT_STATS_CNT FROM [MISProcess].[udf_MR_BALANCES_STAT_UNI](@DATA_DT_ID, @LVL_NUM) AS BALANCES INNER JOIN [MISProcess].[udf_MR_ALSRC](@DATA_DT_ID, @LVL_NUM) AS BUPALSRC ON BUPALSRC.ALOCN_SRC_CD = BALANCES.BUS_UNIT_ID INNER JOIN MISProcess.LKP_PRODLINE AS PRODLINE ON PRODLINE.DATA_DT_ID = @DATA_DT_ID AND PRODLINE.PROD_LINE_CD = BALANCES.BUS_UNIT_PROD_LINE_CD INNER JOIN PUASFIN.FocusResults.BS AS BS ON BS.YYMM_ID = BALANCES.YYMM_ID AND BS.ALOCN_BASE_CD = BUPALSRC.ALOCN_BASE_CD AND BS.ALOCN_SRC_CD <> BALANCES.BUS_UNIT_ID AND ( PRODLINE.GENRC_PROD_LINE_IND = 'Y' OR BS.PROD_LINE_CD = BALANCES.BUS_UNIT_PROD_LINE_CD ) INNER JOIN [MISProcess].[udf_MR_ALSRC](@DATA_DT_ID, 0) AS DEST_BUP_ALSRC ON DEST_BUP_ALSRC.ALOCN_SRC_CD = BS.ALOCN_SRC_CD AND DEST_BUP_ALSRC.ALOCN_LVL_NUM > @LVL_NUM LEFT JOIN 
[MISProcess].[udf_MR_BLOCK_STD_COST_PCT](@DATA_DT_ID) AS BLOCK_STD_COST_PCT ON BLOCK_STD_COST_PCT.FROM_ALOCN_SRC_CD = BALANCES.BUS_UNIT_ID LEFT JOIN [MISProcess].[udf_MR_BLOCK_NOT](@DATA_DT_ID) AS BLOCK_NOT ON BLOCK_NOT.ALOCN_SRC_CD = BALANCES.BUS_UNIT_ID LEFT JOIN [MISProcess].[udf_MR_BLOCK](@DATA_DT_ID) AS BLOCK ON BLOCK_NOT.ALOCN_SRC_CD IS NULL AND BLOCK.FROM_ALOCN_SRC_CD = BALANCES.BUS_UNIT_ID AND ( BLOCK.FROM_PROD_LINE_CD IS NULL OR BLOCK.FROM_PROD_LINE_CD = BALANCES.BUS_UNIT_PROD_LINE_CD ) LEFT JOIN [MISProcess].[udf_MR_BLOCK_ALOCN_PAIRS](@DATA_DT_ID, @LVL_NUM) AS BLOCK_ALOCN_PAIRS ON BLOCK_NOT.ALOCN_SRC_CD IS NOT NULL AND BLOCK_ALOCN_PAIRS.FROM_ALOCN_SRC_CD = BALANCES.BUS_UNIT_ID AND BLOCK_ALOCN_PAIRS.TO_ALOCN_SRC_CD = BS.ALOCN_SRC_CD WHERE BLOCK_ALOCN_PAIRS.TO_ALOCN_SRC_CD IS NULL AND BLOCK_STD_COST_PCT.FROM_ALOCN_SRC_CD IS NULL AND ( BLOCK.TO_ALOCN_SRC_CD IS NULL OR BLOCK.TO_ALOCN_SRC_CD = BS.ALOCN_SRC_CD ) AND ( BLOCK.TO_PROD_LINE_CD IS NULL OR BLOCK.TO_PROD_LINE_CD = BS.PROD_LINE_CD ) AND ( BLOCK.YEAR_NUM IS NULL OR BLOCK.YEAR_NUM = BALANCES.YYMM_ID / 10000 ) AND ( BLOCK.MTH_NUM IS NULL OR BLOCK.MTH_NUM = (BALANCES.YYMM_ID / 100) % 100 ) AND ( BLOCK.TO_DIV_NUM IS NULL OR BLOCK.TO_DIV_NUM = DEST_BUP_ALSRC.DIV_NUM ) AND ( BLOCK.TO_GRP_NUM IS NULL OR BLOCK.TO_GRP_NUM = DEST_BUP_ALSRC.DIV_GRP ) AND ( BLOCK.TO_REGN_GRP_NM IS NULL OR BLOCK.TO_REGN_GRP_NM = DEST_BUP_ALSRC.REGN_GRP_NM ) AND ( BLOCK.TO_REGN_NM IS NULL OR BLOCK.TO_REGN_NM = DEST_BUP_ALSRC.REGN_NM ) AND ( BLOCK.TO_ARENA_NM IS NULL OR BLOCK.TO_ARENA_NM = DEST_BUP_ALSRC.ARENA_NM ) AND ( BLOCK.TO_SUB_REGN_NM IS NULL OR BLOCK.TO_SUB_REGN_NM = DEST_BUP_ALSRC.SUB_REGN_NM ) AND ( BLOCK.TO_SUB_ARENA_NM IS NULL OR BLOCK.TO_SUB_ARENA_NM = DEST_BUP_ALSRC.SUB_ARENA_NM ) GROUP BY BALANCES.YYMM_ID ,BS.ALOCN_SRC_CD ,BS.PROD_LINE_CD ,BALANCES.BUS_UNIT_ID ,BALANCES.BUS_UNIT_PROD_LINE_CD ,BALANCES.ORIG_ALOCN_SRC_CD ,BALANCES.ALOCN_LINE_ITEM_NUM ) /****** Object: UserDefinedFunction 
[MISProcess].[udf_MR_ALOCN_SUMRY_STAT_UNI] Script Date: 05/14/2009 22:16:28 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE FUNCTION [MISProcess].[udf_MR_ALOCN_SUMRY_STAT_UNI] ( @DATA_DT_ID int ,@LVL_NUM int ) RETURNS TABLE -- WITH SCHEMABINDING AS RETURN ( SELECT YYMM_ID ,ALOCN_SRC_CD ,ALOCN_SRC_PROD_LINE_CD ,ORIG_ALOCN_SRC_CD ,ALOCN_LINE_ITEM_NUM ,SUM(ACCT_STATS_CNT) AS ACCT_STATS_CNT FROM [MISProcess].[udf_MR_ALOCNS_STAT_UNI](@DATA_DT_ID, @LVL_NUM) AS ALOCNS GROUP BY YYMM_ID ,ALOCN_SRC_CD ,ALOCN_SRC_PROD_LINE_CD ,ORIG_ALOCN_SRC_CD ,ALOCN_LINE_ITEM_NUM ) ``` This is my testing batch which will eventually run the entire process in a single SP. You can see from commented out sections that I've been playing with temporary tables and table variables as well: ``` USE PCAPFIN DECLARE @DATA_DT_ID_use AS int DECLARE @MinLevel AS int DECLARE @MaxLevel AS int DECLARE @TestEveryLevel AS bit DECLARE @TestFinal AS bit SET @DATA_DT_ID_use = 20090331 SET @MinLevel = 6 SET @MaxLevel = 6 SET @TestEveryLevel = 0 SET @TestFinal = 1 --DECLARE @BALANCES TABLE ( -- METHOD_TXT varchar(12) NOT NULL -- ,YYMM_ID int NOT NULL -- ,BUS_UNIT_ID varchar(6) NOT NULL -- ,BUS_UNIT_PROD_LINE_CD varchar(4) NOT NULL -- ,ALOCN_SRC_PROD_LINE_CD varchar(4) NOT NULL -- ,ORIG_ALOCN_SRC_CD varchar(6) NOT NULL -- ,ALOCN_LINE_ITEM_NUM int NOT NULL -- ,ANULZD_ACTL_BAL money -- ) -- --DECLARE @ALOCNS TABLE ( -- METHOD_TXT varchar(12) NOT NULL -- ,YYMM_ID int NOT NULL -- ,BUS_UNIT_ID varchar(6) NOT NULL -- ,BUS_UNIT_PROD_LINE_CD varchar(4) NOT NULL -- ,ALOCN_SRC_CD varchar(6) NOT NULL -- ,ALOCN_SRC_PROD_LINE_CD varchar(4) NOT NULL -- ,ORIG_ALOCN_SRC_CD varchar(6) NOT NULL -- ,ALOCN_LINE_ITEM_NUM int NOT NULL -- ,ACCT_STATS_CNT money -- ) -- --DECLARE @ALOCN_SUMRY TABLE ( -- METHOD_TXT varchar(12) NOT NULL -- ,YYMM_ID int NOT NULL -- ,ALOCN_SRC_CD varchar(6) NOT NULL -- ,ALOCN_SRC_PROD_LINE_CD varchar(4) NOT NULL -- ,ORIG_ALOCN_SRC_CD varchar(6) NOT NULL -- ,ALOCN_LINE_ITEM_NUM int NOT NULL -- 
,ACCT_STATS_CNT money -- ) --IF OBJECT_ID('tempdb..#BALANCES') IS NOT NULL -- DROP TABLE #BALANCES -- --CREATE TABLE #BALANCES ( -- METHOD_TXT varchar(12) NOT NULL -- ,YYMM_ID int NOT NULL -- ,BUS_UNIT_ID varchar(6) NOT NULL -- ,BUS_UNIT_PROD_LINE_CD varchar(4) NOT NULL -- ,ALOCN_SRC_PROD_LINE_CD varchar(4) NOT NULL -- ,ORIG_ALOCN_SRC_CD varchar(6) NOT NULL -- ,ALOCN_LINE_ITEM_NUM int NOT NULL -- ,ANULZD_ACTL_BAL money -- ,CONSTRAINT [PK_BALANCES] PRIMARY KEY CLUSTERED ([METHOD_TXT] ASC, [YYMM_ID] ASC, [BUS_UNIT_ID] ASC, [BUS_UNIT_PROD_LINE_CD] ASC, [ALOCN_SRC_PROD_LINE_CD] ASC, [ORIG_ALOCN_SRC_CD] ASC, [ALOCN_LINE_ITEM_NUM] ASC) -- WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, -- IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, -- ALLOW_PAGE_LOCKS = ON) -- ) -- --IF OBJECT_ID('tempdb..#ALOCN_SUMRY') IS NOT NULL -- DROP TABLE #ALOCNS -- --CREATE TABLE #ALOCNS ( -- METHOD_TXT varchar(12) NOT NULL -- ,YYMM_ID int NOT NULL -- ,BUS_UNIT_ID varchar(6) NOT NULL -- ,BUS_UNIT_PROD_LINE_CD varchar(4) NOT NULL -- ,ALOCN_SRC_CD varchar(6) NOT NULL -- ,ALOCN_SRC_PROD_LINE_CD varchar(4) NOT NULL -- ,ORIG_ALOCN_SRC_CD varchar(6) NOT NULL -- ,ALOCN_LINE_ITEM_NUM int NOT NULL -- ,ACCT_STATS_CNT money -- ,CONSTRAINT [PK_ALOCNS] PRIMARY KEY CLUSTERED ([METHOD_TXT] ASC, YYMM_ID ASC, BUS_UNIT_ID ASC, BUS_UNIT_PROD_LINE_CD ASC, ALOCN_SRC_CD ASC, ALOCN_SRC_PROD_LINE_CD ASC, ORIG_ALOCN_SRC_CD ASC, ALOCN_LINE_ITEM_NUM ASC) -- WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, -- IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, -- ALLOW_PAGE_LOCKS = ON) -- ) -- --IF OBJECT_ID('tempdb..#ALOCN_SUMRY') IS NOT NULL -- DROP TABLE #ALOCN_SUMRY --CREATE TABLE #ALOCN_SUMRY ( -- METHOD_TXT varchar(12) NOT NULL -- ,YYMM_ID int NOT NULL -- ,ALOCN_SRC_CD varchar(6) NOT NULL -- ,ALOCN_SRC_PROD_LINE_CD varchar(4) NOT NULL -- ,ORIG_ALOCN_SRC_CD varchar(6) NOT NULL -- ,ALOCN_LINE_ITEM_NUM int NOT NULL -- ,ACCT_STATS_CNT money -- ,CONSTRAINT [PK_ALOCN_SUMRY] PRIMARY KEY CLUSTERED ([METHOD_TXT] ASC, YYMM_ID 
ASC, ALOCN_SRC_CD ASC, ALOCN_SRC_PROD_LINE_CD ASC, ORIG_ALOCN_SRC_CD ASC, ALOCN_LINE_ITEM_NUM ASC) -- WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, -- IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, -- ALLOW_PAGE_LOCKS = ON) -- ) SET @MinLevel = ( SELECT MIN(BUPALSRC.ALOCN_LVL_NUM) FROM MISProcess.LKP_BUPALSRC AS BUPALSRC WHERE BUPALSRC.DATA_DT_ID = @DATA_DT_ID_use AND BUPALSRC.ALOCN_LVL_NUM >= @MinLevel ) DECLARE @Restart AS bit IF @MinLevel > ( SELECT MIN(BUPALSRC.ALOCN_LVL_NUM) FROM MISProcess.LKP_BUPALSRC AS BUPALSRC WHERE BUPALSRC.DATA_DT_ID = @DATA_DT_ID_use ) SET @Restart = 0 ELSE SET @Restart = 1 DECLARE @subset_criteria AS varchar(max) SET NOCOUNT ON IF @Restart = 1 BEGIN RAISERROR ('Restarting process', 10, 1) WITH NOWAIT -- TRUNCATE TABLE MISWork.AB DELETE FROM MISWork.AB INSERT INTO MISWork.AB ( YYMM_ID ,BUS_UNIT_ID ,BUS_UNIT_PROD_LINE_CD ,ALOCN_SRC_CD ,ALOCN_SRC_PROD_LINE_CD ,ORIG_ALOCN_SRC_CD ,LINE_ITEM_NUM ,BAL_ORIGTN_IND ,ANULZD_ACTL_BAL ,ACCT_STATS_CNT ,LVL_NUM ,METHOD_TXT ) SELECT YYMM_ID ,ALOCN_SRC_CD AS BUS_UNIT_ID ,PROD_LINE_CD AS BUS_UNIT_PROD_LINE_CD ,ALOCN_SRC_CD ,PROD_LINE_CD AS ALOCN_SRC_PROD_LINE_CD ,ALOCN_SRC_CD AS ORIG_ALOCN_SRC_CD ,LINE_ITEM_NUM ,'D' AS BAL_ORIGTN_IND ,FIN_ALOCN_AMT AS ANULZD_ACTL_BAL ,0.0 AS ACCT_STATS_CNT ,0 AS LVL_NUM ,'D-INIT' AS METHOD_TXT -- FROM MISProcess.FIN_FTP FROM PUASFIN.FocusResults.BUPALLGE END ELSE BEGIN DELETE FROM MISWork.AB WHERE LVL_NUM >= @MinLevel END DECLARE @LVL_NUM AS int SET @LVL_NUM = @MinLevel WHILE @LVL_NUM <= @MaxLevel BEGIN DECLARE @LevelStart AS varchar(50) SET @LevelStart = 'Level:' + CONVERT(varchar, @LVL_NUM) RAISERROR (@LevelStart, 10, 1) WITH NOWAIT RAISERROR ('STD_COST_PCT allocations - No D - B records', 10, 1) WITH NOWAIT -- STD_COST_PCT allocations - No D - B records INSERT INTO MISWork.AB ( YYMM_ID ,BUS_UNIT_ID ,BUS_UNIT_PROD_LINE_CD ,ALOCN_SRC_CD ,ALOCN_SRC_PROD_LINE_CD ,ORIG_ALOCN_SRC_CD ,LINE_ITEM_NUM ,BAL_ORIGTN_IND ,ANULZD_ACTL_BAL ,ACCT_STATS_CNT ,LVL_NUM ,METHOD_TXT ) 
SELECT ALOCNS.YYMM_ID
    ,ALOCNS.BUS_UNIT_ID
    ,ALOCNS.BUS_UNIT_PROD_LINE_CD
    ,ALOCNS.BUS_UNIT_ID AS ALOCN_SRC_CD
    ,ALOCNS.BUS_UNIT_PROD_LINE_CD AS ALOCN_SRC_PROD_LINE_CD
    ,ALOCNS.BUS_UNIT_ID AS ORIG_ALOCN_SRC_CD
    ,ALOCNS.LINE_ITEM_NUM
    ,'B' AS BAL_ORIGTN_IND
    ,-1.0 * ROUND(ALOCNS.ANULZD_ACTL_BAL, 2) AS ANULZD_ACTL_BAL
    ,ROUND(ALOCNS.ACCT_STATS_CNT, 2) AS ACCT_STATS_CNT
    ,@LVL_NUM AS LVL_NUM
    ,'NO-D-B' AS METHOD_TXT
FROM [MISProcess].[udf_MR_ALOCNS_STD_COST_PCT_NO_D](@DATA_DT_ID_use, @LVL_NUM) AS ALOCNS

RAISERROR ('STD_COST_PCT allocations - No D - A records', 10, 1) WITH NOWAIT

-- STD_COST_PCT allocations - No D - A records
INSERT INTO MISWork.AB
(
    YYMM_ID
    ,BUS_UNIT_ID
    ,BUS_UNIT_PROD_LINE_CD
    ,ALOCN_SRC_CD
    ,ALOCN_SRC_PROD_LINE_CD
    ,ORIG_ALOCN_SRC_CD
    ,LINE_ITEM_NUM
    ,BAL_ORIGTN_IND
    ,ANULZD_ACTL_BAL
    ,ACCT_STATS_CNT
    ,LVL_NUM
    ,METHOD_TXT
)
SELECT ALOCNS.YYMM_ID
    ,BLOCK.TO_ALOCN_SRC_CD AS BUS_UNIT_ID
    ,ALOCNS.ALOCN_SRC_PROD_LINE_CD AS BUS_UNIT_PROD_LINE_CD
    ,ALOCNS.ALOCN_SRC_CD
    ,ALOCNS.BUS_UNIT_PROD_LINE_CD AS ALOCN_SRC_PROD_LINE_CD
    ,ALOCNS.ORIG_ALOCN_SRC_CD
    ,ALOCNS.LINE_ITEM_NUM
    ,'A' AS BAL_ORIGTN_IND
    ,ROUND(ALOCNS.ANULZD_ACTL_BAL, 2) AS ANULZD_ACTL_BAL
    ,ROUND(ALOCNS.ACCT_STATS_CNT, 2) AS ACCT_STATS_CNT
    ,@LVL_NUM AS LVL_NUM
    ,'NO-D-A' AS METHOD_TXT
FROM [MISProcess].[udf_MR_ALOCNS_STD_COST_PCT_NO_D](@DATA_DT_ID_use, @LVL_NUM) AS ALOCNS
INNER JOIN MISProcess.LKP_BLOCK AS BLOCK    -- TODO: Can this be moved into the udf above?
    ON BLOCK.DATA_DT_ID = @DATA_DT_ID_use
    AND BLOCK.FROM_ALOCN_SRC_CD = ALOCNS.BUS_UNIT_ID

RAISERROR ('STD_COST_PCT allocations - B records', 10, 1) WITH NOWAIT

-- STD_COST_PCT allocations - B records
INSERT INTO MISWork.AB
(
    YYMM_ID
    ,BUS_UNIT_ID
    ,BUS_UNIT_PROD_LINE_CD
    ,ALOCN_SRC_CD
    ,ALOCN_SRC_PROD_LINE_CD
    ,ORIG_ALOCN_SRC_CD
    ,LINE_ITEM_NUM
    ,BAL_ORIGTN_IND
    ,ANULZD_ACTL_BAL
    ,ACCT_STATS_CNT
    ,LVL_NUM
    ,METHOD_TXT
)
SELECT ALOCNS.YYMM_ID
    ,ALOCNS.BUS_UNIT_ID
    ,ALOCNS.BUS_UNIT_PROD_LINE_CD
    ,ALOCNS.ALOCN_SRC_CD
    ,ALOCNS.BUS_UNIT_PROD_LINE_CD AS ALOCN_SRC_PROD_LINE_CD
    ,ALOCNS.ORIG_ALOCN_SRC_CD
    ,ALOCNS.LINE_ITEM_NUM
    ,'B' AS BAL_ORIGTN_IND
    ,-1.0 * ROUND(ALOCNS.ANULZD_ACTL_BAL * RATIOS.RATIO, 2) AS ANULZD_ACTL_BAL
    ,ROUND(ALOCNS.ACCT_STATS_CNT, 2) AS ACCT_STATS_CNT
    ,@LVL_NUM AS LVL_NUM
    ,'STD-B' AS METHOD_TXT
FROM [MISProcess].[udf_MR_ALOCNS_STD_COST_PCT](@DATA_DT_ID_use, @LVL_NUM) AS ALOCNS
INNER JOIN [MISProcess].[udf_MR_RATIOS_STD_COST_PCT](@DATA_DT_ID_use, @LVL_NUM) AS RATIOS
    ON RATIOS.YYMM_ID = ALOCNS.YYMM_ID
    AND RATIOS.BUS_UNIT_ID = ALOCNS.BUS_UNIT_ID
    AND RATIOS.LINE_ITEM_NUM = ALOCNS.LINE_ITEM_NUM

RAISERROR ('STD_COST_PCT allocations - A records', 10, 1) WITH NOWAIT

-- STD_COST_PCT allocations - A records
;
WITH CORRECTED_ALOCNS AS
(
    SELECT ALOCNS.YYMM_ID
        ,ALOCNS.BUS_UNIT_ID
        ,ALOCNS.BUS_UNIT_PROD_LINE_CD
        ,ALOCNS.ALOCN_SRC_CD
        ,ALOCNS.ALOCN_SRC_PROD_LINE_CD
        ,ALOCNS.ORIG_ALOCN_SRC_CD
        ,ALOCNS.LINE_ITEM_NUM
        ,ALOCNS.ANULZD_ACTL_BAL * RATIOS.RATIO AS ANULZD_ACTL_BAL
        ,CASE WHEN RATIOS.RATIO <> 1.0 THEN RATIOS.RATIO ELSE ALOCNS.ACCT_STATS_CNT END AS ACCT_STATS_CNT
    FROM [MISProcess].[udf_MR_CORR_ALOCNS_STD_COST_PCT](@DATA_DT_ID_use, @LVL_NUM) AS ALOCNS
    INNER JOIN [MISProcess].[udf_MR_RATIOS_STD_COST_PCT](@DATA_DT_ID_use, @LVL_NUM) AS RATIOS
        ON RATIOS.YYMM_ID = ALOCNS.YYMM_ID
        AND RATIOS.BUS_UNIT_ID = ALOCNS.ALOCN_SRC_CD
        AND RATIOS.LINE_ITEM_NUM = ALOCNS.LINE_ITEM_NUM
),
ROUNDED_ALOCNS AS
(
    SELECT YYMM_ID
        ,BUS_UNIT_ID
        ,BUS_UNIT_PROD_LINE_CD
        ,ALOCN_SRC_CD
        ,ALOCN_SRC_PROD_LINE_CD
        ,ORIG_ALOCN_SRC_CD
        ,LINE_ITEM_NUM
        ,CASE WHEN ABS(ANULZD_ACTL_BAL) < 0.05 THEN 0.0
              WHEN ABS(ANULZD_ACTL_BAL) > 0.05 AND ABS(ANULZD_ACTL_BAL) < 0.10 THEN 0.10 * SIGN(ANULZD_ACTL_BAL)
              ELSE ANULZD_ACTL_BAL
         END AS ANULZD_ACTL_BAL
        ,ACCT_STATS_CNT
    FROM CORRECTED_ALOCNS
)
INSERT INTO MISWork.AB
(
    YYMM_ID
    ,BUS_UNIT_ID
    ,BUS_UNIT_PROD_LINE_CD
    ,ALOCN_SRC_CD
    ,ALOCN_SRC_PROD_LINE_CD
    ,ORIG_ALOCN_SRC_CD
    ,LINE_ITEM_NUM
    ,BAL_ORIGTN_IND
    ,ANULZD_ACTL_BAL
    ,ACCT_STATS_CNT
    ,LVL_NUM
    ,METHOD_TXT
)
SELECT YYMM_ID
    ,BUS_UNIT_ID
    ,BUS_UNIT_PROD_LINE_CD
    ,ALOCN_SRC_CD
    ,ALOCN_SRC_PROD_LINE_CD
    ,ORIG_ALOCN_SRC_CD
    ,LINE_ITEM_NUM
    ,'A' AS BAL_ORIGTN_IND
    ,ROUND(ANULZD_ACTL_BAL, 2) AS ANULZD_ACTL_BAL
    ,ROUND(ACCT_STATS_CNT, 2) AS ACCT_STATS_CNT
    ,@LVL_NUM AS LVL_NUM
    ,'STD-A' AS METHOD_TXT
FROM ROUNDED_ALOCNS
WHERE ANULZD_ACTL_BAL <> 0.0
    OR ACCT_STATS_CNT <> 0.0

RAISERROR ('COLLAPSE, BLOCK 100 ALOCN_PCT - B records', 10, 1) WITH NOWAIT

-- COLLAPSE, BLOCK 100% ALOCN_PCT - B records
INSERT INTO MISWork.AB
(
    YYMM_ID
    ,BUS_UNIT_ID
    ,BUS_UNIT_PROD_LINE_CD
    ,ALOCN_SRC_CD
    ,ALOCN_SRC_PROD_LINE_CD
    ,ORIG_ALOCN_SRC_CD
    ,LINE_ITEM_NUM
    ,BAL_ORIGTN_IND
    ,ANULZD_ACTL_BAL
    ,ACCT_STATS_CNT
    ,LVL_NUM
    ,METHOD_TXT
)
SELECT BALANCES.YYMM_ID
    ,BALANCES.BUS_UNIT_ID
    ,BALANCES.BUS_UNIT_PROD_LINE_CD
    ,BALANCES.BUS_UNIT_ID AS ALOCN_SRC_CD
    ,BALANCES.BUS_UNIT_PROD_LINE_CD AS ALOCN_SRC_PROD_LINE_CD
    ,BALANCES.ORIG_ALOCN_SRC_CD
    ,BALANCES.ALOCN_LINE_ITEM_NUM AS LINE_ITEM_NUM
    ,'B' AS BAL_ORIGTN_IND
    ,-1.0 * BALANCES.ANULZD_ACTL_BAL
    ,ALOCN_SUMRY.ACCT_STATS_CNT
    ,@LVL_NUM AS LVL_NUM
    ,'BLOCK-100' AS METHOD_TXT
FROM [MISProcess].[udf_MR_BALANCES_BLOCK_100_PCT](@DATA_DT_ID_use, @LVL_NUM) AS BALANCES
INNER JOIN [MISProcess].[udf_MR_ALOCN_SUMRY_BLOCK_100_PCT](@DATA_DT_ID_use, @LVL_NUM) AS ALOCN_SUMRY
    ON ALOCN_SUMRY.YYMM_ID = BALANCES.YYMM_ID
    AND ALOCN_SUMRY.BUS_UNIT_ID = BALANCES.BUS_UNIT_ID
    AND ALOCN_SUMRY.BUS_UNIT_PROD_LINE_CD = BALANCES.BUS_UNIT_PROD_LINE_CD
    AND ALOCN_SUMRY.ALOCN_SRC_CD = BALANCES.ALOCN_SRC_CD
    AND ALOCN_SUMRY.ALOCN_SRC_PROD_LINE_CD = BALANCES.ALOCN_SRC_PROD_LINE_CD
    AND ALOCN_SUMRY.ORIG_ALOCN_SRC_CD = BALANCES.ORIG_ALOCN_SRC_CD

RAISERROR ('COLLAPSE, BLOCK 100 ALOCN_PCT - A records', 10, 1) WITH NOWAIT

-- COLLAPSE, BLOCK 100% ALOCN_PCT - A records
INSERT INTO MISWork.AB
(
    YYMM_ID
    ,BUS_UNIT_ID
    ,BUS_UNIT_PROD_LINE_CD
    ,ALOCN_SRC_CD
    ,ALOCN_SRC_PROD_LINE_CD
```
Yes, the recompile should extend to the TV UDFs. However, I'd use parameter masking, not RECOMPILE:

1. With queries like this, compilation will be expensive.
2. When the UDFs are unnested, the parameter masking will apply too.

TV UDFs do not have a plan as such: they are part of the calling query because they are unnested.

Can you break out some UDF calls into temporary tables and then join on the temp tables? I bet that when the UDFs are unnested, the query is simply too complex to run efficiently. The optimiser could take a week to find the ideal plan with something so complex. With temp tables (not table variables), I guess you'll get respectable improvements. I've used this technique myself in some larger queries (generating pricing trees for financial instruments).

The fact you have 150,000 rows is overshadowed by the sheer complexity, I reckon.

**Edit**: TVFs do not need parameter masking because they are only macros. You could literally replace each one with a CTE or derived table. See my answer here: [Does query plan optimizer works well with joined/filtered table-valued functions](https://stackoverflow.com/questions/311026/does-query-plan-optimizer-works-well-with-joined-filtered-table-valued-functions/311163#311163)

And [Tony Rogerson on Views](http://sqlblogcasts.com/blogs/tonyrogerson/archive/2008/01/03/views-they-offer-no-optimisation-benefits-they-are-simply-inline-macros-use-sparingly.aspx)
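A rough sketch of the temp-table idea, reusing one of the UDFs and the parameters from the procedure in the question (the index column list is only a guess; pick whatever columns your joins actually filter on):

```sql
-- Materialize one UDF result once; a real temp table gets real statistics,
-- which the optimiser can use, unlike the unnested inline TVF.
SELECT *
INTO   #ALOCNS_STD_COST_PCT
FROM   [MISProcess].[udf_MR_ALOCNS_STD_COST_PCT](@DATA_DT_ID_use, @LVL_NUM);

-- Optional: index the columns the later joins use.
CREATE CLUSTERED INDEX IX_tmp_alocns
    ON #ALOCNS_STD_COST_PCT (YYMM_ID, BUS_UNIT_ID, LINE_ITEM_NUM);

-- ...then reference #ALOCNS_STD_COST_PCT wherever the UDF was inlined.
```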
One of the things to watch out for with table-valued functions (especially the multi-statement table-valued functions) is that the resultant table, just like a table variable, has no column statistics and no indexes. I tend to use TVFs with caution.
T-SQL Process Design and Execution Plan (UDF Parameter Sniffing?)
[ "", "sql", "sql-server", "sql-server-2005", "sql-execution-plan", "parameter-sniffing", "" ]
I decided to store favs (the ids of users who marked a post as a favorite) as a comma-separated list in a favs column, which also sits in the messages table alongside sender, url, content, etc. But when I try to count those rows with a query like:

```
select count(id) from messages where favs like '%userid%'
```

it of course returns a wrong result, because one id can be a substring of another. For example, while querying for id=1 it also increases the counter for any content favorited by user id 11... Can you please tell me your idea or any solution to make this system work?
With a few or's, you can have an ugly solution:

```
select count(id) from messages
where favs = 'userid'
   or favs like 'userid,%'
   or favs like '%,userid,%'
   or favs like '%,userid'
```

(The bare `favs = 'userid'` case covers rows where that user is the only one in the list.) There's likely a more elegant solution, but that one will at least return the result you're looking for, I believe.
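For MySQL specifically, there is also a built-in function made for exactly this comma-separated-list case, which avoids the `LIKE` gymnastics (a sketch; `userid` stands in for the actual id value):

```sql
-- FIND_IN_SET returns the 1-based position of userid inside the
-- comma-separated favs list, or 0 when it is absent.
select count(id)
from messages
where find_in_set(userid, favs) > 0;
```

Note that `FIND_IN_SET` cannot use an index, so the normalized join-table design in the other answer is still the scalable fix.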
Is it possible to change your data model such that the association between users and their favorite messages is instead stored in another table? Storing the associations in a single column negates the advantages of a relational database. You pay a performance cost using the like function, you can no longer store additional data about the relationship, and the data is harder to query.

An alternative model might look something like this (can't include an image since I'm a new user, but I made one [here](http://img257.imageshack.us/img257/9904/picture2bpf.png)):

```
users
 - id

messages
 - id

favorite_messages
 - user_id (foreign key to users.id)
 - message_id (foreign key to messages.id)
```

With that in place, your original query would be simplified to the following:

```
select count(1)
from favorite_messages
where user_id = userid
```

Additionally, you can do things like get a list of a user's favorite messages:

```
select *
from messages
inner join favorite_messages on messages.id = favorite_messages.message_id
where user_id = userid
```
Mysql: Getting Count of Comma Separated Values With Like
[ "", "php", "mysql", "csv", "explode", "" ]
I am using a background thread to initialize an instrument over USB. The UI hangs when I try to open the device. I would expect the background thread to pause when calling Open on the device, but not the UI thread. I am testing this with no UI interaction from the background thread. I don't know how to debug the problem, and it's too broad a question, but perhaps someone has seen something like this before. There is nothing wrong with the ActiveX interop as far as I know; the device works correctly. This is the general approach:

```
using System;
using System.Threading; // needed for Thread and ThreadPriority
using FancyVoltmeterLibrary;

namespace SOQuestion
{
    public class MeterClass
    {
        private FancyVoltmeter meter;
        private Thread meterThread;

        public MeterClass()
        {
            // Create instance of ActiveX/COM object.
            meter = new FancyVoltmeter();

            meterThread = new Thread(UpdateMeter);
            meterThread.Name = "Meter Thread";
            meterThread.Priority = ThreadPriority.Normal;
            meterThread.IsBackground = true;
            meterThread.Start();
        }

        private void UpdateMeter()
        {
            while(true)
            {
                Thread.Sleep(1000);
                if(!meter.IsOpen())
                {
                    // Meter may be powered off here.
                    // The call to Open takes about 1 second.
                    // UI hangs during the call???
                    meter.Open();
                }
                // code to read meter goes here.
            }
        }
    }
}
```

Edit: Perhaps it was unclear what I meant. By 'hang' I should say 'freezes momentarily'.
Does meter require running in an STA? Is the call to Open() actually being marshalled back to the UI thread for this reason? You can verify this is true by looking at the callstack of the hung UI thread in the debugger.
How long does the instantiation of the FancyVoltmeter take? Could it be that it is not the Open method that causes the UI freeze, but creating the COM object (which is done on the UI thread)? If that turns out to be the case, moving the creation of this object to happen on the new, separate worker thread should take care of the problem.

Edit: I saw now that you already found this out in your comment to Michael...
How can a background thread hang the UI thread?
[ "", "c#", "winforms", "multithreading", "" ]
I am trying to find a piece of regex to match a currency value. I would like to match only numbers and 1 decimal point ie **Allowed** * 10 * 100 * 100.00 **Not Allowed** * Alpha Characters * 100,00 * +/- 100 I have search and tried quite a few without any luck. Hope you can advise
```
if (preg_match('/^[0-9]+(?:\.[0-9]+)?$/', $subject)) {
    # Successful match
} else {
    # Match attempt failed
}
```

**Side note**: If you want to restrict how many decimal places are allowed, you can do something like this:

```
/^[0-9]+(?:\.[0-9]{1,3})?$/im
```

So

```
100.000
```

will match, whereas

```
100.0001
```

won't. If you need any further help, post a comment.

**PS** If you can, use the number formatter posted above. Native functions are always better (and faster); otherwise this solution will serve you well.
How about this:

```
if (preg_match('/^\d+(\.\d{2})?$/', $subject)) {
    // correct currency format
} else {
    // invalid currency format
}
```
PHP Currency Regular Expression
[ "", "php", "regex", "format", "numbers", "" ]
I have a data set that looks like this:

```
000 100 200 300 010 020 030 001 002 003
001 101 201 301 011 021 031 000 002 003
002 102 202 302 012 022 032 001 000 003
003 103 203 303 013 023 033 001 002 000
010 110 210 310 000 020 030 011 012 013
020 120 220 320 010 000 030 021 022 023
030 130 230 330 010 020 000 031 032 033
033 133 233 333 013 023 003 031 032 030
100 000 200 300 110 120 130 101 102 103
133 033 233 333 113 123 103 131 132 130
200 100 000 300 210 220 230 201 202 203
233 133 033 333 213 223 203 231 232 230
300 100 200 000 310 320 330 301 302 303
303 103 203 003 313 323 333 301 302 300
313 113 213 013 303 323 333 311 312 310
330 130 230 030 310 320 300 331 332 333
331 131 231 031 311 321 301 330 332 333
332 132 232 032 312 322 302 331 330 333
333 133 233 033 313 323 303 331 332 330
```

What I intend to do is to generate a list of the unique strings in it, yielding:

```
000 001 002 003 010 011 012 013 020 021 022 023 030 031 032 033
100 101 102 103 110 113 120 123 130 131 132 133
200 201 202 203 210 213 220 223 230 231 232 233
300 301 302 303 310 311 312 313 320 321 322 323 330 331 332 333
```

The code I have to generate that is below, but it is very memory consumptive, because in reality the strings are of length >36 and there are more than 35 million lines in the file, each line with >36*3 columns/entries. Is there a memory-efficient way to do it?
```
#include <iostream>
#include <vector>
#include <fstream>
#include <sstream>
#include <map>
#include <cstdlib> // for EXIT_FAILURE

using namespace std;

int main ( int arg_count, char *arg_vec[] )
{
    if (arg_count != 2)
    {
        cerr << "expected one argument" << endl;
        return EXIT_FAILURE;
    }

    string line;
    ifstream myfile (arg_vec[1]);

    map <string,int> Tags;

    if (myfile.is_open())
    {
        while (getline(myfile,line))
        {
            stringstream ss(line);
            string Elem;

            while (ss >> Elem)
            {
                Tags[Elem] = 1;
            }
        }
        myfile.close();
    }
    else
    {
        cout << "Unable to open file";
    }

    for (map <string,int>::iterator iter = Tags.begin(); iter != Tags.end(); iter++)
    {
        cout << (*iter).first << endl;
    }

    return 0;
}
```
This depends a bit on the characteristics of your dataset. In the worst case, where all strings are unique, you will need either O(n) memory to record your seen-set, or O(n^2) time to re-scan the entire file on each word. However, there are improvements that can be made.

First off, if your dataset only consists of 3-digit integers, then a simple array of 1000 bools will be much more memory efficient than a map.

If you're using general data, then another good approach would be to sort the set, so copies of the same string end up adjacent, then simply remove adjacent words. There are well-researched algorithms for [sorting a dataset too large to fit in memory](http://en.wikipedia.org/wiki/External_sorting). This is most effective when a large percentage of the words in the set are unique, and thus holding a set of seen words in memory is prohibitively expensive.

Incidentally, this can be implemented easily with a shell pipeline, as GNU sort does the external sort for you:

```
tr " " "\n" < testdata | LC_ALL=C sort | uniq
```

Passing LC_ALL=C to sort disables locale processing and multibyte character set support, which can give a significant speed boost here.
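If the data really were just 3-digit codes, the flat "seen" table mentioned above could look like this (a hypothetical helper, not drop-in code for the real >36-character strings):

```cpp
#include <iostream>
#include <iomanip>
#include <sstream>
#include <string>
#include <vector>
#include <cassert>

// Dedup fixed-range numeric codes with a flat "seen" table instead of a map.
// Assumes every token is a 3-digit code, so 1000 slots cover "000".."999".
std::vector<std::string> uniqueCodes(std::istream &in)
{
    std::vector<bool> seen(1000, false);   // one flag per possible code
    std::string tok;
    while (in >> tok)
        seen[std::stoi(tok)] = true;

    std::vector<std::string> out;
    for (int i = 0; i < 1000; ++i)
    {
        if (seen[i])
        {
            std::ostringstream s;
            s << std::setw(3) << std::setfill('0') << i;   // restore leading zeros
            out.push_back(s.str());
        }
    }
    return out;   // already sorted, since we walk the table in order
}
```

The output comes back sorted for free, because the table is walked in numeric order.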
**O(1) memory [ram]:**

If you want to use no memory at all (besides a couple of temp variables) you could simply read 1 entry at a time and add it to the output file if it doesn't already exist in the output file. This would be slower, though, since you'd have to read 1 entry at a time from the output file.

You could insert each entry into the output file in alphabetical order; then you would be able to see whether the entry already exists in O(logn) time via binary search per entry being inserted. To actually insert you need to re-create the file, though, which is O(nlogn). You do this n times, once for each input string, so overall the algorithm would run in O(n^2logn) (which includes lookup to find the insertion pos + insertion) and use O(1) RAM memory.

Since your output file is already in alphabetical order, future lookups would also only be O(logn) via binary search. You could also minimize the re-creation phase of the file by leaving excessive space between entries in the file. When the algorithm is done you could do a vacuum on the file. This would bring it down to O(nlogn).

---

**Another way to reduce memory:**

If it's common that your strings share common prefixes then you can use a [trie](http://en.wikipedia.org/wiki/Trie) and probably use less memory overall, since you mentioned your strings are > length 36. This would still use a lot of memory, though.
Memory Efficient Methods To Find Unique Strings
[ "", "c++", "algorithm", "string", "data-structures", "" ]
I've recently had to switch encoding of webapp I'm working on from `ISO-xx` to `utf8`. Everything went smooth, except properties files. I added `-Dfile.encoding=UTF-8` in `eclipse.ini` and normal files work fine. Properties however show some strange behaviour. If I copy `utf8` encoded properties from Notepad++ and paste them in Eclipse, they show and work fine. When I reopen properties file, I see some Unicode characters instead of proper ones, like: ``` Zur\u00EF\u00BF\u00BDck instead of Zurück ``` but app still works fine. If I start to edit properties, add some special characters and save, they display correctly, however they don't work and all previously working special characters don't work any more. When I compare local version with CVS I can see special characters correctly on remote file and after update I'm at start again: app works, but Eclipse displays Unicode chars. I tried changing file encoding by right clicking it and selecting „Other: UTF8” but it didn't help. It also said: „determined from content: ISO-8859-1” I'm using Java 6 and Jboss Developer based on Eclipse 3.3 I can live with it by editing properties in Notepad++ and pasting them in Eclipse, but I would be grateful if someone could help me with fixing this in Eclipse.
Don't waste your time, you can use [**Resource Bundle plugin**](http://essiembre.github.io/eclipse-rbe/) in **Eclipse** ![Basic Screen Shot](https://camo.githubusercontent.com/35ff0c003e9ec901203eea51e0251d052db0a9d2/687474703a2f2f65737369656d6272652e6769746875622e696f2f65636c697073652d7262652f696d672f73637265656e73686f74732f6d61696e2d73637265656e2e706e67) [Old Sourceforge page](http://sourceforge.net/projects/eclipse-rbe/)
Answer for "pre-Java-9" is below. As of Java 9, properties files are saved and loaded in UTF-8 by default, but falling back to ISO-8859-1 if an invalid UTF-8 byte sequence is detected. See the [Java 9 release notes](https://docs.oracle.com/javase/9/intl/internationalization-enhancements-jdk-9.htm#JSINT-GUID-974CF488-23E8-4963-A322-82006A7A14C7) for details.

---

Properties files are ISO-8859-1 by definition - see the docs for the [Properties](http://java.sun.com/javase/6/docs/api/java/util/Properties.html) class. Spring has a replacement which can load with a specified encoding, using [`PropertiesFactoryBean`](http://static.springframework.org/spring/docs/2.5.x/api/org/springframework/beans/factory/config/PropertiesFactoryBean.html).

EDIT: As Laurence noted in the comments, Java 1.6 introduced overloads for `load` and `store` which take a `Reader`/`Writer`. This means you can create a reader for the file with whatever encoding you want, and pass it to `load`. Unfortunately `FileReader` *still* doesn't let you specify the encoding in the constructor (aargh) so you'll be stuck with chaining `FileInputStream` and `InputStreamReader` together. However, it'll work.

For example, to read a file using UTF-8:

```
Properties properties = new Properties();
InputStream inputStream = new FileInputStream("path/to/file");
try {
    Reader reader = new InputStreamReader(inputStream, "UTF-8");
    try {
        properties.load(reader);
    } finally {
        reader.close();
    }
} finally {
    inputStream.close();
}
```
Java properties UTF-8 encoding in Eclipse
[ "", "java", "eclipse", "encoding", "utf-8", "" ]
Being a pretty experienced PHP developer, and having a fair knowledge of C (I wrote a distributed computing experiment with 16 of my Dad's NEC microcontrollers), I'd like to make the move to Objective-C and Cocoa, eventually aiming to integrate the Mac / iPhone apps with some of my PHP projects.

Going through the "Cocoa Programming For Mac OS X" book by Aaron Hillegass, I'm getting a little frustrated. I'm able to work the examples, but he doesn't explain exactly **why** he's using a class in that way (NSNumber instead of int or double, for example). I'm looking for a good book or books to learn Objective-C first. My question is: what would be a good book? I'm looking at "Programming In Objective-C 2.0" and it looks pretty decent. Where would I go from there? I'm thinking I should then restart my Cocoa book.

Also, are there any resources on the internet that would help in the transition from PHP to Objective-C? I know PHP is a loosely-typed scripting language, so it has its differences. There are some things that just don't make sense with Obj-C and Cocoa: why can't I put integers into an NSMutableArray?

Anyways. Thanks for the help! (I'm only 14, so go easy on me if I made any mistakes in my Q.)
I've just gone through "Programming In Objective-C 2.0" myself, and it's pretty good. I'd recommend it, especially if you've never used C (or if you've forgotten it, like me). However, Apple really has excellent documentation. If you don't mind reading online, I'd start with their [Getting Started with Cocoa](http://developer.apple.com/referencelibrary/GettingStarted/GS_Cocoa/index.html#//apple_ref/doc/uid/TP30001088) page.
*I'm able to work the examples, but he doesn't explain exactly why he's using a class in that way (NSNumber instead of int or double for example)...* *There are some things that just don't make sense with Obj-C and Cocoa, why can't I put integers into an NSMutableArray?* [`NSNumber`](http://developer.apple.com/documentation/Cocoa/Reference/Foundation/Classes/nsnumber_Class/Reference/Reference.html#//apple_ref/doc/uid/20000178-3041) is a much more useful type than a primitive type like `int` or `double`, as it is often used in conjunction with other objects you'll run into as you program in Cocoa. For example, in order to package a number as a value into a resizable array (like an `NSMutableArray`) or an associative array (an instance of [`NSDictionary`](http://developer.apple.com/DOCUMENTATION/Cocoa/Reference/Foundation/Classes/NSDictionary_Class/Reference/Reference.html#//apple_ref/doc/uid/20000140-9552)), you need to turn the number primitive (`int`, `double`, etc.) into a [serializable](http://developer.apple.com/documentation/Cocoa/Conceptual/Archiving/Archiving.html), or archivable object — an `NSNumber`. Primitives can't be serialized, unlike an `NSNumber`, because primitives aren't in the basic set of "Core Foundation" types (`NSNumber`, `NSArray`, `NSString`, etc.) that Apple has worked hard to make available to you. Also, by using `NSNumber` you also get a lot of bonus convenience methods for free: you can quickly convert the number into a string, for example, by simply typing `[myNumber stringValue]`. Or, if you're treating your `NSNumber` as the price of something ("$1.23"), you can apply an [`NSNumberFormatter`](http://developer.apple.com/documentation/Cocoa/Reference/Foundation/Classes/NSNumberFormatter_Class/Reference/Reference.html#//apple_ref/doc/uid/20000202-9197) to make sure that operations on the number give results that have the format that you would expect (e.g. if you add two price values, you would expect to get a currency value in return). 
That's not to say you can't or shouldn't use `int` or `double` variables. But in many cases, you'll find an `NSNumber` is a better option, in that you can write less code and get a lot of functionality for "free".
From PHP to Objective-C
[ "", "php", "objective-c", "cocoa", "transition", "" ]
I want to pipe [edit: real-time text] the output of several subprocesses (sometimes chained, sometimes parallel) to a single terminal/tty window that is not the active python shell (be it an IDE, command-line, or a running script using tkinter). IPython is not an option. I need something that comes with the standard install. Prefer OS-agnostic solution, but needs to work on XP/Vista. I'll post what I've tried already if you want it, but it’s embarrassing.
A good solution in Unix would be named pipes. I know you asked about Windows, but there might be a similar approach in Windows, or this might be helpful for someone else.

On terminal 1:

```
mkfifo /tmp/display_data
myapp >> /tmp/display_data
```

On terminal 2 (bash):

```
tail -f /tmp/display_data
```

**Edit**: *changed terminal 2 command to use "tail -f" instead of an infinite loop.*
You say "pipe" so I assume you're dealing with text output from the subprocesses. A simple solution may be to just write output to files, e.g. in the subprocess:

1. Redirect output to `%TEMP%\output.txt`.
2. On exit, copy `output.txt` to a directory your main process is watching.

In the main process:

1. Every second, examine the directory for new files.
2. When files are found, process and remove them.

You could encode the subprocess name in the output filename so you know how to process it.
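A rough sketch of the watcher side of that file-drop scheme (the `.txt` convention and the `once` flag are illustrative; `once` just makes a single sweep easy to run and test, while the real main process would leave it off and loop forever):

```python
import os
import time

def poll_directory(watch_dir, handle, interval=1.0, once=False):
    """Watch `watch_dir` for dropped .txt files; process then delete each one."""
    while True:
        for name in sorted(os.listdir(watch_dir)):
            if not name.endswith(".txt"):
                continue                # skip half-written temp files, etc.
            path = os.path.join(watch_dir, name)
            with open(path) as f:
                handle(name, f.read()) # e.g. route on the subprocess name
            os.remove(path)            # processed, so it is not re-read
        if once:
            return
        time.sleep(interval)
```

The subprocesses should write to a temporary name and rename into the watched directory only when done, so the watcher never sees a partial file.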
Creating a new terminal/shell window to simply display text
[ "", "python", "shell", "" ]
Should I use PyXML or what's in the standard library?
ElementTree is provided as part of the standard Python libs. ElementTree is pure python, and cElementTree is the faster C implementation:

```
# Try to use the C implementation first, falling back to python
try:
    from xml.etree import cElementTree as ElementTree
except ImportError, e:
    from xml.etree import ElementTree
```

Here's an example usage, where I'm consuming xml from a RESTful web service:

```
def find(*args, **kwargs):
    """Find a book in the collection specified"""

    search_args = [('access_key', api_key),]
    if not is_valid_collection(kwargs['collection']):
        return None
    kwargs.pop('collection')
    for key in kwargs:
        # Only the first keyword is honored
        if kwargs[key]:
            search_args.append(('index1', key))
            search_args.append(('value1', kwargs[key]))
            break

    url = urllib.basejoin(api_url, '%s.xml' % 'books')
    data = urllib.urlencode(search_args)
    req = urllib2.urlopen(url, data)

    rdata = []
    chunk = 'xx'
    while chunk:
        chunk = req.read()
        if chunk:
            rdata.append(chunk)

    tree = ElementTree.fromstring(''.join(rdata))

    results = []
    for i, elem in enumerate(tree.getiterator('BookData')):
        results.append(
            {'isbn': elem.get('isbn'),
             'isbn13': elem.get('isbn13'),
             'title': elem.find('Title').text,
             'author': elem.find('AuthorsText').text,
             'publisher': elem.find('PublisherText').text,}
        )
    return results
```
I always prefer to use the standard library when possible. ElementTree is well known amongst pythonistas, so you should be able to find plenty of examples. Parts of it have also been optimized in C, so it's quite fast. <http://docs.python.org/library/xml.etree.elementtree.html>
How to consume XML from RESTful web services using Django / Python?
[ "", "python", "xml", "django", "rest", "" ]
Consider I have a data grid. I need to find the number of rows and columns in the data grid. How can I do this in C#?
The `DataGrid.Items` property returns a `DataGridItemCollection` representing the `DataGridItems` in the DataGrid. Each `DataGridItem` is representative of a single row in the rendered table. Also, the `DataGridItem` exposes a `Cells` property which represents the no. of tablecells (in other words, the columns) in the rendered table.

```
int rowCount = myGrid.Items.Count;

// Get the no. of columns in the first row.
int colCount = myGrid.Items[0].Cells.Count;
```
DataGrids represent actual DataItems.

```
DataGrid dg = new DataGrid();

int rows = dg.Items.Count;            // Number of Items, i.e. rows
int cols = dg.Items[0].Cells.Count;   // Number of columns for that Item
```
Row and Column count of data grid in C#
[ "", "c#", "datagrid", "" ]
When users upload an image, I save it to the images folder in the htdocs directory, so any user, without logging into the site, can type url/images/"name" and get it. What is the best way to prevent this? The browser simply requests the image location, just as if the user had typed it directly into the address bar.

I was thinking of using a script that serves each image file from a directory after checking the session details. Do you think that would be a good approach? If so, can you suggest a script? I don't want to use a database; I think it would be slow. Or, if a database is the better approach, let me know.

Thanks
You could put it outside the htdocs/ directory, and mod_rewrite the images/ dir to image.php or something. So url/images/test.jpg would translate to image.php?path=test.jpg.

image.php may look something like this:

```
<?php
if($loggedin)
{
    header("Content-Type: image/jpeg");
    echo file_get_contents("../images/".$_GET["path"]);
}
?>
```

Don't forget to sanitize the input! You don't want the user to access arbitrary files.
Instead of using `echo file_get_contents(...)` I would recommend using [fpassthru](http://php.net/fpassthru) to avoid the "out of memory" errors that could occur with large files.
php,image download
[ "", "php", "" ]
I created a program, and it uses the vector.h #include, and iterators, etc... But when I run the program, under certain circumstances (I'm still trying to figure out what those would be) I get an assertion error referring me to line 98 of vector.h. I went to line 98 of vector.h and got this:

```
#if _HAS_ITERATOR_DEBUGGING
        if (this->_Mycont == 0
            || _Myptr < ((_Myvec *)this->_Mycont)->_Myfirst
            || ((_Myvec *)this->_Mycont)->_Mylast <= _Myptr)
        {
            _DEBUG_ERROR("vector iterator not dereferencable");
            _SCL_SECURE_OUT_OF_RANGE;
        }
```

Can somebody please tell me what this means and what in my program is causing this assertion?

NB: Line 98, for the record, is the one that begins "_DEBUG_ERROR("vect..."

NB: This is the code in my program that I BELIEVE triggered the error; I'm not entirely sure, though.

CODE:

```
for(aI = antiviral_data.begin(); aI < antiviral_data.end();)
{
    for(vI = viral_data.begin(); vI < viral_data.end();)
    {
        if((*aI)->x == (*vI)->x && (*aI)->y == (*vI)->y)
        {
            vI = viral_data.erase(vI);
            aI = antiviral_data.erase(aI);
        }
        else
        {
            vI++;
        }
    }

    if((*aI)->x >= maxx || (*aI)->x < 0 || (*aI)->y >= maxy || (*aI)->y < 0)
    {
        aI = antiviral_data.erase(aI);
    }
    else
    {
        aI++;
    }
}
```
The runtime is detecting that you are dereferencing an iterator that is before begin() or after end(). Imagine if you delete the last item in the `antiviral_data` vector in line 7:

```
aI = antiviral_data.erase(aI);
```

`aI` gets set to `antiviral_data.end()`, and when you dereference it in line 14:

```
if((*aI)->x >= maxx ...
```

and also in line 5:

```
if((*aI)->x == (*vI)->x
```

you are dereferencing an out of bounds iterator.

The fix is to check that `aI != antiviral_data.end()` after the erase call, to make sure you haven't hit the end of the vector before you continue on using it.
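The same erase-safe pattern, shown on a plain `vector<int>` rather than the OP's exact types, might look like this:

```cpp
#include <vector>
#include <cassert>

// Erase-safe loop: erase() returns the next valid iterator, and we compare
// against end() on every pass instead of dereferencing blindly.
std::vector<int> removeValue(std::vector<int> v, int bad)
{
    for (std::vector<int>::iterator it = v.begin(); it != v.end(); )
    {
        if (*it == bad)
            it = v.erase(it);   // may return v.end(): never dereference without re-checking
        else
            ++it;
    }
    return v;
}
```

Erasing the last element simply lands `it` on `end()`, which the loop condition catches before any dereference, which is exactly the check the OP's code is missing.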
You really want to look at STL algorithms like `remove_if` instead of doing this stuff manually.
What does this code in "vector" mean? (C++)
[ "", "c++", "vector", "iterator", "assertions", "" ]
I am not a professional programmer, so I can't be sure about this. How many MySQL queries do your scripts send per page, and what is your optimal query number? For example, Stack Overflow's homepage lists questions and shows the authors of those questions. Does Stack Overflow send a MySQL query for each question to get the author's information, or does it send one query, get all the user data, and match it with the questions?
Don't focus on the number of queries. This is not a useful metric. Instead, you need to look at a few other things:

* how many queries are duplicated?
* how many queries have intersecting datasets? or are a subset of another?
* how long do they take to run? have you profiled the common ones to check indices?
* how many are unnecessarily complex? Numerous times I've seen three simpler queries together execute in a tenth of the time of one complex one that returned the same information. By the same token, SQL is powerful, but don't go mad trying to do something in SQL that would be easier and simpler in a loop in PHP.
* how much progressive processing are you doing? If you can't avoid longer queries with large datasets, try to re-arrange the algorithm so that you can process the dataset as it comes from the database. This lets you use an unbuffered query in MySQL and that improves your memory usage. And if you can provide output whilst you're doing this, you can improve your page's perceived speed by providing first output sooner.
* how much can you cache some of this data? Even caching it for a few seconds can help immensely.
I like to keep mine under 8.

Seriously though, that's pretty meaningless. If hypothetically there were a reason for you to have 800 queries in a page, then you could go ahead and do it. You'll probably find that the number of queries per page will simply be dependent on what you're doing, though in normal circumstances I'd be surprised to see over 50 (though these days, it can be hard to realise just how many you're doing if you are abstracting your DB calls away).

**Slow queries matter more**

I used to be frustrated at a certain PHP-based forum software which had 35 queries in a page and ran really slow, but that was a long time ago and I know now that the reason that particular installation ran slow had nothing to do with having 35 queries in a page. For example, only one or two of those queries took most of the time. It just had a couple of really slow queries that were fixed by well-placed indexes.

I think that identifying and fixing slow queries should come before identifying and eliminating unnecessary queries, as it can potentially make a lot more difference. Consider even that three fast queries might be significantly quicker than one slow query - number of queries does not necessarily relate to speed. I have one page (which is actually kind of a test case/diagnostic tool designed to be run only by an admin) which has over 800 queries, but it runs in a matter of seconds. I guess they are all really simple queries.

**Try caching**

There are various ways to cache parts of your application which can really cut down on the number of queries you do, without reducing functionality. Libraries like [memcached](http://www.danga.com/memcached/) make this trivially easy these days and yet run really fast. This can also help improve performance a lot more than reducing the number of queries.
**If queries are really unnecessary, and the performance really is making a difference, then remove/combine them**

Just consider looking for slow queries and optimizing them, or caching their results, first.
What is the optimal MYSQL query number in php script?
[ "", "php", "mysql", "database", "" ]
I'm considering adding an index to an Oracle table, but I'd like to first estimate the size of the index after it has been built (I don't need a precise size, just an estimate). Supposing that I have access to all of the metadata about the table (number of rows, columns, column data types, etc.), that I can execute any arbitrary Oracle SQL query to get additional data about the current state of the table, and that I know what I want the index definition to be, how can I estimate this size?
You can estimate the size of an index by running an `explain plan` on the create index statement: ``` create table t as select rownum r from dual connect by level <= 1000000; explain plan for create index i on t (r); select * from table(dbms_xplan.display(null, null, 'BASIC +NOTE')); PLAN_TABLE_OUTPUT -------------------------------------------------------------------------------------------------------------------------------------------- Plan hash value: 1744693673 --------------------------------------- | Id | Operation | Name | --------------------------------------- | 0 | CREATE INDEX STATEMENT | | | 1 | INDEX BUILD NON UNIQUE| I | | 2 | SORT CREATE INDEX | | | 3 | TABLE ACCESS FULL | T | --------------------------------------- Note ----- - estimated index size: 4194K bytes ``` Look at the "Note" section at the bottom: **estimated index size: 4194K bytes**
You can use these [Oracle Capacity planning and Sizing Spreadsheets](http://www.rampant-books.com/download_sizing_spreadsheets.htm).

For something not quite as full-blown, if you just want back-of-the-envelope [rough estimates for the index](http://forums.oracle.com/forums/thread.jspa?threadID=575619):

> Calculate the average size of each of the columns that make up the index key and sum the columns, plus one rowid, and add 2 bytes for the index row header to get the average row size. Now add just a little to the pctfree value for the index to come up with an overhead factor, maybe 1.125 for pctfree of 10.
>
> number of indexed table rows X avg row len X 1.125
>
> Note: if the index contains nullable columns then every table row may not appear in the index. On a single-column index where 90% of the columns are null, only 10% would go into the index.
>
> Compare the estimate to the tablespace extent allocation method and adjust the final answer if necessary.
>
> Also, a larger overhead factor may be better as the index gets bigger, since the more data indexed, the more branch blocks are necessary to support the index structure, and the calculation really just figures for leaf blocks.
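That rule of thumb is just arithmetic, so it is easy to script. The sketch below (in Python, with made-up inputs) applies it; the 10-byte rowid length and the function name are my assumptions for illustration, not anything from the quoted text:

```python
def estimate_index_size_bytes(num_rows, avg_key_len, rowid_len=10,
                              row_header=2, overhead=1.125):
    """Rough leaf-block estimate: rows x avg row length x overhead factor.

    avg_key_len  - summed average length (bytes) of the indexed columns
    rowid_len    - assumed rowid size; 10 bytes is a common figure
    row_header   - 2 bytes per index row, per the rule of thumb above
    overhead     - fudge factor covering pctfree (1.125 for pctfree 10)
    """
    avg_row_len = avg_key_len + rowid_len + row_header
    return int(num_rows * avg_row_len * overhead)

# Hypothetical table: 1,000,000 rows, indexed columns averaging 8 bytes.
size = estimate_index_size_bytes(1_000_000, 8)
```

With those inputs the average row length is 8 + 10 + 2 = 20 bytes, giving roughly 22.5 MB before adjusting for nullable columns or extent allocation.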
How can I estimate the size of an Oracle index?
[ "", "sql", "oracle", "" ]
If I use `OnSelectedIndexChanged` like this:

```
<asp:DropDownList ID="ddl1" AutoPostBack="true" OnSelectedIndexChanged="Test_SelectedIndexChanged" runat="server"></asp:DropDownList>
```

the UpdatePanel and UpdateProgress work correctly, meaning my little gif shows up, etc. However, as soon as I change this to call JavaScript code, like this:

```
<asp:DropDownList ID="ddl1" AutoPostBack="true" onchange="selectValues()" runat="server"></asp:DropDownList>
```

it stops working: the progress doesn't show up.

Now, before anyone asks why I do this: I need to call some scripting from the managed code, and it has to do with Silverlight. Does anyone have a solution to this problem?
If your UpdatePanel doesn't refresh, the UpdateProgress control will not operate. If you try to update something without triggering the UpdatePanel's update (i.e. using your own JavaScript), the UpdateProgress will not work.
I think your JavaScript is returning false, so the server-side `SelectedIndexChanged` event of the dropdown does not fire, because the page does not post back.
UpdatePanel and UpdateProgress not working
[ "", "asp.net", "javascript", "silverlight", "" ]