Columns: Prompt (string, lengths 10–31k), Chosen (string, lengths 3–29.4k), Rejected (string, lengths 3–51.1k), Title (string, lengths 9–150), Tags (list, lengths 3–7)
How can I be sure that my result set will have `a` first and `b` second? It would help me to solve a tricky ordering problem. Here is a simplified example of what I'm doing: ``` SELECT a FROM A LIMIT 1 UNION SELECT b FROM B LIMIT 1; ```
``` SELECT col FROM ( SELECT a col, 0 ordinal FROM A LIMIT 1 UNION ALL SELECT b, 1 FROM B LIMIT 1 ) t ORDER BY ordinal ```
I don't think order is guaranteed, at least not across all DBMS. What I've done in the past to control the ordering in UNIONs is: ``` (SELECT a, 0 AS Foo FROM A LIMIT 1) UNION (SELECT b, 1 AS Foo FROM B LIMIT 1) ORDER BY Foo ```
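Both answers rely on the same idea: tag each UNION branch with an ordinal column and sort on it. As a quick sanity check of the pattern, here is a sketch using Python's built-in sqlite3 (the table and column names mirror the question's simplified example; the data is made up):

```python
import sqlite3

# Throwaway in-memory schema standing in for the question's A and B tables.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE A (a TEXT);
    CREATE TABLE B (b TEXT);
    INSERT INTO A VALUES ('from_A');
    INSERT INTO B VALUES ('from_B');
""")

# Tag each branch with an ordinal, then sort the combined result on it.
rows = con.execute("""
    SELECT col FROM (
        SELECT a AS col, 0 AS ordinal FROM A
        UNION ALL
        SELECT b, 1 FROM B
    )
    ORDER BY ordinal
""").fetchall()
print(rows)  # [('from_A',), ('from_B',)]
```

Without the `ORDER BY ordinal`, the engine is free to return the branches in any order; with it, `a` is guaranteed to come before `b`.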
How can I order entries in a UNION without ORDER BY?
[ "", "sql", "sql-order-by", "union", "" ]
I have a form on my page and am dynamically adding controls to the form with Javascript/JQuery. At some point I need to get all the values in the form on the client side as a collection or a query string. I don't want to submit the form because I want to pass the form values along with other information that I have on the client to a back-end WCF/Ajax service method. So I'm trying to figure out how to capture all the values in the same type of collection that the form would normally send to the server if the form was actually submitted. I suspect there is an easy way to capture this, but I'm stumped.
The jQuery form plugin offers an easy way to iterate over your form elements and put them in a query string. It might also be useful for whatever else you need to do with these values. ``` var queryString = $('#myFormId').formSerialize(); ``` From <http://malsup.com/jquery/form> Or using plain jQuery: ``` var queryString = $('#myFormId').serialize(); ```
If your form tag is like ``` <form action="" method="post" id="BookPackageForm"> ``` then fetch the form element using the document's forms object: ``` var formEl = document.forms.BookPackageForm; ``` Get the data from the form using a FormData object: ``` var formData = new FormData(formEl); ``` Then read the value of a field from the form data object: ``` var name = formData.get('name'); ```
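Either way, what gets produced is just the form's name/value pairs URL-encoded into a query string. As a language-agnostic sketch of that encoding step (shown in Python here; the field names are invented for the demo):

```python
from urllib.parse import urlencode

# Pretend these name/value pairs were collected from the form's controls.
fields = [("name", "Jane Doe"), ("qty", "2"), ("color", "blue")]

# urlencode percent-escapes values and joins pairs with "&",
# which is the same wire format a submitted form would send.
query_string = urlencode(fields)
print(query_string)  # name=Jane+Doe&qty=2&color=blue
```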
How can I get all a form's values that would be submitted without submitting
[ "", "javascript", "jquery", "html", "ajax", "forms", "" ]
Is it possible to change an HTML tag attribute value from an RJS template? I know that there is a `page.replace_html` method, but it is not very useful in my case, since I have lengthy values for various attributes (such as the alt and title of an image). What I want is to change the src attribute of an img tag in RJS. Is that possible at all? Thank you.
**EDIT:** My first attempt didn't work, but this one does. ``` update_page do |page| page['image_id']['src'] = new_image_url end ```
Slight modification to Can's answer. As suggested, ``` update_page do |page| page['image_id']['src'] = new_image_url end ``` translates to the JS: ``` $('image_id').src = new_image_url ``` This will work for some attributes that have direct JS DOM variable access, though many don't. Luckily RJS is pretty good at rewriting JS method calls: ``` update_page do |page| page['image_id'].set_attribute('attrib', new_attrib_val) end ``` translates to the JS: ``` $('image_id').setAttribute('attrib', new_attrib_val) ``` and you should be good to go. --- Small update: you may want to use write_attribute instead if you want IE compatibility. --- Another small update: in the above, [:src] and :attrib would probably be better style if these are static.
How to change html tag attribute value from RJS template?
[ "", "javascript", "ruby-on-rails", "ruby", "rjs", "" ]
Any idea why the piece of code below does not add the script element to the DOM? ``` var code = "<script></script>"; $("#someElement").append(code); ```
I've seen issues where some browsers don't respect some changes when you do them directly (by which I mean creating the HTML from text like you're trying with the script tag), but when you do them with built-in commands things go better. Try this: ``` var script = document.createElement( 'script' ); script.type = 'text/javascript'; script.src = url; $("#someElement").append( script ); ``` From: [JSON for jQuery](http://mg.to/2006/01/25/json-for-jquery)
**The Good News is:** *It's 100% working.* Just add something inside the script tag, such as `alert('voila!');`. The right question you might want to ask is perhaps, ***"Why didn't I see it in the DOM?"***. Karl Swedberg has made a nice explanation in reply to a visitor's comment on the [jQuery API site](http://api.jquery.com/append/#comment-61121802). I don't want to repeat all his words; you can read it directly there *(I found it hard to navigate through the comments there)*. > *All of jQuery's insertion methods use a domManip function internally to clean/process elements before and after they are inserted into the DOM. One of the things the domManip function does is pull out any script elements about to be inserted and run them through an "evalScript routine" rather than inject them with the rest of the DOM fragment. It inserts the scripts separately, evaluates them, and then removes them from the DOM.* > > *I believe that one of the reasons jQuery does this is to avoid "Permission Denied" errors that can occur in Internet Explorer when inserting scripts under certain circumstances. It also avoids repeatedly inserting/evaluating the same script (which could potentially cause problems) if it is within a containing element that you are inserting and then moving around the DOM.* Next, I'll summarize the bad news of using the `.append()` function to add a script. --- **And The Bad News is..** *You can't debug your code.* I'm not joking: even if you add the `debugger;` keyword on the line you want to set as a breakpoint, you'll end up getting only the call stack of the object without seeing the breakpoint in the source code *(not to mention that this keyword only works in WebKit browsers; all other major browsers seem to ignore it)*. If you fully understand what your code does, then this will be a minor drawback. 
But if you don't, you will end up adding the `debugger;` keyword all over the place just to find out what's wrong with your (or my) code. Anyway, there's an alternative: don't forget that JavaScript can natively manipulate the HTML DOM. --- **Workaround.** *Use JavaScript (not jQuery) to manipulate the HTML DOM.* If you don't want to lose debugging capability, then you can use native JavaScript HTML DOM manipulation. Consider this example: ``` var script = document.createElement("script"); script.type = "text/javascript"; script.src = "path/to/your/javascript.js"; // use this for a linked script script.text = "alert('voila!');"; // use this for an inline script document.body.appendChild(script); ``` There it is, just like the old days, isn't it? And don't forget to clean things up, whether in the DOM or in memory, for every object that is referenced and no longer needed, to prevent memory leaks. You can consider this code to clean things up: ``` document.body.removeChild(document.body.lastChild); delete UnusedReferencedObjects; // replace UnusedReferencedObjects with any object you created in the script you loaded. ``` The drawback of this workaround is that you may accidentally add a duplicate script, and that's bad. From here you can roughly mimic the `.append()` function by adding an object check before adding, and removing the script from the DOM right after it was added. Consider this example: ``` function AddScript(url, object){ if (object != null){ // add the script, loading it from the given url var script = document.createElement("script"); script.type = "text/javascript"; script.src = url; document.body.appendChild(script); // remove it from the DOM again document.body.removeChild(document.body.lastChild); return true; } else { return false; }; }; function DeleteObject(UnusedReferencedObjects) { delete UnusedReferencedObjects; } ``` This way, you can add a script with debugging capability while staying safe from script duplication. 
This is just a prototype; you can expand it into whatever you want it to be. I have been using this approach and am quite satisfied with it. Sure enough, I will never use jQuery's `.append()` to add a script.
Can't append <script> element
[ "", "javascript", "jquery", "" ]
Can I change the effective process name of a Python script? I want to show a different name instead of the real name of the process when I get the system process list. In C I can set ``` strcpy(argv[0],"othername"); ``` But in Python ``` argv[0] = "othername" ``` doesn't seem to work. When I get the process list (with `ps ax` on my Linux box) the real name doesn't change. I would prefer a portable solution (or else one solution for POSIX and another for Windows environments), if it exists.
Simply put, there's no portable way. You'll have to test for the system and use the preferred method for that system. Further, I'm confused about what you mean by process names on Windows. Do you mean a service name? I presume so, because nothing else really makes any sense (at least to my non-Windows using brain). If so, you need to use [Tim Golden's WMI interface](http://timgolden.me.uk/python/wmi) and call the .Change method on the service... at least according to his [tutorial](http://timgolden.me.uk/python/wmi/tutorial.html). For Linux none of the methods I found worked except for [this poorly packaged module](http://code.google.com/p/procname/) that sets argv[0] for you. I don't even know if this will work on BSD variants (which does have a setproctitle system call). I'm pretty sure argv[0] won't work on Solaris.
I've recently written a Python module to change the process title in a portable way: check <https://github.com/dvarrazzo/py-setproctitle> It is a wrapper around the code used by PostgreSQL to perform the title change. It is currently tested against Linux and Mac OS X; Windows (with limited functionality) and BSD ports are on the way. **Edit:** as of July 2010, the module works with BSD and with limited functionality on Windows, and has been ported to Python 3.x.
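For reference, a minimal usage sketch of that module. Note that `setproctitle` is a third-party package (`pip install setproctitle`), not part of the stdlib, so the import is guarded here for environments where it is absent:

```python
try:
    from setproctitle import setproctitle  # third-party; not in the stdlib
except ImportError:
    setproctitle = None

def rename_process(title):
    """Change the name shown by `ps`; return False where unsupported."""
    if setproctitle is None:
        return False
    setproctitle(title)
    return True

rename_process("othername")
```

After a successful call, `ps ax` shows "othername" instead of the interpreter's command line.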
Is there a way to change effective process name in Python?
[ "", "python", "process", "arguments", "hide", "ps", "" ]
I want to find a better way of populating a generic list from a CheckedListBox in C#. I can do the following easily enough: ``` List<string> selectedFields = new List<string>(); foreach (object a in chkDFMFieldList.CheckedItems) { selectedFields.Add(a.ToString()); } ``` There must be a more elegant method to cast the CheckedItems collection to my list.
**Try this (using System.Linq):** `OfType()` is an extension method, so you need to use `System.Linq` ``` List<string> selectedFields = new List<string>(); selectedFields.AddRange(chkDFMFieldList.CheckedItems.OfType<string>()); ``` Or just do it in one line: ``` List<string> selectedFields = chkDFMFieldList.CheckedItems.OfType<string>().ToList(); ```
This is not exactly the answer to your requirement, but a more general answer. You could do it in a variety of ways: 1) ``` T[] items = new T[lb.Items.Count]; lb.Items.CopyTo(items, 0); var lst = new List<T>(items); ``` 2) looping and adding using `foreach` as you mentioned. 3) using Linq ``` var lst = lb.Items.Cast<T>().ToList(); ``` 4) or ``` var lst = lb.Items.OfType<T>().ToList(); ``` When I did some performance testing like below, I found the copy-to-array method the fastest and the LINQ methods slower. Of course, in real-world scenarios this wouldn't matter. I prefer the 3rd method (Linq) for readability. ``` DateTime d = DateTime.Now; for (int i = 0; i < 10000; i++) { Action(); } MessageBox.Show((DateTime.Now - d).TotalMilliseconds.ToString()); ``` For 10000 iterations, run multiple times with about 300 items in the list box: 1) ~100ms 2) ~150ms 3) ~250ms 4) ~260ms
Casting an Item Collection from a listbox to a generic list
[ "", "c#", "list", "casting", "listbox", "" ]
Is there a way to read from a USB barcode reader while ignoring the keyboard and not knowing the PID or VID of the USB scanner? I know that there is a way of differentiating between USB scanner input and keyboard input by using the VID and/or PID of the USB scanner; this was done using code from <http://nicholas.piasecki.name/blog/2009/02/distinguishing-barcode-scanners-from-the-keyboard-in-winforms/> But is there another solution to differentiate between keyboard and USB scanner without putting the scanner's VID or PID in a configuration file (or source code)? The reason for not wanting to put various VIDs or PIDs in a configuration file is that the application being developed will be deployed on numerous laptops that have arbitrary types of scanners attached to them. Also, I don't want to configure the scanners with a starting and/or ending sequence that would be outputted, since the scanner is being used by other software on the same machine as well and I don't want to have to change the code of that other software. I don't want to program the barcode reader to be in serial mode either, for the same reason mentioned previously.
There is a way to differentiate between keyboard and USB barcode reader --- Starting with the following assumptions: 1. the code scanned by the barcode reader will be at least 4 characters long 2. the code scanned by the barcode reader ends with an "ENTER" keypress 3. it takes less than 50 ms to transmit the entire barcode This simple form using VS2005 VB contains: 1. TextBox1 2. TextBox2 3. TextBox3 4. Button1 5. Timer1 "the time interval set to 50ms" --- ``` Public Class Form1 Dim BarcodeStr As String = "" Dim IsBarcodeTaken As Boolean = False Dim Str As String = "" Dim str3 As String = "" Private Sub Form1_KeyDown(ByVal sender As Object, ByVal e As System.Windows.Forms.KeyEventArgs) Handles Me.KeyDown If Timer1.Enabled = False Then Str = TextBox1.Text str3 = TextBox3.Text End If End Sub Private Sub Form1_KeyPress(ByVal sender As Object, ByVal e As System.Windows.Forms.KeyPressEventArgs) Handles Me.KeyPress If Timer1.Enabled = False Then Timer1.Enabled = True End If BarcodeStr = BarcodeStr & e.KeyChar If Asc(e.KeyChar) = 13 And Len(BarcodeStr) >= 4 Then IsBarcodeTaken = True TextBox2.Text = BarcodeStr End If End Sub Private Sub Form1_KeyUp(ByVal sender As Object, ByVal e As System.Windows.Forms.KeyEventArgs) Handles Me.KeyUp If IsBarcodeTaken = True Then TextBox1.Text = Str TextBox1.Select(Len(TextBox1.Text), 0) Str = "" TextBox3.Text = str3 TextBox3.Select(Len(TextBox3.Text), 0) str3 = "" End If End Sub Private Sub Timer1_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer1.Tick BarcodeStr = "" IsBarcodeTaken = False Timer1.Enabled = False End Sub Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click TextBox2.Text = "" End Sub End Class ```
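The timing heuristic in this answer is independent of VB or WinForms. A sketch of the same idea in Python, with an injectable clock so the behavior can be checked deterministically (the 50 ms window and 4-character minimum are the answer's assumptions, not fixed rules):

```python
import time

class BarcodeBuffer:
    """Accept keystrokes one at a time; report a barcode only when at
    least `min_len` characters arrive within `window` seconds and end
    with Enter -- otherwise treat the input as ordinary typing."""

    def __init__(self, window=0.05, min_len=4, clock=time.monotonic):
        self.window, self.min_len, self.clock = window, min_len, clock
        self.chars, self.start = [], None

    def feed(self, ch):
        now = self.clock()
        # Too long since the burst started: restart, this is a human typing.
        if self.start is None or now - self.start > self.window:
            self.chars, self.start = [], now
        if ch == "\r":  # Enter terminates a scan
            code = "".join(self.chars)
            self.chars, self.start = [], None
            return code if len(code) >= self.min_len else None
        self.chars.append(ch)
        return None
```

A fast burst like `12345` followed by Enter inside the window yields the code; the same characters typed slowly yield nothing.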
Well, I am using a solution pretty much like [the one from Ehab](https://stackoverflow.com/a/1012262) - I just cleaned up the code a little bit for my application. I am using a custom class for my edit controls (it does some other things too) - but these are the important parts: ``` public class ScannerTextBox : TextBox { public bool BarcodeOnly { get; set; } Timer timer; private void InitializeComponent() { this.SuspendLayout(); this.ResumeLayout(false); } void timer_Tick(object sender, EventArgs e) { if (BarcodeOnly == true) { Text = ""; } timer.Enabled = false; } protected override void OnKeyPress(KeyPressEventArgs e) { base.OnKeyPress(e); if (BarcodeOnly == true) { if (timer == null) { timer = new Timer(); timer.Interval = 200; timer.Tick += new EventHandler(timer_Tick); timer.Enabled = false; } timer.Enabled = true; } if (e.KeyChar == '\r') { if (BarcodeOnly == true && timer != null) { timer.Enabled = false; } } } } ```
Reading a barcode using a USB barcode scanner along with ignoring keyboard data input while scanner product id and vendor id are not known
[ "", "c#", "usb", "barcode-scanner", "" ]
I am executing an exe through my Java program. The path is hardcoded in Java. I have packaged the exe in the jar, but I am stuck because the path name is hardcoded in the Java file, so I am not able to execute my jar as a stand-alone program. Any hints for packaging such a jar, i.e. having an exe inside and being able to run it as a stand-alone program?
This will extract the `.exe` to a local file on the local disk. The file will be deleted when the Java program exits. ``` import java.io.Closeable; import java.io.File; import java.io.FileNotFoundException; import java.io.FileOutputStream; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.net.URI; import java.net.URISyntaxException; import java.net.URL; import java.security.CodeSource; import java.security.ProtectionDomain; import java.util.zip.ZipEntry; import java.util.zip.ZipException; import java.util.zip.ZipFile; public class Main { public static void main(final String[] args) throws URISyntaxException, ZipException, IOException { final URI uri; final URI exe; uri = getJarURI(); exe = getFile(uri, "Main.class"); System.out.println(exe); } private static URI getJarURI() throws URISyntaxException { final ProtectionDomain domain; final CodeSource source; final URL url; final URI uri; domain = Main.class.getProtectionDomain(); source = domain.getCodeSource(); url = source.getLocation(); uri = url.toURI(); return (uri); } private static URI getFile(final URI where, final String fileName) throws ZipException, IOException { final File location; final URI fileURI; location = new File(where); // not in a JAR, just return the path on disk if(location.isDirectory()) { fileURI = URI.create(where.toString() + fileName); } else { final ZipFile zipFile; zipFile = new ZipFile(location); try { fileURI = extract(zipFile, fileName); } finally { zipFile.close(); } } return (fileURI); } private static URI extract(final ZipFile zipFile, final String fileName) throws IOException { final File tempFile; final ZipEntry entry; final InputStream zipStream; OutputStream fileStream; tempFile = File.createTempFile(fileName, Long.toString(System.currentTimeMillis())); tempFile.deleteOnExit(); entry = zipFile.getEntry(fileName); if(entry == null) { throw new FileNotFoundException("cannot find file: " + fileName + " in archive: " + 
zipFile.getName()); } zipStream = zipFile.getInputStream(entry); fileStream = null; try { final byte[] buf; int i; fileStream = new FileOutputStream(tempFile); buf = new byte[1024]; i = 0; while((i = zipStream.read(buf)) != -1) { fileStream.write(buf, 0, i); } } finally { close(zipStream); close(fileStream); } return (tempFile.toURI()); } private static void close(final Closeable stream) { if(stream != null) { try { stream.close(); } catch(final IOException ex) { ex.printStackTrace(); } } } } ```
The operating system doesn't care or know about .jar files, so you'll have to unpack the `.exe` file to some temporary location before you execute it.
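The extract-to-temp-file idea above can be condensed into a short Python sketch for illustration (the archive and member names are made up); it mirrors what `File.createTempFile` plus `deleteOnExit()` do in the Java answer:

```python
import atexit
import os
import tempfile
import zipfile

def extract_member(archive_path, member):
    """Copy one archive member out to a temp file, schedule its deletion
    for interpreter exit, and return the temp file's path."""
    with zipfile.ZipFile(archive_path) as zf:
        data = zf.read(member)
    fd, path = tempfile.mkstemp(suffix="-" + os.path.basename(member))
    with os.fdopen(fd, "wb") as out:
        out.write(data)
    atexit.register(os.unlink, path)  # roughly deleteOnExit()
    return path

# Demo with a throwaway archive standing in for the jar.
with zipfile.ZipFile("demo.jar", "w") as zf:
    zf.writestr("tool.exe", b"MZ...")
path = extract_member("demo.jar", "tool.exe")
```

Once extracted, the file on disk can be launched like any other executable (e.g. via `subprocess.run([path])` on the appropriate platform).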
Run exe which is packaged inside jar file
[ "", "java", "jar", "exe", "" ]
First of all, I do know about the Fisher-Yates shuffle. But let's say, for argument's sake, that I want to allow the user to pick a sort option from a dropdown list. This list would include a "Random" option. Based on the result of their selection, I just want to substitute in an IComparer instance for my sort. What would the IComparer look like? Google brings up a plethora of flawed results that all take this form: ``` public class NaiveRandomizer<T> : IComparer<T> { private static Random rand = new Random(); public int Compare(T x, T y) { return (x.Equals(y))?0:rand.Next(-1, 2); } } ``` However, that implementation is biased and will even throw an exception in some circumstances. The bias can be demonstrated with the following code: ``` void Test() { Console.WriteLine("NaiveRandomizer Test:"); var data = new List<int>() {1,2,3}; var sortCounts = new Dictionary<string, int>(6); var randomly = new NaiveRandomizer<int>(); for (int i=0;i<10000;i++) { //always start with same list, in _the same order_. var dataCopy = new List<int>(data); dataCopy.Sort(randomly); var key = WriteList(dataCopy); if (sortCounts.ContainsKey(key)) sortCounts[key]++; else sortCounts.Add(key, 1); } foreach (KeyValuePair<string, int> item in sortCounts) Console.WriteLine(item.Key + "\t" + item.Value); } string WriteList<T>(List<T> list) { string delim = ""; string result = ""; foreach(T item in list) { result += delim + item.ToString(); delim = ", "; } return result; } ``` So how could you implement a random `IComparer<T>` that solves those issues? It is allowed to require each call to `.Sort()` to use a separate IComparer instance, as I don't see any other way to do this: items *must* be compared using some other, truly random value, but that value *must* also be consistent for an item within a given sort operation. 
I have a start [here](https://stackoverflow.com/questions/554587/is-there-an-easy-way-to-randomize-a-list-in-vb-net/554652#554652), but it was posted in haste, is *extremely* slow, and doesn't even return all possible sorts (testing shows that it does at least eliminate bias, if you don't count the missing options). I don't expect O(n) performance like Fisher-Yates, but I do want something reasonable (n log n for a small-ish n), and I do expect it to show all possible sorts. Unfortunately, that link is the current accepted answer for its question and so I'm hoping to be able to replace it with something a little better. If nothing else, I want this to be a magnet for all those Google queries looking for an IComparable solution - so that they'll end up here instead of somewhere else telling them to use the incorrect version.
One suggestion I got elsewhere was to create a separate IArranger interface that describes a single operation to **Arrange** a collection. This can work where IComparer/IComparable cannot because it operates on an entire collection, instead of individual items. It might look something like this: ``` public interface IArranger<T> { IEnumerable<T> Arrange(IEnumerable<T> items); } ``` Then I could implement a `Shuffle` from the IArranger interface using a proper Fisher-Yates algorithm, and also have implementations that wrap the additional `IEnumerable.Sort()/IComparable/IComparer` varieties that I care about. That might look something like this: ``` public class ComparerArranger<T> : IArranger<T> { private IComparer<T> comparer; public ComparerArranger(IComparer<T> comparer) { this.comparer = comparer; } public IEnumerable<T> Arrange(IEnumerable<T> items) { return items.OrderBy(i => i, comparer); } } ``` or ``` //uses the default Comparer for the type (Comparer<T>.Default) public class TypeArranger<T> : IArranger<T> { public IEnumerable<T> Arrange(IEnumerable<T> items) { return items.OrderBy(i => i); } } ``` or ``` public class ShuffleArranger<T> : IArranger<T> { //naive implementation for demonstration // if I ever develop this more completely I would try to // avoid needing to call .ToArray() in here // and use a better prng private Random r = new Random(); public IEnumerable<T> Arrange(IEnumerable<T> items) { var values = items.ToArray(); //valid Fisher-Yates shuffle on the values array for (int i = values.Length; i > 1; i--) { int j = r.Next(i); T tmp = values[j]; values[j] = values[i - 1]; values[i - 1] = tmp; } foreach (var item in values) yield return item; } } ``` For a final step, I add support for this to any IEnumerable via an extension method. 
Then you still get simple run-time algorithm swapping, you have a better implementation of the shuffle algorithm, and the code to use it feels natural: ``` public static IEnumerable<T> Arrange<T>(this IEnumerable<T> items, IArranger<T> arranger) { return arranger.Arrange(items); } ```
I was somewhat surprised in [this thread](https://stackoverflow.com/questions/554587/is-there-an-easy-way-to-randomize-a-list-in-vb-net/) by how many wrong answers were posted. Just for the sake of others who come up with a solution similar to the one posted by the OP, the following code *looks* correct: ``` int[] nums = new int[1000]; for (int i = 0; i < nums.Length; i++) { nums[i] = i; } Random r = new Random(); Array.Sort<int>(nums, (x, y) => r.Next(-1, 2)); foreach(var num in nums) { Console.Write("{0} ", num); } ``` However, the code will throw an exception occasionally, but not always. That's what makes it fun to debug :) If you run it enough times, or execute the sort procedure in a loop 50 or so times, you'll get an error stating: `IComparer (or the IComparable methods it relies upon) did not return zero when Array.Sort called x.CompareTo(x). x: '0' x's type: 'Int32' The IComparer: ''.` In other words, the quicksort compared some number `x` to itself and got a non-zero result. The obvious solution to the code would be to write: ``` Array.Sort<int>(nums, (x, y) => { if (x == y) return 0; else return r.NextDouble() < 0.5 ? 1 : -1; }); ``` But even this doesn't work, because there are occasions where .NET compares 3 numbers against one another with inconsistent results, such as A > B, B > C, and C > A (oops!). No matter if you use a Guid, GetHashCode, or any other randomly generated input, a solution like the one shown above is still wrong. --- With that being said, Fisher-Yates is the standard way of shuffling arrays, so there's no real reason to use IComparer in the first place. Fisher-Yates is O(n) whereas any implementation using IComparer uses a quicksort behind the scenes, which has a time complexity of O(n log n). There's just no good reason not to use the well-known, efficient, standard algorithm to solve this kind of problem. However, if you really insist on using an IComparer and a rand, then apply your random data *before* you sort. 
This requires a projection of the data onto another object so you don't lose your random data: ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace ConsoleApplication1 { class Pair<T, U> { public T Item1 { get; private set; } public U Item2 { get; private set; } public Pair(T item1, U item2) { this.Item1 = item1; this.Item2 = item2; } } class Program { static void Main(string[] args) { Pair<int, double>[] nums = new Pair<int, double>[1000]; Random r = new Random(); for (int i = 0; i < nums.Length; i++) { nums[i] = new Pair<int, double>(i, r.NextDouble()); } Array.Sort<Pair<int, double>>(nums, (x, y) => x.Item2.CompareTo(y.Item2)); foreach (var item in nums) { Console.Write("{0} ", item.Item1); } Console.ReadKey(true); } } } ``` Or get LINQy with your bad self: ``` Random r = new Random(); var nums = from x in Enumerable.Range(0, 1000) orderby r.NextDouble() select x; ```
Shuffle using IComparer
[ "", "c#", ".net", "shuffle", "icomparer", "" ]
Environment.SpecialFolder.CommonApplicationData returns "C:\Documents and Settings\All Users\Application Data" under XP, which is writeable for all users, but returns "C:\ProgramData\[MyApp]\" under Vista, which is not writeable for regular users. Now why do I want the common folder? Because an admin will install my software's database on XP (or Vista) under the admin account, but when a user logs back in and runs my software, the current account will tell my software to look at a different place than where the database was installed: the user directory in Documents and Settings... so AllUsers (the common folder) is common to the admin and regular non-admin users. This drives me crazy: where do I put my database so it works under both Vista and XP? Thanks, Jonathan
Are you using an installer that the admin runs? If so, you should be able to use the installer settings, plus a proper assembly/executable manifest, to allow the application (regardless of who is running it) the proper permissions to update/modify files in the ProgramData folder specific to the application. I run a similar scenario (application installs to Program Files, common data repository installs to ProgramData, user config/save files store to C:\Users) and the manifest and the settings in the [WiX installer](http://wix.sourceforge.net/) allowed this to work.
User-specific settings should be stored in the User's application data folder (`Environment.SpecialFolder.ApplicationData`) , so that if multiple users log in to the machine they each get their own settings. Create a default user-settings db for the program in the program's main folder at install time and copy it to a user's folder the first time that user runs the program (you'll know it's the first time because the db file won't exist yet). If you have settings that should apply to all users on the machine, you want those settings to be set by an administrator, and you want them protected from casual change. Storing these in a place where normal users don't have write access is a good thing.
Storing User Settings
[ "", "c#", ".net", "windows-vista", "windows-xp", "" ]
When we want new objects of some class, and we already have one, should we use clone() or instantiate using the new keyword? Yes, clone() will have all the members copied as well, but I am curious about the case where this doesn't matter. ``` ClassType foo = new ClassType(); ClassType bar = new ClassType(); // here, would foo.clone() be the better choice? ``` I need this in one of my current tasks, as I am working with generic lists (`List<T>`), which get huge at times and are passed through n tiers before actually being used for the UI (in between they also get converted to an array and back to a list for serialization purposes). So I was just thinking about improving the performance.
It sounds like you are doing premature optimisation. Profile, and you will usually be surprised where the time is going. I doubt it will be here, unless your "clone" is using serialization. That said; `ICloneable.Clone()` is worth avoiding simply because it is poorly defined (deep vs shallow) and rarely supported. Behind the scenes, most `Clone()` implementations either just do a `new(); // now copy properties`, or they use serialization.
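The deep-vs-shallow ambiguity this answer mentions is easy to demonstrate outside .NET as well; a small Python illustration (the class and its member are invented for the demo):

```python
import copy

class ClassType:
    def __init__(self):
        self.members = [1, 2, 3]  # a mutable member shared or copied below

foo = ClassType()
shallow = copy.copy(foo)      # new object, but `members` shared by reference
deep = copy.deepcopy(foo)     # new object with its own copy of `members`

assert shallow.members is foo.members      # shallow: same inner list
assert deep.members is not foo.members     # deep: a distinct inner list
assert deep.members == foo.members         # ...with equal contents
```

A `Clone()` whose documentation doesn't say which of these two it performs is exactly the poorly defined contract the answer warns about.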
`Clone()` will do a `new` anyhow, so why bother? Calling `Clone()` will result in a new reference anyway and on top of that all members are copied, so it will definitely be slower.
What's better in performance - clone() or instantiation with new keyword?
[ "", "c#", ".net", "" ]
Using C#, I need to build a connection string from a few AppSettings. If I do this: ``` Connection = string.Format("Data Source={0};Initial Catalog={1);User Id={2};Password={3};", ConfigurationManager.AppSettings.Get("CartServer"), ConfigurationManager.AppSettings.Get("CartDatabase"), ConfigurationManager.AppSettings.Get("CartUserName"), ConfigurationManager.AppSettings.Get("CartPassword")); ``` I get an invalid format string exception. I narrowed it down to the "Password=" part of the format string (ie, "Passwork=" works). There's an easy enough work-around: ``` Connection = string.Format("Data Source={0};Initial Catalog={1);User Id={2};{3}={4};", ConfigurationManager.AppSettings.Get("CartServer"), ConfigurationManager.AppSettings.Get("CartDatabase"), ConfigurationManager.AppSettings.Get("CartUserName"), "Password",ConfigurationManager.AppSettings.Get("CartPassword")); // Lame!!! ``` But what's the real story with the "Password"? I checked MSDN and a few other sites but came up empty. Oh, if it matters, this is a WCF service.
Your problem is the {1) in the format string; it should be {1} (you have closed the brace with a parenthesis) This is causing your FormatException because your format string is now invalid. Why your second entry works is beyond me though. **Edit:** I agree with the other Richard that you should consider using a connection string builder object.
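The failure mode generalizes to any brace-delimited formatter: a placeholder opened with `{` but never closed with `}` is a malformed format string, not a literal. A Python illustration of the same mistake (the connection-string text is just sample data):

```python
# "{1)" opens a placeholder that is closed with ")" instead of "}".
template = "Data Source={0};Initial Catalog={1)"

try:
    template.format("server", "db")
    malformed = False
except ValueError:
    malformed = True  # str.format rejects the unbalanced placeholder
```

The fact that the error surfaces at the `Password=` part in the .NET case is a red herring; the parser simply trips somewhere after the broken `{1)` token.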
Have you considered the `DbConnectionStringBuilder` class (or one of its subtypes)?
string.Format and the word "Password"
[ "", "c#", "string", "" ]
I have two tables, say A and B. A's cols are GUID, someintVar, someMoreIntvar. B's cols are GUID, someItemNO, SomeItemDesc. Now for one GUID I will have only one row in Table A, but I can have multiple rows in Table B for the same GUID. Now I want to query the database based on GUID and select the values into a class. This class will have a list that will hold the different rows coming from the second table. How can I do it? Right now I am getting many items in the result, based on how many rows there are in the second table for that GUID. ``` var itemColl = from p in db.A join item in db.B on p.CardID equals item.CardID where p.CardID == "some GUID" select new { p.CardID, p.secondCol, p.ThirdCol, item.ItemNo // How to add them in a collection or list. }; ```
Untested, but how about re-writing it a bit:

```
var itemColl = from p in db.A
               where p.CardID == "some GUID"
               select new {
                   p.CardID,
                   p.secondCol,
                   p.ThirdCol,
                   Items = db.B.Where(b=>b.CardID==p.CardID) //.Select(b=>b.ItemNo) [see comments]
               }
```

Alternatively, you could perhaps group...
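The shape being built here -- one parent row carrying a list of its child rows -- is independent of LINQ; a Python sketch over hypothetical in-memory rows (the column names mirror the question, the data is invented):

```python
# Hypothetical rows standing in for tables A and B, related by the same GUID.
table_a = [{"CardID": "g1", "secondCol": 10, "ThirdCol": 20}]
table_b = [
    {"CardID": "g1", "ItemNo": 1},
    {"CardID": "g1", "ItemNo": 2},
    {"CardID": "g1", "ItemNo": 3},
]

def card_with_items(guid):
    """One result per parent row, with the matching child rows collected into a list."""
    parent = next(p for p in table_a if p["CardID"] == guid)
    items = [b["ItemNo"] for b in table_b if b["CardID"] == guid]
    return {**parent, "Items": items}

result = card_with_items("g1")
```

The point of the projection is that the join is folded into a nested collection, so the parent row appears once instead of once per child row.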
Assuming you have a foreign-key relationship set up between A and B on GUID. (And if you don't, your db schema is broken and needs to be fixed.)

```
var itemColl = from p in db.A
               where p.CardID == "some GUID"
               select new {
                   p.CardID,
                   p.secondCol,
                   p.ThirdCol,
                   Items = p.Bs
               }
```
LINQ joining two tables
[ "", "c#", "linq", "linq-to-sql", "join", "" ]
How do you login to a webpage and retrieve its content in C#?
``` string postData = "userid=ducon"; postData += "&username=camarche" ; byte[] data = Encoding.ASCII.GetBytes(postData); WebRequest req = WebRequest.Create( URL); req.Method = "POST"; req.ContentType = "application/x-www-form-urlencoded"; req.ContentLength = data.Length; Stream newStream = req.GetRequestStream(); newStream.Write(data, 0, data.Length); newStream.Close(); StreamReader reader = new StreamReader(req.GetResponse().GetResponseStream(), System.Text.Encoding.GetEncoding("iso-8859-1")); string coco = reader.ReadToEnd(); ```
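The same form-encoded body can be built with any standard library; a Python sketch of just the encoding step (no request is sent here), mirroring the `postData` string above:

```python
from urllib.parse import urlencode

# Build the application/x-www-form-urlencoded body, as the C# code does by hand.
post_data = urlencode({"userid": "ducon", "username": "camarche"})
body = post_data.encode("ascii")
content_type = "application/x-www-form-urlencoded"
```

From here, the body and content type would be handed to whatever HTTP client you use; the login itself also usually requires preserving the session cookie, as the other answer notes.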
That depends on what's required to log in. You could use a webclient to send the login credentials to the server's login page (via whatever method is required, GET or POST), but that wouldn't persist a cookie. There is a [way](http://couldbedone.blogspot.com/2007/08/webclient-handling-cookies.html) to get a webclient to handle cookies, so you could just POST the login info to the server, then request the page you want with the same webclient, then do whatever you want with the page.
How do you login to a webpage and retrieve its content in C#?
[ "", "c#", "http", "authentication", "screen-scraping", "" ]
On Windows, `clock()` returns the time in milliseconds, but on this Linux box I'm working on, it rounds it to the nearest 1000 so the precision is only to the "second" level and not to the milliseconds level. I found a solution with Qt using the `QTime` class, instantiating an object and calling `start()` on it then calling `elapsed()` to get the number of milliseconds elapsed. I got kind of lucky because I'm working with Qt to begin with, but I'd like a solution that doesn't rely on third party libraries, Is there no standard way to do this? **UPDATE** Please don't recommend Boost .. If Boost and Qt can do it, surely it's not magic, there must be something standard that they're using!
You could use gettimeofday at the start and end of your method and then difference the two return structs. You'll get a structure like the following: ``` struct timeval { time_t tv_sec; suseconds_t tv_usec; } ``` EDIT: As the two comments below suggest, clock\_gettime(CLOCK\_MONOTONIC) is a much better choice if you have it available, which should be almost everywhere these days. EDIT: Someone else commented that you can also use modern C++ with std::chrono::high\_resolution\_clock, but that isn't guaranteed to be monotonic. Use steady\_clock instead.
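The start/stop differencing pattern is the same in any language with a monotonic clock; a Python sketch (using `time.monotonic`, which, like `clock_gettime(CLOCK_MONOTONIC)`, is unaffected by wall-clock adjustments):

```python
import time

start = time.monotonic()
# ... the work being timed; a short sleep stands in for it here ...
time.sleep(0.002)
end = time.monotonic()

# Difference of the two readings, converted to milliseconds.
elapsed_ms = (end - start) * 1000.0
```

The key property is monotonicity: subtracting two readings gives a valid interval even if the system clock is changed in between.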
``` #include <sys/time.h> #include <stdio.h> #include <unistd.h> int main() { struct timeval start, end; long mtime, seconds, useconds; gettimeofday(&start, NULL); usleep(2000); gettimeofday(&end, NULL); seconds = end.tv_sec - start.tv_sec; useconds = end.tv_usec - start.tv_usec; mtime = ((seconds) * 1000 + useconds/1000.0) + 0.5; printf("Elapsed time: %ld milliseconds\n", mtime); return 0; } ```
C++ obtaining milliseconds time on Linux -- clock() doesn't seem to work properly
[ "", "c++", "timer", "clock", "" ]
I am new to the thread model in .NET. What would you use to: 1. Start a process that handles a file `(process.StartInfo.FileName = fileName;)`. 2. Wait for the user to close the process OR abandon the thread after some time. 3. If the user closed the process, delete the file. Starting the process and waiting should be done on a different thread than the main thread, because this operation should not affect the application. Example: My application produces an html report. The user can right click somewhere and say "View Report" - now I retrieve the report contents in a temporary file and launch the process that handles html files i.e. the default browser. The problem is that I cannot cleanup, i.e. delete the temp file.
The .NET 5 introduced the new API [`Process.WaitForExitAsync`](https://learn.microsoft.com/en-us/dotnet/api/system.diagnostics.process.waitforexitasync), that allows to wait asynchronously for the completion of a process. It offers the same functionality with the existing [`Process.WaitForExit`](https://learn.microsoft.com/en-us/dotnet/api/system.diagnostics.process.waitforexit), with the only difference being that the waiting is asynchronous, so it does not block the calling thread. Usage example: ``` private async void button1_Click(object sender, EventArgs e) { string filePath = Path.Combine ( Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), Guid.NewGuid().ToString() + ".txt" ); File.WriteAllText(filePath, "Hello World!"); try { using Process process = new(); process.StartInfo.FileName = "Notepad.exe"; process.StartInfo.Arguments = filePath; process.Start(); await process.WaitForExitAsync(); } finally { File.Delete(filePath); } MessageBox.Show("Done!"); } ``` In the above example the UI remains responsive while the user interacts with the opened file. The UI thread would be blocked if the `WaitForExit` had been used instead.
"and waiting must be async" - I'm not trying to be funny, but isn't that a contradiction in terms? However, since you are starting a `Process`, the `Exited` event may help: ``` ProcessStartInfo startInfo = null; Process process = Process.Start(startInfo); process.EnableRaisingEvents = true; process.Exited += delegate {/* clean up*/}; ``` If you want to actually wait (timeout etc), then: ``` if(process.WaitForExit(timeout)) { // user exited } else { // timeout (perhaps process.Kill();) } ``` For waiting async, perhaps just use a different thread? ``` ThreadPool.QueueUserWorkItem(delegate { Process process = Process.Start(startInfo); if(process.WaitForExit(timeout)) { // user exited } else { // timeout } }); ```
Async process start and wait for it to finish
[ "", "c#", ".net", "multithreading", "asynchronous", "process", "" ]
I'm working on understanding and drawing my own DLL for PDF417 (2d barcodes). Anyhow, the actual drawing of the file is perfect, and in correct boundaries of 32 bits (as monochrome result). At the time of writing the data, the following is a memory dump as copied from C++ Visual Studio memory dump of the pointer to the bmp buffer. Each row is properly allocated to 36 wide before the next row. Sorry about the wordwrap in the post, but my output was intended to be the same 36 bytes wide as the memory dump so you could better see the distortion. The current drawing is 273 pixels wide by 12 pixels high, monochrome... ``` 00 ab a8 61 d7 18 ed 18 f7 a3 89 1c dd 70 86 f5 f7 1a 20 91 3b c9 27 e7 67 12 1c 68 ae 3c b7 3e 02 eb 00 00 00 ab a8 61 d7 18 ed 18 f7 a3 89 1c dd 70 86 f5 f7 1a 20 91 3b c9 27 e7 67 12 1c 68 ae 3c b7 3e 02 eb 00 00 00 ab a8 61 d7 18 ed 18 f7 a3 89 1c dd 70 86 f5 f7 1a 20 91 3b c9 27 e7 67 12 1c 68 ae 3c b7 3e 02 eb 00 00 00 ab 81 4b ca 07 6b 9c 11 40 9a e6 0c 76 0a fc a3 33 70 bb 30 55 87 e9 c4 10 58 d9 ea 0d 48 3e 02 eb 00 00 00 ab 81 4b ca 07 6b 9c 11 40 9a e6 0c 76 0a fc a3 33 70 bb 30 55 87 e9 c4 10 58 d9 ea 0d 48 3e 02 eb 00 00 00 ab 81 4b ca 07 6b 9c 11 40 9a e6 0c 76 0a fc a3 33 70 bb 30 55 87 e9 c4 10 58 d9 ea 0d 48 3e 02 eb 00 00 00 ab 85 7e d0 29 e8 14 f4 0a 7a 05 3c 37 ba 86 87 04 db b6 09 dc a0 62 fc d1 31 79 bc 5c 0a 8e 02 eb 00 00 00 ab 85 7e d0 29 e8 14 f4 0a 7a 05 3c 37 ba 86 87 04 db b6 09 dc a0 62 fc d1 31 79 bc 5c 0a 8e 02 eb 00 00 00 ab 85 7e d0 29 e8 14 f4 0a 7a 05 3c 37 ba 86 87 04 db b6 09 dc a0 62 fc d1 31 79 bc 5c 0a 8e 02 eb 00 00 00 ab 85 43 c5 30 e2 26 70 4a 1a f3 e4 4d ce 2a 3f 79 cd bc e6 de 73 6f 39 b7 9c db ce 6d 5f be 02 eb 00 00 00 ab 85 43 c5 30 e2 26 70 4a 1a f3 e4 4d ce 2a 3f 79 cd bc e6 de 73 6f 39 b7 9c db ce 6d 5f be 02 eb 00 00 00 ab 85 43 c5 30 e2 26 70 4a 1a f3 e4 4d ce 2a 3f 79 cd bc e6 de 73 6f 39 b7 9c db ce 6d 5f be 02 eb 00 00 ``` Here is the code to WRITE the file out -- verbatim immediately at the time 
of the memory dump from above ``` FILE *stream; if( fopen_s( &stream, cSaveToFile, "w+" ) == 0 ) { fwrite( &bmfh, 1, (UINT)sizeof(BITMAPFILEHEADER), stream ); fwrite( &bmi, 1, (UINT)sizeof(BITMAPINFO), stream ); fwrite( &RGBWhite, 1, (UINT)sizeof(RGBQUAD), stream ); fwrite( ppvBits, 1, (UINT)bmi.bmiHeader.biSizeImage, stream ); fclose( stream ); } ``` Here's what ACTUALLY Gets written to the file. ``` 00 ab a8 61 d7 18 ed 18 f7 a3 89 1c dd 70 86 f5 f7 1a 20 91 3b c9 27 e7 67 12 1c 68 ae 3c b7 3e 02 eb 00 00 00 ab a8 61 d7 18 ed 18 f7 a3 89 1c dd 70 86 f5 f7 1a 20 91 3b c9 27 e7 67 12 1c 68 ae 3c b7 3e 02 eb 00 00 00 ab a8 61 d7 18 ed 18 f7 a3 89 1c dd 70 86 f5 f7 1a 20 91 3b c9 27 e7 67 12 1c 68 ae 3c b7 3e 02 eb 00 00 00 ab 81 4b ca 07 6b 9c 11 40 9a e6 0c 76 0d 0a fc a3 33 70 bb 30 55 87 e9 c4 10 58 d9 ea 0d 48 3e 02 eb 00 00 00 ab 81 4b ca 07 6b 9c 11 40 9a e6 0c 76 0d 0a fc a3 33 70 bb 30 55 87 e9 c4 10 58 d9 ea 0d 48 3e 02 eb 00 00 00 ab 81 4b ca 07 6b 9c 11 40 9a e6 0c 76 0d 0a fc a3 33 70 bb 30 55 87 e9 c4 10 58 d9 ea 0d 48 3e 02 eb 00 00 00 ab 85 7e d0 29 e8 14 f4 0d 0a 7a 05 3c 37 ba 86 87 04 db b6 09 dc a0 62 fc d1 31 79 bc 5c 0d 0a 8e 02 eb 00 00 00 ab 85 7e d0 29 e8 14 f4 0d 0a 7a 05 3c 37 ba 86 87 04 db b6 09 dc a0 62 fc d1 31 79 bc 5c 0d 0a 8e 02 eb 00 00 00 ab 85 7e d0 29 e8 14 f4 0d 0a 7a 05 3c 37 ba 86 87 04 db b6 09 dc a0 62 fc d1 31 79 bc 5c 0d 0a 8e 02 eb 00 00 00 ab 85 43 c5 30 e2 26 70 4a 1a f3 e4 4d ce 2a 3f 79 cd bc e6 de 73 6f 39 b7 9c db ce 6d 5f be 02 eb 00 00 00 ab 85 43 c5 30 e2 26 70 4a 1a f3 e4 4d ce 2a 3f 79 cd bc e6 de 73 6f 39 b7 9c db ce 6d 5f be 02 eb 00 00 00 ab 85 43 c5 30 e2 26 70 4a 1a f3 e4 4d ce 2a 3f 79 cd bc e6 de 73 6f 39 b7 9c db ce 6d 5f be 02 eb 00 00 ``` Notice the start of the distortion with the "0d" in the result from reading the file back in the 4th line, about the 15th byte over... Then, there are a few more staggered around which in total, skew the image off by 9 bytes worth... 
Obviously, the drawing portion is working ok as everything remains properly aligned in memory for the 12 lines.
Shouldn't you open the file in a compound mode i.e. writable & binary as in `wb+`? > Notice the start of the distortion with the "0d" That's ASCII code for Carriage Return (CR) -- added on some OSes with newline (where a newline is actually a sequence of CR/LF). This should go away once you start writing the output in binary mode. Your code looks neat otherwise. Cheers!
Your `0x0A` (`\n`) gets converted to DOS format `0x0D0A` (`\r\n`) because you're writing the file in text mode. Switch to binary mode.
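The corruption can be reproduced without touching the file system: on Windows, text mode rewrites every LF byte on output as CR LF, which is exactly the extra `0d` appearing before each `0a` in the hex dump above. A Python sketch of that translation applied to raw bytes (the byte values are illustrative):

```python
# Raw bitmap bytes that happen to contain a 0x0A ('\n') value.
pixel_row = bytes([0x29, 0xE8, 0x14, 0xF4, 0x0A, 0x7A, 0x05])

# What Windows text mode does to the stream on the way out:
as_written_in_text_mode = pixel_row.replace(b"\n", b"\r\n")
```

Each translated byte grows the output by one, which is why the image data ends up skewed by exactly the number of `0x0A` bytes it contains.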
Writing BMP data getting garbage
[ "", "c++", "graphics", "bmp", "fwrite", "" ]
Ok, let's leave the debate of whether friendship breaks encapsulation, and actually try to elegantly come up with a coherent design. This is a two-fold question: 1) General question on how to implement:

```
public class A
{
    friend class B;
}
```

2) Why do I need this functionality? Some of my classes implement the Serializable interface. However, I want to make the Serializable methods protected in the Derived class so that I don't expose them to a client (as well as in the documentation -- javadoc). However, internal classes should be able to access them. What is the general way to solve this problem in Java? Note: I am using friendship as defined in the current C++ standard. Thanks
The general solution is to make the methods package-private (which is the default protection level in Java). That way any code in the same package can access them, but not external code. Java does not allow arbitrary sharing of methods with specific external classes. EDIT: Protected members are actually *less* private than package-private. If you have protected members, you can access them from derived classes outside your package, and from any class inside the package. So that may be a solution to your problem - derive the class in another class in the package you want to export to. Generally, Java considers the package as the main module of encapsulation. The public/protected interface is for classes outside the package, and the default protection level allows access within the package.
It seems as if you want a façade. You appear to have a class that needs to give public access to various other classes (even in different packages) involved in the implementation. But you don't want clients having access. Therefore, make the implementation as complicated as you like. Have a façade class, with just the interface you want, delegate to the implementation.
Implementing friend (available in C++) functionality in Java
[ "", "java", "javadoc", "" ]
Is it possible to dump an image of a running JVM and later restore the previous state by loading the image into the JVM? I'm fairly certain the answer is negative, but would love to be wrong. With all the dynamic languages available for the JVM comes an increase in interactivity, being able to save a coding session would help save time manually restoring the VM to a previous session.
There was a [JSR 323 proposed for this](http://tech.puredanger.com/2008/01/09/strong-mobility-for-java/) a while back but it was [rejected](http://tech.puredanger.com/2008/01/24/jcp-votes-down-jsr-323/). You can find some links in those articles about the research behind this and what it would take. It was mostly rejected as an idea that was too immature. I have heard of at least one startup (unfortunately don't recall the name) that was working on a virtualization technology over a hypervisor (probably Xen) that was getting pretty close to being able to move JVMs, including even things like file system refs and socket endpoints. Because they were at the hypervisor level, they had access to all of that stuff. By hooking that and the JVM, they had most of the pieces. I think they might have gone under though. The closest thing you can get today is [Terracotta](http://terracotta.org), which allows you to cluster a portion of your JVM heap, storing it in a server array, which can be made persistent. On JVM startup, you connect to the cluster and can continue using whatever portions of your heap are specified as clustered. The actual objects are faulted in on an as-needed basis.
Not possible at present. In general, pausing and restarting a memory image of a process in a different context is incredibly hard to achieve: what are you going to do with open OS resources? Transfers to machines with different instruction sets? Database connections? Also, images of the running JVM are probably quite large - maybe much larger than the subset of the state you are actually interested in. So it's not a good idea from a performance perspective. A much better strategy is to have **code that persists and recreates the application state**: this is relatively feasible with most JVM dynamic languages. I do similar stuff in Clojure, where you have an interactive environment (REPL) and it is quite possible to create and run a sequence of operations that rebuild the application state that you want in another JVM.
Can the JVM provide snapshot persistence?
[ "", "java", "jvm", "" ]
I'm trying to figure out if it's possible using Javascript to highlight a specific range of data in a text field. ``` textfield.select(); ``` That ^^ works to select the entire text, but for all my googling I haven't stumbled upon a way to select, for example, characters 2 through 10 of the entered text. Is this possible?
This object will let you get, set & modify the selected region of a text box.

```
function SelectedText(input) {
    // Replace the currently selected text with the given value.
    this.replace = function(text) {
        var selection = this.get();
        var pre = input.value.substring(0, selection.start);
        var post = input.value.substring(selection.end, input.value.length);
        input.value = pre + text + post;
        this.set(selection.start, selection.start + text.length);
        return this;
    };

    // Set the current selection to the given start and end points.
    this.set = function(start, end) {
        if (input.setSelectionRange) {
            // Mozilla
            input.focus();
            input.setSelectionRange(start, end);
        } else if (input.createTextRange) {
            // IE
            var range = input.createTextRange();
            range.collapse(true);
            range.moveEnd('character', end);
            range.moveStart('character', start);
            range.select();
        }
        return this;
    };

    // Get the currently selected region.
    this.get = function() {
        var result = new Object();
        result.start = 0;
        result.end = 0;
        result.text = '';
        if (input.selectionStart != undefined) {
            // Mozilla
            result.start = input.selectionStart;
            result.end = input.selectionEnd;
        } else {
            // IE
            var bookmark = document.selection.createRange().getBookmark();
            var selection = input.createTextRange();
            selection.moveToBookmark(bookmark);
            var before = input.createTextRange();
            before.collapse(true);
            before.setEndPoint("EndToStart", selection);
            result.start = before.text.length;
            result.end = before.text.length + selection.text.length;
        }
        result.text = input.value.substring(result.start, result.end);
        return result;
    };
}
```
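The string arithmetic inside `replace` is the core trick and works anywhere; a Python sketch of the same pre/selection/post splice, independent of any DOM object:

```python
def replace_selection(value, start, end, text):
    """Splice `text` over the [start, end) region; return the new value and new selection."""
    pre, post = value[:start], value[end:]
    new_value = pre + text + post
    # The replacement text becomes the new selected region.
    return new_value, (start, start + len(text))

new_value, selection = replace_selection("hello world", 6, 11, "there")
```

Everything browser-specific in the answer above is only about reading and writing those two offsets; the splice itself is plain string slicing.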
This is handled differently with IE vs everyone else. Here is a reference guide with examples: <http://www.sxlist.com/techref/language/html/ib/Scripting_Reference/trange.htm>
How can I highlight a subset of the text in an input box?
[ "", "javascript", "html", "ajax", "" ]
I just ran into an issue where a stack overflow in a threaded c++ program on HPUX caused a SEGV\_MAPERR when a local object tried to call a very simple procedure. I was puzzled for a while, but luckily I talked to someone who recognized this as a stack size issue and we were able to fix the problem by increasing the stack size available to the threads. How can I recognize when the stack overflows? Do the symptoms differ on windows/linux/hpux?
> How can I recognize when the stack overflows? If you know the stack size, where the stack starts and the direction it grows in memory, you can simply check the address of the [stack pointer](http://en.wikipedia.org/wiki/Stack_pointer#Structure) and see if it past the end of the stack. C++ does not allow direct access to the stack pointer. You could easily write a small function in assembly to perform this analysis and link it into you program.
Assuming you're not on a platform that's going to stop your app and say "stack overflow", I suspect you'll see the same behavior that you would see from any kind of buffer overflow. The stack is just another preallocated chunk of memory for your program, and if you go outside those bounds... well, good luck! Who knows what you'll stomp on! You could write over the temperature readout from the CPU, it could be the email you're typing to Larry, it could be the bit saying that the kernel is locked, causing a fun deadlock condition! Who knows. As for C++, there's nothing saying how the stack should be laid out in relation to other things in memory, or that this thing even needs to be a stack!
What are the symptoms of a stack overflow in a C++ program?
[ "", "c++", "stack-overflow", "" ]
Say I have the following table: ``` Create Table Comments ( ID Int Identity(1,1) Not Null Primary Key Clustered, Comment Text Not Null ) ``` Since I'm superstitious, how can I stop multiples of 13 appearing in the ID column? ie: Skip 13, 26, 39 etc. Solution in MySQL or MSSQL much appreciated.
Create a trigger to skip to the next one every time 13n - 1 comes up in the sequence. BradC, this is for you: without any knowledge of SQL Server, I'll do it in Oracle. [This](http://msdn.microsoft.com/en-us/library/ms189799.aspx) seems to be a good reference for triggers in SQL Server.

```
CREATE OR REPLACE TRIGGER trigname
AFTER INSERT ON Comments
FOR EACH ROW
DECLARE
  l_dummy NUMBER;
BEGIN
  IF MOD(:new.ID, 13) = 12 THEN
    -- increase the sequence
    SELECT comment_ID_sequence.NEXTVAL INTO l_dummy FROM dual;
  END IF;
END;
```

Without actually testing it, this will probably not work, but with a small amount of trial and error, you can get it working. Oracle has sequence objects that aren't tied to the table at all, and you can bump the sequence all day if you want, without ever touching the table. I don't know if this is true in SQL Server.
Edit: previous answer was completely wrong. You can do it like this:

```
Identity(1, 13)
```

As tested by:

```
for (int i = 1; i < 10000000; i += 13)
{
    if (i % 13 == 0)
    {
        Console.WriteLine(i);
    }
}
```

Incrementing by 13, starting from 1, should never give you a multiple of 13, at least up to 10 million.
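The arithmetic behind the claim checks out: starting at 1 and stepping by 13 keeps every value congruent to 1 (mod 13), so none is ever divisible by 13. A quick Python check mirroring the C# loop above:

```python
# Every identity value the seed/increment pair (1, 13) would generate, up to 10 million,
# filtered down to those divisible by 13 -- the list should be empty.
multiples_of_13 = [i for i in range(1, 10_000_000, 13) if i % 13 == 0]
```

The same reasoning holds for any limit, not just 10 million, since (1 + 13k) mod 13 is always 1.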
How to stop multiple of 13 appearing in an Identity Column
[ "", "sql", "" ]
``` String s1 = "BloodParrot is the man"; String s2 = "BloodParrot is the man"; String s3 = new String("BloodParrot is the man"); System.out.println(s1.equals(s2)); System.out.println(s1 == s2); System.out.println(s1 == s3); System.out.println(s1.equals(s3)); ``` // output true true false true Why don't all the strings have the same location in memory if all three have the same contents?
Java only automatically interns String *literals*. New String objects (created using the `new` keyword) are not interned by default. You can use the [String.intern()](http://java.sun.com/j2se/1.3/docs/api/java/lang/String.html#intern()) method to intern an existing String object. Calling `intern` will check the existing String pool for a matching object and return it if one exists or add it if there was no match. If you add the line ``` s3 = s3.intern(); ``` to your code right after you create `s3`, you'll see the difference in your output. [See some more examples and a more detailed explanation](http://javatechniques.com/blog/string-equality-and-interning/). This of course brings up the very important topic of when to use == and when to use the `equals` method in Java. You almost always want to use `equals` when dealing with object references. The == operator compares reference values, which is *almost* never what you mean to compare. Knowing the difference helps you decide when it's appropriate to use == or `equals`.
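CPython has an analogous pool reachable through `sys.intern`, which makes the literal-vs-constructed distinction easy to experiment with (this is an analogy to Java's behavior, not a description of the JVM):

```python
import sys

s1 = "BloodParrot is the man"
s2 = "".join(["BloodParrot", " is the man"])  # built at runtime, like new String(...)

# Value equality always holds.
equal_by_value = (s1 == s2)

# After interning, equal strings share one pooled object, so identity holds too.
same_object_after_intern = (sys.intern(s1) is sys.intern(s2))
```

As in Java, the practical rule is the same: compare values with `==`/`equals`, and treat object identity as an optimization detail of the runtime's string pool.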
You explicitly call new for s3 and this leaves you with a new instance of the string.
String equality vs equality of location
[ "", "java", "string", "equality", "" ]
Just a very general question, that not only applies to this example. Let's say you have an Online Shop and you want to implement Vouchers/Gift Certificates, but with Constraints. Let's say you have a voucher for 20% off, but that applies only to products added within the last 3 weeks, but not to ones in a special promotion. I see two ways to solve it: The first way is to code your shop to "natively" support all crazy types of vouchers. This seems to be the classic way, but it means a lot of work beforehand and very little flexibility (After all, you can't know beforehand what you need, and maybe Sales may come up with some really great new promotion which requires new vouchers - by next Monday). The second way is a Plug-In way: Vouchers are like Plugins and each Voucher has its own Code. You pass the Shopping Basket into the Voucher and then the Voucher itself checks each item if it applies, makes the necessary changes and returns the changed shopping cart. I just wonder, what is the Design Pattern for Case 2? It looks a bit like IoC/DI, but then again not really because Vouchers are not replacing any existing functionality. It's more like a set of Objects with a Special Interface (i.e. IVoucher), and then a Queue of IVoucher Objects that gets iterated over. Is there a standard pattern (and best practice) for these types of "Manipulators"? Edit: Thanks for the Answers. To clarify that just a bit, the Vouchers (or Manipulators - as said, this is not only a question about online shops but about similar situations) are "heavy" objects, that is they have Business Logic in them. So I can say that a Voucher only applies if the Customer signed up before January 1 2008, only if the customer ordered at least 100$ in the past 6 months, only applies to articles in the Category X, "stacks" with other Vouchers except for Items marked as Reduced etc. etc. etc.
So my concern was more about how to keep a clean structure to make sure the Vouchers get all that they need to check whether they apply and to be able to manipulate the Cart, so I wondered about what the standard for such situations are, which is exactly what the Visitor Pattern seems to do.
It's a case where you could use [the strategy pattern](http://en.wikipedia.org/wiki/Strategy_pattern) along with [the visitor pattern](http://en.wikipedia.org/wiki/Visitor_pattern) to calculate the value of the basket. A visitor could visit each item in the basket utilising different strategies (in this case discount vouchers) and using those to calculate the full cost of the basket. The vouchers used could be retrieved from a database in some way and injected into the visitor quite easily. The voucher strategy could look something like this:

```
public interface IVoucher
{
    decimal CostOf(CartItem cartItem);
}
```

The default would be something like this:

```
public class FullPriceVoucher : IVoucher
{
    public decimal CostOf(CartItem cartItem)
    {
        return cartItem.Cost;
    }
}
```

A 10% discount would be something like:

```
public class TenPercentOffVoucher : IVoucher
{
    public decimal CostOf(CartItem cartItem)
    {
        return cartItem.Cost * 0.9m;
    }
}
```

Then you could have a visitor for calculating cart value like this:

```
public class CartValueVisitor
{
    private IVoucher voucher;

    public CartValueVisitor(IVoucher voucher)
    {
        this.voucher = voucher;
    }

    public decimal CostOf(Cart cart)
    {
        return cart.Items.Sum(item => voucher.CostOf(item));
    }
}
```

Which you would use like:

```
var cart = GetACart();
var fullPriceCartValueVisitor = new CartValueVisitor(new FullPriceVoucher());
var tenPercentOffCartValueVisitor = new CartValueVisitor(new TenPercentOffVoucher());
var fullPrice = fullPriceCartValueVisitor.CostOf(cart);
var tenPercentOffPrice = tenPercentOffCartValueVisitor.CostOf(cart);
```

This obviously only works with a single voucher at a time but should give you an idea of the general structure.
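The same strategy objects can be sketched in a few lines of Python to see the mechanics without the C# scaffolding (class names and prices are invented for illustration):

```python
class FullPriceVoucher:
    """Default strategy: no discount."""
    def cost_of(self, item_cost):
        return item_cost

class TenPercentOffVoucher:
    """A 10%-off strategy, swapped in without touching the pricing loop."""
    def cost_of(self, item_cost):
        return item_cost * 0.9

def cart_total(item_costs, voucher):
    """Visit every item, letting the injected voucher strategy price each one."""
    return sum(voucher.cost_of(c) for c in item_costs)

cart = [100.0, 50.0, 50.0]
full = cart_total(cart, FullPriceVoucher())
discounted = cart_total(cart, TenPercentOffVoucher())
```

The design point is that new voucher rules become new strategy classes; the cart-walking code never changes.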
The previous answers suggesting Visitor and Strategy patterns sound fine to me, although Visitor is overkill in the typical case where each purchase item is an object of the same concrete class. The purpose of Visitor is to allow dynamic dispatch on two (or more) object types -- the visited objects are part of one hierarchy, and the visitors are part of another. But if only one object type (the concrete type of the class implementing `IVoucher`) varies, then regular old single-type virtual dispatch is all you need. In fact I personally wouldn't bother with any "pattern" at all -- your own description is exactly what's needed: create an interface, `IVoucher`, and a bunch of classes that implement that interface. You'll also need a factory method that takes a voucher code and returns an `IVoucher` object having the appropriate concrete type. ## Beware Non-Commutative Vouchers! The fact that you mention a queue of IVoucher-implementing objects will be run against the purchase items implies that more than one voucher may be used. **In this case you need to be careful** -- does applying voucher A, then voucher B always have the same effect as applying B then A? Unfortunately many typical "special offers" would seem not to have this property (e.g. if voucher A gives you $10 off and voucher B gives you 5% off, the order definitely matters). A quick and dirty way out of this is to assign each voucher a distinct numeric "priority" value, and always apply vouchers in priority value order. To reduce the probability of "weird" combinations of vouchers driving you bankrupt, it's probably also a good idea to limit voucher combinations to some set of allowed combinations specified in your code somewhere. (This could be as simple as a list of lists of voucher codes.)
Pattern for Plugins - IoC/DI or not?
[ "", "c#", ".net", "design-patterns", "" ]
If .NET has garbage collection then why do you have to explicitly call `IDisposable`?
Garbage collection is for memory. You need to dispose of non-memory resources - file handles, sockets, GDI+ handles, database connections etc. That's typically what underlies an `IDisposable` type, although the actual handle can be quite a long way down a chain of references. For example, you might `Dispose` an `XmlWriter` which disposes a `StreamWriter` it has a reference to, which disposes the `FileStream` *it* has a reference to, which releases the file handle itself.
Expanding a bit on other comments: The Dispose() method should be called on all objects that have references to unmanaged resources. Examples of such would include file streams, database connections etc. A basic rule that works most of the time is: "if the .NET object implements IDisposable then you should call Dispose() when you are done with the object." However, some other things to keep in mind:

* Calling dispose does not give you control over when the object is actually destroyed and memory released. GC handles that for us and does it better than we can.
* Dispose cleans up all native resources, all the way down the stack of base classes as Jon indicated. Then it calls SuppressFinalize() to indicate that the object is ready to be reclaimed and no further work is needed. The next run of the GC will clean it up.
* If Dispose is not called, then GC finds the object as needing to be cleaned up, but Finalize must be called first, to make sure resources are released. That request for Finalize is queued up and the GC moves on, so the lack of a call to Dispose forces one more GC to run before the object can be cleaned. This causes the object to be promoted to the next "generation" of GC. This may not seem like a big deal, but in a memory-pressured application, promoting objects up to higher generations of GC can push a high-memory application over the wall to being an out-of-memory application.
* Do not implement IDisposable in your own objects unless you absolutely need to. Poorly implemented or unnecessary implementations can actually make things worse instead of better. Some good guidance can be found here:

[Implementing a Dispose Method](http://msdn.microsoft.com/en-us/library/fs2xkftw.aspx)

[Or read that whole section of MSDN on Garbage Collection](http://msdn.microsoft.com/en-us/library/0xy59wtx.aspx)
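The split between garbage collection (memory) and Dispose (everything else) has a direct Python analogy: the GC reclaims memory whenever it likes, while context managers provide the deterministic cleanup that `using`/`IDisposable` give in .NET. A minimal sketch:

```python
class FakeFileHandle:
    """Stand-in for a non-memory resource (file handle, socket, DB connection)."""
    def __init__(self):
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Deterministic cleanup, like Dispose() -- runs even if the body raises.
        self.closed = True
        return False

handle = FakeFileHandle()
with handle:
    released_inside = handle.closed   # still open while in use

released_after = handle.closed        # closed as soon as the block exits
```

The timing is the whole point: the resource is released at a known line of code, not whenever the collector happens to run.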
What is IDisposable for?
[ "", "c#", "garbage-collection", "dispose", "idisposable", "" ]
I would like the text in my textBox to be set to upper-case whenever currentItemChanged is triggered. In other words, whenever the text in the box changes I'd like to make the contents upper-case. Here is my code: ``` private void rootBindingSource_CurrentItemChanged(object sender, System.EventArgs e) { toUserTextBox.Text.ToUpper(); readWriteAuthorization1.ResetControlAuthorization(); } ``` The event triggers for sure, I've tested with a messageBox. So I know I've done something wrong here... the question is what.
Strings are immutable. ToUpper() returns a new string. Try this: ``` private void rootBindingSource_CurrentItemChanged(object sender, System.EventArgs e) { toUserTextBox.Text = toUserTextBox.Text.ToUpper(); readWriteAuthorization1.ResetControlAuthorization(); } ```
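The immutability trap is not C#-specific; a Python sketch of the same mistake and its fix (`upper()` returns a new string and never mutates the original):

```python
text = "make me shout"

text.upper()               # result thrown away -- the same mistake as the original code
still_lower = text         # the original string is unchanged

fixed = text.upper()       # the fix: capture and use the returned string
```

In both languages the method call is not "in place"; if the return value is ignored, nothing observable happens.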
If all you need to do is force the input to upper case, try the [CharacterCasing](http://msdn.microsoft.com/en-us/library/system.windows.forms.textbox.charactercasing.aspx) property of the textbox. ``` toUserTextBox.CharacterCasing = CharacterCasing.Upper; ```
Setting the contents of a textBox to upper-case on CurrentItemChanged
[ "", "c#", ".net", "visual-studio", "textbox", "" ]
I need to walk a JDOM tree and make changes as I go along; at this point, changes are mostly adding new elements right now but could also include reordering elements or removing elements. All work is done on the same thread so there are no concurrency issues. This turns out to be difficult because JDOM iterators can throw a ConcurrentModificationException if you try to add a node during traversal. From what I can see, JDOM uses lists instead of directly linking DOM nodes and this makes it difficult to do modifications on the fly. I've seen a couple of recommendations on how to deal with this, such as deferring the adds until after the traversal is done, or building a new tree on the fly so that the traversed tree remains unchanged. These won't work for me because I need a consistent view of the tree as I modify it. I'm beginning to suspect that JDOM just won't work here. Do any of the other Java DOM models make this easier? Or is there a way to do this in JDOM?
I've come up with what looks like an easy solution using JDOM. Rather than using the JDOM iterator directly, I use the iterator to create a list of nodes and then traverse using this list. Since this list is not "live", my scripts can modify the tree (and see the changes) without affecting the traversal. The traversal won't see structural changes but that should not be a problem.
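The snapshot approach isn't JDOM-specific; the same "iterate over a copy" idea applies anywhere a live iterator forbids concurrent modification. A rough analogy, sketched here in Python only because it is easy to run (the JDOM version would build a plain `List` from the element iterator the same way):

```python
def add_children_live(tree):
    # A "live" iterator detects the mutation and blows up,
    # much like JDOM's ConcurrentModificationException.
    for node in tree:
        tree[node + "_child"] = {}

def add_children_snapshot(tree):
    # Walking a snapshot (a plain copied list of nodes) is safe:
    # the traversal order is fixed, but the tree itself still
    # sees every change immediately.
    for node in list(tree):
        tree[node + "_child"] = {}
    return tree
```

As the answer notes, the traversal won't see nodes added after the snapshot was taken, which is usually acceptable.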
Is there a reason you can't simply do two passes? Most algorithms I'm familiar with won't require more than 2 traversals when decorating a tree (ideally, your algorithm should need a pass for initial decoration and perhaps a second for resolving references after the decoration).
Problems with XML/DOM tree rewriting using JDOM (ConcurrentModificationException)
[ "", "java", "xml", "dom", "jdom", "" ]
I created a sub-directory of my Django project called `bin` where I want to put all command-line run Python scripts. Some of these scripts need to import my Django project `settings.py` file that is in a parent directory of `bin`. How can I import the `settings.py` file from a sub-directory of the project? The code that I use in my command-line script to set up the "Django context" of the project is: ``` from django.core.management import setup_environ import settings setup_environ(settings) ``` This works fine if the script is in the root directory of my project. I tried the following two hacks to import the `settings.py` file and then set up the project: ``` import os os.chdir("..") import sys sys.path = [str(sys.path[0]) + "/../"] + sys.path ``` The crude hack can import `settings.py`, but then I get the error: ``` project_module = __import__(project_name, {}, {}, ['']) ValueError: Empty module name ```
This is going one level up from your question, but probably the best solution here is to implement your scripts as [custom manage.py (django-admin.py) commands](http://docs.djangoproject.com/en/dev/howto/custom-management-commands/#howto-custom-management-commands). This gives you all of Django's functionality (including settings) for free with no ugly path-hacking, as well as command-line niceties like options parsing. I've never seen a good reason to write Django-related command-line scripts any other way.
I think your approach may be over-complicating something that Django 1.x provides for you. As long as your project is in your python path, you can set the environment variable DJANGO\_SETTINGS\_MODULE at the top of your script like so: ``` import os os.environ['DJANGO_SETTINGS_MODULE'] = 'myproject.settings' ``` In your command line script where you need to read your settings, simply import the settings module from 'django.conf' as you would do in your application code: ``` from django.conf import settings ``` And presto, you have your settings and a Django-enabled environment for your script. I personally prefer to set my DJANGO\_SETTINGS\_MODULE using '/usr/bin/env' in a bash script called 'proj\_env' so I don't have to repeat it ``` #!/bin/bash proj_env="DJANGO_SETTINGS_MODULE=myproject.settings" /usr/bin/env $proj_env ${*} ``` With this, now I can run any python script with my Django application in context: ``` proj_env python -m 'myproject.bin.myscript' ``` If you use virtualenv, this also gives you a good place to source the activate script. etc. etc.
How to import a Django project settings.py Python file from a sub-directory?
[ "", "python", "django", "" ]
Each product in my database can have at least three and sometimes four or five different prices, and which one is displayed is based on several factors. How should I attempt to tackle this scenario? Each product can have the following prices: * List Price (MSRP) * Cost (what we pay for it) * Retail Price - shown on "main" site * Government price (our primary customer is the U.S. Government) - only shown on our government subdmain * Sale Price (valid for 1 month, but extended every month) - only shown on our government subdomain * BPA Price (special price for BPAs) - not shown but loaded onto government sites The list price is provided in the vendor's quarterly data file and kept in the `Products` table, since it's a finite part of a product entity. How should I handle the other prices though? I'm thinking the best (only?) course of action is to have a separate table with the different price structure as well as `SaleStartDate` and `SaleEndDate` columns that I can check to see whether or not to display the sale price. In fact, I'm not sure of any other way to handle this. Also, I will need to have a duplicate of this (different prices, but same cost) as our company does the processing (all the work, really) for a different company that's in the same business as us; the products are identical but customers/orders/specific prices are different. The current implementation has a duplicate database with everything repeated (same with the code) and I want to avoid this like the plague so I'm trying to come up with a way to abstract the common things out. 
**EDIT (2/17/2009 @ 12:57PM):** Except for the list price, the prices are all calculated based off the cost, however it's not always that easy; sometimes items need to have their prices manually changed independently of the margin, however this is handled manually via Excel and only once per quarter (3 months); when the data is loaded into the database the price is finite and isn't changed until the following quarter; the "sale" price is on perpetually forever, but it expires (not on our site, but on several government sites like GSA Advantage) and needs to be extended; it doesn't change though when it's extended. The sale price is just to have our own site in sync with GSA Advantage, so if a government customer visits our site they'll see identical pricing as they would on Advantage. RE: The other company, that might be a way to go, although it would be insanely annoying to have to do everything multiple times; the products are exactly the same, the cost is exactly the same (we basically do everything for them, it's another company in name only and the fact they have different customers than we do), but their prices use a different markup than ours. Everything else is identical, which is why I was hoping to keep stuff like the product information in one table, and then keep the differences in separate tables. The current system is a mess largely because **everything** (database, code files, images, everything) is duplicated, so refactoring the code in our website requires an identical change made to the other website. **EDIT (2/17/2009 @ 4:52PM):** Some very good ideas so far. One of the issues is that, apart from reporting purposes and reference, only one of the price columns is ever the "real" one. For example, the government price is *always* shown to government customers, but the sale price is shown instead if the current date falls during the sale date ("sale" pricing is specific to government customers). 
If the customer is ordering for a BPA (this will be determined by our order processing system and/or manual override by a user), the price is automatically the BPA price. The list price might be shown to display the discount you are getting, and it's required for reports, but nothing else. Also, I'm unsure of how badly normalizing things will make updates. Right now, I'm just handed a spreadsheet with new pricing and told to update the database (since they're all in the products table); I do this based on the product's SKU since the ID is internal only and all of the related tables are linked via the SKU. With a separate price table it becomes even more unwieldy, although I could still use the SKU as the reference (it's going to be a unique index for a product) but a lot more would have to be done manually.
* My first question is, can all of the prices be calculated from one of the prices? If yes and the rules are the same across products, we're in a somewhat easier place. * Do any of the prices change independently of other prices? You have hinted that the Sale Price is valid for 1 month and then extended - does the price remain the same or does it change? If it stays the same, this is easier than if it changes. You're on the right lines with having a start and end date set for product prices as this will be fundamental for historical price reporting. **EDIT:** I was thinking that you could use [persisted computed columns](http://msdn.microsoft.com/en-us/library/ms191250.aspx) for the other prices based on the cost price, but since the prices can be manually adjusted this isn't an option (I don't believe you can override p.c.c. values). You could write a stored procedure to insert the initial prices into a `PRICE` table based on cost price. Based on the info so far, I think you're best option would be to have a separate `PRICE` table from your `PRODUCT` table and a product\_id foreign key in your `PRICE` table referencing a primary key id of the `PRODUCT` table - **PRODUCT table** ``` id | name | image | description | etc... ``` **PRICE table** ``` id | product_id | list_price | cost_price | retail_price | gov_price | sale_price | bpa_price | start_date | end_date ``` It is prudent to have the start and end date fields on your pricing because not only can you historically report prices, but also populate the table with future prices. With the above structure, now when you have a price change, you would need to insert a new record in the PRICE table for the product. Set up indexes on start and end dates, then you would query products and prices as follows ``` SELECT product.name, price.list_price, price.cost_price, price.retail_price /*, ETC... 
*/ FROM product INNER JOIN price ON product.id = price.product_id WHERE price.start_date <= @date AND price.end_date >= @date ``` You could normalize this design further and also have a `PRICE_TYPE` table. The thing to bear in mind with taking this approach however, is that if you want to get the full set of prices for a product, then the WHERE clause is applied to 6 records for each product. To handle the other company situation, provided the proper permissions and restrictions are put in place, I see *few* \* problems with storing their prices in the same database. You say they use different markup - are you referring to their prices? If so, you could handle this with a company\_id within the price table. Access to data could be controlled through stored procedures and updating prices through transactions. \*it depends on how closely coupled the work is between companies. Is it permissible to share resources? (N.B. I've made the assumption that your target database is SQL Server, but I would imagine the logic to be similar for other platforms).
In `MS SQL` or `Oracle`: Create a table `prices`: ``` CREATE TABLE prices (product, pricetype, price, startdate, enddate) ``` Query: ``` SELECT product, price, pricetype FROM ( SELECT products.* , prices.* ROW_NUMBER() OVER (PARTITION BY product, pricetype ORDER BY startdate) rn FROM products, prices WHERE prices.product = product.id AND startdate <= @date ) WHERE rn = 1 AND enddate >= @date ``` This will give you all prices of all types that are actual for given `@date`
How should you deal with data for multiple prices for a single product?
[ "", "sql", "database-design", "" ]
Are there any alternatives to the code below: ``` startFromLine = 141978 # or whatever line I need to jump to urlsfile = open(filename, "rb", 0) linesCounter = 1 for line in urlsfile: if linesCounter > startFromLine: DoSomethingWithThisLine(line) linesCounter += 1 ``` If I'm processing a huge text file `(~15MB)` with lines of unknown but different length, and need to jump to a particular line which number I know in advance? I feel bad by processing them one by one when I know I could ignore at least first half of the file. Looking for more elegant solution if there is any.
[linecache](http://docs.python.org/library/linecache.html): > The [`linecache`](https://docs.python.org/3/library/linecache.html#module-linecache) module allows one to get any line from a Python source file, while attempting to optimize internally, using a cache, the common case where many lines are read from a single file. This is used by the [`traceback`](https://docs.python.org/3/library/traceback.html#module-traceback) module to retrieve source lines for inclusion in the formatted traceback...
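A minimal, self-contained sketch of `linecache` jumping straight to a given line (the file path and contents here are invented for the demo; in real use you would pass your own file):

```python
import linecache
import os
import tempfile

# Build a throwaway file to demonstrate against.
path = os.path.join(tempfile.mkdtemp(), "urls.txt")
with open(path, "w") as f:
    for i in range(1, 200001):
        f.write("line %d\n" % i)

# getline is 1-indexed; it reads and caches the file on first access,
# then serves subsequent lookups from memory.
wanted = linecache.getline(path, 141978)   # -> "line 141978\n"

# Out-of-range line numbers return an empty string rather than raising.
missing = linecache.getline(path, 999999)  # -> ""
```

Note that `linecache` still reads the whole file into its cache, so it saves repeated scanning rather than the initial read.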
You can't jump ahead without reading in the file at least once, since you don't know where the line breaks are. You could do something like: ``` # Read in the file once and build a list of line offsets line_offset = [] offset = 0 for line in file: line_offset.append(offset) offset += len(line) file.seek(0) # Now, to skip to line n (with the first line being line 0), just do file.seek(line_offset[n]) ```
How to jump to a particular line in a huge text file?
[ "", "python", "text-files", "" ]
Suppose you have an interface defined in C#. What is the easiest method to find all classes that provide an implementation of the interface? The brute force method would be to use "Find References" in Visual Studio and manually look through the results to separate out the usages from the implementations, but for an interface in a large codebase that is heavily *referenced* with relatively few implementations, this can be time consuming and error prone. In Java, running javadoc on the codebase (using the -private option to include private classes) would generate a documentation page for the interface (e.g. [Comparable](http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Comparable.html)) that includes all implementing classes for the interface as well as any subinterfaces (though it doesn't include implementing classes of the subinterfaces, these are relatively easy to determine by drilling down into the listed subinterfaces). It's this functionality that I'm looking for but with C# and Visual Studio.
In plain Visual Studio (since 2010) you can right click a method name (definition in interface or implementation in other class) and choose View Call Hierarchy. In Call Hierarchy window there is "Implements" folder where you can find all locations of the interface method implementation.
(Edit based on comment...) **If you have ReSharper installed:** In Visual Studio, right click on the type name and choose "Go to Inheritor". Alternatively, select the type name, then go to ReSharper/View/Type Hierarchy to open up a new tab. (The menu will show you the keyboard shortcut - this can vary, which is why I explained how to find it :) **If you don't have ReSharper:** * You can use [Reflector](http://www.red-gate.com/products/reflector), which is able to show you all the type hierarchy very easily - just under the type name are expandable items for base types and derived types. Similar tools are available such as [ILSpy](http://ilspy.net/) and [dotPeek](https://www.jetbrains.com/decompiler/). * Buy ReSharper - it's a great tool :)
How do you find all implementations of an interface?
[ "", "c#", "visual-studio", "resharper", "reflector", "" ]
I tried three iterations of the following simple program. This is a highly simplified attempt to write a container-and-iterator pair of classes, but I was running into issues with incomplete types (forward declarations). I discovered that this was in fact possible once I templatized everything - but only if I actually used the template parameter! (I realized this by looking at the [Google sparsetable code](http://code.google.com/p/google-sparsehash/source/browse/trunk/src/google/sparsehash/densehashtable.h).) Any hints explaining why the second works while the third doesn't? (I know why the first one doesn't work - the compiler needs to know the memory layout of the container.) Thanks in advance. ``` // This doesn't work: invalid use of incomplete type. #if 0 struct container; struct iter { container &c; int *p; iter(container &c) : c(c), p(&c.value()) {} }; struct container { int x; int &value() { return x; } iter begin() { return iter(*this); } }; int main() { container c; c.begin(); return 0; } #endif // This *does* work. template<typename T> struct container; template<typename T> struct iter { container<T> &c; T *p; iter(container<T> &c) : c(c), p(&c.value()) {} }; template<typename T> struct container { T x; T &value() { return x; } iter<T> begin() { return iter<T>(*this); } }; int main() { container<int> c; c.begin(); return 0; }; // This doesn't work either. #if 0 template<typename T> struct container; template<typename T> struct iter { container<int> &c; int *p; iter(container<int> &c) : c(c), p(&c.value()) {} }; template<typename T> struct container { int x; int &value() { return x; } iter<int> begin() { return iter<int>(*this); } }; int main() { container<int> c; c.begin(); return 0; } #endif ```
The first requires a definition of `container` since you are doing a copy operation. If you define the constructor of `iter` after `container`'s definition you'd be okay. So: ``` struct container; struct iter { container &c; int *p; iter(container &c); }; struct container { int x; int &value() { return x; } iter begin() { return iter(*this); } }; iter::iter(container &c) : c(c), p(&c.value()) {} int main() { container c; c.begin(); return 0; } ``` The second example works because there is no class until you actually instantiate one in your `main` function. By that time all types are defined. Try moving any of the `iter` or `container` templates definition after main and you'll hit an error. The third example is a specialization for `int` or so it appears. This should compile because the template parameter for `iter` is not used. You've got the specialization syntax a bit off. However, there is no proper constructor so you'll only get garbage for `x`. Moreover, iterators are modeled well by pointers. Passing `this`'s value will not be of much help. Iterators are typically required for a sequence and not an individual object. Though, there is nothing that can stop you from building one. And you don't need a `;` after a function body.
You could do this without templates by defining iter::iter() after definition of container: ``` struct container; struct iter { container &c; int *p; iter(container &c); }; struct container { int x; int &value() { return x; } iter begin() { return iter(*this); } }; iter::iter(container &c) : c(c), p(&c.value()) {} int main() { container c; c.begin(); return 0; } ``` Template version works because when you instantiate templates both classes are completely defined.
Why do C++ templates let me circumvent incomplete types (forward declarations)?
[ "", "c++", "templates", "forward-declaration", "" ]
Is the following code set up to correctly synchronize the calls on `synchronizedMap`? ``` public class MyClass { private static Map<String, List<String>> synchronizedMap = Collections.synchronizedMap(new HashMap<String, List<String>>()); public void doWork(String key) { List<String> values = null; while ((values = synchronizedMap.remove(key)) != null) { //do something with values } } public static void addToMap(String key, String value) { synchronized (synchronizedMap) { if (synchronizedMap.containsKey(key)) { synchronizedMap.get(key).add(value); } else { List<String> valuesList = new ArrayList<String>(); valuesList.add(value); synchronizedMap.put(key, valuesList); } } } } ``` From my understanding, I need the synchronized block in `addToMap()` to prevent another thread from calling `remove()` or `containsKey()` before I get through the call to `put()` but I do not need a synchronized block in `doWork()` because another thread cannot enter the synchronized block in `addToMap()` before `remove()` returns because I created the Map originally with `Collections.synchronizedMap()`. Is that correct? Is there a better way to do this?
`Collections.synchronizedMap()` guarantees that each atomic operation you want to run on the map will be synchronized. Running two (or more) operations on the map however, must be synchronized in a block. So yes - you are synchronizing correctly.
If you are using JDK 6 then you might want to check out [ConcurrentHashMap](http://java.sun.com/javase/6/docs/api/java/util/concurrent/ConcurrentHashMap.html) Note the putIfAbsent method in that class.
Java synchronized block vs. Collections.synchronizedMap
[ "", "java", "synchronization", "" ]
With the economy the way it is, my team and I have been looking into using MySQL to reduce our licensing costs. I am wondering if there are any gotchas or issues that may crop up. Is there anything we need to do special to get .NET to talk to MySQL as opposed to MSSQL? When developing for it, will LINQ to SQL have issues? Any caveats we should be aware of?
Not a direct answer, but if you are familiar with SQL Server, then consider SQL Server Express (2005 or 2008). It is also free, and you'll be familiar with it, thus not requiring research into mySQL ;) That said, check this out: <http://dev.mysql.com/tech-resources/articles/dotnet/index.html>
You'll need the [MySQL Connector for .NET](http://dev.mysql.com/downloads/connector/net/), if you don't already have it, in order to get your .NET application talking to MySQL. Then you'll have access to [MySqlConnection](http://dev.mysql.com/doc/refman/5.1/en/connector-net-examples-mysqlconnection.html), [MySqlCommand](http://dev.mysql.com/doc/refman/5.1/en/connector-net-examples-mysqlcommand.html), [MySqlDataReader](http://dev.mysql.com/doc/refman/5.1/en/connector-net-examples-mysqldatareader.html) objects etc. LINQ to SQL is for SQL Server only, but there are third-party LINQ providers for MySQL - [here's one that looks promising](http://code.google.com/p/dblinq2007/), although I haven't used it myself.
Things a .NET developer needs to know when using a MYSQL backend
[ "", "c#", ".net", "mysql", "vb.net", "" ]
My code is... ``` public static void AssertNotNull<T>(string name, T val) { if (val == null) throw new ArgumentNullException(String.Format("{0} must not be null", name)); } ``` Resharper is recommending... ``` public static void AssertNotNull<T>(string name, T val) { if (Equals(val, default(T))) throw new ArgumentNullException(String.Format("{0} must not be null", name)); } ```
Because it doesn't know if `T` is a value type or reference type, so it makes the code work with both.
I second Berado's answer, but would add that you can prevent this by adding the constraint below: ``` public static void AssertNotNull<T>(string name, T val) where T : class ```
Why is resharper making the following recommendation?
[ "", "c#", "resharper", "" ]
This one is nasty complicated to solve. I have a table containing date ranges, each date range has a priority. Highest priority means this date range is the most important. Or in SQL ``` create table #ranges (Start int, Finish int, Priority int) insert #ranges values (1 , 10, 0) insert #ranges values (2 , 5 , 1) insert #ranges values (3 , 4 , 2) insert #ranges values (1 , 5 , 0) insert #ranges values (200028, 308731, 0) Start Finish Priority ----------- ----------- ----------- 1 10 0 2 5 1 3 4 2 1 5 0 200028 308731 0 ``` I would like to run a series of SQL queries on this table that will result in the table having no overlapping ranges, it is to take the highest priority ranges over the lower ones. Split off ranges as required, and get rid of duplicate ranges. It allows for gaps. So the result should be: ``` Start Finish Priority ----------- ----------- ----------- 1 2 0 2 3 1 3 4 2 4 5 1 5 10 0 200028 308731 0 ``` Anyone care to give a shot at the SQL? I would also like it to be as efficient as possible.
This is most of the way there, possible improvement would be joining up adjacent ranges of the same priority. It's full of cool trickery. ``` select Start, cast(null as int) as Finish, cast(null as int) as Priority into #processed from #ranges union select Finish, NULL, NULL from #ranges update p set Finish = ( select min(p1.Start) from #processed p1 where p1.Start > p.Start ) from #processed p create clustered index idxStart on #processed(Start, Finish, Priority) create index idxFinish on #processed(Finish, Start, Priority) update p set Priority = ( select max(r.Priority) from #ranges r where ( (r.Start <= p.Start and r.Finish > p.Start) or (r.Start >= p.Start and r.Start < p.Finish) ) ) from #processed p delete from #processed where Priority is null select * from #processed ```
This can be done in 1 SQL (i first made the query in Oracle using lag and lead, but since MSSQL doesn't support those functions i rewrote the query using row\_number. I'm not sure if the result is MSSQL compliant, but it should be very close): ``` with x as ( select rdate rdate , row_number() over (order by rdate) rn from ( select start rdate from ranges union select finish rdate from ranges ) ) select d.begin , d.end , max(r.priority) from ( select begin.rdate begin , end.rdate end from x begin , x end where begin.rn = end.rn - 1 ) d , ranges r where r.start <= d.begin and r.finish >= d.end and d.begin <> d.end group by d.begin , d.end order by 1, 2 ``` I first made a table (x) with all dates. Then I turned this into buckets by joining x with itself and taking 2 following rows. After this I linked all the possible priorities with the result. By taking the max(priority) I get the requested result.
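Both answers boil down to the same sweep: collect every boundary point, form adjacent segments, and keep the highest priority that covers each segment. A hypothetical Python prototype of that logic (the function name is mine, not from the question) reproduces the expected result table, which makes it a handy way to sanity-check the SQL:

```python
def flatten_ranges(ranges):
    """Split overlapping (start, finish, priority) ranges into
    non-overlapping segments, keeping the highest covering priority.
    Stretches covered by no range (gaps) are simply omitted."""
    points = sorted({p for s, f, _ in ranges for p in (s, f)})
    out = []
    for a, b in zip(points, points[1:]):
        covering = [pr for s, f, pr in ranges if s <= a and f >= b]
        if covering:
            out.append((a, b, max(covering)))
    return out

# The sample data from the question.
ranges = [(1, 10, 0), (2, 5, 1), (3, 4, 2), (1, 5, 0), (200028, 308731, 0)]
result = flatten_ranges(ranges)
```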
Clearing prioritized overlapping ranges in SQL Server
[ "", "sql", "sql-server", "" ]
Before I ask my question, please take a look at this example function: ``` DateTime.TryParse("01/01/2000", out oDate) ``` Why do I need to specify the `out` keyword? Shouldn't the compiler know this from the function's definition? I'm asking this out of pure curiosity in the hope that I will learn something new about the compiler. I should also clarify that I'm asking about the C# .NET 3.5 compiler in particular.
The out keyword *could* be implied by the compiler but my understanding is that the C# team decided to make the `out` keyword explicitly required by the caller of the function to increase visibility as to the nature of the parameter.
The compiler does know, you may not. It's a way of letting you know that the parameter you are passing can change in this function you are passing it to.
Why do I have to specify the "out" keyword in C#?
[ "", "c#", ".net", "compiler-construction", "" ]
``` $Date = date("m/d/Y"); $result = mysql_query("SELECT * FROM service WHERE SType = 'Retreat' and Start_date > '$Date' "); ``` Start\_date format is m/d/y also. What's wrong with my code? All I want to do is display all the possible data greater than the current date, but it always shows all the data from the database.
Use date("Y-m-d") rather than date("m/d/Y").
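The reason `Y-m-d` works where `m/d/Y` does not is that zero-padded, year-first strings compare in the same order as the dates they represent; a quick check (illustrated in Python for brevity, but the string comparison behaves the same way when the database compares the two sides as text):

```python
# Year-first, zero-padded strings sort chronologically...
assert "2009-02-17" < "2009-12-01" < "2010-01-01"

# ...but month-first strings do not: lexically "01/01/2010" sorts
# before "12/31/2009", even though it is the later date.
assert "01/01/2010" < "12/31/2009"
```

The cleaner fix is to store `Start_date` as a proper DATE column, after which the comparison is genuinely chronological rather than textual.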
Have you looked at this: <http://www.bigroom.co.uk/blog/dates-in-php-and-mysql> ? The article suggests you consider doing something like ``` $result = mysql_query( "SELECT * FROM service WHERE SType = 'Retreat' and Start_date > FROM_UNIXTIME($Date)" ); ```
How to display all the possible data greater than the current date
[ "", "php", "" ]
Ok. So I have a list of values, and I'd like to do something like the following: ``` MyObjectValues .Select(currentItems=>new MyType() { Parameter1 = currentItems.Value1, Parameter2 = currentItems.Value2 }); ``` So here's the problem. I need the above example to work with named constructors, such as: ``` MyObjectValues .Select(currentItems=>MyType.GetNewInstance() { Parameter1 = currentItems.Value1, Parameter2 = currentItems.Value2 }); ``` Is there any way I can do that? Basically, I have a static method I need to call to get the object instance back, and I'd like to initialize it as above. EDIT: I don't have an easy way to modify the interface of MyType at present, so adding new function calls (while probably the 'best' approach) isn't very practical at the moment.
Unfortunately there's no direct support for this in C# 3.0. Object initializers are *only* supported for constructor calls. However, you might consider the Builder pattern. In my Protocol Buffers port, I support it like this: ``` MyType foo = new MyType.Builder {Parameter1 = 10, Parameter2 = 20}.Build(); ``` So your example would become: ``` MyObjectValues.Select(currentItems => new MyType.Builder { Parameter1 = currentItems.Value1, Parameter2 = currentItems.Value2 }.Build()); ``` Of course, it means writing the nested Builder type, but it can work quite well. If you don't mind MyType being *strictly speaking* mutable, you can leave an immutable API but make an instance of `Builder` immediately create a new instance of `MyType`, then set properties as it goes (as it has access to private members), and then finally return the instance of `MyType` in the `Build()` method. (It should then "forget" the instance, so that further mutation is prohibited.)
I'm sure someone will think of a clever way to do it in pre 4.0 C#, but I just read [Sam Ng's blog post on named parameters in C# 4.0](https://learn.microsoft.com/en-us/archive/blogs/samng/named-arguments-optional-arguments-and-default-values). This should solve your problem. It would look like this: ``` MyObjectValues.Select(currentItems=>MyType.GetNewInstance( Parameter1:currentItems.Value1, Parameter2:currentItems.Value2)); ``` **EDIT** Forgot to mention, what makes this useful is that you can set defaults for the parameters, so you don't have to pass them all. The blog post is a good short summary.
Object Initialization and "Named Constructor Idiom"
[ "", "c#", ".net", "object-initializers", "" ]
EDIT: Just realised that the reason for the additional results is down to another line in the query! Don't think I have enough rep to close this question. I'm editing some existing SQL code which is searching a Lotus Notes DB. I have this line: ``` @Contains(Title; "blah blah 1.5") ``` and I want to return only those records which contain exactly the given string in the title. Currently it returns all records which contain "blah blah 1" so for example I get "blah blah 1.1", "blah blah 1.2" etc. My guess is that I need to escape the '.' but I don't know how to.
I'm assuming by your example you're actually editing a Lotus Notes search formula and not a SQL query. The SQL format for the CONTAINS function doesn't use semicolons to separate parameters, and does not have an @ sign in the function name. Going with that assumption, you'll want to change the search formula to be simply Title = "blah blah 1.5", and in your view or search, you'll get all documents that have a Title with that exact phrase.
I think you simply need to test for equality, instead of using a full text search operator: ``` WHERE Title = 'blah blah 1.5' ```
Escape a period character in an SQL query
[ "", "sql", "lotus-notes", "escaping", "" ]
I'm using Eclipse to generate `.equals()` and `.hashCode()`, and there is an option labeled "Use 'instanceof' to compare types". The default is for this option to be unchecked and use `.getClass()` to compare types. Is there any reason I should prefer `.getClass()` over `instanceof`? Without using `instanceof`: ``` if (obj == null) return false; if (getClass() != obj.getClass()) return false; ``` Using `instanceof`: ``` if (obj == null) return false; if (!(obj instanceof MyClass)) return false; ``` I usually check the `instanceof` option, and then go in and remove the "`if (obj == null)`" check. (It is redundant since null objects will always fail `instanceof`.) Is there any reason that's a bad idea?
If you use `instanceof`, making your `equals` implementation `final` will preserve the symmetry contract of the method: `x.equals(y) == y.equals(x)`. If `final` seems restrictive, carefully examine your notion of object equivalence to make sure that your overriding implementations fully maintain the contract established by the `Object` class. --- What I'm trying to get at here is that if you believe `getClass()` is the only reliable way to preserve symmetry, you are probably using `equals()` the wrong way. Sure, it's easy to use `getClass()` to preserve the symmetry required of `equals()`, but only because `x.equals(y)` and `y.equals(x)` are always false. Liskov substitutability would encourage you to find a symmetry-preserving implementation that can yield `true` when it makes sense. If a subclass has a radically different notion of equality, is it really a subclass?
[Josh Bloch](http://www.artima.com/intv/bloch17.html) favors your approach: > The reason that I favor the `instanceof` approach is that when you use the `getClass` approach, you have the restriction that objects are only equal to other objects of the same class, the same run time type. If you extend a class and add a couple of innocuous methods to it, then check to see whether some object of the subclass is equal to an object of the super class, even if the objects are equal in all important aspects, you will get the surprising answer that they aren't equal. In fact, this violates a strict interpretation of the *Liskov substitution principle*, and can lead to very surprising behavior. In Java, it's particularly important because most of the collections (`HashTable`, etc.) are based on the equals method. If you put a member of the super class in a hash table as the key and then look it up using a subclass instance, you won't find it, because they are not equal. See also [this SO answer](https://stackoverflow.com/questions/27581/overriding-equals-and-hashcode-in-java/32223#32223). Effective Java [chapter 3](http://java.sun.com/developer/Books/effectivejava/Chapter3.pdf) also covers this.
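A minimal sketch of the surprise Bloch describes (class names here are hypothetical, and the subclass adds only an "innocuous" method with no new state):

```java
import java.util.Objects;

// A base class whose equals() compares runtime classes with getClass().
class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
    @Override public boolean equals(Object obj) {
        if (obj == null || getClass() != obj.getClass()) return false;
        Point other = (Point) obj;
        return x == other.x && y == other.y;
    }
    @Override public int hashCode() { return Objects.hash(x, y); }
}

// A subclass that adds only a harmless method -- no new state at all.
class LabeledPoint extends Point {
    LabeledPoint(int x, int y) { super(x, y); }
    String label() { return x + "," + y; }
}

class EqualsDemo {
    public static void main(String[] args) {
        Point p = new Point(1, 2);
        Point q = new LabeledPoint(1, 2);
        // Same coordinates, but getClass() makes them unequal both ways.
        System.out.println(p.equals(q)); // false
        System.out.println(q.equals(p)); // false
    }
}
```

With an `instanceof`-based `equals`, both calls would return true, which is what the Liskov-substitution argument in the quote is about.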
Any reason to prefer getClass() over instanceof when generating .equals()?
[ "java", "eclipse", "equals", "instanceof" ]
How do you specify the **Func** signature for anonymous objects? ``` new Func<DataSet, IEnumerable<int>> ``` I am having trouble with the return type, which I have specified as **IEnumerable<>** in the Func declaration. The error I am getting from the expression is > Cannot convert expression type 'System.Collections.Generic.IEnumerable<{ParentNodeId:int}>' to return type 'System.Collections.Generic.IEnumerable' How can I specify **IEnumerable<{ParentNodeId:int}>** in the Func? ``` public int GetCachedRootNodeId(IList<int> fromNodeIds, int forNodeId) { var result = forNodeId; const string spName = "spFetchAllParentNodeIDs"; using (var ds = _df.ExecuteDatasetParamArray(_ConnectionString, spName, forNodeId)) { if (DataAccessUtil.DataSetIsEmpty(ds)) return result; var orderByLevelDesc = new Func<DataSet, IEnumerable<int>>(resultSet => from DataRow row in DataAccessUtil.GetFirstTableRows(resultSet) orderby DataAccessUtil.GetInt32(row, "Level") descending select new { ParentNodeId = DataAccessUtil.GetInt32(row, "ParentNodeID") }); //// Get top-most parent's node ID first (higher the level, the more top-most the parent is) //var query = from DataRow row in DataAccessUtil.GetFirstTableRows(ds) // orderby DataAccessUtil.GetInt32(row, "Level") descending // select new { ParentNodeId = DataAccessUtil.GetInt32(row, "ParentNodeID") }; //foreach (var nodeInfo in query) foreach (var nodeInfo in orderByLevelDesc(ds)) { if (fromNodeIds.Contains(nodeInfo.ParentNodeId)) return nodeInfo.ParentNodeId; } } return result; } ``` By the way, I could have used the commented code "query" and been done with it. 
But wanted to be more expressive and try something new after looking at this answer [Why doesn't C# have lexically nested functions?](https://stackoverflow.com/questions/576374/why-doesnt-c-have-lexically-nested-functions/576386#576386) **EDIT**: Final Result ``` public int GetCachedRootNodeId(IList<int> fromNodeIds, int forNodeId) { var result = forNodeId; const string spName = "spFetchAllParentNodeIDs"; using (var ds = _df.ExecuteDatasetParamArray(_ConnectionString, spName, forNodeId)) { if (DataAccessUtil.DataSetIsEmpty(ds)) return result; var orderParentNodeIDByLevelDesc = new Func<DataSet, IEnumerable<int>>(resultSet => from DataRow row in DataAccessUtil.GetFirstTableRows(resultSet) orderby DataAccessUtil.GetInt32(row, "Level") descending select DataAccessUtil.GetInt32(row, "ParentNodeID")); foreach (var parentNodeId in orderParentNodeIDByLevelDesc(ds)) { if (fromNodeIds.Contains(parentNodeId)) return parentNodeId; } } return result; } ```
In this case all you really want is an int, so just change your select to: ``` select DataAccessUtil.GetInt32(row, "ParentNodeID")); ``` And change your foreach to: ``` foreach (var nodeInfo in orderByLevelDesc(ds)) { if (fromNodeIds.Contains(nodeInfo)) return nodeInfo; } ```
Here's an idea - call a static method and let type inference do it: ``` public static Func<T, TResult> FuncOf<T, TResult>(Func<T, TResult> func) { return func; } ``` then just call it: ``` var orderByLevelDesc = FuncOf(resultSet => from DataRow row in DataAccessUtil.GetFirstTableRows(resultSet) orderby DataAccessUtil.GetInt32(row, "Level") descending select new { ParentNodeId = DataAccessUtil.GetInt32(row, "ParentNodeID") }); ``` Basically you only need the "Func" part to tell the compiler that it needs to convert the lambda expression into a delegate instead of an expression, and the type of the delegate. It should be able to work out the type given the signature of FuncOf. Worth a try, anyway. I should say, by the way, that I find your commented out version easier to understand. Why introduce an extra function? Or was your plan to *not* have that as a local variable (which would make more sense, but then you couldn't use `var`)? Or perhaps a local variable declared *outside* the loop instead of inside?
How to specify .NET Anonymous object return type in Linq using Func<T,Q>?
[ "c#", "linq", "anonymous-types" ]
I've gotten pretty used to step-through debuggers over the years, both in builder, and using the pydev debugger in Eclipse. Currently, I'm making something in Python and running it on Google App Engine, and I should add that I'm pretty new to developing any real web app; I've never really done much beyond editing HTML code. So, I'm running Google's dev\_appserver and viewing my work at <http://localhost>, dig, and right now, the only tool I'm using to identify issues is PMD (poor man's debugger). . .basically writing things to the html pages to see the value of local variables. Is there a better technique for dealing with this?
The dev\_appserver is just a python script, you can simply use the pydev debugger on that script with the proper arguments as far as I know. Here is a very detailed guide on how to do that: <http://www.ibm.com/developerworks/opensource/library/os-eclipse-mashup-google-pt1/index.html>
I would suggest using logging statements instead of prints, though, as you can control them better. Python includes quite a good logging library. For logging from Google App Engine to e.g. Firebug there is also a handy tool called [FirePython](http://appengine-cookbook.appspot.com/recipe/firepython-logger-console-inside-firebug/). This allows you to log to the Firebug console from within your Django or WSGI app (it's middleware).
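A minimal sketch of swapping prints for the standard `logging` module (shown in modern Python; the in-memory stream handler here is just so the output is easy to inspect — in the dev_appserver you would normally just call `logging.basicConfig(level=logging.DEBUG)`):

```python
import io
import logging

# Route log records to an in-memory stream so we can look at them below.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))

log = logging.getLogger("myapp.handlers")  # hypothetical logger name
log.setLevel(logging.DEBUG)
log.addHandler(handler)

def compute_discount(price, rate):
    # Unlike a print, this can be filtered by level or silenced per-logger.
    log.debug("computing discount: price=%r rate=%r", price, rate)
    return price * (1 - rate)

compute_discount(100.0, 0.25)
print(stream.getvalue().strip())
# -> DEBUG myapp.handlers: computing discount: price=100.0 rate=0.25
```

The payoff over "PMD" prints is that the same statements stay in the code and get switched off by raising the level in production.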
Debugging web apps
[ "python", "eclipse", "debugging", "google-app-engine" ]
I'm currently working on a sprite engine in C++. I have an abstract class IEngine with a virtual function init\_api. This takes in a void\*. ``` // Initialise the engines' API // api_params - void* to api parameters for initalisation // hWnd - window handle virtual bool init_api( void* api_params, HWND hWnd ) = 0; ``` I then have a DirectX implemented engine class CEngineDX. Which then casts api\_params to a D3DPRESENT\_PARAMETERS\*, so it can be used for initialising DirectX. ``` // Cast api_params to a D3DPRESENT_PARAMETERS D3DPRESENT_PARAMETERS* presentParams = NULL; presentParams = reinterpret_cast< D3DPRESENT_PARAMETERS* >( api_params ); ``` I'm quite happy with this setup but wanted to get some other programmers view on this "solution" if you like. Cheers for the replies! Carl
Another way to do it is just have a common header and different \*.cpp files for each implementation. That way you can include just the D3D or just the OGL files in your project. IMO its better to choose the API at compile time so your not linking against both libraries. As for the void\*, I don't really like it. I think you'd be better off defining your own types and then mapping them to the API types with wrapper structs / classes and typedefs. You can forward declare these, and put the actual implementation in your \*.cpp files. One other benefit of this method is that your not paying for virtual functions you don't need, although I realize the cost of a virtual call is pretty small.
This is a relatively common problem with the variation of argument types in inheritance hierarchies; your subclass wants to specialize the type of 'api_params' from the parent class. I think this is OK, but it is C-like. I think a better solution would be to make `init_api` non-virtual and implement it with the correct type in the subclass. Anyway, the `D3DPRESENT_PARAMETERS` struct most likely only makes sense with the DirectX engine, so why not have it in the subclass where it logically belongs?
void* as unknown variable type
[ "c++", "void" ]
I am trying to wrap my mind around something in java. When I pass an object to another class' method, can I not just call any methods inherent to that object class? What is the reason code such as the example below does not compile? Thank you, ``` class a { public static void myMethod(Object myObj) { myObj.testing(); } } class b { public void testing() { System.out.println ("TESTING!!!"); } } class c { public static void main (String[] args) { b myB = new b(); a.myMethod(myB); } } ``` **Edit: The reason I have left the parameter in myMethod as type Object, is because I would like to be able to pass in a variety of object types, each having a testing() method.**
If you would like to pass in a variety of objects with `testing()` methods, have each object implement a `Testable` interface: ``` public interface Testable { public void testing() } ``` Then have `myMethod()` take a `Testable`. ``` public static void myMethod(Testable testable) { testable.testing(); } ``` --- Edit: To clarify, implementing an interface means that the class is guaranteed to have the method, but the method can do whatever it wants. So I could have two classes whose `testing()` methods do different things. ``` public class AClass implements Testable { public void testing() { System.out.println("Hello world"); } } public class BClass implements Testable { public void testing() { System.out.println("Hello underworld"); } } ```
The problem is that `myMethod` can't know it's getting a `b` object until it actually runs. You could pass a `String` in, for all it knows. Change it to ``` public static void myMethod(b myObj) { myObj.testing(); } ``` and it should work. --- Update of the question: > **Edit: The reason I have left the parameter in myMethod as type Object, is because I would like to be able to pass in a variety of object types, each having a testing() method.** As Amanda S and several others have said, this is a perfect case for an interface. The way to do this is to create an interface which defines the `testing()` method and change `myMethod` to take objects implementing that interface. An alternative solution (without interfaces) would be to reflectively discover if the object has a `testing()` method and call it, but this is not recommended and not needed for a such a simple case.
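Putting the interface suggestion into one runnable sketch (class names are capitalized per convention, and `testing()` returns its message instead of printing it so the result is easy to check — both are departures from the question's code):

```java
interface Testable {
    String testing();
}

class B implements Testable {
    public String testing() {
        return "TESTING from B!!!";
    }
}

class D implements Testable {
    public String testing() {
        return "TESTING from D!!!";
    }
}

class A {
    // Accepts anything Testable, so a variety of classes can be passed in,
    // and the compiler now knows testing() exists on the parameter.
    public static String myMethod(Testable t) {
        return t.testing();
    }
}

class Demo {
    public static void main(String[] args) {
        System.out.println(A.myMethod(new B())); // TESTING from B!!!
        System.out.println(A.myMethod(new D())); // TESTING from D!!!
    }
}
```

This is exactly why the original did not compile: declared as `Object`, the parameter only promises `Object`'s methods, no matter what is passed at runtime.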
Why can't java find my method?
[ "java", "class", "object", "methods" ]
What is the best way to extend a dictionary with another one while avoiding the use of a `for` loop? For instance: ``` >>> a = { "a" : 1, "b" : 2 } >>> b = { "c" : 3, "d" : 4 } >>> a {'a': 1, 'b': 2} >>> b {'c': 3, 'd': 4} ``` Result: ``` { "a" : 1, "b" : 2, "c" : 3, "d" : 4 } ``` Something like: ``` a.extend(b) # This does not work ```
``` a.update(b) ``` [Latest Python Standard Library Documentation](http://docs.python.org/library/stdtypes.html#dict.update)
A beautiful gem in [this closed question](https://stackoverflow.com/questions/1551666/how-can-2-python-dictionaries-become-1/1551878#1551878): The "oneliner way", altering neither of the input dicts, is ``` basket = dict(basket_one, **basket_two) ``` Learn what [`**basket_two` (the `**`) means here](http://www.saltycrane.com/blog/2008/01/how-to-use-args-and-kwargs-in-python/). In case of conflict, the items from `basket_two` will override the ones from `basket_one`. As one-liners go, this is pretty readable and transparent, and I have no compunction against using it any time a dict that's a mix of two others comes in handy (any reader who has trouble understanding it will in fact be very well served by the way this prompts him or her towards learning about `dict` and the `**` form;-). So, for example, uses like: ``` x = mungesomedict(dict(adict, **anotherdict)) ``` are reasonably frequent occurrences in my code. Originally submitted by [Alex Martelli](https://stackoverflow.com/users/95810/alex-martelli) ***Note:* In Python 3, this will only work if every key in basket\_two is a `string`.**
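Both approaches side by side, with a deliberate key conflict to show how it resolves:

```python
a = {"a": 1, "b": 2}
b = {"b": 99, "c": 3}

# In-place: update() mutates the target; values from b win on conflict.
merged_in_place = dict(a)        # copy first so the original survives
merged_in_place.update(b)

# One-liner: builds a brand-new dict, leaving both inputs untouched.
merged_copy = dict(a, **b)

print(merged_in_place)  # {'a': 1, 'b': 99, 'c': 3}
print(merged_copy)      # {'a': 1, 'b': 99, 'c': 3}
print(a)                # {'a': 1, 'b': 2} -- unchanged
```

Use `a.update(b)` when mutating `a` is what you want; use the constructor form when you need a fresh merged dict.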
Python "extend" for a dictionary
[ "python", "dictionary" ]
I'm having some brain failure in understanding reading and writing text to a file (Python 2.4). ``` # The string, which has an a-acute in it. ss = u'Capit\xe1n' ss8 = ss.encode('utf8') repr(ss), repr(ss8) ``` > ("u'Capit\xe1n'", "'Capit\xc3\xa1n'") ``` print ss, ss8 print >> open('f1','w'), ss8 >>> file('f1').read() 'Capit\xc3\xa1n\n' ``` So I type in `Capit\xc3\xa1n` into my favorite editor, in file f2. Then: ``` >>> open('f1').read() 'Capit\xc3\xa1n\n' >>> open('f2').read() 'Capit\\xc3\\xa1n\n' >>> open('f1').read().decode('utf8') u'Capit\xe1n\n' >>> open('f2').read().decode('utf8') u'Capit\\xc3\\xa1n\n' ``` What am I not understanding here? Clearly there is some vital bit of magic (or good sense) that I'm missing. What does one type into text files to get proper conversions? What I'm truly failing to grok here, is what the point of the UTF-8 representation is, if you can't actually get Python to recognize it, when it comes from outside. Maybe I should just JSON dump the string, and use that instead, since that has an asciiable representation! More to the point, is there an ASCII representation of this Unicode object that Python will recognize and decode, when coming in from a file? If so, how do I get it? ``` >>> print simplejson.dumps(ss) '"Capit\u00e1n"' >>> print >> file('f3','w'), simplejson.dumps(ss) >>> simplejson.load(open('f3')) u'Capit\xe1n' ```
In the notation `u'Capit\xe1n\n'` (should be just `'Capit\xe1n\n'` in 3.x, and *must* be in 3.0 and 3.1), the `\xe1` represents just one character. `\x` is an escape sequence, indicating that `e1` is in hexadecimal. Writing `Capit\xc3\xa1n` into the file in a text editor means that it actually contains `\xc3\xa1`. Those are 8 bytes and the code reads them all. We can see this by displaying the result: ``` # Python 3.x - reading the file as bytes rather than text, # to ensure we see the raw data >>> open('f2', 'rb').read() b'Capit\\xc3\\xa1n\n' # Python 2.x >>> open('f2').read() 'Capit\\xc3\\xa1n\n' ``` Instead, just input characters like `á` in the editor, which should then handle the conversion to UTF-8 and save it. In 2.x, a string that actually contains these backslash-escape sequences can be decoded using the `string_escape` codec: ``` # Python 2.x >>> print 'Capit\\xc3\\xa1n\n'.decode('string_escape') Capitán ``` The result is a `str` that is encoded in UTF-8 where the accented character is represented by the two bytes that were written `\\xc3\\xa1` in the original string. To get a `unicode` result, decode again with UTF-8. In 3.x, the `string_escape` codec is replaced with `unicode_escape`, and it is strictly enforced that we can only `encode` from a `str` to `bytes`, and `decode` from `bytes` to `str`. `unicode_escape` needs to start with a `bytes` in order to process the escape sequences (the other way around, it *adds* them); and then it will treat the resulting `\xc3` and `\xa1` as *character* escapes rather than *byte* escapes. As a result, we have to do a bit more work: ``` # Python 3.x >>> 'Capit\\xc3\\xa1n\n'.encode('ascii').decode('unicode_escape').encode('latin-1').decode('utf-8') 'Capitán\n' ```
Rather than mess with `.encode` and `.decode`, specify the encoding when opening the file. The [`io` module](https://docs.python.org/3/library/io.html#io.open), added in Python 2.6, provides an `io.open` function, which allows specifying the file's `encoding`. Supposing the file is encoded in UTF-8, we can use: ``` >>> import io >>> f = io.open("test", mode="r", encoding="utf-8") ``` Then `f.read` returns a decoded Unicode object: ``` >>> f.read() u'Capit\xe1l\n\n' ``` In 3.x, the `io.open` function is an alias for the built-in `open` function, which supports the `encoding` argument (it does not in 2.x). We can also use [`open` from the `codecs` standard library module](http://docs.python.org/library/codecs.html#codecs.open): ``` >>> import codecs >>> f = codecs.open("test", "r", "utf-8") >>> f.read() u'Capit\xe1l\n\n' ``` Note, however, that this [can cause problems when mixing `read()` and `readline()`](https://stackoverflow.com/a/46438434/101923).
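A round-trip sketch using the encoding-aware open (shown in Python 3 syntax, writing to a throwaway temp file):

```python
import io
import os
import tempfile

text = u"Capit\xe1n\n"   # 'Capitán' -- the accent is a single code point

path = os.path.join(tempfile.mkdtemp(), "f1")

# Writing: the file object encodes to UTF-8 bytes on the way out.
with io.open(path, mode="w", encoding="utf-8") as f:
    f.write(text)

# Reading: decoding happens transparently, no manual .decode() needed.
with io.open(path, mode="r", encoding="utf-8") as f:
    round_tripped = f.read()

assert round_tripped == text

# On disk, the accented character really is the two-byte UTF-8 sequence.
with open(path, "rb") as f:
    assert f.read() == b"Capit\xc3\xa1n\n"
```

This is the key point of the answer: pick the encoding once at `open` time, and keep `str`/`unicode` objects everywhere else in the program.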
Unicode (UTF-8) reading and writing to files in Python
[ "python", "unicode", "utf-8", "io" ]
Can somebody clarify the C# `is` keyword please? In particular these 2 questions: Q1) line 5; Why does this return true? Q2) line 7; Why no cast exception? ``` public void Test() { object intArray = new int[] { -100, -200 }; if (intArray is uint[]) //why does this return true? { uint[] uintArray = (uint[])intArray; //why no class cast exception? for (int x = 0; x < uintArray.Length; x++) { Console.Out.WriteLine(uintArray[x]); } } } ``` MSDN's description does not clarify the situation. It states that `is` will return true if either of these conditions is met ([MSDN article](http://msdn.microsoft.com/en-us/library/scekt9xw%28VS.71%29.aspx)): ``` expression is not null. expression can be cast to type. ``` I don't believe that you can do a valid cast of int[] into uint[]. Because: A) This code does not compile: ``` int[] signed = new int[] { -100 }; uint[] unsigned = (uint[])signed; ``` B) Doing the cast in the debugger gives an error: ``` (uint[])signed "Cannot convert type 'int[]' to 'uint[]'" ``` Sure enough, if line 3 was int[] instead of object then it would never compile. Which brings me to a final question related to Q2. Q3) Why does C# raise a cast/conversion error in the debugger and compiler but not at runtime?
C# and the CLR have somewhat different conversion rules. You can't *directly* cast between `int[]` and `uint[]` in C# because the *language* doesn't believe any conversion is available. However, if you go via `object` the result is up to the CLI. From the CLI spec section 8.7 (I hope - I'm quoting an [email exchange I had on this topic with Eric Lippert](http://groups.google.com/group/microsoft.public.dotnet.languages.csharp/browse_thread/thread/2d21bf036a23918e#5a5c351206ebd999) a while ago): > Signed and unsigned integral primitive > types can be assigned to each other; > e.g., int8 := uint8 is valid. For this > purpose, bool shall be considered > compatible with `uint8` and vice versa, > which makes `bool := uint8` valid, and > vice versa. This is also true for > arrays of signed and unsigned integral > primitive types of the same size; > e.g., `int32[] := uint32[]` is valid. (I haven't checked, but I assume that this sort of reference type conversion being valid is what makes `is` return true as well.) It's somewhat unfortunate that there are disconnects between the language and the underlying execution engine, but it's pretty much unavoidable in the long run, I suspect. There are a few other cases like this, but the good news is that they rarely seem to cause significant harm. EDIT: As Marc deleted his answer, I've linked to the full mail from Eric, as posted to the C# newsgroup.
Now that's interesting. I found this in the ECMA-335 standard. 4.3 castclass. Note that: * Arrays inherit from System.Array. * If Foo can be cast to Bar, then Foo[] can be cast to Bar[]. * For the purposes of note 2 above, enums are treated as their underlying type: thus E1[] can be cast to E2[] if E1 and E2 share an underlying type. You can cast int to uint, but that it behaves like this is very strange. Visual Studio does not recognize any of this, even the watch, when the debugger is attached just shows a question mark '?'. **You might wanna take a look at [this](http://channel9.msdn.com/shows/Going+Deep/Expert-to-Expert-Anders-Hejlsberg-The-Future-of-C/), fast forward about 10 minutes in and listen to Anders explain the co-variant array implementation. I think that is the fundamentally underlying issue here.**
Why does "int[] is uint[] == true" in C#
[ "c#", "arrays", "casting", "unsigned" ]
I heard a few people expressing worries about "+" operator in std::string and various workarounds to speed up concatenation. Are any of these really necessary? If so, what is the best way to concatenate strings in C++?
**The extra work is probably not worth it, unless you really really need efficiency.** You probably will have much better efficiency simply by using operator += instead. Now after that disclaimer, I will answer your actual question... The efficiency of the STL string class depends on the implementation of STL you are using. You could **guarantee efficiency** and **have greater control** yourself by doing concatenation manually via c built-in functions. **Why operator+ is not efficient:** Take a look at this interface: ``` template <class charT, class traits, class Alloc> basic_string<charT, traits, Alloc> operator+(const basic_string<charT, traits, Alloc>& s1, const basic_string<charT, traits, Alloc>& s2) ``` You can see that a new object is returned after each +. That means that a new buffer is used each time. If you are doing a ton of extra + operations it is not efficient. **Why you can make it more efficient:** * You are guaranteeing efficiency instead of trusting a delegate to do it efficiently for you * the std::string class knows nothing about the max size of your string, nor how often you will be concatenating to it. You may have this knowledge and can do things based on having this information. This will lead to less re-allocations. * You will be controlling the buffers manually so you can be sure that you won't copy the whole string into new buffers when you don't want that to happen. * You can use the stack for your buffers instead of the heap which is much more efficient. * string + operator will create a new string object and return it hence using a new buffer. **Considerations for implementation:** * Keep track of the string length. * Keep a pointer to the end of the string and the start, or just the start and use the start + the length as an offset to find the end of the string. 
* Make sure the buffer you are storing your string in, is big enough so you don't need to re-allocate data * Use strcpy instead of strcat so you don't need to iterate over the length of the string to find the end of the string. **Rope data structure:** If you need really fast concatenations consider using a [rope data structure](https://brianbondy.com/blog/90/introducing-the-rope-data-structure).
Reserve your final space beforehand, then use the append method with a buffer. For example, say you expect your final string length to be 1 million characters: ``` std::string s; s.reserve(1000000); while (whatever) { s.append(buf,len); } ```
Efficient string concatenation in C++
[ "c++", "performance", "string", "concatenation" ]
I'm currently developing on a project written in Java. We have a bunch of algorithms written in C/C++ (at least a couple hundred) that need to be incorporated in our project. Our two options are to use JNI to call this code, or to rewrite all the algorithms in Java. I am aware of the consequences of using JNI, it can open up a whole new set of problems, which is why rewriting all the code in Java is even being considered. But the thought of rewriting it seems...wrong. The algorithms have been tested and work as far as I know, they're just in the wrong language. In this situation, would JNI make this task easy? Or would it cause more headache than rewriting the code in Java would? FYI - Our Java project is not meant to be portable in any way. That might eliminate one of the downsides of JNI in that it supposedly reduces portability.
The simple answer is, if the code is going to be called a lot AND performance matters, then convert it over to Java. The more complex answers are: * if the library is easily wrapped in JNI then go with JNI * if the tests you have for the C/C++ code are easily converted over to Java then go for the port I would do the following: * take one of the algorithms and wrap it in JNI * take the same algorithm and convert it to Java * see which is more of a pain to do * if speed matters then profile both versions and see which of them is acceptable.
I think the answer lies in the amount of coupling there would be between the calling java code and the invoked C/C++ code and in the level of effort the rewrite would entail. If your C code takes a few integers, does some hairy calculation, and returns another int. Use JNI. If there's a lot of complex back and forth, but the algorithms are reasonably simple, rewrite 'em. The fault line is the JNI connection. If that's going to be complex, you may end up writing more JNI interface code than you would algorithm code for a rewrite.
Should we rewrite our C code in Java or use JNI instead?
[ "java", "java-native-interface" ]
Can you call C++ functions from Ada? I'm wondering if there is a way to do this directly, without doing the implementation in C and writing a C++ wrapper and an Ada wrapper, i.e. I would like to go C++ -> Ada rather than C++ -> C -> Ada.
The problem with Ada to C++ is that C++ does NOT have a defined ABI. Each compiler is allowed to define the most efficient ABI it can. Thus interfacing from other languages (Ada) is a pain, as you would need your Ada compiler to know which compiler the C++ was compiled with before it could generate the correct code to call any C++ method/function. On the other hand, the C ABI is a well-defined standard across all compilers and as such provides a nice, convenient interface for any language to connect with.
The only really compiler-agnostic answer I can give you is that it is just as possible as calling C++ from C on your system. Much like with C, you have to figure out your C++ routine's name-mangled symbol and write a binding on the C (in this case the Ada) side that links to that mangled name. You will also probably have to do some things on the C++ side, like declaring the C++ function extern. If you can declare your C++ function extern "C", it's easy. Just do that on the C++ side, and use Ada's standard C import features on the Ada side. Example: in your cpp: ``` extern "C" int cpp_func (int p1, int p2) { ; // Whatever.. } ``` in your .adb: ``` function cpp_func (p1, p2 : Interfaces.C.Int) return Interfaces.C.Int; pragma Import (C, cpp_func); ... Result : constant Interfaces.C.Int := cpp_func (1, 2); ```
Can you call C++ functions from Ada?
[ "c++", "ada" ]
What's the most robust way of creating a global keyboard shortcut handler for a Web application using JavaScript i.e. which event(s) should I handle and what should the event handler(s) be attached to? I want something like the system in Gmail which can handle both single keypress shortcuts and also shortcuts with modifier keys e.g. Ctrl + B etc. The code has to work in IE 6 as well as modern browsers. I have the Prototype framework available to use but not jQuery, so please, no jQuery-specific answers!
The HotKey library available in the LivePipe controls package works with Prototype and is IE compatible. <http://livepipe.net/extra/hotkey>
Just thought I'd throw another into the mix. I recently released a library called Mousetrap. Check it out at <http://craig.is/killing/mice>
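Framework-free, the dispatch logic such libraries implement boils down to something like the sketch below. It uses the modern `event.key` property rather than the `keyCode` juggling IE 6 needs, and the event objects are plain stand-ins so the matching function runs outside a browser too:

```javascript
// Table of shortcuts: modifier flags plus the key, mapped to an action name.
var shortcuts = [
  { key: "b", ctrl: true,  action: "bold" },
  { key: "c", ctrl: false, action: "compose" }
];

// Pure matching function: given a keydown-like event, find the action.
function matchShortcut(event) {
  for (var i = 0; i < shortcuts.length; i++) {
    var s = shortcuts[i];
    if (s.key === event.key && s.ctrl === !!event.ctrlKey) {
      return s.action;
    }
  }
  return null;
}

// In a browser you would wire it up roughly like:
//   document.addEventListener("keydown", function (e) {
//     var action = matchShortcut(e);
//     if (action) { e.preventDefault(); /* dispatch(action) */ }
//   });

console.log(matchShortcut({ key: "b", ctrlKey: true }));  // "bold"
console.log(matchShortcut({ key: "c", ctrlKey: false })); // "compose"
console.log(matchShortcut({ key: "x", ctrlKey: true }));  // null
```

Keeping the matcher pure like this is also what makes Gmail-style single-key vs. Ctrl-modified shortcuts easy to express: each is just another row in the table.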
Robust keyboard shortcut handling using JavaScript
[ "javascript", "prototypejs" ]
This piece of code doesn't work; it's logging in to a website which uses the https protocol. How do I solve this problem? The code stops at `GetRequestStream()` every time, saying that a protocol violation exception is unhandled. ``` string username = "user"; string password = "pass"; HttpWebRequest request = (HttpWebRequest)WebRequest.Create("https://moje.azet.sk/prihlasenie.phtml?KDE=www.azet.sk%2Findex.phtml%3F"); request.UserAgent = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.0.3705)"; Console.WriteLine(request.GetRequestStream()); using (StreamWriter writer = new StreamWriter(request.GetRequestStream(), Encoding.ASCII)) { writer.Write("nick=" + username + "&password=" + password); } HttpWebResponse response = (HttpWebResponse)request.GetResponse(); //Retrieve your cookie that id's your session //response.Cookies using (StreamReader reader = new StreamReader(response.GetResponseStream())) { Console.WriteLine(reader.ReadToEnd()); } ```
Set the request method to POST before calling GetRequestStream, like this: ``` request.Method = "POST"; using (StreamWriter writer = new StreamWriter(request.GetRequestStream(), Encoding.ASCII)) { writer.Write("nick=" + username + "&password=" + password); } ```
My guess is that the issue you are experiencing is due to the fact (like others have advised) that you are doing a GET request instead of a POST request. Additionally, I noticed that the actual name for the password field on that page is "heslo" and not "password". This typo won't cause the web server to not return a response, but it will cause other issues since the server is looking for that specific variable name to be posted with the password value.
HttpWebRequest with https in C#
[ "c#", ".net", "https", "httpwebrequest" ]
``` for (int i = 0 ; i < stlVector.size() ; i++) { if (i == 10) { stlVector.erase(stlVector.begin() + 5); } } ``` Does the termination condition part "stlVector.size()" take "stlVector.erase(...)" into consideration? In other words, does stlVector.size() get re-evaluated for every loop iteration? I can't test it right now, so I posted a question here. Thx in advance! Best regards, zhengtonic
Just to be clear, don't think of it in terms of the loop refreshing anything. Every time the condition is checked (at the start of each time through the loop), the size() method is called on the stlVector variable, and the current size of the vector is returned. The erase() method reduces the size of the vector, so the next time size() is called, the returned value will be smaller.
Yes it does! ``` stlVector.size () // is called for every iteration ``` Thus for every loop iteration you'll have the test "i < stlVector.size ()" re-evaluated!
Does the termination condition of a 'for loop' refresh in VC++ 6?
[ "c++", "visual-c++", "for-loop", "visual-c++-6", "termination" ]
I'm writing a filter function to return the specific type specified out of a larger collection of supertypes (objects for example). The idea is I give you an enumerable and you return me all the strings, for example. You can write it this way without generics: ``` public static IEnumerable Filter(IEnumerable source, Type type) { List<object> results = new List<object>(); foreach(object o in source) { if(o != null && o.GetType() == type) { results.Add(o); } } return results; } ``` If we want to return generics there are a few different ways to go about it. As a straight port: ``` public static IEnumerable<TResult> Filter<TResult> (IEnumerable source, Type type) ``` Pass in an 'example': ``` IEnumerable<TResult> Filter<TResult> (IEnumerable source, TResult resultType) ``` Ultimately what I think is cleanest: ``` public static IEnumerable<T> Filter<T>(IEnumerable source) ``` The second type would be called entirely with parameters (and infer the type): ``` Filter(myList, "exampleString"); ``` whereas the final version would get called with a type specifier: ``` Filter<string>(myList); ``` What is the appropriate way to strongly type the return of a generic function, where the return type isn't automatically implied in the signature? (why?) (Edit Note: Our input is NOT typed, e.g. IEnumerable<T>. At best it would be IEnumerable. This function is returning the Ts out of the whole collection of other types.)
The following extension method included in Linq does exactly what you need: ``` IEnumerable<T> OfType<T>(this IEnumerable enumerable); ``` Here is a usage example: ``` List<object> objects = //... foreach(string str in objects.OfType<string>()) { //... } ``` As you can see, they used the generic parameter as the return type specifier. This is simpler and safer than using a Type or a string and returning a non-type-safe enumeration.
I generally prefer the final version - it specifies all the relevant information *and nothing else*. Given the version with parameters, if you were new to the code wouldn't you expect the *value* of the parameter to be meaningful, rather than just the type? Very occasionally this "dummy parameter" pattern is useful, but I'd generally steer clear of it - or at the very least provide an overload which didn't require it.
What is the appropriate way to strongly type the return of a generic function?
[ "", "c#", "generics", "" ]
I have been programming in C++ for many years, but I still have a doubt about one thing. In many places in other people's code I see something like: ``` void Classx::memberfunction() { this->doSomething(); } ``` If I need to import/use that code, I simply remove the **this->** part, and I have never seen anything broken or having side-effects. ``` void Classx::memberfunction() { doSomething(); } ``` So, do you know of any reason to use such a construct? EDIT: Please note that I'm talking about member functions here, not variables. I understand it can be used when you want to make a distinction between a member variable and a function parameter. EDIT: apparent duplicate: [Are there any reasons not to use "this" ("Self", "Me", ...)?](https://stackoverflow.com/questions/333291/are-there-any-reasons-not-to-use-this-self-me)
To guarantee you trigger compiler errors if there is a macro that might be defined with the same name as your member function and you're not certain if it has been reliably undefined. No kidding, I'm pretty sure I've had to do exactly this for that reason!
The only place where it really makes a difference is in templates in derived classes: ``` template<typename T> class A { protected: T x; }; template<typename T> class B : A<T> { public: T get() { return this->x; } }; ``` Due to [details in the name lookup in C++ compilers](http://www.parashift.com/c++-faq-lite/templates.html#faq-35.19), it has to be made explicitly clear that `x` is a (inherited) member of the class, most easily done with `this->x`. But this is a rather esoteric case, if you don't have templated class hierarchies you don't really need to explicitly use `this` to access members of a class.
Is there any reason to use this->
[ "", "c++", "" ]
I need to design a predicate for STL algorithms such as find\_if and count\_if. ``` namespace lib { struct Finder { Finder( const std::string& name ): name_( name ) { } template< typename TElement > bool operator()( const TElement& element ) { return element.isPresent( name_ ); } /* template< typename TElement > bool operator()( const TElement& element ) { const Data& data = element.getData(); return data.isPresent( name_ ); }*/ std::string name_; }; } ``` But I need it to have different operator() overloads according to the presence of certain methods in TElement. For example, if it has "getData" I'd like to check that data, and if it hasn't I'd do some other action. I am aware of SFINAE, but I don't have boost:: on the project. So either there is some easy implementation of a "has\_method" template, or you know some other design solution. I can't name the specific types and simply overload, because I'd like to put this predicate into one of the project libraries, which doesn't know about those specific classes with the "getData" method. A solution with class traits is fine as long as the namespaces are handled: the Finder predicate is in the "lib" namespace and the class with "getData" is in the "program" namespace. Thanks.
Why use template methods at all? Just use the specific class that you want to base it on, or a common base class if there are lots of class types. e.g. ``` struct Finder { Finder( const std::string& name ): name_( name ) { } bool operator()( const IsPresentBaseClass& element ) { return element.isPresent( name_ ); } bool operator()( const GetDataBaseClass& element ) { const Data& data = element.getData(); return data.isPresent( name_ ); } std::string name_; }; ``` If this pattern happens a lot with different class types and you know the types before using the predicate, you could template the predicate itself. e.g. ``` template<class T1, class T2> struct Finder { Finder( const std::string& name ): name_( name ) { } bool operator()( const T1& element ) { return element.isPresent( name_ ); } bool operator()( const T2& element ) { const Data& data = element.getData(); return data.isPresent( name_ ); } std::string name_; }; ``` Or another approach you could use is some sort of class traits to hold the information. e.g. ``` struct UseIsPresent { template<class T> static bool CompareElement( const T& element, const std::string& name ) { return element.isPresent( name ); } }; struct UseGetData { template<class T> static bool CompareElement( const T& element, const std::string& name ) { const Data& data = element.getData(); return data.isPresent( name ); } }; // default to using the isPresent method template <class T> struct FinderTraits { typedef UseIsPresent FinderMethodType; }; // then specialize FinderTraits for each class (or common base class) // that should use the getData method instead, e.g. template <> struct FinderTraits<SomeGetDataClass> { typedef UseGetData FinderMethodType; }; struct Finder { Finder( const std::string& name ) : name_( name ) { } template<class T> bool operator()( const T& element ) { return FinderTraits<T>::FinderMethodType::CompareElement( element, name_ ); } std::string name_; }; ``` The downside of all these methods is that at some point you need to know the types to be able to split them up into which method to use.
You can have a look at [Veldhuizen's homepage](http://ubiety.uwaterloo.ca/~tveldhui/papers/Template-Metaprograms/meta-art.html) for the `switch` template. You can probably use this to choose the exact operator?
C++ "smart" predicate for stl algorithm
[ "", "c++", "templates", "stl", "sfinae", "" ]
While looking at the [jslint code conventions](http://javascript.crockford.com/code.html) I saw this line: ``` total = subtotal + (+myInput.value); ``` What is the purpose of the second '+'?
The unary plus is there for completeness, compared with the familiar unary minus (-x). However it has the side effect, relied upon here, of casting myInput.value into a Number, if it's something else such as a String: ``` alert(1+'2'); // 12 alert(1+(+'2')); // 3 ```
That's called the "unary + operator", it can be used as a quick way to force a variable to be converted to a number, so that it can be used in a math operation.
What does this line of javascript do?
[ "", "javascript", "" ]
Firstly - my description ;) I've got an XMLHttpRequest JSON response from the server. The MySQL driver outputs all data as strings and PHP returns it as-is, so any integer is returned as a string, therefore: Is there any fast alternative (hack) to the parseInt() function in JS which can parse a pure numeric string, e.g. ``` var foo = {"bar": "123"}; ... foo.bar = parseInt(foo.bar); // (int) 123 ```
To convert to an integer simply use the unary + operator, it should be the fastest way: ``` var int = +string; ``` Conversions to other types can be done in a similar manner: ``` var string = otherType + ""; var bool = !!anything; ``` [More info](http://www.jibbering.com/faq/faq_notes/type_convert.html "More info").
Type casting in JavaScript is done via the constructor functions of the built-in types **without `new`**, ie ``` foo.bar = Number(foo.bar); ``` This differs from `parseInt()` in several ways: * leading zeros won't trigger octal mode * floating point values will be parsed as well * the whole string is parsed, i.e. if it contains additional non-numeric characters, the return value will be `NaN`
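Those differences can be checked directly (a quick sketch runnable in any JS console; the octal caveat is historical, so it is only noted in a comment rather than demonstrated):

```javascript
// parseInt stops at the first non-numeric character and ignores the rest;
// Number() requires the whole string to be numeric.
console.log(parseInt("12px", 10)); // 12
console.log(Number("12px"));       // NaN

// Number() also parses floating point values.
console.log(parseInt("3.5", 10)); // 3
console.log(Number("3.5"));       // 3.5

// (Historically, parseInt("08") without a radix could be read as octal
// in some engines; Number("08") never was.)
```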
parseInt alternative
[ "", "javascript", "mysql", "ajax", "" ]
I have an app that shows store sales. It is a multi-dimensional array, so each value in the root array is an array containing `[sales]`, `[cost]`, `[date]`, etc. for the given day it pertains to. OK, there are 2 arrays for each store. One is for verified numbers and the next is for unverified numbers. The unverified picks up right after the verified, so the first date in unverified will be one day after the verified. OK, all this is fine so far. But when I show the total sales for all stores, I need to combine all the verified and all the unverified numbers to get the total. Here's the tricky part. The verified array should only go up to the date of the lowest verified store and all the rest should be unverified. For example: On a given date, if all the stores have verified numbers but one is unverified for that date, then they all need to be unverified for that date. So it's like it needs to create a verified total, and an unverified total, check each array and if they are all verified add to the verified array, else if any are unverified add to unverified array. I hope this makes sense; I am trying my best to explain the situation. I do have an algorithm that is working but it's so complex I have to study it forever every time I work on it, and I was hoping there was a more elegant solution. Thanks!!! Here is what the array structure looks like ``` $verified ( [0](sales => 355, cost=> 233, date=> 2008-03-01) [0](sales => 235, cost=> 133, date=> 2008-03-02) [0](sales => 435, cost=> 143, date=> 2008-03-02) ) $unverified ( [0](sales => 232, cost=> 133, date=> 2008-03-03) [0](sales => 335, cost=> 233, date=> 2008-03-04) [0](sales => 535, cost=> 243, date=> 2008-03-05) ) ``` This is dummy data, but in reality there will be more entries. There are these 2 arrays for each store. The date won't show up for both arrays; a date will only be in unverified or verified. 
But when you have several sets of these arrays for each store and need to combine them, different stores' unverified numbers will begin at different dates. storeA may be verified up to the 15th and storeB may be verified up to the 7th. So I need to build a new `$verified` and a new `$unverified` from all the ones for each store. But I can't simply combine all the verifieds, because they span different date ranges. So if all the dates are verified then they stay verified in the new master array, but if any are unverified they need to go to the new master unverified array. If this doesn't make sense, I am sorry.
> Much more often, strategic > breakthrough will come from redoing > the representation of the data or > tables. This is where the heart of a > program lies. Show me your flowcharts > and conceal your tables, and I shall > continue to be mystified. Show me your > tables, and I won't usually need your > flowcharts; they'll be obvious. —Frederick P. Brooks, Jr., The Mythical Man-Month I assume that you'd like to construct `$total_verified` and `$total_unverified` arrays for all stores. 1. Construct `$total_unverified` from the `$unverified` arrays for all stores, keeping track of the earliest date in `$earliest_unverified_date`. 2. Add each value from every store's `$verified` array to `$total_verified` iff its date is earlier than `$earliest_unverified_date`; otherwise add the value to the `$total_unverified` array.
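A compact sketch of those two steps (in Python rather than the question's PHP, with illustrative structure: each store is a `(verified, unverified)` pair of row dicts):

```python
from datetime import date

def combine(stores):
    """stores: list of (verified, unverified) pairs; each row is a dict
    with 'sales', 'cost' and 'date' (a datetime.date) keys."""
    total_verified, total_unverified = [], []

    # Step 1: pool every store's unverified rows, tracking the earliest date.
    earliest_unverified_date = date.max
    for _, unverified in stores:
        for row in unverified:
            total_unverified.append(row)
            earliest_unverified_date = min(earliest_unverified_date, row["date"])

    # Step 2: a verified row stays verified only if it predates the earliest
    # unverified date across all stores; otherwise it is demoted.
    for verified, _ in stores:
        for row in verified:
            if row["date"] < earliest_unverified_date:
                total_verified.append(row)
            else:
                total_unverified.append(row)
    return total_verified, total_unverified
```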
Your problem sounds like the sort of thing relational databases were made for. Are you using a database? If so, a proper query (or two queries) using `GROUP BY` or `ROLLUP` could save you a bazillion lines of PHP.
How do I best process my multi-dimensional arrays in PHP?
[ "", "php", "arrays", "multidimensional-array", "" ]
I have been working on a site that makes some pretty big use of AJAX and dynamic JavaScript on the front end and it's time to start stress testing. But how do you properly stress test something that requires clicking several links on the front-end? One way I was able to easily hit every page of the site quickly and repeatedly was to point a Google Mini at it. But that's not going to click links and then navigate Modal windows and things like that. Edit - I should point out that the site is done in PHP5 and the JavaScript library used is jQuery. Not sure if this would make any difference but felt it might be useful to know.
[JMeter](http://jakarta.apache.org/jmeter/) is great at this. You may record your sessions and tweak them to your liking. So-called 'ajax load testing' is a recurring subject on this site, and is often confused. So let's get it straight: There is really no difference between load testing a normal web page and load testing with ajax. It all boils down to discrete requests; they just happen to not be full page refreshes. One thing to keep in mind is there is a distinct difference between load testing the server processing the requests (a load test) and the performance on screen of the UI components being updated (how well your javascript performs.) Simple load test example: 1. initial page load 2. login 3. navigate? 4. 5-10 'ajax' requests (or whatever may fit your application usage pattern) 5. logout
I disagree with Nathan and Freddy to some degree. They are correct that "AJAX testing" is really no different in that HTTP requests are made. But it's not that simple. See my article on Ajaxian.com on [Why Load Testing Ajax is Hard](http://ajaxian.com/archives/why-load-testing-ajax-is-hard). JMeter, Pylot, and The Grinder are all great tools for generating HTTP requests (I personally recommend Pylot). But at their core, they don't act as a browser and process JavaScript, meaning all they do is replay the traffic they saw at record time. If those AJAX requests were unique to that session, they may not be suitable/correct to replay in large volumes. The fact is that as more logic is pushed down in to the browser, it becomes much more difficult (if not impossible) to properly simulate the traffic using traditional load testing tools. In my article, I give a simple example of how difficult it becomes to test something like Google's home page when you want to query 1000's of different search terms (an important goal during load testing). To do it with JMeter/Pylot/Grinder you effectively end up re-writing parts of the AJAX code (in your case w/ jQuery) over again in the native language of the tool. It gets even more complex if your goal is to measure the response time as perceived by the user (which is arguably the most important thing at the end of the day). For really complex applications that use Comet/"Reverse Ajax" (a technique that keeps open sockets for long periods of time), traditional load tools don't work at all. My company, BrowserMob, provides a [load testing service](http://browsermob.com) that uses Firefox browsers, powered by [Selenium](http://seleniumhq.org), to drive hundreds or thousands of real browsers, allowing you to measure and time the performance of visual elements as seen in the browser. We also support traditional virtual users (blind HTTP traffic) and a simulated browser (via [HtmlUnit](http://htmlunit.sourceforge.net/)). 
All that said, usually a mix of a service like BrowserMob plus traditional load testing is the right approach. That is, real browsers are great for a full-fidelity load test, but they will never be as economical as "virtual users", since they require 10-100X more RAM and CPU. See my recent blog post on whether to [simulate or not to simulate virtual users](http://blog.browsermob.com/2009/02/virtual-users-to-simulate-or-not-to-simulate/). Hope that helps!
Load testing the UI
[ "", "php", "stress-testing", "" ]
I would like to use the generic queue class as described in the .NET framework (3.5) but I will need a Remove(int index) method to remove items from the queue. Can I achieve this functionality with an extension method? Anyone care to point me in the right direction?
What you want is a [`List<T>`](http://msdn.microsoft.com/en-us/library/6sh2ey19.aspx) where you always call [`RemoveAt(0)`](http://msdn.microsoft.com/en-us/library/c93ab5c9.aspx) when you want to get the item from the `Queue`. Everything else is the same, really (calling `Add` would add an item to the end of the `Queue`).
Here's how you remove a **specific** item from the queue with one line of Linq (it's recreating the queue, BUT for the lack of a better method...) ``` //replace "<string>" with your actual underlying type myqueue = new Queue<string>(myqueue.Where(s => s != itemToBeRemoved)); ``` I know it's not removing **by index**, but still, someone might find this useful (this question ranks in Google for "remove specific item from a c# queue" so I decided to add this answer, sorry)
c# Adding a Remove(int index) method to the .NET Queue class
[ "", "c#", "queue", "" ]
I am using .NET 1.1. I have a session class in which I have stored many static variables that hold some data to be used by many classes. I want to find a simple way of destroying this class instead of resetting every variable one by one. For example, if there is a static class MyStatic, I would have liked to destroy/remove this class from memory by writing MyStatic = null, which is not currently possible. ## Additional question. The idea of the singleton is good, but I have the following questions: If a singleton is implemented, the 'single' object will still remain in memory. In the singleton, we are only checking if an instance already exists. How can I make sure that this instance variable also gets destroyed? I have a main class which initializes the variables in the static class. Even if I plan to implement a Reset() method, I need to call it from a method, e.g. the destructor in the main class. But this destructor gets called only when the GC collects this main class object in memory, which means the Reset() gets called very late. thanks pradeep
Don't use a static class to store your variables. Use an instance (and make it a singleton if you only want one instance at any given time.) You can then implement IDisposable, and just call Dispose() when you want to destroy it. For more information check out this site: <http://csharpindepth.com/Articles/General/Singleton.aspx> **EDIT** The object is still subject to garbage collection, so unless you are using lots of unmanaged resources, you should be fine. You can implement IDisposable to clean up any resources that need to be cleaned up as well.
Instead of a static class, have a static instance of a class: ``` class Foo { public int Something; public static Foo Instance = new Foo(); public static void Reset() { Instance = new Foo(); } } void test() { int i = Foo.Instance.Something; } ``` You can also delegate to an instance of the class: ``` class Foo { public static int Something { get { return instance.something; } } private int something; private static Foo instance = new Foo(); public static void Reset() { instance = new Foo(); } } void test() { int i = Foo.Something; } ```
how to destroy a Static Class in C#
[ "", "c#", "" ]
I have a class that maps to a field in a database. The class only cares about the name of the field and its related .NET type. **The type can be string, int, datetime, etc.** ``` class Foo { string Name { get; set; } Type FooType { get; set; } } ``` I have another class that inherits from Foo that adds a property for a value. Right now I store the value as an object and use a switch statement to box the value based on the base class's FooType. ``` class FooWithStuff : Foo { object Value { get; set; } } ``` Is there a way to implement this with generics to give type safety for the values? **Edit:** I have made the key requirement bold. When declaring a List of Foo, it needs a type. If I were doing this against custom classes I would create an interface and use that. However here I am using int, string, DateTime, etc. Int is a struct, string is an object, so a Foo<object> does not work for both.
Define your class like this: ``` class Foo<T> : IFoo { public Foo(string name) { Name = name; } public string Name { get; set; } public T Value { get; set; } public Type FooType { get { return typeof(T); } } } ``` You could then define the interface IFoo as: ``` string Name { get; set; } Type FooType { get; } ``` And declare a list as: ``` List<IFoo> list = new List<IFoo>(); ```
``` class Foo { public string Name { get; set; } public Type Type { get; set; } } class Bar<T> : Foo { public T Value { get; set; } public Bar() { base.Type = typeof( T ); } } ```
C# Generic Lists
[ "", "c#", "generics", "" ]
> Microsoft Visual Studio > > Unable to start program 'theprogram.exe'. > > This application has failed to start > because the application configuration > is incorrect. Review the manifest file > for possible errors. Reinstalling the > application may fix this problem. For > more details, please see the > application event log. > > OK The program in question is a C++ project, no MFC, no AFX, used libraries are: `opengl32.lib glu32.lib SDL.lib sdlmain.lib` plus the pre-built `libboost_signals-vc80-mt-gd-1_37.lib` from BoostPro computing. The program starts fine in Release builds, but on Debug I get the aforementioned error message, plus a zombie process attached to the debugger that I can't kill. The manifest files for debug build: ``` <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC80.DebugCRT" version="8.0.50608.0" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity> </dependentAssembly> </dependency> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC80.CRT" version="8.0.50608.0" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity> </dependentAssembly> </dependency> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC80.DebugCRT" version="8.0.50727.762" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity> </dependentAssembly> </dependency> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC80.CRT" version="8.0.50727.762" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity> </dependentAssembly> </dependency> </assembly> ``` and Release build: ``` <?xml version='1.0' encoding='UTF-8' standalone='yes'?> <assembly xmlns='urn:schemas-microsoft-com:asm.v1' manifestVersion='1.0'> <dependency> 
<dependentAssembly> <assemblyIdentity type='win32' name='Microsoft.VC80.CRT' version='8.0.50608.0' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' /> </dependentAssembly> </dependency> <dependency> <dependentAssembly> <assemblyIdentity type='win32' name='Microsoft.VC80.CRT' version='8.0.50727.762' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' /> </dependentAssembly> </dependency> </assembly> ``` Dependency walker: ``` Error: The Side-by-Side configuration information for "c:\prog\opengl guis\gg-0.7.0\debug\TUTORIAL.EXE" contains errors. This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem (14001). Error: At least one required implicit or forwarded dependency was not found. Error: At least one module has an unresolved import due to a missing export function in an implicitly dependent module. Error: Modules with different CPU types were found. Warning: At least one delay-load dependency module was not found. Warning: At least one module has an unresolved import due to a missing export function in a delay-load dependent module. 
``` DLLs: ``` DEVIL.DLL ILU.DLL MSVCP80D.DLL MSVCR80D.DLL SDL.DLL DWMAPI.DLL ADVAPI32.DLL DCIMAN32.DLL DDRAW.DLL GDI32.DLL GLU32.DLL KERNEL32.DLL MSVCRT.DLL NTDLL.DLL OPENGL32.DLL RPCRT4.DLL SECUR32.DLL USER32.DLL ACTIVEDS.DLL ADSLDPC.DLL ADVPACK.DLL APPHELP.DLL ATL.DLL AUTHZ.DLL BROWSEUI.DLL CABINET.DLL CDFVIEW.DLL CERTCLI.DLL CFGMGR32.DLL CLBCATQ.DLL CLUSAPI.DLL COMCTL32.DLL COMDLG32.DLL COMRES.DLL CREDUI.DLL CRYPT32.DLL CRYPTUI.DLL CSCDLL.DLL DBGHELP.DLL DEVMGR.DLL DHCPCSVC.DLL DNSAPI.DLL DUSER.DLL EFSADU.DLL ESENT.DLL GDIPLUS.DLL HLINK.DLL HNETCFG.DLL IEFRAME.DLL IERTUTIL.DLL IEUI.DLL IMAGEHLP.DLL IMGUTIL.DLL IMM32.DLL INETCOMM.DLL IPHLPAPI.DLL LINKINFO.DLL LZ32.DLL MFC42U.DLL MLANG.DLL MOBSYNC.DLL MPR.DLL MPRAPI.DLL MPRUI.DLL MSASN1.DLL MSGINA.DLL MSHTML.DLL MSI.DLL MSIMG32.DLL MSLS31.DLL MSOERT2.DLL MSRATING.DLL MSSIGN32.DLL MSVCP60.DLL MSWSOCK.DLL NETAPI32.DLL NETCFGX.DLL NETMAN.DLL NETPLWIZ.DLL NETRAP.DLL NETSHELL.DLL NETUI0.DLL NETUI1.DLL NETUI2.DLL NORMALIZ.DLL NTDSAPI.DLL NTLANMAN.DLL ODBC32.DLL OLE32.DLL OLEACC.DLL OLEAUT32.DLL OLEDLG.DLL POWRPROF.DLL PRINTUI.DLL PSAPI.DLL QUERY.DLL RASAPI32.DLL RASDLG.DLL RASMAN.DLL REGAPI.DLL RTUTILS.DLL SAMLIB.DLL SCECLI.DLL SETUPAPI.DLL SHDOCVW.DLL SHELL32.DLL SHLWAPI.DLL SHSVCS.DLL TAPI32.DLL URLMON.DLL USERENV.DLL USP10.DLL UTILDLL.DLL UXTHEME.DLL VERSION.DLL W32TOPL.DLL WINHTTP.DLL WININET.DLL WINIPSEC.DLL WINMM.DLL WINSCARD.DLL WINSPOOL.DRV WINSTA.DLL WINTRUST.DLL WLDAP32.DLL WMI.DLL WS2_32.DLL WS2HELP.DLL WSOCK32.DLL WTSAPI32.DLL WZCDLG.DLL WZCSAPI.DLL WZCSVC.DLL MSVCR80.DLL TUTORIAL.EXE ``` linker call: ``` /OUT:"C:\prog\OpenGL GUIS\GG-0.7.0\Debug\tutorial.exe" /INCREMENTAL /NOLOGO /LIBPATH:"C:\prog\SDL-1.2.13\lib" /LIBPATH:"C:\prog\contrib\lib_win32" /LIBPATH:"C:\prog\boost\boost_1_37\lib" /MANIFEST /MANIFESTFILE:"Debug\tutorial.exe.intermediate.manifest" /NODEFAULTLIB:"LIBC.LIB" /DEBUG /PDB:"c:\prog\OpenGL GUIS\GG-0.7.0\Debug\tutorial.pdb" /SUBSYSTEM:CONSOLE /MACHINE:X86 /ERRORREPORT:PROMPT opengl32.lib 
glu32.lib SDL.lib sdlmain.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib ".\debug\gg.lib" ```
Looks like the debug build is linked against both debug and release runtimes which would be a very bad thing -- you haven't included either in the list of libraries you're linking against -- might be worth checking though Edit: Might also be worth checking for any `#pragma comment (lib...)` statements which can force the linker to include an additional library, causing potential conflicts (see also [documentation on MSDN](http://support.microsoft.com/kb/153901))
If you right click on the solution and go to Properties, you need to select Single Startup Project and select your hosting project. This worked for me.
"Unable to start program" (Debug build)
[ "", "c++", "visual-c++", "visual-studio-2005", "boost", "" ]
I have a table that stores formulary drug information and needs to be updated daily from a central formulary. The temp table is identical to the drug table. The temp table data could be identical (and on most days will be) to the main table or it could have updated rows or new rows. I have a stored procedure to update the main table, but it fails because it won't update NULL rows (if there is a new row in the temp table). This is a MSSQL Server 2005. Where am I going wrong here: ``` -- Insert statements for procedure here UPDATE [RX_Billing].[dbo].[FS_Drug] SET [TRADENAME] = [RX_Billing].[dbo].[FS_Drug_TEMP].[TRADENAME] ,[CDM] = [RX_Billing].[dbo].[FS_Drug_TEMP].[CDM] ,[NDC] = [RX_Billing].[dbo].[FS_Drug_TEMP].[NDC] ,[IP_COST] = [RX_Billing].[dbo].[FS_Drug_TEMP].[IP_COST] ,[OP_COST] = [RX_Billing].[dbo].[FS_Drug_TEMP].[OP_COST] ,[HH_COST] = [RX_Billing].[dbo].[FS_Drug_TEMP].[HH_COST] ,[VAR_COST] = [RX_Billing].[dbo].[FS_Drug_TEMP].[VAR_COST] ,[LSTUPDATE] = [RX_Billing].[dbo].[FS_Drug_TEMP].[LSTUPDATE] FROM [RX_Billing].[dbo].[FS_Drug] RIGHT OUTER JOIN [RX_Billing].[dbo].[FS_Drug_TEMP] ON [RX_Billing].[dbo].[FS_Drug].[TRADENAME] = [RX_Billing].[dbo].[FS_Drug_TEMP].[TRADENAME] ``` **EDIT:** I went with Rory's code. Thanks, that works beautifully. A note to Orion Edwards: UPSERT/MERGE is exactly what I wanted, but it is not supported under SQL Server 2005. Apparently it was planned, but didn't make that release. It is available in Server 2008. (From what the Interwebs has told me.)
Standard way is to do an UPDATE and then an INSERT: ``` -- UPDATE rows using an INNER JOIN with matching TRADENAME. No need to update TRADENAME column. UPDATE drug SET [CDM] = tmp.[CDM] , [NDC] = tmp.[NDC] , [IP_COST] = tmp.[IP_COST] , [OP_COST] = tmp.[OP_COST] , [HH_COST] = tmp.[HH_COST] , [VAR_COST] = tmp.[VAR_COST] , [LSTUPDATE] = tmp.[LSTUPDATE] FROM [RX_Billing].[dbo].[FS_Drug] drug INNER JOIN [RX_Billing].[dbo].[FS_Drug_TEMP] tmp ON drug.[TRADENAME] = tmp.[TRADENAME] -- Insert rows that don't have matching TRADENAME INSERT INTO [RX_Billing].[dbo].[FS_Drug] SELECT tmp.[TRADENAME] , tmp.[CDM] , tmp.[NDC] , tmp.[IP_COST] , tmp.[OP_COST] , tmp.[HH_COST] , tmp.[VAR_COST] , tmp.[LSTUPDATE] FROM [RX_Billing].[dbo].[FS_Drug] drug RIGHT OUTER JOIN [RX_Billing].[dbo].[FS_Drug_TEMP] tmp ON drug.[TRADENAME] = tmp.[TRADENAME] WHERE drug.[TRADENAME] IS NULL ``` You might also want to delete or flag as deleted any records in drug that aren't in tmp any more. Do that as a separate statement same as the UPDATE but with a LEFT OUTER JOIN where tmp.TRADENAME IS NULL.
You could try doing an `UPSERT` (which is basically "if exists, update, else insert"). [SQL server calls it `MERGE`](https://learn.microsoft.com/en-us/sql/t-sql/statements/merge-transact-sql?view=sql-server-ver16#a-using-merge-to-do-insert-and-update-operations-on-a-table-in-a-single-statement)
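For reference, on SQL Server 2008 and later the same upsert collapses into one MERGE statement (a sketch against the question's tables; not usable on 2005, as the asker notes):

```sql
MERGE [RX_Billing].[dbo].[FS_Drug] AS drug
USING [RX_Billing].[dbo].[FS_Drug_TEMP] AS tmp
    ON drug.[TRADENAME] = tmp.[TRADENAME]
WHEN MATCHED THEN
    UPDATE SET drug.[CDM] = tmp.[CDM],
               drug.[NDC] = tmp.[NDC],
               drug.[IP_COST] = tmp.[IP_COST],
               drug.[OP_COST] = tmp.[OP_COST],
               drug.[HH_COST] = tmp.[HH_COST],
               drug.[VAR_COST] = tmp.[VAR_COST],
               drug.[LSTUPDATE] = tmp.[LSTUPDATE]
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([TRADENAME], [CDM], [NDC], [IP_COST], [OP_COST],
            [HH_COST], [VAR_COST], [LSTUPDATE])
    VALUES (tmp.[TRADENAME], tmp.[CDM], tmp.[NDC], tmp.[IP_COST],
            tmp.[OP_COST], tmp.[HH_COST], tmp.[VAR_COST], tmp.[LSTUPDATE]);
```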
Update a table from a temp table
[ "", "sql", "sql-server", "" ]
I do all my PHP coding in Dreamweaver (currently version CS3). I am used to it and like it. There is one thing I would like, though: to be able to check for errors. Currently I don't see them until I upload the page to a server. Is there any way to set up PHP error checking in Dreamweaver? Thanks.
Do you mean as you type? Since PHP isn't compiled, it's not as easy to do as with a language like Java or C#. The closest you can come to that is with [PHPLint](https://www.icosaedro.it/phplint/). Unfortunately, you have to do some hackery with Cygwin to get it to work under Windows. I know that this isn't what you asked for, but Dreamweaver is not a very good editor for PHP. It lacks many features and isn't really meant to be used mainly for PHP. Try these editors, which have on-the-go error checking: * [Eclipse PDT](https://www.eclipse.org/pdt/) * [Zend Studio](https://web.archive.org/web/20190818034321/https://www.zend.com/en/products/studio) * [Komodo Edit](https://github.com/Komodo/KomodoEdit) If you mean how can you setup your system so you can test PHP locally on your computer without the need to upload it to the web, you need a web server on your computer. I highly recommend [Xampp](https://web.archive.org/web/20140121004237/http://www.apachefriends.org/en/xampp.html). It's an all-in-one package to install Apache, MySQL, PHP, and more on your computer and has a spiffy control panel.
I've been coding in PHP/MySQL/HTML/CSS/JS for many years. Dreamweaver has done a lot of good things in its evolution. The CS4 offering is good, but still lacks those things that are absolutely needed for professionals...real-time code checking, code completion etc. These are absolutes. PhpED is an OK offering, but only available on Windows. Despite its claims, it is NOT stable on Mac or Linux platforms. Zend and Aptana are virtually the same thing. Both are Eclipse PDT-based products, except Aptana is more full featured. Zend has been very unresponsive to its customers in not including simple things like word wrap, for instance. Aptana Studio 1.5 was outstanding with its own PHP plugin, but is no longer available since they adopted the unfinished, raw PDT. NetBeans is absolutely the BEST PHP platform, but has a downside in running away with memory...at least on the Mac it does (this could be an issue with the Java VM however). But it DOES provide the best set of features...and it's free. Aptana 1.5 is my choice at the moment, but since it's not being enhanced anymore, I'm looking for a new IDE of choice. So far that's been NetBeans. I'm hoping that they will eventually fix the memory leaks.
Any way to error-check PHP in Dreamweaver?
[ "", "php", "plugins", "dreamweaver", "" ]
I was given good direction to solve a problem today from here, but I got stuck trying to follow it. I would like to retrieve a `count` for all rows from a table for the past `30 days` using my `date field`. I populate these fields using `now()` and they are in the format `Y-m-d h:i:s`. * I need a `count` for the rows added today, yesterday, 3 days ago, etc. * I guess I can then use the above to get a count for each month? * Finally a `count` for the year, the year before, etc., using the months as totals? I was hoping not to specify dates in my query, and for the query to just be generic and work out counts relative to today. I was also hoping I could do it accurately by taking into consideration that different months have different numbers of days. How can I get this `count` using **just SQL**? I can then use the result set as an array and parse it with `PHP`.
Number of records inserted yesterday: ``` select count(*) from mytable where date(myfield)=date(date_sub(now(),interval 1 day)); ``` For the year: ``` select count(*) from mytable where year(myfield)=year(now()); ``` And so on...
Look into MySQL's [DATE\_ADD](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_date-add) and [DATE\_SUB](http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_date-sub) functions; they will give you what you're looking for.
Count rows added today, yesterday...and other times
[ "", "mysql", "sql", "count", "" ]
I am using a fresh Glassfish install with very little customizations. I have a Message Driven Bean (ObjectUpdateMDB) that listens to a topic, then updates the object it receives in a database. There are a lot of objects being updated. After a while of running I get this exception: ``` SEVERE: JTS5031: Exception [org.omg.CORBA.INTERNAL: vmcid: 0x0 minor code: 0 completed: Maybe] on Resource [rollback] operation. SEVERE: MDB00049: Message-driven bean [Persistence:ObjectUpdateMDB]: Exception in postinvoke : [javax.transaction.SystemException: org.omg.CORBA.INTERNAL: JTS5031: Exception [org.omg.CORBA.INTERNAL: vmcid: 0x0 minor code: 0 completed: Maybe] on Resource [rollback] operation. vmcid: 0x0 minor code: 0 completed: No] SEVERE: javax.transaction.SystemException javax.transaction.SystemException: org.omg.CORBA.INTERNAL: JTS5031: Exception [org.omg.CORBA.INTERNAL: vmcid: 0x0 minor code: 0 completed: Maybe] on Resource [rollback] operation. vmcid: 0x0 minor code: 0 completed: No at com.sun.jts.jta.TransactionManagerImpl.rollback(TransactionManagerImpl.java:350) at com.sun.enterprise.distributedtx.J2EETransactionManagerImpl.rollback(J2EETransactionManagerImpl.java:1144) at com.sun.enterprise.distributedtx.J2EETransactionManagerOpt.rollback(J2EETransactionManagerOpt.java:426) at com.sun.ejb.containers.BaseContainer.completeNewTx(BaseContainer.java:3767) at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:3571) at com.sun.ejb.containers.MessageBeanContainer.afterMessageDeliveryInternal(MessageBeanContainer.java:1226) at com.sun.ejb.containers.MessageBeanContainer.afterMessageDelivery(MessageBeanContainer.java:1197) at com.sun.ejb.containers.MessageBeanListenerImpl.afterMessageDelivery(MessageBeanListenerImpl.java:79) at com.sun.enterprise.connectors.inflow.MessageEndpointInvocationHandler.invoke(MessageEndpointInvocationHandler.java:139) at $Proxy98.afterDelivery(Unknown Source) at 
com.sun.messaging.jms.ra.OnMessageRunner.run(OnMessageRunner.java:324) at com.sun.enterprise.connectors.work.OneWork.doWork(OneWork.java:76) at com.sun.corba.ee.impl.orbutil.threadpool.ThreadPoolImpl$WorkerThread.run(ThreadPoolImpl.java:555) INFO: MDB00037: [Persistence:ObjectUpdateMDB]: Message-driven bean invocation exception: [java.lang.OutOfMemoryError: Java heap space] INFO: java.lang.OutOfMemoryError java.lang.OutOfMemoryError: Java heap space ``` Looks like it is an issue with Heap Space. What do I need to adjust the heap space for? The App Server itself or the Broker? How do I do this?
You need to assign more RAM to your Glassfish installation. See [http://spaquet.blogspot.com/2006/07/liferay-glassfish-part-ii-configuring.html](https://web.archive.org/web/20150525031346/http://spaquet.blogspot.com/2006/07/liferay-glassfish-part-ii-configuring.html) and [http://docs.sun.com/app/docs/doc/820-4495/gepzd?a=view](https://web.archive.org/web/20090217121946/http://docs.sun.com/app/docs/doc/820-4495/gepzd?a=view).
I have used the following `asadmin` commands to sort the problem on Glassfish 3.1: ``` asadmin create-jvm-options --target server-config -- '-XX\:+UnlockExperimentalVMOptions' asadmin create-jvm-options --target server-config -- '-XX\:+UseG1GC' asadmin delete-jvm-options --target server-config -- '-Xmx512m' asadmin create-jvm-options --target server-config -- '-Xmx1024m' asadmin delete-jvm-options --target server-config -- '-XX\:MaxPermSize=192m' asadmin create-jvm-options --target server-config -- '-XX\:MaxPermSize=256m' asadmin create-jvm-options --target default-config -- '-XX\:+UnlockExperimentalVMOptions' asadmin create-jvm-options --target default-config -- '-XX\:+UseG1GC' asadmin delete-jvm-options --target default-config -- '-Xmx512m' asadmin create-jvm-options --target default-config -- '-Xmx1024m' asadmin delete-jvm-options --target default-config -- '-XX\:MaxPermSize=192m' asadmin create-jvm-options --target default-config -- '-XX\:MaxPermSize=256m' ``` It is a variation on Michael Myers' hint. Using `asadmin` commands makes the change easily repeatable. Also, I switched to the new G1 collector, which is a lot better than the normal collector. Helps with Eclipse as well ;-) Note that the syntax is for TakeCommand on Windows. If you use a different combination of shell and OS you might need different escape characters (i.e. straight ticks instead of backticks for most Unix shells). If you mess your setup up with the `*-jvm-options` commands, you can fix it by editing the `domain.xml` file.
Java Heap Space error in glassfish
[ "", "java", "glassfish", "application-server", "out-of-memory", "jboss-mdb", "" ]
I have a loop that is doing some error checking in my PHP code. Originally it looked something like this... ``` foreach($results as $result) { if (!$condition) { $halt = true; ErrorHandler::addErrorToStack('Unexpected result.'); } doSomething(); } if (!$halt) { // do what I want cos I know there was no error } ``` This works well and good, but it still loops through the remaining results even though, after the first error, it needn't. Is there a way to escape the loop?
You are looking for the [break](http://www.php.net/break) statement. ``` $arr = array('one', 'two', 'three', 'four', 'stop', 'five'); while (list(, $val) = each($arr)) { if ($val == 'stop') { break; /* You could also write 'break 1;' here. */ } echo "$val<br />\n"; } ```
As stated in other posts, you can use the break keyword. One thing that was hinted at but not explained is that the keyword can take a numeric value to tell PHP how many levels to break from. For example, if you have three foreach loops nested in each other trying to find a piece of information, you could do 'break 3' to get out of all three nested loops. This will work for the 'for', 'foreach', 'while', 'do-while', or 'switch' structures. ``` $person = "Rasmus Lerdorf"; $found = false; foreach($organization as $oKey=>$department) { foreach($department as $dKey=>$group) { foreach($group as $gKey=>$employee) { if ($employee['fullname'] == $person) { $found = true; break 3; } } // group } // department } // organization ```
Can you 'exit' a loop in PHP?
[ "", "php", "loops", "" ]
When you pickle an object that has some attributes which cannot be pickled, it will fail with a generic error message like: ``` PicklingError: Can't pickle <type 'instancemethod'>: attribute lookup __builtin__.instancemethod failed ``` Is there any way to tell which attribute caused the exception? I am using Python 2.5.2. Even though I understand in principle the root cause of the problem (e.g. in the above example having an instance method), it can still be very hard to *exactly pinpoint it*. In my case I already defined a custom `__getstate__` method, but forgot about a critical attribute. This happened in a complicated structure of nested objects, so it took me a while to identify the bad attribute. As requested, here is one simple example where pickle intentionally fails: ``` import cPickle as pickle import new class Test(object): pass def test_func(self): pass test = Test() pickle.dumps(test) print "now with instancemethod..." test.test_meth = new.instancemethod(test_func, test) pickle.dumps(test) ``` This is the output: ``` now with instancemethod... Traceback (most recent call last): File "/home/wilbert/develop/workspace/Playground/src/misc/picklefail.py", line 15, in <module> pickle.dumps(test) File "/home/wilbert/lib/python2.5/copy_reg.py", line 69, in _reduce_ex raise TypeError, "can't pickle %s objects" % base.__name__ TypeError: can't pickle instancemethod objects ``` Unfortunately there is no hint that the attribute `test_meth` causes the problem.
You could file a bug against Python for not including more helpful error messages. In the meantime, modify the `_reduce_ex()` function in `copy_reg.py`. ``` if base is self.__class__: print self # new raise TypeError, "can't pickle %s objects" % base.__name__ ``` Output: ``` <bound method ?.test_func of <__main__.Test object at 0xb7f4230c>> Traceback (most recent call last): File "nopickle.py", line 14, in ? pickle.dumps(test) File "/usr/lib/python2.4/copy_reg.py", line 69, in _reduce_ex raise TypeError, "can't pickle %s objects" % base.__name__ TypeError: can't pickle instancemethod objects ```
I had the same problem as you, but my classes were a bit more complicated (i.e. a large tree of similar objects), so the printing didn't help much; I hacked together a helper function instead. It is not complete and is only intended for use with pickling protocol 2, but it was enough for me to locate my problems. If you want to extend it to cover everything, the protocol is described at <http://www.python.org/dev/peps/pep-0307/>. I've made this post editable so everybody can update the code. ``` import pickle def get_pickling_errors(obj,seen=None): if seen == None: seen = [] try: state = obj.__getstate__() except AttributeError: return if state == None: return if isinstance(state,tuple): if not isinstance(state[0],dict): state=state[1] else: state=state[0].update(state[1]) result = {} for i in state: try: pickle.dumps(state[i],protocol=2) except pickle.PicklingError: if not state[i] in seen: seen.append(state[i]) result[i]=get_pickling_errors(state[i],seen) return result ``` An example of the usage, where K is the object that doesn't pickle: ``` >>> get_pickling_errors(K) {'_gen': {}, '_base': {'_gens': None}} ``` This means that the attribute K.\_gen is not picklable, and the same goes for K.\_base.\_gens.
How to tell for which object attribute pickle fails?
[ "", "python", "serialization", "" ]
I've got a registry of classes and types in Python 2.5, like so: ``` class ClassA(object): pass class ClassB(ClassA): pass MY_TYPES = { basestring : 'A string', int : 'An integer', ClassA : 'This is ClassA or a subclass', } ``` I'd like to be able to pass types to a function, and have it look up the closest matching type in the hierarchy. So, looking up `str` would return `"A string"` and looking up `ClassB` would return `"This is ClassA or a subclass"` The problem is, I don't know how to find the superclass (or, rather, trace the MRO chain) of a type object. What's the best way of handling this?
``` from inspect import getmro [st for cls, st in MY_TYPES.items() if cls in getmro(ClassB)] ['This is ClassA or a subclass'] ``` or if you're only interested in first match(es) generator version: ``` (st for cls, st in MY_TYPES.iteritems() if cls in getmro(ClassB)) ```
To get the superclasses of a class, use `__bases__`: ``` class ClassA(object): pass class ClassB(ClassA): pass print ClassB.__bases__ (<class '__main__.ClassA'>,) ``` Beware of things like `int`, though. The base for all of those will be `<type 'object'>`. You'll have to do some testing and iteration to match the dict keys you've listed in your example. Another option would be to use `isinstance` on any object instance, or `issubclass` on the parameters, testing each against your dict keys. Some consider `isinstance` and its ilk to be the mark of the devil, but c'est la vie. Again, though, beware of using `issubclass` with ints and other such types. You'll probably need to combine a few approaches given your dict keys.
What's the best way to find the closest matching type to an existing type?
[ "", "python", "types", "" ]
In my application I have to maintain some global application state and global application-wide methods, like currently connected users, the total number of answers, creating an application config file, etc. There are two options: 1. Make a separate appstate.py file with global variables and functions over them. It looks fine initially, but it seems that I am missing *something* in the clarity of my code. 2. Create a class AppState with class functions in an appstate.py file; all other modules have been defined by their specific jobs. This looks fine. But now I have to write longer lines like appstate.AppState.get\_user\_list(). Moreover, the methods are not much related to each other. I could create separate classes, but that would be too many classes. EDIT: If I use classes I will be using classmethods. I don't think there is a need to instantiate the class to an object.
The second approach seems better. I'd use the first one only for configuration files or something. Anyway, to avoid the problem you could always: ``` from myapp.appstate import AppState ``` That way you don't have to write the long line anymore.
Sounds like the classic conundrum :-). In Python, there's nothing dirty or shameful about choosing to use a module if that's the best approach. After all, modules, functions, and the like are, in fact, first-class citizens in the language, and offer introspection and properties that many other programming languages offer only by the use of objects. The way you've described your options, it kinda sounds like you're not too crazy about a class-based approach in this case. I don't know if you've used the Django framework, but if not, have a look at the documentation on how it handles settings. These are app-wide, they are defined in a module, and they are available globally. The way it parses the options and exposes them globally is quite elegant, and you may find such an approach inspiring for your needs.
choosing between Modules and Classes
[ "", "python", "oop", "module", "" ]
I have an array of items and I would like to sort on one of their properties. I can access an item's property using "*item.Fields["FieldName"].Value*"; the property is returned as a string but I can cast it to an int. I had a look at OrderBy<> but I have no idea how to use it.
To be clear, `OrderBy` won't sort the array in place - it will return a new sequence which is a sorted *copy* of the array. If that's okay, then you want *something* like: ``` var sorted = array.OrderBy(item => item.Fields["FieldName"].Value); ``` On the other hand, I don't understand your comment that the property is returned as a string but that you can cast it to an int - you can't cast strings to ints, you have to parse them. If that's what you meant, you probably want: ``` var sorted = array.OrderBy(item => int.Parse(item.Fields["FieldName"].Value)); ``` If you want that as an array, you can call `ToArray()` afterwards: ``` var sorted = array.OrderBy(item => int.Parse(item.Fields["FieldName"].Value)) .ToArray(); ``` Alternatively you could use `Array.Sort` if you want to sort in-place, but that will be somewhat messier.
Use the Sort method to sort an array: ``` Array.Sort(theArray, (a, b) => String.Compare(a.Fields["FieldName"].Value, b.Fields["FieldName"].Value)); ``` If you are not using C# 3, you use a delegate instead of a lambda expression: ``` Array.Sort(theArray, delegate(Item a, Item b) { return String.Compare(a.Fields["FieldName"].Value, b.Fields["FieldName"].Value); } ); ``` (This also works with framework 2, which the OrderBy extension doesn't.)
Sort array of items using OrderBy<>
[ "", "c#", "arrays", "sorting", "" ]
I am creating concatenated strings based on the data in each row of my ListView control. I need to figure out how to remove the first element in each string which pertains to a single row of the ListView. How might I accomplish this? This is a C# winforms project. **Here is my current code:** ``` foreach (ListViewItem HazPackErrItems in HazmatPackageErrorListview.Items) { string HazPackErrRow = " "; foreach (ListViewItem.ListViewSubItem HazPackErrSub in HazPackErrItems.SubItems) { HazPackErrRow += " " + HazPackErrSub.Text + ","; } // Remove comma after last element of string. HazPackErrRow = HazPackErrRow.Substring(0, HazPackErrRow.Length - 2); MessageBox.Show(HazPackErrRow); // List concatenated subitems } ```
Just don't add it in the first place? ``` foreach (ListViewItem HazPackErrItems in HazmatPackageErrorListview.Items) { string HazPackErrRow = " "; bool first = true; foreach (ListViewItem.ListViewSubItem HazPackErrSub in HazPackErrItems.SubItems) { if (first) first = false; else HazPackErrRow += " " + HazPackErrSub.Text + ","; } // Remove the single trailing comma left after the last element. HazPackErrRow = HazPackErrRow.Substring(0, HazPackErrRow.Length - 1); MessageBox.Show(HazPackErrRow); // List concatenated subitems } ```
This is based on Inferis's answer, but it uses a StringBuilder and avoids the need for the final Substring call: ``` foreach (ListViewItem errors in HazmatPackageErrorListview.Items) { StringBuilder builder = new StringBuilder(); bool first = true; foreach (ListViewItem.ListViewSubItem error in errors.SubItems) { if (first) { first = false; } else { if (builder.Length > 0) { builder.Append(", "); } builder.Append(error.Text); } } MessageBox.Show(builder.ToString()); // List concatenated subitems } ``` Here's an alternative way of tackling the delimiter issue: ``` foreach (ListViewItem errors in HazmatPackageErrorListview.Items) { StringBuilder builder = new StringBuilder(); bool first = true; string prefix = ""; foreach (ListViewItem.ListViewSubItem error in errors.SubItems) { if (first) { first = false; } else { builder.Append(prefix); prefix = ", "; builder.Append(error.Text); } } MessageBox.Show(builder.ToString()); // List concatenated subitems } ``` Of course, there's a simpler way: ``` foreach (ListViewItem errors in HazmatPackageErrorListview.Items) { // SubItems is a non-generic collection, so Cast<> it before the LINQ calls string[] bits = errors.SubItems.Cast<ListViewItem.ListViewSubItem>() .Skip(1) .Select(item => item.Text) .ToArray(); string errorLine = string.Join(", ", bits); MessageBox.Show(errorLine); } ```
C# string manipulation - How to remove the first element of each concatenated string in a collection
[ "", "c#", "string", "listview", "" ]
Say you retrieve 100 records, and display them on a page. The user only updates 2 of the records on the page. Now you want to update only the two records, and not the other 98. Is it best to have one submit on the page, then somehow know which 2 are updated, then send only those two to the db for an update? What does the "somehow" look like? Or, would you have an update-submit button for each row, and have it only update the record its tied to?
Of course there are different ways you could do this. In general, you can save yourself some trouble and server-side processing by using Javascript to assemble your POST data for only the records that have changed. Two thoughts on how this might work: 1) Go the ajax route and do *live*-editing. So records are presented in a table and appear to be non-editable. When a user clicks a particular row, that row becomes editable by using Javascript to create the appropriate html form on the fly. Then have either a submit button or some other handler (say, moving focus to another table row) which will trigger the POST which updates the DB (asynchronously via your preferred ajax method). Happily the mainstream Javascript frameworks can help a lot in this area. 2) Checkboxes - whenever a row is edited, its checkbox becomes checked. When the submit button is clicked, use javascript to post the POST data by grabbing everything in row whose checkbox is checked. A user can un-check a box to cancel changes to that row before submitting.
Ajax it using jQuery or some other JavaScript library and put an update button on each row.
Best Practice Updating DB Records
[ "", "java", "database", "hibernate", "sql-update", "" ]
I'm writing a very simple blog engine for own use (since every blog engine I encountered is too complex). I want to be able to uniquely identify each post by its URL which is something like `/2009/03/05/my-blog-post-slug`. To accomplish it in the data tier, I want to create a compound unique constraint on `(Date, Slug)` where `Date` is only the date part (ignoring the time of day) of the composition date. I have a few ideas myself (like another column, probably calculated, to hold only the date part) but I came to SO to know what's the best practice to solve this problem. I doubt SQL Server version matters here, but for the records, I'm on 2008 Express (I appreciate a more portable solution). Table schema: ``` create table Entries ( Identifier int not null identity, CompositionDate datetime not null default getdate(), Slug varchar(128) not null default '', Title nvarchar(max) not null default '', ShortBody nvarchar(max) not null default '', Body nvarchar(max) not null default '', FeedbackState tinyint not null default 0, constraint pk_Entries primary key(Identifier), constraint uk_Entries unique (Date, Slug) -- the subject of the question ) ``` ## Selected Solution: I think marc's solution is more appropriate, considering this question is about 2008. However, I'll go with the integer method (but not with `INSERT`s, as it does not ensure the integrity of data; I'll use a precomputed integer column) since I think it's easier to work with the integer thing from the client (in the query). Thank you guys. 
``` create table Entries ( Identifier int not null identity, CompositionDate smalldatetime not null default getdate(), CompositionDateStamp as cast(year(CompositionDate) * 10000 + month(CompositionDate) * 100 + day(CompositionDate) as int) persisted, Slug varchar(128) not null default '', Title nvarchar(max) not null default '', ShortBody nvarchar(max) not null default '', Body nvarchar(max) not null default '', FeedbackState tinyint not null default 0, constraint pk_Entries primary key(Identifier), constraint uk_Entries unique (CompositionDateStamp, Slug) ) go ```
Well, in SQL Server 2008, there's a new datatype called "DATE" - you could use that column and create an index on that. You could of course also add a computed column of type "DATE" to your table and just fill the date portion of the DATETIME column into that computed column, make it PERSISTED, and index it. Should work just fine! Something like that: ``` ALTER TABLE dbo.Entries ADD DateOnly as CAST(CompositionDate AS DATE) PERSISTED CREATE UNIQUE INDEX UX_Entries ON Entries(DateOnly, Slug) ``` Marc
Since you're on 2008, use the Date datatype as Marc suggests. Otherwise, an easier solution is to have a non-computed column (which means you'll have to populate it on an INSERT) which uses the date in the format YYYYMMDD. That's an integer data type and is small and easy to use.
How to create a unique constraint just on the date part of a datetime?
[ "", "sql", "sql-server", "sql-server-2008", "datetime", "unique-constraint", "" ]
In PHP 5.0.4, if you **don't** configure --enable-memory-limit, the memory\_limit directive is ignored. (It's set to 8M in the recommended php.ini file, but the documentation says it's ignored.) So in that case, is there a per-script memory limit at all, or is it only limited by the system? I ask because I'm upgrading to PHP 5.2.8, and it **does** allow memory limiting by default. So now I actually have to set the value to something appropriate. The recommended php.ini file now has it set to 128M, but I don't know if that's *more* or *less* than what 5.0.4 did by default! I'm upgrading production systems, so I'd like to avoid any major change in behavior. The [documentation](https://www.php.net/ini.core) (search for "memory\_limit") is very confusing on this point. It says "default", but I don't know if that means the default value set in the config file, or the default value that it uses when memory limiting is disabled.
The memory limiter in PHP is optional; if you disable it at compile time there's no limit at all. In 5.0.4 it's disabled unless you explicitly asked for it at compile time, the reason being that the memory limiter was [useless until 5.2](http://php.net/manual/en/migration52.other.php) and didn't count a lot of things it should have done, including things like the mysql functions. It's turned *on* from 5.2.1 now that they learned to count. If in doubt, disable it or make sure you update the config file to use the new default. Leaving it at 8MB and upgrading to 5.2.8 will almost definitely cause problems.
128M is very high. You may need that but I'd be surprised. More to the point, the limit can be set to a global default in php.ini: ``` memory_limit = 32M ``` You can also override it in scripts: ``` <?php ini_set('memory_limit', '128M'); ... ``` You'll probably find you only have a handful of scripts that need a lot of memory. Find some comfortable value (with testing) and then just up it for the ones that need more.
PHP memory limit
[ "", "php", "" ]
Are these the same: ``` int foo(bar* p) { return p->someInt(); } ``` and ``` int foo(bar& r) { return r.someInt(); } ``` Ignore the null pointer potential. Are these two functions functionally identical no matter if `someInt()` is virtual or if they are passed a `bar` or a subclass of `bar`? Does this slice anything: ``` bar& ref = *ptr_to_bar; ```
C++ references are intentionally not specified in the standard to be implemented using pointers. A reference is more like a "synonym" for a variable than a pointer to it. These semantics open up some possible optimizations for the compiler, which can sometimes realize that a pointer would be overkill. A few more differences: * You can't assign NULL to a reference. This is a crucial difference and the main reason you'd prefer one over the other. * When you take the address of a pointer, you get the address of the pointer variable. When you take the address of a reference, you get the address of the variable being referred to. * You can't reassign a reference. Once it is initialized it points to the same object for its entire life.
Ignoring all syntactic sugar, the possibilities that can be done with the one and not the other, and the differences between pointers and references explained in answers to other questions... Yeah, those two are functionally exactly the same! Both call the function, and both handle virtual functions equally well. And no, your line does not slice. It's just binding the reference directly to the object pointed to by the pointer. Some questions on why you would want to use one over the other: * [Difference between pointer and reference](https://stackoverflow.com/questions/57483/difference-between-pointer-variable-and-reference-variable-in-c) * [Are there any benefits of passing by pointer over reference?](https://stackoverflow.com/questions/334856/are-there-benefits-of-passing-by-pointer-over-passing-by-reference-in-c/334944) * [Pointer vs. Reference](https://stackoverflow.com/questions/114180/pointer-vs-reference) Instead of trying to come up with the differences myself, I refer you to those in case you want to know.
difference between a pointer and reference parameter?
[ "", "c++", "pointers", "reference", "object-slicing", "" ]
I've got a string like "Foo: Bar" that I want to use as a filename, but on Windows the ":" char isn't allowed in a filename. Is there a method that will turn "Foo: Bar" into something like "Foo- Bar"?
Try something like this: ``` string fileName = "something"; foreach (char c in System.IO.Path.GetInvalidFileNameChars()) { fileName = fileName.Replace(c, '_'); } ``` **Edit:** Since `GetInvalidFileNameChars()` will return 10 or 15 chars, it's better to use a `StringBuilder` instead of a simple string; the original version will take longer and consume more memory.
``` fileName = fileName.Replace(":", "-") ``` However ":" is not the only illegal character for Windows. You will also have to handle: ``` /, \, :, *, ?, ", <, > and | ``` These are contained in System.IO.Path.GetInvalidFileNameChars(); Also (on Windows), "." cannot be the only character in the filename (both ".", "..", "...", and so on are invalid). Be careful when naming files with ".", for example: ``` echo "test" > .test. ``` Will generate a file named ".test" Lastly, if you **really** want to do things correctly, there are some [special file names](http://en.wikipedia.org/wiki/Filename#Reserved_characters_and_words) you need to look out for. **On Windows** you can't create files named: ``` CON, PRN, AUX, CLOCK$, NUL COM0, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9 LPT0, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9. ```
How to make a valid Windows filename from an arbitrary string?
[ "", "c#", "windows", "filenames", "" ]
I'm stuck with a legacy Java codebase that has THOUSANDS of warnings when you compile it. I would love to actually fix the source of all these warnings, but unfortunately that's not an option at this time at my company (other things like "making new products that generate revenue" are considered higher priority by the people in charge; fancy that). Now, I could just live with all of these warnings, if it wasn't for the fact that they make it difficult to find actual errors in the output from our continuous build server. The build server just uses an ant call, nothing fancy, but so far I haven't been able to find anything anywhere about how I can modify this call to prevent warning output. Going through the code and adding a @SuppressWarnings annotation everywhere would work, but it would also be almost as much of a pain as going through and fixing all the warnings' sources. So what I'd really love is if there was just some way I could do: ``` <javadoc suppressWarnings="true" ``` or something similar, to make the javadoc compiler not output all the warning messages. Is anything like this (global javadoc warning disabling) possible?
In Java 8 you can add `additionalparam="-Xdoclint:none"` to the `javadoc` task. [(Source)](http://blog.joda.org/2014/02/turning-off-doclint-in-jdk-8-javadoc.html)
Try the -quiet flag.
How can I suppress warnings (codebase-wide) during javadoc compilation?
[ "", "java", "javadoc", "suppress-warnings", "" ]
Given two spans like this: ``` <span>Click for info</span> <span style="display: none">Here is a big long string of info blah blah</span> ``` ...I'd like to have an onclick function attached to each so that the visitor sees "Click for info", and then when they click it, the first span is hidden and the second is unhidden. And, of course, when the second is clicked, things go back to the way they were originally. I don't have a way of easily generating unique IDs, so I'd prefer to avoid the getElementByID() route, and I'd rather not use jQuery or any other heavyweight dependency. Is there an elegant way to do this? The best idea I have so far is to write a toggleAllSiblings() function, set onclick to that function for both elements in the pair, and then wrap each pair in a parent element. Can it be done without the parent node?
Yet another version, a la progressive enhancement — will properly show the full info when JavaScript is off. ``` <span class="details">Here is a big long string of info blah blah</span> <script type="text/javascript"> // Open/close span behaviour // function infoToggle(span) { var ersatz= document.createElement('span'); ersatz.appendChild(document.createTextNode('Click for info...')); span.parentNode.insertBefore(ersatz, span); span.style.display= 'none'; span.onclick= function() { ersatz.style.display= 'inline'; span.style.display= 'none'; }; ersatz.onclick= function() { ersatz.style.display= 'none'; span.style.display= 'inline'; }; } // Bind to all spans with class="details" // var spans= document.getElementsByTagName('span'); for (var i= spans.length; i-->0;) if (spans[i].className=='details') infoToggle(spans[i]); </script> ```
If you really want to go without ids you could do something like this:

```
<script>
function togglePrevious(sender) {
    sender.previousSibling.style.display = "inline";
    sender.style.display = "none";
}

function toggleNext(sender) {
    sender.nextSibling.style.display = "inline";
    sender.style.display = "none";
}
</script>

<span onclick="javascript:toggleNext(this);">Click for info</span>
<span onclick="javascript:togglePrevious(this);" style="display:none">Info blablabla</span>
```

And please note that you should really use "inline" instead of "block" if you are using spans, as they are inline elements. Setting their display style to block will force them to be divs.
How can I create an element that flips between two values on click?
[ "", "javascript", "html", "" ]
Ok so I have an ASP.Net website project. The project resides on a server on the network, accessed by Visual Studio 2008 via a fileshare. In the web site project I add a web reference to a webservice I have previously created. I can consume this webservice via winforms apps without any problems. But when I try to reference the Webservice in my code, no intellisense comes up, and the service name gets the red underline in Visual Studio and says "The type or namespace name 'IMWebService' could not be found are you missing an assembly reference or a using directive?" However, if I know the exact method signature of a method of the webservice, I can type it in, save it to the webserver, and it works just fine. For example:

```
bool test = Contractor.CheckLogin(email, password);
```

How do I get my Visual Studio to recognize the service just as the website is doing?
I've encountered this problem, isolated only to classes created via "Web References". One solution was to clean out the bin and obj folders and restart Visual Studio. That didn't always work, so maybe it's not a "solution", but sometimes Intellisense picked up after that. I also tried opening up the auto-genned code for the web service and modifying it, then saving. That seemed to kickstart Intellisense into watching that file. Nothing else I did seemed to have an effect.
You could try pulling the auto-generated code for the webservice client that VS makes for you from the temp area and adding it to your actual project.
ASP.net Web Service and intellisense
[ "", "c#", "asp.net", "web-services", "soap", "intellisense", "" ]
I would like to achieve something similar to the following in Guice: ``` public MyClass { private final InjectedObject[] injectedObjects; @Inject public MyClass(InjectedObject[] injectedObjects) { this.injectedObjects=injectedObjects; } } ``` ie I would like to be able to create a certain number of instances of an object, and inject them into another object as an array. I could probably do this instead: ``` public MyClass { private final InjectedObject[] injectedObjects; @Inject public MyClass(InjectedObjectProvider injectedObjectProvider) { this.injectedObjects=injectedObjectProvider.getArrayOfInjectedObjects(5); } } ``` ...but I was wondering if there was another route that was more elegant?
I'm curious why you want several objects created eagerly. You might have success injecting a `Provider<InjectedObject>`, and calling `Provider.get()` each time you need an instance. If you really need 5, you could build 'em in a loop: ``` public MyClass { private final List<InjectedObject> injectedObjects; @Inject public MyClass(Provider<InjectedObject> injectedObjectProvider) { injectedObjects = new ArrayList<InjectedObject>(); for (int i = 0; i < 5; i++) { injectedObjects.add(injectedObjectProvider.get()); } } } ```
Not sure if this suits your needs, but [Multibindings](http://code.google.com/p/google-guice/wiki/Multibindings) worked for me when I needed to inject multiple elements of the same kind (it produces a set though).
Inject an array of Objects in Guice
[ "", "java", "guice", "" ]
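The provider-loop idea in the accepted answer is not Guice-specific. A minimal Python sketch of the same shape (the `InjectedObject`/`MyClass` names mirror the question; everything else is illustrative): inject a zero-argument factory and call it a fixed number of times.

```python
class InjectedObject:
    """Stand-in for the dependency being injected (hypothetical)."""
    def __init__(self, n: int):
        self.n = n


class MyClass:
    """Receives a zero-argument provider (factory) instead of a pre-built array."""
    def __init__(self, provider, count=5):
        # Equivalent of calling Provider.get() in a loop in the Guice answer
        self.injected_objects = [provider() for _ in range(count)]


# A provider that numbers each instance it creates
counter = iter(range(1000))
obj = MyClass(lambda: InjectedObject(next(counter)))
```

The design point is the same as in Java: the consumer never decides *how* instances are built, only *how many*, so the construction logic stays swappable.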
I'm working on some Java <-> Perl interaction. I would like to know what the best way is to pass information from Perl to Java. (Great answers about Perl and Java [here](https://stackoverflow.com/questions/603554/how-should-i-call-a-perl-script-in-java) and [here](https://stackoverflow.com/questions/603163/is-perl-a-good-option-for-heavy-text-processing/603837#603837) btw). There's a lot of text and XML(XML::Twig) I'm parsing in Perl, in a script I'm supposed to call from a Java Web App. So I have all this gathered data, and I need it to use it inside certain objects in Java. What could be a good strategy to send the information from Perl to Java? Is it even possible to return an object or other compatible data structure from Perl to Java? I guess writing to a text file and reading it from Java would make all the optimization gained by using Perl meaningless. Perlformance is an important issue here. **EDIT:** From what I've seen [here](https://stackoverflow.com/questions/274840/how-can-i-call-perl-from-java), maybe Inline-Java would be a good option?
If performance is important I'd recommend having a persistent Perl process running. Starting a Perl interpreter every time you want to run your code will be quite an overhead. The easiest way to communicate is for the Java process to open a TCP connection to the Perl process, write some data and get some back. The format you use to send data from your Perl process back to your Java one will depend on what you're sending and how generic and re-usable you want your code to be. Sending back XML strings will be nice and generic but sending back byte arrays created with [pack](http://perldoc.perl.org/functions/pack.html) in Perl and then read with a [DataInputStream](http://java.sun.com/j2se/1.4.2/docs/api/java/io/DataInputStream.html) in Java will be much, much faster.
[JSON](http://json.org/) is an easy, lightweight format for passing data around. You could add [Rhino](http://www.mozilla.org/rhino/) or something similar to your toolkit and may gain additional performance (not to mention a scripting engine), but this will depend on how complex you are planning your project to be.
How can I pass data from Perl to Java?
[ "", "java", "perl", "language-interoperability", "" ]
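The accepted answer's fast path, length-prefixed binary records built with Perl's `pack` and read with Java's `DataInputStream`, can be sketched language-neutrally with Python's `struct` module. The 4-byte id plus 4-byte length layout below is an illustrative assumption, not a fixed protocol; big-endian (`>`) matches what `DataInputStream` expects on the Java side.

```python
import struct


def encode_record(msg_id: int, payload: bytes) -> bytes:
    """Frame a record as: 4-byte big-endian id, 4-byte length, then the payload."""
    return struct.pack(">II", msg_id, len(payload)) + payload


def decode_record(data: bytes):
    """Inverse of encode_record; returns (msg_id, payload)."""
    msg_id, length = struct.unpack(">II", data[:8])
    return msg_id, data[8:8 + length]
```

Either end of the socket only has to agree on the format string, which is exactly the contract `pack`/`DataInputStream` give you.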
I have a small project that would be perfect for Google App Engine. Implementing it hinges on the ability to generate a ZIP file and return it. Due to the distributed nature of App Engine, from what I can tell, the ZIP file couldn't be created "in-memory" in the traditional sense. It would basically have to be generated and sent in a single request/response cycle. Does the Python zip module even exist in the App Engine environment?
[zipfile](http://docs.python.org/library/zipfile.html) is available at appengine and reworked [example](http://www.tareandshare.com/2008/09/28/Zip-Google-App-Engine-GAE/) follows: ``` from contextlib import closing from zipfile import ZipFile, ZIP_DEFLATED from google.appengine.ext import webapp from google.appengine.api import urlfetch def addResource(zfile, url, fname): # get the contents contents = urlfetch.fetch(url).content # write the contents to the zip file zfile.writestr(fname, contents) class OutZipfile(webapp.RequestHandler): def get(self): # Set up headers for browser to correctly recognize ZIP file self.response.headers['Content-Type'] ='application/zip' self.response.headers['Content-Disposition'] = \ 'attachment; filename="outfile.zip"' # compress files and emit them directly to HTTP response stream with closing(ZipFile(self.response.out, "w", ZIP_DEFLATED)) as outfile: # repeat this for every URL that should be added to the zipfile addResource(outfile, 'https://www.google.com/intl/en/policies/privacy/', 'privacy.html') addResource(outfile, 'https://www.google.com/intl/en/policies/terms/', 'terms.html') ```
``` import zipfile import StringIO text = u"ABCDEFGHIJKLMNOPQRSTUVWXYVabcdefghijklmnopqqstuvweyxáéöüï东 廣 広 广 國 国 国 界" zipstream=StringIO.StringIO() file = zipfile.ZipFile(file=zipstream,compression=zipfile.ZIP_DEFLATED,mode="w") file.writestr("data.txt.zip",text.encode("utf-8")) file.close() zipstream.seek(0) self.response.headers['Content-Type'] ='application/zip' self.response.headers['Content-Disposition'] = 'attachment; filename="data.txt.zip"' self.response.out.write(zipstream.getvalue()) ```
Is it possible to generate and return a ZIP file with App Engine?
[ "", "python", "google-app-engine", "zip", "in-memory", "" ]
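Both answers write the archive to an in-memory stream instead of the filesystem. The same technique in plain modern Python, with `io.BytesIO` standing in for App Engine's response stream (a sketch, not App Engine code):

```python
import io
import zipfile


def build_zip(files: dict) -> bytes:
    """Create a ZIP archive entirely in memory and return its bytes.

    `files` maps archive member names to byte contents.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, content in files.items():
            zf.writestr(name, content)
    return buf.getvalue()
```

On App Engine, as in the accepted answer, you would hand `ZipFile` the response stream directly and set the `Content-Type`/`Content-Disposition` headers the same way.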
Let's say I have two existing tables, "dogs" and "cats": ``` dog_name | owner ---------+------ Sparky | Bob Rover | Bob Snoopy | Chuck Odie | Jon cat_name | owner ---------+------ Garfield | Jon Muffy | Sam Stupid | Bob ``` How do I write a query with this output? ``` owner | num_dogs | num_cats ------+----------+--------- Bob | 2 | 1 Chuck | 1 | 0 Sam | 0 | 1 Jon | 1 | 1 ``` **The answer I've got is the next query** ``` select owner, sum(num_dogs), sum(num_cats) from (select owner, 1 as num_dogs, 0 as num_cats from dogs union select owner, 0 as num_dogs, 1 as num_cats from cats) group by owner ``` **but the problem is that sql doesn't support "select \* from (select...)" type of query** Instead, there has to be a table name after "from". Unfortunately I can't afford myself to have temporary table or a table created with "select into" clause. I need some workaround for the proposed solution where a nested select clause is formed in this way. What's your opinion? Cheers
My SQL does, just need to name the table... What database are you using? ``` select owner, sum(num_dogs), sum(num_cats) from (select owner, 1 as num_dogs, 0 as num_cats from dogs union all select owner, 0 as num_dogs, 1 as num_cats from cats) as g group by owner ```
```
create table cats (cat_name varchar(8), owner varchar(8))
create table dogs (dog_name varchar(8), owner varchar(8))
create table owners (owner varchar(8))

insert into owners values ('Jon')
insert into owners values ('Bob')
insert into owners values ('Chuck')
insert into owners values ('Sam')

insert into dogs values ('Sparky', 'Bob')
insert into dogs values ('Rover', 'Bob')
insert into dogs values ('Snoopy', 'Chuck')
insert into dogs values ('Odie', 'Jon')

insert into cats values ('Garfield', 'Jon')
insert into cats values ('Muffy', 'Sam')
insert into cats values ('Stupid', 'Bob')

select owners.owner,
       count(distinct dog_name) as num_dogs,
       count(distinct cat_name) as num_cats
from owners
left outer join dogs on dogs.owner = owners.owner
left outer join cats on cats.owner = owners.owner
group by owners.owner
```

Note that count(dog\_name) should probably be count(dog\_id)... multiple dogs can have same name different owners (heck... same name same owner is probably allowed). Note the addition of DISTINCT to the count(..) to correct the problem.
SQL Select syntax workaround
[ "", "sql", "select", "" ]
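The accepted fix, giving the derived table an alias, can be checked end-to-end with an in-memory SQLite database (SQLite is just a convenient stand-in here; the asker never names the engine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dogs (dog_name TEXT, owner TEXT);
    CREATE TABLE cats (cat_name TEXT, owner TEXT);
    INSERT INTO dogs VALUES ('Sparky','Bob'),('Rover','Bob'),('Snoopy','Chuck'),('Odie','Jon');
    INSERT INTO cats VALUES ('Garfield','Jon'),('Muffy','Sam'),('Stupid','Bob');
""")

# The derived table must carry an alias ("AS g") -- that was the missing piece.
# UNION ALL keeps duplicate (owner, flags) rows, which the SUMs need.
rows = conn.execute("""
    SELECT owner, SUM(num_dogs), SUM(num_cats)
    FROM (SELECT owner, 1 AS num_dogs, 0 AS num_cats FROM dogs
          UNION ALL
          SELECT owner, 0, 1 FROM cats) AS g
    GROUP BY owner
    ORDER BY owner
""").fetchall()
```

The result matches the table in the question: Bob 2/1, Chuck 1/0, Jon 1/1, Sam 0/1.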
I have a really weird issue that I can't figure out with comparing objects on IIS 7. We are in the process of deploying our old IIS 6 based ASP.NET application on IIS 7, however we have this equality comparison issue that we can't seem to figure out. Let me start out by saying that I have the same assemblies and code running both on IIS 6 and IIS 7, however the comparison of the objects is differing with the same code both on IIS 6 and IIS 7. Here is an example of what my object looks like: ``` class Country : EntityBase { public int CountryID { get; set; } public string Name { get; set; } public override bool Equals(object obj) { if (obj == null || !(obj is Country)) return false; Country c = (Country)obj; return CountryID == c.CountryID; } public override int GetHashCode() { return CountryID.GetHashCode(); } } ``` I have the following code in an ASPX page both on IIS 6 and IIS 7: ``` <% foreach(var country in proposalCountries) { %> <%= country.Country.CountryID %> <%= country.Country.CountryID.GetHashCode() %> <%= country.Country.GetHashCode() %> <%= proposalCountryServices.Count(c => c.Country == country.Country) %> <%= proposalCountryServices.Count(c => (c.Country != null && country.Country != null) && c.Country.Equals(country.Country)) %>) <%= proposalCountryServices.Count(c => Object.Equals(c.Country, country.Country)) %> <% } %> ``` Here are my results: ### IIS 6: ``` 100 <-- CountryID 100 <-- CountryID Hash Code 100 <-- Country Hash Code 1 <-- Something Found 1 <-- Something Found 1 <-- Something Found ``` ### IIS 7: ``` 100 <-- CountryID 100 <-- CountryID Hash Code 100 <-- Country Hash Code 0 <-- Nothing Found 1 <-- Something Found 1 <-- Something Found ``` Is there a difference between .NET 3.5 SP1 on Windows 2003 vs Windows 2008? I am really at a loss of what the problem could be. Has anybody experienced a similar issue? ### Update 1: To answer Jon's question. The two collections are loaded using NHibernate. 
But I feel I should reiterate that both IIS 6 and IIS 7 are using the **exact same build** of the application, so unless NHibernate or DynamicProxy2 is changing how things are loaded based on Windows 2003 or Windows 2008, which I haven't been able to find anything about on Google, I don't know what to make of it. This is also a system-wide issue whenever I am comparing two of my entity objects. So it could have something to do with the DynamicProxy2 wrapper, but both objects are Country objects, and given the overrides I have created everything should work the same in IIS 6 and IIS 7.

### Update 2:

This appears to be a DynamicProxy2 or NHibernate issue. Because I tried the following code:

```
<%
    var c1 = new ICost.Business.Entities.Country { CountryID = 100 };
    var c2 = new ICost.Business.Entities.Country { CountryID = 100 };
%>
<%= c1.CountryID == c2.CountryID %>
<%= c1.GetHashCode() == c2.GetHashCode() %>
<%= c1.Equals(c2) %>
<%= Object.Equals(c1, c2) %>
<%= c1 == c2 %>
```

And for both IIS 6 and IIS 7 the result was, `true`, `true`, `true`, `true`, `false`. See my answer below for what I did to solve this.

### Update 3:

This also might have had something to do with it: [Looks like you forgot to register the http module with Windsor Castle with IIS7](https://stackoverflow.com/questions/258011/problem-with-windsor-castle-with-iis7)
This is the solution that worked for me:

```
public static bool operator ==(BaseEntity a, BaseEntity b)
{
    return Object.Equals(a, b);
}

public static bool operator !=(BaseEntity a, BaseEntity b)
{
    return !Object.Equals(a, b);
}
```

Apparently NHibernate or DynamicProxy was doing some kind of magic under Windows 2003 to get the "==" operator to work without the operator being overloaded.
You haven't explained what `proposalCountries` and `proposalCountryServices` are. It looks to me like your Equals method is working just fine, but in IIS7 they contain distinct objects - you've got two objects with the same ID (so Equals matches) but they're distinct objects, so == doesn't match. Please give details as to how the two collections are loaded - that's likely to be the cause of it.
Why am I having a weird issue comparing object instances on IIS 7?
[ "", "c#", ".net", "asp.net", "iis-7", "windows-server-2008", "" ]
In my software I need to be able to interface to a very simple document management system. I need to be able to:

1. Check in/out documents
2. Add documents
3. delete documents
4. *maybe* version documents. (The domain expert says no, I suspect he is wrong.)

At this time we have no need to search the documents. I need to be able to do this from a C# program so there must be a .net API. I need to have more than one client be able to do this at a time. I will eventually need to be able to do this from a web connection as well. But only later. This is part of a larger application, so I must be able to keep costs low. I don't think I need anything as complex as Documentum or the higher-end DM products. The customer will be selling the application, so there must be a friendly re-seller agreement. Is there anything out there like this?
Why not use [WSS (Windows SharePoint Services)](http://office.microsoft.com/en-us/sharepointtechnology/FX100503841033.aspx) it's free with Windows Server and provides all the functionality and API's that you're looking for to manage documents, Check In/Out version control, work flows and much more. It also already has built in admin interfaces.
In regards to the need to version documents: your domain expert is *definitely* wrong. Not only will users eventually ask for this or need to merge something, but once you establish the other requirements, versioning is pretty simple to add as well so you might as well go for it. Perhaps you could just put a C# front end on a subversion repository.
Simple document management system and API
[ "", "c#", ".net", "document-management", "" ]
I am trying to do something like this: ``` <asp:Button ID="btnSearch" runat="server" CssClass="greybtn" Text='<span>Search</span>' OnClick="btnSearch_Click" /> ``` It displays `<span>Search</span>` instead of just `Search`.
You can't put markup inside a button. An asp:Button control just renders as an input button HTML tag: `<input type="button" value="<span>Search</span>" />` (technically `value="&lt;span&gt;Search&lt;/span&gt;" />`). The browser treats the contents of the `value` attribute as a literal string. However, you **can** put markup inside a `<button><span>Search</span></button>` (you can put quite a bit of HTML in there, including images). [This question](https://stackoverflow.com/questions/187482/how-can-i-use-the-button-tag-with-aspnet) talks about building a control which emits the `button` tag.
You can put a span on an ASP.NET LinkButton like this:- ``` <asp:LinkButton ID="TestLinkButton" CssClass="btn btn-success" runat="server"><span class="glyphicon glyphicon-refresh"/>Press Me</asp:LinkButton> ```
Can I put a <span> tag in the Text property of an ASP:Button?
[ "", "c#", "asp.net", "" ]
Users would select their date from 3 dropdowns (day, month, year). I will combine them on server-side to make a string like '2008-12-30'. How can I then validate to make sure this date was in the right format/numeric only, etc?
If they are 3 separate drop-downs, you will need to validate them as three separate values. Ie, * Validate that the year column is numeric and between whatever years are valid in your app * Validate that the month column is numeric * Validate that the day column is numeric * Validate that they are all valid values using checkdate() Or, you could just cast them all to integer, combine them together into a date, and see if the resulting date is valid. Ie, ``` $time = mktime(0, 0, 0, (int)$_POST['month'], (int)$_POST['day'], (int)$_POST['year']); // in this example, valid values are between jan 1 2000 (server time) and now // modify as required if ($time < mktime(0, 0, 0, 1, 1, 2000) || $time > time()) return 'Invalid!'; $mysqltime = date('Y-m-d', $time); // now insert $mysqltime into database ``` The downside to this method is that it'll only work with dates within the Unix timestamp range ie 1970 to 2038 or so.
I personally found this to be *the* correct and elegant way to determine if the date is both **according to format** and **valid**: * treats dates like `20111-03-21` as invalid - unlike `checkdate()` * no possible PHP warnings (if any one parameter is provided, naturally) - unlike most `explode()` solutions * takes leap years into account unlike regex-only solutions * fully compatible with the mysql `DATE` format (10.03.21 is the same as 2010-03-21) Here's the method you can use: ``` function isValidMysqlDate( string $date ): bool { return preg_match( '#^(?P<year>\d{2}|\d{4})([- /.])(?P<month>\d{1,2})\2(?P<day>\d{1,2})$#', $date, $matches ) && checkdate($matches['month'],$matches['day'], $matches['year']); } ```
How to validate a MYSQL Date in PHP?
[ "", "php", "mysql", "date", "" ]
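For comparison, the same validate-format-then-validate-calendar idea sketched in Python, where `datetime.strptime` plays the role of the regex plus `checkdate()` (leap years included). The length check is an assumption added to enforce zero-padded `YYYY-MM-DD`, since `strptime` would otherwise tolerate unpadded months and days.

```python
from datetime import datetime


def is_valid_mysql_date(s: str) -> bool:
    """True only for zero-padded YYYY-MM-DD strings that are real calendar dates."""
    if len(s) != 10:  # reject unpadded forms like 2008-1-30
        return False
    try:
        datetime.strptime(s, "%Y-%m-%d")
        return True
    except ValueError:  # bad format OR impossible date (e.g. Feb 30)
        return False
```

Like `checkdate()`, this rejects impossible dates such as `2008-02-30` rather than only checking the character pattern.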
I am looking for a generic asynchronous Java job execution framework that could handle `Callable`s or `Runnable`s. It would be similar to `java.util.concurrent.ExecutorService`, (and possibly wrap `ExecutorService`), but it would also have the following features: 1. The ability to persist jobs to a database in case the application goes down while a job is being serviced, and be able to restart the unfinished jobs. (I understand that my job may have to implement `Serializable` which is OK.) 2. Work with UUIDs to enable the client to obtain job tokens and inquire about job status. (Under the hood this information would be persisted to a database, as well.) I have started working on this myself by building around `ExecutorService`, but I would prefer an out of the box, open source solution, if one exists. Something that could work within the Spring Framework would be ideal.
You can use [Quartz](http://www.opensymphony.com/quartz/), and create a concrete [`Job`](http://www.opensymphony.com/quartz/api/org/quartz/Job.html) adapter that delegates to a `Runnable` or `Callable`. Quartz' `Job` interface adds the ability to maintain some state between invocations of a task. If desired, Quartz can store jobs and their state durably in a relational database, and execute them on a scalable cluster of hosts.
You may want to look at [Quartz](http://www.quartz-scheduler.org/). > Quartz is a full-featured, open source job scheduling system that can be integrated with, or used along side virtually any J2EE or J2SE application - from the smallest stand-alone application to the largest e-commerce system. Quartz can be used to create simple or complex schedules for executing tens, hundreds, or even tens-of-thousands of jobs; jobs whose tasks are defined as standard Java components or EJBs. The Quartz Scheduler includes many enterprise-class features, such as JTA transactions and clustering.
Searching for Generic Asynchronous Java Job Execution Framework / Library
[ "", "java", "spring", "executorservice", "" ]
I'm working on a Windows Service; one of its tasks is archiving files on a remote machine, but I have a problem regarding access privileges: "Access is denied". The service account is "LocalService". How can I give the service access to the remote machine?
You'll need to run the service in a dedicated account (such as a domain account) that both machines recognise. Then grant this account the necessary ACL permissions to access the second machine. If you aren't on a domain, there are things you can do with having the same username and password... Another approach is to use the "network service" account - this will authenticate with the identity of the machine that is hosting the service; it can work, but personally I like the dedicated account approach - it makes it easier to achieve granular security, and means you can relocate the service to another host without much effort.
LocalService cannot have permissions on another machine, so you'll need to change the service account to something else and make sure the account has the necessary permissions on the remote machine.
How to give a Window Service access on a remote machine?
[ "", "c#", "windows-services", "" ]
I often use `($var & 1)` in my code, which returns true if `$var` is an odd number and false if it's an even number. But what does "&" actually do?
& is binary `and`. If you have a binary value, and you `and` with another binary value, then the result will be the bitwise `and` of the two. An example:

```
01101010 & 01011001 = 01001000
```

The rightmost bit is either a 1 (and in that case the number is an odd number) or it is a 0, in which case the number is even. If you `&` a number with 1, you only look at the least significant bit, so the result of `$var & 1` is 1 or 0 depending on whether that bit is set, which is exactly the odd/even test. As others have mentioned, look at the bitwise operators for info on how they work.
Two operations which are fundamental to binary systems are OR and AND.

OR means 'if either A is on or B is on'. A real world example would be two switches in parallel. If either is allowing current through, then current passes through.

AND means 'if both A and B are on'. The real world example is two switches in series. Current will only pass through if both are allowing current through.

In a computer, these aren't physical switches but semiconductors, and their functionality is called [logic gates](http://en.wikipedia.org/wiki/Logic_gate). They do the same sorts of things as the switches - react to current or no current.

When applied to integers, every bit in one number is combined with every bit in the other number. So to understand the bitwise operators OR and AND, you need to convert the numbers to binary, then do the OR or AND operation on every pair of matching bits. That is why:

```
00011011 (odd number)
AND
00000001 (& 1)
== 00000001 (results in 1)
```

Whereas

```
00011010 (even number)
AND
00000001 (& 1)
== 00000000 (results in 0)
```

The (& 1) operation therefore compares the right-most bit to 1 using AND logic. All the other bits are effectively ignored because anything AND nothing is nothing. An even number in binary is also an even number in decimal notation (10 is a multiple of 2).

Other fundamental operations to binary systems include NOT and XOR. NOT means 'if A is off' and is the only form of logic gate that takes only one signal or 'parameter' instead of two. XOR means 'if either A or B is on, but not both'. And then there are NAND, NOR, and NXOR, which are basically just NOT combined with AND, OR, and XOR, ie NAND means 'if A and B are *not* both on'.

In programming, the operators are:

```
& means AND, | means OR, ~ means NOT, and ^ means XOR.
```

The others can be made up by combining these, for example:

```
~ (a & b) is equivalent to a NAND operation
```

**PHP specific note**

Bitwise operators do not work on floating-point values, and in PHP float values will be implicitly converted to integers first. Numbers outside the range that can be expressed as integers will be truncated to zero - that is, all numbers over PHP_INT_MAX will look "even" in the expression `($num & 1)`. If you want to support numbers outside of PHP_INT_MIN/PHP_INT_MAX, you'll need `fmod($num, 2)`. *If, however, you're on 64-bit PHP your integers will have greater precision than floats anyway.*
Understanding PHP & (ampersand, bitwise and) operator
[ "", "php", "operators", "bit-manipulation", "bitwise-operators", "" ]
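Python's `&` has the same semantics as PHP's, so the accepted answer's bit diagram and the odd/even test can be checked directly in code:

```python
def is_odd(n: int) -> bool:
    """The last binary digit is 1 exactly for odd numbers."""
    return (n & 1) == 1


# The accepted answer's example: 01101010 & 01011001 = 01001000
assert 0b01101010 & 0b01011001 == 0b01001000
```

Every other bit is masked off by the single 1 in the right operand, which is why `n & 1` never produces anything but 0 or 1.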
My database background is mainly Oracle, but I've recently been helping with some SQL Server work. My group has inherited some SQL server DTS packages that do daily loads and updates of large amounts of data. Currently it is running in SQL Server 2000, but will soon be upgraded to SQL Server 2005 or 2008. The mass updates are running too slowly. One thing I noticed about the code is that some large updates are done in procedural code in loops, so that each statement only updates a small portion of the table in a single transaction. Is this a sound method to do updates in SQL server? Locking of concurrent sessions should not be an issue because user access to tables is disabled while the bulk loading takes place. I've googled around some, and found some articles suggesting that doing it this way conserves resources, and that resources are released each time an update commits, leading to greater efficiency. In Oracle this is generally a bad approach, and I've used single transactions for very large updates with success in Oracle. Frequent commits slow the process down and use more resources in Oracle. My question is, for mass updates in SQL Server, is it generally a good practice to use procedural code, and commit many SQL statements, or to use one big statement to do the whole update?
Sorry guys, none of the above answer the question. They are just examples of how you can do things.

The answer is: more resources get used with frequent commits; however, the transaction log cannot be truncated until a commit point. Thus, if your single spanning transaction is very big it will cause the transaction log to grow and possibly fragment, which if undetected will cause problems later. Also, in a rollback situation, the duration is generally twice as long as the original transaction. So if your transaction fails after 1/2 hour it will take 1 hour to roll back and you can't stop it :-)

I have worked with SQL Server 2000/2005, DB2, ADABAS and the above is true for all. I don't really see how Oracle can work differently.

You could possibly replace the T-SQL with a bcp command and there you can set the batch size without having to code it.

Issuing frequent commits in a single table scan is preferable to running multiple scans with small processing numbers, because generally if a table scan is required the whole table will be scanned even if you are only returning a small subset.

Stay away from snapshots. A snapshot will only increase the number of IOs and compete for IO and CPU.
In general, I find it better to update in batches - typically in the range of between 100 to 1000. It all depends on how your tables are structured: foreign keys? Triggers? Or just updating raw data? You need to experiment to see which scenario works best for you. If I am in pure SQL, I will do something like this to help manage server resources: ``` SET ROWCOUNT 1000 WHILE 1=1 BEGIN DELETE FROM MyTable WHERE ... IF @@ROWCOUNT = 0 BREAK END SET ROWCOUNT 0 ``` In this example, I am purging data. This would only work for an UPDATE if you could restrict or otherwise selectively update rows. (Or only insert xxxx number of rows into an auxiliary table that you can JOIN against.) But yes, try not to update xx million rows at one time. It takes forever and if an error occurs, all those rows will be rolled back (which takes an additional forever.)
Mass Updates and Commit frequency in SQL Server
[ "", "sql", "sql-server", "oracle", "" ]
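The `SET ROWCOUNT` loop in the second answer is SQL Server-specific, but its shape, deleting in bounded batches and committing each one until nothing is affected, is portable. A sketch against SQLite, where a rowid subselect replaces `SET ROWCOUNT` (the `my_table`/`flag` schema is purely illustrative):

```python
import sqlite3


def purge_in_batches(conn, batch_size=1000):
    """Delete matching rows in batches, committing after each batch,
    so no single transaction (or its potential rollback) grows unbounded."""
    deleted = 0
    while True:
        cur = conn.execute(
            "DELETE FROM my_table WHERE rowid IN "
            "(SELECT rowid FROM my_table WHERE flag = 0 LIMIT ?)",
            (batch_size,),
        )
        conn.commit()
        if cur.rowcount == 0:  # same exit test as IF @@ROWCOUNT = 0 BREAK
            break
        deleted += cur.rowcount
    return deleted


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (flag INTEGER)")
conn.executemany("INSERT INTO my_table VALUES (?)", [(i % 2,) for i in range(5000)])
removed = purge_in_batches(conn, batch_size=700)
```

The trade-off discussed above applies unchanged: each commit releases log/lock resources, at the cost of losing all-or-nothing semantics for the purge as a whole.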
I'm trying to setup Django on an internal company server. (No external connection to the Internet.) Looking over the server setup documentation it appears that the "[Running Django on a shared-hosting provider with Apache](http://docs.djangoproject.com/en/dev/howto/deployment/fastcgi/#running-django-on-a-shared-hosting-provider-with-apache)" method seems to be the most-likely to work in this situation. Here's the server information: * Can't install `mod_python` * no root access * Server is SunOs 5.6 * Python 2.5 * Apache/2.0.46 * I've installed Django (and [flup](http://trac.saddi.com/flup)) using the --[prefix option](http://docs.python.org/install/index.html#alternate-installation-the-home-scheme) (reading again I probably should've used --home, but at the moment it doesn't seem to matter) I've added the `.htaccess` file and `mysite.fcgi` file to my root web directory as mentioned [here](http://docs.djangoproject.com/en/dev/howto/deployment/fastcgi/#running-django-on-a-shared-hosting-provider-with-apache). When I run the *mysite.fcgi* script from the server I get my expected output (the correct site HTML output). But, it won't when trying to access it from a browser. It seems that it may be a problem with the `PYTHONPATH` setting since I'm using the prefix option. I've noticed that if I run `mysite.fcgi` from the command-line without setting the `PYTHONPATH` enviornment variable it throws the following error: ``` prompt$ python2.5 mysite.fcgi ERROR: No module named flup Unable to load the flup package. In order to run django as a FastCGI application, you will need to get flup from http://www.saddi.com/software/flup/ If you've already installed flup, then make sure you have it in your PYTHONPATH. ``` I've added **sys.path.append(prefixpath)** and **os.environ['PYTHONPATH'] = prefixpath** to `mysite.fcgi`, but if I set the enviornment variable to be empty on the command-line then run `mysite.fcgi`, I still get the above error. 
Here are some command-line results:

```
>>> os.environ['PYTHONPATH'] = 'Null'
>>>
>>> os.system('echo $PYTHONPATH')
Null
>>> os.environ['PYTHONPATH'] = '/prefix/path'
>>>
>>> os.system('echo $PYTHONPATH')
/prefix/path
>>> exit()
prompt$ echo $PYTHONPATH
Null
```

It looks like Python is setting the variable OK, but the variable is only applicable inside of the script. Flup appears to be distributed as an .egg file, and my guess is that the egg implementation doesn't take into account variables added by `os.environ['key'] = value` (?) at least when installing via the `--prefix` option. I'm not that familiar with .pth files, but it seems that the easy-install.pth file is the one that points to flup:

```
import sys; sys.__plen = len(sys.path)
./setuptools-0.6c6-py2.5.egg
./flup-1.0.1-py2.5.egg
import sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginsert',0); sys.path[p:p]=new; sys.__egginsert = p+len(new)
```

It looks like it's doing something funky; any way to edit this or add something to my code so it will find flup?
In your setup you have to point to the actual egg file, not the directory where the egg file is located. It should look something like: ``` sys.path.append('/path/to/flup/egg/flup-1.0.1-py2.5.egg') ```
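Not part of the original answer, but to see why appending the egg's directory isn't enough: eggs are just zip archives, and Python's zipimport machinery only kicks in when the archive file itself is on `sys.path`. Here's a minimal sketch with a stand-in egg (the `demo_egg` module name and contents are made up for illustration):

```python
import os
import sys
import tempfile
import zipfile

# Build a tiny stand-in "egg": a zip archive containing one module.
tmpdir = tempfile.mkdtemp()
egg_path = os.path.join(tmpdir, 'demo_egg-0.1.egg')
with zipfile.ZipFile(egg_path, 'w') as zf:
    zf.writestr('demo_egg.py', 'VALUE = 42\n')

# Append the egg *file* itself -- appending tmpdir would not make
# demo_egg importable, because the module lives inside the archive.
sys.path.append(egg_path)

import demo_egg
print(demo_egg.VALUE)  # 42
```

The same applies to `flup-1.0.1-py2.5.egg`: put the full path to the `.egg` file on `sys.path` in `mysite.fcgi` before anything tries to import flup.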
Try using a utility called [virtualenv](http://pypi.python.org/pypi/virtualenv). According to the official package page, "virtualenv is a tool to create isolated Python environments." It'll take care of the `PYTHONPATH` stuff for you and make it easy to correctly install Django and flup.
Setting up Django on an internal server (os.environ() not working as expected?)
[ "", "python", "django", "apache", "" ]
I'm trying to get into Java web development but seem to be running into a strange issue with Tomcat and an extremely simple servlet. The catalina log is spewing this every time I try and load the app: ``` Caused by: java.lang.IllegalArgumentException: Servlet mapping specifies an unknown servlet name MyServlet at org.apache.catalina.core.StandardContext.addServletMapping(StandardContext.java:2393) at org.apache.catalina.core.StandardContext.addServletMapping(StandardContext.java:2373) ... 40 more Mar 4, 2009 10:37:58 AM org.apache.catalina.startup.ContextConfig applicationWebConfig SEVERE: Parse error in application web.xml file at jndi:/localhost/mywebapp/WEB-INF/web.xml java.lang.IllegalArgumentException: Servlet mapping specifies an unknown servlet name MyServlet ``` Makes decent sense. It can't seem to find my servlet. However, the servlet seems to be in the right place. I can plainly see it at **WEB-INF/classes/MyServlet.class** For reference, this is the web.xml file I'm currently using: ``` <?xml version="1.0" encoding="ISO-8859-1"?> <web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" version="2.5"> <description>My first web app in Java.</description> <display-name>My Web App</display-name> <servlet> <servlet-name>MyServlet</servlet-name> <servlet-class>MyServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>MyServlet</servlet-name> <url-pattern>/myservlet</url-pattern> </servlet-mapping> </web-app> ``` As you can see, I'm keeping things simple... but it continually throws this exception. What does a bare-bones web app look like in Java, which components am I missing? **Update** To make absolutely certain it wasn't an artifact of some kind, I started up a fresh copy of Tomcat and tried again. 
Upon doing so, this started appearing in the log files: ``` SEVERE: Error deploying web application archive mywebapp.war java.lang.UnsupportedClassVersionError: Bad version number in .class file (unable to load class MyServlet) ``` I dumped my .class file for the MyServlet class, rebuilt it with -target 1.5, repackaged the .war and everything worked perfectly. Thank you so much for the help! A good lesson in troubleshooting never hurt anybody.
Well, given the updated information, it appears that your problem is that the class was compiled with a newer JDK than the one running Tomcat. Check the JDK version being used to start Tomcat, and then see if you can reconcile the version difference between it and the one you're using to compile your servlet. That should clear up your issue.
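Not from the original answer, but a quick way to check which JDK a class was compiled for is `javap -verbose MyServlet.class`, which prints a "major version" field. The class file header is simple enough to inspect by hand too; this sketch uses a synthesized 8-byte header rather than a real `.class` file (major 48 = Java 1.4, 49 = Java 5, 50 = Java 6):

```python
import struct

# Every .class file starts with the magic number 0xCAFEBABE, followed by
# 16-bit minor and major version fields, all big-endian. A JVM throws
# UnsupportedClassVersionError when the major version is newer than it
# supports. Synthesize a header as if compiled with -target 1.5:
header = struct.pack('>IHH', 0xCAFEBABE, 0, 49)

magic, minor, major = struct.unpack('>IHH', header)
print(hex(magic))  # 0xcafebabe
print(major)       # 49, i.e. loadable on Java 5 and later
```

Reading the first 8 bytes of your real `MyServlet.class` the same way would have shown a major version higher than your Tomcat JVM supports.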
Are the lines below **exactly** from your `web.xml`? Tomcat finds the servlet mapping that maps `/myservlet` to the servlet named "`MyServlet`" but complains that it cannot find any servlet definition with that name. Case matters. What you provided looks correct, but double check your `web.xml` to make sure the case is correct. Double check the `web.xml` where Tomcat is using it, in the directory `mywebapp/WEB-INF/web.xml` ``` <servlet> <servlet-name>MyServlet</servlet-name> <-- Check this name <servlet-class>MyServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>MyServlet</servlet-name> <-- Compare against this name <url-pattern>/myservlet</url-pattern> </servlet-mapping> ``` If that's not it, let us know. But these names are case sensitive.
Tomcat is unable to find my Servlet and is throwing exceptions, but why?
[ "", "java", "tomcat", "" ]
In Sybase, is there any built-in function to convert a seconds value into hours, minutes, and seconds?
I don't know if there is such a function, but if there isn't, the simple formulae are: ``` HH = floor(seconds / 3600) MM = floor(seconds / 60) % 60 SS = seconds % 60 ```
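The same arithmetic, sketched in Python rather than SQL just to show the formulae in action (integer division plays the role of `floor`):

```python
def hms(seconds):
    # Mirrors the three expressions above:
    # floor(s / 3600), floor(s / 60) % 60, s % 60
    hh = seconds // 3600
    mm = (seconds // 60) % 60
    ss = seconds % 60
    return hh, mm, ss

print(hms(3725))  # (1, 2, 5): 3725 seconds is 1 hour, 2 minutes, 5 seconds
```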
If you want your results formatted as hours:minutes:seconds ``` select convert(varchar(8), dateadd(SECOND, 65, '1970-01-01'), 108) ``` Passing in 65 as our number of seconds results in... ``` 00:01:05 ```
sybase sql way of converting seconds value to hours minutes and seconds
[ "", "sql", "date", "sybase", "" ]
I'm trying to take a given URL entered by a user and determine if the URL is pointing to an image or a video. Example use case: When a user pastes in the URL of a YouTube video, on save, the page will automatically display the embedded YouTube player. When a user posts the URL of a picture on Flickr, on save, the page will automatically display a smaller version of the Flickr image.
You can fetch the URL and check the Content-Type of the response. You can use the [HTTP Client](http://hc.apache.org/httpclient-3.x/) from Apache; it helps you fetch the content of the URL, and you can use it to follow the redirects. For instance, try to fetch the following: <http://www.youtube.com/watch?v=d4LkTstvUL4> That will return an HTML page containing the video. After a while you'll find out the video is here: <http://www.youtube.com/v/d4LkTstvUL4> But if you fetch that page you will get a redirect: ``` HTTP/1.0 302 Redirect Date: Fri, 23 Jan 2009 02:25:37 GMT Content-Type: text/plain Expires: Fri, 23 Jan 2009 02:25:37 GMT Cache-Control: no-cache Server: Apache X-Content-Type-Options: nosniff Set-Cookie: VISITOR_INFO1_LIVE=sQc75zc-QSU; path=/; domain=.youtube.com; expires= Set-Cookie: VISITOR_INFO1_LIVE=sQc75zc-QSU; path=/; domain=.youtube.com; expires= Location: http://www.youtube.com/swf/l.swf?swf=http%3A//s.ytimg.com/yt/swf/cps-vf L4&rel=1&eurl=&iurl=http%3A//i1.ytimg.com/vi/d4LkTstvUL4/hqdefault.jpg&sk=Z_TM3JF e_get_video_info=1&load_modules=1 ``` So, what you have to do is fetch the URL and examine it, until you get the final content. [This section](http://hc.apache.org/httpclient-3.x/redirects.html) explains how to handle the redirects.
Issue an HTTP HEAD request so you can examine the HTTP headers that come back without having to first download the entire document. Showing a non-programmatic case under Linux using "curl": ``` $ curl --head http://stackoverflow.com/Content/Img/stackoverflow-logo-250.png HTTP/1.1 200 OK Cache-Control: max-age=28800 Content-Length: 3428 Content-Type: image/png Last-Modified: Fri, 16 Jan 2009 09:35:30 GMT Accept-Ranges: bytes ETag: "98f590c5bd77c91:0" Server: Microsoft-IIS/7.0 Date: Fri, 23 Jan 2009 03:55:39 GMT ``` You can see here from the Content-Type that this is an image. You can use [HTTPClient from Apache](http://hc.apache.org/httpclient-3.x/) from Java to do the HTTP Head request. If you want to download the content for sure, then just issue the HTTP GET (using Httpclient) and use the same HTTP Header to determine the content type.
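Once you have the Content-Type value from the HEAD response, classifying it is straightforward. Here's a sketch in Python (the helper name `media_kind` and its categories are made up for illustration); note it handles parameters like `charset`, and that a YouTube watch page reports `text/html`, so you'd still need site-specific handling for embedded players:

```python
def media_kind(content_type):
    # Strip any parameters (e.g. "; charset=utf-8") and normalize case
    # before looking at the top-level media type.
    main_type = content_type.split(';')[0].strip().lower()
    if main_type.startswith('image/'):
        return 'image'
    if main_type.startswith('video/'):
        return 'video'
    return 'other'

print(media_kind('image/png'))                 # image
print(media_kind('video/mp4'))                 # video
print(media_kind('text/html; charset=utf-8'))  # other
```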
How to determine if a given URL link is a video or image?
[ "", "java", "url", "grails", "http-headers", "" ]
Should I use LAMP, .Net, or any other platform? I have access to resources with LAMP, .Net, ROR and Java experience. Thanks. Sorry for being vague. Could you give me a few pluses and minuses of each like Paul? Thanks for your time and effort.
If you're planning to host the system, then you are best to work in a language that you are comfortable with and can hire people to help you with. That means you want something reasonably popular, but also oriented towards the type of functionality you are implementing. Presumably you are writing some form of web application. Personally, my favorite web app language was Perl, but it took a huge amount of self-discipline to keep from making it a mess. It's probably not great for big teams, and it's hard to hire people with experience. PHP is great for pounding out web apps, and it has a big following. Unless I haven't been keeping up, LAMP, .Net, ROR and PHP are all bound to web 1.0 technologies, which means augmenting them with a JavaScript library like Dojo if you want to use AJAX (which is all the rage these days). If the software is enterprise, then Java or C# are more sellable, but since you're running it yourself that doesn't matter. Personally I'd stay far away from J2EE; it's just too much complexity for too little functionality. My latest combo is Java, GWT and Hibernate, but I find they are very difficult technologies to master. Good if you have the time for long-term development, but not so great if you're in a hurry. So, if you're in a rush, prototyping in LAMP with a very simple 1.0 front-end is probably best; then later, once you've crystallized the concept, redoing the system in some stronger technology will help you get to the next level of performance. Paul.
If it's a professional project, then use whatever you have the best resources for; if you don't know, then ask your coworkers what they would be most comfortable with. If things really are equal (and I doubt they are), then use each of the platforms to write a random platform selector, and use that to make your choice. Conversely, if it's a personal project and you are looking for something new, then use the thing that you know least about.
What platform would you recommend to develop a SAAS application?
[ "", "java", ".net", "ruby", "lamp", "" ]