There is a PHP application right now on a Linux box running under Apache with MySQL. Since we are a windows shop, management wants to get rid of the Linux box and move everything over to windows. Is there a performance difference between the two platforms? Or, is there any significant difference at all, in terms of performance or management?
Microsoft had a team help optimise PHP for Windows, and that work is part of PHP 5.3. Some figures I've seen place the performance close to PHP + Apache on a unix system. Before 5.3 (which means currently, since 5.3 isn't out yet), performance is poor on Windows. There are some patches and tricks you can pull to improve it, but it's still going to cost you a bit of performance. That may or may not be a problem; people have a tendency to overestimate the importance of performance. Note that there are other reasons to use unix than just performance. Code may not be portable, and even though the PHP core runs fairly OK, you can easily get into trouble with PHP extensions and third-party libraries. No matter how you look at it, Windows is a second-rate system for running PHP on.
If your application isn't huge and doesn't get hit a couple thousand times per second, there's no difference between the two. LAMP == WAMP for small PHP projects. Just install something like [XAMPP](http://www.apachefriends.org/en/xampp-windows.html) if you want your Windows environment to be as close as possible to your existing one. Good luck with your project!
Performance differences running PHP under Windows/IIS 7 vs. Linux/Apache?
[ "", "php", "performance", "apache", "iis", "" ]
I want to retrieve the next sequence number via an adhoc query in Entity Framework. I'm using: ``` LogEntities db = new LogEntities(); ObjectQuery<Int64> seq = db.CreateQuery<Int64>("SELECT AUDITLOG.EVENTID_SEQ.NEXTVAL from sysibm.sysdummy1"); ``` This returns the following error: *ErrorDescription = "'sysibm.sysdummy1' could not be resolved in the current scope or context. Make sure that all referenced variables are in scope, that required schemas are loaded, and that namespaces are referenced correctly."* I guess this is because sysdummy1 is not a mapped table in my model. Anyone know a way I can perform this query using the LINQ 2 Entity context?
Thanks for your answer Craig. The reason I am unable to use an auto incrementing identity column is because this particular logical table is physically partitioned into 31 separate (daily) tables and the ID needs to be unique across all tables. I ended up creating a stored procedure to retrieve the next number from the sequence, and then adding that to my EF Model store. ``` private static long GetNextEventId(DbConnection dbConnection) { using (DbCommand cmd = dbConnection.CreateCommand()) { cmd.CommandText = "LogEntities.GetNextEventId"; cmd.CommandType = CommandType.StoredProcedure; // Execute the command return Convert.ToInt64(cmd.ExecuteScalar()); } } ``` Here's another way of call a stored proc in EF that returns a scalar. [Using Stored Procedures That Return Non Entity Type](http://blogs.msdn.com/bindeshv/archive/2008/11/21/using-stored-procedures-that-return-non-entity-type.aspx)
An ObjectQuery needs to use Entity SQL, not "regular" SQL. If you want to write "regular" SQL, you need to use a store connection, not an ObjectQuery. That said, I kind of wonder why you're manually retrieving a sequence number. If the entity property is set to auto increment on the server, it will be retrieved automatically when you SaveChanges. If you need to get a store connection, there is a method on the EntityConnection type, [CreateDbCommand](http://msdn.microsoft.com/en-us/library/system.data.entityclient.entityconnection_members.aspx), which does this. Again, I recommend that you don't do it. Using this feature makes your code provider-specific. Most Entity Framework code should be provider-agnostic.
How to get next sequence number in DB2 using Entity Framework?
[ "", "sql", "entity-framework", "db2", "" ]
I have the following table structure in my DB ``` ID Name -------------------------- ID_1 Name1 ID1 Name2 ``` When I execute the following query ``` SELECT * FROM tblNames WHERE ID LIKE '%_1' ``` I get both records selected... any suggestions on how to get rid of that?
An underscore (`_`) is a special character in LIKE expressions that matches a single character. To match an actual underscore, you need to escape it: ``` select * from tblNames where id like '%\_1' escape '\' ``` Here I'm telling SQL Server to consider backslash an escape character, and using it to escape the underscore. Alternatively, you can represent the underscore as a character range with a single character in it - it will be interpreted literally in this case: ``` select * from tblNames where id like '%[_]1' ``` Which is a bit more succinct. Reference: [MSDN](http://msdn.microsoft.com/en-us/library/aa933232(SQL.80).aspx).
[LIKE operator](http://msdn.microsoft.com/en-us/library/aa933232(SQL.80).aspx) Quickie: `_` is a wildcard character that matches any single character. ``` SELECT * FROM tblNames WHERE ID LIKE '%\_1' ESCAPE '\' ```
Strange behavior of sql server
[ "", "sql-server", "sql", "" ]
In another Bruce Eckel exercise, the code I've written takes a method and changes a value in another class. Here is my code: ``` class Big { float b; } public class PassObject { static void f(Letter y) { y.c = 'z'; } //end f() static void g(Big z) { z.b = 2.2; } public static void main(String[] args ) { Big t = new Big(); t.b = 5.6; System.out.println("1: t.b : " + t.b); g(t); System.out.println("2: t.b: " + t.b); } //end main }//end class ``` It's throwing an error saying "Possible loss of precision." ``` PassObject.java:13: possible loss of precision found: double required : float z.b = 2.2 PassObject.java:20: possible loss of precision found : double required : float t.b = 5.6 ``` Can't `double`s be `float`s as well?
Yes, but you have to specify that they are floats, otherwise they are treated as doubles: ``` z.b = 2.2f ``` The 'f' at the end of the number makes it a float instead of a double. Java won't automatically narrow a double to a float.
No. A float can be automatically widened to a double, but a double can never be narrowed to a float without an explicit cast, because a double has the larger range. float range is `1.40129846432481707e-45` to `3.40282346638528860e+38`; double range is `4.94065645841246544e-324d` to `1.79769313486231570e+308d`.
Java: double vs float
[ "", "java", "types", "casting", "floating-point", "double", "" ]
I'm trying to update all records in one table with the values found in another table. I've tried many versions of the same basic query and always get the same error message: > Operation must use an updateable > query. Any thoughts on why this query won't work in Access DB? ``` UPDATE inventoryDetails as idet SET idet.itemDesc = ( SELECT bomItemDesc FROM BOM_TEMPLATES as bt WHERE bt.bomModelNumber = idet.modelNumber ) ``` also tried this because I realized that since the second table has multiple model number records for each modelnumber - and I only need the first description from the first record found for each model number. ``` UPDATE inventoryDetails as idet SET idet.item_desc = ( SELECT TOP 1 bomItemDescription FROM BOM_TEMPLATES as bt WHERE bt.bomModelNumber = idet.modelNumber ) ``` ...still getting the same error though.
You have to use a join ``` UPDATE inventoryDetails INNER JOIN BOM_TEMPLATES ON inventoryDetails.modelNumber = BOM_TEMPLATES.bomModelNumber SET inventoryDetails.itemDesc = [bomItemDesc]; ```
`Any thoughts on why this query won't work in Access DB`? The answer is: because ACE/Jet SQL syntax is not SQL-92 compliant (even when in its ANSI-92 Query Mode!). I'm assuming yours is a scalar subquery. This construct is simply not supported by ACE/Jet. ACE/Jet has its own quirky and flawed `UPDATE..JOIN` syntax, flawed because the engine doesn't force the `JOIN`ed values to be scalar and is free to silently use an arbitrary value. It is different again from SQL Server's own UPDATE..JOIN syntax, but at least SQL Server supports the Standard scalar subquery as an alternative. ACE/Jet forces you to either learn its quirky, non-portable ways or to use an alternative SQL product. Sorry to sound negative: the ACE/Jet engine is a great piece of software, but UPDATE syntax is absolutely fundamental, and the fact that it hasn't changed since the SQL-92 Standard really shows its age.
Access DB update one table with value from another
[ "", "sql", "database", "ms-access", "" ]
I'm learning wxPython so most of the libraries and classes are new to me. I'm creating a Preferences dialog class but don't know the best way to make sure the OK/Cancel (or Save/Close) buttons are in the correct order for the platform. This program is intended to run both on GNOME and Windows, so I want to make sure that the buttons are in the correct order for each platform. Does wxPython provide functionality that prevents me from doing a `if platform.system() == 'Linux'` kind of hack?
The appearance of a dialog can change only if you use stock dialogs (like wx.FileDialog); if you make your own, the layout will stay the same on every platform. wx.Dialog has a CreateStdDialogButtonSizer method that creates a wx.StdDialogButtonSizer with standard buttons, where you might see differences in layout on different platforms, but you don't have to use that.
You can use a StdDialogButtonSizer <http://www.wxpython.org/docs/api/wx.StdDialogButtonSizer-class.html> So long as your buttons have the standard IDs they will be put in the correct order. Just to add a wrinkle though, on a Mac for instance, a preferences dialog would not have OK / Cancel buttons. It would automatically apply the preferences as they were entered (or at least on dialog close). So you'd still have to do some platform sniffing in that case.
WxPython: Cross-Platform Way to Conform Ok/Cancel Button Order
[ "", "python", "user-interface", "cross-platform", "wxpython", "" ]
I have a problem with comparing the values of array elements, e.g. I want to compare the value at index 0 to the value at index 2, and index 1 to index 3, and so on. With the code below I expect to get numOfdifferentShape == 2, but I get 3. How can I solve this problem? :-( ``` int numOfdifferentShape=0; myArray = {40.0, 40.0, 40.0, 40.0, 80.0, 40.0, 40.0, 40.0} for (int a=0; int a<myArray.size(); a=a+2) { for (int b=a+2; b<myArray.size; b=b+2) { if (!(myArray.get(a).equals(myArray.get(b) && myArray.get(a+1).equals(b+1))) numOfdifferentShape++; break; } } ```
``` for (int i = 0; i < (myArray.size() - 2); ++i) { if (myArray[i] != myArray[i + 2]) ++numOfdifferentShapes; } ```
There are several syntax errors in this code, but since TofuBeer has already pointed them out in the comments, I'll move on to the design and logic. Going from the code, I'm assuming you don't have much experience with Java, and perhaps not with programming at all. So I'm going to go slowly here. I hope you aren't insulted by my explanations. You say you are trying to find out how many of the objects which you are storing (as two ints) in your array are equal. To do this, you have to keep track of what unique objects you have already seen. Then you compare each object to the list of unique objects and, if it doesn't match any of them, add it to the list. This is the basic algorithm. Now, have you noticed that I keep using the word "object" in my description? When that happens, it usually means you should be making a class. I would make a simple one like this, holding the two integers: ``` class Box { // or whatever the objects are called private final int height; private final int width; public Box(int h, int w) { height = h; width = w; } public int getHeight() { return height; } public int getWidth() { return width; } @Override public boolean equals(Object other) { if (!(other instanceof Box)) return false; Box b = (Box) other; return b.height == height && b.width == width; } @Override public int hashCode() { int hash = 7; hash = 97 * hash + this.height; hash = 97 * hash + this.width; return hash; } } ``` Try to understand what each part of this code does (especially if this is actually your homework). Once you've got it, move on to the next part: doing the calculation that you were trying to do. Let's say you have an array of Boxes, like this: ``` Box[] boxes = { new Box(40, 40), new Box(40, 40), new Box(80, 40), new Box(40, 40) }; ``` (I can't tell if you're using an array or a list, so I'm just picking one to demonstrate.)
I already gave the algorithm for finding the number of unique items, so I'll show you how I would write it: ``` List<Box> unique = new ArrayList<Box>(); for (Box box : boxes) { if (!unique.contains(box)) { // this is why I implemented equals() and hashCode()! unique.add(box); } } int numOfDifferentShape = unique.size(); ``` This is much easier than trying to keep track of two ints for each object, plus it has the advantage that you can't get your array indices confused. You could do this even more easily with a [`Set`](http://java.sun.com/javase/6/docs/api/java/util/Set.html). It would look something like this: ``` Set<Box> boxSet = new HashSet<Box>(); for (Box b : boxes) boxSet.add(b); int numOfDifferentShape = boxSet.size(); ``` Note that these last two snippets use features from [Java 1.5](http://java.sun.com/j2se/1.5.0/docs/relnotes/features.html#lang), so I don't know if you've run into them before. Does this make things clearer?
Compare elements of the same array
[ "", "java", "arrays", "comparison", "" ]
This may be a bit of an easy, headdesk sort of question, but my first attempt surprisingly completely failed to work. I wanted to take an array of primitive longs and turn it into a list, which I attempted to do like this: ``` long[] input = someAPI.getSomeLongs(); List<Long> inputAsList = Arrays.asList(input); //Total failure to even compile! ``` What's the right way to do this?
I found it convenient to do using apache commons lang ArrayUtils ([JavaDoc](http://commons.apache.org/proper/commons-lang/javadocs/api-3.1/org/apache/commons/lang3/ArrayUtils.html#toObject(long[])), [Maven dependency](https://mvnrepository.com/artifact/org.apache.commons/commons-lang3/latest)) ``` import org.apache.commons.lang3.ArrayUtils; ... long[] input = someAPI.getSomeLongs(); Long[] inputBoxed = ArrayUtils.toObject(input); List<Long> inputAsList = Arrays.asList(inputBoxed); ``` it also has the reverse API ``` long[] backToPrimitive = ArrayUtils.toPrimitive(objectArray); ``` **EDIT:** updated to provide a complete conversion to a list as suggested by comments and other fixes.
Since Java 8 you can now use streams for that: ``` long[] arr = { 1, 2, 3, 4 }; List<Long> list = Arrays.stream(arr).boxed().collect(Collectors.toList()); ```
Convert an array of primitive longs into a List of Longs
[ "", "java", "arrays", "collections", "boxing", "" ]
Is there any way to get the version and vendor of the compiler used by the user through qmake? What I need is to disable building some targets of my project when g++ 3.x is used and enable them when g++ 4.x is used. **Update:** Most answers targeted the preprocessor. This is something that I want to avoid. I don't want a target to be build for a specific compiler version and I want this decision to be made by the build system.
In addition to [ashcatch](https://stackoverflow.com/questions/801279/finding-compiler-vendor-version-using-qmake/801769#801769)'s answer, `qmake` allows you to [query the command line](http://doc.qt.io/qt-5/qmake-test-function-reference.html#system-command) and get the response back as a variable. So you could do something like this: ``` linux-g++ { system( g++ --version | grep -e "\<4.[0-9]" ) { message( "g++ version 4.x found" ) CONFIG += g++4 } else system( g++ --version | grep -e "\<3.[0-9]" ) { message( "g++ version 3.x found" ) CONFIG += g++3 } else { error( "Unknown system/compiler configuration" ) } } ``` Then later, when you want to use it to specify targets, you can use the config scoping rules: ``` SOURCES += blah blah2 blah3 g++4: SOURCES += blah4 blah5 ```
My answer is based on [Caleb Huitt - cjhuitt](https://stackoverflow.com/a/807398/3045403)'s, but his approach did not work for me. ``` *-g++ { GCC_VERSION = $$system("g++ -dumpversion") contains(GCC_VERSION, 6.[0-9]) { message( "g++ version 6.x found" ) CONFIG += g++6 } else { contains(GCC_VERSION, 5.[0-9]) { message( "g++ version 5.x found" ) CONFIG += g++5 } else { contains(GCC_VERSION, 4.[0-9]) { message( "g++ version 4.x found" ) CONFIG += g++4 } else { message( "Unknown GCC configuration" ) } } } } ``` As you can see, you can get the version from GCC and then compare it against a regex expression. The way to use it is the same: ``` SOURCES += blah blah2 blah3 g++4: SOURCES += blah4 blah5 ```
Finding compiler vendor / version using qmake
[ "", "c++", "qt", "build-process", "makefile", "qmake", "" ]
In Windows, is there a tool to monitor the status of process synchronisation objects? I.e.: * event/mutex: signaled or not signaled * semaphore: count Better yet, to log which thread did what, e.g. "thread #5421 Signal Event - testevt"
[Memory Validator](http://www.memoryvalidator.com/cpp/memory/memval_object.html) [Process Explorer](http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx) [Handle](http://technet.microsoft.com/en-us/sysinternals/bb896655.aspx) usage: handle -s ==> `Print count of each type of handle open.` **[EDIT]**: **How to monitor the status of process synchronization objects using Process Explorer.** Open Process Explorer * Click on your exe in the process section (for ex: MyApp.exe) * Click `Show Lower Pane` (or press `Ctrl+L`). This will show all synchronization objects. (for ex: myEvent) * Right click on synchronization object (for ex: myEvent) and click Properties... in context menu. * This brings the details of the synchronization object. In the `Details` tab, you can see `Event Info` (if synch object is event): Gives information about the state (whether the synchronization object is signaled) `Semaphore info` (if synch object is semaphore): Provides the count of the semaphore.
Take a look at Intel's ThreadChecker and Parallel Studio. Most of their tools sit on top of Visual Studio. <http://software.intel.com/en-us/intel-vtune/> <http://software.intel.com/en-us/intel-thread-checker/>
Is there a tool to monitor synchronisation objects (mutex, events, semaphores) in Windows?
[ "", "c++", "windows", "visual-studio", "synchronization", "" ]
I'm looking for a jquery plugin to simulate a vertical marquee. I need it to support: 1. Scroll any opaque (unstructured) content. No li, no div. The user can even paste from Word. 2. Automatic constant scroll velocity. 3. Pause on hover. 4. Circular scroll - after scrolling to end, continue smoothly from the beginning. No scroll back, no visible jump. I tried all the tickers, serialScroll, carousels, etc... Most require some structure, list, constant width/height items. Also they scroll by full items (scroll, wait, scroll). But the biggest blocker is requirement 4. None provide "fake" tail to smoothly restart scrolling. Have I missed something, or I'll have to write this by myself?
2 live examples of what you want: <http://www.learningjquery.com/2006/10/scroll-up-headline-reader> <http://woork.blogspot.com/2008/10/automatic-news-ticker-with-vertical.html> Both written using jquery. Is this what you are looking for?
I am currently using: <http://jdsharp.us/jQuery/plugins/jdNewsScroll/1.1/> which also seems to do what you requested.
Implementing circular scroller in jquery
[ "", "javascript", "jquery", "" ]
I'm using the following piece of code: ``` $.log('Ending time: ' + ending_time); $.log('Now: ' + new Date()); $.log('Difference: ' + new Date(ending_time - new Date())); ``` The output is the following: ``` Ending time: Thu Apr 23 2009 14:31:29 GMT+0200 Now: Thu Apr 23 2009 11:56:02 GMT+0200 Difference: Thu Jan 01 1970 03:35:26 GMT+0100 ``` I'm using the "difference" to display how many hours and minutes there are left until `ending_time`, but because of the timezone differences, I get the wrong time (offset by one hour.) So is there any neat way of calculating the difference taking timezones into account?
You are no longer dealing with a date, so don't convert it to one. You have a time difference, which doesn't have a time zone for instance. The result should be in milliseconds, so perform the appropriate math to get minutes, hours, days, or possibly all of the above as needed.
You should be able to use the getTimezoneOffset function. Check it out [here](http://www.w3schools.com/jsref/jsref_getTimezoneOffset.asp).
Make a date object take timezone into consideration
[ "", "javascript", "date", "timezone", "" ]
I have a textbox accepting user input; I am trying to use this user input it to populate this member of one of my business objects: ``` public System.Decimal? ExchangeRate ``` The application is localized - I need to support at the same time cultures that accept these as valid inputs: "1,5" and "1.5" The code I have now is: ``` var culture = Thread.CurrentThread.CurrentUICulture; int exchangeRate; int.TryParse(txtExchangeRate.Text, NumberStyles.Number, culture, out exchangeRate); entity.ExchangeRate = exchangeRate; ``` When the user culture is set to a culture that expects the "1,5" format (comma as decimal separator) - e.g "ro-RO", I want the value that gets stored in entity.ExchangeRate to be 1.5; however, when running the code above, it gets converted to 15 instead. Any suggestions on how to convert these various formats so that the data that gets stored in my business entity is "1.5" (point as decimal separator)? Thanks. --- You guys were right - it made sense to use Thread.CurrentThread.CurrentCulture instead of Thread.CurrentThread.CurrentUICulture and decimal.TryParse instead of int.TryParse. But these changes would still not solve my problem. And after playing around with the code some more, I can now simplify the issue to this: I am using a telerik RadNumericTextBox control which enforce users to use the correct format based on their culture. So, when Thread.CurrentThread.CurrentCulture is set to "ro-RO", it will only accept the "1,5" format, and when it's set to "en-GB", it will only accept the "1.5" format. Here's the code I am using now: ``` decimal exchangeRate; decimal.TryParse(txtExchangeRate.Text, out exchangeRate); entity.ExchangeRate = exchangeRate; ``` Case 1: current culture is "en-GB" - accepted input is "1.5" , exchangeRate is set to 1.5 - everything works fine. Case 2: current culture is "ro-RO" - accepted input is "1,5" , but after executing the decimal.TryParse... line, **exchangeRate is set to 15** - wrong, obviously. 
I should also mention that in this case, the value of txtExchangeRate.Text is also shown as "1.5" in my Watch window. So, it looks like decimal.TryParse will take into consideration the current culture, but I can't find a way to actually make it work properly for me. Any suggestions?
OK, here's the code that seems to work on both cases I described in my above post (1. culture "ro-RO", comma as decimal separator and 2. culture "en-GB", dot as decimal separator): ``` decimal exchangeRate; decimal.TryParse(txtExchangeRate.Text, NumberStyles.Any, CultureInfo.InvariantCulture, out exchangeRate); entity.ExchangeRate = exchangeRate; ```
1. Obviously, `int` cannot hold 1.5! :-) Use `float` (or `decimal`) instead.
2. Use CurrentCulture instead of CurrentUICulture. My culture is fr-BE (which therefore accepts 1,5), but my Windows UI is English, which doesn't.
3. I would run the float.Parse() test with both CurrentCulture AND InvariantCulture: by the time some programs learned to accept "1,5", everybody was used to typing "1.5". Nothing bothers me more than Excel requiring me to type 1,5 when I mean 1.5! Also, here in Belgium, the first year the government launched the web-based tax declaration, the site forced you to use commas instead of periods as decimal points. Everybody was wondering why the figures they entered were refused! So be nice to your users and accept both.
ASP.NET - Converting localized number
[ "", "c#", "asp.net", "localization", "culture", "" ]
I like how I can do `string [] stringArray = sz.split('.');` but is there a way to merge them back together? `(stringArray.Merge(".");)`
String.Join
``` string mergedString = String.Join(" ", stringArray); ```
How can I merge back a string I split?
[ "", "c#", "string", "" ]
Given these 3 lists of data and a list of keywords: ``` good_data1 = ['hello, world', 'hey, world'] good_data2 = ['hey, man', 'whats up'] bad_data = ['hi, earth', 'sup, planet'] keywords = ['world', 'he'] ``` I'm trying to write a simple function to check if any of the keywords exist as a substring of any word in the data lists. It should return True for the `good_data` lists and False for `bad_data`. I know how to do this in what seems to be an inefficient way: ``` def checkData(data): for s in data: for k in keywords: if k in s: return True return False ```
In your example, with so few items, it doesn't really matter. But if you have a list of several thousand items, this might help. Since you don't care which element in the list contains the keyword, you can scan the whole list once (as one string) instead of one item at a time. For that you need a join character that you know won't occur in any keyword, in order to avoid false positives. I use the newline in this example. ``` def check_data(data): s = "\n".join(data) for k in keywords: if k in s: return True return False ``` In my completely unscientific test, my version checked a list of 5000 items 100000 times in about 30 seconds. I stopped your version after 3 minutes -- got tired of waiting to post =)
Are you looking for ``` any( k in s for k in keywords ) ``` It's more compact, but might be less efficient.
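Both answers above can be combined into one self-contained sketch; the function and variable names below are illustrative, not taken from the original posts:

```python
# Sketch combining both answers: join the list once (newline is assumed
# never to appear inside a keyword), then let any() short-circuit over
# the keywords.
def contains_keyword(data, keywords):
    blob = "\n".join(data)
    return any(k in blob for k in keywords)

good_data1 = ['hello, world', 'hey, world']
bad_data = ['hi, earth', 'sup, planet']
keywords = ['world', 'he']

print(contains_keyword(good_data1, keywords))  # True
print(contains_keyword(bad_data, keywords))    # False
```

For very large lists the single join avoids re-scanning per item, while `any()` keeps the per-keyword loop compact and readable.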
Search a list of strings for any sub-string from another list
[ "", "python", "" ]
I am trying to submit a form using [Ajax.Updater](http://www.prototypejs.org/api/ajax/updater) and have the result of that update a div element in my page. Everything works great in IE6, FF3, Chrome and Opera. However, in IE7 it sporadically works, but more often than not, it just doesn't seem to do anything. Here's the javascript: ``` function testcaseHistoryUpdate(testcase, form) { document.body.style.cursor = 'wait'; var param = Form.serialize(form); new Ajax.Updater("content", "results/testcaseHistory/" + testcase, { onComplete: function(transport) {document.body.style.cursor = 'auto'}, parameters: param, method: 'post' } ); } ``` I've verified using `alert()` calls that param is set to what I expect. I've read in many places that IE7 caches aggressively and that it might be the root cause, however even after adding the following to my PHP response, it still doesn't work. ``` header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT"); header("Cache-Control: no-store, no-cache, must-revalidate"); header("Cache-Control: post-check=0, pre-check=0", false); header("Pragma: no-cache"); ``` To further try to fix a caching issue I've tried adding a bogus parameter which just gets filled with a random value to have different parameters for every call, but that didn't help. I've also found [this](https://stackoverflow.com/questions/676448/prototype-js-1-6-0-3-ajax-updater-not-working-in-ie7-or-ie8-help-please), where UTF-8 seemed to be causing an issue with IE7, but my page is clearly marked: ``` <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> ``` Does anyone have any idea what could be wrong with IE7 as opposed to the other browsers I tested to cause this kind of issue?
It appears that you cannot do an Ajax.Updater update on a DOM element that is inside a form tag in IE. Has anyone found a way around this? My code works fine when I move it outside the form tag, and also when I just comment out the form tag and don't move the DOM element.
A common problem seems to be the [extra comma problem](https://stackoverflow.com/questions/723759/javscript-jquery-not-executing-in-ie-until-the-body-of-the-page-is-moused-over). Make sure IE is not giving you the bottom left alert icon. I had some trouble with this in the past, because IE was not validating my javascript it would not run as I wished.
Prototype's Ajax.Updater not actually updating on IE7
[ "", "php", "ajax", "caching", "internet-explorer-7", "prototypejs", "" ]
I have a list say: ``` ['batting average', '306', 'ERA', '1710'] ``` How can I convert the intended numbers without touching the strings? Thank you for the help.
``` changed_list = [int(f) if f.isdigit() else f for f in original_list] ```
The data looks like you would know in which positions the numbers are supposed to be. In this case it's probably better to explicitly convert the data at these positions instead of just converting anything that looks like a number: ``` ls = ['batting average', '306', 'ERA', '1710'] ls[1] = int(ls[1]) ls[3] = int(ls[3]) ```
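As a hedged sketch of the comprehension approach, using the list from the question; note that `str.isdigit()` is False for signed numbers ('-3') and decimals ('3.5'), so those would be left untouched:

```python
original_list = ['batting average', '306', 'ERA', '1710']

# Convert entries made purely of digits; leave everything else as-is.
changed_list = [int(f) if f.isdigit() else f for f in original_list]
print(changed_list)  # ['batting average', 306, 'ERA', 1710]
```

If the numeric positions are fixed and known, the explicit index-based conversion in the second answer is safer, since it never mistakes a digit-only label for a number.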
How to convert strings numbers to integers in a list?
[ "", "python", "string", "integer", "numbers", "" ]
I am working with OpenID, just playing around making a class to interact with / authenticate OpenIDs on my site (in PHP). I know there are a few other libraries (like RPX), but I want to use my own (it's good for helping me better understand the protocol and whether it's right for me). The question I have relates to the OpenID discovery sequence. Basically I have reached the point where I am looking at using the XRDS doc to get the local identity (openid.identity) from the claimed identity (openid.claimed\_id). *My question is: do I have to make a cURL request to get the XRDS location (X-XRDS-location) and then make another cURL request to get the actual XRDS doc?* It seems like with a DUMB request I only make one cURL request and get the OpenID server, but have to make two to use the XRDS smart method. Just doesn't seem right; can anyone else give me some info?
To be complete, yes, your RP must HTTP GET on the URL the user gives you, and then search for an XRDS document reference and if found do another HTTP GET from there. Keep in mind that the XRDS may be hosted on a different server, so don't code up anything that would require the connection to be the same between the two requests since it might not be the same connection. If in your initial HTTP GET request you include the HTTP header: ``` Accept: application/xrds+xml ``` Then the page MAY respond immediately with the XRDS document rather than an HTML document that you have to parse for an XRDS link. You'll be able to detect that this has occurred by checking the HTTP response header for application/xrds+xml in its Content-Type header. This is an optimization so that RPs don't typically have to make that second HTTP GET call -- but you can't rely on it happening.
The best advice I can give you is to try to abstract your HTTP requesting a little bit, and then just go through the entire process of doing an HTTP request twice. You can keep your curl instances around if you want to speed things up using persistent connections, but that may or may not be what you want. I hope this helps, and good luck. OpenID is one of the most bulky and convoluted web standards I've come across since WebDAV =) Evert
Open Id XRDS Discovery
[ "", "php", "openid", "openid2", "" ]
I'm creating a rating system using 5-edged stars, and I want the heading to include the average rating. So I've created stars showing 1/5ths. Using "1.2" I'll get a full star and one point on the next star, and so on... But I haven't found a good way to round to the closest .2... I figured I could multiply by 10, round off, and then run a switch to round 1 up to 2, 3 up to 4 and so on. But that seems tedious and unnecessary...
``` round(3.78 * 5) / 5 = 3.8 ```
A flexible solution ``` function roundToNearestFraction( $number, $fractionAsDecimal ) { $factor = 1 / $fractionAsDecimal; return round( $number * $factor ) / $factor; } // Round to nearest fifth echo roundToNearestFraction( 3.78, 1/5 ); // Round to nearest third echo roundToNearestFraction( 3.78, 1/3 ); ```
round number to nearest 0.2 with PHP
[ "", "php", "math", "rounding", "" ]
I have a somewhat complex regular expression which I'm trying to match against a long string (65,535 characters). I'm looking for multiple occurrences of the re in the string, and so am using finditer. It works, but for some reason it hangs after identifying the first few occurrences. Does anyone know why this might be? Here's the code snippet: ``` pattern = "(([ef]|([gh]d*(ad*[gh]d)*b))d*b([ef]d*b|d*)*c)" matches = re.finditer(pattern, string) for match in matches: print "(%d-%d): %s" % (match.start(), match.end(), match.group()) ``` It prints out the first four occurrences, but then it hangs. When I kill it using Ctrl-C, it tells me it was killed in the iterator: ``` Traceback (most recent call last): File "code.py", line 133, in <module> main(sys.argv[1:]) File "code.py", line 106, in main for match in matches: KeyboardInterrupt ``` If I try it with a simpler re, it works fine. I'm running this on python 2.5.4 running on Cygwin on Windows XP. I managed to get it to hang with a very much shorter string. With this 50 character string, it never returned after about 5 minutes: ``` ddddddeddbedddbddddddddddddddddddddddddddddddddddd ``` With this 39 character string it took about 15 seconds to return (and display no matches): ``` ddddddeddbedddbdddddddddddddddddddddddd ``` And with this string it returns instantly: ``` ddddddeddbedddbdddddddddddddd ```
Definitely exponential behaviour. You've got so many `d*` parts to your regexp that it'll be backtracking like crazy when it gets to the long string of d's, but fails to match something earlier. You need to rethink the regexp, so it has fewer possible paths to try. In particular I think: ``` ([ef]d*b|d*)* ``` and ``` ([ef]|([gh]d*(ad*[gh]d)*b))d*b ``` might need rethinking, as they'll force a retry of the alternate match. Plus they also overlap in terms of what they match. They'd both match edb for example, but if one fails and tries to backtrack the other part will probably have the same behaviour. So in short try not to use the `|` if you can and try to make sure the patterns don't overlap where possible.
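The blow-up is easy to reproduce with a far smaller pattern than the OP's. This sketch (illustrative, not the original regex) times a nested-quantifier pattern against a flat equivalent on input that cannot match:

```python
import re
import time

text = "a" * 20 + "c"          # ends in "c", so "...b" can never match

bad = re.compile(r"(a+)+b")    # nested quantifiers: every way of splitting
                               # the "a" run is retried before giving up
good = re.compile(r"a+b")      # same language, no nesting: fails fast

t0 = time.perf_counter(); bad.search(text);  slow = time.perf_counter() - t0
t0 = time.perf_counter(); good.search(text); fast = time.perf_counter() - t0

print(slow > fast)  # True: the nested pattern is dramatically slower
```

Each extra "a" roughly doubles the time for the nested pattern, which matches the behaviour the OP saw when growing the string of d's.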
Could it be that your expression triggers exponential behavior in the Python RE engine? [This article](http://swtch.com/~rsc/regexp/regexp1.html) deals with the problem. If you have the time, you might want to try running your expression in an RE engine developed using those ideas.
finditer hangs when matching against long string
[ "", "python", "regex", "performance", "" ]
I'm working on a project that needs to emulate a keypress of the Windows key. I've tried SendKeys.Send to no avail. Specifically, the Windows key needs to be sent in combination with another key; that is, I want to send the Windows key together with plus / minus.
This may be overkill, but you could try using [AutoItX](http://www.autoitscript.com/autoit3/) which is a way to use AutoIt as a DLL. I've only written standalone scripts, but I know AutoIt makes it very easy to simulate pressing the Windows key. For example, to open the run dialog is just: > Send("#r") ;Win + R = run
I would add that it is **often unlikely for you to find lower level functions like these in the .NET framework**. If you were confused as to why the suggestions both pointed to "non C#" functions, then you probably could use some details on [P/Invoke](http://en.wikipedia.org/wiki/Platform_Invocation_Services). Basically there are ways to define C# functions that "tie" them to Windows API functions that do not exist within .NET assemblies (Instead they are typically implemented in C++ and available as a standard DLL). This process is considered to be "(Windows) Platform Invoking" (thus P/Invoke). It can be a bit wobbly at first to match up all the data types between C++ and C# style calls, but fortunately, there are others out there that have paved the way. The suggested function, [SendInput](http://msdn.microsoft.com/en-us/library/ms646310%28VS.85%29.aspx), has a [PInvoke wrapper](http://www.pinvoke.net/default.aspx/user32.SendInput) over at [PInvoke.net](http://www.pinvoke.net). This wrapper class, when available in your assembly, will allow you to call SendInput as if it were a C# function. PInvoke.net is basically a PInvoke wiki for well known API calls in windows, and typically has a C#/VB.NET wrapper of API calls.
How to send keyboard scan codes manually?
[ "", "c#", "sendkeys", "" ]
I was creating a simple command-line utility and using a dictionary as a sort of case statement, with keywords linking to their appropriate functions. The functions all require different numbers of arguments, so currently, to check whether the user entered the correct number of arguments for each function, I placed the required count inside the dictionary case statement in the form `{Keyword:(FunctionName, AmountofArguments)}`. This current setup works perfectly fine; however, I was just wondering, in the interest of self-improvement, whether there is a way to determine the required number of arguments of a function. My Google attempts have so far returned nothing of value, though I can see how \*args and \*\*kwargs could complicate such a check because of the limitless number of arguments they allow.
[inspect.getargspec()](http://docs.python.org/library/inspect.html#inspect.getargspec): > Get the names and default values of a function’s arguments. A tuple of four things is returned: (args, varargs, varkw, defaults). args is a list of the argument names (it may contain nested lists). varargs and varkw are the names of the \* and \*\* arguments or None. defaults is a tuple of default argument values or None if there are no default arguments; if this tuple has n elements, they correspond to the last n elements listed in args.
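Putting it together for the dispatch-table use case: a sketch that counts the parameters a caller must supply. (The question pre-dates it, but on modern Python `inspect.signature` is the replacement; `getargspec` was removed in 3.11. The function and example names here are made up.)

```python
import inspect

def required_arg_count(func):
    # Count parameters that must be supplied by the caller: positional
    # parameters without a default value.  *args / **kwargs are excluded,
    # since they accept any number of extra arguments.
    sig = inspect.signature(func)
    return sum(
        1 for p in sig.parameters.values()
        if p.default is p.empty
        and p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD)
    )

def dispatch_target(path, mode, verbose=False):
    pass

print(required_arg_count(dispatch_target))  # 2
```

This would let the dispatch dictionary hold just the function, with the required argument count derived at runtime instead of being stored alongside it.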
What you want is in general not possible, because of the use of varargs and kwargs, but `inspect.getargspec` (Python 2.x) and `inspect.getfullargspec` (Python 3.x) come close. * Python 2.x: ``` >>> import inspect >>> def add(a, b=0): ... return a + b ... >>> inspect.getargspec(add) (['a', 'b'], None, None, (0,)) >>> len(inspect.getargspec(add)[0]) 2 ``` * Python 3.x: ``` >>> import inspect >>> def add(a, b=0): ... return a + b ... >>> inspect.getfullargspec(add) FullArgSpec(args=['a', 'b'], varargs=None, varkw=None, defaults=(0,), kwonlyargs=[], kwonlydefaults=None, annotations={}) >>> len(inspect.getfullargspec(add).args) 2 ```
Programmatically determining amount of parameters a function requires - Python
[ "", "python", "parameters", "function", "" ]
Problem: When POSTing data with Python's urllib2, all data is URL encoded and sent as Content-Type: application/x-www-form-urlencoded. When uploading files, the Content-Type should instead be set to multipart/form-data and the contents be MIME-encoded. To get around this limitation some sharp coders created a library called MultipartPostHandler which creates an OpenerDirector you can use with urllib2 to mostly automatically POST with multipart/form-data. A copy of this library is here: [MultipartPostHandler doesn't work for Unicode files](http://peerit.blogspot.com/2007/07/multipartposthandler-doesnt-work-for.html) I am new to Python and am unable to get this library to work. I wrote out essentially the following code. When I capture it in a local HTTP proxy, I can see that the data is still URL encoded and not multi-part MIME-encoded. Please help me figure out what I am doing wrong or a better way to get this done. Thanks :-) ``` FROM_ADDR = 'my@email.com' try: data = open(file, 'rb').read() except: print "Error: could not open file %s for reading" % file print "Check permissions on the file or folder it resides in" sys.exit(1) # Build the POST request url = "http://somedomain.com/?action=analyze" post_data = {} post_data['analysisType'] = 'file' post_data['executable'] = data post_data['notification'] = 'email' post_data['email'] = FROM_ADDR # MIME encode the POST payload opener = urllib2.build_opener(MultipartPostHandler.MultipartPostHandler) urllib2.install_opener(opener) request = urllib2.Request(url, post_data) request.set_proxy('127.0.0.1:8080', 'http') # For testing with Burp Proxy # Make the request and capture the response try: response = urllib2.urlopen(request) print response.geturl() except urllib2.URLError, e: print "File upload failed..." ``` EDIT1: Thanks for your response. I'm aware of the ActiveState httplib solution to this (I linked to it above). 
I'd rather abstract away the problem and use a minimal amount of code to continue using urllib2 how I have been. Any idea why the opener isn't being installed and used?
It seems that the easiest and most compatible way to get around this problem is to use the 'poster' module. ``` # test_client.py from poster.encode import multipart_encode from poster.streaminghttp import register_openers import urllib2 # Register the streaming http handlers with urllib2 register_openers() # Start the multipart/form-data encoding of the file "DSC0001.jpg" # "image1" is the name of the parameter, which is normally set # via the "name" parameter of the HTML <input> tag. # headers contains the necessary Content-Type and Content-Length # datagen is a generator object that yields the encoded parameters datagen, headers = multipart_encode({"image1": open("DSC0001.jpg")}) # Create the Request object request = urllib2.Request("http://localhost:5000/upload_image", datagen, headers) # Actually do the request, and get the response print urllib2.urlopen(request).read() ``` This worked perfectly and I didn't have to muck with httplib. The module is available here: <http://atlee.ca/software/poster/index.html>
Found this recipe to post multipart using `httplib` directly (no external libraries involved) ``` import httplib import mimetypes def post_multipart(host, selector, fields, files): content_type, body = encode_multipart_formdata(fields, files) h = httplib.HTTP(host) h.putrequest('POST', selector) h.putheader('content-type', content_type) h.putheader('content-length', str(len(body))) h.endheaders() h.send(body) errcode, errmsg, headers = h.getreply() return h.file.read() def encode_multipart_formdata(fields, files): LIMIT = '----------lImIt_of_THE_fIle_eW_$' CRLF = '\r\n' L = [] for (key, value) in fields: L.append('--' + LIMIT) L.append('Content-Disposition: form-data; name="%s"' % key) L.append('') L.append(value) for (key, filename, value) in files: L.append('--' + LIMIT) L.append('Content-Disposition: form-data; name="%s"; filename="%s"' % (key, filename)) L.append('Content-Type: %s' % get_content_type(filename)) L.append('') L.append(value) L.append('--' + LIMIT + '--') L.append('') body = CRLF.join(L) content_type = 'multipart/form-data; boundary=%s' % LIMIT return content_type, body def get_content_type(filename): return mimetypes.guess_type(filename)[0] or 'application/octet-stream' ```
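For reference, the same encoding idea works in modern, dependency-free form as well. This is an illustrative sketch (str payloads for brevity; a real upload should build bytes and set Content-Length), not the poster module's API:

```python
import mimetypes
import uuid

def encode_multipart(fields, files):
    """Build (content_type, body) for a multipart/form-data POST.

    fields: dict of name -> str value
    files:  dict of name -> (filename, str data)
    """
    boundary = uuid.uuid4().hex
    lines = []
    for name, value in fields.items():
        lines += ["--" + boundary,
                  'Content-Disposition: form-data; name="%s"' % name,
                  "",
                  value]
    for name, (filename, data) in files.items():
        ctype = mimetypes.guess_type(filename)[0] or "application/octet-stream"
        lines += ["--" + boundary,
                  'Content-Disposition: form-data; name="%s"; filename="%s"' % (name, filename),
                  "Content-Type: " + ctype,
                  "",
                  data]
    lines += ["--" + boundary + "--", ""]
    return "multipart/form-data; boundary=" + boundary, "\r\n".join(lines)

content_type, body = encode_multipart(
    {"email": "my@email.com", "notification": "email"},
    {"executable": ("sample.bin", "FILEDATA")})
print(content_type.startswith("multipart/form-data; boundary="))  # True
```

The resulting content_type string goes into the Content-Type request header and body becomes the POST payload.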
Using MultipartPostHandler to POST form-data with Python
[ "", "python", "file", "upload", "urllib2", "multipartform-data", "" ]
I have a windows forms app with a `maskedtextbox` control that I want to only accept alphabetic values in. Ideally, this would behave such that pressing any other keys than alphabetic keys would either produce no result or immediately provide the user with feedback about the invalid character.
From [MSDN](http://msdn.microsoft.com/en-us/library/system.windows.forms.control.keypress.aspx) (This code shows how to handle the KeyDown event to check for the character that is entered. In this example it is checking for only numerical input. You could modify it so that it would work for alphabetical input instead of numerical): ``` // Boolean flag used to determine when a character other than a number is entered. private bool nonNumberEntered = false; // Handle the KeyDown event to determine the type of character entered into the control. private void textBox1_KeyDown(object sender, System.Windows.Forms.KeyEventArgs e) { // Initialize the flag to false. nonNumberEntered = false; // Determine whether the keystroke is a number from the top of the keyboard. if (e.KeyCode < Keys.D0 || e.KeyCode > Keys.D9) { // Determine whether the keystroke is a number from the keypad. if (e.KeyCode < Keys.NumPad0 || e.KeyCode > Keys.NumPad9) { // Determine whether the keystroke is a backspace. if(e.KeyCode != Keys.Back) { // A non-numerical keystroke was pressed. // Set the flag to true and evaluate in KeyPress event. nonNumberEntered = true; } } } //If shift key was pressed, it's not a number. if (Control.ModifierKeys == Keys.Shift) { nonNumberEntered = true; } } // This event occurs after the KeyDown event and can be used to prevent // characters from entering the control. private void textBox1_KeyPress(object sender, System.Windows.Forms.KeyPressEventArgs e) { // Check for the flag being set in the KeyDown event. if (nonNumberEntered == true) { // Stop the character from being entered into the control since it is non-numerical. e.Handled = true; } } ```
This question has probably been asked and answered a million times on every conceivable programming forum. Every answer provided has the distinction of being unique to the stated requirements. Since you are using a `MaskedTextBox`, you have additional validation features available to you and do not really need to handle keypresses. You can simply set the Mask property to something like "L" (character required) or "?" (optional characters). In order to show feedback to the user that the input is not acceptable, you can use the `BeepOnError` property or add a Tooltip to show the error message. This feedback mechanism should be implemented in the `MaskedInputRejected` event handler. The `MaskedTextBox` control offers a `ValidatingType` property to check input that passes the requirements of the Mask, but may not be the correct datatype. The `TypeValidationCompleted` event is raised after this type validation and you can handle it to determine results. If you still need to handle keypress events, then read on...! The method I would recommend in your case is that instead of handling the `KeyDown` event (you ostensibly do not need advanced key handling capability) or using a Regex to match input (frankly, overkill), I would simply use the built-in properties of the Char structure. ``` private void maskedTextBox1_KeyPress(object sender, KeyPressEventArgs e) { Char pressedKey = e.KeyChar; if (Char.IsLetter(pressedKey) || Char.IsSeparator(pressedKey) || Char.IsPunctuation(pressedKey)) { // Allow input. e.Handled = false; } else { // Stop the character from being entered into the control since it is not a letter, punctuation, or a space. e.Handled = true; } } ``` Note that this snippet allows you to handle punctuation and separator keys as well.
How to make Textbox only accept alphabetic characters
[ "", "c#", "winforms", "" ]
Our coding guidelines prefer `const_iterator`, because they are a little faster compared to a normal `iterator`. It seems like the compiler optimizes the code when you use `const_iterator`. Is this really correct? If yes, what really happens internally that makes `const_iterator` faster? EDIT: I wrote a small test to check `const_iterator` vs `iterator` and found varying results: for iterating 10,000 objects `const_iterator` was taking a few milliseconds (around 16 ms) less. But **not always**. There were iterations in which both were equal.
If nothing else, a `const_iterator` *reads* better, since it tells anyone reading the code "I'm just iterating over this container, not messing with the objects contained". That's a great big win, never mind any performance differences.
The guideline we use is: *Always prefer const over non-const* If you tend to use const objects, you get used to performing only constant operations on the objects you get, and that means using *const\_iterator* as much as possible. Constness has a *viral* property. Once you get to use it, it propagates to all your code. Your non-mutating methods become constant, and that requires using only constant operations on the attributes, and passing constant references around, which itself forces only constant operations... To me, the performance advantage of using constant iterators over non-constant iterators (if any at all) is much less important than the improvement in the code itself. Operations meant (designed) to be non-mutating *are* constant.
Are const_iterators faster?
[ "", "c++", "stl", "iterator", "const-iterator", "" ]
Here's a sample of a SpinBox that writes its changes to underlying variables. The main problem that I'm having is valueChanged is called when the widget is constructed. Is there a more elegant way to do this? I think it's weird that I connected a widget to itself, but valueChanged isn't virtual. ``` class ValueWriterInt: public QSpinBox { Q_OBJECT public: ValueWriterInt(vector<int*> const& value): myValue(value) { QObject::connect(this, SIGNAL(valueChanged(int)), this, SLOT(valueChanged(int))); } ~ValueWriterInt() {} private slots: void valueChanged(int new_value) { for (auto it = myValue.begin(); it != myValue.end(); ++it) **it = new_value; } private: vector<int*> myValue; }; ```
I see nothing particularly weird about connecting a widget to itself. Having a single method of detecting and responding to data updates actually sounds like a good thing because you have fewer points of failure to check when you are debugging. In your specific case, it is causing some undesired behavior, but in general it is a fine solution. Now, having expressed the opinion that a reflexive connection isn't inherently inelegant, I am going to suggest a less than "elegant" solution to prevent the calling of `valueChanged` after construction. You can have a flag to determine whether the object was just constructed and return early to prevent the code being run immediately after construction. In your example: ``` class ValueWriterInt: public QSpinBox { Q_OBJECT public: ValueWriterInt(vector<int*> const& value): myValue(value), myAfterInit(true) { QObject::connect(this, SIGNAL(valueChanged(int)), this, SLOT(valueChanged(int))); } ~ValueWriterInt() {} private slots: void valueChanged(int new_value) { if (myAfterInit) { myAfterInit = false; return; } for (auto it = myValue.begin(); it != myValue.end(); ++it) **it = new_value; } private: vector<int*> myValue; bool myAfterInit; }; ``` That isn't too bad of a solution. It will at least give you your desired behavior until (and if) you can find a more elegant method.
So what are you trying to accomplish here? Yep, valueChanged ain't virtual -- why should it be, your objects should directly connect *their own* slots to whatever signals they want to react to, no?
In qt, how do I implement a widget that stays consistent with variables in the code
[ "", "c++", "qt", "qt4", "" ]
Outside of the source code, is there any extensive documentation on the Magento ORM? I get the basics, and can usually dig through the Mage code base, litter it with Mage::Log calls and figure out something that works, but my efficiency would go way up if I had a high level overview of how the models are **intended** to be used. How do the aggregate methods work, what's the best way to join, when should you extend the models, when's the best time to use the eav models vs. the non-eav ones, etc. Something like the [Django Model Documentation](http://docs.djangoproject.com/en/dev/topics/db/models/), but for Magento.
Since this question was answered, some documentation has been created on the ORM at the magento knowledge base. Introductory information: <http://www.magentocommerce.com/knowledge-base/entry/magento-for-dev-part-5-magento-models-and-orm-basics> Advanced ORM usage and the EAV system: <http://www.magentocommerce.com/knowledge-base/entry/magento-for-dev-part-7-advanced-orm-entity-attribute-value> Working with the Varien collections (lazy loading, filtering, etc...): <http://www.magentocommerce.com/knowledge-base/entry/magento-for-dev-part-8-varien-data-collections>
While there isn't much documentation on the ORM, here were a few pages that greatly helped in my understanding of the database structure which will help you understand everything else: <http://www.magentocommerce.com/wiki/development/magento_database_diagram> <http://www.magentocommerce.com/boards/viewthread/7359/>
Magento ORM Documentation
[ "", "php", "django", "orm", "magento", "entity-attribute-value", "" ]
I want to create a simple image viewer in WPF that will enable the user to: * Pan (by mouse dragging the image). * Zoom (with a slider). * Show overlays (rectangle selection for example). * Show original image (with scroll bars if needed). Can you explain how to do it? I didn't find a good sample on the web. Should I use ViewBox? Or ImageBrush? Do I need ScrollViewer?
The way I solved this problem was to place the image within a Border with its ClipToBounds property set to True. The RenderTransformOrigin on the image is then set to 0.5,0.5 so the image will start zooming on the center of the image. The RenderTransform is also set to a TransformGroup containing a ScaleTransform and a TranslateTransform. I then handled the MouseWheel event on the image to implement zooming (since the RenderTransform is a TransformGroup, the ScaleTransform has to be pulled out of the group): ``` private void image_MouseWheel(object sender, MouseWheelEventArgs e) { var st = (ScaleTransform)((TransformGroup)image.RenderTransform) .Children.First(tr => tr is ScaleTransform); double zoom = e.Delta > 0 ? .2 : -.2; st.ScaleX += zoom; st.ScaleY += zoom; } ``` To handle the panning, the first thing I did was to handle the MouseLeftButtonDown event on the image, to capture the mouse and to record its location; I also store the current value of the TranslateTransform, which is what is updated to implement panning. ``` Point start; Point origin; private void image_MouseLeftButtonDown(object sender, MouseButtonEventArgs e) { image.CaptureMouse(); var tt = (TranslateTransform)((TransformGroup)image.RenderTransform) .Children.First(tr => tr is TranslateTransform); start = e.GetPosition(border); origin = new Point(tt.X, tt.Y); } ``` Then I handled the MouseMove event to update the TranslateTransform. ``` private void image_MouseMove(object sender, MouseEventArgs e) { if (image.IsMouseCaptured) { var tt = (TranslateTransform)((TransformGroup)image.RenderTransform) .Children.First(tr => tr is TranslateTransform); Vector v = start - e.GetPosition(border); tt.X = origin.X - v.X; tt.Y = origin.Y - v.Y; } } ``` Finally, don't forget to release the mouse capture. ``` private void image_MouseLeftButtonUp(object sender, MouseButtonEventArgs e) { image.ReleaseMouseCapture(); } ``` As for the selection handles for resizing, this can be accomplished using an adorner; check out [this article](http://msdn.microsoft.com/en-us/library/ms746703.aspx) for more information.
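The TranslateTransform update in the MouseMove handler is just one line of vector arithmetic; a one-dimensional sketch of the same update (names invented) makes the sign convention easy to check:

```python
def pan_offset(origin_offset, drag_start, mouse_now):
    # Mirrors the handler: v = start - current mouse position,
    # new offset = offset captured at drag start minus that vector.
    v = drag_start - mouse_now
    return origin_offset - v

print(pan_offset(10.0, 100.0, 130.0))  # 40.0: a 30 px drag right moves the image 30 px right
```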
After using samples from this question I've made a complete version of a pan & zoom app with proper zooming relative to the mouse pointer. All pan & zoom code has been moved to a separate class called ZoomBorder. *ZoomBorder.cs* ``` using System.Linq; using System.Windows; using System.Windows.Controls; using System.Windows.Input; using System.Windows.Media; namespace PanAndZoom { public class ZoomBorder : Border { private UIElement child = null; private Point origin; private Point start; private TranslateTransform GetTranslateTransform(UIElement element) { return (TranslateTransform)((TransformGroup)element.RenderTransform) .Children.First(tr => tr is TranslateTransform); } private ScaleTransform GetScaleTransform(UIElement element) { return (ScaleTransform)((TransformGroup)element.RenderTransform) .Children.First(tr => tr is ScaleTransform); } public override UIElement Child { get { return base.Child; } set { if (value != null && value != this.Child) this.Initialize(value); base.Child = value; } } public void Initialize(UIElement element) { this.child = element; if (child != null) { TransformGroup group = new TransformGroup(); ScaleTransform st = new ScaleTransform(); group.Children.Add(st); TranslateTransform tt = new TranslateTransform(); group.Children.Add(tt); child.RenderTransform = group; child.RenderTransformOrigin = new Point(0.0, 0.0); this.MouseWheel += child_MouseWheel; this.MouseLeftButtonDown += child_MouseLeftButtonDown; this.MouseLeftButtonUp += child_MouseLeftButtonUp; this.MouseMove += child_MouseMove; this.PreviewMouseRightButtonDown += new MouseButtonEventHandler( child_PreviewMouseRightButtonDown); } } public void Reset() { if (child != null) { // reset zoom var st = GetScaleTransform(child); st.ScaleX = 1.0; st.ScaleY = 1.0; // reset pan var tt = GetTranslateTransform(child); tt.X = 0.0; tt.Y = 0.0; } } #region Child Events private void child_MouseWheel(object sender, MouseWheelEventArgs e) { if (child != null) { var st = GetScaleTransform(child); var tt = GetTranslateTransform(child); double zoom = e.Delta > 0 ? .2 : -.2; if (!(e.Delta > 0) && (st.ScaleX < .4 || st.ScaleY < .4)) return; Point relative = e.GetPosition(child); double absoluteX; double absoluteY; absoluteX = relative.X * st.ScaleX + tt.X; absoluteY = relative.Y * st.ScaleY + tt.Y; st.ScaleX += zoom; st.ScaleY += zoom; tt.X = absoluteX - relative.X * st.ScaleX; tt.Y = absoluteY - relative.Y * st.ScaleY; } } private void child_MouseLeftButtonDown(object sender, MouseButtonEventArgs e) { if (child != null) { var tt = GetTranslateTransform(child); start = e.GetPosition(this); origin = new Point(tt.X, tt.Y); this.Cursor = Cursors.Hand; child.CaptureMouse(); } } private void child_MouseLeftButtonUp(object sender, MouseButtonEventArgs e) { if (child != null) { child.ReleaseMouseCapture(); this.Cursor = Cursors.Arrow; } } void child_PreviewMouseRightButtonDown(object sender, MouseButtonEventArgs e) { this.Reset(); } private void child_MouseMove(object sender, MouseEventArgs e) { if (child != null) { if (child.IsMouseCaptured) { var tt = GetTranslateTransform(child); Vector v = start - e.GetPosition(this); tt.X = origin.X - v.X; tt.Y = origin.Y - v.Y; } } } #endregion } } ``` **MainWindow.xaml** ``` <Window x:Class="PanAndZoom.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:PanAndZoom" Title="PanAndZoom" Height="600" Width="900" WindowStartupLocation="CenterScreen"> <Grid> <local:ZoomBorder x:Name="border" ClipToBounds="True" Background="Gray"> <Image Source="image.jpg"/> </local:ZoomBorder> </Grid> </Window> ``` **MainWindow.xaml.cs** ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows; using System.Windows.Controls; using System.Windows.Data; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Imaging; using System.Windows.Navigation; using System.Windows.Shapes; namespace PanAndZoom { public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); } } } ```
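The zoom-relative-to-pointer arithmetic in child_MouseWheel has a simple invariant: the image point under the cursor keeps its screen position. A one-dimensional Python sketch of the same calculation (invented names) demonstrates it:

```python
def zoom_about(scale, offset, cursor, delta):
    # screen = image_coord * scale + offset.  Capture the cursor's screen
    # position, change the scale, then solve for the offset that keeps
    # that screen position unchanged.
    screen = cursor * scale + offset
    scale += delta
    offset = screen - cursor * scale
    return scale, offset

s, o = zoom_about(1.0, 0.0, cursor=50.0, delta=0.2)
print(abs(50.0 * s + o - 50.0) < 1e-9)  # True: the point under the cursor stays put
```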
Pan & Zoom Image
[ "", "c#", "wpf", "xaml", "zooming", "pan", "" ]
I have an n-tier system where a frontend templating layer makes calls out to a backend application server. For instance, I need to retrieve some search results, and the frontend calls the backend to get the results. Both the templating engine and the appserver are written in PHP. I currently use PHPed to initiate debug sessions to the templating engine, however, when the http request goes out to remote service my debugger just sits and waits for the IO to complete. What I would like to do is emulate the HTTP call but really just stay inside my PHP process, do a giant push of the environment onto some kind of stack, then have my appserver environment load and process the call. After the call is done, I do an env pop, and get the results of the http call in a var (for instance, via an output buffer). I *can* run both services on the same server. Does anyone have any ideas or libraries that already do this?
Can you not run a debugger and set a breakpoint in the appserver too? Two different debug sessions - one to trap the templating engine call and one to trap the call in the appserver. You should be able to trace the output from the appserver in the templating engine debugging session. If it is not possible to run two debug sessions then create some test inputs for the appserver by capturing outputs from the templating engine and use a single debugger with your test appserver inputs.
This is embarrassingly crude, and quite free of any study of how the debugger works, but have you tried adding ``` debugBreak(); ``` at the entry points to your called routine? (Assuming both processes running on the same machine). I have used this technique to break back into a process called via AMFPHP. I have had a PHP file loading Flash file into browser, which then calls back to PHP using AMFPHP, all on the same server. When I hit the debugBreak() line, PhpED regains control.
Best method for debugging a called webservice in php?
[ "", "php", "debugging", "phped", "" ]
I get why an HttpServlet would throw ServletException, but why IOException? What was the reasoning behind this?
From the docs: > IOException - if an input or output error is detected when the servlet handles the GET request This can happen while producing the servlet output: `response.getWriter()`, for example, is declared to throw IOException. The socket can be closed before the response has finished writing its output.
I'd suggest that an `IOException` indicates a problem with processing input/output, e.g. problems reading from the request input, or writing the response, whereas a `ServletException` has more to do with servlet-specific problems, such as errors regarding servlet provisioning/initialisation, and processing requests.
Why does HttpServlet throw an IOException?
[ "", "java", "servlets", "" ]
What's the best and easiest way to build (for Linux) a C++ application which was written in Visual Studio? The code itself is ready - I used only cross-platform libs. Is it possible to prepare everything under Windows in Visual Studio and then build it with a CLI tool under Linux? Are there any docs describing this? **EDIT:** Some more information: * Libs used: stl, wxwidgets, boost, asio, cryptlib. * Very little Linux know-how. **EDIT#2:** I chose the following solution: Make new project with kdevelop and compile everything there.
We're using CMake for Linux projects. CMake can generate KDevelop and Visual Studio project files, so you can just create your CMake file as the origin of platform-specific IDE files. The KDevelop generator is fine, so you can edit and compile in KDevelop (which will in turn call Make). On the other hand, if you have nothing fancy, you can use CMake or just Make to build the thing on Linux, if you want to stay with your solution file (which is what I'm currently doing for a library with test applications). This approach gets complicated when your build process is more than just "throw these files at the compiler and let's see what it does", so the CMake solution is in my opinion a good thing for cross-platform development.
8 years later ... I stumbled across this question today and thought you might like to take a look at [Visual C++ for Linux](https://blogs.msdn.microsoft.com/vcblog/2016/03/30/visual-c-for-linux-development/). Released in March 2016, VC++ for Linux allows you to create a project in VC++, and then build it in a Linux machine/VM using native Linux tools. What's more, you can also **debug** your app from within VS since the Linux tools allow the VS debugger to drive GDB in your Linux machine/VM via SSH!! Looks like this is almost exactly what @mspoerr was asking for :)
How to build a Visual C++ Project for Linux?
[ "", "c++", "visual-studio", "linux", "visual-c++", "" ]
I am using reflection to get the values out of an anonymous type: ``` object value = property.GetValue(item, null); ``` when the underlying value is a nullable type (T?), how can I get the underlying type when the value is null? Given ``` int? i = null; type = FunctionX(i); type == typeof(int); // true ``` Looking for FunctionX(). Hope this makes sense. Thanks.
You can do something like this: ``` if (type.IsGenericType) { Type genericType = type.GetGenericArguments()[0]; } ``` EDIT: For general-purpose use: ``` public Type GetTypeOrUnderlyingType(Type type) { if (!type.IsGenericType) { return type; } return type.GetGenericArguments()[0]; } ``` usage: ``` Type type = GetTypeOrUnderlyingType(typeof(int?)); type == typeof(int); // true ``` This will work for any generic type, not just Nullable. Note that it has to be given the declared `Type` (for instance `property.PropertyType` from your reflection code) rather than the value itself, since a null `Nullable<T>` boxes to a null reference and offers nothing to call `GetType()` on.
If you know that it is `Nullable<T>`, you can use the static helper `GetUnderlyingType(Type)` in `Nullable`. ``` type = Nullable.GetUnderlyingType(typeof(int?)); type == typeof(int); // true ```
C# Reflected Property Type
[ "", "c#", "reflection", "" ]
In our group we primarily do search engine architecture and content integration work and most of that code base is in Python. All our build tools and Python module dependencies are in source control so they can be checked out and the environment loaded for use regardless of os/platform, kinda similar to the approach [virtualenv](http://lucumr.pocoo.org/2008/7/5/virtualenv-to-the-rescue "virtualenv") uses. For years we've maintained a code base compatible with Python 2.3 because one of the commercial products we use depends on Python 2.3. Over the years this has caused more and more issues as newer tools and libraries need newer versions of Python since 2.3 came out in ~2004. We've recently decoupled our build environment from dependencies on the commercial product's environment and can use any version of Python (or Java) we want. It's been about a month or so since we standardized on Python 2.6 as the newest version of Python that is backwards compatible with previous versions. Python 3.0 is not an option (for now) since we'd have to migrate too much of our code base to make our build and integration tools work correctly again. We like many of the new features of Python 2.6, especially the improved modules and things like class decorators, but many modules we depend on cause the Python 2.6 interpreter to spout various deprecation warnings. Another tool we're interested in for managing EC2 cloud cluster nodes, [Supervisor](http://supervisord.org/ "Supervisor"), doesn't even work correctly with Python 2.6. Now I am wondering if we should standardize on Python 2.5 for now instead of using Python 2.6 in development of production environment tools. Most of the tools we want/need seem to work correctly with Python 2.5. We're trying to sort this out now before there are many dependencies on Python 2.6 features or modules. Many Thanks! -Michael
I wouldn't abandon 2.6 just because of deprecation warnings; those will disappear over time. (You can use the `-W ignore` option to the Python interpreter to prevent them from being printed out, at least) But if modules you need to use actually don't work with Python 2.6, that would be a legitimate reason to stay with 2.5. Python 2.5 is in wide use now and probably will be for a long time to come (consider how long 2.3 has lasted!), so even if you go with 2.5, you won't be forced to upgrade for a while. I use Python 2.5 for all my development work, but only because it's the version that happens to be available in Gentoo (Linux)'s package repository. When the Gentoo maintainers declare Python 2.6 "stable"`*`, I'll switch to that. Of course, this reasoning wouldn't necessarily apply to you. `*` Python 2.6 actually is stable, the reason it's not declared as such in Gentoo is that Gentoo relies on other programs which themselves depend on Python and are not yet upgraded to work with 2.6. Again, this reasoning probably doesn't apply to you.
My company is standardized on 2.5. Like you we can't make the switch to 3.0 for a million reasons, but I very much wish we could move up to 2.6. Doing coding day to day I'll be looking through the documentation and I'll find exactly the module or function that I want, but then it'll have the little annotation: New in Version 2.6 I would say go with the newest version, and if you have deprecation warnings pop up (there will probably be very few) then just go in and find a better way to do it. Overall your code will be better with 2.6.
What version of Python (2.4, 2.5, 2.6, 3.0) do you standardize on for production development efforts (and why)?
[ "", "python", "standards", "production", "supervisord", "" ]
Sorry, my newbie question :P If I run "java -version" in the cmd on a Windows system, am I *guaranteed* that the system will be able to run .jar files *if I don't get any error*?
I guess the only guaranteed way to check for a JRE is to try to run a small Java program. Or maybe not even that - I suppose conceivably a system could have only part of the Java standard library installed, in which case a small test JAR might work fine but a full program might not. Although I can't imagine why anyone would go to the trouble of setting a system up that way.
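To make that idea concrete, here is a minimal sketch of such a probe (the class name is made up; it deliberately touches a couple of core-library classes beyond the launcher itself, so a partial install is a bit more likely to fail):

```java
// Minimal JRE sanity probe: exercises the standard library a little
// beyond what "java -version" checks.
import java.util.Arrays;

public class JreProbe {
    public static String probe() {
        // Touch a couple of core library classes, not just the launcher.
        int[] data = {3, 1, 2};
        Arrays.sort(data);
        return System.getProperty("java.version") + ":" + Arrays.toString(data);
    }

    public static void main(String[] args) {
        System.out.println(probe());
    }
}
```

If this compiles and prints a version string, the launcher and at least the core library are present; a missing or broken install would fail at launch or at the library call.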
From the command line you should be able to invoke "java -version", which will print the currently installed version information, or return an error if Java is not installed.
An easy bulletproof technique to check if the system has jre (windows)
[ "", "java", "windows", "cmd", "" ]
I am struggling to port a Perl program to Java, and learning Java as I go. A central component of the original program is a [Perl module](http://search.cpan.org/~jfreeman/File-SortedSeek-0.015/lib/File/SortedSeek.pm) that does string prefix lookups in a +500 GB sorted text file using binary search (essentially, "seek" to a byte offset in the middle of the file, backtrack to nearest newline, compare line prefix with the search string, "seek" to half/double that byte offset, repeat until found...) I have experimented with several database solutions but found that nothing beats this in sheer lookup speed with data sets of this size. Do you know of any existing Java library that implements such functionality? Failing that, could you point me to some idiomatic example code that does random access reads in text files? Alternatively, I am not familiar with the new (?) Java I/O libraries but would it be an option to memory-map the 500 GB text file (I'm on a 64-bit machine with memory to spare) and do binary search on the memory-mapped byte array? I would be very interested to hear any experiences you have to share about this and similar problems.
I am a *big* fan of Java's **[`MappedByteBuffers`](http://download.oracle.com/javase/1.5.0/docs/api/index.html?java/nio/MappedByteBuffer.html)** for situations like this. It is blazing fast. Below is a snippet I put together for you that maps a buffer to the file, seeks to the middle, and then searches backwards to a newline character. This should be enough to get you going? I have similar code (seek, read, repeat until done) in my own application, benchmarked `java.io` streams against `MappedByteBuffer` in a production environment and posted the results on my blog ([Geekomatic posts tagged 'java.nio'](http://geekomatic.ch/tags/java.nio/) ) with raw data, graphs and all. Two second summary? *My `MappedByteBuffer`-based implementation was about 275% faster.* YMMV. To work for files larger than ~2GB, which is a problem because of the cast and `.position(int pos)`, I've crafted a paging algorithm backed by an array of `MappedByteBuffer`s. You'll need to be working on a 64-bit system for this to work with files larger than 2-4GB because MBB's use the OS's virtual memory system to work their magic.
``` import static java.nio.channels.FileChannel.MapMode.READ_ONLY; import java.io.File; import java.io.FileInputStream; import java.io.IOException; import java.nio.MappedByteBuffer; import java.nio.channels.FileChannel; import java.util.ArrayList; import java.util.List; public class StusMagicLargeFileReader { private static final long PAGE_SIZE = Integer.MAX_VALUE; private List<MappedByteBuffer> buffers = new ArrayList<MappedByteBuffer>(); private final byte raw[] = new byte[1]; public static void main(String[] args) throws IOException { File file = new File("/Users/stu/test.txt"); FileChannel fc = (new FileInputStream(file)).getChannel(); StusMagicLargeFileReader buffer = new StusMagicLargeFileReader(fc); long position = file.length() / 2; String candidate = buffer.getString(position--); while (position >= 0 && !candidate.equals("\n")) candidate = buffer.getString(position--); //have newline position or start of file...do other stuff } StusMagicLargeFileReader(FileChannel channel) throws IOException { long start = 0, length = 0; for (long index = 0; start + length < channel.size(); index++) { if ((channel.size() / PAGE_SIZE) == index) length = (channel.size() - index * PAGE_SIZE); else length = PAGE_SIZE; start = index * PAGE_SIZE; buffers.add((int) index, channel.map(READ_ONLY, start, length)); } } public String getString(long bytePosition) { int page = (int) (bytePosition / PAGE_SIZE); int index = (int) (bytePosition % PAGE_SIZE); raw[0] = buffers.get(page).get(index); return new String(raw); } } ```
I have the same problem. I am trying to find all lines that start with some prefix in a sorted file. Here is a method I cooked up which is largely a port of Python code found here: <http://www.logarithmic.net/pfh/blog/01186620415> I have tested it but not thoroughly just yet. It does not use memory mapping, though. ``` public static List<String> binarySearch(String filename, String string) { List<String> result = new ArrayList<String>(); try { File file = new File(filename); RandomAccessFile raf = new RandomAccessFile(file, "r"); long low = 0; long high = file.length(); long p = -1; while (low < high) { long mid = (low + high) / 2; p = mid; while (p >= 0) { raf.seek(p); char c = (char) raf.readByte(); //System.out.println(p + "\t" + c); if (c == '\n') break; p--; } if (p < 0) raf.seek(0); String line = raf.readLine(); //System.out.println("-- " + mid + " " + line); if (line.compareTo(string) < 0) low = mid + 1; else high = mid; } p = low; while (p >= 0) { raf.seek(p); if (((char) raf.readByte()) == '\n') break; p--; } if (p < 0) raf.seek(0); while (true) { String line = raf.readLine(); if (line == null || !line.startsWith(string)) break; result.add(line); } raf.close(); } catch (IOException e) { System.out.println("IOException:"); e.printStackTrace(); } return result; } ```
Binary search in a sorted (memory-mapped ?) file in Java
[ "", "java", "nio", "large-files", "binary-search", "memory-mapping", "" ]
I am currently working on a web app that will replace old systems in some office. (very old as in they are in FoxPro) My current task is to develop a very desktop-like fast UI, meaning like, trying not to use the mouse at all, so they can capture data fairly quickly and they do it almost without even looking. They expect things like: Using the arrow keys to navigate, jumping to the next field when they are done filling the current one, pressing enter at one field and one list with data come up for them to choose (using arrow keys to navigate again), etc. I can get this done with javascript fairly easy, but since I was asked to help with this project because the time frame to get it done is very short, **What libraries, controls, or similar tools can help me to do this quickly?**
You haven't mentioned the kind of browser support you require. This web app sounds like it will need to catch and handle quite a few keyboard events. Different browsers handle events differently. So, you will need to keep that in mind. Yes, it is relatively straightforward to roll your own Key handling Javascript, but it is definitely better to use a free public framework like JQuery, Prototype or Dojo. Rather than suggesting one over the other (the SO community seems to have a special soft corner for JQuery, trust me on this!), I would say, check them all out and decide on your own. You may also want to look into pre-built(commercial or otherwise) custom controls that provide the kind of application functionality you need. For instance, if you require a spreadsheet kind of data entry interface, many controls are available on the web.
Use jQuery... and forget about cross-browser DOM handling. jQuery has great support in VS.NET.
Rapid Web UI Development
[ "", "asp.net", "javascript", "user-interface", "" ]
If `actions` is a Panel, am I able to raise the `Click` event of it's parent? I have this code at the moment, but the `Click` event isn't a method, so this code is invalid. Does anyone know how I can achieve this? ``` actions.Click += delegate(object Sender, EventArgs e) { ((Panel)Sender).Parent.Click(); } ```
It's not possible to raise a C# `event` directly from another `class` (even if it's `public`). You could provide a method (with a sufficient access modifier) that raises the `event` on your behalf and call that method in the other `class`. By the way, this is possible with reflection, but I consider that a dirty hack.
Most types in the CLR that raise events have a protected On[EventName] method that takes care of raising the event. You can invoke this protected method "from the outside" using Reflection: ``` Control parent = ((Control)sender).Parent; Type parentType = parent.GetType(); MethodInfo onClickMethod = parentType.GetMethod("OnClick", BindingFlags.Instance | BindingFlags.NonPublic); onClickMethod.Invoke(parent, new object[] { e }); ```
Raising external object's events in C#
[ "", "c#", ".net", "events", "raise", "" ]
I am creating a very simple file search, where the search database is a text file with one file name per line. The database is built with PHP, and matches are found by grepping the file (also with PHP). This works great in Linux, but [not on Mac when non-ascii characters are used](https://sourceforge.net/tracker/?func=detail&aid=2766482&group_id=208076&atid=1004244). It looks like names are encoded differently on HFS+ (MacOSX) than on e.g. ext3 (Linux). Here's a test.php: ``` <?php $mystring = "abcóüÚdefå"; file_put_contents($mystring, ""); $h = dir('.'); $h->read(); // "." $h->read(); // ".." $filename = $h->read(); print "string: $mystring and filename: $filename are "; if ($mystring == $filename) print "equal\n"; else print "different\n"; ``` When run MacOSX: ``` $ php test.php string: abcóüÚdefå and filename: abcóüÚdefå are different $ php test.php |cat -evt string: abcóü?M-^Zdefå$ and filename: abco?M-^Au?M-^HU?M-^Adefa?M-^J are different$ ``` When run on Linux (or on a nfs-mounted ext3 filesystem on MacOSX): ``` $ php test.php string: abcóüÚdefå and filename: abcóüÚdefå are equal $ php test.php |cat -evt string: abcM-CM-3M-CM-<M-CM-^ZdefM-CM-% and filename: abcM-CM-3M-CM-<M-CM-^ZdefM-CM-% are equal$ ``` Is there a way to make this script return "equal" on both platforms?
MacOSX uses normalization form D (NFD) to encode UTF-8, while [most other systems use NFC](http://www.j3e.de/linux/convmv/man/#hfs__on_os_x___darwin). ![NFC vs NFD](https://i.stack.imgur.com/MxjJB.jpg) ([from unicode.org](http://unicode.org/reports/tr15/)) There are [several](https://github.com/widop/phpbb3/blob/master/includes/utf/utf_normalizer.php) [implementations](https://web.archive.org/web/20130511114752/http://rishida.net/code/showsource.php?source=normalization/n11n.php) on NFD to NFC conversion. Here I've used the PHP [Normalizer class](http://php.net/manual/en/class.normalizer.php) to detect NFD strings and convert them to NFC. It's available in PHP 5.3 or through the [PECL Internationalization extension](http://pecl.php.net/package/intl). The following amendment will make the script work: ``` ... $filename = $h->read(); if (!normalizer_is_normalized($filename)) { $filename = normalizer_normalize($filename); } ... ```
It seems that Mac OS X/HFS+ is using character combinations instead of single characters. So the `ó` (U+00F3) is instead encoded as `o` (U+006F) + `´` (U+0301, COMBINING ACUTE ACCENT). See also [Apple’s Unicode Decomposition Table](http://developer.apple.com/technotes/tn/tn1150table.html).
In PHP, how do I deal with the difference in encoded filenames on HFS+ vs. elsewhere?
[ "", "php", "macos", "unicode", "utf-8", "unicode-normalization", "" ]
I'm trying to use the HttpListener class in a C# application to have a mini webserver serve content over SSL. In order to do this I need to use the httpcfg tool. I have a .pfx file with my public and private key pair. If I import this key pair manually using mmc into the local machine store, everything works fine. However, if I import this key pair programmatically using the X509Store class, I am not able to connect to my mini webserver. Note that in both methods the cert is getting imported to the MY store in LocalMachine. Oddly, I am able to view the certificate in mmc once I programmatically import it and when I view it, the UI indicates that a private key is also available for this certificate. Digging a little deeper, I notice that when I manually import the key pair, I can see a new file appear in `C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys`, but one does not appear when I import programmatically. On a related note, when I delete a manually imported certificate, it does not remove the corresponding private key file from the previously mentioned directory. Ultimately, my question is this: When I programmatically add the certificate to the store, where is the private key being stored and why isn't it accessible to the HttpListener class (HttpApi)? Note that this question is slightly related but I don't think permissioning is the problem since this is all being done as the same Windows user: [How to set read permission on the private key file of X.509 certificate from .NET](https://stackoverflow.com/questions/425688/how-to-set-read-permission-on-the-private-key-file-of-x-509-certificate-from-ne)
Ok, I figured it out. It had to do with the key storage parameters for the certificate object. For anyone else that runs into this problem, make sure you construct your `X509Certificate2` objects that you are adding to the store using the `X509KeyStorageFlags.PersistKeySet` and `X509KeyStorageFlags.MachineKeySet` flags. This will force the private key to persist in the machine key set location which is required by `HttpApi` (`HttpListener` wraps this).
Is this a 2 way SSL? If it is then did you send over a SSL Certificate Request file generated on your machine? This Certificate Request file will be used to create the SSL and they together form a public private key pair. Also did you try assigning the cert permission for the user account that is being used to run the web app? You can do this by using the Microsoft WSE 3.0 tool.
Import certificate with private key programmatically
[ "", "c#", ".net", "ssl", "ssl-certificate", "" ]
**A bit background:** I've got a page with a table and a number of checkboxes. The page is generated in asp.net. Each row has a checkbox, there's a checkbox in the header, and in certain cells there will be groups of check boxes (you get the picure lots of checkboxes). Each of these check boxes currently works fine with a little bit of javascript magic in their onclick events. so you have something like: ``` <td><input type="checkbox" id="sellRow1" onclick="javascript:highlightRow(this, 'AlternateRowStyle1');"/></td> ``` Not much of a surprise there then. **Ok so the here's the problem:** So this works fine however I need each of the check boxes to reflect the states of other checkboxes. So for example: the checkbox in the header changes the values of the row checkboxes, changes to the row checkboxes can change the header check box etc. I know what you're thinking: easy just call that Javascript function `highlightRow`. But if I did how would I get the parameters (ok the `this` is easy but where on earth could I get that `'AlternateRowStyle1'`?) **So I guess the question is:** Where do I put those parameters so I can get at them with JS in a nice cross browser way. (`<PossibleRedHerring>`tried putting custom attributes on each checkbox but wasn't sure that was the correct way to go`</PossibleRedHerring>`), also I'd prefer not having to keep calling back to the server if that's at all avoidable. --- (btw sorry if this is a bit badly formatted / written, I'm extraordinarily tired!) --- **Update:** Ok so in the end I managed to dodge the custom attributes as noticed that there was a hierarchy to the check boxes. This meant I was able to trigger the click event of the child checkboxes (which inturn would call it's childrens' click event etc) luckily in this case the flow will never go in the opposite direction causing an infinite loop (there are a lot of comments / documentation to point this out!) 
The only interesting thing with this is the difference between click events in IE and in Firefox, Chrome and Safari. IE allows anything to have a click, whereas the others limit click to INPUT elements of type button, checkbox, radio, reset or submit. I kind of wanted to use event bubbling to attach the click events to an element that contained a group of checkboxes. In the end I went with a bit of a hack: ``` // In IE every element supports Click whilst Firefox (also Chrome and Safari) only supports INPUT elements of type button, checkbox, radio, reset or submit // https://developer.mozilla.org/en/DOM/element.click // this function allows both browsers to support click on all elements function FireClickEvent(element) { if (element.click) { element.click(); } else { // We don't have a click on this element, so add our own. var evt = document.createEvent("MouseEvents"); evt.initMouseEvent("click", true, true, window, 0, 0, 0, 0, 0, false, false, false, false, 0, null); element.dispatchEvent(evt); } } ``` Think that could be somewhat improved but it does the business for now. Should also admit this was my first shot at proper javascript. It's a bit of a scary language (esp when hitting the dom!) interesting though, am looking forward to spending a bit of time delving in further.
You can do this quite easily by using jQuery. You can define some custom attributes on the checkboxes depending upon their position, pick up the value of those attributes on click, and manipulate the CSS of the rows and checkboxes the way you want. That's how you can define an alternate row color for the table using jQuery: ``` $("table tr:nth-child(even)").addClass("striped"); <style> .striped{ background-color:#efefef; } </style> ```
I think custom attributes is indeed your solution, can't see any problem with that. Although I would put something like an alternate-row-style as an attribute of the row, and not as an attribute of the checkbox.
Wiring up javascript events and passing parameters
[ "", "javascript", "" ]
IIS enables us to also configure Asp.Net file mappings. Thus besides aspx, IIS also invokes Asp.Net runtime, when requests have the following file extensions: a) .ascx --> .asmx extension is used to request user controls. * Since user controls can’t be accessed directly, how and why would anyone send a request to a user control? b) .ashx --> this extension is used for HTTP handlers. • But why would you want to request an .ashx page directly instead of registering this handler inside configuration file and enable it to be called when files with certain ( non ashx ) extensions are requested? • Besides, since there can be several Http handlers registered, how will Asp.Net know which handler to invoke if they all use ashx extension? • What does the requested ashx file contain? Perhaps a definition of a Http handler class? • I know how we register Http handlers to be invoked when non-ashx pages are requested, but how do we register Http handler for ashx page? c) .asax --> This extension is used to request a global application file • Why would we ever want to call Global.asax directly? thanx --- Q - Besides Asp.Net being able to request global.asax for compilation, is there any other reason why I would choose to request file with .asax extension directly? > • ashx files don't have to be registered. They are basically a simpler aspx, for when you don't need the entire page life cycle. A common use is for retrieving dynamic images from a database. So if I write a Http handler, I should put it in a file with .ashx extension and Asp.Net will build an HttpHandler object similarly to how it builds a page instance from .aspx file? > • If a hacker did try to make a request for one of these files, what would you want to happen? You certainly wouldn't want IIS to treat it like a text file and send the source for your app down to the browser. Asp.Net could do the same it does with .cs, .csproj, .config, .resx, .licx, .webinfo file types. 
Namely, it registers these file types with IIS so that it can explicitly prevent users from accessing these files > •Just because you don't expect requests from the browser for a resource, it doesn't mean you don't want that resource handled by the asp.net engine. These extensions are also how ASP.Net picks up files to compile for the web site model sites. But then why doesn’t Asp.Net also allow .cs, .csproj, .config, .resx, .licx, .webinfo files to be directly requested? > a) and c) - as far as I am aware, these are not exposed to process any external requests my book claims the two are mapped in IIS I appreciate your help EDIT: > b) The .ashx extention is defined in a config file it's just not the web.config, its in the machine.config > > `<add path="*.ashx" verb="*" type="System.Web.UI.SimpleHandlerFactory" validate="True" />` > > <http://msdn.microsoft.com/en-us/library/bya7fh0a.aspx> > > Why use .ashx: The difference is that the .NET class that handles a .ashx reads the Page directive in the .ashx file to map the request to a class specified in that directive. This saves you from having to put an explicit path in the web.config for every handler that you have, which could result in a very long web.config. I thought Http handler class was defined inside .ashx file, but instead file with .ashx extension only contains Page directive? Since I’m not 100% sure if I understand this correctly: Say we have ten Http handlers we want to invoke by making a request to IIS7. I assume for each Http handler there will be specific .ashx file --> thus if request is made for FirstHandler.asxh, then handler specified inside that file will be invoked? **YET ANOTHER EDIT:** I must confess that I’m still a bit unsure about ashx extension. I realize that by using it we can for example create 'hey.ashx' page, where Page directive will tell which class ( Http handler) to invoke when request is made for 'hey.ashx' – thus no need to register Http handler in web.config. 
But if you use Http handlers that way, then they will only get invoked when requests are made for files with .ashx extension. Thus, if I want Http handler to be invoked for files with other extensions, such as .sourceC, then I will still need to register Http handler in web.config?!
To definitely clear any confusion you might have on what asp.net does with these requests, check the web.config in: `%systemroot%\Microsoft.NET\Framework\v2.0.50727\CONFIG` As you can see (posted mine below), asp.net excludes pretty much any of the files that you are unsure if they were receiving special treatment. Notice there is \*.cs, \*.acsx, \*.asax. ``` <add path="*.asax" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.ascx" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.master" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.skin" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.browser" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.sitemap" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.dll.config" verb="GET,HEAD" type="System.Web.StaticFileHandler" validate="True"/> <add path="*.exe.config" verb="GET,HEAD" type="System.Web.StaticFileHandler" validate="True"/> <add path="*.config" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.cs" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.csproj" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.vb" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.vbproj" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.webinfo" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.licx" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.resx" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.resources" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.mdb" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.vjsproj" verb="*" type="System.Web.HttpForbiddenHandler" 
validate="True"/> <add path="*.java" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.jsl" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.ldb" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.ad" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.dd" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.ldd" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.sd" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.cd" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.adprototype" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.lddprototype" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.sdm" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.sdmDocument" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.mdf" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.ldf" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.exclude" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> <add path="*.refresh" verb="*" type="System.Web.HttpForbiddenHandler" validate="True"/> ``` Also, bear in mind that IIS might not be configured to map some requests (MIME types) to the ASP.NET pipeline.
A few points: * asmx files are not the same as ascx files. You use them for web services (soap) rather than web controls. * ashx files don't *have* to be registered. They are basically a *simpler* aspx, for when you don't need the entire page life cycle. A common use is for retrieving dynamic images from a database. * If a hacker *did* try to make a request for one of these files, what would you want to happen? You certainly wouldn't want IIS to treat it like a text file and send the source for your app down to the browser. * Just because you don't expect requests from the browser for a resource, it doesn't mean you don't want that resource handled by the asp.net engine. These extensions are also how ASP.Net picks up files to compile for the web site model sites.
IIS7 file mappings - .asax, .ashx, .ascx
[ "", "c#", "asp.net", "iis", "iis-7", "httphandler", "" ]
I am testing my Java application for performance bottlenecks. The application uses concurrent.jar for locking purposes. I have a computation-heavy call which uses lock and unlock functions for its operations. On removing the lock-unlock mechanism from the code, I have seen performance degrade several-fold, contrary to my expectations. Among other things, I observed an increase in CPU consumption, which made me feel that the program was running faster, but actually it was not. Q1. What can be the reason for this degradation in performance when we remove locks? Best Regards !!!
This can be quite a common finding, depending on what you're doing and what you're using as an alternative to Locks. Essentially, what happens is that constructs such as ReentrantLock have some logic built into them that knows "when to back off" when they realistically can't acquire the lock. This reduces the amount of CPU that's burnt just in the logic of repeatedly trying to acquire the lock, which can happen if you use simpler locking constructs. As an example, have a look at the graph I've hurriedly put up [here](http://www.javamex.com/tutorials/lock_performance.shtml). It shows the throughput of threads continually accessing random elements of an array, using different constructs as the locking mechanism. Along the X axis is the number of threads; Y axis is throughput. The blue line is a ReentrantLock; the yellow, green and brown lines use variants of a spinlock. Notice how with low numbers of threads, the spinlock gives higher throughput as you might expect, but as the number of threads ramps up, the back-off logic of ReentrantLock kicks in, and it ends up doing better, while with high contention, the spinlocks just sit burning CPU. By the way, this was really a trial run done on a dual-processor machine; I also ran it in the Amazon cloud (effectively an 8-way Xeon) but I've ahem... mislaid the file, but I'll either find it or run the experiment again soon and post an update. But you get an essentially similar pattern as I recall. **Update:** whether it's in locking code or not, a phenomenon that can happen on some multiprocessor architectures is that as the multiple processors do a high volume of memory accesses, you can end up flooding the memory bus, and in effect the processors slow each other down. (It's a bit like with ethernet-- the more machines you add to the network, the more chance of collisions as they send data.)
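As a toy illustration of the two strategies that graph compares (the class and method names here are invented, and this is a sketch, not a benchmark — real contention behaviour depends on hardware and thread counts): a spinlock burns CPU while waiting, whereas `ReentrantLock` can park the waiting thread.

```java
// Toy comparison: a CAS-based spinlock vs java.util.concurrent's ReentrantLock.
// Both guard the same counter correctly; they differ in what waiting threads do.
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

public class LockSketch {
    private static final AtomicBoolean spin = new AtomicBoolean(false);
    private static final ReentrantLock lock = new ReentrantLock();
    static long counter = 0;

    static void withSpinLock(Runnable body) {
        while (!spin.compareAndSet(false, true)) { /* burn CPU until free */ }
        try { body.run(); } finally { spin.set(false); }
    }

    static void withReentrantLock(Runnable body) {
        lock.lock();          // may park the thread instead of spinning
        try { body.run(); } finally { lock.unlock(); }
    }

    public static long count(int threads, int perThread, boolean useSpin) {
        counter = 0;
        Thread[] pool = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            pool[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    Runnable inc = () -> counter++;
                    if (useSpin) withSpinLock(inc); else withReentrantLock(inc);
                }
            });
            pool[i].start();
        }
        try {
            for (Thread t : pool) t.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return counter;
    }

    public static void main(String[] args) {
        // Both variants give the correct total under contention.
        System.out.println(count(4, 10000, true));
        System.out.println(count(4, 10000, false));
    }
}
```

Timing the two `count` calls under varying thread counts reproduces the qualitative shape of the graph above, though absolute numbers will vary wildly by machine.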
Profile it. Anything else here will be just a guess, and an uninformed one at that. Using a profiler like [YourKit](http://yourkit.com/) will not only tell you which methods are "hot spots" in terms of CPU time, but it will also tell you where threads are spending most of their time BLOCKED or WAITING.
Java Performance Degradation on removing locks
[ "", "java", "performance", "" ]
How would I write a function in Java that generates all the 3\*3 two-dimensional arrays such that every number between 1 and 9 appears exactly once? Thanks!
Basically, you are looking for all permutations of the series `[1, 2, ... 9]`, but notice that you split them up to a matrix of `3x3`. It is easy to prove mathematically that the number of permutations is `9!` (factorial 9). There are many algorithms to generate permutations. Choose the one you find the most convenient. For example, [at wikipedia](http://en.wikipedia.org/wiki/Permutation#Algorithms_to_generate_permutations).
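As a concrete sketch of that approach (the identifiers below are my own, not from any particular algorithm reference), here is self-contained Java code that recursively permutes the digits 1..9 by swapping, reshapes each permutation into a `3x3` array, and collects all `9! = 362880` of them:

```java
import java.util.ArrayList;
import java.util.List;

public class GridPermutations {
    static final List<int[][]> grids = new ArrayList<>();

    // Swap-based recursive permutation: fixes position k, then recurses.
    static void permute(int[] nums, int k) {
        if (k == nums.length) {
            // Reshape the flat permutation into a 3x3 grid.
            int[][] grid = new int[3][3];
            for (int i = 0; i < 9; i++) grid[i / 3][i % 3] = nums[i];
            grids.add(grid);
            return;
        }
        for (int i = k; i < nums.length; i++) {
            int tmp = nums[k]; nums[k] = nums[i]; nums[i] = tmp; // choose
            permute(nums, k + 1);
            tmp = nums[k]; nums[k] = nums[i]; nums[i] = tmp;     // un-choose
        }
    }

    public static void main(String[] args) {
        permute(new int[]{1, 2, 3, 4, 5, 6, 7, 8, 9}, 0);
        System.out.println(grids.size());
    }
}
```

Note that materialising all 362,880 grids takes tens of megabytes; if you only need to process each grid, work on it inside `permute` instead of storing it.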
**Note:** this is not actually homework according to the OP. How far have you gotten? It sounds like homework, and that is fine, SO will probably help, but you should show us how much you've done. SO is about making better programmers, so show us what you can do and we'll help out
Generate all 3*3 two-dimensional arrays
[ "", "java", "matrix", "" ]
I'd like to have the browser act as if the user had pressed the Tab key when they click on something. In the click handler I've tried the following approaches: ``` var event = document.createEvent('KeyboardEvent'); event.initKeyEvent("keypress", true, true, null, false, false, false, false, 9, 0); this.input.focus()[0].dispatchEvent(event); ``` And jQuery: ``` this.input.focus().trigger({ type : 'keypress', which : 9 }); ``` ...which I took from [here](https://stackoverflow.com/questions/596481/simulate-javascript-key-events). The first approach seems to be the best bet, but doesn't quite work. If I change the last two parameters to 98, 98, indeed, a 'b' is typed into the input box. But 9, 0 and 9, 9 (the former of which I took right from the MDC web site) both give me these errors in firebug under FF3: ``` Permission denied to get property XULElement.popupOpen [Break on this error] this.input.focus()[0].dispatchEvent(event); Permission denied to get property XULElement.overrideValue [Break on this error] this.input.focus()[0].dispatchEvent(event); Permission denied to get property XULElement.selectedIndex [Break on this error] this.input.focus()[0].dispatchEvent(event); Permission denied to set property XULElement.selectedIndex [Break on this error] this.input.focus()[0].dispatchEvent(event); ``` I've heard such (with no clear definition of 'such') events are 'untrusted', which might explain these errors. The second approach causes whatever value I put as event.which to be passed as event.which, but to no effect (even if I use 98 instead of 9, no 'b' is typed in the box.) If I try setting event.data in the object I'm passing, it ends up undefined when the event is triggered. What follows is the code I'm using to view that: ``` $('#hi').keypress(function(e) { console.log(e); }); ``` Any other ideas?
The solution I ended up going with is to create a "focus stealer" div (with tabindex = -1--can have the focus but can't be tabbed to initially) on either side of the area in which I want to manually manage the focus. Then I put a bubbling-true event listener for focus and blur on the whole area. When any focus occurs on the area, the tabindex values are changed to -1, and when any blur occurs, they're changed to 0. This means that while focused in the area, you can tab or shift-tab out of it and correctly end up on other page elements or browser UI elements, but as soon as you focus out of there, the focus stealers become tabbable, and on focus they set up the manual area correctly and shunt the focus over to the element at their end, as if you had clicked on one end or the other of the manual area.
This is the solution I used on our webapp for two custom controls, a pop-up calendar and a pop-up unit / value weight selector (clicking the text box pops up a div with two selects)

```
function tab_focus(elem) {
    var fields = elem.form.getElements();
    for(var i=0;i<fields.length;i++) {
        if(fields[i].id == elem.id){
            for(i=i+1;i<fields.length;i++){
                if(fields[i].type != 'hidden'){
                    fields[i].focus();
                    return;
                }
            }
            break;
        }
    }
    elem.form.focusFirstElement();
}
```

This is using the Prototype framework and expects an extended element (i.e. $('thing\_id')) as its parameter. It gets the form the element belongs to, and loops through the elements of the form until it finds itself. It then looks for the first element after it that is not hidden, and passes it the focus. If there are no elements after it in the form, it moves the focus back to the first element in the form. I could instead find the next form on the page through document.forms, but most of our pages use a single form.
Simulating a tab keypress using JavaScript
[ "", "javascript", "jquery", "html", "events", "keyboard", "" ]
I have written the following code: ``` Dim report As New ReportDocument report.PrintOptions.PrinterName = "\\network\printer" report.Load(CrystalReportName.rpt, OpenReportMethod.OpenReportByDefault) report.PrintToPrinter(1, False, 0, 0) ``` When I run this code, it shows the error message "Invalid Printer Specified". If I give the local printer name, it works fine, but I am not able to print the crystal report directly to the network printer. Kind help needed. Thanks in advance. Sivakumar.P
Maybe the printer name is wrong. Please use the following code to debug what name comes up when choosing the network printer <http://www.codeproject.com/KB/printing/printtoprinter.aspx> and then assign the proper name. If it still does not work, there might be a permission issue; have a look at <http://forums.asp.net/t/1383129.aspx> Best of luck.
Use this code to find the installed printers ``` Imports System.Drawing Imports System.Drawing.Printing ``` and this code in the load function... it will fill a combobox with the printers and their names correctly, and then you can use your code ``` For Each Printer As String In PrinterSettings.InstalledPrinters cmbPrinters.Items.Add(Printer) Next ```
How to print a Crystal Report directly to a network printer in VB.NET or C# in Windows applications
[ "", "c#", "vb.net", "" ]
What is the right way to implement the DataBind() method for a control which has a repeater inside of it? These are the requirements for this control (but you can offer yours if you think these are missing something or are an overhead) * Control should accept a collection or an enumerable (list of objects, anonymous objects, dictionaries or data table) * DataSource should be completely decoupled from the control (using Data\*Field properties to specify the properties or keys mapped; like DataValueField and DataTextField in DropDownList) * The control should go easy on ViewState. If possible ViewState shouldn't be used at all, or its usage should be as low as possible (store some ID or something like this) * The control should handle any type (converting it using ToString()) * Inside ItemDataBound, e.DataItem should be accessible if possible I want my control to be initialized like so: ``` var control = new Control(); control.DataDateField = "Date"; control.DataNameField = "FullName"; control.DataTextField = "Comment"; control.DataSource = data; control.DataBind(); ``` And the data item can be one of the following List of dictionaries (or table rows) ``` var data = new List<Dictionary<string, string>> { new Dictionary<string, string> {{"Date", "2009-03-15"}, {"FullName", "John Walker"}, {"Comment", "comment1"}}, new Dictionary<string, string> {{"Date", "2009-03-12"}, {"FullName", "Chuck Norris"}, {"Comment", "comment2"}}, new Dictionary<string, string> {{"Date", "2009-03-13"}, {"FullName", "Sergej Andrejev"}, {"Comment", "comment3"}} }; ``` List of anonymous objects ``` var data = new List<object> { new {Date = "2009-03-15", FullName = "John Walker", Comment = "comment1"}, new {Date = "2009-03-12", FullName = "Chuck Norris", Comment = "comment2"}, new {Date = "2009-03-13", FullName = "Sergej Andrejev", Comment = "comment3"}, }; ``` List of objects ``` public class SampleClass { public object Date { get; set; } public object FullName { get; set; } 
public object Comment { get; set; } public SampleClass(string date, string fullName, string comment) { Date = date; FullName = fullName; Comment = comment; } }; var data = new List<SampleClass> { new SampleClass("2009-03-15", "John Walker", "comment1"), new SampleClass("2009-03-12", "Chuck Norris", "comment2"), new SampleClass("2009-03-13", "Sergej Andrejev", "comment3"), }; ``` DataTable ``` var data = new DataTable(); data.Columns.Add(new DataColumn { DataType = typeof(DateTime), ColumnName = "Date" }); data.Columns.Add(new DataColumn { DataType = typeof(string), ColumnName = "FullName" }); data.Columns.Add(new DataColumn { DataType = typeof(string), ColumnName = "Comment" }); data.Rows.Add(new object[] { DateTime.Parse("2009-03-15"), "John Walker", "comment1" }); data.Rows.Add(new object[] { DateTime.Parse("2009-03-12"), "Chuck Norris", "comment2" }); data.Rows.Add(new object[] { DateTime.Parse("2009-03-13"), "Sergej Andrejev", "comment3" }); ``` Basically what I want is universal code for data binding so I wouldn't waste time creating it from scratch every time I create new control. I would appreciate any references to good practices, official guides and of course your personal experience.
This article works up an example databound control with templating: [Building DataBound Templated Custom ASP.NET Server Controls](http://msdn.microsoft.com/en-us/library/aa479322.aspx)
Here is what I think you are looking for... <http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.databoundcontrol.aspx> It is a base class that your control can inherit from...
Right way to implement data binding in ASP.NET
[ "", "c#", "asp.net", "" ]
Simple question - is there any way to select a sub-set of the text displayed in a <textarea> control using Javascript? e.g. have a function like selectText(startCharNo, endCharNo, textareaName); It also needs to be IE6 compatible.
yes, it is possible ``` element.focus(); if(element.setSelectionRange) element.setSelectionRange(startCharNo, endCharNo); else { var r = element.createTextRange(); r.collapse(true); r.moveEnd('character', endCharNo); r.moveStart('character', startCharNo); r.select(); } ``` element is the reference to the textarea
createTextRange() <http://www.developerfusion.com/forum/thread/48987/>
How to select a character range in a textarea using Javascript?
[ "", "javascript", "internet-explorer-6", "textarea", "" ]
I have a function that binds LINQ results to the controls on my form. The below code works but I just can't get over the feeling I should be slapped for the copy/paste aspect. Can someone help me with what i need to do to deodorize this? Thank You! ``` private void BindDataToForm() { // Bind data to form CaseNotesDataContext db = new CaseNotesDataContext(); Table<CN_MaintItem> caseNotesItems = db.GetTable<CN_MaintItem>(); // For each object var contactType = from cType in caseNotesItems where cType.CategoryID == 2 select cType.ItemDescription; chkContactType.DataSource = contactType; var contactLocation = from cLocation in caseNotesItems where cLocation.CategoryID == 3 select cLocation.ItemDescription; lkuContactLocation.Properties.DataSource = contactLocation; var contactMethod = from cMethod in caseNotesItems where cMethod.CategoryID == 4 select cMethod.ItemDescription; lkuContactMethod.Properties.DataSource = contactMethod; var contactWith = from cWith in caseNotesItems where cWith.CategoryID == 5 select cWith.ItemDescription; chkContactWith.DataSource = contactWith; var domains = from d in caseNotesItems where d.CategoryID == 6 select d.ItemDescription; chkDomains.DataSource = domains; } ```
I don't know if this really solves anything but try: ``` public static class MyExtensions { public static IQueryable<string> GetItemDescriptions(this Table<CN_MaintItem> table, int cat) { return from x in table where x.CategoryID == cat select x.ItemDescription; } } ``` (Note: an extension method must be declared `static` inside a static class.) So you can pull the information like this: ``` using (CaseNotesDataContext db = new CaseNotesDataContext()) { Table<CN_MaintItem> caseNotesItems = db.GetTable<CN_MaintItem>(); chkContactType.DataSource = caseNotesItems.GetItemDescriptions(2); lkuContactLocation.Properties.DataSource = caseNotesItems.GetItemDescriptions(3); // etc... } ```
The trick is to store the information required to bind the controls explictly. You could create a `IDictionary<Control, Int32>` to store the category id for each control. Then just iterate over all controls, look up the id in the dictionary, and finally bind it if an entry exists. You can extend this method by creating a dictionary with a delegates as values. This allows you to perform different queries for different controls. Further you could also use the `Control.Tag` property to store this information.
Refactoring method that binds controls to LINQ
[ "", "c#", ".net", "winforms", "linq", "refactoring", "" ]
I am a beginner with GDB and I have got it working correctly. However, I am wondering how it is used in big projects. I have a project whose build is done using a makefile and g++. For GDB to work, we need to compile with debug symbols on, right (g++ -g files)? **Question** 1. Do I need to create a new target in the makefile, something like "debug", so that I can make a debug build with *make debug*? Is this the best practice? 2. Suppose I need to debug only foo.cpp: is it possible to generate debug symbols only for that file, rather than building the whole program (including main) with them? Any thoughts?
1. Not needed, although you may want to consider always building with -g (sometimes, you may even need to try and debug optimized (-O1, -O2, etc) code; why not leave -g on? For releases, you can always just run strip on the binary. 2. Yes. Build just that file with -g .
I don't think there is a big difference between the usage of gdb in big, medium or small projects. However, for big projects you must consider the amount of space required for the build, because the debugging info increases the size of the object and executable files. 1. If you initially underestimate the need for debugging of the whole solution you will likely suffer from your decision in the future. It is always good when the build could be done with or without debugging information, so write your build scripts carefully. 2. Yes, but consider my previous answer. Sometimes the problem could be coming from a module for which you don't have debugging info.
Debugging using gdb - Best practices
[ "", "c++", "makefile", "gdb", "" ]
I have the following scenario: 1. Entities are loaded from the database. 2. One of them is presented to the user in a Form (a WPF UserControl) where the user can edit properties of that entity. 3. The user can decide to apply the changes to the entity or to cancel the editing. How would I implement something like this with the EntityFramework? My problem is that, when I bind the UI directly to the Properties of the Entity, every change is instantanously applied to the entity. I want to delay that to the moment where the user presses OK and the entity **is validated** successfully. I thought about loading the Entities with `NoTracking` and calling `ApplyPropertyChanges` after the detached entity has been validated, but I'm not entirely sure about the correct way to do that. The docu of the EntityFramework at MSDN is very sparse. Another way I could think of is to `Refresh` the entity with `StoreWins`, but I don't like resetting the changes at Cancel instead of applying changes at Ok. Has anyone a good tutorial or sample?
One option is, as you said, to do a no-tracking query. ``` ctx.Customers.MergeOption = MergeOption.NoTracking; var customer = ctx.Customers.First(c => c.ID == 232); ``` Then `customer` can be modified as required in memory, and nothing is actually happening in the context. Now when you actually want to make the change you can do this: ``` // get the value from the database var original = ctx.Customers.First(c => c.ID == customer.ID); // copy values from the changed entity onto the original. ctx.ApplyPropertyChanges("Customers", customer); ctx.SaveChanges(); ``` Now if you are uncomfortable with the query, either for performance or concurrency reasons, you could add a new extension method AttachAsModified(...) to ObjectContext that looks something like this: ``` public static void AttachAsModified<T>( this ObjectContext ctx, string entitySet, T entity) { ctx.AttachTo(entitySet, entity); ObjectStateEntry entry = ctx.ObjectStateManager.GetObjectStateEntry(entity); // get all the property names var propertyNames = from s in entry.CurrentValues.DataRecordInfo.FieldMetadata select s.FieldType.Name; // mark every property as modified foreach(var propertyName in propertyNames) { entry.SetModifiedProperty(propertyName); } } ``` Now you can write code like this: ``` ctx.Customers.MergeOption = MergeOption.NoTracking; var customer = ctx.Customers.First(); // make changes to the customer in the form ctx.AttachAsModified("Customers", customer); ctx.SaveChanges(); ``` And now you have no concurrency or extraneous queries. The only problem now is dealing with FK properties. You should probably look at my index of tips for help here: <http://blogs.msdn.com/alexj/archive/2009/03/26/index-of-tips.aspx> Hope this helps Alex
The normal way of doing this is binding to something that implements [`IEditableObject`](http://msdn.microsoft.com/en-us/library/system.componentmodel.ieditableobject.aspx). If and how that fits in with the entity framework, I'm not sure.
Changing Entities in the EntityFramework
[ "", "c#", ".net", "wpf", "entity-framework", "" ]
### Duplicate: > [Finding previous page Url](https://stackoverflow.com/questions/772780/finding-previous-page-url) --- How do I find the referring url that brought a user to my site in ASP.NET? --- ### See Also: > [How can I find what search terms (if any) brought a user to my site?](https://stackoverflow.com/questions/57004/how-can-i-find-what-search-terms-if-any-brought-a-user-to-my-site)
``` Request.UrlReferrer ``` But this isn't guaranteed to be correct or even exist.
Request.UrlReferrer can be used to retrieve the previous page url from where the current page has been redirected.
Finding the referring url that brought a user to my site
[ "", "c#", "asp.net", "" ]
I have a performance problem with an Oracle select statement that I use in a cursor. In the statement one of the terms in the `SELECT` clause is expensive to evaluate (it's a PL/SQL procedure call, which accesses the database quite heavily). The `WHERE` clause and `ORDER BY` clauses are straightforward, however. I expected that Oracle would first perform the `WHERE` clause to identify the set of records that match the query, then perform the `ORDER BY` clause to order them, and finally evaluate each of the terms in the `SELECT` clause. As I'm using this statement in a cursor from which I then pull results, I expected that the expensive evaluation of the `SELECT` term would only be performed as needed, when each result was requested from the cursor. However, I've found that this is not the sequence that Oracle uses. Instead it appears to evaluate the terms in the `SELECT` clause for each record that matches the WHERE clause before performing the sort. Due to this, the procedure that is expensive to call is called for every result result in the result set before any results are returned from the cursor. I want to be able to get the first results out of the cursor as quickly as possible. Can anyone tell me how to persuade Oracle not to evaluate the procedure call in the SELECT statement until after the sort has been performed? This is all probably easier to describe in example code: Given a table `example` with columns `a`, `b`, `c` and `d`, I have a statement like: ``` select a, b, expensive_procedure(c) from example where <the_where_clause> order by d; ``` On executing this, `expensive_procedure()` is called for every record that matches the `WHERE` clause, even if I open the statement as a cursor and only pull one result from it. 
I've tried restructuring the statement as: ``` select a, b, expensive_procedure(c) from example, (select example2.rowid, ROWNUM from example example2 where <the_where_clause> order by d) where example.rowid = example2.rowid; ``` Where the presence of `ROWNUM` in the inner `SELECT` statement forces Oracle to evaluate it first. This restructuring has the desired performance benefit. Unfortunately it doesn't always respect the ordering that is required. Just to be clear, I know that I won't be improving the time it takes to return the entire result set. I'm looking to improve the time taken to return the first few results from the statement. I want the time taken to be progressive as I iterate over the results from the cursor, not all of it to elapse before the first result is returned. Can any Oracle gurus tell me how I can persuade Oracle to stop executing the PL/SQL until it is necessary?
Why join EXAMPLE to itself in the in-line view? Why not just: ``` select /*+ no_merge(v) */ a, b, expensive_procedure(c) from ( select a, b, c from example where <the_where_clause> order by d ) v; ```
If your `WHERE` conditions are equalities, i. e. ``` WHERE col1 = :value1 AND col2 = :value2 ``` you can create a composite index on `(col1, col2, d)`: ``` CREATE INDEX ix_example_col1_col2_d ON example(col1, col2, d) ``` and hint your query to use it: ``` SELECT /*+ INDEX (e ix_example_col1_col2_d) */ a, b, expensive_procedure(c) FROM example e WHERE col1 = :value1 AND col2 = :value2 ORDER BY d ``` In the example below, `t_even` is a `1,000,000` rows table with an index on `value`. Fetching `100` columns from this query: ``` SELECT SYS_GUID() FROM t_even ORDER BY value ``` is instant (`0,03` seconds), while this one: ``` SELECT SYS_GUID() FROM t_even ORDER BY value + 1 ``` takes about `170` seconds to fetch first `100` rows. `SYS_GUID()` is quite expensive in `Oracle` As proposed by others, you can also use this: ``` SELECT a, b, expensive_proc(c) FROM ( SELECT /*+ NO_MERGE */ * FROM mytable ORDER BY d ) ``` , but using an index will improve your query response time (how soon the first row is returned).
Lazy evaluation of Oracle PL/SQL statements in SELECT clauses of SQL queries
[ "", "sql", "performance", "oracle", "" ]
How do I access objects of an anonymous type outside the scope where it is declared? For example: ``` void FuncB() { var obj = FuncA(); Console.WriteLine(obj.Name); } ??? FuncA() { var a = (from e in DB.Entities where e.Id == 1 select new {Id = e.Id, Name = e.Name}).FirstOrDefault(); return a; } ```
As the other answers have stated, **you really shouldn't do this**. But, if you insist, then there's a nasty hack known as "cast by example" which will allow you to do it. The technique is mentioned in a couple of articles, [here](http://tomasp.net/blog/cannot-return-anonymous-type-from-method.aspx) and [here](http://msmvps.com/blogs/jon_skeet/archive/2009/01/09/horrible-grotty-hack-returning-an-anonymous-type-instance.aspx). ``` public void FuncB() { var example = new { Id = 0, Name = string.Empty }; var obj = CastByExample(FuncA(), example); Console.WriteLine(obj.Name); } private object FuncA() { var a = from e in DB.Entities where e.Id == 1 select new { Id = e.Id, Name = e.Name }; return a.FirstOrDefault(); } private T CastByExample<T>(object target, T example) { return (T)target; } ``` (I can't take the credit for this hack, although [the author of one of those articles says that he doesn't want to be associated with it either](https://webcache.googleusercontent.com/search?q=cache:x_rICZ2nnxUJ:https://blogs.msmvps.com/jonskeet/2009/01/09/horrible-grotty-hack-returning-an-anonymous-type-instance/+&cd=1&hl=en&ct=clnk&gl=us). His name might be familiar.)
You can't return an anonymous type from a function. From the [MSDN documentation](http://msdn.microsoft.com/en-us/library/bb397696.aspx): > To pass an anonymous type, or a collection that contains anonymous types, outside a method boundary, you must first cast the type to object. However, this defeats the strong typing of the anonymous type. If you must store your query results or pass them outside the method boundary, consider using an ordinary named struct or class instead of an anonymous type.
Accessing C# Anonymous Type Objects
[ "", "c#", "anonymous-types", "" ]
Can you use `case` expressions in Access? I'm trying to determine the max date form 2 columns but keep getting syntax errors in the following code: ``` CASE WHEN dbo_tbl_property.LASTSERVICEDATE > Contour_dates.[Last CP12 Date] THEN dbo_tbl_property.LASTSERVICEDATE ELSE Contour_dates.[Last CP12 Date] END AS MaxDate ```
You can use the `IIF()` function instead. ``` IIF(condition, valueiftrue, valueiffalse) ``` * `condition` is the value that you want to test. * `valueiftrue` is the value that is returned if condition evaluates to TRUE. * `valueiffalse` is the value that is returned if condition evaluates to FALSE. There is also the [`Switch`](https://support.office.com/en-us/article/switch-function-d750c10d-0c8e-444c-9e63-f47504f9e379) function which is easier to use and understand when you have multiple conditions to test: ``` Switch( expr-1, value-1 [, expr-2, value-2 ] … [, expr-n, value-n ] ) ``` > The Switch function argument list consists of pairs of expressions and > values. The expressions are evaluated from left to right, and the > value associated with the first expression to evaluate to True is > returned. If the parts aren't properly paired, a run-time error > occurs. For example, if expr-1 is True, Switch returns value-1. If > expr-1 is False, but expr-2 is True, Switch returns value-2, and so > on. > > Switch returns a Null value if: > > * None of the expressions is True. > * The first True expression has a corresponding value that is Null. > > **NOTE: Switch evaluates all of the expressions**, even though it returns only one of them. For this reason, you should watch for > undesirable side effects. For example, if the evaluation of any > expression results in a division by zero error, an error occurs.
There is no CASE statement in Access. Instead you can use the Switch function. It will look something like the one below: `switch(dbo_tbl_property.LASTSERVICEDATE > Contour_dates.[Last CP12 Date],dbo_tbl_property.LASTSERVICEDATE,dbo_tbl_property.LASTSERVICEDATE <= Contour_dates.[Last CP12 Date],Contour_dates.[Last CP12 Date])` For further reading look at: <http://www.techonthenet.com/access/functions/advanced/switch.php> Or for an example of implementing a Case-like function in VBA: <http://ewbi.blogs.com/develops/2006/02/adding_case_to_.html> Regards, J.
Case expressions in Access
[ "", "sql", "ms-access", "" ]
Is there a quick way to convert a Generic Dictionary from one type to another I have this ``` IDictionary<string, string> _commands; ``` and need to pass it to a function that takes a slightly different typed Dictionary ``` public void Handle(IDictionary<string, Object> _commands); ```
I suppose I would write ``` Handle(_commands.ToDictionary(p => p.Key, p => (object)p.Value)); ``` Not the most efficient thing in the world to do, but until covariance is in, that's the breaks.
maybe this function can be useful for you ``` IEnumerable<KeyValuePair<string, object>> Convert(IDictionary<string, string> dic) { foreach(var item in dic) { yield return new KeyValuePair<string, object>(item.Key, item.Value); } } ``` And you will call it like so: ``` Handle(Convert(_commands)); ```
Convert Generic Dictionary to different type
[ "", "c#", ".net", "generics", "collections", "" ]
Several visitors connect to <http://site.com/chat.php> They each can write and send a text message to chat.php and it displays instantly on everyone's browser (<http://site.com/chat.php>) Do I have to use a database? I mean, are AJAX or PHP's output-buffering capabilities enough for such a chat room based on sessions? How can sessions of different users share data with each other? Any ideas or insights will be appreciated, thanks! Edit: Thanks for the links. But what I want is a way to push data to a client browser. Is constantly refreshing the client browser (AJAX or not) the only way? Also the challenge here is how different users, for example two users chatting one-on-one, share chat texts. How do you store them? And how do you synchronize the texts between the two clients? Preferably without using a database. Edit 2: Actually [YShout](http://lumichat.com/yshout5/example/) mentioned by Peter D does this job pretty well. It doesn't seem to keep refreshing the browser. But I don't understand how it pushes new messages to an existing user's window.
there are (roughly) 3 options for creating a chat application: ## sockets > use flash/java and sockets for the frontend and a socket-capable programming language for the backend. for the backend, i'd recommend java or python, because they are multithreading and NIO-capable. it's possible to do it with PHP (but php can't really do efficient multithreading and is generally not really suited for this). this is an option if you need high performance, and probably not what you're looking for. ## use ajax and pull > in this case all clients are constantly (for example every 2 seconds) polling if something new has happened. it feels strange because you only get responses at those intervals. additionally, it puts quite a strain on your server and bandwidth. you know an application uses this technique because the browser constantly refreshes. this is a suboptimal solution. ## use ajax and push > this works with multipart-responses and has long running (php-) scripts in the backend. not the best solution, but most of the time it's better than pulling and it works and is used in several well known chat apps. this technique is sometimes called [COMET](http://en.wikipedia.org/wiki/Comet_(programming)). my advice: if you need a chat app for production use, install an existing one. programming chat applications is not *that* easy. if you just want to learn it, start with a simple ajax/pull app, then try to program one using ajax and push. and yes, most probably you'll need a database, though i successfully implemented a very simple ajax/pull solution that works with text files for fun (but i certainly wouldn't use it in production!). it is (to my knowledge, but i'm pretty sure) not possible to create a chat app without a server-side backend (with just frontend javascript alone)! ## UPDATE if you want to know how the data pushing is done, look at the source here: <http://wehrlos.strain.at/httpreq/client.html>. 
async multipart is what you want :) ``` function asSendSyncMulti() { var httpReq = new XMLHttpRequest(); showMessage( 'Sending Sync Multipart ' + (++this.reqCount) ); // Sync - wait until data arrives httpReq.multipart = true; httpReq.open( 'GET', 'server.php?multipart=true&c=' + (this.reqCount), false ); httpReq.onload = showReq; httpReq.send( null ); } function showReq( event ) { if ( event.target.readyState == 4 ) { showMessage( 'Data arrives: ' + event.target.responseText ); } else { alert( 'an error occured: ' + event.target.readyState ); } } ``` showReq is called *every time* data arrives, not just once like in regular ajax-requests (i'm not using jquery or prototype here, so the code's a bit obese - this is really old :)). here's the server side part: ``` <?php $c = $_GET[ 'c' ]; header('Content-type: multipart/x-mixed-replace;boundary="rn9012"'); sleep( 1 ); print "--rn9012\n"; print "Content-type: application/xml\n\n"; print "\n"; print "Multipart: First Part of Request " . $c . "\n"; print "--rn9012\n"; flush(); sleep( 3 ); print "Content-type: application/xml\n\n"; print "\n"; print "Multipart: Second Part of Request " . $c . "\n"; print "--rn9012--\n"; ?> ``` ## update2 regarding the database: if you've got a nothing-shared architecture like mod\_php/cgi in the backend, you *definitley* need *some* kind of external storage like databases or textfiles. but: you could rely on memory by writing your own http server (possible with php, but i'd not recommend it for serious work). that's not really complicated, but probably a bit out of the scope of your question ^^ ## update3 i made a mistake! got everything mixed up, because it's been a long time i actually did something like that. here are the corrections: 1. multipart responses only work with mozilla browsers and therefore are of limited use. COMET doesn't mean multipart-response. 2. COMET means: traditional singlepart response, but held (with an infinite loop and sleep) until there is data available. 
so the browser has 1 request/response for every action (in the worst case), not one request every x seconds, even if nothing response-worthy happens.
You mention wanting this to work without a DB, and without the client(s) polling the server for updates. In theory you can do this by storing the "log" of chats in a text file on the server, and changing your page so that the user does a GET request on the chat.php page, but the PHP page never actually finishes sending back to the user (i.e. the response never completes). You would need to send back some "*no op*" data to keep the connection going when there are no messages, but in theory this would work.

The problem is, to accomplish the above is still a lot of work. You would need to do AJAX posts back to the server to submit new comments... the users' browser would be spinning the whole time (unless you nest the chat log in an iframe - e.g. more work)... and this kind of setup would just be very hard to manage.

I'd suggest grabbing a free chat script from elsewhere (e.g. <http://tinychat.com/>) or if you want to roll your own (for fun/experience) then go ahead, but start with a DB and build a page that will push and pull messages from the server.

Finally, if you are worried about "hammering" the server with AJAX requests... don't. Just build the chat, then if you find there are performance issues, return to StackOverflow with a question on how to optimize it so that hundreds of requests are not flooding the chat when there is no activity.
How to make a chat room script with PHP?
[ "", "php", "chatroom", "" ]
I am trying to run the following in php:

```
$test = `svn cat ....`;
```

Now the output of $test is basically a binary file returned by svn. How do I make this binary file available as a download? I am trying to put the following:

```
$test = `svn cat ....`;
header("Content-Disposition: attachment; filename=" . urlencode($filename));
header("Content-Type: application/force-download");
header("Content-Type: application/octet-stream");
header("Content-Type: application/download");
header("Content-Description: File Transfer");
echo $test;
```
From [the comments](https://www.php.net/passthru) on PHP.net's documentation on `passthru()`: ``` header("Content-Type: application/octet-stream"); header("Content-Disposition: attachment; filename=\"myfile.zip\""); header("Content-Length: 11111"); passthru("cat myfile.zip",$err); exit(); ``` *The above code was provided by igor at bboy dot ru.*
You probably want to use the [passthru()](http://php.net/passthru) function in PHP. The call might need to come after the headers, but try both ways first. edit: I don't think this will cause a memory issue. I don't think PHP will keep the output in memory, because it's sent straight on through to stdout.
Php exec and return binary
[ "", "php", "binary", "" ]
I have 1 table "Products" that looks like this: ``` ID Product Shop Color 01 Car A Black 02 Car B Black 03 Bike C Red 04 Plane A Silver 05 Car C Black 06 Bike A Red ``` In this example, a Product always has the same color, independent from the Shop where it is sold. I want to make a query, that returns a distinct set of products, with the Color property. I also will need to have an ID, it could be any ID, that allows me to do a follow up query. The result of the query should be: ``` ID Product Color 01 Car Black 03 Bike Red 04 Plane Silver ``` I tried: ``` SELECT DISTINCT Product, Color FROM Products ``` But that obviously doesn't return the ID as well I guess I need to join something, but my knowledge of SQL is too poor. I hope this is something simple.
This would be one way of getting the result you want: ``` SELECT min(ID), Product, Color FROM table GROUP BY Product, Color; ```
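For a quick sanity check, the grouping above can be run as-is against an in-memory copy of the question's table (sketched here with Python's built-in `sqlite3`; the SQL itself is portable):

```python
import sqlite3

# In-memory copy of the Products table from the question.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Products (ID INTEGER, Product TEXT, Shop TEXT, Color TEXT)")
con.executemany(
    "INSERT INTO Products VALUES (?, ?, ?, ?)",
    [(1, "Car", "A", "Black"), (2, "Car", "B", "Black"),
     (3, "Bike", "C", "Red"), (4, "Plane", "A", "Silver"),
     (5, "Car", "C", "Black"), (6, "Bike", "A", "Red")],
)

# One row per distinct Product/Color, keeping the lowest ID for follow-up queries.
rows = con.execute(
    "SELECT MIN(ID), Product, Color FROM Products "
    "GROUP BY Product, Color ORDER BY MIN(ID)"
).fetchall()
# rows -> [(1, 'Car', 'Black'), (3, 'Bike', 'Red'), (4, 'Plane', 'Silver')]
```

This matches the expected result from the question exactly.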
How About

```
SELECT Product, Color, Min(ID)
FROM TABLE
GROUP BY Product, Color
```

That'll return unique Product/Color combinations and the first (lowest) ID found.
SQL: finding double entries without losing the ID
[ "", "sql", "join", "distinct", "max", "" ]
I have jQuery code which works offline in Safari and Opera, but not in Firefox 3.1 or 3.0.8. **How can you use Firefox's JavaScript engine in offline debugging?**
You can install [SpiderMonkey](http://www.mozilla.org/js/spidermonkey/) and play around with that. Or you can use [Firebug](http://getfirebug.com/) which has a JavaScript debugger.
Both Safari and Opera come with JavaScript debuggers. For Firefox, the weapon of choice is to install [Firebug](http://getfirebug.com/).
How can you use Firefox's JavaScript engine offline?
[ "", "javascript", "firefox", "offline", "" ]
Listening to a podcast, I heard that C# is not a dynamic language while Ruby is. What is a "dynamic language"? Does the existence of dynamic languages imply that there are static languages? Why is C# a dynamic language and what other languages are dynamic? If C# is *not* dynamic, why is Microsoft pushing it so strongly to the market? And why are most .NET programmers going crazy over it, leaving other languages and moving to C#? Why is Ruby "the language of the future"?
## What is a dynamic language?

Whether or not a language is dynamic typically refers to the type of binding the compiler does: static or late binding. Static binding simply means that the method (or method hierarchy for virtual methods) is bound at compile time. There may be a virtual dispatch involved at runtime but the method token is bound at compile time. If a suitable method does not exist at compile time you will receive an error.

Dynamic languages are the opposite. They do their work at runtime. They do little or no checking for the existence of methods at compile time but instead do it all at runtime.

## Why is C# not a dynamic language?

C#, prior to 4.0, is a statically bound language and hence is not a dynamic language.

## Why is Ruby the language of the future?

This question is based on a false premise, namely that there does exist one language that is the future of programming. There isn't such a language today because no single language is the best at doing all the different types of programming that need to be done. For instance Ruby is a great language for a lot of different applications: web development is a popular one. I would not however write an operating system in it.
In a dynamic language, you can do this:

```
var something = 1; something = "Foo"; something = {"Something", 5.5}; ```

In other words, the type is not static. In a statically typed language, this would result in a compiler error.

Languages such as C, C++, C#, and Java are statically typed. Languages such as Ruby, Python, and Javascript are dynamically typed.

Also, this is not the same as "strongly or weakly" typed. That is something different altogether.
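The same idea is directly runnable in Python, one of the dynamically typed languages mentioned above — a single name can refer to values of different types over its lifetime:

```python
x = 1                     # x currently refers to an int
x = "Foo"                 # now a str -- rebinding to a new type is fine
x = {"Something": 5.5}    # now a dict

kind = type(x).__name__   # the type lives on the value, not the variable
# kind -> 'dict'
```

In a statically typed language each of those rebindings past the first would be a compile-time error.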
What is a dynamic language, and why doesn't C# qualify?
[ "", "c#", "ruby", "static-libraries", "dynamic-languages", "" ]
I have the following XML document: ``` <?xml version="1.0" encoding="UTF-8"?> <FamilyTree> <Parent name="Ken"> <Child name="Lorna"> <Grandchild name="Andrew"/> <Grandchild name="Brian"/> </Child> <Child name="Mike"> <Grandchild name="Ann"/> <Grandchild name="Beth"/> </Child> </Parent> <Parent name="Norma"> <Child name="Owen"> <Grandchild name="Charles"/> </Child> <Child name="Peter"> <Grandchild name="Charlotte"/> </Child> </Parent> <Parent name="Quinn"> <Child name="Robert"> <Grandchild name="Debbie"/> <Grandchild name="Eric"/> </Child> <Child name="Susan"> <Grandchild name="Frank"/> </Child> </Parent> <Parent name="Tom"> <Child name="Ursula"> <Grandchild name="George"/> <Grandchild name="Harriet"/> </Child> <Child name="Victor"> <Grandchild name="Ian"/> <Grandchild name="Juliet"/> </Child> </Parent> </FamilyTree> ``` I'm trying to select all the "Parents" with a Child who has at least two children ("Grandchild") of his/her own. Note that I'm *not* looking for "Parents" with at least two "Grandchild[ren]". The following LINQ query works, but I've a feeling it's not the most elegant. ``` IEnumerable<XElement> parents = (from c in familyTreeElement.Descendants("Child") where c.Elements().Count() > 1 select c.Parent).Distinct(); ``` Is there a better way to specify this?
Ahh the edit (2 grand-children) helps ;-p While `XDocument` is useful, at times I miss XPath/XQuery. With `XmlDocument` you could just use `doc.DocumentElement.SelectNodes("Parent[Child/Grandchild[2]]")`.
Hmmm... I'm finding it hard to get my head round it exactly :) Normally to find out if there are *any* elements, I'd use `Any` - but you want to see if there are *at least two* elements. We still don't need to use `Count` though - because there being at least two elements is the same as skipping an element and seeing if there are still any. So... ``` var parents = familyTreeElement.Elements("Parent") .Where(parent => parent.Elements("Child").Any( child => child.Elements("Grandchild").Skip(1).Any())); ``` I think that works - and actually it doesn't read *too* badly: For each parent, see whether *any* of their children has *any* (grand)children after ignoring the first (grand)child. I suspect using XPath (as per Marc's answer) would be the most readable option though.
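The selection logic itself ("keep each parent where any child has at least two grandchildren") is language-neutral; as a quick cross-check against the document's sample data, here it is sketched with Python's `xml.etree.ElementTree`:

```python
import xml.etree.ElementTree as ET

# The family tree from the question, compacted.
xml = """
<FamilyTree>
  <Parent name="Ken"><Child name="Lorna"><Grandchild name="Andrew"/><Grandchild name="Brian"/></Child>
    <Child name="Mike"><Grandchild name="Ann"/><Grandchild name="Beth"/></Child></Parent>
  <Parent name="Norma"><Child name="Owen"><Grandchild name="Charles"/></Child>
    <Child name="Peter"><Grandchild name="Charlotte"/></Child></Parent>
  <Parent name="Quinn"><Child name="Robert"><Grandchild name="Debbie"/><Grandchild name="Eric"/></Child>
    <Child name="Susan"><Grandchild name="Frank"/></Child></Parent>
  <Parent name="Tom"><Child name="Ursula"><Grandchild name="George"/><Grandchild name="Harriet"/></Child>
    <Child name="Victor"><Grandchild name="Ian"/><Grandchild name="Juliet"/></Child></Parent>
</FamilyTree>
"""

root = ET.fromstring(xml)
parents = [
    p.get("name")
    for p in root.findall("Parent")
    if any(len(c.findall("Grandchild")) >= 2 for c in p.findall("Child"))
]
# parents -> ['Ken', 'Quinn', 'Tom']
```

Norma is correctly excluded: she has two grandchildren in total, but no single child with two of their own.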
Best LINQ-to-XML query to select nodes based on properties of descendant nodes?
[ "", "c#", "xml", "linq-to-xml", "" ]
I have a Web application using spring and hibernate and struts (it runs on Tomcat). The call sequence is something like this...

Struts action calls spring service bean which in turn calls Spring DAO bean. The DAO implementation is a Hibernate implementation.

**The question is**

Would all my spring beans be running in the same thread? Can I store something in the ThreadLocal and get it in another bean?

I am quite sure this would not work in a Stateless Session Bean. The EJB container can (or will) spawn a new thread for every call to the session bean.

**Will the spring container do the same? i.e. run all beans in the same thread?**

When I tried a JUnit test - I got the same id via Thread.currentThread().getId() in the Test Case and the two beans - which leads me to believe there was only one thread in action. Or is the behavior unpredictable? Or will it change when running on Tomcat server?

**Clarification**

I do not wish to exchange data between two threads. I want to put data in the ThreadLocal and be able to retrieve it from all beans in the call stack. This will work only if all beans are in the same thread.
Spring doesn't spawn the threads. Tomcat does. Spring is just creating and wiring up the objects for you.

Each request from the browser is processed in one thread. It is Tomcat that handles the request. It is Tomcat that creates the thread to process the request.

Assuming you have just created a singleton bean in Spring called "X", then the same instance of X is used by all requests.

The Spring beans don't live in a thread. They are just allocated on the heap.
> Would all my spring beans be running in the same thread? Can I store something in the ThreadLocal and get it in another bean?

AFAIK for the components you mentioned (service bean, DAO bean - I guess they are plain Spring beans), Spring does not spawn a new thread. I do not understand your use case (i.e., exchanging data between two threads). For most webapps, a new thread is spawned for each new request, and if you want to share data between two requests you normally:

- use the get/post parameters to pass the data
- use the session to share data

To answer your question, I'm pretty sure the Spring container does not spawn threads for most components.
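The thread-local mechanics under discussion are the same across languages; here is a minimal Python sketch (the names `action`/`service`/`dao` are just stand-ins for the beans in the question, not real Spring/Struts code):

```python
import threading

# Per-thread storage: every function running on the same thread sees the
# same value, with no parameters passed down the call stack.
context = threading.local()

def action():              # stand-in for the Struts action
    context.user = "alice"
    return service()

def service():             # stand-in for the service bean
    return dao()

def dao():                 # stand-in for the DAO
    return context.user    # visible: same thread, same call stack

same_thread_result = action()   # -> 'alice'

# A different thread gets its own, initially empty, storage:
seen_elsewhere = []
t = threading.Thread(target=lambda: seen_elsewhere.append(hasattr(context, "user")))
t.start()
t.join()
# seen_elsewhere -> [False]
```

This illustrates why the scheme works as long as the whole call stack stays on one thread, and silently fails the moment any bean runs on another thread.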
Threads in Spring
[ "", "java", "multithreading", "spring", "tomcat", "thread-local", "" ]
If I have a string "Param1=value1;Param2=value2;Param3=val3", how can I get the value between the substrings "Param2=" and the next semicolon (or end of string, whichever comes first)?
"Param\d+=([^;]\*)" will capture the contents between = and ; in group 1
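A quick check of the pattern against the question's input (sketched in Python for brevity — the regex syntax used here behaves the same in .NET's `Regex`):

```python
import re

s = "Param1=value1;Param2=value2;Param3=val3"

# The capture group grabs everything between '=' and the next ';' (or end of string):
all_values = re.findall(r"Param\d+=([^;]*)", s)    # -> ['value1', 'value2', 'val3']
param2 = re.search(r"Param2=([^;]*)", s).group(1)  # -> 'value2'
```

Note that `Param3` at the end of the string is captured correctly because `[^;]*` simply stops at the end of input when no semicolon follows.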
`/Param2=([^;]+)/`
Get value between two substrings using regex
[ "", "c#", ".net", "regex", "" ]
The question is, why do these code snippets give different results? ``` private void InitializeOther() { double d1, d2, d3; int i1; d1 = 4.271343859532459e+18; d2 = 4621333065.0; i1 = 5; d3 = (i1 * d1) - Utils.Sqr(d2); MessageBox.Show(d3.ToString()); } ``` and ``` procedure TForm1.InitializeOther; var d1, d2, d3 : Double; i1 : Integer; begin d1:=4.271343859532459e+18; d2:=4621333065.0; i1:=5; d3:=i1*d1-Sqr(d2); ShowMessage(FloatToStr(d3)); end; ``` The Delphi code gives me 816, while the c# code gives me 0. Using a calculator, I get 775. Can anybody please give me a detailed explanation? Many thanks!
Delphi stores intermediate values as Extended (an 80-bit floating point type). This expression is Extended: ``` i1*d1-Sqr(d2); ``` The same may not be true of C# (I don't know). The extra precision could be making a difference.
Note that you're at the limits of the precision of the Double data type here, which means that calculations here won't be accurate. Example: ``` d1 = 4.271343859532459e+18 ``` which can be said to be the same as: ``` d1 = 4271343859532459000 ``` and so: ``` d1 * i1 = 21356719297662295000 ``` in reality, the value in .NET will be more like this: ``` 2.1356719297662296E+19 ``` Note the rounding there. Hence, at this level, you're not getting the right answers.
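Both effects can be seen with a short Python sketch (Python floats are the same IEEE-754 64-bit doubles as C#'s `double`, and Python integers are exact):

```python
# Exact integer arithmetic (Python ints are arbitrary-precision):
exact = 5 * 4271343859532459000 - 4621333065 ** 2
# exact -> 775, matching the calculator result from the question.

# The same expression with 64-bit doubles, as C# stores d1 and d2.
# At this magnitude (~2.1e19) adjacent doubles are 4096 apart, so a true
# difference of 775 is below the representable resolution; the result
# comes out as a multiple of that 4096 spacing (0 in the C# run above).
approx = 5 * 4.271343859532459e+18 - 4621333065.0 ** 2
```

Delphi's 80-bit Extended intermediates have smaller spacing at this magnitude, which is why it lands on yet another nearby value (816) instead.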
simple calculation, different results in c# and delphi
[ "", "c#", "delphi", "" ]
I was reading through some java code in a project I was working on and came along a method with something to this effect in it. ``` Object[] objs = null; objs[0] = str1; objs[1] = obj1; for(String str: strArray) { objs[objs.length+1] = str; } ``` I can't figure out what is going on here. I was hoping someone could explain how this code is working to me, preferably on both a high level and the behind the scenes low level.
That should not run. objs[0] will give an error. Are you sure you are running the correct version of the file?
This has no hope of ever working in Java. I suspect that "something to this effect" is subtly different from the real code, if it seemed to be working. A few problems:

* It should obviously be throwing a `NullPointerException` on the second line when you try to dereference the variable
* `objs[objs.length+1]` doesn't extend an array, which appears to be the intention - arrays are a fixed size after allocation.
* Even if arrays *could* be extended, they're 0-based, so you'd actually want to extend them by setting `objs[objs.length]` - if this code worked, you'd end up with every other element being "missing"
Why does this code compile and run successfully it seems it should get an index out of bounds
[ "", "java", "arrays", "exception", "" ]
Is there a far shorter way to write the following code? ``` my_string = my_string.replace('A', '1') my_string = my_string.replace('B', '2') my_string = my_string.replace('C', '3') my_string = my_string.replace('D', '4') my_string = my_string.replace('E', '5') ``` Note that I don't need those exact values replaced; I'm simply looking for a way to turn 5+ lines into fewer than 5
Looks like a good opportunity to use a loop: ``` mapping = { 'A':'1', 'B':'2', 'C':'3', 'D':'4', 'E':'5'} for k, v in mapping.iteritems(): my_string = my_string.replace(k, v) ``` A faster approach if you don't mind the parentheses would be: ``` mapping = [ ('A', '1'), ('B', '2'), ('C', '3'), ('D', '4'), ('E', '5') ] for k, v in mapping: my_string = my_string.replace(k, v) ```
You can easily use string.maketrans() to create the mapping string to pass to str.translate(): ``` import string trans = string.maketrans("ABCDE","12345") my_string = my_string.translate(trans) ```
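For reference, `string.maketrans` is the Python 2 spelling; in Python 3 the same operation lives on `str` itself:

```python
# Python 3 version: maketrans is a static method on str, and the table
# feeds straight into str.translate.
table = str.maketrans("ABCDE", "12345")
result = "BEAD CAB".translate(table)
# result -> '2514 312'
```

`translate` walks the string once regardless of how many characters are being mapped, which is why it beats a chain of `replace` calls as the mapping grows.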
A list of string replacements in Python
[ "", "python", "string", "replace", "" ]
I'm trying to update a file in an existing jar (in this example antlr) using the command:

```
jar -uf antlrworks-1.2.3.jar org/antlr/codegen/templates/Java/Java.stg
```

But I get the following message:

> java.util.zip.ZipException: duplicate entry: antlr/ANTLRError.class
> at java.util.zip.ZipOutputStream.putNextEntry(ZipOutputStream.java:175)
> at java.util.jar.JarOutputStream.putNextEntry(JarOutputStream.java:92)
> at sun.tools.jar.Main.update(Main.java:508)
> at sun.tools.jar.Main.run(Main.java:185)
> at sun.tools.jar.Main.main(Main.java:1044)

Any ideas?
You're trying to do the right thing, but the jar file is problematic - it's got the same entry twice :( (This is legal for a zip file, but not really helpful - and as you can see, it makes `jar` complain.) If you run ``` jar tvf antlrworks-1.2.3.jar > contents ``` and then look at the generated `contents` file you'll see that there are various duplicate files. You should probably report this to the antlr project (after checking they don't already know). As a workaround, you can extract the contents of the jar file, jar it up again, and *then* you'll have a "working" jar file you can update. (If you only need to do this once, you can just extract, put the file you want in there, and then jar the whole lot up rather than updating it afterwards.)
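A jar is just a zip archive, so the duplicate check can also be done programmatically instead of eyeballing `jar tvf` output; here is a small sketch with Python's `zipfile` (building a throwaway archive in memory purely for demonstration):

```python
import io
import zipfile
from collections import Counter

def duplicate_entries(data):
    """Return entry names that appear more than once in a zip/jar."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        counts = Counter(zf.namelist())
    return sorted(name for name, n in counts.items() if n > 1)

# Build a small archive with a duplicated entry, mirroring the broken jar.
# (zipfile warns about the duplicate name but, like the zip format itself,
# allows it.)
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("antlr/ANTLRError.class", b"first copy")
    zf.writestr("antlr/ANTLRError.class", b"second copy")
    zf.writestr("README", b"ok")

dupes = duplicate_entries(buf.getvalue())
# dupes -> ['antlr/ANTLRError.class']
```

Running the same function over the real antlrworks jar would list every entry that trips up `jar -uf`.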
You can do the same operation with the [Ant `jar` task](http://ant.apache.org/manual/Tasks/jar.html). ``` <jar duplicate="preserve" jarfile="..."> your files </jar> ``` the duplicate attribute with the preserve value will take care of the duplicate entries. [As mentioned here](http://www.mooreds.com/wordpress/archives/000445), the update attribute with the value “preserve” does tell you that duplicates exist, in this form: ``` aPath/aFile already added, skipping ``` If your file is on top of the list the jar task has to pick tp build itself, your new file will be taken into account.
Updating file in a jar throws ZipException
[ "", "java", "jar", "" ]
How do I process the elements of a sequence in batches, idiomatically? For example, with the sequence "abcdef" and a batch size of 2, I would like to do something like the following: ``` for x, y in "abcdef": print "%s%s\n" % (x, y) ab cd ef ``` Of course, this doesn't work because it is expecting a single element from the list which itself contains 2 elements. What is a nice, short, clean, pythonic way to process the next n elements of a list in a batch, or sub-strings of length n from a larger string (two similar problems)?
I am sure someone is going to come up with something more "Pythonic", but how about:

```
for y in range(0, len(x), 2):
    print "%s%s" % (x[y], x[y+1])
```

Note that this would only work if you know that `len(x) % 2 == 0`.
A generator function would be neat: ``` def batch_gen(data, batch_size): for i in range(0, len(data), batch_size): yield data[i:i+batch_size] ``` Example use: ``` a = "abcdef" for i in batch_gen(a, 2): print i ``` prints: ``` ab cd ef ```
Iterate over a python sequence in multiples of n?
[ "", "iteration", "python", "" ]
This is mind-boggling... I can make getResource() and getResourceAsStream() work properly when I run Java on my packaged JAR file that includes a text file. (for reference see the Sun docs on [accessing resources](http://java.sun.com/j2se/1.5.0/docs/guide/lang/resources.html)) I can't seem to make the same program work properly when I am running it within Eclipse, even though I've placed my text file in the same tree as my compiled .class files Can one of you point me at any subtleties to ensure that getResource() and getResourceAsStream() functions work properly? I have a hunch it has to do with CLASSPATH and/or where Eclipse puts the .class files it autocompiles. (I've noticed when I run Ant, it compiles all the Java files that have changed since my last Ant build, even though Eclipse has compiled those Java files already.)
If you put your text file along with your source files, Eclipse will copy it to wherever it's placing its compiled .class files, and so you'll see your text file when you run your app from Eclipse (and you will be able to edit the file from within Eclipse as well).
A few notes:

First -- as Chocos says, put it in the eclipse source dirs, not the binary dirs. Eclipse will clear the binary dirs when you "clean", as well as clean up unmatched files. It will copy non-java source files to the binary dir. This means that even though you drop the file in the binary dir, it may be deleted by eclipse...

Keep in mind that `Class.getResourceAsStream` and `Class.getResource` operate relative to the package of the code that calls them (the `ClassLoader` variants always resolve from the classpath root instead). For example:

```
package a.b.c;
public class Foo {
    ... getClass().getResourceAsStream("fee.txt"); ...
}
```

This will actually look for a/b/c/fee.txt; the package is pre-pended. This works well if you have the fee.txt in the same source dir as the Foo.java file, or if you have a separate set of resource dirs on the classpath with the same directory structure.

If you use

```
...getResourceAsStream("/fee.txt");
```

it will look for fee.txt directly on the classpath.

When you run from the command-line, where in the JAR is the resource file?

-- Scott
how do you make getResourceAsStream work while debugging Java in Eclipse?
[ "", "java", "eclipse", "resources", "" ]
I want to create a collection of classes that behave like math vectors, so that multiplying an object by a scalar multiplies each field by that amount, etc. The thing is that I want the fields to have actual names, instead of being treated as an index. My original idea to implement this was creating a base class Rn with the overloads and then create derived classes with the pretty names. Something like this:

```
#include <iostream>
#include <algorithm>
using namespace std;

template<int N, class X=double>
struct Base{
    X xs[N];

    Base(){};

    Base(X *data){
        copy(data, data+N, xs);
    }

    Base operator*= (double d){
        for(int i=0; i<N; i++){
            xs[i] *= d;
        }
        return *this;
    }

    Base operator* (double d){
        Base answer = *this;
        answer *= d;
        return answer;
    }

    //also operators for +=, +, multiplication from left, maybe [] too
};

struct Derived : public Base<2>{
    Derived(double a, double b){
        foo() = a;
        bar() = b;
    }

    double &foo(){ return xs[0]; }
    double &bar(){ return xs[1]; }
};

int main()
{
    //this is OK:
    double data[2] = {0.0, 2.0};
    Base<2> b(data);
    b = b*17.0;
    cout << b.xs[0] << endl;

    //I can't do this:
    Derived x(0.0, 2.0);
    x = x*17.0;
    cout << x.foo() << endl;

    return 0;
}
```

I get a compiler error whenever I try to use one of the operators that requires copying. gcc gave me the following compiler error:

```
teste.cpp: In function ‘int main()’:
teste.cpp:52: error: no match for ‘operator=’ in ‘x = x.Derived::<anonymous>.Base<N, X>::operator* [with int N = 2, X = double](1.7e+1)’
teste.cpp:31: note: candidates are: Derived& Derived::operator=(const Derived&)
```

I think the problem is that the overloading functions deal with Base objects that can't be converted to Derived ones, so I can't use them in the derived class. However, I can't come up with a solution. Is there a way around this or should I use a totally different approach?

Bonus question: is there some way that I can use std::valarray to keep from having to type lots and lots of operator overloads?
I'll only address the technical difficulty, not whether this is a good idea or not. The problem is that the result of operator\* of Derived is a Base, and operator= of Derived (which is a default operator=) doesn't know how to "eat" a Base. A simple solution is to create a constructor of Derived that gets a Base, and does whatever is needed to initialize itself correctly. This would allow an on-the-fly conversion of a Base to a Derived - and would work for all other operators of Derived that expect a Base. Something along the lines of - ``` Derived(const Base<2>& B) : Base<2>( B ) { } ```
Your Base operators (\* in this case) can accept a Derived object, but they return a Base, which can't be used as a right-hand operand in Derived's default assignment operator. The easiest way to fix this is to just add an assignment operator to Derived that will take a Base: ``` Derived& operator= (const Base<2>& other) ``` You will have to add it to any derived class, but the implementation is rather straightforward (you can have a void CopyOtherBase function in Base that will do the copy, and have all operator= call it and return \*this).
Deriving from a class with operator overloading
[ "", "c++", "templates", "" ]
I am getting an "Undefined index" error when submitting a form with an un-checked checkbox. Is there any other way besides running an "isset" or "empty" check on each individual posted value? I have looked at this [Question](https://stackoverflow.com/questions/476426/submit-an-html-form-with-empty-checkboxes) and am having trouble believing that this is the only solution. Below is some example code:

EDIT: please note that these are not the actual names of the table's columns; they are named uniquely (like "postAddress", "displayPhone", "student", etc.)
You could write a function that checks whether a checkbox was checked:

```
function checkbox_value($name)
{
    return (isset($_POST[$name]) ? 1 : 0);
}
```

Now call that function in your query like this:

```
$sql = 'UPDATE table SET '.
       'checkbox1 = '. checkbox_value('checkbox1') .','.
       'checkbox2 = '. checkbox_value('checkbox2') .','.
       'checkbox3 = '. checkbox_value('checkbox3') .','.
       'checkbox4 = '. checkbox_value('checkbox4') .','.
       'checkbox5 = '. checkbox_value('checkbox5') .' '.
       "LIMIT 1";
```
If you want an on/off checkbox you can write a hidden value before you write the checkbox.

```
<input type="hidden" name="checkbox1" value="no" />
<input type="checkbox" name="checkbox1" value="yes" />
```

This will always return a value, either no (default, unless the checkbox is checked by default) or yes. You can validate input with the [filter functions](http://php.net/manual/en/book.filter.php) with [FILTER\_VALIDATE\_BOOLEAN](http://php.net/manual/en/filter.filters.validate.php). It's easier if you write a function for this, like formCheckbox($name), with options for values (value 'on' means the checkbox is checked by default), attributes, etc.
PHP form checkbox and undefined index
[ "", "php", "mysql", "" ]
I have seen several places that "Class.getClassLoader() returns the ClassLoader used to load that particular class", and therefore, I am stumped by the results of the following example: ``` package test; import java.lang.*; public class ClassLoaders { public static void main(String[] args) throws java.lang.ClassNotFoundException{ MyClassLoader mcl = new MyClassLoader(); Class clazz = mcl.loadClass("test.FooBar"); System.out.println(clazz.getClassLoader() == mcl); // prints false System.out.println(clazz.getClassLoader()); // prints e.g. sun.misc.Launcher$AppClassLoader@553f5d07 } } class FooBar { } class MyClassLoader extends ClassLoader { } ``` Shouldn't the statement clazz.getClassLoader() == mcl return true? Can someone explain what I am missing here? Thanks.
Whenever you create your own classloader it will be attached to a tree-like hierarchy of classloaders. To load a class, a classloader first delegates the loading to its parent. Only if none of the parents can find the class will the loader that was first asked try to load it itself.

In your specific case the loading is delegated to the parent classloader. Although you ask your MyClassLoader to load it, it is the parent that does the loading. In this case it is the AppClassLoader.
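The parent-first delegation can be sketched as a toy model (Python here, purely illustrative — not JVM internals):

```python
# Toy model of parent-first ("delegation") class loading, assumed semantics
# as described above: ask the parent first, load yourself only as a fallback.

class Loader:
    def __init__(self, name, known, parent=None):
        self.name = name      # loader's own name, for reporting
        self.known = known    # class names this loader can load itself
        self.parent = parent

    def load(self, cls):
        if self.parent is not None:
            try:
                return self.parent.load(cls)   # 1. delegate up first
            except KeyError:
                pass
        if cls in self.known:                  # 2. only then try yourself
            return "%s loaded by %s" % (cls, self.name)
        raise KeyError(cls)

app = Loader("AppClassLoader", {"test.FooBar"})
mcl = Loader("MyClassLoader", set(), parent=app)

who = mcl.load("test.FooBar")
# who -> 'test.FooBar loaded by AppClassLoader'
```

Even though `mcl` is the loader that was asked, the parent answers first — which is exactly why `clazz.getClassLoader() == mcl` is false in the question.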
Citing the [API doc of ClassLoader](http://java.sun.com/javase/6/docs/api/java/lang/ClassLoader.html):

> Each instance of ClassLoader has an associated parent class loader. When requested to find a class or resource, a ClassLoader instance will delegate the search for the class or resource to its parent class loader before attempting to find the class or resource itself.
ClassLoader confusion
[ "", "java", "classloader", "" ]
What happens when you concurrently open two (or more) FileOutputStreams on the same file?

The [Java API](http://java.sun.com/javase/6/docs/api/) says this:

> Some platforms, in particular, allow a file to be opened for writing by only one FileOutputStream (or other file-writing object) at a time.

I'm guessing Windows isn't such a platform, because I have two threads that read some big file (each one a different one) then write it to the same output file. No exception is thrown, the file is created and seems to contain chunks from both input files.

Side questions:

* Is this true for Unix, too?
* And since I want the behaviour to be the same (actually I want one thread to write correctly and the other to be warned of the conflict), how can I determine that the file is already opened for writing?
There's not a reliable, cross-platform way to be passively notified when a file has another writer—i.e., raise an exception if a file is already open for writing. There are a couple of techniques that help you actively check for this, however. If multiple processes (which can be a mix of Java and non-Java) might be using the file, use a [`FileLock`](http://docs.oracle.com/javase/6/docs/api/java/nio/channels/FileLock.html). A key to using file locks successfully is to remember that they are only "advisory". The lock is guaranteed to be visible if you check for it, but it won't stop you from doing things to the file if you forget. All processes that access the file should be designed to use the locking protocol. If a single Java process is working with the file, you can use the concurrency tools built into Java to do it safely. You need a map visible to all threads that associates each file name with its corresponding lock instance. The answers to [a related question](https://stackoverflow.com/questions/659915/synchronizing-on-an-integer-value) can be adapted easily to do this with `File` objects or [canonical paths](http://docs.oracle.com/javase/6/docs/api/java/io/File.html#getCanonicalPath()) to files. The lock object could be a `FileOutputStream`, some wrapper around the stream, or a [`ReentrantReadWriteLock`.](http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/locks/ReentrantReadWriteLock.html)
I would be wary of letting the OS determine file status for you (since this is OS-dependent). If you've got a shared resource I would restrict access to it using a [Re-entrant lock](http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/locks/ReentrantLock.html) Using this lock means one thread can get the resource (file) and write to it. The next thread can check for this lock being held by another thread, and/or block indefinitely until the first thread releases it. Windows (I think) would restrict two processes writing to the same file. I don't believe Unix would do the same.
Concurrent file write in Java on Windows
[ "", "java", "file-io", "portability", "concurrency", "" ]
I've had this problem in a couple of ZF applications now and it's very frustrating: Near the top of my bootstrap I have the following two lines ``` Zend_Session::start(); Zend_Session::regenerateId(); ``` My application requires authentication using Zend\_Auth and uses default session storage for persisting an identity. At random the session is lost giving the effect that the user has logged out. If I remove the call to Zend\_Session::regenerateId() the session is not lost. Regenerating the session id increases security so I'd like to get this working. This problem occurs on both my local development server (windows) and our staging server (linux)
After reading your question I have been looking for a solution to your problem. Actually, according to the Zend Framework documentation the solution is quite easy:

"If a user has successfully logged into your website, use rememberMe() instead of regenerateId()."

I hope this will help you out.

Greetings, Younes

Edit: this is where I found this: [Zend Framework: Documentation](http://framework.zend.com/manual/en/zend.session.global_session_management.html)
I've had a related problem with Zend\_Session. Your session might have gotten lost due to the following problem: try using this: ``` Zend_Session::rememberMe(100); // any value here ``` Then go to any of your pages in the application with your session data set (user logged in for example) and click the browser's refresh button very quickly (like double-clicking) 3-5 times. You'll see that your session data has disappeared :( The only way I could deal with this is by not using the rememberMe() method for now. I hope somebody will shed light on this issue here. What I think is that the regenerateId() method somehow messes up the cookie value so that subsequent calls from the browser to the server do not get associated with an existing session id.
Zend_Session lost when using regenerateId()
[ "", "php", "zend-framework", "session", "" ]
I have a project which will be made up mostly of Java Servlets, JSP/HTML pages, and some regular JavaScript. However, there are some screens which will be too complex to do in just JavaScript, so I will want to use GWT for those. Is it possible to do this, so there are multiple `EntryPoint` classes in the same project? Then, depending on which page is being loaded, I will load the appropriate JavaScript file.

I was thinking that perhaps the `RootPanel.get()` function could be used to check whether certain `<div>s` with a given ID exist. For example, if the signup div exists, load the sign-up GWT control; if the 'search' div exists, then load Search. But this will make the JavaScript file unnecessarily large. I'd much rather each component be in its own JavaScript file. Is that possible?
I think it would be best to separate pages into GWT modules. If you use RootPanel.get() to check whether a specific slot (DIV or other element) exists, your module will hold all of your UI, so the client will load the whole module while only accessing one page, which is unnecessary overhead. You can place common code in a module which all of your page modules inherit from. RootPanel.get() is better suited to placing sets of widgets (logical panels in your UI, e.g. menu, main content, etc.) into the proper slots on the page.

An example structure in your app would look like (see question in comment):

```
- gwtui
-- common (module1)
--- client
---- widgets (e.g. common widgets)
---- ...
---- ApplicationService.java
---- ApplicationServiceAsync.java
--- public
---- images (common images etc.)
---- css (common style)
---- ...
--- server
---- ...
---- ApplicationServiceImpl.java
--- GwtCommon.gwt.xml
-- expenses (module2)
--- client
---- ExpensesEntrypoint.java
--- public
--- server
--- GwtExpenses.gwt.xml (inherits GwtCommon, entry point ExpensesEntrypoint)
-- reports (module3)
--- client
---- ReportsEntrypoint.java
--- public
--- server
--- GwtReports.gwt.xml (inherits GwtCommon, entry point ReportsEntrypoint)
```

Note that you can use one service in all modules; also, in the common module you might not have to create an entry point (but I'm not 100% sure; in some old project I had to create an empty entry point, but I don't remember the exact reason, sorry).

That worked for us. I certainly would like to hear different opinions.
Here's a five minute lightning talk I did on this exact subject: <http://www.youtube.com/watch?v=0DuR9xDvrHA&feature=channel_page> Understandably, you cannot put a lot of information into 5 minutes, so hopefully this will give you enough to get going, but if you have further questions, please feel free to ask. I intend to make more code samples available but I haven't found the time.
Managing multiple Google Web Toolkit pages in 1 Project
[ "", "java", "gwt", "" ]
How do I go about defining a method, e.g. `void doStuff()`, in an anonymous type? All the documentation I can find shows anonymous types restricted basically to property lists. Can I even define a method in an anonymous type?

EDIT: OK, a quick look at the very quick answers tells me this is impossible. Is there any way at all to construct a type dynamically and add an anonymous method to a delegate property on that type? I'm looking for a C# way to accomplish what the following JavaScript does:

```
...
person.getCreditLimit = function() { ... }
...
```
You absolutely can, with delegates:

```
Action action = MethodThatDoesSomething;
var obj = new { DoSomething = action };
obj.DoSomething();
```

I tried with a lambda in the `new { ... }` and that didn't work, but the above is totally fine.
Well, you can. With delegates you just treat methods as data:

```
var myMethods = from x in new[] { "test" }
                select new { DoStuff = new Func<string>(() => x) };
var method = myMethods.First();
var text = method.DoStuff();
```

What do you think the value of "text" is? With the Action<> and Func<> generic types you can put (almost) whatever you want in there. Almost, because you cannot, for instance, access other properties on the anonymous type, like this:

```
var myMethods = from x in new[] { "test" }
                select new { Text = x, DoStuff = new Func<string>(() => Text) };
```
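For comparison only (not C#): the underlying trick of storing behaviour as data on an ad-hoc object looks like this in Python, using `types.SimpleNamespace`; the names are invented for illustration:

```python
from types import SimpleNamespace

x = "test"

# An ad-hoc object whose "method" is just a stored function, analogous
# to putting a Func<string>-typed member on a C# anonymous type.
obj = SimpleNamespace(do_stuff=lambda: x)

text = obj.do_stuff()  # the closure captured x
```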
How do I define a method in an anonymous type?
[ "", "c#", ".net", "" ]
I'm writing a basic class using prototype.js in which some class vars are set when the class is initialised. The problem is that these variables aren't available to other methods within the class.

```
var Session = Class.create({
    initialize: function() {
        // define defaults
        this.source = '';
    },

    shout: function() {
        alert(this.source);
    }
});
```

I know it's something to do with scope and I'm sure it's a fairly basic issue - all help appreciated!

Thanks, Adam
I tested your code. It works as far as I can tell. Maybe setting the variable to the empty string is throwing you off?
Looks right... and it works for me.

```
document.observe('dom:loaded', function() {
    var s = new Session();
    s.shout();
});
```
Javascript class variable scope using prototype
[ "", "javascript", "class", "variables", "scope", "prototypejs", "" ]
What do you think of the following `IDisposable` pattern implementation?

```
public class Connection : IDisposable
{
    private Socket _socket;

    public bool IsConnected()
    {
        if (_socket.Poll(1, SelectMode.SelectRead) && _socket.Available == 0)
            return false;
        return true;
    }

    public void Disconnect()
    {
        if (_socket != null && IsConnected())
        {
            try
            {
                _socket.Shutdown(SocketShutdown.Both);
                _socket.Disconnect(false);
            }
            catch (SocketException se)
            {
                System.Console.WriteLine(se.Message);
            }
        }
    }

    ~Connection()
    {
        Dispose(false);
    }

    private void Dispose(bool disposing)
    {
        if (!IsConnected())
        {
            if (disposing)
            {
                Disconnect();
            }
            else
            {
                AppDomain currentDomain = AppDomain.CurrentDomain;
                if (currentDomain.IsFinalizingForUnload() && !Environment.HasShutdownStarted)
                {
                    System.Console.WriteLine("Client failed to call Destroy");
                }
            }
        }
    }
}
```

I was given this error using the code above:

> {"An operation was attempted on something that is not a socket"}
> System.Net.Sockets.Socket.Poll(Int32 microSeconds, SelectMode mode)
The implementation is severely flawed. You don't truly implement [`IDisposable`](http://msdn.microsoft.com/en-us/library/system.idisposable.aspx), and you end up relying on the garbage collector to clean up your resources, which is a bad thing. Additionally, you don't even clean up those resources properly when the GC does come around (it does do it correctly, but it's by mistake that it happens). It is the responsibility of your class to implement `IDisposable` as you are holding onto references which implement `IDisposable`. Then, in your implementation of [`Dispose`](http://msdn.microsoft.com/en-us/library/system.idisposable.dispose.aspx), if you are not being GCed (it is an explicit call to `Dispose`) you are to call `Dispose` on any `IDisposable` implementations that you are holding onto. You check the connection status of the `Socket`, but that's not the same as calling `Dispose` on it, and you leak the resource as a result (GC eventually picks it up). For the guideline on how to properly implement `IDisposable`, see the section of the MSDN documentation titled "Implementing Finalize and Dispose to Clean Up Unmanaged Resources", located here: <http://msdn.microsoft.com/en-us/library/b1yfkh5e(VS.71).aspx> I should note that I don't agree completely with these guidelines, but they are the most adopted. For my position, see here: <http://www.caspershouse.com/post/A-Better-Implementation-Pattern-for-IDisposable.aspx>
This implementation is flawed, for a few reasons. First, your Dispose() method should have a single purpose - to call `socket.Dispose();`. Right now, you are putting far too much logic in there, and not actually "Disposing" of the single managed, IDisposable resource you own. Second, you do not need a finalizer at all, since you do not directly own or allocate any native, unmanaged resources. The only resource that you are disposing of is a Socket, which is managed, and will implement its own finalizer as needed. If you want to trap and find the cases where a Connection was not disposed properly, I would set up a debug-only finalizer to warn of that case. The finalizer in IDisposable is meant to handle the case where the GC must do the cleanup because the caller forgot to call Dispose() - in your case, the socket's finalizer will take care of that for you. Third, part of the IDisposable pattern as suggested in the design guidelines from Microsoft states that the client should be able to call Dispose() multiple times with no consequences. There should be nothing using the socket directly after the first call to Dispose() - in fact, I would suggest that Dispose() should call `socket.Close();` or `(socket as IDisposable).Dispose();` and immediately set `socket = null;` to prevent this from being a possibility. With your current logic, it is very possible to have the calls in `IsConnected()` cause the socket to throw an exception on subsequent calls to Dispose(), which should be avoided. Fourth, it's a strong suggestion to have a Close() method on all resources that use a file, socket, or other "closable" resources. Close() should call Dispose(). Finally, there should be checks post-disposal for use of the Connection. Any method on the connection used after disposal should throw an ObjectDisposedException.
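As a cross-language aside, Python expresses the same deterministic-cleanup contract with a context manager. Below is a minimal sketch of the rules both answers describe (dispose once, repeated close is a harmless no-op, use-after-close fails); `FakeSocket` is invented so the example stays self-contained:

```python
class FakeSocket:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

class Connection:
    """close() may be called many times; use-after-close raises."""

    def __init__(self):
        self._socket = FakeSocket()

    def send(self, data):
        if self._socket is None:
            raise ValueError("connection already closed")
        return len(data)

    def close(self):
        if self._socket is not None:
            self._socket.close()
            self._socket = None  # makes repeated close() a no-op

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

with Connection() as c:
    n = c.send(b"hi")

conn = Connection()
conn.close()
conn.close()  # second close: no exception, nothing to do
```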
What do you think of my IDisposable pattern implementation?
[ "", "c#", ".net", "design-patterns", "idisposable", "" ]
I am using LINQ to NHibernate to run some select queries against the database. My question is: how do I see the SQL query that Fluent NHibernate generates?
If you want the SQL to be in log4net, make sure you set the logger in your configuration section. I put the NHibernate package at "INFO" to reduce the noise and NHibernate.SQL at ALL so I can log all SQL statements.

```
<logger name="NHibernate">
    <level value="INFO" />
</logger>
<logger name="NHibernate.SQL">
    <level value="ALL" />
</logger>
```
With Fluent NHibernate, you can turn on `show_sql` like this:

```
Fluently.Configure()
    .Database(
        MsSqlConfiguration.MsSql2005.ShowSql().ConnectionString(...)
    )...
```

NHibernate will now print every SQL statement to `Console.Out`.
How to know query generated by Fluent NHibernate
[ "", "sql", "nhibernate", "logging", "output", "" ]
I have been programming in C# for a while and now I want to brush up on my C++ skills. Given the class:

```
class Foo
{
    const std::string& name_;
    ...
};
```

What would be the best approach (I only want to allow read access to the name\_ field)?

* use a getter method: `inline const std::string& name() const { return name_; }`
* make the field public since it's a constant

Thanks.
It tends to be a bad idea to make non-const fields public because it then becomes hard to force error checking constraints and/or add side-effects to value changes in the future. In your case, you have a const field, so the above issues are not a problem. The main downside of making it a public field is that you're locking down the underlying implementation. For example, if in the future you wanted to change the internal representation to a C-string or a Unicode string, or something else, then you'd break all the client code. With a getter, you could convert to the legacy representation for existing clients while providing the newer functionality to new users via a new getter. I'd still suggest having a getter method like the one you have placed above. This will maximize your future flexibility.
Using a getter method is a better design choice for a long-lived class, as it allows you to replace the getter method with something more complicated in the future. Although this seems less likely to be needed for a const value, the cost is low and the possible benefits are large.

As an aside, in C++ it's an especially good idea to give both the getter and setter for a member *the same name*, since in the future you can then actually change the pair of methods:

```
class Foo
{
public:
    std::string const& name() const;        // Getter
    void name(std::string const& newName);  // Setter
    ...
};
```

Into a single, public member variable that defines an `operator()()` for each:

```
// This class encapsulates a fancier type of name
class fancy_name
{
public:
    // Getter
    std::string const& operator()() const
    {
        return _compute_fancy_name(); // Does some internal work
    }

    // Setter
    void operator()(std::string const& newName)
    {
        _set_fancy_name(newName); // Does some internal work
    }
    ...
};

class Foo
{
public:
    fancy_name name;
    ...
};
```

The client code will need to be recompiled of course, but no syntax changes are required!
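The same "swap a field for a computed getter without touching callers" argument is what Python's `property` gives you; a quick illustrative sketch, outside C++:

```python
class Foo:
    def __init__(self, name):
        self._name = name

    @property
    def name(self):
        # Today this just returns the stored string; tomorrow it could
        # compute, convert, or validate without changing client code.
        return self._name

f = Foo("widget")
value = f.name  # callers read it exactly like a plain field
```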
C++ getters/setters coding style
[ "", "c++", "coding-style", "getter-setter", "" ]
In PHP, I would do this to get `name` as an array:

```
<input type="text" name="name[]" />
<input type="text" name="name[]" />
```

Or if I wanted to get `name` as an associative array:

```
<input type="text" name="name[first]" />
<input type="text" name="name[last]" />
```

What is the Django equivalent for such things?
[Check out the QueryDict documentation](http://docs.djangoproject.com/en/dev/ref/request-response/#querydict-objects), particularly the usage of [`QueryDict.getlist(key)`](http://docs.djangoproject.com/en/dev/ref/request-response/#django.http.QueryDict.getlist).

Since request.POST and request.GET in the view are instances of QueryDict, you could do this:

```
<form action='/my/path/' method='POST'>
    <input type='text' name='hi' value='heya1'>
    <input type='text' name='hi' value='heya2'>
    <input type='submit' value='Go'>
</form>
```

Then something like this:

```
def mypath(request):
    if request.method == 'POST':
        greetings = request.POST.getlist('hi') # will be ['heya1','heya2']
```
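Outside Django, the same repeated-key behaviour is visible in the standard library's `urllib.parse.parse_qs`, which QueryDict resembles; a quick check of what a form like the one above submits:

```python
from urllib.parse import parse_qs

# Two inputs named 'hi' arrive as one key holding a list of values,
# just like QueryDict.getlist('hi').
data = parse_qs("hi=heya1&hi=heya2")
greetings = data["hi"]
```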
Sorry for digging this up, but Django has a utils.datastructures.DotExpandedDict. Here's a piece of its docs:

```
>>> d = DotExpandedDict({'person.1.firstname': ['Simon'], \
        'person.1.lastname': ['Willison'], \
        'person.2.firstname': ['Adrian'], \
        'person.2.lastname': ['Holovaty']})
>>> d
{'person': {'1': {'lastname': ['Willison'], 'firstname': ['Simon']}, '2': {'lastname': ['Holovaty'], 'firstname': ['Adrian']}}}
```

The only difference being that you use dots instead of brackets.

EDIT: This mechanism was replaced by form prefixes, but here's the old code you can drop in your app if you still want to use this concept: <https://gist.github.com/grzes/73142ed99dc8ad6ac4fc9fb9f4e87d60>
Django equivalent of PHP's form value array/associative array
[ "", "python", "django", "forms", "" ]
I have the following data:

```
User     System     SubSystem
user1    System1    SubSystem1
user2    System1    SubSystem2
user3    N/A        N/A
```

and I need to be able to determine the system/subsystem tuple from the user. I must be able to add users at any time without rebuilding and redeploying the system. I know a database would be the best option here but **I cannot use a database table.** I currently have it mapped using a hash map, but I don't want it to be hard-coded. I was thinking about using a properties file, but I can't visualize how I would implement it. Anyone else have any suggestions?

Not that it matters, but I'm using Java, on WebLogic 10.3.
I would go for something as simple as:

```
user1 = userValue
user1.system = systemValue
user1.system.subsystem = subsystemValue
user2 = userValue
user2.system = systemValue
user2.system.subsystem = subsystemValue
```

user(id) is used as the "primary" key in your properties, with a very simple concatenation of your fields to store your table values. I use this very often; trust me, it's much more powerful than it may appear :)
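A minimal sketch of reading that flattened layout back into a user -> (system, subsystem) map; it's written in Python for brevity, standing in for `java.util.Properties`, and the key names follow the scheme above:

```python
PROPS = """
user1 = user1
user1.system = System1
user1.system.subsystem = SubSystem1
user2 = user2
user2.system = System1
user2.system.subsystem = SubSystem2
"""

def load_props(text):
    """Parse simple 'key = value' lines into a dict."""
    pairs = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        pairs[key.strip()] = value.strip()
    return pairs

def lookup(pairs, user):
    # Rebuild the (system, subsystem) tuple from the concatenated keys;
    # unknown users (like user3's N/A row) come back as (None, None).
    return (pairs.get(user + ".system"),
            pairs.get(user + ".system.subsystem"))

props = load_props(PROPS)
```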
You could do this using a HashMap (as you do now) and store it using [XStream](http://xstream.codehaus.org/). XStream allows you to serialise/deserialise Java objects to/from readable/editable XML. You can then write this to (say) a filesystem, and the result is editable by hand. The downside is that it's a serialisation in XML of a Java object, so not as immediately obvious as a properties file to edit. However it's still very readable, and easily understood by anyone remotely technical. Whether this is an appropriate solution depends on the audience of this file.
mapping data in properties files
[ "", "java", "data-structures", "properties", "" ]
Of course an HTML page can be parsed using any number of python parsers, but I'm surprised that there don't seem to be any public parsing scripts to extract meaningful content (excluding sidebars, navigation, etc.) from a given HTML doc. I'm guessing it's something like collecting DIV and P elements and then checking them for a minimum amount of text content, but I'm sure a solid implementation would include plenty of things that I haven't thought of.
Try the [Beautiful Soup](http://www.crummy.com/software/BeautifulSoup/) library for Python. It has very simple methods to extract information from an HTML file.

Trying to generically extract data from webpages would require people to write their pages in a similar way... but there's an almost infinite number of ways to convey a page that looks identical, let alone all the combinations you can have to convey the same information.

Was there a particular type of information you were trying to extract, or some other end goal?

You could try extracting any content in 'div' and 'p' markers and comparing the relative sizes of all the information in the page. The problem then is that people probably group information into collections of 'div's and 'p's (or at least they do if they're writing well-formed html!).

Maybe if you formed a tree of how the information is related (nodes would be the 'p' or 'div' or whatever, and each node would contain the associated text) you could do some sort of analysis to identify the smallest 'p' or 'div' that encompasses what appears to be the majority of the information... ?

**[EDIT]** Maybe if you can get it into the tree structure I suggested, you could then use a similar points system to spam assassin. Define some rules that attempt to classify the information. Some examples:

```
+1 points for every 100 words
+1 points for every child element that has > 100 words
-1 points if the section name contains the word 'nav'
-2 points if the section name contains the word 'advert'
```

If you have lots of low-scoring rules which add up when you find more relevant-looking sections, I think that could evolve into a fairly powerful and robust technique.

**[EDIT2]** Looking at the readability, it seems to be doing pretty much exactly what I just suggested! Maybe it could be improved to try and understand tables better?
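The points system can be sketched directly; the weights and thresholds below are invented for illustration, not tuned:

```python
def score_section(name, text, child_word_counts=()):
    """Toy content-vs-boilerplate score in the spirit of the rules above."""
    words = len(text.split())
    score = words // 100                       # +1 per 100 words
    score += sum(1 for c in child_word_counts  # +1 per big child element
                 if c > 100)
    lowered = name.lower()
    if "nav" in lowered:
        score -= 1
    if "advert" in lowered:
        score -= 2
    return score

main_score = score_section("content", "word " * 250, child_word_counts=[120, 30])
nav_score = score_section("navbar", "home about contact")
```

Sections with high scores are candidates for the "meaningful content"; low or negative scores mark likely chrome.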
Have a look at templatemaker: <http://www.holovaty.com/writing/templatemaker/>

It's written by one of the founders of Django. Basically you feed it a few example HTML files and it will generate a "template" that you can then use to extract just the bits that are different (which is usually the meaningful content).

Here's an example from the [google code page](http://code.google.com/p/templatemaker/):

```
# Import the Template class.
>>> from templatemaker import Template

# Create a Template instance.
>>> t = Template()

# Learn a Sample String.
>>> t.learn('<b>this and that</b>')

# Output the template so far, using the "!" character to mark holes.
# We've only learned a single string, so the template has no holes.
>>> t.as_text('!')
'<b>this and that</b>'

# Learn another string. The True return value means the template gained
# at least one hole.
>>> t.learn('<b>alex and sue</b>')
True

# Sure enough, the template now has some holes.
>>> t.as_text('!')
'<b>! and !</b>'
```
python method to extract content (excluding navigation) from an HTML page
[ "", "python", "html", "parsing", "semantics", "html-content-extraction", "" ]
For some time I have been battling to solve this problem but am not coming to any conclusion, so I thought to seek some help here. The problem is that I am getting a blank dropdown when I should get a list of cities populated from the database. The database connection is fine but I am not getting anything in my dropdown. This is what I am doing:

```
<?php
require 'includes/connect.php'; // database connection
$country = $_REQUEST['country']; // posted from the form (index.php)
$q = "SELECT city FROM city where countryid=".$country;
$result = $mysqli->query($q) or die(mysqli_error($mysqli));
if ($result) {
?>
<select name="city">
<option>Select City</option>
$id = 0;
<?php
while ($row = $result->fetch_object()) {
    $src = $row->city;
    $id = $id + 1;
?>
<option value= <?php $id ?> > <?php $src ?></option>
<?php } ?>
</select>
<?php } ?>
```

The AJAX script is this:

```
<script>
function getXMLHTTP() { // function to return the XMLHttpRequest object
    var xmlhttp = false;
    try {
        xmlhttp = new XMLHttpRequest();
    } catch (e) {
        try {
            xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
        } catch (e) {
            try {
                xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
            } catch (e1) {
                xmlhttp = false;
            }
        }
    }
    return xmlhttp;
}

function getCity(strURL) {
    var req = getXMLHTTP();
    if (req) {
        req.onreadystatechange = function() {
            if (req.readyState == 4) {
                if (req.status == 200) {
                    document.getElementById('citydiv').innerHTML = req.responseText;
                } else {
                    alert("There was a problem while using XMLHTTP:\n" + req.statusText);
                }
            }
        }
        req.open("GET", strURL, true);
        req.send(null);
    }
}
</script>
```

This is my form code:

```
<form method="post" action="" name="form1">
<table width="60%" border="0" cellspacing="0" cellpadding="0">
<tr>
<td width="150">Country</td>
<td width="150"><select name="country" onChange="getCity('findcity.php?country='+this.value)">
<option value="">Select Country</option>
<option value="1">New Zealand</option>
<option value="2">Canada</option>
</select></td>
</tr>
<tr style="">
<td>City</td>
<td><div id="citydiv"><select name="city">
<option>Select City</option>
</select></div></td>
</tr>
</table>
</form>
```
I think the problem is where you are outputting the `<option>` tags. Try using this block of code between your `<select>` tags:

```
<option>Select City</option>
<?php
$id = 0;
while ($row = $result->fetch_object()) {
    $src = $row->city;
    $id = $id + 1;
?>
<option value="<?php echo htmlspecialchars($id, ENT_QUOTES) ?>"><?php echo htmlspecialchars($src) ?></option>
<?php } ?>
```

Edit: To clarify, you didn't have any `echo` statements before the `$id` and `$src` variables. I added `htmlspecialchars()` as a habit, to produce properly escaped HTML.
A few things to try:

1. If you request findcity.php manually in your browser with a country id you know exists in the database, does it return the correct HTML?
2. Try with Firebug or another JavaScript debugger to set a breakpoint in the onreadystatechange function and see whether the returned values are as expected. Set the breakpoint at the first line of the function.
dropdown with php and ajax
[ "", "php", "ajax", "" ]
In JavaScript, every object is at the same time an instance and a class. To do inheritance, you can use any object instance as a prototype.

In Python, C++, etc., there are classes and instances as separate concepts. In order to do inheritance, you have to use the base class to create a new class, which can then be used to produce derived instances.

Why did JavaScript go in this direction (prototype-based object orientation)? What are the advantages (and disadvantages) of prototype-based OO with respect to traditional, class-based OO?
There are about a hundred terminology issues here, mostly built around someone (not you) trying to make their idea sound like The Best.

All object oriented languages need to be able to deal with several concepts:

1. encapsulation of data along with associated operations on the data, variously known as data members and member functions, or as data and methods, among other things.
2. inheritance, the ability to say that these objects are just like that other set of objects EXCEPT for these changes.
3. polymorphism ("many shapes"), in which an object decides for itself which methods are to be run, so that you can depend on the language to route your requests correctly.

Now, as far as comparison: the first thing is the whole "class" vs "prototype" question. The idea originally began in Simula, where with a class-based method each class represented a set of objects that shared the same state space (read "possible values") and the same operations, thereby forming an equivalence class. If you look back at Smalltalk, since you can open a class and add methods, this is effectively the same as what you can do in JavaScript.

Later OO languages wanted to be able to use static type checking, so we got the notion of a fixed class set at compile time. In the open-class version, you had more flexibility; in the newer version, you had the ability to check some kinds of correctness at the compiler that would otherwise have required testing.

The two approaches also differ in when an object's structure and behavior get copied into a new object. In a "class-based" language, that copying happens at compile time. In a prototype language, the operations are stored in the prototype data structure, which is copied and modified at run time. Abstractly, though, a class is still the equivalence class of all objects that share the same state space and methods. When you add a method to the prototype, you're effectively making an element of a new equivalence class.

Now, why do that? Primarily because it makes for a simple, logical, elegant mechanism at run time. To create a new object, *or* to create a new class, you simply have to perform a deep copy, copying all the data and the prototype data structure. You get inheritance and polymorphism more or less for free then: method lookup *always* consists of asking a dictionary for a method implementation by name.

The reason that ended up in JavaScript/ECMAScript is basically that when we were getting started with this 10 years ago, we were dealing with much less powerful computers and much less sophisticated browsers. Choosing the prototype-based method meant the interpreter could be very simple while preserving the desirable properties of object orientation.
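The "creation by copying" mechanism described above can be mimicked with plain dictionaries; a deliberately crude Python sketch (nothing here is JavaScript's actual machinery):

```python
import copy

# A prototype is just a populated object; its "methods" are stored as data.
point_proto = {
    "x": 0,
    "y": 0,
    "coords": lambda self: (self["x"], self["y"]),
}

def clone(proto, **overrides):
    """Make a new object by copying a prototype, then customising it."""
    obj = copy.copy(proto)  # a shallow copy suffices for this flat example
    obj.update(overrides)
    return obj

p = clone(point_proto, x=3, y=4)
where = p["coords"](p)  # method lookup is just a dictionary lookup
```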
A comparison, which is slightly biased towards the prototype-based approach, can be found in the paper [Self: The Power of Simplicity](https://courses.cs.washington.edu/courses/cse505/05wi/readings/selfPower.pdf). The paper makes the following arguments in favor of prototypes:

> **Creation by copying**. Creating new objects from prototypes is accomplished by a simple operation, copying, with a simple biological metaphor, cloning. Creating new objects from classes is accomplished by instantiation, which includes the interpretation of format information in a class. Instantiation is similar to building a house from a plan. Copying appeals to us as a simpler metaphor than instantiation.
>
> **Examples of preexisting modules**. Prototypes are more concrete than classes because they are examples of objects rather than descriptions of format and initialization. These examples may help users to reuse modules by making them easier to understand. A prototype-based system allows the user to examine a typical representative rather than requiring him to make sense out of its description.
>
> **Support for one-of-a-kind objects**. Self provides a framework that can easily include one-of-a-kind objects with their own behavior. Since each object has named slots, and slots can hold state or behavior, any object can have unique slots or behavior. Class-based systems are designed for situations where there are many objects with the same behavior. There is no linguistic support for an object to possess its own unique behavior, and it is awkward to create a class that is guaranteed to have only one instance [*think singleton pattern*]. Self suffers from neither of these disadvantages. Any object can be customized with its own behavior. A unique object can hold the unique behavior, and a separate "instance" is not needed.
>
> **Elimination of meta-regress**. No object in a class-based system can be self-sufficient; another object (its class) is needed to express its structure and behavior. This leads to a conceptually infinite meta-regress: a `point` is an instance of class `Point`, which is an instance of metaclass `Point`, which is an instance of metametaclass `Point`, ad infinitum. On the other hand, in prototype-based systems an object can include its own behavior; no other object is needed to breathe life into it. Prototypes eliminate meta-regress.

[Self](http://selflanguage.org/) is probably the first language to implement prototypes (it also pioneered other interesting technologies like JIT, which later made its way into the JVM), so reading [the other Self papers](http://www.cs.ucsb.edu/%7Eurs/oocsb/self/papers/papers.html) should also be instructive.
prototype based vs. class based inheritance
[ "", "javascript", "oop", "inheritance", "prototype-programming", "" ]
Given the following Hello World servlet, how could you transfer the Hello World output out of the servlet code and put it in some kind of HTML templating format? I would like to simply call the template from the servlet and have it render the Java variables I refer to in the template - perhaps by referring to the "Hello World" string as a class variable in the SprogzServlet class? ``` package boochy; import java.io.IOException; import javax.servlet.http.*; @SuppressWarnings("serial") public class SprogzServlet extends HttpServlet { public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException { resp.setContentType("text/plain"); resp.getWriter().println("Hello, world"); } } ```
It's pretty rare to be doing Java Web development without using some kind of MVC framework that'll delegate all views to a templating layer (apart from PDF output and other corner cases), so you have:

* JSPs;
* [Apache Velocity](http://velocity.apache.org/moving.html);
* [Freemarker](http://freemarker.org/).

Some Web frameworks like [Tapestry](http://tapestry.apache.org/) and [JSF](http://java.sun.com/javaee/javaserverfaces/) ("Java Server Faces") are a little more like HTML views with extra tags.

JSPs are ultimately just compiled to servlets anyway and tend to be a more convenient form for outputting HTML. Generally speaking I'd use them as a minimum rather than writing a heap of out.println() statements in a servlet directly.
I have successfully used Velocity for a number of years on a very small scale internal site. Its easy to use and has a nice clean API. It handles burst of activity extremely well.
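The templating idea itself is tiny. Python's standard-library `string.Template` shows the shape, purely as an illustration; with the Java options above, the equivalent would be a JSP or Velocity template file referenced from the servlet:

```python
from string import Template

# The template holds the markup; the servlet-equivalent code only
# supplies the variables.
page = Template("<html><body><p>$greeting, $name</p></body></html>")
html = page.substitute(greeting="Hello", name="world")
```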
What HTML templating options are there in Java as an alternative to doing HTML output from your servlet code?
[ "", "java", "templates", "servlets", "" ]
I'm developing a [single serving site](http://kottke.org/08/02/single-serving-sites) in PHP that simply displays messages that are posted by visitors (ideally surrounding the topic of the website). Anyone can post up to three messages an hour. Since the website will only be one page, I'd like to control the vertical length of each message. However, I do want to at least partially preserve line breaks in the original message. A compromise would be to allow for two line breaks, but if there are more than two, then replace them with a total of two line breaks in a row. Stack Overflow implements this. For example: ``` Porcupines\nare\n\n\n\nporcupiney. ``` would be changed to ``` Porcupines<br />are<br /><br />porcupiney. ``` One tricky aspect of checking for line breaks is the possibility of their being collected and stored as `\r\n`, `\r`, or `\n`. I thought about converting all line breaks to `<br />`s using `nl2br()`, but that seemed unnecessary. My question: Using regular expressions in PHP (with functions like `preg_match()` and `preg_replace()`), how can I check for instances of more than two line breaks in a row (with or without blank space between them) and then change them to a total of two line breaks?
``` preg_replace('/(?:(?:\r\n|\r|\n)\s*){2}/s', "\n\n", $text) ```
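The same normalisation expressed with Python's `re` module, which may be easier to test interactively (the pattern is an equivalent of the idea above, not a transliteration of the PCRE one):

```python
import re

def collapse_breaks(text):
    # Any run of three or more line breaks (in \r\n, \r, or \n form,
    # possibly separated by blank space) becomes exactly two newlines.
    return re.sub(r"(?:[ \t]*(?:\r\n|\r|\n)){3,}", "\n\n", text)

result = collapse_breaks("Porcupines\nare\n\n\n\nporcupiney.")
```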
Something like

```
preg_replace('/(\r\n|\r|\n){2,}/', '<br/><br/>', $text);
```

should work, I think (note that `\r\n` must come first in the alternation, or it will never match). Though I don't remember PHP syntax exactly, it might need some more escaping :-/
How to replace one or two consecutive line breaks in a string?
[ "", "php", "regex", "pcre", "line-breaks", "" ]
I have a large data set of tuples containing (time of event, latitude, longitude) that I need to visualize. I was hoping to generate a 'movie'-like xy-plot, but was wondering if anyone has a better idea or if there is an easy way to do this in Python? Thanks in advance for the help, --Leo
get [matplotlib](http://matplotlib.sourceforge.net)
The easiest option is matplotlib. Two particular solutions that might work for you are: 1) You can generate a series of plots, each a snapshot at a given time. These can either be displayed as a dynamic plot in matplotlib, where the axes stay the same and the data moves around; or you can save the series of plots to separate files and later combine them to make a movie (using a separate application). There are a number of examples in the official docs for doing these things. 2) A simple scatter plot, where the colors of the circles change with time, might work well for your data. This is super easy. See [this](http://matplotlib.sourceforge.net/examples/pylab_examples/ellipse_collection.html), for example, which produces this figure: ![scatter plot example](http://matplotlib.sourceforge.net/plot_directive/mpl_examples/pylab_examples/ellipse_collection.hires.png)
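The frame-slicing half of option 1 is plain Python, so it can be prototyped before matplotlib is even installed. A rough sketch (the 60-second bucket width is an arbitrary assumption you would tune to your data):

```python
from collections import defaultdict

def frames(events, frame_seconds=60):
    """Bucket (time, lat, lon) tuples into per-frame point lists.

    Each returned list is one 'movie' frame; hand the lat/lon pairs of
    a frame to matplotlib's scatter() and redraw to animate.
    """
    buckets = defaultdict(list)
    for t, lat, lon in events:
        buckets[int(t // frame_seconds)].append((lat, lon))
    return [buckets[key] for key in sorted(buckets)]

events = [(0, 10.0, 20.0), (30, 11.0, 21.0), (90, 12.0, 22.0)]
print(frames(events))  # [[(10.0, 20.0), (11.0, 21.0)], [(12.0, 22.0)]]
```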
Scientific Plotting in Python
[ "", "python", "visualization", "plot", "scientific-computing", "" ]
A *String* representation of a *double* is written to and read from a file by a **C#** application. The C# application converts the double to a string using the following fragment: ``` value.ToString("R", NumberFormatInfo.InvariantInfo); ``` The C# application converts the string to a double using the following fragment ``` double num = double.Parse(s, NumberStyles.Float, (IFormatProvider) NumberFormatInfo.InvariantInfo); ``` If that same file were to be written to and read from by a **Java** application, how would you go about converting the types without losing data?
Just using `Double.parseDouble()` and `Double.toString()` should work without losing data, I believe. In particular, from the docs for `Double.toString()`: > How many digits must be printed for > the fractional part of m or a? There > must be at least one digit to > represent the fractional part, and > beyond that as many, but only as many, > more digits as are needed to uniquely > distinguish the argument value from > adjacent values of type double. That > is, suppose that x is the exact > mathematical value represented by the > decimal representation produced by > this method for a finite nonzero > argument d. Then d must be the double > value nearest to x; or if two double > values are equally close to x, then d > must be one of them and the least > significant bit of the significand of > d must be 0. Another alternative, if you want to preserve the exact string representation (which isn't quite the same thing) is to use `BigDecimal` in Java.
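As a sanity check, the round trip through `Double.toString()`/`Double.parseDouble()` can be verified bit-for-bit (a small sketch; the class name is made up). Parsing the C#-produced "R" string with `Double.parseDouble` should likewise recover the identical bits, since both formats emit enough digits to uniquely identify one double:

```java
public class DoubleRoundTrip {
    static boolean roundTrips(double d) {
        String s = Double.toString(d);       // shortest string that uniquely identifies d
        double back = Double.parseDouble(s); // must pick the same double again
        return Double.doubleToLongBits(back) == Double.doubleToLongBits(d);
    }

    public static void main(String[] args) {
        System.out.println(roundTrips(0.1));              // true
        System.out.println(roundTrips(1.0 / 3.0));        // true
        System.out.println(roundTrips(Double.MIN_VALUE)); // true
    }
}
```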
Doubles have limited precision and might not preserve the string intact. The BigDecimal class has arbitrary precision and keeps the string representation. To convert a string into a BigDecimal: ``` BigDecimal d = new BigDecimal("10.1234567890"); ``` To convert a BigDecimal into a string: ``` System.out.println(d.toString()); ``` More details here: <http://epramono.blogspot.com/2005/01/double-vs-bigdecimal.html>
Convert a double to a String in Java and vice versa without losing accuracy
[ "", "java", "types", "" ]
Is there any documentation on qpThreads? In what way is it different from pthreads?
Found some documentation finally. [Sourceforge qpthreads](http://sourceforge.net/docman/display_doc.php?docid=21185&group_id=100112)
It looks like [qpthread](http://sourceforge.net/projects/qpthread/) has become a sourceforge project. It hasn't changed in four or five years from the CVS repository and doesn't seem to contain any documentation. Chances are that if you don't have docs on it, then none exist save for the source code and headers of course. I grabbed the source out of curiosity and it looks like a pretty standard threading implementation that is layered over pthreads. The largest difference is that pthreads is a C-style interface and qpThreads is an OO interface more akin to what you might find in Java. I haven't delved into it very deeply, but you might want to look at the interfaces like [java.util.Timer](http://java.sun.com/j2se/1.3/docs/api/java/util/Timer.html) and [java.util.concurrent](http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/package-summary.html). Some of the interfaces look quite similar to what qpThreads offers. You might want to investigate replacing it with [Boost.thread](http://www.boost.org/doc/libs/1_38_0/doc/html/thread.html) or something more recent. Chances are that you will have better luck maintaining it. It shouldn't be that hard to do depending on how much code you have to deal with.
qpThreads documentation
[ "", "c++", "documentation", "" ]
I am not sure what is wrong with this (keep in mind I'm kinda sorta new to C++) I have this class: ``` class Foo { string name; public: string SetName(string); }; string Foo::SetName(string name) { this->name = name; return this->name; } ////////////////////////////////////////////// //This is where I am trying to return a Foo pointer from this global function: Foo * ReturnFooPointer() { Foo foo; Foo * foo_ptr; foo_ptr = &foo; return foo_ptr; } ``` At compile time, this compiles just fine. However at run time it throws a runtime exception (some sort of access violation). What am I doing wrong?
You need to use the new keyword instead to create new Foo on the heap. The object on the stack will be freed when the function ends, so you are returning a pointer to an invalid place in memory. Here is the correct code. ``` Foo * ReturnFooPointer() { Foo * foo_ptr = new Foo(); return foo_ptr; } ``` Remember later to delete the pointer you are returning. So later in code: ``` Foo *pFoo = ReturnFooPointer(); //Use pFoo //... delete pFoo; ```
You're returning a pointer to a local object on the stack. It goes out of scope the moment your function returns, and is invalid. You should create a new instance to return, i.e., ``` Foo* foo_ptr = new Foo(); ``` This will create an object in the heap and will live until you call `delete` on it.
Return class pointer from a function
[ "", "c++", "" ]
I’ve just encountered some strange behaviour when dealing with the ominous typographic apostrophe ( ’ ) – not the typewriter apostrophe ( ' ). Used with wide string literal, the apostrophe breaks wofstream. This code works ``` ofstream file("test.txt"); file << "A’B" ; file.close(); ``` ==> A’B This code works ``` wofstream file("test.txt"); file << "A’B" ; file.close(); ``` ==> A’B This code fails ``` wofstream file("test.txt"); file << L"A’B" ; file.close(); ``` ==> A This code fails... ``` wstring test = L"A’B"; wofstream file("test.txt"); file << test ; file.close(); ``` ==> A Any idea ?
You should "enable" locale support before using wofstream: ``` std::locale::global(std::locale("")); // Use the environment's locale globally wofstream file("test.txt"); file << L"A’B"; ``` So if your system locale is `en_US.UTF-8`, the file `test.txt` will contain UTF-8 encoded data (5 bytes; the apostrophe alone takes 3); if your system locale is `en_US.ISO8859-1`, it would be written in that 8-bit encoding (3 bytes), assuming ISO 8859-1 had such a character at all. ``` wofstream file("test.txt"); file << "A’B" ; file.close(); ``` This code works because `"A’B"` is actually a UTF-8 string (as saved by your editor), and its bytes are written to the file unchanged. **Note:** I assume you are using a POSIX-like OS and that your environment's locale is something other than "C" (the default).
Are you sure it's not your compiler's support for unicode characters in source files that is "broken"? What if you use `\x` or similar to encode the character in the string literal? Is your source file even in an encoding that maps to `wchar_t` for your compiler?
Typographic apostrophe + wide string literal broke my wofstream (C++)
[ "", "c++", "unicode", "widestring", "wofstream", "" ]
I'm wondering if it's possible to cast an object to a Type... I've just started using Reflection, so maybe I'm doing it all wrong but here's what I would like to do: ``` ... Type type = ...; Type interfaceType = someOtherType.GetInterface("IConverter`2"); return (Cast to interfaceType)Activator.CreateInstance(type); ``` Is the cast to the interface possible? **Update:** Compiler says that T and K can not be found. The myInterface Type instance knows the T and K class... ``` public IConverter<T, K> GetConverter(Type type) { if (dtoModelDictionary.ContainsKey(type)) { Type foundType = dtoModelDictionary[type]; Type myInterface = foundType.GetInterface("IConverter`2"); return (IConverter<T, K>)Activator.CreateInstance(foundType); } else if (dalModelDictionary.ContainsKey(type)) { Type foundType = dalModelDictionary[type]; return (IConverter<T, K>)Activator.CreateInstance(foundType); } else { throw new System.Exception(); } } ``` Second update: ``` public SomeClass GetConverter(Type type) { if (dtoModelDictionary.ContainsKey(type)) { Type foundType = dtoModelDictionary[type]; Type myInterface = foundType.GetInterface("IConverter`2"); IConverter<T, K> converter = (IConverter<T, K>)Activator.CreateInstance(foundType); return converter.someMethod(); } } ```
Answer to your update: You cannot cast to a type where the generic arguments are not defined. T and K must be defined for the method that is using it. Either declare it: ``` public IConverter<T, K> GetConverter<T, K>(Type type) ``` Or, if you often face the problem that this interface is used but you don't know the T or K types, use an interface without generics: ``` interface IConverter { // general members } interface IConverter<T, K> : IConverter { // type-safe members } public IConverter GetConverter(Type type) { // ... return (IConverter)Activator.CreateInstance(type); } ```
Not really, no... at least not in this way. The problem is that your return value is going to have to be whatever your method's return value is typed as. Because everything must be typed at compile-time, there is limited or no real use case that I can see for this particular kind of type coercion - maybe you can say some more about what you are trying to accomplish? Now if you are using generics, you *do* have a run-time typing story, you can return your Type parameter type: ``` public T MyMethod<T>(...) ... return (T)Activator.CreateInstance(type); ```
Casting to a Type
[ "", "c#", "reflection", "casting", "" ]
Instead of calling: ``` var shows = _repository.ListShows("PublishDate"); ``` to return a collection of objects sorted by the publish date, I would like to use a syntax like this: ``` var shows = _repository.ListShows(s => s.PublishDate); ``` What do I need to write to take advantage of the lambda as an argument?
``` public IEnumerable<Show> ListShows(Func<Show, string> stringFromShow) { } ``` Within that method, use ``` string str = stringFromShow(show); ```
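If `PublishDate` is a `DateTime` rather than a string, a generic key selector keeps the same call site working. A sketch, assuming the repository holds its shows in a private `_shows` sequence (that name is made up):

```csharp
public IEnumerable<Show> ListShows<TKey>(Func<Show, TKey> keySelector)
{
    // Defer the actual sorting to LINQ; the caller supplies the key.
    return _shows.OrderBy(keySelector);
}

// Call site from the question, unchanged:
var shows = _repository.ListShows(s => s.PublishDate);
```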
``` var shows = _repository.OrderBy(s=>s.PublishDate); ```
How to use Lambda expression to replace string parameter
[ "", "c#", "linq", "" ]
I've written a custom serialization routine that does not use ISerializable or the SerializableAttribute to save my objects to a file. I also remote these same objects and would like to use the same serialization technique. However, I don't want to implement ISerializable because my serialization method is completely decoupled from my objects (and I'd like for it to stay that way). Is there an easy way (possibly with remoting sinks) where I can take a stream and write bytes to it and on the other side read bytes from it, skipping the Serialization framework in .NET?
If you want to use *remoting* then you are limited to `BinaryFormatter`. Normally, you can use a "serialization surrogate" to provide a serializer separate from the formatter, but AFAIK this doesn't work with .NET remoting. However; if you write your own RPC stack (over TCP/IP or HTTP, for example), you'll have a lot more control. Equally, with WCF you can replace the serializer via a behavior. I use both of these tricks in [protobuf-net](http://code.google.com/p/protobuf-net/) (the WCF hooks [are here](http://code.google.com/p/protobuf-net/source/browse/#svn/trunk/protobuf-net/ServiceModel)). Not sure you can do this with *remoting* though - you'd probably have to use `ISerializable`.
If you want to explicitly deal with streaming bytes between your processes, use pipes. Here's one getting-started on [named pipes](http://www.switchonthecode.com/tutorials/interprocess-communication-using-named-pipes-in-csharp), Google has plenty as well.
How to use custom serialization during .NET remoting?
[ "", "c#", ".net", "serialization", "remoting", "" ]
Does anyone have an example of a stored procedure which makes a connection to a remote server? I have been searching the web and have so far discovered that it might can be done using `sp_addlinkedserver` and `sp_addlinkedsrvlogin` but I haven't found a good example and I don't understand the documentation that well. ### UPDATE: None of the two first replies help me out, the closest I can get is using this: ``` EXEC sp_addlinkedserver @server = 'SiminnSrv', @provider = 'SQLNCLI', @catalog = 'devel', @srvproduct = '', @provstr = 'DRIVER={SQL Server};SERVER=my.serveradr.com;UID=my_user_name;PWD=my_pass_word;' ``` That actually makes me connect but when I query a table I get this message: > Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server >connection.
Essentially you create a linked server to the other server, and then provide login credentials to be used for SQL calls to that linked server. e.g. this will connect to "MyOtherServer" using a DomainAccount for that server with the username & password 'DomainUserName', 'DomainPassword' ``` EXEC sp_addlinkedserver 'MyOtherServer', N'SQL Server' EXEC sp_addlinkedsrvlogin 'MyOtherServer', 'false', 'OtherServerDomain\DomainUser', 'DomainUserName', 'DomainPassword' ``` More Info [Here](http://msdn.microsoft.com/en-us/library/ms190479.aspx) And [Here](http://msdn.microsoft.com/en-us/library/ms189811.aspx)
I managed to connect to MSSQL Server 2008 through a linked server using the "SQL Server Native Client 10" (`SQLNCLI10`), but I had to use `sp_addlinkedsrvlogin` instead of `@provstr` to provide the connection details. This is based on the example from [this article](http://www.codeproject.com/Articles/35943/How-to-Config-Linked-Servers-in-a-Minute): ``` EXEC master.dbo.sp_addlinkedserver @server = 'MyServerConnection', @srvproduct = '', @datasrc = 'SERVERNAME\INSTANCENAME', @provider = 'SQLNCLI10', @provstr = '' EXEC sp_addlinkedsrvlogin @rmtsrvname = 'MyServerConnection', @useself = 'false', --@locallogin = 'someLocalUser' -- Use to restrict the connection to specific login @rmtuser = 'remoteUser', @rmtpassword = 'secret' ``` Querying this linked server: ``` SELECT * FROM [MyServerConnection].[SomeDatabase].[dbo].[TableName] ```
Connect to remote server in T-SQL (SQL server 2008)
[ "", "sql", "sql-server", "t-sql", "" ]
This is a long shot, but does anyone know of an algorithm for estimating and categorising text width (for a variable width font) based on its contents? For example, I'd like to know that *iiiiiiii* is not as wide as *abcdefgh*, which in turn is not as wide as *WWWWWWWW*, even though all three strings are eight characters in length. This is actually an attempt to build some smarts into a string truncation method, which at the moment is correctly truncating a visually wide string, but is also unnecessarily truncating a visually narrow string, because both strings contain the same number of characters. It's probably sufficient for the algorithm to categorise the input string as *narrow*, *normal* or *wide* and then truncate as appropriate. This question isn't really language-specific, but if there is an algorithm then I'll implement it in Java. This is for a web application. I'm aware that there are answers on SO that deal with this problem using JavaScript to obtain the width of a containing `div` element, but I wondered if a server-side solution is possible.
Most GUI frameworks provide some way to calculate text metrics for fonts on given output devices. Using `java.awt.FontMetrics`, for example, I believe you can do this: ``` import java.awt.Font; import java.awt.FontMetrics; import java.awt.Graphics; public int measureText(Graphics g, String text) { g.setFont(new Font("TimesRoman", Font.PLAIN, 12)); FontMetrics metrics = g.getFontMetrics(); return metrics.stringWidth(text); } ``` Not tested, but you get the idea. --- Under .Net you can use the [`Graphics.MeasureString`](http://msdn.microsoft.com/en-us/library/6xe5hazb(VS.80).aspx) method. In C#: ``` private void MeasureStringMin(PaintEventArgs e) { // Set up string. string measureString = "Measure String"; Font stringFont = new Font("Arial", 16); // Measure string. SizeF stringSize = new SizeF(); stringSize = e.Graphics.MeasureString(measureString, stringFont); // Draw rectangle representing size of string. e.Graphics.DrawRectangle(new Pen(Color.Red, 1), 0.0F, 0.0F, stringSize.Width, stringSize.Height); // Draw string to screen. e.Graphics.DrawString(measureString, stringFont, Brushes.Black, new PointF(0, 0)); } ```
This worked for me: ``` AffineTransform af = new AffineTransform(); FontRenderContext fr = new FontRenderContext(af,true,true); Font f = new Font("Arial", 0, 10); // use exact font double width= f.getStringBounds("my string", fr).getWidth(); ```
Algorithm for estimating text width based on contents
[ "", "java", "algorithm", "text", "" ]
Say, I want to see if a DOM element is a block. I can write it in three ways, depending on my mood: ``` // first way if (el.currentStyle.display == "block" || el.currentStyle.display == "inline-block" || el.currentStyle.display == "table-cell") // second way var blocks = {"block": 1, "inline-block": 1, "table-cell": 1}; if (el.currentStyle.display in blocks) // third way if (el.currentStyle.display.match(/block|inline-block|table-cell/)) ``` I have mixed feelings about all of them. The first is too verbose once I have more than one option. The second contains those arbitrary values in the object (where I put 1s this time). The third looks like overkill. (What exactly is bad about overkilling?) Do you know another, better way? If not, are there any cons I am missing about these three ways? Javascript only, please.
I like the third way; I don't think it looks like overkill at all. If you need an even shorter way then this works too: ``` el.currentStyle.display.match(/(e-)?(block|cell)/) ``` But that's not very readable... It might be worth abstracting it all away by extending the String prototype: ``` String.prototype.matches = function(what) { return (',' + what + ',').indexOf(',' + this + ',') > -1; }; // Using it: el.currentStyle.display.matches('block,inline-block,table-cell'); ```
If we're primarily aiming for readability, and if this is happening more than once -- perhaps even if it *is* just once -- I'd move the test to a function. Then define that function whichever way you like -- probably option 1, for max simplicity there. Overkill? Possibly. But a gift to the programmer who wants to scan and understand the code 6 months from now. Probably you :-) ``` function isBlock(el) { return (el.currentStyle.display == "block" || el.currentStyle.display == "inline-block" || el.currentStyle.display == "table-cell"); } // ... if (isBlock(el)) { // do something } ```
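Combining the named-helper idea with a single list of allowed values could look like the sketch below (`Array.indexOf` is ES5, so very old engines would need a shim or a manual loop):

```javascript
// One place to maintain the allowed display values.
var BLOCK_DISPLAYS = ["block", "inline-block", "table-cell"];

function isBlockDisplay(display) {
    // -1 means "not in the list".
    return BLOCK_DISPLAYS.indexOf(display) !== -1;
}

function isBlock(el) {
    return isBlockDisplay(el.currentStyle.display);
}

console.log(isBlockDisplay("table-cell")); // true
console.log(isBlockDisplay("inline"));     // false
```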
Finding in a predefined set of text options
[ "", "javascript", "" ]
What's the best way to convert datetimes between local time and UTC in C/C++? By "datetime", I mean some time representation that contains date and time-of-day. I'll be happy with `time_t`, `struct tm`, or any other representation that makes it possible. My platform is Linux. Here's the specific problem I'm trying to solve: I get a pair of values containing a julian date and a number of seconds into the day. Those values are in GMT. I need to convert that to a local-timezone "YYYYMMDDHHMMSS" value. I know how to convert the julian date to Y-M-D, and obviously it is easy to convert seconds into HHMMSS. However, the tricky part is the timezone conversion. I'm sure I can figure out a solution, but I'd prefer to find a "standard" or "well-known" way rather than stumbling around. --- A possibly related question is [Get Daylight Saving Transition Dates For Time Zones in C](https://stackoverflow.com/questions/678445/get-daylight-saving-transition-dates-for-time-zones-in-c)
You're supposed to use combinations of `gmtime`/`localtime` and `timegm`/`mktime`. That should give you the orthogonal tools to do conversions between `struct tm` and `time_t`. For UTC/GMT: ``` time_t t; struct tm tm; struct tm * tmp; ... t = timegm(&tm); ... tmp = gmtime(&t); ``` For localtime: ``` t = mktime(&tm); ... tmp = localtime(&t); ``` (Note that `gmtime` and `localtime` take a `time_t *`, not a `time_t`.) All `tzset()` does is set the internal timezone variable from the `TZ` environment variable. I don't think this needs to be called more than once. If you're trying to convert between timezones, you should modify the `struct tm`'s `tm_gmtoff`.
If on Windows, you don't have timegm() available to you: ``` struct tm *tptr; time_t secs, local_secs, gmt_secs; time( &secs ); // Current time in GMT // Remember that localtime/gmtime overwrite same location tptr = localtime( &secs ); local_secs = mktime( tptr ); tptr = gmtime( &secs ); gmt_secs = mktime( tptr ); long diff_secs = long(local_secs - gmt_secs); ``` or something similar...
Converting Between Local Times and GMT/UTC in C/C++
[ "", "c++", "c", "timezone", "dst", "" ]
I have no other developers to ask for advice or "what do you think - I'm thinking *this*" so please, if you have time, have a read and let me know what you think. It's easier to show than describe, but the app is essentially like a point of sale app with 3 major parts: Items, OrderItems and the Order. The item class is the data as it comes from the datastore. ``` public class Item : IComparable<OrderItem>, IEquatable<OrderItem> { public Int32 ID { get; set; } public String Description { get; set; } public decimal Cost { get; set; } public Item(Int32 id, String description, decimal cost) { ID = id; Description = description; Cost = cost; } // Extraneous Detail Omitted } ``` The order item class is an item line on an order. ``` public class OrderItem : Item, IBillableItem, IComparable<OrderItem>, IEquatable<OrderItem> { // IBillableItem members public Boolean IsTaxed { get; set; } public decimal ExtendedCost { get { return Cost * Quantity; } } public Int32 Quantity { get; set; } public OrderItem (Item i, Int32 quantity) : base(i.ID, i.Description, i.Cost) { Quantity = quantity; IsTaxed = false; } // Extraneous Detail Omitted } ``` Currently when you add fees or discounts to an order it's as simple as: ``` Order order = new Order(); // Fee order.Add(new OrderItem(new Item(1, "Admin Fee", 20), 1)); // Discount order.Add(new OrderItem(new Item(2, "Today's Special", -5), 1)); ``` I like it, it makes sense, and a base class that Order inherits from iterates through the items in the list, calculates the appropriate taxes, and allows the other Order-type documents (of which there are 2) to inherit all of this calculation without re-implementing anything. If an order-type document doesn't have discounts, it's as easy as just not adding a -$ value OrderItem. The only problem that I'm having is displaying this data. The form(s) that this goes on has a grid where the Sale items (i.e. not fees/discounts) should be displayed. 
Likewise there are textboxes for certain fees and certain discounts. I would very much like to databind those UI elements to the fields in this class so that it's easier on the user (and me). *MY THOUGHT* Have 2 interfaces, IHasFees and IHasDiscounts, and have Order implement them; each of which would have a single member of type `List<OrderItem>`. That way, I could access only Sale items, only Fees and only Discounts (and bind them to controls if need be). What I don't like about it: - Now I've got 3 different add/remove methods for the class (AddItem/AddFee/AddDiscount/Remove...) - I'm duplicating (triplicating?) functionality, as all of them are simply lists of the same type of item; it's just that each list has a different meaning. Am I on the right path? I suspect that this is a solved problem for most people (considering that this type of software is very common).
I'll point you to a remark by Rob Connery on an ALT.net podcast I listened to not long ago (I'm not an ALT.net advocate, but the reasoning seemed sound): what makes sense to a "business user" (if you have any of those around)? As a programmer, you're gonna want to factor in Item, Fee, Discount etc, because they have similar attributes and behaviors. BUT, they might be two totally separate concepts in terms of the model. And someone is gonna come at a later time, saying "but this makes no sense, they are separate things, I need to report on them separately and I need to apply this specific rule to discounts in that case". DRY does not mean limiting your model, and you should keep that in sight when factoring behavior via inheritance or anything like that. The specific example that was used in that case was that of the shopping cart. The programmer's natural idea was to use an order in an uncommitted state. And it makes sense, because they look exactly the same. Except that they are not. It makes no sense to the client, because they are two separate concepts, and it just makes the design less clear. It is a matter of practices, taste and opinion though, so don't blindly follow advice posted on a web site :) And to your specific problem, the system I work with uses items, fees, line-item discounts (a property of the item) and a global discount on the order (though it's not an order, it's a POS receipt, but it does not really matter in that case). I guess the reason is that, behind those concepts, Items are specific instances of inventoried pieces, they impact stock quantities, they are enumerable and quantifiable. Fees are not. They do not share most of the attributes. It might not matter in your case, because your domain seems much more limited than that, but you might want to keep those issues in mind.
Effectively, I'd look at the details of your design and try to figure out where the **behaviors** lie; then extract any commonalities in those behaviors to a distinct interface and make sure that applies to your design. To wit; Fees may have validation behaviors associated with them. Let's say you add a Fee to any Order which has 20 items or more (just a random example, run with me on this one). Now, when you add the 20th item, you may want to add that Fee to the Order, but there's a problem; when you remove an item from your order, do you want to have to check every time to see if you need to remove that Fee from your order? I doubt it; the implication here is that there is a behavior associated with the Fees / Discounts that essentially makes them an entirely different class of things. I'd look at it this way; categorize Fees and Discounts as "Special" things, and then create an "ISpecial" interface from which both Fees and Discounts inherit. Extract any common functionality to the ISpecial interface (for example, "Validate"). Then have your Order implement the ISpecial (or whatever) interface. In that way, you can define the specific Fee.Validate() behavior and the Discount.Validate() behavior, and have them operate properly thanks to the magic of polymorphism (foreach ISpecial in m_specialCollection, call its .Validate()). In that way, as well, you can easily extend the Special interface for anything else that might be necessary (say, Taxes).
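A bare-bones sketch of that extraction (all names illustrative, not taken from the question's code):

```csharp
public interface ISpecial
{
    // Common behavior shared by fees, discounts, taxes, ...
    bool Validate(Order order);
}

public class Fee : ISpecial
{
    public bool Validate(Order order)
    {
        return order.Items.Count >= 20; // the "20 items" rule from above
    }
}

public class Discount : ISpecial
{
    public bool Validate(Order order) { return true; } // discount-specific rule here
}

// Inside Order, one pass then handles them all:
// specials.RemoveAll(s => !s.Validate(this));
```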
Should I incorporate list of fees/discounts into an order class or have them be itemlines
[ "", "c#", "oop", "" ]