Hi Javascript gurus, I have this Javascript code which works fine on Firefox, but not on IE 7. Any ideas why? Here is the code ``` function TestWindow() { SimpleWindow('Default.aspx', 'Simple Test', 200, 200, 'yes') } function SimpleWindow(mypage,myname,w,h,scroll) { var win= null; var winl = (screen.width-w)/2; var wint = (screen.height-h)/2; settings='height='+h+',width='+w+',top='+wint+',left='+winl+',scrollbars='+scroll+',toolbar=no,location=no,status=no,menubar=no,resizable=no,dependent=no' win=window.open(mypage,myname,settings) if(parseInt(navigator.appVersion) >= 4) { win.window.focus(); } } ```
You may have realized that IE is giving the error "Invalid argument." IE doesn't seem to like window names with spaces in them. Change 'Simple Test' to 'SimpleTest' etc.
For *myname* parameter use only a-zA-Z0-9 characters. IE doesn't like any other, especially whitespace characters.
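To illustrate the fix both answers describe, a small helper (hypothetical, not from the question's code) can sanitize the name before it ever reaches `window.open`:

```javascript
// Hypothetical helper: strip everything IE rejects from a window name,
// leaving only a-z, A-Z and 0-9.
function safeWindowName(name) {
  return name.replace(/[^a-zA-Z0-9]/g, '');
}

// e.g. win = window.open(mypage, safeWindowName(myname), settings);
console.log(safeWindowName('Simple Test')); // "SimpleTest"
```

This keeps the call sites unchanged while guaranteeing the name is IE-safe.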
Popup window not opening on IE7
[ "javascript", "internet-explorer", "popup" ]
What I am really looking for is a maths function that takes in a string representing an equation and calculates the answer as its return value. For example: "(((4 \* 5) + 6) \* 2) / 8" Output: 6.5 So in coding terms, something like ``` print calc("(((4 * 5) + 6) * 2) / 8"); ``` Is there already a class or function that some angel has built, or do I have to do it myself? Thanks
As cloudhead said, just fixed up. ``` $nums = "(((4 * 5) + 6) * 2) / 8"; eval("\$nums = $nums;"); echo $nums; ```
you can use `eval()` for that, it'll evaluate the argument as PHP code. Note that `eval()` only hands back a value if the evaluated code contains a `return` statement: ``` $result = eval("return (((4 * 5) + 6) * 2) / 8;"); // 6.5 print $result; ```
PHP Maths Equation Function
[ "php", "math" ]
I have a 2-dimensional jagged array (though it's always rectangular), which I initialize using the traditional loop: ``` var myArr = new double[rowCount][]; for (int i = 0; i < rowCount; i++) { myArr[i] = new double[colCount]; } ``` I thought maybe some LINQ function would give me an elegant way to do this in one statement. However, the closest I can come up with is this: ``` double[][] myArr = Enumerable.Repeat(new double[colCount], rowCount).ToArray(); ``` The problem is that it seems to be creating a single `double[colCount]` and assigning references to that instead of allocating a new array for each row. Is there a way to do this without getting too cryptic?
``` double[][] myArr = Enumerable .Range(0, rowCount) .Select(i => new double[colCount]) .ToArray(); ```
What you have won't work as the `new` occurs before the call to [`Repeat`](http://msdn.microsoft.com/en-us/library/bb348899.aspx). You need something that also repeats the creation of the array. This can be achieved using the [`Enumerable.Range` method](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.range.aspx) to generate a range and then performing a [`Select` operation](http://msdn.microsoft.com/en-us/library/system.linq.enumerable.select.aspx) that maps each element of the range to a new array instance (as in [Amy B's answer](https://stackoverflow.com/questions/1099731/initialize-a-jagged-array-the-linq-way/1099794#1099794)). However, I think that you are trying to use LINQ where it isn't really appropriate to do so in this case. What you had prior to the LINQ solution is just fine. Of course, if you wanted a LINQ-style approach similar to [`Enumerable.Repeat`](http://msdn.microsoft.com/en-us/library/bb348899.aspx), you could write your own extension method that generates a new item, such as: ``` public static IEnumerable<TResult> Repeat<TResult>( Func<TResult> generator, int count) { for (int i = 0; i < count; i++) { yield return generator(); } } ``` Then you can call it as follows: ``` var result = Repeat(() => new double[colCount], rowCount).ToArray(); ```
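Incidentally, the shared-reference pitfall described here is not specific to C#. The same trap exists in Python, where multiplying a list of lists copies references (like passing one array to `Enumerable.Repeat`), while a comprehension allocates a fresh row each time (like `Range(...).Select(...)`):

```python
row_count, col_count = 3, 4

# Pitfall: every row is the same list object.
shared = [[0.0] * col_count] * row_count
shared[0][0] = 1.0
print(shared[1][0])  # 1.0 -- the "other" rows changed too

# Fix: allocate a new row per iteration.
fresh = [[0.0] * col_count for _ in range(row_count)]
fresh[0][0] = 1.0
print(fresh[1][0])  # 0.0
```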
Initialize a Jagged Array the LINQ Way
[ "c#", "linq", "arrays", "initialization", "jagged-arrays" ]
I hear HTML5 has `window.postMessage()`, but it seems to require having a handle on the window (or tab, throughout this question) you're posting the message to. What if I want to broadcast to all open windows? Is this possible? (What I'm trying to do is warn other windows without any server round-trips when a user does something in one window that affects the others, so that they can update their content. However, while some windows may be opened from existing ones--allowing me to intercept and store references to them--some fresh windows may be opened manually by the user and then a bookmark selected or URL typed in. In this case there doesn't seem to be a way to intercept and store references.)
Assuming you don't need to broadcast a message to *other websites* (just your own), [`BroadcastChannel`](https://developer.mozilla.org/en-US/docs/Web/API/BroadcastChannel) is made for this purpose. ``` const channel = new BroadcastChannel('myChannel'); channel.postMessage('myMessage'); channel.addEventListener('message', m => { console.log('Received message from other document', m); }); ```
I wrote a library to do just this: **[intercom.js](https://github.com/diy/intercom.js)** (for the same reasons you outlined). We're currently using it to broadcast notifications to all windows, so only one window needs to maintain a socket connection to the server. As some others suggested, it uses the localStorage API. **Usage is really simple:** ``` var intercom = Intercom.getInstance(); $('a').on('click', function() { intercom.emit('notice', {message: 'Something just happened!'}); }); ``` To catch the message, ``` intercom.on('notice', function(notice) { console.log(notice.message); }); ``` The interface is designed to mimic [socket.io](http://socket.io/).
Sending a message to all open windows/tabs using JavaScript
[ "javascript", "html", "cross-window-scripting" ]
My boss is asking me to code a report that has the following components: * A pie chart of employee count by state * A pie chart of employee count by age bracket (10 year brackets) * A pie chart of employee length of service (5 year brackets) * A pie chart of employee Male/Female breakdown * A pie chart of employee count by salary band (computer generates brackets). There may be others. I know I can do this by writing 5 different sql statements. However it seems like this would generate 5 table scans for one report. I could switch gears and do one table scan and analyse each record on the front end and increment counters and probably accomplish this with one pass. Which way would the collective wisdom at stackoverflow go on this? Is there a way to accomplish this with the CUBE or ROLL UP clauses in T-SQL?
If your data is properly indexed, those reports may not require any table scans at all. Really, for a problem like this you should code up the reports the simple way, and then see whether the performance meets the business requirements. If not, *then* look at optimisation strategies.
If you want 5 pie charts and need to summarize, then you need 5 SQL statements, since your GROUP BY clause is different for each.
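For comparison, the one-pass front-end approach the question describes can be sketched like this (the column layout and bracket logic are assumptions for illustration, not taken from any real schema):

```python
from collections import Counter

# Hypothetical employee rows: (state, age, years_of_service, gender, salary)
rows = [
    ("TX", 34, 3, "F", 52000),
    ("TX", 47, 12, "M", 61000),
    ("CA", 29, 1, "F", 58000),
]

by_state, by_age_bracket, by_service, by_gender = (
    Counter(), Counter(), Counter(), Counter())

for state, age, service, gender, salary in rows:  # one pass over the data
    by_state[state] += 1
    by_age_bracket[(age // 10) * 10] += 1         # 10-year brackets
    by_service[(service // 5) * 5] += 1           # 5-year brackets
    by_gender[gender] += 1

print(by_state["TX"])      # 2
print(by_age_bracket[40])  # 1  (the 40-49 bracket)
```

Each counter then feeds one pie chart, so the table is read only once.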
Summarizing data by multiple columns
[ "c#", "sql-server" ]
I have a number in Javascript, that I know is less than 10000 and also non-negative. I want to display it as a four-digit number, with leading zeroes. Is there anything more elegant than the following? ``` if(num<10) num="000"+num; else if(num<100) num="00"+num; else if(num<1000) num="0"+num; ``` I want something that is built into Javascript, but I can't seem to find anything.
Since ES2017 padding to a minimum length can be done simply with [String.prototype.padStart](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/padStart) and [String.prototype.padEnd](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/padEnd): ``` let number = 3 let string = number.toString().padStart(3, '0') console.log(string) // "003" ``` Or if only the whole part of a float should be a fixed length: ``` let number = 3.141 let array = number.toString().split('.') array[0] = array[0].padStart(3, '0') let string = array.join('.') console.log(string) // "003.141" ``` Neither of these simple uses handle sign, only showing a fraction part when number is not an integer, or other scenarios - so here is a simple example formatting function without options: ``` function format (number) { let [ integer, fraction = '' ] = number.toString().split('.') let sign = '' if (integer.startsWith('-')) { integer = integer.slice(1) sign = '-' } integer = integer.padStart(3, '0') if (fraction) { fraction = '.' + fraction.padEnd(6, '0') } let string = sign + integer + fraction return string } console.log(format(3)) // "003" console.log(format(-3)) // "-003" console.log(format(4096)) // "4096" console.log(format(-3.141)) // "-003.141000" ``` Although notably this will not handle things that are not numbers, or numbers that toString into scientific notation.
The simplest way I can think of is this: ``` ("000" + num).slice(-4) ``` A padded number is a string. When you add a number to a string, it is converted to a string. Strings have the method `slice`, which returns a fixed-length piece of the string. If the length is negative, the returned string is sliced from the end of the string. To test: ``` var num=12; console.log(("000" + num).slice(-4)); // Will show "0012" ``` Of course this only works for positive integers of up to 4 digits. A slightly more complex solution will handle positive integers of any length: ``` '0'.repeat( Math.max(4 - num.toString().length, 0)) + num ``` Create a string by repeatedly adding zeros, as long as the number of digits (the length of the string) is less than 4. Then add the number, which is converted to a string too. **Edit:** from now on you should probably use this function: ``` String(num).padStart(4,'0') ``` It still doesn't handle negative numbers...
How can I format an integer to a specific length in javascript?
[ "javascript", "formatting" ]
I've been tasked with reconciling two big data sets (two big lists of transactions). Basically I extract the relevant fields from the two data sources into two files of the same format, then compare the files to find any records that are in A but not in B, or vice versa, and report on them. I wrote a blog entry on [my best efforts achieving this (click if interested).](http://splinter.com.au/blog/?p=135) The gist of it is to load both data sets into a big hash table, with the keys being the rows, and the values being +1 each time it appears in file A, and -1 each time it appears in file B. Then at the end, I look for any key/value pairs where the value != 0. My algorithm seems fast enough (10 seconds for 2\*100mb files), however it's a bit memory-intensive: 280mb to compare two sets of 100mb files. I would hope to get it down to 100mb peak memory usage, and possibly lower if the two data sets are sorted in roughly the same order. Any ideas? Also, let me know if this is too open ended for SO.
I have done something similar to this in scripts on unix using shell and perl, however the theory may carry over. Step 1: sort both files so they are in order by the same criteria. I used the unix sort command to do this (I required the unique flag, but you just need some sort of memory-efficient file sort). This is likely the tricky part to figure out on your own. Step 2: open both files, and essentially scan them line by line (or record by record if binary format). If the line in the left file is equal to the one in the right file, then the lines match; move on (remember we already sorted the files, so the smallest record should be first). If the left record is greater than the right record, your right record is missing: add it to your list, read the next line in the right file, and simply do your check again. The same thing applies if your right record is greater: then your left record is missing, so report it and keep going. Scanning the records should be very memory efficient. It may not be as fast, but for me I was able to crunch several gigs of data with multiple passes looking at different fields within a couple of minutes.
The only way I can think of is to not load all of the data into memory at once. If you change the way you process it so that it grabs a bit of each file at a time, it would reduce your memory footprint but increase your disk IO, which would probably result in a longer processing time.
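The sorted two-pointer scan described in the first answer can be sketched in a few lines of Python. This works over in-memory lists for clarity; in practice each side would be a file iterator over the pre-sorted files, so memory use stays constant:

```python
def diff_sorted(left, right):
    """Walk two sorted sequences; report records missing from either side."""
    missing_from_right, missing_from_left = [], []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] == right[j]:
            i += 1
            j += 1
        elif left[i] < right[j]:
            missing_from_right.append(left[i])  # right side has no match for it
            i += 1
        else:
            missing_from_left.append(right[j])
            j += 1
    # Whatever remains on one side has no counterpart on the other.
    missing_from_right.extend(left[i:])
    missing_from_left.extend(right[j:])
    return missing_from_right, missing_from_left

a = ["t1", "t2", "t4"]
b = ["t1", "t3", "t4"]
print(diff_sorted(a, b))  # (['t2'], ['t3'])
```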
Is there a more efficient way to reconcile large data sets?
[ "c#", "optimization", "memory", "comparison" ]
I have a `MethodInfo` of an *interface* method and `Type` of a *class* that implements the *interface*. I want to find the `MethodInfo` of the class method that implements the interface method. The simple `method.GetBaseDefinition()` does not work with interface methods. Lookup by name won't work either, because when implementing interface method explicitly it can have any name (yes, not in C#). So what is the *correct* way of doing that that covers all the possibilities?
OK, I found a way, using [GetInterfaceMap](https://msdn.microsoft.com/en-us/library/system.type.getinterfacemap.aspx). ``` var map = targetType.GetInterfaceMap(interfaceMethod.DeclaringType); var index = Array.IndexOf(map.InterfaceMethods, interfaceMethod); if (index == -1) { //this should literally be impossible } return map.TargetMethods[index]; ```
Here's an extension method! ``` public static MethodInfo GetImplementedMethod(this Type targetType, MethodInfo interfaceMethod) { if (targetType is null) throw new ArgumentNullException(nameof(targetType)); if (interfaceMethod is null) throw new ArgumentNullException(nameof(interfaceMethod)); var map = targetType.GetInterfaceMap(interfaceMethod.DeclaringType); var index = Array.IndexOf(map.InterfaceMethods, interfaceMethod); if (index < 0) return null; return map.TargetMethods[index]; } ```
How to get MethodInfo of interface method, having implementing MethodInfo of class method?
[ "c#", ".net", "reflection", "methodinfo" ]
I have a Windows server that is intermittently losing the ability to lookup DNS information. I'm trying to get to the root cause of the problem but in the mean time I'd like to be able to monitor whether the server can perform lookups. Basically, it should attempt to lookup some common hostnames and the display 'Success' if the lookups are successful. The site runs PHP so I'd prefer that the monitor script be in PHP but if someone knows how to do this in ASP / .Net that would work as well.
`dns_get_record()` (<http://www.php.net/manual/en/function.dns-get-record.php>) is the PHP function it sounds like you are after.
On Windows, PHP DNS functions are not available natively prior to PHP 5.3. You will need the PEAR Net\_DNS class. <http://pear.php.net/package/Net_DNS> Example usage: ``` require_once 'Net/DNS.php'; $resolver = new Net_DNS_Resolver(); $resolver->debug = $this->debug; // nameservers to query $resolver->nameservers = array('192.168.0.1'); $resp = $resolver->query($domain, 'A'); ``` source: <http://code.google.com/p/php-smtp-email-validation/source/browse/trunk/smtp_validateEmail.class.php#232>
DNS Lookup in PHP
[ "php", "windows", "dns" ]
Hey, I just sort of learned how to put my SQL statements into VBA (or at least write them out), but I have no idea how to get the data returned? I have a couple of forms (chart forms) based on queries that I run pretty regular parameters against, just altering the timeframe (like top 10 sales for the month kind of thing). Then I have procedures that automatically transport the chart object into a PowerPoint presentation. So I have all these queries pre-built (like 63), and the chart forms to match (uh, yeah....63...I know this is bad), and then all these things set up on "open/close" events triggering the next (it's like my very best attempt at being a hack....or dominos; whichever you prefer). So I was trying to learn how to use SQL statements in VBA, so that eventually I can do all this in there (I may still need to keep all those chart forms, but I don't know, because I obviously lack understanding). So aside from the question that I asked at the top, can anyone offer advice? thanks
It's a bit dated, so you might want to grab a [book on the subject](https://rads.stackoverflow.com/amzn/click/com/0782123244). But, here's a ton of [access resources](http://www.kayodeok.btinternet.co.uk/favorites/kbofficeaccess.htm) and some [tutorials and examples](http://www.blueclaw-db.com/accessvisualbasic/) as well. But, basically ... ``` Dim dbs As Database Dim rs As Recordset Dim strSQL As String Set dbs = CurrentDb strSQL = "SELECT ..." ' your query here Set rs = dbs.OpenRecordset(strSQL) If Not (rs.EOF And rs.BOF) Then rs.MoveFirst 'get results using rs.Fields() Else 'no results returned End If ``` Per comment: Take a look at the [recordset class](http://www.w3schools.com/ado/ado_ref_recordset.asp). It contains a collection called Fields that are the columns that are returned from your query. Without knowing your schema, it's hard to say, but something like ... ``` rs.MoveFirst Do While Not rs.EOF 'do something like rs("SomeFieldName") rs.MoveNext Loop ``` Like I said, your best bet is to grab a book on this subject, they have tons of examples.
Use a parameterized querydef and invoke it from VBA. The query is easier to design... easily testable... and easily accessible from VBA or a form. ``` dim qd as querydef set qd = currentdb.querydefs!myquerydef qd.parameters!parm1=val1 ``` .... either qd.execute or ``` dim rs as recordset set rs = qd.openrecordset() ``` YMMV ...
How do I get the data from an SQL query in microsoft Access VBA?
[ "sql", "ms-access", "vba" ]
Since `ldd` lists only the dynamic libraries, is there a way to extract the information about the static libraries used to create the executable?
`ldd <exe filename>` shows dynamically linked libraries. `nm <exe filename>` shows the symbols in the file. To see which symbols come from static libraries requires running `nm` against those libraries to get a list of the symbols (functions, etc.) in them, then comparing them to the list of symbols from `nm <exe filename>`. You compare lists with the `comm` command. See `man comm` for details. This was taken from this forum [here](http://cboard.cprogramming.com/linux-programming/71132-static-library-how-do-i-list.html).
No, the names of the libraries are discarded during the linking process. However, if your executable contains debug information (i.e. it was compiled with the -g flag), you may be able to get information from that.
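The `comm` step in the `nm`-based approach works on sorted input. Here is a toy run with stand-in symbol lists written by `printf` (in practice these files would come from `nm <exe>` and `nm <lib>` piped through `sort -u`):

```shell
# Stand-in symbol lists; real ones would come from nm output, sorted.
printf 'bar\nbaz\nfoo\n' > prog_syms.txt
printf 'baz\nfoo\nqux\n' > lib_syms.txt

# -12 suppresses lines unique to each file, leaving only the common symbols.
comm -12 prog_syms.txt lib_syms.txt   # prints: baz, foo
```

A large overlap between the executable's symbols and a library's symbols suggests that library was linked in statically.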
Get list of static libraries used in an executable
[ "c++", "c", "unix", "static", "linker" ]
I want to conditionally output HTML to generate a page, so what's the easiest way to echo multiline snippets of HTML in PHP 4+? Would I need to use a template framework like Smarty? ``` echo '<html>', "\n"; // I'm sure there's a better way! echo '<head>', "\n"; echo '</head>', "\n"; echo '<body>', "\n"; echo '</body>', "\n"; echo '</html>', "\n"; ```
There are a few ways to echo HTML in PHP. ## 1. In between PHP tags ``` <?php if(condition){ ?> <!-- HTML here --> <?php } ?> ``` ## 2. In an echo ``` if(condition){ echo "HTML here"; } ``` With echos, if you wish to use double quotes in your HTML you must use single-quoted echos like so: ``` echo '<input type="text">'; ``` Or you can escape them like so: ``` echo "<input type=\"text\">"; ``` ## 3. Heredocs ## 4. Nowdocs (as of PHP 5.3.0) **Template engines** are used for embedding PHP in documents that contain mostly HTML. In fact, PHP's original purpose was to be a templating language. That's why with PHP you can use things like short tags to echo variables (e.g. `<?=$someVariable?>`). There are other template engines (such as Smarty, Twig, etc.) that make the syntax even more concise (e.g. `{{someVariable}}`). The primary benefit of using a template engine is keeping the design ([presentation logic](https://en.wikipedia.org/wiki/Presentation_logic)) separate from the coding ([business logic](https://en.wikipedia.org/wiki/Business_logic)). It also makes the code cleaner and easier to maintain in the long run. If you have any more questions feel free to leave a comment. Further reading is available on these things in the [PHP documentation](http://www.php.net/manual/en/language.types.string.php). --- **NOTE:** PHP short tags `<?` and `?>` are discouraged because they are only available if enabled with the `short_open_tag` php.ini configuration file directive, or if PHP was configured with the `--enable-short-tags` option. [The short echo tag `<?=` is available regardless of settings from 5.4 onwards](https://stackoverflow.com/questions/200640/are-php-short-tags-acceptable-to-use/200666#200666).
Try it like this ([heredoc](https://en.wikipedia.org/wiki/Here_document#PHP) syntax): ``` $variable = <<<XYZ <html> <body> </body> </html> XYZ; echo $variable; ```
How can I echo HTML in PHP?
[ "php", "html", "templates", "echo" ]
I've googled but I could only find how to upload one file... and I'm trying to upload all files from local directory to remote ftp directory. Any ideas how to achieve this?
With a loop? **edit**: in the general case, uploading only the files would look like this: ``` import os for root, dirs, files in os.walk('path/to/local/dir'): for fname in files: full_fname = os.path.join(root, fname) with open(full_fname, 'rb') as fh: ftp.storbinary('STOR remote/dir/' + fname, fh) ``` Obviously, you need to look out for name collisions if you're just preserving file names like this.
Look at [Python-scriptlines required to make upload-files from JSON-Call](https://stackoverflow.com/questions/40805778/python-scriptlines-required-to-make-upload-files-from-json-call) and then [FTPlib-operation: why some uploads, but others not?](https://stackoverflow.com/questions/42049403/ftplib-operation-why-some-uploads-but-others-not) Although these start from a different position than your question, the **Answer** at the first URL shows an example construction that uploads a JSON file plus an XML file via ftplib: look at script line 024 and onward. The second URL covers some other aspects related to uploading more files. This is also applicable to file types other than JSON and XML, obviously with a different 'entry' before the two final sections which define and realize the FTP\_Upload function.
Python ftplib - uploading multiple files?
[ "python", "ftplib" ]
I have an arraylist that gets **different types of values** in it, 1st value->**string**, 2nd value->**datetime**, 3rd value-->**boolean** and 4th value is **int**. How do I find their type and assign those values accordingly? Any help is appreciated:) **here is my Code:** ``` foreach (object obj in lstTop) { if(obj.GetType() == string) {do this...) else if(obj.GetType() == DateTime) {do this....} else if(obj.GetType() == bool) {do this....} else if(obj.GetType() == Int) {do this....} } ``` Thank you all, **my Final Code:** ``` string Subscription = ""; DateTime issueFirst; DateTime issueEnd; foreach (object obj in lstTop) { ///Type t = obj.GetType(); if (obj is string) Subscription += obj + ","; else if (obj is DateTime) { Subscription += Convert.ToDateTime(obj).ToShortDateString() + ","; } /// else if (t == typeof(DateTime)) } return ("User Authenticated user name: " + userName + ", Subscription: " + Subscription); ```
``` foreach (object obj in lstTop) { if(obj is string) {do this.....} else if(obj is DateTime) {do this.....} else if(obj is bool) {do this.....} else if(obj is int) {do this.....} else { // always have an else in case it falls through throw new Exception(); } } ```
ArrayLists in .Net 2.0 are almost always the wrong way to do it. Even if you don't know what the list will hold, you're better off using the generic `List<Object>` because that communicates to others that the list really could hold anything and isn't just a left over from a .Net 1.1 programmer. Other than that, the `is` keyword should do what you want: ``` if (obj is string) // do this else if (obj is DateTime) // do this // ... ``` **Update** I know this is old, but it came up in my notices today. Reading it again, it occurs to me that another nice way to do this is via type resolution for an overloaded function: ``` void DoSomething(string value) { /* ... */ } void DoSomething(DateTime value) { /* ... */ } DoSomething(obj); ```
get object type and assign values accordingly
[ "c#", ".net", ".net-2.0" ]
I'm creating a class (say, C) that associates data (say, D) with an object (say, O). When O is destructed, O will notify C that it soon will no longer exist :( ... Later, when C feels it is the right time, C will let go of what belonged to O, namely D. If D can be any type of object, what's the best way for C to be able to execute "delete D;"? And what if D is an array of objects? My solution is to have D derive from a base class that C has knowledge of. When the time comes, C calls delete on a pointer to the base class. I've also considered storing void pointers and calling delete, but I found out that's undefined behavior and doesn't call D's destructor. I considered that templates could be a novel solution, but I couldn't work that idea out. Here's what I have so far for C, minus some details: ``` // This class is C in the above description. There may be many instances of C. class Context { public: // D will inherit from this class class Data { public: virtual ~Data() {} }; Context(); ~Context(); // Associates an owner (O) with its data (D) void add(const void* owner, Data* data); // O calls this when he knows it's the end (O's destructor). // All instances of C are now aware that O is gone and it's time to get rid // of all associated instances of D. static void purge (const void* owner); // This is called periodically in the application. It checks whether // O has called purge, and calls "delete D;" void refresh(); // Side note: sometimes O needs access to D Data *get (const void *owner); private: // Used for mapping owners (O) to data (D) std::map<const void*, Data*> _data; }; // Here's an example of O class Mesh { public: ~Mesh() { Context::purge(this); } void init(Context& c) const { Data* data = new Data; // GL initialization here c.add(this, data); } void render(Context& c) const { Data* data = c.get(this); } private: // And here's an example of D struct Data : public Context::Data { ~Data() { glDeleteBuffers(1, &vbo); glDeleteTextures(1, &texture); } GLint vbo; GLint texture; }; }; ``` P.S. If you're familiar with computer graphics and VR, I'm creating a class that separates an object's per-context data (e.g. OpenGL VBO IDs) from its per-application data (e.g. an array of vertices) and frees the per-context data at the appropriate time (when the matching rendering context is current).
The question is rather vague on the requirements, so it's hard to give a good concrete answer. I hope the following helps. If you want the data to disappear immediately when its owner dies, have the owner delete it (and notify C if the C instances need to know). If you want C to do the deletion at its leisure, your solution looks fine. Deriving from Data seems to me the right thing to do. (Of course it is crucial that ~Data() be virtual, as you have done.) What if D is an array of objects? There are two interpretations of this question. If you mean that D is always an array, let it be an array (or vector<>) of pointers to Data. Then in C::purge() walk the vector and delete the objects. If you mean that D could be an array of objects but could also be a single object, there are two ways to go. Either decide that it is always an array (possibly of size 1), or that it is a single object (derived from Data) which can be a class wrapping the array of the actual objects (or pointers to them). In the latter case, the wrapper class destructor should walk the array and do the deletions. Note that if you want the array (or vector<>) to contain the actual objects, not pointers to them, (in which case you won't have to walk the array and delete manually) then you'll have the following limitations. 1. All objects in the array will have to be of the same actual type. 2. You will have to declare the array to be of that type. This will lose you all the benefits of polymorphism.
What you're looking for is [Boost::shared\_ptr](http://www.boost.org/doc/libs/1_39_0/libs/smart_ptr/shared_ptr.htm), or some similar smart-pointer system.
How do I destruct data associated with an object after the object no longer exists?
[ "c++", "opengl", "graphics" ]
When I search for images using Bing.com, I notice their images are well cropped and sorted. When you place your mouse on an image, another window will pop up with an enlarged image. <http://www.bing.com/images/search?q=Heros&FORM=BIFD#> I want to do the same thing in my program. I checked the source code of their page. They are using javascript, but I still have no clue how they do it. Is anyone familiar with it? Any suggestions are welcome.
If you look at the HTML, you'll see a span immediately above each of the images. It sets that frame's display style from "none" to "block". It then uses an animation library to resize the content of the covering frame.
It's the same image. It just enlarges it slightly.
How does Bing.com create enlarged thumbnails?
[ "javascript", "thumbnails", "bing" ]
I want to start an application which will use ajax push, however the web server should be configured properly and I don't know where to start with the server-side components. I wanted to start with [dojo's cometd](http://cometdproject.dojotoolkit.org/), then read some blogs saying that [activeMQ](http://activemq.apache.org/) is older and the flag carrier of the ajax push thing, but there was also another blog saying that it is very hard to set up and get working. Now I'm confused before even starting, so please point me the right way :) What's the optimum way of configuring an ajax push environment? Sinan.
At PHP conference in Slovenia, it was said that Meteor is the best server for Comet.
Did you check the ActiveMQ Ajax page (<http://activemq.apache.org/ajax.html>)? It shouldn't be that hard to configure it right. And feel free to post any related questions to the ActiveMQ user mailing list. Cheers, Dejan
How can I start ajax push website (activemq or cometd or something else)?
[ "javascript", "ajax", "activemq-classic", "cometd", "bayeux" ]
In SharePoint, how can you copy a list item from one list to another list, e.g. copy from "List A" to "List B" (both are at the root of the site)? I want this copying to occur when a new list item is added to "List A". I tried using the CopyTo() method of an SPListItem inside the ItemAdded event receiver, but couldn't figure out the URL to copy to.
Indeed, as Lars said, it can be tricky to move items and retain versions and correct user info. I have done similar things with that before, so if you need some code examples, let me know through a comment and I can supply you with some guidance. The [CopyTo](https://msdn.microsoft.com/en-us/library/office/microsoft.sharepoint.splistitem.copyto.aspx) method (if you decide to go with that) needs an absolute Uri like: <http://host/site/web/list/filename.doc> So, if you are performing this in an event receiver you need to concatenate a string containing the elements needed. Something like (note that this can be done in other ways too): ``` string dest = siteCollection.Url + "/" + site.Name + "/" + list.Name + "/" + item.File.Name; ```
Here is the code I use. Pass it an SPListItem and the name of the destination list as seen in SharePoint (not the URL). The only restriction is that both lists must be in the same site: ``` private SPListItem CopyItem(SPListItem sourceItem, string destinationListName) { //Copy sourceItem to destinationList SPList destinationList = sourceItem.Web.Lists[destinationListName]; SPListItem targetItem = destinationList.Items.Add(); foreach (SPField f in sourceItem.Fields) { //Copy all except attachments. if (!f.ReadOnlyField && f.InternalName != "Attachments" && null != sourceItem[f.InternalName]) { targetItem[f.InternalName] = sourceItem[f.InternalName]; } } //Copy attachments foreach (string fileName in sourceItem.Attachments) { SPFile file = sourceItem.ParentList.ParentWeb.GetFile(sourceItem.Attachments.UrlPrefix + fileName); byte[] imageData = file.OpenBinary(); targetItem.Attachments.Add(fileName, imageData); } return targetItem; } ```
Copy list items from one list to another in SharePoint
[ "", "c#", "sharepoint", "" ]
Referring to the "Configuration Manager" under the Build menu: is there a way to comment my C# code so that the commented-out code does not compile while the solution is in Debug mode, but would compile if I changed it to Release mode? Why do I want this? The reason I want code that will be compiled in Release mode, but not in Debug, is that I've got some code that will not work from my development PC (code that sends emails from my host, etc.). Instead of having to run back through my code and uncomment lines before publishing, I'd like that to be automatic.
You may be looking for something like this: ``` #if DEBUG Console.WriteLine("Debug Mode"); #else Console.WriteLine("Release Mode"); #endif ``` If you only care about release mode, you can use: ``` #if !DEBUG Console.WriteLine("Release Mode"); #endif ```
You could use the *Conditional* attribute on methods (but not individual lines of code) for this purpose. For example, the following will only be compiled into DEBUG builds. ``` [Conditional("DEBUG")] public void MyMethod() { // Do Stuff } ``` The DEBUG symbol is already specified in the project settings. You'd have to create your own symbol for a release build, say "RELEASE", so that you can do this: ``` [Conditional("RELEASE")] public void MyMethod() { // Do Stuff } ``` However, I'd recommend stepping back and looking at your problem anew from a higher level as I wouldn't really recommend this solution.
Is Visual Studio 2008 capable of conditional compilation for C#?
[ "", "c#", "visual-studio", "debugging", "build", "projects-and-solutions", "" ]
To be able to unit test my C++ code I usually pass the constructor of the class under test one or several objects that can be either "production code" or fake/mock objects (let's call these injection objects). I have done this either by 1. Creating an interface that both the "production code" class and the fake/mock class inherits. 2. Making the class under test a template class that takes the types of the injection objects as template parameters, and instances of the injection objects as parameters to the constructor. Some random thoughts: * Until we have concepts (C++0x), only documentation and parameter naming will hint what to provide the class under test (when using templates). * It is not always possible to create interfaces for legacy code * The interface is basically only created to be able to do dependency injection * In the same way: templating the class under test is done only to enable dependency injection What are your thoughts? Are there other solutions to this problem?
I think the interface option is better, but you don't have to create a common base class just for testing. You can inherit your mock class from the production class and override the necessary methods. You'll have to make those methods virtual, though, but that's how tools like [mockpp](http://mockpp.sourceforge.net/) work, and they also allow you to automate this process a little.
With C++, there's another option - you give your mock classes the exact same names as the real classes, and when linking your unit tests, you just link them against the mock object/library files instead of the real ones.
Interfaces vs Templates for dependency injection in C++
[ "", "c++", "unit-testing", "mocking", "" ]
I'm currently writing a web application implementing MVC and, well, trying to reduce complexity. Yet I've been pondering for a few hours what to do about future database systems like Oracle, PostgreSQL, Firebird, etc. What gets me is how to implement these adapters in the logic: should I just go trigger-happy with tons of switches? What can I do in this situation?
The typical answer to this is to use the ORM functionality in your framework. You can either treat your models as data objects or you could use some composition and let models have data objects. Either way, your data objects should be fairly abstract and mainly compile DB queries using methods - like the Zend Framework does with the Select object. This allows you to a) keep SQL out of your objects and b) replace the objects that actually produce your SQL. So, if your data objects all inherit from the same base ORM class, this base class can be given a DB object that it sends queries to. If you swing it right, the ORM base class will be ignorant of the type of DB class: the data object compiles a select object and hands it over to the DB object, which then interprets it in whichever way it finds best. And the short answer: no, don't use a bunch of switch statements :)
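The layering can be sketched in a few lines; Python is used here purely for illustration and all names are made up. The model only ever builds an abstract query description, and each backend-specific object renders it in its own dialect:

```python
class Select:
    """Abstract description of a query -- no SQL dialect in here."""
    def __init__(self, table):
        self.table = table
        self.conditions = []
        self.max_rows = None

    def where(self, cond):
        self.conditions.append(cond)
        return self

    def limit(self, n):
        self.max_rows = n
        return self


class MySqlBackend:
    def render(self, select):
        sql = "SELECT * FROM " + select.table
        if select.conditions:
            sql += " WHERE " + " AND ".join(select.conditions)
        if select.max_rows is not None:
            sql += " LIMIT %d" % select.max_rows
        return sql


class OracleBackend:
    """Same query object, but row limiting via ROWNUM instead of LIMIT."""
    def render(self, select):
        sql = "SELECT * FROM " + select.table
        conds = list(select.conditions)
        if select.max_rows is not None:
            conds.append("ROWNUM <= %d" % select.max_rows)
        if conds:
            sql += " WHERE " + " AND ".join(conds)
        return sql


query = Select("users").where("active = 1").limit(10)
```

Swapping `MySqlBackend` for `OracleBackend` changes the emitted SQL without a single switch statement in the model layer.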
> Yet I've been pondering for a few hours what to do for future database systems like ... [You’re NOT gonna need it!](http://xprogramming.com/practices/pracnotneed.htm)
Layering PHP Applications for other DBMS
[ "", "php", "model-view-controller", "database", "" ]
I'm working in the Google App Engine environment and programming in Python. I am creating a function that essentially generates a random number/letter string and then stores to the memcache. ``` def generate_random_string(): # return a random 6-digit long string def check_and_store_to_memcache(): randomstring = generate_random_string() #check against memcache #if ok, then store key value with another value #if not ok, run generate_random_string() again and check again. ``` Does creating two functions instead of just one big one affect performance? I prefer two, as it better matches how I think, but don't mind combining them if that's "best practice".
Focus on being able to read and easily understand your code. Once you've done this, if you have a performance problem, then look into what might be causing it. Most languages, python included, tend to have fairly low overhead for making method calls. Putting this code into a single function is not going to (dramatically) change the performance metrics - I'd guess that your random number generation will probably be the bulk of the time, not having 2 functions. That being said, splitting functions does have a (very, very minor) impact on performance. However, I'd think of it this way - it may take you from going 80 mph on the highway to 79.99mph (which you'll never really notice). The important things to watch for are avoiding stoplights and traffic jams, since they're going to make you have to stop altogether...
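If you want to see the cost for yourself, a quick `timeit` comparison makes the point (a sketch; absolute numbers vary by machine, which is exactly why only the relative difference matters):

```python
import random
import string
import timeit

def generate_random_string():
    # The real work: a 6-character alphanumeric string.
    return ''.join(random.choice(string.ascii_letters + string.digits)
                   for _ in range(6))

def split_version():
    # Identical work, reached through one extra function call.
    return generate_random_string()

# Time both; the gap between them is the per-call overhead.
inline_time = timeit.timeit(generate_random_string, number=10000)
split_time = timeit.timeit(split_version, number=10000)
```

In practice the gap between the two timings is a tiny fraction of the total: the random-string generation dominates either way.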
In almost all cases, "inlining" functions to increase speed is like getting a hair cut to lose weight.
Does creating separate functions instead of one big one slow processing time?
[ "", "python", "google-app-engine", "function", "performance", "" ]
Currently, I'm passing a command into a user-defined function in PHP. How do I have PHP execute this when I ask it to?
You'll want to either use [`eval`](https://www.php.net/manual/en/function.eval.php) or [`call_user_func`](http://nz.php.net/manual/en/function.call-user-func.php), depending on whether it's a set of expressions or simply a function call.
If it's a shell command you want to execute you use [backticks](http://www.php.net/manual/en/language.operators.execution.php) or [shell\_exec](http://www.php.net/manual/en/function.shell-exec.php).
How do I execute a command contained within a variable in PHP?
[ "", "php", "function", "" ]
First, I noticed there are *many* questions regarding this, lots marked as duplicate. I eventually came to [this one](https://stackoverflow.com/questions/561447/store-pictures-as-files-or-in-the-database-for-a-web-app). And the accepted answer for that question, although it partially solves my question, doesn't answer all of it. My question is: a user uploads an image. I store the path in the database and the image file in the file system. However, I make 3 copies of that image (large-, medium- and small-sized). So all in all I have 4 images - original, large, medium, small. Should I store all 4 paths in the database, like so

```
 ID | original          | large           | medium          | small          |
----+-------------------+-----------------+-----------------+----------------+
 1  | /path/to/original | /path/to/large  | /path/to/medium | /path/to/small |
```

or just store the original's path, and give the other 3 a naming convention, like so: `car.jpg, large-car.jpg, medium-car.jpg, small-car.jpg`? I feel this way would be less heavy on the database, and that if later I wanted to add another size (i.e. extra small), I wouldn't have to modify the database.
If all the images in a given row live in the same place, I'd say their base path should be its own column (rather than re-deriving the base path from the original image's full path all the time). If all the images in the *database* live in the same place, don't store the base path in this table at all; have it in code, or in a global configuration table.
Seems like you are trying to overuse the Database. How about this method instead. ``` ImageID | UserID | name.. ---------+---------+----- 1 | 495 | car454.jpg 2 | 495 | house.jpg 3 | 44 | kittysmall.jpg ``` And Store all the images in one place. IMAGES\_PATH = "/path/to/images" And name the images by the imageID (Auto Increment), so for the 5th image, it would be 5.ori.jpg or 5.large.jpg etc This way you can easily see who owns what image, and also the user can upload different images with the same filename and not have to worry about that.
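Under a scheme like that, path handling shrinks to string formatting. A sketch in Python purely for illustration (hypothetical helper; `IMAGES_PATH` stands in for the configured constant mentioned above):

```python
import os

IMAGES_PATH = "/path/to/images"  # single configured location for every image

def image_path(image_id, size="ori"):
    """Build the storage path for one size variant of an image.

    size is one of the variant labels, e.g. "ori", "large", "medium",
    "small" -- adding a new size needs no schema change at all.
    """
    return os.path.join(IMAGES_PATH, "%d.%s.jpg" % (image_id, size))
```

So the database row only needs the auto-increment ID and the original filename; every variant path is derived on demand.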
Is storing image paths in a database necessary?
[ "", "php", "mysql", "storage", "" ]
If my software has two object instances, one of which is subscribed to the events of the other. Do I need to unsubscribe them from one another before they are orphaned for them to be cleaned up by the garbage collector? Or is there any other reason why I should clear the event relationships? What if the subscribed to object is orphaned but the subscriber is not, or vise versa?
Yes you do. The event publishers are holding references to the objects, and would prevent them from being garbage collected. Let's look at an example to see what happens. We have two classes; one exposes an event, the other consumes it:

```
class ClassA
{
    public event EventHandler Test;

    ~ClassA()
    {
        Console.WriteLine("A being collected");
    }
}

class ClassB
{
    public ClassB(ClassA instance)
    {
        instance.Test += new EventHandler(instance_Test);
    }

    ~ClassB()
    {
        Console.WriteLine("B being collected");
    }

    void instance_Test(object sender, EventArgs e)
    {
        // this space is intentionally left blank
    }
}
```

Note how ClassB does not store a reference to the ClassA instance; it merely hooks up an event handler. Now, let's see how the objects are collected. Scenario 1:

```
ClassB temp = new ClassB(new ClassA());
Console.WriteLine("Collect 1");
GC.Collect();
Console.ReadKey();
temp = null;
Console.WriteLine("Collect 2");
GC.Collect();
Console.ReadKey();
```

We create a ClassB instance and hold a reference to it through the temp variable. It gets passed a new instance of ClassA, where we do not store a reference to it anywhere, so it goes out of scope immediately after the ClassB constructor is done. We have the garbage collector run once when ClassA has gone out of scope, and once when ClassB has gone out of scope. The output:

```
Collect 1
A being collected
Collect 2
B being collected
```

Scenario 2:

```
ClassA temp = new ClassA();
ClassB temp2 = new ClassB(temp);
temp2 = null;
Console.WriteLine("Collect 1");
GC.Collect();
Console.ReadKey();
temp = null;
Console.WriteLine("Collect 2");
GC.Collect();
Console.ReadKey();
```

A new instance of ClassA is created and a reference to it is stored in the temp variable. Then a new instance of ClassB is created, getting the ClassA instance in temp passed to it, and we store a reference to it in temp2. Then we set temp2 to null, making the ClassB instance go out of scope.
As before, we have the garbage collector run after each instance has gone out of scope. The output: ``` Collect 1 Collect 2 B being collected A being collected ``` So, to conclude; if the instance that exposes an event goes out of scope, it becomes available for garbage collection, regardless of whether there are event handlers hooked up or not. If an instance that has an event handler hooked up to an event in another instance, it will not be available for garbage collection until either the event handler is detached, or the instance to which the event handler is attached becomes available for garbage collection.
You only need to unhook events if the object *exposing* the events is long-lived, but the object *hooking* the event would otherwise be short-lived (and get garbage collected fairly quickly). In this case, failing to unhook will cause what amounts to a memory leak, because your short-lived object will not be able to be GCed -- because the event in the long-lived object holds onto a delegate, which holds a reference to the short-lived object. Since the short-lived object is still referenced by that delegate, it can't get garbage-collected. Static events are long-lived by definition -- they live until the program exits. If you hook a static event, you definitely should unhook it when you're done. If both objects are about to be orphaned, unhooking isn't necessary.
Do I need to remove event subscriptions from objects before they are orphaned?
[ "", "c#", ".net", "events", "garbage-collection", "" ]
I've used mod\_rewrite to rewrite links on my site like so:

```
mysite.com/store/<store_id>/<store_name>/<page>
mysite.com/store.php?id=<store_id>&page=<page>
```

for example:

```
mysite.com/store/1313/johnny-walker-inc/13
mysite.com/store.php?id=1313&page=13
```

However, as a result, all my links that were relative are now resolved against the rewritten path, for example:

```
mysite.com/send_message.php
```

has become

```
mysite.com/store/1313/johnny-walker-inc/send_message.php
```

How can I fix this? Here is my `mod_rewrite` code, in case I'm making a mistake with it:

```
RewriteRule ^store/([0-9]+)/[^/]+/([0-9]+)$ store.php?storeid=$1&page=$2 [L]
```

Thanks!
You need to make your links relative to the root, like so: ``` <a href="/send_message.php">link</a> ``` Note the slash before `send_message.php`.
Other solutions include a BASE href or just rewriting all the page elements that can be referenced by the imaginary context root you're showing the client.
mod_rewrite taking over all the links on the page
[ "", "php", "mod-rewrite", "seo", "" ]
I'd like to get the generic type of a collection, using reflection, at runtime. Code (JAVA): ``` Field collectionObject = object.getClass().getDeclaredField( collectionField.getName()); //here I compare to see if a collection if (Collection.class.isAssignableFrom(collectionObject.getType())) { // here I have to use the generic type of the collection // to see if it's from a specific type - in this case Persistable if (Persistable.class.isAssignableFrom(GENERIC_TYPE_COLLECTION.class)) { } } ``` Is there a way of getting the generic type of the collection in java at runtime? In my case I need the .class of the collection's generic type. Thanks in advance!
[Type erasure](http://www.angelikalanger.com/GenericsFAQ/FAQSections/TechnicalDetails.html#What%20is%20type%20erasure?) means that information about the generic type of an *object* simply isn't present at execution time. (The link is to the relevant section of Angelika Langer's [Java Generics FAQ](http://www.angelikalanger.com/GenericsFAQ/JavaGenericsFAQ.html) which should answer virtually every question you could possibly ask about Java generics :) However, you're not really interested in the type of an object - you're interested in the type of a *field*. I misread the question, and although the answer has been accepted I hope to make amends by fixing it now :) If the field doesn't use a type parameter itself, it can be done. For example:

```
import java.lang.reflect.*;
import java.util.*;

public class Test
{
    public List<String> names;

    public static void main(String[] args) throws Exception // Just for simplicity!
    {
        Field field = Test.class.getDeclaredField("names");
        ParameterizedType type = (ParameterizedType) field.getGenericType();

        // List
        System.out.println(type.getRawType());

        // Just String in this case
        for (Type typeArgument : type.getActualTypeArguments())
        {
            System.out.println(" " + typeArgument);
        }
    }
}
```

If the field were declared in a generic class with type parameter `T`, with the field being `List<T>`, then you'd have to know the type argument for the instance in order to know the type argument for the collection. Translating this into your required code is somewhat tricky though - you really need to know the type argument at the point of the collection class. For instance, if someone declared:

```
public class StringCollection implements Collection<String>
```

and then had a field of type `StringCollection`, that field itself wouldn't have any type arguments. You'd then need to check `getGenericSuperclass` and `getGenericInterfaces` recursively until you found what you wanted. It's really not going to be easy to do that, even though it's possible.
If I were you I'd try to change your design so that you don't need this.
You can absolutely do what you want. Type erasure means you can't inspect an **instance** of a collection for its generic type, but you can certainly inspect a field for its generic type.

```
class Thing {
    List<Persistable> foo;
}

Field f = Thing.class.getDeclaredField("foo");
if( Collection.class.isAssignableFrom( f.getType() ) ) {
    Type t = f.getGenericType();
    if( t instanceof ParameterizedType ) {
        Class genericType = (Class)((ParameterizedType)t).getActualTypeArguments()[0];
        if( Persistable.class.isAssignableFrom( genericType ) )
            return true;
    }
}
```

There are a lot of things that can go wrong here; for example, if you have

```
class Thing<T> {
    List<T> foo;
}
```

then the above won't work.
Specifying generic collection type param at runtime (Java Reflection)
[ "", "java", "reflection", "" ]
I am wondering whether I can upgrade a basic IoC container I am using to support lazy load. So if I have registered IFoo, I would like the IoC container to know how to fulfil both of the following dependencies (the first being the standard way IoC containers work, while the second returns a simple delegate that calls into the container for an IFoo when it is invoked). ``` public Bar(IFoo x) public Bar2(Func<IFoo> lazyFoo) ``` The problem comes when I try to write the code that will actually do this. Is there a syntax that will make the following pseudo-code compile? ``` public T Resolve<T>() { if (T is Func<X>) return (T) () => Resolve(typeof(X)); return (T)Resolve(typeof(T)); } ``` Or to put my question another way, if I have a type `T`, how can I detect if it is an instance of `Func<X>`, and if so, what is the type of `X`?
take a look at this question from this morning - might give you a good start - [C# generic list <T> how to get the type of T?](https://stackoverflow.com/questions/1043755/c-generic-list-t-how-to-get-the-type-of-t)
I misunderstood your question. It is impossible to do it in one function the way you're trying to, because the compiler must have a delegate type to create the lambda at compile time. However, this should work:

```
public T Resolve<T>()
{
    return (T)Resolve(typeof(T));
}

public Func<T> LazyResolve<T>()
{
    return () => Resolve<T>();
}
```
How to detect if a type is a Func<T> and get the type of the T
[ "", "c#", "generics", "ioc-container", "" ]
Is there some sort of C# directive to use when using a development machine (32-bit or 64-bit) that says something to the effect of: ``` if (32-bit Vista) // set a property to true else if (64-bit Vista) // set a property to false ``` but I want to do this in Visual Studio as I have an application I'm working on that needs to be tested in 32/64 bit versions of Vista. Is something like this possible?
Can you do it at runtime? ``` if (IntPtr.Size == 4) // 32 bit else if (IntPtr.Size == 8) // 64 bit ```
There are two conditions to be aware of with 64-bit. First is the *OS* 64-bit, and second is the *application* running in 64-bit. If you're only concerned about the application itself you can use the following: ``` if( IntPtr.Size == 8 ) // Do 64-bit stuff else // Do 32-bit ``` At runtime, the JIT compiler can optimize away the false conditional because the IntPtr.Size property is constant. Incidentally, to check if the *OS* is 64-bit we use the following ``` if( Environment.GetEnvironmentVariable( "PROCESSOR_ARCHITEW6432" ) != null ) // OS is 64-bit; else // OS is 32-bit ```
C# Directive to indicate 32-bit or 64-bit build
[ "", "c#", "visual-studio-2008", "" ]
I remember seeing somewhere that the "^" operator is used as a pointer operator in Managed C++ code. Hence "^" should be equivalent to the "\*" operator, right? Assuming my understanding is right, when I started learning .NET and coded a few example programs, I came across some code like this:

```
String ^username; //my understanding is you are creating a pointer to a string obj
.
.
// there is no malloc or new that allocates memory to the username pointer
.
username = "XYZ"; // shouldn't you be doing a malloc first??? isn't it a null pointer?
```

I am having trouble understanding this.
`String^` is a pointer to the managed heap, aka *handle*. Pointers and handles are not interchangeable. Calling `new` will allocate an object on an unmanaged heap and return a pointer. On the other hand, calling `gcnew` will allocate an object on a managed heap and return a handle. The line `username = "XYZ"` is merely compiler sugar. It is equivalent to

```
username = gcnew String(L"XYZ");
```
That's a reference, not pointer, to a garbage collected string. It will be allocated and deallocated automatically, when nothing is referencing it anymore.
understanding String^ in C++ .Net
[ "", ".net", "c++", "string", "reference", "managed-c++", "" ]
I've been teaching myself Python by working through [Dive Into Python](http://www.diveintopython.net/) by Mark Pilgrim. I thoroughly recommend it, as do [other Stack Overflow users](https://stackoverflow.com/questions/34570/what-is-the-best-quick-read-python-book-out-there/34608#34608). However, the last update to Dive Into Python was five years ago. I look forward to reading the new [Dive into Python 3](http://diveintopython3.net/) When I make the switch to 3.x, but for now, using django means I'll stick to 2.x. I'm interested to know what new features of Python I'm missing out on, if I've used Dive Into Python as my primary resource for learning the language. A couple of examples that I've come across are * itertools * ElementTree Is there anything else I'm missing out on? edit: As Bastien points out in his answer, I could just read the [What's New in Python](http://docs.python.org/whatsnew/) pages, but sometimes it's fun to discover a useful tip on Stack Overflow rather than struggle through the complete, comprehensive answer in the official documentation.
Check out [What's New in Python](http://docs.python.org/whatsnew/). It has all the versions in the 2.x series. Per Alex's comments, you'll want to look at all Python 2.x for x > 2. Highlights for day-to-day coding: *Enumeration*: Instead of doing: ``` for i in xrange(len(sequence)): val = sequence[i] pass ``` You can now more succinctly write: ``` for i, val in enumerate(iterable): pass ``` This is important because it works for non-getitemable iterables (you would otherwise have to use an incrementing index counter alongside value iteration). *Logging*: a sane alternative to print-based debugging, standardized in a Log4j-style library module. *Booleans*: True and False, added for clarity: `return True` clearer intention than `return 1`. *Generators*: An expressive form of lazy evaluation ``` evens = (i for i in xrange(limit) if i % 2 == 0) ``` *Extended slices*: Builtins support strides in slices. ``` assert [1, 2, 3, 4][::2] == [1, 3] ``` *Sets*: For O(1) lookup semantics, you no longer have to do: ``` pseudo_set = {'foo': None, 'bar': None} assert 'foo' in pseudo_set ``` You can now do: ``` set_ = set(['foo', 'bar']) assert 'foo' in set_ ``` *Reverse iteration*: `reversed(sequence)` is more readable than `sequence[::-1]`. *Subprocess*: Unifies all the ways you might want to invoke a subprocess -- capturing outputs, feeding input, blocking or non-blocking. *Conditional expressions*: There's an issue with the idiom: ``` a and b or c ``` Namely, when b is falsy. `b if a else c` resolves that issue. *Context management*: Resource acquisition/release simplified via the `with` statement. ``` with open(filename) as file: print file.read() # File is closed outside the `with` block. ``` *Better string formatting*: Too much to describe -- see Python documentation under `str.format()`.
Mark(author of the book) had [some comments](http://www.reddit.com/r/Python/comments/8miar/im_new_to_python_should_i_read_dive_into_python_3/c09s45y) on this. I've shamelessly copied the related paragraph here: """If you choose Python 2, I can only recommend "Dive Into Python" chapters 2-7, 13-15, and 17. The rest of the book is horribly out of date."""
Changes to Python since Dive into Python
[ "", "python", "" ]
I have a `std::map` that I'm using to store values for x and y coordinates. My data is very sparse, so I don't want to use arrays or vectors, which would result in a massive waste of memory. My data ranges from -250000 to 250000, but I'll only have a few thousand points at the most. Currently I'm creating a `std::string` with the two coordinates (i.e. `"12x45"`) and using it as a key. This doesn't seem like the best way to do it. My other thoughts were to use an int64 and shove the two int32s into it and use it as a key. Or to use a class with the two coordinates. What are the requirements on a class that is to be used as the key? What is the best way to do this? I'd rather not use a map of maps.
Use std::pair<int32,int32> for the key: ``` std::map<std::pair<int,int>, int> myMap; myMap[std::make_pair(10,20)] = 25; std::cout << myMap[std::make_pair(10,20)] << std::endl; ```
I usually solve this kind of problem like this: ``` struct Point { int x; int y; }; inline bool operator<(const Point& p1, const Point& p2) { if (p1.x != p2.x) { return p1.x < p2.x; } else { return p1.y < p2.y; } } ```
What is the best way to use two keys with a std::map?
[ "", "c++", "dictionary", "stl", "key", "stdmap", "" ]
I have a news system where you can rate news with 1 to 5 stars. In the database I save the count, the sum and the absolute rating as an int up to 100 (for HTML output, so 5 stars would be 100 percent, 1 star would be 20 percent). Now I have three toplists:

- Best rated
- Most viewed
- Most commented

The last two are simple, but the first is kinda tricky. Before I took that thing over it was all a big mess, and they just put the 5 best-rated news there, so in fact if there was a news item rated 4.995 with 100k votes and another one with 5 stars at 1 vote, the "better rated" one was on top, even though that is obviously ridiculous. For the moment I capped the list so only news with a certain number of votes (like 10 or 20) can be in the list. But I do not really like that. Is there a nice method to give those things a "weight" based on the vote count or something like that?
Have you considered using a [weighted bayesian rating system](http://www.thebroth.com/blog/118/bayesian-rating)? It'll weight the results based on the number of votes and the vote values themselves.
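The usual formula behind that approach, sketched in Python for illustration (the `min_votes` weight and the site-wide mean are tuning parameters, not fixed values):

```python
def bayesian_rating(votes, item_mean, min_votes, site_mean):
    """Weighted rating: pulls items with few votes toward the site-wide mean.

    votes      -- number of votes the item received (v)
    item_mean  -- the item's own average rating (R)
    min_votes  -- weight given to the prior (m)
    site_mean  -- average rating across all items (C)
    """
    v = float(votes)
    m = float(min_votes)
    return (v / (v + m)) * item_mean + (m / (v + m)) * site_mean

# A 5.0-star item with one vote no longer beats a 4.995 item with 100k votes:
one_vote = bayesian_rating(1, 5.0, 20, 3.0)
many_votes = bayesian_rating(100000, 4.995, 20, 3.0)
```

With few votes the result sits near the site mean; as votes accumulate it converges to the item's own average, which removes the need for a hard vote-count cutoff.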
You could explore the statistical confidence in the rating perhaps based around the average rating received for all entries and the standard deviation of all votes. While an entry has an average rating of 5, if you only have a few votes then you may not be able to say with more than 90% confidence that the actual rating is above 4.7 say. You can then rate the entries based upon the rating for which you have 90% confidence. I'm not sure if this meets your requirement of being simple.
Build a Ranking
[ "", "php", "" ]
I want to make sure the first 4 characters in an eight-character code are letters. Is there a function for that, or do I have to make my own?
You can add javascript/regex validation like this: ``` /^[a-zA-Z]{4}.*/ ```
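For a quick sanity check of that pattern (a hypothetical helper; the trailing `.*` is redundant, since `test` only needs the anchored four-letter prefix to match):

```javascript
// Returns true when the first four characters are ASCII letters.
// Anchored at ^ so the letters must be at the very start; the rest
// of the eight-character code can be anything.
function isFourLetterPrefix(code) {
    return /^[a-zA-Z]{4}/.test(code);
}

console.log(isFourLetterPrefix("ABcd1234")); // true
console.log(isFourLetterPrefix("AB121234")); // false
```

For server-side enforcement you would still validate the same pattern in C#, since client-side checks alone can be bypassed.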
You should use a [regular expression validator](http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.regularexpressionvalidator.aspx) to achieve this.
Can I test if the first four characters in a string are letters?
[ "", "c#", ".net", "asp.net", "" ]
A simple test app:

```
cout << new int[0] << endl;
```

outputs:

```
0x876c0b8
```

So it looks like it works. What does the standard say about this? Is it always legal to "allocate" an empty block of memory?
From 5.3.4/7 > When the value of the expression in a direct-new-declarator is zero, the allocation function is called to allocate an array with no elements. From 3.7.3.1/2 > The effect of dereferencing a pointer returned as a request for zero size is undefined. Also > Even if the size of the space requested [by new] is zero, the request can fail. That means you can do it, but you can not legally (in a well defined manner across all platforms) dereference the memory that you get - you can only pass it to array delete - and you should delete it. Here is an interesting foot-note (i.e not a normative part of the standard, but included for expository purposes) attached to the sentence from 3.7.3.1/2 > [32. The intent is to have operator new() implementable by calling malloc() or calloc(), so the rules are substantially the same. C++ differs from C in requiring a zero request to return a non-null pointer.]
Yes, it is legal to allocate a zero-sized array like this. But you must also delete it.
C++ new int[0] -- will it allocate memory?
[ "", "c++", "memory-management", "new-operator", "standards-compliance", "" ]
I have 3 tables in my DB (MySQL). ``` categories (name:string) items (name:string, category_id:int) votes (value:int, item_id:int, created_at:datetime) ``` So a category has many items, and an item has many votes. I want to write a query to get the most popular categories, which means getting the categories whose items have gotten the most number of votes (both up or down) over the last week. I started off trying something simpler, just getting popular items, but I'm really just guessing at this point and it doesn't work. ``` SELECT *, COUNT(votes.item_id) AS score FROM items JOIN votes USING(item_id) WHERE votes.created_at > #{1.week.ago} ORDER BY COUNT(votes.item_id) DESC LIMIT 5; ``` I don't really know what I'm doing, any ideas? Also, if anyone knows of a good write up on doing more advanced selects like this I'd love to read it. The MySQL documentation is a bit cryptic and I don't really understand 'AS' and 'JOINS'.
Try this. Use GROUP BY with the name of the category. I have commented out the created_at clause as you specified; you can uncomment it if you want to use it.

```
SELECT c.name, SUM(ABS(v.value))
FROM categories c, items i, votes v
WHERE c.id = i.category_id
AND i.id = v.item_id
-- AND v.created_at > #{1.week.ago}
GROUP BY c.name
ORDER BY SUM(ABS(v.value)) DESC
LIMIT 5;
```

You will notice that I did not use the JOIN keyword but instead filtered the results of the query using only WHERE clauses, which might be easier to understand. If you want to learn more about JOINs, [here is a tutorial](http://www.sql-tutorial.net/SQL-JOIN.asp). [Here, too, is a tutorial on SQL aliases](http://www.sql-tutorial.net/SQL-Aliases.asp) (the AS clause). In fact, there are a bunch more tutorials on this site for a bunch of different SQL topics that are not platform dependent.

**edit:** fixed as per comments; added the ABS function over the vote values.
This is what I ended up using (slightly modified version of @akf's answer). The other answers were great also, but this one seemed the simplest to me as a novice. I find it interesting that it doesn't use any joins? Would not have thought this was possible. I like how clean and simple it is.

```
SELECT c.*, SUM(ABS(v.value)) AS score
FROM categories c, items i, votes v
WHERE c.id = i.category_id
AND i.id = v.item_id
AND v.created_at > '#{1.week.ago}'
GROUP BY c.id
ORDER BY score DESC
LIMIT 5;
```

(I forgot to mention in the question that each table has an 'id' column, which you can see being used here.) Summing the absolute value of the vote value works in this case since there are positives and negatives, as some posters correctly pointed out, but I was thinking it would be even simpler to just count the number of vote records in the database. I tried "COUNT(v) AS score" but this didn't work. If anyone knows how, please post a comment. Thanks everyone!
SQL To Find Most Popular Category
[ "", "sql", "mysql", "" ]
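For readers who want to try the final query above end-to-end, here is an illustrative, self-contained sketch using Python's bundled sqlite3 module. The `id` columns follow the Rails convention mentioned in the follow-up, and the `#{1.week.ago}` placeholder is replaced with a hypothetical literal cutoff date; the sample rows are invented for the demo.

```python
import sqlite3

# In-memory database mirroring the simplified schema from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, category_id INT);
CREATE TABLE votes (id INTEGER PRIMARY KEY, value INT, item_id INT, created_at TEXT);
INSERT INTO categories VALUES (1, 'Movies'), (2, 'Music');
INSERT INTO items VALUES (1, 'Jaws', 1), (2, 'Kid A', 2);
INSERT INTO votes VALUES (1, 1, 1, '2009-06-01'), (2, -1, 1, '2009-06-02'),
                         (3, 1, 2, '2009-06-01');
""")

# Popularity = total magnitude of votes on a category's items within the window.
rows = conn.execute("""
SELECT c.name, SUM(ABS(v.value)) AS score
FROM categories c, items i, votes v
WHERE c.id = i.category_id AND i.id = v.item_id
  AND v.created_at > '2009-05-28'
GROUP BY c.id
ORDER BY score DESC
LIMIT 5
""").fetchall()
print(rows)  # Movies has two votes in the window, Music one
```

The same WHERE-clause join shape runs unchanged on MySQL; only the date literal and table population differ.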
I am trying to reduce the overall integration test execution time, and I am currently evaluating various in-memory DB solutions. The idea is to have the DAOs hit an in-memory DB during the tests, as opposed to hitting a real DB. This is a Java app using Hibernate for persistence. I'd be interested to see your experience with one of these products: H2, Derby, HSQLDB, Oracle Berkeley DB. Some of my concerns are: will in-memory DBs be able to execute stored procedures and custom native SQL? Can you selectively choose which of your services should hit the real DB vs. the in-memory DB? And overall, since this approach involves DB bootstrapping (pre-loading/pre-creating all tables with data), I am now wondering whether it'd simply be easier to mock out the DAO layer and not worry about all the unknown problems that an in-memory DB may bring... thanks.
My suggestion is to test everything, including the DAO layer as you mention. But see if you can test it in pieces. Services, DAOs, UI. For service layer testing, mock out the DAOs. That way the service layer tests are independent of whether the DAOs are working. If the service layer tests are using DAOs and using a real database then I'd argue that it's not really a Unit test but an Integration test. Although those are valuable too, if they fail it doesn't pinpoint the problem like a Unit test. For our DAO layer tests we use DbUnit with **HSQLDB**. (Using Unitils helps if you are using Spring/Hibernate/DbUnit to tie it all together.) Our DAO tests execute nice and quickly (which is important when you have 500+ tests). The memory db schema is built from our schema creation scripts so as a side effect we are testing those as well. We load/refresh a known set of data from some flat files into the memory database. (Compared to when we were using the DEV database and some data would get removed which then broke tests). **This solution is working great for us** and I would recommend it to anyone. Note, however, that we are not able to test the DAO that uses a stored proc this way (but we only have one). I disagree somewhat with the poster who mentioned that using different databases is "bad" -- just be aware of the differences and know the implications of doing so. You didn't mention if you are using Hibernate or not -- that is one important factor in that it abstracts us away from modifying any SQL that may be specific to Oracle or SQLServer or HSQLDB which another poster mentioned.
Mock out the DAO layer. Despite what some claim, unless you are just using trivial SQL, the subtle implementation differences and differing feature sets between databases will limit what you can do (stored procedures, views, etc.) and also, to some extent, invalidate the tests. My personal mocking framework of choice is Mockito, but there are lots that do the job, and mocking out the DAO is standard practice, so you'll find lots of documentation.
in-memory DBs evaluation
[ "", "java", "database", "testing", "integration-testing", "" ]
Is there any command-line option or .NET method that runs a process in the background, hiding any window it tries to open? Already tried:

```
var process = new Process()
{
    StartInfo = new ProcessStartInfo()
    {
        CreateNoWindow = true,
        WindowStyle = ProcessWindowStyle.Hidden,
        FileName = TheRealExecutableFileNameHere
    }
};

process.Start();
```

With no success so far.
Check out the [Matlab Engine](http://www.mathworks.com/access/helpdesk/help/techdoc/matlab_external/f29148.html). There's even an interesting [article](http://www.codeproject.com/KB/dotnet/matlabeng.aspx) on CodeProject, if this approach fits your needs.
I reviewed my code and it looks nearly identical to yours:

```
ProcessStartInfo psi = new ProcessStartInfo(fileName, arguments)
{
    CreateNoWindow = true,
    WindowStyle = ProcessWindowStyle.Hidden,
    UseShellExecute = false,
    RedirectStandardOutput = true
};
Process process = Process.Start(psi);
```

The only notable difference (other than formatting and which PSI constructor we chose) is my use of UseShellExecute and RedirectStandardOutput, as I needed to read the result of the process that was run. I have found the code above consistently runs a hidden process on XP and Vista. I have also found, however, and you may be experiencing the same, that a hidden process may kick off another process which by default isn't hidden. In other words, if you start hidden Process A and Process A, in turn, kicks off Process B, you have no control over how Process B will be displayed. Windows which you have no control over may be displayed. I hope this helps a little. Good luck.
Is there any way to run processes in the background?
[ "", "c#", ".net", "nohup", "" ]
I want to fetch the unmatching records from two tables in SQL. The table structure is as follows:

Table1

```
Id Name
1  Prashant
2  Ravi
3  Gaurav
5  Naween
7  Sachin
```

Table2

```
Id Name
1  Prashant
2  Ravi
4  Alok
6  Raja
```

The output I want is:

```
Id Name
3  Gaurav
4  Alok
5  Naween
6  Raja
7  Sachin
```

What will be the query to fetch the required output in SQL?
I think joeslice's answer will only give half the results. You need to union the other table. Alternatively, you could do a full outer join.

```
select a.Id, a.Name
from Table1 a
  left outer join Table2 b on a.Name = b.Name
where b.Id is null
UNION ALL
select a.Id, a.Name
from Table2 a
  left outer join Table1 b on a.Name = b.Name
where b.Id is null
```
```
create table #t1 (Id int, name varchar(50))
create table #t2 (Id int, name varchar(50))

insert #t1 values (1, 'Prashant')
insert #t1 values (2, 'Ravi')
insert #t1 values (3, 'Gaurav')
insert #t1 values (5, 'Naween')
insert #t1 values (7, 'Sachin')

insert #t2 values (1, 'Prashant')
insert #t2 values (2, 'Ravi')
insert #t2 values (4, 'Alok')
insert #t2 values (6, 'Raja')

select isnull(#t1.id, #t2.id), isnull(#t1.name, #t2.name)
from #t1
  full outer join #t2 on #t1.id = #t2.id
where #t2.Id is null or #t1.id is null
```

results:

```
3 Gaurav
5 Naween
7 Sachin
4 Alok
6 Raja
```
How to fetch unmatching records from two SQL tables?
[ "", "sql", "" ]
I have a question: my program will search for Firefox windows opened by the user. When a user opens Firefox and visits any site, I want to search for a keyword in that page's HTML content. How can I access Firefox's active tab's DOM (or HTML content) from outside Firefox using my C++ program? Is it possible? If so, can you give me some ideas or links? If it is not possible, how can I copy text to the clipboard from within Firefox without installing/setting up anything?

Best regards,
Nuri Akman
There is no built-in way to access the DOM of a web page inside Firefox from an external program. You can write an extension that implements some sort of IPC (using sockets or whatever) and communicate with that, but not built-in to Firefox.
It can be done through addons
How to access Firefox's DOM (or HTML content) from outside firefox
[ "", "c++", "html", "firefox", "dom", "" ]
I'm running a library via JNI (I didn't write it), and internally it calls another DLL. I get an error saying "Can't find dependent libraries" unless I put the path of the other DLL on the system PATH variable (I'm on Windows XP). I'd like to be able to handle this on the java command line, and I've already tried adding it to -Djava.library.path and to the classpath, neither which worked (I expected -Djava.library.path to work but not classpath, but neither worked). Is there a way to do this? thanks, Jeff
* If you have a DLL named 'MyNativeDLL.dll', then you should use 'MyNativeDLL' in your LoadLibrary call.
* Use [Dependency Walker](http://www.dependencywalker.com/) to check if there are any files required by MyNativeDLL.dll.
* If there are, include them in the same folder as MyNativeDLL.dll. Once you get it working, try putting the additional required files in the System32 folder.
I was able to get this to work without putting any DLLs on the PATH by using System.load() on all DLLs in reverse dependency order. Just to be clear, I was calling System.load() on all dependent DLLs, not just JNI DLLs. You don't have to call System.load() on DLLs that come with Windows (they're on the PATH). I was doing this in a web app where a jar included DLLs that were getting unpacked. Your situation seems simpler, so I believe it should work. I generally followed the solution here: [How to make a JAR file that includes DLL files?](https://stackoverflow.com/questions/1611357/how-to-make-a-jar-file-that-include-dll-files)
JNI dependent libraries
[ "", "java", "java-native-interface", "" ]
To be specific, I'm talking about avoiding this type of code:

```
<input type='text' id='title_33' class='title' onfocus='updateCharsLeft(33);' onkeypress='updateCharsLeft(33);' />
```

Here I would like to put the `onfocus` and `onkeypress` event handlers separately, i.e. in a .js file, like this:

```
$(document).ready(function() {
    $(".title").focus(updateCharsLeft);
    $(".title").keypress(updateCharsLeft);
});
```

However, the problem here is that the ID of the textbox needs to be passed to the function `updateCharsLeft()`. It would suck to have to extract the id from the ID of the textbox in that function, so it would actually be cleaner to just put the event handlers in the HTML code. Thoughts?
I've had to do something similar before and also wasn't happy with parsing the value out of the ID attribute. The best thing I can suggest is that you use another attribute for the value you need, like the `rel` attribute:

```
<input type='text' id='title_33' class='title' rel='33' />
```

Or, depending on how religious you are about validation, just use a custom attribute:

```
<input type='text' id='title_33' class='title' myval='33' />
```
Can't you do this:

```
$(document).ready(function() {
    $(".title").focus(function() { updateCharsLeft(this.id); });
    $(".title").keypress(function() { updateCharsLeft(this.id); });
});
```

or more neatly:

```
$(document).ready(function() {
    $(".title, .another, .someother, .andAnother").focus(function() {
        updateCharsLeft(this.id);
    }).keypress(function() {
        updateCharsLeft(this.id);
    });
});
```
Is it impossible to separate javascript from HTML?
[ "", "javascript", "jquery", "code-organization", "" ]
This seems like a basic question, but it's still bugging me. Why doesn't

```
MyObject[] myobject = new MyObject[10];
```

allocate 10 objects? Why do we have to call `new` on each individual object?

```
myobject[0] = new MyObject();
:::
myobject[9] = new MyObject();
```

Or am I just making a silly mistake? :)
As far as I am aware, you aren't creating 10 objects in the array; you are creating an array with 10 slots for objects of type MyObject. null is a perfectly acceptable state for an item within the array, though.
You're calling the *array constructor*, which creates an array, not the object constructor ten times. Which constructor on the element type would you like it to call? There's no way to provide any parameters as it is. The only constructors being called implicitly are for value types (structs) like integers and doubles.

What if the element constructor fails? It could throw any exception. Should you get a half-initialized array, or no array at all? Should the exception bubble or be hidden?

What if the element constructor is just very, very slow? Add a Thread.Sleep() and the code that calls you would basically hang. How would you debug that?

And my last thought: why are you even using arrays when there's List&lt;T&gt;? =)
Why do we have to call a constructor for each element in an Array?
[ "", "c#", "arrays", "" ]
`round(45.923,-1)` gives a result of 50. Why is this? How is it calculated? *(sorry guys, I was mistaken with an earlier version of this question suggesting the value was 46)*
The SQL ROUND() function rounds a number to a given precision. For example, round(45.65, 1) gives 45.7, while round(45.65, -1) gives 50, because the precision is counted from the decimal point. If the precision is positive, the digit to the right of that position is considered: the value rounds up if that digit is &gt;= 5 and down if it's &lt;= 4. Similarly, if the precision is negative, rounding happens on the left-hand side of the decimal point: for example, round(44.65, -1) gives 40 but round(45.65, -1) gives 50.
ROUND(748.58, -1) gives 750.00. The second parameter, length, is the precision to which numeric_expression is to be rounded. length must be an expression of type tinyint, smallint, or int. When length is a positive number, numeric_expression is rounded to the number of decimal positions specified by length. When length is a negative number, numeric_expression is rounded on the left side of the decimal point, as specified by length. [From](http://msdn.microsoft.com/en-us/library/ms175003.aspx)
SQL Round Function
[ "", "sql", "rounding", "" ]
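To make the negative-length behaviour described above concrete, here is a small Python sketch that approximates SQL Server's ROUND, which rounds halves away from zero. Note that Python's built-in round uses banker's rounding, so it is not an exact stand-in; this helper uses Decimal with ROUND_HALF_UP instead.

```python
from decimal import Decimal, ROUND_HALF_UP

def sql_round(value, length):
    """Approximate SQL Server's ROUND(value, length).

    Positive length rounds to that many decimal places; negative length
    rounds to the left of the decimal point.  Halves round away from zero.
    """
    quantum = Decimal(1).scaleb(-length)  # length=-1 -> Decimal('1E+1'), i.e. 10
    scaled = (Decimal(str(value)) / quantum).quantize(Decimal(1),
                                                      rounding=ROUND_HALF_UP)
    return scaled * quantum

print(sql_round(45.923, -1) == 50)   # True
print(sql_round(748.58, -1) == 750)  # True
print(sql_round(45.65, 1))           # 45.7
```

The negative-length case divides by a power of ten, rounds to a whole number, then scales back up, which matches the "rounds on the left side of the decimal point" description in the answer.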
I have the following regular expression which is used to give me the tags in the HTML string:

```
<[^>]*>
```

So, if I pass in the following:

```
<b> Bold </b>
```

Then it will give me:

```
<b> </b>
```

How can I make it give me:

```
<b> Bold </b>
```

UPDATE: Here is another example to get the big picture. If this is the text:

```
<b>Bold</b> This is the stuff <i>Italic</i>
```

then the final result would be the following:

```
matches[0] = <b>
matches[1] = Bold
matches[2] = </b>
matches[3] = This is the stuff
matches[4] = <i>
matches[5] = Italic
matches[6] = </i>
```
I second the advice not to use regular expressions; HTML can't be properly expressed using a regular language. Better to investigate System.Xml.XmlReader and System.Web.UI.HtmlTextWriter. You should be able to write a function that reads an element from a reader then writes it to a writer; something along the lines of

```
public static string HtmlReformat(string html)
{
    var sw = new StringWriter();
    HtmlTextWriter htmlWriter = new HtmlTextWriter(sw);
    XmlReader rdr = XmlReader.Create(new StringReader(html));
    while (rdr.Read())
    {
        switch (rdr.NodeType)
        {
            case XmlNodeType.EndElement:
                htmlWriter.WriteEndTag(rdr.Name);
                htmlWriter.Write(System.Environment.NewLine);
                break;
            case XmlNodeType.Element:
                htmlWriter.WriteBeginTag(rdr.Name);
                for (int attributeIdx = 0; attributeIdx < rdr.AttributeCount; attributeIdx++)
                {
                    rdr.MoveToAttribute(attributeIdx);
                    htmlWriter.WriteAttribute(rdr.Name, rdr.Value);
                }
                rdr.MoveToElement();
                htmlWriter.Write(">");
                htmlWriter.Write(System.Environment.NewLine);
                break;
            case XmlNodeType.Text:
                htmlWriter.Write(rdr.Value);
                break;
            default:
                throw new NotImplementedException("Handle " + rdr.NodeType);
        }
    }
    return sw.ToString();
}
```

This should give you a base to work from, anyway.
**Do not use regular expressions to parse HTML.** [HTML is not regular](http://welbog.homeip.net/glue/53/XML-is-not-regular), and therefore regex is not at all suited to parsing it. Use an HTML or XML parser instead. There are many (HT|X)ML parsers available online. What language are you using? You're not going to be able to create a regular expression that matches HTML because of the complexity of the language. Regex operates on a class of languages smaller than the class HTML is a member of. Any regex you try to write will be hard to understand and incorrect. Use something like XPath instead. EDIT: You're using C#. Luckily you have an entire [System.Xml namespace](http://msdn.microsoft.com/en-us/library/system.xml(VS.71).aspx) available to you. Also, there are other libraries for parsing HTML specifically if your HTML is not strict.
What manner of regular expression might I use to add line breaks near HTML tags?
[ "", "c#", "html", "regex", "" ]
We're finally migrating our unit test code base from JUnit 3 to JUnit 4. We also make heavy use of JMock 2. With JUnit 3, JMock provides a useful base class for your tests (MockObjectTestCase), which, as well as itself being a subclass of JUnit's TestCase, handles various housekeeping duties regarding the mock framework. It makes life pretty easy for the test class. Now with JUnit 4, JMock provides no such support. Your test class has to manually create a Mockery object, it has to remember to use the correct test runner annotation, and it must delegate all mock-related operations to the mockery. In short, it puts far more responsibility on the test class than was needed for JUnit 3 tests. Now I appreciate that part of JUnit 4's charm is there being no need to subclass anything, but this JMock situation seems like a step backwards and makes porting from 3 to 4 rather more work than should be necessary. Am I missing something? Is there actually a nice way to write my JUnit4/JMock2 test classes without manually adding all that plumbing to every class? I could write my own support base class, of course, but it seems such an obvious omission from the JMock 2 API that I have to wonder if I've missed the point.
---

Edit: here's the source code of what the optional support class would look like:

```
@RunWith(JMock.class)
public class JMockSupport {

    protected final Mockery mockery = new Mockery();

    protected void checking(ExpectationBuilder expectations) {
        mockery.checking(expectations);
    }

    protected <T> T mock(Class<T> typeToMock) {
        return mockery.mock(typeToMock);
    }

    protected <T> T mock(Class<T> typeToMock, String name) {
        return mockery.mock(typeToMock, name);
    }

    protected Sequence sequence(String name) {
        return mockery.sequence(name);
    }

    protected void setDefaultResultForType(Class<?> type, Object result) {
        mockery.setDefaultResultForType(type, result);
    }

    protected void setImposteriser(Imposteriser imposteriser) {
        mockery.setImposteriser(imposteriser);
    }

    protected void setNamingScheme(MockObjectNamingScheme namingScheme) {
        mockery.setNamingScheme(namingScheme);
    }

    protected States states(String name) {
        return mockery.states(name);
    }
}
```

This contains all of the methods that the JUnit 3 MockObjectTestCase class defined, which just echo to the mockery. The @RunWith annotation is there also, to avoid the possibility of forgetting to add it to your test class.
There are also problems with having base classes. In previous versions, I suffered from trying to combine base classes from different test frameworks. That's why we went to composition over inheritance. It'll be interesting to see what we can do with the new @Rule structure.
I've done this migration too, and it is a pain. I can understand why they've binned the base class mechanism; I was trying to juggle JMock base classes with Spring JUnit-enabled base classes, and that obviously doesn't work. Once I embarked on this migration, one area I found for 'optimisation' was creating appropriate Expectation base classes encapsulating common operations on your mock objects, rather than creating a new Expectation object (and instance) for every test. That will save you a little grief.
Lack of support base class in Junit4/Jmock2
[ "", "java", "junit", "jmock", "" ]
I have a table of categories. Each category can either be a root-level category (parent is NULL) or have a parent which is a root-level category. There can't be more than one level of nesting. I have the following table structure:

[Categories Table Structure http://img16.imageshack.us/img16/8569/categoriesi.png](http://img16.imageshack.us/img16/8569/categoriesi.png)

Is there any way I could use a query which produced the following output:

```
Free Stuff
Hardware
Movies
    CatA
    CatB
    CatC
Software
    Apples
    CatD
    CatE
```

So the results are ordered by top-level category, then after each top-level category, subcategories of that category are listed? It's not really ordering by Parent or Name, but a combination of the two. I'm using SQL Server.
Ok, here we go:

```
with foo as (
    select 1 as id, null as parent, 'CatA' as cat from dual union
    select 2, null, 'CatB' from dual union
    select 3, null, 'CatC' from dual union
    select 4, 1, 'SubCatA_1' from dual union
    select 5, 1, 'SubCatA_2' from dual union
    select 6, 2, 'SubCatB_1' from dual union
    select 7, 2, 'SubCatB_2' from dual
)
select child.cat
from foo parent
  right outer join foo child on parent.id = child.parent
order by
  case when parent.id is not null then parent.cat else child.cat end,
  case when parent.id is not null then 1 else 0 end
```

Result:

```
CatA
SubCatA_1
SubCatA_2
CatB
SubCatB_1
SubCatB_2
CatC
```

Edit - solution change inspired by van's ORDER BY! Much simpler that way.
It seems to me like you are looking to flatten and order your hierarchy. The cheapest way to get this ordering would be to store an additional column in the table that holds the full path. So, for example:

Name | Full Path
Free Stuff | Free Stuff
aa2 | Free Stuff - aa2

Once you store the full path, you can order on it. If you only have a depth of one, you can auto-generate a string to this effect with a single subquery (and order on it), but this solution does not work that easily when the hierarchy gets deep. Another option is to move this all over to a temp table and calculate the full path there, on demand. But it is fairly expensive.
SQL - Ordering by multiple criteria
[ "", "sql", "sql-server-2005", "t-sql", "" ]
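The parent-then-children ordering discussed in the answers above can be sketched with a self-join. This is a hedged illustration using Python's sqlite3 with hypothetical data (`id`, `parent`, `name` column names are assumptions); the same COALESCE idea carries over to SQL Server.

```python
import sqlite3

# Hypothetical two-level category table: NULL parent means root level.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE categories (id INTEGER PRIMARY KEY, parent INT, name TEXT);
INSERT INTO categories VALUES
  (1, NULL, 'Movies'), (2, 1, 'CatB'), (3, 1, 'CatA'),
  (4, NULL, 'Software'), (5, 4, 'Apples');
""")

# Sort key 1: the root category's name (a root uses its own name).
# Sort key 2: roots sort before their children.  Sort key 3: child name.
rows = conn.execute("""
SELECT c.name
FROM categories c LEFT JOIN categories p ON c.parent = p.id
ORDER BY COALESCE(p.name, c.name), c.parent IS NOT NULL, c.name
""").fetchall()
print([name for (name,) in rows])
```

Each root comes out first, immediately followed by its alphabetised subcategories, without needing a stored full-path column.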
I have classes which have automatic properties only, like `public string CustomerName { get; set; }`. They are public because they are accessed outside the class. They can also be accessed inside the class. They offer good encapsulation and better debugging: I can put a breakpoint on one if I need to know who is accessing it and when. My question is: what are the disadvantages of using properties only, with no corresponding fields? I can make the setter or getter private, internal, etc., which means I also have the flexibility of scoping it when needed.
Serialization with `BinaryFormatter` - you have **big** problems if you need to change your property to a "regular" property later, for example to add some validation / eventing / etc. - since `BinaryFormatter` uses the field names. And you can't duplicate this, since the field name the compiler generates cannot be written as legal C#. Which is a good reason to look at a contract-based serializer instead. See [this blog entry](http://marcgravell.blogspot.com/2009/03/obfuscation-serialization-and.html) for more info.
You can't create a **truly read-only** property, because you have to define both a setter and a getter. You can only use a private setter to achieve a pseudo-read-only property from the outside. Otherwise, as said above, there are no other disadvantages.
Disadvantages of using properties only with no corresponding fields in .NET?
[ "", "c#", ".net", "vb.net", "" ]
We have a few C++ solutions, and we run some build scripts using batch files that call msbuild.exe for each of the configurations in the solutions. This had been working fine on 3 developer machines and one build machine, but then one of the projects started to hang when linking. This only happens on the newest machine, which is a quad core, 2.8 GHz I think. It runs on Windows Server 2003 and the others are on XP or Vista. This happens consistently even if I change the order of builds in the bat file. If I run the build from the IDE on that machine, it does not hang. Any ideas about what could possibly be causing this? I am using Visual Studio 2008.

---

### Edit:

I see now that when it is hung the following are running:

* link.exe (2 instances; one with large memory usage and one with a small amount of memory usage)
* vcbuild.exe
* msbuild.exe
* vcbuildhelper.exe
* mspdbsrv.exe

---

### Edit:

The exe file exists and so does the pdb file. The exe file is locked by some process, and I can't delete it or move it. I can delete the pdb file, though. I also have the problem if I just use VCBuild.exe. I decided to try debugging the 2 link.exe processes and the mspdbsrv.exe process. When I attached the debugger/MSDev IDE to them, I got a message box saying that the application was deadlocked and/or that "all threads have exited". I guess I will have to check for a service pack for that MSDev install on that machine.

---

### Edit:

In the debug.htm output file I get all sorts of stuff output after the link.exe command is generated. However, for the release BuildLog.htm the link.exe line is the last line. This is clearly a hang in the linker. Definitely a Microsoft bug. I am now trying to figure out what the .rsp (linker response) file is. When I issue:

> link.exe @c:\Release\RSP00000535202392.rsp /NOLOGO /ERRORREPORT:QUEUE

that is the last line in the release build log. The debug one has lots more information after that.
Reinstalling a different version of Visual Studio did not solve the problem. I will open an issue/ticket with Microsoft. I will post an answer if I can.
Whole-program optimization (/GL and /LTCG) and /MP don't mix -- the linker hangs. I raised this on [Connect](https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=366628). The upshot is that it's a confirmed bug in VS2008; contact PSS if you want a hotfix; and the fix is included in VS2010. If you can't wait that long, turn off /MP (slower compiles) or /LTCG (slower code). ![VS2012 screenshot for setting](https://i.stack.imgur.com/8aWe6.png)
Are you using xcopy in your scripts? [This](http://barnabashoward.livejournal.com/20849.html) suggests wrapping xcopy with cmd /c " .. " as a solution. If that wasn't it, I'd recommend to narrow things down by only letting one cpu work (i.e. removing /maxcpucount) This would rule out any form of race condition between compilation processes.
Why is msbuild and link.exe "hanging" during a build?
[ "", "c++", "visual-studio", "msbuild", "batch-file", "linker", "" ]
I am trying to unicast packets to the available networks. There are 3 networks in total. I managed to get packets onto only one network, and I am not able to receive the packets on the different networks, using this code:

```
foreach (var i in System.Net.NetworkInformation.NetworkInterface.GetAllNetworkInterfaces())
{
    foreach (var ua in i.GetIPProperties().UnicastAddresses)
    {
        System.Windows.Forms.MessageBox.Show(ua.Address.ToString());
        IPAddress Tip = IPAddress.Parse(ua.Address.ToString());
        IPEndPoint targetEndPoint = new IPEndPoint(Tip, iTargetPort);
        MyUdpClient sendUdpClient = new MyUdpClient();
        int numBytesSent = sendUdpClient.Send(CombineHeaderBody, CombineHeaderBody.Length, targetEndPoint);
    }
}
```

What is the problem? Thanks.
I think that you're trying to send packets to yourself? Are you sure you're not confusing unicast and multicast addresses?

Ok, so you're not trying to multicast... Each of your network interfaces has an IP address. What you're doing here is sending a packet to your own network card. It is not really a network matter, because your machine most probably knows its own IP addresses and reroutes the packet to 127.0.0.1.
Since you have more than one interface, you are multihomed. For each interface you will have an IP address. So with three interfaces you will have three local IPs. When you use the UdpClient you need to specify which interface to send out on by using its IP. Let's assume you have the following three local IPs:

> 10.1.0.1
> 10.2.0.1
> 10.4.0.1

with a netmask of 255.255.0.0. If you want to send a UDP packet to 10.2.5.5, you need to send it out via 10.2.0.1, so use the following code:

```
IPEndPoint localEndPoint = new IPEndPoint(IPAddress.Parse("10.2.0.1"), 0);
IPEndPoint targetEndPoint = new IPEndPoint(IPAddress.Parse("10.2.5.5"), iTargetPort);

UdpClient sendUdpClient = new UdpClient(localEndPoint);
int numBytesSent = sendUdpClient.Send(CombineHeaderBody, CombineHeaderBody.Length, targetEndPoint);
```

and to send a UDP packet to 10.1.90.5 you need to send it out via 10.1.0.1, so use the following code:

```
IPEndPoint localEndPoint = new IPEndPoint(IPAddress.Parse("10.1.0.1"), 0);
IPEndPoint targetEndPoint = new IPEndPoint(IPAddress.Parse("10.1.90.5"), iTargetPort);

UdpClient sendUdpClient = new UdpClient(localEndPoint);
int numBytesSent = sendUdpClient.Send(CombineHeaderBody, CombineHeaderBody.Length, targetEndPoint);
```

The difference between the two is the localEndPoint and the targetEndPoint.
Problem Trying to unicast packets to available networks
[ "", "c#", "udp", "udpclient", "multihomed", "" ]
I'm running a multithreaded Windows service that needs to call a VB6 DLL. There's no documentation about this VB6 DLL, and this legacy system supports a very critical business process. At first (on the first thread), this DLL performs well. As other threads need access, it starts to provide wrong results. I read one guy saying:

> "Just be careful of one thing if you are using VB6. Your threading model is going to have to change to support apartments if you are running a multithreaded service. VB only supports multiple single-threaded apartments, but .NET runs fully free-threaded normally. The thread that calls into the VB6 DLL needs to be compatible with the DLL."

Another guy from the team gave me the idea of putting this DLL in a separate application domain, but I'm not sure. How can we work with a VB6 DLL called from a multithreaded C# Windows service application?
When the threads come in, are you saving objects and reusing them later on new threads? If you can, create the objects fresh for every thread. We have a situation like this with a data-layer DLL we use. If you create a connection on one thread, it can't be used from another. If you create a new connection on each thread, it works fine.

If it's slow to create your objects, look at the ThreadPool class and the ThreadStatic attribute. Thread pools recycle the same set of threads over and over to do work, and ThreadStatic lets you create an object that exists for one thread only. E.g.:

```
[ThreadStatic]
public static LegacyComObject myObject;
```

As a request comes in, turn it into a job and queue it in your thread pool. When the job starts, check if the static object is initialised:

```
void DoWork()
{
    if (myObject == null)
    {
        // slow initialisation process
        myObject = new ...
    }

    // now do the work against myObject
    myObject.DoGreatStuff();
}
```
You say

> I'm running a multithreaded windows service that need to call a VB6 dll. There's no documentation about this VB6 dll and this legacy system supports a very critical business process.

and at the same time you say

> At first time (1st thread), this dll performs well. As other threads need access, it start provide wrong results.

I'd make very certain that Management is aware of the failure you're seeing, because the code supporting the critical business process is old and undocumented, and is being used in a way it was never intended to be used, and was never tested to be used. I bet it's also never been tested to be used from .NET before, has it?

Here's my suggestion, and this is similar to something I've actually implemented:

The VB6 DLL expects to be called on a single thread. ***Do not disappoint it!***

When your service starts, have it start up a thread of the appropriate type (I can't say which, since I've deliberately forgotten all that STA/MTA stuff). Queue up requests to that thread for access to the VB6 DLL. Have all such access go through the single thread. That way, as far as the VB6 DLL is concerned, it's running exactly as it was tested to run.

---

BTW, this is slightly different from what I've implemented. I had a web service, not a Windows service. I had a C DLL, not VB6, and it wasn't COM. I just refactored all access to the thing into a single class, then put lock statements around each of the public methods.
Call VB6 DLL from a multithreaded c# windows service application?
[ "", "c#", "vb6", ".net-2.0", "integration", "legacy", "" ]
I'm writing a custom Ant task that needs to accept a custom nested type. According to [the Ant manual](http://ant.apache.org/manual/develop.html#nestedtype), I should be able to use addConfigured(TYPE x) rather than addConfiguredTYPE(TYPE x). Also, according to [this article](http://www.oracle.com/technology/pub/articles/bodewig_taskwriters.html) (section "New Reflection rules, Polymorphism in Ant 1.6"), support for addConfigured(TYPE x) was added in Ant 1.6.

```
<taskdef name="custom-task" classname="com.acme.CustomTask">
    <classpath refid="task.classpath" />
</taskdef>
<typedef name="custom-type" classname="com.acme.CustomTask$CustomType">
    <classpath refid="task.classpath" />
</typedef>
...
<custom-task>
    <custom-type/>
</custom-task>
```

The task is implemented in Java:

```
public class CustomTask extends Task {
    ...
    public void addConfigured( CustomType t ) {...}
    ...
    public static class CustomType {...}
}
```

When I try to run the build script, I get the following exception:

```
Build Failed: custom-task doesn't support the nested "custom-type" element.
```

However, when I change

```
<typedef name="custom-type" classname="com.acme.CustomTask$CustomType">
...
<custom-task>
    <custom-type/>
</custom-task>
...
public void addConfigured( CustomType t )
```

to

```
<typedef name="customtype" classname="com.acme.CustomTask$CustomType">
...
<custom-task>
    <customtype/>
</custom-task>
...
public void addConfiguredCustomType( CustomType t )
```

everything works as expected. Is there a reason why the generic addConfigured( TYPE x ) technique does not seem to work in my case? Other people [here](http://groups.google.com/group/comp.lang.java.programmer/browse_thread/thread/9b0a21bfc8c2c0f5) and [here](http://www.coderanch.com/t/419666/Ant-Maven-Other-Build-Tools/Arbitrary-nested-elements-Ant) were having the same problem.

PS: Ant version 1.7.0
Have you tried altering the definition order, i.e. do the typedef first? Not sure if this matters but it's worth trying. Also, have you tried packing this in an antlib? At my company we have many custom tasks and I know that the plain `add` and `addConfigured` methods work. We use antlibs and always define the types first in antlib.xml
Considering the [Ant manual](http://ant.apache.org/manual/develop.html) does specify that:

> The name of the add (`addConfigured`) method must begin with add (`addConfigured`), followed by the element name.

the fact that you renamed your "`addConfigured`" method to "`addConfiguredCustomType`" was the real key to making it work here.
Custom tasks in Ant: addTYPE(TYPE x) vs add(TYPE x) (latter doesn't work)
[ "", "java", "ant", "" ]
I am having an annoying issue. I have a script which uses the $\_GET feature of php to get a variable from the url of the page. However, I need to know how to, first and foremost, make my page url go from <http://www.example.com/dir/dir2/page.php?value=something&anothervalue=somethingelse> to <http://www.example.com/something/dir2/somethingelse/page.php> This is confusing for me. Now, here is the tough part: How do I $\_GET the values of value and anothervalue while leaving any other url values intact at the end of page.php? Also, if a user visits <http://www.example.com/something/dir2/somethingelse/page.php> they will be shown the content for <http://www.example.com/dir/dir2/page.php?value=something&anothervalue=somethingelse> , and vice versa, right? This is very important to the operation of my site, thanks for helping in advance! I don't know anything about .htaccess at all but I do know that it is possible to do this. To clarify, doing this very confusing thing will actually make things much simpler in the long run as it will require much less recoding and will provide cleaner URLS.
This rule should do it:

```
RewriteCond %{REQUEST_URI} !^/dir/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^([^/]+)/([^/]+)/([^/]+)/(.*) dir/$2/$4?value=$1&anothervalue=$3 [L,QSA]
```

It will rewrite requests of `/something/dir2/somethingelse/page.php` internally to `/dir/dir2/page.php?value=something&anothervalue=somethingelse`. But that wouldn't prevent the latter from being requested directly; it just allows an alias.

Redirecting requests of `/dir/dir2/page.php?value=something&anothervalue=somethingelse` externally to `/something/dir2/somethingelse/page.php` is also possible with mod_rewrite, but it requires a slightly more complex rule, since the URL parameters can appear in an arbitrary order:

```
RewriteCond %{REQUEST_FILENAME} -f
RewriteCond %{QUERY_STRING} ^(([^&]*&)*)value=([^&]+)&?([^&].*)?$
RewriteCond %3&%1%4 ^([^&]+)&(([^&]*&)*)anothervalue=([^&]+)&?([^&].*)?$
RewriteRule ^dir/([^/]+)/(.*) /%1/$1/%4/$2?%2%5 [L,R=301]
```

As you see, that's not really nice. A better solution would be to check with PHP what URL has been requested (see [`$_SERVER['REQUEST_URI']`](http://docs.php.net/manual/en/reserved.variables.server.php)) and do the redirect with PHP (see [`header` function](http://docs.php.net/header)).
You're right, it is possible to do that. Others may disagree, but I tend to think that the appropriate thing at the point you're at is to refer you to [the documentation for mod\_rewrite](http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html). If you're going to be maintaining that complex of a set of rewrites, you're going to need to learn about it sooner or later, so it may as well be now.
.htaccess and $_GET rewrite
[ "", "php", ".htaccess", "mod-rewrite", "" ]
```
/**************************************************************************
 *
 * Function: toggleVis
 *
 * Description: Following Function hides and expands the main column.
 *
 *
 ***************************************************************************/

// Set the default "show" mode to that specified by W3C DOM
// compliant browsers
var showMode = 'table-cell';

// However, IE5 at least does not render table cells correctly
// using the style 'table-cell', but does when the style 'block'
// is used, so handle this
if (document.all)
    showMode='block';

// This is the function that actually does the manipulation
var States = { };

function toggleVis(col){

    if (!States[col] || States[col].IsOpen == null) {
        States[col] = {IsOpen : true}; // This assumes the cell is already shown
        //States[col] = {IsOpen : false}; // This assumes the cell is already hidden
    }

    //mode = States[col].IsOpen ? showMode : 'none';
    mode = States[col].IsOpen ? 'none' : showMode; //starts from closed, next click need open

    cells = document.getElementsByName(col);
    for(j = 0; j < cells.length; j++)
        cells[j].style.display = mode;

    States[col].IsOpen = !States[col].IsOpen;
}
```

This function hides and displays a column of an HTML table. When I call this function the object States toggles accordingly, true if expanded, false if hidden or none. After the function is executed once, what saves the last state of States so that it can be used in this function, when called again? Is it because the object States{} is declared as a global?
Yes. You define `States` in the outermost closure, which means it's actually a property of the `window` object as well, that is, `window.States` === `States`. However, were you to define a function like ``` function foo(param) { var States = param; } ``` it would not affect the global States variable, since you are defining it anew as a local for that function. (But you can access the global States variable, too, by using `window.States` within that function.)
Absolutely correct. `States` is declared in the global namespace and is available to all JavaScript functions (that don't hide it with a variable of the same name). It will retain its value outside of any function that uses it.
How does global Javascript object save state?
[ "", "javascript", "html", "dom", "" ]
I am working on writing a network application in C++ on the Linux platform using the typical sockets API, and I am looking at 2 alternative ways of writing a byte array to a TCP stream: either by calling write(), or by calling send(). I know that, since this is Linux, the socket handle is simply a file descriptor, and therefore it is valid to perform read() and write() calls on the socket, however the sockets API also provides the send() and recv() functions to perform the same tasks. I am therefore wondering if there is any particular reason to choose one class of functions over the other - are the send/recv functions optimized for network writing/reading, do they perform better, etc? Or is it really arbitrary which functions I use? Do read() and write() behave properly in all cases? Thanks for any insights!
There should be no difference. Quoting from `man 2 send`:

> The only difference between `send()` and `write()` is the presence of flags. With zero flags parameter, `send()` is equivalent to `write()`.

So long as you don't need to specify any flags for `send()`, you can use `write()` freely.
`recv` and `send` allow you to specify flags, such as for out-of-band packets. If you don't need to specify the flags, `read` and `write` are perfectly adequate.
Performance impact of using write() instead of send() when writing to a socket
[ "", "c++", "sockets", "network-programming", "" ]
I often find myself wanting to debug CSS layout issues that involve DOM changes caused by Javascript in reaction to a hover event or different CSS rules applying due to the :hover selector. Normally, I'd use Firebug to inspect the element that's giving me trouble and see what its CSS properties were, and where those properties come from. However, when hovering is involved, it becomes impossible, because as soon as you move your mouse down to the Firebug panel, the elements you're interested in are no longer hovered, the CSS rules that apply are different, and (in the case of JS hovers) the DOM is changed. Is there any way I can "freeze" the state of the DOM and application of :hover in order to inspect the DOM as it was **during** a hover event? Any other ideas on how to debug this sort of issue are welcome, of course.
Add an `onmouseover` function handler to the element that is taking the `:hover`. Inside that function, call `console.info(element)` on whichever element you'd like to know about. ``` myHoverElement.onmouseover = function() { console.info(document.getElementById("someotherelementofinterest")); }; ``` When you run this with firebug active, the element will be available to inspect in the firebug console.
You can do this in Firebug, but it's a little "buggy". If you inspect the element and then click off the HTML tab (to the DOM tab, for instance), when you go back to the HTML tab the "Style" CSS tab on the right will have a drop-down arrow selector where you can set the :hover state of that element to be active. It sucks to have to switch tabs to get it to show, but it works for me.
How to debug CSS/Javascript hover issues
[ "", "javascript", "html", "css", "firebug", "hover", "" ]
Just curious. How does the function Sleep() (declared in windows.h) actually work? Maybe not just that implementation, but any of them. What I mean is: how is it implemented? How can it make the code "stop" for a specific time? Also curious about how cin >> and the like actually *work*. What do they do exactly? The only way I know how to "block" something from continuing to run is with a while loop, but considering that that takes a huge amount of processing power in comparison to what's happening when you're invoking methods to read from stdin (just compare a while (true) to a read from stdin), I'm guessing that isn't what they do.
The OS uses a mechanism called a scheduler to keep all of the threads or processes it's managing behaving nicely together. Several times per second, the computer's hardware clock interrupts the CPU, which causes the OS's scheduler to become activated. The scheduler then looks at all the processes that are trying to run and decides which one gets to run for the next time slice. The things it uses to decide depend on each process's state, and how much time it has had before. So if the current process has been using the CPU heavily, preventing other processes from making progress, it will make the current process wait and swap in another process so that it can do some work.

More often, though, most processes are going to be in a wait state. For instance, if a process is waiting for input from the console, the OS can look at the process's information and see which I/O ports it's waiting for. It can check those ports to see if they have any data for the process to work on. If they do, it can start the process up again, but if there is no data, then that process gets skipped over for the current timeslice.

As for `sleep()`, any process can notify the OS that it would like to wait for a while. The scheduler will then be activated even before a hardware interrupt (which is also what happens when a process tries to do a blocking read from a stream that has no data ready to be read), and the OS makes a note of what the process is waiting for. For a sleep, the process is waiting for an alarm to go off, or it may just yield again each time it's restarted until the timer is up. Since the OS only resumes processes after something causes it to preempt a running process, such as the process yielding or the hardware timer interrupt I mentioned, `sleep()` is not very accurate; how accurate it is depends on the OS and hardware, but it's usually on the order of one or more milliseconds.

If more accuracy is needed, or for very short waits, the only option is to use the busy loop construct you mentioned.
The operating system schedules how processes run (which processes are eligible to run, in what order, ...). `Sleep()` probably issues a system call which tells the kernel “don't let me use the processor for *x* milliseconds”.
C++ - how does Sleep() and cin work?
[ "", "c++", "windows", "multithreading", "blocking", "scheduler", "" ]
I need to take the first N rows for each group, ordered by custom column. Given the following table: ``` db=# SELECT * FROM xxx; id | section_id | name ----+------------+------ 1 | 1 | A 2 | 1 | B 3 | 1 | C 4 | 1 | D 5 | 2 | E 6 | 2 | F 7 | 3 | G 8 | 2 | H (8 rows) ``` I need the first 2 rows (ordered by **name**) for each **section\_id**, i.e. a result similar to: ``` id | section_id | name ----+------------+------ 1 | 1 | A 2 | 1 | B 5 | 2 | E 6 | 2 | F 7 | 3 | G (5 rows) ``` I am using PostgreSQL 8.3.5.
New solution (PostgreSQL 8.4) ``` SELECT * FROM ( SELECT ROW_NUMBER() OVER (PARTITION BY section_id ORDER BY name) AS r, t.* FROM xxx t) x WHERE x.r <= 2; ```
Since v9.3 you can do a lateral join ``` select distinct t_outer.section_id, t_top.id, t_top.name from t t_outer join lateral ( select * from t t_inner where t_inner.section_id = t_outer.section_id order by t_inner.name limit 2 ) t_top on true order by t_outer.section_id; ``` It [might be faster](http://charlesnagy.info/it/postgresql/group-by-limit-per-group-in-postgresql) but, of course, you should test performance specifically on your data and use case.
Grouped LIMIT in PostgreSQL: show the first N rows for each group?
[ "", "sql", "postgresql", "" ]
This may be something common and trivial, but I seem to be having trouble finding a concrete answer. In C# there is a concept of delegates, which relates strongly to the idea of function pointers from C++. Is there similar functionality in Java? Given that pointers are somewhat absent, what is the best way to go about this? And to be clear, we're talking first class here.
The Java idiom for function-pointer-like functionality is an an anonymous class implementing an interface, e.g. ``` Collections.sort(list, new Comparator<MyClass>(){ public int compare(MyClass a, MyClass b) { // compare objects } }); ``` **Update:** the above is necessary in Java versions prior to Java 8. Now we have much nicer alternatives, namely lambdas: ``` list.sort((a, b) -> a.isGreaterThan(b)); ``` and method references: ``` list.sort(MyClass::isGreaterThan); ```
You can substitute a function pointer with an interface. Let's say you want to run through a collection and do something with each element.

```
public interface IFunction {
  public void execute(Object o);
}
```

This is the interface we could pass to, say, CollectionUtils2.doFunc(Collection c, IFunction f).

```
public static void doFunc(Collection c, IFunction f) {
   for (Object o : c) {
      f.execute(o);
   }
}
```

As an example, say we have a collection of numbers and you would like to add 1 to every element.

```
CollectionUtils2.doFunc(numbers, new IFunction() {
   public void execute(Object o) {
      Integer anInt = (Integer) o;
      anInt++;
   }
});
```

(Note that `anInt++` only increments the local unboxed copy; to really change the collection, `execute` would have to return the new value and `doFunc` would have to write it back.)
Function Pointers in Java
[ "", "java", "pointers", "delegates", "function-pointers", "" ]
I need two methods, one to encrypt and one to decrypt an XML file with the key "hello world"; the key "hello world" should be used to encrypt and decrypt the XML file. These methods should work on all machines!!! Any encryption method will do. XML file contents below:

```
<root>
 <lic>
  <number>19834209</number>
  <expiry>02/02/2002</expiry>
 </lic>
</root>
```

Can someone give me a sample? The issue is that the MSDN sample encryptions produce an encrypted XML file, but when I decrypt it on another machine it doesn't work. For example I tried this sample: [How to: Encrypt XML Elements with Asymmetric Keys](http://msdn.microsoft.com/en-us/library/ms229746.aspx), but there is some kind of session involved there, and on another machine it says "bad data"!
If you want the same key for encrypting and decrypting you should use a symmetric method (that's the definition, really). Here's the closest one to your sample (same source).

<http://msdn.microsoft.com/en-us/library/sb7w85t6.aspx>

The posted sample isn't working because they aren't using the same keys. Not only on different machines: running the program on the same machine twice should not work either (didn't work for me), because they use different random keys every time.

Try adding this code after creating your key:

```
key = new RijndaelManaged();
string password = "Password1234"; //password here

byte[] saltBytes = Encoding.UTF8.GetBytes("Salt"); // salt here (another string)
var p = new Rfc2898DeriveBytes(password, saltBytes);
//TODO: think about number of iterations (third parameter)

// sizes are divided by 8 because [ 1 byte = 8 bits ]
key.IV = p.GetBytes(key.BlockSize / 8);
key.Key = p.GetBytes(key.KeySize / 8);
```

Now the program is using the same key and initialization vector, and Encrypt and Decrypt should work on all machines.

Also, consider renaming `key` to `algorithm`, otherwise this is very misleading. I'd say it's a bad, not-working-well example from MSDN.

**NOTE**: `PasswordDeriveBytes.GetBytes()` has been deprecated because of serious (security) issues within the [`PasswordDeriveBytes`](http://msdn.microsoft.com/en-us/library/system.security.cryptography.rfc2898derivebytes.aspx) class. The code above has been rewritten to use the safer `Rfc2898DeriveBytes` class instead (PBKDF2 instead of PBKDF1). Code generated with the above using `PasswordDeriveBytes` may be compromised. See also: [Recommended # of iterations when using PKBDF2-SHA256?](https://security.stackexchange.com/q/3959/17306)
First of all, if you want to use the same key for encrypting and decrypting, you should look at **symmetric cryptography**. Asymmetric cryptography is when the keys for encrypting and decrypting are different. Just so that you know - RSA is asymmetric, TripleDES and Rijndael are symmetric. There are others too, but .NET does not have default implementations for them.

I'd advise studying the `System.Security.Cryptography` namespace and learning a bit about all that stuff. It has all you need to encrypt and decrypt files, as well as generate a password. In particular, you might be interested in these classes:

* `CryptoStream`
* `PasswordDeriveBytes`
* `RijndaelManaged`

There are also usage examples in MSDN for each of them. You can use these classes to encrypt any file, not just XML.

If however you want to encrypt just a select few elements, you can take a look at the `System.Security.Cryptography.Xml` namespace. I see you've already found one article about it. Keep following the links on that page and you will learn more about those classes.
C# Encrypt an XML File
[ "", "c#", "encryption", "" ]
> **Note** this question was originally posted in 2009, before C++11 was ratified and before the meaning of the `auto` keyword was drastically changed. The answers provided pertain *only* to the C++03 meaning of `auto` -- that being a storage class specified -- and not the C++11 meaning of `auto` -- that being automatic type deduction. If you are looking for advice about when to use the C++11 `auto`, this question is not relevant to that question. For the longest time I thought there was no reason to use the `static` keyword in C, because variables declared outside of block-scope were implicitly global. Then I discovered that declaring a variable as `static` within block-scope would give it permanent duration, and declaring it outside of block-scope (in program-scope) would give it file-scope (can only be accessed in that compilation unit). So this leaves me with only one keyword that I (maybe) don't yet fully understand: The `auto` keyword. Is there some other meaning to it other than 'local variable?' Anything it does that isn't implicitly done for you wherever you may want to use it? How does an `auto` variable behave in program scope? What of a `static auto` variable in file-scope? Does this keyword have any purpose other than *just existing for completeness*?
`auto` is a storage class specifier, `static`, `register` and `extern` too. You can only use one of these four in a declaration. Local variables (without `static`) have automatic storage duration, which means they live from the start of their definition until the end of their block. Putting auto in front of them is redundant since that is the default anyway. I don't know of any reason to use it in C++. In old C versions that have the implicit int rule, you could use it to declare a variable, like in: ``` int main(void) { auto i = 1; } ``` To make it valid syntax or disambiguate from an assignment expression in case `i` is in scope. But this doesn't work in C++ anyway (you have to specify a type). Funny enough, the C++ Standard writes: > An object declared without a storage-class-specifier at block scope or declared as a function parameter has automatic storage duration by default. [Note: hence, the auto specifier is almost always redundant and not often used; one use of auto is to distinguish a declaration-statement from an expression-statement (6.8) explicitly. — end note] which refers to the following scenario, which could be either a cast of `a` to `int` or the declaration of a variable `a` of type `int` having redundant parentheses around `a`. It is always taken to be a declaration, so `auto` wouldn't add anything useful here, but would for the human, instead. But then again, the human would be better off removing the redundant parentheses around `a`, I would say: ``` int(a); ``` With the new meaning of `auto` arriving with C++0x, I would discourage using it with C++03's meaning in code.
In C++11, `auto` has new meaning: it allows you to automatically deduce the type of a variable. Why is that ever useful? Let's consider a basic example: ``` std::list<int> a; // fill in a for (auto it = a.begin(); it != a.end(); ++it) { // Do stuff here } ``` The `auto` there creates an iterator of type `std::list<int>::iterator`. This can make some seriously complex code much easier to read. Another example: ``` int x, y; auto f = [&]{ x += y; }; f(); f(); ``` There, the `auto` deduced the type required to store a lambda expression in a variable. Wikipedia has good [coverage on the subject.](http://en.wikipedia.org/wiki/C++11#Type_inference)
Is there any reason to use the 'auto' keyword in C++03?
[ "", "c++", "keyword", "auto", "c++03", "" ]
I need to insert about 1.8 million rows from a CSV file into a MySQL database. (only one table) Currently using Java to parse through the file and insert each line. As you can imagine this takes quite a few hours to run. (10 roughtly) The reason I'm not piping it straight in from the file into the db, is the data has to be manipulated before it adds it to the database. This process needs to be run by an IT manager in there. So I've set it up as a nice batch file for them to run after they drop the new csv file into the right location. So, I need to make this work nicely by droping the file into a certain location and running a batch file. (Windows enviroment) My question is, what way would be the fastest way to insert this much data; large inserts, from a temp parsed file or one insert at a time? some other idea possibly? The second question is, how can I optimize my MySQL installation to allow very quick inserts. (there will be a point where a large select of all the data is required as well) Note: the table will be eventually droped and the whole process run again at a later date. Some clarification: currently using ...opencsv.CSVReader to parse the file then doing an insert on each line. I'm concating some columns though and ignoring others. More clarification: Local DB MyISAM table
Tips for fast insertion:

* Use the [LOAD DATA INFILE](http://dev.mysql.com/doc/refman/5.7/en/load-data.html) syntax to let MySQL parse it and insert it, even if you have to mangle it and feed it after the manipulation.
* Use this insert syntax:

  insert into table (col1, col2) values (val1, val2), (val3, val4), ...
* Remove all keys/indexes prior to insertion.
* Do it on the fastest machine you've got (IO-wise mainly, but RAM and CPU also matter). That goes for both the DB server and the inserting client; remember you'll be paying the IO price twice (once reading, once inserting).
I'd probably pick a large number, like 10k rows, and load that many rows from the CSV, massage the data, and do a batch update, then repeat until you've gone through the entire csv. Depending on the massaging/amount of data, 1.8 mil rows shouldn't take 10 hours, more like 1-2 hours depending on your hardware.

edit: whoops, left out a fairly important part, your con has to have autocommit set to false, the code I copied this from was doing it as part of the GetConnection() method. And since autocommit is off, you also have to commit once the batch has executed.

```ahead>
<doc_info>
  summary: Documentation for a mid-sized Python library ("flowdag") that builds and executes dataclass-based DAG pipelines. Sections cover node definition, scheduling semantics, the C extension for hot loops, and serialization. Prose chunks alternate with API tables and fenced examples.
  style_profile: Markdown with ATX headings, fenced code blocks (```python), pipe tables for API summaries; second-person instructional voice; en-dash ranges; serial comma; ~90-char soft wrap.
  details: Chunk 7 of 9; opens mid-section inside "Scheduler internals" (a numbered walkthrough of the ready-queue algorithm, items 3–6) and ends cleanly at the close of "Serialization hooks".
  quality: 6/10 — walkthrough items 4–5 reference a `prio_bucket` field renamed to `priority_class` two chunks earlier (stale name); one table row has merged cells from PDF extraction; two paragraphs duplicated verbatim near the serialization section; equations for the backoff timer lost their exponent formatting (2k−1 rendered as "2k 1"); several code samples lack the imports needed to run standalone.
</doc_info>
<chunk_info>
  chunk: 7 of 9
  verbatim_opening_lines: |
    3. When a node's last unfinished dependency resolves, the scheduler moves it from
       the waiting set into the ready queue. The move is O(log n) because the queue is
</chunk_info>
</ahead>
```

Wait — that fenced block after my answer is part of the next section... (ignore; not part of this field)

Correction, here is the batch code with the commit added:

```
Connection con = GetConnection();
con.setAutoCommit(false);
try{
    PreparedStatement ps = con.prepareStatement("INSERT INTO table(col1, col2) VALUES(?, ?)");
    try{
        for(Data d : massagedData){
            ps.setString(1, d.whatever());
            ps.setString(2, d.whatever2());
            ps.addBatch();
        }
        ps.executeBatch();
        con.commit(); // required since autocommit is off
    }finally{
        ps.close();
    }
}finally{
    con.close();
}
```
MySQL Inserting large data sets from file with Java
[ "", "java", "mysql", "" ]
My company is now supporting Oracle for the first time, and most of my colleagues are SQL Server developers who haven't really worked with Oracle. I'm a little bit familiar with Oracle, but far from a genius on the subject. Unfortunately, that is enough to mean that I know more about Oracle than most of my co-workers, so I find myself constantly struggling to explain concepts I'm still learning myself in terms that people who aren't familiar with Oracle at all can understand. The biggest problem that I run into is in knowing how SQL Server terminology translates into Oracle terminology. I understand that there's not a direct 1:1 mapping of terms, but it would be helpful to be able to know how to put Oracle concepts into SQL Server terminology and vice-versa when talking to my SQL Server-centric co-workers. Can someone tell me the best way to put these SQL Server terms into Oracle terminology? * database * instance * schema And conversely, how to explain these Oracle concepts in SQL Server terms? * tablespace * database * listener * service name * SID
Oracle/SQLServer: * tablespace = doesn't exist * database = doesn't exist * instance = instance * schema = database * listener = doesn't exist * service name = database name * SID = database name
I don't know anything about SQL Server, so what I can do is try to explain what

* tablespace
* database
* listener
* service name
* SID

are.

A **database** is the collection of files that all together make up the database. These can be data files, but also parameter and password files. The database is serviced by one or more **instance[s]**. An instance is the interface between client processes and the database. In a parallel server configuration - RAC - there can be multiple instances active on the same database.

A **tablespace** is a logical storage unit within the database; it groups one or more data files in which the actual table and index data are stored.

A **service** is a connection handle to a database. A service can be serviced by multiple instances, and every instance can serve multiple services. Services are used to make the combination of applications in one database possible. Every application connects to its own service, ending up in an instance of the database.

A **listener** is the process that listens for clients that want to connect to an instance or service. The listener hands the connection request over to an instance background process that performs the database interactions for the client.

A **SID** or **ORACLE\_SID** is just the name of an instance.

The [Concepts manual](http://download.oracle.com/docs/cd/B28359_01/server.111/b28318/toc.htm) will be of great help.

I hope this helps. [Ronald](http://ronr.blogspot.com)
Help me put Oracle terminology into SQL Server terminology
[ "", "sql", "sql-server", "oracle", "terminology", "rdbms", "" ]
I have a requirement to write HTML to the file system and I was wondering if there is any speed boost in converting it to bytes and writing it using a FileStream rather than using the File.WriteAllText() (or a similar text method).
[File.WriteAllText](http://msdn.microsoft.com/en-us/library/ms143376.aspx) uses a StreamWriter behind the scenes: ``` public static void WriteAllText(string path, string contents, Encoding encoding) { using (StreamWriter writer = new StreamWriter(path, false, encoding)) { writer.Write(contents); } } ``` You have a string already, so converting it to a byte array would be pointless, because this is performed inside the [StreamWriter.Flush](http://msdn.microsoft.com/en-us/library/system.io.streamwriter.flush.aspx) method anyway using its underlying [Encoder](http://msdn.microsoft.com/en-us/library/system.text.encoder(VS.71).aspx) class. Flush is called by the [StreamWriter.Dispose](http://msdn.microsoft.com/en-us/library/system.io.streamwriter.dispose.aspx) method which the using clause calls. This is a snippet from the source of Flush via reflector: ``` int count = this.encoder.GetBytes(this.charBuffer, 0, this.charPos, this.byteBuffer, 0, flushEncoder); ``` You can see it has a charBuffer. That's just a char[] array which it writes to when you perform [StreamWriter.Write(string)](http://msdn.microsoft.com/en-us/library/ce2kyyb4.aspx). So in short, you already have the string, just let the shorter File method perform the cascading calls, as it makes your source slightly more readable. Converting it to a byte array is also not necessary, as I mentioned the StreamWriter does it for you. If you're getting conversion issues then use Encoding.Unicode as the final parameter in the 3 parameter overload, as [File.WriteAllText(string,string)](http://msdn.microsoft.com/en-us/library/ms143375.aspx) uses UTF8 without a BOM.
What do you think happens within File.WriteAllText? In the end, it is going to be bytes hitting the disk. That disk is not unlikely to be the slow part in this chain anyway. Unless the data being written is very large, I would not worry about it, and just use the implementation that I felt was most convenient. If you design the code well it will not be a major thing to change later, if needed.
What is quicker, writing text to a file or converting to bytes and writing it to a file?
[ "", "c#", "text", "byte", "" ]
So earlier I asked a question about removing the label that Django forms have by default. That worked out great, and I removed the label. However, the text that is generated by the form is still there! I would very much like to remove the text. Here is what I mean: ``` <p>Text: <textarea rows="10" cols="40" name="text"></textarea></p> ``` I would like to remove the Text: part of this, as I do not want it. Again, it is generated with the form I create via: ``` {{ form.as_p }} ``` Here is the model I use for my form: ``` class CommentForm(forms.Form): comment = forms.CharField(widget=forms.Textarea()) ``` EDIT: So far, I've looked at all of the documentation about the label tag and what stuff Forms generate. Apparently, this is possible to remove, it just does not tell me how. Also, I can remove the colon by adding: ``` label_suffix=None ``` I have now also tried label, label\_tag, label\_prefix, prefix, both in the form constructor and the charField constructor. Nothing. As a parameter in the constructor, but this is not enough. Anyone know how to fix this one? EDIT2: I have changed around how the form is done: ``` class CommentForm(forms.Form): comment = forms.Textarea() ``` It's only that now. This means the Textarea is the problem. What parameter can I pass in the textarea or to the form that will remove the aforementioned problem?
The answer:

```
class CommentForm(forms.Form):
    comment = forms.CharField(widget=forms.Textarea(), label='')
```

Also, pass no auto\_id in the constructor when creating the object; it should be left as:

```
comment = CommentForm()
```
Have you tried: ``` class CommentForm(forms.Form): comment = forms.CharField(widget=forms.Textarea(), label=None) ``` ?
How Do I Remove Text From Generated Django Form?
[ "", "python", "django", "forms", "textarea", "" ]
I have a class like this

```
public class SomeClass
{
    private List<string> _strings = new List<string>();

    public IEnumerable<string> Strings
    {
        get { return _strings; }
    }
}
```

How would I do the mapping for \_strings? I tried this, but it complains about the List typehandler not being found, which it doesn't complain about if I map it as an object.

```
<result property="_strings" column="value" />
```

So I searched Google and found this workaround (originally for a Java issue, no idea if it's supposed to work in C#)

```
<result property="_strings" resultMapping="someMapping.StringList"/>

<resultMap id="StringList" class="System.String">
  <result property="" column="Value"/>
</resultMap>
```

This at least lets the test run, and it returns the rest of my object fine, and my list has the right number of entries, except they're all blank. I think the problem is that the property attribute is blank, but I'm not sure what's supposed to go there. (I also tried using 'value', but that didn't work either). This seems like it should be a lot simpler and I'm just overlooking something obvious. Thanks.
Since no solution was found I just went with a method I'm not particularly proud of. I mapped a class that had no other property other than a string value.

```
public class StringValue
{
    public String Name { get; set; }
}

<resultMap id="StringList" class="StringValue">
  <result property="Name" column="Value"/>
</resultMap>
```

iBatis seems to have no problem with that, just with mapping to a collection of strings.
Use auto result-mapping of IBatis. This is the solution in Java, which you can easily map to C#. This is your sql map:

```
<sqlMap namespace="Users">
  <select id="names" resultClass="java.lang.String">
    select first_name as firstName from user
  </select>
</sqlMap>
```

And then you can call it like this:

```
List<String> userNames = (List<String>)sqlMap.queryForList("Users.names");
```

So you don't have to create a custom type with one property to do that.
How do you map a List<string> in iBatis?
[ "", "c#", "mapping", "ibatis.net", "" ]
I am working on a project where I will let users submit small 'scripts' to the server, and I will execute those scripts. There are many scripting languages which can be embedded into a Java program, such as mvel, ognl, uel, clojure, rhino javascript, etc., but, as far as I can tell, they all allow the script writer to call Java constructors, static methods, etc.

I don't want my users to be able to call anything which I don't provide them (usually through some sort of context object). Most of their scripts will be arithmetic and logical expressions, and in some cases they will need to traverse object properties (getters/setters) or the contents of a Map. I just don't want them to escape the sandbox I provide them. Any suggestions?
I think you can achieve this through using a [security](http://java.sun.com/j2se/1.4.2/docs/guide/security/spec/security-spec.doc1.html) [policy](http://java.sun.com/j2se/1.4.2/docs/guide/security/PolicyFiles.html).
Just:

```
//Remember old one
ClassLoader orginalClassLoader = Thread.currentThread().getContextClassLoader();

//Set my classloader
ClassLoader myClassLoader = new SecureMVELClassLoader();
Thread.currentThread().setContextClassLoader(myClassLoader);
System.out.println(MVEL.eval("new com.myapp.insecure.InsecureClass()"));

//Set back to original classloader
Thread.currentThread().setContextClassLoader(orginalClassLoader);
```

and in my classLoader:

```
public class SecureMVELClassLoader extends ClassLoader {
    @Override
    public Class<?> loadClass(String name) throws ClassNotFoundException {
        //some filter logic here
        if (name.startsWith("com.myapp.insecure."))
            throw new ClassNotFoundException();

        return super.loadClass(name);
    }
}
```
java expression language that can't access 'unsafe' java methods
[ "", "java", "scripting", "" ]
I'm iterating a list and getting hold of 2 fields, "Name" and "Url". I want to extract these fields and use them to create a datasource. I want to databind these to a DropDownList:

```
DropDownList.DataTextField = "Name";
DropDownList.DataValueField = "Url";
```

How can I create a datasource based on this list, then do the above, then databind?
Assuming that you have a list item named MyListItem and MyListItem has two properties, Name and Url, you can bind a list of MyListItem like that:

```
List<MyListItem> dataSource = new List<MyListItem>();

MyListItem item1 = new MyListItem();
item1.Name = "Name 1";
item1.Url = "Url 1";
dataSource.Add(item1);

MyListItem item2 = new MyListItem();
item2.Name = "Name 2";
item2.Url = "Url 2";
dataSource.Add(item2);

dropDownList.DataSource = dataSource;
dropDownList.DataTextField = "Name";
dropDownList.DataValueField = "Url";
dropDownList.DataBind();
```
Are you using .NET 3.5? Then go with an anonymous type, like:

```
var ds = from item in items
         select new { Name = item.Name, Url = item.Url };
dropdownlist.DataSource = ds;
...etc
```
create a datasource from a list
[ "", "c#", "" ]
There are two things that seem to be popular nowadays, and I was wondering what the pros and cons are of using something like this: <http://codeigniter.com/user_guide/database/active_record.html>?

Another thing is ORM (Doctrine, for instance). What are the benefits of using these?
ActiveRecord is a pattern common in ORMs. Doctrine is an ORM which uses an ActiveRecord'ish style. Some benefits of using tools like Doctrine: * Database independence: The code should be easy to port to different DBs. For example, I often test using SQLite and use MySQL or Postgre in production with no changes in code. * They reduce the amount of code you have to write: A large part of application code deals with communicating with the database. An ORM takes care of most of that, so you can concentrate on writing the actual app. Of course, they don't come without disadvantages: * Doctrine is heavy so it is slower than using straight SQL * ORMs can be complex, adding some weight to what you have to learn, and they can sometimes be difficult to understand for inexperienced programmers
You can take a look at these questions though they're not exactly PHP specific: * [Are there good reasons not to use an ORM?](https://stackoverflow.com/questions/194147/are-there-good-reasons-not-to-use-an-orm) * [Using an ORM or plain SQL?](https://stackoverflow.com/questions/494816/using-an-orm-or-plain-sql)
ORM and Active Record Pattern in PHP?
[ "", "php", "design-patterns", "activerecord", "orm", "" ]
Any tips on combining multiple .js files into 1 (for a build process). Will yuicompressor do this?
YUI Compressor can do this very easily: just pass the file names to [YUI as arguments](http://www.julienlecomte.net/yuicompressor/README) and it will output them as one file. For example:

```
> yuicompressor-2.4.2 file1.js file2.js file3.js -o combined.min.js
```
There's always the old standby (without compression like YUI compressor) `cat file1.js file2.js > newfile.js`
combining multiple .js files into one in a build process
[ "", "javascript", "performance", "" ]
I have been developing an application with django and elementtree, and while deploying it to the production server I have found out it is running python 2.4. I have been able to bundle elementtree, but now I am getting the error:

```
"No module named expat; use SimpleXMLTreeBuilder instead"
```

Unfortunately I cannot upgrade python, so I'm stuck with what I've got. How do I use SimpleXMLTreeBuilder as the parser, and/or will I need to rewrite code?
If you have a third party module that wants to use ElementTree (and XMLTreeBuilder by dependency), you can change ElementTree's XMLTreeBuilder definition to the one provided by SimpleXMLTreeBuilder like so:

```
from xml.etree import ElementTree # part of python distribution
from elementtree import SimpleXMLTreeBuilder # part of your codebase
ElementTree.XMLTreeBuilder = SimpleXMLTreeBuilder.TreeBuilder
```

Now ElementTree will always use the SimpleXMLTreeBuilder whenever it's called. See also: <http://groups.google.com/group/google-appengine/browse_thread/thread/b7399a91c9525c97>
We ran into this same problem using python version 2.6.4 on CentOS 5.5. The issue happens when the expat class attempts to load the pyexpat modules, see /usr/lib64/python2.6/xml/parsers/expat.py Looking inside of /usr/lib64/python2.6/lib-dynload/, I didn't see the "pyexpat.so" shared object. However, I *did* see it on another machine, which wasn't having the problem. I compared the python versions (yum list 'python\*') and identified that the properly functioning machine had python 2.6.5. Running 'yum update python26' fixed the issue for me. If that doesn't work for you and you want a slapstick solution, you can copy the SO file into your dynamic load path.
Using SimpleXMLTreeBuilder in elementtree
[ "", "python", "django", "elementtree", "" ]
I have an array of integers which represents an RGB image and would like to convert it to a byte array and save it to a file.

What's the best way to convert an array of integers to an array of bytes in Java?
As [Brian](https://stackoverflow.com/a/1086067/1536976) says, you need to work out what sort of conversion you need.

Do you want to save it as a "normal" image file (jpg, png etc)? If so, you should probably use the [Java Image I/O](http://docs.oracle.com/javase/7/docs/technotes/guides/imageio/) API.

If you want to save it in a "raw" format, the order in which to write the bytes must be specified, and then use an `IntBuffer` and NIO.

As an example of using a ByteBuffer/IntBuffer combination:

```
import java.nio.*;
import java.net.*;

class Test
{
    public static void main(String [] args) throws Exception // Just for simplicity!
    {
        int[] data = { 100, 200, 300, 400 };

        ByteBuffer byteBuffer = ByteBuffer.allocate(data.length * 4);
        IntBuffer intBuffer = byteBuffer.asIntBuffer();
        intBuffer.put(data);

        byte[] array = byteBuffer.array();

        for (int i=0; i < array.length; i++)
        {
            System.out.println(i + ": " + array[i]);
        }
    }
}
```
Maybe use this method:

```
byte[] integersToBytes(int[] values) throws IOException
{
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    DataOutputStream dos = new DataOutputStream(baos);
    for(int i=0; i < values.length; ++i)
    {
        dos.writeInt(values[i]);
    }

    return baos.toByteArray();
}
```
How to convert int[] to byte[]
[ "", "java", "arrays", "type-conversion", "" ]
I have a text input:

```
<input type="text" onkeydown="processText(this)" />
```

I have a processing function:

```
function processText(sender) {
    console.log(sender.value);
    /// processing....
}
```

But when I check my value, its content hasn't been updated yet. How can I do that?
Use onkeyup instead:

```
<input type="text" onkeyup="processText(this)" />
```
try onkeyup <http://www.w3schools.com/jsref/jsref_onkeyup.asp> Josh
How can I process the new content in a text box after the onkeydown event?
[ "", "javascript", "events", "keyboard", "" ]
I have the following code to do this, but how can I do it better? Right now I think it's better than nested loops, but it starts to get Perl-one-linerish when you have a generator in a list comprehension.

```
day_count = (end_date - start_date).days + 1
for single_date in [d for d in (start_date + timedelta(n) for n in range(day_count)) if d <= end_date]:
    print strftime("%Y-%m-%d", single_date.timetuple())
```

## Notes

* I'm not actually using this to print. That's just for demo purposes.
* The `start_date` and `end_date` variables are `datetime.date` objects because I don't need the timestamps. (They're going to be used to generate a report).

## Sample Output

For a start date of `2009-05-30` and an end date of `2009-06-09`:

```
2009-05-30
2009-05-31
2009-06-01
2009-06-02
2009-06-03
2009-06-04
2009-06-05
2009-06-06
2009-06-07
2009-06-08
2009-06-09
```
Why are there two nested iterations? For me it produces the same list of data with only one iteration:

```
for single_date in (start_date + timedelta(n) for n in range(day_count)):
    print ...
```

And no list gets stored, only one generator is iterated over. Also the "if" in the generator seems to be unnecessary. After all, a linear sequence should only require one iterator, not two.

## Update after discussion with John Machin:

Maybe the most elegant solution is using a generator function to completely hide/abstract the iteration over the range of dates:

```
from datetime import date, timedelta

def daterange(start_date, end_date):
    for n in range(int((end_date - start_date).days)):
        yield start_date + timedelta(n)

start_date = date(2013, 1, 1)
end_date = date(2015, 6, 2)
for single_date in daterange(start_date, end_date):
    print(single_date.strftime("%Y-%m-%d"))
```

NB: For consistency with the built-in `range()` function this iteration stops **before** reaching the `end_date`. So for inclusive iteration use the next day, as you would with `range()`.
This might be more clear:

```
from datetime import date, timedelta

start_date = date(2019, 1, 1)
end_date = date(2020, 1, 1)
delta = timedelta(days=1)
while start_date <= end_date:
    print(start_date.strftime("%Y-%m-%d"))
    start_date += delta
```
Iterating through a range of dates in Python
[ "", "python", "date", "datetime", "iteration", "date-range", "" ]
Windows 7 has a very nifty way of showing "Contributing Artist" metadata in Windows Explorer. I wonder how I can access that metadata from C++. Maybe you can even point to some source code? Greatly appreciated in advance.
Assuming you can limit your app to Windows Vista and later, use [IPropertyStore](http://msdn.microsoft.com/en-us/library/bb761474(VS.85).aspx). Otherwise each file type needs to be parsed independently.
Assuming we're talking about MP3s, I'd guess it uses [ID3 tags](http://www.id3.org/).
How to read "Contributing Artist" metadata in C++?
[ "", "c++", "windows", "winapi", "windows-7", "frameworks", "" ]
I have found a few threads in regards to this issue. Most people appear to favor using int in their c# code accross the board even if a byte or smallint would handle the data unless it is a mobile app. I don't understand why. Doesn't it make more sense to define your C# datatype as the same datatype that would be in your data storage solution? My Premise: If I am using a typed dataset, Linq2SQL classes, POCO, one way or another I will run into compiler datatype conversion issues if I don't keep my datatypes in sync across my tiers. I don't really like doing System.Convert all the time just because it was easier to use int accross the board in c# code. I have always used whatever the smallest datatype is needed to handle the data in the database as well as in code, to keep my interface to the database clean. So I would bet 75% of my C# code is using byte or short as opposed to int, because that is what is in the database. Possibilities: Does this mean that most people who just use int for everything in code also use the int datatype for their sql storage datatypes and could care less about the overall size of their database, or do they do system.convert in code wherever applicable? Why I care: I have worked on my own forever and I just want to be familiar with best practices and standard coding conventions.
Performance-wise, an int is faster in almost all cases. The CPU is designed to work efficiently with 32-bit values. Shorter values are complicated to deal with. To read a single byte, say, the CPU has to read the 32-bit block that contains it, and then mask out the upper 24 bits. To write a byte, it has to read the destination 32-bit block, overwrite the lower 8 bits with the desired byte value, and write the entire 32-bit block back again. Space-wise, of course, you save a few bytes by using smaller datatypes. So if you're building a table with a few million rows, then shorter datatypes may be worth considering. (And the same might be good reason why you should use smaller datatypes in your database) And correctness-wise, an int doesn't overflow easily. What if you *think* your value is going to fit within a byte, and then at some point in the future some harmless-looking change to the code means larger values get stored into it? Those are some of the reasons why int should be your default datatype for all integral data. Only use byte if you actually want to store machine bytes. Only use shorts if you're dealing with a file format or protocol or similar that actually specifies 16-bit integer values. If you're just dealing with integers in general, make them ints.
I am only 6 years late, but maybe I can help someone else.

Here are some guidelines I would use:

* If there is a possibility the data will not fit in the future then use the larger int type.
* If the variable is used as a struct/class field then by default it will be padded to take up the whole 32-bits anyway, so using byte/int16 will not save memory.
* If the variable is short lived (like inside a function) then the smaller data types will not help much.
* "byte" or "char" can sometimes describe the data better and can do compile time checking to make sure larger values are not assigned to it on accident. e.g. If storing the day of the month (1-31) using a byte and you try to assign 1000 to it, then it will cause an error.
* If the variable is used in an array of roughly 100 or more, I would use the smaller data type as long as it makes sense.
* byte and int16 arrays are not as thread safe as an int (a primitive).

One topic that no one brought up is the limited CPU cache. Smaller programs execute faster than larger ones because the CPU can fit more of the program in the faster L1/L2/L3 caches.

Using the int type can result in fewer CPU instructions, however it will also force a higher percentage of the data memory to not fit in the CPU cache. Instructions are cheap to execute. Modern CPU cores can execute 3-7 instructions per clock cycle; a single cache miss, on the other hand, can cost 1000-2000 clock cycles because it has to go all the way to RAM.

When memory is conserved it also results in the rest of the application performing better because it is not squeezed out of the cache.

I did a quick sum test with accessing random data in random order using both a byte array and an int array.
```
const int SIZE = 10000000, LOOPS = 80000;
byte[] array = Enumerable.Repeat(0, SIZE).Select(i => (byte)r.Next(10)).ToArray();
int[] visitOrder = Enumerable.Repeat(0, LOOPS).Select(i => r.Next(SIZE)).ToArray();

System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();
sw.Start();
int sum = 0;
foreach (int v in visitOrder)
    sum += array[v];
sw.Stop();
```

Here are the results in time (ticks): (x86, release mode, without debugger, .NET 4.5, I7-3930k) (smaller is better)

```
________________ Array Size __________________
       10      100     1K     10K    100K    1M     10M
byte:  549     559     552    552    568     632    3041
int :  549     566     552    562    590     1803   4206
```

* Accessing 1M items randomly using byte on my CPU had a 285% performance increase!
* Anything under 10,000 was hardly noticeable.
* int was never faster than byte for this basic sum test.
* These values will vary with different CPUs with different cache sizes.

One final note: sometimes I look at the now open-source .NET framework to see what Microsoft's experts do. The .NET framework uses byte/int16 surprisingly little. I could not find any actually.
Why should I use int instead of a byte or short in C#
[ "", "c#", "asp.net", "sql-server", "types", "" ]
Imagine you have a large dataset that may or may not be filtered by a particular condition of the dataset elements that can be intensive to calculate. In the case where it is not filtered, the elements are grouped by the value of that condition - the condition is calculated once. However, in the case where the filtering has taken place, although the subsequent code still expects to see an `IEnumerable<IGrouping<TKey, TElement>>` collection, it doesn't make sense to perform a `GroupBy` operation that would result in the condition being re-evaluated a second time for each element. Instead, I would like to be able to create an `IEnumerable<IGrouping<TKey, TElement>>` by wrapping the filtered results appropriately, and thus avoiding yet another evaluation of the condition. Other than implementing my own class that provides the `IGrouping` interface, is there any other way I can implement this optimization? Are there existing LINQ methods to support this that would give me the `IEnumerable<IGrouping<TKey, TElement>>` result? Is there another way that I haven't considered?
Inspired by [David B's answer](https://stackoverflow.com/questions/1100461/faking-igrouping-for-linq/1100516#1100516), I have come up with a simple solution. So simple that I have no idea how I missed it.

In order to perform the filtering, I obviously need to know what value of the condition I am filtering by. Therefore, given a condition, `c`, I can just project the filtered list as:

```
filteredList.GroupBy(x => c)
```

This avoids any recalculation of properties on the elements (represented by `x`).

Another solution I realized would work is to reverse the ordering of my query and perform the grouping before I perform the filtering. This too would mean the conditions only get evaluated once, although it would unnecessarily allocate groupings that I wouldn't subsequently use.
> the condition is calculated once

I hope those keys are still around somewhere...

If your data was in some structure like this:

```
public class CustomGroup<T, U>
{
  T Key {get;set;}
  IEnumerable<U> GroupMembers {get;set;}
}
```

You could project such items with a query like this:

```
var result = customGroups
  .SelectMany(cg => cg.GroupMembers, (cg, z) => new {Key = cg.Key, Value = z})
  .GroupBy(x => x.Key, x => x.Value)
```
Faking IGrouping for LINQ
[ "", "c#", "linq", "optimization", "" ]
So, if I try to remove elements from a Java *HashSet* while iterating, I get a *ConcurrentModificationException*. What is the best way to remove a subset of the elements from a *HashSet* as in the following example?

```
Set<Integer> set = new HashSet<Integer>();
for(int i = 0; i < 10; i++)
    set.add(i);

// Throws ConcurrentModificationException
for(Integer element : set)
    if(element % 2 == 0)
        set.remove(element);
```

Here is a solution, but I don't think it's very elegant:

```
Set<Integer> set = new HashSet<Integer>();
Collection<Integer> removeCandidates = new LinkedList<Integer>();
for(int i = 0; i < 10; i++)
    set.add(i);

for(Integer element : set)
    if(element % 2 == 0)
        removeCandidates.add(element);

set.removeAll(removeCandidates);
```

Thanks!
You can manually iterate over the elements of the set:

```
Iterator<Integer> iterator = set.iterator();
while (iterator.hasNext()) {
    Integer element = iterator.next();
    if (element % 2 == 0) {
        iterator.remove();
    }
}
```

You will often see this pattern using a `for` loop rather than a `while` loop:

```
for (Iterator<Integer> i = set.iterator(); i.hasNext();) {
    Integer element = i.next();
    if (element % 2 == 0) {
        i.remove();
    }
}
```

As people have pointed out, using a `for` loop is preferred because it keeps the iterator variable (`i` in this case) confined to a smaller scope.
The reason you get a `ConcurrentModificationException` is because an entry is removed via **Set.remove()** as opposed to **Iterator.remove()**. If an entry is removed via **Set.remove()** while an iteration is being done, you will get a ConcurrentModificationException. On the other hand, removal of entries via **Iterator.remove()** during iteration is supported in this case.

The new for loop is nice, but unfortunately it does not work in this case, because you can't use the Iterator reference.

If you need to remove an entry during iteration, you need to use the long form that uses the Iterator directly.

```
for (Iterator<Integer> it = set.iterator(); it.hasNext();) {
    Integer element = it.next();
    if (element % 2 == 0) {
        it.remove();
    }
}
```
Remove Elements from a HashSet while Iterating
[ "", "java", "iteration", "hashmap", "hashset", "" ]
I have 3 tables; I write a stored procedure in ADO.NET Entity Framework.

```
ALTER PROCEDURE [dbo].[sp_GetDepartmanData]
 (@departman nvarchar(50))
AS
BEGIN
   SELECT d.ID, d.Name as DepartmanName, sb.Salary, sb.email,
          sp.Name, sp.SurName, sp.Phone, sp.Married, sp.Address
   FROM Departman d
   INNER JOIN StaffsBusiness sb ON d.ID = sb.StaffsPersonelDepartmanID
   INNER JOIN StaffsPersonel sp ON sb.StaffsPersonelID = sp.ID
   WHERE d.Name = @departman
END
```

---

I need a stored procedure function; I write below:

```
var staffPersonel = staffContext.GetPersonelInformationWithDepartmanID("Yazılım");
gvPersonel.DataSource = staffPersonel;
gvPersonel.DataBind();
```

The GetPersonelInformationWithDepartmanID function I write from SQL (a user defined function in ADO.NET Entity Framework) gives me 3 alternatives (it is silly!!!), but I have 3 joined tables!!! How can I use it if I join 3 tables before?
Okay, you need a few steps here:

* add your stored procedure `sp_GetDepartmanData` to your Entity Framework model (as an aside - it is **strongly** recommended **NOT** to call your stored procedures `sp_(something)` - use of the `sp_` prefix is reserved for Microsoft-only system stored procedures)
* since your stored procedure is returning a set of data, you will need to create a conceptual entity for it first, before you can use your stored proc; in the Entity Designer, create a new entity and call it some useful name like `DepartmentDataEntityType` or something; add all the fields being returned from the stored procedure to that entity type
* now, you can create your function import in the entity data model - go to the model browser, in the "model.store" section go to your stored procedure, and right-click on "create function import"
* you can now give your function in the object context a name and define what it returns - in this case, pick your newly created entity type (e.g. `DepartmentDataEntityType` from above)
* you're done!

You should now have a function import something like:

```
public global::System.Data.Objects.ObjectResult<DepartmentDataEntityType> GetPersonelInformationWithDepartmanID(global::System.String departmentName)
{
    global::System.Data.Objects.ObjectParameter departmentNameParameter;
    departmentNameParameter = new global::System.Data.Objects.ObjectParameter("departmentNameParameter", departmentName);

    return base.ExecuteFunction<DepartmentDataEntityType>("sp_GetDepartmanData", departmentNameParameter);
}
```

This function on your object context can now be called to retrieve the data via the stored procedure from your database.

Marc

**Edit:** If you are getting a mapping error ("Error 3027: No mapping specified for the following EntitySet/AssociationSet") after doing this, it's because the entity you created is not mapped to anything and is only ever used when the function import populates a collection of these entities.
You either need to map this entity to a data store somehow or you need to change it to a complex type. To create a complex type simply open up the EF designer and right-click on an empty area. Go to Add > Complex Type. You should see a new complex type appear in the model browser. Right click it and add scalar properties similar to how you added properties to your entity. Then delete your entity and rename your complex type the same as the entity. That's all you have to do :)
How do you create this "conceptual entity"? If I create an entity which is not mapped to the database, I get the following error: "Entity type 'foobar' is not mapped to the database."
How to use a stored procedure in ADO.NET Entity Framework
[ "", "c#", ".net", "linq", "entity-framework", "" ]
I am working on a latex document that will require typesetting significant amounts of python source code. I'm using [pygments](http://pygments.org/) (the python module, not the online demo) to encapsulate this python in latex, which works well except in the case of long individual lines - which simply continue off the page. I could manually wrap these lines except that this just doesn't seem that elegant a solution to me, and I prefer spending time puzzling about crazy automated solutions than on repetitive tasks. What I would like is some way of processing the python source code to wrap the lines to a certain maximum character length, while preserving functionality. I've had a play around with some python and the closest I've come is inserting `\\\n` in the last whitespace before the maximum line length - but of course, if this ends up in strings and comments, things go wrong. Quite frankly, I'm not sure how to approach this problem. So, is anyone aware of a module or tool that can process source code so that no lines exceed a certain length - or at least a good way to start to go about coding something like that?
You might want to extend your current approach a bit by using the [tokenize](http://docs.python.org/library/tokenize.html) module from the standard library to determine where to put your line breaks. That way you can see the actual tokens (COMMENT, STRING, etc.) of your source code rather than just the whitespace-separated words.

Here is a short example of what tokenize can do:

```
>>> from cStringIO import StringIO
>>> from tokenize import tokenize
>>>
>>> python_code = '''
... def foo(): # This is a comment
...     print 'foo'
... '''
>>>
>>> fp = StringIO(python_code)
>>>
>>> tokenize(fp.readline)
1,0-1,1:        NL      '\n'
2,0-2,3:        NAME    'def'
2,4-2,7:        NAME    'foo'
2,7-2,8:        OP      '('
2,8-2,9:        OP      ')'
2,9-2,10:       OP      ':'
2,11-2,30:      COMMENT '# This is a comment'
2,30-2,31:      NEWLINE '\n'
3,0-3,4:        INDENT  '    '
3,4-3,9:        NAME    'print'
3,10-3,15:      STRING  "'foo'"
3,15-3,16:      NEWLINE '\n'
4,0-4,0:        DEDENT  ''
4,0-4,0:        ENDMARKER       ''
```
I use the `listings` package in LaTeX to insert source code; it does syntax highlighting, line breaks et al.

Put the following in your preamble:

```
\usepackage{listings}
%\lstloadlanguages{Python} % Load only these languages
\newcommand{\MyHookSign}{\hbox{\ensuremath\hookleftarrow}}

\lstset{
  % Language
  language=Python,
  % Basic setup
  %basicstyle=\footnotesize,
  basicstyle=\scriptsize,
  keywordstyle=\bfseries,
  commentstyle=,
  % Looks
  frame=single,
  % Linebreaks
  breaklines,
  prebreak={\space\MyHookSign},
  % Line numbering
  tabsize=4,
  stepnumber=5,
  numbers=left,
  firstnumber=1,
  %numberstyle=\scriptsize,
  numberstyle=\tiny,
  % Above and beyond ASCII!
  extendedchars=true
}
```

The package has hooks for inline code, including entire files, showing it as figures, ...
Sensible python source line wrapping for printout
[ "", "python", "latex", "syntax-highlighting", "code-formatting", "pygments", "" ]
If I have a function like this:

```
function abc($a, $b, $c = 'foo', $d = 'bar') {
    ...
}
```

And I want `$c` to assume its default value, but need to set `$d`, how would I go about making that call in PHP?
PHP can't do this, unfortunately. You could work around this by checking for null. For example:

```
function abc($a, $b, $c = 'foo', $d = 'bar') {
    if ($c === null)
        $c = 'foo';

    // Do something...
}
```

Then you'd call the function like this:

```
abc('a', 'b', null, 'd');
```

No, it's not exactly pretty, but it does the job. If you're feeling really adventurous, you could pass in an associative array instead of the last two arguments, but I think that's way more work than you want it to be.
Associative arrays aren't so bad here, especially when the argument list starts getting bigger:

```
function abc($args = array()) {
    $defaults = array('c' => 'foo', 'd' => 'bar');
    $args = array_merge($defaults, $args);
}
```

If you wanted to explicitly make some of them required, you could put them in front:

```
function abc($a, $b, $args = array()) {
}
```

It's up to you, I've seen big projects use both approaches (forcing passing null and this) and I honestly sort of prefer this one. If you are familiar with Javascript, a lot of scripts make use of this behavior when you pass options to them, so it's not completely foreign.
How to skip arguments when their default is desired
[ "", "php", "" ]
Herlihy and Shavit's book ([The Art of Multiprocessor Programming](https://rads.stackoverflow.com/amzn/click/com/0123705916)) solution to memory reclamation uses Java's `AtomicStampedReference<T>`. To write one in C++ for x86\_64, I imagine, requires at least a 12 byte swap operation - 8 for a 64bit pointer and 4 for the int.

Is there x86 hardware support for this, and if not, any pointers on how to do wait-free memory reclamation without it?
Yes, there is hardware support, though I don't know if it is exposed by C++ libraries. Anyway, if you don't mind doing some low-level unportable assembly language trickery - look up the CMPXCHG16B instruction in Intel manuals.
Windows gives you a bunch of [Interlocked functions](http://msdn.microsoft.com/en-us/library/ms683560(VS.85).aspx) that are atomic and can probably be used to do what you want. Similar functions exist for other platforms, and I believe Boost has an interlocked library as well.

Your question isn't super clear and I don't have a copy of Herlihy and Shavit lying around. Perhaps if you elaborated or gave pseudocode outlining what you want to do, we can give you a more specific answer.
lock-free memory reclamation with 64bit pointers
[ "", "c++", "lock-free", "" ]
I'm not a .NET developer, and I have a feeling this would be trivial for someone who is: I have a C# web application that makes use of the user credentials of the logged in user. Currently it uses the SID, which comes from

```
System.Security.Principal.WindowsIdentity.GetCurrent().User.Value
```

I need to get either the user's UPN login or email address (as defined in active directory) instead of the SID. GetCurrent() returns an object of type WindowsIdentity; looking in the details for WindowsIdentity Members:

[MSDN: WindowsIdentity Members](http://msdn.microsoft.com/en-us/library/system.security.principal.windowsidentity_members.aspx)

I can't see anything that looks like it would give me either the UPN or email in there. How can I pull up that information to use, either by feeding the SID into some other function or calling something different in the first place?
Meanwhile (.NET 3.5) this is a one-liner: ``` System.DirectoryServices.AccountManagement.UserPrincipal.Current.EmailAddress ``` for the email, or ``` System.DirectoryServices.AccountManagement.UserPrincipal.Current.UserPrincipalName ``` for the UPN.
To query Active Directory using a directory searcher you need to do something like this (totally untested code):

```
string userName = System.Security.Principal.WindowsIdentity.GetCurrent().Name;
string ldapPath = "LDAP://domain.company.com";

public string GetEmail(string userName, string ldapPath)
{
    // note: WindowsIdentity.Name is "DOMAIN\user" - strip the domain
    // part before using it as a sAMAccountName
    using (DirectoryEntry root = new DirectoryEntry(ldapPath))
    {
        DirectorySearcher searcher = new DirectorySearcher(root);
        searcher.Filter = string.Format(@"(&(sAMAccountName={0}))", userName);
        searcher.PropertiesToLoad.Add("mail");
        SearchResult result = searcher.FindOne();
        if (result != null)
        {
            ResultPropertyValueCollection property = result.Properties["mail"];
            return (string)property[0];
        }
        else
        {
            // nothing found for that account
            return null;
        }
    }
}
```
Get UPN or email for logged in user in a .NET web application
[ "", "c#", ".net", "active-directory", "" ]
I have two functions that have different enough logic but pretty much the same exception handling:

```
public void DoIt1() // DoIt2 has different logic but the same exception handling
{
    try
    {
        // ... DoIt1 logic
    }
    catch (MySpecialException myEx)
    {
        Debug.WriteLine(myEx.MyErrorString);
        throw;
    }
    catch (Exception e)
    {
        Debug.WriteLine(e.ToString());
        throw;
    }
}
```

It is not possible to use a single entry point for DoIt1 and DoIt2, because they are called from outside. Is copy/paste (for the exception block) the best approach?
It depends... if there is **that much** commonality, you could pass in the *thing to do* as a parameter - either as an interface or a delegate:

```
void Foo(Action thingToDo)
{
    if(thingToDo == null) throw new ArgumentNullException("thingToDo");
    try
    {
        thingToDo();
    }
    catch {...} // lots of catch blocks, as before
}
```

And call as:

```
Foo(delegate { /* logic A */ });
Foo(delegate { /* logic B */ });
```
Try: ``` public static class Catching<TException> where TException : Exception { public static bool Try<T>(Func<T> func, out T result) { try { result = func(); return true; } catch (TException x) { // log exception message (with call stacks // and all InnerExceptions) } result = default(T); return false; } public static T Try<T>(Func<T> func, T defaultValue) { T result; if (Try(func, out result)) return result; return defaultValue; } } ``` Example: ``` int queueSize = Catching<MyParsingException> .Try(() => Parse(optionStr, "QueueSize"), 5); ``` If `Parse` throws a `MyParsingException`, `queueSize` will default to `5`, otherwise the returned value from `Parse` is used (or any other exception will propagate normally, which is usually what you want with an unexpected exception). This helps to avoid breaking up the flow of the code, and also centralises your logging policy. You can write specialised versions of this kind of exception wrapping for special cases, e.g. catching a particular set of three exceptions, or whatever.
What is the best way to re-use exception handling logic in C#?
[ "", "c#", "" ]
The ECMAScript working group has started working on the next edition of the language. What can they learn from Ruby?
This is actually a much more challenging question than it appears at first. The main reason for this is that it's been shown to be very difficult to force browser vendors, by way of specification, to implement the pet or favourite features of language enthusiasts, users, other vendors, or academics without very good justifications. This is how we ended up with the ES4 spec pretty much dead on the table, which yielded a much less ambitious (though still pretty awesome) ES Harmony. A language like JavaScript, which has such insanely tricky political deployment and implementation issues, is simply unable to be the type of awesome experimental playground that Ruby has been for much of its lifetime. Anyone who has followed es-discuss (the ECMAScript language development mailing list) has probably noticed by now that it takes many, many months of debate and experimentation to merely articulate and agree upon common language features like, in recent memory, operator overloading, or short form lambda notation. Perhaps it may be too much to ask of any working group to nail a spec that will target every device on the planet? On the surface it would appear that it's a very narrow band of lessons, even the social ones, that can be easily transferred from Ruby to JavaScript. To that end, and to ease the burden of Brendan Eich and his group: One of the most urgently useful "lessons" to bring to the language from a perspective inspired by Ruby (or LISP) would be **language malleability**. The ability to introduce new features, syntax hacks and domain-specific languages not originating from an inner cabal of spec writers would be incredibly valuable. Allow the language to be a good place for modular extensions to the language to be made, and for those extensions to be **self-hosted**, so as to minimize fragmentation risks and to allow those changes to permeate and be mashed up, etc.
Such malleability would allow the **community** at large to apply lessons from all sorts of directions and allow the Internet to decide over time which lessons are worthwhile from which language, etc. We've already got a high rate of iteration and evolution happening at the other ends of this sandwich, i.e. in browsers themselves (e.g. HTML5), and in JS libraries. If that were able to happen more intimately at the language level, we could see some very interesting things happen very quickly. **[addendum/edit]:** The language has to be able to morph significantly because a small group of people is simply incapable of anticipating all the things it'll ever be used for. A theme that comes up on es-discuss often is the underlying current of designing "a language for the next 10-15 years". IMHO, this is an incredibly unrealistic goal. If you don't build it, the system **will** evolve an alternative long before the spec's intended lifetime is up. With the immense speedup in JavaScript engine/JIT technology of late, we're already seeing the early signs of this happening in the form of new languages being written **on top of JavaScript** or being cross-compiled on the fly into JavaScript. Yep, even Ruby: <http://hotruby.yukoba.jp/>
Embrace functional programming; don't try to bury it in static language constructs.
What can JavaScript learn from Ruby?
[ "", "javascript", "ruby", "" ]
I have a template function "compare" defined as below.

```
#include <iostream>
using namespace std;

template<typename T>
void compare(const T& a, const T& b)
{
    cout << "Inside compare" << endl;
}

int main()
{
    compare("aa", "bb");
    compare("aa", "bbbb");
}
```

When I instantiate compare with string literals of the same length, the compiler does not complain. When I do it with literals of different lengths, it says "error: no matching function for call to compare(const char[3], const char[5])". I am confused, as the compare function should be instantiated with a character pointer rather than a character array. Should string literals not always decay to pointers?
As stated in Greg's answer and comments, the two different array types (since that's what string literals are) is the problem. You may want to leave the function as-is for generic types, but overload it for character pointers and arrays, this is mostly useful when you want to treat them slightly differently. ``` void compare(char const* a, char const* b) { // do something, possibly use strlen() } template<int N1, int N2> void compare(char const (&a)[N1], char const (&b)[N2]) { // ... } ``` If you want to specify that compare should take character pointers explicitly, then the arrays will automatically convert: ``` compare<char const*>("aa", "bbbb"); ``` On the other hand, maybe compare could be written to work with two different types? This can be useful for other types as well, e.g. maybe it calls `f(a)` if `a.size() < b.size()`, and `f(b)` otherwise (with `f` overloaded). (T1 and T2 are allowed to be the same type below, and this would replace your function instead of overloading it as the above two.) ``` template<typename T1, typename T2> void compare(T1 const& a, T2 const& b) { // ... } ```
Your example compiles if you change the declaration to: ``` void compare(const T* a, const T* b) ``` The reason is that the types of the different sized character arrays are actually different types. If you used `sizeof(T)` within the template function the compiler would not know how to resolve the ambiguity. With the above declaration, you are calling a template function with pointer-to-T types, which the compiler will happily resolve as `const char*` when passed character strings.
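A minimal check (assuming a C++11 compiler, for `static_assert` and `decltype`) confirming that the two literals really have distinct array types, which is why deduction of a single `T` fails:

```cpp
#include <type_traits>

// String literals are arrays whose length includes the terminating '\0',
// so "aa" and "bbbb" have different types and cannot both bind to
// const T& for one deduced T.
typedef std::remove_reference<decltype("aa")>::type type_aa;     // const char[3]
typedef std::remove_reference<decltype("bbbb")>::type type_bbbb; // const char[5]

static_assert(!std::is_same<type_aa, type_bbbb>::value,
              "different lengths give different types");
static_assert(std::is_same<type_aa, const char[3]>::value,
              "\"aa\" is a const char[3], not a const char*");
```

The decay to `const char*` only happens when the parameter is itself a pointer (or by-value), which is why the `const T*` signature above resolves the ambiguity.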
Template instantiation error
[ "", "c++", "templates", "" ]
I know I can debug an ASP.NET page in Visual Studio 2008, because Visual Studio knows ASP.NET pages. But I'm developing a ColdFusion-based application now, and I'm curious whether I can use Visual Studio to debug the JS code in ColdFusion pages. Thank you in advance.
Visual Studio isn't aware of CFML Syntax, so it can't debug a CFML page. I don't think you can debug (in the sense of line-debugging, "stepping through") JavaScript code in Visual Studio anyway -- not even inside an ASPX page -- but I'm not sure. What you should use to step-debug JavaScript is [Firefox](http://www.getfirefox.com/) and [FireBug](http://getfirebug.com/). Once you've got the tools, all you need to know is how to use FireBug to debug your JavaScript, which is explained pretty well [right on the FireBug website](http://getfirebug.com/js.html).
I would add one thing to what Adam said. To debug JavaScript in IE, you can download Visual Web Developer 2008. Create a default website (File > New Web Site) and then start the debugger (the green play button on the menu). This will launch an IE window pointing to localhost. (<http://localhost:3928/WebSite1/Default.aspx>) If you change the URL to whatever you want, you will start debugging in IE as well. I have Firebug for FF. The Web Developer add-on is also useful. The thing is, you will need to debug in IE too, since IE craps out in places where FF does not. Hope it helps... Also, in your Solution Explorer window, right-click on Default.aspx and pick "Browse with..." then set IE as the default browser.
how can i debug javascript code within a coldfusion template(a coldfusion page) using visual studio 2008
[ "", "javascript", "visual-studio", "visual-studio-2008", "debugging", "coldfusion", "" ]
``` >>> os.path.basename('http://example.com/file.txt') 'file.txt' ``` .. and I thought `os.path.*` work only on local paths and not URLs? Note that the above example was run on Windows too .. with similar result.
In practice many functions of `os.path` are just string manipulation functions (which just *happen* to be especially handy for path manipulation) -- and since that's innocuous and occasionally handy, while formally speaking "incorrect", I doubt this will change anytime soon -- for more details, use the following simple one-liner at a shell/command prompt: ``` $ python -c"import sys; import StringIO; x = StringIO.StringIO(); sys.stdout = x; import this; sys.stdout = sys.__stdout__; print x.getvalue().splitlines()[10][9:]" ``` Or, for Python 3: ``` $ python -c"import sys; import io; x = io.StringIO(); sys.stdout = x; import this; sys.stdout = sys.__stdout__; print(x.getvalue().splitlines()[10][9:])" ```
On Windows, look at the source code: C:\Python25\Lib\ntpath.py

```
def basename(p):
    """Returns the final component of a pathname"""
    return split(p)[1]
```

os.path.split (in the same file) just splits on "\" (and a few other things) - it is plain string manipulation, so it happily accepts any string.
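Since it is pure string manipulation, this is easy to demonstrate - and also easy to get wrong, because query strings and fragments are not understood (a sketch; for real URLs, `urlparse`/`urllib.parse` is the safer tool):

```python
import posixpath

def url_basename(url):
    # posixpath.basename just splits on the last "/" - it never touches
    # the filesystem, which is why URL-ish strings appear to work.
    return posixpath.basename(url)
```

Note the caveat in the last test below: anything after `?` rides along with the "basename".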
os.path.basename works with URLs, why?
[ "", "python", "url", "path", "" ]
I am using this AJAX proxy for cross-domain AJAX requests: <http://www.daniweb.com/code/snippet494.html>. It works great when I use GET. But when I try to use POST, the body seems to get lost and is never printed out. I tried printing the log and studied the code but I cannot figure out what is going wrong. Thanks, Isaac Waller **Edit:** Here is the code I am using: ``` $.post("proxy.php?proxy_url=http://www.google.com", { postvar1: "hello", postvar2: "goodbye" }, function(data){ alert(data); },"text"); ``` Even if I make a simple form and test it with that no output results.
Try this: ``` $.post("/proxy.php", {proxy_url: "http://www.google.com", postvar1: "hello", postvar2: "goodbye" }, function(data){ alert(data); },"text"); ``` When I tried your original version, it came back with an error 405, but the above works. Either JQUERY or proxy.php isn't happy about mixing post and get parameters. EDIT: I did get back a page, which I was able to display in the alert box, but it doesn't look anything like what I'd expect. I tried the same code on some sites with contact forms that post, and it worked as expected.
Replace any and all $\_GET variables in your PHP file with $\_REQUEST.
PHP AJAX proxy not returning body for POST requests
[ "", "php", "ajax", "http", "post", "proxy", "" ]
I've written my own container template with an iterator. How do I implement const\_iterator? ``` template <class T> class my_container { private: ... public: my_container() : ... { } ~my_container() { } class iterator : public std::iterator<std::bidirectional_iterator_tag, T> { public: ... ```
The only difference should be that when you de-reference a const iterator you get a const reference rather than a reference to the object in the container.
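One common way to get that without writing the class twice (a sketch, not tied to your container - the usual trick is to template the iterator on the possibly-const value type and add a converting constructor so `iterator` converts to `const_iterator` but not the reverse):

```cpp
#include <cstddef>

// One iterator template covers both flavors: U = T gives iterator,
// U = const T gives const_iterator. The converting constructor allows
// iterator -> const_iterator (the reverse fails to compile, since
// const T* does not convert to T*).
template <class U>
class ptr_iter {
    U* p_;
public:
    explicit ptr_iter(U* p) : p_(p) {}
    template <class V>
    ptr_iter(const ptr_iter<V>& other) : p_(other.base()) {}
    U* base() const { return p_; }
    U& operator*() const { return *p_; }
    ptr_iter& operator++() { ++p_; return *this; }
    bool operator!=(const ptr_iter& o) const { return p_ != o.p_; }
};

template <class T, std::size_t N>
struct tiny_array {
    T data[N];
    typedef ptr_iter<T> iterator;
    typedef ptr_iter<const T> const_iterator;
    iterator begin() { return iterator(data); }
    iterator end() { return iterator(data + N); }
    const_iterator begin() const { return const_iterator(data); }
    const_iterator end() const { return const_iterator(data + N); }
};
```

A real iterator would also supply the `std::iterator` typedefs (or the five member typedefs directly), `operator->`, `operator--`, and `operator==`; the pattern stays the same.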
I find the easiest way to implement iterators is [boost::iterator](http://www.boost.org/doc/libs/1_39_0/libs/iterator/doc/index.html). If you want to roll your own, I think the signature should be: ``` class const_iterator : public std::iterator<std::bidirectional_iterator_tag, const T> { ``` with the implementation the same (assuming you are using reference\_type and so forth in your function signatures)
C++ : How to write a const_iterator?
[ "", "c++", "iterator", "const-iterator", "" ]
I'm trying to create a Java program to clean up and merge rows in my table. The table is large, about 500k rows, and my current solution is running very slowly. The first thing I want to do is simply get an in-memory array of objects representing all the rows of my table. Here is what I'm doing:

* pick an increment of, say, 1000 rows at a time
* use JDBC to fetch a result set on the following SQL query: `SELECT * FROM TABLE WHERE ID > 0 AND ID < 1000`
* add the resulting data to an in-memory array
* continue querying all the way up to 500,000 in increments of 1000, each time adding results

This is taking way too long. In fact, it's not even getting past the second increment from 1000 to 2000. The query takes forever to finish (although when I run the same thing directly through a MySQL browser it's decently fast). It's been a while since I've used JDBC directly. Is there a faster alternative?
First of all, are you sure you need the whole table in memory? Maybe you should consider (if possible) selecting rows that you want to update/merge/etc. If you really have to have the whole table you could consider using a scrollable ResultSet. You can create it like this. ``` // make sure autocommit is off (postgres) con.setAutoCommit(false); Statement stmt = con.createStatement( ResultSet.TYPE_SCROLL_INSENSITIVE, //or ResultSet.TYPE_FORWARD_ONLY ResultSet.CONCUR_READ_ONLY); ResultSet srs = stmt.executeQuery("select * from ..."); ``` It enables you to move to any row you want by using 'absolute' and 'relative' methods.
One thing that helped me was `Statement.setFetchSize(Integer.MIN_VALUE)`. I got this idea from [Jason's blog](http://javaquirks.blogspot.com/2007/12/mysql-streaming-result-set.html). This cut down execution time by more than half. Memory consumed went down dramatically (as only one row is read at a time.) This trick doesn't work for `PreparedStatement`, though.
Fastest way to iterate through large table using JDBC
[ "", "java", "mysql", "jdbc", "" ]
I'm trying to do the following (I'm using the prototype library): ``` var div = document.createElement('div'); div.innerHTML = '<script src="somescript.js"></script>'; $('banner').insert(div); ``` In IE, div.innerHTML property is always equal to "" after I set the property in the second line. This snippet is inside a function which is overriding document.write() in an external vendor script, so that is why I am doing it this way rather than creating a script element and appending it to the div element directly. Any help would really be appreciated, this is giving me grey hairs!
You could try to do something like this instead: ``` function loadScript(src) { var script = document.createElement("script"); script.type = "text/javascript"; document.getElementsByTagName("head")[0].appendChild(script); script.src = src; } ``` or do ``` .. div.innerHTML = "<script src=\"somescript.js\"></script>"; .. ```
This one had me stymied for a bit as well. It turns out that IE does not allow the insertion of JS directly via innerHTML unless you include the 'defer' property (see the second link below). This property is unique to IE and apparently allows IE to defer execution of any JS until after the other markup has been loaded. A warning, though...if you include two script tags (as I did), there is no guarantee which one will execute first, as the scripts appear to be loaded asynchronously. This should only be a problem if your scripts are dependent on one another (as mine were). There is an additional caveat as well...you must insert non-script markup at the same time that you insert the script. I was unable to insert the script tags by themselves, with or without the 'defer' property. Finally, the script tags must be placed after all other non-script markup being inserted. Otherwise, the script tags are stripped out of the inserted HTML. Here are a few references: MS innerHTML Reference: <http://msdn.microsoft.com/en-us/library/ms533897%28v=vs.85%29.aspx> MS Defer Property Reference: <http://msdn.microsoft.com/en-us/library/ms533719%28v=vs.85%29.aspx> Example of Script Insert via code (yes, it actually does work): <http://samples.msdn.microsoft.com/workshop/samples/author/dhtml/refs/insertScript_2.htm> My Test Code: ``` // I downloaded the MS example file above and tweaked their script a bit, // resulting in this. Using the proper approach to the defer property // (namely: defer="defer") did not provide me with consistent results, so // sticking with 'DEFER' may be necessary. // Note: Try moving the 'sHTML' variable to the end of the script string. function insertScript2() { var sHTML="<input type=button onclick=" + "go2()" + " value='Click Me'><BR>"; var sScript = sHTML + "<SCRIPT DEFER type='text/javascript'> function go2(){ alert('Hello from inserted script.') } </SCRIPT" + ">"; ScriptDiv.innerHTML = sScript; } ```
Why can't I add a string containing a script tag to innerHTML in IE
[ "", "javascript", "innerhtml", "" ]
header

```
$id = intval($_GET['id']);
$query = mysql_query("SELECT * FROM table WHERE id = $id");
```

view

```
while ($row = mysql_fetch_array($query)) {
    $column1 = $row['column1'];
    $column2 = $row['column2'];
    $column3 = $row['column3'];
    echo $column1.......
}
```

How can I keep the above code in my header file, so my designers won't have to see it? Can you save the data into variables and print them out on the page? Thanks!
You could create a function which takes the $_GET['id'] value as a parameter and returns whatever is supposed to be echo'd. Something like: (inc.php)

```
<?php
function queryCall($id) {
    $query = mysql_query("SELECT * FROM table WHERE id = $id");
    $ret = "";
    while ($row = mysql_fetch_array($query)) {
        $column1 = $row['column1'];
        $column2 = $row['column2'];
        $column3 = $row['column3'];
        $ret .= $column1.......
    }
    return $ret;
}
?>
```

and then in your main file:

```
<?php include 'inc.php'; ?>
...
<?php echo queryCall(intval($_GET['id'])); ?>
...
```
You could use the [MVC pattern](http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller) to separate logic from presentation. The simplest way would be to just save up all data you want to present into either local variables or an array perhaps and then require a view file. The view file just echos the data in a html template. If you want to dig deeper you could check out a framework like [codeigniter](http://codeigniter.com/) or [cakephp](http://cakephp.org/).
Newbie, while loops
[ "", "php", "" ]
C# has the `using` statement, specifically for IDisposable objects. Presumably, any object specified in the `using` statement will hold some sort of resource that should be freed deterministically. However, it seems to me that there are many designs in programming which have a single, definite beginning and end, but lack intrinsic language support. The `using` construct provides an opportunity to use the built-in features of a code editor to, at least, clearly and naturally highlight the scope of such a design or operation. What I have in mind is the sort of operation that frequently starts with a `BeginXXX()` and `EndXXX()` method, though there are plenty of different flavors, such as an asynchronous code execution that involves a "start" and a "join". Take this naive example.

```
webDataOperation.Start();

GetContentFromHardDrive();

webDataOperation.Join();

// Perform operation that requires data from both sources
```

What if, instead, the Start method returned an object whose `IDisposable.Dispose` method performs the join operation.

```
using(webDataOperation.Start()) {
    GetContentFromHardDrive();
}

// Perform operation that requires data from both sources
```

Or, better yet, what I specifically had in mind: I have an object that does highly specialized graphics blitting and has a `Begin()` and `End()` method (a design also present in DirectX and XNA). Instead...

```
using(blitter.BlitOperation()) {
    // Do work
}

// Use result
```

It seems to be more natural and readable, but is it inadvisable, seeing as it uses the `IDisposable` interface and the `using` statement for unintended purposes? In other words, would this be on par with *overloading an operator in a non-intuitive way*?
This is a perfectly acceptable practice. These are called Factored Types, and the [Framework Design Guidelines](http://blogs.msdn.com/brada/archive/2009/02/23/framework-design-guidelines-factored-types.aspx) recommends doing just this. Basically, if the type wraps an operation with a specific lifetime, using IDisposable and the using statement becomes an appropriate thing to consider. I actually blogged about [this specific topic here](http://reedcopsey.com/?p=30), as well.
I recommend against it; my belief is that code is to effectively communicate with the maintainer of the code, not the compiler, and should be written with the maintainer's comprehension in mind. I try to use "using" only to dispose of a resource, typically an unmanaged resource. I am in a minority. Most people it seems use "using" as a general purpose "I want some cleanup code to run even if an exception is thrown" mechanism. I dislike this because (1) we already have a mechanism for that, called "try-finally", (2) it uses a feature for a purpose it was not intended for, and (3) if the call to the cleanup code is important, then why isn't it visible at the point where it is called? If it is important then I want to be able to see it.
Bad practice? Non-canon usage of c#'s using statement
[ "", "c#", "using-statement", "" ]
I'm slowly building a [web browser](http://github.com/regomodo/qtBrowser/tree/master) in PyQt4 and like the speed I'm getting out of it. However, I want to combine easylist.txt with it. I believe Adblock uses this to block HTTP requests by the browser. How would you go about it using Python/PyQt4?

[edit1] OK. I think I've set up Privoxy. I haven't set up any additional filters and it seems to work. The PyQt4 code I've tried to use looks like this:

```
self.proxyIP = "127.0.0.1"
self.proxyPORT = 8118

proxy = QNetworkProxy()
proxy.setType(QNetworkProxy.HttpProxy)
proxy.setHostName(self.proxyIP)
proxy.setPort(self.proxyPORT)
QNetworkProxy.setApplicationProxy(proxy)
```

However, this does absolutely nothing, and I cannot make sense of the docs and cannot find any examples.

[edit2] I've just noticed that if I change self.proxyIP to my actual local IP rather than 127.0.0.1, the page doesn't load. So something is happening.
I know this is an old question, but I thought I'd try giving an answer for anyone who happens to stumble upon it. You could create a subclass of QNetworkAccessManager and combine it with <https://github.com/atereshkin/abpy>. Something kind of like this:

```
from PyQt4.QtNetwork import QNetworkAccessManager
from abpy import Filter

adblockFilter = Filter(file("easylist.txt"))

class MyNetworkAccessManager(QNetworkAccessManager):
    def createRequest(self, op, request, device=None):
        url = request.url().toString()
        doFilter = adblockFilter.match(url)
        if doFilter:
            # redirect blocked requests to an empty URL
            return QNetworkAccessManager.createRequest(self, self.GetOperation, QNetworkRequest(QUrl()))
        else:
            return QNetworkAccessManager.createRequest(self, op, request, device)

myNetworkAccessManager = MyNetworkAccessManager()
```

After that, set the following on all your QWebView instances, or make a subclass of QWebView:

```
QWebView.page().setNetworkAccessManager(myNetworkAccessManager)
```

Hope this helps!
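If pulling in abpy is not an option, the core idea can be sketched with a toy filter (a drastic simplification - real EasyList syntax also has anchors like `||`, options like `$third-party`, wildcards, and element-hiding rules, none of which are handled here):

```python
def load_rules(lines):
    """Keep only plain substring rules from an EasyList-style file."""
    rules = []
    for line in lines:
        line = line.strip()
        # skip blanks, comments, section headers, and element-hiding rules
        if not line or line.startswith(('!', '[')) or '##' in line:
            continue
        rules.append(line)
    return rules

def is_blocked(url, rules):
    # real matchers compile rules into regexes or tries; a linear
    # substring scan is enough to illustrate the createRequest() hook
    return any(rule in url for rule in rules)
```

Swapping `adblockFilter.match(url)` for `is_blocked(url, rules)` in the `createRequest` override lets you experiment without extra dependencies.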
If this question is about web filtering, then try using an external web proxy, for example Privoxy (<http://en.wikipedia.org/wiki/Privoxy>).
How would you adblock using Python?
[ "", "python", "pyqt", "pyqt4", "adblock", "" ]
This is a problem I have been trying to track down for a couple of months now. I have a Java app running that processes XML feeds and stores the result in a database. There have been intermittent resource problems that are very difficult to track down.

**Background:** On the production box (where the problem is most noticeable), I do not have particularly good access to the box, and have been unable to get JProfiler running. That box is a 64-bit quad-core, 8GB machine running CentOS 5.2, Tomcat 6, and Java 1.6.0_11. It starts with these java-opts:

```
JAVA_OPTS="-server -Xmx5g -Xms4g -Xss256k -XX:MaxPermSize=256m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseConcMarkSweepGC -XX:+PrintTenuringDistribution -XX:+UseParNewGC"
```

The technology stack is the following:

* CentOS 64-bit 5.2
* Java 6u11
* Tomcat 6
* Spring/WebMVC 2.5
* Hibernate 3
* Quartz 1.6.1
* DBCP 1.2.1
* MySQL 5.0.45
* Ehcache 1.5.0
* (and of course a host of other dependencies, notably the jakarta-commons libraries)

The closest I can get to reproducing the problem is a 32-bit machine with lower memory requirements. That I do have control over. I have probed it to death with JProfiler and fixed many performance problems (synchronization issues, precompiling/caching XPath queries, reducing the thread pool, removing unnecessary Hibernate pre-fetching, and removing overzealous "cache-warming" during processing). In each case, the profiler showed these as taking up huge amounts of resources for one reason or another, and these were no longer primary resource hogs once the changes went in.

**The Problem:** The JVM seems to completely ignore the memory usage settings, fills all memory, and becomes unresponsive. This is an issue for the customer-facing end, which expects a regular poll (on a 5-minute basis with a 1-minute retry), as well as for our operations teams, who are constantly notified that a box has become unresponsive and have to restart it. There is nothing else significant running on this box.
The problem *appears* to be garbage collection. We are using the ConcurrentMarkSweep (as noted above) collector because the original STW collector was causing JDBC timeouts and became increasingly slow. The logs show that as the memory usage increases, it begins to throw CMS failures and kicks back to the original stop-the-world collector, which then seems not to collect properly. However, running with JProfiler, the "Run GC" button seems to clean up the memory nicely rather than showing an increasing footprint, but since I cannot connect JProfiler directly to the production box, and resolving proven hotspots doesn't seem to be working, I am left with the voodoo of tuning garbage collection blind.

**What I have tried:**

* Profiling and fixing hotspots.
* Using the STW, Parallel, and CMS garbage collectors.
* Running with min/max heap sizes at 1/2, 2/4, 4/5, 6/6 increments.
* Running with permgen space in 256M increments up to 1GB.
* Many combinations of the above.
* I have also consulted the JVM [tuning reference](http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html), but can't really find anything explaining this behavior or any examples of *which* tuning parameters to use in a situation like this.
* I have also (unsuccessfully) tried JProfiler in offline mode and connecting with JConsole and VisualVM, but I can't seem to find anything that will interpret my GC log data.

Unfortunately, the problem also pops up sporadically. It seems to be unpredictable: it can run for days or even a week without any problems, or it can fail 40 times in a day, and the only thing I can catch consistently is that garbage collection is acting up.

Can anyone give any advice as to:

a) Why a JVM is using 8 physical gigs and 2GB of swap space when it is configured to max out at less than 6.

b) A reference to GC tuning that actually explains or gives reasonable examples of when and what kind of settings to use the advanced collectors with.
c) A reference to the most common Java memory leaks (I understand unclaimed references, but I mean at the library/framework level, or something more inherent in data structures, like hashmaps).

Thanks for any and all insight you can provide.

**EDIT**

Emil H:

1) Yes, my development cluster is a mirror of production data, down to the media server. The primary difference is the 32/64-bit split and the amount of RAM available, which I can't replicate very easily, but the code and queries and settings are identical.

2) There is some legacy code that relies on JAXB, but in reordering the jobs to try to avoid scheduling conflicts, I have that execution generally eliminated since it runs once a day. The primary parser uses XPath queries which call down to the javax.xml.xpath package. This was the source of a few hotspots: for one, the queries were not being pre-compiled, and for two, the references to them were in hardcoded strings. I created a threadsafe cache (hashmap) and factored the references to the XPath queries into final static Strings, which lowered resource consumption significantly. The querying is still a large part of the processing, but it should be, because that is the main responsibility of the application.

3) An additional note: the other primary consumer is image operations from JAI (reprocessing images from a feed). I am unfamiliar with Java's graphics libraries, but from what I have found they are not particularly leaky.

(thanks for the answers so far, folks!)

**UPDATE:**

I was able to connect to the production instance with VisualVM, but it had disabled the GC visualization / run-GC option (though I could view it locally). The interesting thing: the heap allocation of the VM is obeying the JAVA\_OPTS, and the actual allocated heap is sitting comfortably at 1-1.5 gigs and doesn't seem to be leaking, but the box-level monitoring still shows a leak pattern, which is not reflected in the VM monitoring. There is nothing else running on this box, so I am stumped.
Well, I finally found the issue that was causing this, and I'm posting a detailed answer in case someone else has these issues. I tried jmap while the process was acting up, but this usually caused the JVM to hang further, and I would have to run it with --force. This resulted in heap dumps that seemed to be missing a lot of data, or at least missing the references between them. For analysis, I tried jhat, which presents a lot of data but not much in the way of how to interpret it. Secondly, I tried the Eclipse-based memory analysis tool ( <http://www.eclipse.org/mat/> ), which showed that the heap was mostly classes related to Tomcat. The issue was that jmap was not reporting the actual state of the application, and was only catching the classes on shutdown, which were mostly Tomcat classes. I tried a few more times, and noticed that there were some very high counts of model objects (actually 2-3x more than were marked public in the database). Using this, I analyzed the slow query logs and a few unrelated performance problems. I tried extra-lazy loading ( <http://docs.jboss.org/hibernate/core/3.3/reference/en/html/performance.html> ), as well as replacing a few Hibernate operations with direct JDBC queries (mostly where it was dealing with loading and operating on large collections -- the JDBC replacements just worked directly on the join tables), and replaced some other inefficient queries that MySQL was logging. These steps improved pieces of the frontend performance, but still did not address the issue of the leak; the app was still unstable and acting unpredictably. Finally, I found the option: `-XX:+HeapDumpOnOutOfMemoryError`. This finally produced a very large (~6.5GB) hprof file that accurately showed the state of the application. Ironically, the file was so large that jhat could not analyze it, even on a box with 16GB of RAM. Fortunately, MAT was able to produce some nice-looking graphs and showed some better data.
This time what stuck out was that a single Quartz thread was taking up 4.5 GB of the 6 GB heap, and the majority of that was a Hibernate StatefulPersistenceContext (<https://www.hibernate.org/hib_docs/v3/api/org/hibernate/engine/StatefulPersistenceContext.html>). This class is used by Hibernate internally as its primary cache (I had disabled the second-level and query caches backed by EHCache).

This class enables most of the features of Hibernate, so it can't be directly disabled (you can work around it, but Spring doesn't support stateless sessions), and I would be very surprised if it had such a major memory leak in a mature product. So why was it leaking now? Well, it was a combination of things.

The Quartz thread pool instantiates with certain things being thread-local. Spring was injecting a session factory, which created a session at the start of the Quartz thread's lifecycle; that session was then reused to run the various Quartz jobs that used the Hibernate session. Hibernate was caching in the session, which is its expected behavior. The problem is that the thread pool never released the session, so Hibernate stayed resident and maintained the cache for the lifecycle of the session. Since this was using Spring's Hibernate template support, there was no explicit use of the sessions (we use a dao -> manager -> driver -> quartz-job hierarchy; the DAO is injected with Hibernate configs through Spring, so the operations are done directly on the templates). So the session was never closed, Hibernate maintained references to the cached objects, and they were never garbage collected. Each time a new job ran, it would just keep filling up the cache local to the thread, so there was not even any sharing between the different jobs. Also, since this is a write-intensive job (very little reading), the cache was mostly wasted, and the objects kept getting created.
The solution: create a DAO method that explicitly calls `session.flush()` and `session.clear()`, and invoke that method at the beginning of each job.

The app has been running for a few days now with no monitoring issues, memory errors, or restarts. Thanks for everyone's help on this; it was a pretty tricky bug to track down, as everything was doing exactly what it was supposed to, but in the end a three-line method managed to fix all the problems.
It seems like memory other than the heap is leaking; you mention that the heap is remaining stable. A classical candidate is permgen (the permanent generation), which consists of two things: loaded class objects and interned strings. Since you report having connected with VisualVM, you should be able to see the number of loaded classes. Watch for a continuous increase in the number of **loaded** classes (important: VisualVM also shows the total number of classes ever loaded; it's okay if that goes up, but the number of currently loaded classes should stabilize after a certain time).

If it does turn out to be a permgen leak, then debugging gets trickier, since tooling for permgen analysis is rather lacking in comparison to the heap. Your best bet is to start a small script on the server that repeatedly (every hour?) invokes:

```
jmap -permstat <pid> > somefile<timestamp>.txt
```

jmap with that parameter will generate an overview of loaded classes together with an estimate of their size in bytes. This report can help you identify whether certain classes do not get unloaded. (Note: by `<pid>` I mean the process ID, and `<timestamp>` should be some generated timestamp to distinguish the files.)

Once you have identified certain classes as being loaded and not unloaded, you can figure out mentally where these might be generated; otherwise you can use jhat to analyze dumps generated with `jmap -dump`. I'll keep that for a future update should you need the info.
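As an illustration, the snapshot loop described above could be sketched in Python. This is an assumption-laden sketch: it presumes `jmap` is on the PATH of the server, and the function names are my own, not part of any tool:

```python
import subprocess
import sys
import time

def permstat_filename(pid, ts):
    # One distinguishable file per snapshot: pid plus timestamp.
    return "permstat-%d-%s.txt" % (pid, ts)

def take_snapshot(pid, outdir="."):
    # Run `jmap -permstat <pid>` and redirect the report to a timestamped file.
    ts = time.strftime("%Y%m%d-%H%M%S")
    path = outdir + "/" + permstat_filename(pid, ts)
    with open(path, "w") as out:
        subprocess.call(["jmap", "-permstat", str(pid)], stdout=out)
    return path

if __name__ == "__main__" and len(sys.argv) > 1:
    target_pid = int(sys.argv[1])
    while True:
        print("wrote %s" % take_snapshot(target_pid))
        time.sleep(3600)  # roughly hourly, as suggested above
```

Comparing successive reports then shows whether class counts keep growing.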
Tracking down a memory leak / garbage-collection issue in Java
[ "java", "memory-leaks", "garbage-collection", "profiling" ]
I want to be able to store information about a song that has been opened using my application. I would like the user to be able to give the song a rating, and for that rating to be loaded every time the user opens that file with my application. I also need to know whether I should store the ratings in a database or an XML file.
[C# ID3 Library](http://sourceforge.net/projects/csid3lib/ "C# ID3 Library") is a .NET class library for editing ID3 tags (v1-2.4). I would store the ratings directly in the comments section of the MP3, since ID3v1 does not have many of the storage features that ID3v2 does. If you want to store additional information for each MP3, what about placing a unique identifier on the MP3 and then having that do a database lookup?

I would be cautious about adding custom tags to MP3s, as it is an easy way to ruin a large library. Also, I have gone down this road before, and while I enjoyed the programming knowledge that came out of it, trying something like the iTunes SDK or Last.fm might be a better route.
I would use a single-file, zero-config database -- [SQL Server Compact](http://www.microsoft.com/Sqlserver/2008/en/us/compact.aspx) in your case.

I don't think XML is a good idea. XML shines at data *interchange* and at storing very small amounts of information. In this case a user may rate thousands of tracks (I have, personally, in online radios that allow ratings), and you may have lots of other information to store about each track. Export and import using XML procedures if you have to; don't use it as your main datastore.
How do I store a rating in a song?
[ "c#", ".net", "xml", "database", "winforms" ]
Alright. I figured it out. transfer.flags needed to be a byte instead of an int. Silly me. Now I'm getting an error code from ioctl, errno 16, which I think means the device is busy. What a workaholic. I've asked on the libusb mailing list.

Below is what I have so far. This isn't really that much code. Most of it is ctypes structures for libusb. Scroll down to the bottom to see the actual code where the error occurs.

```
from ctypes import *

VENDOR_ID = 0x04d8
PRODUCT_ID = 0xc002
_USBLCD_MAX_DATA_LEN = 24
LIBUSB_ENDPOINT_IN = 0x80
LIBUSB_ENDPOINT_OUT = 0x00

class EnumerationType(type(c_uint)):
    def __new__(metacls, name, bases, dict):
        if not "_members_" in dict:
            _members_ = {}
            for key,value in dict.items():
                if not key.startswith("_"):
                    _members_[key] = value
            dict["_members_"] = _members_
        cls = type(c_uint).__new__(metacls, name, bases, dict)
        for key,value in cls._members_.items():
            globals()[key] = value
        return cls

    def __contains__(self, value):
        return value in self._members_.values()

    def __repr__(self):
        return "<Enumeration %s>" % self.__name__

class Enumeration(c_uint):
    __metaclass__ = EnumerationType
    _members_ = {}

    def __init__(self, value):
        for k,v in self._members_.items():
            if v == value:
                self.name = k
                break
        else:
            raise ValueError("No enumeration member with value %r" % value)
        c_uint.__init__(self, value)

    @classmethod
    def from_param(cls, param):
        if isinstance(param, Enumeration):
            if param.__class__ != cls:
                raise ValueError("Cannot mix enumeration members")
            else:
                return param
        else:
            return cls(param)

    def __repr__(self):
        return "<member %s=%d of %r>" % (self.name, self.value, self.__class__)

class LIBUSB_TRANSFER_STATUS(Enumeration):
    _members_ = {'LIBUSB_TRANSFER_COMPLETED':0,
                 'LIBUSB_TRANSFER_ERROR':1,
                 'LIBUSB_TRANSFER_TIMED_OUT':2,
                 'LIBUSB_TRANSFER_CANCELLED':3,
                 'LIBUSB_TRANSFER_STALL':4,
                 'LIBUSB_TRANSFER_NO_DEVICE':5,
                 'LIBUSB_TRANSFER_OVERFLOW':6}

class LIBUSB_TRANSFER_FLAGS(Enumeration):
    _members_ = {'LIBUSB_TRANSFER_SHORT_NOT_OK':1<<0,
                 'LIBUSB_TRANSFER_FREE_BUFFER':1<<1,
                 'LIBUSB_TRANSFER_FREE_TRANSFER':1<<2}

class LIBUSB_TRANSFER_TYPE(Enumeration):
    _members_ = {'LIBUSB_TRANSFER_TYPE_CONTROL':0,
                 'LIBUSB_TRANSFER_TYPE_ISOCHRONOUS':1,
                 'LIBUSB_TRANSFER_TYPE_BULK':2,
                 'LIBUSB_TRANSFER_TYPE_INTERRUPT':3}

class LIBUSB_CONTEXT(Structure):
    pass

class LIBUSB_DEVICE(Structure):
    pass

class LIBUSB_DEVICE_HANDLE(Structure):
    pass

class LIBUSB_CONTROL_SETUP(Structure):
    _fields_ = [("bmRequestType", c_int),
                ("bRequest", c_int),
                ("wValue", c_int),
                ("wIndex", c_int),
                ("wLength", c_int)]

class LIBUSB_ISO_PACKET_DESCRIPTOR(Structure):
    _fields_ = [("length", c_int),
                ("actual_length", c_int),
                ("status", LIBUSB_TRANSFER_STATUS)]

class LIBUSB_TRANSFER(Structure):
    pass

LIBUSB_TRANSFER_CB_FN = CFUNCTYPE(c_void_p, POINTER(LIBUSB_TRANSFER))

LIBUSB_TRANSFER._fields_ = [("dev_handle", POINTER(LIBUSB_DEVICE_HANDLE)),
                            ("flags", c_ubyte),
                            ("endpoint", c_ubyte),
                            ("type", c_ubyte),
                            ("timeout", c_uint),
                            ("status", LIBUSB_TRANSFER_STATUS),
                            ("length", c_int),
                            ("actual_length", c_int),
                            ("callback", LIBUSB_TRANSFER_CB_FN),
                            ("user_data", c_void_p),
                            ("buffer", POINTER(c_ubyte)),
                            ("num_iso_packets", c_int),
                            ("iso_packet_desc", POINTER(LIBUSB_ISO_PACKET_DESCRIPTOR))]

class TIMEVAL(Structure):
    _fields_ = [('tv_sec', c_long),
                ('tv_usec', c_long)]

lib = cdll.LoadLibrary("libusb-1.0.so")
lib.libusb_open_device_with_vid_pid.restype = POINTER(LIBUSB_DEVICE_HANDLE)
lib.libusb_alloc_transfer.restype = POINTER(LIBUSB_TRANSFER)

def libusb_fill_interrupt_transfer(transfer, dev_handle, endpoint, buffer, length, callback, user_data, timeout):
    transfer[0].dev_handle = dev_handle
    transfer[0].endpoint = chr(endpoint)
    transfer[0].type = chr(LIBUSB_TRANSFER_TYPE_INTERRUPT)
    transfer[0].timeout = timeout
    transfer[0].buffer = buffer
    transfer[0].length = length
    transfer[0].user_data = user_data
    transfer[0].callback = LIBUSB_TRANSFER_CB_FN(callback)

def cb_transfer(transfer):
    print "Transfer status %d" % transfer.status

if __name__ == "__main__":
    context = POINTER(LIBUSB_CONTEXT)()
    lib.libusb_init(None)
    transfer = lib.libusb_alloc_transfer(0)
    handle = lib.libusb_open_device_with_vid_pid(None, VENDOR_ID, PRODUCT_ID)

    size = _USBLCD_MAX_DATA_LEN
    buffer = c_char_p(size)

    libusb_fill_interrupt_transfer(transfer, handle, LIBUSB_ENDPOINT_IN + 1, buffer, size, cb_transfer, None, 0)

    r = lib.libusb_submit_transfer(transfer)  # This is returning -2, should be => 0.
    if r < 0:
        print "libusb_submit_transfer failed", r

    while r >= 0:
        print "Poll before"
        tv = TIMEVAL(1, 0)
        r = lib.libusb_handle_events_timeout(None, byref(tv))
        print "Poll after", r
```
Running it as root once fixed the busy flag.
* Have you checked to make sure the return values of `libusb_alloc_transfer` and `libusb_open_device_with_vid_pid` are valid?
* Have you tried annotating the library functions with the appropriate [argtypes](http://python.net/crew/theller/ctypes/reference.html#foreign-functions)?
* You may run into trouble with `transfer[0].callback = LIBUSB_TRANSFER_CB_FN(callback)`: you're not keeping any references to the `CFunctionType` object returned from `LIBUSB_TRANSFER_CB_FN()`, so that object might be getting released and overwritten.

The next step, I suppose, would be to install a version of libusb with debugging symbols, boot up GDB, set a breakpoint at `libusb_submit_transfer()`, make sure the passed-in `libusb_transfer` is sane, and see what's triggering the error to be returned.
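The point about keeping a reference to the callback wrapper can be demonstrated without libusb at all. A minimal ctypes sketch, with no hardware or library involved; the names here are illustrative, not libusb's:

```python
from ctypes import CFUNCTYPE, c_int

# A callback type analogous to LIBUSB_TRANSFER_CB_FN.
CALLBACK = CFUNCTYPE(c_int, c_int)

def py_callback(x):
    return x * 2

# Risky: assigning CALLBACK(py_callback) directly into a C structure leaves
# the wrapper object with no Python reference, so it can be garbage-collected
# while the C side still holds the raw function pointer.
# Safe: bind the wrapper to a name that outlives the transfer.
kept_callback = CALLBACK(py_callback)

result = kept_callback(21)
```

Storing the wrapper on the object that owns the transfer (or in a module-level list) is enough to keep it alive.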
Some help understanding async USB operations with libusb-1.0 and ctypes
[ "python", "usb", "ctypes", "libusb" ]
I have been looking into different systems for creating a fast cache in a web farm running Python/mod_wsgi. Memcached and others are options, but I was wondering: since I don't need to share data across machines, and want each machine to maintain its own local cache, does Python or WSGI provide a mechanism for Python-native shared data in Apache, such that the data persists and is available to all threads/processes until the server is restarted? That way I could keep a cache of objects, with concurrency control, in the memory space of all running application instances.

If not, it sure would be useful. Thanks!
There's Django's thread-safe in-memory cache backend; see [here](http://code.djangoproject.com/browser/django/trunk/django/core/cache/backends/locmem.py). It's cPickle-based, and although it's designed for use with Django, it has minimal dependencies on the rest of Django, so you could easily refactor it to remove these.

Obviously each process would get its own cache, shared between its threads. If you want a cache shared by all processes on the same machine, you could run this cache in its own process with an IPC interface of your choice (domain sockets, say) or use `memcached` locally; or, if you might ever want persistence across restarts, something like [Tokyo Cabinet](http://tokyocabinet.sourceforge.net/) with a Python interface like [this](http://code.google.com/p/pytyrant/).
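If refactoring the Django backend out feels heavyweight, the core idea (a dict guarded by a lock, with per-entry expiry) fits in a few lines of stdlib Python. A rough, illustrative sketch; the class and method names below are mine, not Django's:

```python
import threading
import time

class LocalCache(object):
    """A tiny thread-safe in-memory cache with per-entry expiry."""

    def __init__(self, default_timeout=300):
        self._lock = threading.Lock()
        self._store = {}  # key -> (expires_at, value)
        self._default_timeout = default_timeout

    def set(self, key, value, timeout=None):
        expires_at = time.time() + (timeout if timeout is not None else self._default_timeout)
        with self._lock:
            self._store[key] = (expires_at, value)

    def get(self, key, default=None):
        with self._lock:
            entry = self._store.get(key)
            if entry is None:
                return default
            expires_at, value = entry
            if expires_at < time.time():
                # Lazily evict stale entries on read.
                del self._store[key]
                return default
            return value

# One module-level instance per process; under mod_wsgi each daemon
# process would get its own copy, shared between that process's threads.
cache = LocalCache()
```

Because the instance lives at module level, it persists for the life of the process, which matches the per-machine, per-process cache described in the question.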
This is thoroughly covered by the [Sharing and Global Data section](http://code.google.com/p/modwsgi/wiki/ProcessesAndThreading) of the mod\_wsgi documentation. The short answer is: No, not unless you run everything in one process, but that's not an ideal solution. It should be noted that [caching is *ridiculously* easy to do with Beaker middleware](http://beaker.groovie.org/), which supports multiple backends including memcache.
Python/mod_wsgi server global data
[ "python", "apache", "caching", "mod-wsgi" ]
I am getting this warning from FxCop:

> "'RestartForm' contains field 'RestartForm.done' that is of IDisposable type: 'ManualResetEvent'. Change the Dispose method on 'RestartForm' to call Dispose or Close on this field."

Ok, I understand what this means and why this is what needs to be done... Except `System.Windows.Forms.Form` doesn't allow you to override either `.Close()` or `.Dispose()`, so what to do? Currently I'm running with this solution:

```
private void RestartForm_FormClosing(object sender, FormClosingEventArgs e)
{
    done.Set();
    done.Close();
}
```

Which works as intended for my application... But FxCop still shows this message. Am I covered and can I safely ignore it, or is there another way I should be doing this?
You need to override the `Dispose(bool)` method from `Form`. Typically, this is automatically overridden in the RestartForm.Designer.cs file, so you will need to move the override into your code file so that you can add whatever code you need without it being rewritten by the designer.

In RestartForm.cs:

```
protected override void Dispose(bool disposing)
{
    if (disposing)
    {
        if (components != null)
        {
            components.Dispose();
        }
        // Dispose stuff here
    }
    base.Dispose(disposing);
}
```
I use this method :)

```
Image bgImage = Image.FromFile(workingDir + "\\" + button.BackgroundImage);
currentButton.Image = bgImage;
currentButton.Disposed += (Object sndr, EventArgs evnt) => bgImage.Dispose();
```
How do I extend a WinForm's Dispose method?
[ "c#", "winforms", "dispose", "fxcop" ]