I need a for loop which will complete all its iterations even if there's an exception in any one of the iterations.
```
for (...)
{
    try
    {
        // Do stuff
    }
    catch (Exception ex)
    {
        // Handle (or ignore) the exception
    }
}
```
Just put each iteration inside a try..catch:

```
foreach (Person a in people)
{
    try
    {
        WorkOnPerson(a);
    }
    catch
    {
        // do something if you want to.
    }
}
```
How to handle an exception in a loop and keep iterating?
[ "c#", "for-loop" ]
I have a project in SSIS and I've added an Execute SQL Task which sends its result out to a variable. I wanted to confirm the value because I was worried that it would try to write it out as a resultset object rather than an actual integer (in this case I'm returning a COUNT). My first thought was just to run it in debug mode and add the global variable to my Watch window. Unfortunately, when I right-click on the Watch window, the option to "Add Variable" is greyed out. What am I missing here? I've gotten around confirming that my variable is set correctly, so I'm not interested in methods like putting a script in to do a MsgBox with the value or anything like that. For future reference I'd like to be able to watch variables in debug mode. If there are some kind of constraints on that then I'd like to know the what and why of it all if anyone knows. The help is woefully inadequate on this one and every "tutorial" that I can find just says, "Add the variable to the Watch window and debug" as though there should never be a problem doing that. Thanks for any insight!
I believe you can only add variables to the Watch window while the debugger is stopped on a breakpoint. If you set a breakpoint on a step, you should be able to enter variables into the Watch window when the breakpoint is hit. You can select the first empty row in the Watch window and enter the variable name (you may or may not get some Intellisense there, I can't remember how well that works.)
Drag the variable from the Variables pane to the Watch pane and voila!
Watching variables in SSIS during debug
[ "sql", "sql-server-2005", "debugging", "ssis", "watch" ]
Greetings again, and thanks once more to all of you who provided answers to the first question. The following code is updated to include the two functions per the assignment. To see the original question, click [here](https://stackoverflow.com/questions/550181/c-pointer-snippet). I am *pretty* sure this fulfills the requirements of the assignment, but once again I would greatly appreciate any assistance. Did I modify the delete statements appropriately? Thanks again.

```
#include <iostream>
#include <string>

int** createArray(int, int);
void deleteArray(int*[], int);

using namespace std;

int main()
{
    int nRows;
    int nColumns;

    cout << "Number of rows: ";
    cin >> nRows;
    cout << "Number of columns: ";
    cin >> nColumns;

    int** ppInt = createArray(nRows, nColumns);
    deleteArray(ppInt, nRows);
}

int** createArray(int nRows, int nColumns)
{
    int** ppInt = new int*[nRows];
    for (int nCount = 0; nCount < nRows; nCount++)
    {
        ppInt[nCount] = new int[nColumns];
    }
    return ppInt;
}

void deleteArray(int** nPointer, int nRows)
{
    for (int nCount = 0; nCount < nRows; nCount++)
    {
        delete[] nPointer[nCount];
    }
    delete[] nPointer;
}
```

P.S. Here is the assignment documentation itself, in case it helps:

(1) Design and implement a function to allocate memory for a 2-D integer array: the function is supposed to take two integers as parameters, one for number of rows and one for number of columns. You need to use "new" operator in this function. Remember that we need to first create an array of pointers. Then, for each pointer in that array, we need to create an array of integers. This function is supposed to return a pointer which points to a 2-D integer array.

(2) Design and implement a function to de-allocate memory for this 2-D array: the function is supposed to have two parameters (a pointer which points to a 2-D integer array, and the other one is number of rows in the array). In the function, you are supposed to de-allocate memory for this 2-D array using the "delete" operator.
You should delete each row (an array of integers) first, and then delete the array of pointers.
The code looks good. However, there are some problems you may want to address, for us humans:

1. Your function signatures (declarations) lack parameter names. More suitable:

   ```
   int** createArray(int rows, int columns);
   void deleteArray(int** array, int rows);
   ```

2. Your function names aren't too descriptive as to what they *really* create/delete. `create2DArray` would be a wiser choice, for example.
3. Your `n` prefixes to your variables hurt my eyes. `numRows` or `rowCount` is more readable.
4. Similarly, `ppInt` is crazy. Try `array` (for `nPointer` as well, for consistency). (Sadly, you can't write `2dArray`.)
5. Using `i` as a loop counter is more common than `nCount` or similar (especially for array indexes). I suggest you use that instead.

Some things which go Above And Beyond, for your personal practice:

1. Create a class which takes `rows` and `cols` as arguments to its constructor. Make sure to deallocate the array automatically.
2. Use `std::vector` and create a `resize` member function for your class. *Note that this deviates from the original question, which asked for pointers.*
3. Create a `copy` function and a `clone` function to copy data to another 2D array (possibly of a different size!) or clone an existing array.
It's OK. The problem is that you are not thinking about exception safety in your code.

```
int** ppInt = new int*[nRows];             // ALLOC 1
for (int nCount = 0; nCount < nRows; nCount++)
{
    ppInt[nCount] = new int[nColumns];     // ALLOC 2
}
```

Say ALLOC 1 goes fine, but one of the ALLOC 2 calls fails. Then you have an exception and a severe memory leak. For example, if you fail on the fourth call to ALLOC 2, you leak the memory from ALLOC 1 and the first three calls to ALLOC 2. Now in your situation the code is so trivial it probably does not matter, BUT this is the kind of thing you should always keep in mind when writing C++ code: what will happen here if an exception is thrown, what resources are going to be leaked, and what resources are not going to be cleaned up correctly. I think you should think about wrapping your 2D array inside a class so that you can guarantee that memory is allocated and de-allocated correctly even in the presence of exceptions.
UPDATE: C++ Pointer Snippet
[ "c++", "pointers" ]
This is more of a generic regex question than a PHP-specific one. I am given different strings that may look like:

> `A/B/PA ID U/C/D`

And I'm trying to extract the segment in the middle slashes that has spaces ("/PA ID U") using:

> `preg_match('/(\/PA .+)(\/.+|$)/', $string, $matches);`

However, instead of getting "/PA ID U" as I was expecting, I was getting "/PA ID U/C/D". How can I make it prioritize matching "/.+" over "$" in that last group?

---

Additional notes: I need that last group to match either another "/somethingsomething" or "" because the string varies a lot. If I only match for the "/.+", I won't be able to get the "/PA ID U" if it's at the end of the line, such as in "A/B/PA ID U". Basically, I need to be able to extract specific segments like so:

> Given: `"A/B/PA ID U/PA ID U/C/D"`
>
> Extract: `(A), (B), (PA ID U), (PA ID U), (C), (D)`

---

[UPDATE] I'm trying to avoid using `split()` or `explode()` because that would mean that I have to match the "PA ID U" pattern separately. Aside from merely extracting the slash-separated segments, I need to validate that the substrings match specific patterns.
Your regular expression is not working because the `.+` is being greedy. You could fix it by adding a non-greedy modifier (a `?`) to your first `.+`, as such:

```
preg_match('/(\/PA .+?)(\/.+|$)/', $string, $matches);
```

You could alternatively do:

```
'/\/(PA [^\/]+)(\/.+|$)/'
```

I moved the slash outside of the parens to avoid capturing it (I presume you're not interested in the slash). The `[^\/]+` will capture any character up to the next slash.
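The greedy vs. lazy behaviour is easy to verify outside PHP; here is the same pattern exercised with Python's `re` module (Python is used purely for illustration here; PCRE treats these particular constructs the same way):

```python
import re

s = "A/B/PA ID U/C/D"

# Greedy: .+ swallows everything to the end of the string, where $ satisfies group 2
greedy = re.search(r'(/PA .+)(/.+|$)', s)
# Lazy: .+? stops at the first point where group 2 can match a /-segment
lazy = re.search(r'(/PA .+?)(/.+|$)', s)
# Character class: [^/]+ cannot cross a slash at all
cls = re.search(r'/(PA [^/]+)(/.+|$)', s)

print(greedy.group(1))  # /PA ID U/C/D
print(lazy.group(1))    # /PA ID U
print(cls.group(1))     # PA ID U
```

The character-class variant is usually preferable when the delimiter is known, since it never needs to backtrack.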
I think you can most effectively use split to accomplish what you want:

```
split('/', $string);
```

See the [php manual](https://www.php.net/split).
Parse/Split a forward slash delimited string
[ "php", "regex", "parsing", "text-extraction", "csv" ]
Not sure that the example (nor the actual use case) qualifies as NP-complete, but I'm wondering about the most Pythonic way to do the below assuming that this was the algorithm available. Say you have:

```
class Person:
    def __init__(self):
        self.status = 'unknown'
    def set(self, value):
        if value:
            self.status = 'happy'
        else:
            self.status = 'sad'
    # ... blah. Maybe it's got their names or where they live or whatever.
```

and some operation that requires a group of Persons. (The key value here is whether the Person is happy or sad.) Hence, given PersonA, PersonB, PersonC, PersonD, I'd like to end up with a list of the possible 2\*\*4 combinations of sad and happy Persons, i.e.

```
[ [ PersonA.set(true), PersonB.set(true), PersonC.set(true),  PersonD.set(true)],
  [ PersonA.set(true), PersonB.set(true), PersonC.set(true),  PersonD.set(false)],
  [ PersonA.set(true), PersonB.set(true), PersonC.set(false), PersonD.set(true)],
  [ PersonA.set(true), PersonB.set(true), PersonC.set(false), PersonD.set(false)],
  etc.. ]
```

Is there a good Pythonic way of doing this? I was thinking about list comprehensions (and modifying the object so that you could call it and get returned two objects, true and false), but the comprehension formats I've seen would require me to know the number of Persons in advance. I'd like to do this independent of the number of persons.

EDIT: Assume that whatever operation I was going to run on this is part of a larger problem set - we need to test out all values of Person for a given set in order to solve our problem. (i.e. I know this doesn't look NP-complete right now =) )

any ideas? Thanks!
According to what you've stated in your problem, you're right -- you do need `itertools.product`, but not exactly the way you've stated.

```
import itertools

truth_values = itertools.product((True, False), repeat=4)
people = (person_a, person_b, person_c, person_d)
all_people_and_states = [[person.set(truth) for person, truth in zip(people, combination)]
                         for combination in truth_values]
```

That should be more along the lines of what you mentioned in your question.
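A self-contained sketch of this approach, for reference (illustrative only: here `set` returns `self`, an addition to the asker's class, so it composes inside the comprehension; and each combination gets its own fresh `Person` objects so earlier combinations are not mutated by later ones):

```python
import itertools

class Person:
    def __init__(self):
        self.status = 'unknown'

    def set(self, value):
        self.status = 'happy' if value else 'sad'
        return self  # returning self lets the comprehension collect Persons

def all_states(n):
    # 2**n combinations, each with its own fresh group of n Persons
    return [[Person().set(truth) for truth in combo]
            for combo in itertools.product((True, False), repeat=n)]

states = all_states(4)
print(len(states))                    # 16
print([p.status for p in states[0]])  # ['happy', 'happy', 'happy', 'happy']
```

This stays independent of the number of persons, which was the asker's main constraint.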
I think this could do it:

```
l = list()
for i in xrange(2 ** n):
    # create the list of n people
    sublist = [None] * n
    for j in xrange(n):
        sublist[j] = Person()
        sublist[j].set(i & (1 << j))
    l.append(sublist)
```

Note that if you wrote `Person` so that its constructor accepted the value, or such that the `set` method returned the person itself (but that's a little weird in Python), you could use a list comprehension. With the constructor way:

```
l = [ [Person(i & (1 << j)) for j in xrange(n)]
      for i in xrange(2 ** n) ]
```

The runtime of the solution is `O(n 2**n)` as you can tell by looking at the loops, but it's not really a "problem" (i.e. a question with a yes/no answer) so you can't really call it NP-complete. See [What is an NP-complete in computer science?](https://stackoverflow.com/questions/210829/what-is-an-np-complete-problem) for more information on that front.
Obtaining all possible states of an object for a NP-Complete(?) problem in Python
[ "iteration", "python", "combinatorics" ]
Does anyone have any tutorials/info for creating and rendering fonts in native DirectX 9 that doesn't use GDI (e.g. doesn't use ID3DXFont)? I'm reading that this isn't the best solution (due to accessing GDI), but what is the 'right' way to render fonts in DX?
ID3DXFont is a great thing for easy to use, early, debug output. However, it does use the GDI for font rasterization (not hardware accelerated) and there is a significant performance hit (try it, it's actually very noticeable). As of DirectX 11, though, fonts will be rendered with Direct2D and be hardware accelerated. The fastest way to render text is using what's called "bitmap fonts". I would explain how to do this, except that there are a lot of different ways to implement this technique, each differing in complexity and capability. It can be as simple as a system that loads a pre-created texture and draws the letters from that, or a system that silently registers a font with Windows and creates a texture in memory at load-time (the engine I developed with a friend did this; it was very slick). Either way, you should see a very noticeable performance increase with bitmap fonts.
Why isn't this a good solution? Mixing GDI rendering and D3D rendering in the same window is a bad idea. However, ID3DXFont does not do that. It uses GDI to rasterize the glyphs into a texture, and uses that texture to render the actual text. About the only alternative would be using another library (e.g. FreeType) to rasterize glyphs into a texture, but I'm not sure if that would result in any substantial benefits. Of course, for simple (e.g. non-Asian) fonts you could rasterize all glyphs into a texture beforehand, then use that texture to draw text at runtime. This way the runtime does not need to use any font rendering library; it just draws quads using the texture. This approach does not scale well with large font sizes or fonts with lots of characters, and it would not handle complex typography very well (e.g. where letters have to be joined, etc.).
DirectX Font tutorial that doesn't use GDI
[ "c++", "fonts", "directx" ]
I want to use jQuery inside a Firefox extension. I imported the library in the XUL file like this:

```
<script type="application/x-javascript"
        src="chrome://myExtension/content/jquery.js"></script>
```

but the $() function is not recognized in the XUL file, and neither is jQuery(). I googled the problem and found some solutions, but none of them worked for me:

<http://gluei.com/blog/view/using-jquery-inside-your-firefox-extension>

<http://forums.mozillazine.org/viewtopic.php?f=19&t=989465>

I've also tried to pass the 'content.document' object (which references the 'document' object) as the context parameter to the jQuery function like this:

```
$('img', content.document);
```

but it's still not working. Has anyone come across this problem before?
I use the following `example.xul`:

```
<?xml version="1.0"?>
<overlay id="example" xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
    <head></head>
    <script type="application/x-javascript" src="jquery.js"></script>
    <script type="application/x-javascript" src="example.js"></script>
</overlay>
```

And here is an `example.js`:

```
(function() {
    jQuery.noConflict();
    $ = function(selector, context) {
        return new jQuery.fn.init(selector, context || example.doc);
    };
    $.fn = $.prototype = jQuery.fn;

    example = new function(){};
    example.log = function() {
        Firebug.Console.logFormatted(arguments, null, "log");
    };
    example.run = function(doc, aEvent) {
        // Check for website
        if (!doc.location.href.match(/^http:\/\/(.*\.)?stackoverflow\.com(\/.*)?$/i))
            return;
        // Check if already loaded
        if (doc.getElementById("plugin-example"))
            return;
        // Setup
        this.win = aEvent.target.defaultView.wrappedJSObject;
        this.doc = doc;
        // Hello World
        this.main = main = $('<div id="plugin-example">').appendTo(doc.body).html('Example Loaded!');
        main.css({ background:'#FFF', color:'#000', position:'absolute', top:0, left:0, padding:8 });
        main.html(main.html() + ' - jQuery <b>' + $.fn.jquery + '</b>');
    };

    // Bind Plugin
    var delay = function(aEvent) {
        var doc = aEvent.originalTarget;
        setTimeout(function() {
            example.run(doc, aEvent);
        }, 1);
    };
    var load = function() {
        gBrowser.addEventListener("DOMContentLoaded", delay, true);
    };
    window.addEventListener("pageshow", load, false);
})();
```
The following solution makes it possible to use jQuery in a contentScriptFile (targeting the 1.5 Addon-sdk). In your main.js:

```
exports.main = function() {
    var pageMod = require("page-mod");
    pageMod.PageMod({
        include: "*",
        contentScriptWhen: 'end',
        contentScriptFile: [data.url("jquery-1.7.1-min.js"),
                            data.url("notifier.js"),
                            data.url("message.js")],
        onAttach: function onAttach(worker) {
            // show the message
            worker.postMessage("Hello World");
        }
    });
};
```

In your message.js:

```
self.on("message", function(message) {
    if (message !== "undefined") {
        Notifier.info(message);
    }
});
```

***Some pitfalls you need to watch out for:***

* The order of the contentScriptFile array matters: if message.js were placed first, jQuery wouldn't be recognized.
* Do not place an http:// url in the [data.url](https://addons.mozilla.org/en-US/developers/docs/sdk/latest/packages/addon-kit/docs/self.html#data) (this does not work)!
* All your javascript files should be in the data folder. (Only main.js should be in the lib folder.)
How to use jQuery in Firefox Extension
[ "javascript", "jquery", "firefox", "firefox-addon" ]
I have an application that is built as a Windows Service and a C# library assembly (.dll). In order to manage and maintain this service, I'd like to add the ability to run a command-line application that tells the last time the service archived files, the next time it's scheduled to do so, the status of the last run, and the location of a file created by the service. What's the best architecture for writing a service and library that can share data with another application? I'm using .NET 2.0.
The way that inter-process communication happens in .net is through remoting (even if both processes are on the same machine). Other responses have suggested alternatives to inter-process communication which would not require remoting.
The best architecture is probably to make your service be a "server" that can report on its status (and whatever information you want). Using WCF for this like ocdecio suggested would make it pretty simple.
Access the same running code from a Windows Service and a console application?
[ "c#", ".net", "architecture", "windows-services", ".net-2.0" ]
I have created a fairly large Javascript app that works in Firefox and up until yesterday, IE. I've made quite a few changes and now I'm getting syntax errors in IE, but all is fine in Firefox. Obviously I have a trailing comma somewhere but I can't see it. IE's error message is less than helpful, not even telling me which JS file has the error. Is there a syntax checker for Javascript that will inform me of such errors in more detail?
<http://www.jslint.com/> will catch those comma and semicolon problems. Example:

```
Error:

Problem at line 1 character 17: Extra comma.
var foo = ['bar',]

Problem at line 1 character 19: Missing semicolon.
var foo = ['bar',]
```
JS lint -> <http://jslint.com/>
Javascript syntax errors in IE
[ "javascript", "debugging", "internet-explorer" ]
Is there any tool that will help detect potential errors like "undefined function" in a PHP script *before* runtime? For example, consider: ``` <?php zarfnutz ( 'blah' ); ?> ``` If you ask the PHP command line interface to check that for syntax errors, it responds that there are none. But of course the script will fail if you try to run it, because there's no function called "zarfnutz". I understand that if the language is self-referential enough, it's literally not possible to create such a tool which is guaranteed accurate (I don't know if PHP is in fact self-referential enough). But in any case, there could definitely be a tool that could at least *warn* you that "zarfnutz" *might* be undefined, and such a tool would be very helpful to me. Does anyone know of one? Thanks in advance.
I believe this is one of the features of [PHPLint](http://www.icosaedro.it/phplint/).
Well, I don't know of a *tool* to do it, but [`function_exists`](http://www.php.net/function_exists) and [`get_defined_functions`](http://www.php.net/get_defined_functions) are part of the PHP core.
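For comparison only, the same kind of runtime probe exists in other dynamic languages; a small Python sketch of the `function_exists` idea (the helper name below is mine, not from the question):

```python
import math

def function_exists(name, namespace):
    """Rough analogue of PHP's function_exists(): True if `name`
    is bound to something callable in the given namespace dict."""
    return callable(namespace.get(name))

print(function_exists("sqrt", vars(math)))      # True
print(function_exists("zarfnutz", vars(math)))  # False
```

As with PHP, this only checks at the moment of the call; it cannot promise the name will be defined *before* runtime, which is exactly the gap the question is about.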
PHP: Detect "undefined function"-type errors before runtime?
[ "php", "undefined" ]
I have a C# Regex class matching multiple subgroups such as

```
(?<g1>abc)|(?<g2>def)|(?<g3>ghi)
```

but with much more complicated sub-patterns. I basically want to match anything that doesn't belong to any of those groups, in addition to the existing groups. I tried

```
(?<g1>abc)|(?<g2>def)|(?<g3>ghi)|(.+?)
```

but it turned out too slow. I can't do negation because I don't want to copy those complex subpatterns redundantly. Using just (.+) overrides all other groups as expected. Is there any other way? If that doesn't work I'll have to write an ad-hoc parser.

Additional details: All these groups are evaluated against a MatchEvaluator. So a Regex class behavior that sends "unmatched strings" to the MatchEvaluator will also work. A sample text would be

```
.......abc........ghi.....def.....abc....def...ghi......abc.......
```

I want to catch the parts in between.
Your regex generates a separate match for every single character outside g1, g2, g3. So when you use it with a MatchEvaluator it generates lots of evaluator calls. That's why it's slow. If you try the following regex:

```
(?<rest>.*?)((?<g1>abc)|(?<g2>def)|(?<g3>ghi)|$)
```

you will get a single "rest" group match for each entire fragment of text that doesn't contain a "g" group. Regex C# code:

```
Regex regex = new Regex(
    @"(?<rest>.*?)((?<g1>abc)|(?<g2>def)|(?<g3>ghi)|$)",
    RegexOptions.Singleline | RegexOptions.Compiled
);
```
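The effect of the `rest` group is easy to sanity-check; here is an equivalent run against the question's sample text in Python's `re` module (named groups are spelled `(?P<name>...)` there, but the matching behaviour is the same as the .NET pattern above):

```python
import re

text = ".......abc........ghi.....def.....abc....def...ghi......abc......."
pattern = re.compile(r"(?P<rest>.*?)(?:(?P<g1>abc)|(?P<g2>def)|(?P<g3>ghi)|$)", re.S)

# One match per token, each carrying the unmatched text before it in "rest";
# the trailing $ alternative picks up the text after the last token.
rest_parts = [m.group("rest") for m in pattern.finditer(text) if m.group("rest")]
print(rest_parts)
```

Each run of dots between (and around) the tokens arrives as exactly one `rest` capture, which is why the evaluator gets called once per fragment instead of once per character.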
> but it turned out too slow. I can't do > negation because I don't want to copy > those complex subpatterns redundantly. Why not something like: `const string COMPLEX_REGEX_PATTERN = "\Gobbel[dy]go0\k"`
Not that lazy Regex match?
[ "c#", ".net", "regex" ]
I was wondering if any of you out there knows where I can get really good training material (videos, examples, etc.) on Flex - Java Data Services (BlazeDS would be ideal) besides the ones offered directly by Adobe. I'm thinking of something like [David Tucker's blog](http://www.davidtucker.net/)
Here is a screencast of a presentation I did about Flex and Java a while back: <http://www.jamesward.com/blog/2008/07/21/video-flex-and-java/> Also, here is an intro to BlazeDS article I co-authored: <http://www.infoq.com/articles/blazeds-intro>
[Flex on Java](http://www.manning.com/allmon/) covers BlazeDS in chapter 5. I haven't read it through properly yet, so I cannot tell how good the book is.
Where to get Flex - Java DS (Blaze) training material?
[ "java", "apache-flex", "blazeds", "amf", "dataservice" ]
I need, if possible, a T-SQL query that, returning the values from an arbitrary table, also returns an incremental integer column with value = 1 for the first row, 2 for the second, and so on. This column does not actually reside in any table, and must be strictly incremental, because the ORDER BY clause could sort the rows of the table and I want the incremental column to stay in perfect shape always. The solution must run on SQL Server 2000.
For SQL 2005 and up:

```
SELECT ROW_NUMBER() OVER(ORDER BY SomeColumn) AS 'rownumber', *
FROM YourTable
```

For 2000 you need to do something like this:

```
SELECT IDENTITY(INT, 1, 1) AS Rank, VALUE
INTO #Ranks
FROM YourTable
WHERE 1 = 0

INSERT INTO #Ranks
SELECT SomeColumn
FROM YourTable
ORDER BY SomeColumn

SELECT *
FROM #Ranks
ORDER BY Rank
```

See also here: [Row Number](http://wiki.lessthandot.com/index.php/Row_Number)
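For intuition, `ROW_NUMBER() OVER (ORDER BY SomeColumn)` is just "sort, then number from 1". A quick Python illustration of those semantics (not SQL Server itself, obviously):

```python
rows = [{"name": "carol"}, {"name": "alice"}, {"name": "bob"}]

# sort by the ORDER BY column, then attach a 1-based row number
numbered = [dict(rownumber=i, **row)
            for i, row in enumerate(sorted(rows, key=lambda r: r["name"]), start=1)]

for row in numbered:
    print(row)
# {'rownumber': 1, 'name': 'alice'}
# {'rownumber': 2, 'name': 'bob'}
# {'rownumber': 3, 'name': 'carol'}
```

This also shows why the ORDER BY inside OVER() matters: change the sort key and the numbering follows the new order, exactly as in the SQL above.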
You can start with a custom number and increment from there. For example, if you want to add a cheque number for each payment you can do:

```
select @StartChequeNumber = 3446;

SELECT ((ROW_NUMBER() OVER(ORDER BY AnyColumn)) + @StartChequeNumber) AS 'ChequeNumber', *
FROM YourTable
```

This will give the correct cheque number for each row.
MSSQL Select statement with incremental integer column... not from a table
[ "sql", "sql-server", "sql-server-2000", "auto-increment", "row-number" ]
I need to pass a variable that PHP is aware of to my JavaScript code, and I'm wondering what the *correct* way to do this is. I already know that I could add this to the page generation:

```
<script type="text/javascript">
    var someJSVariable = <?php echo $somePHPVariable ?>
</script>
```

But I find this method to be more obtrusive than I'd like. I am wondering if there is a *better* way to do this, or am I stuck with having to just inject inline JavaScript code into the view script?
If it's just one variable, I think this is the best solution. If you don't want to mix JS into your normal view, make a separate view which will be rendered as a .js file and then just include a link to that .js in your "real" view. If you need performance, some smart caching would be needed there. If there's more than one variable, for example a data exchange between the HTML document and the server, you could use [AJAX](<http://en.wikipedia.org/wiki/Ajax_(programming)>).
I love [`json_encode()`](http://www.php.net/json_encode) for this kind of thing. But yes, you are "stuck" with this inline approach. Honestly, would you prefer this?

```
$js = '<script type="text/javascript">';
$js .= "var someJSVariable = " . $somePHPVariable;
$js .= '</script>';

// some time later
echo $js;
```

Didn't think so. The inline approach is PHP's bread and butter. Go with it.
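The reason `json_encode()` is the right tool here is that it always emits a syntactically valid literal, including quoting and escaping for strings. The same idea sketched in Python for illustration (`json.dumps` plays the role of `json_encode`):

```python
import json

some_value = {"user": "al\"ice", "count": 3}

# json.dumps handles quoting/escaping, so the result is a valid JS expression
# even when the value contains quotes or other special characters
js = "var someJSVariable = " + json.dumps(some_value) + ";"
print(js)
```

Interpolating the raw value directly (the naive string-concatenation above it) breaks as soon as the value contains a quote; encoding to JSON first avoids that whole class of bugs.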
Correct way to pass a variable from the server backend to javascript?
[ "javascript", "standards", "unobtrusive-javascript" ]
I am lost in a big database and I am not able to find where the data I get comes from. I was wondering if it is possible with SQL Server 2005 to search for a string in all tables, rows and columns of a database. Does anybody have an idea if it is possible, and how?
This code should do it in SQL 2005, but a few caveats:

1. It is RIDICULOUSLY slow. I tested it on a small database that I have with only a handful of tables and it took many minutes to complete. If your database is so big that you can't understand it then this will probably be unusable anyway.
2. I wrote this off the cuff. I didn't put in any error handling and there might be some other sloppiness, especially since I don't use cursors often. For example, I think there's a way to refresh the columns cursor instead of closing/deallocating/recreating it every time.

If you can't understand the database or don't know where stuff is coming from, then you should probably find someone who does. Even if you can find where the data is, it might be duplicated somewhere or there might be other aspects of the database that you don't understand. If no one in your company understands the database then you're in a pretty big mess.

```
DECLARE
    @search_string VARCHAR(100),
    @table_name SYSNAME,
    @table_schema SYSNAME,
    @column_name SYSNAME,
    @sql_string VARCHAR(2000)

SET @search_string = 'Test'

DECLARE tables_cur CURSOR FOR
    SELECT TABLE_SCHEMA, TABLE_NAME
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_TYPE = 'BASE TABLE'

OPEN tables_cur

FETCH NEXT FROM tables_cur INTO @table_schema, @table_name

WHILE (@@FETCH_STATUS = 0)
BEGIN
    DECLARE columns_cur CURSOR FOR
        SELECT COLUMN_NAME
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_SCHEMA = @table_schema
          AND TABLE_NAME = @table_name
          AND COLLATION_NAME IS NOT NULL -- Only strings have this and they always have it

    OPEN columns_cur

    FETCH NEXT FROM columns_cur INTO @column_name

    WHILE (@@FETCH_STATUS = 0)
    BEGIN
        SET @sql_string = 'IF EXISTS (SELECT * FROM ' + QUOTENAME(@table_schema) + '.' + QUOTENAME(@table_name)
            + ' WHERE ' + QUOTENAME(@column_name) + ' LIKE ''%' + @search_string + '%'') PRINT '''
            + QUOTENAME(@table_schema) + '.' + QUOTENAME(@table_name) + ', ' + QUOTENAME(@column_name) + ''''

        EXECUTE(@sql_string)

        FETCH NEXT FROM columns_cur INTO @column_name
    END

    CLOSE columns_cur
    DEALLOCATE columns_cur

    FETCH NEXT FROM tables_cur INTO @table_schema, @table_name
END

CLOSE tables_cur
DEALLOCATE tables_cur
```
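The heart of the cursor loop above is just string-building. As a hypothetical illustration (the helper name and the `(schema, table, column)` input shape are my own, not part of the answer), the per-column probes can be generated like this, which is handy for eyeballing the SQL before running it:

```python
def build_search_statements(columns, search_string):
    """columns: iterable of (schema, table, column) tuples, e.g. the
    string-typed columns fetched from INFORMATION_SCHEMA.COLUMNS.
    Returns one EXISTS-probe statement per column. NOTE: for real use,
    quote identifiers (QUOTENAME) and escape/parameterize the search
    string instead of splicing it in like this."""
    template = (
        "IF EXISTS (SELECT * FROM [{s}].[{t}] WHERE [{c}] LIKE '%{q}%') "
        "PRINT '[{s}].[{t}], [{c}]'"
    )
    return [template.format(s=s, t=t, c=c, q=search_string)
            for (s, t, c) in columns]

stmts = build_search_statements([("dbo", "Orders", "CustomerName")], "Test")
print(stmts[0])
```

Generating the statements up front (rather than nested cursors) also makes it easy to review or log exactly what will be executed.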
I’d suggest you find yourself a 3rd party tool for this such as [ApexSQL Search](http://www.apexsql.com/sql_tools_search.aspx) (there are probably others out there too but I use this one because it’s free). If you really want to go the SQL way you can try using the stored procedure created by [Sorna Kumar Muthuraj](http://gallery.technet.microsoft.com/scriptcenter/c0c57332-8624-48c0-b4c3-5b31fe641c58) – copied code is below. Just execute this stored procedure for all tables in your schema (easy with dynamic SQL):

```
CREATE PROCEDURE SearchTables
    @Tablenames VARCHAR(500)
    ,@SearchStr NVARCHAR(60)
    ,@GenerateSQLOnly BIT = 0
AS
/*
    Parameters and usage

    @Tablenames      -- Provide a single table name or multiple table names, comma separated.
                        If left blank, it will check all the tables in the database.
    @SearchStr       -- Provide the search string. Use '%' to coin the search.
                        EX : X%  -- will give data starting with X
                             %X  -- will give data ending with X
                             %X% -- will give data containing X
    @GenerateSQLOnly -- Provide 1 if you only want to generate the SQL statements
                        without searching the database. By default it is 0 and it will search.

    Samples :

    1. To search data in a table:

        EXEC SearchTables @Tablenames = 'T1', @SearchStr = '%TEST%'

        The above sample searches in table T1 for strings containing TEST.

    2. To search in multiple tables:

        EXEC SearchTables @Tablenames = 'T2', @SearchStr = '%TEST%'

        The above sample searches in tables T1 & T2 for strings containing TEST.

    3. To search in all tables:

        EXEC SearchTables @Tablenames = '%', @SearchStr = '%TEST%'

        The above sample searches in all tables for strings containing TEST.

    4. To generate the SQL for the SELECT statements:

        EXEC SearchTables @Tablenames = 'T1', @SearchStr = '%TEST%', @GenerateSQLOnly = 1
*/
SET NOCOUNT ON

DECLARE @CheckTableNames TABLE
(
    Tablename SYSNAME
)

DECLARE @SQLTbl TABLE
(
    Tablename SYSNAME,
    WHEREClause VARCHAR(MAX),
    SQLStatement VARCHAR(MAX),
    Execstatus BIT
)

DECLARE @sql VARCHAR(MAX)
DECLARE @tmpTblname SYSNAME

IF LTRIM(RTRIM(@Tablenames)) IN ('', '%')
BEGIN
    INSERT INTO @CheckTableNames
    SELECT Name FROM sys.tables
END
ELSE
BEGIN
    SELECT @sql = 'SELECT ''' + REPLACE(@Tablenames, ',', ''' UNION SELECT ''') + ''''

    INSERT INTO @CheckTableNames
    EXEC(@sql)
END

INSERT INTO @SQLTbl (Tablename, WHEREClause)
SELECT SCh.name + '.' + ST.NAME,
    (
        SELECT '[' + SC.name + ']' + ' LIKE ''' + @SearchStr + ''' OR ' + CHAR(10)
        FROM SYS.columns SC
        JOIN SYS.types STy
            ON STy.system_type_id = SC.system_type_id
            AND STy.user_type_id = SC.user_type_id
        WHERE STY.name IN ('varchar', 'char', 'nvarchar', 'nchar')
          AND SC.object_id = ST.object_id
        ORDER BY SC.name
        FOR XML PATH('')
    )
FROM SYS.tables ST
JOIN @CheckTableNames chktbls
    ON chktbls.Tablename = ST.name
JOIN SYS.schemas SCh
    ON ST.schema_id = SCh.schema_id
WHERE ST.name <> 'SearchTMP'
GROUP BY ST.object_id, SCh.name + '.' + ST.NAME;

UPDATE @SQLTbl
SET SQLStatement = 'SELECT * INTO SearchTMP FROM ' + Tablename + ' WHERE '
    + SUBSTRING(WHEREClause, 1, LEN(WHEREClause) - 5)

DELETE FROM @SQLTbl
WHERE WHEREClause IS NULL

WHILE EXISTS (SELECT 1 FROM @SQLTbl WHERE ISNULL(Execstatus, 0) = 0)
BEGIN
    SELECT TOP 1 @tmpTblname = Tablename, @sql = SQLStatement
    FROM @SQLTbl
    WHERE ISNULL(Execstatus, 0) = 0

    IF @GenerateSQLOnly = 0
    BEGIN
        IF OBJECT_ID('SearchTMP', 'U') IS NOT NULL
            DROP TABLE SearchTMP

        EXEC (@SQL)

        IF EXISTS (SELECT 1 FROM SearchTMP)
        BEGIN
            SELECT Tablename = @tmpTblname, * FROM SearchTMP
        END
    END
    ELSE
    BEGIN
        PRINT REPLICATE('-', 100)
        PRINT @tmpTblname
        PRINT REPLICATE('-', 100)
        PRINT REPLACE(@sql, 'INTO SearchTMP', '')
    END

    UPDATE @SQLTbl
    SET Execstatus = 1
    WHERE Tablename = @tmpTblname
END

SET NOCOUNT OFF
GO
```
Search for a string in all tables, rows and columns of a DB
[ "sql", "sql-server-2005" ]
If I throw a JavaScript exception myself (eg, `throw "AArrggg"`), how can I get the stack trace (in Firebug or otherwise)? Right now I just get the message.

**edit**: As many people below have posted, it is possible to get a stack trace for a *JavaScript exception* but I want to get a stack trace for *my* exceptions. For example:

```
function foo() {
    bar(2);
}

function bar(n) {
    if (n < 2)
        throw "Oh no! 'n' is too small!"
    bar(n-1);
}
```

When `foo` is called, I want to get a stack trace which includes the calls to `foo`, `bar`, `bar`.
**Edit 2 (2017):** In all modern browsers you can simply call: `console.trace();` [(MDN Reference)](https://developer.mozilla.org/en-US/docs/Web/API/Console/trace)

**Edit 1 (2013):** A better (and simpler) solution as pointed out in the comments on the original question is to use the `stack` property of an `Error` object like so:

```
function stackTrace() {
    var err = new Error();
    return err.stack;
}
```

This will generate output like this:

```
DBX.Utils.stackTrace@http://localhost:49573/assets/js/scripts.js:44
DBX.Console.Debug@http://localhost:49573/assets/js/scripts.js:9
.success@http://localhost:49573/:462
x.Callbacks/c@http://localhost:49573/assets/js/jquery-1.10.2.min.js:4
x.Callbacks/p.fireWith@http://localhost:49573/assets/js/jquery-1.10.2.min.js:4
k@http://localhost:49573/assets/js/jquery-1.10.2.min.js:6
.send/r@http://localhost:49573/assets/js/jquery-1.10.2.min.js:6
```

Giving the name of the calling function along with the URL, its calling function, and so on.

**Original (2009):** A modified version of [this snippet](https://web.archive.org/web/20090504054309/http://ivan-ghandhi.livejournal.com/942493.html) may somewhat help:

```
function stacktrace() {
    function st2(f) {
        return !f ? [] :
            st2(f.caller).concat([f.toString().split('(')[0].substring(9) + '(' + f.arguments.join(',') + ')']);
    }
    return st2(arguments.callee.caller);
}
```
Chrome/Chromium and other browsers using V8, as well as Firefox, have a convenient interface to get a stacktrace through the `stack` property of `Error` objects: ``` try { // Code throwing an exception throw new Error(); } catch(e) { console.log(e.stack); } ``` See details in the [V8 documentation](https://v8.dev/docs/stack-trace-api)
How can I get a JavaScript stack trace when I throw an exception?
[ "", "javascript", "stack-trace", "" ]
I have some exposure to CakePHP and think it is a great framework. Then I ran into this thing called Qcodo, another PHP framework. I've also been hearing about Zend a lot. They all seem very neat, but I'm wondering what the differences between all these frameworks are. Before I waste too much time learning another framework, does anyone know the pros and cons of each? They all seem to have the same general goal: making web application development in PHP easy, modular, and scalable. **EDIT** Found this interesting comparison result between [CakePHP and Zend](http://2tbsp.com/node/87)
I have never heard of Qcodo. CakePHP is a full-featured framework with a lot of automagic, but unfortunately it is one of the [slowest frameworks out there](http://www.avnetlabs.com/php/php-framework-comparison-benchmarks). It also doesn't have official forums, and there really isn't that busy of a community. It tries to be a Ruby on Rails clone, but that just doesn't work so well with PHP. Zend is impressive. It has a strong community and corporate backing. It is very full-featured, but it is also very bloated (see that benchmark) so it has moderate performance. From what I hear though, you are able to use the components separately without using the whole framework, and can even use them with other frameworks! I use [CodeIgniter](http://codeigniter.com/) at work and I love it. It is by far the most newbie-friendly framework, and with your CakePHP experience it should be a breeze. It has [very good documentation](http://codeigniter.com/user_guide/toc.html) that is thorough, clear, and well written. There are a few features that it doesn't have (such as an ORM) that other frameworks have, but CI has one of the most active framework communities I've seen. Since it's easy to write libraries for it, you can find a library to do pretty much anything you want to - ORMs, authentication, etc. To add to this, if you want an ORM for CI, I recommend [DataMapper](http://codeigniter.com/forums/viewthread/90181/). You may also want to check out [Kohana](http://kohanaphp.com/). It's a spin-off of CodeIgniter. It's strictly PHP5 and has an ORM and some more features. Haven't used it myself though, but I have heard good things.
I recommend reading the features of each and determining what you need and looking at code samples of the features you'll be using most. Some frameworks may seem intuitive to you, but not to others. Selecting a PHP framework is a very personal choice IMO. I've been using CodeIgniter myself after using CakePHP for some time. The main reason for switching was due to performance, but CakePHP has come a long way in the last few years so that's probably not a valid reason anymore. [Edit: Apparently, it still has performance shortcomings, see post by ryeguy and comments.] I've never found Zend to be intuitive and have never looked into Qcodo. Wikipedia has a [list of web app frameworks by language](http://en.wikipedia.org/wiki/List_of_web_application_frameworks), you might find information to help you decide there as well.
Qcodo vs. CakePHP vs. Zend
[ "", "php", "zend-framework", "cakephp", "qcodo", "" ]
I have got postfix installed on my machine and I am updating virtual\_alias on the fly programmatically (using Python) on some action. Once I update the entry in /etc/postfix/virtual\_alias, I am running the command: ``` sudo /usr/sbin/postmap /etc/postfix/virtual_alias 2>>/work/postfix_valias_errorfile ``` But I am getting the error: ``` sudo: sorry, you must have a tty to run sudo ``` I want to run the mentioned sudo command non-interactively (meaning I am running this system command from a Python script). So how do I get this command to run programmatically?
You can either run your python script as root itself - then you won't need to add privilege to reload postfix. Or you can configure sudo to not need a password for `/etc/init.d/postfix`. sudo configuration (via visudo) allows NOPASSWD: to allow the command without a password. See <http://www.sudo.ws/sudo/man/sudoers.html#nopasswd_and_passwd> ``` <username> ALL = NOPASSWD: /etc/init.d/postfix ``` or something similar.
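With a NOPASSWD rule like that in place, the Python side reduces to an ordinary `subprocess` call. A sketch (the helper names here are mine; the postmap path and log file come from the question):

```python
import subprocess

def build_postmap_command(alias_file):
    # sudoers rules match on the exact command path, so keep it absolute
    return ["sudo", "/usr/sbin/postmap", alias_file]

def run_postmap(alias_file, error_log):
    """Run postmap via sudo, appending stderr to a log file.

    Assumes a NOPASSWD sudoers entry covering /usr/sbin/postmap is already
    in place, so no tty or password prompt is needed.
    """
    with open(error_log, "a") as log:
        return subprocess.call(build_postmap_command(alias_file), stderr=log)
```

Passing the command as a list (rather than a shell string) also avoids any quoting issues with the file path.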
```
#include <unistd.h>
#include <stdlib.h>

// gcc -o reload_postfix reload_postfix.c
// chown root reload_postfix
// chmod +s reload_postfix

int main( int argc, char **argv )
{
  setuid( geteuid() );
  system("/etc/init.d/postfix reload");
  return 0;
}
```

Wrap your command in a setuid-ed program. This will let any user restart postfix. You can of course further restrict the execute permission to certain groups.
Run a linux system command as a superuser, using a python script
[ "", "python", "linux", "system-administration", "sudo", "root", "" ]
This is going to be a generic question. I am struggling with designing a GUI application, especially with the interactions between different parts. I don't know how I should deal with shared state. On one hand, shared state is bad, and things should be as explicit as possible. On the other hand, not having shared state introduces unwanted coupling between components. An example: I want my application to be extendable in an Emacs/Vim sort of way, via scripts. Clearly, some sort of shared state needs to be modified, so that the GUI will use it. My initial plan was having a global "session" that is accessible from everywhere, but I'm not so sure about it. One tricky use case is key bindings. I want the user to be able to specify custom keybindings from a script. Each keybinding maps to an arbitrary command, which receives the session as its only argument. Now, the editor component captures keypresses. It has to have access to the keymappings, which are per-session, so it needs access to the session. Is coupling the editor to the session a good idea? Other components will also need to access the keybindings, so the session now becomes shared and can be a singleton... Is there any good reading about designing GUI applications that goes beyond MVC? This is Python and wxPython, FWIW. [EDIT]: Added concrete use case.
Sorry to jump on this question so late, but nothing, I mean *nothing* can beat looking at the source of an application that does something similar. (I might recommend something like <http://pida.co.uk>, but there are plenty of extensible wx+Python IDEs out there as that sounds like what you are making). If I might make a few notes: 1. message passing is not inherently bad, and it doesn't necessarily cause coupling between components as long as components adhere to interfaces. 2. shared state is not inherently bad, but I would go with your gut instinct and use as little as possible. Since the universe itself is stateful, you can't really avoid this entirely. I tend to use a shared "Boss" object which is usually a non-singleton single instance per application, and is responsible for brokering other components. 3. For keybindings, I tend to use some kind of "Action" system. Actions are high level things which a user can do, for example: "Save the current buffer", and they can be conveniently represented in the UI by toolbar buttons or menu items. So your scripts/plugins create actions, and register them with something central (eg some kind of registry object - see 1 and 2). And their involvement ends there. On top of this you have some kind of key-binding service that maps keys to actions (which it lists from the registry, per session or otherwise). This way you have achieved separation of the plugin and keybinding code, separation of the editor and the action code. As an added bonus your task of "Configuring shortcuts" or "User defined key maps" is made particularly easier. I could go on, but most of what I have to say is in the PIDA codebase, so back to my original point...
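The action/registry split in point 3 can be sketched in a few lines of Python (the class and action names here are illustrative, not from any particular framework):

```python
class ActionRegistry:
    """Central registry: plugins register actions; UI code looks them up."""
    def __init__(self):
        self._actions = {}

    def register(self, name, callback):
        self._actions[name] = callback

    def run(self, name, session):
        # Every action receives the session, as in the question's design
        return self._actions[name](session)


class KeyBindings:
    """Separate service mapping key chords to action names.

    The editor only talks to this service; it never needs to know
    which plugin provided an action.
    """
    def __init__(self, registry):
        self._registry = registry
        self._bindings = {}

    def bind(self, key, action_name):
        self._bindings[key] = action_name

    def handle(self, key, session):
        if key in self._bindings:
            return self._registry.run(self._bindings[key], session)
        return None  # unbound key: let the widget's default behaviour run
```

The point of the indirection is that "Configuring shortcuts" becomes a matter of editing the bindings map, with the actions themselves untouched.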
If you've looked at MVC you're probably moving in the right direction. MVC, MVP, Passive View, Supervising Controller. Those are all different ways, each with their own pros and cons, of accomplishing what you're after. I find that Passive View is the "ideal", but it causes you to introduce far too many widgets into your GUI interfaces (i.e. IInterface). In general I find that Supervising Controller is a good compromise.
Organising a GUI application
[ "", "python", "model-view-controller", "user-interface", "architecture", "wxpython", "" ]
I have a file which is encoded as iso-8859-1, and contains characters such as ô. I am reading this file with Java code, something like: ``` File in = new File("myfile.csv"); InputStream fr = new FileInputStream(in); byte[] buffer = new byte[4096]; while (true) { int byteCount = fr.read(buffer, 0, buffer.length); if (byteCount <= 0) { break; } String s = new String(buffer, 0, byteCount,"ISO-8859-1"); System.out.println(s); } ``` However the ô character is always garbled, usually printing as a ?. I have read around the subject (and learnt a little on the way) e.g. * <http://www.joelonsoftware.com/articles/Unicode.html> * <https://bugs.java.com/bugdatabase/view_bug?bug_id=4508058> * <http://www.ingrid.org/java/i18n/utf-16/> but still cannot get this working. Interestingly this works on my local PC (XP) but not on my Linux box. I have checked that my JDK supports the required charsets (they are standard, so this is no surprise) using: ``` System.out.println(java.nio.charset.Charset.availableCharsets()); ```
I suspect that either your file isn't *actually* encoded as ISO-8859-1, or System.out doesn't know how to print the character. I recommend that to check for the first, you examine the relevant byte in the file. To check for the second, examine the relevant character in the string, printing it out with ``` System.out.println((int) s.charAt(index)); ``` In both cases the result *should* be 244 decimal; 0xf4 hex. See [my article on Unicode debugging](http://pobox.com/~skeet/csharp/debuggingunicode.html) for general advice (the code presented is in C#, but it's easy to convert to Java, and the principles are the same). In general, by the way, I'd wrap the stream with an `InputStreamReader` with the right encoding - it's easier than creating new strings "by hand". I realise this may just be demo code though. EDIT: Here's a really easy way to prove whether or not the console will work: ``` System.out.println("Here's the character: \u00f4"); ```
Parsing the file as fixed-size blocks of bytes is not good --- what if some character has a byte representation that straddles across two blocks? Use an [`InputStreamReader`](http://java.sun.com/javase/6/docs/api/java/io/InputStreamReader.html) with the appropriate character encoding instead: ``` BufferedReader br = new BufferedReader( new InputStreamReader( new FileInputStream("myfile.csv"), "ISO-8859-1")); char[] buffer = new char[4096]; // character (not byte) buffer while (true) { int charCount = br.read(buffer, 0, buffer.length); if (charCount == -1) break; // reached end-of-stream String s = String.valueOf(buffer, 0, charCount); // alternatively, we can append to a StringBuilder System.out.println(s); } ``` Btw, remember to check that the unicode character can indeed be displayed correctly. You could also redirect the program output to a file and then compare it with the original file. As [Jon Skeet](https://stackoverflow.com/questions/498636/java-app-unable-to-read-iso-8859-1-encoded-file-correctly/498644#498644) suggests, the problem may also be console-related. Try [`System.console().printf(s)`](http://java.sun.com/javase/6/docs/api/java/io/Console.html) to see if there is a difference.
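A note on the straddling point: ISO-8859-1 itself is a single-byte encoding, so it can never straddle a block boundary, but any multi-byte encoding (UTF-8, say) can. A small Python sketch of both the failure and of how an incremental decoder (which is what a stream reader uses internally) copes:

```python
import codecs

# 'ô' in UTF-8 is a two-byte sequence; split it across two "blocks"
data = "ô".encode("utf-8")
first, second = data[:1], data[1:]

# Decoding each block independently breaks on the partial character
try:
    first.decode("utf-8")
    straddle_fails = False
except UnicodeDecodeError:
    straddle_fails = True

# An incremental decoder buffers the partial byte and decodes
# correctly across the block boundary
decoder = codecs.getincrementaldecoder("utf-8")()
result = decoder.decode(first) + decoder.decode(second, final=True)
```

Java's `InputStreamReader` does the equivalent buffering for you, which is why it is the safer choice.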
Java App : Unable to read iso-8859-1 encoded file correctly
[ "", "java", "encoding", "character-encoding", "iso-8859-1", "" ]
Given: Class has no fields, every variable is local. littleString was created by refactoring bigString in Eclipse: ``` public String bigString() { StringBuffer bob = new StringBuffer(); this.littleString(bob); return bob.toString(); } private void littleString(final StringBuffer bob) { bob.append("Hello, I'm Bob"); } ``` The method littleString should not be passing the StringBuffer back, but yet it is. What kind of Black Magic goes on here? This is breaking all rules of encapsulation that I know. I'm in shock, words fail me.
`littleString` isn't passing the object back -- it's just using the same object. Both the local variable `bob` in `bigString()` and the parameter `bob` in `littleString()` refer to the same object, so if you change one of those objects, the changes will appear instantaneously in the other *because they're both references to the same object*. The issue is that `StringBuffer`s are mutable and have internal state associated with them. Some types of objects (such as `String`s) are immutable, so you can safely pass them around as method parameters, and you know they won't ever get modified. Note that the addition of the `final` keyword doesn't help here -- it just makes sure that `bob` never gets assigned to refer to a different `StringBuffer` object.
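This aliasing isn't specific to Java; any language with mutable objects behaves the same way. A Python sketch of the same shape, using a list in place of the `StringBuffer`:

```python
def little_string(bob):
    # 'bob' here is another reference to the same mutable object
    bob.append("Hello, I'm Bob")

def big_string():
    bob = []              # plays the role of the StringBuffer
    little_string(bob)    # mutates the shared object; nothing is returned
    return "".join(bob)
```

`big_string()` sees the appended text because both names refer to one object, exactly as with the `StringBuffer` above.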
It's not passing anything back. It's modifying the StringBuffer you passed a reference to. Object references in Java are passed by value, so both variables refer to the same underlying object.
How is a StringBuffer passing data through voids with no fields in the Class?
[ "", "java", "stringbuffer", "" ]
In my current project I'm using two libraries where one is using log4net and the other NLog for its logging. I'm personally prefer NLog so it is used in my application as well. I'm not knowing much about log4net so I'm asking what would be the best way to programmatically forward all the messages from log4net to NLog. There is a [post about a log4net forwarder at the NLog forum](http://nlog-project.org/forum#nabble-td1685351) but it looks like no one had done this before.
Create a custom log4net appender that logs the messages to an NLog logger. This may at least be a solution if you just want to pass the log information to NLog instead of replacing all occurrences of log4net logging with NLog. Look [here](http://logging.apache.org/log4net/release/faq.html), [here](http://karlagius.com/2008/01/02/writing-a-custom-appender-for-log4net/) and [here](http://blog.joachim.at/?p=31)
Basically you'll need a log4net *appender* (`log4net.Appender.IAppender`) which would delegate all `DoAppend` calls to NLogs' `Logger` or `Target`.
forward from log4net to NLog
[ "", "c#", ".net", "logging", "log4net", "nlog", "" ]
I was wondering if there is a way to use php to return the values from a search without having to reload the whole webpage or using iframes or anything like that. I've tried searching for it but I always end up with AJAX and I was wondering if there is a PHP way for it...
I suggest you read up on AJAX and what it is, as it is exactly what you are describing. AJAX means generating a request on the browser with JavaScript, sending the request to a server, generating content with whatever technology you want (be it PHP, .NET, etc.) and returning it to the browser, without the page ever 'reloading'. That's all it is, and that's what you want. I recommend you check out something like [jQuery](http://www.jquery.com) as it is far and away the most popular JavaScript library. It makes doing AJAX requests a piece of cake.
AJAX is what you're looking for. It means using JavaScript (on the browser) to initiate a request to the server (which may be running PHP, or any other language).
PHP live updating
[ "", "php", "live", "" ]
Google Analytics tracks 'Time On Site'. How would you do that effectively? Does it actually track when the user closes the browser or navigates away? Thoughts?
I can't be 100% certain of course, but I would guess they use javascript and the onload and onunload events and/or setTimer to communicate with a web service via AJAX. This way they could figure out when users go to or leave a page on your site. Once the browser stops "pinging" the web service, it's assumed that they left your site. I'm sure there's some margin of error involved no matter how you do it, but you could get a pretty decent estimate that way.
Just set a function like this: ``` function track() { setTimeout(track, 1000); var now = new Date(); var now_string = now.getHours() + ":" + now.getMinutes() + ":" + now.getSeconds(); // ... whatever you want to do with this data } ``` In JavaScript that gives you a string saying HH:MM:SS (or H:M:S in case the hour/minute/second number has fewer than 2 digits), so just do a bit of math around it. Just put this running somewhere in your window, and by associating it with the session (guaranteeing this is in all pages) you get exactly how much time a user was on your page. Edit: had to remove a function I had there that was mine and isn't from JavaScript, sorry :p
How to determine Time On Site in Javascript?
[ "", "javascript", "analytics", "" ]
I am new to programming and I am using PHP and MySQL. I got an assignment to implement a hashtable in PHP. What I need to do is store items that a user collected and then display them. After doing some research on the internet, I will take the following steps to implement the hashtable; please correct me if I am wrong: 1. Set up the tables: -> Users Table: uid(int[5]), username(varchar[128]), item\_id(int[8], items\_id\_hash(int[50]) -> Items Table: item\_id(int[5]), item\_name(varchar[128]), items\_id\_hash(int[50]) 2. Create a hash function (how do I create a hash function? Create it myself or get one from the internet?) to convert a key into a hash value and then insert it into the database. E.g.: hash item\_id = 001 into hash value = (e.g.) 12345. Then insert into the users table. 3. To display/search: retrieve the hash values from the user, then compare them to the items table and display them. Questions: 1. Are my steps correct? 2. Where can I find a good PHP hash function? Can I use md5 or sha1 or a salt?
I think your idea of a hashtable is a little off. Hashtables break keys down into lists that are alike. For example: a hashtable based on the first letter of a name, so there would be 26 lists. Your hash is the first letter of the name, which then makes it quicker to search through. MD5 and SHA1 are used to derive hashes that verify that data has not been tampered with. They usually come in either 128-bit or 160-bit versions. So it takes X data and sends it through a hash to come up with a 128-bit alphanumeric string that should be the same no matter where it is done. This is usually a security thing. **EDIT:** Expanding on the question of how to derive keys. You can utilize a modulus of the data to create a key to use for the row. In the example, data % X where X is the total number of keys you would like to have. The issue with this is that X is difficult to choose; if you have 20 items, then making X 20 is feasible and makes it a quick search as each item has its own row. But if you have 1000 items, then doing % 1000 is NOT feasible. Doing something like X = 75 would work better for this.
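The first-letter bucketing described above, sketched in Python for illustration (the function names are mine):

```python
def bucket_for(name):
    # the "hash": just the first letter, giving up to 26 buckets
    return name[0].lower()

def insert(table, name):
    # each bucket holds a list, so lookalike keys land together
    table.setdefault(bucket_for(name), []).append(name)

table = {}
for name in ["Alice", "Adam", "Bob"]:
    insert(table, name)
# "Alice" and "Adam" collide into the same bucket's list; "Bob" gets his own
```

A search then only has to scan the one bucket's list instead of every item, which is the whole point of the structure.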
You have three main concerns: 1) The hashtable paradigm you want to choose: an open or a closed hash table. 2) The hashtable itself can be a simple array with key indexes, plus an array per bucket for collision cases. 3) You have to study your hash key generation algorithm (the `$hash = ord($string[$i]) + ($hash << 5) - $hash;` loop can be enough), but you can choose md5/sha too. If you know your key space, maybe you can use Unix gperf. Here is my hash table implementation: ``` <?php /** A brief but simple closed hash table class. Jorge Niedbalski R. <jnr@niedbalski.org> **/ class HashTable { public $HashTable = array(); public $HashTableSize; public function __construct($tablesize) { if ($tablesize) { $this->HashTableSize = $tablesize; } else { print "Unknown table size\n"; return -1; } } public function __destruct() { unset($this->HashTable); } public function generate_bucket($string) { $hash = 0; for ($i = 0; $i < strlen($string); $i++) { $hash = ord($string[$i]) + ($hash << 5) - $hash; } return (abs($hash) % $this->HashTableSize); } public function add($string, $associated_array) { $bucket = $this->generate_bucket($string); $entry = array( 'string' => $string, 'assoc_array' => $associated_array, ); // every bucket holds a list of entries, so a collision simply appends if (!isset($this->HashTable[$bucket])) { $this->HashTable[$bucket] = array(); } array_push($this->HashTable[$bucket], $entry); } public function delete($string, $attrname, $attrvalue) { $bucket = $this->generate_bucket($string); if (!isset($this->HashTable[$bucket])) { return -1; } foreach ($this->HashTable[$bucket] as $x => $entry) { if ($entry['string'] == $string && isset($entry['assoc_array'][$attrname]) && $entry['assoc_array'][$attrname] == $attrvalue) { unset($this->HashTable[$bucket][$x]); } } /** everything is OK **/ return 0; } public function search($string) { $resultArray = array(); $bucket = $this->generate_bucket($string); if (!isset($this->HashTable[$bucket])) { return -1; } foreach ($this->HashTable[$bucket] as $entry) { if (strcmp($entry['string'], $string) == 0) { array_push($resultArray, $entry); } } return $resultArray; } } $hash = new HashTable(16); $arr = array('nombre' => "jorge niedbalski"); $hash->add("astroza", $arr); $hash->add("astrozas", $arr); print_r($hash->search("astroza")); ?> ```
Steps in implementing hashtable in PHP and Mysql
[ "", "php", "mysql", "hashtable", "" ]
Are there any commonly usable annotations available? Similar to commons-lang? If not, have you seen any effective use of annotations (not built-in annotations) as part of any open source application development? I remember Mifos was using it for Transaction. Mohan
I think [Hibernate Validator](http://www.hibernate.org/hib_docs/validator/reference/en/html_single/index.html#d0e46) has really good and reusable annotations for any kind of validation. It is the basis for the reference implementation for JSR 303: Bean Validation.
The only non-standard annotations I've used more than once outside my testing project have been [WicketStuff Annotations](http://wicketstuff.org/confluence/display/STUFFWIKI/wicketstuff-annotation), which are very useful in their own context. Another interesting annotation set, which is also the basis for [JSR-305](http://jcp.org/en/jsr/detail?id=305), is [FindBugs' annotations](http://findbugs.sourceforge.net/manual/annotations.html), which may also prove useful in the future - we'll see how that goes.
Commonly reusable annotations or commons annotations?
[ "", "java", "annotations", "" ]
[Quickcheck](http://www.cs.chalmers.se/~rjmh/QuickCheck/) and its variants (even there is one in [Java](https://bitbucket.org/blob79/quickcheck)), seems to be interesting. However, apart from academic interest, is it really useful in a real application testing (Eg. a GUI application or Client/Server or even take StackOverflow itself)? Any experiences you had with similar test generators is appreciated.
Yes, well. Actually no, but I've studied under the man who originally developed QuickCheck and he's a really interesting guy. Back in 2004, we were forced to use QuickCheck to test our Haskell programs and it was a combination of good and bad. Mostly bad because Haskell was a bit daunting itself, but nevertheless wonderful when you got it working. John has since then perfected that which he wrote years back and actually helped Ericsson test their complex telecom hardware, and he found bugs in 20 million or so lines of code, reducing that to a mere three steps through his approach. He's a great speaker so it's always a joy listening to him present what he does so well, but all in all, what he did with QuickCheck was new to me. So I asked him what his interest was in bringing this to the market. He was open to the idea, but at the time his business (based around QuickCheck) was relatively new and so there were other areas he would focus on. This is now 2007. My point is, you could learn from QuickCheck even if you won't end up using it. But what is QuickCheck? It's a combinatorial testing framework and an interesting way to test programs. The people over at Microsoft Research have built [Pex](http://research.microsoft.com/en-us/projects/Pex/) which is sort of similar. Pex generates tests automatically by examining your IL. However, John would write a generator for possible input and test properties of a function. A property is something which can easily be tested and it's a lot more formal. e.g. reversing a list? Well, reversing a list is the same thing as splitting a list in two halves, reversing each half individually and then concatenating the two reversed halves in reverse order. ``` 1,2,3,4 // original 1,2 3,4 // split into A and B 2,1 4,3 // reverse A and B 4,3,2,1 // concat B and A ``` This is a great property to test with QuickCheck, called the specification, and the result is quite astonishing.
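For illustration, the reverse-list property can be checked QuickCheck-style with nothing more than a random generator. A Python sketch (a real QuickCheck also shrinks failing inputs, which this toy version skips):

```python
import random

def reverse_property(xs):
    # reverse(xs) == reverse(B) + reverse(A), where xs = A + B
    mid = len(xs) // 2
    a, b = xs[:mid], xs[mid:]
    return list(reversed(xs)) == list(reversed(b)) + list(reversed(a))

def quickcheck(prop, trials=200):
    """Throw random inputs at a property; return a counterexample or None."""
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        if not prop(xs):
            return xs          # counterexample found
    return None                # property held on every trial
```

The property is stated independently of any implementation of `reversed`, which is exactly what makes it a specification rather than a unit test.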
Pex is nice, but not as cool as QuickCheck. Pex simplifies things; QuickCheck does too, but it takes a lot of effort to write a good specification. The power of QuickCheck is that when it runs into a failure it will reduce the input which caused your test to fail to the smallest possible form, leaving you with a detailed description of what progression of state caused your test to fail. In comparison, other testing frameworks just try to break your code in a brute-force manner. This is made possible due to how you write your testing specification. QuickCheck relies on pseudo-randomness to invent input, and because of this it's capable of backtracking and finding really small inputs which do not pass your test. It's a lot more work to write QuickCheck properties but the end result is better testing. As John himself said, 70% of bugs are caught by unit testing, but it's that other 30% which causes your program to crash. QuickCheck is testing that last 30%.
I've done a real Haskell problem which involved a discrete event simulation. So I wrote a DES library based on the continuation monad, along with the equivalents to MVars and Channels. I needed to check that this worked properly, so I wrote a bunch of QuickCheck properties to demonstrate that, for instance, two streams of concurrent data written to a Channel would be correctly merged without dropping anything. I've also used QuickCheck to document and verify the properties in my [Ranged Sets](http://hackage.haskell.org/cgi-bin/hackage-scripts/package/Ranged-sets-0.2.0) and [Decimal](http://hackage.haskell.org/cgi-bin/hackage-scripts/package/Decimal) libraries. In my experience QuickCheck is sometimes great. If you can summarise an important property in a concise way, although the algorithm that delivers that property is hairy, then QuickCheck is a huge win. On the other hand I often find that the algorithm is equivalent to the property I want to verify. In that case I look for simpler properties. For instance, suppose function "foo" is supposed to be non-strictly monotonic. Then you can write ``` prop_fooMonotonic x y = (x > y) ==> (foo x >= foo y) ```
Have you used Quickcheck in a real project
[ "", "java", "haskell", "testing", "quickcheck", "" ]
I'm trying to do some benchmarking of JVMs running on various hardware and OS platforms. I've created an algorithm to exercise the parts of the JVM I'm interested in and intend to run this algorithm many times to find a decent average. When I run the benchmark, I find that the first run is significantly longer than subsequent runs: ``` 132ms 86ms 77ms 89ms 72ms ``` My suspicion is that classes are loaded lazily, putting a large overhead on the first run. While this is indeed a feature that I assume is unique to each JVM, it's not one I'm interested in at this point. Is there a standard command line option or property to eagerly load classes? or does anyone have any other theories?
The simplest thing to do is ignore the first run. (If that is a valid thing to do) Note: if you run the same code 10,000 times, it will compile the code further and you get better results, so you might want to ignore the first 10K results for some micro-benchmarks. Some JVMs support eager loading but I don't think Sun's JVM does. JWrapper supports AOT <https://www.jwrapper.com/features>
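The ignore-the-warm-up idea is harness-agnostic; here is a sketch of a timing loop that discards the first runs, written in Python for brevity (the same shape is easy to reproduce in a Java harness):

```python
import time

def benchmark(fn, runs=10, warmup=2):
    """Average wall-clock time of fn(), discarding the first `warmup` runs
    so lazy class loading / JIT-style warm-up doesn't skew the result."""
    kept = []
    for i in range(runs):
        start = time.perf_counter()
        fn()
        elapsed = time.perf_counter() - start
        if i >= warmup:
            kept.append(elapsed)
    return sum(kept) / len(kept)
```

With the sample timings from the question, dropping the 132ms first run brings the average down to roughly the steady-state 72-89ms band.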
If you want to force the classes to be loaded do something like this: ``` public class Main { static { loadClasses(); } public static void main(final String[] argv) { // whatever } private static void loadClasses() { final String[] classesToLoad; // even better, read them from a file and pass the filename to this method classesToLoad = new String[] { "foo.bar.X", "foo.bar.Y", }; for(final String className : classesToLoad) { try { // load the class Class.forName(className); } catch(final ClassNotFoundException ex) { // do something that makes sense here ex.printStackTrace(); } } } } ```
Eager java class loading
[ "", "java", "" ]
I have a table with a charge/credit column: ``` Item | PriceVal | CostVal | CHARGE_CODE 1 5 3 CH 2 8 5 CH 1 -5 -3 CR 3 7 1 CH 4 15 10 CH 1 5 3 CH ``` I've got the query I need to get the NET price and cost, but I'm also interested in the NET charges. Right now I have: ``` SELECT Item, SUM(PriceVal), SUM(CostVal) FROM Table GROUP BY Item ``` How do I get another column with the value ``` COUNT(SUM(CHARGE_CODE=CH)-SUM(CHARGE_CODE=CR)) ``` I'm at a loss.
count() is going to count one for every value that's not null, so I don't think that's exactly what you want. Take the count out and just take the ``` sum(case when charge_code = 'CH' then costval else 0 end) - sum(case when charge_code = 'CR' then costval else 0 end) ```
Since you have the dollar values entered as negatives in the table already, you can use the simple formula: ``` select Item, sum(PriceVal), sum(CostVal), sum(PriceVal-CostVal) from Table group by Item ``` I don't believe you should be subtracting the credit items as they're already negative. If you really do want want the net **count** of transactions: ``` select Item, sum(PriceVal), sum(CostVal), sum(case when charge_code = 'CH' then 1 else 0 end) - sum(case when charge_code = 'CR' then -1 else 0 end) from Table group by Item ``` or, if there are only two charge codes, substitute: ``` sum(case when charge_code = 'CH' then 1 else -1 end) ``` for the last column.
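Either answer's case-expression trick is easy to sanity-check against the question's sample data, for example with an in-memory SQLite database from Python:

```python
import sqlite3

# Sample data from the question, loaded into an in-memory SQLite table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Item INTEGER, PriceVal INTEGER,"
             " CostVal INTEGER, charge_code TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", [
    (1, 5, 3, 'CH'), (2, 8, 5, 'CH'), (1, -5, -3, 'CR'),
    (3, 7, 1, 'CH'), (4, 15, 10, 'CH'), (1, 5, 3, 'CH'),
])

# Net sums plus the two-charge-code form of the net transaction count
rows = conn.execute("""
    SELECT Item, SUM(PriceVal), SUM(CostVal),
           SUM(CASE WHEN charge_code = 'CH' THEN 1 ELSE -1 END)
    FROM t GROUP BY Item
""").fetchall()
```

For Item 1 the CH/CR/CH rows net out to sums of 5 and 3 and a net count of 1, matching the cancellation the question describes.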
SQL - Count( ) issue
[ "", "sql", "sql-server", "t-sql", "" ]
How can the following be accomplished in Magento? * Display a "Hello World" message using a controller/view/model approach. So, if I went to `http://example.com/myController` it would show the string 'Hello World'. Being able to show this string within the template of my website (for example, the header, footer, etc.) will be a bonus. * How do I add a method to this controller (or a new controller if necessary), which interacts with a model, and performs the query `Select * FROM articles where id='10'` and returns the row (containing the columns `id, title, content`) to the controller? And then use the controller to include a view, which would display this row. So going to `http://example.com/myController/show_row` (or something similar) would display the row within a view. (No need to be fancy, just a `echo $row->id;` or something similar would work.) Any other information about Magento's code structure will also be very helpful.
First and foremost, I highly recommend you buy the [PDF/E-Book from PHP Architect](http://www.phparch.com/books/phparchitects-guide-to-programming-with-magento/). It's US$20, but is the only straightforward "Here's how Magento works" resource I've been able to find. I've also started writing [Magento tutorials at my own website](http://alanstorm.com/category/magento). Second, if you have a choice, and aren't an experienced programmer or don't have access to an experienced programmer (ideally in PHP *and* Java), **pick another cart**. Magento is well engineered, but it was engineered to be a shopping cart solution that other programmers can build modules on top of. It was not engineered to be easily understood by people who are smart, but aren't programmers. Third, Magento MVC is very different from the [Ruby on Rails](http://en.wikipedia.org/wiki/Ruby_on_Rails), [Django](http://en.wikipedia.org/wiki/Django_%28web_framework%29), [CodeIgniter](http://en.wikipedia.org/wiki/Codeigniter#CodeIgniter), [CakePHP](http://en.wikipedia.org/wiki/CakePHP), etc. MVC model that's popular with PHP developers these days. I think it's based on the [Zend](http://en.wikipedia.org/wiki/Zend_Framework) model, and the whole thing is very Java OOP-like. There's **two** controllers you need to be concerned about. The module/frontName controller, and then the MVC controller. Fourth, the Magento application itself is built using the same module system you'll be using, so poking around the core code is a useful learning tactic. Also, a lot of what you'll be doing with Magento is **overriding** existing classes. What I'm covering here is **creating** new functionality, not overriding. Keep this in mind when you're looking at the code samples out there. I'm going to start with your first question, showing you how to setup a controller/router to respond to a specific URL. This will be a small novel. I might have time later for the model/template related topics, but for now, I don't. 
I will, however, briefly speak to your SQL question. Magento uses an [EAV](http://en.wikipedia.org/wiki/Entity-Attribute-Value_model) database architecture. Whenever possible, try to use the model objects the system provides to get the information you need. I know it's all there in the SQL tables, but it's best not to think of grabbing data using raw SQL queries, or you'll go mad. Final disclaimer. I've been using Magento for about two or three weeks, so caveat emptor. This is an exercise to get this straight in my head as much as it is to help Stack Overflow. ## Create a module All additions and customizations to Magento are done through modules. So, the first thing you'll need to do is create a new module. Create an XML file in `app/etc/modules` named as follows ``` cd /path/to/store/app touch etc/modules/MyCompanyName_HelloWorld.xml ``` ``` <?xml version="1.0"?> <config> <modules> <MyCompanyName_HelloWorld> <active>true</active> <codePool>local</codePool> </MyCompanyName_HelloWorld> </modules> </config> ``` MyCompanyName is a unique namespace for your modifications; it doesn't have to be your company's name, but that is the convention recommended by Magento. `HelloWorld` is the name of your module. ## Clear the application cache Now that the module file is in place, we'll need to let Magento know about it (and check our work). In the admin application 1. Go to System->Cache Management 2. Select Refresh from the All Cache menu 3. Click Save Cache settings Now, we make sure that Magento knows about the module 1. Go to System->Configuration 2. Click Advanced 3. In the "Disable modules output" setting box, look for your new module named "MyCompanyName\_HelloWorld" If you can live with the performance slowdown, you might want to turn off the application cache while developing/learning. Nothing is more frustrating than forgetting to clear out the cache and wondering why your changes aren't showing up.
## Setup the directory structure Next, we'll need to setup a directory structure for the module. You won't need all these directories, but there's no harm in setting them all up now. ``` mkdir -p app/code/local/MyCompanyName/HelloWorld/Block mkdir -p app/code/local/MyCompanyName/HelloWorld/controllers mkdir -p app/code/local/MyCompanyName/HelloWorld/Model mkdir -p app/code/local/MyCompanyName/HelloWorld/Helper mkdir -p app/code/local/MyCompanyName/HelloWorld/etc mkdir -p app/code/local/MyCompanyName/HelloWorld/sql ``` And add a configuration file ``` touch app/code/local/MyCompanyName/HelloWorld/etc/config.xml ``` and inside the configuration file, add the following, which is essentially a "blank" configuration. ``` <?xml version="1.0"?> <config> <modules> <MyCompanyName_HelloWorld> <version>0.1.0</version> </MyCompanyName_HelloWorld> </modules> </config> ``` Oversimplifying things, this configuration file will let you tell Magento what code you want to run. ## Setting up the router Next, we need to setup the module's routers. This will let the system know that we're handling any URLs in the form of ``` http://example.com/magento/index.php/helloworld ``` So, in your configuration file, add the following section. ``` <config> <!-- ... --> <frontend> <routers> <!-- the <helloworld> tagname appears to be arbitrary, but by convention is should match the frontName tag below--> <helloworld> <use>standard</use> <args> <module>MyCompanyName_HelloWorld</module> <frontName>helloworld</frontName> </args> </helloworld> </routers> </frontend> <!-- ... --> </config> ``` What you're saying here is "any URL with the frontName of helloworld ... ``` http://example.com/magento/index.php/helloworld ``` should use the frontName controller MyCompanyName\_HelloWorld". So, with the above configuration in place, when you load the helloworld page above, you'll get a 404 page. That's because we haven't created a file for our controller. Let's do that now. 
``` touch app/code/local/MyCompanyName/HelloWorld/controllers/IndexController.php ``` Now try loading the page. Progress! Instead of a 404, you'll get a PHP/Magento exception ``` Controller file was loaded but class does not exist ``` So, open the file we just created, and paste in the following code. The name of the class needs to be based on the name you provided in your router. ``` <?php class MyCompanyName_HelloWorld_IndexController extends Mage_Core_Controller_Front_Action{ public function indexAction(){ echo "We're echoing just to show that this is what's called, normally you'd have some kind of redirect going on here"; } } ``` What we've just set up is the module/frontName controller. This is the default controller and the default action of the module. If you want to add controllers or actions, you have to remember that the first three parts of a Magento URL are immutable; they will always follow this pattern: `http://example.com/magento/index.php/frontName/controllerName/actionName` So if you want to match this URL ``` http://example.com/magento/index.php/helloworld/foo ``` you will have to have a FooController, which you can create this way: ``` touch app/code/local/MyCompanyName/HelloWorld/controllers/FooController.php ``` ``` <?php class MyCompanyName_HelloWorld_FooController extends Mage_Core_Controller_Front_Action{ public function indexAction(){ echo 'Foo Index Action'; } public function addAction(){ echo 'Foo add Action'; } public function deleteAction(){ echo 'Foo delete Action'; } } ``` Please note that the default controller IndexController and the default action indexAction can be implicit, but they have to be explicit if something comes after them. So `http://example.com/magento/index.php/helloworld/foo` will match the controller FooController and the action indexAction, and NOT the action fooAction of the IndexController.
If you want to have a fooAction in the IndexController, you then have to call that controller explicitly, like this: `http://example.com/magento/index.php/helloworld/index/foo` because the second part of the URL is, and will always be, the controllerName. This behaviour is inherited from the Zend Framework bundled with Magento. You should now be able to hit the following URLs and see the results of your echo statements ``` http://example.com/magento/index.php/helloworld/foo http://example.com/magento/index.php/helloworld/foo/add http://example.com/magento/index.php/helloworld/foo/delete ``` So, that should give you a basic idea of how Magento dispatches to a controller. From here I'd recommend poking at the existing Magento controller classes to see how models and the template/layout system should be used.
I've been wrestling with Magento for the last month or so and I'm still trying to figure it out. So this is a case of the blind leading the blind. There's little in the way of documentation and the forum/wiki is chaotic at best. Not only that, but there are several solutions out there that are either outdated or far from optimal. I'm not sure if you have a project or are just trying to figure it out, but it's probably easier if you start with modifying existing functionality as opposed to creating something completely new. For that I'd definitely go with the "Recommended articles for developers" in the wiki. The new payment method one was a real eye-opener. For debugging I'd definitely recommend [using FirePHP](https://magento.stackexchange.com/questions/181/how-to-configure-firephp) and looking at your HTML source when something goes wrong. The ole echo debug method doesn't really work all that well. The general architecture is so mind-numbingly complex that even if I completely understood it, I'd need to write a book to cover it. The best I can do is give you advice I wish someone had given me when I first started... Stay away from core files. Don't modify them; instead write your own module and override what you need. Magento uses config files consisting of XML to decide what it needs to do. In order to get it to run your own stuff as opposed to core functionality, you need the correct XML. Unfortunately there is no guide on how to build your XML; you need to look at examples and do some serious testing. To complicate things, the content of these files is largely case-sensitive. However, if you master these you can override any part of the basic functionality, which makes for a very powerful system. Magento uses methods like `Mage::getModel('mymodel')`, `Mage::getSingleton('mysingleton')`, `Mage::helper('myhelper')` to return objects of certain classes. It finds these by default in its core namespace.
If you want it to use your own, you need to override these in your `config.xml` file. The name of your classes must correspond to the folder they're in. A lot of the objects in Magento ultimately extend something called a `Varien_Object`. This is a general-purpose class (kind of like a Swiss army knife) and its purpose in life is to allow you to define your own methods/variables on the fly. For example, you'll see it used as a glorified array to pass data from one method to another. During development make sure your caching is disabled. It'll make Magento excruciatingly slow, but it'll save you a lot of head trauma (from banging your head on your desk). You'll see `$this` being used a lot. It means a different class depending on what file you see it in. `get_class($this)` is your friend, especially in conjunction with FirePHP. Jot things down on paper. A lot. There are countless little factoids that you're gonna need 1-2 days after you encounter them. Magento loves OO. Don't be surprised if tracing a method takes you through 5-10 different classes. Read the designer's guide [here](http://www.magentocommerce.com/design_guide). It's meant mostly for graphics designers, but you *need* it to understand where and why the output from your module will end up. For that, don't forget to turn on "Template path hints" in the developer section of the admin panel. There's more, but I'll stop here before this turns into a dissertation.
How do I create a simple 'Hello World' module in Magento?
[ "", "php", "magento", "controller", "magento-1.9", "" ]
My application is related to the stock market. I have a feed that is constantly updating an object called Price. Price has a HashMap that stores the security code (String) and price (Double). Every time a new price comes in, this object is updated. The application is supposed to scan the prices for large moves. I have a separate class called Poller which polls the Price object every second and takes a snapshot of the prices. The snapshot is a HashMap as described above. I then want to store this HashMap of prices along with a pollNumber in another HashMap, so that I can later pass the pollNumber and get out the prices at the time corresponding to that pollNumber. But instead I get all the previous prices being overwritten, and output similar to that below. 0 : {MSFT=17.67, AAPL=93.85, GOOG=333.86} {0={MSFT=17.67, AAPL=93.85, GOOG=333.86}} 1 : {MSFT=17.64, AAPL=93.85, GOOG=334.02} {0={MSFT=17.64, AAPL=93.85, GOOG=334.02}, 1={MSFT=17.64, AAPL=93.85, GOOG=334.02}} 2 : {MSFT=17.64, AAPL=93.85, GOOG=334.08} {0={MSFT=17.64, AAPL=93.85, GOOG=334.08}, 1={MSFT=17.64, AAPL=93.85, GOOG=334.08}, 2={MSFT=17.64, AAPL=93.85, GOOG=334.08}} 3 : {MSFT=17.65, AAPL=93.83, GOOG=334.08} {0={MSFT=17.65, AAPL=93.83, GOOG=334.08}, 1={MSFT=17.65, AAPL=93.83, GOOG=334.08}, 2={MSFT=17.65, AAPL=93.83, GOOG=334.08}, 3={MSFT=17.65, AAPL=93.83, GOOG=334.08}} 4 : {MSFT=17.64, AAPL=93.83, GOOG=334.07} {0={MSFT=17.64, AAPL=93.83, GOOG=334.07}, 1={MSFT=17.64, AAPL=93.83, GOOG=334.07}, 2={MSFT=17.64, AAPL=93.83, GOOG=334.07}, 3={MSFT=17.64, AAPL=93.83, GOOG=334.07}, 4={MSFT=17.64, AAPL=93.83, GOOG=334.07}} As you can see, when I print the entire HashMap, the price series that should all be different are identical. Basically the .put() function is overwriting the old entries somehow. If you know how to fix the behaviour so that the HashMap (the big one) gets a new price series entry each time, please let me know.
--- ``` public class Poller { private final int period=1000; private final int delay=1000; private static int pollNumber=0; private static HashMap<Integer,HashMap<String,Double>> polledPrice = new HashMap<Integer, HashMap<String,Double>>(); public void pollPrice(){ Timer timer = new Timer(); timer.scheduleAtFixedRate(new TimerTask() { public void run() { // System.out.println(Price.getPricesMap()); System.out.println(pollNumber+" : "+Price.getPricesMap()); polledPrice.put(pollNumber, Price.getPricesMap()); System.out.println(polledPrice); pollNumber = pollNumber+1; Time atime = new Time(); atime.addToTimeMap(pollNumber); } }, delay, period); } } ``` ---
You need to take a copy of the HashMap; otherwise you are just storing the same Map over and over again, which of course keeps getting overwritten. Use this line: ``` polledPrice.put(pollNumber, new HashMap<String, Double>(Price.getPricesMap())); ``` That is the simplest fix.
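To make the shared-reference problem concrete, here is a minimal, self-contained sketch (made-up prices, no timer) showing the bug and the copy fix side by side:

```java
import java.util.HashMap;
import java.util.Map;

public class SnapshotDemo {
    public static void main(String[] args) {
        // The "live" map that the feed keeps mutating in place.
        Map<String, Double> live = new HashMap<String, Double>();
        Map<Integer, Map<String, Double>> polled = new HashMap<Integer, Map<String, Double>>();

        live.put("MSFT", 17.67);
        polled.put(0, live);                              // bug: stores a reference to the live map
        polled.put(1, new HashMap<String, Double>(live)); // fix: stores an independent copy

        live.put("MSFT", 17.64);                          // the feed updates the price

        System.out.println(polled.get(0).get("MSFT"));    // 17.64 -- snapshot silently changed
        System.out.println(polled.get(1).get("MSFT"));    // 17.67 -- copy preserved the old value
    }
}
```

Run as-is, this prints `17.64` then `17.67`: the snapshot stored by reference was changed by the feed, while the copied one was not. Every entry in the question's output is identical because every pollNumber maps to the very same object.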
The problem is that `Price.getPricesMap()` is returning a reference to the same object each time. This sounds like a bad bit of API design to me - or at least one that ought to be documented. It can be fixed either by making the copy in the client code ([as suggested by Nick Fortescue](https://stackoverflow.com/questions/576912/hashmap-gets-over-written-every-time-i-use-put/576916#576916)) or by changing `Price`. The latter could either create an immutable map each time there was an actual change, or return a copy on each call to `getPricesMap()`.
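A sketch of that second option: fixing it inside `Price` itself so every caller gets an independent snapshot. The class below is an assumed, simplified stand-in for the real one (the actual feed plumbing is omitted):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class Price {
    private static final Map<String, Double> prices = new HashMap<String, Double>();

    public static synchronized void update(String symbol, double price) {
        prices.put(symbol, price);
    }

    // Return an unmodifiable copy so no caller can ever see -- or cause --
    // later mutations of the internal map.
    public static synchronized Map<String, Double> getPricesMap() {
        return Collections.unmodifiableMap(new HashMap<String, Double>(prices));
    }

    public static void main(String[] args) {
        update("MSFT", 17.67);
        Map<String, Double> snapshot = getPricesMap();
        update("MSFT", 17.64);
        System.out.println(snapshot.get("MSFT")); // still 17.67
    }
}
```

With this in place, the Poller's existing `polledPrice.put(pollNumber, Price.getPricesMap())` call works unchanged, and any code that tries to mutate a snapshot fails fast with an `UnsupportedOperationException`.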
HashMap gets over written every time I use .put()
[ "", "java", "hashmap", "" ]
I want to intercept all method invocations to some class MyClass so I can react to some setter invocations. I tried to use dynamic proxies, but as far as I know, this only works for classes implementing some interface. But MyClass does not have such an interface. Is there any other way, besides implementing a wrapper class that delegates all invocations to a member (an instance of MyClass), or besides using AOP?
As you note, you cannot use JDK dynamic proxies (no interface), but using [Spring](http://static.springframework.org/spring/docs/2.5.x/reference/aop.html) and CGLIB (JAR included with Spring), you can do the following: ``` public class Foo { public void setBar() { throw new UnsupportedOperationException("should not go here"); } public void redirected() { System.out.println("Yiha"); } } Foo foo = new Foo(); ProxyFactory pf = new ProxyFactory(foo); pf.addAdvice(new MethodInterceptor() { public Object invoke(MethodInvocation mi) throws Throwable { if (mi.getMethod().getName().startsWith("set")) { Method redirect = mi.getThis().getClass().getMethod("redirected"); redirect.invoke(mi.getThis()); } return null; } }); Foo proxy = (Foo) pf.getProxy(); proxy.setBar(); // prints "Yiha" ```
If you are prepared to do something really ugly, have a look at: <http://docs.oracle.com/javase/7/docs/technotes/guides/jpda/> Basically the debugger interface ought to allow you to attach like a debugger, and hence intercept calls. Bear in mind I think this is a **really** bad idea, but you asked if it was possible.
How do I intercept a method invocation with standard java features (no AspectJ etc)?
[ "", "java", "reflection", "methods", "" ]
I'm shopping for an ORM tool. I'm agonizing over the choice between CodeSmith (which is currently available at a substantial discount) and a true ORM tool. LINQ to SQL is off my list; SubSonic 2.x is off the list (I don't want to invest in that dead end knowing that SubSonic 3.0 is coming). NHibernate seems like overkill, as does LLBLGen. I've only briefly evaluated EF but don't quickly get a warm and fuzzy feeling from it. Am I crazy thinking that CodeSmith is a rational alternative to off-the-shelf ORMs? Will CodeSmith pay for itself in other ways? Please note that I am in no way related to any vendors and this isn't a cheap-shot SO question just for the sake of generating product noise! I am looking for honest advice and opinions about CodeSmith as an ORM tool (with its provided, or community-available, templates).
In fact, Hibernate is a good ORM tool. But it stops there! CodeSmith's capabilities go beyond just relational mapping! I use CodeSmith to generate some UI forms, business layers (templates), data access layers, patterns, and so on. But to work with CodeSmith, you may need good experience with system design, or you can use their templates, which I don't like to use but do like as examples. The CodeSmith approach has one particular drawback: you have to design your system around the database implementation first. Nowadays, with an object-analysis approach, people succeed in implementing business logic and entities before any database implementation exists - the database-first approach ignores this. The decision is hard; I've constantly read important names such as Scott W. Ambler, Kent Beck, Robert C. Martin and people from The Pragmatic Programmers series who recommend an ORM tool to speed up development. They say that ORM tool developers take care of all the database issues (pooling, connections, database vendor specifics, etc.), so when we design data access layers ourselves we have to consider all those aspects too. I believe that these ORM tools come with some overhead of their own. I don't know yet how these tools would behave in low-budget projects (I mean without good hosting servers, or on shared resources). I've seen inexperienced developers not taking this into account as they try to evangelise their beloved tools. But in Java projects, Hibernate is already a widespread and well-known tool. I have no doubt that great projects have been delivered using this technology, and once again Java developers may need to teach us (.NET developers) how to build great solutions. (Sorry, we have to admit it.) The only thing I would recommend is to consider your context. Are you building a new system? Do you need to work with patterns? Have you ever tried considering a code generator and an ORM tool together?
I do prefer CodeSmith because I generate entire solutions at once, not just the data access layer. Code generation is very important, and it is not for nothing that Microsoft imitated the CodeSmith approach in Visual Studio 2008 and later. Good luck
Code Smith is not an ORM, it's just a code generator IDE. You can generate a DAL using Code Smith based off your database, but that would defeat (one of) the purposes of using an ORM, which is basically that it generates the DAL dynamically so you don't have to write the code. If you're really trying to compare the two, then maybe you might get some benefit using Code Smith because you would have absolute control over the code that gets generated, but I'm not sure that benefit would outweigh the drawback of what could turn out to be spending months writing the Code Smith templates to generate a DAL based off your database. And then you have to consider what happens when you make a change to your database: you will most likely have to run Code Smith and rebuild every time you do so. A good ORM will let you configure your database changes in its schema mapping and generate things dynamically, so you don't have to worry (as much) about making changes to the database.
Is Codesmith a viable ORM tool (or should I stick with a true ORM)
[ "", "c#", "orm", "codesmith", "" ]
So I had this working last week. At least, I thought I did! [DataGridView Update](https://stackoverflow.com/questions/548091/datagridview-update) Then I start working on the project again today and am getting > Update requires a valid UpdateCommand when passed DataRow collection > with modified rows. On ``` scDB.SSIS_Configurations_StagingDataTable table = (scDB.SSIS_Configurations_StagingDataTable)stagingGrid.DataSource; myStagingTableAdapter.Update(table); ``` The `StagingTableAdapter` has an additional query which takes 'filter' as a parameter. That was used to fill the `DataGridView`. In the wizard for creating that query I see 'update was generated'. I see that most posts with this error require that an update statement be generated with a command builder. What do I do?
The error is quite literal: The Adapter needs a valid SQL Update statement. Dataset designers and CommandBuilders will generate these for you, but there is nothing wrong with hand-crafting a bit of SQL either. Anyway, you'll have to verify (debugger) that the Update statement is still configured and what it actually is. It could be more of a SQL than a C# problem. Edit: the Command Builder tools will only handle straight, single table, Select statements. Use a Join or anything fancy and you're on your own.
This message will also be displayed when you do not have a primary key defined on the table you are updating.
Update requires a valid UpdateCommand when passed DataRow collection with modified rows
[ "", "c#", ".net", "tableadapter", "" ]
This will surely be an easy one but it is really bugging me. I have a script that reads in a webpage and uses [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/) to parse it. From the *soup* I extract all the links as my final goal is to print out the link.contents. All of the text that I am parsing is ASCII. I know that Python treats strings as unicode, and I am sure this is very handy, just of no use in my wee script. Every time I go to print out a variable that holds 'String' I get `[u'String']` printed to the screen. Is there a simple way of getting this back into just ascii or should I write a regex to strip it?
`[u'ABC']` would be a one-element list of unicode strings. [Beautiful Soup always produces Unicode](http://www.crummy.com/software/BeautifulSoup/documentation.html#Beautiful%20Soup%20Gives%20You%20Unicode,%20Dammit). So you need to convert the list to a single unicode string, and then convert that to ASCII. I don't know exactly how you got the one-element lists; the contents member would be a list of strings and tags, which is apparently not what you have. Assuming that you really always get a list with a single element, and that your text is really *only* ASCII, you would use this: ``` soup[0].encode("ascii") ``` However, please double-check that your data is really ASCII. This is pretty rare. Much more likely it's latin-1 or utf-8. ``` soup[0].encode("latin-1") soup[0].encode("utf-8") ``` Or you can ask Beautiful Soup what the original encoding was and get the text back in that encoding: ``` soup[0].encode(soup.originalEncoding) ```
You probably have a list containing one unicode string. The `repr` of this is `[u'String']`. You can convert this to a list of byte strings using any variation of the following: ``` # Functional style. print map(lambda x: x.encode('ascii'), my_list) # List comprehension. print [x.encode('ascii') for x in my_list] # Interesting if my_list may be a tuple or a string. print type(my_list)(x.encode('ascii') for x in my_list) # What do I care about the brackets anyway? print ', '.join(repr(x.encode('ascii')) for x in my_list) # That's actually not a good way of doing it. print ' '.join(repr(x).lstrip('u')[1:-1] for x in my_list) ```
Python string prints as [u'String']
[ "", "python", "unicode", "ascii", "" ]
How would I generate a random date that has to be between two other given dates? The function's signature should be something like this: ``` random_date("1/1/2008 1:30 PM", "1/1/2009 4:50 AM", 0.34) ^ ^ ^ date generated has date generated has a random number to be after this to be before this ``` and would return a date such as: `2/4/2008 7:20 PM`
Convert both strings to timestamps (in your chosen resolution, e.g. milliseconds, seconds, hours, days, whatever), subtract the earlier from the later, multiply your random number (assuming it is distributed in the `range [0, 1]`) with that difference, and add again to the earlier one. Convert the timestamp back to date string and you have a random time in that range. Python example (output is almost in the format you specified, other than `0` padding - blame the American time format conventions): ``` import random import time def str_time_prop(start, end, time_format, prop): """Get a time at a proportion of a range of two formatted times. start and end should be strings specifying times formatted in the given format (strftime-style), giving an interval [start, end]. prop specifies how a proportion of the interval to be taken after start. The returned time will be in the specified format. """ stime = time.mktime(time.strptime(start, time_format)) etime = time.mktime(time.strptime(end, time_format)) ptime = stime + prop * (etime - stime) return time.strftime(time_format, time.localtime(ptime)) def random_date(start, end, prop): return str_time_prop(start, end, '%m/%d/%Y %I:%M %p', prop) print(random_date("1/1/2008 1:30 PM", "1/1/2009 4:50 AM", random.random())) ```
``` from random import randrange from datetime import timedelta def random_date(start, end): """ This function will return a random datetime between two datetime objects. """ delta = end - start int_delta = (delta.days * 24 * 60 * 60) + delta.seconds random_second = randrange(int_delta) return start + timedelta(seconds=random_second) ``` The precision is seconds. You can increase precision up to microseconds, or decrease to, say, half-hours, if you want. For that just change the last line's calculation. example run: ``` from datetime import datetime d1 = datetime.strptime('1/1/2008 1:30 PM', '%m/%d/%Y %I:%M %p') d2 = datetime.strptime('1/1/2009 4:50 AM', '%m/%d/%Y %I:%M %p') print(random_date(d1, d2)) ``` output: ``` 2008-12-04 01:50:17 ```
Generate a random date between two other dates
[ "", "python", "datetime", "random", "" ]
I would like to convert some HTML characters back to text using Java Standard Library. I was wondering whether any library would achieve my purpose? ``` /** * @param args the command line arguments */ public static void main(String[] args) { // TODO code application logic here // "Happy & Sad" in HTML form. String s = "Happy &amp; Sad"; System.out.println(s); try { // Change to "Happy & Sad". DOESN'T WORK! s = java.net.URLDecoder.decode(s, "UTF-8"); System.out.println(s); } catch (UnsupportedEncodingException ex) { } } ```
I think the Apache Commons Lang library's `StringEscapeUtils.unescapeHtml3()` and `unescapeHtml4()` methods are what you are looking for. See <https://commons.apache.org/proper/commons-text/javadocs/api-release/org/apache/commons/text/StringEscapeUtils.html>.
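If pulling in a library for a handful of entities feels heavy, a tiny hand-rolled decoder covers the most common ones. This is only a sketch - it handles just five entities and ignores numeric character references (other than `&#39;`) and the hundreds of other named entities, which is exactly the gap `StringEscapeUtils` fills. Note that `&amp;` must be replaced last, otherwise already-escaped text such as `&amp;lt;` would be double-decoded:

```java
public class EntityDecoder {
    // Decode a few common HTML entities; &amp; deliberately goes last
    // so that e.g. "&amp;lt;" decodes to "&lt;" and not to "<".
    public static String decodeBasic(String s) {
        return s.replace("&lt;", "<")
                .replace("&gt;", ">")
                .replace("&quot;", "\"")
                .replace("&#39;", "'")
                .replace("&amp;", "&");
    }

    public static void main(String[] args) {
        System.out.println(decodeBasic("Happy &amp; Sad")); // Happy & Sad
    }
}
```

For anything beyond trivial input, though, the library (or jsoup, as the other answer suggests) is the safer choice.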
Here you just have to add the jsoup jar file to your application's lib folder and then use this code. ``` import org.jsoup.Jsoup; public class Encoder { public static void main(String args[]) { String s = Jsoup.parse("&lt;Fran&ccedil;ais&gt;").text(); System.out.print(s); } } ``` Link to download jsoup: <http://jsoup.org/download>
Convert HTML Character Back to Text Using Java Standard Library
[ "", "java", "html", "html-entities", "" ]
I have a set of key/values, e.g. orange=123, banana=4, apple=567. How can I store these key/values in a javascript object such that I can: 1. Retrieve a value via lookup, e.g. set["orange"] should return 123, and 2. Iterate over the set ***in the order in which the key/value pairs were added***. It seems that for 1. an object literal would be suitable but the iteration order is not guaranteed, and that for 2. an array of key/value pairs (object literals) would provide an iteration order but not the ability to look up a value based on a key. Edit: Thanks for all the answers - is this problem not a common one? Do libraries like jQuery not include support for this kind of type?
Both of your assumptions are correct. Object literals are not guaranteed to return keys in any order. The only way to do this without spinning your own type is to maintain an ordered list of keys: ``` var obj = { orange:123, banana:4, apple:567 } var keys = ['orange', 'banana', 'apple']; for (var i=0; i<keys.length; i++){ var value = obj[keys[i]]; } ``` Ugly, I know.
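The two requirements - O(1) lookup and insertion-order iteration - can be wrapped in a small helper so the key array maintains itself. This is just a sketch in the same pre-ES5 style as the rest of this thread; all names are made up:

```javascript
// A tiny ordered map: a plain object gives O(1) lookup, a parallel
// array remembers insertion order.
function OrderedMap() {
  this.keys = [];
  this.values = {};
}
OrderedMap.prototype.set = function (key, value) {
  if (!this.values.hasOwnProperty(key)) {
    this.keys.push(key); // remember first-insertion order
  }
  this.values[key] = value;
};
OrderedMap.prototype.get = function (key) {
  return this.values[key];
};
OrderedMap.prototype.forEach = function (fn) {
  for (var i = 0; i < this.keys.length; i++) {
    fn(this.keys[i], this.values[this.keys[i]]);
  }
};

var set = new OrderedMap();
set.set('orange', 123);
set.set('banana', 4);
set.set('apple', 567);

console.log(set.get('orange')); // 123
set.forEach(function (k, v) { console.log(k + '=' + v); }); // insertion order
```

Updating an existing key keeps its original position, which is usually what you want for this kind of labelled set.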
How about cooking your own list constructor? ``` function List(obj) { if (this instanceof List) { var t = this, keys = []; /* initialize: add the properties of [obj] to the list, and store the keys of [obj] in the private keys array */ for (var l in obj) { keys.push(l); t[l] = obj[l]; } /* public: add a property to the list */ t.add = function(key, value) { t[key] = value; keys.push(key); return t; /* allows method chaining */ }; /* public: return raw or sorted list as string, separated by [separator] Without [sort] the order of properties is the order in which the properties are added to the list */ t.iterate = function(sort,separator){ separator = separator || '\n'; var ret = [], lkeys = sort ? keys.slice().sort() : keys; for (var i=0;i<lkeys.length;i++){ ret.push(lkeys[i]+': '+t[lkeys[i]]); } return ret.join(separator); }; } else if (obj && obj instanceof Object) { return new List(obj); } else if (arguments.length === 2) { var a = {}; a[String(arguments[0])] = arguments[1]; return new List(a); } else { return true; } /* the 'if (this instanceof List)' pattern makes the use of the 'new' operator obsolete. The constructor also allows itself to be initialized with 2 parameters => 'List(key,value)' */ } ``` Now you can have it raw (the order you added props is maintained) or sorted: ``` var myList = List( { orange:123, banana:4, apple:567 } ); myList.add('peach',786); alert(myList.iterate()); /*=>output: orange: 123 banana: 4 apple: 567 peach: 786 */ // or sorted: alert(myList.iterate(1)); /*=>output: apple: 567 banana: 4 orange: 123 peach: 786 */ ```
How can I define a set of key/values in javascript and both lookup values and iterate the set in order?
[ "", "javascript", "" ]
What's the easiest way to implement (something like) Gmail's interface for labeling messages? Does any javascript library have a widget like this? <http://img294.imageshack.us/img294/7097/36698396.png> Behavior: 1. Click a label in the dropdown to immediately label the message. 2. Check multiple labels and then "Apply" to add multiple labels (not sure I like the "Apply" requirement...) 3. Type in the box to narrow down your list of labels. If you narrow it down to 1, pressing enter applies that label. 4. Trigger the widget with the shortcut key "L".
Horace - you want something really complicated, and you should take into account that even with a lot of pre-made components, it will take some time to get something like this working (not to mention - being pretty :-) ). Anyway, pre-made components are nice but I often like to hack it myself, free-style like. So here is something I cooked up to get you going: [My simple label demo](http://geek.co.il/articles/labels/) (*warning - I only tested this on Firefox 3 and I'm pretty sure that it won't work in anything else*). It is far from being pretty, but the code is pretty straightforward and should be easy to understand.
I doubt there is a precooked solution out there that does *exactly* what you want, but you should be able to put something together yourself with a decent JavaScript library. Something like [Ext JS](http://extjs.com/) would be ideal for this. A few API pages of potential interest: * [`Ext.Panel`](http://extjs.com/deploy/dev/docs/?class=Ext.Panel) * [`Ext.form.TextArea`](http://extjs.com/deploy/dev/docs/?class=Ext.form.TextArea) * [`Ext.menu.CheckItem`](http://extjs.com/deploy/dev/docs/?class=Ext.menu.CheckItem) * [`Ext.KeyNav`](http://extjs.com/deploy/dev/docs/?class=Ext.KeyNav)
How can I implement a gmail-style label chooser?
[ "", "javascript", "user-interface", "gmail", "label", "" ]
What content type should I use, if any, in an inline JavaScript block inside HTML? ``` application/x-javascript text/javascript ``` Or something else?
Besides the fact that [`application/javascript` is the official MIME type of JavaScript](http://www.ietf.org/rfc/rfc4329.txt), most browsers only accept `text/javascript` or simply ignore the `type` attribute value.
According to [the standards](http://www.w3.org/TR/REC-html40/interact/scripts.html) you want to use "text/javascript"
What is the proper content type for JavaScript in HTML?
[ "", "javascript", "html", "xhtml", "" ]
I have a class `Server` which talks to a server connection for IRC. It holds a list of known `User`s, and creates them as necessary. I have two problems involving the `User` class: 1. Anyone can create an instance of a `User`. I only want the `Server` class to be able to do this. 2. If a user (the thing `User` describes) changes his/her name (or other info, like joined channels), the `Server` class can change it itself. However, other classes can, too! I want to disallow other classes from touching this information (making it read-only to them). How can I solve these two problems? In C++, this can be solved by using the `friend` keyword and making the ctor and `setName` (and such) private. Is there a C# keyword which can allow a certain method to be accessible only by a specified class? This would solve my problem.
Honestly I find the `friend` access that originated from C++ to be symptomatic of bad design. You're better off fixing your design. For a start, who really cares if someone creates a User? Does it really matter? I ask this because it seems that sometimes we programmers get caught up worrying about things that simply won't happen or, if they do, it doesn't matter. If you really do care then do one of the following: * Make User an interface. Server can instantiate a private class that implements it; or * Make User an inner class of Server with no public constructors so only Server can instantiate it. Visibility hacks (of which friends in C++ are one and package access in Java are both good examples) are simply asking for trouble and not a good idea.
The closest in the .NET world to `friend` is the `internal` visibility. Note that if your two classes are in separate assemblies, you can use the [InternalsVisibleTo](http://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.internalsvisibletoattribute.aspx) attribute to allow one assembly visibility of the internals of the other.
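For example, with the `User`/`Server` types split across two assemblies, the assembly that owns the internal types would declare the friend relationship like this (the friend assembly name here is a placeholder):

```csharp
// In AssemblyInfo.cs (or any source file) of the assembly containing Server and User
// ("MyClientAssembly" is a placeholder for the friend assembly's name):
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyClientAssembly")]
```

Marking `User`'s constructor `internal` then limits construction to the owning assembly plus any assemblies named this way.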
Make methods/properties visible to one class, hidden to others
[ "", "c#", "encapsulation", "" ]
I have a small application that I'm developing, that I may want to give/sell to others. I want to persist some settings, and create an admin interface to modify them. What would be the best way to store them away? A DB table seems like overkill for the 10-20 settings I'll have, and I want the retrieval of these settings to be as fast as possible. Is a flat file another viable option? What are the pitfalls associated with using a flat file? What would be the fastest/easiest way to interface with a flat file storing multiple keys and values?
I often use PHP's [`parse_ini_file`](https://www.php.net/function.parse-ini-file) for this. So if you write an .ini file with this: ``` ; This is a sample configuration file ; Comments start with ';', as in php.ini [first_section] one = 1 five = 5 animal = BIRD [second_section] path = "/usr/local/bin" URL = "http://www.example.com/~username" [third_section] phpversion[] = "5.0" phpversion[] = "5.1" phpversion[] = "5.2" phpversion[] = "5.3" ``` And read it in with this PHP code: ``` define('BIRD', 'Dodo bird'); $ini_array = parse_ini_file("sample.ini", true); print_r($ini_array); ``` You will get this output: ``` Array ( [first_section] => Array ( [one] => 1 [five] => 5 [animal] => Dodo bird ) [second_section] => Array ( [path] => /usr/local/bin [URL] => http://www.example.com/~username ) [third_section] => Array ( [phpversion] => Array ( [0] => 5.0 [1] => 5.1 [2] => 5.2 [3] => 5.3 ) ) ) ```
It really depends on the type of "settings" you want to store. Are they "bootstrap" settings like DB host, port, and login? Or are they application settings specifically for your application? The problem with letting an admin interface write a file on the file system is the permissions needed in order to write to the file. Anytime you open up the web server to write files, you increase the possibility that an error in your code could allow severe privilege escalation. Databases are designed to allow reads and writes without introducing the potential system security risks. We use "generated" PHP files to store static configuration data in (like database access info). It is generated by a utility script by a user on the command line. After that, all non-static information is stored in a database table. The database table, in turn, is easy to update from an Admin area. It is easy to extend and upgrade as you upgrade your application. It's also much easier to centralize the "data" that needs to be backed up in one place. May I suggest using memcached or something similar to speed it up? Just a couple thoughts...
What is the best way to persist PHP application settings?
[ "", "php", "database", "web-applications", "file-io", "settings", "" ]
I have an application that is locking on the GUI thread, and I've used WinDbg, along with the "!clrstack" command to get this stack trace, but I can't figure out where the issue is. All of these methods look like framework methods, and none are mine. Any help would be much appreciated. I apologize for the long lines ``` OS Thread Id: 0x724 (0) ESP EIP 0012ec88 7c90e4f4 [HelperMethodFrame_1OBJ: 0012ec88] System.Threading.WaitHandle.WaitOneNative(Microsoft.Win32.SafeHandles.SafeWaitHandle, UInt32, Boolean, Boolean) 0012ed34 792b687f System.Threading.WaitHandle.WaitOne(Int64, Boolean) 0012ed50 792b6835 System.Threading.WaitHandle.WaitOne(Int32, Boolean) 0012ed64 7b6f192f System.Windows.Forms.Control.WaitForWaitHandle(System.Threading.WaitHandle) 0012ed78 7ba2d0bb System.Windows.Forms.Control.MarshaledInvoke(System.Windows.Forms.Control, System.Delegate, System.Object[], Boolean) 0012ee18 7b6f328c System.Windows.Forms.Control.Invoke(System.Delegate, System.Object[]) 0012ee4c 7b920717 System.Windows.Forms.WindowsFormsSynchronizationContext.Send(System.Threading.SendOrPostCallback, System.Object) 0012ee64 7a924102 Microsoft.Win32.SystemEvents+SystemEventInvokeInfo.Invoke(Boolean, System.Object[]) 0012ee98 7a922833 Microsoft.Win32.SystemEvents.RaiseEvent(Boolean, System.Object, System.Object[]) 0012eee4 7a923d2f Microsoft.Win32.SystemEvents.OnUserPreferenceChanged(Int32, IntPtr, IntPtr) 0012ef04 7aa8f184 Microsoft.Win32.SystemEvents.WindowProc(IntPtr, Int32, IntPtr, IntPtr) 0012ef08 003620a4 [InlinedCallFrame: 0012ef08] 0012f0cc 7b1d8cce System.Windows.Forms.Application+ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32, Int32, Int32) 0012f168 7b1d8937 System.Windows.Forms.Application+ThreadContext.RunMessageLoopInner(Int32, System.Windows.Forms.ApplicationContext) 0012f1bc 7b1d8781 System.Windows.Forms.Application+ThreadContext.RunMessageLoop(Int32, System.Windows.Forms.ApplicationContext) 0012f1ec 7b195911 
System.Windows.Forms.Application.Run(System.Windows.Forms.Form) 0012f200 00eb0ebb WinForms1.Program.Main() 0012f69c 79e71b4c [GCFrame: 0012f69c] ```
It looks like this code isn't the true cause of the problem. See if one of these pages helps: * [Mysterious Hang or The Great Deception of InvokeRequired](http://ikriv.com/en/prog/info/dotnet/MysteriousHang.html) * [OnUserPreferenceChanged Hang](http://krgreenlee.blogspot.com/2007/09/onuserpreferencechanged-hang.html) **Update:** Fixed url to first page.
The other answer comes from this post by Aaron Lerch. I really like the "filtered breakpoint on someone else's code" part. I think it would've saved me a couple of days. <http://www.aaronlerch.com/blog/2008/12/15/debugging-ui/>
Need help deciphering a C# stack trace
[ "", "c#", "stack-trace", "" ]
I've got a static map of identifier<=>struct pairs, and each struct should contain some arrays. Everything is known at compile time. That is, I want to have something like this here:

```
ID1 => name: someString
       flagCount: 3
       flags: [1, 5, 10]
```

statically created (if possible). Of course, a declaration like:

```
struct Info {
    const char* name;
    int flagCount;
    int flags[];
};
```

would be ideal, as long as I could initialize it like ...

```
Info infos [] = { ... };
```

which is not possible, due to the varying length arrays (except I'm missing something). Alternatively, I thought about (ab)using `boost::assign` for this, but I'm wondering if there is a recommended solution to this. I'm fine if I can store only the info structures into an array, and do the mapping elsewhere.

Edit: A note on the current solution. At the moment, I have:

```
struct Info {
    Info (const std::vector<int>& flags) : flags (flags) {}
    std::vector<int> flags;
};
```

and I use:

```
const std::map<ID, Info> map = boost::assign::map_list_of ("ID1", Info (boost::assign::list_of (1)(2)(3)));
```

which works, I'm just curious whether there is a simpler solution (template-based?).
The elements in an array must be the same size as each other, otherwise you can't use `infos[i]` to access them - the compiler would have to step through the array and look at the size of each element up to `i` to find where the next one started. You can allocate enough memory for each element contiguously, and then create an array of pointers to the elements (pointers being a fixed size). If you only need the mapping and not to index the infos then your map will be identifier to pointer. Alternatively, as you know the size at compile time, if there are only a few flags, make the `Info::flags` array large enough for the maximum flags, or make it a pointer to an array of flags, so that Info is a fixed size struct.
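The last suggestion — a fixed-size struct holding a pointer to a statically allocated flags array — could be sketched like this (names are illustrative, not from the question):

```cpp
// Each Info stores a pointer to its flags instead of a variable-length
// array, so every element of the table is the same size.
struct Info {
    const char* name;
    int flagCount;
    const int* flags;
};

// Each flag list is a separate static array of whatever length it needs...
static const int flagsId1[] = {1, 5, 10};
static const int flagsId2[] = {2, 4};

// ...and the Info table just points at them, so it can be brace-initialized.
static const Info infos[] = {
    {"ID1", 3, flagsId1},
    {"ID2", 2, flagsId2},
};
```

The whole table is then a compile-time constant, at the cost of one pointer indirection per lookup.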
Either use a pointer to the variable-length array: ``` struct Info { const char* name; int flagCount; int *flags; }; ``` or fixed-size array large enough to hold all flags: ``` struct Info { const char* name; int flagCount; int flags[MAX_FLAGS]; }; ``` Both solutions will waste some memory; but for solution 1, it's just one pointer per struct; note that you are already implicitly using this solution for the name field.
Statically initializing a structure with arrays of varying length
[ "", "c++", "" ]
I'm fairly new to programming but I've been reading some interesting discussions on StackOverflow about various programming approaches. I'm still not 100% clear on what the difference is between procedural programming and object oriented programming. It sounds like object oriented programming still uses procedures (methods) but everything is organized differently because the object is the star of the show. But it seems to me that procedures still allow you to do all of the same things. Like in C, you can put all of your similar procedures into a library. So couldn't you really say that a library in C is similar to an object in C++?
The difference between the two is subtle but significant. In a procedural program, modules interact by reading and writing state that is stored in shared data structures. In an object oriented program, modules in the form of objects interact by sending messages to other objects.
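To make the contrast concrete, here is a minimal C++ sketch (types and names invented for illustration): the procedural module exposes its data for anyone to read and write, while the object exposes only the messages it accepts.

```cpp
// Procedural flavour: data is a passive record; free functions
// read and write it directly.
struct AccountData {
    double balance;
};

void deposit(AccountData& state, double amount) {
    state.balance += amount;   // any code holding the struct can touch it
}

// Object-oriented flavour: the state is private, and callers ask the
// object to change it by invoking a method ("sending a message").
class Account {
public:
    void deposit(double amount) { balance_ += amount; }
    double balance() const { return balance_; }
private:
    double balance_ = 0.0;
};
```

In both cases the same work gets done; the difference is who is allowed to touch the state, and through what interface.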
In a procedural program, the code is king and the data is subordinate. In other words, you have programs which act on data and they're not usually tightly bound. In the OO world, objects are the primary thing of interest. An object consists of data *and* the code that is allowed to act on that data, and they are very tightly bound. It is the concept of encapsulation, the hiding of information. An example: let's say you have a number and you want to double it. A procedural way of doing this is:

```
n = n * 2
```

The code here quite explicitly multiplies n by 2 and stores the result back into n. The OO way of doing this is to send a "message" to the number object telling it to double itself:

```
n.double();
```

The advantage of this is called polymorphism. What happens when you decide you want to be able to double a string like "bob"? In the procedural world, you'd have to provide more code to do the doubling but you'd also have to call that code differently. With OO, you create a string object which can also take the 'double' message. The code to double a string belongs to the string object so it knows it has to act differently to the number object. If it decided that "bob" \* 2 was "bobbob", the code would look something like:

```
class number:
    int n
    procedure double:
        n = n * 2

class string:
    char array s
    procedure double:
        s = string_join(s,s)
```

Then you could call x.double() no matter what actual type x was (number or string) and it would know which code to run - this greatly simplifies your code. You can double integers, strings, matrices, complex numbers, reals, window sizes on your monitor and all sorts of different things. And you're right, a C library can be made to look a little bit like objects. The classic example is `stdio.h` - you don't ever care *what* a `FILE*` actually points to, just the fact that it will behave in a certain way. The `FILE*`, `fopen()`, `fclose()` and other functions are a class of sorts representing the I/O capabilities of C.
What's the difference between a procedural program and an object oriented program?
[ "", "c++", "c", "oop", "procedural-programming", "" ]
I find that my program is searching through lots of lengthy strings (20,000+) trying to find a particular unique phrase. What is the most efficient method for doing this in C#? Below is the current code which works like this: 1. The search begins at startPos because the target area is somewhat removed from the start 2. It loops through the string, at each step it checks if the substring from that point starts with the startMatchString, which is an indicator that the start of the target string has been found. (The length of the target string varies). 3. From here it creates a new substring (chopping off the 11 characters that mark the start of the target string) and searches for the endMatchString I already know that this is a horribly complex and possibly very inefficient algorithm. What is a better way to accomplish the same result?

```
string result = string.Empty;
for (int i = startPos; i <= response.Length - 1; i++)
{
    if (response.Substring(i).StartsWith(startMatchString))
    {
        result = response.Substring(i).Substring(11);
        for (int j = 0; j <= result.Length - 1; j++)
        {
            if (result.Substring(j).StartsWith(endMatchString))
            {
                return result.Remove(j);
            }
        }
    }
}
return result;
```
You can use String.IndexOf, but make sure you use StringComparison.Ordinal or it may be one order of magnitude slower.

```
private string Search2(int startPos, string startMatchString, string endMatchString, string response)
{
    int startMatch = response.IndexOf(startMatchString, startPos, StringComparison.Ordinal);
    if (startMatch != -1)
    {
        startMatch += startMatchString.Length;
        int endMatch = response.IndexOf(endMatchString, startMatch, StringComparison.Ordinal);
        if (endMatch != -1)
        {
            return response.Substring(startMatch, endMatch - startMatch);
        }
    }
    return string.Empty;
}
```

Searching 1000 times for a string at about the 40% mark of a 183 KB file took about 270 milliseconds. Without StringComparison.Ordinal it took about 2000 milliseconds. Searching 1 time with your method took over 60 seconds, as it creates a new string (O(n)) each iteration, making your method O(n^2).
There are a whole bunch of algorithms:

* Boyer-Moore
* Sunday
* Knuth-Morris-Pratt
* Rabin-Karp

I would recommend using the simplified Boyer-Moore, called Boyer–Moore–Horspool. The C code appears on Wikipedia. For the Java code look at <http://www.fmi.uni-sofia.bg/fmi/logic/vboutchkova/sources/BoyerMoore_java.html> A nice article about these is available under <http://www.ibm.com/developerworks/java/library/j-text-searching.html> If you want to use built-in stuff, go for regular expressions.
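In case the links rot, here is a compact Java sketch of Boyer–Moore–Horspool (class and method names are mine; the shift table folds characters to one byte, which is exact for ASCII text):

```java
import java.util.Arrays;

class Horspool {
    // Returns the index of the first occurrence of needle in haystack, or -1.
    static int indexOf(String haystack, String needle) {
        int n = haystack.length(), m = needle.length();
        if (m == 0) return 0;
        if (m > n) return -1;

        // For each character, how far we may safely slide the needle when
        // that character is aligned with the needle's last position.
        int[] shift = new int[256];
        Arrays.fill(shift, m);
        for (int i = 0; i < m - 1; i++)
            shift[needle.charAt(i) & 0xFF] = m - 1 - i;

        int pos = 0;
        while (pos <= n - m) {
            int j = m - 1;
            while (j >= 0 && haystack.charAt(pos + j) == needle.charAt(j)) j--;
            if (j < 0) return pos;                            // full match
            pos += shift[haystack.charAt(pos + m - 1) & 0xFF]; // skip ahead
        }
        return -1;
    }
}
```

The payoff over the naive scan is that mismatches let you skip up to `needle.length()` characters at a time instead of always advancing by one.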
What is the most efficient (read time) string search method? (C#)
[ "", "c#", "algorithm", "string", "search", "" ]
In PHP can I include a directory of scripts? i.e. Instead of: ``` include('classes/Class1.php'); include('classes/Class2.php'); ``` is there something like: ``` include('classes/*'); ``` Couldn't seem to find a good way of including a collection of about 10 sub-classes for a particular class.
``` foreach (glob("classes/*.php") as $filename) { include $filename; } ```
Here is the way I include lots of classes from several folders in PHP 5. This will only work if you have classes though. ``` /*Directories that contain classes*/ $classesDir = array ( ROOT_DIR.'classes/', ROOT_DIR.'firephp/', ROOT_DIR.'includes/' ); function __autoload($class_name) { global $classesDir; foreach ($classesDir as $directory) { if (file_exists($directory . $class_name . '.php')) { require_once ($directory . $class_name . '.php'); return; } } } ```
How to include() all PHP files from a directory?
[ "", "php", "include", "" ]
I need to set the 'ReadOnly' property of a BoundField in a GridView to the value of a bit field in the recordset that is being displayed by the same GridView. I am aware I could achieve this in code, but I was wondering, out of interest, if it's possible to do this declaratively inside the property using a <% %> snippet? Cheers, Jamie
Yes you can do this. Create a TemplateField and in the binding statement use either

```
<%# ((Employee)Container.DataItem).IsApproved ? "yes" : "no" %>
```

or you can use a method from the code-behind

```
<%# FormatBool(((Employee)Container.DataItem).IsApproved) %>
```

where FormatBool is a method in your code-behind

```
protected string FormatBool(bool value)
{
    if (value)
        return "yes";
    return "no";
}
```
I was unable to find a way of manipulating the properties of either the BoundField or the TemplateField declaratively based on the DetailView's data. Bendewey very charitably assumed I was binding to a business object when in fact I am dealing with the data directly in this instance - apologies for not making that clear. My solution in the code behind was the following:

```
protected void DetailsView1_DataBound(object sender, EventArgs e)
{
    DetailsView dv = (DetailsView)sender;
    if (dv.DataItemCount > 0)
    {
        DataRowView data = (DataRowView)dv.DataItem;
        bool isFixed = (bool)data["IsFixed"];
        if (isFixed)
        {
            dv.Rows[2].Enabled = false;
            dv.Rows[6].Enabled = false;
        }
    }
}
```

I was disappointed that I had to reference the rows I wanted to disable by index. Possibly there's a way of referencing these rows by their data field without iterating through all the rows, but I couldn't find it at first glance. Thanks for the help!
Using data to set properties of a GridView
[ "", "c#", "webforms", "" ]
In the MVC folder structure, where should general class files reside? For example, I have a class that determines the right DataContext to use, so I'm not reinventing the wheel in each of my controllers. Should it live in the Controllers folder even though it's not a controller? Should it be with the Models because it's database related, even though it's not a model? Possibly the Views\Shared folder? Or is Content the catch-all folder for that kind of stuff? I'm sure I could put it anywhere, but I was wondering where the "right" place is.
It's not a controller, content or a view, so don't use those. It sounds most closely related to your model, so you could put it in model under a subfolder called "Helpers" or "Utility" or some such. Or you can add another top level folder called Services and put it there. That's where I put all my app logic, the middle man between the controllers and the model.
If you look at Rob's MVC Storefront: Separate class library project (like Commerce.MVC.Data)
MVC general class location
[ "", "c#", ".net", "asp.net-mvc", "solution", "" ]
I have some code which I've been using to query MySQL, and I'm hoping to use it with SQLite. My real hope is that this will not involve making too many changes to the code. Unfortunately, the following code doesn't work with SQLite:

```
cursor.execute(query)
rows = cursor.fetchall()
data = []
for row in rows:
    data.append(row["column_name"])
```

This gives the following error:

```
TypeError: tuple indices must be integers
```

Whereas if I change the reference to use a column number, it works fine:

```
data.append(row[1])
```

Can I execute the query in such a way that I can reference columns by their names?
I'm not sure if this is the best approach, but here's what I typically do to retrieve a record set using a DB-API 2 compliant module: ``` cursor.execute("""SELECT foo, bar, baz, quux FROM table WHERE id = %s;""", (interesting_record_id,)) for foo, bar, baz, quux in cursor.fetchall(): frobnicate(foo + bar, baz * quux) ``` The query formatting method is one of the DB-API standards, but happens to be the preferred method for Psycopg2; other DB-API adapters might suggest a different convention which will be fine. Writing queries like this, where implicit tuple unpacking is used to work with the result set, has typically been more effective for me than trying to worry about matching Python variable names to SQL column names (which I usually only use to drop prefixes, and then only if I'm working with a subset of the column names such that the prefixes don't help to clarify things anymore), and is *much* better than remembering numerical column IDs. This style also helps you avoid `SELECT * FROM table...`, which is just a maintenance disaster for anything but the simplest tables and queries. So, not exactly the answer you were asking for, but possibly enlightening nonetheless.
In the five years since the question was asked and then answered, a very simple solution has arisen. Any new code can simply wrap the connection object with a row factory. Code example:

```
import sqlite3
conn = sqlite3.connect('./someFile')
conn.row_factory = sqlite3.Row  # Here's the magic!
cursor = conn.execute("SELECT name, age FROM someTable")
for row in cursor:
    print(row['name'])
```

Here are some [fine docs](http://docs.python.org/2/library/sqlite3.html#accessing-columns-by-name-instead-of-by-index). Enjoy!
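For completeness, here is a self-contained variant of the same `row_factory` pattern against an in-memory database (the table name and rows are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory DB for the demo
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE someTable (name TEXT, age INTEGER)")
conn.execute("INSERT INTO someTable VALUES ('Alice', 30)")

row = conn.execute("SELECT name, age FROM someTable").fetchone()
name_by_key = row["name"]    # access by column name
age_by_index = row[1]        # positional access still works too
```

`sqlite3.Row` rows support both styles of access, so existing index-based code keeps working while new code can use names.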
How can I reference columns by their names in python calling SQLite?
[ "", "python", "sqlite", "" ]
In WebLogic I can configure in the console for the server log to use log4j instead of default JDK logging. However the server log is not using a log4j.properties file, but seems to use the configuration in config.xml. Even if the log4j.properties file is in the classpath and I set these properties:

```
set JAVA_OPTIONS=%JAVA_OPTIONS% -Dlog4j.configuration=file:<path>/log4j.properties
set JAVA_OPTIONS=%JAVA_OPTIONS% -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JLogger
set JAVA_OPTIONS=%JAVA_OPTIONS% -Dweblogic.log.Log4jLoggingEnabled=true
```

Is it possible to use a log4j.properties configuration for WebLogic server logging, or can I only change the log4j configuration with java code?
I don't know anything about WebLogic in particular, but adding `-Dlog4j.debug` will cause log4j to tell you where it's looking for its configuration. I've found that to be invaluable when tracking down logging issues in tomcat previously. Check out the docs for [PropertyConfigurator](http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/PropertyConfigurator.html) and [DOMConfigurator](http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/xml/DOMConfigurator.html) for details on the log4j configuration process.
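For reference, a minimal `log4j.properties` to test against could look like this (the appender name, level, and pattern are just examples — adjust to your setup):

```properties
log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c - %m%n
```

If `-Dlog4j.debug` reports a different file (or none at all) being loaded, you know the problem is the configuration lookup rather than the file's contents.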
Where are you setting the above options? Try putting the -Dlog4j option in the Server Start options for each managed server that will use log4j
Using log4j logging in weblogic 9/10
[ "", "java", "logging", "log4j", "weblogic", "" ]
I'm a little confused by what I should use to escape user output. Firstly, there's the [`Zend_Filter_Input`](http://framework.zend.com/manual/en/zend.filter.input.html) class which looks like it might do what I want but seems oriented towards batch filtering lots of items. At the moment I only want to filter one. Also I'm a little confused by the definition of escapers compared to filters. What's the difference between the `StringTrim` filter and the escaper? Is there a better solution for escaping single elements?
Filters are great on your forms so that you can clean & normalize your data before processing/storing it. You mentioned StringTrim - you've got other ones that ensure capitalization or that your input is all numeric (or alphanumeric or...). Make a note that this is to ensure consistency and sanity in your data - not for avoiding SQL injection - ZF's Database libraries handle that as a separate issue. On the flip-side of this, you get to escape things for output. While "x < 5" or "PB&J" may be perfectly valid data to store and process in your system, they can cause problems when displayed on a web page. This is why you'd normally use `htmlspecialchars()` or `htmlentities()` - by default, Zend\_View uses `htmlspecialchars()` when you `$this->escape($foo)`.
Use [htmlspecialchars()](http://www.php.net/htmlspecialchars)? If this is not what you want, please specify what you mean by "escape user output".
What is the best way to escape user output with the Zend Framework?
[ "", "php", "zend-framework", "" ]
How would you initialise a static `Map` in Java? Method one: static initialiser Method two: instance initialiser (anonymous subclass) or some other method? What are the pros and cons of each? Here is an example illustrating the two methods: ``` import java.util.HashMap; import java.util.Map; public class Test { private static final Map<Integer, String> myMap = new HashMap<>(); static { myMap.put(1, "one"); myMap.put(2, "two"); } private static final Map<Integer, String> myMap2 = new HashMap<>(){ { put(1, "one"); put(2, "two"); } }; } ```
The instance initialiser is just syntactic sugar in this case, right? I don't see why you need an extra anonymous class just to initialize. And it won't work if the class being created is final. You can create an immutable map using a static initialiser too: ``` public class Test { private static final Map<Integer, String> myMap; static { Map<Integer, String> aMap = ....; aMap.put(1, "one"); aMap.put(2, "two"); myMap = Collections.unmodifiableMap(aMap); } } ```
I like the [Guava](https://github.com/google/guava) way of initialising a static, immutable map: ``` static final Map<Integer, String> MY_MAP = ImmutableMap.of( 1, "one", 2, "two" ); ``` As you can see, it's very concise (because of the convenient factory methods in [`ImmutableMap`](https://google.github.io/guava/releases/snapshot/api/docs/com/google/common/collect/ImmutableMap.html)). If you want the map to have more than 5 entries, you can no longer use `ImmutableMap.of()`. Instead, try [`ImmutableMap.builder()`](https://google.github.io/guava/releases/snapshot/api/docs/com/google/common/collect/ImmutableMap.html#builder--) along these lines: ``` static final Map<Integer, String> MY_MAP = ImmutableMap.<Integer, String>builder() .put(1, "one") .put(2, "two") // ... .put(15, "fifteen") .build(); ``` To learn more about the benefits of Guava's immutable collection utilities, see [*Immutable Collections Explained* in Guava User Guide](https://github.com/google/guava/wiki/ImmutableCollectionsExplained). (A subset of) Guava used to be called *Google Collections*. If you aren't using this library in your Java project yet, I **strongly** recommend trying it out! Guava has quickly become one of the most popular and useful free 3rd party libs for Java, as [fellow SO users agree](https://stackoverflow.com/questions/130095/most-useful-free-third-party-java-libraries/132639#132639). (If you are new to it, there are some excellent learning resources behind that link.) --- **Update (2015)**: As for **Java 8**, well, I would still use the Guava approach because it is way cleaner than anything else. If you don't want Guava dependency, consider a [plain old init method](https://stackoverflow.com/a/509016/56285). The hack with [two-dimensional array and Stream API](https://stackoverflow.com/a/25829097/56285) is pretty ugly if you ask me, and gets uglier if you need to create a Map whose keys and values are not the same type (like `Map<Integer, String>` in the question). 
As for future of Guava in general, with regards to Java 8, Louis Wasserman [said this](https://groups.google.com/d/msg/guava-discuss/fEdrMyNa8tA/F4XFm6-uA6oJ) back in 2014, and [*update*] in 2016 it was announced that [**Guava 21 will require and properly support Java 8**](https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/guava-announce/o954PqvaXLY/7ss96X6sAwAJ). --- **Update (2016)**: As [Tagir Valeev points out](https://stackoverflow.com/questions/507602/how-can-i-initialise-a-static-map/34508760#34508760), **Java 9** will finally make this clean to do using nothing but pure JDK, by adding [convenience factory methods](http://openjdk.java.net/jeps/269) for collections: ``` static final Map<Integer, String> MY_MAP = Map.of( 1, "one", 2, "two" ); ```
How can I initialise a static Map?
[ "", "java", "dictionary", "collections", "initialization", "idioms", "" ]
**Note:** Version 2, below, uses the Sieve of Eratosthenes. There are several answers that helped with what I originally asked. I have chosen the Sieve of Eratosthenes method, implemented it, and changed the question title and tags appropriately. Thanks to everyone who helped! ## Introduction I wrote this fancy little method that generates an array of int containing the prime numbers less than the specified upper bound. It works very well, but I have a concern. ## The Method ``` private static int [] generatePrimes(int max) { int [] temp = new int [max]; temp [0] = 2; int index = 1; int prime = 1; boolean isPrime = false; while((prime += 2) <= max) { isPrime = true; for(int i = 0; i < index; i++) { if(prime % temp [i] == 0) { isPrime = false; break; } } if(isPrime) { temp [index++] = prime; } } int [] primes = new int [index]; while(--index >= 0) { primes [index] = temp [index]; } return primes; } ``` ## My Concern My concern is that I am creating an array that is far too large for the final number of elements the method will return. The trouble is that I don't know of a good way to correctly guess the number of prime numbers less than a specified number. ## Focus This is how the program uses the arrays. This is what I want to improve upon. 1. I create a temporary array that is large enough to hold every number less than the limit. 2. I generate the prime numbers, while keeping count of how many I have generated. 3. I make a new array that is the right dimension to hold just the prime numbers. 4. I copy each prime number from the huge array to the array of the correct dimension. 5. I return the array of the correct dimension that holds just the prime numbers I generated. ## Questions 1. Can I copy the whole chunk (at once) of `temp[]` that has nonzero elements to `primes[]` without having to iterate through both arrays and copy the elements one by one? 2. 
Are there any data structures that behave like an array of primitives that can grow as elements are added, rather than requiring a dimension upon instantiation? What is the performance penalty compared to using an array of primitives? --- Version 2 (thanks to [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet)):

```
private static int [] generatePrimes(int max) {
    int [] temp = new int [max];
    temp [0] = 2;
    int index = 1;
    int prime = 1;
    boolean isPrime = false;
    while((prime += 2) <= max) {
        isPrime = true;
        for(int i = 0; i < index; i++) {
            if(prime % temp [i] == 0) {
                isPrime = false;
                break;
            }
        }
        if(isPrime) {
            temp [index++] = prime;
        }
    }
    return Arrays.copyOfRange(temp, 0, index);
}
```

--- Version 3 (thanks to [Paul Tomblin](https://stackoverflow.com/users/3333/paul-tomblin)) which uses the [Sieve of Eratosthenes](http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes):

```
private static int [] generatePrimes(int max) {
    boolean[] isComposite = new boolean[max + 1];
    for (int i = 2; i * i <= max; i++) {
        if (!isComposite [i]) {
            for (int j = i; i * j <= max; j++) {
                isComposite [i*j] = true;
            }
        }
    }
    int numPrimes = 0;
    for (int i = 2; i <= max; i++) {
        if (!isComposite [i]) numPrimes++;
    }
    int [] primes = new int [numPrimes];
    int index = 0;
    for (int i = 2; i <= max; i++) {
        if (!isComposite [i]) primes [index++] = i;
    }
    return primes;
}
```
Your method of finding primes, by comparing every single element of the array with every possible factor, is hideously inefficient. You can improve it immensely by doing a [Sieve of Eratosthenes](http://en.wikipedia.org/wiki/Sieve_of_Eratosthenes) over the entire array at once. Besides doing far fewer comparisons, it also uses addition rather than division. Division is way slower.
### `ArrayList<>` Sieve of Eratosthenes ``` // Return primes less than limit static ArrayList<Integer> generatePrimes(int limit) { final int numPrimes = countPrimesUpperBound(limit); ArrayList<Integer> primes = new ArrayList<Integer>(numPrimes); boolean [] isComposite = new boolean [limit]; // all false final int sqrtLimit = (int)Math.sqrt(limit); // floor for (int i = 2; i <= sqrtLimit; i++) { if (!isComposite [i]) { primes.add(i); for (int j = i*i; j < limit; j += i) // `j+=i` can overflow isComposite [j] = true; } } for (int i = sqrtLimit + 1; i < limit; i++) if (!isComposite [i]) primes.add(i); return primes; } ``` Formula for upper bound of number of primes less than or equal to `max` (see [wolfram.com](http://mathworld.wolfram.com/PrimeCountingFunction.html)): ``` static int countPrimesUpperBound(int max) { return max > 1 ? (int)(1.25506 * max / Math.log((double)max)) : 0; } ```
Finding prime numbers with the Sieve of Eratosthenes (Originally: Is there a better way to prepare this array?)
[ "", "java", "arrays", "primes", "sieve-of-eratosthenes", "" ]
I'm trying to implement caching for a PHP script I'm writing, but I keep running into the following problem. I want the script to be included in other PHP pages, but when I try to pass the cached file and exit the embedded script it exits both the script and the parent page, but doesn't parse the rest of the code on the parent page. See the code below for an example. **index.php** ``` <?php echo "Hello World!<br />"; include("file2.php"); echo "This line will not be printed"; ?> ``` **file2.php** ``` <?php $whatever = true; if ($whatever == true) { echo "file2.php has been included<br />"; exit; // This stops both scripts from further execution } // Additional code here ?> ``` If the above index.php is executed you get the following output: ``` Hello World! file2.php has been included ``` However, I'm trying to get it to look like this: ``` Hello World! file2.php has been included This line will not be printed ```
Just wrap the "additional code here" in an else statement? ``` <?php $whatever = true; if ($whatever == true) { echo "file2.php has been included<br />"; } else { // Additional code here } ?> ``` Otherwise I'm not sure what you're getting at. The **exit** command always terminates the current execution in whole - not just execution of the current file (for which, there is no command) ## EDIT Thanks to comments and posts by PHLAK, tomhaigh, MichaelM, and Mario, I myself learned something today - that you *CAN* indeed terminate the execution of a single included file w/the **return** command. Thanks, guys!
Use `return;` instead of `exit;` in the included file - this will only halt execution of that script. Note that you can also use this to return a value to the parent script, e.g. file1.php ``` <?php echo 'parent script'; $val = include('file2.php'); //$val will equal 'value' echo 'This will be printed'; ``` file2.php ``` <?php echo 'child script'; return 'value'; ```
Quiting a PHP script within another PHP page
[ "", "php", "include", "quit", "" ]
More of a wiki list/collection, I'm looking for a list of good Java desktop apps. I've added a couple below to get started. Please also list the framework/widget toolkit being used, if it's known.
This used to be a good showcase for that: [Swing Sightings at Sun](http://java.sun.com/products/jfc/tsc/sightings)
[IntelliJ IDEA](http://www.jetbrains.com/idea/index.html) (Swing)
Examples of good Java desktop applications
[ "", "java", "desktop-application", "" ]
How do I write a Python class that handles calls on undefined methods by first getting the output of a function of the same name from a given module, and then doing something further with that output? For example, given add(x, y), doublerInstance.add(1, 1) should return 4. I know `__getattr__()` intercepts calls on undefined methods, and getattr() can retrieve a function object. But I don't know how to get the arguments passed to the undefined call caught by `__getattr__()` through to the function retrieved by getattr(). ``` EXAMPLE Module functions.py: def add(x, y): return x + y Module doubler.py: class Doubler: def __init__(self, source): self.source = source def __getattr__(self, attrname): fnc = getattr(self.source, attrname) return fnc() * 2 Session: >import functions as f >import doubler as d >doublerInstance = d.Doubler(f) >doublerInstance.add(1, 2) <snip> TypeError: add() takes exactly 2 arguments, (0 given) END ``` I do understand the error -- getattr() returns a function to be run, and the call fnc() doesn't pass any arguments to that function -- here, add(). But how do I get the arguments passed in to the call dblr.add(1, 2) and pass those to the function returned by the getattr() call? I'm looking for the right way to do this, not some usage of `__getattr__`. I realize that decorator functions using @ might be a better tool here, but I don't yet understand those well enough to see whether they could be applied here. ALSO -- what resource should I have looked at to figure this out for myself? I haven't found it in the Lutz books, the Cookbook, or the Python Library Reference.
When you call `doublerInstance.add(1, 2)`, you're getting an attribute `add` from it, and then you're *calling* it. But inside your getattr, you're returning a value. You have to return a function. Anyway, for this particular case to work, you need this: ``` def __getattr__(self, attrname) : fnc = getattr(self.source, attrname) def doubled(*args, **kwargs) : return 2 * fnc(*args, **kwargs) return doubled ```
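Put together, the forwarding pattern might run like this (a self-contained sketch; the `Functions` class stands in for the `functions` module):

```python
class Functions:
    """Stand-in for the functions.py module from the question."""
    @staticmethod
    def add(x, y):
        return x + y

class Doubler:
    def __init__(self, source):
        self.source = source

    def __getattr__(self, attrname):
        fnc = getattr(self.source, attrname)
        def doubled(*args, **kwargs):
            # forward whatever arguments the caller passed, then double
            return 2 * fnc(*args, **kwargs)
        return doubled

d = Doubler(Functions)
print(d.add(1, 1))  # 4
print(d.add(1, 2))  # 6
```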
`__getattr__` has to return the function - not the result from calling it: ``` class Doubler: def __init__(self, source): self.source = source def __getattr__(self, attrname): fnc = getattr(self.source, attrname) return lambda x, y: fnc(x, y) * 2 ``` This uses a lambda expression; it returns a new function that doubles the output of fnc. Perhaps this test code will make it clearer: ``` import functions as f import doubler as d doublerInstance = d.Doubler(f) print doublerInstance.add(1, 2) doubleadd = doublerInstance.add print doubleadd(1,2) print doubleadd(2,3) ```
Further Processing of Output of Undefined Methods (Python)
[ "", "python", "" ]
I am trying to revamp our build process, which is currently a gigantic Ant build.xml that calls into other Ant build files and executes several Java classes to perform more complex logic that would be impossible/scary to attempt in Ant. Background: * experience in Java and Ant, some Groovy * Windows platforms Goals: * run as a combination of command line cron and when a servlet is posted to * as simplified as possible, fewest languages and bouncing between techs I need the higher-level logical power that a language like Java provides, and Ant is pretty easy and we use the filtering to override default properties files for different clients. Mostly I'm wondering if there is something other than Ant/Java that people use.
Apart from the Ant you mentioned and the scary make/autotools, the mainstream tools are: * [SCons](http://www.scons.org) * [Jam](http://www.perforce.com/jam/jam.html) * [CMake](http://www.cmake.org) * [Maven](http://maven.apache.org/) I use SCons, because it is Python-based, well-funded and elegant. Jam seems to be the most pragmatic one. I don't know too much about CMake. Maven may be the choice for you as it is Java-centric and more high-level than Ant. You can find more at Wikipedia: [List of build tools](http://en.wikipedia.org/wiki/List_of_build_automation_software)
If you pursue Maven, then you will have two problems: a complex build and learning the f@\*#ing "magic" of Maven. Maven just makes the problem worse because it is obtuse and overly complicated. I inherited a legacy Maven 1.x build at a large Fortune 500 company. I used Maven 2.x by choice on many other projects in recent years. I evaluated Maestro, in hopes that it might make Maven tractable. My conclusion, like many other people's (check the 'net), is that Maven is a big step in the wrong direction. It definitely is not an improvement over Ant. I have used Ant for MANY years, including writing a large open-source library of Ant helper scripts. I have also extensively used its .NET cousin NAnt. However, Ant has two major failings. One, XML is simply not the right place to be doing build tasks. Two, Ant and XML do not scale well to large, complex builds. In fact, I have written a lot here at SO about my experiences in that arena (and with Maven). Industry leaders have concluded that a build is just another application, and should be approached using general application tools. However, since it involves system-level and cross-platform functionality, most development languages/platforms are not properly suited (which includes Java, and therefore Ant and Maven). That also excludes .NET. I spent two years looking for an alternative, and I found it: Python. It has the right combination of system-level access, cross-platform portability, simplicity, readability, power, robustness, and maturity. SCons, buildbot, setuptools/easy_install, and base Python are my current target platform for the build process. When necessary, integration with Ant, Maven, and any other such tool is easy. Meanwhile, I can use these tools for the core of any build on any platform with any source language. No more roadblocks, no more crazy complexity, no more supposedly-helpful "declarative" scripting, no more black-box f@\*#ing "magic". 
If you can't switch to Python, then try Ant + Ivy (at apache.org). It gives you Maven's cool repository without most of Maven's evils. That is what I am doing as well, where necessary and suitable. Best wishes.
What do you use for a complex build process?
[ "", "java", "ant", "build-process", "" ]
Is there a way to compile C# files into one single file that is ready to give to the user?
One way is to use [ILMerge](http://research.microsoft.com/en-us/people/mbarnett/ilmerge.aspx). This will merge multiple assemblies into a single one (think combining all DLLs and a main exe into a single exe).
Yes :) But you must put all your sources into a single assembly and compile it to an EXE. Also note that the target system must also have the required .NET infrastructure installed. Note that security policies on the target system may prevent the user from directly running your app. Lastly, unless you "[NGEN](http://msdn.microsoft.com/en-us/library/6t9t5wcf(VS.80).aspx)" your code, it will be [jitted](http://msdn.microsoft.com/en-us/library/z1zx9t92(VS.80).aspx) the first time it runs. This will incur some startup time costs. These can be considerable in some instances.
Compile all C# files into one file?
[ "", "c#", ".net", "" ]
I'm trying to incorporate the following JavaScript Scroller into a website but as soon as I include it in a HTML page that has layout eg. Tables/DIVs it seems to break and won't display. Was wondering if anyone had any advice. ``` <script language="javascript"> //ENTER CONTENT TO SCROLL BELOW. var content='Content to be scrolled'; var boxheight=150; // BACKGROUND BOX HEIGHT IN PIXELS. var boxwidth=150; // BACKGROUND BOX WIDTH IN PIXELS. var boxcolor="#E9F0F8"; // BACKGROUND BOX COLOR. var speed=50; // SPEED OF SCROLL IN MILLISECONDS (1 SECOND=1000 MILLISECONDS).. var pixelstep=2; // PIXELS "STEPS" PER REPITITION. var godown=false; // TOP TO BOTTOM=TRUE , BOTTOM TO TOP=FALSE // DO NOT EDIT BEYOND THIS POINT var outer,inner,elementheight,ref,refX,refY; var w3c=(document.getElementById)?true:false; var ns4=(document.layers)?true:false; var ie4=(document.all && !w3c)?true:false; var ie5=(document.all && w3c)?true:false; var ns6=(w3c && navigator.appName.indexOf("Netscape")>=0)?true:false; var txt=''; if(ns4){ txt+='<table cellpadding=0 cellspacing=0 border=0 height='+boxheight+' width='+boxwidth+'><tr><td>'; txt+='<ilayer name="ref" bgcolor="'+boxcolor+'" width='+boxwidth+' height='+boxheight+'></ilayer>'; txt+='</td></tr></table>' txt+='<layer name="outer" bgcolor="'+boxcolor+'" visibility="hidden" width='+boxwidth+' height='+boxheight+'>'; txt+='<layer name="inner" width='+(boxwidth-4)+' height='+(boxheight-4)+' visibility="hidden" left="2" top="2" >'+content+'</layer>'; txt+='</layer>'; }else{ txt+='<div id="ref" style="position:relative; width:'+boxwidth+'; height:'+boxheight+'; background-color:'+boxcolor+';" ></div>'; txt+='<div id="outer" style="position:absolute; width:'+boxwidth+'; height:'+boxheight+'; visibility:hidden; background-color:'+boxcolor+'; overflow:hidden" >'; txt+='<div id="inner" style="position:absolute; visibility:visible; left:2px; top:2px; width:'+(boxwidth-4)+'; overflow:hidden; cursor:default;">'+content+'</div>'; txt+='</div>'; } 
document.write(txt); function getElHeight(el){ if(ns4)return (el.document.height)? el.document.height : el.clip.bottom-el.clip.top; else if(ie4||ie5)return (el.style.height)? el.style.height : el.clientHeight; else return (el.style.height)? parseInt(el.style.height):parseInt(el.offsetHeight); } function getPageLeft(el){ var x; if(ns4)return el.pageX; if(ie4||w3c){ x = 0; while(el.offsetParent!=null){ x+=el.offsetLeft; el=el.offsetParent; } x+=el.offsetLeft; return x; }} function getPageTop(el){ var y; if(ns4)return el.pageY; if(ie4||w3c){ y=0; while(el.offsetParent!=null){ y+=el.offsetTop; el=el.offsetParent; } y+=el.offsetTop; return y; }} function scrollbox(){ if(ns4){ inner.top+=(godown)? pixelstep: -pixelstep; if(godown){ if(inner.top>boxheight)inner.top=-elementheight; }else{ if(inner.top<2-elementheight)inner.top=boxheight+2; }}else{ inner.style.top=parseInt(inner.style.top)+((godown)? pixelstep: -pixelstep)+'px'; if(godown){ if(parseInt(inner.style.top)>boxheight)inner.style.top=-elementheight+'px'; }else{ if(parseInt(inner.style.top)<2-elementheight)inner.style.top=boxheight+2+'px'; }}} window.onresize=function(){ if(ns4)setTimeout('history.go(0)', 400); else{ outer.style.left=getPageLeft(ref)+'px'; outer.style.top=getPageTop(ref)+'px'; }} window.onload=function(){ outer=(ns4)?document.layers['outer']:(ie4)?document.all['outer']:document.getElementById('outer'); inner=(ns4)?outer.document.layers['inner']:(ie4)?document.all['inner']:document.getElementById('inner'); ref=(ns4)?document.layers['ref']:(ie4)?document.all['ref']:document.getElementById('ref'); elementheight=getElHeight(inner); if(ns4){ outer.moveTo(getPageLeft(ref),getPageTop(ref)); outer.clip.width=boxwidth; outer.clip.height=boxheight; inner.top=(godown)? 
-elementheight : boxheight-2; inner.clip.width=boxwidth-4; inner.clip.height=elementheight; outer.visibility="show"; inner.visibility="show"; }else{ outer.style.left=getPageLeft(ref)+'px'; outer.style.top=getPageTop(ref)+'px'; inner.style.top=((godown)? -elementheight : boxheight)+'px'; inner.style.clip='rect(0px, '+(boxwidth-4)+'px, '+(elementheight)+'px, 0px)'; outer.style.visibility="visible"; } setInterval('scrollbox()',speed); } </script> ```
I know I'm not answering your question, but this will at least help me to sleep tonight. :) I'm not sure exactly what you are trying to do, but I wouldn't use that script. I'm guessing you've copied and pasted it from somewhere, but it looks like it was written a good few years ago, and doesn't really meet many of the best practice guidelines for JavaScript these days. If you are looking for scripts and libraries for these types of effects, I would recommend looking at: * jQuery - <http://jquery.com/> * Prototype - <http://www.prototypejs.org/> * Dojo - <http://www.dojotoolkit.org/> * MooTools - <http://mootools.net/> * Mochikit - <http://mochikit.com/> If you had to push me for one, I would recommend jQuery. Along with learning JavaScript of course, which is really required for use of any of the above libraries. :)
Your script is quite old (IE4 / Netscape era) and it uses [non standard tags](http://www.devguru.com/technologies/html/quickref/html_other_tags.html) (ilayer, layer). Continuing with [Andy's recommendation](https://stackoverflow.com/questions/550277/javascript-scroller-problem/550304#550304), there are some jQuery alternatives that may suit your needs, like [jdNewsScroll](http://jdsharp.us/jQuery/plugins/jdNewsScroll/) or this [ScrollUp Headline Reader](http://www.learningjquery.com/2006/10/scroll-up-headline-reader).
JavaScript Scroller Problem
[ "", "javascript", "scroll", "" ]
I'd like to use an object across AppDomains. For this I can use the [Serializable] attribute: ``` [Serializable] class MyClass { public string GetSomeString() { return "someString"; } } ``` Or subclass from MarshalByRefObject: ``` class MyClass: MarshalByRefObject { public string GetSomeString() { return "someString"; } } ``` In both cases I can use the class like this: ``` AppDomain appDomain = AppDomain.CreateDomain("AppDomain"); MyClass myObject = (MyClass)appDomain.CreateInstanceAndUnwrap( typeof(MyClass).Assembly.FullName, typeof(MyClass).FullName); Console.WriteLine(myObject.GetSomeString()); ``` **Why do both approaches seem to have the same effect? What is the difference in both approaches? When should I favor the one approach over the other?** EDIT: At the surface I know that there are differences between both mechanisms, but if someone jumped out of a bush and asked me the question I couldn't give them a proper answer. These are quite open questions, and I hoped that someone could explain it better than I could.
Using MarshallByRef will execute your methods in the remote AppDomain. When you use CreateInstanceAndUnwrap with a Serializable object, a copy of the object is made to the local AppDomain, so any method call will be executed in the local AppDomain. If what you want is to communicate between AppDomains go with the MarshallByRef approach. An example: ``` using System; using System.Reflection; [Serializable] public class SerializableClass { public string WhatIsMyAppDomain() { return AppDomain.CurrentDomain.FriendlyName; } } public class MarshallByRefClass : MarshalByRefObject { public string WhatIsMyAppDomain() { return AppDomain.CurrentDomain.FriendlyName; } } class Test { static void Main(string[] args) { AppDomain ad = AppDomain.CreateDomain("OtherAppDomain"); MarshallByRefClass marshall = (MarshallByRefClass)ad.CreateInstanceAndUnwrap(Assembly.GetExecutingAssembly().FullName, "MarshallByRefClass"); SerializableClass serializable = (SerializableClass)ad.CreateInstanceAndUnwrap(Assembly.GetExecutingAssembly().FullName, "SerializableClass"); Console.WriteLine(marshall.WhatIsMyAppDomain()); Console.WriteLine(serializable.WhatIsMyAppDomain()); } } ``` This code will display "OtherAppDomain" when you call WhatIsMyAppDomain from the MarshallByRef object, and your default AppDomain name when you call from the Serializable object.
These approaches have dramatically different effects. With the MarshalByRef version you are creating 1 instance of your object. It will live in the newly created AppDomain. All access to the object is done via a [TransparentProxy](http://blogs.msdn.com/cbrumme/archive/2003/07/14/51495.aspx). With the Serializable version you are creating 2 instances of your object. One is created in the newly created AppDomain. The CreateInstanceAndUnwrap call will then serialize this object and deserialize it in the original app domain. This creates a second version of the object that is completely independent from the first. In fact, the very next GC will almost certainly eliminate the original object and you'll be left with one instance.
Use the [Serializable] attribute or subclassing from MarshalByRefObject?
[ "", "c#", "remoting", "appdomain", "" ]
I visited a university CS department open day today and in the labs tour we sat down to play with a couple of final-year projects from undergraduate students. One was particularly good - a sort of FPS asteroids game. I decided to take a peek in the `src` directory to find it was done in C++ (most of the other projects were Java 3D apps). I haven't done any C before but I have looked through some C code before. From what I saw in the .cpp code in this game it didn't look very different. I'm interested in learning either C or C++ but will probably learn the other later on. **Is there any advantage to me learning one before the other** and **if so, which one?**
There is no need to learn C before learning C++. **They are different languages**. It is a common misconception that C++ is in some way dependent on C and not a fully specified language on its own. Just because C++ shares a lot of the same syntax and a lot of the same semantics does not mean you need to learn C first. If you learn C++ you will eventually learn most of C, with some differences between the languages that you will learn over time. In fact it's a very hard thing to write proper C++, because intermediate C++ programmers tend to write C/C++. That is true whether or not you started with C or started with C++. If you know C first, then that is a good plus when learning C++. You will start off knowing a chunk of the language. If you do not know C first then there is no point focusing on a different language. There are plenty of good books and tutorials available that start you from knowing nothing and will cover anything you would learn from C which applies to C++ as well. [Please see further reasoning in this answer](https://stackoverflow.com/questions/145096/is-it-true-that-there-is-no-need-to-learn-c-because-c-contains-everything/145098#145098).
I love this question - it's like asking "what should I learn first, snowboarding or skiing"? I think it depends if you want to snowboard or to ski. If you want to do both, you have to learn both. In both sports, you slide down a hill on snow using devices that are sufficiently similar to provoke this question. However, they are also sufficiently different so that learning one does not help you much with the other. Same thing with C and C++. While they appear to be languages sufficiently similar in syntax, the mind set that you need for writing OO code vs procedural code is sufficiently different so that you pretty much have to start from the beginning, whatever language you learn second.
Should I learn C before learning C++?
[ "", "c++", "c", "" ]
I'm having quite some trouble since I migrated my controllers from classical inheritance to use the annotations like @Controller and @RequestMapping. The problem is that I don't know how to plug in validation like in the old case. Are there any good tutorials about this?
Have a look at [§13.11.x of the Spring MVC documentation](http://static.springsource.org/spring/docs/2.5.x/reference/mvc.html#mvc-annotation). It covers annotation-based controller configuration, including data binding. It should be very useful to you. Good luck!
Spring 3 includes improved support for validation, and in particular, JSR-303 validation. You can validate form beans using the @Valid annotation. See <http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/validation.html#validation-beanvalidation>, section 5.7.
Spring MVC validation with Annotations
[ "", "java", "spring", "spring-mvc", "validation", "" ]
I am looking at using Ruby on Rails for a storefront that has to make use of existing data, but I can create my own database schema if I need to. A lot of the fields in the existing data are char(1) emulating a boolean field (i.e. Y/N) for, I believe, cross-platform portability. Since the data is prone to change, I don't want to have to change the existing structure and convert these fields to bit/boolean fields. If I do use Rails I really would like to make use of Ruby's beautiful boolean syntax and say something like `<%= image_tag 'recycled.jpg' if product.recycled? %>`, but will Rails recognize char(1) as a boolean, or do I have to define those methods myself in the model like: ``` class Product < ActiveRecord::Base # ... other stuff here def recycled? self.recycled == 'Y' end end ``` I'm thinking I will have to redefine them myself, which is no big deal, I just want to make sure since using char(1) as yes/no values isn't something I've used in the past.
As far as I know, what you describe is not possible with `ActiveRecord` out-of-the-box. However, if you have a lot of columns like this you could look at doing a little bit of meta-programming to provide a declarative way to add the relevant accessor logic. Something like :- ``` class Product < ActiveRecord::Base yes_no_accessor :recycled end ``` Another possibility is to monkey-patch `ActiveRecord`. I think the relevant method is `ActiveRecord::ConnectionAdapters::Column.value_to_boolean(value)`. You could try overriding the `ActiveRecord::ConnectionAdapters::Column::TRUE_VALUES` constant to include `'Y'`. I haven't actually tried this!
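The declarative-accessor idea isn't Ruby-specific; a hypothetical Python analogue of a `yes_no_accessor` helper (names invented for illustration) might look like:

```python
def yes_no_accessor(cls, *columns):
    """For each 'Y'/'N' column, attach a boolean-reading property."""
    for col in columns:
        def reader(self, _col=col):            # bind col on each loop turn
            return getattr(self, '_' + _col) == 'Y'
        setattr(cls, col, property(reader))
    return cls

class Product:
    def __init__(self, recycled):
        self._recycled = recycled              # raw char(1) value

yes_no_accessor(Product, 'recycled')

print(Product('Y').recycled)  # True
print(Product('N').recycled)  # False
```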
I'd probably attack it at the model level - when you load a row into a model instance, compute a boolean attribute based on the char. Add a getter for the virtual attribute that returns this value, and a setter that updates both the boolean and the underlying char.
Can RoR deal with char(1) fields as "boolean" fields?
[ "", "sql", "ruby-on-rails", "types", "" ]
I am new to regular expressions. Is it possible to match everything before a word that meets a certain criterion: E.g. THIS IS A TEST - - +++ This is a test I would like it to encounter a word that begins with an uppercase letter whose next character is lowercase. This constitutes a proper word. I would then like to delete everything before that word. The example above should produce: This is a test I only want to do this processing until it finds the proper word and then stop. Any help would be appreciated. Thanks
Replace ``` ^.*?(?=[A-Z][a-z]) ``` with the empty string. This works for ASCII input. For non-ASCII input (Unicode, other languages), different strategies apply. Explanation ``` .*? Everything, until (?= followed by [A-Z] one of A .. Z and [a-z] one of a .. z ) ``` The Java Unicode-enabled variant would be this: ``` ^.*?(?=\p{Lu}\p{Ll}) ```
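The same pattern can be checked quickly outside Java; Python's `re` uses compatible syntax for this lookahead construct:

```python
import re

s = "THIS IS A TEST - - +++ This is a test"
# Lazily consume everything up to the first uppercase-then-lowercase pair,
# and replace that prefix with the empty string.
print(re.sub(r'^.*?(?=[A-Z][a-z])', '', s))  # This is a test
```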
Having woken up a bit, you don't need to delete anything, or even create a sub-group - just find the pattern expressed elsewhere in answers. Here's a complete example: ``` import java.util.regex.*; public class Test { public static void main(String args[]) { Pattern pattern = Pattern.compile("[A-Z][a-z].*"); String original = "THIS IS A TEST - - +++ This is a test"; Matcher match = pattern.matcher(original); if (match.find()) { System.out.println(match.group()); } else { System.out.println("No match"); } } } ``` EDIT: Original answer This looks like it's doing the right thing: ``` import java.util.regex.*; public class Test { public static void main(String args[]) { Pattern pattern = Pattern.compile("^.*?([A-Z][a-z].*)$"); String original = "THIS IS A TEST - - +++ This is a test"; String replaced = pattern.matcher(original).replaceAll("$1"); System.out.println(replaced); } } ``` Basically the trick is not to ignore everything before the proper word - it's to group everything from the proper word onwards, and replace the whole text with that group. The above would fail with `"*** FOO *** I am fond of peanuts"` because the "I" wouldn't be considered a proper word. If you want to fix that, change the [a-z] to [a-z\s] which will allow for whitespace instead of a letter.
How to use regular expressions to match everything before a certain type of word
[ "", "java", "regex", "" ]
I have a form with two panels (top, bottom), each panel contains grids. Basically, it's a Master-Detail form where selecting a row from the top grid would show details in the bottom grid. Binding the data to the detail grid is taking some time. Since binding is done on the UI thread, it blocks the thread and therefore the user cannot select another row from the master grid until the binding is done. Please note that by binding I don't mean getting data from the data source. It's the actual binding that's taking longer, as it does a lot of data massaging. How can I keep the UI thread free while the detail grid is doing its binding? Thanks a million.
Finally I found the solution. The solution doesn't involve multithreading to start with. As I said, the delay was in binding the grid, meaning the main thread was held, so we couldn't do much. So the solution is to introduce delays. When the user selects the master row, a timer ticks off for a certain time. If another request is made before the time has expired, the timer gets restarted. This way we ignore all calls that are made while the user is clicking or selecting rows too fast. Once the timer expires, I take the selected row and display the data. Simple and elegant solution.
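In C#/WinForms the natural tool for this is a `Timer` on the form; as a language-agnostic sketch of the same debounce idea (Python here, with hypothetical timings):

```python
import threading
import time

class Debouncer:
    """Restart a timer on every call; the action runs only after `delay`
    seconds with no further calls -- the scheme described above."""
    def __init__(self, delay, action):
        self.delay = delay
        self.action = action
        self._timer = None

    def trigger(self, *args):
        if self._timer is not None:
            self._timer.cancel()   # a newer selection restarts the clock
        self._timer = threading.Timer(self.delay, self.action, args)
        self._timer.start()

# Five rapid "row selections"; only the last one gets processed.
seen = []
d = Debouncer(0.1, seen.append)
for row in range(5):
    d.trigger(row)
    time.sleep(0.02)
time.sleep(0.5)
print(seen)  # [4]
```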
You can't. The update of the UI has to be performed on the UI thread. You may be able to speed up the binding by using things such as BeginUpdate/EndUpdate, which are available on some controls, but as you don't specify what you are using I can't say if that's available.
Keep UI Thread free
[ "", "c#", "winforms", "" ]
I have my Greasemonkey script scanning every page I visit for a specific string. I would like to record the variations of the string in a SQLite db. I'll have another app process this db every once in a while. What I don't know is HOW do I store the data into the SQLite db? I was thinking I could launch an executable automatically if the string was found, but I don't know how to do that through JavaScript. Another alternative I thought of was to have a socket listen on a certain port and do some JS magic, but I couldn't think of a silent way to send data like that.
I'm not sure how you can use it with Greasemonkey but Firefox has an API called Storage for using an sqlite database. Check it out here: <https://developer.mozilla.org/en/Storage>
I recommend using a webserver to gather the data. You can set up a domain or IP to send the data to. Just for starting out you could even run on localhost if you need to. The advantage is that, once created, the same architecture can be used from different PCs, so that any computer you run the script from can share the results. **Update:** To communicate with your server you will need to use [GM\_xmlhttpRequest](http://diveintogreasemonkey.org/api/gm_xmlhttprequest.html). I know of one library that adds an abstraction layer to make using GM\_xmlhttpRequest easier: Speakeasy.js. It is a relatively unknown lightweight ActiveResource like interface for sending and retrieving data from a RESTful webserver. [Here's an example](http://userscripts.org/scripts/show/42544) of a Greasemonkey script that communicates with a webserver on every page load. It loads annotations and displays them on the page. Here's an adapted version close to your needs: ``` // ==UserScript== // @name Demo Script // @namespace http://example.com // @description Sample // @include * // // @require http://ajax.googleapis.com/ajax/libs/jquery/1.3.1/jquery.min.js // @require http://strd6.googlecode.com/svn/trunk/gm_util/d_money.js // @require http://strd6.googlecode.com/svn/trunk/gm_util/speakeasy.js // // ==/UserScript== error = D$.error; log = D$.log; D$.debug(false); Speakeasy .generateResource('result') .configure({ baseUrl: 'http://localhost:3000/' }) ; // Attach all annotations for this page from remote server var href = window.location.href; currentUrl = href.substring(href.indexOf('://') + 3); log(currentUrl); var result1 = 'something'; // Insert your function to get your result data var result2 = 'something else'; // Insert your function to get your result data Speakeasy.result.create({ data: { url: currentUrl, result1: result1, result2: result2 } }); ``` You can quickly create a Rails site or use whatever backend you are familiar with.
launch an app to record keep with greasemonkey
[ "", "javascript", "greasemonkey", "" ]
I just got a question that I can't answer. Suppose you have this loop definition in Java: ``` while (i == i) ; ``` What is the type of `i` and the value of `i` if the loop is not an infinite loop and **the program is using only one thread**?
``` double i = Double.NaN; ``` The API for [Double.equals()](http://java.sun.com/javase/6/docs/api/java/lang/Double.html#equals(java.lang.Object)) spells out the answer: "Double.NaN==Double.NaN has the value false". This is elaborated in the Java Language Specification under "[Floating-Point Types, Formats, and Values](http://java.sun.com/docs/books/jls/third_edition/html/typesValues.html#4.2.3)": > `NaN` is unordered, so the numerical > comparison operators `<`, `<=`, `>`, and `>=` > return `false` if either or both > operands are `NaN`. The > equality operator `==` returns `false` if > either operand is `NaN`, and the > inequality operator `!=` returns `true` if > either operand is `NaN`. **In > particular, `x!=x` is `true` if and only > if `x` is `NaN`**, and `(x<y) == !(x>=y)` will > be `false` if `x` or `y` is `NaN`.
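The same IEEE 754 behaviour is easy to confirm in other languages; a quick Python check:

```python
nan = float('nan')
print(nan == nan)  # False: NaN is the only value not equal to itself
print(nan != nan)  # True
print(nan < nan, nan <= nan, nan > nan, nan >= nan)  # False False False False
```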
The value of `i` is then invalid: "Not a Number". After some googling, I found out that you CAN have NaN (Not a Number) in Java! So, a floating point number is the data type and the value is NaN. See [here](http://www.concentric.net/~Ttwang/tech/javafloat.htm)
How can "while (i == i) ;" be a non-infinite loop in a single threaded application?
[ "", "java", "" ]
I'm trying to learn to modify games in C++ (not the game files themselves, just the memory the game is using, e.g. to change ammo and whatnot). Can someone point me to some books?
The most convenient way to manipulate a remote process' memory is to create a thread within the context of that program. This is usually accomplished by forcibly injecting a dll into the target process. Once you have code executing inside the target application you are free to use standard memory routines. e.g (memcpy, malloc, memset). I can tell you right now that the most convenient and easy to implement method is the CreateRemoteThread / LoadLibrary trick. As other people have mentioned, simple hacks can be performed by scanning memory for known values. But if you want to perform anything more advanced you will need to look into debugging and dead-list analysis. (Tools: ollydbg and IDA pro, respectively). You have scratched the surface of a very expansive hacking topic, there is a wealth of knowledge out there.. First a few internet resources: gamedeception.net - A community dedicated to game RE (Reverse Engineering) and hacking. <http://www.edgeofnowhere.cc/viewtopic.php?p=2483118> - An excellent tutorial on various DLL injection methods. Openrce.org - Community for reverse code engineering. I can also recommend a book to you - <http://www.exploitingonlinegames.com/> Windows API Routines you should research (msdn.com): ``` CreateRemoteThread LoadLibraryA VirtualAllocEx VirtualProtectEx WriteProcessMemory ReadProcessMemory CreateToolhelp32Snapshot Process32First Process32Next ```
**Injecting Code:** I think the best method is to modify the exe to inject code into one of the loaded modules. [Check this tutorial](http://home.inf.fh-rhein-sieg.de/~ikarim2s/how2injectcode/code_inject.html) **Short related story:** Over 10 years ago though, I do remember successfully modifying my score in solitaire in windows with a simple C++ program. I did this by starting an int \* pointer at some base address and iterating through memory addresses (with a try /catch to catch exceptions). I would look for what my current score was in one of those pointer variables, and replace it with a new integer value. I just made sure that my current score was some obscure value that wouldn't be contained in many memory addresses. Once I found a set of memory addresses that matched my score, I would change my score manually in solitaire and only look through the memory addresses that were found in the last iteration. Usually this would narrow down to a single memory address that contained the score. At this point I had the magical simple line of code \*pCode = MY\_DESIRED\_SCORE; This may not be possible anymore though with new memory security models. But the method worked pretty good with a 10-20 line C++ program and it only took about a minute to modify my score.
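The scan-and-narrow technique described above is easy to prototype against a plain in-process buffer. This Python sketch only illustrates the narrowing logic (no real process memory or Windows APIs involved; the buffer, values and offsets are all made up):

```python
import struct

# Fake "process memory": eight 32-bit ints (values invented for the demo).
memory = bytearray(struct.pack("<8i", 100, 52, 7, 52, 999, 52, 3, 42))

def scan(buf, value, candidates=None):
    """Return byte offsets whose little-endian 32-bit int equals `value`,
    optionally restricted to offsets found by a previous pass."""
    offsets = candidates if candidates is not None else range(len(buf) - 3)
    return [o for o in offsets
            if struct.unpack_from("<i", buf, o)[0] == value]

hits = scan(memory, 52)           # first pass: the "score" is currently 52
# The score changes in the game; mimic that by updating the real cell only.
struct.pack_into("<i", memory, 4, 60)
hits = scan(memory, 60, hits)     # second pass narrows the candidates
print(hits)                       # [4] - only the real score cell remains

struct.pack_into("<i", memory, hits[0], 9999)  # write the desired "score"
print(struct.unpack_from("<i", memory, 4)[0])  # 9999
```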
Memory modifying in C++
[ "", "c++", "memory", "" ]
I'd like to compile a cpp file without quitting vim. I know about **:!g++ file.cpp**, but I prefer **:make**, so I added these lines to my .vimrc file:

> au FileType C set makeprg=gcc\ %
> au FileType Cpp set makeprg=g++\ %

but I keep getting the message "make: \*\*\* No targets specified and no makefile found. Stop." Can anyone tell me what is wrong with my settings? I used to be able to compile successfully with the options above.
It turned out I had to change **C,Cpp** to **c,cpp**; now it works fine. Thank you all, especially **Rob Wells**, whose answer helped me a lot.
You need the substitution there, try something like: ``` set makeprg=gmake\ %:r.o ``` Oh, this assumes that you've got: 1. a (M|m)akefile in the directory, or 2. default SUFFIX rules are available for your environment (which it looks like there aren't) Check for the default by entering: ``` make -n <my_file>.o ``` and see if that gives you something sensible. If there is a makefile in another location you can add the -f option to point at the makefile, for example: ``` set makeprg=gmake\ -f\ ../some_other_dir/makefile\ %:r.o ``` BTW For learning about make, and especially gmake, I'd suggest having a look at the excellent book "Managing Projects with GNU Make" ([sanitised Amazon link](https://rads.stackoverflow.com/amzn/click/com/0596006101)). HTH. cheers
compile directly from vim
[ "", "c++", "vim", "compilation", "" ]
When I attempt to set an application role on a SqlConnection with [sp\_setapprole](http://msdn.microsoft.com/en-us/library/ms188908.aspx) I sometimes get the following error in the Windows event log... > The connection has been dropped because the principal that opened it subsequently assumed a new security context, and then tried to reset the connection under its impersonated security context. This scenario is not supported. See "Impersonation Overview" in Books Online. ... and a matching exception is thrown in my application. These are pooled connections, and there was a time when connection pooling was incompatible with app roles - in fact the old advice from Microsoft was to [disable connection pooling](http://support.microsoft.com/default.aspx?scid=KB;EN-US;Q229564) (!!) but with the introduction of [sp\_unsetapprole](http://msdn.microsoft.com/en-us/library/ms365415.aspx) it is now (in theory) possible to clean a connection before returning it to the pool. I believe these errors occur when (for reasons unknown) sp\_unsetapprole is not run on the connection before it is closed and returned to the connection pool. sp\_setapprole is then doomed to fail when this connection is returned from the pool. I can catch and handle this exception but I would much prefer to detect the impending failure and avoid the exception (and messages in the event log) altogether. Is it possible to detect the problem without causing the exception? Thoughts or advice welcome.
Nope, it's not possible.
It would seem that you are calling sp\_setapprole but not calling sp\_unsetapprole and then letting the connection just be returned to the pool. I would suggest using a structure (or a class, if you have to use this across methods) with an implementation of IDisposable which will take care of this for you:

```
public struct ConnectionManager : IDisposable
{
    // The backing for the connection.
    private SqlConnection connection;

    // The connection.
    public SqlConnection Connection { get { return connection; } }

    public void Dispose()
    {
        // If there is no connection, get out.
        if (connection == null)
        {
            // Get out.
            return;
        }

        // Make sure connection is cleaned up.
        using (SqlConnection c = connection)
        {
            // See (1). Create the command for sp_unsetapprole
            // and then execute.
            using (SqlCommand command = ...)
            {
                // Execute the command.
                command.ExecuteNonQuery();
            }
        }
    }

    public ConnectionManager Release()
    {
        // Create a copy to return.
        ConnectionManager retVal = this;

        // Set the connection to null.
        retVal.connection = null;

        // Return the copy.
        return retVal;
    }

    public static ConnectionManager Create()
    {
        // Create the return value, use a using statement.
        using (ConnectionManager cm = new ConnectionManager())
        {
            // Create the connection and assign here.
            // See (2).
            cm.connection = ...

            // Create the command to call sp_setapprole here.
            using (SqlCommand command = ...)
            {
                // Execute the command.
                command.ExecuteNonQuery();

                // Return the connection, but call Release
                // so the connection is still live on return.
                return cm.Release();
            }
        }
    }
}
```

1. You will create the SqlCommand that corresponds to calling the sp\_setapprole stored procedure. You can generate the cookie and store it in a private member variable as well.
2. This is where you create your connection.

The client code then looks like this:

```
using (ConnectionManager cm = ConnectionManager.Create())
{
    // Get the SqlConnection for use.
    // No need for a using statement; when Dispose is
    // called on the connection manager, the connection will be
    // closed.
    SqlConnection connection = cm.Connection;

    // Use connection appropriately.
}
```
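As an aside, the IDisposable set-role/unset-role pattern above corresponds to a context manager in other languages. This Python sketch shows the same guarantee with the SQL calls stubbed out (the FakeConnection class, role name and cookie value are all invented for illustration):

```python
from contextlib import contextmanager

class FakeConnection:
    """Stand-in for SqlConnection; just records the commands it runs."""
    def __init__(self):
        self.commands = []

    def execute(self, sql):
        self.commands.append(sql)

@contextmanager
def app_role(conn, role):
    cookie = "0xC00KIE"  # sp_setapprole would hand this back
    conn.execute("EXEC sp_setapprole @rolename=" + role)
    try:
        yield conn
    finally:
        # Runs even when the body throws, so a pooled connection
        # never goes back with the app role still set.
        conn.execute("EXEC sp_unsetapprole @cookie=" + cookie)

conn = FakeConnection()
try:
    with app_role(conn, "reporting"):
        raise RuntimeError("query blew up")
except RuntimeError:
    pass

print(conn.commands[-1])  # the unset always ran
```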
Detecting unusable pooled SqlConnections
[ "", "c#", "sql-server", "ado.net", "connection-pooling", "application-role", "" ]
I created a new property for my db model in the Google App Engine Datastore. Old:

```
class Logo(db.Model):
    name = db.StringProperty()
    image = db.BlobProperty()
```

New:

```
class Logo(db.Model):
    name = db.StringProperty()
    image = db.BlobProperty()
    is_approved = db.BooleanProperty(default=False)
```

How do I query for the Logo records which do not have the 'is\_approved' value set? I tried

```
logos.filter("is_approved = ", None)
```

but it didn't work. In the Data Viewer the new field values are displayed as *&lt;missing&gt;*.
According to the App Engine documentation on [Queries and Indexes](http://code.google.com/appengine/docs/python/datastore/queriesandindexes.html#Introducing_Indexes), there is a distinction between entities that have *no* value for a property, and those that have a *null* value for it; and "Entities Without a Filtered Property Are Never Returned by a Query." So it is not possible to write a query for these old records. A useful article is [Updating Your Model's Schema](http://code.google.com/appengine/articles/update_schema.html), which says that the only currently-supported way to find entities missing some property is to examine all of them. The article has example code showing how to cycle through a large set of entities and update them.
A practice which helps us is to assign a "version" field to every Kind, initially set to 1 on every record. When a need like this comes up (populating a new or existing field in a large dataset), the version field lets us iterate through all the records where "version = 1": for each one we set "null" or some other initial value on the new field, bump the version to 2, and store the record. The benefit of the "version" field is that the selection can keep querying against the lower version number over as many sessions, or as much time, as is needed until ALL records carry the new field's default value.
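A toy illustration of that versioned backfill, using plain dicts to stand in for datastore entities (a real migration would page through App Engine query results and call put(); the field names here just mirror the question):

```python
# Fake datastore: dicts stand in for entities of the Logo kind.
entities = [
    {"version": 1, "name": "logo-a"},
    {"version": 1, "name": "logo-b"},
    {"version": 2, "name": "logo-c", "is_approved": True},
]

def backfill(rows, batch_size=2):
    """Upgrade version-1 rows in small batches, adding the new field."""
    while True:
        batch = [r for r in rows if r["version"] == 1][:batch_size]
        if not batch:
            break
        for row in batch:
            row.setdefault("is_approved", False)  # the new field's default
            row["version"] = 2                    # never selected again

backfill(entities)
print([r["is_approved"] for r in entities])  # [False, False, True]
```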
AppEngine: Query datastore for records with <missing> value
[ "", "python", "google-app-engine", "google-cloud-datastore", "" ]
I want to unserialize a `session_encode()`'d string of session data to my own array (i.e. not to `$_SESSION`.) There doesn't appear to be an in-built function that handles this. There's `session_decode()` but it writes directly to the `$_SESSION` super-global. There's `unserialize()` but it returns false on `session_encode()`'d strings as they're a slightly different format. What's the best way to do this?
Check out the comments in the PHP manual's [`session_decode`](http://us.php.net/session_decode) page. It has a bunch of user functions to handle decoding a `session_encode` string manually. As far as I know, that's the only way to go.
I noticed that the posted solutions (on php.net) for manually decoding sessions are not perfect, so I've contributed a more robust solution. The preg\_match solution can never work. It's not so hard to find a case that might break unserialization. In the case of jason-joeymail is breaks on: ``` $_SESSION["test"] = ";oops|"; ``` Below you can find my solution. It doesn't use a regular expression but rather the reversibility of the serialize operation and the 'feature' that serialize ignores all further input when it thinks it's done. It's by no means a beautiful or particularly fast solution but it is a more robust solution. I've added a deserializer for "php" and "php\_binary". It should be trivial to add one for "wddx". ``` class Session { public static function unserialize($session_data) { $method = ini_get("session.serialize_handler"); switch ($method) { case "php": return self::unserialize_php($session_data); break; case "php_binary": return self::unserialize_phpbinary($session_data); break; default: throw new Exception("Unsupported session.serialize_handler: " . $method . ". Supported: php, php_binary"); } } private static function unserialize_php($session_data) { $return_data = array(); $offset = 0; while ($offset < strlen($session_data)) { if (!strstr(substr($session_data, $offset), "|")) { throw new Exception("invalid data, remaining: " . 
substr($session_data, $offset)); } $pos = strpos($session_data, "|", $offset); $num = $pos - $offset; $varname = substr($session_data, $offset, $num); $offset += $num + 1; $data = unserialize(substr($session_data, $offset)); $return_data[$varname] = $data; $offset += strlen(serialize($data)); } return $return_data; } private static function unserialize_phpbinary($session_data) { $return_data = array(); $offset = 0; while ($offset < strlen($session_data)) { $num = ord($session_data[$offset]); $offset += 1; $varname = substr($session_data, $offset, $num); $offset += $num; $data = unserialize(substr($session_data, $offset)); $return_data[$varname] = $data; $offset += strlen(serialize($data)); } return $return_data; } } ``` Usage: ``` Session::unserialize(session_encode()); ```
How can I unserialize session data to an arbitrary variable in PHP?
[ "", "php", "session", "serialization", "" ]
I've noticed with `Integer.parseInt()` that you don't have to surround it with a try catch or declare that the method might throw an exception, despite the fact that it "throws" a `NumberFormatException`. Why don't I have to explicitly catch the `NumberFormatException` or state that my method throws it?
Because that is a "runtime" exception. **RuntimeExceptions** are used to identify programming problems (that a good programmer could avoid), while **checked exceptions** identify environment problems (that cannot be avoided no matter how well you program; a server being down, for instance). You could read more about [them here](https://stackoverflow.com/questions/27578/when-to-choose-checked-and-unchecked-exceptions). There are actually [three kinds of exceptions](https://stackoverflow.com/questions/462501/exception-other-than-runtimeexception/462745#462745), and only one of them should be handled (most of the time).
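For comparison, a language where every exception is effectively "unchecked" makes the trade-off visible: in Python, int() raises ValueError for bad input (the analogue of NumberFormatException) and catching it is entirely optional. A small sketch (the helper name is made up):

```python
def parse_or_default(text, default=0):
    """int(text), falling back to a default on bad input."""
    try:
        return int(text)
    except ValueError:  # Python's NumberFormatException; catching is optional
        return default

print(parse_or_default("42"))   # 42
print(parse_or_default("abc"))  # 0
```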
```
                 Throwable
                /         \
           Error           Exception
                          /         \
                 *checked*           RuntimeException
                                            \
                                        *unchecked*
```

See [Thinking in Java](http://smart2help.com/e-books/tij-3rd-edition/TIJ311.htm) for a good explanation of Checked vs. Unchecked exceptions. Some consider the idea of checked exceptions a failed experiment. For example, both Spring and Hibernate use unchecked exceptions, and often wrap checked exceptions in unchecked versions.
Why don't you have to explicitly declare that you might throw some built in exceptions in Java?
[ "", "java", "exception", "" ]
I'm currently working on a j2ee project that's been in beta for a while now. Right now we're just hammering out some of the issues with the deployment process. Specifically, there are a number of files embedded in the war (some xml-files and .properties) that need different versions deploying depending on whether you are in a dev, testing or production environment. Stuff like loglevels, connection pools, etc. So I was wondering how developers here structure their process for deploying webapps. Do you offload as much configuration as you can to the application server? Do you replace the settings files programmatically before deploying? Pick a version during build process? Manually edit the wars? Also how far do you go in providing dependencies through the application servers' static libraries and how much do you put in the war themselves? All this just to get some ideas of what the common (or perhaps best) practice is at the moment.
I work in an environment where a separate server team performs the configuration of the QA and Production servers for our applications. Each application is generally deployed on two servers in QA and three servers in Production. My dev team has discovered that it is best to minimize the amount of configuration required on the server by putting as much configuration as possible in the war (or ear). This makes server configuration easier and also minimizes the chance that the server team will incorrectly configure the server. We don't have machine-specific configuration, but we do have environment-specific configuration (Dev, QA, and Production). We have configuration files stored in the war file that are named by environment (ex. dev.properties, qa.properties, prod.properties). We put a -D property on the server VM's java command line to specify the environment (ex. java -Dapp.env=prod ...). The application can look for the app.env system property and use it to determine the name of the properties file to use. I suppose if you have a small number of machine-specific properties then you could specify them as -D properties as well. Commons Configuration provides an easy way to combine properties files with system properties. We configure connection pools on the server. We name the connection pool the same for every environment and simply point the servers that are assigned to each environment to the appropriate database. The application only has to know the one connection pool name.
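The select-configuration-by-environment idea can be sketched in a few lines; here Python and an environment variable stand in for the JVM's -D system property, and all the property names and values are invented:

```python
import os

# Invented example property sets, one per environment.
CONFIGS = {
    "dev":  {"db_url": "jdbc:h2:mem:test",              "log_level": "DEBUG"},
    "qa":   {"db_url": "jdbc:postgresql://qa-db/app",   "log_level": "INFO"},
    "prod": {"db_url": "jdbc:postgresql://prod-db/app", "log_level": "WARN"},
}

def load_config():
    """Pick the property set named by APP_ENV (cf. java -Dapp.env=prod)."""
    env = os.environ.get("APP_ENV", "dev")
    if env not in CONFIGS:
        raise ValueError("unknown environment: %r" % env)
    return CONFIGS[env]

os.environ["APP_ENV"] = "qa"
print(load_config()["log_level"])  # INFO
```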
I think that if the properties are machine/deployment specific, then they belong on the machine. If I'm going to wrap things up in a war, it should be drop-innable, which means nothing that's specific to the machine it's running on. This idea will break if the war has machine dependent properties in it. What I like to do is build a project with a properties.example file, each machine has a .properties that lives somewhere the war can access it. An alternative way would be to have ant tasks, e.g. for dev-war, stage-war, prod-war and have the sets of properties part of the project, baked in in the war-build. I don't like this as much because you're going to end up having things like file locations on an individual server as part of your project build.
How do you manage embedded configuration files and libraries in java webapps?
[ "", "java", "deployment", "jakarta-ee", "web-applications", "war", "" ]
I want to cache a `DataGridView` row between 'refreshes' i.e. between a `Rows.Clear()` and a `Columns.Clear()`. However it seems that calling the `Clear()` methods does not unbind the data from the `DataGridView` instance, An example, ``` public partial class Form1 : Form { public Form1() { InitializeComponent(); } DataGridViewRow cachedRow = new DataGridViewRow(); private void button1_Click(object sender, EventArgs e) { this.dataGridView1.Rows.Clear(); this.dataGridView1.Columns.Clear(); DataGridViewColumn aColumn = new DataGridViewTextBoxColumn(); this.dataGridView1.Columns.Add(aColumn); this.dataGridView1.Rows.Add(cachedRow); } } ``` This is done on a Form containing a `DataGridView` and a `Button`. Clicking the button twice gives the "Row provided already belongs to a DataGridView" error. There has been some discussion online about this that suggests that it may be a bug, however this was around 2004.
Once a row is part of a gridview, you can't re-add it. The row itself [keeps track of what DataGridView](http://msdn.microsoft.com/en-us/library/system.windows.forms.datagridviewrow.aspx) it is in. I would suggest making a copy of the cached row and adding the copy to the view. Since you make a new copy each time it won't be in the view. Alternatively, you can go through and remove only those rows that have not been cached from the view, leaving the cached rows behind so that you don't need to re-add it.
Clone it to a `DataRow()` and then `DataTable.ImportRow` to the originating `DataTable`.
How do you cache a row without raising a "Row provided already belongs to a DataGridView" error?
[ "", "c#", ".net", "winforms", "datagridview", "" ]
Many times when I am reading other people's code I just want to find where and how a variable is defined. What I normally do now is search for the type of the variable until I find the definition, which is very time-consuming, and I guess there are tools that can help in this routine situation. Any suggestions for tools or commands to help with this task? I know that with a GUI and a project this is done automatically; I am talking about a way to do this without a GUI, in text mode only. I am running under Linux and I am using C/C++, but suggestions for other languages are welcome. Thanks a lot. **A possible solution** Michel proposed a simple and effective solution in one of the comments: define the variable again. The compiler will then report, at compile time, where the previous definition is. Of course, to apply this solution we need to think first about the locality of the variable.
*Edit*: OK, you say you're using C++. I'm editing my response. I would use the C preprocessor and then grep for the variable. It will appear in the first place. ``` cpp -I...(preprocessor options here) file.cpp | grep variable ``` The C preprocessor will join all the includes that the program uses, and the definition has to be before any usage of that variable in the file. Not a perfect thing, but without an IDE or a complete language description/managing tool, you only have the text. Another option would be using [ctags](http://ctags.sourceforge.net/). It understands the C and C++ syntaxes (among others), and can be searched for variables and functions using command line tools, emacs and vi, among others.
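If you want to stay with pure text tools, a crude declaration scanner along the same lines can be scripted. This Python sketch just pattern-matches for likely declarations of a name - a rough heuristic, nowhere near a real parser, and the regex will miss plenty of C++ declaration forms:

```python
import re

def find_declarations(source, name):
    """Return (line_number, line) pairs that look like declarations of `name`.
    Heuristic: a type-ish token, whitespace, the name, then '=', ';', ',' etc."""
    pattern = re.compile(
        r"\b[A-Za-z_][\w:<>]*\s+[*&]*%s\b\s*[=;,)\[]" % re.escape(name))
    return [(n, line.strip())
            for n, line in enumerate(source.splitlines(), 1)
            if pattern.search(line)]

code = """
int count = 0;
void run(std::vector<int> items) {
    for (int i = 0; i < count; ++i)
        use(count);
}
"""
print(find_declarations(code, "count"))  # only line 2, the actual declaration
```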
You've already given the most appropriate tool: an IDE. This is *exactly* the kind of thing which an IDE excels at. Why would you *not* want to use an IDE if you're finding development painful without one? Note that Emacs, Vim etc can work as IDEs - I'm not talking about forcing you the world of GUIs if you want to stay in a text-only situation, e.g. because you're SSHing in. (I'm really *not* trying to be rude here. I just think you've discounted the obvious solution without explaining why.)
Any program or trick to find the definition of a variable?
[ "", "c++", "variables", "find", "definition", "" ]
Is there a way to serialize a lexical closure in Python using the standard library? pickle and marshal appear not to work with lexical closures. I don't really care about the details of binary vs. string serialization, etc., it just has to work. For example:

```
def foo(bar, baz) :
    def closure(waldo) :
        return baz * waldo
    return closure
```

I'd like to just be able to dump instances of closure to a file and read them back. Edit: One relatively obvious way that this could be solved is with some reflection hacks to convert lexical closures into class objects and vice-versa. One could then convert to classes, serialize, unserialize, convert back to closures. Heck, given that Python is duck typed, if you overloaded the function call operator of the class to make it look like a function, you wouldn't even really need to convert it back to a closure and the code using it wouldn't know the difference. If any Python reflection API gurus are out there, please speak up.
If you simply use a class with a `__call__` method to begin with, it should all work smoothly with `pickle`.

```
class foo(object):
    def __init__(self, bar, baz):
        self.baz = baz

    def __call__(self, waldo):
        return self.baz * waldo
```

On the other hand, a hack which converted a closure into an instance of a new class created at runtime would not work, because of the way `pickle` deals with classes and instances. `pickle` doesn't store classes; only a module name and class name. When reading back an instance or class it tries to import the module and find the required class in it. If you used a class created on-the-fly, you're out of luck.
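Assuming the class is defined at module level (so pickle can find it by name), the difference is easy to demonstrate in standard Python 3: the callable class round-trips, the genuine closure does not.

```python
import pickle

class Foo:
    """Picklable stand-in for the closure returned by foo(bar, baz)."""
    def __init__(self, bar, baz):
        self.baz = baz

    def __call__(self, waldo):
        return self.baz * waldo

def foo(bar, baz):
    def closure(waldo):
        return baz * waldo
    return closure

# The callable instance survives a pickle round-trip...
restored = pickle.loads(pickle.dumps(Foo(3, 5)))
print(restored(4))  # 20

# ...whereas the genuine closure is rejected by the standard pickler.
try:
    pickle.dumps(foo(3, 5))
    print("closure pickled?!")
except (pickle.PicklingError, AttributeError, TypeError) as exc:
    print("closure not picklable:", type(exc).__name__)
```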
PiCloud has released an open-source (LGPL) pickler which can handle function closure and a whole lot more useful stuff. It can be used independently of their cloud computing infrastructure - it's just a normal pickler. The whole shebang is documented [here](http://docs.picloud.com/), and you can download the code via 'pip install cloud'. Anyway, it does what you want. Let's demonstrate that by pickling a closure:

```
import pickle
from StringIO import StringIO
import cloud

# generate a closure
def foo(bar, baz):
    def closure(waldo):
        return baz * waldo
    return closure

closey = foo(3, 5)

# use the picloud pickler to pickle to a string
f = StringIO()
pickler = cloud.serialization.cloudpickle.CloudPickler(f)
pickler.dump(closey)

# rewind the virtual file and reload
f.seek(0)
closey2 = pickle.load(f)
```

Now we have `closey`, the original closure, and `closey2`, the one that has been restored from a string serialisation. Let's test 'em.

```
>>> closey(4)
20
>>> closey2(4)
20
```

Beautiful. The module is pure python—you can open it up and easily see what makes the magic work. (The answer is a lot of code.)
Python serialize lexical closures?
[ "", "python", "serialization", "closures", "" ]
I am just learning C# through Visual Studio 2008. I was wondering: what exactly is the correlation between databases, datasets and binding sources? Also, what is the function of the table adapter?
At a super high level: * Database -- stores raw data * DataSet -- a .NET object that can be used to read, insert, update and delete data in a database * BindingSource -- a .NET object that can be used for Data Binding for a control. The BindingSource could point to a DataSet, in which case the control would display and edit that data * TableAdapter -- Maps data from a database table into a DataSet There is a lot more to all of these, and understanding the way ADO.NET is architected can take a bit of time. Good luck!
A DataSet is usually used to hold a result from the database in memory, i.e. it contains a DataTable object. The DataSet and DataTable objects themselves are independent of the database, so the result doesn't have to come from a database. The DataSet can contain several DataTables, and you can even define relations between them. It's like a mini database in memory. A binding source is any object that can provide a list of objects with properties. A DataSet or a DataTable can do that, but it could basically be any kind of list containing objects that have properties. A TableAdapter is used to read data from a DataReader provided by a Command object, and put the data in a DataTable object.
C# (Visual studio): Correlation between database, dataset, binding source
[ "", "c#", "database", "visual-studio", "dataset", "bindingsource", "" ]
I'm currently working on cross-platform applications and was just curious as to how other people tackle problems such as:

* Endianness
* Floating point support (some systems emulate it in software, VERY slow)
* I/O systems (i.e. display, sound, file access, networking, etc.)
* And of course, the plethora of compiler differences

Obviously this is targeted at languages like C/C++ which don't abstract most of this stuff (unlike Java or C#, which aren't supported on a lot of systems). And if you were curious, the systems I'm developing on are the Nintendo DS, Wii, PS3, XBox360 and PC.

---

*EDIT* There have been a lot of really good answers on here, ranging from how to handle the differences yourself to library suggestions (even the suggestion of just giving in and using Wine). I'm not actually looking for a solution (I already have one), but was just curious as to how others tackle this situation, as it is always good to see how others think/code so you can continue to evolve and grow.

Here's the way I've tackled the problem (and, if you haven't guessed from the list of systems above, I'm developing console/Windows games). Please keep in mind that the systems I work on generally don't have cross-platform libraries already written for them (Sony actually recommends that you write your own rendering engine from scratch and just use their OpenGL implementation, which doesn't quite follow the standards anyway, as a reference).

**Endianness** All of our assets can be custom made for each system. All of our raw data (except for textures) is stored in XML, which we convert to a system-specific binary format when the project is built. Seeing as we are developing for game consoles, we don't need to worry about data being transferred between platforms with different endian formats (only the PC allows users to do this; thus it is insulated from the other systems as well).
**Floating point support** Most modern systems handle floating point values fine; the exception is the Nintendo DS (and the GBA, but that's pretty much a dead platform for us these days). We handle this through 2 different classes. The first is a "fixed point" class (templated, so you can specify what integer type to use and how many bits for the decimal value) which implements all arithmetic operators (taking care of the bit-shifts) and automates type conversions. The second is a "floating point" class, which is basically just a wrapper around a float; the only difference is that it also implements the shift operators. By implementing the shift operators, we can use bit shifts for fast multiplications/divisions on the DS and then seamlessly transition to platforms that work better with floats (like the XBox360).

**I/O Systems** This is probably the trickiest problem for us, because every system has its own method for controller input, graphics (the XBox360 uses a variant of DirectX9, the PS3 has OpenGL or you can write your own from scratch, and the DS and Wii have their own proprietary systems), sound and networking (really only the DS differs much in protocol, but each console has its own server system that you have to use). The way we ended up tackling this was by writing fairly high-level wrappers for each of the systems (e.g. meshes for graphics, key-mapping systems for controllers, etc.) and having all the systems use the same header files for access. It's then just a matter of writing platform-specific cpp files (thus forming "the engine").

**Compiler Differences** This is one thing that can't be tackled too easily. As we run into problems with compilers, we log the information on a local wiki (so others can see what to look out for and the workarounds to go with it) and, if possible, write a macro that will handle the situation for us.
While it's not the most elegant solution, it works, and seeing how some compilers are simply broken in certain places, the more elegant solutions tend to break the compilers anyway. (I just wish all of the compilers implemented Microsoft's "#pragma once" directive; it's so much easier than wrapping everything in #ifdefs.)
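The fixed-point scheme described under **Floating point support** boils down to storing value * 2**frac_bits in an integer and shifting back down after multiplies. A minimal Python sketch of the arithmetic (the real thing would be a C++ template; the 12-bit split here is just an example):

```python
class Fixed:
    """Toy fixed-point number: integer payload with FRAC fractional bits."""
    FRAC = 12  # example split; the real class would make this a parameter

    def __init__(self, value=0.0, raw=None):
        self.raw = raw if raw is not None else int(round(value * (1 << self.FRAC)))

    def __add__(self, other):
        return Fixed(raw=self.raw + other.raw)

    def __mul__(self, other):
        # Multiply payloads, shift back down - integer ops only, no FPU.
        return Fixed(raw=(self.raw * other.raw) >> self.FRAC)

    def __float__(self):
        return self.raw / (1 << self.FRAC)

a, b = Fixed(1.5), Fixed(2.25)
print(float(a + b))  # 3.75
print(float(a * b))  # 3.375
```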
A great deal of this complexity is generally solved by the third party libraries (boost being the most famous) you are using. One rarely writes everything from scratch...
For endian issues in data loaded from files, embed a value such as 0x12345678 in the file header. The object that loads the data looks at this value; if it matches its internal representation of the value, then the file contains native-endian values and the load is simple from there. If the value does not match, then the file has a foreign endianness, so the loader needs to flip the values before storing them.
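A sketch of that header check, with Python's struct module playing both the writer and the loader (32-bit unsigned values assumed; the magic constant is the one suggested above):

```python
import struct

MAGIC = 0x12345678

def write_file(values, byte_order="<"):
    """Pack a header magic plus 32-bit values; "<" = little, ">" = big endian."""
    return struct.pack(byte_order + "I" * (len(values) + 1), MAGIC, *values)

def read_file(blob):
    """Read native-endian; if the magic doesn't match, byte-swap every word."""
    fmt = "=" + "I" * (len(blob) // 4)
    magic, *values = struct.unpack(fmt, blob)
    if magic != MAGIC:
        swapped = b"".join(blob[i:i + 4][::-1] for i in range(0, len(blob), 4))
        magic, *values = struct.unpack(fmt, swapped)
        if magic != MAGIC:
            raise ValueError("not a recognised file")
    return values

data = [1, 2, 0xDEADBEEF]
for order in ("<", ">"):   # a file written on either kind of machine...
    print(read_file(write_file(data, order)))  # ...loads the same both times
```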
What's your favorite way of dealing with cross-platform development?
[ "", "c++", "c", "cross-platform", "" ]
I want to write an elegant linq query to handle the following SAMPLE object model:

```
class Category
{
    public string Name { get; set; }
    public IList<Product> Products { get; set; }
}

class Product
{
    public string Title { get; set; }
    public IList<Photo> Photos { get; set; }
}

class Photo
{
    public int Id { get; set; }
}
```

I constructed the following query to get the Photo Id:

```
var query = from category in Factory.GetCategories()
            where category.Name == "Cameras"
            select (from product in category.Products
                    where product.Title == "Sony"
                    select (from photo in product.Photos
                            select photo.Id)
                   );

var v = query.ToList();
```

At the moment the query does not project correctly; I have to add a FirstOrDefault() to each of the sub-selects:

```
var query = from category in Factory.GetCategories()
            where category.Name == "Cameras"
            select (from product in category.Products
                    where product.Title == "Sony"
                    select (from photo in product.Photos
                            select photo.Id).FirstOrDefault()
                   ).FirstOrDefault();

var v = query.ToList();
```

Is there a better way to do this? Ignoring the fact that we are not dealing with a database and PK's/FK's are not in play. I really want to avoid writing a big for loop when I could do the same thing in a linq query:

```
foreach (var category in Factory.GetCategories())
{
    if (category.Name == "Camera")
    {
        foreach (var product in category.Products)
        {
            if (product.Title == "Sony")
            {
                foreach (var photo in product.Photos)
                {
                    // get data
                    int id = photo.Id;
                }
            }
        }
    }
}
```

The actual implementation is more complex than this simple object model. I want to get the basic idea from this simple sample so I can apply it to my real object model. Cheers!
Do you just want the flattened ids? ``` var query = from category in Factory.GetCategories() where category.Name == "Cameras" from product in category.Products where product.Title == "Sony" from photo in product.Photos select photo.Id; ```
Well yes - you have to add a FirstOrDefault because there's no such thing as "the" photo ID. There are potentially many photos per camera - which do you want? If you only care about the first photo, that's fine - although multiple "from" clauses would make your life easier: ``` var query = from category in Factory.GetCategories() where category.Name == "Cameras" from product in category.Products where product.Title == "Sony" select product.Photos.Select(photo => photo.Id) .FirstOrDefault(); ``` Note that that will return 0 for a product with no photos. Is that what you want? If you could be clearer in your requirements, we'll be better equipped to help you. EDIT: If you only want the very first ID of *any* Sony Camera, then use: ``` var query = from category in Factory.GetCategories() where category.Name == "Cameras" from product in category.Products where product.Title == "Sony" from photo in product.Photos select photo.Id; var firstId = query.FirstOrDefault(); ```
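The flattening that the multiple `from` clauses perform is the same as collapsing the three nested loops into one sequence. A rough language-neutral sketch of that shape in Python, using made-up dicts in place of the Category/Product/Photo classes:

```python
def photo_ids(categories, category_name, product_title):
    # Equivalent of:
    #   from category ... where ...
    #   from product ...  where ...
    #   from photo ...    select photo.Id
    return [photo["id"]
            for category in categories
            if category["name"] == category_name
            for product in category["products"]
            if product["title"] == product_title
            for photo in product["photos"]]
```

Each extra `for`/`if` pair plays the role of one more `from`/`where` clause, which is why no `FirstOrDefault()` is needed to flatten the result.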
Linq query with aggregates
[ "", "c#", "linq", "linq-to-objects", "" ]
I want to use this snippet from [Mr-Edd's iostreams article](http://www.mr-edd.co.uk/?p=16) to print std::clog somewhere. ``` #include <iostream> #include <iomanip> #include <string> #include <sstream> int main() { std::ostringstream oss; // Make clog use the buffer from oss std::streambuf *former_buff = std::clog.rdbuf(oss.rdbuf()); std::clog << "This will appear in oss!" << std::flush; std::cout << oss.str() << '\n'; // Give clog back its previous buffer std::clog.rdbuf(former_buff); return 0; } ``` So, in a main loop, I will do something like ``` while (! oss.eof()) { //add to window text somewhere } ``` Here's the [ostringstream docs](http://en.cppreference.com/w/cpp/io/basic_ostringstream) but I'm having trouble understanding the best way to do this. I have a method that displays the text, I just want to call it with any data in the ostringstream. What is the easiest/best way to get anything sent to std::clog redirected to a method of my choice? Is it as above, and fill in the while !eof part (not sure how), or is there a better way, say by overloading some 'commit' operator somewhere that calls my method? I'm looking for quick and easy, I really don't want to start defining sinks and such with boost iostreams as the article does - that stuff is way over my head.
I encourage you to look at [`Boost.IOStreams`](http://www.boost.org/doc/libs/release/libs/iostreams/doc/index.html). It seems to fit your use-case nicely, and using it is surprisingly simple: ``` #include <boost/iostreams/concepts.hpp> #include <boost/iostreams/stream_buffer.hpp> #include <iostream> namespace bio = boost::iostreams; class MySink : public bio::sink { public: std::streamsize write(const char* s, std::streamsize n) { //Do whatever you want with s //... return n; } }; int main() { bio::stream_buffer<MySink> sb; sb.open(MySink()); std::streambuf * oldbuf = std::clog.rdbuf(&sb); std::clog << "hello, world" << std::endl; std::clog.rdbuf(oldbuf); return 0; } ```
I **think** you want to pull the text from the ostream while it's not empty. You could do something like this: ``` std::string s = oss.str(); if(!s.empty()) { // output s here oss.str(""); // set oss to contain the empty string } ``` Let me know if this isn't what you wanted. Of course, the better solution is to remove the middle man and have a new streambuf go wherever you **really** want it, no need to probe later. Something like this (note, this does it for every char, but there are plenty of buffering options in streambufs as well): ``` class outbuf : public std::streambuf { public: outbuf() { // no buffering, overflow on every char setp(0, 0); } virtual int_type overflow(int_type c = traits_type::eof()) { // add the char to wherever you want it, for example: // DebugConsole.setText(DebugControl.text() + c); return c; } }; int main() { // set std::cout to use my custom streambuf outbuf ob; std::streambuf *sb = std::cout.rdbuf(&ob); // do some work here // make sure to restore the original so we don't get a crash on close! std::cout.rdbuf(sb); return 0; } ```
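The swap-and-restore pattern both answers rely on (replace the stream's buffer, do work, always put the old one back) is not C++-specific. A hypothetical Python equivalent, routing `print` output to a caller-supplied function (the names `CallbackWriter` and `capture` are invented for the sketch):

```python
import sys
import io

class CallbackWriter(io.TextIOBase):
    """Forwards every write to a user-supplied function."""
    def __init__(self, fn):
        self.fn = fn
    def write(self, s):
        self.fn(s)
        return len(s)

def capture(fn, action):
    old = sys.stdout                  # remember the previous "buffer"
    sys.stdout = CallbackWriter(fn)
    try:
        action()
    finally:
        sys.stdout = old              # always restore, like rdbuf(former_buff)
```

The `try`/`finally` mirrors the C++ advice: restoring the original stream must happen even if the redirected code throws.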
redirect std::cout to a custom writer
[ "", "c++", "stream", "stringstream", "iostream", "streambuf", "" ]
Can you recommend a good series of articles *or preferably a book* on how to get started with threading in general and in C# in particular? I am primarily looking for the use of threads in console applications and in ASP.Net apps. I understand only the very basics of threads and know that "here be dragons", so want to get a good grounding in it before I start using them. Things I am curious about are things like the concept of having a threadpool, how you manage the size of it, how you choose for things to be queued until a thread is available vs forcing a new thread to start etc. Also, I understand that IIS has lots of built-in thread handling, so anything that explains how to work with threads in ASP.Net under IIS and how it differs from threading in a console C# application is interesting. My intended usage includes things like: * The user does something on the ASP.Net page that causes my server-side code to need to connect to another system and carry out lengthy operations, so I want to return control to the user quickly by firing that action to another thread. The user can keep watching progress through AJAX or simply move away - the web is stateless after all :) * Use of Fire and Forget patterns (lots of sample code out there, I want to understand more about how they work) Thanks
As well as my own articles linked by Andrew, Joe Albahari has [another tutorial](http://albahari.com/threading). If you want a really thorough examination, read Joe Duffy's [Concurrent Programming in Windows](https://rads.stackoverflow.com/amzn/click/com/032143482X).
Here is a good set of articles by our very own Jon Skeet: <http://www.yoda.arachsys.com/csharp/threads/>
Beginners threading in C#
[ "", "c#", "asp.net", "multithreading", "iis", "" ]
What are the most important things you know about generics: hidden features, common mistakes, best and most useful practices, tips... I am starting to implement most of my library/API using generics and would like to collect the most common patterns, tips, etc., found in practice. Let me formalize the question: What is the most important thing you've learned about generics? **Please try to provide examples -- it would be easier to understand, as opposed to convoluted and overly-dry descriptions** Thanks This question is somewhat similar to [Jon's question](https://stackoverflow.com/questions/215548/whats-the-hardest-or-most-misunderstood-aspect-of-linq?...I), though on a different subject.
One of the most important things I've learned is that you can [constrain the generic type parameter(s)](http://msdn.microsoft.com/en-us/library/d5x73970.aspx). This can be very powerful, allowing you to customize the class for only certain types of objects and allowing you to use the members of that type in your generic class. I realize that this is pretty fundamental, but it's one of the things that makes generics incredibly useful.
Each specialization of a generic type is treated as a unique type when it comes to things like static members. For example, with this type: ``` class GenericType<T> { public static int SomeValue; } ``` The assert succeeds if we do this: ``` GenericType<int>.SomeValue = 3; Debug.Assert(GenericType<double>.SomeValue == 0); ``` This is because: ``` typeof(GenericType<int>) != typeof(GenericType<double>) ``` Even though ``` typeof(GenericType<int>.GetGenericTypeDefinition() == typeof(GenericType<double>).GetGenericTypeDefinition() ```
Most important things about C# generics... lesson learned
[ "", "c#", ".net", "generics", "" ]
> **Possible Duplicate:** > [C#: Interfaces - Implicit and Explicit implementation](https://stackoverflow.com/questions/143405/c-interfaces-implicit-and-explicit-implementation) Would someone explain the differences between these two beasts and how to use them? AFAIK, many pre-2.0 classes were implemented without generic types, thus causing later versions to implement both flavors of interfaces. Is that the only case where one would need to use them? Can you also explain in depth how to use them? Thanks
[There is a good and pretty detailed blog post about this.](https://learn.microsoft.com/en-us/archive/blogs/mhop/implicit-and-explicit-interface-implementations) Basically with implicit interface implementation you access the interface methods and properties as if they were part of the class. With explicit interface implementations you can only access them when treating it as that interface. In terms of when you would use one over the other, sometimes you have to use explicit interface implementation as you either have a property/method with same signature as the interface or you want to implement two interfaces with the same signatures and have different implementations for those properties/methods that match. The below rules are from Brad Abrams [design guidelines blog](https://learn.microsoft.com/en-us/archive/blogs/brada/design-guideline-update-explicit-member-implementation). * **Do not** use explicit members as a security boundary. They can be called by any client who cast an instance to the interface. * **Do** use explicit members to hide implementation details * **Do** use explicit members to approximate private interface implementations. * **Do** expose an alternative way to access any explicitly implemented members that subclasses are allowed to override. Use the same method name unless a conflict would arise. It's also mentioned in the comments in Brad's blog that there is boxing involved when using explicit implementation on value types so be aware of the performance cost.
In layman's terms, if a class inherits from 2 or more interfaces and if the interfaces happen to have the same method names, the class doesn't know which interface method is being implemented if you use implicit interface implementation. This is one of the scenarios when you would explicitly implement an interface. **Implicit Interface Implementtation** ``` public class MyClass : InterfaceOne, InterfaceTwo { public void InterfaceMethod() { Console.WriteLine("Which interface method is this?"); } } interface InterfaceOne { void InterfaceMethod(); } interface InterfaceTwo { void InterfaceMethod(); } ``` **Explicit Interface Implementation** ``` public class MyClass : InterfaceOne, InterfaceTwo { void InterfaceOne.InterfaceMethod() { Console.WriteLine("Which interface method is this?"); } void InterfaceTwo.InterfaceMethod() { Console.WriteLine("Which interface method is this?"); } } interface InterfaceOne { void InterfaceMethod(); } interface InterfaceTwo { void InterfaceMethod(); } ``` The following link has an excellent video explaining this concept [Explicit Interface Implementation](http://venkatcsharpinterview.blogspot.co.uk/2011/08/explicit-interface-implementation.html)
implicit vs explicit interface implementation
[ "", "c#", ".net", "interface", "" ]
TL;DR version: I have two threads. One of them might need to Interrupt() the other, but only if the other thread is in the middle of processing data that is related to the object that is affected by the first thread. How can I only Interrupt() the second thread based on certain conditions? I have been working on a program that spins up two threads for processing socket data and updating the GUI with the information. The two threads (topologyThread and dataThread) handle two different aspects of my application, namely topology change notifications for our wireless sensor network, and data information for receiving and processing data from the wireless sensor network. When data is coming into the dataThread, it's possible that the network member that the data packet represents has already been removed by the topologyThread and as such should not be processed. So, I was thinking that using the Thread.Interrupt() method would allow me to instantly notify the dataThread of changes to the topology and prevent some of the problems we have seen with trying to update the GUI with data for a network member that is no longer connected. My question is this: How can I tell if the dataThread needs to be interrupted? I don't want it to throw an exception while it's processing data for a network member that is still connected, but I want it to be interrupted if it's in the middle of processing data for a network member that is then disconnected. Scenario 1: * Data comes in on dataThread. * dataThread makes sure network member is still part of the network. * dataThread processes data to the end and updates GUI Scenario 2: * Data comes in on dataThread. * dataThread makes sure network member is still part of the network. * Network member has been disconnected so no processing takes place. Scenario 3: * Data comes in on dataThread. * dataThread makes sure network member is still part of the network.
* dataThread begins processing data * topologyThread receives notice that the network member is disconnected and removes it from topology. * dataThread tries to update GUI for a network member that is no longer connected. It's scenario 3 that I am trying to code for. In our testing, this scenario is where the tree view in the sidebar freezes up and the app has to be killed. But I only need to interrupt the dataThread if the object that is affected by the topologyThread is the one that the dataThread is currently acting on. Thanks for reading. :)
> dataThread tries to update GUI for a network member that is no longer connected. What if you have topologyThread set a flag when the network member is no longer connected, and have dataThread check that flag before attempting to update the GUI?
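A sketch of that flag idea using Python's threading primitives, purely for illustration (the class and method names are invented; the real code would live in the topologyThread and dataThread described in the question):

```python
import threading

class MemberState:
    def __init__(self):
        self._connected = threading.Event()
        self._connected.set()            # member starts out connected

    def disconnect(self):
        # Called by topologyThread when the member leaves the network.
        self._connected.clear()

    def process_data(self, update_gui):
        # Called by dataThread. Heavy processing would happen here...
        if self._connected.is_set():     # re-check just before touching the GUI
            update_gui()
            return True
        return False                     # member vanished mid-processing; skip
```

The key point is the second check immediately before the GUI update, which closes the window where Scenario 3 could otherwise occur.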
You shouldn't be interrupting your threads. It might be safer to program these using EventWaitHandles that could fire signals based around what you're trying to achieve instead.
How can I synchronize these threads to avoid data corruption?
[ "", "c#", "multithreading", "" ]
To check if a column is auto incremented I can do the following ``` Connection con = ... DatabaseMetaData meta = con.getMetaData(); ResultSet metaCols = meta.getColumns(catalog, schema, table, "%"); while ( metaCols.next() ) String value = metaCols.getString("IS_AUTOINCREMENT") ... ``` This works fine except with Sybase databases. I've tried it with the jTDS and JConnect drivers, but with both drivers I get this exception: ``` java.sql.SQLException: Invalid column name IS_AUTOINCREMENT. ``` Is there another way to find out whether a column in Sybase is auto-incremented or not? I thought "IS\_AUTOINCREMENT" is a feature of JDBC4, and jTDS is a JDBC4-compatible driver.
sp\_help delivers all the information I need. This SP returns several ResultSets. The third ResultSet contains the information I need. ``` Statement stmt = con.createStatement(); stmt.executeQuery("sp_help " + table); stmt.getMoreResults(); stmt.getMoreResults(); ResultSet rs = stmt.getResultSet(); //... while( rs.next() ) boolean identity = rs.getBoolean("Identity"); ```
Sybase uses 'identity' columns rather than 'default autoincrement' which is why I believe you are getting this message. Try checking if TYPE\_NAME column contains keyword "identity". The behaviour of identity columns is a little different also, but that is an aside.
Check if a column is auto incremented in Sybase with JDBC
[ "", "java", "jdbc", "metadata", "sybase", "" ]
I have a very specific problem, and I wanted to know if there is a way to change the owner of a JDialog (it can be set using the constructor). I suppose there is no "official" possibility (other than a hack), but I wanted to make sure I didn't miss something. Any ideas or hints on the topic would be helpful, thanks already...
If your question is about how to reuse dialogs during your application lifecycle, then a better way is to: 1. define all your dialog contents as JPanel subclasses 2. and instantiate a new JDialog with the existing JPanel subclass instance For point 2, you can of course use lazy evaluation of the panels (instantiate upon first use only, then reuse). You will also need to have your panels implement some interface (of your own) that allows you to re-initialize them for reuse in a new JDialog (reinit typically means erasing all fields contents, or setting these fields back to their default values).
The only thing I can think of falls under "unsafe hack" (use reflection and alter the owner, but that could possibly change under a different version of the JVM (even from the same vendor on the same platform)). Perhaps a better question for you to ask is "this is what I am trying to do... do I really need to change the owner of the dialog or is there a better way"? I am trying to think of reasons to want to change the owner and I cannot come up with any...
Is there a way to change the owner of a JDialog?
[ "", "java", "swing", "jdialog", "" ]
Visual Studio seems to complain when I pass a string into an exception parameter. ``` if (str1 == null || str2 == null) { throw new ArgumentNullException("lmkl"); } ``` Visual Studio says that it cannot resolve symbol `"lmkl"`. If I have a string variable (eg above `throw new... string s = "test";`) and include this as the parameter for the exception, Visual Studio is more than happy with this. What gives? Thanks
The documentation for the overloaded constructor for `ArgumentNullException` that takes a single string parameter states that this argument should be: ``` The name of the parameter that caused the exception. ``` At the moment, if your code throws an exception you won't know which argument was null. Recommend rewriting to ``` if (str1 == null) throw new ArgumentNullException("str1"); if (str2 == null) throw new ArgumentNullException("str2"); ```
Actually, Visual Studio doesn't care about this **at all**. I assume you have [ReSharper](http://www.jetbrains.com/resharper/) installed? This validates a lot of common errors, including incorrect use of patterns such as `ArgumentException` etc. It also has better `null` checking - not quite "contracts", but still pretty helpful. It only attempts this when it can see a string literal used in a known pattern - the analysis to chase how you assign variables is simply too much for realistic analysis.
Can't write string in exception constructor
[ "", "c#", ".net", "visual-studio-2008", "" ]
I have a system where items are locked (with a flag in the database) when a user is viewing that item. Currently the item is only unlocked when a user performs a certain action. However, when a user leaves the page through any method, I'd like to make a call to a webservice / ashx page that will unlock the item, but not stop the page from changing.
What you are describing is not possible. The JavaScript thread/runtime lives within the scope of the loaded page, so when the page unloads the JavaScript is unloaded as well. When the asynchronous call returns and the browser has moved away from the page, the user will get a JS error. You can make your web service/AJAX call synchronous and block until the server returns a response, but of course that means the page will wait for the response before unloading.
What about using a "heartbeat" function to keep the lock alive while the page is loaded? Make a server call every, say, 15 seconds or so to renew the lock. Program your server-side code so that a lock will time out after 20 seconds if it hasn't heard the "heartbeat" from the page. When the user navigates away from the page, the heartbeat will stop, and the lock will be released a maximum of 20 seconds later.
How can I make an async call in Javascript onunload element?
[ "", "asp.net", "javascript", "html", "" ]
I'm using a **Builder** pattern in Python to separate a bunch of different configuration possibilities. Basically, I have a bunch of classes that are named `ID...` (e.g. `ID12345`). These all inherit from the base Builder class. In my script, I need to instantiate an instance for each class (about 50) every time this app runs. So, I'm trying to see if instead of doing something like this: ``` ProcessDirector = ProcessDirector() ID12345 = ID12345() ID01234 = ID01234() ProcessDirector.construct(ID12345) ProcessDirector.construct(ID01234) ID12345.run() ID01234.run() ``` Can I do something like this (I know this doesn't work): ``` IDS = ["ID12345", "ID01234"] ProcessDirector = ProcessDirector() for id in IDS: builder = id() #some how instantiate class from string ProcessDirector.construct(builder) builder.run() ``` That way, when I need to add a new one in the future, all I have to do is add the id to the IDS list, rather than peppering the new ID throughout the code. EDIT: Looks like there are different opinions based on where the data is coming from. These IDs are entered in a file that no one else has access to. I'm not reading the strings from the command line, and I'd like to be able to do as little alteration when adding a new ID in the future.
Not sure this is what you want but it seems like a more Pythonic way to instantiate a bunch of classes listed in a string: ``` class idClasses: class ID12345:pass class ID01234:pass # could also be: import idClasses class ProcessDirector: def __init__(self): self.allClasses = [] def construct(self, builderName): targetClass = getattr(idClasses, builderName) instance = targetClass() self.allClasses.append(instance) IDS = ["ID12345", "ID01234"] director = ProcessDirector() for id in IDS: director.construct(id) print director.allClasses # [<__main__.ID12345 instance at 0x7d850>, <__main__.ID01234 instance at 0x7d918>] ```
If you wanted to avoid an eval(), you could just do: ``` id = "1234asdf" constructor = globals()[id] instance = constructor() ``` Provided that the class is defined in (or imported into) your current scope.
Can you use a string to instantiate a class?
[ "", "python", "design-patterns", "reflection", "builder", "class-method", "" ]
We are beginning to roll out more and more WAN deployments of our product (.NET fat client with an IIS hosted Remoting backend). Because of this we are trying to reduce the size of the data on the wire. We have overridden the default serialization by implementing ISerializable (similar to [this](http://www.codeproject.com/KB/cs/FastSerialization.aspx)), and we are seeing anywhere from 12% to 50% gains. Most of our efforts focus on optimizing arrays of primitive types. **Is there a fancy way of serializing primitive types, beyond the obvious?** For example, today we serialize an array of ints as follows: > [4-bytes (array length)][4-bytes][4-bytes] Can anyone do significantly better? The most obvious example of a significant improvement, for boolean arrays, is putting 8 bools in each byte, which we already do. Note: *Saving 7 bits per bool may seem like a waste of time, but when you are dealing with large magnitudes of data (which we are), it adds up very fast.* Note: We want to avoid general compression algorithms because of the latency associated with it. Remoting only supports buffered requests/responses (no chunked encoding). I realize there is a fine line between compression and optimal serialization, but our tests indicate we can afford very specific serialization optimizations at very little cost in latency. Whereas reprocessing the entire buffered response into new compressed buffer is too expensive.
(relates to messages/classes, not just primitives) Google designed "protocol buffers" for this type of scenario (they shift a huge amount of data around) - their format is compact (using things like base-128 encoding) but extensible and version tolerant (so clients and servers can upgrade easily). In the .NET world, I can recommend 2 protocol buffers implementations: * [protobuf-net](http://code.google.com/p/protobuf-net/) (by me) * [dotnet-protobufs](http://github.com/jskeet/dotnet-protobufs/tree/master) (by Jon Skeet) For info, protobuf-net has direct support for `ISerializable` and remoting (it is part of the [unit tests](http://code.google.com/p/protobuf-net/source/browse/trunk/Examples/Remoting/RemotingDemo.cs)). There are performance/size metrics [here](http://code.google.com/p/protobuf-net/wiki/Performance). And best of all, all you do is add a few attributes to your classes. Caveat: it doesn't claim to be the theoretical **best** - but pragmatic and easy to get right - a compromise between performance, portability and simplicity.
Check out the [base-128 varint](http://code.google.com/apis/protocolbuffers/docs/encoding.html#varints) type used in Google's protocol buffers; that might be what you're looking for. (There are a number of .NET implementations of protocol buffers available if you search the web which, depending on their license, you might be able to grovel some code from!)
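For reference, base-128 varint encoding stores 7 payload bits per byte with the high bit marking "more bytes follow", so small values cost one byte instead of four. A minimal sketch of the scheme (this follows the public protocol-buffers encoding description, not any particular .NET implementation):

```python
def encode_varint(n: int) -> bytes:
    """Encode a non-negative integer as a base-128 varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)   # high bit set: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data: bytes) -> int:
    """Decode a base-128 varint back to an integer."""
    result = shift = 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:
            break
        shift += 7
    return result
```

For example, 300 encodes to the two bytes `0xAC 0x02`, and any value under 128 fits in a single byte.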
Optimal Serialization of Primitive Types
[ "", "c#", "serialization", "" ]
I have created a SCORM API for our LMS and right now I am using hard coded userID and courseID variables (variables that reference things in the database). I need to pass the real userID and courseID instead of using hard coded ones. I know the userID is stored in the session and the courseID is passed over from the launch page. How do I get these into JavaScript so I can include them in my calls to the .ashx that handles the SCORM calls?
Probably ~~best~~ easiest to expose them as properties of your page (or master page if used on every page) and reference them via page directives. ``` <script type="text/javascript"> var userID = '<%= UserID %>'; var courseID = '<%= CourseID %>'; .... more stuff.... </script> ``` Then set the values on Page\_Load (or in the Page\_Load for the master page). ``` public void Page_Load( object source, EventArgs e ) { UserID = Session["userID"]; CourseID = Session["courseID"]; ... } ```
All the answers here that suggest something like ``` var userID = '<%= UserID %>'; ``` are all missing something important if the variable you are embedded can contain arbitrary string data. The embedded string data needs to be escaped so that if it contains backslashes, quotes or unprintable characters they don't cause your Javascript to error. Rick Strahl has some suggestions for the escaping code needed [here](http://www.west-wind.com/weblog/posts/114530.aspx). Using Rick's code the embedded variable will look like this: ``` var userId = <%= EncodeJsString(UserID) %>; ``` Note that there are no quotes now, Rick's code wraps the escaped string with quotes.
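The escaping point applies in any server language, not just ASP.NET. As a hypothetical illustration in Python: running the value through a JSON encoder yields a quoted string literal that is safe to drop into a script block, because backslashes, quotes and control characters all get escaped:

```python
import json

def js_string_literal(value: str) -> str:
    # json.dumps escapes backslashes, quotes and control characters,
    # and includes the surrounding double quotes, matching the
    # "escaped string wrapped with quotes" approach described above.
    return json.dumps(value)
```

So the embedded variable would be emitted without adding quotes of your own, since the encoder supplies them.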
How do I give JavaScript variables data from ASP.NET variables?
[ "", "javascript", "asp.net", "escaping", "scorm", "" ]
Edit: On further examination Firefox does not seem to be doing this, but Chrome definitely does. I guess it's just a bug with a new browser - for every event an I/O Read also occurs in Chrome but not in FF. When I load the following page in a browser (I've tested in Chrome and Firefox 3 under Vista) and move the mouse around, the memory always increases and does not ever seem to recede. Is this: 1. expected behaviour from a browser 2. a memory leak in the browser or 3. a memory leak in the presented code? ``` <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <title>test</title> </head> <body> <script> var createEl = function (i) { var el = document.createElement("div"); var t = document.createTextNode(i.toString()); el.appendChild(t); t=null; el.id=i.toString(); var fn = function (e) {}; el.addEventListener("mouseover", fn, false); //el.onmouseover = fn; fn = null; try{ return el; } finally{ el=null; } //return (el = [el]).pop(); }; var i,x; for (i= 0; i < 100; i++){ x = createEl(i) document.body.appendChild(x); x = null; } </script> </body> </html> ``` The `(el = [el].pop())` and the `try/finally` ideas are both from [here](http://ajaxian.com/archives/is-finally-the-answer-to-all-ie6-memory-leak-issues), though neither seems to help - understandably, since they are only meant to be ie6 fixes. I have also experimented with using the addEventListener and the onmouseover methods of adding the events. The only way I have found to prevent memory from increasing is to comment out both lines of code.
Memory leaks related to event handlers are, generally speaking, related to enclosures. In other words, attaching a function to an event handler which points back to its element can prevent browsers from garbage-collecting either. (Thankfully, most newer browsers have "learned the trick" and no longer leak memory in this scenario, but there are a lot of older browsers floating around out there!) Such an enclosure could look like this: ``` var el = document.createElement("div"); var fnOver = function(e) { el.innerHTML = "Mouse over!"; }; var fnOut = function(e) { el.innerHTML = "Mouse out."; }; el.addEventListener("mouseover", fnOver, false); el.addEventListener("mouseout", fnOut, false); document.getElementsByTagName("body")[0].appendChild(el); ``` The fact that `fnOver` and `fnOut` reach out to their enclosing scope to reference `el` is what creates an enclosure (two, actually — one for each function) and can cause browsers to leak. Your code doesn't do anything like this, so creates no enclosures, so shouldn't cause a (well-behaved) browser to leak. Just one of the bummer of beta software, I guess. :-)
fn is a closure even with no code in it. E.g. try debugging with Firebug and set a breakpoint inside that function. All variables defined in the closure (fn code + variables that hang around = closure) are theoretically accessible (though I don't know how to access them in practice).
Javascript event handlers always increase browser memory usage
[ "", "javascript", "memory-leaks", "event-handling", "" ]
I've built a winforms application which checks for CTRL+ALT+S and CTRL+ALT+E key presses by overriding the ProcessCmdKey method. This works great, but if the screensaver goes on and then goes off the form doesn't have focus and the keypresses aren't intercepted. How can I receive these even if the form does not have focus?
Alexander Werner has a "[System Hotkey Component](http://www.codeproject.com/KB/miscctrl/systemhotkey.aspx)" project over at Code Project that wraps the RegisterHotkey() API into a user control that's really easy to implement.
I'm aware of two methods: 1. [RegisterHotKey()](http://msdn.microsoft.com/en-us/library/ms646309.aspx) - You can use the RegisterHotKey() function to define a system-wide hot key. If the user presses the hotkey, Windows sends a WM\_HOTKEY message. 2. [Win32 Hooks](http://msdn.microsoft.com/en-us/library/ms997537.aspx) - This is an old API originally designed to support computer-based training (CBT) applications, but I believe that Windows still supports it. The API allows you to intercept and possibly modify messages, mouse actions, and keystrokes for any window. These are Win32 APIs, not .NET APIs, but .NET uses the same underlying components of Windows so the methods ought to work with .NET.
intercepting keypresses even when the form does not have focus
[ "", "c#", ".net", "winforms", "" ]
I am currently creating a small table in Oracle and am unsure of which data type to choose for a particular column. The column is labelled 'stay' and I would like it to contain only these values 'Short', 'Medium' and 'Long'. The end goal is to have these values in a drop down list within a form. Am I right in picking a data type of 'char(6)' here? Is it then possible to use Oracle form builder to make this into a list of values? Thank you for your time.
First of all, I would name the column 'StayDuration' or 'StayLength', as just 'Stay' by itself is still a little confusing. Once that is done, I see two acceptable approaches: * Put your lengths in a separate table and have your existing column store a foreign key into that table. * Store an integer, such that the value of the integer can be easily expanded into actual values for the stay length barriers via a simple formula. (Example: a 'short' stay could be 1 day or a less, 'medium' one week or less, and then everything else is long, you might use 0, 1, and 7 for your integers. I tend to avoid storing varchar-type data directly here. It requires more storage and makes it easy to end up with mis-typed or obsolete data. A hybrid approach is also possible, such that the 'keys' from the new table in the first option map to the integer values proposed in the 2nd.
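The second option's "simple formula" can be made concrete. A hypothetical sketch, using the example thresholds from the answer (1 day or less is short, one week or less is medium, everything else is long):

```python
def stay_category(days: int) -> str:
    """Map a stored stay length in days to a display label.

    The 1-day and 7-day cutoffs are the illustrative values
    from the answer, not fixed business rules.
    """
    if days <= 1:
        return "Short"
    if days <= 7:
        return "Medium"
    return "Long"
```

Storing the integer keeps the data compact and sortable, while the label is derived at the application (or view) layer.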
There are basically 3 options:

* Some databases support an Enum type specifically for this situation (I'm not sure about Oracle).
* You can use a char(6).
* You can use an int and map the values at application level to the appropriate type.

I would say Enum is the 'cleanest' option, but some ORMs don't support it properly. This SO question might be interesting for you: [How to use enums in Oracle?](https://stackoverflow.com/questions/203469/how-to-use-enums-in-oracle)
Basic SQL Question - Data Type Choice
[ "", "sql", "database", "" ]
I am using nested repeaters to build a table for reasons I won't discuss here. What I'm looking to do is have two datasources: one for the top-level repeater that will correspond to the rows, and one for the second-level repeater that will return cells within a row. What I'm wondering, however, is whether I can somehow specify a parameter for the nested repeater's datasource that is set to a field in the results from the first datasource. Can I set a parameter to the value of a data-binding expression? The reason I want to do this is I have two stored procedures. When the page is loaded, I have a session parameter I can use to run the first stored procedure; however, for the second stored procedure, I need to associate a value from each instance of the top-level repeater with a call to the second stored procedure with a different parameter value.
I think the best way would be to handle the ItemDataBound event of the outer Repeater, **find** the inner DataSource control, and set a SelectParameter on it.

```
void MyOuterRepeater_ItemDataBound(Object sender, RepeaterItemEventArgs e)
{
    // Find the inner DataSource control in this row.
    SqlDataSource s = (SqlDataSource)e.Item.FindControl("InnerDataSource");

    // Set the SelectParameter for this DataSource control
    // by re-evaluating the field that is to be passed.
    s.SelectParameters["MyParam"].DefaultValue =
        DataBinder.Eval(e.Item.DataItem, "MyFieldValueToPass").ToString();
}
```

For an example using the DataList, check out the ASP.NET quickstarts [here](http://quickstarts.asp.net/QuickStartv20/util/srcview.aspx?path=~/aspnet/samples/data/NestedMasterDetailsList.src&file=NestedMasterDetailsList_cs.aspx&lang=C%23+Source)

P.S.: *Please see Tony's reply below for an important correction to the above presented snippet. Notably, it is essential to check the ItemType of the current RepeaterItem. Alternatively, it's an excellent practice to always check for nulls on every object.*
I did this by using a HiddenField to store a value to use as a parameter later. Gets the job done.

```
<asp:SqlDataSource ... />
<asp:Repeater ...>
    <ItemTemplate>
        <asp:HiddenField ID="txtOuterID" runat="server"
            Value='<%# Eval("ID") %>' Visible="false" />
        <asp:SqlDataSource ...>
            <SelectParameters>
                <asp:ControlParameter Name="OuterID" Type="Int32"
                    ControlID="txtOuterID" PropertyName="Value" />
            </SelectParameters>
        </asp:SqlDataSource>
        <asp:Repeater ...>
    </ItemTemplate>
</asp:Repeater>
```
Nested Repeaters and SqlDataSource Parameters
[ "", "asp.net", "sql", "sql-server", "repeater", "" ]
My personal style with C++ has always been to put class declarations in an include file and definitions in a `.cpp` file, very much like stipulated in [Loki's answer to *C++ Header Files, Code Separation*](https://stackoverflow.com/a/280048). Admittedly, part of the reason I like this style probably has to do with all the years I spent coding Modula-2 and Ada, both of which have a similar scheme with specification and body files. I have a coworker, much more knowledgeable in C++ than I, who is insisting that all C++ declarations should, where possible, include the definitions right there in the header file. He's not saying this is a valid alternate style, or even a slightly better style, but rather this is the new universally-accepted style that everyone is now using for C++. I'm not as limber as I used to be, so I'm not really anxious to scrabble up onto this bandwagon of his until I see a few more people up there with him. So how common is this idiom really? Just to give some structure to the answers: Is it now *The Way™*, very common, somewhat common, uncommon, or bug-out crazy?
Your coworker is wrong: the common way is and always has been to put code in .cpp files (or whatever extension you like) and declarations in headers.

There is occasionally some merit to putting code in the header; this can allow more clever inlining by the compiler. But at the same time, it can destroy your compile times, since all code has to be processed every time it is included by the compiler.

Finally, it is often annoying to have circular object relationships (sometimes desired) when all the code is in the headers.

Bottom line: you were right, he is wrong.

**EDIT:** I have been thinking about your question. There is **one** case where what he says is true: templates. Many newer "modern" libraries such as Boost make heavy use of templates and are often "header only." However, this should only be done when dealing with templates, as it is the only way to do it when dealing with them.

**EDIT:** Some people would like a little more clarification. Here are some thoughts on the downsides of writing "header only" code:

If you search around, you will see quite a lot of people trying to find a way to reduce compile times when dealing with Boost. For example: [How to reduce compilation times with Boost Asio](https://stackoverflow.com/questions/2258967/how-to-reduce-compilation-times-with-boost-asio), which sees a 14s compile of a single 1K file with Boost included. 14s may not seem like "exploding", but it is certainly a lot longer than typical and can add up quite quickly when dealing with a large project. Header-only libraries do affect compile times in a quite measurable way. We just tolerate it because Boost is so useful.

Additionally, there are many things which cannot be done in headers only (even Boost has libraries you need to link to for certain parts such as threads, filesystem, etc.). A primary example is that you cannot have simple global objects in header-only libs (unless you resort to the abomination that is a singleton), as you will run into multiple-definition errors. **NOTE:** C++17's inline variables will make this particular example doable in the future.

As a final point, when using Boost as an example of header-only code, a huge detail often gets missed: Boost is a library, not user-level code, so it doesn't change that often. In user code, if you put everything in headers, every little change will cause you to have to recompile the entire project. That's a monumental waste of time (and is not the case for libraries that don't change from compile to compile). When you split things between header/source and, better yet, use forward declarations to reduce includes, you can save hours of recompiling when added up across a day.
The day C++ coders agree on **The Way**, lambs will lie down with lions, Palestinians will embrace Israelis, and cats and dogs will be allowed to marry. The separation between .h and .cpp files is mostly arbitrary at this point, a vestige of compiler optimizations long past. To my eye, declarations belong in the header and definitions belong in the implementation file. But, that's just habit, not religion.
Is it a good practice to place C++ definitions in header files?
[ "", "c++", "coding-style", "header-files", "code-separation", "" ]
I came across a function definition like:

```
char* abc(char *f, ...)
{
}
```

What do the three dots mean?
These types of functions are called ***variadic functions*** ([Wikipedia link](https://en.wikipedia.org/wiki/Variadic_function)). They use ellipses (i.e., three dots) to indicate that there is a variable number of arguments that the function can process. One place you've probably used such functions (perhaps without realising) is with the various `printf` functions, for example (from the ISO standard):

```
int printf(const char * restrict format, ...);
```

The ellipses allow you to create functions where the number of parameters is not known beforehand, and you can use the `stdarg.h` functions (`va_start`, `va_arg` and `va_end`) to get the specific arguments. You do have to know the *types* of the arguments you extract and have some way of deciding when you're done. The `printf` functions do this with the format string (for both types and count), while my example code below always assumes `const char *` as the type, with a sentinel value `NULL` to decide completion. The link [here](http://www.eskimo.com/%7Escs/cclass/int/sx11b.html) has a good treatise on the use of variable argument lists in `printf`.

---

As an example, the following program contains a function `outStrings()` that allows you to print an arbitrary number of strings:

```
#include <stdio.h>
#include <stdarg.h>

void outStrings(const char *strFirst, ...)
{
    // First argument handled specially.
    printf("%s", strFirst);

    va_list pArg;
    va_start(pArg, strFirst);

    // Just get and process each string until NULL given.
    const char *strNext = va_arg(pArg, const char *);
    while (strNext != NULL) {
        printf("%s", strNext);
        strNext = va_arg(pArg, const char *);
    }

    // Finalise processing.
    va_end(pArg);
}

int main(void)
{
    char *name = "paxdiablo";
    outStrings("Hello, ", name, ", I hope you're feeling well today.\n", NULL);
}
```
Wikipedia on [vararg functions in C++](http://en.wikipedia.org/wiki/Varargs#Variadic_functions_in_C.2C_Objective-C.2C_C.2B.2B.2C_and_D).
What does the three dots in the parameter list of a function mean?
[ "", "c++", "c", "variadic-functions", "" ]
I have a key combination like this:

```
Keys key = Keys.Control | Keys.Shift | Keys.D;
...
```

I don't know how to expand the key variable into its separate Keys values. Maybe like this:

```
foreach (Keys k in key)
{
    MessageBox.Show(k.ToString());
}
```

This seems stupid to me. How do I do this?
```
Keys key = Keys.Control | Keys.Shift | Keys.D;

foreach (string s in key.ToString().Split(','))
{
    Keys k = (Keys)Enum.Parse(typeof(Keys), s.Trim());
    Console.WriteLine(k);
}
```
See the definition of Keys. All values act as normal, mutually exclusive values, except the following:

```
// The bitmask to extract a key code from a key value.
KeyCode = 65535,

// The SHIFT modifier key.
Shift = 65536,

// The CTRL modifier key.
Control = 131072,

// The ALT modifier key.
Alt = 262144,
```

So all you need to check are Alt, Control and Shift. To get the non-shifted key, use:

```
Keys value = key & Keys.KeyCode;
```

To find out whether Shift, Alt or Control is pressed, mask and compare against zero (note that `&` on a flags enum yields a `Keys` value, not a `bool`, so a direct assignment would not compile):

```
bool altValue = (key & Keys.Alt) != 0;
bool controlValue = (key & Keys.Control) != 0;
bool shiftValue = (key & Keys.Shift) != 0;
```

And that's it.
how to expand flagged enum
[ "", "c#", "" ]