I have a custom class file in C# that I inherited and partially extended. I am trying to *refactor* it now, as I have just enough know-how to know that with something like *generics* (I think) I could greatly condense this class. As an inexperienced solo dev I would greatly appreciate any direction or constructive criticism anyone can provide. Don't be gentle! I appreciate your time and have a blessed day! I am preemptively sorry for the length.

```
using System.Windows.Forms;
using DevExpress.XtraEditors;
using DevExpress.XtraTab;

namespace psWinForms
{
    public static class WinFormCustomHandling
    {
        public static void ShowXFormInControl(Form frm, ref XtraTabPage ctl, FormBorderStyle style)
        {
            frm.TopLevel = false; frm.ControlBox = false; frm.Parent = ctl;
            frm.FormBorderStyle = style; frm.Left = 0; frm.Top = 0;
            frm.Width = ctl.Width + 4; frm.Dock = DockStyle.Fill;
            frm.Show(); //IMPORTANT: .Show() fires a form load event
            frm.BringToFront();
        }

        public static void ShowXFormInControl(Form frm, ref XtraPanel ctl, FormBorderStyle style)
        {
            frm.TopLevel = false; frm.ControlBox = false; frm.Parent = ctl;
            frm.FormBorderStyle = style; frm.Left = 0; frm.Top = 0;
            frm.Width = ctl.Width + 4; frm.Dock = DockStyle.Fill;
            frm.Show(); //IMPORTANT: .Show() fires a form load event
            frm.BringToFront();
        }

        public static void ShowXFormInControl(XtraForm Xfrm, ref XtraTabPage ctl, FormBorderStyle style)
        {
            Xfrm.TopLevel = false; Xfrm.ControlBox = false; Xfrm.Parent = ctl;
            Xfrm.FormBorderStyle = style; Xfrm.Left = 0; Xfrm.Top = 0;
            Xfrm.Width = ctl.Width + 4; Xfrm.Dock = DockStyle.Fill;
            Xfrm.Show(); //IMPORTANT: .Show() fires a form load event
            Xfrm.BringToFront();
        }

        public static void ShowXFormInControl(XtraForm Xfrm, ref XtraPanel ctl, FormBorderStyle style)
        {
            Xfrm.TopLevel = false; Xfrm.ControlBox = false; Xfrm.Parent = ctl;
            Xfrm.FormBorderStyle = style; Xfrm.Left = 0; Xfrm.Top = 0;
            Xfrm.Width = ctl.Width + 4; Xfrm.Dock = DockStyle.Fill;
            Xfrm.Show(); //IMPORTANT: .Show() fires a form load event
            Xfrm.BringToFront();
        }

        public static void ShowFormInControl(Form frm, ref Panel ctl, FormBorderStyle style)
        {
            frm.TopLevel = false; frm.ControlBox = false; frm.Parent = ctl;
            frm.FormBorderStyle = style; frm.Left = 0; frm.Top = 0;
            frm.Width = ctl.Width + 4; frm.Dock = DockStyle.Fill;
            frm.Show(); //IMPORTANT: .Show() fires a form load event
            frm.BringToFront();
            //.SetBounds(ctl.Left, ctl.Top, ctl.Width, ctl.Height)
        }

        public static void ShowFormInControl(Form frm, ref TabPage ctl, FormBorderStyle style)
        {
            frm.TopLevel = false; frm.ControlBox = false; frm.Parent = ctl;
            frm.FormBorderStyle = style; frm.Left = 0; frm.Top = 0;
            frm.Width = ctl.Width + 4; frm.Dock = DockStyle.Fill;
            frm.Show(); //IMPORTANT: .Show() fires a form load event
            frm.BringToFront();
        }

        public static void ShowFormInControl(Form frm, Panel ctl, FormBorderStyle style, FormWindowState state)
        {
            frm.TopLevel = false; frm.ControlBox = false; frm.Parent = ctl;
            frm.FormBorderStyle = style; frm.Left = 0; frm.Top = 0;
            frm.Width = ctl.Width + 4; frm.WindowState = state; frm.Dock = DockStyle.Fill;
            frm.Show(); //IMPORTANT: .Show() fires a form load event
            frm.BringToFront();
            //.SetBounds(ctl.Left, ctl.Top, ctl.Width, ctl.Height)
        }

        public static void ShowFormInControl(Form frm, TabPage ctl, FormBorderStyle style, FormWindowState state)
        {
            frm.TopLevel = false; frm.ControlBox = false; frm.Parent = ctl;
            frm.FormBorderStyle = style; frm.Left = 0; frm.Top = 0;
            frm.Width = ctl.Width + 4; frm.WindowState = state; frm.Dock = DockStyle.Fill;
            frm.Show(); //IMPORTANT: .Show() fires a form load event
            frm.BringToFront();
        }

        public static void ShowFormInControl(Form frm, ref Panel ctl, FormBorderStyle style, int left, int top)
        {
            frm.TopLevel = false; frm.ControlBox = false; frm.Parent = ctl;
            frm.FormBorderStyle = style; frm.Left = left; frm.Top = top;
            frm.Width = ctl.Width + 4; frm.Dock = DockStyle.Fill;
            frm.Show(); //IMPORTANT: .Show() fires a form load event
            //.Activate()
            frm.BringToFront();
            //.SetBounds(ctl.Left, ctl.Top, ctl.Width, ctl.Height)
        }

        public static void ShowFormInControl(Form frm, ref TabPage ctl, FormBorderStyle style, int left, int top)
        {
            frm.TopLevel = false; frm.ControlBox = false; frm.Parent = ctl;
            frm.FormBorderStyle = style; frm.Left = left; frm.Top = top;
            frm.Width = ctl.Width + 4; frm.Dock = DockStyle.Fill;
            frm.Show(); //IMPORTANT: .Show() fires a form load event
            //.Activate()
            frm.BringToFront();
        }

        public static void ShowFormInControl(Form frm, Panel ctl, FormBorderStyle style, int left, int top, string title)
        {
            frm.TopLevel = false; frm.ControlBox = false; frm.Parent = ctl;
            frm.FormBorderStyle = style; frm.Left = left; frm.Top = top;
            frm.Width = ctl.Width + 4; frm.Text = title; frm.Dock = DockStyle.Fill;
            frm.Show(); //IMPORTANT: .Show() fires a form load event
            frm.BringToFront();
            //.SetBounds(ctl.Left, ctl.Top, ctl.Width, ctl.Height)
        }

        public static void ShowFormInControl(Form frm, TabPage ctl, FormBorderStyle style, int left, int top, string title)
        {
            frm.TopLevel = false; frm.ControlBox = false; frm.Parent = ctl;
            frm.FormBorderStyle = style; frm.Left = left; frm.Top = top;
            frm.Width = ctl.Width + 4; frm.Text = title; frm.Dock = DockStyle.Fill;
            frm.Show(); //IMPORTANT: .Show() fires a form load event
            frm.BringToFront();
        }
    }
}
```

---

I have the darnedest time posting code here and it turning out looking decent!
You don't need generics to do this:

```
public static void ShowFormInControl(Control ctl, Form frm)
{
    frm.TopLevel = false;
    frm.FormBorderStyle = FormBorderStyle.None; // Others rarely make sense
    frm.Dock = DockStyle.Fill;
    frm.Visible = true;
    ctl.Controls.Add(frm);
}
```

Sample usage:

```
public Form1()
{
    InitializeComponent();
    ShowFormInControl(this.panel1, new Form2());
}
```
Have you tried ReSharper? <http://www.jetbrains.com/resharper/>
Custom Class for dealing with embedding in Forms
[ "", "c#", "winforms", "generics", ".net-3.5", "refactoring", "" ]
I am generating associative arrays and the key value is a string concat of 1..n columns. Is there a max length for keys that will come back to bite me? If so, I'll probably stop and do it differently.
It seems to be limited only by the script's memory limit. A quick test got me a key of 128 MB with no problem:

```
ini_set('memory_limit', '1024M');

$key = str_repeat('x', 1024 * 1024 * 128);
$foo = array($key => $key);

echo strlen(key($foo)) . "<br>";
echo strlen($foo[$key]) . "<br>";
```
There is no practical limit to string size in PHP. According to [the manual](http://us.php.net/manual/en/language.types.string.php): > Note: It is no problem for a string to > become very large. PHP imposes no > boundary on the size of a string; the > only limit is the available memory of > the computer on which PHP is running. It is safe to assume that this would apply to using strings as keys in arrays as well, but depending on how PHP handles its lookups, you may notice a performance hit as strings get larger.
What is the max key size for an array in PHP?
[ "", "php", "arrays", "key", "" ]
I need to be able to move a div with my mouse and store the new position of the div in a database to remember the display. How can I do it?
I would highly recommend you look into [jQuery UI](http://jqueryui.com/) and the draggable interaction. Basically, you'll want to add the code to your draggable div (assuming it has id="draggable"):

```
$("#draggable").draggable();
```

Then put your necessary behavior in the stop event. More specifically, you'd do this:

```
$('#draggable').draggable({
    stop: function(event, ui) {
        ...
    }
});
```

As for the database storing, you could use an AJAX call in the above function, or you could store it in-page, such that some form send or other action results in the positional information being passed to the server and stored inline with other data. I'd be careful with an AJAX call, since you may flood your database with position data on every drag in every browser. Depends on your app...
Here is a little jQuery function I just wrote to let you drag divs using only jQuery and not jQuery UI.

```
/* PlugTrade.com - jQuery draggit Function */
/* Drag A Div with jQuery */
jQuery.fn.draggit = function (el) {

    var thisdiv = this;
    var thistarget = $(el);

    var relX;
    var relY;

    var targetw = thistarget.width();
    var targeth = thistarget.height();

    var docw;
    var doch;

    // declared explicitly so we don't leak a global and don't hit a
    // ReferenceError if mousemove fires before the first mousedown
    var ismousedown = false;

    thistarget.css('position', 'absolute');

    thisdiv.bind('mousedown', function (e) {
        var pos = $(el).offset();
        var srcX = pos.left;
        var srcY = pos.top;

        docw = $('body').width();
        doch = $('body').height();

        relX = e.pageX - srcX;
        relY = e.pageY - srcY;

        ismousedown = true;
    });

    $(document).bind('mousemove', function (e) {
        if (ismousedown) {
            targetw = thistarget.width();
            targeth = thistarget.height();

            var maxX = docw - targetw - 10;
            var maxY = doch - targeth - 10;

            var mouseX = e.pageX;
            var mouseY = e.pageY;

            var diffX = mouseX - relX;
            var diffY = mouseY - relY;

            // check if we are beyond document bounds ...
            if (diffX < 0) diffX = 0;
            if (diffY < 0) diffY = 0;
            if (diffX > maxX) diffX = maxX;
            if (diffY > maxY) diffY = maxY;

            $(el).css('top', diffY + 'px');
            $(el).css('left', diffX + 'px');
        }
    });

    $(window).bind('mouseup', function (e) {
        ismousedown = false;
    });

    return this;
}; // end jQuery draggit function
```

The jQuery function even prevents the div from going out of the document bounds. Basically you attach it to a div that you destine to be the drag activator (say the title bar, for instance). Invoking it is as simple as this:

```
$("#titleBar").draggit("#whatToDrag");
```

So #titleBar would be the id of your titlebar div and #whatToDrag would be the id of what you wanted to drag. I apologize for the messy code, I just hacked it up and thought it would give you an alternative to jQuery UI, while still making it easy to implement.
How to move div with the mouse using jquery?
[ "", "javascript", "jquery", "html", "dom", "" ]
```
$ cat t.cpp
int sign(int i) {
    if(i > 0) return 1;
    if(i == 0) return 0;
    if(i < 0) return -1;
}
$ g++ -c t.cpp -Wall
t.cpp: In function ‘int sign(int)’:
t.cpp:5: warning: control reaches end of non-void function
$
```

What do I do about this? Stop using -Wall as it's clearly wrong? Add a bogus return 0 at the end? Clutter the code with "else" clauses?
If you don't want to add "else" clauses because they would make the code longer, then perhaps you would like to remove the final "if" and make the code shorter:

```
int sign(int i) {
    if(i > 0) return 1;
    if(i == 0) return 0;
    return -1; // i < 0
}
```

Or if you're really computing "sign" yourself and this isn't a simplification of some longer example:

```
int sign(int i) {
    return (i > 0) ? 1 : ((i < 0) ? -1 : 0);
}
```
Your `sign()` function isn't very efficient. Try this:

```
int sign(int i) {
    return (i > 0) - (i < 0);
}
```

Source: [Bit Twiddling Hacks](http://graphics.stanford.edu/~seander/bithacks.html)
What to do about wrong "control reaches end of non-void function" gcc warning?
[ "", "c++", "gcc", "warnings", "" ]
What is wrong with the statement below? I keep getting the following error message:

```
Server: Msg 156, Level 15, State 1, Line 4
Incorrect syntax near the keyword 'THEN'.
```

```
update oildatasetstatus
set oildatasetstatusid = case
    WHEN 5 THEN 16
    WHEN 6 THEN 17
    WHEN 7 THEN 18
    WHEN 8 THEN 18
    WHEN 9 THEN 18
    WHEN 10 THEN 19
    WHEN 11 THEN 20
End
where oildatasetlabstatusid in
(
    select oildatasetstatusid
    from OilDataSetStatus
    inner join OilDataSet on OilDataSet.OilDataSetID = OilDataSetStatus.OilDataSetID
    where SamplePointID in
    (
        select SamplePointID
        from SamplePoint
        where CustomerSiteID in
        (
            select CustomerSiteID
            from CustomerSite
            where CustomerID = 2
        )
    )
)
```
The way you have your statement coded now will work (once you add the column reference to the case statement, as mentioned by other posts); however, to let the rest of your syntax go uncommented on would be a disservice to others in your situation. While you may only need to run this query once, I and others have run into similar situations where an `Update` to multiple rows also relies on data 3 or 4 tables away from our source and has to be run many times (like in a report). By collapsing your sub-selects into a single `select` statement and saving the results of that into a `#Temp` table or a `@Table` variable, you only have to do that lookup once, then select from the result set for your update. Here is a sample using a @table variable:

```
declare @OilStatus table (oilDatasetStatusID int)

insert into @OilStatus
select odss.oildatasetstatusid
from OildataSetStatus odss
join oilDataSet ods on ods.OilDataSetID = odss.OilDataSetID
join SamplePoint sp on sp.SamplePointID = odss.SamplePointID
join CustomerSite cs on cs.CustomerSiteID = sp.CustomerSiteID
where cs.CustomerID = 2

update oildatasetstatus
set oildatasetstatusid = case oildatasetstatusid
    WHEN 5 THEN 16
    WHEN 6 THEN 17
    WHEN 7 THEN 18
    WHEN 8 THEN 18
    WHEN 9 THEN 18
    WHEN 10 THEN 19
    WHEN 11 THEN 20
end
where oildatasetlabstatusid in
(
    select oilDatasetStatusID from @OilStatus
)
```

Since I do not have your exact schema, there may be errors when trying to implement the sample above, but I think you will get the idea. Also, whenever multiple tables are used in a single statement, try to preface every column name with an alias or the full table name. It helps keep both the SQL engine and the people reading your code from getting lost.
I think you're missing the expression that you want to evaluate in the CASE statement.

```
update oildatasetstatus
set oildatasetstatusid = case oildatasetstatusid
    WHEN 5 THEN 16
    WHEN 6 THEN 17
    WHEN 7 THEN 18
    WHEN 8 THEN 18
    WHEN 9 THEN 18
    WHEN 10 THEN 19
    WHEN 11 THEN 20
End
where oildatasetlabstatusid in
(
    select oildatasetstatusid
    from OilDataSetStatus
    inner join OilDataSet on OilDataSet.OilDataSetID = OilDataSetStatus.OilDataSetID
    where SamplePointID in
    (
        select SamplePointID
        from SamplePoint
        where CustomerSiteID in
        (
            select CustomerSiteID
            from CustomerSite
            where CustomerID = 2
        )
    )
)
```

Give that a shot?
Syntax Error
[ "", "sql", "" ]
I have some old Java 2D code I want to reuse, but was wondering, is this the best way to get the highest quality images?

```
public static BufferedImage getScaled(BufferedImage imgSrc, Dimension dim) {
    // This code ensures that all the pixels in the image are loaded.
    Image scaled = imgSrc.getScaledInstance(dim.width, dim.height, Image.SCALE_SMOOTH);

    // This code ensures that all the pixels in the image are loaded.
    Image temp = new ImageIcon(scaled).getImage();

    // Create the buffered image.
    BufferedImage bufferedImage = new BufferedImage(temp.getWidth(null),
            temp.getHeight(null), BufferedImage.TYPE_INT_RGB);

    // Copy image to buffered image.
    Graphics g = bufferedImage.createGraphics();

    // Clear background and paint the image.
    g.setColor(Color.white);
    g.fillRect(0, 0, temp.getWidth(null), temp.getHeight(null));
    g.drawImage(temp, 0, 0, null);
    g.dispose();

    // j2d's image scaling quality is rather poor, especially when
    // scaling down an image to a much smaller size. We'll post filter
    // our images using a trick found at
    // http://blogs.cocoondev.org/mpo/archives/003584.html
    // to increase the perceived quality....
    float origArea = imgSrc.getWidth() * imgSrc.getHeight();
    float newArea = dim.width * dim.height;
    if (newArea <= (origArea / 2.)) {
        bufferedImage = blurImg(bufferedImage);
    }

    return bufferedImage;
}

public static BufferedImage blurImg(BufferedImage src) {
    // soften factor - increase to increase blur strength
    float softenFactor = 0.010f;

    // convolution kernel (blur)
    float[] softenArray = {
        0,            softenFactor,           0,
        softenFactor, 1 - (softenFactor * 4), softenFactor,
        0,            softenFactor,           0 };

    Kernel kernel = new Kernel(3, 3, softenArray);
    ConvolveOp cOp = new ConvolveOp(kernel, ConvolveOp.EDGE_NO_OP, null);
    return cOp.filter(src, null);
}
```
Chris Campbell has an excellent and detailed write-up on scaling images - see **[this article](http://today.java.net/pub/a/today/2007/04/03/perils-of-image-getscaledinstance.html)**. Chet Haase and Romain Guy also have a detailed and very informative write-up of image scaling in their book, **[Filthy Rich Clients](http://filthyrichclients.org/)**.
Adding some clarifying information here. No, that isn't the *best* way to get a good-looking scaled image in Java. Use of getScaledInstance and the underlying AreaAveragingScaleFilter are deprecated by the Java2D team in favor of some more advanced methods. If you are just trying to get a good-looking thumbnail, using Chris Campbell's method as suggested by David is the way to go. For what it's worth, I have implemented that algorithm along with 2 other faster methods in a Java image-scaling library called [imgscalr](http://www.thebuzzmedia.com/software/imgscalr-java-image-scaling-library/) (Apache 2 license). The point of the library was to specifically address this question in a highly tuned library that is easy to use:

```
BufferedImage thumbnail = Scalr.resize(srcImg, 150);
```

To get the best-looking scaled instance possible in Java, the method call would look something like this:

```
BufferedImage scaledImg = Scalr.resize(img, Method.QUALITY, 150, 100, Scalr.OP_ANTIALIAS);
```

The library will scale the original image using the incremental-scaling approach recommended by the Java2D team, and then, to make it look even nicer, a very mild ConvolveOp is applied to the image, effectively anti-aliasing it slightly. This is really nice for small thumbnails, not so important for huge images. If you haven't worked with ConvolveOps before, it's a LOT of work just to get the perfect-looking kernel for the op to look good in all use cases. The OP constant defined on the Scalr class is the result of a week of collaboration with a social networking site in Brazil that had rolled out imgscalr to process profile pictures for its members. We went back and forth and tried something like 10 different kernels until we found one that was subtle enough not to make the image look soft or fuzzy but still smooth out the transitions between pixel values so the image didn't look "sharp" and noisy at small sizes.
If you want the *best looking* scaled image regardless of speed, go with Juha's suggestion of using the java-image-scaling library. It is a very comprehensive collection of Java2D Ops and includes support for the [Lanczos algorithm](http://en.wikipedia.org/wiki/Lanczos_algorithm), which will give you the best-looking result. I would stay away from JAI, not because it's bad, but because it is just a different/broader tool than the problem you are trying to solve. Any of the previous 3 approaches mentioned will give you great-looking thumbnails in fewer lines of code, without needing to add a whole new imaging platform to your project.
Java 2D and resize
[ "", "java", "image", "resize", "2d", "image-scaling", "" ]
I have a listbox inside an update panel. When I scroll down and select an item, it scrolls back to the top of the listbox. I heard that the DOM does not keep track of the scroll position on a postback. Does anyone have a solution/example on how to solve this? Thanks, XaiSoft
You're running into this problem because the `UpdatePanel` completely replaces your scrolled `<select>` element with a new one when the asynchronous request comes back. **Possible solutions:** 1. Use JavaScript to store the `scrollTop` property of the `<select>` element in a hidden form element before the `UpdatePanel` is submitted (by calling the `ClientScriptManager.RegisterOnSubmitStatement` method) and then setting it on the new `<select>` when the AJAX call comes back. This will be tedious, error-prone, and probably not very compatible (see [here](http://siderite.blogspot.com/2007/06/setting-vertical-scroll-property-in.html)). 2. Use JavaScript to store the `<select>`'s `selectedIndex` property and re-select that item when the AJAX call comes back. Obviously this won't work if the user hasn't selected anything yet. 3. **[Don't use `UpdatePanel`s](http://encosia.com/2007/07/11/why-aspnet-ajax-updatepanels-are-dangerous/)**. Try [jQuery](http://jquery.com/) + [ASP.NET page methods](http://encosia.com/2008/05/29/using-jquery-to-directly-call-aspnet-ajax-page-methods/) instead.
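A framework-free sketch of option 1 above (the function names and the stand-in objects here are made up for illustration): stash the list's `scrollTop` in a hidden field before the partial postback, and put it back when the `UpdatePanel`'s new `<select>` arrives. The demo uses plain objects in place of DOM elements so the logic can be exercised anywhere:

```javascript
// Hypothetical helpers -- wire saveScrollTop into the form submit
// (e.g. via RegisterOnSubmitStatement) and restoreScrollTop into the
// UpdatePanel's endRequest handler.
function saveScrollTop(listEl, hiddenEl) {
  hiddenEl.value = String(listEl.scrollTop);
}

function restoreScrollTop(listEl, hiddenEl) {
  var pos = parseInt(hiddenEl.value, 10);
  if (!isNaN(pos)) {
    listEl.scrollTop = pos;
  }
}

// Demo with plain objects standing in for the DOM elements:
var list = { scrollTop: 42 };
var hidden = { value: "" };

saveScrollTop(list, hidden); // before the async postback
list.scrollTop = 0;          // the UpdatePanel swap resets the position...
restoreScrollTop(list, hidden);
console.log(list.scrollTop); // 42 -- ...and we put it back
```

As the answer notes, this approach is fiddly in practice; it is shown here only to make the mechanism concrete.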
```
var xPos, yPos;
var prm = Sys.WebForms.PageRequestManager.getInstance();

function BeginRequestHandler(sender, args) {
    if (($get('Panel1')) != null) {
        xPos = $get('Panel1').scrollLeft;
        yPos = $get('Panel1').scrollTop;
    }
}

function EndRequestHandler(sender, args) {
    if (($get('Panel1')) != null) {
        $get('Panel1').scrollLeft = xPos;
        $get('Panel1').scrollTop = yPos;
    }
}

prm.add_beginRequest(BeginRequestHandler);
prm.add_endRequest(EndRequestHandler);

// Note: "Panel1" is the Panel or div whose scroll position you want to maintain
// Note: this JavaScript should be added after the ScriptManager
```

Maintain scroll position for a Panel/div without an UpdatePanel:

```
window.onload = function() {
    var strCook = document.cookie;
    if (strCook.indexOf("!~") != 0) {
        var intS = strCook.indexOf("!~");
        var intE = strCook.indexOf("~!");
        var strPos = strCook.substring(intS + 2, intE);
        document.getElementById('Panel1').scrollTop = strPos;
    }
}

function SetDivPosition() {
    var intY = document.getElementById('Panel1').scrollTop;
    document.title = intY;
    document.cookie = "yPos=!~" + intY + "~!";
}

// Note: "Panel1" is the panel or div id whose scroll position you want to maintain
```
Maintain scroll position in listboxes in updatepanels, NOT the page
[ "", "c#", "asp.net", "listbox", "" ]
I have a SQL job (in SQL Server 2005) that creates a backup every six (6) hours. The backup's filename is based on the timestamp so that each file is unique (dbname_yyyymmddhhmmss.bak). Now my question is: using `xp_cmdshell`, how would I know if a file is three days old? Based on that, my script should delete backups (.bak) that are three days old. Can someone out there help me? Thanks in advance. Cheers!
This is not really the answer to your question, but you could do this directly in SQL Server 2005 with a Maintenance Plan (Object Explorer -> Management -> Maintenance Plans). I usually create one Maintenance Plan including two tasks: a "Maintenance Cleanup Task" which deletes old backups after x days, followed by a "Back Up Database Task".
I agree that xp\_cmdshell is not the best alternative for the job. If you're like me and you don't like/trust maintenance plans, you can probably write a C# console application, where file system support is much stronger than what you can do in DOS (or using T-SQL to parse the output of xp\_cmdshell 'DIR ...'), and then schedule that in a windows scheduled task so that you don't have to worry about escalation of privileges from the SQL Server service/proxy account. While it's nice to put everything in one package, you don't always want the guy who changes your oil to make you a quiche.
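To make the "do the cleanup outside of T-SQL" idea concrete, here is a hedged sketch of the core logic (the paths are made up, and this uses Unix `find` purely to illustrate the rule; on the actual Windows box the same rule would live in the console app or scheduled task, e.g. roughly `forfiles /P D:\Backups /M *.bak /D -3 /C "cmd /c del @path"`):

```shell
# delete_old_backups DIR DAYS: remove *.bak files in DIR whose
# modification time is strictly older than DAYS days.
delete_old_backups() {
    find "$1" -name '*.bak' -mtime "+$2" -print -delete
}

# Example (hypothetical path):
# delete_old_backups /var/backups/sql 3
```

The point is that the "is this file three days old?" question is answered by the file system's modification time, not by parsing the timestamp out of the filename.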
Copy a file from one dir to another by date
[ "", "sql", "sql-server", "xp-cmdshell", "" ]
What is the best way (and I presume simplest way) to place the cursor at the end of the text in a input text element via JavaScript - after focus has been set to the element?
I faced this same issue (after setting focus through RJS/prototype) in IE. Firefox was already leaving the cursor at the end when there is already a value for the field. IE was forcing the cursor to the beginning of the text. The solution I arrived at is as follows: ``` <input id="search" type="text" value="mycurrtext" size="30" onfocus="this.value = this.value;" name="search"/> ``` This works in both IE7 and FF3 **but doesn't work in modern browsers** (see comments) as it is not specified that UA must overwrite the value in this case (edited in accordance with [meta policy](https://meta.stackoverflow.com/questions/300372/what-to-do-with-accepted-answers-that-were-valid-when-the-op-asked-but-not-anym)).
There's a simple way to get it working in *most* browsers.

```
this.selectionStart = this.selectionEnd = this.value.length;
```

However, due to the quirks of a few browsers (see *Quirks* below), a more inclusive answer looks more like this:

```
setTimeout(function(){ that.selectionStart = that.selectionEnd = 10000; }, 0);
```

**Using jQuery** *(to set the listener, but it's not necessary otherwise)*

```
$('#el').focus(function(){
    var that = this;
    setTimeout(function(){ that.selectionStart = that.selectionEnd = 10000; }, 0);
});
```

```
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<input id='el' type='text' value='put cursor at end'>
```

**Using Vanilla JS** *(borrowing the `addEvent` function from [this answer](https://stackoverflow.com/a/10150042/731314))*

```
// Basic cross browser addEvent
function addEvent(elem, event, fn){
    if(elem.addEventListener){
        elem.addEventListener(event, fn, false);
    } else {
        elem.attachEvent("on" + event, function(){
            return(fn.call(elem, window.event));
        });
    }
}

var element = document.getElementById('el');
addEvent(element, 'focus', function(){
    var that = this;
    setTimeout(function(){ that.selectionStart = that.selectionEnd = 10000; }, 0);
});
```

```
<input id='el' type='text' value='put cursor at end'>
```

---

## Quirks

Chrome has an odd quirk where the focus event fires before the cursor is moved into the field, which screws my simple solution up. Two options to fix this:

1. You can add a timeout of 0 ms (to [defer the operation until the stack is clear](https://youtu.be/8aGhZQkoFbQ?t=15m13s))
2. You can change the event from `focus` to `mouseup`. This would be pretty annoying for the user unless you still kept track of focus.

I'm not really in love with either of these options.

Also, @vladkras pointed out that some older versions of Opera incorrectly calculate the length when the value has spaces. For this you can use a huge number that should be larger than your string.
Use JavaScript to place cursor at end of text in text input element
[ "", "javascript", "" ]
I've just tried the following, the idea being to concatenate the two strings, substituting an empty string for nulls.

```
string a="Hello";
string b=" World";
```

-- Debug (amusing that `?` is print, doesn't exactly help readability...)

```
? a ?? "" + b ?? ""
"Hello"
```

Correct is:

```
? (a??"")+(b??"")
"Hello World"
```

I was kind of expecting "Hello World", or just "World" if a is null. Obviously this is to do with operator precedence and can be overcome by brackets; is there anywhere that documents the order of precedence for this new operator? (Realising that I should probably be using StringBuilder or String.Concat.) Thanks.
Aside from what you'd *like* the precedence to be, what it is according to ECMA, what it is according to the MS spec and what csc actually does, I have one bit of advice: **Don't do this**. I think it's *much* clearer to write: ``` string c = (a ?? "") + (b ?? ""); ``` Alternatively, given that null in string concatenation ends up just being an empty string anyway, just write: ``` string c = a + b; ``` EDIT: Regarding the documented precedence, in both the [C# 3.0 spec](http://download.microsoft.com/download/3/8/8/388e7205-bc10-4226-b2a8-75351c669b09/CSharp%20Language%20Specification.doc) (Word document) and [ECMA-334](http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-334.pdf), addition binds tighter than ??, which binds tighter than assignment. The MSDN link given in another answer is just wrong and bizarre, IMO. There's a change shown on the page made in July 2008 which moved the conditional operator - but apparently incorrectly!
Never rely on operator precedence. **Always** explicitly specify how you want your code to act. Do yourself and others a favour for when you come back to your code. ``` (a ?? "") + (b ?? "") ``` This leaves no room for ambiguity. Ambiguity is the breeding ground of bugs.
What is the operator precedence of C# null-coalescing (??) operator?
[ "", "c#", "null", "null-coalescing-operator", "" ]
I am trying to disable a number of text boxes intended for displaying data (not editing) in one of my UserControls. However, for some reason I cannot get the textBoxes to disable properly. I've set "ApplyAuthorization on readWriteAuthorization" to true and the textBoxes are databound to the correct properties. I've also added the following lines to the `CanWriteProperty` of my object:

```
if (propertyName == OpeningDateProperty.Name)
    return false;
if (propertyName == ChangeDateProperty.Name)
    return false;
if (propertyName == CloseDateProperty.Name)
    return false;

return base.CanWriteProperty(propertyName);
```

I can't figure out what I'm doing wrong here. I've implemented pretty much the same thing recently in other UserControls without any problems... I am using Windows Forms in C# .NET (Visual Studio 2008).

**EDIT:** The code snippets and the properties are taken from my customer object. The dates represent the opening, last change and closure of the customer account. They are never supposed to be edited at all; in fact, in the old solution they are represented by textLabels. However, we now want to use a text box and make the property's CanWriteProperty false. I realise that the information might be sort of scarce, but I am looking for what I might have forgotten in this process.

**EDIT:** We are using ***CSLA*** as well and I guess (I'm new at this whole thing) this has something to do with why we want to do it like this.

**EDIT (Solution):** As you can see in my answer below, the problem was that I had not set up the `CurrentItemChanged` event like I should have.
To make this work you need to do the following:

1. Make sure the TextBox is databound to the right property in the correct way.
2. Set up the needed checks for each textBox in the CanWriteProperty override in your root object:

```
if (propertyName == OpeningDateProperty.Name)
    return false;
```

3. Make sure the rootBindingSource's CurrentItemChanged event is set up right:

```
private void rootBindingSource_CurrentItemChanged(object sender, EventArgs e)
{
    readWriteAuthorization1.ResetControlAuthorization();
}
```

4. Make sure the textBox's "ApplyAuthorization on ReadWriteAuthorization" is set to true.

This solved the problem for me.
If you're trying to get them to be read only, then just set the .ReadOnly property to true. Alternatively, if you're never ever using these textboxes for editing, then maybe just use a Label instead? **EDIT:** Ahh it appears this more of a CSLA-framework question than a pure windows forms question. I've never even heard of CSLA before this question, but it looks interesting.
Disabling a TextBox in C# .NET using CSLA
[ "", "c#", "winforms", "textbox", "csla", "" ]
In our API, we use byte[] to send data across the network. Everything worked fine, until the day our "foreign" clients decided to pass/receive Unicode characters. As far as I know, Unicode characters occupy 2 bytes; however, we only allocate 1 byte in the byte array for them. Here is how we read the character from the byte[] array:

```
// buffer is a byte[6553] and index is a current location in the buffer
char c = System.BitConverter.ToChar(buffer, m_index);
index += SIZEOF_BYTE;
return c;
```

So the current issue is the API is receiving a strange Unicode character: when I look at the Unicode hexadecimal, I found that the least significant byte is correct but the most significant byte has a value when it's supposed to be 0. A quick workaround, thus far, has been to `0x00FF & c` to filter the msb. Please suggest the correct approach to deal with Unicode characters coming from the socket? Thanks.

**Solution:**

*Kudos to Jon:*

```
char c = (char) buffer[m_index];
```

And as he mentioned, the reason it works is because the client API sends a character occupying only one byte, and BitConverter.ToChar uses two, hence the issue in converting it. I am still startled as to why it worked for some sets of characters and not others, as it should have failed in all cases.

Thanks guys, great responses!
You should use [`Encoding.GetString`](http://msdn.microsoft.com/en-us/library/system.text.encoding.getstring.aspx), using the most appropriate encoding. I don't quite understand your situation fully, but the `Encoding` class is almost certain to be the way to handle it. Who is in control of the data here? Your code, or that of your customers? Have you defined what the correct format is? EDIT: Okay, I've had another look at your code: BitConverter.ToChar returns "A character formed by two bytes beginning at startIndex." If you only want to use *one* byte, just cast it: ``` char c = (char) buffer[m_index]; ``` I'm surprised your code has been working at all, as it would be breaking any time the next byte was non-zero.
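To make the byte-level mechanics visible, here is a small sketch (in Python rather than the C# of the question, purely to illustrate; the buffer contents are made up) of what the two-byte read does versus the one-byte cast:

```python
import struct

buffer = bytes([0x41, 0x42, 0x00])  # 'A', 'B', NUL -- a made-up buffer

# What BitConverter.ToChar(buffer, 0) does: consume TWO bytes,
# little-endian, as a UTF-16 code unit.
two_byte_char = chr(struct.unpack_from("<H", buffer, 0)[0])

# What the cast (char) buffer[0] does: consume ONE byte.
one_byte_char = chr(buffer[0])

print(hex(ord(two_byte_char)))  # 0x4241 -- the next byte leaked into the high byte
print(one_byte_char)            # A
```

This also suggests why the original code sometimes appeared to work: whenever the byte following the character happened to be zero, the high byte of the two-byte read stayed 0 and the result looked correct.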
You should look at the System.Text.Encoding.ASCII.GetString method, which takes a byte[] array and converts it to a string (for ASCII). Use System.Text.Encoding.UTF8 or System.Text.Encoding.Unicode for Unicode strings in the UTF-8 or UTF-16 encodings. There are also methods for converting strings to byte[] in the ASCIIEncoding, UTF8Encoding and UnicodeEncoding classes: see the GetBytes(String) methods.
Conversion of a unicode character from byte
[ "c#", "unicode", "types" ]
We have a URL object in one of our Java classes that we want to mock, but it's a final class so we cannot. We do not want to go a level above and mock the InputStream, because that will still leave us with untested code (we have draconian test coverage standards). I've tried JMockit's reflective powers, but we work on Macs and there are problems with the Java agent handler that I haven't been able to resolve. So are there any solutions that do not involve using real URLs in the JUnit test?
When I have a class that can't be easily mocked because it is final (or sealed in C#), my usual route is to write a wrapper around the class and use the wrapper wherever I would use the actual class. Then I would mock out the wrapper class as necessary.
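A minimal sketch of that wrapper approach in Java. The `UrlSource` interface and the class names here are illustrative assumptions, not from the original code:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

// Hypothetical seam: production code depends on this interface
// instead of on the final java.net.URL class directly.
interface UrlSource {
    InputStream openStream() throws IOException;
}

// Thin adapter that delegates to a real URL in production code.
class RealUrlSource implements UrlSource {
    private final URL url;
    RealUrlSource(URL url) { this.url = url; }
    public InputStream openStream() throws IOException {
        return url.openStream();
    }
}

// In a unit test you implement UrlSource directly: no mocking
// framework and no real network access required.
class StubUrlSource implements UrlSource {
    public InputStream openStream() {
        return new ByteArrayInputStream("stubbed".getBytes());
    }
}

// Example consumer that only ever sees the wrapper.
class UrlReader {
    static String readAll(UrlSource src) {
        try (InputStream in = src.openStream()) {
            StringBuilder sb = new StringBuilder();
            int b;
            while ((b = in.read()) != -1) sb.append((char) b);
            return sb.toString();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Since the wrapper is an interface you control, any mocking framework (or a plain hand-written stub, as above) can stand in for it.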
Like Rob said, if what you want is to mock the connection returned from the URL, you can extend `URLStreamHandler`. For instance, with mockito: ``` final URLConnection mockUrlCon = mock(URLConnection.class); ByteArrayInputStream is = new ByteArrayInputStream( "<myList></myList>".getBytes("UTF-8")); doReturn(is).when(mockUrlCon).getInputStream(); //make getLastModified() return first 10, then 11 when(mockUrlCon.getLastModified()).thenReturn((Long)10L, (Long)11L); URLStreamHandler stubUrlHandler = new URLStreamHandler() { @Override protected URLConnection openConnection(URL u) throws IOException { return mockUrlCon; } }; URL url = new URL("foo", "bar", 99, "/foobar", stubUrlHandler); doReturn(url).when(mockClassloader).getResource("pseudo-xml-path"); ```
Mocking a URL in Java
[ "java", "url", "mocking" ]
I have a page written in Perl by someone else. I don't know Perl, so I wrote a PHP file that right now is just linked from the Perl page. What I'd like to do is embed the PHP file in the Perl file if the Perl page has been passed a certain variable. If I were using PHP for both, I could just do ``` if ($_GET['sidebar']) include "embedded.php"; ``` I know there are ways to read text files in Perl, but can I include a PHP file within a Perl file? I'm assuming it wouldn't work because they're processed by different parts of the server, so no high hopes, but maybe someone's tried something like it.
If you simply want to include the resulting output (HTML, etc.) of the PHP script into the perl-generated page you can use backticks to call either the PHP script via php-cli, like this: First, the test.pl script: ``` [root@www1 cgi-bin]# cat test.pl #!/usr/bin/perl print "This is a perl script\n"; my $output = `php test.php`; print "This is from PHP:\n$output\n"; print "Back to perl...\n"; exit 0; ``` Next, the test.php script: ``` [root@www1 cgi-bin]# cat test.php <?php echo "PHP generated this"; ?> ``` Here's the output of running "test.pl": ``` [root@www1 cgi-bin]# perl test.pl This is a perl script This is from PHP: PHP generated this Back to perl... ```
There are the [PHP](http://search.cpan.org/perldoc?PHP) and [PHP::Interpreter](http://search.cpan.org/perldoc?PHP::Interpreter) modules, both have an `include` method. Haven't tried either of them myself though.
Can you include PHP in a Perl file?
[ "php", "perl" ]
I am trying to understand a script generated by the ASP.NET AJAX Control Toolkit, which is currently giving an "object expected" error (the error goes away if I place my PopupControlExtender in an update panel). ``` document.getElementById('ctl00_ValidationSummary1').dispose = function() { Array.remove(Page_ValidationSummaries, document.getElementById('ctl00_ValidationSummary1')); } (function() {var fn = function() {AjaxControlToolkit.ModalPopupBehavior.invokeViaServer('ctl00_c1_componentCategoryListUC_componentCategoryGrid_modalPopupExtender', true); Sys.Application.remove_load(fn);};Sys.Application.add_load(fn);})(); ``` What I see here is: ``` someobject.someevent = function() { dosth; } /* Get ready, I am about to do sth crazy ... */ (function() { dosth; })(); /* you did what? */ ``` What does this syntax mean? **Edit:** I am specifically curious about (function () { ... })() coming immediately after another function's ending }. **Edit:** It turns out the AJAX guys forgot to place a semicolon after the event handler assignment.
The ``` (function() { dosth; })(); ``` syntax declares an anonymous function, and then executes it immediately. It's equivalent to doing this: ``` var myFun = (function() { dosth; }); myFun(); ``` but without the temporary variable. Broadly speaking this is similar to just executing whatever `dosth` is; but creating a function object introduces a new scope for variables (due to the closure), and thus this is often used to work around issues with scoping. In the specific case you've quoted, I don't see any reason why this would be particularly necessary. However it could be done like this for two reasons - either the final Javascript itself is generated by some automatic process that can't tell whether the closure is needed; or it was written by a human who decided to always wrap things in functions to be safe.
``` (function() { dosth; })(); ``` Here, an anonymous function is created, and then immediately invoked. This is a relatively popular idiom to create a local scope in JavaScript, where only functions get their own scope. A local scope allows you to have private variables and avoids name clashes. In other languages you could write something like ``` int a = 1; { int b = 0; if (something){ int c = 3; } } ``` and the three variables would all be in separate scopes, but in JavaScript you have to declare a function to get a new scope.
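The scoping behaviour described above can be demonstrated directly in JavaScript; the counter below is just an illustration, not taken from the toolkit script:

```javascript
var counter = (function () {
  // `count` is local to the anonymous function: it is invisible
  // outside, but kept alive by the closure returned below.
  var count = 0;
  return function () {
    count += 1;
    return count;
  };
})();
// counter() now returns 1, 2, 3, ... while `count` itself
// is not reachable from this scope.
```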
What does this javascript syntax mean?
[ "javascript", "syntax" ]
I haven't had the chance to take any serious low-level programming courses in school. (I know I really should get going on learning the "behind-the-scenes" to be a better programmer.) I appreciate the conveniences of Java, including the ability to stick anything into a `System.out.print` statement. However, is there any reason why you would want to use `System.out.printf` instead? Also, should I avoid print calls like this in "real applications"? It's probably better to print messages to the client's display using some kind of UI function, right?
The [`printf`](http://java.sun.com/javase/6/docs/api/java/io/PrintStream.html#printf(java.lang.String,%20java.lang.Object...)) method of the [PrintStream](http://java.sun.com/javase/6/docs/api/java/io/PrintStream.html) class provides [string formatting](http://java.sun.com/developer/technicalArticles/Programming/sprintf/) similar to the `printf` function in C. The formatting for `printf` uses the [`Formatter` class' formatting syntax](http://java.sun.com/javase/6/docs/api/java/util/Formatter.html#syntax). The `printf` method can be particularly useful when displaying multiple variables in one line which would be tedious using string concatenation: ``` int a = 10; int b = 20; // Tedious string concatenation. System.out.println("a: " + a + " b: " + b); // Output using string formatting. System.out.printf("a: %d b: %d\n", a, b); ``` Also, writting Java applications doesn't necessarily mean writing GUI applications, so when writing console applications, one would use `print`, `println`, `printf` and other functions that will output to `System.out`.
`printf` is better if you need to control the precision of your floating-point numbers, do padding, etc. System.out.print can print out all kinds of things, but you can't do fine-grained control of precision and padding with it.
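A small illustration of the precision and padding control mentioned above (the values are arbitrary; `Locale.ROOT` is used so the decimal separator is always '.'):

```java
import java.util.Locale;

class PrintfDemo {
    // Width 8, two decimal places; left-pads with spaces.
    static String money(double value) {
        return String.format(Locale.ROOT, "%8.2f", value);
    }

    public static void main(String[] args) {
        System.out.printf(Locale.ROOT, "%8.2f%n", 3.14159); // "    3.14"
        System.out.printf("%05d%n", 42);                    // "00042" (zero-padded, width 5)
    }
}
```

The same format strings work with `String.format`, so you can build the string without printing it.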
Is there a good reason to use "printf" instead of "print" in java?
[ "java", "printing", "printf" ]
**Duplicate**: [Generating SQL Schema from XML](https://stackoverflow.com/questions/263836/generating-sql-schema-from-xml) --- In a project I am working on, I have a need to support either a strongly-typed dataset for storing the data as XML, or storing the data in SQL Server. Now I already have the XSD schema created, and I would like to be able to create a SQL Server database using the tables and relationships defined in the XSD. Is this possible? And if so, what is the best way to approach this problem? --- **Clarification**: What I'm looking for is a way to do the above via code at runtime with C# and SQL Server. Can this be done?
I managed to come up with the following class based on the SQL Server Management Objects: ``` using System; using System.Collections.Generic; using System.Data; using System.Data.SqlClient; using System.IO; using System.Text; using Microsoft.SqlServer.Management.Common; using Microsoft.SqlServer.Management.Smo; using Rule=System.Data.Rule; namespace XSD2SQL { public class XSD2SQL { private readonly Server _server; private readonly SqlConnection _connection; private Database _db; private DataSet _source; private string _databaseName; public XSD2SQL(string connectionString, DataSet source) { _connection = new SqlConnection(connectionString); _server = new Server(new ServerConnection(_connection)); _source = source; } public void CreateDatabase(string databaseName) { _databaseName = databaseName; _db = _server.Databases[databaseName]; if (_db != null) _db.Drop(); _db = new Database(_server, _databaseName); _db.Create(); } public void PopulateDatabase() { CreateTables(_source.Tables); CreateRelationships(); } private void CreateRelationships() { foreach (DataTable table in _source.Tables) { foreach (DataRelation rel in table.ChildRelations) CreateRelation(rel); } } private void CreateRelation(DataRelation relation) { Table primaryTable = _db.Tables[relation.ParentTable.TableName]; Table childTable = _db.Tables[relation.ChildTable.TableName]; ForeignKey fkey = new ForeignKey(childTable, relation.RelationName); fkey.ReferencedTable = primaryTable.Name; fkey.DeleteAction = SQLActionTypeToSMO(relation.ChildKeyConstraint.DeleteRule); fkey.UpdateAction = SQLActionTypeToSMO(relation.ChildKeyConstraint.UpdateRule); for (int i = 0; i < relation.ChildColumns.Length; i++) { DataColumn col = relation.ChildColumns[i]; ForeignKeyColumn fkc = new ForeignKeyColumn(fkey, col.ColumnName, relation.ParentColumns[i].ColumnName); fkey.Columns.Add(fkc); } fkey.Create(); } private void CreateTables(DataTableCollection tables) { foreach (DataTable table in tables) { 
DropExistingTable(table.TableName); Table newTable = new Table(_db, table.TableName); PopulateTable(ref newTable, table); SetPrimaryKeys(ref newTable, table); newTable.Create(); } } private void PopulateTable(ref Table outputTable, DataTable inputTable) { foreach (DataColumn column in inputTable.Columns) { CreateColumns(ref outputTable, column, inputTable); } } private void CreateColumns(ref Table outputTable, DataColumn inputColumn, DataTable inputTable) { Column newColumn = new Column(outputTable, inputColumn.ColumnName); newColumn.DataType = CLRTypeToSQLType(inputColumn.DataType); newColumn.Identity = inputColumn.AutoIncrement; newColumn.IdentityIncrement = inputColumn.AutoIncrementStep; newColumn.IdentitySeed = inputColumn.AutoIncrementSeed; newColumn.Nullable = inputColumn.AllowDBNull; newColumn.UserData = inputColumn.DefaultValue; outputTable.Columns.Add(newColumn); } private void SetPrimaryKeys(ref Table outputTable, DataTable inputTable) { Index newIndex = new Index(outputTable, "PK_" + outputTable.Name); newIndex.IndexKeyType = IndexKeyType.DriPrimaryKey; newIndex.IsClustered = false; foreach (DataColumn keyColumn in inputTable.PrimaryKey) { newIndex.IndexedColumns.Add(new IndexedColumn(newIndex, keyColumn.ColumnName, true)); } if (newIndex.IndexedColumns.Count > 0) outputTable.Indexes.Add(newIndex); } private DataType CLRTypeToSQLType(Type type) { switch (type.Name) { case "String": return DataType.NVarCharMax; case "Int32": return DataType.Int; case "Boolean": return DataType.Bit; case "DateTime": return DataType.DateTime; case "Byte[]": return DataType.VarBinaryMax; } return DataType.NVarCharMax; } private ForeignKeyAction SQLActionTypeToSMO(Rule rule) { string ruleStr = rule.ToString(); return (ForeignKeyAction)Enum.Parse(typeof (ForeignKeyAction), ruleStr); } private void DropExistingTable(string tableName) { Table table = _db.Tables[tableName]; if (table != null) table.Drop(); } } } ``` It hasn't been rigorously tested yet, and there needs to be more 
SQL to CLR types mapped out, but it does create a new database, all the tables, columns, primary keys, and foreign keys. For this code to work, a few assemblies need to be referenced: ``` Microsoft.SqlServer.ConnectionInfo Microsoft.SqlServer.Management.Sdk.Sfc Microsoft.SqlServer.Smo Microsoft.SqlServer.SqlEnum ``` Hope this helps someone else out.
I would write some XSLT to turn the XSD into SQL create statements.
Generating SQL Server DB from XSD
[ "c#", ".net", "sql-server", "xsd" ]
As a database architect, developer, and consultant, there are many questions I can answer. One, though, that I was asked recently and still can't answer well, is... > "What is one of, or some of, the best methods or techniques to keep database changes documented, organized, and yet able to roll out effectively either in a single-developer or multi-developer environment." This may involve stored procedures and other object scripts, but especially schemas - from documentation, to the new physical update scripts, to rollout, and then full-circle. There are applications to make this happen, but they require schema hooks and overhead. I would rather like to know about techniques used without a lot of extra third-party involvement.
The easiest way I have seen this done without the aid of an external tool is to create a "schema patch", if you will. The schema patch is just a simple T-SQL script. The schema patch is given a version number within the script, and this number is stored in a table in the database receiving the changes. Any new change to the database involves creating a new schema patch; when run, the script detects what version the database is currently on and runs all schema patches in between. Afterwards the schema version table is updated with the date/time the patch was executed, to be checked on the next run. A good book that goes into details like this is called [Refactoring Databases](https://rads.stackoverflow.com/amzn/click/com/0321293533). If you wish to use an external tool, you can look at [Ruby's Migrations](http://wiki.rubyonrails.org/rails/pages/understandingmigrations) project or a similar tool in C# called [Migrator.NET](http://code.google.com/p/migratordotnet/). These tools work by creating C#/Ruby classes with a "Forward" and "Backward" migration. These tools are more feature-rich because they know how to go forwards as well as backwards through the schema patches. As you stated, however, you are not interested in an external tool, but I thought I would add that for other readers anyway.
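A minimal sketch of what such a schema patch might look like in T-SQL. The table and column names here are illustrative assumptions, not from the answer:

```sql
-- Version-tracking table, created once per database.
CREATE TABLE SchemaVersion (
    Version   int      NOT NULL PRIMARY KEY,
    AppliedOn datetime NOT NULL DEFAULT GETDATE()
);
GO

-- Patch 2: runs only if the database has not yet reached version 2.
IF NOT EXISTS (SELECT 1 FROM SchemaVersion WHERE Version >= 2)
BEGIN
    ALTER TABLE Customer ADD Email nvarchar(256) NULL;
    INSERT INTO SchemaVersion (Version) VALUES (2);
END
GO
```

Because each patch checks the version table before doing anything, the full sequence of patches can be run against any copy of the database and only the missing ones take effect.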
I rather liked this series: <http://odetocode.com/Blogs/scott/archive/2008/02/03/11746.aspx>
Best Database Change Control Methodologies
[ "sql", "database", "architecture", "schema", "methodology" ]
I'm using CakePHP with $modelName->find(...) calls to select quite a number of rows (could be hundreds). Normally, in PHP/MySQL, of course this wouldn't be a problem, as you fetch them in a while loop. But CakePHP loads all rows into an array, which exhausts the memory limit. Is there a way to use the $modelName->find(...) constructs but return an iterator to fetch each row on demand? Thanks, David
No, out of the box CakePHP (and ActiveRecord in general) doesn't support iterating over a result set like that. If you have a use case where you actually need *all* the records of a table, you're probably better off using raw SQL (or rethinking your use case). You could also (and have likely already thought to) use an offset of some kind, and call ->find multiple times. If you take this approach, don't forget to order your result set by some field to ensure a deterministic result. Databases only *seem* to return rows in a stable order when you leave an ORDER BY off; the order is not actually guaranteed. I haven't personally tried this, and it seems inefficient from a multiple-queries standpoint, but it's something worth trying.
If your problem is caused by the relationships of your model, you can reduce the recursion this way: $modelname->recursive = -1; then you will only get the data of the current model, without any relations. Iterating through all the records, you'll be able to obtain their relationships one by one by querying again with recursive > 0.
Can I stop CakePHP fetching all rows for a query?
[ "php", "mysql", "database", "cakephp" ]
I'm trying to create a generic function that removes duplicates from a std::vector. Since I don't want to create a function for each vector type, I want to make this a template function that can accept vectors of any type. Here is what I have: ``` //foo.h class Foo { template<typename T> static void RemoveVectorDuplicates(std::vector<T>& vectorToUpdate); }; //foo.cpp template<typename T> void Foo::RemoveVectorDuplicates(std::vector<T>& vectorToUpdate) { for(typename T::iterator sourceIter = vectorToUpdate.begin(); (sourceIter != vectorToUpdate.end() - 1); sourceIter++) { for(typename T::iterator compareIter = (vectorToUpdate.begin() + 1); compareIter != vectorToUpdate.end(); compareIter++) { if(sourceIter == compareIter) { vectorToUpdate.erase(compareIter); } } } } //SomeOtherClass.cpp #include "foo.h" ... void SomeOtherClass::SomeFunction(void) { std::vector<int> myVector; //fill vector with values Foo::RemoveVectorDuplicates(myVector); } ``` I keep getting a linker error, but it compiles fine. Any ideas as to what I'm doing wrong? UPDATE: Based on the answer given by Iraimbilanja, I went and rewrote the code. However, just in case someone wanted working code to do the RemoveDuplicates function, here it is: ``` //foo.h class Foo { template<typename T> static void RemoveVectorDuplicates(T& vectorToUpdate){ for(typename T::iterator sourceIter = vectorToUpdate.begin(); sourceIter != vectorToUpdate.end(); sourceIter++) { for(typename T::iterator compareIter = (sourceIter + 1); compareIter != vectorToUpdate.end(); compareIter++) { if(*sourceIter == *compareIter) { vectorToUpdate.erase(compareIter--); } } } }; ``` Turns out that if I specify std::vector in the signature, the iterators don't work correctly. So I had to go with a more generic approach. Also, when erasing compareIter, the next iteration of the loop produces a pointer exception. The post-decrement of compareIter on the erase takes care of that problem.
I also fixed the bugs in the iterator compare and in the initialization of compareIter in the 2nd loop. UPDATE 2: I saw that this question got another up vote, so figured I'd update it with a better algorithm that uses some C++14 goodness. My previous one only worked if the type stored in the vector implemented operator== and it required a bunch of copies and unnecessary comparisons. And, in hindsight, there is no need to make it a member of a class. This new algorithm allows for a custom compare predicate, shrinks the compare space as duplicates are found and makes a significantly smaller number of copies. The name has been changed to `erase_duplicates` to better conform to STL algorithm naming conventions. ``` template<typename T> static void erase_duplicates(T& containerToUpdate) { erase_duplicates(containerToUpdate, nullptr); } template<typename T> static void erase_duplicates(T& containerToUpdate, std::function<bool (typename T::value_type const&, typename T::value_type const&)> pred) { auto lastNonDuplicateIter = begin(containerToUpdate); auto firstDuplicateIter = end(containerToUpdate); while (lastNonDuplicateIter != firstDuplicateIter) { firstDuplicateIter = std::remove_if(lastNonDuplicateIter + 1, firstDuplicateIter, [&lastNonDuplicateIter, &pred](auto const& compareItem){ if (pred != nullptr) { return pred(*lastNonDuplicateIter, compareItem); } else { return *lastNonDuplicateIter == compareItem; } }); ++lastNonDuplicateIter; } containerToUpdate.erase(firstDuplicateIter, end(containerToUpdate)); } ```
## Short Answer Define the function in the header, preferably inside the class definition. ## Long answer Defining the template function inside the .cpp means it won't get `#include`d into any translation units: it will only be available to the translation unit it's defined in. Hence `RemoveVectorDuplicates` must be defined in the header, as this is the only way the compiler can text-substitute the template arguments, hence *instantiating* the template, producing a usable function. ### There are two workarounds for this inconvenience **First**, you can remove the `#include "foo.h"` from the .cpp and add another one, at the *end* of the *header*: ``` #include "foo.cpp" ``` This lets you organize your files consistently, but doesn't provide the usual advantages of separate compilation (smaller dependencies, faster and rarer compiles). **Second**, you can just define the template function in the .cpp and explicitly instantiate it for all the types it'll ever be used with. For example, this can go at the end of the .cpp to make the function usable with `int`s: ``` template void Foo::RemoveVectorDuplicates(std::vector<int>&); ``` However, this assumes you only use templates to save some typing, rather than to provide true genericity.
One alternative you have is to first `std::sort()` the vector, and then use the pre-existing `std::unique()` function to remove duplicates. The sort takes O(nlog n) time, and removing duplicates after that takes just O(n) time as all duplicates appear in a single block. Your current "all-vs-all" comparison algorithm takes O(n^2) time.
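A sketch of that sort-then-unique approach. Note that, unlike the questioner's version, this reorders the elements, which may or may not be acceptable:

```cpp
#include <algorithm>
#include <vector>

template <typename T>
void RemoveVectorDuplicates(std::vector<T>& v) {
    std::sort(v.begin(), v.end());  // O(n log n); groups duplicates together
    // std::unique shifts the unique elements to the front and returns an
    // iterator one past them; erase then drops the leftover tail.
    v.erase(std::unique(v.begin(), v.end()), v.end());
}
```

Being a template defined entirely in a header, this also sidesteps the linker error discussed above.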
How do you create a static template member function that performs actions on a template class?
[ "c++", "stl", "templates", "vector", "static-members" ]
I'd like to work on a BB Code filter for a PHP website. (I'm using CakePHP, so it would be a BB Code helper.) I have the following requirements: * **BB Code can be nested. So something like this is valid** ``` [block] [block] [/block] [block] [block] [/block] [/block] [/block] ``` * **BB Codes can have 0 or more parameters.** Example: ``` [video: url="url", width="500", height="500"]Title[/video] ``` * **BB Code might have multiple behaviours** Let's say `[url]text[/url]` would be transformed to `[url:url="text"]text[/url]`, or the video BB Code would be able to choose between YouTube, Dailymotion, etc. I've already done something with regex, but my biggest problem was matching parameters. In fact, I got nested BB Code and BB Code with 0 parameters to work. But when I added a regex match for parameters, it didn't match nested BB Code correctly: `"\[($tag)(=.*)\"\](.*)\[\/\1\]"` (It wasn't `.*` but the non-greedy matcher.) I don't have the complete regex with me right now, but I had something that looked like that (above). Is there a way to match BB Code with regex or something else? The only thing I can think of is to use the visitor pattern and to split my text with each possible tag. This way, I can have a bit more control over my text parsing, and I could probably validate my document, so if the input text doesn't have valid BB Code... I could notify the user with an error before saving anything. I would use SableCC to create my text parser.
There's both a [pecl](http://pecl.php.net/package/bbcode) and [PEAR](http://pear.php.net/package/HTML_BBCodeParser) BBCode parsing library. Software's hard enough without reinventing years of work on your own. If neither of those are an option, I'd concentrate on turning the BBCode into a valid XML string, and then using your favorite XML parsing routine on that. Very very rough idea here, but 1. Run the code through htmlspecialchars to escape any entities that need escaping 2. Transform all [ and ] characters into < and > respectively 3. Don't forget to account for the colon in cases like [tagname: If the BBCode was nested properly, you should be all set to pass this string into an XML parsing object (SimpleXML, DOMDocument, etc.)
There are several existing libraries for parsing BBCode, it may be easier to look into those than trying to roll your own: Here's a couple, I'm sure there are more if you look around: [PECL bbcode](http://pecl.php.net/package/bbcode) [PEAR HTML\_BBCodeParser](http://pear.php.net/package/HTML_BBCodeParser)
How to parse nested BB Code with parameters
[ "php", "bbcode" ]
I have a VB6 application which works with datetime values in SQL Server (which are obviously storing dates as mm/dd/yyyy). I need to represent these dates to the user as dd/mm/yyyy, read them in as dd/mm/yyyy, and then store them back into the database as the standard mm/dd/yyyy. These are the current code snippets I have which pull + insert the dates; however, I have read many conflicting methods of handling the conversions, and I was wondering if anyone here knew a clear solution for this situation. ``` "SELECT * FROM List WHERE DateIn LIKE '%" & txtDateIn.Text & "%'" "UPDATE [Progress] SET [Date] = '" & txtDate.Text & "'" txtDate.Text = "" & RecordSet.Fields("Date").Value ``` Any thoughts? Thanks in advance. **Update** Actually I just noticed I do have dates stored in datetime fields in the form of 16/08/2009 00:00:00, which is dd/mm/yyyy. So perhaps I misunderstood the problem. But when trying to update the datetime value I have been getting 'The conversion of char data type to a datetime data type resulted in an out-of-range datetime value.' I assumed this was because the date formats did not match (causing a problem with having a month value out of range); however, I do have date values in the format of day/month/year in the datetime field already. And the date being submitted to the database is definitely dd/mm/yyyy. **Update 2** Ok, there seems to be some confusion I have caused. I apologize. * I am storing the dates as datetime in the SQL database * The texts are TextBox controls in the VB6 application * I am running SQL SELECT statements to read the dates from the database and place the value in a TextBox * I then have a 'commit' command button which then performs an UPDATE SQL statement to place the value of the TextBox into the datetime field in the SQL database * This works perfectly fine until 1 specific occasion.
In this occasion I have a datetime value (which SQL Server 2005 displays as 16/08/2009 00:00:00) which is read from the database and populated the TextBox with the value 16/08/2009. Now when I try to run the UPDATE statement without modifying the TextBox text I get the error 'The conversion of char data type to a datetime data type resulted in an out-of-range datetime value.' This does not occur with other records such as one where the date is 04/08/2009 so the only issue I can see is possibly with the position of day and month in the value because if the DB is expecting month first then obviously 16/08/2009 would be out-of-range. However the value in the database is already 16/08/2009 with no issues.
Well, after all of that, the problem was simple. I had the date value wrapped in both single (') and double (") quotes. The problem arose because the date values did not require the single quotes. Removing them solved the issue. Thank you anyway for trying to help, all.
SQL Server doesn't "obviously" store dates as mm/dd/yyyy. It doesn't store them in a text format at all, as far as I'm aware. I don't know what the VB6 support for parameterised queries is, but that's what you want: basically you want to pass the argument to the query *as* a date rather than as text. So you should parse the user input into a date (in whatever way VB6 does this) and then pass it through in the parameterised query. EDIT: I've tried to find out how VB6 handles parameterised queries, and not had a great deal of luck - hopefully any good book on VB6 will cover it. (There are loads of examples for VB.NET, of course...) There's a [Wrox post](http://www.dotneteer.com/Weblog/post/2008/04/Prepared-statements-(parameterized-query)-with-ADO.aspx) which gives an example; that may be enough to get you going. EDIT: As the comment to this answer and the edit to this question indicate, there's some confusion as to what your data types really are. *Please* don't use character-based fields to store dates: no good can come of that. Use a proper date/datetime/whatever field, and then make sure you use parameterised queries to access the database so that the driver can do any necessary conversions.
Dealing with dates in dd/mm/yyyy format
[ "sql", "datetime", "vb6" ]
I would like to know how to convert a `64 bit long` data type to any of the `16 bit` data types. This feature is required in an **Ethernet application** to include the **time stamp**. Only **2 bytes** ( **16 bits** ) are available to include the **time stamp**. But we are getting a `64 bit long` as the time stamp value from the `Win API`. So a conversion from a 64 bit data type to a 16 bit data type is essential.
Well, you can't fit 64 bits of information into 16 bits of storage without losing some of the information. So it's up to you how to quantize or truncate the timestamp. E.g. suppose you get the timestamp in nanosecond precision, but you only need to store it at seconds precision. In that case you divide the 64 bit number by 1000000000 and are left with the seconds. Then it might fit into 16 bits or not (16 bits would only store up to 65535 seconds). If it won't fit, then you'll have the timestamp wrapping around periodically. Which, again, might be a problem in your case or it might be not a problem. In any case, if you need to interface an existing library that requires timestamps - figure out *what* it needs in that timestamp (clock ticks? seconds? years?). Then figure out what the Windows times function that you're using returns. Then convert the Windows time unit into the-library-that-you-use time unit.
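For instance, here is a sketch of the quantize-then-truncate arithmetic in C, assuming the 64-bit value is in 100-nanosecond units (as with Windows FILETIME-style timestamps); adjust the divisor for whatever unit your API actually returns:

```c
#include <stdint.h>

/* Convert a 64-bit timestamp in 100-ns units to a 16-bit seconds
   counter. The counter wraps around every 65536 seconds (~18.2 h),
   which the receiving side must be prepared to handle. */
uint16_t to_ts16(uint64_t t_100ns)
{
    uint64_t seconds = t_100ns / 10000000ULL; /* 10^7 * 100 ns = 1 s */
    return (uint16_t)(seconds & 0xFFFFu);     /* keep the low 16 bits */
}
```

The masking makes the wrap-around explicit; whether seconds is the right granularity depends entirely on what the timestamp is used for, as discussed above.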
16 bits may or may not be enough, depending on what you need the timestamp for. For most purposes it's way too small or at least inconvenient. But some examples where this might work could be: timeouts, measuring round-trip time for packets, grossly measuring time intervals (which might work alright for displaying time information to users) and so on. On the other hand, it's probably useless for reordering packets. If this is the case, I'd suggest you replaced the timestamp with a sequence counter. Depending on the typical number of packets in the stream, you might even be able to cut down a few bits and use them for other purposes, since sequence counters can handle wrapping more easily.
How to Convert 64bit Long Data Type to 16bit Data Type
[ "c++", "c", "timestamp" ]
I need to check a generic object for null, or default(T). But I have a problem... Currently I have done it like this: ``` if (typeof(T).IsValueType) { if(default(T).Equals(thing)) // Do something else // Do something else } else { if(thing == null) // Do something else // Do something else } ``` But then I end up repeating myself... which I don't like. The problem is the following: ``` thing == null; ``` Here ReSharper warns about Possible compare of value type with 'null'. ``` thing == default(T); ``` Here I get compiler error: Cannot apply operator '==' to operands of type 'T' and 'T'. ``` thing.Equals(null|default(T)); ``` `thing` can obviously be null (that's why I have to check!), so will cause NullReferenceException. ``` null|default(T).Equals(thing); ``` null and default(T) is very often null as well... Is there a clean way to do this??
If boxing isn't an issue, you could just use: ``` object.Equals(value, default(T)) ```
The proper way to do this is: ``` return EqualityComparer<T>.Default.Equals(value, default(T)) ``` No boxing. You could even define an extension method like this: ``` public static bool IsDefault<T>(this T value) { return EqualityComparer<T>.Default.Equals(value, default(T)); } ``` ... and invoke it like this: ``` return entry.IsDefault(); ``` Though, I personally don't care for extension methods on T (e.g. this object IsNull()) since it hampers readability sometimes.
C#: Alternative to GenericType == null
[ "c#", "generics", "null" ]
I want to use the TRACE() macro to get output in the debug window in [Visual Studio 2005](http://en.wikipedia.org/wiki/Microsoft_Visual_Studio#Visual_Studio_2005) in a non-[MFC](http://en.wikipedia.org/wiki/Microsoft_Foundation_Class_Library) C++ project, but which additional header or library is needed? Is there a way of putting messages in the debug output window and how can I do that?
Build your own. trace.cpp: ``` #ifdef _DEBUG bool _trace(TCHAR *format, ...) { TCHAR buffer[1000]; va_list argptr; va_start(argptr, format); wvsprintf(buffer, format, argptr); va_end(argptr); OutputDebugString(buffer); return true; } #endif ``` trace.h: ``` #include <windows.h> #ifdef _DEBUG bool _trace(TCHAR *format, ...); #define TRACE _trace #else #define TRACE false && _trace #endif ``` then just #include "trace.h" and you're all set. Disclaimer: I just copy/pasted this code from a personal project and took out some project specific stuff, but there's no reason it shouldn't work. ;-)
If you use ATL you can try ATLTRACE. TRACE is defined in afx.h as (at least in vs 2008): ``` // extern ATL::CTrace TRACE; #define TRACE ATLTRACE ``` And ATLTRACE can be found in atltrace.h
How can I use the TRACE macro in non-MFC projects?
[ "c++", "visual-c++", "visual-studio-2005" ]
Okay, so I'm sure plenty of you have built crazy database intensive pages... I am building a page that I'd like to pull all sorts of unrelated database information from. Here are some sample different queries for this one page: * article content and info * IF the author is a registered user, their info * UPDATE the article's view counter * retrieve comments on the article * retrieve information for the authors of the comments * if the reader of the article is signed in, query for info on them * etc... I know these are basically going to be pretty lightning quick, and that I could combine some; but I wanted to make sure that this isn't abnormal? How many fairly normal and un-heavy queries would you limit yourself to on a page?
As many as needed, but not more. Really: don't worry about optimization (right now). Build it first, measure performance second, and **IFF** there is a performance problem somewhere, then start with optimization. Otherwise, you risk spending a lot of time on optimizing something that doesn't need optimization.
I've had pages with 50 queries on them without a problem. A fast query to a non-large (ie, fits in main memory) table can happen in 1 millisecond or less, so you can do quite a few of those. If a page loads in less than 200 ms, you will have a snappy site. A big chunk of that is being used by latency between your server and the browser, so I like to aim for < 100ms of time spent on the server. Do as many queries as you want in that time period. The big bottleneck is probably going to be the amount of time you have to spend on the project, so optimize for that first :) Optimize the code later, if you have to. That being said, if you are going to write any code related to this problem, write something that makes it obvious how long your queries are taking. That way you can at least find out you have a problem.
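The last paragraph's advice (write something that makes it obvious how long your queries take) could look like this minimal Python sketch; SQLite stands in for MySQL here, and every table and function name is invented:

```python
import sqlite3
import time

def timed_query(conn, sql, params=()):
    """Run a query and report how long it took."""
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    print(f"{elapsed_ms:8.3f} ms  {sql}")
    return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE article (id INTEGER PRIMARY KEY, views INTEGER)")
conn.execute("INSERT INTO article VALUES (1, 0)")
conn.execute("UPDATE article SET views = views + 1 WHERE id = 1")  # view counter
rows = timed_query(conn, "SELECT views FROM article WHERE id = ?", (1,))
```

Logging per-query time like this makes it obvious when one of a page's many small queries stops being small.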
How many MySQL queries should I limit myself to on a page? PHP / MySQL
[ "php", "mysql", "performance" ]
In [*Programming Python*](https://rads.stackoverflow.com/amzn/click/com/0596009259), Mark Lutz mentions the term *mixin*. I am from a C/C++/C# background and I have not heard the term before. What is a mixin? Reading between the lines of [this example](http://books.google.com/books?id=5zYVUIl7F0QC&pg=RA1-PA584&lpg=RA1-PA584&dq=programming+python+guimixin&source=bl&ots=HU833giXzH&sig=jwLpxSp4m_VbOYQ897UDkGNx_2U&hl=en&ei=x8iRSaTTF5iq-ganpbGPCw&sa=X&oi=book_result&resnum=3&ct=result) (which I have linked to because it is quite long), I am presuming it is a case of using multiple inheritance to extend a class as opposed to proper subclassing. Is this right? Why would I want to do that rather than put the new functionality into a subclass? For that matter, why would a mixin/multiple inheritance approach be better than using composition? What separates a mixin from multiple inheritance? Is it just a matter of semantics?
A mixin is a special kind of multiple inheritance. There are two main situations where mixins are used: 1. You want to provide a lot of optional features for a class. 2. You want to use one particular feature in a lot of different classes. For an example of number one, consider [werkzeug's request and response system](http://werkzeug.pocoo.org/docs/wrappers/). I can make a plain old request object by saying: ``` from werkzeug import BaseRequest class Request(BaseRequest): pass ``` If I want to add accept header support, I would make that ``` from werkzeug import BaseRequest, AcceptMixin class Request(AcceptMixin, BaseRequest): pass ``` If I wanted to make a request object that supports accept headers, etags, authentication, and user agent support, I could do this: ``` from werkzeug import BaseRequest, AcceptMixin, ETagRequestMixin, UserAgentMixin, AuthenticationMixin class Request(AcceptMixin, ETagRequestMixin, UserAgentMixin, AuthenticationMixin, BaseRequest): pass ``` The difference is subtle, but in the above examples, the mixin classes weren't made to stand on their own. In more traditional multiple inheritance, the `AuthenticationMixin` (for example) would probably be something more like `Authenticator`. That is, the class would probably be designed to stand on its own.
First, you should note that mixins only exist in multiple-inheritance languages. You can't do a mixin in Java or C#. Basically, a mixin is a stand-alone base type that provides limited functionality and polymorphic resonance for a child class. If you're thinking in C#, think of an interface that you don't have to actually implement because it's already implemented; you just inherit from it and benefit from its functionality. Mixins are typically narrow in scope and not meant to be extended. [edit -- as to why:] I suppose I should address why, since you asked. The big benefit is that you don't have to do it yourself over and over again. In C#, the biggest place where a mixin could benefit might be from the [Disposal pattern](http://blog.jawaji.com/2008/08/disposal-pattern-c.html). Whenever you implement IDisposable, you almost always want to follow the same pattern, but you end up writing and re-writing the same basic code with minor variations. If there were an extendable Disposal mixin, you could save yourself a lot of extra typing. [edit 2 -- to answer your other questions] > What separates a mixin from multiple inheritance? Is it just a matter of semantics? Yes. The difference between a mixin and standard multiple inheritance is just a matter of semantics; a class that has multiple inheritance might utilize a mixin as part of that multiple inheritance. The point of a mixin is to create a type that can be "mixed in" to any other type via inheritance without affecting the inheriting type while still offering some beneficial functionality for that type. Again, think of an interface that is already implemented. I personally don't use mixins since I develop primarily in a language that doesn't support them, so I'm having a really difficult time coming up with a decent example that will just supply that "ahah!" moment for you. But I'll try again. 
I'm going to use an example that's contrived -- most languages already provide the feature in some way or another -- but that will, hopefully, explain how mixins are supposed to be created and used. Here goes: Suppose you have a type that you want to be able to serialize to and from XML. You want the type to provide a "ToXML" method that returns a string containing an XML fragment with the data values of the type, and a "FromXML" that allows the type to reconstruct its data values from an XML fragment in a string. Again, this is a contrived example, so perhaps you use a file stream, or an XML Writer class from your language's runtime library... whatever. The point is that you want to serialize your object to XML and get a new object back from XML. The other important point in this example is that you want to do this in a generic way. You don't want to have to implement a "ToXML" and "FromXML" method for every type that you want to serialize, you want some generic means of ensuring that your type will do this and it just works. You want code reuse. If your language supported it, you could create the XmlSerializable mixin to do your work for you. This type would implement the ToXML and the FromXML methods. It would, using some mechanism that's not important to the example, be capable of gathering all the necessary data from any type that it's mixed in with to build the XML fragment returned by ToXML and it would be equally capable of restoring that data when FromXML is called. And.. that's it. To use it, you would have any type that needs to be serialized to XML inherit from XmlSerializable. Whenever you needed to serialize or deserialize that type, you would simply call ToXML or FromXML. In fact, since XmlSerializable is a fully-fledged type and polymorphic, you could conceivably build a document serializer that doesn't know anything about your original type, accepting only, say, an array of XmlSerializable types. 
Now imagine using this scenario for other things, like creating a mixin that ensures that every class that mixes it in logs every method call, or a mixin that provides transactionality to the type that mixes it in. The list can go on and on. If you just think of a mixin as a small base type designed to add a small amount of functionality to a type without otherwise affecting that type, then you're golden. Hopefully. :)
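For concreteness, here is a minimal, runnable Python sketch of the XML-serialization mixin described above; the class name, tag layout, and reflection mechanism are my own invention, not any real library:

```python
class XmlSerializableMixin:
    """Mixin: adds ToXML-style behaviour to any class that inherits it."""
    def to_xml(self):
        # Gather the instance's data values generically, as the answer describes.
        fields = "".join(
            "<{0}>{1}</{0}>".format(k, v) for k, v in sorted(vars(self).items())
        )
        name = type(self).__name__
        return "<{0}>{1}</{0}>".format(name, fields)

class Point(XmlSerializableMixin):   # the feature is simply "mixed in"
    def __init__(self, x, y):
        self.x = x
        self.y = y

print(Point(1, 2).to_xml())  # <Point><x>1</x><y>2</y></Point>
```

Point itself is unaffected beyond gaining to_xml, which is the defining property of a mixin.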
What is a mixin and why is it useful?
[ "python", "oop", "multiple-inheritance", "mixins", "python-class" ]
Is it possible to mock a static method using Rhino.Mocks? If Rhino does not support this, is there a pattern or something which would let me accomplish the same?
> Is it possible to mock a static method > using Rhino.Mocks No, it is not possible. TypeMock can do this because it utilizes the CLR profiler to intercept and redirect calls. RhinoMocks, NMock, and Moq cannot do this because these libraries are simpler; they don't use the CLR profiler APIs. They are simpler in that they use proxies to intercept virtual members and interface calls. The downside of this simplicity is that they cannot mock certain things, such as static methods, static properties, sealed classes, or non-virtual instance methods.
Wrap the static method call in a virtual instance method in another class, then mock that out.
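The same wrap-then-mock pattern, translated to Python purely for illustration (unittest.mock stands in for Rhino.Mocks, and every name here is hypothetical):

```python
from unittest import mock
import math

class SqrtWrapper:
    """Instance-method wrapper around a hard-to-mock static call."""
    def compute(self, x):
        return math.sqrt(x)   # the static method being wrapped

def side_of_square(area, wrapper):
    # Code under test depends on the wrapper, not on the static call directly.
    return wrapper.compute(area)

fake = mock.Mock(spec=SqrtWrapper)
fake.compute.return_value = 42
print(side_of_square(100, fake))           # 42: the static call never ran
print(side_of_square(9, SqrtWrapper()))    # 3.0: the real implementation
```

Injecting the wrapper instance is what turns the static dependency into something replaceable in tests.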
Mocking Static methods using Rhino.Mocks
[ "c#", "tdd", "mocking", "rhino-mocks" ]
I have an object in a presenter connected to a view. Inside my XAML I have the following: ``` <Label Content="{Binding ElementName=PSV, Path=Presenter.Portfolio.Name}"/> ``` Now when the control is created, Portfolio is null; I then run another method which sets Portfolio. I've implemented INotifyPropertyChanged, but so far I've not been able to get the binding to pick up the change. Can someone give me tips? Can I bind to a property of a property?
Binding always works against the DataContext, so you would need to set your Presenter as the local DataContext. For example you could do this in the constructor of your Window or UserControl: ``` this.DataContext = new Presenter(); ``` Your binding would then change to: ``` <Label Content="{Binding Path=Portfolio.Name}"/> ``` The Presenter part of your former path is implicit in the DataContext (and the ElementName source is no longer needed). This way the DataContext is watching for the PropertyChanged events and will update the view properly when Portfolio changes from null to having a value. In answer to the final part of your question, binding to a property of a property does work.
Maybe <http://msdn.microsoft.com/en-us/library/system.windows.data.bindingoperations.getbindingexpression.aspx> and <http://msdn.microsoft.com/en-us/library/system.windows.data.bindingexpression.updatetarget.aspx> help
Refreshing a wpf databinding after object has been updated
[ "c#", "wpf", "data-binding", "xaml" ]
Both these joins will give me the same results: ``` SELECT * FROM table JOIN otherTable ON table.ID = otherTable.FK ``` vs ``` SELECT * FROM table INNER JOIN otherTable ON table.ID = otherTable.FK ``` Is there any difference between the statements in performance or otherwise? Does it differ between different **SQL** implementations?
They are functionally equivalent, but `INNER JOIN` can be a bit clearer to read, especially if the query has other join types (i.e. `LEFT` or `RIGHT` or `CROSS`) included in it.
No, there is no difference, pure [**syntactic sugar**](http://en.wikipedia.org/wiki/Syntax_sugar).
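The claim is easy to check mechanically; a quick sketch using Python's built-in SQLite driver (table and column names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER);
    CREATE TABLE other (fk INTEGER);
    INSERT INTO t VALUES (1), (2), (3);
    INSERT INTO other VALUES (2), (3), (4);
""")
plain = conn.execute(
    "SELECT * FROM t JOIN other ON t.id = other.fk ORDER BY id").fetchall()
inner = conn.execute(
    "SELECT * FROM t INNER JOIN other ON t.id = other.fk ORDER BY id").fetchall()
print(plain == inner)  # True: both forms return exactly the same rows
```

Only the matching rows (2 and 3) survive either way; the INNER keyword changes nothing but readability.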
What is the difference between JOIN and INNER JOIN?
[ "sql", "sql-server", "join", "inner-join" ]
This prints 83 ``` System.out.println(0123) ``` However this prints 123 ``` System.out.println(123) ``` Why does it work that way?
A leading zero denotes that the literal is expressed using [octal (a base-8 number)](https://en.wikipedia.org/wiki/Octal). 0123 can be converted by doing `(1 * 8 * 8) + (2 * 8) + (3)`, which equals 83 in decimal. For some reason, octal floats are not available. Just don't use the leading zero if you don't intend the literal to be expressed in octal. There is also a `0x` prefix which denotes that the literal is expressed in hexadecimal (base 16).
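The same base-8 arithmetic can be checked in a couple of lines; here in Python, which spells octal literals with an `0o` prefix instead of Java's bare leading zero:

```python
# 0123 in octal: (1 * 8 * 8) + (2 * 8) + 3
print(int("0123", 8))   # 83
print(0o123)            # 83, Python's octal literal syntax
print(0x123)            # 291, hexadecimal works the same way as in Java
```

Python 3 deliberately dropped the bare-leading-zero form precisely because of the surprise described in this question.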
Because integer literals starting with `0` are treated as octal numbers. See section 3.10.1 of the [JLS](https://docs.oracle.com/javase/specs/jls/se9/html/jls-3.html#jls-3.10.1)
Why are integer literals with leading zeroes interpreted strangely?
[ "java" ]
While developing for multiple sets of browsers, what issues have you faced during development due to differences in browser implementation? To start with, I am listing some of those which I faced: * A text node in Firefox allows only 4K of data. So an XML Ajax response gets split up into multiple text child nodes instead of only one node. It's fine in Internet Explorer. For Firefox, to get the full data you either need to use node.normalize before you call node.firstChild or use node.textContent, both of which are Mozilla-specific methods * Internet Explorer does not replace `&nbsp;` or HTML char code 160; you need to replace its Unicode equivalent \u00a0 * In Firefox a dynamically created input field inside a form (created using document.createElement) does not pass its value on form submit. * document.getElementById in Internet Explorer will return an element even if only the element name matches. Mozilla only returns an element if the id matches. * In Internet Explorer, if a select box has a value not represented by any of the options, it will display blank, while Firefox displays the first option.
Most of the problems I have are with IE, specifically IE6. Problems I personally deal with that have left a memorable impression (in no particular order): * Having to use frameworks to do basic things because each browser implements the DOM a little differently. This is especially heinous with IE and AJAX, which necessitates multiple if-blocks just to get the call started. In an ideal world I'd be able to work in JavaScript without the framework to do basic things. * onChange on selects in IE are implemented wrong, and fire before the select loses focus (which is incorrect). This means you can never use onChange with selects due to IE, since keyboard-only users will be crippled by this implementation issue. * You mentioned it in your post, but it's a huge pain when IE grabs an element by name when using getElementBy**Id**(). * When in an RTL locale (Arabic, Hebrew, etc.), Firefox implements "text-align: right;" incorrectly. If the container overflows for some reason, the text aligns to the right side of the viewable container, rather than the right side of the container itself (even if it makes part of it invisible). * Different browsers have differing levels of pickiness with regards to how you end arrays and objects. For example, Firefox is more than okay with an array looking like this: [ item0, item1, ]". However, this same code will make Opera barf because it hates the trailing comma. IE will make the array a three-item array, with the third item undefined! This is bad code for sure, but there's been dynamically generated javascript I've worked on that was a huge pain to rewrite - would've been nice if this just worked. * Everything having to do with IE's [hasLayout](http://www.satzansatz.de/cssd/onhavinglayout.html). So much awful pain has revolved around this attribute, especially when I didn't know it existed. So many problems fixed by using hacks to add hasLayout. So many more problems created as a result of the hacks. 
* Floats in IE rarely work the way you hope they do. They also tend to be annoying in other browsers, but they at least conform to a particular behavior. ;) * IE adding [extra white space between list items](http://www.brunildo.org/test/IEWlispace.php) has caused me no end of pain, since YUI uses lists to make their menus. (To fully grasp the issue, you have to view that link in IE and another browser side by side.) * I have lots of issues getting text not to wrap in containers in IE. Other browsers listen to "white-space: nowrap" a lot better. This has been a problem with a UI I worked on that has a resizable sidebar; in IE, the sidebar items will start to wrap if you resize it too much. * The lack of many CSS selector types in IE6 means you have to class-up your DOM more than necessary. For example, the lack of +, :hover, :first-child. * Different browsers treat empty text nodes differently. Specifically, when traversing the DOM with Opera, I have to worry about empty text nodes when browsing a node's children. This isn't a problem if you're looking for a particular item, but it is if you're writing code that expects a particular input and the way the browser views that input differs. * In IE6, when you dynamically generate an iframe via javascript, the iframe sometimes doesn't fill its container automatically (even with width and height set to max). I still don't know how to solve this issue, and have been thinking of posting a question about it. * In IE, you can't set overflow CSS on the <tbody> element. This means that scrollable tables (with a concrete <thead> and <tfoot>) are impossible to make in a simple manner. I will probably add more to this list later, since (to me) the worst part of web development are cross-browser issues. Also, I doubt I'll ever edit out the "I will probably add more to this list later", since these problems are endless. :)
The only one that **really** gets to me: * [IE6 is still used by ~18% of the web](http://www.w3schools.com/browsers/browsers_stats.asp) -- that's nearly 1 in 5 -- and addressing its issues is time consuming, hackish, and frustrating. ;) The [issues](http://www.google.com/search?q=the+ie+factor) are really too numerous to list here. If you're interested in the issues themselves, [QuirksMode.org](http://quirksmode.org/) is an amazing resource I used every day before making the leap to client-side libraries. Also check out John Resig's [The DOM is a Mess](http://ejohn.org/blog/the-dom-is-a-mess/) presentation at yahoo, which gives a lot of theory about how to deal with cross-browser topics efficiently. However, if you're interested in simply having them solved, your question is an excellent example of why many consider using client-side libraries like [jQuery](http://www.jquery.com), [YahooUI](http://developer.yahoo.com/yui/), [MooTools](http://mootools.net/), [Dojo](http://www.dojotoolkit.org/), etc. With a thriving community, talented people and corporate backing projects like those allow you to focus on your app rather than these issues. Here are some jQuery examples that avoid much of the cross-browser frustration and can really make all of this.. fun. 
**Cross-browser mouse click binding** ``` $('#select anything + you[want=using] ~ css:selectors').click( function(){ alert('hi'); } ); ``` **Cross-browser HTML Injection** ``` $('#anElementWithThisId').html('<span>anything you want</span>'); ``` **Cross-browser Ajax (all request objects are [still made available](http://docs.jquery.com/Ajax/load) to you)** ``` $('p.message').load('/folder/file.html'); ``` **And what really blows me away, load a data subset with selectors (see [manual](http://docs.jquery.com/Ajax/load) for details)** ``` $('p.message').load('/folder/file.html body p:first-child'); ``` **Now, how all this really starts to get fun: chaining methods together** ``` $('ul.menu a').click( // bind click event to all matched objects function(evt){ // stnd event object is the first parameter evt.preventDefault(); // method is cross-browser thx to jquery $(this) // this = the clicked 'a' tag, like raw js .addClass('selected') // add a 'selected' css class to it .closest('ul.menu') // climb the dom tree back up to the ul .find('a.selected') // find any existing selected dom children .not(this) // filter out this element from matches .removeClass('selected'); // remove 'selected' css class } ) ``` Reminds me of Joel's [Can Your Programming Language Do This?](http://www.joelonsoftware.com/items/2006/08/01.html) article. Taking all this to a theoretical level, true advancement doesn't come from what you can do with conscious thought and effort, but rather what you can do automatically (without thought or effort). Joel has a segment on this in *Smart And Gets Things Done* regarding interviewing questions and smart developers, completely changed my approach to programming. Similar to a pianist who can just 'play' the music because she knows all the keys, your advancement comes not from doing more things that require thought but rather more things that require no thought. The goal then becomes making all the basics easy.. natural.. subconscious.. 
so we can all geek out on our higher level goals. Client side libraries, in a way, help us do just that. ;)
What Cross-Browser issues have you faced?
[ "javascript", "css", "dom", "cross-browser" ]
In ASP.NET, if items that implement IDisposable are left in session state but are never explicitly removed and disposed by the application, will Dispose be called on those objects when the session expires, so that any code in Dispose() executes?
If the `IDisposable` pattern is [implemented properly](http://www.codeproject.com/KB/cs/idisposable.aspx), then yes (i.e. the class's destructor will take care of disposing the object). I don't believe the ASP.NET session manager makes any guarantees about explicitly calling `Dispose()` on classes implementing `IDisposable`. Note that despite Mark's aggressive objections, I am not suggesting "routinely" adding finalizers. I am simply suggesting that if you **want** the `Dispose` method on your object called when the session expires, this is a viable option.
I'd disagree with Sean's answer; firstly, finalizers should **not** be routinely added to classes, even if they are `IDisposable` - finalizers should only really be used in classes that represent unmanaged resources. Conversely, a class with a finalizer often is also `IDisposable`. Re the question: is `Dispose()` called - no, it isn't. The object will be garbage collected at some point in the future (indeterminate), but that is about it. A finalizer wouldn't add much here, as any encapsulated objects will also already be eligible for collection (assuming that they aren't referenced elsewhere).
Session containing items implementing IDisposable
[ "c#", "asp.net", "session", "garbage-collection", "idisposable" ]
Is there any way in NHibernate that I can use the following entities ``` public class Person { public virtual int Id { get; set; } public virtual string Name { get; set; } public virtual IList<Pet> Pets { get; set; } } public class Pet { public virtual int Id { get; set; } public virtual string Name { get; set; } } ``` and not have to create a "special" AddPet method on Person in order to have child pets saved? ``` public void AddPet(Pet p) { p.Person = this; Pets.Add(p); } _session.SaveOrUpdate(person); ``` This does not save the Pets because Pet has no Person reference. I can update Pet to contain this reference: ``` public class Pet { public virtual int Id { get; set; } public virtual string Name { get; set; } public virtual Person Person { get; set; } } ``` But on new pets I still have to set Person; this seems like overkill to me, and also risky, as people can still call ``` person.Pets.Add(new Pet()) ``` The only other option I can think of is a custom list that sets the parent reference when adding child entities.
I modified your example just a bit (in line with many of the suggestions here): ``` public class Person { private IList<Pet> pets; protected Person() {} public Person(string name) { Name = name; pets = new List<Pet>(); } public virtual Guid Id { get; set; } public virtual string Name { get; set; } public virtual IEnumerable<Pet> Pets { get { return pets; } } public virtual void AddPet(Pet pet) { pets.Add(pet); } } public class Pet { protected Pet() {} public Pet(string name) { Name = name; } public virtual Guid Id { get; set; } public virtual string Name { get; set; } } public class PersonMap : ClassMap<Person> { public PersonMap() { Id(x => x.Id).GeneratedBy.GuidComb(); Map(x => x.Name); HasMany(x => x.Pets).Cascade.AllDeleteOrphan().Access.AsLowerCaseField(); } } public class PetMap : ClassMap<Pet> { public PetMap() { Id(x => x.Id).GeneratedBy.GuidComb(); Map(x => x.Name); } } ``` The following test: ``` [Test] public void CanSaveAndRetrievePetAttachedToPerson() { Person person = new Person("Joe"); person.AddPet(new Pet("Fido")); Session.Save(person); Person retrievedPerson = Session.Get<Person>(person.Id); Assert.AreEqual("Fido", retrievedPerson.Pets.First().Name); } ``` passes. Note that this is using Fluent NHibernate for the mapping and the Session.
[C# Reflection & Generics](https://stackoverflow.com/questions/552805/c-reflection-generics) Might be able to use this approach? Won't work for children of children, but it's pretty good for Parent-Children. ``` protected static void SetChildReferences<E>(E parent) { foreach (var prop in typeof(E).GetProperties(BindingFlags.Public | BindingFlags.Instance)) { if (!prop.CanRead) continue; Type listType = null; foreach (Type type in prop.PropertyType.GetInterfaces()) { if (type.IsGenericType && type.GetGenericTypeDefinition() == typeof(ICollection<>)) { listType = type.GetGenericArguments()[0]; break; } } List<PropertyInfo> propsToSet = new List<PropertyInfo>(); foreach (PropertyInfo childProp in (listType ?? prop.PropertyType).GetProperties(BindingFlags.Public | BindingFlags.Instance)) { if (childProp.PropertyType == typeof(E)) propsToSet.Add(childProp); } if (propsToSet.Count == 0) continue; if (listType == null) { object child = prop.GetValue(parent, null); if (child == null) continue; UpdateProperties(propsToSet, child, parent); } else { ICollection collection = (ICollection)prop.GetValue(parent, null); foreach (object child in collection) { if (child == null) continue; UpdateProperties(propsToSet, child, parent); } } } } protected static void UpdateProperties(List<PropertyInfo> properties, object target, object value) { foreach (PropertyInfo property in properties) { property.SetValue(target, value, null); } } ```
Child tables in NHibernate
[ "c#", "nhibernate" ]
What is the difference between ``` void func(const Class *myClass) ``` and ``` void func(Class *const myClass) ``` --- See also: * [C++ const question](https://stackoverflow.com/questions/269882/c-const-question) * [How many and which are the uses of "const" in C++?](https://stackoverflow.com/questions/455518/how-many-and-which-are-the-uses-of-const-in-c) and probably others...
The difference is that for ``` void func(const Class *myClass) ``` you point to a Class object that you cannot change because it is const. But you can modify the myClass pointer itself (make it point to another object; this doesn't have any side effects for the caller because the pointer is copied, so it only changes your local copy of the pointer). In contrast, with ``` void func(Class *const myClass) ``` myClass points to a Class object that can be modified, while you cannot change the pointer parameter itself.
In the first one you're declaring a function that accepts a pointer to a constant Class object. You cannot modify the object inside the function. In the second one you're declaring a function that accepts a constant pointer to a non-constant Class object. You can modify the object through the pointer, but cannot modify the pointer value itself. **I always keep in mind this easy rule: `const` always applies to the thing immediately to its left; if there is no such thing, it applies to the thing immediately to its right.** Also take a look at [this](https://stackoverflow.com/questions/455518/how-many-and-which-are-the-uses-of-const-in-c) question which I asked a week ago; it points to some very useful links for understanding const correctness.
Difference between const declarations in C++
[ "c++", "syntax", "pointers", "constants", "parameters" ]
Say I have a Page "foo" in WordPress such that http://www.blah.com/foo brings it up. Say this page has a trigger within its content such as <!-- foo --> which is being tracked by my plugin through the add\_filter('the\_content', ..) filter. So, all calls to http://www.blah.com/foo and <http://www.blah.com/foo?var=bar> are picked up by the plugin and dealt with. Now is there a way by which the plugin could let WordPress know that it wants to handle all URLs of the type <http://www.blah.com/foo/bar> <http://www.blah.com/foo/bar/morefoo> etc.. "without" having to create pages/subpages for each of these, as these URLs would be created dynamically? Also, is there any way, besides using the add\_filter hook on the\_content within a page, that one can grab control from within a plugin, ideally keyed off the URL, so all calls to <http://www.blah.com/foo> are handled by the plugin?
Answering my own q: The pointers in the answers above were useful and got me going on the right track, but I kept hitting a snag in that whenever the url in question was invoked it kept calling index.php. I then came across <http://mikeschinkel.com/blog/restful-web-services-in-wordpress-plugin/> where he goes into an explanation and provides an answer that requires a template\_redirect action to ensure that control is where you want it.
Yes, you can grab URL references (aka queries) and parse them, without creating subpages. The sequence of events is described in the [Query Overview page](http://codex.wordpress.org/Query_Overview). You want to look at hooking into the `parse_request` action in particular.
wordpress url resolution of subpages from within plugin
[ "php", "wordpress", "url", "plugins" ]
I am writing a simple proxy in Java. I am having trouble reading the entirety of a given request into a byte array. Specifically, in the following loop, the call to 'read' blocks even though the client has sent all the data that it will (that is, the end of stream is never reached). As I can't be sure that it is time to start writing output until I've read the entirety of the input, this is causing a bit of trouble. If I kill the connection to the server, the end of stream is finally reached, and everything goes off without a hitch (all of the data from the client, in this case Firefox requesting www.google.com, has been read by the server, and it is able to process it as required, though obviously it can't send anything back to the client). ``` public static void copyStream(InputStream is, OutputStream os) throws IOException { int read = 0; byte[] buffer = new byte[BUFFER_SIZE]; while((read = is.read(buffer, 0, BUFFER_SIZE)) != -1) { os.write(buffer, 0, read); } return; } ``` The InputStream comes from the client socket (getInputStream(), then buffered) directly; the OutputStream is a ByteArrayOutputStream. What am I doing wrong?
Typically in HTTP the `Content-Length` header indicates how much data you're supposed to read from the stream. Basically it tells you how many bytes follow the double-newline (actually double-`\r\n`) that indicates the end of the HTTP headers. See [W3C](http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.13) for more info... If there is no `Content-Length` header sent, you could try interrupting the read after a certain amount of time passes with no data sent over the connection, although that's definitely not preferable. (I'm assuming that you're going to be processing the data you're reading somehow, otherwise you could just write out each byte as you read it)
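A minimal sketch of that idea in Java (a hypothetical `BoundedHttpReader` helper, assuming plain HTTP/1.x with a `Content-Length` header and no chunked transfer encoding): read the header block up to the blank `\r\n\r\n` line, parse `Content-Length`, then read exactly that many body bytes instead of waiting for an end-of-stream that never comes on a keep-alive connection.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class BoundedHttpReader {
    // Reads one HTTP message: headers up to the blank line, then exactly
    // Content-Length body bytes (0 if the header is absent).
    public static byte[] readMessage(InputStream in) throws IOException {
        ByteArrayOutputStream headers = new ByteArrayOutputStream();
        int b, state = 0; // tracks progress through the \r\n\r\n terminator
        while (state < 4 && (b = in.read()) != -1) {
            headers.write(b);
            if (b == '\r' && (state == 0 || state == 2)) state++;
            else if (b == '\n' && (state == 1 || state == 3)) state++;
            else state = 0;
        }
        String head = headers.toString(StandardCharsets.ISO_8859_1.name());
        int contentLength = 0;
        for (String line : head.split("\r\n")) {
            if (line.toLowerCase().startsWith("content-length:")) {
                contentLength = Integer.parseInt(line.substring(15).trim());
            }
        }
        ByteArrayOutputStream message = new ByteArrayOutputStream();
        message.write(headers.toByteArray());
        byte[] buf = new byte[4096];
        int remaining = contentLength;
        while (remaining > 0) {
            int read = in.read(buf, 0, Math.min(buf.length, remaining));
            if (read == -1) throw new IOException("Stream ended before Content-Length bytes");
            message.write(buf, 0, read);
            remaining -= read;
        }
        return message.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        String request = "POST / HTTP/1.1\r\nContent-Length: 5\r\n\r\nhelloEXTRA";
        byte[] msg = readMessage(new ByteArrayInputStream(
                request.getBytes(StandardCharsets.ISO_8859_1)));
        System.out.println(msg.length); // headers + exactly 5 body bytes
    }
}
```

Note this deliberately stops after the declared body length, so the trailing `EXTRA` bytes (the start of a pipelined next request, say) are left on the stream.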
HTTP 1.1, supported by all modern browsers, has a feature called "keep-alive", or "persistent connections", in which clients are allowed by default to reuse an HTTP 1.1 connection to a server for several requests (see <http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html>). So if you are pointing FF to <http://www.google.com>, the connection to www.google.com:80 will remain open for a while, even if the first request has been completed. You thus cannot know if all the data has been sent without a basic understanding of the HTTP protocol in your application. You can somewhat circumvent that by using a timeout on the connection, hoping the client is not stuck somewhere and that silence actually means the end of the data block. Another way would be to rewrite server response headers, to advertise your proxy as HTTP 1.0 compliant, and not 1.1, thus forbidding the client from using persistent connections.
Why is the end of the input stream never reached using Java Sockets?
[ "java", "sockets", "stream" ]
I have a 'suite' of VS2005 unit tests that attach a db as part of the initialization. Tests modify the db fairly substantially, so it needs to be reverted to a known state before each test run. I deploy the test db to the 'out' folder of each TestResult and attach it in the MyClassInitialize method. The DB is fairly large, so this uses up lots of space as more and more TestResults are created. Is there any way from within Visual Studio to limit the maximum number of test results stored? i.e. store a max of 5 and delete the oldest when it hits 5? Regards, Matt
This works from VS2010 (from MSDN). To limit the number of stored test runs: 1. In Visual Studio, click Options on the Tools menu. The Options dialog box appears. 2. Expand Test Tools and click Test Execution. 3. Under Test Results Management, select the number of test runs to keep. 4. Click OK.
One could create a method that runs on test initialization that browses to the proper location and just uses file manipulation to delete a certain number; however, the method you're suggesting might cause issues if multiple users attempt to run the unit tests at the same time (depending on how your test environment is set up). The method I generally use is to create a copy of the database in test initialization, then on test teardown to delete the copy that's being used; we generally append a GUID to the database name to ensure uniqueness. The biggest issue with this method is that if you abort the tests during debug, the database never gets deleted.
Limiting number of test run results in Visual Studio 2005
[ "c#", ".net", "visual-studio", "unit-testing" ]
Does anybody know a usable MVC/MVP framework for enterprise WinForms applications? There used to be the [User Interface Process Application Block for .NET](http://msdn.microsoft.com/en-us/library/ms998252.aspx), but it is no longer under development since [Windows Workflow Foundation](http://msdn.microsoft.com/en-us/library/ms735967.aspx) was released (which will also be completely rewritten with .NET 4.0). Maybe I am not up to date, but it seems to me like there is a gap at the moment.
Microsoft has the Composite Application Block (CAB) for use with WinForms applications. While not technically an MVC/MVP implementation, it does provide nice separation between UI code and non-UI code: <http://msdn.microsoft.com/en-us/library/aa480450.aspx> It is also fairly popular to just do it yourself without using any frameworks as long as you are comfortable with the relevant patterns. Jeremy Miller has an excellent series of articles on some best practices when taking this approach: [Build your own CAB Part #1 - The Preamble](http://codebetter.com/blogs/jeremy.miller/archive/2007/05/21/build-your-own-cab-part-1-the-preamble.aspx) [Build your own CAB Part #2 - The Humble Dialog Box](http://codebetter.com/blogs/jeremy.miller/archive/2007/05/23/build-your-own-cab-part-2-the-humble-dialog-box.aspx) [Build your own CAB Part #3 - The Supervising Controller Pattern](http://codebetter.com/blogs/jeremy.miller/archive/2007/05/25/build-you-own-cab-part-3-the-supervising-controller-pattern.aspx) [Build your own CAB Part #4 - The Passive View](http://codebetter.com/blogs/jeremy.miller/archive/2007/05/30/build-your-own-cab-part-4-the-passive-view.aspx) [Build your own CAB Part #5 - The Presentation Model](http://codebetter.com/blogs/jeremy.miller/archive/2007/06/03/build-your-own-cab-part-5-the-presentation-model.aspx) [Build your own CAB Part #6 - View to Presenter Communication](http://codebetter.com/blogs/jeremy.miller/archive/2007/06/04/build-your-own-cab-part-6-view-to-presenter-communication.aspx) [Build your own CAB - Answering some questions](http://codebetter.com/blogs/jeremy.miller/archive/2007/06/05/build-your-own-cab-answering-some-questions.aspx) [Build your own CAB Part #7 - Whats the Model?](http://codebetter.com/blogs/jeremy.miller/archive/2007/06/05/build-your-own-cab-part-7-what-s-the-model.aspx) [Build your own CAB Part #8 - Assigning Responsibilities in a Model View Presenter 
Architecture](http://codebetter.com/blogs/jeremy.miller/archive/2007/06/10/build-your-own-cab-part-8-model-view-presenter-wrapup.aspx) [Build your own CAB Part #9 - Domain Centric Validation with the Notification Pattern](http://codebetter.com/blogs/jeremy.miller/archive/2007/06/13/build-your-own-cab-part-9-domain-centric-validation-with-the-notification-pattern.aspx) [Build your own CAB Part #10 - Unit Testing the UI with NUnitForms](http://codebetter.com/blogs/jeremy.miller/archive/2007/06/26/build-your-own-cab-part-10-unit-testing-the-ui-with-nunitforms.aspx) [Build your own CAB Part #11 - Event Aggregator](http://codebetter.com/blogs/jeremy.miller/archive/2007/06/29/build-your-own-cab-11-event-aggregator.aspx) [Build your own CAB Part #12 - Rein in runaway events with the "Latch"](http://codebetter.com/blogs/jeremy.miller/archive/2007/07/02/build-your-own-cab-12-rein-in-runaway-events-with-the-quot-latch-quot.aspx) [Build your own CAB Part #13 - Embedded Controllers with a Dash of DSL](http://codebetter.com/blogs/jeremy.miller/archive/2007/07/06/build-your-own-cab-13-embedded-controllers-with-a-dash-of-dsl.aspx)
Have you tried [MVC#](http://www.mvcsharp.org)? I started using it a while back, but the project fell by the wayside, so I can't vouch for it too much - sorry! **EDIT:** I just found [this](http://www.c-sharpcorner.com/UploadFile/rmcochran/PassiveView01262008091652AM/PassiveView.aspx) article which looks pretty good too.
MVC/MVP framework for .NET WinForms applications
[ "c#", ".net", "windows", "winforms", "frameworks" ]
What are some good desktop application projects I could code up that would be good practice? I have gone through the .NET Framework namespace structure pinpointing some of the namespaces and classes I have not used, so that I could come up with a project or two to create using them. However, I am having trouble thinking of projects that have real-world use. I would like to hear ideas for projects that cover these criteria: 1. Use many of the namespaces in the .NET Framework 2. Be "real": I find it hard to get into something if it doesn't have real practical use 3. It would be ideal if the sum of all submitted projects covered literally every namespace that is relevant to desktop applications (I aim high) The aim here is to expose myself to much of the .NET framework for the purpose of growing as a developer. Thank you.
A couple off the top of my head: an RSS aggregator that loads feeds and saves them to a database. Then add some searching and filtering to it and you should move into a few different namespaces. Off the top of my head: System, System.Net, System.Xml, System.Data, System.Linq, System.Collections.Generic, System.Threading, System.Configuration (if you want to use configuration files), System.Printing. A project I've spent some time on is an IDictionary-based object that is a configuration tool I can use to store configuration settings to a database. Every time I get to the "halfway" mark I end up discovering more functionality that I need and add to it. Pretty limited in its namespace usage, but it is a challenging exercise that you could find a lot of use for in the long run.
It's odd to worry about how much of the framework's API you would use as a criterion for choosing what application to develop. However, if you must have an answer, perhaps you could consider creating a sort of Outlook-like application where you would use: * System: This is a no-brainer. * System.Drawing: Custom drawing of controls, calendar drawing. * System.Diagnostics: Debugging and code profiling stuff. * System.Globalization, System.Resources: For internationalization. * System.IO: Where you need to retain data such as e-mail, notes etc. * System.Security: Encryption of sensitive personal e-mail, appointments etc. * System.Media: Built-in simple media player for playing back audio-based attachments? * System.Net: To retrieve e-mails, what else. * System.Text: Text processing; you almost always need this. * System.Threading: You can seldom run away from threads in most real-world applications. * System.Timers: There's probably a check-your-e-mail-every-x-minutes somewhere... * System.Configuration, System.Data: You probably need to persist application configuration, partly using the System.Configuration namespace, and some of it in a local SQL Server. * System.Collections: Some data structures that you almost certainly need to use. * System.Management: Show a CPU resources status or disk space indicator in your About window. Also, parts of it could be developed in WPF; say you have a feature where you generate charts for people to visualize their daily e-mail/appointment activities. You may also get to cover Windows CardSpace because it surely will involve management of identities.
What are some good sample desktop application projects I could do in C# .NET
[ "c#", ".net" ]
We are using a badly written Windows service, which will hang when we try to stop it from code. So we need to find which process is related to that service and kill it. Any suggestions?
WMI has this information: the Win32\_Service class. A WQL query like ``` SELECT ProcessId FROM Win32_Service WHERE Name='MyServiceName' ``` using System.Management should do the trick. For a quick look, see `tasklist.exe /svc` and other tools from the command line.
You can use `System.Management.ManagementObjectSearcher` to get the process ID of a service and `System.Diagnostics.Process` to get the corresponding `Process` instance and kill it. The `KillService()` method in the following program shows how to do this: ``` using System; using System.ComponentModel; using System.Diagnostics; using System.Management; namespace KillProcessApp { class Program { static void Main(string[] args) { KillService("YourServiceName"); } static void KillService(string serviceName) { string query = string.Format( "SELECT ProcessId FROM Win32_Service WHERE Name='{0}'", serviceName); ManagementObjectSearcher searcher = new ManagementObjectSearcher(query); foreach (ManagementObject obj in searcher.Get()) { uint processId = (uint) obj["ProcessId"]; Process process = null; try { process = Process.GetProcessById((int)processId); } catch (ArgumentException) { // Thrown if the process specified by processId // is no longer running. } try { if (process != null) { process.Kill(); } } catch (Win32Exception) { // Thrown if process is already terminating, // the process is a Win16 exe or the process // could not be terminated. } catch (InvalidOperationException) { // Thrown if the process has already terminated. } } } } } ```
Finding out Windows service's running process name .NET 1.1
[ "c#", "windows-services", "process", ".net-1.1" ]
I mainly develop in native C++ on Windows using Visual Studio. A lot of times, I find myself creating a new function/class or whatever, and I just want to test that piece of logic I just wrote, quickly. A lot of times, I have to run the entire application, which sometimes could take a while since there are many connected parts. Is there some sort of tool that will allow me to test that new piece of code quickly *without* having to run the whole application? i.e. Say I have a project with about 1000 files, and I'm adding a new class called Adder. Adder has a method Add( int, int ); I just want the IDE/tool to allow me to test just the Adder class (without me having to create a new project and write a dummy main.cpp) by allowing me to specify the value of the inputs going into the Adder object. Likewise, it would be nice if it would allow me to specify the expected output from the tested object. What would be even cooler is if the IDE/tool would then "record" these sets of inputs/expected output, and automatically create a unit tester class based on them. If I added more input/output sets, it would keep building a history of input/outputs. Or how about this: what if I started the actual application, fed some real data to it, and had the IDE/tool capture the complete inputs going into the unit being tested? That way, I can quickly restart my testing if I found some bugs in my program or I want to change its interface a bit. I think this feature would be so neat, and could help developers quickly test/modify their code. Am I talking about mock object / unit testing that already exists? Sidenote: it would be cool if the Visual Studio debugger had a "replay" technology where the user can step back to find what went wrong. Such a debugger already exists here: <http://www.totalviewtech.com/>
It's very easy to get started with static unit testing in C++ - [three lines of code](http://www.jera.com/techinfo/jtns/jtn002.html). VS is a bit poor in that you have to go through wizards to make a project to build and run the tests, so if you have a thousand classes you'd need a thousand projects. So for large projects on VS I've tended to organised the project into a few DLLs for independent building and testing rather than monolithic ones. An alternative to static tests more similar to your 'poke and dribble' script could be done in python, using [swig](http://www.swig.org/) to bind your code to the interpreter, and python's [doc tests](http://docs.python.org/library/doctest.html) . I haven't used both together myself. Again, you'd need a separate target to build the python binding, and another to run the tests, rather than it being just a simple 'run this class' button.
I think you are talking about unit testing and mock objects. Here are couple of C++ mock object libraries that might be useful :- * [googlemock](http://code.google.com/p/googlemock/) which only works with [googletest](http://code.google.com/p/googletest/) * [mockpp](http://mockpp.sourceforge.net/)
Can any IDE or framework help test new code quickly without having to run the whole application
[ "c++", "testing" ]
I am currently calling a web service that returns a service defined class which I am interpreting in my application. I'm considering asking the vendor of this web service to add a property to this class which will make my life as well as their other clients lives a lot easier. To be clear I'm not asking them to modify existing behaviour or properties, so this would extend existing functionality. My question is, if they add this property to the class, will it adversely affect existing clients' applications?
It can potentially be a problem, yes: * the old client may barf when receiving the unexpected property from the server * the server may barf when not receiving the expected property from old clients It can also work... it just needs testing / planning. A safer option (if you have complex deployment that can't all go at once) is to consider the API sealed and add a new "v2" end-point etc.
Yes, it could mean you get serialization errors if you haven't updated your proxies. It's better to version the service interface, even if that means supporting multiple versions at once.
Can I modify/extend a production web service without affecting existing clients?
[ "c#", ".net", "asp.net", "vb.net", "web-services" ]
I'm learning C++ and the book I'm reading (The C++ Programming Language) says not to reinvent the wheel and to rely on the standard libraries. In C, I often end up creating a linked list, and linked-list iteration, over and over again (maybe I'm doing that wrong, not sure), so the ideas of containers, strings, and algorithms available in C++ really appeal to me. However, I have read a little online and heard some criticisms of the STL from my friends and coworkers, so I thought maybe I'd pick some brains here. What are some best practices for using the STL, and what lessons have you learned about it?
There is a companion book to the Effective C++ series called "[Effective STL](http://www.amazon.co.uk/Effective-STL-Specific-Professional-Computing/dp/0201749629)". It's a good starting point for learning about best practices using the Standard C++ library (née STL).
You might want to pick up a copy of "Effective C++: 50 Specific Ways to Improve Your Programs and Design (2nd Edition)": [http://www.amazon.com/Effective-Specific-Addison-Wesley-Professional-Computing/dp/0201924889](https://rads.stackoverflow.com/amzn/click/com/0201924889) I've found it to be invaluable, and it's still very relevant today, even if you aren't programming in C++.
What are some C++ Standard Library usage best practices?
[ "c++", "stl" ]
I need to redirect to a URL passing a parameter as a query string. This can include an ampersand in the value, such as ``` string value = "This & That"; Response.Redirect("http://www.example.com/?Value=" + Server.UrlEncode(value)); ``` This however returns <http://www.example.com/?Value=This+&+That> What should I be using to encode this string? EDIT: Thanks Luke for pointing out the obvious; the code does indeed work correctly. I apologise, my question was not a valid question after all! The page I was going to had a lot of old legacy code which is apparently doing some kind of encoding and decoding itself, making it appear as if my UrlEncode was not working. My solution unfortunately is to completely drop use of an & until the code in question can be rewritten. Don't you just hate old code!
The [documentation](http://msdn.microsoft.com/en-us/library/zttxte6w.aspx) suggests that `Server.UrlEncode` should handle ampersands correctly. I've just tested your exact code and the returned string was correctly encoded: > <http://www.example.com/?Value=This+%26+That>
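The same application/x-www-form-urlencoded rules apply outside .NET too; for comparison, Java's standard `java.net.URLEncoder` produces the same `This+%26+That` result for that input (space becomes `+`, the ampersand becomes `%26`):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class EncodeDemo {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // application/x-www-form-urlencoded: space -> '+', '&' -> %26
        String encoded = URLEncoder.encode("This & That", "UTF-8");
        System.out.println(encoded); // This+%26+That
    }
}
```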
Technically doing: ``` value = value.Replace("&", "%26") ``` will do the trick. EDIT: There seem to be some tricky issues with the whole UrlEncode/HttpEncode methods that don't quite do the trick. I wrote up a simple method a while back that may come in handy. This should cover all the major encoding issues, and its easy to write a "desanitizer" as well. ``` Protected Function SanitizeURLString(ByVal RawURLParameter As String) As String Dim Results As String Results = RawURLParameter Results = Results.Replace("%", "%25") Results = Results.Replace("<", "%3C") Results = Results.Replace(">", "%3E") Results = Results.Replace("#", "%23") Results = Results.Replace("{", "%7B") Results = Results.Replace("}", "%7D") Results = Results.Replace("|", "%7C") Results = Results.Replace("\", "%5C") Results = Results.Replace("^", "%5E") Results = Results.Replace("~", "%7E") Results = Results.Replace("[", "%5B") Results = Results.Replace("]", "%5D") Results = Results.Replace("`", "%60") Results = Results.Replace(";", "%3B") Results = Results.Replace("/", "%2F") Results = Results.Replace("?", "%3F") Results = Results.Replace(":", "%3A") Results = Results.Replace("@", "%40") Results = Results.Replace("=", "%3D") Results = Results.Replace("&", "%26") Results = Results.Replace("$", "%24") Return Results End Function ```
ASP.Net URLEncode Ampersand for use in Query String
[ "c#", "asp.net", "query-string" ]
I made a Windows service and set it to start automatically under the LocalSystem account. When the service starts, it fires this message for me and then stops > The [service name] service on local computer started and then stopped. Some services stop automatically if they are not in use by other services or programs. What's the problem and what's the solution?
Either you are not starting any threads in the OnStart method to do work, or there is an exception raised within your OnStart method. If an exception is thrown, it will appear in the Windows event log. The Windows event log is a good place to start in any case. Generally an OnStart method looks like this: ``` Thread _thread; protected override void OnStart(string[] args) { // Comment in to debug // Debugger.Break(); // Do initial setup and initialization Setup(); // Kick off a thread to do work _thread = new Thread(new MyClass().MyMethod); _thread.Start(); // Exit this method to indicate the service has started } ```
This particular error message means what it says - that your service **has started** but then **quite soon it exited** for some reason. The good news is that your service is actually doing *something*, so you have the executable configured and running as a service properly. Once started, for some reason it is quitting. You need to find out why this is. Add some debugging to tell you it's up and running, and cover the known exit cases. If that doesn't reveal the problem, then add some debugging to let you know it's still running and work backwards from when that stops.
Windows service stops automatically
[ "c#", ".net", "windows-services" ]
In a normal loop you can break out using break. Can the same be done using an anonymous delegate? Example: inputString and result are both declared outside the delegate. ``` blackList.ForEach(new Action<string>( delegate(string item) { if(inputString.Contains(item)==true) { result = true; // I want to break here } } )); ``` Edit: Thanks for the replies, I'm actually reading your book at the minute, John :) Just for the record, I hit this issue and switched back to a normal foreach loop, but I posted this question to see if I missed something.
As others have posted, you can't exit the loop in ForEach. Are you able to use LINQ? If so, you could easily combine TakeWhile and a custom ForEach extension method (which just about every project seems to have these days). In your example, however, [`List<T>.FindIndex`](http://msdn.microsoft.com/en-us/library/x1xzf2ca.aspx) would be the best alternative - but if you're not actually doing that, please post an example of what you *really* want to do.
There is no loop that one has access to, from which to break. And each call to the (anonymous) delegate is a new function call so local variables will not help. But since C# gives you a closure, you can set a flag and then do nothing in further calls: ``` bool stop = false; myList.ForEach((a) => { if (stop) { return; } else if (a.SomeCondition()) { stop = true; } }); ``` (This needs to be tested to check if correct reference semantics for closure is generated.) A more advanced approach would be to create your own extension method that allowed the delegate to return false to stop the loop: ``` static class MyExtensions { static void ForEachStoppable<T>(this IEnumerable<T> input, Func<T, bool> action) { foreach (T t in input) { if (!action(t)) { break; } } } } ```
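For comparison, the same two ideas sketched in Java terms (a hypothetical `forEachStoppable` helper, not a framework API): a captured flag playing the role of the closure variable, and a loop that stops as soon as the callback returns false.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class StoppableForEach {
    // Runs action on each element until it returns false, then stops the loop.
    static <T> void forEachStoppable(Iterable<T> input, Predicate<T> action) {
        for (T t : input) {
            if (!action.test(t)) break;
        }
    }

    public static void main(String[] args) {
        List<String> blackList = Arrays.asList("foo", "bar", "baz");
        String inputString = "xx bar xx";
        boolean[] result = {false}; // captured "flag", like the closure above
        forEachStoppable(blackList, item -> {
            if (inputString.contains(item)) {
                result[0] = true;
                return false; // stop iterating, like `break`
            }
            return true; // keep going
        });
        System.out.println(result[0]); // true
    }
}
```

The `return false` convention avoids the wasted calls that the plain flag approach still makes after a match is found.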
How do i exit a List<string>.ForEach loop when using an anonymous delegate?
[ "c#", "loops", "delegates" ]
I'm currently trying to tune my Eclipse installation and bumped into the "-vm" option. Other posts on SO mentioned that it's good to always use the latest JVM available because they keep getting better in terms of performance, so I'm likely to do that. I was wondering though how you could find out what JVM Eclipse runs on if you are not specifying the "-vm" parameter. I found the following in the docs but that doesn't clarify how it will look for a JVM: > when passed to the Eclipse executable, this option is used to locate the Java VM to use to run Eclipse. It must be the full file system path to an appropriate Java executable. **If not specified, the Eclipse executable uses a search algorithm to locate a suitable VM**. In any event, the executable then passes the path to the actual VM used to Java Main using the -vm argument. Java Main then stores this value in eclipse.vm.
Actually, Windows->Preferences->Java->installed doesn't show you the one Eclipse runs under, but only the default JRE that it runs things under. To see the one Eclipse is runing under, go to Help->About Eclipse Platform->Configuration Details, and look for the property eclipse.vm. For example: ``` eclipse.vm=C:\Program Files\Java\jre6\bin\client\jvm.dll ```
According to [this thread](http://dev.eclipse.org/newslists/news.eclipse.webtools/msg00089.html) and [this one](http://forums.sun.com/thread.jspa?threadID=645735): eclipse will try to locate your JVM used for **launching** itself **by examining your path.** So, you can: > * Create a JAVA\_HOME environment variable, that points to your actually preferred JDK. > This way you can easily choose between JDKs installed. > * Put %JAVA\_HOME%\bin in the path, anywhere preceeding `<windowsroot>\system32`, which also > contains java, javaw & co binaries. > This way the java\* in the JDK bin directory is invoked whenever it is invoked from the path. I believe the other posts tell you how to find what JVM eclipse is using. However, I interpreted your question as: "what JVM will eclipse *will use* when launching itself ?" **when no `-vm` args is used**. --- After further research, it appears that eclipse will look for a JRE: * either directly under its installation directory (`<eclipse>\jre`): if you copy the content of any installed JRE in that sub-directory, it will be selected * in `C:\Program Files\Java\JRE...`. * in the `%PATH%` (see above) Also read this [SO answer](https://stackoverflow.com/questions/316265/tricks-to-speed-up-eclipse/316535#316535) to see why eclipse will by default select jvm.dll instead of javaw.exe
Find out what JVM Eclipse is running on
[ "java", "eclipse", "jvm" ]
The following function of mine tries to find the difference between two strings, but it's horribly slow as it iterates over the length of the string: ``` #include <string> #include <vector> #include <iostream> using namespace std; int hd(string s1, string s2) { // hd stands for "Hamming Distance" int dif = 0; for (unsigned i = 0; i < s1.size(); i++ ) { string b1 = s1.substr(i,1); string b2 = s2.substr(i,1); if (b1 != b2) { dif++; } } return dif; } int main() { string string1 = "AAAAA"; string string2 = "ATATT"; string string3 = "AAAAA"; int theHD12 = hd(string1,string2); cout << theHD12 << endl; int theHD13 = hd(string1,string3); cout << theHD13 << endl; } ``` Is there a fast alternative to do that? In Perl we can have the following approach: ``` sub hd { return ($_[0] ^ $_[1]) =~ tr/\001-\255//; } ``` which is much faster than iterating over the positions. I wonder what's the equivalent of it in C++?
Fun with the STL: ``` #include <numeric> //inner_product #include <functional> //plus, equal_to, not2 #include <string> #include <stdexcept> unsigned int hd(const std::string& s1, const std::string& s2) { // TODO: What should we do if s1.size() != s2.size()? if (s1.size() != s2.size()){ throw std::invalid_argument( "Strings passed to hd() must have the same length" ); } return std::inner_product( s1.begin(), s1.end(), s2.begin(), 0, std::plus<unsigned int>(), std::not2(std::equal_to<std::string::value_type>()) ); } ```
Try replacing the for loop with: ``` for (unsigned i = 0; i < s1.size(); i++ ) { if (s1[i] != s2[i]) { dif++; } } ``` This should be a lot faster because no new strings are created.
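As a quick cross-check of the indexed-compare approach, here is a hypothetical Java port of the `hd` function; it gives 3 and 0 for the string pairs in the question's `main`, matching the original program's output.

```java
public class Hamming {
    // Hamming distance: number of positions at which the characters differ.
    static int hd(String s1, String s2) {
        if (s1.length() != s2.length())
            throw new IllegalArgumentException("strings must have the same length");
        int dif = 0;
        for (int i = 0; i < s1.length(); i++) {
            if (s1.charAt(i) != s2.charAt(i)) dif++;
        }
        return dif;
    }

    public static void main(String[] args) {
        System.out.println(hd("AAAAA", "ATATT")); // 3
        System.out.println(hd("AAAAA", "AAAAA")); // 0
    }
}
```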
Bit Operation For Finding String Difference
[ "c++", "string" ]
I wrote the two methods below to automatically select N distinct colors. It works by defining a piecewise linear function on the RGB cube. The benefit of this is you can also get a progressive scale if that's what you want, but when N gets large the colors can start to look similar. I can also imagine evenly subdividing the RGB cube into a lattice and then drawing points. Does anyone know any other methods? I'm ruling out defining a list and then just cycling through it. I should also say I don't generally care if they clash or don't look nice, they just have to be visually distinct. ``` public static List<Color> pick(int num) { List<Color> colors = new ArrayList<Color>(); if (num < 2) return colors; float dx = 1.0f / (float) (num - 1); for (int i = 0; i < num; i++) { colors.add(get(i * dx)); } return colors; } public static Color get(float x) { float r = 0.0f; float g = 0.0f; float b = 1.0f; if (x >= 0.0f && x < 0.2f) { x = x / 0.2f; r = 0.0f; g = x; b = 1.0f; } else if (x >= 0.2f && x < 0.4f) { x = (x - 0.2f) / 0.2f; r = 0.0f; g = 1.0f; b = 1.0f - x; } else if (x >= 0.4f && x < 0.6f) { x = (x - 0.4f) / 0.2f; r = x; g = 1.0f; b = 0.0f; } else if (x >= 0.6f && x < 0.8f) { x = (x - 0.6f) / 0.2f; r = 1.0f; g = 1.0f - x; b = 0.0f; } else if (x >= 0.8f && x <= 1.0f) { x = (x - 0.8f) / 0.2f; r = 1.0f; g = 0.0f; b = x; } return new Color(r, g, b); } ```
You can use the [HSL color model](http://en.wikipedia.org/wiki/HSL_and_HSV) to create your colors. If all you want is differing hues (likely), and slight variations on lightness or saturation, you can distribute the hues like so: ``` // assumes hue [0, 360), saturation [0, 100), lightness [0, 100) for(i = 0; i < 360; i += 360 / num_colors) { HSLColor c; c.hue = i; c.saturation = 90 + randf() * 10; c.lightness = 50 + randf() * 10; addColor(c); } ```
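In Java specifically (the language of the question's code), the hue-spacing idea needs no hand-written conversion: `java.awt.Color.getHSBColor` maps evenly spaced hues straight to RGB. A minimal sketch, with the saturation/brightness values 0.9f chosen arbitrarily:

```java
import java.awt.Color;
import java.util.ArrayList;
import java.util.List;

public class DistinctColors {
    // Evenly spaces N hues around the color wheel at fixed saturation/brightness.
    static List<Color> pick(int num) {
        List<Color> colors = new ArrayList<>();
        for (int i = 0; i < num; i++) {
            float hue = (float) i / num; // [0, 1) maps to [0 deg, 360 deg)
            colors.add(Color.getHSBColor(hue, 0.9f, 0.9f));
        }
        return colors;
    }

    public static void main(String[] args) {
        List<Color> colors = pick(12);
        System.out.println(colors.size()); // 12
        // With saturation/brightness fixed, each hue yields a different RGB value
        System.out.println(colors.stream().distinct().count());
    }
}
```

Randomizing lightness/saturation slightly per color, as in the answer above, can be layered on top by jittering the second and third arguments.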
This questions appears in quite a few SO discussions: * [Algorithm For Generating Unique Colors](https://stackoverflow.com/questions/1168260/algorithm-for-generating-unique-colors) * [Generate unique colours](https://stackoverflow.com/questions/773226/generate-unique-colours) * [Generate distinctly different RGB colors in graphs](https://stackoverflow.com/questions/309149/generate-distinctly-different-rgb-colors-in-graphs) * [How to generate n different colors for any natural number n?](https://stackoverflow.com/questions/2328339/how-to-generate-n-different-colors-for-any-natural-number-n) Different solutions are proposed, but none are optimal. Luckily, *science* comes to the rescue **Arbitrary N** * [Colour displays for categorical images](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.65.2790) (free download) * [A WEB SERVICE TO PERSONALISE MAP COLOURING](http://icaci.org/documents/ICC_proceedings/ICC2009/html/nonref/13_18.pdf) (free download, a webservice solution should be available by next month) * [An Algorithm for the Selection of High-Contrast Color Sets](http://onlinelibrary.wiley.com/doi/10.1002/%28SICI%291520-6378%28199904%2924:2%3C132::AID-COL8%3E3.0.CO;2-B/abstract) (the authors offer a free C++ implementation) * [High-contrast sets of colors](http://www.opticsinfobase.org/abstract.cfm?URI=ao-21-16-2936) (The first algorithm for the problem) The last 2 will be free via most university libraries / proxies. **N is finite and relatively small** In this case, one could go for a list solution. 
A very interesting article on the subject is freely available: * [A Colour Alphabet and the Limits of Colour Coding](http://eleanormaclure.files.wordpress.com/2011/03/colour-coding.pdf) There are several color lists to consider: * Boynton's list of 11 colors that are almost never confused (available in the first paper of the previous section) * Kelly's 22 colors of maximum contrast (available in the paper above) I also ran into [this](http://web.media.mit.edu/~wad/color/palette.html) palette by an MIT student. Lastly, the following links may be useful in converting between different color systems/coordinates (some colors in the articles are not specified in RGB, for instance): * <http://chem8.org/uch/space-55036-do-blog-id-5333.html> * <https://metacpan.org/pod/Color::Library::Dictionary::NBS_ISCC> * [Color Theory: How to convert Munsell HVC to RGB/HSB/HSL](https://stackoverflow.com/questions/3620663/color-theory-how-to-convert-munsell-hvc-to-rgb-hsb-hsl/4353544#4353544) For Kelly's and Boynton's list, I've already made the conversion to RGB (with the exception of white and black, which should be obvious). 
Some C# code: ``` public static ReadOnlyCollection<Color> KellysMaxContrastSet { get { return _kellysMaxContrastSet.AsReadOnly(); } } private static readonly List<Color> _kellysMaxContrastSet = new List<Color> { UIntToColor(0xFFFFB300), //Vivid Yellow UIntToColor(0xFF803E75), //Strong Purple UIntToColor(0xFFFF6800), //Vivid Orange UIntToColor(0xFFA6BDD7), //Very Light Blue UIntToColor(0xFFC10020), //Vivid Red UIntToColor(0xFFCEA262), //Grayish Yellow UIntToColor(0xFF817066), //Medium Gray //The following will not be good for people with defective color vision UIntToColor(0xFF007D34), //Vivid Green UIntToColor(0xFFF6768E), //Strong Purplish Pink UIntToColor(0xFF00538A), //Strong Blue UIntToColor(0xFFFF7A5C), //Strong Yellowish Pink UIntToColor(0xFF53377A), //Strong Violet UIntToColor(0xFFFF8E00), //Vivid Orange Yellow UIntToColor(0xFFB32851), //Strong Purplish Red UIntToColor(0xFFF4C800), //Vivid Greenish Yellow UIntToColor(0xFF7F180D), //Strong Reddish Brown UIntToColor(0xFF93AA00), //Vivid Yellowish Green UIntToColor(0xFF593315), //Deep Yellowish Brown UIntToColor(0xFFF13A13), //Vivid Reddish Orange UIntToColor(0xFF232C16), //Dark Olive Green }; public static ReadOnlyCollection<Color> BoyntonOptimized { get { return _boyntonOptimized.AsReadOnly(); } } private static readonly List<Color> _boyntonOptimized = new List<Color> { Color.FromArgb(0, 0, 255), //Blue Color.FromArgb(255, 0, 0), //Red Color.FromArgb(0, 255, 0), //Green Color.FromArgb(255, 255, 0), //Yellow Color.FromArgb(255, 0, 255), //Magenta Color.FromArgb(255, 128, 128), //Pink Color.FromArgb(128, 128, 128), //Gray Color.FromArgb(128, 0, 0), //Brown Color.FromArgb(255, 128, 0), //Orange }; static public Color UIntToColor(uint color) { var a = (byte)(color >> 24); var r = (byte)(color >> 16); var g = (byte)(color >> 8); var b = (byte)(color >> 0); return Color.FromArgb(a, r, g, b); } ``` And here are the RGB values in hex and 8-bit-per-channel representations: ``` kelly_colors_hex = [ 0xFFB300, # Vivid 
Yellow 0x803E75, # Strong Purple 0xFF6800, # Vivid Orange 0xA6BDD7, # Very Light Blue 0xC10020, # Vivid Red 0xCEA262, # Grayish Yellow 0x817066, # Medium Gray # The following don't work well for people with defective color vision 0x007D34, # Vivid Green 0xF6768E, # Strong Purplish Pink 0x00538A, # Strong Blue 0xFF7A5C, # Strong Yellowish Pink 0x53377A, # Strong Violet 0xFF8E00, # Vivid Orange Yellow 0xB32851, # Strong Purplish Red 0xF4C800, # Vivid Greenish Yellow 0x7F180D, # Strong Reddish Brown 0x93AA00, # Vivid Yellowish Green 0x593315, # Deep Yellowish Brown 0xF13A13, # Vivid Reddish Orange 0x232C16, # Dark Olive Green ] kelly_colors = dict(vivid_yellow=(255, 179, 0), strong_purple=(128, 62, 117), vivid_orange=(255, 104, 0), very_light_blue=(166, 189, 215), vivid_red=(193, 0, 32), grayish_yellow=(206, 162, 98), medium_gray=(129, 112, 102), # these aren't good for people with defective color vision: vivid_green=(0, 125, 52), strong_purplish_pink=(246, 118, 142), strong_blue=(0, 83, 138), strong_yellowish_pink=(255, 122, 92), strong_violet=(83, 55, 122), vivid_orange_yellow=(255, 142, 0), strong_purplish_red=(179, 40, 81), vivid_greenish_yellow=(244, 200, 0), strong_reddish_brown=(127, 24, 13), vivid_yellowish_green=(147, 170, 0), deep_yellowish_brown=(89, 51, 21), vivid_reddish_orange=(241, 58, 19), dark_olive_green=(35, 44, 22)) ``` For all you Java developers, here are the JavaFX colors: ``` // Don't forget to import javafx.scene.paint.Color; private static final Color[] KELLY_COLORS = { Color.web("0xFFB300"), // Vivid Yellow Color.web("0x803E75"), // Strong Purple Color.web("0xFF6800"), // Vivid Orange Color.web("0xA6BDD7"), // Very Light Blue Color.web("0xC10020"), // Vivid Red Color.web("0xCEA262"), // Grayish Yellow Color.web("0x817066"), // Medium Gray Color.web("0x007D34"), // Vivid Green Color.web("0xF6768E"), // Strong Purplish Pink Color.web("0x00538A"), // Strong Blue Color.web("0xFF7A5C"), // Strong Yellowish Pink Color.web("0x53377A"), // Strong 
Violet Color.web("0xFF8E00"), // Vivid Orange Yellow Color.web("0xB32851"), // Strong Purplish Red Color.web("0xF4C800"), // Vivid Greenish Yellow Color.web("0x7F180D"), // Strong Reddish Brown Color.web("0x93AA00"), // Vivid Yellowish Green Color.web("0x593315"), // Deep Yellowish Brown Color.web("0xF13A13"), // Vivid Reddish Orange Color.web("0x232C16"), // Dark Olive Green }; ``` The following are the unsorted Kelly colors, in the order above. [![unsorted kelly colors](https://i.stack.imgur.com/oIczY.png)](https://i.stack.imgur.com/oIczY.png) The following are the Kelly colors sorted by hue (note that some yellows are not very contrasting). [![ sorted kelly colors](https://i.stack.imgur.com/3lb9t.png)](https://i.stack.imgur.com/3lb9t.png)
How to automatically generate N "distinct" colors?
[ "", "java", "colors", "color-scheme", "color-picker", "" ]
I have 3 MySQL tables representing photos a user posts, videos a user posts, and comments a user posts, and I need to view the 10 (20, 30, 40...) most recent activities by the user. For example, the photos table may look like: ``` user_id | photo_id | photo_path | photo_name | date_added 5 | 18 | /photos | pht_18.png | 2009-02-12 5 | 21 | /photos | pht_21.png | 2009-02-15 5 | 29 | /photos | pht_29.png | 2009-03-30 ``` the videos table ``` user_id | video_id | video_url | date_added 5 | 36 | youtube.com/... | 2009-01-09 5 | 48 | youtube.com/... | 2009-02-18 5 | 90 | youtube.com/... | 2009-03-19 ``` the comments table ``` user_id | comment_id | comment | date_added 5 | 6 | hi! | 2009-02-11 5 | 11 | great photo | 2009-02-13 5 | 19 | nice shot! | 2009-03-28 ``` As you can see, the 3 tables have different numbers of attributes, so how can I do the union? And while fetching the query result, how can I tell which table each row belongs to? So on the user profile page I'd like to show his recent activities, of course ordered by DATE DESC, this way: ``` 2009-09-01: user posted a video 2009-11-02: user posted a comment 2009-12-02: user posted a photo 2009-13-02: user posted a comment 2009-15-02: user posted a photo 2009-18-02: user posted a video 2009-19-03: user posted a video 2009-28-03: user posted a comment 2009-30-03: user posted a photo ``` Can anyone help me, please?
A MySQL UNION query could work here: ``` (SELECT `user_id`, `date_added`, 'photo' AS `type` FROM `photos` WHERE `user_id` = uid) UNION (SELECT `user_id`, `date_added`, 'video' AS `type` FROM `videos` WHERE `user_id` = uid) UNION (SELECT `user_id`, `date_added`, 'comment' AS `type` FROM `comments` WHERE `user_id` = uid) ORDER BY `date_added` DESC; ``` Then you'd wind up with a result set like ``` user_id | date_added | type 5 | 2009-01-03 | photo 5 | 2008-12-07 | video 5 | 2008-11-19 | comment ``` and so on. (actually you can leave `user_id` out of the SELECT if you want, of course)
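The same pattern can be sketched end to end with Python's built-in sqlite3 module standing in for MySQL (table and column names follow the question; only the literal `type` column distinguishes the source tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE photos  (user_id INT, photo_id INT, photo_path TEXT, photo_name TEXT, date_added TEXT);
CREATE TABLE videos  (user_id INT, video_id INT, video_url TEXT, date_added TEXT);
CREATE TABLE comments(user_id INT, comment_id INT, comment TEXT, date_added TEXT);
INSERT INTO photos   VALUES (5, 18, '/photos', 'pht_18.png', '2009-02-12');
INSERT INTO videos   VALUES (5, 90, 'youtube.com/...', '2009-03-19');
INSERT INTO comments VALUES (5, 19, 'nice shot!', '2009-03-28');
""")

# One row per activity, tagged with its origin table, newest first.
rows = cur.execute("""
    SELECT user_id, date_added, 'photo'   AS type FROM photos   WHERE user_id = ?
    UNION ALL
    SELECT user_id, date_added, 'video'   AS type FROM videos   WHERE user_id = ?
    UNION ALL
    SELECT user_id, date_added, 'comment' AS type FROM comments WHERE user_id = ?
    ORDER BY date_added DESC
    LIMIT 10
""", (5, 5, 5)).fetchall()

for user_id, date_added, kind in rows:
    print(date_added, "user posted a", kind)
```

In MySQL itself the query body is the same shape; the `type` column is what the PHP side reads to decide whether to render "posted a photo", "posted a video", or "posted a comment".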
Why do you have separate tables in the first place? That's probably a mistake in database design. [***EDIT***: As it turned out through comments and a question edit, the OP had a valid reason to maintain three tables. Further advice about this removed.] To solve your problem you can use UNION or UNION ALL: ``` (SELECT 'photo' AS item_type, date_added, user_id FROM photos) UNION ALL (SELECT 'video' AS item_type, date_added, user_id FROM videos) UNION ALL (SELECT 'comment' AS item_type, date_added, user_id FROM comments) ORDER BY date_added DESC ```
User recent activities - PHP MySQL
[ "", "php", "mysql", "" ]
On a site I'm working on, the pages are generating 45 external WebResource.axd and ScriptResource.axd files, so the browser has to request all 45 references. That's a lot of references, so I'd like to know if there is a way all of those requests could be combined into one request. I've seen that the ScriptManager is supposed to be able to do something regarding that, but I haven't seen any results with the WebResource.axd and ScriptResource.axd files. How would I go about getting these to all combine?
As the others have stated, using the ASP.NET AJAX Toolkit's ToolkitScriptManager rather than the default ASP.NET ScriptManager will allow you to do some script combining, including your own scripts embedded as resources - there's a good post about it at [Script combining made easy](http://blogs.msdn.com/delay/archive/2007/06/11/script-combining-made-easy-overview-of-the-ajax-control-toolkit-s-toolkitscriptmanager.aspx). If you have the luxury of using ASP.NET 3.5 SP1, then there are some further additions to the ASP.NET ScriptManager that allow you to combine a lot more scripts, including general .js files in your solution into one script call. Check out the video here: > [Using Script Combining to Improve Performance](http://www.asp.net/learn/3.5-SP1/video-296.aspx)
Try using `ajax:ToolkitScriptManager` instead of `asp:ScriptManager` - it tends to combine as much of the resource requests as it can.
How do I combine WebResource.axd and ScriptResource.axd files so as to result in less requests to my ASP.NET server?
[ "", "asp.net", "javascript", "ajaxcontroltoolkit", "" ]
I am working for a company where we are developing video chat support in an existing application. I have looked at various solutions for this, like 1. Using Managed DirectShow for video capture and streaming in C#. 2. Some code samples on CodeProject where we take an image and pass it over the network (I would call it a rather crude solution, as this would eat up a lot of bandwidth). 3. Coding a compression algorithm from scratch and using it to compress/decompress video. Now the challenge is that we are looking to achieve very high quality video streaming, and the container application is coded in C#.NET. This is what I have proposed so far: the network logic to stream data is written in C#, the video compression is to be written in VC++, and we call this VC++ DLL using P/Invoke or C++/CLI, whichever is possible. I am looking for someone more experienced than me in this field who can tell me whether I am on the right track or whether this can still be improved. The ultimate goal is high quality video streaming. The codec can be anything, like H.263, H.264, etc.
I've used several ways to get video streaming/conferencing with .NET easily, without needing to dig into DirectShow (OK, dig some, but not deep :) 1) Plain Windows Media Encoder components. They are documented with samples in the Windows Media Encoder SDK. Good for any high-resolution streaming, but the delay is too big for realtime chat (0.5-2 seconds at best). The modern Expression Encoder SDK is another option. 2) Microsoft Research ConferenceXP <http://cct.cs.washington.edu/> A full-featured conferencing API including application streaming. They took low-level Windows Media codec filters and wrapped them into managed code. Works well. Easily customizable. Looks a bit abandoned now. 3) Microsoft RTC Client up to version 1.3 - the core of Windows Messenger. Pros: managed samples from Microsoft, good docs, reliable performance, freely redistributable, Microsoft-compatible (good) SIP stack included. Major conferencing vendors like Emblaze VCON based their solutions on it in the recent past; not sure about these days, but I know that Tandberg licensed Microsoft's VC-1. Cons: versions up to 1.3 support H.261-H.263 video only. The modern version with VC-1 (H.264) codec support does not allow direct serverless IP-to-IP connections; it requires Microsoft Live Communications Server. The newer version of the SDK does not cover video conferencing calls well. <http://msdn.microsoft.com/en-us/library/ms775892(VS.85).aspx> Please let us know what platform you have chosen. By the way, I've even used the ConferenceXP video RTP part together with RTC 1.3 voice/SIP features to improve video quality, so you have a wide choice of managed technologies here. Another option is Live Meeting, which I haven't had a chance to take a good look at yet.
Save yourself the trouble and use VLC. There are some decent .NET wrappers for it (<http://forum.videolan.org/viewtopic.php?f=32&t=52021&start=30>) We are using C# and VLC for an IPTV network. We take input off DISH network satellites via Osprey-450 video capture devices on a Windows XP server. From there, we have a .NET server component that we wrote in C# that uses VLC behind the scenes (starting separate processes in .NET to control the vlc.exe instances). The VLC processes transcode and stream the signals over a network (.h264 or MPEG-4, we've successfully done both). On the client side we have a C# WinForm application that uses an embedded VLC Viewer to view multicast signals. This application is mainly for command & control. The real use of the multicast signals happens when our set top boxes attached to our TV's decode and display the streams. We thought we were going to have to write our own DirectX encoders too, but don't go to all the trouble. VLC works really well and has enough C# support to be very useful. Feel free to e-mail me if you have specific questions about implementation.
Developing a Video Chat Application with high quality video streaming
[ "", "c#", "video", "streaming", "chat", "" ]
I have a C# Excel Add-in project "MyExcelAddIn" that has a public method Foo() to do something complex. For testing purposes, the add-in also defines a toolbar button which is wired to Foo() so I can test this and verify that clicking the button calls Foo() and does what I want it to do. This is fine. Now I want to call this method from a C# Windows Forms project. In the Windows Forms project I can create an Excel instance and make it visible and verify that my VSTO add-in is running as I can see the button and it works. But I can't work out how to call Foo() programatically from the Windows Forms project. I've googled a bit and got as far as getting the "MyExcelAddIn" COMAddIn object, but can't work out how to call Foo(). It looks something like this: ``` // Create Excel and make it visible Application excelApp = new Application(); excelApp.Visible = true; // I know my VSTO add-in is running because I can see my test button // Now get a reference to my VSTO add-in Microsoft.Office.Core.COMAddIns comAddIns = _excelApp.COMAddIns; object addinName = "MyExcelAddIn"; Microsoft.Office.Core.COMAddIn myAddin = comAddIns.Item(ref addinName); // This works, but now what? How do I make a call on myAddin? // Note that myAddin.Object is null... ``` So I want to know what I can do to call Foo() from my Windows Forms application. Note that I have full control over both the Windows Forms application and the add-in and I suspect I have to make changes to both of them (particularly the add-in) but I have no idea how to do this. Note that this is a VS2008 C# application and I'm using Excel 2003.
I'm using the SendMessage Win32 API to do this. My C# Add-in creates a "NativeWindow" with a unique window title that the WinForm app can locate.
If you're building an application-level add-in, I believe this may be your answer: [MSDN VSTO Article](http://msdn.microsoft.com/en-us/library/bb608621.aspx) It involves two steps: (From the article) 1. In your add-in, expose an object to other solutions. 2. In another solution, access the object exposed by your add-in, and call members of the object. The other solution may be: (Again from the article) * Any solution that is running in a different process than your add-in (these types of solutions are also named out-of-process clients). These include applications that automate an Office application, such as a Windows Forms or console application, and add-ins that are loaded in a different process.
How to call a VSTO AddIn method from a separate C# project?
[ "", "c#", ".net", "visual-studio-2008", "vsto", "add-in", "" ]
I've recently become interested in algorithms and have begun exploring them by writing a naive implementation and then optimizing it in various ways. I'm already familiar with the standard Python module for profiling runtime (for most things I've found the timeit magic function in IPython to be sufficient), but I'm also interested in memory usage so I can explore those tradeoffs as well (e.g. the cost of caching a table of previously computed values versus recomputing them as needed). Is there a module that will profile the memory usage of a given function for me?
This one has been answered already here: [Python memory profiler](https://stackoverflow.com/questions/110259/python-memory-profiler) Basically you do something like that (cited from [Guppy-PE](http://guppy-pe.sourceforge.net/#Heapy)): ``` >>> from guppy import hpy; h=hpy() >>> h.heap() Partition of a set of 48477 objects. Total size = 3265516 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 25773 53 1612820 49 1612820 49 str 1 11699 24 483960 15 2096780 64 tuple 2 174 0 241584 7 2338364 72 dict of module 3 3478 7 222592 7 2560956 78 types.CodeType 4 3296 7 184576 6 2745532 84 function 5 401 1 175112 5 2920644 89 dict of class 6 108 0 81888 3 3002532 92 dict (no owner) 7 114 0 79632 2 3082164 94 dict of type 8 117 0 51336 2 3133500 96 type 9 667 1 24012 1 3157512 97 __builtin__.wrapper_descriptor <76 more rows. Type e.g. '_.more' to view.> >>> h.iso(1,[],{}) Partition of a set of 3 objects. Total size = 176 bytes. Index Count % Size % Cumulative % Kind (class / dict of class) 0 1 33 136 77 136 77 dict (no owner) 1 1 33 28 16 164 93 list 2 1 33 12 7 176 100 int >>> x=[] >>> h.iso(x).sp 0: h.Root.i0_modules['__main__'].__dict__['x'] >>> ```
Python 3.4 includes a new module: [`tracemalloc`](https://docs.python.org/3/library/tracemalloc.html). It provides detailed statistics about which code is allocating the most memory. Here's an example that displays the top three lines allocating memory. ``` from collections import Counter import linecache import os import tracemalloc def display_top(snapshot, key_type='lineno', limit=3): snapshot = snapshot.filter_traces(( tracemalloc.Filter(False, "<frozen importlib._bootstrap>"), tracemalloc.Filter(False, "<unknown>"), )) top_stats = snapshot.statistics(key_type) print("Top %s lines" % limit) for index, stat in enumerate(top_stats[:limit], 1): frame = stat.traceback[0] # replace "/path/to/module/file.py" with "module/file.py" filename = os.sep.join(frame.filename.split(os.sep)[-2:]) print("#%s: %s:%s: %.1f KiB" % (index, filename, frame.lineno, stat.size / 1024)) line = linecache.getline(frame.filename, frame.lineno).strip() if line: print(' %s' % line) other = top_stats[limit:] if other: size = sum(stat.size for stat in other) print("%s other: %.1f KiB" % (len(other), size / 1024)) total = sum(stat.size for stat in top_stats) print("Total allocated size: %.1f KiB" % (total / 1024)) tracemalloc.start() counts = Counter() fname = '/usr/share/dict/american-english' with open(fname) as words: words = list(words) for word in words: prefix = word[:3] counts[prefix] += 1 print('Top prefixes:', counts.most_common(3)) snapshot = tracemalloc.take_snapshot() display_top(snapshot) ``` And here are the results: ``` Top prefixes: [('con', 1220), ('dis', 1002), ('pro', 809)] Top 3 lines #1: scratches/memory_test.py:37: 6527.1 KiB words = list(words) #2: scratches/memory_test.py:39: 247.7 KiB prefix = word[:3] #3: scratches/memory_test.py:40: 193.0 KiB counts[prefix] += 1 4 other: 4.3 KiB Total allocated size: 6972.1 KiB ``` ### When is a memory leak not a leak? 
That example is great when the memory is still being held at the end of the calculation, but sometimes you have code that allocates a lot of memory and then releases it all. It's not technically a memory leak, but it's using more memory than you think it should. How can you track memory usage when it all gets released? If it's your code, you can probably add some debugging code to take snapshots while it's running. If not, you can start a background thread to monitor memory usage while the main thread runs. Here's the previous example where the code has all been moved into the `count_prefixes()` function. When that function returns, all the memory is released. I also added some `sleep()` calls to simulate a long-running calculation. ``` from collections import Counter import linecache import os import tracemalloc from time import sleep def count_prefixes(): sleep(2) # Start up time. counts = Counter() fname = '/usr/share/dict/american-english' with open(fname) as words: words = list(words) for word in words: prefix = word[:3] counts[prefix] += 1 sleep(0.0001) most_common = counts.most_common(3) sleep(3) # Shut down time. 
return most_common def main(): tracemalloc.start() most_common = count_prefixes() print('Top prefixes:', most_common) snapshot = tracemalloc.take_snapshot() display_top(snapshot) def display_top(snapshot, key_type='lineno', limit=3): snapshot = snapshot.filter_traces(( tracemalloc.Filter(False, "<frozen importlib._bootstrap>"), tracemalloc.Filter(False, "<unknown>"), )) top_stats = snapshot.statistics(key_type) print("Top %s lines" % limit) for index, stat in enumerate(top_stats[:limit], 1): frame = stat.traceback[0] # replace "/path/to/module/file.py" with "module/file.py" filename = os.sep.join(frame.filename.split(os.sep)[-2:]) print("#%s: %s:%s: %.1f KiB" % (index, filename, frame.lineno, stat.size / 1024)) line = linecache.getline(frame.filename, frame.lineno).strip() if line: print(' %s' % line) other = top_stats[limit:] if other: size = sum(stat.size for stat in other) print("%s other: %.1f KiB" % (len(other), size / 1024)) total = sum(stat.size for stat in top_stats) print("Total allocated size: %.1f KiB" % (total / 1024)) main() ``` When I run that version, the memory usage has gone from 6MB down to 4KB, because the function released all its memory when it finished. ``` Top prefixes: [('con', 1220), ('dis', 1002), ('pro', 809)] Top 3 lines #1: collections/__init__.py:537: 0.7 KiB self.update(*args, **kwds) #2: collections/__init__.py:555: 0.6 KiB return _heapq.nlargest(n, self.items(), key=_itemgetter(1)) #3: python3.6/heapq.py:569: 0.5 KiB result = [(key(elem), i, elem) for i, elem in zip(range(0, -n, -1), it)] 10 other: 2.2 KiB Total allocated size: 4.0 KiB ``` Now here's a version inspired by [another answer](https://stackoverflow.com/a/10117657/4794) that starts a second thread to monitor memory usage. 
``` from collections import Counter import linecache import os import tracemalloc from datetime import datetime from queue import Queue, Empty from resource import getrusage, RUSAGE_SELF from threading import Thread from time import sleep def memory_monitor(command_queue: Queue, poll_interval=1): tracemalloc.start() old_max = 0 snapshot = None while True: try: command_queue.get(timeout=poll_interval) if snapshot is not None: print(datetime.now()) display_top(snapshot) return except Empty: max_rss = getrusage(RUSAGE_SELF).ru_maxrss if max_rss > old_max: old_max = max_rss snapshot = tracemalloc.take_snapshot() print(datetime.now(), 'max RSS', max_rss) def count_prefixes(): sleep(2) # Start up time. counts = Counter() fname = '/usr/share/dict/american-english' with open(fname) as words: words = list(words) for word in words: prefix = word[:3] counts[prefix] += 1 sleep(0.0001) most_common = counts.most_common(3) sleep(3) # Shut down time. return most_common def main(): queue = Queue() poll_interval = 0.1 monitor_thread = Thread(target=memory_monitor, args=(queue, poll_interval)) monitor_thread.start() try: most_common = count_prefixes() print('Top prefixes:', most_common) finally: queue.put('stop') monitor_thread.join() def display_top(snapshot, key_type='lineno', limit=3): snapshot = snapshot.filter_traces(( tracemalloc.Filter(False, "<frozen importlib._bootstrap>"), tracemalloc.Filter(False, "<unknown>"), )) top_stats = snapshot.statistics(key_type) print("Top %s lines" % limit) for index, stat in enumerate(top_stats[:limit], 1): frame = stat.traceback[0] # replace "/path/to/module/file.py" with "module/file.py" filename = os.sep.join(frame.filename.split(os.sep)[-2:]) print("#%s: %s:%s: %.1f KiB" % (index, filename, frame.lineno, stat.size / 1024)) line = linecache.getline(frame.filename, frame.lineno).strip() if line: print(' %s' % line) other = top_stats[limit:] if other: size = sum(stat.size for stat in other) print("%s other: %.1f KiB" % (len(other), size / 
1024)) total = sum(stat.size for stat in top_stats) print("Total allocated size: %.1f KiB" % (total / 1024)) main() ``` The `resource` module lets you check the current memory usage, and save the snapshot from the peak memory usage. The queue lets the main thread tell the memory monitor thread when to print its report and shut down. When it runs, it shows the memory being used by the `list()` call: ``` 2018-05-29 10:34:34.441334 max RSS 10188 2018-05-29 10:34:36.475707 max RSS 23588 2018-05-29 10:34:36.616524 max RSS 38104 2018-05-29 10:34:36.772978 max RSS 45924 2018-05-29 10:34:36.929688 max RSS 46824 2018-05-29 10:34:37.087554 max RSS 46852 Top prefixes: [('con', 1220), ('dis', 1002), ('pro', 809)] 2018-05-29 10:34:56.281262 Top 3 lines #1: scratches/scratch.py:36: 6527.0 KiB words = list(words) #2: scratches/scratch.py:38: 16.4 KiB prefix = word[:3] #3: scratches/scratch.py:39: 10.1 KiB counts[prefix] += 1 19 other: 10.8 KiB Total allocated size: 6564.3 KiB ``` If you're on Linux, you may find [`/proc/self/statm`](https://stackoverflow.com/a/53475728/4794) more useful than the `resource` module.
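As a minimal, Unix-only illustration of the `getrusage` call the monitor thread relies on (note the units of `ru_maxrss` differ by platform: kilobytes on Linux, bytes on macOS):

```python
from resource import getrusage, RUSAGE_SELF

before = getrusage(RUSAGE_SELF).ru_maxrss
payload = [0] * 1_000_000  # allocate roughly 8 MB of list slots
after = getrusage(RUSAGE_SELF).ru_maxrss

# Peak RSS is a high-water mark: it only ever grows over the life of the
# process, so `after` can never be smaller than `before`.
print("peak RSS before/after:", before, after)
```

Because it is a high-water mark rather than a current reading, `ru_maxrss` alone cannot tell you when memory was released, which is why the monitor thread above pairs it with `tracemalloc` snapshots.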
How do I profile memory usage in Python?
[ "", "python", "memory", "profiling", "" ]
Is there a way to determine the exception type even though you caught the exception with a catch-all? Example: ``` try { SomeBigFunction(); } catch(...) { //Determine exception type here } ```
You can actually determine the type inside a catch(...), but it is not very useful: ``` #include <iostream> #include <exception> class E1 : public std::exception {}; class E2 : public std::exception {}; int main() { try { throw E2(); } catch( ... ) { try { throw; } catch( const E1 & e ) { std::cout << "E1\n"; } catch( const E2 & e ) { std::cout << "E2\n"; } } } ```
Short Answer: No. Long Answer: If you derive all your exceptions from a common base type (say std::exception) and catch this explicitly, then you can use it to get type information from your exception. But you should be using the ability of catch to catch a specific type of exception and then working from there. The only real uses for catch(...) are: * Catch and throw away the exception (stop an exception escaping a destructor). * Catch, log that an unknown exception happened, and re-throw. Edited: You can extract type information via dynamic\_cast<>() or via typeid(), though as stated above this is not something I recommend. Use the catch clauses. ``` #include <stdexcept> #include <iostream> class X: public std::runtime_error // I use runtime_error a lot { // it's derived from std::exception public: // And has an implementation of what() X(std::string const& msg): runtime_error(msg) {} }; int main() { try { throw X("Test"); } catch(std::exception const& e) { std::cout << "Message: " << e.what() << "\n"; /* * Note this is platform/compiler specific * Your mileage may vary */ std::cout << "Type: " << typeid(e).name() << "\n"; } } ```
Determining exception type after the exception is caught?
[ "", "c++", "exception", "" ]
Why does this work: ``` result = (from e in db.CampaignCodes where e.Code.Equals("") && e.Domain.Equals(null) select e).FirstOrDefault(); ``` But not (result is null): ``` String code = ""; String domain = null; result = (from e in db.CampaignCodes where e.Code.Equals(code) && e.Domain.Equals(domain) select e).FirstOrDefault(); ``` ??
That does indeed sound quite odd. LINQ to SQL may well notice the difference between getting the value from a variable and getting it from a constant, but I wouldn't have expected it to make any difference. I strongly recommend that whenever LINQ to SQL appears to be behaving oddly, you turn the context logging on and see what query it's actually executing in each case. EDIT: The other answers around the overloading aspect are really interesting. What happens if you declare the `domain` variable as type `object` instead of `string` in the second query, or cast the `null` to `string` in the first query?
To expand on Iain's answer: Depending on what overloads of e.Domain.Equals() there are, passing a null string variable may hit a different one than passing null? I'm not sure what the rules would be here, but I suspect the compiler might prefer an e.Domain.Equals(object variable) to e.Domain.Equals(string variable) when explicitly given "just" null, rather than a string variable that has been set as null?
Linq is weird or I'm stupid?
[ "", "c#", "linq-to-sql", "" ]
I want to make a scroller like the one given here: <http://www.shopping.com/xPC-Kettler-Kettler-Vito-Xl~linkin_id-8051267> in jQuery. It's in Dojo, I think. Please give me a link to a scroller like this one. Thanks
I imagine there are lots of different plugins for a jQuery scroller; here's one: <http://www.webresourcesdepot.com/scrollable-jquery-plugin-to-scroll-content/> and the demo of it working: <http://www.flowplayer.org/tools/scrollable.html> hope this helps! :)
[This one perhaps](http://sorgalla.com/jcarousel/) (jcarousel) Edit: jCarousel doesn't support mouseovers though
How to make scroller in jQuery?
[ "", "javascript", "jquery", "scroll", "" ]
I have already defined an onclick event. I want to add an additional callback function that will be invoked BEFORE the already defined callback. * How do I add an event at the head of the queue for a concrete event type?
Are you working with the jQuery framework? I'm currently working with Ext JS and I found a class you can use or copy. Let's look at <http://extjs.com/deploy/dev/docs/> Ext > EventManager > removeAll() > removeAll( String/HTMLElement el ) : > void Removes all event handlers from an > element. Typically you will use > Ext.Element.removeAllListeners > directly on an Element in favor of > calling this version. Parameters: el : > String/HTMLElement The id or html > element from which to remove the event > Returns: void I think you can pass the `body` tag as the HTMLElement to shut down all events (not tested, sorry). Download here <http://extjs.com/products/extjs/build/> > select the jQuery adapter if you are working with jQuery > Ext Core will suffice (it contains EventManager.js). Alternatively, like crescentfresh (answer no. 4), I made this sample code to apply to all node events: ``` var nodes = document.getElementsByTagName('*'); // collect your nodes however for(var i=0; i < nodes.length; i++) { var node = nodes[i]; Ext.EventManager.removeAll(node); } ``` Please let me know if you have trouble using Ext JS; I can help, and vote for me if my answer is useful.
Super. That made me look into the jQuery docs, and I've found that combining crescentfresh's solution with jQuery's unbind() method (an analog of Ext's removeAll()) will solve my issue. So in the end I have this: ``` this.onclick = function( e ) { if ( /*trigger clause*/ true ) { //cancel all events $(this).unbind(); return false; } else { //continue processing return original ? original.apply(this, arguments) : true; } } ``` Thanks for the advice, guys! PS: This reply editor is awful; it's a headache to paste code here. PPS: Unfortunately I can't vote for two replies, I can't mark your reply as useful due to lack of reputation, and I can't mark more than one answer as accepted. So please forgive me. Stupid forum engine, but really cool people here.
Add event in the head of the queue
[ "", "javascript", "events", "" ]
I'm tired of inserting ``` import pdb; pdb.set_trace() ``` lines into my Python programs and debugging through the console. How do I connect a remote debugger and insert breakpoints from a civilized user interface?
use [Winpdb](https://pypi.org/project/winpdb/). It is a **platform independent** graphical GPL Python debugger with support for remote debugging over a network, multiple threads, namespace modification, embedded debugging, encrypted communication and is up to 20 times faster than pdb. Features: * GPL license. Winpdb is Free Software. * Compatible with CPython 2.3 through 2.6 and Python 3000 * Compatible with wxPython 2.6 through 2.8 * Platform independent, and tested on Ubuntu Gutsy and Windows XP. * User Interfaces: rpdb2 is console based, while winpdb requires wxPython 2.6 or later. [![Screenshot](https://i.stack.imgur.com/GwX8x.jpg)](https://i.stack.imgur.com/GwX8x.jpg) (source: [winpdb.org](http://winpdb.org/images/screenshot_winpdb_small.jpg)) [winpdb-reborn · PyPI](https://pypi.org/project/winpdb-reborn/) [GitHub - bluebird75/winpdb: Fork of the official winpdb with improvements](https://github.com/bluebird75/winpdb)
A little bit late, but here is a very lightweight remote debugging solution courtesy of [Michael DeHaan • Tips on Using Debuggers With Ansible](https://web.archive.org/web/20150317050549/http://michaeldehaan.net/post/35403909347/tips-on-using-debuggers-with-ansible): 1. `pip install epdb` on the remote host. 2. Make sure your firewalling setup is not allowing non-local connections to port 8080 on the remote host, since `epdb` defaults to listening on any address (`INADDR_ANY`), not 127.0.0.1. 3. Instead of using `import pdb; pdb.set_trace()` in your program, use `import epdb; epdb.serve()`. 4. Securely log in to the remote host, since `epdb.connect()` uses telnet. 5. Attach to the program using `python -c 'import epdb; epdb.connect()'`. Adjust the security bits to suit your local network setup and security stance, of course.
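Under the hood, tools like epdb boil down to serving the debugger prompt over a socket. Purely as an illustration, here is a toy, stdlib-only sketch of that idea — no authentication or encryption, so use the real tools for actual work; the class name and port are invented:

```python
import pdb
import socket
import threading
import time

class RemotePdb(pdb.Pdb):
    """Toy remote debugger: serve the pdb prompt over a TCP socket."""
    def __init__(self, host="127.0.0.1", port=4455):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()              # block until someone attaches
        handle = conn.makefile("rw")
        # Passing stdin/stdout makes Pdb read its commands from the socket.
        super().__init__(stdin=handle, stdout=handle)

# Stand-in for a developer attaching from another terminal
# (e.g. with netcat or telnet) and typing "c" to continue:
def fake_developer():
    while True:
        try:
            conn = socket.create_connection(("127.0.0.1", 4455), timeout=0.2)
            break
        except OSError:
            time.sleep(0.05)
    f = conn.makefile("rw")
    f.write("c\n")
    f.flush()

threading.Thread(target=fake_developer, daemon=True).start()

# In the real program this line replaces `import pdb; pdb.set_trace()`:
RemotePdb().set_trace()
resumed = True        # reached once the remote session continues
print(resumed)
```

In practice the program blocks at `set_trace()` until you attach from another terminal, at which point you get an ordinary pdb session over the wire.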
How do I attach a remote debugger to a Python process?
[ "", "python", "remote-debugging", "" ]
I'm loading elements via AJAX. Some of them are only visible if you scroll down the page. Is there any way I can know if an element is now in the visible part of the page?
This should do the trick: ``` function isScrolledIntoView(elem) { var docViewTop = $(window).scrollTop(); var docViewBottom = docViewTop + $(window).height(); var elemTop = $(elem).offset().top; var elemBottom = elemTop + $(elem).height(); return ((elemBottom <= docViewBottom) && (elemTop >= docViewTop)); } ``` **Simple Utility Function** This will allow you to call a utility function that accepts the element you're looking for and if you want the element to be fully in view or partially. ``` function Utils() { } Utils.prototype = { constructor: Utils, isElementInView: function (element, fullyInView) { var pageTop = $(window).scrollTop(); var pageBottom = pageTop + $(window).height(); var elementTop = $(element).offset().top; var elementBottom = elementTop + $(element).height(); if (fullyInView === true) { return ((pageTop < elementTop) && (pageBottom > elementBottom)); } else { return ((elementTop <= pageBottom) && (elementBottom >= pageTop)); } } }; var Utils = new Utils(); ``` **Usage** ``` var isElementInView = Utils.isElementInView($('#flyout-left-container'), false); if (isElementInView) { console.log('in view'); } else { console.log('out of view'); } ```
[This answer](https://stackoverflow.com/a/488073/643514) in Vanilla: ``` function isScrolledIntoView(el) { var rect = el.getBoundingClientRect(); var elemTop = rect.top; var elemBottom = rect.bottom; // Only completely visible elements return true: var isVisible = (elemTop >= 0) && (elemBottom <= window.innerHeight); // Partially visible elements return true: //isVisible = elemTop < window.innerHeight && elemBottom >= 0; return isVisible; } ```
How to check if element is visible after scrolling?
[ "", "javascript", "jquery", "scroll", "" ]
I have a certain PHP script that calls exec() to execute a command to convert a PDF to JPG. This command works fine in bash. To preempt your initial troubleshooting guesses, note the following: * safe\_mode = Off * Permission on the directory containing the PDF and the script is set to 777, and this directory is also where the JPG is being written. * The command I am passing to exec() explicitly points to the binary being used (e.g. /usr/local/bin/convert). * display\_errors = On * error\_reporting = E\_ALL * disable\_functions = [blank] * I am echoing exec()'s output and it returns nothing. The command being run by default returns nothing. When I call this PHP script from the browser (visiting <http://www.example.com/script.php>), exec() does not execute its argument. *IMPORTANT*: I know that there are no issues with my script or the way I have constructed the bash command, because from bash, I can execute the script with 'php' and it works (e.g. 'php script.php' converts the file) I have also tried switching out exec() with system(). Last, I have had this issue once before in the past but cannot remember how I fixed it. I know there is something I am missing, so I hope someone else has experienced this as I have and remembers how to fix it! Thank you in advance for any assistance you can provide. Alex
Add `2>&1` to the end of your command to redirect errors from stderr to stdout. This should make it clear what's going wrong.
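The same merge of stderr into stdout can be reproduced from any language that spawns processes. A small sketch, in Python purely for illustration — the child command here is just a stand-in for a failing `convert` call:

```python
import subprocess
import sys

# A child process that writes only to stderr:
cmd = [sys.executable, "-c", "import sys; sys.stderr.write('boom\\n')"]

# Without redirection the error text never shows up on stdout:
plain = subprocess.run(cmd, capture_output=True, text=True)
print(repr(plain.stdout))    # ''
print(repr(plain.stderr))    # 'boom\n'

# stderr=STDOUT is the API equivalent of appending 2>&1 in the shell:
merged = subprocess.run(cmd, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT, text=True)
print(repr(merged.stdout))   # 'boom\n'
```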
Just a guess: it might be that your webserver process user does not have the privileges to do so.
PHP exec() will not execute shell command when executed via browser
[ "", "php", "system", "exec", "" ]
How do I override or extend .NET's built-in classes? For example: ``` public class String { public bool contains(string str, bool ignoreCase){...} public string replace(string str, string str2, bool ignoreCase){...} } ``` and then: ``` string aa="this is a Sample"; if(aa.contains("sample",false)) {...} ``` Is it possible?
The String class is sealed so you can't inherit from it. Extension methods are your best bet. They have the same **feel** as instance methods without the cost of inheritance. ``` public static class Extensions { public static bool contains(this string source, string value, bool ignoreCase) {... } } void Example() { string str = "aoeeuAOEU"; if ( str.contains("a", true) ) { ... } } ``` You will need to be using VS 2008 (C# 3.0) in order to use extension methods.
The String class is sealed, so you cannot extend it. If you want to add features you can either use extension methods or wrap it in a class of your own and provide whatever additional features you need.
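For comparison, the wrap-or-extend route the answer suggests looks like this in Python, where the built-in `str` happens to be subclassable (the method names here are invented):

```python
import re

class BetterStr(str):
    """Adds the asked-for helpers by subclassing.

    Unlike .NET's sealed String, Python's built-in str can be
    subclassed, but the idea of providing your own wrapper with the
    extra features is the same.
    """
    def contains(self, value, ignore_case=False):
        if ignore_case:
            return value.lower() in self.lower()
        return value in self

    def replace_ci(self, old, new):
        # Naive case-insensitive replace, good enough for a sketch.
        return re.sub(re.escape(old), new, self, flags=re.IGNORECASE)

s = BetterStr("this is a Sample")
print(s.contains("sample", ignore_case=True))   # True
print(s.contains("sample"))                     # False
print(s.replace_ci("sample", "demo"))           # this is a demo
```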
Possible to extend the String class in .net
[ "", "c#", ".net", "" ]
Just like the subject states. I want to be able to run iisreset, batch files, etc. from a console application. Can I? How do I do it?
That's quite possible, for example: ``` System.Diagnostics.Process.Start(@"C:\listfiles.bat"); ```
Check this example from [C# Station](http://www.csharp-station.com/HowTo/ProcessStart.aspx) ``` using System; using System.Diagnostics; namespace csharp_station.howto { /// <summary> /// Demonstrates how to start another program from C# /// </summary> class ProcessStart { static void Main(string[] args) { Process notePad = new Process(); notePad.StartInfo.FileName = "notepad.exe"; notePad.StartInfo.Arguments = "ProcessStart.cs"; notePad.Start(); } } } ```
Can I use the command line, or run a batch file from managed code? (.NET)
[ "", "c#", ".net", "command-line", "" ]
In Eclipse, a `// TODO` comment in a Java file marks an area in code as a task for later consideration. Is there a way to add other expressions that will do the same? For example, if I want to use `// myprojectname`.
*Window > Preferences > Java > Compiler > Task Tags* You can add new tags to your heart's content. --- NOTE: Why the Eclipse guys made this a *Compiler* setting is beyond me. It just makes the setting hard for people to find, so it rarely gets customized.
The answers by benjismith and j pimmel are perfectly valid and correct. I just want to add that you can find this and other settings by using Eclipse's neat preference-dialog feature: When you open the preferences dialog via "Window/Preferences", type "todo" in the text field in the upper left corner. The options tree will be limited to matching settings. Voilà
Custom TODO mark in Eclipse
[ "", "java", "eclipse", "todo", "" ]
I am currently developing an app targeted for the HP IPAQ 210. Part of this app requires the WLAN radio to be enabled/powered on to connect to a pre-configured access point. I'm currently using the IPAQ SDK (via P/Invoke) to enable the WLAN radio, but I'm having trouble reliably determining when the radio has established a connection with the preferred access point. I'm currently monitoring the Microsoft.WindowsMobile.Status.SystemState.WiFiStateConnected property, but I would prefer to subscribe to an event to be notified when the connection is established. I've looked around a bit in the OpenNETCF library, and there seems to be promising things in 2.3, but we're stuck on 2.2 for the moment. Is there a reliable way to determine the connection status?
It is ugly, and it is no event, but if all else fails, you could try checking the WiFi hardware state by reading its registry key: ``` int key = (int)Registry.GetValue("HKEY_LOCAL_MACHINE\\System\\State\\Hardware", "WiFi", -1); ```
So, in case anyone else happens upon this, I've found the registry key method described above to be mostly reliable, but I needed a more reliable method. I've moved to using the OpenNETCF 2.2 NetworkInformation library to monitor the CurrentIPAddress property of the WirelessZeroConfigInterface. I'm still using the IPAQUtils for managing the WLAN radio power (I've found the OpenNETCF 2.2 radio control to be lacking and the device will only have a single WiFi network entry), but here's how I monitor the IP Address of the interface: ``` NetworkInterface[] netIntfs = NetworkInterface.GetAllNetworkInterfaces(); foreach (NetworkInterface ni in netIntfs) { if (ni is WirelessZeroConfigNetworkInterface) { wzcni = (WirelessZeroConfigNetworkInterface)ni; } } while (wzcni.CurrentIpAddress.ToString() == "0.0.0.0" && tryCount < 10) { wzcni.Refresh(); System.Threading.Thread.Sleep(3000); tryCount++; } ```
Monitoring WLAN Radio Connection in Windows Mobile 6/C#
[ "", "c#", "compact-framework", "wifi", "smart-device-framework", "" ]
I have a PDF form with a number of text fields. The values entered in these fields are used to calculate values in other fields (the calculated fields are read-only). When I open the form in Adobe Reader and fill in a field, the calculated fields automatically re-calculate. However, I am using iTextSharp to fill in the fields, flatten the resulting form, then stream the flattened form back to the user over the web. That part works just fine except the calculated fields never calculate. I'm assuming that since no user triggered events (like keydowns or focus or blur) are firing, the calculations don't occur. Obviously, I could remove the calculations from the fillable form and do them all on the server as I am filling the fields, but I'd like the fillable form to be usable by humans as well as the server. Does anyone know how to force the calculations? EDIT: I ain't feeling too much iText/iTextSharp love here... Here are a few more details. Setting stamper.AcroFields.GenerateAppearances to true doesn't help. I *think* the answer lies somewhere in the page actions, but I don't know how to trigger it...
Paulo Soares (one of the main devs of iText and current maintainer of iTextSharp) [says](http://www.mail-archive.com/itext-questions@lists.sourceforge.net/msg26180.html): > iText doesn't do any effort to fix > calculated fields because most of the > times that's impossible. PdfCopyFields > has some support for it that sometimes > works and sometimes don't.
I have updated all the calculated fields of my pdfs by calling the javascript method calculateNow on the Doc object. According to the adobe javascript documentation `this.calculateNow();` > Forces computation of all calculation fields in the current document. > > When a form contains many calculations, there can be a significant delay after the user inputs data into a field, even if it is not a calculation field. One strategy is to turn off calculations at some point and turn them back on later (see example). To include the javascript call with iTextSharp : ``` using (PdfReader pdfReader = new PdfReader(pdfTemplate)) using (PdfStamper pdfStamper = new PdfStamper(pdfReader, new FileStream(newFile, FileMode.Create))) { // fill f1 field and more... AcroFields pdfFormFields = pdfStamper.AcroFields; pdfFormFields.SetField("f1", "100"); //... // add javascript on load event of the pdf pdfStamper.JavaScript = "this.calculateNow();"; pdfStamper.Close(); } ```
How do I force formatting and calculations in a PDF when filling other fields using iTextSharp?
[ "", "c#", "pdf", "pdf-generation", "itext", "" ]
I am storing a JSON string in the database that represents a set of properties. In the code behind, I export it and use it for some custom logic. Essentially, I am using it only as a storage mechanism. I understand XML is better suited for this, but I read that JSON is faster and preferred. Is it a good practice to use JSON if the intention is not to use the string on the client side?
JSON is a perfectly valid way of storing structured data and simpler and more concise than XML. I don't think it is a "bad practice" to use it for the same reason someone would use XML as long as you understand and are OK with its limitations.
Whether it's good practice I can't say, but it strikes me as odd. XML fields in your SQL database are at least queryable (SQL Server 2000 or later, MySQL and others), but more often than not they are a last resort for metadata. JSON is usually the carrier between JavaScript and your back end, not the storage itself, unless you have a JSON back end [document-oriented database](http://en.wikipedia.org/wiki/Document-oriented_database) such as [CouchDB](http://couchdb.apache.org/) or [SOLR](http://lucene.apache.org/solr/), as JSON lends itself perfectly to storing documents. Not to say I don't agree with using JSON as a simple (that is, not serializing references) data serializer over XML, but I won't go into a JSON vs XML rant just for the sake of it :). If you're not using JSON for its portability between 2 languages, and you're positive you will never query the data from SQL, you will be better off with the default serialization from .NET.
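The storage pattern the question describes is easy to sketch. Here it is in Python, illustrative only — the property names are invented:

```python
import json

# The property set the question describes, headed for a plain text column:
properties = {"colour": "green", "max_items": 25, "flags": ["a", "b"]}

stored = json.dumps(properties)      # the string that goes into the database
restored = json.loads(stored)        # what the code behind reads back out

print(type(stored).__name__)   # str
print(restored == properties)  # True: the round trip is lossless
```

As long as the values are plain data (strings, numbers, lists, maps), the round trip is lossless, which is all the question needs; references and custom types are where JSON stops being a good fit.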
Is JSON used only for JavaScript?
[ "", "javascript", "asp.net", "database", "json", "" ]
I have a Windows service that runs a method when the service's main Timer elapses (OnElapse). The OnElapse method gets a list of .xml files to process. Each xml file is inserted into a ThreadPool. **I want to make sure I don't insert 2 XMLs with the same name into the ThreadPool.** How can I manage which items are in the ThreadPool? I basically want to do this: ``` if xmlfilename not in threadpool insert in threadpool ```
This is pretty tricky because you need to closely monitor the ThreadPool and it will require a form of synchronization. Here's a quick and dirty example of a way to do this. ``` class XmlManager { private object m_lock = new object(); private HashSet<string> m_inPool = new HashSet<string>(); private void Run(object state) { string name = (string)state; try { FunctionThatActuallyProcessesFiles(name); } finally { lock ( m_lock ) { m_inPool.Remove(name); } } } public void MaybeRun(string xmlName) { lock ( m_lock ) { if (!m_inPool.Add(xmlName)) { return; } } ThreadPool.QueueUserWorkItem(Run, xmlName); } } ``` This is not a foolproof solution. There is at least one race condition in the code. Namely, an item could be being removed from the pool while you're trying to add it back in, and it won't actually get added. But if you are only concerned with them being processed a single time, this doesn't matter.
Something like this should do it (use a HashSet instead of a Dictionary if you are using .Net 3.5 or higher): ``` using System; using System.Collections.Generic; using System.Threading; namespace Something { class ProcessFilesClass { private object m_Lock = new object(); private Dictionary<string, object> m_WorkingItems = new Dictionary<string, object>(); private Timer m_Timer; public ProcessFilesClass() { m_Timer = new Timer(OnElapsed, null, 0, 10000); } public void OnElapsed(object context) { List<string> xmlList = new List<string>(); //Process xml files into xmlList foreach (string xmlFile in xmlList) { lock (m_Lock) { if (!m_WorkingItems.ContainsKey(xmlFile)) { m_WorkingItems.Add(xmlFile, null); ThreadPool.QueueUserWorkItem(DoWork, xmlFile); } } } } public void DoWork(object xmlFile) { //process xmlFile lock (m_Lock) { m_WorkingItems.Remove(xmlFile.ToString()); } } } } ```
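The same guard-a-set-with-a-lock idea as in both answers, sketched in Python for illustration (names invented; the `gate` event exists only to make the demo deterministic):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

in_flight = set()                 # file names currently queued or running
lock = threading.Lock()
processed = []
gate = threading.Event()          # holds workers back so the demo is deterministic

def do_work(xml_file):
    gate.wait()
    try:
        processed.append(xml_file)
    finally:
        with lock:                # always release the name, even on failure
            in_flight.discard(xml_file)

def maybe_queue(pool, xml_file):
    with lock:
        if xml_file in in_flight:  # same name already pending: skip it
            return False
        in_flight.add(xml_file)
    pool.submit(do_work, xml_file)
    return True

with ThreadPoolExecutor(max_workers=2) as pool:
    queued = [maybe_queue(pool, n) for n in ["a.xml", "b.xml", "a.xml"]]
    gate.set()                     # release the workers

print(queued)                      # [True, True, False]
```

The essential points carry across languages: the membership test and the insertion must happen under one lock, and the removal must be in a `finally` so a failed job doesn't block that file name forever.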
How can I manage which items are in the ThreadPool?
[ "", "c#", ".net", "multithreading", "windows-services", "" ]
Having seen [some](https://stackoverflow.com/questions/51574/good-java-graph-algorithm-library) [suggestions](https://stackoverflow.com/questions/555804/real-time-java-graph-chart-library) for graphs, I wonder what's the optimum for my problem. I want to render a directed graph to a servlet/picture that is displayed in the browser. There should be some kind of optimization of position. No dependency to Swing would be preferred. Algorithms are not important, since the structure of the graph is determined by business logic. It would be desired to be able add labels to edges as well. it would be optimal if i can serve this as png/svg. Which library/service would you recommend? clarifications: 1) The question is all about Graphs - like [Directed Acyclic Graph](http://en.wikipedia.org/wiki/Directed_acyclic_graph) - NOT - [Charts](http://en.wikipedia.org/wiki/Chart). 2) flot, Google Charts - cannot plot graphs, only charts, or have i missed something? 3) no i do not need interactivity 4) graphviz would be nice, but the grappa java library is quite outdated and is built upon swing/awt. while it may be theoretically possible to render swing to images, it would not be my favorite way to to so in a server-app. 5) it would be fine to use an online service where the images are not hosted locally. edit: added links to Wikipedia to clarify graph/chart term
As well as waiting weeks to hear about the Magic Framework that's going to solve all your problems in one line of code, there is also the other option of just Writing Some Code yourself to do exactly what you want... (I'm not saying it's 10 minutes' work, but it's probably one or two days, and you posted your question over two weeks ago...) Have you had a look, for example, at the Wikipedia entry on [Force-based algorithms](http://en.wikipedia.org/wiki/Force-based_algorithms)-- it has pseudocode and a few links that might be helpful. I'm assuming it is the layout algorithm that's the issue, and not the matter of creating a BufferedImage, drawing to its graphics context, PNG-encoding it and sending it down the socket. You really don't need a framework for *that* bit, I don't think.
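To give a sense of how small the core of a force-based algorithm is, here is a toy sketch in Python. All constants are arbitrary, and real implementations (Fruchterman-Reingold and friends) add a cooling schedule and spatial pruning:

```python
import math
import random

def force_layout(edges, n_nodes, iterations=300, k=1.0):
    """Toy force-directed layout: every pair repels, edges attract."""
    random.seed(1)
    pos = [[random.random(), random.random()] for _ in range(n_nodes)]
    for _ in range(iterations):
        forces = [[0.0, 0.0] for _ in range(n_nodes)]
        for i in range(n_nodes):                  # pairwise repulsion
            for j in range(i + 1, n_nodes):
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                forces[i][0] += f * dx / d; forces[i][1] += f * dy / d
                forces[j][0] -= f * dx / d; forces[j][1] -= f * dy / d
        for a, b in edges:                        # spring attraction
            dx = pos[a][0] - pos[b][0]
            dy = pos[a][1] - pos[b][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            forces[a][0] -= f * dx / d; forces[a][1] -= f * dy / d
            forces[b][0] += f * dx / d; forces[b][1] += f * dy / d
        for i in range(n_nodes):                  # capped displacement
            fx, fy = forces[i]
            mag = math.hypot(fx, fy) or 1e-9
            scale = min(0.05 * mag, 0.1) / mag
            pos[i][0] += fx * scale
            pos[i][1] += fy * scale
    return pos

# Lay out a 4-node path 0-1-2-3; adjacent nodes should land closer
# together than the endpoints.
layout = force_layout([(0, 1), (1, 2), (2, 3)], 4)
```

Rendering the resulting positions into a PNG with `BufferedImage` (or any 2D canvas) is then the easy part.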
Take a look at [graphviz](http://www.graphviz.org/)
Optimized graph drawing for the web
[ "", "java", "graph", "server-side", "graph-layout", "" ]
I'm using the DriveInfo class in my C# project to retrieve the available bytes on given drives. How to I correctly convert this number into Mega- or Gigabytes? Dividing by 1024 will not do the job I guess. The results always differ from those shown in the Windows-Explorer.
1024 is correct for usage in programs. The reason you may be having differences is likely due to differences in what driveinfo reports as "available space" and what windows considers available space. Note that only drive manufacturers use 1,000. Within windows and most programs the correct scaling is 1024. Also, while your compiler should optimize this anyway, this calculation can be done by merely shifting the bits by 10 for each magnitude: > KB = B >> 10 > MB = KB >> 10 = B >> 20 > GB = MB >> 10 = KB >> 20 = B >> 30 Although for readability I expect successive division by 1024 is clearer.
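The successive-division chain is easy to verify in any language; a small illustrative sketch (Python here rather than C#, and the helper name is invented):

```python
def to_units(free_bytes):
    """Convert a raw byte count to KB/MB/GB by dividing by 1024."""
    kb = free_bytes / 1024.0
    mb = kb / 1024.0
    gb = mb / 1024.0
    return kb, mb, gb

free = 50 * 1024 ** 3    # e.g. DriveInfo.AvailableFreeSpace for 50 GB free
kb, mb, gb = to_units(free)
print(gb)                       # 50.0

# The bit-shift form from the answer gives the same integer magnitudes:
print(free >> 10 == int(kb))    # True  (KB)
print(free >> 20 == int(mb))    # True  (MB)
print(free >> 30 == int(gb))    # True  (GB)
```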
XKCD has the [definite answer](http://xkcd.com/394/): ![Single, definitive standard for KB](https://imgs.xkcd.com/comics/kilobyte.png)
How to correctly convert filesize in bytes into mega or gigabytes?
[ "", "c#", "byte", "disk", "driveinfo", "megabyte", "" ]
In C#, what's the difference between ``` Assert.AreNotEqual ``` and ``` Assert.AreNotSame ```
Almost all the answers given here are correct, but it's probably worth giving an example: ``` public static string GetSecondWord(string text) { // Yes, an appalling implementation... return text.Split(' ')[1]; } string expected = "world"; string actual = GetSecondWord("hello world"); // Good: the two strings should be *equal* as they have the same contents Assert.AreEqual(expected, actual); // Bad: the two string *references* won't be the same Assert.AreSame(expected, actual); ``` `AreNotEqual` and `AreNotSame` are just inversions of `AreEqual` and `AreSame` of course. EDIT: A rebuttal to the [currently accepted answer](https://stackoverflow.com/questions/543263/whats-the-difference-between-assert-arenotequal-and-assert-arenotsame/543337#543337)... If you use `Assert.AreSame` with value types, they are boxed. In other words, it's equivalent to doing: ``` int firstNumber = 1; int secondNumber = 1; object boxedFirstNumber = firstNumber; object boxedSecondNumber = secondNumber; // There are overloads for AreEqual for various value types // (assuming NUnit here) Assert.AreEqual(firstNumber, secondNumber); // ... but not for AreSame, as it's not intended for use with value types Assert.AreSame(boxedFirstNumber, boxedSecondNumber); ``` Neither `firstNumber` nor `secondNumber` has an object value, because `int` is a value type. The reason the `AreSame` call will fail is because in .NET, boxing a value creates a new box each time. (In Java it sometimes doesn't - this has caught me out before.) Basically you should *never* use `AreSame` when comparing value types. When you're comparing *reference* types, use `AreSame` if you want to check for identical references; use `AreEqual` to check for equivalence under `Equals`. EDIT: Note that there *are* situations where NUnit doesn't just use `Equals` directly; it has built-in support for collections, where the elements in the collections are tested for equality. 
The claim in the answer that: > Using the example above changing the > int to string, AreSame and AreEqual > will return the same value. entirely depends on how the variables are initialized. If they use string literals, then yes, interning will take care of that. If, however, you use: ``` string firstString = 1.ToString(); string secondString = 1.ToString(); ``` then `AreSame` and `AreEqual` will almost certainly *not* return the same value. As for: > The general rule of thumb is to use > AreEqual on value types and AreSame on > reference types. I almost *never* want to check for reference identity. It's rarely useful to me. I want to check for *equivalence* which is what `AreEqual` checks for. (I'm not saying that `AreSame` shouldn't be there - it's a useful method, just far more rarely than `AreEqual`.)
Two things can be equal, but different objects. AreNotEqual checks the objects *values* via the equality test, while AreNotSame checks that they are not the same exact object. It is obvious why we would want to test that things AreNotEqual (we care about the values being tested); what about AreNotSame? The usefulness of this in testing is found when you have passed references around and want to make sure that after your shuffling is done that two references are still the same object. In a real world case, we use a lot of caching objects to mitigate round trips to the database. After an object has been handed off to the cache system, our unit tests ensure that in some cases we get back the same object (cache was valid) and in other cases we get back a *fresh* object (cache was invalidated). Note that AreNotEqual would not necessary suffice in this case. If the object had a new timestamp in the database, yet the *data* was not "different enough" to fail an equality test, AreNotEqual wouldn't notice that we refreshed the *object*.
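For what it's worth, the equality-versus-identity distinction these asserts encode shows up in most languages; in Python, for instance, it is exactly the difference between `==` and `is`:

```python
a = [1, 2, 3]
b = [1, 2, 3]
c = a

print(a == b)   # True:  equal contents        (AreEqual would pass)
print(a is b)   # False: two distinct objects  (AreNotSame would pass)
print(a is c)   # True:  the very same object  (AreSame would pass)
```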
What's the difference between Assert.AreNotEqual and Assert.AreNotSame?
[ "", "c#", ".net", "unit-testing", "testing", "assert", "" ]
I used this code to maintain scroll position and don't have a clue of what it means. If someone has the time, can you provide me with an step by step explanation of what it is doing. Here it is: ``` <script language="javascript" type="text/javascript"> var xPos, yPos; var prm = Sys.WebForms.PageRequestManager.getInstance(); function BeginRequestHandler(sender, args) { if ($get('<%=lstAuctions.ClientID %>') != null) { xPos = $get('<%=lstAuctions.ClientID %>').scrollLeft; yPos = $get('<%=lstAuctions.ClientID %>').scrollTop; } } function EndRequestHandler(sender, args) { if ($get('<%=lstAuctions.ClientID %>') != null) { $get('<%=lstAuctions.ClientID %>').scrollLeft = xPos; $get('<%=lstAuctions.ClientID %>').scrollTop = yPos; } } prm.add_beginRequest(BeginRequestHandler); prm.add_endRequest(EndRequestHandler); </script> ```
``` var xPos, yPos; // global variable declaration var prm = Sys.WebForms.PageRequestManager.getInstance(); // Some webforms javascript manager /* * Begin function with 2 arguments */ function BeginRequestHandler(sender, args) { // check if the element generated by .net with id 'lstAuctions.ClientID' exists if ($get('<%=lstAuctions.ClientID %>') != null) { // get its scroll left and top position and // assign it to the global variables xPos = $get('<%=lstAuctions.ClientID %>').scrollLeft; yPos = $get('<%=lstAuctions.ClientID %>').scrollTop; } } /* * this method gets executed last, it uses the * already set global variables to assign the old scrollpositions again */ function EndRequestHandler(sender, args) { if ($get('<%=lstAuctions.ClientID %>') != null) { // assign the previous scroll positions $get('<%=lstAuctions.ClientID %>').scrollLeft = xPos; $get('<%=lstAuctions.ClientID %>').scrollTop = yPos; } } // first function gets executed on the beginning of a request prm.add_beginRequest(BeginRequestHandler); // second function gets executed on the end of the request prm.add_endRequest(EndRequestHandler); ```
[Sys.WebForms.PageRequestManager](http://msdn.microsoft.com/en-us/library/bb398822.aspx) is an ASP.Net AJAX construct. Specifically in your code, there are some allocated variables (xPos,yPos,prm) and two defined functions (BeginRequestHandler,EndRequestHandler). At the end of the code are two function calls (prm.add\_beginRequest,prm.add\_endRequest) that are assigning those functions as event handlers. The $get calls are part of the library as a shortcut for getting data from the client-side. It's very much javascript under the covers, but it's just a syntactical implementation through the ASP.Net AJAX client-side library.
New to Javascript, can someone explain this code line by line?
[ "", "javascript", "" ]
In Javascript how can I tell if a checkbox has focus or not? I thought there would be a method or property called isfocused. But apparently not. By in focus I mean they've tabbed to it using the keyboard and at this point pressing space would check the box.
Create an event handler that is wired to the onfocus event. When it's called, set a global var to remember that it's got the focus. Write another one on the onblur event which clears the variable.
There is an onfocus event that fires when an element receives focus (and a matching onblur when it loses focus again). ``` <script type="text/javascript"> var isFocused = false; </script> <input type="checkbox" name="team" value="team" onfocus="isFocused = true;" onblur="isFocused = false;">Spurs<br> ```
In Javascript find if a checkbox is focused
[ "", "javascript", "dom", "checkbox", "" ]
This is a C++ disaster, check out this code sample: ``` #include <iostream> void func(const int* shouldnotChange) { int* canChange = (int*) shouldnotChange; *canChange += 2; return; } int main() { int i = 5; func(&i); std::cout << i; return 0; } ``` The output was 7! So, how can we make sure of the behavior of C++ functions, if it was able to change a supposed-to-be-constant parameter!? EDIT: I am not asking how can I make sure that **my** code is working as expected, rather I am wondering how to believe that someone else's function (for instance some function in some dll library) isn't going to change a parameter or posses some behavior...
Based on your edit, your question is "how can I trust 3rd party code not to be stupid?" The short answer is "you can't." If you don't have access to the source, or don't have time to inspect it, you can only trust the author to have written sane code. In your example, the author of the function declaration specifically claims that the code will not change the contents of the pointer by using the const keyword. You can either trust that claim, or not. There are ways of testing this, as suggested by others, but if you need to test large amounts of code, it will be very labour intensive. Perhaps moreso than reading the code. If you are working on a team and you have a team member writing stuff like this, then you can talk to them about it and explain why it is bad.
By writing sane code. If you write code you can't trust, then obviously your code won't be trustworthy. Similar stupid tricks are possible in pretty much any language. In C#, you can modify the code at runtime through reflection. You can inspect and change private class members. How do you protect against that? You don't, you just have to write code that behaves as you expect. Apart from that, write a unittest testing that the function does not change its parameter.
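To illustrate the point that similar tricks exist in pretty much any language: Python's "private" members are only a naming convention, so a caller can reach past them just as the C++ cast reaches past `const` (class and field names here are invented):

```python
class Account:
    def __init__(self):
        self.__balance = 100     # "private" via name mangling only

acct = Account()
# Nothing stops a determined (or careless) caller from reaching inside:
acct._Account__balance = 999
print(acct._Account__balance)    # 999
```

As with `const`, the declaration documents intent and catches accidents; it was never a security boundary against code that deliberately subverts it.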
How can I trust the behavior of C++ functions that declare const?
[ "", "c++", "memory-management", "" ]
The following code shows what I expect in Firefox and Chrome: a small white square in a big green rectangle. I don't see the small square in IE7. How can I make it appear? ``` <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <title>Insert title here</title> <script type="text/javascript"> <!-- function start() { var g_Div = document.getElementById("bigdiv"); var littleDiv = document.createElement('div'); littleDiv.setAttribute('style', 'position:absolute;'+ 'left:300px;'+ 'top:300px;'+ 'width:5px;'+ 'height:5px;'+ 'clip:rect(0pt,5px,5px,0pt);'+ 'background-color: rgb(255, 255, 255);'); g_Div.appendChild(littleDiv); } //--> </script> </head> <body> <div id="bigdiv" style="border: 1px solid ; margin: auto; height: 600px; width: 800px; background-color: green;" > </div> <script type="text/javascript"> <!-- start(); //--> </script> </body> </html> ```
This should do what you want, and should work across the major browsers: ``` function start() { var g_Div = document.getElementById("bigdiv"); var littleDiv = document.createElement('div'); littleDiv.style.background = 'rgb(255, 255, 255)'; littleDiv.style.width = '5px'; littleDiv.style.height = '5px'; littleDiv.style.left = '300px'; littleDiv.style.top = '300px'; littleDiv.style.position = 'absolute'; g_Div.appendChild(littleDiv); } ```
Use this approach to changing the style on the element:- ``` littleDiv.style.cssText = 'position:absolute;'+ 'left:300px;'+ 'top:300px;'+ 'width:5px;'+ 'height:5px;'+ 'clip:rect(0pt,5px,5px,0pt);'+ 'background-color: rgb(255, 255, 255)'; ```
displaying a javascript-generated div in IE7
[ "", "javascript", "html", "internet-explorer-7", "" ]
I have some software that is heavily dependent on MySQL, and is written in python without any class definitions. For performance reasons, and because the database is really just being used to store and retrieve large amounts of data, I'd like to convert this to an object-oriented python script that does not use the database at all. So my plan is to export the database tables to a set of files (not many -- it's a pretty simple database; it's big in that it has a lot of rows, but only a few tables, each of which has just two or three columns). Then I plan to read the data in, and have a set of functions which provide access to and operations on the data. My question is this: is there a preferred way to convert a set of database tables to classes and objects? For example, if I have a table which contains fruit, where each fruit has an id and a name, would I have a "CollectionOfFruit" class which contains a list of "Fruit" objects, or would I just have a "CollectionOfFruit" class which contains a list of tuples? Or would I just have a list of Fruit objects? I don't want to add any extra frameworks, because I want this code to be easy to transfer to different machines. So I'm really just looking for general advice on how to represent data that might more naturally be stored in database tables, in objects in Python. Alternatively, is there a good book I should read that would point me in the right direction on this?
If the data is a natural fit for database tables ("rectangular data"), why not convert it to sqlite? It's portable -- just one file to move the db around, and sqlite is available anywhere you have python (2.5 and above anyway).
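Trying this suggestion takes only a few lines with Python's bundled `sqlite3` module (table and column names invented; `:memory:` keeps the sketch self-contained, whereas a real port would pass a filename and get a single portable database file):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fruit (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO fruit (id, name) VALUES (?, ?)",
                 [(1, "apple"), (2, "pear"), (3, "quince")])

names = [row[0] for row in
         conn.execute("SELECT name FROM fruit ORDER BY id")]
print(names)   # ['apple', 'pear', 'quince']
conn.close()
```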
Generally you want your Objects to absolutely match your "real world entities". Since you're starting from a database, it's not always the case that the database has any real-world fidelity, either. Some database designs are simply awful. If your database has reasonable models for Fruit, that's where you start. Get that right first. A "collection" may -- or may not -- be an artificial construct that's part of the solution algorithm, not really a proper part of the problem. Usually collections *are* part of the problem, and you should design those classes, also. Other times, however, the collection is an artifact of having used a database, and a simple Python list is all you need. Still other times, the collection is actually a proper mapping from some unique key value to an entity, in which case, it's a Python dictionary. And sometimes, the collection is a proper mapping from some non-unique key value to some collection of entities, in which case it's a Python `collections.defaultdict(list)`. Start with the fundamental, real-world-like entities. Those get class definitions. Collections may use built-in Python collections or may require their own classes.
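The entity-first approach described above can be sketched as follows (class and attribute names are hypothetical, following the fruit example):

```python
import collections

class Fruit:
    """One row of the fruit table becomes one real-world entity."""
    def __init__(self, id, name):
        self.id = id
        self.name = name

# Unique key -> entity: a plain dict is the natural collection
fruit_by_id = {f.id: f for f in [Fruit(1, "apple"), Fruit(2, "pear")]}

# Non-unique key -> collection of entities: collections.defaultdict(list)
fruit_by_color = collections.defaultdict(list)
fruit_by_color["green"].append(fruit_by_id[2])
fruit_by_color["green"].append(fruit_by_id[1])

print(fruit_by_id[1].name)           # apple
print(len(fruit_by_color["green"]))  # 2
```

Only `Fruit` needed a class definition; both collections are built-in Python containers.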
Converting a database-driven (non-OO) python script into a non-database driven, OO-script
[ "", "python", "object", "" ]
I want to develop a process() method. The method takes some data in the form of a data class, and processes it. The data classes are similar, but slightly different. For example we have the following classes of data: processDataObject_A, processDataObject_B and processDataObject_C. Is it better to overload the method:

```
void process(processDataObject_A data)
{
    //Process processDataObject_A here
}

void process(processDataObject_B data)
{
    //Process processDataObject_B here
}

void process(processDataObject_C data)
{
    //Process processDataObject_C here
}
```

OR have the concrete data classes extend some Abstract Data Class, and pass that to the process method and then have the method check the type and act accordingly:

```
void process(AbstractProcessDataObject data)
{
    //Check for type here and do something
}
```

OR is there some better way to address it? Would the approach change if this were to be a Web Method? Thanks in advance
I would go with: ``` process (data) { data.doProcessing(); } ```
The fact that your methods return void leads me to believe that you may have your responsibilities turned around. I think it may be better to think about this as having each of your classes implement an interface, `IProcessable`, that defines a `Process` method. Then each class would know how to manipulate its own data. This, I think, is less coupled than having a class which manipulates data inside each object. Assuming all of these classes derive from the same base class you could put the pieces of the processing algorithm that are shared in the base class. This is slightly different than the case where you may have multiple algorithms that operate on identical data. If you need this sort of functionality then you may still want to implement the interface, but have the `Process` method take a strategy type parameter and use a factory to create an appropriate strategy based on its type. You'd end up having a strategy class for each supported algorithm and data class pair this way, but you'd be able to keep the code decoupled. I'd probably only do this if the algorithms were reasonably complex so that separating the code makes it more readable. If it's just a few lines that are different, using the switch statement on the strategy type would probably suffice. With regard to web methods, I think I'd have a different signature per class. Getting the data across the wire correctly will be much easier if the methods take concrete classes of the individual types so it knows how to serialize/deserialize it properly. Of course, on the back end the web methods could use the approach described above.

```
public interface IProcessable
{
    void Process();  // interface members have no body
}

public abstract class ProcessableBase : IProcessable
{
    public virtual void Process()
    {
        ... standard processing code...
    }
}

public class FooProcessable : ProcessableBase
{
    public override void Process()
    {
        base.Process();
        ... specific processing code
    }
}

...
IProcessable foo = new FooProcessable(); foo.Process(); ``` Implementing the strategy-based mechanism is a little more complex. Web interface, using data access objects ``` [WebService] public class ProcessingWebService { public void ProcessFoo( FooDataObject foo ) { // you'd need a constructor to convert the DAO // to a Processable object. IProcessable fooProc = new FooProcessable( foo ); fooProc.Process(); } } ```
Object Oriented Method Design options
[ "", "c#", "oop", "" ]
What is the best way to programmatically generate a GUID or UUID in C++ without relying on a platform-specific tool? I am trying to make unique identifiers for objects in a simulation, but can't rely on Microsoft's implementation as the project is cross-platform. Notes: * Since this is for a simulator, I don't really need cryptographic randomness. * It would be best if this is a 32 bit number.
If you can afford to use Boost, then there is a [UUID](http://www.boost.org/libs/uuid) library that should do the trick. It's very straightforward to use - check the documentation and [this answer](https://stackoverflow.com/a/3248017/61574).
On Linux: `man uuid`. On Windows: check out the UUID structure and the UuidCreate function on MSDN.

[edit] The function would appear like this:

```
extern "C"
{
#ifdef WIN32
#include <Rpc.h>
#else
#include <uuid/uuid.h>
#endif
}

std::string newUUID()
{
#ifdef WIN32
    UUID uuid;
    UuidCreate ( &uuid );

    unsigned char * str;
    UuidToStringA ( &uuid, &str );

    std::string s( ( char* ) str );

    RpcStringFreeA ( &str );
#else
    uuid_t uuid;
    uuid_generate_random ( uuid );
    char s[37];
    uuid_unparse ( uuid, s );
#endif
    return s;
}
```
Platform-independent GUID generation in C++?
[ "", "c++", "cross-platform", "guid", "uuid", "" ]
Is it possible to stop execution of a python script at any line with a command? Like ``` some code quit() # quit at this point some more code (that's not executed) ```
[sys.exit()](https://docs.python.org/2/library/sys.html#sys.exit) will do exactly what you want. ``` import sys sys.exit("Error message") ```
You could `raise SystemExit(0)` instead of going to all the trouble to `import sys; sys.exit(0)`.
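Both spellings raise `SystemExit`, which is why nothing after the call runs; a small demonstration that catches it to inspect the exit code:

```python
import sys

def run():
    print("before exit")
    sys.exit(0)          # equivalent to: raise SystemExit(0)
    print("never reached")  # unreachable

try:
    run()
except SystemExit as exc:
    exit_code = exc.code  # the value passed to sys.exit()

print(exit_code)  # 0
```

In a plain script you would not catch it; the interpreter exits with that status code.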
Programmatically stop execution of python script?
[ "", "python", "" ]
I have encountered many half-solutions to the task of returning XML in ASP.NET. I don't want to blindly copy & paste some code that happens to work most of the time, though; I want the *right* code, and I want to know *why* it's right. I want criticism; I want information; I want knowledge; I want understanding. Below are code fragments, in order of increasing complexity, representing some of the partial solutions I've seen, including some of the further questions each one causes, and which I'd like to have answered here. A thorough answer must address why we *must* have or *must not* have any of the following things, or else explain why it's irrelevant. * Response.Clear(); * Response.ContentType = "text/xml"; * Response.ContentEncoding = Encoding.UTF8; * Response.ContentEncoding = Encoding.UTF16; * Response.ContentType = "text/xml; charset=utf-8"; * Response.ContentType = "text/xml; charset=utf-16"; * Response.End() * Using an aspx with the front-file guts ripped out * Using an ashx file In the end, imagine you need to write the contents of a helper function like this: ``` ///<summary>Use this call inside your (Page_Xxx) method to write the ///xml to the web client. </summary> ///<remarks>See for https://stackoverflow.com/questions/543319/how-to-return-xml-in-asp-net ///for proper usage.</remarks> public static void ReturnXmlDocumentToWebClient( XmlDocument document, Page page) { ... } ``` --- Every solution I see starts with taking an empty aspx page, and trimming all the HTML out of the front file (which causes warnings in Visual Studio): ``` <%@ Page Language="C#" AutoEventWireup="true" CodeFile="GetTheXml.aspx.cs" Inherits="GetTheXml" %> ``` Next we use the `Page_Load` event to write to the output: ``` protected void Page_Load(object sender, EventArgs e) { String xml = "<foo>Hello, world!</foo>"; Response.Write(xml); } ``` --- Do we need to change the **ContentType** to **"text/xml"**? 
I.e.: ``` protected void Page_Load(object sender, EventArgs e) { String xml = "<foo>Hello, world!</foo>"; Response.ContentType = "text/xml"; Response.Write(xml); } ``` --- Do we need to call `Response.Clear` first? ``` protected void Page_Load(object sender, EventArgs e) { String xml = "<foo>Hello, world!</foo>"; Response.Clear(); Response.ContentType = "text/xml"; Response.Write(xml); } ``` Do we really need to call that? Doesn't `Response.Clear` make the prior step of making sure that the code in the front file was empty (not even a space or a carriage return) outside of the `<% ... %>` unnecessary? Does `Response.Clear` make it more robust, in case someone left a blank line or space in the code-front file? Is using ashx the same as a blank aspx main file, because it's understood that it's not going to output HTML? --- Do we need to call `Response.End`? I.e.: ``` protected void Page_Load(object sender, EventArgs e) { String xml = "<foo>Hello, world!</foo>"; Response.Clear(); Response.ContentType = "text/xml"; Response.Write(xml); Response.End(); } ``` What else could possibly happen after `Response.Write` that needs us to end the response **right now**? --- Is the content-type of `text/xml` sufficient, or should it instead be **text/xml; charset=utf-8**? ``` protected void Page_Load(object sender, EventArgs e) { String xml = "<foo>Hello, world!</foo>"; Response.Clear(); Response.ContentType = "text/xml; charset=utf-8"; Response.Write(xml); Response.End(); } ``` Or should it specifically **not** be that? Does having a charset in the content type, but not setting the property, screw up the server? Why not some other content type, e.g.: * UTF-8 * utf-16 * UTF-16 --- Should the charset be specified in `Response.ContentEncoding`? 
```
protected void Page_Load(object sender, EventArgs e)
{
    XmlDocument xml = GetXmlDocumentToShowTheUser();

    Response.Clear();
    Response.ContentType = "text/xml";
    Response.ContentEncoding = Encoding.UTF8;
    using (TextWriter textWriter = new StreamWriter(
            Response.OutputStream, Encoding.UTF8))
    {
        XmlTextWriter xmlWriter = new XmlTextWriter(textWriter);

        // Write XML using xmlWriter
        //TODO: How to do this?
    }
}
```

Note the use of `Response.OutputStream`, rather than `Response.Write`. Is this good? Bad? Better? Worse? Faster? Slower? More memory intensive? Less memory intensive?

---

I [read](http://www.deez.info/sengelha/2006/02/06/how-return-xml-from-aspx-in-aspnet-11/#comment-24670) that you should render

> the XML in the page's Render() method
> to avoid problems with chunking
> encountered when using Page_Load().

What is **chunking**? What are the problems with chunking, and how does using `Page_Render` eliminate them?

---

I don't want to write the contents of my `XmlDocument` object into a string and then write that because that wastes memory.
That is, any of these would be bad: ``` Response.Write(doc.ToString()); Response.Write(doc.InnerXml); xmlWrite.WriteString(doc.ToString()); xmlWrite.WriteString(doc.InnerXml); ``` --- **Similar Questions** [How to return XML in ASP.NET](https://stackoverflow.com/questions/374535/how-to-return-xml-in-aspnet) **References** [How Return XML From ASPX in ASP.NET 1.1](http://www.deez.info/sengelha/2006/02/06/how-return-xml-from-aspx-in-aspnet-11) [Writing XML output to an ASP.NET webpage](http://forums.asp.net/t/1298093.aspx) [How do you output XML from ASP.NET?](http://bytes.com/groups/net-asp/647876-how-do-you-output-xml-asp-net) [Creating an ASHX handler in ASP.NET](http://www.aspcode.net/Creating-an-ASHX-handler-in-ASPNET.aspx)
I've found the proper way to return XML to a client in ASP.NET. I think if I point out the wrong ways, it will make the right way more understandable. **Incorrect:** ``` Response.Write(doc.ToString()); ``` **Incorrect:** ``` Response.Write(doc.InnerXml); ``` **Incorrect:** ``` Response.ContentType = "text/xml"; Response.ContentEncoding = System.Text.Encoding.UTF8; doc.Save(Response.OutputStream); ``` **Correct:** ``` Response.ContentType = "text/xml"; //Must be 'text/xml' Response.ContentEncoding = System.Text.Encoding.UTF8; //We'd like UTF-8 doc.Save(Response.Output); //Save to the text-writer //using the encoding of the text-writer //(which comes from response.contentEncoding) ``` --- ## Use a TextWriter Do **not** use `Response.OutputStream` **Do** use `Response.Output` Both are streams, but `Output` is a [TextWriter](http://msdn.microsoft.com/en-us/library/system.io.textwriter.aspx). When an `XmlDocument` saves itself to a **TextWriter**, it will use the **encoding** specified by that TextWriter. The XmlDocument will automatically change the xml declaration node to match the encoding used by the TextWriter. e.g. in this case the XML declaration node: ``` <?xml version="1.0" encoding="ISO-8859-1"?> ``` would become ``` <?xml version="1.0" encoding="UTF-8"?> ``` This is because the TextWriter has been set to UTF-8. (More on this in a moment). As the TextWriter is fed character data, it will encode it with the byte sequences appropriate for its set encoding. **Incorrect**: ``` doc.Save(Response.OutputStream); ``` In this example the document is incorrectly saved to the OutputStream, which performs no encoding change, and may not match the response's content-encoding or the XML declaration node's specified encoding. **Correct** ``` doc.Save(Response.Output); ``` The XML document is correctly saved to a TextWriter object, ensuring the encoding is properly handled. --- ## Set Encoding The encoding given to the client in the header: ``` Response.ContentEncoding = ... 
```

must match the XML document's encoding:

```
<?xml version="1.0" encoding="..."?>
```

must match the actual encoding present in the byte sequences sent to the client. To make all three of these things agree, set the single line:

```
Response.ContentEncoding = System.Text.Encoding.UTF8;
```

When the encoding is set on the **Response** object, it sets the same encoding on the **TextWriter**. The encoding set on the TextWriter causes the **XmlDocument** to change the **xml declaration**:

```
<?xml version="1.0" encoding="UTF-8"?>
```

when the document is Saved:

```
doc.Save(someTextWriter);
```

---

## Save to the response Output

You do not want to save the document to a binary stream, or write a string:

**Incorrect:**

```
doc.Save(Response.OutputStream);
```

Here the XML is incorrectly saved to a binary stream. The final byte encoding sequence won't match the XML declaration, or the web-server response's content-encoding.

**Incorrect:**

```
Response.Write(doc.ToString());
Response.Write(doc.InnerXml);
```

Here the XML is incorrectly converted to a string, which does not have an encoding. The XML declaration node is not updated to reflect the encoding of the response, and the response is not properly encoded to match the response's encoding. Also, storing the XML in an intermediate string wastes memory.

You **don't** want to save the XML to a string, or stuff the XML into a string and `response.Write` a string, because that:

- doesn't follow the encoding specified
- doesn't set the XML declaration node to match
- wastes memory

**Do** use `doc.Save(Response.Output);`

Do **not** use `doc.Save(Response.OutputStream);`

Do **not** use `Response.Write(doc.ToString());`

Do **not** use `Response.Write(doc.InnerXml);`

---

## Set the content-type

The Response's ContentType must be set to `"text/xml"`. If not, the client will not know you are sending it XML.
## Final Answer

```
Response.Clear();             //Optional: if we've sent anything before
Response.ContentType = "text/xml";  //Must be 'text/xml'
Response.ContentEncoding = System.Text.Encoding.UTF8; //We'd like UTF-8
doc.Save(Response.Output);    //Save to the text-writer
                              //using the encoding of the text-writer
                              //(which comes from response.contentEncoding)
Response.End();               //Optional: will end processing
```

# Complete Example

Rob Kennedy had the good point that I failed to include the start-to-finish example.

**GetPatronInformation.ashx**:

```
<%@ WebHandler Language="C#" Class="Handler" %>

using System;
using System.Web;
using System.Xml;
using System.IO;
using System.Data.Common;

//Why a "Handler" and not a full ASP.NET form?
//Because many people online criticized my original solution
//that involved the aspx (and cutting out all the HTML in the front file),
//noting the overhead of a full viewstate build-up/tear-down and processing,
//when it's not a web-form at all. (It's pure processing.)
public class Handler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        //GetXmlToShow will look for parameters from the context
        XmlDocument doc = GetXmlToShow(context);

        //Don't forget to set a valid xml type.
        //If you leave the default "text/html", the browser will refuse to display it correctly
        context.Response.ContentType = "text/xml";

        //We'd like UTF-8.
        context.Response.ContentEncoding = System.Text.Encoding.UTF8;
        //context.Response.ContentEncoding = System.Text.Encoding.Unicode; //But no reason you couldn't use UTF-16
        //context.Response.ContentEncoding = System.Text.Encoding.UTF32; //Or UTF-32
        //context.Response.ContentEncoding = System.Text.Encoding.GetEncoding(500); //Or EBCDIC (500 is the code page for IBM EBCDIC International)
        //context.Response.ContentEncoding = System.Text.Encoding.ASCII; //Or ASCII
        //context.Response.ContentEncoding = System.Text.Encoding.GetEncoding(28591); //Or ISO8859-1
        //context.Response.ContentEncoding = System.Text.Encoding.GetEncoding(1252); //Or Windows-1252 (a version of ISO8859-1, but with 18 useful characters where they were empty spaces)

        //Tell the client don't cache it (it's too volatile)
        //Commenting out NoCache allows the browser to cache the results (so they can view the XML source)
        //But leaves the possibility that the browser might not request a fresh copy
        //context.Response.Cache.SetCacheability(HttpCacheability.NoCache);

        //And now we tell the browser that it expires immediately, and the cached copy you have should be refreshed
        context.Response.Expires = -1;
        context.Response.Cache.SetAllowResponseInBrowserHistory(true); //"works around an Internet Explorer bug"

        doc.Save(context.Response.Output); //doc saves itself to the textwriter, using the encoding of the text-writer (which comes from response.contentEncoding)

        #region Notes
        /*
         * 1. Use Response.Output, and NOT Response.OutputStream.
         *    Both are streams, but Output is a TextWriter.
         *    When an XmlDocument saves itself to a TextWriter, it will use the encoding
         *    specified by the TextWriter. The XmlDocument will automatically change any
         *    XML declaration node, i.e.:
         *       <?xml version="1.0" encoding="ISO-8859-1"?>
         *    to match the encoding used by the Response.Output's encoding setting
         * 2. The Response.Output TextWriter's encoding setting comes from the
         *    Response.ContentEncoding value.
         * 3. Use doc.Save, not Response.Write(doc.ToString()) or Response.Write(doc.InnerXml)
         * 4. You DON'T want to save the XML to a string, or stuff the XML into a string
         *    and response.Write that, because that
         *    - doesn't follow the encoding specified
         *    - wastes memory
         *
         * To sum up: by Saving to a TextWriter: the XML Declaration node, the XML contents,
         * and the HTML Response content-encoding will all match.
         */
        #endregion Notes
    }

    private XmlDocument GetXmlToShow(HttpContext context)
    {
        //Use context.Request to get the account number they want to return
        //GET /GetPatronInformation.ashx?accountNumber=619

        //Or since this is sample code, pull XML out of your rear:
        XmlDocument doc = new XmlDocument();
        doc.LoadXml("<Patron><Name>Rob Kennedy</Name></Patron>");
        return doc;
    }

    public bool IsReusable { get { return false; } }
}
```
Ideally you would use an ashx to send XML, although I do allow code in an ASPX to intercept normal execution.

```
Response.Clear()
```

I don't use this; if you're not sure whether you've already dumped anything into the response, go find it and get rid of it.

```
Response.ContentType = "text/xml"
```

Definitely; a common client will not accept the content as XML without this content type present.

```
Response.Charset = "UTF-8";
```

Let the response class handle building the content type header properly. Use UTF-8 unless you have a really, really good reason not to.

```
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.Cache.SetAllowResponseInBrowserHistory(true);
```

If you don't send cache headers some browsers (namely IE) will cache the response, so subsequent requests will not necessarily come to the server. You also need to AllowResponseInBrowser if you want this to work over HTTPS (due to yet another bug in IE).

To send the content of an XmlDocument, don't use:

```
dom.Save(Response.OutputStream);
```

use:

```
dom.Save(Response.Output);
```

Just be sure the encodings match (another good reason to use UTF-8). The `XmlDocument` object will automatically adjust its embedded `encoding="..."` declaration to that of the `Response` (e.g. `UTF-8`).

```
Response.End()
```

If you really have to in an ASPX, but it's a bit drastic; in an ASHX don't do it.
How to return XML in ASP.NET?
[ "", "c#", "asp.net", "xml", "" ]
Two Questions: 1. Is there any way to write cross platform programs on Microsoft Visual Studio? 2. If there isn't then could I write a C# application on VS2008 and recompile it with MonoDevelop and have it work?
1 - I don't think so. Not without something like Mono.

2 - Yes you can, but Mono doesn't cover the whole framework - they are working on it. The best thing to do is check with the Mono Migration Analyzer. The Mono Migration Analyzer (MoMA) tool helps you identify issues you may have on Mono - <http://mono-project.com/MoMA>. I have found most of my .NET 2.0 applications can be converted, but you may need some tweaks.
You can always use C++ and Qt. Soon Qt will be released under the LGPL license (from version 4.5), which will give some more freedom. The only limit of using the free Qt license is that you don't get integration with VS. However this can be handled by using e.g. CMake (which will generate VS solution files).
Cross platform programming on Windows
[ "", "c#", ".net", "mono", "cross-platform", "" ]
Using C# (vs2005) I need to copy a table from one database to another. Both database engines are SQL Server 2005. For the remote database, the source, I only have execute access to a stored procedure to get the data I need to bring locally. The local database I have more control over as it's used by the [asp.net] application which needs a local copy of this remote table. We would like it local for easier lookup and joins with other tables, etc. Could you please explain to me an efficient method of copying this data to our local database. The local table can be created with the same schema as the remote one, if it makes things simpler. The remote table has 9 columns, none of which are identity columns. There are approximately 5400 rows in the remote table, and this number grows by about 200 a year. So not a quickly changing table.
The Bulk Copy feature of ADO.NET might help you; take a look at these:

[MSDN - Multiple Bulk Copy Operations (ADO.NET)](http://msdn.microsoft.com/en-us/library/s4s223c6.aspx)

[An example article](http://www.c-sharpcorner.com/UploadFile/mahesh/BulckCopyAdoNet2008192005135138PM/BulckCopyAdoNet20.aspx)
Perhaps SqlBulkCopy; use SqlCommand.ExecuteReader to get the reader that you use in the call to SqlBulkCopy.WriteToServer. This is the same as bulk-insert, so very quick. It should look *something* like (untested); ``` using (SqlConnection connSource = new SqlConnection(csSource)) using (SqlCommand cmd = connSource.CreateCommand()) using (SqlBulkCopy bcp = new SqlBulkCopy(csDest)) { bcp.DestinationTableName = "SomeTable"; cmd.CommandText = "myproc"; cmd.CommandType = CommandType.StoredProcedure; connSource.Open(); using(SqlDataReader reader = cmd.ExecuteReader()) { bcp.WriteToServer(reader); } } ```
Copy from one database table to another C#
[ "", "c#", "sql-server", "ado.net", "" ]
I have a Java Applet that I'm making some edits to and am running into performance issues. More specifically, the applet generates an image which I need to export to the client's machine. This is really at the proof-of-concept stage so bear with me. For right now, the image is exported to the client's machine at a pre-defined location (this will be replaced with a save dialog or something in the future). However, the process takes nearly 15 seconds for a 32kb file. I've done some 'shoot-from-the-hip' profiling where I have printed messages to the console at logical intervals throughout the method in question. I've found, to my surprise, that the bottleneck appears to be with the actual data stream writing process, not the jpeg encoding. **KEEP IN MIND THAT I ONLY HAVE A BASIC KNOWLEDGE OF JAVA AND ITS METHODS** So go slow :p - I'm mainly looking for suggestions to solve the problem rather than the solution itself. Here is the block of code where the magic happens:

```
ByteArrayOutputStream jpegOutput = new ByteArrayOutputStream();
JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder(jpegOutput);
encoder.encode(biFullView);
byte[] imageData = jpegOutput.toByteArray();

String myFile="C:" + File.separator + "tmpfile.jpg";
File f = new File(myFile);

try {
    dos = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(myFile),512));
    dos.writeBytes(byteToString(imageData));
    dos.flush();
    dos.close();
} catch (SecurityException ee) {
    System.out.println("writeFile: caught security exception");
} catch (IOException ioe) {
    System.out.println("writeFile: caught i/o exception");
}
```

Like I mentioned, using System.out.println() I've narrowed the performance bottleneck to the DataOutputStream block. Using a variety of machines with varying hardware stats seems to have little effect on the overall performance. Any pointers/suggestions/direction would be much appreciated.
EDIT: As requested, byteToString(): ``` public String byteToString(byte[] data){ String text = new String(); for ( int i = 0; i < data.length; i++ ){ text += (char) ( data[i] & 0x00FF ); } return text; } ```
If you don't need the image data byte array you can encode directly to the file: ``` String myFile="C:" + File.separator + "tmpfile.jpg"; File f = new File(myFile); FileOutputStream fos = null; try { fos = new FileOutputStream(f); JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder( new BufferedOutputStream(fos)); encoder.encode(biFullView); } catch (SecurityException ee) { System.out.println("writeFile: caught security exception"); } catch (IOException ioe) { System.out.println("writeFile: caught i/o exception"); }finally{ if(fos != null) fos.close(); } ``` If you need the byte array to perform other operations it's better to write it directly to the FileOutputStream: ``` //... fos = new FileOutputStream(myFile)); fos.write(imageData, 0, imageData.length); //... ```
You might want to take a look at [`ImageIO`](http://java.sun.com/javase/6/docs/api/javax/imageio/ImageIO.html). And I think the reason for the performance problem is the looping in `byteToString`. You *never* want to do a concatenation in a loop. You could use the `String(byte[])` constructor instead, but you don't really need to be turning the bytes into a string anyway.
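Either way, the string detour can be dropped entirely and the bytes written directly; a minimal sketch (the byte values are stand-ins for the encoder's output, and a temp file replaces the hard-coded path):

```java
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class DirectByteWrite {
    public static void main(String[] args) throws IOException {
        // Stand-in for the encoded image; in the applet this is jpegOutput.toByteArray()
        byte[] imageData = {(byte) 0xFF, (byte) 0xD8, (byte) 0xFF, (byte) 0xD9};

        File f = File.createTempFile("tmpfile", ".jpg");
        f.deleteOnExit();
        try (BufferedOutputStream out =
                 new BufferedOutputStream(new FileOutputStream(f))) {
            out.write(imageData); // raw bytes; no per-character conversion
        }
        System.out.println(f.length()); // 4
    }
}
```

This avoids the O(n²) string concatenation in `byteToString` and the char-to-byte round trip entirely.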
Java Applet - Image Export Performance Issues
[ "", "java", "image-processing", "applet", "" ]
What are the common misuse of using [STL](http://en.wikipedia.org/wiki/Standard_Template_Library) containers with iterators?
Forgetting that iterators are quite often invalidated if you change the container by inserting or erasing container members. For many great tips on using STL I **highly** recommend Scott Meyers's book "Effective STL" ([sanitised Amazon link](https://rads.stackoverflow.com/amzn/click/com/0201749629))
The end-of-range check should use `!=` and not `<`, since most iterator types don't support `<` (only random-access iterators do) and the ordering of the underlying pointers isn't guaranteed. Example:

```
for(it = list.begin(); it != list.end(); ++it)
{
    // do stuff
}
```
What are the common misuse of using STL containers with iterators?
[ "", "c++", "stl", "iterator", "" ]
I've got a doozy of a problem here. I'm aiming to build a framework to allow for the integration of different traffic simulation models. This integration is based upon the sharing of link connectivities, link costs, and vehicles between simulations. To make a distributed simulation, I plan to have a 'coordinator' (star topology). All participating simulations simply register with it, and talk only to the coordinator. The coordinator then coordinates the execution of various tasks between each simulation. A quick example of a distribution problem, is when one simulation is 'in charge' of certain objects, like a road. And another is 'in charge' of other roads. However, these roads are interconnected (and hence, we need synchronisation between these simulations, and need to be able to exchange data / invoke methods remotely). I've had a look at RMI and am thinking it may be suited for this task. (To abstract out having to create an over-wire signalling discipline). Is this sane? The issue here, is that simulation participants need to centralize *some* of their data storage in the 'coordinator' to ensure explicit synchronisation between simulations. Furthermore, some simulations may require components or methods from other simulations. (Hence the idea of using RMI). My basic approach is to have the 'coordinator' run a giant RMI registry. And every simulation simply looks up everything in the registry, ensuring that the correct objects are used at each step. Anyone have any tips for heading down this path?
You may want to check out [Hazelcast](http://www.hazelcast.com) also. Hazelcast is an open source transactional, distributed/partitioned implementation of queue, topic, map, set, list, lock and executor service. It is super easy to work with; just add hazelcast.jar into your classpath and start coding. Almost no configuration is required. If you are interested in executing your Runnable, Callable tasks in a distributed fashion, then please check out Distributed Executor Service documentation at <http://code.google.com/docreader/#p=hazelcast> [Hazelcast](http://www.hazelcast.com) is released under Apache license and enterprise grade support is also available.
Is this sane? IMHO no. And I'll tell you why. But first I'll add the disclaimer that this is a complicated topic so any answer has to be viewed as barely scratching the surface. First, instead of repeating myself I'll point you to a [summary of Java grid/cluster technologies](https://stackoverflow.com/questions/383920/what-is-the-best-library-for-java-to-grid-cluster-enable-your-application/383929#383929) that I wrote awhile ago. It's a mostly complete list. The star topology is "natural" for a "naive" (I don't mean that in a bad way) implementation because point-to-point is simple and centralizing key controller logic is also simple. It is however not fault-tolerant. It introduces scalability problems and a single bottleneck. It introduces communication inefficiencies (namely the points communicate via a two-step process through the center). What you really want for this is probably a cluster (rather than a data/compute grid) solution and I'd suggest you look at [Terracotta](http://www.terracotta.org/). Ideally you'd look at [Oracle Coherence](http://www.oracle.com/technology/products/coherence/index.html) but it's no doubt expensive (compared to free). It is a fantastic product though. These two products can be used a number of ways but the core of both is to treat a cache like a distributed map. You put things in, you take things out and you fire off code that alters the cache. Coherence (with which I'm more familiar) in this regard scales fantastically well. These are more "server"-based products, though, for a true cluster. If you're looking at a more distributed model then perhaps you should be looking at more of an SOA based approach.
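For the registry-based RMI approach itself, a minimal in-process sketch (the interface, names, and port are hypothetical; a real deployment would run the registry on the coordinator host and the lookup in a separate simulation process):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical remote interface a road-owning simulation might export
interface RoadSegment extends Remote {
    double getLinkCost() throws RemoteException;
}

class RoadSegmentImpl implements RoadSegment {
    public double getLinkCost() { return 2.5; }
}

public class CoordinatorSketch {
    public static void main(String[] args) throws Exception {
        // Coordinator side: host the registry and export one simulation's road
        Registry registry = LocateRegistry.createRegistry(2099);
        RoadSegment stub =
            (RoadSegment) UnicastRemoteObject.exportObject(new RoadSegmentImpl(), 0);
        registry.rebind("sim1/road42", stub);

        // Participant side: look up the remote object and invoke it over RMI
        RoadSegment road = (RoadSegment)
            LocateRegistry.getRegistry("localhost", 2099).lookup("sim1/road42");
        System.out.println(road.getLinkCost());

        System.exit(0); // the exported object keeps a non-daemon thread alive
    }
}
```

Note that every `getLinkCost()` call here is a network round trip, which is exactly the chattiness the clustered-cache products above are designed to avoid.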
Java - Distributed Programming, RMI?
[ "", "java", "distributed", "rmi", "simulation", "traffic-simulation", "" ]
I've always been of the opinion that large switch statements are a symptom of bad OOP design. In the past, I've read articles that discuss this topic and they have provided alternative OOP-based approaches, typically based on polymorphism to instantiate the right object to handle the case. I'm now in a situation that has a monstrous switch statement based on a stream of data from a TCP socket, in which the protocol consists of basically a newline-terminated command, followed by lines of data, followed by an end marker. The command can be one of 100 different commands, so I'd like to find a way to reduce this monster switch statement to something more manageable. I've done some googling to find the solutions I recall, but sadly, Google has become a wasteland of irrelevant results for many kinds of queries these days. Are there any patterns for this sort of problem? Any suggestions on possible implementations? One thought I had was to use a dictionary lookup, matching the command text to the object type to instantiate. This has the nice advantage of merely creating a new object and inserting a new command/type in the table for any new commands. However, this also has the problem of type explosion. I now need 100 new classes, plus I have to find a way to interface them cleanly to the data model. Is the "one true switch statement" really the way to go? I'd appreciate your thoughts, opinions, or comments.
You may get some benefit out of a [Command Pattern](http://c2.com/cgi/wiki?CommandPattern). For OOP, you may be able to collapse several similar commands each into a single class, if the behavior variations are small enough, to avoid a complete class explosion (yeah, I can hear the OOP gurus shrieking about that already). However, if the system is already OOP, and each of the 100+ commands is truly unique, then just make them unique classes and take advantage of inheritance to consolidate the common stuff. If the system is not OOP, then I wouldn't add OOP just for this... you can easily use the Command Pattern with a simple dictionary lookup and function pointers, or even dynamically generated function calls based on the command name, depending on the language. Then you can just group logically associated functions into libraries that represent a collection of similar commands to achieve manageable separation. I don't know if there's a good term for this kind of implementation... I always think of it as a "dispatcher" style, based on the MVC-approach to handling URLs.
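The dictionary-plus-function-pointers dispatch described above fits in a few lines. Here is a sketch (Python used for brevity; the thread's actual protocol isn't shown, so the `LOGIN`/`QUIT` command names and handler bodies are made up):

```python
# Map each protocol command name to a handler function instead of a
# 100-case switch. Command names and handlers here are hypothetical.
def handle_login(data_lines):
    return "logged in as " + data_lines[0]

def handle_quit(data_lines):
    return "bye"

HANDLERS = {
    "LOGIN": handle_login,
    "QUIT": handle_quit,
    # ...one entry per protocol command
}

def dispatch(command, data_lines):
    handler = HANDLERS.get(command)
    if handler is None:
        raise ValueError("unknown command: " + command)
    return handler(data_lines)
```

Registering a new command is one dictionary entry, and logically related handlers can live together in one module — which is the "dispatcher"-style grouping described above.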
I see having **two** switch statements as a symptom of non-OO design, where the switch-on-enum-type might be replaced with multiple types which provide different implementations of an abstract interface; for example, the following ...

```
switch (eFoo)
{
    case Foo.This:
        eatThis();
        break;
    case Foo.That:
        eatThat();
        break;
}

switch (eFoo)
{
    case Foo.This:
        drinkThis();
        break;
    case Foo.That:
        drinkThat();
        break;
}
```

... should perhaps be rewritten as ...

```
interface IAbstract
{
    void eat();
    void drink();
}

class This : IAbstract
{
    public void eat() { ... }
    public void drink() { ... }
}

class That : IAbstract
{
    public void eat() { ... }
    public void drink() { ... }
}
```

However, **one** switch statement **isn't**, IMO, such a strong indicator that the switch statement ought to be replaced with something else.
Large Switch statements: Bad OOP?
[ "", "c#", "oop", "design-patterns", "" ]
I'm preparing to go to a computer science contest by completing problems from past contests. Most of them are pretty easy, but this one is bugging me...it seems simple but I'm just not able to do it. If you have a string of ones and zeros: ``` 100111010001111100101010 ``` What would be the code to take that as an input and then output this: ``` 1:1 2:0 3:1 1:0 1:1 3:0 5:1 2:0 1:1 1:0 1:1 1:0 ``` Where the digit to the left of each colon is the number of times the digit after the colon appears. So, another example...inputting: ``` 1100011 ``` Would output: ``` 2:1 3:0 2:1 ``` According to the problem this is similar to the algorithm used to compress fax transmissions. An answer in Java would be best, but all I'm really looking for is pseudocode or even thoughts on how to do it. Thanks in advance.
This is called Run-Length Encoding (RLE) and is used in a number of things (such as the Windows Bitmap file format) to provide very basic compression — especially if the original includes long runs of the same value, like a bitmap or fax containing stretches of the same colour.

```
int[] array = { ........ }; // your values...
for ( int i=0; i < array.Length; i++ )
{
    int count = 1;
    int value = array[i];
    // Consume until different..
    while ( i+1 < array.Length && array[i] == array[i+1] )
    {
        count++;
        i++;
    }
    Console.WriteLine("{0}:{1}", count, value);
}

// OR, as suggested by @jon [done in my head, so could probably be improved a lot...]
int count = 0;
int oldValue = -1;
for ( int i=0; i<array.Length; i++ )
{
    int newValue = array[i];
    count = ( newValue != oldValue ) ? 1 : count+1;
    if ( i+1 >= array.Length || array[i+1] != newValue)
    {
        Console.WriteLine("{0}:{1}", count, newValue);
    }
    oldValue = newValue;
}
```
Just as a thought: why would you bother with the number on the right? It will always alternate between 1 and 0 won't it, so just assume it starts with 1 and encode an initial 0 if the actual sequence starts with 0. In other words, you'd end up with: 1 2 3 1 1 3 5 2 1 1 1 1 But basically you need to keep track of "what am I currently looking at?" and "how many of them have I seen"? If it changes, write out what you've been looking at and the count, and then update "what I'm looking at" to the new value and the count to 1, then keep going. Don't forget to write out the last value at the end of the data as well. (I haven't given pseudocode or Java as I think you'll learn more by taking small hints than being presented with working code. If you need further hints though, just say.)
Programming Problem - Fax Compression
[ "", "java", "algorithm", "binary", "compression", "fax", "" ]
I was wondering if it is possible, as my 5 minutes of experimentation proved fruitless. I hoped it would be as easy as: ``` T Identity<T>(T t) { return t; } ``` But this fails to compile on generic methods taking Func parameters, e.g. `OrderBy`. Even specifying type parameters (which is exactly what I want to avoid!), it fails to compile. Next I tried something I thought would work: ``` Func<T, R> MakeIdentity<T, R>() { return (T t) => (R)(object)t; } ``` Also no go :( (this compiles when applying type parameters, again, not what I want) Has anyone had luck making such a thing? UPDATE: please don't say: x => x, I know that, it's obvious! I am asking for a function, not an expression :) UPDATE 2: When I refer to identity, I mean in the functional sense, where the function simply returns the same object that you passed to it. It is probably in every functional language I have come across, but those do not use static typing. I am wondering how to do this (if possible) with generics. Just for fun! UPDATE 3: Here's a partial 'solution' based on the 2nd idea: ``` Expression<Func<T, T>> MakeIdentity<T>() { return t => t; } void Foo(string[] args) { var qargs = args.AsQueryable(); var q = qargs.OrderBy(MakeIdentity<string>()); ... } ``` I don't think anything more than this will be possible.
Type inference will not work, since the host method and the input method are both generic. To do this you must write ``` myList.OrderBy<int, int>(Identity); ``` Or ``` myList.OrderBy((Func<int, int>)Identity); ```
This works for me in places I've needed it so far. ``` internal class IdentityFunction<TSource> { public static Func<TSource, TSource> Instance { get { return x => x; } } } OrderBy(IdentityFunction<Foo>.Instance) ```
Generic identity function for use with type inference
[ "", "c#", "generics", "identity", "functional-programming", "type-inference", "" ]
I have a base class which implements the == operator. I want to write another class, inheriting the base class, and which should reimplement the == operator. Here is some sample code :

```
#include <iostream>
#include <string>

class Person
{
public:
    Person(std::string Name)
    {
        m_Name = Name;
    };

    bool operator==(const Person& rPerson)
    {
        return m_Name == rPerson.m_Name;
    }

private:
    std::string m_Name;
};

class Employee : public Person
{
public:
    Employee(std::string Name, int Id) : Person(Name)
    {
        m_Id = Id;
    };

    bool operator==(const Employee& rEmployee)
    {
        return (Person::operator==(rEmployee)) && (m_Id == rEmployee.m_Id);
    }

private:
    int m_Id;
};

int main()
{
    Employee* pEmployee1 = new Employee("Foo" , 1);
    Employee* pEmployee2 = new Employee("Foo" , 2);
    if (*pEmployee1 == *pEmployee2)
    {
        std::cout << "same employee\n";
    }
    else
    {
        std::cout << "different employee\n";
    }
    Person* pPerson1 = pEmployee1;
    Person* pPerson2 = pEmployee2;
    if (*pPerson1 == *pPerson2)
    {
        std::cout << "same person\n";
    }
    else
    {
        std::cout << "different person\n";
    }
}
```

This sample code gives the following result : ``` different employee same person ``` Where I would like, even when handling Person\* pointers, to make sure they are different. How am I supposed to solve this problem ? Thanks !
What you want to do is essentially "virtualize" the comparison operator. ~~Since operators cannot be virtual~~ (operators can be `virtual`), you will need to delegate it to something else. Here's one possible solution.

```
class Person
{
public:
    /* ... */
    bool operator==(const Person& rhs)
    {
        return m_Name == rhs.m_Name && this->doCompare(rhs);
    }
private:
    virtual bool doCompare(const Person& rhs) = 0;
};

class Employee : public Person
{
    /* ... */
private:
    virtual bool doCompare(const Person& rhs)
    {
        bool bRetval = false;
        const Employee* pRHSEmployee = dynamic_cast<const Employee*>(&rhs);
        if (pRHSEmployee)
        {
            bRetval = m_Id == pRHSEmployee->m_Id;
        }
        return bRetval;
    }
};
```

The question didn't make clear whether Person needs to be a concrete class. If so, you can make `doCompare` not pure-virtual, and implement it to return true. This also uses RTTI, which you may or may not be happy with.
Add a virtual function int Compare(const Person& rPerson) and use that in your operators
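For contrast, here is the same Person/Employee comparison in a language where equality methods dispatch dynamically — a Python sketch, not a fix for the C++ code, but it shows the behavior the virtual-compare approaches above are reconstructing:

```python
class Person:
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        # Dispatch is dynamic: the most-derived __eq__ runs even when
        # the caller only "knows" it holds a Person.
        return isinstance(other, Person) and self.name == other.name

class Employee(Person):
    def __init__(self, name, emp_id):
        super().__init__(name)
        self.emp_id = emp_id

    def __eq__(self, other):
        return (super().__eq__(other)
                and isinstance(other, Employee)
                and self.emp_id == other.emp_id)
```

Here two employees named "Foo" with different ids compare unequal no matter what static type the caller thinks it has — exactly the "different person" result the asker wanted.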
implementing operator== when using inheritance
[ "", "c++", "" ]
In management studio you can see the owner under properties but it won't let you change it. My guess is there's some stored procedure to change it and you can't do it through the gui.
Surprisingly, it's called [`sp_changedbowner`](http://msdn.microsoft.com/library/ms178630.aspx). You can actually change it in `SQL Server Management Studio` under `Database / Properties / Files`
In addition to using `SSMS` GUI, you can also use `ALTER AUTHORIZATION` or alternately use `sp_changedbowner` statement. ``` ALTER AUTHORIZATION ON DATABASE::MyDatabaseName TO NewOwner; GO ``` Please note `sp_changedbowner` is deprecated from `SQL Server 2012`.
How do you change the owner of a database in sql?
[ "", "sql", "sql-server", "" ]
**Without**: * MFC * ATL How can I use [`FormatMessage()`](http://msdn.microsoft.com/en-us/library/ms679351.aspx) to get the error text for a `HRESULT`? ``` HRESULT hresult = application.CreateInstance("Excel.Application"); if (FAILED(hresult)) { // what should i put here to obtain a human-readable // description of the error? exit (hresult); } ```
Here's the proper way to get an error message back from the system for an `HRESULT` (named hresult in this case, or you can replace it with `GetLastError()`): ``` LPTSTR errorText = NULL; FormatMessage( // use system message tables to retrieve error text FORMAT_MESSAGE_FROM_SYSTEM // allocate buffer on local heap for error text |FORMAT_MESSAGE_ALLOCATE_BUFFER // Important! will fail otherwise, since we're not // (and CANNOT) pass insertion parameters |FORMAT_MESSAGE_IGNORE_INSERTS, NULL, // unused with FORMAT_MESSAGE_FROM_SYSTEM hresult, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), (LPTSTR)&errorText, // output 0, // minimum size for output buffer NULL); // arguments - see note if ( NULL != errorText ) { // ... do something with the string `errorText` - log it, display it to the user, etc. // release memory allocated by FormatMessage() LocalFree(errorText); errorText = NULL; } ``` The key difference between this and David Hanak's answer is the use of the `FORMAT_MESSAGE_IGNORE_INSERTS` flag. MSDN is a bit unclear on how insertions should be used, but [Raymond Chen notes that you should never use them](https://devblogs.microsoft.com/oldnewthing/20071128-00/?p=24353) when retrieving a system message, as you've no way of knowing which insertions the system expects. FWIW, if you're using Visual C++ you can make your life a bit easier by using the [`_com_error`](http://msdn.microsoft.com/en-us/library/0ye3k36s.aspx) class: ``` { _com_error error(hresult); LPCTSTR errorText = error.ErrorMessage(); // do something with the error... //automatic cleanup when error goes out of scope } ``` Not part of MFC or ATL directly as far as I'm aware.
Keep in mind that you cannot do the following: ``` { LPCTSTR errorText = _com_error(hresult).ErrorMessage(); // do something with the error... //automatic cleanup when error goes out of scope } ``` As the class is created and destroyed on the stack leaving errorText to point to an invalid location. In most cases this location will still contain the error string, but that likelihood falls away fast when writing threaded applications. So ***always*** do it as follows as answered by Shog9 above: ``` { _com_error error(hresult); LPCTSTR errorText = error.ErrorMessage(); // do something with the error... //automatic cleanup when error goes out of scope } ```
How should I use FormatMessage() properly in C++?
[ "", "c++", "windows", "error-handling", "formatmessage", "" ]
Is there a way to do the opposite of `String.Split` in .Net? That is, to combine all the elements of an array with a given separator. Taking `["a", "b", "c"]` and giving `"a b c"` (with a separator of `" "`). **UPDATE:** I found the answer myself. It is the `String.Join` method.
Found the answer. It's called [String.Join](http://msdn.microsoft.com/en-us/library/System.String.Join(v=vs.110).aspx "MSDN page for String.Join").
You can use [`String.Join`](https://msdn.microsoft.com/en-us/library/System.String.Join(v=vs.110).aspx): ``` string[] array = new string[] { "a", "b", "c" }; string separator = " "; string joined = String.Join(separator, array); // "a b c" ``` Though more verbose, you can also use a [`StringBuilder`](https://msdn.microsoft.com/en-us/library/system.text.stringbuilder(v=vs.110).aspx) approach: ``` StringBuilder builder = new StringBuilder(); if (array.Length > 0) { builder.Append(array[0]); } for (var i = 1; i < array.Length; ++i) { builder.Append(separator); builder.Append(array[i]); } string joined = builder.ToString(); // "a b c" ```
Opposite of String.Split with separators (.net)
[ "", "c#", ".net", "arrays", "string", "" ]
I am trying to alter style at print-time: Is there an event in javascript that you can listen for for when file>>print is called? What is it? Also - is there a handler for when printing is finished? What is it? or if there is a better way to do this with some other means, such as style sheets, how do you do that?
**Different Style Sheets** You can specify a different stylesheet for printing. ``` <link rel="stylesheet" type="text/css" media="print" href="print.css" /> <link rel="stylesheet" type="text/css" media="screen" href="main.css" /> ``` **One Style Sheet** As kodecraft mentioned, you can also put the styles into the same file by using the @media block. ``` @media print { div.box { width:100px; } } @media screen { div.box { width:400px; } } ```
In IE there are the nonstandard `window.onbeforeprint` and `window.onafterprint` event handlers. There isn't a non-IE way to do it that I know of, however. What kinds of changes are you trying to make? It's possible that your problem could be solved by specifying different rules for your print stylesheet.
Javascript Event Handler for Print
[ "", "javascript", "events", "printing", "" ]
I'm building a controller that other controllers can inherit (provide base functionality across site without repeating code): ``` public abstract class ApplicationController : Controller { protected ApplicationController() { //site logic goes here //what is the value of agentID from the Action below?? } } public class AgentController : ApplicationController { public ActionResult Index(string agentID) { return View(); } } ``` The logic that applies to the entire site will go into the constructor of the ApplicationController class. The problem is in that constructor I need to access the value in the parameter from the Action, in this case agentID (it will be the same across the entire site). Is there a way to read that value in?
I figured out how to do it ... very similar to Craig Stuntz's answer, but the difference is in how you reach the RouteData. Using ControllerContext.RouteData.Values does not work in a regular method used this way (it does from the original controller, but not from a base one like I built), but I did get to the RouteData by overriding the OnActionExecuting method: ``` protected override void OnActionExecuting(ActionExecutingContext filterContext) { string agentID = filterContext.RouteData.Values["agentID"].ToString(); OtherMethodCall(agentID); } ```
Actions happen *after* constructors. The value doesn't exist (hasn't been bound) in a constructor. The route data might be known in the constructor, but the action data binding certainly will not have happened yet. You *can't* get this value with certainty until the action has been invoked. Route data can be accessed inside the controller via: ``` ControllerContext.RouteData.Values ``` However, it is incorrect to suppose that agentID can only be bound to route data. In fact, it could come from a server variable, a form field, a query string parameter, etc. My advice would be to pass it explicitly wherever it is needed. If nothing else, it makes your unit tests better.
In .Net MVC, how do you access variables from URL Routing outside of the Action?
[ "", "c#", "asp.net-mvc", "" ]
Given a string: "Person.Address.Postcode" I want to be able to get/set this postcode property on an instance of Person. How can I do this? My idea was to split the string by "." and then iterate over the parts, looking for the property on the previous type, then build up an expression tree that would look something like (apologies for the pseudo syntax): ``` (person => person.Address) address => address.Postcode ``` I'm having real trouble acutally creating the expression tree though! If this is the best way, can someone suggest how to go about it, or is there an easier alternative? Thanks Andrew ``` public class Person { public int Age { get; set; } public string Name { get; set; } public Address Address{ get; set; } public Person() { Address = new Address(); } } public class Address { public string Postcode { get; set; } } ```
Why don't you use recursion? Something like:

```
setPropertyValue(obj, propertyName, value)
{
    head, tail = propertyName.SplitByDotToHeadAndTail(); // Person.Address.Postcode => {head=Person, tail=Address.Postcode}
    if(tail.Length == 0)
        setPropertyValueUsingReflection(obj, head, value);
    else
        setPropertyValue(getPropertyValueUsingReflection(obj, head), tail, value); // recursion
}
```
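Written out concretely, the recursive approach looks like this — a Python sketch with `getattr`/`setattr` standing in for the reflection calls, and assuming the path contains only property names (e.g. `"address.postcode"` on a Person instance, not the root type name):

```python
def set_property(obj, path, value):
    head, _, tail = path.partition(".")
    if not tail:
        # Base case: the last segment is the leaf property to assign.
        setattr(obj, head, value)
    else:
        # Recurse into the child object named by the head segment.
        set_property(getattr(obj, head), tail, value)

def get_property(obj, path):
    head, _, tail = path.partition(".")
    child = getattr(obj, head)
    return get_property(child, tail) if tail else child
```

This gets and sets by name but doesn't produce a compiled delegate; for that you still need the expression-tree version.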
It sounds like you're sorted with regular reflection, but for info, the code to build an expression for nested properties would be very similar to [this order-by code](http://groups.google.co.uk/group/microsoft.public.dotnet.languages.csharp/msg/7d931ec0e2356d21). Note that to set a value, you need to use `GetSetMethod()` on the property and invoke that - there is no inbuilt expression for assigning values after construction (although it is [supported in 4.0](http://marcgravell.blogspot.com/2008/11/future-expressions.html)). (edit) like so:

```
using System;
using System.Linq;
using System.Linq.Expressions;
using System.Reflection;

class Foo
{
    public Foo() { Bar = new Bar(); }
    public Bar Bar { get; private set; }
}
class Bar
{
    public string Name {get;set;}
}
static class Program
{
    static void Main()
    {
        Foo foo = new Foo();
        var setValue = BuildSet<Foo, string>("Bar.Name");
        var getValue = BuildGet<Foo, string>("Bar.Name");
        setValue(foo, "abc");
        Console.WriteLine(getValue(foo));
    }
    static Action<T, TValue> BuildSet<T, TValue>(string property)
    {
        string[] props = property.Split('.');
        Type type = typeof(T);
        ParameterExpression arg = Expression.Parameter(type, "x");
        ParameterExpression valArg = Expression.Parameter(typeof(TValue), "val");
        Expression expr = arg;
        foreach (string prop in props.Take(props.Length - 1))
        {
            // use reflection (not ComponentModel) to mirror LINQ
            PropertyInfo pi = type.GetProperty(prop);
            expr = Expression.Property(expr, pi);
            type = pi.PropertyType;
        }
        // final property set...
        PropertyInfo finalProp = type.GetProperty(props.Last());
        MethodInfo setter = finalProp.GetSetMethod();
        expr = Expression.Call(expr, setter, valArg);
        return Expression.Lambda<Action<T, TValue>>(expr, arg, valArg).Compile();
    }
    static Func<T,TValue> BuildGet<T, TValue>(string property)
    {
        string[] props = property.Split('.');
        Type type = typeof(T);
        ParameterExpression arg = Expression.Parameter(type, "x");
        Expression expr = arg;
        foreach (string prop in props)
        {
            // use reflection (not ComponentModel) to mirror LINQ
            PropertyInfo pi = type.GetProperty(prop);
            expr = Expression.Property(expr, pi);
            type = pi.PropertyType;
        }
        return Expression.Lambda<Func<T, TValue>>(expr, arg).Compile();
    }
}
```
how to create expression tree / lambda for a deep property from a string
[ "", "c#", "lambda", "expression-trees", "" ]
In [Python](http://en.wikipedia.org/wiki/Python_%28programming_language%29), is the following the only way to get the number of elements? ``` arr.__len__() ``` If so, why the strange syntax?
``` my_list = [1,2,3,4,5] len(my_list) # 5 ``` The same works for tuples: ``` my_tuple = (1,2,3,4,5) len(my_tuple) # 5 ``` And strings, which are really just arrays of characters: ``` my_string = 'hello world' len(my_string) # 11 ``` It was [intentionally done this way](https://web.archive.org/web/20200627113910/http://effbot.org/pyfaq/why-does-python-use-methods-for-some-functionality-e-g-list-index-but-functions-for-other-e-g-len-list.htm) so that lists, tuples and other container types or iterables didn't all need to explicitly implement a public `.length()` method, instead you can just check the `len()` of anything that implements the 'magic' `__len__()` method. Sure, this may seem redundant, but length checking implementations can vary considerably, even within the same language. It's not uncommon to see one collection type use a `.length()` method while another type uses a `.length` property, while yet another uses `.count()`. Having a language-level keyword unifies the entry point for all these types. So even objects you may not consider to be lists of elements could still be length-checked. This includes strings, queues, trees, etc. The functional nature of `len()` also lends itself well to functional styles of programming. ``` lengths = map(len, list_of_containers) ```
The way you take a length of anything for which that makes sense (a list, dictionary, tuple, string, ...) is to call `len` on it. ``` l = [1,2,3,4] s = 'abcde' len(l) #returns 4 len(s) #returns 5 ``` The reason for the "strange" syntax is that internally python translates `len(object)` into `object.__len__()`. This applies to any object. So, if you are defining some class and it makes sense for it to have a length, just define a `__len__()` method on it and then one can call `len` on those instances.
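To make that last point concrete: any class that defines a `__len__()` method immediately works with the built-in `len()` — a minimal sketch:

```python
class Playlist:
    def __init__(self, songs):
        self._songs = list(songs)

    def __len__(self):
        # len(playlist) is translated internally to playlist.__len__()
        return len(self._songs)
```

So `len(Playlist(["a", "b", "c"]))` returns 3, with no public `.length()` method needed.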
Is arr.__len__() the preferred way to get the length of an array in Python?
[ "", "python", "arrays", "methods", "" ]
I'm new to SQL and relational databases and I have what I would imagine is a common problem. I'm making a website and when each user submits a post they have to provide a location in either a zip code or a City/State. What is the best practice for handling this? Do I simply create a Zip Code and City and State table and query against them or are there ready made solutions for handling this? I'm using SQL Server 2005 if it makes a difference. I need to be able to retrieve a zip code given a city/state or I need to be able to spit out the city state given a zip code.
You have a couple of options. You can buy a [bulk zip-code library](http://www.zip-code-database.org/) from somebody which will list zip codes, cities, counties, etc. by state, or you can pay someone to [access a web service](http://codebump.com/zipcode/) which will perform the same function on a more granular level. Your best bet would be to go with the zip-code library option, as it'll cost you less than the web service and will provide better performance. How you query or pre-process this library is up to you. You mention SQL Server, so you'd probably want State, Zipcode, and City tables, and include the relevant relationships between them. You'll also need to have provisions for cities that span multiple zipcodes, or for zipcodes that have multiple cities - but none of these issues are insurmountable. As far as dealing with the vagaries of user input, you may consider enlisting the help of an [address validation web service](http://www.serviceobjects.com/products/address/address-validation-(us)?zut=ggl3001), although most of them require a full shipping address in order to validate. Edit: looks like there's a [SourceForge project offering free zip-code data](http://sourceforge.net/projects/zips/), including lat/lon data, etc. Not sure how correct or current it is. Edit 2: After some cursory looking on that SourceForge project's site it looks like this is a dead project. If you use this data, you'll need to provide some allowance for zipcodes / cities that don't exist in your database. Purchased bulk libraries usually come with some sort of guarantee of updates, or a pricing plan for updates, etc., and are probably more reliable.
Have a ZipCode table that is related to a CityState table. Some zip codes have multiple cities associated with them, so you may need to have the interface let the user select from the city they want or let them override the default. I use the paid service from [ZipInfo.com](http://www.zipinfo.com/products/products.htm) since I needed additional information such as lat/long, zip type and county. Zip codes also change several times a year as new zip codes are added or merged with others, so you will need to update your data a few times a year to stay consistent.
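A minimal sketch of the ZipCode/CityState schema both answers describe, using SQLite so it can be run as-is. Table names, column names, and the sample row are illustrative only — a real system would bulk-load one of the purchased datasets mentioned above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE CityState (
        id    INTEGER PRIMARY KEY,
        city  TEXT NOT NULL,
        state TEXT NOT NULL
    );
    -- Junction-style table: one zip can map to several cities,
    -- and one city can span several zips.
    CREATE TABLE ZipCode (
        zip          TEXT NOT NULL,
        citystate_id INTEGER NOT NULL REFERENCES CityState(id),
        PRIMARY KEY (zip, citystate_id)
    );
""")
conn.execute("INSERT INTO CityState (id, city, state) VALUES (1, 'Beverly Hills', 'CA')")
conn.execute("INSERT INTO ZipCode VALUES ('90210', 1)")

def city_state_for_zip(zip_code):
    # Returns a (city, state) tuple, or None if the zip is unknown.
    return conn.execute(
        "SELECT c.city, c.state FROM ZipCode z "
        "JOIN CityState c ON c.id = z.citystate_id WHERE z.zip = ?",
        (zip_code,)).fetchone()
```

The reverse lookup (zip for a city/state) is the same join filtered on `city` and `state` instead of `zip`.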
Zip Code to City/State and vice-versa in a database?
[ "", "sql", "sql-server", "zipcode", "" ]
I am referring to container managed transaction attributes in Enterprise Java Beans. I can't think of any use cases where using 'Mandatory' and 'Never' makes sense. Can anyone please explain what cases warrant using these transaction attributes?
Here's my stab at this: **Mandatory**: An EJB may be providing some internal function that assumes/relies on a caller's *transaction* already running; if no transaction is active, it cannot (for various reasons) initiate one itself, and so it will throw an EJB error. So the real question here is why that would ever be a requirement, and the only scenario I can devise is one where specific transaction-related actions must be executed when a transaction starts, and some EJBs are not equipped for these actions and so are marked Mandatory. I suppose you might also use this attribute to ensure a consistent and correctly ordered lock acquisition where a failure to do so could result in a deadlock. **Never**: This forces your EJB to throw an exception if a transaction is running when the EJB is invoked, and again, the real question is what sort of scenario would require this. Referring to [Mastering EJB Third Edition](http://www.theserverside.com/tt/books/wiley/masteringEJB/downloads/MasteringEJB3rdEd.pdf), Ed Roman asserts that this attribute is useful in reducing client-side coding errors by preventing the incorrect assumption that the EJB will participate in an [ACID](http://en.wikipedia.org/wiki/ACID) procedure. Perhaps others will be able to supply more concrete scenarios for these attributes.
I worked with a project recently that insisted that all transactions were marked out at the application layer (the service classes), and that Data Access Objects must not be called directly. This was to ensure that the database wasn't ever written to without the associated application logic being invoked (usually, the database operation was paired up with a message sent to queue). An application of MANDATORY on every DAO EJB ensured that whilst they would run in a transaction, it was the job of a caller to start one. Since the DAOs were called by the service classes (which had the default of REQUIRED), this meant that they worked just fine but anybody accidentally calling them directly would be rewarded with an exception.
When should I use EJB transaction attributes 'Mandatory' and 'Never'
[ "", "java", "jakarta-ee", "ejb", "" ]
I've installed VS 2008 and SP1. I've tried to load a solution that has a WCF project and it complains that "This project type is not supported by this installation". I've tried the devenv.exe /setup but that didn't help. If it matters, this is all on Windows 7. On a Windows Vista installation I have no problems with the same solution (also VS2008 w/sp1).
So, it turned out I had made a pretty bone-head maneuver during the install and didn't install the Web Developer portion, only the C# portion. Once I installed that, all was well.
You're best off using VS2008 on a supported operating system. Windows 7 is in beta and you're more likely to run into problems....just like this one... unless your goal is to test VS2008 on Windows 7 in which case what you want to do is contact Microsoft in order to give them feedback.
VS 2008 can't open projects in Windows 7
[ "", "c#", "visual-studio", "wcf", "windows-7", "" ]