Columns: Prompt, Chosen, Rejected, Title, Tags
I have a UserControl with some predefined controls (GroupBox, Button, DataGridView) on it. These controls are marked as protected, and the `components` variable is also marked as protected. I then want to inherit from this base UserControl in another UserControl; however, the DataGridView is always locked in the designer. I suspect it may have something to do with the DataGridView implementing ISupportInitialize. ``` public class BaseGridDetail : UserControl ``` has a DataGridView control (among others) defined. ``` public class InheritedDetail : BaseGridDetail ``` Here the DataGridView control is locked. Does anyone have any ideas how to make this control available in the designer after inheritance?
By the looks of it, the DataGridView (and some other controls) does not support visual inheritance. There's a Connect issue [logged here](http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=116457) which doesn't look hopeful. Similar issues have been logged with other form controls, e.g. [FlowLayoutPanel](http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=119442). I'm unable to find a way to force visual inheritance. Here's the official answer on [Connect](http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=110787): "For this particular release, the DataGridView was not designed to be used in visual inheritance. We will keep your suggestion in mind when we plan our future release" That was as of 26/05/2006. Update: I found [this blog post, which may have the answer](http://dotnetengineer.wordpress.com/2008/06/27/visual-inheritance-using-datagridview/) Edit: I was unable to verify the blog post's claims. It looks like [this might be the latest on this issue](https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=284335) It looks like you can still manipulate the DataGridView at runtime, though, so you might be able to set visual properties (and other settings) there. It's not a great compromise.
[1] Create your custom UserControl. [2] Make your custom UserControl use the inherited DataGridView below: ``` [Designer(typeof(System.Windows.Forms.Design.ControlDesigner))] public class InheritedDataGridView : DataGridView { } ``` [3] Inherit from your custom UserControl, and voilà! [4] Oh, and don't forget to add a reference to the "System.Design" DLL. Enjoy.
DataGridView locked on an inherited UserControl
[ "c#", ".net", "winforms", "inheritance" ]
I want a method of the class "`One`" ("`AccessibleWithinSameNamespace`") to be accessible to the class "`Two`", without having "`Two`" extend "`One`". Both classes are in the same namespace, so I'm thinking that maybe there's an access modifier that emulates the "*protected*" modifier, but for namespaces. Some code: ``` namespace Test { class One { public void AccessibleToAll() { } protected void AccessibleWithinSameNamespace() { // I am not public // I can be accessed from other classes // within the same namespace of my class } } } namespace Test { class Two { public Two() { One one = new One(); // I am able to access this method because my class // is in the same namespace as the class: "One" one.AccessibleWithinSameNamespace(); } } } ```
You can use the [internal](http://msdn.microsoft.com/en-us/library/7c5ka91b.aspx) modifier if both classes are in the same assembly. With your example: ``` namespace Test { class One { public void AccessibleToAll() { } internal void AccessibleWithinSameNamespace() { // I am not public // I can be accessed from other classes // within the same assembly as my class } } } namespace Test { class Two { public Two() { One one = new One(); // I am able to access this method because my class // is in the same assembly as the class: "One" one.AccessibleWithinSameNamespace(); } } } ```
C# and .NET have no concept of "within the same namespace" accessibility. `internal` is the closest equivalent.
Class-member access-modifiers
[ "c#", ".net-3.5" ]
I have an ASP.NET website (<http://www.erate.co.za>), version 2.0. When someone opens my website in Firefox, everything looks different. Why is that, and how can I make it compatible? Please help! Etienne
The problems don't have anything to do with ASP.NET / C# specifically. They have to do with your understanding of web design / HTML / CSS and how you can make a cross-browser-compatible UI. I'd suggest you look at <http://www.w3schools.com/> for some information on good web design practices. Some obvious problems with the source that you need to address are: * no common CSS stylesheets * styles applied inline on lots of elements * long strings of "&nbsp; &nbsp; &nbsp; &nbsp; &nbsp;" used to align text
The underlying server technology should not have any impact on your website's appearance as long as you are just producing HTML. What you need to do is make sure that your HTML and CSS work as intended in all browsers. A good way to start is to make sure that you only output standards-compliant code.
Making my ASP.NET website compatible with Firefox?
[ "c#", "asp.net", "vb.net", "firefox" ]
I have an object that starts a thread, opens a file, and waits for input from other classes. As it receives input, it writes it to disk. Basically, it's a thread-safe data logging class... Here's the weird part: when I open a form in the designer (Visual Studio 2008) that uses the object, the file gets created. It's obviously running under the design-time vhost process... The odd thing is I've not been able to reproduce the issue in another project. I'm not sure what the rules are for code that gets executed in the designer and code that does not. For example, creating a file in a Windows Forms constructor doesn't actually create the file at design time... What is the explanation? Is there a reference?
You can check the UsageMode of the LicenseManager to determine whether the code is running at design time or not. System.ComponentModel.LicenseManager.UsageMode == System.ComponentModel.LicenseUsageMode.Designtime Here is a quick example: ``` using System; using System.ComponentModel; using System.Windows.Forms; namespace Test { public class ComponentClass : Component { public ComponentClass() { MessageBox.Show("Runtime!"); } } } ``` When this component gets added to your form in the designer, you will immediately get a message box. To prevent this, you can add a simple if statement to check that the code is not running at design time: ``` using System; using System.ComponentModel; using System.Windows.Forms; namespace Test { public class ComponentClass : Component { public ComponentClass() { if (LicenseManager.UsageMode != LicenseUsageMode.Designtime) { MessageBox.Show("Runtime!"); } } } } ``` After adding the if statement, the message box no longer appears when the component is added to the form via the designer.
The constructor of a control or form does not get executed when editing that class in the designer (nor does OnLoad get called). I've occasionally used this to set one value in the designer (eg. making its child controls all Visible in the designer) but override some of them to a different default value in the constructor (eg. hiding certain child controls which will only show in certain circumstances, such as an indicator on a status bar). However, the constructor *does* get executed if the control is placed as a child on another control or form in the designer. OnLoad gets executed as well. This may be how your logging code was getting accidentally triggered in the designer. For detecting design vs. runtime, [an answer](https://stackoverflow.com/questions/34664/designmode-with-controls/708594#708594) to [another question](https://stackoverflow.com/questions/34664/designmode-with-controls) has screenshots of some empirical tests showing the values returned by some common approaches. It appears that a child control of a child control (two levels down) of the form or control being edited in the designer sees its own DesignMode == false, so the normal property check will fail to protect code (eg. in the OnLoad method) for controls nested within a control added in the designer. If you were checking DesignMode as one would expect, it could be the nesting which caused it to get around that check. It also always sees DesignMode == false within the constructor. Also, note that the LicenseManager.UsageMode check *only* sees DesignTime within the constructor; when OnLoad is called, it is within a RunTime LicenseContext. The most complete solution seems to be to check LicenseManager.UsageMode in the constructor of the control or form (or component) and save the setting to a member variable or property which you can check later, to avoid running code that should never run in the designer, even when nested.
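A minimal sketch of that capture-in-the-constructor pattern (the control name and field are illustrative, not from the original post):

```csharp
using System;
using System.ComponentModel;
using System.Windows.Forms;

public class DataLoggingControl : UserControl
{
    // LicenseManager.UsageMode is only reliable inside the constructor,
    // so capture it once and reuse the flag everywhere else.
    private readonly bool _isDesignTime;

    public DataLoggingControl()
    {
        _isDesignTime = LicenseManager.UsageMode == LicenseUsageMode.Designtime;
    }

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        if (!_isDesignTime)
        {
            // Safe to open files / start logging threads here;
            // this branch is skipped when the designer hosts the control,
            // even when this control is nested several levels deep.
        }
    }
}
```

Unlike checking DesignMode directly in OnLoad, the cached flag survives nesting because it was read in the one place where the LicenseContext is dependable.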
There's also another approach in [another answer](https://stackoverflow.com/questions/34664/designmode-with-controls/2693338#2693338) to that other question which accounts for nesting but only works outside the constructor.
.NET Windows Forms design time rules
[ "c#", ".net", "winforms", "design-time" ]
Imagine the following: a type T has a property Company. The following call works perfectly: ``` Type t = typeof(T); t.GetProperty("Company"); ``` With the following call I get null, though: ``` Type t = typeof(T); t.GetProperty("company", BindingFlags.IgnoreCase); ``` Anybody got an idea?
You've overwritten the default look-up flags; if you specify new flags, you need to provide all the information needed for the property to be found. For example: `BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance`
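As a minimal sketch of the full flag set in action (the `Person` type here is illustrative, standing in for `T`):

```csharp
using System;
using System.Reflection;

public class Person
{
    public string Company { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        // Passing IgnoreCase alone replaces the default Public | Instance
        // lookup, so nothing matches and GetProperty returns null.
        PropertyInfo missing = typeof(Person).GetProperty("company", BindingFlags.IgnoreCase);

        // Supplying the full set finds the property despite the casing.
        PropertyInfo found = typeof(Person).GetProperty(
            "company", BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.Instance);

        Console.WriteLine(missing == null); // True
        Console.WriteLine(found.Name);      // Company
    }
}
```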
You need to add `BindingFlags.Public | BindingFlags.Instance`
BindingFlags.IgnoreCase not working for Type.GetProperty()?
[ "c#", "reflection", "getproperty", "bindingflags" ]
Is there any way to find out what exceptions might be thrown by any method in .NET code? Ideally I want to see what might be thrown and choose which ones I want to handle. I guess I want the information you'd get from the throws clause in Java. The situation is that I'm doing a LINQ query on an XML document from the network and want to know what could go wrong. I could open up the assembly in Reflector and have a look, but I thought there might be an easier way.
.NET does not have enforced ("checked") exceptions like Java. The IntelliSense *might* show this information, if the developer has added a `/// <exception.../>` block - but ultimately more exceptions can happen than you expect (`OutOfMemoryException`, `ThreadAbortException`, `TypeLoadException`, etc. can all happen fairly unpredictably). In general, you should have an idea of what things are likely to go wrong, and which ones you can actually do something useful about. In most cases, the correct behaviour is to let the exception bubble up (just running any "finally" code to release resources). Eric Lippert has a good blog on this subject [here](https://learn.microsoft.com/en-us/archive/blogs/ericlippert/vexing-exceptions).
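The `/// <exception>` documentation mentioned above looks like the sketch below; note that it is purely informational and nothing forces callers to handle the listed exceptions (the `StrictParser` type is a made-up example):

```csharp
using System;

public static class StrictParser
{
    /// <summary>Parses a decimal integer from a string.</summary>
    /// <exception cref="ArgumentNullException"><paramref name="text"/> is null.</exception>
    /// <exception cref="FormatException"><paramref name="text"/> is not a valid number.</exception>
    /// <exception cref="OverflowException"><paramref name="text"/> is out of range for <see cref="int"/>.</exception>
    public static int ParseStrict(string text)
    {
        if (text == null) throw new ArgumentNullException(nameof(text));
        return int.Parse(text); // int.Parse itself documents these exceptions
    }
}
```

Tools like IntelliSense surface these tags, but, as the answer notes, runtime exceptions such as `OutOfMemoryException` can still occur without ever being documented.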
I think that [Exception Hunter](http://www.red-gate.com/products/Exception_Hunter/index.htm) can provide this information; however, it costs money...
Finding out what exceptions a method might throw in C#
[ "c#", ".net", "exception" ]
How do I check whether an item is selected in my ListBox? I have a Remove button, but I only want it to execute if an item is selected in the list box. I'm using ASP.NET with C# code-behind, and I'd prefer this validation to occur on the server side. Cheers.
``` // Iterate backwards so that removing an item // doesn't shift the indices of items not yet visited. for (int i = lbSrc.Items.Count - 1; i >= 0; i--) { if (lbSrc.Items[i].Selected) { lbSrc.Items.RemoveAt(i); } } ``` This is what I came up with.
On the callback for the button click, just check if the selected index of the list box is greater than or equal to zero. ``` protected void removeButton_Click( object sender, EventArgs e ) { if (listBox.SelectedIndex >= 0) { listBox.Items.RemoveAt( listBox.SelectedIndex ); } } ```
listbox validation
[ "c#", "asp.net", "listbox", "validation" ]
I have a page with a "Print" link that takes the user to a printer-friendly page. The client wants a print dialog box to appear automatically when the user arrives at the print-friendly page. How can I do this with javascript?
``` window.print(); ``` unless you mean a custom-looking pop-up.
You could do ``` <body onload="window.print()"> ... </body> ```
How can I pop-up a print dialog box using Javascript?
[ "javascript" ]
I have multiple RequiredFieldValidators on my .aspx page. In the code-behind (C#) I want to be able to tell which control specifically wasn't valid so I can apply a style to that control. I use the Page.IsValid property to see whether the overall page passed validation, but I need to know specifically which control failed.
As others have mentioned, you need to loop through the validator collection of the page and check their states. [MSDN has examples here.](http://msdn.microsoft.com/en-us/library/dh9ad08f.aspx) ``` If (Me.IsPostBack) Then Me.Validate() If (Not Me.IsValid) Then Dim msg As String ' Loop through all validation controls to see which ' generated the error(s). Dim oValidator As IValidator For Each oValidator In Validators If oValidator.IsValid = False Then msg = msg & "<br />" & oValidator.ErrorMessage End If Next Label1.Text = msg End If End If ```
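Since the question is tagged C#, a rough C# translation of the loop above (the CSS class name and handler name are illustrative) could be:

```csharp
// In the button's click handler, after validation has run.
protected void Submit_Click(object sender, EventArgs e)
{
    Page.Validate();
    if (!Page.IsValid)
    {
        foreach (IValidator validator in Page.Validators)
        {
            if (!validator.IsValid)
            {
                // RequiredFieldValidator derives from BaseValidator,
                // which exposes the ID of the control being watched.
                var baseValidator = validator as BaseValidator;
                if (baseValidator != null)
                {
                    var target = FindControl(baseValidator.ControlToValidate) as WebControl;
                    if (target != null)
                    {
                        target.CssClass = "validation-error"; // illustrative style
                    }
                }
            }
        }
    }
}
```

The cast to `BaseValidator` is what lets you get from the failed validator back to the control it validates, which is the part the question actually needs for styling.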
All validators are added to the ValidatorCollection of the page (the Page.Validators property). You can loop through this collection to validate each control manually by calling IValidator.Validate() and then checking its IsValid state.
How to tell which RequiredFieldValidator failed
[ "c#", ".net", "validation" ]
I am working on a simple notification service that will be used to deliver messages to the users surfing a website. The notifications do not have to be sent in real time, but it might be a better user experience if they happened more frequently than, say, every 5 minutes. The data being sent to and from the client is not very large, and it is a straightforward database query to retrieve the data. In reading other conversations on the topic, it would appear that an AJAX push can result in higher server loads. Since I can tolerate longer server delays, is it worthwhile to have the server push notifications, or should I simply poll? It is not much harder to implement the push scenario, so I thought I would see what the opinion was here. Thanks for your help. EDIT: I have looked into a simple AJAX push and implemented a simple demo based on this [article](http://uwmike.com/articles/2008/01/22/browser-data-push/#more-384) by Mike Purvis. The client load is fairly low at around 5k for the initial version and is expected to stay that way for quite some time. --- Thank you, everyone, for your responses. I have decided to go with the polling solution, but to wrap it all within a utility library so that if they want to change it later it is easier.
Because using a push requires an open HTTP connection to be maintained between your server and each client, I'd go for poll as well - not only is that going to consume a lot of server resources, but it's also going to be significantly more tricky to implement, as matt b mentioned. My experience with polling is that if you have a frequent enough polling interval on a busy enough site, your web server logs can get flooded with poll requests really quickly. **Edit (2017)**: I'd say your choices now are between WebSockets and long polling (mentioned in another answer). It sounds like long polling might be the right choice, based on the way the question mentions that the notifications don't need to be received in real time; an infrequent polling period would be pretty easy to implement and shouldn't be very taxing on your server. WebSockets are cool and a great choice for many applications these days; it sounds like that might be overkill in this case, though.
I'm surprised no one here has mentioned long polling. Long polling means keeping an open connection for a longer period (say 30-60 seconds), and once it's closed, re-opening it again, and simply having the socket/connection listen for responses. This results in fewer connections (but longer ones), and means that responses are almost immediate (some may have to wait for a new polling connection). I'd like to add that, in combination with technologies like NodeJS, this results in a very efficient and resource-light solution that is 100% browser compatible across all major browsers and versions, and does not require any additional tech like Comet or Flash. I realize this is an old question, but thought it might still be useful to provide this information :)
Client notification, should I use an AJAX Push or Poll?
[ "javascript", "ajax", "web-services", "server-push" ]
Basically, I have an `iframe` embedded in a page and the `iframe` has some [JavaScript](http://en.wikipedia.org/wiki/JavaScript) routines I need to invoke from the parent page. Now the opposite is quite simple as you only need to call `parent.functionName()`, but unfortunately, I need exactly the opposite of that. Please note that my problem is not changing the source [URL](http://en.wikipedia.org/wiki/Uniform_Resource_Locator) of the `iframe`, but invoking a function defined in the `iframe`.
Assume your iframe's id is "targetFrame" and the function you want to call is `targetFunction()`: ``` document.getElementById('targetFrame').contentWindow.targetFunction(); ``` You can also access the frame using `window.frames` instead of `document.getElementById`. ``` // Note: this option does not work in most recent versions of Chrome and Firefox window.frames[0].frameElement.contentWindow.targetFunction(); ```
There are some quirks to be aware of here. 1. `HTMLIFrameElement.contentWindow` is probably the easier way, but it's not quite a standard property and some browsers don't support it, mostly older ones. This is because the DOM Level 1 HTML standard has nothing to say about the `window` object. 2. You can also try `HTMLIFrameElement.contentDocument.defaultView`, which a couple of older browsers allow but IE doesn't. Even so, the standard doesn't explicitly say that you get the `window` object back, for the same reason as (1), but you can pick up a few extra browser versions here if you care. 3. `window.frames['name']` returning the window is the oldest and hence most reliable interface. But you then have to use a `name="..."` attribute to be able to get a frame by name, which is slightly ugly/deprecated/transitional. (`id="..."` would be better but IE doesn't like that.) 4. `window.frames[number]` is also very reliable, but knowing the right index is the trick. You can get away with this eg. if you know you only have the one iframe on the page. 5. It is entirely possible the child iframe hasn't loaded yet, or something else went wrong to make it inaccessible. You may find it easier to reverse the flow of communications: that is, have the child iframe notify its `window.parent` script when it has finished loaded and is ready to be called back. By passing one of its own objects (eg. a callback function) to the parent script, that parent can then communicate directly with the script in the iframe without having to worry about what HTMLIFrameElement it is associated with.
Invoking JavaScript code in an iframe from the parent page
[ "javascript", "html", "iframe" ]
Is there any tool that enables you to "hot swap" JavaScript contents while executing a webpage? I am looking for something similar to what HotSpot does for Java, a way to "hot deploy" new JS code without having to reload the whole page. Is there anything like that out there? **Clarifying in case people don't understand "hot swap", as indicated by *lock*:** By "hot swap" I mean allowing me to change parts of the code contained on the page itself and its .js files. Then this framework would detect the change - either automagically or by some indication from my end - and reload the code dynamically, avoiding the new server-side post (reload). That kind of approach would simplify debugging and error fixing, since you don't need to reload the page and start the interaction all over again, from scratch.
Interesting idea :) I wrote the following bookmarklet: ``` function reload(){var scripts=document.getElementsByTagName("script");var head=document.getElementsByTagName("head")[0];var newScripts=[];var removeScripts=[];for(var i=0;i<scripts.length;i++){var parent=scripts[i].parentNode;if(parent==head&&scripts[i].src){var newScript={};newScript.src=scripts[i].src;newScript.innerHTML=scripts[i].innerHTML;newScripts.push(newScript);removeScripts.push(scripts[i]);}}for(var i=0;i<removeScripts.length;i++){head.removeChild(removeScripts[i]);}for(var i=0;i<newScripts.length;i++){var script=document.createElement("script");script.src=newScripts[i].src;script.type="text/javascript";script.innerHTML=newScripts[i].innerHTML;head.appendChild(script);}} ``` add that to the location of a new bookmark, and it will reload all the javascripts referenced in <head>. Not sure how well this will work in practice, but it was worth a shot :) I guess you'd have to be very careful in the way you write your scripts, so as not to have things added to the page body multiple times, etc. Maybe support for a 'reload="true"' attribute could be useful, that way you could tag only your libraries as reloadable. Full source: ``` function reload() { var scripts = document.getElementsByTagName("script"); var head = document.getElementsByTagName("head")[0]; var newScripts = []; var removeScripts = []; for(var i=0; i < scripts.length; i++) { var parent = scripts[i].parentNode; if(parent == head && scripts[i].src) { var newScript = {}; newScript.src = scripts[i].src; newScript.innerHTML = scripts[i].innerHTML; newScripts.push(newScript); removeScripts.push(scripts[i]); } } for(var i=0; i < removeScripts.length; i++) { head.removeChild(removeScripts[i]); } for(var i=0; i < newScripts.length; i++) { var script = document.createElement("script"); script.src = newScripts[i].src; script.type = "text/javascript"; script.innerHTML = newScripts[i].innerHTML; head.appendChild(script); } } ```
Since I had a similar problem to solve, I wrote a small JS lib to hot-swap JavaScript, CSS and image files. It's of course open source on GitHub: [hotswap.js](https://github.com/geo-at-github/hotswap.js) Hope it helps. **Update**: I have attached the full lib source here. To use it, simply copy the content into a file (e.g.: hotswap.js) and insert the script tag into your website like this: ``` <script src="hotswap.js"></script> ``` API: ``` // refresh .js files hotswap.refreshAllJs(arrExcludedFiles); hotswap.refreshJs(arrIncludedFiles); // refresh .css files hotswap.refreshAllCss(arrExcludedFiles); hotswap.refreshCss(arrIncludedFiles); // refresh images hotswap.refreshAllImg(arrExcludedFiles); hotswap.refreshImg(arrIncludedFiles); // show a gui (this is optional and not required for hotswap to work) (Click on the "H"). hotswap.createGui(); // Examples: // refresh all .js files hotswap.refreshAllJs(); // refresh main.css only hotswap.refreshCss( ["main.js"] ); // refresh all images (img tags) except "dont-refreh-me.png". hotswap.refreshAllImg( ["dont-refreh-me.png"] ); ``` Full source (v. 0.2.0): I had to remove all comments to make it fit under the 30000-character answer limit. The inline HTML + CSS is ugly, I know, but I wanted to keep this within one single .js file.
``` (function() { var root = this; var previousHotswap = root.hotswap; var hotswap = function() { if (!(this instanceof hotswap)) { return new hotswap(); } else { return this; } }; root.hotswap = hotswap(); hotswap.prototype.VERSION = '0.2.0'; hotswap.prototype.RND_PARAM_NAME = 'hs982345jkasg89zqnsl'; hotswap.prototype.FILE_REMOVAL_DELAY = 400; hotswap.prototype.CSS_HTML_PREFIX = 'hs982345jkasg89zqnsl'; hotswap.prototype._prefix = false; hotswap.prototype._prefixCache = []; hotswap.prototype._guiCache = {}; hotswap.prototype._guiGuiRefreshInterval = null; hotswap.prototype._guiHtml = '' + '<style type="text/css">'+ ' #PREFIX'+ ' {'+ ' display: block;'+ ' position: fixed;'+ ' top: 20%;/*distance from top*/'+ ' right: 0;'+ ' z-index: 99999;'+ ' width: 20em;'+ ' height: auto;'+ ' color: black;'+ ' background-color: #666666;'+ ' font-family: Verdana, sans-serif;'+ ' font-size: 0.8em;'+ ' -webkit-box-shadow: 0 0px 0.3em 0.1em #999999;'+ ' -moz-box-shadow: 0 0px 0.3em 0.1em #999999;'+ ' box-shadow: 0 0px 0.3em 0.1em #999999;'+ ' }'+ ' #PREFIX.mini'+ ' {'+ ' width: 2.9em;'+ ' height: 2.9em;'+ ' overflow:hidden;'+ ' }'+ ' #PREFIX.mini .PREFIX-header input, #PREFIX.mini .PREFIX-list, #PREFIX.mini .PREFIX-footer'+ ' {'+ ' display:none;'+ ' }'+ ' #PREFIX.mini .PREFIX-header div'+ ' {'+ ' display: block;'+ ' width: 100%;'+ ' height: 100%;'+ ' }'+ ' #PREFIX input'+ ' {'+ ' font-size: 1.0em;'+ ' border: 0.1em solid #999999;'+ ' border-radius: 0.2em;'+ ' padding: 0.2em 0.1em;'+ ' }'+ ' #PREFIX .PREFIX-header'+ ' {'+ ' height: 2.4em;'+ ' overflow:hidden;'+ ' padding: 0.4em;'+ ' color: white;'+ ' background-color: black;'+ ' }'+ ' #PREFIX .PREFIX-header input'+ ' {'+ ' width: 83.5%;'+ ' height: 1.6em;'+ ' }'+ ' #PREFIX .PREFIX-header div'+ ' {'+ ' position: absolute;'+ ' top:0;'+ ' right:0;'+ ' width: 14.5%;'+ ' height: 1.6em;'+ ' line-height: 1.4em;'+ ' text-align: center;'+ ' font-size: 2em;'+ ' font-weight: bold;'+ ' cursor: pointer;'+ ' }'+ ' #PREFIX .PREFIX-header div:hover'+ 
' {'+ ' background-color: #444444;'+ ' }'+ ' #PREFIX .PREFIX-list'+ ' {'+ ' width: 100%;'+ ' height: 22em;'+ ' overflow: auto;'+ ' }'+ ' #PREFIX ul'+ ' {'+ ' list-style-type: none;'+ ' list-style-position: inside;'+ ' padding: 0;'+ ' margin: 0.5em 0.5em 1.2em 0.5em;'+ ' }'+ ' #PREFIX ul li'+ ' {'+ ' margin: 0.3em;'+ ' padding: 0.5em 0.5em;'+ ' color: white;'+ ' background-color: #717171;'+ ' font-size: 0.9em;'+ ' line-height: 1.5em;'+ ' cursor: pointer;'+ ' }'+ ' #PREFIX ul li:hover'+ ' {'+ ' background-color: #797979;'+ ' }'+ ' #PREFIX ul li.template'+ ' {'+ ' display: none;'+ ' }'+ ' #PREFIX ul li.active'+ ' {'+ ' background-color: black;'+ ' }'+ ' #PREFIX ul li.PREFIX-headline'+ ' {'+ ' color: white;'+ ' background-color: transparent;'+ ' text-align: center;'+ ' font-weight: bold;'+ ' cursor: default;'+ ' }'+ ' #PREFIX .PREFIX-footer'+ ' {'+ ' padding: 0;'+ ' margin:0;'+ ' background-color: #444444;'+ ' }'+ ' #PREFIX .PREFIX-footer ul'+ ' {'+ ' margin: 0;'+ ' padding: 0.5em;'+ ' }'+ ' #PREFIX .PREFIX-footer ul li'+ ' {'+ ' color: white;'+ ' background-color: black;'+ ' font-size: 1.0em;'+ ' border-radius: 0.5em;'+ ' text-align: center;'+ ' height: 2.2em;'+ ' line-height: 2.2em;'+ ' }'+ ' #PREFIX .PREFIX-footer ul li input.PREFIX-seconds'+ ' {'+ ' text-align: center;'+ ' width: 2em;'+ ' }'+ ' #PREFIX .PREFIX-footer ul li:hover'+ ' {'+ ' background-color: #222222;'+ ' }'+ ' #PREFIX .PREFIX-footer ul li.inactive'+ ' {'+ ' background-color: #666666;'+ ' cursor: default;'+ ' }'+ ' </style>'+ ' <div id="PREFIX" class="mini">'+ ' <div class="PREFIX-header">'+ ' <input id="PREFIX-prefix" placeholder="prefix" type="text" name="" />'+ ' <div id="PREFIX-toggle">H</div>'+ ' </div>'+ ' <div class="PREFIX-list">'+ ' <ul id="PREFIX-css">'+ ' <li class="PREFIX-headline">CSS</li>'+ ' <li class="template"></li>'+ ' </ul>'+ ' <ul id="PREFIX-js">'+ ' <li class="PREFIX-headline">JS</li>'+ ' <li class="template"></li>'+ ' </ul>'+ ' <ul id="PREFIX-img">'+ ' <li 
class="PREFIX-headline">IMG</li>'+ ' <li class="template"></li>'+ ' </ul>'+ ' </div>'+ ' <div class="PREFIX-footer">'+ ' <ul>'+ ' <li id="PREFIX-submit-selected">refresh selected</li>'+ ' <li id="PREFIX-submit-start">refresh every <input class="PREFIX-seconds" type="text" value="1" /> sec.</li>'+ ' <li id="PREFIX-submit-stop" class="inactive">stop refreshing</li>'+ ' <li id="PREFIX-submit-refresh-list">refresh list</li>'+ ' </ul>'+ ' </div>'+ ' </div>'; var xGetElementById = function(sId){ return document.getElementById(sId) }, xGetElementsByTagName = function(sTags){ return document.getElementsByTagName(sTags) }, xAppendChild = function(parent, child){ return parent.appendChild(child) }, xCloneNode = function(node){ return document.cloneNode(node) }, xCreateElement = function(sTag){ return document.createElement(sTag) }, xCloneNode = function(ele, deep){ return ele.cloneNode(deep) }, xRemove = function(ele) { if( typeof ele.parentNode != "undefined" && ele.parentNode ) { ele.parentNode.removeChild( ele ); } }, xAddEventListener = function(ele, sEvent, fn, bCaptureOrBubble) { if( xIsEmpty(bCaptureOrBubble) ) { bCaptureOrBubble = false; } if (ele.addEventListener) { ele.addEventListener(sEvent, fn, bCaptureOrBubble); return true; } else if (ele.attachEvent) { return ele.attachEvent('on' + sEvent, fn); } else { ele['on' + sEvent] = fn; } }, xStopPropagation = function(evt) { if (evt && evt.stopPropogation) { evt.stopPropogation(); } else if (window.event && window.event.cancelBubble) { window.event.cancelBubble = true; } }, xPreventDefault = function(evt) { if (evt && evt.preventDefault) { evt.preventDefault(); } else if (window.event && window.event.returnValue) { window.eventReturnValue = false; } }, xContains = function(sHaystack, sNeedle) { return sHaystack.indexOf(sNeedle) >= 0 }, xStartsWith = function(sHaystack, sNeedle) { return sHaystack.indexOf(sNeedle) === 0 }, xReplace = function(sHaystack, sNeedle, sReplacement) { if( xIsEmpty(sReplacement) ) { 
sReplacement = ""; } return sHaystack.split(sNeedle).join(sReplacement); }, xGetAttribute = function(ele, sAttr) { var result = (ele.getAttribute && ele.getAttribute(sAttr)) || null; if( !result ) { result = ele[sAttr]; } if( !result ) { var attrs = ele.attributes; var length = attrs.length; for(var i = 0; i < length; i++) if(attrs[i].nodeName === sAttr) result = attrs[i].nodeValue; } return result; }, xSetAttribute = function(ele, sAttr, value) { if(ele.setAttribute) { ele.setAttribute(sAttr, value) } else { ele[sAttr] = value; } }, xGetParent = function(ele) { return ele.parentNode || ele.parentElement; }, xInsertAfter = function( refEle, newEle ) { return xGetParent(refEle).insertBefore(newEle, refEle.nextSibling); }, xBind = function(func, context) { if (Function.prototype.bind && func.bind === Function.prototype.bind) { return func.bind(context); } else { return function() { if( arguments.length > 2 ) { return func.apply(context, arguments.slice(2)); } else { return func.apply(context); } }; } }, xIsEmpty = function(value) { var ret = true; if( value instanceof Object ) { for(var i in value){ if(value.hasOwnProperty(i)){return false}} return true; } ret = typeof value === "undefined" || value === undefined || value === null || value === ""; return ret; }, xAddClass = function(ele, sClass) { var clazz = xGetAttribute( ele, "class" ); if( !xHasClass(ele, sClass) ) { xSetAttribute( ele, "class", clazz + " " + sClass ); } }, xRemoveClass = function(ele, sClass) { var clazz = xGetAttribute( ele, "class" ); if( xHasClass(ele, sClass) ) { xSetAttribute( ele, "class", xReplace( clazz, sClass, "" ) ); } }, xHasClass = function(ele, sClass) { var clazz = xGetAttribute( ele, "class" ); return !xIsEmpty(clazz) && xContains( clazz, sClass ); }; hotswap.prototype._recreate = function( type, xcludedFiles, xcludeComparator, nDeleteDelay, bForceRecreation ) { if( typeof nDeleteDelay == "undefined") { nDeleteDelay = 0; } if( typeof bForceRecreation == "undefined") { 
bForceRecreation = false; } var tags = this._getFilesByType(type, xcludedFiles, xcludeComparator); var newTags = []; var removeTags = []; var i, src, detected, node, srcAttributeName; for(i=0; i<tags.length; i++) { node = tags[i].node; srcAttributeName = tags[i].srcAttributeName; var newNode = { node: null, oldNode: node, parent: xGetParent(node) }; if( bForceRecreation ) { newNode.node = xCreateElement("script"); } else { newNode.node = xCloneNode(node, false); } for (var p in node) { if (node.hasOwnProperty(p)) { newNode.node.p = node.p; } } src = xGetAttribute( node, srcAttributeName ); xSetAttribute( newNode.node, srcAttributeName, this._updatedUrl(src) ); newTags.push(newNode); removeTags.push(node); } for(var i=0; i < newTags.length; i++) { xInsertAfter(newTags[i].oldNode, newTags[i].node); } if( nDeleteDelay > 0 ) { for(var i=0; i < removeTags.length; i++) { xSetAttribute(removeTags[i], "data-hotswap-deleted", "1"); } setTimeout( function() { for(var i=0; i < removeTags.length; i++) { xRemove(removeTags[i]); } }, nDeleteDelay); } else { for(var i=0; i < removeTags.length; i++) { xRemove(removeTags[i]); } } }; hotswap.prototype._reload = function( type, xcludedFiles, xcludeComparator ) { var tags = this._getFilesByType(type, xcludedFiles, xcludeComparator); var i, src, node, srcAttributeName; for(i=0; i<tags.length; i++) { node = tags[i].node; srcAttributeName = tags[i].srcAttributeName; // update the src property src = xGetAttribute( node, srcAttributeName ); xSetAttribute( node, srcAttributeName, this._updatedUrl(src) ); } }; hotswap.prototype._getFilesByType = function( type, xcludedFiles, xcludeComparator ) { var files; switch(type) { case "css": files = this._getFiles( "css", "link", function(ele) { return (xGetAttribute(ele, "rel") == "stylesheet" || xGetAttribute(ele, "type") == "text/css"); }, "href", xcludedFiles, xcludeComparator ) break; case "js": files = this._getFiles( "js", "script", function(ele) { return (xGetAttribute(ele, "type") == "" || 
xGetAttribute(ele, "type") == "text/javascript"); }, "src", xcludedFiles, xcludeComparator ) break; case "img": files = this._getFiles( "img", "img", function(ele) { return (xGetAttribute(ele, "src") != ""); }, "src", xcludedFiles, xcludeComparator ) break; } return files; } hotswap.prototype._getFiles = function( type, tagName, tagFilterFunc, srcAttributeName, xcludedFiles, xcludeComparator ) { if( typeof xcludedFiles == "undefined" || !xcludedFiles) { xcludedFiles = []; } if( typeof xcludeComparator == "undefined" || !xcludeComparator) { xcludeComparator = false; } var fileNodes = []; var tags = xGetElementsByTagName(tagName); var src, detected, node; for(var i=0; i<tags.length; i++) { node = tags[i]; src = xGetAttribute(node,[srcAttributeName]); if( xIsEmpty( xGetAttribute(node, "data-hotswap-deleted") ) ) { if(src && tagFilterFunc(node)) { detected = false; for(var j=0; j<xcludedFiles.length; j++) { if( xContains(src,xcludedFiles[j]) ) { detected = true; break; } } if( detected == xcludeComparator ) { fileNodes.push({ type: type, node : node, tagName : tagName, srcAttributeName : srcAttributeName }); } } } } return fileNodes; }; hotswap.prototype._updatedUrl = function( url, getCleanUrl ) { var cleanUrl; if( typeof getCleanUrl == "undefined") { getCleanUrl = false; } url = cleanUrl = url.replace(new RegExp("(\\?|&)"+this.RND_PARAM_NAME+"=[0-9.]*","g"), ""); var queryString = "", randomizedQueryString = ""; if( xContains(url, "?") ) { if(xContains(url, "&" + this.RND_PARAM_NAME)) { queryString = url.split("&" + this.RND_PARAM_NAME).slice(1,-1).join(""); } randomizedQueryString = queryString + "&" + this.RND_PARAM_NAME + "=" + Math.random() * 99999999; } else { if(xContains(url, "?" + this.RND_PARAM_NAME)) { queryString = url.split("?" + this.RND_PARAM_NAME).slice(1,-1).join(""); } randomizedQueryString = queryString + "?" 
+ this.RND_PARAM_NAME + "=" + Math.random() * 99999999; } var foundAt = -1; if( !xIsEmpty( this._prefixCache ) ) { for(var i=0; i<this._prefixCache.length; ++i) { if( !xIsEmpty(this._prefixCache[i]) && foundAt < 0 ) { for(var h=0; h<this._prefixCache[i].length; ++h) { if( this._prefixCache[i][h] == cleanUrl + queryString ) { cleanUrl = this._prefixCache[i][0]; foundAt = i; break; } } } } } var prefixHistory = [cleanUrl + queryString]; var applyPrefix = true; if( prefixHistory[0].match( new RegExp('^[A-Za-z0-9-_]+://') ) ) { applyPrefix = false; } var prefix = this._prefix; if( !xIsEmpty(this._prefix) && this._prefix ) { prefixHistory.push( this._prefix + cleanUrl + queryString ); if(foundAt >= 0) { this._prefixCache[foundAt] = prefixHistory; } else { this._prefixCache.push( prefixHistory ); } } else { prefix = ""; } if( !applyPrefix ) { prefix = ""; } url = prefix + cleanUrl + randomizedQueryString; return (getCleanUrl) ? (cleanUrl + queryString) : url; } hotswap.prototype.refreshAllJs = function( excludedFiles ) { if( typeof excludedFiles == "undefined" || !excludedFiles) { excludedFiles = [] } excludedFiles.push("hotswap.js"); this._recreate( "js", excludedFiles, false, 0, true ); }; hotswap.prototype.refreshJs = function( includedFiles ) { this._recreate( "js", includedFiles, true, 0, true ); }; hotswap.prototype.refreshAllCss = function( excludedFiles ) { this._recreate( "css", excludedFiles, false, this.FILE_REMOVAL_DELAY ); }; hotswap.prototype.refreshCss = function( includedFiles ) { this._recreate( "css", includedFiles, true, this.FILE_REMOVAL_DELAY ); }; hotswap.prototype.refreshAllImg = function( excludedFiles ) { this._reload( "img", excludedFiles, false ); }; hotswap.prototype.refreshImg = function( includedFiles ) { this._reload( "img", includedFiles, true ); }; hotswap.prototype.setPrefix = function( prefix ) { this._prefix = prefix; var gui = xGetElementById(this.CSS_HTML_PREFIX + "_wrapper"); if( gui ) { if( !xIsEmpty(this._prefix) && this._prefix ) 
{ xGetElementById(this.CSS_HTML_PREFIX+"-prefix").value = this._prefix; } else { xGetElementById(this.CSS_HTML_PREFIX+"-prefix").value = ""; } } } hotswap.prototype.getPrefix = function() { return this._prefix; } hotswap.prototype.createGui = function( nDistanceFromTopInPercent ) { if( xIsEmpty(nDistanceFromTopInPercent) ) { nDistanceFromTopInPercent = 20; } var gui = xGetElementById(this.CSS_HTML_PREFIX + "_wrapper"); if( gui ) { xRemove(xGetElementById(this.CSS_HTML_PREFIX + "_wrapper")); } gui = xCreateElement("div"); xSetAttribute( gui, "id", this.CSS_HTML_PREFIX + "_wrapper" ); var guiHtml = xReplace( this._guiHtml, "PREFIX", this.CSS_HTML_PREFIX ); guiHtml = xReplace( guiHtml, '20%;/*distance from top*/', nDistanceFromTopInPercent+'%;/*distance from top*/' ); gui.innerHTML = guiHtml; xAppendChild( xGetElementsByTagName("body")[0], gui ); if( !xIsEmpty(this._guiCache) ) { this._guiCache = {}; } this._guiCreateFilesList(); if( !xIsEmpty(this._prefix) && this._prefix ) { xGetElementById(this.CSS_HTML_PREFIX+"-prefix").value = this._prefix; } var self = this; xAddEventListener( xGetElementById(this.CSS_HTML_PREFIX+"-toggle"), "click", function(evt) { var gui = xGetElementById(self.CSS_HTML_PREFIX); if( xHasClass(gui, "mini") ) { xRemoveClass( gui, "mini" ); } else { xAddClass( gui, "mini" ); } }); xAddEventListener( xGetElementById(this.CSS_HTML_PREFIX+"-prefix"), "blur", function(evt) { self._guiPrefixChanged(evt.target); }); xAddEventListener( xGetElementById(this.CSS_HTML_PREFIX+"-submit-selected"), "click", function(evt) { self._guiRefreshSelected() }); xAddEventListener( xGetElementById(this.CSS_HTML_PREFIX+"-submit-start"), "click", function(evt) { if( xGetAttribute(evt.target, "class") != this.CSS_HTML_PREFIX+"-seconds" ) { var input, nSeconds = 1; var children = evt.target.children; for(var i=0; i<children.length; ++i) { if( xGetAttribute(children[i], "class") == this.CSS_HTML_PREFIX+"-seconds" ) { nSeconds = children[i].value; } } 
self._guiRefreshSelected(); self._guiRefreshStart( nSeconds ); } }); xAddEventListener( xGetElementById(this.CSS_HTML_PREFIX+"-submit-stop"), "click", function(evt) { self._guiRefreshStop(); }); xAddEventListener( xGetElementById(this.CSS_HTML_PREFIX+"-submit-refresh-list"), "click", xBind(self.guiRefreshFilesList,self) ); } hotswap.prototype._guiCreateFilesList = function() { this._guiCache.files = []; this._guiCache.activeFiles = { "css" : [], "js" : [], "img" : [] }; var self = this; var createFilesList = function(list, files) { var i, j, r, clone, template, file, fileName, nodesToRemove = []; for(j=0; j<list.children.length; ++j) { if( xHasClass( list.children[j], "template" ) ) { template = list.children[j]; } else { if( !xHasClass( list.children[j], self.CSS_HTML_PREFIX + "-headline" ) ) { nodesToRemove.push(list.children[j]); } } } for(r=0; r<nodesToRemove.length; ++r) { xRemove( nodesToRemove[r] ); } for(i=0; i<files.length; ++i) { file = files[i]; clone = xCloneNode( template ); xRemoveClass( clone, "template" ); fileName = self._updatedUrl( xGetAttribute( file.node, file.srcAttributeName ), true ); if( !xContains(self._guiCache.files,fileName) ) { self._guiCache.files.push(fileName); clone.innerHTML = fileName; xAppendChild( list, clone ); xAddEventListener( clone, "click", (function(type, fileName){ return function(evt){ xStopPropagation(evt); xPreventDefault(evt); self._guiClickedFile(evt.target, type, fileName); }; })(file.type, fileName) ); } } } createFilesList( xGetElementById(this.CSS_HTML_PREFIX+"-css"), this._getFilesByType("css") ); createFilesList( xGetElementById(this.CSS_HTML_PREFIX+"-js"), this._getFilesByType("js", ["hotswap.js"]) ); createFilesList( xGetElementById(this.CSS_HTML_PREFIX+"-img"), this._getFilesByType("img") ); } hotswap.prototype.deleteGui = function() { var gui = xGetElementById(this.CSS_HTML_PREFIX + "_wrapper"); if( gui ) { xRemove(xGetElementById(this.CSS_HTML_PREFIX + "_wrapper")); } } 
hotswap.prototype._guiPrefixChanged = function(ele) { if( ele ) { this.setPrefix(ele.value); } }, hotswap.prototype._guiClickedFile = function( ele, sType, sFileName ) { var activeFiles = this._guiCache.activeFiles[sType]; if( xContains( activeFiles, sFileName ) ) { xRemoveClass(ele, "active"); activeFiles.splice( activeFiles.indexOf(sFileName), 1 ) } else { xAddClass(ele, "active"); activeFiles.push( sFileName ); } }, hotswap.prototype._guiRefreshSelected = function() { var activeFiles = this._guiCache.activeFiles; if( activeFiles['css'].length > 0 ) { this.refreshCss( activeFiles['css'] ); } if( activeFiles['js'].length > 0 ) { this.refreshJs( activeFiles['js'] ); } if( activeFiles['img'].length > 0 ) { this.refreshImg( activeFiles['img'] ); } }, hotswap.prototype._guiRefreshStart = function( nSeconds ) { if( this._guiGuiRefreshInterval !== null ) { this._guiRefreshStop(); } var self = this; this._guiGuiRefreshInterval = setInterval( xBind(this._guiRefreshSelected, this), nSeconds * 1000 ); xAddClass( xGetElementById(this.CSS_HTML_PREFIX+"-submit-start"), "inactive" ); xRemoveClass( xGetElementById(this.CSS_HTML_PREFIX+"-submit-stop"), "inactive" ); }, hotswap.prototype._guiRefreshStop = function() { if( this._guiGuiRefreshInterval !== null ) { clearInterval(this._guiGuiRefreshInterval); } this._guiGuiRefreshInterval = null; xRemoveClass( xGetElementById(this.CSS_HTML_PREFIX+"-submit-start"), "inactive" ); xAddClass( xGetElementById(this.CSS_HTML_PREFIX+"-submit-stop"), "inactive" ); } hotswap.prototype.guiRefreshFilesList = function() { this._guiCreateFilesList(); } }).call(this); ```
Is there a way to "hot swap" JavaScript code within the browser?
[ "javascript", "debugging", "frameworks" ]
I remember from C days that we were encouraged to use

```
i > -1
```

instead of

```
i >= 0
```

because of performance. Does this still apply in the C# .NET world? What are the performance implications of using one over the other with today's compilers? i.e. Is the compiler clever enough to optimize these for you?

(As an aside try and type the question "use >= or >" into the question field on Stack Overflow and see what happens.)
No, there are no performance issues associated with comparison operators. And any good compiler would optimize something this trivial anyway. I'm not sure where you got the suggestion to use "i > -1" rather than "i >= 0". On the x86 architecture, it makes no difference which you use: either case takes exactly two instructions... one to compare and one to jump:

```
;; if (i > -1) {
cmp eax, -1
jle else
then:
    ...
else:

;; if (i >= 0) {
cmp eax, 0
jl else
then:
    ...
else:
```

On most RISC architectures that I know, "i >= 0" may actually be faster since there is usually a dedicated zero register, and "i > -1" may require loading a constant. For example, MIPS only has a < instruction (no <=). Here is how the two constructs would be (naively!) expressed in MIPS assembly language:

```
// if (i >= 0) { (assuming i is in register $t0)
slt $t1, $t0, $0   // in C: t1 = (t0 < 0)
bne $t1, $0, else  // jump if t1 != 0, that is if t0 < 0
nop
then:
    ...
else:

// if (i > -1) { (assuming i is in register $t0)
addi $t2, $0, -1   // in C: t2 = -1
slt $t1, $t2, $t0  // in C: t1 = (t2 < t0) = (-1 < t0)
beq $t1, $0, else  // jump if t1 == 0, that is if t0 <= -1
nop
then:
    ...
else:
```

So in the naive, general case, it will actually be one instruction faster to do "i >= 0" on MIPS. Of course, RISC code is so heavily optimizable that a compiler would likely change either of these instruction sequences almost beyond recognition :-)

So... the short answer is no no no, no difference.
Quite apart from the fact that any decent compiler does the right thing, and apart from the fact that in modern architectures there's no speed difference between `>` and `>=` comparisons, the bigger picture says that this is a "micro-optimisation" that doesn't affect runtime performance in the vast majority of cases.

In the case of comparisons it usually doesn't affect readability whichever way you write it, but there are occasions when picking one boundary over the other is clearer: e.g.,

```
if (length >= str.size())
```

versus

```
if (length > str.size() - 1)
```

I don't know about you, but I'd pick option 1 any day. :-)

In cases that don't appreciably affect performance, such as this, the more readable option should win.
Use "greater than or equals" or just "greater than"
[ "c#", ".net", "c" ]
I am using msbuild to build a C++ project and I want to suppress the 'Post-Build Event'. I have tried the following properties with no success:

```
/property:PostBuildEvent=
/property:VCPostBuildEventTool=
```

Neither will make any difference and the post build events are still executed. Does anyone know how to suppress these (and potentially other) events using msbuild?
I just figured out how to do this. There are two steps involved:

First, if you're not starting the msbuild process using a batch file already, create a batch file that calls the msbuild process. Then add a variable in the batch file before you call the msbuild process. Something like this:

```
set ISFULLBUILD=True
msbuild TFSBuild.proj >> BuildOutput.txt
```

Then, you need to modify your post build event to only do something if the variable is not set, or has the wrong value:

```
IF NOT '%ISFULLBUILD%'=='True' copy /y $(TargetFileName) "..\..\..\..\..\..\Binaries\$(ConfigurationName)\"
```

In this case - the TargetFileName (Whatever.dll) will only get copied when you're building from Visual Studio. But, when you build using your new batch file - it will NOT execute the command after the IF statement.
If you are able to modify the post-build events, you could do this with an environment variable, say `SKIP_POST_BUILD_EVENTS`. You can then set that environment variable before calling `msbuild`, and check whether the variable exists in the post-build event before executing whatever code is there.
Suppressing C++ 'Post-Build Event' when using msbuild
[ "c++", "msbuild" ]
I received a WSDL file for a web service interface that our system should call somewhere in the future. Until then, I'd like to setup a dummy/mockup web service that does nothing else than to log the web service invocations and return dummy data. What I've done so far is to generate Java objects from the WSDL file. What's the fastest way to setup such a mockup web service when you already have an application server (here: JBoss) running?
We just faced this same problem, and found [SoapUI](http://www.soapui.org/) to be the perfect tool. Given a WSDL it'll create a service on your machine you can call, and it allows you to edit the response as you need.
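If installing a tool isn't an option, a bare-bones mock can also be hand-rolled on top of the JDK's built-in `com.sun.net.httpserver.HttpServer` (available since Java 6). This is only a sketch, not what the answer above describes: the endpoint path, element names, and response body below are invented for illustration. It logs each invocation and returns canned XML, and the `main` method exercises the mock once so the round trip is visible:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class MockWebService {

    // canned SOAP-ish payload; the element names are invented for this sketch
    static final String DUMMY_RESPONSE =
        "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
        + "<soap:Body><dummyResponse>ok</dummyResponse></soap:Body>"
        + "</soap:Envelope>";

    static void handle(HttpExchange exchange) throws java.io.IOException {
        // log the invocation: method, URI and request body
        byte[] request = readAll(exchange.getRequestBody());
        System.out.println(exchange.getRequestMethod() + " " + exchange.getRequestURI());
        System.out.println(new String(request, "UTF-8"));

        // always answer with the same dummy payload
        byte[] body = DUMMY_RESPONSE.getBytes("UTF-8");
        exchange.getResponseHeaders().set("Content-Type", "text/xml");
        exchange.sendResponseHeaders(200, body.length);
        OutputStream out = exchange.getResponseBody();
        out.write(body);
        out.close();
    }

    static byte[] readAll(InputStream in) throws java.io.IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        for (int n; (n = in.read(chunk)) != -1; ) buf.write(chunk, 0, n);
        return buf.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // port 0 = pick any free port, so the demo can't collide with anything
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/mock/service", MockWebService::handle);
        server.start();
        int port = server.getAddress().getPort();

        // exercise the mock once, as a real client would
        HttpURLConnection conn = (HttpURLConnection)
            new URL("http://localhost:" + port + "/mock/service").openConnection();
        conn.setDoOutput(true); // turns the request into a POST
        OutputStream reqOut = conn.getOutputStream();
        reqOut.write("<dummyRequest/>".getBytes("UTF-8"));
        reqOut.close();
        System.out.println("HTTP " + conn.getResponseCode());
        System.out.println(new String(readAll(conn.getInputStream()), "UTF-8"));

        server.stop(0);
    }
}
```

In a real setup you would drop the self-test from `main` and leave the server running on a fixed port so the application under development can point at it.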
You can also use [Fiddler](http://www.fiddlertool.com/fiddler/), a HTTP Debugging Proxy. You can easily configure Fiddler to return a pre-defined HTTP response with its AutoResponder feature when a request is sent to a particular URL.
Dummy web service
[ "java", "web-services", "mockups" ]
I've tried Red Gate's SQL Prompt and like it, but cannot afford it at the moment. I miss the good code completion that this tool provides. Are there any good free tools out there that have SQL code completion (table and column names, etc)? I've tried TOAD, but it was quite buggy. It would crash, and would also leave connections open to the database after the app was closed. I'm also aware that SQL Server 2008 has some form of code completion, but it requires that the database you are connected to be 2008, not just the Management Studio version. My employer will eventually upgrade, but I believe this may be a while in coming, so I'd like a good tool in the interim. Any good recommendations?
Red Gate's [SQL Prompt](http://www.red-gate.com/products/SQL_Prompt/index.htm) is what you want. It's not free, but it **is** worth it.
I have tried a few different tools myself, and nothing really compared to SQL Prompt. I personally recommend contacting Red Gate if you like the tool; they are known to have "specials" at given times where you might be able to get a discount, especially this time of the month. Otherwise, contact me and I can get you 5% off anyway.
What is a good tool to get SQL code completion with SQL Server 2005?
[ "sql", "sql-server" ]
I develop and maintain small intranet web apps written in JSP and Java. They run on a Resin web server, without dedicated httpd like Apache. Before performing maintenance, I activate a very simple Ajax message to urge users to logout. What I want is, while I do maintenance, a user trying to log in to that particular app to see a "Sorry, temporarily out of service" page, while all other web apps running on the same web server are available as usual. What are some best practices in this situation?
Setup an alternate site on your webserver with the outage message. Then when doing maintenance, redirect your website to the maintenance site while you update it. That way if users have pages other than the default page bookmarked, they will still get the outage message.

Otherwise, a simple approach is to just swap out your default/login page with a maintenance page.
I just swap out the first page of the web app with a linkless page containing the message you mentioned.
How can I show a "temporarily out of service" page to web-app users during maintenance?
[ "java", "jsp" ]
Let's say we have a method signature like

```
public static function explodeDn($dn, array &$keys = null, array &$vals = null, $caseFold = self::ATTR_CASEFOLD_NONE)
```

we can easily call the method by omitting all parameters after `$dn`:

```
$dn=Zend_Ldap_Dn::explodeDn('CN=Alice Baker,CN=Users,DC=example,DC=com');
```

We can also call the method with 3 parameters:

```
$dn=Zend_Ldap_Dn::explodeDn('CN=Alice Baker,CN=Users,DC=example,DC=com', $k, $v);
```

and with 4 parameters:

```
$dn=Zend_Ldap_Dn::explodeDn('CN=Alice Baker,CN=Users,DC=example,DC=com', $k, $v, Zend_Ldap_Dn::ATTR_CASEFOLD_UPPER);
```

But why is it impossible to call the method with the following parameter combination for example:

```
$dn=Zend_Ldap_Dn::explodeDn('CN=Alice Baker,CN=Users,DC=example,DC=com', $k, null, Zend_Ldap_Dn::ATTR_CASEFOLD_UPPER);
$dn=Zend_Ldap_Dn::explodeDn('CN=Alice Baker,CN=Users,DC=example,DC=com', null, $v);
```

What's the difference between passing `null` to the method and relying on the default value? Is this constraint written in the manual? Can it be circumvented?
It's because you can't have a reference to null. You can have a reference to a variable that contains null - that is exactly what the default value does. Or you can pass in null as a literal value - but since you want an out parameter this is not possible here.
While you must create a dummy variable for by-ref arguments if you want to pass NULL explicitly, you don't have to create that variable on a separate line. You can use an assignment expression like $dummy=NULL directly as a function argument:

```
function foo (&$ref = NULL)
{
    if (is_null($ref)) $ref="bar";
    echo "$ref\n";
}

foo($dummy = NULL); //this works!
```
PHP by-reference parameters and default null
[ "php", "null", "pass-by-reference" ]
Is there a preference or behavior difference between using:

```
if(obj.getClass().isArray()) {}
```

and

```
if(obj instanceof Object[]) {}
```

?
In most cases, you should use the `instanceof` operator to test whether an object is an array.

Generally, you test an object's type before downcasting to a particular type which is known at compile time. For example, perhaps you wrote some code that can work with an `Integer[]` or an `int[]`. You'd want to guard your casts with `instanceof`:

```
if (obj instanceof Integer[]) {
    Integer[] array = (Integer[]) obj;
    /* Use the boxed array */
} else if (obj instanceof int[]) {
    int[] array = (int[]) obj;
    /* Use the primitive array */
} else ...
```

At the JVM level, the `instanceof` operator translates to a specific ["instanceof"](http://java.sun.com/docs/books/jvms/second_edition/html/Instructions2.doc6.html#instanceof) byte code, which is optimized in most JVM implementations.

In rarer cases, you might be using reflection to traverse an object graph of unknown types. In cases like this, the `isArray()` method can be helpful because you don't know the component type at compile time; you might, for example, be implementing some sort of serialization mechanism and be able to pass each component of the array to the same serialization method, regardless of type.

There are two special cases: null references and references to primitive arrays. A null reference will cause `instanceof` to yield `false`, while calling `getClass().isArray()` on it throws a `NullPointerException`. Applied to a primitive array, `instanceof` yields `false` unless the array type of the right-hand operand exactly matches; in particular, a primitive array is never an `Object[]`. In contrast, `isArray()` will return `true` for any array, regardless of component type.
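Those two special cases are easy to verify with a few lines (the class and variable names here are arbitrary):

```java
public class ArrayTestDemo {
    public static void main(String[] args) {
        Object boxed = new Integer[] { 1, 2, 3 };
        Object primitive = new int[] { 1, 2, 3 };
        Object nothing = null;

        // instanceof cares about the component type: an int[] is not an Object[]
        System.out.println(boxed instanceof Object[]);      // true
        System.out.println(primitive instanceof Object[]);  // false

        // isArray() is true for any array, primitive or not
        System.out.println(boxed.getClass().isArray());     // true
        System.out.println(primitive.getClass().isArray()); // true

        // null: instanceof quietly yields false...
        System.out.println(nothing instanceof Object[]);    // false

        // ...while getClass() on a null reference throws NullPointerException
        try {
            nothing.getClass().isArray();
        } catch (NullPointerException expected) {
            System.out.println("NPE, as described above");
        }
    }
}
```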
In the latter case, if obj is null you won't get a NullPointerException but a false.
Java array reflection: isArray vs. instanceof
[ "java", "arrays", "reflection" ]
Is it possible to `typedef` long types that use templates? For example:

```
template <typename myfloat_t>
class LongClassName
{
    // ...
};

template <typename myfloat_t>
typedef std::vector< boost::shared_ptr< LongClassName<myfloat_t> > > LongCollection;

LongCollection<float> m_foo;
```

This doesn't work, but is there a way to achieve a similar effect? I just want to avoid having to type and read a type definition that covers almost the full width of my editor window.
No, that isn't possible currently. It will be made possible in C++0X AFAIK. The best I can think of is

```
template<typename T>
struct LongCollection {
    typedef std::vector< boost::shared_ptr< LongClassName<T> > > type;
};

LongCollection<float>::type m_foo;
```
If you don't want to go the macro way you have to make individual typedefs for each type:

```
typedef std::vector< boost::shared_ptr< LongClassName<float> > > FloatCollection;
typedef std::vector< boost::shared_ptr< LongClassName<double> > > DoubleCollection;
```
typedefs for templated classes?
[ "c++", "templates", "typedef" ]
Looking up LINQ and Or in google is proving somewhat difficult so here I am. I want to do the following:

```
(from creditCard in AvailableCreditCards
 where creditCard.BillToName.ToLowerInvariant().Contains(txtFilter.Text.ToLowerInvariant())
 **or creditCard.CardNumber.().Contains(txtFilter.Text)**
 orderby creditCard.BillToName
 select creditCard)
```
C# keywords supporting LINQ are still C#. Consider `where` as a conditional like `if`; you perform logical operations in the same way. In this case, a logical-OR, you use `||`

```
(from creditCard in AvailableCreditCards
 where creditCard.BillToName.ToLowerInvariant().Contains(txtFilter.Text.ToLowerInvariant())
    || creditCard.CardNumber.().Contains(txtFilter.Text)
 orderby creditCard.BillToName
 select creditCard)
```
You can use the .Where() function to accomplish this.

```
var cards = AvailableCreditCards.Where(card => card.CardNumber.Contains(txtFilter.Text)
    || card.BillToName.ToLowerInvariant().Contains(txtFilter.Text.ToLowerInvariant()));
```
Can I perform an "or" operation in LINQ to objects?
[ "c#", "linq" ]
I want to pass an int list (List&lt;int&gt;) as a declarative property to a web user control like this:

```
<UC:MyControl runat="server" ModuleIds="1,2,3" />
```

I created a TypeConverter to do this:

```
public class IntListConverter : System.ComponentModel.TypeConverter
{
    public override bool CanConvertFrom(
        System.ComponentModel.ITypeDescriptorContext context, Type sourceType)
    {
        if (sourceType == typeof(string))
            return true;
        return base.CanConvertFrom(context, sourceType);
    }

    public override object ConvertFrom(
        System.ComponentModel.ITypeDescriptorContext context,
        System.Globalization.CultureInfo culture, object value)
    {
        if (value is string)
        {
            string[] vals = ((string)value).Split(
                new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries);
            List<int> list = new List<int>();
            foreach (string s in vals)
            {
                list.Add(Convert.ToInt32(s));
            }
            return list;
        }
        return base.ConvertFrom(context, culture, value);
    }

    public override bool CanConvertTo(ITypeDescriptorContext context, Type destinationType)
    {
        if (destinationType == typeof(InstanceDescriptor))
            return true;
        return base.CanConvertTo(context, destinationType);
    }

    public override object ConvertTo(ITypeDescriptorContext context,
        System.Globalization.CultureInfo culture, object value, Type destinationType)
    {
        if (destinationType == typeof(InstanceDescriptor) && value is List<int>)
        {
            List<int> list = (List<int>)value;
            ConstructorInfo constructor = typeof(List<int>).GetConstructor(new Type[] { typeof(IEnumerable<int>) });
            InstanceDescriptor id = new InstanceDescriptor(constructor, new object[] { list.ToArray() });
            return id;
        }
        return base.ConvertTo(context, culture, value, destinationType);
    }
}
```

And then added the attribute to my property:

```
[TypeConverter(typeof(IntListConverter))]
public List<int> ModuleIds
{
    get { ... }
    set { ... }
}
```

But I get this error at runtime: `Unable to generate code for a value of type 'System.Collections.Generic.List'1[[System.Int32, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]'. This error occurred while trying to generate the property value for ModuleIds.`

My question is similar to one found [here](https://stackoverflow.com/questions/116797/passing-int-array-as-parameter-in-web-user-control), but the solution does not solve my problem:

**Update:** I found a page which solved the first problem. I updated the code above to show my fixes. The added code is the `CanConvertTo` and `ConvertTo` methods. Now I get a different error: `Object reference not set to an instance of an object.` This error seems to be indirectly caused by something in the `ConvertTo` method.
After hooking a debugger into Cassini, I see that the null ref is actually coming from System.Web.Compilation.CodeDomUtility.GenerateExpressionForValue, which is basically trying to get an expression for the int[] array you pass into the List constructor. Since there's no type descriptor for the int[] array, it fails (and throws a null ref in the process, instead of the "can't generate property set exception" that it should).

I can't figure out a built-in way of getting a serializable value into a List&lt;int&gt;, so I just used a static method:

```
class IntListConverter : TypeConverter
{
    public static List<int> FromString(string value)
    {
        return new List<int>(
            value
                .Split(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries)
                .Select(s => Convert.ToInt32(s))
        );
    }

    public override object ConvertTo(ITypeDescriptorContext context, CultureInfo culture, object value, Type destinationType)
    {
        if (destinationType == typeof(InstanceDescriptor))
        {
            List<int> list = (List<int>)value;
            return new InstanceDescriptor(this.GetType().GetMethod("FromString"),
                new object[] {
                    string.Join(",", list.Select(i => i.ToString()).ToArray())
                }
            );
        }
        return base.ConvertTo(context, culture, value, destinationType);
    }
}
```
I solved something similar by creating 2 properties:

```
public List<int> ModuleIDs
{
    get ....
    set ...
}

public string ModuleIDstring
{
    get ...
    set ...
}
```

The ModuleIDstring converts the value it is set to into a list and sets the ModuleIDs property. This will also make the ModuleIDs usable from a PropertyGrid etc.

OK, not the best or most type-safe solution, but for me it works.
Passing int list as a parameter to a web user control
[ "c#", "asp.net", "typeconverter" ]
I am trying to use `ResourceBundle#getStringArray` to retrieve a `String[]` from a properties file. The description of this method in the documentation reads:

> Gets a string array for the given key from this resource bundle or one of its parents.

However, I have attempted to store the values in the properties file as multiple individual key/value pairs:

```
key=value1
key=value2
key=value3
```

and as a comma-delimited list:

```
key=value1,value2,value3
```

but neither of these is retrievable using `ResourceBundle#getStringArray`. How do you represent a set of key/value pairs in a properties file such that they can be retrieved using `ResourceBundle#getStringArray`?
A `Properties` object can hold **`Object`s**, not just `String`s. That tends to be forgotten because they're overwhelmingly used to load .properties files, and so often will only contain `String`s. [The documentation](https://web.archive.org/web/20081217073139/http://java.sun.com/j2se/1.4.2/docs/api/java/util/ResourceBundle.html#getStringArray(java.lang.String)) indicates that calling `bundle.getStringArray(key)` is equivalent to calling `(String[]) bundle.getObject(key)`. That's the problem: the value isn't a `String[]`, it's a `String`. I'd suggest storing it in comma-delimited format and calling `split()` on the value.
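A small sketch of both points (the bundle class name and key are invented for illustration): a code-based `ListResourceBundle` really can hold a `String[]`, so `getStringArray` works against it, while a value read from a `.properties` file arrives as one `String` and has to be split by hand:

```java
import java.util.ListResourceBundle;
import java.util.ResourceBundle;

public class BundleArrayDemo {

    // a code-based bundle can store a real String[] under a key,
    // which is exactly what getStringArray() expects to find
    public static class Labels extends ListResourceBundle {
        @Override
        protected Object[][] getContents() {
            return new Object[][] {
                { "key", new String[] { "value1", "value2", "value3" } },
            };
        }
    }

    public static void main(String[] args) {
        ResourceBundle bundle = new Labels();
        String[] fromBundle = bundle.getStringArray("key"); // works: the value is a String[]
        System.out.println(fromBundle.length);              // 3

        // with a .properties file the value is a single String,
        // so retrieve it with getString() and split it yourself
        String raw = "value1,value2,value3";                // what getString("key") would return
        String[] fromString = raw.split("\\s*,\\s*");
        System.out.println(fromString[1]);                  // value2
    }
}
```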
You can use [Commons Configuration](http://commons.apache.org/proper/commons-configuration/userguide/howto_properties.html), which has methods `getList` and `getStringArray` that allow you to retrieve a list of comma separated strings.
How do I specify values in a properties file so they can be retrieved using ResourceBundle#getStringArray?
[ "java", "resourcebundle" ]
What is the state of generating Excel documents from a PHP application on a Linux server? I am interested in creating Office 97 (xls) Excel files.

My limited research on the subject has turned up this [Pear package](http://pear.php.net/package/Spreadsheet_Excel_Writer). It appears to have been in beta status since 2006.

Can you share your successes or failures in generating Excel files from PHP? Is there a reliable and mature tool available?

Update: For this application I do need to generate an Excel file, not just a CSV file.
There is something much better than the PEAR package out there! [PHPExcel](http://phpexcel.codeplex.com/)
I've used the PEAR package with lots of success, but there are some limitations when implementing complex formulas. It has something to do with the way Excel stores the formula, along with the last calculated result. Sometimes even though the formula is correct, when first opening the sheet the cells show as empty. Once the cell has focus and then loses focus, the calculation is performed and the cell populated. The library is actually a port of a Perl library, which is slightly more complete but has the same issues.
Can I reliably create Excel documents from a PHP application on a Linux server?
[ "php", "linux", "export-to-excel" ]
> **Possible Duplicate:**
> [When will C++0x be finished?](https://stackoverflow.com/questions/5436139/when-will-c0x-be-finished)

What is the latest news about C++0X? (or should I say C++1X) Any release date decided yet?
## UPDATE: years later...

The last draft was officially finalized a few weeks ago, in March 2011, and will be officially out around July 2011. The name of the new standard will be C++2011: <http://herbsutter.com/2011/03/25/we-have-fdis-trip-report-march-2011-c-standards-meeting/>

Microsoft's C++ compiler (VC10) provides some C++0x features (lambdas, decltype, auto, r-value references and nullptr). GCC provides a work-in-progress version that already implements a lot of features (see <http://gcc.gnu.org/projects/cxx0x.html>). Comeau C++ seems to be more advanced. Clang has started to provide some features, but not many for the moment (see <http://clang.llvm.org/cxx_status.html>).

So, most of the features will be available in the main C++ compilers by the time the ISO administration officially validates the draft. Some advanced features will still not be available for some years, I guess.
It's unlikely that this committee draft will become the FCD. I would say there will be at least 2 more meetings of the standard committee before it goes to FCD. I think there is something like a 1 year lag between the FCD and the actual standard, so it could be 2011 (or even later!!!)
C++0X when?
[ "c++", "c++11" ]
I have code similar to the following in many places:

```
var dbParams = db.ReadParams(memberID, product, GetSubscriptionFields());
Debug.Assert(dbParams.Count == 4);
_memberCode = dbParams[0];
_password = dbParams[1];
_userName = dbParams[2];
_reasonCode = dbParams[3];
```

ReadParams() returns an array of strings, the number of strings depending on the GetSubscriptionFields() function. I could use dbParams[] directly in my code, but I find it more helpful to give meaningful names to each of the values in the array.

Is there a way I can get all the results directly, without going through the array? I am looking for something like:

```
db.ReadParams(memberID, product, out _memberCode, out _password, out _userName, out _reasonCode);
```

or

```
Tuple<_memberCode, _password, _userName, _reasonCode> = db.ReadParams(memberID, product);
```

Of course, it has to be legal C# code :)
Why not use constants instead? Then in your code you could have ``` dbParams[MEMBER_CODE] dbParams[PASSWORD] dbParams[USERNAME] dbParams[REASON_CODE] ``` which meets your goal of meaningful names without changing the way the method works.
You are writing code in a highly object-oriented language, so why don't you use objects? ``` Member m = db.ReadParams(memberID, product, GetSubscriptionFields()); ``` and in your code you use ``` m.memberCode m.password m.username m.reasonCode ``` Of course you don't have to make the values publicly accessible; you can expose them only via getter/setter methods, and by providing only getters you can prevent them from being altered after object creation. Of course, different calls to db.ReadParams should return different objects, e.g. you can create an abstract base class and derive all possible result types from it. Therefore you may have to wrap db.ReadParams in another method that determines the right type of object to create: ``` ReadParamsResult rpr = myDb.ReadParamsAsObject(memberID, product, GetSubscriptionFields()); // Verify that the expected result object has been returned Debug.Assert(rpr is Member); // Downcast Member m = (Member)rpr; ```
Variable number of results from a function
[ "", "c#", "" ]
I'm trying to figure out a way to add a self-update feature to a Java/Swing application I'm working on. Basically I've got a bunch of jar files with extra functionality to be re-deployed to the installed users when they change. Nothing complicated, just check if a new version has been released, download them over HTTP, and then optionally offer to restart the app to the user. I had a look at webstart, and it could work. But this particular app does some funky stuff with classloading and GC memory settings that don't look like they are supported via webstart, or will at least complicate matters. (It's a tweaked build of JMeter) I also went down the road of adding in this plugin handler <http://swing-fx.blogspot.com/2008/06/add-auto-update-and-plugins-to-your.html>, but it is very alpha, and tries to do too much with the usual bugs you get with alpha stuff.
I did the exact same thing, but that was long ago, so there are probably better tools today. What I found I needed was a loader. The loader's main program did not have the app jars on its classpath. It first downloaded an update if required, then created a custom classloader with the app jars on its classpath and invoked the main method of the application's main class. It is not very complicated. IIRC I needed to do this because the jars could not be overwritten on Windows if they were already on the classpath. Hope this helps.
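A minimal sketch of that loader pattern, in case it helps. All class and jar names here are placeholders, and the actual download/update step is only indicated by a comment:

```java
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;

class Loader {

    /** Loads mainClassName through a fresh classloader over the given jars
     *  and invokes its public static void main(String[]). */
    static void launch(URL[] jars, String mainClassName, String[] args) throws Exception {
        try (URLClassLoader appLoader = new URLClassLoader(jars)) {
            Class<?> mainClass = appLoader.loadClass(mainClassName);
            Method main = mainClass.getMethod("main", String[].class);
            main.invoke(null, (Object) args);
        }
    }

    public static void main(String[] args) throws Exception {
        if (args.length < 2) {
            System.out.println("usage: java Loader <app.jar> <main-class> [app args...]");
            return;
        }
        // 1. Download/replace the app jars HERE, before any classloader has
        //    opened them -- on Windows an open jar cannot be overwritten.
        // 2. Then hand control to the real application:
        String[] appArgs = java.util.Arrays.copyOfRange(args, 2, args.length);
        launch(new URL[] { new java.io.File(args[0]).toURI().toURL() }, args[1], appArgs);
    }

    /** Tiny stand-in "application" so the sketch can be exercised without a real jar. */
    public static class DemoApp {
        public static String[] lastArgs;
        public static void main(String[] a) { lastArgs = a; }
    }
}
```

The key point is the one from the answer above: the loader itself never puts the application jars on its own classpath, so they stay writable until the URLClassLoader opens them.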
Updates, plugins, separation of concerns etc. are exactly what [OSGi](http://www.osgi.org) is about - you might want to take a look at it. It won't come for free (read: there is a steep initial learning curve, especially when you are currently using classloading tricks), but at least there are good open source implementations (Felix - see [felix.apache.org](http://felix.apache.org), Equinox - see [www.eclipse.org](http://www.eclipse.org), and others). For these implementations auto-updaters are available - if you write your modules correctly, it's possible to update at runtime without restarting.
What's the best way to add a self-update feature to a Java Swing application?
[ "", "java", "swing", "" ]
In AJAX applications that need to poll the server at regular intervals (like a chat application), what is the recommended minimum time between two calls, so that the update is done as quickly as possible? What intervals are considered resource hogs for the server and the client?
It depends on the application, but for chat you probably want to poll pretty often - 1 to 4 seconds I'd say. What you can do is dynamically change the polling interval to decrease your server load - if nobody has said anything for a minute, increase to 10 seconds... after 5 minutes increase to 30 seconds - that kind of thing.
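That sliding-scale idea is just a function from idle time to polling delay. A sketch (in Java purely for illustration; the thresholds are taken from the answer's example):

```java
class PollBackoff {
    /** Milliseconds to wait before the next poll, given how long the chat has been idle. */
    static long nextPollDelayMillis(long idleMillis) {
        if (idleMillis < 60_000L)  return 2_000L;   // active conversation: poll every ~2 s
        if (idleMillis < 300_000L) return 10_000L;  // quiet for a minute: back off to 10 s
        return 30_000L;                             // quiet for 5+ minutes: down to 30 s
    }
}
```

Reset the idle clock (and with it the delay) back to the minimum whenever a new message arrives.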
We have a different solution for AJAX polling in our chat: the request is sent to the server and polls for the data on the *server side* in a loop with very short delay intervals (like 0.5 sec). Once data arrives, the request returns, and the next request is sent to the server immediately. Each request's timeout is set to something like 60 seconds; when it expires without receiving data, the next request replaces it.
Minimum time between subsequent AJAX calls
[ "", "javascript", "html", "ajax", "" ]
How do I get a function's name as a string? ``` def foo(): pass >>> name_of(foo) "foo" ```
``` my_function.__name__ ``` Using `__name__` is the preferred method as it applies uniformly. Unlike `func_name`, it works on built-in functions as well: ``` >>> import time >>> time.time.func_name Traceback (most recent call last): File "<stdin>", line 1, in ? AttributeError: 'builtin_function_or_method' object has no attribute 'func_name' >>> time.time.__name__ 'time' ``` Also the double underscores indicate to the reader this is a special attribute. As a bonus, classes and modules have a `__name__` attribute too, so you only have to remember one special name.
To get the current function's or method's name from inside it, consider: ``` import inspect this_function_name = inspect.currentframe().f_code.co_name ``` `sys._getframe` also works instead of `inspect.currentframe` although the latter avoids accessing a private function. To get the calling function's name instead, consider `f_back` as in `inspect.currentframe().f_back.f_code.co_name`. --- If also using `mypy`, it can complain that: > error: Item "None" of "Optional[FrameType]" has no attribute "f\_code" To suppress the above error, consider: ``` import inspect import types from typing import cast this_function_name = cast(types.FrameType, inspect.currentframe()).f_code.co_name ```
How to get a function name as a string?
[ "", "python", "string", "function", "" ]
An easy jQuery question. I have several identical forms ( except their name ) on one page with a few hidden inputs in each. I want to refer to them by using the form name and then the input name. ( the input names are not unique in my page ) So for instance: ``` var xAmt = $('#xForm'+num).('#xAmt'); ``` I really want to supply these values to an AJAX POST ``` $.ajax({ url: "X.asp", cache: false, type: "POST", data: "XID=xID&xNumber=xNum&xAmt=xAmt", ``` ... If I can get the values in the AJAX call even better.
``` function queryX( args, onSuccess ) { var queryString = [ "XID=", args.XID, "&xNumber=", args.xNumber, "&xAmt=", args.xAmt ].join(""); $.ajax({ url: "X.asp", cache: false, type: "POST", data: queryString, success : onSuccess }); } queryX({ XID : $("input[name='XID']").val(), xNumber : $("input[name='xNumber']").val(), xAmt : $("input[name='xAmt']").val() }, function( data ) { // $.ajax is asynchronous, so consume the response here in the callback; // returning it from the success handler would never reach the caller. var myReturnData = data; }); ``` **EDIT:** This allows you the most flexibility, and if only the input values will change (but the query string variables won't), then you can pass in whatever you want as the value.
The flexible way to do it has already been answered here, but you can also just make it work with your current code. (Forgive me if this was too basic for what you're looking for.) Drill down from the unique form name by using the CSS descendant selector (a space): ``` var xAmt = $('#xForm'+num+ ' #xAmt').val(); ``` Repeat for each value you need and call $.ajax just like you're doing.
How to reference an input within a form in jQuery
[ "", "javascript", "jquery", "ajax", "" ]
There is a console Java application which is supposed to run until it is stopped by `Ctrl`+`C` or closing the console window. How can that application be programmed to execute cleanup code before exit?
You could use a [Shutdown Hook](http://www.onjava.com/pub/a/onjava/2003/03/26/shutdownhook.html). Basically you need to create a Thread which will perform your shutdown actions, and then [add it as a shutdown hook](http://java.sun.com/javase/6/docs/api/java/lang/Runtime.html#addShutdownHook(java.lang.Thread)). For example: ``` class ShutdownHook extends Thread { public void run() { // perform shutdown actions } } // Then, somewhere in your code Runtime.getRuntime().addShutdownHook(new ShutdownHook()) ```
A shutdown hook is the way to go, but be aware that there is no guarantee that the code is actually executed. JVM crashes, power failures, or a simple "kill -9" on your JVM can prevent the code from cleaning up. Therefore you must ensure that your program stays in a consistent state even if it has been aborted abruptly. Personally, I simply use a database for all state storage. Its transaction model makes sure that the persistent storage is in a sane state no matter what happens. Database vendors have spent years making that code fool-proof, so why should I waste my time on problems that are already solved?
Get notification on a Java process termination
[ "", "java", "" ]
My servlet application includes a number of library .jars, some of which contain embedded log4j.xml or log4j.properties files. I'd like to ensure that log4j finds my log4j.xml first! I've tried searching for some specification of the priorities of the various classpath elements in a servlet (e.g. does WEB-INF/classes always precede WEB-INF/lib?), or some way to configure or tweak the servlet's classloader so that a given resource directory appears early in the classpath. So far, I've drawn a blank. Any suggestions on ensuring that a servlet .war file loads the correct log4j.xml via the classloader?
As far as I understand, resource selection from the classpath is non-deterministic (from the point of view of the app developer). Even if the same file is loaded consistently, the behaviour could change: 1. When you upgrade the version of your current container. 2. If you switch containers. The simplest solution is to *remove embedded log4j config files from library jars*. It is almost never a good idea to embed log4j configs, as it leads to exactly the problem you are seeing here... Are they third-party jars or jars you developed?
# Tomcat 8.5 Ditto Tomcat 8.0. See documentation: [Class Loader HOW-TO](https://tomcat.apache.org/tomcat-8.5-doc/class-loader-howto.html). # Tomcat 8.0 The answer is simple, taken from the Tomcat documentation page, [Class Loader HOW-TO](https://tomcat.apache.org/tomcat-8.0-doc/class-loader-howto.html). In particular notice the use of the `/WEB-INF/` directory/folder. > Therefore, from the perspective of a web application, class or resource loading looks in the following repositories, in this order: > > * Bootstrap classes of your JVM > * `/WEB-INF/classes` of your web application > * `/WEB-INF/lib/*.jar` of your web application > * System class loader classes (described above) > * Common class loader classes (described above) > > If the web application class loader is [configured](https://tomcat.apache.org/tomcat-8.0-doc/config/loader.html) with `<Loader delegate="true"/>` then the order becomes: > > * Bootstrap classes of your JVM > * System class loader classes (described above) > * Common class loader classes (described above) > * `/WEB-INF/classes` of your web application > * `/WEB-INF/lib/*.jar` of your web application # Tomcat 6 Excerpted from Tomcat 6 page, [Class Loader HOW-TO](http://tomcat.apache.org/tomcat-6.0-doc/class-loader-howto.html). > Therefore, from the perspective of a web application, class or resource loading looks in the following repositories, in this order: > > * Bootstrap classes of your JVM > * System class loader classes (described above) > * **`/WEB-INF/classes` of your web application** > * **`/WEB-INF/lib/*.jar` of your web application** > * `$CATALINA_HOME/lib` > * `$CATALINA_HOME/lib/*.jar`
Controlling the classpath in a servlet
[ "", "java", "tomcat", "servlets", "classpath", "" ]
What is the best way to use ListView and a set of GroupBoxes as an Options window? For example, Listview will have items such as General, Sounds, Shortcuts and there will be three groupboxes defining those same things. What would be the best programmatical way to navigate through them everytime an item in the listview is selected? Hide all groupboxes, then show groupbox based by index of selected listview? Or does anyone have a better idea to handle something like this?
I don't know if there's a better way, but in the past what you described is the general approach I have taken.
I may have misunderstood your question, but perhaps a TreeView would be more appropriate for your problem? This would allow you to have top-level category nodes (like General, Sounds etc.) and then lists of items under each node.
What is the best way to use ListView and GroupBoxes as an "options" window?
[ "", "c#", "user-interface", "" ]
I've seen (and used) on various projects this layout, with a group of fields followed by a group of properties: ``` private int MyIntField; private string MyStringField; public int MyInt { get { return MyIntField; } set { MyIntField = value; } } public string MyString { get { return MyStringField; } set { MyStringField = value; } } ``` And I've also encountered this layout with fields next to their property: ``` private int MyIntField; public int MyInt { get { return MyIntField; } set { MyIntField = value; } } private string MyStringField; public string MyString { get { return MyStringField; } set { MyStringField = value; } } ``` Is there a reason to consider one better than the other? I think most coding standards recommend Option #1, but sometimes it's handy having the field next to the property that operates on it. *Note: I'm assuming non-trivial properties that can't use auto-implemented properties.*
I think it is whatever the team feels comfortable with. Settle on a standard for the project/company/language and stick to it. I prefer the private variables all together, the methods/interfaces together, the private members... I think you get the point.
I group them at the top of the class. In fact, the only things above my private attributes are the constants of the class.
Do you group private fields or put them with their property?
[ "", "c#", "layout", "coding-style", "" ]
Is it possible to search every field of every table for a particular value in Oracle? There are hundreds of tables with thousands of rows in some tables so I know this could take a very long time to query. But the only thing I know is that a value for the field I would like to query against is `1/22/2008P09RR8`. I've tried using this statement below to find an appropriate column based on what I think it should be named but it returned no results. ``` SELECT * from dba_objects WHERE object_name like '%DTN%' ``` There is absolutely no documentation on this database and I have no idea where this field is being pulled from. Any thoughts?
Quote: > I've tried using this statement below > to find an appropriate column based on > what I think it should be named but it > returned no results.\* > > ``` > SELECT * from dba_objects WHERE > object_name like '%DTN%' > ``` A column isn't an object. If you mean that you expect the column name to be like '%DTN%', the query you want is: ``` SELECT owner, table_name, column_name FROM all_tab_columns WHERE column_name LIKE '%DTN%'; ``` But if the 'DTN' string is just a guess on your part, that probably won't help. By the way, how certain are you that '1/22/2008P09RR8' is a value selected directly from a single column? If you don't know at all where it is coming from, it could be a concatenation of several columns, or the result of some function, or a value sitting in a nested table object. So you might be on a wild goose chase trying to check every column for that value. Can you not start with whatever client application is displaying this value and try to figure out what query it is using to obtain it? Anyway, diciu's answer gives one method of generating SQL queries to check every column of every table for the value. You can also do similar stuff entirely in one SQL session using a PL/SQL block and dynamic SQL. Here's some hastily-written code for that: ``` SET SERVEROUTPUT ON SIZE 100000 DECLARE match_count INTEGER; BEGIN FOR t IN (SELECT owner, table_name, column_name FROM all_tab_columns WHERE owner <> 'SYS' and data_type LIKE '%CHAR%') LOOP EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM ' || t.owner || '.' || t.table_name || ' WHERE '||t.column_name||' = :1' INTO match_count USING '1/22/2008P09RR8'; IF match_count > 0 THEN dbms_output.put_line( t.table_name ||' '||t.column_name||' '||match_count ); END IF; END LOOP; END; / ``` There are some ways you could make it more efficient too. In this case, given the value you are looking for, you can clearly eliminate any column that is of NUMBER or DATE type, which would reduce the number of queries. 
Maybe even restrict it to columns where type is like '%CHAR%'. Instead of one query per column, you could build one query per table like this: ``` SELECT * FROM table1 WHERE column1 = 'value' OR column2 = 'value' OR column3 = 'value' ... ; ```
I did some modification to the above code to make it work faster if you are searching in only one owner. You just have to change the 3 variables v\_owner, v\_data\_type and v\_search\_string to fit what you are searching for. ``` SET SERVEROUTPUT ON SIZE 100000 DECLARE match_count INTEGER; -- Type the owner of the tables you are looking at v_owner VARCHAR2(255) :='ENTER_USERNAME_HERE'; -- Type the data type you are look at (in CAPITAL) -- VARCHAR2, NUMBER, etc. v_data_type VARCHAR2(255) :='VARCHAR2'; -- Type the string you are looking at v_search_string VARCHAR2(4000) :='string to search here...'; BEGIN FOR t IN (SELECT table_name, column_name FROM all_tab_cols where owner=v_owner and data_type = v_data_type) LOOP EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM '||t.table_name||' WHERE '||t.column_name||' = :1' INTO match_count USING v_search_string; IF match_count > 0 THEN dbms_output.put_line( t.table_name ||' '||t.column_name||' '||match_count ); END IF; END LOOP; END; / ```
Search All Fields In All Tables For A Specific Value (Oracle)
[ "", "sql", "oracle", "search", "plsql", "database-table", "" ]
I have a directory of bitmaps that are all of the same dimension. I would like to convert these bitmaps into a video file. I don't care if the video file (codec) is wmv or avi. My only requirement is that I specify the frame rate. This does not need to be cross platform, Windows (Vista and XP) only. I have read a few things about using the Windows Media SDK or DirectShow, but none of them are that explicit about providing code samples. Could anyone provide some insight, or some valuable resources that might help me to do this in C#?
At the risk of being voted down, I'll offer a possible alternative option-- a buffered Bitmap animation. ``` double framesPerSecond; Bitmap[] imagesToDisplay; // add the desired bitmaps to this array System.Timers.Timer playbackTimer; int currentImageIndex; PictureBox displayArea; (...) currentImageIndex = 0; playbackTimer.Interval = 1000 / framesPerSecond; playbackTimer.AutoReset = true; playbackTimer.Elapsed += new ElapsedEventHandler(playbackNextFrame); playbackTimer.Start(); (...) void playbackNextFrame(object sender, ElapsedEventArgs e) { if (currentImageIndex >= imagesToDisplay.Length) { playbackTimer.Stop(); return; } // Elapsed fires on a thread-pool thread; marshal to the UI thread // (or use System.Windows.Forms.Timer) before touching the PictureBox. displayArea.Image = imagesToDisplay[currentImageIndex++]; } ``` An approach such as this works well if the viewing user has access to the images, enough resources to keep the images in memory, doesn't want to wait for a video to encode, and there may exist a need for different playback speeds. ...just throwing it out there.
You can use [Splicer](http://www.codeplex.com/splicer) to do this. Please see example 3 at <http://www.codeplex.com/splicer/Wiki/View.aspx?title=News%20Feeds&referringTitle=Home> Edit: ``` using (ITimeline timeline = new DefaultTimeline(25)) { IGroup group = timeline.AddVideoGroup(32, 160, 100); ITrack videoTrack = group.AddTrack(); IClip clip1 = videoTrack.AddImage("image1.jpg", 0, 2); IClip clip2 = videoTrack.AddImage("image2.jpg", 0, 2); IClip clip3 = videoTrack.AddImage("image3.jpg", 0, 2); IClip clip4 = videoTrack.AddImage("image4.jpg", 0, 2); double halfDuration = 0.5; group.AddTransition(clip2.Offset - halfDuration, halfDuration, StandardTransitions.CreateFade(), true); group.AddTransition(clip2.Offset, halfDuration, StandardTransitions.CreateFade(), false); group.AddTransition(clip3.Offset - halfDuration, halfDuration, StandardTransitions.CreateFade(), true); group.AddTransition(clip3.Offset, halfDuration, StandardTransitions.CreateFade(), false); group.AddTransition(clip4.Offset - halfDuration, halfDuration, StandardTransitions.CreateFade(), true); group.AddTransition(clip4.Offset, halfDuration, StandardTransitions.CreateFade(), false); ITrack audioTrack = timeline.AddAudioGroup().AddTrack(); IClip audio = audioTrack.AddAudio("soundtrack.wav", 0, videoTrack.Duration); audioTrack.AddEffect(0, audio.Duration, StandardEffects.CreateAudioEnvelope(1.0, 1.0, 1.0, audio.Duration)); using ( WindowsMediaRenderer renderer = new WindowsMediaRenderer(timeline, "output.wmv", WindowsMediaProfiles.HighQualityVideo)) { renderer.Render(); } } ```
How can I create a video from a directory of images in C#?
[ "", "c#", "directshow", "media", "multimedia", "wmv", "" ]
I realise this is not the ideal place to ask about this in terms of searchability, but I've got a page whose JavaScript code throws "Stack overflow in line 0" errors when I look at it in Internet Explorer. The problem is quite clearly not in line 0, but somewhere in the list of stuff that I'm writing to the document. Everything works fine in Firefox, so I don't have the delights of Firebug and friends to assist in troubleshooting. Are there any standard causes for this? I'm guessing this is probably an Internet Explorer 7 bug or something quite obscure, and my [Google-fu](http://en.wiktionary.org/wiki/Google-fu) is bringing me little joy currently. I can find lots of people who have run into this before, but I can't seem to find how they solved it.
You can turn *off* the `"Disable Script Debugging"` option inside of Internet Explorer and start debugging with `Visual Studio` if you happen to have that around. I've found that it is one of few ways to diagnose some of those `IE` specific issues.
I ran into this problem recently and wrote up a post about the particular case in our code that was causing this problem. <http://cappuccino.org/discuss/2010/03/01/internet-explorer-global-variables-and-stack-overflows/> The quick summary is: recursion that passes through the host global object is limited to a stack depth of 13. In other words, if the reference your function call is using (not necessarily the function itself) was defined with some form window.foo = function, then recursing through foo is limited to a depth of 13.
"Stack overflow in line 0" on Internet Explorer
[ "", "javascript", "internet-explorer", "error-handling", "" ]
Can every possible value of a `float` variable can be represented exactly in a `double` variable? In other words, for all possible values `X` will the following be successful: ``` float f1 = X; double d = f1; float f2 = (float)d; if(f1 == f2) System.out.println("Success!"); else System.out.println("Failure!"); ``` My suspicion is that there is no exception, or if there is it is only for an edge case (like +/- infinity or NaN). **Edit**: Original wording of question was confusing (stated two ways, one which would be answered "no" the other would be answered "yes" for the same answer). I've reworded it so that it matches the question title.
Yes. Proof by enumeration of all possible cases: ``` public class TestDoubleFloat { public static void main(String[] args) { for (long i = Integer.MIN_VALUE; i <= Integer.MAX_VALUE; i++) { float f1 = Float.intBitsToFloat((int) i); double d = (double) f1; float f2 = (float) d; if (f1 != f2) { if (Float.isNaN(f1) && Float.isNaN(f2)) { continue; // ok, NaN } fail("oops: " + f1 + " != " + f2); } } } } ``` finishes in 12 seconds on my machine. 32 bits are *small*.
In theory there is no such value, so "yes", every float should be representable as a double. The two types use the same IEEE 754 layout, just with different field widths: the conversion widens the exponent from 8 to 11 bits (adjusting the bias) and zero-extends the mantissa from 23 to 52 bits, so every float value maps to a double exactly.
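A small self-contained spot-check of the exact round-trip property (it only tries a handful of values; the exhaustive loop in the other answer is the real proof):

```java
class FloatWidening {
    /** True if f survives float -> double -> float unchanged, compared bitwise. */
    static boolean roundTripsExactly(float f) {
        double d = f;            // widening primitive conversion
        float back = (float) d;  // narrowing back
        // floatToIntBits canonicalizes NaN, so all NaNs compare equal here.
        return Float.floatToIntBits(back) == Float.floatToIntBits(f);
    }
}
```

The property also holds for the edge cases: signed zero, subnormals, infinities, and NaN.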
Can every float be expressed exactly as a double?
[ "", "java", "floating-point", "" ]
Is there a way to dynamically invoke a method in the same class for PHP? I don't have the syntax right, but I'm looking to do something similar to this: ``` $this->{$methodName}($arg1, $arg2, $arg3); ```
There is more than one way to do that: ``` $this->{$methodName}($arg1, $arg2, $arg3); $this->$methodName($arg1, $arg2, $arg3); call_user_func_array(array($this, $methodName), array($arg1, $arg2, $arg3)); ``` You may even use the reflection api <http://php.net/manual/en/class.reflection.php>
You can use overloading in PHP: [Overloading](http://php.net/manual/en/language.oop5.overloading.php) ``` class Test { private $name; public function __call($name, $arguments) { echo 'Method Name:' . $name . ' Arguments:' . implode(',', $arguments); //do a get if (preg_match('/^get_(.+)/', $name, $matches)) { $var_name = $matches[1]; return $this->$var_name ? $this->$var_name : $arguments[0]; } //do a set if (preg_match('/^set_(.+)/', $name, $matches)) { $var_name = $matches[1]; $this->$var_name = $arguments[0]; } } } $obj = new Test(); $obj->set_name('Any String'); //Echo:Method Name: set_name Arguments:Any String echo $obj->get_name();//Echo:Method Name: get_name Arguments: //return: Any String ```
Dynamic class method invocation in PHP
[ "", "php", "" ]
One of our customers wants to be able to enter a date with only 2 digits for the year component. The date will be in the past, so we want it to work for the previous century if the 2 digit year is after the current year, but work for the current century if the 2 digit year is equal to or less than the current year. as of today 10/30/2008 01/01/01 = 01/01/2001 01/01/09 = 01/01/1909 This is a strange requirement, and I solved the problem, I just don't like my solution. It feels like there is a better way to do this. Thanks for the help. ``` public static String stupidDate(String dateString) { String twoDigitYear = StringUtils.right(dateString, 2); String newDate = StringUtils.left(dateString, dateString.length() - 2); int year = NumberUtils.toInt(twoDigitYear); Calendar c = GregorianCalendar.getInstance(); int centuryInt = c.get(Calendar.YEAR) - year; newDate = newDate + StringUtils.left(Integer.toString(centuryInt), 2) + twoDigitYear; return newDate; } ```
Groovy script (easy enough to throw into java) demonstrating the point @bobince made about SimpleDateFormat. ``` import java.text.SimpleDateFormat SimpleDateFormat sdf = new SimpleDateFormat('MM/dd/yy') SimpleDateFormat fmt = new SimpleDateFormat('yyyy-MM-dd') Calendar cal = Calendar.getInstance() cal.add(Calendar.YEAR, -100) sdf.set2DigitYearStart(cal.getTime()) dates = ['01/01/01', '10/30/08','01/01/09'] dates.each {String d -> println fmt.format(sdf.parse(d)) } ``` Yields ``` 2001-01-01 2008-10-30 1909-01-01 ```
SimpleDateFormat already does two-digit year parsing for you, using the two-letter ‘yy’ format. (It'll still allow four digits, obviously.) By default it uses now-80→now+20, so it's not exactly the same rule you propose, but it's reasonable and standardised (in the Java world at least), and can be overridden using set2DigitYearStart() if you want. ``` DateFormat informat= new SimpleDateFormat("MM/dd/yy"); DateFormat outformat= new SimpleDateFormat("MM/dd/yyyy"); return outformat.format(informat.parse(dateString)); ``` In the longer term, try to migrate to ISO8601 date formatting (yyyy-MM-dd), because MM/dd/yy is approximately the worst possible date format and is bound to cause problems eventually.
What is the best way to parse a date in MM/DD/YY format and adjust it to the current / previous century?
[ "", "java", "date", "date-format", "" ]
I'm trying to read a single XML document from stream at a time using dom4j, process it, then proceed to the next document on the stream. Unfortunately, dom4j's SAXReader (using JAXP under the covers) keeps reading and chokes on the following document element. Is there a way to get the SAXReader to stop reading the stream once it finds the end of the document element? Is there a better way to accomplish this?
I was able to get this to work with some gymnastics using some internal JAXP classes: * Create a custom scanner, a subclass of XMLNSDocumentScannerImpl + Create a custom driver, an implementation of XMLNSDocumentScannerImpl.Driver, inside the custom scanner that returns END\_DOCUMENT when it sees an XML declaration or an element. Get the ScannedEntity from fElementScanner.getCurrentEntity(). If the entity has a PushbackReader, push back the remaining unread characters in the entity buffer onto the reader. + In the constructor, replace the fTrailingMiscDriver with an instance of this custom driver. * Create a custom configuration class, a subclass of XIncludeAwareParserConfiguration, that replaces the stock DOCUMENT\_SCANNER with an instance of this custom scanner in its constructor. * Install an instance of this custom configuration class as the "com.sun.org.apache.xerces.internal.xni.parser.XMLParserConfiguration" property so it will be instantiated when dom4j's SAXReader class tries to create a JAXP XMLReader. * When passing a Reader to dom4j's SAXReader.read() method, supply a PushbackReader with a buffer size considerably larger than the one-character default. At least 8192 should be enough to support the default buffer size of the XMLEntityManager inside JAXP's copy of Apache Xerces2. This isn't the cleanest solution, as it involves subclassing internal JAXP classes, but it does work.
Most likely, you don't want to have more than one document in the same stream at the same time. I don't think that the SAXReader is smart enough to stop when it gets to the end of the first document. Why is it necessary to have multiple documents in the same stream like this?
Reading a single XML document from a stream using dom4j
[ "", "java", "xml", "stream", "sax", "dom4j", "" ]
I have a project with a number of different classes querying and modifying data in a common set of tables. I've set up a .dbml file which provides us with a DataContext class. My question is whether a single instance of the DataContext should be used by all objects, or whether multiple instances are safe to use. I'm also wondering about thread safety in the case of a single DataContext, and whether access to it's methods should be synchronized.
Rick Strahl has a nice article about your options: <http://www.west-wind.com/weblog/posts/246222.aspx>. See also: [LINQ to SQL - where does your DataContext live?](https://stackoverflow.com/questions/196253/linq-to-sql-where-does-your-datacontext-live). You may want a slightly different strategy for each type of deployment - web, desktop, windows service... Summarized, your options are: * Global DataContext - dangerous in multi-threaded environments (including web apps). Remember that instance members are not guaranteed to be thread-safe (from Bradley Grainger's [answer](https://stackoverflow.com/questions/196253/linq-to-sql-where-does-your-datacontext-live#196264) above). * DataContext per thread - complicated. If your DataContext is tracking changes you must be sure to flush them at the appropriate time. Instantiating, storing, and retrieving the DataContext is a pain. * DataContext per atomic action - you lose the ability to track changes since one DataContext creates an object while another updates or deletes it. Attaching a data object to a new DataContext may not work like you expect. * DataContext per data object - seems inelegant because you have to fuss with the DataContext on instantiation(create and attach) and update/delete (pull it off the data object and use it). I opted for a DataContext per data object. It may not be the fanciest solution but it works in all deployment environments.
I use a new instance of DataContext for every transaction. Reusing old instances can be troublesome and will bloat the content of the DataContext, since any item that has ever been loaded will have to be tracked - your app will get slower and slower, bloating up memory. If you need an item for longer than a transaction, you can detach it from the DataContext by cloning the item, and reattach it later to a new, fresh DataContext using Attach(). I can even clone an item, send it over the network with WCF, get it back in some later call, attach it to a new DataContext and save the changes (of course, I need a timestamp column for this).
Multiple/single instance of Linq to SQL DataContext
[ "", "c#", "linq-to-sql", "" ]
Is there any way to decompile a Java Web Start application?
Start the application, then fetch the JAR(s) from the webstart cache. Proceed to decompile like with any other Java application.
Yes, there is. A Java Webstart application is just one form of deployment. The result is a bunch of jars copied to the users home directory. And these can of course be decompiled for example with [jad](http://www.kpdus.com/jad.html).
java webstart decompiler
[ "", "java", "decompiling", "" ]
``` function main() { Hello(); } function Hello() { // How do you find out the caller function is 'main'? } ``` Is there a way to find out the call stack?
**Note that this solution is deprecated and should no longer be used according to MDN documentation** <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/caller> --- ``` function Hello() { alert("caller is " + Hello.caller); } ``` Note that this feature is **non-standard**, from [`Function.caller`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/caller): > **Non-standard** > This feature is non-standard and is not on a standards track. Do not use it on production sites facing the Web: it will not work for every user. There may also be large incompatibilities between implementations and the behavior may change in the future. --- The following is the old answer from 2008, which is no longer supported in modern Javascript: ``` function Hello() { alert("caller is " + arguments.callee.caller.toString()); } ```
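For readers outside the browser, the same idea — asking the runtime for the call stack — exists in most languages. As a purely illustrative sketch (Python's stdlib `inspect` module, not part of the JavaScript answer):

```python
import inspect

def hello():
    # inspect.stack()[1] is the frame record of whoever called hello();
    # .function is that frame's function name.
    return inspect.stack()[1].function

def main():
    return hello()
```

Here `main()` returns the string `"main"`, because that is the caller seen one frame up from `hello`.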
## StackTrace You can find the entire stack trace using browser-specific code. The good thing is [someone already made it](http://eriwen.com/javascript/stacktrace-update/); here is the [project code on GitHub](http://github.com/eriwen/javascript-stacktrace). But not all the news is good: 1. It is really slow to get the stack trace so be careful (read [this](http://webreflection.blogspot.com/2009/06/es5-arguments-and-callee-i-was-wrong.html) for more). 2. You will need to define function names for the stack trace to be legible. Because if you have code like this: ``` var Klass = function kls() { this.Hello = function() { alert(printStackTrace().join('\n\n')); }; } new Klass().Hello(); ``` Google Chrome will alert `... kls.Hello ( ...` but most browsers will expect a function name just after the keyword `function` and will treat it as an anonymous function. And not even Chrome will be able to use the `Klass` name if you don't give the name `kls` to the function. And by the way, you can pass the option `{guess: true}` to the function printStackTrace, but I didn't find any real improvement by doing that. 3. Not all browsers give you the same information. That is, parameters, code column, etc. --- ## Caller Function Name By the way, if you only want the name of the caller function (in most browsers, but not IE) you can use: ``` arguments.callee.caller.name ``` But note that this name will be the one after the `function` keyword. I found no way (even on Google Chrome) to get more than that without getting the code of the whole function. --- ## Caller Function Code And summarizing the rest of the best answers (by Pablo Cabrera, nourdine, and Greg Hewgill). **The only cross-browser and really safe thing you can use is:** ``` arguments.callee.caller.toString(); ``` Which will show the **code** of the caller function. Sadly, that is not enough for me, and that is why I give you tips for the StackTrace and the caller function Name (although they are not cross-browser).
How do you find out the caller function in JavaScript?
[ "", "javascript", "callstack", "" ]
If I have a property that I want to let inheritors write to, but keep readonly externally, what is the preferred way to implement this? I usually go with something like this: ``` private object m_myProp; public object MyProp { get { return m_myProp; } } protected void SetMyProp(object value) { m_myProp = value; } ``` Is there a better way?
``` private object m_myProp; public object MyProp { get { return m_myProp; } protected set { m_myProp = value; } } ``` Or in C# 3.0 ``` public object MyProp {get; protected set;} ```
This is definitely the way to go. ``` public object MyProp {get; protected set;} ``` If you're on an older version of C# then this is the way to go. ``` private object _myProp; public object MyProp { get { return _myProp; } protected set { _myProp = value; } } ```
What is the best way to implement a property that is readonly to the public, but writable to inheritors?
[ "", "c#", ".net", "properties", "readonly", "" ]
I always hear that programmers try to pick the right tool for the job. I've never programmed in Java, so I was wondering: What are its benefits? Why is it so popular? What should I use it for? I just program recreationally. Is there anything about it that makes it particularly fun?
I think after 12 years or so, the "write once, run anywhere" mantra is *almost* true. Writing Java code pretty much isolates you from the platform dependent aspects of the systems on which you deploy it.
* Portability * Incredible breadth of libraries * Bottom-up security * Performance * Robustness
What are the benefits of Java?
[ "", "java", "" ]
I've often used pointers to const objects, like so... ``` const int *p; ``` That simply means that you can't change the integer that `p` is pointing at through `p`. But I've also seen reference to const pointers, declared like this... ``` int* const p; ``` As I understand it, that means that the pointer variable *itself* is constant -- you can change the integer it points at all day long, but you can't make it point at something else. What possible use would that have?
When you're designing C programs for embedded systems, or special-purpose programs that need to refer to the same memory (multi-processor applications sharing memory), then you need constant pointers. For instance, I have a [32-bit MIPS processor](http://flickr.com/photos/adavis/2945502665/) that has a [little LCD](http://flickr.com/photos/adavis/2946164671/) attached to it. I have to write my LCD data to a specific port in memory, which then gets sent to the LCD controller. I could #define that number, but then I also have to cast it as a pointer, and the C compiler doesn't have as many options when I do that. Further, I might need it to be volatile, which can also be cast, but it's easier and clearer to use the syntax provided - a const pointer to a volatile memory location. For PC programs, an example would be: if you design DOS VGA games (there are tutorials online which are fun to go through to learn basic low-level graphics), then you need to write to the VGA memory, which might be referenced as an offset from a const pointer.
It allows you to protect the pointer from being changed. This means you can protect assumptions you make based on the pointer never changing or from unintentional modification, for example: ``` int* const p = &i; ... p++; /* Compiler error, oops you meant */ (*p)++; /* Increment the number */ ```
What use are const pointers (as opposed to pointers to const objects)?
[ "", "c++", "c", "constants", "" ]
Basically I am inserting an image using the listviews inserting event, trying to resize an image from the fileupload control, and then save it in a SQL database using LINQ. I found some code to create a new bitmap of the content in the fileupload control, but this was to store it in a file on the server, from [this source](http://forums.asp.net/t/1208353.aspx), but I need to save the bitmap back into the SQL database, which I think I need to convert back into a byte[] format. So how do I convert the bitmap to a byte[] format? If I am going about this the wrong way I would be grateful it you could correct me. Here is my code: ``` // Find the fileUpload control string filename = uplImage.FileName; // Create a bitmap in memory of the content of the fileUpload control Bitmap originalBMP = new Bitmap(uplImage.FileContent); // Calculate the new image dimensions int origWidth = originalBMP.Width; int origHeight = originalBMP.Height; int sngRatio = origWidth / origHeight; int newWidth = 100; int newHeight = sngRatio * newWidth; // Create a new bitmap which will hold the previous resized bitmap Bitmap newBMP = new Bitmap(originalBMP, newWidth, newHeight); // Create a graphic based on the new bitmap Graphics oGraphics = Graphics.FromImage(newBMP); // Set the properties for the new graphic file oGraphics.SmoothingMode = SmoothingMode.AntiAlias; oGraphics.InterpolationMode = InterpolationMode.HighQualityBicubic; // Draw the new graphic based on the resized bitmap oGraphics.DrawImage(originalBMP, 0, 0, newWidth, newHeight); PHJamesDataContext db = new PHJamesDataContext(); System.IO.MemoryStream stream = new System.IO.MemoryStream(); newBMP.Save(stream, System.Drawing.Imaging.ImageFormat.Bmp); stream.Position = 0; byte[] data = new byte[stream.Length]; PHJProjectPhoto myPhoto = new PHJProjectPhoto { ProjectPhoto = data, OrderDate = DateTime.Now, ProjectPhotoCaption = ProjectPhotoCaptionTextBox.Text, ProjectId = selectedProjectId }; db.PHJProjectPhotos.InsertOnSubmit(myPhoto); 
db.SubmitChanges(); ```
You should be able to change this block to ``` System.IO.MemoryStream stream = new System.IO.MemoryStream(); newBMP.Save(stream, System.Drawing.Imaging.ImageFormat.Bmp); PHJProjectPhoto myPhoto = new PHJProjectPhoto { ProjectPhoto = stream.ToArray(), // <<--- This will convert your stream to a byte[] OrderDate = DateTime.Now, ProjectPhotoCaption = ProjectPhotoCaptionTextBox.Text, ProjectId = selectedProjectId }; ```
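As a side note, the pattern here — serialize into an in-memory stream, then pull the accumulated bytes out in one call — is not C#-specific. A minimal illustrative sketch in Python, with `io.BytesIO` standing in for `MemoryStream` and a plain `write` standing in for `Bitmap.Save`:

```python
import io

# BytesIO plays the role of MemoryStream: the image gets written into it...
stream = io.BytesIO()
stream.write(b"fake bitmap bytes")  # stands in for newBMP.Save(stream, ...)

# ...then the whole buffer comes out as bytes, the ToArray() equivalent.
data = stream.getvalue()
```

The point is the same as in the C# answer: no manual byte array, no position bookkeeping — the stream hands you the finished buffer.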
If you've got a `MemoryStream` already, just call [`MemoryStream.ToArray`](http://msdn.microsoft.com/en-us/library/system.io.memorystream.toarray.aspx) to get the data out.
How do I convert a Bitmap to byte[]?
[ "", "c#", "linq", "asp.net-3.5", "byte", "image-uploading", "" ]
We all know that you can overload a function according to the parameters: ``` int mul(int i, int j) { return i*j; } std::string mul(char c, int n) { return std::string(n, c); } ``` Can you overload a function according to the return value? Define a function that returns different things according to how the return value is used: ``` int n = mul(6, 3); // n = 18 std::string s = mul(6, 3); // s = "666" // Note that both invocations take the exact same parameters (same types) ``` You can assume the first parameter is between 0-9, no need to verify the input or have any error handling.
``` class mul { public: mul(int p1, int p2) { param1 = p1; param2 = p2; } operator int () { return param1 * param2; } operator std::string () { return std::string(param2, param1 + '0'); } private: int param1; int param2; }; ``` Not that I would use that.
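For what it's worth, the trick generalizes: make the "function" an object and let the requested conversion pick the behavior. An illustrative Python sketch of the same idea (not C++, just the concept):

```python
class Mul:
    """One object whose result depends on the conversion requested."""
    def __init__(self, digit, n):
        self.digit, self.n = digit, n

    def __int__(self):
        # numeric context: behave like multiplication
        return self.digit * self.n

    def __str__(self):
        # string context: behave like digit repetition
        return str(self.digit) * self.n
```

So `int(Mul(6, 3))` gives `18` while `str(Mul(6, 3))` gives `'666'` — the "return type" is chosen at the call site, just as with the C++ conversion operators above.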
You have to tell the compiler which version to use. In C++, you can do it three ways. ## Explicitly differentiate the calls by typing You somewhat cheated because you sent an integer to a function waiting for a char, and wrongly sent the number six when the char value of '6' is not 6 but 54 (in ASCII): ``` std::string mul(char c, int n) { return std::string(n, c); } std::string s = mul(6, 3); // s = "666" ``` The right solution would be, of course, ``` std::string s = mul(static_cast<char>(54), 3); // s = "666" ``` This was worth mentioning, I guess, even if you did not want the solution. ## Explicitly differentiate the calls by dummy pointer You can add a dummy parameter to each functions, thus forcing the compiler to choose the right functions. The easiest way is to send a NULL dummy pointer of the type desired for the return: ``` int mul(int *, int i, int j) { return i*j; } std::string mul(std::string *, char c, int n) { return std::string(n, c); } ``` Which can be used with the code: ``` int n = mul((int *) NULL, 6, 3); // n = 18 std::string s = mul((std::string *) NULL, 54, 3); // s = "666" ``` ## Explicitly differentiate the calls by templating the return value With this solution, we create a "dummy" function with code that won't compile if instantiated: ``` template<typename T> T mul(int i, int j) { // If you get a compile error, it's because you did not use // one of the authorized template specializations const int k = 25 ; k = 36 ; } ``` You'll note this function won't compile, which is a good thing because we want only to use some limited functions through template specialization: ``` template<> int mul<int>(int i, int j) { return i * j ; } template<> std::string mul<std::string>(int i, int j) { return std::string(j, static_cast<char>(i)) ; } ``` Thus, the following code will compile: ``` int n = mul<int>(6, 3); // n = 18 std::string s = mul<std::string>(54, 3); // s = "666" ``` But this one won't: ``` short n2 = mul<short>(6, 3); // error: assignment of 
// read-only variable ‘k’ ``` ## Explicitly differentiate the calls by templating the return value, 2 > > Hey, you cheated, too! Right, I did use the same parameters for the two "overloaded" functions. But you did start the cheating (see above)... ^\_^ More seriously, if you need to have different parameters, then you will have to write more code, and then have to explicitly use the right types when calling the functions to avoid ambiguities: ``` // For "int, int" calls template<typename T> T mul(int i, int j) { // If you get a compile error, it's because you did not use // one of the authorized template specializations const int k = 25 ; k = 36 ; } template<> int mul<int>(int i, int j) { return i * j ; } // For "char, int" calls template<typename T> T mul(char i, int j) { // If you get a compile error, it's because you did not use // one of the authorized template specializations const int k = 25 ; k = 36 ; } template<> std::string mul<std::string>(char i, int j) { return std::string(j, (char) i) ; } ``` And this code would be used as such: ``` int n = mul<int>(6, 3); // n = 18 std::string s = mul<std::string>('6', 3); // s = "666" ``` And the following line: ``` short n2 = mul<short>(6, 3); // compile error again ``` Would still not compile. ## Conclusion I love C++... :-p
Overload a C++ function according to the return value
[ "", "c++", "function", "puzzle", "overloading", "" ]
What's the best way to export a DataGrid to Excel? I have no experience whatsoever in exporting a DataGrid to Excel, so I want to know how you do it. I've read that there are a lot of ways, but I'm thinking of just making a simple export-to-Excel function. I am using ASP.NET / C#. Cheers.
The simplest way is to simply write either csv, or html (in particular, a `<table><tr><td>...</td></tr>...</table>`) to the output, and simply pretend that it is in excel format via the content-type header. Excel will happily load either; csv is simpler... Here's a similar example (it actually takes an IEnumerable, but it would be similar from any source (such as a `DataTable`, looping over the rows). ``` public static void WriteCsv(string[] headers, IEnumerable<string[]> data, string filename) { if (data == null) throw new ArgumentNullException("data"); if (string.IsNullOrEmpty(filename)) filename = "export.csv"; HttpResponse resp = System.Web.HttpContext.Current.Response; resp.Clear(); // remove this line if you don't want to prompt the user to save the file resp.AddHeader("Content-Disposition", "attachment;filename=" + filename); // if not saving, try: "application/ms-excel" resp.ContentType = "text/csv"; string csv = GetCsv(headers, data); byte[] buffer = resp.ContentEncoding.GetBytes(csv); resp.AddHeader("Content-Length", buffer.Length.ToString()); resp.BinaryWrite(buffer); resp.End(); } static void WriteRow(string[] row, StringBuilder destination) { if (row == null) return; int fields = row.Length; for (int i = 0; i < fields; i++) { string field = row[i]; if (i > 0) { destination.Append(','); } if (string.IsNullOrEmpty(field)) continue; // empty field bool quote = false; if (field.Contains("\"")) { // if contains quotes, then needs quoting and escaping quote = true; field = field.Replace("\"", "\"\""); } else { // commas, line-breaks, and leading-trailing space also require quoting if (field.Contains(",") || field.Contains("\n") || field.Contains("\r") || field.StartsWith(" ") || field.EndsWith(" ")) { quote = true; } } if (quote) { destination.Append('\"'); destination.Append(field); destination.Append('\"'); } else { destination.Append(field); } } destination.AppendLine(); } static string GetCsv(string[] headers, IEnumerable<string[]> data) { StringBuilder 
sb = new StringBuilder(); if (data == null) throw new ArgumentNullException("data"); WriteRow(headers, sb); foreach (string[] row in data) { WriteRow(row, sb); } return sb.ToString(); } ```
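The hand-rolled quoting in `WriteRow` follows the usual CSV rules: double any embedded quotes, and quote fields containing commas, quotes, or line breaks. As a cross-check of those rules, here is a sketch using Python's stdlib `csv` module, which implements the same conventions:

```python
import csv
import io

def to_csv(headers, rows):
    # csv.writer applies the rules WriteRow codes by hand:
    # embedded quotes are doubled, and any field containing a
    # comma, quote, or line break gets wrapped in quotes.
    buf = io.StringIO()
    writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL)
    writer.writerow(headers)
    writer.writerows(rows)
    return buf.getvalue()
```

Whatever language produces the file, following these rules is what makes Excel open it cleanly.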
You can do it in this way: ``` private void ExportButton_Click(object sender, System.EventArgs e) { Response.Clear(); Response.Buffer = true; Response.ContentType = "application/vnd.ms-excel"; Response.Charset = ""; this.EnableViewState = false; System.IO.StringWriter oStringWriter = new System.IO.StringWriter(); System.Web.UI.HtmlTextWriter oHtmlTextWriter = new System.Web.UI.HtmlTextWriter(oStringWriter); this.ClearControls(dataGrid); dataGrid.RenderControl(oHtmlTextWriter); Response.Write(oStringWriter.ToString()); Response.End(); } ``` Complete example [here](http://www.c-sharpcorner.com/UploadFile/DipalChoksi/ExportASPNetDataGridToExcel11222005041447AM/ExportASPNetDataGridToExcel.aspx).
export Datagrid to excel asp
[ "", "c#", "asp.net", "datagrid", "export-to-excel", "" ]
Currently, I'm using: ``` DataTable dt = CreateDataTableInSomeWay(); List<DataRow> list = new List<DataRow>(); foreach (DataRow dr in dt.Rows) { list.Add(dr); } ``` Is there a better/magic way?
If you're using .NET 3.5, you can use [`DataTableExtensions.AsEnumerable`](http://msdn.microsoft.com/en-us/library/system.data.datatableextensions.asenumerable.aspx) (an extension method) and then if you really need a `List<DataRow>` instead of just `IEnumerable<DataRow>` you can call [`Enumerable.ToList`](http://msdn.microsoft.com/en-us/library/bb342261.aspx): ``` IEnumerable<DataRow> sequence = dt.AsEnumerable(); ``` or ``` using System.Linq; ... List<DataRow> list = dt.AsEnumerable().ToList(); ```
``` List<Employee> emp = new List<Employee>(); // Maintaining the DataTable in ViewState - for demo only DataTable dt = ViewState["CurrentEmp"] as DataTable; // Read data from the DataTable using a LINQ query expression emp = (from DataRow row in dt.Rows select new Employee { _FirstName = row["FirstName"].ToString(), _LastName = row["Last_Name"].ToString() }).ToList(); ```
How do you convert a DataTable into a generic list?
[ "", "c#", "generics", "datatable", "" ]
How can I restart (recycle) an IIS application pool from C# (.NET 2)? I'd appreciate it if you could post sample code.
If you're on **IIS7** then this will do it if it is stopped. I assume you can adjust for restarting without having to be shown. ``` // Gets the application pool collection from the server. [ModuleServiceMethod(PassThrough = true)] public ArrayList GetApplicationPoolCollection() { // Use an ArrayList to transfer objects to the client. ArrayList arrayOfApplicationBags = new ArrayList(); ServerManager serverManager = new ServerManager(); ApplicationPoolCollection applicationPoolCollection = serverManager.ApplicationPools; foreach (ApplicationPool applicationPool in applicationPoolCollection) { PropertyBag applicationPoolBag = new PropertyBag(); applicationPoolBag[ServerManagerDemoGlobals.ApplicationPoolArray] = applicationPool; arrayOfApplicationBags.Add(applicationPoolBag); // If the applicationPool is stopped, restart it. if (applicationPool.State == ObjectState.Stopped) { applicationPool.Start(); } } // CommitChanges to persist the changes to the ApplicationHost.config. serverManager.CommitChanges(); return arrayOfApplicationBags; } ``` If you're on **IIS6** I'm not so sure, but you could try getting the web.config and editing the modified date or something. Once an edit is made to the web.config then the application will restart.
Here we go: ``` HttpRuntime.UnloadAppDomain(); ```
Restarting (Recycling) an Application Pool
[ "", "c#", ".net", "asp.net-mvc", "iis", "application-pool", "" ]
Why was C# designed this way? As I understand it, an interface only describes behaviour, and serves the purpose of describing a contractual obligation for classes implementing the interface that certain behaviour is implemented. If classes wish to implement that behavour in a shared method, why shouldn't they? Here is an example of what I have in mind: ``` // These items will be displayed in a list on the screen. public interface IListItem { string ScreenName(); ... } public class Animal: IListItem { // All animals will be called "Animal". public static string ScreenName() { return "Animal"; } .... } public class Person: IListItem { private string name; // All persons will be called by their individual names. public string ScreenName() { return name; } .... } ```
Assuming you are asking why you can't do this: ``` public interface IFoo { void Bar(); } public class Foo: IFoo { public static void Bar() {} } ``` This doesn't make sense to me, semantically. Methods specified on an interface should be there to specify the contract for interacting with an object. Static methods do not allow you to interact with an object - if you find yourself in the position where your implementation could be made static, you may need to ask yourself if that method really belongs in the interface. --- To implement your example, I would give Animal a const property, which would still allow it to be accessed from a static context, and return that value in the implementation. ``` public class Animal: IListItem { /* Can be tough to come up with a different, yet meaningful name! * A different casing convention, like Java has, would help here. */ public const string AnimalScreenName = "Animal"; public string ScreenName(){ return AnimalScreenName; } } ``` For a more complicated situation, you could always declare another static method and delegate to that. In trying come up with an example, I couldn't think of any reason you would do something non-trivial in both a static and instance context, so I'll spare you a FooBar blob, and take it as an indication that it might not be a good idea.
My (simplified) technical reason is that static methods are not in the [vtable](http://en.wikipedia.org/wiki/Virtual_table), and the call site is chosen at compile time. It's the same reason you can't have override or virtual static members. For more details, you'd need a CS grad or compiler wonk - of which I'm neither. For the political reason, I'll [quote Eric Lippert](https://learn.microsoft.com/en-us/archive/blogs/ericlippert/calling-static-methods-on-type-parameters-is-illegal-part-one) (who is a compiler wonk, and holds a Bachelor of Mathematics, Computer science and Applied Mathematics from University of Waterloo (source: [LinkedIn](https://www.linkedin.com/pub/eric-lippert/85/934/a38)): > ...the core design principle of static methods, the principle that gives them their name...[is]...it can always be determined exactly, at compile time, what method will be called. That is, the method can be resolved solely by static analysis of the code. Note that Lippert does leave room for a so-called type method: > That is, a method associated with a type (like a static), which does not take a non-nullable “this” argument (unlike an instance or virtual), but one where the method called would depend on the constructed type of T (unlike a static, which must be determinable at compile time). but is yet to be convinced of its usefulness.
Why Doesn't C# Allow Static Methods to Implement an Interface?
[ "", "c#", "oop", "language-features", "" ]
Consider that I have a transaction: ``` BEGIN TRANSACTION DECLARE @amount MONEY SELECT @amount = Amount FROM Deposits WHERE UserId = 123 UPDATE Deposits SET Amount = @amount + 100.0 WHERE UserId = 123 COMMIT ``` And it gets executed on 2 threads, in the order: 1. thread 1 - select 2. thread 2 - select 3. thread 1 - update 4. thread 2 - update Assume that before execution Amount is 0. What will happen in this case under the different isolation levels of SQL Server (read uncommitted, read committed, repeatable read, serializable)? What will the amount be at the end, and will there be a deadlock?
Nice well stated scenario. I decided to test it. Here's my setup script: ``` CREATE TABLE Deposits(Amount Money, UserID int) INSERT INTO Deposits (Amount, UserID) SELECT 0.0, 123 --Reset UPDATE Deposits SET Amount = 0.00 WHERE UserID = 123 ``` Here's my test script. ``` SET TRANSACTION ISOLATION LEVEL Serializable ---------------------------------------- -- Part 1 ---------------------------------------- BEGIN TRANSACTION DECLARE @amount MONEY SET @amount = ( SELECT Amount FROM Deposits WHERE UserId = 123 ) SELECT @amount as Amount ---------------------------------------- -- Part 2 ---------------------------------------- DECLARE @amount MONEY SET @amount = *value from step 1* UPDATE Deposits SET Amount = @amount + 100.0 WHERE UserId = 123 COMMIT SELECT * FROM Deposits WHERE UserID = 123 ``` I loaded up this test script in two query analyzer windows and ran each part as described by the question. All of the reading happens before any of the writing, so all threads/scenarios will read the value of 0 into @amount. Here are the results: Read committed ``` 1 T1.@Amount = 0.00 2 T1.@Amount = 0.00 3 Deposits.Amount = 100.00 4 Deposits.Amount = 100.00 ``` Read uncommitted ``` 1 T1.@Amount = 0.00 2 T1.@Amount = 0.00 3 Deposits.Amount = 100.00 4 Deposits.Amount = 100.00 ``` Repeatable Read ``` 1 T1.@Amount = 0.00 (locks out changes by others on Deposit.UserID = 123) 2 T1.@Amount = 0.00 (locks out changes by others on Deposit.UserID = 123) 3 Hangs until step 4. (due to lock in step 2) 4 Deadlock! Final result: Deposits.Amount = 100.00 ``` Serializable ``` 1 T1.@Amount = 0.00 (locks out changes by others on Deposit) 2 T1.@Amount = 0.00 (locks out changes by others on Deposit) 3 Hangs until step 4. (due to lock in step 2) 4 Deadlock! Final result: Deposits.Amount = 100.00 ``` Here's an explanation of each type which can be used to reach these results through thought simulations. 
**Read Committed** and **Read Uncommitted** both do not lock the data that was read against modifications by other users. The difference is that read uncommitted will allow you to see data that is not yet committed (downside) and will not block your read if there is data locked by others against reading (upside), which is really saying the same thing twice. **Repeatable Read** and **Serializable** both behave like read committed for reading. For locking, both lock data which has been read against modification by other users. The difference is that serializable blocks more than the row which has been read; it also blocks inserts that would introduce records that were not present before. So with repeatable read, you could see new records (termed: phantom records) in later reads. With serializable, you block the creation of those records until you commit. The above explanations come from my interpretation of this [msdn](http://msdn.microsoft.com/en-us/library/aa259216(SQL.80).aspx) article.
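The failure mode under the two weaker levels — both transactions read 0 before either writes, so one deposit disappears — is the classic lost update. A toy, database-free sketch of that interleaving (plain Python; sequential calls stand in for the two threads, no real concurrency involved):

```python
# Shared "row": Deposits.Amount for UserId 123 starts at 0.
amount = 0.0

def read_amount():
    return amount

def write_amount(value):
    global amount
    amount = value

# The interleaving from the question: both reads happen before either write.
a1 = read_amount()        # thread 1 reads 0.0
a2 = read_amount()        # thread 2 reads 0.0
write_amount(a1 + 100.0)  # thread 1 writes 100.0
write_amount(a2 + 100.0)  # thread 2 also writes 100.0 - one deposit is lost
```

The final value is 100.0, not 200.0 — exactly the read committed / read uncommitted outcome in the table above. The stronger levels prevent this by blocking (and here, deadlocking) instead of silently losing a write.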
Others already addressed the issue of using REPEATABLE READ. So I'll chime in with a different piece of advice... Why use two statements and not just one statement like the following? ``` UPDATE Deposits SET Amount = Amount + 100.0 WHERE UserId = 123 ``` Also, your real transactions are going off of something more than a UserID, right? If not, you run the risk of working with more records than you originally intended.
How does SQL Server locking work in this scenario?
[ "", "sql", "sql-server", "locking", "" ]
I have read the answers for [What's the best way to check if a file exists in C? (cross platform)](https://stackoverflow.com/questions/230062/whats-the-best-way-to-check-if-a-file-exists-in-c-cross-platform), but I'm wondering if there is a better way to do this using standard c++ libs? Preferably without trying to open the file at all. Both `stat` and `access` are pretty much ungoogleable. What should I `#include` to use these?
Use [boost::filesystem](http://www.boost.org/doc/libs/1_37_0/libs/filesystem/doc/index.htm): ``` #include <boost/filesystem.hpp> if ( !boost::filesystem::exists( "myfile.txt" ) ) { std::cout << "Can't find my file!" << std::endl; } ```
Be careful of race conditions: if the file disappears between the "exists" check and the time you open it, your program will fail unexpectedly. It's better to go and open the file, check for failure and if all is good then do something with the file. It's even more important with security-critical code. Details about security and race conditions: <http://www.ibm.com/developerworks/library/l-sprace.html>
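To make the race concrete, here is a small illustrative sketch (Python rather than C++, purely for brevity) contrasting the racy check-then-open style with the try-and-handle-failure style this answer recommends:

```python
import os

def open_if_exists(path):
    """Racy: the file can disappear between the check and the open."""
    if os.path.exists(path):
        return open(path)  # may still raise if the file vanished meanwhile
    return None

def open_or_none(path):
    """Race-free: just attempt the open and handle the failure."""
    try:
        return open(path)
    except (FileNotFoundError, PermissionError):
        return None
```

The second form has no window in which another process can invalidate the check, which is the point of the security advice above.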
What’s the best way to check if a file exists in C++? (cross platform)
[ "", "c++", "file", "file-io", "" ]
The install instructions are: ``` $ python setup.py build $ sudo python setup.py install # or su first ``` This gives me an .egg file. How do I tell the install to dump the files as a normal, uncompressed library? Thanks!
OK, I hate to answer my own question, but: find your python site-packages (mine is /usr/local/lib/python2.5/site-packages ) then: ``` $ unzip MySQL_python-1.2.2-py2.5-linux-i686.egg ``` This worked fine for me
I'm a little late to this party, but here's a way to do it that seems to work great: ``` sudo python setup.py install --single-version-externally-managed --root=/ ``` And then you don't use a .python-egg, any \*.pth files etc.
Install MySQLdb (for python) as non-compressed egg
[ "", "python", "installation", "mysql", "egg", "" ]
I am implementing a BFS, and what it is going to do is go through an ordered tree to find the shortest solution to a puzzle. What I will be doing is creating a Snapshot object that holds the current position of each piece in a puzzle. I will add this Snapshot object into the queue and check if it is the solution. However, I am creating these snapshots on the fly. So is there some kind of way to automatically generate the names of the Snapshot objects when they are put into the queue? Or do I have to keep track of how many declarations I have made and just hard-code it by saying... Snapshot snapshot2; Snapshot snapshot3; Snapshot snapshot4; etc.?
There is a way - you use the [Preprocessor's Token-Pasting Operator](http://msdn.microsoft.com/en-us/library/09dwwt6y(VS.80).aspx). This allows you to create a name based on a variable, so you'd specify: ``` #define S(variable) snapshot##variable ``` and you'd be able to create variables named snapshot1, snapshot2 etc: ``` Snapshot S(1); Snapshot S(2); ``` However, I'm not sure this is what you really want; I've maybe used this technique once in practice, and that was for a code generator.
I think you're asking how do you create and keep lots of objects when you don't know how many there will be. You need to create an array of the objects. Then you can access them as snapshot[1], snapshot[2]... snapshot[i].
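Either way, the key move is letting a container own the snapshots instead of naming each one. A hedged sketch of the BFS queue pattern in Python (the start state, goal test, and neighbor function are placeholders for the real puzzle):

```python
from collections import deque

def bfs(start, is_goal, neighbors):
    """Breadth-first search: snapshots live in the queue, never in named variables."""
    queue = deque([start])
    seen = {start}             # states must be hashable for this sketch
    parent = {start: None}     # remembers how each snapshot was reached
    while queue:
        snapshot = queue.popleft()
        if is_goal(snapshot):
            # Walk parents backwards to reconstruct the shortest solution.
            path = []
            while snapshot is not None:
                path.append(snapshot)
                snapshot = parent[snapshot]
            return path[::-1]
        for nxt in neighbors(snapshot):
            if nxt not in seen:
                seen.add(nxt)
                parent[nxt] = snapshot
                queue.append(nxt)
    return None
```

Every snapshot created on the fly is simply appended to the queue; the container does the bookkeeping that hand-named variables cannot.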
Variable naming schemes for objects in C++?
[ "", "c++", "oop", "search", "breadth-first-search", "" ]
I have a NotifyIcon control that appears in the system tray. How can I find the location (x, y) of the icon on the screen? Thanks
You can't. It isn't hard to get the window handle for the system tray, GetClassName() returns "ToolbarWindow32", the class name of the standard TOOLBAR common control. Beware that a 64-bit OS has two of them. Then you can send messages to it like TB\_GETBUTTONINFO. Chief hang-ups are that you won't know what button ID to choose and the returned info does not include the button position. Which is for the better, buttons move around without you being able to lock them.
Actually you can do this; the linked example shows you how, and much more. In any mouse event from NotifyIcon, simply look at Control.MousePosition; this contains the (x, y) of the mouse. The same can be done to position a ContextMenu on a form/control exactly where it was clicked, using these (x, y) values. A good example of both of these is here: <http://code.msdn.microsoft.com/TheNotifyIconExample>
How to find the location of the icon in the system tray
[ "", "c#", "winforms", "visual-studio-2008", "notifyicon", "" ]
e.g, ``` foo1 foo2 foo10 foo100 ``` rather than ``` foo1 foo10 foo100 foo2 ``` Update: not interested in coding the sort myself (although that's interesting in its own right), but having the database to do the sort for me.
You can use functions in your order-by clause. In this case, you can split the non-numeric and numeric portions of the field and use them as two of the ordering criteria. ``` select * from t order by to_number(regexp_substr(a,'^[0-9]+')), to_number(regexp_substr(a,'[0-9]+$')), a; ``` You can also create a function-based index to support this: ``` create index t_ix1 on t (to_number(regexp_substr(a, '^[0-9]+')), to_number(regexp_substr(a, '[0-9]+$')), a); ```
# For short strings, small number of numerics

If the number of "numerics" and the maximum length are limited, there is a regexp-based solution. The idea is:

* Pad all numerics with 20 zeroes
* Remove excessive zeroes using another regexp. This might be slow due to [regexp backtracking](http://www.regular-expressions.info/catastrophic.html).

Assumptions:

* Maximum length of numerics is known beforehand (e.g. 20)
* All the numerics can be padded (in other words, `lpad('1 ', 3000, '1 ')` will fail due to being unable to fit the padded numerics into `varchar2(4000)`)

The following query is optimized for the "short numerics" case (see `*?`) and it takes 0.4 seconds. However, when using such an approach, you need to predefine the padding length.

```
select * from (
    select dbms_random.string('X', 30) val from xmltable('1 to 1000')
)
order by regexp_replace(regexp_replace(val, '(\d+)', lpad('0', 20, '0')||'\1')
                       , '0*?(\d{21}(\D|$))', '\1');
```

# "Clever" approach

Even though a separate `natural_sort` function can be handy, there is a little-known trick to do that in pure SQL. Key ideas:

* Strip leading zeroes from all the numerics so `02` is ordered between `1` and `3`: `regexp_replace(val, '(^|\D)0+(\d+)', '\1\2')`. Note: this might result in "unexpected" sorting of `10.02` > `10.1` (since `02` is converted to `2`), however there is no single answer as to how things like `10.02.03` should be sorted
* Convert `"` to `""` so text with quotes works properly
* Convert the input string to comma-delimited format: `'"'||regexp_replace(..., '([^0-9]+)', '","\1","')||'"'`
* Convert the csv to a list of items via `xmltable`
* Augment numeric-like items so string sort works properly
* Use `length(length(num))||length(num)||num` instead of `lpad(num, 10, '0')` as the latter is less compact and does not support 11+ digit numbers.

Note: Response time is something like 3-4 seconds for sorting a list of 1000 random strings of length 30 (the generation of the random strings takes 0.2 sec itself).
The main time consumer is `xmltable` that splits text into rows. If using PL/SQL instead of `xmltable` to split string into rows the response time reduces to 0.4sec for the same 1000 rows. The following query performs natural sort of 100 random alpha-numeric strings (note: it produces wrong results in Oracle 11.2.0.4 and it works in 12.1.0.2): ``` select * from ( select (select listagg(case when regexp_like(w, '^[0-9]') then length(length(w))||length(w)||w else w end ) within group (order by ord) from xmltable(t.csv columns w varchar2(4000) path '.' , ord for ordinality) q ) order_by , t.* from ( select '"'||regexp_replace(replace( regexp_replace(val, '(^|\D)0+(\d+)', '\1\2') , '"', '""') , '([^0-9]+)', '","\1","')||'"' csv , t.* from ( select dbms_random.string('X', 30) val from xmltable('1 to 100') ) t ) t ) t order by order_by; ``` The fun part is this `order by` can be expressed without subqueries, so it is a handy tool to make your reviewer crazy (it works in both 11.2.0.4 and 12.1.0.2): ``` select * from (select dbms_random.string('X', 30) val from xmltable('1 to 100')) t order by ( select listagg(case when regexp_like(w, '^[0-9]') then length(length(w))||length(w)||w else w end ) within group (order by ord) from xmltable('$X' passing xmlquery(('"'||regexp_replace(replace( regexp_replace(t.val, '(^|\D)0+(\d+)', '\1\2') , '"', '""') , '([^0-9]+)', '","\1","')||'"') returning sequence ) as X columns w varchar2(4000) path '.', ord for ordinality) q ); ```
Oracle: How can I implement a "natural" order-by in a SQL query?
[ "", "sql", "oracle", "" ]
Take the following function: ``` DataTable go() { return someTableAdapter.getSomeData(); } ``` When I set a breakpoint in this function, is there a possibility to inspect the returned value? `go()` is directly coupled to a datagrid in an `.aspx` page. The only way to inspect the returned datatable is to use a temporary variable. However, that's a bit inconvenient. Isn't there another way?
Not that I know of. Note that if you *do* add a variable, it will get removed by the compiler in release builds anyway... **Update:** [This functionality has been added to VS2013](https://devblogs.microsoft.com/devops/seeing-function-return-values-in-the-visual-studio-2013-debugger/). You can see the return values in the autos windows or use `$ReturnValue` (or `$ReturnValue1` or `$returnvalue`, depending on the VS version) in the watch/immediate window. The value can only be seen directly after returning from the function, thus the easiest way to access it is by putting a breakpoint on the function call and step over (F10) the call. --- Update for VS2015: boo! unfortunately, it doesn't appear to be in VS2015 (devenv v14) Update for VS2017: it's back. (devenv v15) Update for VS2022: In Visual Studio 2022 the name $ReturnValue in watch window works, but it needs to be added to the watch window BEFORE "return" statement is hit, not after. (according to [Andrew](https://stackoverflow.com/users/257470/andrew-fox))
This can be done in Visual Studio 2013 with CLR 4.5.1 [according to the customer feedback site](http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/2206747-function-return-value-in-debugger). It was not available in previous versions for C#. (Visual Studio 2008 and earlier supported it for VB.NET. It has always been available to C/C++ developers.)
Can I find out the return value before returning while debugging in Visual Studio?
[ "", "c#", "visual-studio", "visual-studio-debugging", "" ]
Looking at `System.Collections.Generic.Dictionary<TKey, TValue>`, it clearly implements `ICollection<KeyValuePair<TKey, TValue>>`, but doesn't have the required "`void Add(KeyValuePair<TKey, TValue> item)`" function. This can also be seen when trying to initialize a `Dictionary` like this: ``` private const Dictionary<string, int> PropertyIDs = new Dictionary<string, int>() { new KeyValuePair<string,int>("muh", 2) }; ``` which fails with > No overload for method 'Add' takes '1' arguments Why is that so?
The expected API is to add via the two argument `Add(key,value)` method (or the `this[key]` indexer); as such, it uses explicit interface implementation to provide the `Add(KeyValuePair<,>)` method. If you use the `IDictionary<string, int>` interface instead, you will have access to the missing method (since you can't hide anything on an interface). Also, with the collection initializer, note that you can use the alternative syntax: ``` Dictionary<string, int> PropertyIDs = new Dictionary<string, int> { {"abc",1}, {"def",2}, {"ghi",3} } ``` which uses the `Add(key,value)` method.
Some interface methods are implemented [explicitly](http://msdn.microsoft.com/en-us/library/aa664591(VS.71).aspx). If you use reflector you can see the explicitly implemented methods, which are: ``` void ICollection<KeyValuePair<TKey, TValue>>.Add(KeyValuePair<TKey, TValue> keyValuePair); bool ICollection<KeyValuePair<TKey, TValue>>.Contains(KeyValuePair<TKey, TValue> keyValuePair); void ICollection<KeyValuePair<TKey, TValue>>.CopyTo(KeyValuePair<TKey, TValue>[] array, int index); bool ICollection<KeyValuePair<TKey, TValue>>.Remove(KeyValuePair<TKey, TValue> keyValuePair); IEnumerator<KeyValuePair<TKey, TValue>> IEnumerable<KeyValuePair<TKey, TValue>>.GetEnumerator(); void ICollection.CopyTo(Array array, int index); void IDictionary.Add(object key, object value); bool IDictionary.Contains(object key); IDictionaryEnumerator IDictionary.GetEnumerator(); void IDictionary.Remove(object key); IEnumerator IEnumerable.GetEnumerator(); ```
C#: How can Dictionary<K,V> implement ICollection<KeyValuePair<K,V>> without having Add(KeyValuePair<K,V>)?
[ "", "c#", ".net", "dictionary", "interface", "" ]
Is there a built-in way to know if a given session variable is a serialized object? Say I retrieve a value like $_SESSION['foo'], but I don't know if it was originally a string or if it is a serialized object. Is there some way to check, or once serialized does PHP just see a string as a string as a string?
It's a pretty common misconception that you have to manually serialize objects before putting them in session. That is not the case. You can simply assign an object instance to a slot in `$_SESSION`, and PHP will automagically serialize and unserialize it for you, between requests.
A string is a string is a string. I think the best you'll be able to do is just try to unserialize it, and if it works, it works. If it doesn't, it doesn't. The only other option would be to use a regex to see if it "looks" like a serialized object. Just running `unserialize()` on it might be easier though.
Checking for Serialized Objects
[ "", "php", "serialization", "session-variables", "" ]
This is a little confusing to explain, so bear with me here... I want to set up a system where a user can send templated emails via my website, except it's not actually sent using my server - it instead just opens up their own local mail client with an email ready to go. The application would fill out the body of the email with predefined variables, to save the user having to type it themselves. They can then edit the message as desired, should it not exactly suit their purposes. There's a number of reasons I want it to go via the user's local mail client, so getting the server to send the email is not an option: it has to be 100% client-side. I already have a mostly-working solution running, and I'll post the details of that as an answer, I'm wondering if there's any better way?
The way I'm doing it now is basically like this: The HTML: ``` <textarea id="myText"> Lorem ipsum... </textarea> <button onclick="sendMail(); return false">Send</button> ``` The Javascript: ``` function sendMail() { var link = "mailto:me@example.com" + "?cc=myCCaddress@example.com" + "&subject=" + encodeURIComponent("This is my subject") + "&body=" + encodeURIComponent(document.getElementById('myText').value) ; window.location.href = link; } ``` This, surprisingly, works rather well. The only problem is that if the body is particularly long (somewhere over 2000 characters), then it just opens a new email but there's no information in it. I suspect that it'd be to do with the maximum length of the URL being exceeded.
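If the long-body case matters, one client-side workaround is to cap the generated link. Here is a sketch; the 1800-character default limit is an assumption, since real limits vary by browser and mail client:

```javascript
// Build a mailto: link, truncating the body if the full URL would exceed
// maxLen. The 1800-character default is a guess; real limits vary by
// browser and mail client.
function buildMailto(to, subject, body, maxLen = 1800) {
    const head = "mailto:" + to +
        "?subject=" + encodeURIComponent(subject) + "&body=";
    let encoded = encodeURIComponent(body);
    while (head.length + encoded.length > maxLen && body.length > 0) {
        body = body.slice(0, -1);        // drop one char and re-encode
        encoded = encodeURIComponent(body);
    }
    return head + encoded;
}
```

The loop re-encodes after each trim so a character whose escaped form is several bytes long is never cut in half.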
Here's the way doing it using jQuery and an "element" to click on : ``` $('#element').click(function(){ $(location).attr('href', 'mailto:?subject=' + encodeURIComponent("This is my subject") + "&body=" + encodeURIComponent("This is my body") ); }); ``` Then, you can get your contents either by feeding it from input fields (ie. using `$('#input1').val()` or by a server side script with `$.get('...')`. Have fun
Sending emails with Javascript
[ "", "javascript", "email", "client-side", "" ]
I'm wondering if there are any simple ways to get a list of all fixed-width (monospaced) fonts installed on a user's system in C#? I'm using .net 3.5 so have access to the WPF System.Windows.Media namespace and LINQ to get font information, but I'm not sure what I'm looking for. I want to be able to provide a filtered list of monospaced fonts and/or pick out monospaced fonts from a larger list of fonts (as seen in the VS options dialog).
Have a look at: <http://www.pinvoke.net/default.aspx/Structures/LOGFONT.html>

Use one of the structures in there, then loop over the families, instantiating a Font, getting the LogFont value and checking lfPitchAndFamily. The following code is written on the fly and untested, but something like the following should work:

```
foreach (FontFamily ff in System.Drawing.FontFamily.Families)
{
    if (ff.IsStyleAvailable(FontStyle.Regular))
    {
        Font font = new Font(ff, 10);
        LOGFONT lf = new LOGFONT();
        font.ToLogFont(lf);
        if ((lf.lfPitchAndFamily & 0x3) == 1)  // FIXED_PITCH
        {
            do stuff here......
        }
    }
}
```
Unfortunately the **ToLogFont** function does not fill the **lfPitchAndFamily** field with correct values. In my case it's always 0. One approximation to detect which fonts might be fixed-width is the following:

```
foreach ( FontFamily ff in FontFamily.Families )
{
    if ( ff.IsStyleAvailable( FontStyle.Regular ) )
    {
        float diff;
        using ( Font font = new Font( ff, 16 ) )
        {
            diff = TextRenderer.MeasureText( "WWW", font ).Width -
                   TextRenderer.MeasureText( "...", font ).Width;
        }
        if ( Math.Abs( diff ) < float.Epsilon * 2 )
        {
            Debug.WriteLine( ff.ToString() );
        }
    }
}
```

Keep in mind that there are several false positives, for example Wingdings.
How do I get all installed fixed-width fonts?
[ "", "c#", "wpf", "fonts", "monospace", "" ]
How do I check for an open connection in jdbc for oracle database? Note: `conn.isClosed()` cannot be used for this.
Usually a Connection Pool will also use the Connection.isClosed() method to check if the Connection is still valid. The problem is that not all JDBC drivers will handle this call correctly. So I assume that there are some simple check statements, just like RealHowTo said. For Oracle he already mentioned the "SELECT 1 FROM Dual" which should always succeed for Oracle databases. I think that there are similar queries for the different databases. I can remember that in a previous project we also implemented our own Connection Pool which used such validation queries.
Something like:

```
Statement stmt = null;
ResultSet rs = null;
try {
    stmt = conn.createStatement();
    // oracle
    rs = stmt.executeQuery("SELECT 1 FROM Dual");
    // others
    // rs = stmt.executeQuery("SELECT 1");
    if (rs.next())
        return true; // connection is valid
}
catch (SQLException e) {
    // TODO : log the exception ...
    return false;
}
finally {
    if (rs != null) rs.close();
    if (stmt != null) stmt.close();
}
```

Note that if the connection is coming from a Connection Pool (in an Application Server for example) then the Pool may have a mechanism to check if a connection is valid or not. With BEA, you specify the SELECT in the "test-on-reserve" property. If you are developing your own pool then you may want to take a look at how others are doing it (ex. [Proxool](http://proxool.sourceforge.net/configure.html)).
programmatically checking for open connection in JDBC
[ "", "java", "oracle", "jdbc", "" ]
I was looking for something like Server.MapPath in the ASP.NET realm to convert the output of Assembly.GetExecutingAssembly().CodeBase into a file path with drive letter. The following code works for the test cases I've tried: ``` private static string ConvertUriToPath(string fileName) { fileName = fileName.Replace("file:///", ""); fileName = fileName.Replace("/", "\\"); return fileName; } ``` It seems like there should be something in the .NET Framework that would be much better--I just haven't been able to find it.
Try looking at the [Uri.LocalPath](http://msdn.microsoft.com/en-us/library/system.uri.localpath.aspx) property. ``` private static string ConvertUriToPath(string fileName) { Uri uri = new Uri(fileName); return uri.LocalPath; // Some people have indicated that uri.LocalPath doesn't // always return the corret path. If that's the case, use // the following line: // return uri.GetComponents(UriComponents.Path, UriFormat.SafeUnescaped); } ```
I looked for an answer a lot, and the most popular answer is using [Uri.LocalPath](http://msdn.microsoft.com/en-us/library/system.uri.localpath.aspx). But System.Uri fails to give correct LocalPath if the Path contains “#”. Details are [here](https://stackoverflow.com/questions/896572/system-uri-fails-to-give-correct-absolutepath-and-localpath-if-the-path-contains). My solution is: ``` private static string ConvertUriToPath(string fileName) { Uri uri = new Uri(fileName); return uri.LocalPath + Uri.UnescapeDataString(uri.Fragment).Replace('/', '\\'); } ```
Is there a .NET Framework method for converting file URIs to paths with drive letters?
[ "", "c#", ".net", "winforms", "uri", "" ]
I have 3 lists, I will make them simple here. list of letters A B C list of numbers 1 2 3 Mixed A,1 A,2 B,2 B,3 C,1 C,3 I need to know what is missing: A,3 B,1 C,2 The list of letters has about 85 entries and the list of numbers has about 500 entries. The mixed list has about 75,000 entries. I can use either a database query (mysql 5.0) or Turbo Delphi 2006 to process text files. What would be the best way to find what is missing? Thanks, Dave
There's no need to create extra tables. The following query would work just as well: ``` SELECT c.chr, n.num FROM chars c, nums n WHERE NOT EXISTS (SELECT 1 FROM mix m WHERE m.chr = c.chr AND m.num = n.num) ```
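The shape of that query can be sanity-checked quickly outside MySQL as well; here is a sketch using Python's built-in sqlite3 with the question's example data (the table and column names mirror the query above):

```python
import sqlite3

# Tiny in-memory reproduction of the question's data: letters, numbers,
# and the combinations that actually exist.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE chars(chr TEXT);
    CREATE TABLE nums(num INTEGER);
    CREATE TABLE mix(chr TEXT, num INTEGER);
    INSERT INTO chars VALUES ('A'), ('B'), ('C');
    INSERT INTO nums VALUES (1), (2), (3);
    INSERT INTO mix VALUES ('A',1),('A',2),('B',2),('B',3),('C',1),('C',3);
""")

# Cross join every letter with every number, keep only the pairs that
# have no matching row in mix.
missing = con.execute("""
    SELECT c.chr, n.num
    FROM chars c, nums n
    WHERE NOT EXISTS (SELECT 1 FROM mix m
                      WHERE m.chr = c.chr AND m.num = n.num)
    ORDER BY c.chr, n.num
""").fetchall()

print(missing)   # [('A', 3), ('B', 1), ('C', 2)]
```

With an index on `mix(chr, num)` the NOT EXISTS probe is an index lookup, which matters at the question's 75,000-row scale.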
A cross join would produce every combination there is, given that you have both of your lists in SQL tables (use CONCAT here since you are on MySQL; `+` would do numeric addition):

```
SELECT CONCAT(Letter, ',', Number) AS Combination
FROM NumberList, LetterList
```

Using the combined result (maybe save it into a temporary table), you can use a NOT EXISTS query to find what is missing:

```
SELECT Combination
FROM AllCombinations AS a
WHERE NOT EXISTS (SELECT 1 FROM MyCombinations AS m
                  WHERE m.Combination = a.Combination)
```

This would require a table `MyCombinations`, which lists all of the combinations you actually have and want to check against the full list. If you want to speed things up, you should use a permanent table of combinations and an index on the `MyCombinations.Combination` field. For repeated querying this is definitely advisable.
Search multiple list for missing entries
[ "", "sql", "delphi", "comparison", "list", "" ]
When I use `DateTime.Now` I get the date and time from the server point of view. Is there any way to get the *client* date and time in ASP.NET?
What I'd do is create a hidden input field and then wire a Javascript routine to the onsubmit event for the form. This routine would populate the hidden field with the time on the client machine. The hidden field can be used with ASP.NET via the HTML control "HtmlInputHidden" class. You just give your input control a runat="server" attribute like any other server-side control. The server can then read out this time when the form posts back. You could even wrap this up in a server control if you need to do this in a number of places. Alternatively, you could do this with AJAX, but the implementation will depend on which library you use.
I like the idea of either using the browser/system time and time zone or letting them select their time zone. In a past project I used something like this:

```
<script language="javascript">
function checkClientTimeZone()
{
    // Set the client time zone
    var dt = new Date();
    SetCookieCrumb("ClientDateTime", dt.toString());

    var tz = -dt.getTimezoneOffset();
    SetCookieCrumb("ClientTimeZone", tz.toString());

    // Expire in one year (getYear/setYear are deprecated and
    // inconsistent across browsers, so use the FullYear variants)
    dt.setFullYear(dt.getFullYear() + 1);
    SetCookieCrumb("expires", dt.toUTCString());
}

// Attach to the document onload event
checkClientTimeZone();
</script>
```

And then on the server:

```
/// <summary>
/// Returns the client (if available in cookie) or server timezone.
/// </summary>
public static int GetTimeZoneOffset(HttpRequest Request)
{
    // Default to the server time zone
    TimeZone tz = TimeZone.CurrentTimeZone;
    TimeSpan ts = tz.GetUtcOffset(DateTime.Now);
    int result = (int) ts.TotalMinutes;

    // Then check for client time zone (minutes) in a cookie
    HttpCookie cookie = Request.Cookies["ClientTimeZone"];
    if (cookie != null)
    {
        int clientTimeZone;
        if (Int32.TryParse(cookie.Value, out clientTimeZone))
            result = clientTimeZone;
    }

    return result;
}
```

Or you can pass it in as a URL parameter and handle it in the Page_Load:

```
http://host/page.aspx?tz=-360
```

Just remember to use minutes, since not all time zones are whole hours.
How to get client date and time in ASP.NET?
[ "", "c#", ".net", "asp.net", "datetime", "client-side", "" ]
In C#, some of the default namespaces such as System.Collections are already listed without typing in a using directive. In Visual Basic, they are not imported for you. Is there a way to force VB to auto-import some default namespaces, or does VB work differently than C#?
I think the first item posted by John Rudy is what you're looking for: add them in the project properties. However, VB.Net does also work differently than C#, in that importing a namespace means a different thing in VB than it does in C#. When you import a namespace in VB, it also brings child namespaces 'in scope', in a manner of speaking. Take the `System` namespace, for example, which is imported by default. Because the System namespace is imported, you don't have to first type `System.` to reference a child namespace like `IO`, like you would in C#. So, right out of the box you can say something like this in VB:

```
If IO.File.Exists(MyFile) Then ....
```

That just isn't possible in C# right now. You either have to also import `System.IO` and then just say `File.Exists()`, or list out the System namespace as well: `System.IO.File.Exists()`. It may not seem very significant, but you really get used to this VB feature after a while, and it comes in handy more than you'd think.

I bring all this up because the end result of this feature is that you often don't want to import as many namespaces in VB as you do in C#.
I'm not sure what you're asking. I can see potentially two questions there: 1. Can you change the VB auto-imports? 2. Can you get auto-import behavior in C#? For #1, yes you can. Assuming Visual Studio 2005 or higher, go into your project properties, and select the References tab. The auto-imports are listed under "Imported Namespaces" at the bottom of the view. For #2, not that I'm aware of. I've never seen that behavior in Visual C#.
Visual basic auto imports namespaces
[ "", "c#", "vb.net", "visual-studio", "" ]
We've got a fairly complex httphandler for handling images. Basically it streams any part of the image at any size that is requested. Some clients use this handler without any problems. But we've got one location that gives us problems, and now it also gives problems on my development environment. What happens is that the client never receives anything on some requests. So request 1 and 2 are fine, but request 3 and 4 never end. * While debugging I can see that the server is ready and has completed the request. * The client however is still waiting on a result (debugging with fiddler2 shows that there is no response received) The code that we use to stream an image is ``` if (!context.Response.IsClientConnected) { imageStream.Close(); imageStream.Dispose(); return; } context.Response.BufferOutput = true; context.Response.ContentType = "image/" + imageformat; context.Response.AppendHeader("Content-Length", imageStream.Length.ToString()); if (imageStream != null && imageStream.Length > 0 && context.Response.IsClientConnected) context.Response.BinaryWrite(imageStream.ToArray()); if (context.Response.IsClientConnected) context.Response.Flush(); imageStream.Close(); imageStream.Dispose(); ``` The imageStream is a MemoryStream with the contents of an image. After the call to response.Flush() we do some more clean-up and writing summaries to the eventlog. We also call GC.Collect() after every request, because the objects that we use in-memory become very large. I know that that is not a good practice, but could it give us trouble? The problems with not returning requests happen at both IIS 5 (Win XP) and IIS 6 (Win 2003), we use .NET framework v2.
A client will limit the number of simultaneous requests it will make to any one server. Furthermore, when requesting a resource that requires session state (the default), other requests for resources requiring session state will block. When using `HttpWebResponse` you must dispose either that object or the stream returned by its `GetResponseStream` method to complete the connection.

Your code was very confusing. You've turned buffering on, set a content-length and used a flush. This results in some strange HTTP headers. Ordinarily with buffering on you would leave the setting of the Content-Length header to ASP.NET to handle. When you use flush, ASP.NET assumes that you may subsequently send more data. In this case it will use chunked transfer. Once the response is complete a final set of headers is sent for the final chunk; each chunk has its own length header and the total length of content is derived from these. The first chunk should *not* have a Content-Length header, however your code is adding that header.

If you turn off buffering and pump the bytes into the output stream yourself **then** you should set the Content-Length header yourself, because effectively buffering off means you are taking responsibility for exactly what gets sent to the client. Marc's code is a simple example of such a pump, although I would use a bigger buffer, or on a MemoryStream the WriteTo method would be more efficient.
First off, there are better ways of dealing with streams than using an array for the entire thing (i.e. `MemoryStream` *might* be unnecessary here). I would envisage a loop:

```
const int BUFFER_SIZE = 4096; // pick your poison
byte[] buffer = new byte[BUFFER_SIZE];
int bytesRead;
while((bytesRead = inStream.Read(buffer, 0, BUFFER_SIZE)) > 0) {
    outStream.Write(buffer, 0, bytesRead);
}
```

You should also do this with buffering disabled (`.Response.BufferOutput = false`). Re the problem, my suspicion is that you haven't written enough data / closed the response (`.Response.Close()`).
HttpHandler response is never returned
[ "", "c#", "asp.net", "httphandler", "" ]
I want to use input from a user as a regex pattern for a search over some text. It works, but how can I handle cases where the user puts in characters that have meaning in regex? For example, the user wants to search for Word `(s)`: the regex engine will take the `(s)` as a group. I want it to treat it like the string `"(s)"`. I can run `replace` on the user input and replace the `(` with `\(` and the `)` with `\)`, but the problem is I will need to do a replace for every possible regex symbol. Do you know some better way?
Use the `re.escape()` function for this: [4.2.3 `re` Module Contents](http://docs.python.org/library/re.html#re.escape) > **escape(string)** > > Return string with all non-alphanumerics backslashed; this is useful if you want to match an arbitrary literal string that may have regular expression metacharacters in it. A simplistic example, search any occurence of the provided string optionally followed by 's', and return the match object. ``` def simplistic_plural(word, text): word_or_plural = re.escape(word) + 's?' return re.match(word_or_plural, text) ```
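Applied to the question's `(s)` case, a quick runnable check that the escaped pattern matches the literal text instead of forming a group:

```python
import re

user_input = "Word (s)"                 # what the user typed
pattern = re.escape(user_input)         # parentheses become literal \( \)

text = "searching for Word (s) in some text"
match = re.search(pattern, text)
print(match.group(0))                   # Word (s)

# Without escaping, the parentheses form a capturing group, so the raw
# pattern matches "Word s" rather than the literal "Word (s)".
assert re.search("Word (s)", "Word s") is not None
```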
You can use [`re.escape()`](http://docs.python.org/library/re.html#re.escape): > re.escape(string) > Return string with all non-alphanumerics backslashed; this is useful if you want to match an arbitrary literal string that may have regular expression metacharacters in it. ``` >>> import re >>> re.escape('^a.*$') '\\^a\\.\\*\\$' ``` If you are using a Python version < 3.7, this will escape non-alphanumerics that are *not* part of regular expression syntax as well. If you are using a Python version < 3.7 but >= 3.3, this will escape non-alphanumerics that are *not* part of regular expression syntax, *except* for specifically underscore (`_`).
Escaping regex string
[ "", "python", "regex", "" ]
I have a Perl module that I would like to use from Java. Is there a way to call this code using either ActiveState Perl on Windows or the generic Perl that comes with Linux? I have found references to JPL but it doesn’t appear to be maintained anymore.
[Inline-Java](http://search.cpan.org/~patl/Inline-Java-0.52/Java.pod) is the usual library to call Java from Perl, and this [post proposes an **org.perl.java**](http://www.piersharding.com/blog/archives/2004/04/calling_perl_fr.html) module which should allow calling Perl from Java, as asked.

However, because of the unpredictability of the JNI implementations for different JVMs it is difficult to say what combinations of JVM and Perl will work. Typically, what is required is Perl with MULTIPLICITY, and threads compiled in. That means he uses a custom-built Perl.

Otherwise, **[Inline::Java::Callback](http://linux.wareseeker.com/Programming/inline-java-callback-0.52.zip/326355)** allows you to call Perl functions from Java. To do this you need to create an `org.perl.inline.java.InlinePerlCaller` object. Here is an example of a typical use:

```
use Inline Java => <<END ;
import java.util.* ;
import org.perl.inline.java.* ;

class Pod_regexp extends InlineJavaPerlCaller
{
    public Pod_regexp() throws InlineJavaException
    {
    }

    public boolean match(String target, String pattern)
        throws InlineJavaException
    {
        try
        {
            String m = (String)CallPerlSub("main::regexp",
                new Object [] {target, pattern}) ;
            if (m.equals("1")){
                return true ;
            }
        }
        catch (InlineJavaPerlException pe){
            // $@ is in pe.GetObject()
        }
        return false ;
    }
}
END

my $re = new Pod_regexp() ;
my $match = $re->match("Inline::Java", "^Inline") ;
print($match . "\n") ; # prints 1

sub regexp
{
    my $target = shift ;
    my $pattern = shift ;
    return ($target =~ /$pattern/) ;
}
```
Is this not what Runtime.exec() is for? ``` Runtime.getRuntime().exec("/usr/bin/perl myPerl.pl"); ``` Or am I misunderstanding the question?
How can I call Perl from Java?
[ "", "java", "perl", "" ]
I am writing a multi-thread socket chat in C++Builder 2009. It is almost complete in accordance with what I need to do, but I have a little problem. I need to pass the TMemo\* pointer into the CreateThread WinAPI function, which upcasts it to void\*. I tried this way:

```
HANDLE xxx = MemoChat->Handle;
hNetThread = CreateThread(NULL, 0, NetThread, xxx, 0, &dwNetThreadId);
//...
```

and then, in the NetThread function,

```
TMemo* MyMemo((HANDLE)lpParam);
TMemo* MyMemo((TMemo*)lpParam);
```

but it didn't work:( The question is how I can correctly downcast it so I can use my Memo component in this new thread?
Call:

```
TMemo* MemoChat = // You defined that somewhere I assume

HANDLE hNetThread = CreateThread(NULL, 0, NetThread, MemoChat, 0, &dwNetThreadId);
```

What is happening here is that any pointer you pass as the fourth parameter is implicitly converted into a void pointer (or in Win terms, LPVOID). That's fine; it does not change the pointer, it just loses the type information, as the system does not know anything about your object.

The new thread start point:

```
DWORD WINAPI NetThread(LPVOID lpParameter)
{
    TMemo* MemoChat = reinterpret_cast<TMemo*>(lpParameter);

    // Do your thread stuff here.

    return 0;
}
```

Once your thread start method is called, just convert the void pointer back into the correct type and you should be able to start using it again.

Just to clear up other misconceptions: a **HANDLE is pointer-sized**, and you could have passed it as the parameter to NetThread(). A HANDLE is a pointer to a pointer under system control which points at the object you are using. So why the double indirection? It allows the system to move the object (and update its pointer) without finding all owners of the object. The owners all have handles that point at the pointer that was just updated. It is an old-fashioned computer science concept that is used infrequently in modern computers because of the OS/hardware ability to swap main memory into secondary storage, but for certain resources they are still useful. Nowadays, when handles are required, they are hidden inside objects, away from the user.
Please understand that a HANDLE is **not** a pointer, but a concept of the Win32 API. So the first line casts the LPVOID to HANDLE - this is correct, as the parameter of the thread routine really is given as a handle (xxx). However, it then goes on converting the HANDLE to a MyMemo object; this treats the bits in the handle as if they formed an address - which they don't. The second line does exactly the same conversion - it treats a handle as if it were a pointer directly. I wonder why you aren't passing MemoChat itself to the thread: ``` hNetThread = CreateThread(NULL, 0, NetThread, MemoChat, 0, &dwNetThreadId); ```
How can I correctly downcast the pointer from void* to TMemo* in C++Builder 2009?
[ "c++", "c++builder", "downcast", "tmemo" ]
I have some elements on a page which are draggable. These same elements have a click event which navigates to another page. I'm trying to determine the best way of preventing the click event from firing if the user is dragging, while still allowing the click event if not dragging. Anyone have any ideas of the best way to accomplish this?
I solved this by using something like the following: ``` new Draggable('id', { onStart: function() { dPhoto = $('id'); Event.stopObserving('id', 'click'); }, onEnd : function() { setTimeout("Event.observe('id', 'click', function() { location.href = 'url'; });", 500); }, revert: true }); ```
``` var click_func; function onDragStart(drgObj,mouseEvent){ click_func = mouseEvent.target.onclick; mouseEvent.target.onclick = function(e){ mouseEvent.target.onclick = click_func; return false; } } ```
Preventing JavaScript click event with Scriptaculous drag and drop
[ "javascript", "events", "event-handling", "drag-and-drop", "scriptaculous" ]
I've written a paged search stored procedure using SQL Server 2005. It takes a number of parameters and the search criteria are moderately complex. Due to the front-end architecture I need to be able to return the number of results that would come back **without** actually returning the results. The front end would then call the stored procedure a second time to get the actual results. On the one hand I can write two stored procedures - one to handle the count and one to handle the actual data - but then I need to maintain the search logic in at least two different places. Alternatively, I can write the stored procedure so that it takes a bit parameter and, based on that, either return data or just a count. Maybe fill a temporary table with the data and, if it's count-only, just do a count from that, otherwise do a select from it. The problem here is that the count process could be optimized on its own, so this seems like a lot of extra overhead (it has to fetch unneeded columns, etc.). Also, using this kind of logic in a stored procedure could result in bad query plans as it goes back and forth between the two uses. The amount of data in the system isn't too high (only a couple million rows for even the larger tables). There may be many concurrent users though. What are people's thoughts on these approaches? Has anyone solved this problem before in a way that I haven't thought of? The front end **CANNOT** take the results and count at the same time from a single call. Thanks!
I personally go with the two-query approach. Yes, you have to maintain the search logic in two places, but I've found that the performance benefit and the overall cleanliness of the code pay off in the end. Using a flag passed to a single procedure is a potential solution, but I just find that very hard to maintain, especially for complex search logic. The route of using temporary tables just adds WAY more overhead than is needed. That's why I have landed on the two-query method. Everything that I find online recommends this approach as well.
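The pattern is easy to sketch outside SQL Server; here is a minimal illustration in Python with SQLite (the schema and all names are made up for the example), showing how the count query and the page query can share one WHERE clause so the search logic lives in a single place:

```python
import sqlite3

# Hypothetical schema and data, just to illustrate the pattern.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [("widget %d" % i,) for i in range(25)])

# The search predicate lives in ONE string, shared by both queries.
where = "WHERE name LIKE ?"
params = ("widget%",)

# Query 1: count only (cheap, no unneeded columns).
total = conn.execute("SELECT COUNT(*) FROM items " + where, params).fetchone()[0]

# Query 2: one page of actual rows.
page_size, page = 10, 2
rows = conn.execute(
    "SELECT id, name FROM items " + where + " ORDER BY id LIMIT ? OFFSET ?",
    params + (page_size, (page - 1) * page_size)).fetchall()

print(total)      # 25
print(len(rows))  # 10
```

In T-SQL the same idea applies: keep the shared predicate in one place (for example a view or derived table) and wrap it in either a SELECT COUNT(*) or the paged SELECT.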
This isn't a normal problem: you usually want the total count at the same time as you're getting a page. That said, use two different procedures. The reason is that you have two very different actions which only resemble each other superficially.
Getting counts for a paged SQL search stored procedure
[ "sql", "t-sql", "search", "stored-procedures", "paging" ]
If I'm reading a text file in shared access mode and another process truncates it, what is the easiest way to detect that? (I'm excluding the obvious choice of refreshing a FileInfo object periodically to check its size) Is there some convenient way to capture an event? (Filewatcher?)
There is: **it's called [FileSystemWatcher](http://msdn.microsoft.com/en-us/library/system.io.filesystemwatcher.aspx)**. If you are developing a Windows Forms application, you can drag-and-drop it from the toolbox. Here's a usage example: ``` private void myForm_Load(object sender, EventArgs e) { var fileWatcher = new System.IO.FileSystemWatcher(); // Monitor changes to PNG files in C:\temp and subdirectories fileWatcher.Path = @"C:\temp"; fileWatcher.IncludeSubdirectories = true; fileWatcher.Filter = @"*.png"; // Attach event handlers to handle each file system event fileWatcher.Changed += fileChanged; fileWatcher.Created += fileCreated; fileWatcher.Renamed += fileRenamed; // Start monitoring! fileWatcher.EnableRaisingEvents = true; } void fileRenamed(object sender, System.IO.RenamedEventArgs e) { // a file has been renamed! } void fileCreated(object sender, System.IO.FileSystemEventArgs e) { // a file has been created! } void fileChanged(object sender, System.IO.FileSystemEventArgs e) { // a file is modified! } ``` It's in System.IO and System.dll, so you should be able to use it in most types of projects.
FSW cannot do this reliably; it is asynchronous. Assuming you don't get an exception, StreamReader.ReadLine() will return null once the file has been truncated. Then check whether the size changed. Beware of the unavoidable race condition: you'll need to verify your assumptions about timing.
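The read-returns-EOF-then-check-the-size idea is easy to see in action outside .NET; here is a small Python sketch (unbuffered reads, so previously buffered data can't hide the truncation):

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "file.txt")
with open(path, "w") as f:
    f.write("line1\nline2\nline3\n")

reader = open(path, "rb", buffering=0)   # unbuffered raw reads
first = reader.readline()                # reads b"line1\n"

# Another "process" truncates the file while we hold it open.
open(path, "w").close()

nxt = reader.readline()
# EOF before our current offset means the file shrank under us.
if nxt == b"" and os.path.getsize(path) < reader.tell():
    status = "truncated"
else:
    status = "ok"
print(first, status)   # first == b"line1\n", status == "truncated"
```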
How to detect a truncated file in C#
[ "c#", ".net" ]
I'm attempting to resize PNGs with transparent backgrounds in PHP, and the code samples I've found online don't work for me. Here's the code I'm using; advice will be much appreciated! ``` $this->image = imagecreatefrompng($filename); imagesavealpha($this->image, true); $newImage = imagecreatetruecolor($width, $height); // Make a new transparent image and turn off alpha blending to keep the alpha channel $background = imagecolorallocatealpha($newImage, 255, 255, 255, 127); imagecolortransparent($newImage, $background); imagealphablending($newImage, false); imagesavealpha($newImage, true); imagecopyresampled($newImage, $this->image, 0, 0, 0, 0, $width, $height, $this->getWidth(), $this->getHeight()); $this->image = $newImage; imagepng($this->image,$filename); ``` **Update** By 'not working' I meant to say the background color changes to black when I resize PNGs.
From what I can tell, you need to set the blending mode to `false`, and the save alpha channel flag to `true`, **before** you do the imagecolorallocatealpha() ``` <?php /** * https://stackoverflow.com/a/279310/470749 * * @param resource $image * @param int $newWidth * @param int $newHeight * @return resource */ function getImageResized($image, int $newWidth, int $newHeight) { $newImg = imagecreatetruecolor($newWidth, $newHeight); imagealphablending($newImg, false); imagesavealpha($newImg, true); $transparent = imagecolorallocatealpha($newImg, 255, 255, 255, 127); imagefilledrectangle($newImg, 0, 0, $newWidth, $newHeight, $transparent); $src_w = imagesx($image); $src_h = imagesy($image); imagecopyresampled($newImg, $image, 0, 0, 0, 0, $newWidth, $newHeight, $src_w, $src_h); return $newImg; } ?> ``` **UPDATE**: This code only works when the background is fully transparent (opacity = 0). If your image has a background with 0 < opacity < 100, it will come out black.
Here is a final solution that is working fine for me. ``` function resizePng($im, $dst_width, $dst_height) { $width = imagesx($im); $height = imagesy($im); $newImg = imagecreatetruecolor($dst_width, $dst_height); imagealphablending($newImg, false); imagesavealpha($newImg, true); $transparent = imagecolorallocatealpha($newImg, 255, 255, 255, 127); imagefilledrectangle($newImg, 0, 0, $dst_width, $dst_height, $transparent); // fill the whole destination imagecopyresampled($newImg, $im, 0, 0, 0, 0, $dst_width, $dst_height, $width, $height); return $newImg; } ```
How do I resize pngs with transparency in PHP?
[ "php", "png", "gd", "resize" ]
How do I import a Python module given its relative path? For example, if `dirFoo` contains `Foo.py` and `dirBar`, and `dirBar` contains `Bar.py`, how do I import `Bar.py` into `Foo.py`? Here's a visual representation: ``` dirFoo\ Foo.py dirBar\ Bar.py ``` `Foo` wishes to include `Bar`, but restructuring the folder hierarchy is not an option.
Assuming that both your directories are real Python packages (they do have the `__init__.py` file inside them), here is a safe solution for including modules relative to the location of the script. I assume that you want to do this because you need to include a set of modules with your script. I use this in production in several products, and it works in many special scenarios, like scripts called from another directory or executed with Python's exec instead of opening a new interpreter. ``` import os, sys, inspect # realpath() will make your script run, even if you symlink it :) cmd_folder = os.path.realpath(os.path.abspath(os.path.split(inspect.getfile( inspect.currentframe() ))[0])) if cmd_folder not in sys.path: sys.path.insert(0, cmd_folder) # Use this if you want to include modules from a subfolder cmd_subfolder = os.path.realpath(os.path.abspath(os.path.join(os.path.split(inspect.getfile( inspect.currentframe() ))[0],"subfolder"))) if cmd_subfolder not in sys.path: sys.path.insert(0, cmd_subfolder) # Info: # cmd_folder = os.path.dirname(os.path.abspath(__file__)) # DO NOT USE __file__ !!! # __file__ fails if the script is called in different ways on Windows. # __file__ fails if someone does os.chdir() before. # sys.argv[0] also fails, because it does not always contain the path. ``` As a bonus, this approach lets you force Python to use your module instead of the ones installed on the system. Warning! I don't really know what happens when the current module is inside an `egg` file. It probably fails too.
Be sure that dirBar has the `__init__.py` file -- this makes a directory into a Python package.
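A self-contained sketch of that layout (built in a temp directory purely for illustration; a real project would just have the files on disk):

```python
import os, sys, tempfile

# Recreate the dirFoo/dirBar layout from the question in a temp dir.
dir_foo = os.path.join(tempfile.mkdtemp(), "dirFoo")
dir_bar = os.path.join(dir_foo, "dirBar")
os.makedirs(dir_bar)

# __init__.py is what makes dirBar a package.
open(os.path.join(dir_bar, "__init__.py"), "w").close()
with open(os.path.join(dir_bar, "Bar.py"), "w") as f:
    f.write("def greet():\n    return 'hello from Bar'\n")

# Foo.py lives in dirFoo; once dirFoo is on sys.path, the import works:
sys.path.insert(0, dir_foo)
from dirBar import Bar

print(Bar.greet())   # hello from Bar
```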
Import a module from a relative path
[ "python", "relative-path", "python-import" ]
Is there a way to register a handler that will be called exactly at the time when the last reference to a certain object is released? An example would be an object that is backed by a physical data file: once the object becomes unreferenced, the file should be closed and then renamed. It would be nice if this were possible without having to explicitly call a "close" method on the object. All the notification mechanisms I am aware of from the Weak/Phantom reference area only state that notification will occur at some point in time, but there is no guarantee as to when this will happen...
In short, **no.** The Java specification explicitly denies you the ability to know when the last reference is released. JVM implementations (and optimizations) depend on this. There is no hook.
I think WeakReference does what you want. A WeakReference gets put into the ReferenceQueue as soon as it's weakly reachable (i.e. all strong references are gone). See this article by [Ethan Nicholas](http://weblogs.java.net/blog/enicholas/archive/2006/05/understanding_w.html). If you are worried about some references not reaching the ReferenceQueue at shutdown, then keep a list of all objects created (using WeakReferences or PhantomReferences). Add a shutdown hook that checks the list for any outstanding references and performs whatever action you need.
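Purely to illustrate the "callback when the referent goes away" concept, here is a Python sketch. Note that CPython's reference counting makes the callback immediate, whereas Java (like other tracing-GC runtimes) only promises it at some point after collection, which is exactly the caveat in the question:

```python
import weakref

class FileBacked:
    def __init__(self, name):
        self.name = name

events = []

obj = FileBacked("data.tmp")
# Register a callback that fires when the referent is collected.
ref = weakref.ref(obj, lambda r: events.append("cleanup"))

assert events == []   # still strongly referenced
del obj               # drop the last strong reference
print(events)         # ['cleanup'] (CPython collects immediately via refcounting)
```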
How to detect when an object is no longer referenced
[ "java", "reference", "weak-references" ]
Is there a way to copy an entire directory to the output directory in a console application in C#.NET? I know that for files you can right-click them, go to Properties, and mark Copy to Output Directory. But I'm not going to do that for 20.000 files... Thx, Lieven Cardoen aka Johlero
Are you sure you want all 20,000 files in the output? Besides a manual copy-paste operation, I see two options: one is to have a post-build task copy all the files; the other is to zip the 20,000 files, add the zip file as a content item to your project with "copy always" set, and have either the console application unzip them or a post-build step call an unzip utility to do that.
1. While in development, use application configuration to point to where these files CURRENTLY reside, instead of copying them. (Copying that many objects on each build is a nightmare, even if you copy only new/updated files.) 2. When preparing a release build, you might want to consider using a build script and MSBuild or NAnt to do this type of thing for you. 3. If you REALLY are convinced you still want to perform a copy on each build, use a pre/post-build script (in your project build configuration, point to a batch file which performs an xcopy; some smart batch scripting can sometimes be very helpful).
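If you go the build-script route, the recursive copy itself is trivial in any scripting language. A sketch in Python with stand-in paths (a real post-build event on Windows would more likely call xcopy or robocopy):

```python
import os, shutil, tempfile

# Stand-ins for the project's asset folder and the build output directory.
root = tempfile.mkdtemp()
src = os.path.join(root, "Assets")
dst = os.path.join(root, "bin", "Debug", "Assets")

os.makedirs(src)
for i in range(3):   # pretend these are the 20,000 files
    with open(os.path.join(src, "file%d.txt" % i), "w") as f:
        f.write("data")

shutil.copytree(src, dst)   # recursive copy, creates dst and parents
print(sorted(os.listdir(dst)))   # ['file0.txt', 'file1.txt', 'file2.txt']
```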
Copy Directory to Output Directory - Console Application .NET
[ "c#", "include", "directory", "copy" ]
If I have a date, how do I calculate the week number for that date within that year? For example, in 2008, January 1st to January 6th are in week 1 and January 7th to the 13th are in week 2, so if my date was January 10th 2008, my week number would be 2. An algorithm would be great to get me started and sample code would also help - I'm developing in C++ on Windows. ### Related: > [Getting week number off a date in MS SQL Server 2005?](https://stackoverflow.com/questions/348880/getting-week-number-off-a-date-in-ms-sql-server-2005)
Pseudocode: ``` int julian = getDayOfYear(myDate) // Jan 1 = 1, Jan 2 = 2, etc... int dow = getDayOfWeek(myDate) // Sun = 0, Mon = 1, etc... int dowJan1 = getDayOfWeek("1/1/" + thisYear) // find out first of year's day // int badWeekNum = (julian / 7) + 1 // Get our week# (wrong! Don't use this) int weekNum = ((julian + 6) / 7) // probably better. CHECK THIS LINE. (See comments.) if (dow < dowJan1) // adjust for being after Saturday of week #1 ++weekNum; return (weekNum) ``` To clarify, this algorithm assumes you number your weeks like this: ``` S M T W R F S 1 2 3 <-- week #1 4 5 6 7 8 9 10 <-- week #2 [etc.] ``` getDayOfWeek() and getDayOfYear() are standard date-object operations in most languages. If yours doesn't have them, you can count-forward from some known date (Jan 1, 1970 is a common one), after looking up to see what day of the week it was. If you're going to implement your own date counting routines, remember that years that are divisible by 100 are **NOT** leap years, unless they are also divisible by 400. So 1900 was not a leap year, but 2000 was. If you're going to work far back in time, you have to mess with Gregorian vs Julian calendars, etc., see [Wikipedia](http://en.wikipedia.org/wiki/Calendar_reform) for loads of info on that. [This link](http://www.codeproject.com/KB/datetime/datetimedisc.aspx) talks about date/time functions in Windows/C++ in greater detail.
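A direct translation of the pseudocode above into Python (same Sunday-start numbering as the diagram; the `(weekday() + 1) % 7` shuffle converts Python's Mon=0 encoding into the Sun=0 encoding assumed here):

```python
from datetime import date

def week_number(d):
    julian = d.timetuple().tm_yday          # Jan 1 = 1, Jan 2 = 2, ...
    dow = (d.weekday() + 1) % 7             # Sun = 0, Mon = 1, ..., Sat = 6
    dow_jan1 = (date(d.year, 1, 1).weekday() + 1) % 7
    week = (julian + 6) // 7                # i.e. ceil(julian / 7)
    if dow < dow_jan1:                      # past Saturday of the partial week #1
        week += 1
    return week

# Jan 1 2004 was a Thursday, matching the S M T W R F S diagram:
print(week_number(date(2004, 1, 3)))   # 1 (Saturday ends week #1)
print(week_number(date(2004, 1, 4)))   # 2 (Sunday starts week #2)
```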
Be aware that while your definition of *nth* week of the year is tenable, it is also not 'the' standard one. ISO 8601 defines a standard for the representation of dates, times and time zones. It defines weeks that start on a Monday. It also says Week 1 of a year is the one which contains at least 4 days from the given year. Consequently, the 29th, 30th and 31st of December 20xx could be in week 1 of 20xy (where xy = xx + 1), and the 1st, 2nd and 3rd of January 20xy could all be in the last week of 20xx. Further, there can be a week 53. [*Added*: note that the C standard and the `strftime()` function provide for weeks that start on Sunday as well as weeks that start on Monday. It is not clear that the C standard provides for the year number of week 0 for Sunday-based weeks. See also the answer from Emerick Rogul.] Then comes the interesting testing phase -- when do you get week 53? One answer is on Friday 1st January 2010, which is in 2009-W53 (as, indeed, is Sunday 3rd January 2010). Similarly, Saturday 1st January 2005 is in 2004-W53, but Sunday 1st January 2006 is in 2005-W52. That is an extract from a comment in the following code, which is actually in Informix SPL (Stored Procedure Language), but is readable - though probably not writable - without much further explanation. The '||' operator is the SQL string concatenation operation, and Sunday is day 0, Monday is day 1, ... Saturday is day 6 of the week. There are extensive notes in the comments, including relevant text from the standard. One-line comments start '`--`'; possibly multiline comments start with '`{`' and end at the next '`}`'. ``` -- @(#)$Id: iso8601_weekday.spl,v 1.1 2001/04/03 19:34:43 jleffler Exp $ -- -- Calculate ISO 8601 Week Number for given date -- Defines procedure: iso8601_weekday(). -- Uses procedure: iso8601_weeknum(). 
{ According to a summary of the ISO 8601:1988 standard "Data Elements and Interchange Formats -- Information Interchange -- Representation of dates and times": The week notation can also be extended by a number indicating the day of the week. For example the day 1996-12-31 which is the Tuesday (day 2) of the first week of 1997 can also be written as 1997-W01-2 or 1997W012 for applications like industrial planning where many things like shift rotations are organized per week and knowing the week number and the day of the week is more handy than knowing the day of the month. This procedure uses iso8601_weeknum() to format the YYYY-Www part of the date, and appends '-d' to the result, allowing for Informix's coding of Sunday as day 0 rather than day 7 as required by ISO 8601. } CREATE PROCEDURE iso8601_weekday(dateval DATE DEFAULT TODAY) RETURNING CHAR(10); DEFINE rv CHAR(10); DEFINE dw CHAR(4); LET dw = WEEKDAY(dateval); IF dw = 0 THEN LET dw = 7; END IF; RETURN iso8601_weeknum(dateval) || '-' || dw; END PROCEDURE; -- @(#)$Id: iso8601_weeknum.spl,v 1.1 2001/02/27 20:36:25 jleffler Exp $ -- -- Calculate ISO 8601 Week Number for given date -- Defines procedures: day_one_week_one() and iso8601_weeknum(). { According to a summary of the ISO 8601:1988 standard "Data Elements and Interchange Formats -- Information Interchange -- Representation of dates and times": In commercial and industrial applications (delivery times, production plans, etc.), especially in Europe, it is often required to refer to a week of a year. Week 01 of a year is per definition the first week which has the Thursday in this year, which is equivalent to the week which contains the fourth day of January. In other words, the first week of a new year is the week which has the majority of its days in the new year. Week 01 might also contain days from the previous year and the week before week 01 of a year is the last week (52 or 53) of the previous year even if it contains days from the new year. 
A week starts with Monday (day 1) and ends with Sunday (day 7). For example, the first week of the year 1997 lasts from 1996-12-30 to 1997-01-05 and can be written in standard notation as 1997-W01 or 1997W01 The week notation can also be extended by a number indicating the day of the week. For example the day 1996-12-31 which is the Tuesday (day 2) of the first week of 1997 can also be written as 1997-W01-2 or 1997W012 for applications like industrial planning where many things like shift rotations are organized per week and knowing the week number and the day of the week is more handy than knowing the day of the month. Referring to the standard itself, section 3.17 defines a calendar week: week, calendar: A seven day period within a calendar year, starting on a Monday and identified by its ordinal number within the year; the first calendar week of the year is the one that includes the first Thursday of that year. In the Gregorian calendar, this is equivalent to the week which includes 4 January. Section 5.2.3 "Date identified by Calendar week and day numbers" states: Calendar week is represented by two numeric digits. The first calendar week of a year shall be identified as 01 [...] Day of the week is represented by one decimal digit. Monday shall be identified as day 1 of any calendar week [...] Section 5.2.3.1 "Complete representation" states: When the application clearly identifies the need for a complete representation of a date identified by calendar week and day numbers, it shall be one of the alphanumeric representations as follows, where CCYY represents a calendar year, W is the week designator, ww represents the ordinal number of a calendar week within the year, and D represents the ordinal number within the calendar week. Basic format: CCYYWwwD Example: 1985W155 Extended format: CCYY-Www-D Example: 1985-W15-5 Both the summary and the formal definition are intuitively clear, but it is not obvious how to translate it into an algorithm. 
However, we can deal with the problem by exhaustively enumerating the seven options for the day of the week on which 1st January falls (with actual year values for concreteness): 1st January 2001 is Monday => Week 1 starts on 2001-01-01 1st January 2002 is Tuesday => Week 1 starts on 2001-12-31 1st January 2003 is Wednesday => Week 1 starts on 2002-12-30 1st January 2004 is Thursday => Week 1 starts on 2003-12-29 1st January 2010 is Friday => Week 1 starts on 2010-01-04 1st January 2005 is Saturday => Week 1 starts on 2005-01-03 1st January 2006 is Sunday => Week 1 starts on 2006-01-02 (Cross-check: 1st January 1997 was a Wednesday; the summary notes state that week 1 of 1997 started on 1996-12-30, which is consistent with the table derived for dates in the first decade of the third millennium above). When working with the Informix DATE types, bear in mind that Informix uses WEEKDAY values 0 = Sunday, 1 = Monday, 6 = Saturday. When the weekday of the first of January has the value in the LH column, you need to add the value in the RH column to the 1st of January to obtain the date of the first day of the first week of the year. Weekday Offset to 1st January 1st day of week 1 0 +1 1 0 2 -1 3 -2 4 -3 5 +3 6 +2 This can be written as MOD(11-w,7)-3 where w is the (Informix encoding of the) weekday of 1st January and the value 11 is used to ensure that no negative values are presented to the MOD operator. 
Hence, the expression for the date corresponding to the 1st day (Monday) of the 1st week of a given year, yyyy, is: d1w1 = MDY(1, 1, yyyy) + MOD(11 - WEEKDAY(MDY(1,1,yyyy)), 7) - 3 This expression is encapsulated in stored procedure day_one_week_one: } CREATE PROCEDURE day_one_week_one(yyyy INTEGER) RETURNING DATE; DEFINE jan1 DATE; LET jan1 = MDY(1, 1, yyyy); RETURN jan1 + MOD(11 - WEEKDAY(jan1), 7) - 3; END PROCEDURE; { Given this date d1w1, we can calculate the week number of any other date in the same year as: TRUNC((dateval - d1w1) / 7) + 1 The residual issues are ensuring that the wraparounds are correct. If the given date is earlier than the start of the first week of the year that contains it, then the date belongs to the last week of the previous year. If the given date is on or after the start of the first week of the next year, then the date belongs to the first week of the next year. Given these observations, we can write iso8601_weeknum as shown below. (Beware: iso8601_week_number() is too long for servers with the 18-character limit; so is day_one_of_week_one()). Then comes the interesting testing phase -- when do you get week 53? One answer is on Friday 1st January 2010, which is in 2009-W53 (as, indeed, is Sunday 3rd January 2010). Similarly, Saturday 1st January 2005 is in 2004-W53, but Sunday 1st January 2006 is in 2005-W52. } CREATE PROCEDURE iso8601_weeknum(dateval DATE DEFAULT TODAY) RETURNING CHAR(8); DEFINE rv CHAR(8); DEFINE yyyy CHAR(4); DEFINE ww CHAR(2); DEFINE d1w1 DATE; DEFINE tv DATE; DEFINE wn INTEGER; DEFINE yn INTEGER; -- Calculate year and week number. 
LET yn = YEAR(dateval); LET d1w1 = day_one_week_one(yn); IF dateval < d1w1 THEN -- Date is in early January and is in last week of prior year LET yn = yn - 1; LET d1w1 = day_one_week_one(yn); ELSE LET tv = day_one_week_one(yn + 1); IF dateval >= tv THEN -- Date is in late December and is in the first week of next year LET yn = yn + 1; LET d1w1 = tv; END IF; END IF; LET wn = TRUNC((dateval - d1w1) / 7) + 1; -- Calculation complete: yn is year number and wn is week number. -- Format result. LET yyyy = yn; IF wn < 10 THEN LET ww = '0' || wn; ELSE LET ww = wn; END IF LET rv = yyyy || '-W' || ww; RETURN rv; END PROCEDURE; ``` --- For completeness, the inverse function is also easy to write with the `day_one_week_one()` function above: ``` -- @(#)$Id: ywd_date.spl,v 1.1 2012/12/29 05:13:27 jleffler Exp $ -- @(#)Create ywd_date() and ywdstr_date() stored procedures -- Convert a date in format year, week, day (ISO 8601) to DATE. -- Two variants: -- ywd_date(yyyy SMALLINT, ww SMALLINT, dd SMALLINT) RETURNING DATE; -- ywdstr_date(ywd CHAR(10)) RETURNING DATE; -- NB: If week 53 is supplied, there is no check that the year had week -- 53 (GIGO). -- NB: If year yyyy is a leap year and yyyy-01-01 falls on Wed (3) or -- Thu (4), there are 53 weeks in the year. -- NB: If year yyyy is not a leap year and yyyy-01-01 falls on Thu (4), -- there are 53 weeks in the year. 
CREATE PROCEDURE ywd_date(yyyy SMALLINT, ww SMALLINT, dd SMALLINT) RETURNING DATE AS date; DEFINE d DATE; -- Check ranges IF yyyy < 1 OR yyyy > 9999 OR ww < 1 OR ww > 53 OR dd < 1 OR dd > 7 THEN RETURN NULL; END IF; LET d = day_one_week_one(yyyy); LET d = d + (ww - 1) * 7 + (dd - 1); RETURN d; END PROCEDURE; -- Input: 2012-W52-5 CREATE PROCEDURE ywdstr_date(ywd CHAR(10)) RETURNING DATE AS date; DEFINE yyyy SMALLINT; DEFINE ww SMALLINT; DEFINE dd SMALLINT; LET yyyy = SUBSTR(ywd, 1, 4); LET ww = SUBSTR(ywd, 7, 2); LET dd = SUBSTR(ywd, 10, 1); RETURN ywd_date(yyyy, ww, dd); END PROCEDURE; CREATE TEMP TABLE test_dates(d DATE); INSERT INTO test_dates VALUES('2011-12-28'); INSERT INTO test_dates VALUES('2011-12-29'); INSERT INTO test_dates VALUES('2011-12-30'); INSERT INTO test_dates VALUES('2011-12-31'); INSERT INTO test_dates VALUES('2012-01-01'); INSERT INTO test_dates VALUES('2012-01-02'); INSERT INTO test_dates VALUES('2012-01-03'); INSERT INTO test_dates VALUES('2012-01-04'); INSERT INTO test_dates VALUES('2012-01-05'); INSERT INTO test_dates VALUES('2012-01-06'); INSERT INTO test_dates VALUES('2012-01-07'); SELECT d, iso8601_weeknum(d), iso8601_weekday(d), ywdstr_date(iso8601_weekday(d)) FROM test_dates ORDER BY d; ``` As noted in the comments, the code will accept a week 53 date even if the year should only accept 52 weeks.
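Those week-53 edge cases are easy to cross-check against a built-in ISO 8601 implementation; for instance, Python's datetime agrees with the examples quoted in the comments above:

```python
from datetime import date

# "Friday 1st January 2010 ... is in 2009-W53"
print(tuple(date(2010, 1, 1).isocalendar())[:2])   # (2009, 53)
print(tuple(date(2010, 1, 3).isocalendar())[:2])   # (2009, 53), Sunday 3rd Jan
# "Saturday 1st January 2005 is in 2004-W53"
print(tuple(date(2005, 1, 1).isocalendar())[:2])   # (2004, 53)
# "Sunday 1st January 2006 is in 2005-W52"
print(tuple(date(2006, 1, 1).isocalendar())[:2])   # (2005, 52)
```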
How do I calculate the week number given a date?
[ "c++", "windows", "algorithm", "time" ]
PHP (among others) will execute the deepest function first, working its way out. For example, ``` $text = strtoupper(str_replace('_', ' ', file_get_contents('file.txt'))); ``` I'm doing something very similar to the above example for a template parser. It looks for the tags ``` {@tag_name} ``` and replaces them with a variable of the name **$tag_name**. One more example: ``` $a = 'hello'; $b = ' world'; INPUT = 'This is my test {@a{@b}} string.'; OUTPUT (step 1) = 'This is my test {@a} world string.'; OUTPUT (step 2) = 'This is my test hello world string.'; ``` How can I go about doing this? Does this make sense? If not, I can try explaining better.
I'm not sure I understand the nesting in your example, as the example doesn't demonstrate a purpose behind nesting. Your example input could very easily be ``` 'This is my test {@a} {@b} string.' ``` And using arrays in str_replace would handle this very simply and quickly. ``` $aVars = array('{@a}' => 'hello', '{@b}' => 'world'); $sString = 'This is my test {@a} {@b} string.'; echo str_replace(array_keys($aVars), array_values($aVars), $sString); ``` Which gives us > This is my test hello world string. Now, a recursive function for this isn't too difficult, though I'm not sure I understand how useful it would be. Here's a working example: ``` function template($sText, $aVars) { if (preg_match_all('/({@([^{}]+)})/', $sText, $aMatches, PREG_SET_ORDER)) { foreach($aMatches as $aMatch) { echo '<pre>' . print_r($aMatches, 1) . '</pre>'; if (array_key_exists($aMatch[2], $aVars)) { // replace the guy inside $sText = str_replace($aMatch[1], $aVars[$aMatch[2]], $sText); // now run through the text again since we have new variables $sText = template($sText, $aVars); } } } return $sText; } ``` That print_r will show you what the matches look like so you can follow the function through its paces. Now let's try it out... ``` $aVars = array('a' => 'hello', 'b' => 'world'); $sStringOne = 'This is my test {@a} {@b} string.'; $sStringTwo = 'This is my test {@a{@b}} string.'; echo template($sStringOne, $aVars) . '<br />'; ``` First Result: > This is my test hello world string. Now let's try String Two ``` echo template($sStringTwo, $aVars) . '<br />'; ``` Second Result: > This is my test {@aworld} string. That may very well be what you're looking for. Obviously you would need an `aworld` variable for this to work recursively... ``` $aVars = array('a' => '', 'b' => '2', 'a2' => 'hello world'); echo template($sStringTwo, $aVars) . '<br />'; ``` And our result. > This is my test hello world string. And just for some fun with the recursion... 
``` $aVars = array('a3' => 'hello world', 'b2' => '3', 'c1' => '2', 'd' => '1'); $sStringTre = 'This is my test {@a{@b{@c{@d}}}} string.'; echo template($sStringTre, $aVars) . '<br />'; ``` > This is my test hello world string. Not sure if this is what you're asking for...
This is not a trivial task. You need to parse the string manually and do your own logical substitutions. There's no magic function or functionality that will do this for you. My own template engine does roughly that (and more), and the core alone (no template macros) weighs in at 600+ LOC.
How does PHP do recursive function calls?
[ "php", "string", "recursion" ]
Currently I have this method: ``` static boolean checkDecimalPlaces(double d, int decimalPlaces){ if (d==0) return true; double multiplier = Math.pow(10, decimalPlaces); double check = d * multiplier; check = Math.round(check); check = check/multiplier; return (d==check); } ``` But this method fails for `checkDecimalPlaces(649632196443.4279, 4)`, probably because I do base-10 math on a base-2 number. So how can this check be done correctly? I thought of getting a string representation of the double value and then checking that with a regexp - but that felt weird. **EDIT:** Thanks for all the answers. There are cases where I really get a double, and for those cases I implemented the following: ``` private static boolean checkDecimalPlaces(double d, int decimalPlaces) { if (d == 0) return true; final double epsilon = Math.pow(10.0, ((decimalPlaces + 1) * -1)); double multiplier = Math.pow(10, decimalPlaces); double check = d * multiplier; long checkLong = (long) Math.abs(check); check = checkLong / multiplier; double e = Math.abs(d - check); return e < epsilon; } ``` I changed the `round` to a truncation. It seems that the computation done in `round` increases the inaccuracy too much, at least in the failing test case. As some of you pointed out, if I could get to the 'real' string input I should use `BigDecimal` to check, and so I have done: ``` BigDecimal decimal = new BigDecimal(value); BigDecimal checkDecimal = decimal.movePointRight(decimalPlaces); return checkDecimal.scale() == 0; ``` The `double` value I get comes from the Apache POI API that reads Excel files. 
I did a few tests and found out that although the API returns `double` values for numeric cells, I can get an accurate representation when I immediately format that `double` with `DecimalFormat`: ``` DecimalFormat decimalFormat = new DecimalFormat(); decimalFormat.setMaximumIntegerDigits(Integer.MAX_VALUE); // don't use grouping for numeric-type cells decimalFormat.setGroupingUsed(false); decimalFormat.setDecimalFormatSymbols(new DecimalFormatSymbols(Locale.US)); value = decimalFormat.format(numericValue); ``` This also works for values that can't be represented exactly in binary format.
The test fails because you have reached the accuracy of the binary floating point representation, which is approximately 16 digits with [IEEE754 double precision](http://en.wikipedia.org/wiki/Double_precision). Multiplying 649632196443.4279 by 10000 will truncate the binary representation, leading to errors when rounding and dividing afterwards, thereby invalidating the result of your function completely. For more details see <http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems> A better way would be to check whether the `n+1` decimal places are below a certain threshold. If `d - round(d)` is less than [`epsilon`](http://en.wikipedia.org/wiki/Epsilon) (see [limit](http://en.wikipedia.org/wiki/Limit_(mathematics))), the decimal representation of `d` has no significant decimal places. Similarly, if `d * 10^n - round(d * 10^n)` is less than `epsilon`, `d` can have at most `n` significant decimal places. Use [Jon Skeet](https://stackoverflow.com/users/22656/jon-skeet)'s [`DoubleConverter`](http://pobox.com/~skeet/csharp/DoubleConverter.cs) to check for the cases where `d` isn't accurate enough to hold the decimal places you are looking for.
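The scaled-epsilon idea described above can be sketched in Java like this (a rough illustration only — the method name and the epsilon value are my own choices, and it still hits the same accuracy wall for very large magnitudes):

```java
public class DecimalPlaces {
    // True if d is within epsilon of some value that has at most n decimal places.
    static boolean hasAtMostDecimalPlaces(double d, int n) {
        double scaled = d * Math.pow(10, n);
        // Distance to the nearest value expressible with n decimal places
        double error = Math.abs(scaled - Math.round(scaled));
        return error < 1e-6; // epsilon chosen arbitrarily for illustration
    }

    public static void main(String[] args) {
        System.out.println(hasAtMostDecimalPlaces(1.25, 2));   // true
        System.out.println(hasAtMostDecimalPlaces(1.2345, 2)); // false
    }
}
```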
If your goal is to represent a number with exactly *n* significant figures to the right of the decimal, [BigDecimal](http://java.sun.com/j2se/1.5.0/docs/api/java/math/BigDecimal.html) is the class to use. > Immutable, arbitrary-precision signed > decimal numbers. A BigDecimal consists > of an arbitrary precision integer > unscaled value and a 32-bit integer > scale. If zero or positive, the scale > is the number of digits to the right > of the decimal point. If negative, the > unscaled value of the number is > multiplied by ten to the power of the > negation of the scale. The value of > the number represented by the > BigDecimal is therefore (unscaledValue > × 10^-scale). `scale` can be set via [setScale(int)](http://java.sun.com/j2se/1.5.0/docs/api/java/math/BigDecimal.html#setScale(int))
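When the value is available as a string, the BigDecimal approach above can be sketched as follows (the helper name is hypothetical; constructing from String matters, since `new BigDecimal(double)` would inherit the double's binary representation error):

```java
import java.math.BigDecimal;

public class ScaleCheck {
    // True if the decimal string has at most n digits after the decimal point.
    static boolean hasAtMostScale(String value, int n) {
        // stripTrailingZeros so "1.2300" counts as 2 decimal places, not 4
        return new BigDecimal(value).stripTrailingZeros().scale() <= n;
    }

    public static void main(String[] args) {
        System.out.println(hasAtMostScale("649632196443.4279", 4));  // true
        System.out.println(hasAtMostScale("649632196443.42791", 4)); // false
    }
}
```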
How to check if a double has at most n decimal places?
[ "", "java", "floating-point", "decimal", "" ]
I'm pivoting data in an MS SQL stored procedure. Columns which are pivoted are dynamically created using a stored procedure parameter (for example: "location1,location2,location3,") so the number of columns which will be generated is not known. Output should look like (where locations are taken from the stored procedure parameter): > OrderTime | Location1 | Location2 | > Location3 Any chance that this can be used in LINQ to SQL? When I dragged this procedure to the dbml file it shows that this procedure returns int type. Columns I use from the `log_sales` table are: * Location (various location which I'm pivoting), * Charge (amount of money) * OrderTime Stored procedure: ``` CREATE PROCEDURE [dbo].[proc_StatsDay] @columns NVARCHAR(64) AS DECLARE @SQL_PVT1 NVARCHAR(512), @SQL_PVT2 NVARCHAR(512), @SQL_FULL NVARCHAR(4000); SET @SQL_PVT1 = 'SELECT OrderTime, ' + LEFT(@columns,LEN(@columns)-1) +' FROM (SELECT ES.Location, CONVERT(varchar(10), ES.OrderTime, 120),ES.Charge FROM dbo.log_sales ES ) AS D (Location,OrderTime,Charge) PIVOT (SUM (D.Charge) FOR D.Location IN ('; SET @SQL_PVT2 = ') )AS PVT ORDER BY OrderTime DESC'; SET @SQL_FULL = @SQL_PVT1 + LEFT(@columns,LEN(@columns)-1) + @SQL_PVT2; EXEC sp_executesql @SQL_FULL, N'@columns NVARCHAR(64)',@columns = @columns ``` In the dbml `designer.cs` file my stored procedure part of the code: ``` [Function(Name="dbo.proc_StatsDay")] public int proc_EasyDay([Parameter(DbType="NVarChar(64)")] string columns) { IExecuteResult result = this.ExecuteMethodCall(this, ((MethodInfo)MethodInfo.GetCurrentMethod()), columns); return ((int)(result.ReturnValue)); } ```
Assuming truly dire dynamic need, you could use [DataContext.ExecuteQuery](http://msdn.microsoft.com/en-us/library/bb361109.aspx). Just whip up a type that will cover the result space (the property names must match the column names in the query): ``` public class DynamicResult { public DateTime OrderTime {get;set;} public decimal? Location1 {get;set;} public decimal? Location2 {get;set;} //.. public decimal? Location100 {get;set;} } ``` Then call ``` IEnumerable<DynamicResult> result = myDataContext.ExecuteQuery<DynamicResult>(commandString, param1); ```
You could create your LINQ object for access after the dataset is returned, but would that really be of any use? LINQ is useful for typesafe calls, not dynamic results. You would not know what to look for at compile time.
Accessing dynamically created stored procedure from LINQ
[ "", "sql", "linq", "stored-procedures", "pivot", "" ]
Was wondering if it is recommended to pass a database connection object around (to other modules) or let the method (in the other module) take care of setting it up. I am leaning toward letting the method set it up, so as not to have to check the state of the connection before using it, and just having the caller pass to the called method any data needed to set up the connection.
Personally I like to use tightly scoped connections; open them late, use them, and close them (in a "using" block, all within the local method). Connection pooling will deal with re-using the connection in most cases, so there is no real overhead in this approach. The main advantage in passing connections *used* to be so that you could pass the transaction around; however, [`TransactionScope`](https://stackoverflow.com/questions/224689/transactions-in-net#224702) is a simpler way of sharing a transaction between methods. Since the classes are implementation specific, I'd write each to open its own native transaction. Otherwise, you can use the ado.net factory methods to create the appropriate type from the config file (the provider name).
Personally, I like storing a stack of my current open connection and transactions on top of the [Thread Local Storage](http://msdn.microsoft.com/en-us/library/system.runtime.remoting.messaging.callcontext_methods.aspx) using SetData and GetData. I define a class that manages my connections to the database and allow it to use the dispose pattern. This saves me the need to pass connections and transactions around, which is something that I think clutters and complicates the code. I would strongly recommend **against** leaving it up to the methods to open connections every time they need data. It will lead to a really bad situation where it is both hard to manage transactions throughout the application and too many connections are opened and closed (I know about connection pooling, it is still more expensive to look up a connection from the pool than it is to reuse an object) So I end up having something along these lines (totally untested): ``` class DatabaseContext : IDisposable { List<DatabaseContext> currentContexts; SqlConnection connection; bool first = false; DatabaseContext (List<DatabaseContext> contexts) { currentContexts = contexts; if (contexts.Count == 0) { connection = new SqlConnection(); // fill in info connection.Open(); first = true; } else { connection = contexts.First().connection; } contexts.Add(this); } static List<DatabaseContext> DatabaseContexts { get { var contexts = CallContext.GetData("contexts") as List<DatabaseContext>; if (contexts == null) { contexts = new List<DatabaseContext>(); CallContext.SetData("contexts", contexts); } return contexts; } } public static DatabaseContext GetOpenConnection() { return new DatabaseContext(DatabaseContexts); } public SqlCommand CreateCommand(string sql) { var cmd = new SqlCommand(sql); cmd.Connection = connection; return cmd; } public void Dispose() { if (first) { connection.Close(); } currentContexts.Remove(this); } } void Test() { // connection is opened here using (var ctx = 
DatabaseContext.GetOpenConnection()) { using (var cmd = ctx.CreateCommand("select 1")) { cmd.ExecuteNonQuery(); } Test2(); } // closed after dispose } void Test2() { // reuse existing connection using (var ctx = DatabaseContext.GetOpenConnection()) { using (var cmd = ctx.CreateCommand("select 2")) { cmd.ExecuteNonQuery(); } } // leaves connection open } ```
passing DB Connection object to methods
[ "", "c#", ".net", "ado.net", "database-connection", "" ]
What is the most efficient way to create a **constant** (never changes at runtime) mapping of `string`s to `int`s? I've tried using a [const Dictionary](https://stackoverflow.com/questions/268064/c-how-can-dictionarykv-implement-icollectionkeyvaluepairkv-without-having-addke), but that didn't work out. I could implement a [immutable wrapper](https://stackoverflow.com/questions/35002/does-c-have-a-way-of-giving-me-an-immutable-dictionary) with appropriate semantics, but that still doesn't seem totally right. --- For those who have asked, I'm implementing [IDataErrorInfo](https://msdn.microsoft.com/en-us/library/system.componentmodel.idataerrorinfo(v=vs.110).aspx) in a generated class and am looking for a way to make the columnName lookup into my array of descriptors. I wasn't aware (typo when testing! d'oh!) that switch accepts strings, so that's what I'm gonna use. Thanks!
Creating a truly compile-time generated constant dictionary in C# is not really a straightforward task. Actually, none of the answers here really achieve that. There is one solution though which meets your requirements, although not necessarily a nice one; remember that according to the C# specification, switch-case tables are compiled to constant hash jump tables. That is, they are constant dictionaries, not a series of if-else statements. So consider a switch-case statement like this: ``` switch (myString) { case "cat": return 0; case "dog": return 1; case "elephant": return 3; } ``` This is exactly what you want. And yes, I know, it's ugly.
There are precious few immutable collections in the current framework. I can think of one relatively pain-free option in .NET 3.5: Use `Enumerable.ToLookup()` - the [`Lookup<,>`](http://msdn.microsoft.com/en-us/library/bb460184.aspx) class is immutable (but multi-valued on the rhs); you can do this from a `Dictionary<,>` quite easily: ``` Dictionary<string, int> ids = new Dictionary<string, int> { {"abc",1}, {"def",2}, {"ghi",3} }; ILookup<string, int> lookup = ids.ToLookup(x => x.Key, x => x.Value); int i = lookup["def"].Single(); ```
Creating a constant Dictionary in C#
[ "", "c#", ".net", "collections", "dictionary", "constants", "" ]
What is the best compiler to experiment with C++0x features? I have been experimenting with GNU g++ 4.4.
Definitely **GCC Trunk**. **ConceptGCC** misses many features GCC trunk has. It is being merged into GCC currently though. It has all these features, including the new `auto`-typed variables (no new function declaration syntax yet though): <http://gcc.gnu.org/projects/cxx0x.html>. There is a GCC branch containing partial lambda support, which also contains other C++0x features. I would recommend you try that one out too. It's in use on `#geordi at irc.freenode.org`, you can experiment with it there.
**[Comeau](http://www.comeaucomputing.com/tryitout/)** - just for trying it online
What is the best compiler to use when you want to experiment with C++0x features?
[ "", "c++", "compiler-construction", "" ]
I have a type ahead text field, and when the user hits "Enter" I want to make an ajax call and not submit the form at the same time. My html looks like this: ``` <input id="drug_name" class="drugs_field" type="text" size="30" onkeypress="handleKeyPress(event,this.form); return false;" name="drug[name]" autocomplete="off"/> <div id="drug_name_auto_complete" class="auto_complete" style="display: none;"/> <script type="text/javascript"> //<![CDATA[ var drug_name_auto_completer = new Ajax.Autocompleter('drug_name', 'drug_name_auto_complete', '/sfc/pharmacy/auto_complete_for_drug_name', {}) //]]> </script> ```
You should add an event handler to the form itself which calls a function to decide what to do. ``` <form onsubmit="return someFunction();"> ``` And then make sure that your someFunction() returns false on success. If it returns true the form will submit normally (which is what you are trying to prevent!). So you can do your AJAX call, see if it succeeded, and return false. Doing it this way you can provide a fallback in case your AJAX call fails and submit the form normally by returning true from your function.
Trap the event and cancel it. It's something like trapping onsubmit(event) and calling event.preventDefault(). The event can tell you that it was triggered by a keystroke, and which one.
How can I prevent a webform from submitting on 'Enter'
[ "", "javascript", "html", "ruby-on-rails", "webforms", "" ]
*Background*: I am currently using custom controls within my C# project (basic controls just drawing a custom look and feel (using gdi+?)). The majority of these controls have transparent segments for irregular shapes etc. *Problem*: I am looking to overlay a semi-transparent (irregularly shaped) panel over a group of controls. Currently I clip the covering panel to the region of the controls involved. This works fine and the results look good, however the process is so slow. *Question*: Is there some way I can speed up the rendering process by mixing in a little OpenGL/DirectX and getting the video card to perform all the necessary rendering rather than relying on the rather slow CPU?
AFAIK you can't really mix GDI+ and OpenGL/DX. If you're getting slow performance and are **absolutely sure** that it's a bottleneck in GDI+ rather than in your code, then it *could* make sense to ditch GDI+ and replace it with DX/OGL. (You would have to write your own controls, though, which would be a major pain in the @$$) Or, for a simpler approach, try WPF/Silverlight! It's customizable and skinnable by default and it's based on DX.
Look at WPF and Silverlight. You would sleep better at night.
C# Combine GDI+ and OpenGL/DirectX
[ "", "c#", "winforms", "opengl", "directx", "gdi+", "" ]
I know it's simple to implement, but I want to reuse something that already exists. The problem I want to solve is that I load configuration (from XML, so I want to cache it) for different pages, roles, ... so the combination of inputs can grow quite large (but in 99% of cases it will not). To handle this 1%, I want to have some max number of items in the cache... Till now I have found org.apache.commons.collections.map.LRUMap in Apache Commons and it looks fine, but I want to check something else as well. Any recommendations?
You can use a [LinkedHashMap](http://docs.oracle.com/javase/6/docs/api/index.html?java/util/LinkedHashMap.html) (Java 1.4+) : ``` // Create cache final int MAX_ENTRIES = 100; Map cache = new LinkedHashMap(MAX_ENTRIES+1, .75F, true) { // This method is called just after a new entry has been added public boolean removeEldestEntry(Map.Entry eldest) { return size() > MAX_ENTRIES; } }; // Add to cache Object key = "key"; cache.put(key, object); // Get object Object o = cache.get(key); if (o == null && !cache.containsKey(key)) { // Object not in cache. If null is not a possible value in the cache, // the call to cache.contains(key) is not needed } // If the cache is to be used by multiple threads, // the cache must be wrapped with code to synchronize the methods cache = (Map)Collections.synchronizedMap(cache); ```
This is an old question, but for posterity I wanted to list [ConcurrentLinkedHashMap](https://github.com/ben-manes/concurrentlinkedhashmap), which is thread safe, unlike [LRUMap](https://commons.apache.org/proper/commons-collections/apidocs/org/apache/commons/collections4/map/LRUMap.html). Usage is quite easy: ``` ConcurrentMap<K, V> cache = new ConcurrentLinkedHashMap.Builder<K, V>() .maximumWeightedCapacity(1000) .build(); ``` And the documentation has some good [examples](https://github.com/ben-manes/concurrentlinkedhashmap/wiki/ExampleUsage), like how to make the LRU cache size-based instead of number-of-items based.
Easy, simple to use LRU cache in java
[ "", "java", "caching", "lru", "" ]
I have a page on which I must load controls dynamically based on the user selection. Let's say that I have something like this: ``` public static readonly Dictionary<string, string> DynamicControls = new Dictionary<string, string> { { "UserCtrl1", "~/Controls/UserCtrl1.ascx" }, { "UserCtrl2", "~/Controls/UserCtrl2.ascx" }, { "UserCtrl3", "~/Controls/UserCtrl3.ascx" }, { "UserCtrl4", "~/Controls/UserCtrl4.ascx"} }; ``` Now let's say that on the page where the controls are loaded the code is something like this: ``` protected void Page_Load(object sender, EventArgs e) { SomePanel.Controls.Add(GetControl()); } private Control GetControl() { string dynamicCtrl = CurrentItem.DynamicControl; string path = SomeClass.DynamicControls[dynamicCtrl]; Control ctrl = null; //TODO: find a better way to load the controls switch (dynamicCtrl) { case "UserCtrl1": { ctrl = (UserCtrl1)LoadControl(path); } break; case "UserCtrl2": { ctrl = (UserCtrl2)LoadControl(path); } break; case "UserCtrl3": { ctrl = (UserCtrl3)LoadControl(path); } break; default: { throw new ApplicationException("Invalid dynamic control added."); } } return ctrl; } ``` The page has the required registered statements. Any idea how I can get rid of this ugly switch statement?
You don't need to cast the result from LoadControl. This should do: ``` private Control GetControl() { string dynamicCtrl = CurrentItem.DynamicControl; string path = SomeClass.DynamicControls[dynamicCtrl]; Control ctrl = LoadControl(path); return ctrl; } ```
You probably want something like this (pseudo-ish code): ``` foreach key in dictionary if key = dynamicControl then ctrl = (Type.GetType(key))LoadControl(dictionary.get(key)) end if next ```
Refactoring solution needed
[ "", "c#", "asp.net", "refactoring", "" ]
I just wrote a new web part and now I am getting this error when I try to deploy it on my non-dev servers: > the default namespace > '<http://schemas.microsoft.com/WebPart/v2>' > is a reserved namespace for base Web > Part properties. Custom Web Part > properties require a unique namespace > (specified through an > XmlElementAttribute on the property, > or an XmlRootAttribute on the class). I am writing the web parts into CAB files and deploying them with this: ``` stsadm -o addwppack -filename web_part_name.CAB -url http://your_url_here -globalinstall -force ``` Everything works fine until I try to add the web part, then I get this error in a popup. It works just fine on my dev VM...? Any ideas would be appreciated, thank you.
Well, it looks like your web part definition file is somehow broken. The way I do it is to put the web part onto a page and then export it. You can do this just by opening the web part gallery, which can be found in the site settings of the root site collection, and adding your web part there. After that just place the web part on any page and use the export button in the web part settings. This will produce a .webpart or .dwp file depending on your web part (.webpart is the 'new' definition which is used by web parts native to MOSS 2007 and .dwp is for older web parts, but it depends on how you have written your web part). Then just take the exported file and include it in your project. This approach works for me at least.
A bit of an educated guess here but anyway :- First check that you have an XmlRoot attribute like this in your web part ``` [XmlRoot(Namespace = "Your.Namespace")] public class YourWebPart: WebPart { ... ``` and an XmlElement attribute on your custom properties ``` [DefaultValue(0)] [WebPartStorage(Storage.Shared)] [Browsable(false)] [XmlElement(ElementName = "YourProperty")] public Int64 YourProperty { ... } ``` This error is happening when .NET is attempting to deserialize the data from the .DWP file and set the custom properties in your web part. I suspect that the error may have nothing to do with namespace conflicts, as SharePoint sometimes tends to fall back to an error message that can be a red herring. I would firstly examine your .dwp file. Do you have any custom properties set there? If so, remove them and retest. Comment out the custom properties in your web part code one by one and retest at each step. I think you will find one of these is causing the problem - exactly why is the next question!
Error when trying to deploy web part
[ "", "c#", "sharepoint", "web-parts", "" ]
I am referring to the [question](https://stackoverflow.com/questions/252893/how-do-you-change-the-classpath-within-java#252905) on changing the classpath programmatically. I read and found out that there are some methods on the **System** class, such as getProperties(), where we can retrieve the properties and then also set them using setProperties(). However, the answers I got were that it won't work. I have not tried this myself; however, I am taking their word for it. Just to clarify then, why are these setProperty() and getProperty() methods there if they cannot alter it at run time? Or is this specific to the classpath property only? I would appreciate it if someone could present a scenario where they are really helpful.
You can certainly set any system properties you want at any point of time. The question is, will it have any effect? In the case of the classpath, the answer is NO. The system class loader is initialized at a very early point in the startup sequence. It copies the classpath into its own data structures, and the classpath property is not read again. Changing it affects nothing in the system. The reason for this may be two-fold. The lesser reason is performance. You may need to have some sort of data structure built for quick lookup of resources, and re-parsing the classpath every time may be inefficient. The more important reason is security. You don't want a rogue class changing the classpath under you and loading a compromised version of another class.
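A minimal sketch of the distinction drawn above — the property value itself is mutable at runtime, but the system class loader never re-reads it (the path used here is hypothetical):

```java
public class ClasspathProperty {
    // Sets java.class.path and returns what getProperty now reports.
    static String setAndRead(String value) {
        System.setProperty("java.class.path", value);
        return System.getProperty("java.class.path");
    }

    public static void main(String[] args) {
        String original = System.getProperty("java.class.path");
        // The property happily reports the new value...
        System.out.println(setAndRead("/some/new/path"));
        // ...but the system class loader keeps the copy it made at startup,
        // so nothing under /some/new/path becomes loadable.
        System.setProperty("java.class.path", original); // restore
    }
}
```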
## Modify Classpath Even though you cannot set the classpath using the system properties (because the JVM reads system properties once: at startup), you can still change the classpath by forcibly invoking the `addURL` method of the classloader. Note that the solution below does not take into consideration the current thread. Consequently, it might not be accurate in all situations. ## Example Solution The original source on Sun's website for the following code has been removed: ``` import java.lang.reflect.InvocationTargetException; import java.lang.reflect.Method; import java.io.File; import java.io.IOException; import java.net.URL; import java.net.URLClassLoader; /** * Allows programs to modify the classpath during runtime. */ public class ClassPathUpdater { /** Used to find the method signature. */ private static final Class[] PARAMETERS = new Class[]{ URL.class }; /** Class containing the private addURL method. */ private static final Class<?> CLASS_LOADER = URLClassLoader.class; /** * Adds a new path to the classloader. If the given string points to a file, * then that file's parent file (i.e., directory) is used as the * directory to add to the classpath. If the given string represents a * directory, then the directory is directly added to the classpath. * * @param s The directory to add to the classpath (or a file, which * will relegate to its directory). */ public static void add( String s ) throws IOException, NoSuchMethodException, IllegalAccessException, InvocationTargetException { add( new File( s ) ); } /** * Adds a new path to the classloader. If the given file object is * a file, then its parent file (i.e., directory) is used as the directory * to add to the classpath. If the given string represents a directory, * then the directory it represents is added. * * @param f The directory (or enclosing directory if a file) to add to the * classpath. 
*/ public static void add( File f ) throws IOException, NoSuchMethodException, IllegalAccessException, InvocationTargetException { f = f.isDirectory() ? f : f.getParentFile(); add( f.toURI().toURL() ); } /** * Adds a new path to the classloader. The class must point to a directory, * not a file. * * @param url The path to include when searching the classpath. */ public static void add( URL url ) throws IOException, NoSuchMethodException, IllegalAccessException, InvocationTargetException { Method method = CLASS_LOADER.getDeclaredMethod( "addURL", PARAMETERS ); method.setAccessible( true ); method.invoke( getClassLoader(), new Object[]{ url } ); } private static URLClassLoader getClassLoader() { return (URLClassLoader)ClassLoader.getSystemClassLoader(); } } ``` The link no longer works: <http://forums.sun.com/thread.jspa?threadID=300557> ## Example Usage The following example will add `/home/user/dev/java/app/build/com/package` to the classpath at runtime: ``` try { ClassPathUpdater.add( "/home/user/dev/java/app/build/com/package/Filename.class" ); } catch( Exception e ) { e.printStackTrace(); } ```
Why can't System.setProperty() change the classpath at runtime?
[ "", "java", "classpath", "" ]
Of the two methods below, which do you prefer to read? Is there another (better?) way to check if a flag is set? ``` bool CheckFlag(FooFlag fooFlag) { return fooFlag == (this.Foo & fooFlag); } ``` And ``` bool CheckFlag(FooFlag fooFlag) { return (this.Foo & fooFlag) != 0; } ``` --- Please vote up the method you prefer.
The two expressions do different things (if fooFlag has more than one bit set), so which one is better really depends on the behavior you want: ``` fooFlag == (this.Foo & fooFlag) // result is true iff all bits in fooFlag are set (this.Foo & fooFlag) != 0 // result is true if any bits in fooFlag are set ```
``` bool CheckFlag(FooFlag fooFlag) { return fooFlag == (this.Foo & fooFlag); } ```
How should I check if a flag is set in a flags enum?
[ "", "c#", "coding-style", "" ]
I have information spread out across a few databases and want to put all the information onto one webpage using PHP. I was wondering how I can connect to multiple databases on a single PHP webpage. I know how to connect to a single database using: ``` $dbh = mysql_connect($hostname, $username, $password) or die("Unable to connect to MySQL"); ``` However, can I just use multiple "mysql_connect" commands to open the other databases, and how would PHP know what database I want the information pulled from if I do have multiple databases connected?
**Warning :** `mysql_xx` functions are deprecated since php 5.5 and removed since php 7.0 (see <http://php.net/manual/intro.mysql.php>), use `mysqli_xx` functions or see the answer below from @Troelskn --- You can make multiple calls to `mysql_connect()`, but if the parameters are the same you need to pass true for the '`$new_link`' (fourth) parameter, otherwise the same connection is reused. For example: ``` $dbh1 = mysql_connect($hostname, $username, $password); $dbh2 = mysql_connect($hostname, $username, $password, true); mysql_select_db('database1', $dbh1); mysql_select_db('database2', $dbh2); ``` Then to query database 1 pass the first link identifier: ``` mysql_query('select * from tablename', $dbh1); ``` and for database 2 pass the second: ``` mysql_query('select * from tablename', $dbh2); ``` If you do not pass a link identifier then the last connection created is used (in this case the one represented by `$dbh2`) e.g.: ``` mysql_query('select * from tablename'); ``` **Other options** If the MySQL user has access to both databases and they are on the same host (i.e. both DBs are accessible from the same connection) you could: * Keep one connection open and call `mysql_select_db()` to swap between as necessary. I am not sure this is a clean solution and you could end up querying the wrong database. * Specify the database name when you reference tables within your queries (e.g. `SELECT * FROM database2.tablename`). This is likely to be a pain to implement. Also please read troelskn's answer because that is a better approach if you are able to use PDO rather than the older extensions.
If you use PHP5 (And you should, given that PHP4 has been deprecated), you should use [PDO](http://docs.php.net/manual/en/book.pdo.php), since this is slowly becoming the new standard. One (very) important benefit of PDO, is that it supports bound parameters, which makes for much more secure code. You would connect through PDO, like this: ``` try { $db = new PDO('mysql:dbname=databasename;host=127.0.0.1', 'username', 'password'); } catch (PDOException $ex) { echo 'Connection failed: ' . $ex->getMessage(); } ``` (Of course replace databasename, username and password above) You can then query the database like this: ``` $result = $db->query("select * from tablename"); foreach ($result as $row) { echo $row['foo'] . "\n"; } ``` Or, if you have variables: ``` $stmt = $db->prepare("select * from tablename where id = :id"); $stmt->execute(array(':id' => 42)); $row = $stmt->fetch(); ``` If you need multiple connections open at once, you can simply create multiple instances of PDO: ``` try { $db1 = new PDO('mysql:dbname=databas1;host=127.0.0.1', 'username', 'password'); $db2 = new PDO('mysql:dbname=databas2;host=127.0.0.1', 'username', 'password'); } catch (PDOException $ex) { echo 'Connection failed: ' . $ex->getMessage(); } ```
How do you connect to multiple MySQL databases on a single webpage?
[ "", "php", "mysql", "" ]
I need to make some changes to a ClickOnce application that I haven't touched for over a year and therefore the certificate has expired. I've read that publishing with a new certificate will make the application fail, because it will be signed with a different key. Therefore I think I need to use the same certificate but not sure how to renew it.
If you're after a quick solution, then you can "renew" your existing certificate and just give it a longer expiry date. Cliff Stanford has cleaned up the Microsoft "workaround" and made it available as a simple command line exe - available here: <http://may.be/renewcert/> - Nice work Cliff !
Here's the definitive MSDN article on certificate expiration, which also includes a link to an update on RenewCert. <http://msdn.microsoft.com/en-us/library/ff369721.aspx> This covers all cases. If you are targeting .NET 3.5, using automatic updates, and don't have a VSTO app, OR you are targeting .NET 4, changing the certificate will not cause you any problems.
How can I renew my expired ClickOnce certificate?
[ "", "c#", "visual-studio", "visual-studio-2008", "clickonce", "" ]
As a web developer, a number of the projects I work on fall under government umbrellas and hence are subject to [508 Accessibility](http://www.section508.gov/) laws, and sometimes [W3C accessibility](http://www.w3.org/TR/WCAG/) guidelines. To what extent can JavaScript be used while still meeting these requirements? Along these lines, to what extent is JavaScript, specifically AJAX and using packages like jQuery to do things such as display modal dialogues, popups, etc., supported by modern accessibility software such as JAWS, Orca, etc? In the past, the rule went something like "If it won't work in Lynx, it won't work for a screen reader." Is this still true, or has there been more progress in these areas? EDIT: The consensus seems to be that JavaScript is fine as long as there are non-JavaScript fallbacks; however, there still seems to be uncertainty about the support for AJAX in screen reader software. If anyone has specific experience with this, that would be most helpful.
If accessibility is your primary concern, always start a website using standards-compliant (pick a Document Type Definition and stick to it) HTML. If it's a web application (form submissions, etc), make sure the forms will work using just HTTP GET and POST. Once you have a complete website/application you can add bits of CSS and JavaScript as long as the site still functions, with either or both off. The most important concept here is [Progressive Enhancement](http://en.wikipedia.org/wiki/Progressive_enhancement). You're adding additional bells and whistles using CSS/JavaScript, but your web site/application will function perfectly well **without** either. A great tool for testing [508](http://www.section508.gov/), [WAI](http://www.w3.org/WAI/), CSS off, and JavaScript off is the [Web Developer](https://addons.mozilla.org/en-US/firefox/addon/60) plugin for Firefox.
Progressive enhancement is certainly one route, but unobtrusiveness is not the be-all and end-all of JavaScript accessibility, as screen readers tend to use browsers as a basis for their work. Since those browsers support JavaScript, scripts on your page will still run. This is a particular problem with AJAX, as clicking on one part of the page could change another part of the page that the screen reader isn't aware of. As AJAX matures, however, methods of making it accessible are emerging. Look into [WAI-ARIA](http://www.w3.org/WAI/intro/aria) for modern methods of making AJAX accessible, and [Google's AxsJAX](http://code.google.com/p/google-axsjax/) for a good way of implementing it.
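To make the WAI-ARIA suggestion above concrete: ARIA "live regions" let a screen reader announce content that a script changes without the user having to navigate to it. A minimal sketch (the `aria-live` and `aria-atomic` attributes come from the ARIA spec; the callback name and wiring here are hypothetical, just for illustration):

```html
<!-- Screen readers announce changes to this region as they happen;
     "polite" waits for a pause in speech rather than interrupting. -->
<div id="status" aria-live="polite" aria-atomic="true"></div>

<script>
  // Hypothetical callback invoked after an AJAX request completes.
  function onSearchResults(count) {
    document.getElementById('status').textContent =
        count + ' results loaded';
  }
</script>
```

Without the `aria-live` attribute, the same DOM update would be silent to most screen reader users, which is exactly the AJAX problem described above.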
Javascript and Accessibility
[ "", "javascript", "jquery", "ajax", "accessibility", "screen-readers", "" ]
I want to convert a particular date, for example 12-11-2008 11:33:04.510, to a UTC DateTime. How can I do this in C#?
Just use [DateTime.ToUniversalTime](http://msdn.microsoft.com/en-us/library/system.datetime.touniversaltime.aspx), assuming it's in the local timezone of your computer at the moment.
DateTime is not timezone-aware. It can be treated as local time or UTC time, and as Jon Skeet said, DateTime.ToUniversalTime can convert between them. In .NET 3.5 there is also the TimeZoneInfo class, which allows for conversion of DateTimes between arbitrary timezones, but for your needs the former is probably good enough. There's also a DateTimeOffset class which works just like a DateTime, except it also stores an offset from UTC, making it a bit more robust if you have to handle multiple timezones.
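For the timestamp in the question, a minimal sketch of both approaches (this assumes the string is day-month-year; swap the format string if it is month-day-year, and note "India Standard Time" is just an example zone id):

```csharp
using System;
using System.Globalization;

class Program
{
    static void Main()
    {
        // Parse the question's timestamp; Kind is Unspecified after ParseExact.
        DateTime local = DateTime.ParseExact(
            "12-11-2008 11:33:04.510",
            "dd-MM-yyyy HH:mm:ss.fff",
            CultureInfo.InvariantCulture);

        // Treats the value as the machine's local time and converts to UTC.
        DateTime utc = local.ToUniversalTime();

        // .NET 3.5+: convert from an explicitly chosen zone instead.
        TimeZoneInfo tz = TimeZoneInfo.FindSystemTimeZoneById("India Standard Time");
        DateTime utcFromZone = TimeZoneInfo.ConvertTimeToUtc(local, tz);

        Console.WriteLine(utc.ToString("u"));
    }
}
```

The first conversion is the simple case; the second matters when the timestamp was recorded in a zone other than the one the code happens to run in.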
Is there any function to convert a particular DateTime to UTC?
[ "", "c#", "" ]
I do TDD on a daily basis for my C++ development on Solaris 10. It has greatly reduced the time I have to spend in my debugger, but sometimes that is the only option. dbx is pretty powerful but not that user-friendly. (Note that I'm talking about console dbx, not the Sun Studio GUI for dbx.) What are the best productivity tips you can give for dbx C++ debugging? PS. Changing debugger is not an option.
I have bookmarked a few sites related to dbx. Here they are, in no particular order, in case they might be useful to you: * [Why Can't dbx Find My Function?](http://developers.sun.com/solaris/articles/dbxerr.html) * [Online Help for the Dbx Debugger](http://blogs.oracle.com/quenelle/resource/dbxhelp.html) * [Sun Studio: debugging a multi-threaded application w/ dbx](http://technopark02.blogspot.com/2005/12/sun-studio-debugging-multi-threaded.html) * [gdb vs. dbx: commands mapping and feature comparison](https://blogs.oracle.com/quenelle/entry/dbx_versus_gdb) * [Importing debug information into dbx](http://quenelle.org/unix/2005/importing-debug-information-into-dbx/) * [command line dbx : See more source](http://blogs.oracle.com/quenelle/entry/command_line_dbx_see_more) * [Sun Studio C++ - Your .dbxrc](http://forums.sun.com/thread.jspa?threadID=5070229&messageID=9258271) (NB: adapt the integer type used to cast pointers according to your pointer size) * [Sun Studio: Investigating memory leaks with dbx](http://technopark02.blogspot.com/2005/10/sun-studio-investigating-memory-leaks.html) * [Sun Studio: Gathering memory allocations/leaks data, from a running process](http://technopark02.blogspot.com/2005/11/sun-studio-gathering-memory.html) Edit on July 19th, 2012: I've tried to fix several links since Chris Quenelle's blog has moved. Some articles can still be accessed thanks to the Wayback Machine.
One of the powerful features of dbx is the built-in ksh. You can write shell functions, as complex as you might want, to print customized views of data structures or follow pointers along a specific route. The shell is an old variation of pdksh, so it's not the nicest for programming, but it's much better than not having any built-in language to use. There is also extensive help available from the command line. One trick for searching the help is to use "vi $helpfile" from within the dbx command shell. That edits the raw helpfile data in one large file. You can see some examples of shell functions in the dbxrc file that ships with dbx and is loaded by default. You can see a more sophisticated example by looking at the libumem integration module. You can find it here: <http://quenelle.org/sun%20tech/2007/using-dbx-and-libumem.html>
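As a small illustration of the built-in ksh mentioned above, here is the shape a `~/.dbxrc` helper might take. This is a sketch, not a tested configuration: `stop in`, `run`, and `where` are standard dbx commands, but the function names and the exact option spellings are my own and may need adjusting for your dbx version:

```
# ~/.dbxrc -- illustrative ksh helpers for console dbx (names are mine)

# Set a breakpoint on a function and immediately run the program:
#   (dbx) brun 'MyClass::doWork'
function brun
{
    stop in "$1"
    run
}

# Shorthand for a backtrace from the current frame.
function bt
{
    where
}
```

Keeping this kind of shorthand in `.dbxrc` is how most people blunt the console's unfriendliness; the shipped dbxrc file is the best source of working examples to crib from.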
how to get as much as possible from dbx
[ "", "c++", "debugging", "solaris", "dbx", "" ]
I have a multidimensional array. I need to search it for a specific range of values, edit those values and return the edited data. Example array: ``` array(3) { ["first"]=> array(1) { [0]=> string(4) "baz1" } ["second"]=> array(1) { [0]=> string(4) "foo1" } ["third"]=> array(1) { [0]=> string(4) "foo2" } } ``` Now I want to find any values that match foo (foo1 and foo2 in the example array), insert "-bar" into them (foo-bar1, foo-bar2) and return that value. What are the best ways to approach this? *EDIT* I should have mentioned that foo could actually be anythingfoo (ex. examplefoo1, somethingelsefoo2, blahblahfoo3). I think this rules out str_replace.
If your array will not be extremely deep, this can work ($array stands in for your own array): ``` $array= array('first' => array('bazi1'), 'second' => array('foo1'), 'third' => array('foo2') ); function modify_foo(&$item, $key) { $item = str_replace('foo', 'foo-bar', $item); } array_walk_recursive( $array, 'modify_foo' ); ``` If you want foo to be replaced even in somethingelsefoo2, then str_replace will be just fine.
How about something like this: ``` function addDashBar($arr) { foreach ($arr as $key => $value) { if (is_array($value)) { $arr[$key] = addDashBar($value); } else { $arr[$key] = str_replace("foo", "foo-bar", $value); } } return $arr; } // e.g. addDashBar(array('third' => array('foo2'))) gives array('third' => array('foo-bar2')) ```
PHP - Editing multiple array values (if they exist)
[ "", "php", "arrays", "" ]