Q: Horrible VMware keyboard shortcuts I'm a VMware user, and I use keyboard shortcuts constantly while programming. However, this has proved quite distressing, as sometimes VMware grabs a shortcut and turns off or pauses (Ctrl+Z) the virtual machine.
Is there a way to disable keyboard shortcuts in VMware? Has anyone here ever found a workaround?
A: I use AutoHotKey (are you running VMware on Windows?) to disable certain shortcuts. You can find this tool here:
http://www.autohotkey.com/
It's open source and I quite like it. It can be used for automation tasks, but you can also have it respond differently to different windows. With some AHK scripting, I think you should be able to fix your problem.
The site has loads of tutorials on writing handy scripts, too.
Good luck.
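For example, a minimal AutoHotKey script to swallow Ctrl+Z only while VMware is focused might look like this sketch (the ahk_class VMwareFrame window class is an assumption - verify it with AHK's Window Spy before relying on it):

```autohotkey
; Only active while a VMware window is in the foreground
#IfWinActive ahk_class VMwareFrame
^z::Return  ; swallow Ctrl+Z so VMware never sees it
#IfWinActive
```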
A: If it's OK for you, here's a slightly hacky but very simple solution: use the ResHacker program to get rid of those annoying accelerators (they are defined as resources in vmware.exe).
A: This was very annoying to me as well. I finally resolved it for me by doing the following:
*
*Open the VMWare Workstation preferences (Edit menu | Preferences)
*Select the Input tab
*Check the "Grab keyboard and mouse input on key press" option.
This way, if your mouse drifts outside the virtual machine's window, the virtual machine will "regain" focus when you start typing.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/152901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Get the contents of a Application Server directory I need to get a listing of a server-side directory inside SAP. How do I achieve this in ABAP? Are there any built-in SAP functions I can call?
Ideally I want a function which I can pass a path as input, and which will return a list of filenames in an internal table.
A: EPS2_GET_DIRECTORY_LISTING does the same thing as EPS_GET_DIRECTORY_LISTING, but returns file names up to 200 characters long!
A: Call function RZL_READ_DIR_LOCAL:
FUNCTION RZL_READ_DIR_LOCAL.
*"----------------------------------------------------------------------
*"Lokale Schnittstelle:
*" IMPORTING
*" NAME LIKE SALFILE-LONGNAME
*" TABLES
*" FILE_TBL STRUCTURE SALFLDIR
*" EXCEPTIONS
*" ARGUMENT_ERROR
*" NOT_FOUND
*"----------------------------------------------------------------------
Place the path in the NAME import parameter, and then read the directory listing from FILE_TBL after it returns.
RZL_READ_DIR_LOCAL can handle normal local paths as well as UNC paths.
The only downside is that it only gives you access to the first 32 characters of each filename. However, you can easily create a new function based on the RZL_READ_DIR_LOCAL code and change the way the C program's output is read, as the first 187 characters of each filename are actually available.
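For reference, a call along the lines of the signature above might look like this (the directory path is just an example, and the sketch is untested):

```abap
DATA: lt_files TYPE STANDARD TABLE OF salfldir,
      lv_dir   TYPE salfile-longname VALUE '/usr/sap/trans'.

CALL FUNCTION 'RZL_READ_DIR_LOCAL'
  EXPORTING
    name           = lv_dir
  TABLES
    file_tbl       = lt_files
  EXCEPTIONS
    argument_error = 1
    not_found      = 2
    OTHERS         = 3.
```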
A: The answer is to call function module EPS_GET_DIRECTORY_LISTING.
DIR_NAME -> name of the directory
FILE_MASK -> pass '*' to get all files
Note: this does not deal with really long file names (80+ characters); it truncates the name.
A: After reading the answers of Chris Carrthers and tomdemuyt, I would say:
1) Use RZL_READ_DIR_LOCAL if you need a simple list of filenames.
2) EPS_GET_DIRECTORY_LISTING is more powerful - it can also list subdirectories.
Thanks to you both!
With best regards,
Niki Galanov
A: Take a look at the source code of transaction AL11: program RSWATCH0, form fill_file_list.
There you can get all the information about the files.
Hope this helps!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/152919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: In Visual Studio when viewing a changeset, how can I change the view of cs files? In Visual Studio with TFS as source control, when I view the history and double-click a .cs file, the file is loaded in Notepad. How can I change the application to Notepad++?
I also want the OS's default application for the file to remain Visual Studio.
A: After poring over ProcessMonitor logs, I think I found the solution!
You need to change what the Windows shell (Explorer) considers the "Edit" action for text files. I was able to change this key:
HKEY_LOCAL_MACHINE\SOFTWARE\Classes\SystemFileAssociations\text\shell\edit\command
to something other than Notepad (in my case, Notepad2). Now Visual Studio's TFS changeset dialog opens .cs files with that editor.
This will probably change the edit option for not just .cs files, but everything considered "text". The registry entries for file associations are pretty complicated. I suspect it would be possible to disassociate .cs files from this common "text" category and make this change only for .cs files (but I'm not that ambitious). Also, I wouldn't be surprised if people's file associations / shell commands (open, edit, etc.) vary from machine to machine (OS versions, tools installed, etc.), so YMMV.
A: The only way I found is to replace notepad with notepad++. This article describes how to do it. Don't forget to check the comments to get a link to the "little exe" that comes with notepad++.
Works like a charm on W7 x64.
Cheers,
Phil
A: I was able to configure this by adding a new value to the registry.
OS: Windows 7 Enterprise x64
Steps on how to do it:
*
*Run Regedit (Win+R, type regedit)
*Look for HKEY_LOCAL_MACHINE\SOFTWARE\Classes\SystemFileAssociations
*Right click "SystemFileAssociations" -> add a new key, then name it .cs
*Right click .cs and add a new key, then name it shell
*Right click shell and add two new keys, named edit and open
*Right click edit and add a new key command, then change the default value to point to the exe you want it to run.
ex: C:\Program Files (x86)\Notepad++\notepad++.exe %1
Don't forget to add the %1 at the end of the .exe
*Do the same for open
Hope it helps.
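For those who prefer importing a .reg file, the steps above correspond roughly to this hypothetical sketch (the Notepad++ path is an assumption - adjust it to your install):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\SystemFileAssociations\.cs\shell\edit\command]
@="\"C:\\Program Files (x86)\\Notepad++\\notepad++.exe\" \"%1\""

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\SystemFileAssociations\.cs\shell\open\command]
@="\"C:\\Program Files (x86)\\Notepad++\\notepad++.exe\" \"%1\""
```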
A: I don't see any options in Visual Studio for changing that, so I'm guessing it uses the system's default text editor.
Try assigning Notepad++ as the default handler for *.cs files.
You can do this from within Notepad++ by going to Settings/Preferences/File Association.
You can also do it by right-clicking on a .cs file in explorer, go to Open With/Choose Program..., then select Notepad++ and check the "Always use the selected program to open this kind of file" box before hitting OK.
A: The only thing that works for me is when I set the default program for the particular file type in Windows Explorer to open with the VS IDE:
C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe
This opens the code in a new instance of VS. Not ideal, but at least it's easier to read.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/152926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: When is modal UI acceptable? By and large, modal interfaces suck big rocks. On the other hand, I can't think of a better way to handle File Open..., or Print... and this, I think, is because
*
*they are occasional actions, infrequent and momentous, and
*they are atomic in nature; you either finish specifying all your print options and go through with it, or you cancel the whole show.
Let's put together a little style-guide. Suggest any use-cases in which a dialog is the preferred presentation and why it is preferred. Can the dialog be non-modal? If it is, how do you mark transactional boundaries, since Cancel ceases to have a clear meaning. Do you use an Apply button, for example?
A: When doing non-modal windows, you might want to ensure they are unique: you don't really want two identical toolboxes (in a graphical program for example) or two identical preferences dialog (I saw this in a product), which can be confusing at best.
On the other hand, I appreciate when a Search/Replace dialog is non-modal: I can go back to the document and cancel last change, skip elsewhere, etc.; without losing the current settings.
Somehow, modal dialogs tell the user to "stop everything else and finish what you are doing", which has its uses, as pointed out in Stephen Wrighton's answer.
A: In my experience, there are very few things that should ever be modal in a UI. One of the best examples of this, and probably one very familiar to the users of the site, is Eclipse. Although it has some modal dialogs, and I'm speaking only of the core IDE here, they largely fall into three categories: File operations, preference dialogs and parameter dialogs.
Preference dialogs, while modal by tradition, need not be modal either. All you have to do is look at the Mac OS preference model, where configuration changes take place immediately, with modal behaviour introduced only in cases where the change might be disruptive to work in progress.
In short, here's what I would say is a good summary of what should be modal. Exceptions to this set should be well-justified by the usage.
*
*Parameter entry dialogs (example: refactoring wizards. anti-example: find dialogs)
*File operations
*Confirmation of an action that will take immediate disruptive effect
A: IMO, modal interfaces should only be used when you HAVE to deal with whatever the dialog is doing or asking before the application can continue. Any other time, if you're using a dialog, it should be non-modal.
A: How about a user login window? You cannot (or should not) use the rest of an application until you've logged in, assuming security is necessary.
A: I think the distinction is that if there is anything at all that a user might be able to do in the application while the dialog is shown, then it should not be modal. This includes copy/paste actions. Personally I would prefer it if file/open and print dialogs weren't modal either. I think modal dialogs are a sign of weak design, a necessary evil to get code out the door quickly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/152938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: Get notified that MS Excel file is no longer used in Progress 4GL (ABL) There is a GUI ADM2 Progress v9 application using AppServer.
It needs to give users the ability to view MS Excel files stored on the AppServer. So far it:
*
*Pulls .xls file from AppServer to a local drive.
*Fires up a copy of MS Excel and opens the file.
The problem is that the temporary file on the local drive needs to be removed once it's no longer required. Any hints?
A: You can run Excel using the OS-COMMAND statement in Progress and tell it to wait until you're done viewing before coming back to the Progress code. Once you're out of Excel, run OS-DELETE against the file.
A: If you are "firing up a copy of Excel", is there any special reason you can't just point that "fired-up" Excel application at the file on your AppServer? If you are starting Excel from a command-line shell, you could just run Excel.exe "http://myserver/myexcelbook.xls", right?
If you are opening it via something like the Office Interop Assemblies, then you can key off of the Application.WorkbookBeforeClose event, like:
void Application_WorkbookBeforeClose(Workbook wb, ref bool cancel)
{
    DeleteTheFile();
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/152946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Deployment tools ENTERPRISE - what is the best for a Windows environment? What is the "best" tool for creating deployment packages/jobs for all "enterprise level" deployments?
GAC, COM+, credentials, app pools, web sites, registry entries, etc. It would be best if there were a way to "tokenize" the credentials and registry entries so that we can enter the appropriate credentials for the "next" environment. Assume there are 7 environments in the enterprise - predev, dev, integrated test, user test, prod, pfix, training - each needing its own set of credentials.
What are you using to migrate an INTERNAL WEBSITE into your corporate environment (even if external users access it)?
We will need audit logging, reporting, rollback, multiple-server deployment, and the ability to remove a server from the cluster, stop services, deploy "everything", start everything back up, and then place the server back in the cluster.
Is anyone actually USING anything they like where minimal "scripting" is required?
A: PowerShell, actually.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/152957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Can you use Java libraries in a VB.net program? I'm wondering if a Java library can be called from a VB.net application.
(A Google search turns up lots of shady answers, but nothing definitive)
A: No, you can't - unless you are willing to use some "J#" libraries (which are not nearly the same as Java) or IKVM, a Java implementation that runs on top of .NET. But as their documentation says:
IKVM.OpenJDK.ClassLibrary.dll: compiled version of the Java class libraries derived from the OpenJDK class library with some parts filled in with code from GNU Classpath and IcedTea, plus some additional IKVM.NET specific code.
So it's not the real deal.
A: I am the author of jni4net, an open-source intraprocess bridge between the JVM and the CLR. It's built on top of JNI and P/Invoke. No C/C++ code needed. I hope it will help you.
A: You can call Java from .NET if you wrap it in some form to make it accessible. The easiest way is typically to use a runtime bridge like
http://www.jnbridge.com/
Another way is to wrap your API with Java web services.
Check this as well: http://www.devx.com/interop/Article/19945
A: Nothing out of the box.
Most java/.net interop that I know uses web services.
A: If you can create COM components with Java, you can use tlbimp to create an interop assembly for using in VB.Net.
If you can create standard DLLs with Java (the kind that can be used from C++), you can write P/Invoke declarations and call them from VB.Net.
If you can create a web service with Java, you can generate proxy class from the WSDL and call it from VB.Net.
In any case, chances are the Java component will live in a separate process. I doubt you can load both the Java VM and the CLR in the same process.
A: If you have the source code and compile it using the J# compiler, then the answer is yes. If you want to call any pre-Java 2 (aka 1.2) libraries, then these are included pretty much verbatim with J#. More recent stuff is going to be tricky though (i.e., it's not there).
An example where this is used commercially are the yFiles graph layout algorithms from yWorks. These were originally just a Java library, but for the past few years they've been offering a .NET version, which is just the Java version compiled with Visual J#.
It's not without problems, and there are some limitations that you can't get around, but it can be done. So... unfortunately this answer looks pretty shady as well.
A: You could use JNI to instantiate a virtual machine and then use Java Classes. It will be some fun, though, because you would need to use C++ as a bridge between VB.Net and Java.
This article in java world has a quick tutorial on how to use Java from C++ and viceversa.
http://www.javaworld.com/javaworld/javatips/jw-javatip17.html
A: If you have the source, Visual Studio will let you convert Java code into C#.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/152967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How can I convert a number to its multiple form in Perl? Do you know an easy and straight-forward method/sub/module which allows me to convert a number (say 1234567.89) to an easily readable form - something like 1.23M?
Right now I can do this by making several comparisons, but I'm not happy with my method:
if ($bytes > 1000000000) {
    $bytes = sprintf( "%0.2f", $bytes/1000000000 ) . " Gb/s";
}
elsif ($bytes > 1000000) {
    $bytes = sprintf( "%0.2f", $bytes/1000000 ) . " Mb/s";
}
elsif ($bytes > 1000) {
    $bytes = sprintf( "%0.2f", $bytes/1000 ) . " Kb/s";
}
else {
    $bytes = sprintf( "%0.2f", $bytes ) . "b/s";
}
Thank you for your help!
A: Number::Bytes::Human seems to do exactly what you want.
A: The Number::Bytes::Human module should be able to help you out.
An example of how to use it can be found in its synopsis:
use Number::Bytes::Human qw(format_bytes);
$size = format_bytes(0); # '0'
$size = format_bytes(2*1024); # '2.0K'
$size = format_bytes(1_234_890, bs => 1000); # '1.3M'
$size = format_bytes(1E9, bs => 1000); # '1.0G'
# the OO way
$human = Number::Bytes::Human->new(bs => 1000, si => 1);
$size = $human->format(1E7); # '10MB'
$human->set_options(zero => '-');
$size = $human->format(0); # '-'
A: In pure Perl form, I've done this with a nested ternary operator to cut down on verbosity:
sub BytesToReadableString($) {
my $c = shift;
$c >= 1073741824 ? sprintf("%0.2fGB", $c/1073741824)
: $c >= 1048576 ? sprintf("%0.2fMB", $c/1048576)
: $c >= 1024 ? sprintf("%0.2fKB", $c/1024)
: $c . "bytes";
}
print BytesToReadableString(225939) . "/s\n";
Outputs:
220.64KB/s
A: sub magnitudeformat {
my $val = shift;
my $expstr;
my $exp = log($val) / log(10);
if ($exp < 3) { return $val; }
elsif ($exp < 6) { $exp = 3; $expstr = "K"; }
elsif ($exp < 9) { $exp = 6; $expstr = "M"; }
elsif ($exp < 12) { $exp = 9; $expstr = "G"; } # Or "B".
else { $exp = 12; $expstr = "T"; }
return sprintf("%0.1f%s", $val/(10**$exp), $expstr);
}
A: This snippet is in PHP, and it's loosely based on some example someone else had on their website somewhere (sorry buddy, I can't remember).
The basic concept is instead of using if, use a loop.
function formatNumberThousands($a,$dig)
{
$unim = array("","k","m","g");
$c = 0;
while ($a>=1000 && $c<=3) {
$c++;
$a = $a/1000;
}
$d = $dig-ceil(log10($a));
return number_format($a,($c ? $d : 0))."".$unim[$c];
}
The number_format() call is a PHP library function which returns a string with commas between the thousands groups. I'm not sure if something like it exists in perl.
The $dig parameter sets a limit on the number of digits to show. If $dig is 2, it will give you 1.2k from 1237.
To format bytes, just divide by 1024 instead.
This function is in use in some production code to this day.
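The same divide-in-a-loop idea, sketched in JavaScript for anyone porting it (the unit letters and the step of 1000 mirror the PHP version above; the function name is made up):

```javascript
// Loop instead of an if/elsif chain: divide by 1000 until the value fits
function humanize(n, digits = 2) {
  const units = ['', 'K', 'M', 'G', 'T'];
  let i = 0;
  while (n >= 1000 && i < units.length - 1) {
    n /= 1000;
    i++;
  }
  // No suffix means the value was never divided, so show it as-is
  return i === 0 ? String(n) : n.toFixed(digits) + units[i];
}
```

For example, humanize(1234567.89) gives "1.23M". To format bytes, divide by 1024 instead, as the PHP answer notes.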
| {
"language": "en",
"url": "https://stackoverflow.com/questions/152968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How do I detect a click outside an element? I have some HTML menus, which I show completely when a user clicks on the head of these menus. I would like to hide these elements when the user clicks outside the menus' area.
Is something like this possible with jQuery?
$("#menuscontainer").clickOutsideThisElement(function() {
// Hide the menus
});
A: If you are scripting for IE and FF 3.* and you just want to know if the click occurred within a certain box area, you could also use something like:
this.outsideElementClick = function(objEvent, objElement) {
var objCurrentElement = objEvent.target || objEvent.srcElement;
var blnInsideX = false;
var blnInsideY = false;
if (objCurrentElement.getBoundingClientRect().left >= objElement.getBoundingClientRect().left && objCurrentElement.getBoundingClientRect().right <= objElement.getBoundingClientRect().right)
blnInsideX = true;
if (objCurrentElement.getBoundingClientRect().top >= objElement.getBoundingClientRect().top && objCurrentElement.getBoundingClientRect().bottom <= objElement.getBoundingClientRect().bottom)
blnInsideY = true;
if (blnInsideX && blnInsideY)
return false;
else
return true;
}
A: Instead of using flow interruption, blur/focus events, or any other tricky techniques, simply match the event flow with the element's kinship:
$(document).on("click.menu-outside", function(event){
// Test if target and its parents aren't #menuscontainer
// That means the click event occurred on another branch of the document tree
if(!$(event.target).parents().andSelf().is("#menuscontainer")){
// Click outside #menuscontainer
// Hide the menus (but test if menus aren't already hidden)
}
});
});
To remove click outside event listener, simply:
$(document).off("click.menu-outside");
A: If someone is curious, here is a plain JavaScript (ES6) solution:
window.addEventListener('mouseup', e => {
if (e.target != yourDiv && e.target.parentNode != yourDiv) {
yourDiv.classList.remove('show-menu');
//or yourDiv.style.display = 'none';
}
})
and ES5, just in case:
window.addEventListener('mouseup', function (e) {
if (e.target != yourDiv && e.target.parentNode != yourDiv) {
yourDiv.classList.remove('show-menu');
//or yourDiv.style.display = 'none';
}
});
A: Use:
var go = false;
$(document).click(function(){
if(go){
$('#divID').hide();
go = false;
}
})
$("#divID").mouseover(function(){
go = false;
});
$("#divID").mouseout(function (){
go = true;
});
$("#btnID").click( function(){
if($("#divID:visible").length==1)
$("#divID").hide(); // Toggle
else
$("#divID").show();
});
A: Hook a click event listener on the document. Inside the event listener, you can look at the event object, in particular, the event.target to see what element was clicked:
$(document).click(function(e){
if ($(e.target).closest("#menuscontainer").length == 0) {
// .closest can help you determine if the element
// or one of its ancestors is #menuscontainer
console.log("hide");
}
});
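The ancestor test that .closest() (or the DOM's element.contains) performs is just a walk up the parent chain. As an illustration, here it is as a standalone helper that works on anything with a parentNode link (the name and shape are mine, not from any answer):

```javascript
// Walks the parentNode chain from target upward, looking for container
function isInside(target, container) {
  for (let node = target; node; node = node.parentNode) {
    if (node === container) return true;
  }
  return false;
}
```

In a document-level click handler you would hide the menu whenever isInside(e.target, menusContainer) is false.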
A: Here is a simple solution in pure JavaScript. It is up to date with ES6:
var isMenuClick = false;
var menu = document.getElementById('menuscontainer');
document.addEventListener('click',()=>{
if(!isMenuClick){
//Hide the menu here
}
//Reset isMenuClick
isMenuClick = false;
})
menu.addEventListener('click',()=>{
isMenuClick = true;
})
A: I have used the below script, done with jQuery:
jQuery(document).click(function(e) {
    var target = e.target; // clicked element
    if (!jQuery(target).closest('.tobehide').length) {
        jQuery('.tobehide').fadeOut(); // hide when the click lands outside the element
    }
});
Below find the HTML code:
<div class="main-container">
<div> Hello I am the title</div>
<div class="tobehide">I will hide when you click outside of me</div>
</div>
You can read the tutorial here
A: After research, I have found three working solutions
First solution
<script>
//The good thing about this solution is it doesn't stop event propagation.
var clickFlag = 0;
$('body').on('click', function () {
if(clickFlag == 0) {
console.log('hide element here');
/* Hide element here */
}
else {
clickFlag=0;
}
});
$('body').on('click','#testDiv', function (event) {
clickFlag = 1;
console.log('showed the element');
/* Show the element */
});
</script>
Second solution
<script>
$('body').on('click', function(e) {
if($(e.target).closest('#testDiv').length == 0) {
/* Hide dropdown here */
}
});
</script>
Third solution
<script>
var specifiedElement = document.getElementById('testDiv');
document.addEventListener('click', function(event) {
var isClickInside = specifiedElement.contains(event.target);
if (isClickInside) {
console.log('You clicked inside')
}
else {
console.log('You clicked outside')
}
});
</script>
A: $(document).click(function() {
$(".overlay-window").hide();
});
$(".overlay-window").click(function() {
return false;
});
If you click on the document, hide a given element, unless you click on that same element.
A: I did it like this in YUI 3:
// Detect the click anywhere other than the overlay element to close it.
Y.one(document).on('click', function (e) {
if (e.target.ancestor('#overlay') === null && e.target.get('id') != 'show' && overlay.get('visible') == true) {
overlay.hide();
}
});
I am checking that the ancestor is not the widget's container element,
that the target is not the element which opens the widget,
and that the widget/element I want to close is already open (not that important).
A: We implemented a solution, partly based on a comment from a user above, which works perfectly for us. We use it to hide a search box and its results when clicking outside those elements, excluding the element that originally opened them.
// HIDE SEARCH BOX IF CLICKING OUTSIDE
$(document).click(function(event){
// IF NOT CLICKING THE SEARCH BOX OR ITS CONTENTS OR SEARCH ICON
if ($("#search-holder").is(":visible") && !$(event.target).is("#search-holder *, #search")) {
$("#search-holder").fadeOut('fast');
$("#search").removeClass('active');
}
});
It checks if the search box is already visible first also, and in our case, it's also removing an active class on the hide/show search button.
A: Upvote for the most popular answer, but add
&& (e.target != $('html').get(0)) // ignore the scrollbar
so, a click on a scroll bar does not [hide or whatever] your target element.
A: For easier use, and more expressive code, I created a jQuery plugin for this:
$('div.my-element').clickOut(function(target) {
//do something here...
});
Note: target is the element the user actually clicked. But callback is still executed in the context of the original element, so you can utilize this as you'd expect in a jQuery callback.
Plugin:
$.fn.clickOut = function (parent, fn) {
var context = this;
fn = (typeof parent === 'function') ? parent : fn;
parent = (parent instanceof jQuery) ? parent : $(document);
context.each(function () {
var that = this;
parent.on('click', function (e) {
var clicked = $(e.target);
if (!clicked.is(that) && !clicked.parents().is(that)) {
if (typeof fn === 'function') {
fn.call(that, clicked);
}
}
});
});
return context;
};
By default, the click event listener is placed on the document. However, if you want to limit the event listener scope, you can pass in a jQuery object representing a parent level element that will be the top parent at which clicks will be listened to. This prevents unnecessary document level event listeners. Obviously, it won't work unless the parent element supplied is a parent of your initial element.
Use like so:
$('div.my-element').clickOut($('div.my-parent'), function(target) {
//do something here...
});
A: $("#menuscontainer").click(function() {
$(this).focus();
});
$("#menuscontainer").blur(function(){
$(this).hide();
});
Works for me just fine.
A:
How to detect a click outside an element?
The reason that this question is so popular and has so many answers is that it is deceptively complex. After almost eight years and dozens of answers, I am genuinely surprised to see how little care has been given to accessibility.
I would like to hide these elements when the user clicks outside the menus' area.
This is a noble cause and is the actual issue. The title of the question—which is what most answers appear to attempt to address—contains an unfortunate red herring.
Hint: it's the word "click"!
You don't actually want to bind click handlers.
If you're binding click handlers to close the dialog, you've already failed. The reason you've failed is that not everyone triggers click events. Users not using a mouse will be able to escape your dialog (and your pop-up menu is arguably a type of dialog) by pressing Tab, and they then won't be able to read the content behind the dialog without subsequently triggering a click event.
So let's rephrase the question.
How does one close a dialog when a user is finished with it?
This is the goal. Unfortunately, now we need to bind the userisfinishedwiththedialog event, and that binding isn't so straightforward.
So how can we detect that a user has finished using a dialog?
focusout event
A good start is to determine if focus has left the dialog.
Hint: be careful with the blur event, blur doesn't propagate if the event was bound to the bubbling phase!
jQuery's focusout will do just fine. If you can't use jQuery, then you can use blur during the capturing phase:
element.addEventListener('blur', ..., true);
// use capture: ^^^^
Also, for many dialogs you'll need to allow the container to gain focus. Add tabindex="-1" to allow the dialog to receive focus dynamically without otherwise interrupting the tabbing flow.
$('a').on('click', function () {
$(this.hash).toggleClass('active').focus();
});
$('div').on('focusout', function () {
$(this).removeClass('active');
});
div {
display: none;
}
.active {
display: block;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<a href="#example">Example</a>
<div id="example" tabindex="-1">
Lorem ipsum <a href="http://example.com">dolor</a> sit amet.
</div>
If you play with that demo for more than a minute you should quickly start seeing issues.
The first is that the link in the dialog isn't clickable. Attempting to click on it or tab to it will lead to the dialog closing before the interaction takes place. This is because focusing the inner element triggers a focusout event before triggering a focusin event again.
The fix is to queue the state change on the event loop. This can be done by using setImmediate(...), or setTimeout(..., 0) for browsers that don't support setImmediate. Once queued it can be cancelled by a subsequent focusin:
$('.submenu').on({
focusout: function (e) {
$(this).data('submenuTimer', setTimeout(function () {
$(this).removeClass('submenu--active');
}.bind(this), 0));
},
focusin: function (e) {
clearTimeout($(this).data('submenuTimer'));
}
});
$('a').on('click', function () {
$(this.hash).toggleClass('active').focus();
});
$('div').on({
focusout: function () {
$(this).data('timer', setTimeout(function () {
$(this).removeClass('active');
}.bind(this), 0));
},
focusin: function () {
clearTimeout($(this).data('timer'));
}
});
div {
display: none;
}
.active {
display: block;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<a href="#example">Example</a>
<div id="example" tabindex="-1">
Lorem ipsum <a href="http://example.com">dolor</a> sit amet.
</div>
The second issue is that the dialog won't close when the link is pressed again. This is because the dialog loses focus, triggering the close behavior, after which the link click triggers the dialog to reopen.
Similar to the previous issue, the focus state needs to be managed. Given that the state change has already been queued, it's just a matter of handling focus events on the dialog triggers:
This should look familiar
$('a').on({
focusout: function () {
$(this.hash).data('timer', setTimeout(function () {
$(this.hash).removeClass('active');
}.bind(this), 0));
},
focusin: function () {
clearTimeout($(this.hash).data('timer'));
}
});
$('a').on('click', function () {
$(this.hash).toggleClass('active').focus();
});
$('div').on({
focusout: function () {
$(this).data('timer', setTimeout(function () {
$(this).removeClass('active');
}.bind(this), 0));
},
focusin: function () {
clearTimeout($(this).data('timer'));
}
});
$('a').on({
focusout: function () {
$(this.hash).data('timer', setTimeout(function () {
$(this.hash).removeClass('active');
}.bind(this), 0));
},
focusin: function () {
clearTimeout($(this.hash).data('timer'));
}
});
div {
display: none;
}
.active {
display: block;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<a href="#example">Example</a>
<div id="example" tabindex="-1">
Lorem ipsum <a href="http://example.com">dolor</a> sit amet.
</div>
Esc key
If you thought you were done by handling the focus states, there's more you can do to simplify the user experience.
This is often a "nice to have" feature, but it's common that when you have a modal or popup of any sort that the Esc key will close it out.
keydown: function (e) {
if (e.which === 27) {
$(this).removeClass('active');
e.preventDefault();
}
}
$('a').on('click', function () {
$(this.hash).toggleClass('active').focus();
});
$('div').on({
focusout: function () {
$(this).data('timer', setTimeout(function () {
$(this).removeClass('active');
}.bind(this), 0));
},
focusin: function () {
clearTimeout($(this).data('timer'));
},
keydown: function (e) {
if (e.which === 27) {
$(this).removeClass('active');
e.preventDefault();
}
}
});
$('a').on({
focusout: function () {
$(this.hash).data('timer', setTimeout(function () {
$(this.hash).removeClass('active');
}.bind(this), 0));
},
focusin: function () {
clearTimeout($(this.hash).data('timer'));
}
});
div {
display: none;
}
.active {
display: block;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<a href="#example">Example</a>
<div id="example" tabindex="-1">
Lorem ipsum <a href="http://example.com">dolor</a> sit amet.
</div>
If you know you have focusable elements within the dialog, you won't need to focus the dialog directly. If you're building a menu, you could focus the first menu item instead.
click: function (e) {
$(this.hash)
.toggleClass('submenu--active')
.find('a:first')
.focus();
e.preventDefault();
}
$('.menu__link').on({
click: function (e) {
$(this.hash)
.toggleClass('submenu--active')
.find('a:first')
.focus();
e.preventDefault();
},
focusout: function () {
$(this.hash).data('submenuTimer', setTimeout(function () {
$(this.hash).removeClass('submenu--active');
}.bind(this), 0));
},
focusin: function () {
clearTimeout($(this.hash).data('submenuTimer'));
}
});
$('.submenu').on({
focusout: function () {
$(this).data('submenuTimer', setTimeout(function () {
$(this).removeClass('submenu--active');
}.bind(this), 0));
},
focusin: function () {
clearTimeout($(this).data('submenuTimer'));
},
keydown: function (e) {
if (e.which === 27) {
$(this).removeClass('submenu--active');
e.preventDefault();
}
}
});
.menu {
list-style: none;
margin: 0;
padding: 0;
}
.menu:after {
clear: both;
content: '';
display: table;
}
.menu__item {
float: left;
position: relative;
}
.menu__link {
background-color: lightblue;
color: black;
display: block;
padding: 0.5em 1em;
text-decoration: none;
}
.menu__link:hover,
.menu__link:focus {
background-color: black;
color: lightblue;
}
.submenu {
border: 1px solid black;
display: none;
left: 0;
list-style: none;
margin: 0;
padding: 0;
position: absolute;
top: 100%;
}
.submenu--active {
display: block;
}
.submenu__item {
width: 150px;
}
.submenu__link {
background-color: lightblue;
color: black;
display: block;
padding: 0.5em 1em;
text-decoration: none;
}
.submenu__link:hover,
.submenu__link:focus {
background-color: black;
color: lightblue;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<ul class="menu">
<li class="menu__item">
<a class="menu__link" href="#menu-1">Menu 1</a>
<ul class="submenu" id="menu-1" tabindex="-1">
<li class="submenu__item"><a class="submenu__link" href="http://example.com/#1">Example 1</a></li>
<li class="submenu__item"><a class="submenu__link" href="http://example.com/#2">Example 2</a></li>
<li class="submenu__item"><a class="submenu__link" href="http://example.com/#3">Example 3</a></li>
<li class="submenu__item"><a class="submenu__link" href="http://example.com/#4">Example 4</a></li>
</ul>
</li>
<li class="menu__item">
<a class="menu__link" href="#menu-2">Menu 2</a>
<ul class="submenu" id="menu-2" tabindex="-1">
<li class="submenu__item"><a class="submenu__link" href="http://example.com/#1">Example 1</a></li>
<li class="submenu__item"><a class="submenu__link" href="http://example.com/#2">Example 2</a></li>
<li class="submenu__item"><a class="submenu__link" href="http://example.com/#3">Example 3</a></li>
<li class="submenu__item"><a class="submenu__link" href="http://example.com/#4">Example 4</a></li>
</ul>
</li>
</ul>
lorem ipsum <a href="http://example.com/">dolor</a> sit amet.
WAI-ARIA Roles and Other Accessibility Support
This answer hopefully covers the basics of accessible keyboard and mouse support for this feature, but as it's already quite sizable I'm going to avoid any discussion of WAI-ARIA roles and attributes. However, I highly recommend that implementers refer to the spec for details on which roles they should use and any other appropriate attributes.
A: Function:
$(function() {
$.fn.click_inout = function(clickin_handler, clickout_handler) {
var item = this;
var is_me = false;
item.click(function(event) {
clickin_handler(event);
is_me = true;
});
$(document).click(function(event) {
if (is_me) {
is_me = false;
} else {
clickout_handler(event);
}
});
return this;
}
});
Usage:
this.input = $('<input>')
.click_inout(
function(event) { me.ShowTree(event); },
function() { me.Hide(); }
)
.appendTo(this.node);
And functions are very simple:
ShowTree: function(event) {
this.data_span.show();
}
Hide: function() {
this.data_span.hide();
}
A: This should work:
$('body').click(function (event) {
var target = event.target; // the clicked element inside body
if ($(target).attr('id') != "menuscontainer" && $('#menuscontainer').is(':visible')) {
//hide menu
}
});
A: The solutions here work fine when only one element is to be managed. If there are multiple elements, however, the problem is much more complicated. Tricks with e.stopPropagation() and all the others will not work.
I came up with a solution, and maybe it is not so easy, but it's better than nothing. Have a look:
$view.on("click", function(e) {
if(model.isActivated()) return;
var watchUnclick = function() {
rootView.one("mouseleave", function() {
$(document).one("click", function() {
model.deactivate();
});
rootView.one("mouseenter", function() {
watchUnclick();
});
});
};
watchUnclick();
model.activate();
});
A: I ended up doing something like this:
$(document).on('click', 'body, #msg_count_results .close',function() {
$(document).find('#msg_count_results').remove();
});
$(document).on('click','#msg_count_results',function(e) {
e.preventDefault();
return false;
});
I have a close button within the new container for a user-friendly UI. I had to use return false so the click doesn't fall through. Of course, having an A HREF on there to take you somewhere would be nice, or you could call some Ajax stuff instead. Either way, it works fine for me; just what I wanted.
A: I just want to make @Pistos answer more apparent since it's hidden in the comments.
This solution worked perfectly for me. Plain JS:
var elementToToggle = $('.some-element');
$(document).click( function(event) {
if( $(event.target).closest(elementToToggle).length === 0 ) {
elementToToggle.hide();
}
});
in CoffeeScript:
elementToToggle = $('.some-element')
$(document).click (event) ->
if $(event.target).closest(elementToToggle).length == 0
elementToToggle.hide()
A: Let's say the div you want to detect if the user clicked outside or inside has an id, for example: "my-special-widget".
Listen to body click events:
document.body.addEventListener('click', (e) => {
if (isInsideMySpecialWidget(e.target, "my-special-widget")) {
console.log("user clicked INSIDE the widget");
}
console.log("user clicked OUTSIDE the widget");
});
function isInsideMySpecialWidget(elem, mySpecialWidgetId){
while (elem.parentElement) {
if (elem.id === mySpecialWidgetId) {
return true;
}
elem = elem.parentElement;
}
return false;
}
In this case, you won't break the normal flow of click on some element in your page, since you are not using the "stopPropagation" method.
A: Now there is a plugin for that: outside events (blog post)
The following happens when a clickoutside handler (WLOG) is bound to an element:
*
*the element is added to an array which holds all elements with clickoutside handlers
*a (namespaced) click handler is bound to the document (if not already there)
*on any click in the document, the clickoutside event is triggered for those elements in that array that are not equal to or a parent of the click-events target
*additionally, the event.target for the clickoutside event is set to the element the user clicked on (so you even know what the user clicked, not just that he clicked outside)
So no events are stopped from propagation and additional click handlers may be used "above" the element with the outside-handler.
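The bookkeeping the plugin describes can be sketched roughly like this. This is a simplified stand-in for illustration, not the plugin's actual source; `contains` is the standard DOM `Node.contains` method, and the handler registry is a plain array:

```javascript
// Registry of elements that asked for a "clickoutside" notification
var outsideHandlers = []; // entries: { element, handler }

function onClickOutside(element, handler) {
  outsideHandlers.push({ element: element, handler: handler });
}

// One shared dispatcher: fire every handler whose element is neither
// the clicked target nor an ancestor of it. The target is passed along,
// so the handler knows what was clicked, not just that it was outside.
function dispatchClick(target) {
  outsideHandlers.forEach(function (entry) {
    if (entry.element !== target && !entry.element.contains(target)) {
      entry.handler(target);
    }
  });
}

// Browser wiring: a single document-level listener, as the plugin binds
if (typeof document !== 'undefined') {
  document.addEventListener('click', function (e) { dispatchClick(e.target); });
}
```

Note that nothing here calls stopPropagation, which is the plugin's main selling point: other click handlers anywhere in the page keep working.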
A: This worked for me perfectly!!
$('html').click(function (e) {
if (e.target.id == 'YOUR-DIV-ID') {
//do something
} else {
//do something
}
});
A: This is my solution to this problem:
$(document).ready(function() {
$('#user-toggle').click(function(e) {
$('#user-nav').toggle();
e.stopPropagation();
});
$('body').click(function() {
$('#user-nav').hide();
});
$('#user-nav').click(function(e){
e.stopPropagation();
});
});
A: The answer marked as the accepted answer does not take into account that you can have overlays over the element, like dialogs, popovers, datepickers, etc. Clicks in these should not hide the element.
I have made my own version that does take this into account. It's created as a KnockoutJS binding, but it can easily be converted to jQuery-only.
It works by first querying for all visible elements that have either a z-index or absolute positioning. It then hit-tests those elements against the element I want to hide on an outside click. On a hit, I calculate a new bounding rectangle that takes the overlay's bounds into account.
ko.bindingHandlers.clickedIn = (function () {
function getBounds(element) {
var pos = element.offset();
return {
x: pos.left,
x2: pos.left + element.outerWidth(),
y: pos.top,
y2: pos.top + element.outerHeight()
};
}
function hitTest(o, l) {
function getOffset(o) {
for (var r = { l: o.offsetLeft, t: o.offsetTop, r: o.offsetWidth, b: o.offsetHeight };
o = o.offsetParent; r.l += o.offsetLeft, r.t += o.offsetTop);
return r.r += r.l, r.b += r.t, r;
}
for (var b, s, r = [], a = getOffset(o), j = isNaN(l.length), i = (j ? l = [l] : l).length; i;
b = getOffset(l[--i]), (a.l == b.l || (a.l > b.l ? a.l <= b.r : b.l <= a.r))
&& (a.t == b.t || (a.t > b.t ? a.t <= b.b : b.t <= a.b)) && (r[r.length] = l[i]));
return j ? !!r.length : r;
}
return {
init: function (element, valueAccessor) {
var target = valueAccessor();
$(document).click(function (e) {
if (element._clickedInElementShowing === false && target()) {
var $element = $(element);
var bounds = getBounds($element);
var possibleOverlays = $("[style*=z-index],[style*=absolute]").not(":hidden");
$.each(possibleOverlays, function () {
if (hitTest(element, this)) {
var b = getBounds($(this));
bounds.x = Math.min(bounds.x, b.x);
bounds.x2 = Math.max(bounds.x2, b.x2);
bounds.y = Math.min(bounds.y, b.y);
bounds.y2 = Math.max(bounds.y2, b.y2);
}
});
if (e.clientX < bounds.x || e.clientX > bounds.x2 ||
e.clientY < bounds.y || e.clientY > bounds.y2) {
target(false);
}
}
element._clickedInElementShowing = false;
});
$(element).click(function (e) {
e.stopPropagation();
});
},
update: function (element, valueAccessor) {
var showing = ko.utils.unwrapObservable(valueAccessor());
if (showing) {
element._clickedInElementShowing = true;
}
}
};
})();
A: I know there are a million answers to this question, but I've always been a fan of using HTML and CSS to do most of the work. In this case, z-index and positioning. The simplest way that I have found to do this is as follows:
$("#show-trigger").click(function(){
$("#element").animate({width: 'toggle'});
$("#outside-element").show();
});
$("#outside-element").click(function(){
$("#element").hide();
$("#outside-element").hide();
});
#outside-element {
position:fixed;
width:100%;
height:100%;
z-index:1;
display:none;
}
#element {
display:none;
padding:20px;
background-color:#ccc;
width:300px;
z-index:2;
position:relative;
}
#show-trigger {
padding:20px;
background-color:#ccc;
margin:20px auto;
z-index:2;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="outside-element"></div>
<div id="element">
<div class="menu-item"><a href="#1">Menu Item 1</a></div>
<div class="menu-item"><a href="#2">Menu Item 1</a></div>
<div class="menu-item"><a href="#3">Menu Item 1</a></div>
<div class="menu-item"><a href="#4">Menu Item 1</a></div>
</div>
<div id="show-trigger">Show Menu</div>
This creates a safe environment, since nothing is going to get triggered unless the menu is actually open and the z-index protects any of the content within the element from creating any misfires upon being clicked.
Additionally, you're not requiring jQuery to cover all of your bases with propagation calls and having to purge all of the inner elements from misfires.
A: Here is what I do to solve the problem.
$(window).click(function (event) {
//To improve performance, add a check like:
//if(myElement.isClosed) return;
var isClickedElementChildOfMyBox = isChildOfElement(event,'#id-of-my-element');
if (isClickedElementChildOfMyBox)
return;
//your code to hide the element
});
var isChildOfElement = function (event, selector) {
if (event.originalEvent.path) {
return event.originalEvent.path[0].closest(selector) !== null;
}
return event.originalEvent.originalTarget.closest(selector) !== null;
}
A: If you are using tools like "Pop-up", you can use the "onFocusOut" event.
window.onload=function(){
document.getElementById("inside-div").focus();
}
function loseFocus(){
alert("Clicked outside");
}
#container{
background-color:lightblue;
width:200px;
height:200px;
}
#inside-div{
background-color:lightgray;
width:100px;
height:100px;
}
<div id="container">
<input type="text" id="inside-div" onfocusout="loseFocus()">
</div>
A: All of these answers solve the problem, but I would like to contribute a modern ES6 solution that does exactly what is needed. I just hope to make someone happy with this runnable demo.
window.clickOutSide = (element, clickOutside, clickInside) => {
document.addEventListener('click', (event) => {
if (!element.contains(event.target)) {
if (typeof clickOutside === 'function') {
clickOutside();
}
} else {
if (typeof clickInside === 'function') {
clickInside();
}
}
});
};
window.clickOutSide(document.querySelector('.block'), () => alert('clicked outside'), () => alert('clicked inside'));
.block {
width: 400px;
height: 400px;
background-color: red;
}
<div class="block"></div>
A: I don't think what you really need is to close the menu when the user clicks outside; what you need is for the menu to close when the user clicks anywhere at all on the page. Whether you click on the menu or off the menu, it should close, right?
Finding no satisfactory answers above prompted me to write this blog post the other day. For the more pedantic, there are a number of gotchas to take note of:
*
*If you attach a click event handler to the body element at click time be sure to wait for the 2nd click before closing the menu, and unbinding the event. Otherwise the click event that opened the menu will bubble up to the listener that has to close the menu.
*If you use event.stopPropagation() on a click event, no other elements in your page can have a click-anywhere-to-close feature.
*Attaching a click event handler to the body element indefinitely is not a performant solution
*Comparing the target of the event, and its parents to the handler's creator assumes that what you want is to close the menu when you click off it, when what you really want is to close it when you click anywhere on the page.
*Listening for events on the body element will make your code more brittle. Styling as innocent as this would break it: body { margin-left:auto; margin-right: auto; width:960px;}
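The first gotcha (deferring the body binding so the click that opened the menu doesn't immediately close it, then unbinding after use) can be sketched with a self-detaching wrapper. This is an illustration with hypothetical attach/detach hooks, not code from the blog post:

```javascript
// Wrap a handler so it runs at most once and detaches itself first,
// mirroring "unbind the event" after the menu is closed.
function bindOnce(attach, detach, handler) {
  function wrapped() {
    detach(wrapped);
    handler.apply(null, arguments);
  }
  attach(wrapped);
  return wrapped;
}

// Browser wiring: defer the binding with setTimeout(..., 0) so the click
// that opened the menu finishes bubbling before the close handler exists.
if (typeof document !== 'undefined') {
  function openMenu(menu) {
    menu.style.display = 'block';
    setTimeout(function () {
      bindOnce(
        function (fn) { document.addEventListener('click', fn); },
        function (fn) { document.removeEventListener('click', fn); },
        function () { menu.style.display = 'none'; }
      );
    }, 0);
  }
}
```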
A: As another poster said there are a lot of gotchas, especially if the element you are displaying (in this case a menu) has interactive elements.
I've found the following method to be fairly robust:
$('#menuscontainer').click(function(event) {
//your code that shows the menus fully
//now set up an event listener so that clicking anywhere outside will close the menu
$('html').click(function(event) {
//check up the tree of the click target to check whether user has clicked outside of menu
if ($(event.target).parents('#menuscontainer').length==0) {
// your code to hide menu
//this event listener has done its job so we can unbind it.
$(this).unbind(event);
}
})
});
A: A simple solution for the situation is:
$(document).mouseup(function (e)
{
var container = $("YOUR SELECTOR"); // Give you class or ID
if (!container.is(e.target) && // If the target of the click is not the desired div or section
container.has(e.target).length === 0) // ... nor a descendant-child of the container
{
container.hide();
}
});
The above script will hide the div if outside of the div click event is triggered.
A: Check the window click event target (it should propagate to the window, as long as it's not captured anywhere else), and ensure that it's not any of the menu elements. If it's not, then you're outside your menu.
Or check the position of the click, and see if it's contained within the menu area.
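Both checks described in this answer can be sketched as follows. The element id is a placeholder, and the rectangle helper is pure, so it works outside a browser too:

```javascript
// Pure helper: is the point (x, y) inside the rectangle?
function pointInRect(rect, x, y) {
  return x >= rect.left && x <= rect.right &&
         y >= rect.top && y <= rect.bottom;
}

// Browser wiring for both approaches described above
if (typeof document !== 'undefined') {
  document.addEventListener('click', function (e) {
    var menu = document.getElementById('menuscontainer'); // placeholder id
    if (!menu) return;

    // Approach 1: is the target the menu or one of its descendants?
    var insideByAncestry = menu.contains(e.target);

    // Approach 2: does the click position fall inside the menu's box?
    var insideByPosition =
      pointInRect(menu.getBoundingClientRect(), e.clientX, e.clientY);

    if (!insideByAncestry && !insideByPosition) {
      menu.style.display = 'none'; // clicked outside the menu
    }
  });
}
```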
A: I am surprised nobody actually acknowledged the focusout event:
var button = document.getElementById('button');
button.addEventListener('click', function(e){
e.target.style.backgroundColor = 'green';
});
button.addEventListener('focusout', function(e){
e.target.style.backgroundColor = '';
});
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
</head>
<body>
<button id="button">Click</button>
</body>
</html>
A: Solution1
Instead of using event.stopPropagation(), which can have some side effects, just define a simple flag variable and add one if condition. I tested this and it worked properly without any of the side effects of stopPropagation:
var flag = "1";
$('#menucontainer').click(function(event){
flag = "0"; // flag 0 means click happened in the area where we should not do any action
});
$('html').click(function() {
if(flag != "0"){
// Hide the menus if visible
}
else {
flag = "1";
}
});
Solution2
With just a simple if condition:
$(document).on('click', function(event){
var container = $("#menucontainer");
if (!container.is(event.target) && // If the target of the click isn't the container...
container.has(event.target).length === 0) // ... nor a descendant of the container
{
// Do whatever you want to do when click is outside the element
}
});
A: 2020 solution using native JS API closest method.
document.addEventListener('click', ({ target }) => {
if (!target.closest('#menupop')) {
document.querySelector('#menupop').style.display = 'none'
}
})
#menupop {
width: 300px;
height: 300px;
background-color: red;
}
<div id="menupop">
clicking outside will close this
</div>
A: This worked perfectly fine for me:
$('body').click(function() {
// Hide the menus if visible.
});
A: To be honest, I didn't like any of the previous solutions.
The best way to do this, is binding the "click" event to the document, and comparing if that click is really outside the element (just like Art said in his suggestion).
However, you'll have some problems there: You'll never be able to unbind it, and you cannot have an external button to open/close that element.
That's why I wrote this small plugin (click here to link), to simplify these tasks. Could it be simpler?
<a id='theButton' href="#">Toggle the menu</a><br/>
<div id='theMenu'>
I should be toggled when the above menu is clicked,
and hidden when user clicks outside.
</div>
<script>
$('#theButton').click(function(){
$('#theMenu').slideDown();
});
$("#theMenu").dClickOutside({ ignoreList: $("#theButton") }, function(clickedObj){
$(this).slideUp();
});
</script>
A: One more solution is here:
http://jsfiddle.net/zR76D/
Usage:
<div onClick="$('#menu').toggle();$('#menu').clickOutside(function() { $(this).hide(); $(this).clickOutside('disable'); });">Open / Close Menu</div>
<div id="menu" style="display: none; border: 1px solid #000000; background: #660000;">I am a menu, whoa is me.</div>
Plugin:
(function($) {
var clickOutsideElements = [];
var clickListener = false;
$.fn.clickOutside = function(options, ignoreFirstClick) {
var that = this;
if (ignoreFirstClick == null) ignoreFirstClick = true;
if (options != "disable") {
for (var i in clickOutsideElements) {
if (clickOutsideElements[i].element[0] == $(this)[0]) return this;
}
clickOutsideElements.push({ element : this, clickDetected : ignoreFirstClick, fnc : (typeof(options) != "function") ? function() {} : options });
$(this).on("click.clickOutside", function(event) {
for (var i in clickOutsideElements) {
if (clickOutsideElements[i].element[0] == $(this)[0]) {
clickOutsideElements[i].clickDetected = true;
}
}
});
if (!clickListener) {
if (options != null && typeof(options) == "function") {
$('html').click(function() {
for (var i in clickOutsideElements) {
if (!clickOutsideElements[i].clickDetected) {
clickOutsideElements[i].fnc.call(that);
}
if (clickOutsideElements[i] != null) clickOutsideElements[i].clickDetected = false;
}
});
clickListener = true;
}
}
}
else {
$(this).off("click.clickOutside");
for (var i = 0; i < clickOutsideElements.length; ++i) {
if (clickOutsideElements[i].element[0] == $(this)[0]) {
clickOutsideElements.splice(i, 1);
}
}
}
return this;
}
})(jQuery);
A: The broadest way to do this is to select everything on the web page except the element where you don't want clicks detected, and bind the click event to those elements when the menu is opened.
Then when the menu is closed remove the binding.
Use .stopPropagation to prevent the event from affecting any part of the menuscontainer.
$("*").not($("#menuscontainer")).bind("click.OutsideMenus", function ()
{
// hide the menus
//then remove all of the handlers
$("*").unbind(".OutsideMenus");
});
$("#menuscontainer").bind("click.OutsideMenus", function (event)
{
event.stopPropagation();
});
A: For touch devices like iPad and iPhone we can use this code:
$(document).on('touchstart', function (e) {
var container = $("YOUR CONTAINER SELECTOR");
if (!container.is(e.target) && // If the target of the click isn't the container...
container.has(e.target).length === 0) // ... nor a descendant of the container
{
container.hide();
}
});
A: This might be a better fix for some people.
$(".menu_link").click(function(){
// show menu code
});
$(".menu_link").mouseleave(function(){
//hide menu code, you may add a timer for 3 seconds before code to be run
});
I know mouseleave does not only mean a click outside, it also means leaving that element's area.
As long as the menu itself is inside the menu_link element, clicking on or moving over the menu should not be a problem.
A: I believe the best way of doing it is something like this.
$(document).on("click", function(event) {
var clickedtarget = $(event.target).closest('#menuscontainer');
$("#menuscontainer").not(clickedtarget).hide();
});
This type of solution could easily be made to work for multiple menus and also menus that are dynamically added through JavaScript. Basically it just allows you to click anywhere in your document, checks which element you clicked in, and selects its closest "#menuscontainer". Then it hides all menuscontainers except the one you clicked in.
Not sure about exactly how your menus are built, but feel free to copy my code in the JSFiddle. It's a very simple but thoroughly functional menu/modal system. All you need to do is build the html-menus and the code will do the work for you.
https://jsfiddle.net/zs6anrn7/
A: $(document).on("click",function (event)
{
console.log(event);
if ($(event.target).closest('.element').length == 0)
{
//your code here
if ($(".element").hasClass("active"))
{
$(".element").removeClass("active");
}
}
});
Try this code to solve the problem.
A: This works for me
$("body").mouseup(function(e) {
var subject = $(".main-menu");
if(e.target.id != subject.attr('id') && !subject.has(e.target).length) {
$('.sub-menu').hide();
}
});
A: Still looking for that perfect solution for detecting clicking outside? Look no further! Introducing Clickout-Event, a package that provides universal support for clickout and other similar events, and it works in all scenarios: plain HTML onclickout attributes, .addEventListener('clickout') of vanilla JavaScript, .on('clickout') of jQuery, v-on:clickout directives of Vue.js, you name it. As long as a front-end framework internally uses addEventListener to handle events, Clickout-Event works for it. Just add the script tag anywhere in your page, and it simply works like magic.
HTML attribute
<div onclickout="console.log('clickout detected')">...</div>
Vanilla JavaScript
document.getElementById('myId').addEventListener('clickout', myListener);
jQuery
$('#myId').on('clickout', myListener);
Vue.js
<div v-on:clickout="open=false">...</div>
Angular
<div (clickout)="close()">...</div>
A: This is the simplest answer I have found to this question:
window.addEventListener('click', close_window = function (event) {
if (event.target !== windowEl) {
windowEl.style.display = "none";
window.removeEventListener('click', close_window, false);
}
});
And you will see I named the function "close_window" so that I could remove the event listener when the window closes.
A: A way to write in pure JavaScript
let menu = document.getElementById("menu");
document.addEventListener("click", function(){
// Hide the menus
menu.style.display = "none";
}, false);
document.getElementById("menuscontainer").addEventListener("click", function(e){
// Show the menus
menu.style.display = "block";
e.stopPropagation();
}, false);
A:
Note: Using stopPropagation is something that should be avoided as it breaks normal event flow in the DOM. See this CSS Tricks article for more information. Consider using this method instead.
Attach a click event to the document body which closes the window. Attach a separate click event to the container which stops propagation to the document body.
$(window).click(function() {
//Hide the menus if visible
});
$('#menucontainer').click(function(event){
event.stopPropagation();
});
A: I've had success with something like this:
var $menuscontainer = ...;
$('#trigger').click(function() {
$menuscontainer.show();
$('body').click(function(event) {
var $target = $(event.target);
if ($target.parents('#menuscontainer').length == 0) {
$menuscontainer.hide();
}
});
});
The logic is: when #menuscontainer is shown, bind a click handler to the body that hides #menuscontainer only if the target (of the click) isn't a child of it.
A: The other solutions here didn't work for me so I had to use:
if(!$(event.target).is('#foo'))
{
// hide menu
}
Edit: Plain Javascript variant (2021-03-31)
I used this method to handle closing a drop down menu when clicking outside of it.
First, I created a custom class name for all the elements of the component. This class name will be added to all elements that make up the menu widget.
const className = `dropdown-${Date.now()}-${Math.random() * 100}`;
I created a function to check for clicks and the class name of the clicked element. If the clicked element does not contain the custom class name I generated above, it should set the show flag to false and the menu will close.
const onClickOutside = (e) => {
if (!e.target.className.includes(className)) {
show = false;
}
};
Then I attached the click handler to the window object.
// add when widget loads
window.addEventListener("click", onClickOutside);
... and finally some housekeeping
// remove listener when destroying the widget
window.removeEventListener("click", onClickOutside);
A: The event has a property called event.path, a "static ordered list of all its ancestors in tree order". To check if an event originated from a specific DOM element or one of its children, just check the path for that specific DOM element. It can also be used to check multiple elements by logically ORing the element checks in the some function. (Note that event.path is non-standard; the standardized equivalent is event.composedPath().)
$("body").click(function(event) {
  var target = document.getElementById("main");
  var flag = event.path.some(function(el) {
    return (el == target);
  });
if (flag) {
console.log("Inside")
} else {
console.log("Outside")
}
});
#main {
display: inline-block;
background:yellow;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div id="main">
<ul>
<li>Test-Main</li>
<li>Test-Main</li>
<li>Test-Main</li>
<li>Test-Main</li>
<li>Test-Main</li>
</ul>
</div>
<div id="main2">
Outside Main
</div>
So for your case It should be
$("body").click(function(event) {
  var target = $("#menuscontainer")[0];
  var flag = event.path.some(function(el) {
    return (el == target);
  });
if (!flag) {
// Hide the menus
}
});
A: As a variant:
var $menu = $('#menucontainer');
$(document).on('click', function (e) {
// If element is opened and click target is outside it, hide it
if ($menu.is(':visible') && !$menu.is(e.target) && !$menu.has(e.target).length) {
$menu.hide();
}
});
It has no problem with stopped event propagation, and it better supports multiple menus on the same page: with the stopPropagation solution, clicking a second menu while a first is open leaves the first one open.
A: Use focusout for accessibility
There is one answer here that says (quite correctly) that focusing on click events is an accessibility problem since we want to cater for keyboard users. The focusout event is the correct thing to use here, but it can be done much more simply than in the other answer (and in pure JavaScript too):
A simpler way of doing it:
The 'problem' with using focusout is that if an element inside your dialog/modal/menu loses focus to something also 'inside', the event will still get fired. We can check that this isn't the case by looking at event.relatedTarget (which tells us what element will have gained focus).
dialog = document.getElementById("dialogElement")
dialog.addEventListener("focusout", function (event) {
if (
// We are still inside the dialog so don't close
dialog.contains(event.relatedTarget) ||
// We have switched to another tab so probably don't want to close
!document.hasFocus()
) {
return;
}
dialog.close(); // Or whatever logic you want to use to close
});
There is one slight gotcha to the above, which is that relatedTarget may be null. This is fine if the user is clicking outside the dialog, but will be a problem if the user clicks inside the dialog and the dialog happens not to be focusable. To fix this you have to make sure to set tabIndex=0 so your dialog is focusable.
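A minimal sketch of that fix follows; the element id matches the example above, but the helper name is mine and this is only an illustration, not part of the original answer:

```javascript
// Make an element focusable so clicks inside it produce a non-null
// relatedTarget on focusout. tabIndex = 0 puts the dialog in the tab
// order (as the answer suggests); -1 makes it focusable by script only.
function makeFocusable(el, inTabOrder) {
  el.tabIndex = inTabOrder ? 0 : -1;
  return el;
}

if (typeof document !== 'undefined') {
  var dialog = document.getElementById('dialogElement');
  if (dialog) {
    makeFocusable(dialog, true);
    dialog.focus(); // focus the dialog when it opens
  }
}
```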
A: You can listen for a click event on document and then make sure #menucontainer is not an ancestor or the target of the clicked element by using .closest().
If it is not, then the clicked element is outside of the #menucontainer and you can safely hide it.
$(document).click(function(event) {
var $target = $(event.target);
if(!$target.closest('#menucontainer').length &&
$('#menucontainer').is(":visible")) {
$('#menucontainer').hide();
}
});
Edit – 2017-06-23
You can also clean up after the event listener if you plan to dismiss the menu and want to stop listening for events. This function will clean up only the newly created listener, preserving any other click listeners on document. With ES2015 syntax:
export function hideOnClickOutside(selector) {
const outsideClickListener = (event) => {
const $target = $(event.target);
if (!$target.closest(selector).length && $(selector).is(':visible')) {
$(selector).hide();
removeClickListener();
}
}
const removeClickListener = () => {
document.removeEventListener('click', outsideClickListener);
}
document.addEventListener('click', outsideClickListener);
}
Edit – 2018-03-11
For those who don't want to use jQuery. Here's the above code in plain vanillaJS (ECMAScript6).
function hideOnClickOutside(element) {
const outsideClickListener = event => {
if (!element.contains(event.target) && isVisible(element)) { // or use: event.target.closest(selector) === null
element.style.display = 'none';
removeClickListener();
}
}
const removeClickListener = () => {
document.removeEventListener('click', outsideClickListener);
}
document.addEventListener('click', outsideClickListener);
}
const isVisible = elem => !!elem && !!( elem.offsetWidth || elem.offsetHeight || elem.getClientRects().length ); // source (2018-03-11): https://github.com/jquery/jquery/blob/master/src/css/hiddenVisibleSelectors.js
NOTE:
This is based on Alex comment to just use !element.contains(event.target) instead of the jQuery part.
But element.closest() is now also available in all major browsers (the W3C version differs a bit from the jQuery one).
Polyfills can be found here: Element.closest()
Edit – 2020-05-21
In the case where you want the user to be able to click-and-drag inside the element, then release the mouse outside the element, without closing the element:
...
let lastMouseDownX = 0;
let lastMouseDownY = 0;
let lastMouseDownWasOutside = false;
const mouseDownListener = (event) => {
lastMouseDownX = event.offsetX;
lastMouseDownY = event.offsetY;
lastMouseDownWasOutside = !$(event.target).closest(element).length;
}
document.addEventListener('mousedown', mouseDownListener);
And in outsideClickListener:
const outsideClickListener = event => {
const deltaX = event.offsetX - lastMouseDownX;
const deltaY = event.offsetY - lastMouseDownY;
const distSq = (deltaX * deltaX) + (deltaY * deltaY);
const isDrag = distSq > 3;
const isDragException = isDrag && !lastMouseDownWasOutside;
if (!element.contains(event.target) && isVisible(element) && !isDragException) { // or use: event.target.closest(selector) === null
element.style.display = 'none';
removeClickListener();
document.removeEventListener('mousedown', mouseDownListener); // Or add this line to removeClickListener()
}
}
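The drag-versus-click decision in the snippet above reduces to comparing a squared distance against a threshold. Here it is isolated as a pure function (the threshold of 3 matches the snippet and is an arbitrary value you would tune):

```javascript
// True when the pointer moved far enough between mousedown and mouseup
// to count as a drag rather than a click. The distance is kept squared
// to avoid a needless Math.sqrt call.
function isDrag(downX, downY, upX, upY, thresholdSq = 3) {
  const deltaX = upX - downX;
  const deltaY = upY - downY;
  return (deltaX * deltaX + deltaY * deltaY) > thresholdSq;
}
```

A one-pixel wobble (isDrag(10, 10, 10, 11)) still counts as a click, while a ten-pixel move in each axis counts as a drag.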
A: I found this method in some jQuery calendar plugin.
function ClickOutsideCheck(e)
{
var el = e.target;
var popup = $('.popup:visible')[0];
if (popup==undefined)
return true;
while (true){
if (el == popup ) {
return true;
} else if (el == document) {
$(".popup").hide();
return false;
} else {
el = $(el).parent()[0];
}
}
};
$(document).bind('mousedown.popup', ClickOutsideCheck);
A: I have an application that works similarly to Eran's example, except I attach the click event to the body when I open the menu... Kinda like this:
$('#menucontainer').click(function(event) {
$('html').one('click',function() {
// Hide the menus
});
event.stopPropagation();
});
More information on jQuery's one() function
A: Here is the vanilla JavaScript solution for future viewers.
Upon clicking any element within the document: if the clicked element's id is toggle, or if the hidden element is visible and does not contain the clicked element, toggle the element.
(function () {
"use strict";
var hidden = document.getElementById('hidden');
document.addEventListener('click', function (e) {
if (e.target.id == 'toggle' || (hidden.style.display != 'none' && !hidden.contains(e.target))) hidden.style.display = hidden.style.display == 'none' ? 'block' : 'none';
}, false);
})();
(function () {
"use strict";
var hidden = document.getElementById('hidden');
document.addEventListener('click', function (e) {
if (e.target.id == 'toggle' || (hidden.style.display != 'none' && !hidden.contains(e.target))) hidden.style.display = hidden.style.display == 'none' ? 'block' : 'none';
}, false);
})();
<a href="javascript:void(0)" id="toggle">Toggle Hidden Div</a>
<div id="hidden" style="display: none;">This content is normally hidden. click anywhere other than this content to make me disappear</div>
If you are going to have multiple toggles on the same page you can use something like this:
*
*Add the class name hidden to the collapsible item.
*Upon document click, close all collapsible elements which do not contain the clicked element and are not already hidden
*If the clicked element is a toggle, toggle the specified element.
(function () {
"use strict";
var hiddenItems = document.getElementsByClassName('hidden'), hidden;
document.addEventListener('click', function (e) {
for (var i = 0; hidden = hiddenItems[i]; i++) {
if (!hidden.contains(e.target) && hidden.style.display != 'none')
hidden.style.display = 'none';
}
if (e.target.getAttribute('data-toggle')) {
var toggle = document.querySelector(e.target.getAttribute('data-toggle'));
toggle.style.display = toggle.style.display == 'none' ? 'block' : 'none';
}
}, false);
})();
<a href="javascript:void(0)" data-toggle="#hidden1">Toggle Hidden Div</a>
<div class="hidden" id="hidden1" style="display: none;" data-hidden="true">This content is normally hidden</div>
<a href="javascript:void(0)" data-toggle="#hidden2">Toggle Hidden Div</a>
<div class="hidden" id="hidden2" style="display: none;" data-hidden="true">This content is normally hidden</div>
<a href="javascript:void(0)" data-toggle="#hidden3">Toggle Hidden Div</a>
<div class="hidden" id="hidden3" style="display: none;" data-hidden="true">This content is normally hidden</div>
A: It's 2020 and you can use event.composedPath()
From: Event.composedPath()
The composedPath() method of the Event interface returns the event’s path, which is an array of the objects on which listeners will be invoked.
const target = document.querySelector('#myTarget')
document.addEventListener('click', (event) => {
const withinBoundaries = event.composedPath().includes(target)
if (withinBoundaries) {
target.innerText = 'Click happened inside element'
} else {
target.innerText = 'Click happened **OUTSIDE** element'
}
})
/* Just to make it good looking. You don't need this */
#myTarget {
margin: 50px auto;
width: 500px;
height: 500px;
background: gray;
border: 10px solid black;
}
<div id="myTarget">
Click me (or not!)
</div>
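For browsers without composedPath(), the non-shadow-DOM part of it can be approximated by walking parentNode from the target up to the root. A hedged sketch (the real method also crosses shadow boundaries and ends with window; this fallback does neither), written against plain objects so the names are illustrative:

```javascript
// Approximate the light-DOM portion of event.composedPath(): the target
// followed by each of its ancestors.
function composedPathFallback(target) {
  const path = [];
  let node = target;
  while (node) {
    path.push(node);
    node = node.parentNode || null;
  }
  return path;
}

// The inside/outside check is then a membership test, as above.
function clickedInside(target, boundary) {
  return composedPathFallback(target).includes(boundary);
}

// Mock nodes standing in for document > box > button
const mockDoc = { parentNode: null };
const mockBox = { parentNode: mockDoc };
const mockButton = { parentNode: mockBox };
```

With these mocks, clickedInside(mockButton, mockBox) is true, while a click whose target is the document itself falls outside the box.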
A: Here is my code:
// Listen to every click
$('html').click(function(event) {
if ( $('#mypopupmenu').is(':visible') ) {
if (event.target.id != 'click_this_to_show_mypopupmenu') {
$('#mypopupmenu').hide();
}
}
});
// Listen to selector's clicks
$('#click_this_to_show_mypopupmenu').click(function() {
// If the menu is visible, and you clicked the selector again we need to hide
    if ( $('#mypopupmenu').is(':visible') ) {
$('#mypopupmenu').hide();
return true;
}
// Else we need to show the popup menu
$('#mypopupmenu').show();
});
A: jQuery().ready(function(){
$('#nav').click(function (event) {
$(this).addClass('activ');
event.stopPropagation();
});
$('html').click(function () {
if( $('#nav').hasClass('activ') ){
$('#nav').removeClass('activ');
}
});
});
A: Just a warning that using this:
$('html').click(function() {
// Hide the menus if visible
});
$('#menucontainer').click(function(event){
event.stopPropagation();
});
It prevents the Ruby on Rails UJS driver from working properly. For example, link_to 'click', '/url', :method => :delete will not work.
This might be a workaround:
$('html').click(function() {
// Hide the menus if visible
});
$('#menucontainer').click(function(event){
if (!$(event.target).data('method')) {
event.stopPropagation();
}
});
A: This is a more general solution that allows multiple elements to be watched, and dynamically adding and removing elements from the queue.
It holds a global queue (autoCloseQueue) - an object container for elements that should be closed on outside clicks.
Each queue object key should be the DOM Element id, and the value should be an object with 2 callback functions:
{onPress: someCallbackFunction, onOutsidePress: anotherCallbackFunction}
Put this in your document ready callback:
window.autoCloseQueue = {}
$(document).click(function(event) {
for (id in autoCloseQueue){
var element = autoCloseQueue[id];
        if ( ($(event.target).parents('#' + id).length) > 0) { // This is a click on the element (or its child element)
console.log('This is a click on an element (or its child element) with id: ' + id);
if (typeof element.onPress == 'function') element.onPress(event, id);
} else { //This is a click outside the element
console.log('This is a click outside the element with id: ' + id);
if (typeof element.onOutsidePress == 'function') element.onOutsidePress(event, id); //call the outside callback
delete autoCloseQueue[id]; //remove the element from the queue
}
}
});
Then, when the DOM element with id 'menuscontainer' is created, just add this object to the queue:
window.autoCloseQueue['menuscontainer'] = {onOutsidePress: clickOutsideThisElement}
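The dispatch logic of that queue can be separated from jQuery entirely. In this sketch each entry supplies its own contains test (a stand-in for the $(event.target).parents('#' + id) lookup), so the close-on-outside behavior can be followed, and exercised, without a DOM; all names here are illustrative:

```javascript
// Framework-free version of the auto-close dispatch loop above.
const autoCloseQueue = {};

function watch(id, entry) {
  // entry: {contains, onPress, onOutsidePress}
  autoCloseQueue[id] = entry;
}

function dispatchClick(target) {
  for (const id of Object.keys(autoCloseQueue)) {
    const entry = autoCloseQueue[id];
    if (entry.contains(target)) {
      if (typeof entry.onPress === 'function') entry.onPress(target, id);
    } else {
      if (typeof entry.onOutsidePress === 'function') entry.onOutsidePress(target, id);
      delete autoCloseQueue[id]; // element is closed, stop watching it
    }
  }
}

// Demo: a "menu" whose inside-ness is decided by a simple prefix check
const log = [];
watch('menu', {
  contains: target => String(target).indexOf('menu') === 0,
  onPress: target => log.push('press:' + target),
  onOutsidePress: target => log.push('outside:' + target),
});
dispatchClick('menu-item'); // inside: onPress fires, entry stays queued
dispatchClick('body');      // outside: onOutsidePress fires, entry removed
dispatchClick('body');      // queue is empty now, nothing happens
```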
A: Simple plugin:
$.fn.clickOff = function(callback, selfDestroy) {
var clicked = false;
var parent = this;
    var destroy = (selfDestroy === undefined) ? true : selfDestroy; // default to true only when not specified
parent.click(function() {
clicked = true;
});
$(document).click(function(event) {
if (!clicked && parent.is(':visible')) {
if(callback) callback.call(parent, event)
}
if (destroy) {
//parent.clickOff = function() {};
//parent.off("click");
//$(document).off("click");
parent.off("clickOff");
}
clicked = false;
});
};
Use:
$("#myDiv").clickOff(function() {
alert('clickOff');
});
A: $('#propertyType').on("click",function(e){
self.propertyTypeDialog = !self.propertyTypeDialog;
b = true;
e.stopPropagation();
console.log("input clicked");
});
$(document).on('click','body:not(#propertyType)',function (e) {
e.stopPropagation();
if(b == true) {
if ($(e.target).closest("#configuration").length == 0) {
b = false;
self.propertyTypeDialog = false;
console.log("outside clicked");
}
}
// console.log($(e.target).closest("#configuration").length);
});
A:
const button = document.querySelector('button')
const box = document.querySelector('.box');
const toggle = event => {
event.stopPropagation();
if (!event.target.closest('.box')) {
console.log('Click outside');
box.classList.toggle('active');
box.classList.contains('active')
? document.addEventListener('click', toggle)
: document.removeEventListener('click', toggle);
} else {
console.log('Click inside');
}
}
button.addEventListener('click', toggle);
.box {
position: absolute;
display: none;
margin-top: 8px;
padding: 20px;
background: lightgray;
}
.box.active {
display: block;
}
<button>Toggle box</button>
<div class="box">
<form action="">
<input type="text">
<button type="button">Search</button>
</form>
</div>
A: This will toggle the Nav menu when you click on/off the element.
$(document).on('click', function(e) {
var elem = $(e.target).closest('#menu'),
box = $(e.target).closest('#nav');
if (elem.length) {
e.preventDefault();
$('#nav').toggle();
} else if (!box.length) {
$('#nav').hide();
}
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<li id="menu">
<a></a>
</li>
<ul id="nav">
<!--Nav will toggle when you Click on Menu(it can be an icon in this example)-->
<li class="page"><a>Page1</a></li>
<li class="page"><a>Page2</a></li>
<li class="page"><a>Page3</a></li>
<li class="page"><a>Page4</a></li>
</ul>
A: Outside click plugin!
Usage:
$('.target-element').outsideClick(function(event){
//code that fires when user clicks outside the element
//event = the click event
//$(this) = the '.target-element' that is firing this function
}, '.excluded-element')
The code for it:
(function($) {
//when the user hits the escape key, it will trigger all outsideClick functions
$(document).on("keyup", function (e) {
if (e.which == 27) $('body').click(); //escape key
});
//The actual plugin
$.fn.outsideClick = function(callback, exclusions) {
var subject = this;
//test if exclusions have been set
var hasExclusions = typeof exclusions !== 'undefined';
//switches click event with touch event if on a touch device
var ClickOrTouchEvent = "ontouchend" in document ? "touchend" : "click";
$('body').on(ClickOrTouchEvent, function(event) {
//click target does not contain subject as a parent
var clickedOutside = !$(event.target).closest(subject).length;
//click target was on one of the excluded elements
var clickedExclusion = $(event.target).closest(exclusions).length;
var testSuccessful;
if (hasExclusions) {
testSuccessful = clickedOutside && !clickedExclusion;
} else {
testSuccessful = clickedOutside;
}
if(testSuccessful) {
callback.call(subject, event);
}
});
return this;
};
}(jQuery));
Adapted from this answer https://stackoverflow.com/a/3028037/1611058
A: Subscribe to the capturing phase of click to handle clicks on elements that call preventDefault.
Retrigger it on the document element under another name, click-anywhere.
document.addEventListener('click', function (event) {
event = $.event.fix(event);
event.type = 'click-anywhere';
    $(document).trigger(event);
}, true);
Then, where you need click-outside functionality, subscribe to the click-anywhere event on document and check whether the click was outside the element you are interested in:
$(document).on('click-anywhere', function (event) {
if (!$(event.target).closest('#smth').length) {
// Do anything you need here
}
});
Some notes:
*
*You have to use document, as it would be a performance fault to trigger the event on all elements outside of which the click occurred.
*This functionality can be wrapped into a special plugin which calls some callback on an outside click.
*You can't subscribe to the capturing phase using jQuery itself.
*You don't need to wait for document load to subscribe, since the subscription is on document itself (not on its body), so it always exists regardless of script placement and load status.
A: To hide fileTreeClass if clicked outside of it:
jQuery(document).mouseup(function (e) {
var container = $(".fileTreeClass");
if (!container.is(e.target) && // If the target of the click isn't the container...
container.has(e.target).length === 0) // ... nor a descendant of the container
{
container.hide();
}
});
A: If you just want to display a window when you click on a button, and hide this window when you click outside (or on the button again), the below works well:
document.body.onclick = function() { undisp_menu(); };
var menu_on = 0;
function menu_trigger(event){
if (menu_on == 0)
{
// otherwise u will call the undisp on body when
// click on the button
event.stopPropagation();
disp_menu();
}
else{
undisp_menu();
}
}
function disp_menu(){
menu_on = 1;
var e = document.getElementsByClassName("menu")[0];
e.className = "menu on";
}
function undisp_menu(){
menu_on = 0;
var e = document.getElementsByClassName("menu")[0];
e.className = "menu";
}
don't forget this for the button
<div class="button" onclick="menu_trigger(event)">
<div class="menu">
and the css:
.menu{
display: none;
}
.on {
display: inline-block;
}
A: I've read all on 2021, but if not wrong, nobody suggested something easy like this, to unbind and remove event. Using two of the previous answers and a more small tricks, so I put all in one (it could also be added as a parameter to the function to pass selectors, for more popups).
Maybe it is useful for someone to know that the joke. It could also be done this way:
<div id="container" style="display:none"><h1>my menu is nice, but it disappears if I click outside it</h1></div>
<script>
function printPopup() {
$("#container").css({"display": "block"});
var remListener = $(document).mouseup(function (e) {
if ($(e.target).closest("#container").length === 0 &&
(e.target != $('html').get(0)))
{
//alert('closest call');
$("#container").css({"display": "none"});
remListener.unbind('mouseup'); // Isn't it?
}
});
}
printPopup();
</script>
A: You don't need (much) JavaScript, just the :focus-within selector:
*
*Use .sidebar:focus-within to display your sidebar.
*Set tabindex=-1 on your sidebar and body elements to make them focusable.
*Set the sidebar visibility with sidebarEl.focus() and document.body.focus().
const menuButton = document.querySelector('.menu-button');
const sidebar = document.querySelector('.sidebar');
menuButton.onmousedown = ev => {
ev.preventDefault();
(sidebar.contains(document.activeElement) ?
document.body : sidebar).focus();
};
* { box-sizing: border-box; }
.sidebar {
position: fixed;
width: 15em;
left: -15em;
top: 0;
bottom: 0;
transition: left 0.3s ease-in-out;
background-color: #eef;
padding: 3em 1em;
}
.sidebar:focus-within {
left: 0;
}
.sidebar:focus {
outline: 0;
}
.menu-button {
position: fixed;
top: 0;
left: 0;
padding: 1em;
background-color: #eef;
border: 0;
}
body {
max-width: 30em;
margin: 3em;
}
<body tabindex='-1'>
<nav class='sidebar' tabindex='-1'>
Sidebar content
<input type="text"/>
</nav>
<button class="menu-button">☰</button>
Body content goes here, Lorem ipsum sit amet, etc
</body>
A: For those who want a short solution to integrate into their JS code - a small library without jQuery:
Usage:
// demo code
var htmlElem = document.getElementById('my-element')
function doSomething(){ console.log('outside click') }
// use the lib
var removeListener = new elemOutsideClickListener(htmlElem, doSomething);
// deregister on your wished event
$scope.$on('$destroy', removeListener);
Here is the lib:
function elemOutsideClickListener (element, outsideClickFunc, insideClickFunc) {
function onClickOutside (e) {
var targetEl = e.target; // clicked element
do {
// click inside
if (targetEl === element) {
if (insideClickFunc) insideClickFunc();
return;
// Go up the DOM
} else {
targetEl = targetEl.parentNode;
}
} while (targetEl);
// click outside
if (!targetEl && outsideClickFunc) outsideClickFunc();
}
window.addEventListener('click', onClickOutside);
return function () {
window.removeEventListener('click', onClickOutside);
};
}
I took the code from here and put it in a function:
https://www.w3docs.com/snippets/javascript/how-to-detect-a-click-outside-an-element.html
A: <div class="feedbackCont" onblur="hidefeedback();">
<div class="feedbackb" onclick="showfeedback();" ></div>
<div class="feedbackhide" tabindex="1"> </div>
</div>
function hidefeedback(){
$j(".feedbackhide").hide();
}
function showfeedback(){
$j(".feedbackhide").show();
$j(".feedbackCont").attr("tabindex",1).focus();
}
This is the simplest solution I came up with.
A: Try this code:
if ($(event.target).parents().index($('#searchFormEdit')) == -1 &&
$(event.target).parents().index($('.DynarchCalendar-topCont')) == -1 &&
(_x < os.left || _x > (os.left + 570) || _y < os.top || _y > (os.top + 155)) &&
isShowEditForm) {
setVisibleEditForm(false);
}
A: You can set a tabindex on the DOM element. This will trigger a blur event when the user clicks outside the DOM element.
Demo
<div tabindex="1">
Focus me
</div>
document.querySelector("div").onblur = function(){
console.log('clicked outside')
}
document.querySelector("div").onfocus = function(){
console.log('clicked inside')
}
A: Standard HTML:
Surround the menus by a <label> and fetch focus state changes.
http://jsfiddle.net/bK3gL/
Plus: you can unfold the menu by Tab.
A: Using not():
$("#id").not().click(function() {
    alert('Clicked other than #id');
});
A: $("body > div:not(#dvid)").click(function (e) {
//your code
});
A: $("html").click(function(){
if($('#info').css("opacity")>0.9) {
$('#info').fadeOut('fast');
}
});
A: This is a classic case of where a tweak to the HTML would be a better solution. Why not set the click on the elements which don't contain the menu item? Then you don't need to add the propagation.
$('.header, .footer, .main-content').click(function() {
//Hide the menus if visible
});
A: The easiest way: mouseleave(function())
More info: https://www.w3schools.com/jquery/jquery_events.asp
A: $('#menucontainer').click(function(e){
e.stopPropagation();
});
$(document).on('click', function(e){
// code
});
A: As a wrapper to this great answer from Art, and to use the syntax originally requested by OP, here's a jQuery extension that can record wether a click occured outside of a set element.
$.fn.clickOutsideThisElement = function (callback) {
return this.each(function () {
var self = this;
$(document).click(function (e) {
if (!$(e.target).closest(self).length) {
callback.call(self, e)
}
})
});
};
Then you can call like this:
$("#menuscontainer").clickOutsideThisElement(function() {
// handle menu toggle
});
Here's a demo in fiddle
A:
$('html').click(function() {
//Hide the menus if visible
});
$('#menucontainer').click(function(event){
event.stopPropagation();
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<html>
<button id='#menucontainer'>Ok</button>
</html>
A: Have a try of this:
$('html').click(function(e) {
if($(e.target).parents('#menuscontainer').length == 0) {
$('#menuscontainer').hide();
}
});
https://jsfiddle.net/4cj4jxy0/
But note that this cannot work if the click event cannot reach the html tag. (Maybe other elements have stopPropagation()).
A: $(document).on('click.menu.hide', function(e){
if ( !$(e.target).closest('#my_menu').length ) {
$('#my_menu').find('ul').toggleClass('active', false);
}
});
$(document).on('click.menu.show', '#my_menu li', function(e){
$(this).find('ul').toggleClass('active');
});
div {
float: left;
}
ul {
padding: 0;
position: relative;
}
ul li {
padding: 5px 25px 5px 10px;
border: 1px solid silver;
cursor: pointer;
list-style: none;
margin-top: -1px;
white-space: nowrap;
}
ul li ul:before {
margin-right: -20px;
position: absolute;
top: -17px;
right: 0;
content: "\25BC";
}
ul li ul li {
visibility: hidden;
height: 0;
padding-top: 0;
padding-bottom: 0;
border-width: 0 0 1px 0;
}
ul li ul li:last-child {
border: none;
}
ul li ul.active:before {
content: "\25B2";
}
ul li ul.active li {
display: list-item;
visibility: visible;
height: inherit;
padding: 5px 25px 5px 10px;
}
<script src="https://code.jquery.com/jquery-2.1.4.js"></script>
<div>
<ul id="my_menu">
<li>Menu 1
<ul>
<li>subMenu 1</li>
<li>subMenu 2</li>
<li>subMenu 3</li>
<li>subMenu 4</li>
</ul>
</li>
<li>Menu 2
<ul>
<li>subMenu 1</li>
<li>subMenu 2</li>
<li>subMenu 3</li>
<li>subMenu 4</li>
</ul>
</li>
<li>Menu 3</li>
<li>Menu 4</li>
<li>Menu 5</li>
<li>Menu 6</li>
</ul>
</div>
Here is jsbin version http://jsbin.com/xopacadeni/edit?html,css,js,output
A: First you have to track whether the mouse is inside or outside your element1, using the mouseenter and mouseleave events.
Then you can create an element2 which covers the whole screen to detect any clicks, and react accordingly depending on whether you are inside or outside element1.
I strongly recommend handling both initialization and cleanup, and making the element2 as temporary as possible, for obvious reasons.
In the example below, the overlay is an element positioned somewhere, which can be selected by clicking inside, and unselected by clicking outside.
The _init and _release methods are called as part of an automatic initialisation/cleanup process.
The class inherits from a ClickOverlay which has an inner and outerElement, don't worry about it. I used outerElement.parentNode.appendChild to avoid conflicts.
import ClickOverlay from './ClickOverlay.js'
/* CSS */
// .unselect-helper {
// position: fixed; left: -100vw; top: -100vh;
// width: 200vw; height: 200vh;
// }
// .selected {outline: 1px solid black}
export default class ResizeOverlay extends ClickOverlay {
_init(_opts) {
this.enterListener = () => this.onEnter()
this.innerElement.addEventListener('mouseenter', this.enterListener)
this.leaveListener = () => this.onLeave()
this.innerElement.addEventListener('mouseleave', this.leaveListener)
this.selectListener = () => {
if (this.unselectHelper)
return
this.unselectHelper = document.createElement('div')
this.unselectHelper.classList.add('unselect-helper')
this.unselectListener = () => {
if (this.mouseInside)
return
this.clearUnselectHelper()
this.onUnselect()
}
this.unselectHelper.addEventListener('pointerdown'
, this.unselectListener)
this.outerElement.parentNode.appendChild(this.unselectHelper)
this.onSelect()
}
this.innerElement.addEventListener('pointerup', this.selectListener)
}
_release() {
this.innerElement.removeEventListener('mouseenter', this.enterListener)
this.innerElement.removeEventListener('mouseleave', this.leaveListener)
this.innerElement.removeEventListener('pointerup', this.selectListener)
this.clearUnselectHelper()
}
clearUnselectHelper() {
if (!this.unselectHelper)
return
this.unselectHelper.removeEventListener('pointerdown'
, this.unselectListener)
this.unselectHelper.remove()
delete this.unselectListener
delete this.unselectHelper
}
onEnter() {
this.mouseInside = true
}
onLeave() {
delete this.mouseInside
}
onSelect() {
this.innerElement.classList.add('selected')
}
onUnselect() {
this.innerElement.classList.remove('selected')
}
}
A: This works fine for me. I am not an expert.
$(document).click(function(event) {
var $target = $(event.target);
if(!$target.closest('#hamburger, a').length &&
$('#hamburger, a').is(":visible")) {
$('nav').slideToggle();
}
});
A: Here is a solution within a container or the whole document. If the click target is outside your element (with class 'yourClass'), the element is hidden.
$('yourContainer').on('click', function(e) {
if (!$(e.target).hasClass('yourClass')) {
$('.yourClass').hide();
}
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/152975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2873"
} |
Q: What would be a recommended choice of SSIS component to perform SFTP or FTPS task? Sometimes normal FTP doesn't quite cut it... When you need to do secure FTP via SSIS packages, which product would you recommend?
Before answering, please see if someone has already suggested the same thing and, if so, vote it up.
NOTE: Ideally, it needs to handle both SSH and SSL FTP connections, but I'd consider two separate components if it makes the most sense....
A: Not actually an SSIS component, but you may consider checking the guide on how to use the WinSCP open-source SFTP client as a task in SSIS.
A: Secure Blackbox does SSH and is supposed to be good, but no personal experience.
A: A coworker pointed out CozyRoc, but I haven't tried it out yet.
A: I use Rebex.net File Transfer Pack for SFTP and FTP transfers in .NET.
A: our Rebex File Transfer Pack includes both SFTP and FTP/SSL.
I wrote a blogpost about registering the component for use in SSIS package:
http://blog.rebex.net/news/archive/2008/10/03/how-to-register-sftp-and-ftp-ssl-for-use-in-ssis-package.aspx.
A: Have you tried SSIS Task Factory from Pragmatic Works.... Community Edition is FREE
SSIS Custom Tasks and Components
including
SFTP Task (Secure FTP ... FTPS)
Compression/Zip Task
Upsert Destination (Bulk Update or Insert)
and many more....
They have some really kool components
| {
"language": "en",
"url": "https://stackoverflow.com/questions/152981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Do you put your indexes in source control? And how do you keep them in synch between test and production environments?
When it comes to indexes on database tables, my philosophy is that they are an integral part of writing any code that queries the database. You can't introduce new queries or change a query without analyzing the impact to the indexes.
So I do my best to keep my indexes in synch between all of my environments, but to be honest, I'm not doing very well at automating this. It's a sort of haphazard, manual process.
I periodically review index stats and delete unnecessary indexes. I usually do this by creating a delete script that I then copy back to the other environments.
But here and there indexes get created and deleted outside of the normal process and it's really tough to see where the differences are.
I've found one thing that really helps is to go with simple, numeric index names, like
idx_t_01
idx_t_02
where t is a short abbreviation for a table. I find index maintenance impossible when I try to get clever with all the columns involved, like,
idx_c1_c2_c5_c9_c3_c11_5
It's too hard to differentiate indexes like that.
Does anybody have a really good way to integrate index maintenance into source control and the development lifecycle?
A: The full schema for your database should be in source control right beside your code. When I say "full schema" I mean table definitions, queries, stored procedures, indexes, the whole lot.
When doing a fresh installation, then you do:
- check out version X of the product.
- from the "database" directory of your checkout, run the database script(s) to create your database.
- use the codebase from your checkout to interact with the database.
When you're developing, every developer should be working against their own private database instance. When they make schema changes they check in a new set of schema definition files that work against their revised codebase.
With this approach you never have codebase-database sync issues.
A: Yes, any DML or DDL changes are scripted and checked in to source control, mostly thru activerecord migrations in rails. I hate to continually toot rails' horn, but in many years of building DB-based systems I find the migration route to be so much better than any home-grown system I've used or built.
However, I do name all my indexes (don't let the DBMS come up with whatever crazy name it picks). Don't prefix them, that's silly (because you have type metadata in sysobjects, or in whatever db you have), but I do include the table name and columns, e.g. tablename_col1_col2.
That way if I'm browsing sysobjects I can easily see the indexes for a particular table (also it's a force of habit, wayyyy back in the day on some dBMS I used, index names were unique across the whole DB, so the only way to ensure that is to use unique names).
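The tablename_col1_col2 convention is easy to generate mechanically. A small sketch (in JavaScript only for illustration; in practice this would live in your migration tooling, and the 63-character cap is a hypothetical stand-in for your DBMS's identifier limit):

```javascript
// Build a predictable index name from the table and column names,
// truncating to a hypothetical identifier-length limit.
const MAX_IDENTIFIER = 63;

function indexName(table, columns) {
  const name = [table].concat(columns).join('_');
  return name.length <= MAX_IDENTIFIER ? name : name.slice(0, MAX_IDENTIFIER);
}
```

For example, indexName('customer', ['name']) yields customer_name, and multi-column indexes stay readable: indexName('order', ['customer_id', 'placed_at']) yields order_customer_id_placed_at.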
A: Indexes are a part of the database schema and hence should be source controlled along with everything else. Nobody should go around creating indexes on production without going through the normal QA and release process- particularly performance testing.
There have been numerous other threads on schema versioning.
A: I think there are two issues here: the index naming convention, and adding database changes to your source control/lifecycle. I'll tackle the latter issue.
I've been a Java programmer for a long time now, but have recently been introduced to a system that uses Ruby on Rails for database access for part of the system. One thing that I like about RoR is the notion of "migrations". Basically, you have a directory full of files that look like 001_add_foo_table.rb, 002_add_bar_table.rb, 003_add_blah_column_to_foo.rb, etc. These Ruby source files extend a parent class, overriding methods called "up" and "down". The "up" method contains the set of database changes that need to be made to bring the previous version of the database schema to the current version. Similarly, the "down" method reverts the change back to the previous version. When you want to set the schema for a specific version, the Rails migration scripts check the database to see what the current version is, then finds the .rb files that get you from there up (or down) to the desired revision.
To make this part of your development process, you can check these into source control, and season to taste.
There's nothing specific or special about Rails here, just that it's the first time I've seen this technique widely used. You can probably use pairs of SQL DDL files, too, like 001_UP_add_foo_table.sql and 001_DOWN_remove_foo_table.sql. The rest is a small matter of shell scripting, an exercise left to the reader.
A: I do not put my indexes in source control but the creation script of the indexes. ;-)
Index-naming:
*
*IX_CUSTOMER_NAME for the field "name" in the table "customer"
*PK_CUSTOMER_ID for the primary key,
*UI_CUSTOMER_GUID, for the GUID-field of the customer which is unique (therefore the "UI" - unique index).
A: I always source-control SQL (DDL, DML, etc). Its code like any other. Its good practice.
A: I am not sure indexes should be the same across different environments since they have different data sizes. Unless your test and production environments have the same exact data, the indexes would be different.
As to whether they belong in source control, I'm not really sure.
A: On my current project, I have two things in source control - a full dump of an empty database (using pg_dump -c so it has all the ddl to create tables and indexes) and a script that determines what version of the database you have, and applies alters/drops/adds to bring it up to the current version. The former is run when we're installing on a new site, and also when QA is starting a new round of testing, and the latter is run at every upgrade. When you make database changes, you're required to update both of those files.
A: Using a grails app the indexes are stored in source control by default since you are defining the index definition inside of a file that represents your domain object. Just offering the 'Grails' perspective as an FYI.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/152985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Suggestion on remoting (rpc, rmi) for Java SE client-server app? I'm absolutely in love with the way GWT does RPC. Is there anything similar for Java SE you have experience with that:
*
*Is not spring
*Doesn't require a Java EE/Servlet container to run the server side
*Is not RMI that comes with Java SE
A: Apache Mina. Not true rpc but easy to use and surely a option to consider.
A: Maybe Jini. Not Spring- or EJB-related, doesn't run in a container, and you can pick the transport.
A: If you want something like distributed computing you could have a look at the cajo project.
A: POJ (Plain Old JSON).
A: Did you try sfnrpc (http://code.google.com/p/sfnrpc)?
No Spring required.
You can run multiple servers within the same JVM.
You can also have a client connect to multiple servers.
No event-style programming, but it supports asynchronous and synchronous calls over NIO.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/152988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How does ADO.Net Data services support POST being something other than create? From the documentation that I have read so far, ADO.Net Data Services is positioned as a way of exposing a CRUD-like interface to tables in a database in a RESTful way. This is great for applications that only do those four operations, but what about applications that do more?
What about verbs like Print, Approve, Submit, Copy, Transfer, Process, Calculate.
The common approach in RESTful applications is to create new resources that can be interacted with to perform these additional requests. e.g. Create a Printer resource that can be posted to perform a print operation. However, these resources do not necessarily map to tables or views in the database.
How does ADO.Net Data Services handle these resources that do not map to a table? As an example, let's say you were calculating payroll and you created an endpoint /PayrollService/PayCalculator. POSTing an entity which contains some employee information and their gross pay would instruct the PayCalculator resource to return the employee's benefits and deductions.
A: Data Services can expose "any" object graph that you implement IQueryable on, and optionally IUpdateable. The objects don't need to be mapped to the db in any way. This should do what you are looking for. Check out this 15min video http://channel9.msdn.com/posts/mtaulty/ADONET-Data-Services-VS08-Sp1-B1-Surfacing-Data/
You can also implement service operations - methods if you like. Ok - not so RESTful but handy and easy to do. Check out this 7min video http://channel9.msdn.com/posts/mtaulty/ADONET-Data-Services-VS08-Sp1-B1-Service-Operations/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I create an XML file using templating similar to ASP.NET I need to generate an XML file in C#.
I want to write the code that generates this in a file that is mostly XML with code inside of it as I can in an ASP.NET MVC page.
So I want a code file that looks like:
<lots of angle brackets...>
<% foreach(data in myData)
{ %>
< <%= data.somefield %>
<% } %>
More angle brackets>
This would generate my XML file. I would not mind using part of System.Web if someone tells me how I can do it without IIS overhead or kludging a generation of a web file.
I want to use templating and I want templating that is similar to ASP.NET
A: First off, it's MUCH easier to generate XML using XElements. There are many examples floating around. Just search for "Linq to XML."
Alternatively, if you absolutely need to do templating, I'd suggest using a template engine such as NVelocity rather than trying to kludge ASP.NET into doing it for you.
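For reference, the XElement "functional construction" style mentioned above looks roughly like this; the element and attribute names here are invented for illustration:

```csharp
// Functional construction: build the whole XML tree in a single expression.
using System;
using System.Linq;
using System.Xml.Linq;

class Demo
{
    static void Main()
    {
        var myData = new[] { "alpha", "beta" };
        var doc = new XElement("items",
            from d in myData
            select new XElement("item", new XAttribute("name", d)));
        Console.WriteLine(doc); // ToString() emits indented XML
    }
}
```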
A: Further to the above - use the new XML classes delivered with Linq - they make generation of XML in a logical fashion much much easier though you won't get down to something akin to a template in C#.
If you really need something template like then - and I know that this won't necessarily go down well - you should look at doing this part of the system in VB.NET which does have explicit support for template like XML generation as part of its Linq to XML implementation. At the very least you should look at what VB.NET offers before dismissing it.
The nature of .NET means that you don't have to use VB.NET for anything else, you can limit it to the class(es) necessary to do the XML generation and its "generic" in the sense that it comes with .NET and the surrounding logic should be comprehensible to any competent .NET programmer.
A: Create a PageView in a standard ASPX file, but don't include a master or anything else. Just start putting in the angle brackets and everything else. The one thing you will need to do is set the content type. But that can be done in your action by calling Response.ContentType = "text/xml";
A: There is an XSLT view engine in MvcContrib here:
http://mvccontrib.googlecode.com/svn/trunk/src/MvcContrib.XsltViewEngine/
This can probably give you what you need.
(Any of the view engines will work, actually... though the WebForms view engine will complain that what you're writing isn't valid HTML.)
A: The simplest way of doing this from code would be using the XMLWriter class in System.Xml.
Tutorial here.
XmlWriterSettings settings = new XmlWriterSettings();
settings.Indent = true;
settings.IndentChars = (" ");
using (XmlWriter writer = XmlWriter.Create("books.xml", settings))
{
// Write XML data.
writer.WriteStartElement("book");
writer.WriteElementString("price", "19.95");
writer.WriteEndElement();
writer.Flush();
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Insert rows into Access db from C# using Microsoft.Jet.OLEDB.4.0, autonumber column is set to zero I'm using C# and Microsoft.Jet.OLEDB.4.0 provider to insert rows into an Access mdb.
Yes, I know Access sucks. It's a huge legacy app, and everything else works OK.
The table has an autonumber column. I insert the rows, but the autonumber column is set to zero.
I Googled the question and read all the articles I could find on this subject. One suggested inserting -1 for the autonumber column, but this didn't work. None of the other suggestions I could find worked.
I am using OleDbParameter's, not concatenating a big SQL text string.
I've tried the insert with and without a transaction. No difference.
How do I get this insert to work (i.e. set the autonumber column contents correctly)?
Thanks very much in advance,
Adam Leffert
A: In Access it is possible to INSERT an explicit value into an IDENTITY (a.k.a. Autonumber) column. If you (or your middleware) are writing the value zero to the IDENTITY column and there is no unique constraint on the IDENTITY column, then that might explain it.
Just to be clear you should be using the syntax
INSERT INTO (<column list>) ...
and the column list should omit the IDENTITY column. Jet SQL will allow you to omit the entire column list but then implicitly include the IDENTITY column. Therefore you should use the INSERT INTO (<column list>) syntax to explicitly omit the IDENTITY column.
In Access/Jet, you can write explicit values to the IDENTITY column, in which case the value will obviously not be auto-generated. Therefore, ensure both you and your middleware (ADO.NET etc) are not explicitly writing a zero value to the IDENTITY column.
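Concretely, per the advice above, the INSERT should name every column except the Autonumber one. The table and column names below are invented for illustration:

```sql
INSERT INTO Orders (CustomerName, Amount)
VALUES ('ACME', 19.95);
-- No value is supplied for the Autonumber/IDENTITY column, so Jet generates it.
```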
BTW, note that the IDENTITY column in the table below will auto-generate the value zero every second INSERT:
CREATE Table Test1
(
ID INTEGER IDENTITY(0, -2147483648) NOT NULL,
data_col INTEGER
);
A: When doing the insert, you need to be sure that you are NOT specifying a value for the AutoNumber column. Just like in SQL Server you don't insert a value for an identity column.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: SQL Server Management Studio 2005 "Remember Password" doesn't work Folks,
I connect to a large number of SQL Server 2005 databases through SQL Server Management Studio 2005. I frequently check off "Remember password", yet the next time I try to connect it doesn't actually remember it. Have you had this experience? Any workarounds?
A: According to the bug report this is a known issue, still not fixed and there is no workaround.
A: Here is a workaround that I've used. It's lame, but in the absence of a real fix, this works for me!
Add another hostname for the server. For example, if you are trying to get it to remember the password for "coolserver1", create yourself a hostname that resolves to the same server but by a different name, perhaps "mycoolserver" or something like that.
You can do that by adding an entry to your hosts file (c:\windows\system32\drivers\etc\hosts), or by some DNS or other Windows/AD trickery. The point being that you can now address that server by a new "name."
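For example, a hosts-file entry might look like this (the IP address and alias here are made up; use your server's real address):

```
10.1.2.3    mycoolserver
```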
Then, use that new name in the connect dialog, with your name/password, and hit the "remember me" checkbox, and it really will remember you :)
Enjoy!
A: This happens to me also, but it is most notable when I'm connecting to different servers. If you connect to the same server every time then it should remember your password every time.
Maybe one day, SP3 perhaps, Microsoft will release a nice fix for this somewhat continuous annoyance.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I make foreign-key combo boxes user-friendly on an Access form? I've got two tables:
Employees:
uid (number) | first_name (string) | last_name (string) | ...
Projects:
uid | project_title (string) | point_of_contact_id (FK: Employees.uid) | ...
I'd like to create a form for Projects with a "Point of Contact" combo box (dropdown) field. The display values should be "first_name last_name" but the backing data is the UID. How do I set up the form to show one thing to the user and save another thing to the table?
I'd be fine with only being able to show one field (just "first_name" for example), since I can create a view with a full_name field.
Later:
If there is a way to do this at the table design level, I would prefer that, since then I would only have to set a setting per UID column (and there are many tables), rather than one setting per UID field (and there are many forms, each with several UID fields).
A: Set the Row Source of the dropdown box to "select uid, first_name, last_name from tablename" and set the column width of the first column to 0. This way the width of the first column is set to zero, so the user doesn't see it. (You can provide the widths of the other columns by separating them with semicolons, i.e. 0cm;4cm;4cm)
A: To expand on Loesje's answer, you use the Bound Column property along with Column Count and Column Widths when you are displaying multiple fields so that you can tell Access which one should be written to the database. (There are other ways to do this using VBA, but this should work for your particular case.)
In your case, setting Row Source to select uid, first_name, last_name from tablename means that your Bound Column should be 1, for the first column in your Row Source (uid). This is the default value, so you'd only have to change it if you wanted to save a value from a different field. (For example, if you wanted to save last_name from the Row Source above, you'd set Bound Column to 3.)
Don't forget that when you set the widths of the other columns you are displaying, the combo box's Width property should be greater than or equal to the sum of the column widths, otherwise you may not see all the columns.
There isn't a way to indicate at the table level that a form based on that table needs to pull particular columns, or that a particular column is the foreign key, but you can copy a combo box to other forms and it will carry all its properties with it. You can also copy the Row Source query and paste it into other combo boxes if that helps you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: ASP.NET MVC Client Side Validation I am all about using ASP.NET MVC, but one of the areas that I hope gets improved on is Client-Side Validation.
I know the most recent version (Preview 5) has a lot of new features for Validation, but they all seem to be after the page has been posted.
I have seen an interesting article by Steve Sanderson... using Live Validation, Castle.Components.Validator.dll, and a Validator framework he made.
I have used it in my project, but I am hoping something like it will get integrated into the official ASP.NET MVC release. I definitely think the business rules should reside either on the model or in the controller rather than in the View.
Have any of you used a similar approach?
Are you aware of something like this being added to the official ASP.NET MVC release?
A: "Obviously you'll still need to validate your input on the server side for the small percentage of users who disable javascript."
Just an update to this comment. Server-side validation has nothing to do with users that run with JavaScript disabled. Instead, it is needed for security reasons, and to do complex validation that can't be done on the client. A form should always have server-side validation. Client-side validation is only there as a convenience.
A malicious user could easily post data to your form bypassing any client-side validation that you have in place. Never trust input data!
A: I agree with other posters, client side validation is strictly for improving user experience.
Check out the JQuery Validation plugin. It's super easy to get started with basic validation -- literally one line of JS plus adding class names to the input controls. It's also very powerful. You can extend to do whatever you want.
A: LiveValidation is another helpful javascript library that can help out. See an example (with ASP.NET MVC) here:
http://blog.codeville.net/2008/09/08/thoughts-on-validation-in-aspnet-mvc-applications/
A: Have a look at this blog article. It describes how to automatically generate client-side validation rules with xVal and also how to automatically implement remote client-side validation.
A: It looks like this area will see lots of improvement in ASP.NET MVC 2
http://weblogs.asp.net/scottgu/archive/2009/07/31/asp-net-mvc-v2-preview-1-released.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Launch web page from my application Ok, this probably has a really simple answer, but I've never tried to do it before: How do you launch a web page from within an app? You know, "click here to go to our FAQ", and when they do it launches their default web browser and goes to your page. I'm working in C/C++ in Windows, but if there's a broader, more portable way to do it I'd like to know that, too.
A: I believe you want to use the ShellExecute() function, which should respect the user's choice of default browser.
A: Please read the docs for ShellExecute closely. To really bulletproof your code, they recommend initializing COM. See the docs here, and look for the part that says "COM should be initialized as shown here". The short answer is to do this (if you haven't already init'd COM):
CoInitializeEx(NULL, COINIT_APARTMENTTHREADED | COINIT_DISABLE_OLE1DDE)
A: For the record (since you asked for a cross-platform option), the following works well in Linux:
#include <unistd.h>
#include <stdlib.h>
#include <string>
void launch(const std::string &url)
{
    // getenv returns NULL when BROWSER is unset; guard before constructing a string
    const char *env = getenv("BROWSER");
    if(env == NULL || *env == '\0') return;
    std::string browser = env;
    char *args[3];
    args[0] = (char*)browser.c_str();
    args[1] = (char*)url.c_str();
    args[2] = 0;
    pid_t pid = fork();
    if(!pid)
    {
        execvp(browser.c_str(), args);
        _exit(1); // only reached if exec fails
    }
}
Use as:
launch("http://example.com");
A: You can use ShellExecute function.
Sample code:
ShellExecute( NULL, "open", "http://stackoverflow.com", "", ".", SW_SHOWDEFAULT );
A: #include <windows.h>
void main()
{
ShellExecute(NULL, "open", "http://yourwebpage.com",
NULL, NULL, SW_SHOWNORMAL);
}
A: For some reason, ShellExecute does not always work if the application is about to terminate right after calling it. We added Sleep(5000) after ShellExecute and it helped.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How to mock with static methods? I'm new to mock objects, but I understand that I need to have my classes implement interfaces in order to mock them.
The problem I'm having is that in my data access layer, I want to have static methods, but I can't put a static method in an interface.
What's the best way around this? Should I just use instance methods (which seems wrong) or is there another solution?
A: You might be trying to test at too deep a starting point. A test does not need to be created to test each and every method individually; private and static methods should be tested by calling the public methods that then call the private and static ones in turn.
So lets say your code is like this:
public object GetData()
{
object obj1 = GetDataFromWherever();
object obj2 = TransformData(obj1);
return obj2;
}
private static object TransformData(object obj)
{
//Do whatever
}
You do not need to write a test against the TransformData method (and you can't). Instead write a test for the GetData method that tests the work done in TransformData.
A: Use instance methods where possible.
Use public static Func<T, U> (static function references that can be substituted for mock functions) where instance methods are not possible.
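A sketch of that idea (all names invented): the production default lives in a public static delegate, and a test assigns a canned implementation:

```csharp
using System;

public static class Clock
{
    // Production code calls Clock.Now(); a test can replace the delegate.
    public static Func<DateTime> Now = () => DateTime.UtcNow;
}

// In a test fixture:
//   Clock.Now = () => new DateTime(2008, 1, 1);
//   ... exercise code that calls Clock.Now() and assert on the result ...
```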
A: Yes, you use instance methods. Static methods basically say, "There is one way to accomplish this functionality - it's not polymorphic." Mocking relies on polymorphism.
Now, if your static methods logically don't care about what implementation you're using, they might be able to take the interfaces as parameters, or perhaps work without interacting with state at all - but otherwise you should be using instances (and probably dependency injection to wire everything together).
A: I found a blog via google with some great examples on how to do this:
*
*Refactor class to be an instance class and implement an interface.
You have already stated that you don't want to do this.
*Use a wrapper instance class with delegates for static classes members
Doing this you can simulate a static interface via delegates.
*Use a wrapper instance class with protected members which call the static class
This is probably the easiest to mock/manage without refactoring as it can just be inherited from and extended.
A: I would use a method object pattern. Have a static instance of this, and call it in the static method. It should be possible to subclass for testing, depending on your mocking framework.
i.e. in your class with the static method have:
private static final MethodObject methodObject = new MethodObject();
public static void doSomething(){
methodObject.doSomething();
}
and your method object can be a very simple, easily-tested:
public class MethodObject {
public void doSomething() {
// do your thang
}
}
A: A simple solution is to allow the static class's implementation to be changed via a setter:
class ClassWithStatics {
private static IClassWithStaticsImpl implementation = new DefaultClassWithStaticsImpl();
// Should only be invoked for testing purposes
public static void overrideImplementation(IClassWithStaticsImpl implementation) {
ClassWithStatics.implementation = implementation;
}
public static Foo someMethod() {
return implementation.someMethod();
}
}
So in the setup of your tests, you call overrideImplementation with some mocked interface. The benefit is that you don't need to change clients of your static class. The downside is that you will probably have a little duplicated code, because you'll have to repeat the methods of the static class and its implementation. But sometimes the static methods can use a lighter interface which provides base functionality.
A: The problem you have is when you're using 3rd-party code and it's called from one of your methods. What we ended up doing was wrapping it in an object and passing that in with dependency injection; then your unit test can call the setter with a mock that replaces the 3rd-party static call.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "35"
} |
Q: What Happened to ASP.Net Mobile Web Forms? Previously Visual Studio had templates for mobile web forms (not the mobile SDK). They appear to be gone in Visual Studio 2008 and the only solution I've seen is to download some templates from Omar here:
http://blogs.msdn.com/webdevtools/archive/2007/09/17/tip-trick-asp-net-mobile-development-with-visual-studio-2008.aspx
Is this supported anymore and if so is this the supported solution?
A: I thought I'd come back to answer this. The mobile forms controls are still there and the templates provided unofficially above are the only ones available that I've found. I'm not sure why they took them out in Visual Studio 2008.
Without the templates, you mostly just need to change your pages to derive from MobilePage instead of Page and your controls to derive from MobileUserControl instead of UserControl. To access the controls in markup, reference the mobile namespace with a Register directive like this:
<%@ Register TagPrefix="mobile" Namespace="System.Web.UI.MobileControls" Assembly="System.Web.Mobile" %>
and then you will be able to use the mobile controls like this:
mobile:form, mobile:textview ...
This is still the only way I've found to create pages that are compatible with older phones and browsers. Newer phones and browsers, of course, use standard HTML for the most part, and pages can be created the same as any other ASP.Net page.
A: Just saw a blog post about this on MSDN. MS got tired of trying to maintain definitions for every mobile device out there and is really pushing ASP.NET development. As you said earlier they are still supported, but no designer view. So, they are on the way out.
A: From what I can tell, this has not made it in there, supposedly they were to be included in SP1 of VS2008, but they were not.
Since the source of the templates came from blogs.msdn.com, I would guess that yes, it is the current supported method for building mobile targeted forms.
A: What I do is design and test my Mobile Forms in VS 2005, where I can see the design view and so on. After that I just copy and paste the code into VS 2008. I do this for the same reason: there is no template for Mobile Form controls in VS 2008.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Microsoft JET SQL Query Logging or "How do I debug my customer's program?" The problem:
We use a program written by our biggest customer to receive orders, book transports and do other order-related stuff. We have no choice but to use the program, and the customer is very unsupportive when it comes to problems with it. We just have to live with the program.
Now this program is most of the time extremely slow when used by two or more users, so I tried to look behind the curtain and find the source of the problem.
Some things I have found out about the program so far:
*
*It's written in VB 6.0
*It uses a password-protected Access DB (Access 2000 MDB) that is located in a folder on one user's machine.
*That folder is shared over the network and used by all other users.
*It uses the msjet40.dll version 4.00.9704 to communicate with access. I guess it's ADO?
I also used Process Monitor to monitor file access and found out why the program is so slow: it is doing thousands of read operations on the mdb-file, even when the program is idle. Over the network this is of course tremendously slow:
Process Monitor Trace http://img217.imageshack.us/img217/1456/screenshothw5.png
The real question:
Is there any way to monitor the queries that are responsible for the read activity? Is there a trace flag I can set? Hooking the JET DLL's? I guess the program is doing some expensive queries that are causing JET to read lots of data in the process.
PS: I already tried to put the mdb on our company's file server with the success that accessing it was even slower than over the local share. I also tried changing the locking mechanisms (opportunistic locking) on the client with no success.
I want to know what's going on and need some hard facts and suggestions for our customer's developer to help him/her make the program faster.
A: To get your grubby hands on exactly what Access is doing query-wise behind the scenes there's an undocumented feature called JETSHOWPLAN - when switched on in the registry it creates a showplan.out text file. The details are in
this TechRepublic article (alternate link), summarized here:
The ShowPlan option was added to Jet 3.0, and produces a text file
that contains the query's plan. (ShowPlan doesn't support subqueries.)
You must enable it by adding a Debug key to the registry like so:
\\HKEY_LOCAL_MACHINE\SOFTWARE\MICROSOFT\JET\4.0\Engines\Debug
Under the new Debug key, add a string data type named JETSHOWPLAN
(you must use all uppercase letters). Then, add the key value ON to
enable the feature. If Access has been running in the background, you
must close it and relaunch it for the function to work.
When ShowPlan is enabled, Jet creates a text file named SHOWPLAN.OUT
(which might end up in your My Documents folder or the current
default folder, depending on the version of Jet you're using) every
time Jet compiles a query. You can then view this text file for clues
to how Jet is running your queries.
We recommend that you disable this feature by changing the key's value
to OFF unless you're specifically using it. Jet appends the plan to
an existing file and eventually, the process actually slows things
down. Turn on the feature only when you need to review a specific
query plan. Open the database, run the query, and then disable the
feature.
For tracking down nightmare problems it's unbeatable - it's the sort of thing you get on your big expensive industrial databases - this feature is cool - it's lovely and fluffy - it's my friend… ;-)
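For convenience, the registry settings quoted above can be captured in a .reg file (remember to flip the value to OFF when you're done, as the article advises):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Debug]
"JETSHOWPLAN"="ON"
```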
A: Could you not throw a packet sniffer (like Wireshark) on the network and watch the traffic between one user and the host machine?
A: If it uses an ODBC connection you can enable logging for that.
*
*Start ODBC Data Source Administrator.
*Select the Tracing tab
*Select the Start Tracing Now button.
*Select Apply or OK.
*Run the app for awhile.
*Return to ODBC Administrator.
*Select the Tracing tab.
*Select the Stop Tracing Now button.
*The trace can be viewed in the location that you initially specified in the Log file Path box.
A: First question: Do you have a copy of MS Access 2000 or better?
If so:
When you say the MDB is "password protected", do you mean that when you try to open it using MS Access you get a prompt for a password only, or does it prompt you for a user name and password? (Or give you an error message that says, "You do not have the necessary permissions to use the foo.mdb object."?)
If it's the latter, (user-level security), look for a corresponding .MDW file that goes along with the MDB. If you find it, this is the "workgroup information file" that is used as a "key" for opening the MDB. Try making a desktop shortcut with a target like:
"Path to MSACCESS.EXE" "Path To foo.mdb" /wrkgrp "Path to foo.mdw"
MS Access should then prompt you for your user name and password which is (hopefully) the same as what the VB6 app asks you for. This would at least allow you to open the MDB file and look at the table structure to see if there are any obvious design flaws.
Beyond that, as far as I know, Eduardo is correct that you pretty much need to be able to run a debugger on the developer's source code to find out exactly what the real-time queries are doing...
A: It is not possible without the help of the developers. Sorry.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Is there any way to programmatically set the application name in Elmah? I need to change the app name based on what configuration I'm using in Visual Studio. For example, if I'm in Debug configuration, I want the app name to show as 'App_Debug' in the Application field in the Elmah_Error table. Does anyone have any experience with this? Or is there another way to do it?
A: By default, Elmah uses the AppPool's application GUID as the default application name. It uses this as the key to identify the errors in the Elmah_Error table when you look at the web interface that's created through its HTTP Module.
I was tasked to explore this option for my company earlier this year. I couldn't find a way to manipulate this by default since Elmah pulls the application name from HttpRuntime.AppDomainAppId in the ErrorLog.cs file. You could manipulate it by whatever key you want; however, that is the AppPool's GUID.
With that said, I was able to manipulate the ErrorLog.cs file to turn Elmah into a callable framework instead of a handler based one and allow for me set the ApplicationName. What I ended up doing was modifying ErrorLog.cs to include a property that allowed me to set the name as below:
public virtual string ApplicationName
{
get
{
if (_applicationName == null) { _applicationName = HttpRuntime.AppDomainAppId; }
return _applicationName;
}
set { _applicationName = value; }
}
What you will probably need to do is adjust this differently and set the ApplicationName not to HttpRuntime.AppDomainAppId but, instead, to a value pulled from the web.config. All in all, it's possible. The way I did it enhanced the ErrorLog.Log(ex) method so I could use Elmah as a callable framework beyond web applications. Looking back, I wish I had done the app/web.config approach instead.
One thing to keep in mind when changing the application name in Elmah: the HTTP handler that generates the /elmah/default.aspx interface will no longer work. I'm still trying to find time to circle back around to this; however, you may need to look into creating a custom interface when implementing.
A: This can now be done purely in markup. Just add an applicationName attribute to the errorLog element in the <elmah> section of the web.config file. Example:
<errorLog type="Elmah.SqlErrorLog, Elmah"
connectionStringName="connectionString" applicationName="myApp" />
I've tested this and it works both when logging an exception and when viewing the log via Elmah.axd.
In the case of the OP, one would imagine it can be set programatically too but I didn't test that. For me and I imagine for most scenarios the markup approach is sufficient.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Any ideas on how to make edit-in-place degradable? I'm currently writing an edit-in-place script for MooTools and I'm a little stumped as to how I can make it degrade gracefully without JavaScript while still having some functionality. I would like to use progressive enhancement in some way. I'm not looking for code, but more a concept as to how one would approach the situation. If you have any ideas or know of any edit-in-place scripts that degrade gracefully, please share.
A: You can't do edit-in-place at all without JavaScript, so graceful degradation for it consists of making sure that the user can still edit the item in question when JavaScript isn't available.
As such, I'd just have a link to edit the entire item in question and then create the edit-in-place controls in JavaScript on page load, hiding the edit link if you'd rather than users use edit-in-place when available.
A: It sounds like you might be approaching this from the wrong direction. Rather than creating the edit-in-place and getting it to degrade nicely (the Graceful Degradation angle), you should really be creating a non-JavaScript version for editing and then adding the edit-in-place using JavaScript after page load, referred to as Progressive Enhancement.
There are two options for this. Create the display as a form with a submit button that works without JavaScript, then using JavaScript replace the inputs with some kind of label that performs the edit-in-place. You should be able to use a combination of labels and id attributes to pick out the correct properties for your edit-in-place implementation to work. The other option, if you don't want a form to display by default, is to display the values with a button/link for turning it into a form using server-side processing, then adding the edit-in-place to that.
A: If it's textual content, you could show the editable content as a submit button, with the content as its caption. When clicked, it would submit the entire form, preserving the other values, and show an edit dialog. Afterwards the form values could be restored.
A: Maybe put an input in a div under each element that has an edit-in-place. When the page loads, use javascript to hide those divs. That way they'll only be usable if the javascript never fires.
A: I'm assuming what you're trying to do is something like the following scenario:
<li>
<span id="editable">Editable text</span> <a class="edit_button"> </a>
</li>
Where the <a> is a button that replaces the <span> with an <input>, so that it can be edited. There are a couple of ways to make this degrade gracefully (i.e. work without JavaScript).
In theory:
Using CSS, do it with pseudo-selectors. :active is somewhat like an onclick event, so if you nest a hidden <input> in the <li>, this CSS hides the <span> and shows the <input> when the li is clicked on.
li:active #editable {
display:none;
}
li:active input{
display:block;
}
This may work in your favorite browser, but you'll without doubt find that it breaks in IE.
In practice:
Use a link. Have that <a> be an actual link that goes to a page where this input/span substitution has been done server side. You stick an event handler on the <a> that uses MooTools to cancel the click event for people who have JS, like this:
function make_editable(evt) {
var evt = new Event(evt);
evt.preventDefault();
}
A: Try using a styled text input and, after the page loads, make it read-only using the readonly attribute.
When it is clicked, remove the readonly attribute, and on blur make it read-only again.
When the "Save" button is clicked, or in the onblur event, make an Ajax request to save the data on the server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Escape < and > in sed/shell How do I escape the '<' and '>' characters in sed?
I have some XML files in which some text between the tags needs to be replaced. How do I escape the '>' and '<' characters?
The problem with > and < is that they have special meaning in the shell: they redirect output to a file. So a backslash doesn't seem to work.
A: Ok. Found out by myself. Use quotes.
$ sed -i "s/>foo</>bar</g" file
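To make the point concrete, here is a minimal sketch (GNU sed; the tag and text below are made up):

```shell
# It is the shell, not sed, that treats unquoted < and > as redirections.
# Quoting the whole expression keeps them literal:
printf '<tag>foo</tag>\n' | sed 's/>foo</>bar</'      # -> <tag>bar</tag>

# Backslash-escaping each angle bracket at the shell level works too;
# sed receives exactly the same script either way:
printf '<tag>foo</tag>\n' | sed s/\>foo\</\>bar\</    # -> <tag>bar</tag>
```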
A: Escape them with backslash
A: Just put a backslash before them or enclose them in single or double quotes. On second thought, I think your question needs to be clearer. Are you trying to use sed to process an XML file, and you want to replace what's between a pair of tags? Then you want something like this (sed's regexes have no non-greedy .*?, so use negated character classes instead):
sed -re 's@(<TAG[^>]*>)[^<]*(</TAG>)@\1hi\2@' test.xml
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Why isn't bittorrent more widespread? I suppose this question is a variation on a theme, but different.
Torrents will never replace HTTP, or even FTP download options. This said, why aren't there torrent links next to those options on more websites?
I'm imagining a web-system whereby downloaded files are able to be downloaded via HTTP, say from http://example.com/downloads/files/myFile.tar.bz2, torrents can be cheaply autogenerated and stored in /downloads/torrents/myFile.tar.bz2.torrent, and the tracker might be /downloads/tracker/.
Trackers are a well defined problem, and not incredibly difficult to implement, and there are many drop in place alternatives out there already. I imagine it wouldn't be difficult to customise one to do what is needed here.
The autogenerated torrent file can include the normal HTTP server as a permanent seed, the extensions to do this are very well supported by most, if not all, of the major torrent clients and requires no reconfiguration or special things on the server end (it uses stock standard HTTP Range headers).
Personally, if I setup such a system, I would then speed limit the /downloads/files/ directory to something reasonable, say maybe 40-50kb/s, depending on what exactly you were trying to serve.
Does such a file delivery system exist? Would you use it if it did: for your personal, company, or other website?
A: first of all: http://torrent.ubuntu.com/ for torrents of Ubuntu.
second of all: Opera has a built-in torrent client.
third: I agree there is a stigma attached to P2P. So much so that we have sites that need to be called "legaltorrents" and such like, because by default a torrent would be an illegal thing, and let us not kid ourselves, it is.
Getting torrents into the mainstream is an excellent idea. You can't tamper with the files you are seeding, so there is no risk there.
The big reason is not really stigma, though. The big reason is analytics, and their protection. With torrents, these people (companies like Microsoft and such like) would not be able to gather important information about who is doing the downloads (not personally identifiable information, and quickly aggregated away). With torrents, other people would be able to see this information, at least partially. A company would love to seed the torrent of an evaluation version of a competing company's product, just to get an idea of how popular it is and where it is getting downloaded from. It is not as good as hosting the download on your webservers, but it is the next best thing.
This is possibly the reason why the Vista download on Microsoft's sites, or its many service packs and SDKs, are not available as torrents.
Another thing is that people just won't participate, and it is not difficult to figure out why, given the number of hoops you have to jump through: you have to figure out the firewall, the NAT thing, and then the uPnP thing, and then maybe your ISP is throttling your bandwidth, and so on.
Again, I would (and I do) seed to a ratio of 1.5 or beyond for the torrents that I download, but that is because these are Linux, OpenOffice, that sort of thing. I would probably feel funny seeding Adobe Acrobat, or some evaluation version or something, because those guys are making profits and I am not fool enough to save money for them. Let them pay for HTTP downloads.
edit: (based on the comment by monoxide)
For the freeware out there and for SF.net downloads, their problem is that they cannot rely on seeders and will need to fall back on mirrors anyway, so for them torrents add to their expense. One more reason that pops to mind is that even in software shops, Internet access is now thoroughly controlled, and the ports on which torrents rely, plus the upload requirement, are an absolute no-no. Since most people who need these sites and their downloads are in these kinds of offices, they will continue to use HTTP.
BUT even that is not the answer. These people have restrictions on redistribution in their licensing terms. And so their problem is this: if you are seeding their software, you are redistributing it. That is a violation of their licensing terms, so if they host a torrent download and allow you to seed it, that is entrapment and they can be sued (I am not a lawyer, I learn from watching TV). They would then have to delicately change their licensing to allow distribution by seeding torrents but not otherwise. This is an easy enough concept for most of us, but the vagaries of the English language and the dumb hard look on the face of the judge make it a very tricky thing to do. The judge may personally understand torrents, but sitting up there in the court he has to frown and pretend not to, because it is not documented in legalese.
That there is the ditch they have dug, and there they fall into it. Let us laugh at them and their misery. Yesterday's smart is today's stupid.
Cheers!
A: I'm wondering if part of it is the stigma associated with torrents. The only software I see providing torrent links is Linux distros, and not all of them (for example, the Ubuntu website does not provide torrents to download Ubuntu). However, if I said I was going to torrent something, most people would associate it with illegal downloads (music, video, TV shows, etc.).
I think this might come from the top. An engineer might propose using a torrent system to provide downloads, yet management shudders when they hear the word "torrent".
That said, I would indeed use such a system, although I doubt I would be able to seed at home (I found that the bandwidth kills the connection for everyone else in the house). At school, however, I would probably not only use such a system but seed for it as well.
Another problem, as mentioned in the other question, is that torrent software is not built into browsers. Until it is, you won't see widespread use of it.
A: Kontiki (which is very similar to BitTorrent) makes up about 10% of all internet traffic by volume in the UK, and is exclusively used for legal distribution of "big media" content.
A: There are people who won't install a torrent client because they don't want the RIAA sending them extortion letters and running up legal fees in court when they (the RIAA) break into their computers and see MP3 files that are completely legal backup copies of CDs that were legally purchased.
There's a lot of fear about torrents out there and I'm not comfortable with any of the clients that would allow even limited access to my PC because that's the "camel's nose in the tent".
A: The other posters are correct. There is a huge stigma attached to torrent files in general, due to their use by hackers and people who violate copyright law. Look at The Pirate Bay; torrent files are all they "serve". A lot of cable companies in the US have started traffic-shaping torrent traffic on their networks as well, because it is such a bandwidth hog.
Remember that torrents are not a download accelerator. They are meant to offload someone who cannot afford (or maybe just doesn't desire) to pay for all the bandwidth themselves. The users who are seeding take the majority of the load. No one seeding? You get no files.
The torrent protocol is also horrible for being so darn chatty. As much as 40% of your communications on the wire can be control-flow messages and chat between clients asking for pieces. This is why cable companies hate it so much. There are some other problems with the torrent end game (where it asks a lot of people for the final parts in an attempt to complete the torrent, but can sometimes end up with 0 available parts, so you are stuck at 99% and seeding for everyone).
http is also pretty well established and can be traffic shaped for load balancers, etc. So most legit companies that serve up their content can afford to host it, or use someone like Akamai to repeat the data and then load balance.
A: Perhaps it's the ubiquity of HTTP-enabled browsers; you don't see so many FTP download links anymore, so that could be the biggest factor (ease of use for the end user).
Still, I think torrent downloads are a valid alternative, even if they won't be the primary download.
I even suggested Sourceforge auto-generate torrents for downloads, and they agreed it was a good idea... but haven't implemented it (yet). Here's hoping they will.
A: Something like this actually exists at speeddemosarchive.com.
The server hosts a Metroid Prime speedrun and provides a permanent seed for it.
I think that it's a very clever idea.
Contrary to your idea, you don't need an HTTP URL.
A: I think one of the reasons is that (currently) torrent links are not fully supported inside web browsers... you have to fire up the torrent client and so on.
Maybe it is time for a little Firefox extension/plugin? Damn, now I am at work! :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Converting a pointer into an integer I am trying to adapt existing code to a 64-bit machine. The main problem is that in one function, the previous coder uses a void* argument that is converted into a suitable type in the function itself. A short example:
void function(MESSAGE_ID id, void* param)
{
if(id == FOO) {
int real_param = (int)param;
// ...
}
}
Of course, on a 64 bit machine, I get the error:
error: cast from 'void*' to 'int' loses precision
I would like to correct this so that it still works on a 32 bit machine and as cleanly as possible. Any idea ?
A: Several answers have pointed at uintptr_t and #include <stdint.h> as 'the' solution. That is, I suggest, part of the answer, but not the whole answer. You also need to look at where the function is called with the message ID of FOO.
Consider this code and compilation:
$ cat kk.c
#include <stdio.h>
static void function(int n, void *p)
{
unsigned long z = *(unsigned long *)p;
printf("%d - %lu\n", n, z);
}
int main(void)
{
function(1, 2);
return(0);
}
$ rmk kk
gcc -m64 -g -O -std=c99 -pedantic -Wall -Wshadow -Wpointer-arith \
-Wcast-qual -Wstrict-prototypes -Wmissing-prototypes \
-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE kk.c -o kk
kk.c: In function 'main':
kk.c:10: warning: passing argument 2 of 'function' makes pointer from integer without a cast
$
You will observe that there is a problem at the calling location (in main()) — converting an integer to a pointer without a cast. You are going to need to analyze your function() in all its usages to see how values are passed to it. The code inside my function() would work if the calls were written:
unsigned long i = 0x2341;
function(1, &i);
Since yours are probably written differently, you need to review the points where the function is called to ensure that it makes sense to use the value as shown. Don't forget, you may be finding a latent bug.
Also, if you are going to format the value of the void * parameter (as converted), look carefully at the <inttypes.h> header (instead of stdint.h — inttypes.h provides the services of stdint.h, which is unusual, but the C99 standard says [t]he header <inttypes.h> includes the header <stdint.h> and extends it with
additional facilities provided by hosted implementations) and use the PRIxxx macros in your format strings.
Also, my comments are strictly applicable to C rather than C++, but your code is in the subset of C++ that is portable between C and C++. The chances are fair to good that my comments apply.
A: Use intptr_t and uintptr_t.
To ensure it is defined in a portable way, you can use code like this:
#if defined(__BORLANDC__)
typedef unsigned char uint8_t;
typedef __int64 int64_t;
typedef unsigned long uintptr_t;
#elif defined(_MSC_VER)
typedef unsigned char uint8_t;
typedef __int64 int64_t;
#else
#include <stdint.h>
#endif
Just place that in some .h file and include wherever you need it.
Alternatively, you can download Microsoft’s version of the stdint.h file from here or use a portable one from here.
A: 'size_t' and 'ptrdiff_t' are required to match your architecture (whatever it is). Therefore, I think rather than using 'int', you should be able to use 'size_t', which on a 64 bit system should be a 64 bit type.
This discussion unsigned int vs size_t goes into a bit more detail.
A: *
*#include <stdint.h>
*Use uintptr_t standard type defined in the included standard header file.
A: I came across this question while studying the source code of SQLite.
In the sqliteInt.h, there is a paragraph of code that defines macros to convert between integers and pointers. The author made a very good statement, first pointing out that it is a compiler-dependent problem, and then implemented a solution to account for most of the popular compilers out there.
#if defined(__PTRDIFF_TYPE__) /* This case should work for GCC */
# define SQLITE_INT_TO_PTR(X) ((void*)(__PTRDIFF_TYPE__)(X))
# define SQLITE_PTR_TO_INT(X) ((int)(__PTRDIFF_TYPE__)(X))
#elif !defined(__GNUC__) /* Works for compilers other than LLVM */
# define SQLITE_INT_TO_PTR(X) ((void*)&((char*)0)[X])
# define SQLITE_PTR_TO_INT(X) ((int)(((char*)X)-(char*)0))
#elif defined(HAVE_STDINT_H) /* Use this case if we have ANSI headers */
# define SQLITE_INT_TO_PTR(X) ((void*)(intptr_t)(X))
# define SQLITE_PTR_TO_INT(X) ((int)(intptr_t)(X))
#else /* Generates a warning - but it always works */
# define SQLITE_INT_TO_PTR(X) ((void*)(X))
# define SQLITE_PTR_TO_INT(X) ((int)(X))
#endif
And here is a quote of the comment for more details:
/*
** The following macros are used to cast pointers to integers and
** integers to pointers. The way you do this varies from one compiler
** to the next, so we have developed the following set of #if statements
** to generate appropriate macros for a wide range of compilers.
**
** The correct "ANSI" way to do this is to use the intptr_t type.
** Unfortunately, that typedef is not available on all compilers, or
** if it is available, it requires an #include of specific headers
** that vary from one machine to the next.
**
** Ticket #3860: The llvm-gcc-4.2 compiler from Apple chokes on
** the ((void*)&((char*)0)[X]) construct. But MSVC chokes on ((void*)(X)).
** So we have to define the macros in different ways depending on the
** compiler.
*/
Credit goes to the committers.
A: The best thing to do is to avoid converting from pointer type to non-pointer types.
However, this is clearly not possible in your case.
As everyone said, the uintptr_t is what you should use.
This link has good info about converting to 64-bit code.
There is also a good discussion of this on comp.std.c
A: Use uintptr_t as your integer type.
A: I'd say this is the modern C++ way:
#include <cstdint>
void *p;
auto i = reinterpret_cast<std::uintptr_t>(p);
EDIT:
The correct type to store the integer
So the right way to store a pointer as an integer is to use the uintptr_t or intptr_t types. (See also in cppreference integer types for C99).
These types are defined in <stdint.h> for C99 and in the namespace std for C++11 in <cstdint> (see integer types for C++).
C++11 (and onwards) Version
#include <cstdint>
std::uintptr_t i;
C++03 Version
extern "C" {
#include <stdint.h>
}
uintptr_t i;
C99 Version
#include <stdint.h>
uintptr_t i;
The correct casting operator
In C there is only one cast and using the C cast in C++ is frowned upon (so don't use it in C++). In C++ there are different types of casts, but reinterpret_cast is the correct cast for this conversion (see also here).
C++11 Version
auto i = reinterpret_cast<std::uintptr_t>(p);
C++03 Version
uintptr_t i = reinterpret_cast<uintptr_t>(p);
C Version
uintptr_t i = (uintptr_t)p; // C Version
Related Questions
*
*What is uintptr_t data type
A: I think the "meaning" of void* in this case is a generic handle.
It is not a pointer to a value, it is the value itself.
(This just happens to be how void* is used by C and C++ programmers.)
If it is holding an integer value, it had better be within integer range!
Here is easy rendering to integer:
int x = (char*)p - (char*)0;
It should only give a warning.
A: Since uintptr_t is not guaranteed to be there in C++/C++11, if this is a one way conversion you can consider uintmax_t, always defined in <cstdint>.
auto real_param = reinterpret_cast<uintmax_t>(param);
To play safe, one could add anywhere in the code an assertion:
static_assert(sizeof (uintmax_t) >= sizeof (void *) ,
"No suitable integer type for conversion from pointer type");
A: With C++11, for what it's worth, suppose you don't have any headers; then define:
template<bool B, class T, class F> struct cond { typedef T type; };
template<class T, class F> struct cond<false, T, F> { typedef F type;};
static constexpr unsigned int PS = sizeof (void *);
using uintptr_type = typename cond<
PS==sizeof(unsigned short), unsigned short ,
typename cond<
PS==sizeof(unsigned int), unsigned int,
typename cond<
PS==sizeof(unsigned long), unsigned long, unsigned long long>::type>::type>::type;
After that you can do the following:
static uintptr_type ptr_to_int(const void *pointer) {
return reinterpret_cast<uintptr_type>(pointer);
}
static void *int_to_ptr(uintptr_type integer) {
return reinterpret_cast<void *>(integer);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "107"
} |
Q: Tool to visualise code flow (C/C++) Do you have any suggestions for tools to ease the task of understanding C/C++ code? We just inherited a large piece of software written by others, and we need to get up to speed on it quickly. Any advice on tools that might simplify this task?
A: Doxygen is very good at generating diagrams from code without applying markup, if you turn on the EXTRACT_ALL option. You need GraphViz installed to get diagrams generated with the HAVE_DOT setting. I find that having it installed and leaving DOT_PATH blank works fine on Windows, but on OS X I keep having to point directly to the dot tool location.
There's an excellent Code Spelunking article in ACM Queue which talks more about using doxygen and DTrace.
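For reference, a minimal Doxyfile fragment for getting diagrams out of an undocumented tree (these are standard doxygen option names; the INPUT path is a placeholder):

```
EXTRACT_ALL    = YES   # document everything, even without markup
HAVE_DOT       = YES   # requires GraphViz
CALL_GRAPH     = YES   # per-function call graphs
CALLER_GRAPH   = YES   # ...and caller graphs
INPUT          = src/  # placeholder source tree
RECURSIVE      = YES
```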
A: Personally, I use the debugger. Stepping through the code and seeing what it's doing and where it's going is the only way.
However, you can run it through some documentation-generators which (sometimes) help. A good example is doxygen.
A: KScope, built upon the cscope utility, if you're on linux (KDE).
The best thing I ever used (and use all the time) to delve into some huge piece of unfamiliar code that I have to modify somehow, or whose API I'm to employ for my needs.
Among its features is the cross-reference database, which can be searched in plenty of ways: you can find all references to a symbol, its global definition, find callers/callees of a function, and much more.
It even has a built-in IDE and an ability to show a call-graph.
A: Doxygen will give you class relationship diagrams if you use graphviz, even if you haven't specifically prepared for it.
A: There are some tools like Egypt http://www.gson.org/egypt/egypt.html that work, but only if you match the expected GCC version used to compile the code and the exact version of the callgraph generator. The same can be said about codeviz http://www.csn.ul.ie/~mel/projects/codeviz/
Other option is valgrind used in cachegrind mode (it generates a type of callgraph that you can follow from kcachegrind program.
A: SourceInsight and Understand for C++ are the best tools you can get for c/c++ code analysis including flow charts.
A: Profiling software gives you an idea of which functions have been called.
If you can use Linux, try KCachegrind
A: I personally use the Visual Studio debugger tools.
It comes with the "Caller Graph" feature, which will allow you to visualize stuff in little boxes. Also, the Call Stack and the usual watch features are usually all I need.
A: There's also AspectBrowser, which doesn't work very well with Eclipse 3.4.0.
A: Try AQtime. It's a profiling tool that displays all the functions that got called (and the time they took), and you can set the start and end points for the analysis. They have a 30-day trial.
A: I used Borland Together a while back, and it did a decent job of generating models from code. I believe it will even generate sequence diagrams from code. Keep in mind that if your code is a mess, your model will be too. And as I recall it isn't cheap, but sometimes you can catch a special.
A: Rational Quantify also presents a nice call graph.
A: I tried a tool named Visustin, which is not very great graphically but does what it says: flowcharts from code.
http://www.aivosto.com/visustin.html
A: doxygen is a free doc-generating tool (similar to Javadoc) that will also produce relationship graphs as well.
A: Doxygen. The good thing about it is that it will let you know how ugly/good your code is in terms of cyclic dependencies between classes. So you will be forced to refactor your code, though you may not like it :-)
A: Slickedit is great for navigating large blocks of code you don't know. The tags feature allows you to deal with the code on a functional basis, without having to deal with which file it is in. (Emacs actually has tags and is every bit as good as Slickedit, but with a slightly steeper learning curve.)
When you get to a method or class or variable you don't understand, you just push-tag to go to that code, look it over, then pop-tag back. (those are bound to keystrokes, so it is very quick)
You can also use find-references to see where that function/variable is used.
Saves tons of time over having to go and figure out which file something is in.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "85"
} |
Q: SQL Server 2008 Reporting Services on Failover Cluster When I try to install the Reporting Services on a second node of a failover cluster, I get the following error message:
Existing clustered or cluster-prepared instance failed. The instance selected for installation is already installed and clustered on node 2.
But we never installed it before. Does anyone have any ideas? This is on Windows Server 2008, 64-bit.
A: Just one thing, though. You have to install Reporting Services using the Enterprise Edition media, and to avoid doing all kinds of nasty stuff to the licensing, the failover cluster also has to be Enterprise Edition.
If you try to do this using the SQL Server Standard media you will not be able to install SSRS to a "Shared Database" and the encryption information in the database created by instance #1 will be mercilessly blasted into oblivion.
This will result in instance 1 of SSRS not working while instance 2 (on the second server) is working. Trying to fix this will only flip the situation.
Personally, I find this to be quite the bummer from Microsoft, since it practically means that you have to weigh the cost of Enterprise Edition against HA-enabling Reporting Services.
So basically, you have to acquire Enterprise Edition if you do not want your Reporting Services to be the Single Point of Failure.
A: Apparently, SQL Server can be installed on each node on a failover cluster. The caveat is that it must be two separate installations with two different instance names. You can, however, share the same report server database. Please read this article for more information on deploying Reporting Services.
A: http://msdn.microsoft.com/en-us/library/ms159114.aspx
Consider this article on the SSRS scale-out process. Install the shared report server database on the cluster, then install Reporting Services to use the shared database. Reporting Services will not fail over, but the database will.
This is an option to failover IIS: http://support.microsoft.com/kb/970759
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How can I read MS Office files in a server without installing MS Office and without using the Interop Library? The interop library is slow and needs MS Office installed.
Many times you don't want to install MS Office on servers.
I'd like to use Apache POI, but I'm on .NET.
I need only to extract the text portion of the files, not creating nor "storing information" in Office files.
I need to tell you that I've got a very large document library, and I can't convert it to newer XML files.
I don't want to write a parser for the binary files.
A library like Apache POI does this for us. Unfortunately, it is only for the Java platform. Maybe I should consider writing this application in Java.
I am still not finding an open source alternative to POI in .NET, so I think I'll write my own application in Java.
A: For all MS Office versions:
*
*You could use the third-party components like TX Text Controls for Word and TMS Flexcel Studio for Excel
For the new Office (2007):
*
*You could do some basic stuff using .net functionality from system.io.packaging. See how at http://msdn.microsoft.com/en-us/library/bb332058.aspx
For the old Office (before 2007):
*
*The old Office formats are now documented: http://www.microsoft.com/interop/docs/officebinaryformats.mspx. If you want to do something really easy you might consider trying it. But be aware that these formats are VERY complex.
A: As the new docx formats are inherently XML based files, you can create and manipulate them programmatically with standard XML DOM techniques, once you know the structure.
The files are basically zip archives with an alternate file extension. Use the System.IO.Packaging namespace to get access to the internal elements of the file, then open them into a XmlDocument to perform the manipulation.
There are examples available for doing this, and the Office Open XML project on SourceForge may be worth looking at for inspiration.
As for the older binary formats, these were proprietary to MS, and the only way you're likely to get at the content from within is through the Office object model (requires an Office install), or a third party file converter/parser.
Unfortunately there's nothing first party and native to the .NET platform to work with these files.
A: Check out the Aspose components. They are designed to mimic the Interop functionality without requiring a full Office install on a server.
A: What do you need to do with those files? If you just want to stream them to the user, then the basic file streams are fine. If you want to create new files (perhaps based on a template) to send to the user that the user can open in Office, there are a variety of work-arounds.
If you're actually keeping data in Office documents for use by your web site, you're doing it wrong. Office documents, even Excel spreadsheets and access databases, are not really an appropriate choice for use with an interactive web site.
A: If the document is in Word 2007 format, you can use the System.IO.Packaging library to interact with it programmatically.
RWendi
A: In the Java world, there is also JExcelApi. It is very clearly written and, from what I was able to see, much cleaner than POI. So maybe even a port of that code to .NET is not out of the question, depending of course on whether you have enough time on your hands.
A: OpenOffice.
You can program against it and have it do a lot for you, without spending the money on a license for the server or having the vulnerabilities associated with Office on your server.
A: Microsoft Excel workbooks can be read using an ODBC driver (or is it an OLE DB driver? can't remember) that makes the workbook look like a database table. But I don't know whether that driver is available without the Office Suite itself.
A: You can use OpenOffice. It has a command-line conversion tool:
Conversion Howto
In short, you define a macro in OpenOffice and you call that macro with a command-line
argument to OpenOffice. In that argument the name of the local file (the Office file) is
encoded.
It's not a great solution, but it should be workable.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Getting / setting file owner in C# I have a requirement to read and display the owner of a file (for audit purposes), and potentially to change it as well (this is a secondary requirement). Are there any nice C# wrappers?
After a quick Google search, I found only the WMI solution and a suggestion to P/Invoke GetSecurityInfo.
A: No need to P/Invoke. System.IO.File.GetAccessControl will return a FileSecurity object, which has a GetOwner method.
Edit: Reading the owner is pretty simple, though it's a bit of a cumbersome API:
const string FILE = @"C:\test.txt";
var fs = File.GetAccessControl(FILE);
var sid = fs.GetOwner(typeof(SecurityIdentifier));
Console.WriteLine(sid); // SID
var ntAccount = sid.Translate(typeof(NTAccount));
Console.WriteLine(ntAccount); // DOMAIN\username
Setting the owner requires a call to SetAccessControl to save the changes. Also, you're still bound by the Windows rules of ownership - you can't assign ownership to another account. You can grant Take Ownership permission, and the other account then has to take ownership itself.
var ntAccount = new NTAccount("DOMAIN", "username");
fs.SetOwner(ntAccount);
try {
File.SetAccessControl(FILE, fs);
} catch (InvalidOperationException ex) {
Console.WriteLine("You cannot assign ownership to that user." +
"Either you don't have TakeOwnership permissions, or it is not your user account."
);
throw;
}
A: FileInfo fi = new FileInfo(@"C:\test.txt");
string user = fi.GetAccessControl().GetOwner(typeof(System.Security.Principal.NTAccount)).ToString();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: SQL Developer error I'm trying to use SQL developer, but it won't connect using the proxy I specify in the preferences. I guess it's because of some kind of certificate error? Not sure. I'm getting the error:
No HTTP response received.
javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:174)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1591)
at com.sun.net.ssl.internal.ssl.Handshaker.fatalSE(Handshaker.java:187)
at com.sun.net.ssl.internal.ssl.Handshaker.fatalSE(Handshaker.java:181)
at com.sun.net.ssl.internal.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:975)
at com.sun.net.ssl.internal.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:123)
at com.sun.net.ssl.internal.ssl.Handshaker.processLoop(Handshaker.java:516)
at com.sun.net.ssl.internal.ssl.Handshaker.process_record(Handshaker.java:454)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:884)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1096)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1123)
at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1107)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:405)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:166)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:133)
at oracle.ide.webbrowser.HttpPing$PingRunnable.run(HttpPing.java:109)
at oracle.ide.webbrowser.ProxyOptions.doTask(ProxyOptions.java:522)
at oracle.ide.webbrowser.HttpPing.ping(HttpPing.java:74)
at oracle.ide.webbrowser.ProxySettingsPanel$5.run(ProxySettingsPanel.java:766)
at java.lang.Thread.run(Thread.java:619)
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:285)
at sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:191)
at sun.security.validator.Validator.validate(Validator.java:218)
at com.sun.net.ssl.internal.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:126)
at com.sun.net.ssl.internal.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:209)
at com.sun.net.ssl.internal.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:249)
at com.sun.net.ssl.internal.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:954)
... 15 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:174)
at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:238)
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:280)
... 21 more
A: This most likely means that the web server you are connecting to by SSL uses a certificate issued by an unknown authority. You want to add the certificate to your keystore (on the client).
See this article for instructions.
A: I think you haven't installed the JDK yet. If you use SQL Developer 1.5.1, I recommend using JDK 1.5.
A: One of the first rules of debugging errors: Google the error message you're getting, in quotes, like this: "unable to find valid certification path to". When I did this, I found lots of useful information that is probably relevant to you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do I setup remote debugging from scratch for an Asp.Net app I would like to be able to step through an application deployed to a remote location which as yet has nothing bar version 3.5 of the .Net framework.
What steps do I need to go through to achieve this and how long would you envisage this taking?
A: How to: Set Up Remote Debugging
Screencast for Visual Studio 2008 - Remote Debugging with MSVSMON.EXE
This is also a good KB showing some troubleshooting scenarios..
A: If you have unrestricted TCP/IP access to the remote location, this will be very easy (as in, 5 minutes tops to get it to work): see How to: Set Up Remote Debugging and How to: Run the Remote Debugging Monitor for the steps involved.
If your development machine is separated from the remote server by firewalls, routers, etc., things get a bit more difficult. Since remote debugging requires Windows authentication, DCOM and other things that are usually (and quite sensibly) blocked by security policies, you'll most likely require some kind of VPN access to the remote network in order to get things to work.
Setting up a Routing and Remote Access service on the target server is a quick way to get PPTP dial-in access to it, but there are significant security implications to doing this. So, this is most likely the step that will take up most of your time (and, depending on the organization that manages the target network, lots of discussions with their network/security people...).
My advice would be to start testing with remote debugging using a test machine on your local LAN first, and deal with the connectivity issues once you're comfortable with the basics.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Packing rectangles for compact representation I am looking for pointers to the solution of the following problem: I have a set of rectangles whose heights and x-positions are known, and I want to pack them in the most compact form. With a little drawing (where all rectangles are of the same width, but the width may vary in real life), I would like, instead of:
-r1-
-r2--
-r3--
-r4-
-r5--
something like:
-r1- -r3--
-r2-- -r4-
-r5--
All hints will be appreciated. I am not necessarily looking for "the" best solution.
A: Your problem is a simpler variant, but you might get some tips reading about heuristics developed for the "binpacking" problem. There has been a lot written about this, but this page is a good start.
A: Topcoder had a competition to solve the 3D version of this problem. The winner discussed his approach here, it might be an interesting read for you.
A: Something like this?
*
*Sort your collection of rectangles by x-position
*write a method that checks which rectangles are present on a certain interval of the x-axis
Collection<Rectangle> overlaps (int startx, int endx, Collection<Rectangle> rects){
...
}
*loop over the collection of rectangles
Collection<Rectangle> toDraw;
Collection<Rectangle> drawn;
foreach (Rectangle r in toDraw){
    Collection<Rectangle> overlapping = overlaps(r.x, r.x + r.width, drawn);
    int y = 0;
    foreach (Rectangle overlapRect in overlapping){
        y += overlapRect.height;
    }
    drawRectangle(y, r);
    drawn.add(r);
}
A: Are the rectangles all of the same height? If they are, and the problem is just which row to put each rectangle in, then the problem boils down to a series of constraints over all pairs of rectangles (X,Y) of the form "rectangle X cannot be in the same row as rectangle Y" when rectangle X overlaps in the x-direction with rectangle Y.
A 'greedy' algorithm for this sorts the rectangles from left to right, then assigns each rectangle in turn to the lowest-numbered row in which it fits. Because the rectangles are being processed from left to right, one only needs to worry about whether the left hand edge of the current rectangle will overlap any other rectangles, which simplifies the overlap detection algorithm somewhat.
I can't prove that this is gives the optimal solution, but on the other hand can't think of any counterexamples offhand either. Anyone?
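A minimal Python sketch of this greedy row assignment (the function name and the `(x, width)` tuple representation are my own, not from the answer):

```python
def pack_rows(rects):
    """rects: list of (x, width) pairs with fixed x-positions.

    Greedily assigns each rectangle, processed left to right, to the
    lowest-numbered row whose rectangles it does not overlap horizontally.
    Returns a list of rows, each row a list of the rects assigned to it.
    """
    rows = []   # rows[i] holds the rects placed in row i
    ends = []   # ends[i] is the right edge of the last rect in row i
    for x, w in sorted(rects):              # process left to right
        for i, end in enumerate(ends):
            if x >= end:                    # no horizontal overlap with row i
                rows[i].append((x, w))
                ends[i] = x + w
                break
        else:                               # overlaps every row: open a new one
            rows.append([(x, w)])
            ends.append(x + w)
    return rows
```

For the layout in the question, five rectangles in two x-columns collapse into three rows, matching the desired picture.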
A: Put a Tetris-like game into your website. Generate the blocks that fall and the size of the play area based on your parameters. Award points to players based on the compactness (less free space = more points) of their design. Get your website visitors to perform the work for you.
A: I had worked on a problem like this before. The most intuitive picture is probably one where the large rectangles are on the bottom, and the smaller ones are on top, kinda like putting them all in a container and shaking it so the heavy ones fall to the bottom. So to accomplish this, first sort your array in order of decreasing area (or width) -- we will process the large items first and build the picture ground up.
Now the problem is to assign y-coordinates to a set of rectangles whose x-coordinates are given, if I understand you correctly.
Iterate over your array of rectangles. For each rectangle, initialize the rectangle's y-coordinate to 0. Then loop by increasing this rectangle's y-coordinate until it does not intersect with any of the previously placed rectangles (you need to keep track of which rectangles have been previously placed). Commit to the y-coordinate you just found, and continue on to process the next rectangle.
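A rough Python sketch of this bottom-up placement, under assumptions of my own: rectangles are `(x, width, height)` tuples, and the y-coordinate is raised in unit steps (stepping by the height of the blocking rectangle would be faster):

```python
def overlaps(a, b):
    """Axis-aligned overlap test; rectangles are (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place(rects):
    """rects: list of (x, width, height) with fixed x-coordinates.

    Processes rectangles in order of decreasing area ("heavy ones first"),
    assigning each the lowest y at which it clears everything placed so far.
    Returns a list of (x, y, width, height).
    """
    placed = []
    for x, w, h in sorted(rects, key=lambda r: r[1] * r[2], reverse=True):
        y = 0
        # raise the rectangle until it no longer intersects any placed one
        while any(overlaps((x, y, w, h), p) for p in placed):
            y += 1
        placed.append((x, y, w, h))
    return placed
```

Rectangles whose x-ranges don't overlap all settle at y = 0; overlapping ones stack on top of the larger rectangles placed before them.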
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How do I get a multi line tooltip in MFC Right now, I have a tool tip that pops up when I hover over an edit box. The problem is that this tool tip contains multiple error messages and they are all in one long line. I need to have each error message be on its own line. The error messages are contained in a CString with a new line separating them.
My existing code is below.
BOOL OnToolTipText(UINT, NMHDR* pNMHDR, LRESULT* pResult)
{
ASSERT(pNMHDR->code == TTN_NEEDTEXTA || pNMHDR->code == TTN_NEEDTEXTW);
// need to handle both ANSI and UNICODE versions of the message
TOOLTIPTEXTA* pTTTA = (TOOLTIPTEXTA*)pNMHDR;
TOOLTIPTEXTW* pTTTW = (TOOLTIPTEXTW*)pNMHDR;
// TCHAR szFullText[256];
CString strTipText=_T("");
UINT nID = pNMHDR->idFrom;
if (pNMHDR->code == TTN_NEEDTEXTA && (pTTTA->uFlags & TTF_IDISHWND) ||
pNMHDR->code == TTN_NEEDTEXTW && (pTTTW->uFlags & TTF_IDISHWND))
{
// idFrom is actually the HWND of the tool
nID = ::GetDlgCtrlID((HWND)nID);
}
//m_errProjAccel[ch] contains 1 or more error messages each separated by a new line.
if((int)nID >= ID_PROJECTED_ACCEL1 && (int)nID < ID_PROJECTED_ACCEL1 + PROJECTED_ROWS -1 ) {
int ch = nID - ID_PROJECTED_ACCEL1;
strTipText = m_errProjAccel[ch];
}
#ifndef _UNICODE
if (pNMHDR->code == TTN_NEEDTEXTA)
lstrcpyn(pTTTA->szText, strTipText, sizeof(pTTTA->szText)/sizeof(pTTTA->szText[0]));
else
_mbstowcsz(pTTTW->szText, strTipText, sizeof(pTTTA->szText)/sizeof(pTTTA->szText[0]));
#else
if (pNMHDR->code == TTN_NEEDTEXTA)
_wcstombsz(pTTTA->szText, strTipText, sizeof(pTTTA->szText)/sizeof(pTTTA->szText[0]));
else
lstrcpyn(pTTTW->szText, strTipText, sizeof(pTTTA->szText)/sizeof(pTTTA->szText[0]));
#endif
*pResult = 0;
// bring the tooltip window above other popup windows
::SetWindowPos(pNMHDR->hwndFrom, HWND_TOP, 0, 0, 0, 0,
SWP_NOACTIVATE|SWP_NOSIZE|SWP_NOMOVE|SWP_NOOWNERZORDER);
return TRUE; // message was handled
}
A: Creating multiline tooltips is explained here in the MSDN library - read the "Implementing Multiline ToolTips" section. You should send a TTM_SETMAXTIPWIDTH message to the ToolTip control in response to a TTN_GETDISPINFO notification to force it to use multiple lines. In your string you should separate lines with \r\n.
Also, if your text is more than 80 characters, you should use the lpszText member of the NMTTDISPINFO structure instead of copying into the szText array.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Connect to an Oracle 8.0 database using a 10g client I recently upgraded my oracle client to 10g (10.2.0.1.0).
Now when I try to connect to a legacy 8.0 database, I get
ORA-03134: Connections to this server version are no longer supported.
Is there any workaround for this problem, or do I have to install two clients on my local machine?
A: Yes, you can connect to an Oracle 8i database with the 10g client, but the 8i Database requires the 8.1.7.3 patchset, which you can get from Oracle's Metalink support site (requires login).
Here's an Oracle forum post with the details.
If updating your Oracle Database isn't an option, then you can have 2 different clients installed (in different "Oracle Homes" (or directories), and use the selecthome.bat file to switch between your installed clients.
For example, before connecting to 8i, you'd run:
C:\Oracle\Client1_8i\bin\selecthome.bat
or this to use your Oracle 10g client:
C:\Oracle\Client2_10g\bin\selecthome.bat
A: I had to connect C# code to an Oracle 7 database (I know, yours is 8...). The only way I got it working was to take the CD used to install the Oracle Server, go into the "Optional Configuration Components" and use the Oracle73 Ver 2.5 driver.
I think you should go check the CD of the Oracle 8 Server and check if an ODBC is still available.
A: The best way to connect to Oracle 8.1.7 and higher is through Instant Client. Download Instant Client 10.2 from the Oracle site, copy all the files into the same folder where your .NET assemblies reside, and use the classes located in System.Data.OracleClient. This worked for me in a .NET 4 project with an Oracle 8.1.7 database server.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Resizing an iframe based on content I am working on an iGoogle-like application. Content from other applications (on other domains) is shown using iframes.
How do I resize the iframes to fit the height of the iframes' content?
I've tried to decipher the javascript Google uses but it's obfuscated, and searching the web has been fruitless so far.
Update: Please note that content is loaded from other domains, so the same-origin policy applies.
A: The simplest way using jQuery:
$("iframe")
.attr({"scrolling": "no", "src":"http://www.someotherlink.com/"})
.load(function() {
$(this).css("height", $(this).contents().height() + "px");
});
A: If you do not need to handle iframe content from a different domain, try this code, it will solve the problem completely and it's simple:
<script language="JavaScript">
<!--
function autoResize(id){
var newheight;
var newwidth;
if(document.getElementById){
newheight=document.getElementById(id).contentWindow.document.body.scrollHeight;
newwidth=document.getElementById(id).contentWindow.document.body.scrollWidth;
}
document.getElementById(id).height= (newheight) + "px";
document.getElementById(id).width= (newwidth) + "px";
}
//-->
</script>
<iframe src="usagelogs/default.aspx" width="100%" height="200px" id="iframe1" marginheight="0" frameborder="0" onLoad="autoResize('iframe1');"></iframe>
A: Finally I found another solution for sending data to the parent website from an iframe, using window.postMessage(message, targetOrigin);. Here is how I did it.
Site A = http://foo.com
Site B = http://bar.com
Site B is loaded inside the site A website.
Site B's page has this line:
window.parent.postMessage("Hello From IFrame", "*");
or
window.parent.postMessage("Hello From IFrame", "http://foo.com");
Then site A has the following code:
// Here "addEventListener" is for standards-compliant web browsers and "attachEvent" is for IE Browsers.
var eventMethod = window.addEventListener ? "addEventListener" : "attachEvent";
var eventer = window[eventMethod];
var messageEvent = eventMethod == "attachEvent" ? "onmessage" : "message";
// Listen to message from child IFrame window
eventer(messageEvent, function (e) {
alert(e.data);
// Do whatever you want to do with the data got from IFrame in Parent form.
}, false);
If you want to add security connection you can use this if condition in eventer(messageEvent, function (e) {})
if (e.origin == 'http://iframe.example.com') {
alert(e.data);
// Do whatever you want to do with the data got from IFrame in Parent form.
}
For IE
Inside IFrame:
window.parent.postMessage('{"key":"value"}','*');
Outside:
eventer(messageEvent, function (e) {
var data = jQuery.parseJSON(e.data);
doSomething(data.key);
}, false);
A: We had this type of problem, but slightly in reverse to your situation - we were providing the iframed content to sites on other domains, so the same origin policy was also an issue. After many hours spent trawling google, we eventually found a (somewhat..) workable solution, which you may be able to adapt to your needs.
There is a way around the same origin policy, but it requires changes on both the iframed content and the framing page, so if you haven't the ability to request changes on both sides, this method won't be very useful to you, i'm afraid.
There's a browser quirk which allows us to skirt the same origin policy - javascript can communicate either with pages on its own domain, or with pages it has iframed, but never pages in which it is framed, e.g. if you have:
www.foo.com/home.html, which iframes
|-> www.bar.net/framed.html, which iframes
|-> www.foo.com/helper.html
then home.html can communicate with framed.html (iframed) and helper.html (same domain).
Communication options for each page:
+-------------------------+-----------+-------------+-------------+
| | home.html | framed.html | helper.html |
+-------------------------+-----------+-------------+-------------+
| www.foo.com/home.html | N/A | YES | YES |
| www.bar.net/framed.html | NO | N/A | YES |
| www.foo.com/helper.html | YES | YES | N/A |
+-------------------------+-----------+-------------+-------------+
framed.html can send messages to helper.html (iframed) but not home.html (child can't communicate cross-domain with parent).
The key here is that helper.html can receive messages from framed.html, and can also communicate with home.html.
So essentially, when framed.html loads, it works out its own height, tells helper.html, which passes the message on to home.html, which can then resize the iframe in which framed.html sits.
The simplest way we found to pass messages from framed.html to helper.html was through a URL argument. To do this, framed.html has an iframe with src='' specified. When its onload fires, it evaluates its own height, and sets the src of the iframe at this point to helper.html?height=N
There's an explanation here of how facebook handle it, which may be slightly clearer than mine above!
Code
In www.foo.com/home.html, the following javascript code is required (this can be loaded from a .js file on any domain, incidentally..):
<script>
// Resize iframe to full height
function resizeIframe(height)
{
// "+60" is a general rule of thumb to allow for differences in
// IE & and FF height reporting, can be adjusted as required..
document.getElementById('frame_name_here').height = parseInt(height)+60;
}
</script>
<iframe id='frame_name_here' src='http://www.bar.net/framed.html'></iframe>
In www.bar.net/framed.html:
<body onload="iframeResizePipe()">
<iframe id="helpframe" src='' height='0' width='0' frameborder='0'></iframe>
<script type="text/javascript">
function iframeResizePipe()
{
// What's the page height?
var height = document.body.scrollHeight;
// Going to 'pipe' the data to the parent through the helpframe..
var pipe = document.getElementById('helpframe');
// Cachebuster a precaution here to stop browser caching interfering
pipe.src = 'http://www.foo.com/helper.html?height='+height+'&cacheb='+Math.random();
}
</script>
Contents of www.foo.com/helper.html:
<html>
<!--
This page is on the same domain as the parent, so can
communicate with it to order the iframe window resizing
to fit the content
-->
<body onload="parentIframeResize()">
<script>
// Tell the parent iframe what height the iframe needs to be
function parentIframeResize()
{
var height = getParam('height');
// This works as our parent's parent is on our domain..
parent.parent.resizeIframe(height);
}
// Helper function, parse param from request string
function getParam( name )
{
name = name.replace(/[\[]/,"\\\[").replace(/[\]]/,"\\\]");
var regexS = "[\\?&]"+name+"=([^&#]*)";
var regex = new RegExp( regexS );
var results = regex.exec( window.location.href );
if( results == null )
return "";
else
return results[1];
}
</script>
</body>
</html>
A: The solution on http://www.phinesolutions.com/use-jquery-to-adjust-the-iframe-height.html works great (uses jQuery):
<script type="text/javascript">
$(document).ready(function() {
    var theFrame = $("#iFrameToAdjust", parent.document.body);
    theFrame.height($(document.body).height() + 30);
});
</script>
I don't know that you need to add 30 to the length... 1 worked for me.
FYI: If you already have a "height" attribute on your iFrame, this just adds style="height: xxx". This might not be what you want.
A: https://developer.mozilla.org/en/DOM/window.postMessage
window.postMessage()
window.postMessage is a method for safely enabling cross-origin communication. Normally, scripts on different pages are only allowed to access each other if and only if the pages which executed them are at locations with the same protocol (usually both http), port number (80 being the default for http), and host (modulo document.domain being set by both pages to the same value). window.postMessage provides a controlled mechanism to circumvent this restriction in a way which is secure when properly used.
Summary
window.postMessage, when called, causes a MessageEvent to be dispatched at the target window when any pending script that must be executed completes (e.g. remaining event handlers if window.postMessage is called from an event handler, previously-set pending timeouts, etc.). The MessageEvent has the type message, a data property which is set to the string value of the first argument provided to window.postMessage, an origin property corresponding to the origin of the main document in the window calling window.postMessage at the time window.postMessage was called, and a source property which is the window from which window.postMessage is called. (Other standard properties of events are present with their expected values.)
The iFrame-Resizer library uses postMessage to keep an iFrame sized to it's content, along with MutationObserver to detect changes to the content and doesn't depend on jQuery.
https://github.com/davidjbradshaw/iframe-resizer
jQuery: Cross-domain scripting goodness
http://benalman.com/projects/jquery-postmessage-plugin/
Has demo of resizing iframe window...
http://benalman.com/code/projects/jquery-postmessage/examples/iframe/
This article shows how to remove the dependency on jQuery... Plus has a lot of useful info and links to other solutions.
http://www.onlineaspect.com/2010/01/15/backwards-compatible-postmessage/
Barebones example...
http://onlineaspect.com/uploads/postmessage/parent.html
HTML 5 working draft on window.postMessage
http://www.whatwg.org/specs/web-apps/current-work/multipage/comms.html#crossDocumentMessages
John Resig on Cross-Window Messaging
http://ejohn.org/blog/cross-window-messaging/
A: May be a bit late, as all the other answers are older :-) but... here's my solution. Tested in current FF, Chrome and Safari 5.0.
css:
iframe {border:0; overflow:hidden;}
javascript:
$(document).ready(function(){
$("iframe").load( function () {
var c = (this.contentWindow || this.contentDocument);
var d = c.document || c;
var ih = $(d).outerHeight();
var iw = $(d).outerWidth();
$(this).css({
height: ih,
width: iw
});
});
});
Hope this will help anybody.
A: This answer is only applicable for websites which uses Bootstrap. The responsive embed feature of the Bootstrap does the job. It is based on the width (not height) of the content.
<!-- 16:9 aspect ratio -->
<div class="embed-responsive embed-responsive-16by9">
<iframe class="embed-responsive-item" src="http://www.youtube.com/embed/WsFWhL4Y84Y"></iframe>
</div>
jsfiddle: http://jsfiddle.net/00qggsjj/2/
http://getbootstrap.com/components/#responsive-embed
A: Here is a simple solution using a dynamically generated style sheet served up by the same server as the iframe content. Quite simply the style sheet "knows" what is in the iframe, and knows the dimensions to use to style the iframe. This gets around the same origin policy restrictions.
http://www.8degrees.co.nz/2010/06/09/dynamically-resize-an-iframe-depending-on-its-content/
So the supplied iframe code would have an accompanying style sheet like so...
<link href="http://your.site/path/to/css?contents_id=1234&dom_id=iframe_widget" rel="stylesheet" type="text/css" />
<iframe id="iframe_widget" src="http://your.site/path/to/content?content_id=1234" frameborder="0" width="100%" scrolling="no"></iframe>
This does require the server side logic being able to calculate the dimensions of the rendered content of the iframe.
A: If you have control over the iframe content , I strongly recommend using
ResizeObserver
Just insert the following at the end of srcdoc attribute of iframe ,
escape it if needed.
<script type="text/javascript">
var ro = new ResizeObserver(entries => {
for (let entry of entries) {
const cr = entry.contentRect;
// console.log(window.frameElement);
window.frameElement.style.height =cr.height +30+ "px";
}
});
ro.observe(document.body);
</script>
A: I'm implementing ConroyP's frame-in-frame solution to replace a solution based on setting document.domain, but found it quite hard to determine the height of the iframe's content correctly in different browsers (testing with FF11, Ch17 and IE9 right now).
ConroyP uses:
var height = document.body.scrollHeight;
But that only works on the initial page load. My iframe has dynamic content and I need to resize the iframe on certain events.
What I ended up doing was using different JS properties for the different browsers.
function getDim () {
var body = document.body,
html = document.documentElement;
var bc = body.clientHeight;
var bo = body.offsetHeight;
var bs = body.scrollHeight;
var hc = html.clientHeight;
var ho = html.offsetHeight;
var hs = html.scrollHeight;
var h = Math.max(bc, bo, bs, hc, hs, ho);
var bd = getBrowserData();
// Select height property to use depending on browser
if (bd.isGecko) {
// FF 11
h = hc;
} else if (bd.isChrome) {
// CH 17
h = hc;
} else if (bd.isIE) {
// IE 9
h = bs;
}
return h;
}
getBrowserData() is a browser detection function "inspired" by Ext Core's http://docs.sencha.com/core/source/Ext.html#method-Ext-apply
That worked well for FF and IE, but then there were issues with Chrome. One of them was a timing issue; apparently it takes Chrome a while to set/detect the height of the iframe. And Chrome also never returned the height of the content in the iframe correctly if the iframe was taller than the content. This wouldn't work with dynamic content when the height is reduced.
To solve this I always set the iframe to a low height before detecting the content's height, and then set the iframe height to its correct value.
function resize () {
// Reset the iframes height to a low value.
// Otherwise Chrome won't detect the content height of the iframe.
setIframeHeight(150);
// Delay getting the dimensions because Chrome needs
// a few moments to get the correct height.
setTimeout("getDimAndResize()", 100);
}
The code is not optimized, it's from my devel testing :)
Hope someone finds this helpful!
A: <html>
<head>
<script>
function frameSize(id){
    var frameHeight;
    document.getElementById(id).height = 0 + "px";
    if(document.getElementById){
        frameHeight = document.getElementById(id).contentWindow.document.body.scrollHeight;
    }
    document.getElementById(id).height = frameHeight + "px";
}
</script>
</head>
<body>
<iframe id="frame" src="startframe.html" frameborder="0" marginheight="0" hspace="20" width="100%"
onload="javascript:frameSize('frame');"></iframe>
<p>This will work, but you need to host it on an http server; you can do it locally.</p>
</body>
</html>
A: This is an old thread, but in 2020 it's still a relevant question. I've actually posted this answer in another old thread as well^^ (https://stackoverflow.com/a/64110252/4383587)
Just wanted to share my solution and excitement. It took me four entire days of intensive research and failure, but I think I've found a neat way of making iframes entirely responsive! Yey!
I tried a ton of different approaches... I didn't want to use a two-way communication tunnel as with postMessage because it's awkward for same-origin and complicated for cross-origin (as no admin wants to open doors and implement this on your behalf).
I've tried using MutationObservers and still needed several EventListeners (resize, click,..) to ensure that every change of the layout was handled correctly. - What if a script toggles the visibility of an element? Or what if it dynamically preloads more content on demand? - Another issue was getting an accurate height of the iframe contents from somewhere. Most people suggest using scrollHeight or offsetHeight, or combination of it by using Math.max. The problem is, that these values don't get updated until the iframe element changes its dimensions. To achieve that you could simply reset the iframe.height = 0 before grabbing the scrollHeight, but there are even more caveats to this. So, screw this.
Then, I had another idea to experiment with requestAnimationFrame to get rid of my events and observers hell. Now, I could react to every layout change immediately, but I still had no reliable source to infer the content height of the iframe from. And theeen I discovered getComputedStyle, by accident! This was an enlightenment! Everything just clicked.
Well, see the code I could eventually distill from my countless attempts.
function fit() {
var iframes = document.querySelectorAll("iframe.gh-fit")
for(var id = 0; id < iframes.length; id++) {
var win = iframes[id].contentWindow
var doc = win.document
var html = doc.documentElement
var body = doc.body
var ifrm = iframes[id] // or win.frameElement
if(body) {
body.style.overflowX = "scroll" // scrollbar-jitter fix
body.style.overflowY = "hidden"
}
if(html) {
html.style.overflowX = "scroll" // scrollbar-jitter fix
html.style.overflowY = "hidden"
var style = win.getComputedStyle(html)
ifrm.width = parseInt(style.getPropertyValue("width")) // round value
ifrm.height = parseInt(style.getPropertyValue("height"))
}
}
requestAnimationFrame(fit)
}
addEventListener("load", requestAnimationFrame.bind(this, fit))
That is it, yes! - In your HTML code write <iframe src="page.html" class="gh-fit gh-fullwidth"></iframe>. gh-fit is just a marker CSS class, used to identify which iframe elements in your DOM should be affected by the script. gh-fullwidth is a simple CSS class with one rule: width: 100%;.
The above script automatically fetches all iframes from the DOM, that have a .gh-fit class assigned. It then grabs and uses the pre-calculated style values for width and height from document.getComputedStyle(iframe), which always contain a pixel-perfect size of that element!!! Just perfect!
Note, this solution doesn't work cross-origin (nor does any other solution, without a two-way communication strategy like IFrameResizer). JS simply can't access the DOM of an iframe, if it doesn't belong to you.
The only other cross-origin solution I can think of, is to use a proxy like https://github.com/gnuns/allorigins. But this would involve deep-copying every request you make - in other words - you 'steal' the entire page source code (to make it yours and let JS access the DOM) and you patch every link/path in this source, so that it goes through the proxy as well. The re-linking routine is a tough one, but doable.
I'll probably try myself at this cross-origin problem, but that's for another day. Enjoy the code! :)
A: iGoogle gadgets have to actively implement resizing, so my guess is in a cross-domain model you can't do this without the remote content taking part in some way. If your content can send a message with the new size to the container page using typical cross-domain communication techniques, then the rest is simple.
A: When you want to zoom out a web page to fit it into the iframe size:
*
*You should resize the iframe to fit it with the content
*Then you should zoom out the whole iframe with the loaded web page content
Here is an example:
<div id="wrap">
<IFRAME ID="frame" name="Main" src ="http://www.google.com" />
</div>
<style type="text/css">
#wrap { width: 130px; height: 130px; padding: 0; overflow: hidden; }
#frame { width: 900px; height: 600px; border: 1px solid black; }
#frame { zoom:0.15; -moz-transform:scale(0.15);-moz-transform-origin: 0 0; }
</style>
A: Here's a jQuery approach that adds the info in json via the src attribute of the iframe. Here's a demo, resize and scroll this window.. the resulting url with json looks like this...
http://fiddle.jshell.net/zippyskippy/RJN3G/show/#{docHeight:5124,windowHeight:1019,scrollHeight:571}#
Here's the source code fiddle http://jsfiddle.net/zippyskippy/RJN3G/
function updateLocation(){
var loc = window.location.href;
window.location.href = loc.replace(/#{.*}#/,"")
+ "#{docHeight:"+$(document).height()
+ ",windowHeight:"+$(window).height()
+ ",scrollHeight:"+$(window).scrollTop()
+"}#";
};
//setInterval(updateLocation,500);
$(window).resize(updateLocation);
$(window).scroll(updateLocation);
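The fiddle above shows the child writing the values into its URL hash, but whoever reads that URL still has to parse the pseudo-JSON back out. Here's a hedged sketch of that parsing half (the function name is illustrative, not part of the original answer):

```javascript
// Parse the "#{docHeight:5124,windowHeight:1019,scrollHeight:571}#" block
// that updateLocation() appends to the URL. Keys are unquoted, so this is
// not valid JSON and needs a small hand-rolled parser.
function parseFrameInfo(href) {
  var match = href.match(/#\{(.*)\}#/);
  if (!match) return null;
  var info = {};
  match[1].split(",").forEach(function (pair) {
    var kv = pair.split(":");
    info[kv[0]] = parseInt(kv[1], 10);
  });
  return info;
}
```

The reading side would poll for the hash and then apply something like `info.docHeight` to the iframe's height.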
A: get iframe content height then give it to this iframe
var iframes = document.getElementsByTagName("iframe");
for(var i = 0, len = iframes.length; i<len; i++){
window.frames[i].onload = function(_i){
return function(){
iframes[_i].style.height = window.frames[_i].document.body.scrollHeight + "px";
}
}(i);
}
A: Work with jquery on load (cross browser):
<iframe src="your_url" marginwidth="0" marginheight="0" scrolling="No" frameborder="0" hspace="0" vspace="0" id="containiframe" onload="loaderIframe();" height="100%" width="100%"></iframe>
function loaderIframe(){
var heightIframe = $('#containiframe').contents().find('body').height();
$('#containiframe').css("height", heightIframe);
}
on resize in responsive page:
$(window).resize(function(){
if($('#containiframe').length !== 0) {
var heightIframe = $('#containiframe').contents().find('body').height();
$('#containiframe').css("height", heightIframe);
}
});
A: David Bradshaw and Chris Jacob already suggested using the postMessage approach, and I totally agree that that's the proper way of doing things like this.
I just want to post an example, real code that works, in case it'll be a ready answers for some.
On the iframed-side:
<body onload="docResizePipe()">
<script>
var v = 0;
const docResizeObserver = new ResizeObserver(() => {
docResizePipe();
});
docResizeObserver.observe(document.querySelector("body"));
function docResizePipe() {
v += 1;
if (v > 5) {
return;
}
var w = document.body.scrollWidth;
var h = document.body.scrollHeight;
window.parent.postMessage([w,h], "*");
}
setInterval(function() {
v -= 1;
if (v < 0) {
v = 0;
}
}, 300);
</script>
Note the recursion-blocking mechanics: they were necessary because of an apparent bug in Firefox, but anyway, let them be there.
On the parent document side:
<iframe id="rpa-frame" src="3.html" style="border: none;"></iframe>
<script>
var rpaFrame = document.getElementById("rpa-frame");
window.addEventListener("message", (event) => {
var width = event.data[0];
var height = event.data[1];
rpaFrame.width = parseInt(width)+60;
rpaFrame.height = parseInt(height)+60;
console.log(event);
}, false);
</script>
Hope it'll be useful.
A: I have been reading a lot of the answers here but nearly everyone gave some sort of cross-origin frame block.
Example error:
Uncaught DOMException: Blocked a frame with origin "null" from
accessing a cross-origin frame.
The same for the answers in a related thread:
Make iframe automatically adjust height according to the contents without using scrollbar?
I do not want to use a third party library like iFrame Resizer or similar library either.
The answer from @ChrisJacob is close but I'm missing a complete working example and not only links. @Selvamani and @latitov are good complements as well.
https://stackoverflow.com/a/3219970/3850405
I'm using width="100%" for the iframe but the code can be modified to work with width as well.
This is how I solved setting a custom height for the iframe:
Embedded iframe:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="description"
content="Web site" />
<title>Test with embedded iframe</title>
</head>
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>
<div id="root"></div>
<iframe id="ifrm" src="https://localhost:44335/package/details?key=123" width="100%"></iframe>
<script type="text/javascript">
window.addEventListener('message', receiveMessage, false);
function receiveMessage(evt) {
console.log("Got message: " + JSON.stringify(evt.data) + " from origin: " + evt.origin);
// Do we trust the sender of this message?
if (evt.origin !== "https://localhost:44335") {
return;
}
if (evt.data.type === "frame-resized") {
document.getElementById("ifrm").style.height = evt.data.value + "px";
}
}
</script>
</body>
</html>
iframe source, example from Create React App but only HTML and JS is used.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="description"
content="Web site created using create-react-app" />
<title>React App</title>
</head>
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>
<div id="root"></div>
<script type="text/javascript">
//Don't run unless in an iframe
if (self !== top) {
var rootHeight;
setInterval(function () {
var rootElement = document.getElementById("root");
if (rootElement) {
var currentRootHeight = rootElement.offsetHeight;
//Only send values if height has changed since last time
if (rootHeight !== currentRootHeight) {
//postMessage to set iframe height
window.parent.postMessage({ "type": "frame-resized", "value": currentRootHeight }, '*');
rootHeight = currentRootHeight;
}
}
}
, 1000);
}
</script>
</body>
</html>
The code with setInterval can of course be modified but it works really well with dynamic content. setInterval only activates if the content is embedded in a iframe and postMessage only sends a message when height has changed.
You can read more about Window.postMessage() here but the description fits very good in what we want to achieve:
The window.postMessage() method safely enables cross-origin
communication between Window objects; e.g., between a page and a
pop-up that it spawned, or between a page and an iframe embedded
within it.
Normally, scripts on different pages are allowed to access each other
if and only if the pages they originate from share the same protocol,
port number, and host (also known as the "same-origin policy").
window.postMessage() provides a controlled mechanism to securely
circumvent this restriction (if used properly).
https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage
A: https://getbootstrap.com/docs/4.0/utilities/embed/
After a lot of research, it dawned on me, this is not a unique problem, I bet Bootstrap handles it. Lo and behold…
A: I have an easy solution and requires you to determine the width and height in the link, please try (It works with most browsers):
<a href='#' onClick=" document.getElementById('myform').src='t2.htm';document.getElementById('myform').width='500px'; document.getElementById('myform').height='400px'; return false">500x400</a>
A: This is slightly tricky as you have to know when the iframe page has loaded, which is difficult when you're not in control of its content. It's possible to add an onload handler to the iframe, but I've tried this in the past and it has vastly different behaviour across browsers (no guessing who's the most annoying...). You'd probably have to add a function to the iframe page that performs the resize, and inject some script into the content that listens for load or resize events and then calls that function. I'm thinking add a function to the page since you want to make sure it's secure, but I have no idea how easy it will be to do.
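One concrete shape of the "add a function to the iframe page and call it on load" idea described above (same-origin only; the helper name and parameters here are illustrative):

```javascript
// Runs inside the framed page: on load, reach up to the parent document
// and size the hosting iframe to this page's own scroll height.
function reportHeight(parentWin, doc, frameId) {
  var el = parentWin.document.getElementById(frameId);
  if (el) {
    el.style.height = doc.body.scrollHeight + "px";
  }
}

// In the child page you would wire it up roughly like:
// window.onload = function () {
//   reportHeight(window.parent, document, "myFrame");
// };
```

The explicit parameters also make the helper easy to exercise with stub objects outside a browser.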
A: Something along the lines of this I believe should work.
parent.document.getElementById(iFrameID).style.height = framedPage.scrollHeight + "px";
Load this with your body onload on the iframe content.
A: Using jQuery:
parent.html
<body>
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.4.min.js"></script>
<style>
iframe {
width: 100%;
border: 1px solid black;
}
</style>
<script>
function foo(w, h) {
$("iframe").css({width: w, height: h});
return true; // for debug purposes
}
</script>
<iframe src="child.html"></iframe>
</body>
child.html
<body>
<script type="text/javascript" src="https://code.jquery.com/jquery-2.1.4.min.js"></script>
<script>
$(function() {
var w = $("#container").css("width");
var h = $("#container").css("height");
var req = parent.foo(w, h);
console.log(req); // for debug purposes
});
</script>
<style>
body, html {
margin: 0;
}
#container {
width: 500px;
height: 500px;
background-color: red;
}
</style>
<div id="container"></div>
</body>
A: I couldn't find something that perfectly handled large texts + large images, but I ended up with this; it seems to get it right, or nearly right, every single time:
iframe.addEventListener("load",function(){
// inlineSize, length, perspectiveOrigin, width
let heightMax = 0;
// this seems to work best with images...
heightMax = Math.max(heightMax,iframe.contentWindow.getComputedStyle(iframe.contentWindow.document.body).perspectiveOrigin.split("px")[0]);
// this seems to work best with text...
heightMax = Math.max(heightMax,iframe.contentWindow.document.body.scrollHeight);
// some large 1920x1080 images always gets a little bit off on firefox =/
const isFirefox = navigator.userAgent.toLowerCase().indexOf('firefox') > -1;
if(isFirefox && heightMax >= 900){
// grrr..
heightMax = heightMax + 100;
}
iframe.style.height = heightMax+"px";
//console.log(heightMax);
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "519"
} |
Q: How to count distinct values in a node? How to count distinct values in a node in XSLT?
Example: I want to count the number of existing countries in Country nodes, in this case, it would be 3.
<Artists_by_Countries>
<Artist_by_Country>
<Location_ID>62</Location_ID>
<Artist_ID>212</Artist_ID>
<Country>Argentina</Country>
</Artist_by_Country>
<Artist_by_Country>
<Location_ID>4</Location_ID>
<Artist_ID>108</Artist_ID>
<Country>Australia</Country>
</Artist_by_Country>
<Artist_by_Country>
<Location_ID>4</Location_ID>
<Artist_ID>111</Artist_ID>
<Country>Australia</Country>
</Artist_by_Country>
<Artist_by_Country>
<Location_ID>12</Location_ID>
<Artist_ID>78</Artist_ID>
<Country>Germany</Country>
</Artist_by_Country>
</Artists_by_Countries>
A: In XSLT 1.0 this isn't obvious, but the following should give you an idea of the requirement:
count(//Artist_by_Country[not(Location_ID=preceding-sibling::Artist_by_Country/Location_ID)]/Location_ID)
The more elements in your XML the longer this takes, as it checks every single preceding sibling of every single element.
A: Try something like this:
count(//Country[not(following::Country/text() = text())])
"Give me the count of all Country nodes without a following Country with matching text"
The interesting bit of that expression, IMO, is the following axis.
You could probably also remove the first /text(), and replace the second with .
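Spelled out, that suggested simplification would be (an untested sketch of the same idea):

```xpath
count(//Country[not(following::Country = .)])
```

Comparing nodes directly like this relies on XPath's implicit string-value comparison, which matches what the text() version does for these simple elements.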
A: If you have a large document, you probably want to use the "Muenchian Method", which is usually used for grouping, to identify the distinct nodes. Declare a key that indexes the things you want to count by the values that are distinct:
<xsl:key name="artists-by-country" match="Artist_by_Country" use="Country" />
Then you can get the <Artist_by_Country> elements that have distinct countries using:
/Artists_by_Countries
/Artist_by_Country
[generate-id(.) =
generate-id(key('artists-by-country', Country)[1])]
and you can count them by wrapping that in a call to the count() function.
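Putting the key and the count together, a minimal complete XSLT 1.0 stylesheet could look like this (a sketch, with element names taken from the sample document above):

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>

  <!-- index Artist_by_Country elements by their Country value -->
  <xsl:key name="artists-by-country" match="Artist_by_Country" use="Country"/>

  <xsl:template match="/">
    <!-- count only the first element in each country group -->
    <xsl:value-of select="count(/Artists_by_Countries/Artist_by_Country
        [generate-id(.) = generate-id(key('artists-by-country', Country)[1])])"/>
  </xsl:template>
</xsl:stylesheet>
```

Run against the sample document, this should output 3.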
Of course in XSLT 2.0, it's as simple as
count(distinct-values(/Artists_by_Countries/Artist_by_Country/Country))
A: If you have control of the XML generation, then on the first occurrence of a country you could add an attribute to the country node such as distinct='true' to flag the country as "used", and not add the distinct attribute if you come across that country again.
You could then do
<xsl:for-each select="Artists_by_Countries/Artist_by_Country/Country[@distinct='true']" />
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Is there a way to determine when a .NET thread terminates? I'm trying to find out whether there is a way to reliably determine when a managed thread is about to terminate. I'm using a third-party library that includes support for PDF documents and the problem is that in order to use the PDF functionality, I have to explicitly initialize the PDF component, do the work, then explicitly uninitialize the component before the thread terminates. If the uninitialize is not called, exceptions are thrown because unmanaged resources are not being released correctly. Since the thread class is sealed and has no events, I have to wrap the thread instance into a class and only allow instances of this class to do the work.
I should point out that this is part of a shared library used by multiple Windows applications. I may not always have control of threads making calls into this library.
Since a PDF object may be the output of a call to this library, and since the calling thread may do some other work with that object, I don't want to call the cleanup function immediately; I need to try to do it right before the thread terminates. Ideally I'd like to be able to subscribe to something like a Thread.Dispose event, but that's what I'm missing.
A: You don't want to wrap System.Thread per se - just compose it with your PDFWidget class that is doing the work:
class PDFWidget
{
private Thread pdfWorker;
public void DoPDFStuff()
{
pdfWorker = new Thread(new ThreadStart(ProcessPDF));
pdfWorker.Start();
}
private void ProcessPDF()
{
OtherGuysPDFThingie pdfLibrary = new OtherGuysPDFThingie();
// Use the library to do whatever...
pdfLibrary.Cleanup();
}
}
You could also use a ThreadPool thread, if that is more to your taste - the best choice depends on how much control you need over the thread.
A: I think you can use an [Auto|Manual]ResetEvent which you set when the thread terminates.
A: Catch the ThreadAbortExcpetion.
http://msdn.microsoft.com/en-us/library/system.threading.threadabortexception.aspx
A: What about calling a standard method in async mode?
e.g.:
// declare a delegate with the same signature as your method
public delegate string LongMethodDelegate();
// register a callback func
AsyncCallback callbackFunc = new AsyncCallback(callTermined);
// create a delegate for async operations
LongMethodDelegate longMethod = new LongMethodDelegate(yourObject.MethodWhichDoesTheWork);
// invoke the method asynchronously.
// The second-to-last parameter is the callback delegate;
// the last parameter is an object which you get back in your callback. To recover the return value, we pass the delegate itself (see "callTermined")
longMethod.BeginInvoke(callbackFunc, longMethod);
// the following function is called at the end of the method
public static void callTermined(IAsyncResult result) {
    LongMethodDelegate method = (LongMethodDelegate) result.AsyncState;
    string output = method.EndInvoke(result);
    Console.WriteLine(output);
}
See here for more info: http://msdn.microsoft.com/en-us/library/2e08f6yc.aspx
A: There are many ways you can do this, but the most simple one is to do like McKenzieG1 said and just wrap the call to the PDF library. After you've called the PDF-library in the thread you can use an Event or ManualResetEvent depending on how you need to wait for the thread to finish.
Don't forget to marshal event-calls to the UI-thread with a BeginInvoke if you're using the Event approach.
A: Wouldn't you just wrap your PDF usage with a finally (if it's a single method), or in an IDisposable?
A: Check the Powerthreading library at http://wintellect.com.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: RoR: nested namespace routes, undefined method error I am working on the admin section of a new rails app and i'm trying to setup some routes to do things "properly". I have the following controller:
class Admin::BlogsController < ApplicationController
def index
@blogs = Blog.find(:all)
end
def show
@blog = Blog.find(params[:id])
end
...
end
in routes.rb:
map.namespace :admin do |admin|
admin.resources :blogs
end
in views/admin/blogs/index.html.erb:
<% for blog in @blogs %>
<%= link_to 'Delete', admin_blog(blog), :method => :delete %>
<% end %>
I have verified that the routes exist:
admin_blogs GET /admin/blogs {:action => "index", :controller=>"admin/blogs"}
admin_blog GET /admin/blogs/:id {:action => "show", :controller => "admin/blogs"}
....
But when I try to view http://localhost:3000/admin/blogs I get this error:
undefined method 'admin_blog' for #<ActionView::Base:0xb7213da8>
Where am I going wrong, and why?
A: I'm assuming you are using Rails 2.0.x, so the way you generate a route is with the _path helpers:
admin_blog_path(blog)
and if you are riding a previous version I think it's just
blog_path(blog)
A: Your Delete link should end in _path:
<%= link_to 'Delete', admin_blog_path(blog), :method => :delete %>
A: Side note:
I also see that your controller is defined like this:
class Admin::BlogsController < ApplicationController
shouldn't it be like this?
class Admin::BlogsController < Admin::ApplicationController
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Microsoft Search Server 2008 Express Edition from Classic ASP or ASP.NET We have a new installation of Microsoft Search Server 2008 Express Edition on one server and it's nicely indexing our intranet (on another server) which we can search from the provided search form on the search server.
I'd like to customise the search results so that they actually look like our intranet has generated them and also place the search form's textbox and submit button on the intranet pages themselves. The existing, provided search form appears to be an ASP.NET page and performs a postback so it's not like I can just duplicate that in my intranet classic ASP code and anyway, I'd end up with some pre-formatted HTML returned when I'm just after some raw XML to transform/format myself.
Is there some URL that I can access the search server with, passing the query parameter(s) and have it return some valid XML that I can then, via ASP, or ASP.NET perform a tranformation using XSLT?
All the customisation articles I seem to come across on the Web refer to creating Sharepoint Web Parts and using them on an ASP.NET page and that's (Sharepoint Web Parts) something I know nothing about :(
I currently do just what I'm looking for with a Google Mini appliance, calling a URL with search terms tacked onto the URL and use XSLT to transform the returned XML search results into something that, style-wise at least, matches our (mainly) classic ASP intranet. However, we want to look at using Microsoft Search Server 2008 to perform the same task if possible.
A: You can call the Search webservice. This isn't quite as straightforward as calling a Url like the Google appliance, but it's not daunting.
In MOSS 2007, the Url is http://portalname/_vti_bin/Search.asmx. The method you will probably want to use is Query. This will return results as an Xml Document. From there, you can apply your XSL and display inline on your custom search page.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Are there any good free .Net network libraries? (FTP, SFTP, SSH, etc.) I'm a bit surprised I haven't found a good open source library for performing common network tasks. There are a few very good commercial libraries, but they're too expensive to use on an open source project.
Anyone know of any?
A: edtFTPNet is free, but you have to buy their "Pro" version to get SFTP (FTP over SSH) and FTPS (FTP over SSL).
A: Although it hasn't been updated in a while, it is free! I remember being able to get SharpSSH to run without much hassle, and it supports port forwarding (which is what I was looking for at the time!).
SharpSSH
http://www.tamirgal.com/home/dev.aspx?Item=SharpSsh
A: .NET ships with some basic FTP support via System.Net.FtpWebRequest, but it's a bit crude at best. A far superior alternative that I can recommend is dotNET FTP client from SourceForge.
I don't know if you're looking for email libraries too, but it's something that I came across, so I'll mention it. For email composing and delivery, the basic .NET libraries are fine. System.Net.Mail.MailMessage is great for composing emails, and System.Net.Mail.SmtpClient is good for sending emails over SMTP.
For retrieving emails with POP3 and parsing MIME messages, you will want an external library.
I've been using POP3 MIME Client from codeproject without any problems.
I hope that helps!
A: SSH.NET Library - https://github.com/sshnet/SSH.NET
Inspired by Sharp.SSH, this library is complete rewrite using .NET 4.0, without any third party dependencies and utilizes parallelism as much as possible to allow best performance.
It's been a solid C# implementation of client side SSH.
A: It's not a single library, and I'm not sure how good they are but I was able to find a couple of links to open source libraries here:
http://csharp-source.net/open-source/network-clients
Hope this helps!
Jeff
A: Lumisoft is open source and has FTP, DNS, IMAP, POP3 clients, among other stuff. It doesn't include SSH and SFTP though.
A: *
*All-singing-all-dancing solution which looks good to me, but which I haven't tried: http://www.nsoftware.com/products/component/sftp.aspx
*SSH / SFTP library which my company uses: http://www.eldos.com/sbb/net-sftp.php
In practice, the only place where I currently do SFTP, I use putty's bundled psftp utility, and run it from a process object. That may not be great, but it's working reliably for me.
A: IIRC, FTP is built in to .NET, (System.Net.FtpWebRequest) and last time I looked (a couple of years ago, admittedly) I couldn't find any free SSH / SFTP assemblies. That might have changed, though.
A: I did some research and found an implementation of SSH in C#, called sharpSsh. It is a port of the Jsch (Java Secure Channel) library for Java. We use Jcsh here at my employer, and it's great. I can't vouch for the C# version.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: Running a Python web server as a service in Windows I have a small web server application I've written in Python that goes and gets some data from a database system and returns it to the user as XML. That part works fine - I can run the Python web server application from the command line and I can have clients connect to it and get data back. At the moment, to run the web server I have to be logged in to our server as the administrator user and I have to manually start the web server. I want to have the web server automatically start on system start as a service and run in the background.
Using code from ActiveState's site and StackOverflow, I have a pretty good idea of how to go about creating a service, and I think I've got that bit sorted - I can install and start my web server as a Windows service. I can't, however, figure out how to stop the service again. My web server is created from a BaseHTTPServer:
server = BaseHTTPServer.HTTPServer(('', 8081), SIMSAPIServerHandler)
server.serve_forever()
The serve_forever() call, naturally enough, makes the web server sit in an infinite loop and wait for HTTP connections (or a ctrl-break keypress, not useful for a service). I get the idea from the example code above that your main() function is supposed to sit in an infinite loop and only break out of it when it comes accross a "stop" condition. My main calls serve_forever(). I have a SvcStop function:
def SvcStop(self):
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
exit(0)
Which seems to get called when I do "python myservice stop" from the command line (I can put a debug line in there that produces output to a file) but doesn't actually exit the whole service - subsequent calls to "python myservice start" gives me an error:
Error starting service: An instance of
the service is already running.
and subsequent calls to stop gives me:
Error stopping service: The service
cannot accept control messages at this
time. (1061)
I think I need either some replacement for serve_forever (serve_until_stop_received, or whatever) or I need some way of modifying SvcStop so it stops the whole service.
Here's a full listing (I've trimmed includes/comments to save space):
class SIMSAPIServerHandler(BaseHTTPServer.BaseHTTPRequestHandler):
def do_GET(self):
try:
reportTuple = self.path.partition("/")
if len(reportTuple) < 3:
return
if reportTuple[2] == "":
return
os.system("C:\\Programs\\SIMSAPI\\runCommandReporter.bat " + reportTuple[2])
f = open("C:\\Programs\\SIMSAPI\\out.xml", "rb")
self.send_response(200)
self.send_header('Content-type', "application/xml")
self.end_headers()
self.wfile.write(f.read())
f.close()
# The output from CommandReporter is simply dumped to out.xml, which we read, write to the user, then remove.
os.unlink("C:\\Programs\\SIMSAPI\\out.xml")
return
except IOError:
self.send_error(404,'File Not Found: %s' % self.path)
class SIMSAPI(win32serviceutil.ServiceFramework):
_svc_name_ = "SIMSAPI"
_svc_display_name_ = "A simple web server"
_svc_description_ = "Serves XML data produced by SIMS CommandReporter"
def __init__(self, args):
win32serviceutil.ServiceFramework.__init__(self, args)
self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
def SvcStop(self):
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
exit(0)
def SvcDoRun(self):
import servicemanager
servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,servicemanager.PYS_SERVICE_STARTED,(self._svc_name_, ''))
self.timeout = 3000
while 1:
server = BaseHTTPServer.HTTPServer(('', 8081), SIMSAPIServerHandler)
server.serve_forever()
def ctrlHandler(ctrlType):
return True
if __name__ == '__main__':
win32api.SetConsoleCtrlHandler(ctrlHandler, True)
win32serviceutil.HandleCommandLine(SIMSAPI)
A: This is what I do:
Instead of instancing directly the class BaseHTTPServer.HTTPServer, I write a new descendant from it that publishes an "stop" method:
class AppHTTPServer(SocketServer.ThreadingMixIn, BaseHTTPServer.HTTPServer):
def serve_forever(self):
self.stop_serving = False
while not self.stop_serving:
self.handle_request()
def stop(self):
self.stop_serving = True
And then, in the method SvcStop that you already have, I call that method to break the serve_forever() loop:
def SvcStop(self):
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
self.httpd.stop()
(self.httpd is the instance of AppHTTPServer() that implements the webserver)
If you use setDaemon() correctly on the background threads, and interrupt correctly all the loops in the service, then the instruction
exit(0)
in SvcStop() should not be necessary
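As an aside for anyone on newer Python versions: the standard library's http.server (the Python 3 successor of BaseHTTPServer) already ships a serve_forever()/shutdown() pair built on a polling loop, so the manual stop flag above is no longer needed there. A minimal sketch (handler and names are illustrative):

```python
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep request logging quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick any free port

# serve_forever() blocks, so run it on a worker thread...
worker = threading.Thread(target=server.serve_forever)
worker.start()

# ...and from the service's SvcStop equivalent, just call shutdown():
server.shutdown()   # returns once the serve loop has exited
worker.join()
server.server_close()
```

shutdown() is safe to call from another thread, which is exactly the situation a Windows service's SvcStop handler is in.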
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: MVP pattern - Passive View and exposing complex types through IView (Asp.Net, Web Forms) I've recently switched to the MVP pattern with a Passive View approach. I find it very comfortable to work with when the view interface exposes only basic CLR types, such as string mapped to TextBoxes, IDictionary mapped to DropDownLists, IEnumerable mapped to some grids, repeaters.
However, this last approach only works when I care about just one column from those grids. How can I map a grid's multirow content inside IView? For now, two solutions come to my mind, neither of them brilliant:
*
*Create a DTO for the grid's content and expose the IEnumerable in IView,
or
*Expose the IEnumerable or just the "grid" as is in IView.
The first solution seems to break the Passive View rules, moving closer to the Supervising Controller pattern, and the second breaks the MVP pattern altogether.
How would you handle this?
thanks, Łukasz
A: MVP makes webforms development much easier, except in cases like this. However, if you used TDD to verify that your IView really needs that grid of data, then I don't really see what the problem is.
I assume you're trying to do something like this:
public interface IView
{
DataTable DataSource {get; set;}
}
public class View : IView {
    private GridView _grid;
    public DataTable DataSource
    {
        get { return (DataTable)_grid.DataSource; }
        set
        {
            _grid.DataSource = value;
            _grid.DataBind();
        }
    }
}
When used with the MVP pattern, I find this little pattern to be quite helpful.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do you organize your Unit Tests in TDD? I do TDD, and I've been fairly loose in organizing my unit tests. I tend to start with a file representing the next story or chunk of functionality and write all the unit-tests to make that work.
Of course, if I'm introducing a new class, I usually make a separate unit-test module or file for that class, but I don't organize the tests themselves into any higher level structure. The result is I write code fast and I believe my actual program is reasonably well structured, but the unit tests themselves are "messy". Especially, their structure tends to recapitulate the phylogeny of the development process. Sometimes I see myself as trading laziness in the code for laziness in the tests.
How big a problem is this? Who here continually refactors and reorganizes their unit tests to try to improve their overall structure? Any tips for this? What should the overall structure of tests look like.
(Note, that I'm not so much asking the "how many assertions per function" question asked here : How many unit tests should I write per function/method? I'm talking about the bigger picture.)
A: The less important part is organizing the tests.
I start by putting the tests into a class that relates to the class under test, so com.jeffreyfredrick.Foo has a test com.jeffreyfredrick.FooTest. But if some subset of those classes need a different setup then I'll move them into their own test class. I put my tests into a separate source directory but keep them in the same project.
The more important part is refactoring the tests.
Yes I try and refactor my tests as I go. The goal is to remove duplication while still remaining declarative and easy to read. This is true both within test classes and across test classes. Within a test class I might have a parametrized method for creating a test fake (mock or stub). My test fakes are usually inner classes within a test class but if I find there's need I'll pull them out for reuse across tests. I'll also create a TestUtil class with common methods when it seems appropriate.
I think refactoring yours tests is important to long term success of unit testing on large projects. Have you ever heard people complaining about how their tests are too brittle or preventing them from change? You don't want to be in a position where changing the behavior of a class means making dozens or even hundreds of changes to your tests. And just like with code, you achieve this through refactoring and keeping the tests clean.
Tests are code.
A: I write a unit test class for each class in the application, and keep the test classes organized in the same package structure as the classes under test.
Inside each test class I don't really have much organizational structure. Each one only has a handful of methods for each public method in the class under test, so I've never had any problem finding what I'm looking for.
A: For every class in the software, I maintain a unit test class. The unit test classes follow the same package hierarchy as the classes which are tested.
I keep my unit test code in a separate project. Some people also prefer to keep their test code in the same project under a separate source directory called 'test'. You could follow whatever feels comfortable to you.
A: Divide your tests in 2 sets:
*
*functional tests
*unit tests
Functional tests are per-user story. Unit tests are per-class. The former check that you actually support the story, the latter exercise and document your functionality.
There is one directory (package) for functional tests. Unit tests should be closely bound with functionality they exercise (so they're scattered). You move them around and refactor them as you move & refactor your code around.
A: I try to look at the unit tests as a project on their own. As with any project the organisation should follow some internal logic. It does not however have to be specific or formally defined - anything you're comfortable with is OK as long as it keeps your project well-organised and clean.
So for the unit tests I usually either follow the main project code structure or (sometimes when the situation calls of it) focus on the functional areas instead.
Leaving them in one heap is, as you might imagine, messy and difficult to maintain.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26"
} |
Q: TSVNCache.exe is heating up my Mac I run windows in a VMWare partition. At times, TSVNCache.exe process starts doing some weird things (Seems like its doing an endless loop of I/O operations). Suddenly my whole VMWare session starts slowing down. My mac heats up badly. In the sense its freaking hot.
My question is what is this TSVNCache process anyway ?, seems like I can do pretty much everything on the SVN repository without it.
A: It's what produces the overlaid icons in Explorer that tell you whether files/directories are modified, conflicted etc or not.
There have been several fixes to it recently, make sure you have the latest version of TortoiseSVN. Performance will also improve if you minimize the set of things SVN has to check - tell it to ignore any temporary directories, object file directories etc.
A: It's the TortoiseSVN icon overlay process; disable icon overlays in the Tortoise settings for it to go away. If you want to see the icon overlays (the icons indicating whether a file is updated, modified, conflicted, etc. in Windows Explorer), you will have to kill the process when it hangs and press F5 or restart Windows Explorer, so that it restarts and shows the icon overlays correctly and updated.
A: The process looks for status changes in the background. By default it will watch all files on your system, so you should restrict it to the directories where you have checkouts.
Go into TortoiseSVN -> Settings -> Look and Feel -> Icon Overlays and set Exclude paths and Include paths.
Also, if you don't care about seeing deep status, then under "Status cache" change the setting from "Default" to "Shell".
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Extension functions and 'help' When I call
help(Mod.Cls.f)
(Mod is a C extension module), I get the output
Help on method_descriptor:
f(...)
doc_string
What do I need to do so that the help output is of the form
Help on method f in module Mod:
f(x, y, z)
doc_string
like it is for random.Random.shuffle, for example?
My PyMethodDef entry is currently:
{ "f", f, METH_VARARGS, "doc_string" }
A: You cannot. The inspect module, which is what 'pydoc' and 'help()' use, has no way of figuring out what the exact signature of a C function is. The best you can do is what the builtin functions do: include the signature in the first line of the docstring:
>>> help(range)
Help on built-in function range in module __builtin__:
range(...)
range([start,] stop[, step]) -> list of integers
...
The reason random.shuffle's docstring looks "correct" is that it isn't a C function. It's a function written in Python.
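A small Python sketch of both halves of this: `inspect` recovers the exact signature of a function written in Python, while a C function carries no such metadata, so the convention is to repeat the signature in the first line of the docstring (the function names here are illustrative):

```python
import inspect

def shuffle(x, random=None):
    """Shuffle list x in place, and return None."""

# For a function written in Python, inspect can recover the exact signature:
sig = str(inspect.signature(shuffle))
assert sig == "(x, random=None)"

# A C function exposes only (...), so builtins put the signature in the
# docstring's first line instead; help() then shows it on that line:
def f(*args):
    """f(x, y, z) -> result

    doc_string
    """

summary = f.__doc__.splitlines()[0]
assert summary == "f(x, y, z) -> result"
```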
A: Thomas's answer is right on, of course.
I would simply add that many C extension modules have a Python "wrapper" around them so that they can support standard function signatures and other dynamic-language features (such as the descriptor protocol).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How deep are your unit tests? The thing I've found about TDD is that its takes time to get your tests set up and being naturally lazy I always want to write as little code as possible. The first thing I seem do is test my constructor has set all the properties but is this overkill?
My question is to what level of granularity do you write you unit tests at?
..and is there a case of testing too much?
A: The classic answer is "test anything that could possibly break". I interpret that as meaning that testing setters and getters that don't do anything except set or get is probably too much testing, no need to take the time. Unless your IDE writes those for you, then you might as well.
If your constructor not setting properties could lead to errors later, then testing that they are set is not overkill.
A: I write tests to cover the assumptions of the classes I will write. The tests enforce the requirements. Essentially, if x can never be 3, for example, I'm going to ensure there is a test that covers that requirement.
Invariably, if I don't write a test to cover a condition, it'll crop up later during "human" testing. I'll certainly write one then, but I'd rather catch them early. I think the point is that testing is tedious (perhaps) but necessary. I write enough tests to be complete but no more than that.
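For instance, a requirement like "x can never be 3" maps directly onto a test. A minimal sketch (the class and the rule are made up for illustration):

```python
class Widget:
    """Hypothetical class whose requirement says x can never be 3."""
    def set_x(self, value):
        if value == 3:
            raise ValueError("x can never be 3")
        self._x = value

w = Widget()
w.set_x(5)
assert w._x == 5          # normal values are accepted

try:
    w.set_x(3)            # the forbidden value must be rejected
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```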
A: Part of the problem with skipping simple tests now is that, in the future, refactoring could make that simple property very complicated, with lots of logic. I think the best idea is that you can use tests to verify the requirements for the module. If when you pass X you should get Y back, then that's what you want to test. Then when you change the code later on, you can verify that X gives you Y, and you can add a test for "A gives you B" when that requirement is added later on.
I've found that the time I spend during initial development writing tests pays off in the first or second bug fix. The ability to pick up code you haven't looked at in 3 months and be reasonably sure your fix covers all the cases, and "probably" doesn't break anything is hugely valuable. You also will find that unit tests will help triage bugs well beyond the stack trace, etc. Seeing how individual pieces of the app work and fail gives huge insight into why they work or fail as a whole.
A: In most instances, I'd say, if there is logic there, test it. This includes constructors and properties, especially when more than one thing gets set in the property.
With respect to too much testing, it's debatable. Some would say that everything should be tested for robustness, others say that for efficient testing, only things that might break (i.e. logic) should be tested.
I'd lean more toward the second camp, just from personal experience, but if somebody did decide to test everything, I wouldn't say it was too much... a little overkill maybe for me, but not too much for them.
So, No - I would say there isn't such a thing as "too much" testing in the general sense, only for individuals.
A: Test Driven Development means that you stop coding when all your tests pass.
If you have no test for a property, then why should you implement it? If you do not test/define the expected behaviour in case of an "illegal" assignment, what should the property do?
Therefore I'm totally for testing every behaviour a class should exhibit. Including "primitive" properties.
To make this testing easier, I created a simple NUnit TestFixture that provides extension points for setting/getting the value and takes lists of valid and invalid values and has a single test to check whether the property works right. Testing a single property could look like this:
[TestFixture]
public class Test_MyObject_SomeProperty : PropertyTest<int>
{
private MyObject obj = null;
public override void SetUp() { obj = new MyObject(); }
public override void TearDown() { obj = null; }
public override int Get() { return obj.SomeProperty; }
public override void Set(int value) { obj.SomeProperty = value; }
public override IEnumerable<int> SomeValidValues() { return new List<int> { 1, 3, 5, 7 }; }
public override IEnumerable<int> SomeInvalidValues() { return new List<int> { 2, 4, 6 }; }
}
Using lambdas and attributes this might even be written more compactly. I gather MBUnit even has some native support for things like that. The point, though, is that the above code captures the intent of the property.
P.S.: Probably the PropertyTest should also have a way of checking that other properties on the object didn't change. Hmm .. back to the drawing board.
A: I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don't typically make a kind of mistake (like setting the wrong variables in a constructor), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.
Different people will have different testing strategies based on this philosophy, but that seems reasonable to me given the immature state of understanding of how tests can best fit into the inner loop of coding. Ten or twenty years from now we'll likely have a more universal theory of which tests to write, which tests not to write, and how to tell the difference. In the meantime, experimentation seems in order.
A: Write unit tests for things you expect to break, and for edge cases. After that, test cases should be added as bug reports come in - before writing the fix for the bug. The developer can then be confident that:
*
*The bug is fixed;
*The bug won't reappear.
Per the comment attached - I guess this approach to writing unit tests could cause problems, if lots of bugs are, over time, discovered in a given class. This is probably where discretion is helpful - adding unit tests only for bugs that are likely to re-occur, or where their re-occurrence would cause serious problems. I've found that a measure of integration testing in unit tests can be helpful in these scenarios - testing code higher up codepaths can cover the codepaths lower down.
A:
Everything should be made as simple as
possible, but not simpler. - A. Einstein
One of the most misunderstood things about TDD is the first word in it. Test. That's why BDD came along. Because people didn't really understand that the first D was the important one, namely Driven. We all tend to think a little bit too much about the testing, and a little bit too little about the driving of design. And I guess that this is a vague answer to your question, but you should probably consider how to drive your code, instead of what you actually are testing; that is something a coverage tool can help you with. Design is a much bigger and more problematic issue.
A: To those who propose testing "everything": realise that "fully testing" a method like int square(int x) requires about 4 billion test cases in common languages and typical environments.
In fact, it's even worse than that: a method void setX(int newX) is also obliged not to alter the values of any other members besides x -- are you testing that obj.y, obj.z, etc. all remain unchanged after calling obj.setX(42);?
It's only practical to test a subset of "everything." Once you accept this, it becomes more palatable to consider not testing incredibly basic behaviour. Every programmer has a probability distribution of bug locations; the smart approach is to focus your energy on testing regions where you estimate the bug probability to be high.
A: I write unit tests to reach the maximum feasible coverage. If I cannot reach some code, I refactor until the coverage is as full as possible.
After I've finished writing tests blindly this way, I usually write one test case reproducing each bug.
I separate code testing from integration testing. During integration testing (which also uses unit tests, but on groups of components, so not exactly what unit tests are for) I'll test that the requirements are implemented correctly.
A: So the more I drive my programming by writing tests, the less I worry about the level of granularity of the testing. Looking back, it seems I am doing the simplest thing possible to achieve my goal of validating behaviour. This means I am generating a layer of confidence that my code is doing what I ask it to do; however, this is not an absolute guarantee that my code is bug free. I feel that the correct balance is to test standard behaviour and maybe an edge case or two, then move on to the next part of my design.
I accept that this will not cover all bugs and use other traditional testing methods to capture these.
A: Generally, I start small, with inputs and outputs that I know must work. Then, as I fix bugs, I add more tests to ensure the things I've fixed are tested. It's organic, and works well for me.
Can you test too much? Probably, but it's probably better to err on the side of caution in general, though it'll depend on how mission-critical your application is.
A: I think you must test everything in the "core" of your business logic. Getters and setters too, because they could accept a negative or null value that you might not want to accept. If you have time (that always depends on your boss), it's good to test the rest of the business logic and all the controllers that call these objects (you slowly go from unit tests to integration tests).
A: I don't unit test simple setter/getter methods that have no side effects. But I do unit test every other public method. I try to create tests for all the boundary conditions in my algorithms and check the coverage of my unit tests.
It's a lot of work, but I think it's worth it. I would rather write code (even testing code) than step through code in a debugger. I find the code-build-deploy-debug cycle very time consuming, and the more exhaustive the unit tests I have integrated into my build, the less time I spend going through that code-build-deploy-debug cycle.
You didn't say which architecture you are coding to. But for Java I use Maven 2, JUnit, DbUnit, Cobertura, and EasyMock.
A: The more I read about it the more I think some unit tests are just like some patterns: A smell of insufficient languages.
When you need to test whether your trivial getter actually returns the right value, it is because you may intermix the getter name and the member variable name. Enter Ruby's 'attr_reader :name', and this can't happen any more. That's just not possible in Java.
If your getter ever gets nontrivial you can still add a test for it then.
A: Test the source code that you are worried about.
It is not useful to test portions of code that you are very, very confident in, as long as you don't make mistakes in them.
Test bugfixes, so that it is the first and last time you fix a bug.
Test to get confidence of obscure code portions, so that you create knowledge.
Test before heavy and medium refactoring, so that you don't break existing features.
A: This answer is more for figuring out how many unit tests to use for a given method you know you want to unit test due to its criticality/importance. Using Basis Path Testing technique by McCabe, you could do the following to quantitatively have better code coverage confidence than simple "statement coverage" or "branch coverage":
*
*Determine Cyclomatic Complexity value of your method that you want to unit test (Visual Studio 2010 Ultimate for example can calculate this for you with static analysis tools; otherwise, you can calculate it by hand via flowgraph method - http://users.csc.calpoly.edu/~jdalbey/206/Lectures/BasisPathTutorial/index.html)
*List the basis set of independent paths that flow thru your method - see link above for flowgraph example
*Prepare unit tests for each independent basis path determined in step 2
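As an illustrative sketch (not taken from the tutorial above): a function with two decision points has a cyclomatic complexity of 3, so step 3 yields three unit tests, one per independent basis path.

```python
def classify(x):
    """Two decision points + 1 => cyclomatic complexity of 3."""
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"

# one unit test per independent basis path:
assert classify(-5) == "negative"
assert classify(0) == "zero"
assert classify(7) == "positive"
```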
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "88"
} |
Q: Can someone give me a high overview of how lucene.net works? I have an MS SQL database and have a varchar field that I would like to do queries like where name like '%searchTerm%'. But right now it is too slow, even with SQL Server Enterprise's full-text indexing.
Can someone explain how Lucene .Net might help my situation? How does the indexer work? How do queries work?
What is done for me, and what do I have to do?
A: I saw this guy (Michael Neel) present on Lucene at a user group meeting - effectively, you build index files (using Lucene) and they have pointers to whatever you want (database rows, whatever)
http://code.google.com/p/vinull/source/browse/#svn/Examples/LuceneSearch
Very fast, flexible and powerful.
What's good with Lucene is the ability to index a variety of things (files, images, database rows) together in your own index using Lucene and then translating that back to your business domain, whereas with SQL Server, it all has to be in SQL to be indexed.
It doesn't look like his slides are up there in Google code.
A: This article (strangely enough it's at the top of the Google search results :) has a fairly good description of how the Lucene search could be optimised.
Properly configured Lucene should easily beat SQL (pre-2005) full-text indexing search. If you are on MS SQL 2005 and your search performance is still too slow, you might consider checking your DB setup.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Programmatically creating Excel 2007 Sheets I'm trying to create Excel 2007 Documents programmatically. Now, there are two ways I've found:
*
*Manually creating the XML, as outlined in this post
*Using a Third Party Library like ExcelPackage.
Currently, I use ExcelPackage, which has some really serious drawbacks and issues. As I do not need to create overly complex Excel sheets (the most "complicated" thing is that I explicitly need to set a cell type to numeric or text), I'm looking towards Option 1 next, but I just wonder if there are any other good and supported ways to generate Excel 2007 sheets? Bonus points if they can be created without having to save them to the hard drive, i.e. generate and directly output them into a stream.
.net 3.0 is the target here, no 3.5 goodness :(
Edit: Thanks so far. The XML SDK is indeed 3.5 only, but Russian Roulette... erm... COM Interop is also something I want to avoid whenever possible, especially since Excel 2007 has a somewhat complicated but still rather easy-to-create document format. I'll have a look at the various links and hints posted.
A: You could try using the Office Open XML SDK. This will allow you to create Excel files in memory using say a MemoryStream and much more easily than generating all the XML by hand.
As Brian Kim pointed out, version 2.0 of the SDK requires .NET 3.5 which you stated wasn't available. Version 1 of the SDK is also available which supports .NET 3.0. It isn't as complete, but will at least help you manage the XML parts of the document and the relationships between them.
A: I'm just generating an HTML table, which can be opened by Excel:
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Wnioski</title>
<style type="text/css">
<!--
br { mso-data-placement: same-cell; }
td { mso-number-format: \@; mso-displayed-decimal-separator: "."; }
-->
</style>
<table>
<tr style="height:15.0pt">
<td style="mso-number-format: general">474</td>
<td>474</td>
<tr>
<td>data2</td>
<td>data3</td>
<tr>
<td style="mso-number-format: 'yyyy-mm-dd'">2008-10-01</td>
<td style="mso-number-format: standard">0.00</td>
<tr>
<td style="mso-number-format: general">line1<br>line2<br>line3</td>
<td></td>
</table>
This works well also as a data source for serial letters in Word, works with OpenOffice calc etc. And it is dead simple to generate.
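A minimal Python sketch of generating such a file (the helper name is made up; the `mso-number-format: \@` style is what makes Excel treat cells as text):

```python
def excel_html_table(rows):
    """Render rows as an HTML table that Excel will open; the style block
    defaults every cell to text via mso-number-format."""
    body = "\n".join(
        "<tr>" + "".join(f"<td>{cell}</td>" for cell in row) + "</tr>"
        for row in rows
    )
    return (
        "<style>td { mso-number-format: \\@; }</style>\n"
        "<table>\n" + body + "\n</table>"
    )

html = excel_html_table([["474", "474"], ["data2", "data3"]])
assert "<td>474</td>" in html
```

Serve it with a spreadsheet MIME type and a .xls filename, and Excel opens it directly.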
A: I was going to comment on BKimmel's post, but apparently I don't have enough SO cool points yet.
If Excel is already open, you can get that annoying "personal.xls is already open" message. I try getting the app first. If that fails, then create it:
On Error Resume Next
Set myxlApp = GetObject(, "Excel.Application")
If Err.Number <> 0 Then
Err.Clear
Set myxlApp = CreateObject("Excel.Application")
If Err.Number <> 0 Then
' return False, show message, etc.
End If
End If
A: I did most of the Open XML (xlsx) work in SpreadsheetGear for .NET and it is far from a trivial exercise. I recommend using the Office Open XML SDK or another 3rd party library like SpreadsheetGear or one of our competitors (just Google "SpreadsheetGear" to see who our competitors are).
SpreadsheetGear for .NET would seem to meet your needs:
*
*Works with .NET 2.0, .NET 3.0 and .NET 3.5.
*Supports writing to a stream with IWorkbook.SaveToStream or writing to a byte array with IWorkbook.SaveToMemory.
*Supports Excel 97-2003 (xls) workbooks which is likely to be the next thing you are asked for if you need to support very many users.
You can download a free, fully-functional evaluation here.
A: I used to generate Wordprocessing Markup documents using a library of functions for creating the correct XML for tables and paragraphs.
I then just changed the MIME type for the Response header to be a word document and sent the XML as the response to the client. It either opens in their browser or saves as a file.
There doesn't seem to be the same 2003 SDK for Excel, although there is the MS Open XML SDK.
A: Excel is very scriptable through COM objects. I haven't tried it, but this should be a solid solution, though neither fast nor lightweight.
Whatever you end up doing, check if your solution supports Unicode. I've run into some serious problems using existing solutions for writing xls files (that is, pre 2007 files).
A: You can use the Primary Interop Assemblies (the Office API) to programmatically generate Excel docs -- I wouldn't want to do anything overly complex using them, but it doesn't sound like that is your requirement.
This approach should work for generating the docs in memory as well.
Here are some resources:
*
*Installing the Primary Interop Assemblies
*How to create new Excel workbooks (from MSDN)
A: Every code-snob alive hates vbscript, but it's simple and it works:
Set myxlapp = CreateObject("Excel.Application")
Set mywb = myxlapp.Workbooks.Add
myxlapp.Visible = true
Put that in a text file, append a ".vbs" at the end and presto.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Random MoveFileEx failures on Vista I noticed that writing to a file, closing it and moving it to destination place randomly fails on Vista. Specifically, MoveFileEx() would return ERROR_ACCESS_DENIED for no apparent reason. This happens on Vista SP1 at least (32 bit). Does not happen on XP SP3.
Found this thread on the internets about exactly the same problem, with no real solutions. So far it looks like the error is caused by Vista's search indexer, see below.
The code example given there is enough to reproduce the problem. I'm pasting it here as well:
#include <windows.h>
#include <stdlib.h>
#include <stdio.h>
bool test() {
unsigned char buf[] = {
0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88, 0x99,
0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88, 0x99
};
HANDLE h;
DWORD nbytes;
LPCTSTR fn_tmp = "aaa";
LPCTSTR fn = "bbb";
h = CreateFile(fn_tmp, GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, 0, OPEN_ALWAYS, 0, 0);
if (h == INVALID_HANDLE_VALUE) return 0;
if (!WriteFile(h, buf, sizeof buf, &nbytes, 0)) goto error;
if (!FlushFileBuffers(h)) goto error;
if (!CloseHandle(h)) goto error;
if (!MoveFileEx(fn_tmp, fn, MOVEFILE_REPLACE_EXISTING | MOVEFILE_COPY_ALLOWED | MOVEFILE_WRITE_THROUGH)) {
printf("error=%d\n", GetLastError());
return 0;
}
return 1;
error:
CloseHandle(h);
return 0;
}
int main(int argc, char** argv) {
unsigned int i;
for (i = 0;; ++i) {
printf("*%u\n", i);
if (!test()) return 1;
}
return 0;
}
Build this as a console app with Visual Studio. The correct behaviour would be an infinite loop that prints test numbers. On Vista SP1, the program exits after a random number of iterations (usually before 100 iterations are made).
This does not happen on Windows XP SP2. There's no antivirus running at all; and no other strange background processes (machine is pretty much vanilla OS install + Visual Studio).
Edit: Digging further via Process Monitor (thanks @sixlettervariables), I can't see anything particularly bad. Each test iteration results in 176 disk operations, majority of them coming from SearchProtocolHost.exe (search indexer). If search indexing service is stopped, no errors occur, so it looks like it's the culprit.
At the time of failure (when the app gets ERROR_ACCESS_DENIED), SearchProtocolHost.exe has two CreateFile(s) to the destination file (bbb) open with read/write/delete share modes, so it should be ok. One of the opens is followed by an opportunistic lock (FSCTL_REQUEST_FILTER_OPLOCK); maybe that's the cause?
Anyway, I found out that I can avoid the problem by setting FILE_ATTRIBUTE_TEMPORARY and FILE_ATTRIBUTE_NOT_CONTENT_INDEXED flags on the file. It looks like FILE_ATTRIBUTE_NOT_CONTENT_INDEXED is enough by itself, but marking file as temporary also dramatically cuts down disk operations caused by search indexer.
But this is not a real solution. I mean, if an application can't expect to be able to create a file and rename it because some Vista's search indexer is messing with it, it's totally crazy! Should it keep retrying? Yell at the user (which is very undesirable)? Do something else?
A: I suggest you use Process Monitor (edit: the artist formerly known as FileMon) to watch and see which application exactly is getting in the way. It can show you the entire trace of file system calls made on your machine.
(edit: thanks to @moocha for the change in application)
A: I'd say it's either your anti-virus or Windows indexing messing with the file at the same moment. Can you run the same test without an anti-virus? Then run it again, making sure the temp file is created somewhere not indexed by Windows Search?
A: That usually means something else has an open handle on the file in question, maybe an active virus scanner running? Have you tried running something like Process Monitor from the Sysinternals site? You should be able to filter all file operations and get a better picture of what's going on underneath the hood.
A: Windows has a special location for storing application files and I don't think its indexed (at least not by default). In Vista the path is:
C:\Users\user name\AppData
I suggest you put your files there if it is appropriate for your application.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Manage Scrum Software What software do you use to manage Scrum software development ?
We've tried Tackle and VersionOne (both free) so far and they are good except for the fact that it's difficult to track work in progress. For example, if I have a task that I estimate will take me 8 hours to complete, I've done 4 hours of work with 4 hours remaining, the task is always reported as 8 hours remaining until it is marked complete, at which time it falls to zero.
I'd like to use a tool that will allow me to take an accurate work at the teams WIP at the end of each week and see how much impact that work has had towards a deadline along with completed tasks.
Thanks for your input!
A: I recommend a whiteboard and Excel spreadsheets. The whiteboard has story cards (index cards), where the work in progress is tracked. The story card starts out with, say, 8 hours, and as the work progresses you decrement the number on the card. At the end of the day, put the numbers from the cards into a spreadsheet.
The whiteboard is visible all the time, and gives the whole team visibility on how the work is progressing.
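The spreadsheet side of this is trivially simple: the burndown is just the day-to-day drop in the remaining estimates. A sketch with made-up numbers:

```python
# Remaining hours for one 8-hour task, one entry per day, as copied
# from the story card into the spreadsheet each evening:
remaining = [8, 6, 4, 1, 0]

# Hours burned each day is the difference between consecutive entries:
burned_per_day = [remaining[i] - remaining[i + 1]
                  for i in range(len(remaining) - 1)]
assert burned_per_day == [2, 2, 3, 1]

# Work completed so far, which is what the original question wants to see:
done_so_far = remaining[0] - remaining[-1]
assert done_so_far == 8
```

Tracking remaining hours (rather than hours spent) is what makes the "4 hours done, 4 remaining" case from the question report correctly.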
A: This question was asked recently.
Everything from Excel to VersionOne to Scrumworks to BaseCamp was mentioned.
Personally, though, we use a heavily customized Excel sheet, whiteboards, index cards in a variety of colors and a large corkboard.
You also might want to check out Mingle. It's a tool developed by ThoughtWorks, a company that only does Agile.
A: We've looked at most of the tools out there and ended up with Scrumwise. We've been using it for a while now, and it's incredibly easy to use, and does what we need. It uses the remaining time on each task to compute the burndown etc.
A: I note that no one has pointed out the misunderstanding of WIP (work in progress).
In agile, “it is not done until it is done”.
While most people see work done as a good thing, it is not. WIP represents investment that can not yet be realised. This is an important part of Agile, but made more explicit in Lean/Kanban.
If you track work done you will encourage developers to work on several things at once, getting everything to “80%” complete. At the end of the project you will spend 4 times more (80% of your time) in “bug fixing”, doing the last 20%. You will look like you are ahead of schedule, but you will over-run.
Also after one sprint, if work packages are small (they will be if you are doing scrum), then the error from not adding part done work to work done is insignificant.
Therefore: Track WIP separately from work done and try to keep it low.
Compromise
As a compromise, you can track part completed with the following rules:
*
*Only track one task per developer. (probably the biggest)
*Add a cap, maybe 1/2 sprint.
*Discount the rate, maybe 50%, if it is 80% done then report 40%. (tweak the discount rate when you have evidence but don't let it be as high as 100%)
A: OnTime recently added support for Scrum management (disclaimer: I work for them and helped build the product). :) We also put together an intro video for Scrum if you need more information: Video.
A: I've been working on an open source web based tool that you can install on site or use our hosted version. We've got sub-task tracking and a real time planning poker feature.
http://www.scrumdo.com/
A: I was looking up SCRUM software and found this old topic - just my two cents ....
I worked on a project in a healthcare domain for about a year and we used Version One. I am sorry to say, it was perhaps the most despised tool in the project. The testers especially, loathed it. Neither did we developers like it as it was pretty clunky/slow and generally pretty lethargic. We always had excellent customer Support from V1 but the tool just didn't cut it for us.
I am now working in a different project and we are using www.scrumwise.com - and so far so good....
A: VersionOne does let you change the estimates as you go - the burndown report wouldn't work otherwise. You may be hiding the estimate column or have it set to read-only - click the spanner on the right to list available columns and make sure that the estimate/ToDo column is editable.
We've found it to be rather good, though their odd insistence on customised controls breaks in Chrome.
A: I would suggest checking out OnTime's Planning Board, because using Excel and an actual whiteboard takes away from actual development time when you can automate the process with software.
A: I answered a similar question at https://stackoverflow.com/a/16667842/1810290 and I thought of sharing it here as well.
If you are looking for an online scrum tool, then you could have a look at Flying Donut. It is a new online product, and I've used it in my projects with great deal of success. There is a nice way of organizing your backlog, and its GUI is clean with quick response times. It provides different iteration views for planning, execution, and review.
Disclaimer: I have been using it for many months, since I helped building it.
A: We have used XPlanner. It's simple, but does its job pretty well. Developers especially get a nice overview of their current status.
A: ScrumWorks is nice for small teams. And free, too, for the basic version. We have about 30 developers with multiple projects/iterations/etc. Some basic burn-down charts, good for "yesterday's weather", etc.
Check it out at:
http://danube.com/scrumworks/basic
A: I think RallyDev might be worth checking out for you. Unless I'm mistaken the way that it tracks time will not cause the issue that you mentioned above.
We have been using it on our project for several months and have grown with it to the point where the team enjoys using it.
A: We use Scrum for Team System which is excellent, but you do need to be using Visual Studio Team System to get it!
A: I've used this Index card generator, but I see now that there is a newer version link that only uses Excel
I also like their Planning Poker when trying to get estimates.
A: Just saw this, maybe in a another stackO q/a, https://scrumy.com/demo
A: Check out Scrum Pig at http://www.bellacode.com. It is a great Windows tool for teams collaborating together when using Scrum.
A: We use RallyDev as our scrum management system, and I have found it very handy.
A: No-one has mentioned JIRA, is that a cost/OpenSource thing?
I've used JIRA for the last 3 years and have found it to be an excellent tool.
A: We also use a physical board with a google spreadsheet that serves as an online replica. It works really well and doesn't really add any overhead if everyone gets in the habit of maintaining both. I've blogged about it and included a sample spreadsheet:
http://www.littlebluemonkey.com/blog/online-scrum-tools-part-1-the-scrum-board/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Problem with NTFRS - Missing Sysvol and Netlogon on a 2003 Server This is a problem that is affecting me. Although this description wasn't written by me (seen at www.experts-exchange.com (1), one of the kind of sites stackoverflow was created because of), it applies 95% to the problem I'm having. Next time I won't follow MS advice: the domain was working until I tried the suggested solution in the event log ("Enable Journal Wrap Automatic Restore"). What a mistake it was!!!
Description (as seen in (1)) follows:
(1)
http://www.experts-exchange.com/OS/Microsoft%5fOperating%5fSystems/Server/Windows%5f2003%5fActive%5fDirectory/Q%5f22871376.html
---------- Cut starts here ----------
I am having some apparently serious problems with a 2003 SBS server.
We have just started the process of putting in a second Domain Controller into this network for a project.
I performed the same task in a lab environment before I started on the live environment, and had no problems.
Firstly, we upgraded the SBS (tisserver) to 2003 R2. After that, I did the adprep to update the schema to R2.
I built up the 2003 R2 server (tisdr), installed DNS, joined it to the domain and did a DCPROMO.
This all worked fine, but I discovered errors in the File Replicaction Service event log on the new server:
Event ID 13508 - Source NtFRS
The File Replication Service is having trouble enabling replication from tisserver.TIS.local to TISDR for c:\windows\sysvol\domain using the DNS name tisserver.TIS.local. FRS will keep retrying.
Following are some of the reasons you would see this warning.
[1] FRS can not correctly resolve the DNS name tisserver.TIS.local from this computer.
[2] FRS is not running on tisserver.TIS.local.
[3] The topology information in the Active Directory for this replica has not yet replicated to all the Domain Controllers.
This event log message will appear once per connection, After the problem is fixed you will see another event log message indicating that the connection has been established.
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
When I went and checked the SBS, I found the following error had been occurring:
Eventid ID 13568 - Source NtFrs
The File Replication Service has detected that the replica set "DOMAIN SYSTEM VOLUME (SYSVOL SHARE)" is in JRNL_WRAP_ERROR.
Replica set name is : "DOMAIN SYSTEM VOLUME (SYSVOL SHARE)"
Replica root path is : "c:\windows\sysvol\domain"
Replica root volume is : "\\.\C:"
A Replica set hits JRNL_WRAP_ERROR when the record that it is trying to read from the NTFS USN journal is not found. This can occur because of one of the following reasons.
[1] Volume "\\.\C:" has been formatted.
[2] The NTFS USN journal on volume "\\.\C:" has been deleted.
[3] The NTFS USN journal on volume "\\.\C:" has been truncated. Chkdsk can truncate the journal if it finds corrupt entries at the end of the journal.
[4] File Replication Service was not running on this computer for a long time.
[5] File Replication Service could not keep up with the rate of Disk IO activity on "\\.\C:".
Setting the "Enable Journal Wrap Automatic Restore" registry parameter to 1 will cause the following recovery steps to be taken to automatically recover from this error state.
[1] At the first poll, which will occur in 5 minutes, this computer will be deleted from the replica set. If you do not want to wait 5 minutes, then run "net stop ntfrs" followed by "net start ntfrs" to restart the File Replication Service.
[2] At the poll following the deletion this computer will be re-added to the replica set. The re-addition will trigger a full tree sync for the replica set.
WARNING: During the recovery process data in the replica tree may be unavailable. You should reset the registry parameter described above to 0 to prevent automatic recovery from making the data unexpectedly unavailable if this error condition occurs again.
To change this registry parameter, run regedit.
Click on Start, Run and type regedit.
Expand HKEY_LOCAL_MACHINE.
Click down the key path:
"System\CurrentControlSet\Services\NtFrs\Parameters"
Double click on the value name
"Enable Journal Wrap Automatic Restore"
and update the value.
If the value name is not present you may add it with the New->DWORD Value function under the Edit Menu item. Type the value name exactly as shown above.
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
After doing a bit of reading, it seemed like the right thing to do was a non-authoritative resotre, so I went through and created the registry key, then stopped and started the NTFRS service.
As expected, I got:
EventID 13560 - Source NtFRS
The File Replication Service is deleting this computer from the replica set "DOMAIN SYSTEM VOLUME (SYSVOL SHARE)" as an attempt to recover from the error state,
Error status = FrsErrorSuccess
At the next poll, which will occur in 5 minutes, this computer will be re-added to the replica set. The re-addition will trigger a full tree sync for the replica set.
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
Exactly five minutes later, I got:
EventID 13520 - Source NtFRS
The File Replication Service moved the preexisting files in c:\windows\sysvol\domain to c:\windows\sysvol\domain\NtFrs_PreExisting___See_EventLog.
The File Replication Service may delete the files in c:\windows\sysvol\domain\NtFrs_PreExisting___See_EventLog at any time. Files can be saved from deletion by copying them out of c:\windows\sysvol\domain\NtFrs_PreExisting___See_EventLog. Copying the files into c:\windows\sysvol\domain may lead to name conflicts if the files already exist on some other replicating partner.
In some cases, the File Replication Service may copy a file from c:\windows\sysvol\domain\NtFrs_PreExisting___See_EventLog into c:\windows\sysvol\domain instead of replicating the file from some other replicating partner.
Space can be recovered at any time by deleting the files in c:\windows\sysvol\domain\NtFrs_PreExisting___See_EventLog.
For more information, see Help and Support Center at
&
EventID 13553 - Source NtFRS
The File Replication Service successfully added this computer to the following replica set:
"DOMAIN SYSTEM VOLUME (SYSVOL SHARE)"
Information related to this event is shown below:
Computer DNS name is "tisserver.TIS.local"
Replica set member name is "TISSERVER"
Replica set root path is "c:\windows\sysvol\domain"
Replica staging directory path is "c:\windows\sysvol\staging\domain"
Replica working directory path is "c:\windows\ntfrs\jet"
For more information, see Help and Support Center at
---------- Cut ends here ----------
From this point on the responses I got start to drift from the original poster:
EventID 13566 - Source NtFRS
File Replication Service is scanning the data in the system volume. Computer DOMSERVER cannot become a domain controller until this process is complete. The system volume will then be shared as SYSVOL.
To check for the SYSVOL share, at the command prompt, type:
net share
When File Replication Service completes the scanning process, the SYSVOL share will appear.
The initialization of the system volume can take some time. The time is dependent on the amount of data in the system volume.
For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
I have left it for about an hour and a half now, and am not seeing any sign of a sysvol or netlogon share yet. The users are unable to log on. I don't know where to go from here. I'm in such a desperate state that, if I had the money, I sure would pay experts-exchange (and the bad guys would win, I know :( ). Unfortunately, I can't do that for many reasons (not having a credit card is one of them).
Your help would be greatly appreciated!
PS: sorry for my not so good English. It's not my mother tongue. Next time I will be better at it. :)
A: "Using the BurFlags registry key to reinitialize File Replication Service replica sets" (http://support.microsoft.com/kb/290762) did the trick!!!! Yessss!!! ufff!!! I feel like flying right now!!!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to cast a number to a byte? In C and C++ you can tell the compiler that a number is a 'long' by putting an 'l' at the end of the number.
e.g. long x = 0l;
How can I tell the C# compiler that a number is a byte?
A: According to the C# language specification there is no way to specify a byte literal. You'll have to cast down to byte in order to get a byte. Your best bet is probably to specify in hex and cast down, like this:
byte b = (byte) 0x10;
A: Remember, if you do:
byte b = (byte)300;
it's not going to work the way you expect.
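What the cast actually does with an out-of-range runtime value is keep only the low 8 bits, so 300 becomes 44 (for a literal constant, the C# compiler will typically reject the cast outright unless it is wrapped in unchecked). The arithmetic, if not the C# syntax, can be checked in Python:

```python
# C#'s explicit (byte) cast on a runtime int keeps only the low 8 bits,
# exactly like masking with 0xFF.
def to_byte(value):
    return value & 0xFF

assert to_byte(300) == 44    # 300 = 0x12C -> low byte 0x2C = 44
assert to_byte(123) == 123   # in-range values are unchanged
```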
A: byte b = (byte) 123;
even though
byte b = 123;
does the same thing. If you have a variable:
int a = 42;
byte b = (byte) a;
A: MSDN uses implicit conversion. I don't see a byte type suffix, but you might use an explicit cast. I'd just use a 2-digit hexadecimal integer (int) constant.
A: No need to tell the compiler. You can assign any valid value to the byte variable and the compiler is just fine with it: there's no suffix for byte.
If you want to store a byte in an object you have to cast:
object someValue = (byte) 123;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: What to learn first? I went to school for programming years ago and when I got out I found a job in system administration and that is the direction my career took. I'd like to get back into development of some sort and have been 'playing' with C# and ASP.NET, but I've been hearing lots of buzz for other 'new' languages (by new I mean that they are new to me) like Ruby and F#. I guess I'm wondering if I'm wasting my time with learning largely MS languages instead of being more of a generalist. Having not been apart of the development community for a long time (if ever I was) has me floundering with trends and I'd like not to be left behind the times.
Any thoughts on whether it's better to follow the "latest" languages or stick with more tried-and-true technologies?
A: The language you choose is not important. When you understand the concepts you will most likely be able to pick up a new language pretty fast.
A: see http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html for a complete index of language popularity
A: C.
Seriously, learn C.
If you don't run screaming for the hills pulling your hair out then you're cut out to be a developer.
Note that I'm not saying that people who don't know C aren't developers (Jeff, the founder of this site, doesn't know C and he's doing just fine) but C will introduce you to a lot of the less glamorous and sugar coated aspects of development.
As a second choice, pick C#.
A: You should learn at least 1 compiled language (like C# or Java) and 1 Script Language (Python, Ruby, etc). This is usually enough to help most developers succeed at what they do, regardless of the age of the language.
As for new vs old, I'd stick with C# for now as it's pretty popular. Learning a new language wouldn't be too bad though.
A: C# is my language of choice, Java and C# are similar enough, I don't think it's a big deal to learn Java once the c# fundamentals are understood... but c++ is another beast altogether.
I think c++ is one of the better general tools and will be easier to tackle once c# is understood well (It has a LOT of documentation and help forums). The experience in c++ isn't limited to Microsoft, though - most popular platforms will run c++, so with this experience, you won't be limited to windows. It's also good because it's not as candy-coated as c# or Java and not as gritty as pure c, and it can interop fairly easily with c# (which is one reason a transition is easier)
So c# is a good choice, and imho followed closely by c++
A: I have to agree with many of the above: the language isn't important. Largely, the language just matters for the following:
*
*Features. If you need multiple inheritance, you'd better go with C++. If, like 90+% of developers, you don't need anything that's specific to one (or one small subset of) languages, this doesn't matter.
*Syntax. Do you hate whitespace? Go with C#. Hate curly braces? LISP is your friend. Don't care one way or another? This doesn't matter.
*Compiled or Interpreted? This matters. Go with compiled (or partially compiled, like .NET) and it'll be faster...but the speed gap is closing.
*Local job opportunities. Sure, you may be a whiz in C#...but if everyone near you who's hiring is looking for PERL programmers, it won't do you any good.
*Community support. If your language hasn't been used seriously in 20 years (or ever), don't expect much of a lifeline on Google. QBASIC, I'm looking at you. StackOverflow will be here though...
In the end, we can discuss things until we're all blue in the face. Pick a language with a featureset you like, with syntax that won't drive you bonkers, a decent community, and hopefully job opportunities in your area.
As to new or old...both are good. The newer languages MAY tend to be easier to pickup, but there's more widespread documentation and use of the older languages, though that may be phasing out.
A: This is a bit of a loaded question, but you'll find that folks here are passionate about their tool choice while believing in flexibility that a choice provides.
That said, if you don't mind "vendor lockin," the Microsoft stack is an excellent way to get into programming and find gainful employment for years to come. Microsoft is both "tried and true" and "latest." The Microsoft stack is traditionally geared toward building business applications, but you're not limited to that (ASP.NET MVC, for example, was used to build this site).
I don't know a whole lot about the world of Ruby on Rails and such, so I'll defer to a more knowledgable person.
My real advice is to go with what you like. Learn C# and F# if that's what you're into.
A: I'd say learn what is easiest, and grab hold of the fundamental concepts behind them. Syntax is an easy hurdle to get over, the difference between languages are a little more tricky.
However, since C# seems to have a wide base of help on the net and here on SO, I would start there, and learn about the ins and outs of Object Oriented programming. Then, ideally, you should be able to switch to most any other OO language you need at the time (like ruby, Java, Obj-C, or even the dreaded C++) jk, C++ people.
A language like F#, while popular, is quite a bit different than C#, as it's functional. If you are used to writing functional style code, F# may be a good place to look. But, even then, learn what it's like to write functional code, and grasp the fundamentals of the language.
A: Well, if you keep up to date on the latest languages, you will always be employable by companies that are looking to increase their marketing buzzword count. Not saying that this is the only use for the languages, but it is definitely a use.
On the other hand, there is also always a market for older technologies with either companies that want to build on top of more reliable and tested older tech and to maintain said older technology.
I guess it depends on how you want to progress. If you go for trends you may find you never have time to really learn a technology inside out, but if you go for older languages, you may find that the more interesting projects are being launched with newer stuff you aren't familiar with and you are left with maintenance. It's all about the trade off.
A: My own take would be to think about what kinds of work do you want to be doing and what kinds of requirements for those jobs would be helpful to have. For example, do you know about design patterns? How about modelling out a system using classes and inheritance? If you want to get into Rich Internet Applications there is AJAX and other Javascript elements to learn as well as HTML and its various offshoots like XHTML and DHTML.
I don't think there is anything wrong with knowing just the Microsoft technologies, if they didn't bump into other ones that may be needed should you want to apply for a developer job and they require Javascript or CSS that you don't have.
Another point is to be aware of changes to tools over time, as there will likely be more changes to come as the Web evolves some more; e.g. Visual Studio 2010 may bring a lot more changes than the current Visual Studio 2008, judging by some of the recent announcements around jQuery and the MVC framework.
A: I have used C#, Ruby, Perl, JavaScript and PHP professionally over the last year and they've all been useful in different ways. I would suggest that if you want an easy way to get caught up with the basics of programming Ruby is a great language to do that with- it has simple, easy to follow syntax and it makes it very easy to think in an Object-Oriented way, something that can be harder with a semi-OO language like C# or Java.
C# is well worth learning because it's useful to know a language in the C family - the fundamental idioms are so common that it is well worth using them - but I would avoid C or C++ simply because I don't really see the need to manage one's own memory - it makes life very hard, introduces a lot of unnecessary bugs and confers few benefits until you are really excellent at it. Get good at something that handles memory management first and then you can go on to the tougher stuff should you need to.
I would avoid functional languages like F# to start with. They are quite hard and quite different although by all accounts understanding them makes you a better programmer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: File name corruption on file download (IE) I have implemented a simple file upload-download mechanism. When a user clicks a file name, the file is downloaded with these HTTP headers:
HTTP/1.1 200 OK
Date: Tue, 30 Sep 2008 14:00:39 GMT
Server: Microsoft-IIS/6.0
Content-Disposition: attachment; filename=filename.doc;
Content-Type: application/octet-stream
Content-Length: 10754
I also support Japanese file names. In order to do that, I encode the file name with this java method:
private String encodeFileName(String name) throws Exception{
String agent = request.getHeader("USER-AGENT");
if(agent != null && agent.indexOf("MSIE") != -1){ // is IE
StringBuffer res = new StringBuffer();
char[] chArr = name.toCharArray();
for(int j = 0; j < chArr.length; j++){
if(chArr[j] < 128){ // plain ASCII char
if (chArr[j] == '.' && j != name.lastIndexOf("."))
res.append("%2E");
else
res.append(chArr[j]);
}
else{ // non-ASCII char
byte[] byteArr = name.substring(j, j + 1).getBytes("UTF8");
for(int i = 0; i < byteArr.length; i++){
// byte must be converted to unsigned int
res.append("%").append(Integer.toHexString((byteArr[i]) & 0xFF));
}
}
}
return res.toString();
}
// Firefox/Mozilla
return MimeUtility.encodeText(name, "UTF8", "B");
}
It worked well so far, until someone found out that it doesn't work well with long file names. For example: あああああああああああああああ2008.10.1あ.doc. If I change one of the single-byte dots to a single-byte underscore, or if I remove the first character, it works OK; i.e., it depends on the length and the URL-encoding of the dot character.
Following are a few examples.
This is broken (あああああああああああああああ2008.10.1あ.doc):
Content-Disposition: attachment; filename=%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%822008%2E10%2E1%e3%81%82.doc;
This is OK (あああああああああああああああ2008_10.1あ.doc):
Content-Disposition: attachment; filename=%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%822008_10%2E1%e3%81%82.doc;
This is also fine (あああああああああああああああ2008.10.1あ.doc):
Content-Disposition: attachment; filename=%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%82%e3%81%822008%2E10%2E1%e3%81%82.doc;
Anybody have a clue?
A: gmail handles file name escaping somewhat differently: the file name is quoted (double-quotes), and single-byte periods are not URL-escaped.
This way, the long file name in the question is OK.
Content-Disposition: attachment; filename="%E3%81%82%E3%81%82%E3%81%82%E3%81%82%E3%81%82%E3%81%82%E3%81%82%E3%81%82%E3%81%82%E3%81%82%E3%81%82%E3%81%82%E3%81%82%E3%81%82%E3%81%822008.10.1%E3%81%82.doc"
However, there is still a limitation (apparently IE-only) on the byte-length of the file name (a bug, I assume). So even if the file name is made of only single-byte characters, the beginning of the file name is truncated.
The limitation is around 160 bytes.
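A sketch of that gmail-style header value in Python (urllib's quote percent-encodes the UTF-8 bytes and leaves single-byte dots alone, matching the header quoted above; the truncation guard for the ~160-byte limit is a hypothetical workaround, not something gmail is known to do):

```python
from urllib.parse import quote

def content_disposition(filename, byte_limit=160):
    # Percent-encode the UTF-8 bytes of the name; unreserved characters
    # such as '.' are left unescaped, as in the gmail header above.
    encoded = quote(filename)
    # Hypothetical guard for the ~160-byte IE limit observed above:
    # drop leading characters so the extension survives.
    while len(encoded) > byte_limit:
        filename = filename[1:]
        encoded = quote(filename)
    return 'attachment; filename="%s"' % encoded

assert content_disposition("あ.doc") == 'attachment; filename="%E3%81%82.doc"'
```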
A: As mentioned above, Content-Disposition and Unicode are impossible to get working in all main browsers without browser sniffing and returning different headers for each.
My solution was to avoid the Content-Disposition header entirely, and append the filename to the end of the URL to trick the browser into thinking it was getting a file directly. e.g.
http://www.xyz.com/cgi-bin/dynamic.php/あああああああああああああああ2008.10.1あ.doc
This naturally assumes that you know the filename when you create the link, although a quick redirect header could set it on demand.
A: The main issue here is that IE does not support the relevant RFC, here: RFC2231. See pointers and test cases. Furthermore, the workaround that you use for IE (just using percent-escaped UTF-8) has several additional problems; it may not work in all locales (as far as I recall, the method fails in Korea unless IE is configured to always use UTF-8 in URLs which is not the default), and, as previously mentioned, there are length limits (I hear that that is fixed in IE8, but I did not try yet).
A: I think this issue is fixed in IE8, I have seen it working in IE 8.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Has anyone used lucene.net with Linq-to-Entities? If anyone has done this, please let me know. I don't know anything about lucene.net. I have never used it, but I heard about it. I was wondering how something like that would integrate with the Linq entity framework?
A: Check out Linq to Lucene project.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to decouple a middle tier and a dataset to allow for unit testing? I have a question at SO asking how to wire a middle tier to a DataSet.
I put up an answer showing what I had come up with, but I am not happy with the tight coupling. I have just started to get into testing and find it a good goal for my code.
How would this code be de-coupled to allow for unit testing?
Thank you,
Keith
A: IMO, DataSets are evil. They are, and should only be used as, offline databases. Nothing more, IMO. However, what you do in your Data Access Layer (DAL) should not really impact your Business layer (BL). I'd just use objects (use interfaces) between them (IList) and then use an interface to define your DAL (IRepository), and then you can mock that interface to return whatever you need in your BL for unit testing. Unit testing DataSets is another beast; I never tried it and hopefully never have to... Perhaps an in-memory database is your best bet there...
Oh, and for mocking I've used RhinoMock with some success. I'd also encourage you to look at IoCs (http://www.castleproject.org/).
A: You need IOC (inversion of control) and mock objects.
I encourage you to watch dnrTV episode 126 with James Kovacs.
He demonstrates exactly what you are looking for.
A: Have you tried Spring.net? It will make your code cleaner and less coupled. It also provides useful classes to do your integration tests.
A: It depends on what you want to test:
*
*Do you want to test the data retrieval from the database?
*Building the objects from the datasets?
*Inserts or updates to the database?
*And so on...
Here's a suggestion:
An order contains all its children. This is an aggregate, a whole.
You get an order with details from a repository:
var order = repository.GetOrderBy(id);
The repository gets the data from the database:
var dataset = orderDatabase.GetOrderAndDetailsBy(id);
The repository could use a builder to create the order:
var order = orderBuilder.CreateOrderAndDetailsFrom(dataset);
You would have to create a repository as follows:
var repository = new OrderRepository(orderDatabase, orderBuilder);
Now you can create a repository with fake collaborators, depending on what you want to test.
A: If you have entity objects, you can use mocks for unit testing your middle tier.
RWendi
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Animations with full alpha in Qt? Is there an animation format supported in Qt (using v4.4) that will support a full alpha channel? GIF only has one-bit transparency, and I don't think Qt supports APNG.
Update: MNG seems to be supported, but that's even less popular than APNG! Maybe SVG is my best bet.
A: Qt supports SVG 1.2 Tiny as well as GIF and MNG. On a side-note, an animation API is being worked on which might be interesting for you;
http://labs.trolltech.com/blogs/2008/11/05/qt-animation-framework/
and
http://labs.trolltech.com/page/Projects/Graphics/Kinetic
A: Use the Animation codec with Quicktime, it supports a full alpha channel. Just chooose "Million of Colors+" when you select the Animation codec. You may need to enable "legacy codecs" option in the advanced settings.
A: Without knowing much of Qt, I will mention MNG and SVG, perhaps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: HTML authoring in an editorial environment Having recently produced an HTML/CSS/Javascript based report from various word and excel files sent to me I'm trying to work out how to do this better in future, ideally enabling non-technical users in the office to do many of the tasks currently handed to me.
There are a range of HTML editors out there but none of them seem obviously adept at doing this kind of task. For example, most tables in the document are displayed via a thickbox (jquery plugin). In addition to the table, this requires that I enclose them in a div with various id and class attributes and then create a link at the top of the page looking something like this:
<a href="#TB_inline?height=300&width=700&inlineId=tbtable2"
class="thickbox tablelink" title="Municipal Operating Expenditure (A$m)">Municipal Operating Expenditure</a>
I need a solution that will be careful with my templates, have a WYSIWYG interface, but also provide easy input for this kind of thing without frustrating those in the office with no HTML knowledge, ideally keeping them totally away from the code.
A: You can't give your non-technial users such a complex HTML template and hope they will not break it. There is no HTML editor that can enforce such rules for structures that are more complex than a class attribute on an element.
This scenario calls for the use of XML: you need to separate your content and presentation.
You should define an XML flavour to describe your report. Then write an XSLT that will transform your <thickbox/> XML element into the HTML structure you describe above.
To allow non-technical users to do some of your tasks, you could use Xopus to make the XML editable (demo). You could do the initial conversion from OOXML, or you could use the copy/paste functionality in Xopus to allow them to copy content from Excel and automatically convert it into your <thickbox/> element.
A: Have you tried FCKEditor. It is very popular and used in a number of blogs, wikis and CMSs. It can produce very clean HTML and is highly customizable.
A: Get them to use Markdown, just like here, and insert the rendered HTML into your template.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What are some good toolsets for graphing/charting in a web application? What are some good toolsets for graphing/charting in a web application? Preferably open-source/freeware, and particularly looking at developing with ASP.NET MVC or Silverlight 2.0.
A: Google Charts?
A: Dundas Chart is one of the best out there. It's not free, but it's worth it.
A: Check out Visiblox charts, they are what I am currently using for my telemetry application. I found the examples here useful - http://www.visiblox.com/examples
You can now see the telemetry application I created at CodeProject.
A: For free flash charting, you may look at FusionCharts Free. Or, if you want more professional and are ready to shell out $$$, look at FusionCharts v3
A: You may now also want to consider the Microsoft Chart Controls for .NET Framework 3.5
These have just been released.
A: If you're looking for free components get Google Charts.
Non-free components which I really like are
*
*DevExpress Xtra Charts (especially if you use their other components)
*Dundas Charts (great and highly recommended)
A: We use XSLT to transform XML into SVG. Once you build up the various charting formats and data DTDs, it's very easy to reuse.
A: If you are interested in Flash-based charts, then: http://teethgrinder.co.uk/open-flash-chart/
A: Did a search on CodePlex and found
Free Silverlight Chart Control http://www.codeplex.com/FreeSilverlightChart
Google Chart Control for ASP.Net http://www.codeplex.com/GoogleChartNet
Free Silverlight Chart Control - visifire http://www.codeplex.com/visifire
etc....
The search I used http://www.codeplex.com/Project/ProjectDirectory.aspx?ProjectSearchText=chart
I personally can't suggest any since I never used them, but hope this helps.
A: I think Google Charts are outstanding if you're not looking for animations etc. It'll take load off your servers and Google will render the whole thing for you. It'll also give very detailed control over how you want the graph to look. It's also the simplest and cleanest way, I think. It's just an image ... no Flash, no SVG, and so on.
One tip I'd give is not to use a wrapper API. I found that the easiest way to work with it is to use the URL-based "API" directly. But I guess that's just MHO.
A: Note that the public VS2010 CTP image includes chart functionality built into ASP.NET; overdue, but welcome.
A: Google Charts was the first thing that came to my mind. I had also used Emprise JavaScript Charts at a previous employer with some luck, but it is not free to use.
A: There's also fusion charts, scruffy and riya. Most of these charting libraries generate charts from xml files, so you can use them from any framework.
A: For graphs, nothing beats graphviz. There are tons of third-party wrapper libraries, so you'll be able to dynamically generate the graphs from almost any system.
A: Depends on what you're looking for.
Google Charts is excellent at what it does and is quick and easy to pick up - I used it for the first time about two weeks ago and was generating multiple reports, in two different styles, within about three hours of first looking at its documentation. However, unless there's more to it than the docs let on, "what it does" is limited to relatively basic charts and it will not support anything particularly fancy. It also requires you to do a lot of the grunt work yourself, such as figuring out what the scale and axis labels should be.
For anything beyond Google Charts' capabilities, I would use the GD::Graph modules from CPAN, but those are for Perl rather than .NET or Silverlight, so they probably won't do you much good.
A: Not a free solution, but a very good one IMHO is Telerik's SL Chart. Note - They also have charts for other Web Applications. Using their controls, you don't have to send data to Google (to collect and do who knows what with). This is very important in HealthCare situations where HIPAA would frown upon you sending data to Google.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: DSL Tools: Create a new Diagram in custom code I am using DSL Tools for Visual Studio 2005.
I have a DSL where at a certain point I would like to create a new Diagram using custom code.
So far, I was able to create a new Diagram by overwriting the current, already opened diagram. Code follows:
FEGeneratorDiagram diag = new FEGeneratorDiagram(ThisElem.Store);
diag.Associate(ThisElem);
FEGeneratorDiagram currentDiag = (FEGeneratorDiagram)ThisElem.Store.ElementDirectory.FindElements<FEGeneratorDiagram>(false)[0];
currentDiag = diag;
But what I would really like to do is to create a new DSL document with a new FEGeneratorDiagram instance, and then continue my logic of adding elements and setting properties.
Any help?
A: //Create a Store
Type[] modelTypes = new Type[] { typeof(CoreDesignSurfaceDomainModel), typeof(FEGeneratorDomainModel) };
Store store = new Store(modelTypes);
RootElement root;
using (Transaction t =
store.TransactionManager.BeginTransaction("Create Elements"))
{
root = FEGeneratorSerializationHelper.Instance.LoadModel(store, diagramPath, null, null);
t.Commit();
}
//Do whatever custom things you want!
SerializationResult result = new SerializationResult();
//Save the file
FEGeneratorSerializationHelper.Instance.SaveModel(result, root, diagramPath);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What are the potential problems with this WebService security scheme? We have a service that handles authorization based on a User Name and Password. Instead of making the username and password part of the call, we place it in the SOAP header.
In a typical scenario, a Web Service calls the Authorization service at the start of execution to check that the caller is allowed to call it. The problem is though that some of these Web services call each other, and it would mean that on every sub-call the user's permissions are checked, and that can be very expensive.
What I thought of doing was to have the Authorization service return a Security Token after the first call. Then, instead of having to call the Authorization service each time, the Web Service can validate the Security Header locally.
The Security Header looks something like this (C# code - trimmed to illustrate the essential concept):
public sealed class SecurityHeader : SoapHeader
{
public string UserId; // Encrypted
public string Password; // Encrypted; Just realized this field isn't necessary [thanks CJP]
public DateTime TimeStamp; // Used for calculating header Expiry
public string SecurityToken;
}
The general idea is that the SecurityHeader gets checked with every call. If it exists, hasn't expired, and the SecurityToken is valid, then the Web Method proceeds as normal. Otherwise it will either return an error, or it will attempt to re-authorize and generate a new SecurityHeader
The SecurityToken is based on a salted hash of the UserId, Password, and TimeStamp. The salt is changed every day to prevent replays.
The one problem I do see is that a user might have permission to access Web Service A, but not Web Service B. If he calls A and receives a security token, as it stands now it means that B will let him through if he uses that same token. I have to change it so that the security token is only valid from Web Service to Web Service, rather than User to Web Service, i.e. it should be OK if the user calls A which calls B, but not OK if the user calls Service A and then Service D. A way around that is to assign a common key (or a set of keys) to logically related services (i.e. if the client can do A, then logically he can do B as well).
Alternatively, I'd have to encode the user's entire permission set as part of the security header. I'll have to investigate what the overhead will be.
Edit:
Several people have mentioned looking at other security schemes like WS-Security and SAML etc. I already have. In fact, I got the idea from WS-Security. The problem is that other schemes don't provide the functionality I need (caching authorization information and protecting against replays without an intemediary database). If someone knows of a scheme that does then I will glady use it instead. Also, this is not about authentication. That is handled by another mechanism beyond my control.
If it turns out that there is no way to cache authorization data, then it means that I'll just have to incur the overhead of authorization at each level.
A: The fundamental problem with your scheme is that you're not using a standard framework, or implementation. This is regardless of any particular merits of your scheme itself.
The reason is simple, security (cryptography in particular) is very, very complicated and pretty much impossible to get right. Use a common tool that is robust, well understood and proven.
See: http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wss for more information on WS-Security.
Most frameworks (.NET/JavaEE etc) will have in-built support (to some degree) for WS-Security.
If you believe your scheme to be better in some way than the standards, I suggest you write it up as a paper and submit it for peer review (along with a reference implementation), but DO NOT use it to secure an application.
EDIT to respond to OP Edit:
I think you're confusing the roles of Authentication and Authorization a little, which is easy to do...
The role of the Security Token (or similar) in these schemes is to Authenticate the sender of the message - basically, is the sender who they claim to be. As you rightly pointed out, Authentication does not imply anything about which underlying resources the sender is to be granted access to.
Authorization is the process whereby you take an authenticated sender and apply some set of permissions so that you can restrict scope of access. Generally the frameworks won't do authorization by default, you either have to enable it by creating some form of ACL, or by extending some kind of "Security Manager" type interface.
In effect, the idea is that the Authentication layer tells you who is trying to access Page A, and leaves it up to you to decide if that person is authorized to access Page A.
You should never store information about the rights and permissions in the message itself - the receiver should verify rights against its ACL (or database or whatever) for each message. This limits your exposure should someone figure out how to modify the message.
A: Ok, replacing my older answers with hopefully a better one.
What you describe should work if you have a way to securely share data between your services. For example, if your services share a secret key with the Authorization Service, you can use this key to get the salt.
BTW, I don't know enough cryptography to say whether it's safe enough to add secret salt + hash (although seems fine); I'm pretty sure it's safe to HMAC with a secret or private key. Rotating keys is a good idea, so you would still have a master key and propagate a new signing key.
Other issues with your approach are that (a) you're hardcoding the hashing logic in every service, and (b) the services might want to get more detailed data from the Authorization Service than just a yes/no answer. For example, you may want the Authorization Service to insert into the header that this user belongs to roles A and B but not C.
As an alternative, you can let the Authorization Service create a new header with whatever interesting information it has, and sign that block.
At this point, we're discussing a Single Sign-On implementation. You already know about WS-Security specs. This header I described sounds a lot like a SAML assertion.
Here's an article about using WS-Security and SAML for Single Sign-On.
Now, I don't know whether you need all this... there are in-between solutions too. For example, the Authorization Service could sign the original Username block; if you worry about public/private crypto performance, and you're ok sharing secret keys, you could also use a secret key to sign instead of public/private keys.
A: Personally I don't see you having any issues there as long as you have a centralized underlying framework to support the validation of the SecurityToken values.
A: First of all, read @CJP 's post, he makes an excellent and valid point.
If you want to go ahead and roll it yourself anyway (maybe you do have a good reason for it), I would make the following points:
*
*You're talking about an Authentication Service, NOT an authorization service. Just to make sure you know what you're talking about...?
*Second, you need to separate between the salt (which is not secret) and the key (which is). You should have a keyed hash (e.g. HMAC), together with salt (or a keyed salted hash. Sounds like a sandwich.). The salt, as noted, is not secret, but should be changed FOR EACH TOKEN, and can be included in the header; the key MUST be secret and being changed every day is good. Of course, make sure you're using strong hash (e.g. SHA-256), proper key management techniques, etc etc.
Again, I urge you to reconsider rolling your own, but if you have to go out on your own...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: JPA annotations and Interfaces I have a class Animal and an interface it inherits from IAnimal.
@MappedSuperclass
public class Animal implements Serializable, IAnimal{...}.
@Entity
public class Jaguar extends Animal{...}
My first question is, do I need to annotate the interface?
I asked this because I am getting this error when I run my tests:
Error compiling the query [SELECT s FROM animal s WHERE s.atype = :atype]. Unknown abstract schema type [animal]
If I remember correctly, before I added this interface it was working.
A: This error is occurring because you spelled Animal with a lowercase 'a' in the query. Try this:
SELECT s FROM Animal s WHERE s.atype = :atype
A: Does
SELECT s FROM Animal s WHERE s.atype = :atype
work? (just changed the case of animal)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: C++ SQL Database program I have the following code that I wrote, but the SQLBindCol does not seem to work correctly (of course, I could have screwed up the whole program too!). The connection works, it creates the table in the DB, adds the records fine, and they all look good in SQL Enterprise Manager. So what I need help with is after the comment "Part 3 & 4: Searches based on criteria". Perhaps I should have done this assignment completely differently, or is this an acceptable method?
#include <iostream>
#include <cstdio>
#include <string>
#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <sqltypes.h>
using namespace std; // to save us having to type std::
const int MAX_CHAR = 1024;
int main ( )
{
SQLCHAR SQLStmt[MAX_CHAR];
char strSQL[MAX_CHAR];
char chrTemp;
SQLVARCHAR rtnFirstName[50];
SQLVARCHAR rtnLastName[50];
SQLVARCHAR rtnAddress[30];
SQLVARCHAR rtnCity[30];
SQLVARCHAR rtnState[3];
SQLDOUBLE rtnSalary;
SQLVARCHAR rtnGender[1];
SQLINTEGER rtnAge;
// Get a handle to the database
SQLHENV EnvironmentHandle;
RETCODE retcode = SQLAllocHandle( SQL_HANDLE_ENV, SQL_NULL_HANDLE, &EnvironmentHandle );
// Set the SQL environment flags
retcode = SQLSetEnvAttr( EnvironmentHandle, SQL_ATTR_ODBC_VERSION, (SQLPOINTER) SQL_OV_ODBC3, SQL_IS_INTEGER );
// create handle to the SQL database
SQLHDBC ConnHandle;
retcode = SQLAllocHandle( SQL_HANDLE_DBC, EnvironmentHandle, &ConnHandle );
// Open the database using a System DSN
retcode = SQLDriverConnect(ConnHandle,
NULL,
(SQLCHAR*)"DSN=PRG411;UID=myUser;PWD=myPass;",
SQL_NTS,
NULL,
SQL_NTS,
NULL,
SQL_DRIVER_NOPROMPT);
if (!retcode)
{
cout << "SQLConnect() Failed";
}
else
{
// create a SQL Statement variable
SQLHSTMT StatementHandle;
retcode = SQLAllocHandle(SQL_HANDLE_STMT, ConnHandle, &StatementHandle);
// Part 1: Create the Employee table (Database)
do
{
cout << "Create the new table? ";
cin >> chrTemp;
} while (cin.fail());
if (chrTemp == 'y' || chrTemp == 'Y')
{
strcpy((char *) SQLStmt, "CREATE TABLE [dbo].[Employee]([pkEmployeeID] [int] IDENTITY(1,1) NOT NULL,[FirstName] [varchar](50) NOT NULL,[LastName] [varchar](50) NOT NULL,[Address] [varchar](30) NOT NULL,[City] [varchar](30) NOT NULL,[State] [varchar](3) NOT NULL, [Salary] [double] NOT NULL,[Gender] [varchar](1) NOT NULL, [Age] [int] NOT NULL, CONSTRAINT [PK_Employee] PRIMARY KEY CLUSTERED ([pkEmployeeID] ASC)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]) ON [PRIMARY]");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
}
// Part 2: Hardcode records into the table
do
{
cout << "Add records to the table? ";
cin >> chrTemp;
} while (cin.fail());
if (chrTemp == 'y' || chrTemp == 'Y')
{
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Mike','Slentz','123 Torrey Dr.','North Clairmont','CA', 48000.00 ,'M',34)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Sue','Vander Hayden','46 East West St.','San Diego','CA', 36000.00 ,'F',28)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Sharon','Stonewall','756 West Olive Garden Way','Plymouth','MA', 56000.00 ,'F',58)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('James','Bartholemew','777 Praying Way','Falls Church','VA', 51000.00 ,'M',45)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Joe','Smith','111 North 43rd Ave','Peoria','AZ', 44000.00 ,'M', 40)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Michael','Smith','20344 North Swan Park','Phoenix','AZ', 24000.00 ,'M', 40)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Jennifer','Jones','123 West North Ave','Flagstaff','AZ', 40000.00 ,'F', 40)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Cora','York','33rd Park Way Drive','Mayville','MI', 30000.00 ,'F', 61)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Tom','Jefferson','234 Friendship Way','Battle Creek','MI', 41000.00 ,'M', 31)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
}
// Part 3 & 4: Searches based on criteria
do
{
cout << "1. Display all records in the database" << endl;
cout << "2. Display all records with age greater than 40" << endl;
cout << "3. Display all records with salary over $30K" << endl;
cout << "4. Exit" << endl << endl;
do
{
cout << "Please enter a selection: ";
cin >> chrTemp;
} while (cin.fail());
if (chrTemp == '1')
{
strcpy((char *) SQLStmt, "SELECT [FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age] FROM EMPLOYEE");
}
else if (chrTemp == '2')
{
strcpy((char *) SQLStmt, "SELECT [FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age] FROM EMPLOYEE WHERE [AGE] > 40");
}
else if (chrTemp == '3')
{
strcpy((char *) SQLStmt, "SELECT [FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age] FROM EMPLOYEE WHERE [Salary] > 30000");
}
if (chrTemp == '1' || chrTemp == '2' || chrTemp == '3')
{
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
SQLBindCol(StatementHandle, 1, SQL_C_CHAR, &rtnFirstName, sizeof(rtnFirstName), NULL );
SQLBindCol(StatementHandle, 2, SQL_C_CHAR, &rtnLastName, sizeof(rtnLastName), NULL );
SQLBindCol(StatementHandle, 3, SQL_C_CHAR, &rtnAddress, sizeof(rtnAddress), NULL );
SQLBindCol(StatementHandle, 4, SQL_C_CHAR, &rtnCity, sizeof(rtnCity), NULL );
SQLBindCol(StatementHandle, 5, SQL_C_CHAR, &rtnState, sizeof(rtnState), NULL );
SQLBindCol(StatementHandle, 6, SQL_C_DOUBLE, &rtnSalary, sizeof(rtnSalary), NULL );
SQLBindCol(StatementHandle, 7, SQL_C_CHAR, &rtnGender, sizeof(rtnGender), NULL );
SQLBindCol(StatementHandle, 8, SQL_C_NUMERIC, &rtnAge, sizeof(rtnAge), NULL );
for(;;)
{
retcode = SQLFetch(StatementHandle);
if (retcode == SQL_NO_DATA_FOUND) break;
cout << rtnFirstName << " " << rtnLastName << " " << rtnAddress << " " << rtnCity << " " << rtnState << " " << rtnSalary << " " << rtnGender << "" << rtnAge << endl;
}
}
} while (chrTemp != '4');
SQLFreeStmt(StatementHandle, SQL_CLOSE );
SQLFreeConnect(ConnHandle);
SQLFreeEnv(EnvironmentHandle);
printf( "Done.\n" );
}
return 0;
}
A: OK, here is the code now working...
using namespace std; // to save us having to type std::
const int MAX_CHAR = 1024;
int main ( )
{
SQLSMALLINT RecNumber;
SQLCHAR * SQLState;
SQLINTEGER * NativeErrorPtr;
SQLCHAR * MessageText;
SQLSMALLINT BufferLength;
SQLSMALLINT * TextLengthPtr;
SQLCHAR SQLStmt[MAX_CHAR];
char strSQL[MAX_CHAR];
char chrTemp;
SQLVARCHAR rtnFirstName[50];
SQLVARCHAR rtnLastName[50];
SQLVARCHAR rtnAddress[30];
SQLVARCHAR rtnCity[30];
SQLVARCHAR rtnState[3];
SQLDOUBLE rtnSalary;
SQLVARCHAR rtnGender[2];
SQLINTEGER rtnAge;
// Get a handle to the database
SQLHENV EnvironmentHandle;
RETCODE retcode = SQLAllocHandle( SQL_HANDLE_ENV, SQL_NULL_HANDLE, &EnvironmentHandle );
// Set the SQL environment flags
retcode = SQLSetEnvAttr( EnvironmentHandle, SQL_ATTR_ODBC_VERSION, (SQLPOINTER) SQL_OV_ODBC3, SQL_IS_INTEGER );
// create handle to the SQL database
SQLHDBC ConnHandle;
retcode = SQLAllocHandle( SQL_HANDLE_DBC, EnvironmentHandle, &ConnHandle );
// Open the database using a System DSN
retcode = SQLDriverConnect(ConnHandle,
NULL,
(SQLCHAR*)"DSN=PRG411;UID=myUser;PWD=myPass;",
SQL_NTS,
NULL,
SQL_NTS,
NULL,
SQL_DRIVER_NOPROMPT);
if (!retcode)
{
cout << "SQLConnect() Failed";
}
else
{
// create a SQL Statement variable
SQLHSTMT StatementHandle;
retcode = SQLAllocHandle(SQL_HANDLE_STMT, ConnHandle, &StatementHandle);
// Part 1: Create the Employee table (Database)
do
{
cout << "Create the new table? ";
cin >> chrTemp;
} while (cin.fail());
if (chrTemp == 'y' || chrTemp == 'Y')
{
strcpy((char *) SQLStmt, "CREATE TABLE [dbo].[Employee]([pkEmployeeID] [int] IDENTITY(1,1) NOT NULL,[FirstName] [varchar](50) NOT NULL,[LastName] [varchar](50) NOT NULL,[Address] [varchar](30) NOT NULL,[City] [varchar](30) NOT NULL,[State] [varchar](3) NOT NULL, [Salary] [decimal] NOT NULL,[Gender] [varchar](1) NOT NULL, [Age] [int] NOT NULL, CONSTRAINT [PK_Employee] PRIMARY KEY CLUSTERED ([pkEmployeeID] ASC)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]) ON [PRIMARY]");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
}
// Part 2: Hardcode records into the table
do
{
cout << "Add records to the table? ";
cin >> chrTemp;
} while (cin.fail());
if (chrTemp == 'y' || chrTemp == 'Y')
{
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Mike','Slentz','123 Torrey Dr.','North Clairmont','CA', 48000.00 ,'M',34)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Sue','Vander Hayden','46 East West St.','San Diego','CA', 36000.00 ,'F',28)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Sharon','Stonewall','756 West Olive Garden Way','Plymouth','MA', 56000.00 ,'F',58)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('James','Bartholemew','777 Praying Way','Falls Church','VA', 51000.00 ,'M',45)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Joe','Smith','111 North 43rd Ave','Peoria','AZ', 44000.00 ,'M', 40)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Michael','Smith','20344 North Swan Park','Phoenix','AZ', 24000.00 ,'M', 40)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Jennifer','Jones','123 West North Ave','Flagstaff','AZ', 40000.00 ,'F', 40)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Cora','York','33rd Park Way Drive','Mayville','MI', 30000.00 ,'F', 61)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy((char *) SQLStmt, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Tom','Jefferson','234 Friendship Way','Battle Creek','MI', 41000.00 ,'M', 31)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
}
// Part 3 & 4: Searches based on criteria
do
{
cout << "1. Display all records in the database" << endl;
cout << "2. Display all records with age 40 or over" << endl;
cout << "3. Display all records with salary $30K or over" << endl;
cout << "4. Exit" << endl << endl;
do
{
cout << "Please enter a selection: ";
cin >> chrTemp;
} while (cin.fail());
if (chrTemp == '1')
{
strcpy((char *) SQLStmt, "SELECT [FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age] FROM EMPLOYEE");
}
else if (chrTemp == '2')
{
strcpy((char *) SQLStmt, "SELECT [FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age] FROM EMPLOYEE WHERE [AGE] >= 40");
}
else if (chrTemp == '3')
{
strcpy((char *) SQLStmt, "SELECT [FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age] FROM EMPLOYEE WHERE [Salary] >= 30000");
}
if (chrTemp == '1' || chrTemp == '2' || chrTemp == '3')
{
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
//SQLGetDiagRec(SQL_HANDLE_STMT, StatementHandle, RecNumber, SQLState, NativeErrorPtr, (SQLCHAR*) MessageText, (SQLINTEGER) BufferLength, (SQLSMALLINT*) &TextLengthPtr);
SQLBindCol(StatementHandle, 1, SQL_C_CHAR, &rtnFirstName, sizeof(rtnFirstName), NULL );
SQLBindCol(StatementHandle, 2, SQL_C_CHAR, &rtnLastName, sizeof(rtnLastName), NULL );
SQLBindCol(StatementHandle, 3, SQL_C_CHAR, &rtnAddress, sizeof(rtnAddress), NULL );
SQLBindCol(StatementHandle, 4, SQL_C_CHAR, &rtnCity, sizeof(rtnCity), NULL );
SQLBindCol(StatementHandle, 5, SQL_C_CHAR, &rtnState, sizeof(rtnState), NULL );
SQLBindCol(StatementHandle, 6, SQL_C_DOUBLE, &rtnSalary, sizeof(rtnSalary), NULL );
SQLBindCol(StatementHandle, 7, SQL_C_CHAR, &rtnGender, sizeof(rtnGender), NULL );
SQLBindCol(StatementHandle, 8, SQL_C_LONG, &rtnAge, sizeof(rtnAge), NULL );
for(;;)
{
retcode = SQLFetch(StatementHandle);
if (retcode == SQL_NO_DATA_FOUND) break;
cout << rtnFirstName << " " << rtnLastName << " " << rtnAddress << " " << rtnCity << " " << rtnState << " " << rtnSalary << " " << rtnGender << " " << rtnAge << endl;
}
SQLFreeStmt(StatementHandle, SQL_CLOSE);
}
} while (chrTemp != '4');
SQLFreeStmt(StatementHandle, SQL_CLOSE );
SQLFreeHandle(SQL_HANDLE_STMT, StatementHandle);
SQLDisconnect(ConnHandle);
SQLFreeHandle(SQL_HANDLE_DBC, ConnHandle);
SQLFreeHandle(SQL_HANDLE_ENV, EnvironmentHandle);
printf( "Done.\n" );
}
return 0;
}
A: You can get enough diagnostics out of SQL to isolate and resolve the issue.
You can get the statement handle to tell you what has gone wrong by calling SQLGetDiagRec when SQLExecDirect returns something other than SQL_SUCCESS or SQL_SUCCESS_WITH_INFO:
SQLGetDiagRec( SQL_HANDLE_STMT, StatementHandle, req, state, &error, (SQLCHAR*) buffer, (SQLINTEGER) MAX_CHAR, (SQLSMALLINT*) &output_length );
You'll have to allocate the variables you see here, of course... I suggest you put a throwaway line after the SQLGetDiagRec call and assign a breakpoint to it. When it breaks there, you can look at state's value: that will align with the "Diagnostics" section here:
http://msdn.microsoft.com/en-us/library/ms713611(VS.85).aspx
A: You said you were getting errors with:
string sqlString = "Select * From Customers Where Customers.Employee = '" +id+ "'";
It should be obvious, sorry, lol. The id is an integer, sure. But when you evaluate the string, it comes out like so:
string sqlString = "Select * From Customers Where Customers.Employee = '100'";
Notice what's wrong? You have single quotes around it. So no matter what data type you are using, the single quotes makes the SQL treat it as a string. So just take them out like so:
string sqlString = "Select * From Customers Where Customers.Employee = " + id + "";
Or,
string sqlString = "Select * From Customers Where Customers.Employee = " + id;
My question is this... Can you explain how looping through records in C++ works? For example, the user inputs a user name into strUName, and you want to see if that user name is in the database table Users. The SQL is easy enough (select * from Users where [UName] = '" + strUName + "'), but how do you actually execute it in C++ and figure it out?
I see the SQLStmt, I see it being executed using SQLExecDirect. I then see some SQLBindCol junk and then an infinite loop until the break evaluates. But I don't quite get what's happening (this is easy for me in any other language, but I'm new to C++).
A: Here is the corrected code.
#include <iostream>
#include <cstdio>
#include <string>
#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <sqltypes.h>
using namespace std; // to save us having to type std::
const int MAX_CHAR = 1024;
int main()
{
SQLSMALLINT RecNumber;
SQLCHAR * SQLState;
SQLINTEGER * NativeErrorPtr;
SQLCHAR * MessageText;
SQLSMALLINT BufferLength;
SQLSMALLINT * TextLengthPtr;
SQLCHAR SQLStmt[MAX_CHAR];
char strSQL[MAX_CHAR];
char chrTemp;
SQLVARCHAR rtnFirstName[50];
SQLVARCHAR rtnLastName[50];
SQLVARCHAR rtnAddress[30];
SQLVARCHAR rtnCity[30];
SQLVARCHAR rtnState[3];
SQLDOUBLE rtnSalary;
SQLVARCHAR rtnGender[2];
SQLINTEGER rtnAge;
// Get a handle to the database
SQLHENV EnvironmentHandle;
RETCODE retcode = SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &EnvironmentHandle);
// Set the SQL environment flags
HWND desktopHandle = GetDesktopWindow();
retcode = SQLSetEnvAttr(EnvironmentHandle, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, SQL_IS_INTEGER);
// create handle to the SQL database
SQLHDBC ConnHandle;
retcode = SQLAllocHandle(SQL_HANDLE_DBC, EnvironmentHandle, &ConnHandle);
SQLSetConnectAttr(ConnHandle, SQL_LOGIN_TIMEOUT, (SQLPOINTER)5, 0);
// Open the database using a System DSN
retcode = SQLDriverConnect(ConnHandle,
desktopHandle,
(SQLCHAR*)"DSN=PRG411;UID=myUser;PWD=myPass;",
SQL_NTS,
NULL,
SQL_NTS,
NULL,
SQL_DRIVER_NOPROMPT);
if (retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO)
{
cout << "SQLConnect() Failed";
}
else
{
// create a SQL Statement variable
SQLHSTMT StatementHandle;
retcode = SQLAllocHandle(SQL_HANDLE_STMT, ConnHandle, &StatementHandle);
// Part 1: Create the Employee table (Database)
do
{
cout << "Create the new table? ";
cin >> chrTemp;
} while (cin.fail());
if (chrTemp == 'y' || chrTemp == 'Y')
{
strcpy_s((char *)SQLStmt,1024, "CREATE TABLE [dbo].[Employee]([pkEmployeeID] [int] IDENTITY(1,1) NOT NULL,[FirstName] [varchar](50) NOT NULL,[LastName] [varchar](50) NOT NULL,[Address] [varchar](30) NOT NULL,[City] [varchar](30) NOT NULL,[State] [varchar](3) NOT NULL, [Salary] [decimal] NOT NULL,[Gender] [varchar](1) NOT NULL, [Age] [int] NOT NULL, CONSTRAINT [PK_Employee] PRIMARY KEY CLUSTERED ([pkEmployeeID] ASC)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]) ON [PRIMARY]");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
}
// Part 2: Hardcode records into the table
do
{
cout << "Add records to the table? ";
cin >> chrTemp;
} while (cin.fail());
if (chrTemp == 'y' || chrTemp == 'Y')
{
strcpy_s((char *)SQLStmt,1024, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Mike','Slentz','123 Torrey Dr.','North Clairmont','CA', 48000.00 ,'M',34)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy_s((char *)SQLStmt,1024, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Sue','Vander Hayden','46 East West St.','San Diego','CA', 36000.00 ,'F',28)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy_s((char *)SQLStmt,1024, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Sharon','Stonewall','756 West Olive Garden Way','Plymouth','MA', 56000.00 ,'F',58)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy_s((char *)SQLStmt,1024, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('James','Bartholemew','777 Praying Way','Falls Church','VA', 51000.00 ,'M',45)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy_s((char *)SQLStmt,1024, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Joe','Smith','111 North 43rd Ave','Peoria','AZ', 44000.00 ,'M', 40)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy_s((char *)SQLStmt,1024, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Michael','Smith','20344 North Swan Park','Phoenix','AZ', 24000.00 ,'M', 40)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy_s((char *)SQLStmt,1024, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Jennifer','Jones','123 West North Ave','Flagstaff','AZ', 40000.00 ,'F', 40)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy_s((char *)SQLStmt,1024, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Cora','York','33rd Park Way Drive','Mayville','MI', 30000.00 ,'F', 61)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
strcpy_s((char *)SQLStmt,1024, "INSERT INTO employee([FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age]) VALUES ('Tom','Jefferson','234 Friendship Way','Battle Creek','MI', 41000.00 ,'M', 31)");
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
}
// Part 3 & 4: Searches based on criteria
do
{
cout << "1. Display all records in the database" << endl;
cout << "2. Display all records with age 40 or over" << endl;
cout << "3. Display all records with salary $30K or over" << endl;
cout << "4. Exit" << endl << endl;
do
{
cout << "Please enter a selection: ";
cin >> chrTemp;
} while (cin.fail());
if (chrTemp == '1')
{
strcpy_s((char *)SQLStmt,1024, "SELECT [FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age] FROM EMPLOYEE");
}
else if (chrTemp == '2')
{
strcpy_s((char *)SQLStmt,1024, "SELECT [FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age] FROM EMPLOYEE WHERE [AGE] >= 40");
}
else if (chrTemp == '3')
{
strcpy_s((char *)SQLStmt,1024, "SELECT [FirstName], [LastName], [Address], [City], [State], [Salary], [Gender],[Age] FROM EMPLOYEE WHERE [Salary] >= 30000");
}
if (chrTemp == '1' || chrTemp == '2' || chrTemp == '3')
{
retcode = SQLExecDirect(StatementHandle, SQLStmt, SQL_NTS);
//SQLGetDiagRec(SQL_HANDLE_STMT, StatementHandle, RecNumber, SQLState, NativeErrorPtr, (SQLCHAR*) MessageText, (SQLINTEGER) BufferLength, (SQLSMALLINT*) &TextLengthPtr);
SQLBindCol(StatementHandle, 1, SQL_C_CHAR, &rtnFirstName, sizeof(rtnFirstName), NULL);
SQLBindCol(StatementHandle, 2, SQL_C_CHAR, &rtnLastName, sizeof(rtnLastName), NULL);
SQLBindCol(StatementHandle, 3, SQL_C_CHAR, &rtnAddress, sizeof(rtnAddress), NULL);
SQLBindCol(StatementHandle, 4, SQL_C_CHAR, &rtnCity, sizeof(rtnCity), NULL);
SQLBindCol(StatementHandle, 5, SQL_C_CHAR, &rtnState, sizeof(rtnState), NULL);
SQLBindCol(StatementHandle, 6, SQL_C_DOUBLE, &rtnSalary, sizeof(rtnSalary), NULL);
SQLBindCol(StatementHandle, 7, SQL_C_CHAR, &rtnGender, sizeof(rtnGender), NULL);
SQLBindCol(StatementHandle, 8, SQL_C_LONG, &rtnAge, sizeof(rtnAge), NULL);
for (;;)
{
retcode = SQLFetch(StatementHandle);
if (retcode == SQL_NO_DATA_FOUND) break;
cout << rtnFirstName << " " << rtnLastName << " " << rtnAddress << " " << rtnCity << " " << rtnState << " " << rtnSalary << " " << rtnGender << " " << rtnAge << endl;
}
SQLFreeStmt(StatementHandle, SQL_CLOSE);
}
} while (chrTemp != '4');
SQLFreeStmt(StatementHandle, SQL_CLOSE);
SQLFreeHandle(SQL_HANDLE_STMT, StatementHandle);
SQLDisconnect(ConnHandle);
SQLFreeHandle(SQL_HANDLE_DBC, ConnHandle);
SQLFreeHandle(SQL_HANDLE_ENV, EnvironmentHandle);
printf("Done.\n");
}
return 0;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Is anyone using Microsoft Software Licensing and Protection Services (SLP) in production? Microsoft has come out with this fairly new technology that I am considering using for a .NET 3.5 application. I am curious, is anyone using this technology already? I am worried that the use of the secure virtual machine will negatively affect performance. Also, the way Microsoft advertises the product, it seems as though the licensing integration is very seamless and does not require any development work in the code. It seems like a great product but I want to make sure I know of any pitfalls before committing to it.
A: It seems that http://www.inishtech.com has taken over...
A: Microsoft announced today that they are not accepting any new customers for SLP. Sounds like the whole program is going down the drain... I'm glad my company didn't sign up for it yet!
A: It seems that SLP does not support .NET 3.5 at the moment.
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=3526285&SiteID=1
Your best bet would be to implement an auxiliary DLL containing SLP API calls in .NET 2.0 or 3.0, secure it and add it as a reference to your .NET 3.5 app.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: NameSpaceBinding and wsadmin I am trying to create a StringNameSpaceBinding using the wsadmin tool of Websphere 6.1
Here are the steps I take:
set cell [$AdminConfig getid /Cell:cell/]
$AdminConfig create StringNameSpaceBinding $cell { {name bindname} {nameInNameSpace Bindings/string} {stringToBind "This is the String value that gets bound"} }
But when i run this last step i get an error like this:
WASX7015E: Exception running command: "$AdminConfig create StringNameSpaceBinding $cell { {name bindname} {nameInNameSpace Bindings/string} {stringToBind "This is the String value that gets bound"} }"; exception information:
com.ibm.ws.scripting.ScriptingException: WASX7444E: Invalid parameter value "" for parameter "parent config id" on command "create"
Any idea what could be up with this?
Thanks
Damien
A: I'm betting that the following command:
set cell [$AdminConfig getid /Cell:cell/]
Doesn't work. Most likely, cell is not the name of your cell.
You don't need to specify a cell name; there's only one cell in the WAS topology. I would change this to:
set cell [$AdminConfig getid /Cell:/]
Good luck.
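Putting that together, a minimal Jacl sketch of the full sequence might look like the following (the binding values are the ones from the question; `$AdminConfig save` is needed to persist the change — this assumes a standalone profile, where `/Cell:/` resolves to the single cell):

```tcl
# Sketch of the full sequence; assumes /Cell:/ resolves to your one cell.
set cell [$AdminConfig getid /Cell:/]
$AdminConfig create StringNameSpaceBinding $cell {{name bindname} {nameInNameSpace Bindings/string} {stringToBind "This is the String value that gets bound"}}
# Persist the configuration change
$AdminConfig save
```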
A: I have faced a similar issue and it is now resolved after removing both node and cell name from the following line:
cell = AdminConfig.getid('/Cell:/Node:/Server:WebSphere_Portal/')
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Fixed Statement in C# We have similar code to the following in one of our projects. Can anyone explain (in simple English) why the fixed statement is needed here?
class TestClass
{
int iMyVariable;
static void Main()
{
TestClass oTestClass = new TestClass();
unsafe
{
fixed (int* p = &oTestClass.iMyVariable)
{
*p = 9;
}
}
}
}
A: The fixed statement will "pin" the variable in memory so that the garbage collector doesn't move it around when collecting. If it did move the variable, the pointer would become useless and when you used it you'd be trying to access or modify something that you didn't intend to.
A: You need it anywhere you do pointer arithmetic, to prevent the garbage collector from moving it around on you.
A: Because you are running in unsafe mode (using pointers), the fixed statement pins the variable at a specific location in memory. If you didn't use the fixed statement, the garbage collector could move the variable anywhere in memory, whenever it wants.
Hope this helps.
A: MSDN has a very similar example. The fixed statement basically pins the object so the garbage collector can't relocate it. In .NET, the runtime can reallocate an object to a "better" location at any time, so if you want to access its memory directly through a pointer you need to fix it in place.
A: It fixes the pointer in memory. Garbage collected languages have the freedom to move objects around memory for efficiency. This is all transparent to the programmer because they don't really use pointers in "normal" CLR code. However, when you do require pointers, then you need to fix it in memory if you want to work with them.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Javascript String concatenation faster than this example? I have to concatenate a bunch of Strings in Javascript and am searching for the fastest way to do so. Let's assume that the Javascript has to create a large XML-"file" that, naturally, consists of many small Strings. So I came up with:
var sbuffer = [];
for (var idx=0; idx<10000; idx=idx+1) {
  sbuffer.push('<xmltag>Data comes here... bla... </xmltag>');
}
// Now we "send" it to the browser...
alert(sbuffer.join(''));
Do not pay any attention to the loop or the other "sophisticated" code which builds the example.
My question is: For an unknown number of Strings, do you have a faster algorithm / method / idea to concatenate many small Strings to a huge one?
A: The question JavaScript string concatenation has an accepted answer that links to a very good comparison of JavaScript string concatenation performance.
Edit:
I would have thought that you could eke out a little more performance by using Duff's device, as the article suggests.
A: Changing the line:
sbuffer.push('Data comes here... bla... ');
to
sbuffer[sbuffer.length] = 'Data comes here... bla... ';
will give you 5-50% speed gain (depending on browser, in IE - gain will be highest)
Regards.
A: I think you are quite close to the optimum. YMMV, a great deal of speed is gained or lost within the JavaScript engine of the host process (e.g. browser).
A: I think that pushing the strings onto an array and then joining the array is the fastest technique for string concatenation in JavaScript. There is some supporting evidence in this discussion about W3C DOM vs. innerHTML. Note the difference between the innerHTML 1 and innerHTML 2 results.
A: As far as I know, your algorithm is good and known as a performant solution to the string concatenation problem.
A: Beware of IE's bad garbage collector! What do you plan to do with your array after using it? Presumably it will get GC'd.
You can gain performance on concatenation with joins, and then lose it again when the garbage collector runs afterwards. On the other hand, if you leave the array in scope all the time, and do NOT reuse it, that can be a good solution.
Personally I'd prefer the simplest solution: just use the += operator.
A: You might get a little more speed by buffering.
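If you want to measure the trade-off on your own target browsers rather than trust any one article, a rough micro-benchmark along these lines will do. The tag string and the 10000 count mirror the question; absolute numbers are meaningless, only the ratio on a given engine matters:

```javascript
// Rough micro-benchmark: += vs. array push/join.
// Results vary widely between engines, so measure on your targets.
function concatPlus(n) {
    var s = '';
    for (var i = 0; i < n; i++) {
        s += '<xmltag>Data comes here... bla... </xmltag>';
    }
    return s;
}

function concatJoin(n) {
    var buf = [];
    for (var i = 0; i < n; i++) {
        // buf[buf.length] = ... is often faster than buf.push(...) in old IE
        buf[buf.length] = '<xmltag>Data comes here... bla... </xmltag>';
    }
    return buf.join('');
}

function time(fn, n) {
    var t0 = Date.now();
    var out = fn(n);
    return { ms: Date.now() - t0, length: out.length };
}

// Both variants must produce identical output before timings mean anything.
var a = time(concatPlus, 10000);
var b = time(concatJoin, 10000);
console.log('+= : ' + a.ms + ' ms, join: ' + b.ms + ' ms');
```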
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How to solve Delphi's [Pascal Fatal Error] F2084 Internal Error: LA33? I'm really sick of this problem. Google searches always seem to suggest "delete all bpls for the package", "delete all dcus". Sometimes this just-does-not-work. Hopefully I can get some other ideas here.
I have a package written in-house, which had been installed without issue a few months ago. Having made a few changes to the source, I figured it was time to recompile/reinstall the package. Now I get two errors, the first if I choose "install" is
Access violation at address 02422108 in module 'dcc100.dll'. Read of address 00000000.
...or if I try to build/compile the package, I get
[Pascal Fatal Error] F2084 Internal Error: LA33
This is one of those Delphi problems that seems to occur time and time again for many of us. Would be great if we could collate a response something along the lines of "any one or combination of these steps might fix it, but if you do all these steps it will fix it...."
At the moment, I've removed all references to the bpl/dcp files for this package, but still getting the same error...
Using BDS2006 (Delphi)
Update 01-Oct-2008: I managed to solve this - see my post below. As I can't accept my own answer, I'm not entirely sure what to do here. Obviously these types of issues occur frequently for some people, so I'll leave it open for a while to get other suggestions. Then I guess if someone collates all the info into a super-post, I can accept the answer
A: These are bugs in the compiler/linker. You can find many references of these bugs on the internet in different Delphi versions, but they are not always the same bugs. That makes it difficult to give one solution for all those different kind of problems.
General solutions that might fix it are, as you noted:
*
*Remove *.dcp *.dcpil *.dcu *.dcuil *.bpl *.dll
*Rewrite your code in another way
*Tinker with compiler options
*Get the latest Delphi version
I personally found one of such bugs to be resolved if I turned off Range Checking. Others are solved if you don't use generics from another unit. And one was solved if the unit name and class name was renamed to be smaller.
And of course you should report any problem you have on http://qc.codegear.com
A: Maybe the following step will be a better solution:
Declare the array as a type and just define the class constant with this type, eg.
TMyArray = array[TErrEnum] of string;
TMyClass = class(TComponent)
private
const ErrStrs: TMyArray
= ('', //erOK
'Invalid user name or password', //erInvUserPass
'Trial Period has Expired'); //erTrialExp
protected
...
public
...
end;
This makes the array declaration explicit.
A: I wasted several hours on this issue, deleting dcu's, etc to no avail.
Finally, what worked for me was to uncheck Overflow Checking in Compiler Options, rebuilding the project, re-checking Overflow Checking, and rebuilding again. Voila! the problem has gone away. Go figure. (still using D7).
A: I managed to solve this, following the below procedure
*
*Create a new package
*One by one, add the components to the package, compile & install, until it failed.
*Investigate the unit causing the failure.
As it turns out, the unit in question had a class constant array, eg
TMyClass = class(TComponent)
private
const ErrStrs: array[TErrEnum] of string
= ('', //erOK
'Invalid user name or password', //erInvUserPass
'Trial Period has Expired'); //erTrialExp
protected
...
public
...
end;
So it appears that Delphi does not like class constants (or perhaps class constant arrays) in package components
Update: and yes, this has been reported to codegear
A: I had a similar case, where the solution was to remove the file urlmon.dcu from /lib/debug.
It also worked to turn off "use debug .dcus" altogether. This of course is not desirable, but you can use it to check whether the problem lies with any of your own units, or with any of delphi's units.
A: Try cleaning up the "Output Directory" so Delphi cannot find dirty .DCUs and is forced to build from the .PAS files.
Sometimes this helps.
In case you didn't configure an "output directory", try deleting (or better moving in a backup folder) all the .DCU files.
A: Disabling "Include remote debug symbols" in the Linker options fixed the issue for me (Delphi 2007, DLL project).
A: Delphi XE3 Update 2
F2084 Internal Error: URW1147
CASE 1:
problem was that a type was declared in a procedure of a generic class.
procedure TMyClass<TContainerItem, TTarget>.Foo();
type
TCacheInfo = record
UniqueList: TStringList;
UniqueInfo: TUniqueInfo;
end;
var
CacheInfo: TCacheInfo;
moving the type declaration to the private part of the class declaration solved this issue.
CASE 2:
problem in this case was related to an optional parameter:
unit A.pas;
interface
type
TTest<T> = class
public
type
TTestProc = procedure (X: T) of object;
constructor Create(TestProc_: TTestProc = nil);
end;
...
the internal compile error occurred as soon as a variable of the TTest class was declared in another unit: e.g.
unit B.pas:
uses A;
var
Test: TTest<TObject>;
solution was to make the constructor argument of TestProc_ non-optional.
A: For me, in D2010 disabling the compiler option "Emit runtime type information" did the trick.
A: From the various answers this error looks to be a generic unhandled exception by the compiler.
My issue was caused by mistakenly calling function X(someString:String) : Boolean; which altered the string and returned a boolean, using someString := X(someString);
A: In my experience with this Internal Error, I rewrote the code line by line, recompiling as I went, and realized that a certain if/else arrangement did not work, like this:
Internal Error Occurs
if (DataType in ASet)
begin
//do work
end
else if (DataType = B)
begin
//do work
end
else
begin
//do work
end;
How I solved it:
if (DataType = B)
begin
//do work
end
else if (DataType in ASet)
begin
//do work
end
else
begin
//do work
end;
Just switch the order of the conditions, as in the example. Hope it helps.
A: I just experienced a similar behaviour, resulting in internal error LA30.
The reason were newly added string constants.
After changing from
const cLogFileName : string = 'logfilename.log';
to
const cLogFileName = 'logfilename.log';
(and of course restarting of Delphi IDE) the error was not showing up anymore.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: tomcat session replication without multicast I am planning to use 2 dedicated root servers rented from a hosting provider. Those machines will run Tomcat 6 in a cluster.
If I add additional machines later on, it is unlikely that they will be reachable via multicast, because they will be located in different subnets.
Is it possible to run Tomcat without multicast? All tutorials for Tomcat 6 clustering include a multicast heartbeat. Are there any alternatives to SimpleTcpCluster?
Or are other alternatives more appropriate in this situation?
A: With no control over the distance between both servers (they might be in two different datacenters) and no dedicated inter-server-communication line, I'd rather run them via round-robin DNS or a loadbalancer that redirects clients to either www1.yourdomain.xxx or www2.yourdomain.xxx and handle server-communication carefully.
If the servers are heavily communicating with each other you might either look to change your architecture, optimize the hell out of your application to "fit" on one server (at least for a while) or go for dedicated hosting with control over the location, distance and cabling of your servers. Otherwise your inter-server-communication, heartbeat etc. would use the same channel as the clients that are communicating with it (e.g. the same network segment) which might slow everyone down.
If you are really expecting that much load I suppose there's at least some money involved, no? Use it wisely and use your setup skills for problems harder than setting up distributed clustering with no control or dedicated lines.
A: Seeing the comment to the question after having given my other answer I'm puzzled about what your question is... Is it about session replication? Cluster communication? It might be better to state your problem instead of your planned solution that has problems itself.
I'll state some possible problems together with quick answers:
Your application is CPU/RAM intensive
*
*Profile it, optimize it, try again
*Buy a bigger/better server
Your application is bandwidth intensive
*
*using the cheapo clustering you mentioned in your question will most likely make it worse, as the same (cloaked) channel is used for inter-server-communication as for client-server communication
*You might be able to separate different kinds of bandwidth e.g. by having dynamic content served from a different server than static content: No need for inter-server-communication here
Your application is storage intensive
*
*get a bigger server
*go for dedicated hosting and fit in as many spinning disks as you need
*see if other models (like Amazon's S3 storage) might work for you
Your application is likely to be slashdotted
*
*determine which of the above factors (or others) are determining the limits of your application, fix that.
You just need session replication?
*
*Tomcats SessionManager interface is small and can easily be implemented/extended yourself. Use it for any session replication you like. See the StandardManager documentation and implementation for more information
More ideas
*
*look at more flexible setups like EC2 (Amazon), Google's offerings or other cloud computing setups. Make use of their own cloud storage and inter-server communication facilities. Be careful not to depend too much on this infrastructure.
I certainly have forgotten something, but this might provide some starting point. Be more concrete about the nature of your underlying problem to get better answers :)
A: I am attempting to deploy the yale Central Authentication Server (CAS) and I would like to cluster it for redundancy, because this is a critical piece of infrastructure. CAS requires that the session be replicated, because after a user logs in application A and navigates to application B that partakes in the single-sign-on domain, application B sends a request to CAS to determine whether the user has an active 'ticket'. Since there is no device to indicate to application B to which node it should address itself to validate the ticket, all the active tickets in one node must be replicated to all nodes in the cluster. In other words, session stickiness is not a solution here, because application B, when validating the ticket in the user's cookie, is not aware of the sessionId of the original session in application A during which the user logged in.
Thus, CAS requires that the session be replicated across all nodes. The requirement that the network support multicasting adds a non-trivial amount of overhead, and makes this approach a bit heavier to deploy. I tested this project at google code:
http://code.google.com/p/memcached-session-manager
which seems very useful and simple to deploy (at least on a linux OS), but unfortunately only provides for session failover, and is not a session replication solution.
A: Just use http://code.google.com/p/memcached-session-manager/ . It works great. We have been using it for years in this setup, with 20 Tomcat servers sharing sessions. You can have one or two memcached servers handle the session replication.
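For reference, wiring that project in comes down to a `<Manager>` element in Tomcat's context.xml. A hypothetical fragment might look like the following — the attribute names are from memory of that project, and the host names and ignore pattern are examples only, so check the project wiki for the current configuration:

```xml
<!-- Hypothetical context.xml fragment; host names and the ignore
     pattern are illustrative, not required values. -->
<Context>
  <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
           memcachedNodes="n1:memcached1.example.com:11211,n2:memcached2.example.com:11211"
           requestUriIgnorePattern=".*\.(png|gif|jpg|css|js)$" />
</Context>
```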
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: OO vs. Layered; balancing "OO purity" with getting things done I believe in OO, but not to the point where inappropriate designs/implementations should be used just to be "OO Compliant".
So, how to deal with the Servlet/EJB/DataContainer layered architecture:
*
*Servlets receive requests and call a "business layer" (e.g. session EJBs)
*The business layer locates DataContainers from the database and manipulates them to implement the business logic
*DataContainers contain no real code, just get/set corresponding to the database.
This approach has appeal; the DataContainers are clear in what they do and it's very easy to know where data comes from.
Aside from not being OO, this leads to unclear Business Layer classes that can be hard to name and hard to organize.
Even if we were trying to be more "OO" (e.g. putting some of these methods in the DataContainers), some of these operations operate on more than one set of data.
How do you keep your Business Layer from getting confusingly procedural, but without polluting your DataContainers with business logic?
Example
class UserServlet {
handleRequest() {
String id = request.get("id");
String name = request.get("name");
if (UserBizLayer.updateUserName(id,name))
response.setStatus(OK);
else
response.setStatus(BAD_REQUEST);
}
}
class UserBizLayer {
    static boolean updateUserName(String id, String name) {
        long key = toLong(id);
        User user = userDAO.find(key);
        if (user == null)
            return false;
        if (!validateUserName(name))
            return false;
        user.setName(name);
        userDAO.update(user);
        return true;
    }
    static boolean validateUserName(String name) {
        // do some validations and return
    }
}
class User {
long key;
String name;
String email;
// imagine getters/setters here
}
*
*We don't want validateUserName on the user, since it only operates on a name; I guess it could go into another class, but then we have another procedural "util"-type class
*We don't want persistence methods on the User, since there's value in decoupling data structures from their persistence strategy
*We don't want business logic in our Servlet, since we may need to re-use that logic elsewhere
*We don't want our business logic in our User, as this draws in way too much to the User class, making re-use of the business logic difficult and coupling the user with its persistence strategy
I realize this example isn't that bad, but imagine 10 DataContainers and 20 BizLayer objects with several methods each. Imagine that some of those operations aren't "centered" on a particular data container.
How do we keep this from being a procedural mess?
A: So I'll address my thoughts on this in a few bullet points:
*
*It seems in a Java EE system at some point you have to deal with the plumbing of Java EE, the plumbing doesn't always benefit from OO concepts, but it certainly can with a bit of creativity and work. For example you could might take advantage of things such as AbstractFactory, etc to help commonize as much of this Infrastructure as possible.
*A lot of what you are looking into is discussed in Eric Evans' excellent book, Domain-Driven Design. I highly recommend you look at it, as he addresses the problem of expressing the knowledge of the domain while dealing with the technical infrastructure to support it.
*Having read and grokked some of DDD, I would encapsulate my technical infrastructure in repositories. The repositories would all be written to use a strategy for persistence that is based on your session EJBs. You would write a default implementation that knows how to talk to your session EJBs. To make this work, you would need to add a little bit of convention and specify that convention/contract in your interfaces. The repositories do all of the CRUD and should only do more than that if absolutely needed. If you said "My DAOs are my repositories", then I would agree.
*So, to continue with this: you need something to encapsulate the unit of work that is expressed in UserBizLayer. At this level I think the nature of it is that you are stuck writing code that is all going to be transaction script. You are creating a separation of responsibility and state. This is typically how I've seen it done within Java EE systems as a default sort of architecture. But it isn't object oriented. I would try to explore the model and see if I could at least try to commonize some of the behaviours that are written into the BizLayer classes.
*Another approach I've used before is to get rid of the BizLayer classes and then proxy the calls from the domain to the actual Repositories/DAOs doing the operation. However, this might require some investment in building infrastructure. But you could do a lot with a framework like Spring, using some AOP concepts to make this work well and cut down the amount of custom infrastructure needed.
A: since you're implementing classes and objects, your solution is going to be OO regardless of how you layer things - it just may not be very well-structured depending on your situation/needs! ;-)
as to your specific question, it would in some cases make sense for validateUserName to belong to the User class, since every User would like to have a valid name. Or you can have a validation utility class assuming that other things have names that use the same validation rules. The same goes for email. You could split these into NameValidator and EmailValidator classes, which would be a fine solution if they will be used a lot. You could also still provide a validateUserName function on the User object that just called the utility class method. All of these are valid solutions.
One of the great joys about OOD/OOP is that when the design is right, you know that it is right, because a lot of things just fall out of the model that you can do that you couldn't do before.
In this case I would make NameValidator and EmailValidator classes, because it seems likely that other entities will have names and email addresses in future, but I would provide validateName and validateEmailAddress functions on the User class because that makes for a more convenient interface for the biz objects to use.
the rest of the 'we-don't-want' bullets are correct; they are not only necessary for proper layering, but they are also necessary for a clean OO design.
layering and OO go hand-in-glove based on a separation of concerns between the layers. I think you've got the right idea, but will need some utility classes for common validations as presented
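Concretely, the validator split suggested above might look like the sketch below. The specific rule (non-blank, at most 50 characters) is made up for illustration — substitute your real business rules — but the shape is the point: the rule lives in one reusable class, and User just delegates to it for a convenient interface.

```java
// Hypothetical sketch of the NameValidator split discussed above.
class NameValidator {
    // Illustrative rule only -- replace with your real validations.
    static boolean isValid(String name) {
        return name != null && !name.trim().isEmpty() && name.length() <= 50;
    }
}

class User {
    private long key;
    private String name;

    String getName() { return name; }

    void setName(String name) { this.name = name; }

    // Convenience wrapper: callers get a clean interface on User,
    // while the rule itself stays reusable for other named entities.
    boolean hasValidName() { return NameValidator.isValid(name); }
}
```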
A: Think about how these tasks would be done if there was no computer, and model your system that way.
Simple example... Client fills out a form to request a widget, hands it to an employee, the employee verifies the client's identity, processes the form, obtains a widget, gives the widget and a record of the transaction to the client and keeps a record of the transaction somewhere for the company.
Does the client store their data? No, the employee does. What role is the employee taking when he's storing the client data? Client Records Keeper.
Does the form verify that it was filled out correctly? No, the employee does. What role is the employee taking when he's doing that? Form Processor.
Who gives the client the widget? The employee acting as a Widget Distributor
And so on...
To push this into a Java EE implementation...
The Servlet is acting on behalf of the Client, filling out the form (pulling data from the HTTP request and making the appropriate Java object) and passing it to the appropriate employee (EJB), who then does with the form what needs to be done. While processing the request, the EJB might need to pass it along to another EJB that specializes in different tasks, part of which would include accessing/putting information from/to storage (your data layer). The only thing that shouldn't map directly to the analogy should be the specifics on how your objects communicate with each other, and how your data layer communicates with your storage.
A: I've had the same thoughts myself.
In the traditional MVC the most important thing is separating the View from the Model and Controller portions. It seems to be a good idea to separate the controller and the model simply because you can end up with bloated model objects:
public class UserModel extends DatabaseModel implements XMLable, CRUDdy, Serializable, Fooable, Barloney, Baznatchy, Wibbling, Validating {
// member fields
// getters and setters
// 100 interface methods
}
Whereas you can have separate controllers (or entire patterns) for many of the interfaces above, and yes, it's rather procedural in nature but I guess that's how things work these days. Alternatively you can realise that some of the interfaces are doing the same thing (CRUDdy - database storage and retrieval, Serializable - the same to a binary format, XMLable, the same to XML) so you should create a single system to handle this, with each potential backend being a separate implementation that the system handles. God, that's really badly written.
Maybe there's something like "co-classes" that let you have separate source files for controller implementation that act as if they're a member of the model class they act on.
As for business rules, they often work on multiple models at the same time, and thus they should be separate.
A: I think this is a question about "separation of concerns". You seem to be a long way down the right track with your layered architecture, but maybe you need to do more of the same - i.e. create architectural layers within your Java EE layers?
A DataContainer looks a lot like the Data Transfer Objects (DTO) pattern.
A modern OO design has a lot of small classes, each one related to a small number of "friends", e.g. via composition. This might produce a lot more classes and Java boiler-plate than you're really comfortable with, but it should lead to a design that is better layered and easier to unit test and maintain.
(+1 for the question, +1 for the answer about when you know you have the layering right)
A: Crudely, the "We don't want" section has to go. Either you want it to be right, or you want it to stay as it is. There's no point in not creating a class when you need one. "We have plenty" is a bad excuse.
Indeed, it is bad to expose all the internal classes, but in my experience, creating a class per concept (i.e. a User, an ID, a Database concept...) always helps.
Next to that, isn't the Facade pattern a way to solve the existence of loads of BizRules classes, hidden behind one well-organized and clean interface?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Echo2: How to create a centered Pane? I'm new to the Echo 2 framework. I think it should be simple, but I found no way to center the generated content horizontally and vertically.
Is there any possibility to get a centered ContentPane (or something similar) with a fixed width and height?
thx,
André
A: Found the solution with echoPointNG:
public static ContainerEx createCenteredPane( int width, int height ) {
ContainerEx cp = new ContainerEx();
cp.setPosition( Positionable.ABSOLUTE );
cp.setOutsets( new Insets( width / 2 * -1, height / 2 * -1, 0, 0 ) );
cp.setWidth( new Extent( width ) );
cp.setHeight( new Extent( height ) );
cp.setLeft( new Extent( 50, Extent.PERCENT ) );
cp.setTop( new Extent( 50, Extent.PERCENT ) );
return cp;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I make a JavaScript call to a WCF service hosted on a different domain? We are designing a web application using ASP.NET and AJAX and we want to host our WCF Service Layer on a different website and make JavaScript calls to the Service Layer from our client pages. We understand that the browser will not allow AJAX calls to a different port or domain. What is the best way to architect a solution? We are considering using a proxy layer with services hosted on the same domain as the client which has a web reference to the service layer. Is there a better solution?
A: It's generally best to limit the number of domains accessed by your page. A server-side proxy is really a good way to go.
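To illustrate the idea, here is a minimal sketch of the URL mapping such a proxy performs. The host name and the /proxy prefix are assumptions for illustration only; a real proxy would also copy headers and the request body in both directions.

```java
public class SameOriginProxy {
    // Illustrative remote base; in practice this comes from configuration.
    static final String SERVICE_BASE = "http://service.internal:8731";

    // Requests arriving at /proxy/<path> on the web server are rewritten
    // to the service host, so the browser only ever talks to one origin.
    static String buildTargetUrl(String localPath, String query) {
        String path = localPath.replaceFirst("^/proxy", "");
        String target = SERVICE_BASE + path;
        if (query != null && !query.isEmpty()) {
            target += "?" + query;
        }
        return target;
    }

    public static void main(String[] args) {
        // → http://service.internal:8731/orders/5?format=json
        System.out.println(buildTargetUrl("/proxy/orders/5", "format=json"));
    }
}
```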
A: I think the best way is to call a local page which calls the remote resource and returns the result. That way, you avoid cross-domain problems.
A: *
*You can do virtual hosting of the service and the website under the same domain but in different folders.
*Define the services in different DLLs, create .svc files in your website, and point the .svc files to the DLL that contains the services.
*Use a server-side proxy.
A: Your load balancer could send all requests to /service to your service server.
If you don't have a load balancer, you can have your webserver act as a reverse proxy to your service server. If you are using IIS7, you can do this with the Application Request Routing module.
A: Prototype has a plug-in for this. The issue is on the client, not the server. www.mellowmorning.com/2007/10/25/introducing-a-cross-site-ajax-plugin-for-prototype/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What is the best way to Update 'TranslatePosition' from Silverlight beta 2 to RC? I am upgrading a silverlight beta 2 app to RC0 and have a function that translates a point from a child element to it's parent. The purpose of the function is to ensure that an element appears exactly on top of the child even though they are not on the same canvas and don't share a parent.
Here is the current function:
protected Point TranslatePosition(Point current, Panel from, Panel to, MouseEventArgs e)
{
Point rtn = new Point(-1, -1);
// get point relative to existing parent
Point fromPoint = e.GetPosition(from);
// get point relative to new parent
Point toPoint = e.GetPosition(to);
// calculate delta
double deltaX = fromPoint.X - toPoint.X;
double deltaY = fromPoint.Y - toPoint.Y;
// calculate new position
rtn = new Point(current.X - deltaX, current.Y - deltaY);
return rtn;
}
Notice that it relies on the MouseEventArgs.GetPosition function to get the position relative to existing and new parent. In cases where there is no MouseEventArgs available, we were creating a new instance and passing it in. This was a hack but seemed to work. Now, in RC0, the MouseEventArgs constructor is internal so this hack no longer works.
Any ideas on how to write a method to do the translation of a point in RC0 that doesn't rely on MouseEventArgs.GetPosition?
A: See TransformToVisual method of framework element. It does exactly what you want: given another control, it generates a new transform that maps the coordinates of a point relative to the current control, to coordinates relative to the passed in control.
var transform = from.TransformToVisual(to);
return transform.Transform(current);
A: Yes, but... there appears to be a problem with how the rendering transform pipeline accepts updates, which is different from how it works in WPF.
I've created a wiki entry at daisley-harrison.com that talks about this. I'll turn it into a blog entry at blog.daisley-harrison.com when I get around to it later.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I get an XElement's .InnerText value in Linq to XML? I'm trying to extract the polygons from placemarks in a KML file. So far so good:
Imports <xmlns:g='http://earth.google.com/kml/2.0'>
Imports System.Xml.Linq
Partial Class Test_ImportPolygons
Inherits System.Web.UI.Page
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
Dim Kml As XDocument = XDocument.Load(Server.MapPath("../kmlimport/ga.kml"))
For Each Placemark As XElement In Kml.<g:Document>.<g:Folder>.<g:Placemark>
Dim Name As String = Placemark.<g:name>.Value
...
Next
End Sub
End Class
I'd like to capture the entire <polygon>...</polygon> block as a string. I tried something like this (where the ... is above):
Dim Polygon as String = Placemark.<g:Polygon>.InnerText
but the XElement object doesn't have an InnerText property, or any equivalent as far as I can tell. How do I grab the raw XML that defines an XElement?
A: Have you tried:
Placemark.ToString()
A: What I was missing was that Placemark.<g:Polygon> is a collection of XElements, not a single XElement. This works:
For Each Placemark As XElement In Kml.<g:Document>.<g:Folder>.<g:Placemark>
Dim Name As String = Placemark.<g:name>.Value
Dim PolygonsXml As String = ""
For Each Polygon As XElement In Placemark.<g:Polygon>
PolygonsXml &= Polygon.ToString
Next
Next
XElement.ToString is the equivalent of InnerText, as tbrownell suggested.
A: I missed the enumeration also. When using .Value it is possible to receive a null reference exception. Try the equivalent of this instead:
(string)Placemark.<g:name>
Sorry, not sure of the VB syntax... it has been a while since I have coded in VB.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Tool to visualize code flow in Java? I'm inspired by the C/C++ question for a code flow visualization tool.
Is there such a thing for Java servlets or applications?
A: http://code.google.com/p/jtracert/ was discontinued. The link for new project is:
https://github.com/bedrin/jsonde
A: I found that Doxygen works for Java also.
A: Source Navigator says it does Java, though I've only ever used it for C/C++ myself.
http://developer.berlios.de/projects/sourcenav
A: Maybe Ctrl + Alt + H in Eclipse / IntelliJ IDEA? (also present in NetBeans somewhere)
Or "data flow from/to" in IntelliJ IDEA?
A: JBuilder's UML view goes some of the way
A: IBM has an old (2004) structure analysis tool that does some visualization of Java code.
Netbeans' UML does a decent job reverse engineering the code too.
A: I have tested this and is AWESOME for automatic sequence diagram generation
https://github.com/bedrin/jsonde
A: HandyEdit has made a plugin that does exactly this: http://plugins.intellij.net/plugin/?id=3739
A: You mean something like Jeliot and jGrasp?
A: In UML 2 there are two basic categories of diagrams: structure diagrams and behavior diagrams. Every UML diagram belongs to one these two diagram categories. The purpose of structure diagrams is to show the static structure of the system being modeled. They include the class, component, and or object diagrams. Behavioral diagrams, on the other hand, show the dynamic behavior between the objects in the system, including things like their methods, collaborations, and activities. Example behavior diagrams are activity, use case, and sequence diagrams.
Here, my understanding is that the OP is asking for a tool to visualize code flow (this is the title of the question), i.e. dynamic behavior. A perfect diagram for this would be the sequence diagram.
But, AFAIK, neither UML reverse engineering tools nor Doxygen can figure out such diagrams from sources. These tools know how to generate structure diagrams (e.g. class diagrams), but not behavior diagrams (this would require execution). So these tools don't answer the question (even for C++).
IMO, to visualize the code flow, you'll have to look at the call hierarchy as someone pointed out.
A: My favorite one was Creole. Unfortunately last update was made on August 10, 2007... but still it is worth trying.
Another option, but more for the architecture visualization than code flow, is Structure101, which is a great tool and absolutely worth to check out.
A: Check out Onyem JTracer The tool automatically generates execution flow diagrams by analysis of your java program. I have used it with a relatively large codebase as well.
A: Heatlamp will visualize running Java code. It can also visualize Java stack traces.
A: I think Zeta Code seeks to do this.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36"
} |
Q: Encrypt plain text inside EXE / RAM :: HxD editor HxD (hex editor) allows you to search/view/edit RAM.
How can I protect an EXE against such an editor? Data is encrypted inside the INI/registry/DB,
but is decoded in RAM.
What is the solution? Decode at runtime, use the data, and re-encode it in RAM as soon as possible?
A: You can use an exe protector like Themida (one that will make even viewing the memory difficult), but remember that users will hate you for that. Also, remember that Themida, like all other protectors, is routinely cracked, despite what vendors of such software claim.
Short: There is no good way to prevent reverse engineering, ever.
A: You are trying to hold back the sea with a teaspoon.
This kind of "in memory protection" is what good (evil?) malware does. I have seen live demonstrations of how to break this kind of "protection". It is ultimately useless - at some point your clear text must be available for execution by the computer. A competent programmer/reverse engineer can easily find when the clear text becomes available and then just pause the program and examine the process memory at their leisure.
This is the same problem the RIAA faces with DRM: the requirements are defective. You want to hide your program from your users, and yet in order for them to use it, they must have the clear text at some point.
Your only possible salvation is TPM, but it is so rare in the consumer market that your user base will be down to single digits.
A: Not too sure if they do that at this level, but KeePass, an open source password manager, claims to take every available care to hide passwords from investigation. It might be interesting to see how they do that... :-)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Is it feasible to create a REST client with Flex? I'm starting a project using a Restful architecture implemented in Java (using the new JAX-RS standard)
We are planning to develop the GUI with a Flex application. I have already found some problems with this implementation using the HTTPService component (the response error codes, headers access...).
Any of you guys have some experience in a similar project. Is it feasible?
A: As many have pointed out HTTPService is a bit simplistic and doesn't do all that you want to do. However, HTTPService is just sugar on top of the flash.net.* classes like URLLoader, URLRequest and URLRequestHeader. Using these you can assemble most HTTP requests.
When it comes to support for methods other than GET and POST, the problem mostly lies in the fact that some browsers (for example Safari) don't support these, and the Flash Player relies on the browser for all its networking.
A: There are definite shortcomings of Flex's ability to act as a pure RESTful client.
The comments below are from this blog:
The problem is HTTPService class has
several major limitations:
*
*Only GET and POST methods are supported out of the box (unless you
use FDS and set useProxy attribute to
true)
*Not able to set request headers and there is no access to response
headers. Therefore I am not able to
access the response body in the case
of an error.
*If HTTPService gets a status code
anything other than 200, it considers it
an error (even 201, ouch!!). The
FaultEvent doesn’t provide information
about the status code or the response
body. The Flex client will have no
idea what went wrong.
Matt Raible also gave a nice presentation on REST with Rails, Grails, GWT and Flex that have some good references linked from it.
Whether it's feasible or not really depends on how much your willing to work around by proxying, etc.
A: I've been working on an open source replacement for the HTTPService component that fully supports REST. If interested, you can find the beta version (source code and/or compiled Flex shared runtime library) and instructions here:
http://code.google.com/p/resthttpservice/
A: The problem here is that a lot of the web discussions around this issue are a year or more old. I'm working through this same research right now, and this is what I've learned today.
This IBM Developer Works article from August 2008 by Jorge Rasillo and Mike Burr shows how to do a Flex front-end / RESTful back-end app (examples in PHP and Groovy). Nice article. Anyway, here's the take away:
*
*Their PHP/Groovy code uses and expects PUT and DELETE.
*But the Flex code has to use POST, but sets the HTTP header X-Method-Override to DELETE (you can do the same for PUT I presume).
*Note that this is not the Proxy method discussed above.
// Flex doesn't know how to generate an HTTP DELETE.
// Fortunately, sMash/Zero will interpret an HTTP POST with
// an X-Method-Override: DELETE header as a DELETE.
deleteTodoHS.headers['X-Method-Override'] = 'DELETE';
What's happening here? the IBM web server intercepts and interprets the "POST with DELETE" as a DELETE.
So, I dug further and found this post and discussion with Don Box (one of the original SOAP guys). Apparently this is a fairly standard behavior since some browsers, etc. do not support PUT and DELETE, and is a work-around that has been around a while. Here's a snippet, but there's much more discussion.
"If I were building a GData client, I honestly wonder why I'd bother using DELETE and PUT methods at all given that X-HTTP-Method-Override is going to work in more cases/deployments."
My take away from this is that if your web side supports this X-Method-Override header, then you can use this approach. The Don Box comments make me think it's fairly well supported, but I've not confirmed that yet.
Another issue arises around being able to read the HTTP response headers. Again, from a blog post in 2007 by Nathan de Vries, we see this discussed. He followed up that blog post and discussion with his own comment:
"The only change on the web front is that newer versions of the Flash Player (certainly those supplied with the Flex 3 beta) now support the responseHeaders property on instances of HTTPStatusEvent."
I'm hoping that means it is a non-issue now.
A: The short answer is yes, you can do RESTful with Flex. You just have to work around the limitations of the Flash player (better with latest versions) and the containing browser's HTTP stack limitations.
We've been doing RESTful client development in Flex for more than a year after solving the basic HTTP request header and lack of PUT and DELETE via the rails-esque ?_method= approach. Tacky perhaps, but it gets the job done.
I noted some of the headers pain in an old blog post at http://verveguy.blogspot.com/2008/07/truth-about-flex-httpservice.html
A: Flex support for REST is weak at best. I spent a lot of time building a prototype, so I know most of the issues. As mentioned previously, out of the box there is only support for GET and POST. At first glance it appears that you can use the proxy config in LiveCycle Data Services or Blaze to get support for PUT and DELETE. However, it's a sham. The request coming from your Flex app will still be a POST. The proxy converts it to PUT or DELETE on the server side to trick your server-side code. There are other issues as well. It's hard to believe that this is the best that Adobe could come up with. After my evaluation we decided to go in another direction.
A: Yes, I was able to use POST and access headers with this component:
http://code.google.com/p/as3httpclient/wiki/Links
Example
A: I'm working right now on an application that relies heavily on REST calls between Flex and JavaScript and Java Servlets. We get around the response error code problem by establishing a convention of a <status id="XXX" name="YYYYYY"> block that gets returned upon error, with error IDs that roughly map to HTTP error codes.
We get around the cross-site scripting limitations by using a Java Servlet as an HTTP proxy. Calls to the proxy (which runs on the same server that serves the rest of the content, including the Flex content, sends the request to the other server, then sends the response back to the original caller.
A: RestfulX has solved most/all of the REST problems with Flex. It has support for Rails/GAE/Merb/CouchDB/AIR/WebKit, and I'm sure it would be a snap to connect it to your Java implementation.
Dima's integrated the AS3HTTPClient Library into it also.
Check it out!
A: Actually, we are already using Flex with a REST-style framework. As mbrevort already mentioned, PUT and DELETE methods cannot be directly used. Instead we are doing PUT via a POST, and for DELETE we are using a GET on a resource with a URL parameter like ?action=delete.
This is not 100% Rest style, so I am not sure, if this works with a JSR 311 implementation. You will need some flexbility on the server side to workaround the PUT and DELETE restrictions.
With regards to error handling, we have implemented an error service. In case of an server side error, the Flex application can query this error service to get the actual error message. This is also much more flexible than just mapping HTTP return codes to static messages.
However thanks To ECMA scripting of Flex working with XML based REST services is very easy.
A: REST is more of an ideology than anything. You go to the REST presentations and they have Kool-Aid dispensers.
For Flex apps, rolling a stack in conjunction with BlazeDS and AMF data marshalling is more convenient and more performant.
A: The way I've managed this in the past is to utilize a PHP proxy that deals with the remote web service calls and returns RTU JSON to the client.
A: May be the new flex 4 is the answer http://labs.adobe.com/technologies/flex4sdk/
A: The book Flexible Rails may be helpful -- It is an excellent resource on how to use Flex as a RESTful client. Although it focuses on using Flex with the Rails framework, I believe the concepts apply to any RESTful framework. I used this book to get up to speed quickly on using Flex with REST.
A: I work on a big flex project for Franklin Covey. We use REST services. In order to support this. We created a XMLHttpRequest wrapper. By using external interface with some event handlers. We opensourced the library. You can check it out at https://github.com/FranklinCovey/AS3-XMLHttpRequest
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32"
} |
Q: Is there a JUnit TestRunner for running groups of tests? I am currently using JUnit 4 and have a need to divide my tests into groups that can be run selectively in any combination. I know TestNG has a feature to annotate tests to assign them to groups, but I can't migrate to TestNG right now. It seems this could easily be accomplished in JUnit with some custom annotations and a custom JUnit TestRunner. I've checked both the JUnit docs and searched the web but couldn't find such a thing. Is anyone aware of such a TestRunner?
Update: Thanks for your replies regarding test suites. I should have addressed these in my original question. Here we go: I don't want to use test suites because they would require me to manually create and manage them, which means touching ALL my tests and arranging them into suites manually (too much work and a maintenance nightmare). All I need to do is run all unit tests, except a few that are really integration tests. So I want to annotate these, and run all others. At other times, I want to just run the integration tests. I also have the need to put a unit test into multiple groups, which is not possible with suites. Hope this helps to clear things up.
Update 2: If JUnit doesn't have this OOB, I'm looking for an Open Source library that adds this to JUnit (annotations + custom JUnit Test Runner).
A: Check out Spring's SpringJUnit4ClassRunner. I've used it to optionally run tests based on a System property, using the IfProfileValue annotation.
This:
@IfProfileValue(name="test-groups", values={"unit-tests", "integration-tests"})
public void testWhichRunsForUnitOrIntegrationTestGroups() {
// ...
}
Will run if the System property 'test-groups' is set to either 'unit-tests' or 'integration-tests'.
Update: JUnitExt has @Category and @Prerequisite annotations and looks like it should do what you need. However, I've never used it myself, so I can't vouch for it.
A: First, you are addressing two problems - unit tests (often in the same package as the unit under test) and integration tests. I usually keep my integration tests in a separate package, something like com.example.project.tests. In eclipse, my projects look like:
project/
src/
com.example.project/
tsrc/
com.example.project/
com.example.project.tests/
Right-clicking on a package and selecting 'run' runs the tests in the package; doing the same on the source folder runs all the tests.
You can achieve a similar effect, although you expressed a disinterest in it, by using the Suite runner. However, this violates DRY - you have to keep copies of the test names up to date in the suite classes. On the other hand, you can easily put the same test in multiple suites.
@RunWith(Suite.class)
@Suite.SuiteClasses( {
TestAlpha.class,
TestBeta.class })
public class GreekLetterUnitTests {
}
Of course, I really should be keeping these things automated. A good method for doing that is to use the Ant task.
<target name="tests.unit">
<junit>
<batchtest>
<fileset dir="tsrc">
<include name="**/Test*.java"/>
<exclude name="**/tests/*.java"/>
</fileset>
</batchtest>
</junit>
</target>
<target name="tests.integration">
<junit>
<batchtest>
<fileset dir="tsrc">
<include name="**/tests/Test*.java"/>
</fileset>
</batchtest>
</junit>
</target>
A: JUnit has no such runner at the moment. Addressing the underlying issue, the need to get reasonable assurance from a test suite in a limited amount of time, is our highest development priority for the next release. In the meantime, implementing a Filter that works through annotations seems like it wouldn't be a big project, although I'm biased.
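Such an annotation-driven filter is indeed small. Here is a sketch using only reflection — the @Groups annotation is hypothetical, JUnit 4 ships nothing like it out of the box:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.*;

public class GroupFilterSketch {

    // Hypothetical grouping annotation, applied per test method.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Groups {
        String[] value();
    }

    // A test method is selected if it is tagged with at least one requested group.
    static boolean matches(Method m, Set<String> requested) {
        Groups g = m.getAnnotation(Groups.class);
        if (g == null) {
            return false;
        }
        for (String name : g.value()) {
            if (requested.contains(name)) {
                return true;
            }
        }
        return false;
    }

    // Collects the names of the matching test methods, sorted for stable output.
    static List<String> select(Class<?> testClass, Set<String> requested) {
        List<String> selected = new ArrayList<>();
        for (Method m : testClass.getDeclaredMethods()) {
            if (matches(m, requested)) {
                selected.add(m.getName());
            }
        }
        Collections.sort(selected);
        return selected;
    }

    // Example "test class": one method can belong to several groups.
    static class SampleTests {
        @Groups({"unit"}) public void testFast() {}
        @Groups({"unit", "integration"}) public void testBoth() {}
        public void testUngrouped() {}
    }

    public static void main(String[] args) {
        System.out.println(select(SampleTests.class, Collections.singleton("integration"))); // → [testBoth]
    }
}
```

The same predicate could back a custom JUnit 4 Filter or Runner; only the reflection core is shown here.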
A: No, there is no similar concept to TestNG groups, unfortunately. It was planned for JUnit4, but for some unclear reason, it was dropped from the planning.
A: You can create suites, although that puts all the configuration in the suite, and not in annotations.
A: JUnit 3 allows you to create test-suites which can be run like any other test. Doesn't JUnit 4 have a similar concept?
A: If you're using ANT or maven, you can control which tests are run by filtering the tests by name. A bit awkward, but it might work for you.
A: TestNG has my vote. It's annotation based, can run tests as groups or single tests, etc., can be linked into Maven, and can run all JUnit tests as part of its test runs.
I highly recommend it over JUnit.
A: My advice is to simply ditch JUnit and use TestNG. Once you get used to TestNG, JUnit looks like the Stone Age.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: Install a chain of embedded MSI packages each using an embedded UI - display common progress bar I'm using Windows Installer 4.5 new features and WiX to generate MSI packages.
I have created an MSI chain installation in order to install a collection of other MSI packages as a transaction. Each package is using the new Embedded UI option so the UI can be WPF. Everything works OK this far.
Except one of the goals would be to display a common progress bar for all installs. At this moment, I have a progress bar in the chain installer, but this one reaches 100% before the other packages start to run.
I have read a post, Fun with MsiEmbeddedChainer, that states that what I want can be achieved. But I can't get it to work. I would like a bit more detailed explanations and maybe some code samples.
A: You can manually control the status of the progress bar by issuing INSTALLMESSAGE_PROGRESS messages to the installer. Details can be found here:
http://msdn.microsoft.com/en-us/library/aa370354.aspx
In particular, you'll need a custom action to manage the status bar (it is what will be responsible for making the appropriate calls to MsiProcessMessage. I recommend that you also use it to spawn the sub-installers. Here is some pseudocode to illustrate what I have in mind:
LONG LaunchSubinstallersCA(MSIHANDLE current_installer)
{
// Initialize the progress bar range and position
MsiProcessMessage(current_installer, reset_message); // see MSDN for details
for each (subinstaller in list_of_installers)
{
launch subinstaller; // see MSDN for details
// Update the progress bar to reflect most recent changes
MsiProcessMessage(current_installer, increment_message); // see MSDN for details
}
return (result);
}
The main down-side is that the progress bar will progress in a somewhat choppy fashion. If you really wanted to get fancy and make it smoother, you could launch a separate "listener" thread that would wait for updates from the sub-installer to make finer-grained increments to the progress bar. Something like:
LONG LaunchSubinstallersCA(MSIHANDLE current_installer)
{
// Initialize the progress bar range and position
MsiProcessMessage(current_installer, reset_message); // see MSDN for details
launch_listener_thread(); // launches listener_thread_proc (see below)
for each (subinstaller in list_of_installers)
{
launch subinstaller; // see MSDN for details
}
tell_listener_thread_to_stop();
optionally_wait_for_listener_thread_to_die();
return (result);
}
void listener_thread_proc()
{
// Loop until told to stop
while (!time_for_me_to_stop)
{
// Listen for update from sub-installer
timed_wait_for_update(); // probably required IPC, perhaps a named event?
// Only update the progress bar if an update message was actually received
if (!timeout)
{
// Update the progress bar to reflect most recent changes
MsiProcessMessage(current_installer, increment_message); // see MSDN for details
}
}
}
Obviously each sub-installer would have to be able to signal the main installer that progress has been made, so this will potentially require more extensive changes across your product. Whether or not that is worth the effort is up to you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Retrieving the original error number from a COM method called via reflection I have a VB6 COM component which I need to call from my .Net method. I use reflection to create an instance of the COM object and activate it in the following manner:
f_oType = Type.GetTypeFromProgID(MyProgId);
f_oInstance = Activator.CreateInstance(f_oType);
I need to use GetTypeFromProgID rather than using tlbimp to create a library against the COM DLL as the ProgId of the type I need to instantiate can vary. I then use Type.InvokeMember to call the COM method in my code like:
f_oType.InvokeMember("Process", BindingFlags.InvokeMethod, null, f_oInstance, new object[] { param1, param2, param3, param4 });
I catch any raised TargetInvocationException's for logging and can get the detailed error description from the TargetInvocationException.InnerException field. However, I know that the COM component uses Error.Raise to generate an error number and I need to somehow get hold of this in my calling .Net application.
The problem seems to stem from the TargetInvocationException not containing the error number as I'd expect if it were a normal COMException so:
How can I get the Error Number from the COM object in my .Net code?
or
Can I make this same call in a way that would cause a COMException (containing the error number) rather than a TargetInvocationException when the COM component fails?
Please also note that the target platform is .Net 2.0 and I do have access to the VB6 source code but would regard altering the error message raised from VB6 to contain the error code as part of the text to be a bit of a hack.
A: You will handle COMException and use the ErrorCode property of that exception object. Normally in visual basic DLLs if you are throwing custom errors you would raise the error with:
Err.Raise vbObjectError + 88, "Project1.Class1.Test3()", "Forced error test"
If this is the case, you will need to subtract vbObjectError (-2147221504) from the exception's ErrorCode to get the actual error number. If not, just use the ErrorCode value.
Example VB dll code: (from Project1.Class1)
Public Sub Test3()
MsgBox "this is a test 3"
Err.Raise vbObjectError + 88, "Project1.Class1.Test3()", "Forced error test"
End Sub
Example C# consumption handling code:
private void button1_Click(object sender, EventArgs e)
{
try
{
var p = new Class1();
p.Test3();
}
catch (COMException ex)
{
int errorNumber = (ex.ErrorCode - (-2147221504));
MessageBox.Show(errorNumber.ToString() + ": " + ex.Message);
}
catch(Exception ex)
{ MessageBox.Show(ex.Message); }
}
The ErrorCode in this test I just completed returns 88 as expected.
A: I looked at your code a little closer and with reflection you handle TargetInvocationException and work with the inner exception that is a COMException... Code example below (I ran and tested this as well):
private void button1_Click(object sender, EventArgs e)
{
try
{
var f_oType = Type.GetTypeFromProgID("Project1.Class1");
var f_oInstance = Activator.CreateInstance(f_oType);
f_oType.InvokeMember("Test3", BindingFlags.InvokeMethod, null, f_oInstance, new object[] {});
}
catch(TargetInvocationException ex)
{
//no need to subtract -2147221504 if non custom error etc
int errorNumber = ((COMException)ex.InnerException).ErrorCode - (-2147221504);
MessageBox.Show(errorNumber.ToString() + ": " + ex.InnerException.Message);
}
catch(Exception ex)
{ MessageBox.Show(ex.Message); }
}
A: Just want to offer an update to @sharvell's catch code. Unless you're absolutely sure InnerException is a COMException, it's better to safely test it first. Otherwise you'll have an exception in your exception handler. Whoops!
catch(TargetInvocationException ex)
{
if( ex.InnerException != null && ex.InnerException is COMException )
{
COMException ce = (COMException)ex.InnerException;
// do something with ce - e.g. logging the error
}
// else InnerException not set, or it's not a COMException
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: jQuery index selectors I'm trying to place 4 of my image containers into a new pane, having a total of 16 images. The jQuery below is what I came up with to do it. The first pane comes out correctly with 4 images in it. But the second has 4 images, plus the 3rd pane. And the 3rd pane has 4 images plus the 4th pane. I don't know exactly why the nesting is occurring. My wrapping can't be causing their index to change. I added css borders to them and it appears to be indexed correctly. How should I be going about this? What I want is to have 1-4 in one pane, 5-8 in another, 9-12, and 13-16. It needs to be dynamic so that I can change the number in each pane, so just doing it in the HTML isn't an option.
A demo of the issue can be seen here: http://beta.whipplehill.com/mygal/rotate.html. I'm using firebug to view the DOM.
Any help would be splentabulous!
The jQuery Code
$(function() {
$(".digi_image:gt(-1):lt(4)").wrapAll("<div class=\"digi_pane\"></div>").css("border", "2px solid red");
$(".digi_image:gt(3):lt(8)").wrapAll("<div class=\"digi_pane\"></div>").css("border", "2px solid blue");
$(".digi_image:gt(7):lt(12)").wrapAll("<div class=\"digi_pane\"></div>").css("border", "2px solid green");
$(".digi_image:gt(11):lt(16)").wrapAll("<div class=\"digi_pane\"></div>").css("border", "2px solid orange");
$(".digi_pane").append("<div style=\"clear: both;\"></div>");
});
The HTML (abbreviated), but essentially repeated 16 times.
<div class="digi_image">
<div class="space_holder"><img src="images/n883470064_4126667_9320.jpg" width="100" /></div>
</div>
A: I think your problem is your use of the gt() and lt() selectors. You should look up slice() instead.
Check out this post:
http://docs.jquery.com/Traversing/slice
A: For those who are curious... this is what I did.
$(".digi_image").slice(0, 4).wrapAll("<div class=\"digi_pane\"></div>").css("border", "2px solid red");
$(".digi_image").slice(4, 8).wrapAll("<div class=\"digi_pane\"></div>").css("border", "2px solid blue");
$(".digi_image").slice(8, 12).wrapAll("<div class=\"digi_pane\"></div>").css("border", "2px solid green");
$(".digi_image").slice(12, 16).wrapAll("<div class=\"digi_pane\"></div>").css("border", "2px solid orange");
$(".digi_pane").append("<div style=\"clear: both;\"></div>");
And it works precisely how I need it to. Could probably be made a bit more efficient, but it works.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How to track System Dependencies? Introduction
In my current organisation, we have many desktop and web applications all feeding into each other at some point. When looking after older applications or creating new applications, it's proving very difficult to try and remember which system rely on other systems in order to work. I'm not talking about software dependencies like DLL's and images, I'm talking about entire systems like a Finance system dependant on the HR system etc.
My Question
Which is the one best way to track how one entire system is dependant on another?
The answer can suggest either a method of doing the above, a software package or documentation techniques.
In my particular case, Many means over 20 web and desktop application over a dozen servers.
A: I would say to clearly state that on your architecture design document. There are some good tools for this like Enterprise Architect. This tool allows you to create diagrams using the UML standard for describing these dependencies in a clear and visual way.
A: The best source of information is usually found in Config files. This typically has the connection strings, web service urls etc which will give a good idea on the external dependencies.
Another technique is to use profiling or tracing and apply filters; this makes it easy to track any external calls. In most cases the dependency is in the database layer, and checking for linked servers and tracking their dependencies can unearth lots of info.
I am not sure if there is any automatic way to get this info especially if the systems are on multiple platforms. Lot of manual work will be involved to document all that.
A: This is the kind of application we produce at Tideway Systems, and which many large organizations use for just this purpose. You can use the product to discover your estate, and use the modeling capabilities to describe your business apps (which typically consist of more than one piece of software and span servers).
It sounds like you qualify to use the free Community Edition of Foundation, which you can use on up to 30 servers - just download it and check it out. Then let us know what you think please!
Disclaimer: I run the development group at Tideway. The product is very cool IMO, although I haven't written any of it myself directly :)
A: Turn off each machine one by one and see what breaks.. ;p
Seriously though, there is no simple answer to this question. With a collection of systems you could create a diagram showing the basic dependencies, but it wouldn't have a great deal of meaning unless you had an idea of what the dependency was. Usually your goal is to determine what you need to "revalidate" when you change another system, not which machines you can turn off at random. But that kind of information requires a large amount of detail and is hard to accumulate in the first place.
All this eventually ends up in a situation where your systems are ahead of your automation. You'll never find a shrink-wrapped automation tool that keeps up. On the other hand, with so much detail necessary, anything that can take care of half or even a third of the workload is going to be valuable.
A: This is a good question -- we struggle with this every time, it seems.
What we've tried to do over the last year or so is be "ruthless" on two things:
*
*automation -- if you automate it and build/deploy often, then the automation process will tend to get things right most of the times (config settings, etc)
*wiki, wiki, wiki -- we try to be hard-core on keeping the team and project wiki up-to-date.
Curious to see other responses.
A: Sounds like a job for an enterprise discovery that is automated as far as it can go. Depending on the size of your organization and the environment there are different solutions. For big landscapes you'll need a CMDB (Configuration Management Database) anyway. Products like HP Universal CMDB can discover and track dependencies in large scale environments.
E.g. it can discover the relations between an SAP system and its related databases and the hosts on which the distributed systems are running, and show you the dependencies. More importantly, it can warn you in case some unauthorized changes are done to the actual environment.
So the answer depends on what you consider as 'many'.
A: Two sorts of problems involved:
a.) for those who want to know how to determine dependencies for each component
b.) for those who want to track inter-dependencies and their priorities in a system of components. (as in, which component gets installed into a test environment first etc...)
If what you have is a series of components, for each of which you know dependencies, and you want a dependency order for the entire list of components, you may find a Perl module called Algorithm::Dependency::Ordered to be of some value. There are other related modules that can work with databases records of components etc, or even simple file records. But a warning: I've had problems getting this to work.
Alternatively, a graphing tool may be of value.
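If the Perl route proves awkward, the core of what a tool like Algorithm::Dependency::Ordered produces — a dependency-respecting ordering — is a short depth-first topological sort. A generic sketch (the component names are made up, and there is no cycle detection here):

```python
def dependency_order(deps):
    # deps maps each component to the list of components it depends on.
    # Returns an order in which every component appears after all of
    # its dependencies (no cycle detection in this sketch).
    order, seen = [], set()

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in deps.get(node, []):
            visit(dep)
        order.append(node)

    for node in deps:
        visit(node)
    return order

# e.g. the finance app depends on HR, which depends on a shared database
print(dependency_order({"finance": ["hr"], "hr": ["db"], "db": []}))
# → ['db', 'hr', 'finance']
```

The same ordering is what you would use to decide which component gets installed into a test environment first.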
A: This is function of a "Configuration Management" group. To get started, you'll have to talk to the "experts" at your company and create a map/graph of applications. Use graphviz/dot to generate a diagram, it won't be pretty, but it will give you a visual representation of the dependencies.
Here is an example:
digraph g {
rankdir=LR;
app1->app2->db1;
app1->app3;
}
Hope this helps,
A: System dependency mapping is one thing.
True environmental settings, uid's, passwords, impersonation settings, database names, and other data which change from development to QA to UAT to production are the real challenge.
Who stores/remembers them all?
The developer knows not which production server(s) his application will reside on.
He only documents the name of his development database, uid's, pwd's and describes his database tables, conn strings, etc.
Once it's checked into the code repository, and migrated to the QA environment, who is the keeper of the data required to update those config files with the proper values?
Again when migrated to QA and UAT, who?
Whose responsibility is it to inform the next migration group of what needs to be changed?
In my company, this is what causes us the most headache. By the time it gets approved by the internal change control process and a migration request is created to migrate the application into the production environment, all it takes is one config setting to be forgotten to ruin the whole implementation, and it happens all the time because clear lines of responsibility are not drawn (in my opinion).
Beyond responsibility I think is a central repository for this information.
ie. A system that stores all configuration settings for all projects/applications, and based on your "role" you can/can't see the actual values.
The developer finishes his build, and creates a migration request in the "system".
The QA person receives notification that build ### is ready.
The QA person logs into the "system" and retrieves the migration instructions.
Now they clearly know what needs to be done, and they begin the code check-out and migration process.
Repeat for UAT and ultimately prod.
When someone builds this Migration system let me know, because THAT will help many people.
Maybe I'll build it myself... Who wants to contract me?
A: I was new to a job, and it was suggested as a first task that I go identify the system dependencies. It turns out that what my boss meant was to go talk to people - that way I would learn who was who. I thought my boss wanted me to write a computer program to do that. And so I did.
My assumption was that if a program was a client of another program (a service or a server), then netstat -pant and netstat -panu then grep for ESTABLISHED would give you that. You can identify the services by grepping the output for LISTENING.
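The parsing step of that approach can be sketched like this — a rough Python sketch that assumes the Linux `netstat -pant` column layout (the column positions and example addresses are assumptions, not something netstat guarantees on every platform):

```python
def established_peers(netstat_output):
    # Given the text output of `netstat -pant`, collect the foreign
    # addresses of ESTABLISHED TCP connections -- each remote address
    # is a candidate dependency of this host.
    peers = set()
    for line in netstat_output.splitlines():
        fields = line.split()
        # Linux layout: Proto Recv-Q Send-Q Local Foreign State PID/Prog
        if len(fields) >= 6 and fields[5] == "ESTABLISHED":
            peers.add(fields[4])
    return peers
```

Feeding it captured netstat output and resolving the addresses back to host names gets you a first-cut dependency list; the LISTENING lines identify the services the host itself offers.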
This is only a partial solution. Yes, it tells you what applications talk to which applications, but there are other dependencies. So, for example, some applications use DNS to find their servers, while others are hard coded or in configuration files. Anything that uses TCP or UDP is dependent on IP. In most places, IP is dependent on ARP and either Ethernet or WiFi. Anything dependent on a service on another LAN is dependent on at least one router.
If you have a load balancer or some sort of cluster, then the problem becomes more interesting. If I have a service that comes out of a load balancer, and either "real" server behind the firewall goes down, then the service is degraded but is still up.
It gets even more interesting because services (programs) depend on servers (hardware). Servers, in turn, depend on power and air conditioning.
So as my thinking spiraled out of control, things got more horribly complicated, and I thought about creating a domain specific language (DSL) to capture all of these dependencies. I thought that, for example, server_1, server_3, and server_5 are on power phase 1; server_2, server_4 and server_6 are on power phase 2. Server_1, Server_3 and server_5 all fail at about the same time: probably phase 1 has failed. I still haven't quite figured that out. Obviously, the situation can be represented by a directed graph, I just haven't worked out the details.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: How to check if System.Net.WebClient.DownloadData is downloading a binary file? I am trying to use WebClient to download a file from the web in a WinForms application. However, I really only want to download HTML files. Any other type I want to ignore.
I checked the WebResponse.ContentType, but its value is always null.
Anyone have any idea what could be the cause?
A: Given your update, you can do this by changing the .Method in GetWebRequest:
using System;
using System.Net;
static class Program
{
static void Main()
{
using (MyClient client = new MyClient())
{
client.HeadOnly = true;
string uri = "http://www.google.com";
byte[] body = client.DownloadData(uri); // note should be 0-length
string type = client.ResponseHeaders["content-type"];
client.HeadOnly = false;
// check 'tis not binary... we'll use text/, but could
// check for text/html
if (type.StartsWith(@"text/"))
{
string text = client.DownloadString(uri);
Console.WriteLine(text);
}
}
}
}
class MyClient : WebClient
{
public bool HeadOnly { get; set; }
protected override WebRequest GetWebRequest(Uri address)
{
WebRequest req = base.GetWebRequest(address);
if (HeadOnly && req.Method == "GET")
{
req.Method = "HEAD";
}
return req;
}
}
Alternatively, you can check the header when overriding GetWebRespons(), perhaps throwing an exception if it isn't what you wanted:
protected override WebResponse GetWebResponse(WebRequest request)
{
WebResponse resp = base.GetWebResponse(request);
string type = resp.Headers["content-type"];
// do something with type
return resp;
}
A: I'm not sure the cause, but perhaps you hadn't downloaded anything yet. This is the lazy way to get the content type of a remote file/page (I haven't checked if this is efficient on the wire. For all I know, it may download huge chunks of content)
Stream connection = new MemoryStream(); // Just a placeholder so the finally block can Close() it
WebClient wc = new WebClient();
string contentType;
try
{
connection = wc.OpenRead(current.Url);
contentType = wc.ResponseHeaders["content-type"];
}
catch (Exception)
{
// 404 or what have you
}
finally
{
connection.Close();
}
A: WebResponse is an abstract class and the ContentType property is defined in inheriting classes. For instance, in the HttpWebResponse object this property is overridden to provide the content-type header. I'm not sure which instance of WebResponse the WebClient is using. If you ONLY want HTML files, you're best off using the HttpWebRequest object directly.
A: Here is a method using TCP, which http is built on top of. It will return when connected or after the timeout (milliseconds), so the value may need to be changed depending on your situation
var result = false;
try {
using (var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)) {
var asyncResult = socket.BeginConnect(yourUri.Host, 80, null, null); // BeginConnect expects a host name, not the full URI
result = asyncResult.AsyncWaitHandle.WaitOne(100, true);
socket.Close();
}
}
catch { }
return result;
A: You could issue the first request with the HEAD verb, and check the content-type response header? [edit] It looks like you'll have to use HttpWebRequest for this, though.
A: Your question is a bit confusing: if you're using an instance of the Net.WebClient class, the Net.WebResponse doesn't enter into the equation (apart from the fact that it's indeed an abstract class, and you'd be using a concrete implementation such as HttpWebResponse, as pointed out in another response).
Anyway, when using WebClient, you can achieve what you want by doing something like this:
Dim wc As New Net.WebClient()
Dim LocalFile As String = IO.Path.Combine(Environment.GetEnvironmentVariable("TEMP"), Guid.NewGuid.ToString)
wc.DownloadFile("http://example.com/somefile", LocalFile)
If Not wc.ResponseHeaders("Content-Type") Is Nothing AndAlso wc.ResponseHeaders("Content-Type") <> "text/html" Then
IO.File.Delete(LocalFile)
Else
'//Process the file
End If
Note that you do have to check for the existence of the Content-Type header, as the server is not guaranteed to return it (although most modern HTTP servers will always include it). If no Content-Type header is present, you can fall back to another HTML detection method, for example opening the file, reading the first 1K characters or so into a string, and seeing if that contains the substring <html>
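That fallback — sniffing the first 1K characters for an <html> tag — can be sketched as follows (shown in Python for brevity; the function name is mine, and real-world detection would want to be more forgiving about doctypes and encodings):

```python
def looks_like_html(head):
    # head: the first ~1K characters of the downloaded file.
    # A crude check used only when no Content-Type header came back:
    # any case variant of an <html> tag marks the file as HTML.
    return "<html" in head.lower()
```

e.g. after the download completes, pass in the first 1024 characters of the local file and delete it when the check fails.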
Also note that this is a bit wasteful, as you'll always transfer the full file, prior to deciding whether you want it or not. To work around that, switching to the Net.HttpWebRequest/Response classes might help, but whether the extra code is worth it depends on your application...
A: I apologize for not being very clear. I wrote a wrapper class that extends WebClient. In this wrapper class, I added a cookie container and exposed the timeout property for the WebRequest.
I was using DownloadDataAsync() from this wrapper class and I wasn't able to retrieve content-type from WebResponse of this wrapper class. My main intention is to intercept the response and determine if its of text/html nature. If it isn't, I will abort this request.
I managed to obtain the content-type after overriding WebClient.GetWebResponse(WebRequest, IAsyncResult) method.
The following is a sample of my wrapper class:
public class MyWebClient : WebClient
{
private CookieContainer _cookieContainer;
private string _userAgent;
private int _timeout;
private WebResponse _response;
private WebRequest _request;
public MyWebClient()
{
this._cookieContainer = new CookieContainer();
this.SetTimeout(60 * 1000);
}
public MyWebClient SetTimeout(int timeout)
{
this._timeout = timeout;
return this;
}
public WebResponse Response
{
get { return this._response; }
}
protected override WebRequest GetWebRequest(Uri address)
{
WebRequest request = base.GetWebRequest(address);
if (request.GetType() == typeof(HttpWebRequest))
{
((HttpWebRequest)request).CookieContainer = this._cookieContainer;
((HttpWebRequest)request).UserAgent = this._userAgent;
((HttpWebRequest)request).Timeout = this._timeout;
}
this._request = request;
return request;
}
protected override WebResponse GetWebResponse(WebRequest request)
{
this._response = base.GetWebResponse(request);
return this._response;
}
protected override WebResponse GetWebResponse(WebRequest request, IAsyncResult result)
{
this._response = base.GetWebResponse(request, result);
return this._response;
}
public MyWebClient ServerCertValidation(bool validate)
{
if (!validate) ServicePointManager.ServerCertificateValidationCallback += delegate(object sender, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors) { return true; };
return this;
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23"
} |
Q: Add to right click application menu in taskbar in .NET Most applications only have "Restore, Move, Size, Minimize, Maximize and Close", however MS SQL offers extra options "Help, Customize view". Along those lines, is it possible to add to the right click menu of an application in the task bar?
Note: I'm not referring to an icon in the notification area next to the clock.
A: This is a simpler answer I found. I quickly tested it and it works.
My code:
private const int WMTaskbarRClick = 0x0313;
protected override void WndProc(ref Message m)
{
switch (m.Msg)
{
case WMTaskbarRClick:
{
// Show your own context menu here, i do it like this
// there's a context menu present on my main form so i use it
MessageBox.Show("I see that.");
break;
}
default:
{
base.WndProc(ref m);
break;
}
}
}
A: The menu on right click of the minimized program or Alt+Space or right click of the window icon in the title bar is called the SysMenu.
Here's an option for WPF:
// License MIT 2019 Mitch Gaffigan
// https://stackoverflow.com/a/58160366/138200
public class SysMenu
{
private readonly Window Window;
private readonly List<MenuItem> Items;
private bool isInitialized;
private IntPtr NextID = (IntPtr)1000;
private int StartPosition = 5;
public SysMenu(Window window)
{
this.Items = new List<MenuItem>();
this.Window = window ?? throw new ArgumentNullException(nameof(window));
this.Window.SourceInitialized += this.Window_SourceInitialized;
}
class MenuItem
{
public IntPtr ID;
public string Text;
public Action OnClick;
}
public void AddSysMenuItem(string text, Action onClick)
{
if (string.IsNullOrWhiteSpace(text))
{
throw new ArgumentNullException(nameof(text));
}
if (onClick == null)
{
throw new ArgumentNullException(nameof(onClick));
}
var thisId = NextID;
NextID += 1;
var newItem = new MenuItem()
{
ID = thisId,
Text = text,
OnClick = onClick
};
Items.Add(newItem);
var thisPosition = StartPosition + Items.Count;
if (isInitialized)
{
var hwndSource = PresentationSource.FromVisual(Window) as HwndSource;
if (hwndSource == null)
{
return;
}
var hSysMenu = GetSystemMenu(hwndSource.Handle, false);
InsertMenu(hSysMenu, thisPosition, MF_BYPOSITION, thisId, text);
}
}
private void Window_SourceInitialized(object sender, EventArgs e)
{
var hwndSource = PresentationSource.FromVisual(Window) as HwndSource;
if (hwndSource == null)
{
return;
}
hwndSource.AddHook(WndProc);
var hSysMenu = GetSystemMenu(hwndSource.Handle, false);
/// Create our new System Menu items just before the Close menu item
InsertMenu(hSysMenu, StartPosition, MF_BYPOSITION | MF_SEPARATOR, IntPtr.Zero, string.Empty);
int pos = StartPosition + 1;
foreach (var item in Items)
{
InsertMenu(hSysMenu, pos, MF_BYPOSITION, item.ID, item.Text);
pos += 1;
}
isInitialized = true;
}
private IntPtr WndProc(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled)
{
if (msg == WM_SYSCOMMAND)
{
var item = Items.FirstOrDefault(d => d.ID == wParam);
if (item != null)
{
item.OnClick();
handled = true;
return IntPtr.Zero;
}
}
return IntPtr.Zero;
}
#region Win32
private const Int32 WM_SYSCOMMAND = 0x112;
private const Int32 MF_SEPARATOR = 0x800;
private const Int32 MF_BYPOSITION = 0x400;
private const Int32 MF_STRING = 0x0;
[DllImport("user32.dll")]
private static extern IntPtr GetSystemMenu(IntPtr hWnd, bool bRevert);
[DllImport("user32.dll")]
private static extern bool InsertMenu(IntPtr hMenu, int wPosition, int wFlags, IntPtr wIDNewItem, string lpNewItem);
#endregion
}
Example of use:
internal partial class MainWindow : Window
{
public MainWindow()
{
var sysMenu = new SysMenu(this);
sysMenu.AddSysMenuItem("Quit", miQuit_Click);
sysMenu.AddSysMenuItem("Show debug tools", miShowDebug_Click);
}
private void miQuit_Click()
{
// "On-Click" logic here
}
private void miShowDebug_Click()
{
// "On-Click" logic here
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: How do I set HttpOnly on a session cookie in Rails 2.1? I notice that Rails 2.2 (currently edge) supports setting HttpOnly on the session cookie.
Is there a way of setting it on a Rails 2.1 application without moving to edge/2.2?
A: Well it isn't supported, as you note, but you can of course monkey-patch Rails to do what you want. Actually, the difference between directly patching your Rails v. monkey-patching in this case is very little, as either would be removed/reverted when you upgrade to 2.2.
In both cases you would look at that applied diff as a guide for patching 2.1 yourself - either through applying the patch directly (modulo any 2.1/edge differences), or by reopening those classes from your own code post-environment-loading to apply the changes.
A: I have written a monkey patch to add this support to Rails 2.1, from the patch for Rails 2.2.
I've not tested on anything other than Rails 2.1, and your mileage may vary!
A: Set the http_only option to true in the cookie's options hash:
cookies['visits'] = { :value => '20', :http_only => true }
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What are the best Open Source tools for developing Flash applications? What are the best places to start learning? As far as tools go, I am aware of Haxe, MTASC, and swfmill. Can you offer any success or horror stories related to any of them? Are there any others that I should be investigating?
With respect to learning, the Adobe Developer Connection seems to contain decent reference materials, but the tutorials all assume that the reader is using the "Adobe Flash" product. Are there any tutorials out there targeted at Open Source users?
What are the advantages of Flash 9/ActionScript 3 versus Flash 8/ActionScript 2? Am I correct in thinking that Flash 8 is still more widely deployed than Flash 9, and better supported by the Open Source flash players?
A: Tool-wise, FlashDevelop is really good (and free). In fact, the whole osflash.org site is very impressive and thorough.
A: Flash Develop is an awesome IDE: http://www.flashdevelop.org/wikidocs/index.php?title=Main_Page
A: Another, old tool: Ming. Looks like it is still alive.
Deployment of Flash 7/8/9: see Adobe stats: Adobe Flash Player Version Penetration. Not too sure what the figures mean, though; I suppose they mean that Flash applets targeting Flash 7 have 99% penetration (ie. spanning players from 7 to 9) while those targeting Flash 9 have 97.7% penetration, still a high figure.
Should I try and start coding for Flash, I think I would go directly to AS 3 (better OO support, better performance).
PS.: I see you mention open source players, instead. I have no idea of their penetration, but I fear it is very low.
A:
"Am I correct in thinking that Flash 8
is still more widely deployed than
Flash 9, and better supported by the
Open Source flash players?"
There is not much point in worrying about the open source players as very few people use them. No open source player I know of supports Flash 8 - they are all version 6/7 I think. There are official Flash 9 players for Windows/Mac/Linux and various other devices support either older versions or FlashLite. Wii and PS3 for example support Flash player 7.
A: If you are going to be targeting mobile devices (or the Wii) you will probably want to stick to Flash 7.
Adobe still doesn't have a Flash 9 SDK available to those (embedded/mobile) platforms. The newest SDK is FlashLite 3 which supports Flash 8.
Anything build before or around Oct. '07 would most likely still be using the Flash 7 SDK though.
Of course, if you are only developing for Mac/Win/Linux then none of that really matters.
I've always been of the opinion that if you don't need any of the new features or bug fixes it's generally best to stick with the slightly older but more compatible versions. That applies to pretty much any development.
A: From a development standpoint - I would avoid actionscript 2 like the plague (unless it's part of your requirements of course...that is, your customer is requiring the plague).
It's old, not being actively developed (i.e. Adobe is not putting any more effort into the AS2 codebase), and most things you learn (like why the hell you have to use .createMovieClip() to instantiate an object) are things that will not carry over into your AS3 development.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Keeping in sync with database The solution we developed uses a database (sqlserver 2005) for persistence purposes, and thus, all updated data is saved to the database, instead of sent to the program.
I have a front-end (desktop) that currently keeps polling the database for updates that may happen anytime on some critical data, and I am not really a fan of database polling and wasted CPU cycles with work that is being redone uselessly.
Our manager doesn't seem to mind us polling the database. The amount of data is small (less than 100 records) and the interval is high (1 min), but I am a coder. I do. Is there a better way to accomplish a task of keeping the data on memory as synced as possible with the data on the database? The system is developed using C# 3.5.
A: First thought off the top of my head is a trigger combined with a message queue.
A: Since you're on SQL2005, you can use a SqlDependency to be notified of changes. Note that you can use it pretty effortlessly with System.Web.Caching.Cache, which, despite its namespace, runs just fine in a WinForms app.
A: This may probably be overkill for your situation, but it may be interesting to take a look at the Microsoft Sync Framework
A: SQL Notification Services will allow you to have the database callback to an app based off a number of protocols. One method of implementation is to have the notification service create (or modify) a file on an accessible network share and have your desktop app react by using a FileSystemWatcher.
More information on Notification Services can be found at: http://technet.microsoft.com/en-us/library/aa226909(SQL.80).aspx
Please note that this may be a sledgehammer approach to a nut type problem though.
A: In ASP.NET, http://msdn.microsoft.com/en-us/library/ms178604(VS.80).aspx.
A: This may also be overkill but maybe you could implement some sort of caching mechanism. That is, when the data is written to the database, you could cache it at the same time and when you're trying to fetch data back from the DB, check the cache first.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Is there a SQL Server profiler similar to Java/.Net profilers? I love the way I can profile a Java/.Net app to find performance bottlenecks or memory problems. For example, it's very easy to find a performance bottleneck looking at the call tree with execution times and invocation counts per method. In SQL Server, I have stored procedures that call other stored procedures that depend on views, which is similar to Java/.Net methods calling other methods. So it seems the same kind of profiler would be very helpful here. However, I looked far and wide and could not find one. Is anyone aware of such tools, either for SQL Server or any other DBMS?
Update: Thanks for your replies around SQL Server Profiler, but this tool is very limited. Take a look at the screenshot.
A: Check out SQL Nexus Tool. This has some good reports on identifying bottlenecks.
SQL Nexus is a tool that helps you identify the root cause of SQL Server performance issues. It loads and analyzes performance data collected by SQLDiag and PSSDiag. It can dramatically reduce the amount of time you spend manually analyzing data.
In one of the Inside SQL 2005 books (maybe T-SQL Querying), there was a cool technique in which the author dumps the SQL Profiler output to a table or Excel file and applies a pivot to get the output in a similar format as your screenshot.
I have not seen any built-in SQL tools which gives you that kind of analysis.
Another useful post.
A: In addition to SQL Server Profiler, as mentioned in a comment from @Galwegian, also check out your execution plan when you run a query.
http://www.sql-server-performance.com/tips/query_execution_plan_analysis_p1.aspx
http://en.wikipedia.org/wiki/Query_plan
A: Another whole thread about the SQL Server profiler:
Identifying SQL Server Performance Problems
I understand what you are talking about, but typically, database optimization takes place at a finer grained level. If the database activity is driven from a client, you should be able to use the existing client profiler to get the total time on each step and then address the low hanging fruit (whether in the database or not).
When you need to profile a particular database step in detail, you can use profiler and a trace.
Typically, the database access has a certain granularity which is addressed on an individual basis and database activity is not linear with all kinds of user access going on, whereas a program profiler is typically profiling a linear path of code.
A: As mentioned, SQL Server Profiler is great for checking what parameters your program is passing to SQL, etc. It won't show you an execution tree though, if that's what you need. For that, all I can think of is to use Show Plan to see what exactly is executed at run-time. E.g. if you're calling an sp that calls a view, Profiler will only show you that the sp was executed and what params were passed in.
Also, the Windows Performance Monitor has extensive run-time performance metrics specific to SQL Server. You can run it on the server, or connect remotely.
A: To find performance bottlenecks, you can use the Database Engine Tuning Advisor (found in the Tools menu of SQL Server Management Studio). It provides suggestions for optimizing your queries and offers to optimize them for you automatically (e.g. create the appropriate indexes, etc.).
A: You could use Sql Profiler - which covers the profiling aspect, but I tend to think of it more as a logging tool.
For diagnosing performance, you should probably just be looking at the query plan.
A: There's the SQL Server profiler, but despite its name, it doesn't do what you want, by the sound of your question. It'll show you a detailed view of all the calls going on in the database. It's better for troubleshooting the app as a whole, not just one sproc at a time.
Sounds like you need to view the execution plan of your queries/sprocs in Query Analyzer, and that will give you something akin to the data you are looking for.
A: As mentioned by several replies, the SQL Profiler will show what you're asking for. What you'll have to be sure to do is to turn on the event SP:StmtCompleted, which is in the Stored Procedures group, and if you want the query plans as well, turn on Showplan XML Statistics Profile, which is in the Performance group. The latter gives you a graphical description and shows the actual rows processed by each step in the plan.
If the profiler is slowing your app down, filter it as much as possible and consider going to a server side trace.
HTH
Andy
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Setting ISO-8859-1 encoding for a single Tapestry 4 page in application that is otherwise totally UTF-8 I have a Tapestry application that is serving its page as UTF-8. That is, server responses have header:
Content-type: text/html;charset=UTF-8
Now within this application there is a single page that should be served with ISO-8859-1 encoding. That is, server response should have this header:
Content-type: text/html;charset=ISO-8859-1
How to do this? I don't want to change default encoding for whole application.
Based on google searching I have tried following:
@Meta({ "org.apache.tapestry.output-encoding=ISO-8859-1",
"org.apache.tapestry.response-encoding=ISO-8859-1",
"org.apache.tapestry.template-encoding=ISO-8859-1",
"tapestry.response-encoding=ISO-8859-1"})
abstract class MyPage extends BasePage {
@Override
protected String getOutputEncoding() {
return "ISO-8859-1";
}
}
But neither setting those values with @Meta annotation or overriding getOutputEncoding method works.
I am using Tapestry 4.0.2.
EDIT: I ended up doing this with a Servlet filter with a subclassed HttpServletResponseWrapper. The wrapper overrides setContentType() to force the required encoding for the response.
A: Have you considered a Filter? Maybe not as elegant as something within Tapestry, but using a plain Filter that registers the url mapping(s) of interest. One of its init parameters would be the encoding you're after. Example:
public class EncodingFilter implements Filter {
private String encoding;
private FilterConfig filterConfig;
/**
* @see javax.servlet.Filter#init(javax.servlet.FilterConfig)
*/
public void init(FilterConfig fc) throws ServletException {
this.filterConfig = fc;
this.encoding = filterConfig.getInitParameter("encoding");
}
/**
* @see javax.servlet.Filter#doFilter(javax.servlet.ServletRequest, javax.servlet.ServletResponse, javax.servlet.FilterChain)
*/
public void doFilter(ServletRequest req, ServletResponse resp,
FilterChain chain) throws IOException, ServletException {
req.setCharacterEncoding(encoding);
chain.doFilter(req, resp);
}
/**
* @see javax.servlet.Filter#destroy()
*/
public void destroy() {
}
}
A: You could have done:
@Override
public ContentType getResponseContentType() {
return new ContentType("text/html;charset=" + someCharEncoding);
}
A: The filter suggestion is good. You can also mix servlets with Tapestry. For instance, we have servlets for displaying XML documents and serving dynamically generated Excel files. Just make sure that you correctly set the mappings in web.xml so that the servlets do not go through Tapestry.
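A sketch of that mapping in web.xml (the servlet and path names here are invented for illustration); the key point is to give the servlet a url-pattern that Tapestry's servlet does not own:

```xml
<!-- Hypothetical names: map the export servlet on its own path,
     distinct from the pattern routed to Tapestry's ApplicationServlet -->
<servlet>
  <servlet-name>excel-export</servlet-name>
  <servlet-class>com.example.ExcelExportServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>excel-export</servlet-name>
  <url-pattern>/export/*</url-pattern>
</servlet-mapping>
```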
A: Tapestry has the concept of filters that can be applied to the request/response pipeline, but with the advantage that you can access the T5 IoC Container & Services.
http://tapestry.apache.org/tapestry5/tapestry-core/guide/request.html
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Automated Python to Java translation Is there a tool out there that can automatically convert Python to Java?
Can Jython do this?
A: It may not be an easy problem.
Determining how to map classes defined in Python into types in Java will be a big challange because of differences in each of type binding time. (duck typing vs. compile time binding).
A: Yes, Jython does this, but it may or may not be what you want
A: Actually, this may or may not be much help but you could write a script which created a Java class for each Python class, including method stubs, placing the Python implementation of the method inside the Javadoc
In fact, this is probably pretty easy to knock up in Python.
I worked for a company which undertook a port to Java of a huge Smalltalk (similar-ish to Python) system and this is exactly what they did. Filling in the methods was manual but invaluable, because it got you to really think about what was going on. I doubt that a brute-force method would result in nice code.
Here's another possibility: can you convert your Python to Jython more easily? Jython is just Python for the JVM. It may be possible to use a Java decompiler (e.g. JAD) to then convert the bytecode back into Java code (or you may just wish to run on a JVM). I'm not sure about this however, perhaps someone else would have a better idea.
A: to clarify your question:
From Python Source code to Java source code? (I don't think so)
.. or from Python source code to Java Bytecode? (Jython does this under the hood)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34"
} |
Q: .Net CF Prevent Overzealous, Impatient Clicking (while screen is redrawing) .Net Compact Framework
Scenario: User is on a screen. The device can't find a printer and asks the user if they want to try again. If they click "No", the current screen is closed and they are returned to the parent menu screen. If they click the "No" button multiple times, the first click will be used by the No button and the next click will take effect once the screen has completed redrawing. (In effect clicking a menu item which then takes the user to another screen.)
I don't see a good place to put a wait cursor...there isn't much happening when the user clicks "No" except a form closing. But the CF framework is slow to redraw the screen.
Any ideas?
A: You can skip pending clicks by clearing the Windows message queue with
Application.DoEvents();
We use the following custom Event class to solve your problem (preventing multiple clicks and showing a wait cursor if necessary):
using System;
using System.Windows.Forms;
public sealed class Event {
bool forwarding;
public event EventHandler Action;
void Forward (object o, EventArgs a) {
if ((Action != null) && (!forwarding)) {
forwarding = true;
Cursor cursor = Cursor.Current;
try {
Cursor.Current = Cursors.WaitCursor;
Action(o, a);
} finally {
Cursor.Current = cursor;
Application.DoEvents();
forwarding = false;
}
}
}
public EventHandler Handler {
get {
return new EventHandler(Forward);
}
}
}
You can verify that it works with the following example (the Console outputs "Click" only after HandleClick has terminated):
using System;
using System.Threading;
using System.Windows.Forms;
class Program {
static void HandleClick (object o, EventArgs a) {
Console.WriteLine("Click");
Thread.Sleep(1000);
}
static void Main () {
Form f = new Form();
Button b = new Button();
//b.Click += new EventHandler(HandleClick);
Event e = new Event();
e.Action += new EventHandler(HandleClick);
b.Click += e.Handler;
f.Controls.Add(b);
Application.Run(f);
}
}
To reproduce your problem change the above code as follows (Console outputs all clicks, with a delay):
b.Click += new EventHandler(HandleClick);
//Event e = new Event();
//e.Action += new EventHandler(HandleClick);
//b.Click += e.Handler;
The Event class can be used for every control exposing EventHandler events (Button, MenuItem, ListView, ...).
Regards,
tamberg
A: Random thoughts:
*
*Disable some of the controls on the parent dialog while a modal dialog is up. I do not believe that you can disable the entire form since it is the parent of the modal dialog.
*Alternatively I would suggest using a Transparent control to catch the clicks but transparency is not supported on CF.
*How many controls are on the parent dialog? I have not found CF.Net that slow in updating. Is there any chance that the dialog is overloaded and could be custom drawn faster than with sub controls?
*Override the DialogResult property and the Dispose method of the class to handle adding/removing a wait cursor.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Calculate the position of an accelerating body after a certain time How do I calculate the position of an accelerating body (e.g. a car) after a certain time (e.g. 1 second)?
For a moving body that it not accelerating, it is a linear relationship, so I presume for an accelerating body it involves a square somewhere.
Any ideas?
A: Well, it depends on whether or not acceleration is constant. If it is, it is simply
s = ut+1/2 at^2
If a is not constant, you need to integrate numerically. There are a variety of methods, and none of them will beat the analytic solution for accuracy, as they are all ultimately approximate.
The easiest and least accurate is Euler's method. Here you divide time into discrete chunks called time steps, and perform
v[n] = v[n-1] + t * a[n]
Here n is the step index and t is the size of a time step. Position is updated similarly. This is only really good for those cases where accuracy is not all that important. A special version of Euler's method will yield an exact solution for projectile motion (see wiki), so while this method is crude, it can be perfect for some situations.
The most common numerical integration method used in games and in some chemistry simulations is Velocity Verlet, which is a special form of the more generic Verlet method. I would recommend this one if Euler's is too crude.
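Velocity Verlet itself is only a few lines per step; a minimal sketch in Java (the acceleration function here is a made-up constant-acceleration example to keep it self-contained; swap in your own force model):

```java
public class VerletDemo {
    // Hypothetical acceleration function; replace with your own physics.
    // Constant here, which also makes Verlet exact for this case.
    static double accel(double x) {
        return 3.0; // e.g. 3 m/s^2
    }

    // One velocity Verlet step; returns {newPosition, newVelocity}.
    static double[] step(double x, double v, double dt) {
        double a = accel(x);
        double xNew = x + v * dt + 0.5 * a * dt * dt;
        double aNew = accel(xNew);               // re-evaluate at new position
        double vNew = v + 0.5 * (a + aNew) * dt; // average old and new acceleration
        return new double[] { xNew, vNew };
    }

    public static void main(String[] args) {
        double x = 0.0, v = 0.0, dt = 0.01;
        for (int i = 0; i < 200; i++) { // integrate 2 seconds
            double[] s = step(x, v, dt);
            x = s[0];
            v = s[1];
        }
        // With constant acceleration this matches s = 1/2 * a * t^2 = 6 m
        System.out.println(x + " " + v);
    }
}
```

Because the velocity update averages the acceleration at the old and new positions, Verlet conserves energy far better than Euler over long simulations, which is why it is favored in games and molecular dynamics.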
A: The equation is: s = ut + (1/2)a t^2
where s is position, u is velocity at t=0, t is time and a is a constant acceleration.
For example, if a car starts off stationary, and accelerates for two seconds with an acceleration of 3m/s^2, it moves (1/2) * 3 * 2^2 = 6m
This equation comes from integrating analytically the equations stating that velocity is the rate-of-change of position, and acceleration is the rate-of-change of velocity.
Usually in a game-programming situation, one would use a slightly different formulation: at every frame, the variables for velocity and position are integrated not analytically, but numerically:
s = s + u * dt;
u = u + a * dt;
where dt is the length of a frame (measured using a timer: 1/60th second or so). This method has the advantage that the acceleration can vary in time.
Edit: A couple of people have noted that the Euler method of numerical integration (as shown here), though the simplest to demonstrate with, has fairly poor accuracy. See Velocity Verlet (often used in games) and 4th-order Runge-Kutta (a 'standard' method for scientific applications) for improved algorithms.
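The two formulations above, the closed form and the per-frame update, can be compared directly in code; a minimal Java sketch using the numbers from the car example (u = 0, a = 3 m/s^2, t = 2 s):

```java
public class Kinematics {
    // Closed form: s = u*t + (1/2)*a*t^2, for constant acceleration a.
    static double closedForm(double u, double a, double t) {
        return u * t + 0.5 * a * t * t;
    }

    // Per-frame (explicit Euler) update over `steps` frames of length dt.
    static double euler(double u, double a, int steps, double dt) {
        double s = 0.0, v = u;
        for (int i = 0; i < steps; i++) {
            s = s + v * dt;
            v = v + a * dt;
        }
        return s;
    }

    public static void main(String[] args) {
        // Car example: from rest, a = 3 m/s^2 for 2 s gives exactly 6 m.
        System.out.println(closedForm(0.0, 3.0, 2.0));
        // Explicit Euler undershoots slightly; the error shrinks with dt.
        System.out.println(euler(0.0, 3.0, 120, 2.0 / 120.0));
    }
}
```

Updating velocity before position in the loop (semi-implicit Euler) flips the error to a slight overshoot; either way, halving the time step roughly halves the error.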
A: In this article: http://www.ugrad.math.ubc.ca/coursedoc/math101/notes/applications/velocity.html (webarchive), you can find this formula:
p(t) = x(0) + v(0)*t + (1/2)at^2
where
*
*p(t) = position at time t
*x(0) = the position at time zero
*v(0) = velocity at time zero (if you don't have a velocity, you can ignore this term)
*a = the acceleration
*t = your current time
A: Assuming you're dealing with constant acceleration, the formula is:
distance = (initial_velocity * time) + (acceleration * time * time) / 2
where
distance is the distance traveled
initial_velocity is the initial velocity (zero if the body is initially at rest, so you can drop this term in that case)
time is the time
acceleration is the (constant) acceleration
Make sure to use the proper units when calculating, i.e. meters, seconds and so on.
A very good book on the topic is Physics for Game Developers.
A: Assuming constant acceleration and initial velocity v0,
x(t) = (1/2 * a * t^2) + (v0 * t)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21"
} |
Q: Code to logging ratio? What is the ideal code to logging ratio? I'm not used to writing logs as most of the applications I've developed have not had much logging.
Recently though I've changed job, and I've noticed that you can't see the application code for the calls to log4net. I appreciate that this is useful but surely having too many debug statements is just as bad as not having any at all?
There are logging statements that tell you when every method starts and finishes and what they are returning. and when pretty much anything is done.
Would it not be easier to have some addon that used reflection to add the logging statements in at compile time so they didn't get in the way as you were trying to look at the code?
Also, in these days of powerful IDEs and remote debugging, is that much logging really necessary?
A: That much logging is not necessary. There's no reason (in production) to know when each method starts and ends. Maybe you need that on certain methods, but having that much noise in the log files makes them nearly impossible to analyze effectively.
You should log when important things happen such as errors, user logins (audit log), transactions started, important data updated... so on and so forth. If you have a problem that you can't figure out from the logs, then you can add more to it if necessary... but only if necessary.
Also, just for your information, the adding logging in at compile time would be an example of what is called Aspect Oriented Programming. Logging would be the "cross cutting concern".
A: I think "logs to code ratio" is a misunderstanding of the problem.
In my job I once in a while have a situation where a bug in a Java program cannot be reproduced outside the production environment and where the customer does NOT want it to happen again.
Then ALL you have available to you to fix the bug, is the information you yourself have put in the log files. No debugging sessions (that is forbidden in production environments anyway) - no poking at input data - nothing!
So the logs are your time machine back to when the bug happened, and since you cannot predict ahead of time what information you will need to fix a bug yet unknown - otherwise you could just fix the bug in the first place - you need to log lots of stuff...
Exactly WHAT stuff depends on the scenario, but basically enough to ensure that you are never in doubt what happens where :)
Naturally this means that a LOT of logging will happen. You will then create two logs - one with everything which is only kept around for long enough to ensure that you will not need it, and the other one with non-trivial information which can be kept for a lot longer.
Dismissing logging as excessive, is usually done by those who have not had to fix a bug with nothing else to go by :)
A: Since log4net does a great job at not clogging up the resources, I tend to be a little verbose on logging because when you have to change to debug mode, the more info you have, the better. Here's what I typically log:
DEBUG Level
*
*Any parameters passed into the
method
*Any row counts from result sets I retrieve
*Any datarows that may contain suspicious data when being passed down to the method
*Any "generated" file paths, connection strings, or other values that could get munged up when being "pieced together" by the environment.
INFO Level
*
*The start and end of the method
*The start and end of any major loops
*The start of any major case/switch statements
ERROR Level
*
*Handled exceptions
*Invalid login attempts (if security is an issue)
*Bad data that I have intercepted for reporting
FATAL Level
*
*Unhandled exceptions.
Also having a lot of logging details prevents me from asking the user what they were doing when they got the error message. I can easily piece it together.
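The level discipline above translates directly to any leveled logging API. Here is a small illustration using java.util.logging (the class, messages, and scenario are invented; log4net's DEBUG roughly maps to FINE, and ERROR/FATAL to SEVERE):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Invented example class showing the per-level conventions described above.
public class OrderProcessor {
    static final Logger LOG = Logger.getLogger(OrderProcessor.class.getName());

    public void process(String orderId, int itemCount) {
        LOG.info("process() start");                                // INFO: method boundary
        LOG.fine("orderId=" + orderId + " itemCount=" + itemCount); // DEBUG: parameters passed in
        try {
            if (itemCount < 0) {
                throw new IllegalArgumentException("negative item count");
            }
            // ... real work would go here ...
        } catch (IllegalArgumentException e) {
            LOG.log(Level.SEVERE, "bad data intercepted", e);       // ERROR: handled exception
        }
        LOG.info("process() end");                                  // INFO: method boundary
    }
}
```

Because the parameter dump sits at FINE, production can run at INFO and only pay for it when someone turns the level up to debug a problem.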
A: When you come across a bug during the beta release of your application and can't reproduce it, you know that you should have done excessive logging. In the same way, if a client reports a bug but you can't reproduce it, an excessive logging feature can save the day.
A: When you have a customer scenario (i.e., someone whose machine you don't get physical access to), the only things that are "too much logging" are repainting functions and nearly anything called by them (which should be nearly nothing). Or other functions that are called 100's of times per second during operation (program startup is ok, though, to have 100's of calls to get/set routines logged because, in my experience, that's where most of the problems originate).
Otherwise, you'll just be kicking yourself when you're missing some key log point that would definitively tell you what the problem is on the user's machine.
(Note: here I'm referring to the logging that happens when trace mode is enabled for developer-oriented logs, not user-oriented normal operation logs.)
A: I personally believe that first of all there is no hard and fast rule. I have some applications that log a LOT, in and out of methods, and status updates through the middle. These applications though are scheduled processes, run hands off, and the logs are parsed by another application that stores success/failure.
I have found that in all reality, many user applications don't need large amounts of logging, as really if issues come up you will be debugging to trace the values there. Additionally you typically don't need the expense of logging.
However, it really depends on the project.
A: How many of those lines are logging by default? I've worked on a system very much like what you describe - just booting it up would cause over 20MB of logs to be written if logging was cranked way up, but even debugging we didn't turn it all the way up for all modules. By default it would log when a module of code was entered, and major system events. It was great for debugging since QA could just attach a log to a ticket, and even if it wasn't reproducible you could see what was going on when the problem happened. If you have serious multithreading going on then logging is still better than any IDE or debugger I've worked with.
A: In my line of work, I write a lot of Windows services. For me, logging isn't a luxury; it's actually my only UI. When we deploy to production, we lose access to debugging and even the databases to which our services write and without logging we would have no way of knowing any specifics of issues that arise.
Having said that, I do believe that a concise logging style is the best approach. Log messages tend to be limited to the business logic of the application such as "received message from account xxx" than "entered function yyy". We do log exceptions, thread starts, echoing of environment settings and timings. Beyond that, we look to the debugger to identify logical errors in the development and QA phases.
A: I find that logging is much less necessary since I've started using TDD. It makes it much easier to determine where bugs lie. However, I find that logging statements can help understand what's going on in code. Sure, debuggers help give you a low-level idea of what's happening. But I find it easier when I can match a line of output to a line of code if I want to get a high level view of what's happening..
However, one thing that I should add is this: make sure your log statements include the module that the log statement is in! I can't count the number of times I've had to go back through and find where a log statement actually lies.
A: Complete log files are amazingly useful. Consider a situation where your application is deployed somewhere like a bank. You can't go in there and debug it by hand and they sure aren't going to send you their data. What you can get is a complete log which can point you to where the problem occured. Having a number of log levels is very helpful. Normally the application would run in a mode such that it only reports on fatal errors or serious errors. When you need to debug it a user can switch on the debug or trace output and get far more information.
The sort of logging you're seeing does seem excessive but I can't really say it is for certain without knowing more about the application and where it might be deployed.
A:
Also in these days of powerful IDEs and remote debugging is that much logging really nescisary?
Yes, absolutely, although the mistake that many unskilled developers make is to try to fix bugs using the wrong method, usually tending towards logging when they should be debugging. There is a place for each, but there are at least a few areas where logging will almost always be necessary:
*
*For examining problems in realtime code, where pausing with the debugger would effect the result of the calculation (granted, logging will have a slight impact on timing in a realtime process like this, but how much depends greatly on the software)
*For builds sent to beta testers or other colleagues who may not have access to a debugger
*For dumping data to disk that may not be easy to view within a debugger. For instance, certain IDE's which cannot correctly parse STL structures.
*For getting a "feel" of the normal flow of your program
*For making code more readable in addition to commenting, like so:
// Now open the data file
fp = fopen("data.bin", "rb");
The above comment could just as easily be placed in a logging call:
const char *kDataFile = "data.bin";
log("Now opening the data file %s", kDataFile);
fp = fopen(kDataFile, "rb");
That said, you are in some ways correct. Using the logging mechanism as a glorified stack-trace logger will generate very poor quality logfiles, as it doesn't provide a useful enough failure point for the developer to examine. So the key here is obviously the correct and prudent use of logging calls, which I think boils down to the developer's discretion. You need to consider that you're essentially making the logfiles for yourself; your users don't care about them and will usually grossly misinterpret their contents anyways, but you can use them to at least determine why your program misbehaved.
Also, it's quite rare that a logfile will point you to the direct source of a certain bug. In my experience, it usually provides some insight into how you can replicate a bug, and then either by the process of replicating it or debugging it, find the cause of the problem.
A: There is actually a nice library for adding in logging after the fact as you say, PostSharp. It lets you do it via attribute-based programming, among many other very useful things beyond just logging.
I agree that what you say is a little excessive for logging.
Some others bring up some good points, especially the banking scenario and other mission critical apps. It may be necessary for extreme logging, or at least be able to turn it on and off if needed, or have various levels set.
A: I must confess that when I started programming I more or less logged all details as described by "Dillie-O".
Believe me... It helped a lot during initial days of production deployment where we heavily relied on log files to solve hundreds of problems.
Once the system became stable, I slowly started removing log entries as their value add started diminishing. (No Log4j at that point in time.)
I think, the ratio of code-to-log entries depends on the project and environment, and it need not be a constant ratio.
Nowadays we've lot of flexibility in logging with packages like Log4j, dynamic enabling of log level, etc.
But if programmers don't use it appropriately, such as knowing when to use and when NOT to use INFO, DEBUG, ERROR, etc., as well as what details go in log messages (I've seen log messages like "Hello X, Hello XX, Hello XXX, etc." which only the programmer can understand), the ratio will continue to be high with less ROI.
A: I think another factor is the toolset/platform being used and the conventions that come with it. For example, logging seems to be quite pervasive in the J(2)EE world, whereas I can't remember ever writing a log statement in a Ruby on Rails application.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "49"
} |
Q: Setting the character encoding in form submit for Internet Explorer I have a page that contains a form. This page is served with content type text/html;charset=utf-8. I need to submit this form to server using ISO-8859-1 character encoding. Is this possible with Internet Explorer?
Setting accept-charset attribute to form element, like this, works for Firefox, Opera etc. but not for IE.
<form accept-charset="ISO-8859-1">
...
</form>
Edit: This form is created by server A and will be submitted to server B. I have no control over server B.
If I set server A to serve content with charset ISO-8859-1 everything works, but I am looking a way to make this work without changes to server A's encoding. I have another question about setting the encoding in server A.
A: It seems that this can't be done, not at least with current versions of IE (6 and 7).
IE supports form attribute accept-charset, but only if its value is 'utf-8'.
The solution is to modify server A to produce encoding 'ISO-8859-1' for page that contains the form.
A: I've got the same problem here. I have an UTF-8 Page an need to post to an ISO-8859-1 server.
Looks like IE can't handle ISO-8859-1. But it can handle ISO-8859-15.
<form accept-charset="ISO-8859-15">
...
</form>
So this worked for me, since ISO-8859-1 and ISO-8859-15 are almost the same.
A: There is a simple hack to this:
Insert a hidden input field in the form with an entity which only occurs in the character set that the server you're posting (or doing a GET) to accepts.
Example: If the form is located on a server serving ISO-8859-1 and the form will post to a server expecting UTF-8 insert something like this in the form:
<input name="iehack" type="hidden" value="☠" />
IE will then "detect" that the form contains a UTF-8 character and use UTF-8 when you POST or GET. Strange, but it does work.
A: With decent browsers:
<form accept-charset="ISO-8859-1" .... >
With IE (any):
document.charset = 'ISO-8859-1'; // do this before submitting your non-utf8 <form>!
A: If you have any access to the server at all, convert its processing to UTF-8. The art of submitting non-UTF-8 forms is a long and sorry story; this document about forms and i18n may be of interest. I understand you do not seem to care about international support; you can always convert the UTF-8 data to html entities to make sure it stays Latin-1.
A: Just got the same problem and I have a relatively simple solution that does not require any change in the page character encoding(wich is a pain in the ass).
For example, your site is in utf-8 and you want to post a form to a site in iso-8859-1. Just change the action of the post to a page on your site that will convert the posted values from utf-8 to iso-8859-1.
this could be done easily in php with something like this:
<?php
$params = array();
foreach($_POST as $key=>$value) {
$params[] = $key."=".rawurlencode(utf8_decode($value));
}
$params = implode("&",$params);
//then you redirect to the final page in iso-8859-1
?>
A: For Russian symbols 'windows-1251'
<form action="yourProcessPage.php" method="POST" accept-charset="utf-8">
<input name="string" value="string" />
...
</form>
When simply convert string to cp1251
$string = $_POST['string'];
$string = mb_convert_encoding($string, "CP1251", "UTF-8");
A: Looks like Microsoft documents accept-charset, but their doc doesn't say in which version it started to work...
You don't say either in which browser versions you tested it.
A: I seem to remember that Internet Explorer gets confused if the accept-charset encoding doesn't match the encoding specified in the content-type header. In your example, you claim the document is sent as UTF-8, but want form submits in ISO-8859-1. Try matching those and see if that solves your problem.
A: <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
A: I am pretty sure it won't be possible with older versions of IE. Before the accept-charset attribute was devised, there was no way for form elements to specify which character encoding they accepted, and the best that browsers could do is assume the encoding of the page the form is in will do.
It is a bit sad that you need to know which encoding was used -- nowadays we would expect our web frameworks to take care of such details invisibly and expose the text data to the application as Unicode strings, already decoded...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "60"
} |
Q: Deactivating Weblogic Load Balancing Optimization for collocated objects Is there a way to deactivate the optimization for collocated objects that Weblogic uses by default for a specific EJB?
EDIT: Some context :
We have a scheduler service that runs inside one node of the cluster. This is for historic reasons and cannot be changed at the moment.
This service makes call to an EJB and we would like to load balance these calls. Unfortunately at the moment every calls runs on the node that hosts the scheduler service because of the optimization mentioned in the question.
I was thinking of coding a custom load balancing class however this optimization seems to be done before the load balancing step happens.
A: Supposing you are trying to call a remote EJB (load balancing on local ejbs can only be obtained through an indirection trick like Patrick mentioned) you will have to create a new InitialContext using the address of the cluster instead of a particular server. This new IC will provide stubs as if you were a foreign client, subject to the same load balancing strategies as they are.
Unfortunately, this means that EJB3 injections won't work. You will have to do the lookup yourself. There is a chance, and this is pure speculation, that those stubs you can get from the cluster IC are serializable. In other words, it might be possible to bind them and get them injected using @Resource afterwards.
A: Not being too familiar with guts of weblogic but reading their material I would say you cannot, without some amount of trickery.
It does seem though that you can put a JMS (MDB) facade in front of your EJB that does not abide by the collocated object optimization.
OR
If your scheduler is servlet based, you should be able to deploy it in a separate web-application within the container and have it execute calls to the EJB cluster.
A: How many nodes are there in your cluster? Have you considered deploying the EJB to nodes other than the one the scheduler service has been deployed to?
A: I happen to stumble upon this old question.
Is it possible that you create a JMS queue in between the scheduler and the actual executor? Since all managed servers will consume from the same queue, the queue will act as a load-balancer, in a way.
If you have solved this issue differently, I'd be interested to know how :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Flex/Flash Debugging in the Browser I'm having an issue with a Flash/Flex app erroring in Firefox but not IE. I need to see the error that the Flash/Flex app is getting from the ASP.NET app. Is there any way to debug the response that Flash/Flex is getting?
A: Install the Debug version of the FlashPlayer for Firefox. Maybe this is already enough and an Error might pop up.
If not use the FlexBuilder and debug the Flex application. If you don't have a license for the FlexBuilder you may also use the Evaluation licence.
A: Remember that you can also observe the network traffic from firebug, even for flash apps.
A: Depending on how you are making calls to your ASP.net app - you could use something like LiveHTTPHeaders to see the url that your flex app is call and see what response is being sent back from the server (i.e. 200, 404, 503, etc...)
Also - you could use something like Charles to do the same thing but actually see the full response from the server (be it images, text, xml...) and not just the header info.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: What are some good profilers for native C++ on Windows? I'm looking for a profiler to use with native C++. It certainly does not have to be free, however cost does factor into the purchase decision. This is for commercial work so I can't use personal or academic licensed copies.
The key features I'm looking for are:
*
*Process level metrics
*Component level metrics
*Line-level metrics
*Supports Multi-threaded code
*Usability
*Cost
*Visual Studio 2005 Professional support required (VS 2008 Professional support highly
desirable)
I've used Intel's VTune and Compuware's Devpartner Performance Analysis Community Edition.
VTune seemed very powerful but it has a steep learning curve. It also is very "modular" so you have to figure out what parts are you need to buy.
DevPartner PACE was pretty easy to use and provides all of the key features however it's only a 45-day trial. The licensed version (DevPartner for Visual C++ BoundsChecker Suite) is about $1400 a seat, which is doable but a bit high imo.
What are some good profilers for native C++ and WHY?
See also:
What's Your Favorite Profiling Tool For C++
A: On Windows, GlowCode is affordable, fairly easy to use, and offers a free trial so you can see if it works for you.
A: Many people are not aware of it, but MSFT is making great progress putting the best possible tools for improving performance in the hands of developers for free :-). They are exposing to all of us the internals of Windows tracing: ETW.
perftools
It is part of the new Windows SDK for Server 2008 and Vista. Simply impressive and a must-download if performance analysis and profiling under Windows is your goal (regardless of language).
Check the documentation here before you decide to download it:
msdn doc
A: Try Intel Parallel Studio. Currently, it's in beta, but the name Intel says it all.
http://www.intel.com/go/parallel
A: Just found Luke StackWalker on SourceForge (http://lukestackwalker.sourceforge.net/).
Unfortunately it does not have a 'focus on sub tree', but it remains handy to use, uses the symbol server (I suggest you set it up immediately if you don't have it yet), offers a graphical visualisation, ...
The down side is that it doesn't show the accumulated times (samples) of the child functions.
Another alternative is "Very Sleepy" (http://www.codersnotes.com/sleepy). It can show the accumulated times of the children, but unfortunately it doesn't use the symbol server.
A: CodeXL may also be worth looking at. It can run on both Linux and Windows, and although it is mainly dedicated to OpenGL/OpenCL debugging and profiling, there is a time-based sampling option for CPUs under the profiling section which may be helpful. It's also free and works as long as PDB files are available, even for release builds with PDBs (at least on Windows; I don't know how it works on Linux).
A: Definitely Visual Studio Team System. By far.
A: I just finished the first usable version of CxxProf, a portable manual instrumented profiling library for C++.
It fulfills your requirements:
*
*Profiles multithreaded applications
*Support for profiling multiple processes throughout the same network is on the way
*It is written with the best usability and easiest integration in mind
*It's free as in beer and free as in speech
*It will work with VS05,08,10,12 and 13. As well as with g++ on Linux. It's currently tested with VS 2013 Express.
See the project wiki for more info.
Disclaimer: I'm the main developer of CxxProf
A: I wrote an open-source, lightweight Win32/64 profiler that supports both CPU and memory profiling.
It's similar to the VS profiler, but with unique features like flame graphs of CPU and
memory data. It's here: dprofiler
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17"
} |
Q: Parser-generator that outputs C# given a BNF grammar? I'm looking for a tool that will be able to build a parser (in C#) if I give it a BNF grammar (eg. http://savage.net.au/SQL/sql-2003-2.bnf)
Does such a generator exist?
A: Also take a look at Irony:
http://irony.codeplex.com/
seems very promising
A: IronMeta is a C# implementation of Alex Warth's OMeta; it's a packrat PEG (parsing expression grammar; uses biased choice), so grammars can be cleaner than when using a yacc-like LALR system.
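The "biased choice" that distinguishes PEGs from yacc-style LALR grammars can be shown in a few lines. This is a hypothetical toy sketch (not IronMeta's actual API): alternatives are tried in order and the first one that matches wins, so there is no ambiguity to resolve.

```python
# Minimal PEG-style combinators illustrating ordered ("biased") choice.
def lit(s):
    """Parser matching the literal string s at the given position."""
    def parse(text, pos):
        if text.startswith(s, pos):
            return (pos + len(s), s)  # (new position, matched text)
        return None
    return parse

def choice(*parsers):
    """Ordered choice: the first parser that matches wins; later ones are never tried."""
    def parse(text, pos):
        for p in parsers:
            r = p(text, pos)
            if r is not None:
                return r
        return None
    return parse

keyword = choice(lit("select"), lit("sel"))
assert keyword("select", 0) == (6, "select")  # first alternative matches and wins
assert keyword("sel x", 0) == (3, "sel")      # falls through to the second alternative
```

In a CFG, `select | sel` on the input "select" would be ambiguous; in a PEG the ordering itself is the disambiguation rule.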
A: Normally BNF grammars are too ambiguous. ANTLR will be probably good for what you are looking for.
A: The Visual Studio SDK actually ships with lexer and parser generation tools. These are called MPPG and MPLex and are part of the Managed Babel package. While the intention of bundling them with the SDK is to develop language extensions for Visual Studio, they are perfectly usable for creating general AST-emitting parsers.
MPLex and MPPG are based on GPLEX and GPPG (projects of Queensland University of Technology) and are used in a similar fashion to Lex and Yacc. The SDK also contains MSBuild actions for making the parser generation a part of the regular build process.
Here's a screencast showing MPLex and MPPG in action:
http://msdn.microsoft.com/en-us/vstudio/cc837016.aspx
A: You will have to tweak the BNF a bit, but TinyPG is a great tool.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/153572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |