Q: .NET multiplication optimization Does the compiler optimize out any multiplications by 1? That is, consider:
int a = 1;
int b = 5 * a;
Will the expression 5 * a be optimized into just 5? If not, will it if a is defined as:
const int a = 1;
A: The compiler will pre-calculate any constant expressions when it compiles, including string concatenation. Without the const, the expression is left alone.
Your first example compiles to this IL:
.maxstack 2
.locals init ([0] int32, [1] int32)
ldc.i4.1 //load 1
stloc.0 //store in 1st local variable
ldc.i4.5 //load 5
ldloc.0 //load 1st variable
mul // 1 * 5
stloc.1 // store in 2nd local variable
The second example compiles to this:
.maxstack 1
.locals init ( [0] int32 )
ldc.i4.5 //load 5
stloc.0 //store in local variable
A: Constant propagation is one of the most common and easiest optimisations.
A: Looking at the code generated by the Mono compiler, the version with the non-const a performs the multiplication at run time; that is, the multiplication is not optimized out. If you make a const, the multiplication is optimized out.
The Microsoft compiler might be more aggressive; the best way to know is to look at the code it generates and see what it is doing.
A: What the compiler would optimise here is not multiplication by 1 per-se, but rather arithmetic with values known at compile-time. So yeah, a compiler would optimise out all the maths in your example, with or without the const.
Edit: A competent compiler, I should say.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: What is "lambda binding" in Python? I understand what are lambda functions in Python, but I can't find what is the meaning of "lambda binding" by searching the Python docs.
A link to read about it would be great.
A trivial explained example would be even better.
Thank you.
A: I've never heard that term, but one explanation could be the "default parameter" hack used to assign a value directly to a lambda's parameter. Using Swati's example:
def foo(x):
    a = lambda x=x: x
    x = 7
    b = lambda: x
    return a, b

aa, bb = foo(4)
aa() # Returns 4
bb() # Returns 7
A: First, a general definition:
When a program or function statement is executed, the current values of formal parameters are saved (on the stack) and, within the scope of the statement, they are bound to the values of the actual arguments made in the call. When the statement is exited, the original values of those formal arguments are restored. This protocol is fully recursive. If within the body of a statement something is done that causes the formal parameters to be bound again, to new values, the lambda-binding scheme guarantees that this will all happen in an orderly manner.
Now, there is an excellent python example in a discussion here:
"...there is only one binding for x: doing x = 7 just changes the value in the pre-existing binding. That's why
def foo(x):
    a = lambda: x
    x = 7
    b = lambda: x
    return a, b
returns two functions that both return 7; if there was a new binding after the x = 7, the functions would return different values [assuming you don't call foo(7), of course. Also assuming nested_scopes]...."
A: Where have you seen the phrase used?
"Binding" in Python generally refers to the process by which a variable name ends up pointing to a specific object, whether by assignment or parameter passing or some other means, e.g.:
a = dict(foo="bar", zip="zap", zig="zag") # binds a to a newly-created dict object
b = a # binds b to that same dictionary
def crunch(param):
    print param
crunch(a) # binds the parameter "param" in the function crunch to that same dict again
So I would guess that "lambda binding" refers to the process of binding a lambda function to a variable name, or maybe binding its named parameters to specific objects? There's a pretty good explanation of binding in the Language Reference, at http://docs.python.org/ref/naming.html
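A small runnable illustration of the same point, using lambdas created in a loop (the classic case where the default-parameter trick described above matters):

```python
# Illustration (not from the answers above): lambdas created in a loop all
# share one binding of the loop variable, which is the usual motivation for
# the default-parameter trick.
late  = [lambda: i for i in range(3)]      # every lambda closes over the same i
bound = [lambda i=i: i for i in range(3)]  # each default captures i's value now

print([f() for f in late])   # [2, 2, 2]
print([f() for f in bound])  # [0, 1, 2]
```

All three `late` lambdas see the final value of `i`, while each `bound` lambda froze the value it was defined with.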
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Software for creating PNG 8-bit transparent images I'm looking for software to create PNG8 format transparent images as per this article.
NOTE: I need a Linux solution myself, but please submit answers for other OSes.
A: pngquant does a good job of converting to PNG8 while preserving full transparency.
If you're size-conscious, you may also be interested in pngcrush, which can usually (losslessly) compress PNG files quite a bit.
A: I also needed a Linux solution and found pngnq to do a pretty good job. It seems to be designed specifically for creating 8-bit PNG images with alpha channels.
apt-get install pngnq # If on Ubuntu/Debian
A: The link you provided references ImageMagick, which is an excellent toolkit for manipulating images on Linux.
A: Ah, if I remember correctly, when I read this article some months ago, pngquant didn't have a Windows version. I see it has one now. So I tried it, and pngnq too.
The latter seems to do a slightly better job on the IceAlpha.png test image (from libpng.org), at the cost of a slightly bigger image (it can be post-processed with pngcrush or pngout anyway).
The dithering algorithms (pngquant has two, pngnq only one) are different, and it might be worth having both tools, converting images with all the algorithms, and seeing which looks best.
For the record, on the Windows side, IrfanView (4.10) displays these images very well (using the transparency level on each palette entry) while XnView (1.85.1) and GIMP (2.4) apply only a full transparency/opaque display, à la GIF: the light bulb given as an example in the linked article has a transparent background around it, but the orange part is fully opaque.
And the excellent utility TweakPNG shows we have a PLTE (palette, 222 entries) chunk and a tRNS (alpha values for palette colors, 222 entries) chunk. Even better, it allows you to edit each palette entry, color and alpha level. It might be an interesting complementary tool for this format.
Note on IrfanView support: while it handles PNG8 transparency correctly, it doesn't handle gamma information in PNG files: on the toucan image or the ping-pong image, I had to apply a gamma of 2.4 to get similar (lighter) colors.
Note also that IrfanView does an awful job of converting 32-bit PNG images to 256, allowing only one transparent color, which looks bad if full color was dithered!
I see that the GIMP manual states: "This “PNG8” format, like GIF, uses only one bit for transparency; only two transparency levels are possible, transparent or opaque."
while the ISO/W3C standard states:
"The tRNS chunk specifies either alpha values that are associated with palette entries (for indexed-colour images) or a single transparent colour (for greyscale and truecolour images).". The PNG specification 1.2 added: "Although simple transparency is not as elegant as the full alpha channel, it requires less storage space and is sufficient for many common cases."
It looks like single-transparent-color support is more widely implemented than the full transparency palette, alas. At least browsers get it right.
A: It depends on what exactly your original images look like.
If your images already contain 256 or fewer colors (RGBA values), you need only look at pngout (Windows) (Linux/BSD/Mac OS X ports), which you should already be using to optimize your PNG images anyway. It can't quantize images, but it can save them as 8-bit, including alpha transparency. Just pass in the /c3 (or -c3 on Linux et al.) color option to force it to save the image as PNG8.
If your images do contain more than 256 colors, you have a few more, but all less than perfect options:
*
*Adobe Fireworks is probably the best option in terms of the resulting image quality. It will do the job if you only need to convert a few images, or if you don't mind relying on Fireworks to do the batch processing. I did find that it sometimes somehow limits the number of colors in the palette, creating a worse quality image than necessary. I don't know if that's perhaps a bug in CS3 that's been fixed in CS4.
If you're not on Windows or OS X this obviously isn't an option, and buying Fireworks just for this probably isn't worth it either.
*The only alternatives I know of are the already mentioned pngquant and pngnq. I've had better luck with pngnq, but that's probably just going to depend on which quantization strategy works best on the files you're working with.
Unfortunately, I've noticed that neither of them work very well with small amounts of transparency (say, an opaque image with transparent, rounded corners).
A: For Mac: ImageOptim and ImageAlpha are GUIs that run pngcrush, pngquant, and various other normally command-line compression utilities. http://pngmini.com/
A: I recommend "The GIMP", as it can output PNG8 and supports Linux/Windows. If you want a quick Windows-only solution, I also recommend IrfanView.
A: Microsoft Windows: Ultimate Paint (freeware and shareware versions are available). Both versions can save as an 8-bit transparent PNG image. It can also save as a 4-bit PNG (16 colours), which cuts the file size in half compared to 8-bit. Input formats include BMP, GIF, ICO, JPG/JPEG and PNG.
The freeware edition of Ultimate Paint Standard 2.88 LE can be downloaded directly from http://www.ultimatepaint.com/up.zip (1.7 MB).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How does a Perl socket resolve hostnames under Linux? I have a (from what I can tell) perfectly working Linux setup (Ubuntu 8.04) where all tools (nslookup, curl, wget, firefox, etc) are able to resolve addresses. Yet, the following code fails:
$s = new IO::Socket::INET(
    PeerAddr => 'stackoverflow.com',
    PeerPort => 80,
    Proto    => 'tcp',
);
die "Error: $!\n" unless $s;
I verified the following things:
*
*Perl is able to resolve addresses with gethostbyname (ie the code below works):
my $ret = gethostbyname('stackoverflow.com');
print inet_ntoa($ret);
*The original source code works under Windows
*This is how it's supposed to work (i.e. it should resolve hostnames), since LWP relies on this behavior (in fact I stumbled upon the problem while trying to debug why LWP wasn't working for me)
*Running the script doesn't emit DNS requests (so it doesn't even try to resolve the name). Verified with Wireshark
A: From a quick look, the following code from IO::Socket::INET
sub _get_addr {
    my($sock, $addr_str, $multi) = @_;
    my @addr;
    if ($multi && $addr_str !~ /^\d+(?:\.\d+){3}$/) {
        (undef, undef, undef, undef, @addr) = gethostbyname($addr_str);
    } else {
        my $h = inet_aton($addr_str);
        push(@addr, $h) if defined $h;
    }
    @addr;
}
suggests (if you look at the caller of this code) the work-around of adding MultiHomed => 1, to your code.
Without that work-around, the above code appears to try to call inet_aton("hostname.com") using the inet_aton() from Socket.pm. That works for me in both Win32 and Unix, so I guess that is where the breakage lies for you.
See Socket.xs for the source code of inet_aton:
void
inet_aton(host)
    char * host
    CODE:
    {
        struct in_addr ip_address;
        struct hostent * phe;
        if (phe = gethostbyname(host)) {
            Copy( phe->h_addr, &ip_address, phe->h_length, char );
        } else {
            ip_address.s_addr = inet_addr(host);
        }
        ST(0) = sv_newmortal();
        if (ip_address.s_addr != INADDR_NONE) {
            sv_setpvn( ST(0), (char *)&ip_address, sizeof ip_address );
        }
    }
It appears that the Perl gethostbyname() works better than the C gethostbyname() for you.
A: Could you perhaps tell us exactly how your code fails? You've got error-checking code in there, but you haven't reported what the error is!
I've just tried the original code (with the addition of "use IO::Socket::INET") on my Mac OS X machine and it works fine.
I suspect that the Multihomed option is an unnecessary hack and some other issue is the root cause of your problem.
A: Make sure that you have the statement
use IO::Socket::INET;
At the beginning of your source code. If you leave this out, you are probably getting the error message:
Can't locate object method "new" via package "IO::Socket::INET"
Beyond that you might verify that DNS is working using Net::DNS::Resolver; see more information here.
use Net::DNS;
my $res = Net::DNS::Resolver->new;
# Perform a lookup, using the searchlist if appropriate.
my $answer = $res->search('example.com');
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: VB6 Integration with MSBuild So this is a question for anyone who has had to integrate the building/compilation of legacy projects/code in a Team Build/MSBuild environment - specifically, Visual Basic 6 applications/projects.
Outside of writing a custom build Task (which I am not against) does anyone have any suggestions on how best to integrate compilation and versioning of legacy VB6 projects into MSBuild builds?
I'm aware of the FreeToDev msbuild tasks at CodePlex but they've been withdrawn at the moment.
Ideally I'm looking to version and compile the code as well as capture the compilation output (especially errors) for the msbuild log.
I've seen advice on encapsulating this functionality in a custom task, but I really wondered if anyone has tried another solution besides executing shell commands.
In essence, does anyone have a "cleaner" solution? Ideally, shelling out would be a last resort.
A: I am using NAnt to build VB6 projects daily. This does resort to using NAnt's exec task to do the builds (we build 4 projects as part of one "solution").
It also allows you to label versions in your source control repository, get latest code, check in, check out, all the normal requirements, compile the update/setup programs copy the files to required locations and send emails of the results.
The logged results are fairly minimal though as you only get the output provided by a VB6 command line compile.
For versioning, I had to write a small app to extract the version number of my compiled executable and write it to a text file that NAnt could then read and use (for labels, file names, etc.). A bit of a pain, but VB-generated version numbers don't comply anyway.
For help with other non-core tasks see NAntContrib - from the NAnt link above.
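For comparison, the same shell-out can be expressed in MSBuild itself with the built-in Exec task. The following is a hypothetical sketch — the project name, paths, and log handling are invented; VB6's /make and /out command-line switches do the compile and write the error log:

```xml
<!-- Hypothetical MSBuild sketch: shell out to the VB6 compiler and surface
     its log in the MSBuild output. Paths and names are placeholders. -->
<Target Name="BuildVB6">
  <Exec Command="&quot;C:\Program Files\Microsoft Visual Studio\VB98\VB6.exe&quot; /make MyProject.vbp /out vb6build.log"
        ContinueOnError="true">
    <Output TaskParameter="ExitCode" PropertyName="VB6ExitCode" />
  </Exec>
  <!-- Pull the VB6 compile log into the MSBuild log -->
  <ReadLinesFromFile File="vb6build.log">
    <Output TaskParameter="Lines" ItemName="VB6LogLines" />
  </ReadLinesFromFile>
  <Message Text="@(VB6LogLines)" Importance="high" />
  <Error Text="VB6 compile failed; see log above."
         Condition="'$(VB6ExitCode)' != '0'" />
</Target>
```

It is still a shell-out under the hood, but it keeps the log capture and failure reporting inside MSBuild.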
A: The VB6 task will be back on Monday. With regards to versioning, there is no explicit vb versioning task in the pack, however you could make use of the TfsVersion (TaskAction="GetVersion") and the File (TaskAction="Replace") tasks. If you think there is value in creating a new task to encapsulate / provide other functions, then please let me know and I will add it to the pack for the benefit of the whole community.
Apologies for the withdrawal, but come Monday I'm sure all will understand.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Emulation of lex like functionality in Perl or Python Here's the deal. Is there a way to have strings tokenized in a line based on multiple regexes?
One example:
I have to get all href tags, their corresponding text and some other text based on a different regex.
So I have 3 expressions and would like to tokenize the line and extract tokens of text matching every expression.
I have actually done this using flex (not to be confused with Adobe Flex), which is an implementation of the good old lex. lex provides an elegant way to do this by executing "actions" based on expressions. One can control the way lex reads a file too (block- or line-based reads).
The problem is that flex produces C/C++ code which actually does the tokenizing job, and I have a makefile that wraps all these things together.
I was wondering if Perl or Python can do the same thing in some way. I would just like to do everything I need in a single programming language.
Tokenizing is just one of the things that I want to do as part of my application.
Apart from Perl or Python, can any language (functional ones included) do this?
I did read about PLY and ANTLR here (Parsing, where can I learn about it).
But is there a way to do it naturally in Python itself? Pardon my ignorance, but are these tools used in any popular products/services?
Thank you.
A: Look at documentation for following modules on CPAN
HTML::TreeBuilder
HTML::TableExtract
and
Parse::RecDescent
I've used these modules to process quite large and complex web-pages.
A: If you're specifically after parsing links out of web-pages, then Perl's WWW::Mechanize module will figure things out for you in a very elegant fashion. Here's a sample program that grabs the first page of Stack Overflow and parses out all the links, printing their text and corresponding URLs:
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;
my $mech = WWW::Mechanize->new;
$mech->get("http://stackoverflow.com/");
$mech->success or die "Oh no! Couldn't fetch stackoverflow.com";
foreach my $link ($mech->links) {
    print "* [", $link->text, "] points to ", $link->url, "\n";
}
In the main loop, each $link is a WWW::Mechanize::Link object, so you're not just constrained to getting the text and URL.
All the best,
Paul
A: Sounds like you really just want to parse HTML, so I recommend looking at any of the wonderful packages for doing so:
*
*BeautifulSoup
*lxml.html
*html5lib
Or! You can use a parser like one of the following:
*
*PyParsing
*DParser - A GLR parser with good python bindings.
*ANTLR - A recursive descent parser generator that can generate python code.
This example is from the BeautifulSoup Documentation:
from BeautifulSoup import BeautifulSoup, SoupStrainer
import re
links = SoupStrainer('a')
[tag for tag in BeautifulSoup(doc, parseOnlyThese=links)]
# [<a href="http://www.bob.com/">success</a>,
# <a href="http://www.bob.com/plasma">experiments</a>,
# <a href="http://www.boogabooga.net/">BoogaBooga</a>]
linksToBob = SoupStrainer('a', href=re.compile('bob.com/'))
[tag for tag in BeautifulSoup(doc, parseOnlyThese=linksToBob)]
# [<a href="http://www.bob.com/">success</a>,
# <a href="http://www.bob.com/plasma">experiments</a>]
A: Have you looked at PyParsing?
From their homepage:
Here is a program to parse "Hello, World!" (or any greeting of the form "<salutation>, <addressee>!"):
from pyparsing import Word, alphas
greet = Word( alphas ) + "," + Word( alphas ) + "!" # <-- grammar defined here
hello = "Hello, World!"
print hello, "->", greet.parseString( hello )
The program outputs the following:
Hello, World! -> ['Hello', ',', 'World', '!']
A: If your problem has anything at all to do with web scraping, I recommend looking at Web::Scraper, which provides easy element selection via XPath or CSS selectors. I have a (German) talk on Web::Scraper, but if you run it through Babelfish or just look at the code samples, it can help you get a quick overview of the syntax.
Hand-parsing HTML is onerous and won't give you much over using one of the premade HTML parsers. If your HTML is of very limited variation, you can get by with clever regular expressions, but if you're already breaking out hard-core parser tools, it sounds as if your HTML is too irregular to sanely parse with regular expressions.
A: Also check out pQuery; it's a really nice Perlish way of doing this kind of stuff....
use pQuery;
pQuery( 'http://www.perl.com' )->find( 'a' )->each(
    sub {
        my $pQ = pQuery( $_ );
        say $pQ->text, ' -> ', $pQ->toHtml;
    }
);
# prints all HTML anchors on www.perl.com
# => link text -> anchor HTML
However if your requirement is beyond HTML/Web then here is the earlier "Hello World!" example in Parse::RecDescent...
use strict;
use warnings;
use Parse::RecDescent;
my $grammar = q{
    alpha : /\w+/
    sep   : /,|\s/
    end   : '!'
    greet : alpha sep alpha end { shift @item; return \@item }
};
my $parse = Parse::RecDescent->new( $grammar );
my $hello = "Hello, World!";
print "$hello -> @{ $parse->greet( $hello ) }";
# => Hello, World! -> Hello , World !
Probably too much of a large hammer to crack this nut ;-)
A: From perlop:
A useful idiom for lex-like scanners is /\G.../gc. You can combine several regexps like this to process a string part-by-part, doing different actions depending on which regexp matched. Each regexp tries to match where the previous one leaves off.
LOOP:
{
    print(" digits"), redo LOOP if /\G\d+\b[,.;]?\s*/gc;
    print(" lowercase"), redo LOOP if /\G[a-z]+\b[,.;]?\s*/gc;
    print(" UPPERCASE"), redo LOOP if /\G[A-Z]+\b[,.;]?\s*/gc;
    print(" Capitalized"), redo LOOP if /\G[A-Z][a-z]+\b[,.;]?\s*/gc;
    print(" MiXeD"), redo LOOP if /\G[A-Za-z]+\b[,.;]?\s*/gc;
    print(" alphanumeric"), redo LOOP if /\G[A-Za-z0-9]+\b[,.;]?\s*/gc;
    print(" line-noise"), redo LOOP if /\G[^A-Za-z0-9]+/gc;
    print ". That's all!\n";
}
A: Modifying Bruno's example to include error checking:
my $input = "...";
while (1) {
    if ($input =~ /\G(\w+)/gc) { print "word: '$1'\n"; next }
    if ($input =~ /\G(\s+)/gc) { print "whitespace: '$1'\n"; next }
    if ($input !~ /\G\z/gc)    { print "tokenizing error at character " . pos($input) . "\n" }
    print "done!\n"; last;
}
(Note that using scalar //g is unfortunately the one place where you really can't avoid using the $1, etc. variables.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Generating random numbers in Objective-C I'm a Java head mainly, and I want a way to generate a pseudo-random number between 0 and 74. In Java I would use the method:
Random.nextInt(74)
I'm not interested in a discussion about seeds or true randomness, just how you accomplish the same task in Objective-C. I've scoured Google, and there just seems to be lots of different and conflicting bits of information.
A: There are some great, articulate answers already, but the question asks for a random number between 0 and 74. Use:
arc4random_uniform(75)
A: Same as C, you would do
#include <time.h>
#include <stdlib.h>
...
srand(time(NULL));
int r = rand() % 74;
(assuming you meant including 0 but excluding 74, which is what your Java example does)
Edit: Feel free to substitute random() or arc4random() for rand() (which is, as others have pointed out, quite sucky).
A: I thought I could add a method I use in many projects.
- (NSInteger)randomValueBetween:(NSInteger)min and:(NSInteger)max {
    return (NSInteger)(min + arc4random_uniform(max - min + 1));
}
If I end up using it in many files I usually declare a macro as
#define RAND_FROM_TO(min, max) ((min) + arc4random_uniform((max) - (min) + 1))
E.g.
NSInteger myInteger = RAND_FROM_TO(0, 74); // 0, 1, 2,..., 73, 74
Note: Only for iOS 4.3/OS X v10.7 (Lion) and later
A: This will give you a floating point number between 0 and 47
float low_bound = 0;
float high_bound = 47;
float rndValue = (((float)arc4random()/0x100000000)*(high_bound-low_bound)+low_bound);
Or just simply
float rndValue = (((float)arc4random()/0x100000000)*47);
Both lower and upper bound can be negative as well. The example code below gives you a random number between -35.76 and +12.09
float low_bound = -35.76;
float high_bound = 12.09;
float rndValue = (((float)arc4random()/0x100000000)*(high_bound-low_bound)+low_bound);
Convert the result to a rounded integer value:
int intRndValue = (int)(rndValue + 0.5);
A: Use the arc4random_uniform(upper_bound) function to generate a random number within a range. The following will generate a number between 0 and 73 inclusive.
arc4random_uniform(74)
arc4random_uniform(upper_bound) avoids modulo bias as described in the man page:
arc4random_uniform() will return a uniformly distributed random number less than upper_bound. arc4random_uniform() is recommended over constructions like ``arc4random() % upper_bound'' as it avoids "modulo bias" when the upper bound is not a power of two.
A: Generate random number between 0 to 99:
int x = arc4random()%100;
Generate random number between 500 and 1000:
int x = (arc4random()%501) + 500;
A: As of iOS 9 and OS X 10.11, you can use the new GameplayKit classes to generate random numbers in a variety of ways.
You have four source types to choose from: a general random source (unnamed, down to the system to choose what it does), linear congruential, ARC4 and Mersenne Twister. These can generate random ints, floats and bools.
At the simplest level, you can generate a random number from the system's built-in random source like this:
NSInteger rand = [[GKRandomSource sharedRandom] nextInt];
That generates a number between -2,147,483,648 and 2,147,483,647. If you want a number between 0 and an upper bound (exclusive) you'd use this:
NSInteger rand6 = [[GKRandomSource sharedRandom] nextIntWithUpperBound:6];
GameplayKit has some convenience constructors built in to work with dice. For example, you can roll a six-sided die like this:
GKRandomDistribution *d6 = [GKRandomDistribution d6];
[d6 nextInt];
Plus you can shape the random distribution by using things like GKShuffledDistribution.
A: According to the manual page for rand(3), the rand family of functions have been obsoleted by random(3). This is due to the fact that the lower 12 bits of rand() go through a cyclic pattern. To get a random number, just seed the generator by calling srandom() with an unsigned seed, and then call random(). So, the equivalent of the code above would be
#import <stdlib.h>
#import <time.h>
srandom(time(NULL));
random() % 74;
You'll only need to call srandom() once in your program unless you want to change your seed. Although you said you didn't want a discussion of truly random values, rand() is a pretty bad random number generator, and random() still suffers from modulo bias, as it will generate a number between 0 and RAND_MAX. So, e.g. if RAND_MAX is 3, and you want a random number between 0 and 2, you're twice as likely to get a 0 than a 1 or a 2.
A: Better to use arc4random_uniform. However, this isn't available below iOS 4.3. Luckily iOS will bind this symbol at runtime, not at compile time (so don't use the #if preprocessor directive to check if it's available).
The best way to determine if arc4random_uniform is available is to do something like this:
#include <stdlib.h>
int r = 0;
if (arc4random_uniform != NULL)
    r = arc4random_uniform(74);
else
    r = (arc4random() % 74);
A: //The following example is going to generate a number between 0 and 73.
int value;
value = (arc4random() % 74);
NSLog(@"random number: %i ", value);
//In order to generate 1 to 73, do the following:
int value1;
value1 = (arc4random() % 73) + 1;
NSLog(@"random number step 2: %i ", value1);
Output:
*
*random number: 72
*random number step 2: 52
A: I wrote my own random number utility class just so that I would have something that functioned a bit more like Math.random() in Java. It has just two functions, and it's all made in C.
Header file:
//Random.h
void initRandomSeed(long firstSeed);
float nextRandomFloat();
Implementation file:
//Random.m
static unsigned long seed;
void initRandomSeed(long firstSeed)
{
    seed = firstSeed;
}

float nextRandomFloat()
{
    return (((seed = 1664525*seed + 1013904223)>>16) / (float)0x10000);
}
It's a pretty classic way of generating pseudo-randoms. In my app delegate I call:
#import "Random.h"
- (void)applicationDidFinishLaunching:(UIApplication *)application
{
    initRandomSeed( (long) [[NSDate date] timeIntervalSince1970] );
    //Do other initialization junk.
}
Then later I just say:
float myRandomNumber = nextRandomFloat() * 74;
Note that this method returns a random number between 0.0f (inclusive) and 1.0f (exclusive).
A: You should use the arc4random_uniform() function. It uses a superior algorithm to rand. You don't even need to set a seed.
#include <stdlib.h>
// ...
// ...
int r = arc4random_uniform(74);
The arc4random man page:
NAME
     arc4random, arc4random_stir, arc4random_addrandom -- arc4 random number generator
LIBRARY
     Standard C Library (libc, -lc)
SYNOPSIS
     #include <stdlib.h>
     u_int32_t arc4random(void);
     void arc4random_stir(void);
     void arc4random_addrandom(unsigned char *dat, int datlen);
DESCRIPTION
     The arc4random() function uses the key stream generator employed by the arc4 cipher, which uses 8*8 8-bit S-Boxes. The S-Boxes can be in about (2**1700) states. The arc4random() function returns pseudo-random numbers in the range of 0 to (2**32)-1, and therefore has twice the range of rand(3) and random(3).
     The arc4random_stir() function reads data from /dev/urandom and uses it to permute the S-Boxes via arc4random_addrandom().
     There is no need to call arc4random_stir() before using arc4random(), since arc4random() automatically initializes itself.
EXAMPLES
     The following produces a drop-in replacement for the traditional rand() and random() functions using arc4random():
     #define foo4random() (arc4random() % ((unsigned)RAND_MAX + 1))
A: For game dev use random() to generate randoms. Probably at least 5x faster than using arc4random(). Modulo bias is not an issue, especially for games, when generating randoms using the full range of random(). Be sure to seed first. Call srandomdev() in AppDelegate. Here's some helper functions:
static inline int random_range(int low, int high){ return (random()%(high-low+1))+low;}
static inline CGFloat frandom(){ return (CGFloat)random()/UINT32_C(0x7FFFFFFF);}
static inline CGFloat frandom_range(CGFloat low, CGFloat high){ return (high-low)*frandom()+low;}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "753"
} |
Q: Rails Request Initialization We all hear a lot about scaling issues in Rails.
I was just curious what the actual costs in handling a HTTP request is in the Rails framework. Meaning, what has to happen for each and every request which comes in? Is there class parsing? Configuration? Database Connection establishment?
A: That actually depends a lot on which web server you're using, and which configuration you're using, not to mention the application design itself. Configuration and design issues involved include:
*
*Whether you're using fastcgi, old-school cgi, or some other request handling mechanism (affects whether you're going to have to rerun all of the app initialization code per request or not)
*Whether you're using memcache (or an alternate caching strategy) or not (affects cost of database requests)
*Whether you're using additional load balancing techniques or not
*Which session persistence strategy you're using (if needed)
*Whether you're using "development" mode, which causes code files to be reloaded whenever they're changed (as I recall; maybe it's just per-request), or not
Like most web app frameworks, there are solutions for connection pooling, caching, and process management. There are a whole bunch of ways to manage database access; the usual, default ones are not necessarily the highest performance, but it's not rocket science to adjust that strategy.
Someone who has dug into the internals more deeply can probably speak in more excruciating detail, but most apps use either FastCGI on Apache or an alternate more rails-friendly web server, which means that you only have app setup once per process.
A: Until the release of Phusion Passenger (aka mod_rails) the "standard" for deployment was not FastCGI but using a cluster of Mongrel servers fronted by Apache and mod_proxy (or Nginx etc).
The main issue behind the "Rails doesn't scale" claim is that there are some quite complicated threading issues, which have meant that, with the current version of Ruby and the available serving mechanisms, Rails has not been thread-safe. This has meant that multiple containers have been required to run a Rails app to support high levels of concurrent requests. Passenger makes some of this moot, as it handles all of this internally, and can also be run on a custom build of Ruby (Ruby Enterprise Edition) that changes the way memory is handled.
On top of this, the upcoming versions of both Ruby and Rails are directly addressing the threading issue and should close this argument once and for all.
As far as I am concerned the whole claim is pretty bogus. "Scale" is an architectural concern.
A: Here's a good high level overview of the lifecycle of a Rails request. After going through this, you can choose specific sections to profile and optimize.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Traversing HABTM relationships on ActiveRecord I'm working on a project for my school on rails (don't worry this is not graded on code) and I'm looking for a clean way to traverse relationships in ActiveRecord.
I have ActiveRecord classes called Users, Groups and Assignments. Users and Groups have a HABTM relationship as well as Groups and Assignments. Now what I need is a User function get_group(aid) where "given a user, find its group given an assignment".
The easy route would be:
def get_group(aid)
group = nil
groups.each { |g| group = g if g.assignment.find(aid).id == aid }
return group
end
Is there a cleaner implementation that takes advantage of the HABTM relationship between Groups and Assignments rather than just iterating? One thing I've also tried is the :include option for find(), like this:
def get_group(aid)
user.groups.find(:first,
:include => :assignments,
:conditions => ["assignments.id = ?", aid])
end
But this doesn't seem to work. Any ideas?
A: First off, be careful. Since you are using has_and_belongs_to_many for both relationships, then there might be more than one Group for a given User and Assignment. So I'm going to implement a method that returns an array of Groups.
Second, the name of the method User#get_group that takes an assignment id is pretty misleading and un-Ruby-like.
Here is a clean way to get all of the common groups using Ruby's Array#&, the intersection operator. I gave the method a much more revealing name and put it on Group since it is returning Group instances. Note, however, that it loads Groups that are related to one but not the other:
class Group < ActiveRecord::Base
has_and_belongs_to_many :assignments
has_and_belongs_to_many :users
# Use the array intersection operator to find all groups associated with both the User and Assignment
# instances that were passed in
def self.find_all_by_user_and_assignment(user, assignment)
user.groups & assignment.groups
end
end
Then if you really needed a User#get_groups method, you could define it like this:
class User < ActiveRecord::Base
has_and_belongs_to_many :groups
def get_groups(assignment_id)
Group.find_all_by_user_and_assignment(self, Assignment.find(assignment_id))
end
end
Although I'd probably name it User#groups_by_assignment_id instead.
My Assignment model is simply:
class Assignment < ActiveRecord::Base
has_and_belongs_to_many :groups
end
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Oracle considers empty strings to be NULL while SQL Server does not - how is this best handled? I have to write a component that re-creates SQL Server tables (structure and data) in an Oracle database. This component also has to take new data entered into the Oracle database and copy it back into SQL Server.
Translating the data types from SQL Server to Oracle is not a problem. However, a critical difference between Oracle and SQL Server is causing a major headache. SQL Server considers a blank string ("") to be different from a NULL value, so a char column can be defined as NOT NULL and yet still include blank strings in the data.
Oracle considers a blank string to be the same as a NULL value, so if a char column is defined as NOT NULL, you cannot insert a blank string. This is causing my component to break whenever a NOT NULL char column contains a blank string in the original SQL Server data.
So far my solution has been to not use NOT NULL in any of my mirror Oracle table definitions, but I need a more robust solution. This has to be a code solution, so the answer can't be "use so-and-so's SQL2Oracle product".
How would you solve this problem?
Edit: here is the only solution I've come up with so far, and it may help to illustrate the problem. Because Oracle doesn't allow "" in a NOT NULL column, my component could intercept any such value coming from SQL Server and replace it with "@" (just for example).
When I add a new record to my Oracle table, my code has to write "@" if I really want to insert a "", and when my code copies the new row back to SQL Server, it has to intercept the "@" and instead write "".
I'm hoping there's a more elegant way.
Edit 2: Is it possible that there's a simpler solution, like some setting in Oracle that gets it to treat blank strings the same as all the other major database? And would this setting also be available in Oracle Lite?
A: My typical solution would be to add a constraint in SQL Server forcing all string values in the affected columns to have a length greater than 0:
CREATE TABLE Example (StringColumn VARCHAR(10) NOT NULL)
ALTER TABLE Example
ADD CONSTRAINT CK_Example_StringColumn CHECK (LEN(StringColumn) > 0)
However, as you have stated, you have no control over the SQL Database. As such you really have four choices (as I see it):
*
*Treat empty string values as invalid, skip those records, alert an operator and log the records in some manner that makes it easy to manually correct / re-enter.
*Convert empty string values to spaces.
*Convert empty string values to a code (i.e. "LEGACY" or "EMPTY").
*Rollback transfers that encounter empty string values in these columns, then put pressure on the SQL Server database owner to correct their data.
Number four would be my preference, but isn't always possible. The action you take will really depend on what the oracle users need. Ultimately, if nothing can be done about the SQL database, I would explain the issue to the oracle business system owners, explain the options and consequences and make them make the decision :)
NOTE: I believe in this case SQL Server actually exhibits the "correct" behaviour.
A: Do you have to permit empty strings in the SQL Server system? If you can add a constraint to the SQL Server system that disallows empty strings, that is probably the easiest solution.
A: It's nasty and could have unexpected side effects, but you could just insert chr(0) rather than ''.
drop table x
drop table x succeeded.
create table x ( id number, my_varchar varchar2(10))
create table succeeded.
insert into x values (1, chr(0))
1 rows inserted
insert into x values (2, null)
1 rows inserted
select id,length(my_varchar) from x
ID LENGTH(MY_VARCHAR)
---------------------- ----------------------
1 1
2
2 rows selected
select * from x where my_varchar is not null
ID MY_VARCHAR
---------------------- ----------
1
A: NOT NULL is a database constraint used to stop putting invalid data into your database. This is not serving any purpose in your Oracle database and so I would not have it.
I think you should just continue to allow NULLS in any Oracle column that mirrors a SqlServer column that is known to contain empty strings.
If there is a logical difference in the SqlServer database between NULL and empty string, then you would need something extra to model this difference in Oracle.
A: For those that think a Null and an empty string should be considered the same. A null has a different meaning from an empty string. It captures the difference between 'undefined' and 'known to be blank'. As an example a record may have been automatically created, but never validated by user input, and thus receive a 'null' in the expectation that when a user validates it, it will be set to be empty. Practically we may not want to trigger logic on a null but may want to on an empty string. This is analogous to the case for a 3 state checkbox of Yes/No/Undefined.
Both SQL and Oracle have not got it entirely correct. A blank should not satisfy a 'not null' constraint, and there is a need for an empty string to be treated differently than a null is treated.
A: I'd go with an additional column on the oracle side. Have your column allow nulls and have a second column that identifies whether the SQL-Server side should get a null-value or empty-string for the row.
A: I don't see an easy solution for this.
Maybe you can store your values as one or more blanks -> ' ', which aren't NULLS in Oracle, or keep track of this special case through extra fields/tables, and an adapter layer.
A: If you are migrating data you might have to substitute a space for an empty string. Not very elegant, but workable. This is a nasty "feature" of Oracle.
A: I've written an explanation on how Oracle handles null values on my blog a while ago. Check it here: http://www.psinke.nl/blog/hello-world/ and let me know if you have any more questions.
If you have data from a source with empty values and you must convert to an Oracle database where columns are NOT NULL, there are 2 things you can do:
*
*remove the not null constraint from the Oracle column
*Check for each individual column if it's acceptable to place a ' ' or 0 or dummy date in the column in order to be able to save your data.
A: Well, the main point I'd consider is whether there is any case where a field can be NULL, the same field can be an empty string, and the business logic requires distinguishing the two values. Assuming there isn't, I'd use this logic:
*
*check MSSQL if column has NOT NULL constraint
*check MSSQL if column has CHECK(column <> '') or similar constraint
If both are true, make Oracle column NOT NULL. If any one is true, make Oracle column NULL. If none is true, raise INVALID DESIGN exception (or maybe ignore it, if it's acceptable by this application).
When sending data from MSSQL to Oracle, just do nothing special, all data would be transferred right. When retrieving data to MSSQL, any not-null data should be sent as is. For null strings you should decide whether it should be inserted as null or as empty string. To do this you should check table design again (or remember previous result) and see if it has NOT NULL constraint. If has - use empty string, if has not - use NULL. Simple and clever.
Sometimes, if you work with unknown and unpredictable application, you cannot check for existence of {not empty string} constraint because of various forms of it. If so, you can either use simplified logic (make Oracle columns always nullable) or check whether you can insert empty string into MSSQL table without error.
A: Although, for the most part, I agree with most of the other responses (not going to get into an argument about any I disagree with - not the place for that :) )
I do notice that OP mentioned the following:
"Oracle considers a blank string to be the same as a NULL value, so if a char column is defined as NOT NULL, you cannot insert a blank string."
Specifically calling out CHAR, and not VARCHAR2.
Hence, talking about an "empty string" of length 0 (ie '' ) is moot.
If he's declared the CHAR as, for example, CHAR(5), then just add a space to the empty string coming in, Oracle's going to pad it anyway. You'll end up with a 5 space string.
Now, if OP meant VARCHAR2, well yeah, that's a whole other beast, and yeah, the difference between empty string and NULL becomes relevant.
SQL> drop table junk;
Table dropped.
SQL>
SQL> create table junk ( c1 char(5) not null );
Table created.
SQL>
SQL> insert into junk values ( 'hi' );
1 row created.
SQL>
SQL> insert into junk values ( ' ' );
1 row created.
SQL>
SQL> insert into junk values ( '' );
insert into junk values ( '' )
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("GREGS"."JUNK"."C1")
SQL>
SQL> insert into junk values ( rpad('', 5, ' ') );
insert into junk values ( rpad('', 5, ' ') )
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("GREGS"."JUNK"."C1")
SQL>
SQL> declare
2 lv_in varchar2(5) := '';
3 begin
4 insert into junk values ( rpad(lv_in||' ', 5) );
5 end;
6 /
PL/SQL procedure successfully completed.
SQL>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Django admin site not displaying ManyToManyField relationship I'm working on what I think is a pretty standard django site, but am having trouble getting my admin section to display the proper fields.
Here's my models.py:
class Tech(models.Model):
name = models.CharField(max_length = 30)
class Project(models.Model):
title = models.CharField(max_length = 50)
techs = models.ManyToManyField(Tech)
In other words, a Project can have different Tech objects and different tech objects can belong to different Projects (Project X was created with Python and Django, Project Y was C# and SQL Server)
However, the admin site doesn't display any UI for the Tech objects. Here's my admin.py:
class TechInline(admin.TabularInline):
model = Tech
extra = 5
class ProjectAdmin(admin.ModelAdmin):
fields = ['title']
inlines = []
list_display = ('title')
admin.site.register(Project, ProjectAdmin)
I've tried adding the TechInline class to the inlines list, but that causes a
<class 'home.projects.models.Tech'> has no ForeignKey to <class 'home.projects.models.Project'>
Error. Also tried adding techs to the fields list, but that gives a
no such table: projects_project_techs
Error. I verified, and there is no projects_project_techs table, but there is a projects_tech one. Did something perhaps get screwed up in my syncdb?
I am using Sqlite as my database if that helps.
A:
I've tried adding the TechInline class to the inlines list, but that causes a
'TechInLine' not defined
Is that a straight copy-paste? It looks like you just made a typo -- try TechInline instead of TechInLine.
If your syncdb didn't create the proper table, you can do it manually. Execute this command:
python manage.py sqlreset <myapp>
And look for the definition for the projects_project_techs table. Copy and paste it into the client for your database.
A: Assuming your app is called "projects", the default name for your techs table will be projects_tech and the projects table will be projects_project.
The many-to-many table should be something like projects_project_techs
A: @John Millikin - Thanks for the sqlreset tip, that put me on the right path. The sqlreset generated code that showed me that the projects_project_techs was never actually created. I ended up just deleting my deb.db database and regenerating it. techs then showed up as it should.
And just as a sidenote, I had to do an admin.site.register(Tech) to be able to create new instances of the class from the Project page too.
I'll probably post another question to see if there is a better way to implement model changes (since I'm pretty sure that is what caused my problem) without wiping the database.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: OpenJDK7: What essential ADTs are not Implemented in OpenJDK7? What Abstract Datatypes should be added to OpenJDK7 ?
A: If you look at these two links you will see what some people feel is missing:
*
*http://commons.apache.org/collections/
*http://code.google.com/p/google-collections/
My favourite missing one is Bag... it is easy to implement, but annoying to have to implement it.
Multimaps would also be nice to have in the standard packages.
A: I think they should stop adding new libraries to the JDK. You can easily get what you need from third-party, open-source libraries. Hopefully, with version 7 the JVM and its standard libraries will become more modular.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I change the width of the vertical header column for Q3Table in qt 4.3? I am using a Q3Table object and wanted to change the width of the vertical header column. Does anyone know how to do this? It seems I can only adjust the height of the header cell but not the width.
A: You want to use Q3Table::setLeftMargin. This will set the width of the vertical header.
void Q3Table::setLeftMargin ( int m )
Sets the left margin to be m pixels wide.
The verticalHeader(), which displays row labels, occupies this margin.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to simulate a Delphi breakpoint in code? I am pretty sure I have seen this before, but I haven't found out / remembered how to do it. I want to have a line of code that when executed from the Delphi debugger I want the debugger to pop-up like there was a break point on that line.
Something like:
FooBar := Foo(Bar);
SimulateBreakPoint; // Cause break point to occur in Delphi IDE if attached
WriteLn('Value: ' + FooBar);
Hopefully that makes sense. I know I could use an exception, but that would be a lot more overhead then I want. It is for some demonstration code.
Thanks in advance!
A: To trigger the debugger from code (supposedly — I don't have a copy of Delphi handy to try):
asm int 3 end;
See this page:
http://17slon.com/blogs/gabr/2008/03/debugging-with-lazy-breakpoints.html
A: As Andreas Hausladen stated in comments to that article, the Win32 API DebugBreak() function is less DOS-ish and works equally well.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: How can I kill a process by name instead of PID, on Linux? Sometimes when I try to start Firefox it says "a Firefox process is already running". So I have to do this:
jeremy@jeremy-desktop:~$ ps aux | grep firefox
jeremy 7451 25.0 27.4 170536 65680 ? Sl 22:39 1:18 /usr/lib/firefox-3.0.1/firefox
jeremy 7578 0.0 0.3 3004 768 pts/0 S+ 22:44 0:00 grep firefox
jeremy@jeremy-desktop:~$ kill 7451
What I'd like is a command that would do all that for me. It would take an input string and grep for it (or whatever) in the list of processes, and would kill all the processes in the output:
jeremy@jeremy-desktop:~$ killbyname firefox
I tried doing it in PHP but exec('ps aux') seems to only show processes that have been executed with exec() in the PHP script itself (so the only process it shows is itself.)
A: You can kill processes by name with killall <name>
killall sends a signal to all processes running any of the specified commands. If no signal name is specified, SIGTERM is sent.
Signals can be specified either by name (e.g. -HUP or -SIGHUP) or by number (e.g. -1) or by option -s.
If the command name is not a regular expression (option -r) and contains a slash (/), processes executing that particular file will be selected for killing, independent of their name.
But if you don't see the process with ps aux, you probably won't have the right to kill it ...
A: To kill with grep:
kill -9 `pgrep myprocess`
A: pkill firefox
More information: http://linux.about.com/library/cmd/blcmdl1_pkill.htm
A: A bit longer alternative:
kill `pidof firefox`
A: The easiest way is to first check that you are getting the right process IDs with:
pgrep -f [part_of_a_command]
If the result is as expected. Go with:
pkill -f [part_of_a_command]
If processes get stuck and are unable to honour the request, you can force-kill them:
kill -9 $(pgrep -f [part_of_a_command])
If you want to be on the safe side and only terminate processes that you initially started add -u along with your username
pkill -f [part_of_a_command] -u [username]
A: I normally use the killall command.
Check this link for details of this command.
A: More correct would be:
export pid=`ps aux | grep '[p]rocess_name' | awk 'NR==1{print $2}'`; kill -9 $pid
(the [p]rocess_name bracket trick stops the grep command itself from showing up in the match)
A: I was asking myself the same question but the problem with the current answers is that they don't safe check the processes to be killed so... it could lead to terrible mistakes :)... especially if several processes matches the pattern.
As a disclaimer, I'm not a sh pro and there is certainly room for improvement.
So I wrote a little sh script :
#!/bin/sh
killables=$(ps aux | grep $1 | grep -v mykill | grep -v grep)
if [ ! "${killables}" = "" ]
then
echo "You are going to kill some process:"
echo "${killables}"
else
echo "No process with the pattern $1 found."
exit 0
fi
echo -n "Is it ok?(Y/N)"
read input
if [ "$input" = "Y" ]
then
for pid in $(echo "${killables}" | awk '{print $2}')
do
echo killing $pid "..."
kill $pid
echo $pid killed
done
fi
A: kill -9 $(ps aux | grep -e myprocessname| awk '{ print $2 }')
A: Kill all processes whose startup path contains a given snippet. You can kill all apps started from some directory by putting /directory/ as the snippet. This is quite useful when you start several components of the same application from the same app directory.
ps ax | grep <snippet> | grep -v grep | awk '{print $1}' | xargs kill
* I would prefer pgrep if available
A: Strangely, I haven't seen a solution like this yet:
kill -9 `pidof firefox`
it can also kill multiple processes (multiple pids) like:
kill -9 `pgrep firefox`
I prefer pidof since it has single line output:
> pgrep firefox
6316
6565
> pidof firefox
6565 6316
A: Using killall command:
killall processname
Use -9 or -KILL to forcefully kill the program (the options are similar to the kill command).
A: Also possible to use:
pkill -f "Process name"
For me, it worked perfectly; it was what I had been looking for.
Without the flag, pkill matches only against the process name; when -f is set, the full command line is used for pattern matching.
A: If you run GNOME, you can use the system monitor (System->Administration->System Monitor) to kill processes as you would under Windows. KDE will have something similar.
A: The default kill command accepts command names as an alternative to PID. See kill (1). An often occurring trouble is that bash provides its own kill which accepts job numbers, like kill %1, but not command names. This hinders the default command. If the former functionality is more useful to you than the latter, you can disable the bash version by calling
enable -n kill
For more info see kill and enable entries in bash (1).
A: On Mac I could not find pgrep or pkill, and killall wasn't working either, so I wrote a simple one-liner:
export pid=`ps | grep '[p]rocess_name' | awk 'NR==1{print $1}'`; kill $pid
If there's an easier way of doing this then please share.
A: ps aux | grep '[p]rocessname' | awk '{print $2}' | xargs kill -9
A: An awk one-liner which parses the header of the ps output, so you don't need to care about column numbers (only column names). It supports regexes. For example, to kill all processes whose executable name (without path) contains the word "firefox", try
ps -fe | awk 'NR==1{for (i=1; i<=NF; i++) {if ($i=="COMMAND") Ncmd=i; else if ($i=="PID") Npid=i} if (!Ncmd || !Npid) {print "wrong or no header" > "/dev/stderr"; exit} }$Ncmd~"/"name"$"{print "killing "$Ncmd" with PID " $Npid; system("kill "$Npid)}' name=.*firefox.*
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "490"
} |
Q: How do I check if an integer is even or odd? How can I check if a given number is even or odd in C?
A: Use bit arithmetic:
if((x & 1) == 0)
printf("EVEN!\n");
else
printf("ODD!\n");
This is faster than using division or modulus.
A: i % 2 == 0
A: I'd say just divide it by 2 and if there is a 0 remainder, it's even, otherwise it's odd.
Using the modulus (%) makes this easy.
eg.
4 % 2 = 0 therefore 4 is even
5 % 2 = 1 therefore 5 is odd
A: One more solution to the problem
(children are welcome to vote)
#include <stdbool.h>

bool isEven(unsigned int x)
{
unsigned int half1 = 0, half2 = 0;
while (x)
{
if (x) { half1++; x--; }
if (x) { half2++; x--; }
}
return half1 == half2;
}
A: I would build a table of the parities (0 if even 1 if odd) of the integers (so one could do a lookup :D), but gcc won't let me make arrays of such sizes:
typedef unsigned int uint;
char parity_uint [UINT_MAX];
char parity_sint_shifted [((uint) INT_MAX) + ((uint) abs (INT_MIN))];
char* parity_sint = parity_sint_shifted - INT_MIN;
void build_parity_tables () {
char parity = 0;
unsigned int ui;
for (ui = 1; ui <= UINT_MAX; ++ui) {
parity_uint [ui - 1] = parity;
parity = !parity;
}
parity = 0;
int si;
for (si = 1; si <= INT_MAX; ++si) {
parity_sint [si - 1] = parity;
parity = !parity;
}
parity = 1;
for (si = -1; si >= INT_MIN; --si) {
parity_sint [si] = parity;
parity = !parity;
}
}
char uparity (unsigned int n) {
if (n == 0) {
return 0;
}
return parity_uint [n - 1];
}
char sparity (int n) {
if (n == 0) {
return 0;
}
if (n < 0) {
++n;
}
return parity_sint [n - 1];
}
So let's resort to the mathematical definition of even and odd instead.
An integer n is even if there exists an integer k such that n = 2k.
An integer n is odd if there exists an integer k such that n = 2k + 1.
Here's the code for it:
char even (int n) {
int k;
for (k = INT_MIN; k <= INT_MAX; ++k) {
if (n == 2 * k) {
return 1;
}
}
return 0;
}
char odd (int n) {
int k;
for (k = INT_MIN; k <= INT_MAX; ++k) {
if (n == 2 * k + 1) {
return 1;
}
}
return 0;
}
Let C-integers denote the possible values of int in a given C compilation. (Note that C-integers is a subset of the integers.)
Now one might worry that for a given n in C-integers the corresponding integer k might not exist within C-integers. But with a little proof it can be shown that for all integers n, |n| <= |2n| (*), where |n| is "n if n is positive and -n otherwise". In other words, for all n in integers at least one of the following holds (exactly either cases (1 and 2) or cases (3 and 4) in fact, but I won't prove it here):
Case 1: n <= 2n.
Case 2: -n <= -2n.
Case 3: -n <= 2n.
Case 4: n <= -2n.
Now take 2k = n. (Such a k does exist if n is even, but I won't prove it here. If n is not even then the loop in even fails to return early anyway, so it doesn't matter.) But this implies k < n if n not 0 by (*) and the fact (again not proven here) that for all m, z in integers 2m = z implies z not equal to m given m is not 0. In the case n is 0, 2*0 = 0 so 0 is even we are done (if n = 0 then 0 is in C-integers because n is in C-integer in the function even, hence k = 0 is in C-integers). Thus such a k in C-integers exists for n in C-integers if n is even.
A similar argument shows that if n is odd, there exists a k in C-integers such that n = 2k + 1.
Hence the functions even and odd presented here will work properly for all C-integers.
A: // C#
bool isEven = ((i % 2) == 0);
A: Use the modulo (%) operator to check if there's a remainder when dividing by 2:
if (x % 2) { /* x is odd */ }
A few people have criticized my answer above stating that using x & 1 is "faster" or "more efficient". I do not believe this to be the case.
Out of curiosity, I created two trivial test case programs:
/* modulo.c */
#include <stdio.h>
int main(void)
{
int x;
for (x = 0; x < 10; x++)
if (x % 2)
printf("%d is odd\n", x);
return 0;
}
/* and.c */
#include <stdio.h>
int main(void)
{
int x;
for (x = 0; x < 10; x++)
if (x & 1)
printf("%d is odd\n", x);
return 0;
}
I then compiled these with gcc 4.1.3 on one of my machines 5 different times:
*
*With no optimization flags.
*With -O
*With -Os
*With -O2
*With -O3
I examined the assembly output of each compile (using gcc -S) and found that in each case, the output for and.c and modulo.c were identical (they both used the andl $1, %eax instruction). I doubt this is a "new" feature, and I suspect it dates back to ancient versions. I also doubt any modern (made in the past 20 years) non-arcane compiler, commercial or open source, lacks such optimization. I would test on other compilers, but I don't have any available at the moment.
If anyone else would care to test other compilers and/or platform targets, and gets a different result, I'd be very interested to know.
Finally, the modulo version is guaranteed by the standard to work whether the integer is positive, negative or zero, regardless of the implementation's representation of signed integers. The bitwise-and version is not. Yes, I realise two's complement is somewhat ubiquitous, so this is not really an issue.
A: Here is an answer in
Java:
public static boolean isEven (Integer Number) {
Pattern number = Pattern.compile("^.*?(?:[02]|8|(?:6|4))$");
String num = Number.toString(Number);
Boolean numbr = new Boolean(number.matcher(num).matches());
return numbr.booleanValue();
}
A: Reading this rather entertaining discussion, I remembered that I had a real-world, time-sensitive function that tested for odd and even numbers inside the main loop. It's an integer power function, posted elsewhere on StackOverflow, as follows. The benchmarks were quite surprising. At least in this real-world function, modulo is slower, and significantly so. The winner, by a wide margin, requiring 67% of modulo's time, is an or ( | ) approach, and is nowhere to be found elsewhere on this page.
#include <limits.h>  // for UINT_MAX

typedef double dbl;
typedef unsigned int UINT;

static dbl IntPow(dbl st0, int x) {
    UINT OrMask = UINT_MAX - 1;
    dbl st1 = 1.0;
    if (0 == x) return (dbl)1.0;
    while (1 != x) {
        if (UINT_MAX == (x | OrMask)) { // if LSB is 1...
        //if (x & 1) {
        //if (x % 2) {
            st1 *= st0;
        }
        x = x >> 1; // shift x right 1 bit...
        st0 *= st0;
    }
    return st1 * st0;
}
For 300 million loops, the benchmark timings are as follows.
3.962 the | and mask approach
4.851 the & approach
5.850 the % approach
For people who think theory, or an assembly language listing, settles arguments like these, this should be a cautionary tale. There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.
A: Try this: return (((a>>1)<<1) == a)
Example:
a = 10101011
-----------------
a>>1 --> 01010101
a<<1 --> 10101010
b = 10011100
-----------------
b>>1 --> 01001110
b<<1 --> 10011100
A: This is a follow up to the discussion with @RocketRoy regarding his answer, but it might be useful to anyone who wants to compare these results.
tl;dr From what I've seen, Roy's approach (0xFFFFFFFF == (x | 0xFFFFFFFE)) is not always optimized down to x & 1 the way the mod approach is, but in practice running times should turn out equal in all cases.
So, first I compared the compiled output using Compiler Explorer:
Functions tested:
int isOdd_mod(unsigned x) {
return (x % 2);
}
int isOdd_and(unsigned x) {
return (x & 1);
}
int isOdd_or(unsigned x) {
return (0xFFFFFFFF == (x | 0xFFFFFFFE));
}
CLang 3.9.0 with -O3:
isOdd_mod(unsigned int): # @isOdd_mod(unsigned int)
and edi, 1
mov eax, edi
ret
isOdd_and(unsigned int): # @isOdd_and(unsigned int)
and edi, 1
mov eax, edi
ret
isOdd_or(unsigned int): # @isOdd_or(unsigned int)
and edi, 1
mov eax, edi
ret
GCC 6.2 with -O3:
isOdd_mod(unsigned int):
mov eax, edi
and eax, 1
ret
isOdd_and(unsigned int):
mov eax, edi
and eax, 1
ret
isOdd_or(unsigned int):
or edi, -2
xor eax, eax
cmp edi, -1
sete al
ret
Hats off to Clang — it realized that all three cases are functionally equal. However, Roy's approach isn't optimized in GCC, so YMMV.
It's similar with Visual Studio; inspecting the Release x64 (VS2015) disassembly for these three functions, I could see that the comparison part is equal for the "mod" and "and" cases, and slightly larger for Roy's "or" case:
// x % 2
test bl,1
je (some address)
// x & 1
test bl,1
je (some address)
// Roy's bitwise or
mov eax,ebx
or eax,0FFFFFFFEh
cmp eax,0FFFFFFFFh
jne (some address)
However, after running an actual benchmark for comparing these three options (plain mod, bitwise or, bitwise and), results were completely equal (again, Visual Studio 2015 x86/x64, Release build, no debugger attached).
Release assembly uses the test instruction for and and mod cases, while Roy's case uses the cmp eax,0FFFFFFFFh approach, but it's heavily unrolled and optimized so there is no difference in practice.
My results after 20 runs (i7 3610QM, Windows 10 power plan set to High Performance):
[Test: Plain mod 2 ] AVERAGE TIME: 689.29 ms (Relative diff.: +0.000%)
[Test: Bitwise or ] AVERAGE TIME: 689.63 ms (Relative diff.: +0.048%)
[Test: Bitwise and ] AVERAGE TIME: 687.80 ms (Relative diff.: -0.217%)
The difference between these options is less than 0.3%, so it's rather obvious the assembly is equal in all cases.
Here is the code if anyone wants to try, with a caveat that I only tested it on Windows (check the #if LINUX conditional for the get_time definition and implement it if needed, taken from this answer).
#include <stdio.h>
#if LINUX
#include <sys/time.h>
#include <sys/resource.h>
double get_time()
{
struct timeval t;
struct timezone tzp;
gettimeofday(&t, &tzp);
return t.tv_sec + t.tv_usec*1e-6;
}
#else
#include <windows.h>
double get_time()
{
LARGE_INTEGER t, f;
QueryPerformanceCounter(&t);
QueryPerformanceFrequency(&f);
return (double)t.QuadPart / (double)f.QuadPart * 1000.0;
}
#endif
#define NUM_ITERATIONS (1000 * 1000 * 1000)
// using a macro to avoid function call overhead
#define Benchmark(accumulator, name, operation) { \
double startTime = get_time(); \
double dummySum = 0.0, elapsed; \
int x; \
for (x = 0; x < NUM_ITERATIONS; x++) { \
if (operation) dummySum += x; \
} \
elapsed = get_time() - startTime; \
accumulator += elapsed; \
if (dummySum > 2000) \
printf("[Test: %-12s] %0.2f ms\r\n", name, elapsed); \
}
void DumpAverage(char *test, double totalTime, double reference)
{
printf("[Test: %-12s] AVERAGE TIME: %0.2f ms (Relative diff.: %+6.3f%%)\r\n",
test, totalTime, (totalTime - reference) / reference * 100.0);
}
int main(void)
{
int repeats = 20;
double runningTimes[3] = { 0 };
int k;
for (k = 0; k < repeats; k++) {
printf("Run %d of %d...\r\n", k + 1, repeats);
Benchmark(runningTimes[0], "Plain mod 2", (x % 2));
Benchmark(runningTimes[1], "Bitwise or", (0xFFFFFFFF == (x | 0xFFFFFFFE)));
Benchmark(runningTimes[2], "Bitwise and", (x & 1));
}
{
double reference = runningTimes[0] / repeats;
printf("\r\n");
DumpAverage("Plain mod 2", runningTimes[0] / repeats, reference);
DumpAverage("Bitwise or", runningTimes[1] / repeats, reference);
DumpAverage("Bitwise and", runningTimes[2] / repeats, reference);
}
getchar();
return 0;
}
A: [Joke mode="on"]
public enum Evenness
{
Unknown = 0,
Even = 1,
Odd = 2
}
public static Evenness AnalyzeEvenness(object o)
{
if (o == null)
return Evenness.Unknown;
string foo = o.ToString();
if (String.IsNullOrEmpty(foo))
return Evenness.Unknown;
char bar = foo[foo.Length - 1];
switch (bar)
{
case '0':
case '2':
case '4':
case '6':
case '8':
return Evenness.Even;
case '1':
case '3':
case '5':
case '7':
case '9':
return Evenness.Odd;
default:
return Evenness.Unknown;
}
}
[Joke mode="off"]
EDIT: Added confusing values to the enum.
A: I know this is just syntactic sugar and only applicable in .NET, but what about an extension method...
public static class RudiGroblerExtensions
{
public static bool IsOdd(this int i)
{
return ((i % 2) != 0);
}
}
Now you can do the following
int i = 5;
if (i.IsOdd())
{
// Do something...
}
A: In the "creative but confusing category" I offer:
int isOdd(int n) { return n ^ n * n ? isOdd(n * n) : n; }
A variant on this theme that is specific to Microsoft C++:
__declspec(naked) bool __fastcall isOdd(const int x)
{
__asm
{
mov eax,ecx
mul eax
mul eax
mul eax
mul eax
mul eax
mul eax
ret
}
}
A: You guys are waaaaaaaay too efficient. What you really want is:
public boolean isOdd(int num) {
int i = 0;
boolean odd = false;
while (i != num) {
odd = !odd;
i = i + 1;
}
return odd;
}
Repeat for isEven.
Of course, that doesn't work for negative numbers. But with brilliance comes sacrifice...
A: The bitwise method depends on the inner representation of the integer. Modulo will work anywhere there is a modulo operator. For example, some systems actually use the low level bits for tagging (like dynamic languages), so the raw x & 1 won't actually work in that case.
A: IsOdd(int x) { return true; }
Proof of correctness - consider the set of all positive integers and suppose there is a non-empty set of integers that are not odd. Because positive integers are well-ordered, there will be a smallest not odd number, which in itself is pretty odd, so clearly that number can't be in the set. Therefore this set cannot be non-empty. Repeat for negative integers except look for the greatest not odd number.
A: Portable:
i % 2 ? odd : even;
Unportable:
i & 1 ? odd : even;
i << (BITS_PER_INT - 1) ? odd : even;
A: As some people have posted, there are numerous ways to do this. According to this website, the fastest way is the modulus operator:
if (x % 2 == 0)
total += 1; //even number
else
total -= 1; //odd number
However, here is some other code that was bench marked by the author which ran slower than the common modulus operation above:
if ((x & 1) == 0)
total += 1; //even number
else
total -= 1; //odd number
System.Math.DivRem((long)x, (long)2, out outvalue);
if ( outvalue == 0)
total += 1; //even number
else
total -= 1; //odd number
if (((x / 2) * 2) == x)
total += 1; //even number
else
total -= 1; //odd number
if (((x >> 1) << 1) == x)
total += 1; //even number
else
total -= 1; //odd number
while (index > 1)
index -= 2;
if (index == 0)
total += 1; //even number
else
total -= 1; //odd number
tempstr = x.ToString();
index = tempstr.Length - 1;
//this assumes base 10
if (tempstr[index] == '0' || tempstr[index] == '2' || tempstr[index] == '4' || tempstr[index] == '6' || tempstr[index] == '8')
total += 1; //even number
else
total -= 1; //odd number
How many people even knew of the System.Math.DivRem method, or why would they use it?
A: In response to ffpf - I had exactly the same argument with a colleague years ago, and the answer is no, it doesn't work with negative numbers.
The C standard stipulates that negative numbers can be represented in 3 ways:
*
*2's complement
*1's complement
*sign and magnitude
Checking like this:
isEven = (x & 1);
will work for 2's complement and sign and magnitude representation, but not for 1's complement.
However, I believe that the following will work for all cases:
isEven = (x & 1) ^ ((-1 & 1) | ((x < 0) ? 0 : 1));
Thanks to ffpf for pointing out that the text box was eating everything after my less than character!
A: A nice one is:
#include <stdbool.h> /* needed for bool in C99 */

/*forward declaration, C compiles in one pass*/
bool isOdd(unsigned int n);
bool isEven(unsigned int n)
{
if (n == 0)
return true ; // I know 0 is even
else
return isOdd(n-1) ; // n is even if n-1 is odd
}
bool isOdd(unsigned int n)
{
if (n == 0)
return false ;
else
return isEven(n-1) ; // n is odd if n-1 is even
}
Note that this method uses tail recursion involving two functions. It can be implemented efficiently (turned into a while/until kind of loop) if your compiler supports tail recursion, like a Scheme compiler does. In that case the stack should not overflow!
A: A number is even if, when divided by two, the remainder is 0. A number is odd if, when divided by 2, the remainder is 1.
// Java
public static boolean isOdd(int num){
return num % 2 != 0;
}
/* C */
int isOdd(int num){
return num % 2;
}
Methods are great!
A: int isOdd(int i){
return(i % 2);
}
done.
A: To give more elaboration on the bitwise operator method for those of us who didn't do much boolean algebra during our studies, here is an explanation. Probably not of much use to the OP, but I felt like making it clear why NUMBER & 1 works.
Please note that, as someone answered above, the way negative numbers are represented can stop this method from working. In fact it can even break the modulo operator method too, since each language can differ in how it deals with negative operands.
However if you know that NUMBER will always be positive, this works well.
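A minimal C sketch of that pitfall (assuming a C99-or-later compiler, which pins down % to truncate toward zero): testing x % 2 == 1 silently misclassifies negative odd numbers, while x % 2 != 0 is sign-safe and x & 1 matches it on two's-complement machines. The function names are illustrative, not standard:

```c
/* Three odd tests; only the first mishandles negative inputs. */
int is_odd_fragile(int x) { return x % 2 == 1; }   /* -3 % 2 is -1 in C99, so this says "even" */
int is_odd_mod(int x)     { return x % 2 != 0; }   /* sign-safe */
int is_odd_bit(int x)     { return (x & 1) != 0; } /* relies on two's complement for x < 0 */
```

With x = -3, the first returns 0 (wrong), while the other two return 1.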
As Tooony made the point above, only the last digit in binary (and denary) is important.
A boolean logic AND gate dictates that both inputs have to be a 1 (or high voltage) for 1 to be returned.
1 & 0 = 0.
0 & 1 = 0.
0 & 0 = 0.
1 & 1 = 1.
If you represent any number as binary (I have used an 8 bit representation here), odd numbers have 1 at the end, even numbers have 0.
For example:
1 = 00000001
2 = 00000010
3 = 00000011
4 = 00000100
If you take any number and bitwise AND it with 1 (& in Java), it will either return 00000001 (= 1, meaning the number is odd) or 00000000 (= 0, meaning the number is even).
E.g
Is odd?
1 & 1 =
00000001 &
00000001 =
00000001 <— Odd
2 & 1 =
00000010 &
00000001 =
00000000 <— Even
54 & 1 =
00000001 &
00110110 =
00000000 <— Even
This is why this works:
if(number & 1){
//Number is odd
} else {
//Number is even
}
Sorry if this is redundant.
A: Number zero parity.
Python code sequence.
# defining function for number parity check
def parity(number):
"""Parity check function"""
# if number is 0 (zero) return 'Zero neither ODD nor EVEN',
# otherwise number&1, checking last bit, if 0, then EVEN,
# if 1, then ODD.
return (number == 0 and 'Zero neither ODD nor EVEN') \
or (number&1 and 'ODD' or 'EVEN')
# cycle trough numbers from 0 to 13
for number in range(0, 14):
print "{0:>4} : {0:08b} : {1:}".format(number, parity(number))
Output:
0 : 00000000 : Zero neither ODD nor EVEN
1 : 00000001 : ODD
2 : 00000010 : EVEN
3 : 00000011 : ODD
4 : 00000100 : EVEN
5 : 00000101 : ODD
6 : 00000110 : EVEN
7 : 00000111 : ODD
8 : 00001000 : EVEN
9 : 00001001 : ODD
10 : 00001010 : EVEN
11 : 00001011 : ODD
12 : 00001100 : EVEN
13 : 00001101 : ODD
A: I execute this code for ODD & EVEN:
#include <stdio.h>
int main()
{
int number;
printf("Enter an integer: ");
scanf("%d", &number);
if(number % 2 == 0)
printf("%d is even.", number);
else
printf("%d is odd.", number);
}
A: For the sake of discussion...
You only need to look at the last digit in any given number to see if it is even or odd.
Signed, unsigned, positive, negative - they are all the same with regards to this.
So this should work all round: -
void tellMeIfItIsAnOddNumberPlease(int iToTest){
int iLastDigit;
iLastDigit = iToTest - (iToTest / 10 * 10);
if (iLastDigit % 2 == 0){
printf("The number %d is even!\n", iToTest);
} else {
printf("The number %d is odd!\n", iToTest);
}
}
The key here is in the third line of code: the division operator performs an integer division, so the result is missing the fractional part. For example, 222 / 10 will give 22 as a result. Multiply it by 10 again and you have 220. Subtract that from the original 222 and you end up with 2, which by magic is the same number as the last digit in the original number. ;-)
The parentheses are there to remind us of the order the calculation is done in: first the division and the multiplication, then the subtraction from the original number. We could leave them out, since the precedence of division and multiplication is higher than that of subtraction, but this gives us "more readable" code.
We could make it all completely unreadable if we wanted to. It would make no difference whatsoever for a modern compiler: -
printf("%d%s\n",iToTest,0==(iToTest-iToTest/10*10)%2?" is even":" is odd");
But it would make the code way harder to maintain in the future. Just imagine that you would like to change the text for odd numbers to "is not even". Then someone else later on want to find out what changes you made and perform a svn diff or similar...
If you are not worried about portability but more about speed, you could have a look at the least significant bit. If that bit is set to 1 it is an odd number, if it is 0 it's an even number.
On a little endian system, like Intel's x86 architecture it would be something like this: -
if (iToTest & 1) {
// Odd
} else {
// Even
}
A: If you want to be efficient, use bitwise operators (x & 1), but if you want to be readable use modulo 2 (x % 2)
A: Checking even or odd is a simple task.
We know that any number exactly divisible by 2 is an even number, otherwise odd.
We just need to check the divisibility of the number, and to check divisibility we use the % operator
Checking even odd using if else
if(num%2 ==0)
{
printf("Even");
}
else
{
printf("Odd");
}
C program to check even or odd using if else
Using Conditional/Ternary operator
(num % 2 == 0) ? printf("Even") : printf("Odd");
C program to check even or odd using conditional operator.
Using Bitwise operator
if(num & 1)
{
printf("Odd");
}
else
{
printf("Even");
}
A: Up to 66% faster than !(i % 2) or i % 2 == 0:
int isOdd(int n)
{
return n & 1;
}
The code checks the last bit of the integer if it's 1 in Binary
Explanation
Binary : Decimal
-------------------
0000 = 0
0001 = 1
0010 = 2
0011 = 3
0100 = 4
0101 = 5
0110 = 6
0111 = 7
1000 = 8
1001 = 9
and so on...
Notice the rightmost bit is always 1 for Odd numbers.
The & (bitwise AND) operator in our return line checks whether the rightmost bit is 1.
Think of it as true & false
When we compare n with 1, which means 0001 in binary (the number of leading zeros doesn't matter),
let's imagine that we have the integer n with a size of 1 byte.
It'd be represented by 8 bits / 8 binary digits.
If the int n was 7 and we compare it with 1, it's like
7 (1-byte int)| 0 0 0 0 0 1 1 1
&
1 (1-byte int)| 0 0 0 0 0 0 0 1
********************************************
Result | F F F F F F F T
where F stands for false and T for true.
Each bit position is ANDed separately, and only the rightmost position has both bits set. So, automagically, 7 & 1 is true.
What if I want to check the bit before the rightmost?
Simply change n & 1 to n & 2 which 2 represents 0010 in Binary and so on.
I suggest using hexadecimal notation if you're a beginner to bitwise operations
e.g. write return n & 0x01; instead of return n & 1;.
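The "and so on" above generalizes: rather than remembering the mask for each position (1, 2, 4, ...), you can shift first and always mask with 1. A small sketch (bit 0 is the rightmost bit; bit_at is just an illustrative name, not anything standard):

```c
/* Returns bit k (0 = least significant) of n, as 0 or 1. */
int bit_at(unsigned int n, unsigned int k) {
    return (int)((n >> k) & 1u);
}
```

bit_at(n, 0) is the odd test from this answer; bit_at(n, 1) is the n & 2 check.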
A: The modulus operator '%' can be used to check whether a number is odd or even: when a number is divided by 2, if the remainder is 0 then it's an even number, else it's an odd number.
#include <stdio.h>
int main()
{
int n;//using modulus operator
scanf("%d",&n);//take input n from STDIN
printf("%s",n%2==0?"Even":"Odd");//prints Even/Odd depending on n to STDOUT
return 0;
}
But using bit manipulation is quite a bit faster than the above method. If you take a number and apply logical AND '&' with 1: if the answer is 1 then it's odd, else it's even. That is, basically we have to check the last bit of the number n in binary. If the last bit is 0 then n is even, else it's odd.
for example : suppose N = 15 , in binary N = 1111 , now we AND it with 1
1111
0001
&-----
0001
Since the result is 1 the number N=15 is Odd.
Again,suppose N = 8 , in binary N = 1000 , now we AND it with 1
1000
0001
&-----
0000
Since the result is 0 the number N=8 is Even.
#include <stdio.h>
int main()
{
int n;//using AND operator
scanf("%d",&n);//take input n from STDIN
printf("%s",n&1?"Odd":"Even");//prints Even/Odd depending on n to STDOUT
return 0;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "193"
} |
Q: Fatal Error C1083: Cannot Open Include file: 'tiffio.h': No such file or directory VC++ 2008 I got this error when I tried to compile an application that includes the tiffio.h header.
For the record, the tiffio.h is located at:
C:\Program Files\GnuWin32\include
I have set that directory to be included in the 'Projects' and 'Solution' Visual C++ directories, as shown in the image located here.
What am I missing?
A: In the Options dialog you have in the image - change the "Show directories for" from "Executable Files" to "Include files" and then add the include path there.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How can I tell if SP1 has been installed on VS2008? How can I tell if SP1 has been installed on VS2008? e.g. If I'm working on a co-worker's machine - how can I tell if he/she has installed SP1 for VS2008?
A: In Help->About, you can view the installed products. You should see something similar to
Microsoft Visual Studio Team System
2008 Team Suite - ENU Service Pack 1
(KB945140) KB945140
in the list of entries.
A: You may also be able to tell by looking in the registry here
HKEY_LOCAL_MACHINE\Software\Microsoft\DevDiv\[ProductFamily]\Servicing\9.0\
then finding a property named something like "SP" or "SPIndex". A value of 1 means installed, and 0 means not installed.
Tip was found here.
A: Open Visual Studio 2008 and click Help>About. If you do have MS VS SP1 installed, the upper left corner should look like this:
Microsoft Visual Studio 2008
version 9.0.30729.1 SP
The upper right corner states what version of Microsoft .NET Framework you have and may show that it has SP 1 installed, but this DOES NOT mean you have Visual Studio SP 1 installed.
Here is a link to a picture: http://quick-page.net/46ad2310
Hope this helps!
Iconoclast
A: It also puts a little '9' in a white box on the program icon. (Probably not dependable, of course)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22"
} |
Q: Rails Console Doesn't Automatically Load Models For 2nd DB I have a Rails project which has a Postgres database for the actual application but which needs to pull a heck of a lot of data out of an Oracle database.
database.yml looks like
development:
adapter: postgresql
database: blah blah
...
oracle_db:
adapter: oracle
database: blah blah
My models which descend from data on the Oracle DB look something like
class LegacyDataClass < ActiveRecord::Base
establish_connection "oracle_db"
set_primary_key :legacy_data_class_id
has_one :other_legacy_class, :foreign key => :other_legacy_class_id_with_funny_column_name
...
end
Now, by habit I often do a lot of my early development (and this is early development) by coding for a bit and then playing in the Rails console. For example, after defining all the associations for LegacyDataClass I'll start trying things like a = LegacyDataClass.find(:first); puts a.some_association.name. Unexpectedly, this dies with LegacyDataClass not being already loaded.
I can then require 'LegacyDataClass' which fixes the problem until I either need to reload!, which won't actually reload it, or until I open a new instance of the console.
Thus the questions:
*
*Why does this happen? Clearly there is some Rails magic I am not understanding.
*What is the convenient Rails workaround?
A: I believe this might have to do with your model name, rather than your connection. The Rails convention is that model class names are CamelCase, while the files they reside in are lowercase+underscore.
The "LegacyModel" class should therefore be in models/legacy_model.rb. Your statement about "require 'LegacyDataClass'" indicates that this is not the case, and therefore Rails doesn't know how to automagically load that model.
A: I wrote something for an app at work that handles connections to other databases' at runtime, it might be able to help.
http://github.com/cherring/connection_ninja
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Error: initializer element is not computable at load time I have a function that takes a struct, and I'm trying to store its variables in array:
int detect_prm(Param prm) {
int prm_arr[] = {prm.field1, prm.field2, prm.field3};
return 0;
}
But with gcc -Wall -ansi -pedantic-errors -Werror I get the following error:
initializer element is not computable at load time
It looks fine to me, what's wrong?
A: This is illegal in C89, which is what your -ansi flag selects. Initializer lists must be constant compile-time expressions. Do the following instead:
int prm_arr[3];
prm_arr[0] = prm.field1;
prm_arr[1] = prm.field2;
prm_arr[2] = prm.field3;
A: Mike's answer is absolutely right.
However, if you're able to use the GNU C extensions, or to use the newer and better C99 standard instead (use the --std=c99 option), then initializers such as this are perfectly legal. The C99 standard has been out for, well, 9 years, and most C compilers support it quite well... especially this feature.
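To illustrate, here is a hedged sketch of the original function as it could be written for C99 (gcc --std=c99) or as a GNU extension: the same brace initializer that C89 rejects is accepted, because aggregates with automatic storage may be initialized from non-constant expressions. The Param fields are assumed from the question:

```c
typedef struct {
    int field1;
    int field2;
    int field3;
} Param;

/* Copies the fields into a runtime-initialized array (legal in C99)
   and returns their sum, just to use the array for something. */
int sum_fields(Param prm) {
    int prm_arr[] = {prm.field1, prm.field2, prm.field3};
    return prm_arr[0] + prm_arr[1] + prm_arr[2];
}
```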
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
} |
Q: How do you balance the conflicting needs of backwards compatibility and innovation? I work on an application that has a both a GUI (graphical) and API (scripting) interface. Our product has a very large installed base. Many customers have invested a lot of time and effort into writing scripts that use our product.
In all of our designs and implementation, we (understandably) have a very strict requirement to maintain 100% backwards compatibility. A script which ran before must continue to run in exactly the same way, without any modification, when we introduce a new software version.
Unfortunately, this requirement sometimes ties our hands behind our back, as it really restricts our ability to innovate and come up with new and better ways of doing things.
For example, we might come up with a better (and more usable) way of achieving a task which is already possible. It would be desirable to make this better way the default way, but we can't do this as it may have backwards compatibility implications. So we are stuck with leaving the new (better) way as a mode, that the user must "turn on" before it becomes available to them. Unless they read the documentation or online help (which many customers don't do), this new functionality will remain hidden forever.
I know that Windows Vista annoyed a lot of people when it first came out, because of all the software and peripherals which didn't work on it, even when they worked on XP. It received a pretty bad reception because of this. But you can see that Microsoft have also succeeded in making some great innovations in Vista, at the expense of backwards compatibility for a lot of users. They took a risk. Did it pay off? Did they make the right decision? I guess only time will tell.
Do you find yourself balancing the conflicting needs of innovation and backwards compatibility? How do you handle the juggling act?
A: As far as my programming experience is concerned, if I'm going to fundamentally change something that will prevent past incoming data from being used correctly, I need to create an abstraction layer for the old data where it can be converted for use in the new format.
Basically I set the "improved" way as default and make sure through a converter it can read data of the old format, but save or store data as the new format.
I think the big thing here is test, test, test. Backwards compatibility shouldn't hinder forward progress.
A: Split development into two branches, one that maintains backwards compatibility and one for a new major release, where you make it clear that backwards compatibility is being broken.
A: The critical question that you need to ask is wether the customers want/need this "improvement" even if you perceive it as one your customers might not. Once a certain way of doing things has been established changing the workflow is a very "expensive" operation. Depending on the computer savyness of your users it might take some a long time to adjust to the change in the UI.
If you are dealing with clients innovation for innovation's sake is not always a good thing as fun as it might be for you to develop these improvements.
A: You could always look for innovative ways to maintain backwards compatibility.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: How do I invoke a Java method when given the method name as a string? If I have two variables:
Object obj;
String methodName = "getName";
Without knowing the class of obj, how can I call the method identified by methodName on it?
The method being called has no parameters, and a String return value. It's a getter for a Java bean.
A: try {
YourClass yourClass = new YourClass();
Method method = YourClass.class.getMethod("yourMethodName", ParameterOfThisMethod.class);
method.invoke(yourClass, parameter);
} catch (Exception e) {
e.printStackTrace();
}
A: This sounds like something that is doable with the Java Reflection package.
http://java.sun.com/developer/technicalArticles/ALT/Reflection/index.html
Particularly under Invoking Methods by Name:
import java.lang.reflect.*;
public class method2 {
public int add(int a, int b)
{
return a + b;
}
public static void main(String args[])
{
try {
Class cls = Class.forName("method2");
Class partypes[] = new Class[2];
partypes[0] = Integer.TYPE;
partypes[1] = Integer.TYPE;
Method meth = cls.getMethod(
"add", partypes);
method2 methobj = new method2();
Object arglist[] = new Object[2];
arglist[0] = new Integer(37);
arglist[1] = new Integer(47);
Object retobj
= meth.invoke(methobj, arglist);
Integer retval = (Integer)retobj;
System.out.println(retval.intValue());
}
catch (Throwable e) {
System.err.println(e);
}
}
}
A: Here are the READY TO USE METHODS:
To invoke a method, without Arguments:
public static void callMethodByName(Object object, String methodName) throws IllegalAccessException, InvocationTargetException, NoSuchMethodException {
object.getClass().getDeclaredMethod(methodName).invoke(object);
}
To invoke a method, with Arguments:
public static void callMethodByName(Object object, String methodName, int i, String s) throws IllegalAccessException, InvocationTargetException, NoSuchMethodException {
object.getClass().getDeclaredMethod(methodName, int.class, String.class).invoke(object, i, s);
}
Use the above methods as below:
package practice;
import java.io.IOException;
import java.lang.reflect.InvocationTargetException;
public class MethodInvoke {
public static void main(String[] args) throws ClassNotFoundException, NoSuchMethodException, SecurityException, IllegalAccessException, IllegalArgumentException, InvocationTargetException, IOException {
String methodName1 = "methodA";
String methodName2 = "methodB";
MethodInvoke object = new MethodInvoke();
callMethodByName(object, methodName1);
callMethodByName(object, methodName2, 1, "Test");
}
public static void callMethodByName(Object object, String methodName) throws IllegalAccessException, InvocationTargetException, NoSuchMethodException {
object.getClass().getDeclaredMethod(methodName).invoke(object);
}
public static void callMethodByName(Object object, String methodName, int i, String s) throws IllegalAccessException, InvocationTargetException, NoSuchMethodException {
object.getClass().getDeclaredMethod(methodName, int.class, String.class).invoke(object, i, s);
}
void methodA() {
System.out.println("Method A");
}
void methodB(int i, String s) {
System.out.println("Method B: "+"\n\tParam1 - "+i+"\n\tParam 2 - "+s);
}
}
Output:
Method A
Method B:
Param1 - 1
Param 2 - Test
A: Please refer to the following code; it may help you.
public static Method method[];
public static MethodClass obj;
public static String testMethod="A";
public static void main(String args[])
{
obj=new MethodClass();
method=obj.getClass().getMethods();
try
{
for(int i=0;i<method.length;i++)
{
String name=method[i].getName();
if(name.equals(testMethod))
{
// invoke on the target object, not on the name string
method[i].invoke(obj,"Test Parameters of A");
}
}
}
catch(Exception ex)
{
System.out.println(ex.getMessage());
}
}
Thanks....
A: Method method = someVariable.class.getMethod(SomeClass);
String status = (String) method.invoke(method);
SomeClass is the class and someVariable is a variable.
A: The method can be invoked like this. There are also more possibilities (check the reflection api), but this is the simplest one:
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import org.junit.Assert;
import org.junit.Test;
public class ReflectionTest {
private String methodName = "length";
private String valueObject = "Some object";
@Test
public void testGetMethod() throws SecurityException, NoSuchMethodException, IllegalArgumentException,
IllegalAccessException, InvocationTargetException {
Method m = valueObject.getClass().getMethod(methodName, new Class[] {});
Object ret = m.invoke(valueObject, new Object[] {});
Assert.assertEquals(11, ret);
}
}
A: Student.java
class Student{
int rollno;
String name;
void m1(int x,int y){
System.out.println("add is" +(x+y));
}
private void m3(String name){
this.name=name;
System.out.println("danger yappa:"+name);
}
void m4(){
System.out.println("This is m4");
}
}
StudentTest.java
import java.lang.reflect.Method;
public class StudentTest{
public static void main(String[] args){
try{
Class cls=Student.class;
Student s=(Student)cls.newInstance();
String x="kichha";
Method mm3=cls.getDeclaredMethod("m3",String.class);
mm3.setAccessible(true);
mm3.invoke(s,x);
Method mm1=cls.getDeclaredMethod("m1",int.class,int.class);
mm1.invoke(s,10,20);
}
catch(Exception e){
e.printStackTrace();
}
}
}
A: Use method invocation from reflection:
Class<?> c = Class.forName("class name");
Method method = c.getDeclaredMethod("method name", parameterTypes);
method.invoke(objectToInvokeOn, params);
Where:
*
*"class name" is the name of the class
*objectToInvokeOn is of type Object and is the object you want to invoke the method on
*"method name" is the name of the method you want to call
*parameterTypes is of type Class[] and declares the parameters the method takes
*params is of type Object[] and declares the parameters to be passed to the method
A: You should use reflection - init a class object, then a method in this class, and then invoke this method on an object with optional parameters. Remember to wrap the following snippet in try-catch block
Hope it helps!
Class<?> aClass = Class.forName(FULLY_QUALIFIED_CLASS_NAME);
Method method = aClass.getMethod(methodName, YOUR_PARAM_1.class, YOUR_PARAM_2.class);
method.invoke(OBJECT_TO_RUN_METHOD_ON, YOUR_PARAM_1, YOUR_PARAM_2);
A: Using java.lang.reflect:
public static Object launchProcess(String className, String methodName, Class<?>[] argsTypes, Object[] methodArgs)
throws Exception {
Class<?> processClass = Class.forName(className); // convert string classname to class
Object process = processClass.newInstance(); // invoke empty constructor
Method aMethod = process.getClass().getMethod(methodName,argsTypes);
Object res = aMethod.invoke(process, methodArgs); // pass arg
return(res);
}
and here is how you use it:
String className = "com.example.helloworld";
String methodName = "print";
Class<?>[] argsTypes = {String.class, String.class};
Object[] methArgs = { "hello", "world" };
launchProcess(className, methodName, argsTypes, methArgs);
A: With jooR it's merely:
on(obj).call(methodName /*params*/).get()
Here is a more elaborate example:
public class TestClass {
public int add(int a, int b) { return a + b; }
private int mul(int a, int b) { return a * b; }
static int sub(int a, int b) { return a - b; }
}
import static org.joor.Reflect.*;
public class JoorTest {
public static void main(String[] args) {
int add = on(new TestClass()).call("add", 1, 2).get(); // public
int mul = on(new TestClass()).call("mul", 3, 4).get(); // private
int sub = on(TestClass.class).call("sub", 6, 5).get(); // static
System.out.println(add + ", " + mul + ", " + sub);
}
}
This prints:
3, 12, 1
A: First, don't. Avoid this sort of code. It tends to be really bad code and insecure too (see section 6 of Secure Coding Guidelines for the
Java Programming Language, version 2.0).
If you must do it, prefer java.beans to reflection. Beans wraps reflection allowing relatively safe and conventional access.
A: To complete my colleagues' answers, you might want to pay close attention to:
*
*static or instance calls (in one case, you do not need an instance of the class, in the other, you might need to rely on an existing default constructor that may or may not be there)
*public or non-public method call (for the latter, you need to call setAccessible on the method within a doPrivileged block, otherwise FindBugs won't be happy)
*encapsulating into one more manageable applicative exception if you want to throw back the numerous java system exceptions (hence the CCException in the code below)
Here is an old java1.4 code which takes into account those points:
/**
* Allow for instance call, avoiding certain class circular dependencies. <br />
* Calls even private method if java Security allows it.
* @param aninstance instance on which method is invoked (if null, static call)
* @param classname name of the class containing the method
* (can be null - ignored, actually - if instance is provided; must be provided if static call)
* @param amethodname name of the method to invoke
* @param parameterTypes array of Classes
* @param parameters array of Object
* @return resulting Object
* @throws CCException if any problem
*/
public static Object reflectionCall(final Object aninstance, final String classname, final String amethodname, final Class[] parameterTypes, final Object[] parameters) throws CCException
{
Object res;// = null;
try {
Class aclass;// = null;
if(aninstance == null)
{
aclass = Class.forName(classname);
}
else
{
aclass = aninstance.getClass();
}
//Class[] parameterTypes = new Class[]{String[].class};
final Method amethod = aclass.getDeclaredMethod(amethodname, parameterTypes);
AccessController.doPrivileged(new PrivilegedAction() {
public Object run() {
amethod.setAccessible(true);
return null; // nothing to return
}
});
res = amethod.invoke(aninstance, parameters);
} catch (final ClassNotFoundException e) {
throw new CCException.Error(PROBLEM_TO_ACCESS+classname+CLASS, e);
} catch (final SecurityException e) {
throw new CCException.Error(PROBLEM_TO_ACCESS+classname+GenericConstants.HASH_DIESE+ amethodname + METHOD_SECURITY_ISSUE, e);
} catch (final NoSuchMethodException e) {
throw new CCException.Error(PROBLEM_TO_ACCESS+classname+GenericConstants.HASH_DIESE+ amethodname + METHOD_NOT_FOUND, e);
} catch (final IllegalArgumentException e) {
throw new CCException.Error(PROBLEM_TO_ACCESS+classname+GenericConstants.HASH_DIESE+ amethodname + METHOD_ILLEGAL_ARGUMENTS+String.valueOf(parameters)+GenericConstants.CLOSING_ROUND_BRACKET, e);
} catch (final IllegalAccessException e) {
throw new CCException.Error(PROBLEM_TO_ACCESS+classname+GenericConstants.HASH_DIESE+ amethodname + METHOD_ACCESS_RESTRICTION, e);
} catch (final InvocationTargetException e) {
throw new CCException.Error(PROBLEM_TO_ACCESS+classname+GenericConstants.HASH_DIESE+ amethodname + METHOD_INVOCATION_ISSUE, e);
}
return res;
}
A: For those who want a straight-forward code example in Java 7:
Dog class:
package com.mypackage.bean;
public class Dog {
private String name;
private int age;
public Dog() {
// empty constructor
}
public Dog(String name, int age) {
this.name = name;
this.age = age;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public int getAge() {
return age;
}
public void setAge(int age) {
this.age = age;
}
public void printDog(String name, int age) {
System.out.println(name + " is " + age + " year(s) old.");
}
}
ReflectionDemo class:
package com.mypackage.demo;
import java.lang.reflect.*;
public class ReflectionDemo {
public static void main(String[] args) throws Exception {
String dogClassName = "com.mypackage.bean.Dog";
Class<?> dogClass = Class.forName(dogClassName); // convert string classname to class
Object dog = dogClass.newInstance(); // invoke empty constructor
String methodName = "";
// with single parameter, return void
methodName = "setName";
Method setNameMethod = dog.getClass().getMethod(methodName, String.class);
setNameMethod.invoke(dog, "Mishka"); // pass arg
// without parameters, return string
methodName = "getName";
Method getNameMethod = dog.getClass().getMethod(methodName);
String name = (String) getNameMethod.invoke(dog); // explicit cast
// with multiple parameters
methodName = "printDog";
Class<?>[] paramTypes = {String.class, int.class};
Method printDogMethod = dog.getClass().getMethod(methodName, paramTypes);
printDogMethod.invoke(dog, name, 3); // pass args
}
}
Output:
Mishka is 3 year(s) old.
You can invoke the constructor with parameters this way:
Constructor<?> dogConstructor = dogClass.getConstructor(String.class, int.class);
Object dog = dogConstructor.newInstance("Hachiko", 10);
Alternatively, you can remove
String dogClassName = "com.mypackage.bean.Dog";
Class<?> dogClass = Class.forName(dogClassName);
Object dog = dogClass.newInstance();
and do
Dog dog = new Dog();
Method method = Dog.class.getMethod(methodName, ...);
method.invoke(dog, ...);
Suggested reading: Creating New Class Instances
A: Object obj;
Method method = obj.getClass().getMethod("methodName");
method.invoke(obj);
A: //Step1 - Using string funClass to convert to class
String funClass = "package.myclass";
Class c = Class.forName(funClass);
//Step2 - instantiate an object of the class abov
Object o = c.newInstance();
//Prepare array of the arguments that your function accepts, lets say only one string here
Class[] paramTypes = new Class[1];
paramTypes[0]=String.class;
String methodName = "mymethod";
//Instantiate an object of type method that returns you method name
Method m = c.getDeclaredMethod(methodName, paramTypes);
//invoke method with actual params
m.invoke(o, "testparam");
A: Indexing (faster)
You can use a FunctionalInterface to store methods in a container and index them. An array container lets you invoke them by number; a HashMap lets you invoke them by string key. With this trick, you can index your methods and dispatch them dynamically and quickly.
@FunctionalInterface
public interface Method {
double execute(int number);
}
public class ShapeArea {
private final static double PI = 3.14;
private Method[] methods = {
this::square,
this::circle
};
private double square(int number) {
return number * number;
}
private double circle(int number) {
return PI * number * number;
}
public double run(int methodIndex, int number) {
return methods[methodIndex].execute(number);
}
}
Lambda syntax
You can also use lambda syntax:
public class ShapeArea {
private final static double PI = 3.14;
private Method[] methods = {
number -> {
return number * number;
},
number -> {
return PI * number * number;
},
};
public double run(int methodIndex, int number) {
return methods[methodIndex].execute(number);
}
}
Edit 2022
Here is a universal solution that works with methods taking a variable number of arguments:
@FunctionalInterface
public interface Method {
Object execute(Object ...args);
}
public class Methods {
private Method[] methods = {
this::square,
this::rectangle
};
private Object square(Object... args) {
int number = (int) args[0];
return number * number;
}
private Object rectangle(Object... args) {
int width = (int) args[0];
int height = (int) args[1];
return width * height;
}
public Method run(int methodIndex) {
return methods[methodIndex];
}
}
Usage:
methods.run(1).execute(width, height);
A: Coding from the hip, it would be something like:
java.lang.reflect.Method method;
try {
method = obj.getClass().getMethod(methodName, param1.class, param2.class, ..);
} catch (SecurityException e) { ... }
catch (NoSuchMethodException e) { ... }
The parameters identify the very specific method you need (if there are several overloaded available, if the method has no arguments, only give methodName).
Then you invoke that method by calling
try {
method.invoke(obj, arg1, arg2,...);
} catch (IllegalArgumentException e) { ... }
catch (IllegalAccessException e) { ... }
catch (InvocationTargetException e) { ... }
Again, leave out the arguments in .invoke, if you don't have any. But yeah. Read about Java Reflection
A: If you do the call several times you can use the new method handles introduced in Java 7. Here we go for your method returning a String:
Object obj = new Point( 100, 200 );
String methodName = "toString";
Class<String> resultType = String.class;
MethodType mt = MethodType.methodType( resultType );
MethodHandle methodHandle = MethodHandles.lookup().findVirtual( obj.getClass(), methodName, mt );
String result = resultType.cast( methodHandle.invoke( obj ) );
System.out.println( result ); // java.awt.Point[x=100,y=200]
A: This is working fine for me :
public class MethodInvokerClass {
public static void main(String[] args) throws NoSuchMethodException, SecurityException, IllegalAccessException, IllegalArgumentException, ClassNotFoundException, InvocationTargetException, InstantiationException {
Class c = Class.forName(MethodInvokerClass.class.getName());
Object o = c.newInstance();
Class[] paramTypes = new Class[1];
paramTypes[0]=String.class;
String methodName = "countWord";
Method m = c.getDeclaredMethod(methodName, paramTypes);
m.invoke(o, "testparam");
}
public void countWord(String input){
System.out.println("My input "+input);
}
}
Output:
My input testparam
I am able to invoke the method by passing its name to another method (like main).
A: For those who are calling the method within the same class from a non-static method, see below codes:
class Person {
public void method1() {
try {
Method m2 = this.getClass().getDeclaredMethod("method2");
m2.invoke(this);
} catch (NoSuchMethodException e) {
e.printStackTrace();
} catch (IllegalAccessException e) {
e.printStackTrace();
} catch (InvocationTargetException e) {
e.printStackTrace();
}
}
public void method2() {
// Do something
}
}
A: Suppose you're invoking a static method from a static method within the same class. To do that, you can sample the following code.
class MainClass
{
public static int foo()
{
return 123;
}
public static void main(String[] args) throws Exception // getMethod/invoke throw checked exceptions
{
Method method = MainClass.class.getMethod("foo");
int result = (int) method.invoke(null); // answer evaluates to 123
}
}
To explain, since we're not looking to perform true object-oriented programming here, hence avoiding the creation of unnecessary objects, we will instead leverage the class property to invoke getMethod().
Then we will pass in null for the invoke() method because we have no object to perform this operation upon.
And finally, because we, the programmer, know that we are expecting an integer, then
we explicitly cast the return value of the invoke() invocation to an integer.
Now you may wonder: "What even is the point of doing all this non-object oriented programming in Java?"
My use case was to solve Project Euler problems in Java. I have a single Java source file containing all the solutions, and I wanted to pass in command line arguments to determine which Project Euler problem to run.
A: for me a pretty simple and fool proof way would be to simply make a method caller method like so:
public static Object methodCaller(String methodName)
{
if(methodName.equals("getName"))
return className.getName();
return null; // no matching method name
}
then when you need to call the method simply put something like this
//calling a toString method is unnecessary here, but I use it to make my programs both rigid and self-explanatory
System.out.println(methodCaller(methodName).toString());
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "757"
} |
Q: What are your language "hangups"? I've read some of the recent language vs. language questions with interest... Perl vs. Python, Python vs. Java, Can one language be better than another?
One thing I've noticed is that a lot of us have very superficial reasons for disliking languages. We notice these things at first glance and they turn us off. We shun what are probably perfectly good languages as a result of features that we'd probably learn to love or ignore in 2 seconds if we bothered.
Well, I'm as guilty as the next guy, if not more. Here goes:
*
*Ruby: All the Ruby example code I see uses the puts command, and that's a sort of childish Yiddish anatomical term. So as a result, I can't take Ruby code seriously even though I should.
*Python: The first time I saw it, I smirked at the whole significant whitespace thing. I avoided it for the next several years. Now I hardly use anything else.
*Java: I don't like identifiersThatLookLikeThis. I'm not sure why exactly.
*Lisp: I have trouble with all the parentheses. Things of different importance and purpose (function declarations, variable assignments, etc.) are not syntactically differentiated and I'm too lazy to learn what's what.
*Fortran: uppercase everything hurts my eyes. I know modern code doesn't have to be written like that, but most example code is...
*Visual Basic: it bugs me that Dim is used to declare variables, since I remember the good ol' days of GW-BASIC when it was only used to dimension arrays.
What languages did look right to me at first glance? Perl, C, QBasic, JavaScript, assembly language, BASH shell, FORTH.
Okay, now that I've aired my dirty laundry... I want to hear yours. What are your language hangups? What superficial features bother you? How have you gotten over them?
A: Pascal's Begin and End. Too verbose, not subject to bracket matching, and worse, there isn't a Begin for every End, eg.
Type foo = Record
// ...
end;
A: Although I'm mainly a PHP developer, I dislike languages that don't let me do enough things inline. E.g.:
$x = returnsArray();
$x[1];
instead of
returnsArray()[1];
or
function sort($a, $b) {
return $a < $b;
}
usort($array, 'sort');
instead of
usort($array, function($a, $b) { return $a < $b; });
A: I like object-oriented style. So it bugs me in Python to see len(str) to get the length of a string, or splitting strings like split(str, "|") in another language. That is fine in C; it doesn't have objects. But Python, D, etc. do have objects and use obj.method() other places. (I still think Python is a great language.)
Inconsistency is another big one for me. I do not like inconsistent naming in the same library: length(), size(), getLength(), getlength(), toUTFindex() (why not toUtfIndex?), Constant, CONSTANT, etc.
The long names in .NET bother me sometimes. Can't they shorten DataGridViewCellContextMenuStripNeededEventArgs somehow? What about ListViewVirtualItemsSelectionRangeChangedEventArgs?
And I hate deep directory trees. If a library/project has a 5 level deep directory tree, I'm going to have trouble with it.
A: C and C++'s syntax is a bit quirky. They reuse operators for different things. You're probably so used to it that you don't think about it (nor do I), but consider how many meanings parentheses have:
int main() // function declaration / definition
printf("hello") // function call
(int)x // type cast
2*(7+8) // override precedence
int (*)(int) // function pointer
int x(3) // initializer
if (condition) // special part of syntax of if, while, for, switch
And if in C++ you saw
foo<bar>(baz(),baaz)
you couldn't know the meaning without the definition of foo and bar.
*
*the < and > might be a template instantiation, or might be less-than and greater-than (unusual but legal)
*the () might be a function call, or might be just surrounding the comma operator (i.e. perform baz() for side-effects, then return baaz).
The silly thing is that other languages have copied some of these characteristics!
A: Java, and its checked exceptions. I left Java for a while, dwelling in the .NET world, then recently came back.
It feels like, sometimes, my throws clause is more voluminous than my method content.
A: There's nothing in the world I hate more than php.
*
*Variables with $, that's one extra odd character for every variable.
*Members are accessed with -> for no apparent reason, one extra character for every member access.
*A freakshow of language really.
*No namespaces.
*Strings are concatenated with ..
*A freakshow of language.
A: All the []s and @s in Objective C. Their use is so different from the underlying C's native syntax that the first time I saw them it gave the impression that all the object-orientation had been clumsily bolted on as an afterthought.
A: I abhor the boiler plate verbosity of Java.
*
*writing getters and setters for properties
*checked exception handling and all the verbiage that implies
*long lists of imports
Those, in connection with the Java convention of using veryLongVariableNames, sometimes have me thinking I'm back in the 80's, writing IDENTIFICATION DIVISION. at the top of my programs.
Hint: If you can automate the generation of part of your code in your IDE, that's a good hint that you're producing boilerplate code. With automated tools, it's not a problem to write, but it's a hindrance every time someone has to read that code - which is more often.
While I think it goes a bit overboard on type bureaucracy, Scala has successfully addressed some of these concerns.
A: Coding Style inconsistencies in team projects.
I'm working on a large team project where some contributors have used 4 spaces instead of the tab character.
Working with their code can be very annoying - I like to keep my code clean and with a consistent style.
It's bad enough when you use different standards for different languages, but in a web project with HTML, CSS, Javascript, PHP and MySQL, that's 5 languages, 5 different styles, and multiplied by the number of people working on the project.
I'd love to re-format my co-workers code when I need to fix something, but then the repository would think I changed every line of their code.
A: I hate Hate HATE "End Function" and "End IF" and "If... Then" parts of VB. I would much rather see a curly bracket instead.
A: It irritates me sometimes how people expect there to be one language for all jobs. Depending on the task you are doing, each language has its advantages and disadvantages. I like the C-based syntax languages because it's what I'm most used to and I like the flexibility they tend to bestow on the developer. Of course, with great power comes great responsibility, and having the power to write 150 line LINQ statements doesn't mean you should.
I love the inline XML in the latest version of VB.NET although I don't like working with VB mainly because I find the IDE less helpful than the IDE for C#.
A: If Microsoft had to invent yet another C++-like language in C# why didn't they correct Java's mistake and implement support for RAII?
A: Case sensitivity.
What kinda hangover do you need to think that differentiating two identifiers solely by caSE is a great idea?
A: I hate semi-colons. I find they add a lot of noise and you rarely need to put two statements on a line. I prefer the style of Python and other languages... end of line is end of a statement.
A: Any language that can't fully decide if Arrays/Loop/string character indexes are zero based or one based.
I personally prefer zero based, but any language that mixes the two, or lets you "configure" which is used can drive you bonkers. (Apache Velocity - I'm looking in your direction!)
snip from the VTL reference (default is 1, but you can set it to 0):
# Default starting value of the loop
# counter variable reference.
directive.foreach.counter.initial.value = 1
(try merging 2 projects that used different counter schemes - ugh!)
A: In no particular order...
OCaml
*
*Tuples definitions use * to separate items rather than ,. So, ("Juliet", 23, true) has the type (string * int * bool).
*For being such an awesome language, the documentation has this haunting comment on threads: "The threads library is implemented by time-sharing on a single processor. It will not take advantage of multi-processor machines. Using this library will therefore never make programs run faster." JoCaml doesn't fix this problem.
*^^^ I've heard the Jane Street guys were working to add concurrent GC and multi-core threads to OCaml, but I don't know how successful they've been. I can't imagine a language without multi-core threads and GC surviving very long.
*No easy way to explore modules in the toplevel. Sure, you can write module q = List;; and the toplevel will happily print out the module definition, but that just seems hacky.
C#
*
*Lousy type inference. Beyond the most trivial expressions, I have to give types to generic functions.
*All the LINQ code I ever read uses method syntax, x.Where(item => ...).OrderBy(item => ...). No one ever uses expression syntax, from item in x where ... orderby ... select. Between you and me, I think expression syntax is silly, if for no other reason than that it looks "foreign" against the backdrop of all other C# and VB.NET code.
LINQ
Every other language uses the industry-standard names Map, Fold/Reduce/Inject, and Filter. LINQ has to be different and uses Select, Aggregate, and Where.
Functional Programming
Monads are mystifying. Having seen the Parser monad, Maybe monad, State, and List monads, I can understand perfectly how the code works; however, as a general design pattern, I can't seem to look at problems and say "hey, I bet a monad would fit perfect here".
Ruby
GRRRRAAAAAAAH!!!!! I mean... seriously.
VB
Module Hangups
Dim _juliet as String = "Too Wordy!"
Public Property Juliet() as String
Get
Return _juliet
End Get
Set (ByVal value as String)
_juliet = value
End Set
End Property
End Module
And setter declarations are the bane of my existence. Alright, so I change the data type of my property -- now I need to change the data type in my setter too? Why doesn't VB borrow from C# and simply incorporate an implicit variable called value?
.NET Framework
I personally like Java casing convention: classes are PascalCase, methods and properties are camelCase.
A: PHP's function name inconsistencies.
// common parameters back-to-front
in_array(needle, haystack);
strpos(haystack, needle);
// _ to separate words, or not?
filesize();
file_exists;
// super globals prefix?
$GLOBALS;
$_POST;
A: In C/C++, it annoys me how there are different ways of writing the same code.
e.g.
if (condition)
{
callSomeConditionalMethod();
}
callSomeOtherMethod();
vs.
if (condition)
callSomeConditionalMethod();
callSomeOtherMethod();
equate to the same thing, but different people have different styles. I wish the original standard was more strict about making a decision about this, so we wouldn't have this ambiguity. It leads to arguments and disagreements in code reviews!
A: I found Perl's use of "defined" and "undefined" values to be so useful that I have trouble using scripting languages without it.
Perl:
($lastname, $firstname, $rest) = split(' ', $fullname);
This statement performs well no matter how many words are in $fullname. Try it in Python, and it explodes if $fullname doesn't contain exactly three words.
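By way of comparison, a sketch of how Python 3 can be made similarly forgiving; split3 is a hypothetical helper, not a standard function:

```python
def split3(fullname):
    # Mimic Perl's forgiving list assignment: missing fields become None,
    # and everything past the second word is folded into the last slot.
    parts = fullname.split(maxsplit=2)
    parts += [None] * (3 - len(parts))  # pad short inputs
    return parts

assert split3("Doe John everything else") == ["Doe", "John", "everything else"]
assert split3("Doe") == ["Doe", None, None]
```

Plain tuple unpacking (`lastname, firstname, rest = fullname.split()`) still raises ValueError on the wrong word count, which is the behavior the answer complains about.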
A: SQL, they say you should not use cursors and when you do, you really understand why...
its so heavy going!
DECLARE mycurse CURSOR LOCAL FAST_FORWARD READ_ONLY
FOR
SELECT field1, field2, fieldN FROM atable
OPEN mycurse
FETCH NEXT FROM mycurse INTO @Var1, @Var2, @VarN
WHILE @@fetch_status = 0
BEGIN
-- do something really clever...
FETCH NEXT FROM mycurse INTO @Var1, @Var2, @VarN
END
CLOSE mycurse
DEALLOCATE mycurse
A: Although I program primarily in Python, it irks me endlessly that lambda bodies must be expressions.
I'm still wrapping my brain around JavaScript, and as a whole, it's mostly acceptable. Why is it so hard to create a namespace? In TCL they're just ugly, but in JavaScript it's actually a rigmarole AND completely unreadable.
In SQL, how come everything is just one huge freakin' SELECT statement?
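A minimal illustration of the lambda limitation mentioned above:

```python
# A lambda body must be a single expression:
double = lambda x: x * 2
assert double(4) == 8

# Anything that needs statements (assignment, print, loops) must become a def:
def double_with_log(x):
    result = x * 2          # assignment is a statement -- not allowed in a lambda
    print("doubling", x)
    return result

assert double_with_log(4) == 8
```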
A: In Ruby, I very strongly dislike how methods do not require self. to be called on current instance, but properties do (otherwise they will clash with locals); i.e.:
def foo()
123
end
def foo=(x)
end
def bar()
x = foo() # okay, same as self.foo()
x = foo # not okay, reads unassigned local variable foo
foo = 123 # not okay, assigns local variable foo
end
To my mind, it's very inconsistent. I'd rather prefer to either always require self. in all cases, or to have a sigil for locals.
A: I never really liked the keywords spelled backwards in some scripting shells
if-then-fi is bad enough, but case-in-esac is just getting silly
A: I just thought of another... I hate the mostly-meaningless URLs used in XML to define namespaces, e.g. xmlns="http://purl.org/rss/1.0/"
A: Java's packages. I find them complex, more so because I am not a corporation.
I vastly prefer namespaces. I'll get over it, of course - I'm playing with the Android SDK, and Eclipse removes a lot of the pain. I've never had a machine that could run it interactively before, and now I do I'm very impressed.
A: Prolog's if-then-else syntax.
x -> y ; z
The problem is that ";" is the "or" operator, so the above looks like "x implies y or z".
A: Java
*
*Generics (Java version of templates) are limited. I can not call methods of the class and I can not create instances of the class. Generics are used by containers, but I can use containers of instances of Object.
*No multiple inheritance. If a use of multiple inheritance does not lead to the diamond problem, it should be allowed. It should be possible to write a default implementation of interface methods. An example of the problem: the interface MouseListener has 5 methods, one for each event. If I want to handle just one of them, I have to implement the 4 other methods as empty methods.
*It does not let you choose to manually manage the memory of some objects.
*The Java API uses complex combinations of classes to do simple tasks. For example, if I want to read from a file, I have to use many classes (FileReader, FileInputStream).
Python
*
*Indentation is part of syntax, I prefer to use the word "end" to indicate end of block and the word "pass" would not be needed.
*In classes, the word "self" should not be needed as argument of functions.
C++
*
*Headers are the worst problem. I have to list the functions in a header file and implement them in a cpp file. It can not hide dependencies of a class. If a class A uses the class B privately as a field, if I include the header of A, the header of B will be included too.
*Strings and arrays came from C, they do not provide a length field. It is difficult to control if std::string and std::vector will use stack or heap. I have to use pointers with std::string and std::vector if I want to use assignment, pass as argument to a function or return it, because its "=" operator will copy entire structure.
*I can not control the constructor and destructor. It is difficult to create an array of objects without a default constructor or choose what constructor to use with if and switch statements.
A: I hated the parentheses in Lisp and Scheme, because after C, C# and languages like that it seemed very obfuscated, and it wasn't really clear how things are related. However, now that I know something about Scheme, and it's usual formatting guidelines, I wouldn't say that I like the way it works, but at least I understand, and overcome my fears when reading code written in CLisp/Scheme.
I think if you learn something and use it for a while(maybe even a few hours are enough, at least it was for me), you can actually overcome your dislike in the syntax, and will be able to concentrate on what you are supposed to do really with the tool which is the language.
A: tsql begin & end...that's annoying...
A: In most languages, file access. VB.NET is the only language so far where file access makes any sense to me. I do not understand why if I want to check if a file exists, I should use File.exists("") or something similar instead of creating a file object (actually FileInfo in VB.NET) and asking if it exists. And then if I want to open it, I ask it to open: (assuming a FileInfo object called fi) fi.OpenRead, for example. Returns a stream. Nice. Exactly what I wanted. If I want to move a file, fi.MoveTo. I can also do fi.CopyTo. What is this nonsense about not making files full-fledged objects in most languages? Also, if I want to iterate through the files in a directory, I can just create the directory object and call .GetFiles. Or I can do .GetDirectories, and I get a whole new set of DirectoryInfo objects to play with.
Admittedly, Java has some of this file stuff, but this nonsense of having to have a whole object to tell it how to list files is just silly.
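For the record, Python later grew an API in exactly this object-oriented spirit: pathlib (added in Python 3.4, well after this thread), where the path itself is a full-fledged object. A sketch:

```python
from pathlib import Path
import tempfile

tmpdir = Path(tempfile.mkdtemp())
p = tmpdir / "note.txt"                    # paths compose with the / operator
assert not p.exists()                      # ask the object, not a static helper
p.write_text("hello")
assert p.read_text() == "hello"
assert list(tmpdir.glob("*.txt")) == [p]   # enumerate files, like GetFiles
p.unlink()                                 # the object can delete itself, too
tmpdir.rmdir()
```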
Also, I hate ::, ->, => and all other multi-character operators except for <= and >= (and maybe -- and ++).
A: My big hangup is MATLAB's syntax. I use it, and there are things I like about it, but it has so many annoying quirks. Let's see.
*
*Matrices are indexed with parentheses. So if you see something like Image(350,260), you have no clue from that whether we're getting an element from the Image matrix, or if we're calling some function called Image and passing arguments to it.
*Scope is insane. I seem to recall that for loop index variables stay in scope after the loop ends.
*If you forget to stick a semicolon after an assignment, the value will be dumped to standard output.
*You may have one function per file. This proves to be very annoying for organizing one's work.
I'm sure I could come up with more if I thought about it.
A: [Disclaimer: i only have a passing familiarity with VB, so take my comments with a grain of salt]
I Hate How Every Keyword In VB Is Capitalized Like This. I saw a blog post the other week (month?) about someone who tried writing VB code without any capital letters (they did something to a compiler that would let them compile VB code like that), and the language looked much nicer!
A: I got one.
I have a grudge against all overly strict static typed languages.
I thought C# was awesome until I started being forced to write code like this:
void event...(object sender,EventArgs e){
int t=(int)(decimal)(MyControl.Value); //Value is an object which is actually a decimal to be converted into an int
}
Oh, and attributes are fugly. Could Microsoft seriously not think of anything uglier than [MyAttribute(Argument)] void function... Seriously. wtf? Don't even get me started on XAML markup..
I can't take Python seriously because of the whitespace issue..
At times I have trouble taking Ruby seriously because
a) I taught myself from Why's Poignant Guide
b) Identifier type is determined by case in some instances. I've grown past this though cause it's sensible and more clean than a const keyword. Now in every language when I make a constant it's uppercase.
Oh and I also hate the
if(a)
b();
syntax. You have no idea how many times I've just done
if(a)
b();
c();
by accident with code written like that.. Actually it can be worse with
if(a)
b(); c();
The only place it should ever be able to be used is
if(a){ ....
}else if(b){ ...
A: I hate that in Python I never know if something is a method on an object, or some random function floating around (see built-in functions). It feels like they started to make the language object-oriented but then slacked off. It makes more sense to me to have such functions be methods on some base class, like Object.
I also hate the __methodName__-style methods, and that if I really want to, I can still access private stuff in a class from outside the class.
The whitespace requirement bugs me; I don't want the language designer making me code a certain way.
I don't like the one-right-way-to-do-something ideal to which Python adheres. I want options.
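On the "access private stuff" point above: double-underscore attributes are only protected by name mangling, which is trivially bypassed from outside the class:

```python
class Account:
    def __init__(self):
        self.__balance = 100        # mangled to _Account__balance

a = Account()
try:
    a.__balance                     # the unmangled name is hidden...
except AttributeError:
    pass
assert a._Account__balance == 100   # ...but the mangled one is fully accessible
```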
A: My hangups with C# are very simple:
*
*Scoped blocks
I wish I could write
with SomeObject
{
.Property = "Some value";
.Event();
}
instead of:
SomeObject.Property = "Some value";
SomeObject.Event();
Especially with Append, AppendFormat on StringBuilder this would make the code less cumbersome to read.
*
*Nulls
I wish I could say:
var result = SomeObject.SomeCollection.First().SomeProperty ??? "Default value";
Instead of:
string result = string.Empty;
if ( SomeObject != null && SomeObject.SomeCollection.Count() > 0 )
{
result = SomeObject.SomeCollection.First().SomeProperty;
}
else
{
result = "Default value";
}
I hate NullExceptions and would like to have a smooth null coalesce that works on multiple levels deep.
A: I have a practical one from years of code reviewing and debugging other people's code. I would remove (from all languages) the ability to group logical operations in a conditional statement. This comes from a specific gripe about the AND operator, e.g...
if (a and b)
{
do something
}
There are four cases, three of which have not been handled. The programmer may well have considered all 4 cases and deliberately chosen to write the code this way, but we have no indication that is the case unless they commented the code - and normally they don't.
It may be a bit more verbose, but the following is unambiguous...
if (a)
{
if (b)
{
do something
}
else
{
what about this case?
}
}
else
{
if (b)
{
what about this case?
}
else
{
do something else
}
}
As the poor person following up a year later at least I will know exactly what is supposed to be going on.
A: Applies to several languages:
*
*Case sensitivity - whose idea was that?! (And people who use SeveralWordsThatMeanSomething as well as severalwordsthatmeansomething for different meanings should be shot :)
*Array indexing starting from 0. I come from a Fortran background, so that is another reason, but in mathematics array indexing always starts with 1, so it tends to create a lot of headaches (especially when debugging) when implementing a larger model.
*Semicolons - just junk in code. If you're careful writing code (fortran, python, ...) you don't need them. If you're not, they're not gonna save you.
*Curly brackets - see 3.
p.s. All of you out there. Don't get mad. If you don't like the answer, you shouldn't have asked.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16"
} |
Q: Visual Studio 2005 locks up when attaching to process I have a simple C++ DLL that implements a few custom actions for a WiX installer.
Debugging the custom actions is usually simple: put up a temporary dialog box at the beginning of the action, and attach to the process when the dialog box appears.
But today, whenever I attach to the process, I get the "Microsoft Visual Studio is Busy" bubble appearing at the bottom of the screen. I cannot figure out where this is coming from. Any ideas?
A: After hours of trying to figure this out, I realized that the problem was that I had debugging symbols enabled in Tools->Options->Debugging->Symbols. The latency in looking up symbols was leading to the apparent lockup.
Clearing the "Search the above locations only when symbols are loaded manually" seems to have alleviated the problem.
A: Are you referencing debug symbols from a network location that is not available (e.g. a ClearCase dynamic view or something similar)? This can cause Visual Studio to hang when you attach to a process.
Check Tools->Options->Debugging->Symbols and try temporarily disabling the symbol file (.pdb) locations until you figure out which is slowing it down (or causing it to hang). Through elimination you should be able to figure this out.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to typedef a pointer to method which returns a pointer the method? Basically I have the following class:
class StateMachine {
...
StateMethod stateA();
StateMethod stateB();
...
};
The methods stateA() and stateB() should be able to return pointers to stateA() and stateB().
How to typedef the StateMethod?
A: Using just typedef:
class StateMachine {
public:
class StateMethod;
typedef StateMethod (StateMachine::*statemethod)();
class StateMethod {
statemethod method;
StateMachine& obj;
public:
StateMethod(statemethod method_, StateMachine *obj_)
: method(method_), obj(*obj_) {}
StateMethod operator()() { return (obj.*(method))(); }
};
StateMethod stateA() { return StateMethod(&StateMachine::stateA, this); }
StateMethod stateB() { return StateMethod(&StateMachine::stateB, this); }
};
A: EDIT: njsf proved me wrong here. You might find static casting simpler to maintain, however, so I will leave the rest here.
There is no 'correct' static type since the full type is recursive:
typedef StateMethod (StateMachine::*StateMethod)();
Your best bet is to use typedef void (StateMachine::*StateMethod)(); then do the ugly state = (StateMethod)(this->*state)();
PS: boost::function requires an explicit return type, at least from my reading of the docs: boost::function0<ReturnType>
A: My philosophy is don't use raw member function pointers. I don't even really know how to do what you want using raw pointer typedef's the syntax is so horrible. I like using boost::function.
This is almost certainly wrong:
class X
{
public:
typedef const boost::function0<Method> Method;
// some kind of mutually recursive state machine
Method stateA()
{ return boost::bind(&X::stateB, this); }
Method stateB()
{ return boost::bind(&X::stateA, this); }
};
This problem is definitely a lot harder than first meets the eye
A: GotW #57 says to use a proxy class with an implicit conversion for this very purpose.
class StateMachine;
struct StateMethod;
typedef StateMethod (StateMachine:: *FuncPtr)();
struct StateMethod
{
StateMethod( FuncPtr pp ) : p( pp ) { }
operator FuncPtr() { return p; }
FuncPtr p;
};
class StateMachine {
public:
    StateMethod stateA();
    StateMethod stateB();
};
int main()
{
    StateMachine *fsm = new StateMachine();
    FuncPtr a = fsm->stateA(); // natural usage syntax
    return 0;
}
StateMethod StateMachine::stateA()
{
    return &StateMachine::stateA; // natural return syntax (members need the &Class:: qualifier)
}
StateMethod StateMachine::stateB()
{
    return &StateMachine::stateB;
}
This solution has three main strengths:
*
*It solves the problem as required. Better still, it's type-safe and portable.
*Its machinery is transparent: You get natural syntax for the caller/user, and natural syntax for the function's own "return stateA;" statement.
*It probably has zero overhead: On modern compilers, the proxy class, with its storage and functions, should inline and optimize away to nothing.
A: I can never remember the horrible C++ function declspec, so whenever I have to find out the syntax that describes a member function, for example, I just induce an intentional compiler error which usually displays the correct syntax for me.
So given:
class StateMachine {
bool stateA(int someArg);
};
What's the syntax for stateA's typedef? No idea.. so let's try to assign to it something unrelated and see what the compiler says:
char c = &StateMachine::stateA;
Compiler says:
error: a value of type "bool (StateMachine::*)(int)" cannot be used to initialize
an entity of type "char"
There it is: "bool (StateMachine::*)(int)" is our typedef.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: WPF: Slider doesnt raise MouseLeftButtonDown or MouseLeftButtonUp I tried this XAML:
<Slider Width="250" Height="25" Minimum="0" Maximum="1" MouseLeftButtonDown="slider_MouseLeftButtonDown" MouseLeftButtonUp="slider_MouseLeftButtonUp" />
And this C#:
private void slider_MouseLeftButtonDown(object sender, MouseButtonEventArgs e)
{
sliderMouseDown = true;
}
private void slider_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
{
sliderMouseDown = false;
}
The sliderMouseDown variable never changes because the MouseLeftButtonDown and MouseLeftButtonUp events are never raised. How can I get this code to work when a user has the left mouse button down on a slider to have a bool value set to true, and when the mouse is up, the bool is set to false?
A: Try using LostMouseCapture and GotMouseCapture.
private void sliderr_LostMouseCapture(object sender, MouseEventArgs e)
private void slider_GotMouseCapture(object sender, MouseEventArgs e)
GotMouseCapture fires when the user begins dragging the slider, and LostMouseCapture when he releases it.
A: Sliders swallow the MouseDown Events (similar to the button).
You can register for the PreviewMouseDown and PreviewMouseUp events which get fired before the slider has a chance to handle them.
A: Another way to do it (and possibly better depending on your scenario) is to register an event handler in procedural code like the following:
this.AddHandler
(
Slider.MouseLeftButtonDownEvent,
new MouseButtonEventHandler(slider_MouseLeftButtonDown),
true
);
Please note the true argument. It basically says that you want to receive that event even if it has been marked as handled. Unfortunately, hooking up an event handler like this can only be done from procedural code and not from xaml.
In other words, with this method, you can register an event handler for the normal event (which bubbles) instead of the preview event which tunnels (and therefore occur at different times).
See the Digging Deeper sidebar on page 70 of WPF Unleashed for more info.
A: I'd like to mention that the Slider doesn't quite swallow the entire MouseDown event. By clicking on a tick mark, you can get notified for the event. The Slider won't handle MouseDown events unless they come from the slider's... slider.
Basically if you decide to use the
AddHandler(Slider.MouseLeftButtonDownEvent, ..., true)
version with the ticks turned on, be sure to check whether the event was already handled. If you don't, you'll end up with an edge case where you thought the slider was clicked, but it was really a tick. Registering for the Preview event is even worse - you'll pick up the event anywhere, even on the white-space between ticks.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/160995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18"
} |
Q: Dynamic breadcrumb generation - how to do? I'm in the early phases of developing a brand spanking new site with Spring + Tiles. The site needs dynamically generated breadcrumbs.
What I mean by dynamic is that the user may reach a certain site from multiple starting points. If I have views for Customers, Orders and Products, the user could reach a Product directly:
Products -> Product xyz
or the user could reach a product through a customer's order:
Customers -> John Doe -> Orders -> Order 123 -> Product xyz
What is the best way to achieve breadcrumbs like these in a java environment? I've previously done this by using a request attribute (a Vector of Url objects) that is filled with the Urls in each action/servlet of my webapp (like in the action List of Products). I'm not happy with this solution as it requires adding code to each controller/action for generating the breadcrumb trail. And in a case like viewing a product of given order of given customer, the if-then-else logic needed to determine the trail is awful.
Are there any libraries that I could use?
A: Why don't you just use a session variable that stores the trail? Each view would only have to either append itself to the variable or reset the variable in the case of the 'root' views. The code to append it and the code to show it would always be the same and could go in a generic library; you would just call it with a flag to either append to or reset the stored trail.
A: Struts2 has a breadcrumbs plugin.
A: There is a more recent struts2 breadcrumb plugin hosted at google code it is very configurable and should satisfy your needs.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Launch Content Editor from code I have an application that is creating a new item in Sitecore and then opening the Content Editor to that item. It loads fine, but whenever I try to open the HTML editor I get a 'NullReferenceException'. This only happens when I launch the application via this method.
Source Code:
[Serializable]
public class PushToCMS : Command
{
public override void Execute(CommandContext context)
{
//Context.ClientPage.Start(this, "Action_PushToCMS");
Database dbCore = Sitecore.Configuration.Factory.GetDatabase("core");
Item contentEditor = dbCore.GetItem(new ID("{7EADA46B-11E2-4EC1-8C44-BE75784FF105}"));
Database dbMaster = Sitecore.Configuration.Factory.GetDatabase("master");
DatabaseEngines engine = new DatabaseEngines(dbMaster);
Item parentItem = dbMaster.GetItem("/sitecore/content/Home/Events/Parent/");
// Load existing related item if it exists
Event evt = new Event(new Guid(HttpContext.Current.Items["id"].ToString()));
Item item = dbMaster.SelectSingleItem("/sitecore/content/Home/Events/Parent/Item");
if (item == null)
item = CreateNewEvent(engine.DataEngine, parentItem, evt);
Sitecore.Text.UrlString parameters = new Sitecore.Text.UrlString();
parameters.Add("id", item.ID.ToString());
parameters.Add("fo", item.ID.ToString());
Sitecore.Shell.Framework.Windows.RunApplication(contentEditor, contentEditor.Appearance.Icon, contentEditor.DisplayName, parameters.ToString());
}
The only difference I can tell between the two loading methods is the URL to the HTML editor; however, I don't know where this is being defined or how I can control it.
Launched through normal method:
http://xxxx/sitecore/shell/default.aspx?xmlcontrol=RichTextEditor&da=core&id=%7bDD4372AC-5D37-4C9E-BBFA-C4E3E2A27722%7d&ed=F27055570&vs&la=en&fld=%7b60D10DBB-7CD5-4341-A960-C7AB10347A2C%7d&so&di=0&hdl=H27055699&us=%7b83D34C8A-4CC4-4CD9-A209-600D51B26AAE%7d&mo
Launched through RunApplication:
http://xxxx/layouts/xmlcontrol.aspx?xmlcontrol=RichTextEditor&da=core&id=%7bDD4372AC-5D37-4C9E-BBFA-C4E3E2A27722%7d&ed=F27055196&vs&la=en&fld=%7b60D10DBB-7CD5-4341-A960-C7AB10347A2C%7d&so&di=0&hdl=H27055325&us=%7b83D34C8A-4CC4-4CD9-A209-600D51B26AAE%7d&mo
Any help on this would be greatly appreciated.
A: Phil,
If it is not too late for the answer... :)
It might be the case that you run this code without the permissions to read the core database. In this case, when you try to call contentEditor. you'll get NullReference. I would recommend you using another format of running the application - use another method:
Sitecore.Shell.Framework.Windows.RunApplication("Content Editor", parameters.ToString());
If this doesn't help, please attach the stack trace of the exception you get.
Hope this helps.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How can I create a web page that shows aggregate data from Sawtooth surveys? I'm guessing this won't apply to 99.99% of anyone that sees this. I've been doing some Sawtooth survey programming at work and I've been needing to create a webpage that shows some aggregate data from the completed surveys. I was just wondering if anyone else has done this using the flat files that Sawtooth generates and how you went about doing it. I only know very basic Perl and the server I use does not have PHP so I'm somewhat at a loss for solutions. Anything you've got would be helpful.
Edit: The problem with offering example files is that it's more complicated. It's not a single file and it occasionally gets moved to a different file with a different format. The complexities added in there are why I ask this question.
A: Doesn't Sawtooth export into CSV format? There are many Perl parsers for CSV files. Just about every language has a CSV parser or two (or twelve), and MS Excel can open them directly, and they're still plaintext so you can look at them in any text editor.
I know our version of Sawtooth at work (which is admittedly very old) exports Sawtooth data into SPSS format, which can then be exported into various spreadsheet formats including CSV, if all else fails.
If you have a flat (fixed-width field) file, you can easily parse it in Perl using regular expressions or just taking substrings of each line one at a time, assuming you know the width of the fields. Your question is too general to give much better advice, sorry.
Matching the values up from a plaintext file with meta-data (variable names and labels, value labels etc.) is more complicated unless you already have the meta-data in some script-readable format. Making all of that stuff available on a web page is more complicated still. I've done it and it can be a bit of a lengthy project to roll your own. There are packages you can buy, like SDA, which will help you build a website where people can browse and download your survey data and view your codebooks.
Honestly though the easiest thing to do if you're posting statistical data on a website is get the data into SPSS or SAS or another statistics package format and post those files for download directly. Then you don't have to worry about it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: why are local styles being ignored when using forms authentication in asp.net? I have some styles applied to html for example
<body style="background: #C3DAF9;">
and when I use forms authentication it is ignored. If I put the style into an external .css file then it works.
This doesn't seem like normal behaviour to me.
A: Have you tried inspecting your HTML elements with Firebug? That should hopefully tell you what, if anything, is overriding your CSS.
A: Solved the problem. I'm not sure I understand why it happened but here is the offending code;
if (User.Identity.IsAuthenticated) {
if (User.Identity is BookingIdentity) {
BookingIdentity id = (BookingIdentity) User.Identity;
Response.Write("<p/>UserName: " + id.Name);
}
}
Removing the Response.Write makes everything work again.
The Response.Write (which I added to check the user was logged in at same time as the forms authentication) seems to be doing something to the page render? Any ideas?
Turns out that Response.Write was the problem; it essentially aborts the rendering of the rest of the page from that point (or words to that effect).
A: That's weird. I've experienced this problem but the other way around: when I use external style sheets the external style sheet is the one being ignored, and only my inline CSS works.
The solution to that problem was to add permissions for the folder where the external CSS file resides.
One suggestion: View the source of the rendered page, and check the body tag there. It is possible that the style is being overwritten somewhere with the value of the external CSS file.
A: Learn how to use Firebug and use it to determine what styles are applied to your page.
A: The background style does not take a 'color' value.
You are looking for background-color.
A: Yes you should check the output html, and your browser.
If there is no style tag in your html output you could use and try:
<body bgcolor="#C3DAF9">
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Advice on splitting up a process involving multiple actors into Use Cases Let's say I am modelling a process that involves a conversation or exchnage between two actors. For this example, I'll use something easily understandable:-
*
*Supplier creates a price list,
*Buyer chooses some items to buy and sends a Purchase Order,
*Supplier receives the purchase order and sends the goods.
*Supplier sends an invoice
*Buyer receives the invoice and makes a payment
Of course each of those steps could itself be quite complicated. How would you split this up into use cases in your requirements document?
If this process was treated as a single use-case it could fill a book.
Alternatively, making a use case out of each of the above steps would hide some of the essential interaction and flow that should be captured. Would it make sense to have a use case that starts at "Received a purchase order" and finishes at "Send an Invoice" and then another that starts at "Receive an Invoice" and ends at "Makes a Payment"?
Any advice?
A: Yes, there are many possibilities here. In your example above it could be even more complicated by the Buyer making multiple partial payments to pay the bill.
You probably need to create complete workflow use cases. Splitting each of the above steps into their own use cases may not prove useful as some of the steps will have pre & post conditions.
I work on the QuickBooks source code and the number of ways that a transaction can flow through the system is daunting. It is almost impossible for our QA guys to test every combination.
A: The way I usually approach such tasks is by just starting to create UML Use Case and high-level Activity diagrams for the process. Don't bother about specifics, just give it your best shot.
When you will have a draft you would almost immediately see from it how it could be improved. You could then go on refactoring it - getting the use case smaller, structuring large Activities and so on. Alternatively you could lump a couple of Use Cases together if they are too small.
Without knowing the details of your project I would just go ahead and make each step a separate Use Case - they all seem to be self-contained and could be described without any cross-references. If while doing so you will find any dependencies you could always rethink the approach.
Also consider use 'extend' and 'include' blocks for common elements like logging, security etc.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: In .NET is there a way to enable Assembly.Load tracing? In .NET is there a way to enable Assembly.Load tracing? I know while running under the debugger it gives you a nice message like "Loaded 'assembly X'" but I want to get a log of the assembly loads of my running application outside the debugger, preferably intermingled with my Debug/Trace log messages.
I'm tracing out various things in my application and I basically want to know what action triggered a particular assembly to be loaded.
A: Fusion Log Viewer is your friend.
[edit] Actually this might be too over the top, the AssemblyResolve event is good too[edit]
A: MS Visual Studio has this functionality built in.
Select 'Module Load Messages' from the context menu of the output window in MS Visual Studio and it will display something like:
Loaded 'C:\Windows\assembly\GAC_64\mscorlib\2.0.0.0__b77a5c561934e089\mscorlib.dll'
Loaded 'C:\projects\trunk\bin\Tester.exe', Symbols loaded.
Loaded 'C:\projects\trunk\bin\log4net.dll'
A: Get the AppDomain for your application and attach to the AssemblyLoad event.
Example (C#):
AppDomain.CurrentDomain.AssemblyLoad += new AssemblyLoadEventHandler(OnAssemblyLoad);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: What do I need for a compliant email header I am trying to send an email from a site I am building, but it ends up in the yahoo spam folder. It is the email that sends credentials. What can I do to legitimize it?
$header = "From: site <sales@site.com>\r\n";
$header .= "To: $name <$email>\r\n";
$header .= "Subject: $subject\r\n";
$header .= "Reply-To: site <sales@site.com>" . "\r\n";
$header .= "MIME-VERSION: 1.0\r\n";
$header .= 'Content-type: text/html; charset=iso-8859-1' . "\r\n";
$phpversion = phpversion();
$header .= "X-Mailer: PHP v$phpversion\r\n";
mail($email,$subject,$body,$header);
A: I just successfully tried the following from my Yahoo! Web Hosting account:
$email = "me@site.com";
$subject = "Simple test";
$body = "Simple test";
$header = "From: site <sales@site.com>\r\n";
$header .= "To: $name <$email>\r\n";
$header .= "Subject: $subject\r\n";
$header .= "Reply-To: site <sales@site.com>" . "\r\n";
$header .= "MIME-VERSION: 1.0\r\n";
$header .= 'Content-type: text/html; charset=iso-8859-1' . "\r\n";
$phpversion = phpversion();
$header .= "X-Mailer: PHP v$phpversion\r\n";
mail($email,$subject,$body,$header);
However, you have some duplication in your header; you should only need to do the following:
$email = "me@site.com";
$subject = "Simple test";
$body = "Simple test";
$header = "From: site <sales@site.com>\r\n";
$header .= "MIME-VERSION: 1.0\r\n";
$header .= 'Content-type: text/html; charset=iso-8859-1' . "\r\n";
$phpversion = phpversion();
$header .= "X-Mailer: PHP v$phpversion\r\n";
mail($email,$subject,$body,$header);
A: In addition to Ted Percival's suggestions, you could try using PHPMailer to create the emails for you rather than manually building the headers. I've used this class extensively and not had any trouble with email being rejected as spam by Yahoo, or anyone else.
A: There is also the possibility that 'sendmail' (which is underneath the PHP mail() function) needs extra parameters. If you have a problem with return headers (such as Return-Path) not being set with what you set them to be, you may need to use the fifth mail() parameter. Example:
mail('recipient@domain.com', 'Subject', $mail_body, $headers, " -f sender@domain.com");
There is some further evidence that true vanilla sendmail may have problems with this! Hopefully you have 'postfix' as PHP's underlying mail() support on your target server.
A: *
*Don't use HTML in your email.
*Send it via a legitimate mail server with a static IP and reverse-DNS (PTR) that points to the machine's real host name (and matches a forward lookup).
*Include a Message-ID (or ensure that the local mailer adds one for you).
*Run your email through SpamAssassin and see which bad-scoring rules it matches. Avoid matching them.
*Use DomainKeys Identified Mail to digitally sign your messages.
A: In addition to Ted Percival's suggestions, make sure that the IP address the email is coming from is a legitimate source for email according to the SPF record of site.com. If site.com doesn't have an SPF record, adding one (which allows the IP address in question, of course) may help get the emails past spam filters.
And if absolutely do need to use HTML in your email, make sure that you also include a plain text version as well; you'd use the content type of "multipart/alternative" instead of "text/html".
A: Ted's suggestions are good, as are Tim's, but the only way I've ever been able to reliably get email through to Yahoo/Hotmail/etc is to use the PEAR email classes. Try those & (assuming your server is OK) I can pretty much guarantee it'll work.
A: Ted and Tim have excellent suggestions. As does Shabbyrobe. We use PHPMailer and don't have any problems with spam filters.
One thing to note is that many spam filters will count NOT having a text version against you if you are using a MIME format. You could add all of the headers and the text version yourself, or just let PHPMailer or the PEAR mail library take care of that for you. Having a text version may or may not help, but it is good practice and user friendly.
I realize that your code sample is just that - a sample, but it is worth saying: Do not ever just drop user provided data into your mail headers. Make sure you validate that it is data you expect. It is trivial to turn a php mail script into an open relay, and nobody wants that.
A: Check RFC 822 and RFC 2045 for the email format. I find Python's email class really easy to work with. I assume PHP's PEAR does the same (according to earlier answers). Also, the header and the body are separated by a "\r\n\r\n"; I'm not sure if your code automatically inserts that, but you can try appending it to the header.
I don't think that DK/SPF is strictly necessary (since there are lots of webservers out there without DK/SPF support). There can be a lot of factors that might be causing it to get blocked (at least 10K different criteria and methods: p0f, greylisting, blacklisting, etc.). Make sure that your email is properly formatted (this makes a BIG difference). Look into libraries that generate the complete header for you; that way you have the least chance of making any mistake.
A: Adding a SPF record is very easy. You should try.
This one is for dreamhost plus googlemail
You should also add your webserver IP address (in my case, the line before googlemail)
The last line tells the server to do a soft reject (mark as spam but don't delete) I'm using it instead of "-" (delete) because google documentation says so :-)
It's a TXT record
v=spf1
ip4:64.111.100.0/24 ip4:66.33.201.0/24 ip4:66.33.216.0/24
ip4:208.97.132.0/24 ip4:208.97.187.0/24 ip4:208.113.200.0/24 ip4:208.113.244.0/24
ip4:208.97.132.74 ip4:67.205.36.71
include:aspmx.googlemail.com
mx ~all
Hope it helps
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Which is faster: Stack allocation or Heap allocation This question may sound fairly elementary, but this is a debate I had with another developer I work with.
I was taking care to stack allocate things where I could, instead of heap allocating them. He was talking to me and watching over my shoulder and commented that it wasn't necessary because they are the same performance wise.
I was always under the impression that growing the stack was constant time, and heap allocation's performance depended on the current complexity of the heap for both allocation (finding a hole of the proper size) and de-allocating (collapsing holes to reduce fragmentation, as many standard library implementations take time to do this during deletes if I am not mistaken).
This strikes me as something that would probably be very compiler dependent. For this project in particular I am using a Metrowerks compiler for the PPC architecture. Insight on this combination would be most helpful, but in general, for GCC, and MSVC++, what is the case? Is heap allocation not as high performing as stack allocation? Is there no difference? Or are the differences so minute it becomes pointless micro-optimization.
A: I don't think stack allocation and heap allocation are generally interchangeable. I also hope that the performance of both of them is sufficient for general use.
For small items, I'd strongly recommend whichever one is more suitable to the scope of the allocation. For large items, the heap is probably necessary.
On 32-bit operating systems that have multiple threads, stack is often rather limited (albeit typically to at least a few mb), because the address space needs to be carved up and sooner or later one thread stack will run into another. On single threaded systems (Linux glibc single threaded anyway) the limitation is much less because the stack can just grow and grow.
On 64-bit operating systems there is enough address space to make thread stacks quite large.
A: Usually stack allocation just consists of subtracting from the stack pointer register. This is tons faster than searching a heap.
Sometimes stack allocation requires adding a page(s) of virtual memory. Adding a new page of zeroed memory doesn't require reading a page from disk, so usually this is still going to be tons faster than searching a heap (especially if part of the heap was paged out too). In a rare situation, and you could construct such an example, enough space just happens to be available in part of the heap which is already in RAM, but allocating a new page for the stack has to wait for some other page to get written out to disk. In that rare situation, the heap is faster.
A: Aside from the orders-of-magnitude performance advantage over heap allocation, stack allocation is preferable for long running server applications. Even the best managed heaps eventually get so fragmented that application performance degrades.
A: Stack allocation is much faster since all it really does is move the stack pointer.
Using memory pools, you can get comparable performance out of heap allocation, but that comes with a slight added complexity and its own headaches.
Also, stack vs. heap is not only a performance consideration; it also tells you a lot about the expected lifetime of objects.
A: Probably the biggest problem of heap allocation versus stack allocation, is that heap allocation in the general case is an unbounded operation, and thus you can't use it where timing is an issue.
For other applications where timing isn't an issue, it may not matter as much, but if you heap allocate a lot, this will affect the execution speed. Always try to use the stack for short lived and often allocated memory (for instance in loops), and as long as possible - do heap allocation during application startup.
A: A stack has a limited capacity, while a heap does not. The typical stack for a process or thread is fixed when the thread is created (commonly around 1 MB, not kilobytes). You cannot change the size once it's allocated.
A stack variable follows the scoping rules, while a heap one doesn't. If your instruction pointer goes beyond a function, all the new variables associated with the function go away.
Most important of all, you can't predict the overall function call chain in advance. So a mere 200 bytes allocation on your part may raise a stack overflow. This is especially important if you're writing a library, not an application.
A: It's not just stack allocation that's faster. You also win a lot by using stack variables. They have better locality of reference. And finally, deallocation is a lot cheaper too.
A: As others have said, stack allocation is generally much faster.
However, if your objects are expensive to copy, allocating on the stack may lead to an huge performance hit later when you use the objects if you aren't careful.
For example, if you allocate something on the stack, and then put it into a container, it would have been better to allocate on the heap and store the pointer in the container (e.g. with a std::shared_ptr<>). The same thing is true if you are passing or returning objects by value, and other similar scenarios.
The point is that although stack allocation is usually better than heap allocation in many cases, sometimes if you go out of your way to stack allocate when it doesn't best fit the model of computation, it can cause more problems than it solves.
A: An interesting thing I learned about Stack vs. Heap Allocation on the Xbox 360 Xenon processor, which may also apply to other multicore systems, is that allocating on the Heap causes a Critical Section to be entered to halt all other cores so that the alloc doesn't conflict. Thus, in a tight loop, Stack Allocation was the way to go for fixed sized arrays as it prevented stalls.
This may be another speedup to consider if you're coding for multicore/multiproc, in that your stack allocation will only be viewable by the core running your scoped function, and that will not affect any other cores/CPUs.
A: I think the lifetime is crucial, and whether the thing being allocated has to be constructed in a complex way. For example, in transaction-driven modeling, you usually have to fill in and pass in a transaction structure with a bunch of fields to operation functions. Look at the OSCI SystemC TLM-2.0 standard for an example.
Allocating these on the stack close to the call to the operation tends to cause enormous overhead, as the construction is expensive. The good way there is to allocate on the heap and reuse the transaction objects either by pooling or a simple policy like "this module only needs one transaction object ever".
This is many times faster than allocating the object on each operation call.
The reason is simply that the object has an expensive construction and a fairly long useful lifetime.
I would say: try both and see what works best in your case, because it can really depend on the behavior of your code.
A: Stack allocation is a couple of instructions, whereas the fastest RTOS heap allocator known to me (TLSF) uses on average on the order of 150 instructions. Also, stack allocations don't require a lock because they use thread-local storage, which is another huge performance win. So stack allocations can be 2-3 orders of magnitude faster depending on how heavily multithreaded your environment is.
In general heap allocation is your last resort if you care about performance. A viable in-between option can be a fixed pool allocator which is also only a couple instructions and has very little per-allocation overhead so it's great for small fixed size objects. On the downside it only works with fixed size objects, is not inherently thread safe and has block fragmentation problems.
A: There's a general point to be made about such optimizations.
The optimization you get is proportional to the amount of time the program counter is actually in that code.
If you sample the program counter, you will find out where it spends its time, and that is usually in a tiny part of the code, and often in library routines you have no control over.
Only if you find it spending much time in the heap-allocation of your objects will it be noticeably faster to stack-allocate them.
A: Stack allocation will almost always be as fast or faster than heap allocation, although it is certainly possible for a heap allocator to simply use a stack based allocation technique.
However, there are larger issues when dealing with the overall performance of stack vs. heap based allocation (or in slightly better terms, local vs. external allocation). Usually, heap (external) allocation is slow because it is dealing with many different kinds of allocations and allocation patterns. Reducing the scope of the allocator you are using (making it local to the algorithm/code) will tend to increase performance without any major changes. Adding better structure to your allocation patterns, for example, forcing a LIFO ordering on allocation and deallocation pairs can also improve your allocator's performance by using the allocator in a simpler and more structured way. Or, you can use or write an allocator tuned for your particular allocation pattern; most programs allocate a few discrete sizes frequently, so a heap that is based on a lookaside buffer of a few fixed (preferably known) sizes will perform extremely well. Windows uses its low-fragmentation-heap for this very reason.
On the other hand, stack-based allocation on a 32-bit memory range is also fraught with peril if you have too many threads. Stacks need a contiguous memory range, so the more threads you have, the more virtual address space you will need for them to run without a stack overflow. This won't be a problem (for now) with 64-bit, but it can certainly wreak havoc in long running programs with lots of threads. Running out of virtual address space due to fragmentation is always a pain to deal with.
A: Remark that the considerations are typically not about speed and performance when choosing stack versus heap allocation. The stack acts like a stack, which means it is well suited for pushing blocks and popping them again, last in, first out. Execution of procedures is also stack-like, last procedure entered is first to be exited. In most programming languages, all the variables needed in a procedure will only be visible during the procedure's execution, thus they are pushed upon entering a procedure and popped off the stack upon exit or return.
Now for an example where the stack cannot be used:
Proc P
{
pointer x;
Proc S
{
pointer y;
y = allocate_some_data();
x = y;
}
}
If you allocate some memory in procedure S and put it on the stack and then exit S, the allocated data will be popped off the stack. But the variable x in P also pointed to that data, so x is now pointing to some place underneath the stack pointer (assume stack grows downwards) with an unknown content. The content might still be there if the stack pointer is just moved up without clearing the data beneath it, but if you start allocating new data on the stack, the pointer x might actually point to that new data instead.
A: #include <iostream>

class Foo {
public:
    Foo(int a) {
    }
};

int func() {
    int a1, a2;
    std::cin >> a1;
    std::cin >> a2;

    Foo f1(a1);
    // roughly:  push a1
    //           lea  ecx, [f1]
    //           call Foo::Foo(int)

    Foo* f2 = new Foo(a2);
    // roughly:  push sizeof(Foo)
    //           call operator new    ; many instructions here (system-dependent)
    //           push a2
    //           call Foo::Foo(int)

    delete f2;
    return 0;
}
In assembly it would look roughly like the above. By the time you're inside func, f1 and the pointer f2 have already been allocated on the stack (automatic storage). By the way, Foo f1(a1) has no instruction effect on the stack pointer (esp): the space was already reserved on entry to func. If func wants to access the member f1, the instructions look something like lea ecx, [ebp+f1] followed by call Foo::SomeFunc(). Another thing: stack allocation may make someone think each local variable triggers a push, but in fact the whole frame is reserved when you enter the function; if you are inside the function and allocate something like int i = 0, no push happens.
A: You can write a special heap allocator for specific sizes of objects that is very performant. However, the general heap allocator is not particularly performant.
Also I agree with Torbjörn Gyllebring about the expected lifetime of objects. Good point!
A: Stack is much faster. It literally only uses a single instruction on most architectures, in most cases, e.g. on x86:
sub esp, 0x10
(That moves the stack pointer down by 0x10 bytes and thereby "allocates" those bytes for use by a variable.)
Of course, the stack's size is very, very finite, as you will quickly find out if you overuse stack allocation or try to do recursion :-)
Also, there's little reason to optimize the performance of code that doesn't verifiably need it, such as demonstrated by profiling. "Premature optimization" often causes more problems than it's worth.
My rule of thumb: if I know I'm going to need some data at compile-time, and it's under a few hundred bytes in size, I stack-allocate it. Otherwise I heap-allocate it.
A: Concerns Specific to the C++ Language
First of all, there is no so-called "stack" or "heap" allocation mandated by C++. If you are talking about automatic objects in block scopes, they are not even "allocated". (BTW, automatic storage duration in C is definitely NOT the same as "allocated"; the latter is "dynamic" in the C++ parlance.) The dynamically allocated memory is on the free store, not necessarily on "the heap", though the latter is often the (default) implementation.
Although as per the abstract machine semantic rules, automatic objects still occupy memory, a conforming C++ implementation is allowed to ignore this fact when it can prove this does not matter (when it does not change the observable behavior of the program). This permission is granted by the as-if rule in ISO C++, which is also the general clause enabling the usual optimizations (and there is also an almost same rule in ISO C). Besides the as-if rule, ISO C++ also has copy elision rules to allow omission of specific creations of objects. The constructor and destructor calls involved are thereby omitted. As a result, the automatic objects (if any) in these constructors and destructors are also eliminated, compared to naive abstract semantics implied by the source code.
On the other hand, free store allocation is definitely "allocation" by design. Under ISO C++ rules, such an allocation can be achieved by a call of an allocation function. However, since ISO C++14, there is a new (non-as-if) rule to allow merging global allocation function (i.e. ::operator new) calls in specific cases. So parts of dynamic allocation operations can also be no-op like the case of automatic objects.
Allocation functions allocate resources of memory. Objects can be further allocated based on allocation using allocators. For automatic objects, they are directly presented - although the underlying memory can be accessed and be used to provide memory to other objects (by placement new), but this does not make great sense as the free store, because there is no way to move the resources elsewhere.
All other concerns are out of the scope of C++. Nevertheless, they can be still significant.
About Implementations of C++
C++ does not expose reified activation records or any sort of first-class continuation (e.g. the famous call/cc); there is no way to directly manipulate the activation record frames - where the implementation needs to place the automatic objects. As long as there are no (non-portable) interoperations with the underlying implementation ("native" non-portable code, such as inline assembly code), omitting the underlying allocation of the frames can be quite trivial. For example, when the called function is inlined, the frames can be effectively merged into others, so there is no way to show what the "allocation" is.
However, once interop is respected, things get complex. A typical implementation of C++ exposes the ability to interoperate at the ISA (instruction-set architecture) level, with some calling convention serving as the binary boundary shared with the native (ISA-level machine) code. This can be measurably costly, notably when maintaining the stack pointer, which is often held directly in an ISA-level register (possibly with specific machine instructions to access it). The stack pointer indicates the boundary of the top frame of the (currently active) function call. When a function call is entered, a new frame is needed, and the stack pointer is incremented or decremented (depending on the convention of the ISA) by a value not less than the required frame size. The frame is then said to be allocated once the stack pointer has been adjusted. Parameters of functions may be passed on the stack frame as well, depending on the calling convention used for the call. The frame can hold the memory of the automatic objects (possibly including the parameters) specified by the C++ source code. In the sense of such implementations, these objects are "allocated". When control exits the function call, the frame is no longer needed; it is usually released by restoring the stack pointer to its state before the call (saved previously according to the calling convention). This can be viewed as "deallocation". These operations make the activation records effectively a LIFO data structure, so the whole thing is often called "the (call) stack". The stack pointer effectively indicates the top of the stack.
Because most C++ implementations (particularly the ones targeting ISA-level native code and using the assembly language as its immediate output) use similar strategies like this, such a confusing "allocation" scheme is popular. Such allocations (as well as deallocations) do spend machine cycles, and it can be expensive when the (non-optimized) calls occur frequently, even though modern CPU microarchitectures can have complex optimizations implemented by hardware for the common code pattern (like using a stack engine in implementing PUSH/POP instructions).
But anyway, in general, it is true that the cost of stack frame allocation is significantly less than that of a call to an allocation function operating on the free store (unless it is totally optimized away), which itself can involve hundreds of (if not millions of :-) operations to maintain the stack pointer and other state. Allocation functions are typically based on an API provided by the hosted environment (e.g. a runtime provided by the OS). Unlike the purpose of holding automatic objects for function calls, such allocations are general-purpose, so they will not have a frame structure like a stack. Traditionally, they allocate space from the pool of storage called the heap (or several heaps). Different from the "stack", the concept "heap" here does not indicate the data structure being used; it is derived from early language implementations decades ago. (BTW, the call stack is usually allocated with a fixed or user-specified size from the heap by the environment at program/thread startup.) The nature of the use cases makes allocations and deallocations from a heap far more complicated (than the pushing/popping of stack frames), and hardly possible to optimize directly in hardware.
Effects on Memory Access
The usual stack allocation always puts the new frame on the top, so it has a quite good locality. This is friendly to the cache. OTOH, memory allocated randomly in the free store has no such property. Since ISO C++17, there are pool resource templates provided by <memory_resource>. The direct purpose of such an interface is to allow the results of consecutive allocations being close together in memory. This acknowledges the fact that this strategy is generally good for performance with contemporary implementations, e.g. being friendly to cache in modern architectures. This is about the performance of access rather than allocation, though.
Concurrency
Expectation of concurrent access to memory can have different effects between the stack and heaps. A call stack is usually exclusively owned by one thread of execution in a typical C++ implementation. OTOH, heaps are often shared among the threads in a process. For such heaps, the allocation and deallocation functions have to protect the shared internal administrative data structure from the data race. As a result, heap allocations and deallocations may have additional overhead due to internal synchronization operations.
Space Efficiency
Due to the nature of the use cases and internal data structures, heaps may suffer from internal memory fragmentation, while the stack does not. This does not have direct impacts on the performance of memory allocation, but in a system with virtual memory, low space efficiency may degenerate overall performance of memory access. This is particularly awful when HDD is used as a swap of physical memory. It can cause quite long latency - sometimes billions of cycles.
Limitations of Stack Allocations
Although stack allocations are often superior in performance than heap allocations in reality, it certainly does not mean stack allocations can always replace heap allocations.
First, there is no portable way in ISO C++ to allocate space on the stack with a size specified at runtime. There are extensions provided by implementations like alloca and G++'s VLA (variable-length array), but there are reasons to avoid them. (IIRC, the Linux kernel source recently removed its uses of VLAs.) (Also note ISO C99 did mandate VLA support, but ISO C11 made it optional.)
Second, there is no reliable and portable way to detect stack space exhaustion. This is often called stack overflow (hmm, the etymology of this site), but probably more accurately, stack overrun. In reality, this often causes invalid memory access, and the state of the program is then corrupted (... or maybe worse, a security hole). In fact, ISO C++ has no concept of "the stack" and makes it undefined behavior when the resource is exhausted. Be cautious about how much room should be left for automatic objects.
If the stack space runs out, there are too many objects allocated in the stack, which can be caused by too many active calls of functions or improper use of automatic objects. Such cases may suggest the existence of bugs, e.g. a recursive function call without correct exit conditions.
Nevertheless, deep recursive calls are sometimes desired. In implementations of languages requiring support of unbounded active calls (where the call depth is limited only by total memory), it is impossible to use the (contemporary) native call stack directly as the target language activation record like typical C++ implementations do. To work around the problem, alternative ways of constructing activation records are needed. For example, SML/NJ explicitly allocates frames on the heap and uses cactus stacks. The complicated allocation of such activation record frames is usually not as fast as call stack frames. However, if such languages are implemented further with the guarantee of proper tail recursion, the direct stack allocation in the object language (that is, the "object" in the language is not stored as references, but as native primitive values which can be one-to-one mapped to unshared C++ objects) is even more complicated, with more performance penalty in general. When using C++ to implement such languages, it is difficult to estimate the performance impacts.
A: Honestly, it's trivial to write a program to compare the performance:
#include <ctime>
#include <iostream>
namespace {
class empty { }; // even empty classes take up 1 byte of space, minimum
}
int main()
{
std::clock_t start = std::clock();
for (int i = 0; i < 100000; ++i)
empty e;
std::clock_t duration = std::clock() - start;
std::cout << "stack allocation took " << duration << " clock ticks\n";
start = std::clock();
for (int i = 0; i < 100000; ++i) {
empty* e = new empty;
delete e;
};
duration = std::clock() - start;
std::cout << "heap allocation took " << duration << " clock ticks\n";
}
It's said that a foolish consistency is the hobgoblin of little minds. Apparently optimizing compilers are the hobgoblins of many programmers' minds. This discussion used to be at the bottom of the answer, but people apparently can't be bothered to read that far, so I'm moving it up here to avoid getting questions that I've already answered.
An optimizing compiler may notice that this code does nothing, and may optimize it all away. It is the optimizer's job to do stuff like that, and fighting the optimizer is a fool's errand.
I would recommend compiling this code with optimization turned off because there is no good way to fool every optimizer currently in use or that will be in use in the future.
Anybody who turns the optimizer on and then complains about fighting it should be subject to public ridicule.
If I cared about nanosecond precision I wouldn't use std::clock(). If I wanted to publish the results as a doctoral thesis I would make a bigger deal about this, and I would probably compare GCC, Tendra/Ten15, LLVM, Watcom, Borland, Visual C++, Digital Mars, ICC and other compilers. As it is, heap allocation takes hundreds of times longer than stack allocation, and I don't see anything useful about investigating the question any further.
The optimizer has a mission to get rid of the code I'm testing. I don't see any reason to tell the optimizer to run and then try to fool the optimizer into not actually optimizing. But if I saw value in doing that, I would do one or more of the following:
*
*Add a data member to empty, and access that data member in the loop; but if I only ever read from the data member the optimizer can do constant folding and remove the loop; if I only ever write to the data member, the optimizer may skip all but the very last iteration of the loop. Additionally, the question wasn't "stack allocation and data access vs. heap allocation and data access."
*Declare e volatile, but volatile is often compiled incorrectly (PDF).
*Take the address of e inside the loop (and maybe assign it to a variable that is declared extern and defined in another file). But even in this case, the compiler may notice that -- on the stack at least -- e will always be allocated at the same memory address, and then do constant folding like in (1) above. I get all iterations of the loop, but the object is never actually allocated.
Beyond the obvious, this test is flawed in that it measures both allocation and deallocation, and the original question didn't ask about deallocation. Of course variables allocated on the stack are automatically deallocated at the end of their scope, so not calling delete would (1) skew the numbers (stack deallocation is included in the numbers about stack allocation, so it's only fair to measure heap deallocation) and (2) cause a pretty bad memory leak, unless we keep a reference to the new pointer and call delete after we've got our time measurement.
On my machine, using g++ 3.4.4 on Windows, I get "0 clock ticks" for both stack and heap allocation for anything less than 100000 allocations, and even then I get "0 clock ticks" for stack allocation and "15 clock ticks" for heap allocation. When I measure 10,000,000 allocations, stack allocation takes 31 clock ticks and heap allocation takes 1562 clock ticks.
Yes, an optimizing compiler may elide creating the empty objects. If I understand correctly, it may even elide the whole first loop. When I bumped up the iterations to 10,000,000 stack allocation took 31 clock ticks and heap allocation took 1562 clock ticks. I think it's safe to say that without telling g++ to optimize the executable, g++ did not elide the constructors.
In the years since I wrote this, the preference on Stack Overflow has been to post performance from optimized builds. In general, I think this is correct. However, I still think it's silly to ask the compiler to optimize code when you in fact do not want that code optimized. It strikes me as being very similar to paying extra for valet parking, but refusing to hand over the keys. In this particular case, I don't want the optimizer running.
Using a slightly modified version of the benchmark (to address the valid point that the original program didn't allocate something on the stack each time through the loop) and compiling without optimizations but linking to release libraries (to address the valid point that we don't want to include any slowdown caused by linking to debug libraries):
#include <cstdio>
#include <chrono>
namespace {
void on_stack()
{
int i;
}
void on_heap()
{
int* i = new int;
delete i;
}
}
int main()
{
auto begin = std::chrono::system_clock::now();
for (int i = 0; i < 1000000000; ++i)
on_stack();
auto end = std::chrono::system_clock::now();
std::printf("on_stack took %f seconds\n", std::chrono::duration<double>(end - begin).count());
begin = std::chrono::system_clock::now();
for (int i = 0; i < 1000000000; ++i)
on_heap();
end = std::chrono::system_clock::now();
std::printf("on_heap took %f seconds\n", std::chrono::duration<double>(end - begin).count());
return 0;
}
displays:
on_stack took 2.070003 seconds
on_heap took 57.980081 seconds
on my system when compiled with the command line cl foo.cc /Od /MT /EHsc.
You may not agree with my approach to getting a non-optimized build. That's fine: feel free modify the benchmark as much as you want. When I turn on optimization, I get:
on_stack took 0.000000 seconds
on_heap took 51.608723 seconds
Not because stack allocation is actually instantaneous but because any half-decent compiler can notice that on_stack doesn't do anything useful and can be optimized away. GCC on my Linux laptop also notices that on_heap doesn't do anything useful, and optimizes it away as well:
on_stack took 0.000003 seconds
on_heap took 0.000002 seconds
A: It has been mentioned before that stack allocation is simply moving the stack pointer, that is, a single instruction on most architectures. Compare that to what generally happens in the case of heap allocation.
The heap allocator (usually part of the language runtime, built on memory obtained from the operating system) maintains portions of free memory as a linked list, with the payload data consisting of the pointer to the starting address of the free portion and the size of the free portion. To allocate X bytes of memory, the linked list is traversed and each node is visited in sequence, checking to see if its size is at least X. When a portion with size P >= X is found, P is split into two parts with sizes X and P-X. The linked list is updated and the pointer to the first part is returned.
As you can see, heap allocation depends on many factors, like how much memory you are requesting, how fragmented the memory is, and so on.
A: In general, stack allocation is faster than heap allocation as mentioned by almost every answer above. A stack push or pop is O(1), whereas allocating or freeing from a heap could require a walk of previous allocations. However you shouldn't usually be allocating in tight, performance-intensive loops, so the choice will usually come down to other factors.
It might be good to make this distinction: you can use a "stack allocator" on the heap. Strictly speaking, I take stack allocation to mean the actual method of allocation rather than the location of the allocation. If you're allocating a lot of stuff on the actual program stack, that could be bad for a variety of reasons. On the other hand, using a stack method to allocate on the heap when possible is the best choice you can make for an allocation method.
Since you mentioned Metrowerks and PPC, I'm guessing you mean Wii. In this case memory is at a premium, and using a stack allocation method wherever possible guarantees that you don't waste memory on fragments. Of course, doing this requires a lot more care than "normal" heap allocation methods. It's wise to evaluate the tradeoffs for each situation.
A: Never make premature assumptions, as other application code and usage can impact your function. So looking at a function in isolation is of no use.
If you are serious with application then VTune it or use any similar profiling tool and look at hotspots.
Ketan
A: Naturally, stack allocation is faster. With heap allocation, the allocator has to find the free memory somewhere. With stack allocation, the compiler does it for you by simply giving your function a larger stack frame, which means the allocation costs no time at all. (I'm assuming you're not using alloca or anything to allocate a dynamic amount of stack space, but even then it's very fast.)
However, you do have to be wary of hidden dynamic allocation. For example:
void some_func()
{
std::vector<int> my_vector(0x1000);
// Do stuff with the vector...
}
You might think this allocates 4 KiB on the stack, but you'd be wrong. It allocates the vector instance on the stack, but that vector instance in turn allocates its 4 KiB on the heap, because vector always allocates its internal array on the heap (at least unless you specify a custom allocator, which I won't get into here). If you want to allocate on the stack using an STL-like container, you probably want std::array, or possibly boost::static_vector (provided by the external Boost library).
A: I'd like to point out that code generated by GCC (and, as I recall, VS as well) has no per-variable overhead for stack allocation.
Take the following function:
int f(int i)
{
if (i > 0)
{
int array[1000];
}
}
The following code is generated:
__Z1fi:
Leh_func_begin1:
pushq %rbp
Ltmp0:
movq %rsp, %rbp
Ltmp1:
subq $3880, %rsp <--- the array's space is reserved here, even though the if branch may never execute
Ltmp2:
movl %edi, -4(%rbp)
movl -8(%rbp), %eax
addq $3880, %rsp
popq %rbp
ret
Leh_func_end1:
So no matter how many local variables you have (even inside an if or a switch), only the 3880 changes to another value. Unless you have no local variables at all, this instruction executes anyway. So allocating a local variable adds no per-variable overhead.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "540"
} |
Q: CSS editor which expands one-line declarations on editing Is there a CSS editor which automatically expands one-line declarations as multi-line declarations on focus ? To clarify my thought, see example below:
Original CSS:
div#main { color: orange; margin: 1em 0; border: 1px solid black; }
But when focusing on it, editor automatically expands it to:
div#main {
color: orange;
margin: 1em 0;
border: 1px solid black;
}
And when it loses focus, the editor again automatically compresses it to a one-line declaration.
Thanks.
A: If you are using Visual Studio you should be able to do a close approximation of this:
*
*You can change how CSS is formatted
via the Tools -> Options menu.
*Check 'Show all settings' if it is unchecked.
*Go to Text Editor -> CSS -> Format and pick the semi-expanded option
*Ok you changes.
*Then ctrl+A, ctrl+K, ctrl+D should re-format your document
*When you are finished editing just go back to the options and pick the compact CSS format then ctrl+A,ctrl+K,ctrl+D to re-format again.
Hope this helps.
A: I've not heard of one. If you're on a Mac I can definitely recommend CSSEdit. It does auto-formatting very nicely, amongst other things.
EDIT: I originally said "though as the comment says it's a great idea" but, thinking about it, is that what you really want? I can see that it would be good to have expansion/contraction onClick (in which case TextMate - again Mac - though its CSS support isn't as good as CSSEdit's), but onFocus?
A: Sorry. I don't know of any IDEs that explicitly do that.
But, there are quite a few external options:
*
*CSSTidy (download)
*Clean CSS (in-browser)
*CSS Optimiser (in-browser)
*others... (Google Search)
A: da5id, I actually don't care about implementation details (onclick or onhover, though onclick seems better when you say it ;), I'm just curious if there are any editors which supports this kind of feature in any way.
PS. I'm not on Mac but Windows.
A: Its not exactly what you want but try the windows port of textmate E Text Editor, for on click folding of css rules, auto formating and most other textmate functionality.
A: You can do that with the scripting language of your favorite editor.
For example in SciTE:
function ExpandContractCSS()
local ext = string.lower(props["FileExt"])
if ext ~= "css" then return end
local line = GetCurrentLine()
local newForm
if string.find(line, "}") then
-- On one line
newForm = string.gsub(line, "; *", ";\r\n ")
newForm = string.gsub(newForm, "{ *", "{\r\n ")
newForm = string.gsub(newForm, " *}", "}")
else
-- To contract
-- Well, just use Ctrl+Z!
-- Maybe not, code to come if interest
end
if newForm ~= nil then
ReplaceCurrentLine(newForm)
end
end
GetCurrentLine and ReplaceCurrentLine are just convenience functions from my collection, I can give them (and do the contraction part) if you are interested.
A: It's a good question. I'd love to see this in a CSS editor. TopStyle does this, but it isn't automatic; you have to use a hotkey.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: autoconf using sh, I need SHELL=BASH, how do I force autoconf to use bash? I'm running autoconf and configure sets SHELL to '/bin/sh'.
This creates huge problems. How to force SHELL to be '/bin/bash' for autoconf?
I'm trying to get this running on osx, it's working on linux. Linux is using SHELL=/bin/bash. osx defaults to /bin/sh.
A: What are the "huge problems"? autoconf works very hard to generate a configure script that works with a very large percentage of shells. If you have an example of a construct that autoconf is writing that is not portable, please report it to the autoconf mailing list. On the other hand, if the problems you are experiencing are a result of your own shell code in configure.ac not being portable (eg, you're using bashisms) then the solution is to either stop using non-portable code or require the user to explicitly set SHELL or CONFIG_SHELL at configure time.
It sounds like the problem you are experiencing is in the environment of the user running configure. On Linux, your user has SHELL set to /bin/bash, but on OS X it is set to /bin/sh. The configure script generated by autoconf does some initial tests of the shell it is running under and does attempt to re-exec itself using a different shell if the provided shell lacks certain features. However, if you are introducing non-portable shell code in configure.ac, then you are violating one of the main philosophies of autoconf -- namely that configure scripts should be portable. If you truly want to use bashisms in your shell code, then you are requiring your user to pass SHELL=/bin/bash as an argument to the configure script. This is not a bug in autoconf, but would be considered by many to be a bug in your project's build.
A: Autoconf is supposed to solve portability problems by generating a script which can run "anywhere". That's why it generates bizarre code like:
if test X$foo = X ; then ... # check if foo is empty
rather than:
if [ "$foo" = "" ] ; then ...
That kind of crufty code probably once allowed these scripts to run on some ancient Ultrix system or whatever.
A configure script not running because of shell differences is like coming to a Formula-1 race with 10 liters of gas and three spare tires.
If you're developing a configure script with Autoconf, and it is sensitive to whether the shell is Bash or the OSX shell, you are doing something wrong, or the Autoconf people broke something. If it's from you, fix whatever shell pieces you are adding to the script by making them portable.
A: I have similar problems on Solaris with GCC - and I use the 'standard' technique:
CONFIG_SHELL=/bin/bash ./configure ...
(Or, actually, I use /bin/ksh, but setting the CONFIG_SHELL env var allows you to tell autoconf scripts which shell to use.)
I checked the configure script for git and gd (they happened to be extracted) to check that this wasn't a GCC peculiar env var.
A: Where is SHELL being set to that? What is being run with /bin/sh when you want /bin/bash?
configure scripts are meant to run anywhere, even on the horribly broken and non-Bash shells that exist in the wild.
Edit: What exactly is the problem?
Another edit: Perhaps you'd like the script to re-execute itself, something like this. It's probably buggy:
if test "$SHELL" = "/bin/sh" && test -x /bin/bash; then
exec /bin/bash "$0" "$@"
fi
A: ln -f /bin/bash /bin/sh
:-P (No, it's not a serious answer. Please don't hose your system by doing it!)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Should I upgrade to Windows Server & Exchange 2008? Currently running Server 2003 but am looking at reinstalling in the near future due to a change of direction with the domains. Should I take this opportunity to install Windows Server 2008 instead?
I would love to play with new technology and the server is only for a small home business so downtime/performance issues aren't really a concern.
A: I am no expert on Windows server revisions, but the only new feature of Server 2008 I can think of is Hyper-V. I would try Server 2008 just for Hyper-V, as this VM hypervisor is supposedly much faster than VMware and Virtual PC, and is compatible with Virtual PC virtual disks.
A: One rule that has served me very well over the years is: Do not upgrade infrastructure components just for the sake of upgrading. If it works well, leave it be. You mentioned that some downtime isn't a big deal, but if the server is actually used then there is a chance it can become a big deal unexpectedly. Why not simply get (or build) a new machine and play with the new operating system there? That way you get the best of both worlds.
A: There is no Exchange Server 2008. Exchange has always been tightly integrated with IIS which tends to bind it to a specific version of Windows. However, Exchange Server 2007 SP1 can be installed on Windows Server 2008.
Exchange Server 2003, however, cannot run on Windows Server 2008 and I do not believe there are any plans to do so in a future service pack.
Note that Exchange Server 2007 requires x64 architecture, running the 64-bit OS, on a production system. The days of booting /3GB are past - it simply does not provide enough virtual address space for current large databases. Exchange's long-running virtual memory fragmentation problem has not been fixed, it has just been given more virtual address space to work in.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What would be the best way to parse this file? I was just wondering if anyone knew of a good way that I could parse the file at the bottom of the post.
I have a database set up with the correct tables for each section, e.g. Refferal Table, Caller Table, Location Table. Each table has the same columns that are shown in the file below.
I would really like something that is fairly generic so if the file layout changes it won't mess me around too much. At the moment I am just reading the file a line at a time and using a case statement to check which section I'm in.
Is anyone able to help me with this?
PS. I am using VB but C# or anything else will be fine, also the x's in the document are just personal info I have blanked
Thanks,
Nathan
File:--->
DIAL BEFORE YOU DIG
Call 1100, Fax 1300 652 077
PO Box 7710 MELBOURNE, VIC 8004
Utilities are requested to respond within 2 working days and reference the Sequence number.
[REFFERAL DETAILS]
FROM= Dial Before You Dig - Web
TO= Technical Services
UTILITY ID= xxxxxx
COMPANY= {Company Name}
ENQUIRY DATE= 02/10/2008 13:53
COMMENCEMENT DATE= 06/10/2008
SEQUENCE NO= xxxxxxxxx
PLANNING= No
[CALLER DETAILS]
CUSTOMER ID= 403552
CONTACT NAME= {Name of Contact}
CONTACT HOURS= 0
COMPANY= Underground Utility Locating
ADDRESS= {Address}
SUBURB= {Suburb}
STATE= {State}
POSTCODE= 4350
TELEPHONE= xxxxxxxxxx
MOBILE= xxxxxxxxxx
FAX TYPE= Private
FAX NUMBER= xxxxxxxxxx
PUBLIC ADDRESS= xxxxxxxxxx
PUBLIC TELEPHONE=
EMAIL ADDRESS= {Email Address}
[LOCATION DETAILS]
ADDRESS= {Location Address}
SUBURB= {Location Suburb}
STATE= xxx
POSTCODE= xxx
DEPOSITED PLAN NO= 0
SECTION & HUNDRED NO= 0
PROPERTY PHONE NO=
SIDE OF STREET= B
INTERSECTION= xxxxxx
DISTANCE= 0-200m B
ACTIVITY CODE= 15
ACTIVITY DESCRIPTION= xxxxxxxxxxxxxxxxxx
MAP TYPE= StateGrid
MAP REF= Q851_63
MAP PAGE=
MAP GRID 1=
MAP GRID 2=
MAP GRID 3=
MAP GRID 4=
MAP GRID 5=
GPS X COORD=
GPS Y COORD=
PRIVATE/ROAD/BOTH= B
TRAFFIC AFFECTED= No
NOTIFICATION NO= 3082321
MESSAGE= entire intersection of Allora-Clifton rd , Hillside
rd and merivale st
MOCSMESSAGE= Digsafe generated referral
Notice: Please DO NOT REPLY TO THIS EMAIL as it has been automatically generated and replies are not monitored. Should you wish to advise Dial Before You Dig of any issues with this enquiry, please Call 1100
(See attached file: 3082321_LLGDA94.GML)
A: Google has the answers, once you know that the file-format is called '.ini'
Edit: That is, it's an .ini plus some extra leading/trailing gunk.
A: You could read each line of the file sequentially. Each line is essentially a name value pair. Place each value in a map (hash table) keyed by name. Use a map for each section. When done parsing the file you'll have maps containing all the name value pairs. Iterate over each map and populate your database tables.
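A minimal Python sketch of this map-based approach (the helper name and the abbreviated sample are illustrative, not from the original post): each [SECTION] header starts a new map, and each NAME= VALUE line goes into the current one.

```python
import re

def parse_sections(text):
    """Parse [SECTION]-headed NAME= VALUE lines into a dict of dicts."""
    sections = {}
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        header = re.match(r"\[(.+)\]$", line)
        if header:
            # A new section starts; header/footer lines before the
            # first section are simply ignored.
            current = sections.setdefault(header.group(1), {})
        elif "=" in line and current is not None:
            name, _, value = line.partition("=")
            current[name.strip()] = value.strip()
    return sections

sample = """\
[REFFERAL DETAILS]
UTILITY ID= xxxxxx
SEQUENCE NO= xxxxxxxxx
[CALLER DETAILS]
CUSTOMER ID= 403552
POSTCODE= 4350
"""
data = parse_sections(sample)
```

Once the maps are populated, iterating over each one to build the corresponding INSERT statements is straightforward, and a layout change only adds or removes keys rather than breaking the parser.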
A: I would head to Python for any type of string parsing like this. I'm not sure how much of this information you want to retain, but I would perhaps use Python's split() function to split on = to get rid of the equals sign, then strip the whitespace out of the second piece of the pie.
First, I would mask out the header/footer info I know I don't need, then do something akin to the following:
Let's take a chunk and save it in test1.txt:
ADDRESS= {Location Address}
SUBURB= {Location Suburb}
STATE= xxx
POSTCODE= xxx
DEPOSITED PLAN NO= 0
SECTION & HUNDRED NO= 0
PROPERTY PHONE NO=
Here's a small python snippet:
>>> f = open("test1.txt", "r")
>>> l = f.readlines()
>>> l = [line.split('=') for line in l]
>>> for line in l:
print line
['ADDRESS', '{Location Address}']
['SUBURB', '{Location Suburb}']
['STATE', 'xxx']
['POSTCODE', 'xxx']
['DEPOSITED PLAN NO', '0']
['SECTION & HUNDRED NO', '0']
['PROPERTY PHONE NO', '']
This would essentially give you a [Column, Value] tuple you could use to insert the data into your database (after escaping all strings, etc etc, SQL Injection warning).
This is assuming the email input and your DB will have the same column names, but if they didn't, it'd be fairly trivial to set up a column mapping using a dictionary. On the flip side, if the email and columns are in sync, you don't need to know the names of the columns to get the parsing down.
You could iterate through the pseudo-dictionary and print out each key-value pair in the right spot in your parameterized sql string.
Hope this helps!
Edit: While this is in Python, C#/VB.net should have the same/similar abilities.
A: Using f As StreamReader = File.OpenText("sample.txt")
Dim g As String = "undefined"
Do
Dim s As String = f.ReadLine
If s Is Nothing Then Exit Do
s = s.Replace(Chr(9), " ")
If s.StartsWith("[") And s.EndsWith("]") Then
g = s.Substring("[".Length, s.Length - "[]".Length)
Else
Dim ss() As String = s.Split(New Char() {"="c}, 2)
If ss.Length = 2 Then
Console.WriteLine("{0}.{1}={2}", g, Trim(ss(0)), Trim(ss(1)))
End If
End If
Loop
End Using
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Prevent Enter key from firing button click event in vb6 I have a form with a progress bar and a cancel button which is displayed as a process runs. The button's "Cancel" property is set to true so pressing escape cancels the process.
But, as the button is the only control on the form capable of taking the focus, should the user inadvertently press enter (or space bar) while the process is running it will be cancelled.
I have prevented the Space Bar from working by setting KeyPreview to true (on the form) then setting KeyAscii to 0, but this approach doesn't seem to work for the enter key as the button click event fires first.
I've tried setting the button's TabStop property to "false" - no change.
A: In my opinion, the Enter key should activate the cancel button. Or are you requiring the user to reach out for the mouse? why?
I suggest adding just a confirmation dialog after the user cancels the operation, so if anyone accidentally presses the Enter key they have the chance to resume by saying 'no, I don't want to cancel'.
But as a user I would be annoyed if the Cancel button has the focus and I can't activate it pressing the Enter key on my keyboard.
My 2 cents
A: Add a default button with size 1x1, no caption, no border, etc. Make a handler for it that does nothing. The Escape key will still do a cancel as it does now.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to integrate a SpringSource dm Server into another OSGi-based application server? I would really like to use SpringSource dm Server, but our customer requires us to run our apps on their application server (Websphere). Is there a way to integrate SpringSource dm Server with other application servers? At least dm Server is build on OSGi, and many other application servers (including Websphere) are based on OSGi as well. Is it possible to run a SpringSource dm Server as a websphere component?
A: Spring DM is deployed on a Knopflerfish OSGi implementation.
Websphere is deployed on an Equinox OSGi implementation.
So the question becomes - are the two interchangeable? They both support R4, so I would say, yes, they are.
The next question would be to check dependencies, particularly with respect to things like HttpServices.
I would say this would be ok, but I think the final proof would be try deploying it. Easiest would be to drop the bundles into a Websphere deployment. You'll need your bundles and whatever spring bundles you're using.
A: SpringSource dm Server is based on the Eclipse Equinox OSGi framework (and should not be confused with the Spring DM technology, included in dm Server, which can run on Equinox, Apache Felix, and Knopflerfish).
However, embedding dm Server in another application server, such as WebSphere Application Server, based on Equinox would be a non-trivial piece of work. It would be necessary to get both products to use the same version of Equinox, which they currently do not, then modify dm Server to support embedding in the server (e.g. to integrate with the host server's application invocation mechanism, thread pools, and class loading scheme).
If you think this support is important, please raise a requirement (which requires a simple registration) against dm Server.
A: I do not think that this is really the case ...
see the following link for this: http://apsblog.burtongroup.com/2008/11/websphere-7-osgi.html
But it seems, on the other hand, that the trend is clear: there will be a time when OSGi-based applications can be deployed on Java EE application servers.
A: I'm also interested in this topic. Another way of looking at this problem is that you want an application deployable in both Spring dm server and a traditional app server (Websphere, WebLogic, JBoss, ...).
The OSGi containers are embeddable inside non-OSGi applications, so it is theoretically possible to deploy an app to both Spring dm server and the same app + OSGi container to a traditional app server.
Now, as usual, the devil's in the details, including such topics as web development and bridging servlets between the outer app server and the OSGi container.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Do this with a single SQL I have a table that looks like this:
The rows are sorted by CLNDR_DATE DESC.
I need to find a CLNDR_DATE that corresponds to the highlighted row, in other words:
Find the topmost group of rows WHERE EFFECTIVE_DATE IS NOT NULL,
and return the CLNDR_DATE of the last row of that group.
Normally I would open a cursor and cycle from top to bottom until I find a NULL in EFFECTIVE_DATE. Then I would know that the date I am looking for is CLNDR_DATE, obtained at the previous step.
However, I wonder if the same can be achieved with a single SQL?
A: Warning: Not a DBA by any means. ;)
But, a quick, untested stab at it:
SELECT min(CLNDR_DATE) FROM [TABLE]
WHERE (EFFECTIVE_DATE IS NOT NULL)
AND (CLNDR_DATE > (
SELECT max(CLNDR_DATE) FROM [TABLE] WHERE EFFECTIVE_DATE IS NULL
))
Assuming you want the first CLNDR_DATE with EFFECTIVE_DATE after the last without.
If you want the first with after the first without, change the subquery to use min() instead of max().
A: Using Oracle's analytic function (untested)
select *
from
(
select
clndr_date,
effective_date,
lag(clndr_date, 1, null) over (order by clndr_date desc) prev_clndr_date
from table
)
where effective_date is null
The lag(clndr_date, 1, null) over (order by clndr_date desc) returns the previous clndr_date, or use null if this is the first row.
(edit: fixed order)
A: The first result from this recordset is what you're looking for. Depending on your database, you may be able to return only this row by using LIMIT or TOP.
SELECT CLNDR_DATE
FROM TABLE
WHERE CLNDR_DATE > (SELECT MAX(CLNDR_DATE)
FROM TABLE
WHERE EFFECTIVE_DATE IS NOT NULL)
ORDER BY CLNDR_DATE
A: When you are in an Oracle environment you can use analytic functions (http://www.orafaq.com/node/55), which are very powerful tools for the kind of queries you are asking for.
Using standard SQL I think it is impossible, but maybe some SQL gurus will show up some nice solution.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Visiting the points in a triangle in a random order For a right triangle specified by an equation aX + bY <= c on integers
I want to plot each pixel(*) in the triangle once and only once, in a pseudo-random order, and without storing a list of previously hit points.
I know how to do this with a line segment between 0 and x
pick a random point'o' along the line,
pick 'p' that is relatively prime to x
repeat for up to x times: Onext = (Ocur + P) MOD x
To do this for a triangle, I would
1. Need to count the number of pixels in the triangle sans lists
2. Map an integer 0..points into a x,y pair that is a valid pixel inside the triangle
I hope any solution could be generalized to pyramids and higher dimensional shapes.
(*) I use the CG term pixel for the pair of integer points X,Y such that the equation is satisfied.
A: Since you want to guarantee visiting each pixel once and only once, it's probably better to think in terms of pixels rather than the real triangles.
You can slice the triangles horizontally and get bunch of horizontal scan lines. Connect the scan lines together and you have converted your "triangle" into a long line. Apply your point visiting algorithm to your long chain of scan lines.
By the way, this mapping only needs to happen on paper, all you need is a function that can return (x, y) given (t) along the virtual scan line.
Edit:
To convert two points to a line segment, you can look for Bresenham's scan conversion. Once you get the 3 line segments converted into series of points, you can put all points into a bucket and group all points by y. Within the same y-value, sort points by x. The smallest x within a y-value is the begin point of the scan line and the largest x within the y-value is the end point of the scan line. This is called "scan converting triangle". You can find more info if you Google.
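A rough Python sketch combining this scan-line idea with the coprime-step trick from the question (for clarity it materializes the pixel list, whereas in practice you would compute (x, y) from t arithmetically; the triangle and all names are illustrative):

```python
from math import gcd

def triangle_pixels(a, b, c, width, height):
    # Scan-convert the region aX + bY <= c row by row; concatenating
    # the rows turns the triangle into one long "virtual scan line".
    pixels = []
    for y in range(height):
        for x in range(width):
            if a * x + b * y <= c:
                pixels.append((x, y))
    return pixels

def visit_pseudo_randomly(pixels, start, step):
    # A step coprime to the length gives a full cycle:
    # every index is visited exactly once.
    n = len(pixels)
    assert gcd(step, n) == 1
    i = start % n
    for _ in range(n):
        yield pixels[i]
        i = (i + step) % n

pixels = triangle_pixels(1, 1, 8, 10, 10)   # x + y <= 8 on a 10x10 grid
visited = list(visit_pseudo_randomly(pixels, start=3, step=7))
```

Each pixel comes out exactly once, in a scrambled but deterministic order, without storing a set of previously hit points (only the scan-line mapping itself).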
A: Here's a solution for Triangle Point Picking.
What you have to do is choose two vectors (sides) of your triangle, multiply each with a random number in [0,1] and add them up. This will provide a uniform distribution in the quadrilateral defined by the vectors. You'll have to check whether the result lies inside the original triangle; if it doesn't, either transform it back in or simply discard it and try again.
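A hedged Python sketch of the "transform it back in" variant of this idea (fold the point across the parallelogram's diagonal instead of discarding it); the function name and the example triangle are made up for illustration:

```python
import random

def random_point_in_triangle(a, b, c, rng):
    # a, b, c are (x, y) vertices. Pick random weights along the two
    # edge vectors AB and AC; if the point lands in the half of the
    # parallelogram outside the triangle, reflect it back inside.
    u, v = rng.random(), rng.random()
    if u + v > 1.0:
        u, v = 1.0 - u, 1.0 - v
    return (a[0] + u * (b[0] - a[0]) + v * (c[0] - a[0]),
            a[1] + u * (b[1] - a[1]) + v * (c[1] - a[1]))

rng = random.Random(42)
points = [random_point_in_triangle((0, 0), (1, 0), (0, 1), rng)
          for _ in range(1000)]
```

Note this samples real-valued points uniformly; it does not by itself guarantee the each-pixel-exactly-once property the question asks for.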
A: One method is to put all of the pixels into an array and then shuffle the array (this is O(n)), then visit the pixels in the order in the shuffled array. This could require quite a lot of memory though.
A: Here's a method which wastes some CPU time but probably doesn't waste as much as a more complicated method would do.
Compute a rectangle that circumscribes the triangle. It will be easy to "linearize" that rectangle, each scan line followed by the next. Use the algorithm that you already know in order to traverse the pixels of the rectangle. When you hit each pixel, check if the pixel is in the triangle, and if not then skip it.
A: I would treat the lines of the triangle as a single line cut into segments. The segments would be stored in an array, where each element stores the segment's length as well as its offset within the total length of the lines. Then, depending on the value of O, you can select which array element contains the pixel you want to draw based on this information, and paint the pixel using the values in that element.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Distributed Cache/Session where should I turn? I am currently looking at a distributed cache solution.
If money was not an issue, which would you recommend?
*
*www.scaleoutsoftware.com
*ncache
*memcacheddotnet
*MS Velocity
A: Out of your selection I've only ever attempted to use memcached, and even then it wasn't the C#/.NET libraries.
However memcached technology is fairly well proven, just look at the sites that use it:
...The system is used by several very large, well-known sites including YouTube, LiveJournal, Slashdot, Wikipedia, SourceForge, ShowClix, GameFAQs, Facebook, Digg, Twitter, Fotolog, BoardGameGeek, NYTimes.com, deviantART, Jamendo, Kayak, VxV, ThePirateBay and Netlog.
I don't really see a reason to look at the other solution's.
Good Luck,
Brian G.
A: One thing that people typically forget when evaluating solutions is dedicated support.
If you go with memcached then you'll get none, because you're using completely open source software that is not backed by any vendor. Yes, the core platform is well tested by virtue of age, but the C# client libraries are probably much less so. And yes, you'll probably get some help on forums and the like, but there is no guarantee responses will be fast, and no guarantee you'll get any responses at all.
I don't know what the support for NCache or the ScaleOut cache is like, but it's something that's worth finding out before choosing them. I've dealt with many companies for support over the last few years and the support is often outsourced to people who don't even work at the company (with no chance of getting to the people who do) and this means no chance of getting quality of timely support. On the other hand I've also dealt with companies who'll escalate serious issues to the right people, fix important issues very fast, and ship you a personal patch.
One of those companies is Microsoft, which is one of the reasons that we use their software as our platform. If you have a production issue, then you can rely on their support. So my inclination would be to go with Velocity largely on this basis.
Possible the most important thing though, whichever cache you choose, is to abstract it behind your own interface (e.g. ICache) which will allow you to evaluate a number of them without holding up the rest of the development process. This means that even if your initial decision turns out not to work for you, you can switch it without breaking much of the application.
(Note: I'm assuming here that all caches have sufficient features to support what you need from them, and that all caches have sufficient and broadly similar performance. This may not be a valid assumption, in which case you'll need to provide more detail in your question as to why it isn't).
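As a rough sketch of that abstraction idea in Python (the interface and class names here are hypothetical, mirroring the ICache suggestion above):

```python
from abc import ABC, abstractmethod

class Cache(ABC):
    """The abstraction the application codes against; backends are swappable."""

    @abstractmethod
    def get(self, key):
        ...

    @abstractmethod
    def set(self, key, value):
        ...

class InMemoryCache(Cache):
    """A trivial dict-backed stand-in, useful while evaluating real providers."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value):
        self._store[key] = value

cache: Cache = InMemoryCache()
cache.set("user:42", {"name": "Alice"})
```

Application code only ever sees the Cache type, so swapping in a memcached-, NCache-, or Velocity-backed implementation later means writing one new class, not touching every call site.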
A: You could also add Oracle Coherence to your list. It has both .NET and Java APIs.
A: From microsoft : App fabric
Commerical : NCache
Open source : RIAK
We tried a couple in the end we use the SQL session provider for asp.net/mvc yes there is the overhead of the connection to the DB but our DB server is very fast and the web farm has loads of capacity so not an issue.
Very interested in RIAK has .net client and used by Yahoo - can be scaled to many manu server
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Why ActiveRecord instead of a MySql API I've been developing web applications for a while and I am quite comfortable with MySQL; in fact, as many do, I use some form of SQL almost every day. I like the syntax and I have zero problems writing queries or optimizing my tables. I have enjoyed this MySQL API.
The thing that has been bugging me is Ruby on Rails uses ActiveRecord and migrates everything so you use functions to query the database. I suppose the idea being you "never have to look at SQL again". Maybe this isn't KISS (keep it simple stupid) but is the ActiveRecord interface really best? If so why?
Is development without ever having to write a SQL statement healthy? What if you have to look something up that isn't already defined as a Rails function? I know they have a function that allows me to do a custom query. I guess really I want to know what people think the advantages are of using ActiveRecord over MySQL, and whether anyone feels like me that maybe this would be for the Rails community what the calculator was to the math community: some people might forget how to do long division.
A: You're right that hiding the SQL behind ActiveRecord's layer means people might forget to check the generated SQL. I've been bitten by this myself: missing indexes, inefficient queries, etc.
What ActiveRecord allows is making the easy things easy:
Post.find(1)
vs
SELECT * FROM posts WHERE posts.id = 1
You, the developer, have less to type, and thus have less chances for error.
Validation is another thing that ActiveRecord makes easy. You have to do it anyway, so why not have an easy way to do it? With the repetitive, boring, parts abstracted out?
class Post < ActiveRecord::Base
validates_presence_of :title
validates_length_of :title, :maximum => 80
end
vs
if params[:post][:title].blank? then
# complain
elsif params[:post][:title].length > 80 then
# complain again
end
Again, easy to specify, easy to validate. Want more validation? A single line to add to an ActiveRecord model. Convoluted code with multiple conditions is always harder to debug and test. Why not make it easy on you?
The final thing I really like about ActiveRecord instead of SQL are callbacks. Callbacks can be emulated with SQL triggers (which are only available in MySQL 5.0 or above), while ActiveRecord has had callbacks since way back then (I started on 0.13).
To summarize:
*
*ActiveRecord makes the easy things easy;
*ActiveRecord removes the boring, repetitive parts;
*ActiveRecord does not prevent you from writing your own SQL (usually for performance reasons), and finally;
*ActiveRecord is fully portable accross most database engines, while SQL itself is not (sometimes).
I know in your case you are talking specifically about MySQL, but still. Having the option is nice.
A: The idea here is that by putting your DB logic inside your Active Records, you're dealing with SQL code in one place, rather than spread all over your application. This makes it easier for the various layers of your application to follow the Single Responsibility Principle (that an object should have only one reason to change).
Here's an article on the Active Record pattern.
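As a rough illustration of the pattern itself (not Rails code; all names here are made up), an Active Record object carries its own persistence methods, so the DB logic lives with the model:

```python
class Record:
    """A minimal Active Record sketch: the object knows how to persist itself."""

    _table = {}      # stands in for a database table
    _next_id = 1

    def __init__(self, **fields):
        self.id = None
        self.fields = fields

    def save(self):
        # Assign a primary key on first save, then write the row.
        if self.id is None:
            self.id = Record._next_id
            Record._next_id += 1
        Record._table[self.id] = dict(self.fields)
        return self

    @classmethod
    def find(cls, id):
        # Rehydrate a saved row back into an object.
        rec = cls(**cls._table[id])
        rec.id = id
        return rec

post = Record(title="Hello").save()
found = Record.find(post.id)
```

The rest of the application calls save and find; only this one class would change if the storage details did, which is the Single Responsibility point made above.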
A: Avoiding SQL helps you when you decide to change the database schema. The abstraction is also necessary for all kinds of things, like validation. It doesn't mean you don't get to write SQL: you can always do that if you feel the need for it. But you don't have to write a 5 line query where all you need is user.save. It's the Rails philosophy to avoid unnecessary code.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do you keep track of temporary threads of conversation online Often when I post a comment or answer on a site I like to keep an eye out for additional responses from other people, possibly replying again if appropriate. Sometimes I'll bookmark a page for a while, other times I'll end up re-googling keywords to locate the post again. I've always thought there should be something better than my memory for keeping track of pages I care about for a few days to a week.
Does anyone have any clever ideas for this type of thing? Is there a micro-delicious type of online app with a bookmarklet for very short term followup?
Update I think I should clarify. I wasn't asking about Stack Overflow specifically - on the "read/write web" in general I add comments to blog posts, respond to google group threads, etc. It's that sort of mish-mash of individual pages on random sites that I would care to keep track of for seven-to-ten days.
A: For stackoverflow, I put together a little bookmarklet thing at http://stackoverflow.hewgill.com. I use it to keep track of posts that I might want to come back to later, for reference or to answer if nobody else did, or whatever. The backend automatically retrieves updates from the SO server and updates your list of bookmarklets.
A: In my head mostly. I occasionally forget things, but it works well enough.
A: That's a very interesting question you asked here.
I do the following:
*
*temp bookmarks in the browser
*just a tab in Firefox left open for weeks :)
*subscription to email/RSS when possible. When an email notification comes I often put it into a special folder in my email tree.
Different logins, notification types, etc. make following info on the web complicated :(
Other interesting questions:
*
*how to organize information storage (notes, saved web pages, forum threads etc) for current usage and as a read-only library, sync it between different PCs and USB disks, how to label (tag) it and search it
*how to store old mails, conversations, chats,..?
*store digital photos for future: make hard-copy printouts or just regulary rewrite it from CD to a new one
A: Click on your username, then Responses.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161114",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I configure the place of the Netbeans .netbeans directory? I want Netbeans 6.1 to store the .netbeans directory in another place than the default. How do I do this?
A: You can also specify this when you run Netbeans IDE via the command line. This is useful if you want to have different profiles/working environments in the IDE or when you are testing out Netbeans IDE plug-ins. This works from 5.0 to the current version (6.5).
Simply specify "--userdir <dir>" on the command line. Example:
netbeans --userdir /local/file/system/netbeans/userdir/6.1
A: There's config file:
<Netbeans>/etc/netbeans.conf
netbeans_default_userdir=<dir>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Can a WPF ListBox be "read only"? We have a scenario where we want to display a list of items and indicate which is the "current" item (with a little arrow marker or a changed background colour).
ItemsControl is no good to us, because we need the context of "SelectedItem". However, we want to move the selection programmatically and not allow the user to change it.
Is there a simple way to make a ListBox non-interactive? We can fudge it by deliberately swallowing mouse and keyboard events, but am I missing some fundamental property (like setting "IsEnabled" to false without affecting its visual style) that gives us what we want?
Or ... is there another WPF control that's the best of both worlds - an ItemsControl with a SelectedItem property?
A: I had the same issue. I resolved it by leaving the IsEnabled set to true and handling the PreviewMouseDown event of the ListBox. In the handler set e.Handled to true in the case you don't want it to be edited.
private void lstSMTs_PreviewMouseDown(object sender, System.Windows.Input.MouseButtonEventArgs e)
{
e.Handled = !editRights;
}
A: The magical incantation that will do the trick is:
<ListBox IsHitTestVisible="False">
Unfortunately, this will also prevent any scrollbars from working.
The fix to that is to put the listbox inside a scrollviewer:
<ScrollViewer>
<ListBox IsHitTestVisible="False">
</ListBox>
</ScrollViewer>
A: One option is to set ListBoxItem.IsEnabled to false:
<ListBox x:Name="_listBox">
<ListBox.ItemContainerStyle>
<Style TargetType="ListBoxItem">
<Setter Property="IsEnabled" Value="False"/>
</Style>
</ListBox.ItemContainerStyle>
</ListBox>
This ensures that the items are not selectable, but they may not render how you like. To fix this, you can play around with triggers and/or templates. For example:
<ListBox x:Name="_listBox">
<ListBox.ItemContainerStyle>
<Style TargetType="ListBoxItem">
<Setter Property="IsEnabled" Value="False"/>
<Style.Triggers>
<Trigger Property="IsEnabled" Value="False">
<Setter Property="Foreground" Value="Red" />
</Trigger>
</Style.Triggers>
</Style>
</ListBox.ItemContainerStyle>
</ListBox>
A: Is your ItemsControl/ListBox databound?
I'm just thinking you could make the Background Brush of each item bound to a property from the source data, or pass the property through a converter. Something like:
<ItemsControl DataContext="{Binding Source={StaticResource Things}}" ItemsSource="{Binding}" Margin="0">
<ItemsControl.Resources>
<local:SelectedConverter x:Key="conv"/>
</ItemsControl.Resources>
<ItemsControl.ItemsPanel>
<ItemsPanelTemplate>
<local:Control Background="{Binding Path=IsSelected, Converter={StaticResource conv}}"/>
</ItemsPanelTemplate>
</ItemsControl.ItemsPanel>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: How do you import users from Community Server into DNN? A client is switching from Community Server to DNN. We would like to just import all the users including their passwords. It seems like this should work since both products use the .NET SqlMembershipProvider. What tables need to be populated to get a user set up correctly in DNN?
A: What you will most likely need is a membership provider for DNN that allows you to access the CS membership data - I have no specific information on this path, but that would be my guess and most likely the easiest route.
According to Alex Chrome on this post http://dev.communityserver.com/forums/t/488539.aspx
there may be a way to do it with the REST API in CS 2008
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I force a tomcat web application reload the trust store after I update it I have the following problem.
My tomcat 5.5 based web application is using a trust store to verify SSL connections.
The application allows the user to add or remove CA certificates to be used in the verification process.
However, adding or removing certificates from the trust store doesn't change a thing. The application 'recognizes' only the certificates that were in the trust store when tomcat started. For it to recognize the new set of certificates, I need to restart tomcat.
This is not a valid solution, however.
What I do need is a code based solution.
Please advise.
A: How about writing a custom classloader that loads the trust store ONLY for this webapp? You could unload the classloader when you need to refresh the contents and then reload it.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Is it safe to increment an iterator while using it as an argument? Currently I'm trying to erase a sequence of iterators from a set, however GCC's standard library seems to be broken because std::set::erase(iterator) should return an iterator (the next iterator), however in GCC it returns void (which is standard?)
Anyways I want to write:
myIter = mySet.erase(myIter);
But GCC doesn't like it...
So is it safe to write this instead?
mySet.erase(myIter++);
Edit: And yes I'm checking against mySet.end();
A: There is no problem with
mySet.erase(myIter++);
The order of operation is well-defined: myIter is copied into myTempIter, myIter is incremented, and myTempIter is then given to the erase method.
For Greg and Mark: no, there is no way operator++ can perform operations after the call to erase. By definition, erase() is called after operator++ has returned.
A: First, reread the standard and you'll see that the prototype of the set::erase method is:
void erase(iterator position);
(That was indeed the signature in the C++98/03 standard; C++11 later changed erase to return an iterator to the element following the last one removed.)
However, the associative containers in the STL are "stable", as erasing an element does not affect iterators to the other elements. This means that the following lines are valid:
iterator to_erase = myIter++;
mySet.erase(to_erase);
// Now myIter is still on the next element
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I determine the file and line # of a C# method from a symbols (.pdb) file? pdb files contain symbol information for .NET assemblies. I'd like to read a pdb file in order to correlate methods with their file location. The data is contained within it but I can't seem to find a good description of how to get it out.
I know about mdbg, but that is very heavy (I think/hope) for what I want.
A: You should look at:
*
*Mono.Cecil and especially the Mono.Cecil.Pdb module. It should do what you want and more.
A: In DBGHELP.DLL, you can use the SymGetLineFromAddr64 function. You'll need to use P/Invoke. There might be a corresponding API in the DIA SDK, but I'm not as familiar with it as I am DBGHELP.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: What is the best way to export a large table with many records from SQLite to a custom delimited text file? The table I'm dealing with is potentially larger than available memory (let's say 10GB) and some of the fields can have at most 100MB of text. So a simple SELECT query probably isn't going to cut it. I've seen some command-line solutions but I need to be able to do this through a C# program.
A: A select should be fine. Last time I checked, the entire recordset (and all its data) isn't loaded into memory when you query a DB.
If that is somehow not the case, and it is taking up all the RAM in the known universe, do your query to just get IDs and then tick through the IDs getting individual records. Much, much slower, but it should limit the RAM usage.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I load an XML document in a SUN One ASP hosting environment in Linux I am working on an ASP site hosted using SUN One (used to be called Chillisoft) server. I am having trouble loading in an XML file, the code I am using is below
dim directory
set directory = Server.CreateObject("MSXML2.DOMDocument")
if(directory.load(Server.MapPath("directory.xml"))) then
Response.Write("Loaded")
else
Response.Write("NotLoaded")
If directory.parseError.errorCode Then
Response.Write( "Parse error" )
end if
end if
My asp page and directory.xml are both in the same folder "/public_html/".
I think the problem might have something to do with MapPath not finding the file, but no errors are returned so I'm not sure what to do.
Thanks
A: I don't know much about Sun One but I do know it has a Bean that emulates MSXML.
Ordinarily, you would use:
Set directory = Server.CreateObject("MSXML2.DOMDocument")
directory.async = false
directory.load(Server.MapPath("directory.xml"))
Otherwise load returns immediately whilst the XML is loaded asynchronously.
I can't see how the code you have posted would fail to return something without an error.
The first diagnostic I would try is:
Response.Write(Server.MapPath("directory.xml"))
and then
Dim directory
Set directory = Server.CreateObject("MSXML2.DOMDocument")
Response.Write(Not (directory Is Nothing))
A: The load likely returns false because it hasn't fully loaded the document yet. You need to find a way to set async to false. If the Sun One is emulating MSXML2.DOMDocument well then async should accept false but you could try -1 or Response.Write(directory.async) to get an idea of what it is originally set to.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Does C++ support 'finally' blocks? (And what's this 'RAII' I keep hearing about?) Does C++ support 'finally' blocks?
What is the RAII idiom?
What is the difference between C++'s RAII idiom and C#'s 'using' statement?
A: In C++ the finally is NOT required because of RAII.
RAII moves the responsibility of exception safety from the user of the object to the designer (and implementer) of the object. I would argue this is the correct place as you then only need to get exception safety correct once (in the design/implementation). By using finally you need to get exception safety correct every time you use an object.
Also IMO the code looks neater (see below).
Example:
A database object. To make sure the DB connection is used it must be opened and closed. By using RAII this can be done in the constructor/destructor.
C++ Like RAII
void someFunc()
{
DB db("DBDescriptionString");
// Use the db object.
} // db goes out of scope and destructor closes the connection.
// This happens even in the presence of exceptions.
The use of RAII makes using a DB object correctly very easy. The DB object will correctly close itself by the use of a destructor no matter how we try and abuse it.
Java Like Finally
void someFunc()
{
DB db = new DB("DBDescriptionString");
try
{
// Use the db object.
}
finally
{
// Can not rely on finaliser.
// So we must explicitly close the connection.
try
{
db.close();
}
catch(Throwable e)
{
/* Ignore */
// Make sure not to throw exception if one is already propagating.
}
}
}
When using finally the correct use of the object is delegated to the user of the object. i.e. It is the responsibility of the object user to correctly to explicitly close the DB connection. Now you could argue that this can be done in the finaliser, but resources may have limited availability or other constraints and thus you generally do want to control the release of the object and not rely on the non deterministic behavior of the garbage collector.
Also this is a simple example.
When you have multiple resources that need to be released the code can get complicated.
A more detailed analysis can be found here: http://accu.org/index.php/journals/236
A: RAII is usually better, but you can easily have the finally semantics in C++, using a tiny amount of code.
Besides, the C++ Core Guidelines provide a finally.
Here is a link to the GSL Microsoft implementation and a link to the Martin Moene implementation
Bjarne Stroustrup has said multiple times that everything in the GSL is meant to go into the standard eventually, so it should be a future-proof way to use finally.
You can easily implement it yourself if you want though; continue reading.
In C++11 RAII and lambdas allows to make a general finally:
namespace detail { //adapt to your "private" namespace
template <typename F>
struct FinalAction {
FinalAction(F f) : clean_{f} {}
~FinalAction() { if(enabled_) clean_(); }
void disable() { enabled_ = false; };
private:
F clean_;
bool enabled_{true}; }; }
template <typename F>
detail::FinalAction<F> finally(F f) {
return detail::FinalAction<F>(f); }
example of use:
#include <iostream>
int main() {
int* a = new int;
auto delete_a = finally([a] { delete a; std::cout << "leaving the block, deleting a!\n"; });
std::cout << "doing something ...\n"; }
the output will be:
doing something...
leaving the block, deleting a!
Personally I used this few times to ensure to close POSIX file descriptor in a C++ program.
Having a real class that manages resources and so avoids any kind of leak is usually better, but this finally is useful in cases where making a class sounds like overkill.
Besides, I like it better than other languages' finally because if used naturally you write the closing code nearby the opening code (in my example the new and delete) and destruction follows construction in LIFO order as usual in C++. The only downside is that you get an auto variable you don't really use, and the lambda syntax makes it a little noisy (in my example in the fourth line only the word finally and the {}-block on the right are meaningful, the rest is essentially noise).
Another example:
[...]
auto precision = std::cout.precision();
auto set_precision_back = finally( [precision] { std::cout << std::setprecision(precision); } );
std::cout << std::setprecision(3);
The disable member is useful if the finally has to be called only in case of failure. For example, you have to copy an object in three different containers, you can setup the finally to undo each copy and disable after all copies are successful. Doing so, if the destruction cannot throw, you ensure the strong guarantee.
disable example:
//strong guarantee
void copy_to_all(BIGobj const& a) {
first_.push_back(a);
auto undo_first_push = finally([this] { first_.pop_back(); });
second_.push_back(a);
auto undo_second_push = finally([this] { second_.pop_back(); });
third_.push_back(a);
// not necessary, included just to make it easier to add containers in the future
auto undo_third_push = finally([this] { third_.pop_back(); });
undo_first_push.disable();
undo_second_push.disable();
undo_third_push.disable(); }
If you cannot use C++11 you can still have finally, but the code becomes a bit more long-winded. Just define a struct with only a constructor and destructor; the constructor takes references to anything needed and the destructor performs the actions you need. This is basically what the lambda does, done manually.
#include <iostream>
int main() {
int* a = new int;
struct Delete_a_t {
Delete_a_t(int* p) : p_(p) {}
~Delete_a_t() { delete p_; std::cout << "leaving the block, deleting a!\n"; }
int* p_;
} delete_a(a);
std::cout << "doing something ...\n"; }
Hopefully you can use C++11; this code is more to show that the claim "C++ does not support finally" has been nonsense since the very first weeks of C++: it was possible to write this kind of code even before C++ got its name.
A: Sorry for digging up such an old thread, but there is a major error in the following reasoning:
RAII moves the responsibility of exception safety from the user of the object to the designer (and implementer) of the object. I would argue this is the correct place as you then only need to get exception safety correct once (in the design/implementation). By using finally you need to get exception safety correct every time you use an object.
More often than not, you have to deal with dynamically allocated objects, dynamic numbers of objects etc. Within the try-block, some code might create many objects (how many is determined at runtime) and store pointers to them in a list. Now, this is not an exotic scenario, but very common. In this case, you'd want to write stuff like
void DoStuff(vector<string> input)
{
list<Foo*> myList;
try
{
for (int i = 0; i < input.size(); ++i)
{
Foo* tmp = new Foo(input[i]);
if (!tmp)
throw;
myList.push_back(tmp);
}
DoSomeStuff(myList);
}
finally
{
while (!myList.empty())
{
delete myList.back();
myList.pop_back();
}
}
}
Of course the list itself will be destroyed when going out of scope, but that wouldn't clean up the temporary objects you have created.
Instead, you have to go the ugly route:
void DoStuff(vector<string> input)
{
list<Foo*> myList;
try
{
for (int i = 0; i < input.size(); ++i)
{
Foo* tmp = new Foo(input[i]);
if (!tmp)
throw;
myList.push_back(tmp);
}
DoSomeStuff(myList);
}
catch(...)
{
}
while (!myList.empty())
{
delete myList.back();
myList.pop_back();
}
}
Also: why is it that even managed languages provide a finally-block despite resources being deallocated automatically by the garbage collector anyway?
Hint: there's more you can do with "finally" than just memory deallocation.
A: FWIW, Microsoft Visual C++ does support __try / __finally, and it has historically been used in MFC apps as a method of catching serious exceptions that would otherwise result in a crash. For example:
int CMyApp::Run()
{
__try
{
int i = CWinApp::Run();
m_Exitok = MAGIC_EXIT_NO;
return i;
}
__finally
{
if (m_Exitok != MAGIC_EXIT_NO)
FaultHandler();
}
}
I've used this in the past to do things like save backups of open files prior to exit. Certain JIT debugging settings will break this mechanism though.
A: As pointed out in the other answers, C++ can support finally-like functionality. The implementation of this functionality that is probably closest to being part of the standard language is the one accompanying the C++ Core Guidelines, a set of best practices for using C++ edited by Bjarne Stroustrup and Herb Sutter. An implementation of finally is part of the Guidelines Support Library (GSL). Throughout the Guidelines, use of finally is recommended when dealing with old-style interfaces, and it also has a guideline of its own, titled Use a final_action object to express cleanup if no suitable resource handle is available.
So, not only does C++ support finally, it is actually recommended to use it in a lot of common use-cases.
An example use of the GSL implementation would look like:
#include <gsl/gsl_util.h>
void example()
{
int handle = get_some_resource();
auto handle_clean = gsl::finally([&handle] { clean_that_resource(handle); });
// Do a lot of stuff, return early and throw exceptions.
// clean_that_resource will always get called.
}
The GSL implementation and usage is very similar to the one in Paolo.Bolzoni's answer. One difference is that the object created by gsl::finally() lacks the disable() call. If you need that functionality (say, to return the resource once it's assembled and no exceptions are bound to happen), you might prefer Paolo's implementation. Otherwise, using GSL is as close to using standardized features as you will get.
A: I have a use case where I think finally should be a perfectly acceptable part of the C++11 language, as I think it is easier to read from a flow point of view. My use case is a consumer/producer chain of threads, where a sentinel nullptr is sent at the end of the run to shut down all threads.
If C++ supported it, you would want your code to look like this:
extern Queue downstream, upstream;
int Example()
{
try
{
while(!ExitRequested())
{
X* x = upstream.pop();
if (!x) break;
x->doSomething();
downstream.push(x);
}
}
finally {
downstream.push(nullptr);
}
}
I think this is more logical than putting your finally declaration at the start of the loop, since it occurs after the loop has exited... but that is wishful thinking because we can't do it in C++. Note that the queue downstream is connected to another thread, so you can't put the sentinel push(nullptr) in the destructor of downstream because it can't be destroyed at this point... it needs to stay alive until the other thread receives the nullptr.
So here is how to use a RAII class with lambda to do the same:
class Finally
{
public:
Finally(std::function<void(void)> callback) : callback_(callback)
{
}
~Finally()
{
callback_();
}
std::function<void(void)> callback_;
};
and here is how you use it:
extern Queue downstream, upstream;
int Example()
{
Finally atEnd([](){
downstream.push(nullptr);
});
while(!ExitRequested())
{
X* x = upstream.pop();
if (!x) break;
x->doSomething();
downstream.push(x);
}
}
A: Beyond making clean up easy with stack-based objects, RAII is also useful because the same 'automatic' clean up occurs when the object is a member of another class. When the owning class is destructed, the resource managed by the RAII class gets cleaned up because the dtor for that class gets called as a result.
This means that when you reach RAII nirvana and all members in a class use RAII (like smart pointers), you can get away with a very simple (maybe even default) dtor for the owner class since it doesn't need to manually manage its member resource lifetimes.
A:
why is it that even managed languages provide a finally-block despite resources being deallocated automatically by the garbage collector anyway?
Actually, languages based on garbage collectors need "finally" more. A garbage collector does not destroy your objects in a timely manner, so it cannot be relied upon to clean up non-memory-related issues correctly.
In terms of dynamically-allocated data, many would argue that you should be using smart-pointers.
However...
RAII moves the responsibility of exception safety from the user of the object to the designer
Sadly this is its own downfall. Old C programming habits die hard. When you're using a library written in C or a very C style, RAII won't have been used. Short of re-writing the entire API front-end, that's just what you have to work with. Then the lack of "finally" really bites.
A: No, C++ does not support 'finally' blocks. The reason is that C++ instead supports RAII: "Resource Acquisition Is Initialization" -- a poor name† for a really useful concept.
The idea is that an object's destructor is responsible for freeing resources. When the object has automatic storage duration, the object's destructor will be called when the block in which it was created exits -- even when that block is exited in the presence of an exception. Here is Bjarne Stroustrup's explanation of the topic.
A common use for RAII is locking a mutex:
// A class with implements RAII
class lock
{
mutex &m_;
public:
lock(mutex &m)
: m_(m)
{
m.acquire();
}
~lock()
{
m_.release();
}
};
// A class which uses 'mutex' and 'lock' objects
class foo
{
mutex mutex_; // mutex for locking 'foo' object
public:
void bar()
{
lock scopeLock(mutex_); // lock object.
foobar(); // an operation which may throw an exception
// scopeLock will be destructed even if an exception
// occurs, which will release the mutex and allow
// other functions to lock the object and run.
}
};
RAII also simplifies using objects as members of other classes. When the owning class is destructed, the resource managed by the RAII class gets released because the destructor for the RAII-managed class gets called as a result. This means that when you use RAII for all members in a class that manage resources, you can get away with using a very simple, maybe even the default, destructor for the owner class since it doesn't need to manually manage its member resource lifetimes. (Thanks to Mike B for pointing this out.)
For those familiar with C# or VB.NET, you may recognize that RAII is similar to .NET deterministic destruction using IDisposable and 'using' statements. Indeed, the two methods are very similar. The main difference is that RAII will deterministically release any type of resource -- including memory. When implementing IDisposable in .NET (even the .NET language C++/CLI), resources will be deterministically released except for memory. In .NET, memory is not deterministically released; memory is only released during garbage collection cycles.
† Some people believe that "Destruction is Resource Relinquishment" is a more accurate name for the RAII idiom.
A: Not really, but you can emulate them to some extent, for example:
int * array = new int[10000000];
try {
// Some code that can throw exceptions
// ...
throw std::exception();
// ...
} catch (...) {
// The finally-block (if an exception is thrown)
delete[] array;
// re-throw the exception.
throw;
}
// The finally-block (if no exception was thrown)
delete[] array;
Note that the finally-block might itself throw an exception before the original exception is re-thrown, thereby discarding the original exception. This is the exact same behavior as in a Java finally-block. Also, you cannot use return inside the try&catch blocks.
A: I came up with a finally macro that can be used almost like¹ the finally keyword in Java; it makes use of std::exception_ptr and friends, lambda functions and std::promise, so it requires C++11 or above; it also makes use of the compound statement expression GCC extension, which is also supported by clang.
WARNING: an earlier version of this answer used a different implementation of the concept with many more limitations.
First, let's define a helper class.
#include <future>
template <typename Fun>
class FinallyHelper {
template <typename T> struct TypeWrapper {};
using Return = typename std::result_of<Fun()>::type;
public:
FinallyHelper(Fun body) {
try {
execute(TypeWrapper<Return>(), body);
}
catch(...) {
m_promise.set_exception(std::current_exception());
}
}
Return get() {
return m_promise.get_future().get();
}
private:
template <typename T>
void execute(T, Fun body) {
m_promise.set_value(body());
}
void execute(TypeWrapper<void>, Fun body) {
body();
}
std::promise<Return> m_promise;
};
template <typename Fun>
FinallyHelper<Fun> make_finally_helper(Fun body) {
return FinallyHelper<Fun>(body);
}
Then there's the actual macro.
#define try_with_finally for(auto __finally_helper = make_finally_helper([&] { try
#define finally }); \
true; \
({return __finally_helper.get();})) \
/***/
It can be used like this:
void test() {
try_with_finally {
raise_exception();
}
catch(const my_exception1&) {
/*...*/
}
catch(const my_exception2&) {
/*...*/
}
finally {
clean_it_all_up();
}
}
The use of std::promise makes it very easy to implement, but it probably also introduces quite a bit of unneeded overhead which could be avoided by reimplementing only the needed functionalities from std::promise.
¹ CAVEAT: there are a few things that don't work quite like the java version of finally. Off the top of my head:
*
*it's not possible to break from an outer loop with the break statement from within the try and catch()'s blocks, since they live within a lambda function;
*there must be at least one catch() block after the try: it's a C++ requirement;
*if the function has a return value other than void but there's no return within the try and catch()'s blocks, compilation will fail because the finally macro will expand to code that will want to return a void. This could be, err, avoided by having a finally_noreturn macro of sorts.
All in all, I don't know if I'd ever use this stuff myself, but it was fun playing with it. :)
A: Another "finally" block emulation using C++11 lambda functions
template <typename TCode, typename TFinallyCode>
inline void with_finally(const TCode &code, const TFinallyCode &finally_code)
{
try
{
code();
}
catch (...)
{
try
{
finally_code();
}
catch (...) // Maybe stupid check that finally_code mustn't throw.
{
std::terminate();
}
throw;
}
finally_code();
}
Let's hope the compiler will optimize the code above.
Now we can write code like this:
with_finally(
[&]()
{
try
{
// Doing some stuff that may throw an exception
}
catch (const exception1 &)
{
// Handling first class of exceptions
}
catch (const exception2 &)
{
// Handling another class of exceptions
}
// Some classes of exceptions can be still unhandled
},
[&]() // finally
{
// This code will be executed in all three cases:
// 1) exception was not thrown at all
// 2) exception was handled by one of the "catch" blocks above
// 3) exception was not handled by any of the "catch" block above
}
);
If you wish you can wrap this idiom into "try - finally" macros:
// Please never throw exception below. It is needed to avoid a compilation error
// in the case when we use "begin_try ... finally" without any "catch" block.
class never_thrown_exception {};
#define begin_try with_finally([&](){ try
#define finally catch(never_thrown_exception){throw;} },[&]()
#define end_try ) // sorry for "pascalish" style :(
Now "finally" block is available in C++11:
begin_try
{
// A code that may throw
}
catch (const some_exception &)
{
// Handling some exceptions
}
finally
{
// A code that is always executed
}
end_try; // Sorry again for this ugly thing
Personally I don't like the "macro" version of "finally" idiom and would prefer to use pure "with_finally" function even though a syntax is more bulky in that case.
You can test the code above here: http://coliru.stacked-crooked.com/a/1d88f64cb27b3813
PS
If you need a finally block in your code, then scoped guards or ON_FINALLY/ON_EXCEPTION macros will probably better fit your needs.
Here is a short example of using ON_FINALLY/ON_EXCEPTION:
void function(std::vector<const char*> &vector)
{
int *arr1 = (int*)malloc(800*sizeof(int));
if (!arr1) { throw "cannot malloc arr1"; }
ON_FINALLY({ free(arr1); });
int *arr2 = (int*)malloc(900*sizeof(int));
if (!arr2) { throw "cannot malloc arr2"; }
ON_FINALLY({ free(arr2); });
vector.push_back("good");
ON_EXCEPTION({ vector.pop_back(); });
...
A: As many people have stated, the solution is to use C++11 features to avoid finally blocks. One of the features is unique_ptr.
Here is Mephane's answer written using RAII patterns.
#include <vector>
#include <memory>
#include <list>
using namespace std;
class Foo
{
...
};
void DoStuff(vector<string> input)
{
list<unique_ptr<Foo> > myList;
for (int i = 0; i < input.size(); ++i)
{
myList.push_back(unique_ptr<Foo>(new Foo(input[i])));
}
DoSomeStuff(myList);
}
Some more introduction to using unique_ptr with C++ Standard Library containers is here
A: I also think that RAII is not a fully useful replacement for exception handling and having a finally. BTW, I also think RAII is a bad name all around. I call these types of classes 'janitors' and use them a LOT. 95% of the time they are neither initializing nor acquiring resources; they are applying some change on a scoped basis, or taking something already set up and making sure it's destroyed. This being the official-pattern-name-obsessed internet, I get abused for even suggesting my name might be better.
I just don't think it's reasonable to require that every complicated setup of some ad hoc list of things has to have a class written to contain it, in order to avoid complications when cleaning it all back up in the face of needing to catch multiple exception types if something goes wrong in the process. This would lead to lots of ad hoc classes that just wouldn't be necessary otherwise.
Yes it's fine for classes that are designed to manage a particular resource, or generic ones that are designed to handle a set of similar resources. But, even if all of the things involved have such wrappers, the coordination of cleanup may not just be a simple in reverse order invocation of destructors.
I think it makes perfect sense for C++ to have a finally. I mean, jeez, so many bits and bobs have been glued onto it over the last decades that it seems odd folks would suddenly become conservative over something like finally which could be quite useful and probably nothing near as complicated as some other things that have been added (though that's just a guess on my part.)
A: EDITED
If you are not breaking/continuing/returning etc., you could just add a catch to any unknown exception and put the always code behind it. That is also when you don't need the exception to be re-thrown.
try{
// something that might throw exception
} catch( ... ){
// what to do with uknown exception
}
//final code to be called always,
//don't forget that it might throw some exception too
doSomeCleanUp();
So what's the problem?
Normally, finally in other programming languages runs no matter what (meaning regardless of any return, break, continue, ...) except for some sort of system exit() - whose behaviour differs a lot per programming language - e.g. PHP and Java just exit at that moment, but Python executes finally anyway and then exits.
But the code I've described above doesn't work that way
=> following code outputs ONLY something wrong!:
#include <stdio.h>
#include <iostream>
#include <string>
std::string test() {
try{
// something that might throw exception
throw "exceptiooon!";
return "fine";
} catch( ... ){
return "something wrong!";
}
return "finally";
}
int main(void) {
std::cout << test();
return 0;
}
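For comparison, here is a minimal sketch of the Python behaviour the answer alludes to (illustrative names only): a finally clause runs even when the try block returns — which is exactly what the catch-based C++ emulation above fails to reproduce.

```python
def fetch(events):
    """finally runs even though the try block returns early."""
    try:
        return "fine"
    finally:
        # executed before the return value reaches the caller
        events.append("finally ran")
```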
A: try
{
...
goto finally;
}
catch(...)
{
...
goto finally;
}
finally:
{
...
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "325"
} |
Q: Fastest way to get productive in VS 08 and C# I have recently been working with Python using Komodo Edit and other simpler editors but now I am doing a project which is to be done in C# using VS 08. I would appreciate any hints on how to get productive on that platform as quickly as possible.
A: I would personally concentrate on learning the core parts of both C# and .NET first. For me, that would mean writing console apps (rather than Windows Forms) to experiment with the language and important aspects like IO. When you're happy with the foundations, move onto whichever "peripheral" technology (WinForms, WPF, ASP.NET, WCF etc) you need for your project.
In terms of books, I can recommend both C# 3.0 in a Nutshell and Accelerated C# 2008. The links are to my reviews of the books. Both cover language + core libraries.
I wouldn't worry too much about LINQ to start with - get comfortable with the rest of the language, particularly delegates and generics, before you tackle LINQ. At that point, I'd thoroughly recommend playing with LINQ to Objects for quite a while before you start using LINQ to SQL or the Entity Framework. (On the other hand, if you need to use XML at all, I'd go straight to LINQ to XML - it's a whole XML API, not just a LINQ provider. It's much nicer than the normal DOM API.)
A: Pick a Python project you've completed in the past and manually convert it to C#. This is how I've learned every language I currently know (except for x86 assembly).
Consider using IronPython to help bridge the gap - you can reference .NET assemblies via IronPython, as well as create .NET assemblies to reference from C#.
Also, stay very far away from those Learn C# in 21 Days! books... They almost never live up to the hype, and are typically more harm than good.
A:
I would appreciate any hints on how to
get productive on that platform as
quickly as possible.
Practical experience my friend. Start using it as soon as possible to be productive as soon as possible.
Some obvious recommendations are:
*
*create shortcut\macros\templates for frequent actions. Force yourself use shortcuts instead of clicking on menus
*install ReSharper - in will give you 1000% productivity boost (if you have a couple of bucks to buy it)
And don't hesitate to look into manual from time to time :)
A: As far as becoming proficient with C# I would highly recommend Programming C# and C# in Depth.
For Visual Studio, start poking around in the IDE a lot, play around, get familiar with it. Start with simple projects and explore all the different aspects. Learn how to optimize Visual Studio and get familiar with some of the great keyboard shortcuts / hidden features of the IDE.
Definitely do each of the following at least once:
Projects:
*
*Create a simple console application (e.g. hello world)
*Create a class library (managed .dll) and use it from another application you create
*Create a simple windows application
*Create a simple asp.net web app
Debugging:
*
*Debug a command line app
*Get familiar with: breakpoints, the locals and watch windows, step over, step into, step out of, continue, stop debugging
*Create a command line app which uses a function in a class library. Store the dll and symbol file (.pdb) for the library but delete the source code, debug through app as it goes into the library
*Debug into a webservice
*Learn how to use ILDasm and ILAsm
Command Line:
*
*Get familiar with the Visual Studio command line environment
*Build using only the command line
*Debug from the command line using devenv.exe /debugexe
*Use ILDasm / ILAsm from the command line to disassemble a simple app into .IL, reassemble it into a differently named file, test to see that it still works
Testing:
*
*Create unit tests (right click in a method, select the option to create a test)
*Learn how to: run all unit tests, run all unit tests under the debugger, rerun failed unit tests, see details on test failures, run a subset of unit tests
*Learn how to collect code coverage statistics for your tests
Source Control:
*
*Learn how to interact with your source control system of choice while developing using VS
Refactoring et al:
*
*Become familiar with all of the built-in refactorings (especially rename and extract method)
*Use "Go To Definition"
*Use "Find All References"
*Use "Find In Files" (ctrl-shift-F)
IDE & Keyboard Shortcuts:
*
*Learn how to use the designer well for web and winforms
*Get very familiar with the Solution Explorer window
*Experiment with different window layouts until you find one your comfortable with, keep experimenting later to see if that's still the best choice
*Learn the ins and outs of intellisense, use it to your advantage as much as possible
*Learn the keyboard shortcut for everything you do
A: Get an excellent book and start reading. I have Pro C# 2008 and the .NET 3.5 Platform.
Since you have a project to work on that should help greatly as well.
A: I look at the move from Python to C# as a step down the evolutionary ladder.
Expect a much more verbose experience (e.g. variable declarations and class properties).
Keep an eye on IronPython - it will help you to get the feel of .NET using a familiar language. The dynamic nature of Python makes checking .NET behavior a lot quicker then checking ideas in C#. You can use IronPython directly from Visual Studio with IronPython Studio .
A: Python to C# transition
You usually learn the next language by comparing its features to languages you already know. Since you're familiar with Python, read some Python/C# comparisons like "A Python Programmer's Perspective on C#" and "Does C# 3.0 Beat Dynamic Languages at their Own Game?". The delta between C# 3.5 and Python is not that big.
A: Do small mini projects. Some of the top of my head.
1) Hello world
2) Console
3) GUI design
4) Toy project
They're going to bring you up faster to speed than reading a tutorial. Grab a beer and have fun.
A: Microsoft has a nice learning platform for this:
http://msdn.microsoft.com/en-us/vcsharp/aa336766.aspx
I recommend to take a look at the "How do I" video series.
A: I would consider this dependent on a few things. For example, do you use the keyboard more than the mouse? If so I would get learning VS shortcuts. Do you know C# at all? Read some books. I know this is vague, but it's a somewhat vague question.
Practice, practice, practice, gain experience, become productive.
A: For every tool, system, language or whatever, the quickest way to get productive is (at least for me) to get to understand the background rather than starting from hello world followed by stumbling from one command to next one you think you need.
So buy a good book (e.g. Microsoft Press) or go to Microsoft MSDN and Technet and read as much as possible of the background stuff. Detailed stuff (in a book or on the internet) can be referred to later.
Additionally - to boost the productivity - join mailinglists, IRCs, usegroups, etc. to get knowledge about every day problems of others, using the same tool, system, language, whatever.
Then - after a week or two - start programming.
I know, that's not what one wants to do when starting with a new language. But for me, this approach has worked best over the last few years.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Error "Cannot change Visible in OnShow or OnHide" in Delphi IDE I have a very strange problem with the Delphi 2006 IDE.
If the IDE is minimized and the PC is locked, and I then return to the PC, unlock it and maximize the IDE, I sometimes get the error "Cannot change Visible in OnShow or OnHide".
If this happens, I have to kill the IDE with the taskmanager.
Is there anybody out there who has the same problem? And if possible a solution?
I suspect that some third party components are involved, but my coworkers never experience this error.
A: Yes I have noticed the same problem when using Terminal Servers. I use GExperts. If you can reproduce the problem you should report it on http://qc.codegear.com/
A: Is this connected to a specific form / forms that you have open at the moment?
If not, then it's probably not a 3rd party component that causes problems, but an expert you have added to the Delphi. Try disabling all experts. If that solves the problem, add them back one by one.
A: Another way, with no need to kill the IDE in Task Manager, is to display the Taskbar settings and turn off "show above other windows". Use this setting, click OK on the IDE error message, and then set it back. With some practice it could be useful in time...
Btw: colleague's IDE layout setting is "classic undocked" and he never had this problem.
But something like bugfix is still missing :/
(WinXP, TurboDelphi 2006)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: FTP File Upload with HTTP Proxy Is there a way to upload a file to a FTP server when behind an HTTP proxy ?
It seems that uploading a file is not supported behind an HTTP Proxy using .Net Webclient. (http://msdn.microsoft.com/en-us/library/system.net.ftpwebrequest.proxy.aspx).
If there is no workaround ? If not, do you know a good and free FTP library I can use ?
Edit: Unfortunately, I don't have any FTP proxy to connect to.
A: In active FTP mode, the server initiates a data connection to the client. If the client is behind an HTTP proxy, this obviously won't work. In passive FTP mode it is the client who initiates both the initial and the data connections. Since HTTP proxies can tunnel arbitrary outgoing TCP connections (using the CONNECT method), it should be possible to access an FTP server in passive mode via an HTTP proxy.
The FtpWebRequest seems to support passive mode. However, I don't understand why file download and directory listings are supported, whereas file upload, which also uses the same data connection, is not.
Have you confirmed that FtpWebRequest configured for passive mode does not work via an HTTP proxy through which directory listings/file download work just fine?
A: most FTP proxies do their thing on the connection, so if you had NO proxy, you do this:
*
*server: myftpserver.com
*user: me
*password: pwd
using an FTP proxy, you do:
*
*server: ftpproxy.mydomain.com
*user: me@myftpserver.com
*password: pwd
and it just works it out from there. I'm using this RIGHT THIS SECOND (trying to debug something) thru a squid proxy.
... but as you dont have an FTP proxy....
Do you have a SOCKS proxy? That might work, but I dont know if .NET can do it. Otherwise, to be honest, I think you are stuck! FTP is an "odd" protocol, when compared to HTTP, as it has a control channel (port 21) and a data channel (or more than one, on a random port), so going via proxies is.... fun to say the least!
A: Our Rebex FTP/SSL can use HTTP proxy. It's not free, though...
// initialize FTP client
Ftp client = new Ftp();
// setup proxy details
client.Proxy.ProxyType = FtpProxyType.HttpConnect;
client.Proxy.Host = proxyHostname;
client.Proxy.Port = proxyPort;
// add proxy username and password when needed
client.Proxy.UserName = proxyUsername;
client.Proxy.Password = proxyPassword;
// connect, login
client.Connect(hostname, port);
client.Login(username, password);
// do some work
// ...
// disconnect
client.Disconnect();
A: As of the .NET framework 4.8, the FtpWebRequest still cannot upload files over HTTP proxy.
If the specified proxy is an HTTP proxy, only the DownloadFile, ListDirectory, and ListDirectoryDetails commands are supported.
And it probably never will as FtpWebRequest is now deprecated. So you need to use a 3rd party FTP library.
For example with WinSCP .NET assembly, you can use:
// Setup session options
SessionOptions sessionOptions = new SessionOptions
{
Protocol = Protocol.Ftp,
HostName = "example.com",
UserName = "user",
Password = "mypassword",
};
// Configure proxy
sessionOptions.AddRawSettings("ProxyMethod", "3");
sessionOptions.AddRawSettings("ProxyHost", "proxy");
using (Session session = new Session())
{
// Connect
session.Open(sessionOptions);
// Upload file
string localFilePath = @"C:\path\file.txt";
string pathUpload = "/file.txt";
session.PutFiles(localFilePath, pathUpload).Check();
}
For the options for the SessionOptions.AddRawSettings, see raw settings.
Easier is to have WinSCP GUI generate C# FTP code template for you.
Note that WinSCP .NET assembly is not a native .NET library. It's rather a thin .NET wrapper over a console application.
(I'm the author of WinSCP)
A: One solution is to try Mono's implementation of FtpWebRequest. I had a look at its source code and it appears it'll be easy to modify so that all connections (control and data) are tunneled via an HTTP proxy.
You establish a TCP connection to your HTTP proxy instead of the actual FTP server. Then you send CONNECT myserver:21 HTTP/1.0 followed by two CRLFs (CRLF = \r\n). If the proxy needs authentication, you need to use HTTP/1.1 and also send a proxy authentication header with the credentials. Then you need to read the first line of the response. If it starts with "HTTP/1.0 200 " or "HTTP/1.1 200 ", then you (the rest of the code) can continue using the connection as though it's connected directly to the FTP server.
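A sketch of the request bytes described above, in Python (the function name and exact header set are my own assumptions — the answer uses HTTP/1.0 with no Host header, while this sketch uses HTTP/1.1, which requires one):

```python
import base64

def build_connect_request(host, port, proxy_user=None, proxy_password=None):
    """Build an HTTP CONNECT request for tunnelling through a proxy."""
    lines = [
        "CONNECT {0}:{1} HTTP/1.1".format(host, port),
        "Host: {0}:{1}".format(host, port),
    ]
    if proxy_user is not None:
        # Basic proxy authentication, as described in the answer
        creds = "{0}:{1}".format(proxy_user, proxy_password)
        token = base64.b64encode(creds.encode("ascii")).decode("ascii")
        lines.append("Proxy-Authorization: Basic " + token)
    # Request is terminated by an empty line
    return "\r\n".join(lines) + "\r\n\r\n"
```

After sending this and reading back a "200" status line, the rest of the code can treat the socket as a direct connection to the FTP server's port 21.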
A: If there's a way for you to upload a file via FTP without C# then it should also be possible in C#. Does uploading via browser or an FTP client work?
The one FTP library I like the most is .NET FTP Client library.
A: As Alexander says, HTTP proxies can proxy arbitrary traffic. What you need is an FTP Client that has support for using a HTTP Proxy. Alexander is also correct that this would only work in passive mode.
My employer sells such an FTP client, but it is a enterprise level tool that only comes as part of a very large system.
I'm certain that there are others available that would better fit your needs.
A: The standard way of uploading content to an ftp:// URL via an HTTP proxy would be using an HTTP PUT request. An HTTP proxy acts as an HTTP<->FTP gateway when dealing with ftp:// URLs, speaking HTTP to the requesting client and FTP to the requested FTP server.
At least the Squid HTTP Proxy supports PUT to ftp:// URLs, not sure what other proxies do.
The more common way is by abusing the CONNECT method to esablish tunnels over the proxy. But this is often not allowed due to security implications of allowing bidirectional tunnels over the proxy.
A: Hi, I had the same issue - the resolution was to create the proxy object and derive the default credentials - this should be fine provided your application is being run with a network account -
FtpWebRequest reqFTP = (FtpWebRequest)FtpWebRequest.Create(new Uri(uri));
System.Net.WebProxy proxy = System.Net.WebProxy.GetDefaultProxy();
proxy.Credentials = System.Net.CredentialCache.DefaultCredentials;
// set the ftpWebRequest proxy
reqFTP.Proxy = proxy;
This resolved the issue for me.
A: Damn these unfree applications and components!!!
Here is my open source C# code that can upload a file to FTP via an HTTP proxy.
public bool UploadFile(string localFilePath, string remoteDirectory)
{
var fileName = Path.GetFileName(localFilePath);
string content;
using (var reader = new StreamReader(localFilePath))
content = reader.ReadToEnd();
var proxyAuthB64Str = Convert.ToBase64String(Encoding.ASCII.GetBytes(_proxyUserName + ":" + _proxyPassword));
var sendStr = "PUT ftp://" + _ftpLogin + ":" + _ftpPassword
+ "@" + _ftpHost + remoteDirectory + fileName + " HTTP/1.1\n"
+ "Host: " + _ftpHost + "\n"
+ "User-Agent: Mozilla/4.0 (compatible; Eradicator; dotNetClient)\n" + "Proxy-Authorization: Basic " + proxyAuthB64Str + "\n"
+ "Content-Type: application/octet-stream\n"
+ "Content-Length: " + content.Length + "\n"
+ "Connection: close\n\n" + content;
var sendBytes = Encoding.ASCII.GetBytes(sendStr);
using (var proxySocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp))
{
proxySocket.Connect(_proxyHost, _proxyPort);
if (!proxySocket.Connected)
throw new SocketException();
proxySocket.Send(sendBytes);
const int recvSize = 65536;
var recvBytes = new byte[recvSize];
proxySocket.Receive(recvBytes, recvSize, SocketFlags.Partial);
var responseFirstLine = new string(Encoding.ASCII.GetChars(recvBytes)).Split("\n".ToCharArray()).Take(1).ElementAt(0);
var httpResponseCode = Regex.Replace(responseFirstLine, @"HTTP/1\.\d (\d+) (\w+)", "$1");
var httpResponseDescription = Regex.Replace(responseFirstLine, @"HTTP/1\.\d (\d+) (\w+)", "$2");
return httpResponseCode.StartsWith("2");
}
}
A: I don't really see the connection between an HTTP proxy and uploading to an FTP server. If you use the HTTP proxy class, that's for accessing HTTP resources through an HTTP proxy. FTP is another protocol, and FTP proxies use a different protocol.
A: I've just had the same problem.
My primary goal was to upload a file to an ftp. And I didn't care if my traffic would go through proxy or not.
So I've just set FTPWebRequest.Proxy property to null right after FTPWebRequest.Create(uri).
And it worked. Yes, I know this solution is not the greatest one. What's more, I don't get why it worked. But the goal is complete, anyway.
A: I'm not sure if all HTTP proxies work in the same way, but I managed to cheat ours by simply creating an HTTP request to access resource on URI ftp://user:pass@your.server.com/path.
Sadly, to create an instance of HttpWebRequest you should use WebRequest.Create. And if you do that you can't create an HTTP request for ftp:// schema.
So I used a bit of reflection to invoke a non-public constructor which does that:
var ctor = typeof(HttpWebRequest).GetConstructor(
BindingFlags.NonPublic | BindingFlags.Instance,
null,
new Type[] { typeof(Uri), typeof(ServicePoint) },
null);
var req = (WebRequest)ctor.Invoke(new object[] { new Uri("ftp://user:pass@host/test.txt"), null });
req.Proxy = new WebProxy("myproxy", 8080);
req.Method = WebRequestMethods.Http.Put;
using (var inStream = req.GetRequestStream())
{
var buffer = Encoding.ASCII.GetBytes("test upload");
inStream.Write(buffer, 0, buffer.Length);
}
using (req.GetResponse())
{
}
You can also use other methods like "DELETE" for other tasks.
In my case, it worked like a charm.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Sending mass emails programmatically I need to be able to periodically send email alerts to subscribed users. PHP seems to struggle with sending one message, so I'm looking for good alternatives.
Any language will do, if the implementation is fast enough. The amount of mails sent will eventually be in the thousands.
If purchasing licensed software can be avoided, so much the better.
A: Email queueing in php - the short version - Pear's Mail_Queue. I've been using this to send out 30-50,000+ mails per day or more (a few days a week) without problems for more than a year.
A: We have various applications writing to an email queue in a database table, and a .Net Windows Service polling that table to compose the emails and send out through our mail server.
We do up to 1000 emails per minute...
A: For Java, there is http://java.sun.com/products/javamail/
I've used it in an application. Pretty easy to set up and use.
In Ruby it is extremely simple, but I haven't used it so can't say anything about performance.
http://snippets.dzone.com/posts/show/2362
That said... I doubt PHP itself would be too slow to send mails. Perhaps you have some bottleneck in your application?
A: Would just like to mention that in my previous job we created a mass emailer solution in PHP that worked great, so I don't see why you would rule it out completely :)
A: smtplib in python is a doddle to set up and a very clean API.
A: One thing you could do is change the focus of the question to the underlying mail software. For instance, if I wanted to send a ton of emails, I would use any language to write them out in BSMTP format, which basically looks like simple SMTP client commands. Something like:
MAIL FROM:<me@example.com>
RCPT TO:<you@example.com>
DATA
From: Me <me@example.com>
To: You <you@example.com>
Subject: test email
This is the body of the test email I'm sending
.
Then I would feed the BSMTP files through exim:
cat *.bsmtp | exim -bS
This essentially removes the delay in sending the emails from you program and places the burden on exim (which as an MTA is better equipped to handle it).
Once you get the basics, there are a ton of things you can modify to make more efficient. For instance, if your emails aren't customized, you can pre-optimize by putting all recipients to the same domain into the same BSMTP file:
MAIL FROM:<me@example.com>
RCPT TO:<you@example.com>
RCPT TO:<him@example.com>
RCPT TO:<her@example.com>
RCPT TO:<them@example.com>
DATA
From: Me <me@example.com>
To: Me <me@example.com>
Subject: test email
This is the body of the test email I'm sending
.
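The per-domain batching described above can be sketched like this (Python; the function name and exact formatting are assumptions — real BSMTP output would also need SMTP's dot-stuffing and line-ending rules):

```python
from collections import defaultdict

def group_bsmtp(sender, recipients, message):
    """Emit one BSMTP envelope per recipient domain, so each message
    body is written once per domain instead of once per recipient."""
    by_domain = defaultdict(list)
    for addr in recipients:
        # group addresses by everything after the last '@'
        by_domain[addr.rsplit("@", 1)[1]].append(addr)
    envelopes = []
    for domain in sorted(by_domain):
        lines = ["MAIL FROM:<{0}>".format(sender)]
        lines.extend("RCPT TO:<{0}>".format(a) for a in by_domain[domain])
        lines.extend(["DATA", message, "."])
        envelopes.append("\n".join(lines))
    return envelopes
```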
You also then get a ton of wiggle room in how you optimize the MTA itself to send the mail (for instance, it'll automatically handle parallel deliveries, deliveries of email to the same mail server down the same TCP connection, etc).
With respect to doing it in code, we used to have a drop in perl library that helped us do this stuff. Essentially you fed it the emails and the addresses and it would fork off calls to the mail server as needed. It was configurable in how many parallel sessions it would allow, and it also monitored the load on the server and would throttle back if the load crossed a user-configurable threshold.
A: I use a program called e-Campaign that reads in CSV files. If you have to do it programmatically, then you may want to build in a waiting technique so you don't try to send 10,000 emails all at once. With e-Campaign you can choose how many emails to send at a time and put a break time between those batches. It's still very fast but doesn't cause overload issues with the server.
A: There is a dos based command line tool called blat that you can download and send emails very easily
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: State Service when using system.web.routing in WebForms I am using the System.Web.Routing assembly in a WebForms application. When running the application deployed on win2008/IIS7 I got the following message.
Session state can only be used when enableSessionState is set to true, either in a configuration file or in the Page directive. Please also make sure that System.Web.SessionStateModule or a custom session state module is included in the <configuration>\<system.web>\<httpModules> section in the application configuration.
This is only a problem when using a route I have configured. It is not a problem when directly navigating to an aspx page.
EnableSessionState has been turned on in both the web.config and the Page directive. I have added the Session entry to the httpModules section of the web.config.
This is not an issue when developing using Visual Studio on my workstation. It is only a problem when trying to run the application under IIS7 on Win 2008.
A: I'm having the same issue and I think I know what the problem is.
I'm trying to implement a FileNotFound page under the routing system (something I've yet to figure out is how to both give a FileNotFound page AND give a 404 response in the header)
What I've found, is that for some reason, the pages registered with BuildManager seem to be instanced without session context (the page was not actually requested, just instanced!).
At least that's what it seems to do.
I'm now less sure of my previous assertion. Apparently, requests for images also go through the routing system when they do not exist on physically. This causes an IRouteHandler to be called when the image path matches. I'm pretty sure the session object does not exist when an image is requested, so that may cause the problem when the page that was routed to tries to access it.
A: I think that what you are describing is similar to a question that I had.
It may be that your IIS7 is running in a different mode and is more like IIS6 than 7:
A couple of questions:
1. Is your mapping redirecting your request correctly?
2. When your request is mapped where does it go?
3. If you trace through this page what line of code is generating the error (does it even hit your code)?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I use continuous integration with an Eclipse project? I've been using maven2 and hudson for a while to do my continuous integration, but I find that Eclipse and Maven do not play well together. Sure there's a plugin, but it's cranky to mash the maven project into something that eclipse likes and the build times and unit test are too long.
I'm considering switching back to a pure Eclipse project with no Ant and no Maven involved. With the Infinitest plugin and possibly the JavaRebel agent, it would give me a very fast build-deploy-test cycle. However I'd still like to have automatic builds and testing as well, so:
How do I use continuous integration with an Eclipse project?
Is there a command line way to do it?
Is there a build server that already supports it natively?
A: Yeah, Eclipse Maven2 plugin is crap for now. But I would suggest for you to hang in there, there is a lot of benefit to using Maven2, so it actually balances out.
What we do is use Eclipse to develop and use Maven only to manage dependencies. Everything else is done by running "mvn" on the command line. We keep tests in their own integration test projects (...-itest) and have the continuous integration server do the build in 2 phases: first build the actual code, then build and run the -itest projects. (The first pass (pure build) is usually very quick, and the integration tests build (with running of tests) usually takes quite a while.)
Here's command line to make mvn run tests:
mvn -o verify -Ditest
Of course you need to define 'itest' profile in your parent pom:
Say, like this:
<profiles>
<profile>
<id>integration-test</id>
<activation>
<property>
<name>itest</name>
</property>
</activation>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<executions>
<execution>
<id>itest</id>
</execution>
</executions>
<configuration>
<testSourceDirectory>src/main</testSourceDirectory>
<testClassesDirectory>target/classes</testClassesDirectory>
<forkMode>once</forkMode>
</configuration>
</plugin>
</plugins>
</build>
</profile>
</profiles>
A: I managed to find a good solution. I simply got Infinitest (it can be installed from the Eclipse Marketplace) to work when using Maven and Eclipse.
In Eclipse->Project Properties->Java Build Path->Source uncheck the box called: "Allow output
folders for source folders"
That will enable your project to have more than one output path and Eclipse will then start reporting the test-classes as being part of the class path. Infinitest now finds it and starts running tests!
All I did was use the official Maven Eclipse plugin and add this to my POM
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.5</version>
<!-- <scope>provided</scope> -->
</dependency>
<dependency>
<groupId>org.infinitest</groupId>
<artifactId>infinitest</artifactId>
<scope>test</scope>
<version>4.0</version>
</dependency>
</dependencies>
A: I've had fair success using Eclipse + Ant with CruiseControl. If you want automation, you're probably going to need more than just pure Eclipse.
CruiseControl can automatically check out a copy of your project from source control, build it, run tests, and then update a web application with the results. It was pretty slick last I used it, but that was a long time ago now.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Site-Wide Filters in ASP.NET MVC What is the best way to enable site-wide filters in an ASP.NET MVC application?
To clarify, I have a form in my master page which has a drop-down list, the value of which filters every page on the site. Each individual page also has its own form elements. I'd really rather not have a form element across the whole page (a la vanilla WebForms) but am having difficulty knowing what to call when the site-wide filter in the header is changed.
A: I would probably use one small form on the master page, and on submit have its controller save the value from the drop-down list to the session.
Then every other controller can check value from session and do something with that, and also you can have many more forms on views also.
But again, maybe I didn't understood your question :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What are the differences between the different saving methods in Hibernate? Hibernate has a handful of methods that, one way or another, takes your object and puts it into the database. What are the differences between them, when to use which, and why isn't there just one intelligent method that knows when to use what?
The methods that I have identified thus far are:
*
*save()
*update()
*saveOrUpdate()
*saveOrUpdateCopy()
*merge()
*persist()
A: *
*See the Hibernate Forum for an explanation of the subtle differences between persist and save. It looks like the difference is the time the INSERT statement is ultimately executed. Since save does return the identifier, the INSERT statement has to be executed instantly regardless of the state of the transaction (which generally is a bad thing). Persist won't execute any statements outside of the currently running transaction just to assign the identifier.
Save/Persist both work on transient instances, ie instances which have no identifier assigned yet and as such are not saved in the DB.
*Update and Merge both work on detached instances, ie instances which have a corresponding entry in the DB but which are currently not attached to (or managed by) a Session. The difference between them are what happens to the instance which is passed to the function. update tries to reattach the instance, that means that there must be no other instance of the persistent entity attached to the Session right now, otherwise an exception is thrown. merge, however, just copies all values to a persistent instance in the Session (which will be loaded if it is not currently loaded). The input object is not changed. So merge is more general than update, but may use more resources.
A: Actually, the difference between Hibernate's save() and persist() methods depends on the generator class we are using.
If our generator class is assigned, then there is no difference between save() and persist(), because the assigned generator means that we, as programmers, must supply the primary key value to be saved in the database ourselves.
With any generator class other than assigned (suppose our generator class is increment), Hibernate itself assigns the primary key id value [with generators other than assigned, Hibernate takes care of the primary key id value]. In this case, calling either save() or persist() inserts the record into the database normally.
But the key point here is that save() returns the primary key id value generated by Hibernate, and we can capture it with
long s = session.save(k);
In this same case, persist() will never return any value back to the client.
A: I found a good example showing the differences between all hibernate save methods:
http://www.journaldev.com/3481/hibernate-session-merge-vs-update-save-saveorupdate-persist-example
In brief, according to the above link:
save()
*
*We can invoke this method outside a transaction. If we use it without a transaction and we have cascading between entities, then only the primary entity gets saved unless we flush the session.
*So, if there are other objects mapped from the primary object, they get saved at the time of committing the transaction or when we flush the session.
persist()
*
*It's similar to using save() inside a transaction, so it's safe and takes care of any cascaded objects.
saveOrUpdate()
*
*Can be used with or without a transaction, and just like save(), if it's used without a transaction, mapped entities won't be saved unless we flush the session.
*Results in insert or update queries based on the provided data. If the data is already present in the database, an update query is executed.
update()
*
*Hibernate update should be used where we know that we are only updating the entity information. This operation adds the entity object to persistent context and further changes are tracked and saved when transaction is committed.
*Hence, even after calling update, if we set any values in the entity, they will be updated when the transaction commits.
merge()
*
*Hibernate merge can be used to update existing values; however, this method creates a copy of the passed entity object and returns it. The returned object is part of the persistent context and tracked for any changes; the passed object is not tracked. This is the major difference between merge() and all the other methods.
Also for practical examples of all these, please refer to the link I mentioned above, it shows examples for all these different methods.
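The key contrast in the last bullet (merge() returns a tracked copy, while the object you pass in stays untracked) can be illustrated with a toy in-memory model. Note that this is not real Hibernate code: ToySession, Entity, and their methods are hypothetical stand-ins that only mimic the attach/copy semantics described above.

```java
import java.util.HashMap;
import java.util.Map;

class Entity {
    Long id;
    String title;
    Entity(Long id, String title) { this.id = id; this.title = title; }
}

class ToySession {
    private final Map<Long, Entity> managed = new HashMap<>();

    // update(): the passed instance itself becomes the managed one;
    // a second instance with the same id triggers the "non-unique" error.
    void update(Entity e) {
        if (managed.containsKey(e.id) && managed.get(e.id) != e) {
            throw new IllegalStateException("NonUniqueObjectException");
        }
        managed.put(e.id, e);
    }

    // merge(): copy state onto the managed copy ("loading" it if absent)
    // and return that copy; the passed instance stays detached.
    Entity merge(Entity detached) {
        Entity m = managed.computeIfAbsent(
            detached.id, id -> new Entity(id, null)); // stand-in for a DB load
        m.title = detached.title;
        return m;
    }

    boolean isManaged(Entity e) { return managed.get(e.id) == e; }
}

public class MergeVsUpdate {
    public static void main(String[] args) {
        ToySession s = new ToySession();
        Entity detached = new Entity(1L, "High-Performance Java Persistence");

        Entity merged = s.merge(detached);
        System.out.println(merged == detached);    // false: merge returns a copy
        System.out.println(s.isManaged(detached)); // false: the input stays detached
        System.out.println(s.isManaged(merged));   // true: the copy is tracked
    }
}
```

If you swap the merge call for `s.update(detached)`, the passed instance itself would become the managed one, which is exactly the difference the answer above describes.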
A: You should prefer the JPA methods most of the time, and the update for batch processing tasks.
A JPA or Hibernate entity can be in one of the following four states:
*
*Transient (New)
*Managed (Persistent)
*Detached
*Removed (Deleted)
The transition from one state to the other is done via the EntityManager or Session methods.
For instance, the JPA EntityManager provides the following entity state transition methods.
The Hibernate Session implements all the JPA EntityManager methods and provides some additional entity state transition methods like save, saveOrUpdate and update.
Persist
To change the state of an entity from Transient (New) to Managed (Persisted), we can use the persist method offered by the JPA EntityManager which is also inherited by the Hibernate Session.
The persist method triggers a PersistEvent which is handled by the DefaultPersistEventListener Hibernate event listener.
Therefore, when executing the following test case:
doInJPA(entityManager -> {
Book book = new Book()
.setIsbn("978-9730228236")
.setTitle("High-Performance Java Persistence")
.setAuthor("Vlad Mihalcea");
entityManager.persist(book);
LOGGER.info(
"Persisting the Book entity with the id: {}",
book.getId()
);
});
Hibernate generates the following SQL statements:
CALL NEXT VALUE FOR hibernate_sequence
-- Persisting the Book entity with the id: 1
INSERT INTO book (
author,
isbn,
title,
id
)
VALUES (
'Vlad Mihalcea',
'978-9730228236',
'High-Performance Java Persistence',
1
)
Notice that the id is assigned prior to attaching the Book entity to the current Persistence Context. This is needed because the managed entities are stored in a Map structure where the key is formed by the entity type and its identifier and the value is the entity reference. This is the reason why the JPA EntityManager and the Hibernate Session are known as the First-Level Cache.
When calling persist, the entity is only attached to the currently running Persistence Context, and the INSERT can be postponed until the flush is called.
The only exception is the IDENTITY generator, which triggers the INSERT right away since that's the only way it can get the entity identifier. For this reason, Hibernate cannot batch inserts for entities using the IDENTITY generator.
Save
The Hibernate-specific save method predates JPA and it's been available since the beginning of the Hibernate project.
The save method triggers a SaveOrUpdateEvent which is handled by the DefaultSaveOrUpdateEventListener Hibernate event listener. Therefore, the save method is equivalent to the update and saveOrUpdate methods.
To see how the save method works, consider the following test case:
doInJPA(entityManager -> {
Book book = new Book()
.setIsbn("978-9730228236")
.setTitle("High-Performance Java Persistence")
.setAuthor("Vlad Mihalcea");
Session session = entityManager.unwrap(Session.class);
Long id = (Long) session.save(book);
LOGGER.info(
"Saving the Book entity with the id: {}",
id
);
});
When running the test case above, Hibernate generates the following SQL statements:
CALL NEXT VALUE FOR hibernate_sequence
-- Saving the Book entity with the id: 1
INSERT INTO book (
author,
isbn,
title,
id
)
VALUES (
'Vlad Mihalcea',
'978-9730228236',
'High-Performance Java Persistence',
1
)
As you can see, the outcome is identical to the persist method call. However, unlike persist, the save method returns the entity identifier.
Update
The Hibernate-specific update method is meant to bypass the dirty checking mechanism and force an entity update at the flush time.
The update method triggers a SaveOrUpdateEvent which is handled by the DefaultSaveOrUpdateEventListener Hibernate event listener. Therefore, the update method is equivalent to the save and saveOrUpdate methods.
To see how the update method works consider the following example which persists a Book entity in one transaction, then it modifies it while the entity is in the detached state, and it forces the SQL UPDATE using the update method call.
Book _book = doInJPA(entityManager -> {
Book book = new Book()
.setIsbn("978-9730228236")
.setTitle("High-Performance Java Persistence")
.setAuthor("Vlad Mihalcea");
entityManager.persist(book);
return book;
});
LOGGER.info("Modifying the Book entity");
_book.setTitle(
"High-Performance Java Persistence, 2nd edition"
);
doInJPA(entityManager -> {
Session session = entityManager.unwrap(Session.class);
session.update(_book);
LOGGER.info("Updating the Book entity");
});
When executing the test case above, Hibernate generates the following SQL statements:
CALL NEXT VALUE FOR hibernate_sequence
INSERT INTO book (
author,
isbn,
title,
id
)
VALUES (
'Vlad Mihalcea',
'978-9730228236',
'High-Performance Java Persistence',
1
)
-- Modifying the Book entity
-- Updating the Book entity
UPDATE
book
SET
author = 'Vlad Mihalcea',
isbn = '978-9730228236',
title = 'High-Performance Java Persistence, 2nd edition'
WHERE
id = 1
Notice that the UPDATE is executed during the Persistence Context flush, right before commit, and that's why the Updating the Book entity message is logged first.
Using @SelectBeforeUpdate to avoid unnecessary updates
Now, the UPDATE is always going to be executed even if the entity was not changed while in the detached state. To prevent this, you can use the @SelectBeforeUpdate Hibernate annotation, which triggers a SELECT statement that fetches the loaded state, which is then used by the dirty checking mechanism.
So, if we annotate the Book entity with the @SelectBeforeUpdate annotation:
@Entity(name = "Book")
@Table(name = "book")
@SelectBeforeUpdate
public class Book {
//Code omitted for brevity
}
And execute the following test case:
Book _book = doInJPA(entityManager -> {
Book book = new Book()
.setIsbn("978-9730228236")
.setTitle("High-Performance Java Persistence")
.setAuthor("Vlad Mihalcea");
entityManager.persist(book);
return book;
});
doInJPA(entityManager -> {
Session session = entityManager.unwrap(Session.class);
session.update(_book);
});
Hibernate executes the following SQL statements:
INSERT INTO book (
author,
isbn,
title,
id
)
VALUES (
'Vlad Mihalcea',
'978-9730228236',
'High-Performance Java Persistence',
1
)
SELECT
b.id,
b.author AS author2_0_,
b.isbn AS isbn3_0_,
b.title AS title4_0_
FROM
book b
WHERE
b.id = 1
Notice that, this time, there is no UPDATE executed since the Hibernate dirty checking mechanism has detected that the entity was not modified.
SaveOrUpdate
The Hibernate-specific saveOrUpdate method is just an alias for save and update.
The saveOrUpdate method triggers a SaveOrUpdateEvent which is handled by the DefaultSaveOrUpdateEventListener Hibernate event listener. Therefore, the saveOrUpdate method is equivalent to the save and update methods.
Now, you can use saveOrUpdate when you want to persist an entity or to force an UPDATE as illustrated by the following example.
Book _book = doInJPA(entityManager -> {
Book book = new Book()
.setIsbn("978-9730228236")
.setTitle("High-Performance Java Persistence")
.setAuthor("Vlad Mihalcea");
Session session = entityManager.unwrap(Session.class);
session.saveOrUpdate(book);
return book;
});
_book.setTitle("High-Performance Java Persistence, 2nd edition");
doInJPA(entityManager -> {
Session session = entityManager.unwrap(Session.class);
session.saveOrUpdate(_book);
});
Beware of the NonUniqueObjectException
One problem that can occur with save, update, and saveOrUpdate is if the Persistence Context already contains an entity reference with the same id and of the same type as in the following example:
Book _book = doInJPA(entityManager -> {
Book book = new Book()
.setIsbn("978-9730228236")
.setTitle("High-Performance Java Persistence")
.setAuthor("Vlad Mihalcea");
Session session = entityManager.unwrap(Session.class);
session.saveOrUpdate(book);
return book;
});
_book.setTitle(
"High-Performance Java Persistence, 2nd edition"
);
try {
doInJPA(entityManager -> {
Book book = entityManager.find(
Book.class,
_book.getId()
);
Session session = entityManager.unwrap(Session.class);
session.saveOrUpdate(_book);
});
} catch (NonUniqueObjectException e) {
LOGGER.error(
"The Persistence Context cannot hold " +
"two representations of the same entity",
e
);
}
Now, when executing the test case above, Hibernate is going to throw a NonUniqueObjectException because the second EntityManager already contains a Book entity with the same identifier as the one we pass to saveOrUpdate, and the Persistence Context cannot hold two representations of the same entity.
org.hibernate.NonUniqueObjectException:
A different object with the same identifier value was already associated with the session : [com.vladmihalcea.book.hpjp.hibernate.pc.Book#1]
at org.hibernate.engine.internal.StatefulPersistenceContext.checkUniqueness(StatefulPersistenceContext.java:651)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.performUpdate(DefaultSaveOrUpdateEventListener.java:284)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.entityIsDetached(DefaultSaveOrUpdateEventListener.java:227)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.performSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:92)
at org.hibernate.event.internal.DefaultSaveOrUpdateEventListener.onSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:73)
at org.hibernate.internal.SessionImpl.fireSaveOrUpdate(SessionImpl.java:682)
at org.hibernate.internal.SessionImpl.saveOrUpdate(SessionImpl.java:674)
Merge
To avoid the NonUniqueObjectException, you need to use the merge method offered by the JPA EntityManager and inherited by the Hibernate Session as well.
The merge fetches a new entity snapshot from the database if there is no entity reference found in the Persistence Context, and it copies the state of the detached entity passed to the merge method.
The merge method triggers a MergeEvent which is handled by the DefaultMergeEventListener Hibernate event listener.
To see how the merge method works consider the following example which persists a Book entity in one transaction, then it modifies it while the entity is in the detached state, and passes the detached entity to merge in a subsequence Persistence Context.
Book _book = doInJPA(entityManager -> {
Book book = new Book()
.setIsbn("978-9730228236")
.setTitle("High-Performance Java Persistence")
.setAuthor("Vlad Mihalcea");
entityManager.persist(book);
return book;
});
LOGGER.info("Modifying the Book entity");
_book.setTitle(
"High-Performance Java Persistence, 2nd edition"
);
doInJPA(entityManager -> {
Book book = entityManager.merge(_book);
LOGGER.info("Merging the Book entity");
assertFalse(book == _book);
});
When running the test case above, Hibernate executed the following SQL statements:
INSERT INTO book (
author,
isbn,
title,
id
)
VALUES (
'Vlad Mihalcea',
'978-9730228236',
'High-Performance Java Persistence',
1
)
-- Modifying the Book entity
SELECT
b.id,
b.author AS author2_0_,
b.isbn AS isbn3_0_,
b.title AS title4_0_
FROM
book b
WHERE
b.id = 1
-- Merging the Book entity
UPDATE
book
SET
author = 'Vlad Mihalcea',
isbn = '978-9730228236',
title = 'High-Performance Java Persistence, 2nd edition'
WHERE
id = 1
Notice that the entity reference returned by merge is different than the detached one we passed to the merge method.
Now, although you should prefer using JPA merge when copying the detached entity state, the extra SELECT can be problematic when executing a batch processing task.
For this reason, you should prefer using update when you are sure that there is no entity reference already attached to the currently running Persistence Context and that the detached entity has been modified.
Conclusion
To persist an entity, you should use the JPA persist method. To copy the detached entity state, merge should be preferred. The update method is useful for batch processing tasks only. The save and saveOrUpdate are just aliases to update and you should not probably use them at all.
Some developers call save even when the entity is already managed, but this is a mistake and triggers a redundant event since, for managed entities, the UPDATE is automatically handled at Persistence Context flush time.
A: Be aware that if you call update on a detached object, an update will always be done in the database, whether you changed the object or not. If that is not what you want, you should use Session.lock() with LockMode.NONE.
You should call update only if the object was changed outside the scope of your current session (i.e. while in the detached state).
A: Here's my understanding of the methods. Mainly these are based on the API though as I don't use all of these in practice.
saveOrUpdate
Calls either save or update depending on some checks. E.g. if no identifier exists, save is called. Otherwise update is called.
save
Persists an entity. Will assign an identifier if one doesn't exist. If one does, it's essentially doing an update. Returns the generated ID of the entity.
update
Attempts to persist the entity using an existing identifier. If no identifier exists, I believe an exception is thrown.
saveOrUpdateCopy
This is deprecated and should no longer be used. Instead there is...
merge
Now this is where my knowledge starts to falter. The important thing here is the difference between transient, detached and persistent entities. For more info on the object states, take a look here. With save & update, you are dealing with persistent objects. They are linked to a Session so Hibernate knows what has changed. But when you have a transient object, there is no session involved. In these cases you need to use merge for updates and persist for saving.
persist
As mentioned above, this is used on transient objects. It does not return the generated ID.
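The dispatch behavior described under saveOrUpdate above can be sketched as a one-line rule. This is a hypothetical simplification, not Hibernate's actual implementation, which performs additional checks (for example on version properties):

```java
public class SaveOrUpdateSketch {
    // Simplified dispatch rule: no identifier -> save, identifier present -> update.
    static String dispatch(Long id) {
        return (id == null) ? "save" : "update";
    }

    public static void main(String[] args) {
        System.out.println(dispatch(null)); // save
        System.out.println(dispatch(42L));  // update
    }
}
```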
A: This link explains in good manner :
http://www.stevideter.com/2008/12/07/saveorupdate-versus-merge-in-hibernate/
We all have those problems that we encounter just infrequently enough that when we see them again, we know we’ve solved this, but can’t remember how.
The NonUniqueObjectException thrown when using Session.saveOrUpdate() in Hibernate is one of mine. I’ll be adding new functionality to a complex application. All my unit tests work fine. Then in testing the UI, trying to save an object, I start getting an exception with the message “a different object with the same identifier value was already associated with the session.” Here’s some example code from Java Persistence with Hibernate.
Session session = sessionFactory1.openSession();
Transaction tx = session.beginTransaction();
Item item = (Item) session.get(Item.class, new Long(1234));
tx.commit();
session.close(); // end of first session, item is detached
item.getId(); // The database identity is "1234"
item.setDescription("my new description");
Session session2 = sessionFactory.openSession();
Transaction tx2 = session2.beginTransaction();
Item item2 = (Item) session2.get(Item.class, new Long(1234));
session2.update(item); // Throws NonUniqueObjectException
tx2.commit();
session2.close();
To understand the cause of this exception, it’s important to understand detached objects and what happens when you call saveOrUpdate() (or just update()) on a detached object.
When we close an individual Hibernate Session, the persistent objects we are working with are detached. This means the data is still in the application’s memory, but Hibernate is no longer responsible for tracking changes to the objects.
If we then modify our detached object and want to update it, we have to reattach the object. During that reattachment process, Hibernate will check to see if there are any other copies of the same object. If it finds any, it has to tell us it doesn’t know what the “real” copy is any more. Perhaps other changes were made to those other copies that we expect to be saved, but Hibernate doesn’t know about them, because it wasn’t managing them at the time.
Rather than save possibly bad data, Hibernate tells us about the problem via the NonUniqueObjectException.
So what are we to do? In Hibernate 3, we have merge() (in Hibernate 2, use saveOrUpdateCopy()). This method will force Hibernate to copy any changes from other detached instances onto the instance you want to save, and thus merges all the changes in memory before the save.
Session session = sessionFactory1.openSession();
Transaction tx = session.beginTransaction();
Item item = (Item) session.get(Item.class, new Long(1234));
tx.commit();
session.close(); // end of first session, item is detached
item.getId(); // The database identity is "1234"
item.setDescription("my new description");
Session session2 = sessionFactory.openSession();
Transaction tx2 = session2.beginTransaction();
Item item2 = (Item) session2.get(Item.class, new Long(1234));
Item item3 = session2.merge(item); // Success!
tx2.commit();
session2.close();
It’s important to note that merge returns a reference to the newly updated version of the instance. It isn’t reattaching item to the Session. If you test for instance equality (item == item3), you’ll find it returns false in this case. You will probably want to work with item3 from this point forward.
It’s also important to note that the Java Persistence API (JPA) doesn’t have a concept of detached and reattached objects, and uses EntityManager.persist() and EntityManager.merge().
I’ve found in general that when using Hibernate, saveOrUpdate() is usually sufficient for my needs. I usually only need to use merge when I have objects that can have references to objects of the same type. Most recently, the cause of the exception was in the code validating that the reference wasn’t recursive. I was loading the same object into my session as part of the validation, causing the error.
Where have you encountered this problem? Did merge work for you or did you need another solution? Do you prefer to always use merge, or prefer to use it only as needed for specific cases?
A: ╔══════════════╦═══════════════════════════════╦════════════════════════════════╗
║ METHOD ║ TRANSIENT ║ DETACHED ║
╠══════════════╬═══════════════════════════════╬════════════════════════════════╣
║ ║ sets id if doesn't ║ sets new id even if object ║
║ save() ║ exist, persists to db, ║ already has it, persists ║
║ ║ returns attached object ║ to DB, returns attached object ║
╠══════════════╬═══════════════════════════════╬════════════════════════════════╣
║ ║ sets id on object ║ throws ║
║ persist() ║ persists object to DB ║ PersistenceException ║
║ ║ ║ ║
╠══════════════╬═══════════════════════════════╬════════════════════════════════╣
║ ║ ║ ║
║ update() ║ Exception ║ persists and reattaches ║
║ ║ ║ ║
╠══════════════╬═══════════════════════════════╬════════════════════════════════╣
║ ║ copy the state of object in ║ copy the state of obj in ║
║ merge() ║ DB, doesn't attach it, ║ DB, doesn't attach it, ║
║ ║ returns attached object ║ returns attached object ║
╠══════════════╬═══════════════════════════════╬════════════════════════════════╣
║ ║ ║ ║
║saveOrUpdate()║ as save() ║ as update() ║
║ ║ ║ ║
╚══════════════╩═══════════════════════════════╩════════════════════════════════╝
A: None of the following answers are right.
All these methods just seem to be alike, but in practice do absolutely different things.
It is hard to give short comments; better to give a link to the full documentation for these methods:
http://docs.jboss.org/hibernate/core/3.6/reference/en-US/html/objectstate.html
A: None of the answers above are complete, although Leo Theobald's answer looks the closest.
The basic point is how hibernate is dealing with states of entities and how it handles them when there is a state change. Everything must be seen with respect to flushes and commits as well, which everyone seems to have ignored completely.
NEVER USE THE SAVE METHOD of HIBERNATE. FORGET THAT IT EVEN EXISTS IN HIBERNATE!
Persist
As everyone explained, persist basically transitions an entity from the "Transient" state to the "Managed" state. At this point, a flush or a commit can create an insert statement. But the entity will still remain in the "Managed" state. That doesn't change with a flush.
At this point, if you "persist" again there will be no change. And there won't be any more saves if we try to persist a persisted entity.
The fun begins when we try to evict the entity.
An evict is a special function of Hibernate which transitions the entity from "Managed" to "Detached". We cannot call persist on a detached entity. If we do, then Hibernate raises an exception and the entire transaction gets rolled back on commit.
Merge vs Update
These are two interesting functions that do different things when used in different ways. Both of them try to transition the entity from the "Detached" state to the "Managed" state, but they do it differently.
Understand that "Detached" means a kind of offline state, and "Managed" means an online state.
Observe the code below:
Session ses1 = sessionFactory.openSession();
Transaction tx1 = ses1.beginTransaction();
HibEntity entity = getHibEntity();
ses1.persist(entity);
ses1.evict(entity);
ses1.merge(entity);
ses1.delete(entity);
tx1.commit();
When you do this, what do you think will happen?
If you said this will raise an exception, then you are correct. It raises an exception because merge operated on the entity object, which is in the detached state, but merge doesn't alter the state of the object passed to it.
Behind the scenes, merge issues a select query and basically returns a copy of the entity which is in the attached state. Observe the code below:
Session ses1 = sessionFactory.openSession();
Transaction tx1 = ses1.beginTransaction();
HibEntity entity = getHibEntity();
ses1.persist(entity);
ses1.evict(entity);
HibEntity copied = (HibEntity)ses1.merge(entity);
ses1.delete(copied);
tx1.commit();
The above sample works because merge has brought a new entity into the context, which is in the persisted state.
The same works fine with update because update doesn't actually bring in a copy of the entity the way merge does.
Session ses1 = sessionFactory.openSession();
Transaction tx1 = ses1.beginTransaction();
HibEntity entity = getHibEntity();
ses1.persist(entity);
ses1.evict(entity);
ses1.update(entity);
ses1.delete(entity);
tx1.commit();
At the same time, in the debug trace we can see that update, unlike merge, hasn't issued a select query.
delete
In the above examples I used delete without explaining it. Delete basically transitions the entity from the "managed" state to the "removed" state, and when flushed or committed it issues a delete command to the store.
However, it is possible to bring the entity back to the "managed" state from the "removed" state using the persist method.
Hope the above explanation clarified any doubts.
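The transitions this answer walks through can be summarized as a small state machine. Again, this is an illustrative toy and not Hibernate code; the method names simply mirror the Session operations discussed above.

```java
// Entity lifecycle states as described in this answer.
enum State { TRANSIENT, MANAGED, DETACHED, REMOVED }

public class LifecycleSketch {
    // persist: transient or removed -> managed; detached -> exception;
    // persisting an already-managed entity is a no-op.
    static State persist(State s) {
        if (s == State.TRANSIENT || s == State.REMOVED) return State.MANAGED;
        if (s == State.DETACHED) {
            throw new IllegalStateException("PersistenceException");
        }
        return s;
    }

    // evict: managed -> detached.
    static State evict(State s)  { return s == State.MANAGED ? State.DETACHED : s; }

    // delete: managed -> removed (DELETE issued at flush/commit).
    static State delete(State s) { return s == State.MANAGED ? State.REMOVED : s; }

    public static void main(String[] args) {
        State s = State.TRANSIENT;
        s = persist(s); // MANAGED
        s = delete(s);  // REMOVED
        s = persist(s); // back to MANAGED, as the answer notes
        System.out.println(s); // prints MANAGED
    }
}
```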
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "212"
} |
Q: Access to remote computer's MSMQ gives "Remote computer is not available" We have a windows application that runs on a server and accesses 4 other servers (all of them are members in the domain) to get the messages in each of their private queues. We've just installed a new server, and for some reason when the application tries to access that computer, it gets a "Remote computer is not available" message.
The application accesses the other servers with a user who is an admin domain user.
Has anyone encountered such a problem, or have a clue as to what could be causing it?
A: Probably way too late for this thread, but I found the answer to this here: http://blogs.msdn.com/johnbreakwell/archive/2008/07/10/getting-msmq-messages-out-of-windows-server-2008.aspx
A: Have you fired up a packet capture tool such as Microsoft Network Monitor or Wireshark and looked at the traffic going to and from the system that gets the error? That's often the surest way to see what is going on without a lot of time-consuming experimentation.
I would set up a capture from the box getting the error, run until you get the error, and immediately stop the capture. Set a filter to look at just the traffic to and from that system. If you can't install the capture tool on the box itself, make sure you place it on the network in a way that it will still be able to see all the traffic from that box. (I.e. don't put it on an adjacent port on a switch, because the switch's job is to insulate each port's traffic from the others.)
IF you see no actual traffic being sent to the remote server in question, then you probably have a naming/directory/DNS type issue. I.e. the local server can't figure out where the other one is. Since this is a Windows domain type situation, I'd start looking in Active Directory for clues.
IF you see traffic going out to the remote server, but you never see even one packet coming back from it before the failure, then you probably have a firewall issue either on the remote box or on the route from here to there.
IF you see traffic going back and forth to the remote server but then it stops, you'll need to dig into those packets and see what low-level error codes might be present in the traffic. Both NETMON and Wireshark have good decodes for the Microsoft protocols so you should be able to see exactly what is happening. If you're not familiar with these protocols, you might want to first capture a correctly working connection to one of the other servers so you can compare.
A: Could it be a firewall issue?
http://support.microsoft.com/kb/183293
A: The problem is finally solved, and it was solved accidentally: apparently there was some confusion in the DNS server, and the cache server had difficulty accessing the correct server. Our webmaster corrected all the server names, and that also solved the MSMQ problem.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do you force constructor signatures and static methods? Is there a way of forcing a (child) class to have constructors with particular signatures or particular static methods in C# or Java?
Obviously you can't use interfaces for this, and I know that it will have limited usage. One instance in which I do find it useful is when you want to enforce some design guideline, for example:
Exceptions
They should all have the four canonical constructors, but there is no way to enforce it. You have to rely on a tool like FxCop (C# case) to catch these.
Operators
There is no contract that specifies that two classes can be summed (with operator+ in C#)
Is there any design pattern to work around this limitation?
What construct could be added to the language to overcome this limitation in future versions of C# or Java?
A: Not enforced at compile-time, but I have spent a lot of time looking at similar issues; a generic-enabled maths library and an efficient (non-default) ctor API are both available in MiscUtil. However, these are only checked at first usage at runtime. In reality this isn't a big problem - your unit tests should find any missing operator / ctor very quickly. But it works, and very quickly...
A: You could use the Factory pattern.
interface Fruit{}
interface FruitFactory<F extends Fruit>{
F newFruit(String color,double weight);
Cocktail mixFruits(F f1,F f2);
}
You could then create classes for any type of Fruit
class Apple implements Fruit{}
class AppleFactory implements FruitFactory<Apple>{
public Apple newFruit(String color, double weight){
// create an instance
}
public Cocktail mixFruits(Apple f1,Apple f2){
// implementation
}
}
This does not enforce that you can't create an instance in another way than by using the factory, but at least you can specify which methods you would require from a factory.
A: Force Constructors
You can't. The closest that you can come is make the default constructor private and then provide a constructor that has parameters. But it still has loopholes.
class Base
{
private Base() { }
public Base(int x) {}
}
class Derived : Base
{
//public Derived() { } won't compile because Base() is private
public Derived(int x) :base(x) {}
public Derived() : base (0) {} // still works because you are giving a value to base
}
A: Using generics you can force a type argument to have a parameterless constructor - but that's about the limit of it.
Other than in generics, it would be tricky to actually use these restrictions even if they existed, but it could sometimes be useful for type parameters/arguments. Allowing static members in interfaces (or possibly static interfaces) could likewise help with the "generic numeric operator" issue.
I wrote about this a little while ago when facing a similar problem.
A: The problem in the language is that static methods are really second class citizens (A constructor is also a kind of static method, because you don't need an instance to start with).
Static methods are just global methods with a namespace, they don't really "belong" to the class they are defined in (OK, they have access to private (static) methods in the class, but that's about it).
The problem on the compiler level is that without a class instance you don't have a virtual function table, which means you cannot use all the inheritance and polymorphism stuff.
I think one could make it work by adding a global/static virtual table for each class but if it hasn't been done yet, there's probably a good reason for it.
A: Here is how I would solve it if I were a language designer.
Allow interfaces to include static methods, operators and constructors.
interface IFoo
{
IFoo(int gottaHaveThis);
static Bar();
}
interface ISummable
{
operator+(ISummable a, ISummable b);
}
Don't allow the corresponding new IFoo(someInt) or IFoo.Bar()
Allow constructors to be inherited (just like static methods).
class Foo: IFoo
{
Foo(int gottaHaveThis) {};
static Bar() {};
}
class SonOfFoo: Foo
{
// SonOfFoo(int gottaHaveThis): base(gottaHaveThis); is implicitly defined
}
class DaughterOfFoo: Foo
{
DaughterOfFoo (int gottaHaveThis) {};
}
Allow the programmer to cast to interfaces and check, if necessary, at run time if the cast is semantically valid even if the class does not specify explicitly.
ISummable PassedFirstGrade = (ISummable) 10;
A: Unfortunately you can't in C#. Here is a stab at it, though:
class Program
{
static void Main(string[] args)
{
Console.WriteLine(Foo.Instance.GetHelloWorld());
Console.ReadLine();
}
}
public class Foo : FooStaticContract<FooFactory>
{
public Foo() // Non-static ctor.
{
}
internal Foo(bool st) // Overloaded, parameter not used.
{
}
public override string GetHelloWorld()
{
return "Hello World";
}
}
public class FooFactory : IStaticContractFactory<Foo>
{
#region StaticContractFactory<Foo> Members
public Foo CreateInstance()
{
return new Foo(true); // Call static ctor.
}
#endregion
}
public interface IStaticContractFactory<T>
{
T CreateInstance();
}
public abstract class StaticContract<T, Factory>
where Factory : IStaticContractFactory<T>, new()
where T : class
{
private static Factory _factory = new Factory();
private static T _instance;
/// <summary>
/// Gets an instance of this class.
/// </summary>
public static T Instance
{
get
{
// Scary.
if (Interlocked.CompareExchange(ref _instance, null, null) == null)
{
T instance = _factory.CreateInstance();
Interlocked.CompareExchange(ref _instance, instance, null);
}
return _instance;
}
}
}
public abstract class FooStaticContract<Factory>
: StaticContract<Foo, Factory>
where Factory : IStaticContractFactory<Foo>, new()
{
public abstract string GetHelloWorld();
}
A: Well, I know from the wording of your question you are looking for compile-time enforcement. Unless someone else has a brilliant suggestion/hack that will allow you to do this the way you are implying the compiler should, I would suggest that you could write a custom MSBuild task that did this. An AOP framework like PostSharp might help you accomplish this at compile time by piggybacking on its build task model.
But what is wrong with code analysis or run-time enforcement? Maybe it's just preference and I respect that, but I personally have no issues with having CA/FxCop check these things... and if you really want to force downstream implementers of your classes to use particular constructor signatures, you can always add run-time checks in the base class constructor using reflection.
Richard
A: I'm unsure as to what you are trying to achieve, can you please elaborate? The only reason for forcing a specific constructor or static method across different classes is to try and execute them dynamically at run time, is this correct?
A constructor is intended to be specific to a particular class, as it is intended to initialise the specific needs of the class. As I understand it, the reason you would want to enforce something in a class hierarchy or interface, is that it is an activity/operation relevant to the process being performed, but may vary in different circumstances. I believe this is the intended benefit of polymorphism, which you can't achieve using static methods.
It would also require knowing the specific type of the class you wanted to call the static method for, which would break all of the polymorphic hiding of differences in behaviour that the interface or abstract class is trying to achieve.
If the behaviour being represented by the constructor is intended to be part of the contract between the client of these classes then I would add it explicitly to the interface.
If a hierarchy of classes have similar initialisation requirements then I would use an abstract base class, however it should be up to the inheriting classes how they find the parameter for that constructor, which may include exposing a similar or identical constructor.
If this is intended to allow you to create different instances at runtime, then I would recommend using a static method on an abstract base class which knows the different needs of all of the concrete classes (you could use dependency injection for this).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: Problem with propset svn:ignore - possibly Vista related As I understand it, the command to ignore the content of a directory using SVN is this:
svn propset svn:ignore "*" tmp/
This should set the ignore property on the content of the tmp directory, right? In other words, the wildcard is set to be the ignore value on the tmp directory. Trouble is, here's what is happening on my Windows box:
> svn propset svn:ignore "*" ./tmp
property 'svn:ignore' set on 'app'
property 'svn:ignore' set on 'config'
property 'svn:ignore' set on 'db'
property 'svn:ignore' set on 'doc'
property 'svn:ignore' set on 'lib'
property 'svn:ignore' set on 'log'
property 'svn:ignore' set on 'nbproject'
property 'svn:ignore' set on 'public'
[etc...]
That's not right. Am I doing something wrong (or perhaps going insane), or is my svn on Windows broken?
Some notes:
*
*The machine is running Windows Vista SP1
*Setting this property via Tortoise works perfectly.
*I'm using the Collabnet binaries for Windows:
> svn --version
svn, version 1.5.2 (r32768)
compiled Aug 28 2008, 19:05:34
Update: I've have just tried this on a Windows XP machine and it works as expected. So either this is a Vista specific issue, or there is a problem with my Vista configuration. Is anyone else able to reproduce this problem on Vista? I have just spotted that Vista isn't listed as one of the supported platforms on the CollabNet downloads page.
A: It looks like Microsoft, in their infinite wisdom, have changed the behavior of wildcard expansion in Windows Vista:
So instead of an escaped wildcard being passed in, it gets expanded:
Under Win 95, 98, 2000, XP, the application runs as expected: it does wildcard expansion when parameters are like «*.txt» and it does NOT when parameters are like «"*.txt"». Under Windows Vista, wildcard expansion takes place always, or, said otherwise, double quotation marks DOES NOT suppress it.
There is further discussion on this issue on the Collabnet forum.
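Since the expansion happens to the arguments themselves, one way to sidestep it entirely is to never put a bare * on the command line: write the property value to a file and pass it with -F. A minimal sketch (the svn invocation is commented out and assumes a Subversion client on the PATH; the file name .svnignore-value is just an illustrative choice):

```shell
# Write the literal property value to a file; no wildcard ever appears
# on the command line, so Vista's argument expansion can't touch it.
printf '*\n' > .svnignore-value

# Then apply it (assumption: an svn client is installed):
#   svn propset svn:ignore -F .svnignore-value tmp

cat .svnignore-value   # the value that would be set: a single *
```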
A: The command should be working as you expect.
The * is getting globbed, which it shouldn't be doing. So, you're running:
svn propset svn:ignore [value] app config db doc lib log nbproject public ... tmp
(since app was the first folder affected, I'm guessing there's another folder before it).
2 things you can try:
*
*Specify a list file: svn propset svn:ignore tmp -F .svnignore
*Just specify the path: svn propedit svn:ignore tmp. This should open your default text editor (if configured) to allow you to write and save the list.
Reply to comment
Since you're now attempting to correct the setting, propedit and propdel would work fine -- especially if you have other changes within the directory.
But, if you don't have any other changes to worry about (check svn st), it'll be faster using svn revert -R and svn propset.
A: This doesn't answer your svn question, but why are you trying to ignore all the contents of a directory? It seems to me that if you want a temporary directory at some point in the build, you should make the directory as part of the build instead of it being there from the repo.
Are you trying to ignore it because it's already there and you can't delete it?
Anyway, from my unix command line, this worked for me to ignore untracked file in a directory called tmp:
$ svn --version
svn, version 1.5.1 (r32289)
compiled Aug 28 2008, 10:00:12
$ svn propset svn:ignore '*' tmp
Is Windows horking your quoting?
A: It sounds to me like the svn.exe binary compiled for windows is doing built-in globbing, which is something that it wouldn't normally do on a unix build because the unix shell is expected to do globbing while constructing the command line. I would consider that unexpected behaviour, especially since you can't seem to work around the globbing.
As others have pointed out, you can supply the * using the -F option or interactively in a text editor.
However, I think you may not be going about this in the easiest way. For ignoring an entire subdirectory, I would do something like this:
svn propset svn:ignore tmp .
This sets the svn:ignore property on . (the current directory, the parent of tmp/) that tells it to ignore the tmp subdirectory and everything underneath it.
A: Which version of subversion are you using?
I tried 1.5.2 on Windows, and it only changed the property on the tmp directory:
[C:\Temp\temp] :svn propset svn:ignore "*" tmp/
property 'svn:ignore' set on 'tmp'
and:
[C:\Temp\temp] :svn proplist *
svn: Skipping argument: '.svn' ends in a reserved name
Properties on 'tmp':
svn:ignore
A: For a one-liner to use in shell scripts using the "-F" alternative, simply try this:
echo "*" > .svnignore && svn propset svn:ignore <path> -F .svnignore && rm .svnignore
A: Try it without the trailing slash. Also, the tmp directory itself has to be added to the repository.
A: It's reasonable to use a GUI SVN client (unless you're a masochist!). If you're on Windows TortoiseSVN should be your first port of call. Right click on the file you want to ignore then click "TortoiseSVN -> Properties". In the properties dialog you can ignore the entire directory by clicking on the drop down arrow for "Name" and selecting "svn:ignore". Then in the values box just type "*" for all. This is all without the quotes of course.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Google Web Toolkit Should I use the GWT for a struts web application?
A: I think you'll find that GWT complements something like Stripes much more than it does struts. I don't want to start some kind of flame/development platform war, but in my opinion Stripes is a far superior and easier-to-use framework than struts; we migrated from struts a year ago and have never looked back.
That said, yes you can integrate GWT with struts without too many hassles; in fact if you do your entire UI in GWT you don't even need struts, you can just RPC straight to your java back end. If you just want to make small components then GWT will work well for that too, and I guess you could (shudder) pass it your struts forms if you wanted to.
A: I don't think GWT would integrate so well into Struts - GWT is more like a framework which you would use instead of Struts.
You could try something like Ext JS, which (being a pure JavaScript library) I think would be more likely to integrate into Struts.
A: Struts is a solid framework for everything web-related, GWT is a framework for web application-like behaviour... So, make GWT modules of the parts requiring heavy AJAX functionality, and handle the overall site structural business with Struts, and you'll do Just Fine.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Compiled, strongly-typed alternative to .NET? Is there a programming language suitable for building web applications, that is compiled, strongly-typed, and isn't ASP.NET?
I thought of using Mono (http://www.mono-project.com/), but I wonder if there are any other alternatives.
(If the language and framework are open-source, that's a big plus!)
A: Not sure what you mean by "compiled". What about Java?
Java has a lot of frameworks for web development. For example Tapestry:
Tapestry is an open-source framework
for creating dynamic, robust, highly
scalable web applications in Java.
A: Java meets all the criteria
A: If you mean compiled to win32 code, and not to an intermediate language, try Delphi.
A: The spring framework and the java language.
http://www.springframework.org/ opensource and extensively used in the industry.
In particular checkout spring-mvc and spring web-flow modules which make creating web projects a lot simpler.
A:
Is there a programming language suitable for building web applications, that is compiled, strongly-typed, and isn't ASP.NET?
Just for the sake of completeness: In theory, one could even use Ada to satisfy those requirements:
AdaCGI is an Ada 95 interface to the "Common Gateway Interface" (CGI). AdaCGI makes it easier to create Ada programs that can be invoked by World Wide Web (WWW) HTTP servers using the standard CGI interface. Using it, you can create Ada programs that perform queries or other processing by request from a WWW user. AdaCGI was formerly named "Package CGI". AdaCGI is open source/free software, and is released using the LGPL ("Lesser General Public License") license.
Ada for the Web: This website is dedicated to promoting the use of Ada95 as a major language for programming Web and Internet applets and applications, servers and browsers.
There is also the Ada "aws" package available at http://libre.adacore.com/libre/tools/aws/
First of all, AWS stands for Ada Web Server but it is more than just another Web server…
AWS is a complete framework to develop Web based applications. The main part of the framework is the embedded Web server. This small yet powerful Web server can be embedded into your application so your application will be able to talk with a standard Web browser like Microsoft Internet Explorer or Netscape Communicator for example. Around this Web server a lot of services have been developed.
The framework includes:
* A Web parameters module. This module takes care of retrieving the forms or URL parameters and to build an associative table for easy access.
* A session server, this is a very important module to be able to keep client’s data from page to page.
* Support SOAP to develop Web Services.
* A tool (based on ASIS) to generate a WSDL document from an Ada spec.
* A tool to generate Web Services stubs/skeletons from a WSDL document.
* A template parser, this module makes it possible to completely separate the Web design from the code. No more scripting into your Web page. This template engine is amazingly fast due to its concurrent cached compiled templates support.
* An Ajax runtime based on templates hiding javascript.
* Support for Secure Sockets (HTTPS/SSL), this is based on OpenSSL library.
* Support for large servers using dispatchers based on URI, request methods, timers.
* Support for virtual hosting (dispatchers based on the host name).
* Support for server push.
* A directory browser ready to be used in any application.
* A status page to get many information about the current AWS server.
* A log module. Log files keep information about all resources requested to the server.
* Hotplug modules which can be loaded/unloaded dynamically to add specific features to a server.
* A light communication API to exchange data between applications using the HTTP protocol.
* A configuration API to tune/change the server parameters without recompilation.
* A client API to retrieve any Web page from a Web site.
* A Web Page service to build a simple static page server.
* Support for SMTP, LDAP and Jabber protocols.
* And more…
A server built with AWS is very easy to deploy. You just need to copy and launch a single executable. There is no Web server installation and configuration steps to do.
See http://www.adacore.com/wp-content/files/auto_update/aws-docs/aws.html for the aws documentation
http://en.wikibooks.org/wiki/Ada_Programming/Libraries/Web
A: What exactly are you asking for?
Are you asking for something compiled, or something performant?
Are you asking for something strongly typed, or are you asking for something that will easily help you debug errors? (unit testing is sometimes a better subtitute for compilers)
Is there a requirement from your customer that it's not written in ASP.Net?
Is there a technical requirement that .Net code cannot be run?
You are asking for a technology to solve problems you haven't properly defined.
A: Mono is not a different programming language, it's just an open source implementation of the .NET framework for Unix systems (and Macs too). It aims to be totally compatible with .NET, so you'd end up using C# and ASP.NET just the same.
A: Maybe you meant "compiled to machine code"?
C# and Java are compiled to an intermediate language which is then interpreted at run time.
Most decent interpreters compile this to actual machine code at run time to speed things up (Just-In-Time compilation).
Of course it is not as efficient, but many language features would be extremely hard to implement otherwise (for example Garbage Collection).
Also having an intermediate language allows your compiled code to run on different platforms.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: bash: start multiple chained commands in background I'm trying to run some commands in paralel, in background, using bash. Here's what I'm trying to do:
forloop {
//this part is actually written in perl
//call command sequence
print `touch .file1.lock; cp bigfile1 /destination; rm .file1.lock;`;
}
The part between backticks (``) spawns a new shell and executes the commands in succession. The thing is, control to the original program returns only after the last command has been executed. I would like to execute the whole statement in background (I'm not expecting any output/return values) and I would like the loop to continue running.
The calling program (the one that has the loop) would not end until all the spawned shells finish.
I could use threads in perl to spawn different threads which call different shells, but it seems an overkill...
Can I start a shell, give it a set of commands and tell it to go to the background?
A: GavinCattell got the closest (for bash, IMO), but as Mad_Ady pointed out, it would not handle the "lock" files. This should:
If there are other jobs pending, the wait will wait for those, too. If you need to wait for only the copies, you can accumulate those PIDs and wait for only those. If not, you could delete the 3 lines with "pids" but it's more general.
In addition, I added checking to avoid the copy altogether:
pids=
for file in bigfile*
do
# Skip if file is not newer...
targ=/destination/$(basename "${file}")
[ "$targ" -nt "$file" ] && continue
# Use a lock file: ".fileN.lock" for each "bigfileN"
lock=".${file##*/big}.lock"
( touch $lock; cp "$file" "$targ"; rm $lock ) &
pids="$pids $!"
done
wait $pids
Incidentally, it looks like you're copying new files to an FTP repository (or similar). If so, you could consider a copy/rename strategy instead of the lock files (but that's another topic).
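That copy/rename idea can be sketched as follows (a hedged sketch: mktemp paths stand in for bigfile1 and /destination so it is self-contained). Copy under a hidden temporary name on the destination filesystem, then rename into place; since rename is atomic within one filesystem, readers never observe a half-copied file and no lock file is needed:

```shell
# Stand-ins for bigfile1 and /destination so the sketch is self-contained.
src=$(mktemp)
printf 'payload\n' > "$src"
dest_dir=$(mktemp -d)
name=$(basename "$src")

# Copy under a hidden ".part" name, then atomically rename into place.
cp "$src" "$dest_dir/.$name.part" && mv "$dest_dir/.$name.part" "$dest_dir/$name"

cat "$dest_dir/$name"   # prints: payload
```

An FTP poller watching $dest_dir would only ever pick up the fully written file.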
A: I haven't tested this but how about
print `(touch .file1.lock; cp bigfile1 /destination; rm .file1.lock;) &`;
The parentheses mean execute in a subshell but that shouldn't hurt.
A: The facility in bash that you're looking for is called Compound Commands. See the man page for more info:
Compound Commands
A compound command is one of the following:
   (list) list is executed in a subshell environment (see COMMAND EXECUTION ENVIRONMENT below). Variable assignments and
          builtin commands that affect the shell's environment do not remain in effect after the command completes.
          The return status is the exit status of list.
   { list; }
          list is simply executed in the current shell environment. list must be terminated with a newline or semicolon.
          This is known as a group command. The return status is the exit status of list. Note that unlike the
          metacharacters ( and ), { and } are reserved words and must occur where a reserved word is permitted to be
          recognized. Since they do not cause a word break, they must be separated from list by whitespace or another
          shell metacharacter.
There are others, but these are probably the 2 most common types. The first, the parens, will run a list of commands in series in a subshell, while the second, the curly braces, will run a list of commands in series in the current shell.
parens
% ( date; sleep 5; date; )
Sat Jan 26 06:52:46 EST 2013
Sat Jan 26 06:52:51 EST 2013
curly braces
% { date; sleep 5; date; }
Sat Jan 26 06:52:13 EST 2013
Sat Jan 26 06:52:18 EST 2013
A: Another way is to use the following syntax:
{ command1; command2; command3; } &
wait
Note that the & goes at the end of the command group, not after each command. The semicolon after the final command is necessary, as are the space after the first bracket and before the final bracket. The wait at the end ensures that the parent process is not killed before the spawned child process (the command group) ends.
You can also do fancy stuff like redirecting stderr and stdout:
{ command1; command2; command3; } > out.log 2> err.log &
Your example would look like:
forloop() {
{ touch .file1.lock; cp bigfile1 /destination; rm .file1.lock; } &
}
# ... do some other concurrent stuff
wait # wait for children to end
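One addition to this pattern (a sketch; `sleep 0` just stands in for real work): `$!` holds the PID of the most recently backgrounded group, and `wait <pid>` returns that group's exit status, so you can tell which background sequence failed instead of waiting blindly:

```shell
{ sleep 0; true; } &    # a group that succeeds
ok_pid=$!
{ sleep 0; false; } &   # a group whose last command fails
bad_pid=$!

# wait <pid> returns the exit status of that specific group
wait "$ok_pid"  && ok_status=0  || ok_status=$?
wait "$bad_pid" && bad_status=0 || bad_status=$?
echo "statuses: $ok_status $bad_status"   # prints: statuses: 0 1
```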
A: Thanks Hugh, that did it:
adrianp@frost:~$ (echo "started"; sleep 15; echo "stopped")
started
stopped
adrianp@frost:~$ (echo "started"; sleep 15; echo "stopped") &
started
[1] 7101
adrianp@frost:~$ stopped
[1]+ Done ( echo "started"; sleep 15; echo "stopped" )
adrianp@frost:~$
The other ideas don't work because they start each command in the background, and not the command sequence (which is important in my case!).
Thank you again!
A: Run the command by using an at job:
# date
# jue sep 13 12:43:21 CEST 2012
# at 12:45
warning: commands will be executed using /bin/sh
at> command1
at> command2
at> ...
at> CTRL-d
at> <EOT>
job 20 at Thu Sep 13 12:45:00 2012
The result will be sent to your account by mail.
A: I stumbled upon this thread here and decided to put together a code snippet to spawn chained statements as background jobs. I tested this on BASH for Linux, KSH for IBM AIX and Busybox's ASH for Android, so I think it's safe to say it works on any Bourne-like shell.
processes=0;
for X in `seq 0 10`; do
let processes+=1;
{ { echo Job $processes; sleep 3; echo End of job $processes; } & };
if [[ $processes -eq 5 ]]; then
wait;
processes=0;
fi;
done;
This code runs a number of background jobs up to a certain limit of concurrent jobs. You can use this, for example, to recompress a lot of gzipped files with xz without having a huge bunch of xz processes eat your entire memory and make your computer throw up: in this case, you use * as the for's list and the batch job would be gzip -cd "$X" | xz -9c > "${X%.gz}.xz".
A: run the commands in a subshell:
(command1 ; command2 ; command3) &
A: for command in $commands
do
"$command" &
done
wait
The ampersand at the end of the command runs it in the background, and the wait waits until the background task is completed.
A: Try to put commands in curly braces with &s, like this:
{ command1 & command2 & command3 & }
This does not create a sub-shell, but executes the group of commands in the background.
HTH
A: I don't know why nobody replied with the proper solution:
my @children;
for (...) {
...
my $child = fork;
exec "touch .file1.lock; cp bigfile1 /destination; rm .file1.lock;" if $child == 0;
push @children, $child;
}
# and if you want to wait for them to finish,
waitpid($_, 0) for @children;
This causes Perl to spawn children to run each command, and allows you to wait for all the children to complete before proceeding.
By the way,
print `some command`
and
system "some command"
output the same contents to stdout, but the first has a higher overhead, as Perl has to capture all of "some command"'s output
A: Forking in a for loop:
for i in x; do ((a; b; c;)&); done
Example:
for i in 500 300 100; do ((printf "Start $i: "; date; dd if=/dev/zero of=testfile_$i bs=1m count=$i 2>/dev/null; printf "End $i: "; date;)&) && sleep 1; done
A: Just in case that someone is still interested, you can do it without calling a subshell like this:
print `touch .file1.lock && cp bigfile1 /destination && rm .file1.lock &`;
A: You can use the GNU parallel command to run jobs in parallel. It is safer and faster.
My guess is that you are trying to copy multiple large files from a source to a destination, and you can do that in parallel with the statement below.
$ ls *|parallel -kj0 --eta 'cp {} /tmp/destination'
Since we have used the -j0 option, all the files will be copied in parallel. If you need to reduce the number of parallel processes, you can use -j<n>, where <n> is the number of processes to run in parallel.
Parallel will also collect the output of the processes and report it in a sequential manner (with the -k option), which other job-control mechanisms cannot do.
The --eta option will give you detailed statistics on the processes in flight, so you can see how many have completed and how long the rest will take.
A: You can pass parameters to a command group (having sequential commands) and run them in background.
for hrNum in {00..11};
do
oneHour=$((10#$hrNum + 0))
secondHour=$((10#$hrNum + 12))
{ echo "$oneHour"; echo "$secondHour"; } &
wait
done
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "47"
} |
Q: What language/methods to use to listen to removeable drives in Windows? What language or method would I use to listen to the event when a removeable drive is plugged into the PC?
A: I guess any language that can work with the Windows API should do. Basically, you listen to the windows message WM_DEVICECHANGE. This alone will let you listen to system-wide messages.
For more specific scenarios look at the API function RegisterDeviceNotification(). Needless to say, C/C++ would be straightforward for this task.
A: Is your program going to be running as a windows service and waiting?
or is putting a startup script on the removable drive an option in this case?
A: This article on codeproject.com is in C++, and has a solution using the shell change notify register function.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: How do I program a driver for a USB device for windows platform? I am looking for a device that reads wiring voltages via a USB interface and returns the data. How would I go about programming something to interpret this data and what language would I use?
edit:
If it helps, this project is to develop a digital tachometer for older engines that don't support a comprehensive OBD2 data port. Therefore, it will read voltages on a DC circuit and present them in an accurate graphical interface. I have absolutely no idea where to start with all this but am determined to make it work! It's for Windows.
A: Cheat and use libusb. I did this for a project I've been working on for a while and wrote a C++/wxWidgets app to handle the data.
I've been thinking recently of re-writing the app on the PC in wxPython though as it's much faster for GUI development.
How you want to display / log the data? There are a lot of options available. You can do some pretty cool stuff (easily) with the OpenGL capabilities of wxWidgets whether it's 2D or 3D data representation.
A: If you can I would suggest using a library like libusb, as kris and Jon Cage have suggested.
If libusb isn't going to suit your needs and you're developing for Windows you should have a look at the software that Jungo provides. Again, this moves the usb software into user space rather than requiring Windows kernel development.
(edit 3: Ilya points out in the comment that Jungo is also available for Linux)
If you must do some kernel development (either Windows or Linux) then C is pretty well the only option you have. Investigate this book by Rubini for linux development. For windows driver development I can recommend this book by Oney.
But I'd investigate the libusb option in preference to the driver development in both cases.
Btw. If all you are interested in is being able to measure voltages on a usb device (and writing the code isn't important) there are many products out there that will do that for you. Take a look at some of the offerings from National Instruments. These will deal with the hard work of usb and the data acquisition and give you a nice programming interface to use in your application.
(edit 2)
There are also some usb-serial chips (eg. these) that can be interfaced directly to an embedded processor usig only a uart. Typically these come with drivers.
A: Have a look at libusb. It is available for both Linux and Windows.
A: Since you're still looking for a device that converts voltages into information, I suggest you take a look at devices that implement a USB-HID (Human Interface Device) interface, such as those found here.
They have the benefit of not requiring any device driver development to be made, or drivers to be installed. They are as plug and play as a mouse, keyboard or flash drive. The interface is pretty generic, and most manufacturers also provide the necessary libraries to read the information from the device, be notified when a device is plugged in/out, discover devices, and so on.
In addition, have a look at this article that explains how to use a HID device in C#, for example.
Dave
A: Seems to me that if you want to read wiring voltages then you need an A/D converter. Are you making your own A/D converter? If so, you've got some nice firmware programming to do on the device side, more than the host-side driver that you're asking about here. Otherwise you're going to buy an A/D converter, and you should just use the driver that the vendor supplies with it.
A: Unless you're bit-banging your own USB driver on the firmware side, your chip will probably come with a driver for the PC. For example, PIC microcontrollers from Microchip come not only with firmware for the PIC, but with a Windows driver. I expect that other USB-enabled chips would also come with their own drivers.
Remember that while you interact with the USB port directly on the firmware side, on the PC side all you actually interact with is the driver for the host controller.
A: Your easiest option is probably to buy some kind of off-the-shelf data acquisition device. Lots of companies make that kind of thing, but they're sometimes frighteningly expensive:
*
*National Instruments
*Amplicon
*LabJack
You could also build your own from a kit, although I can't find any links for you just now.
If you want something more custom you could use an EZ-USB or a PIC. They provide USB drivers (for Windows, at least) that allow you to interact with the device without writing drivers.
With most of these you have a fairly wide choice of programming languages, I've written software to communicate with EZ-USB devices from Visual Basic 6 in the past.
A: Most microcontrollers have built in ADC, and a ton of these also have a built in usb subsystem. Cypress, PIC, AVR come to mind. Whenever I am doing USB work for my own projects, I use pyusb and wxPython. They make it damn easy to get the job done, although there is quite a harsh initial learning curve.
Shameless self plug aside, I wrote a small python driver with pyusb for a USB-LCD device. You can check out my source code here.
A: I'm, personally, using Microchip PICs - they have AD/DA converters, USB ports, free drivers and boot loaders - and all this for under $4. After you plug in such a device you're getting one extra COM port - the rest is trivial.
A: You don't say what platform you're looking at. If you're targeting Windows, USB Revealed is an awesome reference.
A: For hardware, take a look at FTDI products.
If you have hardware and want to access it in Windows, I recently discovered WinUSB. If that's what you need, take a look at this white paper.
A: In addition to WinUSB, libusb and Jungo, there is another option for programming USB devices from user-mode - User-Mode Driver Framework (UMDF).
Writing a UMDF driver is basically creating an in-process COM component with your favorite tools.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: ruby/ruby on rails memory leak detection I wrote a small web app using Ruby on Rails. Its main purpose is to upload, store, and display results from XML files (which can be up to several MB). After running for about 2 months I noticed that the mongrel process was using about 4GB of memory. I did some research on debugging Ruby memory leaks and could not find much. So I have two questions.
*
*Are there any good tools that can be used to find memory leaks in Ruby/rails?
*What type of coding patterns cause memory leaks in ruby?
A: Memory leaks are a problem in the current Ruby implementation; a good place to start reading about this is
http://whytheluckystiff.net/articles/theFullyUpturnedBin.html. The whytheluckystiff website doesn't exist anymore, but you can find the original article here: https://viewsourcecode.org/why/hacking/theFullyUpturnedBin.html
for a more specific answer on problems with long running ruby processes see
https://just.do/2007/07/18/heap-fragmentation-in-a-long-running-ruby-process/
maybe you could give passenger (mod_rails) a try https://web.archive.org/web/20130901072209/http://nubyonrails.com/articles/ask-your-doctor-about-mod_rails
A: You should give a look to ruby-prof.
A: Some tips to find memory leaks in Rails:
*
*use the Bleak House plugin
*implement Scout monitoring specifically the memory usage profiler
*try another simple memory usage logger
The first is a graphical exploration of memory usage by objects in the ObjectSpace.
The last two will help you identify specific usage patterns that are inflating memory usage, and you can work from there.
As for specific coding-patterns, from experience you have to watch anything that's dealing with file io, image processing, working with massive strings and the like.
I would check whether you are using the most appropriate XML library - REXML is known to be slow and believed to be leaky (I have no proof of that!). Also check whether you can memoize expensive operations.
A: A super simple method to log memory usage after or before each request (only for Linux).
#Put this in application_controller.rb
before_filter :log_ram # or use after_filter
def log_ram
logger.warn 'RAM USAGE: ' + `pmap #{Process.pid} | tail -1`[10,40].strip
end
You might want to load up script/console and try the statement out first to make sure it works on your box.
puts 'RAM USAGE: ' + `pmap #{Process.pid} | tail -1`[10,40].strip
Then just monitor top, when a request makes your memory usage jump, go check the logs. This, of course, will only help if you have a memory leak that occurs in large jumps, not tiny increments.
A: Switch to jruby and use the Eclipse Memory Analyzer.
There's no comparable tool for Ruby at the moment.
A: Now, you can run the following to get the memory usage in a format that R can read. I am assuming that your log line looks like:
1234567890 RAM USAGE: 27456K
Run this (or modify to suite):
$ grep 'RAM USAGE' fubar.log | awk '{print s " " $1 " " $4; s++}' | sed 's/K//g' > mem.log
Then you can run this:
#!/bin/sh
rm -f mem.png
R --vanilla --no-save --slave <<RSCRIPT
lst <- read.table("mem.log", col.names = c("n", "date", "memory"))
attach(lst)
m = memory / 1024.0
summary(m)
png(filename="mem.png", width=1024)
plot(date, m, type='l', main="Memory usage", xlab="time", ylab="memory")
RSCRIPT
and get a nice graph.
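If you'd rather not chain grep/awk/sed, the same log-munging step can be done in a few lines of Python (the log-line format above is assumed; the function name is just for illustration):

```python
import re

def parse_mem_log(lines):
    # Turn "1234567890 RAM USAGE: 27456K" lines into (index, timestamp, kB)
    # tuples, skipping anything that doesn't match the expected format.
    rows = []
    for line in lines:
        m = re.search(r"^(\S+)\s+RAM USAGE:\s+(\d+)K", line)
        if m:
            rows.append((len(rows), m.group(1), int(m.group(2))))
    return rows
```

Write the tuples out space-separated to mem.log and feed them to R as before.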
A: These gems worked for me:
MemoryLogic
Adds in proccess id and memory usage in your rails logs, great for tracking down memory leaks
Oink
Log parser to identify actions which significantly increase VM heap size
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45"
} |
Q: Is there a PHP library for email address validation? I need to validate the email address of my users. Unfortunately, making a validator that conforms to standards is hard.
Here is an example of a regex expression that tries to conform to the standard.
Is there a PHP library (preferably, open-source) that validates an email address?
A: Cal Henderson (of Flickr) wrote an RFC822 compliant email address matcher, with an explanation of the RFC and code utilizing the RFC to match email addresses. I've been using it for quite some time now with no complaints.
RFC822 (published in 1982) defines, amongst other things, the format for internet text message (email) addresses. You can find the RFCs by googling - there are many, many copies of them online. They're a little terse and weirdly formatted, but with a little effort we can see what they're getting at.
... Update ...
As Porges pointed out in the comments, the library on the link is outdated, but that page has a link to an updated version.
A: I found a library in google code: http://code.google.com/p/php-email-address-validation/
Are there any others?
A: Have you looked at PHP's filter_ functions? They're not perfect, but they do a fairly decent job in my experience.
Example usage (returns boolean):
filter_var($someEmail, FILTER_VALIDATE_EMAIL);
A: AFAIK, the only good way to validate an e-mail is to send an e-mail and see if the user comes back to the site using a link in this e-mail. That's what a lot of sites do.
As you point out with the link to the well-known mammoth regex, validating all forms of e-mail address is hard, nearly impossible. It is so easy to do it wrong, even for trivial-style e-mails (I found too many sites rejecting caps in e-mail addresses! And most old regexes reject TLDs of more than 4 letters!).
AFAIK, "Jean-Luc B. O'Grady"@example.com and e=m.c^2@[82.128.45.117] are both valid addresses... While I-Made-It-Up@Absurd-Domain-Name.info is likely to be invalid.
So somehow, I would just check that we have something, a single @, something else, and go with it: that would catch most user errors (like an empty field or a user name instead of an e-mail address).
If user wants to give a fake address, it would just give something random looking correct (see@on.tv or bill.gates@microsoft.com). And no validator will catch typos (jhon.b@example.com instead of john.b@example.com).
If one really wants to validate e-mails against the full RFC, I would advise using regexes to split around the @, then checking the local name and domain name separately. Separate the case of a local name starting with " from other cases, etc. Separate the case of a domain name starting with [ from other cases, etc. Split the problem into smaller, well-defined cases, and use regexes only on those simpler cases.
This advice can be applied to a lot of regex uses, of course...
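To make the split-then-check advice concrete, here is a rough illustration of the idea (in Python rather than PHP, and deliberately simplified - the character classes below are a plausible approximation, not an RFC-complete rule set):

```python
import re

def plausible_email(addr):
    # Exactly one "@" separating a non-empty local part and domain.
    if addr.count("@") != 1:
        return False
    local, domain = addr.split("@")
    if not local or not domain:
        return False
    # Quoted local parts ("..."@example.com) get their own, simpler rule here.
    if local.startswith('"'):
        return local.endswith('"') and len(local) > 2
    if not re.fullmatch(r"[A-Za-z0-9.!#$%&'*+/=?^_`{|}~-]+", local):
        return False
    # Domain: dot-separated labels of letters/digits/hyphens, no empty labels.
    return all(
        re.fullmatch(r"[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?", label)
        for label in domain.split(".")
    )
```

Each branch stays small enough to reason about, which is exactly the point of splitting the problem.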
A: [UPDATED] I've collated everything I know about email address validation here: http://isemail.info, which now not only validates but also diagnoses problems with email addresses. I agree with many of the comments here that validation is only part of the answer; see my essay at http://isemail.info/about.
I've now collated test cases from Cal Henderson, Dave Child, Phil Haack, Doug Lovell and RFC 3696. 158 test addresses in all.
I ran all these tests against all the validators I could find. The comparison is here: http://www.dominicsayers.com/isemail
I'll try to keep this page up-to-date as people enhance their validators. Thanks to Cal, Dave and Phil for their help and co-operation in compiling these tests and constructive criticism of my own validator.
People should be aware of the errata against RFC 3696 in particular. Three of the canonical examples are in fact invalid addresses. And the maximum length of an address is 254 or 256 characters, not 320.
A: Zend_Validate includes an email validator.
There are plenty of regular expressions around for validating - everything from very basic to very advanced.
You really should pick something that matches the importance of a valid email in your application.
A: I'd recommend to look at the source code of Zend_Validate_EmailAddress [source].
Once you have your dependencies fixed, you can simply do the following:
$mail_validator = new Zend_Validate_EmailAddress();
$mail_validator->isValid($address); // returns true or false
The best would be to get the full Zend Library into your project via svn external and point the include path to it...
But you can just download the necessary files (1,2,3,4,5,6), and include them all (remove the require_once calls).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: How do I prevent ServerXMLHTTP from automatically following redirects (HTTP 303 See Other responses)? I am using ServerXMLHTTP to perform an HTTP POST. The response returned is a redirect (specifically 303 See Other). ServerXMLHTTP is automatically following this redirect but this is causing an authentication failure as is not propagating the Authorization header of the original request.
Is there a way I can prevent the automatic redirection (or alternatively ensure that the Authorization header is resent)?
A: ServerXMLHTTP does not support interception of redirects (see Microsoft Knowledge Base Article 308607). However WinHTTP can be used in its place and this does contain a configurable 'enable redirects' option.
How to disable WinHTTP redirects in VBA:
webClient.Option(6) = False
In context:
Set webClient = CreateObject("WinHttp.WinHttpRequest.5.1")
webClient.Option(6) = False
webClient.Open "POST", "http://example.com", False
webClient.send ("")
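The same "handle the 303 yourself" approach works outside WinHTTP too. As a sketch in Python (standard library only), a redirect handler that returns None makes urllib surface the 3xx instead of following it, so the caller can re-send the Authorization header manually:

```python
import urllib.request

class NoRedirectHandler(urllib.request.HTTPRedirectHandler):
    # Returning None tells urllib not to build a follow-up request;
    # the 3xx then surfaces as an HTTPError the caller can inspect
    # (status code, Location header) before re-sending credentials.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

opener = urllib.request.build_opener(NoRedirectHandler)
```

opener.open(url) will now raise urllib.error.HTTPError with code 303 on a See Other response, instead of silently following it without your headers.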
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: F# - while woes given tuple condition Given the following:
#light
//any function returning bool * 'a
let foo =
let x = ref 10
fun () ->
x := !x - 1
if !x <> 0 then
(true, x)
else
(false, x)
while let (c,x) = foo() in c do print_any x;//can't access x, but would be convenient.
//this is how I want it to work, without all the typing
let rec loop f =
match f() with
| (true, x) ->
print_any x
loop f
| (false, _) -> ()
loop foo
How should I go about solving this?
Or should I just go through the hassle to convert "foo" to a sequence expression?
A: This is one solution, but I personally think it is abuse of the while construct.
#light
while
(let (c,x) = foo()
if c then print_any !x
c)
do ()
A: Another solution which is slightly better in my opinion. It gets x out of the condition clause scope of the while and puts it in a reference y which is available in the higher scope. Still not the best (functional) solution but it works.
let y = ref 1
while (let (c,x) = foo()
y := !x
c)
do printf "%i" !y
I think your rec loop solution works best since it is the most functional one (avoiding side effects, although foo uses state) and the most general one (it works on all functions of the same type as foo). It is longer typing, but if you will use more functions like foo, loop is more productive than the single shortest solution just for foo.
I would even generalize loop some more and abstract away the action of what you want to do with a value in a 'true' situation:
let loop f a =
let rec loop2() =
match f() with
| (true, x) ->
a x
loop2()
| (false, _) -> ()
loop2()
loop foo print_any
A: I like the other suggestions for how to consume "foo" assuming foo is held fixed.
To my nose, the code for "foo" smells. If it's reasonable to convert "foo" to "bar" along the lines of
let bar =
let x = ref 10
seq {
x := !x - 1
while !x <> 0 do
yield x
x := !x - 1
}
bar |> Seq.iter print_any
then I would do it, but "bar", while somewhat better, still seems fishy. (In "bar", I preserved the odd aspect that it returns "int ref" rather than just "int", as "foo" did, but hopefully that aspect was unintentional?)
I think the thing that is so funky about "foo" is the kind of implicit information that's not obvious from the data type (you can keep calling this so long as the bool portion is true), which is what makes the seq version a little more attractive.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Top reason not to use EJB 3.0 again? The scenario
*
*You have developed a webapp using EJBs version 3.
*The system is deployed, delivered and is used by the customer.
If you would have to rewrite the system from scratch, would you use EJBs again?
Yes: Don't answer this question, answer this one instead.
No: Provide the top reason for not using EJBs again, based on your personal experience.
Let the answer contain just one reason. This will let other readers vote up the number one reason to stay away from EJB 3.
A: The project did not have any of the problems that EJBs are supposed to solve. Using EJBs just made it harder to code, to debug, to build, to deploy and to document and understand.
A: The top reason for not using EJB 3.0 again? Maybe you can wait for EJB 3.1 which does away with a major piece of insanity: the mandatory local interface.
https://blogs.oracle.com/kensaks/entry/optional_local_business_interfaces
A: Having to do relationship-child management yourself: Hibernate's all-delete-orphan didn't make it into the 3.0 spec.
A: Coding an app in EJB is too bulky, and in my experience you can get away with a lightweight alternative.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Create a new Word Document using VSTO How can I create a new Word document programmatically using Visual Studio Tools for Office?
A: What you are actually after is Office Automation using the PIA's (Primary Interop Assemblies).
VSTO is actually a set of Managed .net extensions which make writing add-ins for Office far easier. For external interaction VSTO is not used at all (though you can still reference VSTO libraries and use some of the helpers if you wanted to).
Have a look at http://support.microsoft.com/kb/316384 to get you started. And google 'word interop create document'
A: For a VSTO app-level add-in, you can do something like this:
Globals.ThisAddIn.Application.Documents.Add(ref objTemplate, ref missingType, ref missingType, ref missingType);
where objTemplate can be a template of document
See Documents.Add Method
A: Now, I might be wrong on this, but I don't believe you can actually make a new Word doc using VSTO. I'm not intimately familiar with VSTO, so forgive me if I'm incorrect on that point.
I do know that you can use Office Interop libraries to do this, however.
To download the libraries, just do a search for "office interop assemblies," possibly including the Office version you want (ex: "office interop assemblies 2007").
Once you've included the Word Interop assembly into your application (using Add Reference), you can do something like:
using Word = Microsoft.Office.Interop.Word;
object missing = System.Reflection.Missing.Value;
Word.Application app = new Word.ApplicationClass();
Word.Document doc = app.Documents.Add(ref missing, ref missing, ref missing, ref missing);
doc.Activate();
app.Selection.TypeText("This is some text in my new Word document.");
app.Selection.TypeParagraph();
Hope that helps!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Library for converting a traceback to its exception? Just a curiosity: is there an already-coded way to convert a printed traceback back to the exception that generated it? :) Or to a sys.exc_info-like structure?
A: Converting a traceback to the exception object wouldn't be too hard, given common exception classes (parse the last line for the exception class and the arguments given to it at instantiation). The traceback object (the third argument returned by sys.exc_info()) is an entirely different matter, though. The traceback object actually contains the chain of frame objects that constituted the stack at the time of the exception. Including local variables, global variables, et cetera. It is impossible to recreate that just from the displayed traceback.
The best you could do would be to parse each 'File "X", line N, in Y:' line and create fake frame objects that are almost entirely empty. There would be very little value in it, as basically the only thing you would be able to do with it would be to print it. What are you trying to accomplish?
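For the first half - recovering the exception object itself - a best-effort parse of the final line is about all you can do. A hypothetical sketch (it assumes a builtin exception class and a plain string message):

```python
import builtins
import traceback

def exception_from_traceback(text):
    # Only the final "SomeError: message" line is recoverable; the frame
    # chain (locals, globals, source lines) is gone for good.
    last = text.strip().splitlines()[-1]
    name, _, message = last.partition(": ")
    cls = getattr(builtins, name, None)
    if isinstance(cls, type) and issubclass(cls, BaseException):
        return cls(message)
    return None

# Round trip: print a traceback, then rebuild a comparable exception.
try:
    int("not a number")
except ValueError:
    printed = traceback.format_exc()

rebuilt = exception_from_traceback(printed)
```

rebuilt is a ValueError carrying the original message - but, as noted above, nothing like the original traceback object.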
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Pitfalls for converting a .net 2.0 solution to .net 3.5 We're moving a solution with 20+ projects from .net 2.0 to 3.5 and at the same time moving from Visual Studio 2005 to 2008. We're also at the same time switching from MS Entlib 2.0 to 4.0.
*
*Is there any reasons not to let the
Visual Studio wizard convert the
solution for us?
*Is 3.5 fully backwards compatible
with 2.0?
*Is Entlib 4.0 fully backwards compatible
with 2.0?
Edit: I might have been a bit confused when I wrote this. The backwards compatibility question is supposed to mean: is there anything that exists in a 2.0 project that will not work/compile in 3.5?
:)
//W
A: We upgraded a rather large solution (20+ projects) from 2005 to 2008 and it was really trivial - basically a project upgrade only. The underlying framework is still the same, since both 3.0/3.5 and 2.0 share the same core framework.
As was said above, even though you are upgrading, you don't need to change the framework reference for the projects - in fact, it defaults to leaving the framework at 2.0 instead of changing it to 3.0/3.5. This means you will not be able to take advantage of 3.0/3.5 features until you change the reference (Project Properties page, Application tab, "Target Framework" field), but it also means you are that much more assured there won't be additional compatibility issues (as you will get an error adding 3.0/3.5 code until that reference is changed).
New features of TFS 2008 shouldn't be overlooked either, though you don't need to upgrade your app to be able to use TFS 2008.
1.1 to 2.0 conversion was much more painful...
A: I upgraded several projects from Visual Studio 2005 to 2008 with the Wizard, and they all went painless (well... except for that C++ beast. But you're talking about .NET anyway).
Keep in mind that you don't need to upgrade the .NET version. Visual Studio 2008 supports .NET 2.0, 3.0 and 3.5. However, 3.5 is backwards-compatible anyway, since it sits on the same CLR and is, more or less, just some extra libraries. And the "old" libraries stay the same.
I don't know about Entlib.
Why don't you just try and run your unit tests? :)
A: *
*Is there any reasons not to let the Visual Studio wizard convert the solution for us?
No.
*
*Is 3.5 fully backwards compatible with 2.0?
No. There are new features in 3.5 that wouldn't port backwards natively. And (IIRC) there are some deprecations going from 2.0 to 3.5.
*
*Is Entlib 4.0 fully backwards compatible with 2.0?
I don't think so. 3.5 is listed as a requirement.
Make a backup, run the wizard, see what happens. It might take a while for such a chunky project but you'll be in a position where you can tell if it'll build/run-as-expected.
A: When I upgraded from EntLib 2.0 to 4.0 I observed the following breaking source code change if you use the Caching application block:
*
*In 2.0, you get a cache manager using CacheManager cache = CacheFactory.GetCacheManager().
*In 4.0, you have to replace CacheManager with ICacheManager or it won't compile.
Also, if you are writing your own exception formatter class for the Exception Handling block:
*
*In 2.0, you have to define one constructor with the signature (TextWriter, Exception).
*In 4.0, that is obsolete, and you have to define a second constructor with the signature (TextWriter, Exception, Guid).
A: There aren't supposed to be any breaking changes when migrating from EntLib 3.1 to 4.0:
"There are no breaking changes to the public API. That was one of the design goals of EL4. Just remember EL4 requires .NET3.5.
--Grigori"
http://blogs.msdn.com/agile/archive/2008/05/16/enterprise-library-4-0-for-visual-studio-2008-released.aspx
(Grigori is the Program Manager for EntLib)
I'm not sure about 2.0 to 3.1 though. If I can find the right people @ p&p tomorrow I'll update this.
Ade
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Switch off Run-Time check in Visual Studio I get a failed run-time check in Visual C++ 2008 because a number that is too big is cast to a smaller type. The failure is in an external DLL, so I cannot fix it there. How can I switch off this run-time check for an external project?
A: If the cast (and check) is happening in this DLL which you can't recompile, then you can't easily turn off the check.
The only thing you could do is change the data which you pass to the DLL to avoid the problem. Or patch the binary to disable the check, which probably wouldn't be terribly difficult as that sort of thing goes - are you good with a disassembler?
A: The run-time check corresponds to the /RTCc option, which you can find in the project's Configuration Properties under C/C++ > Code Generation, "Smaller Type Check". You should switch this off and recompile.
A: You can always just turn off the cast to smaller type check in the project settings.
If that doesn't work because the check is compiled into the DLL, you can try linking against the non-debug version of the DLL, since the check can only be enabled in a debug build. It might affect your debugging, of course.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Hide WinForms TreeView plus sign I have a WinForms TreeView with one main node and several sub-nodes.
How can I hide the + (plus sign) in the main node?
A: TreeView property: ShowRootLines = false
When ShowRootLines is false, the Plus/Minus sign will not be shown for the root node, but will still show when necessary on child nodes.
With the Plus/Minus sign hidden, you might consider executing the Expand() method of the root node once the tree is populated. That will make sure that the root node shows all first-level child nodes.
Note: There is a ShowPlusMinus property on the TreeView, but it works on all nodes.
A: See the TreeView::ShowExpandCollapse property. Set it to false to disable the expand/collapse node indicators.
A: Set the SiteMapDataSource property ShowStartingNode to false.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Hide triangle of dropdownlist when using css with media print I'm using @media print in my external css file to hide menus etc. However while printing the little triangle of a dropdownlist still shows. Is there a css setting available to hide it as well and only print the selected item?
A: This works in Chrome and Firefox (others may work also)
-moz-appearance: none;
-webkit-appearance: none;
appearance: none;
A: No, there isn't. Besides, every browser displays its dropdowns in its own way, some use system widgets, some have their own. In Safari, for example, no matter what styling you remove, it still has a box (well, sort of) around the whole of it.
If you don't want to change your HTML code, perhaps a bit of javascript might do it - get the selected value and exchange the dropdown for a paragraph.
A: I would tentatively say you cannot, because it is a monolithic component: you cannot change it in the same way you cannot change the look of scrollbars, for example.
I didn't choose my example at random: of course, in some browsers (IE at least), you can change the latter. But that requires some browser-specific CSS, which isn't very practical unless you are targeting a captive intranet application...
Too bad, it is indeed a good idea to hide this part.
[Update] There might be a way, although semantically-wise it is a bit ugly... Whatever.
<select name="Snakes" style="width: 200px;">
<option value="A">Anaconda</option>
<option value="B">Boa</option>
<option value="C">Cobra</option>
<option selected="" value="P">Python</option>
<option value="V">Viper</option>
</select>
<!-- Put this style in a class, of course -->
<div style="background-color: white;
min-width: 20px; max-width: 20px; position: relative;
right: -180px; top: -19px;">&Nbsp;</div>
Of course, the div must be hidden in screen media and get the above style in print media.
Works decently in FF3, Opera 9.5 and even IE7 (not IE6) on WinXP. Alas, I fear the above hack is system dependent and might be broken in other browsers.
PS.: I wrote Nbsp because syntax highlighting hides it otherwise... :-P
A: As most people have said, the rendering style of form widgets is left pretty much up to the browser. You can style them a bit, but fundamental changes to them are unreliable at best.
As another commenter mentioned, you'd be best using a bit of javascript to achieve this effect. I've given a bit of jQuery that will do this. It's not ideal though - it relies on the user clicking the "Print this page" links, and not using the browser's own Print functions.
For the following markup:
<p><a class="print" href="#">print this</a></p>
<form action="/my/action/" method="POST">
<select id="mySelect">
<option value="1">An Option</option>
<option value="2" selected="selected">Another Option</option>
</select>
</form>
This jQuery will append a paragraph containing the content of the currently selected item from the drop-down, and hide the form element before printing the page.
$(document).ready(function() {
$('a.print').click(function() {
var selected = $('#mySelect option:selected').text();
$('#mySelect').after('<p class="replacement">' + selected + '</p>');
$('#mySelect').hide();
window.print();
});
});
A: This worked for me in IE6. I didn't try other browsers
http://weblogs.asp.net/bleroy/archive/2005/08/09/how-to-put-a-div-over-a-select-in-ie.aspx
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Finding a User in Active Directory with the Login Name I'm possibly just stupid, but I'm trying to find a user in Active Directory from C#, using the Login name ("domain\user").
My "Skeleton" AD Search Functionality looks like this usually:
de = new DirectoryEntry(string.Format("LDAP://{0}", ADSearchBase), null, null, AuthenticationTypes.Secure);
ds = new DirectorySearcher(de);
ds.SearchScope = SearchScope.Subtree;
ds.PropertiesToLoad.Add("directReports");
ds.PageSize = 10;
ds.ServerPageTimeLimit = TimeSpan.FromSeconds(2);
SearchResult sr = ds.FindOne();
Now, that works if I have the full DN of the user (ADSearchBase usually points to the "Our Users" OU in Active Directory), but I simply have no idea how to look for a user based on the "domain\user" syntax.
Any Pointers?
A: You need to set a filter (DirectorySearcher.Filter) something like:
"(&(objectCategory=person)(objectClass=user)(sAMAccountName={0}))"
Note that you only specify the username (without the domain) for the property sAMAccountName. To search for domain\user, first locate the naming context for the required domain, then search there for sAMAccountName.
By the way, when building LDAP query strings using String.Format, you should generally be careful to escape any special characters. It probably isn't necessary for an account name, but could be if you're searching by other properties such as the user's first (givenName property) or last (sn property) name. I have a utility method EscapeFilterLiteral to do this: you build your string like this:
String.Format("(&(objectCategory=person)(objectClass=user)(sn={0}))",
EscapeFilterLiteral(lastName, false));
where EscapeFilterLiteral is implemented as follows:
public static string EscapeFilterLiteral(string literal, bool escapeWildcards)
{
if (literal == null) throw new ArgumentNullException("literal");
literal = literal.Replace("\\", "\\5c");
literal = literal.Replace("(", "\\28");
literal = literal.Replace(")", "\\29");
literal = literal.Replace("\0", "\\00");
literal = literal.Replace("/", "\\2f");
if (escapeWildcards) literal = literal.Replace("*", "\\2a");
return literal;
}
This implementation allows you treat the * character as part of the literal (escapeWildcard = true) or as a wildcard character (escapeWildcard = false).
UPDATE: This is nothing to do with your question, but the example you posted does not call Dispose on the disposable objects it uses. Like all disposable objects these objects (DirectoryEntry, DirectorySearcher, SearchResultCollection) should always be disposed, normally with the using statement. See this post for more info.
A: Thanks. I figured that I can get the domain (at least in my AD) through specifying "LDAP://{0}.somedomain.com/DC={0},DC=somedomain,DC=com", replacing {0} with the domain, which works in my environment at least.
One question though: sAMAccountName seems deprecated: The logon name used to support clients and servers running older versions of the operating system, such as Windows NT 4.0, Windows 95, Windows 98, and LAN Manager. This attribute must be less than 20 characters to support older clients.
Is this still the best approach to it? Or is there a more "modern" field to query? (Windows 2003 Active Directory, Windows XP or 2003 Clients, .net 3.0)
Edit: Thanks again. Our structure is a bit complicated: we have a big "domain.com" forest, with several domains for regional offices. Essentially: the login is "something\username", the full domain is something.domain.com and the mail is user@domain.com (without the something), but the principal name is user@something.domain.com. I will manually "translate" something\username to username@something.domain.com, as this seems to be the most robust way. Especially since I want to keep the Auto-Discovery feature in.
A: Logon Name(Pre-Windows 2000)
"(&(objectCategory=person)(objectClass=user)(!(sAMAccountType=805306370))(sAMAccountName=John))"
Logon Name(Windows 2000 and above)
"(&(objectCategory=person)(objectClass=user)(!(sAMAccountType=805306370))(userPrincipalName=John))"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Page.Tostring() behaves a bit weird in .net 1.1? I have a control where I have to check in which page I am, so I can set a certain variable accordingly.
string pageName = this.Page.ToString();
switch (pageName)
{
case "ASP.foo_bar_aspx": doSomething(); break;
default: doSomethingElse(); break;
}
this works fine locally and on some development servers; however, when I put it live, it stopped working because I don't get ASP.foo_bar_aspx but _ASP.foo_bar_aspx
(notice the underscore in the live version)
Why does it act that way, Can I set it somehow?
A: You can't rely on auto-generated names. Use types instead, e.g.:
if (Page is FooBar) { ... }
A: That seems like a really dodgy way of getting the current request. Have you tried using HttpContext.Current.Request.FilePath or another HttpContext.Current.Request... variable instead?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Using DISTINCT inner join in SQL I have three tables, A, B, C, where A is many to one B, and B is many to one C. I'd like a list of all C's in A.
My tables are something like this: A[id, valueA, lookupB], B[id, valueB, lookupC], C[id, valueC]. I've written a query with two nested SELECTs, but I'm wondering if it's possible to do INNER JOIN with DISTINCT somehow.
SELECT valueC
FROM C
INNER JOIN
(
SELECT DISTINCT lookupC
FROM B INNER JOIN
(
SELECT DISTINCT lookupB
FROM A
)
A2 ON B.id = A2.lookupB
)
B2 ON C.id = B2.lookupC
EDIT:
The tables are fairly large, A is 500k rows, B is 10k rows and C is 100 rows, so there is a lot of unnecessary info if I do a basic inner join and use DISTINCT in the end, like this:
SELECT DISTINCT valueC
FROM
C INNER JOIN B on C.id = B.lookupC
INNER JOIN A on B.id = A.lookupB
This is very, very slow (orders of magnitude slower than the nested SELECT I do above).
A: SELECT DISTINCT C.valueC
FROM C
LEFT JOIN B ON C.id = B.lookupC
LEFT JOIN A ON B.id = A.lookupB
WHERE C.id IS NOT NULL
I don't see a good reason why you want to limit the result sets of A and B, because what you want to have is a list of all C's that are referenced by A. I did a DISTINCT on C.valueC because I guessed you wanted a unique list of C's.
EDIT: I agree with your argument. Even if your solution looks a bit nested it seems to be the best and fastest way to use your knowledge of the data and reduce the result sets.
There is no distinct join construct you could use so just stay with what you already have :)
A: I did a test on MS SQL 2005 using the following tables: A 400K rows, B 26K rows and C 450 rows.
The estimated query plan indicated that the basic inner join would be 3 times slower than the nested sub-queries, however when actually running the query, the basic inner join was twice as fast as the nested queries, The basic inner join took 297ms on very minimal server hardware.
What database are you using, and what times are you seeing? I'm thinking if you are seeing poor performance then it is probably an index problem.
A: I believe your 1:m relationships should already implicitly create DISTINCT JOINs.
But, if your goal is just C's in each A, it might be easier to just use DISTINCT on the outer-most query.
SELECT DISTINCT a.valueA, c.valueC
FROM C
INNER JOIN B ON B.lookupC = C.id
INNER JOIN A ON A.lookupB = B.id
ORDER BY a.valueA, c.valueC
A: Is this what you mean?
SELECT DISTINCT C.valueC
FROM
C
INNER JOIN B ON C.id = B.lookupC
INNER JOIN A ON B.id = A.lookupB
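As a quick sanity check of this plain-join form, here is a self-contained SQLite sketch with made-up data (the table contents are invented for illustration, and SQLite's planner differs from MSSQL's, so this says nothing about relative performance — only that the query returns every C reachable from A, once):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Schema from the question: A -> B -> C via lookup columns.
cur.executescript("""
    CREATE TABLE C (id INTEGER PRIMARY KEY, valueC TEXT);
    CREATE TABLE B (id INTEGER PRIMARY KEY, valueB TEXT, lookupC INTEGER);
    CREATE TABLE A (id INTEGER PRIMARY KEY, valueA TEXT, lookupB INTEGER);

    INSERT INTO C VALUES (1, 'c1'), (2, 'c2'), (3, 'c3');
    INSERT INTO B VALUES (10, 'b1', 1), (11, 'b2', 1), (12, 'b3', 2);
    -- A references b1 and b2 (both -> c1) and b3 (-> c2); c3 is unused.
    INSERT INTO A VALUES (100, 'a1', 10), (101, 'a2', 11), (102, 'a3', 12);
""")

# The straightforward DISTINCT join from the answers above.
rows = cur.execute("""
    SELECT DISTINCT C.valueC
    FROM C
    INNER JOIN B ON C.id = B.lookupC
    INNER JOIN A ON B.id = A.lookupB
    ORDER BY C.valueC
""").fetchall()

print(rows)  # [('c1',), ('c2',)]
```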
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40"
} |
Q: What tools are available for providing a breakdown of the diskspace used by an SQL Server database I have an MSDE2000 database which appears to be approaching it's 2Gb limit.
What tools can I use to determine where all the space is being used?
Ideally think TreesizePro for SQL Databases
A: Whilst the DB size may be, say, 1.5GB, it may only be containing 500MB of data. This will depend on many factors (i.e. auto-growth size, index fill factors and so on). Run sp_spaceused to find out how much is unallocated. You should then be able to use the likes of DBCC SHRINKDATABASE to reclaim some space.
To just see the size of the file you could just look on the disk. There will be a data file (.MDF) and a log file (.LDF) - unless you have split the DB across multiple file-groups which I don't know if you can do in MSDE.
If you want to find out what tables/indexes use most space (and assuming you don't have Enterprise Manager to simply look at the Taskpad View - which would also give you the info from above), then you can execute sp_spaceused with a tablename as a parameter. It wouldn't take long to run against all, or to write a script to loop through all the tables.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: automatic query cache invalidation I'm trying to understand how hibernate query cache really works.
What I see now is that Hibernate does not update its second-level cache automatically when I insert new entities into the database (although I'm using only Hibernate calls).
The only way I have found to make it work was to manually clean the cache after inserting new entities.
Here is the more concrete example.
I have a persistent entity called Container which can have many Items. I wanted to have all the items cached:
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
class Item
{
// rest of the code ...
}
class Container {
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public List getItems() { ... }
// rest of the code ...
}
The problem which I have noticed is that when I:
1) read some Containers from the db into memory (together with the corresponding items)
String hql =
"from Container c left join fetch c.items where c.type = 1";
List<Item> list = hibernateTemplate.find(hql);
2) insert new Item for a chosen Container
hibernateTemplate.save(item)
3) repeat the first step
then in the 3rd step I cannot see the item I have inserted in the second step.
I see them only if I clean the cache manually after inserting new items:
sessionFactory.evictCollection("Container.items", updatedContainerId)
My gut feeling tells me that Hibernate should do such a cache invalidation automatically. Has anyone seen it working? Am I doing something wrong or is it just not supported?
Thanks in advance for the answer.
Greetings
Tom
A: You might find my blog on query cache workings to be helpful in understanding what the query cache does and why it might not work the way you think it works:
*
*http://tech.puredanger.com/2009/07/10/hibernate-query-cache/
A: Hibernate stores data from queries by using a key composed of the query (or query name) and the value of the specified parameters. I guess it can not easily know what cache to invalidate when you modify data.
To solve this problem you should simply call SessionFactory.evictQueries.
A: Yes, query cache auto invalidation is absent for native SQL queries. For HQL queries it just CLEARS ALL CACHES if any of the tables participating in the query had an INSERT/UPDATE/DELETE for any object.
So you may try the Hibernate Dynamic SQL Cache project; it aims to resolve this issue by automatically updating SQL query caches without invalidation.
P.S. "Bill the Lizard" thanks for understanding :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Unit Testing: Maven or Eclipse? I am not really familiar with the Maven program, but I've been using Eclipse for quite a while for unit testing, code coverage, javadoc generation, code style checking, etc. Probably the one main thing that I didn't really like about Eclipse is the "compilation errors" that it generates when you are running Eclipse and Ant simultaneously. So I am wondering if Maven 2 does the same thing if you are running an Ant task.
Lee23
A: The company I currently work for has a lot of JUnit tests which are run using Maven (1.x). We've never really had any problem and any tests that fail in maven can be debugged using the remote debugger or in Eclipse on their own.
The most important thing is that you take the time and effort to set up the environment properly so that when your tests run they are using the correct directories, variables, etc. This way you shouldn't get the 'compilation errors' that you would get when running Eclipse and Ant.
Maven has the ability to run all your tests for you during the testing cycle of compilation; this should stop the need for any Ant tasks to run the tests. However, if you still need to have scripts for other tasks (generation of code, etc) then be wary of Maven's ability to generate code and not include it in the compiled binaries (jar, war), though this may well be fixed for newer versions of Maven.
At the end of the day it would be best to evaluate Maven 2 and see if it's right for you. It sounds like you're having (sarcasm)a lot of fun (/sarcasm) with Ant and Eclipse already though. :)
A: One of the main benefits of using maven is that you need to specify the dependencies, build-setup and deployment methods once, and then you can run them in multiple environments.
In particular you can have your continuous integration/nightly build environment run these tasks simply by using the pom.xml file.
Moreover, you can (and probably should) add the pom.xml file to your version control repository. This has two benefits: a) You keep track of how the build procedure changes between versions, and b) Other developers don't need to manually find and install all the JAR files that your project depends on, they simple fetch the POM file and maven takes care of the rest.
A: If you are working with tests and code coverage tools already, you should look into Maven,
especially if you start to work with a project team.
Running tests in Eclipse is fine as long as you are the only developer.
Using Maven will enable you to use continuous integration tools like Continuum.
It might seem you have to invest more time in setting up Maven correctly, but it will definitely pay off in the long run.
We use Continuum here, and we have never seen any problems with compilation errors once the system was set up correctly.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Best algorithm for synchronizing two IList in C# 2.0 Imagine the following type:
public struct Account
{
public int Id;
public double Amount;
}
What is the best algorithm to synchronize two IList&lt;Account&gt; in C# 2.0 (no LINQ)?
The first list (L1) is the reference list, the second (L2) is the one to synchronize according to the first:
*
*All accounts in L2 that are no longer present in L1 must be deleted from L2
*All accounts in L2 that still exist in L1 must be updated (amount attribute)
*All accounts that are in L1 but not yet in L2 must be added to L2
The Id identifies accounts. It's not too hard to find a naive and working algorithm, but I would like to know if there is a smart solution to handle this scenario without ruining readability and performance.
EDIT :
*
*Account type doesn't matter; it could be a class, have properties, equality members, etc.
*L1 and L2 are not sorted
*L2 items could not be replaced by L1 items, they must be updated (field by field, property by property)
A: For a start I'd get rid of the mutable struct. Mutable value types are a fundamentally bad thing. (As are public fields, IMO.)
It's probably worth building a Dictionary so you can easily compare the contents of the two lists. Once you've got that easy way of checking for presence/absence, the rest should be straightforward.
To be honest though, it sounds like you basically want L2 to be a complete copy of L1... clear L2 and just call AddRange? Or is the point that you also want to take other actions while you're changing L2?
A: If your two lists are sorted, then you can simply walk through them in tandem. This is an O(m+n) operation. The following code could help:
class Program
{
static void Main()
{
List<string> left = new List<string> { "Alice", "Charles", "Derek" };
List<string> right = new List<string> { "Bob", "Charles", "Ernie" };
EnumerableExtensions.CompareSortedCollections(left, right, StringComparer.CurrentCultureIgnoreCase,
s => Console.WriteLine("Left: " + s), s => Console.WriteLine("Right: " + s), (x,y) => Console.WriteLine("Both: " + x + y));
}
}
static class EnumerableExtensions
{
public static void CompareSortedCollections<T>(IEnumerable<T> source, IEnumerable<T> destination, IComparer<T> comparer, Action<T> onLeftOnly, Action<T> onRightOnly, Action<T, T> onBoth)
{
EnumerableIterator<T> sourceIterator = new EnumerableIterator<T>(source);
EnumerableIterator<T> destinationIterator = new EnumerableIterator<T>(destination);
while (sourceIterator.HasCurrent && destinationIterator.HasCurrent)
{
// While LHS < RHS, the items in LHS aren't in RHS
while (sourceIterator.HasCurrent && (comparer.Compare(sourceIterator.Current, destinationIterator.Current) < 0))
{
onLeftOnly(sourceIterator.Current);
sourceIterator.MoveNext();
}
// While RHS < LHS, the items in RHS aren't in LHS
while (sourceIterator.HasCurrent && destinationIterator.HasCurrent && (comparer.Compare(sourceIterator.Current, destinationIterator.Current) > 0))
{
onRightOnly(destinationIterator.Current);
destinationIterator.MoveNext();
}
// While LHS==RHS, the items are in both
while (sourceIterator.HasCurrent && destinationIterator.HasCurrent && (comparer.Compare(sourceIterator.Current, destinationIterator.Current) == 0))
{
onBoth(sourceIterator.Current, destinationIterator.Current);
sourceIterator.MoveNext();
destinationIterator.MoveNext();
}
}
// Mop up.
while (sourceIterator.HasCurrent)
{
onLeftOnly(sourceIterator.Current);
sourceIterator.MoveNext();
}
while (destinationIterator.HasCurrent)
{
onRightOnly(destinationIterator.Current);
destinationIterator.MoveNext();
}
}
}
internal class EnumerableIterator<T>
{
private readonly IEnumerator<T> _enumerator;
public EnumerableIterator(IEnumerable<T> enumerable)
{
_enumerator = enumerable.GetEnumerator();
MoveNext();
}
public bool HasCurrent { get; private set; }
public T Current
{
get { return _enumerator.Current; }
}
public void MoveNext()
{
HasCurrent = _enumerator.MoveNext();
}
}
You'll have to be careful about modifying the collections while iterating over them, though.
If they're not sorted, then comparing every element in one with every element in the other is O(mn), which gets painful really quickly.
If you can bear to copy the key values from each collection into a Dictionary or similar (i.e. a collection with acceptable performance when asked "is X present?"), then you could come up with something reasonable.
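That dictionary-based approach is language-agnostic; as a rough sketch of the idea (plain Python dicts standing in for Account, and the names are my own — this is an illustration, not C# 2.0 code):

```python
def sync(l1, l2):
    """Make l2 match l1 in place: delete, update, add — keyed by account id."""
    by_id = {acc['id']: acc for acc in l1}          # O(m) lookup table

    # 1) Delete accounts no longer present in l1.
    l2[:] = [acc for acc in l2 if acc['id'] in by_id]

    # 2) Update the accounts that remain, field by field.
    seen = set()
    for acc in l2:
        acc['amount'] = by_id[acc['id']]['amount']
        seen.add(acc['id'])

    # 3) Add accounts present in l1 but missing from l2.
    l2.extend(dict(acc) for acc in l1 if acc['id'] not in seen)


l1 = [{'id': 1, 'amount': 5.0}, {'id': 2, 'amount': 7.0}]
l2 = [{'id': 2, 'amount': 1.0}, {'id': 3, 'amount': 9.0}]
sync(l1, l2)
print(l2)  # [{'id': 2, 'amount': 7.0}, {'id': 1, 'amount': 5.0}]
```

The whole pass is O(m+n) plus hashing, which is why the dictionary beats the naive nested scan on unsorted lists.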
A: I had the same problem, and my best solution was the following (adapted to your case), having both lists loaded:
*
*For each Account in L1, verify if it exists in L2:
*
*If found, update all values from L1's Account based on L2's. Then, delete the Account from L2.
*If not found, mark L1's Account as deleted, or delete it from the list, it depends on how your database is structured.
*For each Account left in L2, add it into L1.
I recommend implementing the IEquatable<> interface in your Account class (or just override the Equals() method) so it always compares IDs on methods that require comparison between objects:
public class Account : IEquatable<Account>
{
    public int Id;
    public double Amount;

    public bool Equals(Account other)
    {
        if (other == null) return false;
        return this.Id.Equals(other.Id);
    }
}
The sync algorithm would be something like this (make sure both lists are initialized so no errors occur):
// Iterate over a copy of L1 so we can safely remove from it while looping
foreach (var L1Account in new List<Account>(L1))
{
    var L2Account = L2.Find(a => a.Id == L1Account.Id);

    // If found, update values and consume the matching L2 entry
    if (L2Account != null)
    {
        L1Account.Amount = L2Account.Amount;
        L2.Remove(L2Account);
    }
    // If not found, remove it
    else
    {
        L1.Remove(L1Account);
    }
}

// Add any remaining L2 Accounts to L1
L1.AddRange(L2);
A: In addition to Jon Skeet's comment make your Account struct a class and override the Equals() and GetHashCode() method to get nice equality checking.
A: L2 = L1.clone()?
... but I would guess you forgot to mention something.
A: I know this is an old post but you should check out AutoMapper. It will do exactly what you want in a very flexible and configurable way.
AutoMapper
A: Introduction
I have implemented two algorithms, one for sorted and one for sequential collections. Both support null values and duplicates, and work in the same way:
They yield return CollectionModification<LeftItemType,RightItemType>s which is similar to CollectionChangedEventArgs<T> (reference) which can be used in return to synchronize a collection.
Regarding yield return:
When using either algorithm, where your left items (the reference collection) are compared to right items, you can apply each yield-returned CollectionModification as soon as it is returned, but this can result in a "collection was modified" exception (for example when using List&lt;T&gt;.GetEnumerator). To prevent this, both algorithms can use an indexable collection as the reference collection that is about to be mutated. You only have to wrap the reference collection with YieldIteratorInfluencedReadOnlyList&lt;ItemType&gt; (abstract) by using the extension methods in YieldIteratorInfluencedReadOnlyListExtensions. :)
SortedCollectionModifications
The first algorithm works for ascending- or descending-ordered lists and uses IComparer&lt;T&gt;.
/// <summary>
/// The algorithm creates modifications that can transform one collection into another collection.
/// The collection modifications may be used to transform <paramref name="leftItems"/>.
/// Assumes <paramref name="leftItems"/> and <paramref name="rightItems"/> to be sorted by that order you specify by <paramref name="collectionOrder"/>.
/// Duplications are allowed but take into account that duplications are yielded as they are appearing.
/// </summary>
/// <typeparam name="LeftItemType">The type of left items.</typeparam>
/// <typeparam name="RightItemType">The type of right items.</typeparam>
/// <typeparam name="ComparablePartType">The type of the comparable part of left item and right item.</typeparam>
/// <param name="leftItems">The collection you want to have transformed.</param>
/// <param name="getComparablePartOfLeftItem">The part of left item that is comparable with part of right item.</param>
/// <param name="rightItems">The collection in which <paramref name="leftItems"/> could be transformed.</param>
/// <param name="getComparablePartOfRightItem">The part of right item that is comparable with part of left item.</param>
/// <param name="collectionOrder">the presumed order of items to be used to determine <see cref="IComparer{T}.Compare(T, T)"/> argument assignment.</param>
/// <param name="comparer">The comparer to be used to compare comparable parts of left and right item.</param>
/// <param name="yieldCapabilities">The yieldCapabilities that regulates how <paramref name="leftItems"/> and <paramref name="rightItems"/> are synchronized.</param>
/// <returns>The collection modifications.</returns>
/// <exception cref="ArgumentNullException">Thrown when non-nullable arguments are null.</exception>
public static IEnumerable<CollectionModification<LeftItemType, RightItemType>> YieldCollectionModifications<LeftItemType, RightItemType, ComparablePartType>(
IEnumerable<LeftItemType> leftItems,
Func<LeftItemType, ComparablePartType> getComparablePartOfLeftItem,
IEnumerable<RightItemType> rightItems,
Func<RightItemType, ComparablePartType> getComparablePartOfRightItem,
SortedCollectionOrder collectionOrder,
IComparer<ComparablePartType> comparer,
CollectionModificationsYieldCapabilities yieldCapabilities)
Python algorithm inspiration taken from: Efficient synchronization of two instances of an ordered list.
EqualityTrailingCollectionModifications
The second algorithm works for any order and uses IEqualityComparer<T>.
/// <summary>
/// The algorithm creates modifications that can transform one collection into another collection.
/// The collection modifications may be used to transform <paramref name="leftItems"/>.
/// The more the collection is synchronized in an orderly way, the more efficient the algorithm is.
/// Duplications are allowed but take into account that duplications are yielded as they are appearing.
/// </summary>
/// <typeparam name="LeftItemType">The type of left items.</typeparam>
/// <typeparam name="RightItemType">The type of right items.</typeparam>
/// <typeparam name="ComparablePartType">The type of the comparable part of left item and right item.</typeparam>
/// <param name="leftItems">The collection you want to have transformed.</param>
/// <param name="getComparablePartOfLeftItem">The part of left item that is comparable with part of right item.</param>
/// <param name="rightItems">The collection in which <paramref name="leftItems"/> could be transformed.</param>
/// <param name="getComparablePartOfRightItem">The part of right item that is comparable with part of left item.</param>
/// <param name="equalityComparer">The equality comparer to be used to compare comparable parts.</param>
/// <param name="yieldCapabilities">The yield capabilities, e.g. only insert or only remove.</param>
/// <returns>The collection modifications.</returns>
/// <exception cref="ArgumentNullException">Thrown when non-nullable arguments are null.</exception>
public static IEnumerable<CollectionModification<LeftItemType, RightItemType>> YieldCollectionModifications<LeftItemType, RightItemType, ComparablePartType>(
IEnumerable<LeftItemType> leftItems,
Func<LeftItemType, ComparablePartType> getComparablePartOfLeftItem,
IEnumerable<RightItemType> rightItems,
Func<RightItemType, ComparablePartType> getComparablePartOfRightItem,
IEqualityComparer<ComparablePartType>? equalityComparer,
CollectionModificationsYieldCapabilities yieldCapabilities)
where ComparablePartType : notnull
Requirements
One of the following frameworks is required
*
*.NET-Standard 2.0
*.NET Core 3.1
*.NET 5.0
Both algorithms are created with custom implemented types (IndexDirectory, NullableKeyDictionary, LinkedBucketList to name a few), so I can not simply copy paste the code here, so I would like to reference you to my following packages:
*
*Teronis.NetStandard.Core: Transitively used by the packages below
*Teronis.NetStandard.Collections: Contains a few custom collection types. Transitively used by the packages below
*Teronis.NetStandard.Collections.Algorithms: Contains the algorithms
*Teronis.NetStandard.Collections.Synchronization: Contains the collection synchronization classes
Implementation
Anticipated classes
Account:
public class Account
{
public Account(int id) =>
Id = id;
public int Id { get; }
public double Amount { get; set; }
}
And the following collection item equality comparer class:
AccountEqualityComparer:
public class AccountEqualityComparer : EqualityComparer<Account>
{
public new static AccountEqualityComparer Default = new AccountEqualityComparer();
public override bool Equals([AllowNull] Account x, [AllowNull] Account y) =>
ReferenceEquals(x, y) || (!(x is null || y is null) && x.Id.Equals(y.Id));
public override int GetHashCode([DisallowNull] Account obj) =>
obj.Id;
}
"My" classes
AccountCollectionViewModel:
using Teronis.Collections.Algorithms.Modifications;
using Teronis.Collections.Synchronization;
using Teronis.Collections.Synchronization.Extensions;
using Teronis.Reflection;
public class AccountCollectionViewModel : SyncingCollectionViewModel<Account, Account>
{
public AccountCollectionViewModel()
: base(CollectionSynchronizationMethod.Sequential(AccountEqualityComparer.Default))
{
// In case of SyncingCollectionViewModel, we have to pass a synchronization method.
//
// Sequential means any order
//
}
protected override Account CreateSubItem(Account superItem) =>
superItem;
protected override void ApplyCollectionItemReplace(in ApplyingCollectionModificationBundle modificationBundle)
{
foreach (var (oldItem, newItem) in modificationBundle.OldSuperItemsNewSuperItemsModification.YieldTuplesForOldItemNewItemReplace())
{
// Implementation detail: update left public property values by right public property values.
TeronisReflectionUtils.UpdateEntityVariables(oldItem, newItem);
}
}
}
Program:
using System.Diagnostics;
using System.Linq;
class Program
{
static void Main()
{
// Arrange
var collection = new AccountCollectionViewModel();
var initialData = new Account[] {
new Account(5) { Amount = 0 },
new Account(7) { Amount = 0 },
new Account(3) { Amount = 0 }
};
var newData = new Account[] {
new Account(5) { Amount = 10 },
/* Account by ID 7 got removed .. */
/* but account by ID 8 is new. */
new Account(8) { Amount = 10 },
new Account(3) { Amount = 10 }
};
// Act
collection.SynchronizeCollection(initialData);
// Assert
Debug.Assert(collection.SubItems.ElementAt(1).Id == 7, "The account at index 1 has not the ID 7.");
Debug.Assert(collection.SubItems.All(x => x.Amount == 0), "Not all accounts have an amount of 0.");
// Act
collection.SynchronizeCollection(newData);
// Assert
Debug.Assert(collection.SubItems.ElementAt(1).Id == 8, "The account at index 1 has not the ID 8.");
Debug.Assert(collection.SubItems.All(x => x.Amount == 10), "Not all accounts have an amount of 10.");
;
}
}
You can see that I use SyncingCollectionViewModel, a very "heavy" type. That's because I have not finished the lightweight SynchronizableCollection implementation yet (virtual methods for add, remove, replace and so on are missing).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11"
} |
Q: Use URL segments as action method parameters in Zend Framework In Kohana/CodeIgniter, I can have a URL in this form:
http://www.name.tld/controller_name/method_name/parameter_1/parameter_2/parameter_3 ...
And then read the parameters in my controller as follows:
class MyController
{
public function method_name($param_A, $param_B, $param_C ...)
{
// ... code
}
}
How do you achieve this in the Zend Framework?
A: Update (04/13/2016):
The link in my answer below moved and has been fixed. However, just in case it disappears again -- here are a few alternatives that provide some in depth information on this technique, as well as use the original article as reference material:
*
*Zend Framework Controller Actions with Function Parameters
*Zend_Controller actions that accept parameters?
@Andrew Taylor's response is the proper Zend Framework way of handling URL parameters. However, if you would like to have the URL parameters in your controller's action (as in your example) - check out this tutorial on Zend DevZone.
A: I have extended Zend_Controller_Action with my controller class and made the following changes:
In dispatch($action) method replaced
$this->$action();
with
call_user_func_array(array($this,$action), $this->getUrlParametersByPosition());
And added the following method
/**
* Returns array of url parts after controller and action
*/
protected function getUrlParametersByPosition()
{
$request = $this->getRequest();
$path = $request->getPathInfo();
$path = explode('/', trim($path, '/'));
if(@$path[0]== $request->getControllerName())
{
unset($path[0]);
}
if(@$path[1] == $request->getActionName())
{
unset($path[1]);
}
return $path;
}
Now for a URL like /mycontroller/myaction/123/321
in my action I will get all the params following controller and action
public function editAction($param1 = null, $param2 = null)
{
// $param1 = 123
// $param2 = 321
}
Extra parameters in the URL won't cause any error, as you can send more params to a method than defined. You can get all of them by func_get_args()
And you can still use getParam() in a usual way.
Your URL may omit the action name, in which case the default one is used.
Actually my URL does not contain parameter names. Only their values. (Exactly as it was in the question)
And you have to define routes to specify parameters positions in URL to follow the concepts of framework and to be able to build URLs using Zend methods.
But if you always know the position of your parameter in URL you can easily get it like this.
That is not as sophisticated as using reflection methods but I guess provides less overhead.
Dispatch method now looks like this:
/**
* Dispatch the requested action
*
* @param string $action Method name of action
* @return void
*/
public function dispatch($action)
{
// Notify helpers of action preDispatch state
$this->_helper->notifyPreDispatch();
$this->preDispatch();
if ($this->getRequest()->isDispatched()) {
if (null === $this->_classMethods) {
$this->_classMethods = get_class_methods($this);
}
// preDispatch() didn't change the action, so we can continue
if ($this->getInvokeArg('useCaseSensitiveActions') || in_array($action, $this->_classMethods)) {
if ($this->getInvokeArg('useCaseSensitiveActions')) {
trigger_error('Using case sensitive actions without word separators is deprecated; please do not rely on this "feature"');
}
//$this->$action();
call_user_func_array(array($this,$action), $this->getUrlParametersByPosition());
} else {
$this->__call($action, array());
}
$this->postDispatch();
}
// whats actually important here is that this action controller is
// shutting down, regardless of dispatching; notify the helpers of this
// state
$this->_helper->notifyPostDispatch();
}
A: For a simpler method that allows for more complex configurations, try this post. In summary:
Create application/configs/routes.ini
routes.popular.route = popular/:type/:page/:sortOrder
routes.popular.defaults.controller = popular
routes.popular.defaults.action = index
routes.popular.defaults.type = images
routes.popular.defaults.sortOrder = alltime
routes.popular.defaults.page = 1
routes.popular.reqs.type = \w+
routes.popular.reqs.page = \d+
routes.popular.reqs.sortOrder = \w+
Add to bootstrap.php
// create $frontController if not already initialised
$frontController = Zend_Controller_Front::getInstance();
$config = new Zend_Config_Ini(APPLICATION_PATH . '/config/routes.ini');
$router = $frontController->getRouter();
$router->addConfig($config, 'routes');
A: Take a look at the Zend_Controller_Router classes:
http://framework.zend.com/manual/en/zend.controller.router.html
These will allow you to define a Zend_Controller_Router_Route which maps to your URL in the way that you need.
An example of having 4 static params for the Index action of the Index controller is:
$router = new Zend_Controller_Router_Rewrite();
$router->addRoute(
'index',
new Zend_Controller_Router_Route('index/index/:param1/:param2/:param3/:param4', array('controller' => 'index', 'action' => 'index'))
);
$frontController->setRouter($router);
This is added to your bootstrap after you've defined your front controller.
Once in your action, you can then use:
$this->_request->getParam('param1');
Inside your action method to access the values.
Andrew
A: Originally posted here http://cslai.coolsilon.com/2009/03/28/extending-zend-framework/
My current solution is as follows:
abstract class Coolsilon_Controller_Base
extends Zend_Controller_Action {
public function dispatch($actionName) {
$parameters = array();
foreach($this->_parametersMeta($actionName) as $paramMeta) {
$parameters = array_merge(
$parameters,
$this->_parameter($paramMeta, $this->_getAllParams())
);
}
call_user_func_array(array(&$this, $actionName), $parameters);
}
private function _actionReference($className, $actionName) {
return new ReflectionMethod(
$className, $actionName
);
}
private function _classReference() {
return new ReflectionObject($this);
}
private function _constructParameter($paramMeta, $parameters) {
return array_key_exists($paramMeta->getName(), $parameters) ?
array($paramMeta->getName() => $parameters[$paramMeta->getName()]) :
array($paramMeta->getName() => $paramMeta->getDefaultValue());
}
private function _parameter($paramMeta, $parameters) {
return $this->_parameterIsValid($paramMeta, $parameters) ?
$this->_constructParameter($paramMeta, $parameters) :
$this->_throwParameterNotFoundException($paramMeta, $parameters);
}
private function _parameterIsValid($paramMeta, $parameters) {
return $paramMeta->isOptional() === FALSE
&& empty($parameters[$paramMeta->getName()]) === FALSE;
}
private function _parametersMeta($actionName) {
return $this->_actionReference(
$this->_classReference()->getName(),
$actionName
)
->getParameters();
}
private function _throwParameterNotFoundException($paramMeta, $parameters) {
throw new Exception("Parameter: {$paramMeta->getName()} Cannot be empty");
}
}
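The reflection-based dispatch technique above translates to other languages too. Here is a minimal Python sketch of the same idea — match request parameters to the action's signature by name, falling back to declared defaults (the class and function names are invented for illustration):

```python
import inspect


class Controller:
    def edit(self, id, page=1):
        return ('edit', id, page)


def dispatch(controller, action_name, params):
    """Bind request params to the action's parameters by name,
    like the PHP ReflectionMethod code above."""
    action = getattr(controller, action_name)
    args = []
    for name, meta in inspect.signature(action).parameters.items():
        if name in params:
            args.append(params[name])
        elif meta.default is not inspect.Parameter.empty:
            args.append(meta.default)   # use the declared default
        else:
            raise ValueError("Parameter: %s cannot be empty" % name)
    return action(*args)


print(dispatch(Controller(), 'edit', {'id': 123}))  # ('edit', 123, 1)
```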
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
} |
Q: Java: Writing a DOM to an XML file (formatting issues) I'm using org.w3c XML API to open an existing XML file. I'm removing some nodes , and I'm adding others instead.
The problem is that the new nodes that are added are written one after the other, with no newline and no indentation whatsoever. While it's true that the XML file is valid, it is very hard for a human to examine it.
Is there anyway to add indentation , or at least a newline after each node?
A: I'm assuming that you're using a Transformer to do the actual writing (to a StreamResult). In which case, do this before you call transform:
transformer.setOutputProperty(OutputKeys.INDENT, "yes");
transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "2");
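Put together, a minimal self-contained sketch (class and element names here are illustrative, not from the question) might look like:

```java
import java.io.StringWriter;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class PrettyPrint {
    // Serialize a DOM Document with indentation enabled on the Transformer.
    public static String pretty(Document doc) throws Exception {
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.INDENT, "yes");
        t.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "2");
        StringWriter out = new StringWriter();
        t.transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element root = doc.createElement("root");
        root.appendChild(doc.createElement("child"));
        doc.appendChild(root);
        System.out.println(pretty(doc)); // prints the document with indentation
    }
}
```

The `indent-amount` key is a Xalan-specific extension, so it is honored by the JDK's built-in transformer but may be ignored by other TrAX implementations.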
A: transformer.setOutputProperty(OutputKeys.INDENT, "yes");
transformer.setOutputProperty("{http://xml.apache.org/xslt}indent-amount", "2");
source
How to pretty print XML from Java?
A: There are a few good examples of "pretty printing" in the following thread
how to pretty print xml from Java
Link to my effort at a solution
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27"
} |
Q: How do you 'clone' WebControls in C# .NET? My basic question is, in .NET, how do I clone WebControls?
I would like to build a custom tag, which can produce multiple copies of its children.
Ultimately I intend to build a tag similar to in JSP/Struts.
But the first hurdle I have is the ability to duplicate/clone the contents of a control.
Consider this rather contrived example;
<custom:duplicate count="2">
<div>
<p>Some html</p>
<asp:TextBox id="tb1" runat="server" />
</div>
</custom:duplicate>
The HTML markup which is output would be something like,
<div>
<p>Some html</p>
<input type="text" id="tb1" />
</div>
<div>
<p>Some html</p>
<input type="text" id="tb1" />
</div>
Note: I know i have the id duplicated, I can come up with a solution to that later!
So what we would have is my custom control with 3 children (I think) - a literal control, a TextBox control, and another literal control.
In this example I have said 'count=2' so what the control should do is output/render its children twice.
What I would hope to do is write some "OnInit" code which does something like:
List<WebControl> clones;
for(int i=1; i<count; i++)
{
foreach(WebControl c in Controls)
{
WebControl clone = c.Clone();
clones.Add(clone);
}
}
Controls.AddRange(clones);
However, as far as I can tell, WebControls do not implement ICloneable, so its not possible to clone them in this way.
Any ideas how I can clone WebControls?
A: What's wrong with using a Repeater and binding a dud data source. It'll duplicate the templated controls and handle the ID creation and all.
A: Just as a reference for others which really want to clone a custom server control.
public class MyCustomServerCtrl
{
...
public MyCustomServerCtrl Clone()
{
return MemberwiseClone() as MyCustomServerCtrl;
}
}
But note: this is needed very rarely and if so, most probably just when you're having some really specific logic. It should be avoided when possible. Generally it should be enough to use existing controls like Repeater, ListView etc..
A: The way to do this in ASP.NET is using templates. There are samples in MSDN for this, just look for templated controls / ITemplate.
A: The WebControl.CopyBaseAttributes method copies the AccessKey, Enabled, ToolTip, TabIndex, and Attributes properties from the specified Web server control to the Web server control that this method is called from.
MSDN Documentation
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Equivalent of typedef in C# Is there a typedef equivalent in C#, or someway to get some sort of similar behaviour? I've done some googling, but everywhere I look seems to be negative. Currently I have a situation similar to the following:
class GenericClass<T>
{
public event EventHandler<EventData> MyEvent;
public class EventData : EventArgs { /* snip */ }
// ... snip
}
Now, it doesn't take a rocket scientist to figure out that this can very quickly lead to a lot of typing (apologies for the horrible pun) when trying to implement a handler for that event. It'd end up being something like this:
GenericClass<int> gcInt = new GenericClass<int>();
gcInt.MyEvent += new EventHandler<GenericClass<int>.EventData>(gcInt_MyEvent);
// ...
private void gcInt_MyEvent(object sender, GenericClass<int>.EventData e)
{
throw new NotImplementedException();
}
Except, in my case, I was already using a complex type, not just an int. It'd be nice if it were possible to simplify this a little...
Edit: ie. perhaps typedefing the EventHandler instead of needing to redefine it to get similar behaviour.
A: With C# 10 you can now do
global using Bar = Foo
Which works like a typedef within the project.
I haven't tested it in depth, so there might be quirks.
I'm using it like
global using DateTime = DontUseDateTime
Where DontUseDateTime is a struct marked Obsolete, to force people to use NodaTime.
A: I think there is no typedef. You could only define a specific delegate type instead of the generic one in the GenericClass, i.e.
public delegate void MyEventHandler(object sender, EventData e);
This would make it shorter. But what about the following suggestion:
Use Visual Studio. This way, when you typed
gcInt.MyEvent +=
it already provides the complete event handler signature from Intellisense. Press TAB and it's there. Accept the generated handler name or change it, and then press TAB again to auto-generate the handler stub.
A: C# supports some inherited covariance for event delegates, so a method like this:
void LowestCommonHander( object sender, EventArgs e ) { ... }
Can be used to subscribe to your event, no explicit cast required
gcInt.MyEvent += LowestCommonHander;
You can even use lambda syntax and the intellisense will all be done for you:
gcInt.MyEvent += (sender, e) =>
{
e. //you'll get correct intellisense here
};
A: You can use an open source library and NuGet package called LikeType that I created that will give you the GenericClass<int> behavior that you're looking for.
The code would look like:
public class SomeInt : LikeType<int>
{
public SomeInt(int value) : base(value) { }
}
[TestClass]
public class HashSetExample
{
[TestMethod]
public void Contains_WhenInstanceAdded_ReturnsTrueWhenTestedWithDifferentInstanceHavingSameValue()
{
var myInt = new SomeInt(42);
var myIntCopy = new SomeInt(42);
var otherInt = new SomeInt(4111);
Assert.IsTrue(myInt == myIntCopy);
Assert.IsFalse(myInt.Equals(otherInt));
var mySet = new HashSet<SomeInt>();
mySet.Add(myInt);
Assert.IsTrue(mySet.Contains(myIntCopy));
}
}
A: Here is the code for it, enjoy!, I picked that up from the dotNetReference
type the "using" statement inside the namespace line 106
http://referencesource.microsoft.com/#mscorlib/microsoft/win32/win32native.cs
using System;
using System.Collections.Generic;
namespace UsingStatement
{
using Typedeffed = System.Int32;
using TypeDeffed2 = List<string>;
class Program
{
static void Main(string[] args)
{
Typedeffed numericVal = 5;
Console.WriteLine(numericVal++);
TypeDeffed2 things = new TypeDeffed2 { "whatever"};
}
}
}
A: Jon really gave a nice solution, I didn't know you could do that!
At times what I resorted to was inheriting from the class and creating its constructors. E.g.
public class FooList : List<Foo> { ... }
Not the best solution (unless your assembly gets used by other people), but it works.
A: I'd do
using System.Collections.Generic;
global using CustomerList = List<Customer>;
A: No, there's no true equivalent of typedef. You can use 'using' directives within one file, e.g.
using CustomerList = System.Collections.Generic.List<Customer>;
but that will only impact that source file. In C and C++, my experience is that typedef is usually used within .h files which are included widely - so a single typedef can be used over a whole project. That ability does not exist in C#, because there's no #include functionality in C# that would allow you to include the using directives from one file in another.
Fortunately, the example you give does have a fix - implicit method group conversion. You can change your event subscription line to just:
gcInt.MyEvent += gcInt_MyEvent;
:)
A: For non-sealed classes simply inherit from them:
public class Vector : List<int> { }
But for sealed classes it's possible to simulate typedef behavior with such base class:
public abstract class Typedef<T, TDerived> where TDerived : Typedef<T, TDerived>, new()
{
private T _value;
public static implicit operator T(Typedef<T, TDerived> t)
{
return t == null ? default : t._value;
}
public static implicit operator Typedef<T, TDerived>(T t)
{
return t == null ? default : new TDerived { _value = t };
}
}
// Usage examples
class CountryCode : Typedef<string, CountryCode> { }
class CurrencyCode : Typedef<string, CurrencyCode> { }
class Quantity : Typedef<int, Quantity> { }
void Main()
{
var canadaCode = (CountryCode)"CA";
var canadaCurrency = (CurrencyCode)"CAD";
CountryCode cc = canadaCurrency; // Compilation error
Console.WriteLine(canadaCode == "CA"); // true
Console.WriteLine(canadaCurrency); // CAD
var qty = (Quantity)123;
Console.WriteLine(qty); // 123
}
A: If you know what you're doing, you can define a class with implicit operators to convert between the alias class and the actual class.
class TypedefString // Example with a string "typedef"
{
private string Value = "";
public static implicit operator string(TypedefString ts)
{
return ((ts == null) ? null : ts.Value);
}
public static implicit operator TypedefString(string val)
{
return new TypedefString { Value = val };
}
}
I don't actually endorse this and haven't ever used something like this, but this could probably work for some specific circumstances.
A: Both C++ and C# are missing easy ways to create a new type which is semantically identical to an exisiting type. I find such 'typedefs' totally essential for type-safe programming and its a real shame c# doesn't have them built-in. The difference between void f(string connectionID, string username) to void f(ConID connectionID, UserName username) is obvious ...
(You can achieve something similar in C++ with boost in BOOST_STRONG_TYPEDEF)
It may be tempting to use inheritance but that has some major limitations:
*
*it will not work for primitive types
*the derived type can still be casted to the original type, ie we can send it to a function receiving our original type, this defeats the whole purpose
*we cannot derive from sealed classes (and ie many .NET classes are sealed)
The only way to achieve a similar thing in C# is by composing our type in a new class:
class SomeType {
public void Method() { .. }
}
sealed class SomeTypeTypeDef {
public SomeTypeTypeDef(SomeType composed) { this.Composed = composed; }
private SomeType Composed { get; }
public override string ToString() => Composed.ToString();
public override int GetHashCode() => HashCode.Combine(Composed);
public override bool Equals(object obj) => obj is SomeTypeTypeDef o && Composed.Equals(o.Composed);
public bool Equals(SomeTypeTypeDef o) => object.Equals(this, o);
// proxy the methods we want
public void Method() => Composed.Method();
}
While this will work it is very verbose for just a typedef.
In addition we have a problem with serializing (ie to Json) as we want to serialize the class through its Composed property.
Below is a helper class that uses the "Curiously Recurring Template Pattern" to make this much simpler:
namespace Typedef {
[JsonConverter(typeof(JsonCompositionConverter))]
public abstract class Composer<TDerived, T> : IEquatable<TDerived> where TDerived : Composer<TDerived, T> {
protected Composer(T composed) { this.Composed = composed; }
protected Composer(TDerived d) { this.Composed = d.Composed; }
protected T Composed { get; }
public override string ToString() => Composed.ToString();
public override int GetHashCode() => HashCode.Combine(Composed);
public override bool Equals(object obj) => obj is Composer<TDerived, T> o && Composed.Equals(o.Composed);
public bool Equals(TDerived o) => object.Equals(this, o);
}
class JsonCompositionConverter : JsonConverter {
static FieldInfo GetCompositorField(Type t) {
var fields = t.BaseType.GetFields(BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.FlattenHierarchy);
if (fields.Length!=1) throw new JsonSerializationException();
return fields[0];
}
public override bool CanConvert(Type t) {
var fields = t.GetFields(BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.FlattenHierarchy);
return fields.Length == 1;
}
// assumes Compositor<T> has either a constructor accepting T or an empty constructor
public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer) {
while (reader.TokenType == JsonToken.Comment && reader.Read()) { };
if (reader.TokenType == JsonToken.Null) return null;
var compositorField = GetCompositorField(objectType);
var compositorType = compositorField.FieldType;
var compositorValue = serializer.Deserialize(reader, compositorType);
var ctorT = objectType.GetConstructor(new Type[] { compositorType });
if (!(ctorT is null)) return Activator.CreateInstance(objectType, compositorValue);
var ctorEmpty = objectType.GetConstructor(new Type[] { });
if (ctorEmpty is null) throw new JsonSerializationException();
var res = Activator.CreateInstance(objectType);
compositorField.SetValue(res, compositorValue);
return res;
}
public override void WriteJson(JsonWriter writer, object o, JsonSerializer serializer) {
var compositorField = GetCompositorField(o.GetType());
var value = compositorField.GetValue(o);
serializer.Serialize(writer, value);
}
}
}
With Composer the above class becomes simply:
sealed class SomeTypeTypeDef : Composer<SomeTypeTypeDef, SomeType> {
public SomeTypeTypeDef(SomeType composed) : base(composed) {}
// proxy the methods we want
public void Method() => Composed.Method();
}
And in addition the SomeTypeTypeDef will serialize to Json in the same way that SomeType does.
Hope this helps !
A: The best alternative to typedef that I've found in C# is using. For example, I can control float precision via compiler flags with this code:
#if REAL_T_IS_DOUBLE
using real_t = System.Double;
#else
using real_t = System.Single;
#endif
Unfortunately, it requires that you place this at the top of every file where you use real_t. There is currently no way to declare a global namespace type in C#.
A: Since the introduction of C# 10.0, we now have the global using directive.
global using CustomerList = System.Collections.Generic.List<Customer>;
This introduces CustomerList as alias of List<Customer> on a global scope (throughout the whole project and all references to it).
Though I would have liked to be able to limit its scope (say for instance 'internal using') this does actually do a terrific job of fulfilling a typedef variant in C#.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "380"
} |
Q: Spring MVC : Binding 3 dropdowns to a date property in SimpleFormController How should I configure the class to bind three dropdowns (date, month, year) to a single Date property so that it works the way it works for 'single request parameter per property' scenario ?
I guess a should add some custom PropertyEditors by overriding initBinder method. What else ?
A: Aleksey Kudryavtsev: you can override the onBind method in your controller, in which you can fiddle something special into the command object, like
dateField = new SimpleDateFormat("yyyy-MM-dd").parse(this.year + "-" + this.month + "-" + this.day);
or:
Calendar c = Calendar.getInstance();
c.set(year, month, day);
dateField = c.getTime();
but I'd rather do validation in JavaScript and use an available date picker component; there are plenty of them...
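The Calendar approach above can be sketched as a small runnable helper (class and method names are illustrative). One caveat worth noting: Calendar's MONTH field is zero-based, so a 1-12 dropdown value needs to be decremented:

```java
import java.util.Calendar;
import java.util.Date;

public class DateAssembler {
    // Assemble a Date from three separate dropdown/form values.
    public static Date assemble(int year, int month, int day) {
        Calendar c = Calendar.getInstance();
        c.clear();                 // drop the current time-of-day fields
        c.set(year, month - 1, day); // Calendar.MONTH is zero-based
        return c.getTime();
    }

    public static void main(String[] args) {
        System.out.println(assemble(2008, 5, 20));
    }
}
```

A custom PropertyEditor registered in initBinder could delegate to a helper like this.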
A: You could create a hidden input in your form and populate it using JavaScript when user selects date, then bind to this input in your command object.
Probably you will be using javascript anyway for things like checking correctness of the date, so why not format the ready to use date in one parameter.
Then you need to register a property editor that would convert from string "2008-05-20" to Date object.
A: Then I would have three fields in my command object - year, month, day - and would use standard Spring validation for date checking.
A: I haven't tried it, but you could try binding to MutableDateTime in the Joda library. It has separate setters and getters for all three fields.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Change language of error messages in ASP.NET I'm developing an ASP.NET application using a Swedish version of Windows XP and Visual Studio Professional. Whenever I get an error, aka the "yellow screen of death", the error message is in Swedish, making it a bit hard to search for info about it.
How can I change what language the error messages in ASP.NET use?
I have no language pack installed for the .NET Framework. I am, however, running an English Windows XP with a Swedish language interface pack on it.
I also have this in my web.config:
<system.web>
<globalization uiCulture="en-US" />
</system.web>
A: In web.config add:
<system.web>
<globalization uiCulture="en-US" />
</system.web>
or whatever language you prefer (note: uiCulture="en-US" not culture="en-US").
Also you should check that your app is not changing the uiCulture, for example to a user-specific uiCulture in global.asax.
If the error occurs before or during processing the web.config file, this will of course make no difference. In this case, you need to change the regional settings of the account under which the ASP.NET app is running.
If you are developing with VS2005 or later, you're probably running under the Cassini web server, under the identity of the current user - so just change the current user's settings. If you're using IIS, you probably want to change the regional settings of the ASPNET account - you can do this from Regional Settings in the Control Panel by checking the checkbox "Apply to current user and to the default user profile".
A: You can find your error translated into English on finderr.net
or
The second solution to this problem is to move, delete or rename a file containing translations of exceptions. These translations are in file:
%windir%\assembly\mscorlib.resources.dll { version: 2.0.0.0 culture: sv token: b77a5c561934e089}
After the change you have to restart .NET framework.
Important information: Do it on your own risk and I do not know what are the side effects to this solution.
A: I had this problem as well. I had given up, until I gave it another try today, and that worked:
Open CMD as administrator, and then type "LPKSETUP" and hit enter, and then uninstall the language that is causing the issue.
All credit goes to spunk.funk (source)
This worked for me.
And it seems that the timezone and localized keyboard still works, which is the only localized things I want.
A: I had the same issue recently on IIS version 10 and these steps fixed it.
*
*Open IIS Manager
*Select the server from the Connections panel
*Under the "ASP.NET" double click on ".NET Globalization"
*Edit "UI Culture" property
*Optionally set the "File" property to "utf-8"
*Finally click "Apply" and restart the server.
A: Aren't the error messages dependent on the installed .NET Framework? I.e. you could just uninstall the Swedish language pack for .NET. On the production server, you'll most likely have an English-only Framework anyway.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33"
} |
Q: Referenced structure not 'sticking' I am currently porting a lot of code from an MFC-based application to a DLL for client branding purposes.
I've come across an unusual problem. This bit of code is the same in both systems:
// ...
CCommsProperties props;
pController->GetProperties( props );
if (props.handshake != HANDSHAKE_RTS_CTS)
{
props.handshake = HANDSHAKE_RTS_CTS;
pController->RefreshCommProperties( props );
}
// ... in another file:
void CControllerSI::RefreshCommProperties ( const CCommsProperties& props )
{
// ... code ...
}
CommProperties is a wrapper for the comm settings, serialization of etc. and pController is of type ControllerSI which itself is a layer between the actual Comms and the Application.
On the original MFC version the setting of handshake to RTS-CTS sticks but when running as the DLL version it resets itself to 0 as soon as the function is entered. The code is contained entirely in the DLL section of the code, so there are no boundaries.
The main differences between the original and the new modules is the variables that call the various dialogs have been removed and the removed #includes
I've lost an afternoon to this and I don't really want to lose any more...
A: It is difficult to see what is wrong from the given code alone. Some general pointers:
*
*The object is initialized and processed in different binary modules with incompatible linking (such as C-run times)
*If the class/structure is shared it is not exported/imported correctly.
*The class(s) are defined in more than one place, and you are not including the correct definitions.
The above three are the most likely causes, especially, if all fields are reset to their default initialized values.
if this is only happening with only one or two fields, the structure may be poorly aligned and you may need to rearrange the fields to correct these (check that in release too).
In general, I am tempted to hypothesize that the object you have initialized is not the one RefreshCommProperties() sees, for some reason, maybe one of the three above.
A: To really figure out what is going on, you probably need to post your source code -- or at least as much to replicate the problem. Unfortunately, StackOverflow doesn't seem like it encourages this. You could post your code on an FTP site or go to a site that allows posting of source code (like CodeGuru).
A: It's possible that CCommsProperties are defined in two different places, and each file includes its own version.
To test this theory, in the debugger you need to look at &props.handshake. If the debugger tells you that the field has a different address inside and outside the function, then the hypothesis is true and you can proceed to examining the preprocessor output to figure out why it happens.
A: After Saratv's posting, I decided to ditch what I had done and restart from working source again.
This time, however, it works... I guess I will never know why passing a structure caused it to change.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Some good resources for learning F# please I am a .NET programmer (both C# and VB.NET), and I want to go into the F# area also, but can't find a good online article/PDF to start with. Please guide me through this.
A: hubFS: THE place for F#
hubFS: THE place for F# » Establish skills in F# » Books, Tutorials, links and other resources
A: Check the MSDN F# Development center, they have some articles on there:
http://msdn.microsoft.com/en-us/fsharp/default.aspx
A: Robert Pickering who wrote Foundations of F# has a blog you could check.
Don Syme, lead designer of the F# language also has one. He co-wrote the Expert F# book as well.
A: There is a great series of articles called "F# for game development" which explains F# step by step. The good thing about the articles is that they are written by a beginner and shows how he learns F#.
Another good resource is the "Exploring the F# Language" series of articles, which explains F# from a developer's point of view.
Hanselman has a few resources mentioned here; you should have a look at the great presentation called "F# Eye for the C# Guy" that really helped me understand F#.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Is there a nice way of handling multi-line input with GNU readline? My application has a command line interface, and I'm thinking about using the GNU Readline library to provide history, an editable command line, etc.
The hitch is that my commands can be quite long and complex (think SQL) and I'd like to allow users to spread commands over multiple lines to make them more readable in the history.
Is it possible to do this in readline (maybe by specifying a difference between a newline and an end of command)?
Or would I be better off implementing my own command line (but maybe using the GNU History library)?
A: You sure can.
You can define options for the '\r' and '\n' values with
rl_bind_key('\r', return_func);
Your return_func can now decide what to do with those keys.
int return_func(int cnt, int key) { ... }
If you're doing this inside a UNIX terminal, you will need to learn about ANSI terminal codes if you want to move your cursor around. There's a starting reference on wikipedia.
Here's some sample code that uses readline to read multi-line and will stop editing when you enter in a semi-colon (I've set that as the EOQ or end-or-query). Readline is extremely powerful, there's plenty of stuff to learn.
#include <stdio.h>
#include <unistd.h>
#include <readline/readline.h>
#include <readline/history.h>
int my_startup(void);
int my_bind_cr(int, int);
int my_bind_eoq(int, int);
char *my_readline(void);
int my_eoq;
int
main(int argc, char *argv[])
{
if (isatty(STDIN_FILENO)) {
rl_readline_name = "my";
rl_startup_hook = my_startup;
my_readline();
}
}
int
my_startup(void)
{
my_eoq = 0;
rl_bind_key('\n', my_bind_cr);
rl_bind_key('\r', my_bind_cr);
rl_bind_key(';', my_bind_eoq);
return 0;
}
int
my_bind_cr(int count, int key) {
if (my_eoq == 1) {
rl_done = 1;
}
printf("\n");
return 0;
}
int
my_bind_eoq(int count, int key) {
my_eoq = 1;
printf(";");
return 0;
}
char *
my_readline(void)
{
char *line;
if ((line = readline("")) == NULL) {
return NULL;
}
printf("LINE : %s\n", line);
return line;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: How to Execute an application in XP when a particular type of USB device is attached In Windows XP what is the best way to execute a particular application when a particular type of USB device is attached (it currently attaches as a storage device - i.e. it appears as a drive).
The solution I am looking for must execute the application from the very first time the device is attached or offer the application as a selection, whichever is easier to achieve, the device must remain attached as a storage device.
EDIT: Polling all attached devices is not adequate - Windows will already have done its pop-ups at that stage. The issue is with starting the application without additional pop-ups; the application will then need to use the device as a normal storage drive.
A: A quick search revealed this site, see section "3.3 Device change listener"
A: You can also turn on autoplay for USB drives, and setup an autorun.inf file on the USB drive, although I advise against this method as there are several viruses around that exploit this. There's a reason it's off by default.
If you do want to go down this road though, have a look at this website, there's lots of good information and an autorun.inf generator that you can play with.
A: You could have a background application reacting to the connect event of this particular USB device, this would start the actual application.
ManagementEventWatcher Watcher;
WqlEventQuery Query = new WqlEventQuery();
Query.EventClassName = "__InstanceCreationEvent";
Query.Condition = "TargetInstance ISA 'Win32_USBControllerDevice'";
Query.WithinInterval = new TimeSpan(0, 0, 2);
Watcher = new ManagementEventWatcher(Query);
Watcher.EventArrived += new EventArrivedEventHandler(OnUsbConnected);
The OnUsbConnected handler would then start the desired application.
A: Monoxide has the right idea. I use this technique myself in managing my music collection. My main PC is a laptop, but my music collection got big enough that I had to move it to an external drive. So on the external drive I put the following AUTORUN.INF:
[autorun]
open=c:\progra~1\itunes\itunes.exe
label=Open iTunes
icon=c:\progra~1\itunes\itunes.exe,0
As you can see it offers to launch iTunes from C: when this drive is attached. For some reason the label and icon don't get picked up by the AutoPlay window, but LABEL does appear when this drive is viewed in My Computer. What you see in the AutoPlay dialog that comes up in XP is the default selection is "Run the program / using the program provided on the device". One click and you are off and running.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Non intrusive 'live' help system I'm searching for a C# component or code snippet that does something like this:
I want to inform new users about the most important program functions when, for example, they open a new window.
It should be a box showing text (formatted if possible) that is of course not modal and has some mechanism to 'go out of the way' if the user enters the text box area, so that he can access what's underneath it. Alternatively, the window could also stick to the border of the window, but there needs to be a way that this also works if the window is maximized.
So I want to present him with a short introduction of what he can do in every corner of my app as painlessly as possible.
A: I use a "bar" at the top of every window to display some information about the current window/dialog.
A: Use tooltips. They can be programmatically controlled, and you can have them appear at will. You'll need to add the functionality to your app to keep track of what tooltips have been shown to the user already.
You can add a "balloon" style by setting the IsBalloon property to true.
You can also replace them with smaller descriptions for when the user wants to hover over the control and have them displayed again.
A: I'm already using tooltips heavily. However, they aren't very practical when displaying bigger amounts of data and they are bound to specific user actions.
A: Have you considered having a contextual menu for each form / page which contains links to Adobe Captivate style presentations for each available task? That way the user can investigate an example of how to achieve a task relating to what they are trying to achieve from within the application / site.
This approach would require a good deal of maintenance and management if your code changes regularly but coordinating it with a training department can provide rich help features in your application.
See http://www.adobe.com/products/captivate/ for more information.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: 'pass parameter by reference' in Ruby? In Ruby, is it possible to pass by reference a parameter with value-type semantics (e.g. a Fixnum)?
I'm looking for something similar to C#'s 'ref' keyword.
Example:
def func(x)
x += 1
end
a = 5
func(a) #this should be something like func(ref a)
puts a #should read '6'
Btw. I know I could just use:
a = func(a)
A: You can try following trick:
def func(x)
x[0] += 1
end
a = [5]
func(a) #this should be something like func(ref a)
puts a[0] #should read '6'
A: You can accomplish this by explicitly passing in the current binding:
def func(x, bdg)
eval "#{x} += 1", bdg
end
a = 5
func(:a, binding)
puts a # => 6
A: Ruby doesn't support "pass by reference" at all. Everything is an object and the references to those objects are always passed by value. Actually, in your example you are passing a copy of the reference to the Fixnum Object by value.
The problem with your code is that x += 1 doesn't modify the passed Fixnum object but instead creates a completely new and independent object.
I think, Java programmers would call Fixnum objects immutable.
A: http://ruby-doc.org/core-2.1.5/Fixnum.html
Fixnum objects have immediate value. This means that when they are assigned or
passed as parameters, the actual object is passed, rather than a reference to
that object.
Also Ruby is pass by value.
A: In Ruby you can't pass parameters by reference. For your example, you would have to return the new value and assign it to the variable a or create a new class that contains the value and pass an instance of this class around. Example:
class Container
attr_accessor :value
def initialize value
@value = value
end
end
def func(x)
x.value += 1
end
a = Container.new(5)
func(a)
puts a.value
A: However, it seems that composite objects, like hashes, are passed by reference:
fp = {}
def changeit(par)
par[:abc] = 'cde'
end
changeit(fp)
p fp
gives
{:abc=>"cde"}
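Tying the answers together, a short sketch (method names are illustrative, not from the thread) of the rule of thumb: Ruby passes object references by value, so mutating the referenced object is visible to the caller, while rebinding the parameter is not:

```ruby
def mutate(list)
  list << 4    # mutates the same Array object the caller holds
end

def reassign(list)
  list = [0]   # rebinds only the local parameter; the caller is unaffected
  list
end

a = [1, 2, 3]
mutate(a)
reassign(a)
puts a.inspect   # => [1, 2, 3, 4]
```

This is also why the hash example above "works": `par[:abc] = 'cde'` mutates the Hash the caller still references, whereas `x += 1` on a Fixnum rebinds the parameter to a new object.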
| {
"language": "en",
"url": "https://stackoverflow.com/questions/161510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30"
} |